Topologies are great. They are the tool that enables you to say that a sequence of objects converges to another object in the limit. However, I really don’t like the conventional topologies which people use on probability distributions, because I don’t think they respect fundamental intuitions of what it should mean to converge.

Let’s start this by presenting the two villains: the weak topology, and the total-variation topology.

### The weak topology

The weak topology is defined in the following way: a sequence of random variables (rv) $X_n$ converges to some limit rv $X$ if the expected value of any **bounded** continuous function converges:

$$\lim_{n \to \infty} \mathbb{E}[f(X_n)] = \mathbb{E}[f(X)] \quad \text{for every bounded continuous } f$$

This is the most common way of defining that a sequence of rv converges. If you have ever taken a course or two in probability theory, you have used this before (and its extensions: convergence “in probability”, “almost sure” and “sure” convergence).

In order to prove weak convergence, we actually only need to prove the convergence of the expected values of the functions $f_t(x) = e^{itx}$ for $t \in \mathbb{R}$, the Fourier basis of $L^2(\mathbb{R})$ (in other words, the characteristic functions), so it’s not as hard as it might look from the definition.
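As a quick numerical sketch of this (my own illustration, not from the post; all names are mine): take $X_n \sim \mathcal{N}(1/n, 1)$, which converges weakly to $\mathcal{N}(0,1)$, and watch a Monte-Carlo estimate of $\mathbb{E}[e^{itX_n}]$ approach the characteristic function of the limit as $n$ grows.

```python
# Sketch (illustrative): Monte-Carlo estimates of the characteristic
# function E[exp(i t X_n)] for X_n ~ N(1/n, 1), compared against
# exp(-t^2 / 2), the characteristic function of the weak limit N(0, 1).
import numpy as np

rng = np.random.default_rng(1)
t = 1.7                            # an arbitrary test frequency
limit = np.exp(-t**2 / 2)          # characteristic function of N(0, 1) at t

for n in (1, 10, 100):
    xn = rng.normal(1.0 / n, 1.0, 500_000)   # samples of X_n
    est = np.mean(np.exp(1j * t * xn))       # Monte-Carlo estimate
    print(n, abs(est - limit))               # the gap shrinks as n grows
```

The gap decays roughly like $t/n$ (up to Monte-Carlo noise), which is the convergence the definition asks for, but only tested on the Fourier basis.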

### The total-variation distance, and its topology

The total-variation topology is basically an all-around improvement on the weak topology.

First of all, let’s define the total-variation distance. I prefer to define it as a distance between probability density functions (pdf) $p$ and $q$:

$$d_{TV}(p, q) = \sup_{A} \left| \int_A p(x) \, dx - \int_A q(x) \, dx \right|$$

In words: we find the event for which the probability is maximally different under p and q. The difference of probability is the total-variation distance.

The total-variation distance is a metric on the space of probability distributions, and so you can use it to define a topology: $X_n$ converges to $X$ if the probability densities converge according to the total-variation distance:

$$\lim_{n \to \infty} d_{TV}(p_n, p) = 0$$

There is a very strong link between $d_{TV}$ and the weak convergence. Indeed, we can compute the distance by computing a max over all functions which are bounded by 1:

$$d_{TV}(p, q) = \frac{1}{2} \sup_{\|f\|_\infty \le 1} \left| \mathbb{E}_p[f(X)] - \mathbb{E}_q[f(X)] \right|$$

So convergence in total variation is equivalent to uniform convergence of the expected values over all bounded functions, which is why I said that TV improves on the weak convergence.
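To make the definition concrete, here is a small numerical sketch (my own, with hypothetical names): it approximates $d_{TV}$ between two Gaussian pdfs on a grid, using the equivalent $L^1$ form $d_{TV}(p, q) = \frac{1}{2} \int |p - q|$ (which follows from taking the event $A = \{p > q\}$ in the definition).

```python
# Sketch (illustrative): approximate the total-variation distance between
# two Gaussian pdfs via the equivalent L1 form d_TV = 0.5 * integral |p - q|.
import numpy as np

def gauss_pdf(x, mu):
    # Gaussian density with variance 1, centered at mu.
    return np.exp(-(x - mu)**2 / 2) / np.sqrt(2 * np.pi)

x = np.linspace(-10.0, 10.0, 200_001)   # fine uniform grid
dx = x[1] - x[0]

p = gauss_pdf(x, 0.0)                   # N(0, 1)
q = gauss_pdf(x, 0.5)                   # N(0.5, 1)

tv = 0.5 * dx * np.abs(p - q).sum()     # Riemann-sum approximation
print(round(tv, 4))                     # ≈ 0.1974 for these two Gaussians
```

The closed form for two unit-variance Gaussians is $2\Phi(|\mu_1 - \mu_2|/2) - 1$, which the grid approximation matches closely.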

### Why they’re weird

Let’s recapitulate. Both the weak convergence and the TV convergence prove that the expected values of bounded functions converge, with slight differences between the two: TV gives uniform convergence, whereas the weak convergence is more “pointwise”. So why do I think that they’re weird notions of convergence?

My problem is that I’m also very interested in convergence of some unbounded functions. For example, the mean, the variance, the skew, the kurtosis, etc. I really dislike that both the weak convergence and the total variation do not require the convergence of all those important statistics.

#### A paradoxical example

Let me give you an example of the pathological behavior of these two topologies.

Let’s define a sequence of rv $X_n$ which have a mixture distribution. With probability $1 - \frac{1}{n}$, $X_n$ is picked according to a Gaussian distribution centered at 0. With probability $\frac{1}{n}$, we pick it instead from a Gaussian centered at $n$. Both Gaussians have variance 1.

All the $X_n$ have mean 1, and their variance (which equals $n$) grows to infinity as $n \to \infty$. It seems unreasonable to believe that that sequence of rv converges to a well-behaved distribution, or to anything at all really.

But, according to both the weak and the TV topologies, $X_n$ converges to a Gaussian centered at 0, with variance 1. I’ll leave the proof as an exercise to the interested reader. Start from the definition and it’s straightforward to find that $d_{TV}(p_n, \mathcal{N}(0,1)) \le \frac{1}{n} \to 0$.

It absolutely blew my mind when I realized that the TV topology predicts that the sequence converges towards a Gaussian at 0. To me, this means that these topologies are myopic: they do not look at the full picture. If $X_n$ has very rare but extremely large events, they get ignored by the TV distance.
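A numerical sketch of this example (my own illustration; the helper names are hypothetical): for the mixture $(1 - \frac{1}{n})\,\mathcal{N}(0,1) + \frac{1}{n}\,\mathcal{N}(n,1)$, a grid approximation of $d_{TV}(p_n, \mathcal{N}(0,1))$ shrinks like $1/n$ while the sample variance of $X_n$ keeps growing.

```python
# Sketch (illustrative): the TV distance between the mixture p_n and N(0, 1)
# decays like 1/n, while the variance of the mixture grows like n.
import numpy as np

def gauss_pdf(x, mu):
    # Gaussian density with variance 1, centered at mu.
    return np.exp(-(x - mu)**2 / 2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(0)

for n in (5, 50, 500):
    # Grid wide enough to cover both mixture components.
    x = np.linspace(-10.0, n + 10.0, 500_001)
    dx = x[1] - x[0]
    p_n = (1 - 1/n) * gauss_pdf(x, 0.0) + (1/n) * gauss_pdf(x, float(n))
    tv = 0.5 * dx * np.abs(p_n - gauss_pdf(x, 0.0)).sum()

    # Sample X_n to estimate its variance empirically.
    far = rng.random(200_000) < 1.0 / n
    samples = np.where(far, rng.normal(float(n), 1.0, 200_000),
                            rng.normal(0.0, 1.0, 200_000))
    print(n, round(tv, 3), round(samples.var(), 1))
```

The printed TV distances track $1/n$ (up to grid error) while the empirical variances track $n$: exactly the “rare but huge event” that the TV distance throws away.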

### What’s the solution?

To be honest, I don’t really know. I was hoping for a second that the Wasserstein distances might prove better, but Wasserstein convergence is equivalent to weak convergence plus convergence of the first moments, so they still seem a little bit too weak for me.

There isn’t much left after that. The Kullback-Leibler divergence is one avenue I’m looking at right now. I’ll make another blog post on it once I understand better what convergence in KL implies, but while it seemed good at first, I’m not so sure now.

### Is it really that bad?

To be honest again, the weak and TV topologies are not that bad. From a statistical point of view, you can still compute a lot of relevant quantities from the limit rv. For example, you can construct asymptotic confidence intervals in the following way.

Let’s build a confidence interval on the $X_n$, using their limit $X$. Assume that you want a 90% confidence interval: an interval in which the random variable falls 90% of the time. First, we select a 95% confidence interval on $X$, which we’ll call $I$. For any $n$, as soon as $d_{TV}(p_n, p) \le 0.05$, the probability of $X_n \in I$ becomes bigger than 90%, by a simple application of the definition of the TV distance. $I$ is thus an (asymptotic) confidence interval.
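Here is a small check of that argument on the mixture example from the previous section (my own sketch; the function names are mine): take $I = [-1.96, 1.96]$, a 95% interval for the limit $\mathcal{N}(0,1)$. Since the TV distance to the limit is at most $1/n$, the coverage under $X_n$ should stay above 90% once $n \ge 20$.

```python
# Sketch (illustrative): exact coverage of the interval I = [-1.96, 1.96]
# under the mixture X_n ~ (1 - 1/n) N(0, 1) + (1/n) N(n, 1).
import math

def Phi(z):
    # Standard normal cdf via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def coverage(n):
    near = Phi(1.96) - Phi(-1.96)          # mass of N(0, 1) inside I
    far = Phi(1.96 - n) - Phi(-1.96 - n)   # mass of N(n, 1) inside I
    return (1 - 1/n) * near + (1/n) * far

for n in (20, 100, 1000):
    print(n, round(coverage(n), 4))        # all above the 0.90 target
```

At $n = 20$ the coverage is about 0.9025: the 5% TV budget has been spent, but the interval still does its asymptotic job, even though the variance of $X_n$ is nowhere near that of the limit.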

It’s thus quite possible that I’m over-reacting, but I think there are good reasons for seeking and working with stronger topologies than the conventional weak and TV topologies.

As always, feel free to correct any inaccuracies, errors, and spelling mistakes, and to send comments to my email! I’ll be glad to hear from you.