I’ve been reading about the bias-variance trade-off and, most importantly, about the Stein phenomenon. Here are some of my thoughts, which I hope can help others with this slightly thorny subject.
First, let us set the stage. We want to infer the value of some $d$-dimensional parameter $\theta \in \mathbb{R}^d$. In order to do so, we are given a single observation $y$, which corresponds to $\theta$ corrupted by Gaussian noise with covariance the identity matrix:

$$y = \theta + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, I_d).$$
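Concretely, here is a minimal NumPy sketch of this setup (the dimension and the true parameter are arbitrary choices of mine for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 50                       # dimension of the parameter (arbitrary)
theta = rng.normal(size=d)   # some "true" parameter, unknown in practice

# A single noisy observation: theta plus standard Gaussian noise
y = theta + rng.normal(size=d)
```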
How should we estimate $\theta$ from $y$?
A natural idea consists in using $y$ itself. This is the maximum-likelihood estimator of $\theta$, and it is indeed a good estimator: it is the best translation-invariant estimator; it is minimax; etc.
However, Stein has shown that $y$ is not actually perfect: if $d \geq 3$, there exists a whole family of estimators which are better than it. These are estimators of the form:

$$\hat\theta_{y_0}(y) = y_0 + \left(1 - \frac{d-2}{\|y - y_0\|^2}\right)(y - y_0),$$

one for each choice of the shrinkage point $y_0 \in \mathbb{R}^d$.
No matter the true value of $\theta$, these estimators always have lower Mean-Squared Error than $y$. In other words, they always do a better job! In a sense, it is thus slightly “stupid” to use $y$ instead of them, because you are going to make bigger errors by doing so.
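This is easy to check numerically. Below is a quick Monte-Carlo sketch (the dimension, number of trials, and true $\theta$ are arbitrary choices of mine) comparing the Mean-Squared Error of $y$ with that of the Stein estimator shrinking towards $y_0 = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def james_stein(y, y0):
    """Stein estimator shrinking y towards the point y0 (requires d >= 3)."""
    diff = y - y0
    d = len(y)
    return y0 + (1 - (d - 2) / np.dot(diff, diff)) * diff

d, n_trials = 50, 10_000
theta = rng.normal(size=d) * 3   # arbitrary "true" parameter
y0 = np.zeros(d)                 # shrink towards the origin

mse_mle, mse_js = 0.0, 0.0
for _ in range(n_trials):
    y = theta + rng.normal(size=d)
    mse_mle += np.sum((y - theta) ** 2)
    mse_js += np.sum((james_stein(y, y0) - theta) ** 2)

# The MLE's average squared error is close to d; Stein's is lower.
print("MLE :", mse_mle / n_trials)
print("Stein:", mse_js / n_trials)
```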
The reason this occurs is the strong power of biasing your estimators in high dimensions. Introducing a bias increases the bias term of the error, but causes a stronger reduction in the variance term, and is thus beneficial overall. The Stein estimator creates a bias towards the point $y_0$, which is able to improve the performance of the estimation. Other biased estimators, such as those obtained by adding an $\ell_1$ or $\ell_2$ penalty (in machine-learning’s horrible jargon, these are known as Lasso and Ridge penalties), would also improve over $y$ when $\theta$ is close enough to $0$, but they become worse than $y$ when $\theta$ is far away. The Stein estimator, however, is always better, since its bias vanishes when $y$ is far from $y_0$.
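To see this vanishing-bias effect in action, here is a small simulation sketch (the ridge penalty strength and the scale of $\theta$ are arbitrary choices of mine): with $\theta$ placed far from $0$, the ridge estimate remains badly biased, while the Stein estimator’s shrinkage fades away and its error stays close to that of $y$:

```python
import numpy as np

rng = np.random.default_rng(1)

def james_stein(y):
    """Shrink towards 0; the shrinkage factor fades as ||y|| grows."""
    d = len(y)
    return (1 - (d - 2) / np.dot(y, y)) * y

def ridge(y, lam=1.0):
    """l2-penalised estimate: argmin_t ||y - t||^2 + lam * ||t||^2."""
    return y / (1 + lam)

d, n_trials = 50, 5_000
theta = np.full(d, 10.0)   # true parameter, far from the origin

errs = {"mle": 0.0, "stein": 0.0, "ridge": 0.0}
for _ in range(n_trials):
    y = theta + rng.normal(size=d)
    errs["mle"] += np.sum((y - theta) ** 2)
    errs["stein"] += np.sum((james_stein(y) - theta) ** 2)
    errs["ridge"] += np.sum((ridge(y) - theta) ** 2)

# Ridge stays heavily biased and does much worse than y here,
# while Stein's error remains close to (and below) that of y.
for name, e in errs.items():
    print(name, e / n_trials)
```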
When I first saw this result, I was very perplexed by the peculiar form of the Stein estimator. There is this tendency in math to present properties like this one as magical: “here is this one guy that just happens to have a crazy property”, instead of detailing where the property comes from. The Stein estimator is usually presented in this manner, but it does have an interesting origin. Indeed, it is actually close to a Bayesian estimator (more precisely, it can be derived from an empirical Bayes approach; since it isn’t truly Bayesian, this suggests that there exists another estimator that dominates it). This makes a little bit of sense, since Bayesian estimators are known to have good Mean-Squared Error. Of course, I’m still very curious whether there could be other biases which result in estimators that dominate $y$ everywhere. I’m just never satisfied with a single counter-example: I want to know the set of all counter-examples.
Thus, the Stein phenomenon gives us the high-level lesson that “bias is good (in high dimensions)”. However, there remains one question: in which direction should we bias our estimate? This is an interesting question.
Indeed, if we have no idea which direction to pick, one idea is to choose $y_0$ randomly around $0$, which constitutes a natural first guess for $\theta$. However, that is stupid, by the following argument:
- Biasing our estimator towards $y_0$ is only good compared to $y$ if $y_0$ is pulling us towards the true value $\theta$.
- If we choose $y_0$ randomly around $0$, half of the values that we choose are going to be further away from $\theta$ than $0$ itself.
- Thus, randomly choosing our bias direction is a bad idea: it’s going to be worse than simply biasing towards $0$ itself (see https://arxiv.org/pdf/1203.5626.pdf for a more detailed presentation of that idea).
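A small simulation illustrates the contrast (a sketch; the scales of $\theta$, of the informed guess, and of the blind guess are all arbitrary choices of mine): shrinking towards a point chosen with genuine prior knowledge helps a lot, while shrinking towards a blindly drawn point yields essentially no improvement over $y$:

```python
import numpy as np

rng = np.random.default_rng(2)

def james_stein(y, y0):
    # Stein estimator shrinking y towards the point y0 (requires d >= 3)
    diff = y - y0
    return y0 + (1 - (len(y) - 2) / np.dot(diff, diff)) * diff

d, n_trials = 50, 5_000
theta = rng.normal(size=d) * 5   # arbitrary true parameter

errs = {"mle": 0.0, "informed": 0.0, "random": 0.0}
for _ in range(n_trials):
    y = theta + rng.normal(size=d)
    good_y0 = theta + rng.normal(size=d) * 0.5  # a guess informed by real prior knowledge
    rand_y0 = rng.normal(size=d) * 5            # a blind guess, drawn at the same scale
    errs["mle"] += np.sum((y - theta) ** 2)
    errs["informed"] += np.sum((james_stein(y, good_y0) - theta) ** 2)
    errs["random"] += np.sum((james_stein(y, rand_y0) - theta) ** 2)

# The informed bias slashes the error; the random bias barely moves it.
for name, e in errs.items():
    print(name, e / n_trials)
```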
Thus, biasing is indeed good if we have true prior information about the direction in which we should bias our guess. If we don’t, then it is better to use the unbiased estimator $y$, since a randomly chosen bias is unlikely to actually achieve a reduction in error.