
Unbiased Estimators vs. Minimizing a Quadratic Loss Function

Unbiased estimators are important in statistics. I guess this is because of the Cramér-Rao bound on the variance: if $\mathbb{E}[\widehat{\theta}]=\theta$, then $\text{Var}[\widehat{\theta}]\geq I_\theta^{-1}$, where $I_\theta$ denotes the Fisher information (the proof was written in an old post).

But what could the variance be if $\widehat{\theta}$ is not unbiased?

Consider the following simple case, with a Gaussian i.i.d. sample $X_1,\dots,X_n$ from a $\mathcal{N}(\mu,\sigma^2)$ distribution. We know that the Method of Moments estimator of $\mu$ is the same as the Maximum Likelihood estimator, i.e. $\widehat{\mu}=\overline{X}$. And this estimator is efficient, in the sense that its variance is equal to the Cramér-Rao lower bound: the Fisher information here is $I_\mu=n/\sigma^2$, so $\text{Var}[\widehat{\mu}]=\sigma^2/n=I_\mu^{-1}$.
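
As a quick sanity check (this small simulation is an addition, not part of the original post), we can compare the empirical variance of $\overline{X}$ with the bound $\sigma^2/n$, for arbitrary values of $n$, $\mu$ and $\sigma$,

set.seed(1)
n=10; mu=1; sigma=1
xbar=replicate(1e5,mean(rnorm(n,mu,sigma)))
var(xbar)     # empirical variance of the sample mean
sigma^2/n     # Cramer-Rao lower bound, here .1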

But what if we consider another estimator? For instance $\widetilde{\mu}=\alpha\overline{X}$, with $\alpha$ not necessarily equal to 1. In that case, this estimator is (usually) biased, since

$\mathbb{E}[\widetilde{\mu}]=\alpha\,\mathbb{E}[\overline{X}]=\alpha\mu$, so that $\text{bias}[\widetilde{\mu}]=\mathbb{E}[\widetilde{\mu}]-\mu=(\alpha-1)\mu$

And its variance is

$\text{Var}[\widetilde{\mu}]=\alpha^2\,\text{Var}[\overline{X}]=\alpha^2\frac{\sigma^2}{n}$
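
Again, a quick simulation (an addition of mine, with arbitrary values $\mu=1$, $\sigma=1$ and $\alpha=0.8$) confirms both formulas,

set.seed(1)
n=10; mu=1; sigma=1; alpha=.8
mutilde=replicate(1e5,alpha*mean(rnorm(n,mu,sigma)))
mean(mutilde)-mu     # empirical bias, close to (alpha-1)*mu = -.2
var(mutilde)         # empirical variance, close to alpha^2*sigma^2/n = .064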

We can visualise those two functions (the bias and the variance, here with $\mu=1$ and $\sigma=1$) using


n=10
alpha=seq(0,2,by=.01)
b=1-alpha        # bias of alpha*Xbar (up to sign), with mu=1
v=alpha^2/n      # variance of alpha*Xbar, with sigma=1
plot(alpha,b,xlab="alpha",ylab="",col="red",type="l")
par(new=TRUE)    # overlay the second curve on the same graph
plot(alpha,v,col="blue",type="l",axes=FALSE,xlab="",ylab="")
axis(4)          # right-hand axis for the variance
mtext("bias",side=2,line=-1,col="red")
mtext("variance",side=4,line=-1,col="blue")

Observe that if $\alpha$ is small (smaller than 1), the variance is smaller than the Cramér-Rao lower bound. Consider now the mean squared error, defined as

$\text{mse}[\widetilde{\mu}]=\mathbb{E}[(\widetilde{\mu}-\mu)^2]=\text{bias}[\widetilde{\mu}]^2+\text{Var}[\widetilde{\mu}]$

which is, here,

$\text{mse}[\widetilde{\mu}]=(\alpha-1)^2\mu^2+\alpha^2\frac{\sigma^2}{n}$

The optimal value of $\alpha$ is obtained when the first order condition is satisfied,

$2(\alpha-1)\mu^2+2\alpha\frac{\sigma^2}{n}=0$

i.e.

$\alpha^\star=\frac{\mu^2}{\mu^2+\sigma^2/n}=\frac{n\mu^2}{n\mu^2+\sigma^2}<1$
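
We can double-check that closed form numerically (again, an added snippet, with illustrative values of $n$, $\mu$ and $\sigma$),

n=10; mu=1; sigma=1
mse=function(a) (a-1)^2*mu^2+a^2*sigma^2/n
optimize(mse,c(0,2))$minimum     # numerical minimiser of the mse
n*mu^2/(n*mu^2+sigma^2)          # closed form, here 10/11 ~ .909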

So biased estimators can be more interesting than unbiased estimators, if the goal is to minimize the mean squared error.

Statistics, and the Goldilocks Principle

At the end of May, in Toronto, we had that great talk at the SSC by Jeff Rosenthal, on Monte Carlo techniques, and Jeff mentioned the "Goldilocks principle" (it was in the context of MCMC, and I did mention it in my talk on MCMC in London, when I discussed the rejection rate of the Metropolis-Hastings algorithm, which should be neither too large nor too small…). In the Goldilocks story, there are always three alternatives: one is too far towards one extreme (too hot, for the soup, or too large, for the bed, or the chair), one is too far towards the opposite extreme (too cold, or too small), and one is "just right".
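
To illustrate (a minimal sketch of my own, not from the talk), here is a random-walk Metropolis-Hastings sampler targeting a standard Gaussian, run with three proposal scales: the acceptance rate is very high when the steps are too small, very low when they are too large, and "just right" in between,

metropolis=function(sd_prop,n_iter=1e4){
  x=0; acc=0
  for(i in 1:n_iter){
    y=x+rnorm(1,0,sd_prop)     # random-walk proposal
    if(log(runif(1)) < dnorm(y,log=TRUE)-dnorm(x,log=TRUE)){
      x=y; acc=acc+1           # accept the move
    }
  }
  acc/n_iter                   # acceptance rate
}
set.seed(1)
sapply(c(.1,2.4,50),metropolis)
# roughly .97 (too small), .44 ("just right"), .03 (too large)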
