Unbiased estimators are important in statistics, I guess because of the Cramér-Rao bound for the variance: if $\mathbb{E}(\hat{\theta})=\theta$, then $$\text{Var}(\hat{\theta})\geq\frac{1}{I_n(\theta)},$$ where $I_n(\theta)$ denotes the Fisher information of the sample (the proof was written in an old post).
But what could the variance be if $\hat{\theta}$ is not unbiased?
Consider the following simple case, with a Gaussian i.i.d. sample $X_1,\dots,X_n$ from a $\mathcal{N}(\theta,\sigma^2)$ distribution. We know that the Method of Moments estimator of $\theta$ is the same as the Maximum Likelihood estimator, i.e. $\hat{\theta}=\overline{X}$. And this estimator is efficient, in the sense that its variance is equal to the Cramér-Rao lower bound, $$\text{Var}(\hat{\theta})=\frac{\sigma^2}{n}=\frac{1}{I_n(\theta)}.$$
But what if we consider another estimator? For instance $\tilde{\theta}=\alpha\,\overline{X}$, with $\alpha$ not necessarily equal to 1. In that case, this estimator is (usually) biased since $$\mathbb{E}(\tilde{\theta})=\alpha\,\mathbb{E}(\overline{X})=\alpha\,\theta\neq\theta$$ (unless $\alpha=1$, or $\theta=0$), so that its bias is $\mathbb{E}(\tilde{\theta})-\theta=(\alpha-1)\,\theta$.
And its variance is $$\text{Var}(\tilde{\theta})=\alpha^2\,\text{Var}(\overline{X})=\alpha^2\,\frac{\sigma^2}{n}.$$
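Both formulas are easy to check by simulation. Here is a minimal sketch, with arbitrary values $n=10$, $\theta=1$, $\sigma=1$ and $\alpha=0.5$ (with $\alpha=1$ we would get back the efficient estimator $\overline{X}$ and the Cramér-Rao bound above),
ns=1e5
n=10
theta=1
sigma=1
alpha=.5
est=alpha*replicate(ns,mean(rnorm(n,theta,sigma)))  # ns simulated values of alpha * xbar
mean(est)-theta  # empirical bias, close to (alpha-1)*theta = -0.5
var(est)         # empirical variance, close to alpha^2*sigma^2/n = 0.025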
We can visualise those two functions (the bias and the variance), here with $\theta=1$ and $\sigma^2=1$, using
n=10
alpha=seq(0,2,by=.01)
b=alpha-1    # bias of alpha * xbar, i.e. (alpha-1)*theta, with theta=1
v=alpha^2/n  # variance of alpha * xbar, i.e. alpha^2*sigma^2/n, with sigma^2=1
plot(alpha,b,xlab="alpha",ylab="",col="red",type="l")
par(new=TRUE)  # overlay the variance curve on the same graph
plot(alpha,v,col="blue",type="l",axes=FALSE,xlab="",ylab="")
axis(4)        # right-hand axis for the variance
mtext("bias",side=2,line=-1,col="red")
mtext("variance",side=4,line=-1,col="blue")
Observe that if $\alpha$ is small (smaller than 1), the variance is smaller than the Cramér-Rao lower bound. But the estimator is then biased, so consider the mean squared error, defined as $$\text{mse}(\tilde{\theta})=\mathbb{E}\big[(\tilde{\theta}-\theta)^2\big]=\text{Var}(\tilde{\theta})+\big[\text{bias}(\tilde{\theta})\big]^2$$ which is, here, $$\text{mse}(\tilde{\theta})=\alpha^2\,\frac{\sigma^2}{n}+(\alpha-1)^2\,\theta^2.$$
The optimal value of $\alpha$ is obtained when the first order condition is satisfied, $$\frac{\partial}{\partial\alpha}\,\text{mse}(\tilde{\theta})=2\,\alpha\,\frac{\sigma^2}{n}+2\,(\alpha-1)\,\theta^2=0,$$ i.e. $$\alpha^\star=\frac{\theta^2}{\theta^2+\sigma^2/n}=\frac{n\,\theta^2}{n\,\theta^2+\sigma^2},$$ which is always strictly smaller than 1.
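To visualise this on the same toy example as the R code above (so with the arbitrary values $n=10$, $\theta=1$ and $\sigma=1$), here is a small sketch plotting the mean squared error as a function of $\alpha$, together with $\sigma^2/n$, the mse of the unbiased estimator $\overline{X}$, and the optimal $\alpha^\star$,
n=10
theta=1
sigma=1
alpha=seq(0,2,by=.01)
mse=alpha^2*sigma^2/n+(alpha-1)^2*theta^2  # variance + squared bias
plot(alpha,mse,type="l",xlab="alpha",ylab="mse")
abline(h=sigma^2/n,lty=2)            # mse of the unbiased estimator (alpha=1)
astar=n*theta^2/(n*theta^2+sigma^2)  # optimal alpha, here 10/11
abline(v=astar,col="red",lty=2)
min(mse)                             # minimal mse over the grid, below sigma^2/n = 0.1
Here the minimum is $\text{mse}(\alpha^\star\,\overline{X})=1/11\approx 0.091$, below the Cramér-Rao bound $\sigma^2/n=0.1$.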
So biased estimators can be more interesting than unbiased estimators, if the goal is to minimize the mean squared error.
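To illustrate that claim, a last simulation sketch comparing the empirical mean squared errors of $\overline{X}$ and of $\alpha^\star\,\overline{X}$ (still with the arbitrary values $n=10$, $\theta=1$, $\sigma=1$; note that $\alpha^\star$ depends on the unknown $\theta$, so plugging it in is only an illustration of the phenomenon, not a feasible estimator),
ns=1e5
n=10
theta=1
sigma=1
astar=n*theta^2/(n*theta^2+sigma^2)  # optimal shrinkage factor, 10/11
xbar=replicate(ns,mean(rnorm(n,theta,sigma)))
mean((xbar-theta)^2)         # empirical mse of xbar, close to sigma^2/n = 0.1
mean((astar*xbar-theta)^2)   # empirical mse of astar * xbar, close to 1/11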