Unbiased Estimators vs. Minimizing a Quadratic Loss Function

Unbiased estimators are important in statistics, I guess mainly because of the Cramér-Rao bound on the variance: if $\mathbb{E}[\widehat{\theta}]=\theta$, then $\text{Var}[\widehat{\theta}]\geq I_\theta^{-1}$, where $I_\theta$ denotes the Fisher information (the proof was written in an old post).

But what could the variance be if $\widehat{\theta}$ is not unbiased?

Consider the following simple case, with a Gaussian i.i.d. sample $\{x_1,\dots,x_n\}$ from a $\mathcal{N}(\mu,\sigma^2)$. We know that the Method of Moments estimator of $\mu$ is the same as the Maximum Likelihood estimator, namely $\widehat{\mu}=\overline{x}$. And this estimator is efficient, in the sense that its variance is equal to the Cramér-Rao lower bound, $\sigma^2/n$.
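
As a quick numerical illustration (a minimal simulation sketch, with arbitrary values $n=10$, $\mu=1$, $\sigma=1$ and 100,000 replications), the empirical variance of $\overline{x}$ is indeed close to $\sigma^2/n$:

set.seed(123)
n=10; mu=1; sigma=1
xbar=replicate(1e5,mean(rnorm(n,mu,sigma)))   # 100,000 simulated sample means
var(xbar)                                     # should be close to sigma^2/n = 0.1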

But what if we consider another estimator? For instance $\widetilde{\mu}=\alpha\overline{x}$, with $\alpha$ not necessarily equal to 1. In that case, this estimator is (usually) biased since

$$\mathbb{E}[\widetilde{\mu}]=\alpha\mu,\qquad\text{so the bias is }\mathbb{E}[\widetilde{\mu}]-\mu=(\alpha-1)\mu.$$

And its variance is

$$\text{Var}[\widetilde{\mu}]=\alpha^2\frac{\sigma^2}{n}.$$

We can visualise those two functions of $\alpha$ (the bias and the variance, here with $\mu=\sigma^2=1$ and $n=10$) using


n=10
alpha=seq(0,2,by=.01)
b=alpha-1            # bias of alpha*xbar, (alpha-1)*mu, with mu=1
v=alpha^2/n          # variance, alpha^2*sigma^2/n, with sigma^2=1
plot(alpha,b,xlab="alpha",ylab="",col="red",type="l")
par(new=TRUE)
plot(alpha,v,col="blue",type="l",axes=FALSE,xlab="",ylab="")
axis(4)
mtext("bias",side=2,line=-1,col="red")
mtext("variance",side=4,line=-1,col="blue")

Observe that if $\alpha$ is small (smaller than 1), the variance is smaller than the Cramér-Rao lower bound. And here, the mean squared error, defined as

$$\text{mse}[\widetilde{\mu}]=\mathbb{E}\big[(\widetilde{\mu}-\mu)^2\big]=\text{bias}[\widetilde{\mu}]^2+\text{Var}[\widetilde{\mu}],$$

is, here,

$$\text{mse}[\widetilde{\mu}]=(\alpha-1)^2\mu^2+\alpha^2\frac{\sigma^2}{n}.$$

The optimal value of $\alpha$ is obtained when the first order condition

$$\frac{\partial}{\partial\alpha}\text{mse}[\widetilde{\mu}]=2(\alpha-1)\mu^2+2\alpha\frac{\sigma^2}{n}=0$$

is satisfied, i.e.

$$\alpha^\star=\frac{n\mu^2}{n\mu^2+\sigma^2}<1.$$

So biased estimators can be more interesting than unbiased estimators, if the goal is to minimize the mean squared error.
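
As a quick sanity check (a small sketch, reusing $\mu=\sigma^2=1$ and $n=10$ from the code above), we can plot the mean squared error and compare its minimum on the grid with the closed-form optimum $n\mu^2/(n\mu^2+\sigma^2)=10/11$:

mu=1; sigma2=1; n=10
alpha=seq(0,2,by=.01)
mse=(alpha-1)^2*mu^2+alpha^2*sigma2/n   # squared bias + variance
plot(alpha,mse,type="l",xlab="alpha",ylab="mse")
alpha[which.min(mse)]                   # 0.91 on this grid
n*mu^2/(n*mu^2+sigma2)                  # closed-form optimum, 10/11
abline(v=n*mu^2/(n*mu^2+sigma2),col="red")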

Re-parametrization and Maximum Likelihood

The maximum likelihood estimator is invariant in the sense that for any bijective function $h$, if $\widehat{\theta}$ is the maximum likelihood estimator of $\theta$, then $h(\widehat{\theta})$ is the maximum likelihood estimator of $h(\theta)$. Let $\mu=h(\theta)$; then $\theta$ is equal to $h^{-1}(\mu)$, and the likelihood function in $\mu$ is $\widetilde{\mathcal{L}}(\mu)=\mathcal{L}(h^{-1}(\mu))$. And since $\widehat{\theta}$ is the maximum likelihood estimator of $\theta$, for any $\mu$,

$$\widetilde{\mathcal{L}}(\mu)=\mathcal{L}(h^{-1}(\mu))\leq\mathcal{L}(\widehat{\theta})=\widetilde{\mathcal{L}}(h(\widehat{\theta})),$$

hence, $\widehat{\mu}=h(\widehat{\theta})$ is the maximum likelihood estimator of $\mu$.

For instance, the Bernoulli distribution is $\mathcal{B}(p)$, with $p\in(0,1)$, and

$$\mathbb{P}[X=x]=p^x(1-p)^{1-x},\qquad x\in\{0,1\}.$$

Given a sample $\{x_1,\dots,x_n\}$, the likelihood is

$$\mathcal{L}(p)=\prod_{i=1}^n p^{x_i}(1-p)^{1-x_i}=p^{\sum x_i}(1-p)^{n-\sum x_i}.$$

The log-likelihood is then

$$\log\mathcal{L}(p)=\left(\sum x_i\right)\log p+\left(n-\sum x_i\right)\log(1-p)$$

with

$$\frac{\partial}{\partial p}\log\mathcal{L}(p)=\frac{\sum x_i}{p}-\frac{n-\sum x_i}{1-p}.$$

Thus, the first order condition

$$\frac{\partial}{\partial p}\log\mathcal{L}(p)=0$$

is satisfied when $\widehat{p}=\frac{1}{n}\sum_{i=1}^n x_i=\overline{x}$. In order to illustrate, consider the following data


> set.seed(1)
> X=sample(0:1,size=15,replace=TRUE)
> X
[1] 0 0 1 1 0 1 1 1 1 0 0 0 1 0 1

The (negative) log-likelihood is here


> loglik=function(p){
+ -sum(log(dbinom(X,size=1,prob=p)))
+ }

that we can visualize below


> u=seq(0,1,by=.025)
> v=-Vectorize(loglik)(u)
> plot(u,v,type="l",xlab="",ylab="")

From the calculations above, we know that the maximum likelihood estimator for $p$ is


> mean(X)
[1] 0.5333333
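
As a quick check (a small sketch, where score is just an illustrative name for the derivative computed above), the derivative of the log-likelihood does vanish at $\overline{x}$:

score=function(p) sum(X)/p-(length(X)-sum(X))/(1-p)   # derivative of the log-likelihood
score(mean(X))                                        # zero, up to floating-point error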

The numerical version is


> (opt=optim(.5,loglik))
$par
[1] 0.5333008

$value
[1] 10.36385

$counts
function gradient
20 NA

$convergence
[1] 0

$message
NULL

Somehow, we were lucky here, because we did not tell the optimization routine that the parameter belongs to the interval $(0,1)$. Nevertheless, our estimator of the probability does belong to $(0,1)$. In order to ensure that the optimal value is in $(0,1)$, we can consider some constrained optimization routine


> constrOptim(.5, loglik, grad=NULL,ui=matrix(c(1,-1),2,1), ci=c(0,-1))
$par
[1] 0.5333008

$value
[1] 10.36385

$counts
function gradient
20 NA

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] 6.909277e-05

On the previous graph, we did – indeed – reach that maximum of the log-likelihood


> abline(v=opt$par,col="red")
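
Note that since this is a one-dimensional problem on a bounded interval, another option (a minimal sketch) is R's optimize() function, which searches within a given interval:

optimize(loglik,interval=c(0,1))   # $minimum should be close to mean(X)=0.5333333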

An alternative is to consider the natural parameter $\theta=\log\frac{p}{1-p}$, i.e. $p=\frac{e^{\theta}}{1+e^{\theta}}$ (as in the exponential family). The log-likelihood is then

$$\log\mathcal{L}(\theta)=\theta\sum_{i=1}^n x_i-n\log(1+e^{\theta})$$

since

$$\log p=\theta-\log(1+e^{\theta})\qquad\text{and}\qquad\log(1-p)=-\log(1+e^{\theta}).$$

Here

$$\frac{\partial}{\partial \theta}\log\mathcal{L}(\theta)=\sum_{i=1}^n x_i-n\frac{e^{\theta}}{1+e^{\theta}}.$$

Thus, the first order condition

$$\frac{\partial}{\partial \theta}\log\mathcal{L}(\theta)=0$$

is satisfied when

$$\frac{e^{\widehat{\theta}}}{1+e^{\widehat{\theta}}}=\overline{x},$$

i.e.

$$\widehat{\theta}=\log\frac{\overline{x}}{1-\overline{x}}.$$

From a numerical perspective, we have the same optimal value


> loglik=function(theta){
+ -sum(log(dbinom(X,size=1,prob=exp(theta)/(1+exp(theta)))))
+ }
> (opt=optim(0,loglik))
$par
[1] 0.1335938

$value
[1] 10.36385

$counts
function gradient
20 NA

$convergence
[1] 0

$message
NULL
> exp(opt$par)/(1+exp(opt$par))
[1] 0.5333489
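
And, consistently with the invariance property stated above, applying the logit transformation directly to $\widehat{p}=\overline{x}$ gives (up to the optimizer's tolerance) the same $\widehat{\theta}$:

> log(mean(X)/(1-mean(X)))
[1] 0.1335314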

Big Data (3) – How Best to Guarantee Confidentiality?

Last post in the series published by Philippe Tassi, online on variance.eu, today with some thoughts on confidentiality.

The data and statistics held or produced by public administrations and companies are generally built from individual information, which raises the question of protecting the sources, that is, of privacy, given the constant progress of science and of data-processing techniques. How can the trust of the general public, the number one stakeholder, be established and maintained, while respecting the balance between the promise of confidentiality and the use of the data collected?

Two complementary approaches can answer this question: one is regulatory, and shows that governments have long been aware of the need to establish legal safeguards; the other relies on technology, putting in place technical obstacles that prevent data from being disclosed against the will of the person who produced them.

[to be continued…]