
Profile Likelihood

Consider some simulated data

> set.seed(1)
> x=exp(rnorm(100))

Assume that those data are observed i.i.d. random variables with a Gamma distribution, $X_i\sim\mathcal{G}(\alpha,\beta)$, with unknown parameters $\alpha$ and $\beta$. The natural idea is to consider the maximum likelihood estimator

$$(\widehat{\alpha},\widehat{\beta})=\underset{(\alpha,\beta)}{\text{argmax}}\;\log\mathcal{L}(\alpha,\beta)=\underset{(\alpha,\beta)}{\text{argmax}}\;\sum_{i=1}^{n}\log f_{(\alpha,\beta)}(x_i)$$

For instance, it can be computed with the fitdistr function from the MASS package,

> library(MASS)
> (F=fitdistr(x,"gamma"))
     shape       rate   
  1.4214497   0.8619969 
 (0.1822570) (0.1320717)
> F$estimate[1]+c(-1,1)*1.96*F$sd[1]
[1] 1.064226 1.778673

Here, we have an approximate confidence interval for the shape parameter $\alpha$ (approximate, since the maximum likelihood estimator has an asymptotic Gaussian distribution). We can also use a numerical optimization routine to get the maximum of the log-likelihood function

> log_lik=function(theta){
+   a=theta[1]
+   b=theta[2]
+   logL=sum(log(dgamma(x,a,b)))
+   return(-logL)   # minus the log-likelihood, since optim minimizes
+ }

> optim(c(1,1),log_lik)
$par
[1] 1.4214116 0.8620311
 
$value
[1] 146.5909

And we have the same value.

Now, what if we care only about $\alpha$, and not about $\beta$? Then we can use the profile likelihood. The idea is to solve

$$\widehat{\alpha}=\underset{\alpha}{\text{argmax}}\;\log\mathcal{L}_p(\alpha)\quad\text{where}\quad\log\mathcal{L}_p(\alpha)=\max_{\beta}\,\log\mathcal{L}(\alpha,\beta)$$

i.e., for each value of $\alpha$, we first compute $\beta^\star(\alpha)=\text{argmax}_\beta\,\log\mathcal{L}(\alpha,\beta)$, and then maximize $\alpha\mapsto\log\mathcal{L}(\alpha,\beta^\star(\alpha))$, or, equivalently, minimize minus that profile log-likelihood,

> prof_log_lik=function(a){
+   # for a given shape a, find the optimal rate b...
+   b=(optim(1,function(z) -sum(log(dgamma(x,a,z)))))$par
+   # ...and return minus the profile log-likelihood
+   return(-sum(log(dgamma(x,a,b))))
+ }

> vx=seq(.5,3,length=101)
> vl=-Vectorize(prof_log_lik)(vx)
> plot(vx,vl,type="l")
> optim(1,prof_log_lik)
$par
[1] 1.421094
 
$value
[1] 146.5909

A few weeks ago, we mentioned the likelihood ratio test, i.e. the fact that

$$2\left[\log\mathcal{L}(\widehat{\alpha},\widehat{\beta})-\log\mathcal{L}(\alpha_0,\beta_0)\right]\overset{\mathcal{L}}{\longrightarrow}\chi^2\big(\dim(\boldsymbol{\theta})\big)$$

The analogous result can be obtained here, since

$$2\left[\log\mathcal{L}_p(\widehat{\alpha})-\log\mathcal{L}_p(\alpha_0)\right]\overset{\mathcal{L}}{\longrightarrow}\chi^2(1)$$

(the 1 comes from the fact that $\alpha$ is a one-dimensional parameter). The (technical) proof can be found in Suhasini Subba Rao’s notes (see also Section 4.5.2 in Anthony Davison’s Statistical Models). From that property, we can easily obtain a confidence interval for $\alpha$,

$$\left\{\alpha:\log\mathcal{L}_p(\alpha)\geq\log\mathcal{L}_p(\widehat{\alpha})-\frac{1}{2}q_{\chi^2(1)}(0.95)\right\}$$

Hence, from our sample, we get the following 95% confidence interval,

> abline(v=optim(1,prof_log_lik)$par,lty=2)
> abline(h=-optim(1,prof_log_lik)$value)
> abline(h=-optim(1,prof_log_lik)$value-qchisq(.95,1)/2)
 
> segments(F$estimate[1]-1.96*F$sd[1],
+ -170,F$estimate[1]+1.96*F$sd[1],-170,lwd=3,col="blue")
> borne=-optim(1,prof_log_lik)$value-qchisq(.95,1)/2
> (b1=uniroot(function(z) Vectorize(prof_log_lik)(z)+borne,c(.5,1.5))$root)
[1] 1.095726
> (b2=uniroot(function(z) Vectorize(prof_log_lik)(z)+borne,c(1.25,2.5))$root)
[1] 1.811809

that can be visualized below,

> segments(b1,-168,b2,-168,lwd=3,col="red")

In blue, the confidence interval obtained using the asymptotic Gaussian property of the maximum likelihood estimator, and in red, the one obtained using the asymptotic chi-square distribution of the log (profile) likelihood ratio.
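For the record, a small additional sketch (not in the original code) putting the two intervals side by side, reusing the objects F, b1 and b2 computed above,

> CI_gauss=F$estimate[1]+c(-1,1)*1.96*F$sd[1]
> CI_prof=c(b1,b2)
> rbind(CI_gauss,CI_prof)

Both intervals contain the maximum likelihood estimate of the shape parameter, but the profile-likelihood interval is shifted slightly to the right, reflecting the asymmetry of the profile log-likelihood.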

Likelihood Based Methods, for Extremes

This week, in the MAT8595 course, we will start the section on inference for extreme values. To start with something simple, we will use maximum likelihood techniques on a Generalized Pareto Distribution (we saw the Pickands–Balkema–de Haan theorem on Monday).

  • Maximum Likelihood Estimation

In the context of parametric models, the standard technique is to consider the maximum of the likelihood (or of the log-likelihood). Let $\boldsymbol{\theta}$ denote the parameter, with true value $\boldsymbol{\theta}_0$. Given some – standard – technical assumptions (such as identifiability, or smoothness of the log-likelihood on some neighbourhood of $\boldsymbol{\theta}_0$), then

$$\sqrt{n}\left(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta}_0\right)\rightarrow\mathcal{N}\left(\boldsymbol{0},I(\boldsymbol{\theta}_0)^{-1}\right)$$

where $I(\boldsymbol{\theta}_0)$ denotes the Fisher information matrix (see any textbook for mathematical statistics courses). Consider here some i.i.d. sample from a Generalized Pareto Distribution, with parameter $\boldsymbol{\theta}=(\xi,\sigma)$, so that

$$F_{(\xi,\sigma)}(x)=\begin{cases}1-\left(1+\dfrac{\xi x}{\sigma}\right)^{-1/\xi} & \xi\neq 0\\[1ex] 1-\exp\left(-\dfrac{x}{\sigma}\right) & \xi=0\end{cases}$$
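The corresponding density, for $\xi\neq 0$, is $f_{(\xi,\sigma)}(x)=\frac{1}{\sigma}\left(1+\frac{\xi x}{\sigma}\right)^{-1/\xi-1}$. As a minimal sketch (mine, not part of the original code; the name nlogL_gpd is an assumption, and all observations are assumed to lie in the support), the negative log-likelihood that will be minimized numerically can be written by hand,

> nlogL_gpd=function(theta,x){
+   xi=theta[1]; sigma=theta[2]
+   # penalize parameters outside the parameter space / support
+   if(sigma<=0 || any(1+xi*x/sigma<=0)) return(1e10)
+   length(x)*log(sigma)+(1+1/xi)*sum(log(1+xi*x/sigma))
+ }

and passed to optim (e.g. optim(c(.5,1),nlogL_gpd,x=x,hessian=TRUE)); the gpd function from the evir package, used below, performs this maximum likelihood fit directly.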

If we solve (numerically) the first order condition of the maximum likelihood, we get an estimator $\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma}_n)$ which satisfies

$$\sqrt{n}\left(\left[\begin{array}{c}\widehat{\xi}_n\\\widehat{\sigma}_n\end{array}\right]-\left[\begin{array}{c}\xi_0\\\sigma_0\end{array}\right]\right)\rightarrow\mathcal{N}\left(\left[\begin{array}{c}0\\0\end{array}\right],\left[\begin{array}{cc}(1+\xi_0)^2 & \sigma_0(1+\xi_0)\\ \sigma_0(1+\xi_0) & 2\sigma^2_0(1+\xi_0)\end{array}\right]\right)$$

The idea behind this asymptotic normality is the following: if the true distribution of the sample is a GPD with parameter $\boldsymbol{\theta}_0=(\xi_0,\sigma_0)$, then, if $n$ is large enough, $\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma}_n)$ will have a joint normal distribution. So if we generate a lot of samples (each sufficiently large, say 200 observations), then the scatterplot of the estimators should look like the scatterplot of a Gaussian distribution,

> library(evir)
> n=200
> param=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)      # simulate a GPD sample of size n
+ param[s,]=gpd(x,0)$par.ests    # store the maximum likelihood estimates
+ }
> m=apply(param,2,mean)
> S=var(param)
> library(mnormt)
> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=101)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> COL=rev(heat.colors(100))
> image(x,y,t(z),col=COL)
> points(param)

and to get a 3d representation

> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=31)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=31)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> persp(x,y,t(z),shade=TRUE,col="green",theta=-30,phi=20,ticktype="detailed",
+ xlab="xi",ylab="sigma")

With 200 observations, if the true underlying distribution is a GPD, then, indeed, the joint distribution of $\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma}_n)$ seems to be normal. It would then be possible to derive confidence intervals, for instance, or to define tests.
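As a quick numerical check (a sketch of mine, not in the original post), the empirical covariance matrix S of the simulated estimators can be compared with the asymptotic covariance matrix given above, divided by n,

> xi0=1/1.5; s0=1
> matrix(c((1+xi0)^2,s0*(1+xi0),
+          s0*(1+xi0),2*s0^2*(1+xi0)),2,2)/n
> S

The two matrices should be close, up to simulation noise.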

To go further, see any standard textbook on mathematical statistics, e.g. Casella & Berger (2002).

  • Delta Method

Another important property is the so-called delta method (we have seen in class on Monday that it can be obtained easily using a first-order Taylor expansion). The idea is that if $\widehat{\boldsymbol{\theta}}_n$ is asymptotically normal, and if $h$ is sufficiently smooth, then $h(\widehat{\boldsymbol{\theta}}_n)$ will also be asymptotically Gaussian. More precisely (see also the header of this blog),

$$\sqrt{n}\left(h(\widehat{\boldsymbol{\theta}}_n)-h(\boldsymbol{\theta}_0)\right)\rightarrow\mathcal{N}\left(0,\nabla h(\boldsymbol{\theta}_0)^{\top}\,I(\boldsymbol{\theta}_0)^{-1}\,\nabla h(\boldsymbol{\theta}_0)\right)$$

From this property, we can get the normality of $\widehat{\alpha}_n=\widehat{\xi}_n^{-1}$ (another parametrization used in extreme value models), or of any quantile, $\widehat{Q}_u=F^{-1}_{\widehat{\boldsymbol{\theta}}_n}(u)=h_u(\widehat{\xi}_n,\widehat{\sigma}_n)$. Let us run some simulations, one more time, to check that we actually have joint normality.

> library(evir)
> n=200
> param=riskm=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ xihat=param[s,1]
+ shat=param[s,2]
+ q=shat * (.01^(-xihat) - 1)/xihat       # quantile at the 99% level
+ tvar=q+(shat + xihat * q)/(1 - xihat)   # associated expected shortfall (not used below)
+ riskm[s,]=c(1/xihat,q)                  # store alpha=1/xi and the quantile
+ }
> m=apply(riskm,2,mean)
> S=var(riskm)
> library(mnormt)
> x=seq(min(riskm[,1])-.05,max(riskm[,1])+.05,length=101)
> y=seq(min(riskm[,2])-.05,max(riskm[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> image(x,y,t(z),col=COL)
> points(riskm)

As we can see below, with samples of size 200, we cannot use this asymptotic result: it looks like we do not have enough data. But if we run the same code with

> n=5000

we get the joint normality of $\widehat{\alpha}_n$ and $\widehat{Q}_n(u)$. This is what we can get from this result, called the delta method in statistical textbooks. See again Casella & Berger (2002) for more details.
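As an illustration of the computation (my own sketch, assuming the simulation above has been re-run with n=5000), the delta method gives the asymptotic variance of $\widehat{\alpha}_n=\widehat{\xi}_n^{-1}$ directly from that of $\widehat{\xi}_n$, since $h(\xi)=1/\xi$ has derivative $-1/\xi^2$,

> xi0=1/1.5
> v_xi=(1+xi0)^2/n       # asymptotic variance of the estimator of xi
> v_alpha=v_xi/xi0^4     # delta method: h'(xi)^2 times the variance of xi-hat
> sqrt(v_alpha)          # asymptotic standard error of alpha-hat
> sd(riskm[,1])          # to be compared with the simulated one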

  • Profile Likelihood

Another interesting tool is the concept of profile likelihood. It is of particular interest here since the main parameter of interest is the tail index $\xi$, $\sigma$ being some kind of auxiliary parameter. See Venzon & Moolgavkar (1988) for more details. Here, we will plot

$$\xi\mapsto\log\mathcal{L}_p(\xi)=\max_{\sigma}\,\log\mathcal{L}(\xi,\sigma)$$

But more generally, it is possible to consider

$$\log\mathcal{L}_p(\boldsymbol{\psi})=\max_{\boldsymbol{\lambda}}\,\log\mathcal{L}(\boldsymbol{\psi},\boldsymbol{\lambda})$$

where $\boldsymbol{\psi}$ is the set of parameters of interest (and $\boldsymbol{\lambda}$ the remaining, nuisance, parameters, with $\boldsymbol{\theta}=(\boldsymbol{\psi},\boldsymbol{\lambda})$). Then (under standard suitable conditions) we can prove that

$$2\left[\log\mathcal{L}_p(\widehat{\boldsymbol{\psi}})-\log\mathcal{L}_p(\boldsymbol{\psi})\right]\overset{\mathcal{L}}{\longrightarrow}\chi^2\big(\dim(\boldsymbol{\psi})\big)$$

which can be used to derive confidence intervals. In the GPD case, for each $\xi$, we have to find an optimal $\sigma^\star(\xi)$, and we compute the profile likelihood $\mathcal{L}(\xi,\sigma^\star(\xi))$. Then we can compute the maximum of this profile likelihood. This two-stage optimization is, numerically, not always exactly equivalent to the (global) maximization of the likelihood, as computed below

>  n=500
>  set.seed(1)
>  x=rgpd(n,xi=1/1.5,beta=1)
>  loglikelihood=function(xi,beta){
+  sum(log(dgpd(x,xi,mu=0,beta))) }
>  XIV=(1:300)/100;L=rep(NA,300)
>  for(i in 1:300){
+  XI=XIV[i]
+  profilelikelihood=function(beta){
+  -loglikelihood(XI,beta) }
+  L[i]=-optim(par=1,fn=profilelikelihood)$value }
>  plot(XIV,L,type="l")
>  XIV[which.max(L)]
[1] 0.67
>  gpd(x,0)$par.ests
       xi      beta 
0.6730145 0.9725483

We are not far away. Actually, if we want to compute the maximum of the profile likelihood (and not only compute the values of the profile likelihood on a grid, as before), we use

>  PL=function(XI){
+  profilelikelihood=function(beta){
+  -loglikelihood(XI,beta) }
+  return(optim(par=1,fn=profilelikelihood)$value)}
>  (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6731025

$objective
[1] 822.5574

Observe that, indeed, we are not far away from the maximum likelihood estimator of $\xi$ (I believe the difference is mainly a computational issue here, and that the two are actually the same; I’d be glad to hear about cases where the maximum of the profile likelihood is not the same as the maximum of the likelihood). The interesting point is that we can use this technique to compute a confidence interval, and even visualize it on a graph

>  up=OPT$objective
>  abline(h=-up)
>  abline(h=-up-qchisq(p=.95,df=1)/2,col="red")
>  I=which(L>=-up-qchisq(p=.95,df=1)/2)
>  lines(XIV[I],rep(-up-qchisq(p=.95,df=1)/2,length(I)),
+  lwd=5,col="red")
>  abline(v=range(XIV[I]),lty=2,col="red")

The vertical lines are the lower and the upper bound of a 95% confidence interval for the parameter $\xi$.
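Note that the bounds above are simply read off the grid XIV; one can also solve for the exact crossing points with uniroot (a small sketch of mine, reusing the PL function and OPT computed above; the search intervals are rough guesses),

>  borne=OPT$objective+qchisq(p=.95,df=1)/2
>  (xi_low=uniroot(function(z) PL(z)-borne,c(.05,OPT$minimum))$root)
>  (xi_up=uniroot(function(z) PL(z)-borne,c(OPT$minimum,3))$root)

exactly as was done with the Gamma distribution earlier.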

To go further, see Murphy, S.A & van der Vaart, A.W. (2000). On Profile Likelihood.

a short word on profile likelihood

Profile likelihood is an interesting theory to visualize and compute confidence intervals for estimators (see e.g. Venzon & Moolgavkar (1988)). As we will use it, we will plot

$$\xi\mapsto\log\mathcal{L}_p(\xi)=\max_{\sigma}\,\log\mathcal{L}(\xi,\sigma)$$

But more generally, it is possible to consider

$$\log\mathcal{L}_p(\boldsymbol{\psi})=\max_{\boldsymbol{\lambda}}\,\log\mathcal{L}(\boldsymbol{\psi},\boldsymbol{\lambda})$$

where $\boldsymbol{\psi}$ is the set of parameters of interest (and $\boldsymbol{\lambda}$ the nuisance parameters). Then (under standard suitable conditions)

$$2\left[\log\mathcal{L}_p(\widehat{\boldsymbol{\psi}})-\log\mathcal{L}_p(\boldsymbol{\psi})\right]\overset{\mathcal{L}}{\longrightarrow}\chi^2\big(\dim(\boldsymbol{\psi})\big)$$

which can be used to derive confidence intervals.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> u=5

The function to draw the profile likelihood for the tail index parameter is then

> Y=X[X>u]-u
> loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
> XIV=(1:300)/100;L=rep(NA,300)
> for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
> plot(XIV,L,type="l")

It is possible to use that profile likelihood function to derive a confidence interval,

> PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6315989

$objective
[1] 754.1115
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1)/2,col="red")
> I=which(L>=-up-qchisq(p=.95,df=1)/2)
> lines(XIV[I],rep(-up-qchisq(p=.95,df=1)/2,length(I)),
+ lwd=5,col="red")
> abline(v=range(XIV[I]),lty=2,col="red")

The same graph can be obtained directly with the gpd.profxi function from the ismev package,

> library(ismev)
> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)