This week, in the MAT8595 course, we will start the section on inference for extreme values. To start with something simple, we will use maximum likelihood techniques on a Generalized Pareto Distribution (we saw the Pickands–Balkema–de Haan theorem in class on Monday).
- Maximum Likelihood Estimation
In the context of parametric models, the standard technique is to consider the maximum of the likelihood (or of the log-likelihood). Let $\boldsymbol{\theta}$ denote the parameter (with $\boldsymbol{\theta}\in\Theta\subset\mathbb{R}^d$). Given some (standard) technical assumptions, such as $\boldsymbol{\theta}_0\notin\partial\Theta$, or smoothness of the likelihood on some neighbourhood of $\boldsymbol{\theta}_0$, then
$$\sqrt{n}\left(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta}_0\right)\xrightarrow{\mathcal{L}}\mathcal{N}\left(\boldsymbol{0},I(\boldsymbol{\theta}_0)^{-1}\right)$$
where $I(\boldsymbol{\theta}_0)$ denotes the Fisher information matrix (see any textbook used in mathematical statistics courses). Consider here some i.i.d. sample $X_1,\dots,X_n$ from a Generalized Pareto Distribution, with parameter $\boldsymbol{\theta}=(\xi,\sigma)$, so that
$$F_{\boldsymbol{\theta}}(x)=\begin{cases}1-\left(1+\dfrac{\xi x}{\sigma}\right)^{-1/\xi} & \text{if }\xi\neq 0\\[4pt] 1-e^{-x/\sigma} & \text{if }\xi=0\end{cases}$$
If we solve (numerically) the first order condition of the maximum likelihood, we get an estimator $\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma}_n)$ which satisfies this asymptotic normality (here, provided $\xi_0>-1/2$),
$$\sqrt{n}\left(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta}_0\right)\xrightarrow{\mathcal{L}}\mathcal{N}\left(\boldsymbol{0},I(\boldsymbol{\theta}_0)^{-1}\right)$$
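For instance, this numerical optimization can be done directly with optim, using the inverse of the Hessian of the negative log-likelihood (evaluated at the optimum) as an estimate of the variance matrix. A minimal sketch, assuming the dgpd density from the evir package; the starting values c(.5,1) are arbitrary, and the crude guard only keeps the search away from invalid parameter values in this positive-$\xi$ example,
> library(evir)
> set.seed(123)
> x=rgpd(200,xi=1/1.5,beta=1)
> negloglik=function(theta){
+ if(theta[1]<=0 | theta[2]<=0) return(1e10)   # stay in the admissible region (xi>0, sigma>0 here)
+ -sum(log(dgpd(x,xi=theta[1],mu=0,beta=theta[2]))) }
> opt=optim(par=c(.5,1),fn=negloglik,hessian=TRUE)
> opt$par             # estimates of (xi, sigma)
> solve(opt$hessian)  # estimated variance matrix of the estimator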
The idea behind this asymptotic normality is the following: if the true distribution of the sample is a GPD with parameter $\boldsymbol{\theta}_0$, then, if $n$ is large enough, $\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma}_n)$ will have (approximately) a joint normal distribution. So if we generate a lot of samples (each sufficiently large, say 200 observations), then the scatterplot of the estimators should look like the scatterplot of a Gaussian distribution,
> library(evir)
> n=200
> param=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ }
> m=apply(param,2,mean)
> S=var(param)
> library(mnormt)
> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=101)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> COL=rev(heat.colors(100))
> image(x,y,t(z),col=COL)
> points(param)
and to get a 3d representation
> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=31)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=31)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> persp(x,y,t(z),shade=TRUE,col="green",theta=-30,phi=20,ticktype="detailed",
+ xlab="xi",ylab="sigma")
With 200 observations, if the true underlying distribution is a GPD, then, indeed, the joint distribution of $(\widehat{\xi}_n,\widehat{\sigma}_n)$ seems to be normal. This is interesting if we want to generate confidence intervals, for instance, or to define some tests.
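For instance, a Wald-type 95% confidence interval for each parameter can be read directly off a single fit; a minimal sketch below, assuming (as documented in evir) that the object returned by gpd() also contains the estimated standard errors in par.ses, with the same names as par.ests,
> library(evir)
> set.seed(1)
> x=rgpd(200,xi=1/1.5,beta=1)
> fit=gpd(x,0)
> fit$par.ests["xi"]+c(-1,1)*qnorm(.975)*fit$par.ses["xi"]      # 95% CI for xi
> fit$par.ests["beta"]+c(-1,1)*qnorm(.975)*fit$par.ses["beta"]  # 95% CI for sigma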
To go further, see any standard textbook in mathematical statistics, e.g. Casella & Berger (2002).
Another important property is the so-called delta-method (we saw in class on Monday that it can be obtained easily using a first order Taylor expansion). The idea is that if $\widehat{\boldsymbol{\theta}}_n$ is asymptotically normal, and if $h$ is sufficiently smooth, then $h(\widehat{\boldsymbol{\theta}}_n)$ will also be asymptotically Gaussian. More precisely (see also the header of this blog),
$$\sqrt{n}\left(h(\widehat{\boldsymbol{\theta}}_n)-h(\boldsymbol{\theta}_0)\right)\xrightarrow{\mathcal{L}}\mathcal{N}\left(\boldsymbol{0},\nabla h(\boldsymbol{\theta}_0)^\top\, I(\boldsymbol{\theta}_0)^{-1}\,\nabla h(\boldsymbol{\theta}_0)\right)$$
From this property, we can get the asymptotic normality of $\widehat{\alpha}_n=1/\widehat{\xi}_n$ (which is another parametrization used in extreme value models), or of any quantile $\widehat{Q}_n(p)$. Let us run some simulations, one more time, to check that we actually have joint normality.
> library(evir)
> n=200
> param=riskm=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ xihat=param[s,1]
+ shat=param[s,2]
+ q=shat * (.01^(-xihat) - 1)/xihat
+ tvar=q+(shat + xihat * q)/(1 - xihat)
+ riskm[s,]=c(1/xihat,q)
+ }
> m=apply(riskm,2,mean)
> S=var(riskm)
> library(mnormt)
> x=seq(min(riskm[,1])-.05,max(riskm[,1])+.05,length=101)
> y=seq(min(riskm[,2])-.05,max(riskm[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> image(x,y,t(z),col=COL)
> points(riskm)
As we can see, with samples of size 200, we cannot use this asymptotic result: it looks like we do not have enough data. But if we run the same code with
> n=5000
we get the joint normality of $\widehat{\alpha}_n$ and $\widehat{Q}_n(99\%)$. This is what we can obtain from this result, called the delta-method in statistical textbooks. See again Casella & Berger (2002) for more details.
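Instead of simulating, the delta-method can also be applied directly to a single fit: the approximate variance of $g(\widehat{\boldsymbol{\theta}}_n)$ is $\nabla g^\top\widehat{\Sigma}\,\nabla g$, where $\widehat{\Sigma}$ is the estimated variance matrix of $(\widehat{\xi}_n,\widehat{\sigma}_n)$. A minimal sketch for the 99% quantile, assuming the varcov component of the evir fit contains that matrix (in the order xi, beta), and using finite differences for the gradient,
> library(evir)
> set.seed(1)
> x=rgpd(5000,xi=1/1.5,beta=1)
> fit=gpd(x,0)
> g=function(theta) theta[2]*(.01^(-theta[1])-1)/theta[1]   # 99% quantile of the GPD
> theta=as.numeric(fit$par.ests)                            # (xi, sigma)
> eps=1e-5
> gr=c((g(theta+c(eps,0))-g(theta-c(eps,0)))/(2*eps),
+      (g(theta+c(0,eps))-g(theta-c(0,eps)))/(2*eps))       # numerical gradient of g
> se=sqrt(drop(t(gr)%*%fit$varcov%*%gr))                    # delta-method standard error
> g(theta)+c(-1,1)*qnorm(.975)*se                           # 95% CI for the 99% quantile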
Another interesting tool is the concept of the profile likelihood. This is interesting here since the main parameter of interest is the tail index $\xi$, $\sigma$ being here some kind of auxiliary (nuisance) parameter. See Venzon & Moolgavkar (1988) for more details. Here, we will plot
$$\xi\mapsto\ell_p(\xi)=\max_{\sigma}\left\{\log\mathcal{L}(\xi,\sigma;\boldsymbol{x})\right\}$$
But more generally, it is possible to consider
$$\boldsymbol{\theta}_1\mapsto\ell_p(\boldsymbol{\theta}_1)=\max_{\boldsymbol{\theta}_2}\left\{\log\mathcal{L}(\boldsymbol{\theta}_1,\boldsymbol{\theta}_2;\boldsymbol{x})\right\}$$
where $\boldsymbol{\theta}_1$ is the set of interesting parameters (and $\boldsymbol{\theta}_2$ the nuisance ones). Then (under standard suitable conditions) we can prove that
$$2\left(\log\mathcal{L}(\widehat{\boldsymbol{\theta}};\boldsymbol{x})-\ell_p(\boldsymbol{\theta}_1)\right)\xrightarrow{\mathcal{L}}\chi^2_{\dim(\boldsymbol{\theta}_1)}$$
(at the true value of $\boldsymbol{\theta}_1$), which can be used to derive confidence intervals. In the GPD case, for each $\xi$, we have to find an optimal $\sigma^\star(\xi)$, and the (profile) likelihood is then $\ell_p(\xi)=\log\mathcal{L}(\xi,\sigma^\star(\xi);\boldsymbol{x})$. And we can compute the maximum of this profile likelihood. This two-stage optimization is, in general, not numerically identical to the (global) maximization of the likelihood, as computed below
> n=500
> set.seed(1)
> x=rgpd(n,xi=1/1.5,beta=1)
> loglikelihood=function(xi,beta){
+ sum(log(dgpd(x,xi,mu=0,beta))) }
> XIV=(1:300)/100;L=rep(NA,300)
> for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
> plot(XIV,L,type="l")
> XIV[which.max(L)]
[1] 0.67
> gpd(x,0)$par.ests
xi beta
0.6730145 0.9725483
We are not far away. Actually, if we want to compute the maximum of the profile likelihood (and not only compute the values of the profile likelihood on a grid, as before), we use
> PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6731025
$objective
[1] 822.5574
Observe that, indeed, we are not far away from the maximum likelihood estimator of $\xi$ (I believe that it is mainly a computational issue here, and that the two are similar here… actually, I would be glad to hear about cases where the maximum of the profile likelihood is not the same as the maximum of the likelihood). The interesting point is that we can use this technique to compute a confidence interval, and even visualize it on a graph,
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1)/2,col="red")
> I=which(L>=-up-qchisq(p=.95,df=1)/2)
> lines(XIV[I],rep(-up-qchisq(p=.95,df=1)/2,length(I)),
+ lwd=5,col="red")
> abline(v=range(XIV[I]),lty=2,col="red")
The vertical lines are the lower and the upper bounds of the 95% confidence interval for the parameter $\xi$.
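To get the numerical bounds (rather than reading them off the graph),
> range(XIV[I])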
To go further, see Murphy, S.A. & van der Vaart, A.W. (2000). On Profile Likelihood. Journal of the American Statistical Association, 95, 449–465.