# a short word on profile likelihood

Profile likelihood is an interesting technique to visualize and compute confidence intervals for estimators (see e.g. Venzon & Moolgavkar (1988)). As we will use it here, we will plot the profile log-likelihood of the tail index,

$$\xi\mapsto\log\mathcal{L}_p(\xi)=\max_{\beta}\left\{\log\mathcal{L}(\xi,\beta)\right\}$$

But more generally, it is possible to consider

$$\theta\mapsto\log\mathcal{L}_p(\theta)=\max_{\boldsymbol{\nu}}\left\{\log\mathcal{L}(\theta,\boldsymbol{\nu})\right\}$$

where the full parameter is $(\theta,\boldsymbol{\nu})$, $\theta$ being the (scalar) parameter of interest and $\boldsymbol{\nu}$ the nuisance parameter(s). Then (under standard suitable conditions)

$$2\left[\log\mathcal{L}_p(\widehat{\theta})-\log\mathcal{L}_p(\theta)\right]\overset{\mathcal{L}}{\longrightarrow}\chi^2(1)$$

which can be used to derive confidence intervals.
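More precisely, this asymptotic result gives a cutoff on the profile log-likelihood: a 95% confidence region for $\theta$ is the set of values whose profile log-likelihood stays within half the $\chi^2(1)$ quantile of its maximum,

$$\left\{\theta\;:\;\log\mathcal{L}_p(\theta)\;\geq\;\log\mathcal{L}_p(\widehat{\theta})-\frac{1}{2}\,q_{\chi^2(1)}(95\%)\right\}$$

which is exactly the threshold used below with `qchisq(p=.95,df=1)/2`.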

```
> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> u=5
```

The code to compute and plot the profile likelihood for the tail index parameter $\xi$ is then

```
> Y=X[X>u]-u
> loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
> XIV=(1:300)/100; L=rep(NA,300)
> for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
> plot(XIV,L,type="l")
```
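A rough value of the maximizer can already be read off that grid (it is refined with `optimize()` in the next block):

```
> XIV[which.max(L)]   # grid value of xi maximizing the profile log-likelihood
> max(L)              # corresponding maximum of the profile log-likelihood
```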

It is possible to use that profile likelihood function to derive a confidence interval,

```
> PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6315989

$objective
[1] 754.1115
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1)/2,col="red")
> I=which(L>=-up-qchisq(p=.95,df=1)/2)
> lines(XIV[I],rep(-up-qchisq(p=.95,df=1)/2,length(I)),
+ lwd=5,col="red")
> abline(v=range(XIV[I]),lty=2,col="red")
```

The same graph can be obtained directly with the following code, from the ismev library,

```
> library(ismev)
> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)
```
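For comparison, one might also look at a Wald-type interval based on the asymptotic standard error of $\widehat{\xi}$; a minimal sketch using the `gpd()` fit from evir (assuming its output exposes `par.ests` and `par.ses`, as in current versions of the package):

```
> fit=gpd(X,threshold=u)            # GPD fit to exceedances above u
> xi.hat=fit$par.ests["xi"]         # point estimate of the tail index
> xi.se=fit$par.ses["xi"]           # asymptotic standard error
> xi.hat+c(-1,1)*qnorm(.975)*xi.se  # 95% Wald interval, to compare with the profile one
```

Unlike the profile-likelihood interval, the Wald interval is symmetric around the estimate by construction.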

# Does the Student-based confidence interval have any interest in practice?

Friday, in the statistics course, we started the section on confidence intervals, and as always, I got a bit confused with the degrees of freedom of the Student t distribution (should it be $n$ or $n-1$?) and with the empirical variance (should we consider the one where we divide by $n$ or the one with $n-1$?).
And each time I start to get confused, the students obviously see it, and start asking tricky questions… So let us make it clear now. The correct formula is the following: let

$$\bar{x}=\frac{1}{n}\sum_{i=1}^n x_i\qquad\text{and}\qquad s^2=\frac{1}{n-1}\sum_{i=1}^n \left(x_i-\bar{x}\right)^2$$

then

$$\left[\bar{x}-t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}}\;;\;\bar{x}+t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}}\right]$$

is a $1-\alpha$ confidence interval for the mean of a Gaussian i.i.d. sample.
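As a quick sanity check of that formula, the hand-computed bounds can be compared with what R's `t.test()` returns on a small Gaussian sample (a minimal illustrative sketch):

```
x=rnorm(20)
n=length(x)
# hand-computed 95% confidence interval (var() divides by n-1)
mean(x)+c(-1,1)*qt(.975,df=n-1)*sqrt(var(x))/sqrt(n)
# should match the interval returned by t.test
t.test(x,conf.level=.95)$conf.int
```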
But the important thing is neither the $n-1$ that appears as the degrees of freedom, nor the $n-1$ that appears in the estimation of the standard error. As always with mathematical results, the most important part is the one not mentioned here: observations have to be i.i.d. and normally distributed. And not “almost” normally distributed…
Consider the following case: we have $n=20$ observations that are almost normally distributed. Here, I draw them from a Student t distribution with 3 degrees of freedom,

`n=20; X=rt(n,df=3)`

An Anderson–Darling normality test (at the 5% level) accepts the normal distribution in about 2 cases out of 3.

```
library(nortest)   # for the Anderson-Darling normality test ad.test()
pv=rep(NA,10000)
for(s in 1:10000){
X=rt(n,df=3)
pv[s]=ad.test(X)$p.value
}
mean(pv>.05)
[1] 0.6799
```

With a true normal distribution it would be in 95% of the cases, so in some sense, I can pretend that I generate almost normal samples.
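As a quick check of that 95% baseline (same simulation size, but truly Gaussian samples this time):

```
pv0=rep(NA,10000)
for(s in 1:10000){
pv0[s]=ad.test(rnorm(n))$p.value
}
mean(pv0>.05)   # should be close to 0.95
```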
Back to those almost normal samples: we can look at the lower bound of the 90% confidence interval for the mean, with three different formulas,

$$\bar{x}-t_{n-1,95\%}\,\frac{s}{\sqrt{n}}$$

i.e. the correct one, or the one where I consider $n$ degrees of freedom instead of $n-1$,

$$\bar{x}-t_{n,95\%}\,\frac{s}{\sqrt{n}}$$

and the one where we consider a Gaussian quantile instead of a Student t one,

$$\bar{x}-z_{95\%}\,\frac{s}{\sqrt{n}}$$

(and one might also think of looking at the biased estimator of the variance, where we divide by $n$ instead of $n-1$).
```
m=IC1=IC2=IC3=rep(NA,10000)
for(s in 1:10000){
X=rt(n,df=3)
m[s]=mean(X)
sd=sqrt(var(X))
IC1[s]=m[s]-qt(.95,df=n-1)*sd/sqrt(n)
IC2[s]=m[s]-qt(.95,df=n)*sd/sqrt(n)
IC3[s]=m[s]-qnorm(.95)*sd/sqrt(n)
}
```

On the graph below are plotted the distributions of the values obtained as the lower bound of the 90% confidence interval,

(the curves with $n$ and $n-1$ degrees of freedom for the quantiles are the same, here).
The dotted vertical line is the true lower bound of the 90%-confidence interval, given the true distribution (which was not a Gaussian one).
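A minimal sketch of how such a graph could be drawn from the simulations above (colors and legend are arbitrary choices; the dotted line is placed at the empirical 5% quantile of the simulated means, used here as a benchmark):

```
plot(density(IC1),col="blue",main="",xlab="lower bound of the 90% CI")
lines(density(IC2),col="purple")   # n degrees of freedom
lines(density(IC3),col="red")      # Gaussian quantile
abline(v=quantile(m,.05),lty=2)    # benchmark (see the last line of the post)
legend("topright",c("Student, n-1 df","Student, n df","Gaussian"),
col=c("blue","purple","red"),lty=1)
```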
If I get back to the standard procedure in any statistical textbook, since the sample is almost Gaussian, the lower bound of the confidence interval should be obtained with the Student t quantile, i.e., on average,

```
mean(IC1)
[1] -0.605381
```

With the Gaussian quantile instead, the average lower bound is

```
mean(IC3)
```

which can be compared with the empirical 5% quantile of the simulated means,

```
quantile(m,.05)
```