An alternative (to profile likelihood techniques) to derive confidence intervals is to use the delta method. Consider an estimator $\widehat{\theta}_n$ such that
$$\sqrt{n}\left(\widehat{\theta}_n-\theta\right)\xrightarrow{\mathcal{L}}\mathcal{N}(0,\sigma^2),$$
then for any differentiable transformation $h$,
$$\sqrt{n}\left(h(\widehat{\theta}_n)-h(\theta)\right)\xrightarrow{\mathcal{L}}\mathcal{N}\left(0,h'(\theta)^2\,\sigma^2\right).$$
The same result holds in the multivariate case, with $h'(\theta)^2\sigma^2$ replaced by $\nabla h(\theta)^\top\,\Sigma\,\nabla h(\theta)$, where $\Sigma$ is the asymptotic covariance matrix of $\widehat{\theta}_n$.
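For instance (a minimal sketch, with arbitrary rate, sample size and seed), the delta method gives a confidence interval for the log of the mean of an exponential sample: here $h(t)=\log(t)$, so $h'(t)=1/t$, and since the exponential distribution has $\sigma=\mu$, the half-width of the interval should be close to $1.96/\sqrt{n}$.

> set.seed(1)
> ns=1e4
> x=rexp(ns,rate=2)
> m=mean(x)                      # estimator of E[X]=1/2
> v=var(x)/ns                    # estimated variance of the sample mean
> dh=1/m                         # h'(m) for h=log
> IC.delta=c(log(m)-1.96*abs(dh)*sqrt(v),
+            log(m)+1.96*abs(dh)*sqrt(v))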
The proof of that result is based on Taylor's expansion (see here or there for more details on the theory, or even on this blog – here in French, or there in English – for some R code). This can be used to derive an asymptotic confidence interval for a quantile. Consider the following dataset,
> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
It is possible to fit a Generalized Pareto distribution to observations above a given threshold $u$, i.e. to use the approximation
$$\mathbb{P}(X-u\leq x\mid X>u)\approx G_{\xi,\beta}(x)=1-\left(1+\xi\frac{x}{\beta}\right)^{-1/\xi}.$$
In that case, if $N_u$ observations (out of a sample of size $n$) exceed the threshold $u$, the estimator of the quantile $Q_p$ is
$$\widehat{Q}_p=u+\frac{\widehat{\beta}}{\widehat{\xi}}\left[\left(\frac{n}{N_u}(1-p)\right)^{-\widehat{\xi}}-1\right],$$
i.e. $\widehat{Q}_p=h(\widehat{\xi},\widehat{\beta})$. Then
$$\frac{\partial h}{\partial\xi}=-\frac{\beta}{\xi^2}\left[\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1\right]-\frac{\beta}{\xi}\left(\frac{n}{N_u}(1-p)\right)^{-\xi}\log\left(\frac{n}{N_u}(1-p)\right),$$
while
$$\frac{\partial h}{\partial\beta}=\frac{1}{\xi}\left[\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1\right],$$
so it is now possible to implement the delta method to derive the asymptotic variance of the quantile estimator, and also (asymptotic) confidence intervals.
> u=5
> GPD=gpd(X,u)
> theta=GPD$par.ests
> sigma=GPD$varcov
> k=GPD$n.exceed
> n=length(X)
> p=.975
> Q=u+theta[2]/theta[1]*((n*(1-p)/k)^(-theta[1])-1)
> nabla=c(-theta[2]/theta[1]^2*((1-p)^(-theta[1])-1)-
+ theta[2]/theta[1]*(1-p)^(-theta[1]*log(1-p)),
+ 1/theta[1]*((1-p)^(-theta[1])-1))
> variance=t(nabla)%*%sigma%*%nabla
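As a quick sanity check on the closed-form gradient, one can compare nabla with a finite-difference approximation of $\nabla h$ at the estimated parameters (the function h and the step eps below are only introduced here for that check):

> h=function(xi,beta) u+beta/xi*((n*(1-p)/k)^(-xi)-1)
> eps=1e-6
> nabla.num=c((h(theta[1]+eps,theta[2])-h(theta[1]-eps,theta[2]))/(2*eps),
+             (h(theta[1],theta[2]+eps)-h(theta[1],theta[2]-eps))/(2*eps))
> variance.num=t(nabla.num)%*%sigma%*%nabla.num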
Based on the asymptotic normality, it is possible to derive confidence intervals, and to compare them with the (profile likelihood) interval obtained in R,
> c(Q-1.96/sqrt(k)*sqrt(variance),
+ Q+1.96/sqrt(k)*sqrt(variance))
[1] 13.11562 16.82852
> tailplot(gpd(X,5))
> gpd.q(tailplot(gpd(X,5)), .975, ci.type =
+ "likelihood", ci.p = 0.95, like.num = 50)
Lower CI Estimate Upper CI
13.33329 14.97207 17.18094
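For completeness, note that gpd.q can also return a Wald-type interval (ci.type = "wald"), which relies on the same asymptotic-normality argument as the delta method above, and can be compared with the two intervals obtained previously,

> gpd.q(tailplot(gpd(X,5)), .975, ci.type =
+ "wald", ci.p = 0.95)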