Category Archives: MAT8886

Bounding sums of random variables, part 2

It is possible to go further, much further actually, on bounding sums of random variables (mentioned in the previous post). For instance, while everything in that previous post was defined for distributions on \mathbb{R}_+, the bounds can be extended to distributions on \mathbb{R}, especially if we deal with quantiles. Everything we've seen remains valid. Consider for instance two \mathcal{N}(0,1) distributions. Using the previous code, it is possible to compute bounds for the quantiles of the sum of two Gaussian variates. And one has to remember that those bounds are sharp.

> Finv=function(u) qnorm(u,0,1)
> Ginv=function(u) qnorm(u,0,1)
> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }
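To visualize those bounds, a sketch (with Gaussian margins, the bounds at the very ends of the grid are infinite, hence the fixed ylim; the grid of probabilities x=u is used again below),

> x=u=(1:(n-1))/n
> plot(x,Qinf,ylim=c(-5,5),type="s",col="red")
> lines(x,Qsup,type="s",col="red")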

Actually, it is possible to compare here with two simple cases: the independent case, where the sum has a \mathcal{N}(0,2) distribution, and the comonotonic case, where the sum has a \mathcal{N}(0,4) distribution.

>  lines(x,qnorm(x,sd=sqrt(2)),col="blue",lty=2)
>  lines(x,qnorm(x,sd=2),col="blue",lwd=2)

On the graph below, the comonotonic case (usually considered as the worst case scenario) is the plain blue line (with here an animation to illustrate the convergence of the numerical algorithm)

Below that (thick) blue line, risks are sub-additive for the Value-at-Risk, i.e.

VaR[X+Y]\leq VaR[X]+VaR[Y]

but above, risks are super-additive for the Value-at-Risk, i.e.

VaR[X+Y]\geq VaR[X]+VaR[Y]

(since for comonotonic variates, the quantile of the sum is the sum of quantiles). It is possible to visualize those two cases above: in green, the area where risks are super-additive, while the yellow area is where risks are sub-additive.

Recall that for a Gaussian random vector with correlation r, the quantile of the sum is the quantile of a centered Gaussian random variable with variance 2(1+r). Thus, on the graph below, we can visualize the cases that can be obtained with a Gaussian copula. Here, the yellow area can be obtained with a Gaussian copula, the upper and the lower bounds being respectively the comonotonic and the countermonotonic cases.

But the green area can also be obtained when we sum two Gaussian variables ! We just have to go outside the Gaussian world, and consider another copula.

Another point is that, in the previous post, C^- was the lower Fréchet-Hoeffding bound on the set of copulas. But all the previous results remain valid if C^- is a lower bound on the set of copulas of interest. Especially

\tau_{C^-,L}(F,G)\leq \sigma_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)

for all C such that C\geq C^-. For instance, if we assume that the copula should have positive dependence, i.e. C\geq C^\perp, then

\tau_{C^\perp,L}(F,G)\leq \sigma_{C,L}(F,G)\leq\rho_{C^\perp,L}(F,G)

which means we should have sharper bounds. Numerically, it is possible to compute those sharper bounds for quantiles. The lower bound becomes

\sup_{u\in[0,x]}\left\{F^{-1}(u)+G^{-1}\left(\frac{x-u}{1-u}\right)\right\}

while the upper bound is

\inf_{u\in[x,1]}\left\{F^{-1}(u)+G^{-1}\left(\frac{x}{u}\right)\right\}

(these are the quantities implemented in the code below).

Again, one can easily compute those quantities on a grid of the unit interval,

> Qinfind=Qsupind=rep(NA,n-1)
> for(i in 1:(n-1)){
+  J=1:(i)
+  Qinfind[i]=max(Finv(J/n)+Ginv((i-J)/n/(1-J/n)))
+  J=(i):(n-1)
+  Qsupind[i]=min(Finv(J/n)+Ginv(i/J))
+ }
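Those sharper bounds can be superimposed on the previous graph with something like (a sketch, the color being arbitrary),

> lines(x,Qinfind,type="s",col="darkgreen")
> lines(x,Qsupind,type="s",col="darkgreen")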

We get the graph below (the blue area is here to illustrate how much sharper those bounds become under the assumption of positive dependence; that area can be attained only by copulas exhibiting non-positive dependence).

For high quantiles, the upper bound is rather close to the one we had before, since the worst case is probably obtained when we do have positive correlation. But it strongly impacts the lower bound. For instance, it now becomes impossible to have a negative quantile when the probability level exceeds 75%, if we do have positive dependence…

> Qinfind[u==.75]
[1] 0

Bounding sums of random variables, part 1

For the last course MAT8886 of this (long) winter session, on copulas (and extremes), we will discuss risk aggregation. The course will mainly be about the problem of bounding the distribution (or some risk measure, say the Value-at-Risk) of the sum of two random variables with given marginal distributions. For instance, if we have two Gaussian risks, what could be the worst-case scenario for the 99% quantile of the sum? Note that I mention implications in terms of risk management, but of course, those questions are extremely important in terms of statistical inference, see e.g. Fan & Park (2006).

This problem is sometimes related to a question asked by Kolmogorov almost one hundred years ago, as mentioned in Makarov (1981). One year later, Rüschendorf (1982) also suggested a proof of bounds calculation. Here, we focus on dimension 2. As usual, it is the simple case. But as mentioned recently, in Kreinovich & Ferson (2005), in dimension 3 (or higher), “computing the best-possible bounds for arbitrary n is an NP-hard (computationally intractable) problem“. So let us focus on the case where we sum (only) two random variables (for those interested in higher dimensions, Puccetti & Rüschendorf (2012) provided interesting results for a dual version of those optimal bounds).

Let \Delta denote the set of univariate continuous distribution functions, left-continuous, on \mathbb{R}, and \Delta^+ the set of distributions on \mathbb{R}^+. Thus, F\in\Delta^+ if F\in\Delta and F(0)=0. Consider now two distributions F,G\in\Delta^+. In a very general setting, it is possible to consider operators on \Delta^+\times\Delta^+. Thus, let T:[0,1]\times[0,1]\rightarrow[0,1] denote an operator, increasing in each component, such that T(1,1)=1. And consider some function L:\mathbb{R}^+\times\mathbb{R}^+\rightarrow\mathbb{R}^+ assumed to be also increasing in each component (and continuous). For such functions T and L, define the following (general) operator, \tau_{T,L}(F,G), as

\tau_{T,L}(F,G)(x)=\sup_{L(u,v)=x}\{T(F(u),G(v))\}

One interesting case is obtained when T is a copula C. In that case,

\tau_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

and further, it is possible to write

\tau_{C,L}(F,G)(x)=\sup_{(u,v)\in L^{-1}(x)}\{C(F(u),G(v))\}

It is also possible to consider other (general) operators, e.g. based on the sum

\sigma_{C,L}(F,G)(x)=\int_{(u,v)\in L^{-1}(x)} dC(F(u),G(v))

or on the minimum,

\rho_{C,L}(F,G)(x)=\inf_{(u,v)\in L^{-1}(x)}\{C^\star(F(u),G(v))\}

where C^\star is the survival copula associated with C, i.e. C^\star(u,v)=u+v-C(u,v). Note that those operators can be used to define distribution functions, i.e.

\sigma_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

and similarly

\rho_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

All that seems too theoretical? An application can be the case of the sum, i.e. L(x,y)=x+y. In that case, \sigma_{C,+}(F,G) is the distribution of the sum of two random variables with marginal distributions F and G, and copula C. Thus, \sigma_{C^\perp,+}(F,G) is simply the convolution of the two distributions,

\sigma_{C^\perp,+}(F,G)(x)=\int_{u+v=x} dC^\perp(F(u),G(v))

The important result (that can be found in Chapter 7 of Schweizer and Sklar (1983)) is that, given an operator L, then, for any copula C, one can find a lower bound for \sigma_{C,L}(F,G),

\tau_{C^-,L}(F,G)\leq \tau_{C,L}(F,G)\leq\sigma_{C,L}(F,G)

as well as an upper bound

\sigma_{C,L}(F,G)\leq \rho_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)

Those inequalities come from the fact that C\geq C^- for all copulas C, where C^-(u,v)=\max\{u+v-1,0\} is itself a copula (the lower Fréchet-Hoeffding bound). Since this function is no longer a copula in higher dimension, one can easily imagine that deriving those bounds in higher dimension will be much more complicated…

In the case of the sum of two random variables, with marginal distributions F and G, bounds for the distribution of the sum, \mathbb{P}(X+Y\leq x), where X\sim F and Y\sim G, can be written

H^-(x)=\tau_{C^-,+}(F,G)(x)=\sup_{u+v=x}\{\max\{F(u)+G(v)-1,0\}\}

for the lower bound, and

H^+(x)=\rho_{C^-,+}(F,G)(x)=\inf_{u+v=x}\{\min\{F(u)+G(v),1\}\}

for the upper bound. And those bounds are sharp, in the sense that, for all t\in(0,1), there is a copula C_t such that

\tau_{C_t,+}(F,G)(x)=\tau_{C^-,+}(F,G)(x)=t

and there is (another) copula C_t such that

\sigma_{C_t,+}(F,G)(x)=\tau_{C^-,+}(F,G)(x)=t

Thus, using those results, it is possible to bound cumulative distribution functions. But actually, all that can also be done on quantiles (see Frank, Nelsen & Schweizer (1987)). For all F\in\Delta^+, let F^{-1} denote its generalized inverse, left continuous, and let \nabla^+ denote the set of those quantile functions. Define then the dual versions of our operators,

\tau^{-1}_{T,L}(F^{-1},G^{-1})(x)=\inf_{(u,v)\in T^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}

and similarly for \rho^{-1}_{T,L}(F^{-1},G^{-1}).
Those definitions are really dual versions of the previous ones, in the sense that \tau^{-1}_{T,L}(F^{-1},G^{-1})=[\tau_{T,L}(F,G)]^{-1} and \rho^{-1}_{T,L}(F^{-1},G^{-1})=[\rho_{T,L}(F,G)]^{-1}.

Note that if we focus on sums of bivariate distributions, the lower bound for the quantile of the sum is

\tau^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\inf_{\max\{u+v-1,0\}=x}\{F^{-1}(u)+G^{-1}(v)\}

while the upper bound is

\rho^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\sup_{\min\{u+v,1\}=x}\{F^{-1}(u)+G^{-1}(v)\}

A great thing is that it should not be too difficult to compute those quantities numerically. Perhaps a little bit harder for cumulative distribution functions, since they are not defined on a bounded support. But still, the goal might simply be to plot those bounds on [0,10], for instance. The code is the following, for the sum of two \mathcal{LN}(0,1) lognormal distributions.

> F=function(x) plnorm(x,0,1)
> G=function(x) plnorm(x,0,1)
> n=100
> X=seq(0,10,by=.05)
> Hinf=Hsup=rep(NA,length(X))
> for(i in 1:length(X)){
+ x=X[i]
+ U=seq(0,x,by=1/n); V=x-U
+ Hinf[i]=max(pmax(F(U)+G(V)-1,0))
+ Hsup[i]=min(pmin(F(U)+G(V),1))}

If we plot those bounds, we obtain

> plot(X,Hinf,ylim=c(0,1),type="s",col="red")
> lines(X,Hsup,type="s",col="red")

But somehow, it is even simpler to work with quantiles, since they are defined on a bounded support. Quantiles are here

> Finv=function(u) qlnorm(u,0,1)
> Ginv=function(u) qlnorm(u,0,1)

The idea will be to consider a discretized version of the unit interval, as discussed in Williamson (1989) in a much more general setting. Again the idea is to compute, for instance,

\sup_{u\in[0,x]}\{F^{-1}(u)+G^{-1}(x-u)\}

The idea is to consider x=i/n and u=j/n, and the bound for the quantile function at point i/n is then

\sup_{j\in\{0,1,\cdots,i\}}\left\{F^{-1}\left(\frac{j}{n}\right)+G^{-1}\left(\frac{i-j}{n}\right)\right\}

The code to compute those bounds, for a given n, is here

> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }

Here is what we get (several values of n were considered, so that we can visualize the convergence of that numerical algorithm),

Here, we have a simple code to visualize bounds for quantiles for the sum of two risks. But it is possible to go further…

Maximum likelihood estimates for multivariate distributions

Consider our loss-ALAE dataset, and – as in Frees & Valdez (1998) – let us fit a parametric model, in order to price a reinsurance treaty. The dataset is the following,

> library(evd)
> data(lossalae)
> Z=lossalae
> X=Z[,1];Y=Z[,2]

The first step can be to estimate marginal distributions, independently. Here, we consider lognormal distributions for both components,

> Fempx=function(x) mean(X<=x)
> Fx=Vectorize(Fempx)
> u=exp(seq(2,15,by=.05))
> plot(u,Fx(u),log="x",type="l",
+ xlab="loss (log scale)")
> Lx=function(px) -sum(log(Vectorize(dlnorm)(
+ X,px[1],px[2])))
> opx=optim(c(1,5),fn=Lx)
> opx$par
[1] 9.373679 1.637499
> lines(u,Vectorize(plnorm)(u,opx$par[1],
+ opx$par[2]),col="red")

The fit here is quite good,

For the second component, we do the same,

> Fempy=function(x) mean(Y<=x)
> Fy=Vectorize(Fempy)
> u=exp(seq(2,15,by=.05))
> plot(u,Fy(u),log="x",type="l",
+ xlab="ALAE (log scale)")
> Ly=function(px) -sum(log(Vectorize(dlnorm)(
+ Y,px[1],px[2])))
> opy=optim(c(1.5,10),fn=Ly)
> opy$par
[1] 8.522452 1.429645
> lines(u,Vectorize(plnorm)(u,opy$par[1],
+ opy$par[2]),col="blue")

It is not as good as the fit obtained on losses, but it is not that bad,

Now, consider a multivariate model, with Gumbel copula. We’ve seen before that it worked well. But this time, consider the maximum likelihood estimator globally.

> Cop=function(u,v,a) exp(-((-log(u))^a+
+ (-log(v))^a)^(1/a))
> phi=function(t,a) (-log(t))^a
> cop=function(u,v,a) Cop(u,v,a)*(phi(u,a)+
+ phi(v,a))^(1/a-2)*(
+ a-1+(phi(u,a)+phi(v,a))^(1/a))*(phi(u,a-1)*
+ phi(v,a-1))/(u*v)
> L=function(p) {-sum(log(Vectorize(dlnorm)(
+ X,p[1],p[2])))-
+ sum(log(Vectorize(dlnorm)(Y,p[3],p[4])))-
+ sum(log(Vectorize(cop)(plnorm(X,p[1],p[2]),
+ plnorm(Y,p[3],p[4]),p[5])))}
> opz=optim(c(1.5,10,1.5,10,1.5),fn=L)
> opz$par
[1] 9.377219 1.671410 8.524221 1.428552 1.468238

Marginal parameters are (slightly) different from the one obtained independently,

> c(opx$par,opy$par)
[1] 9.373679 1.637499 8.522452 1.429645
> opz$par[1:4]
[1] 9.377219 1.671410 8.524221 1.428552

And the parameter of Gumbel copula is close to the one obtained with heuristic methods in class.
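As a quick check (a sketch), the heuristic estimate based on the inversion of Kendall's tau for the Gumbel copula, i.e. 1/(1-\tau), can be computed with

> tau=cor(lossalae,method="kendall")[1,2]
> 1/(1-tau)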

Now that we have a model, let us play with it, to price a reinsurance treaty. But first, let us see how to generate a Gumbel copula… One idea can be to use the frailty approach, based on a stable frailty. And we can use Chambers et al (1976) to generate a stable distribution. So here is the algorithm to generate samples from a Gumbel copula

> alpha=opz$par[5]
> invphi=function(t,a) exp(-t^(1/a))
> n=500
> x=matrix(rexp(2*n),n,2)
> angle=runif(n,0,pi)
> E=rexp(n)
> beta=1/alpha
> stable=sin((1-beta)*angle)^((1-beta)/beta)*
+ (sin(beta*angle))/(sin(angle))^(1/beta)/
+ (E^(alpha-1))
> U=invphi(x/stable,alpha)
> plot(U)

Here, we consider only 500 simulations,
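As a sanity check, the same kind of sample can also be drawn directly with the copula package (which is used further below); a minimal sketch, where U2 is simply a name chosen here,

> library(copula)
> U2=rCopula(n,gumbelCopula(alpha,dim=2))
> plot(U2)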

Based on that copula simulation, we can then use marginal transformations to generate a pair, losses and allocated expenses,

> Xloss=qlnorm(U[,1],opz$par[1],opz$par[2])
> Xalae=qlnorm(U[,2],opz$par[3],opz$par[4])

In standard reinsurance treaties – see e.g. Clarke (1996) – allocated expenses are split pro rata capita between the insurance company and the reinsurer. If X denotes losses and E the allocated expenses, a standard excess-of-loss treaty has payoff (as implemented in the code below)

(X-R)+\frac{X-R}{X}\cdot E \text{ if } R\leq X<L,\qquad (L-R)+\frac{L-R}{R}\cdot E \text{ if } X\geq L

where L denotes the (upper) limit, and R the insurer's retention. Using Monte Carlo simulation, it is then possible to estimate the pure premium of such a reinsurance treaty.

> L=100000
> R=50000
> Z=((Xloss-R)+(Xloss-R)/Xloss*Xalae)*
+ (R<=Xloss)*(Xloss<L)+
+ ((L-R)+(L-R)/R*Xalae)*(L<=Xloss)
> mean(Z)
[1] 12596.45
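Since this is based on only 500 simulations, it might be worth looking at the Monte Carlo standard error of that estimate, e.g. (a sketch)

> sd(Z)/sqrt(length(Z))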

Now, play with it… it is possible to find a better fit, I guess…

(nonparametric) copula density estimation

Today, we will go further on the inference of copula functions. Some codes (and references) can be found on a previous post, on nonparametric estimators of copula densities (among other related things).  Consider (as before) the loss-ALAE dataset (since we’ve been working a lot on that dataset)

> library(MASS)
> library(evd)
> X=lossalae
> U=cbind(rank(X[,1])/(nrow(X)+1),rank(X[,2])/(nrow(X)+1))

The standard tool to plot nonparametric estimators of densities is to use multivariate kernels. We can look at the density using

> mat1=kde2d(U[,1],U[,2],n=35)
> persp(mat1$x,mat1$y,mat1$z,col="green",
+ shade=TRUE,theta=s*5,
+ xlab="",ylab="",zlab="",zlim=c(0,7))

or level curves (isodensity curves) with more detailed estimators (on grids with shorter steps)

> mat1=kde2d(U[,1],U[,2],n=101)
> image(mat1$x,mat1$y,mat1$z,col=
+ rev(heat.colors(100)),xlab="",ylab="")
> contour(mat1$x,mat1$y,mat1$z,add=
+ TRUE,levels = pretty(c(0,4), 11))

Kernels are nice, but we clearly observe some border bias, extremely strong in the corners (the estimator is one fourth of what it should be there, see another post for more details). Instead of working with the sample (U_i,V_i) on the unit square, consider some transformed sample (Q(U_i),Q(V_i)), where Q:(0,1)\rightarrow\mathbb{R} is a given function, e.g. the quantile function of an unbounded distribution, for instance the quantile function of the \mathcal{N}(0,1) distribution. Then, we can estimate the density of the transformed sample, and using the inversion technique, derive an estimator of the density of the initial sample. Since the inverse of a (general) function is not that simple to compute, the code might be a bit slow. But it does work,

> gaussian.kernel.copula.surface <- function (u,v,n) {
+   s=seq(1/(n+1), length=n, by=1/(n+1))
+   mat=matrix(NA,nrow = n, ncol = n)
+ sur=kde2d(qnorm(u),qnorm(v),n=1000,
+ lims = c(-4, 4, -4, 4))
+ su<-sur$z
+ for (i in 1:n) {
+     for (j in 1:n) {
+ 	Xi<-round((qnorm(s[i])+4)*1000/8)+1;
+ 	Yj<-round((qnorm(s[j])+4)*1000/8)+1
+ 	mat[i,j]<-su[Xi,Yj]/(dnorm(qnorm(s[i]))*
+ 	dnorm(qnorm(s[j])))
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }
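For instance, the function above could be called as follows (a sketch, with names chosen here),

> mat2=gaussian.kernel.copula.surface(U[,1],U[,2],n=35)
> persp(mat2$x,mat2$y,mat2$z,col="green",shade=TRUE,
+ xlab="",ylab="",zlab="")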

Here, we get

Note that it is possible to consider another transformation, e.g. the quantile function of a Student-t distribution.

> student.kernel.copula.surface =
+  function (u,v,n,d=4) {
+  s <- seq(1/(n+1), length=n, by=1/(n+1))
+  mat <- matrix(NA,nrow = n, ncol = n)
+ sur<-kde2d(qt(u,df=d),qt(v,df=d),n=5000,
+ lims = c(-8, 8, -8, 8))
+ su<-sur$z
+ for (i in 1:n) {
+     for (j in 1:n) {
+ 	Xi<-round((qt(s[i],df=d)+8)*5000/16)+1;
+ 	Yj<-round((qt(s[j],df=d)+8)*5000/16)+1
+ 	mat[i,j]<-su[Xi,Yj]/(dt(qt(s[i],df=d),df=d)*
+ 	dt(qt(s[j],df=d),df=d))
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }

Another strategy is to consider kernels that have precisely the unit interval as support. The idea here is to consider the product of Beta kernels, where the parameters depend on the location

> beta.kernel.copula.surface=
+  function (u,v,bx=.025,by=.025,n) {
+  s <- seq(1/(n+1), length=n, by=1/(n+1))
+  mat <- matrix(0,nrow = n, ncol = n)
+ for (i in 1:n) {
+     a <- s[i]
+     for (j in 1:n) {
+     b <- s[j]
+ 	mat[i,j] <- sum(dbeta(a,u/bx,(1-u)/bx) *
+     dbeta(b,v/by,(1-v)/by)) / length(u)
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }
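Again, a sketch of how that function could be called,

> mat3=beta.kernel.copula.surface(U[,1],U[,2],bx=.025,by=.025,n=35)
> persp(mat3$x,mat3$y,mat3$z,col="green",shade=TRUE,
+ xlab="",ylab="",zlab="")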

On those two graphs, we can clearly observe strong tail dependence in the upper (right) corner, that cannot be intuited using a standard kernel estimator…

Copulas and tail dependence, part 3

We have seen extreme value copulas in the section where we considered general families of copulas. In the bivariate case, an extreme value copula can be written

C(u,v)=\exp\left(\log(uv)\,A\!\left(\frac{\log(u)}{\log(uv)}\right)\right)

where A(\cdot) is Pickands dependence function, which is a convex function satisfying \max\{t,1-t\}\leq A(t)\leq 1 for all t\in[0,1]. Observe that in this case, Kendall's \tau can be written as a functional of A; for instance, if A(t)=\left(t^\theta+(1-t)^\theta\right)^{1/\theta}, we obtain Gumbel copula. This is what we've seen in the section where we introduced this family. Now, let us talk about (nonparametric) inference, and more precisely the estimation of the dependence function. The starting point of the most standard estimator is to observe that if (U,V) has copula C, then Z=\log(U)/\log(UV) has distribution function H, and conversely, Pickands dependence function can be written

A(t)=\exp\left(\int_0^t \frac{H(z)-z}{z(1-z)}\,dz\right)

Thus, a natural estimator for Pickands function is

\widehat{A}_n(t)=\exp\left(\int_0^t \frac{\widehat{H}_n(z)-z}{z(1-z)}\,dz\right)

where \widehat{H}_n is the empirical cumulative distribution function of the Z_i's. This is the estimator proposed in Capéràa, Fougères & Genest (1997). Here, we can compute everything using

> library(evd)
> X=lossalae
> U=cbind(rank(X[,1])/(nrow(X)+1),rank(X[,2])/
+ (nrow(X)+1))
> Z=log(U[,1])/log(U[,1]*U[,2])
> h=function(t) mean(Z<=t)
> H=Vectorize(h)
> a=function(t){
+ f=function(t) (H(t)-t)/(t*(1-t))
+ return(exp(integrate(f,lower=0,upper=t,
+ subdivisions=10000)$value))
+ }
> A=Vectorize(a)
> u=seq(.01,.99,by=.01)
> plot(c(0,u,1),c(1,A(u),1),type="l",col="red",
+ ylim=c(.5,1))

i.e. we estimate the distribution function of Z empirically, and then integrate to get an estimator of Pickands' dependence function. Note that an interesting point is that the upper tail dependence index can be visualized on the graph above,

> A(.5)/2
[1] 0.4055346

Copulas and tail dependence, part 2

An alternative to describe tail dependence can be found in Ledford & Tawn (1996), for instance. The intuition behind it can be found in Fischer & Klein (2007). Assume that X and Y have the same distribution. Now, if we assume that those variables are (strictly) independent,

\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)^2

But if we assume that those variables are (strictly) comonotonic (i.e. equal here, since they have the same distribution), then

\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)

So assume that there is an \eta\in[1/2,1] such that

\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)^{1/\eta}

Then \eta=1/2 can be interpreted as independence, while \eta=1 means strong (perfect) positive dependence. Thus, consider the following transformation to get a parameter in [0,1], with a strength of dependence increasing with the index, e.g.

2\eta-1

In order to derive a tail dependence index, assume that there exists a limit to that quantity,

\lim_{z\rightarrow 1}\left(\frac{2\log\mathbb{P}(U>z)}{\log\mathbb{P}(U>z,V>z)}-1\right)

which will be interpreted as a (weak) tail dependence index. Thus, define the concentration functions

L(z)=\frac{2\log\mathbb{P}(U\leq z)}{\log\mathbb{P}(U\leq z,V\leq z)}-1

for the lower tail (on the left) and

R(z)=\frac{2\log\mathbb{P}(U>z)}{\log\mathbb{P}(U>z,V>z)}-1

for the upper tail (on the right), where (U,V) is the pair of uniformized variables. The R code to compute those functions is quite simple,
> library(evd); 
> data(lossalae)
> X=lossalae
> U=rank(X[,1])/(nrow(X)+1)
> V=rank(X[,2])/(nrow(X)+1)
> fL2emp=function(z) 2*log(mean(U<=z))/
+ log(mean((U<=z)&(V<=z)))-1
> fR2emp=function(z) 2*log(mean(U>=1-z))/
+ log(mean((U>=1-z)&(V>=1-z)))-1
> u=seq(.001,.5,by=.001)
> L=Vectorize(fL2emp)(u)
> R=Vectorize(fR2emp)(rev(u))
> plot(c(u,u+.5-u[1]),c(L,R),type="l",ylim=0:1,
+ xlab="LOWER TAIL      UPPER TAIL")
> abline(v=.5,col="grey")

and again, it is possible to plot those empirical functions against some parametric ones, e.g. the one obtained from a Gaussian copula (with the same Kendall’s tau)

> tau=cor(lossalae,method="kendall")[1,2]
> library(copula)
> paramgauss=sin(tau*pi/2)
> copgauss=normalCopula(paramgauss)
> Lgaussian=function(z) 2*log(z)/log(pCopula(c(z,z),
+ copgauss))-1
> Rgaussian=function(z) 2*log(1-z)/log(1-2*z+
+ pCopula(c(z,z),copgauss))-1
> u=seq(.001,.5,by=.001)
> Lgs=Vectorize(Lgaussian)(u)
> Rgs=Vectorize(Rgaussian)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgs,Rgs),col="red")

or Gumbel copula,

> paramgumbel=1/(1-tau)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)
> Lgumbel=function(z) 2*log(z)/log(pCopula(c(z,z),
+ copgumbel))-1
> Rgumbel=function(z) 2*log(1-z)/log(1-2*z+
+ pCopula(c(z,z),copgumbel))-1
> Lgl=Vectorize(Lgumbel)(u)
> Rgl=Vectorize(Rgumbel)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgl,Rgl),col="blue")

Again, one should look more carefully at confidence bands, but it looks like Gumbel copula provides a good fit here.

Copulas and tail dependence, part 1

As mentioned in the course last week Venter (2003) suggested nice functions to illustrate tail dependence (see also some slides used in Berlin a few years ago).

  • Joe (1990)’s lambda

Joe (1990) suggested a (strong) tail dependence index. For lower tails, for instance, consider

\lambda_L=\lim_{z\rightarrow 0}\mathbb{P}(U\leq z\mid V\leq z)=\lim_{z\rightarrow 0}\frac{C(z,z)}{z}

  • Upper and lower strong tail (empirical) dependence functions

The idea is to plot the functions above, in order to visualize the limiting behavior. Define

L(z)=\frac{C(z,z)}{z}=\mathbb{P}(U\leq z\mid V\leq z)

for the lower tail, and

R(z)=\frac{C^\star(z,z)}{1-z}=\mathbb{P}(U>z\mid V>z)

for the upper tail, where C^\star is the (joint) survival function associated with C, in the sense that

C^\star(u,v)=\mathbb{P}(U>u,V>v)=1-u-v+C(u,v)

Now, one can easily derive empirical counterparts of those functions, i.e.

\widehat{L}(z)=\frac{\sum_i \mathbf{1}(U_i\leq z,V_i\leq z)}{\sum_i \mathbf{1}(U_i\leq z)}\qquad\text{and}\qquad\widehat{R}(z)=\frac{\sum_i \mathbf{1}(U_i>z,V_i>z)}{\sum_i \mathbf{1}(U_i>z)}
Thus, for upper tail, on the right, we have the following graph

and for the lower tail, on the left, we have

For the code, consider some real data, like the loss-ALAE dataset.

> library(evd)
> X=lossalae

The idea is to plot, on the left, the lower tail concentration function, and on the right, the upper tail function.

> U=rank(X[,1])/(nrow(X)+1)
> V=rank(X[,2])/(nrow(X)+1)
> Lemp=function(z) sum((U<=z)&(V<=z))/sum(U<=z)
> Remp=function(z) sum((U>=1-z)&(V>=1-z))/sum(U>=1-z)
> u=seq(.001,.5,by=.001)
> L=Vectorize(Lemp)(u)
> R=Vectorize(Remp)(rev(u))
> plot(c(u,u+.5-u[1]),c(L,R),type="l",ylim=0:1,
+ xlab="LOWER TAIL          UPPER TAIL")
> abline(v=.5,col="grey")

Now, we can compare this graph with what should be obtained for some parametric copulas that have the same Kendall's tau (e.g. the one estimated on the dataset). For instance, if we consider a Gaussian copula,

> tau=cor(lossalae,method="kendall")[1,2]
> library(copula)
> paramgauss=sin(tau*pi/2)
> copgauss=normalCopula(paramgauss)
> Lgaussian=function(z) pCopula(c(z,z),copgauss)/z
> Rgaussian=function(z) (1-2*z+pCopula(c(z,z),copgauss))/(1-z)
> u=seq(.001,.5,by=.001)
> Lgs=Vectorize(Lgaussian)(u)
> Rgs=Vectorize(Rgaussian)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgs,Rgs),col="red")

or Gumbel’s copula,

> paramgumbel=1/(1-tau)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)
> Lgumbel=function(z) pCopula(c(z,z),copgumbel)/z
> Rgumbel=function(z) (1-2*z+pCopula(c(z,z),copgumbel))/(1-z)
> u=seq(.001,.5,by=.001)
> Lgl=Vectorize(Lgumbel)(u)
> Rgl=Vectorize(Rgumbel)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgl,Rgl),col="blue")

That’s nice (isn’t it?), but since we do not have any confidence interval, it is still hard to conclude (even if it looks like Gumbel copula has a much better fit than the Gaussian one). A strategy can be to generate samples from those copulas, and to visualize what we had. With a Gaussian copula, the graph looks like

> u=seq(.0025,.5,by=.0025); nu=length(u)
> nsimul=500
> MGS=matrix(NA,nsimul,2*nu)
> for(s in 1:nsimul){
+ Xs=rCopula(nrow(X),copgauss)
+ Us=rank(Xs[,1])/(nrow(Xs)+1)
+ Vs=rank(Xs[,2])/(nrow(Xs)+1)
+ Lemp=function(z) sum((Us<=z)&(Vs<=z))/sum(Us<=z)
+ Remp=function(z) sum((Us>=1-z)&(Vs>=1-z))/sum(Us>=1-z)
+ MGS[s,1:nu]=Vectorize(Lemp)(u)
+ MGS[s,(nu+1):(2*nu)]=Vectorize(Remp)(rev(u))
+ lines(c(u,u+.5-u[1]),MGS[s,],col="red")
+ }

(including – pointwise – 90% confidence bands)

> Q95=function(x) quantile(x,.95)
> V95=apply(MGS,2,Q95)
> lines(c(u,u+.5-u[1]),V95,col="red",lwd=2)
> Q05=function(x) quantile(x,.05)
> V05=apply(MGS,2,Q05)
> lines(c(u,u+.5-u[1]),V05,col="red",lwd=2)

while it is

with Gumbel copula. Isn’t it a nice (graphical) tool ?
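The Gumbel bands plotted above can be obtained with exactly the same loop, simply replacing the Gaussian copula by the Gumbel one (a sketch, with MGG a name chosen here),

> MGG=matrix(NA,nsimul,2*nu)
> for(s in 1:nsimul){
+ Xs=rCopula(nrow(X),copgumbel)
+ Us=rank(Xs[,1])/(nrow(Xs)+1)
+ Vs=rank(Xs[,2])/(nrow(Xs)+1)
+ Lemp=function(z) sum((Us<=z)&(Vs<=z))/sum(Us<=z)
+ Remp=function(z) sum((Us>=1-z)&(Vs>=1-z))/sum(Us>=1-z)
+ MGG[s,1:nu]=Vectorize(Lemp)(u)
+ MGG[s,(nu+1):(2*nu)]=Vectorize(Remp)(rev(u))
+ }
> lines(c(u,u+.5-u[1]),apply(MGG,2,Q95),col="blue",lwd=2)
> lines(c(u,u+.5-u[1]),apply(MGG,2,Q05),col="blue",lwd=2)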

But as mentioned in the course, the statistical convergence can be slow. Extremely slow. So assessing whether the underlying copula has tail dependence, or not, is not that simple. Especially if the copula exhibits tail independence. Like the Gaussian copula. Consider a sample of size 1,000. This is what we obtain if we generate random scenarios,

or we look at the left tail (with a log-scale)

Now, consider a 10,000 sample,

or with a log-scale

We can even consider a 100,000 sample,

or with a log-scale

On those graphs, it is rather difficult to conclude if the limit is 0, or some strictly positive value (again, it is a classical statistical problem when the value of interest is at the border of the support of the parameter). So, a natural idea is to consider a weaker tail dependence index. Unless you have something like 100,000 observations…

Copulas estimation and influence of margins

Just a short post to get back on results mentioned at the end of the course. Since copulas are obtained using (univariate) quantile functions in the joint cumulative distribution function, they are – somehow – related to the marginal distribution fitted. In order to illustrate this point, consider an i.i.d. sample from a Student-t distribution,
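The simulation of that sample is not reproduced on this archive page; a minimal sketch, assuming a bivariate Student-t vector with 4 degrees of freedom and a (hypothetical) correlation r=.5, using the mnormt package,

library(mnormt)
r=.5    # hypothetical correlation, not given here
n=200   # samples of size 200 are used below
X0=rmt(n,mean=c(0,0),S=matrix(c(1,r,r,1),2,2),df=4)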


Thus, the true copula is Student-t. Here, with 4 degrees of freedom. Note that we can easily get the (true) value of the copula, on the diagonal

library(mnormt)   # pmt() is the multivariate Student-t cdf
dg=function(t) pmt(qt(t,df=4),mean=c(0,0),
   S=matrix(c(1,r,r,1),2,2),df=4)   # r: the correlation used to simulate the sample

Four strategies are considered here to define the pseudo-observations used as copula-type variates,

  • misfit: consider an invalid marginal estimation: we assume that margins are Gaussian, and fit Gaussian distributions to both components
  • perfect fit: here, we know that margins are Student-t, with 4 degrees of freedom, and use those true distributions
  • standard fit: consider the case where we fit the marginal distributions, but within the proper family this time (i.e. among Student-t distributions)
  • ranks: finally, we consider nonparametric estimators of the marginal distributions

In each case, once we have a pseudo-sample with margins on the unit interval, we can construct the associated empirical copula.
Let us now compare those four approaches.

  • The first one is to illustrate model error, i.e. what’s going on if we fit distributions, but not in the proper family of parametric distributions.
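A sketch of what that (improper) transformation could look like, with Gaussian margins fitted by moments (Y being the name used in the code below),

Y=cbind(pnorm(X0[,1],mean(X0[,1]),sd(X0[,1])),
        pnorm(X0[,2],mean(X0[,2]),sd(X0[,2])))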

Then, the following code is used to compute the value of the empirical copula, on the diagonal,

diagonale=function(t,Z) mean((Z[,1]<=t)&(Z[,2]<=t))
diagY=function(t) diagonale(t,Y)

On the graph below, 1,000 samples of size 200 have been generated. All trajectories are the estimation of the copula on the diagonal. The black plain line is the true value of the copula

Obviously, it is not good at all. Mainly because the distribution of Y can't be a copula, since its margins are not even uniform on the unit interval.

  • a perfect fit. Here, we use the following code to generate our copula-type sample
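The original code is not reproduced here; a minimal sketch, using the true Student-t margins,

Y=cbind(pt(X0[,1],df=4),pt(X0[,2],df=4))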

This time, the fit is much better.

  • Using maximum likelihood estimators to fit the best distribution within the Student-t family
F1=fitdistr(X0[,1],dt,list(df=5),lower = 0.001)
F2=fitdistr(X0[,2],dt,list(df=5),lower = 0.001)
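(fitdistr() is in the MASS package). The transformation itself could then be sketched as

Y=cbind(pt(X0[,1],df=F1$estimate),pt(X0[,2],df=F2$estimate))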

Here, it is also very good. Even better than before, when the true distribution is considered.

(it is like using the Lilliefors test for goodness of fit, versus Kolmogorov-Smirnov, see here for instance, in French).

  • Finally, let us consider ranks, or nonparametric estimators for marginal distributions,
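A sketch of the rank-based transformation,

Y=cbind(rank(X0[,1])/(nrow(X0)+1),rank(X0[,2])/(nrow(X0)+1))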

Here it is even better than the previous one

If we compare box-plots of the value of the copula at point (.2,.2), we obtain the following, with, on top, ranks, then the fit within the proper (Student-t) family, then the true distribution, and finally, the improper (Gaussian) fit.

Just to illustrate one more time a result mentioned in a previous post, “in statistics, having too much information might not be a good thing“.

Kendall’s function for copulas

As mentioned in the course on copulas, a nice tool to describe dependence is Kendall's cumulative function. Given a random pair (X,Y) with joint distribution function F, define the random variable W=F(X,Y). Then Kendall's cumulative function is

K(t)=\mathbb{P}(W\leq t)

Genest and Rivest (1993) introduced that function to choose among Archimedean copulas (we’ll get back to this point below).

From a computational point of view, computing such a function can be done as follows,

  • for all i, compute W_i as the proportion of observations in the lower quadrant with upper corner (X_i,Y_i), i.e. \widehat{W}_i=\frac{1}{n}\sum_{j=1}^n \mathbf{1}(X_j\leq X_i,Y_j\leq Y_i)

  • then compute the (empirical) cumulative distribution function of the W_i's.

To visualize the construction of that cumulative distribution function, consider the following animation

Thus, the code to compute that cumulative distribution function is quite simple,
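(the original code is not reproduced on this archive page; here is a minimal sketch, assuming the observations are stored in a two-column matrix X)

n=nrow(X)
W=rep(NA,n)
for(i in 1:n) W[i]=mean((X[,1]<=X[i,1])&(X[,2]<=X[i,2]))  # proportion in the lower quadrant
K=Vectorize(function(t) mean(W<=t))                        # empirical Kendall function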


The graph can then be obtained, for instance, using
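something like the following sketch (with the diagonal and the independence curve added as reference lines),

u=seq(.01,1,by=.01)
plot(u,K(u),type="l",xlab="",ylab="")
lines(u,u,col="grey")              # perfect positive dependence: K(t)=t
lines(u,u-u*log(u),col="green")    # independence: K(t)=t-t*log(t)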




The interesting point is that for an Archimedean copula with generator \phi, Kendall's function is simply

K(t)=t-\frac{\phi(t)}{\phi'(t)}

If we're too lazy to do the maths, at least it is possible to compute those functions numerically. For instance, for Clayton copula,
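a sketch (with a hypothetical parameter value, superimposed on the previous graph),

theta=2                                  # hypothetical Clayton parameter
phi=function(t) (t^(-theta)-1)/theta     # Clayton generator
dphi=function(t) -t^(-theta-1)           # derivative of the generator
lines(u,u-phi(u)/dphi(u),col="blue")     # K(t)=t-phi(t)/phi'(t)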


Similarly, let us consider Gumbel copula,
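again a sketch, with a hypothetical parameter value,

theta=2                                           # hypothetical Gumbel parameter
phi=function(t) (-log(t))^theta                   # Gumbel generator
dphi=function(t) -theta*(-log(t))^(theta-1)/t     # derivative of the generator
v=u[u<1]                                          # the ratio is 0/0 at t=1
lines(v,v-phi(v)/dphi(v),col="purple")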


If we plot the empirical Kendall’s function (obtained from the sample), with different theoretical ones, derived from Clayton copulas (on the left, in blue) or Gumbel copula (on the right, in purple), we have the following,

Note that the different curves were obtained when Clayton copula has Kendall’s tau equal to 0, .1, .2, .3, …, .9, 1, and similarly for Gumbel copula (so that Figures can be compared). The following table gives a correspondence, from Kendall’s tau to the underlying parameter of a copula (for different families)

as well as Spearman’s rho,

To conclude, observe that there are two important particular cases that can be identified here: the case of perfect (positive) dependence, on the first diagonal, where K(t)=t, and the case of independence, the upper (green) curve, where K(t)=t-t\log(t). It should also be mentioned that it is also common to plot not the function K(t), but the function \lambda(t)=t-K(t).

Tests on tail index for extremes

Since several students got the intuition that natural catastrophes might be non-insurable (underlying distributions with infinite mean), I will post some comments on testing procedures for extreme value models.

A natural idea is to use a likelihood ratio test (for composite hypotheses). Let \theta denote the parameter of our parametric model (e.g. the tail index \xi), and suppose we would like to know whether \theta is smaller or larger than some \theta_0 (where \theta_0=1 in the context of finite versus infinite mean). I.e. either \theta belongs to the set \Theta_0 or to its complement \Theta_1. Consider the (unconstrained) maximum likelihood estimator, i.e.

\widehat{\theta}=\arg\max\{\log\mathcal{L}(\theta);\ \theta\in\Theta_0\cup\Theta_1\}

Let \widehat{\theta}_0 and \widehat{\theta}_1 denote the constrained maximum likelihood estimators, on \Theta_0 and \Theta_1 respectively.

Either \widehat{\theta}\in\Theta_0, and then \widehat{\theta}_0=\widehat{\theta} (on the left), or \widehat{\theta}\in\Theta_1, and then \widehat{\theta}_1=\widehat{\theta} (on the right).

The likelihood ratio then compares the constrained and the unconstrained maximum (log-)likelihoods; one of the two ratios is always equal to 1, and the other one is used as the test statistic, whose distribution can be approximated using a chi-square distribution with one degree of freedom.

If we use the code mentioned in the post on profile likelihood, it is easy to derive that ratio. The following graph is the evolution of that ratio, based on a GPD assumption, for different thresholds,

> base1=read.table(
+ "",
+ header=TRUE)
> library(evir)
> X=base1$
> U=seq(2,10,by=.2)
> LR=P=ES=SES=rep(NA,length(U))
> for(j in 1:length(U)){
+ u=U[j]
+ Y=X[X>u]-u
+ loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
+ XIV=(1:300)/100;L=rep(NA,300)
+ for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
+ plot(XIV,L,type="l")
+ PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
+ (L0=(OPT=optimize(f=PL,interval=c(0,10)))$objective)
+ profilelikelihood=function(beta){
+ -loglikelihood(1,beta) }
+ (L1=optim(par=1,fn=profilelikelihood)$value)
+ LR[j]=L1-L0
+ P[j]=1-pchisq(L1-L0,df=1)
+ G=gpd(X,u)
+ ES[j]=G$par.ests[1]
+ SES[j]=G$par.ses[1]
+ }
> plot(U,LR,type="b",ylim=range(c(0,LR)))
> abline(h=qchisq(.95,1),lty=2)

with on top the values of the ratio (the dotted line is the quantile of a chi-square distribution with one degree of freedom) and below the associated p-value

> plot(U,P,type="b",ylim=range(c(0,P)))
> abline(h=.05,lty=2)

In order to compare, it is also possible to look at confidence interval for the tail index of the GPD fit,

> plot(U,ES,type="b",ylim=c(0,1))
> lines(U,ES+1.96*SES,type="h",col="red")
> abline(h=1,lty=2)

To go further, see Falk (1995), Dietrich, de Haan & Hüsler (2002), Hüsler & Li (2006) with the following table, or Neves & Fraga Alves (2008). See also here or there (for the latex based version) for an old paper I wrote on that topic.

the Dirichlet distribution

In the course, since we are still introducing some concepts of dependent distributions, we will talk about the Dirichlet distribution, which is a distribution over the simplex of \mathbb{R}_+^d. Let \mathcal{G}(\alpha,\lambda) denote the Gamma distribution with density (on \mathbb{R}_+)

f(x)=\frac{\lambda^\alpha}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\lambda x}

Let X_1,\dots,X_d denote independent random variables, with X_i\sim\mathcal{G}(\alpha_i,1). Then the vector (Z_1,\dots,Z_d), where

Z_i=\frac{X_i}{X_1+\cdots+X_d}

has a Dirichlet distribution with parameter (\alpha_1,\dots,\alpha_d).

Note that (Z_1,\dots,Z_d) has a distribution on the simplex of \mathbb{R}_+^d,

\{(z_1,\dots,z_d):\ z_i\geq 0,\ z_1+\cdots+z_d=1\}

and has density

f(z_1,\dots,z_d)=\frac{\Gamma(\alpha_1+\cdots+\alpha_d)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_d)}\,z_1^{\alpha_1-1}\cdots z_d^{\alpha_d-1}

We will write (Z_1,\dots,Z_d)\sim\mathcal{D}(\alpha_1,\dots,\alpha_d).

The density for different values of the parameter (\alpha_1,\alpha_2,\alpha_3) can be visualized below, e.g. first with some kind of symmetry, and then, below, for other (asymmetric) choices of the parameter.

Note that marginal distributions are also Dirichlet, in the sense that if (Z_1,\dots,Z_d)\sim\mathcal{D}(\alpha_1,\dots,\alpha_d), then any subvector obtained by aggregating components is still Dirichlet; in particular, the Z_i's have Beta distributions,

Z_i\sim\mathcal{B}(\alpha_i,\alpha_1+\cdots+\alpha_d-\alpha_i)
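The Gamma construction above can be checked with a short simulation (a sketch; names are chosen here),

> n=1000
> alpha=c(2,2,5)
> G=sapply(alpha,function(a) rgamma(n,shape=a,rate=1))
> Z=G/rowSums(G)              # rows of Z are Dirichlet(2,2,5) draws
> colMeans(Z)                 # should be close to alpha/sum(alpha)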

See Devroye (1986), section XI.4, or Frigyik, Kapila & Gupta (2010). This distribution is also sometimes called the multivariate Beta distribution. In R, it can be used as follows

> library(MCMCpack)
> alpha=c(2,2,5)
> x=seq(0,1,by=.05)
> vx=rep(x,length(x))
> vy=rep(x,each=length(x))
> vz=1-x-vy
> V=cbind(vx,vy,vz)
> D=ddirichlet(V, alpha)
> persp(x,x,matrix(D,length(x),length(x)))

(to plot the density, as in the figures above). Note that we will come back to that distribution later, when discussing so-called Liouville copulas (see also Gupta & Richards (1986)).

Exchangeability, credit risk and risk measures

Exchangeability is an extremely useful concept, since (most of the time) analytical expressions can be derived. But it can also be used to observe some unexpected behaviors, that we will discuss later on in a more general setting. For instance, in an old post, I discussed connections between correlation and risk measures (using simulations to illustrate, but in the context of exchangeable risks, calculations can be performed more accurately). Consider again the standard credit risk problem, where the quantity of interest is the number of defaults in a portfolio. Consider an homogeneous portfolio of n exchangeable risks, with default indicators Y_1,\dots,Y_n. The quantity of interest is here

N=Y_1+\cdots+Y_n

or perhaps the quantile function of the sum (since the Value-at-Risk is the standard risk measure). We have seen yesterday that – given the latent factor \Theta – the Y_i's are independent, with Y_i\in\{0,1\} (either the company defaults, or not) and \mathbb{P}(Y_i=1\mid\Theta)=\Theta, so that

\mathbb{P}(N=k\mid\Theta)=\binom{n}{k}\Theta^k(1-\Theta)^{n-k}

i.e. we can derive the (unconditional) distribution of the sum,

\mathbb{P}(N=k)=\binom{n}{k}\int_0^1\theta^k(1-\theta)^{n-k}\,dF_\Theta(\theta)

so that the probability function of the sum is, assuming that \Theta has a Beta \mathcal{B}(a,b) distribution,

\mathbb{P}(N=k)=\binom{n}{k}\int_0^1\theta^{k}(1-\theta)^{n-k}\,\frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)}\,d\theta

Thus, the following code can be used to calculate the quantile function

> proba=function(s,a,m,n){
+ b=a/m-a
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
+ stop.on.error =  FALSE)$value
+ }
> QUANTILE=function(p=.99,a=2,m=.1,n=500){
+ V=rep(NA,n+1)
+ for(i in 0:n){
+ V[i+1]=proba(i,a,m,n)}
+ V=V/sum(V)
+ return(min(which(cumsum(V)>p))) }
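For instance, the 99% quantile of the number of defaults, in a portfolio of 500 risks with a 10% average default probability and a=2, would be obtained with

> QUANTILE(p=.99,a=2,m=.1,n=500)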

Now observe that since the variates are exchangeable, it is possible to calculate explicitly the correlation of defaults. Here

\mathbb{E}(Y_iY_j)=\mathbb{E}(\Theta^2)\qquad\text{while}\qquad\mathbb{E}(Y_i)=\mathbb{E}(\Theta)

Thus, the correlation between two default indicators is then

\text{corr}(Y_i,Y_j)=\frac{\mathbb{E}(\Theta^2)-\mathbb{E}(\Theta)^2}{\mathbb{E}(\Theta)-\mathbb{E}(\Theta)^2}

Under the assumption that the latent factor is Beta distributed, \Theta\sim\mathcal{B}(a,b), we get

\text{corr}(Y_i,Y_j)=\frac{1}{a+b+1}

Thus, as a function of the parameter a of the Beta distribution (we consider Beta distributions with the same mean m, i.e. the same marginal distributions, so we have only one parameter left, which drives the correlation of the default indicators), it is possible to plot the quantile function,

> PICTURE=function(P){
+ A=seq(.01,2,by=.01)
+ VQ=matrix(NA,length(A),5)
+ for(i in 1:length(A)){
+ VQ[i,1]=QUANTILE(a=A[i],p=.9,m=P)
+ VQ[i,2]=QUANTILE(a=A[i],p=.95,m=P)
+ VQ[i,3]=QUANTILE(a=A[i],p=.975,m=P)
+ VQ[i,4]=QUANTILE(a=A[i],p=.99,m=P)
+ VQ[i,5]=QUANTILE(a=A[i],p=.995,m=P)
+ }
+ plot(A,VQ[,5],type="s",col="red",ylim=
+ c(0,max(VQ)),xlab="",ylab="")
+ lines(A,VQ[,4],type="s",col="blue")
+ lines(A,VQ[,3],type="s",col="black")
+ lines(A,VQ[,2],type="s",col="blue",lty=2)
+ lines(A,VQ[,1],type="s",col="red",lty=2)
+ lines(A,rep(500*P,length(A)),col="grey")
+ legend(3,max(VQ),c("quantile 99.5%","quantile 99%",
+ "quantile 97.5%","quantile 95%","quantile 90%","mean"),
+ col=c("red","blue","black","blue","red","grey"),
+ lty=c(1,1,1,2,2,1))
+ }

e.g. with a (marginal) default probability of 15%,

> PICTURE(.15)

On this graph, we observe that the stronger the correlation (the more on the left), the higher the quantile… Note that the same graph can be plotted with the correlation on the X-axis,
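A possible sketch for that modified graph (only the 99.5% quantile here), using the default correlation m/(a+m) derived above for the \mathcal{B}(a,a/m-a) mixing distribution,

> P=.15
> A=seq(.01,2,by=.01)
> RHO=P/(A+P)
> Q995=Vectorize(function(a) QUANTILE(a=a,p=.995,m=P))(A)
> plot(RHO,Q995,type="l",col="red",xlab="correlation",ylab="")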

Which is quite intuitive, somehow. But if the marginal probability of default decreases, increasing the correlation might decrease the risk (i.e. the quantile function),

> PICTURE(.05)

(with the modified code to visualize the quantile as a function of the underlying default correlation) or even worse,

> PICTURE(.0075)

And it becomes all the more counterintuitive as the default probability decreases! So in the case of a portfolio of not-very-risky bond issuers (with high ratings), assuming a very strong correlation will lower the risk-based capital!

de Finetti’s theorem and exchangeability

This week, we will start to work on multivariate models, and non-independence. The first idea to discuss non-independence will be to use the concept of exchangeability. A sequence (X_1,\dots,X_n) of random variables is said to be exchangeable if, for any permutation \sigma of \{1,\dots,n\},

(X_1,\dots,X_n)\overset{\mathcal{L}}{=}(X_{\sigma(1)},\dots,X_{\sigma(n)})

A standard example is the Gaussian case, where all the X_i's have the same variance and the same pairwise correlation r; a necessary condition is then that r\geq -1/(n-1) (so that the variance of the sum remains nonnegative).
Since this inequality should hold for all n, it follows that necessarily r\geq 0.
de Finetti (1931): Let (X_i)_{i\geq 1} be a sequence of random variables with values in \{0,1\}. The sequence is exchangeable if and only if there exists a distribution function \pi on [0,1] such that

\mathbb{P}(X_1=x_1,\dots,X_k=x_k)=\int_0^1\theta^{s_k}(1-\theta)^{k-s_k}\,d\pi(\theta),\qquad s_k=x_1+\cdots+x_k

Note that \pi is the distribution function of the random variable \lim_{n\rightarrow\infty}\overline{X}_n. A nice proof of that result can be found in Heath & Sudderth (1995) – see also Schervish (1995), Chow & Teicher (1997) or Durrett (2010), and also probably in several Bayesian books, because that result has a strong interpretation in Bayesian inference (as far as I understood, see e.g. Jaynes (1982)).
The idea of the proof is the following. From the exchangeability condition, the joint probabilities are invariant under any permutation of the indices, and the representation above can be inverted. The idea is then to extend the size of the vector: for k\leq n, condition on the sum of the components of (X_1,\dots,X_n); given that sum, all possible rearrangements of the ones among the n elements are equally likely. The first step is to work on that conditional (hypergeometric) term, and to invoke a theorem of approximation of the hypergeometric distribution by a binomial distribution, when n becomes large. Letting \pi_n denote the cumulative distribution function of the empirical frequency \overline{X}_n, the idea is then to write the sum as an integral with respect to that distribution. The theorem is then obtained by letting n go to infinity. For the case of non-binary sequences, there is an extension of the previous result,
Hewitt & Savage (1955): Let (X_i)_{i\geq 1} be a sequence of random variables with values in \mathbb{R}. The sequence is exchangeable if and only if there exists a random measure \mu (the limit of the empirical measure) such that, conditionally on \mu, the X_i's are independent with distribution \mu. For instance, in the Gaussian case mentioned earlier, conditionally on the latent factor, the X_i's are conditionally independent, with a Gaussian distribution. The proof can be found in Kingman (1978) and is based on martingale arguments.
Note that in the Gaussian case, one can write X_i=\sqrt{r}\,Z_0+\sqrt{1-r}\,Z_i, where the Z_i's are i.i.d. random variables. To go further on exchangeability and related topics, see Aldous (1985) (see also here).
This construction can be used in credit risk, to model defaults in an homogeneous portfolio, see e.g. Frey (2001),


Assuming a Beta distribution for the latent factor, we can derive the probability distribution of the sum N=Y_1+\cdots+Y_n. If we assume that – given the latent factor \Theta – the Y_i's are independent, with Y_i\in\{0,1\} (either the company defaults, or not) and \mathbb{P}(Y_i=1\mid\Theta)=\Theta, we can derive the (unconditional) distribution of the sum,

\mathbb{P}(N=k)=\binom{n}{k}\int_0^1\theta^k(1-\theta)^{n-k}\,dF_\Theta(\theta)

> proba=function(s,a,m,n){
+ b=a/m-a
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1)$value
+ }

Based on that function, it is possible to plot the probability distribution of N over \{0,1,\dots,n\}. In the upper corner is plotted the density of the Beta distribution.

> a=2
> m=.2
> n=10
> V=rep(NA,n+1)
> for(i in 0:n){
+ V[i+1]=proba(i,a,m,n)}
> barplot(V,names.arg=0:10)

Those two theorems are extremely close,

De Finetti’s theorem: a sequence of \{0,1\}-valued random variables is exchangeable if and only if the X_i's are conditionally independent (and identically distributed), conditionally on some random variable \Theta.

Hewitt-Savage’s theorem: a random sequence is exchangeable if and only if the X_i's are conditionally independent, conditionally on some sigma-algebra.

Olshen (1974) proposed an interesting discussion of those theorems; see also the related entry in the Encyclopedia of Statistical Sciences.

The subtle difference between those two theorems is also discussed in Freedman (1965)

MAT8886 from tail estimation to risk measure(s) estimation

This week, we conclude the part on extremes with an application of extreme value theory to risk measures. We have seen last week that, if we assume that, above a threshold u, a Generalized Pareto Distribution fits nicely, then we can use it to derive an estimator of the quantile function (for probability levels such that the quantile is larger than that threshold).

If the threshold is the (n-k)-th order statistic, i.e. we keep the k largest observations to fit a GPD, then this estimator can be written

\widehat{Q}(p)=u+\frac{\widehat{\beta}}{\widehat{\xi}}\left[\left(\frac{n}{k}(1-p)\right)^{-\widehat{\xi}}-1\right]

(this is what is implemented below, with k/n=2.5%).

The code we wrote last week was the following (here based on log-returns of the SP500 index, and we focus on large losses, i.e. large values of the opposite of log returns, plotted below)

> library(tseries)
> X=get.hist.quote("^GSPC")
> T=time(X)
> D=as.POSIXlt(T)
> Y=X$Close
> R=diff(log(Y))
> D=D[-1]
> X=-R
> plot(D,X)
> library(evir)
> GPD=gpd(X,quantile(X,.975))
> xi=GPD$par.ests[1]
> beta=GPD$par.ests[2]
> u=GPD$threshold
> QpGPD=function(p){
+ u+beta/xi*((100/2.5*(1-p))^(-xi)-1)
+ }
> QpGPD(1-1/250)
> QpGPD(1-1/2500)

This is consistent with the following outputs, for the return period of a yearly event (one observation out of 250 trading days)

> gpd.q(tailplot(gpd(X,quantile(X,.975))), 1-1/250, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI   Estimate   Upper CI
0.04172534 0.04557655 0.05086785

or the decennial one

> gpd.q(tailplot(gpd(X,quantile(X,.975))), 1-1/2500, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI   Estimate   Upper CI
0.07165395 0.08925558 0.13636620

Note that it is also possible to derive an estimator for another population risk measure (the quantile is simply the so-called Value-at-Risk): the expected shortfall (or Tail Value-at-Risk), i.e.

ES(p)=\mathbb{E}(X\mid X>Q(p))

The idea is to write that expression as

ES(p)=Q(p)+\mathbb{E}(X-Q(p)\mid X>Q(p))

so that we recognize the mean excess function (discussed earlier). Thus, assuming again that above u (and therefore above that high quantile) a GPD fits, we can write

ES(p)=Q(p)+\frac{\beta+\xi(Q(p)-u)}{1-\xi}

or equivalently

ES(p)=\frac{Q(p)}{1-\xi}+\frac{\beta-\xi u}{1-\xi}

If we substitute estimators for the unknown quantities in that expression, we get

\widehat{ES}(p)=\frac{\widehat{Q}(p)}{1-\widehat{\xi}}+\frac{\widehat{\beta}-\widehat{\xi}\,u}{1-\widehat{\xi}}

The code is here

> EpGPD=function(p){
+ u-beta/xi+beta/xi/(1-xi)*(100/2.5*(1-p))^(-xi)
+ }
> EpGPD(1-1/250)
> EpGPD(1-1/2500)

An alternative is to use Hill’s approach (used to derive Hill’s estimator). Assume here that, where is a slowly varying function. Then, for all,

Since is a slowly varying function, it seem natural to assume that this ratio is almost 1 (which is true asymptotically). Thus

i.e. if we invert that function, we derive an estimator for the quantile function

which can also be written

(which is close to the relation we derived using a GPD model). Here the code is

> k=trunc(length(X)*.025)
> Xs=rev(sort(as.numeric(X)))
> xiHill=mean(log(Xs[1:k]))-log(Xs[k+1])
> u=Xs[k+1]
> QpHill=function(p){
+ u+u*((100/2.5*(1-p))^(-xiHill)-1)
+ }

with the following Hill plot

For yearly and decennial events, we have here

> QpHill(1-1/250)
[1] 0.04580548
> QpHill(1-1/2500)
[1] 0.1010204

Those two approaches seem consistent, since the estimates are quite close, but they differ from the empirical quantiles,

> quantile(X,1-1/250)
> quantile(X,1-1/2500)

Note that it is also possible to use some functions to derive estimators of those quantities,

> riskmeasures(gpd(X,quantile(X,.975)),1-1/250)
p   quantile      sfall
[1,] 0.996 0.04557655 0.06426859
> riskmeasures(gpd(X,quantile(X,.975)),1-1/2500)
p   quantile     sfall
[1,] 0.9996 0.08925558 0.1215137

(in this application, we have assumed that log-returns were independent and identically distributed… which might be a rather strong assumption).

Delta method and quantile estimation

An alternative (to profile likelihood techniques) to derive confidence intervals is to use the delta method. Consider an estimator \widehat{\theta}_n such that

\sqrt{n}\left(\widehat{\theta}_n-\theta\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(0,\Sigma)

then, for any differentiable transformation g,

\sqrt{n}\left(g(\widehat{\theta}_n)-g(\theta)\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}\left(0,\nabla g(\theta)^\top\,\Sigma\,\nabla g(\theta)\right)

The proof of that result is based on Taylor’s expansion (see here or there for more details on the theory, or even on this blog – here, in French or there in English – for some codes in R). This can be used to derive an asymptotic confidence interval for a quantile. Consider the following dataset

> base1=read.table(
+ "",
+ header=TRUE)
> library(evir)
> X=base1$

It is possible to fit a Generalized Pareto distribution on observations above a given threshold,

In that case, if k observations exceed the threshold, out of a sample of size n, the estimator of the quantile is

\widehat{Q}(p)=u+\frac{\widehat{\beta}}{\widehat{\xi}}\left[\left(\frac{n(1-p)}{k}\right)^{-\widehat{\xi}}-1\right]

i.e. \widehat{Q}(p)=g(\widehat{\xi},\widehat{\beta}) for some differentiable transformation g of the GPD parameters. Computing the gradient \nabla g, it is now possible to implement the delta-method to derive the asymptotic variance of the quantile, and also (asymptotic) confidence intervals.

> u=5
> GPD=gpd(X,u)
> theta=GPD$par.ests
> sigma=GPD$varcov
> k=GPD$n.exceed
> n=length(X)
> p=.975
> Q=u+theta[2]/theta[1]*((n*(1-p)/k)^(-theta[1])-1)
> nabla=c(-theta[2]/theta[1]^2*((1-p)^(-theta[1])-1)-
+ theta[2]/theta[1]*(1-p)^(-theta[1]*log(1-p)),
+ 1/theta[1]*((1-p)^(-theta[1])-1))
> variance=t(nabla)%*%sigma%*%nabla

Based on the assumption of normality, it is possible to derive confidence intervals, and to compare them with the one obtained in R,

> c(Q-1.96/sqrt(k)*sqrt(variance),
+   Q+1.96/sqrt(k)*sqrt(variance))
[1] 13.11562 16.82852
> tailplot(gpd(X,5))
> gpd.q(tailplot(gpd(X,5)), .975, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI Estimate Upper CI
13.33329 14.97207 17.18094