Tag Archives: GPD

Pareto models for risk management

Our paper, with Emmanuel Flachaire, “Pareto models for risk management” is now online…

The Pareto model is very popular in risk management, since simple analytical formulas can be derived for financial downside risk measures (Value-at-Risk, Expected Shortfall) or reinsurance premiums and related quantities (Large Claim Index, Return Period). Nevertheless, in practice, distributions are (strictly) Pareto only in the tails, above a (possibly very) large threshold. Therefore, it could be interesting to take into account second order behavior to provide a better fit. In this article, we present how to go from a strict Pareto model to Pareto-type distributions. We discuss inference, derive formulas for various measures and indices, and finally provide applications on insurance losses and financial risks.
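
To give an idea of those "simple analytical formulas", here is a minimal sketch (not taken from the paper) for a strict Pareto model with survival function S(x) = (u/x)^alpha for x >= u and alpha > 1; the helper names pareto_var and pareto_es are mine.

## closed-form Value-at-Risk and Expected Shortfall under a strict Pareto model
pareto_var <- function(p, u, alpha) u * (1 - p)^(-1/alpha)
pareto_es  <- function(p, u, alpha) alpha/(alpha - 1) * pareto_var(p, u, alpha)
pareto_var(.995, u = 1, alpha = 2)   # 99.5% Value-at-Risk
pareto_es(.995, u = 1, alpha = 2)    # 99.5% Expected Shortfall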

Risk Measures with Extreme Value Models

We’ve seen on Monday, in the MAT8595 course, how to use the Generalized Pareto Distribution to estimate some downside risk measures, given a sample (assumed to be i.i.d., I will not mention here properties of extremes for stochastic processes) with distribution https://latex.codecogs.com/gif.latex?F. The cumulative distribution function of the Generalized Pareto distribution is here

For some threshold u, and https://latex.codecogs.com/gif.latex?x\geq%20u, we can write

From the Pickands–Balkema–de Haan theorem, if u is large enough, then

Given our sample https://latex.codecogs.com/gif.latex?\{x_1,\cdots,x_n\}, let N_u denote the number of observations over the threshold u. Then we can write

or equivalently

If we invert this function, we get the quantile of level p,

Actually, instead of fixing a threshold (and then working with the implied number of observations exceeding that threshold), it is possible to fix the number of observations, the associated threshold then being the corresponding order statistic.
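
Before turning to the reparametrization below, here is a minimal sketch of that plug-in quantile estimator, with the GPD parameters fitted using evir::gpd (the helper name Q_gpd is mine, and the numbers are only indicative):

## plug-in estimator Q(p) = u + beta/xi * ((n/Nu * (1-p))^(-xi) - 1)
library(evir)
data(danish)
X  <- as.numeric(danish)
u  <- 10
n  <- length(X)
Nu <- sum(X > u)
fit  <- gpd(X, u)
xi   <- fit$par.ests["xi"]
beta <- fit$par.ests["beta"]
Q_gpd <- function(p) u + beta/xi * ((n/Nu * (1 - p))^(-xi) - 1)
Q_gpd(.999)   # to be compared with the empirical quantile, quantile(X, .999)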

The density of the Generalized Pareto distribution is here

https://latex.codecogs.com/gif.latex?%20%20%20%20%20g_{(\xi,\sigma)}(x)%20=%20\frac{1}{\sigma}\left(1%20+%20\frac{\xi%20x}{\sigma}\right)^{\left(-\frac{1}{\xi}%20-%201\right)}

which is here a function of two parameters, https://latex.codecogs.com/gif.latex?%20%20\xi and https://latex.codecogs.com/gif.latex?\sigma. As discussed in the course, it is possible to use the Delta method to derive the asymptotic distribution of any quantile, and then get an approximate (asymptotic) confidence interval.

But since https://latex.codecogs.com/gif.latex?\sigma is usually not a parameter of interest, why not consider a reparametrization of our density, as a function of https://latex.codecogs.com/gif.latex?%20%20\xi and https://latex.codecogs.com/gif.latex?Q(p) (for some probability https://latex.codecogs.com/gif.latex?p that will be considered as fixed from now on)? We can easily get (assuming that https://latex.codecogs.com/gif.latex?\xi\neq%200) that

https://latex.codecogs.com/gif.latex?g_{\xi,Q(p)}(x)=\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{\xi[Q(p)-u]}\left(1+\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{[Q(p)-u]}\cdot%20x\right)^{-\frac{1}{\xi}-1}

This expression is simple, and can be used to derive the likelihood (on the observations exceeding the threshold)

https://latex.codecogs.com/gif.latex?\log\mathcal{L}(\xi,Q(p);\boldsymbol{x})=\sum_{i=0}^{N_u-1}%20\log%20g_{\xi,Q(p)}(x_{n-i:n})

Numerically, let us write (and plot) that function. Consider some real data here

> X=as.numeric(danish)
> Xs=sort(X,decreasing=TRUE)
> n=length(X)
> u=10
> nu=sum(X>u)

Consider, say, the 99.9% quantile,

> p=.999

The empirical quantile is here

> quantile(X,p)
   99.9% 
131.5519

The density and the loglikelihood functions are here

> gq=function(x,xi,q){
+ ( (n/nu*(1-p) ) ^ (-xi)-1)/(xi*(q-u))*
+ (1+((n/nu*(1-p))^(-xi)-1)/(q-u)*x)^(-1/xi-1)}

> loglik=function(param){
+ xi=param[2];q=param[1]
+ lg=function(i) log(gq(Xs[i],xi,q))
+ return(-sum(Vectorize(lg)(1:nu)))
+ }

We can try to plot this likelihood using

> h=201
> Q=seq(50,300,length=h)
> XI=seq(.1,1,length=h)
> XIQ=as.matrix(expand.grid(Q,XI))
> M=mapply(loglik,XIQ)

Unfortunately, it was not working, so I used the old style

> M=matrix(NA,h,h)
> for(i in 1:h){for(j in 1:h){M[i,j]=loglik(c(Q[i],XI[j]))}}

The level curves of the log-likelihood are here

> hc=heat.colors(100)
> image(Q,XI,-M,col=hc)
> contour(Q,XI,-M,add=TRUE)

Again, since our interest is in the quantile, we can draw the profile likelihood and get the maximum of that function

> PL=function(Q){
+ profilelikelihood=function(xi){
+ loglik(c(Q,xi))}
+ return(optim(par=.8,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(100,500)))

$minimum
[1] 111.1055

$objective
[1] 454.6481

and the graph is

> XQ=seq(50,300,length=101)
> L=Vectorize(PL)(XQ)
> plot(XQ,-L,type="l")
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1),col="red")
> I=which(-L>=-up-qchisq(p=.95,df=1))
> lines(XQ[I],rep(-up-qchisq(p=.95,df=1),length(I)),
+ lwd=5,col="red")
> abline(v=range(XQ[I]),lty=2,col="red")

which can be seen as an alternative to

> gpd.q(tailplot(gpd(X,u)),.999)
 Lower CI  Estimate  Upper CI 
 64.66184  94.28956 188.91752 

If we want to focus on another downside risk measure, that shouldn’t be too difficult. For instance, the expected shortfall ES(p) can be estimated as

where e(·) denotes the mean excess function, which can be written, with a Generalized Pareto Distribution,

Thus, a natural estimator for the expected shortfall is
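
A minimal sketch of that natural estimator (the helper name ES_gpd is mine) is given below; it uses the relation ES(p) = Q(p) + (beta + xi*(Q(p) - u))/(1 - xi), valid when xi < 1, with n, nu and u as defined in the code above, and (xi, beta) fitted with evir::gpd:

## natural (plug-in) estimator of the expected shortfall, based on the GPD fit
fit  <- gpd(X, u)              # classical (xi, beta) parametrization, from evir
xi   <- fit$par.ests["xi"]
beta <- fit$par.ests["beta"]
ES_gpd <- function(p){
  Qp <- u + beta/xi * ((n/nu * (1 - p))^(-xi) - 1)
  Qp + (beta + xi * (Qp - u))/(1 - xi)
}
ES_gpd(.999)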

Once again, it is possible to re-parametrize the density of the Pareto distribution, using https://latex.codecogs.com/gif.latex?ES(p) instead of https://latex.codecogs.com/gif.latex?\sigma. Here, we get

https://latex.codecogs.com/gif.latex?g_{\xi,ES(p)}(x)=\frac{\displaystyle{\xi+\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{\xi(1-\xi)[ES(p)-u]}\left(1+\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{(1-\xi)[ES(p)-u]}\cdot%20x\right)^{-\frac{1}{\xi}-1}

The code to get the associated log-likelihood is here

> ge=function(x,xi,es){
+ (xi+(n/nu*(1-p))^(-xi)-1)/(xi*(1-xi)*(es-u))*(1+(xi+(n/nu*(1-p))^(-xi)
+ -1)/((es-u)*(1-xi))*x)^(-1/xi-1)
+ }
> loglik=function(param){
+ xi=param[2];es=param[1]
+ lg=function(i) log(ge(Xs[i],xi,es))
+ return(-sum(Vectorize(lg)(1:nu)))
+ }

and again, we can plot it

and the profile (log) likelihood is here (for the 99.9% expected shortfall)

> PL=function(ES){
+ profilelikelihood=function(xi){
+ loglik(c(ES,xi))}
+ return(optim(par=.8,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(100,500)))
$minimum
[1] 143.66

$objective
[1] 454.6481

which could be compared with

> gpd.sfall(tailplot(gpd(X,u)),.999)
 Lower CI  Estimate  Upper CI 
 96.64625 191.36972 394.87555

Likelihood Based Methods, for Extremes

This week, in the MAT8595 course, we will start the section on inference for extreme values. To start with something simple, we will use maximum likelihood techniques on a Generalized Pareto Distribution (we’ve seen on Monday the Pickands–Balkema–de Haan theorem).

  • Maximum Likelihood Estimation

In the context of parametric models, the standard technique is to consider the maximum of the likelihood (or of the log-likelihood). Let θ denote the parameter of the model. Given some – standard – technical assumptions (regularity conditions on the log-likelihood, on some neighbourhood of the true value of the parameter), then

where https://latex.codecogs.com/gif.latex%20?I denotes the Fisher information matrix (see any textbook for a mathematical statistics course). Consider here some i.i.d. sample, from a Generalized Pareto Distribution, with parameter https://latex.codecogs.com/gif.latex?\boldsymbol{\theta}=(\xi,\sigma), so that

https://latex.codecogs.com/gif.latex?%20%20%20%20%20F_{(\xi,\sigma)}(x)%20=%20\begin{cases}%201%20-%20\left(1+%20\frac{\xi%20x}{\sigma}\right)^{-1/\xi}%20&,%20\xi%20\neq%200%20\\%201%20-%20\exp%20\left(-\frac{x}{\sigma}\right)%20&,%20\xi%20=%200%20\end{cases}

If we solve (numerically) the first order condition of the maximum likelihood, we get an estimator  https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) which satisfies

https://latex.codecogs.com/gif.latex?\sqrt{n}\left(\left[\begin{array}{c}\widehat{\xi}_n\\\widehat{\sigma%20}_n\end{array}\right]-\left[\begin{array}{c}\xi_0\\\sigma_0%20\end{array}\right]\right)\rightarrow%20\mathcal{N}\left(\left[\begin{array}{c}0\\0\end{array}\right],\left[\begin{array}{cc}(1+\xi_0)^2%20&%20\sigma_0[1+\xi_0]\\%20\sigma_0%20[1+\xi_0]%20&%202\sigma^2_0(1+\xi_0)%20\end{array}\right]\right)

The idea of this asymptotic normality is the following: if the true distribution of the sample is a GPD with some (true) parameter, then, if https://latex.codecogs.com/gif.latex%20?n is large enough, https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) will have a joint normal distribution. So if we generate a lot of samples (sufficiently large, say 200 observations each), then the scatterplot of the estimators should be the same as the scatterplot of a Gaussian distribution,

> library(evir)
> n=200
> param=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ }
> m=apply(param,2,mean)
> S=var(param)
> library(mnormt)
> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=101)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> COL=rev(heat.colors(100))
> image(x,y,z,col=COL)
> points(param)

and to get a 3d representation

> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=31)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=31)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> persp(x,y,t(z),shade=TRUE,col="green",theta=-30,phi=20,ticktype="detailed",
+ xlab="xi",ylab="sigma")

With 200 observations, if the true underlying distribution is a GPD, then, indeed, the joint distribution of https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) seems to be normal. It would then be interesting to generate some confidence intervals, for instance, or to define some tests.
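
For instance, here is a minimal sketch (my own illustration, not from the course notes) of a Wald-type 95% confidence interval for the tail index, using the asymptotic variance (1+xi)^2/n suggested by the matrix above, with the true xi replaced by its estimate:

## Wald-type confidence interval for xi, based on the asymptotic variance (1+xi)^2/n
library(evir)
set.seed(123)
n   <- 200
x   <- rgpd(n, xi = 1/1.5, beta = 1)
fit <- gpd(x, 0)
xihat <- fit$par.ests["xi"]
xihat + c(-1, 1) * 1.96 * (1 + xihat)/sqrt(n)
fit$par.ses["xi"]   # standard error reported by the fit, for comparison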

To go further, see any standard textbook on mathematical statistics, e.g. Casella & Berger (2002).

  • Delta Method

Another important property is the so-called delta-method (we’ve seen on Monday in class that it is obtained easily using a first order Taylor expansion). The idea is that if https://latex.codecogs.com/gif.latex%20?\widehat{\boldsymbol{\theta}}_n is asymptotically normal, and if the transformation h is sufficiently smooth, then https://latex.codecogs.com/gif.latex%20?h(\widehat{\boldsymbol{\theta}}_n) will also be asymptotically Gaussian. More precisely (see also the header of this blog)

From this property, we can get the normality of https://latex.codecogs.com/gif.latex%20?\widehat{\alpha}_n=\widehat{\xi}_n^{-1} (which is another parametrization used in extreme value models), or of any quantile, https://latex.codecogs.com/gif.latex%20?\widehat{Q}_u=F^{-1}_{\widehat{\boldsymbol{\theta}}_n}(u)=h_u(\widehat{\xi}_n,\widehat{\sigma}_n). Let us run some simulations, one more time, to check that we actually have joint normality.

> library(evir)
> n=200
> param=riskm=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ xihat=param[s,1]
+ shat=param[s,2]
+ q=shat * (.01^(-xihat) - 1)/xihat
+ tvar=q+(shat + xihat * q)/(1 - xihat)
+ riskm[s,]=c(1/xihat,q)
+ }
> m=apply(riskm,2,mean)
> S=var(riskm)
> library(mnormt)
> x=seq(min(riskm[,1])-.05,max(riskm[,1])+.05,length=101)
> y=seq(min(riskm[,2])-.05,max(riskm[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> image(x,y,t(z),col=COL)
> points(riskm)

As we can see below, with samples of size 200, we cannot use this asymptotic result: it looks like we do not have enough data. But if we run the same code with

> n=5000

We get the joint normality of https://latex.codecogs.com/gif.latex%20?\widehat{\alpha}_n and https://latex.codecogs.com/gif.latex%20?\widehat{Q}_n(u). This is what we can obtain from this result, called the delta-method in statistical textbooks. See again Casella & Berger (2002) for more details.
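
As an illustration, here is a minimal sketch (again my own, not from the post) of the delta-method applied to alpha = 1/xi: since d(1/xi)/dxi = -1/xi^2, the asymptotic variance of alphahat is approximately var(xihat)/xihat^4:

## delta-method standard error for alpha = 1/xi, from the GPD fit
library(evir)
set.seed(123)
x     <- rgpd(5000, xi = 1/1.5, beta = 1)
fit   <- gpd(x, 0)
xihat <- fit$par.ests["xi"]
v_xi  <- fit$varcov[1, 1]               # estimated variance of xihat
alphahat <- 1/xihat
se_alpha <- sqrt(v_xi)/xihat^2          # delta-method standard error
alphahat + c(-1, 1) * 1.96 * se_alpha   # asymptotic 95% confidence interval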

  • Profile Likelihood

Another interesting tool is the concept of profile likelihood. This would be interesting here since the main interest is the tail index https://latex.codecogs.com/gif.latex%20?\xi, https://latex.codecogs.com/gif.latex%20?\sigma being here some kind of auxiliary parameter. See Venzon & Moolgavkar (1988) for more details. Here, we will plot

http://freakonometrics.blog.free.fr/public/perso5/proflike01.gif

But more generally, it is possible to consider

http://freakonometrics.blog.free.fr/public/perso5/profilik06.gif

where http://freakonometrics.blog.free.fr/public/perso5/profilik03.gif is the set of interesting parameters. Then (under standard suitable conditions) we can prove that

http://freakonometrics.blog.free.fr/public/perso5/profilik05.gif

which can be used to derive confidence intervals. In the GPD case, for each https://latex.codecogs.com/gif.latex%20?\xi, we have to find an optimal https://latex.codecogs.com/gif.latex%20?\sigma^\star(\xi). We compute the (profile) likelihood, i.e. https://latex.codecogs.com/gif.latex%20?\mathcal{L}(\xi,\sigma^\star(\xi)), and we can compute the maximum of this profile likelihood. This two-stage optimization is, in general, not equivalent to the (global) maximization of the likelihood, as computed below

>  n=500
>  set.seed(1)
>  x=rgpd(n,xi=1/1.5,beta=1)
>  loglikelihood=function(xi,beta){
+  sum(log(dgpd(x,xi,mu=0,beta))) }
>  XIV=(1:300)/100;L=rep(NA,300)
>  for(i in 1:300){
+  XI=XIV[i]
+  profilelikelihood=function(beta){
+  -loglikelihood(XI,beta) }
+  L[i]=-optim(par=1,fn=profilelikelihood)$value }
>  plot(XIV,L,type="l")
>  XIV[which.max(L)]
[1] 0.67
>  gpd(x,0)$par.ests
       xi      beta 
0.6730145 0.9725483

We are not far away. Actually, if we want to compute the maximum of the profile likelihood (and not only compute the values of the profile likelihood on a grid, as before), we use

>  PL=function(XI){
+  profilelikelihood=function(beta){
+  -loglikelihood(XI,beta) }
+  return(optim(par=1,fn=profilelikelihood)$value)}
>  (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6731025

$objective
[1] 822.5574

Observe that, indeed, we are not far away from the maximum likelihood estimator of https://latex.codecogs.com/gif.latex%20?\xi (I believe that it’s mainly a computational issue here, and that the two are similar here… actually, I’d be glad to hear about cases where the maximum of the profile likelihood is not the same as the maximum of the likelihood). The interesting point is that we can use this technique to compute a confidence interval, and even visualize it on a graph

>  up=OPT$objective
>  abline(h=-up)
>  abline(h=-up-qchisq(p=.95,df=1),col="red")
>  I=which(L>=-up-qchisq(p=.95,df=1))
>  lines(XIV[I],rep(-up-qchisq(p=.95,df=1),length(I)),
+  lwd=5,col="red")
>  abline(v=range(XIV[I]),lty=2,col="red")

The vertical lines are the lower and the upper bound of a 95% confidence interval for parameter https://latex.codecogs.com/gif.latex%20?\xi.

To go further, see Murphy, S.A & van der Vaart, A.W. (2000). On Profile Likelihood.

Tests on tail index for extremes

Since several students got the intuition that natural catastrophes might be non-insurable (underlying distributions with infinite mean), I will post some comments on testing procedures for extreme value models.

A natural idea is to use a likelihood ratio test (for composite hypotheses). Let http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif denote the parameter (of our parametric model, e.g. the tail index), and assume we would like to know whether http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif is smaller or larger than http://freakonometrics.blog.free.fr/public/perso5/lrtest22.gif (where, in the context of finite versus infinite mean, http://freakonometrics.blog.free.fr/public/perso5/lrtest23.gif). I.e. either http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif belongs to the set http://freakonometrics.blog.free.fr/public/perso5/lrtest-10.gif or to its complement http://freakonometrics.blog.free.fr/public/perso5/lrtest-11.gif. Consider the maximum likelihood estimator http://freakonometrics.blog.free.fr/public/perso5/lrtest24.gif, i.e.

http://freakonometrics.blog.free.fr/public/perso5/lrtest-9.gif

Let http://freakonometrics.blog.free.fr/public/perso5/lrtest25.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-3.gif denote the constrained maximum likelihood estimators on http://freakonometrics.blog.free.fr/public/perso5/lrtest26.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest27.gif respectively,

http://freakonometrics.blog.free.fr/public/perso5/lrtest-12.gif

http://freakonometrics.blog.free.fr/public/perso5/lrtest-2.gif

Either http://freakonometrics.blog.free.fr/public/perso5/lrtest-13.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-6.gif (on the left), or http://freakonometrics.blog.free.fr/public/perso5/lrtest-14.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-7.gif (on the right)

So likelihood ratios

http://freakonometrics.blog.free.fr/public/perso5/lrtest-15.gif      http://freakonometrics.blog.free.fr/public/perso5/lrtest-16.gif

 are either equal to

http://freakonometrics.blog.free.fr/public/perso5/lrtest-19.gif      http://freakonometrics.blog.free.fr/public/perso5/lrtest-18.gif

or

http://freakonometrics.blog.free.fr/public/perso5/lrtest-20.gif        http://freakonometrics.blog.free.fr/public/perso5/lrtest-17.gif

If we use the code mentioned in the post on profile likelihood, it is easy to derive that ratio. The following graph is the evolution of that ratio, based on a GPD assumption, for different thresholds,

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> U=seq(2,10,by=.2)
> LR=P=ES=SES=rep(NA,length(U))
> for(j in 1:length(U)){
+ u=U[j]
+ Y=X[X>u]-u
+ loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
+ XIV=(1:300)/100;L=rep(NA,300)
+ for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
+ plot(XIV,L,type="l")
+ PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
+ (L0=(OPT=optimize(f=PL,interval=c(0,10)))$objective)
+ profilelikelihood=function(beta){
+ -loglikelihood(1,beta) }
+ (L1=optim(par=1,fn=profilelikelihood)$value)
+ LR[j]=L1-L0
+ P[j]=1-pchisq(L1-L0,df=1)
+ G=gpd(X,u)
+ ES[j]=G$par.ests[1]
+ SES[j]=G$par.ses[1]
+ }
>
> plot(U,LR,type="b",ylim=range(c(0,LR)))
> abline(h=qchisq(.95,1),lty=2)

with on top the values of the ratio (the dotted line is the quantile of a chi-square distribution with one degree of freedom) and below the associated p-value

> plot(U,P,type="b",ylim=range(c(0,P)))
> abline(h=.05,lty=2)

In order to compare, it is also possible to look at a confidence interval for the tail index of the GPD fit,

> plot(U,ES,type="b",ylim=c(0,1))
> lines(U,ES+1.96*SES,type="h",col="red")
> abline(h=1,lty=2)

To go further, see Falk (1995), Dietrich, de Haan & Hüsler (2002), Hüsler & Li (2006) with the following table, or Neves & Fraga Alves (2008). See also here or there (for the latex based version) for an old paper I wrote on that topic.

MAT8886 from tail estimation to risk measure(s) estimation

This week, we conclude the part on extremes with an application of extreme value theory to risk measures. We have seen last week that, if we assume that above a threshold http://freakonometrics.blog.free.fr/public/perso5/qt01.gif, a Generalized Pareto Distribution will fit nicely, then we can use it to derive an estimator of the quantile function (for percentages such that the quantile is larger than the threshold)

http://freakonometrics.blog.free.fr/public/perso5/qt03.gif

If the threshold is http://freakonometrics.blog.free.fr/public/perso5/qt02.gif, i.e. we keep the http://freakonometrics.blog.free.fr/public/perso5/qt04.gif largest observations to fit a GPD, then this estimator can be written

http://freakonometrics.blog.free.fr/public/perso5/qt06.gif

The code we wrote last week was the following (here based on log-returns of the S&P 500 index; we focus on large losses, i.e. large values of the opposite of the log-returns, plotted below)

> library(tseries)
> X=get.hist.quote("^GSPC")
> T=time(X)
> D=as.POSIXlt(T)
> Y=X$Close
> R=diff(log(Y))
> D=D[-1]
> X=-R
> plot(D,X)
> library(evir)
> GPD=gpd(X,quantile(X,.975))
> xi=GPD$par.ests[1]
> beta=GPD$par.ests[2]
> u=GPD$threshold
> QpGPD=function(p){
+ u+beta/xi*((100/2.5*(1-p))^(-xi)-1)
+ }
> QpGPD(1-1/250)
97.5%
0.04557386
> QpGPD(1-1/2500)
97.5%
0.08925095

This is consistent with the following output, for the return period of a yearly event (one observation out of 250 trading days)

> gpd.q(tailplot(gpd(X,quantile(X,.975))), 1-1/250, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI   Estimate   Upper CI
0.04172534 0.04557655 0.05086785

or the decennial one

> gpd.q(tailplot(gpd(X,quantile(X,.975))), 1-1/2500, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI   Estimate   Upper CI
0.07165395 0.08925558 0.13636620

Note that it is also possible to derive an estimator for another population risk measure (the quantile is simply the so-called Value-at-Risk), the expected shortfall (or Tail Value-at-Risk), i.e.

http://freakonometrics.blog.free.fr/public/perso5/qt10.gif

The idea is to write that expression

http://freakonometrics.blog.free.fr/public/perso5/qt11.gif

so that we recognize the mean excess function (discussed earlier). Thus, assuming again that above http://freakonometrics.blog.free.fr/public/perso5/qt01.gif (and therefore above that high quantile) a GPD will fit, we can write

http://freakonometrics.blog.free.fr/public/perso5/qt12.gif

or equivalently

http://freakonometrics.blog.free.fr/public/perso5/qt13.gif

If we substitute estimators for the unknown quantities in that expression, we get

http://freakonometrics.blog.free.fr/public/perso5/qt09.gif

The code is here

> EpGPD=function(p){
+ u-beta/xi+beta/xi/(1-xi)*(100/2.5*(1-p))^(-xi)
+ }
> EpGPD(1-1/250)
97.5%
0.06426508
> EpGPD(1-1/2500)
97.5%
0.1215077

An alternative is to use Hill’s approach (used to derive Hill’s estimator). Assume here that http://freakonometrics.blog.free.fr/public/perso5/qt20.gif, where http://freakonometrics.blog.free.fr/public/perso5/qt21.gif is a slowly varying function. Then, for all http://freakonometrics.blog.free.fr/public/perso5/qt23.gif,

http://freakonometrics.blog.free.fr/public/perso5/qt24.gif

Since http://freakonometrics.blog.free.fr/public/perso5/qt21.gif is a slowly varying function, it seems natural to assume that this ratio is almost 1 (which is true asymptotically). Thus

http://freakonometrics.blog.free.fr/public/perso5/qt25.gif

i.e. if we invert that function, we derive an estimator for the quantile function

http://freakonometrics.blog.free.fr/public/perso5/qt26.gif

which can also be written

http://freakonometrics.blog.free.fr/public/perso5/qt07.gif

(which is close to the relation we derived using a GPD model). Here the code is

> k=trunc(length(X)*.025)
> Xs=rev(sort(as.numeric(X)))
> xiHill=mean(log(Xs[1:k]))-log(Xs[k+1])
> u=Xs[k+1]
> QpHill=function(p){
+ u+u*((100/2.5*(1-p))^(-xiHill)-1)
+ }

with the following Hill plot

For yearly and decennial events, we have here

> QpHill(1-1/250)
[1] 0.04580548
> QpHill(1-1/2500)
[1] 0.1010204

Those quantities are consistent with the previous ones (they are quite close), but they are different from the empirical quantiles,

> quantile(X,1-1/250)
99.6%
0.04743929
> quantile(X,1-1/2500)
99.96%
0.09054039

Note that it is also possible to use functions from the evir package to derive estimators of those quantities,

> riskmeasures(gpd(X,quantile(X,.975)),1-1/250)
p   quantile      sfall
[1,] 0.996 0.04557655 0.06426859
> riskmeasures(gpd(X,quantile(X,.975)),1-1/2500)
p   quantile     sfall
[1,] 0.9996 0.08925558 0.1215137

(in this application, we have assumed that log-returns were independent and identically distributed… which might be a rather strong assumption).

Delta method and quantile estimation

An alternative (to profile likelihood techniques) to derive confidence intervals is to use the delta method. Consider a parameter estimator such that

http://freakonometrics.blog.free.fr/public/perso5/delta02.gif

then for any differentiable transformation

http://freakonometrics.blog.free.fr/public/perso5/delta03.gif

The proof of that result is based on Taylor’s expansion (see here or there for more details on the theory, or even on this blog – here, in French or there in English – for some codes in R). This can be used to derive an asymptotic confidence interval for a quantile. Consider the following dataset

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM

It is possible to fit a Generalized Pareto distribution on observations above a given threshold,

http://freakonometrics.blog.free.fr/public/perso5/mef08.gif

In that case, if http://freakonometrics.blog.free.fr/public/perso5/GPD10.gif observations exceed the threshold, out of a sample of size http://freakonometrics.blog.free.fr/public/perso5/GPD11.gif, the estimator of the quantile is

http://freakonometrics.blog.free.fr/public/perso5/GPD2.gif

i.e. http://freakonometrics.blog.free.fr/public/perso5/GPD05.gif. Then

http://freakonometrics.blog.free.fr/public/perso5/GPD06.gif

while http://freakonometrics.blog.free.fr/public/perso5/GPD07.gif

i.e. it is now possible to implement the delta-method to derive the asymptotic variance of the quantile estimator, and also (asymptotic) confidence intervals.

> u=5
> GPD=gpd(X,u)
> theta=GPD$par.ests
> sigma=GPD$varcov
> k=GPD$n.exceed
> n=length(X)
> p=.975
> Q=u+theta[2]/theta[1]*((n*(1-p)/k)^(-theta[1])-1)
> nabla=c(-theta[2]/theta[1]^2*((1-p)^(-theta[1])-1)-
+ theta[2]/theta[1]*(1-p)^(-theta[1]*log(1-p)),
+ 1/theta[1]*((1-p)^(-theta[1])-1))
> variance=t(nabla)%*%sigma%*%nabla

Based on the assumption of normality, it is possible to derive confidence intervals, and to compare them with the (profile likelihood based) one obtained with evir’s gpd.q function,

> c(Q-1.96/sqrt(k)*sqrt(variance),
+   Q+1.96/sqrt(k)*sqrt(variance))
[1] 13.11562 16.82852
> tailplot(gpd(X,5))
> gpd.q(tailplot(gpd(X,5)), .975, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI Estimate Upper CI
13.33329 14.97207 17.18094

a short word on profile likelihood

Profile likelihood is an interesting tool to visualize and compute confidence intervals for estimators (see e.g. Venzon & Moolgavkar (1988)). As we will use it here, we will plot

http://freakonometrics.blog.free.fr/public/perso5/proflike01.gif

But more generally, it is possible to consider

http://freakonometrics.blog.free.fr/public/perso5/profilik06.gif

where http://freakonometrics.blog.free.fr/public/perso5/profilik03.gif is the set of interesting parameters. Then (under standard suitable conditions)

http://freakonometrics.blog.free.fr/public/perso5/profilik05.gif

which can be used to derive confidence intervals.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> u=5

The function to draw the profile likelihood for the tail index parameter is then

> Y=X[X>u]-u
> loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
> XIV=(1:300)/100;L=rep(NA,300)
> for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
> plot(XIV,L,type="l")

It is possible to use that profile likelihood function to derive a confidence interval,

> PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6315989

$objective
[1] 754.1115
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1)/2,col="red")
> I=which(L>=-up-qchisq(p=.95,df=1)/2)
> lines(XIV[I],rep(-up-qchisq(p=.95,df=1)/2,length(I)),
+ lwd=5,col="red")
> abline(v=range(XIV[I]),lty=2,col="red")

The same graph can be obtained directly with the following code

> library(ismev)
> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

Tail index estimation

These data were collected at Copenhagen Reinsurance and comprise 2167 fire losses over the period 1980 to 1990. They have been adjusted for inflation to reflect 1985 values and are expressed in millions of Danish Kroner. Note that it is also possible to work with a version of the same data where the total claim has been divided into a building loss, a loss of contents and a loss of profits.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> base2=read.table(
+ "http://freakonometrics.free.fr/danish-multivariate.txt",
+ header=TRUE)

Consider here the first dataset (we deal – so far – with univariate extremes),

> X=base1$Loss.in.DKM
> D=as.Date(as.character(base1$Date),"%m/%d/%Y")
> plot(D,X,type="h")

The graph is the following,

A natural idea is then to plot

http://freakonometrics.hypotheses.org/files/2015/12/hill01.gif

i.e.

> Xs=sort(X)
> logXs=rev(log(Xs))
> n=length(X)
> plot(log(Xs),log((n:1)/(n+1)))

Points are on a straight line here. The slope can be obtained using a linear regression,

> B=data.frame(X=log(Xs),Y=log((n:1)/(n+1)))
> reg=lm(Y~X,data=B)
> summary(reg)

Call:
lm(formula = Y ~ X, data = B)

Residuals:
Min       1Q   Median       3Q      Max
-0.59999 -0.00777  0.00878  0.02461  0.20309

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.089442   0.001572   56.88   <2e-16 ***
X           -1.382181   0.001477 -935.55   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.04928 on 2165 degrees of freedom
Multiple R-squared: 0.9975,	Adjusted R-squared: 0.9975
F-statistic: 8.753e+05 on 1 and 2165 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-500):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 500):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.48502 -0.02148 -0.00900  0.01626  0.35798

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.186188   0.010033   18.56   <2e-16 ***
X           -1.432767   0.005105 -280.68   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.07751 on 499 degrees of freedom
Multiple R-squared: 0.9937,	Adjusted R-squared: 0.9937
F-statistic: 7.878e+04 on 1 and 499 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-100):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 100):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.33396 -0.03743  0.02279  0.04754  0.62946

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.67377    0.06777   9.942   <2e-16 ***
X           -1.58536    0.02240 -70.772   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1299 on 99 degrees of freedom
Multiple R-squared: 0.9806,	Adjusted R-squared: 0.9804
F-statistic:  5009 on 1 and 99 DF,  p-value: < 2.2e-16

The slope here is somehow related to the tail index of the distribution. Consider some heavy tailed distribution, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill03.gif, so that http://freakonometrics.hypotheses.org/files/2015/12/hill27.gif, where http://freakonometrics.hypotheses.org/files/2015/12/hill28.gif is some slowly varying function. Equivalently, there exists a slowly varying function http://freakonometrics.hypotheses.org/files/2015/12/hill29.gif such that http://freakonometrics.hypotheses.org/files/2015/12/hill30.gif. Then

http://freakonometrics.hypotheses.org/files/2015/12/hill33.gif

i.e. since a natural estimator for http://freakonometrics.hypotheses.org/files/2015/12/hill35.gif is the order statistic http://freakonometrics.hypotheses.org/files/2015/12/hill36.gif, the slope of the straight line is the opposite of the tail index http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif. The estimator of the slope is (considering only the http://freakonometrics.hypotheses.org/files/2015/12/hill99.gif largest observations)

http://freakonometrics.hypotheses.org/files/2015/12/hill39.gif

Hill‘s estimator is based on the assumption that the denominator above is almost 1 (which means that  http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif, as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif), i.e.

http://freakonometrics.hypotheses.org/files/2015/12/hill02.gif

Note that, if http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, but not too fast, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill12.gif (one can even get http://freakonometrics.hypotheses.org/files/2015/12/hill11.gif with stronger convergence assumptions). Further

http://freakonometrics.hypotheses.org/files/2015/12/hill04.gif

Based on that (asymptotic) distribution, it is possible to get an (asymptotic) confidence interval for http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif

> xi=1/(1:n)*cumsum(logXs)-logXs
> xise=1.96/sqrt(1:n)*xi
> plot(1:n,xi,type="l",ylim=range(c(xi+xise,xi-xise)),
+ xlab="",ylab="",)
> polygon(c(1:n,n:1),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(1:n,xi+xise,col="red",lwd=1.5)
> lines(1:n,xi-xise,col="red",lwd=1.5)
> lines(1:n,xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to work with http://freakonometrics.hypotheses.org/files/2015/12/hill06.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill05.gif. And similarly http://freakonometrics.hypotheses.org/files/2015/12/hill13.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif (and again http://freakonometrics.hypotheses.org/files/2015/12/hill10.gif with additional assumptions on the rate of convergence), and

http://freakonometrics.hypotheses.org/files/2015/12/hill09.gif

(obtained using the delta-method). Again, we can use that result to derive (asymptotic) confidence intervals

> alpha=1/xi
> alphase=1.96/sqrt(1:n)/xi
> YL=c(0,3)
> plot(1:n,alpha,type="l",ylim=YL,xlab="",ylab="",)
> polygon(c(1:n,n:1),c(alpha+alphase,rev(alpha-alphase)),
+ border=NA,col="lightblue")
> lines(1:n,alpha+alphase,col="red",lwd=1.5)
> lines(1:n,alpha-alphase,col="red",lwd=1.5)
> lines(1:n,alpha,lwd=1.5)
> abline(h=0,col="grey")

The Dekkers-Einmahl-de Haan estimator is

http://freakonometrics.hypotheses.org/files/2015/12/hill25.gif

where for

http://freakonometrics.hypotheses.org/files/2015/12/hill21.gif

Then (given again conditions on the speed of convergence i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, with http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif),

http://freakonometrics.hypotheses.org/files/2015/12/hill42.gif
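
Since no code was given above for that estimator, here is a minimal sketch (the function name dedh is mine, and X is the Danish loss dataset used above), based on the usual moment form xi(k) = M1 + 1 - 0.5/(1 - M1^2/M2), where M_j(k) is the average of (log X_{n-i:n} - log X_{n-k:n})^j over the k largest observations, j = 1, 2:

## Dekkers-Einmahl-de Haan (moment) estimator of the tail index
Xs   <- rev(sort(X))
dedh <- function(k){
  M1 <- mean(log(Xs[1:k]) - log(Xs[k+1]))
  M2 <- mean((log(Xs[1:k]) - log(Xs[k+1]))^2)
  M1 + 1 - 0.5/(1 - M1^2/M2)
}
K <- 15:500
plot(K, Vectorize(dedh)(K), type = "l", xlab = "k", ylab = "moment estimator of xi")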

Finally, Pickands‘ estimator

http://freakonometrics.hypotheses.org/files/2015/12/hill26.gif

it is possible to prove that, as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif,

http://freakonometrics.hypotheses.org/files/2015/12/hill41.gif

Here the code is

> Xs=rev(sort(X))
> xi=1/log(2)*log( (Xs[seq(1,length=trunc(n/4),by=1)]-
+ Xs[seq(2,length=trunc(n/4),by=2)])/
+ (Xs[seq(2,length=trunc(n/4),by=2)]-Xs[seq(4,
+ length=trunc(n/4),by=4)]) )
> xise=1.96/sqrt(seq(1,length=trunc(n/4),by=1))*
+sqrt( xi^2*(2^(xi+1)+1)/((2*(2^xi-1)*log(2))^2))
> plot(seq(1,length=trunc(n/4),by=1),xi,type="l",
+ ylim=c(0,3),xlab="",ylab="",)
> polygon(c(seq(1,length=trunc(n/4),by=1),rev(seq(1,
+ length=trunc(n/4),by=1))),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(seq(1,length=trunc(n/4),by=1),
+ xi+xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),
+ xi-xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to use maximum likelihood techniques to fit a GPD distribution over a high threshold.

> library(evd)
> library(evir)
> gpd(X,5)
$n
[1] 2167

$threshold
[1] 5

$p.less.thresh
[1] 0.8827873

$n.exceed
[1] 254

$method
[1] "ml"

$par.ests
xi      beta
0.6320499 3.8074817

$par.ses
xi      beta
0.1117143 0.4637270

$varcov
[,1]        [,2]
[1,]  0.01248007 -0.03203283
[2,] -0.03203283  0.21504269

$information
[1] "observed"

$converged
[1] 0

$nllh.final
[1] 754.1115

attr(,"class")
[1] "gpd"

or equivalently (or almost)

> gpd.fit(X,5)
$threshold
[1] 5

$nexc
[1] 254

$conv
[1] 0

$nllh
[1] 754.1115

$mle
[1] 3.8078632 0.6315749

$rate
[1] 0.1172127

$se
[1] 0.4636270 0.1116136

The interest of the latter function is that it is possible to visualize the profile likelihood of the tail index,

> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

or

> gpd.profxi(gpd.fit(X,20),xlow=0,xup=3)

Hence, it is possible to plot the maximum likelihood estimator of the tail index, as a function of the threshold (including a confidence interval),

> GPDE=Vectorize(function(u){gpd(X,u)$par.ests[1]})
> GPDS=Vectorize(function(u){
+ gpd(X,u)$par.ses[1]})
> u=c(seq(2,10,by=.5),seq(11,25))
> XI=GPDE(u)
> XIS=GPDS(u)
> plot(u,XI,ylim=c(0,2))
> segments(u,XI-1.96*XIS,u,XI+
+ 1.96*XIS,lwd=2,col="red")

Finally, it is possible to use block-maxima techniques.

> gev.fit(X)
$conv
[1] 0

$nllh
[1] 3392.418

$mle
[1] 1.4833484 0.5930190 0.9168128

$se
[1] 0.01507776 0.01866719 0.03035380

The estimator of the tail index is here the last coefficient, on the right.
Since it is rather difficult to install a package in classrooms, here is the source of the R code used here (to fit a GPD to exceedances)

> source("http://freakonometrics.blog.free.fr/public/code/gpd.R")

Next time, we will discuss how to use those estimators.

Tennis and risk management

As mentioned already here, while we were going to Québec City for the workshop, we had interesting discussions in the car, and Maciej mentioned an article recently published in The Actuary,

Hence, I wanted to discuss (extremely) rare event probabilities in tennis. The story is simple: in June 2010, at Wimbledon, Nicolas Mahut and John Isner played the longest match ever, 980 points over more than 11 hours. But first of all, we need a dataset. Thanks to Duncan Murdoch, I have been able to run a short code to build up a dataset:

CITIES=c("berlin","madrid","paris","rolandgarros","wimbledon","sydney",
"beijing","shanghai","singapore","tokyo","melbourne","melbourne-indoor")
YEARS=1970:2009
BASE0=data.frame(YEAR=NA,TRNMT=NA,LENGTH=NA,SETS=NA)
for(i in 1:length(CITIES)){
for(j in 1:length(YEARS)){
city=CITIES[i]
year=YEARS[j]
localization = paste("http://www.resultsfromtennis.com/",
year,"/atp/",city,".html",sep="")
essai = try(readLines(localization), silent=TRUE)
ERROR404=FALSE
if(inherits(essai, "try-error")){ERROR404=TRUE}
if(ERROR404==FALSE){
B=scan(localization,"character")
SETS=NA
LENGTH=NA
if(length(B)>270){
I=(substr(B,1,10)=="class=rez>")
sum(I)
X0=B[I]
X3=as.numeric(substr(X0,11,13))
X2=as.numeric(substr(X0,11,12))
X1=as.numeric(substr(X0,11,11))
X0=X3
X0[is.na(X3)==TRUE]=X2[is.na(X3)==TRUE]
X0[is.na(X2)==TRUE]=X1[is.na(X2)==TRUE]
JL=c(which(substr(B,1,9)=="class=nl>"),length(B))
IL=which(substr(B,1,10)=="class=rez>")
IC=cut(IL,JL)
base=data.frame(IC,X0)
LENGTH=as.numeric(tapply(X0,IC,sum))
SETS=as.numeric(tapply(X0,IC,length))/2}
BASE=data.frame(YEAR=year,TRNMT=city,LENGTH,SETS)
BASE0=rbind(BASE0,BASE)}}}
write.table(BASE0,"BASE-TENNIS-TOTAL.txt")

Here I consider only tournaments where players have to win 3 sets (and actually more tournaments than those in the code above), and I have a bit more than 72,000 matches,

> I=is.na(TENNIS$LENGTH)==FALSE
> BT=TENNIS[I,]
> nrow(BT)
[1] 72754
> maxr=function(x){max(x,na.rm=TRUE)}
> T=paste(BT$TRNMT,BT$YEAR)
> DUREE=tapply(BT$SETS,T,maxr)
> LISTE=names(DUREE[DUREE>3])
> BT=BT[T%in%LISTE,]

so, if we look briefly at matches over 35 years, we have the following boxplot (one boxplot per year),

The red line is the epic Isner-Mahut match of June 2010 (4-6, 6-3, 7-6, 6-7, 70-68, i.e. 183 games; see here for the score card).

If we study the theory (e.g. from Paul Newton and Kamran Aslam), a lot of results can be obtained for the expected number of games, but if we want to study extremely rare events, we should generate Markov chains (with a lot of generations, since the probability should be extremely small). But how many? Consider below matches with more than 50 games,

The tail plot (over 50), i.e. the log-log Pareto plot, indicates that it will be difficult to study tails,

and similarly with the Hill plot (assuming that the tails are of Pareto type…)

Anyway, if we want to study tails, we should consider a high enough threshold. For instance, with a threshold at 68 (we keep only 24 matches), we have

> X=BT$LENGTH     # number of games per match
> seuil=68+0.25
> GPD1=gpd(X,seuil,method = "ml")
> GPD2=gpd(X,seuil,method = "pwm")
>
> xi=GPD1$par.ests[1]
> mu=seuil
> beta=GPD1$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD1$p.less.thresh)*P)
[1] 5.621281e-09
>
> xi=GPD2$par.ests[1]
> mu=seuil
> beta=GPD2$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD2$p.less.thresh)*P)
[1] 3.027095e-09

I.e. the probability that a match lasts more than 183 games is of the order of a few chances over a billion… With, say, 2,500 matches per year, that gives us a return period of the order of 100,000 years. So yes, we might say that this was a rare event… So perhaps, by generating several billions of chains, it should be possible to get a more precise estimate of the probability of playing 183 games in a single match…
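
To make that return-period arithmetic explicit, here is a small sketch (2,500 matches a year being only a rough order of magnitude, not an official figure):

## return periods implied by the two exceedance probabilities above
p_ml  <- 5.621281e-09       # maximum likelihood fit
p_pwm <- 3.027095e-09       # probability weighted moments fit
1/(2500 * c(p_ml, p_pwm))   # in years: roughly 70,000 and 130,000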

Detecting distributions with infinite mean

In a post I published a few months ago (in French, here, based on some old paper, there), I mentioned a statistical procedure to test whether the underlying distribution https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO02.png of an i.i.d. sample https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO01.png has a finite mean (based on extreme value results). Since I just used it on a small dataset (yes, with real data), I decided to post the R code, since it is rather simple to use. But instead of working on that dataset, let us see what happens on simulated samples. Consider https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO03.png=200 observations generated from a Pareto distribution

https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO04.png

(here https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO05.png=2, as a start)

> a=2
> X=runif(200)^(-1/a)

Here, we will use the package developed by Mathieu Ribatet,

> library(RFA)
  • Using Generalized Pareto Distribution (and LR test)

A first idea is to fit a GPD distribution on observations that exceed a threshold https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO11.png>1.
Since we would like to test https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO06.png (against the assumption that the expected value is finite, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO07.png), it is natural to consider the likelihood ratio

https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO08.png

Under the null hypothesis, the distribution of https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO09.png should be a chi-square distribution with one degree of freedom. As mentioned here, the significance level is attained with higher accuracy by employing a Bartlett correction (there). But let us make it as simple as possible for the blog, and use the chi-square distribution to derive the p-value.
Since it is rather difficult to select an appropriate threshold, it can be natural (as with Hill’s estimator) to consider https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO10.png, and thus to fit a GPD on the https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO13.png largest values, and then to plot all that on a graph (like the Hill plot)

> Xs=rev(sort(X))
> s=0;G=rep(NA,length(Xs)-14);Gsd=G;LR=G;pLR=G
> for(i in length(X):15){
+ s=s+1
+ FG=fitgpd(X,Xs[i],method="mle")
+ FGc=fitgpd(X,Xs[i],method="mle",shape=1)
+ G[s]=FG$estimate[2]
+ Gsd[s]=FG$std.err[2]
+ FGc$fixed
+ LR[s]=FGc$deviance-FG$deviance
+ pLR[s]=1-pchisq(LR[s],df=1)
+ }

Here we keep the estimated value of the tail index, and the associated standard deviation of the estimator, to draw a confidence interval (assuming that the maximum likelihood estimator is Gaussian, which is correct only when n is extremely large). We also calculate the deviance of the model, the deviance of the constrained model (https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO06.png), and the difference, which is the log-likelihood ratio statistic. Then we calculate the p-value (since under https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO12.png the likelihood ratio statistic has a chi-square distribution).
If https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO05.png=2, we have the following graph, with on top the p-value (which is almost null here), and below the estimation of the tail index based on the https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO13.png largest values (with a confidence interval for the estimator),

If https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO05.png=1.5 (finite mean, but infinite variance), we have

i.e. for those two models, we clearly reject the assumption of infinite mean (even if, when https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO05.png gets too close to 1, we should consider large enough thresholds). On the other hand, if https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO05.png=1 (i.e. infinite mean), we clearly accept the assumption of infinite mean (whatever the threshold),

  • Using Hill’s estimator

An alternative could be to use Hill’s estimator (with Alexander McNeil’s package). See here for more details on that estimator. The test is simply based on the confidence interval derived from the (asymptotic) normal distribution of Hill’s estimator,

> library(evir)
> Xs=rev(sort(X))
> HILL=hill(X)
> G=rev(HILL$y)
> Gsd=rev(G/sqrt(HILL$x))
> pLR=1-pnorm(rep(1,length(G)),mean=G,sd=Gsd)

Again, if https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO05.png=2, we clearly reject the assumption of infinite mean,

and similarly, if https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO05.png=1.5 (finite mean, but infinite variance)

Here the test is more robust than the one based on the GPD. And if https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO05.png=1 (i.e. infinite mean), again we accept https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO12.png,

Note that if https://perso.univ-rennes1.fr/arthur.charpentier/latex/paretoRO05.png=0.7, it is still possible to run the procedure, and, as one would hope, it suggests that the underlying distribution has an infinite mean,

(here without any doubt). Now you need to wait a few days to see some practical applications of the idea (there was one in the paper mentioned above actually, on business interruption insurance losses).

Some historical remarks on extreme values

I will start here a short post on extreme values, with some historical perspective. In a recent paper (in French), I mentioned the use of the Pareto distribution as a standard model for extremes, but while reinsurers have been using the Pareto distribution for a long time (see e.g. here), the oldest mathematical models dealing with extreme values are related to work on maximum values in finite samples.

  • The work of Ronald Fisher and Leonard Tippett

Leonard Henry Tippett, a former student of Karl Pearson, published a note on extremes in Biometrika in 1925. The goal was “the determination of the distribution of the range and the extremes for a large number of samples”. At that time, everyone was looking for the Gaussian distribution everywhere, and Leonard Tippett observed that the distribution of the largest value was not Gaussian.
A few years later, a joint work with Ronald Fisher was presented to the Cambridge Philosophical Society. The starting point was the idea of “stability” (even if the term did not appear explicitly in their work): the limiting distribution of the maximum should be of the “same type” as the underlying distribution. Thus, if https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-01.png stands for the cumulative distribution function, it should satisfy the functional equation

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-02.png

Solutions of that functional equation will give all possible limiting distributions. Thus, Fisher and Tippett obtained three possible limits,

  • solutions of https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-03.png, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-04.png
  • solutions of https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-05.png, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-06.png with https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-07.png (i.e. finite lower bound for the support), i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-08.png
  • solutions of https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-05.png, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-10.png if https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-11.png (i.e. finite upper bound for the support), i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-09.png

Based on those possible limiting distributions, Fisher and Tippett wanted to derive what has later been called the “domain of attraction” of those distributions.
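
For reference, written in modern notation (and not in Fisher and Tippett's original parametrization), those three limiting types are usually given as

\Lambda(x)=\exp\left(-e^{-x}\right),\quad x\in\mathbb{R} \qquad \text{(Gumbel type)}

\Phi_\alpha(x)=\exp\left(-x^{-\alpha}\right),\quad x>0,\ \alpha>0 \qquad \text{(Fréchet type, support bounded below)}

\Psi_\alpha(x)=\exp\left(-(-x)^{\alpha}\right),\quad x<0,\ \alpha>0 \qquad \text{(Weibull type, support bounded above)}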

  • The work of Maurice Fréchet, at the same time

In 1926, Maurice Fréchet wrote a paper on “la loi de probabilité de l’écart maximum”. That paper, as well as the one by Fisher and Tippett (written at the same time), investigated asymptotic limits. Both obtained functional equations, but only Maurice Fréchet understood the importance of the stability concept, pointed out by Paul Lévy in the context of sums. Thus, Maurice Fréchet introduced the concept of what is now called “max-stability”. But Fréchet solved only the functional equation https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-05.png. The point is that Fréchet studied absolute values of errors, i.e. strictly positive random variables. Thus, Maurice Fréchet considered the distribution

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-12.png

where https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-92.png is an arbitrary positive constant. The “2” comes from the fact that Fréchet considered errors with respect to the median. But he did not only introduce that new distribution function, he also proved that this distribution appears as a limit when the underlying distribution of the https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-13.png‘s has an algebraic behavior at infinity, i.e. is equivalent to https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-90.png, for some https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-91.png. I.e. he proved that Pareto-type tailed distributions were in the domain of attraction of the Fréchet distribution.

  •  Later on, the work of Emil Gumbel

In 1932, Emil Gumbel gave a talk in France on the “âge limite”. But as he wrote, “on peut donc supposer que la distribution de l’âge limite – c’est à dire la probabilité que la probabilité de cet âge ait une valeur donnée – soit Gaussienne”. But a few years later, he read about Fisher’s work, and also observed that “la distribution d’une valeur extrêmes peut être représentée pour un nombre suffisant d’observations par la formule doublement exponentielle, pourvu que la distribution initiale se comporte asymptotiquement comme une exponentielle. La formule devient rigoureuse si la distribution initiale est exponentielle”, as he wrote in 1935. Thus, as Fréchet proved that Pareto type distributions were in the max-domain of attraction of Fréchet’s distribution, Gumbel obtained that exponential type distributions were in the max-domain of attraction of Gumbel’s distribution. He also introduced the term “distribution de type exponentiel”.
For Emil Gumbel, it was natural to study the logarithmic derivative of the distribution, since it is the mortality rate in demography (an area that Emil Gumbel had studied previously). As he mentioned, “d’un point de vue théorique, il est intéressant de noter que M. Fréchet a construit une distribution initiale d’une variable aléatoire pour laquelle la valeur absolue de la dérivée logarithmique diminue sans limite”. But since it was not a valuable property for practical applications, he decided that “nous nous bornerons au traitement des données de type exponentiel”. Emil Gumbel always tried to relate his work on extremes to what he did on demography.
For instance, in 1937, he wrote a paper on “les centenaires” that can also be related to the work of Bortkiewicz on rare events. He also applied his work to radioactivity and to hydrology.
In the 30’s, hydrologists such as Hazen or Graszberger introduced the concept of “yearly maximum” of a river level. They actually proposed to look for actuarial models to study decennial or centennial floods, but they only used the lognormal distribution to model yearly maxima. In 1936, the French hydrologist Aimé Coutagne met Emil Gumbel (who was teaching at the ISFA, in Lyon). At that time, Emil Gumbel was looking for possible applications (outside demography) of his doubly exponential distribution. As pointed out by Aimé, “sa formule devait être applicable au cas des crues; c’est à dire des plus grands débits, problème analogue à celui des plus grands âges”. Not only did Gumbel’s distribution give better empirical results, it also came with a theoretical justification.

  • Gumbel’s distribution properties

Consider the Gumbel distribution, with location and scale parameters alpha and beta respectively, i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-40.png

Note that the associated quantile function is

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-41.png

with mean

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-43.png

and variance

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-44.png
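
As a quick numerical check of those last two formulas (my own sketch, not from the original post), we can simulate from the Gumbel distribution by inverting the quantile function, and compare the empirical mean and variance with alpha + beta*gamma (gamma being the Euler-Mascheroni constant) and pi^2*beta^2/6.

## simulation check of the Gumbel mean and variance, with alpha = 0 and beta = 1
alpha <- 0; beta <- 1
set.seed(1)
x <- alpha - beta * log(-log(runif(1e6)))   # inversion of the quantile function
c(mean(x), alpha + beta * 0.5772157)        # mean vs alpha + beta * gamma
c(var(x), pi^2 * beta^2/6)                  # variance vs pi^2 * beta^2 / 6
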
  • The work of Waloddi Weibull

Waloddi Weibull, a Swedish physicist, proposed a distribution in 1939 to represent the distribution of the breaking strength of materials. He used it in the 50’s in the context of reliability. Actually, the Weibull distribution appeared late in the story of extremes, since Fréchet, Fisher and Tippett had already mentioned it in the mid-20’s.

  • From the central limit theorem (on the average) to Fisher-Tippett theorem (on the maxima)

In order to visualize those two theorems, consider the following animation, where samples of 20 exponential variables are generated. From those 20 values, we plot the maximum in blue, and the average in red, on top. Just below, we rescale those points by considering https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-16.png, and below again, https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-15.png. We then look at the position of https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-14.png and the one of the mean of https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-15.png. We then build a histogram to visualize the distribution of the rescaled maximum (in blue) and the rescaled average (in red).

For those who might be busy, after 1000 generations of samples, we obtain the following histograms: below, the rescaled average of exponential variables, which looks Gaussian even with only 20 observations (actually, the Gaussian distribution is only asymptotic, i.e. we should consider samples of size 2000); and on top, the rescaled maximum over 20 observations of exponential variables, which looks like a Gumbel distribution (which is the asymptotic distribution for exponential type variables, and almost the exact one here).
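
Here is a minimal sketch reproducing those histograms (my own code; the rescalings used below are the standard ones, namely the maximum minus log(n), and sqrt(n) times the average minus 1):

## rescaled maximum vs rescaled average, for samples of 20 exponential variables
set.seed(1)
n  <- 20
ns <- 1000
M  <- A <- rep(NA, ns)
for (s in 1:ns){
  x    <- rexp(n)
  M[s] <- max(x) - log(n)           # Gumbel limit
  A[s] <- sqrt(n) * (mean(x) - 1)   # Gaussian limit (central limit theorem)
}
par(mfrow = c(2, 1))
hist(M, breaks = 50, col = "blue", main = "rescaled maximum")
hist(A, breaks = 50, col = "red",  main = "rescaled average")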

  • The GEV distribution

The unified expression of those three distributions is called the GEV distribution. The generalized extreme value distribution has cumulative distribution function

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-20.png

for https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-21.png, where https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-22.png is the location parameter, https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-23.png the scale parameter and https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-24.png the shape parameter. Note that the expected value is
https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-30.png
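
Since the expressions above are only displayed as images, here is a minimal sketch of the GEV cumulative distribution function and of its expected value, written as plain R functions (valid for a non-zero shape parameter, the mean being finite only when the shape is below one); the function names and the parameter values are purely illustrative.

# GEV cdf and mean, for shape parameter xi != 0 (illustrative values)
pGEV = function(x, mu = 0, sigma = 1, xi = .2){
  z = pmax(1 + xi*(x - mu)/sigma, 0)     # support constraint 1 + xi (x - mu)/sigma > 0
  exp(-z^(-1/xi))
}
meanGEV = function(mu = 0, sigma = 1, xi = .2) mu + sigma*(gamma(1 - xi) - 1)/xi
curve(pGEV(x), from = -2, to = 10, ylab = "GEV cdf")
meanGEV()
# quick Monte Carlo check of the mean, simulating by inversion of the cdf
qGEV = function(p, mu = 0, sigma = 1, xi = .2) mu + sigma*((-log(p))^(-xi) - 1)/xi
mean(qGEV(runif(1e6)))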

Reinsurance models

A paper on reinsurance models has been published in the journal Risques (here), following a discussion on the relevance of the classical models used by reinsurers. The initial question came from the observation – which can be found here or there – about the use (or misuse) of sophisticated models in financial markets, while trying to explain that – at least from an epistemological point of view – the models used by reinsurers are more robust. In particular, note that the oldest models used by reinsurers (in particular the Pareto distribution, as I mentioned here) had a practical legitimacy for several decades before being justified by extreme value theory. I will soon write a post on the history of extreme values in statistics, coming back in particular to the work of Gumbel and Fréchet.

The code used in the paper is online here, together with the dataset

> sinpe = read.table("https://perso.univ-rennes1.fr/arthur.charpentier/sinpe.csv",header=TRUE,sep=";")
> head(sinpe)
      DSUR  MNTPE
1 19850206 240439
2 19851228 125674
3 19850504 488331
4 19851118 457347
5 19850220 990919
6 19851214 182939
> annee=as.numeric(substr(as.character(sinpe$DSUR),1,4))
> sinistres=sinpe$MNTPE[annee>1992]
> XS=sinistres/100000

Here we restrict ourselves to claims that occurred after 1992. Plotting those claim amounts, we obtain

> datesur=as.Date(as.character(sinpe$DSUR),"%Y%m%d")
> jour=datesur[annee>1992]
> plot(jour,sinistres/100000,xlab="",ylab="Coût individuel",cex=.5,ylim=c(0,600))
> ded=50
> abline(h=ded)

We can also draw the Pareto log-log plot,

> library(evir)
> n=length(XS)
> plot(log(sort(XS)),log((n:1)/(n+1)),
+ xlab="Coûts des sinistres (logarithme)",ylab="Fonction de survie (logarithme)",cex=.8)
> out <- gpd(XS, 15)
> XI=as.numeric(out$par.ests[1]); BETA=as.numeric(out$par.ests[2]) 
> x0=seq(2,8,.01)
> lines(x0,-1/XI*(x0-log(15)),col="red")

(the Pareto fit being used to draw the red line), or we can plot the Hill estimator of the tail index,

> hill(XS)

Finally, the following code computes the pure premium (or, more generally, a Wang premium), either non-parametrically (burning cost) or using a fitted Pareto distribution. Since the paper is meant for a general audience, I chose the threshold arbitrarily, without searching for an "optimal" value

> DEDUC = seq(10,50,by=5)
> lambda=0;  seuil=5
> WG1=WG2=rep(NA,length(DEDUC))
> for(k in 1:length(DEDUC)){
+ deductible=DEDUC[k]
+ out <- gpd(XS, seuil)
+ XI=as.numeric(out$par.ests[1]); BETA=as.numeric(out$par.ests[2]) 
+ G0=function(x){1-pgpd(x+seuil, xi = XI, mu = seuil, beta = BETA)}
+ G=function(x){(G0(x+deductible-seuil))/(G0(deductible-seuil))}
+ F=function(x){pnorm(qnorm(G(x))+lambda)}
+ (wang1=integrate(F, 0, Inf))
+ X=XS[XS>deductible]
+ n=length(X)
+ FS= function(z){
+ m=rep(NA,length(z))
+ for(i in 1:length(m)){
+ m[i]=sum(X>z[i]+deductible)/n}
+ return(m)
+ }
+ G=function(x){pnorm(qnorm(FS(x))+lambda)}
+ (wang2=sum(G(seq(0,800,.01))*.01))
+ WG1[k]=as.numeric(wang1$value)
+ WG2[k]=wang2
+ }
> plot(DEDUC,WG2-DEDUC,type='b',xlab="Niveau de la priorité ('00 000 euros)",ylab="Prime pure par sinistres réassuré",ylim=c(0,50))
> lines(DEDUC,WG1-DEDUC,type='b',col="red",pch=4)
> legend(10,50,c("Ajustement d'une loi de Pareto","'Burning cost'"),
+ col=c("red","black"),lty=1,cex=.8,pch=c(4,1))

Can we do without formalism when talking about extremes?

All the economics blogs are welcoming the paperback release of Daniel Zajdenweber's nice little book, Economie des Extrêmes. In particular, many praise this book for explaining complex things simply... For instance Alexandre, already back in 2001: "once past the first chapter, which is a bit difficult and requires from the reader some basic knowledge of statistics and probability (the notions of probability distribution, expectation, variance...) and which describes in literary terms the characteristics of those distributions, the author applies these results to a large number of concrete phenomena, and draws the consequences". But can we talk about the economics of extremes without being technical?

So that my message is not distorted: I find this little introductory book on extreme risks (a pet topic of mine for a few years now) fascinating, but I hope it will serve as an encouragement to read more detailed books on the subject, because popularization has limits that are quickly reached when discussing such complex topics.

The example I have studied the most is that of business interruption claims (discussed at length by Daniel Zajdenweber in his book). A few years ago, I used that part of the book as the basis for an exam in the "reinsurance and large risks" course I was then teaching at ENSAE. And since my literary skills are, unfortunately, very limited, I will do some mathematics. In the book, the following figure is presented,

which indeed corresponds to the function plotted as early as 1925 by Karl Gustav Hagstroem (I had highlighted (here) his pioneering work, in which the relevance of the Pareto distribution for modelling very large risks appeared for the first time). It is indeed quite natural: if we have a Pareto distribution, i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z01.png

then, taking logarithms, we can write

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z02.png

If we plot the empirical version, that is, the scatterplot of points

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z03.png

then, for a Pareto distribution, the points should be aligned along a straight line, and the slope should correspond to the exponent of the power function. This is obviously the idea used here.
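
To make that construction explicit, here is a minimal sketch on simulated Pareto claims (the sample size, the exponent alpha = 1.5 and the scale x0 are purely illustrative, since the dataset used in the book is not public): the empirical survival function is plotted on a log-log scale, and the slope is estimated by ordinary least squares.

# Pareto log-log plot on simulated claims, with an OLS estimate of the slope
set.seed(1)
alpha = 1.5; x0 = 1
X = x0/runif(5000)^(1/alpha)          # Pareto(x0, alpha) simulated by inversion
n = length(X)
logx = log(sort(X))
logS = log((n:1)/(n + 1))             # empirical survival function, log scale
plot(logx, logS, cex = .5, xlab = "log(claim size)", ylab = "log(survival)")
abline(lm(logS ~ logx), col = "red")  # fitted line, slope should be close to -alpha
coef(lm(logS ~ logx))[2]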

In other words, the dashed lines are not a confidence interval, but simply a graphical device to ask whether the slope equals 1 or not. Daniel Zajdenweber claims that the slope here should be -1.

Whether or not the exponent is equal to one has a very strong impact on the insurability of business interruption risk. Recall that, for a positive random variable (which is the case here), the expected value is the integral of the survival function. And if we have such a Pareto distribution (with unit exponent), then the pure premium of a reinsurance treaty covering the layer between m and M can be written

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z06.png

that is

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z07.png

which matches Daniel Zajdenweber's computations... But once again, "the absence of a mathematical expectation for the claim distribution" is a very strong conclusion, on which we can try to come back. Since, for a positive variable,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z04.png

and, here,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z05.png

in other words, the expectation is finite if the slope is strictly greater than 1 (in absolute value). If the slope is lower than or equal to 1, the risk is not insurable! This is a very, very strong conclusion for insurers.
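
Since those formulas only appear as images, here is a minimal sketch of the computation (the Pareto scale x0, the exponent and the layer bounds m and M are purely illustrative): the pure premium of the layer is the integral of the survival function between m and M; it grows like log(M/m) when the exponent equals one, and it stays bounded as M increases when the exponent is strictly larger than one.

# pure premium of a reinsurance layer [m, M] under a Pareto distribution
x0 = 1; m = 10; M = 1000                               # illustrative values
S = function(x, alpha) (x/x0)^(-alpha)                 # Pareto survival function
layer_premium = function(alpha) integrate(S, m, M, alpha = alpha)$value
layer_premium(1);   x0*log(M/m)                        # alpha = 1: grows like log(M/m)
layer_premium(1.5); x0^1.5*(m^(-.5) - M^(-.5))/.5      # alpha = 1.5: bounded as M grows
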
I therefore asked the FFSA for the database used here (taking care of possible claim-cost inflation between 1992 and 2000). If I take all the claims, I obtain the following Pareto fit

that is, a slope of 1.47 (in absolute value). But once again, the Pareto fit should be restricted to the large claims. Hill proposed a very popular estimator of this coefficient, where only the k largest observations are used, and the slope of the Pareto plot is estimated from those few values. The estimate is then plotted as a function of the number of large claims, or of the threshold defining the large claims. Numerically, writing

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z08.png

the slope can be estimated by

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z09.png

or, simplifying the numerator,

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z10.png

which is the estimator constructed by Hill in 1975. Graphically, we obtain the following Hill plot,

In short, the question is whether the value 1 is reached for the largest claims. Graphically, one is nevertheless tempted to reject that hypothesis.
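
The hill function of the evir package plots exactly this curve; as a complement, here is a minimal sketch of the estimator itself, written from the order statistics on simulated Pareto data (the FFSA dataset is not public, and the helper hill_xi is a purely illustrative name): the Hill estimator of the tail index based on the k largest observations, and the corresponding estimated slope -1/xi.

# Hill estimator of the tail index, from the k largest observations
set.seed(1)
X  = 1/runif(5000)^(1/1.5)            # simulated Pareto sample, true slope -1.5
Xs = sort(X, decreasing = TRUE)
hill_xi = function(k) mean(log(Xs[1:k])) - log(Xs[k + 1])
K = 20:500
plot(K, -1/sapply(K, hill_xi), type = "l",
     xlab = "number of largest claims k", ylab = "estimated slope")
abline(h = -1.5, lty = 2)             # true slope of the simulated sample
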
One solution can be to run a statistical test based on a likelihood ratio, as suggested in Reiss & Thomas (2001) or Coles (2001). Actually, we can even use estimators other than Hill's, such as the one obtained by fitting a GPD (generalized Pareto distribution) to the distribution of the excesses over a threshold, or a GEV (generalized extreme value) distribution to block maxima. We then introduce the following test statistic

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z12.png

and we look at the p-values (as well as the Bartlett correction, on the right).
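
Since neither the test statistic nor the figure is reproduced here, a minimal sketch of such a likelihood ratio test, testing a unit tail index (xi = 1) in the GPD fitted to the excesses over a threshold, could look as follows; the data, the threshold and the starting values are purely illustrative, and the Bartlett correction is not included.

# likelihood ratio test of xi = 1 in the GPD fitted to the excesses over u
set.seed(1)
X = 1/runif(5000)^(1/1.5)              # simulated Pareto sample (true xi = 1/1.5)
u = quantile(X, .95)
y = X[X > u] - u                       # excesses over the threshold
nll = function(par){                   # GPD negative log-likelihood
  xi = par[1]; beta = par[2]
  if(beta <= 0 | any(1 + xi*y/beta <= 0)) return(1e10)
  length(y)*log(beta) + (1/xi + 1)*sum(log(1 + xi*y/beta))
}
free = optim(c(.5, mean(y)), nll)                                  # unconstrained fit
cons = optimize(function(b) nll(c(1, b)), c(1e-4, 10*mean(y)))     # constrained fit, xi = 1
LR = 2*(cons$objective - free$value)
1 - pchisq(LR, df = 1)                 # p-value of the test xi = 1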

More simply, we can also estimate the slope coefficient for several different thresholds, and look at the upper bound of the confidence interval.

In short, even if, with one of the GPD fits, one might hesitate to rule out a unit slope, most tests reject that hypothesis, so business interruption risk does seem to be insurable, with a finite expected value. Pictures are great for conveying an idea, but relying on them alone to draw such strong conclusions leaves me skeptical...