Tag Archives: Extremes

Risk Measures with Extreme Value Models

We’ve seen on Monday, in the MAT8595 course, how to use the Generalized Pareto Distribution to estimate some downside risk measures, given a sample (assumed to be i.i.d., I will not mention here properties on extremes for stochastic processes) with distribution https://latex.codecogs.com/gif.latex?F. The cumulative distribution function of the (Generalized) Pareto distribution is here

G_{(\xi,\sigma)}(x)=1-\left(1+\frac{\xi x}{\sigma}\right)^{-1/\xi}

For some threshold u, and https://latex.codecogs.com/gif.latex?x\geq%20u, we can write

\overline{F}(x)=\overline{F}(u)\cdot\mathbb{P}(X>x\,\vert\,X>u)=\overline{F}(u)\cdot\overline{F}_u(x-u)

From the Pickands–Balkema–de Haan theorem, if u is large enough, then the distribution of the excesses is approximately a Generalized Pareto Distribution,

\overline{F}_u(x-u)\approx\left(1+\frac{\xi(x-u)}{\sigma}\right)^{-1/\xi}

Given our sample https://latex.codecogs.com/gif.latex?\{x_1,\cdots,x_n\}, let N_u denote the number of observations over the threshold u. Then we can write

\widehat{\overline{F}}(x)=\frac{N_u}{n}\left(1+\frac{\widehat{\xi}(x-u)}{\widehat{\sigma}}\right)^{-1/\widehat{\xi}}

or equivalently

\widehat{F}(x)=1-\frac{N_u}{n}\left(1+\frac{\widehat{\xi}(x-u)}{\widehat{\sigma}}\right)^{-1/\widehat{\xi}}

If we invert this function, we get the quantile of level p,

\widehat{Q}(p)=u+\frac{\widehat{\sigma}}{\widehat{\xi}}\left[\left(\frac{n}{N_u}(1-p)\right)^{-\widehat{\xi}}-1\right]

Actually, instead of fixing a threshold and then working with the implied number of observations exceeding that threshold, it is also possible to fix the number of exceedances, and then the associated threshold will be the corresponding order statistic.
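
Before turning to the reparametrized likelihood below, here is a minimal sketch of that plug-in quantile estimator, assuming the evir package (and its danish fire-loss dataset, which is the one used in the code below), simply inserting the GPD parameters fitted by evir::gpd into the formula above,

library(evir)
data(danish)
X=as.numeric(danish)
u=10                               # threshold
n=length(X); nu=sum(X>u)           # sample size and number of exceedances
fit=gpd(X,u)                       # ML fit of the GPD on the excesses
xi=fit$par.ests["xi"]; sigma=fit$par.ests["beta"]
p=.999
u+sigma/xi*((n/nu*(1-p))^(-xi)-1)  # plug-in estimate of Q(p)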

The density of the Pareto distribution is here

https://latex.codecogs.com/gif.latex?%20%20%20%20%20g_{(\xi,\sigma)}(x)%20=%20\frac{1}{\sigma}\left(1%20+%20\frac{\xi%20x}{\sigma}\right)^{\left(-\frac{1}{\xi}%20-%201\right)}

which is here a function of two parameters, https://latex.codecogs.com/gif.latex?%20%20\xi and https://latex.codecogs.com/gif.latex?\sigma. As discussed in the course, it is possible to use the Delta method to derive the asymptotic distribution of any quantile, and then get an approximated (asymptotic) confidence interval.

But since https://latex.codecogs.com/gif.latex?\sigma is usually not a parameter of interest, why not consider a reparametrization of our density, as a function of https://latex.codecogs.com/gif.latex?%20%20\xi and https://latex.codecogs.com/gif.latex?Q(p) (for some probability https://latex.codecogs.com/gif.latex?p that will be considered as fixed from now on)? Solving the quantile expression above for the scale parameter gives \sigma=\xi[Q(p)-u]\big/\big[\big(\tfrac{n}{N_u}(1-p)\big)^{-\xi}-1\big], so we easily get (assuming that https://latex.codecogs.com/gif.latex?\xi\neq%200) that

https://latex.codecogs.com/gif.latex?g_{\xi,Q(p)}(x)=\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{\xi[Q(p)-u]}\left(1+\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{[Q(p)-u]}\cdot%20x\right)^{-\frac{1}{\xi}-1}

This expression is simple, and can be used to derive the likelihood (based on the observations exceeding the threshold),

https://latex.codecogs.com/gif.latex?\log\mathcal{L}(\xi,Q(p);\boldsymbol{x})=\sum_{i=0}^{N_u-1}%20\log%20g_{\xi,Q(p)}(x_{n-i:n})

Numerically, let us write (and plot) that function. Consider some real data here

> library(evir)
> data(danish)
> X=as.numeric(danish)
> Xs=sort(X,decreasing=TRUE)
> n=length(X)
> u=10
> nu=sum(X>u)

Consider, say, the 99.9% quantile,

> p=.999

The empirical quantile is here

> quantile(X,p)
   99.9% 
131.5519

The density and the loglikelihood functions are here

> gq=function(x,xi,q){
+ ( (n/nu*(1-p) ) ^ (-xi)-1)/(xi*(q-u))*
+ (1+((n/nu*(1-p))^(-xi)-1)/(q-u)*x)^(-1/xi-1)}

> loglik=function(param){
+ xi=param[2];q=param[1]
+ lg=function(i) log(gq(Xs[i],xi,q))
+ return(-sum(Vectorize(lg)(1:nu)))
+ }

We can try to plot this likelihood using

> h=201
> Q=seq(50,300,length=h)
> XI=seq(.1,1,length=h)
> XIQ=as.matrix(expand.grid(Q,XI))
> M=mapply(loglik,XIQ)

Unfortunately, it was not working (mapply applies the function to each single entry of the matrix, not to each row), so I used the old style

> M=matrix(NA,h,h)
> for(i in 1:h){for(j in 1:h){M[i,j]=loglik(c(Q[i],XI[j]))}}

The level curves of the log-likelihood are here

> hc=heat.colors(100)
> image(Q,XI,-M,col=hc)
> contour(Q,XI,-M,add=TRUE)

Again, since our interest is in the quantile, we can draw the profile likelihood and get the maximum of that function

> PL=function(Q){
+ profilelikelihood=function(xi){
+ loglik(c(Q,xi))}
+ return(optim(par=.8,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(100,500)))

$minimum
[1] 111.1055

$objective
[1] 454.6481

and the graph is

> XQ=seq(50,300,length=101)
> L=Vectorize(PL)(XQ)
> plot(XQ,-L,type="l")
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1),col="red")
> I=which(-L>=-up-qchisq(p=.95,df=1))
> lines(XQ[I],rep(-up-qchisq(p=.95,df=1),length(I)),
+ lwd=5,col="red")
> abline(v=range(XQ[I]),lty=2,col="red")

which can be seen as an alternative to

> gpd.q(tailplot(gpd(X,u)),.999)
 Lower CI  Estimate  Upper CI 
 64.66184  94.28956 188.91752 


If we want to focus on another downside risk measure, that shouldn’t be too difficult. For instance, the expected shortfall ES(p)=\mathbb{E}(X\,\vert\,X>Q(p)) can be estimated as

\widehat{ES}(p)=\widehat{Q}(p)+e(\widehat{Q}(p))

where e(\cdot) denotes the mean excess function, which can be written, with a Generalized Pareto Distribution (for v\geq u and \xi<1),

e(v)=\frac{\sigma+\xi(v-u)}{1-\xi}

Thus, a natural estimator for the expected shortfall is

\widehat{ES}(p)=\frac{\widehat{Q}(p)}{1-\widehat{\xi}}+\frac{\widehat{\sigma}-\widehat{\xi}\,u}{1-\widehat{\xi}}
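
As a quick sketch (not part of the original code, reusing X, u, n, nu and p defined above, and assuming the evir package is loaded), the fitted GPD parameters can simply be plugged into that estimator,

fit=gpd(X,u)
xi=fit$par.ests["xi"]; sigma=fit$par.ests["beta"]
Qp=u+sigma/xi*((n/nu*(1-p))^(-xi)-1)   # quantile estimate, as before
Qp/(1-xi)+(sigma-xi*u)/(1-xi)          # expected shortfall estimate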

One more time, it is possible to re-parametrize the density of the Pareto distribution, using https://latex.codecogs.com/gif.latex?ES(p) instead of https://latex.codecogs.com/gif.latex?\sigma. Here, we get

https://latex.codecogs.com/gif.latex?g_{\xi,ES(p)}(x)=\frac{\displaystyle{\xi+\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{\xi(1-\xi)[ES(p)-u]}\left(1+\frac{\displaystyle{\xi+\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{(1-\xi)[ES(p)-u]}\cdot%20x\right)^{-\frac{1}{\xi}-1}

The code to get the associated log-likelihood is here

> ge=function(x,xi,es){
+ (xi+(n/nu*(1-p))^(-xi)-1)/(xi*(1-xi)*(es-u))*(1+(xi+(n/nu*(1-p))^(-xi)
+ -1)/((es-u)*(1-xi))*x)^(-1/xi-1)
+ }
> loglik=function(param){
+ xi=param[2];es=param[1]
+ lg=function(i) log(ge(Xs[i],xi,es))
+ return(-sum(Vectorize(lg)(1:nu)))
+ }

and again, we can plot it

and the profile (log) likelihood is here (for the 99.9% expected shortfall)

> PL=function(ES){
+ profilelikelihood=function(xi){
+ loglik(c(ES,xi))}
+ return(optim(par=.8,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(100,500)))
$minimum
[1] 143.66

$objective
[1] 454.6481

which could be compared with

> gpd.sfall(tailplot(gpd(X,u)),.999)
 Lower CI  Estimate  Upper CI 
 96.64625 191.36972 394.87555

Bias of Hill Estimators

In the MAT8595 course, we’ve seen yesterday the Hill estimator of the tail index. To be more specific, we did see that if https://latex.codecogs.com/gif.latex?\overline{F}(x)=C%20x^{-\alpha}, with https://latex.codecogs.com/gif.latex?\alpha%3E0, then Hill estimators for https://latex.codecogs.com/gif.latex?\alpha are given by

https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20=%20\left[\frac{1}{k}\sum_{i=0}^{k-1}%20\log%20X_{n-i,n}%20-\log%20X_{n-k,n}\right]^{-1}
for https://latex.codecogs.com/gif.latex?k\in\{1,2,\cdots,n\}. Then we did say that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k satisfies some consistency in the sense that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20\overset{\mathbb{P}}{\rightarrow}%20\alpha if https://latex.codecogs.com/gif.latex?k\rightarrow\infty, but not too fast, i.e. https://latex.codecogs.com/gif.latex?k/n\rightarrow0 (under additional assumptions on the rate of convergence, it is possible to prove that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20\overset{a.s.}{\rightarrow}%20\alpha). Further, under additional technical conditions

https://latex.codecogs.com/gif.latex?\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal%20L}{\rightarrow}\mathcal{N}(0,\alpha^2)

In order to illustrate this point, consider the following code. First, let us consider a Pareto survival function, and the associated quantile function

> alpha=1.5
> S=function(x){ifelse(x>1,x^(-alpha),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

The code here is obviously too complicated, since this power function can easily be inverted. But later on, we will consider a more complex survival function. Here are the survival function, and the quantile function,

> u=seq(0,5,by=.01)
> plot(u,Vectorize(S)(u),type="l",col="red")
> u=seq(0,99/100,by=.01)
> plot(u,Vectorize(Q)(u),type="l",col="blue",ylim=c(0,20))

Here, we need the quantile function to generate a random sample from this distribution,

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

Hill plot is here

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")
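
As a quick sanity check – a hedged sketch, not part of the original code – the estimator can also be computed directly from the formula above; it should be consistent with the values plotted by evir's hill function,

# Hill estimator from its definition, for k upper order statistics
hill_alpha=function(X,k){
Xs=sort(X,decreasing=TRUE)
1/(mean(log(Xs[1:k]))-log(Xs[k+1]))
}
hill_alpha(X,50)   # to be compared with the Hill plot above around k=50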

We can now generate thousands of random samples, and see how those estimators behave (for some specific https://latex.codecogs.com/gif.latex?k‘s).

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }

and if we compute the average,

> plot(15*(1:10),apply(HillK,2,mean))

we do get a series of estimators that can be considered as unbiased.
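
As a side check of the asymptotic result recalled above (a sketch, not part of the original study), the empirical standard deviations of those estimators can be compared with their asymptotic counterpart, i.e. alpha divided by the square root of k,

# empirical vs asymptotic standard deviation of the Hill estimators
k=15*(1:10)
cbind(k,empirical=apply(HillK,2,sd),asymptotic=alpha/sqrt(k))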

So far, so good. Now, recall that being in the max-domain of attraction of the Fréchet distribution does not mean that https://latex.codecogs.com/gif.latex?\overline{F}(x)=C%20x^{-\alpha}, with https://latex.codecogs.com/gif.latex?\alpha%3E0, but it means that

https://latex.codecogs.com/gif.latex?\overline{F}(x)=%20x^{-\alpha}%20\mathcal{L}(x)

for some slowly varying function https://latex.codecogs.com/gif.latex?\mathcal{L}, not necessarily constant! In order to understand what could happen, we have to be slightly more specific. And this can be done only by looking at the second order regular variation property of the survival function. Assume here that there is some auxiliary function https://latex.codecogs.com/gif.latex?a such that

https://latex.codecogs.com/gif.latex?\lim_{t\rightarrow\infty}\frac{\overline{F}(xt)/\overline{F}(t)-x^{-\alpha}}{a(t)}=x^{-\alpha}\frac{1-x^{-\beta}}{\beta}{}

This (positive) constant https://latex.codecogs.com/gif.latex?\beta is – somehow – related to the speed of convergence of the ratio of the survival functions to the power function (see e.g. Geluk et al. (2000) for some examples).

To be more specific, assume that

https://latex.codecogs.com/gif.latex?\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot%20%20x^{-\alpha}

then, the second order regular variation property is obtained using https://latex.codecogs.com/gif.latex?a(t)=\beta%20t^{-\beta}, and then, if https://latex.codecogs.com/gif.latex?k goes to infinity too fast, then the estimator will be biased. More precisely (see Chapter 6 in Embrechts et al. (1997)), if https://latex.codecogs.com/gif.latex?k=O(n^{2\beta/(\alpha+2\beta)}), then, for some https://latex.codecogs.com/gif.latex?\lambda%3E0,

https://latex.codecogs.com/gif.latex?\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal%20L}{\rightarrow}\mathcal{N}\left(\frac{\alpha^3}{\beta-\alpha}\lambda,\alpha^2\right)

The intuitive interpretation of this result is that if https://latex.codecogs.com/gif.latex?k is too large, and if the underlying distribution is not exactly a Pareto distribution (and we do have this second order property), then Hill’s estimator is biased. This is what we mean when we say

  • if https://latex.codecogs.com/gif.latex?k is too large, https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k is a biased estimator
  • if https://latex.codecogs.com/gif.latex?k is too small, https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k is a volatile estimator

(the latter comes from properties of a sample mean: the more observations, the lower the volatility of the mean).

Let us run some simulations to get a better understanding of what’s going on. Using the previous code, it is actually extremely simple to generate a random sample with survival function

https://latex.codecogs.com/gif.latex?\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot%20%20x^{-\alpha}

> beta=.5
> S=function(x){ifelse(x>1,.5*x^(-alpha)*(1+x^(-beta)),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

We can then use the code above. Here, with

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

the Hill plot becomes

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

But this is based on one sample only. Again, consider thousands of samples, and let us see how Hill’s estimator behaves,

so that we can look at the (empirical) mean of those estimators.
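
The simulation code is the same as the one used above, only the quantile function Q has changed; as a sketch,

# re-run the Monte Carlo study with the new (non-Pareto) quantile function Q
ns=10000
HillK=matrix(NA,ns,10)
for(s in 1:ns){
X=Vectorize(Q)(runif(n))
H=hill(X,plot=FALSE)
hillk=function(k) H$y[H$x==k]
HillK[s,]=Vectorize(hillk)(15*(1:10))
}
plot(15*(1:10),apply(HillK,2,mean))
abline(h=alpha,col="blue")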

How old is the oldest person you know?

Last week, we had a discussion with some colleagues about the fact that – in order to prepare for the SOA exams – we did not have time (so far) to mention results on extreme values in our actuarial program. I did give an introduction in my nonlife actuarial models class, but it was only an introduction, in three hours, in order to illustrate reinsurance pricing. And I told my students that if they wanted to know more about extreme values, they should start a master program in actuarial science and finance, since I will give a course on extremes (and copulas) next winter.

But actually, extreme values are everywhere! For instance, there is a Prudential TV commercial that has people place large, round stickers on a number line to represent the age of the oldest person they know. This forms some kind of histogram. The message is that Prudential can help you prepare to have adequate money for all those years. And actually, anyone can add his or her own sticker at the Prudential website.

Patrick Honner, on his blog (http://mrhonner.com/…), did mention this interesting representation. But this idea is not new, as mentioned in a post published three years ago. In 1932, Emil Gumbel gave a talk in France on the “âge limite” (the limiting age). As he wrote, “one can therefore assume that the distribution of the limiting age – that is, the probability that this age takes a given value – is Gaussian”: in 1932, not aware of Fisher and Tippett’s work, he thought that the limiting distribution of a maximum would be Gaussian. But a few years later, he read about Fisher’s work, and also observed that “the distribution of an extreme value can be represented, for a sufficiently large number of observations, by the doubly exponential formula, provided that the initial distribution behaves asymptotically like an exponential; the formula becomes exact if the initial distribution is exponential”, as he wrote in 1935. And in 1937, he wrote a paper on “les centennaires” (centenarians) that can also be related to the work of Bortkiewicz on rare events. One should also mention one of the most important papers in extreme value theory, published in 1974 by Balkema and de Haan, on Residual Life Time at Great Age.

In this experiment, the question is “How Old is the Oldest Person You Know?“, so we are looking at the distribution of a maximum. And from the Fisher-Tippett theorem, if we assume that the age is bounded (that there exists some finite upper limit), then the limiting distribution for the maxima (or to be more rigorous, for an affine transformation of the maxima) should be a Weibull distribution. And this is what it looks like

> x=seq(0,10,by=.01)  # grid for the plot (not shown in the original code)
> plot(-x,dweibull(x,2.25,4),type="l",lwd=2)

As an actuary, the only thing I know about demography is the distribution of the age at death. For instance, consider the following French life table

> alive <- read.table(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/TV8890.csv",
+ sep=";",header=TRUE)$Lx
> nb= -diff(alive)
> ages=0:110
> plot(ages,nb,type="h")

This is the distribution of the age at death in a given population. Which is not the same as the distribution mentioned above! What we look for is the following: given that someone is alive, what could be the distribution of his or her age? Actually, if we assume that the yearly number of births is constant over time (as well as death probabilities), then we can easily compute the number of people of age https://latex.codecogs.com/gif.latex?x : we take everyone born (exactly) https://latex.codecogs.com/gif.latex?x years ago, and remove all those who died at age https://latex.codecogs.com/gif.latex?x, https://latex.codecogs.com/gif.latex?x-1, etc. So the function should be

> probadeath=nb/sum(nb)
> nbx=function(x) 1-sum(probadeath[1:(x+1)])
> surv=Vectorize(nbx)(ages)
> distrage=surv/sum(surv)

which looks like
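
(the corresponding figure is not reproduced here; as a sketch, it can be drawn with)

plot(ages,distrage,type="h")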

But this assumption of a constant number of births is not that relevant. And actually, what we need is the distribution of the age within a population… a population pyramid, actually. The French one can be downloaded from http://www.insee.fr/fr/ppp/bases-de-donnees/….

> population <- read.table("popinsee2007.csv",sep=";",header=TRUE)$POPTOT07
> ages=0:107
> plot(ages,population/sum(population),type="h")

(the red line being the one obtained previously, using some natality assumptions). Now, let us use this population to generate acquaintances.

> agemax=function(nsim=1000,size=20){
+ agemax=rep(NA,nsim)
+ for(i in 1:nsim){
+ X=sample(ages,prob=population/sum(population),size=size,replace=TRUE)
+ agemax[i]=max(X)}
+ return(agemax)}

Here, we assume that everyone knows 20 other people, randomly chosen from the entire population, and then we return the age of the oldest. And we do that for 10,000 people. Here is the distribution we obtain,

> XS=agemax(10000,20)
> plot(table(XS)/length(XS),type="h",xlim=c(0,108))

where the red line is a Weibull distribution (a transformed one, actually, since in extreme value theory, the distance to the upper bound of the distribution has a Weibull density),

> library(MASS)
> fit=fitdistr(108-XS,dweibull,list(shape=1,scale=1))
> lines(ages,dweibull(108-ages,fit$estimate[1],fit$estimate[2]),col="red")

Which is quite close to the distribution obtained in the commercial, don’t you think ? But still, it should be possible to be more accurate, since people should think of their parents, or grandparents. So I guess it could be possible to build a more accurate algorithm, to get something closer to the distribution obtained on the Prudential website. But first, let us wait to have more stickers, more observations… and then I’ll be back to play with it !

Large claims, and ratemaking

During the course, we have seen that it is natural to assume that not only the individual claims frequency can be explained by some covariates, but individual costs too. Of course, appropriate families should be considered to model the distribution of the cost https://latex.codecogs.com/gif.latex?Y, given some covariates https://latex.codecogs.com/gif.latex?\boldsymbol{X}. Here is the dataset we’ll use,

>  sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+  header=TRUE,sep=";")
>  sinistres=sinistre[sinistre$garantie=="1RC",]
>  sinistres=sinistres[sinistres$cout>0,]
>  contrat=read.table("http://freakonometrics.free.fr/contractACT2040.txt",
+  header=TRUE,sep=";")
>  couts=merge(sinistres,contrat)
> tail(couts)
     nocontrat    no garantie    cout exposition zone puissance agevehicule
1919   6104006 11933      1RC 5376.04       0.37    E         6           1
1920   6107355 12349      1RC   51.63       0.74    E         4           1
1921   6108364 13229      1RC 1320.00       0.74    B         9           1
1922   6109171 11567      1RC 1320.00       0.74    B        13           1
1923   6111208 14161      1RC  970.20       0.49    E        10           5
1924   6111650 14476      1RC 1940.40       0.48    E         4           0
     ageconducteur bonus marque carburant densite region
1919            32    57     12         E      93     10
1920            45    57     12         E      72     10
1921            32   100     12         E      83      0
1922            56    50     12         E      93     13
1923            30    90     12         E      53      2
1924            69    50     12         E      93     13

Here, each line is a claim. Usual families to model the cost are the Gamma distribution, or the inverse Gaussian. Or the lognormal distribution (which is not in the exponential family, but one can assume that the logarithm of the cost can be modeled with a Gaussian distribution). Consider here only one covariate, e.g. the age of the car, and two different models: a Gamma one, and a lognormal one.

> age=0:20
> reggamma.sp <- glm(cout~agevehicule,family=Gamma(link="log"),
+ data=couts)
> Pgamma <- predict(reggamma.sp,newdata=data.frame(agevehicule=age),type="response")

For the Gamma regression, it is a simple GLM, so it is not difficult. For a lognormal distribution, one should remember that the expected value of a lognormal distribution is not the exponential of the mean of the underlying Gaussian distribution. A correction should be made here to get an unbiased estimator for the average cost,

> reglm.sp <- lm(log(cout)~agevehicule,data=couts)
> sigma <- summary(reglm.sp)$sigma
> mu <- predict(reglm.sp,newdata=data.frame(agevehicule=age))
> Pln <- exp(mu+sigma^2/2)

We can plot those two predictions on a single graph,

> plot(age,Pgamma,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln,col="blue",type="b")

Here it is,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.18.56.png

Observe that it is also possible to use splines, since there might be no reason for the age to appear here in a multiplicative way,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.25.52.png

Here, the two models are rather close. Nevertheless, one should remember that the Gamma model can be extremely sensitive to large claims (I mean here really large claims). On the other hand, with the log-transformation for the lognormal model, it seems that this model is less sensitive to large events. Actually, if I use the complete dataset, the regressions are the following,

http://freakonometrics.hypotheses.org/files/2013/02/Capture-d%E2%80%99e%CC%81cran-2013-02-13-a%CC%80-14.19.44.png

i.e. with a lognormal distribution, the average cost is decreasing with the age of the car, while it is increasing with a Gamma model. The main reason here is that there is one large (not to say huge) claim in the dataset,

> couts[which.max(couts$cout),]
         cout exposition zone puissance agevehicule ageconducteur
7842  4024601       0.22    B         9          13            19
     marque carburant densite region
7842      2         E      93     24

One young driver got a $ 4 million claim, with a 13 year old car. This is an outlier for the Gamma regression that clearly influences the estimation (the second largest claim is only one third of this one). Since there is a clear influence of large claims on the estimation of the average cost, a natural idea might be to remove those large claims. Or perhaps to see them as different from normal claims: normal claims can be explained by some covariates, but perhaps those large claims should be shared not only within their own class, but among all the insured in the portfolio. To formalize this idea, observe that we can write

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

where the blue part is associated with normal-sized claims, while large ones correspond to the red part. It is then possible to run three regressions: one on normal-sized claims, one on large claims, and one on the indicator of having a large claim, given that a claim occurred. The code here is something like that: a large claim – here – is above $ 10,000 (one has to fix that threshold)

> s= 10000
> couts$normal=(couts$cout<=s)
> mean(couts$normal)
[1] 0.9818087

i.e. large claims represent about 2% of the claims in our dataset. We can run 3 sets of regressions, with a smoothed regression on the age of the car. The first one to model large claims individual costs,

> indice = which(couts$cout>s)
> mean(couts$cout[indice])
[1] 34471.59
> library(splines)
> regB=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypB=predict(regB,newdata=data.frame(agevehicule=age),type="response")
> ypB2=mean(couts$cout[indice])

the second one to model normal claims individual costs,

> indice = which(couts$cout<=s)
> mean(couts$cout[indice])
[1] 1335.878
> regA=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypA=predict(regA,newdata=data.frame(agevehicule=age),type="response")
> ypA2=mean(couts$cout[indice])

And finally, a third one, on the probability of having a normal sized claim, given that a claim occurred

> regC=glm(normal~bs(agevehicule),data=couts,family=binomial)
> ypC=predict(regC,newdata=data.frame(agevehicule=age),type="response")
> regC2=glm(normal~1,data=couts,family=binomial)
> ypC2=predict(regC2,newdata=data.frame(agevehicule=age),type="response")

Note that, each time, we have something that can be interpreted either as https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X},Y\gtrless%20%20s), or https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|Y\gtrless%20%20s) – i.e. no covariate is considered in the latter. On the graph below, we did plot

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s,%20\boldsymbol{X})%20}_{C}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

where Gamma regressions – with splines – are considered for the average costs, while logistic regressions – again with splines – are considered to model probabilities.

http://freakonometrics.hypotheses.org/files/2013/02/ecret-ABC-v2.gif

(but be careful with splines: on the borders, since we do not have a lot of observations, the behavior can be… odd, and adjustments should be made to obtain an adequate level of premium). If it is legitimate to assume that normal-sized claims can be explained by some covariates, perhaps large claims (or extremely large ones) are just purely random, i.e. not a function of any covariate at all. I.e.

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s|\boldsymbol{X})}_{B}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s)%20}_{C%27}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s|%20\boldsymbol{X})}_{B}}}}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C-v2.gif

To go one step further, it might also be possible to assume that not only the size of the claim (given that it is a large one) is not a function of any covariate, but that the probability of having an extremely large claim is not either,

https://latex.codecogs.com/gif.latex?\mathbb{E}(Y|\boldsymbol{X})%20=%20{\color{Blue}%20{\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq%20s)}_{A}%20\cdot%20{\underbrace{\mathbb{P}(Y\leq%20s)}_{B%27}}}}+{\color{Red}%20{{\underbrace{\mathbb{E}(Y|Y%3E%20s)%20}_{C%27}}\cdot%20{\underbrace{\mathbb{P}(Y%3E%20s)}_{B%27}}}}

http://freakonometrics.hypotheses.org/files/2013/02/ecret-AB2C2-v2.gif
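
For the record, here is a hedged sketch of how the predictions plotted in those three figures can be reconstructed from the quantities computed above (ypA, ypB, ypB2, ypC and ypC2); treat it as an illustration of the decomposition rather than the exact code behind the graphs,

# E(Y|X) = A*B + C*(1-B), with the three variants discussed above
ypABC  =ypA*ypC +ypB *(1-ypC)     # all three components depend on the age of the car
ypAB2C =ypA*ypC +ypB2*(1-ypC)     # cost of large claims shared within the portfolio
ypAB2C2=ypA*ypC2+ypB2*(1-ypC2)    # probability of a large claim also constant
plot(age,ypABC,type="l")
lines(age,ypAB2C,col="blue")
lines(age,ypAB2C2,col="red")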

From the first part, we’ve seen that the distribution considered had an impact on the prediction, and in the second, we’ve seen that the definition of large claims (and how to deal with them) also has an impact. So clearly, actuaries have some leverage when working on ratemaking…

Tests on tail index for extremes

Since several students got the intuition that natural catastrophes might be non-insurable (underlying distributions with infinite mean), I will post some comments on testing procedures for extreme value models.

A natural idea is to use a likelihood ratio test (for composite hypotheses). Let http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif denote the parameter (of our parametric model, e.g. the tail index), and we would like to know whether http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif is smaller or larger than http://freakonometrics.blog.free.fr/public/perso5/lrtest22.gif (where in the context of finite versus infinite mean http://freakonometrics.blog.free.fr/public/perso5/lrtest23.gif). I.e. either http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif belongs to the set http://freakonometrics.blog.free.fr/public/perso5/lrtest-10.gif or to its complement http://freakonometrics.blog.free.fr/public/perso5/lrtest-11.gif. Consider the maximum likelihood estimator http://freakonometrics.blog.free.fr/public/perso5/lrtest24.gif, i.e.

http://freakonometrics.blog.free.fr/public/perso5/lrtest-9.gif

Let http://freakonometrics.blog.free.fr/public/perso5/lrtest25.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-3.gif denote the constrained maximum likelihood estimators on http://freakonometrics.blog.free.fr/public/perso5/lrtest26.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest27.gif respectively,

http://freakonometrics.blog.free.fr/public/perso5/lrtest-12.gif

http://freakonometrics.blog.free.fr/public/perso5/lrtest-2.gif

Either http://freakonometrics.blog.free.fr/public/perso5/lrtest-13.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-6.gif (on the left), or http://freakonometrics.blog.free.fr/public/perso5/lrtest-14.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-7.gif (on the right)

So likelihood ratios

http://freakonometrics.blog.free.fr/public/perso5/lrtest-15.gif      http://freakonometrics.blog.free.fr/public/perso5/lrtest-16.gif

 are either equal to

http://freakonometrics.blog.free.fr/public/perso5/lrtest-19.gif      http://freakonometrics.blog.free.fr/public/perso5/lrtest-18.gif

or

http://freakonometrics.blog.free.fr/public/perso5/lrtest-20.gif        http://freakonometrics.blog.free.fr/public/perso5/lrtest-17.gif

If we use the code mentioned in the post on profile likelihood, it is easy to derive that ratio. The following graph is the evolution of that ratio, based on a GPD assumption, for different thresholds,

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> U=seq(2,10,by=.2)
> LR=P=ES=SES=rep(NA,length(U))
> for(j in 1:length(U)){
+ u=U[j]
+ Y=X[X>u]-u
+ loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
+ XIV=(1:300)/100;L=rep(NA,300)
+ for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
+ plot(XIV,L,type="l")
+ PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
+ (L0=(OPT=optimize(f=PL,interval=c(0,10)))$objective)
+ profilelikelihood=function(beta){
+ -loglikelihood(1,beta) }
+ (L1=optim(par=1,fn=profilelikelihood)$value)
+ LR[j]=L1-L0
+ P[j]=1-pchisq(L1-L0,df=1)
+ G=gpd(X,u)
+ ES[j]=G$par.ests[1]
+ SES[j]=G$par.ses[1]
+ }
>
> plot(U,LR,type="b",ylim=range(c(0,LR)))
> abline(h=qchisq(.95,1),lty=2)

with on top the values of the ratio (the dotted line is the quantile of a chi-square distribution with one degree of freedom) and below the associated p-value

> plot(U,P,type="b",ylim=range(c(0,P)))
> abline(h=.05,lty=2)

In order to compare, it is also possible to look at confidence interval for the tail index of the GPD fit,

> plot(U,ES,type="b",ylim=c(0,1))
> lines(U,ES+1.96*SES,type="h",col="red")
> abline(h=1,lty=2)

To go further, see Falk (1995), Dietrich, de Haan & Hüsler (2002), Hüsler & Li (2006) with the following table, or Neves & Fraga Alves (2008). See also here or there (for the latex based version) for an old paper I wrote on that topic.

Tail index estimation

These data were collected at Copenhagen Reinsurance and comprise 2167 fire losses over the period 1980 to 1990. They have been adjusted for inflation to reflect 1985 values and are expressed in millions of Danish Krone. Note that it is possible to work with the same data as above, but where the total claim has been divided into a building loss, a loss of contents and a loss of profits.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> base2=read.table(
+ "http://freakonometrics.free.fr/danish-multivariate.txt",
+ header=TRUE)

Consider here the first dataset (we deal – so far – with univariate extremes),

> X=base1$Loss.in.DKM
> D=as.Date(as.character(base1$Date),"%m/%d/%Y")
> plot(D,X,type="h")

The graph is the following,

A natural idea is then to plot

http://freakonometrics.hypotheses.org/files/2015/12/hill01.gif

i.e.

> Xs=sort(X)
> logXs=rev(log(Xs))
> n=length(X)
> plot(log(Xs),log((n:1)/(n+1)))

Points are on a straight line here. The slope can be obtained using a linear regression,

> B=data.frame(X=log(Xs),Y=log((n:1)/(n+1)))
> reg=lm(Y~X,data=B)
> summary(reg)

Call:
lm(formula = Y ~ X, data = B)

Residuals:
Min       1Q   Median       3Q      Max
-0.59999 -0.00777  0.00878  0.02461  0.20309

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.089442   0.001572   56.88   <2e-16 ***
X           -1.382181   0.001477 -935.55   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.04928 on 2165 degrees of freedom
Multiple R-squared: 0.9975,	Adjusted R-squared: 0.9975
F-statistic: 8.753e+05 on 1 and 2165 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-500):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 500):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.48502 -0.02148 -0.00900  0.01626  0.35798

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.186188   0.010033   18.56   <2e-16 ***
X           -1.432767   0.005105 -280.68   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.07751 on 499 degrees of freedom
Multiple R-squared: 0.9937,	Adjusted R-squared: 0.9937
F-statistic: 7.878e+04 on 1 and 499 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-100):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 100):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.33396 -0.03743  0.02279  0.04754  0.62946

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.67377    0.06777   9.942   <2e-16 ***
X           -1.58536    0.02240 -70.772   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1299 on 99 degrees of freedom
Multiple R-squared: 0.9806,	Adjusted R-squared: 0.9804
F-statistic:  5009 on 1 and 99 DF,  p-value: < 2.2e-16

The slope here is somehow related to the tail index of the distribution. Consider some heavy tailed distribution, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill03.gif, so that http://freakonometrics.hypotheses.org/files/2015/12/hill27.gif, where http://freakonometrics.hypotheses.org/files/2015/12/hill28.gif is some slowly varying function. Equivalently, there exists a slowly varying function http://freakonometrics.hypotheses.org/files/2015/12/hill29.gif such that http://freakonometrics.hypotheses.org/files/2015/12/hill30.gif. Then

http://freakonometrics.hypotheses.org/files/2015/12/hill33.gif

i.e. since a natural estimator for http://freakonometrics.hypotheses.org/files/2015/12/hill35.gif is the order statistic http://freakonometrics.hypotheses.org/files/2015/12/hill36.gif, the slope of the straight line is the opposite of the tail index http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif. The estimator of the slope is (considering only the http://freakonometrics.hypotheses.org/files/2015/12/hill99.gif largest observations)

http://freakonometrics.hypotheses.org/files/2015/12/hill39.gif

Hill‘s estimator is based on the assumption that the denominator above is almost 1 (which means that  http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif, as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif), i.e.

http://freakonometrics.hypotheses.org/files/2015/12/hill02.gif

Note that, if http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, but not too fast, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill12.gif (one can even get http://freakonometrics.hypotheses.org/files/2015/12/hill11.gif  with stronger convergence assumptions). Further

http://freakonometrics.hypotheses.org/files/2015/12/hill04.gif

Based on that (asymptotic) distribution, it is possible to get an (asymptotic) confidence interval for http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif

> xi=1/(1:n)*cumsum(logXs)-logXs
> xise=1.96/sqrt(1:n)*xi
> plot(1:n,xi,type="l",ylim=range(c(xi+xise,xi-xise)),
+ xlab="",ylab="",)
> polygon(c(1:n,n:1),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(1:n,xi+xise,col="red",lwd=1.5)
> lines(1:n,xi-xise,col="red",lwd=1.5)
> lines(1:n,xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to work with http://freakonometrics.hypotheses.org/files/2015/12/hill06.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill05.gif. And similarly http://freakonometrics.hypotheses.org/files/2015/12/hill13.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif (and again http://freakonometrics.hypotheses.org/files/2015/12/hill10.gif with additional assumptions on the rate of convergence), and

http://freakonometrics.hypotheses.org/files/2015/12/hill09.gif

(obtained using the delta-method). Again, we can use that result to derive (asymptotic) confidence intervals

> alpha=1/xi
> alphase=1.96/sqrt(1:n)/xi
> YL=c(0,3)
> plot(1:n,alpha,type="l",ylim=YL,xlab="",ylab="",)
> polygon(c(1:n,n:1),c(alpha+alphase,rev(alpha-alphase)),
+ border=NA,col="lightblue")
> lines(1:n,alpha+alphase,col="red",lwd=1.5)
> lines(1:n,alpha-alphase,col="red",lwd=1.5)
> lines(1:n,alpha,lwd=1.5)
> abline(h=0,col="grey")

The Dekkers-Einmahl-de Haan estimator is

http://freakonometrics.hypotheses.org/files/2015/12/hill25.gif

where for

http://freakonometrics.hypotheses.org/files/2015/12/hill21.gif

Then (given again conditions on the speed of convergence i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, with http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif),

http://freakonometrics.hypotheses.org/files/2015/12/hill42.gif
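
No code is given above for that estimator, so here is a hedged sketch of it, computed directly from its definition as the moment estimator of the tail index (with M1 and M2 the first two empirical moments of the log-spacings); treat it as an illustration, not the code behind the original figures,

# Dekkers-Einmahl-de Haan (moment) estimator of xi, for k upper order statistics
Xs=sort(X,decreasing=TRUE)
dedh=function(k){
l=log(Xs[1:k])-log(Xs[k+1])
M1=mean(l); M2=mean(l^2)
M1+1-.5/(1-M1^2/M2)
}
plot(15:1000,Vectorize(dedh)(15:1000),type="l",xlab="",ylab="")
abline(h=0,col="grey")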

Finally, Pickands‘ estimator

http://freakonometrics.hypotheses.org/files/2015/12/hill26.gif

it is possible to prove that, as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif,

http://freakonometrics.hypotheses.org/files/2015/12/hill41.gif

Here the code is

> Xs=rev(sort(X))
> xi=1/log(2)*log( (Xs[seq(1,length=trunc(n/4),by=1)]-
+ Xs[seq(2,length=trunc(n/4),by=2)])/
+ (Xs[seq(2,length=trunc(n/4),by=2)]-Xs[seq(4,
+ length=trunc(n/4),by=4)]) )
> xise=1.96/sqrt(seq(1,length=trunc(n/4),by=1))*
+sqrt( xi^2*(2^(xi+1)+1)/((2*(2^xi-1)*log(2))^2))
> plot(seq(1,length=trunc(n/4),by=1),xi,type="l",
+ ylim=c(0,3),xlab="",ylab="",)
> polygon(c(seq(1,length=trunc(n/4),by=1),rev(seq(1,
+ length=trunc(n/4),by=1))),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(seq(1,length=trunc(n/4),by=1),
+ xi+xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),
+ xi-xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to use maximum likelihood techniques to fit a GPD distribution over a high threshold.

> library(evd)
> library(evir)
> gpd(X,5)
$n
[1] 2167

$threshold
[1] 5

$p.less.thresh
[1] 0.8827873

$n.exceed
[1] 254

$method
[1] "ml"

$par.ests
xi      beta
0.6320499 3.8074817

$par.ses
xi      beta
0.1117143 0.4637270

$varcov
[,1]        [,2]
[1,]  0.01248007 -0.03203283
[2,] -0.03203283  0.21504269

$information
[1] "observed"

$converged
[1] 0

$nllh.final
[1] 754.1115

attr(,"class")
[1] "gpd"

or equivalently (or almost)

> library(ismev)   # gpd.fit is in the ismev package
> gpd.fit(X,5)
$threshold
[1] 5

$nexc
[1] 254

$conv
[1] 0

$nllh
[1] 754.1115

$mle
[1] 3.8078632 0.6315749

$rate
[1] 0.1172127

$se
[1] 0.4636270 0.1116136

The interest of the latter function is that it is possible to visualize the profile likelihood of the tail index,

> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

or

> gpd.profxi(gpd.fit(X,20),xlow=0,xup=3)

Hence, it is possible to plot the maximum likelihood estimator of the tail index, as a function of the threshold (including a confidence interval),

> GPDE=Vectorize(function(u){gpd(X,u)$par.ests[1]})
> GPDS=Vectorize(function(u){
+ gpd(X,u)$par.ses[1]})
> u=c(seq(2,10,by=.5),seq(11,25))
> XI=GPDE(u)
> XIS=GPDS(u)
> plot(u,XI,ylim=c(0,2))
> segments(u,XI-1.96*XIS,u,XI+
+ 1.96*XIS,lwd=2,col="red")

Finally, it is possible to use block-maxima techniques.

> gev.fit(X)
$conv
[1] 0

$nllh
[1] 3392.418

$mle
[1] 1.4833484 0.5930190 0.9168128

$se
[1] 0.01507776 0.01866719 0.03035380

The estimator of the tail index is here the last coefficient, on the right.
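
Note that, above, the GEV is fitted to the raw sample; a more literal block-maxima approach would fit the GEV on, say, monthly maxima, reusing the dates D constructed earlier (a hedged sketch, not what was done above),

# block-maxima: fit the GEV on monthly maxima of the losses
library(ismev)
M=tapply(X,format(D,"%Y-%m"),max)   # one maximum per month
gev.fit(as.numeric(M))
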
Since it is rather difficult to install a package in classrooms, here is the source of the R code used here (to fit a GPD for exceedances)

> source("http://freakonometrics.blog.free.fr/public/code/gpd.R")

Next time, we will discuss how to use those estimators.

MAT8886 Extremes and sums (of i.i.d. random variables)

Yesterday, we discussed briefly sums and maxima of i.i.d. random variables using the concept of subexponential distributions. Today, we will introduce the concept of regular variation: a positive function is said to be regularly varying (at infinity), denoted http://freakonometrics.blog.free.fr/public/perso5/subexp-30.gif, for some http://freakonometrics.blog.free.fr/public/perso5/subexp-31.gif, if

http://freakonometrics.blog.free.fr/public/perso5/subexp-33.gif
for all http://freakonometrics.blog.free.fr/public/perso5/subexo_34.gif. And this concept can be related to sums and maxima (see section 6.2.6 in Embrechts et al. (1997)). Consider i.i.d. positive random variables http://freakonometrics.blog.free.fr/public/perso5/subsexp-01.gif: let http://freakonometrics.blog.free.fr/public/perso5/subexp-2.gif and http://freakonometrics.blog.free.fr/public/perso5/subexp-3.gif. Then it can easily be shown that

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-20.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-10.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif for some http://freakonometrics.blog.free.fr/public/perso5/subexp-23.gif if and only if there exists a non-degenerate variable http://freakonometrics.blog.free.fr/public/perso5/Z.gif such that

http://freakonometrics.blog.free.fr/public/perso5/subexp-13.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif with http://freakonometrics.blog.free.fr/public/perso5/subexp-22.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-14.gif
Even if it is not that simple to check such convergences analytically, it is still possible to use graphs to study the behavior of the empirical version of those quantities. Consider the following function to visualize the convergence of the empirical ratios,

CONVERGENCE=function(g,p=1,n=500000){
set.seed(1)
X=g(n);X1=g(n);X2=g(n);X3= g(n);X4=g(n)
Tp =cummax(X^p)/cumsum(X^p)
Tp1=cummax(X1^p)/cumsum(X1^p)
Tp2=cummax(X2^p)/cumsum(X2^p)
Tp3=cummax(X3^p)/cumsum(X3^p)
Tp4=cummax(X4^p)/cumsum(X4^p)
plot(Tp4,type="l",ylim=c(0,1),log="x",
xlim=c(100,n),ylab="",col="light blue",xlab="")
lines(Tp1,col="light green")
lines(Tp2,col="yellow")
lines(Tp3,col="pink")
lines(Tp,lwd=2)
abline(h=0:1,col="red",lty=2)
}

or the following to study the “asymptotic” distribution of the ratio on simulated samples

LIMITDIST=function(g,p=1,n=500000,ns=1000){
set.seed(1)
T=rep(NA,ns)
for(i in 1:ns){
X=g(n)
T[i]=max(X^p)/sum(X^p)
}
hist(T,breaks=seq(0,1,by=.05),probability=TRUE,
col="light green",ylab="",xlab="",main="")
}

In the case of exponentially distributed variables, we have

CONVERGENCE(rexp)

For variables with a lognormal distribution,

CONVERGENCE(rlnorm)

And finally, consider the case of a Pareto distribution

rpareto=function(n){runif(n)^(-1/1.5)-1}
CONVERGENCE(rpareto)

Here, it looks like those three distributions have a finite mean (and actually, they do). To go one step further, for http://freakonometrics.blog.free.fr/public/perso5/subexp00.gif, define http://freakonometrics.blog.free.fr/public/perso5/suuuuuubexp.gif and http://freakonometrics.blog.free.fr/public/perso5/subexp-5.gif. Then analogous results can be derived,

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-99.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-11.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif for some http://freakonometrics.blog.free.fr/public/perso5/subexp-25.gif if and only if there exists a non-degenerate variable http://freakonometrics.blog.free.fr/public/perso5/Zk.gif such that

http://freakonometrics.blog.free.fr/public/perso5/subexp-12.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif with http://freakonometrics.blog.free.fr/public/perso5/subexp-22.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-15.gif
Again, it is possible to use the function defined above,

CONVERGENCE(rexp,p=2)

or

CONVERGENCE(rexp,p=3)

or even

CONVERGENCE(rexp,p=10)

If the power is not too high, it looks like the ratio goes to zero. But when it becomes larger, it looks like more simulations might be necessary to say something relevant.

CONVERGENCE(rlnorm,p=2)

or

CONVERGENCE(rlnorm,p=3)

Here also, it looks like we have a light tailed distribution (and actually, it is the case). And finally, if we consider the case of a Pareto distribution

CONVERGENCE(rpareto,p=2)

Then it looks like it is a heavy tailed distribution. In order to get a better understanding, plot the distribution of the ratio obtained from 1,000 simulated samples (of size 500,000),

LIMITDIST(rpareto,p=1)

versus

LIMITDIST(rpareto,p=2)

So obviously, something is going on between 1 and 2 (recall that the power parameter of the Pareto distribution is 1.5).

Fisher-Tippett theorem with an historical perspective

A couple of weeks ago, Rafael asked me if I had something on the history of extreme value theory. Since I will get back to fundamental results about extremes in my course, I promised I will write down a short post on all that issue.

To start from the beginning, in 1928, Ronald Fisher and Leonard Tippett formulated the three types of limiting distributions for the maximum term of a random sample (Fisher & Tippett (1928)). The problem was to characterize function http://freakonometrics.hypotheses.org/files/2015/12/ext-01.gif such that

http://freakonometrics.hypotheses.org/files/2015/12/ext-2.gif

where http://freakonometrics.hypotheses.org/files/2015/12/ext-3.gif, and the http://freakonometrics.hypotheses.org/files/2015/12/ext-4.gif's are i.i.d. with cumulative distribution function http://freakonometrics.hypotheses.org/files/2015/12/ext-5.gif. They had supporting arguments, but no (rigorous) proof. Nevertheless, they obtained that the only possible types for G were

http://freakonometrics.hypotheses.org/files/2015/12/ext-6.gif

i.e. Fréchet type (Pareto-type tails), or

http://freakonometrics.hypotheses.org/files/2015/12/ext-7.gif

i.e. Weibull type (bounded distribution type), or

http://freakonometrics.hypotheses.org/files/2015/12/ext-8.gif

i.e. Gumbel type (exponential-type tails). Emil Gumbel has been intensively using the so-called Gumbel distribution on river flows, since (as he explained in 1958), “it seems that the rivers know the theory. It only remains to convince the engineers of the validity of this analysis“.
Independently of that work (published in 1928), Maurice Fréchet considered in 1927 (in Sur la loi de probabilité de l’écart maximum) possible limits of

http://freakonometrics.hypotheses.org/files/2015/12/ext-9.gif

and obtained only http://freakonometrics.hypotheses.org/files/2015/12/ext-10.gif as a possible limit. Richard von Mises gave in 1936 sufficient, but not necessary, conditions for the (max) domains of attraction, i.e. a characterization of functions http://freakonometrics.hypotheses.org/files/2015/12/ext-11.gif such that the maxima converge to some specific function http://freakonometrics.hypotheses.org/files/2015/12/ext-01.gif (von Mises (1936)). E.g. he noticed that a sufficient condition on http://freakonometrics.hypotheses.org/files/2015/12/ext-11.gif to be in the (max) domain of attraction of the Gumbel distribution is that

http://freakonometrics.hypotheses.org/files/2015/12/ext-13.gif

Then in 1943, Boris Gnedenko characterized the (max) domains of attraction of those three types, with an explicit characterization for two of them (heavy tails, i.e. Fréchet type, and bounded support, i.e. Weibull), but his necessary and sufficient condition for the third was based on a function that was not explicitly defined (see Gnedenko (1943)). Laurens de Haan, in the 70’s, derived checkable conditions for Gumbel’s type.
Boris Gnedenko proved (in Section 4 of his paper) that F is in the (max) domain of attraction of http://freakonometrics.hypotheses.org/files/2015/12/ext-10.gif if and only if http://freakonometrics.hypotheses.org/files/2015/12/ext-16.gif is regularly varying at infinity, with index http://freakonometrics.hypotheses.org/files/2015/12/ext-17.gif (even if the term “regular variation” was not mentioned in the paper). Similar results were derived to characterize functions in the (max) domain of attraction of Weibull. For the (max) domain of attraction of http://freakonometrics.hypotheses.org/files/2015/12/ext-18.gif, Boris Gnedenko obtained that a necessary and sufficient condition was that there exists a function http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif such that http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif goes to 0 at infinity and

http://freakonometrics.hypotheses.org/files/2015/12/ext-20.gif

Several papers have discussed what the function http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif could be, e.g. David Mejzler in 1949 (in Russian, but see also his 1965 paper), and Laurens de Haan in 1970 and 1971 (following the dramatic flood in the Netherlands in 1953, researchers in the Netherlands focused on dikes, and on extreme value applications).

Mejzler’s idea was to work on quantiles, and not on the cumulative distribution function. I.e. define

http://freakonometrics.hypotheses.org/files/2015/12/ext-21.gif

Then a necessary and sufficient condition for F to be in the (max) domain of attraction of http://freakonometrics.hypotheses.org/files/2015/12/ext-18.gif is that

http://freakonometrics.hypotheses.org/files/2015/12/ext-23.gif

Laurens de Haan proved in 1971 that function http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif can be – in general – given by

http://freakonometrics.hypotheses.org/files/2015/12/ext-25.gif

And in 1976, Laurens de Haan obtained a three-type convergence working on quantile function http://freakonometrics.hypotheses.org/files/2015/12/ext-26.gif (with a much shorter proof).
There have been many many papers extending Fisher-Tippett’s theorem, e.g. on non-independent sequences, like exchangeable ones (in a paper by Simeon Berman in 1962, or on stationary Gaussian sequences in 1964).

This habit of breaking records is getting annoying!

Last January, I wrote a post noting that 2010 was a record year for natural catastrophes, but so was 2009, and 2008 too. In short, every year, records get broken. But climate is not the only field where records keep being broken. In sports too.
For instance, on Sunday, the world marathon record was (once again) broken, with the 42.195 kilometres covered (or simply run) in 2:03:38 by Patrick Makau (well, admittedly, an improvement of one second over the previous record). We should be happy (in any case Xi’an seems to be). Except that in winter 2012 I will be giving a course on copulas and extreme values at UQÀM, aka MAT8886. And I was thinking of having the students work on John Einmahl’s articles on 100-metre records, or more generally on records in athletics. Those papers give a very nice application of extreme value theory, in particular when one is in the max-domain of attraction of the Weibull distribution (one can then look for the upper bound – when working on the maximum – of the support of the distribution, called the endpoint). We thus learn that, according to extreme value theory, the minimal time to run the 42.195 km of a marathon should be 2:04:06.

I.e. the time needed to run a marathon should be bounded from below by that value. Except that this bound has been beaten. And several times, it seems to me.
To go a bit further, and to understand what was going on, I went and fetched the data from wikipedia, for the Boston, Chicago, Paris, Berlin, NYC, Stockholm, Fukuoka, Rotterdam, Amsterdam and London marathons. This gives me the winner’s time for the past years. The data were copied and pasted, so a bit of reshaping is needed here,

> base=read.table("http://freakonometrics.blog.free.fr/public
/data/topmarathon.csv",sep=";",header=TRUE)
> base=base[is.na(base$TIME)==FALSE,]
> base=base[(base$TIME=="")==FALSE,]
> n=nchar(as.character(base$YEAR))
> base$Y=as.numeric(substr(as.character(base$YEAR),n-3,n))
> base$Y[is.na(base$Y)]=1987
> h1=as.numeric(substr(as.character(base$TIME),1,1))
> h2=as.numeric(substr(as.character(base$TIME),1,2))
> i =which(is.na(h2))
> h=h2; h[i]=h1[i]
> m1=as.numeric(substr(as.character(base$TIME),3,4))
> m2=as.numeric(substr(as.character(base$TIME),4,5))
> m=m2; m[i]=m1[i]
> s1=as.numeric(substr(as.character(base$TIME),6,7))
> s2=as.numeric(substr(as.character(base$TIME),7,8))
> s=s2; s[i]=s1[i]
> base$T=h+m/60+s/60/60
> base=base[base$T>0,]
> base0=base[base$Y>=1982,]
> plot(base0$Y,base0$T,xlab="",ylab="")
> library(splines)
> reg=lm(T~bs(Y,6),data=base0)
> lines(1983:2010,predict(reg,newdata=
+ data.frame(Y=1983:2010)),col="red",lwd=2)

If we look carefully, we observe several interesting things. The first one is that people do not run at the same speed in every city. For instance in Stockholm, the fastest time is often quite far from the record times. But above all, the average winning time keeps decreasing over time,

http://freakonometrics.hypotheses.org/files/2015/12/marathon-cities_m.gif

We then have a form of non-stationarity in the series, which suggests that this non-stationarity should be studied more carefully before using extreme value theory. And this non-stationarity, and the analysis of records, is reminiscent of the discussion we had about natural catastrophes. It might be time to dig a little deeper…

Tennis and risk management

As mentioned already here, while we were going to Québec City for the workshop, we had interesting discussions in the car, and Maciej mentioned an article recently published in The Actuary,

Hence, I wanted to discuss (extremely) rare event probabilities in tennis. The story is simple: in June 2010, at Wimbledon, Nicolas Mahut and John Isner played the longest match ever: 980 points, over 11 hours of play. But first of all, we need a dataset. Thanks to Duncan Murdoch, I have been able to run a short code to build up a dataset:

CITIES=c("berlin","madrid","paris","rolandgarros","wimbledon","sydney",
"beijing","shanghai","singapore","tokyo","melbourne","melbourne-indoor")
YEARS=1970:2009
BASE0=data.frame(YEAR=NA,TRNMT=NA,LENGTH=NA,SETS=NA)
for(i in 1:length(CITIES)){
for(j in 1:length(YEARS)){
city=CITIES[i]
year=YEARS[j]
localization = paste("http://www.resultsfromtennis.com/",
year,"/atp/",city,".html",sep="")
essai = try(readLines(localization), silent=TRUE)
ERROR404=FALSE
if(inherits(essai, "try-error")){ERROR404=TRUE}
if(ERROR404==FALSE){
B=scan(localization,"character")
SETS=NA
LENGTH=NA
if(length(B)>270){
I=(substr(B,1,10)=="class=rez>")
sum(I)
X0=B[I]
X3=as.numeric(substr(X0,11,13))
X2=as.numeric(substr(X0,11,12))
X1=as.numeric(substr(X0,11,11))
X0=X3
X0[is.na(X3)==TRUE]=X2[is.na(X3)==TRUE]
X0[is.na(X2)==TRUE]=X1[is.na(X2)==TRUE]
JL=c(which(substr(B,1,9)=="class=nl>"),length(B))
IL=which(substr(B,1,10)=="class=rez>")
IC=cut(IL,JL)
base=data.frame(IC,X0)
LENGTH=as.numeric(tapply(X0,IC,sum))
SETS=as.numeric(tapply(X0,IC,length))/2}
BASE=data.frame(YEAR=year,TRNMT=city,LENGTH,SETS)
BASE0=rbind(BASE0,BASE)}}}
write.table(BASE0,"BASE-TENNIS-TOTAL.txt")

Here I consider only tournaments where players have to win 3 sets (and actually more tournaments than those in the code above), and I have something like a bit more than 72,000 matches,

> I=is.na(TENNIS$LENGTH)==FALSE
> BT=TENNIS[I,]
> nrow(BT)
[1] 72754
> maxr=function(x){max(x,na.rm=TRUE)}
> T=paste(BT$TRNMT,BT$YEAR)
> DUREE=tapply(BT$SETS,T,maxr)
> LISTE=names(DUREE[DUREE>3])
> BT=BT[T%in%LISTE,]

so, if we look briefly at matches over 35 years, we have the following boxplot (one boxplot per year),

The red line being the epic Isner-Mahut match in June 2010 (4-6, 6-3, 7-6, 6-7, 70-68, i.e. 183 games, here for the score card).

If we look at the theory (e.g. the work of Paul Newton and Kamran Aslam), a lot of results can be obtained for the expected number of games, but if we want to study extremely rare events, we should generate Markov chains (with a lot of simulations, since the probability should be extremely small). But how many? Consider below the matches with more than 50 games,

The tail plot (over 50), i.e. the log-log Pareto plot, indicates that it will be difficult to study the tails,

and similarly for the Hill plot (assuming Pareto-type tails…).
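
Both plots can be produced, for instance, with the evir package (a sketch, assuming the vector X contains the number of games per match, as in the code below),

> library(evir)
> emplot(X[X>50],alog="xy")   # empirical survival function, on log-log scales
> hill(X)                     # Hill plot of the tail index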

Anyway, if we want to study the tails, we should consider a high enough threshold. For instance, with a threshold at 68 (so that we keep only 24 matches), we have

> # fit a Generalized Pareto Distribution above the threshold, by maximum
> # likelihood and by probability weighted moments (gpd() is presumably
> # the function from the evir package)
> seuil=68+0.25
> GPD1=gpd(X,seuil,method = "ml")
> GPD2=gpd(X,seuil,method = "pwm")
>
> # unconditional tail probability: P(X > threshold) times the GPD survival
> # function evaluated at x
> xi=GPD1$par.ests[1]
> mu=seuil
> beta=GPD1$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD1$p.less.thresh)*P)
[1] 5.621281e-09
>
> xi=GPD2$par.ests[1]
> mu=seuil
> beta=GPD2$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD2$p.less.thresh)*P)
[1] 3.027095e-09

I.e. the probability that a match lasts more than 180 games is of the order of a few chances in a billion… With, say, 2,500 matches per year, that gives a return period of the order of 100,000 years. So yes, we can definitely call this a rare event… Perhaps, by generating several billion chains, it would be possible to get a more precise estimate of the probability of playing 183 games in a single match…
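
To make the return-period computation explicit, reusing the two estimated probabilities above and the assumed 2,500 matches per year,

> 1/(2500*5.621281e-09)   # ML fit: roughly 70,000 years
> 1/(2500*3.027095e-09)   # PWM fit: roughly 130,000 years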

Some historical remarks on extreme values

I will start here a short post on extreme values, with some historical perspective. In a recent paper (in French), I mentioned the use of the Pareto distribution as a standard model for extremes. But while reinsurers have been using the Pareto distribution for a long time (see here e.g.), the oldest mathematical models dealing with extreme values are related to work on maximum values in finite samples.

  • The work of Ronald Fisher and Leonard Tippett

Leonard Henry Tippett, a former student of Karl Pearson, published a note on extremes in Biometrika in 1925. The goal was “the determination of the distribution of the range and the extremes for a large number of samples“. In 1925, everyone was looking for the Gaussian distribution everywhere, and Leonard Tippett observed that the distribution of the largest value was not Gaussian.
A few years later, a joint work with Ronald Fisher was presented to the Cambridge Philosophical Society. The starting point was the idea of “stability” (even if the term did not appear explicitly in their work): the limiting distribution of the maximum should be of the “same type” as the underlying distribution. Thus, if https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-01.png stands for the cumulative distribution function, it should satisfy the functional equation

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-02.png
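
In modern notation, this stability requirement is usually written as follows (a standard formulation, which may differ from Fisher and Tippett's original notation): for every n, there exist constants a_n > 0 and b_n such that

F(a_n x + b_n)^n = F(x), for all x.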

Solutions of that functional equation give all possible limiting distributions. Thus, Fisher and Tippett obtained three possible limits (recalled below in modern notation),

  • solutions of https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-03.png, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-04.png
  • solutions of https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-05.png, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-06.png with https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-07.png (i.e. finite lower bound for the support), i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-08.png
  • solutions of https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-05.png, i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-10.png if https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-11.png (i.e. finite upper bound for the support), i.e. https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-09.png
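
For reference, in modern notation, the three limiting types are usually written as

\Lambda(x)=\exp(-e^{-x}), for x\in\mathbb{R} (Gumbel type),

\Phi_\alpha(x)=\exp(-x^{-\alpha}), for x>0, with \alpha>0 (Fréchet type),

\Psi_\alpha(x)=\exp(-(-x)^{\alpha}), for x\leq 0, with \alpha>0 (Weibull type).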

Based on those possible limiting distributions, Fisher and Tippett then wanted to derive what was later called the “domain of attraction” of each of those distributions.

  • The work of Maurice Fréchet, at the same time

In 1926, Maurice Fréchet wrote a paper on “la loi de probabilité de l’écart maximum“ (the probability law of the maximum deviation). That paper, as well as the one by Fisher and Tippett (written at the same time), investigated asymptotic limits. Both obtained functional equations, but only Maurice Fréchet understood the importance of the stability concept, pointed out by Paul Lévy in the context of sums. Thus, Maurice Fréchet introduced the concept of what is now called “max-stability“. But Fréchet solved only the functional equation https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-05.png. The point is that Fréchet studied absolute values of errors, i.e. strictly positive random variables. Thus, Maurice Fréchet considered the distribution

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-12.png

where https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-92.png is an arbitrary positive constant. The “2” comes from the fact that Fréchet considered errors with respect to the median. But he did not only introduce that new distribution function, he also proved that this distribution appears as a limit when the underlying distribution of the https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-13.png‘s has an algebraic behavior at infinity, i.e. is equivalent to https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-90.png, for some https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-91.png. I.e. he proved that Pareto-type tailed distributions were in the domain of attraction of the Fréchet distribution.

  •  Later on, the work of Emil Gumbel

In 1932, Emil Gumbel gave a talk in France on the “âge limite“ (limiting age). At the time, he wrote that “we can therefore assume that the distribution of the limiting age, that is, the probability that this age takes a given value, is Gaussian“. A few years later, however, he read about Fisher’s work and observed that “the distribution of an extreme value can, for a sufficiently large number of observations, be represented by the doubly exponential formula, provided the initial distribution behaves asymptotically like an exponential. The formula becomes exact if the initial distribution is exponential“, as he wrote in 1935. Thus, just as Fréchet proved that Pareto-type distributions were in the max-domain of attraction of Fréchet’s distribution, Gumbel obtained that exponential-type distributions were in the max-domain of attraction of Gumbel’s distribution. He also introduced the term “distribution de type exponentiel“ (exponential-type distribution).
For Emil Gumbel, it was natural to study the logarithmic derivative of the distribution, since it corresponds to the mortality rate in demography (a field Emil Gumbel had studied previously). As he mentioned, “from a theoretical point of view, it is interesting to note that Mr Fréchet constructed an initial distribution of a random variable for which the absolute value of the logarithmic derivative decreases without limit“. But since this was not a valuable property for practical applications, he decided that “we shall restrict ourselves to the treatment of data of the exponential type“. Emil Gumbel always tried to relate his work on extremes to his work in demography.
For instance, in 1937 he wrote a paper on “les centenaires“ (centenarians), which can also be related to Bortkiewicz’s work on rare events. He also applied his work to radioactivity and to hydrology.
In the 1930s, hydrologists such as Hazen or Graszberger introduced the concept of the “yearly maximum“ of a river level. They actually proposed to look for actuarial models to study decennial or centennial floods, but they only used the lognormal distribution to model yearly maxima. In 1936, the French hydrologist Aimé Coutagne met Emil Gumbel (who was then teaching at the ISFA, in Lyon). At that time, Emil Gumbel was looking for possible applications, outside demography, of his doubly exponential distribution. As Aimé Coutagne pointed out, “his formula should be applicable to the case of floods, that is, of the largest discharges, a problem analogous to that of the greatest ages“. Not only did Gumbel’s distribution give better empirical results, it also came with a theoretical justification.

  • Gumbel’s distribution properties

Consider the Gumbel distribution, with location and scale parameters alpha and beta respectively, i.e.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-40.png

Note that the associated quantile function is

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-41.png

with mean

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-43.png

and variance

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-44.png
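
Written out (assuming the usual parametrization, with location \alpha and scale \beta), those expressions are, in LaTeX notation,

F(x)=\exp\left(-\exp\left(-\frac{x-\alpha}{\beta}\right)\right), \qquad Q(p)=\alpha-\beta\log(-\log p),

\mathbb{E}[X]=\alpha+\beta\gamma (where \gamma is the Euler–Mascheroni constant), \qquad \text{Var}[X]=\frac{\pi^2\beta^2}{6}.
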
  • The work of Waloddi Weibull

Waloddi Weibull, a Swedish physicist, proposed a distribution in 1939 to represent the breaking strength of materials, and used it in the 1950s in a reliability context. Actually, Weibull appeared rather late in the story of extremes, since Fréchet, Fisher and Tippett had already mentioned the distribution in the mid-1920s.

  • From the central limit theorem (on the average) to the Fisher-Tippett theorem (on the maximum)

In order to visualize those two theorems, consider the following animation, where samples of 20 exponential variables are generated. From those 20 values, we plot the maximum in blue and the average in red, on top. Just below, we rescale those points by considering https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-16.png, and below again https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-15.png. We then look at the position of https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-14.png and that of the mean of https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-15.png. Finally, we build a histogram to visualize the distribution of the rescaled maximum (in blue) and of the rescaled average (in red).

For those who might be busy, after 1000 generations of samples we obtain the two histograms below: at the bottom, the rescaled average with the Gaussian density (the average of exponential variables already looks Gaussian with only 20 observations, even though the Gaussian distribution is only asymptotic, i.e. we should consider samples of size 2000); and on top, the rescaled maximum over 20 exponential observations, which looks like a Gumbel distribution (the approximation is essentially exact here, and the Gumbel is indeed the asymptotic distribution for exponential-type variables).
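
Such a simulation can be sketched as follows (a minimal sketch, assuming unit exponential variables; here the maximum is recentered by log n and the average rescaled as in the central limit theorem, which may differ from the exact rescalings used in the original animation),

ns=1000; n=20
M=rep(NA,ns); A=rep(NA,ns)
for(s in 1:ns){
x=rexp(n)
M[s]=max(x)-log(n)        # rescaled maximum, approximately standard Gumbel
A[s]=sqrt(n)*(mean(x)-1)  # rescaled average, approximately standard Gaussian (CLT)
}
par(mfrow=c(2,1))
hist(M,probability=TRUE,col="blue",main="rescaled maximum",xlab="")
u=seq(-2,8,by=.01)
lines(u,exp(-u-exp(-u)),lwd=2)   # standard Gumbel density
hist(A,probability=TRUE,col="red",main="rescaled average",xlab="")
v=seq(-4,4,by=.01)
lines(v,dnorm(v),lwd=2)          # standard Gaussian density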

  • The GEV distribution

The unified expression of those three distributions is called the GEV distribution. The generalized extreme value distribution has cumulative distribution function

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-20.png

for https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-21.png, where https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-22.png is the location parameter, https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-23.png the scale parameter and https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-24.png the shape parameter. Note that the expected value is
https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-30.png
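
Writing \mu, \sigma and \xi for the location, scale and shape parameters, the cumulative distribution function above is, in LaTeX notation,

G(x)=\exp\left(-\left[1+\xi\frac{x-\mu}{\sigma}\right]^{-1/\xi}\right), for 1+\xi\frac{x-\mu}{\sigma}>0,

and the expected value, for \xi<1 and \xi\neq 0, is \mu+\sigma\frac{\Gamma(1-\xi)-1}{\xi}.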