Tag Archives: GEV

Tail index estimation

These data were collected at Copenhagen Reinsurance and comprise 2167 fire losses over the period 1980 to 1990. They have been adjusted for inflation to reflect 1985 values and are expressed in millions of Danish kroner. Note that it is also possible to work with the same losses where each total claim has been divided into a building loss, a loss of contents and a loss of profits (the second, multivariate dataset below).

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> base2=read.table(
+ "http://freakonometrics.free.fr/danish-multivariate.txt",
+ header=TRUE)

Consider here the first dataset (we deal – so far – with univariate extremes),

> X=base1$Loss.in.DKM
> D=as.Date(as.character(base1$Date),"%m/%d/%Y")
> plot(D,X,type="h")

The graph is the following,

A natural idea is then to plot

$$\left\{\left(\log x_{i:n},\ \log\frac{n-i+1}{n+1}\right),\ i=1,\ldots,n\right\}$$

i.e.

> Xs=sort(X)
> logXs=rev(log(Xs))
> n=length(X)
> plot(log(Xs),log((n:1)/(n+1)))

The points are close to a straight line. The slope can be obtained using a linear regression, first on all the observations, and then on the largest ones only,

> B=data.frame(X=log(Xs),Y=log((n:1)/(n+1)))
> reg=lm(Y~X,data=B)
> summary(reg)

Call:
lm(formula = Y ~ X, data = B)

Residuals:
Min       1Q   Median       3Q      Max
-0.59999 -0.00777  0.00878  0.02461  0.20309

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.089442   0.001572   56.88   <2e-16 ***
X           -1.382181   0.001477 -935.55   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.04928 on 2165 degrees of freedom
Multiple R-squared: 0.9975,	Adjusted R-squared: 0.9975
F-statistic: 8.753e+05 on 1 and 2165 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-500):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 500):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.48502 -0.02148 -0.00900  0.01626  0.35798

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.186188   0.010033   18.56   <2e-16 ***
X           -1.432767   0.005105 -280.68   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.07751 on 499 degrees of freedom
Multiple R-squared: 0.9937,	Adjusted R-squared: 0.9937
F-statistic: 7.878e+04 on 1 and 499 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-100):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 100):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.33396 -0.03743  0.02279  0.04754  0.62946

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.67377    0.06777   9.942   <2e-16 ***
X           -1.58536    0.02240 -70.772   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1299 on 99 degrees of freedom
Multiple R-squared: 0.9806,	Adjusted R-squared: 0.9804
F-statistic:  5009 on 1 and 99 DF,  p-value: < 2.2e-16

The slope here is related to the tail index of the distribution. Consider some heavy-tailed distribution, i.e. one with a regularly varying survival function, so that $\overline{F}(x)=\mathbb{P}(X>x)=x^{-\alpha}\,\mathcal{L}(x)$, where $\mathcal{L}$ is some slowly varying function. Equivalently, there exists a slowly varying function $\mathcal{L}^{\star}$ such that the tail quantile function satisfies $F^{-1}(1-1/x)=x^{1/\alpha}\,\mathcal{L}^{\star}(x)$. Then

$$\log \overline{F}(x)=-\alpha\,\log x+\log \mathcal{L}(x),\qquad \text{with }\ \frac{\log \mathcal{L}(x)}{\log x}\rightarrow 0\ \text{ as } x\rightarrow\infty,$$

i.e., since a natural estimator of the quantile $F^{-1}\big(1-\frac{i}{n+1}\big)$ is the order statistic $x_{n-i+1:n}$, the points of the plot above should line up along a straight line whose slope is the opposite of the tail index $\alpha$. Considering only the $k$ largest observations, the slope can be estimated by a ratio of two averages of logarithms (anchoring the line at the $k$-th largest observation),

Hill's estimator is based on the fact that one of those two averages converges to a known constant (provided that $k\rightarrow\infty$ and $k/n\rightarrow 0$ as $n\rightarrow\infty$); replacing it by its limit yields the estimator of $\xi=1/\alpha$

$$\widehat{\xi}_k=\frac{1}{k}\sum_{i=1}^{k}\left(\log x_{n-i+1:n}-\log x_{n-k:n}\right).$$

Note that, if $k\rightarrow\infty$ but not too fast, i.e. $k/n\rightarrow 0$ as $n\rightarrow\infty$, then $\widehat{\xi}_k\rightarrow\xi$ in probability (one can even get almost sure convergence with stronger assumptions on the rate of convergence). Further,

$$\sqrt{k}\left(\widehat{\xi}_k-\xi\right)\xrightarrow{\mathcal{L}}\mathcal{N}\left(0,\xi^2\right).$$

Based on that (asymptotic) distribution, it is possible to get an (asymptotic) confidence interval for $\xi$,

> xi=1/(1:n)*cumsum(logXs)-logXs
> xise=1.96/sqrt(1:n)*xi
> plot(1:n,xi,type="l",ylim=range(c(xi+xise,xi-xise)),
+ xlab="",ylab="",)
> polygon(c(1:n,n:1),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(1:n,xi+xise,col="red",lwd=1.5)
> lines(1:n,xi-xise,col="red",lwd=1.5)
> lines(1:n,xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to work with $\widehat{\alpha}_k=1/\widehat{\xi}_k$, which estimates the tail index $\alpha$. Similarly, $\widehat{\alpha}_k\rightarrow\alpha$ in probability as $k\rightarrow\infty$ (and again almost surely with additional assumptions on the rate of convergence), and

$$\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\xrightarrow{\mathcal{L}}\mathcal{N}\left(0,\alpha^2\right)$$

(obtained using the delta-method: since $\alpha=g(\xi)$ with $g(x)=1/x$, the asymptotic variance is $g'(\xi)^2\,\xi^2=\xi^2/\xi^4=\alpha^2$). Again, we can use that result to derive (asymptotic) confidence intervals,

> alpha=1/xi
> alphase=1.96/sqrt(1:n)/xi
> YL=c(0,3)
> plot(1:n,alpha,type="l",ylim=YL,xlab="",ylab="",)
> polygon(c(1:n,n:1),c(alpha+alphase,rev(alpha-alphase)),
+ border=NA,col="lightblue")
> lines(1:n,alpha+alphase,col="red",lwd=1.5)
> lines(1:n,alpha-alphase,col="red",lwd=1.5)
> lines(1:n,alpha,lwd=1.5)
> abline(h=0,col="grey")

The Dekkers-Einmahl-de Haan (moment) estimator is

$$\widehat{\xi}_k=H_k^{(1)}+1-\frac{1}{2}\left[1-\frac{\big(H_k^{(1)}\big)^2}{H_k^{(2)}}\right]^{-1}$$

where, for $j=1,2$,

$$H_k^{(j)}=\frac{1}{k}\sum_{i=1}^{k}\left(\log x_{n-i+1:n}-\log x_{n-k:n}\right)^j.$$

Then (given again conditions on the speed of convergence, i.e. $k\rightarrow\infty$ with $k/n\rightarrow 0$ as $n\rightarrow\infty$), the estimator is asymptotically Gaussian; for $\xi\geq 0$,

$$\sqrt{k}\left(\widehat{\xi}_k-\xi\right)\xrightarrow{\mathcal{L}}\mathcal{N}\left(0,1+\xi^2\right).$$
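The post gives no code for this estimator; here is a minimal sketch (the variable names are mine), consistent with the notation above,

Xs=rev(sort(X))                      # losses sorted in decreasing order
kk=5:500                             # number of upper order statistics used
xidedh=sapply(kk,function(k){
  L=log(Xs[1:k])-log(Xs[k+1])        # log-spacings above the (k+1)-th largest loss
  H1=mean(L)                         # first empirical moment
  H2=mean(L^2)                       # second empirical moment
  H1+1-.5/(1-H1^2/H2)                # moment estimator of xi
})
plot(kk,xidedh,type="l",xlab="",ylab="")
abline(h=0,col="grey")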

Finally, Pickands' estimator

$$\widehat{\xi}^{P}_k=\frac{1}{\log 2}\,\log\left(\frac{x_{n-k+1:n}-x_{n-2k+1:n}}{x_{n-2k+1:n}-x_{n-4k+1:n}}\right)$$

is such that, as $k\rightarrow\infty$ (with, again, $k/n\rightarrow 0$ as $n\rightarrow\infty$),

$$\sqrt{k}\left(\widehat{\xi}^{P}_k-\xi\right)\xrightarrow{\mathcal{L}}\mathcal{N}\left(0,\ \frac{\xi^2\,\big(2^{2\xi+1}+1\big)}{\big(2\,(2^{\xi}-1)\,\log 2\big)^2}\right).$$

Here the code is

> Xs=rev(sort(X))
> xi=1/log(2)*log( (Xs[seq(1,length=trunc(n/4),by=1)]-
+ Xs[seq(2,length=trunc(n/4),by=2)])/
+ (Xs[seq(2,length=trunc(n/4),by=2)]-Xs[seq(4,
+ length=trunc(n/4),by=4)]) )
> xise=1.96/sqrt(seq(1,length=trunc(n/4),by=1))*
+ sqrt( xi^2*(2^(2*xi+1)+1)/((2*(2^xi-1)*log(2))^2))
> plot(seq(1,length=trunc(n/4),by=1),xi,type="l",
+ ylim=c(0,3),xlab="",ylab="",)
> polygon(c(seq(1,length=trunc(n/4),by=1),rev(seq(1,
+ length=trunc(n/4),by=1))),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(seq(1,length=trunc(n/4),by=1),
+ xi+xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),
+ xi-xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to use maximum likelihood techniques to fit a generalized Pareto distribution (GPD) to the exceedances over a high threshold.

> library(evd)
> library(evir)
> gpd(X,5)
$n
[1] 2167

$threshold
[1] 5

$p.less.thresh
[1] 0.8827873

$n.exceed
[1] 254

$method
[1] "ml"

$par.ests
xi      beta
0.6320499 3.8074817

$par.ses
xi      beta
0.1117143 0.4637270

$varcov
[,1]        [,2]
[1,]  0.01248007 -0.03203283
[2,] -0.03203283  0.21504269

$information
[1] "observed"

$converged
[1] 0

$nllh.final
[1] 754.1115

attr(,"class")
[1] "gpd"

or equivalently (or almost)

> gpd.fit(X,5)
$threshold
[1] 5

$nexc
[1] 254

$conv
[1] 0

$nllh
[1] 754.1115

$mle
[1] 3.8078632 0.6315749

$rate
[1] 0.1172127

$se
[1] 0.4636270 0.1116136

The interest of the latter function is that it makes it possible to visualize the profile likelihood of the tail index,

> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

or

> gpd.profxi(gpd.fit(X,20),xlow=0,xup=3)

Hence, it is possible to plot the maximum likelihood estimator of the tail index, as a function of the threshold (including a confidence interval),

> GPDE=Vectorize(function(u){gpd(X,u)$par.ests[1]})
> GPDS=Vectorize(function(u){
+ gpd(X,u)$par.ses[1]})
> u=c(seq(2,10,by=.5),seq(11,25))
> XI=GPDE(u)
> XIS=GPDS(u)
> plot(u,XI,ylim=c(0,2))
> segments(u,XI-1.96*XIS,u,XI+
+ 1.96*XIS,lwd=2,col="red")

Finally, it is possible to use block-maxima techniques.

> gev.fit(X)
$conv
[1] 0

$nllh
[1] 3392.418

$mle
[1] 1.4833484 0.5930190 0.9168128

$se
[1] 0.01507776 0.01866719 0.03035380

The estimator of the tail index is here the last coefficient, on the right.
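Note that gev.fit was applied here to the individual losses; to work with explicit block maxima instead, one can for instance use the gev function from the evir package (the choice of 100 consecutive losses per block below is an arbitrary one),

> library(evir)
> gev(X,block=100)$par.ests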
Since it can be difficult to install packages in classrooms, here is the source of the R code used here (to fit a GPD to the exceedances)

> source("http://freakonometrics.blog.free.fr/public/code/gpd.R")

Next time, we will discuss how to use those estimators.

More climate extremes, or simply global warming ?

In the paper on the heat wave in Paris (mentioned here) I discussed changes in the distribution of temperature (and autocorrelation of the time series).

During the workshop on Statistical Methods for Meteorology and Climate Change today (here) I observed that it was still an important question: is climate change affecting only averages, or does it have an impact on extremes ? And since I’ve seen nice slides to illustrate that question, I decided to play again with my dataset to see what could be said about temperature in Paris.
Recall that data can be downloaded here (daily temperature of the XXth century).

tmaxparis=read.table("/temperature/TX_SOUID100124.txt",
skip=20,sep=",",header=TRUE)
Dmaxparis=as.Date(as.character(tmaxparis$DATE),"%Y%m%d")
Tmaxparis=as.numeric(tmaxparis$TX)/10
tminparis=read.table("/temperature/TN_SOUID100123.txt",
skip=20,sep=",",header=TRUE)
Dminparis=as.Date(as.character(tminparis$DATE),"%Y%m%d")
Tminparis=as.numeric(tminparis$TN)/10
Tminparis[Tminparis==-999.9]=NA
Tmaxparis[Tmaxparis==-999.9]=NA
annee=trunc(tminparis$DATE/10000)
MIN=tapply(Tminparis,annee,min)
plot(unique(annee),MIN,col="blue",ylim=c(-15,40),xlim=c(1900,2000))
abline(lm(MIN~unique(annee)),col="blue")
abline(lm(Tminparis~unique(Dminparis)),col="blue",lty=2)
annee=trunc(tmaxparis$DATE/10000)
MAX=tapply(Tmaxparis,annee,max)
points(unique(annee),MAX,col="red")
abline(lm(MAX~unique(annee)),col="red")
abline(lm(Tmaxparis~unique(Dmaxparis)),col="red",lty=2)

On the plot below, the dots in red are the annual maximum temperatures, while the dots in blue are the annual minimum temperatures. The solid lines are the regression lines (based on the annual maxima/minima), and the dotted lines represent the trend in the average daily maximum/minimum temperature (to illustrate the overall tendency),

It is also possible to look at annual boxplots, and to focus either on the minima or on the maxima.

annee=trunc(tminparis$DATE/10000)
boxplot(Tminparis~as.factor(annee),ylim=c(-15,10),
xlab="Year",ylab="Temperature",col="blue")
x=boxplot(Tminparis~as.factor(annee),plot=FALSE)
xx=1:length(unique(annee))
points(xx,x$stats[1,],pch=19,col="blue")
abline(lm(x$stats[1,]~xx),col="blue")
annee=trunc(tmaxparis$DATE/10000)
boxplot(Tmaxparis~as.factor(annee),ylim=c(15,40),
xlab="Year",ylab="Temperature",col="red")
x=boxplot(Tmaxparis~as.factor(annee),plot=FALSE)
xx=1:length(unique(annee))
points(xx,x$stats[5,],pch=19,col="red")
abline(lm(x$stats[5,]~xx),col="red")

The filled dots are the average temperatures below the 5% quantile (for the minima) or above the 95% quantile (for the maxima), again with the regression lines,

We can observe an increasing trend for the minima, but not for the maxima!
Finally, an alternative is to remember that we focus on annual maxima and minima. Thus, the Fisher-Tippett theory (mentioned here) can be used. Here, we fit a GEV distribution on blocks of 10 consecutive years. Recall that the GEV cumulative distribution function is

$$F(x)=\exp\left(-\left[1+\xi\,\frac{x-\mu}{\sigma}\right]^{-1/\xi}\right),\qquad 1+\xi\,\frac{x-\mu}{\sigma}>0.$$
install.packages("evir")
library(evir)
Pmin=Dmin=Pmax=Dmax=matrix(NA,10,3)
for(s in 1:10){
X=MIN[1:10+(s-1)*10]
FIT=gev(-X)
Pmin[s,]=FIT$par.ests
Dmin[s,]=FIT$par.ses
X=MAX[1:10+(s-1)*10]
FIT=gev(X)
Pmax[s,]=FIT$par.ests
Dmax[s,]=FIT$par.ses
}

The location parameter $\mu$ is the following, with the minima on the left and the maxima on the right,

while the scale parameter $\sigma$ is

and finally the shape parameter $\xi$ is

On those graphs, it is very difficult to say anything regarding changes in temperature extremes… And I guess this is a reason why there is still active research in that area…

Some historical remarks on extreme values

I will start here a short post on extreme values, with some historical perspective. In a recent paper (in French), I mentioned the use of the Pareto distribution as a standard model for extremes. But if reinsurers have been using the Pareto distribution for a long time (see here e.g.), the oldest mathematical models for extreme values are related to the study of maximum values in finite samples.

  • The work of Ronald Fisher and Leonard Tippett

Leonard Henry Tippett, a former student of Karl Pearson, published a note on extremes in Biometrika in 1925. The goal was “the determination of the distribution of the range and the extremes for a large number of samples”. In 1925, everyone was looking for the Gaussian distribution everywhere, and Leonard Tippett observed that the distribution of the largest value was not Gaussian.
A few years later, joint work with Ronald Fisher was presented to the Cambridge Philosophical Society. The starting point was the idea of “stability” (even if the term did not appear explicitly in their work): the limiting distribution of the maximum should be of the “same type” as the underlying distribution. Thus, if $F$ stands for the cumulative distribution function of that limit, it should satisfy the functional equation

$$F(x)^n=F(a_n\,x+b_n)\qquad\text{for some }a_n>0,\ b_n.$$

Solutions of that functional equation give all possible limiting distributions. Thus, Fisher and Tippett obtained three possible limits,

  • the Gumbel type, $\Lambda(x)=\exp\left(-e^{-x}\right)$, defined on the whole real line,
  • the Fréchet type, $\Phi_{\alpha}(x)=\exp\left(-x^{-\alpha}\right)$ for $x>0$, with $\alpha>0$ (i.e. a finite lower bound for the support),
  • the Weibull type, $\Psi_{\alpha}(x)=\exp\left(-(-x)^{\alpha}\right)$ for $x<0$, with $\alpha>0$ (i.e. a finite upper bound for the support).

Based on those possible limiting distributions, Fisher and Tippett wanted to derive what was later called the “domain of attraction” of those distributions.

  • The work of Maurice Fréchet, at the same time

In 1926, Maurice Fréchet wrote a paper on “la loi de probabilité de l’écart maximum” (the probability distribution of the maximum deviation). That paper, like the one by Fisher and Tippett (written at the same time), investigated asymptotic limits. Both obtained functional equations, but only Maurice Fréchet understood the importance of the stability concept, pointed out by Paul Lévy in the context of sums. Thus, Maurice Fréchet introduced the concept of what is now called “max-stability”. But Fréchet solved only the second functional equation above. The point is that Fréchet studied absolute values of errors, i.e. strictly positive random variables. Thus, Maurice Fréchet considered the distribution

https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-12.png

where https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-92.png is an arbitrary positive constant. The “2” comes from the fact that Fréchet considered errors with respect to the median. He did not only introduce that new distribution function, he also proved that it appears as a limit when the underlying distribution of the https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-13.png‘s has an algebraic behavior at infinity, i.e. a tail equivalent to https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-90.png, for some https://perso.univ-rennes1.fr/arthur.charpentier/latex/ext-91.png. I.e. he proved that Pareto-type tailed distributions were in the domain of attraction of the Fréchet distribution.

  •  Later on, the work of Emil Gumbel

In 1932, Emil Gumbel gave a talk in France on the “âge limite” (the limiting age). As he wrote, “one can therefore assume that the distribution of the limiting age, that is, the probability that this age takes a given value, is Gaussian”. But a few years later he read about Fisher’s work, and observed that “the distribution of an extreme value can, for a sufficiently large number of observations, be represented by the doubly exponential formula, provided the initial distribution behaves asymptotically like an exponential. The formula becomes exact if the initial distribution is exponential”, as he wrote in 1935. Thus, just as Fréchet proved that Pareto-type distributions were in the max-domain of attraction of Fréchet’s distribution, Gumbel obtained that exponential-type distributions were in the max-domain of attraction of Gumbel’s distribution. He also introduced the term “distribution of exponential type”.
For Emil Gumbel, it was natural to study the logarithmic derivative of the distribution, since it is the mortality rate in demography (an area Emil Gumbel had studied previously). As he mentioned, “from a theoretical point of view, it is interesting to note that M. Fréchet constructed an initial distribution of a random variable for which the absolute value of the logarithmic derivative decreases without limit”. But since this was not a valuable property for practical applications, he decided that “we will restrict ourselves to the treatment of data of exponential type”. Emil Gumbel always tried to relate his work on extremes to what he did on demography.
For instance, in 1937, he wrote a paper on “les centenaires” (centenarians) that can also be related to the work of Bortkiewicz on rare events. He also applied his work to radioactivity and to hydrology.
In the 1930s, hydrologists such as Hazen or Graszberger introduced the concept of the “yearly maximum” of a river level. They actually proposed to look for actuarial models to study decennial or centennial floods. But they only used the lognormal distribution to model yearly maxima. In 1936, the French hydrologist Aimé Coutagne met Emil Gumbel (who was teaching at the ISFA, in Lyon). At that time, Emil Gumbel was looking for possible applications (outside demography) of his doubly exponential distribution. As pointed out by Aimé Coutagne, “his formula should be applicable to the case of floods, that is, of the largest discharges, a problem analogous to that of the largest ages”. Not only did Gumbel’s distribution give better empirical results, it also came with a theoretical justification.

  • Gumbel’s distribution properties

Consider the Gumbel distribution, with location and scale parameters $\alpha$ and $\beta$ respectively, i.e. with cumulative distribution function

$$F(x)=\exp\left(-\exp\left(-\frac{x-\alpha}{\beta}\right)\right),\qquad x\in\mathbb{R}.$$

Note that the associated quantile function is

$$F^{-1}(p)=\alpha-\beta\,\log(-\log p),\qquad p\in(0,1),$$

with mean

$$\mathbb{E}(X)=\alpha+\beta\,\gamma\qquad\text{(where }\gamma\text{ is the Euler-Mascheroni constant)}$$

and variance

$$\text{Var}(X)=\frac{\pi^2\,\beta^2}{6}.$$
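As a quick numerical check (not in the original post), one can simulate from this distribution by inverting the quantile function above; the values alpha = 1 and beta = 2 below are arbitrary,

set.seed(1)
alpha=1; beta=2                      # arbitrary location and scale parameters
Z=alpha-beta*log(-log(runif(1e6)))   # inverse transform sampling from the Gumbel cdf
c(mean(Z), alpha+beta*0.5772157)     # empirical vs theoretical mean
c(var(Z), pi^2*beta^2/6)             # empirical vs theoretical variance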
  • The work of Waloddi Weibull

Waloddi Weibull, a Swedish physicist, proposed a distribution in 1939 to represent the distribution of the breaking strength of materials. He used it in the 1950s in reliability analysis. Actually, the Weibull distribution appeared late in the story of extremes, since Fréchet, Fisher and Tippett had already mentioned it in the mid-1920s.

  • From the central limit theorem (on the average) to the Fisher-Tippett theorem (on the maximum)

In order to visualize those two theorems, consider the following animation, where samples of 20 exponential variables are generated. From those 20 values, we plot the maximum in blue and the average in red, on top. Just below, we rescale those points (centering and scaling the maximum and the average appropriately) and look at the positions of the rescaled maximum and of the rescaled average. We then build a histogram to visualize the distribution of the rescaled maximum (in blue) and of the rescaled average (in red).

For those who might be busy, after 1000 generated samples we obtain the following histograms: below, the rescaled average of the 20 exponential variables, which already looks Gaussian even with only 20 observations (strictly speaking, the Gaussian distribution is only the asymptotic limit, and much larger samples would be needed for it to be exact); and on top, the rescaled maximum of the 20 exponential variables, which looks like a Gumbel distribution (the fit is almost exact here, and the Gumbel is the asymptotic distribution for exponential-type variables).
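Here is a minimal sketch of that simulation (the rescalings below are the standard ones for unit exponential variables, namely $M_n-\log n$ for the maximum and $\sqrt{n}\,(\overline{X}_n-1)$ for the average; they are not necessarily those used in the original animation),

n=20; nsim=1000
M=replicate(nsim,max(rexp(n)))       # maxima of samples of size 20
A=replicate(nsim,mean(rexp(n)))      # averages of samples of size 20
par(mfrow=c(2,1))
hist(M-log(n),breaks=30,freq=FALSE,col="blue",main="rescaled maximum",xlab="")
curve(exp(-x)*exp(-exp(-x)),add=TRUE,lwd=2)    # standard Gumbel density
hist(sqrt(n)*(A-1),breaks=30,freq=FALSE,col="red",main="rescaled average",xlab="")
curve(dnorm(x),add=TRUE,lwd=2)                 # standard Gaussian density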

  • The GEV distribution

The unified expression of those three distributions is called the GEV, the generalized extreme value distribution, with cumulative distribution function

$$F(x)=\exp\left(-\left[1+\xi\,\frac{x-\mu}{\sigma}\right]^{-1/\xi}\right)$$

for $1+\xi\,(x-\mu)/\sigma>0$, where $\mu$ is the location parameter, $\sigma$ the scale parameter and $\xi$ the shape parameter. Note that the expected value is

$$\mathbb{E}(X)=\mu+\sigma\,\frac{\Gamma(1-\xi)-1}{\xi}\qquad\text{for }\xi<1,\ \xi\neq 0.$$
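A small R helper (not from the original post) implementing the cumulative distribution function above, with the $\xi=0$ case taken as the Gumbel limit; as an illustration, it is evaluated with the parameters fitted on the Danish losses in the first post above,

pgevcdf=function(x,mu=0,sigma=1,xi=0){
  z=(x-mu)/sigma
  if(abs(xi)<1e-8) return(exp(-exp(-z)))       # Gumbel limit when xi = 0
  ifelse(1+xi*z>0, exp(-(1+xi*z)^(-1/xi)),
         ifelse(xi>0,0,1))                     # values outside the support
}
pgevcdf(5, mu=1.4833, sigma=0.5930, xi=0.9168) # parameters from gev.fit(X) above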

Can we dispense with formalism when talking about extremes?

All the economics blogs are praising the paperback release of Daniel Zajdenweber's nice little book, Economie des Extrêmes. In particular, many people praise this book for explaining complex things simply... For instance Alexandre, back in 2001: "Once past the first chapter, which is a bit difficult and requires from the reader some basic knowledge of statistics and probability (the notions of probability distribution, expectation, variance...), and which describes in literary terms the characteristics of these distributions, the author applies these results to a large number of concrete phenomena and draws out their consequences." But can one talk about the economics of extremes without being technical?

So that my message is not misread: I find this little book, an introduction to the problem of extreme risks (one of my hobby-horses for a few years now), fascinating, but I hope it will encourage readers to move on to more detailed books on the subject. Popularization has limits, and they are quickly reached when dealing with topics as complex as this one.

The example I have studied the most is that of business interruption losses (discussed at length by Daniel Zajdenweber in his book). A few years ago, I used that part of the book as the basis of an exam question for the "reinsurance and large risks" course I was then teaching at ENSAE. And unfortunately, my literary skills are very limited, so I will do some mathematics. In the book, the following figure is presented,

which indeed corresponds to the function plotted as early as 1925 by Karl Gustav Hagstroem (I had pointed out (here) his pioneering work, where the interest of the Pareto distribution for modelling very large risks appeared for the first time). It is indeed quite natural: if we have a Pareto distribution, i.e.

$$\mathbb{P}(X>x)=C\,x^{-\alpha},$$

then, taking logarithms, we can write

$$\log\mathbb{P}(X>x)=\log C-\alpha\,\log x.$$

If we draw the empirical version, i.e. the scatter plot of the points

$$\left\{\left(\log x_{i:n},\ \log\frac{n-i+1}{n+1}\right),\ i=1,\ldots,n\right\},$$

then, for a Pareto distribution, the points should line up along a straight line, and the slope should correspond to the exponent of the power function. This is clearly the idea used here.

In other words, the dotted lines are not a confidence interval, but just a graphical device to ask whether the slope equals 1 or not. Daniel Zajdenweber claims that the slope here should be -1.

Whether or not this value equals one indeed has a very important impact on the insurability of business interruption risk. Recall that for a positive random variable (which is the case here), $\mathbb{E}(X)=\int_0^{\infty}\mathbb{P}(X>x)\,dx$. And if we have such a Pareto distribution, with unit exponent, then the pure premium of a reinsurance treaty covering the layer between $m$ and $M$ is

$$\int_m^M\mathbb{P}(X>x)\,dx=\int_m^M\frac{C}{x}\,dx,$$

that is

$$C\,\big[\log M-\log m\big]=C\,\log\frac{M}{m},$$

which corresponds to Daniel Zajdenweber's computations... But once again, "the absence of a finite expected value for the loss distribution" is a very strong conclusion, which we can try to revisit.

Indeed, since $\mathbb{E}(X)=\int_0^{\infty}\mathbb{P}(X>x)\,dx$, and since here the tail of the integrand behaves like $C\,x^{-\alpha}$, the integral converges only when $\alpha>1$. In other words, the expected value is finite if and only if the slope is strictly larger than 1 in absolute value. If the slope is smaller than (or equal to) 1, the risk is not insurable! Which is a very, very strong conclusion for insurers.
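To make the role of the exponent explicit, here is the worked computation behind the previous displays, assuming a Pareto-type survival function $\mathbb{P}(X>x)=C\,x^{-\alpha}$ over the layer:

$$\int_m^M C\,x^{-\alpha}\,dx=
\begin{cases}
\dfrac{C}{\alpha-1}\left(m^{1-\alpha}-M^{1-\alpha}\right), & \alpha\neq 1,\\[2mm]
C\,\log\dfrac{M}{m}, & \alpha=1,
\end{cases}$$

so that, letting $M\rightarrow\infty$, the premium of an unlimited layer remains finite only when $\alpha>1$, and grows without bound when $\alpha\leq 1$.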
So I asked the FFSA for the database used here (and tried to avoid problems with claim-cost inflation between 1992 and 2000). Taking all the claims, we obtain the following Pareto fit,

that is, a slope of 1.47 in absolute value. But once again, the Pareto fit should be based on the large claims only. Hill proposed a very popular estimator of this coefficient, in which only the k largest observations are taken into account, and the slope of the Pareto plot is estimated from those few values. The estimate is then plotted as a function of the number of large claims, or of the threshold defining the large claims.

Numerically, writing the $k+1$ largest observations as $x_{n-k:n}\leq x_{n-k+1:n}\leq\cdots\leq x_{n:n}$, the slope of the Pareto plot restricted to those points can be written as the ratio

$$\widehat{s}_k=\frac{\dfrac{1}{k}\displaystyle\sum_{i=1}^{k}\left(\log\dfrac{i}{n+1}-\log\dfrac{k+1}{n+1}\right)}{\dfrac{1}{k}\displaystyle\sum_{i=1}^{k}\left(\log x_{n-i+1:n}-\log x_{n-k:n}\right)}$$

and, simplifying the numerator (which converges to $-1$), the slope estimate becomes $-\widehat{\alpha}_k$, where

$$\widehat{\alpha}_k=\left(\frac{1}{k}\sum_{i=1}^{k}\log\frac{x_{n-i+1:n}}{x_{n-k:n}}\right)^{-1}$$

is the estimator constructed by Hill in 1975. Graphically, we obtain here
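Since the FFSA database is not public, here is a hedged sketch of how such a Hill plot can be produced with the evir package (illustrated on the Danish losses X from the first post; hill plots the estimate as a function of the number of upper order statistics),

library(evir)
hill(X,option="alpha",start=15,end=500)   # Hill estimates of alpha for k = 15, ..., 500
abline(h=1,lty=2)                         # the critical value alpha = 1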

In short, the question is whether the value 1 is reached for the large claims. Graphically, one is nevertheless inclined to reject that hypothesis.
One solution can be to run a statistical test based on the likelihood ratio, as suggested by Reiss & Thomas (2001) or Coles (2001). In fact, one can even use estimators other than Hill's, such as the one obtained by fitting a GPD (generalized Pareto distribution) to the distribution of the exceedances, or a GEV (generalized extreme value) distribution to block maxima. We then introduce the following test statistic for the null hypothesis of a unit tail index,

$$T=2\left[\max_{\alpha,\beta}\log\mathcal{L}(\alpha,\beta)-\max_{\beta}\log\mathcal{L}(1,\beta)\right],$$

and we look at the p-values (as well as the Bartlett correction, on the right),
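A minimal sketch of such a likelihood ratio test in R, on the Danish losses X and with an arbitrary threshold u = 5 (this is not the code behind the figures): the GPD is fitted to the exceedances with a free shape parameter, then with the shape fixed at 1 (i.e. a unit tail index), and the two log-likelihoods are compared with a chi-square quantile,

library(evir)
u=5
Y=X[X>u]-u                              # exceedances above the threshold
fit=gpd(X,u)                            # unconstrained GPD fit (xi free)
nllh1=function(logbeta){                # negative log-likelihood with xi fixed at 1
  beta=exp(logbeta)
  sum(log(beta)+2*log(1+Y/beta))
}
opt=optimize(nllh1,interval=c(-5,10))
LR=2*(opt$objective-fit$nllh.final)     # likelihood ratio statistic
1-pchisq(LR,df=1)                       # p-value for H0: xi = 1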

One can also, more simply, estimate the slope for several different thresholds, and look at the upper bound of the confidence interval,

In short, even if one of the GPD fits might make us hesitate about a unit slope, most of the tests reject that hypothesis, and so business interruption risk seems to be insurable, with a finite expected value. Pictures are great for conveying an idea, but relying on them alone to draw such strong conclusions leaves me skeptical...