
Bias of Hill Estimators

In the MAT8595 course, we saw yesterday the Hill estimator of the tail index. To be more specific, we saw that if $\overline{F}(x)=C x^{-\alpha}$, with $\alpha>0$, then the Hill estimators for $\alpha$ are given by

$$\widehat{\alpha}_k = \left[\frac{1}{k}\sum_{i=0}^{k-1} \log X_{n-i,n} -\log X_{n-k,n}\right]^{-1}$$

for $k\in\{1,2,\ldots,n\}$. We then saw that $\widehat{\alpha}_k$ is consistent, in the sense that $\widehat{\alpha}_k \overset{\mathbb{P}}{\rightarrow} \alpha$ if $k\rightarrow\infty$, but not too fast, i.e. $k/n\rightarrow 0$ (under additional assumptions on the rate of convergence, it is possible to prove that $\widehat{\alpha}_k \overset{a.s.}{\rightarrow} \alpha$). Further, under additional technical conditions,

$$\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(0,\alpha^2)$$
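As a side note, that estimator can also be computed directly from its definition; a minimal sketch (with a hypothetical helper called hill_manual) would be

> hill_manual=function(X,k){
+ Xs=sort(X,decreasing=TRUE)           # order statistics, largest first
+ 1/(mean(log(Xs[1:k]))-log(Xs[k+1]))  # [ (1/k) sum of log X_{n-i,n} - log X_{n-k,n} ]^{-1}
+ }

so that hill_manual(X,k) should be close to the value displayed on the Hill plot below, for that value of k.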

In order to illustrate this point, consider the following code. First, let us consider a Pareto survival function, and the associated quantile function

> alpha=1.5
> S=function(x){ifelse(x>1,x^(-alpha),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

The code here is more complicated than necessary, since this power function can easily be inverted, but later on we will consider a more complex survival function. Here are the survival function and the quantile function,

> u=seq(0,5,by=.01)
> plot(u,Vectorize(S)(u),type="l",col="red")
> u=seq(0,99/100,by=.01)
> plot(u,Vectorize(Q)(u),type="l",col="blue",ylim=c(0,20))

Here, we need the quantile function to generate a random sample from this distribution,

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

The Hill plot is obtained with

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

We can now generate thousands of random samples, and see how those estimators behave (for some specific $k$'s).

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }

and if we compute the average,

> plot(15*(1:10),apply(HillK,2,mean))

we do get a series of estimators that can be considered as unbiased.
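It can also be instructive to look at the dispersion of those estimates across the simulations, e.g. simply

> apply(HillK,2,sd)   # standard deviation of the Hill estimator, for each k

which should decrease as $k$ increases, in line with the discussion below on the volatility of the estimator.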

So far, so good. Now, recall that being in the max-domain of attraction of the Fréchet distribution does not mean that $\overline{F}(x)=C x^{-\alpha}$, with $\alpha>0$; it means that

$$\overline{F}(x)= x^{-\alpha}\,\mathcal{L}(x)$$

for some slowly varying function $\mathcal{L}$, not necessarily constant! In order to understand what could happen, we have to be slightly more specific. And this can be done only by looking at the second-order regular variation property of the survival function. Assume here that there is some auxiliary function $a$ such that

$$\lim_{t\rightarrow\infty}\frac{\overline{F}(xt)/\overline{F}(t)-x^{-\alpha}}{a(t)}=x^{-\alpha}\frac{1-x^{-\beta}}{\beta}$$

This (positive) constant $\beta$ is, somehow, related to the speed of convergence of the ratio of the survival functions to the power function (see e.g. Geluk et al. (2000) for some examples).

To be more specific, assume that

$$\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot x^{-\alpha}$$

then the second-order regular variation property is obtained with $a(t)=\beta t^{-\beta}$, and, if $k$ goes to infinity too fast, the estimator will be biased. More precisely (see Chapter 6 in Embrechts et al. (1997)), if $k=O(n^{2\beta/(\alpha+2\beta)})$, then, for some $\lambda>0$,

$$\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}\left(\frac{\alpha^3}{\beta-\alpha}\lambda,\alpha^2\right)$$

The intuitive interpretation of this result is that if $k$ is too large, and if the underlying distribution is not exactly a Pareto distribution (and we do have this second-order property), then Hill's estimator is biased. This is what we mean when we say

  • if $k$ is too large, $\widehat{\alpha}_k$ is a biased estimator
  • if $k$ is too small, $\widehat{\alpha}_k$ is a volatile estimator

(the latter comes from properties of a sample mean: the more observations, the lower the volatility of the mean).

Let us run some simulations to get a better understanding of what's going on. Using the previous code, it is actually extremely simple to generate a random sample with survival function

$$\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot x^{-\alpha}$$

> beta=.5
> S=function(x){ifelse(x>1,.5*x^(-alpha)*(1+x^(-beta)),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

We can then use the same code as above. Here, with

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

the Hill plot becomes

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

But this is based on one sample only. Again, consider thousands of samples, and let us see how Hill's estimator behaves,
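The simulation loop is the same as before, only run with this new quantile function, i.e. something like

> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))              # sample from the perturbed Pareto distribution
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10)) # Hill estimates for k=15,30,...,150
+ }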

so that the (empirical) mean of those estimators is

MAT8886 from tail estimation to risk measure(s) estimation

This week, we conclude the part on extremes with an application of extreme value theory to risk measures. We saw last week that, if we assume that, above a threshold $u$, a Generalized Pareto Distribution fits nicely, then we can use it to derive an estimator of the quantile function (for probability levels such that the quantile is larger than the threshold),

$$\widehat{Q}(p)=u+\frac{\widehat{\beta}}{\widehat{\xi}}\left[\left(\frac{1-p}{\overline{F}_n(u)}\right)^{-\widehat{\xi}}-1\right]$$

where $\overline{F}_n(u)$ denotes the proportion of observations above the threshold $u$. If the threshold is $X_{n-k,n}$, i.e. we keep the $k$ largest observations to fit a GPD, then this estimator can be written

$$\widehat{Q}(p)=X_{n-k,n}+\frac{\widehat{\beta}}{\widehat{\xi}}\left[\left(\frac{n}{k}(1-p)\right)^{-\widehat{\xi}}-1\right]$$

The code we wrote last week was the following (here based on log-returns of the SP500 index, and we focus on large losses, i.e. large values of the opposite of log returns, plotted below)

> library(tseries)
> X=get.hist.quote("^GSPC")
> T=time(X)
> D=as.POSIXlt(T)
> Y=X$Close
> R=diff(log(Y))
> D=D[-1]
> X=-R
> plot(D,X)
> library(evir)
> GPD=gpd(X,quantile(X,.975))
> xi=GPD$par.ests[1]
> beta=GPD$par.ests[2]
> u=GPD$threshold
> QpGPD=function(p){   # quantile estimator; here 2.5% of the observations are above the threshold
+ u+beta/xi*((100/2.5*(1-p))^(-xi)-1)
+ }
> QpGPD(1-1/250)
97.5%
0.04557386
> QpGPD(1-1/2500)
97.5%
0.08925095

This is consistent with the following outputs, for the return period of a yearly event (one observation out of 250 trading days)

> gpd.q(tailplot(gpd(X,quantile(X,.975))), 1-1/250, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI   Estimate   Upper CI
0.04172534 0.04557655 0.05086785

or the decennial one

> gpd.q(tailplot(gpd(X,quantile(X,.975))), 1-1/2500, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI   Estimate   Upper CI
0.07165395 0.08925558 0.13636620

Note that it is also possible to derive an estimator for another risk measure (the quantile being simply the so-called Value-at-Risk): the expected shortfall (or Tail Value-at-Risk), i.e.

$$ES_p=\mathbb{E}\left(X\mid X>Q(p)\right)$$

The idea is to rewrite that expression as

$$ES_p=Q(p)+\mathbb{E}\left(X-Q(p)\mid X>Q(p)\right)$$

so that we recognize the mean excess function (discussed earlier). Thus, assuming again that, above $u$ (and therefore above that high quantile), a GPD will fit, we can write

$$ES_p=Q(p)+\frac{\beta+\xi\left(Q(p)-u\right)}{1-\xi}$$

or equivalently

$$ES_p=\frac{Q(p)}{1-\xi}+\frac{\beta-\xi u}{1-\xi}$$

If we substitute estimators for the unknown quantities in that expression, we get

$$\widehat{ES}_p=\frac{\widehat{Q}(p)}{1-\widehat{\xi}}+\frac{\widehat{\beta}-\widehat{\xi}\,u}{1-\widehat{\xi}}$$

The code is here

> EpGPD=function(p){
+ u-beta/xi+beta/xi/(1-xi)*(100/2.5*(1-p))^(-xi)
+ }
> EpGPD(1-1/250)
97.5%
0.06426508
> EpGPD(1-1/2500)
97.5%
0.1215077

An alternative is to use Hill's approach (used to derive Hill's estimator). Assume here that $\overline{F}(x)=x^{-1/\xi}\,\mathcal{L}(x)$, where $\mathcal{L}$ is a slowly varying function. Then, for all $x>X_{n-k,n}$,

$$\frac{\overline{F}(x)}{\overline{F}(X_{n-k,n})}=\left(\frac{x}{X_{n-k,n}}\right)^{-1/\xi}\frac{\mathcal{L}(x)}{\mathcal{L}(X_{n-k,n})}$$

Since $\mathcal{L}$ is a slowly varying function, it seems natural to assume that the ratio of the slowly varying terms is almost 1 (which is true asymptotically). Thus

$$\overline{F}(x)\approx\overline{F}(X_{n-k,n})\left(\frac{x}{X_{n-k,n}}\right)^{-1/\xi}\approx\frac{k}{n}\left(\frac{x}{X_{n-k,n}}\right)^{-1/\xi}$$

i.e. if we invert that function, we derive an estimator for the quantile function

$$\widehat{Q}(p)=X_{n-k,n}\left(\frac{n}{k}(1-p)\right)^{-\widehat{\xi}}$$

which can also be written

$$\widehat{Q}(p)=X_{n-k,n}+X_{n-k,n}\left[\left(\frac{n}{k}(1-p)\right)^{-\widehat{\xi}}-1\right]$$

(which is close to the relation we derived using a GPD model). Here the code is

> k=trunc(length(X)*.025)
> Xs=rev(sort(as.numeric(X)))
> xiHill=mean(log(Xs[1:k]))-log(Xs[k+1])
> u=Xs[k+1]
> QpHill=function(p){
+ u+u*((100/2.5*(1-p))^(-xiHill)-1)
+ }

with the following Hill plot

For yearly and decennial events, we have here

> QpHill(1-1/250)
[1] 0.04580548
> QpHill(1-1/2500)
[1] 0.1010204

Those quantities seem consistent, since they are quite close, but they differ from the empirical quantiles,

> quantile(X,1-1/250)
99.6%
0.04743929
> quantile(X,1-1/2500)
99.96%
0.09054039

Note that it is also possible to use some functions to derive estimators of those quantities,

> riskmeasures(gpd(X,quantile(X,.975)),1-1/250)
p   quantile      sfall
[1,] 0.996 0.04557655 0.06426859
> riskmeasures(gpd(X,quantile(X,.975)),1-1/2500)
p   quantile     sfall
[1,] 0.9996 0.08925558 0.1215137

(in this application, we have assumed that log-returns were independent and identically distributed… which might be a rather strong assumption).

Tail index estimation

These data were collected at Copenhagen Reinsurance and comprise 2167 fire losses over the period 1980 to 1990. They have been adjusted for inflation to reflect 1985 values and are expressed in millions of Danish kroner. Note that it is also possible to work with a multivariate version of the same data, where the total claim has been divided into a building loss, a loss of contents and a loss of profits.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> base2=read.table(
+ "http://freakonometrics.free.fr/danish-multivariate.txt",
+ header=TRUE)

Consider here the first dataset (we deal – so far – with univariate extremes),

> X=base1$Loss.in.DKM
> D=as.Date(as.character(base1$Date),"%m/%d/%Y")
> plot(D,X,type="h")

The graph is the following,

A natural idea is then to plot

$$\left\{\left(\log X_{i:n},\ \log\frac{n-i+1}{n+1}\right),\ i=1,\ldots,n\right\}$$

i.e.

> Xs=sort(X)
> logXs=rev(log(Xs))
> n=length(X)
> plot(log(Xs),log((n:1)/(n+1)))

The points are (almost) on a straight line here. The slope can be obtained using a linear regression,

> B=data.frame(X=log(Xs),Y=log((n:1)/(n+1)))
> reg=lm(Y~X,data=B)
> summary(reg)

Call:
lm(formula = Y ~ X, data = B)

Residuals:
Min       1Q   Median       3Q      Max
-0.59999 -0.00777  0.00878  0.02461  0.20309

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.089442   0.001572   56.88   <2e-16 ***
X           -1.382181   0.001477 -935.55   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.04928 on 2165 degrees of freedom
Multiple R-squared: 0.9975,	Adjusted R-squared: 0.9975
F-statistic: 8.753e+05 on 1 and 2165 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-500):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 500):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.48502 -0.02148 -0.00900  0.01626  0.35798

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.186188   0.010033   18.56   <2e-16 ***
X           -1.432767   0.005105 -280.68   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.07751 on 499 degrees of freedom
Multiple R-squared: 0.9937,	Adjusted R-squared: 0.9937
F-statistic: 7.878e+04 on 1 and 499 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-100):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 100):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.33396 -0.03743  0.02279  0.04754  0.62946

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.67377    0.06777   9.942   <2e-16 ***
X           -1.58536    0.02240 -70.772   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1299 on 99 degrees of freedom
Multiple R-squared: 0.9806,	Adjusted R-squared: 0.9804
F-statistic:  5009 on 1 and 99 DF,  p-value: < 2.2e-16

The slope here is somehow related to the tail index of the distribution. Consider some heavy-tailed distribution, i.e. one in the max-domain of attraction of the Fréchet distribution with tail index $\xi>0$, so that $\overline{F}(x)=x^{-1/\xi}\,\mathcal{L}(x)$, where $\mathcal{L}$ is some slowly varying function. Equivalently, there exists a slowly varying function $\mathcal{L}^{\star}$ such that the quantile function satisfies $F^{-1}(1-u)=u^{-\xi}\,\mathcal{L}^{\star}(1/u)$. Then, taking logarithms,

$$\log\overline{F}(x)=-\frac{1}{\xi}\,\log x+\log\mathcal{L}(x)$$

i.e., since a natural estimator of the quantile $F^{-1}\left(1-\frac{i}{n+1}\right)$ is the $i$-th largest observation, the slope of the straight line above is the opposite of the tail index $\alpha=1/\xi$. An estimator of $\xi$ based only on the $k$ largest observations (the slope of the Pareto plot, with the axes exchanged) is

$$\widehat{\xi}_k=\frac{\displaystyle\frac{1}{k}\sum_{i=0}^{k-1}\log X_{n-i,n}-\log X_{n-k,n}}{\displaystyle\frac{1}{k}\sum_{i=0}^{k-1}\log\left(\frac{k+1}{i+1}\right)}$$

Hill's estimator is based on the fact that the denominator above is almost 1 (which requires $k\rightarrow\infty$, with $k/n\rightarrow0$ as $n\rightarrow\infty$), i.e.

$$\widehat{\xi}_k=\frac{1}{k}\sum_{i=0}^{k-1}\log X_{n-i,n}-\log X_{n-k,n}$$

Note that, if $k\rightarrow\infty$, but not too fast, i.e. $k/n\rightarrow0$ as $n\rightarrow\infty$, then $\widehat{\xi}_k\overset{\mathbb{P}}{\rightarrow}\xi$ (one can even get $\widehat{\xi}_k\overset{a.s.}{\rightarrow}\xi$ with stronger assumptions on the rate of convergence). Further,

$$\sqrt{k}\left(\widehat{\xi}_k-\xi\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(0,\xi^2)$$

Based on that (asymptotic) distribution, it is possible to get an (asymptotic) confidence interval for $\xi$,

> xi=1/(1:n)*cumsum(logXs)-logXs    # Hill estimator of xi, for each number k of large observations
> xise=1.96/sqrt(1:n)*xi            # half-width of the asymptotic 95% confidence interval
> plot(1:n,xi,type="l",ylim=range(c(xi+xise,xi-xise)),
+ xlab="",ylab="")
> polygon(c(1:n,n:1),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(1:n,xi+xise,col="red",lwd=1.5)
> lines(1:n,xi-xise,col="red",lwd=1.5)
> lines(1:n,xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to work with $\alpha=1/\xi$, and then $\widehat{\alpha}_k=1/\widehat{\xi}_k$. Similarly, $\widehat{\alpha}_k\overset{\mathbb{P}}{\rightarrow}\alpha$ as $k\rightarrow\infty$ (and again $\widehat{\alpha}_k\overset{a.s.}{\rightarrow}\alpha$ with additional assumptions on the rate of convergence), and

$$\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(0,\alpha^2)$$

(obtained using the delta-method). Again, we can use that result to derive (asymptotic) confidence intervals

> alpha=1/xi
> alphase=1.96/sqrt(1:n)/xi
> YL=c(0,3)
> plot(1:n,alpha,type="l",ylim=YL,xlab="",ylab="")
> polygon(c(1:n,n:1),c(alpha+alphase,rev(alpha-alphase)),
+ border=NA,col="lightblue")
> lines(1:n,alpha+alphase,col="red",lwd=1.5)
> lines(1:n,alpha-alphase,col="red",lwd=1.5)
> lines(1:n,alpha,lwd=1.5)
> abline(h=0,col="grey")

The Dekkers-Einmahl-de Haan estimator is

$$\widehat{\xi}_k^{DEdH}=H_k^{(1)}+1-\frac{1}{2}\left[1-\frac{\left(H_k^{(1)}\right)^2}{H_k^{(2)}}\right]^{-1}$$

where, for $j=1,2$,

$$H_k^{(j)}=\frac{1}{k}\sum_{i=0}^{k-1}\left(\log X_{n-i,n}-\log X_{n-k,n}\right)^{j}$$

Then (given again conditions on the speed of convergence, i.e. $k\rightarrow\infty$, with $k/n\rightarrow0$ as $n\rightarrow\infty$), for $\xi\geq0$,

$$\sqrt{k}\left(\widehat{\xi}_k^{DEdH}-\xi\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}\left(0,1+\xi^2\right)$$
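A minimal sketch for this estimator, directly following the definition above (and reusing X and n from the Danish fire losses), could be

> Xs=rev(sort(X))                   # observations, largest first
> DEdH=function(k){
+ E=log(Xs[1:k])-log(Xs[k+1])       # log-excesses above the (k+1)-th largest observation
+ H1=mean(E); H2=mean(E^2)
+ H1+1-.5/(1-H1^2/H2)               # moment estimator of the tail index
+ }
> plot(15:(n-1),Vectorize(DEdH)(15:(n-1)),type="l",xlab="",ylab="")
> abline(h=0,col="grey")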

Finally, for Pickands' estimator

$$\widehat{\xi}_k^{P}=\frac{1}{\log 2}\,\log\left(\frac{X_{n-k,n}-X_{n-2k,n}}{X_{n-2k,n}-X_{n-4k,n}}\right)$$

it is possible to prove that, as $k\rightarrow\infty$,

$$\sqrt{k}\left(\widehat{\xi}_k^{P}-\xi\right)\overset{\mathcal{L}}{\rightarrow}\mathcal{N}\left(0,\frac{\xi^2\left(2^{2\xi+1}+1\right)}{\left(2\left(2^{\xi}-1\right)\log 2\right)^2}\right)$$

Here the code is

> Xs=rev(sort(X))
> xi=1/log(2)*log( (Xs[seq(1,length=trunc(n/4),by=1)]-
+ Xs[seq(2,length=trunc(n/4),by=2)])/
+ (Xs[seq(2,length=trunc(n/4),by=2)]-Xs[seq(4,
+ length=trunc(n/4),by=4)]) )
> xise=1.96/sqrt(seq(1,length=trunc(n/4),by=1))*
+ sqrt( xi^2*(2^(xi+1)+1)/((2*(2^xi-1)*log(2))^2))
> plot(seq(1,length=trunc(n/4),by=1),xi,type="l",
+ ylim=c(0,3),xlab="",ylab="")
> polygon(c(seq(1,length=trunc(n/4),by=1),rev(seq(1,
+ length=trunc(n/4),by=1))),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(seq(1,length=trunc(n/4),by=1),
+ xi+xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),
+ xi-xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to use maximum likelihood techniques to fit a GPD distribution over a high threshold.

> library(evd)
> library(evir)
> gpd(X,5)
$n
[1] 2167

$threshold
[1] 5

$p.less.thresh
[1] 0.8827873

$n.exceed
[1] 254

$method
[1] "ml"

$par.ests
xi      beta
0.6320499 3.8074817

$par.ses
xi      beta
0.1117143 0.4637270

$varcov
[,1]        [,2]
[1,]  0.01248007 -0.03203283
[2,] -0.03203283  0.21504269

$information
[1] "observed"

$converged
[1] 0

$nllh.final
[1] 754.1115

attr(,"class")
[1] "gpd"

or equivalently (or almost), using gpd.fit (from the ismev package, or from the gpd.R file sourced at the end of this post)

> gpd.fit(X,5)
$threshold
[1] 5

$nexc
[1] 254

$conv
[1] 0

$nllh
[1] 754.1115

$mle
[1] 3.8078632 0.6315749

$rate
[1] 0.1172127

$se
[1] 0.4636270 0.1116136

The interest of the latter function is that it is possible to visualize the profile likelihood of the tail index,

> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

or

> gpd.profxi(gpd.fit(X,20),xlow=0,xup=3)

Hence, it is possible to plot the maximum likelihood estimator of the tail index, as a function of the threshold (including a confidence interval),

> GPDE=Vectorize(function(u){gpd(X,u)$par.ests[1]})
> GPDS=Vectorize(function(u){
+ gpd(X,u)$par.ses[1]})
> u=c(seq(2,10,by=.5),seq(11,25))
> XI=GPDE(u)
> XIS=GPDS(u)
> plot(u,XI,ylim=c(0,2))
> segments(u,XI-1.96*XIS,u,XI+
+ 1.96*XIS,lwd=2,col="red")

Finally, it is possible to use block-maxima techniques.

> gev.fit(X)
$conv
[1] 0

$nllh
[1] 3392.418

$mle
[1] 1.4833484 0.5930190 0.9168128

$se
[1] 0.01507776 0.01866719 0.03035380

The estimator of the tail index is here the last coefficient, on the right.
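For instance, a rough (asymptotic) confidence interval for that shape parameter can be obtained with something like

> GEV=gev.fit(X)
> GEV$mle[3]+c(-1.96,1.96)*GEV$se[3]   # rough 95% confidence interval for the tail index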
Since it is rather difficult to install a package in classrooms, here is the source of the R code used here (to fit a GPD on exceedances)

> source("http://freakonometrics.blog.free.fr/public/code/gpd.R")

Next time, we will discuss how to use those estimators.

Detecting distributions with infinite mean

In a post I published a few months ago (in French, here, based on some old paper, there), I mentioned a statistical procedure to test whether the underlying distribution $F$ of an i.i.d. sample $\{X_1,\ldots,X_n\}$ has a finite mean (based on extreme value results). Since I just used it on a small dataset (yes, with real data), I decided to post the R code, since it is rather simple to use. But instead of working on that dataset, let us see what happens on simulated samples. Consider $n=200$ observations generated from a Pareto distribution,

$$\mathbb{P}(X>x)=x^{-\alpha},\quad x\geq1$$

(here $\alpha=2$, as a start)

> a=2
> X=runif(200)^(-1/a)

Here, we will use the package developed by Mathieu Ribatet,

> library(RFA)
  • Using Generalized Pareto Distribution (and LR test)

A first idea is to fit a GPD distribution on observations that exceed a threshold $u>1$.
Since we would like to test $H_0:\xi=1$ (against the assumption that the expected value is finite, i.e. $H_1:\xi<1$), it is natural to consider the likelihood ratio

$$\Lambda=\frac{\displaystyle\sup_{\xi=1,\,\beta}\,\mathcal{L}(\xi,\beta\mid\boldsymbol{X})}{\displaystyle\sup_{\xi,\,\beta}\,\mathcal{L}(\xi,\beta\mid\boldsymbol{X})}$$

Under the null hypothesis, the distribution of $-2\log\Lambda$ should be a chi-square distribution with one degree of freedom. As mentioned here, the significance level is attained with a higher accuracy by employing a Bartlett correction (there). But let us make it as simple as possible for the blog, and use the chi-square distribution to derive the p-value.
Since it is rather difficult to select an appropriate threshold, it can be natural (as in Hill's estimator) to consider $u=X_{n-k,n}$, and thus to fit a GPD on the $k$ largest values, and then to plot all that on a graph (like the Hill plot)

> Xs=rev(sort(X))
> s=0;G=rep(NA,length(Xs)-14);Gsd=G;LR=G;pLR=G
> for(i in length(X):15){
+ s=s+1
+ FG=fitgpd(X,Xs[i],method="mle")
+ FGc=fitgpd(X,Xs[i],method="mle",shape=1)
+ G[s]=FG$estimate[2]
+ Gsd[s]=FG$std.err[2]
+ FGc$fixed
+ LR[s]=FGc$deviance-FG$deviance
+ pLR[s]=1-pchisq(LR[s],df=1)
+ }

Here we keep the estimated value of the tail index, and the associated standard deviation of the estimator, to draw a confidence interval (assuming that the maximum likelihood estimator is Gaussian, which is correct only when n is extremely large). We also calculate the deviance of the model, the deviance of the constrained model ($\xi=1$), and their difference, which is the likelihood ratio statistic. Then we calculate the p-value (since, under $H_0$, the likelihood ratio statistic has a chi-square distribution).
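The graphs below (p-value of the test on top, tail index estimate with its confidence interval underneath) can be reproduced with something like the following sketch,

> k=length(X):15                        # number of largest observations used at each step
> par(mfrow=c(2,1))
> plot(k,pLR,type="l",xlab="",ylab="p-value of the LR test")
> abline(h=.05,lty=2,col="grey")
> plot(k,G,type="l",ylim=c(0,2),xlab="",ylab="tail index")
> lines(k,G+1.96*Gsd,col="red")
> lines(k,G-1.96*Gsd,col="red")
> abline(h=1,lty=2,col="grey")
> par(mfrow=c(1,1))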
If $\alpha=2$, we have the following graph, with on top the p-value (which is almost null here), and below the estimation of the tail index based on the $k$ largest values (with a confidence interval for the estimator),

If $\alpha=1.5$ (finite mean, but infinite variance), we have

i.e. for those two models, we clearly reject the assumption of an infinite mean (even if, when $\alpha$ gets too close to 1, we should consider large enough thresholds). On the other hand, if $\alpha=1$ (i.e. infinite mean), we clearly accept the assumption of an infinite mean (whatever the threshold),

  • Using Hill’s estimator

An alternative could be to use Hill’s estimator (with Alexander McNeil’s package). See here for more details on that estimator. The test is simply based on the confidence interval derived from the (asymptotic) normal distribution of Hill’s estimator,

> library(evir)
> Xs=rev(sort(X))
> HILL=hill(X)
> G=rev(HILL$y)
> Gsd=rev(G/sqrt(HILL$x))
> pLR=1-pnorm(rep(1,length(G)),mean=G,sd=Gsd)

Again, if $\alpha=2$, we clearly reject the assumption of an infinite mean,

and similarly, if $\alpha=1.5$ (finite mean, but infinite variance),

Here the test is more robust than the one based on the GPD. And if $\alpha=1$ (i.e. infinite mean), again we accept $H_0$,

Note that if $\alpha=0.7$, it is still possible to run the procedure, and hopefully, it suggests that the underlying distribution has an infinite mean,

(here without any doubt). Now you need to wait a few days to see some practical applications of the idea (there was one in the paper mentioned above actually, on business interruption insurance losses).

Reinsurance models

A paper on reinsurance models ("Les modèles en réassurance") has been published in the journal Risques (here), following some questioning about the relevance of the classical models used by reinsurers. The initial question came from the observation, found here and there, about the use (or misuse) of sophisticated models in market finance, and from trying to explain that, from an epistemological point of view at least, reinsurers' models are more robust. In particular, note that the oldest models used by reinsurers (in particular the Pareto distribution, as I mentioned here) enjoyed a practical legitimacy for several decades before being justified by extreme value theory. I will soon write a post on the history of extreme values in statistics, coming back in particular to the work of Gumbel and Fréchet.

The code used in the paper is online here, and the dataset is obtained with

> sinpe = read.table("https://perso.univ-rennes1.fr/arthur.charpentier/sinpe.csv",header=TRUE,sep=";")
> head(sinpe)
      DSUR  MNTPE
1 19850206 240439
2 19851228 125674
3 19850504 488331
4 19851118 457347
5 19850220 990919
6 19851214 182939
> annee=as.numeric(substr(as.character(sinpe$DSUR),1,4))
> sinistres=sinpe$MNTPE[annee>1992]
> XS=sinistres/100000

We restrict ourselves here to claims that occurred after 1992. If we plot those claim amounts, we obtain

> datesur=as.Date(as.character(sinpe$DSUR),"%Y%m%d")
> jour=datesur[annee>1992]
> plot(jour,sinistres/100000,xlab="",ylab="Coût individuel",cex=.5,ylim=c(0,600))
> ded=50
> abline(h=ded)

We can also draw the Pareto log-log plot,

> library(evir)
> X=XS
> n=length(X)
> plot(log(sort(X)),log((n:1)/(n+1)),
+ xlab="Coûts des sinistres (logarithme)",ylab="Fonction de survie (logarithme)",cex=.8)
> out <- gpd(XS, 15)
> XI=as.numeric(out$par.ests[1]); BETA=as.numeric(out$par.ests[2])
> x0=seq(2,8,.01)
> lines(x0,-1/XI*(x0-log(15)),col="red")

(the Pareto fit being used to draw the red line), or plot the Hill estimator of the tail index,

> hill(X)

Finally, the following code computes the pure premium (or, more generally, a Wang premium), either non-parametrically (burning cost) or using the fitted Pareto distribution. Since the paper is a popularization paper, I chose the threshold arbitrarily, without any search for an "optimal" value.

> DEDUC = seq(10,50,by=5)
> lambda=0;  seuil=5
> WG1=WG2=rep(NA,length(DEDUC))
> for(k in 1:length(DEDUC)){
+ deductible=DEDUC[k]
+ out <- gpd(XS, seuil)
+ XI=as.numeric(out$par.ests[1]); BETA=as.numeric(out$par.ests[2]) 
+ G0=function(x){1-pgpd(x+seuil, xi = XI, mu = seuil, beta = BETA)}
+ G=function(x){(G0(x+deductible-seuil))/(G0(deductible-seuil))}
+ F=function(x){pnorm(qnorm(G(x))+lambda)}
+ (wang1=integrate(F, 0, Inf))
+ X=XS[XS>deductible]
+ n=length(X)
+ FS= function(z){
+ m=rep(NA,length(z))
+ for(i in 1:length(m)){
+ m[i]=sum(X>z[i]+deductible)/n}
+ return(m)
+ }
+ G=function(x){pnorm(qnorm(FS(x))+lambda)}
+ (wang2=sum(G(seq(0,800,.01))*.01))
+ WG1[k]=as.numeric(wang1$value)
+ WG2[k]=wang2
+ }
> plot(DEDUC,WG2-DEDUC,type='b',xlab="Niveau de la priorité ('00 000 euros)",ylab="Prime pure par sinistres réassuré",ylim=c(0,50))
> lines(DEDUC,WG1-DEDUC,type='b',col="red",pch=4)
> legend(10,50,c("Aujustement d'une loi de Pareto","'Burning cost'"),
+ col=c("red","black"),lty=1,cex=.8,pch=c(4,1))

Can we do without formalism when talking about extremes?

All the economics blogs are praising the paperback release of Daniel Zajdenweber's nice little book, Economie des Extrêmes. In particular, many people praise this book for explaining complex things simply... For instance Alexandre, back in 2001: "Once past the first chapter, which is a bit difficult and requires from the reader some basic knowledge of statistics and probability (the notions of probability distribution, expectation, variance...), and which describes in literary terms the characteristics of those distributions, the author applies these results to a large number of concrete phenomena, and draws the consequences." But can we talk about the economics of extremes without being technical?

So that my message is not distorted: I find this little introduction to the issue of extreme risks (one of my pet topics for a few years now) fascinating, but I hope it will encourage readers to move on to more detailed books on the subject. Because popularization has limits, which are quickly reached when dealing with such complex topics.

The example I have studied the most is that of business interruption losses (discussed at length by Daniel Zajdenweber in his book). A few years ago, I used that part of the book as the basis for an exam in the "reinsurance and large risks" course I was then teaching at ENSAE. And unfortunately, my literary skills are very limited, so I will do some maths. In the book, the following figure is presented,

which indeed corresponds to the function drawn as early as 1925 by Karl Gustav Hagstroem (I had pointed out (here) his pioneering work, in which the interest of the Pareto distribution for modelling very large risks appeared for the first time). It is indeed quite natural: if we have a Pareto distribution, i.e.

$$\mathbb{P}(X>x)=\left(\frac{x}{x_0}\right)^{-\alpha},\quad x\geq x_0$$

then we could write, taking logarithms,

$$\log\mathbb{P}(X>x)=-\alpha\left[\log x-\log x_0\right]$$

If we plot the empirical version, that is the scatterplot of points

$$\left\{\left(\log X_{i:n},\ \log\frac{n-i+1}{n+1}\right),\ i=1,\ldots,n\right\}$$

then, for a Pareto distribution, the points should be aligned along a straight line, and the slope should correspond to the exponent of the power function. This is clearly the idea exploited here.
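A minimal R sketch of that Pareto plot, on a simulated Pareto sample (with arbitrary values $\alpha=1.5$ and $x_0=1$), could be

> set.seed(1)
> alpha=1.5
> X=runif(1000)^(-1/alpha)              # simulated Pareto sample, with x0=1
> n=length(X)
> plot(log(sort(X)),log((n:1)/(n+1)),
+ xlab="log of the losses",ylab="log of the survival function",cex=.5)
> abline(a=0,b=-alpha,col="red")        # theoretical straight line, with slope -alpha

and the points should then be scattered around the red line.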

In other words, the dotted lines are not a confidence interval, but just a graphical tool to ask whether the slope is equal to 1 or not. Daniel Zajdenweber claims that the slope should here be -1.

Whether or not that value is equal to one has indeed a very important impact in terms of insurability of the business interruption risk. Recall that, for a positive variable (and this is the case here), $\mathbb{E}(X)=\int_0^\infty\mathbb{P}(X>x)\,dx$. And if we have such a Pareto distribution (with unit exponent), then the pure premium of a reinsurance treaty, covering the layer between $m$ and $M$, can be written

$$\int_m^M\mathbb{P}(X>x)\,dx=\int_m^M\frac{x_0}{x}\,dx$$

that is

$$x_0\left[\log M-\log m\right]=x_0\,\log\frac{M}{m}$$

which corresponds to Daniel Zajdenweber's calculations... But once again, "the absence of a mathematical expectation of the claims distribution" is a very strong conclusion, which we can try to revisit. Indeed, for a positive variable,

$$\mathbb{E}(X)=\int_0^\infty\mathbb{P}(X>x)\,dx$$

so, here,

$$\mathbb{E}(X)=x_0+\int_{x_0}^{\infty}\left(\frac{x}{x_0}\right)^{-\alpha}dx$$

in other words, the expectation is finite if the slope is strictly larger than 1 (in absolute value). If the slope is lower than (or equal to) 1, the risk is not insurable! Which is a very, very strong conclusion for insurers.
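To see concretely what a unit exponent implies, here is a minimal sketch (with a purely hypothetical layer, $m=10$ and $M=100$, and $x_0=1$) computing the pure premium of the layer as the integral of the survival function,

> x0=1; m=10; M=100                     # hypothetical retention m and upper limit M
> S=function(x,alpha) (x/x0)^(-alpha)   # Pareto survival function, for x >= x0
> integrate(S,m,M,alpha=1)$value        # unit exponent: equals x0*log(M/m)
> integrate(S,m,M,alpha=1.5)$value      # with alpha=1.5, the layer premium is much smaller

and, with a unit exponent, the premium of an unlimited layer (priority $m$, no upper limit) would be infinite.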
So I asked the FFSA for the database used here, adjusted for claim cost inflation between 1992 and 2000. If I take all the claims, I obtain the following Pareto fit,

that is a slope of 1.47 (in absolute value). But once again, the Pareto fit should be made on the large claims. Hill proposed a very popular estimator of this coefficient, in which only the k largest observations are taken into account, and we look at the estimated slope of the Pareto plot for those values. The estimate is then plotted against the number of large claims, or against the threshold defining the large claims. Numerically, setting

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z08.png

we can write the slope estimator as

https://perso.univ-rennes1.fr/arthur.charpentier/latex/z09.png

that is, simplifying the numerator,

$$\widehat{\alpha}_k=\left[\frac{1}{k}\sum_{i=0}^{k-1}\log X_{n-i,n}-\log X_{n-k,n}\right]^{-1}$$

as constructed by Hill in 1975. Graphically, we have here

In short, the question is whether the value 1 is reached for the large claims. Graphically, we are nevertheless tempted to reject that hypothesis.
One solution can be to run a statistical test, based on a likelihood ratio, as suggested by Reiss & Thomas (2001) or Coles (2001). Actually, we can even use estimators other than Hill's, such as the one obtained by fitting a GPD (generalized Pareto distribution) to the distribution of the excesses, or a GEV (generalized extreme value) distribution to block maxima. We then introduce the following test statistic

$$-2\log\frac{\displaystyle\sup_{\xi=1}\,\mathcal{L}(\xi,\beta\mid\boldsymbol{X})}{\displaystyle\sup_{\xi,\,\beta}\,\mathcal{L}(\xi,\beta\mid\boldsymbol{X})}$$

and we look at the p-values (as well as the Bartlett correction, on the right),

We can also, more simply, estimate several slope coefficients for different thresholds, and look at the upper bound of the confidence interval,

In short, even if with one of the GPD fits we might hesitate to retain a unit slope, most of the tests reject that hypothesis, so the business interruption risk seems to be insurable, with a finite mathematical expectation. Again, pictures are very good for getting an idea across, but relying on them alone to draw such strong conclusions leaves me skeptical...