Tag Archives: Extremes

This annoying habit of breaking records!

Last January, I wrote a post noting that 2010 was a record year for natural catastrophes, but so was 2009, and 2008 too. In short, records are broken every year. But climate is not the only area where records keep being broken. The same goes for sports.
For instance, last Sunday the world marathon record was (once again) broken, the 42.195 kilometers being covered (or simply run) in 2:03:38 by Patrick Makau (a 21-second improvement over the previous record). We should be pleased (at least Xi'an seems to be). Except that in the winter 2012 term I will teach a course on copulas and extreme values at UQÀM, a.k.a. MAT8886. And I was planning to have the students work on the papers by John Einmahl on records in the 100 meters, and more generally on records in athletics. Those papers offer a very nice application of extreme value theory, in particular when one is in the domain of attraction of the Weibull distribution (one can then look for the upper bound, when working on the maximum, of the support of the distribution, called here the endpoint). One thus learns that, according to extreme value theory, the minimal time needed to run the 42.195 km of a marathon should be 2:04:06.

I.e. the time needed to run a marathon should be bounded below by that value. Except that this record has now been beaten. And several times, it seems.
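To make the endpoint idea concrete, here is a rough sketch (this is not Einmahl's estimator, which is more subtle; it assumes the evir package, and a hypothetical vector times containing the best time run each year, in hours): we flip the sign so that the fastest times become maxima, fit a generalized Pareto distribution above a high threshold, and, if the estimated shape parameter is negative (Weibull domain of attraction), read off the endpoint of the support,

library(evir)
Y=-times                 # fastest times become the largest values of Y
u=quantile(Y,.90)        # a (somewhat arbitrary) high threshold
fit=gpd(Y,u,method="ml")
xi=fit$par.ests["xi"]; beta=fit$par.ests["beta"]
if(xi<0) -(u-beta/xi)    # estimated minimal possible marathon time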
To go a bit further, and understand what was going on, I went to look for data on wikipedia, for the Boston, Chicago, Paris, Berlin, NYC, Stockholm, Fukuoka, Rotterdam, Amsterdam and London marathons. This gives me the winner's time for recent years. The data were copied and pasted, so some reshaping is needed here,

> base=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/topmarathon.csv",
+ sep=";",header=TRUE)
> base=base[is.na(base$TIME)==FALSE,]
> base=base[(base$TIME=="")==FALSE,]
> n=nchar(as.character(base$YEAR))
> base$Y=as.numeric(substr(as.character(base$YEAR),n-3,n))
> base$Y[is.na(base$Y)]=1987   # fix one missing year by hand
> h1=as.numeric(substr(as.character(base$TIME),1,1))
> h2=as.numeric(substr(as.character(base$TIME),1,2))
> i=which(is.na(h2))           # times coded h:mm:ss rather than hh:mm:ss
> h=h2; h[i]=h1[i]
> m1=as.numeric(substr(as.character(base$TIME),3,4))
> m2=as.numeric(substr(as.character(base$TIME),4,5))
> m=m2; m[i]=m1[i]
> s1=as.numeric(substr(as.character(base$TIME),6,7))
> s2=as.numeric(substr(as.character(base$TIME),7,8))
> s=s2; s[i]=s1[i]
> base$T=h+m/60+s/60/60        # winning time, in hours
> base=base[base$T>0,]
> base0=base[base$Y>=1982,]
> plot(base0$Y,base0$T,xlab="",ylab="")
> library(splines)
> reg=lm(T~bs(Y,6),data=base0)
> lines(1983:2010,predict(reg,newdata=
+ data.frame(Y=1983:2010)),col="red",lwd=2)

Looking closely, we observe several interesting things. The first is that people do not run at the same speed in every city. In Stockholm, for instance, the fastest time is often quite far from the record times. But above all, the winner's average time keeps decreasing over the years,

http://freakonometrics.hypotheses.org/files/2015/12/marathon-cities_m.gif

We thus have a form of non-stationarity in the series, which suggests that this non-stationarity should be studied further before using extreme value theory. And this non-stationarity, and the analysis of records, is reminiscent of the discussion we had about natural catastrophes. It might be time to dig deeper…

Tennis and risk management

As mentioned already here, while we were going to Québec City for the workshop, we had interesting discussions in the car, and Maciej mentioned an article recently published in The Actuary,

Hence, I wanted to discuss (extremely) rare event probabilities in tennis. The story is simple: in June 2010, at Wimbledon, Nicolas Mahut and John Isner played the longest match ever: 980 points, more than 11 hours of play, spread over three days. But first of all, we need a dataset. Thanks to Duncan Murdoch, I have been able to run a short code to build up a dataset:

CITIES=c("berlin","madrid","paris","rolandgarros","wimbledon","sydney",
"beijing","shanghai","singapore","tokyo","melbourne","melbourne-indoor")
YEARS=1970:2009
BASE0=data.frame(YEAR=NA,TRNMT=NA,LENGTH=NA,SETS=NA)
for(i in 1:length(CITIES)){
for(j in 1:length(YEARS)){
city=CITIES[i]
year=YEARS[j]
localization=paste("http://www.resultsfromtennis.com/",
year,"/atp/",city,".html",sep="")
essai=try(readLines(localization),silent=TRUE)
ERROR404=FALSE
if(inherits(essai,"try-error")){ERROR404=TRUE}
if(ERROR404==FALSE){
B=scan(localization,"character")
SETS=NA
LENGTH=NA
if(length(B)>270){
# set scores are stored in cells starting with "class=rez>"
I=(substr(B,1,10)=="class=rez>")
X0=B[I]
# scores can be 1, 2 or 3 digits long (think of tiebreak-less final sets)
X3=as.numeric(substr(X0,11,13))
X2=as.numeric(substr(X0,11,12))
X1=as.numeric(substr(X0,11,11))
X0=X3
X0[is.na(X3)==TRUE]=X2[is.na(X3)==TRUE]
X0[is.na(X2)==TRUE]=X1[is.na(X2)==TRUE]
# group the scores by match, using "class=nl>" cells as separators
JL=c(which(substr(B,1,9)=="class=nl>"),length(B))
IL=which(substr(B,1,10)=="class=rez>")
IC=cut(IL,JL)
base=data.frame(IC,X0)
LENGTH=as.numeric(tapply(X0,IC,sum))     # total number of games per match
SETS=as.numeric(tapply(X0,IC,length))/2} # two scores per set
BASE=data.frame(YEAR=year,TRNMT=city,LENGTH,SETS)
BASE0=rbind(BASE0,BASE)}}}
write.table(BASE0,"BASE-TENNIS-TOTAL.txt")

Here I consider only tournaments where players have to win 3 sets (and actually more tournaments than those in the code above), and I end up with a bit more than 72,000 matches,

> TENNIS=read.table("BASE-TENNIS-TOTAL.txt",header=TRUE)
> I=is.na(TENNIS$LENGTH)==FALSE
> BT=TENNIS[I,]
> nrow(BT)
[1] 72754
> maxr=function(x){max(x,na.rm=TRUE)}
> T=paste(BT$TRNMT,BT$YEAR)
> DUREE=tapply(BT$SETS,T,maxr)
> LISTE=names(DUREE[DUREE>3])   # keep best-of-five tournaments only
> BT=BT[T%in%LISTE,]

so, if we look briefly at matches over 35 years, we have the following boxplot (one boxplot per year),

The red line is the epic Isner-Mahut match of June 2010 (4-6, 6-3, 7-6, 6-7, 70-68, i.e. 183 games; see here for the score card).

Following the theory (e.g. the work of Paul Newton and Kamran Aslam), a lot of results can be obtained for the expected number of games; but if we want to study extremely rare events, we should simulate matches as Markov chains (and generate a lot of them, since the probability we are after should be extremely small). But how many? Consider below matches with more than 50 games,
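For instance, here is a minimal sketch of such a simulation (this is not the model of Newton and Aslam: points are simply i.i.d., the server wins each point with probability p, sets are won at 6 games with a two-game lead, with a tiebreak in every set but the fifth, as at Wimbledon),

sim_game=function(p){
a=0; b=0
while(max(a,b)<4 | abs(a-b)<2){
if(runif(1)<p) a=a+1 else b=b+1}
return(a>b)}                      # TRUE if the server holds serve
sim_set=function(p,tiebreak=TRUE){
g1=0; g2=0; server=1
while((max(g1,g2)<6 | abs(g1-g2)<2) & !(tiebreak & g1==6 & g2==6)){
w=sim_game(p)
if((server==1 & w) | (server==2 & !w)) g1=g1+1 else g2=g2+1
server=3-server}
if(tiebreak & g1==6 & g2==6){     # count the tiebreak as one game
if(runif(1)<.5) g1=g1+1 else g2=g2+1}
return(c(g1,g2))}
sim_match=function(p){
s1=0; s2=0; games=0
while(max(s1,s2)<3){
final=(s1==2 & s2==2)             # no tiebreak in the fifth set
g=sim_set(p,tiebreak=!final)
games=games+sum(g)
if(g[1]>g[2]) s1=s1+1 else s2=s2+1}
return(games)}

Then table(replicate(1e5,sim_match(.65))) already gives a feel for the upper tail of the number of games, and for how many chains would be needed to observe 183 games at least once.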

The tail plot (over 50 games), i.e. the log-log Pareto plot, indicates that it will be difficult to study the tails,

and similarly with the Hill plot (assuming that tails are of Pareto type…)
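Both plots can be obtained with the evir package (a sketch, X being the number of games per match built above),

> library(evir)
> X=BT$LENGTH                # number of games per match
> emplot(X[X>50],alog="xy")  # log-log plot of the empirical survival function
> hill(X[X>50])              # Hill plot of the tail index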

Anyway, if we want to study tails, we should consider a high enough threshold. For instance, with a threshold at 68 (we keep only 24 matches), we have

> seuil=68+0.25
> GPD1=gpd(X,seuil,method = "ml")
> GPD2=gpd(X,seuil,method = "pwm")
>
> xi=GPD1$par.ests[1]
> mu=seuil
> beta=GPD1$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))  # GPD survival probability beyond x
> as.numeric((1-GPD1$p.less.thresh)*P)
[1] 5.621281e-09
>
> xi=GPD2$par.ests[1]
> mu=seuil
> beta=GPD2$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD2$p.less.thresh)*P)
[1] 3.027095e-09

I.e. the probability that one match lasts more than 183 games is about one chance in a billion… With, say, 2,500 matches per year, that gives us a return period of roughly 400,000 years. So yes, we can say that this was a rare event… And perhaps, by generating several billions of chains, it should be possible to get a more precise estimate of the probability of playing 183 games in a single match…
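To make the return period arithmetic explicit, with one chance in a billion per match and 2,500 matches per year,

> p=1e-9; matches=2500
> 1/(p*matches)       # return period, in years
[1] 4e+05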

Some historical remarks on extreme values

I will start here a short post on extreme values, with some historical perspective. In a recent paper (in French), I mentioned the use of the Pareto distribution as a standard model for extremes; but while reinsurers have been using the Pareto distribution for a long time (see here e.g.), the oldest mathematical models dealing with extreme values are related to the study of maximum values in finite samples.

  • The work of Ronald Fisher and Leonard Tippett

Leonard Henry Tippett, a former student of Karl Pearson, published a note on extremes in Biometrika in 1925. The goal was "the determination of the distribution of the range and the extremes for a large number of samples". In 1925, everyone was looking for the Gaussian distribution everywhere, and Leonard Tippett observed that the distribution of the largest value was not Gaussian.
A few years later, a joint work with Ronald Fisher was presented to the Cambridge Philosophical Society. The starting point was the idea of "stability" (even if the term did not appear explicitly in their work): the limiting distribution of the maximum should be of the "same type" as the underlying distribution. Thus, if $F$ stands for the cumulative distribution function, it should satisfy the functional equation

$$F^n(a_nx+b_n)=F(x)$$

Solutions of that functional equation will give all possible limiting distributions. Thus, Fisher and Tippett obtained three possible limits,

  • solutions with $a_n=1$, i.e. the double exponential $\Lambda(x)=\exp(-e^{-x})$, $x\in\mathbb{R}$ (the Gumbel type)
  • solutions with $a_n\neq 1$ and a finite lower bound for the support, i.e. $\Phi_\alpha(x)=\exp(-x^{-\alpha})$ for $x>0$, with $\alpha>0$ (the Fréchet type)
  • solutions with $a_n\neq 1$ and a finite upper bound for the support, i.e. $\Psi_\alpha(x)=\exp(-(-x)^{\alpha})$ for $x<0$ (the Weibull type)

Based on those possible limiting distributions, Fisher and Tippett wanted to derive what was later called the "domain of attraction" of those distributions.

  • The work of Maurice Fréchet, at the same time

In 1926, Maurice Fréchet wrote a paper on "la loi de probabilité de l'écart maximum" (the probability law of the maximum deviation). That paper, as well as the one by Fisher and Tippett (written at the same time), investigated asymptotic limits. Both obtained functional equations, but only Maurice Fréchet understood the importance of the stability concept, pointed out by Paul Lévy in the context of sums. Thus, Maurice Fréchet introduced the concept of what is now called "max-stability". But Fréchet solved only the functional equation associated with the $a_n\neq 1$ case. The point is that Fréchet studied absolute values of errors, i.e. strictly positive random variables. Thus, Maurice Fréchet considered the distribution

$$F(x)=2^{-(A/x)^{\alpha}},\quad x>0,$$

where $A$ is an arbitrary positive constant. The "2" comes from the fact that Fréchet considered errors with respect to the median ($A$ is then precisely the median). But he did not only introduce that new distribution function, he also proved that it appears as a limit when the underlying distribution of the $X_i$'s has an algebraic behavior at infinity, i.e. a survival function equivalent to $Cx^{-\alpha}$ for some $\alpha>0$. I.e. he proved that Pareto-type tailed distributions were in the domain of attraction of the Fréchet distribution.

  •  Later on, the work of Emil Gumbel

In 1932, Emil Gumbel gave a talk in France on the "âge limite" (the limiting age). But as he wrote, "one can thus assume that the distribution of the limiting age (that is, the probability that this age takes a given value) is Gaussian". A few years later, however, he read Fisher's work, and observed that "the distribution of an extreme value can be represented, for a sufficiently large number of observations, by the doubly exponential formula, provided that the initial distribution behaves asymptotically like an exponential; the formula becomes exact if the initial distribution is exponential", as he wrote in 1935. Thus, just as Fréchet proved that Pareto-type distributions were in the max-domain of attraction of Fréchet's distribution, Gumbel obtained that exponential-type distributions were in the max-domain of attraction of Gumbel's distribution. He also introduced the term "distribution of exponential type".
For Emil Gumbel, it was natural to study the logarithmic derivative of the distribution, since it is the mortality rate in demography (an area Emil Gumbel had studied previously). As he mentioned, "from a theoretical point of view, it is interesting to note that M. Fréchet constructed an initial distribution of a random variable for which the absolute value of the logarithmic derivative decreases without limit". But since this was not a valuable property for practical applications, he decided that "we shall restrict ourselves to the treatment of data of exponential type". Emil Gumbel always tried to relate his work on extremes to his work on demography.
For instance in 1937, he wrote a paper on "les centennaires" (centenarians) that can also be related to the work of Bortkiewicz on rare events. He also applied his work to radioactivity and hydrology.
In the 1930s, hydrologists such as Hazen or Graszberger introduced the concept of the "yearly maximum" of a river level. They actually proposed to look at actuarial models to study decennial or centennial floods. But they only used the lognormal distribution to model yearly maxima. In 1936, the French hydrologist Aimé Coutagne met Emil Gumbel (who was teaching at the ISFA, in Lyon). At that time, Emil Gumbel was looking for possible applications (outside demography) for his doubly exponential distribution. As Aimé Coutagne pointed out, "his formula should be applicable to the case of floods, that is, of the largest discharges, a problem analogous to that of the greatest ages". Not only did Gumbel's distribution give better empirical results, it also came with a theoretical justification.

  • Gumbel’s distribution properties

Consider the Gumbel distribution, with location and scale parameters $\alpha$ and $\beta$ respectively, i.e. with cumulative distribution function

$$F(x)=\exp\left(-\exp\left(-\frac{x-\alpha}{\beta}\right)\right),\quad x\in\mathbb{R}$$

Note that the associated quantile function is

$$F^{-1}(p)=\alpha-\beta\log(-\log p),\quad p\in(0,1)$$

with mean

$$\mathbb{E}(X)=\alpha+\gamma\beta$$

(where $\gamma$ is the Euler-Mascheroni constant) and variance

$$\text{Var}(X)=\frac{\pi^2}{6}\beta^2$$
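A quick simulation check of those two formulas (a small sketch, with arbitrary parameters alpha=1 and beta=2, simulating through the quantile function above),

alpha=1; beta=2
x=alpha-beta*log(-log(runif(1e6)))  # Gumbel sample, via the quantile function
mean(x); alpha-beta*digamma(1)      # digamma(1) is minus the Euler-Mascheroni constant
var(x); pi^2*beta^2/6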
  • The work of Waloddi Weibull

Waloddi Weibull, a Swedish physicist, proposed his distribution in 1939 to represent the distribution of the breaking strength of materials, and used it again in the 1950s in the context of reliability. Actually, Weibull appeared late in the story of extremes, since Fréchet, Fisher and Tippett had already mentioned that distribution in the mid-1920s.

  • From the central limit theorem (on the average) to the Fisher-Tippett theorem (on the maximum)

In order to visualize those two theorems, consider the following animation, where samples of 20 exponential variables are generated. From those 20 values, we plot the maximum in blue, and the average in red, on top. Just below, we rescale those points, considering $M_n-\log n$ for the maximum and $\sqrt{n}(\overline{X}_n-1)$ for the average. We then build a histogram to visualize the distribution of the rescaled maximum (in blue) and of the rescaled average (in red).

For those who might be busy, after 1000 samples we obtain the histograms below. The rescaled average (bottom) looks Gaussian, even with only 20 observations (strictly speaking, the Gaussian distribution is only asymptotic, so larger samples should be considered), while the rescaled maximum of 20 exponential variables (on top) looks like a Gumbel distribution (actually, for exponential variables it is the exact distribution, and it is the asymptotic distribution for exponential-type variables).
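Here is a minimal sketch of that experiment (with 1000 samples, and the limiting densities overlaid),

n=20; ns=1000
M=replicate(ns,max(rexp(n)))-log(n)        # rescaled maximum
A=sqrt(n)*(replicate(ns,mean(rexp(n)))-1)  # rescaled average
par(mfrow=c(2,1))
hist(M,probability=TRUE,breaks=50,col="blue",main="rescaled maximum")
curve(exp(-x-exp(-x)),add=TRUE,lwd=2)      # Gumbel density
hist(A,probability=TRUE,breaks=50,col="red",main="rescaled average")
curve(dnorm(x),add=TRUE,lwd=2)             # Gaussian density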

  • The GEV distribution

The unified expression of those three distributions is called the GEV distribution. The generalized extreme value distribution has cumulative distribution function

$$F(x)=\exp\left(-\left[1+\xi\left(\frac{x-\mu}{\sigma}\right)\right]^{-1/\xi}\right)$$

for $1+\xi(x-\mu)/\sigma>0$, where $\mu$ is the location parameter, $\sigma$ the scale parameter and $\xi$ the shape parameter. Note that the expected value is

$$\mathbb{E}(X)=\mu+\sigma\,\frac{\Gamma(1-\xi)-1}{\xi},\qquad \xi<1,\ \xi\neq 0$$
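As a small sketch (coding the cdf by hand, not using an existing package), one can check numerically that letting $\xi\to 0$ gives back the Gumbel distribution,

pgev0=function(x,mu=0,sigma=1,xi=0){
z=(x-mu)/sigma
if(abs(xi)<1e-10) return(exp(-exp(-z)))      # Gumbel limit
ifelse(1+xi*z>0,exp(-(1+xi*z)^(-1/xi)),as.numeric(xi<0))}
pgev0(1,xi=1e-12)
exp(-exp(-1))                                 # identical, up to rounding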

ERM training program of the Institut des Actuaires

Talk on Thursday afternoon, as part of the ERM (Enterprise Risk Management) program of the Institut des Actuaires, on the theme "everything you wanted to ask about tail dependence in risk management and you're afraid to ask".
Since Anne-Laure Fougères has already lectured on extremes, and Stéphane Loisel has presented copulas, the course will be a discussion around the various recommendations of the Groupe Consultatif, the CEIOPS documents, and the various documents circulating on the computation of the SCR (Solvency Capital Requirement) when aggregating risks. The slides are online here, and contain only the illustrative material of the course.

Statistical seminar at Belo Horizonte

Talk at the statistical seminar at the University of Belo Horizonte, Wednesday, on multivariate extremes. Slides can be downloaded here.

The talk will give a detailed introduction to multivariate extremes and related concepts. Then the case of Archimedean copulas will be fully described (following the paper with Johan Segers).

Many thanks to Renato Martins Assunção (here) for inviting me for a couple of days in Belo Horizonte! Thanks also for your interest in my blog… and since I understood that some people who do not speak French might be interested in it, I started to write my blog in English (or at least in a language that should not be too far from English). There is a nice discussion about language on this blog (here, unfortunately in French…)

Measuring large risks

CARITAT training session, measuring extreme risks in insurance (flyer)

Introduction: measuring large risks, 9:00 – 12:30, A. Charpentier (slides)

  • Probabilistic models for extremes, and estimation
  • Downside risk measures, VaR, TVaR and spectral measures
  • … a bit of R

Some additional material on the TVaR, 14:00 – 15:30, F. Planchet (http://www.ressources-actuarielles.net/)

  • Properties of the TVaR
  • Practical implementation, in R
  • Application(s) in reinsurance

SCR, estimation and robustness, 16:00 – 17:30, P. Therond

  • Solvency Capital Requirement: estimation
  • Solvency Capital Requirement: robustness
  • Implementation from a Solvency II perspective