Tag Archives: risk

Insurance, Actuarial Science, Data and Models

Our research chair ACTINFO, together with our colleagues from Lyon at the DAMI, PREVENT’HORIZON and ACTUARIAT DURABLE chairs, will organize a two-day conference in Paris on Insurance, Actuarial Science, Data & Models, in ten days.

We invited Katrien ANTONIO (KU Leuven), Alexandre BOUMEZOUED (Milliman Paris), Alfred GALICHON (New York University), Pierre-Yves GEOFFARD (Paris School of Economics), Meglena JELEVA (University of Paris Nanterre), Julie JOSSE (Ecole Polytechnique), Florence JUSOT (Paris Dauphine University), Michael LUDKOWSKI (University of California Santa Barbara), François PANNEQUIN (CREST and ENS Paris-Saclay), Florian PELGRIN (Edhec Business School), Dylan POSSAMAI (Columbia University) and Julien TRUFIN (ULB Brussels). More information (including the program) is online.

Picking an asset to invest

Yesterday, Andrew Lo spent some time on a nice graph, discussing attitudes towards risk. Here are four assets (thanks for improving the terminology): real data, with no information about time here, but the same scale for all four of them.

The question raised was quite simple

if you could invest in one, and only one, asset, which one would you pick?


Graduate Crash Course on Risk Measures

Tomorrow morning, I will give a crash course on risk measures at Louvain-la-Neuve, in Belgium. This is a crash course for PhD students (and researchers), with a long introduction on the univariate static framework (and some mathematical tools that will be useful later on, such as the Fenchel transform and, more generally, convexity, as well as some results on optimal transport). I will also mention what has been obtained in decision theory, inspired by Itzhak Gilboa‘s Theory of Decision under Uncertainty. Then I will mention extensions used to derive multivariate risk measures, based on Marc Henry and Alfred Galichon‘s work. Finally, I will conclude by introducing the difficulty of deriving dynamic risk measures.

The slides are based on a document I am still working on. Unfortunately, the deeper I go in explaining the roots of the axioms, or the assumptions, the more papers I discover (and need to read, and understand). So I guess I will need some time to finalize my survey. Note that I decided to skip details on technical issues when working on the underlying space of risks, and on the weak topology on its dual. I will try to add additional references in the notes, but I wanted the slides to be as simple as possible. I also want to add more connections with statistical results, such as the Neyman-Pearson lemma (as mentioned in a paper by Alexander Schied). All my apologies for the typos, too.

The law of small numbers

In insurance, the law of large numbers (initially named loi des grands nombres by Siméon Poisson, see e.g. http://en.wikipedia.org/…) is usually mentioned to justify large portfolios, because of pooling and diversification: the larger the pool, the more ‘predictable’ the losses will be (over a given period). This holds, of course, under standard statistical assumptions, namely finite expected value and independence (see http://freakonometrics.blog.free.fr/…. for a discussion, in French). But in insurance, catastrophes are usually rare – and extremely costly – and actuaries might be interested in modeling the occurrence of that small number of events (see e.g. Aldous’ book on that specific topic, which can be downloaded from http://stat.berkeley.edu/…). The theorem behind this is sometimes called the law of small numbers (from the book published by Ladislaus Bortkiewicz, but we’ll get back to that story later on; see also Whitaker (1914) http://biomet.oxfordjournals.org/… or the book recently published by Michael Falk, Jürg Hüsler and Rolf-Dieter Reiss).

  • The Poisson distribution

The so-called Poisson distribution (see http://en.wikipedia.org/…) was introduced by Siméon Poisson in 1837 (in Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile, Précédées des Règles Générales du Calcul des Probabilités, see http://gallica.bnf.fr/…). But it had been defined more than a century before, by Abraham De Moivre, in 1711, in De Mensura Sortis, seu, de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus (see e.g. the review in http://www.jstor.org/…). Let $N$ denote a counting random variable; it is said to be Poisson distributed if there is some $\lambda\in(0,\infty)$ such that

$$\mathbb{P}(N=k)=e^{-\lambda}\frac{\lambda^k}{k!},\quad\forall\, k\in\mathbb{N}.$$

De Moivre obtained that distribution as an approximation of the binomial distribution. Recall that the binomial distribution is a standard distribution in actuarial science, for instance to model the number of deaths among $n$ insured lives. If individual death probabilities are identical, say $p$, and if deaths are independent events, then

$$\mathbb{P}(N=k)=\binom{n}{k}p^k(1-p)^{n-k},\quad\forall\, k\in\{0,1,\cdots,n\}.$$
And if $n\rightarrow\infty$ and $np\rightarrow\lambda$, then

$$\mathbb{P}(N=k)\rightarrow e^{-\lambda}\frac{\lambda^k}{k!}.$$
Again, this is an asymptotic theorem: it is valid when we have a lot of observations ($n\rightarrow\infty$), but also when the probability of occurrence is extremely small (since $p\sim\lambda/n$), which is why the term small numbers is used. Siméon Poisson was not interested in mathematical approximations: his main point was to get a distribution with nice goodness-of-fit properties for the data he was working on. He wanted to get a better understanding of cours d’assises (jury panel might be a valid translation of the French term). A jury consisted of 12 jurors who voted to determine whether a defendant was guilty. When guilt was predominant, with at least 8 votes against 4, the defendant was convicted (which happened in 47% of criminal cases). With 7 votes against 5, the opinion of professional judges was requested (11% of criminal trials). Using these statistics, one can show that the probability that a defendant brought before an assize court is guilty is of the order of 68%, and that the probability that a juror does not err in his vote (condemning an innocent or releasing a culprit) is about 54%. Poisson sought to calculate the probability that a defendant is wrongfully convicted, and obtained 2%; and 28% of exonerated defendants are in fact guilty. Siméon Poisson introduced this law to compute such probabilities easily. But the law he considered turned out to be central in probability…
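
As a quick illustration of De Moivre’s approximation (a sketch, not from the original post), one can compare binomial probabilities with their Poisson limit as n increases, p decreases, and np stays fixed:

lambda=2
for(n in c(10,100,1000,10000)){
p=lambda/n
# largest gap between the binomial and Poisson probabilities on {0,...,10}
cat("n =",n," largest difference:",
max(abs(dbinom(0:10,size=n,prob=p)-dpois(0:10,lambda))),"\n")
}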

  • The law of small numbers

The heuristic of the main theorem related to the Poisson distribution is the following: let $X_1,\cdots,X_n$ denote i.i.d. random variables taking values in $\mathbb{R}^d$ (in a general setting, one component can be time, and the other one the state of some stochastic process, with an upper region of interest). Let $\mathcal{A}_n\subset\mathbb{R}^d$. If $\mathbb{P}(X_i\in\mathcal{A}_n)\rightarrow 0$ as $n\rightarrow\infty$ (or $\mathbb{P}(X_i\in\mathcal{A}_n)=O(n^{-1})$, to be a little bit more specific about the assumptions), and if $N$ denotes the count of events $\{X_i\in\mathcal{A}_n\}$, then $N$ can be approximated by a Poisson distribution with parameter $\lambda=n\times\mathbb{P}(X_i\in\mathcal{A}_n)$.
The heuristic is that if we consider a large number of observations, and if we count how many are in a given (small) region, then the number of such observations is Poisson distributed.

n=1000
# simulate n points, uniformly over the square [-1.5, 8.5] x [-1.5, 8.5]
X=runif(n)*10-1.5
Y=runif(n)*10-1.5
plot(X,Y,axes=FALSE,cex=.6)
# the (small) region of interest: the unit disk
u=seq(-1,1,by=.01)
v=sqrt(1-u^2)
polygon(c(u,rev(u)),c(v,rev(-v)),col="yellow",border=NA)
# count (and highlight) the points falling in the disk
I=(X^2+Y^2)<1
points(X[I],Y[I],cex=.6,pch=19,col="red")

If we run some simulations,

>  n=1000
>  ns=100000
>  N=rep(NA,ns)
> for(s in 1:ns){
+ X=runif(n)*10-1.5
+ Y=runif(n)*10-1.5
+ I=(X^2+Y^2)<1
+ N[s]=sum(I)
+ }
> hist(N,breaks=0:60,probability=TRUE,col="yellow")
> mean(N)
[1] 31.41257

The parameter of the Poisson distribution is $n$ times the probability of falling in the yellow disk, i.e. $n$ times the area of the disk over the area of the square,

> (lambda=10*pi)
[1] 31.41593
> lines(0:60-.5,dpois(0:60,lambda),type="b",col="red")

http://freakonometrics.hypotheses.org/files/2013/01/Capture-d%E2%80%99e%CC%81cran-2013-01-28-a%CC%80-11.14.21.png

To get an interpretation related to insurance modeling, let $\mathcal{A}$ denote an upper layer in a reinsurance contract, i.e. $\mathcal{A}=\{x>d\}$ for some deductible $d$, and let the $X_i$’s denote individual losses. Then the number of claims that hit this upper layer can be modeled with a Poisson distribution. More precisely, if the deductible $d$ becomes extremely large (and $\mathbb{P}(X_i\in\mathcal{A})\rightarrow 0$), we obtain the peaks-over-threshold model in extreme value theory (see e.g. http://brale.math.hr/~iugrina/… or  http://fire.nist.gov/bfrlpubs/…): if $N$ has a Poisson distribution and, conditionally on $N$, $X_1,\cdots,X_N$ are independent, identically distributed generalized Pareto random variables, then $\max\{X_1,\cdots,X_N\}$ has a generalized extreme value distribution. Thus, exceedance models (for rare events) are closely related to Poisson processes.
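
To illustrate, here is a small simulation sketch (not from the original post), with exponentially distributed losses chosen only for convenience: the number of losses hitting a high layer is approximately Poisson distributed, with parameter $n\times\mathbb{P}(X_i\in\mathcal{A})$,

n=1000                  # number of individual losses per period
d=qexp(1-0.005)         # deductible such that P(X>d)=0.5% (exponential losses)
lambda=n*0.005          # Poisson parameter, n*P(X in A)
ns=10000
N=rep(NA,ns)
for(s in 1:ns){N[s]=sum(rexp(n)>d)}
rbind(empirical=as.numeric(table(factor(N,levels=0:15)))/ns,
poisson=dpois(0:15,lambda))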

  • The Poisson process

As mentioned above, the Poisson distribution appears when events occur somehow randomly and independently over time. It is then natural to study the time between two occurrences (or between two claims, in an insurance context).
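
For instance (a short sketch, not in the original post), recall that for a homogeneous Poisson process the waiting times between two consecutive claims are independent exponential variables, so occurrence dates over a year can be simulated from exponential gaps:

lambda=10                # expected number of claims per year
W=rexp(200,rate=lambda)  # waiting times between two consecutive claims
T=cumsum(W)              # occurrence dates of the claims
T=T[T<=1]                # claims occurring within the year
length(T)                # (approximately) Poisson distributed, with mean 10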

  • Poisson distribution, and claims occurrence

It is neither Siméon Poisson nor De Moivre, but Ladislaus von Bortkiewicz who first referred to the Poisson distribution as the law of small numbers. In 1898 (see https://archive.org/…), he studied the number of soldiers killed by being kicked by a horse, from 1875 till 1894, over 200 corps-years (more precisely, 10 corps observed over 20 years).

He obtained the following distribution (here, the parameter of the Poisson distribution is 0.61, i.e. the average number of deaths per corps and per year):

number of deaths per year   empirical counts   Poisson distribution
0                           109                108.67
1                            65                 66.21
2                            22                 20.22
3                             3                  4.11
4                             1                  0.63
5 and more                    0                  0.08
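
The fitted column can be reproduced in a couple of lines of R (a quick sketch based on the counts in the table above):

deaths=0:4
counts=c(109,65,22,3,1)
(lambda=sum(deaths*counts)/sum(counts))   # 0.61, average number of deaths
cbind(deaths,empirical=counts,
poisson=round(sum(counts)*dpois(deaths,lambda),2))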

It is possible to find a lot of cases where the Poisson distribution fits extremely well. For instance, consider the number of hurricanes that made landfall in Florida after 1850:

number of hurricanes per year   empirical frequency   Poisson frequency
0                               30                    27.16
1                               48                    47.99
2                               37                    42.41
3                               29                    24.98
4                                8                    11.03
5                                3                     3.90
6                                3                     1.15
7                                1                     0.29
8 and more                       0                     0.08
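Since the claim is that the fit is extremely good, a quick (and rough) goodness-of-fit check can be added (a sketch, not in the original post; note that the chi-square test below does not account for the fact that the parameter is estimated from the same data):

counts=c(30,48,37,29,8,3,3,1)             # years with 0,...,7 hurricanes
lambda=sum((0:7)*counts)/sum(counts)      # average number of hurricanes per year
obs=c(counts[1:5],sum(counts[6:8]))       # observed counts, cells 0,1,2,3,4,5+
p=c(dpois(0:4,lambda),1-ppois(4,lambda))  # Poisson cell probabilities
chisq.test(obs,p=p)                       # a large p-value supports the Poisson fit
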
  • Poisson distribution, and return period

The return period was introduced by Emil Gumbel, in hydrology, to link probabilities and durations (see e.g. http://freakonometrics.blog.free.fr/…). A decennial event has an occurrence probability of 1/10, and 10 is then the average waiting time between occurrences. This does not mean that the event cannot occur within the next 10 years, or that it has to occur within the next 10 years. Consider a return period $T$ (in years); the yearly probability of non-occurrence is then $1-(1/T)$.

The probability of observing at least one event over $n$ years is then $1-[1-(1/T)]^n$. It is standard to summarize this property with the following table, giving the probability of observing at least one catastrophe within $n$ years, for an event with return period $T$,

                                    return period T
                          10       20       50      100      200
number of years n
       10               65.1%    40.1%    18.3%     9.6%     4.9%
       20               87.8%    64.2%    33.2%    18.2%     9.5%
       50               99.5%    92.3%    63.6%    39.5%    22.5%
      100               99.9%    99.4%    86.7%    63.4%    39.5%
      200               99.9%    99.9%    98.2%    86.6%    63.3%

The diagonal of the table above is extremely interesting: it looks like there is some kind of convergence towards a limiting value (here 63.2%). Indeed, the number of events observed over $n$ years has a binomial distribution with probability $1/T=1/n$, which converges towards the Poisson distribution with parameter 1. The probability of observing at least one catastrophe is then $1-\exp(-1)$, which is equal to 0.632.
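
The table, and the limiting value on the diagonal, can be reproduced with a couple of lines of R (a quick sketch):

T=c(10,20,50,100,200)                     # return periods
n=c(10,20,50,100,200)                     # number of years of observation
P=outer(n,T,function(n,T) 1-(1-1/T)^n)    # P(at least one event in n years)
dimnames(P)=list(paste("n =",n),paste("T =",T))
round(100*P,1)
1-exp(-1)                                 # limiting value on the diagonal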

  • Rare probabilities and the Poisson distribution

The Poisson distribution keeps appearing when computing probabilities of rare events. For instance, consider the probability of having at least one incident in a nuclear plant in France over a 50-year period. Assume that the annual probability $p$ of an incident in a reactor is small, e.g. 0.05%. Assume further that reactors are independent of one another, and over time. The probability of having at least one incident among 80 reactors over 50 years is (exactly)

$$\mathbb{P}(N\neq 0)=1-(1-p)^{50\times 80}.$$

Of course, a linear approximation is not correct (even if it was mentioned in some French newspaper, as explained in an old post http://freakonometrics.blog.free.fr/…)

$$\mathbb{P}(N\neq 0)\neq 50\times 80\times p.$$

On the other hand

$$\mathbb{P}(N\neq 0)=1-(1-p)^{50\times 80}\sim 1-\exp\left(-50\times 80\times p\right).$$

> p=0.00005
> 1-(1-p)^(50*80)
[1] 0.1812733
> 1-exp(-50*80*p)
[1] 0.1812692

which is the probability that $N$ is not null (i.e. that at least one incident occurs) when $N$ has a Poisson distribution with parameter $\lambda=50\times 80\times p$. We clearly see here an application of De Moivre’s approximation in risk management.

Another way of looking at this problem is the following: in (roughly) 45 years of observations on 450 reactors worldwide, three major accidents were observed, including Three Mile Island (1979) and Fukushima (2011), i.e. the average time between accidents can be estimated at 16 years. For a single reactor, the average waiting time before an incident can then be taken as 450 times 16 years, i.e. 7,200 years; equivalently, the probability of having an incident in a given year, for one reactor, is 1 over 7,200 (this is the idea behind the return period concept). If we assume that accidents occur randomly and independently of each other (as defined above), then the number of major accidents observed over a period of 50 years in France follows a Poisson distribution with parameter 50/(7200/80). The probability of having at least one major accident over 50 years, with 80 reactors, can then be estimated by

$$1-\exp(-50\times 80/7200),$$

i.e.

> 1-exp(-50*80/7200)
[1] 0.4262466

(keeping in mind all the uncertainty around the estimated waiting time before a major accident for a single reactor!).

Talk on bivariate count times series in finance and risk management

I will be giving a talk on May 4th, at the Mathematical Finance Days, at HEC Montréal, on multivariate dynamic models for counts. The conference is organized by IFM2 (Institut de Finance Mathématique de Montréal). I will be chairing a session, and I will give a talk based on the joint paper with Mathieu Boudreault.

The slides can be downloaded from the blog,

In various situations in the insurance industry, in finance, in epidemiology, etc., one needs to represent the joint evolution of the number of occurrences of an event. In this paper, we present a multivariate integer-valued autoregressive (MINAR) model, derive its properties and apply the model to earthquake occurrences across various pairs of tectonic plates. The model is an extension of Pedeli & Karlis (2011) where cross autocorrelation (spatial contagion in a seismic context) is considered. We fit various bivariate count models and find that for many contiguous tectonic plates, spatial contagion is significant in both directions. Furthermore, ignoring cross autocorrelation can underestimate the potential for high numbers of occurrences over the short term. An application to risk management and cat-bond pricing will be discussed.

http://freakonometrics.free.fr/ringfire.gif

Exchangeability, credit risk and risk measures

Exchangeability is an extremely useful concept, since (most of the time) analytical expressions can be derived. But it can also be used to observe some unexpected behaviors, that we will discuss later on in a more general setting. For instance, in an old post, I discussed connections between correlation and risk measures (using simulations to illustrate, but in the context of exchangeable risks, calculations can be performed more accurately). Consider again the standard credit risk problem, where the quantity of interest is the number of defaults in a portfolio. Consider a homogeneous portfolio of exchangeable risks. The quantity of interest here is the distribution of the number of defaults,

$$S_n=X_1+\cdots+X_n,$$

where the $X_i$'s are the default indicators,

or perhaps the quantile function of the sum (since the Value-at-Risk is the standard risk measure). We have seen yesterday that, given the latent factor $\Theta$, the default indicators are independent Bernoulli variables (either the company defaults, or not),

$$X_i\mid\Theta=\theta\sim\mathcal{B}(\theta),$$

so that, conditionally on the latent factor, the sum is binomial,

$$S_n\mid\Theta=\theta\sim\mathcal{B}(n,\theta),$$

i.e. we can derive the (unconditional) distribution of the sum by integrating the latent factor out,

$$\mathbb{P}(S_n=k)=\mathbb{E}\big[\mathbb{P}(S_n=k\mid\Theta)\big],$$

so that the probability function of the sum is, assuming that $\Theta$ has a density $f_\Theta$ on $(0,1)$,

$$\mathbb{P}(S_n=k)=\binom{n}{k}\int_0^1\theta^k(1-\theta)^{n-k}\,f_\Theta(\theta)\,d\theta.$$

Thus, the following code can be used to calculate the quantile function

> proba=function(s,a,m,n){
+ # probability of s defaults among n, when the latent factor is Beta(a,b),
+ # parametrized through its mean m (i.e. b=a/m-a)
+ b=a/m-a
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
+ stop.on.error =  FALSE)$value
+ }
> QUANTILE=function(p=.99,a=2,m=.1,n=500){
+ # quantile of order p of the number of defaults (V[i+1] is the probability
+ # of i defaults, renormalized to sum to one, hence the -1 at the end)
+ V=rep(NA,n+1)
+ for(i in 0:n){
+ V[i+1]=proba(i,a,m,n)}
+ V=V/sum(V)
+ return(min(which(cumsum(V)>p))-1) }
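
For instance, the 99.5% quantile of the number of defaults among n = 500 names, with a mean default probability of 10% and beta shape parameter a = 2 (an arbitrary, purely illustrative choice), would be obtained with

# 99.5% quantile of the number of defaults, for an illustrative choice of
# parameters: n=500 names, mean default probability m=10%, beta shape a=2
QUANTILE(p=.995,a=2,m=.1,n=500)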

Now observe that since the variates are exchangeable, it is possible to calculate the correlation of default indicators explicitly. Here

$$\text{cov}(X_i,X_j)=\mathbb{E}\big[\text{cov}(X_i,X_j\mid\Theta)\big]+\text{cov}\big(\mathbb{E}[X_i\mid\Theta],\mathbb{E}[X_j\mid\Theta]\big),$$

i.e., since the indicators are independent conditionally on the latent factor,

$$\text{cov}(X_i,X_j)=\text{var}(\Theta).$$

Thus, the correlation between two default indicators is then

$$\text{corr}(X_i,X_j)=\frac{\text{var}(\Theta)}{\mathbb{E}[\Theta]\,(1-\mathbb{E}[\Theta])}.$$

Under the assumption that the latent factor is beta distributed,

$$\Theta\sim\mathcal{B}eta(\alpha,\beta),$$

we get

$$\text{corr}(X_i,X_j)=\frac{1}{1+\alpha+\beta}.$$
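
This closed-form expression can be double-checked by simulation (a sketch, not from the original post), using the same parametrization as in the code above, i.e. b = a/m - a so that the latent factor has mean m,

a=2; m=.1; b=a/m-a                 # same parametrization as in the code above
ns=100000
Theta=rbeta(ns,a,b)                # latent factor
X1=rbinom(ns,size=1,prob=Theta)    # first default indicator
X2=rbinom(ns,size=1,prob=Theta)    # second indicator, with the same latent draw
cor(X1,X2)                         # empirical correlation of default indicators
1/(1+a+b)                          # closed-form value above, equal to m/(a+m)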

Thus, as a function of the parameter of the beta distribution (we consider beta distributions with the same mean, i.e. the same marginal distributions, so we have only one parameter left, which is in one-to-one correspondence with the correlation of default indicators), it is possible to plot the quantile function,

> PICTURE=function(P){
+ # quantiles of the number of defaults (90% to 99.5%), as a function of the
+ # beta parameter a, for a fixed marginal default probability P
+ A=seq(.01,2,by=.01)
+ VQ=matrix(NA,length(A),5)
+ for(i in 1:length(A)){
+ VQ[i,1]=QUANTILE(a=A[i],p=.9,m=P)
+ VQ[i,2]=QUANTILE(a=A[i],p=.95,m=P)
+ VQ[i,3]=QUANTILE(a=A[i],p=.975,m=P)
+ VQ[i,4]=QUANTILE(a=A[i],p=.99,m=P)
+ VQ[i,5]=QUANTILE(a=A[i],p=.995,m=P)
+ }
+ plot(A,VQ[,5],type="s",col="red",ylim=
+ c(0,max(VQ)),xlab="",ylab="")
+ lines(A,VQ[,4],type="s",col="blue")
+ lines(A,VQ[,3],type="s",col="black")
+ lines(A,VQ[,2],type="s",col="blue",lty=2)
+ lines(A,VQ[,1],type="s",col="red",lty=2)
+ lines(A,rep(500*P,length(A)),col="grey")
+ legend("topright",c("quantile 99.5%","quantile 99%",
+ "quantile 97.5%","quantile 95%","quantile 90%","mean"),
+ col=c("red","blue","black",
+ "blue","red","grey"),
+ lty=c(1,1,1,2,2,1),bty="n")
+ }

e.g. with a (marginal) default probability of 15%,

> PICTURE(.15)

On this graph, we observe that the stronger the correlation (the more to the left), the higher the quantile… Note that the same graph can be plotted with the correlation on the x-axis,


This is quite intuitive, somehow. But if the marginal probability of default decreases, increasing the correlation might actually decrease the risk (i.e. the quantile function),

> PICTURE(.05)

(with the modified code to visualize the quantile as a function of the underlying default correlation) or even worse,

> PICTURE(.0075)

And it becomes all the more counterintuitive as the default probability decreases! So, in the case of a portfolio of not-very-risky bond issuers (with high ratings), assuming a very strong correlation will lower the risk-based capital!

Variable annuities are not a systemic risk?

The Geneva Association just published on its website an interesting report on variable annuities and systemic risk (online here). Based on a definition of potentially systemically risky activities, relying on interconnectedness or substitutability, the report claims that since “none of the criteria is triggered”, variable annuities are “not a potentially systemically risky activity”, even if “short-term effects are conceivable”. I guess it is a diplomatic way to put it…

Note that a series of slides can also be downloaded (there) on insurance and systemic risk. But that deserves a more detailed post.

 

Tennis and risk management

As mentioned already here, while we were going to Québec City for the workshop, we had interesting discussions in the car, and Maciej mentioned an article recently published in The Actuary,

Hence, I wanted to discuss (extremely) rare event probabilities in tennis. The story is simple: in June 2010, at Wimbledon, Nicolas Mahut and John Isner played the longest match ever, with 980 points, over more than 11 hours of play. But first of all, we need a dataset. Thanks to Duncan Murdoch, I have been able to run a short code to build it up:

# scrape the tournament result pages, and store the length (in games) of each match
CITIES=c("berlin","madrid","paris","rolandgarros","wimbledon","sydney",
"beijing","shanghai","singapore","tokyo","melbourne","melbourne-indoor")
YEARS=1970:2009
BASE0=data.frame(YEAR=NA,TRNMT=NA,LENGTH=NA,SETS=NA)
for(i in 1:length(CITIES)){
for(j in 1:length(YEARS)){
city=CITIES[i]
year=YEARS[j]
# url of the result page for a given tournament and year
localization = paste("http://www.resultsfromtennis.com/",
year,"/atp/",city,".html",sep="")
essai = try(readLines(localization), silent=TRUE)
ERROR404=FALSE
if(inherits(essai, "try-error")){ERROR404=TRUE}
if(ERROR404==FALSE){
B=scan(localization,"character")
SETS=NA
LENGTH=NA
if(length(B)>270){
# extract the game counts (html cells of class 'rez')
I=(substr(B,1,10)=="class=rez>")
sum(I)
X0=B[I]
X3=as.numeric(substr(X0,11,13))
X2=as.numeric(substr(X0,11,12))
X1=as.numeric(substr(X0,11,11))
X0=X3
X0[is.na(X3)==TRUE]=X2[is.na(X3)==TRUE]
X0[is.na(X2)==TRUE]=X1[is.na(X2)==TRUE]
# group the game counts by match (blocks delimited by cells of class 'nl')
JL=c(which(substr(B,1,9)=="class=nl>"),length(B))
IL=which(substr(B,1,10)=="class=rez>")
IC=cut(IL,JL)
base=data.frame(IC,X0)
# total number of games, and number of sets, per match
LENGTH=as.numeric(tapply(X0,IC,sum))
SETS=as.numeric(tapply(X0,IC,length))/2}
BASE=data.frame(YEAR=year,TRNMT=city,LENGTH,SETS)
BASE0=rbind(BASE0,BASE)}}}
write.table(BASE0,"BASE-TENNIS-TOTAL.txt")

Here I consider only tournaments where players have to win 3 sets (and actually more tournaments than those in the code above), and I have a bit more than 72,000 matches in the TENNIS data frame (the dataset built with the code above),

> I=is.na(TENNIS$LENGTH)==FALSE
> BT=TENNIS[I,]
> nrow(BT)
[1] 72754
> maxr=function(x){max(x,na.rm=TRUE)}
> T=paste(BT$TRNMT,BT$YEAR)
> DUREE=tapply(BT$SETS,T,maxr)
> LISTE=names(DUREE[DUREE>3])
> BT=BT[T%in%LISTE,]

so, if we look briefly at matches over 35 years, we have the following boxplot (one boxplot per year),

The red line being the epic Isner-Mahut match in June 2010 (4-6, 6-3, 7-6, 6-7, 70-68, i.e. 183 games, here for the score card).

If we study the theory (e.g. from Paul Newton and Kamran Aslam), a lot of results can be obtained for the expected number of games, but if we want to study extremely rare events, we should generate Markov chains (with a lot of simulations, since the probability should be extremely small). But how many? Consider below the matches with more than 50 games,

The tail plot (over 50), i.e. the log-log Pareto plot, indicates that it will be difficult to study the tails,

and similarly with the Hill plot (assuming that tails are Pareto type….)

Anyway, if we want to study the tails, we should consider a high enough threshold. For instance, with a threshold at 68 (we keep only 24 matches), we have

> library(evir)  # for the gpd() fit (generalized Pareto); package assumed here
> X=BT$LENGTH    # number of games per match, from the dataset above
> seuil=68+0.25
> GPD1=gpd(X,seuil,method = "ml")
> GPD2=gpd(X,seuil,method = "pwm")
>
> xi=GPD1$par.ests[1]
> mu=seuil
> beta=GPD1$par.ests[2]
> x=180
> # survival probability above x, from the fitted generalized Pareto tail
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD1$p.less.thresh)*P)
[1] 5.621281e-09
>
> xi=GPD2$par.ests[1]
> mu=seuil
> beta=GPD2$par.ests[2]
> x=180
> P=exp((-1/xi)*log(1 + (xi * (x - mu))/beta))
> as.numeric((1-GPD2$p.less.thresh)*P)
[1] 3.027095e-09

I.e. the probability that one match lasts more than 183 games is of the order of one chance in a billion… With, say, 2,500 matches per year, that gives us a return period of the order of a few hundred thousand years. So yes, we can say that this is a rare event… And perhaps, by generating several billion chains, it would be possible to get a more precise estimate of the probability of playing 183 games in a single match…
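
Just to double check the order of magnitude (a quick sketch, using the rounded one-in-a-billion figure and 2,500 matches per year),

1/(2500*1e-9)   # expected waiting time, in years, before observing such a match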

Millenium bridge, endogeneity and risk management

Last week, within less than 48 hours, two friends mentioned the Millennium Bridge to me as an illustration of a risk management concept. There are several documents with that example, here (for the initial idea of using the Millennium Bridge to illustrate issues in risk management), here or there, e.g.

When we mention resonance effects on bridges, we usually think of the Tacoma Narrows Bridge (where strong winds set the bridge oscillating) or the Basse-Chaîne Bridge (in France, which collapsed on April 16, 1850, when 478 French soldiers marched across it in lockstep). In the first case, there is nothing we can do about it, but for the second one, this is why soldiers are required to break step on bridges.

But for the Millennium bridge, a ‘positive feedback’ phenomenon (known as Synchronous Lateral Excitation in physics) has been observed: the natural sway motion of people walking caused small sideways oscillations in the bridge, which in turn caused people on the bridge to sway in step, increasing the amplitude of the oscillations and continually reinforcing the effect. That has been described in a nice paper in 2005 (here). In the initial paper by Jon Danielsson and Hyun Song Shin, they note that “what is the probability that a thousand people walking at random will end up walking exactly in step? It is tempting to say “close to zero”, or “negligible”. After all, if each person’s step is an independent event, then the probability of everyone walking in step would be the product of many small numbers – giving us a probability close to zero. Presumably, this is the reason why Arup – the bridge engineers – did not take this into account. However, this is exactly where endogenous risk comes in. What we must take into account is the way that people react to their environment. Pedestrians on the bridge react to how the bridge is moving. When the bridge moves under your feet, it is a natural reaction for people to adjust their stance to regain balance. But here is the catch. When the bridge moves, everyone adjusts his or her stance at the same time. This synchronized movement pushes the bridge that the people are standing on, and makes the bridge move even more. This, in turn, makes the people adjust their stance more drastically, and so on. In other words, the wobble of the bridge feeds on itself. When the bridge wobbles, everyone adjusts their stance, which sets off an even worse wobble, which makes the people adjust even more, and so on. So, the wobble will continue and get stronger even though the initial shock (say, a gust of wind) has long passed. It is an example of a force that is generated and amplified within the system. It is an endogenous response. It is very different from a shock that comes from a storm or an earthquake which are exogenous to the system.”

And to go further, they point out that this event is rather similar to what is observed in financial markets (here), by quoting The Economist from October 12th, 2000: “So-called value-at-risk models (VaR) blend science and art. They estimate how much a portfolio could lose in a single bad day. If that amount gets too large, the VAR model signals that the bank should sell. The trouble is that lots of banks have similar investments and similar VAR models. In periods when markets everywhere decline, the models can tell everybody to sell the same things at the same time, making market conditions much worse. In effect, they can, and often do, create a vicious feedback loop.”

Course on risk measures (in French)

The course on risk measures, in Luminy, starts at 16.00 on Monday (here). The slides can be found here,

Note that additional references can be downloaded on the internet, e.g. the short course on risk measures by Freddy Delbaen (here) or the article from the Encyclopedia of Quantitative Finance, by Hans Föllmer and Alexander Schied (there). See also here for the paper by Jean-Marc Tallon, Johanna Etner and Meglena Jeleva on decision theory under uncertainty.

Lecture notes on risk and insurance

I just finished some lecture notes on risk and insurance. The notes, which can be downloaded [pdf], are in French, and will be used at the JES (Journées d’Etudes Statistiques), organised at the CIRM (mentioned here). Previous notes dealt with risk measures [pdf] and copulas [pdf]. Again, all comments are welcome…

Discussion on stress scenarios

On Friday morning, I had the honor of discussing a presentation by Alexander McNeil, on Stress Testing and Reverse Stress Testing, at the Financial Risks International Forum on Risk Dependencies (here).

This was an opportunity to rediscover techniques I had studied briefly a few years ago, on outlier detection, namely the bagplot (I will probably upload a post on that topic soon, in French unfortunately). The slides of my discussion are available here.
