Tag Archives: MAT8886

Bounding sums of random variables, part 1

For the last course MAT8886 of this (long) winter session, on copulas (and extremes), we will discuss risk aggregation. The course will focus mainly on the problem of bounding the distribution (or some risk measure, say the Value-at-Risk) of the sum of two random variables with given marginal distributions. For instance, suppose we have two Gaussian risks: what could be the worst-case scenario for the 99% quantile of the sum? Note that I mention implications in terms of risk management, but of course, those questions are also extremely important in terms of statistical inference, see e.g. Fan & Park (2006).

This problem is sometimes related to a question asked by Kolmogorov almost one hundred years ago, as mentioned in Makarov (1981). One year later, Rüschendorf (1982) also suggested a proof of those bounds. Here, we focus on dimension 2, which is, as usual, the simple case. But as mentioned recently in Kreinovich & Ferson (2005), in dimension 3 (or higher), “computing the best-possible bounds for arbitrary n is an NP-hard (computationally intractable) problem“. So let us focus on the case where we sum (only) two random variables (for those interested in higher dimensions, Puccetti & Rüschendorf (2012) provided interesting results for a dual version of those optimal bounds).

Let https://latex.codecogs.com/gif.latex?\Delta denote the set of univariate continuous distribution functions, left-continuous, on https://latex.codecogs.com/gif.latex?\mathbb{R}, and https://latex.codecogs.com/gif.latex?\Delta^+ the set of distributions on https://latex.codecogs.com/gif.latex?\mathbb{R}^+. Thus, https://latex.codecogs.com/gif.latex?F\in\Delta^+ if https://latex.codecogs.com/gif.latex?F\in\Delta and https://latex.codecogs.com/gif.latex?F(0)=0. Consider now two distributions https://latex.codecogs.com/gif.latex?F,G\in\Delta^+. In a very general setting, it is possible to consider operators on https://latex.codecogs.com/gif.latex?\Delta^+\times%20\Delta^+. Thus, let https://latex.codecogs.com/gif.latex?T:[0,1]\times[0,1]\rightarrow[0,1] denote an operator, increasing in each component, such that https://latex.codecogs.com/gif.latex?T(1,1)=1. And consider some function https://latex.codecogs.com/gif.latex?L:\mathbb{R}^+\times\mathbb{R}^+\rightarrow\mathbb{R}^+, assumed to be continuous and increasing in each component. For such functions https://latex.codecogs.com/gif.latex?T and https://latex.codecogs.com/gif.latex?L, define the following (general) operator, https://latex.codecogs.com/gif.latex?\tau_{T,L}(F,G), as

https://latex.codecogs.com/gif.latex?\tau_{T,L}(F,G)(x)=\sup_{L(u,v)=x}\{T(F(u),G(v))\}

One interesting case can be obtained when https://latex.codecogs.com/gif.latex?T is a copula, https://latex.codecogs.com/gif.latex?C. In that case,

https://latex.codecogs.com/gif.latex?\tau_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

and further, it is possible to write

https://latex.codecogs.com/gif.latex?\tau_{C,L}(F,G)(x)=\sup_{(u,v)\in%20L^{-1}(x)}\{C(F(u),G(v))\}

It is also possible to consider other (general) operators, e.g. based on the sum

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G)(x)=\int_{(u,v)\in%20L^{-1}(x)}%20dC(F(u),G(v))

or on the minimum,

https://latex.codecogs.com/gif.latex?\rho_{C,L}(F,G)(x)=\inf_{(u,v)\in%20L^{-1}(x)}\{C^\star(F(u),G(v))\}

where https://latex.codecogs.com/gif.latex?C^\star is the survival copula associated with https://latex.codecogs.com/gif.latex?C, i.e. https://latex.codecogs.com/gif.latex?C^\star(u,v)=u+v-C(u,v). Note that those operators can be used to define distribution functions, i.e.

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

and similarly

https://latex.codecogs.com/gif.latex?\rho_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

All that seems too theoretical? An application can be the case of the sum, i.e. https://latex.codecogs.com/gif.latex?L(x,y)=x+y; in that case, https://latex.codecogs.com/gif.latex?\sigma_{C,+}(F,G) is the distribution of the sum of two random variables with marginal distributions https://latex.codecogs.com/gif.latex?F and https://latex.codecogs.com/gif.latex?G, and copula https://latex.codecogs.com/gif.latex?C. Thus, https://latex.codecogs.com/gif.latex?\sigma_{C^\perp,+}(F,G) is simply the convolution of the two distributions,

https://latex.codecogs.com/gif.latex?\sigma_{C^\perp,+}(F,G)(x)=\int_{u+v=x}%20dC^\perp(F(u),G(v))

The important result (that can be found in Chapter 7 of Schweizer and Sklar (1983)) is that, given an operator https://latex.codecogs.com/gif.latex?L, for any copula https://latex.codecogs.com/gif.latex?C, one can find a lower bound for https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G)

https://latex.codecogs.com/gif.latex?\tau_{C^-,L}(F,G)\leq%20\tau_{C,L}(F,G)\leq\sigma_{C,L}(F,G)

as well as an upper bound

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G)\leq%20\rho_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)

Those inequalities come from the fact that for any copula https://latex.codecogs.com/gif.latex?C, https://latex.codecogs.com/gif.latex?C\geq%20C^-, where https://latex.codecogs.com/gif.latex?C^- is a copula. Since this function is not a copula in higher dimensions, one can easily imagine that obtaining those bounds in higher dimensions will be much more complicated…

In the case of the sum of two random variables, with marginal distributions https://latex.codecogs.com/gif.latex?F and https://latex.codecogs.com/gif.latex?G, bounds for the distribution of the sum https://latex.codecogs.com/gif.latex?H(x)=\mathbb{P}(X+Y\leq%20x), where https://latex.codecogs.com/gif.latex?X\sim%20F and https://latex.codecogs.com/gif.latex?Y\sim%20G, can be written

https://latex.codecogs.com/gif.latex?H^-(x)=\tau_{C^-%20,+}(F,G)(x)=\sup_{u+v=x}\{%20\max\{F(u)+G(v)-1,0\}%20\}

for the lower bound, and

https://latex.codecogs.com/gif.latex?H^+(x)=\rho_{C^-%20,+}(F,G)(x)=\inf_{u+v=x}\{%20\min\{F(u)+G(v),1\}%20\}

for the upper bound. And those bounds are sharp, in the sense that, for all https://latex.codecogs.com/gif.latex?t\in(0,1), there is a copula https://latex.codecogs.com/gif.latex?C_t such that

https://latex.codecogs.com/gif.latex?\tau_{C_t,+}(F,G)(x)=\tau_{C^-%20,+}(F,G)(x)=t

and there is (another) copula https://latex.codecogs.com/gif.latex?C_t such that

https://latex.codecogs.com/gif.latex?\sigma_{C_t,+}(F,G)(x)=\tau_{C^-%20,+}(F,G)(x)=t

Thus, using those results, it is possible to bound cumulative distribution functions. But actually, all that can also be done on quantiles (see Frank, Nelsen & Schweizer (1987)). For all https://latex.codecogs.com/gif.latex?F\in\Delta^+, let https://latex.codecogs.com/gif.latex?F^{-1} denote its generalized inverse, left-continuous, and let https://latex.codecogs.com/gif.latex?\nabla^+ denote the set of those quantile functions. Define then the dual versions of our operators,

https://latex.codecogs.com/gif.latex?\tau^{-1}_{T,L}(F^{-1},G^{-1})(x)=\inf_{(u,v)\in%20T^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}

and

https://latex.codecogs.com/gif.latex?\rho^{-1}_{T,L}(F^{-1},G^{-1})(x)=\sup_{(u,v)\in%20T^\star^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}

Those definitions are really dual versions of the previous ones, in the sense that https://latex.codecogs.com/gif.latex?\tau^{-1}_{T,L}(F^{-1},G^{-1})=[\tau_{T,L}(F,G)]^{-1} and https://latex.codecogs.com/gif.latex?\rho^{-1}_{T,L}(F^{-1},G^{-1})=[\rho_{T,L}(F,G)]^{-1}.

Note that if we focus on the sum of two random variables, the lower bound for the quantile of the sum is

https://latex.codecogs.com/gif.latex?\tau^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\inf_{\max\{u+v-1,0\}=x}\{F^{-1}(u)+G^{-1}(v)\}

while the upper bound is

https://latex.codecogs.com/gif.latex?\rho^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\sup_{\min\{u+v,1\}=x}\{F^{-1}(u)+G^{-1}(v)\}

A great thing is that it should not be too difficult to compute those quantities numerically. It is perhaps a little bit harder for cumulative distribution functions, since they are not defined on a bounded support. But it can still be done if the goal is simply to plot those bounds. The code is the following, for the sum of two lognormal distributions.

> F=function(x) plnorm(x,0,1)
> G=function(x) plnorm(x,0,1)
> n=100
> X=seq(0,10,by=.05)
> Hinf=Hsup=rep(NA,length(X))
> for(i in 1:length(X)){
+ x=X[i]
+ U=seq(0,x,by=1/n); V=x-U
+ Hinf[i]=max(pmax(F(U)+G(V)-1,0))
+ Hsup[i]=min(pmin(F(U)+G(V),1))}

If we plot those bounds, we obtain

> plot(X,Hinf,ylim=c(0,1),type="s",col="red")
> lines(X,Hsup,type="s",col="red")

But somehow, it is even simpler to work with quantiles, since they are defined on a bounded support. Quantiles are here

> Finv=function(u) qlnorm(u,0,1)
> Ginv=function(u) qlnorm(u,0,1)

The idea will be to consider a discretized version of the unit interval, as discussed (in a much more general setting) in Williamson (1989). Again, the goal is to compute, for instance,

https://latex.codecogs.com/gif.latex?\sup_{u\in[0,x]}\{F^{-1}(u)+G^{-1}(x-u)\}

The idea is to consider https://latex.codecogs.com/gif.latex?x=i/n and https://latex.codecogs.com/gif.latex?u=j/n, and the bound for the quantile function at point https://latex.codecogs.com/gif.latex?i/n is then

https://latex.codecogs.com/gif.latex?\sup_{j\in\{0,1,\cdots,i\}}\left\{F^{-1}\left(\frac{j}{n}\right)+G^{-1}\left(\frac{i-j}{n}\right)\right\}

The code to compute those bounds, for a given https://latex.codecogs.com/gif.latex?n is here

> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }

Here we have the following (several values of https://latex.codecogs.com/gif.latex?n were considered, so that we can visualize the convergence of that numerical algorithm),
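The kind of code used to generate that comparison could look like the following (only a sketch, wrapping the loop above in a hypothetical QBOUNDS function and overlaying the bounds obtained for several values of https://latex.codecogs.com/gif.latex?n),

> QBOUNDS=function(n){
+ Qinf=Qsup=rep(NA,n-1)
+ for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))}
+ return(list(p=(1:(n-1))/n,Qinf=Qinf,Qsup=Qsup))}
> plot(c(0,1),c(0,15),type="n",xlab="probability level",ylab="quantile of the sum")
> for(n in c(100,250,500,1000)){
+ B=QBOUNDS(n)
+ lines(B$p,B$Qinf,col="red")
+ lines(B$p,B$Qsup,col="blue")}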

Here, we have a simple code to visualize bounds for quantiles for the sum of two risks. But it is possible to go further…

(nonparametric) copula density estimation

Today, we will go further on the inference of copula functions. Some code (and references) can be found in a previous post, on nonparametric estimators of copula densities (among other related things). Consider (as before) the loss-ALAE dataset (since we’ve been working a lot on that dataset),

> library(MASS)
> library(evd)
> X=lossalae
> U=cbind(rank(X[,1])/(nrow(X)+1),rank(X[,2])/(nrow(X)+1))

The standard tool to estimate densities nonparametrically is the multivariate kernel estimator. We can look at the density using

> mat1=kde2d(U[,1],U[,2],n=35)
> persp(mat1$x,mat1$y,mat1$z,col="green",
+ shade=TRUE,theta=30,
+ xlab="",ylab="",zlab="",zlim=c(0,7))

or level curves (isodensity curves) with more detailed estimators (on grids with shorter steps)

> mat1=kde2d(U[,1],U[,2],n=101)
> image(mat1$x,mat1$y,mat1$z,col=
+ rev(heat.colors(100)),xlab="",ylab="")
> contour(mat1$x,mat1$y,mat1$z,add=
+ TRUE,levels = pretty(c(0,4), 11))

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est1.gif

Kernels are nice, but we clearly observe some border bias, which is extremely strong in the corners (the estimator is one fourth of what it should be, see another post for more details). Instead of working with the sample https://latex.codecogs.com/gif.latex?(U_i,V_i) on the unit square, consider some transformed sample https://latex.codecogs.com/gif.latex?(Q(U_i),Q(V_i)), where https://latex.codecogs.com/gif.latex?Q:(0,1)\rightarrow\mathbb{R} is a given function, e.g. the quantile function of an unbounded distribution, for instance the quantile function of the https://latex.codecogs.com/gif.latex?\mathcal{N}(0,1) distribution. Then, we can estimate the density of the transformed sample and, using the inversion technique, derive an estimator of the density of the initial sample. Since the inverse of a (general) function is not that simple to compute, the code might be a bit slow. But it does work,

> gaussian.kernel.copula.surface <- function (u,v,n) {
+   s=seq(1/(n+1), length=n, by=1/(n+1))
+   mat=matrix(NA,nrow = n, ncol = n)
+ sur=kde2d(qnorm(u),qnorm(v),n=1000,
+ lims = c(-4, 4, -4, 4))
+ su<-sur$z
+ for (i in 1:n) {
+     for (j in 1:n) {
+ 	Xi<-round((qnorm(s[i])+4)*1000/8)+1;
+ 	Yj<-round((qnorm(s[j])+4)*1000/8)+1
+ 	mat[i,j]<-su[Xi,Yj]/(dnorm(qnorm(s[i]))*
+ 	dnorm(qnorm(s[j])))
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }
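For instance, with the pseudo-observations of the loss-ALAE dataset loaded above, a short usage sketch is the following (the Student and Beta versions below are called exactly the same way),

> mat2=gaussian.kernel.copula.surface(U[,1],U[,2],n=35)
> persp(mat2$x,mat2$y,mat2$z,col="green",shade=TRUE,
+ theta=30,xlab="",ylab="",zlab="")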

Here, we get

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est2.gif

Note that it is possible to consider another transformation, e.g. the quantile function of a Student-t distribution.

> student.kernel.copula.surface =
+  function (u,v,n,d=4) {
+  s <- seq(1/(n+1), length=n, by=1/(n+1))
+  mat <- matrix(NA,nrow = n, ncol = n)
+ sur<-kde2d(qt(u,df=d),qt(v,df=d),n=5000,
+ lims = c(-8, 8, -8, 8))
+ su<-sur$z
+ for (i in 1:n) {
+     for (j in 1:n) {
+ 	Xi<-round((qt(s[i],df=d)+8)*5000/16)+1;
+ 	Yj<-round((qt(s[j],df=d)+8)*5000/16)+1
+ 	mat[i,j]<-su[Xi,Yj]/(dt(qt(s[i],df=d),df=d)*
+ 	dt(qt(s[j],df=d),df=d))
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }

Another strategy is to consider kernels that have precisely the unit interval as support. The idea here is to consider the product of Beta kernels, whose parameters depend on the location,

> beta.kernel.copula.surface=
+  function (u,v,bx=.025,by=.025,n) {
+  s <- seq(1/(n+1), length=n, by=1/(n+1))
+  mat <- matrix(0,nrow = n, ncol = n)
+ for (i in 1:n) {
+     a <- s[i]
+     for (j in 1:n) {
+     b <- s[j]
+ 	mat[i,j] <- sum(dbeta(a,u/bx,(1-u)/bx) *
+     dbeta(b,v/by,(1-v)/by)) / length(u)
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est3.gif

On those two graphs, we can clearly observe strong tail dependence in the upper (right) corner, that cannot be intuited using a standard kernel estimator…

Kendall’s function for copulas

As mentioned in the course on copulas, a nice tool to describe dependence is Kendall’s cumulative function. Given a random pair http://freakonometrics.hypotheses.org/files/2015/12/conc-19.gif with distribution http://freakonometrics.hypotheses.org/files/2015/12/conc-17.gif, define the random variable http://freakonometrics.hypotheses.org/files/2015/12/conc-30.gif. Then Kendall’s cumulative function is

http://freakonometrics.hypotheses.org/files/2015/12/kendall-01.gif

Genest and Rivest (1993) introduced that function to choose among Archimedean copulas (we’ll get back to this point below).

From a computational point of view, computing such a function can be done as follows,

  • for all http://freakonometrics.hypotheses.org/files/2015/12/kendall-02.gif, compute http://freakonometrics.hypotheses.org/files/2015/12/kendall-03.gif as the proportion of observations in the lower quadrant, with upper corner http://freakonometrics.hypotheses.org/files/2015/12/kendall-4.gif, i.e.

http://freakonometrics.hypotheses.org/files/2015/12/kendall-06.gif

  • then compute the cumulative distribution function of http://freakonometrics.hypotheses.org/files/2015/12/kendall-03.gif‘s.

To visualize the construction of that cumulative distribution function, consider the following animation

Thus, here is the code to compute that cumulative distribution function

n=nrow(X)
i=rep(1:n,each=n)
j=rep(1:n,n)
S=((X[i,1]>X[j,1])&(X[i,2]>X[j,2]))
Z=tapply(S,i,sum)/(n-1)

The graph can be obtained either using

plot(ecdf(Z))

or

plot(sort(Z),(1:n)/n,type="s",col="red")

The interesting point is that for an Archimedean copula with generator http://freakonometrics.hypotheses.org/files/2015/12/kendall-7.gif, Kendall’s function is simply

http://freakonometrics.hypotheses.org/files/2015/12/kendall-8.gif

If we’re too lazy to do the maths, at least, it is possible to compute those functions numerically. For instance, for the Clayton copula,

h=.001
alpha=2                                   # Clayton parameter (e.g. chosen to match a given Kendall's tau)
phiC=function(t){(t^(-alpha)-1)}          # Clayton generator
dphiC=function(t){(phiC(t+h)-phiC(t-h))/2/h}
kC=function(t){t-phiC(t)/dphiC(t)}
Kc=Vectorize(kC)

Similarly, let us consider the Gumbel copula (with distinct names for the generator, so that Kc above is not silently overwritten),

theta=2                                   # Gumbel parameter
phiG=function(t){(-log(t))^(theta)}       # Gumbel generator
dphiG=function(t){(phiG(t+h)-phiG(t-h))/2/h}
kG=function(t){t-phiG(t)/dphiG(t)}
Kg=Vectorize(kG)
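To compare those theoretical functions with the empirical one, a possible sketch (for one pair of parameters; the animation below loops over several values) is

plot(sort(Z),(1:n)/n,type="s",col="black",xlab="",ylab="")
u=seq(.01,.99,by=.01)
lines(u,Kc(u),col="blue")                 # Clayton
lines(u,Kg(u),col="purple")               # Gumbel
lines(u,u-u*log(u),col="green",lty=2)     # independence
abline(0,1,lty=2)                         # comonotonicity (perfect positive dependence)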

If we plot the empirical Kendall’s function (obtained from the sample), with different theoretical ones, derived from Clayton copulas (on the left, in blue) or Gumbel copula (on the right, in purple), we have the following,

http://freakonometrics.hypotheses.org/files/2015/12/kendall-function-anim.gif

Note that the different curves were obtained when the Clayton copula has Kendall’s tau equal to 0, .1, .2, .3, …, .9, 1, and similarly for the Gumbel copula (so that the figures can be compared). The following table gives the correspondence from Kendall’s tau to the underlying parameter of the copula (for different families)

as well as Spearman’s rho,
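For Clayton and Gumbel copulas, for instance, the Kendall's tau part of that correspondence can be computed directly (a small sketch, using the standard relations tau = theta/(theta+2) for Clayton and tau = 1 - 1/theta for Gumbel),

tau=seq(.1,.9,by=.1)
cbind(tau,
clayton=2*tau/(1-tau),     # inverse of tau = theta/(theta+2)
gumbel=1/(1-tau))          # inverse of tau = 1 - 1/theta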


To conclude, observe that there are two important particular cases that can be identified here: the case of perfect (positive) dependence, on the first diagonal, when http://freakonometrics.hypotheses.org/files/2015/12/kennnn-04.gif, and the case of independence, the upper green curve, http://freakonometrics.hypotheses.org/files/2016/10/kennnnn-05.gif. It should also be mentioned that it is common to plot not the function http://freakonometrics.hypotheses.org/files/2015/12/kennnn-01.gif, but the function http://freakonometrics.hypotheses.org/files/2015/12/kennnn-02.gif, defined as http://freakonometrics.hypotheses.org/files/2015/12/kennnn-03.gif,

Tests on tail index for extremes

Since several students got the intuition that natural catastrophes might be non-insurable (underlying distributions with infinite mean), I will post some comments on testing procedures for extreme value models.

A natural idea is to use a likelihood ratio test (for composite hypotheses). Let http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif denote the parameter (of our parametric model, e.g. the tail index), and suppose we would like to know whether http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif is smaller or larger than http://freakonometrics.blog.free.fr/public/perso5/lrtest22.gif (where, in the context of finite versus infinite mean, http://freakonometrics.blog.free.fr/public/perso5/lrtest23.gif). I.e. either http://freakonometrics.blog.free.fr/public/perso5/lrtest21.gif belongs to the set http://freakonometrics.blog.free.fr/public/perso5/lrtest-10.gif or to its complement http://freakonometrics.blog.free.fr/public/perso5/lrtest-11.gif. Consider the maximum likelihood estimator http://freakonometrics.blog.free.fr/public/perso5/lrtest24.gif, i.e.

http://freakonometrics.blog.free.fr/public/perso5/lrtest-9.gif

Let http://freakonometrics.blog.free.fr/public/perso5/lrtest25.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-3.gif denote the constrained maximum likelihood estimators on http://freakonometrics.blog.free.fr/public/perso5/lrtest26.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest27.gif respectively,

http://freakonometrics.blog.free.fr/public/perso5/lrtest-12.gif

http://freakonometrics.blog.free.fr/public/perso5/lrtest-2.gif

Either http://freakonometrics.blog.free.fr/public/perso5/lrtest-13.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-6.gif (on the left), or http://freakonometrics.blog.free.fr/public/perso5/lrtest-14.gif and http://freakonometrics.blog.free.fr/public/perso5/lrtest-7.gif (on the right)

So likelihood ratios

http://freakonometrics.blog.free.fr/public/perso5/lrtest-15.gif      http://freakonometrics.blog.free.fr/public/perso5/lrtest-16.gif

 are either equal to

http://freakonometrics.blog.free.fr/public/perso5/lrtest-19.gif      http://freakonometrics.blog.free.fr/public/perso5/lrtest-18.gif

or

http://freakonometrics.blog.free.fr/public/perso5/lrtest-20.gif        http://freakonometrics.blog.free.fr/public/perso5/lrtest-17.gif

If we use the code mentioned in the post on profile likelihood, it is easy to derive that ratio. The following graph shows the evolution of that ratio, based on a GPD assumption, for different thresholds,

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> U=seq(2,10,by=.2)
> LR=P=ES=SES=rep(NA,length(U))
> for(j in 1:length(U)){
+ u=U[j]
+ Y=X[X>u]-u
+ loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
+ XIV=(1:300)/100;L=rep(NA,300)
+ for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
+ plot(XIV,L,type="l")
+ PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
+ (L0=(OPT=optimize(f=PL,interval=c(0,10)))$objective)
+ profilelikelihood=function(beta){
+ -loglikelihood(1,beta) }
+ (L1=optim(par=1,fn=profilelikelihood)$value)
+ LR[j]=L1-L0
+ P[j]=1-pchisq(L1-L0,df=1)
+ G=gpd(X,u)
+ ES[j]=G$par.ests[1]
+ SES[j]=G$par.ses[1]
+ }
>
> plot(U,LR,type="b",ylim=range(c(0,LR)))
> abline(h=qchisq(.95,1),lty=2)

with on top the values of the ratio (the dotted line is the quantile of a chi-square distribution with one degree of freedom) and below the associated p-value

> plot(U,P,type="b",ylim=range(c(0,P)))
> abline(h=.05,lty=2)

For comparison, it is also possible to look at the confidence interval for the tail index of the GPD fit,

> plot(U,ES,type="b",ylim=c(0,1))
> lines(U,ES+1.96*SES,type="h",col="red")
> abline(h=1,lty=2)

To go further, see Falk (1995), Dietrich, de Haan & Hüsler (2002), Hüsler & Li (2006) with the following table, or Neves & Fraga Alves (2008). See also here or there (for the latex based version) for an old paper I wrote on that topic.

the Dirichlet distribution

In the course, since we are still introducing some concepts of dependent distributions, we will talk about the Dirichlet distribution, which is a distribution over the simplex of http://freakonometrics.hypotheses.org/files/2017/07/diri11.gif. Let http://freakonometrics.hypotheses.org/files/2017/07/diri01.gif denote the Gamma distribution with density (on http://freakonometrics.hypotheses.org/files/2017/07/diri03.gif)

http://freakonometrics.hypotheses.org/files/2017/07/diri02.gif

Let http://freakonometrics.hypotheses.org/files/2017/07/diri04.gif denote independent http://freakonometrics.hypotheses.org/files/2017/07/diri05.gif random variables, with http://freakonometrics.hypotheses.org/files/2017/07/diri06.gif. Then http://freakonometrics.hypotheses.org/files/2017/07/diri07.gif where

http://freakonometrics.hypotheses.org/files/2017/07/diri08.gif

has a Dirichlet distribution with parameter

http://freakonometrics.hypotheses.org/files/2017/07/diri09.gif

Note that http://freakonometrics.hypotheses.org/files/2017/07/diri10.gif has a distribution in the simplex of http://freakonometrics.hypotheses.org/files/2017/07/diri11.gif,

http://freakonometrics.hypotheses.org/files/2017/07/diri40.gif

and has density

http://freakonometrics.hypotheses.org/files/2017/07/diri12.gif

We will write http://freakonometrics.hypotheses.org/files/2017/07/diri13.gif.
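Note that this construction also gives a direct way to simulate from that distribution, simply by normalizing independent Gamma draws (a small sketch; the rdirichlet function from the MCMCpack package, loaded below, essentially does the same thing),

> alpha=c(2,2,5)
> ns=1000
> GAM=matrix(rgamma(ns*length(alpha),shape=rep(alpha,each=ns)),ns,length(alpha))
> W=GAM/rowSums(GAM)      # each row is a draw from the Dirichlet distribution
> colMeans(W)             # should be close to alpha/sum(alpha), i.e. (2/9, 2/9, 5/9)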

The density for different values of http://freakonometrics.hypotheses.org/files/2017/07/diri20.gif can be visualized below, e.g. http://freakonometrics.hypotheses.org/files/2017/07/diri21.gif, with some kind of symmetry,
http://freakonometrics.hypotheses.org/files/2017/07/dirichlet222.gif
or http://freakonometrics.hypotheses.org/files/2017/07/diri22.gif and http://freakonometrics.hypotheses.org/files/2017/07/diri23.gif, below
http://freakonometrics.hypotheses.org/files/2017/07/dirichlet522.gif
and finally, below, http://freakonometrics.hypotheses.org/files/2017/07/diri24.gif


Note that marginal distributions are also Dirichlet, in the sense that if

http://freakonometrics.hypotheses.org/files/2017/07/diri13.gif

then

http://freakonometrics.hypotheses.org/files/2017/07/diri14.gif

if http://freakonometrics.hypotheses.org/files/2017/07/diri15.gif, and if http://freakonometrics.hypotheses.org/files/2017/07/diri16.gif, then http://freakonometrics.hypotheses.org/files/2017/07/diri17.gif‘s have Beta distributions,

http://freakonometrics.hypotheses.org/files/2017/07/diri18.gif
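This can be checked quickly on the simulated sample above (a small sketch, comparing the histogram of the last component with the corresponding Beta density),

> hist(W[,3],probability=TRUE,col="light green",main="",xlab="")
> v=seq(0,1,by=.01)
> lines(v,dbeta(v,alpha[3],sum(alpha)-alpha[3]),lwd=2,col="red")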

See Devroye (1986), Section XI.4, or Frigyik, Kapila & Gupta (2010). This distribution is sometimes also called the multivariate Beta distribution. In R, this density can be computed as follows,

> library(MCMCpack)
> alpha=c(2,2,5)
> x=seq(0,1,by=.05)
> vx=rep(x,length(x))
> vy=rep(x,each=length(x))
> vz=1-vx-vy
> V=cbind(vx,vy,vz)
> D=ddirichlet(V, alpha)
> persp(x,x,matrix(D,length(x),length(x)))

(to plot the density, as in the figures above). Note that we will come back to that distribution later on, with so-called Liouville copulas (see also Gupta & Richards (1986)).

Exchangeability, credit risk and risk measures

Exchangeability is an extremely useful concept, since (most of the time) analytical expressions can be derived. But it can also be used to observe some unexpected behaviors, that we will discuss later on in a more general setting. For instance, in an old post, I discussed connections between correlation and risk measures (using simulations to illustrate, but in the context of exchangeable risks, calculations can be performed more accurately). Consider again the standard credit risk problem, where the quantity of interest is the number of defaults in a portfolio. Consider a homogeneous portfolio of exchangeable risks. The quantity of interest is here

http://freakonometrics.hypotheses.org/files/2016/11/credit-01.gif

or perhaps the quantile function of the sum (since the Value-at-Risk is the standard risk measure). We have seen yesterday that – given the latent factor – http://freakonometrics.hypotheses.org/files/2016/11/exch67.gif (either the company defaults, or not), so that

http://freakonometrics.hypotheses.org/files/2016/11/exch66.gif

i.e. we can derive the (unconditional) distribution of the sum

http://freakonometrics.hypotheses.org/files/2016/11/exch60.gif

so that the probability function of the sum is, assuming that http://freakonometrics.hypotheses.org/files/2016/11/exch76.gif

http://freakonometrics.hypotheses.org/files/2016/11/exch68.gif

Thus, the following code can be used to calculate the quantile function

> proba=function(s,a,m,n){
+ b=a/m-a
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
+ stop.on.error =  FALSE)$value
+ }
> QUANTILE=function(p=.99,a=2,m=.1,n=500){
+ V=rep(NA,n+1)
+ for(i in 0:n){
+ V[i+1]=proba(i,a,m,n)}
+ V=V/sum(V)
+ return(min(which(cumsum(V)>p))-1) }   # -1 since V[i+1] corresponds to i defaults

Now observe that since the variates are exchangeable, it is possible to explicitly calculate the correlation between defaults. Here

http://freakonometrics.hypotheses.org/files/2016/11/exch70.gif

i.e.

http://freakonometrics.hypotheses.org/files/2016/11/exch71.gif

Thus, the correlation between two default indicators is then

http://freakonometrics.hypotheses.org/files/2016/11/exch73.gif

http://freakonometrics.hypotheses.org/files/2016/11/exch75.gif

Under the assumption that the latent factor is beta distributed

http://freakonometrics.hypotheses.org/files/2016/11/exch78.gif

we get

http://freakonometrics.hypotheses.org/files/2016/11/exch80.gif
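For the record, that default correlation is easy to compute in R (a small sketch; recall that for an exchangeable Bernoulli mixture with a Beta(a,b) latent factor, the correlation between two indicators is 1/(a+b+1), which equals m/(a+m) with the parametrization b=a/m-a used in the code above),

> correl=function(a,m){
+ b=a/m-a
+ 1/(a+b+1) }
> correl(a=2,m=.1)
[1] 0.04761905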

Thus, as a function of the parameter of the beta distribution (we consider beta distributions with the same mean, i.e. the same marginal distributions, so we have only one free parameter left, which drives the correlation between default indicators), it is possible to plot the quantile function,

> PICTURE=function(P){
+ A=seq(.01,2,by=.01)
+ VQ=matrix(NA,length(A),5)
+ for(i in 1:length(A)){
+ VQ[i,1]=QUANTILE(a=A[i],p=.9,m=P)
+ VQ[i,2]=QUANTILE(a=A[i],p=.95,m=P)
+ VQ[i,3]=QUANTILE(a=A[i],p=.975,m=P)
+ VQ[i,4]=QUANTILE(a=A[i],p=.99,m=P)
+ VQ[i,5]=QUANTILE(a=A[i],p=.995,m=P)
+ }
+ plot(A,VQ[,5],type="s",col="red",ylim=
+ c(0,max(VQ)),xlab="",ylab="")
+ lines(A,VQ[,4],type="s",col="blue")
+ lines(A,VQ[,3],type="s",col="black")
+ lines(A,VQ[,2],type="s",col="blue",lty=2)
+ lines(A,VQ[,1],type="s",col="red",lty=2)
+ lines(A,rep(500*P,length(A)),col="grey")
+ legend("topright",c("quantile 99.5%","quantile 99%",
+ "quantile 97.5%","quantile 95%","quantile 90%","mean"),
+ col=c("red","blue","black",
+"blue","red","grey"),
+ lty=c(1,1,1,2,2,1),bty="n")
+}

e.g. with a (marginal) default probability of 15%,

> PICTURE(.15)

On this graph, we observe that the stronger the correlation (the more to the left), the higher the quantile… Note that the same graph can be plotted with the correlation on the X-axis,


Which is quite intuitive, somehow. But if the marginal probability of default decreases, increasing the correlation might decrease the risk (i.e. the quantile function),

> PICTURE(.05)

(with the modified code to visualize the quantile as a function of the underlying default correlation) or even worse,

> PICTURE(.0075)

And it becomes all the more counterintuitive as the default probability decreases! So in the case of a portfolio of not-very-risky bond issuers (with high ratings), assuming a very strong correlation will lower risk-based capital!

MAT8886 from tail estimation to risk measure(s) estimation

This week, we conclude the part on extremes with an application of extreme value theory to risk measures. We have seen last week that, if we assume that above a threshold http://freakonometrics.blog.free.fr/public/perso5/qt01.gif a Generalized Pareto Distribution will fit nicely, then we can use it to derive an estimator of the quantile function (for probability levels such that the quantile is larger than the threshold)

http://freakonometrics.blog.free.fr/public/perso5/qt03.gif

If the threshold is http://freakonometrics.blog.free.fr/public/perso5/qt02.gif, i.e. we keep the http://freakonometrics.blog.free.fr/public/perso5/qt04.gif largest observations to fit a GPD, then this estimator can be written

http://freakonometrics.blog.free.fr/public/perso5/qt06.gif

The code we wrote last week was the following (here based on log-returns of the SP500 index; we focus on large losses, i.e. large values of the opposite of the log-returns, plotted below),

> library(tseries)
> X=get.hist.quote("^GSPC")
> T=time(X)
> D=as.POSIXlt(T)
> Y=X$Close
> R=diff(log(Y))
> D=D[-1]
> X=-R
> plot(D,X)
> library(evir)
> GPD=gpd(X,quantile(X,.975))
> xi=GPD$par.ests[1]
> beta=GPD$par.ests[2]
> u=GPD$threshold
> QpGPD=function(p){
+ u+beta/xi*((100/2.5*(1-p))^(-xi)-1)
+ }
> QpGPD(1-1/250)
97.5%
0.04557386
> QpGPD(1-1/2500)
97.5%
0.08925095

This is consistent with the following output, obtained for the return period of a yearly event (one observation out of 250 trading days),

> gpd.q(tailplot(gpd(X,quantile(X,.975))), 1-1/250, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI   Estimate   Upper CI
0.04172534 0.04557655 0.05086785

or the decennial one

> gpd.q(tailplot(gpd(X,quantile(X,.975))), 1-1/2500, ci.type =
+ "likelihood", ci.p = 0.95,like.num = 50)
Lower CI   Estimate   Upper CI
0.07165395 0.08925558 0.13636620

Note that it is also possible to derive an estimator for another population risk measure (the quantile is simply the so-called Value-at-Risk), the expected shortfall (or Tail Value-at-Risk), i.e.

http://freakonometrics.blog.free.fr/public/perso5/qt10.gif

The idea is to write that expression as

http://freakonometrics.blog.free.fr/public/perso5/qt11.gif

so that we recognize the mean excess function (discussed earlier). Thus, assuming again that above http://freakonometrics.blog.free.fr/public/perso5/qt01.gif (and therefore above that high quantile) a GPD will fit, we can write

http://freakonometrics.blog.free.fr/public/perso5/qt12.gif

or equivalently

http://freakonometrics.blog.free.fr/public/perso5/qt13.gif

If we substitute estimators for the unknown quantities in that expression, we get

http://freakonometrics.blog.free.fr/public/perso5/qt09.gif

The code is here

> EpGPD=function(p){
+ u-beta/xi+beta/xi/(1-xi)*(100/2.5*(1-p))^(-xi)
+ }
> EpGPD(1-1/250)
97.5%
0.06426508
> EpGPD(1-1/2500)
97.5%
0.1215077

An alternative is to use Hill’s approach (used to derive Hill’s estimator). Assume here that http://freakonometrics.blog.free.fr/public/perso5/qt20.gif, where http://freakonometrics.blog.free.fr/public/perso5/qt21.gif is a slowly varying function. Then, for all http://freakonometrics.blog.free.fr/public/perso5/qt23.gif,

http://freakonometrics.blog.free.fr/public/perso5/qt24.gif

Since http://freakonometrics.blog.free.fr/public/perso5/qt21.gif is a slowly varying function, it seems natural to assume that this ratio is almost 1 (which is true asymptotically). Thus

http://freakonometrics.blog.free.fr/public/perso5/qt25.gif

i.e. if we invert that function, we derive an estimator for the quantile function

http://freakonometrics.blog.free.fr/public/perso5/qt26.gif

which can also be written

http://freakonometrics.blog.free.fr/public/perso5/qt07.gif

(which is close to the relation we derived using a GPD model). Here the code is

> k=trunc(length(X)*.025)
> Xs=rev(sort(as.numeric(X)))
> xiHill=mean(log(Xs[1:k]))-log(Xs[k+1])
> u=Xs[k+1]
> QpHill=function(p){
+ u+u*((100/2.5*(1-p))^(-xiHill)-1)
+ }

with the following Hill plot

For yearly and decennial events, we have here

> QpHill(1-1/250)
[1] 0.04580548
> QpHill(1-1/2500)
[1] 0.1010204

Those quantities seem consistent, since they are quite close, but they differ from the empirical quantiles,

> quantile(X,1-1/250)
99.6%
0.04743929
> quantile(X,1-1/2500)
99.96%
0.09054039

Note that it is also possible to use some functions to derive estimators of those quantities,

> riskmeasures(gpd(X,quantile(X,.975)),1-1/250)
p   quantile      sfall
[1,] 0.996 0.04557655 0.06426859
> riskmeasures(gpd(X,quantile(X,.975)),1-1/2500)
p   quantile     sfall
[1,] 0.9996 0.08925558 0.1215137
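Finally, under a strict Pareto assumption above the estimated quantile (so that E[X|X>q] = q*alpha/(alpha-1), with alpha = 1/xi), a Hill-based expected shortfall could also be sketched along the same lines, and compared with the values above,

> EpHill=function(p){
+ QpHill(p)/(1-xiHill)   # valid only when xiHill < 1
+ }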

(in this application, we have assumed that log-returns were independent and identically distributed… which might be a rather strong assumption).

a short word on profile likelihood

Profile likelihood is an interesting tool to visualize and compute confidence intervals for estimators (see e.g. Venzon & Moolgavkar (1988)). As we will use it here, we will plot

http://freakonometrics.blog.free.fr/public/perso5/proflike01.gif

But more generally, it is possible to consider

http://freakonometrics.blog.free.fr/public/perso5/profilik06.gif

where http://freakonometrics.blog.free.fr/public/perso5/profilik03.gif. Then (under standard suitable conditions)

http://freakonometrics.blog.free.fr/public/perso5/profilik05.gif

which can be used to derive confidence intervals.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> u=5

The function to draw the profile likelihood for the tail index parameter is then

> Y=X[X>u]-u
> loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
> XIV=(1:300)/100;L=rep(NA,300)
> for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
> plot(XIV,L,type="l")

It is possible to use that profile likelihood function to derive a confidence interval,

> PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6315989

$objective
[1] 754.1115
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1)/2,col="red")
> I=which(L>=-up-qchisq(p=.95,df=1)/2)
> lines(XIV[I],rep(-up-qchisq(p=.95,df=1)/2,length(I)),
+ lwd=5,col="red")
> abline(v=range(XIV[I]),lty=2,col="red")

This can also be done directly with the following code

> library(ismev)
> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

Tail index estimation

These data were collected at Copenhagen Reinsurance and comprise 2167 fire losses over the period 1980 to 1990. They have been adjusted for inflation to reflect 1985 values and are expressed in millions of Danish Kroner. Note that it is also possible to work with the same data where the total claim has been divided into a building loss, a loss of contents and a loss of profits (the second dataset below).

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> base2=read.table(
+ "http://freakonometrics.free.fr/danish-multivariate.txt",
+ header=TRUE)

Consider here the first dataset (we deal – so far – with univariate extremes),

> X=base1$Loss.in.DKM
> D=as.Date(as.character(base1$Date),"%m/%d/%Y")
> plot(D,X,type="h")

The graph is the following,

A natural idea is then to plot

http://freakonometrics.hypotheses.org/files/2015/12/hill01.gif

i.e.

> Xs=sort(X)
> logXs=rev(log(Xs))
> n=length(X)
> plot(log(Xs),log((n:1)/(n+1)))

Points are on a straight line here. The slope can be obtained using a linear regression,

> B=data.frame(X=log(Xs),Y=log((n:1)/(n+1)))
> reg=lm(Y~X,data=B)
> summary(reg)

Call:
lm(formula = Y ~ X, data = B)

Residuals:
Min       1Q   Median       3Q      Max
-0.59999 -0.00777  0.00878  0.02461  0.20309

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.089442   0.001572   56.88   <2e-16 ***
X           -1.382181   0.001477 -935.55   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.04928 on 2165 degrees of freedom
Multiple R-squared: 0.9975,	Adjusted R-squared: 0.9975
F-statistic: 8.753e+05 on 1 and 2165 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-500):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 500):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.48502 -0.02148 -0.00900  0.01626  0.35798

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.186188   0.010033   18.56   <2e-16 ***
X           -1.432767   0.005105 -280.68   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.07751 on 499 degrees of freedom
Multiple R-squared: 0.9937,	Adjusted R-squared: 0.9937
F-statistic: 7.878e+04 on 1 and 499 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-100):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 100):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.33396 -0.03743  0.02279  0.04754  0.62946

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.67377    0.06777   9.942   <2e-16 ***
X           -1.58536    0.02240 -70.772   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1299 on 99 degrees of freedom
Multiple R-squared: 0.9806,	Adjusted R-squared: 0.9804
F-statistic:  5009 on 1 and 99 DF,  p-value: < 2.2e-16

The slope here is somehow related to the tail index of the distribution. Consider some heavy-tailed distribution, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill03.gif, so that http://freakonometrics.hypotheses.org/files/2015/12/hill27.gif, where http://freakonometrics.hypotheses.org/files/2015/12/hill28.gif is some slowly varying function. Equivalently, there exists a slowly varying function http://freakonometrics.hypotheses.org/files/2015/12/hill29.gif such that http://freakonometrics.hypotheses.org/files/2015/12/hill30.gif. Then

http://freakonometrics.hypotheses.org/files/2015/12/hill33.gif

i.e. since a natural estimator for http://freakonometrics.hypotheses.org/files/2015/12/hill35.gif is the order statistic http://freakonometrics.hypotheses.org/files/2015/12/hill36.gif, the slope of the straight line is the opposite of tail index http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif. The estimator of the slope is (considering only the http://freakonometrics.hypotheses.org/files/2015/12/hill99.gif largest observations)

http://freakonometrics.hypotheses.org/files/2015/12/hill39.gif

Hill‘s estimator is based on the assumption that the denominator above is almost 1 (which means that  http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif, as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif), i.e.

http://freakonometrics.hypotheses.org/files/2015/12/hill02.gif

Note that, if http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, but not too fast, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill12.gif (one can even get http://freakonometrics.hypotheses.org/files/2015/12/hill11.gif with stronger convergence assumptions). Further

http://freakonometrics.hypotheses.org/files/2015/12/hill04.gif

Based on that (asymptotic) distribution, it is possible to get an (asymptotic) confidence interval for http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif

> xi=1/(1:n)*cumsum(logXs)-logXs
> xise=1.96/sqrt(1:n)*xi
> plot(1:n,xi,type="l",ylim=range(c(xi+xise,xi-xise)),
+ xlab="",ylab="",)
> polygon(c(1:n,n:1),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(1:n,xi+xise,col="red",lwd=1.5)
> lines(1:n,xi-xise,col="red",lwd=1.5)
> lines(1:n,xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to work with http://freakonometrics.hypotheses.org/files/2015/12/hill06.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill05.gif. And similarly http://freakonometrics.hypotheses.org/files/2015/12/hill13.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif (and again http://freakonometrics.hypotheses.org/files/2015/12/hill10.gif with additional assumptions on the rate of convergence), and

http://freakonometrics.hypotheses.org/files/2015/12/hill09.gif

(obtained using the delta-method). Again, we can use that result to derive (asymptotic) confidence intervals

> alpha=1/xi
> alphase=1.96/sqrt(1:n)/xi
> YL=c(0,3)
> plot(1:n,alpha,type="l",ylim=YL,xlab="",ylab="",)
> polygon(c(1:n,n:1),c(alpha+alphase,rev(alpha-alphase)),
+ border=NA,col="lightblue")
> lines(1:n,alpha+alphase,col="red",lwd=1.5)
> lines(1:n,alpha-alphase,col="red",lwd=1.5)
> lines(1:n,alpha,lwd=1.5)
> abline(h=0,col="grey")

The Dekkers-Einmahl-de Haan estimator is

http://freakonometrics.hypotheses.org/files/2015/12/hill25.gif

where for

http://freakonometrics.hypotheses.org/files/2015/12/hill21.gif

Then (given again conditions on the speed of convergence i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, with http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif),

http://freakonometrics.hypotheses.org/files/2015/12/hill42.gif
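Since no code was given above for that estimator, here is a possible sketch (assuming the standard moment-estimator form, based on the first two empirical moments of the log-spacings above the (k+1)-th largest observation),

> Xd=rev(sort(X))
> DEdH=function(k){
+ l=log(Xd[1:k])-log(Xd[k+1])
+ M1=mean(l); M2=mean(l^2)
+ M1+1-.5/(1-M1^2/M2) }
> plot(2:500,Vectorize(DEdH)(2:500),type="l",
+ xlab="",ylab="",ylim=c(0,1))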

Finally, consider Pickands‘ estimator

http://freakonometrics.hypotheses.org/files/2015/12/hill26.gif

It is possible to prove that, as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif,

http://freakonometrics.hypotheses.org/files/2015/12/hill41.gif

Here the code is

> Xs=rev(sort(X))
> xi=1/log(2)*log( (Xs[seq(1,length=trunc(n/4),by=1)]-
+ Xs[seq(2,length=trunc(n/4),by=2)])/
+ (Xs[seq(2,length=trunc(n/4),by=2)]-Xs[seq(4,
+ length=trunc(n/4),by=4)]) )
> xise=1.96/sqrt(seq(1,length=trunc(n/4),by=1))*
+sqrt( xi^2*(2^(xi+1)+1)/((2*(2^xi-1)*log(2))^2))
> plot(seq(1,length=trunc(n/4),by=1),xi,type="l",
+ ylim=c(0,3),xlab="",ylab="",)
> polygon(c(seq(1,length=trunc(n/4),by=1),rev(seq(1,
+ length=trunc(n/4),by=1))),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(seq(1,length=trunc(n/4),by=1),
+ xi+xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),
+ xi-xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to use maximum likelihood techniques to fit a GPD distribution over a high threshold.

> library(evd)
> library(evir)
> gpd(X,5)
$n
[1] 2167

$threshold
[1] 5

$p.less.thresh
[1] 0.8827873

$n.exceed
[1] 254

$method
[1] "ml"

$par.ests
xi      beta
0.6320499 3.8074817

$par.ses
xi      beta
0.1117143 0.4637270

$varcov
[,1]        [,2]
[1,]  0.01248007 -0.03203283
[2,] -0.03203283  0.21504269

$information
[1] "observed"

$converged
[1] 0

$nllh.final
[1] 754.1115

attr(,"class")
[1] "gpd"

or equivalently (or almost)

> gpd.fit(X,5)
$threshold
[1] 5

$nexc
[1] 254

$conv
[1] 0

$nllh
[1] 754.1115

$mle
[1] 3.8078632 0.6315749

$rate
[1] 0.1172127

$se
[1] 0.4636270 0.1116136

The interest of the latter function is that it is possible to visualize the profile likelihood of the tail index,

> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

or

> gpd.profxi(gpd.fit(X,20),xlow=0,xup=3)

Hence, it is possible to plot the maximum likelihood estimator of the tail index, as a function of the threshold (including a confidence interval),

> GPDE=Vectorize(function(u){gpd(X,u)$par.ests[1]})
> GPDS=Vectorize(function(u){
+ gpd(X,u)$par.ses[1]})
> u=c(seq(2,10,by=.5),seq(11,25))
> XI=GPDE(u)
> XIS=GPDS(u)
> plot(u,XI,ylim=c(0,2))
> segments(u,XI-1.96*XIS,u,XI+
+ 1.96*XIS,lwd=2,col="red")

Finally, it is possible to use block-maxima techniques.

> gev.fit(X)
$conv
[1] 0

$nllh
[1] 3392.418

$mle
[1] 1.4833484 0.5930190 0.9168128

$se
[1] 0.01507776 0.01866719 0.03035380

The estimator of the tail index is here the last coefficient, on the right.
Since it is rather difficult to install packages in classrooms, here is the source of the R code used here (to fit a GPD on exceedances),

> source("http://freakonometrics.blog.free.fr/public/code/gpd.R")

Next time, we will discuss how to use those estimators.

MAT8886 Extremes and sums (of i.i.d. random variables)

Yesterday, we briefly discussed sums and maxima of i.i.d. random variables using the concept of subexponential distributions. Today, we will introduce the concept of regular variation: a positive function is said to be regularly varying (at infinity), denoted http://freakonometrics.blog.free.fr/public/perso5/subexp-30.gif, for some http://freakonometrics.blog.free.fr/public/perso5/subexp-31.gif, if

http://freakonometrics.blog.free.fr/public/perso5/subexp-33.gif
for all http://freakonometrics.blog.free.fr/public/perso5/subexo_34.gif. And this concept can be related to sums and maxima (see Section 6.2.6 in Embrechts et al. (1997)). Consider i.i.d. positive random variables http://freakonometrics.blog.free.fr/public/perso5/subsexp-01.gif: let http://freakonometrics.blog.free.fr/public/perso5/subexp-2.gif and http://freakonometrics.blog.free.fr/public/perso5/subexp-3.gif. Then it can be shown easily that

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-20.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-10.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif for some http://freakonometrics.blog.free.fr/public/perso5/subexp-23.gif if and only if there exists a non-degenerate variable http://freakonometrics.blog.free.fr/public/perso5/Z.gif such that

http://freakonometrics.blog.free.fr/public/perso5/subexp-13.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif with http://freakonometrics.blog.free.fr/public/perso5/subexp-22.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-14.gif
Even if it is not that simple to check such convergences analytically, it is still possible to use graphs to study the behavior of the empirical version of those quantities. Consider the following function to visualize the convergence of the empirical ratios,

CONVERGENCE=function(g,p=1,n=500000){
set.seed(1)
X=g(n);X1=g(n);X2=g(n);X3= g(n);X4=g(n)
Tp =cummax(X^p)/cumsum(X^p)
Tp1=cummax(X1^p)/cumsum(X1^p)
Tp2=cummax(X2^p)/cumsum(X2^p)
Tp3=cummax(X3^p)/cumsum(X3^p)
Tp4=cummax(X4^p)/cumsum(X4^p)
plot(Tp4,type="l",ylim=c(0,1),log="x",
xlim=c(100,n),ylab="",col="light blue",xlab="")
lines(Tp1,col="light green")
lines(Tp2,col="yellow")
lines(Tp3,col="pink")
lines(Tp,lwd=2)
abline(h=0:1,col="red",lty=2)
}

or the following to study the “asymptotic” distribution of the ratio on simulated samples

LIMITDIST=function(g,p=1,n=500000,ns=1000){
set.seed(1)
T=rep(NA,ns)
for(i in 1:ns){
X=g(n)
T[i]=max(X^p)/sum(X^p)
}
hist(T,breaks=seq(0,1,by=.05),probability=TRUE,
col="light green",ylab="",xlab="",main="")
}

In the case of exponentially distributed variables, we have

CONVERGENCE(rexp)

For variables with a lognormal distribution,

CONVERGENCE(rlnorm)

And finally, consider the case of a Pareto distribution

rpareto=function(n){runif(n)^(-1/1.5)-1}
CONVERGENCE(rpareto)

Here, it looks like those three distributions have a finite mean (and actually, they do). To go one step further, for http://freakonometrics.blog.free.fr/public/perso5/subexp00.gif, define http://freakonometrics.blog.free.fr/public/perso5/suuuuuubexp.gif and http://freakonometrics.blog.free.fr/public/perso5/subexp-5.gif. Then analogous results can be derived,

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-99.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-11.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif for some http://freakonometrics.blog.free.fr/public/perso5/subexp-25.gif if and only if there exists a non-degenerate variable http://freakonometrics.blog.free.fr/public/perso5/Zk.gif such that

http://freakonometrics.blog.free.fr/public/perso5/subexp-12.gif

  • http://freakonometrics.blog.free.fr/public/perso5/subexp-21.gif with http://freakonometrics.blog.free.fr/public/perso5/subexp-22.gif if and only if

http://freakonometrics.blog.free.fr/public/perso5/subexp-15.gif
Again, it is possible to use the function defined above,

CONVERGENCE(rexp,p=2)

or

CONVERGENCE(rexp,p=3)

or even

CONVERGENCE(rexp,p=10)

If the power is not too high, it looks like the ratio goes to zero. But when it becomes larger, it looks like more simulations might be necessary to say something relevant.

CONVERGENCE(rlnorm,p=2)

or

CONVERGENCE(rlnorm,p=3)

Here also, it looks like we have a light tailed distribution (and actually, it is the case). And finally, if we consider the case of a Pareto distribution

CONVERGENCE(rpareto,p=2)

Then it looks like it is a heavy-tailed distribution. In order to get a better understanding, plot the distribution of the ratio obtained from 1,000 simulated samples (of size 500,000),

LIMITDIST(rpareto,p=1)

versus

LIMITDIST(rpareto,p=2)

So obviously, something is going on between 1 and 2 (recall that the power parameter of the Pareto distribution is 1.5).
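To look at the boundary case itself, it is also possible to consider the ratio for a power equal to the tail index (simply reusing the function above),

LIMITDIST(rpareto,p=1.5)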

Fisher-Tippett theorem with an historical perspective

A couple of weeks ago, Rafael asked me if I had something on the history of extreme value theory. Since I will get back to fundamental results about extremes in my course, I promised I will write down a short post on all that issue.

To start from the beginning, in 1928, Ronald Fisher and Leonard Tippett formulated the three types of limiting distributions for the maximum term of a random sample (Fisher & Tippett (1928)). The problem was to characterize the functions http://freakonometrics.hypotheses.org/files/2015/12/ext-01.gif such that

http://freakonometrics.hypotheses.org/files/2015/12/ext-2.gif

where http://freakonometrics.hypotheses.org/files/2015/12/ext-3.gif, the http://freakonometrics.hypotheses.org/files/2015/12/ext-4.gif‘s being i.i.d. with cumulative distribution function http://freakonometrics.hypotheses.org/files/2015/12/ext-5.gif. They had supporting arguments, but no (rigorous) proof. Nevertheless, they obtained that the only possible types for G were

http://freakonometrics.hypotheses.org/files/2015/12/ext-6.gif

i.e. Fréchet type (Pareto-type tails), or

http://freakonometrics.hypotheses.org/files/2015/12/ext-7.gif

i.e. Weibull type (bounded distribution type), or

http://freakonometrics.hypotheses.org/files/2015/12/ext-8.gif

i.e. Gumbel type (exponential-type tails). Emil Gumbel intensively used the so-called Gumbel distribution to model river flows since, as he explained in 1958, “it seems that the rivers know the theory. It only remains to convince the engineers of the validity of this analysis“.
Independently of that work (published in 1928), Maurice Fréchet considered in 1927 (in Sur la loi de probabilité de l’écart maximum) possible limits of

http://freakonometrics.hypotheses.org/files/2015/12/ext-9.gif

and obtained only http://freakonometrics.hypotheses.org/files/2015/12/ext-10.gif as a possible limit. Richard von Mises gave in 1936 sufficient, but not necessary, conditions for their (max) domain of attraction, i.e. a characterization of the functions http://freakonometrics.hypotheses.org/files/2015/12/ext-11.gif such that the maxima converge to some specific function http://freakonometrics.hypotheses.org/files/2015/12/ext-01.gif (von Mises (1936)). E.g. he noticed that a sufficient condition on http://freakonometrics.hypotheses.org/files/2015/12/ext-11.gif to be in the (max) domain of attraction of the Gumbel distribution is that

http://freakonometrics.hypotheses.org/files/2015/12/ext-13.gif

Then in 1943, Boris Gnedenko gave a complete characterization of those three types, with a complete characterization for two of them (heavy tails, i.e. Fréchet type, and bounded support, i.e. Weibull), but his necessary and sufficient condition was based on a function that was not explicitly defined (see Gnedenko (1943)). Laurens de Haan derived, in the 70’s, checkable conditions for Gumbel’s type.
Boris Gnedenko proved (in Section 4 of his paper) that F is in the (max) domain of attraction of http://freakonometrics.hypotheses.org/files/2015/12/ext-10.gif if and only if http://freakonometrics.hypotheses.org/files/2015/12/ext-16.gif is regularly varying at infinity, with index http://freakonometrics.hypotheses.org/files/2015/12/ext-17.gif (even if the term “regular variation” was not mentioned in the paper). Similar results were derived to characterize functions in the (max) domain of attraction of Weibull. For the (max) domain of attraction of http://freakonometrics.hypotheses.org/files/2015/12/ext-18.gif, Boris Gnedenko obtained that a necessary and sufficient condition was that there exists a function http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif such that http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif goes to 0 at infinity and

http://freakonometrics.hypotheses.org/files/2015/12/ext-20.gif

Several papers have discussed what the function http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif could be, e.g. David Mejzler in 1949 (in Russian, but see also his 1965 paper), and Laurens de Haan in 1970 and 1971 (following the dramatic flood in the Netherlands in 1953, researchers in the Netherlands focused on dikes, and extreme value applications).

Mejzler’s idea was to work on quantiles, and not on the cumulative distribution function. I.e. define

http://freakonometrics.hypotheses.org/files/2015/12/ext-21.gif

Then a necessary and sufficient condition for F to be in the (max) domain of attraction of http://freakonometrics.hypotheses.org/files/2015/12/ext-18.gif is that

http://freakonometrics.hypotheses.org/files/2015/12/ext-23.gif

Laurens de Haan proved in 1971 that function http://freakonometrics.hypotheses.org/files/2015/12/ext-19.gif can be – in general – given by

http://freakonometrics.hypotheses.org/files/2015/12/ext-25.gif

And in 1976, Laurens de Haan obtained a three-type convergence working on quantile function http://freakonometrics.hypotheses.org/files/2015/12/ext-26.gif (with a much shorter proof).
There have been many, many papers extending the Fisher-Tippett theorem, e.g. on non-independent sequences, like exchangeable ones (in a paper by Simeon Berman in 1962), or on stationary Gaussian sequences (in 1964).