Category Archives: Copulas

Bounding sums of random variables, part 1

For the last course, MAT8886, of this (long) winter session, on copulas (and extremes), we will discuss risk aggregation. The course will mainly be on the problem of bounding the distribution (or some risk measure, say the Value-at-Risk) of the sum of two random variables with given marginal distributions. For instance, suppose we have two Gaussian risks. What could be the worst-case scenario for the 99% quantile of the sum? Note that I mention implications in terms of risk management, but of course, those questions are also extremely important in terms of statistical inference, see e.g. Fan & Park (2006).

This problem is sometimes related to a question asked by Kolmogorov almost one hundred years ago, as mentioned in Makarov (1981). One year later, Rüschendorf (1982) also suggested a proof for the calculation of those bounds. Here, we focus on dimension 2. As usual, it is the simple case. But as mentioned recently in Kreinovich & Ferson (2005), in dimension 3 (or higher), “computing the best-possible bounds for arbitrary n is an NP-hard (computationally intractable) problem“. So let us focus on the case where we sum (only) two random variables (for those interested in higher dimensions, Puccetti & Rüschendorf (2012) provided interesting results for a dual version of those optimal bounds).

Let https://latex.codecogs.com/gif.latex?\Delta denote the set of univariate continuous distribution functions, left-continuous, on https://latex.codecogs.com/gif.latex?\mathbb{R}. And https://latex.codecogs.com/gif.latex?\Delta^+ the set of distributions on https://latex.codecogs.com/gif.latex?\mathbb{R}^+. Thus, https://latex.codecogs.com/gif.latex?F\in\Delta^+ if https://latex.codecogs.com/gif.latex?F\in\Delta and https://latex.codecogs.com/gif.latex?F(0)=0. Consider now two distributions https://latex.codecogs.com/gif.latex?F,G\in\Delta^+. In a very general setting, it is possible to consider operators on https://latex.codecogs.com/gif.latex?\Delta^+\times%20\Delta^+. Thus, let https://latex.codecogs.com/gif.latex?T:[0,1]\times[0,1]\rightarrow[0,1] denote an operator, increasing in each component, such that https://latex.codecogs.com/gif.latex?T(1,1)=1. And consider some function https://latex.codecogs.com/gif.latex?L:\mathbb{R}^+\times\mathbb{R}^+\rightarrow\mathbb{R}^+, also assumed to be increasing in each component (and continuous). For such functions https://latex.codecogs.com/gif.latex?T and https://latex.codecogs.com/gif.latex?L, define the following (general) operator, https://latex.codecogs.com/gif.latex?\tau_{T,L}(F,G), as

https://latex.codecogs.com/gif.latex?\tau_{T,L}(F,G)(x)=\sup_{L(u,v)=x}\{T(F(u),G(v))\}

One interesting case can be obtained when https://latex.codecogs.com/gif.latex?T is a copula, https://latex.codecogs.com/gif.latex?C. In that case,

https://latex.codecogs.com/gif.latex?\tau_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

and further, it is possible to write

https://latex.codecogs.com/gif.latex?\tau_{C,L}(F,G)(x)=\sup_{(u,v)\in%20L^{-1}(x)}\{C(F(u),G(v))\}

It is also possible to consider other (general) operators, e.g. based on the sum

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G)(x)=\int_{(u,v)\in%20L^{-1}(x)}%20dC(F(u),G(v))

or on the minimum,

https://latex.codecogs.com/gif.latex?\rho_{C,L}(F,G)(x)=\inf_{(u,v)\in%20L^{-1}(x)}\{C^\star(F(u),G(v))\}

where https://latex.codecogs.com/gif.latex?C^\star is the survival copula associated with https://latex.codecogs.com/gif.latex?C, i.e. https://latex.codecogs.com/gif.latex?C^\star(u,v)=u+v-C(u,v). Note that those operators can be used to define distribution functions, i.e.

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

and similarly

https://latex.codecogs.com/gif.latex?\rho_{C,L}(F,G):\Delta^+\times\Delta^+\rightarrow\Delta^+

All that seems too theoretical? An application can be the case of the sum, i.e. https://latex.codecogs.com/gif.latex?L(x,y)=x+y; in that case https://latex.codecogs.com/gif.latex?\sigma_{C,+}(F,G) is the distribution of the sum of two random variables with marginal distributions https://latex.codecogs.com/gif.latex?F and https://latex.codecogs.com/gif.latex?G, and copula https://latex.codecogs.com/gif.latex?C. Thus, https://latex.codecogs.com/gif.latex?\sigma_{C^\perp,+}(F,G) is simply the convolution of two distributions,

https://latex.codecogs.com/gif.latex?\sigma_{C^\perp,+}(F,G)(x)=\int_{u+v=x}%20dC^\perp(F(u),G(v))
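As a quick numerical check of that last statement, the code below compares the convolution of two lognormal distributions (the margins used later in this post) with a Monte Carlo approximation of the distribution of the sum of two independent lognormal risks; a minimal sketch, where the names F, f, Hconv and S are mine,

> F=function(x) plnorm(x,0,1)
> f=function(x) dlnorm(x,0,1)
> Hconv=function(x) integrate(function(u) f(u)*F(x-u),
+ lower=0,upper=x)$value
> set.seed(1)
> S=rlnorm(1e5,0,1)+rlnorm(1e5,0,1)
> c(Hconv(3),mean(S<=3))   # the two values should be close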

The important result (that can be found in Chapter 7 of Schweizer and Sklar (1983)) is that, given an operator https://latex.codecogs.com/gif.latex?L, then, for any copula https://latex.codecogs.com/gif.latex?C, one can find a lower bound for https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G),

https://latex.codecogs.com/gif.latex?\tau_{C^-,L}(F,G)\leq%20\tau_{C,L}(F,G)\leq\sigma_{C,L}(F,G)

as well as an upper bound

https://latex.codecogs.com/gif.latex?\sigma_{C,L}(F,G)\leq%20\rho_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)

Those inequalities come from the fact that https://latex.codecogs.com/gif.latex?C\geq%20C^- for any copula https://latex.codecogs.com/gif.latex?C, where https://latex.codecogs.com/gif.latex?C^- (the lower Fréchet-Hoeffding bound) is itself a copula in dimension 2. Since that function is no longer a copula in higher dimensions, one can easily imagine that getting those bounds in higher dimensions will be much more complicated…

In the case of the sum of two random variables, with marginal distributions https://latex.codecogs.com/gif.latex?F and https://latex.codecogs.com/gif.latex?G, bounds for the distribution of the sum https://latex.codecogs.com/gif.latex?H(x)=\mathbb{P}(X+Y\leq%20x), where https://latex.codecogs.com/gif.latex?X\sim%20F and https://latex.codecogs.com/gif.latex?Y\sim%20G, can be written

https://latex.codecogs.com/gif.latex?H^-(x)=\tau_{C^-%20,+}(F,G)(x)=\sup_{u+v=x}\{%20\max\{F(u)+G(v)-1,0\}%20\}

for the lower bound, and

https://latex.codecogs.com/gif.latex?H^+(x)=\rho_{C^-%20,+}(F,G)(x)=\inf_{u+v=x}\{%20\min\{F(u)+G(v),1\}%20\}

for the upper bound. And those bounds are sharp, in the sense that, for all https://latex.codecogs.com/gif.latex?t\in(0,1), there is a copula https://latex.codecogs.com/gif.latex?C_t such that

https://latex.codecogs.com/gif.latex?\tau_{C_t,+}(F,G)(x)=\tau_{C^-%20,+}(F,G)(x)=t

and there is (another) copula https://latex.codecogs.com/gif.latex?C_t such that

https://latex.codecogs.com/gif.latex?\sigma_{C_t,+}(F,G)(x)=\tau_{C^-%20,+}(F,G)(x)=t

Thus, using those results, it is possible to bound cumulative distribution functions. But actually, all that can also be done on quantiles (see Frank, Nelsen & Schweizer (1987)). For all https://latex.codecogs.com/gif.latex?F\in\Delta^+, let https://latex.codecogs.com/gif.latex?F^{-1} denote its generalized inverse, left-continuous, and let https://latex.codecogs.com/gif.latex?\nabla^+ denote the set of those quantile functions. Define then the dual versions of our operators,

https://latex.codecogs.com/gif.latex?\tau^{-1}_{T,L}(F^{-1},G^{-1})(x)=\inf_{(u,v)\in%20T^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}

and

https://latex.codecogs.com/gif.latex?\rho^{-1}_{T,L}(F^{-1},G^{-1})(x)=\sup_{(u,v)\in%20T^\star^{-1}(x)}\{L(F^{-1}(u),G^{-1}(v))\}

Those definitions are really dual versions of the previous ones, in the sense that https://latex.codecogs.com/gif.latex?\tau^{-1}_{T,L}(F^{-1},G^{-1})=[\tau_{T,L}(F,G)]^{-1} and https://latex.codecogs.com/gif.latex?\rho^{-1}_{T,L}(F^{-1},G^{-1})=[\rho_{T,L}(F,G)]^{-1}.

Note that if we focus on sums of bivariate distributions, the lower bound for the quantile of the sum is

https://latex.codecogs.com/gif.latex?\tau^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\inf_{\max\{u+v-1,0\}=x}\{F^{-1}(u)+G^{-1}(v)\}

while the upper bound is

https://latex.codecogs.com/gif.latex?\rho^{-1}_{C^{-},+}(F^{-1},G^{-1})(x)=\sup_{\min\{u+v,1\}=x}\{F^{-1}(u)+G^{-1}(v)\}

A great thing is that it should not be too difficult to compute those quantities numerically. It might be a little bit harder for cumulative distribution functions, since they are not defined on a bounded support. But still, the goal here is simply to plot those bounds. The code is the following, for the sum of two lognormal distributions.

> F=function(x) plnorm(x,0,1)
> G=function(x) plnorm(x,0,1)
> n=100
> X=seq(0,10,by=.05)
> Hinf=Hsup=rep(NA,length(X))
> for(i in 1:length(X)){
+ x=X[i]
+ U=seq(0,x,by=1/n); V=x-U
+ Hinf[i]=max(pmax(F(U)+G(V)-1,0))   # lower bound for P(X+Y<=x)
+ Hsup[i]=min(pmin(F(U)+G(V),1))}    # upper bound for P(X+Y<=x)

If we plot those bounds, we obtain

> plot(X,Hinf,ylim=c(0,1),type="s",col="red")
> lines(X,Hsup,type="s",col="red")

But somehow, it is even simpler to work with quantiles, since they are defined on a bounded support. Quantiles are here

> Finv=function(u) qlnorm(u,0,1)
> Ginv=function(u) qlnorm(u,0,1)

The idea will be to consider a discretized version of the unit interval, as discussed in Williamson (1989), in a much more general setting. Again, the idea is to compute, for instance,

https://latex.codecogs.com/gif.latex?\sup_{u\in[0,x]}\{F^{-1}(u)+G^{-1}(x-u)\}

The idea is to consider https://latex.codecogs.com/gif.latex?x=i/n and https://latex.codecogs.com/gif.latex?u=j/n, and the bound for the quantile function at point https://latex.codecogs.com/gif.latex?i/n is then

https://latex.codecogs.com/gif.latex?\sup_{j\in\{0,1,\cdots,i\}}\left\{F^{-1}\left(\frac{j}{n}\right)+G^{-1}\left(\frac{i-j}{n}\right)\right\}

The code to compute those bounds, for a given https://latex.codecogs.com/gif.latex?n is here

> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))         # lower bound for the quantile at i/n
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n)) # upper bound for the quantile at i/n
+ }

Here we have the following (several values of https://latex.codecogs.com/gif.latex?n were considered, so that we can visualize the convergence of that numerical algorithm),

Here, we have a simple code to visualize bounds for quantiles for the sum of two risks. But it is possible to go further…
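To come back to the question raised in the introduction (the worst-case 99% quantile for the sum of two standard Gaussian risks), the same discretization can be reused, since those quantile bounds are valid beyond the positive-support setting considered above (see Makarov (1981)); a minimal sketch,

> Finv=function(u) qnorm(u,0,1)
> Ginv=function(u) qnorm(u,0,1)
> n=1000
> i=round(.99*n)
> J=(i-1):(n-1)
> min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))   # upper bound for the 99% quantile of the sum
> 2*qnorm(.99)                           # comonotonic case, for comparison

The upper bound is strictly larger than the comonotonic quantile, which is precisely the point of those worst-case bounds.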

(nonparametric) copula density estimation

Today, we will go further on the inference of copula functions. Some codes (and references) can be found in a previous post, on nonparametric estimators of copula densities (among other related things). Consider (as before) the loss-ALAE dataset (since we’ve been working a lot on that dataset),

> library(MASS)
> library(evd)
> X=lossalae
> U=cbind(rank(X[,1])/(nrow(X)+1),rank(X[,2])/(nrow(X)+1))

The standard tool to plot nonparametric estimators of densities is to use multivariate kernels. We can look at the density using

> mat1=kde2d(U[,1],U[,2],n=35)
> persp(mat1$x,mat1$y,mat1$z,col="green",
+ shade=TRUE,theta=30,
+ xlab="",ylab="",zlab="",zlim=c(0,7))

or level curves (isodensity curves) with more detailed estimators (on grids with shorter steps)

> mat1=kde2d(U[,1],U[,2],n=101)
> image(mat1$x,mat1$y,mat1$z,col=
+ rev(heat.colors(100)),xlab="",ylab="")
> contour(mat1$x,mat1$y,mat1$z,add=
+ TRUE,levels = pretty(c(0,4), 11))

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est1.gif

Kernels are nice, but we clearly observe some border bias, which is extremely strong in the corners (the estimator is one fourth of what it should be, see another post for more details). Instead of working on the sample https://latex.codecogs.com/gif.latex?(U_i,V_i) on the unit square, consider some transformed sample https://latex.codecogs.com/gif.latex?(Q(U_i),Q(V_i)), where https://latex.codecogs.com/gif.latex?Q:(0,1)\rightarrow\mathbb{R} is a given function, e.g. the quantile function of an unbounded distribution, for instance the quantile function of the https://latex.codecogs.com/gif.latex?\mathcal{N}(0,1) distribution. Then, we can estimate the density of the transformed sample and, using the inversion technique, derive an estimator of the density of the initial sample. Since the inverse of a (general) function is not that simple to compute, the code might be a bit slow. But it does work,

> gaussian.kernel.copula.surface <- function (u,v,n) {
+   s=seq(1/(n+1), length=n, by=1/(n+1))
+   mat=matrix(NA,nrow = n, ncol = n)
+ # kernel density estimation, on the Gaussian scale
+ sur=kde2d(qnorm(u),qnorm(v),n=1000,
+ lims = c(-4, 4, -4, 4))
+ su<-sur$z
+ for (i in 1:n) {
+     for (j in 1:n) {
+ # locate the grid cell, and divide by the Jacobian of the transformation
+ 	Xi<-round((qnorm(s[i])+4)*1000/8)+1;
+ 	Yj<-round((qnorm(s[j])+4)*1000/8)+1
+ 	mat[i,j]<-su[Xi,Yj]/(dnorm(qnorm(s[i]))*
+ 	dnorm(qnorm(s[j])))
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }
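The function can then be used just like kde2d above; a minimal usage sketch (the grid size n=35 and the viewing angle are arbitrary choices of mine),

> mat2=gaussian.kernel.copula.surface(U[,1],U[,2],n=35)
> persp(mat2$x,mat2$y,mat2$z,col="green",
+ shade=TRUE,theta=30,
+ xlab="",ylab="",zlab="")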

Here, we get

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est2.gif

Note that it is possible to consider another transformation, e.g. the quantile function of a Student-t distribution.

> student.kernel.copula.surface =
+  function (u,v,n,d=4) {
+  s <- seq(1/(n+1), length=n, by=1/(n+1))
+  mat <- matrix(NA,nrow = n, ncol = n)
+ sur<-kde2d(qt(u,df=d),qt(v,df=d),n=5000,
+ lims = c(-8, 8, -8, 8))
+ su<-sur$z
+ for (i in 1:n) {
+     for (j in 1:n) {
+ 	Xi<-round((qt(s[i],df=d)+8)*5000/16)+1;
+ 	Yj<-round((qt(s[j],df=d)+8)*5000/16)+1
+ 	mat[i,j]<-su[Xi,Yj]/(dt(qt(s[i],df=d),df=d)*
+ 	dt(qt(s[j],df=d),df=d))
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }

Another strategy is to consider kernels that have precisely the unit interval as support. The idea here is to consider the product of Beta kernels, where the parameters depend on the location,

> beta.kernel.copula.surface=
+  function (u,v,bx=.025,by=.025,n) {
+  s <- seq(1/(n+1), length=n, by=1/(n+1))
+  mat <- matrix(0,nrow = n, ncol = n)
+ for (i in 1:n) {
+     a <- s[i]
+     for (j in 1:n) {
+     b <- s[j]
+ 	mat[i,j] <- sum(dbeta(a,u/bx,(1-u)/bx) *
+     dbeta(b,v/by,(1-v)/by)) / length(u)
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }
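Again, a minimal usage sketch (the bandwidths bx and by, and the grid size, are arbitrary choices of mine),

> mat3=beta.kernel.copula.surface(U[,1],U[,2],bx=.025,by=.025,n=35)
> persp(mat3$x,mat3$y,mat3$z,col="green",
+ shade=TRUE,theta=30,
+ xlab="",ylab="",zlab="")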

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est3.gif

On those two graphs, we can clearly observe strong tail dependence in the upper (right) corner, which could not be intuited using a standard kernel estimator…

Copulas and tail dependence, part 3

We have seen extreme value copulas in the section where we considered general families of copulas. In the bivariate case, an extreme value copula can be written
http://freakonometrics.hypotheses.org/files/2016/05/CFG5.gif
where https://latex.codecogs.com/gif.latex?A(\cdot) is the Pickands dependence function, which is a convex function satisfying
http://freakonometrics.hypotheses.org/files/2016/05/CFG11.gif
Observe that in this case,
http://freakonometrics.hypotheses.org/files/2016/05/CFG12.gif where https://latex.codecogs.com/gif.latex?\tau is Kendall’s tau, which can be written
http://freakonometrics.hypotheses.org/files/2016/05/CFG13.gif For instance, if
http://freakonometrics.hypotheses.org/files/2016/05/CFG15.gif then we obtain the Gumbel copula. This is what we saw in the section where we introduced this family. Now, let us talk about (nonparametric) inference, and more precisely about the estimation of the dependence function. The starting point of the most standard estimator is to observe that if https://latex.codecogs.com/gif.latex?(U,V) has copula https://latex.codecogs.com/gif.latex?C, then
http://freakonometrics.hypotheses.org/files/2016/05/CFG3.gif has distribution function
http://freakonometrics.hypotheses.org/files/2016/05/CFG2.gif And conversely, the Pickands dependence function can be written
http://freakonometrics.hypotheses.org/files/2016/05/CFG7.gif
Thus, a natural estimator for Pickands function is
http://freakonometrics.hypotheses.org/files/2016/05/CFG9.gif
where https://latex.codecogs.com/gif.latex?\widehat{H}_n is the empirical cumulative distribution function of
http://freakonometrics.hypotheses.org/files/2016/05/cfg1.gif This is the estimator proposed in Capéraà, Fougères & Genest (1997). We can compute everything here using

> library(evd)
> X=lossalae
> U=cbind(rank(X[,1])/(nrow(X)+1),rank(X[,2])/
+ (nrow(X)+1))
> Z=log(U[,1])/log(U[,1]*U[,2])
> h=function(t) mean(Z<=t)
> H=Vectorize(h)
> a=function(t){
+ f=function(t) (H(t)-t)/(t*(1-t))
+ return(exp(integrate(f,lower=0,upper=t,
+ subdivisions=10000)$value))
+ }
> A=Vectorize(a)
> u=seq(.01,.99,by=.01)
> plot(c(0,u,1),c(1,A(u),1),type="l",col="red",
+ ylim=c(.5,1))

The code above computes the empirical distribution function of those pseudo-observations and integrates it, to get an estimator of Pickands’ dependence function, plotted above. Note that an interesting point is that the upper tail dependence index can be visualized on that graph,

> A(.5)/2
[1] 0.4055346

Copulas and tail dependence, part 2

An alternative way to describe tail dependence can be found in Ledford & Tawn (1996), for instance. The intuition behind it can be found in Fischer & Klein (2007). Assume that X and Y have the same distribution. Now, if we assume that those variables are (strictly) independent,

\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)\cdot\mathbb{P}(Y>t)=\mathbb{P}(X>t)^2

But if we assume that those variables are (strictly) comonotonic (i.e. equal here, since they have the same distribution), then

\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)

So assume that there is a https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-6.2.php.png such that
Then https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-6.2.php.png=2 can be interpreted as independence while https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-6.2.php.png=1 means strong (perfect) positive dependence. Thus, consider the following transformation to get a parameter in [0,1], with the strength of dependence increasing with the index, e.g.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-8.2.php.png

In order to derive a tail dependence index, assume that this transformed index admits a limit in the tails, which will be interpreted as a (weak) tail dependence index. Thus, define the concentration functions

L(z)=\frac{2\log \mathbb{P}(U\leq z)}{\log \mathbb{P}(U\leq z,V\leq z)}-1

for the lower tail (on the left), and

R(z)=\frac{2\log \mathbb{P}(U>1-z)}{\log \mathbb{P}(U>1-z,V>1-z)}-1

for the upper tail (on the right), where (U,V) denotes the pair on the copula scale (i.e. with uniform margins). The R code to compute those functions is quite simple,
> library(evd); 
> data(lossalae)
> X=lossalae
> U=rank(X[,1])/(nrow(X)+1)
> V=rank(X[,2])/(nrow(X)+1)
> fL2emp=function(z) 2*log(mean(U<=z))/
+ log(mean((U<=z)&(V<=z)))-1
> fR2emp=function(z) 2*log(mean(U>=1-z))/
+ log(mean((U>=1-z)&(V>=1-z)))-1
> u=seq(.001,.5,by=.001)
> L=Vectorize(fL2emp)(u)
> R=Vectorize(fR2emp)(rev(u))
> plot(c(u,u+.5-u[1]),c(L,R),type="l",ylim=0:1,
+ xlab="LOWER TAIL      UPPER TAIL")
> abline(v=.5,col="grey")

and again, it is possible to plot those empirical functions against some parametric ones, e.g. the one obtained from a Gaussian copula (with the same Kendall’s tau)

> tau=cor(lossalae,method="kendall")[1,2]
> library(copula)
> paramgauss=sin(tau*pi/2)
> copgauss=normalCopula(paramgauss)
> Lgaussian=function(z) 2*log(z)/log(pCopula(c(z,z),
+ copgauss))-1
> Rgaussian=function(z) 2*log(1-z)/log(1-2*z+
+ pCopula(c(z,z),copgauss))-1
> u=seq(.001,.5,by=.001)
> Lgs=Vectorize(Lgaussian)(u)
> Rgs=Vectorize(Rgaussian)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgs,Rgs),col="red")

or Gumbel copula,

> paramgumbel=1/(1-tau)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)
> Lgumbel=function(z) 2*log(z)/log(pCopula(c(z,z),
+ copgumbel))-1
> Rgumbel=function(z) 2*log(1-z)/log(1-2*z+
+ pCopula(c(z,z),copgumbel))-1
> Lgl=Vectorize(Lgumbel)(u)
> Rgl=Vectorize(Rgumbel)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgl,Rgl),col="blue")

Again, one should look more carefully at confidence bands, but it looks like the Gumbel copula provides a good fit here.

Copulas and tail dependence, part 1

As mentioned in the course last week, Venter (2003) suggested nice functions to illustrate tail dependence (see also some slides used in Berlin a few years ago).

  • Joe (1990)’s lambda

Joe (1990) suggested a (strong) tail dependence index. For lower tails, for instance, consider

http://freakonometrics.hypotheses.org/files/2017/07/toc3latex2png.2.php_.png

i.e.

http://freakonometrics.hypotheses.org/files/2017/07/toc3latex2png.3.php_.png
  • Upper and lower strong tail (empirical) dependence functions

The idea is to plot the function above, in order to visualize limiting behavior. Define

http://freakonometrics.hypotheses.org/files/2017/07/Llatex2png.2.php_.png

for the lower tail, and

http://freakonometrics.hypotheses.org/files/2017/07/Clatex2png.2.php_.png

for the upper tail, where http://freakonometrics.hypotheses.org/files/2017/07/toclatex2png-12.2.php_.png is the survival copula associated with http://freakonometrics.hypotheses.org/files/2017/07/toclatex2png-13.2.php_.png, in the sense that
http://freakonometrics.hypotheses.org/files/2017/07/toclatex2png-14.2.php_.png

while

http://freakonometrics.hypotheses.org/files/2017/07/toclatex2png-15.2.php_.png

Now, one can easily derive empirical counterparts of those functions, i.e.

http://freakonometrics.hypotheses.org/files/2017/07/toclatex2png-18.2.php_.png

and

http://freakonometrics.hypotheses.org/files/2017/07/toclatex2png-19.2.php_.png

Thus, for the upper tail, on the right, we have the following graph

http://freakonometrics.hypotheses.org/files/2017/07/upper-lambda.gif

and for the lower tail, on the left, we have

http://freakonometrics.hypotheses.org/files/2017/07/lower-lambda.gif

For the code, consider some real data, like the loss-ALAE dataset.

> library(evd)
> X=lossalae

The idea is to plot, on the left, the lower tail concentration function, and on the right, the upper tail function.

> U=rank(X[,1])/(nrow(X)+1)
> V=rank(X[,2])/(nrow(X)+1)
> Lemp=function(z) sum((U<=z)&(V<=z))/sum(U<=z)
> Remp=function(z) sum((U>=1-z)&(V>=1-z))/sum(U>=1-z)
> u=seq(.001,.5,by=.001)
> L=Vectorize(Lemp)(u)
> R=Vectorize(Remp)(rev(u))
> plot(c(u,u+.5-u[1]),c(L,R),type="l",ylim=0:1,
+ xlab="LOWER TAIL          UPPER TAIL")
> abline(v=.5,col="grey")

Now, we can compare this graph with what would be obtained with some parametric copulas having the same Kendall’s tau. For instance, if we consider a Gaussian copula,

> tau=cor(lossalae,method="kendall")[1,2]
> library(copula)
> paramgauss=sin(tau*pi/2)
> copgauss=normalCopula(paramgauss)
> Lgaussian=function(z) pCopula(c(z,z),copgauss)/z
> Rgaussian=function(z) (1-2*z+pCopula(c(z,z),copgauss))/(1-z)
> u=seq(.001,.5,by=.001)
> Lgs=Vectorize(Lgaussian)(u)
> Rgs=Vectorize(Rgaussian)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgs,Rgs),col="red")

or Gumbel’s copula,

> paramgumbel=1/(1-tau)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)
> Lgumbel=function(z) pCopula(c(z,z),copgumbel)/z
> Rgumbel=function(z) (1-2*z+pCopula(c(z,z),copgumbel))/(1-z)
> u=seq(.001,.5,by=.001)
> Lgl=Vectorize(Lgumbel)(u)
> Rgl=Vectorize(Rgumbel)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgl,Rgl),col="blue")

That’s nice (isn’t it?), but since we do not have any confidence interval, it is still hard to conclude (even if it looks like the Gumbel copula has a much better fit than the Gaussian one). A strategy can be to generate samples from those copulas, and to visualize what we would get. With a Gaussian copula, the graph looks like

> u=seq(.0025,.5,by=.0025); nu=length(u)
> nsimul=500
> MGS=matrix(NA,nsimul,2*nu)
> for(s in 1:nsimul){
+ Xs=rCopula(nrow(X),copgauss)
+ Us=rank(Xs[,1])/(nrow(Xs)+1)
+ Vs=rank(Xs[,2])/(nrow(Xs)+1)
+ Lemp=function(z) sum((Us<=z)&(Vs<=z))/sum(Us<=z)
+ Remp=function(z) sum((Us>=1-z)&(Vs>=1-z))/sum(Us>=1-z)
+ MGS[s,1:nu]=Vectorize(Lemp)(u)
+ MGS[s,(nu+1):(2*nu)]=Vectorize(Remp)(rev(u))
+ lines(c(u,u+.5-u[1]),MGS[s,],col="red")
+ }

(including – pointwise – 90% confidence bands)

> Q95=function(x) quantile(x,.95)
> V95=apply(MGS,2,Q95)
> lines(c(u,u+.5-u[1]),V95,col="red",lwd=2)
> Q05=function(x) quantile(x,.05)
> V05=apply(MGS,2,Q05)
> lines(c(u,u+.5-u[1]),V05,col="red",lwd=2)

while it is

with the Gumbel copula. Isn’t it a nice (graphical) tool?

But as mentioned in the course, the statistical convergence can be slow. Extremely slow. So assessing whether the underlying copula has tail dependence, or not, is not that simple. Especially if the copula exhibits tail independence. Like the Gaussian copula. Consider a sample of size 1,000. This is what we obtain if we generate random scenarios,

or if we look at the left tail (on a log-scale)

Now, consider a 10,000 sample,

or with a log-scale

We can even consider a 100,000 sample,

or with a log-scale

On those graphs, it is rather difficult to conclude if the limit is 0, or some strictly positive value (again, it is a classical statistical problem when the value of interest is at the border of the support of the parameter). So, a natural idea is to consider a weaker tail dependence index. Unless you have something like 100,000 observations…

Copulas estimation and influence of margins

Just a short post to get back to results mentioned at the end of the course. Since copulas are obtained using (univariate) quantile functions in the joint cumulative distribution function, they are – somehow – related to the marginal distributions fitted. In order to illustrate this point, consider an i.i.d. sample http://freakonometrics.blog.free.fr/public/perso6/cop-marg-01.gif from a Student-t distribution,

library(mnormt)
r=.5
n=200
X=rmt(n,mean=c(0,0),S=matrix(c(1,r,r,1),2,2),df=4)

Thus, the true copula is Student-t. Here, with 4 degrees of freedom. Note that we can easily get the (true) value of the copula, on the diagonal

dg=function(t) pmt(rep(qt(t,df=4),2),mean=c(0,0),
S=matrix(c(1,r,r,1),2,2),df=4)   # C(t,t) for the Student-t copula
DG=Vectorize(dg)
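For instance, to visualize that true diagonal (the grid below is an arbitrary choice),

u=seq(.01,.99,by=.01)
plot(u,DG(u),type="l")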

Four strategies are considered here to define the pseudo-observations (the copula-type sample),

  • misfit: consider an invalid marginal estimation: we have assumed that margins were Gaussian, i.e. http://freakonometrics.blog.free.fr/public/perso6/cop-marg-2.gif
  • perfect fit: here, we know that margins were Student-t, with 4 degrees of freedom http://freakonometrics.blog.free.fr/public/perso6/cop-marg-3.gif
  • standard fit: then, consider the case where we fit the marginal distributions, but within the right family this time (e.g. among Student-t distributions), http://freakonometrics.blog.free.fr/public/perso6/cop-marg-4.gif
  • ranks: finally, we consider nonparametric estimators for marginal distributions, http://freakonometrics.blog.free.fr/public/perso6/cop-marg-10.gif

Now that we have a sample with margins on the unit interval, let us construct the empirical copula,

http://freakonometrics.blog.free.fr/public/perso6/cop-marg-6.gif
Let us now compare those four approaches.

  • The first one is to illustrate model error, i.e. what’s going on if we fit distributions, but not in the proper family of parametric distributions.
X0=cbind((X[,1]-mean(X[,1]))/sd(X[,1]),
(X[,2]-mean(X[,2]))/sd(X[,2]))
Y=pnorm(X0)

Then, the following code is used to compute the value of the empirical copula, on the diagonal,

diagonale=function(t,Z) mean((Z[,1]<=t)&(Z[,2]<=t))
diagY=function(t) diagonale(t,Y)
DiagY=Vectorize(diagY)
u=seq(0,1,by=.005)
dY=DiagY(u)

On the graph below, 1,000 samples of size 200 have been generated. All trajectories are estimations of the copula on the diagonal. The black solid line is the true value of the copula.
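A minimal sketch of the simulation loop behind that figure (the misfit strategy is used here; the plotting details are guesses of mine),

ns=1000
us=seq(.01,.99,by=.005)
plot(us,DG(us),type="n",ylim=c(0,1),xlab="",ylab="")
for(s in 1:ns){
Xs=rmt(n,mean=c(0,0),S=matrix(c(1,r,r,1),2,2),df=4)
Xs0=cbind((Xs[,1]-mean(Xs[,1]))/sd(Xs[,1]),
(Xs[,2]-mean(Xs[,2]))/sd(Xs[,2]))
Ys=pnorm(Xs0)                 # misspecified (Gaussian) margins
lines(us,Vectorize(function(t) diagonale(t,Ys))(us),col="grey")
}
lines(us,DG(us),lwd=2)        # true copula on the diagonal, in black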

Obviously, it is not good at all. Mainly because the distribution of http://freakonometrics.blog.free.fr/public/perso6/cop-marg-8.gif can’t be a copula, since margins are not even uniform on the unit interval.

  • a perfect fit. Here, we use the following code to generate our copula-type sample
U=pt(X,df=4)

This time, the fit is much better.

  • Using maximum likelihood estimators to fit the best distribution within the Student-t family
library(MASS)   # for fitdistr
F1=fitdistr(X0[,1],dt,list(df=5),lower = 0.001)
F2=fitdistr(X0[,2],dt,list(df=5),lower = 0.001)
V=cbind(pt(X0[,1],df=F1$estimate),pt(X0[,2],df=F2$estimate))

Here, it is also very good. Even better than before, when the true distribution is considered.

(it is like using the Lilliefors test for goodness of fit, versus Kolmogorov-Smirnov, see here for instance, in French).

  • Finally, let us consider ranks, or nonparametric estimators for marginal distributions,
R=cbind(rank(X[,1])/(n+1),rank(X[,2])/(n+1))

Here, it is even better than the previous one.

If we compare box-plots of the value of the copula at the point (.2,.2), we obtain the following, with, on top, ranks, then the fit within the right family, then the true distribution, and finally, the improper distribution.

Just to illustrate one more time a result mentioned in a previous post, “in statistics, having too much information might not be a good thing“.

Kendall’s function for copulas

As mentioned in the course on copulas, a nice tool to describe dependence is Kendall’s cumulative function. Given a random pair http://freakonometrics.hypotheses.org/files/2015/12/conc-19.gif with distribution http://freakonometrics.hypotheses.org/files/2015/12/conc-17.gif, define the random variable http://freakonometrics.hypotheses.org/files/2015/12/conc-30.gif. Then Kendall’s cumulative function is

http://freakonometrics.hypotheses.org/files/2015/12/kendall-01.gif

Genest and Rivest (1993) introduced that function to choose among Archimedean copulas (we’ll get back to this point below).

From a computational point of view, computing such a function can be done as follows,

  • for all http://freakonometrics.hypotheses.org/files/2015/12/kendall-02.gif, compute http://freakonometrics.hypotheses.org/files/2015/12/kendall-03.gif as the proportion of observations in the lower quadrant, with upper corner http://freakonometrics.hypotheses.org/files/2015/12/kendall-4.gif, i.e.

http://freakonometrics.hypotheses.org/files/2015/12/kendall-06.gif

  • then compute the cumulative distribution function of http://freakonometrics.hypotheses.org/files/2015/12/kendall-03.gif‘s.

To visualize the construction of that cumulative distribution function, consider the following animation

Thus, the code to compute that cumulative distribution function is simply

n=nrow(X)
i=rep(1:n,each=n)
j=rep(1:n,n)
S=((X[i,1]>X[j,1])&(X[i,2]>X[j,2]))
Z=tapply(S,i,sum)/(n-1)    # proportion of points (strictly) below each observation

The graph can be obtain either using

plot(ecdf(Z))

or

plot(sort(Z),(1:n)/n,type="s",col="red")

The interesting point is that for an Archimedean copula with generator http://freakonometrics.hypotheses.org/files/2015/12/kendall-7.gif, Kendall’s function is simply

http://freakonometrics.hypotheses.org/files/2015/12/kendall-8.gif If we’re too lazy to do the maths, at least it is possible to compute those functions numerically. For instance, for the Clayton copula,

h=.001
phi=function(t){(t^(-alpha)-1)}    # alpha is the Clayton parameter
dphi=function(t){(phi(t+h)-phi(t-h))/2/h}
k=function(t){t-phi(t)/dphi(t)}
Kc=Vectorize(k)

Similarly, let us consider the Gumbel copula (with its own generator names, so that Kc above still uses the Clayton one),

phig=function(t){(-log(t))^(theta)}   # theta is the Gumbel parameter
dphig=function(t){(phig(t+h)-phig(t-h))/2/h}
kg=function(t){t-phig(t)/dphig(t)}
Kg=Vectorize(kg)
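A possible way to compare those functions with the empirical one is to pick the parameters so that Kendall’s tau matches the empirical value (using the classical relations tau = alpha/(alpha+2) for Clayton and tau = 1 - 1/theta for Gumbel); a minimal sketch, assuming X is the bivariate sample used above,

tau=cor(X,method="kendall")[1,2]
alpha=2*tau/(1-tau)     # Clayton parameter matching tau
theta=1/(1-tau)         # Gumbel parameter matching tau
u=seq(.01,.99,by=.01)
plot(sort(Z),(1:n)/n,type="s",col="red")   # empirical Kendall function
lines(u,Kc(u),col="blue")                  # Clayton
lines(u,Kg(u),col="purple")                # Gumbel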

If we plot the empirical Kendall’s function (obtained from the sample), with different theoretical ones, derived from Clayton copulas (on the left, in blue) or Gumbel copula (on the right, in purple), we have the following,

http://freakonometrics.hypotheses.org/files/2015/12/kendall-function-anim.gif

Note that the different curves were obtained when the Clayton copula has Kendall’s tau equal to 0, .1, .2, .3, …, .9, 1, and similarly for the Gumbel copula (so that the figures can be compared). The following table gives the correspondence from Kendall’s tau to the underlying parameter of a copula (for different families),

as well as Spearman’s rho,


To conclude, observe that there are two important particular cases that can be identified here: the case of perfect dependence, on the first diagonal, when http://freakonometrics.hypotheses.org/files/2015/12/kennnn-04.gif, and the case of independence, the upper green curve, http://freakonometrics.hypotheses.org/files/2016/10/kennnnn-05.gif. It should also be mentioned that it is common to plot not the function http://freakonometrics.hypotheses.org/files/2015/12/kennnn-01.gif, but the function http://freakonometrics.hypotheses.org/files/2015/12/kennnn-02.gif, defined as http://freakonometrics.hypotheses.org/files/2015/12/kennnn-03.gif,

the Dirichlet distribution

In the course, since we are still introducing some concepts of dependent distributions, we will talk about the Dirichlet distribution, which is a distribution over the simplex of http://freakonometrics.hypotheses.org/files/2017/07/diri11.gif. Let http://freakonometrics.hypotheses.org/files/2017/07/diri01.gif denote the Gamma distribution with density (on http://freakonometrics.hypotheses.org/files/2017/07/diri03.gif)

http://freakonometrics.hypotheses.org/files/2017/07/diri02.gif

Let http://freakonometrics.hypotheses.org/files/2017/07/diri04.gif denote independent http://freakonometrics.hypotheses.org/files/2017/07/diri05.gif random variables, with http://freakonometrics.hypotheses.org/files/2017/07/diri06.gif. Then http://freakonometrics.hypotheses.org/files/2017/07/diri07.gif where

http://freakonometrics.hypotheses.org/files/2017/07/diri08.gif

has a Dirichlet distribution with parameter

http://freakonometrics.hypotheses.org/files/2017/07/diri09.gif
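A small simulation sketch of that construction via independent Gamma variables (the parameter values and the names a, G, D below are arbitrary choices of mine),

> a=c(2,2,5)
> G=cbind(rgamma(1e4,a[1]),rgamma(1e4,a[2]),rgamma(1e4,a[3]))
> D=G/rowSums(G)       # each row lies in the simplex, Dirichlet(2,2,5)
> colMeans(D)          # should be close to a/sum(a)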

Note that http://freakonometrics.hypotheses.org/files/2017/07/diri10.gif has a distribution in the simplex of http://freakonometrics.hypotheses.org/files/2017/07/diri11.gif,

http://freakonometrics.hypotheses.org/files/2017/07/diri40.gif

and has density

http://freakonometrics.hypotheses.org/files/2017/07/diri12.gif

We will write http://freakonometrics.hypotheses.org/files/2017/07/diri13.gif.

The density for different values of http://freakonometrics.hypotheses.org/files/2017/07/diri20.gif can be visualized below, e.g. http://freakonometrics.hypotheses.org/files/2017/07/diri21.gif, with some kind of symmetry,
http://freakonometrics.hypotheses.org/files/2017/07/dirichlet222.gif
or http://freakonometrics.hypotheses.org/files/2017/07/diri22.gif and http://freakonometrics.hypotheses.org/files/2017/07/diri23.gif, below
http://freakonometrics.hypotheses.org/files/2017/07/dirichlet522.gif
and finally, below, http://freakonometrics.hypotheses.org/files/2017/07/diri24.gif


Note that marginal distributions are also Dirichlet, in the sense that if

http://freakonometrics.hypotheses.org/files/2017/07/diri13.gif

then

http://freakonometrics.hypotheses.org/files/2017/07/diri14.gif

if http://freakonometrics.hypotheses.org/files/2017/07/diri15.gif, and if http://freakonometrics.hypotheses.org/files/2017/07/diri16.gif, then http://freakonometrics.hypotheses.org/files/2017/07/diri17.gif‘s have Beta distributions,

http://freakonometrics.hypotheses.org/files/2017/07/diri18.gif

See Devroye (1986), Section XI.4, or Frigyik, Kapila & Gupta (2010). This distribution might also be called the multivariate Beta distribution. In R, this distribution can be used as follows

> library(MCMCpack)
> alpha=c(2,2,5)
> x=seq(0,1,by=.05)
> vx=rep(x,length(x))
> vy=rep(x,each=length(x))
> vz=1-vx-vy
> V=cbind(vx,vy,vz)
> D=ddirichlet(V, alpha)
> persp(x,x,matrix(D,length(x),length(x)))

(to plot the density, as in the figures above). Note that we will come back to that distribution later on, when discussing so-called Liouville copulas (see also Gupta & Richards (1986)).

Exchangeability, credit risk and risk measures

Exchangeability is an extremely useful concept, since (most of the time) analytical expressions can be derived. But it can also be used to observe some unexpected behaviors, that we will discuss later on in a more general setting. For instance, in an old post, I discussed connections between correlation and risk measures (using simulations to illustrate, but in the context of exchangeable risks, calculations can be performed more accurately). Consider again the standard credit risk problem, where the quantity of interest is the number of defaults in a portfolio. Consider a homogeneous portfolio of exchangeable risks. The quantity of interest is here

http://freakonometrics.hypotheses.org/files/2016/11/credit-01.gif

or perhaps the quantile function of the sum (since the Value-at-Risk is the standard risk measure). We have seen yesterday that – given the latent factor – http://freakonometrics.hypotheses.org/files/2016/11/exch67.gif (either the company defaults, or not), so that

http://freakonometrics.hypotheses.org/files/2016/11/exch66.gif

i.e. we can derive the (unconditional) distribution of the sum

http://freakonometrics.hypotheses.org/files/2016/11/exch60.gif

so that the probability function of the sum is, assuming that http://freakonometrics.hypotheses.org/files/2016/11/exch76.gif

http://freakonometrics.hypotheses.org/files/2016/11/exch68.gif

Thus, the following code can be used to calculate the quantile function

> proba=function(s,a,m,n){
+ b=a/m-a                  # so that the Beta(a,b) mixing factor has mean m
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
+ stop.on.error =  FALSE)$value
+ }
> QUANTILE=function(p=.99,a=2,m=.1,n=500){
+ V=rep(NA,n+1)
+ for(i in 0:n){
+ V[i+1]=proba(i,a,m,n)}
+ V=V/sum(V)               # normalize, to limit numerical integration errors
+ return(min(which(cumsum(V)>p))) }
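For instance, with a=2 and a 10% marginal default probability in a portfolio of 500 names, the 99% and 99.5% quantiles of the number of defaults are obtained with (just a usage sketch),

> QUANTILE(p=.99,a=2,m=.1,n=500)
> QUANTILE(p=.995,a=2,m=.1,n=500)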

Now observe that since the variates are exchangeable, it is possible to calculate the correlation between defaults explicitly. Here

http://freakonometrics.hypotheses.org/files/2016/11/exch70.gif

i.e.

http://freakonometrics.hypotheses.org/files/2016/11/exch71.gif

Thus, the correlation between two default indicators is then

http://freakonometrics.hypotheses.org/files/2016/11/exch73.gif

http://freakonometrics.hypotheses.org/files/2016/11/exch75.gif

Under the assumption that the latent factor is beta distributed

http://freakonometrics.hypotheses.org/files/2016/11/exch78.gif

we get

http://freakonometrics.hypotheses.org/files/2016/11/exch80.gif

Thus, as a function of the parameter of the beta distribution (we consider beta distributions with the same mean, i.e. the same marginal distributions, so we have only one parameter left, which drives the correlation between default indicators), it is possible to plot the quantile function,

> PICTURE=function(P){
+ A=seq(.01,2,by=.01)
+ VQ=matrix(NA,length(A),5)
+ for(i in 1:length(A)){
+ VQ[i,1]=QUANTILE(a=A[i],p=.9,m=P)
+ VQ[i,2]=QUANTILE(a=A[i],p=.95,m=P)
+ VQ[i,3]=QUANTILE(a=A[i],p=.975,m=P)
+ VQ[i,4]=QUANTILE(a=A[i],p=.99,m=P)
+ VQ[i,5]=QUANTILE(a=A[i],p=.995,m=P)
+ }
+ plot(A,VQ[,5],type="s",col="red",ylim=
+ c(0,max(VQ)),xlab="",ylab="")
+ lines(A,VQ[,4],type="s",col="blue")
+ lines(A,VQ[,3],type="s",col="black")
+ lines(A,VQ[,2],type="s",col="blue",lty=2)
+ lines(A,VQ[,1],type="s",col="red",lty=2)
+ lines(A,rep(500*P,length(A)),col="grey")
+ legend(3,max(VQ),c("quantile 99.5%","quantile 99%",
+ "quantile 97.5%","quantile 95%","quantile 90%","mean"),
+ col=c("red","blue","black",
+"blue","red","grey"),
+ lty=c(1,1,1,2,2,1),bty="n")
+}

e.g. with a (marginal) default probability of 15%,

> PICTURE(.15)

On this graph, we observe that the stronger the correlation (the further to the left), the higher the quantile… Note that the same graph can be plotted with the correlation on the x-axis,
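A minimal sketch of that modified graph (using the fact that, with the Beta(a,b) factor above, the default correlation is 1/(a+b+1), i.e. m/(a+m) here); the names below are mine,

> P=.15
> A=seq(.01,2,by=.01)
> RHO=P/(A+P)           # default correlation, as a function of a
> VQ99=Vectorize(function(a) QUANTILE(a=a,p=.99,m=P,n=500))(A)
> plot(RHO,VQ99,type="l",col="blue",
+ xlab="default correlation",ylab="99% quantile")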


Which is quite intuitive, somehow. But if the marginal probability of default decreases, increasing the correlation might decrease the risk (i.e. the quantile function),

> PICTURE(.05)

(with the modified code to visualize the quantile as a function of the underlying default correlation) or even worse,

> PICTURE(.0075)

And it becomes all the more counterintuitive as the default probability decreases! So in the case of a portfolio of not-too-risky bond issuers (with high ratings), assuming a very strong correlation will lower the risk-based capital!

In statistics, having too much information might not be a good thing

A common idea in statistics is that if we don’t know something, and we use an estimator of that something (instead of the true value), then there will be some additional uncertainty. For instance, consider a random sample, i.i.d., from a Gaussian distribution. Then, a confidence interval for the mean is

http://freakonometrics.blog.free.fr/public/perso2/IC-cout-06.gif

where http://freakonometrics.blog.free.fr/public/perso2/inc-out-8.gif is the quantile of probability level http://freakonometrics.blog.free.fr/public/perso2/IC-cout-05.gif of the standard normal distribution http://freakonometrics.blog.free.fr/public/perso2/inc-out-09.gif. But the standard deviation http://freakonometrics.blog.free.fr/public/perso2/inc-cout-10.gif (the something I was talking about earlier) is usually unknown. So we substitute an estimate of the standard deviation, e.g.

http://freakonometrics.blog.free.fr/public/perso2/IC-cout-02.gif

and the cost we have to pay is that the new confidence interval is

http://freakonometrics.blog.free.fr/public/perso2/IC-cout-01.gif

where now http://freakonometrics.blog.free.fr/public/perso2/IC-cout-03.gif is the quantile of the Student distribution, of probability level http://freakonometrics.blog.free.fr/public/perso2/IC-cout-05.gif, with http://freakonometrics.blog.free.fr/public/perso2/IC-cout-04.gif degrees of freedom.
We call it a cost since the new confidence interval is now larger (the Student distribution has higher upper-quantiles than the Gaussian distribution).
So usually, if we substitute an estimate for the true value, there is a price to pay.
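To fix ideas, a one-line comparison of the quantiles involved (with, say, 10 degrees of freedom),

> qt(.975,df=10)
[1] 2.228139
> qnorm(.975)
[1] 1.959964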
A few years ago, with Jean-David Fermanian and Olivier Scaillet, we were writing a survey on copula density estimation (using kernels, here). At the end, we wanted to add a small paragraph on the fact that we assumed that we wanted to fit a copula to a sample http://freakonometrics.blog.free.fr/public/perso2/ic-cout_11.gif, i.i.d., with distribution http://freakonometrics.blog.free.fr/public/perso2/ic-cout_13.gif, a copula, but in practice, we start from a sample http://freakonometrics.blog.free.fr/public/perso2/ic-cout_12.gif with joint distribution http://freakonometrics.blog.free.fr/public/perso2/ic-cour_14.gif (assumed to have continuous margins, and – unique – copula http://freakonometrics.blog.free.fr/public/perso2/ic-cout_13.gif). But since margins are usually unknown, there should be a price for not observing them.
To be more formal, in a perfect world, we would consider

http://freakonometrics.blog.free.fr/public/perso2/ic-cout-15.gif

but in the real world, we have to consider

http://freakonometrics.blog.free.fr/public/perso2/ic-cout-16.gif

where it is standard to consider ranks, i.e. http://freakonometrics.blog.free.fr/public/perso2/ic-cout_109.gif are empirical cumulative distribution functions.
My point is that when I ran simulations for the survey (the idea was more to give illustrations of several estimation techniques, rather than proofs of technical theorems), we observed that the price to pay… was negative! I.e. the variance of the estimator of the density (anywhere on the unit square) was smaller on the pseudo-sample http://freakonometrics.blog.free.fr/public/perso2/ic-cout-17.gif than on the perfect sample http://freakonometrics.blog.free.fr/public/perso2/ic-cout_18.gif.
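For the record, here is a small simulation sketch of that kind of phenomenon, for the empirical copula itself (rather than its density), evaluated at a single point; the Gaussian copula, the sample size and all the names below are arbitrary choices of mine,

> library(mnormt)
> r=.5; n=200; ns=1000
> C_true=C_rank=rep(NA,ns)
> for(s in 1:ns){
+ X=rmnorm(n,varcov=matrix(c(1,r,r,1),2,2))
+ U=pnorm(X)                                    # perfect world: known margins
+ R=cbind(rank(X[,1])/(n+1),rank(X[,2])/(n+1))  # real world: ranks
+ C_true[s]=mean((U[,1]<=.5)&(U[,2]<=.5))
+ C_rank[s]=mean((R[,1]<=.5)&(R[,2]<=.5))
+ }
> var(C_true); var(C_rank)    # the rank-based one is typically smaller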
At that time, we could not understand why we got that counter-intuitive result: even if we do know the true distribution, it is better not to use it, and to use instead a nonparametric estimator. Our interpretation was based on the discrepancy concept, and was related to the latin hypercube construction:

With ranks, the data are more regular, and marginal distributions are exactly uniform on the unit interval. So there is less variance.
This was our heuristic interpretation.
A couple of weeks ago, Christian Genest and Johan Segers proved that intuition in an article published in JMVA,

Well, we observed something for finite http://freakonometrics.blog.free.fr/public/maths/mariage01.png, but Christian and Johan obtained an analytical result. Hence, if we denote

http://freakonometrics.blog.free.fr/public/perso2/JSCG-1.gif

the empirical copula in the perfect world (with known margins) and

http://freakonometrics.blog.free.fr/public/perso2/JSCG-2.gif

the one constructed from the pseudo sample, they obtained that, everywhere

http://freakonometrics.blog.free.fr/public/perso2/JSCG-6.gif

with nice graphs of http://freakonometrics.blog.free.fr/public/perso2/JSCG-7.gif,

So I was very happy last week when Christian showed me their results, and to learn that our intuition was correct. Nevertheless, it is still a very counter-intuitive result… If anyone has seen similar things, I’d be glad to hear about it!

Lecture notes on risk and insurance

I just finished some lecture notes on risk and insurance. The notes, which can be downloaded [pdf], are in French, and will be used at the JES (Journées d’Etudes Statistiques), organised at the CIRM (mentioned here). Previous notes deal with risk measures [pdf] and copulas [pdf]. Again, all comments are welcome…

Tails of copulas, a graphical reading

Following a training session I gave at the end of the week in Brest (the slides are here and there), I wanted to come back to the story of tails of copulas, to borrow the title of Gary Venter’s article (here), which corresponds to things I presented a few years ago in Berlin (the slides being online here).

  • Quantifying tail dependence

The idea is to note that there are two ways to quantify tail dependence. The first one is related to the approach of Joe (1990, here, or 1997 for the book), who introduced a (strong) tail dependence index. For instance, for the lower tail,

https://blogperso.univ-rennes1.fr/arthur.charpentier/public/perso3/toc3latex2png.2.php.png

i.e.

https://blogperso.univ-rennes1.fr/arthur.charpentier/public/perso3/toc3latex2png.3.php.png

The second one is related to an idea that can be found in the work of Janet Heffernan, Stuart Coles or Jonathan Tawn. The intuition is the following (it can be found online here). If https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-2.2.php.png and https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-3.2.php.png have the same distribution, and if we assume the variables to be independent, then

https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-1.2.php.png

On the other hand, if the variables are comonotonic (i.e. equal, since the distributions are assumed to be identical),

https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-4.2.php.png

Hence, one can assume that there exists an index https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-6.2.php.png such that

https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-5.2.php.png

The problem is that the independence case corresponds to https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-6.2.php.png=2, while the strong dependence case corresponds to https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-6.2.php.png=1. It is then usual to apply an affine transformation to map the index onto [0,1], so that the strength of the dependence increases with https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-6.2.php.png, e.g.

https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-8.2.php.png

Then define

https://perso.univ-rennes1.fr/arthur.charpentier/latex/toc2latex2png.2.php.png

which can be interpreted as a (weak) tail dependence index.
In short, those two measures give information about the behaviour in the tails of the distribution.

  • Tail concentration functions

The idea is that it is possible to study those functions in order to better understand the behaviour in the tails. Following Gary Venter, one can define

https://blogperso.univ-rennes1.fr/arthur.charpentier/public/perso3/Llatex2png.2.php.png

to study the behaviour in the lower tail, and

https://blogperso.univ-rennes1.fr/arthur.charpentier/public/perso3/Clatex2png.2.php.png

for the upper tail, where https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-12.2.php.png is the survival copula associated with https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-13.2.php.png, in the sense that
https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-14.2.php.png

and

https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-15.2.php.png

This tool can be used to model strong dependence. In order to study weak dependence, one can also define

https://perso.univ-rennes1.fr/arthur.charpentier/latex/toc2latex2png.3.php.png

or

https://perso.univ-rennes1.fr/arthur.charpentier/latex/toc2latex2png.4.php.png
  • Statistical application

The idea is that those functions are easy to estimate, and that they can be useful to better understand the behaviour in the tails.
For instance, for a Gaussian copula with correlation 0.5, the theoretical shape of the (strong) concentration functions is the following,

Statistically, it is possible to estimate those quantities by simply counting the number of observations in the lower left corner, or in the upper right corner. Given a sample, one can then look at the empirical versions

http://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-18.2.php.png

and

https://perso.univ-rennes1.fr/arthur.charpentier/latex/toclatex2png-19.2.php.png

For a sample of size n=500, the 90% confidence intervals look like the following,

The R code looks like this

> library(evd); data(lossalae)
> cor(lossalae,method="spearman")
         Loss     ALAE
Loss 1.000000 0.451872
ALAE 0.451872 1.000000

with the following code for the empirical version,

> z=seq(0,.5,by=.001)
> v=lossalae
> U=rank(v[,1])/(nrow(v)+1)
> V=rank(v[,2])/(nrow(v)+1)
> Lemp=rep(NA,length(z))
> Remp=rep(NA,length(z))
> for(i in 1:length(z)){
+  Lemp[i]=sum((U<=z[i])&(V<=z[i]))/sum(U<=z[i])
+  Remp[i]=sum((U>=1-z[i])&(V>=1-z[i]))/sum(U>=1-z[i])
+ }

and for the theoretical version,

> Lg=(pcopula(copclayton,cbind(z,z)))/(z)
> Rg=((1-2*(1-z)+pcopula(copclayton,cbind(1-z,1-z))))/(z)
> plot(c(1-z,z),c(Lg,Rg))

Moreover, similar functions can be considered for dependence in the weak sense, with the following code for the theoretical version,

> Lg=log(pcopula(cop,cbind(z,z)))/log(z)
> Rg=log((1-2*(1-z)+pcopula(cop,cbind(1-z,1-z))))/log(z)
> Lg=1/Lg*2-1
> Rg=1/Rg*2-1

and this one for the empirical version

> z=seq(0,.5,by=.001)
> v <- lossalae
> U=rank(v[,1])/(nrow(v)+1)
> V=rank(v[,2])/(nrow(v)+1)
> Lemp=rep(NA,length(z))
> Remp=rep(NA,length(z))
> for(i in 1:length(z)){
+  Lemp[i]=log(mean((U<=z[i])&(V<=z[i])))/log(mean(U<=z[i]))
+  Remp[i]=log(mean((U>=1-z[i])&(V>=1-z[i])))/log(mean(U>=1-z[i]))
+ }
> Lemp=1/Lemp*2-1
> Remp=1/Remp*2-1

In short, those functions can be used on real samples. Consider the classical loss-ALAE example (where insurance losses are coupled with the allocated expenses paid by the insurer). We want to fit a copula, without really knowing which one. We can start by studying strong dependence, and compare with a Gaussian copula. The reference Gaussian copula here has the same Spearman’s rho as the sample at hand,

> cor(lossalae,method="spearman")
         Loss     ALAE
Loss 1.000000 0.451872
ALAE 0.451872 1.000000
> library(copula)
> paramgauss=.47
> paramclayton=.9
> paramgumbel=1.45
> copgauss=normalCopula(paramgauss)
> copclayton=claytonCopula(paramclayton, dim = 2)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)

We obtain here

The green curve is the (pointwise) 95% confidence interval for a Gaussian copula and a sample of the same size. We can see that the dependence structure is poorly modelled. With the dual of the Clayton copula, we obtain

and finally for a Gumbel copula,

In short, the Gumbel copula seems really well suited… If we dig further by studying dependence in the weak sense, this model can be validated there as well. Indeed, if the reference is the Gaussian copula,

or a Clayton copula,

while a Gumbel copula would give

Copulas and empirical processes

Tarek Zari defended his PhD thesis at the beginning of the month, presenting a “contribution to the study of the empirical copula process“, and his thesis is online here. I also put a copy of the slides of the defense online. Historically, it seems that Frits Ruymgaart was the first to talk about empirical copula processes, in 1973 (his thesis is online here).

Paul Deheuvels also introduced the notion of the empirical copula as early as 1979, under the name of “empirical dependence function“. Around the same time, Ludger Rüschendorf also proposed an asymptotic study of empirical copula processes (here, in 1976), as did Gäenssler and Stute in their seminar on empirical processes, and Winfried Stute in the 1980s. A survey of the literature on multivariate empirical processes was published at that time, online. Since then, Jean-David Fermanian has published a paper here on weak convergence, and Paul Deheuvels and Ludger Rüschendorf have published a great deal, in particular on the rate of convergence…
