Tag Archives: copulas

(nonparametric) copula density estimation

Today, we will go further with the inference of copula functions. Some code (and references) can be found in a previous post, on nonparametric estimators of copula densities (among other related things). Consider (as before) the loss-ALAE dataset (since we have been working a lot with it)

> library(MASS)
> library(evd)
> X=lossalae
> U=cbind(rank(X[,1])/(nrow(X)+1),rank(X[,2])/(nrow(X)+1))

The standard tool to plot nonparametric estimators of densities is to use multivariate kernels. We can look at the density using

> mat1=kde2d(U[,1],U[,2],n=35)
> persp(mat1$x,mat1$y,mat1$z,col="green",
+ shade=TRUE,theta=30,
+ xlab="",ylab="",zlab="",zlim=c(0,7))

or level curves (isodensity curves), with a more detailed estimator (on a grid with a finer step)

> mat1=kde2d(U[,1],U[,2],n=101)
> image(mat1$x,mat1$y,mat1$z,col=
+ rev(heat.colors(100)),xlab="",ylab="")
> contour(mat1$x,mat1$y,mat1$z,add=
+ TRUE,levels = pretty(c(0,4), 11))

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est1.gif

Kernels are nice, but we clearly observe some border bias, which is extremely strong in the corners (there, the estimator is one fourth of what it should be, since only a quarter of the kernel mass lies inside the unit square; see another post for more details). Instead of working with the sample $(U_i,V_i)$ on the unit square, consider the transformed sample $(Q(U_i),Q(V_i))$, where $Q:(0,1)\rightarrow\mathbb{R}$ is a given function, e.g. the quantile function of an unbounded distribution, for instance that of the $\mathcal{N}(0,1)$ distribution. Then, we can estimate the density of the transformed sample and, using the inversion technique, derive an estimator of the density of the initial sample. Since the inverse of a (general) function is not that simple to compute, the code might be a bit slow. But it does work,

> gaussian.kernel.copula.surface <- function (u,v,n) {
+   s <- seq(1/(n+1), length=n, by=1/(n+1))
+   mat <- matrix(NA, nrow=n, ncol=n)
+   # kernel density estimate of the Gaussian-transformed sample
+   sur <- kde2d(qnorm(u), qnorm(v), n=1000,
+   lims = c(-4, 4, -4, 4))
+   su <- sur$z
+   for (i in 1:n) {
+     for (j in 1:n) {
+       # locate (qnorm(s[i]),qnorm(s[j])) on the kde2d grid
+       Xi <- round((qnorm(s[i])+4)*1000/8)+1
+       Yj <- round((qnorm(s[j])+4)*1000/8)+1
+       # back-transform, dividing by the Gaussian density at each margin
+       mat[i,j] <- su[Xi,Yj]/(dnorm(qnorm(s[i]))*dnorm(qnorm(s[j])))
+     }
+   }
+   return(list(x=s, y=s, z=data.matrix(mat)))
+ }

Here, we get

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est2.gif

Note that it is possible to consider another transformation, e.g. the quantile function of a Student-t distribution.

> student.kernel.copula.surface <- function (u,v,n,d=4) {
+   s <- seq(1/(n+1), length=n, by=1/(n+1))
+   mat <- matrix(NA, nrow=n, ncol=n)
+   # kernel density estimate of the Student-t transformed sample
+   sur <- kde2d(qt(u,df=d), qt(v,df=d), n=5000,
+   lims = c(-8, 8, -8, 8))
+   su <- sur$z
+   for (i in 1:n) {
+     for (j in 1:n) {
+       # locate (qt(s[i]),qt(s[j])) on the kde2d grid
+       Xi <- round((qt(s[i],df=d)+8)*5000/16)+1
+       Yj <- round((qt(s[j],df=d)+8)*5000/16)+1
+       # back-transform, dividing by the Student-t density at each margin
+       mat[i,j] <- su[Xi,Yj]/(dt(qt(s[i],df=d),df=d)*dt(qt(s[j],df=d),df=d))
+     }
+   }
+   return(list(x=s, y=s, z=data.matrix(mat)))
+ }

Another strategy is to consider kernels that have precisely the unit interval as support. The idea here is to consider products of Beta kernels, whose parameters depend on the location

> beta.kernel.copula.surface <- function (u,v,bx=.025,by=.025,n) {
+   s <- seq(1/(n+1), length=n, by=1/(n+1))
+   mat <- matrix(0, nrow=n, ncol=n)
+   for (i in 1:n) {
+     a <- s[i]
+     for (j in 1:n) {
+       b <- s[j]
+       # product of Beta kernels, whose mean matches each observation
+       mat[i,j] <- sum(dbeta(a, u/bx, (1-u)/bx) *
+       dbeta(b, v/by, (1-v)/by)) / length(u)
+     }
+   }
+   return(list(x=s, y=s, z=data.matrix(mat)))
+ }

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est3.gif

On those two graphs, we can clearly observe strong tail dependence in the upper (right) corner, which could not be perceived with a standard kernel estimator…

Copulas and tail dependence, part 3

We have seen extreme value copulas in the section where we considered general families of copulas. In the bivariate case, an extreme value copula can be written
$$C(u,v)=\exp\left[\log(uv)\,A\!\left(\frac{\log(u)}{\log(uv)}\right)\right],$$
where $A(\cdot)$ is Pickands' dependence function, a convex function satisfying
$$\max\{t,1-t\}\leq A(t)\leq 1,\qquad t\in[0,1].$$
Observe that, in this case, Kendall's tau can be written as
$$\tau=\int_0^1\frac{t(1-t)}{A(t)}\,\mathrm{d}A'(t).$$
For instance, if
$$A(t)=\left[t^{\theta}+(1-t)^{\theta}\right]^{1/\theta},\qquad\theta\geq 1,$$
then we obtain the Gumbel copula. This is what we have seen in the section where we introduced this family. Now, let us talk about (nonparametric) inference, and more precisely the estimation of the dependence function. The starting point of the most standard estimator is to observe that, if $(U,V)$ has copula $C$, then
$$Z=\frac{\log(U)}{\log(UV)}$$
has distribution function
$$H(t)=t+t(1-t)\frac{A'(t)}{A(t)}.$$
Conversely, Pickands' dependence function can be written
$$A(t)=\exp\left[\int_0^t\frac{H(s)-s}{s(1-s)}\,\mathrm{d}s\right].$$
Thus, a natural estimator of Pickands' function is
$$\widehat{A}(t)=\exp\left[\int_0^t\frac{\widehat{H}_n(s)-s}{s(1-s)}\,\mathrm{d}s\right],$$
where $\widehat{H}_n$ is the empirical cumulative distribution function of
$$Z_i=\frac{\log(U_i)}{\log(U_iV_i)},\qquad i=1,\dots,n.$$
This is the estimator proposed in Capéraà, Fougères & Genest (1997). We can compute everything using

> library(evd)
> X=lossalae
> U=cbind(rank(X[,1])/(nrow(X)+1),rank(X[,2])/
+ (nrow(X)+1))
> Z=log(U[,1])/log(U[,1]*U[,2])
> h=function(t) mean(Z<=t)
> H=Vectorize(h)
> a=function(t){
+ f=function(x) (H(x)-x)/(x*(1-x))
+ return(exp(integrate(f,lower=0,upper=t,
+ subdivisions=10000)$value))
+ }
> A=Vectorize(a)
> u=seq(.01,.99,by=.01)
> plot(c(0,u,1),c(1,A(u),1),type="l",col="red",
+ ylim=c(.5,1))

We estimate the cumulative distribution function, and then integrate, to get an estimator of Pickands' dependence function. An interesting point is that the upper tail dependence index can be read directly on the graph above, since $\lambda_U=2\left(1-A(1/2)\right)$,

> 2*(1-A(.5))
[1] 0.4055346

Copulas and tail dependence, part 1

As mentioned in the course last week, Venter (2003) suggested nice functions to illustrate tail dependence (see also some slides used in Berlin a few years ago).

  • Joe (1990)’s lambda

Joe (1990) suggested a (strong) tail dependence index. For the lower tail, for instance, consider

$$\lambda_L=\lim_{u\downarrow 0}\Pr\left(X\leq F^{-1}(u)\mid Y\leq G^{-1}(u)\right),$$

i.e.

$$\lambda_L=\lim_{u\downarrow 0}\frac{C(u,u)}{u}.$$
  • Upper and lower strong tail (empirical) dependence functions

The idea is to plot the function above, in order to visualize the limiting behavior. Define

$$L(z)=\frac{C(z,z)}{z}$$

for the lower tail, and

$$R(z)=\frac{C^\star(z,z)}{z}$$

for the upper tail, where $C^\star$ is the survival copula associated with $C$, in the sense that

$$\Pr(X>x,Y>y)=C^\star\big(\Pr(X>x),\Pr(Y>y)\big),$$

while

$$C^\star(u,v)=u+v-1+C(1-u,1-v).$$

Now, one can easily derive empirical counterparts of those functions, i.e.

$$\widehat{L}(z)=\frac{\sum_{i=1}^n \mathbf{1}(U_i\leq z,\,V_i\leq z)}{\sum_{i=1}^n \mathbf{1}(U_i\leq z)}$$

and

$$\widehat{R}(z)=\frac{\sum_{i=1}^n \mathbf{1}(U_i\geq 1-z,\,V_i\geq 1-z)}{\sum_{i=1}^n \mathbf{1}(U_i\geq 1-z)}.$$

Thus, for the upper tail (on the right), we have the following graph

http://freakonometrics.hypotheses.org/files/2017/07/upper-lambda.gif

and for the lower tail (on the left), we have

http://freakonometrics.hypotheses.org/files/2017/07/lower-lambda.gif

For the code, consider some real data, like the loss-ALAE dataset.

> library(evd)
> X=lossalae

The idea is to plot, on the left, the lower tail concentration function, and on the right, the upper tail function.

> U=rank(X[,1])/(nrow(X)+1)
> V=rank(X[,2])/(nrow(X)+1)
> Lemp=function(z) sum((U<=z)&(V<=z))/sum(U<=z)
> Remp=function(z) sum((U>=1-z)&(V>=1-z))/sum(U>=1-z)
> u=seq(.001,.5,by=.001)
> L=Vectorize(Lemp)(u)
> R=Vectorize(Remp)(rev(u))
> plot(c(u,u+.5-u[1]),c(L,R),type="l",ylim=0:1,
+ xlab="LOWER TAIL          UPPER TAIL")
> abline(v=.5,col="grey")

Now, we can compare this graph with what would be obtained for some parametric copulas having the same Kendall's tau. For instance, for a Gaussian copula,

> tau=cor(lossalae,method="kendall")[1,2]
> library(copula)
> paramgauss=sin(tau*pi/2)
> copgauss=normalCopula(paramgauss)
> Lgaussian=function(z) pCopula(c(z,z),copgauss)/z
> Rgaussian=function(z) (1-2*z+pCopula(c(z,z),copgauss))/(1-z)
> u=seq(.001,.5,by=.001)
> Lgs=Vectorize(Lgaussian)(u)
> Rgs=Vectorize(Rgaussian)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgs,Rgs),col="red")

or Gumbel’s copula,

> paramgumbel=1/(1-tau)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)
> Lgumbel=function(z) pCopula(c(z,z),copgumbel)/z
> Rgumbel=function(z) (1-2*z+pCopula(c(z,z),copgumbel))/(1-z)
> u=seq(.001,.5,by=.001)
> Lgl=Vectorize(Lgumbel)(u)
> Rgl=Vectorize(Rgumbel)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgl,Rgl),col="blue")

That's nice (isn't it?), but since we do not have any confidence interval, it is still hard to conclude (even if it looks like the Gumbel copula has a much better fit than the Gaussian one). A strategy is to generate samples from those fitted copulas, and to visualize the same functions. With the Gaussian copula, the graph looks like

> u=seq(.0025,.5,by=.0025); nu=length(u)
> nsimul=500
> MGS=matrix(NA,nsimul,2*nu)
> for(s in 1:nsimul){
+ Xs=rCopula(nrow(X),copgauss)
+ Us=rank(Xs[,1])/(nrow(Xs)+1)
+ Vs=rank(Xs[,2])/(nrow(Xs)+1)
+ Lemp=function(z) sum((Us<=z)&(Vs<=z))/sum(Us<=z)
+ Remp=function(z) sum((Us>=1-z)&(Vs>=1-z))/sum(Us>=1-z)
+ MGS[s,1:nu]=Vectorize(Lemp)(u)
+ MGS[s,(nu+1):(2*nu)]=Vectorize(Remp)(rev(u))
+ lines(c(u,u+.5-u[1]),MGS[s,],col="red")
+ }

(including – pointwise – 90% confidence bands)

> Q95=function(x) quantile(x,.95)
> V95=apply(MGS,2,Q95)
> lines(c(u,u+.5-u[1]),V95,col="red",lwd=2)
> Q05=function(x) quantile(x,.05)
> V05=apply(MGS,2,Q05)
> lines(c(u,u+.5-u[1]),V05,col="red",lwd=2)

while the following is obtained with the Gumbel copula (simply replacing copgauss by copgumbel in the loop above). Isn't it a nice (graphical) tool?

But as mentioned in the course, the statistical convergence can be slow. Extremely slow. So assessing whether the underlying copula has tail dependence or not is not that simple, especially if the copula exhibits tail independence (like the Gaussian copula). Consider a sample of size 1,000. This is what we obtain if we generate random scenarios,

or if we look at the left tail (with a log scale)

Now, consider a sample of size 10,000,

or with a log-scale

We can even consider a sample of size 100,000,

or with a log-scale

On those graphs, it is rather difficult to conclude whether the limit is 0, or some strictly positive value (again, it is a classical statistical problem when the value of interest is at the border of the parameter space). So, a natural idea is to consider a weaker tail dependence index. Unless you have something like 100,000 observations…

Kendall’s function for copulas

As mentioned in the course on copulas, a nice tool to describe dependence is Kendall's cumulative function. Given a random pair $(X,Y)$ with joint distribution function $H$, define the random variable $Z=H(X,Y)$. Then Kendall's cumulative function is

$$K(t)=\Pr(Z\leq t).$$

Genest and Rivest (1993) introduced that function to choose among Archimedean copulas (we’ll get back to this point below).

From a computational point of view, computing such a function can be done as follows,

  • for every observation $(x_i,y_i)$, compute $z_i$ as the proportion of observations in the lower quadrant, with upper corner $(x_i,y_i)$, i.e.

$$z_i=\frac{1}{n-1}\sum_{j\neq i}\mathbf{1}\left(x_j< x_i,\,y_j< y_i\right),$$

  • then compute the cumulative distribution function of the $z_i$'s.

To visualize the construction of that cumulative distribution function, consider the following animation

Thus, the code to compute that cumulative distribution function is simply

n=nrow(X)           # X is the bivariate sample, e.g. X=lossalae
i=rep(1:n,each=n)
j=rep(1:n,n)
S=((X[i,1]>X[j,1])&(X[i,2]>X[j,2]))
Z=tapply(S,i,sum)/(n-1)   # proportion of points strictly below each observation

The graph can be obtained either using

plot(ecdf(Z))

or

plot(sort(Z),(1:n)/n,type="s",col="red")

The interesting point is that, for an Archimedean copula with generator $\phi$, Kendall's function is simply

$$K(t)=t-\frac{\phi(t)}{\phi'(t)}.$$

If we're too lazy to do the maths, it is at least possible to compute those functions numerically. For instance, for the Clayton copula,

h=.001
alpha=2                                        # Clayton parameter (tau = alpha/(alpha+2))
phiC=function(t){t^(-alpha)-1}                 # Clayton generator
dphiC=function(t){(phiC(t+h)-phiC(t-h))/2/h}   # central difference derivative
kC=function(t){t-phiC(t)/dphiC(t)}
Kc=Vectorize(kC)

Similarly, let us consider the Gumbel copula,

theta=2                                        # Gumbel parameter (tau = 1-1/theta)
phiG=function(t){(-log(t))^(theta)}            # Gumbel generator (distinct name, so Kc above is not overwritten)
dphiG=function(t){(phiG(t+h)-phiG(t-h))/2/h}
kG=function(t){t-phiG(t)/dphiG(t)}
Kg=Vectorize(kG)

If we plot the empirical Kendall’s function (obtained from the sample), with different theoretical ones, derived from Clayton copulas (on the left, in blue) or Gumbel copula (on the right, in purple), we have the following,

http://freakonometrics.hypotheses.org/files/2015/12/kendall-function-anim.gif

Note that the different curves were obtained for a Clayton copula with Kendall's tau equal to 0, .1, .2, .3, …, .9, 1, and similarly for the Gumbel copula (so that the figures can be compared). The following table gives the correspondence, from Kendall's tau to the underlying parameter of a copula (for different families)

as well as Spearman’s rho,


To conclude, observe that two important particular cases can be identified here: the case of perfect (positive) dependence, on the first diagonal, where $K(t)=t$; and the case of independence, the upper green curve, where $K(t)=t-t\log(t)$. It should also be mentioned that it is common to plot not the function $K(t)$, but rather the function $\lambda(t)$, defined as $\lambda(t)=t-K(t)$.

PhD defense on copulas

This Wednesday I will be at Université Paris 1 Sorbonne as a member of the jury for the PhD thesis of Pierre-André Maugis, on conditional correlation and vine copulas.

Vine copulas were born in 2002 with the paper of Tim Bedford and Roger M. Cooke, Vines – a new graphical model for dependent random variables. The idea is to use the following decomposition of a multivariate density,

$$f(x_1,x_2,x_3)=f_1(x_1)\cdot f(x_2\mid x_1)\cdot f(x_3\mid x_1,x_2)$$

(from Bayes formula, with synthetic notations). Then, using the relationship between a bivariate density and its copula (density),

$$f_{12}(x_1,x_2)=c_{12}\big(F_1(x_1),F_2(x_2)\big)\cdot f_1(x_1)\cdot f_2(x_2),$$

thus

$$f(x_2\mid x_1)=c_{12}\big(F_1(x_1),F_2(x_2)\big)\cdot f_2(x_2).$$

Using again Bayes formula,

$$f(x_3\mid x_1,x_2)=\frac{f(x_2,x_3\mid x_1)}{f(x_2\mid x_1)},$$

and we can write

$$f(x_2,x_3\mid x_1)=c_{23\mid 1}\big(F(x_2\mid x_1),F(x_3\mid x_1)\big)\cdot f(x_2\mid x_1)\cdot f(x_3\mid x_1).$$

Since $f(x_2\mid x_1)=c_{12}\big(F_1(x_1),F_2(x_2)\big)\cdot f_2(x_2)$ and $f(x_3\mid x_1)=c_{13}\big(F_1(x_1),F_3(x_3)\big)\cdot f_3(x_3)$, the previous expression becomes

$$f(x_1,x_2,x_3)=f_1(x_1)f_2(x_2)f_3(x_3)\cdot c_{12}\big(F_1(x_1),F_2(x_2)\big)\cdot c_{13}\big(F_1(x_1),F_3(x_3)\big)\cdot c_{23\mid 1}\big(F(x_2\mid x_1),F(x_3\mid x_1)\big),$$

or, to stress the most important part (as I see it), the last factor is really

$$c_{23\mid 1}\big(F(x_2\mid x_1),F(x_3\mid x_1);\,x_1\big),$$

a conditional copula which may, in principle, depend on the value $x_1$ of the conditioning variable. It is then common to assume that this conditional copula does not depend on the conditioning value, which yields the detailed expression of the joint trivariate density given above.

The (parametric) sequential inference algorithm is described in Cooke, Joe and Aas (2010).

The important assumption in vine copula models is that conditional copulas are constant. This assumption might be valid in some cases, for instance in the Gaussian one (the observations have a Gaussian joint distribution – or at least a Gaussian copula – and we fit a vine model with Gaussian bivariate copulas).

The code to fit a vine copula is the following,

> library(CDVine)
> library(mnormt)
> SIGMA=matrix(c(1,.6,.7,.6,1,.8,.7,.8,1),3,3)
> X=rmnorm(n=100000,varcov=SIGMA)
> CDVineSeqEst(dat=X, family = c(1,1,1),
+ type = 1, method = "mle")
$par
[1] 0.6001505 0.7023699 0.6698215
 
$par2
[1] 0 0 0

Note that it is consistent with the following algorithm, where conditional copulas are fitted explicitly: for each value of the conditioning component, we fit a Gaussian copula to the remaining (conditional) pair,

> U=pnorm(X)
> U1U2=U[,1:2]
> U1U3=U[,c(1,3)]
> GaussCop = normalCopula(param=.5, dim = 2)
> fit12.mpl = fitCopula(GaussCop, U1U2, method="mpl")@estimate
> fit13.mpl = fitCopula(GaussCop, U1U3, method="mpl")@estimate
> fit12.mpl
[1] 0.5984932
> fit13.mpl
[1] 0.7005185
> C12=pcopula(normalCopula(param=fit12.mpl, dim = 2),U1U2)
> C13=pcopula(normalCopula(param=fit13.mpl, dim = 2),U1U3)
> U12=rank(C12)/(nrow(U)+1)
> U13=rank(C13)/(nrow(U)+1)
> fit23a=rep(NA,99)
> for(i in 4:96){
+ x=i/100
+ # keep observations whose first component is close to x
+ U23=cbind(U12[abs(U[,1]-x)<.02],U13[abs(U[,1]-x)<.02])
+ V23=cbind(rank(U23[,1])/(nrow(U23)+1),
+ rank(U23[,2])/(nrow(U23)+1))
+ fit23a[i] = fitCopula(GaussCop, V23, method="mpl")@estimate
+ }
> plot((1:99)/100,fit23a,col="red")

It looks like assuming a constant conditional copula was a valid assumption here,

But note that if the true distribution is not Gaussian, then assuming a constant conditional copula is no longer valid (here, a sample from a trivariate Clayton copula was generated)

Talk at Laval University at the Actuarial Seminar

I was at Laval University last Friday, for a talk by David Cummins and Mary Weiss (here). I will be back tomorrow, this time to give a talk on "distorting probabilities in actuarial science" (extremely close to the one I gave at McGill in November). "In this talk, we will first get back to properties of distortion operators for pricing financial and insurance risks. Based on the dual version of the expected utility framework, we will see how distorted risk measures have been introduced, from VaR and TVaR to the Esscher premium and Wang's measures. Then we will discuss extensions in higher dimension, and tail properties of distorted copulas (in the particular case of Archimedean copulas). A natural application will be aging problems (in survival analysis, or in credit risk)." Slides can be downloaded from here.

 

This talk can be seen as a first part, the second one being the talk I will give in 15 days, again at Laval University, but this time for the Statistics Seminar. That talk will be on "Beta kernel and transformed kernel: applications to quantile estimation, and copula density estimation".

In statistics, having too much information might not be a good thing

A common idea in statistics is that if we don't know something, and we use an estimator of that something (instead of the true value), then there will be some additional uncertainty. For instance, consider an i.i.d. random sample from a Gaussian distribution. Then, a confidence interval for the mean is

$$\left[\bar{x}\pm z_{1-\alpha/2}\,\frac{\sigma}{\sqrt{n}}\right],$$

where $z_{1-\alpha/2}$ is the quantile of probability level $1-\alpha/2$ of the standard normal distribution $\mathcal{N}(0,1)$. But the standard deviation $\sigma$ (the something I was talking about earlier) is usually unknown. So we substitute an estimate of the standard deviation, e.g.

$$s=\sqrt{\frac{1}{n-1}\sum_{i=1}^n\left(x_i-\bar{x}\right)^2},$$

and the cost we have to pay is that the new confidence interval is

$$\left[\bar{x}\pm t_{1-\alpha/2}(n-1)\,\frac{s}{\sqrt{n}}\right],$$

where now $t_{1-\alpha/2}(n-1)$ is the quantile of the Student distribution, of probability level $1-\alpha/2$, with $n-1$ degrees of freedom.
We call it a cost since the new confidence interval is larger (the Student distribution has larger upper quantiles than the Gaussian distribution).
So usually, if we substitute an estimate for the true value, there is a price to pay.
A few years ago, with Jean-David Fermanian and Olivier Scaillet, we were writing a survey on copula density estimation (using kernels, here). At the end, we wanted to add a small paragraph on the following fact: we assumed we wanted to fit a copula to a sample $(U_i,V_i)$, i.i.d., with distribution $C$, a copula; but in practice, we start from a sample $(X_i,Y_i)$ with joint distribution $F$ (assumed to have continuous margins, and a – unique – copula $C$). And since the margins are usually unknown, there should be a price to pay for not observing them.
To be more formal, in a perfect world, we would consider the sample

$$\left(U_i,V_i\right)=\left(F_X(X_i),F_Y(Y_i)\right),$$

but in the real world, we have to consider

$$\left(\widehat{U}_i,\widehat{V}_i\right)=\left(\widehat{F}_X(X_i),\widehat{F}_Y(Y_i)\right),$$

where it is standard to consider ranks, i.e. $\widehat{F}_X$ and $\widehat{F}_Y$ are the empirical cumulative distribution functions.
My point is that, when I ran simulations for the survey (the idea was more to give illustrations of several estimation techniques than proofs of technical theorems), we observed that the price to pay… was negative! I.e. the variance of the estimator of the density (anywhere on the unit square) was smaller with the pseudo-sample $(\widehat{U}_i,\widehat{V}_i)$ than with the perfect sample $(U_i,V_i)$.
By that time, we could not understand why we got that counter-intuitive result: even if we do know the true distribution, it is better not to use it, and to use instead a nonparametric estimator. Our interpretation was based on the discrepancy concept, and related to the latin hypercube construction: with ranks, the data are more regular, and marginal distributions are exactly uniform on the unit interval, so there is less variance.
This was our heuristic interpretation.
A couple of weeks ago, Christian Genest and Johan Segers proved that intuition in an article published in JMVA.

Well, we observed something for finite $n$, but Christian and Johan obtained an analytical (asymptotic) result. Hence, if we denote

$$\mathbb{C}_n(u,v)=\frac{1}{n}\sum_{i=1}^n\mathbf{1}\left(U_i\leq u,\,V_i\leq v\right)$$

the empirical copula in the perfect world (with known margins) and

$$\widehat{\mathbb{C}}_n(u,v)=\frac{1}{n}\sum_{i=1}^n\mathbf{1}\left(\widehat{U}_i\leq u,\,\widehat{V}_i\leq v\right)$$

the one constructed from the pseudo sample, they obtained that, everywhere

$$\mathrm{Var}\big(\widehat{\mathbb{C}}_n(u,v)\big)\leq\mathrm{Var}\big(\mathbb{C}_n(u,v)\big),\quad\text{asymptotically},$$

with nice graphs of the ratio of those two variances,

So I was very happy, last week, when Christian showed me their results, and to learn that our intuition was correct. Nevertheless, it is still a very counter-intuitive result… If anyone has seen similar things, I'd be glad to hear about them!

Mandelbrot, fractals and counterexamples in applied probability

Benoît Mandelbrot died yesterday. Like most of the blogs dealing with applied mathematics, it looks like I have to mention this event. Unfortunately, I don’t know much about fractals…

The first time I heard about Mandelbrot and chaos was when I was working on fractional time series (see e.g. here). Murad Taqqu gave a very interesting short course in Paris, and I used it in two papers (actually, one more should appear soon in Climate Change).

The second time was in Québec (city), five years ago, when Roger Nelsen gave a talk on copulas with fractal support (here). At that time, we were finishing our paper with Alessandro (in mathematical finance, on limit theorems). I remember adding a remark in the paper (which can be found here), since using that kind of copula was a nice way to show that, without sufficient regularity conditions, the limit we were looking for made no sense.

A few months later, with Johan, in a paper on pitfalls of lower tail dependence for Archimedean copulas (here), we used this fractal construction again (applied, this time, to Archimedean copulas) to find a nice counterexample to a (false) theorem on regular variation. Again, the goal was to understand how the dependence structure of $(X,Y)$, given $X\leq u$ and $Y\leq u$, changes as $u$ goes to 0. In the case of Archimedean copulas, the copula of the conditional pair is still Archimedean (with another generator, except for the Clayton copula). The graph below shows how the generator of the conditional copula changes as $u$ decreases… Actually, I drew a normalized version, since Archimedean generators are not unique (they are only defined up to a positive multiplicative constant).

I have also decided to plot https://perso.univ-rennes1.fr/arthur.charpentier/latex/mandel07.png.

Here, we see that there is no way to talk about a possible limit of the conditional copula, because of the fractal behavior of the generator at 0 (even if my fractals are not as nice as the ones you can find on the internet…). So thanks, Benoît, for giving us a nice toy to build interesting counterexamples!

Copules et risques corrélés

https://blogperso.univ-rennes1.fr/arthur.charpentier/public/perso2/.copula-density-proj_m.jpg

I had promised in a comment that I would soon put online a survey on copulas… After several months of delay, the document is now online [pdf], soberly entitled copules et risques multiples. All remarks and criticisms are welcome! It is a chapter for a book whose reference I will post later. This document should also serve as a basis for the course that will be given in a month at CIRM (mentioned here), as part of the Journées d'Études Statistiques.

Tails of copulas, a graphical reading

Following a training session I gave at the end of last week in Brest (the slides are here and there), I wanted to get back to those tails of copulas stories, to borrow the title of Gary Venter's article (here), which corresponds to material I presented a few years ago in Berlin (the slides are online here).

  • Quantifying tail dependence

Note that there are two ways to quantify tail dependence. The first one is related to the approach of Joe (1990, here, or 1997 for the book), who introduced a (strong) tail dependence index. For instance, for the lower tail,

$$\lambda_L=\lim_{u\downarrow 0}\Pr\left(X\leq F^{-1}(u)\mid Y\leq G^{-1}(u)\right),$$

i.e.

$$\lambda_L=\lim_{u\downarrow 0}\frac{C(u,u)}{u}.$$

The second one is related to an idea that can be found in the work of Janet Heffernan, Stuart Coles and Jonathan Tawn. The intuition is the following (it can be found online here). If $X$ and $Y$ have the same distribution, and if we assume they are independent, then

$$\Pr(X>t,Y>t)=\Pr(X>t)\cdot\Pr(Y>t)=\Pr(X>t)^2.$$

On the other hand, if the two variables are comonotonic (that is, equal, since they are assumed to have the same distribution),

$$\Pr(X>t,Y>t)=\Pr(X>t)=\Pr(X>t)^1.$$

Hence, one can assume that there exists an index $\eta$ such that

$$\Pr(X>t,Y>t)=\Pr(X>t)^{\eta}.$$

The trouble is that the independence case corresponds to $\eta=2$, while strong dependence corresponds to $\eta=1$. It is then usual to transform this index so that it lives in $[0,1]$, with the strength of the dependence increasing with it, e.g.

$$\chi=\frac{2}{\eta}-1.$$

Then define

$$\underline{\chi}=\lim_{z\downarrow 0}\left(\frac{2\log(z)}{\log C(z,z)}-1\right),$$

which can be interpreted as a (weak) tail dependence index.
In short, these two measures give information on the behavior in the tails of the distribution.

  • Tail concentration functions

The idea is that these functions can be studied in order to better understand the behavior in the tails. Following Gary Venter, define

$$L(z)=\frac{C(z,z)}{z}$$

to study the behavior in the lower tail, and

$$R(z)=\frac{C^\star(z,z)}{z}$$

for the upper tail, where $C^\star$ is the survival copula associated with $C$, in the sense that

$$\Pr(X>x,Y>y)=C^\star\big(\Pr(X>x),\Pr(Y>y)\big),$$

while

$$C^\star(u,v)=u+v-1+C(1-u,1-v).$$

This tool can be used to describe strong dependence. In order to study weak dependence, one can also define

$$\underline{\chi}(z)=\frac{2\log(z)}{\log C(z,z)}-1,$$

or

$$\overline{\chi}(z)=\frac{2\log(z)}{\log C^\star(z,z)}-1.$$
  • Statistical application

The idea is that these functions are easy to estimate, and that they can be useful tools to better understand the behavior in the tails.
For instance, for a Gaussian copula with correlation 0.5, the (strong) tail concentration functions have the following theoretical shape

Statistically, those quantities can be estimated by simply counting the number of observations in the lower-left corner, or in the upper-right corner. Given a sample, one can then look at the empirical versions

$$\widehat{L}(z)=\frac{\sum_{i=1}^n \mathbf{1}(U_i\leq z,\,V_i\leq z)}{\sum_{i=1}^n \mathbf{1}(U_i\leq z)}$$

and

$$\widehat{R}(z)=\frac{\sum_{i=1}^n \mathbf{1}(U_i\geq 1-z,\,V_i\geq 1-z)}{\sum_{i=1}^n \mathbf{1}(U_i\geq 1-z)}.$$

For a sample of size n=500, the 90% confidence intervals look as follows,

The R code looks like this,

> library(evd); data(lossalae)
> cor(lossalae,method="spearman")
         Loss     ALAE
Loss 1.000000 0.451872
ALAE 0.451872 1.000000

with the following code for the empirical version,

> z=seq(0,.5,by=.001)
> v=lossalae
> U=rank(v[,1])/(nrow(v)+1)
> V=rank(v[,2])/(nrow(v)+1)
> Lemp=rep(NA,length(z))
> Remp=rep(NA,length(z))
> for(i in 1:length(z)){
+  Lemp[i]=sum((U<=z[i])&(V<=z[i]))/sum(U<=z[i])
+  Remp[i]=sum((U>=1-z[i])&(V>=1-z[i]))/sum(U>=1-z[i])
+ }

and the following for the theoretical version (here using the Clayton copula defined below),

> Lg=(pcopula(copclayton,cbind(z,z)))/(z)
> Rg=((1-2*(1-z)+pcopula(copclayton,cbind(1-z,1-z))))/(z)
> plot(c(z,1-z),c(Lg,Rg))

Moreover, there are similar functions for dependence in the weak sense, with the following code for the theoretical version,

> Lg=log(pcopula(cop,cbind(z,z)))/log(z)     # cop: a copula object, e.g. copgauss below
> Rg=log((1-2*(1-z)+pcopula(cop,cbind(1-z,1-z))))/log(z)
> Lg=1/Lg*2-1
> Rg=1/Rg*2-1

and this one for the empirical version,

> z=seq(0,.5,by=.001)
> v <- lossalae
> U=rank(v[,1])/(nrow(v)+1)
> V=rank(v[,2])/(nrow(v)+1)
> Lemp=rep(NA,length(z))
> Remp=rep(NA,length(z))
> for(i in 1:length(z)){
+  Lemp[i]=log(mean((U<=z[i])&(V<=z[i])))/log(mean(U<=z[i]))
+  Remp[i]=log(mean((U>=1-z[i])&(V>=1-z[i])))/log(mean(U>=1-z[i]))
+ }
> Lemp=1/Lemp*2-1
> Remp=1/Remp*2-1

In short, these functions can be used on real samples. Consider the classical loss-ALAE example (where losses on insurance claims are paired with the allocated expenses paid by the insurer). We want to fit a copula, without really knowing which one a priori. We can start by studying strong dependence, and compare with a Gaussian copula. The reference Gaussian copula here has the same Spearman's rho as the sample at hand,

> cor(lossalae,method="spearman")
         Loss     ALAE
Loss 1.000000 0.451872
ALAE 0.451872 1.000000
> library(copula)
> paramgauss=.47
> paramclayton=.9
> paramgumbel=1.45
> copgauss=normalCopula(paramgauss)
> copclayton=claytonCopula(paramclayton, dim = 2)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)

Here, we obtain the following,

The green curve is the (pointwise) 95% confidence interval for a Gaussian copula, with a sample of the same size. We can see that the dependence structure is poorly modeled. With the dual (survival) Clayton copula, we obtain

and finally, for the Gumbel copula,

In short, the Gumbel copula seems really well suited… And digging further, by studying dependence in the weak sense, this model can be validated as well. Indeed, if the reference is the Gaussian copula,

or the Clayton copula,

whereas the Gumbel copula would give

Copules et processus empiriques

Tarek Zari defended his PhD thesis at the beginning of the month, presenting a "contribution to the study of the empirical copula process", and his thesis is online here. I have also put a copy of the slides of the defense online. Historically, it seems that Frits Ruymgaart was the first to mention the empirical copula process, in 1973 (his thesis is online here).

Paul Deheuvels also introduced the notion of the empirical copula as early as 1979, under the name of "empirical dependence function". Around the same time, Ludger Rüschendorf proposed an asymptotic study of empirical copula processes (here, in 1976), as did Gäenssler and Stute in their seminar on empirical processes, and Winfried Stute in the 1980s. A survey of the literature on multivariate empirical processes was also published at that time. Since then, Jean-David Fermanian has published a paper on weak convergence, and Paul Deheuvels and Ludger Rüschendorf have published a great deal, in particular on rates of convergence…

Séminaire Probabilité et Statistique, UBO, Brest

Talk at the statistics seminar at the Université de Bretagne Occidentale, in Brest, Tuesday May 5th (and not Wednesday May 6th, as initially announced), at 2 pm (in 10 days), on "multivariate extremes". Slides can be found here.

The talk will give a detailed introduction to multivariate extremes and related concepts. Then the case of Archimedean copulas will be fully described (following the paper with Johan Segers).

[04/05/2009]: some applications in risk management will be shown at the end of the talk, as well as some new things on spatial correlation.

And in order to illustrate the tail convergence of Archimedean copulas, I have uploaded two animations, with tail independence below,

with tail dependence (or asymptotic dependence),

Dynamic dependence ordering for Archimedean copulas and distorted copulas

The paper on Dynamic dependence ordering for Archimedean copulas and distorted copulas should appear soon in Kybernetika.

This paper proposes a general framework to compare the strength of dependence in survival models, as time changes, i.e. given remaining lifetimes X, to compare the dependence of X given X>t and of X given X>s, where s>t. More precisely, analytical results will be obtained in the case where the survival copula of X is either an Archimedean or a distorted copula. The case of a frailty-based model will also be discussed in detail.