I will present results of the pricing games at the conference on Recent Developments in Dependence Modelling with Applications in Finance and Insurance in Αίγινα, Greece, tomorrow. Slides are now online.
Copulas and tail dependence, part 3
An extreme value copula can be written
$$C(u,v)=\exp\left(\log(uv)\,A\!\left(\frac{\log(u)}{\log(uv)}\right)\right),$$
where $A(\cdot)$ is Pickands dependence function, which is a convex function satisfying
$$\max\{t,1-t\}\le A(t)\le 1,\qquad t\in[0,1].$$
Observe that in this case,
$$\tau=4\int_{[0,1]^2}C(u,v)\,dC(u,v)-1,$$
where $\tau$ is Kendall's tau, and it can be written
$$\tau=\int_0^1\frac{t(1-t)}{A(t)}\,dA'(t).$$
For instance, if
$$A(t)=\left(t^\theta+(1-t)^\theta\right)^{1/\theta},$$
then we obtain the Gumbel copula. This is what we've seen in the section where we introduced this family. Now, let us talk about (nonparametric) inference, and more precisely the estimation of the dependence function. The starting point of the most standard estimator is to observe that if $(U,V)$ has copula $C$, then
$$Z=\frac{\log(U)}{\log(UV)}$$
has distribution function
$$H(z)=\mathbb{P}(Z\le z)=z+z(1-z)\frac{A'(z)}{A(z)},\qquad z\in(0,1).$$
And conversely, Pickands dependence function can be written
$$A(t)=\exp\left(\int_0^t\frac{H(s)-s}{s(1-s)}\,ds\right).$$
Thus, a natural estimator for Pickands function is
$$\widehat{A}_n(t)=\exp\left(\int_0^t\frac{\widehat{H}_n(s)-s}{s(1-s)}\,ds\right),$$
where $\widehat{H}_n(\cdot)$ is the empirical cumulative distribution function of
$$Z_i=\frac{\log(U_i)}{\log(U_iV_i)},\qquad i=1,\dots,n.$$
This is the estimator proposed in Capéraà, Fougères & Genest (1997). Here, we can compute everything using
> library(evd)
> data(lossalae)
> X=lossalae
> U=cbind(rank(X[,1])/(nrow(X)+1),rank(X[,2])/(nrow(X)+1))
> # pseudo-observations Z=log(U)/log(UV) and their empirical cdf H
> Z=log(U[,1])/log(U[,1]*U[,2])
> h=function(t) mean(Z<=t)
> H=Vectorize(h)
> # estimated Pickands function A(t)=exp(int_0^t (H(s)-s)/(s(1-s)) ds)
> a=function(t){
+ f=function(s) (H(s)-s)/(s*(1-s))
+ return(exp(integrate(f,lower=0,upper=t,subdivisions=10000)$value))
+ }
> A=Vectorize(a)
> u=seq(.01,.99,by=.01)
> plot(c(0,u,1),c(1,A(u),1),type="l",col="red",ylim=c(.5,1))
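For comparison (this overlay is not in the original code, it only uses the Gumbel expression of $A(\cdot)$ given above and the usual inversion of Kendall's tau, $\theta=1/(1-\tau)$, also used in part 2 below), one can add the Pickands function of the Gumbel copula, together with the bounds $\max\{t,1-t\}\le A(t)\le 1$,

> tau=cor(X,method="kendall")[1,2]
> theta=1/(1-tau)
> Agumbel=function(t) (t^theta+(1-t)^theta)^(1/theta)
> lines(u,Agumbel(u),col="blue")
> lines(c(0,.5,1),c(1,.5,1),lty=2)  # lower bound max(t,1-t)
> abline(h=1,lty=2)                 # upper bound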
The code above computes the empirical cumulative distribution function of $Z$ and integrates it to get an estimator of Pickands' dependence function (the red curve). Note that an interesting point is that the upper tail dependence index can be visualized on the graph, above,
> A(.5)/2
[1] 0.4055346
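As a side note (a standard property of extreme value copulas, not something taken from the code above), the upper tail dependence index satisfies $\lambda_U=2(1-A(1/2))$, so from the estimated Pickands function one could also compute, for instance,

> 2*(1-A(.5))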
Copulas and tail dependence, part 2
An alternative way to describe tail dependence can be found in Ledford & Tawn (1996), for instance. The intuition behind it can be found in Fischer & Klein (2007). Assume that $X$ and $Y$ have the same distribution. Now, if we assume that those variables are (strictly) independent,
$$\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)\cdot\mathbb{P}(Y>t)=\mathbb{P}(X>t)^2.$$
So assume that there is a $\kappa$ such that
$$\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)^{\kappa}.$$
Then $\kappa=2$ can be interpreted as independence, while $\kappa=1$ means strong (perfect) positive dependence. Thus, consider the following transformation to get a parameter in $[0,1]$, with the strength of dependence increasing with the index, e.g.
$$\frac{2}{\kappa}-1.$$
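As a quick sanity check of those two limiting cases (a worked detail, not spelled out in the post): under independence, $\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)^2$, so $\kappa=2$ and the index is $2/2-1=0$; under perfect positive dependence with identical margins ($X=Y$ almost surely), $\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)$, so $\kappa=1$ and the index is $2/1-1=1$.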
In order to derive a tail dependence index, assume that there exists a limit to
$$\frac{2\log\mathbb{P}(U\le z)}{\log\mathbb{P}(U\le z,V\le z)}-1$$
as $z$ goes to 0 (and similarly in the upper tail), which will be interpreted as a (weak) tail dependence index. Thus, define concentration functions
$$L(z)=\frac{2\log\mathbb{P}(U\le z)}{\log\mathbb{P}(U\le z,V\le z)}-1$$
for the lower tail (on the left), and
$$R(z)=\frac{2\log\mathbb{P}(U>1-z)}{\log\mathbb{P}(U>1-z,V>1-z)}-1$$
for the upper tail (on the right), where $U$ and $V$ denote the probability transforms of $X$ and $Y$.
> library(evd)
> data(lossalae)
> X=lossalae
> U=rank(X[,1])/(nrow(X)+1)
> V=rank(X[,2])/(nrow(X)+1)
> # empirical concentration functions, lower and upper tails
> fL2emp=function(z) 2*log(mean(U<=z))/log(mean((U<=z)&(V<=z)))-1
> fR2emp=function(z) 2*log(mean(U>=1-z))/log(mean((U>=1-z)&(V>=1-z)))-1
> u=seq(.001,.5,by=.001)
> L=Vectorize(fL2emp)(u)
> R=Vectorize(fR2emp)(rev(u))
> plot(c(u,u+.5-u[1]),c(L,R),type="l",ylim=0:1,xlab="LOWER TAIL   UPPER TAIL")
> abline(v=.5,col="grey")
and again, it is possible to plot those empirical functions against some parametric ones, e.g. the one obtained from a Gaussian copula (with the same Kendall’s tau)
> tau=cor(lossalae,method="kendall")[1,2]
> library(copula)
> paramgauss=sin(tau*pi/2)
> copgauss=normalCopula(paramgauss)
> Lgaussian=function(z) 2*log(z)/log(pCopula(c(z,z),copgauss))-1
> Rgaussian=function(z) 2*log(1-z)/log(1-2*z+pCopula(c(z,z),copgauss))-1
> u=seq(.001,.5,by=.001)
> Lgs=Vectorize(Lgaussian)(u)
> Rgs=Vectorize(Rgaussian)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgs,Rgs),col="red")
or Gumbel copula,
> paramgumbel=1/(1-tau)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)
> Lgumbel=function(z) 2*log(z)/log(pCopula(c(z,z),copgumbel))-1
> Rgumbel=function(z) 2*log(1-z)/log(1-2*z+pCopula(c(z,z),copgumbel))-1
> Lgl=Vectorize(Lgumbel)(u)
> Rgl=Vectorize(Rgumbel)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgl,Rgl),col="blue")
Again, one should look more carefully at confidence bands, but it looks like the Gumbel copula provides a good fit here.
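To get a rough idea of those confidence bands, here is a possible sketch (not from the original post): a pointwise bootstrap band around the empirical upper tail concentration function, obtained by resampling the observations,

> set.seed(1)
> RB=matrix(NA,200,length(u))
> for(b in 1:200){
+ I=sample(1:nrow(X),replace=TRUE)
+ UB=rank(X[I,1])/(length(I)+1)
+ VB=rank(X[I,2])/(length(I)+1)
+ fRB=function(z) 2*log(mean(UB>=1-z))/log(mean((UB>=1-z)&(VB>=1-z)))-1
+ RB[b,]=Vectorize(fRB)(rev(u))
+ }
> # pointwise 90% band, on the upper tail part of the graph (NaN can occur deep in the tail)
> lines(u+.5-u[1],apply(RB,2,quantile,probs=.05,na.rm=TRUE),lty=2)
> lines(u+.5-u[1],apply(RB,2,quantile,probs=.95,na.rm=TRUE),lty=2)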
Non-transitivity of correlation for random vectors in dimension 3
Dependence in dimension 2 is difficult. But one has to admit that dimension 2 is way simpler than dimension 3! I recently rediscovered a nice paper, Langford, Schwertman & Owens (2001), on the transitivity of the property of being positively correlated (which inspired the odd title of this post). And more recently, Castro Sotos, Vanhoof, Van Den Noortgate & Onghena (2001) conducted a study which confirmed that there are strong misconceptions about correlation and association (and I guess not only because probabilistic reasoning is extremely weak, as mentioned in Stock & Gross (1989)), as already stated in Estepa & Batanero (1996), or Batanero, Estepa, Godino and Green (1996). My understanding is that it is possible to get almost anything… even counterintuitive results. For instance, if we want to mix independence and comonotonicity (i.e. perfect positive dependence), most of the theorems you might think of are probably incorrect. Consider the following result (based on some old examples I have been using in my courses 5 or 6 years ago, see e.g. here)
“If X and Y are comonotonic, and if Y and Z are comonotonic, then X and Z are comonotonic”
Well, this result seems to be intuitive, and probably valid. But it is not. Consider the following triplet,
The projections of that three-dimensional vector on the bivariate planes are the following,
Here, X and Y are comonotonic, so are Y and Z, but X and Z are independent… Weird, isn't it? Another one?
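Since the table is not reproduced here, one possible triplet with exactly that property (a hypothetical illustration, not necessarily the distribution used in the post) puts probability 1/4 on each of the points (0,0,0), (0,0,1), (1,0,0) and (1,1,1) for (X,Y,Z). A quick check that the joint cdf of (X,Y) and of (Y,Z) is the Fréchet upper bound (comonotonicity), while the joint cdf of (X,Z) is the product of the margins (independence):

D=data.frame(x=c(0,0,1,1),y=c(0,0,0,1),z=c(0,1,0,1),p=rep(1/4,4))
Fj=function(a,b,s,t) sum(D$p[D[,a]<=s & D[,b]<=t])   # joint cdf of two coordinates
Fm=function(a,s) sum(D$p[D[,a]<=s])                  # marginal cdf
grid=expand.grid(s=0:1,t=0:1)
# both should be TRUE: (X,Y) and (Y,Z) are comonotonic
all(apply(grid,1,function(g) Fj("x","y",g[1],g[2])==min(Fm("x",g[1]),Fm("y",g[2]))))
all(apply(grid,1,function(g) Fj("y","z",g[1],g[2])==min(Fm("y",g[1]),Fm("z",g[2]))))
# TRUE: X and Z are independent
all(apply(grid,1,function(g) Fj("x","z",g[1],g[2])==Fm("x",g[1])*Fm("z",g[2])))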
“If X and Y are comonotonic, and if Y and Z are independent, then X and Z are independent”
Again, even if it is intuitive, it is not correct… Consider for instance the following 3 dimensional distribution,
Here, X and Y are comonotonic, while Y and Z are independent, but here X and Z are countercomonotonic (perfect negative dependence). It is also possible to consider the following distribution,
that can be visualized below,
In that case, X and Y are comonotonic, while Y and Z are independent, but here X and Z are comonotonic (perfect positive dependence). So obviously, we should be able to construct any kind of counterexample, to any kind of result we might think is intuitive.
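Similarly, for the countercomonotonic case mentioned above, here is one possible construction (again a hypothetical illustration, not the table from the post): put probabilities 1/2, 1/6, 1/12 and 1/4 on the points (0,0,0), (0,1,1), (0,1,1) replaced by (0,1,1)… more precisely on (0,0,0), (0,0,1), (0,1,1) and (1,1,0), respectively. With the same helper functions as above, and a small numerical tolerance,

D=data.frame(x=c(0,0,0,1),y=c(0,0,1,1),z=c(0,1,1,0),p=c(1/2,1/6,1/12,1/4))
chk=function(a,b,f) all(apply(grid,1,function(g) abs(Fj(a,b,g[1],g[2])-f(Fm(a,g[1]),Fm(b,g[2])))<1e-12))
chk("x","y",function(u,v) min(u,v))       # TRUE: (X,Y) comonotonic
chk("y","z",function(u,v) u*v)            # TRUE: (Y,Z) independent
chk("x","z",function(u,v) max(u+v-1,0))   # TRUE: (X,Z) countercomonotonic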
To be honest, the problem with intuition is that it usually comes from the Gaussian case, and from the perception that dependence is related to correlation, i.e. Pearson's linear correlation. Consider the case of a three-dimensional random vector, with correlation matrix
$$R=\begin{pmatrix}1 & r_{12} & r_{13}\\ r_{12} & 1 & r_{23}\\ r_{13} & r_{23} & 1\end{pmatrix}.$$
Given two of those correlations, say $r_{12}$ and $r_{13}$, what could we say about $r_{23}$? For instance, the intuition is that if $r_{12}$ and $r_{13}$ are positive, then $r_{23}$ is likely to be positive too (perhaps). The only property (at least the most important one) we have on that correlation matrix is that it has to be positive semi-definite. So if we play with the eigenvalues, it should be possible to derive inequalities satisfied by $r_{23}$. Langford, Schwertman & Owens (2001) claim (in Theorem 3) that the correlations have to satisfy some property, like
$$r_{12}r_{13}-\sqrt{(1-r_{12}^2)(1-r_{13}^2)}\le r_{23}\le r_{12}r_{13}+\sqrt{(1-r_{12}^2)(1-r_{13}^2)},$$
which is simply the fact that the determinant of the correlation matrix has to be non-negative; that property was already mentioned in Kendall (1948), as an exercise.
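For a quick illustration (the function below is mine, not from the post), those bounds on $r_{23}$, given $r_{12}$ and $r_{13}$, can be computed in closed form,

bound23=function(r12,r13) r12*r13+c(-1,1)*sqrt((1-r12^2)*(1-r13^2))
bound23(.5,.5)    # r23 has to lie in [-0.5, 1]
bound23(.9,-.9)   # r23 has to lie in [-1, -0.62]: it is forced to be negative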
But is that a necessary and sufficient condition? Since I am extremely lazy, let us run some numerical calculations to visualize the possible values for $r_{23}$, as a function of $r_{12}$ and $r_{13}$. Consider the following code,
U=seq(-1,1,by=.1)
V=seq(-1,1,by=.001)
# largest and smallest admissible r23=c, given r12=a and r13=b, obtained by
# checking positive definiteness through the smallest eigenvalue
FSUP=function(a,b){
 DF=function(c){min(eigen(matrix(c(1,a,b,a,1,c,b,c,1),3,3))$values)}
 V[max(which(Vectorize(DF)(V)>0))]}
FINF=function(a,b){
 DF=function(c){min(eigen(matrix(c(1,a,b,a,1,c,b,c,1),3,3))$values)}
 V[min(which(Vectorize(DF)(V)>0))]}
MSUP=outer(U,U,Vectorize(FSUP))
MINF=outer(U,U,Vectorize(FINF))
library(RColorBrewer)
clr=rev(brewer.pal(6,"RdBu"))
U=U[2:20]
MSUP=MSUP[2:20,2:20]
MINF=MINF[2:20,2:20]
persp(U,U,MSUP,col="green",shade=TRUE)
image(U,U,MSUP,breaks=((-3):3)/3,col=clr)
persp(U,U,MINF,col="green",shade=TRUE)
image(U,U,MINF,breaks=((-3):3)/3,col=clr)
Here, we can derive the lower and the upper bound for $r_{23}$, as a function of $r_{12}$ and $r_{13}$. Fixing $r_{13}=-0.7$, we get the following picture,
V=seq(-1,1,by=.001)
U=seq(-1,1,by=.1)
U=U[2:(length(U)-1)]
V=V[2:(length(V)-1)]
U=c(-.9999,U,.9999)
V=c(-.99999,V,.99999)
# admissible range of r23=c given r12=a, with r13 fixed at -0.7
FSUP=function(a){
 DF=function(c){min(eigen(matrix(c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)}
 V[max(which(Vectorize(DF)(V)>0))]}
FINF=function(a){
 DF=function(c){min(eigen(matrix(c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)}
 V[min(which(Vectorize(DF)(V)>0))]}
VS=Vectorize(FSUP)(U)
VI=Vectorize(FINF)(U)
plot(c(U,U),c(VS,VI),col="white")
polygon(c(U,rev(U)),c(VS,rev(VI)),col="yellow",border=NA)
lines(U,VS,lwd=2,col="red")
lines(U,VI,lwd=2,col="red")
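As a cross-check (not in the original post), the closed-form bounds $r_{23}=r_{12}r_{13}\pm\sqrt{(1-r_{12}^2)(1-r_{13}^2)}$, with $r_{13}=-0.7$, can be overlaid on that region,

lines(U,-.7*U+sqrt((1-U^2)*(1-.7^2)),col="blue",lty=2)
lines(U,-.7*U-sqrt((1-U^2)*(1-.7^2)),col="blue",lty=2)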
We do observe here extremely nice ellipses… Consider the case of a null correlation $r_{13}=0$: then the region of possible values for $(r_{12},r_{23})$ is the unit disk, since the determinant condition reduces to $r_{12}^2+r_{23}^2\le 1$.
The interpretation is that if $r_{12}$ is null, and so is $r_{13}$, then $r_{23}$ might take any value between $-1$ and $+1$ (under the assumption that the marginal distributions allow such values, e.g. Gaussian marginal distributions). On the other hand, if $r_{12}$ is either $-1$ or $+1$ (perfect negative/positive correlation), still with $r_{13}=0$, then $r_{23}$ has to be null…