
Non-transitivity of correlation for random vectors in dimension 3

Dependence in dimension 2 is difficult. But one has to admit that dimension 2 is much simpler than dimension 3! I recently rediscovered a nice paper, Langford, Schwertman & Owens (2001), on the transitivity of the property of being positively correlated (which inspired the odd title of this post). And more recently, Castro Sotos, Vanhoof, Van Den Noortgate & Onghena (2001) conducted a study which confirmed that there are strong misconceptions about correlation and association (and I guess not only because probabilistic reasoning is extremely weak, as mentioned in Stock & Gross (1989)), misconceptions already documented in Estepa & Batanero (1996) and Batanero, Estepa, Godino & Green (1996). My understanding is that it is possible to obtain almost anything… even counterintuitive results. For instance, if we want to mix independence and comonotonicity (i.e. perfect positive dependence), most of the statements you might think of will probably turn out to be incorrect. Consider the following statement (based on some old examples I used in my courses 5 or 6 years ago, see e.g. here),

“If X and Y are comonotonic, and if Y and Z are comonotonic, then X and Z are comonotonic”

Well, this result seems to be intuitive, and probably valid. But it is not. Consider the following triplet,

The projections of that three-dimensional vector on the bivariate planes are

Here, X and Y are comonotonic, so are Y and Z, but X and Z are independent… Weird, isn't it? Another one?

“If X and Y are comonotonic, and if Y and Z are independent, then X and Z are independent”

Again, even if it sounds intuitive, it is not correct… Consider for instance the following three-dimensional distribution,

Here, X and Y are comonotonic, while Y and Z are independent, but X and Z are here countercomonotonic (perfect negative dependence). It is also possible to consider the following distribution,

that can be visualized below,

In that case, X and Y are comonotonic, while Y and Z are independent, but X and Z are here comonotonic (perfect positive dependence). So clearly, we can construct a counterexample to almost any result that might sound intuitive.
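Going back to the first statement, since the original tables are not reproduced here, here is a minimal sketch in R, with a discrete distribution of my own construction (not necessarily the one displayed above), where X and Y are comonotonic, Y and Z are comonotonic, and yet X and Z are independent; comonotonicity is checked through the Fréchet–Hoeffding upper bound on the joint distribution function,

# a distribution on {0,1}^3 (built for illustration): X and Y are comonotonic,
# Y and Z are comonotonic, but X and Z are independent
D=data.frame(x=c(0,0,0,1,1),
             y=c(0,1,1,1,1),
             z=c(0,0,1,0,1),
             p=c(8,1,3,3,1)/16)
F1=function(u,s)     sum(D$p[D[,u]<=s])               # marginal cdf
F2=function(u,v,s,t) sum(D$p[D[,u]<=s & D[,v]<=t])    # bivariate cdf
# comonotonicity: F(s,t) = min(F_U(s),F_V(t)) for all (s,t)
comonotone=function(u,v) all(sapply(0:1,function(s) sapply(0:1,function(t)
 abs(F2(u,v,s,t)-min(F1(u,s),F1(v,t)))<1e-12)))
# independence: F(s,t) = F_U(s)*F_V(t) for all (s,t)
independent=function(u,v) all(sapply(0:1,function(s) sapply(0:1,function(t)
 abs(F2(u,v,s,t)-F1(u,s)*F1(v,t))<1e-12)))
comonotone("x","y")    # TRUE
comonotone("y","z")    # TRUE
independent("x","z")   # TRUE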

To be honest, the problem with intuition is that it usually comes from the Gaussian case, and from the perception that dependence is related to correlation, i.e. Pearson's linear correlation. Consider the case of a three-dimensional random vector (X, Y, Z), with correlation matrix

$$\begin{pmatrix} 1 & r_{XY} & r_{XZ} \\ r_{XY} & 1 & r_{YZ} \\ r_{XZ} & r_{YZ} & 1 \end{pmatrix}$$

Given two of the pairwise correlations, say $r_{XY}$ and $r_{XZ}$, what could we say about the third one, $r_{YZ}$? For instance, the intuition is that if $r_{XY}$ and $r_{XZ}$ are positive, then $r_{YZ}$ is likely to be positive too (perhaps). The only property (at least the most important one) we have on that correlation matrix is that it has to be positive semi-definite. So if we play with the eigenvalues, it should be possible to derive inequalities satisfied by $r_{YZ}$. Langford, Schwertman & Owens (2001) claim (in Theorem 3) that the correlations have to satisfy some property, like

$$r_{XY}\,r_{XZ} - \sqrt{(1-r_{XY}^2)(1-r_{XZ}^2)} \;\leq\; r_{YZ} \;\leq\; r_{XY}\,r_{XZ} + \sqrt{(1-r_{XY}^2)(1-r_{XZ}^2)}$$

which is simply the condition that the determinant of the correlation matrix has to be nonnegative. That property was already mentioned in Kendall (1948), as an exercise.
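To make the link explicit (just expanding the determinant, with the notation of the matrix above), positive semi-definiteness requires

$$1 + 2\,r_{XY}\,r_{XZ}\,r_{YZ} - r_{XY}^2 - r_{XZ}^2 - r_{YZ}^2 \;\geq\; 0,$$

and solving this quadratic inequality in $r_{YZ}$ gives precisely the bounds above.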

But is that a necessary and sufficient condition? Since I am extremely lazy, let us run some numerical calculations to visualize the possible values of $r_{YZ}$, as a function of $r_{XY}$ and $r_{XZ}$. Consider the following code,

U=seq(-1,1,by=.1)    # grid for the two given correlations (a and b)
V=seq(-1,1,by=.001)  # candidate values for the third correlation (c)
# largest c such that the matrix [1 a b; a 1 c; b c 1] remains positive definite
FSUP=function(a,b){
DF=function(c){min(eigen(matrix(
c(1,a,b,a,1,c,b,c,1),3,3))$values)}
V[max(which(Vectorize(DF)(V)>0))]}
# smallest such c
FINF=function(a,b){
DF=function(c){min(eigen(matrix(
c(1,a,b,a,1,c,b,c,1),3,3))$values)}
V[min(which(Vectorize(DF)(V)>0))]}
# drop a,b = -1 and +1, where no strictly positive definite completion exists
U=U[2:20]
MSUP=outer(U,U,Vectorize(FSUP))
MINF=outer(U,U,Vectorize(FINF))
library(RColorBrewer)
clr=rev(brewer.pal(6,"RdBu"))
persp(U,U,MSUP,col="green",shade=TRUE)
image(U,U,MSUP,breaks=((-3):3)/3,col=clr)
persp(U,U,MINF,col="green",shade=TRUE)
image(U,U,MINF,breaks=((-3):3)/3,col=clr)
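
As a quick sanity check (assuming FSUP and FINF above have just been defined), the numerical bounds can be compared with the closed-form ones from Theorem 3, for instance with $r_{XY}=r_{XZ}=0.5$,

a=.5; b=.5
c(FINF(a,b),FSUP(a,b))              # numerical bounds, up to the 0.001 grid of V
a*b+c(-1,1)*sqrt((1-a^2)*(1-b^2))   # closed-form bounds, here -0.5 and 1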

Here, we can derive the lower and the upper bounds for $r_{YZ}$, as functions of $r_{XY}$ and $r_{XZ}$.

In the dark blue area, the bound for the correlation can be really low, while in the dark red area, the bound is very high (either the lower bound on the left, or the upper bound on the right). Since it might be hard to read, it is possible to fix, for instance, $r_{XZ}$, and to derive the bounds for $r_{YZ}$ as a function of $r_{XY}$.
V=seq(-1,1,by=.001)
U=seq(-1,1,by=.1)
U=U[2:(length(U)-1)]   # drop -1 and +1...
V=V[2:(length(V)-1)]
U=c(-.9999,U,.9999)    # ...and replace them by values slightly inside (-1,1)
V=c(-.99999,V,.99999)
# bounds for c when b is fixed at -0.7 (re-run with +0.7 for the right-hand graph)
FSUP=function(a){
DF=function(c){min(eigen(matrix(
c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)}
V[max(which(Vectorize(DF)(V)>0))]}
FINF=function(a){
DF=function(c){min(eigen(matrix(
c(1,a,-.7,a,1,c,-.7,c,1),3,3))$values)}
V[min(which(Vectorize(DF)(V)>0))]}
VS=Vectorize(FSUP)(U)
VI=Vectorize(FINF)(U)
# admissible region in yellow, with its boundaries in red
plot(c(U,U),c(VS,VI),col="white")
polygon(c(U,rev(U)),c(VS,rev(VI)),col="yellow",border=NA)
lines(U,VS,lwd=2,col="red")
lines(U,VI,lwd=2,col="red")

On the graphs below, we have the bounds for a negative value of $r_{XZ}$ (on the left, with -0.7) and a positive value of $r_{XZ}$ (on the right, here +0.7),

We do observe here extremely nice ellipses… Consider the case of a null correlation $r_{XZ}=0$: then the region of possible values for $r_{XY}$ and $r_{YZ}$ is the unit disk, since the determinant condition reduces to $r_{XY}^2+r_{YZ}^2\leq 1$.
The interpretation is that if $r_{XZ}$ is null, and so is $r_{XY}$, then $r_{YZ}$ can take any value between -1 and +1 (provided that the marginal distributions allow such values, e.g. Gaussian margins). On the other hand, if $r_{XY}$ is either -1 or +1 (perfect negative or positive correlation), then $r_{YZ}$ has to be null…
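
To illustrate that last point numerically, one can simply look at the smallest eigenvalue of the corresponding correlation matrices (here with $r_{XY}=1$ and $r_{XZ}=0$),

min(eigen(matrix(c(1,1,0,1,1,0,0,0,1),3,3))$values)    # ~0: r_YZ = 0 is (just) admissible
min(eigen(matrix(c(1,1,0,1,1,.1,0,.1,1),3,3))$values)  # <0: r_YZ = 0.1 is not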

Correlations, dimension, and risk measure

Yesterday, while I was attending the IFM2 conference at HEC Montreal, I heard a nice talk about credit risk, and a comparison between contagion (or at least default correlation) for corporate and retail firms (in the US). It was mentioned that default correlation is much lower for retail firms than it can be for corporate ones. In a discussion that followed those slides, it was mentioned that banks in the US should actually have been working more with those small firms, since contagion risk was much lower.

A problem here is that the link between correlation, risk and dimension is rather complicated:

  • corporate means a small number of firms, high correlation (and possibly large individual losses)
  • retail means a large number of firms (perhaps even extremely large), lower correlation (and small individual losses)

A simple model for defaults is based on the assumption that we deal with an exchangeable portfolio (as in a previous post). With the following code, given an (individual) default probability, a default correlation, and a number of firms, it is possible to calculate the probability of having more than a given number of defaults.

# P(exactly s defaults out of n) in the exchangeable (Beta-mixing) model,
# with a Beta(a,b) mixing distribution of mean m (so that b = a/m - a)
proba=function(s,a,m,n){
 b=a/m-a
 choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
 dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
 stop.on.error=FALSE)$value}

# P(at most x defaults out of n), given individual default probability m
# and default correlation r (hence a = m(1-r)/r)
CDF=function(x=10,r=.4,m=.1,n=50){
 a=m*(1-r)/r
 V=rep(NA,n+1)
 for(i in 0:n){
  V[i+1]=proba(i,a,m,n)}
 V=V/sum(V)   # renormalize to absorb numerical integration error
 return(sum(V[1:(x+1)]))}
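To make the parameterization explicit (this is the standard exchangeable Bernoulli–Beta mixture computation, not spelled out in the code): if, given a latent default probability $p\sim\mathrm{Beta}(a,b)$, defaults are independent Bernoulli($p$) indicators $X_i$, then

$$\mathbb{E}[p]=\frac{a}{a+b}=m \qquad\text{and}\qquad \mathrm{corr}(X_i,X_j)=\frac{1}{a+b+1}=r,$$

so that solving for the parameters gives $a=m(1-r)/r$ and $b=a(1-m)/m=a/m-a$, which are exactly the values used in proba and CDF above.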

It is then possible to calculate, for a wide range of correlations, the probability of having more than 20% of defaults in the portfolio (in order to compare things that are comparable).

A=seq(.01,.99,by=.01)                  # grid of default correlations
VQ=matrix(NA,length(A),2)
for(i in 1:length(A)){
VQ[i,1]=1-CDF(r=A[i],x=4,n=20)          # P(more than 4 defaults out of 20)
VQ[i,2]=1-CDF(r=A[i],x=200,n=1000)}     # P(more than 200 defaults out of 1,000)

With 20 firms (corporate), we look at the probability of having more than 4 defaults, while with 1,000 firms (retail) the threshold is 200 defaults. As mentioned in a previous post, the relationship between correlation and quantiles of sums is not simple; in particular, it need not be monotone. The dotted line is the probability of having more than 4 defaults when the default correlation is 50% (around 10%). The solid line is the probability of having more than 200 defaults, as a function of the correlation,

plot(A,VQ[,2],type="l",col="red",ylim=c(0,.22))   # retail portfolio
abline(h=VQ[50,1],lty=2,col="red")                # corporate portfolio, correlation 50%

In that case, with a correlation of only 10% among retail firms, the probability of having more than 20% of defaults is the same as for the corporate portfolio with a 50% correlation… One should remember that in portfolio analysis, the link between correlation, dimension and risk measures is a sensitive issue…