Category Archives: Copulas

The Pay-for-Performance Myth

Last week, Eric Chemi and Ariana Giorgi published an interesting article on “The Pay-for-Performance Myth”.

With all the public chatter about exorbitant executive compensation and income inequality, it’s useful to look at the relationship between chief executive officer pay and corporate performance. Typically, when the subject of their big pay packages arises, CEOs—usually through their spokespeople—say they are paid for performance. Does data back that up?

An analysis of compensation data publicly released by Equilar shows little correlation between CEO pay and company performance. Equilar ranked the salaries of 200 highly paid CEOs. When compared to metrics such as revenue, profitability, and stock return, the scattering of data looks pretty random, as though performance doesn’t matter. The comparison makes it look as if there is zero relationship between pay and performance.

In the article, they produce a copula-type plot (since only ranks are considered). Ariana kindly sent me the dataset (the one used in The Pay at the Top) to play with it.

> base=read.table("ceo.csv",sep=";",header=TRUE)

Here I normalize the ranks (dividing by the size of the dataset, plus one) to get pseudo-observations with a uniform distribution on the unit interval (instead of working with ranks, i.e. integers). If we remove that scaling factor, the scatterplot is the same as the one in the Pay-for-Performance Myth article.

> n=nrow(base)
> U=rank(base[,1])/(n+1)
> V=rank(base[,2])/(n+1)
> plot(U,V,xlab="Rank CEO Pay",
+ ylab="Rank Stock Return")

This is the copula type representation.

If we visualize the density of the copula (using the algorithm described in the joint paper with Gery and Davy), we get the following

> library("copula")
> library("ks")
> library("MASS")
> library("locfit")
> UVs=cbind(U,V)   # pseudo-sample built from the ranks above
> n.res=32
> ctilde1=probtranscopkde(UVs,p=1,
+ u.out=seq(1/(2*n.res+1),1-1/(2*n.res+1),
+ length=n.res),plots=TRUE)


Conditional Distributions from some Elliptical Vectors

This winter, in my ACT8595 course, I asked my students (as homework) to prove that it is possible to derive the conditional distribution when we have a Student-t random vector (and to obtain the analytical expression of the latter). But first, let us recall the standard result for the Gaussian vector. If $\boldsymbol{X}=(\boldsymbol{X}_1,\boldsymbol{X}_2)$ is a Gaussian random vector, i.e.

$$\boldsymbol{X}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma}),\qquad \boldsymbol{\mu}=\left(\begin{array}{c}\boldsymbol{\mu}_1\\ \boldsymbol{\mu}_2\end{array}\right),\qquad \boldsymbol{\Sigma}=\left(\begin{array}{cc}\boldsymbol{\Sigma}_{11}&\boldsymbol{\Sigma}_{12}\\ \boldsymbol{\Sigma}_{21}&\boldsymbol{\Sigma}_{22}\end{array}\right),$$

then $\boldsymbol{X}_1\vert\boldsymbol{X}_2=\boldsymbol{x}_2$ has a Gaussian distribution. More precisely, it is a $\mathcal{N}(\boldsymbol{\mu}^\star,\boldsymbol{\Sigma}^\star)$ distribution, with

$$\boldsymbol{\mu}^\star=\boldsymbol{\mu}_1+\boldsymbol{\Sigma}_{12}\boldsymbol{\Sigma}_{22}^{-1}(\boldsymbol{x}_2-\boldsymbol{\mu}_2),$$

and $\boldsymbol{\Sigma}^\star=\boldsymbol{\Sigma}_{11}-\boldsymbol{\Sigma}_{12}\boldsymbol{\Sigma}_{22}^{-1}\boldsymbol{\Sigma}_{21}$ is the Schur complement of the block $\boldsymbol{\Sigma}_{22}$ of the matrix $\boldsymbol{\Sigma}$.

Observe that $\boldsymbol{\Sigma}_{12}\boldsymbol{\Sigma}_{22}^{-1}$ is also related to a well-known quantity: in the bivariate case, where $X_1$ and $X_2$ are univariate Gaussian variables,

$$\Sigma_{12}\Sigma_{22}^{-1}=\frac{\text{cov}(X_1,X_2)}{\text{var}(X_2)},$$

which is the slope in the linear regression of $X_1$ on $X_2$.

In the case of the Student-t distribution, the conditional distribution will not be a Student-t distribution anymore, but it will still be an elliptical distribution, and some interpretations of various quantities can actually be obtained.

The density of the multivariate centred Student-t distribution, with unit variance and parameters $\nu$ and $\boldsymbol{R}$, is

$$f(\boldsymbol{x})= \frac{\Gamma([d+\nu]/2)}{(\nu\pi)^{d/2} \Gamma(\nu/2)\vert\boldsymbol{R}\vert^{1/2}} \left( 1+\frac{1}{\nu}\boldsymbol{x}'\boldsymbol{R}^{-1}\boldsymbol{x} \right)^{-(d+\nu)/2}$$

If we consider the following blocks,

$$\boldsymbol{R}= \left( \begin{array}{cc} \boldsymbol{R}_{11}& \boldsymbol{R}_{12}\\ \boldsymbol{R}_{21}& \boldsymbol{R}_{22} \end{array} \right)$$

then the marginal distributions are centred Student-t, with unit variance and parameters $\nu$ and $\boldsymbol{R}_{22}$,

$$f_2(\boldsymbol{x}_2)= \frac{\Gamma([d_2+\nu]/2)}{(\nu\pi)^{d_2/2} \Gamma(\nu/2)\vert\boldsymbol{R}_{22}\vert^{1/2}} \left( 1+\frac{1}{\nu}\boldsymbol{x}_2'\boldsymbol{R}_{22}^{-1}\boldsymbol{x}_2 \right)^{-(d_2+\nu)/2}$$

Then, to derive the conditional density, we can use Bayes formula,

$$f_{1\vert 2}(\boldsymbol{x}_1\vert \boldsymbol{x}_2)= \frac{f(\boldsymbol{x}_1,\boldsymbol{x}_2)}{f_2(\boldsymbol{x}_2)}$$

One can write (as in Section 9.1 in Tong, 1990, The Multivariate Normal Distribution, but other expressions can be found in Section 2.5 in Fang, Ng and Kotz, 1989, Symmetric multivariate and related distributions, or in Section 1.11 in Kotz and Nadarajah, 2004, Multivariate t distributions and their applications) this conditional density as

$$f_{1\vert 2}(\boldsymbol{x}_1\vert \boldsymbol{x}_2)=\kappa \left(1+\frac{1}{\nu}\boldsymbol{x}_2'\boldsymbol{R}_{22}^{-1}\boldsymbol{x}_2\right)^{(d_2+\nu)/2} \left(1+\frac{1}{\nu}\left[\boldsymbol{x}_2'\boldsymbol{R}_{22}^{-1}\boldsymbol{x}_2+\alpha(\boldsymbol{x}_1,\boldsymbol{x}_2)\right]\right)^{-(d_1+d_2+\nu)/2}$$

with

$$\kappa=\frac{\Gamma([d+\nu]/2)}{(\nu\pi)^{d_1/2} \Gamma([d_2+\nu]/2)}\frac{1}{\vert\boldsymbol{R}_{11}-\boldsymbol{R}_{12}\boldsymbol{R}_{22}^{-1}\boldsymbol{R}_{21}\vert^{1/2}}$$

and

$$\alpha(\boldsymbol{x}_1,\boldsymbol{x}_2)=(\boldsymbol{x}_1-\boldsymbol{R}_{12}\boldsymbol{R}_{22}^{-1}\boldsymbol{x}_{2})' [\boldsymbol{R}_{11}-\boldsymbol{R}_{12}\boldsymbol{R}_{22}^{-1}\boldsymbol{R}_{21}]^{-1}(\boldsymbol{x}_1-\boldsymbol{R}_{12}\boldsymbol{R}_{22}^{-1}\boldsymbol{x}_{2})$$

This conditional distribution is elliptical but, at first sight, it does not look like a Student-t distribution (see, however, the June 2016 update below).

[June 2016 update] Actually, as shown in Ding (2016), this is a Student-t distribution: “Kotz & Nadarajah (2004) and Nadarajah & Kotz (2005) failed to recognize that the conditional distribution of the MVT distribution is also a MVT distribution due to the complexity of the conditional density function […] Conditional distributions of elliptically contoured distributions are also elliptically contoured distributions. But this does not immediately guarantee that conditional distributions of the MVT distributions are also MVT distributions without some further algebra.”

Now, if we look at the components of this density, we can observe the term

$$\boldsymbol{x}_1-\boldsymbol{R}_{12}\boldsymbol{R}_{22}^{-1}\boldsymbol{x}_{2},$$

which was mentioned previously, in the Gaussian case: the term on the right is the conditional mean,

$$\mathbb{E}[\boldsymbol{X}_1\vert\boldsymbol{X}_2=\boldsymbol{x}_2]=\boldsymbol{R}_{12}\boldsymbol{R}_{22}^{-1}\boldsymbol{x}_{2},$$

and the block that appears in several places is the conditional variance (in the Gaussian case),

$$\boldsymbol{R}_{11}-\boldsymbol{R}_{12}\boldsymbol{R}_{22}^{-1}\boldsymbol{R}_{21}.$$

Now, if we want to visualize that conditional density, let us plot it. The code below is based on Bayes formula

> library(mnormt)
> r=.6
> R=matrix(c(1,r,r,1),2,2)
> nu=4
> f2=function(x2) dt(x2,df=nu)
> f =function(x) dmt(x,S=R,df=nu)
> f1.2=function(x1,x2) f(c(x1,x2))/f2(x2)
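Given the June 2016 update above, one can actually check numerically that this conditional density is a (shifted and rescaled) Student-t distribution with $\nu+1$ degrees of freedom. Here is a small sketch of such a check, reusing f1.2, r and nu defined above; the location $r\,x_2$ and the scale $\sqrt{(\nu+x_2^2)/(\nu+1)\,(1-r^2)}$ are the ones given in Ding (2016) for the bivariate case.

x2=-1.5
s=sqrt((nu+x2^2)/(nu+1)*(1-r^2))   # conditional scale, bivariate case
vx=seq(-3,3,by=.5)
max(abs(Vectorize(function(x) f1.2(x,x2))(vx)-dt((vx-r*x2)/s,df=nu+1)/s))
# should be numerically negligible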

In order to compare that conditional density with a Student-t one, let us define the density of a non-centred Student-t random variable,

> dstd=function(x,mu,s,nu) gamma((nu+1)/2)/
+ (gamma(nu/2)*s*sqrt(pi*nu))*
+ (1+1/nu*(x-mu)^2/(s^2))^(-(nu+1)/2)

Here is the function we can use to plot those two densities,

> graphdensity=function(x2=-1.5){
+ vectx1=seq(-3,3,length=251)
+ y=Vectorize(function(x) f1.2(x,x2))(vectx1)
+ plot(vectx1,y,type="l",col="red",ylim=c(0,.5),
+ xlab="",ylab="")
+ abline(v=r*x2,lty=2)
+ lines(vectx1,dstd(vectx1,x2*r,sqrt(1-r^2),nu),col="blue",lty=2)}
> graphdensity(-1.5)

In the case where $x_2=-1$, the two lines are rather close (the remaining difference might come from computational issues, and from the fact that the blue curve is only an approximation of the exact conditional density)

> graphdensity(-1)

and just to conclude, a last one

> graphdensity(0)

On Hoeffding’s identity

In 1940, Wassily Hoeffding published Masstabinvariante Korrelationstheorie, which was an impressive paper. For those (like me) who unfortunately barely speak German, an English translation could be found in The Collected Works of Wassily Hoeffding, published a few years ago. As I keep saying in my courses about copulas, almost everything was in that paper, by Wassily Hoeffding. For instance, we can see the following graph, of a cumulative distribution function,

What is the difference with a copula? A copula (in dimension 2) is the cumulative distribution function of a random pair with uniform margins on $[0,1]$, as defined by Abe Sklar.

Wassily Hoeffding considered instead a random pair with uniform margins on $[-1/2,+1/2]$. But everything else is the same. He could even derive the level curves of the density of the Gaussian copula,

> library(mnormt)
> r=.6
> dc=function(u,v) return(
+ as.numeric(dmnorm(cbind(qnorm(u),qnorm(v)),varcov=
+ matrix(c(1,r,r,1),2,2))/dnorm(qnorm(u))/dnorm(qnorm(v))))
> n=500
> vectu=seq(1/n,1-1/n,length=n-1)
> matdc=outer(vectu,vectu,dc)
> contour(vectu,vectu,matdc,levels=
+ c(.325,.944,1.212,1.250,1.290,1.656,3.85),lwd=2)

 

But another interesting point is that there is the so-called Hoeffding identity

$$\text{cov}(X,Y)=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\big[F_{X,Y}(x,y)-F_X(x)F_Y(y)\big]\,dx\,dy,$$

which is interesting, and quite important, actually, to understand that the covariance (or the correlation) can be seen as some ‘distance‘ to independence. More precisely, observe that

$$\text{cov}(X,Y)=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\big[F_{X,Y}(x,y)-F^\perp(x,y)\big]\,dx\,dy,$$

where $F^\perp(x,y)=F_X(x)F_Y(y)$ would be the joint cumulative distribution function of some independent variables, with the same marginal distributions.
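As a quick numerical sanity check (a small sketch, not taken from Hoeffding's paper), one can evaluate the double integral on a grid for a bivariate Gaussian pair with unit variances and correlation $r$, and compare it with the covariance, which is simply $r$; this reuses r and the mnormt package loaded above.

FXY=function(x,y) pmnorm(cbind(x,y),varcov=matrix(c(1,r,r,1),2,2))
h=.1
g=seq(-5,5,by=h)
sum(outer(g,g,function(x,y) FXY(x,y)-pnorm(x)*pnorm(y)))*h^2
# should be close to r=0.6 (up to discretization error)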

Of course, it is not exactly a distance, since it can be negative. But still. Now, the thing is that the proof is not trivial, but it uses interesting identities. For instance, in 1885, Franklin wrote a nice paper, Proof of a Theorem of Tchebycheff’s on Definite Integrals, in the American Journal of Mathematics. To get some heuristics about the identity, consider some (finite) sequences $(x_i)$ and $(y_i)$; then one can prove that

$$n\sum_{i=1}^n x_iy_i-\left(\sum_{i=1}^n x_i\right)\left(\sum_{i=1}^n y_i\right)=\frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n (x_i-x_j)(y_i-y_j).$$
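And here is a quick numerical check of that discrete identity (a small sketch, not taken from Franklin's paper):

set.seed(1)
m=10
x=rnorm(m)
y=rnorm(m)
lhs=m*sum(x*y)-sum(x)*sum(y)
rhs=sum(outer(x,x,"-")*outer(y,y,"-"))/2
all.equal(lhs,rhs)   # TRUE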

And there is a continuous version of that identity: consider two bounded functions $f$ and $g$ on some interval $[a,b]$; then

$$(b-a)\int_a^b f(x)g(x)\,dx-\left(\int_a^b f(x)\,dx\right)\left(\int_a^b g(x)\,dx\right)$$

is equal to

$$\frac{1}{2}\int_a^b\int_a^b \big[f(x)-f(y)\big]\big[g(x)-g(y)\big]\,dx\,dy.$$

In 1979, in Monotone Regression and Covariance Structure, Gerald Shea gave a more probabilistic interpretation of that result, using the fact that the identity remains valid for a general measure. More precisely, assume now that the functions $f$ and $g$ are integrable, with respect to some measure $\mu$, on some set $\Omega$. Then

$$\mu(\Omega)\int_\Omega f(\omega)g(\omega)\,d\mu(\omega)-\left(\int_\Omega f(\omega)\,d\mu(\omega)\right)\left(\int_\Omega g(\omega)\,d\mu(\omega)\right)$$

is equal to

$$\frac{1}{2}\int_\Omega\int_\Omega \big[f(\omega_1)-f(\omega_2)\big]\big[g(\omega_1)-g(\omega_2)\big]\,d\mu(\omega_1)\,d\mu(\omega_2).$$

In the case where $\mu$ is the probability measure of the pair $(X,Y)$, i.e. $\mu(\Omega)=1$ with $\Omega=\mathbb{R}^2$, this equality is the one used by Wassily Hoeffding, in 1940. The interpretation in terms of random variables is simply that

$$\text{cov}\big(f(X,Y),g(X,Y)\big)=\frac{1}{2}\,\mathbb{E}\Big[\big(f(X_1,Y_1)-f(X_2,Y_2)\big)\big(g(X_1,Y_1)-g(X_2,Y_2)\big)\Big]$$

(with standard assumptions on the existence of those quantities), where $(X_1,Y_1)$ and $(X_2,Y_2)$ are two independent vectors, with the same distribution as $(X,Y)$. Actually, this relationship can also be found in Some Concepts of Dependence, by E. L. Lehmann, published in 1966. Oh, and by the way, the connection with Chebyshev's inequality (claimed in the title of the seminal paper by Franklin) comes from the fact that if $f$ and $g$ are monotonic in the same direction, then both sides of the identity are non-negative, and thus

$$\mu(\Omega)\int_\Omega f\,g\,d\mu\geq \left(\int_\Omega f\,d\mu\right)\left(\int_\Omega g\,d\mu\right).$$

But let's get back to Hoeffding's result. How do we get it from that lemma? The idea is to write

$$2\,\text{cov}(X,Y)=\mathbb{E}\big[(X_1-X_2)(Y_1-Y_2)\big]$$

as

$$\mathbb{E}\left[\left(\int_{-\infty}^{+\infty}\big[\boldsymbol{1}(X_2\leq x)-\boldsymbol{1}(X_1\leq x)\big]\,dx\right)\left(\int_{-\infty}^{+\infty}\big[\boldsymbol{1}(Y_2\leq y)-\boldsymbol{1}(Y_1\leq y)\big]\,dy\right)\right]$$

i.e.

$$\mathbb{E}\left[\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\big[\boldsymbol{1}(X_2\leq x)-\boldsymbol{1}(X_1\leq x)\big]\big[\boldsymbol{1}(Y_2\leq y)-\boldsymbol{1}(Y_1\leq y)\big]\,dx\,dy\right].$$

We can then interchange the integral and the expectation, use the fact that, by independence of the two vectors,

$$\mathbb{E}\big[\boldsymbol{1}(X_2\leq x)\,\boldsymbol{1}(Y_2\leq y)\big]=F_{X,Y}(x,y)\qquad\text{while}\qquad \mathbb{E}\big[\boldsymbol{1}(X_1\leq x)\,\boldsymbol{1}(Y_2\leq y)\big]=F_X(x)F_Y(y),$$

and then some integral calculus can be used to rewrite that expression as

$$2\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\big[F_{X,Y}(x,y)-F_X(x)F_Y(y)\big]\,dx\,dy.$$

So we get here Hoeffding's identity. Actually, as mentioned by Ben Derrett about the equality above, it can be observed (see http://math.stackexchange.com/105713) that

$$2\,\text{cov}(X,Y)=2\big(\mathbb{E}[XY]-\mathbb{E}[X]\mathbb{E}[Y]\big)$$

can also be written

$$\mathbb{E}\big[(X_1-X_2)(Y_1-Y_2)\big],$$

where again $(X_1,Y_1)$ and $(X_2,Y_2)$ are two independent vectors with the same distribution as $(X,Y)$. The latter can be written

$$\mathbb{E}\left[\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\big[\boldsymbol{1}(X_2\leq x)-\boldsymbol{1}(X_1\leq x)\big]\big[\boldsymbol{1}(Y_2\leq y)-\boldsymbol{1}(Y_1\leq y)\big]\,dx\,dy\right].$$

Copula Density Estimation

The joint paper, written with Gery Geenens and Davy Paindaveine, entitled “Probit transformation for nonparametric kernel estimation of the copula density”, is now online at http://arxiv.org/abs/1404.4414

Copula modelling has become ubiquitous in modern statistics. Here, the problem of nonparametrically estimating a copula density is addressed. Arguably the most popular nonparametric density estimator, the kernel estimator is not suitable for the unit-square-supported copula densities, mainly because it is heavily affected by boundary bias issues. In addition, most common copulas admit unbounded densities, and kernel methods are not consistent in that case. In this paper, a kernel-type copula density estimator is proposed. It is based on the idea of transforming the uniform marginals of the copula density into normal distributions via the probit function, estimating the density in the transformed domain, which can be accomplished without boundary problems, and obtaining an estimate of the copula density through back-transformation. Although natural, a raw application of this procedure was, however, seen not to perform very well in the earlier literature. Here, it is shown that, if combined with local likelihood density estimation methods, the idea yields very good and easy to implement estimators, fixing boundary issues in a natural way and able to cope with unbounded copula densities. The asymptotic properties of the suggested estimators are derived, and a practical way of selecting the crucially important smoothing parameters is devised. Finally, extensive simulation studies and a real data analysis evidence their excellent performance compared to their main competitors.”
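To give a flavour of the probit-transformation idea with off-the-shelf tools, here is a minimal sketch (this is not the estimator proposed in the paper, only the naive transformation approach it builds upon and improves): pseudo-observations are pushed through the probit function, a standard bivariate kernel density estimate is computed in the Gaussian domain, and the estimate is transformed back to the unit square.

library(MASS)
set.seed(1)
n=500
z=mvrnorm(n,c(0,0),matrix(c(1,.6,.6,1),2,2))    # simulated sample
u=apply(z,2,rank)/(n+1)                         # pseudo-observations
s=qnorm(u)                                      # probit transformation
kde=kde2d(s[,1],s[,2],n=101,lims=c(-3,3,-3,3))  # kernel estimate, Gaussian domain
chat=kde$z/outer(dnorm(kde$x),dnorm(kde$y))     # back-transformation
contour(pnorm(kde$x),pnorm(kde$y),chat)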

Modeling the Marginals and the Dependence separately

When introducing copulas, it is commonly admitted that copulas are interesting because they allow us to model the marginals and the dependence structure separately. The motivation is probably Sklar's theorem, which says that given some marginal cumulative distribution functions (say $F_1$ and $F_2$, in dimension 2) and a copula (denoted $C$), we can generate a multivariate cumulative distribution function with the marginals specified previously, using

$$F(x_1,x_2)=C\big(F_1(x_1),F_2(x_2)\big).$$

But this separability might be misleading. Consider the case of a fully parametric model,

$$F_{\boldsymbol{\theta}_1,\boldsymbol{\theta}_2,\boldsymbol{\alpha}}(x_1,x_2)=C_{\boldsymbol{\alpha}}\big(F_{\boldsymbol{\theta}_1}(x_1),F_{\boldsymbol{\theta}_2}(x_2)\big).$$

Assume that those distributions are continuous, so that we can write the likelihood using densities,

$$f_{\boldsymbol{\theta}_1,\boldsymbol{\theta}_2,\boldsymbol{\alpha}}(x_1,x_2)=c_{\boldsymbol{\alpha}}\big(F_{\boldsymbol{\theta}_1}(x_1),F_{\boldsymbol{\theta}_2}(x_2)\big)\,f_{\boldsymbol{\theta}_1}(x_1)\,f_{\boldsymbol{\theta}_2}(x_2),$$

and the log-likelihood is

$$\log\mathcal{L}(\boldsymbol{\theta}_1,\boldsymbol{\theta}_2,\boldsymbol{\alpha})=\sum_{i=1}^n\log f_{\boldsymbol{\theta}_1}(x_{1,i})+\sum_{i=1}^n\log f_{\boldsymbol{\theta}_2}(x_{2,i})+\sum_{i=1}^n\log c_{\boldsymbol{\alpha}}\big(F_{\boldsymbol{\theta}_1}(x_{1,i}),F_{\boldsymbol{\theta}_2}(x_{2,i})\big).$$

The first part is the log-likelihood if we consider the first marginal (only). The second part is the log-likelihood if we consider the second marginal (only). If the two components are not independent (i.e. the copula density $c_{\boldsymbol{\alpha}}$ is not equal to 1 everywhere), the third part cannot be considered as null, and so, in a general context,

$$\big(\widehat{\boldsymbol{\theta}}_1,\widehat{\boldsymbol{\theta}}_2,\widehat{\boldsymbol{\alpha}}\big)\neq\big(\widetilde{\boldsymbol{\theta}}_1,\widetilde{\boldsymbol{\theta}}_2,\widetilde{\boldsymbol{\alpha}}\big),$$

where the former is obtained by maximizing the full log-likelihood,

$$\big(\widehat{\boldsymbol{\theta}}_1,\widehat{\boldsymbol{\theta}}_2,\widehat{\boldsymbol{\alpha}}\big)=\underset{\boldsymbol{\theta}_1,\boldsymbol{\theta}_2,\boldsymbol{\alpha}}{\text{argmax}}\;\log\mathcal{L}(\boldsymbol{\theta}_1,\boldsymbol{\theta}_2,\boldsymbol{\alpha}),$$

while the latter is obtained in two steps, maximizing the marginal log-likelihoods first, and then the copula part evaluated at those marginal estimates,

$$\widetilde{\boldsymbol{\theta}}_j=\underset{\boldsymbol{\theta}_j}{\text{argmax}}\;\sum_{i=1}^n\log f_{\boldsymbol{\theta}_j}(x_{j,i}),\qquad \widetilde{\boldsymbol{\alpha}}=\underset{\boldsymbol{\alpha}}{\text{argmax}}\;\sum_{i=1}^n\log c_{\boldsymbol{\alpha}}\big(F_{\widetilde{\boldsymbol{\theta}}_1}(x_{1,i}),F_{\widetilde{\boldsymbol{\theta}}_2}(x_{2,i})\big).$$

In order to illustrate this point, consider a bivariate lognormal distribution (obtained by taking the exponential of a Gaussian vector)

> mu1=1
> mu2=2
> MU=c(mu1,mu2)
> s1=1
> s2=sqrt(2)
> r=.8
> SIGMA=matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
> library(mnormt)
> set.seed(1)
> Z=exp(rmnorm(25,MU,SIGMA))

If we believe that marginals and correlations can be treated separately, we can start with marginal distributions.

> library(MASS)
> (p1=fitdistr(Z[,1],"lognormal"))
    meanlog      sdlog  
  1.1686652   0.9309119 
 (0.1861824) (0.1316508)
> (p2=fitdistr(Z[,2],"lognormal"))
    meanlog      sdlog  
  2.2181721   1.1684049 
 (0.2336810) (0.1652374)

Based on those marginal distributions, define $\widehat{U}_i=F_{\widetilde{\boldsymbol{\theta}}_1}(x_{1,i})$ and $\widehat{V}_i=F_{\widetilde{\boldsymbol{\theta}}_2}(x_{2,i})$, and consider the maximum likelihood estimator $\widetilde{\boldsymbol{\alpha}}$ of the copula parameter, obtained from this pseudo sample,

$$\widetilde{\boldsymbol{\alpha}}=\underset{\boldsymbol{\alpha}}{\text{argmax}}\;\sum_{i=1}^n\log c_{\boldsymbol{\alpha}}\big(\widehat{U}_i,\widehat{V}_i\big).$$

Numerically, we get the following (since we consider a Gaussian copula, which is indeed the true copula of the data generated here)

> library(copula)
> Gcop=normalCopula(.3,dim=2)
> U=cbind(plnorm(Z[,1],p1$estimate[1],p1$estimate[2]),
+ plnorm(Z[,2],p2$estimate[1],p2$estimate[2]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
      Estimate Std. Error z value Pr(>|z|)    
rho.1  0.86530    0.03799   22.77

But clearly, we did not treat the dependence structure separately, since the pseudo sample used to estimate the copula parameter was itself a function of the estimated marginal distributions,

$$\widetilde{\boldsymbol{\alpha}}=\widetilde{\boldsymbol{\alpha}}\big(\widetilde{\boldsymbol{\theta}}_1,\widetilde{\boldsymbol{\theta}}_2\big).$$

If we consider a global optimization problem, then results are different. The joint density can be derived (see e.g. Mostafa & Mahmoud (1964))

> dbivlognorm=function(x,theta){
+ mu1=theta[1]
+ mu2=theta[2]
+ s1=theta[3]
+ s2=theta[4]
+ r=theta[5]
+ a1=(log(x[,1])-mu1)/s1
+ a2=(log(x[,2])-mu2)/s2
+ d=1/(2*pi*s1*s2*sqrt(1-r^2))*1/(x[,1]*x[,2])*
+ exp(-(a1^2-2*r*a1*a2+a2^2)/(2*(1-r^2)))
+ return(d)
+ }
> LogLik=function(theta){
+ return(-sum(log(dbivlognorm(Z,theta))))}
> optim(par=c(0,0,1,1,0),fn=LogLik)$par
[1] 1.1655359 2.2159767 0.9237853 1.1610132 0.8645052
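A small side-by-side comparison may help here (a sketch, reusing the objects defined above): the two-step estimates (margins first, then the copula fitted on the pseudo sample) versus the full maximum likelihood estimates, in the order $\mu_1,\mu_2,\sigma_1,\sigma_2,r$.

ts=c(p1$estimate[1],p2$estimate[1],p1$estimate[2],p2$estimate[2],
coef(fitCopula(Gcop,data=U,method="ml")))
fm=optim(par=c(0,0,1,1,0),fn=LogLik)$par
round(rbind(two.step=ts,full.mle=fm),4)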

The difference is not huge, but still. The estimators are not identical. From a statistical point of view, we can hardly treat the marginals and the dependence structure separately.

Another point we should keep in mind is that the estimation of the copula parameter depends on the margins, not only through the parameters, but more deeply, through the choice of the marginal distributions (that might be misspecified). For instance, if we assume that margins are exponentially distributed,

> (p1=fitdistr(Z[,1],"exponential"))
      rate   
  0.22288362 
 (0.04457672)
> (p2=fitdistr(Z[,2],"exponential"))
      rate   
  0.06543665 
 (0.01308733)

the estimation of the parameter of the Gaussian copula yields

> U=cbind(pexp(Z[,1],p1$estimate[1]),
+ pexp(Z[,2],p2$estimate[1]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
      Estimate Std. Error z value Pr(>|z|)    
rho.1  0.87421    0.03617   24.17   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The maximized loglikelihood is  15.4 
Optimization converged

The problem is that, since we misspecified the marginal distributions, our pseudo sample lies in the unit square, but there is no chance that we get uniform margins. If we generate a sample of size 500 with the code above,

> x <- U[,1]; y <- U[,2]
> xhist <- hist(x, plot=FALSE) ; yhist <- hist(y, plot=FALSE)
> top <- max(c(xhist$counts, yhist$counts)) 
> nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE) 
> par(mar=c(3,3,1,1)) 
> plot(x, y, xlab="", ylab="",col="red",xlim=0:1,ylim=0:1) 
> par(mar=c(0,3,1,1))
> barplot(xhist$counts, axes=FALSE, ylim=c(0, top), 
+ space=0,col="light green") 
> par(mar=c(3,0,1,1))
> barplot(yhist$counts, axes=FALSE, xlim=c(0, top), 
+ space=0, horiz=TRUE,col="light blue")

If we compare with the previous case, when the marginal distributions were well specified, we can clearly see that the estimated dependence structure depends on the marginal distributions,

Correlation with constraints on pairs

An interesting question was posted on http://math.stackexchange.com/726205/…: if one knows the covariances $\text{cov}(X_1,X_2)$ and $\text{cov}(X_1,X_3)$, is it possible to infer something about $\text{cov}(X_2,X_3)$? I asked myself a question close to this one a few weeks ago (one that I might also relate to a question I asked a long time ago, about possible correlations between three exchange rates on financial markets). More precisely, if one knows the correlations $r_{12}$ and $r_{13}$, is it possible to say something about $r_{23}$?

I could not find much detail (but maybe I did not look hard enough in the existing literature). My strategy was to consider the correlation matrix, and to use the fact that a correlation matrix is a symmetric, positive semidefinite matrix (also called a Gramian matrix, i.e. a matrix with no negative eigenvalues). Given the two known correlations, we can consider, as a function of the third correlation, whether the smallest eigenvalue is non-negative or not. Then, I look at the range of the third correlation, to get the minimum and the maximum possible values (one can prove that the set of possible values is an interval). The code to get that is simply

corrminmax=function(r1,r2){
h=function(r3){
R=matrix(c(1,r1,r2,r1,1,r3,r2,r3,1),3,3)
return(min(eigen(R)$values)>0)}
vc=seq(-1,+1,length=1e4+1)
vr=Vectorize(h)(vc)
indx=which(vr==TRUE)
return(vc[range(indx)])
}
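For comparison (a small sketch, not in the original post), in this $3\times 3$ case the admissible range also has a closed form, $r_{23}\in\big[r_{12}r_{13}-\sqrt{(1-r_{12}^2)(1-r_{13}^2)},\; r_{12}r_{13}+\sqrt{(1-r_{12}^2)(1-r_{13}^2)}\big]$, obtained by requiring the determinant of the correlation matrix to be non-negative; the numerical search above should essentially recover it.

corrminmax_exact=function(r1,r2){
d=sqrt((1-r1^2)*(1-r2^2))
c(r1*r2-d,r1*r2+d)
}
corrminmax(.5,-.3)         # grid search on the smallest eigenvalue
corrminmax_exact(.5,-.3)   # closed form, should be very close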

Using this code, it is possible to look at the smallest correlation for the third pair, as well as the maximum correlation,

x1=seq(-1,1,by=.1)
x2=seq(-1,1,by=.1)
W=M=matrix(NA,length(x1),length(x2))
for(i in 1:length(x1)){
for(j in 1:length(x2)){
C=corrminmax(x1[i],x2[j])
W[i,j]=C[1]
M[i,j]=C[2]
}}

If we plot those matrices, we get

par(mfrow=c(1,2))
persp(x1,x2,W,zlim=c(-1,1),col="green",
shade=TRUE,theta=-30)
persp(x1,x2,M,zlim=c(-1,1),col="green",
shade=TRUE,theta=-30)

and if we plot the difference, to get the length of that interval, we clearly see that the largest range is obtained when the two given correlations are null (in that case, any value is admissible for the third correlation)

Talk at CIMAT, Guanajuato, Mexico

I will be back in Guanajuato, Mexico, this week, to visit Victor Rivero. And I will give a talk at the Centro de Investigacion en Matematicas (CIMAT) this Wednesday on “Multivariate Archimax Copulas“. The slides are already online.

(there is a lot of material on copulas, as requested, to provide an introduction for students not familiar with this concept).

Bivariate Densities with N(0,1) Margins

This Monday, in the ACT8595 course, we came back to elliptical distributions and conditional independence (here is an old post on de Finetti’s theorem, and the extension to Hewitt-Savage’s). I have shown simulations to illustrate those two concepts of dependent variables, but I wanted to spend some time visualizing densities. More specifically, what could the joint density be if we assume that the margins are $\mathcal{N}(0,1)$ distributions?

  • The Bivariate Gaussian distribution

Here, we consider a Gaussian random vector, with $\mathcal{N}(0,1)$ margins, and with correlation $r=0.5$. This is the standard graph, with elliptical isodensity curves

r=.5
library(mnormt)
S=matrix(c(1,r,r,1),2,2)
f=function(x,y) dmnorm(cbind(x,y),varcov=S)
vx=seq(-3,3,length=201)
vy=seq(-3,3,length=201)
z=outer(vx,vy,f)
set.seed(1)
X=rmnorm(1500,varcov=S)
xhist <- hist(X[,1], plot=FALSE)
yhist <- hist(X[,2], plot=FALSE)
top <- max(c(xhist$density, yhist$density,dnorm(0)))
nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE)
par(mar=c(3,3,1,1))
image(vx,vy,z,col=rev(heat.colors(101)))
contour(vx,vy,z,col="blue",add=TRUE)
points(X,cex=.2)
par(mar=c(0,3,1,1))
barplot(xhist$density, axes=FALSE, ylim=c(0, top), space=0,col="light green")
lines((density(X[,1])$x-xhist$breaks[1])/diff(xhist$breaks)[1],
dnorm(density(X[,1])$x),col="red")
par(mar=c(3,0,1,1))
barplot(yhist$density, axes=FALSE, xlim=c(0, top), space=0, 
horiz=TRUE,col="light green")
lines(dnorm(density(X[,2])$x),(density(X[,2])$x-yhist$breaks[1])/
diff(yhist$breaks)[1],col="red")

That was the simple part.

  • The Bivariate Student-t distribution

Consider now another elliptical distribution. But we want here to normalize the margins. Thus, instead of a pair $(T_1,T_2)$ with a bivariate Student-t distribution, we would like to consider the pair $(G(T_1),G(T_2))$, where $G=\Phi^{-1}\circ F_{t_\nu}$ maps the Student-t quantiles to the Gaussian ones, so that the marginal distributions are $\mathcal{N}(0,1)$. The new density is obtained simply, since the transformation is one-to-one and increasing. Here, we have

$$f_{(G(T_1),G(T_2))}(x,y)=\frac{f_{(T_1,T_2)}\big(G^{-1}(x),G^{-1}(y)\big)}{G'\big(G^{-1}(x)\big)\,G'\big(G^{-1}(y)\big)}.$$

k=3
r=.5
G=function(x) qnorm(pt(x,df=k))
dg=function(x) dt(x,df=k)/dnorm(qnorm(pt(x,df=k)))   # derivative of G
Ginv=function(x) qt(pnorm(x),df=k)
S=matrix(c(1,r,r,1),2,2)
# Jacobian evaluated at Ginv(x) and Ginv(y), so that the margins are N(0,1)
f=function(x,y) dmt(cbind(Ginv(x),Ginv(y)),S=S,df=k)/(dg(Ginv(x))*dg(Ginv(y)))
vx=seq(-3,3,length=201)
vy=seq(-3,3,length=201)
z=outer(vx,vy,f)
set.seed(1)
Z=rmt(1500,S=S,df=k)
X=G(Z)
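As a quick sanity check of that change of variables (a small sketch, reusing the function f defined above), the first margin of f should be exactly standard Gaussian:

integrate(Vectorize(function(y) f(1,y)),-Inf,Inf)$value   # should be close to dnorm(1), i.e. about 0.242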

Because we considered a nonlinear transformation of the margins, the level curves are no longer elliptical. But there is still some kind of symmetry.

  • The Exchangeable Case with Conditionally Independent Random Variables

We did consider the case where $X_1$ and $X_2$ are conditionally independent random variables given $\Theta$, both being exponentially distributed with parameter $\Theta$. As we’ve seen in class, it might be difficult to visualize that sample, unless we have log scales on both axes. But instead of a log transformation, why not consider a transformation so that the margins will be $\mathcal{N}(0,1)$? The only technical problem is that we do not have explicit expressions for the (unconditional) distributions of the margins; well, we have them, but only in integral form. From a computational point of view, that’s not a big deal. Computations might take a while, but we can visualize the density using the following code (here, we assume that $\Theta$ is Gamma distributed)

a=.6
b=1
h=.0001
G=function(x) qnorm(ifelse(x<0,0,integrate(function(z) pexp(x,z)*
dgamma(z,a,b),lower=0,upper=Inf)$value))
Ginv=function(x) uniroot(function(z) G(z)-x,lower=-40,upper=1e5)$root
dg=function(x) (Ginv(x+h)-Ginv(x-h))/2/h
H=function(xy) integrate(function(z) dexp(xy[2],z)*dexp(xy[1],z)*
dgamma(z,a,b),lower=0,upper=Inf)$value
f=function(x,y) H(c(Ginv(x),Ginv(y)))*(dg(x)*dg(y))
vx=seq(-3,3,length=151)
vy=seq(-3,3,length=151)
z=matrix(NA,length(vx),length(vy))
for(i in 1:length(vx)){
for(j in 1:length(vy)){
z[i,j]=f(vx[i],vy[j])}}
set.seed(1)
Theta=rgamma(1500,a,b)
Z=cbind(rexp(1500,Theta),rexp(1500,Theta))
X=cbind(Vectorize(G)(Z[,1]),Vectorize(G)(Z[,2]))
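Note, as a side remark (a sketch, not in the original code), that the integral defining G actually has a closed form here: mixing an exponential rate over a Gamma(a,b) distribution yields the survival function $(b/(b+x))^a$ (a Pareto-type tail), which can be used to cross-check the numerical version above.

G2=function(x) qnorm(ifelse(x<0,0,1-(b/(b+x))^a))   # closed-form version of G
c(G(1),G2(1))   # should agree up to numerical integration error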

There is a small technical problem, but no big deal.

Here, the joint distribution is quite different. Margins are – one more time – standard Gaussian, but the shape of the joint distribution is not the same, with an asymmetry from the lower (left) tail to the upper (right) tail. More details when we introduce copulas. The only difference will be that the margins will be uniform on the unit interval, and not standard Gaussian.

Multivariate Archimax copulas

Our paper, written jointly also with Anne-Laure Fougères, Christian Genest and Johanna Nešlehová, entitled Multivariate Archimax Copulas, should appear some day in the Journal of Multivariate Analysis.

A multivariate extension of the bivariate class of Archimax copulas was recently proposed by Mesiar & Jagr (2013), who asked under which conditions it holds. This paper answers their question and provides a stochastic representation of multivariate Archimax copulas. A few basic properties of these copulas are explored, including their minimum and maximum domains of attraction. Several non-trivial examples of multivariate Archimax copulas are also provided.

In this paper, we extend the class of Archimax copulas, introduced in dimension 2 in Bivariate Distributions with Given Extreme Value Attractor, by Philippe Capéraà, Anne-Laure Fougères and Christian Genest, inspired by some ideas mentioned in a paper published in Kybernetika a few years ago. I will try to post additional material, soon…

Conditional dependence measures

This week, I spent some time at the Workshop on Nonparametric Curve Smoothing at Concordia. Yesterday afternoon, Noël Veraverbeke showed an interesting graph to illustrate conditional copulas (and the derivation of conditional dependence measures, such as Kendall’s tau, or Spearman’s rho). A long time ago, in my PhD thesis (mainly on conditional copulas), I did try to derive conditional dependence measures (in a dedicated chapter). In my PhD, I was interested in describing the dependence of a pair $(Y_1,Y_2)$ given $(Y_1,Y_2)\in\mathcal{V}$, where $\mathcal{V}$ is a region of interest, such as the tails. So I wanted to study the behaviour of $(Y_1,Y_2)$ given $\{Y_1>t,Y_2>t\}$. This has an interpretation when studying large risks, but also in joint life mortality.

In the paper Noël mentioned, they want to describe the dependence of a pair $(Y_1,Y_2)$ given a covariate $X=x$. And he came up with this very nice example: consider expected lifetimes, for males and females, in various countries. You can get zipped files with data for males and females, and we can use the GDP per capita as our covariate. Here is the code to visualize life expectancies,

b1=read.table("sp.dyn.le00.fe.in_Indicator_en_csv_v2.csv",header=TRUE,sep=",",skip=2)
b2=read.table("sp.dyn.le00.ma.in_Indicator_en_csv_v2.csv",header=TRUE,sep=",",skip=2)
b3=read.table("ny.gdp.pcap.cd_Indicator_en_csv_v2.csv",header=TRUE,sep=",",skip=2)
b1b=b1[,c(1,2,55)]
b2b=b2[,c(1,2,55)]
b3b=b3[,c(1,2,55)]
names(b1b)[3]="LEF"
names(b2b)[3]="LEM"
names(b3b)[3]="GPD"
b=merge(b1b,b2b)
b=merge(b,b3b)
plot(b$LEM,b$LEF,xlab="Life Expectancy (male vs. female)")

With this graph, we cannot visualize the link with the covariate,

b$cgpd=cut(b$GPD,quantile(b$GPD,seq(0,1,by=1/6),na.rm=TRUE))
levels(b$cgpd)=as.character(1:6)
library(RColorBrewer)
CL=brewer.pal(6, "RdBu")	
plot(b$LEM,b$LEF,xlab="Life Expectancy (male vs. female)",pch=19,col=CL[as.numeric(b$cgpd)])

Here, poor countries are in red, and rich countries in blue,

Clearly, life expectancy is connected to the wealth of the country,

plot(b$GPD,b$LEF,xlab="(Female) Life Expectancy vs. GPD (log scale)",pch=19,col=CL[as.numeric(b$cgpd)],log="x")
plot(b$GPD,b$LEM,xlab="(Male) Life Expectancy vs. GPD (log scale)",pch=19,col=CL[as.numeric(b$cgpd)],log="x")

The idea here is to consider the conditional dependence structure, given the wealth. If we want something smooth (this is actually the goal of the workshop, but I'd like to get something quickly), consider some weighted version of Kendall's tau, based on the idea mentioned in a post on http://stackoverflow.com/

The idea is to use concordance and discordance counts, with replications of the data, based on the weights

P = function(t) {   
  r_ndx = row(t)
  c_ndx = col(t)
  sum(t * mapply(function(r, c){sum(t[(r_ndx > r) & (c_ndx > c)])},
    r = r_ndx, c = c_ndx))}
Q = function(t) {
  r_ndx = row(t)
  c_ndx = col(t)
  sum(t * mapply( function(r, c){
      sum(t[(r_ndx > r) & (c_ndx < c)])
  },
    r = r_ndx, c = c_ndx) )
}
kendall_tau_c = function(t){
    t = as.matrix(t) 
    m = min(dim(t))
    n = sum(t)
    ks_tauc = (m*2*(P(t)-Q(t)))/((n*n)*(m-1))
}
I=is.na(b$GPD)
bw=density(log(b$GPD[!I]))$bw
kendall.weight=function(x){
df=data.frame(Y1=b$LEF, Y2=b$LEM, freq=trunc(dnorm(log(b$GPD)-log(x),sd=bw)*100))
df=df[!is.na(df$freq),]
dfrep=data.frame( lapply(df, function(x){rep(x, df$freq)}))
t=xtabs(~ Y1+Y2, dfrep)
return(kendall_tau_c(t))}
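For reference (a small sketch, not in the original analysis), the unconditional Kendall's tau on the same pairs can be computed directly; the weighted versions below should fluctuate around that value:

cor(b$LEF,b$LEM,method="kendall",use="pairwise.complete.obs")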

Here, I use weights given by a Gaussian kernel on the logarithm of the GDP per capita (the standard deviation of the Gaussian weight being equal to the bandwidth of the Gaussian kernel density estimate of the log GDP per capita). Then, we can compute various conditional Kendall's taus,

T=exp(seq(6,11.5,length=50))
K=Vectorize(kendall.weight)(T)

and plot them,

plot(T,K,type="l",xlab="Conditional Kendall's tau vs. GPD (log scale)")

There is more “correlation” between the lifetimes of men and women in poor countries than in rich countries (which is also what Noël observed). Now, we can also play with time, because we have those statistics for several years.

Graduate Course on Copulas and Extreme Values

This Winter, I will be giving a (graduate) course on extreme values, and copulas (more generally multivariate models and dependence), MAT8595. It is an ISM course, and even if it will probably be given in French, I will upload information here, in English. I will upload the (detailed) syllabus of the course during the Christmas holidays. But to give an overview, for those willing to register, the first part of the course will focus on extreme value theory. The references will be

The second part of the course will be on multivariate distributions. The references will be

Specific references and more details about the chapters will be given during the course. I will upload exercises this winter, as well as a list of articles that will be used for projects. Examples will be illustrated using R functions from dedicated packages.

Grades will be based on exercises (homework), a report (based on a published paper) and a final written exam.

Fractals and Kronecker product

A few years ago, I went to listen to Roger Nelsen who was giving a talk about copulas with fractal support. Roger is amazing when he gives a talk (I am also a huge fan of his books, and articles), and I really wanted to play with that concept (which he did publish later on, with Gregory Fredricks and José Antonio Rodriguez-Lallena). I did mention that idea in a paper, written with Alessandro Juri, just to mention some cases where deriving fixed point theorems is not that simple (since the limit may not exist).

The idea in the initial article was to start with something quite simple, the so-called transformation matrix, e.g.

$$T=\frac{1}{8}\left(\begin{matrix}1& 0 & 1 \\ 0 & 4 & 0 \\ 1 & 0&1\end{matrix}\right)$$

Here, in all areas with mass, we spread it uniformly (say), i.e. the support of $T(C^\perp)$ is the one below: $1/8$th of the mass is located in each corner, and $1/2$ is in the center. So if we spread the mass to have a copula (with uniform margins), we have to consider squares on the intervals $[0,1/4]$, $[1/4,3/4]$ and $[3/4,1]$,

Then the idea is to consider $T^2=\otimes^2T$, where $\otimes^2T$ is the tensor product (also called Kronecker product) of $T$ with itself. Here, the support of $T^2(C^\perp)$ is

Then, consider $T^3=\otimes^3T$, where $\otimes^3T$ is the tensor product of $T$ with itself, three times. And the support of $T^3(C^\perp)$ is

Etc. Here, it is computationally extremely simple to do it, using this Kronecker product. Recall that if $\mathbf{A}=(a_{i,j})$, then

$$\mathbf{A}\otimes\mathbf{B} = \begin{pmatrix} a_{11} \mathbf{B} & \cdots & a_{1n}\mathbf{B} \\ \vdots & \ddots & \vdots \\ a_{m1} \mathbf{B} & \cdots & a_{mn} \mathbf{B} \end{pmatrix}$$

So, we need a transformation matrix: consider the following $4\times 4$ matrix,

> k=4
> M=matrix(c(1,0,0,1,
+            0,1,1,0,
+            0,1,1,0,
+            1,0,0,1),k,k)
> M
[,1] [,2] [,3] [,4]
[1,]    1    0    0    1
[2,]    0    1    1    0
[3,]    0    1    1    0
[4,]    1    0    0    1

Once we have it, we just consider the Kronecker product of this matrix with itself, which yields a $4^2\times 4^2$ matrix,

> N=kronecker(M,M)
> N[,1:4]
[,1]  [,2] [,3] [,4]
[1,]     1    0    0    1
[2,]     0    1    1    0
[3,]     0    1    1    0
[4,]     1    0    0    1
[5,]     0    0    0    0
[6,]     0    0    0    0
[7,]     0    0    0    0
[8,]     0    0    0    0
[9,]     0    0    0    0
[10,]    0    0    0    0
[11,]    0    0    0    0
[12,]    0    0    0    0
[13,]    1    0    0    1
[14,]    0    1    1    0
[15,]    0    1    1    0
[16,]    1    0    0    1

And then, we continue,

> for(s in 1:3){N=kronecker(N,M)}

After a few loops, we have a $4^5\times 4^5$ matrix. And we can plot it simply to visualize the support,

> image(N,col=c("white","blue"))

As we zoom in, we can visualize this fractal property,
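As a side note (a small sketch, not in the original post), turning these support matrices into actual checkerboard copula masses is just a matter of normalization; since this particular matrix has equal row and column sums, and the cells have equal width, both margins remain uniform at every iteration:

P=M/sum(M)
rowSums(P)            # 0.25 0.25 0.25 0.25 : uniform first margin
colSums(P)            # 0.25 0.25 0.25 0.25 : uniform second margin
P5=N/sum(N)           # same normalization after the Kronecker iterations
range(rowSums(P5))    # all equal to 1/1024 : still uniform margins, finer grid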

Bounding sums of random variables, part 2

It is possible to go further, much further actually, on bounding sums of random variables (a topic mentioned in the previous post). For instance, while everything in that previous post was defined for distributions on $\mathbb{R}_+$, it is possible to extend the bounds to distributions on $\mathbb{R}$, especially if we deal with quantiles. Everything we've seen remains valid. Consider for instance two $\mathcal{N}(0,1)$ distributions. Using the previous code, it is possible to compute bounds for the quantiles of the sum of two Gaussian variates. And one has to remember that those bounds are sharp.

> Finv=function(u) qnorm(u,0,1)
> Ginv=function(u) qnorm(u,0,1)
> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }

Actually, it is possible to compare here with two simple cases: the independent case, where the sum has a Gaussian distribution with variance 2 (standard deviation $\sqrt{2}$), and the comonotonic case, where the sum has a Gaussian distribution with variance 4 (standard deviation 2).

>  lines(x,qnorm(x,sd=sqrt(2)),col="blue",lty=2)
>  lines(x,qnorm(x,sd=2),col="blue",lwd=2)

On the graph below, the comonotonic case (usually considered as the worst case scenario) is the plain blue line (with here an animation to illustrate the convergence of the numerical algorithm)

Below that (strong) blue line, risks are sub-additive for the Value-at-Risk, i.e.

$$\text{VaR}_\alpha(X+Y)\leq \text{VaR}_\alpha(X)+\text{VaR}_\alpha(Y),$$

but above it, risks are super-additive for the Value-at-Risk, i.e.

$$\text{VaR}_\alpha(X+Y)\geq \text{VaR}_\alpha(X)+\text{VaR}_\alpha(Y)$$

(since for comonotonic variates, the quantile of the sum is the sum of the quantiles). It is possible to visualize those two cases above: in green, the area where risks are super-additive, while the yellow area is where risks are sub-additive.

Recall that for a Gaussian random vector with correlation $r$, the quantile of the sum is the quantile of a centred Gaussian random variable with variance $2(1+r)$. Thus, on the graph below, we can visualize the cases that can be obtained with a Gaussian copula. Here the yellow area can be obtained with a Gaussian copula, the upper and the lower bounds being respectively the comonotonic and the countermonotonic cases.

https://freakonometrics.hypotheses.org/files/2019/05/sum-norm-G-bounds2.gif

But the green area can also be obtained when we sum two Gaussian variables! We just have to go outside the Gaussian world, and consider another copula.

Another point is that, in the previous post, $C^-$ was the lower Fréchet-Hoeffding bound on the set of copulas. But all the previous results remain valid if $C^-$ is a lower bound on the set of copulas of interest. In particular,

$$\tau_{C^-,L}(F,G)\leq \sigma_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)$$

for all $C$ such that $C\geq C^-$. For instance, if we assume that the copula should exhibit positive dependence, i.e. $C\geq C^\perp$, then

$$\tau_{C^\perp,L}(F,G)\leq \sigma_{C,L}(F,G)\leq\rho_{C^\perp,L}(F,G)$$

This means we should have sharper bounds. Numerically, it is possible to compute those sharper bounds for quantiles. The lower bound becomes

$$\sup_{u\in[0,x]}\left\{F^{-1}(u)+G^{-1}\left(\frac{x-u}{1-u}\right)\right\}$$

while the upper bound is

$$\inf_{u\in[x,1]}\left\{F^{-1}(u)+G^{-1}\left(\frac{x}{u}\right)\right\}$$

Again, one can easily compute those quantities on a grid of the unit interval,

> Qinfind=Qsupind=rep(NA,n-1)
> for(i in 1:(n-1)){
+  J=1:(i)
+  Qinfind[i]=max(Finv(J/n)+Ginv((i-J)/n/(1-J/n)))
+  J=(i):(n-1)
+  Qsupind[i]=min(Finv(J/n)+Ginv(i/J))
+ }
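One possible way to visualize how much sharper those bounds are, compared with the general ones computed earlier, is sketched below (a sketch; the probability grid defined here simply matches the indexing of the Q vectors computed above).

x=(1:(n-1))/n   # probability grid matching the Q vectors above
plot(x,Qsup,type="l",lty=2,ylim=range(c(Qinf,Qsup),finite=TRUE),
xlab="probability level",ylab="quantile of the sum")
lines(x,Qinf,lty=2)
lines(x,Qsupind,lwd=2,col="blue")   # sharper upper bound (positive dependence)
lines(x,Qinfind,lwd=2,col="blue")   # sharper lower bound (positive dependence)
lines(x,qnorm(x,sd=2),col="red")    # comonotonic case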

We get the graph below (the blue area illustrates how much sharper those bounds get under the assumption of positive dependence; that area can only be attained with copulas exhibiting some negative dependence)

For high quantiles, the upper bound is rather close to the one we had before, since the worst cases are probably obtained when we do have positive correlation. But the assumption strongly impacts the lower bound. For instance, it now becomes impossible to have a negative quantile when the probability level exceeds 75%, if we do have positive dependence…

> u=(1:(n-1))/n   # probability grid associated with Qinfind
> Qinfind[u==.75]
[1] 0