Tag Archives: copula

Modeling the Marginals and the Dependence separately

When introducing copulas, it is commonly accepted that copulas are interesting because they make it possible to model the marginals and the dependence structure separately. The motivation is probably Sklar's theorem, which says that given some marginal cumulative distribution functions (say $F_1$ and $F_2$, in dimension 2) and a copula (denoted $C$), we can generate a multivariate cumulative distribution function with the marginals specified previously, using
$$F(x_1,x_2)=C(F_1(x_1),F_2(x_2)).$$
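With the copula package, Sklar's construction can be illustrated directly. Here is a small sketch (the evaluation point is arbitrary), combining a Gaussian copula with the two lognormal margins that will be used below,

> library(copula)
> joint=mvdc(normalCopula(.8,dim=2),margins=c("lnorm","lnorm"),
+ paramMargins=list(list(meanlog=1,sdlog=1),list(meanlog=2,sdlog=sqrt(2))))
> pMvdc(c(2,5),joint)

which returns the value of $F(2,5)=C(F_1(2),F_2(5))$.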

But this separability might be misleading. Consider the case of a fully parametric model,
$$(X_1,X_2)\sim F(x_1,x_2)=C\big(F_1(x_1;\theta_1),F_2(x_2;\theta_2);\alpha\big).$$
Assume that those distributions are continuous, so that we can write the likelihood using densities,
$$\mathcal{L}(\theta_1,\theta_2,\alpha)=\prod_{i=1}^n f_1(x_{1,i};\theta_1)\,f_2(x_{2,i};\theta_2)\,c\big(F_1(x_{1,i};\theta_1),F_2(x_{2,i};\theta_2);\alpha\big),$$
and the log-likelihood is
$$\log\mathcal{L}(\theta_1,\theta_2,\alpha)=\sum_{i=1}^n\log f_1(x_{1,i};\theta_1)+\sum_{i=1}^n\log f_2(x_{2,i};\theta_2)+\sum_{i=1}^n\log c\big(F_1(x_{1,i};\theta_1),F_2(x_{2,i};\theta_2);\alpha\big).$$
The first part is the log-likelihood if we consider only the first marginal. The second part is the log-likelihood if we consider only the second marginal. If the two components are not independent (i.e. the copula density $c$ is not equal to 1 everywhere), the third part cannot be ignored, and so, in a general context,
$$(\widehat{\theta}_1,\widehat{\theta}_2)\neq(\widetilde{\theta}_1,\widetilde{\theta}_2),$$

where

$$(\widehat{\theta}_1,\widehat{\theta}_2,\widehat{\alpha})=\underset{(\theta_1,\theta_2,\alpha)}{\text{argmax}}\ \log\mathcal{L}(\theta_1,\theta_2,\alpha),$$

while

$$\widetilde{\theta}_j=\underset{\theta_j}{\text{argmax}}\ \sum_{i=1}^n\log f_j(x_{j,i};\theta_j),\qquad j=1,2.$$
In order to illustrate this point, consider a bivariate lognormal distribution (obtained by taking the exponential of a Gaussian vector)

> mu1=1
> mu2=2
> MU=c(mu1,mu2)
> s1=1
> s2=sqrt(2)
> r=.8
> SIGMA=matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
> library(mnormt)
> set.seed(1)
> Z=exp(rmnorm(25,MU,SIGMA))

If we believe that marginals and correlations can be treated separately, we can start with marginal distributions.

> library(MASS)
> (p1=fitdistr(Z[,1],"lognormal"))
    meanlog      sdlog  
  1.1686652   0.9309119 
 (0.1861824) (0.1316508)
> (p2=fitdistr(Z[,2],"lognormal"))
    meanlog      sdlog  
  2.2181721   1.1684049 
 (0.2336810) (0.1652374)

Based on those marginal distributions, define $\widehat{U}_i=\widehat{F}_1(x_{1,i})$ and $\widehat{V}_i=\widehat{F}_2(x_{2,i})$, and consider the maximum likelihood estimator $\widehat{\alpha}$ of the copula parameter, obtained from this pseudo sample,
$$\widehat{\alpha}=\underset{\alpha}{\text{argmax}}\ \sum_{i=1}^n\log c(\widehat{U}_i,\widehat{V}_i;\alpha).$$
Numerically, we get (since we consider a Gaussian copula, which is the true copula generated here)

> library(copula)
> Gcop=normalCopula(.3,dim=2)
> U=cbind(plnorm(Z[,1],p1$estimate[1],p1$estimate[2]),
+ plnorm(Z[,2],p2$estimate[1],p2$estimate[2]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
      Estimate Std. Error z value Pr(>|z|)    
rho.1  0.86530    0.03799   22.77

But clearly, we did not treat the dependence structure separately, since the pseudo sample was a function of the estimated marginal distributions,
$$\widehat{\alpha}=\underset{\alpha}{\text{argmax}}\ \sum_{i=1}^n\log c\big(\widehat{F}_1(x_{1,i}),\widehat{F}_2(x_{2,i});\alpha\big).$$
If we instead consider the global optimization problem, the results are different. The joint density of the bivariate lognormal can be derived explicitly (see e.g. Mostafa & Mahmoud (1964)),

> dbivlognorm=function(x,theta){
+ # bivariate lognormal density, parametrized by the means, standard
+ # deviations and correlation of the underlying Gaussian vector
+ mu1=theta[1]
+ mu2=theta[2]
+ s1=theta[3]
+ s2=theta[4]
+ r=theta[5]
+ a1=(log(x[,1])-mu1)/s1
+ a2=(log(x[,2])-mu2)/s2
+ d=1/(2*pi*s1*s2*sqrt(1-r^2))*1/(x[,1]*x[,2])*
+ exp(-(a1^2-2*r*a1*a2+a2^2)/(2*(1-r^2)))
+ return(d)
+ }
> LogLik=function(theta){
+ return(-sum(log(dbivlognorm(Z,theta))))}
> optim(par=c(0,0,1,1,0),fn=LogLik)$par
[1] 1.1655359 2.2159767 0.9237853 1.1610132 0.8645052

The difference is not huge, but the estimators are not identical: from a statistical point of view, we can hardly treat the marginals and the dependence structure separately.
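Side by side (a quick comparison, simply reusing the numbers obtained above),

> rbind(separate=c(p1$estimate,p2$estimate,.8653),
+ joint=optim(par=c(0,0,1,1,0),fn=LogLik)$par)

the first row collects the two marginal fits and the copula estimate, while the second row is the joint maximum likelihood estimate.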

Another point we should keep in mind is that the estimation of the copula parameter depends on the margins, not only through the parameters, but more deeply, through the choice of the marginal distributions (that might be misspecified). For instance, if we assume that margins are exponentially distributed,

> (p1=fitdistr(Z[,1],"exponential"))
      rate   
  0.22288362 
 (0.04457672)
> (p2=fitdistr(Z[,2],"exponential"))
      rate   
  0.06543665 
 (0.01308733)

the estimation of the parameter of the Gaussian copula yields

> U=cbind(pexp(Z[,1],p1$estimate[1]),
+ pexp(Z[,2],p2$estimate[1]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
      Estimate Std. Error z value Pr(>|z|)    
rho.1  0.87421    0.03617   24.17   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The maximized loglikelihood is  15.4 
Optimization converged

The problem is that, since we misspecified the marginal distributions, our pseudo sample takes values in the unit square, but there is no chance that its margins are uniform. If we generate a larger sample, say of size 500, with the code above, and plot the margins of the pseudo sample,

> x <- U[,1]; y <- U[,2]
> xhist <- hist(x, plot=FALSE) ; yhist <- hist(y, plot=FALSE)
> top <- max(c(xhist$counts, yhist$counts)) 
> nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE) 
> par(mar=c(3,3,1,1)) 
> plot(x, y, xlab="", ylab="",col="red",xlim=0:1,ylim=0:1) 
> par(mar=c(0,3,1,1))
> barplot(xhist$counts, axes=FALSE, ylim=c(0, top), 
+ space=0,col="light green") 
> par(mar=c(3,0,1,1))
> barplot(yhist$counts, axes=FALSE, xlim=c(0, top), 
+ space=0, horiz=TRUE,col="light blue")

If we compare with the previous case, where the marginal distributions were well specified, we can clearly see that the estimated dependence structure depends on the marginal distributions,
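For comparison (this is not in the computation above), one can remove the influence of the parametric margins altogether by using rank-based pseudo-observations,

> U=pobs(Z)
> fitCopula(Gcop,data=U,method="mpl")

which is the usual maximum pseudo-likelihood estimator of the copula parameter.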

Copulas and tail dependence, part 2

An alternative way to describe tail dependence can be found in Ledford & Tawn (1996), for instance. The intuition behind it can be found in Fischer & Klein (2007). Assume that $X$ and $Y$ have the same distribution. Now, if we assume that those variables are (strictly) independent,
$$\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)\cdot\mathbb{P}(Y>t)=\mathbb{P}(X>t)^2.$$

But if we assume that those variables are (strictly) comonotonic (i.e. equal here, since they have the same distribution), then
$$\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t).$$
So assume that there is a $\theta$ such that
$$\mathbb{P}(X>t,Y>t)=\mathbb{P}(X>t)^{\theta}.$$
Then $\theta=2$ can be interpreted as independence, while $\theta=1$ means strong (perfect) positive dependence. Thus, consider the following transformation to get a parameter in $[0,1]$, with the strength of dependence increasing with the index, e.g.

$$\theta\mapsto\frac{2}{\theta}-1.$$

In order to derive a tail dependence index, assume that there exists a limit to
$$\frac{2\log\mathbb{P}(X>t)}{\log\mathbb{P}(X>t,Y>t)}-1$$
(as $t$ increases), which will be interpreted as a (weak) tail dependence index. Thus, working with uniform margins $U$ and $V$, define the concentration functions
$$L(z)=\frac{2\log\mathbb{P}(U\leq z)}{\log\mathbb{P}(U\leq z,V\leq z)}-1$$

for the lower tail (on the left) and

$$R(z)=\frac{2\log\mathbb{P}(U>1-z)}{\log\mathbb{P}(U>1-z,V>1-z)}-1$$

for the upper tail (on the right).
for the upper tail (on the right). The R code to compute those functions is quite simple,
> library(evd)
> data(lossalae)
> X=lossalae
> U=rank(X[,1])/(nrow(X)+1)
> V=rank(X[,2])/(nrow(X)+1)
> fL2emp=function(z) 2*log(mean(U<=z))/
+ log(mean((U<=z)&(V<=z)))-1
> fR2emp=function(z) 2*log(mean(U>=1-z))/
+ log(mean((U>=1-z)&(V>=1-z)))-1
> u=seq(.001,.5,by=.001)
> L=Vectorize(fL2emp)(u)
> R=Vectorize(fR2emp)(rev(u))
> plot(c(u,u+.5-u[1]),c(L,R),type="l",ylim=0:1,
+ xlab="LOWER TAIL      UPPER TAIL")
> abline(v=.5,col="grey")

and again, it is possible to plot those empirical functions against some parametric ones, e.g. the one obtained from a Gaussian copula (with the same Kendall’s tau)

> tau=cor(lossalae,method="kendall")[1,2]
> library(copula)
> paramgauss=sin(tau*pi/2)
> copgauss=normalCopula(paramgauss)
> Lgaussian=function(z) 2*log(z)/log(pCopula(c(z,z),
+ copgauss))-1
> Rgaussian=function(z) 2*log(1-z)/log(1-2*z+
+ pCopula(c(z,z),copgauss))-1
> u=seq(.001,.5,by=.001)
> Lgs=Vectorize(Lgaussian)(u)
> Rgs=Vectorize(Rgaussian)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgs,Rgs),col="red")

or Gumbel copula,

> paramgumbel=1/(1-tau)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)
> Lgumbel=function(z) 2*log(z)/log(pCopula(c(z,z),
+ copgumbel))-1
> Rgumbel=function(z) 2*log(1-z)/log(1-2*z+
+ pCopula(c(z,z),copgumbel))-1
> Lgl=Vectorize(Lgumbel)(u)
> Rgl=Vectorize(Rgumbel)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgl,Rgl),col="blue")

Again, one should look more carefully at confidence bands, but it looks like the Gumbel copula provides a good fit here.
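One quick-and-dirty way to get such bands (a rough sketch, not in the original analysis) is a pointwise bootstrap: resample the rows of the dataset, recompute the empirical concentration function, and keep pointwise quantiles, e.g. for the lower tail,

> nboot=200
> u2=seq(.01,.5,by=.01)
> Lboot=matrix(NA,nboot,length(u2))
> for(b in 1:nboot){
+ id=sample(1:nrow(X),replace=TRUE)
+ Ub=rank(X[id,1])/(length(id)+1)
+ Vb=rank(X[id,2])/(length(id)+1)
+ Lboot[b,]=Vectorize(function(z) 2*log(mean(Ub<=z))/
+ log(mean((Ub<=z)&(Vb<=z)))-1)(u2)
+ }
> lines(u2,apply(Lboot,2,quantile,.05,na.rm=TRUE),lty=2)
> lines(u2,apply(Lboot,2,quantile,.95,na.rm=TRUE),lty=2)

(the same loop, with the upper-tail function, gives bands on the right).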

Copulas estimation and influence of margins

Just a short post to get back to results mentioned at the end of the course. Since copulas are obtained using (univariate) quantile functions in the joint cumulative distribution function, they are – somehow – related to the fitted marginal distributions. In order to illustrate this point, consider an i.i.d. sample $\{(X_{1,i},X_{2,i}),\,i=1,\ldots,n\}$ from a Student-t distribution,

library(mnormt)
r=.5
n=200
X=rmt(n,mean=c(0,0),S=matrix(c(1,r,r,1),2,2),df=4)

Thus, the true copula is the Student-t copula, here with 4 degrees of freedom. Note that we can easily compute the (true) value of the copula on the diagonal,

dg=function(t) pmt(rep(qt(t,df=4),2),mean=c(0,0),
S=matrix(c(1,r,r,1),2,2),df=4)
DG=Vectorize(dg)

Four strategies are considered here to construct the pseudo-observations that will be plugged into the empirical copula,

  • misfit: consider an invalid marginal estimation: we have assumed that margins were Gaussian, i.e. $\widehat{U}_i=\Phi\!\left(\frac{X_{1,i}-\overline{X}_1}{s_1}\right)$
  • perfect fit: here, we know that margins were Student-t, with 4 degrees of freedom, i.e. $\widehat{U}_i=F_{t(4)}(X_{1,i})$
  • standard fit: then, consider the case where we fit the marginal distribution, but in the proper family this time (i.e. among Student-t distributions), $\widehat{U}_i=\widehat{F}_{t(\widehat{\nu}_1)}(X_{1,i})$
  • ranks: finally, we consider nonparametric estimators for the marginal distributions, $\widehat{U}_i=\dfrac{\text{rank}(X_{1,i})}{n+1}$

Now that we have samples with values in the unit square, let us construct the associated empirical copula,

$$\widehat{C}(u,v)=\frac{1}{n}\sum_{i=1}^n\mathbf{1}\big(\widehat{U}_i\leq u,\widehat{V}_i\leq v\big).$$
Let us now compare those four approaches.

  • The first one is to illustrate model error, i.e. what's going on if we fit distributions, but not in the proper family of parametric distributions.
X0=cbind((X[,1]-mean(X[,1]))/sd(X[,1]),
(X[,2]-mean(X[,2]))/sd(X[,2]))
Y=pnorm(X0)

Then, the following code is used to compute the value of the empirical copula, on the diagonal,

diagonale=function(t,Z) mean((Z[,1]<=t)&(Z[,2]<=t))
diagY=function(t) diagonale(t,Y)
DiagY=Vectorize(diagY)
u=seq(0,1,by=.005)
dY=DiagY(u)

On the graph below, 1,000 samples of size 200 have been generated. Each trajectory is the estimate of the copula on the diagonal for one sample; the solid black line is the true value of the copula.

Obviously, it is not good at all, mainly because the distribution of $(\widehat{U}_i,\widehat{V}_i)$ cannot be a copula, since its margins are not even uniform on the unit interval.
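A minimal sketch of the simulation behind such a figure (1,000 replications; the plotting details are arbitrary),

us=seq(.005,.995,by=.005)
plot(us,DG(us),type="l",lwd=2,ylim=c(0,1))   # true copula on the diagonal (solid black)
for(s in 1:1000){
Xs=rmt(200,mean=c(0,0),S=matrix(c(1,r,r,1),2,2),df=4)
Xs0=cbind((Xs[,1]-mean(Xs[,1]))/sd(Xs[,1]),
(Xs[,2]-mean(Xs[,2]))/sd(Xs[,2]))
Ys=pnorm(Xs0)                                # misspecified Gaussian margins
lines(us,Vectorize(function(t) diagonale(t,Ys))(us),col=rgb(1,0,0,.05))
}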

  • a perfect fit. Here, we use the following code to generate our copula-type sample
U=pt(X,df=4)

This time, the fit is much better.

  • Using maximum likelihood estimators to fit the best distribution within the Student-t family
library(MASS)
F1=fitdistr(X0[,1],dt,list(df=5),lower = 0.001)
F2=fitdistr(X0[,2],dt,list(df=5),lower = 0.001)
V=cbind(pt(X0[,1],df=F1$estimate),pt(X0[,2],df=F2$estimate))

Here, it is also very good, even better than before, when the true distribution was considered.

(it is like using the Lilliefors test for goodness of fit, versus Kolmogorov-Smirnov; see here for instance, in French).

  • Finally, let us consider ranks, or nonparametric estimators for marginal distributions,
R=cbind(rank(X[,1])/(n+1),rank(X[,2])/(n+1))

Here, it is even better than the previous one.

If we compare box-plots of the value of the copula at the point (.2,.2), we obtain the following, with, from top to bottom: ranks, the fit within the proper family, the true distribution, and finally the misspecified (Gaussian) distribution.

Just to illustrate one more time a result mentioned in a previous post, “in statistics, having too much information might not be a good thing“.

Beta kernel and transformed kernel

This Thursday I will give a talk at Laval University, on "Beta kernel and transformed kernel: applications to copula density estimation and quantile estimation". This time, I will talk at the department of Mathematics and Statistics (13:30 at the pavillon Adrien-Pouliot). "Because copulas have bounded support (the unit square in dimension 2), standard kernel based estimators of densities are (multiplicatively) biased on borders and in corners of the support. Two techniques can be used to avoid that underestimation: Beta kernels and transformed kernels. We will describe and discuss those two techniques in the first part of the talk. Then, we will see that it is possible to combine them to get nice estimators of several quantities (e.g. quantiles): transform the data to get on the unit interval – using a transformed kernel – then estimate the (transformed) quantile on [0,1] using a beta kernel, then get back to the initial support. As we will see on simulations, that technique can be better than standard quantile estimators, especially when data are heavy tailed." Slides can be downloaded here.

  • kernel based density estimation

Kernel-based estimation is a popular (and natural) technique to estimate densities. It is simply an extension of the moving histogram:

we count how many observations are in the neighborhood of the point where we want to estimate the density of the distribution. Then it is natural to consider a smoothing function, i.e. instead of a step function (either observations are close enough, or not), it is possible to give weights to observations, which will be a decreasing function of the distance,
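Before smoothing, a quick sketch of that moving histogram (the simulated sample and the window h below are arbitrary),

> X=rnorm(100)
> fhat=function(x,h=.5) mean(abs(X-x)<=h)/(2*h)   # uniform (rectangular) kernel
> u=seq(-4,4,by=.01)
> plot(u,Vectorize(fhat)(u),type="l")

where every observation within distance h of x contributes the same weight to the estimate at x.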

With a smooth kernel, we have a smooth estimation of the density

$$\widehat{f}(x)=\frac{1}{nh}\sum_{i=1}^n K\!\left(\frac{x-x_i}{h}\right).$$

Then it is possible to play with the bandwidth, either to get a more local estimate of the density, which is not that smooth (small bias but large variance),

or a smoother one (large bias, but small variance),

In R, it is simply

> X=rnorm(100)
> (D=density(X))
 
Call:
	density.default(x = X)
 
Data: X (100 obs.);	Bandwidth 'bw' = 0.3548
 
       x                   y            
 Min.   :-3.910799   Min.   :0.0001265  
 1st Qu.:-1.959098   1st Qu.:0.0108900  
 Median :-0.007397   Median :0.0513358  
 Mean   :-0.007397   Mean   :0.1279645  
 3rd Qu.: 1.944303   3rd Qu.:0.2641952  
 Max.   : 3.896004   Max.   :0.3828215  
 
> plot(D$x,D$y)
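
To visualize the bias/variance trade-off mentioned above, one can also force a small or a large bandwidth (the values below are arbitrary),

> plot(density(X,bw=.1),main="")     # small bandwidth: wiggly (low bias, high variance)
> lines(density(X,bw=1),col="red")   # large bandwidth: smooth (high bias, low variance)
> curve(dnorm(x),add=TRUE,lty=2)     # true density, for reference
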
  • Beta kernel

The idea of Beta kernels is to consider kernels having support $[0,1]$. In the univariate case, the estimator is of the form

$$\widehat{f}(u)=\frac{1}{n}\sum_{i=1}^n \beta\!\left(u;\frac{x_i}{b},\frac{1-x_i}{b}\right),$$

where $\beta(\cdot;\alpha_1,\alpha_2)$ is the density of a Beta distribution, i.e.

$$\beta(u;\alpha_1,\alpha_2)=\frac{u^{\alpha_1-1}(1-u)^{\alpha_2-1}}{B(\alpha_1,\alpha_2)},\qquad u\in[0,1].$$
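A quick univariate sketch of that estimator (the sample, the bandwidth and the grid below are arbitrary),

set.seed(1)
x=rbeta(200,2,5)                        # a sample on the unit interval
b=.05                                   # bandwidth
fhat=function(u) mean(dbeta(u,x/b,(1-x)/b))
u=seq(.01,.99,by=.01)
plot(u,Vectorize(fhat)(u),type="l")
lines(u,dbeta(u,2,5),lty=2)             # true density, for reference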

For additional material, I have uploaded some R code to fit copula densities using beta kernels,

library(copula)
beta.kernel.copula.surface = function (u,v,bx,by,p) {
s = seq(1/p, len=(p-1), by=1/p)
mat = matrix(0,nrow = p-1, ncol = p-1)
for (i in 1:(p-1)) {
a = s[i]
for (j in 1:(p-1)) {
b = s[j]
mat[i,j] = sum(dbeta(a,u/bx,(1-u)/bx) *
dbeta(b,v/by,(1-v)/by)) / length(u)
} }
return(data.matrix(mat)) }

Then we can use it to see what we get on a simulated sample,

library(copula)
COPULA = frankCopula(param=5, dim = 2)
X = rCopula(n=1000, copula=COPULA)
p0 = 26
Z= beta.kernel.copula.surface(X[,1],X[,2],bx=.01,by=.01,p=p0)
u = seq(1/p0, len=(p0-1), by=1/p0)
persp(u,u,Z,theta=30,col="green",shade=TRUE,
box=FALSE,zlim=c(0,6))

http://freakonometrics.free.fr/copula-kernel-beta.gif
(yes, the surface is changing… to illustrate the impact of the bandwidth on the estimation).

  • transformed kernel estimation

In the talk, I will also mention the transformed kernel estimate, as introduced in the book on L1 density estimation by Luc Devroye and Laszlo Györfi (the book can be downloaded here). I will probably spend a few minutes on the original chapter, in order to provide another application of that technique (not only to estimate copula densities, but here to estimate quantiles of heavy-tailed distributions). In the univariate case, the R code is the following (here I consider two transformations, the quantile function of the Gaussian distribution, and the quantile function of the Student-t distribution with 3 degrees of freedom),

set.seed(1)
sample=rbeta(100,4,3)
 
transfN = function(x){
Y=qnorm(sample)
f=density(Y,from=-4,to=4,n=2001)
ny=sum(f$x<=qnorm(x)); 
  g=f$y[ny]/dnorm(qnorm(x))
return(g)
}
 
df0=3
 
transfT = function(x){
Y=qt(sample,df=df0)
f=density(Y,from=-4,to=4,n=2001)
ny=sum(f$x<=qt(x,3)); 
  g=f$y[ny]/dt(qt(x,df=df0),df=df0)
return(g)
}
 
tN=Vectorize(transfN)
tT=Vectorize(transfT)
 
u=seq(.01,.99,by=.01)
vN=tN(u)
vT=tT(u)
plot(u,vN,type="l",lwd=3,col="blue")
lines(u,vT,lwd=3,col="green")
lines(u,dbeta(u,4,3),col="red",lty=2)

The density estimation is the following,

(the red dotted line is the true density, since we work on a simulated sample). Now, let us get back to the original chapter,

In the book, this is introduced as follows,

The original idea we had was to use this kernel-based estimator for copulas, i.e. since we can estimate densities in higher dimension, with unbounded support, using

$$\widehat{f}(\boldsymbol{x})=\frac{1}{nh^d}\sum_{i=1}^n K\!\left(\frac{\boldsymbol{x}-\boldsymbol{x}_i}{h}\right),$$

the idea is to transform marginal observations,

$$(\widetilde{X}_i,\widetilde{Y}_i)=\big(G^{-1}(U_i),G^{-1}(V_i)\big),\qquad\text{for some smooth distribution function }G\text{ with density }g,$$

and to use the fact that the associated copula density can be written

$$c(u,v)=\frac{f\big(G^{-1}(u),G^{-1}(v)\big)}{g\big(G^{-1}(u)\big)\,g\big(G^{-1}(v)\big)}$$

(where $f$ denotes the joint density of the transformed pair $(\widetilde{X},\widetilde{Y})$),

to derive an intuitive estimator for the copula density

$$\widehat{c}(u,v)=\frac{\widehat{f}\big(G^{-1}(u),G^{-1}(v)\big)}{g\big(G^{-1}(u)\big)\,g\big(G^{-1}(v)\big)}.$$
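For instance, a quick two-dimensional sketch with a Gaussian transformation, i.e. $G=\Phi$ (using MASS::kde2d for the bivariate kernel estimate, on a simulated Frank copula sample; everything here is just for illustration),

library(copula)
library(MASS)                          # for kde2d
X=rCopula(n=1000,copula=frankCopula(param=5,dim=2))
Xt=qnorm(X[,1]); Yt=qnorm(X[,2])       # transformed (pseudo-)observations
f=kde2d(Xt,Yt,n=101,lims=c(-3.5,3.5,-3.5,3.5))
u=pnorm(f$x); v=pnorm(f$y)             # back on the unit square
chat=f$z/outer(dnorm(f$x),dnorm(f$y))  # divide by the product of densities g
persp(u,v,chat,theta=30,col="green",shade=TRUE)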

An important issue is how to choose the transformation.

And Luc Devroye and Laszlo Györfi mention that this can be used to deal with extremes.

Well, extremes are introduced there through bumps (which is not the way I would have dealt with extremes),

and note that several results can be derived on those bumps,

e.g.

Then, there is an interesting discussion about estimating the optimal transformation

and I will argue that this can be an extremely interesting idea, for instance to estimate quantiles of heavy-tailed distributions, if we also use the beta kernel estimator on the unit interval. This idea was developed in a paper with Abder Oulidi, online here.
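A minimal sketch of that combination (this is not the estimator from the paper: the Student-t transformation and the Beta-cdf weights on order statistics below are just assumptions, to fix ideas),

set.seed(1)
X=rt(200,df=2)                     # heavy-tailed (simulated) sample
df0=3                              # transformation: Student-t quantile function
U=sort(pt(X,df=df0))               # transformed sample, on the unit interval
n=length(U)
beta.quantile=function(p,b=.05){
w=pbeta((1:n)/n,p/b,(1-p)/b)-pbeta((0:(n-1))/n,p/b,(1-p)/b)
qt(sum(w*U),df=df0)                # smooth the quantile on [0,1], then map back
}
beta.quantile(.95)
quantile(X,.95)                    # raw empirical quantile, for comparison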

Remark: actually, in the book, an additional reference is mentioned,

but I have never been able to find a copy… if anyone has one, I’d be glad to read it…

Tails of Archimedean copulas

Tails of Archimedean Copulas, co-written with Johan Segers, has been published in the Journal of Multivariate Analysis, and is finally online at http://sciencedirect.com/science…

A complete and user-friendly directory of tails of Archimedean copulas is presented which can be used in the selection and construction of appropriate models with desired properties. The results are synthesized in the form of a decision tree: Given the values of some readily computable characteristics of the Archimedean generator, the upper and lower tails of the copula are classified into one of three classes each, one corresponding to asymptotic dependence and the other two to asymptotic independence. For a long list of single-parameter families, the relevant tail quantities are computed so that the corresponding classes in the decision tree can easily be determined. In addition, new models with tailor-made upper and lower tails can be constructed via a number of transformation methods. The frequently occurring category of asymptotic independence turns out to conceal a surprisingly rich variety of tail dependence structures.

Convergence of Archimedean Copulas

The paper on Convergence of Archimedean Copulas, with Johan Segers, just appeared, in Statistics and Probability Letters.

Convergence of a sequence of bivariate Archimedean copulas to another Archimedean copula or to the comonotone copula is shown to be equivalent with convergence of the corresponding sequence of Kendall distribution functions. No extra differentiability conditions on the generators are needed.