PhD defense on copulas

This Wednesday, I will be at Université Paris 1 Sorbonne, as a member of the jury for the PhD thesis of Pierre-André Maugis, on conditional correlations and vine copulas.

Vine copulas were born in 2002 with the paper of Tim Bedford and Roger M. Cooke, Vines – a new graphical model for dependent random variables. The idea is to use the following decomposition of a multivariate density

$$f_{123}(x_1,x_2,x_3)=f_{3|12}(x_3|x_1,x_2)\cdot f_{2|1}(x_2|x_1)\cdot f_1(x_1)$$

(from Bayes formula, with synthetic notations). Then using the relationship between a bivariate density and its copula (density)

$$f_{12}(x_1,x_2)=c_{12}(F_1(x_1),F_2(x_2))\cdot f_1(x_1)\cdot f_2(x_2)$$

thus

$$f_{2|1}(x_2|x_1)=c_{12}(F_1(x_1),F_2(x_2))\cdot f_2(x_2)$$

Using again Bayes formula,

$$f_{3|12}(x_3|x_1,x_2)=\frac{f_{23|1}(x_2,x_3|x_1)}{f_{2|1}(x_2|x_1)}$$

and we can write

$$f_{23|1}(x_2,x_3|x_1)=c_{23|1}(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1))\cdot f_{2|1}(x_2|x_1)\cdot f_{3|1}(x_3|x_1)$$

Since $f_{2|1}(x_2|x_1)=c_{12}(F_1(x_1),F_2(x_2))\cdot f_2(x_2)$ and $f_{3|1}(x_3|x_1)=c_{13}(F_1(x_1),F_3(x_3))\cdot f_3(x_3)$, the previous expression becomes

$$f_{3|12}(x_3|x_1,x_2)=c_{23|1}(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1))\cdot c_{13}(F_1(x_1),F_3(x_3))\cdot f_3(x_3)$$

or, to stress the most important part (as I see it),

$$f_{3|12}(x_3|x_1,x_2)=c_{23|1}\big(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1)\,;\,x_1\big)\cdot c_{13}(F_1(x_1),F_3(x_3))\cdot f_3(x_3)$$

where the conditional copula $c_{23|1}(\cdot,\cdot\,;x_1)$ may, a priori, depend on the value $x_1$ of the conditioning variable.

It is common then to assume that this conditional copula does not depend on the conditioning value. The more detailed expression of that joint trivariate density is

$$f_{123}(x_1,x_2,x_3)=c_{23|1}(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1))\cdot c_{12}(F_1(x_1),F_2(x_2))\cdot c_{13}(F_1(x_1),F_3(x_3))\cdot f_1(x_1)\cdot f_2(x_2)\cdot f_3(x_3)$$

The (parametric) inference algorithm is defined in Cooke, Joe and Aas (2010): it is sequential, the pair copulas of the first tree are fitted on the (pseudo-)observations, conditional pseudo-observations are then derived from those fits, and used to fit the copulas of the second tree (and so on, in higher dimension).

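To fix ideas, here is a minimal sketch of that sequential strategy, in the Gaussian case and for the three-dimensional vine used below (the helper hfunc(), the conditional distribution of a bivariate Gaussian copula, is mine, not taken from any package),

> library(copula)
> library(mnormt)
> # h-function of the Gaussian copula: conditional distribution of u given v
> hfunc=function(u,v,rho) pnorm((qnorm(u)-rho*qnorm(v))/sqrt(1-rho^2))
> SIGMA=matrix(c(1,.6,.7,.6,1,.8,.7,.8,1),3,3)
> U=pnorm(rmnorm(n=100000,varcov=SIGMA))
> GaussCop=normalCopula(param=.5,dim=2)
> # first tree: unconditional pair copulas of (U1,U2) and (U1,U3)
> r12=fitCopula(GaussCop,U[,1:2],method="mpl")@estimate
> r13=fitCopula(GaussCop,U[,c(1,3)],method="mpl")@estimate
> # second tree: conditional pseudo-observations, given the first component
> V=cbind(hfunc(U[,2],U[,1],r12),hfunc(U[,3],U[,1],r13))
> # conditional pair copula, assumed not to depend on the conditioning value
> r23=fitCopula(GaussCop,V,method="mpl")@estimate
> c(r12,r13,r23)

The three estimates should be close to the output of CDVineSeqEst below.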
The important assumption in vine copula models is that conditional copulas are constant. This assumption is actually valid in some cases, for instance in the Gaussian case (the observations have a Gaussian joint distribution – or at least a Gaussian copula – and we fit a vine model with Gaussian bivariate copulas): the conditional copula is then a Gaussian copula with the partial correlation as parameter, whatever the conditioning value.

The code to fit a vine copula is the following,

> library(CDVine)
> library(mnormt)
> SIGMA=matrix(c(1,.6,.7,.6,1,.8,.7,.8,1),3,3)
> X=rmnorm(n=100000,varcov=SIGMA)
> # CDVineSeqEst expects copula data, with uniform margins
> U=pnorm(X)
> # family=c(1,1,1): Gaussian pair copulas; type=1: canonical vine
> CDVineSeqEst(dat=U, family = c(1,1,1),
+ type = 1, method = "mle")
$par
[1] 0.6001505 0.7023699 0.6698215
 
$par2
[1] 0 0 0
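Observe that the first two parameters estimate $\rho_{12}=0.6$ and $\rho_{13}=0.7$, while the third one estimates the partial correlation

$$\rho_{23;1}=\frac{\rho_{23}-\rho_{12}\rho_{13}}{\sqrt{(1-\rho_{12}^2)(1-\rho_{13}^2)}}=\frac{0.8-0.6\times 0.7}{\sqrt{(1-0.36)(1-0.49)}}\approx 0.665$$

which is consistent with the 0.6698 above.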

Note that it is consistent with the following algorithm, where conditional copulas are fitted explicitly. In the following, for each value of the first (conditioning) component, we fit a Gaussian copula on the remaining pair, conditionally on that value,

> library(copula)
> # pseudo-observations (the margins are standard Gaussian here)
> U=pnorm(X)
> U1U2=U[,1:2]
> U1U3=U[,c(1,3)]
> GaussCop = normalCopula(param=.5, dim = 2)
> # first tree: unconditional copulas of (U1,U2) and (U1,U3)
> fit12.mpl = fitCopula(GaussCop, U1U2, method="mpl")@estimate
> fit13.mpl = fitCopula(GaussCop, U1U3, method="mpl")@estimate
> fit12.mpl
[1] 0.5984932
> fit13.mpl
[1] 0.7005185
> # ranked values of the fitted copulas, used as (approximate) conditional pseudo-observations
> C12=pCopula(U1U2,normalCopula(param=fit12.mpl, dim = 2))
> C13=pCopula(U1U3,normalCopula(param=fit13.mpl, dim = 2))
> U12=rank(C12)/(nrow(U)+1)
> U13=rank(C13)/(nrow(U)+1)
> fit23a=rep(NA,99)
> for(i in 4:96){
+ x=i/100
+ # keep the observations whose first component is close to the conditioning value x
+ U23=cbind(U12[abs(U[,1]-x)<.02],U13[abs(U[,1]-x)<.02])
+ V23=cbind(rank(U23[,1])/(nrow(U23)+1),
+ rank(U23[,2])/(nrow(U23)+1))
+ # fit a Gaussian copula on that conditional subsample
+ fit23.mpl = fitCopula(GaussCop, V23, method="mpl")@estimate
+ fit23a[i]=fit23.mpl
+ }
> plot((1:99)/100,fit23a,col="red",xlab="conditioning value",ylab="estimated correlation")

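To visualize how flat it is, one can add the theoretical partial correlation of the Gaussian model to that graph (this line is not in the original code),

> abline(h=(.8-.6*.7)/sqrt((1-.6^2)*(1-.7^2)),lty=2)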
It looks like assuming that the conditional copula is constant was a valid assumption here: the estimated correlation remains close to the partial correlation, whatever the conditioning value,

But note that if the true distribution is not Gaussian, then assuming that the conditional copula is constant is not valid anymore (here, the sample was generated from a trivariate Clayton copula)
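For completeness, here is a sketch of how such a sample can be generated with the copula package (the Clayton parameter, 2, is an arbitrary choice, not necessarily the one used for that experiment); the fitting code above can then be re-run on this U,

> library(copula)
> # trivariate Clayton sample, already on the unit cube
> U=rCopula(100000,claytonCopula(2,dim=3))
> U1U2=U[,1:2]
> U1U3=U[,c(1,3)]
> # then refit fit12.mpl, fit13.mpl and run the conditional loop above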