This Monday, in the ACT8595 course, we went back to elliptical distributions and conditional independence (here is an old post on de Finetti’s theorem, and the extension to Hewitt-Savage’s). I showed simulations to illustrate those two concepts of dependent variables, but I wanted to spend some time visualizing densities. More specifically, what could the joint density be if we assume that the margins are standard Gaussian N(0,1) distributions?
- The Bivariate Gaussian distribution
Here, we consider a Gaussian random vector, with standard N(0,1) margins, and with correlation r (r=0.5 in the code below). This is the standard graph, with elliptical isodensity curves.
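For the record (the formula is not written out in the original text, but it is standard), the joint density with correlation r is

$$f(x,y)=\frac{1}{2\pi\sqrt{1-r^2}}\,\exp\left(-\frac{x^2-2rxy+y^2}{2(1-r^2)}\right)$$

The figure is obtained with the following code,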
r=.5
library(mnormt)   # dmnorm() and rmnorm()
S=matrix(c(1,r,r,1),2,2)
f=function(x,y) dmnorm(cbind(x,y),varcov=S)
vx=seq(-3,3,length=201)
vy=seq(-3,3,length=201)
z=outer(vx,vy,f)
set.seed(1)
X=rmnorm(1500,varcov=S)
xhist <- hist(X[,1], plot=FALSE)
yhist <- hist(X[,2], plot=FALSE)
top <- max(c(xhist$density, yhist$density, dnorm(0)))
# density level curves and scatterplot in the middle, histograms of the margins on the sides
nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE)
par(mar=c(3,3,1,1))
image(vx,vy,z,col=rev(heat.colors(101)))
contour(vx,vy,z,col="blue",add=TRUE)
points(X,cex=.2)
par(mar=c(0,3,1,1))
barplot(xhist$density, axes=FALSE, ylim=c(0, top), space=0, col="light green")
lines((density(X[,1])$x-xhist$breaks[1])/diff(xhist$breaks)[1],
      dnorm(density(X[,1])$x), col="red")
par(mar=c(3,0,1,1))
barplot(yhist$density, axes=FALSE, xlim=c(0, top), space=0, horiz=TRUE, col="light green")
lines(dnorm(density(X[,2])$x),
      (density(X[,2])$x-yhist$breaks[1])/diff(yhist$breaks)[1], col="red")
That was the simple part.
- The Bivariate Student-t distribution
Consider now another elliptical distribution, the bivariate Student-t (with k=3 degrees of freedom, and the same correlation r, in the code below). But here we want to normalize the margins. Thus, instead of the pair (X1,X2), we consider the pair (G(X1),G(X2)), where G is chosen so that the marginal distributions are N(0,1). The new density is obtained easily, since the transformation G is one-to-one and increasing.
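Explicitly, with G(x)=Φ⁻¹(T_k(x)), where T_k denotes the cdf of the Student-t distribution with k degrees of freedom and Φ the standard Gaussian cdf (this is simply the change-of-variables formula, written out here to match the code below), the density of the transformed pair is

$$f_{(G(X_1),G(X_2))}(x,y)=\frac{f_{\boldsymbol X}\big(G^{-1}(x),G^{-1}(y)\big)}{G'\big(G^{-1}(x)\big)\,G'\big(G^{-1}(y)\big)}$$

where f_X denotes the bivariate Student-t density. In R, this gives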
k=3
r=.5
library(mnormt)   # for dmt() and rmt()
G=function(x) qnorm(pt(x,df=k))        # maps the Student-t(k) scale to the N(0,1) scale
dg=function(x) dt(x,df=k)/dnorm(qnorm(pt(x,df=k)))   # derivative G'(x)
Ginv=function(x) qt(pnorm(x),df=k)     # inverse transformation
S=matrix(c(1,r,r,1),2,2)
# density of (G(X1),G(X2)): change of variables, with the Jacobian G' evaluated at Ginv(x) and Ginv(y)
f=function(x,y) dmt(cbind(Ginv(x),Ginv(y)),S=S,df=k)/(dg(Ginv(x))*dg(Ginv(y)))
vx=seq(-3,3,length=201)
vy=seq(-3,3,length=201)
z=outer(vx,vy,f)
set.seed(1)
Z=rmt(1500,S=S,df=k)
X=G(Z)
# the figure is obtained with the same layout/image/contour code as in the Gaussian case above
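As a quick sanity check (this snippet is my addition, it was not in the original code), one can verify that the transformed margins are indeed standard Gaussian; by construction they are exactly N(0,1), so a Kolmogorov–Smirnov test should not reject normality,

ks.test(X[,1],"pnorm")
ks.test(X[,2],"pnorm")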
Because we considered a nonlinear transformation of the margins, the level curves are no longer elliptical. But there is still some kind of symmetry.
- The Exchangeable Case with Conditionally Independent Random Variables
We did consider the case where X1 and X2 are independent random variables, given Θ, both being exponentially distributed with parameter Θ. As we’ve seen in class, it might be difficult to visualize such a sample, unless we have log scales on both axes. But instead of a log transformation, why not consider a transformation so that the margins will be N(0,1)? The only technical problem is that we do not have the (unconditional) distributions of the margins. Well, we have them, but they are integral based. From a computational point of view, that’s not a big deal… Computations might take a while, but we can visualize the density using the code below (here, we assume that Θ is Gamma(a,b) distributed, with a=0.6 and b=1 in the code).
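To make the transformation explicit (this formula is simply the integral that the code below evaluates numerically, with integrate()), the unconditional cdf of each margin is

$$F(x)=\int_0^\infty \big(1-e^{-\theta x}\big)\,\frac{b^a}{\Gamma(a)}\,\theta^{a-1}e^{-b\theta}\,d\theta,\qquad x\geq 0$$

the transformation is then G = Φ⁻¹ ∘ F, with its inverse computed numerically (using uniroot), and the joint density of the pair is obtained the same way, by integrating the product of the two conditional exponential densities against the Gamma density (the function H in the code).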
a=.6
b=1
h=.0001
# unconditional cdf of each margin (Gamma mixture of exponentials), mapped to the N(0,1) scale
G=function(x) qnorm(ifelse(x<0,0,integrate(function(z) pexp(x,z)*
  dgamma(z,a,b),lower=0,upper=Inf)$value))
# inverse transformation, computed numerically
Ginv=function(x) uniroot(function(z) G(z)-x,lower=-40,upper=1e5)$root
# numerical derivative of Ginv (central difference)
dg=function(x) (Ginv(x+h)-Ginv(x-h))/2/h
# joint (unconditional) density of the pair, integrating Theta out
H=function(xy) integrate(function(z) dexp(xy[2],z)*dexp(xy[1],z)*
  dgamma(z,a,b),lower=0,upper=Inf)$value
# density of the transformed pair (change of variables)
f=function(x,y) H(c(Ginv(x),Ginv(y)))*(dg(x)*dg(y))
vx=seq(-3,3,length=151)
vy=seq(-3,3,length=151)
z=matrix(NA,length(vx),length(vy))
for(i in 1:length(vx)){
  for(j in 1:length(vy)){
    z[i,j]=f(vx[i],vy[j])}}
set.seed(1)
Theta=rgamma(1500,a,b)
Z=cbind(rexp(1500,Theta),rexp(1500,Theta))        # conditionally independent, given Theta
X=cbind(Vectorize(G)(Z[,1]),Vectorize(G)(Z[,2]))  # G relies on integrate(), hence Vectorize
# the figure is then obtained with the same layout/image/contour code as in the Gaussian case
There is a small technical problem, but no big deal.
Here, the joint distribution is quite different. The margins are – one more time – standard Gaussian, but the shape of the joint density is not the same, with an asymmetry between the lower (left) tail and the upper (right) tail. More details when we introduce copulas; the only difference will be that the margins will be uniform on the unit interval, instead of standard Gaussian.
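As a small teaser (this snippet is my addition, not part of the original post), since the margins of X are standard Gaussian by construction, a sample with uniform margins (the copula view of the same dependence structure) is simply obtained with the probability integral transform,

U=pnorm(X)   # margins are now uniform on (0,1)
plot(U,cex=.2)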