Tag Archives: Gaussian

Simple Distributions for Mixtures?

The idea of GLMs is that, given some covariates, $Y|\boldsymbol{X}$ has a distribution in the exponential family (Gaussian, Poisson, Gamma, etc). But that does not mean that $Y$ has a similar distribution… so there is no reason to test for a Gamma model for $Y$ before running a Gamma regression, for instance. But are there cases where it might work? Where the non-conditional distribution is the same (same family, at least) as the conditional ones?

For instance, if $(X,Y)$ has a joint Gaussian distribution, then both marginals are Gaussian, and so is $Y|X$. So, in that case, if the covariate is normally distributed, it is possible to have a Gaussian distribution also for $Y$. The econometric interpretation is that with a standard Gaussian linear model, if $X$ is normally distributed, not only the conditional distribution $Y|X$ is Gaussian, but so is the non-conditional distribution of $Y$.

> set.seed(1)
> n=1e3
> X=rnorm(n,10,2)
> Y=1+3*X+rnorm(n)
> plot(X,Y,xlim=c(4,20))

Indeed, here the distribution of $Y$ is also Gaussian

> library(nortest)
> ad.test(Y)

	Anderson-Darling normality test

data:  Y
A = 0.23155, p-value = 0.802

> shapiro.test(Y)

	Shapiro-Wilk normality test

data:  Y
W = 0.99892, p-value = 0.8293

(not only from a statistical point of view; the theory of Gaussian random vectors actually confirms that the non-conditional distribution is Gaussian)

Here $X$ is continuous. What if we consider a finite mixture here, i.e. $X$ takes only a finite number of values? Actually, Teicher (1963) proved that it is not possible to have a non-conditional Gaussian distribution for $Y$. But in practice, would we really reject the Gaussian assumption for $Y$? If the number of classes is too small, yes. But with a large number of classes (a sufficiently large number of mixture components), it is possible,

> pv=function(k=2){
+ n=1e4
+ X=rnorm(n,10,2)
+ Q=quantile(X,(0:k)/k)      # k classes, from empirical quantiles
+ Q[1]=0
+ Xc=cut(X,Q,labels=1:k)     # class of each observation
+ XcN=tapply(X,Xc,mean)      # mean of X within each class
+ Xn=XcN[as.numeric(Xc)]     # discretized covariate
+ Y=1+3*Xn+rnorm(n)
+ ad.test(Y)$p.value}        # p-value of the normality test on Y
 
> plot(2:100,Vectorize(pv)(2:100),type="l")
> abline(h=.05,col="red")

So here, it could be possible to have a Gaussian distribution also for $Y$. At least, to accept that assumption, statistically.

In the context of a Poisson regression, it is well known that it is not possible to have, at the same time, $Y|X$ that is Poisson distributed (that's a Poisson regression) and $Y$ that is Poisson distributed. That simply comes from the fact that

$$\mathbb{E}[Y]=\mathbb{E}\big[\mathbb{E}[Y|X]\big]$$

while

$$\text{Var}[Y]=\mathbb{E}\big[\text{Var}[Y|X]\big]+\text{Var}\big[\mathbb{E}[Y|X]\big]$$

and because of the conditional Poisson distribution, $\text{Var}[Y|X]=\mathbb{E}[Y|X]$, then

$$\text{Var}[Y]=\mathbb{E}[Y]+\text{Var}\big[\mathbb{E}[Y|X]\big]$$

Thus,

$$\text{Var}[Y]>\mathbb{E}[Y]$$ as soon as $\mathbb{E}[Y|X]$ is not constant.

So $Y$ cannot be Poisson distributed. But again, it could be possible, if heterogeneity is not too large, to accept the null assumption of a Poisson distribution for $Y$.
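
Just to illustrate that last point, here is a small sketch (not from the original post; the log link, the coefficients and the overdispersion test below are arbitrary choices): with little heterogeneity in the covariate, a simple variance-to-mean comparison would hardly reject the Poisson assumption for $Y$,

> set.seed(1)
> n=1e4
> X=rnorm(n,0,.2)                        # small heterogeneity in the covariate
> Y=rpois(n,exp(1+.1*X))                 # conditional Poisson distribution, log link
> c(mean(Y),var(Y))                      # mean and variance are almost equal
> 1-pchisq((n-1)*var(Y)/mean(Y),n-1)     # p-value of a simple overdispersion test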

More generally, it is very difficult to have a distribution family for $Y|X$ that is also the distribution family of the non-conditional variable $Y$. In the context of a finite mixture ($X$ takes a finite number of values), Teicher (1963) proved that it was not possible, neither for the Gaussian distribution nor the Gamma distribution. And to go further, check Monfrini (2002) (thanks Romuald for pointing out the reference).

Hence, as I keep saying, before running a regression model for $Y|X$ within some given family, it is never a good idea to check whether the non-conditional distribution of $Y$ has the same distribution. Because there is no reason, usually, to remain in the same family.

Overview on Multivariate Distributions

In June 2016, with Olivier L’Haridon, we will organize a (small) conference, in Rennes, on risk models in a multi-attribute framework. In order to fully enjoy the workshop (more to come on the blog), we will organize every month an internal workshop on that topic. We will start tomorrow afternoon, 13:00-14:30, and I will give a brief talk on multivariate distributions, with an emphasis on spherical / elliptical distributions, distributions on the simplex, and copulas. Slides are now online,

Central Limit Theorem

This week, in the MAT8595 course, before proving the Fisher-Tippett theorem, we will get back to the proof of the Central Limit Theorem, and the class of stable distributions (in Lévy's sense). In order to illustrate the problem of heavy tails on the behavior of the mean, consider a sequence of i.i.d. Gaussian random variables $X_i$. On top, we visualize the sequence, and below, we visualize the associated random walk

$$S_n=\sum_{i=1}^n X_i$$

(the central limit theorem will give a limiting distribution for $n^{-1}S_n$ in the case where the variance of the $X_i$ is finite)

If we consider a sequence of i.i.d. random variables $X_i$ with heavier tails (possibly with infinite variance), we can still define $S_n$, but as we can see below, $S_n$ can be quite erratic.
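
As a quick illustration (a sketch, not the code used for the graphs above), one can compare the running mean $n^{-1}S_n$ of a Gaussian sample with that of a Cauchy sample, which has infinite variance,

> set.seed(1)
> n=1000
> par(mfrow=c(1,2))
> plot(cumsum(rnorm(n))/(1:n),type="l",xlab="n",ylab="",main="Gaussian")   # finite variance
> plot(cumsum(rcauchy(n))/(1:n),type="l",xlab="n",ylab="",main="Cauchy")   # infinite variance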

As we will see this Thursday, the key to deriving stable distributions for the central limit theorem, or possible limiting distributions for the maximum, is Cauchy's functional equation. I strongly recommend looking at the proof.

More significant? so what…

Following my non-life insurance class, this morning, I had an interesting question from a student, that I will try to illustrate, and reformulate as accurately as possible. Consider a simple regression model, with one variable of interest, and one possible explanatory variable. Assume that we have two possible models, with the following output (yes, I do hide interesting parts here, but it is to get quickly to my student’s point)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the first model – a GLM with some distribution, and some link function – and

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the second one – with another GLM, with another distribution, but the same link function (I guess I could have changed it, but it does not really matter here). Then, I got the following statement “I would like to choose the first model because the explanatory variable is more significant, and therefore, this model should have a stronger predictive power“.

That’s a nice idea, isn’t it? Actually, I guess this is why I love teaching, because I would never have been able to think about such an idea by myself. Because when you look at that statement, somehow it could make sense. Except that, from my point of view, it is not valid at all. My first thought was to recall a standard example in statistical inference: you cannot claim that a distribution is better than another one just by looking at the parameter estimates.

> fitdistr(Y,"normal")
      mean          sd    
  0.93685011   0.90700830 
 (0.06413517) (0.04535042)
> fitdistr(Y,"exponential")
      rate   
  1.06740661 
 (0.07547704)

Can I claim that the Gaussian distribution is better than the exponential one because its parameter estimates have smaller standard errors? Because somehow, this is what we did when we claimed previously that the first model was better than the second one.

Let me get back on the outputs of the two regressions, and let me explain what I did. Actually, I wanted to have a story close to the one on the Gaussian versus exponential fit. So I did generate some exponential random variable,

> set.seed(5)
> n=200
> U=runif(n); 
> Y=-log(U)

Here, we can visualize the histogram of this sample, as well as the estimated exponential distribution

> hist(Y,proba=TRUE,col="light green",border="white",lwd=2,breaks=seq(0,5.3333333333333,by=.333333333))
> x=seq(0,6,by=.02)
> lines(x,dexp(x,1/mean(Y)),col="red",lty=2)

On top of that, let us fit a gamma distribution. Using a GLM (where the regression is here on a constant only), just for practice, because later on, we will use a gamma regression on that variable

> reg0=glm(Y~1,family=Gamma(link="identity"))
> a=reg0$coefficient
> b=summary(reg0)$dispersion
> lines(x,dgamma(x,shape=1/b,scale=a*b),col="blue")

Now, we need a covariate, to run some regressions. What I wanted is some variable slightly correlated with our previous variable. Slightly, just to make sure that the $p$-value in the regression will be close to 5% or 10%. So here, I did generate a variable such that the pair has a Clayton copula, with parameter 0.1 (which is small, extremely small)

> a=.1
> set.seed(5)
> n=200
> U=runif(n); 
> V=(U^(-a)*(runif(n)^(-a/(1+a))-1)+1)^(-1/a)
> Y=-log(U)
> X=qnorm(V)

To visualize the copula of the variables, we can use

> cop=function(u,v){
+ (a+1)*(u*v)^(-(a+1))*
+ (u^(-a)+v^(-a)-1)^(-(2*a+1)/a) }
> x=y=seq(.05,.95,by=.05)
> z=outer(x,y,cop)
> mat=persp(x,y,z,col="green",shade=TRUE,xlim=c(0,1),ylim=c(0,1),zlim=c(0,2),theta=-30,
+ ticktype ="detailed",zlab="")

We should not be far away from independence (actually, there is a negative – significant – Pearson correlation). Now, consider two models,

  • a Gaussian model (here a standard linear model)
  • a gamma model, with a linear link function

The outputs are the following (you will recognize the outputs given previously)

> reg1=lm(Y~X)
> reg2=glm(Y~X,family=Gamma(link="identity"))
> summary(reg1)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9021 on 198 degrees of freedom
Multiple R-squared:  0.02071,	Adjusted R-squared:  0.01576 
F-statistic: 4.187 on 1 and 198 DF,  p-value: 0.04206

> summary(reg2)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for Gamma family taken to be 0.9086447)

    Null deviance: 229.72  on 199  degrees of freedom
Residual deviance: 226.58  on 198  degrees of freedom
AIC: 379.22

Number of Fisher Scoring iterations: 10

And here are the two predictions,

So, which model should we use? As usual, my answer will be “let’s have a look at the data” instead of looking only at tables of figures. Using some code posted a few days ago, let us visualize the two regressions. The Gaussian model is here

(for the lower part, I do not go below 0 since we do have, here, a positive variable that we would like to model) while the gamma one is here
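
Since the graphs are not reproduced here, a possible sketch (not the code mentioned above) to draw the two fitted curves, with pointwise 90% confidence bands, is the following,

> x0=data.frame(X=seq(-3,3,by=.1))
> p1=predict(reg1,newdata=x0,se.fit=TRUE)                   # Gaussian (linear) model
> p2=predict(reg2,newdata=x0,type="response",se.fit=TRUE)   # gamma model, identity link
> plot(X,Y)
> lines(x0$X,p1$fit,col="red")
> lines(x0$X,p1$fit-qnorm(.95)*p1$se.fit,col="red",lty=2)
> lines(x0$X,p1$fit+qnorm(.95)*p1$se.fit,col="red",lty=2)
> lines(x0$X,p2$fit,col="blue")
> lines(x0$X,p2$fit-qnorm(.95)*p2$se.fit,col="blue",lty=2)
> lines(x0$X,p2$fit+qnorm(.95)*p2$se.fit,col="blue",lty=2)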

And if we believe that the explanatory variable has no predictive power (since we can claim that the parameter is not significant in the regression), and we remove it from the regression, we get

Here, I do believe that the gamma (not to say the exponential) model is better because it is clearly more coherent with properties of the variable of interest. I trust more the confidence interval obtained above on the gamma model, than the one obtained with a Gaussian distribution. Even if the parameter in the regression is “more significant”.

GLM, non-linearity and heteroscedasticity

Last week in the non-life insurance course, we’ve seen the theory of Generalized Linear Models, emphasizing the two important components

  • the link function (which is actually the key component in predictive modeling)
  • the distribution, or the variance function

Just to illustrate, consider my favorite dataset

lin.mod = lm(dist~speed,data=cars)

A linear model means here

$$Y_i=\beta_0+\beta_1 X_i+\varepsilon_i$$

where the residuals $\varepsilon_i$ are assumed to be centered, independent, and with identical variance. If we visualize that linear regression, we usually see something like that

The idea here (in GLMs) is to assume

$$Y|X=x\sim\mathcal{N}(\beta_0+\beta_1 x,\sigma^2)$$

which will produce the same model as the one described previously, based on some error term. That model can be visualized below,

attach(cars)
n=2
X= cars$speed 
Y=cars$dist
df=data.frame(X,Y)
vX=seq(min(X)-2,max(X)+2,length=n)
vY=seq(min(Y)-15,max(Y)+15,length=n)
mat=persp(vX,vY,matrix(0,n,n),zlim=c(0,.1),theta=-30,ticktype ="detailed", box = FALSE)
reggig=glm(Y~X,data=df,family=gaussian(link="identity"))
x=seq(min(X),max(X),length=501)
C=trans3d(x,predict(reggig,newdata=data.frame(X=x),type="response"),rep(0,length(x)),mat)
lines(C,lwd=2)
sdgig=sqrt(summary(reggig)$dispersion)
x=seq(min(X),max(X),length=501)
y1=qnorm(.95,predict(reggig,newdata=data.frame(X=x),type="response"), sdgig)
C=trans3d(x,y1,rep(0,length(x)),mat)
lines(C,lty=2)
y2=qnorm(.05,predict(reggig,newdata=data.frame(X=x),type="response"), sdgig)
C=trans3d(x,y2,rep(0,length(x)),mat)
lines(C,lty=2)
C=trans3d(c(x,rev(x)),c(y1,rev(y2)),rep(0,2*length(x)),mat)
polygon(C,border=NA,col="yellow")
C=trans3d(X,Y,rep(0,length(X)),mat)
points(C,pch=19,col="red")
n=8
vX=seq(min(X),max(X),length=n)
mgig=predict(reggig,newdata=data.frame(X=vX))
sdgig=sqrt(summary(reggig)$dispersion)
for(j in n:1){
stp=251
x=rep(vX[j],stp)
y=seq(min(min(Y)-15,qnorm(.05,predict(reggig,newdata=data.frame(X=vX[j]),type="response"), sdgig)),max(Y)+15,length=stp)
z0=rep(0,stp)
z=dnorm(y, mgig[j], sdgig)
C=trans3d(c(x,x),c(y,rev(y)),c(z,z0),mat)
polygon(C,border=NA,col="light blue",density=40)
C=trans3d(x,y,z0,mat)
lines(C,lty=2)
C=trans3d(x,y,z,mat)
lines(C,col="blue")}

We do have two parts here: the linear increase of the average, $\beta_0+\beta_1 x$, and the constant variance of the normal distribution, $\sigma^2$.

On the other hand, if we assume a Poisson regression,

poisson.reg = glm(dist~speed,data=cars,family=poisson(link="log"))

we have something like

This time, two things have changed simultaneously: our model is no longer linear, it is now an exponential one, $\mathbb{E}[Y|X=x]=\exp(\beta_0+\beta_1 x)$, and the variance is also increasing with the explanatory variable, since with a Poisson regression, $\text{Var}[Y|X=x]=\mathbb{E}[Y|X=x]$.

If we adapt the previous code, we get

The problem is that we changed two things when we introduced the Poisson regression from the linear model. So let us look at what happens when we change the two components independently. First, we can change the link function, with a Gaussian model but this time a multiplicative model (with a logarithm link function)

gaussian.reg = glm(dist~speed,data=cars,family=gaussian(link="log"))

which is still, here, a homoscedastic model, but this time non-linear. Or we can change the link function in the Poisson regression, to get a linear model, but a heteroscedastic one

poisson.lin = glm(dist~speed,data=cars,family=poisson(link="identity"))
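
To put the four fitted mean functions on a single graph, here is a short sketch (not from the original post),

speed0=data.frame(speed=seq(min(cars$speed),max(cars$speed),length=101))
plot(cars$speed,cars$dist)
lines(speed0$speed,predict(lin.mod,newdata=speed0),lwd=2)                            # linear model, Gaussian
lines(speed0$speed,predict(gaussian.reg,newdata=speed0,type="response"),col="blue")  # log link, Gaussian
lines(speed0$speed,predict(poisson.lin,newdata=speed0,type="response"),col="purple") # identity link, Poisson
lines(speed0$speed,predict(poisson.reg,newdata=speed0,type="response"),col="red")    # log link, Poisson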

So this is basically what GLMs are about….

Bounding sums of random variables, part 2

It is possible to go further, much further actually, on bounding sums of random variables (mentioned in the previous post). For instance, while everything in that previous post was defined for distributions on $\mathbb{R}_+$, it is possible to extend the bounds to distributions on $\mathbb{R}$. Especially if we deal with quantiles. Everything we’ve seen remains valid. Consider for instance two $\mathcal{N}(0,1)$ distributions. Using the previous code, it is possible to compute bounds for the quantiles of the sum of two Gaussian variates. And one has to remember that those bounds are sharp.

> Finv=function(u) qnorm(u,0,1)
> Ginv=function(u) qnorm(u,0,1)
> n=1000
> Qinf=Qsup=rep(NA,n-1)
> for(i in 1:(n-1)){
+ J=0:i
+ Qinf[i]=max(Finv(J/n)+Ginv((i-J)/n))
+ J=(i-1):(n-1)
+ Qsup[i]=min(Finv((J+1)/n)+Ginv((i-1-J+n)/n))
+ }

Actually, it is possible to compare here with two simple cases: the independent case, where the sum has a $\mathcal{N}(0,2)$ distribution, and the comonotonic case, where the sum has a $\mathcal{N}(0,4)$ distribution.

>  lines(x,qnorm(x,sd=sqrt(2)),col="blue",lty=2)
>  lines(x,qnorm(x,sd=2),col="blue",lwd=2)

On the graph below, the comonotonic case (usually considered as the worst case scenario) is the plain blue line (with here an animation to illustrate the convergence of the numerical algorithm)

Below that (strong) blue line, risks are sub-additive for the Value-at-Risk, i.e.

$$\text{VaR}[X+Y]\leq \text{VaR}[X]+\text{VaR}[Y]$$

but above, risks are super-additive for the Value-at-Risk, i.e.

$$\text{VaR}[X+Y]\geq \text{VaR}[X]+\text{VaR}[Y]$$

(since for comonotonic variates, the quantile of the sum is the sum of quantiles). It is possible to visualize those two cases above: in green the area where risks are super-additive, while the yellow area is where risks are sub-additive.

Recall that for a Gaussian random vector with correlation $r$, the quantile of the sum is the quantile of a centered random variable with variance $2(1+r)$. Thus, on the graph below, we can visualize the cases that can be obtained with a Gaussian copula. Here the yellow area can be obtained with a Gaussian copula, the upper and the lower bounds being respectively the comonotonic and the countermonotonic cases.

http://freakonometrics.blog.free.fr/public/perso6/sum-norm-G-bounds2.gif

But the green area can also be obtained when we sum two Gaussian variables ! We just have to go outside the Gaussian world, and consider another copula.

Another point is that, in the previous post, $C^-$ was the lower Fréchet-Hoeffding bound on the set of copulas. But all the previous results remain valid if $C^-$ is a lower bound on the set of copulas of interest. Especially

$$\tau_{C^-,L}(F,G)\leq \sigma_{C,L}(F,G)\leq\rho_{C^-,L}(F,G)$$

for all $C$ such that $C\geq C^-$. For instance, if we assume that the copula should have positive dependence, i.e. $C\geq C^\perp$, then

$$\tau_{C^\perp,L}(F,G)\leq \sigma_{C,L}(F,G)\leq\rho_{C^\perp,L}(F,G)$$

Which means we should have sharper bounds. Numerically, it is possible to compute those sharper bounds for quantiles. The lower bound becomes

$$\sup_{u\in[0,x]}\left\{F^{-1}(u)+G^{-1}\left(\frac{x-u}{1-u}\right)\right\}$$

while the upper bound is

$$\inf_{u\in[x,1]}\left\{F^{-1}(u)+G^{-1}\left(\frac{x}{u}\right)\right\}$$

Again, one can easily compute those quantities on a grid of the unit interval,

> Qinfind=Qsupind=rep(NA,n-1)
> for(i in 1:(n-1)){
+  J=1:(i)
+  Qinfind[i]=max(Finv(J/n)+Ginv((i-J)/n/(1-J/n)))
+  J=(i):(n-1)
+  Qsupind[i]=min(Finv(J/n)+Ginv(i/J))
+ }

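The original post only displays the resulting graph; a possible sketch to superimpose the initial bounds, the sharper ones, and the two reference cases (the colors below are arbitrary) is

> u=(1:(n-1))/n
> plot(u,Qinf,type="l",ylim=range(c(Qinf,Qsup),finite=TRUE),xlab="",ylab="")
> lines(u,Qsup)
> lines(u,Qinfind,col="blue")                   # sharper bounds, under positive dependence
> lines(u,Qsupind,col="blue")
> lines(u,qnorm(u,sd=sqrt(2)),col="red",lty=2)  # independent case
> lines(u,qnorm(u,sd=2),col="red",lwd=2)        # comonotonic case
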
We get the graph below (the blue area is here to illustrate how much sharper those bounds get with the assumption that we do have positive dependence, this area being attained only with copulas exhibiting non-positive dependence)

For high quantiles, the upper bound is rather close to the one we had before, since worst cases are probably obtained when we do have positive correlation. But it strongly impacts the lower bound. For instance, it becomes now impossible to have a negative quantile, when the probability exceeds 75%, if we do have positive dependence…

> Qinfind[u==.75]
[1] 0

(nonparametric) copula density estimation

Today, we will go further on the inference of copula functions. Some codes (and references) can be found on a previous post, on nonparametric estimators of copula densities (among other related things).  Consider (as before) the loss-ALAE dataset (since we’ve been working a lot on that dataset)

> library(MASS)
> library(evd)
> X=lossalae
> U=cbind(rank(X[,1])/(nrow(X)+1),rank(X[,2])/(nrow(X)+1))

The standard tool to plot nonparametric estimators of densities is to use multivariate kernels. We can look at the density using

> mat1=kde2d(U[,1],U[,2],n=35)
> persp(mat1$x,mat1$y,mat1$z,col="green",
+ shade=TRUE,theta=s*5,    # s is the rotation index used for the animated graph (set e.g. s=6 for a single view)
+ xlab="",ylab="",zlab="",zlim=c(0,7))

or level curves (isodensity curves) with more detailed estimators (on grids with shorter steps)

> mat1=kde2d(U[,1],U[,2],n=101)
> image(mat1$x,mat1$y,mat1$z,col=
+ rev(heat.colors(100)),xlab="",ylab="")
> contour(mat1$x,mat1$y,mat1$z,add=
+ TRUE,levels = pretty(c(0,4), 11))

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est1.gif

Kernels are nice, but we clearly observe some border bias, extremely strong in corners (the estimator is 1/4th of what it should be, see another post for more details). Instead of working on the sample $(U_i,V_i)$ on the unit square, consider some transformed sample $(Q(U_i),Q(V_i))$, where $Q:(0,1)\rightarrow\mathbb{R}$ is a given function. E.g. a quantile function of an unbounded distribution, for instance the quantile function of the $\mathcal{N}(0,1)$ distribution. Then, we can estimate the density of the transformed sample, and using the inversion technique, derive an estimator of the density of the initial sample. Since the inverse of a (general) function is not that simple to compute, the code might be a bit slow. But it does work,

> gaussian.kernel.copula.surface <- function (u,v,n) {
+   s=seq(1/(n+1), length=n, by=1/(n+1))
+   mat=matrix(NA,nrow = n, ncol = n)
+ sur=kde2d(qnorm(u),qnorm(v),n=1000,
+ lims = c(-4, 4, -4, 4))
+ su<-sur$z
+ for (i in 1:n) {
+     for (j in 1:n) {
+ 	Xi<-round((qnorm(s[i])+4)*1000/8)+1;
+ 	Yj<-round((qnorm(s[j])+4)*1000/8)+1
+ 	mat[i,j]<-su[Xi,Yj]/(dnorm(qnorm(s[i]))*
+ 	dnorm(qnorm(s[j])))
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }
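
For instance, the estimator defined above can be plotted with something like the following (a usage sketch, not necessarily the exact call behind the graph below),

> mat=gaussian.kernel.copula.surface(U[,1],U[,2],n=35)
> persp(mat$x,mat$y,mat$z,col="green",shade=TRUE,
+ xlab="",ylab="",zlab="")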

Here, we get

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est2.gif

Note that it is possible to consider another transformation, e.g. the quantile function of a Student-t distribution.

> student.kernel.copula.surface =
+  function (u,v,n,d=4) {
+  s <- seq(1/(n+1), length=n, by=1/(n+1))
+  mat <- matrix(NA,nrow = n, ncol = n)
+ sur<-kde2d(qt(u,df=d),qt(v,df=d),n=5000,
+ lims = c(-8, 8, -8, 8))
+ su<-sur$z
+ for (i in 1:n) {
+     for (j in 1:n) {
+ 	Xi<-round((qt(s[i],df=d)+8)*5000/16)+1;
+ 	Yj<-round((qt(s[j],df=d)+8)*5000/16)+1
+ 	mat[i,j]<-su[Xi,Yj]/(dt(qt(s[i],df=d),df=d)*
+ 	dt(qt(s[j],df=d),df=d))
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }

Another strategy is to consider kernels that have precisely the unit interval as support. The idea is here to consider the product of Beta kernels, where the parameters depend on the location

> beta.kernel.copula.surface=
+  function (u,v,bx=.025,by=.025,n) {
+  s <- seq(1/(n+1), length=n, by=1/(n+1))
+  mat <- matrix(0,nrow = n, ncol = n)
+ for (i in 1:n) {
+     a <- s[i]
+     for (j in 1:n) {
+     b <- s[j]
+ 	mat[i,j] <- sum(dbeta(a,u/bx,(1-u)/bx) *
+     dbeta(b,v/by,(1-v)/by)) / length(u)
+     }
+ }
+ return(list(x=s,y=s,z=data.matrix(mat)))
+ }

http://freakonometrics.blog.free.fr/public/perso6/3dcop-est3.gif

On those two graphs, we can clearly observe strong tail dependence in the upper (right) corner, that cannot be intuited using a standard kernel estimator…

Copulas and tail dependence, part 1

As mentioned in the course last week, Venter (2003) suggested nice functions to illustrate tail dependence (see also some slides used in Berlin a few years ago).

  • Joe (1990)’s lambda

Joe (1990) suggested a (strong) tail dependence index. For lower tails, for instance, consider

$$\lambda_L=\lim_{z\downarrow 0}\mathbb{P}(U\leq z\mid V\leq z)$$

i.e.

$$\lambda_L=\lim_{z\downarrow 0}\frac{C(z,z)}{z}$$
  • Upper and lower strong tail (empirical) dependence functions

The idea is to plot the function above, in order to visualize the limiting behavior. Define

$$L(z)=\frac{C(z,z)}{z}$$

for the lower tail, and

$$R(z)=\frac{C^\star(1-z,1-z)}{1-z}$$

for the upper tail, where $C^\star$ is the survival copula associated with $C$, in the sense that

$$\mathbb{P}(U>u,V>v)=C^\star(1-u,1-v)$$

while

$$C^\star(u,v)=u+v-1+C(1-u,1-v)$$

Now, one can easily derive empirical counterparts of those functions, i.e.

$$\widehat{L}(z)=\frac{\sum_{i=1}^n \mathbf{1}(U_i\leq z,V_i\leq z)}{\sum_{i=1}^n \mathbf{1}(U_i\leq z)}$$

and

$$\widehat{R}(z)=\frac{\sum_{i=1}^n \mathbf{1}(U_i>z,V_i>z)}{\sum_{i=1}^n \mathbf{1}(U_i>z)}$$

Thus, for upper tail, on the right, we have the following graph

http://freakonometrics.blog.free.fr/public/perso6/upper-lambda.gif

and for the lower tail, on the left, we have

http://freakonometrics.blog.free.fr/public/perso6/lower-lambda.gif

For the code, consider some real data, like the loss-ALAE dataset.

> library(evd)
> X=lossalae

The idea is to plot, on the left, the lower tail concentration function, and on the right, the upper tail function.

> U=rank(X[,1])/(nrow(X)+1)
> V=rank(X[,2])/(nrow(X)+1)
> Lemp=function(z) sum((U<=z)&(V<=z))/sum(U<=z)
> Remp=function(z) sum((U>=1-z)&(V>=1-z))/sum(U>=1-z)
> u=seq(.001,.5,by=.001)
> L=Vectorize(Lemp)(u)
> R=Vectorize(Remp)(rev(u))
> plot(c(u,u+.5-u[1]),c(L,R),type="l",ylim=0:1,
+ xlab="LOWER TAIL          UPPER TAIL")
> abline(v=.5,col="grey")

Now, we can compare this graph with what should be obtained for some parametric copulas that have the same Kendall’s tau. For instance, if we consider a Gaussian copula,

> tau=cor(lossalae,method="kendall")[1,2]
> library(copula)
> paramgauss=sin(tau*pi/2)
> copgauss=normalCopula(paramgauss)
> Lgaussian=function(z) pCopula(c(z,z),copgauss)/z
> Rgaussian=function(z) (1-2*z+pCopula(c(z,z),copgauss))/(1-z)
> u=seq(.001,.5,by=.001)
> Lgs=Vectorize(Lgaussian)(u)
> Rgs=Vectorize(Rgaussian)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgs,Rgs),col="red")

or Gumbel’s copula,

> paramgumbel=1/(1-tau)
> copgumbel=gumbelCopula(paramgumbel, dim = 2)
> Lgumbel=function(z) pCopula(c(z,z),copgumbel)/z
> Rgumbel=function(z) (1-2*z+pCopula(c(z,z),copgumbel))/(1-z)
> u=seq(.001,.5,by=.001)
> Lgl=Vectorize(Lgumbel)(u)
> Rgl=Vectorize(Rgumbel)(1-rev(u))
> lines(c(u,u+.5-u[1]),c(Lgl,Rgl),col="blue")

That’s nice (isn’t it?), but since we do not have any confidence interval, it is still hard to conclude (even if it looks like Gumbel copula has a much better fit than the Gaussian one). A strategy can be to generate samples from those copulas, and to visualize what we had. With a Gaussian copula, the graph looks like

> u=seq(.0025,.5,by=.0025); nu=length(u)
> nsimul=500
> MGS=matrix(NA,nsimul,2*nu)
> for(s in 1:nsimul){
+ Xs=rCopula(nrow(X),copgauss)
+ Us=rank(Xs[,1])/(nrow(Xs)+1)
+ Vs=rank(Xs[,2])/(nrow(Xs)+1)
+ Lemp=function(z) sum((Us<=z)&(Vs<=z))/sum(Us<=z)
+ Remp=function(z) sum((Us>=1-z)&(Vs>=1-z))/sum(Us>=1-z)
+ MGS[s,1:nu]=Vectorize(Lemp)(u)
+ MGS[s,(nu+1):(2*nu)]=Vectorize(Remp)(rev(u))
+ lines(c(u,u+.5-u[1]),MGS[s,],col="red")
+ }

(including – pointwise – 90% confidence bands)

> Q95=function(x) quantile(x,.95)
> V95=apply(MGS,2,Q95)
> lines(c(u,u+.5-u[1]),V95,col="red",lwd=2)
> Q05=function(x) quantile(x,.05)
> V05=apply(MGS,2,Q05)
> lines(c(u,u+.5-u[1]),V05,col="red",lwd=2)

while it is

with Gumbel copula. Isn’t it a nice (graphical) tool ?

But as mentioned in the course, the statistical convergence can be slow. Extremely slow. So assessing whether the underlying copula has tail dependence, or not, is not that simple. Especially if the copula exhibits tail independence. Like the Gaussian copula. Consider a sample of size 1,000. This is what we obtain if we generate random scenarios,

or we look at the left tail (with a log-scale)

Now, consider a 10,000 sample,

or with a log-scale

We can even consider a 100,000 sample,

or with a log-scale

On those graphs, it is rather difficult to conclude if the limit is 0, or some strictly positive value (again, it is a classical statistical problem when the value of interest is at the border of the support of the parameter). So, a natural idea is to consider a weaker tail dependence index. Unless you have something like 100,000 observations…
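
Just as a sketch (not the original code) of how such a check could be coded, with a 10,000 sample from the fitted Gaussian copula and a log-scale on the lower tail,

> n=10000
> Xs=rCopula(n,copgauss)
> Us=rank(Xs[,1])/(n+1)
> Vs=rank(Xs[,2])/(n+1)
> Lemp=function(z) sum((Us<=z)&(Vs<=z))/sum(Us<=z)
> u=10^seq(-3,log10(.5),length=101)
> plot(u,Vectorize(Lemp)(u),type="l",log="x",ylim=0:1,xlab="",ylab="")
> lines(u,Vectorize(function(z) pCopula(c(z,z),copgauss)/z)(u),col="red")   # theoretical L(z) for the Gaussian copula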

Copulas estimation and influence of margins

Just a short post to get back to results mentioned at the end of the course. Since copulas are obtained using (univariate) quantile functions in the joint cumulative distribution function, they are – somehow – related to the fitted marginal distributions. In order to illustrate this point, consider an i.i.d. sample $(X_{i,1},X_{i,2})$ from a (bivariate) Student-t distribution,

library(mnormt)
r=.5
n=200
X=rmt(n,mean=c(0,0),S=matrix(c(1,r,r,1),2,2),df=4)

Thus, the true copula is Student-t. Here, with 4 degrees of freedom. Note that we can easily get the (true) value of the copula, on the diagonal

dg=function(t) pmt(rep(qt(t,df=4),2),mean=c(0,0),   # C(t,t): bivariate t cdf at the point (q_t,q_t)
S=matrix(c(1,r,r,1),2,2),df=4)
DG=Vectorize(dg)

Four strategies are considered here to define pseudo-copula base variates,

  • misfit: consider an invalid marginal estimation: we have assumed that margins were Gaussian, i.e. http://freakonometrics.blog.free.fr/public/perso6/cop-marg-2.gif
  • perfect fit: here, we know that margins were Student-t, with 4 degrees of freedom http://freakonometrics.blog.free.fr/public/perso6/cop-marg-3.gif
  • standard fit: then, consider the case where we fit marginal distribution, but in the good family this time (e.g. among Student-t distributions), http://freakonometrics.blog.free.fr/public/perso6/cop-marg-4.gif
  • ranks: finally, we consider nonparametric estimators for marginal distributions, http://freakonometrics.blog.free.fr/public/perso6/cop-marg-10.gif

Now that we have a sample with margins in the unit square, let us construct the empirical copula,

$$\widehat{C}(u,v)=\frac{1}{n}\sum_{i=1}^n \mathbf{1}(U_i\leq u,V_i\leq v)$$
Let us now compare those four approaches.

  • The first one is to illustrate model error, i.e. what’s going on if we fit distributions, but not in the proper family of parametric distributions.
X0=cbind((X[,1]-mean(X[,1]))/sd(X[,1]),
(X[,2]-mean(X[,2]))/sd(X[,2]))
Y=pnorm(X0)

Then, the following code is used to compute the value of the empirical copula, on the diagonal,

diagonale=function(t,Z) mean((Z[,1]<=t)&(Z[,2]<=t))
diagY=function(t) diagonale(t,Y)
DiagY=Vectorize(diagY)
u=seq(0,1,by=.005)
dY=DiagY(u)
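
For a single sample, the estimated diagonal can be compared with the true one, with a quick sketch (not the original loop over 1,000 samples),

plot(u,dY,type="l",col="red")   # empirical copula diagonal, with (wrong) Gaussian margins
v=u[u>0&u<1]
lines(v,DG(v),lwd=2)            # true Student-t copula diagonal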

On the graph below, 1,000 samples of size 200 have been generated. All trajectories are the estimation of the copula on the diagonal. The black plain line is the true value of the copula

Obviously, it is not good at all. Mainly because the distribution of http://freakonometrics.blog.free.fr/public/perso6/cop-marg-8.gif can’t be a copula, since margins are not even uniform on the unit interval.

  • a perfect fit. Here, we use the following code to generate our copula-type sample
U=pt(X,df=4)

This time, the fit is much better.

  • Using maximum likelihood estimators to fit the best distribution within the Student-t family
F1=fitdistr(X0[,1],dt,list(df=5),lower = 0.001)
F2=fitdistr(X0[,2],dt,list(df=5),lower = 0.001)
V=cbind(pt(X0[,1],df=F1$estimate),pt(X0[,2],df=F2$estimate))

Here, it is also very good. Even better than before, when the true distribution is considered.

(it is like using the Lilliefors test for goodness of fit, versus Kolmogorov-Smirnov, see here for instance, in French).

  • Finally, let us consider ranks, or nonparametric estimators for marginal distributions,
R=cbind(rank(X[,1])/(n+1),rank(X[,2])/(n+1))

Here it is even better than the previous one

If we compare Box-plots of the value of the copula at point (.2,.2), we obtain the following, with on top ranks, then fitting with the good family, then using the true distribution, and finally, using a non-proper distribution.

Just to illustrate one more time a result mentioned in a previous post, “in statistics, having too much information might not be a good thing“.

Border bias and weighted kernels

With Ewen (aka @3wen), not only have we been playing on Twitter this month, we have also been working on kernel estimation for densities of spatial processes. Actually, it is only a part of what he was working on, but that part on kernel estimation has been the opportunity to write a short paper, that can now be downloaded on hal.

The problem with kernels is that kernel density estimators suffer a strong bias on borders. And with geographic data, it is not uncommon to have observations very close to the border (frontier, or ocean). With standard kernels, some weight is allocated outside the area: the density does not sum to one. And we should not look for a global correction, but for a local one. So we should use weighted kernel estimators (see the paper on hal for more details). The problem is that the weights can be difficult to derive, when the shape of the support is a strange polygon. The idea is to use a property of product Gaussian kernels (with identical bandwidths): with the interpretation of a noisy observation, isodensity curves are circles. This can be related to Ripley's (1977) circumferential correction. And the good point is that, with R, it is extremely simple to compute the area of the intersection of two polygons. But we need to load some R packages first,

require(maps)
require(sp)
require(snow)
require(ellipse)
require(ks)
require(gpclib)
require(rgeos)
require(fields)

To be more clear, let us illustrate that technique on a nice example. For instance, consider some bodily injury car accidents in France, in 2008 (I cannot upload the full dataset, but I can upload a random sample),

base_cara=read.table(
"http://freakonometrics.blog.free.fr/public/base_fin_morb.txt",
sep=";",header=TRUE)

The border of the support of our distribution of car accidents will be the contour of the Finistère département, that can be found in standard packages

geoloc=read.csv(
"http://freakonometrics.free.fr/popfr19752010.csv",
header=TRUE,sep=",",comment.char="",check.names=FALSE,
colClasses=c(rep("character",5),rep("numeric",38)))
geoloc=geoloc[,c("dep","com","com_nom",
"long","lat","pop_2008")]
geoloc$id=paste(sprintf("%02s",geoloc$dep),
sprintf("%03s",geoloc$com),sep="")
geoloc=geoloc[,c("com_nom","long","lat","pop_2008")]
head(geoloc)
france=map('france',namesonly=TRUE,
plot=FALSE)
francemap=map('france', fill=TRUE, col="transparent",
plot=FALSE)
detpartement_bzh=france[which(france%in%
c("Finistere","Morbihan","Ille-et-Vilaine",
"Cotes-Darmor"))]
bretagne=map('france',regions=detpartement_bzh,
fill=TRUE, col="transparent", plot=FALSE,exact=TRUE)
finistere=cbind(bretagne$x[321:678],bretagne$y[321:678])
FINISTERE=map('france',regions="Finistere", fill=TRUE,
col="transparent", plot=FALSE,exact=TRUE)
monFINISTERE=cbind(FINISTERE$x[c(8:414)],FINISTERE$y[c(8:414)])

Now we need simple functions,

cercle=function(n=200,centre=c(0,0),rayon)
{theta=seq(0,2*pi,length=100)
m=cbind(cos(theta),sin(theta))*rayon
m[,1]=m[,1]+centre[1]
m[,2]=m[,2]+centre[2]
colnames(m)=c("x","y")
return(m)}
poids=function(x,h,POL)
{leCercle=cercle(centre=x,rayon=5/pi*h)
POLcercle=as(leCercle, "gpc.poly")
return(area.poly(intersect(POL,POLcercle))/
area.poly(POLcercle))}
lissage = function(U,polygone,optimal=TRUE,h=.1)
{n=nrow(U)
IND=which(is.na(U[,1])==FALSE)
U=U[IND,]
if(optimal==TRUE) {H=Hpi(U,binned=FALSE);
H=matrix(c(sqrt(H[1,1]*H[2,2]),0,0,
sqrt(H[1,1]*H[2,2])),2,2)}
if(optimal==FALSE){H= matrix(c(h,0,0,h),2,2)}

before defining our weights.

poidsU=function(i,U,h,POL)
{x=U[i,]
poids(x,h,POL)}
OMEGA=parLapply(cl,1:n,poidsU,U=U,h=sqrt(H[1,1]),
POL=as(polygone, "gpc.poly"))
OMEGA=do.call("c",OMEGA)
stopCluster(cl)
}else
{OMEGA=lapply(1:n,poidsU,U=U,h=sqrt(H[1,1]),
POL=as(polygone, "gpc.poly"))
OMEGA=do.call("c",OMEGA)}

Note that it is possible to parallelize if there are a lot of observations,

if(n>=500)
{cl <- makeCluster(4,type="SOCK")
worker.init <- function(packages)
{for(p in packages){library(p, character.only=T)}
NULL}
clusterCall(cl, worker.init, c("gpclib","sp"))
clusterExport(cl,c("cercle","poids"))

Then, we can use standard bivariate kernel smoothing functions, but with the weights we just calculated, using a simple technique that can be related to one suggested in Ripley (1977),

fhat=kde(U,H,w=1/OMEGA,xmin=c(min(polygone[,1]),
min(polygone[,2])),xmax=c(max(polygone[,1]),
max(polygone[,2])))
fhat$estimate=fhat$estimate*sum(1/OMEGA)/n
vx=unlist(fhat$eval.points[1])
vy=unlist(fhat$eval.points[2])
VX = cbind(rep(vx,each=length(vy)))
VY = cbind(rep(vy,length(vx)))
VXY=cbind(VX,VY)
Ind=matrix(point.in.polygon(VX,VY, polygone[,1],
polygone[,2]),length(vy),length(vx))
f0=fhat
f0$estimate[t(Ind)==0]=NA
return(list(
X=fhat$eval.points[[1]],
Y=fhat$eval.points[[2]],
Z=fhat$estimate,
ZNA=f0$estimate,
H=fhat$H,
W=fhat$W))}
lissage_without_c = function(U,polygone,optimal=TRUE,h=.1)
{n=nrow(U)
IND=which(is.na(U[,1])==FALSE)
U=U[IND,]
if(optimal==TRUE) {H=Hpi(U,binned=FALSE);
H=matrix(c(sqrt(H[1,1]*H[2,2]),0,0,sqrt(H[1,1]*H[2,2])),2,2)}
if(optimal==FALSE){H= matrix(c(h,0,0,h),2,2)}
fhat=kde(U,H,xmin=c(min(polygone[,1]),
min(polygone[,2])),xmax=c(max(polygone[,1]),
max(polygone[,2])))
vx=unlist(fhat$eval.points[1])
vy=unlist(fhat$eval.points[2])
VX = cbind(rep(vx,each=length(vy)))
VY = cbind(rep(vy,length(vx)))
VXY=cbind(VX,VY)
Ind=matrix(point.in.polygon(VX,VY, polygone[,1],
polygone[,2]),length(vy),length(vx))
f0=fhat
f0$estimate[t(Ind)==0]=NA
return(list(
X=fhat$eval.points[[1]],
Y=fhat$eval.points[[2]],
Z=fhat$estimate,
ZNA=f0$estimate,
H=fhat$H,
W=fhat$W))}

So, now we can play with those functions,

base_cara_FINISTERE=base_cara[which(point.in.polygon(
base_cara$long,base_cara$lat,monFINISTERE[,1],
monFINISTERE[,2])==1),]
coord=cbind(as.numeric(base_cara_FINISTERE$long),
as.numeric(base_cara_FINISTERE$lat))
nrow(coord)
map(francemap)
lissage_FIN_withoutc=lissage_without_c(coord,
monFINISTERE,optimal=TRUE)
lissage_FIN=lissage(coord,monFINISTERE,
optimal=TRUE)
lesBreaks_sans_pop=range(c(
range(lissage_FIN_withoutc$Z),
range(lissage_FIN$Z)))
lesBreaks_sans_pop=seq(min(lesBreaks_sans_pop)*.95,
max(lesBreaks_sans_pop)*1.05,length=21)

plot_article=function(lissage,breaks,
polygone,coord){
par(mar=c(3,1,3,1))
image.plot(lissage$X,lissage$Y,(lissage$ZNA),
xlim=range(polygone[,1]),ylim=range(polygone[,2]),
breaks=breaks, col=rev(heat.colors(20)),xlab="",
ylab="",xaxt="n",yaxt="n",bty="n",zlim=range(breaks),
horizontal=TRUE)
contour(lissage$X,lissage$Y,lissage$ZNA,add=TRUE,
col="grey")
points(coord[,1],coord[,2],pch=19,cex=.1,
col="dodger blue")
polygon(polygone,lwd=2)}

plot_article(lissage_FIN_withoutc,breaks=
lesBreaks_sans_pop,polygone=monFINISTERE,
coord=coord)

plot_article(lissage_FIN,breaks=
lesBreaks_sans_pop,polygone=monFINISTERE,
coord=coord)

If we look at the graphs, we have the following densities of car accidents, with a standard kernel on the left, and our proposal on the right (with a local weight adjustment when the estimation is done next to the border of the region of interest),

Similarly, in Morbihan,

With those modified kernels, hot spots appear much more clearly. For more details, the paper is online on hal.

PhD defense on copulas

This Wednesday I will be at Université Paris 1 Sorbonne as a member of the jury of the PhD thesis of Pierre-André Maugis, on conditional correlation and vine copula.

Vine copulas were born in 2002 with the paper of Tim Bedford and Roger M. Cooke, Vines – a new graphical model for dependent random variables. The idea is to use the following decomposition for a multivariate density

$$f(x_1,x_2,x_3)=f_1(x_1)\cdot f_{2|1}(x_2|x_1)\cdot f_{3|12}(x_3|x_1,x_2)$$

(from Bayes formula, with synthetic notations). Then using the relationship between a bivariate density and its copula (density)

$$f_{12}(x_1,x_2)=c_{12}\big(F_1(x_1),F_2(x_2)\big)\cdot f_1(x_1)\cdot f_2(x_2)$$

thus

$$f_{2|1}(x_2|x_1)=c_{12}\big(F_1(x_1),F_2(x_2)\big)\cdot f_2(x_2)$$

Using again Bayes formula,

$$f_{3|12}(x_3|x_1,x_2)=\frac{f_{23|1}(x_2,x_3|x_1)}{f_{2|1}(x_2|x_1)}$$

and we can write

$$f_{23|1}(x_2,x_3|x_1)=c_{23|1}\big(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1)\big)\cdot f_{2|1}(x_2|x_1)\cdot f_{3|1}(x_3|x_1)$$

Since $f_{2|1}(x_2|x_1)=c_{12}(F_1(x_1),F_2(x_2))\,f_2(x_2)$ and $f_{3|1}(x_3|x_1)=c_{13}(F_1(x_1),F_3(x_3))\,f_3(x_3)$, the previous expression becomes

$$f_{3|12}(x_3|x_1,x_2)=c_{23|1}\big(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1)\big)\cdot c_{13}\big(F_1(x_1),F_3(x_3)\big)\cdot f_3(x_3)$$

or, to stress the most important part (as I see it),

$$f_{3|12}(x_3|x_1,x_2)=c_{23|1}\big(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1);x_1\big)\cdot c_{13}\big(F_1(x_1),F_3(x_3)\big)\cdot f_3(x_3)$$

It is common then to assume that this conditional copula $c_{23|1}(\cdot,\cdot;x_1)$ does not depend on the conditioning parameter $x_1$. The more detailed expression of that joint trivariate density is

$$f(x_1,x_2,x_3)=f_1(x_1)f_2(x_2)f_3(x_3)\cdot c_{12}\big(F_1(x_1),F_2(x_2)\big)\cdot c_{13}\big(F_1(x_1),F_3(x_3)\big)\cdot c_{23|1}\big(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1)\big)$$

The (parametric) inference algorithm is defined in Cooke, Joe and Aas (2010) as follows

The important assumption in vine copula models is that conditional copulas are constant. And this assumption might be relevant in some cases. For instance, in the Gaussian case (the observations have a Gaussian joint distribution – or at least copula – and we fit a vine model with Gaussian bivariate copulas).

The code to fit a vine copula is the following,

> library(CDVine)
> library(mnormt)
> SIGMA=matrix(c(1,.6,.7,.6,1,.8,.7,.8,1),3,3)
> X=rmnorm(n=100000,varcov=SIGMA)
> CDVineSeqEst(dat=X, family = c(1,1,1),
+ type = 1, method = "mle")
$par
[1] 0.6001505 0.7023699 0.6698215
 
$par2
[1] 0 0 0

Note that it is consistent with the following algorithm, where conditional copulas are fitted. In the following, for each value of the conditioning component, we fit a Gaussian copula to the conditional remaining pair,

> U=pnorm(X)
> U1U2=U[,1:2]
> U1U3=U[,c(1,3)]
> GaussCop = normalCopula(param=.5, dim = 2)
> U1U2=U[,1:2]
> U1U3=U[,c(1,3)]
> fit12.mpl = fitCopula(GaussCop, U1U2, method="mpl")@estimate
> fit13.mpl = fitCopula(GaussCop, U1U3, method="mpl")@estimate
> fit12.mpl
[1] 0.5984932
> fit13.mpl
[1] 0.7005185
> fit23a=fit23b=rep(NA,99)
> for(i in 4:96){
+ x=i/100
+ C12=pcopula(normalCopula(param=fit12.mpl, dim = 2),U1U2)
+ C13=pcopula(normalCopula(param=fit13.mpl, dim = 2),U1U3)
+ U12=rank(C12)/(nrow(U)+1)
+ U13=rank(C13)/(nrow(U)+1)
+ U23=cbind(U12[abs(U[,1]-x)<.02],U13[abs(U[,1]-x)<.02])
+ V23=cbind(rank(U23[,1])/(nrow(U23)+1),
+ rank(U23[,2])/(nrow(U23)+1))
+ fit23.mpl = fitCopula(GaussCop, V23, method="mpl")@estimate
+ fit23a[i]=fit23.mpl
+ }
> plot((1:99)/100,fit23a,col="red")   # estimated parameter of the conditional copula, as a function of the conditioning value

It looks like assuming the conditional copula as constant was a valid assumption here

But note that if the true distribution is not Gaussian, then assuming the conditional copula as constant is not valid anymore (here a trivariate Clayton copula was generated)

Does the Student based confidence interval have any interest in practice ?

Friday, in the course of statistics, we started the section on confidence intervals, and as always, I got a bit confused with the degrees of freedom of the Student distribution (should it be $n$ or $n-1$?) and with which empirical variance to use (should we divide by $n$ or by $n-1$?).
And each time I start to get confused, the students obviously see it, and start to ask tricky questions… So let us make it clear now. The correct formula is the following: let

$$\bar{x}=\frac{1}{n}\sum_{i=1}^n x_i\qquad\text{and}\qquad s^2=\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2$$

then

$$\left[\bar{x}\pm t_{n-1}(1-\alpha/2)\,\frac{s}{\sqrt{n}}\right]$$

is a confidence interval for the mean of a Gaussian i.i.d. sample.
But the important thing is neither the $n-1$ that appears as degrees of freedom nor the $n$ that appears in the estimation of the standard error. As always with mathematical results, the most important part is not mentioned here: observations have to be i.i.d. and to be normally distributed. And not “almost” normally distributed….
Consider the following case: we have $n=20$ observations that are almost normally distributed. Hence, I consider a Student t distribution

n=20; X=rt(n,df=3)

An Anderson-Darling normality test accepts the normal distribution in 2 cases out of 3.

library(nortest)
pv=rep(NA,10000)
for(s in 1:10000){
X=rt(n,df=3)
pv[s]=ad.test(X)$p.value
}
mean(pv>.05)
[1] 0.6799

With a true normal distribution it would be 95% of the cases, so in some sense, I can pretend that I generate almost normal samples.
For those samples, we can look at the bounds of the 90% confidence interval for the mean, with three different formulas,

$$\left[\bar{x}\pm t_{n-1}(0.95)\,\frac{s}{\sqrt{n}}\right]$$

i.e. the correct one, or the one where I considered $n$ degrees of freedom instead of $n-1$,

$$\left[\bar{x}\pm t_{n}(0.95)\,\frac{s}{\sqrt{n}}\right]$$

and the one where we considered a Gaussian quantile instead of a Student t one,

$$\left[\bar{x}\pm z_{0.95}\,\frac{s}{\sqrt{n}}\right]$$

(and one might also think of looking at the biased estimator of the variance, dividing by n instead of n-1).
m=IC1=IC2=IC3=rep(NA,10000)
for(s in 1:10000){
X=rt(n,df=3)
m[s]=mean(X)
sd=sqrt(var(X))
IC1[s]=m[s]-qt(.95,df=n-1)*sd/sqrt(n)
IC2[s]=m[s]-qt(.95,df=n)*sd/sqrt(n)
IC3[s]=m[s]-qnorm(.95)*sd/sqrt(n)
}
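
A sketch (not the original code) of the comparison plotted below could be

plot(density(IC1),lwd=2,main="",xlab="")   # Student quantile, n-1 degrees of freedom
lines(density(IC2),col="blue")             # Student quantile, n degrees of freedom
lines(density(IC3),lty=2)                  # Gaussian quantile
abline(v=quantile(m,.05),lty=3)            # true lower bound, from the simulated means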

On the graph below are plotted the distributions of the values obtained as the lower bound of the 90% confidence interval,

(the curves with $n$ and $n-1$ degrees of freedom in the quantiles are the same, here).
The dotted vertical line is the true lower bound of the 90% confidence interval, given the true distribution (which was not a Gaussian one).
If I get back to the standard procedure in any statistical textbook, since the sample is almost Gaussian, the lower bound of the confidence interval should be (since we have a Student t distribution)

mean(IC1)
[1] -0.605381

instead of

mean(IC3)
[1] -0.5759391

(obtained with a Gaussian distribution instead of a Student one). Actually, both of them are quite different from the correct one which was

quantile(m,.05)
       5% 
-0.623578

As I mentioned in a previous post (here), an important issue is that if we do not know a parameter and substitute an estimator, there is usually a cost (which usually means that the confidence interval should be larger). And this is what we observe here. From a teacher’s point of view, it is an important issue that should be mentioned in statistical courses….

But another important point is that the confidence interval is valid only if the underlying distribution is Gaussian. And not almost Gaussian, but really a Gaussian one. So since with $n=20$ observations everything might look Gaussian, I was wondering what should be done in practice… Because in some sense, using a Student quantile based confidence interval on some almost Gaussian sample is as wrong as using a Gaussian quantile based confidence interval on some Gaussian sample…

Will I ever be a bayesian statistician ? (part 1)

Last week, during the workshop on Statistical Methods for Meteorology and Climate Change (here), I discovered how powerful bayesian techniques could be, and that there were more and more bayesian statisticians. So, if I was to fully understand applied statisticians in conferences and workshops, I really have to understand basics of bayesian statistics. I have published some time ago some posts on bayesian statistics applied to actuarial problems (here or there), but so far, I always thought that bayesian was a synonym for magician. To be honest, I am a Muggle, and I have not been trained as a bayesian. But I can be an opportunist…

So I decided to publish some posts on bayesian techniques, in order to prove that it is actually not that difficult to implement.

As far as I understand it, in bayesian statistics, the parameter is considered as a random variable (which is also the case, in classical mathematical statistics). But here, we assume that this parameter does have a parametric distribution….
Consider a classical statistical problem: assume we have a sample $X_1,\ldots,X_n$, i.i.d., with a $\mathcal{N}(\theta,\sigma^2)$ distribution. Here we write

$$X_i|\theta\sim\mathcal{N}(\theta,\sigma^2)$$

since the parameter $\theta$ is a random variable. The idea is to assume that $\theta$ has a (so-called a priori) distribution, e.g.

$$\theta\sim\mathcal{N}(\mu_0,\tau^2)$$

So far it was simple. The idea is then to consider the posterior distribution of $\theta$, given the observations $X_1,\ldots,X_n$. Thus, we need to compute the distribution of $\theta|X_1,\ldots,X_n$, which is here extremely simple (due to properties of the Gaussian distribution), i.e.

$$\theta|X_1,\ldots,X_n\sim\mathcal{N}(\mu_n,\tau_n^2)$$

where

$$\mu_n=\frac{\dfrac{n}{\sigma^2}\,\bar{X}+\dfrac{1}{\tau^2}\,\mu_0}{\dfrac{n}{\sigma^2}+\dfrac{1}{\tau^2}}\qquad\text{and}\qquad \tau_n^2=\left(\frac{n}{\sigma^2}+\frac{1}{\tau^2}\right)^{-1}$$

And then, it becomes extremely natural to consider $\mu_n=\mathbb{E}[\theta|X_1,\ldots,X_n]$ as an estimator of $\theta$ given our sample data (and thus, we also have a confidence interval, since we know the distribution of $\theta$ given the observations $X_1,\ldots,X_n$).
In order to be sure that we understood, consider now a heads and tails problem, i.e. $X_i\sim\mathcal{B}(\theta)$, a Bernoulli distribution. Note, first, that $\theta$ has support $(0,1)$. So we need a distribution on that support. Why not a beta distribution? E.g.

$$\pi(\theta)=\frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)},\qquad \theta\in(0,1)$$

Thus,

http://freakonometrics.free.fr/blog/bayy8.png

and

http://freakonometrics.free.fr/blog/bayy9.png

From Bayes formula,

$$\pi(\theta|x_1,\ldots,x_n)\propto f(x_1,\ldots,x_n|\theta)\,\pi(\theta)$$

and we get easily

$$\pi(\theta|x_1,\ldots,x_n)\propto \theta^{a+y-1}(1-\theta)^{b+n-y-1},\qquad\text{where }y=x_1+\cdots+x_n$$

which is the density of a Beta distribution, i.e.

$$\theta|x_1,\ldots,x_n\sim\text{Beta}(a+y,\,b+n-y)$$
prior=dbeta(u,a,b)
posterior=dbeta(u,a+y,n-y+b)

The estimator proposed is then the expected value of that conditional distribution,

$$\widehat{\theta}=\mathbb{E}[\theta|x_1,\ldots,x_n]=\frac{a+y}{a+b+n}$$

Note that this posterior mean is a weighted average of the prior mean and of the empirical frequency,

$$\mathbb{E}[\theta|x_1,\ldots,x_n]=\frac{a+b}{a+b+n}\cdot\frac{a}{a+b}+\frac{n}{a+b+n}\cdot\frac{y}{n}$$

Further, it is possible to derive confidence intervals using quantiles of the posterior distribution.
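
For instance (a sketch, with hypothetical values of the prior parameters and of the data, not from the original post), a 90% credible interval is simply obtained from the quantiles of the posterior Beta distribution,

a=2; b=2; n=100; y=62          # hypothetical prior parameters and observed number of heads
qbeta(c(.05,.95),a+y,n-y+b)    # 90% credible interval for the probability of heads
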
On the graphs below, we consider the following heads/tails sample

A first idea is to consider a uniform prior distribution.

http://freakonometrics.free.fr/blog/bayes-cv-1.gif

A second idea is to consider an asymmetric beta distribution. First, with an asymmetry on the left,

http://freakonometrics.free.fr/blog/bayes-cv-3.gif

or on the right
http://freakonometrics.free.fr/blog/bayes-cv-2.gif

Finally a third idea is simply to get back to the standard Gaussian approximation,

http://freakonometrics.free.fr/blog/bayes-cv-gauss.gif

If we compare the four models, we obtain the following (the plain black line is the Gaussian approximation of the distribution of the empirical mean, and the red lines are obtained from the beta prior distributions)

http://freakonometrics.free.fr/blog/bayes-cv-all.gif

The code to generate those graphs is the following
u=seq(0,1,by=.01)        # grid on (0,1); the grid is not specified in the original post
a1=1; b1=1               # uniform prior
a2=4; b2=2               # asymmetric beta priors
a3=2; b3=4
set.seed(1)
S=sample(0:1,size=100,replace=TRUE)
COULEUR=rev(rainbow(120))
D1=D2=D3=D4=matrix(NA,101,length(u))
D1[1,]=dbeta(u,a1,b1)
D2[1,]=dbeta(u,a2,b2)
D3[1,]=dbeta(u,a3,b3)
for(s in 1:100){
y=sum(S[1:s])
D1[s+1,]=dbeta(u,a1+y,s-y+b1)
D2[s+1,]=dbeta(u,a2+y,s-y+b2)
D3[s+1,]=dbeta(u,a3+y,s-y+b3)
D4[s+1,]=dnorm(u,y/s,sqrt(y/s*(1-y/s)/s))
plot(u,D1[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D1[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D2[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D2[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D3[1,],col="black",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D3[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D4[1,],col="white",type="l",ylim=c(0,8),
xlab="",ylab="")
for(i in 1:s){lines(u,D4[1+i,],col=COULEUR[i])}
points(y/s,0,pch=3,cex=2)
plot(u,D4[s+1,],col="black",lwd=2,type="l",
ylim=c(0,8),xlab="",ylab="")
lines(u,D1[1+i,],col="blue")
lines(u,D2[1+i,],col="red")
lines(u,D3[1+i,],col="purple")
points(y/s,0,pch=3,cex=2)
}

Here, we can see that computations are simple if the prior distribution is conjugate to the distribution of the observations (see here for the list of standard prior and posterior distributions).
So far, I have two questions that naturally show up

  • is it possible to start with a neutral, non-informative prior distribution?
  • what if we are no longer working with conjugate distributions ?

Well, I guess I have to work a bit more to answer those questions…. to be continued

Optimization and mixture estimation

Recently, one of my students asked me about optimization routines in R. He told me that R performed well on the estimation of a time series model with different regimes, while he had trouble with a (simple) GARCH process, and he was wondering if R was good at optimization routines. Actually, I always thought that mixtures (and regimes) were something difficult to estimate, so I was a bit surprised…

Indeed, it reminded me of some trouble I experienced once, while I was talking about maximum likelihood estimation, for non-standard distributions, i.e. when optimization has to be done on the log-likelihood function. Even when generating nice samples, and giving appropriate initial values (actually the true values used in the random generation), each time I tried to optimize my log-likelihood, it failed. So I decided to play a little bit with standard optimization functions, to see which one performed better when trying to estimate mixture parameters (from a mixture based sample). Here, I generate a mixture of two Gaussian distributions, and I would like to see how different the means should be to have a high probability of properly estimating the parameters of the mixture.

The density is here proportional to

$$p\,\varphi(x)+(1-p)\,\varphi(x-\mu)$$

where $\varphi$ denotes the standard Gaussian density. The true model has $p=1/3$, the two components being $\mathcal{N}(0,1)$ and $\mathcal{N}(\mu,1)$, and $\mu$ is a parameter that will change, from 0 to 4.
The log likelihood (actually, I add a minus since most of the optimization functions actually minimize functions) is
> logvraineg <- function(param, obs) {
+ p <- param[1]
+ m1 <- param[2]
+ sd1 <- param[3]
+ m2 <- param[4]
+  sd2 <- param[5]
+  -sum(log(p * dnorm(x = obs, mean = m1, sd = sd1) + (1 - p) *
+ dnorm(x = obs, mean = m2, sd = sd2)))
+  }
The code to generate my samples is the following (with n=200 observations, and where m is the mean of the second component, ranging from 0 to 4),
> n=200
> X1 = rnorm(n,0,1)
> X20 = rnorm(n,0,1)
> Z  = sample(c(1,2,2),size=n,replace=TRUE)
> X2=m+X20
> X = c(X1[Z==1],X2[Z==2])
Then I use two functions to optimize my log likelihood, with identical intial values,
> O1=nlm(f = logvraineg, p = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)), obs = X)
> logvrainegX <- function(param) {logvraineg(param,X)}
> O2=optim( par = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)),
+   fn = logvrainegX)
Actually, since I might have identification problems, I take either $\hat{p}$ or $1-\hat{p}$, depending on whether $\hat{\mu}_1$ or $\hat{\mu}_2$ is the smallest parameter.
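
A minimal sketch of that relabelling (not in the original code), using for instance the output of nlm, could be

> est=O1$estimate
> phat=ifelse(est[2]<est[4],est[1],1-est[1])   # probability attached to the component with the smallest mean
> phat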

On the graph above, the x-axis is the difference between the means of the mixture (as on the animated graph above). The red point is the median of the estimated parameter (here $\hat{p}$), and I have included something that can be interpreted as a confidence interval, i.e. where I have been in 90% of my scenarios: the black vertical segments. Obviously, when the sample is not heterogeneous enough (i.e. when the two underlying means are too close), I cannot estimate my parameters properly, and I might even have a probability that exceeds 1 (I did not add any constraint). The blue plain horizontal line is the true value of the parameter, while the blue dotted horizontal line is the initial value of the parameter in the optimization algorithm (I started assuming that the mixture probability was around 0.2).
The graph below is based on the second optimization routine (with identical  starting values, and of course on the same generated samples),

(just to be honest, in many cases, it did not converge, so the loop stopped, and I had to run it again… so finally, my study is based on a bit less than 500 samples (times 15 since I considered several values for the mean of my second underlying distribution), with 200 generated observations from a mixture).
The graph below compares the two (empty circles are the first algorithm, while plain circles the second one),

On average, it is not so bad…. but the probability of being far away from the true value is not small at all… except when the difference between the two means exceeds 3…
If I change starting values for the optimization algorithm (previously, I assumed that the mixture probability was 1/5, here I start from 1/2), we have the following graph

which looks like the previous one, except for small differences between the two underlying distributions (just as if initial values had no impact on the optimization, but it might come from the fact that the surface is nice, and we are not trapped in regions of local minima).
Thus, I am far from being an expert in optimization routines in R (see here for further information), but so far, it looks like R is not doing so bad… and the two algorithms perform similarly (with maybe the first one being a bit closer to the true parameter).