Category Archives: Inference

Modeling the Marginals and the Dependence separately

When introducing copulas, it is commonly claimed that copulas are interesting because they allow us to model the marginals and the dependence structure separately. The motivation is probably Sklar’s theorem, which says that given some marginal cumulative distribution functions (say F_1 and F_2, in dimension 2) and a copula (denoted C), we can generate a multivariate cumulative distribution function with the marginals specified previously, using

F(x_1,x_2)=C(F_1(x_1),F_2(x_2))
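
To make this construction concrete, here is a small sketch (an illustration only, using the mvdc() interface of the copula package, with lognormal margins, a Gaussian copula, and arbitrary parameter values) that builds such a bivariate distribution, evaluates its cdf, and simulates from it,

> library(copula)
> biv=mvdc(normalCopula(.5,dim=2),margins=c("lnorm","lnorm"),
+ paramMargins=list(list(meanlog=1,sdlog=1),
+ list(meanlog=2,sdlog=sqrt(2))))
> pMvdc(c(5,10),biv)   # cdf of the bivariate distribution at (5,10)
> rMvdc(10,biv)        # 10 simulated pairs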

But this separability might be misleading. Consider the case of a fully parametric model,

F_{\theta_1,\theta_2,\alpha}(x_1,x_2)=C_\alpha(F_1(x_1;\theta_1),F_2(x_2;\theta_2))

Assume that those distributions are continuous, so that we can write the likelihood using densities,

f(x_1,x_2)=c_\alpha(F_1(x_1;\theta_1),F_2(x_2;\theta_2))\cdot f_1(x_1;\theta_1)\cdot f_2(x_2;\theta_2)

and the log-likelihood is

\log\mathcal{L}(\theta_1,\theta_2,\alpha)=\sum_{i=1}^n\log f_1(x_{1,i};\theta_1)+\sum_{i=1}^n\log f_2(x_{2,i};\theta_2)+\sum_{i=1}^n\log c_\alpha(F_1(x_{1,i};\theta_1),F_2(x_{2,i};\theta_2))

The first part is the log-likelihood if we consider the first marginal (only). The second part is the log-likelihood if we consider the second marginal (only). If the two components are not independent (i.e. the copula density c_\alpha is not equal to 1 everywhere), the third part cannot be considered as null, and so, in a general context,

(\widehat{\theta}_1,\widehat{\theta}_2,\widehat{\alpha})\neq(\widetilde{\theta}_1,\widetilde{\theta}_2,\widetilde{\alpha})

where the left-hand side is the (global) maximum likelihood estimator,

(\widehat{\theta}_1,\widehat{\theta}_2,\widehat{\alpha})=\underset{(\theta_1,\theta_2,\alpha)}{\text{argmax}}\ \log\mathcal{L}(\theta_1,\theta_2,\alpha)

while the right-hand side is obtained by estimating each margin separately, and then the copula parameter given those estimated margins,

\widetilde{\theta}_j=\underset{\theta_j}{\text{argmax}}\ \sum_{i=1}^n\log f_j(x_{j,i};\theta_j),\qquad \widetilde{\alpha}=\underset{\alpha}{\text{argmax}}\ \sum_{i=1}^n\log c_\alpha(F_1(x_{1,i};\widetilde{\theta}_1),F_2(x_{2,i};\widetilde{\theta}_2))

In order to illustrate this point, consider a bivariate lognormal distribution (obtained by taking the exponential of a Gaussian vector)

> mu1=1
> mu2=2
> MU=c(mu1,mu2)
> s1=1
> s2=sqrt(2)
> r=.8
> SIGMA=matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
> library(mnormt)
> set.seed(1)
> Z=exp(rmnorm(25,MU,SIGMA))

If we believe that marginals and correlations can be treated separately, we can start with marginal distributions.

> library(MASS)
> (p1=fitdistr(Z[,1],"lognormal"))
    meanlog      sdlog  
  1.1686652   0.9309119 
 (0.1861824) (0.1316508)
> (p2=fitdistr(Z[,2],"lognormal"))
    meanlog      sdlog  
  2.2181721   1.1684049 
 (0.2336810) (0.1652374)

Based on those marginal distributions, define \widehat{u}_i=\widehat{F}_1(x_{1,i}) and \widehat{v}_i=\widehat{F}_2(x_{2,i}), and consider the maximum likelihood estimator \widehat{\alpha} of the copula parameter, obtained from this pseudo sample.

Numerically, we get (since we consider a Gaussian copula, which is the true copula generated here)

> library(copula)
> Gcop=normalCopula(.3,dim=2)
> U=cbind(plnorm(Z[,1],p1$estimate[1],p1$estimate[2]),
+ plnorm(Z[,2],p2$estimate[1],p2$estimate[2]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
      Estimate Std. Error z value Pr(>|z|)    
rho.1  0.86530    0.03799   22.77

But clearly, we did not treat the dependence structure separately, since the pseudo sample used to estimate the copula parameter was a function of the estimated marginal distributions.

If we consider a global optimization problem, then results are different. The joint density can be derived (see e.g. Mostafa & Mahmoud (1964))

> dbivlognorm=function(x,theta){
+ mu1=theta[1]
+ mu2=theta[2]
+ s1=theta[3]
+ s2=theta[4]
+ r=theta[5]
+ a1=(log(x[,1])-mu1)/s1
+ a2=(log(x[,2])-mu2)/s2
+ d=1/(2*pi*s1*s2*sqrt(1-r^2))*1/(x[,1]*x[,2])*
+ exp(-(a1^2-2*r*a1*a2+a2^2)/(2*(1-r^2)))
+ return(d)
+ }
> LogLik=function(theta){
+ return(-sum(log(dbivlognorm(Z,theta))))}
> optim(par=c(0,0,1,1,0),fn=LogLik)$par
[1] 1.1655359 2.2159767 0.9237853 1.1610132 0.8645052

The difference is not huge, but still: the estimators are not identical. From a statistical point of view, we can hardly treat the marginals and the dependence structure separately.
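
To see how close the two approaches are, we can simply evaluate the joint (negative) log-likelihood at both sets of parameters, the two-step one and the global one (a quick check; I assume here that coef() extracts the estimated correlation from the fitted copula object). Since LogLik() returns the negative log-likelihood, the value at the global optimum should be (weakly) smaller,

> rho.ifm=coef(fitCopula(Gcop,data=U,method="ml"))
> theta.ifm=c(p1$estimate[1],p2$estimate[1],
+ p1$estimate[2],p2$estimate[2],rho.ifm)
> theta.mle=optim(par=c(0,0,1,1,0),fn=LogLik)$par
> LogLik(theta.ifm)
> LogLik(theta.mle)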

Another point we should keep in mind is that the estimation of the copula parameter depends on the margins, not only through the parameters, but more deeply, through the choice of the marginal distributions (that might be misspecified). For instance, if we assume that margins are exponentially distributed,

> (p1=fitdistr(Z[,1],"exponential"))
      rate   
  0.22288362 
 (0.04457672)
> (p2=fitdistr(Z[,2],"exponential"))
      rate   
  0.06543665 
 (0.01308733)

the estimation of the parameter of the Gaussian copula yields

> U=cbind(pexp(Z[,1],p1$estimate[1]),
+ pexp(Z[,2],p2$estimate[1]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
      Estimate Std. Error z value Pr(>|z|)    
rho.1  0.87421    0.03617   24.17   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The maximized loglikelihood is  15.4 
Optimization converged

The problem is that, since we misspecified the marginal distributions, our pseudo sample is defined on the unit square, but there is no chance that we get uniform margins. If we generate a sample of size 500 with the code above, and plot the pseudo sample,

> x <- U[,1]; y <- U[,2]
> xhist <- hist(x, plot=FALSE) ; yhist <- hist(y, plot=FALSE)
> top <- max(c(xhist$counts, yhist$counts)) 
> nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE) 
> par(mar=c(3,3,1,1)) 
> plot(x, y, xlab="", ylab="",col="red",xlim=0:1,ylim=0:1) 
> par(mar=c(0,3,1,1))
> barplot(xhist$counts, axes=FALSE, ylim=c(0, top), 
+ space=0,col="light green") 
> par(mar=c(3,0,1,1))
> barplot(yhist$counts, axes=FALSE, xlim=c(0, top), 
+ space=0, horiz=TRUE,col="light blue")

If we compare with the previous case, where the marginal distributions were well specified, we can clearly see that the (estimated) dependence structure depends on the marginal distributions,

Risk Measures with Extreme Value Models

We’ve seen on Monday, in the MAT8595 course, how to use the Generalized Pareto Distribution to estimate some downside risk measures, given a sample (assumed to be i.i.d.; I will not mention here properties of extremes for stochastic processes) with distribution https://latex.codecogs.com/gif.latex?F. The cumulative distribution function of the Generalized Pareto distribution is here

G_{\xi,\sigma}(x)=1-\left(1+\frac{\xi x}{\sigma}\right)^{-1/\xi}

For some threshold u, and https://latex.codecogs.com/gif.latex?x\geq%20u, we can write

\mathbb{P}(X>x)=\mathbb{P}(X>u)\cdot\mathbb{P}(X>x\vert X>u)=\overline{F}(u)\cdot\overline{F}_u(x-u)

From the Pickands-Balkema-de Haan theorem, if u is large enough, then the distribution of the exceedances can be approximated by a Generalized Pareto distribution,

\overline{F}_u(y)\approx\left(1+\frac{\xi y}{\sigma}\right)^{-1/\xi}

Given our sample https://latex.codecogs.com/gif.latex?\{x_1,\cdots,x_n\}, let N_u denote the number of observations over the threshold u. Then we can write

\widehat{\overline{F}}(x)=\frac{N_u}{n}\left(1+\widehat{\xi}\,\frac{x-u}{\widehat{\sigma}}\right)^{-1/\widehat{\xi}}

or equivalently

\widehat{F}(x)=1-\frac{N_u}{n}\left(1+\widehat{\xi}\,\frac{x-u}{\widehat{\sigma}}\right)^{-1/\widehat{\xi}}

If we invert this function, we get the quantile of level p,

\widehat{Q}(p)=u+\frac{\widehat{\sigma}}{\widehat{\xi}}\left(\left(\frac{n}{N_u}(1-p)\right)^{-\widehat{\xi}}-1\right)

Actually, instead of fixing a threshold (and then considering the implied number of observations exceeding that threshold), it is possible to fix the number of exceedances, and the associated threshold will then be the corresponding order statistic.

The density of the Generalized Pareto distribution is here

https://latex.codecogs.com/gif.latex?%20%20%20%20%20g_{(\xi,\sigma)}(x)%20=%20\frac{1}{\sigma}\left(1%20+%20\frac{\xi%20x}{\sigma}\right)^{\left(-\frac{1}{\xi}%20-%201\right)}

which is here a function of two parameters, https://latex.codecogs.com/gif.latex?%20%20\xi and https://latex.codecogs.com/gif.latex?\sigma. As discussed in the course, it is possible to use the Delta method to derive the asymptotic distribution of any quantile, and then get an approximated (asymptotic) confidence interval.

But since https://latex.codecogs.com/gif.latex?\sigma is usually not a parameter of interest, why not consider a reparametrization of our density, as a function of https://latex.codecogs.com/gif.latex?%20%20\xi and https://latex.codecogs.com/gif.latex?Q(p) (for some probability https://latex.codecogs.com/gif.latex?p that will be considered as fixed from now on)? We can easily get (assuming that https://latex.codecogs.com/gif.latex?\xi\neq%200) that

https://latex.codecogs.com/gif.latex?g_{\xi,Q(p)}(x)=\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{\xi[Q(p)-u]}\left(1+\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{[Q(p)-u]}\cdot%20x\right)^{-\frac{1}{\xi}-1}

This expression is simple, and can be used to derive the likelihood (on the observations exceeding the threshold)

https://latex.codecogs.com/gif.latex?\log\mathcal{L}(\xi,Q(p);\boldsymbol{x})=\sum_{i=0}^{N_u-1}%20\log%20g_{\xi,Q(p)}(x_{n-i:n})

Numerically, let us write (and plot) that function. Consider some real data here

> library(evir)
> data(danish)
> X=as.numeric(danish)
> Xs=sort(X,decreasing=TRUE)
> n=length(X)
> u=10
> nu=sum(X>u)

Consider, say, the 99.9% quantile,

> p=.999

The empirical quantile is here

> quantile(X,p)
   99.9% 
131.5519
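
For comparison, the plug-in quantile given by the formula above can be computed directly from a fitted Generalized Pareto distribution (a small sketch, assuming the fit is obtained with evir's gpd() function, as later in this post),

> fit=gpd(X,u)
> xihat=fit$par.ests["xi"]
> betahat=fit$par.ests["beta"]
> # plug-in quantile Q(p) = u + sigma/xi * ((n/Nu*(1-p))^(-xi) - 1)
> u+betahat/xihat*((n/nu*(1-p))^(-xihat)-1)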

The density and the loglikelihood functions are here

> gq=function(x,xi,q){
+ ( (n/nu*(1-p) ) ^ (-xi)-1)/(xi*(q-u))*
+ (1+((n/nu*(1-p))^(-xi)-1)/(q-u)*x)^(-1/xi-1)}

> loglik=function(param){
+ xi=param[2];q=param[1]
+ lg=function(i) log(gq(Xs[i],xi,q))
+ return(-sum(Vectorize(lg)(1:nu)))
+ }

We can try to plot this likelihood using

> h=201
> Q=seq(50,300,length=h)
> XI=seq(.1,1,length=h)
> XIQ=as.matrix(expand.grid(Q,XI))
> M=mapply(loglik,XIQ)

Unfortunately, it was not working, so I used the old style

> M=matrix(NA,h,h)
> for(i in 1:h){for(j in 1:h){M[i,j]=loglik(c(Q[i],XI[j]))}}

The level curves of the log-likelihood are here

> hc=heat.colors(100)
> image(Q,XI,-M,col=hc)
> contour(Q,XI,-M,add=TRUE)

Again, since our interest is in the quantile, we can draw the profile likelihood and get the maximum of that function

> PL=function(Q){
+ profilelikelihood=function(xi){
+ loglik(c(Q,xi))}
+ return(optim(par=.8,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(100,500)))

$minimum
[1] 111.1055

$objective
[1] 454.6481

and the graph is

> XQ=seq(50,300,length=101)
> L=Vectorize(PL)(XQ)
> plot(XQ,-L,type="l")
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1),col="red")
> I=which(-L>=-up-qchisq(p=.95,df=1))
> lines(XQ[I],rep(-up-qchisq(p=.95,df=1),length(I)),
+ lwd=5,col="red")
> abline(v=range(XQ[I]),lty=2,col="red")

which can be seen as an alternative to

> gpd.q(tailplot(gpd(X,u)),.999)
 Lower CI  Estimate  Upper CI 
 64.66184  94.28956 188.91752 


If we want to focus on another downside risk measure, that shouldn’t be too difficult. For instance, the expected shortfall, ES(p)=\mathbb{E}(X\vert X>Q(p)), can be estimated as

ES(p)=Q(p)+e(Q(p))

where e(\cdot) denotes the mean excess function, which can be written, with a Generalized Pareto Distribution,

e(v)=\frac{\sigma+\xi(v-u)}{1-\xi},\qquad v\geq u

Thus, a natural estimator for the expected shortfall is

\widehat{ES}(p)=\widehat{Q}(p)+\frac{\widehat{\sigma}+\widehat{\xi}\,(\widehat{Q}(p)-u)}{1-\widehat{\xi}}
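
Again, a plug-in version of that estimator can be computed directly from a fitted Generalized Pareto distribution (a sketch, assuming evir's gpd() fit, and the quantile formula derived previously),

> fit=gpd(X,u)
> xihat=fit$par.ests["xi"]
> betahat=fit$par.ests["beta"]
> Qp=u+betahat/xihat*((n/nu*(1-p))^(-xihat)-1)
> # plug-in expected shortfall: Q(p) plus the mean excess above Q(p)
> Qp+(betahat+xihat*(Qp-u))/(1-xihat)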

One more time, it is possible to re-parametrize the density of the Pareto distribution, using https://latex.codecogs.com/gif.latex?ES(p) instead of https://latex.codecogs.com/gif.latex?\sigma. Here, we get

https://latex.codecogs.com/gif.latex?g_{\xi,ES(p)}(x)=\frac{\displaystyle{\xi+\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{\xi(1-\xi)[ES(p)-u]}\left(1+\frac{\displaystyle{\left(\frac{n}{N_u}(1-p)\right)^{-\xi}-1}}{(1-\xi)[ES(p)-u]}\cdot%20x\right)^{-\frac{1}{\xi}-1}

The code to get the associated log-likelihood is here

> ge=function(x,xi,es){
+ (xi+(n/nu*(1-p))^(-xi)-1)/(xi*(1-xi)*(es-u))*(1+(xi+(n/nu*(1-p))^(-xi)
+ -1)/((es-u)*(1-xi))*x)^(-1/xi-1)
+ }
> loglik=function(param){
+ xi=param[2];es=param[1]
+ lg=function(i) log(ge(Xs[i],xi,es))
+ return(-sum(Vectorize(lg)(1:nu)))
+ }

and again, we can plot it

and the profile (log) likelihood is here (for the 99.9% expected shortfall)

> PL=function(ES){
+ profilelikelihood=function(xi){
+ loglik(c(ES,xi))}
+ return(optim(par=.8,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(100,500)))
$minimum
[1] 143.66

$objective
[1] 454.6481

which could be compared with

> gpd.sfall(tailplot(gpd(X,u)),.999)
 Lower CI  Estimate  Upper CI 
 96.64625 191.36972 394.87555

Inference for AR(p) Time Series

Consider a (stationary) autoregressive process, say of order 2,

https://latex.codecogs.com/gif.latex?Y_t%20=\varphi_1%20Y_{t-1}+\varphi_2%20Y_{t-2}+\varepsilon_t

for some white noise with variance \sigma^2. Here is a code to generate such a process,

> phi1=.25
> phi2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+e[t]
> Z=Z[800:1000]
> n=length(Z)
> plot(Z,type="l")

Here, we have to estimate two sets of parameters: the autoregressive coefficients, and the variance \sigma^2 of the innovation process. Several techniques can be used to estimate those parameters.

  • using least squares regression

A natural idea is to see here a regression model, since (if we consider a matrix formulation)

\underbrace{\left[\begin{array}{c}Y_3\\ \vdots\\ Y_n\end{array}\right]}_{\boldsymbol{Y}}=\underbrace{\left[\begin{array}{cc}Y_2 & Y_1\\ \vdots & \vdots\\ Y_{n-1} & Y_{n-2}\end{array}\right]}_{\boldsymbol{X}}\left[\begin{array}{c}\varphi_1\\ \varphi_2\end{array}\right]+\left[\begin{array}{c}\varepsilon_3\\ \vdots\\ \varepsilon_n\end{array}\right]

Here we can run a (conditional) ordinary least squares estimation,

> base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
> regression=lm(Y~0+X1+X2,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X1 + X2, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.0268 -0.7063  0.1065  0.6925  3.2566 

Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1  0.23400    0.05463   4.283 2.88e-05 ***
X2  0.62863    0.05476  11.479  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.062 on 197 degrees of freedom
Multiple R-squared:  0.6349,	Adjusted R-squared:  0.6312 
F-statistic: 171.3 on 2 and 197 DF,  p-value: < 2.2e-16

so we get the following estimators, for the autoregressive coefficients, and for the volatility of the noise

> regression$coefficients
       X1        X2 
0.2339959 0.6286321 
> summary(regression)$sigma
[1] 1.061839
  • using Yule-Walker equations

As we’ve seen in class, we can easily get the following equations for the autocovariance functions,

\left\{\begin{array}{l}\gamma_1=\varphi_1\gamma_0+\varphi_2\gamma_1\\ \gamma_2=\varphi_1\gamma_1+\varphi_2\gamma_0\end{array}\right.

which can also be written (again, using a matrix expression)

\left[\begin{array}{cc}\gamma_0 & \gamma_1\\ \gamma_1 & \gamma_0\end{array}\right]\left[\begin{array}{c}\varphi_1\\ \varphi_2\end{array}\right]=\left[\begin{array}{c}\gamma_1\\ \gamma_2\end{array}\right]

So we just have to solve a simple system of two linear equations. Note that if we divide by the variance, those equations can be written in terms of the autocorrelation functions,

\left[\begin{array}{cc}1 & \rho_1\\ \rho_1 & 1\end{array}\right]\left[\begin{array}{c}\varphi_1\\ \varphi_2\end{array}\right]=\left[\begin{array}{c}\rho_1\\ \rho_2\end{array}\right]

The code is the following

> rho1=cor(Z[1:(n-1)],Z[2:n])
> rho2=cor(Z[1:(n-2)],Z[3:n])
> A=matrix(c(1,rho1,rho1,1),2,2)
> b=matrix(c(rho1,rho2),2,1)
> (PHI=solve(A,b))
          [,1]
[1,] 0.2256270
[2,] 0.6315329

Now, we need to extract the estimated innovation process, from this set of parameters

> estWN=base$Y-(PHI[1]*base$X1+PHI[2]*base$X2)
> sd(estWN)
[1] 1.058558

This estimator is probably not the best one (we can take into account that we’ve lost two degrees of freedom), but as a starting point, let us consider this one.

An alternative could be to include the variance term in the Yule-Walker equations, to get a three-dimensional linear system,

https://latex.codecogs.com/gif.latex?\left\{\begin{array}{l}%20\gamma_0%20=%20\varphi_1%20\gamma_1+\varphi_2%20\gamma_2+\sigma^2\\%20\gamma_1=\varphi_1%20\gamma_0+\varphi_2%20\gamma_1%20\\%20\gamma_2=\varphi_1%20\gamma_1+\varphi_2%20\gamma_0\end{array}\right.

It is not much more complicated to solve, actually,

> gamma0=var(Z[1:n])
> gamma1=var(Z[1:(n-1)],Z[2:n])
> gamma2=var(Z[1:(n-2)],Z[3:n])
> A=matrix(c(gamma1,gamma0,gamma1,gamma2,gamma1,gamma0,1,0,0),3,3)
> b=matrix(c(gamma0,gamma1,gamma2),3,1)
> (PHISIGMA=solve(A,b))
          [,1]
[1,] 0.2283151
[2,] 0.6283431
[3,] 1.1335501
  • using (conditional) likelihood estimators

Finally, we can assume some distribution for the innovation process. The standard model is a Gaussian model, i.e.

https://latex.codecogs.com/gif.latex?Y_t\vert%20Y_{t-1}=y_{t-1},Y_{t-2}=y_{t-2}

has a Gaussian distribution

https://latex.codecogs.com/gif.latex?\mathcal{N}(\varphi_1y_{t-1}+\varphi_2y_{t-2},\sigma^2)

In that case, the conditional log likelihood (conditional since we set the first two observations here) is

> CondLogLik=function(A,TS){
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]; L=0
+ for(t in 3:length(TS)){
+ L=L+dnorm(TS[t],mean=phi1*TS[t-1]+
+ phi2*TS[t-2],sd=sigma,log=TRUE)}
+ return(-L)}

Now, we can run standard optimization procedures,

> LogL=function(A) CondLogLik(A,TS=Z)
> optim(c(0,0,1),LogL)
$par
[1] 0.2339589 0.6285002 1.0565613

$value
[1] 293.3042

$counts
function gradient 
     106       NA 

$convergence
[1] 0

$message
NULL

It is also possible to consider a global maximum likelihood optimisation problem, since the variance matrix of the vector https://latex.codecogs.com/gif.latex?\boldsymbol{Y}=(Y_1,\cdots,Y_t) has a known form.

  • using (unconditional) likelihood estimators

The variance matrix of https://latex.codecogs.com/gif.latex?\boldsymbol{Y}=(Y_1,\cdots,Y_t) is https://latex.codecogs.com/gif.latex?\boldsymbol{\Gamma}=[\gamma(\vert%20i-j\vert)], where the autocovariances are not known, but can easily be computed using a recursive relationship.

> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=1
+ rho[2]=phi1/(1-phi2)
+ for(h in 3:n) rho[h]=phi1*rho[h-1]+phi2*rho[h-2]
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1-phi2)*sigma^2/((1+phi2)*((1-phi2)^2-phi1^2))
+ SIG=gamma0*SIG
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
Error in chol.default(x, pivot = FALSE) : 
Error in pd.solve(varcov, log.det = TRUE) : 
  x appears to be not positive definite

The problem is that there is a strong constraint on the pair https://latex.codecogs.com/gif.latex?(\varphi_1,\varphi_2) to get a stationary process (we are not far away, here, from the border of the triangle, where the process becomes non-stationary). To be more specific (this was mentioned in a previous post), we should have

https://latex.codecogs.com/gif.latex?\left\{\begin{array}{l}%20\phi_2-\phi_1%3C1%20\\\phi_2+\phi_1%3C1\\%20\vert\phi_2\vert%3C1\end{array}\right.

i.e. in a standard matrix form

https://latex.codecogs.com/gif.latex?\left[\begin{array}{cc}%20+1%20&%20-1%20\\%20-1%20&%20-1%20\\%200%20&%20+1\end{array}\right]\left[\begin{array}{c}%20\varphi_1%20\\%20\varphi_2\end{array}\right]%20%3E%20\left[\begin{array}{c}%20-1%20\\%20-1%20\\%20-1\end{array}\right]

(we can add an additional constraint on the variance parameter, to ensure that it will be positive). To run a constrained optimization routine, consider

> U=matrix(c(1,0,0,-1,0,1,0,-1,0,0,1,0),4,3)
> C=c(0,0,0,-.99999)
> constrOptim(c(.1,.1,1),LogL,grad=NULL,ui=U,ci=C)
$par
[1] 0.2238892 0.6342850 1.0613388

$value
[1] 297.9202

$counts
function gradient 
     108       NA 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] 0.000189892

(here, to make it faster, we also restrict the parameters to be positive).

  • comparing those estimates

Here, our five estimators are rather close. Let us run more samples to see more precisely how they behave. For the first parameter https://latex.codecogs.com/gif.latex?\widehat{\varphi_1}, we get

and for the second one, https://latex.codecogs.com/gif.latex?\widehat{\varphi_2}, we have

The bias we observe is probably coming from the fact that, with this numerical example, we are not far away from the non-stationary case (the sum of the true parameters should be less than 1, and it is 0.95). When we estimate the parameters, we force them to be inside the triangle, since those parameters can be estimated only if the process is stationary.

Observe that the standard deviation of the innovation process https://latex.codecogs.com/gif.latex?\widehat{\sigma} is rather well estimated here,

(with clearly some estimators that perform better than others).
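
For completeness, here is a sketch of the kind of simulation used to produce those comparisons (only for the least squares and Yule-Walker estimators of the first parameter, to keep it short; the same loop can obviously be extended to the likelihood-based estimators),

> ns=500
> estLS=estYW=matrix(NA,ns,2)
> for(s in 1:ns){
+ e=rnorm(1000)
+ W=rep(0,1000)
+ for(t in 3:1000) W[t]=phi1*W[t-1]+phi2*W[t-2]+e[t]
+ W=W[800:1000]
+ nw=length(W)
+ b=data.frame(Y=W[3:nw],X1=W[2:(nw-1)],X2=W[1:(nw-2)])
+ estLS[s,]=lm(Y~0+X1+X2,data=b)$coefficients
+ r1=cor(W[1:(nw-1)],W[2:nw])
+ r2=cor(W[1:(nw-2)],W[3:nw])
+ estYW[s,]=solve(matrix(c(1,r1,r1,1),2,2),c(r1,r2))
+ }
> boxplot(cbind(estLS[,1],estYW[,1]),names=c("OLS","Yule-Walker"))
> abline(h=phi1,lty=2,col="blue")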

 

Likelihood Based Methods, for Extremes

This week, in the MAT8595 course, we will start the section on inference for extreme values. To start with something simple, we will use maximum likelihood techniques on a Generalized Pareto Distribution (we’ve seen on Monday the Pickands-Balkema-de Haan theorem).

  • Maximum Likelihood Estimation

In the context of parametric models, the standard technique is to consider the maximum of the likelihood (or of the log-likelihood). Let \boldsymbol{\theta} denote the parameter (with \boldsymbol{\theta}\in\Theta\subset\mathbb{R}^d). Given some standard technical assumptions (identifiability, and smoothness of the log-likelihood on some neighbourhood of the true value \boldsymbol{\theta}_0), then

\sqrt{n}\left(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta}_0\right)\rightarrow\mathcal{N}\left(\boldsymbol{0},I(\boldsymbol{\theta}_0)^{-1}\right)

where https://latex.codecogs.com/gif.latex%20?I denotes the Fisher information matrix (see any mathematical statistics textbook). Consider here some i.i.d. sample, from a Generalized Pareto Distribution, with parameter https://latex.codecogs.com/gif.latex?\boldsymbol{\theta}=(\xi,\sigma), so that

https://latex.codecogs.com/gif.latex?%20%20%20%20%20F_{(\xi,\sigma)}(x)%20=%20\begin{cases}%201%20-%20\left(1+%20\frac{\xi%20x}{\sigma}\right)^{-1/\xi}%20&,%20\xi%20\neq%200%20\\%201%20-%20\exp%20\left(-\frac{x}{\sigma}\right)%20&,%20\xi%20=%200%20\end{cases}

If we solve (numerically) the first order condition of the maximum likelihood, we get an estimator  https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) which satisfies

https://latex.codecogs.com/gif.latex?\sqrt{n}\left(\left[\begin{array}{c}\widehat{\xi}_n\\\widehat{\sigma%20}_n\end{array}\right]-\left[\begin{array}{c}\xi_0\\\sigma_0%20\end{array}\right]\right)\rightarrow%20\mathcal{N}\left(\left[\begin{array}{c}0\\end{array}\right],\left[\begin{array}{cc}(1+\xi_0)^2%20&%20\sigma_0[1+\xi_0]\\%20\sigma_0%20[1+\xi_0]%20&%202\sigma^2_0(1+\xi_0)%20\end{array}\right]\right)

The idea of this asymptotic normality is the following: if the true distribution of the sample is a GPD with parameter \boldsymbol{\theta}_0, then, if https://latex.codecogs.com/gif.latex%20?n is large enough, https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) will have a joint normal distribution. So if we generate a lot of samples (sufficiently large, say 200 observations), then the scatterplot of the estimators should be the same as the scatterplot of a Gaussian distribution,

> library(evir)
> n=200
> param=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ }
> m=apply(param,2,mean)
> S=var(param)
> library(mnormt)
> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=101)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> COL=rev(heat.colors(100))
> image(x,y,z,col=COL)
> points(param)

and to get a 3d representation

> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=31)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=31)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> persp(x,y,t(z),shade=TRUE,col="green",theta=-30,phi=20,ticktype="detailed",
+ xlab="xi",ylab="sigma")

With 200 observations, if the true underlying distribution is a GPD, then, indeed, the joint distribution of https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) seems to be normal. It would then be interesting to use this result to generate some confidence intervals, for instance, or to define some tests.
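
For instance, since the asymptotic variance of the tail index estimator is (1+\xi)^2/n (the upper-left term of the matrix above), a rough (pointwise) 95% confidence interval can be obtained by plugging in the estimate; a small sketch, on one simulated sample,

> x=rgpd(n,xi=1/1.5,beta=1)
> xihat=gpd(x,0)$par.ests["xi"]
> # asymptotic 95% confidence interval for xi
> xihat+c(-1,1)*1.96*(1+xihat)/sqrt(n)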

To go further, see any standard textbook on statistical mathematics, e.g. Casella & Berger (2002).

  • Delta Method

Another important property is the so-called delta method (we’ve seen on Monday in class that it can be obtained easily using a first order Taylor expansion). The idea is that if https://latex.codecogs.com/gif.latex%20?\widehat{\boldsymbol{\theta}}_n is asymptotically normal, and if h is sufficiently smooth, then https://latex.codecogs.com/gif.latex%20?h(\widehat{\boldsymbol{\theta}}_n) will also be asymptotically Gaussian. More precisely (see also the header of this blog)

From this property, we can get the normality of https://latex.codecogs.com/gif.latex%20?\widehat{\alpha}_n=\widehat{\xi}_n^{-1} (which is another parametrization used in extreme value models), or of any quantile, https://latex.codecogs.com/gif.latex%20?\widehat{Q}_u=F^{-1}_{\widehat{\boldsymbol{\theta}}_n}(u)=h_u(\widehat{\xi}_n,\widehat{\sigma}_n). Let us run some simulations, one more time, to check that we actually have joint normality.

> library(evir)
> n=200
> param=riskm=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ xihat=param[s,1]
+ shat=param[s,2]
+ q=shat * (.01^(-xihat) - 1)/xihat
+ tvar=q+(shat + xihat * q)/(1 - xihat)
+ riskm[s,]=c(1/xihat,q)
+ }
> m=apply(riskm,2,mean)
> S=var(riskm)
> library(mnormt)
> x=seq(min(riskm[,1])-.05,max(riskm[,1])+.05,length=101)
> y=seq(min(riskm[,2])-.05,max(riskm[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> image(x,y,t(z),col=COL)
> points(riskm)

As we can see below, with samples of size 200, we cannot use this asymptotic result: it looks like we do not have enough data. But if we run the same code with

> n=5000

We get the joint normality of https://latex.codecogs.com/gif.latex%20?\widehat{\alpha}_n and https://latex.codecogs.com/gif.latex%20?\widehat{Q}_n(u). This is what we can get from this result, called the delta method in statistical textbooks. See again Casella & Berger (2002) for more details.

  • Profile Likelihood

Another interesting tool is the concept of profile likelihood. This is interesting here since the main interest is the tail index https://latex.codecogs.com/gif.latex%20?\xi, https://latex.codecogs.com/gif.latex%20?\sigma being here some kind of auxiliary parameter. See Venzon & Moolgavkar (1988) for more details. Here, we will plot

http://freakonometrics.blog.free.fr/public/perso5/proflike01.gif

But more generally, it is possible to consider

http://freakonometrics.blog.free.fr/public/perso5/profilik06.gif

where http://freakonometrics.blog.free.fr/public/perso5/profilik03.gif is the set of interesting parameters. Then (under standard suitable conditions) we can prove that

http://freakonometrics.blog.free.fr/public/perso5/profilik05.gif

which can be used to derive confidence intervals. In the GPD case, for each https://latex.codecogs.com/gif.latex%20?\xi, we have to find an optimal https://latex.codecogs.com/gif.latex%20?\sigma^\star(\xi). We compute the (profile) likelihood i.e. https://latex.codecogs.com/gif.latex%20?\mathcal{L}(\xi,\sigma^\star(\xi)). And we can compute the maximum of this profile likelihood. This two-stage optimization is, in general, not equivalent with the (global) maximization of the likelihood, as computed below

>  n=500
>  set.seed(1)
>  x=rgpd(n,xi=1/1.5,beta=1)
>  loglikelihood=function(xi,beta){
+  sum(log(dgpd(x,xi,mu=0,beta))) }
>  XIV=(1:300)/100;L=rep(NA,300)
>  for(i in 1:300){
+  XI=XIV[i]
+  profilelikelihood=function(beta){
+  -loglikelihood(XI,beta) }
+  L[i]=-optim(par=1,fn=profilelikelihood)$value }
>  plot(XIV,L,type="l")
>  XIV[which.max(L)]
[1] 0.67
>  gpd(x,0)$par.ests
       xi      beta 
0.6730145 0.9725483

We are not far away. Actually, if we want to compute the maximum of the profile likelihood (and not only compute the values of the profile likelihood on a grid, as before), we use

>  PL=function(XI){
+  profilelikelihood=function(beta){
+  -loglikelihood(XI,beta) }
+  return(optim(par=1,fn=profilelikelihood)$value)}
>  (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6731025

$objective
[1] 822.5574

Observe that, indeed, we are not far away from the maximum likelihood estimator of https://latex.codecogs.com/gif.latex%20?\xi (I believe that it’s mainly a computational issue here, and that the two are similar, here… actually, I’d be glad to hear about cases where the maximum of the profile likelihood is not the same as the maximum of the likelihood). The interesting point is that we can use this technique to compute a confidence interval, and even visualize it on a graph

>  up=OPT$objective
>  abline(h=-up)
>  abline(h=-up-qchisq(p=.95,df=1),col="red")
>  I=which(L>=-up-qchisq(p=.95,df=1))
>  lines(XIV[I],rep(-up-qchisq(p=.95,df=1),length(I)),
+  lwd=5,col="red")
>  abline(v=range(XIV[I]),lty=2,col="red")

The vertical lines are the lower and the upper bound of a 95% confidence interval for parameter https://latex.codecogs.com/gif.latex%20?\xi.

To go further, see Murphy, S.A & van der Vaart, A.W. (2000). On Profile Likelihood.

Bias and MLE

Before leaving the office, this evening, JP decided to knock at my door to ask me a “quick and very basic question” (as he put it). This is JP’s strategy, and he knows it works. His question was – more or less – what do we know about the bias in maximum likelihood estimation when we have a small sample, from a Gamma distribution. He was surprised by some results he got. If I wanted to be naughty, too, I would say that he was surprised to see how long his student spent coding that in SAS. So he wanted to challenge me, and see how fast I could give him a valuable answer. Given the fact that I had to leave early because my elder son had a fencing competition, I tried to write a simple code to “visualize” the bias of the (first) parameter of a Gamma distribution, with MLE.

Before showing the graph, I wanted to add that I hate one thing about mathematical statistics courses: we learn nothing interesting there. I mean, we see nice mathematical concepts, but after this class, you can hardly say anything when you see your first dataset. Like with real data. For instance, this course usually emphasizes asymptotic results, using limit theorems. When you take this course, you learn a lot of things about maximum likelihood, for instance. You can compute the asymptotic variance and derive asymptotic confidence intervals. But are those results relevant when you have 50 observations? Is it possible, with 50 observations, to have a bias which has the same size as the parameter?

As usual, one possible answer is “if you don’t have a lot of observations, be Bayesian!“. Maybe. Someday. What I tried, here, is to run simulations to see how MLE estimators behave. Given an i.i.d. sample of size n, from a Gamma(\alpha,\beta) distribution, let \widehat{\alpha} and \widehat{\beta} denote the maximum likelihood estimators of the two parameters.

library(fitdistrplus)
maxl=function(x) fitdist(x,"gamma",method="mle")$estimate
VK=floor(exp(seq(log(5),log(200),length=25)))
V=NULL
for(k in 1:length(VK)){
n=VK[k]
N=5000
m=matrix(rgamma(n*N,1,2),n,N)
ss=apply(m,2,maxl)
V=rbind(V,ss)}
y=as.vector(V[seq(1,length(VK)*2,by=2),])
x=rep(c(VK),ncol(V))
boxplot(y~x,
xlab="Nb. observations (log scale)",ylim=c(0,6))
abline(h=1,lty=2,col="blue")

Here, in our simulations, the shape parameter was 1. On the graph, we have boxplots of \widehat{\alpha} obtained on several scenarios. We clearly see the positive bias of the MLE. And the bias decreases with n (as expected, since the MLE is asymptotically unbiased). We can also visualize the distribution of \widehat{\alpha} (instead of boxplots)
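
Here is a sketch of the kind of plot I have in mind, with kernel density estimates of the shape estimator for the smallest and the largest sample sizes considered above,

# kernel densities of the shape estimates, for n=VK[1] and n=VK[25]
plot(density(V[1,]),xlim=c(0,6),main="",
xlab="estimator of the shape parameter")
lines(density(V[2*length(VK)-1,]),col="red")
abline(v=1,lty=2,col="blue")
legend("topright",legend=paste("n =",c(VK[1],VK[length(VK)])),
col=c("black","red"),lty=1)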

It is also possible to derive analytical results. David Cox and Joyce Snell did the maths in 1968 and actually obtained analytical expressions for the biases. More recently, David Giles (a.k.a. @deagiles on Twitter) and Hui Feng looked at the behavior of bias-adjusted estimators, a few years ago. For instance, one can get that

\text{bias}(\widehat{\alpha})\approx\frac{\alpha\,\psi'(\alpha)-\alpha^2\,\psi''(\alpha)-2}{2n\left[\alpha\,\psi'(\alpha)-1\right]^2}

where

\psi(x)=\frac{d}{dx}\log\Gamma(x)

is the so-called digamma function, and where \psi' and \psi'' are its first and second order derivatives, see e.g. Bowman and Shenton (1982) – yes, there is a book on the topic of estimating the parameters of the Gamma distribution…

Observe that the bias of \widehat{\alpha} does not depend on \beta, while the bias of \widehat{\beta} will depend on \alpha.

d1digamma=function(x,h=1e-7)
return((digamma(x+h)-digamma(x-h))/(2*h))
d2digamma=function(x,h=1e-7)
return((d1digamma(x+h)-d1digamma(x-h))/(2*h))
biasalpha=function(a,n){
return((a*d1digamma(a)-a^2*d2digamma(a)
-2)/(2*n*(a*d1digamma(a)-1)^2))
}

The way I compute it is probably not optimal, so if you want to improve it, please, go ahead! If we compare the average bias obtained in our simulations with the one obtained from this first order approximation, we get

m=apply(V,1,mean)
plot(VK,m[seq(1,length(VK)*2,by=2)],type="b",col="red",xlab="Nb. observations (log scale)",log="x")
abline(h=1,lty=2,col="blue")
B=Vectorize(function(n) biasalpha(a=1,n))(1:200)
lines(1:200,B+1,col="orange")

Observe here that neglecting the higher order terms yields an underestimation of the real bias… Fun, isn’t it?
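
A possible shortcut, by the way, if you want to play with the bias formula: base R already has the derivatives of the digamma function (trigamma() is the first derivative, and psigamma(,deriv=2) the second one), so the numerical differentiation above can be avoided; a short sketch,

# same first-order bias, using base R's trigamma() and psigamma()
biasalpha2=function(a,n){
(a*trigamma(a)-a^2*psigamma(a,deriv=2)-2)/
(2*n*(a*trigamma(a)-1)^2)
}
biasalpha2(1,50)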

Maximum likelihood estimates for multivariate distributions

Consider our loss-ALAE dataset, and – as in Frees & Valdez (1998) – let us fit a parametric model, in order to price a reinsurance treaty. The dataset is the following,

> library(evd)
> data(lossalae)
> Z=lossalae
> X=Z[,1];Y=Z[,2]

The first step can be to estimate marginal distributions, independently. Here, we consider lognormal distributions for both components,

> Fempx=function(x) mean(X<=x)
> Fx=Vectorize(Fempx)
> u=exp(seq(2,15,by=.05))
> plot(u,Fx(u),log="x",type="l",
+ xlab="loss (log scale)")
> Lx=function(px) -sum(log(Vectorize(dlnorm)(
+ X,px[1],px[2])))
> opx=optim(c(1,5),fn=Lx)
> opx$par
[1] 9.373679 1.637499
> lines(u,Vectorize(plnorm)(u,opx$par[1],
+ opx$par[2]),col="red")

The fit here is quite good,

For the second component, we do the same,

> Fempy=function(x) mean(Y<=x)
> Fy=Vectorize(Fempy)
> u=exp(seq(2,15,by=.05))
> plot(u,Fy(u),log="x",type="l",
+ xlab="ALAE (log scale)")
> Ly=function(px) -sum(log(Vectorize(dlnorm)(
+ Y,px[1],px[2])))
> opy=optim(c(1.5,10),fn=Ly)
> opy$par
[1] 8.522452 1.429645
> lines(u,Vectorize(plnorm)(u,opy$par[1],
+ opy$par[2]),col="blue")

It is not as good as the fit obtained on losses, but it is not that bad,

Now, consider a multivariate model, with Gumbel copula. We’ve seen before that it worked well. But this time, consider the maximum likelihood estimator globally.

> Cop=function(u,v,a) exp(-((-log(u))^a+
+ (-log(v))^a)^(1/a))
> phi=function(t,a) (-log(t))^a
> cop=function(u,v,a) Cop(u,v,a)*(phi(u,a)+
+ phi(v,a))^(1/a-2)*(
+ a-1+(phi(u,a)+phi(v,a))^(1/a))*(phi(u,a-1)*
+ phi(v,a-1))/(u*v)
> L=function(p) {-sum(log(Vectorize(dlnorm)(
+ X,p[1],p[2])))-
+ sum(log(Vectorize(dlnorm)(Y,p[3],p[4])))-
+ sum(log(Vectorize(cop)(plnorm(X,p[1],p[2]),
+ plnorm(Y,p[3],p[4]),p[5])))}
> opz=optim(c(1.5,10,1.5,10,1.5),fn=L)
> opz$par
[1] 9.377219 1.671410 8.524221 1.428552 1.468238

Marginal parameters are (slightly) different from the one obtained independently,

> c(opx$par,opy$par)
[1] 9.373679 1.637499 8.522452 1.429645
> opz$par[1:4]
[1] 9.377219 1.671410 8.524221 1.428552

And the parameter of Gumbel copula is close to the one obtained with heuristic methods in class.

Now that we have a model, let us play with it, to price a reinsurance treaty. But first, let us see how to generate a Gumbel copula… One idea can be to use the frailty approach, based on a stable frailty. And we can use Chambers et al (1976) to generate a stable distribution. So here is the algorithm to generate samples from a Gumbel copula

> alpha=opz$par[5]
> invphi=function(t,a) exp(-t^(1/a))
> n=500
> x=matrix(rexp(2*n),n,2)
> angle=runif(n,0,pi)
> E=rexp(n)
> beta=1/alpha
> stable=sin((1-beta)*angle)^((1-beta)/beta)*
+ (sin(beta*angle))/(sin(angle))^(1/beta)/
+ (E^(alpha-1))
> U=invphi(x/stable,alpha)
> plot(U)

Here, we consider only 500 simulations,
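
As a sanity check, a sample from the same Gumbel copula can also be drawn directly with the copula package (a short sketch, using gumbelCopula() and rCopula() with the estimated parameter),

> library(copula)
> U2=rCopula(n,gumbelCopula(alpha,dim=2))
> plot(U2)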

Based on that copula simulation, we can then use marginal transformations to generate a pair, losses and allocated expenses,

> Xloss=qlnorm(U[,1],opz$par[1],opz$par[2])
> Xalae=qlnorm(U[,2],opz$par[3],opz$par[4])

In standard reinsurance treaties – see e.g. Clarke (1996) – allocated expenses are split pro rata capita between the insurance company and the reinsurer. If X denotes the losses, and E the allocated expenses, a standard excess-of-loss treaty has payoff (for the reinsurer, as implemented in the code below)

\left[(X-R)+\frac{X-R}{X}\cdot E\right]\boldsymbol{1}(R\leq X<L)+\left[(L-R)+\frac{L-R}{R}\cdot E\right]\boldsymbol{1}(X\geq L)

where L denotes the (upper) limit, and R the insurer’s retention. Using Monte Carlo simulation, it is then possible to estimate the pure premium of such a reinsurance treaty.

> L=100000
> R=50000
> Z=((Xloss-R)+(Xloss-R)/Xloss*Xalae)*
+ (R<=Xloss)*(Xloss<L)+
+ ((L-R)+(L-R)/R*Xalae)*(L<=Xloss)
> mean(Z)
[1] 12596.45

Now, play with it… it is possible to find a better fit, I guess…

a short word on profile likelihood

Profile likelihood is an interesting theory to visualize and compute confidence intervals for estimators (see e.g. Venzon & Moolgavkar (1988)). As we will use it, we will plot

http://freakonometrics.blog.free.fr/public/perso5/proflike01.gif

But more generally, it is possible to consider

http://freakonometrics.blog.free.fr/public/perso5/profilik06.gif

where http://freakonometrics.blog.free.fr/public/perso5/profilik03.gif. Then (under standard suitable conditions)

http://freakonometrics.blog.free.fr/public/perso5/profilik05.gif

which can be used to derive confidence intervals.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> library(evir)
> X=base1$Loss.in.DKM
> u=5

The function to draw the profile likelihood for the tail index parameter is then

> Y=X[X>u]-u
> loglikelihood=function(xi,beta){
+ sum(log(dgpd(Y,xi,mu=0,beta))) }
> XIV=(1:300)/100;L=rep(NA,300)
> for(i in 1:300){
+ XI=XIV[i]
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ L[i]=-optim(par=1,fn=profilelikelihood)$value }
> plot(XIV,L,type="l")

It is possible to use that profile likelihood function to derive a confidence interval,

> PL=function(XI){
+ profilelikelihood=function(beta){
+ -loglikelihood(XI,beta) }
+ return(optim(par=1,fn=profilelikelihood)$value)}
> (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6315989

$objective
[1] 754.1115
> up=OPT$objective
> abline(h=-up)
> abline(h=-up-qchisq(p=.95,df=1)/2,col="red")
> I=which(L>=-up-qchisq(p=.95,df=1)/2)
> lines(XIV[I],rep(-up-qchisq(p=.95,df=1)/2,length(I)),
+ lwd=5,col="red")
> abline(v=range(XIV[I]),lty=2,col="red")

The same graph can also be obtained with the following code

> library(ismev)
> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

Tail index estimation

These data were collected at Copenhagen Reinsurance, and comprise 2167 fire losses over the period 1980 to 1990. They have been adjusted for inflation to reflect 1985 values, and are expressed in millions of Danish Kroner. Note that a multivariate version of the same data is also available, where the total claim is divided into a building loss, a loss of contents and a loss of profits.

> base1=read.table(
+ "http://freakonometrics.free.fr/danish-univariate.txt",
+ header=TRUE)
> base2=read.table(
+ "http://freakonometrics.free.fr/danish-multivariate.txt",
+ header=TRUE)

Consider here the first dataset (we deal – so far – with univariate extremes),

> X=base1$Loss.in.DKM
> D=as.Date(as.character(base1$Date),"%m/%d/%Y")
> plot(D,X,type="h")

The graph is the following,

A natural idea is then to plot

http://freakonometrics.hypotheses.org/files/2015/12/hill01.gif

i.e.

> Xs=sort(X)
> logXs=rev(log(Xs))
> n=length(X)
> plot(log(Xs),log((n:1)/(n+1)))

Points are on a straight line here. The slope can be obtained using a linear regression,

> B=data.frame(X=log(Xs),Y=log((n:1)/(n+1)))
> reg=lm(Y~X,data=B)
> summary(reg)

Call:
lm(formula = Y ~ X, data = B)

Residuals:
Min       1Q   Median       3Q      Max
-0.59999 -0.00777  0.00878  0.02461  0.20309

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.089442   0.001572   56.88   <2e-16 ***
X           -1.382181   0.001477 -935.55   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.04928 on 2165 degrees of freedom
Multiple R-squared: 0.9975,	Adjusted R-squared: 0.9975
F-statistic: 8.753e+05 on 1 and 2165 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-500):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 500):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.48502 -0.02148 -0.00900  0.01626  0.35798

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.186188   0.010033   18.56   <2e-16 ***
X           -1.432767   0.005105 -280.68   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.07751 on 499 degrees of freedom
Multiple R-squared: 0.9937,	Adjusted R-squared: 0.9937
F-statistic: 7.878e+04 on 1 and 499 DF,  p-value: < 2.2e-16

> reg=lm(Y~X,data=B[(n-100):n,])
> summary(reg)

Call:
lm(formula = Y ~ X, data = B[(n - 100):n, ])

Residuals:
Min       1Q   Median       3Q      Max
-0.33396 -0.03743  0.02279  0.04754  0.62946

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.67377    0.06777   9.942   <2e-16 ***
X           -1.58536    0.02240 -70.772   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1299 on 99 degrees of freedom
Multiple R-squared: 0.9806,	Adjusted R-squared: 0.9804
F-statistic:  5009 on 1 and 99 DF,  p-value: < 2.2e-16

The slope here is somehow related to the tail index of the distribution. Consider some heavy tailed distribution, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill03.gif, so that http://freakonometrics.hypotheses.org/files/2015/12/hill27.gif, where http://freakonometrics.hypotheses.org/files/2015/12/hill28.gif is some slowly varying function. Equivalently, there exists a slowly varying function http://freakonometrics.hypotheses.org/files/2015/12/hill29.gif such that http://freakonometrics.hypotheses.org/files/2015/12/hill30.gif. Then

http://freakonometrics.hypotheses.org/files/2015/12/hill33.gif

i.e. since a natural estimator for http://freakonometrics.hypotheses.org/files/2015/12/hill35.gif is the order statistic http://freakonometrics.hypotheses.org/files/2015/12/hill36.gif, the slope of the straight line is the opposite of the tail index http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif. The estimator of the slope is (considering only the http://freakonometrics.hypotheses.org/files/2015/12/hill99.gif largest observations)

http://freakonometrics.hypotheses.org/files/2015/12/hill39.gif

Hill‘s estimator is based on the assumption that the denominator above is almost 1 (which means that  http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif, as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif), i.e.

http://freakonometrics.hypotheses.org/files/2015/12/hill02.gif

Note that, if http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, but not too fast, i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill12.gif (one can even get http://freakonometrics.hypotheses.org/files/2015/12/hill11.gif with stronger convergence assumptions). Further

http://freakonometrics.hypotheses.org/files/2015/12/hill04.gif

Based on that (asymptotic) distribution, it is possible to get an (asymptotic) confidence interval for http://freakonometrics.hypotheses.org/files/2015/12/hill98.gif

> xi=1/(1:n)*cumsum(logXs)-logXs
> xise=1.96/sqrt(1:n)*xi
> plot(1:n,xi,type="l",ylim=range(c(xi+xise,xi-xise)),
+ xlab="",ylab="")
> polygon(c(1:n,n:1),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(1:n,xi+xise,col="red",lwd=1.5)
> lines(1:n,xi-xise,col="red",lwd=1.5)
> lines(1:n,xi,lwd=1.5)
> abline(h=0,col="grey")
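
A similar graph (the Hill estimator as a function of the number of upper order statistics, with confidence bands) should also be obtained with the hill() function from the evir package; a one-line sketch,

> library(evir)
> hill(X,option="xi")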

It is also possible to work with http://freakonometrics.hypotheses.org/files/2015/12/hill06.gif, then http://freakonometrics.hypotheses.org/files/2015/12/hill05.gif. And similarly http://freakonometrics.hypotheses.org/files/2015/12/hill13.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif (and again http://freakonometrics.hypotheses.org/files/2015/12/hill10.gif with additional assumptions on the rate of convergence), and

http://freakonometrics.hypotheses.org/files/2015/12/hill09.gif

(obtained using the delta-method). Again, we can use that result to derive (asymptotic) confidence intervals

> alpha=1/xi
> alphase=1.96/sqrt(1:n)/xi
> YL=c(0,3)
> plot(1:n,alpha,type="l",ylim=YL,xlab="",ylab="")
> polygon(c(1:n,n:1),c(alpha+alphase,rev(alpha-alphase)),
+ border=NA,col="lightblue")
> lines(1:n,alpha+alphase,col="red",lwd=1.5)
> lines(1:n,alpha-alphase,col="red",lwd=1.5)
> lines(1:n,alpha,lwd=1.5)
> abline(h=0,col="grey")

The Dekkers-Einmahl-de Haan estimator is

http://freakonometrics.hypotheses.org/files/2015/12/hill25.gif

where for

http://freakonometrics.hypotheses.org/files/2015/12/hill21.gif

Then (given again conditions on the speed of convergence i.e. http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif, with http://freakonometrics.hypotheses.org/files/2015/12/hill15.gif as http://freakonometrics.hypotheses.org/files/2015/12/hill16.gif),

http://freakonometrics.hypotheses.org/files/2015/12/hill42.gif

Finally, Pickands‘ estimator

http://freakonometrics.hypotheses.org/files/2015/12/hill26.gif

it is possible to prove that, as http://freakonometrics.hypotheses.org/files/2015/12/hill14.gif,

http://freakonometrics.hypotheses.org/files/2015/12/hill41.gif

Here the code is

> Xs=rev(sort(X))
> xi=1/log(2)*log( (Xs[seq(1,length=trunc(n/4),by=1)]-
+ Xs[seq(2,length=trunc(n/4),by=2)])/
+ (Xs[seq(2,length=trunc(n/4),by=2)]-Xs[seq(4,
+ length=trunc(n/4),by=4)]) )
> xise=1.96/sqrt(seq(1,length=trunc(n/4),by=1))*
+sqrt( xi^2*(2^(xi+1)+1)/((2*(2^xi-1)*log(2))^2))
> plot(seq(1,length=trunc(n/4),by=1),xi,type="l",
+ ylim=c(0,3),xlab="",ylab="")
> polygon(c(seq(1,length=trunc(n/4),by=1),rev(seq(1,
+ length=trunc(n/4),by=1))),c(xi+xise,rev(xi-xise)),
+ border=NA,col="lightblue")
> lines(seq(1,length=trunc(n/4),by=1),
+ xi+xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),
+ xi-xise,col="red",lwd=1.5)
> lines(seq(1,length=trunc(n/4),by=1),xi,lwd=1.5)
> abline(h=0,col="grey")

It is also possible to use maximum likelihood techniques to fit a GPD distribution over a high threshold.

> library(evd)
> library(evir)
> gpd(X,5)
$n
[1] 2167

$threshold
[1] 5

$p.less.thresh
[1] 0.8827873

$n.exceed
[1] 254

$method
[1] "ml"

$par.ests
xi      beta
0.6320499 3.8074817

$par.ses
xi      beta
0.1117143 0.4637270

$varcov
[,1]        [,2]
[1,]  0.01248007 -0.03203283
[2,] -0.03203283  0.21504269

$information
[1] "observed"

$converged
[1] 0

$nllh.final
[1] 754.1115

attr(,"class")
[1] "gpd"

or equivalently (or almost)

> gpd.fit(X,5)
$threshold
[1] 5

$nexc
[1] 254

$conv
[1] 0

$nllh
[1] 754.1115

$mle
[1] 3.8078632 0.6315749

$rate
[1] 0.1172127

$se
[1] 0.4636270 0.1116136

The interest of the latter function is that it is possible to visualize the profile likelihood of the tail index,

> gpd.profxi(gpd.fit(X,5),xlow=0,xup=3)

or

> gpd.profxi(gpd.fit(X,20),xlow=0,xup=3)

Hence, it is possible to plot the maximum likelihood estimator of the tail index, as a function of the threshold (including a confidence interval),

> GPDE=Vectorize(function(u){gpd(X,u)$par.ests[1]})
> GPDS=Vectorize(function(u){
+ gpd(X,u)$par.ses[1]})
> u=c(seq(2,10,by=.5),seq(11,25))
> XI=GPDE(u)
> XIS=GPDS(u)
> plot(u,XI,ylim=c(0,2))
> segments(u,XI-1.96*XIS,u,XI+
+ 1.96*XIS,lwd=2,col="red")

Finally, it is possible to use block-maxima techniques.

> gev.fit(X)
$conv
[1] 0

$nllh
[1] 3392.418

$mle
[1] 1.4833484 0.5930190 0.9168128

$se
[1] 0.01507776 0.01866719 0.03035380

The estimator of the tail index is here the last coefficient, on the right.
Since it is rather difficult to install a package in classrooms, here is the source of the R codes used here (to fit a GPD on exceedances)

> source("http://freakonometrics.blog.free.fr/public/code/gpd.R")

Next time, we will discuss how to use those estimators.

« the man who had predicted the crisis »

I am going to put my grumpy-smurf hat back on today, at the risk of being accused once again of primary anti-journalism. But I was annoyed to read, in an article (a journalistic one, not an academic one), that someone was presented as « the man who had predicted the crisis ». In seven words, the journalist managed to use three clichés that annoy me.

Let us start with the end, namely « the crisis ». Without wanting to play the grumpy smurf, I do not like this word, used over and over by journalists… What is « the crisis »? Especially since, whenever you ask the question, the answer is always « come on, you know… the crisis ». At best, « the financial crisis »… but which one? « come on, stop being difficult… the debt crisis, you know… ». But what are we talking about, then? Over-indebted European states that risk being downgraded by the rating agencies? Household debt that is exploding? Student debt, with students facing rising tuition fees? In short, I do not like this catch-all word that cannot be defined unambiguously. But that is not what annoys me the most…

Then there is the word « predicted ». Because it is the word most used by journalists, in France. Strangely, the word « forecast » is not used. Because « predicting » is not « forecasting ». Admittedly, both relate to the future, through the prefix pre- which expresses anteriority in time. The verb « predict » is often used for pronouncements that rely on intuition, on a premonitory feeling, or even on a supernatural experience: fortune tellers « predict » the future. On the other hand, there are « forecasting » institutes, as well as courses on « forecasting » methods (and even books on the subject). We make weather « forecasts », or budget « forecasts ». In short, « predicting » is a matter of faith, whereas « forecasting » is more a matter of science. Two years ago, I had already talked about those who predict, in a post following the earthquake that took place in March 2009 in Italy (and let me reuse – more or less – the image taken from the most wonderful of the Tintin albums – from which the image on my blog is also taken, and with which I fully identify). And what annoys me is that journalists prefer listening to those who predict rather than to forecasters… And I am not even mentioning the numerous articles on Paul the octopus…

Finally, there is the use of the term « the man », or more precisely the use of the definite article, which suggests uniqueness: a single person supposedly announced the crisis. Seen from a distance, it is actually amusing to note that each journalist has « his man » who supposedly announced the crisis. It is Paul Jorion for Rue89 (and France Culture), Nouriel Roubini for the New York Times (and other media in France), but some think of Robert Shiller, Georges Magnus, Melchior Palyi, Victor Maslov, Raghuram Rajan, etc. Clearly, there is no uniqueness, and yet all journalists love to use that « the ». In fact, journalists often omit to say that this man is « the man » among the people belonging to the network of people they read, and who belong to their media circle. What is annoying is that journalists do not take the time to read the papers written by economists in economics journals. Unfortunately, in an academic paper, one does not predict crises, but there are many critical analyses that shed light on things, and not only ex post. But reading theoretical papers takes time, and skills…

Multivariate probit regression using (direct) maximum likelihood estimators

Consider a random pair http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-01.gif of binary responses, i.e. http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-02.gif with http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-03.gif taking values 1 or 2. Assume that the probability http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-04.gif can be a function of some covariates.

  • The Gaussian vector latent structure

A standard model is based on a latent Gaussian structure, i.e. there exists some random vector http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-06.gif such that http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-07.gif if http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-08.gif is lower than a given threshold, and 1 otherwise.
As in standard probit models, assume that

http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-09.gif

where we can assume that http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-10.gif is a Gaussian random vector. This assumption can be used to derive the likelihood of a sample http://freakonometrics.hypotheses.org/files/2015/12/biv-prob-11.gif.

> logV=function(parameter){
+ CORRELATION=parameter[1]
+ BETA=matrix(parameter[2:length(parameter)],ncol(Y),ncol(X))
+ z=cbind(X%*%(BETA[1,]),X%*%(BETA[2,]))
+ sigma=matrix(c(1,CORRELATION,CORRELATION,1),2,2)
+     a11=pmnorm(z[1,],rep(0,ncol(Y)),varcov=sigma)
+ for(i in 2:nrow(z)){a11=c(a11,pmnorm(z[i,],rep(0,ncol(Y)),varcov=sigma))}
+     a10=pnorm(z[1,1],sd=sqrt(sigma[1,1]))-pmnorm(z[1,],varcov=sigma)
+ for(i in
+ 2:nrow(z)){a10=c(a10,pnorm(z[i,1],sd=sqrt(sigma[1,1]))-pmnorm(z[i,],varcov=sigma))}
+     a01=pnorm(z[1,2],sd=sqrt(sigma[2,2]))-pmnorm(z[1,],varcov=sigma)
+ for(i in
+ 2:nrow(z)){a01=c(a01,pnorm(z[i,2],sd=sqrt(sigma[2,2]))-pmnorm(z[i,],varcov=sigma))}
+     a00=1-a10-a01-a11
+ -sum(((Y[,1]==1)&(Y[,2]==1))*log(a11) +
+     ((Y[,1]==0)&(Y[,2]==1))*log(a01) +
+     ((Y[,1]==1)&(Y[,2]==0))*log(a10) +
+     ((Y[,1]==0)&(Y[,2]==0))*log(a00) )
+ }
> OPT=optim(fn=logV,par=c(0,1,1,1,1,1,1),method="BFGS")$par

(the code is a bit long, since I had trouble working properly with matrices – or more precisely with vectorizing my functions – so I used loops… I am sure it is possible to write better code).
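
For what it is worth, here is a sketch of a (hopefully equivalent) version of the same negative log-likelihood, where the loops over observations are replaced by a single apply() over the rows of z,

> logV2=function(parameter){
+ CORRELATION=parameter[1]
+ BETA=matrix(parameter[2:length(parameter)],ncol(Y),ncol(X))
+ z=cbind(X%*%(BETA[1,]),X%*%(BETA[2,]))
+ sigma=matrix(c(1,CORRELATION,CORRELATION,1),2,2)
+ # joint probability P(Y1=1,Y2=1), computed row by row
+ a11=apply(z,1,function(zi) pmnorm(zi,rep(0,2),varcov=sigma))
+ a10=pnorm(z[,1],sd=sqrt(sigma[1,1]))-a11
+ a01=pnorm(z[,2],sd=sqrt(sigma[2,2]))-a11
+ a00=1-a10-a01-a11
+ -sum(((Y[,1]==1)&(Y[,2]==1))*log(a11) +
+ ((Y[,1]==0)&(Y[,2]==1))*log(a01) +
+ ((Y[,1]==1)&(Y[,2]==0))*log(a10) +
+ ((Y[,1]==0)&(Y[,2]==0))*log(a00))
+ }
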
It is possible to generate samples (based on that specific model) to check that we can actually derive proper maximum likelihood estimators,

> library(mnormt)
> set.seed(1)
> n=1000
> r=0.5
> X1=runif(n)
> X2=rnorm(n)
> Y1S=1+5*X1
> Y2S=8-5*X1
> RES=rmnorm(n,mean=c(0,0),varcov=matrix(c(1,r,r,1),2,2))
> YS=cbind(Y1S,Y2S)+RES
> Y1=(YS[,1]>quantile(YS[,1],.5))*1
> Y2=(YS[,2]>quantile(YS[,2],.5))*1
> base=data.frame(i=1:n,Y1,Y2,X1,X2,YS)
> head(base)
  i Y1 Y2        X1          X2      Y1S      Y2S
1 1  0  0 0.2655087  0.07730312 3.177587 5.533884
2 2  0  0 0.3721239 -0.29686864 1.935307 5.089524
3 3  1  0 0.5728534 -1.18324224 4.757848 5.172584
4 4  1  0 0.9082078  0.01129269 4.600029 3.878225
5 5  0  1 0.2016819  0.99160104 2.547362 6.743714
6 6  1  0 0.8983897  1.59396745 5.309974 4.421523

(the two columns on the right are latent observations, that cannot be used since theoretically they are unobservable). Note that it is a simple regression, one of the components is here only to bring some noise. First of all, let us look at marginal probit regressions

>  reg1=glm(Y1~X1+X2,data=base,family=binomial)
>  reg2=glm(Y2~X1+X2,data=base,family=binomial)
> summary(reg1)
 
Call:
glm(formula = Y1 ~ X1 + X2, family = binomial, data = base)
 
Deviance Residuals:
Min        1Q    Median        3Q       Max
-2.90570  -0.50126  -0.00266   0.49162   2.78256
 
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -4.291725   0.267149 -16.065   <2e-16 
X1           8.656836   0.510153  16.969   <2e-16 ***
X2           0.007375   0.090530   0.081    0.935
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 1386.29  on 999  degrees of freedom
Residual deviance:  726.48  on 997  degrees of freedom
AIC: 732.48

Number of Fisher Scoring iterations: 5
> summary(reg2)
Call:
glm(formula = Y2 ~ X1 + X2, family = binomial, data = base)
Deviance Residuals:
Min        1Q    Median        3Q       Max
-2.74682  -0.51814  -0.00001   0.57969   2.58565
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept)  3.91709    0.24399  16.054   <2e-16 ***
X1          -7.89703    0.46277 -17.065   <2e-16 ***
X2           0.18360    0.08758   2.096    0.036 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for binomial family taken to be 1)

Null deviance: 1386.29  on 999  degrees of freedom
Residual deviance:  777.61  on 997  degrees of freedom
AIC: 783.61
Number of Fisher Scoring iterations: 5

Here, the optimization yields,

> OPT=optim(fn=logV,par=c(0,1,1,1,1,1,1),method="BFGS")$par
> OPT[1]
[1] 0.5261382
> matrix(OPT[2:7],2,3)
          [,1]      [,2]       [,3]
[1,] -2.451721  4.908633 0.01600769
[2,]  2.241962 -4.539946 0.10614807

Note that the coefficients we have obtained are almost identical to the ones obtained with the standard R procedure,

>  library(Zelig)
>  REG= zelig(list(mu1=Y1~X1+X2,
+             mu2=Y2~X1+X2,
+     rho=~1),
+     model="bprobit",data=base)
>  summary(REG)
 
Call:
zelig(formula = list(mu1 = Y1 ~ X1 + X2, mu2 = Y2 ~ X1 + X2,
    rho = ~1), model = "bprobit", data = base)
 
Pearson Residuals:
                 Min        1Q     Median      3Q     Max
probit(mu1) -10.5442 -0.377243  0.0041803 0.36709 8.60398
probit(mu2)  -7.8547 -0.376888  0.0083715 0.42923 5.88264
rhobit(rho) -13.8322 -0.091502 -0.0080544 0.37218 0.85101
 
Coefficients:
                  Value Std. Error   t value
(Intercept):1 -2.451699   0.135369 -18.11116
(Intercept):2  2.241964   0.125072  17.92536
(Intercept):3  1.169461   0.189771   6.16249
X1:1           4.908617   0.252683  19.42602
X1:2          -4.539951   0.233632 -19.43203
X2:1           0.015992   0.050443   0.31703
X2:2           0.106154   0.049092   2.16235
 
Number of linear predictors:  3
 
Names of linear predictors: probit(mu1), probit(mu2), rhobit(rho)
Dispersion Parameter for binom2.rho family:   1
 
Residual Deviance: 1460.355 on 2993 degrees of freedom
 
Log-likelihood: -730.1774 on 2993 degrees of freedom
 
Number of Iterations: 3

> matrix(coefficients(REG)[c(1:2,4:7)],2,3)
          [,1]      [,2]       [,3]
[1,] -2.451699  4.908617 0.01599183
[2,]  2.241964 -4.539951 0.10615443

The correlation here is also the same

> (exp(summary(REG)@coef3[3])-1)/(exp(summary(REG)@coef3[3])+1)
[1] 0.5260951

That procedure works well and can be extended to ordinal responses (not only binary ones), or to three-dimensional problems,

logV=function(beta){
BETA=matrix(beta[4:(3+ncol(Y)*ncol(X))],ncol(Y),ncol(X))
z=cbind(X%*%(BETA[1,]),X%*%(BETA[2,]),X%*%(BETA[3,]))
r12=beta[1]
r23=beta[2]
r31=beta[3]
s1=s2=s3=1
sigma=matrix(c(s1^2,r12*s1*s2,r31*s1*s3,
               r12*s1*s2,s2^2,r23*s2*s3,
               r31*s1*s3,r23*s2*s3,s3^2),3,3)
sigma1=matrix(c(s2^2,r23*s2*s3,
                r23*s2*s3,s3^2),2,2)
sigma2=matrix(c(s1^2,r31*s1*s3,
                r31*s1*s3,s3^2),2,2)
sigma3=matrix(c(s1^2,r12*s1*s2,
                r12*s1*s2,s2^2),2,2)
    a111=pmnorm(z[1,],rep(0,ncol(Y)),varcov=sigma)
for(i in 2:nrow(z)){a111=c(a111,pmnorm(z[i,],rep(0,ncol(Y)),varcov=sigma))}
    a011=pmnorm(z[1,2:3],varcov=sigma1)-pmnorm(z[1,],varcov=sigma)
for(i in 2:nrow(z)){a011=c(a011,pmnorm(z[i,2:3],varcov=sigma1)-pmnorm(z[i,],varcov=sigma))}
    a101=pmnorm(z[1,c(1,3)],varcov=sigma2)-pmnorm(z[1,],varcov=sigma)
for(i in 2:nrow(z)){a101=c(a101,pmnorm(z[i,c(1,3)],varcov=sigma2)-pmnorm(z[i,],varcov=sigma))}
    a110=pmnorm(z[1,1:2],varcov=sigma3)-pmnorm(z[1,],varcov=sigma)
for(i in 2:nrow(z)){a110=c(a110,pmnorm(z[i,1:2],varcov=sigma3)-pmnorm(z[i,],varcov=sigma))}
    a100=pnorm(z[1,1],sd=s1)-pmnorm(z[1,c(1,2)],varcov=sigma3)-pmnorm(z[1,c(1,3)],varcov=sigma2)+pmnorm(z[1,],rep(0,ncol(Y)),varcov=sigma)
for(i in 2:nrow(z)){a100=c(a100,pnorm(z[i,1],sd=s1)-pmnorm(z[i,c(1,2)],varcov=sigma3)-pmnorm(z[i,c(1,3)],varcov=sigma2)+pmnorm(z[i,],rep(0,ncol(Y)),varcov=sigma))}
    a010=pnorm(z[1,2],sd=s2)-pmnorm(z[1,c(1,2)],varcov=sigma3)-pmnorm(z[1,c(2,3)],varcov=sigma1)+pmnorm(z[1,],rep(0,ncol(Y)),varcov=sigma)
for(i in 2:nrow(z)){a010=c(a010,pnorm(z[i,2],sd=s2)-pmnorm(z[i,c(1,2)],varcov=sigma3)-pmnorm(z[i,c(2,3)],varcov=sigma1)+pmnorm(z[i,],rep(0,ncol(Y)),varcov=sigma))}
    a001=pnorm(z[1,3],sd=s3)-pmnorm(z[1,c(2,3)],varcov=sigma1)-pmnorm(z[1,c(1,3)],varcov=sigma2)+pmnorm(z[1,],rep(0,ncol(Y)),varcov=sigma)
for(i in 2:nrow(z)){a001=c(a001,pnorm(z[i,3],sd=s3)-pmnorm(z[i,c(2,3)],varcov=sigma1)-pmnorm(z[i,c(1,3)],varcov=sigma2)+pmnorm(z[i,],rep(0,ncol(Y)),varcov=sigma))}
    a000=1-a111-a011-a101-a110-a001-a010-a100
 
a111[a111<=0]=1e-50
a110[a110<=0]=1e-50
a101[a101<=0]=1e-50
a011[a011<=0]=1e-50
a100[a100<=0]=1e-50
a010[a010<=0]=1e-50
a001[a001<=0]=1e-50
a000[a000<=0]=1e-50
 
-sum(((Y[,1]==0)&(Y[,2]==0)&(Y[,3]==0))*log(a111) +
    ((Y[,1]==1)&(Y[,2]==0)&(Y[,3]==0))*log(a011) +
    ((Y[,1]==0)&(Y[,2]==1)&(Y[,3]==0))*log(a101) +
    ((Y[,1]==0)&(Y[,2]==0)&(Y[,3]==1))*log(a110) +
    ((Y[,1]==1)&(Y[,2]==1)&(Y[,3]==0))*log(a001) +
    ((Y[,1]==1)&(Y[,2]==0)&(Y[,3]==1))*log(a010) +
    ((Y[,1]==0)&(Y[,2]==1)&(Y[,3]==1))*log(a100) +
    ((Y[,1]==1)&(Y[,2]==1)&(Y[,3]==1))*log(a000) )
}

A strong assumption in that bivariate model is that the residuals have a Gaussian structure. It is possible to relax that assumption

  • marginally: for instance, if we use a logistic cumulative distribution function, then we will have a bivariate logit regression
  • in terms of dependence structure: it is possible to consider another copula than the Gaussian one, e.g. Gumbel’s copula (also called the bivariate logistic copula), or Clayton’s

Here, the following code can be used to extend the model to non-Gaussian structures,

> F=function(x,r){pmnorm(x,rep(0,length(x)),
+                 varcov=matrix(c(1,r,r,1),2,2))}
> Fx=function(x1){F(c(x1,1e40),0)}
> Fy=function(x2){Fx(x2)}
> 
> logVgen=function(parameter){
+ CORRELATION=parameter[1]
+ BETA=matrix(parameter[2:length(parameter)],ncol(Y),ncol(X))
+ z=cbind(X%*%(BETA[1,]),X%*%(BETA[2,]))
+     a11=F(z[1,],r=CORRELATION)
+ for(i in 2:nrow(z)){a11=c(a11,F(z[i,],r=CORRELATION))}
+     a10=Fx(z[1,1])-F(z[1,],r=CORRELATION)
+ for(i in 2:nrow(z)){a10=c(a10,Fx(z[i,1])-F(z[i,],r=CORRELATION))}
+     a01=Fy(z[1,2])-F(z[1,],r=CORRELATION)
+ for(i in 2:nrow(z)){a01=c(a01,Fy(z[i,2])-F(z[i,],r=CORRELATION))}
+     a00=1-a10-a01-a11
+ -sum(((Y[,1]==1)&(Y[,2]==1))*log(a11) +
+     ((Y[,1]==0)&(Y[,2]==1))*log(a01) +
+     ((Y[,1]==1)&(Y[,2]==0))*log(a10) +
+     ((Y[,1]==0)&(Y[,2]==0))*log(a00) )
+ }
>
> beta0=c(0,1,1,1,1,1,1)
> (OPT=optim(fn=logVgen,par=beta0,method="BFGS")$par)
[1]  0.52613820 -2.45172059  2.24196154  4.90863292 -4.53994592  0.01600769
[7]  0.10614807
There were 23 warnings (use warnings() to see them)

E.g.

> library(copula)
> F=function(x,r){pcopula(pnorm(x),
+                claytonCopula(2, r))}
> Fx=function(x1){F(c(x1,1e40),0)}
> Fy=function(x2){Fx(x2)}
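Note that the snippet above relies on an older interface of the copula package. As a hedged sketch (not from the original post), using the current pCopula interface, one could for instance combine logistic margins with a Gumbel copula, two of the alternatives mentioned earlier,

library(copula)
# joint and marginal cdfs of the latent variables: logistic margins,
# coupled by a Gumbel copula with parameter r (which must be >= 1)
F=function(x,r){pCopula(plogis(x),gumbelCopula(r,dim=2))}
Fx=function(x1){plogis(x1)}
Fy=function(x2){plogis(x2)}

With those definitions, logVgen can be reused as is, keeping in mind that the first component of the parameter vector is now the Gumbel parameter, so the optimization should be constrained (e.g. method="L-BFGS-B" with a lower bound of 1 on that component).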
  • An application to school tests

Consider the following dataset,

hsb2=read.table("http://freakonometrics.free.fr/hsb2.csv",
        header=TRUE, sep=",")
math_male=hsb2$math[hsb2$female==0]
write_male=hsb2$write[hsb2$female==0]
math_female=hsb2$math[hsb2$female==1]
write_female=hsb2$write[hsb2$female==1]
plot(math_female, write_female, type="p",
     pch=19,col="red",xlab="maths",ylab="writing",cex=.8)
points(math_male, write_male, cex=1.2, col="blue")

Here we plot maths versus writing scores, with girls in red and boys in blue. The variables in the dataset are

  female :
    0: male
    1: female
  race :
    1: hispanic
    2: asian
    3: african-amer
    4: white
  ses :
    1: low
    2: middle
    3: high
  schtyp : type of school
    1: public
    2: private
  prog : type of program
    1: general
    2: academic
    3: vocation
  read : reading score
  write : writing score
  math : math score
  science : science score
  socst : social studies score

We can try to understand the correlation between math and writing skills. Covariates can be the sex of the child, and his or her reading skills. The question is then: are students who are good in maths and in writing simply students who can read well?

Here the code is simply

> W=hsb2$write>=50
> M=hsb2$math>=50
> base=data.frame(Y1=W,Y2=M,
+             X1=hsb2$female,X2=hsb2$read)
>
> library(Zelig)
> REG= zelig(list(mu1=Y1~X1+X2,
+             mu2=Y2~X1+X2,
+     rho=~1),
+     model="bprobit",data=base)
> summary(REG)
 
Call:
zelig(formula = list(mu1 = Y1 ~ X1 + X2, mu2 = Y2 ~ X1 + X2,
    rho = ~1), model = "bprobit", data = base)
 
Pearson Residuals:
                Min        1Q  Median      3Q    Max
probit(mu1) -4.7518 -0.502594 0.15038 0.53038 1.8592
probit(mu2) -3.4243 -0.653537 0.23673 0.67011 2.6072
rhobit(rho) -4.9821  0.010481 0.13500 0.40776 2.9171
 
Coefficients:
                  Value Std. Error  t value
(Intercept):1 -5.484711   0.787101 -6.96825
(Intercept):2 -4.061384   0.633781 -6.40818
(Intercept):3  1.332187   0.322175  4.13497
X1:1           1.125924   0.233550  4.82092
X1:2           0.167258   0.202498  0.82598
X2:1           0.103997   0.014662  7.09286
X2:2           0.082739   0.012026  6.88017
 
Number of linear predictors:  3
 
Names of linear predictors: probit(mu1), probit(mu2), rhobit(rho)
 
Dispersion Parameter for binom2.rho family:   1
 
Residual Deviance: 364.51 on 593 degrees of freedom
 
Log-likelihood: -182.255 on 593 degrees of freedom
 
Number of Iterations: 3
> (exp(summary(REG)@coef3[3])-1)/(exp(summary(REG)@coef3[3])+1)
[1] 0.5824045

with a remaining correlation among residuals of 0.58. So the sex of the student and his or her reading skill alone cannot explain the correlation between maths and writing skills. With our previous code, we get

> beta0=c((exp(summary(REG)@coef3[3])-1)/(exp(summary(REG)@coef3[3])+1),
+      summary(REG)@coef3[c(1:2,4:7),1])
> beta0
              (Intercept):1 (Intercept):2          X1:1          X1:2
0.58240446   -5.48471133   -4.06138412    1.12592427    0.16725842
X2:1          X2:2
0.10399668    0.08273879
> (OPT=optim(fn=logV,par=beta0,method="BFGS")$par)
(Intercept):1 (Intercept):2          X1:1          X1:2
0.5824045    -5.4847113    -4.0613841     1.1259243     0.1672584
X2:1          X2:2
0.1039967     0.0827388

i.e. we obtain (almost) exactly the same estimators. But here I used the estimators given by R as starting values for the optimization procedure. If we change them, hopefully the maximum likelihood estimator is robust to that choice,

> (OPT=optim(fn=logV,par=beta0/2,method="BFGS")$par)
              (Intercept):1 (Intercept):2          X1:1          X1:2
   0.58233360   -5.49428984   -4.06839571    1.12696594    0.16760347
         X2:1          X2:2
   0.10417767    0.08287409
There were 12 warnings (use warnings() to see them)

So once again, it is possible to optimize numerically a likelihood function, and it works.
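As a side note (a hedged sketch, not in the original post), the numerical approach also gives approximate standard errors almost for free, by asking optim() to return the Hessian of the negative log-likelihood and inverting it,

OPT=optim(fn=logV,par=beta0,method="BFGS",hessian=TRUE)
OPT$par                          # the estimators, as above
sqrt(diag(solve(OPT$hessian)))   # approximate standard errors

and those values should be close to the standard errors reported by zelig().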


Dropping bombs and the scale factor

For those who took my previous post (online here) a little too seriously (yes, there are some), two small clarifications,

  • first of all, it is not nice to drop bombs on people you do not know (yes, I received an email on that point),
  • second, the study I mentioned is getting old: fifty years ago, statistical tools were limited…

In fact, as one might expect, the notion of scale is fundamental. Seen from far away, the strikes are not random at all! They are even relatively accurate.

 

 

It is when we look at a much finer scale that randomness appears.

The previous data were taken from a Google map from which I extracted the latitudes and longitudes (here), and that database is unfortunately too incomplete to be useful. But other sources exist. For instance, Charles Franklin put an interesting study online (), and he kindly sent me the data (which he had previously entered by hand). As in the previous post, we count the points in a relatively fine grid (here, cells 1 km long and 1 km wide) centred on London. The raw data are on the left, and the smoothed data on the right,

 

 

The assumption of i.i.d. observations, implicitly used in the previous study (but which could not be discussed for lack of data), does not really seem to hold. And we see that the regions with the highest concentration of bombs are very close to one another, which reflects the idea that the strikes were not completely random. The finer the scale, the stronger that impression (at the street level, the probability of being hit by a bomb is the same for all buildings; but not at the level of the region).
But to go further, we would need more advanced notions than those covered in class, and once again, the goal was simply to give a (real) application of a goodness-of-fit test based on the chi-square statistic.

Goodness-of-fit tests and bomb drops

Before tomorrow's applications in class, a short post (almost topical) on an application of the chi-square test mentioned last Friday: goodness of fit.

The problem is the following: a country is being bombed. And its leaders must wonder whether specific targets are being aimed at (in which case it may be legitimate to move them) or whether, on the contrary, the strikes are random. When I say that targets are being aimed at, I also mean that the pilots know how to aim. Because the pilots have (I think, or I hope) a flight plan to follow, with targets to hit

What we will really ask is whether the pilots know how to aim, or manage to. In short, the problem can be modelled by assuming that the bomb drops follow (globally) a Poisson process. Locally, if the strikes are random, we should observe draws from Poisson distributions. That, at least, is the theory that R. D. Clarke (then an actuary at the Prudential Assurance Company) used during the Second World War, on the bombing of London (online here). This (true) story is picked up by Thomas Pynchon in Gravity's Rainbow (online here)

(the example even appears in Feller). In short, the question is not what the parameter of the Poisson distribution would be, but whether the Poisson distribution is appropriate or not. This is what we call a goodness-of-fit problem.

  • Chi-square test and goodness of fit

So far, we assumed that the observations followed some given distribution, e.g. a Poisson distribution http://freakonometrics.hypotheses.org/files/2015/12/chi2-16.gif, and we wanted to test a hypothesis of the form

http://freakonometrics.hypotheses.org/files/2015/12/chi2-13.gif

versus

http://freakonometrics.hypotheses.org/files/2015/12/chi2-14.gif

Here, we will rather use a test on a multinomial distribution, of the form

http://freakonometrics.hypotheses.org/files/2015/12/chi2-01.gif

versus

http://freakonometrics.hypotheses.org/files/2015/12/chi2-02.gif

The null hypothesis is here a vector equality,

http://freakonometrics.hypotheses.org/files/2015/12/chi2-03.gif

or equivalently

http://freakonometrics.hypotheses.org/files/2015/12/chi2-56.gif

In the case of a goodness-of-fit test for a Poisson distribution, if we assume that the observations follow a Poisson distribution http://freakonometrics.hypotheses.org/files/2015/12/chi2-09.gif, we will use a test on a multinomial distribution, with

http://freakonometrics.hypotheses.org/files/2015/12/chi2-08.gif

The problem is that, since we restrict ourselves to a vector of finite length, the vector will not lie in the simplex (cf. here). So, classically, for the last value, we retain a hypothesis of the form

http://freakonometrics.hypotheses.org/files/2015/12/chi2-10.gif

The test is then based on Pearson's statistic

http://freakonometrics.hypotheses.org/files/2015/12/chi2-11.gif

which follows (if the observations indeed come from a Poisson distribution http://freakonometrics.hypotheses.org/files/2015/12/chi2-09.gif) a http://freakonometrics.hypotheses.org/files/2015/12/chi2-12.gif distribution.
Recall that this statistic can also be written

http://freakonometrics.hypotheses.org/files/2015/12/chi2-21.gif

 

  • Application: studying the accuracy of bomb drops

During the Second World War, R. D. Clarke studied the impacts of the V1 and V2 bombs that fell in a 144 km² area of South London (the original paper, published after the war in 1946, is online here). He divided this area into 576 cells of 0.25 km² and counted the number of impacts in each cell. More precisely, he obtained the following distribution

number of impacts per cell     0    1    2    3   4   5 and more
frequency (number of cells)    229  211  93   35  7   1

(actually, we know that the "5 and more" corresponds to 7, since we know there were 537 bombs over the 576 cells). Before diving in head first, let us think a little about the type of distribution we could use. To do so, denote by http://freakonometrics.hypotheses.org/files/2015/12/Nb.gif the number of points falling in a set http://freakonometrics.hypotheses.org/files/2015/12/asubb.gif (where http://freakonometrics.hypotheses.org/files/2015/12/calA.gif denotes the overall region). If we assume that a random number http://freakonometrics.hypotheses.org/files/2015/12/Npois.gif of points are dropped at random in http://freakonometrics.hypotheses.org/files/2015/12/calA.gif, and that http://freakonometrics.hypotheses.org/files/2015/12/Npois.gif follows a Poisson distribution http://freakonometrics.hypotheses.org/files/2015/12/lambda.gif, then http://freakonometrics.hypotheses.org/files/2015/12/Nb.gif follows a Poisson distribution with parameter

http://freakonometrics.hypotheses.org/files/2015/12/ratio-aire.gif

In short, if we look at several regions http://freakonometrics.hypotheses.org/files/2015/12/calB.gif (of the same size, possibly forming a partition of http://freakonometrics.hypotheses.org/files/2015/12/calA.gif so as to keep all the observations), and if we observe a Poisson distribution, then within the region http://freakonometrics.hypotheses.org/files/2015/12/calA.gif the strikes were made at random; a small simulation illustrating this is sketched below.
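A minimal simulation (hedged, not in the original post): drop a Poisson number of points uniformly over a square, and the counts per cell of a regular grid behave like Poisson draws,

set.seed(1)
lambda=537
N=rpois(1,lambda)                 # total number of bombs
x=runif(N); y=runif(N)            # dropped uniformly over the unit square
k=24                              # a 24 x 24 grid, i.e. 576 cells
cell=factor(trunc(x*k)*k+trunc(y*k),levels=0:(k^2-1))
counts=as.numeric(table(cell))    # number of impacts per cell
c(mean(counts),var(counts))       # both close to lambda/k^2
table(counts)                     # distribution of the number of impacts per cell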

> (n=c(229,211,93,35,7,0,0,1))
[1] 229 211  93  35   7   0   0   1
> y=0:7
> (lambda=sum(n*y)/sum(n))
[1] 0.9322917
> prob=dpois(y,lambda)
> freq.theo=sum(n)*prob
> freq.emp =n
> cbind(y,trunc(freq.theo),freq.emp)
     y     freq.emp
[1,] 0 226      229
[2,] 1 211      211
[3,] 2  98       93
[4,] 3  30       35
[5,] 4   7        7
[6,] 5   1        0
[7,] 6   0        0
[8,] 7   0        1

We can start by looking at the log-likelihood

> logvrais=function(L){sum(log(dpois(y,L))*n)}
> param=seq(.5,1.5,by=.025)
> LV=sapply(param,logvrais)
> plot(param,LV,type="b",col="blue")
http://freakonometrics.hypotheses.org/files/2015/12/logvrais1-bombes.png

The chi-square statistic, for its part, looks like this

> chi2=function(L)sum(n)*sum(((n/sum(n)-dpois(y,L))^2)/dpois(y,L))
> C2=sapply(param,chi2)
> plot(param,C2,type="b",col="red")
http://freakonometrics.hypotheses.org/files/2015/12/chi2-bombes.png

Well, the problem is that, by truncating the vector (we assume that the maximum number of impacts is 7), the probabilities do not sum exactly to one. We can group the high counts into a single class (as Clarke actually does in his original paper).

> (n=c(229,211,93,35,7,1))
[1] 229 211  93  35   7   1
> y=0:4
> prob=c(dpois(y,lambda),1-ppois(4,lambda))
> freq.theo=sum(n)*prob
> freq.emp =n
> (Q=sum(((freq.theo-freq.emp)^2)/freq.theo))
[1] 1.169155
> 1-pchisq(Q,length(y)-1)
[1] 0.8831505

We recover the quantities mentioned in Clarke's paper. The test statistic is about 1.17 and the associated p-value is around 88%. We therefore accept the Poisson assumption,

> chi2=function(L){
+ sum(n)*sum(((n/sum(n)-c(dpois(y,L),1-ppois(4,L)))^2)/
+ c(dpois(y,L),1-ppois(4,L)) )}
> C2=sapply(param,chi2)
> plot(param,C2,type="b",col="red")

The value of the chi-square statistic, as a function of the parameter of the Poisson distribution, is shown below,

http://freakonometrics.hypotheses.org/files/2015/12/chi2-bombes-v2.png

and the horizontal line is the threshold of the critical region (below it, we accept the fit of a Poisson distribution),

> abline(h=qchisq(.95,length(y)-1),lty=2)

There are several functions in R that can do similar things,

> library(vcd)
> (n=c(229,211,93,35,7,0,0,1))
[1] 229 211  93  35   7   0   0   1
> y=0:7
> nsim=c(rep(y[1],n[1]),rep(y[2],n[2]),
+        rep(y[3],n[3]),rep(y[4],n[4]),
+        rep(y[5],n[5]),rep(y[6],n[6]),
+        rep(y[7],n[7]),rep(y[8],n[8]))
> gf=goodfit(nsim,type="poisson",method="ML")
> summary(gf)
 
 Goodness-of-fit test for poisson distribution
 
 X^2 df P(> X^2)
Likelihood Ratio 4.007784 3 0.2606249
 
> gf=goodfit(nsim,type="poisson",method="MinChisq")
> summary(gf)
 
 Goodness-of-fit test for poisson distribution
 
 X^2 df P(> X^2)
Pearson 1.275499 3 0.7349592

In short, whatever method is used, we always accept the hypothesis of a Poisson distribution. In other words, the bombs were dropped more or less at random over London…

And if you think about it for a moment, from a game-theory point of view, it is indeed an optimal strategy…


Cochrane, Pearson and the chi-square test

In class, we continued with the multinomial distribution, and the chi-square test. I wanted to write a short post to recap the various points, and to show a numerical application (we will come back in detail on Wednesday to applications of the tools seen so far).

  • Inference with the multinomial distribution

Assume that a variable http://freakonometrics.blog.free.fr/public/maths/coch-01.gif can take http://freakonometrics.blog.free.fr/public/maths/coch-02.gif categories, denoted http://freakonometrics.blog.free.fr/public/maths/coch-03.gif, with http://freakonometrics.blog.free.fr/public/maths/coch-04.gif. We will set

http://freakonometrics.blog.free.fr/public/maths/coch-05.gif

noting that http://freakonometrics.blog.free.fr/public/maths/coch-06.gif belongs to the simplex of http://freakonometrics.blog.free.fr/public/maths/coch-07.gif, in the sense that

http://freakonometrics.blog.free.fr/public/maths/coch-08.gif

We saw that the maximum likelihood estimator is obtained with a bit of constrained optimization, and that

http://freakonometrics.blog.free.fr/public/maths/coch-10.gif

(using the notation from the course). We had shown that

http://freakonometrics.blog.free.fr/public/maths/coch-11.gif

and we saw that

http://freakonometrics.blog.free.fr/public/maths/coch-13.gif

(which can be recovered by introducing the binomial variable http://freakonometrics.blog.free.fr/public/maths/coch-16.gif). More generally, we will finish the computations establishing that, for http://freakonometrics.blog.free.fr/public/maths/coch-17.gif,

http://freakonometrics.blog.free.fr/public/maths/coch-18.gif

(which gives the variance-covariance matrix of http://freakonometrics.blog.free.fr/public/maths/coch-20.gif).
Using the central limit theorem, we can show that

http://freakonometrics.blog.free.fr/public/maths/coch-23.gif

In a multivariate form, we will write

http://freakonometrics.blog.free.fr/public/maths/coch-25.gif
http://freakonometrics.blog.free.fr/public/maths/coch-26.gif with http://freakonometrics.blog.free.fr/public/maths/coch-27.gif and, for http://freakonometrics.blog.free.fr/public/maths/coch-17.gif, http://freakonometrics.blog.free.fr/public/maths/coch-28.gif.

Note that the sum of the i-th column of http://freakonometrics.blog.free.fr/public/maths/coch-29.gif is http://freakonometrics.blog.free.fr/public/maths/coch-30.gif.
Hence, http://freakonometrics.blog.free.fr/public/maths/coch-29.gif is not invertible (this comes from the fact that our parameter belongs to the simplex).
To get around this, the first idea is to use a bit of linear algebra. A matrix http://freakonometrics.blog.free.fr/public/maths/coch-31.gif is a projection matrix if it is idempotent, i.e. http://freakonometrics.blog.free.fr/public/maths/coch-32.gif. Its eigenvalues are then 0 or 1, and if http://freakonometrics.blog.free.fr/public/maths/coch-35.gif is the number of eigenvalues equal to 1, and if http://freakonometrics.blog.free.fr/public/maths/coch-36.gif, then http://freakonometrics.blog.free.fr/public/maths/coch-37.gif.
Set http://freakonometrics.blog.free.fr/public/maths/coch-38.gif. Then

http://freakonometrics.blog.free.fr/public/maths/coch-39.gif

Given the form of http://freakonometrics.blog.free.fr/public/maths/coch-29.gif,

http://freakonometrics.blog.free.fr/public/maths/coch-40.gif

which is a projection matrix whose trace is http://freakonometrics.blog.free.fr/public/maths/coch-41.gif (which is also the number of eigenvalues equal to 1). Hence

http://freakonometrics.blog.free.fr/public/maths/coch-42.gif
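A quick numerical illustration (hedged, not in the original post) of those properties of projection matrices,

set.seed(1)
X=matrix(rnorm(20),10,2)
H=X%*%solve(t(X)%*%X)%*%t(X)   # orthogonal projection onto the span of X
max(abs(H%*%H-H))              # numerically zero: H is idempotent
sum(diag(H))                   # trace = 2 = rank of the projection
round(eigen(H)$values,6)       # eigenvalues are only 0s and 1s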

The chi-square test will be based on the fact that, asymptotically,

http://freakonometrics.blog.free.fr/public/maths/coch-44.gif
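A small simulation (hedged, not in the original post) can illustrate that asymptotic result: for a multinomial sample with known probabilities, Pearson's statistic behaves like a chi-square with k-1 degrees of freedom,

set.seed(1)
p=c(.2,.3,.5); n=1000
Q=replicate(5000,{N=rmultinom(1,n,p); sum((N-n*p)^2/(n*p))})
mean(Q>qchisq(.95,length(p)-1))   # empirical rejection rate, close to 5%
mean(Q)                           # close to k-1=2, the number of degrees of freedom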

Another idea is to construct http://freakonometrics.blog.free.fr/public/maths/coch-41.gif random variables that are independent. But let us rather look at applications of this test, in particular as a test of independence.
For the record, Frank Yates proposed a correction "for continuity", here. The statistic considered is then

http://freakonometrics.blog.free.fr/public/maths/coch-46.gif
  • Back to Cochrane's theorem

Let http://freakonometrics.blog.free.fr/public/maths/coch-50.gif be of dimension http://freakonometrics.blog.free.fr/public/maths/coch-51.gif. Set http://freakonometrics.blog.free.fr/public/maths/coch-59.gif, for http://freakonometrics.blog.free.fr/public/maths/coch-04.gif, and denote by http://freakonometrics.blog.free.fr/public/maths/coch-60.gif the rank of http://freakonometrics.blog.free.fr/public/maths/coch-62.gif, the http://freakonometrics.blog.free.fr/public/maths/coch-62.gif being assumed positive semidefinite. Then the following are equivalent:

  •  http://freakonometrics.blog.free.fr/public/maths/coch-63.gif
  • http://freakonometrics.blog.free.fr/public/maths/coch-64.gif for http://freakonometrics.blog.free.fr/public/maths/coch-04.gif,
  • the http://freakonometrics.blog.free.fr/public/maths/coch-65.gif are independent random variables.

The http://freakonometrics.blog.free.fr/public/maths/coch-65.gif can be interpreted as (Euclidean) lengths of projections of a Gaussian vector onto orthogonal subspaces (of respective dimensions http://freakonometrics.blog.free.fr/public/maths/coch-60.gif). If these vectors are independent, and follow chi-square distributions with http://freakonometrics.blog.free.fr/public/maths/coch-60.gif degrees of freedom, with http://freakonometrics.blog.free.fr/public/maths/coch-63.gif, then the subspaces are orthogonal and complementary. One can see it as a kind of extension of Pythagoras' theorem, where orthogonal vectors are replaced by independent chi-square variables, and the sum of squared lengths becomes the sum of the degrees of freedom.
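A hedged numerical illustration (not in the original post), with the classical decomposition of the sum of squares of a standard Gaussian sample: the two quadratic forms below are independent and chi-square distributed, with 1 and n-1 degrees of freedom respectively,

set.seed(1)
n=10
S=replicate(10000,{x=rnorm(n); c(n*mean(x)^2,sum((x-mean(x))^2))})
cor(S[1,],S[2,])             # close to 0: the two terms are independent
c(mean(S[1,]),mean(S[2,]))   # close to 1 and n-1, the degrees of freedom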

  • Application as a test of independence

Consider two variables http://freakonometrics.blog.free.fr/public/maths/coch-66.gif which can each take two values (say, two binomial variables). We then face a multinomial distribution with 4 categories

  • http://freakonometrics.blog.free.fr/public/maths/coch-79.gif with probability http://freakonometrics.blog.free.fr/public/maths/coch-70.gif
  • http://freakonometrics.blog.free.fr/public/maths/coch-78.gif with probability http://freakonometrics.blog.free.fr/public/maths/coch-73.gif
  • http://freakonometrics.blog.free.fr/public/maths/10gif.gif with probability http://freakonometrics.blog.free.fr/public/maths/coch-74.gif
  • http://freakonometrics.blog.free.fr/public/maths/coch-77.gif with probability http://freakonometrics.blog.free.fr/public/maths/coch-75.gif

A test of independence amounts to testing whether the multinomial distribution can be written

http://freakonometrics.blog.free.fr/public/maths/chi2-ab.gif
http://freakonometrics.blog.free.fr/public/maths/chi_ab2.gif
http://freakonometrics.blog.free.fr/public/maths/chi_ab3.gif
http://freakonometrics.blog.free.fr/public/maths/chi_ab4.gif

for vectors http://freakonometrics.blog.free.fr/public/maths/chi-a.gif and http://freakonometrics.blog.free.fr/public/maths/chi-b.gif such that http://freakonometrics.blog.free.fr/public/maths/chi-a2.gif and http://freakonometrics.blog.free.fr/public/maths/chi-b2.gif. We then have http://freakonometrics.blog.free.fr/public/maths/chi212121.gif constraints on the parameters. These two additional constraints mean that the test statistic becomes

http://freakonometrics.blog.free.fr/public/maths/CHI-INDEP.gif

which asymptotically follows a http://freakonometrics.blog.free.fr/public/maths/CHI1.gif distribution, i.e. http://freakonometrics.blog.free.fr/public/maths/CHI12.gif, by Cochrane's theorem.

  • Death sentences for murder convictions in Florida, 1976-1987

broken down by the "race" of the murderer and of the victim,

  • murderer of "white race" and victim of "white race": 53 sentenced to death, 414 not sentenced to death
  • murderer of "white race" and victim of "black race": 0 sentenced to death, 16 not sentenced to death
  • murderer of "black race" and victim of "white race": 11 sentenced to death, 37 not sentenced to death
  • murderer of "black race" and victim of "black race": 4 sentenced to death, 139 not sentenced to death

We can then run independence tests, for instance between the "race" of the murderer and the verdict.

MEURTRIER=matrix(c(53+0,11+4,414+16,139+37),2,2)
VICTIME  =matrix(c(53+11,0+4,414+37,139+16),2,2)
n=sum(MEURTRIER)
(PROBMEUTR=MEURTRIER/n)
           [,1]      [,2]
[1,] 0.07863501 0.6379822
[2,] 0.02225519 0.2611276

SL=rowSums(PROBMEUTR)
SC=colSums(PROBMEUTR)
(MEUTRINDEP=outer(SL, SC, "*"))
           [,1]      [,2]
[1,] 0.07229966 0.6443176
[2,] 0.02859055 0.2547922

(Q=n*sum((PROBMEUTR - MEUTRINDEP)^2/MEUTRINDEP))
[1] 1.468519

(Qcorrec=n*sum((abs(PROBMEUTR - MEUTRINDEP)-.5/n)^2/MEUTRINDEP))
[1] 1.144741

pchisq(Qcorrec, (2-1)*(2-1), lower.tail = FALSE)
[1] 0.2846528

qchisq(.95, (2-1)*(2-1))
[1] 3.841459

chisq.test(MEURTRIER)

Pearson's Chi-squared test with Yates' continuity correction

data:  MEURTRIER 
X-squared = 1.1447, df = 1, p-value = 0.2847

Since the p-value (around 28%) is well above 5%, we do not reject the hypothesis of independence between the "race" of the murderer and the verdict.

Circular or spherical data, and density estimation

A few years ago, while I was working on kernel-based density estimation for distributions with compact support (like copulas), I went through a series of papers on circular distributions. At the time, I thought it was something for mathematicians working on weird spaces… but during the past weeks, I saw several potential applications of those estimators.

  • circular data density estimation

Consider the density of an angle, say, i.e. a function http://freakonometrics.hypotheses.org/files/2015/12/circ-01.gif such that

http://freakonometrics.hypotheses.org/files/2015/12/circ-02.gif

with a circularity constraint, i.e. http://freakonometrics.hypotheses.org/files/2015/12/circ-03.gif. It can be seen as invariance under rotation.
von Mises proposed a parametric model in 1918 (see here or there), assuming that

http://freakonometrics.hypotheses.org/files/2015/12/circ-04.gif

where http://freakonometrics.hypotheses.org/files/2015/12/circ-05.gif is the modified Bessel function of the first kind of order 0,

http://freakonometrics.hypotheses.org/files/2015/12/circ-06.gif

(which is simply a normalization constant). There are two parameters here, http://freakonometrics.hypotheses.org/files/2015/12/circ-07.gif (a concentration parameter) and mu (a direction parameter).
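As a hedged sketch (not in the original post), the von Mises density can be coded directly with base R, using besselI for the normalizing constant (the dvonmises function of the circular package should produce the same curve),

dvm=function(x,mu,kappa){exp(kappa*cos(x-mu))/(2*pi*besselI(kappa,0))}
x=seq(0,2*pi,length=200)
plot(x,dvm(x,mu=pi,kappa=2),type="l",xlab="angle",ylab="density")
integrate(dvm,0,2*pi,mu=pi,kappa=2)   # checks that the density integrates to 1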
From a series of observed angles http://freakonometrics.hypotheses.org/files/2015/12/circ-08.gif, the maximum likelihood estimator of kappa is the solution of

http://freakonometrics.hypotheses.org/files/2015/12/circ-09.gif

where

http://freakonometrics.hypotheses.org/files/2015/12/circ-10.gif

and

http://freakonometrics.hypotheses.org/files/2015/12/circ-11.gif

and where http://freakonometrics.hypotheses.org/files/2015/12/circ-12.gif, those functions being modified Bessel functions. That estimator is biased, but it is possible to improve it (see here or there). This can be done easily in R (Jeff Gill – here – actually used that package in several applications). But I am not a big fan of that technique…
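Still, the computation itself only requires base R; a hedged sketch (on simulated angles, and without any bias correction) could look like

set.seed(1)
theta=rnorm(200,pi,.5)%%(2*pi)        # simulated angles, in [0,2*pi)
C=mean(cos(theta)); S=mean(sin(theta))
R=sqrt(C^2+S^2)                       # mean resultant length
A=function(k){besselI(k,1,expon.scaled=TRUE)/besselI(k,0,expon.scaled=TRUE)}
(kappa.hat=uniroot(function(k){A(k)-R},c(1e-6,500))$root)
(mu.hat=atan2(S,C)%%(2*pi))           # estimated direction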

  • density estimation for hours on simulated data

A nice application is the estimation of the daily density of temporal events (e.g. phone calls, as we'll see later on, or email arrival times). Let http://freakonometrics.hypotheses.org/files/2015/12/circ-13.gif be the time (in hours) of the http://freakonometrics.hypotheses.org/files/2015/12/circ-14.gifth observation (the http://freakonometrics.hypotheses.org/files/2015/12/circ-14.gifth phone call received). Then set

http://freakonometrics.hypotheses.org/files/2015/12/circ-15.gif

The time is now seen as an angle. It is possible to consider the equivalent of a histogram,

set.seed(1)
library(circular)
X=rbeta(100,shape1=2,shape2=4)*24
Omega=2*pi*X/24
Omegat=2*pi*trunc(X)/24
H=circular(Omega,type="angle",units="radians",rotation="clock")
Ht=circular(Omegat,type="angle",units="radians",rotation="clock")
plot(Ht, stack=FALSE, shrink=1.3, cex=1.03,
axes=FALSE,tol=0.8,zero=c(rad(90)),bins=24,ylim=c(0,1))
points(Ht, rotation = "clock", zero =c(rad(90)),
col = "1", cex=1.03, stack=TRUE )

rose.diag(Ht-pi/2,bins=24,shrink=0.33,xlim=c(-2,2),ylim=c(-2,2),
axes=FALSE,prop=1.5)

or a kernel based estimation of the density (the gray line on the right).

circ.dens = density(Ht+3*pi/2,bw=20)
plot(Ht, stack=TRUE, shrink=.35, cex=0, sep=0.0,
axes=FALSE,tol=.8,zero=c(0),bins=24,
xlim=c(-2,2),ylim=c(-2,2), ticks=TRUE, tcl=.075)
lines(circ.dens, col="darkgrey", lwd=3)
text(0,0.8,"24", cex=2); text(0,-0.8,"12",cex=2);
text(0.8,0,"6",cex=2); text(-0.8,0,"18",cex=2)

The code looks rather simple. But I am not very comfortable using code that I do not completely understand. So I wrote my own. The first step was to get a graph similar to the one on the right, except that I prefer my own kernel-based estimator. The idea is that instead of estimating the density on http://freakonometrics.hypotheses.org/files/2015/12/Xi.gif, we estimate it on the replicated sample http://freakonometrics.hypotheses.org/files/2015/12/circular-density-3.gif. Then we multiply by 3 to get the density only on http://freakonometrics.hypotheses.org/files/2015/12/0-24.gif. For the bandwidth, I took the same one that we would have used on http://freakonometrics.hypotheses.org/files/2015/12/Xi.gif

The code is simply the following

U=seq(0,1,by=1/250)
O=U*2*pi
U12=seq(0,1,by=1/24)
O12=U12*2*pi
X=rbeta(100,shape1=2,shape2=4)*24
OM=2*pi*X/24
XL=c(X-24,X,X+24)
d=density(X)
d=density(XL,bw=d$bw,n=1500)
I=which((d$x>=6)&(d$x<=30))
Od=d$x[I]/24*2*pi-pi/2
Dd=d$y[I]/max(d$y)+1

plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
lines(Dd*cos(Od),-Dd*sin(Od),col="red",lwd=1.5)
text(.7,0,"6"); text(-.7,0,"18")
text(0,-.7,"12"); text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)

Note that it is possible to put more visual emphasis on hours with few phone calls, or with many (compared with a homogeneous Poisson process), e.g.

plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){
abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(2*cos(O12),2*sin(O12),1.1*cos(O12),1.1*sin(O12), col="light grey")
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
text(.7,0,"6")
text(-.7,0,"18")
text(0,-.7,"12")
text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)
AX=R*cos(Od);AY=-R*sin(Od)
BX=Dd*cos(Od);BY=-Dd*sin(Od)
COUL=rep("blue",length(AX))
COUL[R<Dd]="red"
CM=cm.colors(200)
a=trunc(100*Dd/R)
COUL=CM[a]
segments(AX,AY,BX,BY,col=COUL,lwd=2)
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2)

We get here those two graphs,

To be honest, I do not really like that representation – even if it looks nice. If we compare that circular representation to a more classical one (from 0:00 till 23:59 on the graph on the left, below), I have trouble interpreting the areas in blue and pink.

On the left, we compare two densities, so the area in pink is the same as the area in blue. But here, this is no longer the case: the area in pink is always larger than the area in blue. So it might help to see where there is a difference, but there is a scaling issue that we cannot discuss further… But let us see whether we can use that estimation technique for several problems.

  • density of wind direction

A standard application when studying angles is wind direction. For instance, in Montréal, it is possible to find hourly observations, starting in 1974 (we just need an R robot to pick up the information, but I will say more about that in another post, someday). Here, we directly observe an angle, so we can use code rather similar to the one used above to estimate the distribution of wind direction in Montréal; a sketch is given below.
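Here is a hedged sketch (the directions below are just a placeholder; the actual hourly wind directions for Montréal would have to be collected first), using the same replication trick on a 360-degree scale,

wind.dir=runif(500,0,360)                # placeholder for observed directions, in degrees
WL=c(wind.dir-360,wind.dir,wind.dir+360) # replicate the sample to handle circularity
d0=density(wind.dir)
d=density(WL,bw=d0$bw,n=1500)
I=which((d$x>=0)&(d$x<360))
theta=d$x[I]/360*2*pi
r=d$y[I]/max(d$y)+1                      # rescaled density, as in the code above
plot(r*cos(theta),-r*sin(theta),type="l",asp=1,axes=FALSE,xlab="",ylab="")
O=seq(0,2*pi,length=200)
lines(cos(O),-sin(O),lty=2)              # unit circle, as a reference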

Note that our estimate is consistent with several graphs that can be found on meteorological websites (e.g. the one above on the right, which was found here).

  • density of 911 phone calls

In a recent post (here) I wanted to check the "midnight crime" myth, using the hours of 911 phone calls in Montréal.

That was for all phone calls. But if we look more specifically, for burglaries, we have the distribution on the left, and for conflicts the one on the right

while for gun shots, we have the distribution on the left, and for "troubles" (basically people making too much noise at parties) or "noise" the one on the right. We clearly observe that gun shots occur a bit before midnight. See also here for another study, this time in NYC (thanks @PAC for the link).

  • density of earth temperatures, or earthquakes

Of course, it is also possible to work in higher dimensions. Before, we went from densities on http://freakonometrics.hypotheses.org/files/2015/12/circ-16.gif to densities on the unit circle http://freakonometrics.hypotheses.org/files/2015/12/circ-18.gif. Similarly, it is possible to go from http://freakonometrics.hypotheses.org/files/2015/12/circ-17.gif to the unit sphere http://freakonometrics.hypotheses.org/files/2015/12/circ-19.gif. A nice application is global climate studies,

The idea is that points on the far left above are extremely close to the ones on the far right. An application can be, e.g., earthquake occurrences. Data can be found here.

library(ks)
X=cbind(EQ$Longitude,EQ$Latitude)   # EQ: the earthquake dataset (longitude/latitude) mentioned above
Hpi1 = Hpi(x = X)
DX=kde(x = X, H = Hpi1)
library(maps)
map("world")
plot(DX,add=TRUE,col="red")
points(X,cex=.2,col="blue")
Y=rbind(cbind(X[,1],X[,2]),cbind(X[,1]+360,X[,2]),
cbind(X[,1]-360,X[,2]),cbind(X[,1],X[,2]+180),
cbind(X[,1]+360,X[,2]+180),cbind(X[,1]-360,X[,2]+180),
cbind(X[,1],X[,2]-180),cbind(X[,1]+360,X[,2]-180),
cbind(X[,1]-360,X[,2]-180))
DY=kde(x = Y, H = Hpi1)
plot(DY,add=TRUE,col="purple")

Without any correction, we get the red level curves. The purple ones include the correction.