Tag Archives: inference

INF7100, statistics

The second part of my lectures on data science, for the INF7100 course, will be devoted to statistics, univariate and multivariate. The outline will be the following:

  • 201: From Statistics to Data Science pdf video (14:24)
  • 211: Usual Functions in Statistics (cumulative distribution function, density, histogram) pdf video (28:37)
  • 221: Statistical Indicators: Central Value (mean) pdf video (32:56)
  • 222: Statistical Indicators: Dispersion (variance, inequalities) pdf video (22:21)
  • 223: Statistical Indicators: Approximations (normal approximation) pdf video (18:42)
  • 224: Statistical Indicators: Quantiles pdf video (24:54)
  • 231: Inference (Bayesian statistics) pdf video (39:33)
  • 241: Statistical Tests (1) (tests, significance, p-value) pdf video (43:41)
  • 242: Statistical Tests (2) (errors) pdf video (16:51)
  • 261: Bivariate Statistics pdf video (25:16)
  • 271: Multivariate Statistics: Projections pdf video (29:06)
  • 272: Multivariate Statistics: Clusters pdf video (32:21)
  • 281: Networks and Graphs pdf video (32:40)
  • 291: Time Series Data pdf video (29:01)

 

Re-parametrization and Maximum Likelihood

The maximum likelihood estimator is invariant in the sense that for any bijective function $g$, if $\widehat{\theta}$ is the maximum likelihood estimator of $\theta$, then $g(\widehat{\theta})$ is the maximum likelihood estimator of $g(\theta)$. Let $\mu=g(\theta)$; then $\theta=g^{-1}(\mu)$, and the likelihood written in terms of $\mu$ is $\widetilde{\mathcal{L}}(\mu)=\mathcal{L}(g^{-1}(\mu))$. And since $\widehat{\theta}$ is the maximum likelihood estimator of $\theta$,

$$\widetilde{\mathcal{L}}(g(\widehat{\theta}))=\mathcal{L}(g^{-1}(g(\widehat{\theta})))=\mathcal{L}(\widehat{\theta})\geq\mathcal{L}(g^{-1}(\mu))=\widetilde{\mathcal{L}}(\mu)\ \text{ for all }\mu,$$

hence $g(\widehat{\theta})$ is the maximum likelihood estimator of $g(\theta)$.

For instance, the Bernoulli distribution is $\mathcal{B}(p)$, with $p\in(0,1)$ and

$$\mathbb{P}(X=x)=p^x(1-p)^{1-x},\qquad x\in\{0,1\}.$$

Given a sample $\{x_1,\dots,x_n\}$, the likelihood is

$$\mathcal{L}(p)=\prod_{i=1}^n p^{x_i}(1-p)^{1-x_i}=p^{\sum x_i}(1-p)^{n-\sum x_i}.$$

The log-likelihood is then

$$\log\mathcal{L}(p)=\left(\sum x_i\right)\log p+\left(n-\sum x_i\right)\log(1-p),$$

with

$$\frac{\partial}{\partial p}\log\mathcal{L}(p)=\frac{\sum x_i}{p}-\frac{n-\sum x_i}{1-p}.$$

Thus, the first order condition

$$\frac{\partial}{\partial p}\log\mathcal{L}(p)=0$$

is satisfied when $\widehat{p}=\overline{x}=\frac{1}{n}\sum_{i=1}^n x_i$. In order to illustrate, consider the following data


> set.seed(1)
> X=sample(0:1,size=15,replace=TRUE)
> X
[1] 0 0 1 1 0 1 1 1 1 0 0 0 1 0 1

The (negative) log-likelihood is here


> loglik=function(p){
+ -sum(log(dbinom(X,size=1,prob=p)))
+ }

that we can visualize below


> u=seq(0,1,by=.025)
> v=-Vectorize(loglik)(u)
> plot(u,v,type="l",xlab="",ylab="")

From the calculations above, we know that the maximum likelihood estimator for $p$ is


> mean(X)
[1] 0.5333333

The numerical version is


> (opt=optim(.5,loglik))
$par
[1] 0.5333008

$value
[1] 10.36385

$counts
function gradient
20 NA

$convergence
[1] 0

$message
NULL

Somehow, we were lucky here, because we did not say that the optimization was on the interval $(0,1)$. Nevertheless, our estimator for the probability belongs to $(0,1)$. In order to ensure that the optimal value is in $(0,1)$, we can consider some constrained optimization routine


> constrOptim(.5, loglik, grad=NULL,ui=matrix(c(1,-1),2,1), ci=c(0,-1))
$par
[1] 0.5333008

$value
[1] 10.36385

$counts
function gradient
20 NA

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] 6.909277e-05

On the previous graph, we did – indeed – reach that maximum of the log-likelihood


> abline(v=opt$par,col="red")
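Since the problem here is one-dimensional, a simpler alternative (not used above, just a sketch) is R's optimise() function, which searches directly over a bounded interval,

> optimise(loglik, interval=c(0,1))

and the $minimum component of the output should be close to mean(X).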

An alternative is to consider the parameter $\theta=\log\frac{p}{1-p}$, i.e. $p=\frac{e^{\theta}}{1+e^{\theta}}$ (as in the exponential family). The log-likelihood is then

$$\log\mathcal{L}(\theta)=\left(\sum x_i\right)\theta-n\log(1+e^{\theta}),$$

since

$$\log p=\theta-\log(1+e^{\theta})\quad\text{and}\quad\log(1-p)=-\log(1+e^{\theta}).$$

Here

$$\frac{\partial}{\partial\theta}\log\mathcal{L}(\theta)=\sum x_i-n\frac{e^{\theta}}{1+e^{\theta}}.$$

Thus, the first order condition

$$\frac{\partial}{\partial\theta}\log\mathcal{L}(\theta)=0$$

is satisfied when

$$\frac{e^{\widehat{\theta}}}{1+e^{\widehat{\theta}}}=\frac{1}{n}\sum_{i=1}^n x_i=\overline{x},$$

i.e.

$$\widehat{\theta}=\log\frac{\overline{x}}{1-\overline{x}}.$$

From a numerical perspective, we have the same optimal value


> loglik=function(theta){
+ -sum(log(dbinom(X,size=1,prob=exp(theta)/(1+exp(theta)))))
+ }
> (opt=optim(0,loglik))
$par
[1] 0.1335938

$value
[1] 10.36385

$counts
function gradient
20 NA

$convergence
[1] 0

$message
NULL
> exp(opt$par)/(1+exp(opt$par))
[1] 0.5333489
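As a direct check of the invariance property, the closed-form estimator $\widehat{\theta}=\log(\overline{x}/(1-\overline{x}))$ derived above can be compared with the numerical optimum (a one-line sketch),

> log(mean(X)/(1-mean(X)))

which equals log(8/7), approximately 0.1335, close to opt$par up to the tolerance of the optimization routine.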

Modeling the Marginals and the Dependence separately

When introducing copulas, it is commonly admitted that copulas are interesting because they allow us to model the marginals and the dependence structure separately. The motivation is probably Sklar's theorem, which says that given some marginal cumulative distribution functions (say $F_1$ and $F_2$, in dimension 2) and a copula (denoted $C$), we can generate a multivariate cumulative distribution function with the marginals specified previously, using

$$F(x_1,x_2)=C(F_1(x_1),F_2(x_2)).$$

But this separability might be misleading. Consider the case of a fully parametric model,

$$F(x_1,x_2;\theta_1,\theta_2,\alpha)=C(F_1(x_1;\theta_1),F_2(x_2;\theta_2);\alpha).$$

Assume that those distributions are continuous, so that we can write the likelihood using densities,

$$f(x_1,x_2;\theta_1,\theta_2,\alpha)=f_1(x_1;\theta_1)\,f_2(x_2;\theta_2)\,c(F_1(x_1;\theta_1),F_2(x_2;\theta_2);\alpha),$$

and the log-likelihood is

$$\log\mathcal{L}(\theta_1,\theta_2,\alpha)=\sum_{i=1}^n\log f_1(x_{1,i};\theta_1)+\sum_{i=1}^n\log f_2(x_{2,i};\theta_2)+\sum_{i=1}^n\log c(F_1(x_{1,i};\theta_1),F_2(x_{2,i};\theta_2);\alpha).$$

The first part is the log-likelihood if we consider the first marginal (only). The second part is the log-likelihood if we consider the second marginal (only). If the two components are not independent (i.e. the copula density $c$ is not equal to 1 everywhere), the third part cannot be considered as null, and so, in a general context,

$$(\widehat{\theta}_1,\widehat{\theta}_2,\widehat{\alpha})\neq(\widetilde{\theta}_1,\widetilde{\theta}_2,\widetilde{\alpha}),$$

where

$$(\widehat{\theta}_1,\widehat{\theta}_2,\widehat{\alpha})=\underset{(\theta_1,\theta_2,\alpha)}{\text{argmax}}\;\log\mathcal{L}(\theta_1,\theta_2,\alpha),$$

while

$$\widetilde{\theta}_j=\underset{\theta_j}{\text{argmax}}\;\sum_{i=1}^n\log f_j(x_{j,i};\theta_j)\ \ (j=1,2)\qquad\text{and}\qquad\widetilde{\alpha}=\underset{\alpha}{\text{argmax}}\;\sum_{i=1}^n\log c(F_1(x_{1,i};\widetilde{\theta}_1),F_2(x_{2,i};\widetilde{\theta}_2);\alpha).$$

In order to illustrate this point, consider a bivariate lognormal distribution (obtained by taking the exponential of a Gaussian vector)

> mu1=1
> mu2=2
> MU=c(mu1,mu2)
> s1=1
> s2=sqrt(2)
> r=.8
> SIGMA=matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
> library(mnormt)
> set.seed(1)
> Z=exp(rmnorm(25,MU,SIGMA))

If we believe that marginals and correlations can be treated separately, we can start with marginal distributions.

> library(MASS)
> (p1=fitdistr(Z[,1],"lognormal"))
    meanlog      sdlog  
  1.1686652   0.9309119 
 (0.1861824) (0.1316508)
> (p2=fitdistr(Z[,2],"lognormal"))
    meanlog      sdlog  
  2.2181721   1.1684049 
 (0.2336810) (0.1652374)

Based on those marginal distributions, define $\widehat{u}_{1,i}=\widehat{F}_1(x_{1,i})$ and $\widehat{u}_{2,i}=\widehat{F}_2(x_{2,i})$, and consider the maximum likelihood estimator $\widetilde{\alpha}$ of the copula parameter, obtained from this pseudo sample,

$$\widetilde{\alpha}=\underset{\alpha}{\text{argmax}}\;\sum_{i=1}^n\log c(\widehat{u}_{1,i},\widehat{u}_{2,i};\alpha).$$

Numerically, we get (since we consider a Gaussian copula, which is the true copula generated here)

> library(copula)
> Gcop=normalCopula(.3,dim=2)
> U=cbind(plnorm(Z[,1],p1$estimate[1],p1$estimate[2]),
+ plnorm(Z[,2],p2$estimate[1],p2$estimate[2]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
      Estimate Std. Error z value Pr(>|z|)    
rho.1  0.86530    0.03799   22.77

But clearly, we did not treat the dependence structure separately, since the pseudo sample was a function of the estimated marginal distributions, $\widehat{u}_{j,i}=F_j(x_{j,i};\widetilde{\theta}_j)$.

If we consider a global optimization problem, then the results are different. The joint density can be derived (see e.g. Mostafa & Mahmoud (1964)): with $a_j=\frac{\log x_j-\mu_j}{\sigma_j}$,

$$f(x_1,x_2)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-r^2}\,x_1x_2}\exp\left(-\frac{a_1^2-2r\,a_1a_2+a_2^2}{2(1-r^2)}\right).$$

> dbivlognorm=function(x,theta){
+ mu1=theta[1]
+ mu2=theta[2]
+ s1=theta[3]
+ s2=theta[4]
+ r=theta[5]
+ a1=(log(x[,1])-mu1)/s1
+ a2=(log(x[,2])-mu2)/s2
+ d=1/(2*pi*s1*s2*sqrt(1-r^2))*1/(x[,1]*x[,2])*
+ exp(-(a1^2-2*r*a1*a2+a2^2)/(2*(1-r^2)))
+ return(d)
+ }
> LogLik=function(theta){
+ return(-sum(log(dbivlognorm(Z,theta))))}
> optim(par=c(0,0,1,1,0),fn=LogLik)$par
[1] 1.1655359 2.2159767 0.9237853 1.1610132 0.8645052

The difference is not huge, but still, the estimators are not identical. From a statistical point of view, we can hardly treat the marginals and the dependence structure separately.
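To make the comparison explicit, a small sketch putting the two sets of estimates side by side (the value 0.86530 of the copula parameter is simply copied from the fitCopula() output above),

> # order of the parameters: mu1, mu2, s1, s2, r (as in dbivlognorm)
> twostep=c(p1$estimate[1],p2$estimate[1],p1$estimate[2],p2$estimate[2],.86530)
> global=optim(par=c(0,0,1,1,0),fn=LogLik)$par
> round(rbind(twostep,global),4)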

Another point we should keep in mind is that the estimation of the copula parameter depends on the margins, not only through the parameters, but more deeply, through the choice of the marginal distributions (which might be misspecified). For instance, if we assume that the margins are exponentially distributed,

> (p1=fitdistr(Z[,1],"exponential"))
      rate   
  0.22288362 
 (0.04457672)
> (p2=fitdistr(Z[,2],"exponential"))
      rate   
  0.06543665 
 (0.01308733)

the estimation of the parameter of the Gaussian copula yields

> U=cbind(pexp(Z[,1],p1$estimate[1]),
+ pexp(Z[,2],p2$estimate[1]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
      Estimate Std. Error z value Pr(>|z|)    
rho.1  0.87421    0.03617   24.17   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The maximized loglikelihood is  15.4 
Optimization converged

The problem is that, since we misspecified the marginal distributions, our pseudo sample is defined on the unit interval, but there is no chance that we get uniform margins. If we generate a sample of size 500 with the code above,

> x <- U[,1]; y <- U[,2]
> xhist <- hist(x, plot=FALSE) ; yhist <- hist(y, plot=FALSE)
> top <- max(c(xhist$counts, yhist$counts)) 
> nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE) 
> par(mar=c(3,3,1,1)) 
> plot(x, y, xlab="", ylab="",col="red",xlim=0:1,ylim=0:1) 
> par(mar=c(0,3,1,1))
> barplot(xhist$counts, axes=FALSE, ylim=c(0, top), 
+ space=0,col="light green") 
> par(mar=c(3,0,1,1))
> barplot(yhist$counts, axes=FALSE, xlim=c(0, top), 
+ space=0, horiz=TRUE,col="light blue")

If we compare with the previous case, when the marginal distributions were well specified, we can clearly see that the dependence structure depends on the marginal distributions.
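Rather than just eyeballing the histograms, a quick sketch to quantify the departure from uniformity of the pseudo sample U obtained with the exponential margins, e.g. with a Kolmogorov-Smirnov test (ignoring the fact that the rate parameters were estimated),

> ks.test(U[,1],"punif")
> ks.test(U[,2],"punif")

and the same test applied to the lognormal-based pseudo sample can serve as a point of comparison.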

Inference for MA(q) Time Series

Yesterday, we saw how inference for $AR(p)$ time series was possible. I started with that one because it is actually the simpler case. For instance, we can use ordinary least squares. There might be some possible bias (see e.g. White (1961)), but asymptotically, estimators are fine (consistent, with asymptotic normality). But when the noise is (auto)correlated, then it is more complex. So, consider here some $MA(2)$ time series,

$$X_t=\varepsilon_t+\theta_1\varepsilon_{t-1}+\theta_2\varepsilon_{t-2},$$

for some white noise $(\varepsilon_t)$.

> theta1=.25
> theta2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=e[t]+theta1*e[t-1]+theta2*e[t-2]
> Z=Z[800:1000]
> plot(Z,type="l")

  • Using the empirical autocorrelations

The first idea might be to use the first two (empirical) autocorrelations (the two that are supposed to be, theoretically, non-null),

$$\rho(1)=\frac{\theta_1+\theta_1\theta_2}{1+\theta_1^2+\theta_2^2}\qquad\text{and}\qquad\rho(2)=\frac{\theta_2}{1+\theta_1^2+\theta_2^2},$$

with $\rho(h)=0$ when $h\geq 3$. We also have the following relationship for the variance of the process,

$$\text{Var}(X_t)=(1+\theta_1^2+\theta_2^2)\sigma^2.$$

With those three equations for three unknown parameters, $\theta_1$, $\theta_2$ and $\sigma$, we simply have to solve (numerically) that system of equations,

> v=c(as.numeric(acf(Z)$acf[2:3]),var(Z))
> v
[1] 0.1658760 0.3823053 1.6379498
> library(rootSolve)
> seteq=function(x){
+ F1=v[1]-(x[1]+x[1]*x[2])/(1+x[1]^2+x[2]^2)
+ F2=v[2]-(x[2])/(1+x[1]^2+x[2]^2)
+ F3=v[3]-(1+x[1]^2+x[2]^2)*x[3]^2
+ return(c(F1,F2,F3))}
> multiroot(f=seteq,start=c(.1,.1,1))
$root
[1] 0.1400579 0.4766699 1.1461636

$f.root
[1]  7.876355e-10  4.188458e-09 -2.839977e-09

$iter
[1] 5

$estim.precis
[1] 2.605357e-09

We are a bit far away from the true values used to generate our sample. And if we consider 1,000 samples (instead of only one), we still have a bias, and a large variance, for our three estimators,

http://freakonometrics.hypotheses.org/files/2014/01/Capture-d%E2%80%99e%CC%81cran-2014-01-29-a%CC%80-11.34.46.png
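As a sketch of how such a simulation study could be run, re-using the seteq() function defined above (the number of replications follows the text; the rest of the code is an assumption about the design of the experiment),

> theta1=.25; theta2=.7                    # true values
> nsim=1000                                # number of replications
> EST=matrix(NA,nsim,3)
> for(s in 1:nsim){
+ e=rnorm(1000)
+ Zs=rep(0,1000)
+ for(t in 3:1000) Zs[t]=e[t]+theta1*e[t-1]+theta2*e[t-2]
+ Zs=Zs[800:1000]
+ v=c(as.numeric(acf(Zs,plot=FALSE)$acf[2:3]),var(Zs))  # seteq() reads v
+ EST[s,]=multiroot(f=seteq,start=c(.1,.1,1))$root
+ }
> apply(EST,2,mean)                        # compare with .25, .7 and 1
> apply(EST,2,sd)                          # dispersion of the estimators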

  • Using least squares techniques

We can try something quite different here. The problem we have is that we do not observe the noise $(\varepsilon_t)$; we only observe our series $(X_t)$. But we can try to rebuild that series (call it $(\widehat{u}_t)$ since we are not sure it will be a proper reconstruction of the noise). As suggested in Box & Jenkins (1967), assume that the first two values are null, and then use

$$\widehat{u}_t=X_t-\theta_1\widehat{u}_{t-1}-\theta_2\widehat{u}_{t-2},\qquad t=3,\dots,n,$$

and then, we can use least squares techniques,

$$(\widehat{\theta}_1,\widehat{\theta}_2)=\underset{(\theta_1,\theta_2)}{\text{argmin}}\left\{\sum_{t=3}^n \widehat{u}_t^2\right\}.$$

The code will be

> V=function(p){
+ theta1=p[1]
+ theta2=p[2]
+ u=rep(0,length(Z))
+ for(t in 3:length(Z)) u[t]=Z[t]-theta1*u[t-1]-theta2*u[t-2]
+ return(sum(u^2))
+ }

If we try to minimize the sum of the squares of the residuals, we get

> optim(par=c(.1,.1),V)
$par
[1] 0.2751667 0.6723909

$value
[1] 225.8104

$counts
function gradient 
      77       NA 

$convergence
[1] 0

$message
NULL

which is close to the true values. Another good thing is that, if we compare the rebuilt noise with the true one (since we actually have it), then we get essentially the same vector,

> plot(e[800:1000],col="blue",type="l")
> theta1=0.2751667
> theta2=0.6723909
> u=rep(0,length(Z))
> for(t in 3:length(Z)) u[t]=Z[t]-theta1*u[t-1]-theta2*u[t-2]
> lines(1:201,u,col="red")

So far, so good. And if we look at 1,000 samples, we get

It looks like we have some bias here. And since the two estimators should be negatively correlated, one over-estimates, while the other one under-estimates.

  • Using the (global) maximum likelihood technique

And a final method might be to use the maximum likelihood technique (globally). Again, if we assume that the noise is Gaussian and i.i.d., then the vector $\boldsymbol{X}=(X_1,\cdots,X_n)$ is Gaussian, with a rather simple variance matrix (since a lot of its elements will be null): $\gamma(0)=(1+\theta_1^2+\theta_2^2)\sigma^2$, $\gamma(1)=(\theta_1+\theta_1\theta_2)\sigma^2$, $\gamma(2)=\theta_2\sigma^2$, and $\gamma(h)=0$ for $h\geq 3$,

> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ theta1=A[1];  theta2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=1
+ rho[2]=(theta1+theta1*theta2)/(1+theta1^2+theta2^2)
+ rho[3]=(theta2)/(1+theta1^2+theta2^2)
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1+theta1^2+theta2^2)*sigma^2
+ SIG=gamma0*SIG
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
$par
[1] 0.2584144 0.6826530 1.0669820

$value
[1] 298.8699

$counts
function gradient 
      86       NA 

$convergence
[1] 0

$message
NULL

Here, the values that maximize the likelihood are rather close to the ones used to generate our sample. And if we run this algorithm on 1,000 samples, we can see that those estimates are fine,

I could not find other ideas to estimate those parameters. I guess we could use the partial autocorrelation function, since we have relationships that can be related to the Yule-Walker equations for $AR(p)$ time series.
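As a side note (this is not one of the techniques above, just a quick sanity check), R's built-in arima() function fits the same $MA(2)$ model by maximum likelihood, so its output can be compared with the estimates obtained previously,

> arima(Z, order=c(0,0,2), include.mean=FALSE)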

Inference for AR(p) Time Series

Consider a (stationary) autoregressive process, say of order 2,

$$Y_t=\varphi_1 Y_{t-1}+\varphi_2 Y_{t-2}+\varepsilon_t$$

for some white noise $(\varepsilon_t)$ with variance $\sigma^2$. Here is some code to generate such a process,

> phi1=.25
> phi2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+e[t]
> Z=Z[800:1000]
> n=length(Z)
> plot(Z,type="l")

Here, we have to estimate two sets of parameters: the autoregressive coefficients $\varphi_1$ and $\varphi_2$, and the variance $\sigma^2$ of the innovation process $(\varepsilon_t)$. Several techniques can be used to estimate those parameters.

  • using least squares regression

A natural idea is to see here a regression model, since (if we consider a matrix formulation)

$$\begin{bmatrix}Y_3\\ \vdots\\ Y_n\end{bmatrix}=\begin{bmatrix}Y_2 & Y_1\\ \vdots & \vdots\\ Y_{n-1} & Y_{n-2}\end{bmatrix}\begin{bmatrix}\varphi_1\\ \varphi_2\end{bmatrix}+\begin{bmatrix}\varepsilon_3\\ \vdots\\ \varepsilon_n\end{bmatrix}.$$

Here we can run (conditional) ordinary least squares estimation,

> base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
> regression=lm(Y~0+X1+X2,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X1 + X2, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.0268 -0.7063  0.1065  0.6925  3.2566 

Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1  0.23400    0.05463   4.283 2.88e-05 ***
X2  0.62863    0.05476  11.479  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.062 on 197 degrees of freedom
Multiple R-squared:  0.6349,	Adjusted R-squared:  0.6312 
F-statistic: 171.3 on 2 and 197 DF,  p-value: < 2.2e-16

so we get the following estimators for the autoregressive coefficients, and for the volatility of the noise

> regression$coefficients
       X1        X2 
0.2339959 0.6286321 
> summary(regression)$sigma
[1] 1.061839

  • using Yule-Walker equations

As we've seen in class, we can easily get the following equations for the autocovariance functions,

$$\left\{\begin{array}{l}\gamma(1)=\varphi_1\gamma(0)+\varphi_2\gamma(1)\\ \gamma(2)=\varphi_1\gamma(1)+\varphi_2\gamma(0)\end{array}\right.$$

which can also be written (again, using a matrix expression)

$$\begin{bmatrix}\gamma(1)\\ \gamma(2)\end{bmatrix}=\begin{bmatrix}\gamma(0) & \gamma(1)\\ \gamma(1) & \gamma(0)\end{bmatrix}\begin{bmatrix}\varphi_1\\ \varphi_2\end{bmatrix}.$$

So we just have to solve a simple linear system of equations. Note that if we divide by the variance, those equations can be written in terms of the autocorrelation functions,

$$\begin{bmatrix}\rho(1)\\ \rho(2)\end{bmatrix}=\begin{bmatrix}1 & \rho(1)\\ \rho(1) & 1\end{bmatrix}\begin{bmatrix}\varphi_1\\ \varphi_2\end{bmatrix}.$$

The code is the following

> rho1=cor(Z[1:(n-1)],Z[2:n])
> rho2=cor(Z[1:(n-2)],Z[3:n])
> A=matrix(c(1,rho1,rho1,1),2,2)
> b=matrix(c(rho1,rho2),2,1)
> (PHI=solve(A,b))
          [,1]
[1,] 0.2256270
[2,] 0.6315329

Now, we need to extract the estimated innovation process from this set of parameters,

> estWN=base$Y-(PHI[1]*base$X1+PHI[2]*base$X2)
> sd(estWN)
[1] 1.058558

This estimator is probably not the best one (we can take into account that we’ve lost two degrees of freedom), but as a starting point, let us consider this one.
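To sketch that adjustment (a rough correction, dividing the sum of squares by the number of observations minus the two estimated coefficients, instead of the centering and $n-1$ used by sd()),

> sqrt(sum(estWN^2)/(length(estWN)-2))

which should be very close to the value above, given the sample size.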

An alternative could be to include the variance term in the Yule-Walker equations, to get a three-dimensional linear system,

$$\left\{\begin{array}{l}\gamma_0=\varphi_1\gamma_1+\varphi_2\gamma_2+\sigma^2\\ \gamma_1=\varphi_1\gamma_0+\varphi_2\gamma_1\\ \gamma_2=\varphi_1\gamma_1+\varphi_2\gamma_0\end{array}\right.$$

It is not much more complicated to solve, actually,

> gamma0=var(Z[1:n])
> gamma1=var(Z[1:(n-1)],Z[2:n])
> gamma2=var(Z[1:(n-2)],Z[3:n])
> A=matrix(c(gamma1,gamma0,gamma1,gamma2,gamma1,gamma0,1,0,0),3,3)
> b=matrix(c(gamma0,gamma1,gamma2),3,1)
> (PHISIGMA=solve(A,b))
          [,1]
[1,] 0.2283151
[2,] 0.6283431
[3,] 1.1335501

  • using (conditional) likelihood estimators

Finally, we can assume some distribution for the innovation process. The standard model is a Gaussian model, i.e.

$$Y_t\,\vert\, Y_{t-1}=y_{t-1},Y_{t-2}=y_{t-2}$$

has a Gaussian distribution

$$\mathcal{N}(\varphi_1y_{t-1}+\varphi_2y_{t-2},\sigma^2).$$

In that case, the conditional log-likelihood (conditional, since the first two observations are taken as given) is

> CondLogLik=function(A,TS){
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]; L=0
+ for(t in 3:length(TS)){
+ L=L+dnorm(TS[t],mean=phi1*TS[t-1]+
+ phi2*TS[t-2],sd=sigma,log=TRUE)}
+ return(-L)}

Now, we can run standard optimization procedures,

> LogL=function(A) CondLogLik(A,TS=Z)
> optim(c(0,0,1),LogL)
$par
[1] 0.2339589 0.6285002 1.0565613

$value
[1] 293.3042

$counts
function gradient 
     106       NA 

$convergence
[1] 0

$message
NULL

It is also possible to consider a global maximum likelihood optimization problem, since the variance matrix of the vector $\boldsymbol{Y}=(Y_1,\cdots,Y_n)$ has a known form.

  • using (unconditional) likelihood estimators

The variance matrix of $\boldsymbol{Y}=(Y_1,\cdots,Y_n)$ is $\boldsymbol{\Gamma}=[\gamma(\vert i-j\vert)]$, where the autocovariances are not known, but can easily be computed using a recursive relationship.

> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=1
+ rho[2]=phi1/(1-phi2)
+ for(h in 3:n) rho[h]=phi1*rho[h-1]+phi2*rho[h-2]
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1-phi2)*sigma^2/((1+phi2)*((1-phi2)^2-phi1^2))
+ SIG=gamma0*SIG
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
Error in chol.default(x, pivot = FALSE) : 
Error in pd.solve(varcov, log.det = TRUE) : 
  x appears to be not positive definite

The problem is that there is a strong constraint on the pair $(\varphi_1,\varphi_2)$ to get a stationary process (we are not far away, here, from the border of the triangle, where the process becomes non-stationary). To be more specific (this was mentioned in a previous post), we should have

$$\left\{\begin{array}{l}\varphi_2-\varphi_1<1\\ \varphi_2+\varphi_1<1\\ \vert\varphi_2\vert<1\end{array}\right.$$

i.e., in a standard matrix form,

$$\left[\begin{array}{cc}+1 & -1\\ -1 & -1\\ 0 & +1\end{array}\right]\left[\begin{array}{c}\varphi_1\\ \varphi_2\end{array}\right]>\left[\begin{array}{c}-1\\ -1\\ -1\end{array}\right]$$

(we can add an additional constraint on the variance parameter, to ensure that it will be positive). To run a constrained optimization routine, consider

> U=matrix(c(1,0,0,-1,0,1,0,-1,0,0,1,0),4,3)
> C=c(0,0,0,-.99999)
> constrOptim(c(.1,.1,1),LogL,grad=NULL,ui=U,ci=C)
$par
[1] 0.2238892 0.6342850 1.0613388

$value
[1] 297.9202

$counts
function gradient 
     108       NA 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] 0.000189892

(here, to speed things up, we also restrict the parameters to be positive).

  • comparing those estimates

Here, our five estimators are rather close. Let us run more samples to see more precisely how they behave. For the first parameter, $\widehat{\varphi}_1$, we get

and for the second one, $\widehat{\varphi}_2$, we have

The bias we observe probably comes from the fact that, with this numerical example, we are not far away from the non-stationary case (the sum of the true parameters should be less than 1, and it is 0.95). When we estimate the parameters, we force them to be inside the triangle, since those parameters can be estimated only if the process is stationary.

Observe that the standard deviation $\widehat{\sigma}$ of the innovation process is well estimated here,

(with, clearly, some estimators performing better than others).
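As a final sanity check (not one of the five estimators above, just a sketch using base R), the ar() function implements both the Yule-Walker and the exact maximum likelihood approaches, and should return similar values,

> ar(Z, order.max=2, aic=FALSE, method="yule-walker")
> ar(Z, order.max=2, aic=FALSE, method="mle")

(keep in mind that ar() removes the sample mean by default, so the estimates will not be strictly identical to those above).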