Category Archives: Time Series

Inference for MA(q) Time Series

Yesterday, we saw how inference for $AR(p)$ time series can be done. I started with that one because it is actually the simpler case: for instance, we can use ordinary least squares. There might be some bias (see e.g. White (1961)), but asymptotically, the estimators are fine (consistent, with asymptotic normality). When the noise is (auto)correlated, things are more complex. So, consider here some $MA(2)$ time series

$$X_t=\varepsilon_t+\theta_1\varepsilon_{t-1}+\theta_2\varepsilon_{t-2}$$

for some white noise $(\varepsilon_t)$.

> theta1=.25
> theta2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=e[t]+theta1*e[t-1]+theta2*e[t-2]
> Z=Z[800:1000]
> plot(Z,type="l")

  • Using the empirical autocorrelations

The first idea might be to use the first two (empirical) autocorrelations (the only two that are supposed to be – theoretically – non-null),

$$\rho(1)=\frac{\theta_1+\theta_1\theta_2}{1+\theta_1^2+\theta_2^2}\quad\text{and}\quad\rho(2)=\frac{\theta_2}{1+\theta_1^2+\theta_2^2}$$

with $\rho(h)=0$ when $h>2$. We also have the following relationship for the variance of the process,

$$\gamma(0)=(1+\theta_1^2+\theta_2^2)\sigma^2$$

With those three equations, for three unknown parameters, $\theta_1$, $\theta_2$ and $\sigma$, we simply have to solve (numerically) that system of equations,

> v=c(as.numeric(acf(Z)$acf[2:3]),var(Z))
> v
[1] 0.1658760 0.3823053 1.6379498
> library(rootSolve)
> seteq=function(x){
+ F1=v[1]-(x[1]+x[1]*x[2])/(1+x[1]^2+x[2]^2)
+ F2=v[2]-(x[2])/(1+x[1]^2+x[2]^2)
+ F3=v[3]-(1+x[1]^2+x[2]^2)*x[3]^2
+ return(c(F1,F2,F3))}
> multiroot(f=seteq,start=c(.1,.1,1))
$root
[1] 0.1400579 0.4766699 1.1461636

$f.root
[1]  7.876355e-10  4.188458e-09 -2.839977e-09

$iter
[1] 5

$estim.precis
[1] 2.605357e-09

We are a bit far away from the true values used to generate our sample. And if we consider 1,000 samples (instead of only one), we still have the bias, and a large variance, for our three estimators,

http://freakonometrics.hypotheses.org/files/2014/01/Capture-d%E2%80%99e%CC%81cran-2014-01-29-a%CC%80-11.34.46.png
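
A minimal sketch of that simulation exercise (a hypothetical re-implementation, reusing the function seteq defined above, which reads v from the global environment) could be

> theta1=.25; theta2=.7   # true values, used to generate the samples
> library(rootSolve)
> ns=1000
> nobs=200
> EST=matrix(NA,ns,3)
> for(s in 1:ns){
+ e=rnorm(nobs+2)
+ Z=e[3:(nobs+2)]+theta1*e[2:(nobs+1)]+theta2*e[1:nobs]
+ v=c(as.numeric(acf(Z,plot=FALSE)$acf[2:3]),var(Z))
+ EST[s,]=multiroot(f=seteq,start=c(.1,.1,1))$root}
> apply(EST,2,mean)

and boxplots of the three columns of EST then reproduce the kind of bias and dispersion displayed on the graph above.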

  • Using least squares techniques

We can try something quite different here. The problem we have is that we do not observe the noise $(\varepsilon_t)$, we only observe our series $(X_t)$. But we can try to rebuild that series (call it $(\hat u_t)$ since we're not sure it will be a reconstruction of the noise). As suggested in Box & Jenkins (1967), assume that the first two values are null. And then, use, recursively,

$$\hat u_t=X_t-\theta_1\hat u_{t-1}-\theta_2\hat u_{t-2}$$

and then, we can use least squares techniques, minimizing $\sum_t \hat u_t^2$.

The code will be

> V=function(p){
+ theta1=p[1]
+ theta2=p[2]
+ u=rep(0,length(Z))
+ for(t in 3:length(Z)) u[t]=Z[t]-theta1*u[t-1]-theta2*u[t-2]
+ return(sum(u^2))
+ }

If we try to minimize the sum of the squares of the residuals, we get

> optim(par=c(.1,.1),V)
$par
[1] 0.2751667 0.6723909

$value
[1] 225.8104

$counts
function gradient 
      77       NA 

$convergence
[1] 0

$message
NULL

which is close to the true values. Another good thing is that, if we compare that rebuilt noise with the true one (since we actually have it), then we have the same vector,

> plot(e[800:1000],col="blue",type="l")
> theta1=0.2751667
> theta2=0.6723909
> u=rep(0,length(Z))
> for(t in 3:length(Z)) u[t]=Z[t]-theta1*u[t-1]-theta2*u[t-2]
> lines(1:201,u,col="red")

So far, so good. And if we look at 1,000 samples, we get

It looks like we have some bias here. And since the two estimators should be negatively correlated, one over-estimates, while the other one under-estimates.
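
Again, a minimal sketch of that simulation (hypothetical, reusing the objective function V defined above, which reads Z from the global environment) could be

> theta1=.25; theta2=.7   # true values, used to generate the samples
> ns=1000
> nobs=200
> ESTLS=matrix(NA,ns,2)
> for(s in 1:ns){
+ e=rnorm(nobs+2)
+ Z=e[3:(nobs+2)]+theta1*e[2:(nobs+1)]+theta2*e[1:nobs]
+ ESTLS[s,]=optim(par=c(.1,.1),V)$par}
> apply(ESTLS,2,mean)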

  • Using the (global) maximum likelihood technique

And a final method might be to use the maximum likelihood technique (globally). Again, if we assume that we have a Gaussian i.i.d. noise, then the vector $\boldsymbol{Y}=(Y_1,\cdots,Y_T)$ is Gaussian, with a simple variance matrix (since a lot of elements will be null),

> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ theta1=A[1];  theta2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=1
+ rho[2]=(theta1+theta1*theta2)/(1+theta1^2+theta2^2)
+ rho[3]=(theta2)/(1+theta1^2+theta2^2)
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1+theta1^2+theta2^2)*sigma^2
+ SIG=gamma0*SIG
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
$par
[1] 0.2584144 0.6826530 1.0669820

$value
[1] 298.8699

$counts
function gradient 
      86       NA 

$convergence
[1] 0

$message
NULL

Here, the values that maximize the likelihood are rather close to the ones used to generate our sample. And if we run this algorithm on 1,000 samples, we can see that those estimates are fine,

I could not find other ideas to estimate those parameters. I guess we can use the partial autocorrelation function, since we have relationships that can be related to Yule-Walker equations for $AR(p)$ time series.

Inference for AR(p) Time Series

Consider a (stationary) autoregressive process, say of order 2,

$$Y_t=\varphi_1 Y_{t-1}+\varphi_2 Y_{t-2}+\varepsilon_t$$

for some white noise $(\varepsilon_t)$ with variance $\sigma^2$. Here is some code to generate such a process,

> phi1=.25
> phi2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+e[t]
> Z=Z[800:1000]
> n=length(Z)
> plot(Z,type="l")

Here, we have to estimate two sets of parameters: the autoregressive coefficients, and the variance of the innovation process, $\sigma^2$. Several techniques can be used to estimate those parameters.

  • using least squares regression

A natural idea is to see here a regression model, since (if we consider a matrix formulation)

$$\underbrace{\begin{bmatrix}Y_3\\ \vdots \\ Y_n\end{bmatrix}}_{\boldsymbol{Y}}=\underbrace{\begin{bmatrix}Y_2 & Y_1\\ \vdots & \vdots \\ Y_{n-1} & Y_{n-2}\end{bmatrix}}_{\boldsymbol{X}}\begin{bmatrix}\varphi_1\\ \varphi_2\end{bmatrix}+\boldsymbol{\varepsilon}$$

Here we can run a (conditional) ordinary least squares estimation,

> base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
> regression=lm(Y~0+X1+X2,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X1 + X2, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.0268 -0.7063  0.1065  0.6925  3.2566 

Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1  0.23400    0.05463   4.283 2.88e-05 ***
X2  0.62863    0.05476  11.479  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.062 on 197 degrees of freedom
Multiple R-squared:  0.6349,	Adjusted R-squared:  0.6312 
F-statistic: 171.3 on 2 and 197 DF,  p-value: < 2.2e-16

so we get the following estimators, for the autoregressive coefficients, and for the volatility of the noise,

> regression$coefficients
       X1        X2 
0.2339959 0.6286321 
> summary(regression)$sigma
[1] 1.061839
  • using Yule-Walker equations

As we've seen in class, we can easily get the following equations for the autocovariance functions,

$$\left\{\begin{array}{l}\gamma(1)=\varphi_1\gamma(0)+\varphi_2\gamma(1)\\ \gamma(2)=\varphi_1\gamma(1)+\varphi_2\gamma(0)\end{array}\right.$$

which can also be written (again, using a matrix expression)

$$\begin{bmatrix}\gamma(0)&\gamma(1)\\ \gamma(1)&\gamma(0)\end{bmatrix}\begin{bmatrix}\varphi_1\\ \varphi_2\end{bmatrix}=\begin{bmatrix}\gamma(1)\\ \gamma(2)\end{bmatrix}$$

So we just have to solve a simple linear system of equations. Note that if we divide by the variance, those equations can be written in terms of the autocorrelation functions,

$$\begin{bmatrix}1&\rho(1)\\ \rho(1)&1\end{bmatrix}\begin{bmatrix}\varphi_1\\ \varphi_2\end{bmatrix}=\begin{bmatrix}\rho(1)\\ \rho(2)\end{bmatrix}$$

The code is the following

> rho1=cor(Z[1:(n-1)],Z[2:n])
> rho2=cor(Z[1:(n-2)],Z[3:n])
> A=matrix(c(1,rho1,rho1,1),2,2)
> b=matrix(c(rho1,rho2),2,1)
> (PHI=solve(A,b))
          [,1]
[1,] 0.2256270
[2,] 0.6315329

Now, we need to extract the estimated innovation process, from this set of parameters

> estWN=base$Y-(PHI[1]*base$X1+PHI[2]*base$X2)
> sd(estWN)
[1] 1.058558

This estimator is probably not the best one (we can take into account that we’ve lost two degrees of freedom), but as a starting point, let us consider this one.
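
For instance, a degrees-of-freedom adjusted version (a minor variant, not in the original post) would be

> sqrt(sum(estWN^2)/(length(estWN)-2))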

An alternative could be to include the variance term in the Yule-Walker equations, to get a three-dimensional linear system,

$$\left\{\begin{array}{l}\gamma_0=\varphi_1\gamma_1+\varphi_2\gamma_2+\sigma^2\\ \gamma_1=\varphi_1\gamma_0+\varphi_2\gamma_1\\ \gamma_2=\varphi_1\gamma_1+\varphi_2\gamma_0\end{array}\right.$$

It is not much more complicated to solve, actually,

> gamma0=var(Z[1:n])
> gamma1=var(Z[1:(n-1)],Z[2:n])
> gamma2=var(Z[1:(n-2)],Z[3:n])
> A=matrix(c(gamma1,gamma0,gamma1,gamma2,gamma1,gamma0,1,0,0),3,3)
> b=matrix(c(gamma0,gamma1,gamma2),3,1)
> (PHISIGMA=solve(A,b))
          [,1]
[1,] 0.2283151
[2,] 0.6283431
[3,] 1.1335501
  • using (conditional) likelihood estimators

Finally, we can assume some distribution for the innovation process. The standard model is a Gaussian model, i.e.

$$Y_t\vert Y_{t-1}=y_{t-1},Y_{t-2}=y_{t-2}$$

has a Gaussian distribution

$$\mathcal{N}(\varphi_1y_{t-1}+\varphi_2y_{t-2},\sigma^2)$$

In that case, the conditional log-likelihood (conditional since the first two observations are taken as given here) is

$$\log\mathcal{L}(\varphi_1,\varphi_2,\sigma^2)=\sum_{t=3}^{T}\left(-\frac{1}{2}\log(2\pi\sigma^2)-\frac{(y_t-\varphi_1y_{t-1}-\varphi_2y_{t-2})^2}{2\sigma^2}\right)$$

> CondLogLik=function(A,TS){
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]; L=0
+ for(t in 3:length(TS)){
+ L=L+dnorm(TS[t],mean=phi1*TS[t-1]+
+ phi2*TS[t-2],sd=sigma,log=TRUE)}
+ return(-L)}

Now, we can run standard optimization procedures,

> LogL=function(A) CondLogLik(A,TS=Z)
> optim(c(0,0,1),LogL)
$par
[1] 0.2339589 0.6285002 1.0565613

$value
[1] 293.3042

$counts
function gradient 
     106       NA 

$convergence
[1] 0

$message
NULL

It is also possible to consider a global maximum likelihood optimisation problem, since the variance matrix of the vector $\boldsymbol{Y}=(Y_1,\cdots,Y_T)$ has a known form.

  • using (unconditional) likelihood estimators

The variance matrix of $\boldsymbol{Y}=(Y_1,\cdots,Y_T)$ is $\boldsymbol{\Gamma}=[\gamma(\vert i-j\vert)]$, where the autocovariances are not known, but can easily be computed using a recursive relationship.

> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=1
+ rho[2]=phi1/(1-phi2)
+ for(h in 3:n) rho[h]=phi1*rho[h-1]+phi2*rho[h-2]
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1-phi2)*sigma^2/((1+phi2)*((1-phi2)^2-phi1^2))
+ SIG=gamma0*SIG
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
Error in chol.default(x, pivot = FALSE) : 
Error in pd.solve(varcov, log.det = TRUE) : 
  x appears to be not positive definite

The problem is that there is a strong constraint on the pair $(\varphi_1,\varphi_2)$ to get a stationary process (we are not far away, here, from the border of the triangle, where the process becomes non-stationary). To be more specific (this was mentioned in a previous post), we should have

$$\left\{\begin{array}{l}\varphi_2-\varphi_1<1\\ \varphi_2+\varphi_1<1\\ \vert\varphi_2\vert<1\end{array}\right.$$

i.e. in a standard matrix form

$$\begin{bmatrix}+1 & -1\\ -1 & -1\\ 0 & +1\end{bmatrix}\begin{bmatrix}\varphi_1\\ \varphi_2\end{bmatrix}>\begin{bmatrix}-1\\ -1\\ -1\end{bmatrix}$$

(we can add an additional constraint on the variance parameter, to ensure that it will be positive). To run a constrained optimization routine, consider

> U=matrix(c(1,0,0,-1,0,1,0,-1,0,0,1,0),4,3)
> C=c(0,0,0,-.99999)
> constrOptim(c(.1,.1,1),LogL,grad=NULL,ui=U,ci=C)
$par
[1] 0.2238892 0.6342850 1.0613388

$value
[1] 297.9202

$counts
function gradient 
     108       NA 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] 0.000189892

(here, to speed things up, we also constrain the autoregressive parameters to be positive).

  • comparing those estimates

Here, our five estimators are rather close. Let us run more samples to see more precisely how they behave. For the first parameter $\widehat{\varphi}_1$, we get

and for the second one, $\widehat{\varphi}_2$, we have

The bias we observe is probably coming from the fact that, with this numerical example, we are not far away from the non-stationary case (the sum of the true parameters should be less than 1, and it is 0.95). When we estimate the parameters, we force them to be inside the triangle, since those parameters can be estimated only if the process is stationary.

Observe that the standard deviation of the innovation process $\widehat{\sigma}$ is well estimated here,

(with clearly some estimators that perform better than others).
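
A minimal sketch of such a Monte Carlo comparison, for the two fastest estimators (conditional least squares and Yule-Walker), under the same design as above, could be

> phi1=.25; phi2=.7
> ns=1000
> nobs=200
> RES=matrix(NA,ns,4)
> for(s in 1:ns){
+ e=rnorm(5*nobs)
+ Z=rep(0,5*nobs)
+ for(t in 3:(5*nobs)) Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+e[t]
+ Z=Z[(4*nobs+1):(5*nobs)]
+ base=data.frame(Y=Z[3:nobs],X1=Z[2:(nobs-1)],X2=Z[1:(nobs-2)])
+ RES[s,1:2]=lm(Y~0+X1+X2,data=base)$coefficients
+ rho1=cor(Z[1:(nobs-1)],Z[2:nobs])
+ rho2=cor(Z[1:(nobs-2)],Z[3:nobs])
+ RES[s,3:4]=solve(matrix(c(1,rho1,rho1,1),2,2),matrix(c(rho1,rho2),2,1))}
> apply(RES,2,mean)

(the three likelihood-based estimators can be added inside the same loop, at a higher computational cost), and boxplots of the columns of RES give pictures similar to the ones above.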

 

Defining Properly MA(∞) Time Series

In order to properly define $MA(\infty)$ series, we need to get back to some properties of infinite sequences, as briefly mentioned yesterday in the MAT8181 course. Consider some sequence $(a_i)_{i\in\mathbb{N}}$. The sequence is said to be summable if

$$S_n=\sum_{i=0}^n a_i$$

is convergent, i.e. if the limit of $S_n$ exists when $n\rightarrow\infty$.

From the Cauchy criterion, $\sum a_i$ converges if and only if for each $\eta>0$, there is $m\in\mathbb{N}$ for which

$$\vert a_i+a_{i+1}+\cdots+a_{j-1}+a_j\vert<\eta$$

when $i,j>m$. The sequence $(a_i)_{i\in\mathbb{N}}$ is said to be absolutely summable if

$$\sum_{i=0}^\infty \vert a_i\vert<\infty$$

and square-summable if

$$\sum_{i=0}^\infty a_i^2<\infty$$

Observe that absolute summability implies square summability (since for $j$ large enough, $\vert a_j\vert\leq 1$, and then $a_j^2\leq\vert a_j\vert$).

Consider now some $MA(\infty)$ time series

$$X_t=\sum_{h=0}^\infty \theta_h \varepsilon_{t-h}$$

If the sequence of coefficients $(\theta_i)$ is square-summable, then

$$S_T=\sum_{h=0}^T \theta_h \varepsilon_{t-h}$$

converges in $L_2$ to some random variable as $T\rightarrow\infty$. This can be proved easily using the Cauchy criterion, in the sense that for any $\eta>0$, there is a $T$ large enough such that, for any $h$,

$$\mathbb{E}\left[\left(\sum_{i=T+1}^{T+h}\theta_i\varepsilon_{t-i}\right)^{2}\right]=\sigma^2\sum_{i=T+1}^{T+h}\theta_i^2<\eta$$

In that case, if the sequence of coefficients $(\theta_i)$ is square-summable, then $(X_t)$ is stationary (in the $L_2$ sense) since the process is centered, and

$$\gamma(h)=\sigma^2\cdot\sum_{i=0}^\infty \theta_i \theta_{i+h}$$

for all $h\in\mathbb{N}$.

Further, ergodicity of the time series, defined as the absolute summability of the autocovariance sequence, is obtained when the sequence of coefficients $(\theta_i)$ is absolutely summable.
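
As a quick numerical illustration (a sketch, not taken from the original post), one can truncate the infinite sum, for geometric coefficients $\theta_h=0.8^h$ (the causal $AR(1)$ discussed in the next post), and compare the theoretical autocovariance with its empirical counterpart,

> set.seed(1)
> n=10000
> theta=.8^(0:100)                # truncated sequence of coefficients
> e=rnorm(n+101)
> X=sapply(1:n,function(t) sum(theta*e[t+101-(0:100)]))
> sum(theta[1:100]*theta[2:101])  # (truncated) theoretical gamma(1), with sigma^2=1
> cov(X[1:(n-1)],X[2:n])          # empirical gamma(1)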

Causal Autoregressive Time Series

In the MAT8181 graduate course on Time Series, we will discuss (almost) only causal models. For instance, with an $AR(1)$,

$$X_t=\phi X_{t-1}+\varepsilon_t$$

with some white noise $(\varepsilon_t)$, those models are obtained when $\vert\phi\vert<1$. In that case, we've seen that $(\varepsilon_t)$ was actually the innovation process, and we can write

$$X_t=\sum_{h=0}^{+\infty}\phi^h\varepsilon_{t-h}$$

which is actually a mean-square convergent series (using simple arguments from Analysis on series). From that expression, we can easily see that $(X_t)$ is stationary, since $\mathbb{E}(X_t)=0$ (which does not depend on $t$) and

$$\text{cov}(X_t,X_{t-h})=\frac{\phi^h}{1-\phi^2}\sigma^2$$

(which does not depend on $t$ either).

Consider now the case where $\vert\phi\vert>1$. Clearly, we have some problem here, since

$$X_t=\sum_{h=0}^{+\infty}\phi^h\varepsilon_{t-h}$$

cannot be defined (the series does not converge, in $L^2$). Nevertheless, it is still possible to write

$$X_t=\frac{1}{\phi}X_{t+1}-\frac{1}{\phi}\varepsilon_{t+1}$$

But it is possible to iterate (as in the previous case) and write

$$X_t=\sum_{h=1}^{+\infty}\frac{-1}{\phi^h}\varepsilon_{t+h}$$

which is actually well defined. And in that case, the sequence of random variables $(X_t)$ obtained from this equation is the unique stationary solution of the recursive equation $X_t=\phi X_{t-1}+\varepsilon_t$. This might be confusing, but this solution should not be confused with the usual non-stationary solution of $X_t=\phi X_{t-1}+\varepsilon_t$ obtained from some starting value $X_0$, as in the code written to generate a time series from a starting value $X_0$ in the previous post.

Now, let us spend some time with this stationary time series, considered as unnatural in Brockwell and Davis (1991). One point is that, in the previous case (where $\vert\phi\vert<1$), $(\varepsilon_t)$ was the innovation process, so the variable $X_t$ was not correlated with the future of the noise, $\sigma\{\varepsilon_{t+1},\varepsilon_{t+2},\cdots\}$. This is no longer the case when $\vert\phi\vert>1$.

All that looks nice, if you're willing to understand things at some theoretical level. But what does all that mean from a computational perspective? Consider some white noise (this noise does exist, whatever series we then decide to build from it),

> n=10000
> e=rnorm(n)
> plot(e,type="l",col="red")

If we look at the simple case, to start with,

> phi=.8
> X=rep(0,n)
> for(t in 2:n) X[t]=phi*X[t-1]+e[t]

The time series – the latest 1,000 observations – looks like

Now, if we use the cumulated sum of the noise,

> Y=rep(0,n)
> for(t in 2:n) Y[t]=sum(phi^((0:(t-1)))*e[t-(0:(t-1))])

we get

Which is exactly the same process! This should not surprise us, because that's what the theory told us. Now, consider the problematic case, where $\vert\phi\vert>1$,

> phi=1.1
> X=rep(0,n)
> for(t in 2:n) X[t]=phi*X[t-1]+e[t]

Clearly, that series is non-stationary (just look at the first 1,000 values)

Now, if we look at the series obtained from the cumulated sum of future values of the noise

> Y=rep(0,n)
> for(t in 1:(n-1)) Y[t]=sum((1/phi)^((1:(n-t)))*e[t+(1:(n-t))])

We get something which is, actually, stationary,

So, what is this series exactly? If you look at the autocorrelation function,

> acf(Y)

we get the autocorrelation function of a (stationary) $AR(1)$ process,

> acf(Y)[1]

Autocorrelations of series ‘Y’, by lag

    1 
0.908 

> 1/phi
[1] 0.9090909

Observe that there is a white noise – call it $(\eta_t)$ – such that

$$X_t=\frac{1}{\phi}X_{t-1}+\eta_t$$

This is what we call the canonical form of the stationary process $(X_t)$.
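
As a quick check (a sketch, not in the original post), we can rebuild that noise from the simulated series Y above, and verify that it shows (almost) no autocorrelation,

> eta=Y[2:(n-1)]-(1/phi)*Y[1:(n-2)]   # drop the last value of Y, which was left at 0
> acf(eta)
> sd(eta)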

Visualizing Autoregressive Time Series

In the MAT8181 graduate course on Time Series, we started discussing autoregressive models. Just to illustrate, here is some code to plot an $AR(1)$ – causal – process,

> graphar1=function(phi){
+ nf <- layout(matrix(c(1,1,1,1,2,3,4,5), 2, 4, byrow=TRUE), respect=TRUE)
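+ n=10000   # assumed sample size: n is used below but not defined inside the original function (any value of at least 6000 works)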
+ e=rnorm(n)
+ X=rep(0,n)
+ for(t in 2:n) X[t]=phi*X[t-1]+e[t]
+ plot(X[1:6000],type="l",ylab="")
+ abline(h=mean(X),lwd=2,col="red")
+ abline(h=mean(X)+2*sd(X),lty=2,col="red")
+ abline(h=mean(X)-2*sd(X),lty=2,col="red")
+ u=seq(-1,1,by=.001)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-2,2),xlim=c(-2.5,2.5))
+ polygon(c(u,rev(u)),c(sqrt(1-u^2),rev(-sqrt(1-u^2))),col="light yellow")
+ abline(v=0,col="grey")
+ abline(h=0,col="grey")
+ points(1/phi,0,pch=19,col="red",cex=1.3)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-.2,.2),xlim=c(-1,1))
+ axis(1)
+ points(phi,0,pch=19,col="red",cex=1.3)
+ acf(X,lwd=3,col="blue",main="",ylim=c(-1,1))
+ pacf(X,lwd=3,col="blue",main="",ylim=c(-1,1),xlim=c(0,16))}

e.g.

> graphar1(.8)

or

> graphar1(-.7)

(with, on the bottom, the root of the characteristic polynomial, the value of the parameter $\phi_1$, the autocorrelation function $h\mapsto\rho(h)$ and the partial autocorrelation function $h\mapsto\psi(h)$).

Of course, it is possible to do something similar with $AR(2)$ processes,

> graphar2=function(phi1,phi2){
+ nf <- layout(matrix(c(1,1,1,1,2,3,4,5), 2, 4, byrow=TRUE), respect=TRUE)
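+ n=10000   # assumed sample size: n is used below but not defined inside the original function (any value of at least 6000 works)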
+ e=rnorm(n)
+ X=rep(0,n)
+ for(t in 3:n) X[t]=phi1*X[t-1]+phi2*X[t-2]+e[t]
+ plot(X[1:6000],type="l",ylab="")
+ abline(h=mean(X),lwd=2,col="red")
+ abline(h=mean(X)+2*sd(X),lty=2,col="red")
+ abline(h=mean(X)-2*sd(X),lty=2,col="red")
+ P=polyroot(c(1,-phi1,-phi2))
+ u=seq(-1,1,by=.001)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-2,2),xlim=c(-2.5,2.5))
+ polygon(c(u,rev(u)),c(sqrt(1-u^2),rev(-sqrt(1-u^2))),col="light yellow")
+ abline(v=0,col="grey")
+ abline(h=0,col="grey")
+ points(P,pch=19,col="red",cex=1.3)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,xlim=c(-2.1,2.1),ylim=c(-1.2,1.2))
+ polygon(c(-2,0,2,-2),c(-1,1,-1,-1),col="light green")
+ u=seq(-2,2,by=.001)
+ lines(u,-u^2/4)
+ abline(v=seq(-2,2,by=.2),col="grey",lty=2)
+ abline(h=seq(-1,1,by=.2),col="grey",lty=2)
+ segments(0,-1,0,1)
+ axis(1)
+ axis(2)
+ points(phi1,phi2,pch=19,col="red",cex=1.3)
+ acf(X,lwd=3,col="blue",main="",ylim=c(-1,1))
+ pacf(X,lwd=3,col="blue",main="",ylim=c(-1,1),xlim=c(0,16))}

For example,

> graphar2(.65,.3)

or

> graphar2(-1.4,-.7)

Generating a Markov chain vs. computing the transition matrix

A couple of days ago, we had a quick chat on Karl Broman's blog, about snakes and ladders (see http://kbroman.wordpress.com/…) with Karl and Corey (see http://bayesianbiologist.com/….), and the use of Markov chains. I do believe that this application is truly awesome: the example is understandable by anyone, and computations (almost any kind, from what we've tried) are easy to perform. At the same time, some French students asked me for specific details regarding some old lecture notes on Markov chains, and on an introductory example I used as a possible motivation: the stepping stone algorithm. In the notes, I just mentioned the idea of this popular generic algorithm (introduced in Sawyer (1976)) and I used simulations to show – visually – how it works. Again, it was just to motivate the course, which actually did focus on the theory of Markov chains. But those students wanted more, like how I got the transition matrix, for instance. And that is actually not a simple question, from a computational perspective. I mean, I can easily generate this Markov chain, but writing the transition matrix explicitly, that was another story. Which took me a bit longer. In a very specific case…

But let us get back to the roots, and to the stepping stone algorithm. At least, to one of them (the one I used in my notes), because it looks like there are several algorithms. We consider a grid, say $n\times n$, with some colors inside, say $k$ possible colors. Each cell of the grid has a given color. Then, at some stage, we select randomly one cell in the grid, and it will take the color of one of its neighbors (some kind of absorption, or mutation). This is, more or less, what is also detailed in some lecture notes by James Propp (see also Sato (1983) or Zähle et al. (2005) for more theoretical details about that Markov chain). This is extremely simple to generate (that's what I did in my notes, with very big grids, and a lot of colors). But what if we want to write the transition matrix?

First of all, we need to define the state space. Basically, we have $n^2$ cells, each of them with one color chosen among $k$. Which gives us $k^{n^2}$ possible states…. And that can be large. I mean, if we consider the smallest grid that might be interesting, say $3\times 3$, and only $k=2$ colors, then we talk about $2^9=512$ possible states. That is large, not huge. But we should keep in mind that we have to compute a transition matrix, which would be a matrix with $512^2=262{,}144$ elements. More generally, we talk about writing down matrices with $\big(k^{n^2}\big)^2$ elements. If we want black and white $4\times 4$ grids, that would mean a matrix with $\big(2^{16}\big)^2$ elements, which means more than 4 billion elements! And if we consider a red-green-blue $3\times 3$ grid, we would have to write down a matrix with $\big(3^{9}\big)^2$ elements, i.e. almost 400 million elements. So, let's face it: we can only work with $3\times 3$ bi-color grids.

So let's try… The good thing is that it can be related to some work I've been doing recently on binomial recombining trees (binomial being related to bi-color). First of all, our grid will be described as follows

> h=3
> M=matrix(1:(h^2),h,h)
> M
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9

with two colors

> color=c("red","blue")

Then, we should look for neighbors, or derive a neighborhood matrix,

> d=function(i,j) dist(rbind(c((i-1)%/%h,(i-1)%%h),
+                            c((j-1)%/%h,(j-1)%%h)))
> Neighb=matrix(Vectorize(d)(rep(1:(h^2),each=h^2),
+                            rep(1:(h^2),h^2)),h^2,h^2)
> trunc(Neighb*100)/100
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
 [1,] 0.00 1.00 2.00 1.00 1.41 2.23 2.00 2.23 2.82
 [2,] 1.00 0.00 1.00 1.41 1.00 1.41 2.23 2.00 2.23
 [3,] 2.00 1.00 0.00 2.23 1.41 1.00 2.82 2.23 2.00
 [4,] 1.00 1.41 2.23 0.00 1.00 2.00 1.00 1.41 2.23
 [5,] 1.41 1.00 1.41 1.00 0.00 1.00 1.41 1.00 1.41
 [6,] 2.23 1.41 1.00 2.00 1.00 0.00 2.23 1.41 1.00
 [7,] 2.00 2.23 2.82 1.00 1.41 2.23 0.00 1.00 2.00
 [8,] 2.23 2.00 2.23 1.41 1.00 1.41 1.00 0.00 1.00
 [9,] 2.82 2.23 2.00 2.23 1.41 1.00 2.00 1.00 0.00
> Neighb=(Neighb<2)&(Neighb>0)
> Neighb
       [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]  [,8]  [,9]
 [1,] FALSE  TRUE FALSE  TRUE  TRUE FALSE FALSE FALSE FALSE
 [2,]  TRUE FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE
 [3,] FALSE  TRUE FALSE FALSE  TRUE  TRUE FALSE FALSE FALSE
 [4,]  TRUE  TRUE FALSE FALSE  TRUE FALSE  TRUE  TRUE FALSE
 [5,]  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE
 [6,] FALSE  TRUE  TRUE FALSE  TRUE FALSE FALSE  TRUE  TRUE
 [7,] FALSE FALSE FALSE  TRUE  TRUE FALSE FALSE  TRUE FALSE
 [8,] FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE
 [9,] FALSE FALSE FALSE FALSE  TRUE  TRUE FALSE  TRUE FALSE

Now, let us list our $2^9=512$ possible states explicitly.

> n=h^2
> states=function(x){
+   Base.b=rep(0,n)
+   ndigits=(floor(logb(x,base=length(color)))+1)
+   for(i in 1:ndigits){
+     Base.b[n-i+1]=(x%%length(color))
+     x=(x %/% length(color))}
+   return(Base.b)}
> M=Vectorize(states)(1:(length(color)^n-1))
> liststates=data.frame(rbind(rep(0,h^2),t(M)))
> head(liststates)
  X1 X2 X3 X4 X5 X6 X7 X8 X9
1  0  0  0  0  0  0  0  0  0
2  0  0  0  0  0  0  0  0  1
3  0  0  0  0  0  0  0  1  0
4  0  0  0  0  0  0  0  1  1
5  0  0  0  0  0  0  1  0  0
6  0  0  0  0  0  0  1  0  1

(for the first six, with 0/1 digits instead of colors). For instance, if we look at a specific one, it is possible to plot the grid, using

> plotsteps=function(u){
+   plot(0:h,0:h,col="white",xlab="",ylab="",axes=FALSE)
+   for(i in 0:(h^2-1)){
+   x=i%/%h
+   y=i%%h
+   polygon(x+c(1,.1,.1,1),y+c(1,1,.1,.1),
+   col=color[as.numeric(u)[i+1] + 1])
+   text(x+.45,y+.45,i)
+   }}

Here,

> plotsteps(liststates[100,])

Then, given one state, let us see what could happen next,

  • let us compute all connected states: all the states we can end up in if we change one cell
  • we have to check, for each connected state, which cell did change
  • we should compute the probabilities to reach those 9 states, based on the fact that each cell is chosen with the same probability, and the fact that the probability to change the color depends on the colors around
  • if some states cannot be reached (if a cell is surrounded by cells of the same color, it cannot change its color), then we should remove them from the list of reachable (possible) states

The code will be something like the following

> listneighbour=function(i){
+   start=liststates[i,]
+   difference2only=function(j) {
+     w=which(liststates[j,]!=liststates[i,])
+     return((length(w)==1))}
+   possible=which( Vectorize(difference2only)(1:nrow(liststates))==TRUE )
+   P=function(j){   
+     L=liststates[i,which(Neighb[which(liststates[j,]!=liststates[i,]),]==TRUE)]
+     T=table(as.numeric(L))
+     T=T[as.character(0:(length(color)-1))]
+     T[is.na(T)]=0
+     return(as.numeric(T)/sum(T))
+   }
+   probability=Vectorize(P)(possible)
+   W=NULL
+   for(j in possible) W=c(W,which(liststates[j,]!=liststates[i,]))
+   I=1-liststates[i,W]+1
+   vp=diag(probability[as.numeric(I),])
+   vproba=0*vp
+   if(sum(vp)!=0) vproba=vp/sum(vp)
+   return(list(
+     color=liststates[i,W],
+     absorb=W,
+     possible=possible,
+     probability=probability,
+     prob=vproba))
+ }

For instance, if we start from state 100 (here, on the right)

> listneighbour(100)
$color
    X3 X4 X8 X9 X7 X6 X5 X2 X1
100  1  1  1  1  0  0  0  0  0

$absorb
[1] 3 4 8 9 7 6 5 2 1

$possible
[1]  36  68  98  99 104 108 116 228 356

$probability
     [,1] [,2] [,3]   [,4]   [,5] [,6] [,7] [,8]   [,9]
[1,]    1  0.8  0.6 0.6667 0.3333  0.4  0.5  0.6 0.6667
[2,]    0  0.2  0.4 0.3333 0.6667  0.6  0.5  0.4 0.3333

$prob
[1] 0.17964072 0.14371257 0.10778443 0.11976048 0.11976048
[6] 0.10778443 0.08982036 0.07185629 0.05988024

Let us look more specifically at the 99th state (which appears above as a state that could be reached from the 100th),

> liststates[99,]
   X1 X2 X3 X4 X5 X6 X7 X8 X9
99  0  0  1  1  0  0  0  1  0

If we plot it (here on the right, again), we get

> plotsteps(liststates[99,])

Clearly, here, the cell in the upper corner (number 9) changed from blue to red. Now, about the probability… The probability to select cell 9 is 1/9, and given that cell 9 is chosen, the probability to go from blue to red is 2/3 (the cell is surrounded by 2 red cells, and 1 blue cell). The probability to remain blue is then 1/3. Those are the probabilities computed by our function (the table with two rows, one per color). In order to get a better understanding of the meaning of the last line (with some sort of probabilities), let us look at the following (simpler) example.

> liststates[2,]
  X1 X2 X3 X4 X5 X6 X7 X8 X9
2  0  0  0  0  0  0  0  0  1

which can be visualized on the right. Here,

> listneighbour(2)
$color
  X9 X8 X7 X6 X5 X4 X3 X2 X1
2  1  0  0  0  0  0  0  0  0

$absorb
[1] 9 8 7 6 5 4 3 2 1

$possible
[1]   1   4   6  10  18  34  66 130 258

$probability
     [,1] [,2] [,3] [,4]  [,5] [,6] [,7] [,8] [,9]
[1,]    1  0.8    1  0.8 0.875    1    1    1    1
[2,]    0  0.2    0  0.2 0.125    0    0    0    0

$prob
[1] 0.65573770 0.13114754 0.00000000 0.13114754 0.08196721 
[6] 0.00000000 0.00000000 0.00000000 0.00000000

Things are pretty simple here

  • if we choose one of the cells $\{1,2,3,4,7\}$, then nothing changes, since all the neighbors have the same color. So if we want to focus on changes (or say, run the algorithm until the first color change), then choosing those cells is a waste of time
  • if we choose one of the cells $\{5,6,8\}$, then it could be possible to change the color. And actually, $\{5\}$ is different from $\{6,8\}$ (since it has more neighbors)
  • if we choose cell $\{9\}$, then definitely, the color will change, since all its neighbors have the other color here

Now, the probability to select cell $k$, given that there was a color change, would be as follows: if $k$ is $\{9\}$,

$$\mathbb{P}(k)\propto\frac{3}{3}=1$$

while if $k$ is in $\{6,8\}$, then there are 4 out of 5 neighbors that are red, so

$$\mathbb{P}(k)\propto\frac{1}{5}$$

and if $k$ is $\{5\}$, then only one neighbor, out of 8, has a different color, so

$$\mathbb{P}(k)\propto\frac{1}{8}$$

And for the other cells, $\mathbb{P}(k)\propto 0$. So it comes – since we assume that cells are drawn independently, and with the same probability – that, if $k$ is $\{9\}$,

$$\mathbb{P}(k)=\frac{1\cdot\frac{1}{9}}{\left(1+2\times\frac{1}{5}+\frac{1}{8}+5\times 0\right)\cdot\frac{1}{9}}=\frac{40}{61}$$

while if $k$ is in $\{6,8\}$,

$$\mathbb{P}(k)=\frac{\frac{1}{5}\cdot\frac{1}{9}}{\left(1+2\times\frac{1}{5}+\frac{1}{8}+5\times 0\right)\cdot\frac{1}{9}}=\frac{8}{61}$$

and if $k$ is $\{5\}$,

$$\mathbb{P}(k)=\frac{\frac{1}{8}\cdot\frac{1}{9}}{\left(1+2\times\frac{1}{5}+\frac{1}{8}+5\times 0\right)\cdot\frac{1}{9}}=\frac{5}{61}$$

Those are exactly the probabilities computed above. The point is that we compute probabilities given that a color change will actually occur. The good point is that it should speed up convergence to some limiting distribution. If any.

What about our transition matrix? Well, using a simple loop, we should get it easily

> M=matrix(0,nrow(liststates),nrow(liststates))
+ for(i in 1:nrow(liststates)){
+ L=listneighbour(i)
+ if(sum(L$prob)!=0){
+ j=L$possible
+ M[i,j]=L$prob
+ }
+ if(sum(L$prob)==0){
+ j=i
+ M[i,j]=1
+ }
+ }

One can check that this matrix satisfies some properties of transition matrices. For instance, the sum per row is one,

> sum(apply(M,1,sum)!=1)
[1]  0

Remember that this matrix is big, so I will not print it here. But trust me, it works (it might take a while on an old laptop, but anyone can do it). Now, if we want to visualize some paths of that chain, we can use the following algorithm. First, we need a starting point, which can be chosen randomly,

> j=sample(1:nrow(liststates),size=1)

or using a given colored grid, say

> j=100

Then we plot it,

> plotsteps(liststates[j,])

Now, the code within the loop is here

> d=rep(0,nrow(liststates))
> d[j]=1
> d=d%*%M
> j=sample(1:nrow(M),size=1,prob=d)
> plotsteps(liststates[j,])

Here are some examples. And indeed, we end up either with all cells in blue, or all cells in red.

Now, do we have to compute that transition matrix to produce those graphs (and to generate that Markov chain)? No. Of course not… At each step, I use a Dirac measure, and use the transition matrix just to get the probabilities of the next state. Actually, one can write a faster and more intuitive code to generate the same chain… But I should probably keep that for another post…
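
Just to give the flavour, here is a minimal sketch of such a direct simulation (based only on the verbal description of the algorithm at the beginning of the post, and not necessarily the code the author had in mind): pick a cell uniformly, pick one of its neighbors uniformly, and copy its color. Note that, unlike the matrix above, this version does not condition on an actual color change.

> h=3
> grid=matrix(sample(0:1,h^2,replace=TRUE),h,h)  # random bi-color starting grid
> for(s in 1:1000){
+ i=sample(1:(h^2),1)                            # pick a cell uniformly
+ nb=which(Neighb[i,])                           # its neighbors (matrix computed above)
+ j=sample(nb,1)                                 # pick one of them uniformly
+ grid[i]=grid[j]}                               # absorb its color
> table(grid)                                    # the chain eventually gets absorbed in an all-0 or all-1 grid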

Easter

This morning, there was an interesting post entitled “why does Easter move around so much?” online on http://economist.com/blogs/economist-explains/…

In my time series classes, I keep saying that sometimes series can exhibit seasonality, but the seasonal effect can be quite irregular. It is the case for river levels, where snowmelt can have a huge impact, and it is irregular. Similarly, chocolate sales (even monthly, or quarterly) depend on Easter. Because it can be either in March or in April, the seasonal pattern is not as regular as flower sales, for instance (Valentine's Day being always on February 14th, as far as I remember). If we look at the word eggs on http://google.com/trends/q=eggs…, we do observe a cycle related to Easter.

The title of the article published on http://economist.com/blogs/economist-explains/… claims that there is a lot of variability in Easter's date. Let us check! The answer to the question "When is Easter?" can be the following (if we want a short answer): Easter Sunday is the first Sunday after the first full moon after the vernal equinox. For more details, see e.g. http://ortelius.de/east. The algorithm used to compute the date of Easter is online, on http://smart.net/~mmontes/….

> century = year/100
> G = year % 19
> K = (century - 17)/25
> I = (century - century/4 - (century - K)/3 + 19*G + 15) % 30
> I = I - (I/28)*(1 - (I/28)*(29/(I + 1))*((21 - G)/11))
> J = (year + year/4 + I + 2 - century + century/4) % 7
> L = I - J
> EasterMonth = 3 + (L + 40)/44
> EasterDay = L + 28 - 31*(EasterMonth/4)
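
(all the divisions above are integer divisions). A minimal R transcription of that pseudo-code (a sketch, using %/% and %%; the timeDate package used below remains the safer option) could be

> easter_date=function(year){
+ century=year%/%100
+ G=year%%19
+ K=(century-17)%/%25
+ I=(century-century%/%4-(century-K)%/%3+19*G+15)%%30
+ I=I-(I%/%28)*(1-(I%/%28)*(29%/%(I+1))*((21-G)%/%11))
+ J=(year+year%/%4+I+2-century+century%/%4)%%7
+ L=I-J
+ month=3+(L+40)%/%44
+ day=L+28-31*(month%/%4)
+ as.Date(paste(year,month,day,sep="-"))}
> easter_date(2024)   # "2024-03-31"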

Actually, this algorithm can be found in some R packages. Here we use the dates of Easter from AD 1000 to AD 3000,

> library(timeDate)
> E=Easter(1000:3000)
> D=as.Date(E)
> table(months(D))/2001

    april     march 
0.7651174 0.2348826

(April coming before March in alphabetical order). If we look at the distribution of the date, it is the following, the starting point being March 1st,

> J=as.numeric(D-as.Date(paste("01/03/",1000:3000,sep=""),"%d/%m/%Y"))
> hist(J,breaks=seq(20,55),col="light green")

And if we look at the autocorrelation function, we can observe that, indeed, after 19 years, there is a strong correlation (which could be guessed from the algorithm given previously),

> plot(acf(J))

But in order to get a better understanding of the dynamics, we can also look at transition matrices. Define

> Q=quantile(J,seq(0,1,by=.25))
> Q[1]=Q[1]-1
> C=cut(J,Q)

Then, the one year transition matrix is (in %)

> k=1; n=length(C)
> B=data.frame(X1=(C[1:(n-k)]),X2=(C[(k+1):n]))
> (T=table(B$X1,B$X2))

          (20,31] (31,39] (39,46] (46,55]
  (20,31]       0       0     265     277
  (31,39]     316       0      13     182
  (39,46]     224     264       0       0
  (46,55]       1     247     211       0
> P=T/apply(T,1,sum)
> round(P*1000)/10

          (20,31] (31,39] (39,46] (46,55]
  (20,31]     0.0     0.0    48.9    51.1
  (31,39]    61.8     0.0     2.5    35.6
  (39,46]    45.9    54.1     0.0     0.0
  (46,55]     0.2    53.8    46.0     0.0

I.e. if Easter was early in the year (say in March, in the first quartile), then very likely, the year after, it will be late in the year (with roughly a 50% chance of being in the third quartile, and 50% in the fourth one).
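
The same code can be reused for longer horizons; for instance, the 19-year transition matrix (which, given the strong autocorrelation at lag 19 noted above, should be much more concentrated than the one-year matrix) is obtained by simply changing k,

> k=19; n=length(C)
> B=data.frame(X1=(C[1:(n-k)]),X2=(C[(k+1):n]))
> T=table(B$X1,B$X2)
> round(T/apply(T,1,sum)*1000)/10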

A random walk? What else?

Consider the following time series,

What does it look like? I know, this is a stupid game, but I keep using it in my time series courses. It does look like a random walk, doesn't it? If we use the Phillips-Perron test, yes, it does,

> PP.test(x)

	Phillips-Perron Unit Root Test

data:  x 
Dickey-Fuller = -2.2421, Truncation lag parameter = 6, p-value = 0.4758

If we look at the autocorrelation function, we do observe some persistence,

> acf(x,100)

Perhaps this persistence can be related to long range dependence, or to some fractional random walk. A natural idea could be to estimate the Hurst parameter, using for instance the estimator of Beran (1992) – based on Whittle (1956) – where we assume that the autocorrelation function satisfies

$$\rho(h)\sim C\,h^{2H-2}$$

as $h\rightarrow\infty$, for some $H\in(1/2,1)$ (the so-called Hurst index). But here, we start to observe unexpected outputs,

> library(longmemo)
> (d  <- WhittleEst(x))
'WhittleEst' Whittle estimator for  fractional Gaussian noise ('fGn');	 call:
WhittleEst(x = x)
	  time series of length  n = 759.

H = 0.9899335
coefficients 'eta' =
    Estimate Std. Error z value   Pr(>|z|)
H 0.98993350 0.02468323 40.1055 < 2.22e-16
 <==> d := H - 1/2 = 0.49 (0.025)

 $ vcov       : num [1, 1] 0.000609
  ..- attr(*, "dimnames")=List of 2
  .. ..$ : chr "H"
  .. ..$ : chr "H"
 $ periodogr.x: num [1:379] 1479.3 1077.3 371.7 287.2 51.2 ...
 $ spec       : num [1:379] 62.5 31.7 21.3 16.1 12.9 ...

or more precisely some unexpected values for the Hurst parameter, which should be in $(0,1)$,

> confint(d)
      2.5 %   97.5 %
H 0.9415553 1.038312

Oops, perhaps we did miss something, because it looks like there is extremely strong persistence in our time series,

> plot(d)

It is probably time to ask where I found that series… To be honest, I borrowed it from a great Canadian website, http://climate.weatheroffice.gc.ca/climateData/. For instance, if you want the temperatures we experienced a few days ago, you can use

> Y=2013
> M=1
> D=25
> url=paste(
"http://climate.weatheroffice.gc.ca/climateData/hourlydata_e.html?
timeframe=1&Prov=QC&StationID=5415&hlyRange=1953-01-01|2013-02-
01&Year=",Y,"&Month=",M,"&Day=",D,sep="")
> page=scan(url,what="character")

Yes, that series is the temperature we experienced in Montréal last month (an hourly time series). On the graph below, you can actually compare it with the temperatures experienced in Januarys over the past 60 years,

So it is not that surprising to see long range dependence models appearing (I wrote a paper precisely on that topic a few years ago). What I found puzzling is that the persistence is large, extremely large. And the problem is that I do not see how we can explain the 'jumps' we observe on that series. For instance, the behavior of the series while I was in Europe, before January 20th: within 3 days, the temperature went down from 0°C to -20°C, up from -20°C to 0°C, and then down again from 0°C to -20°C (a nice И if we use Cyrillic letters). Or how can we explain the oscillating behavior observed the week after, when the temperature went up from -25°C to (almost) +10°C in a few days? Within 10 days, we also observed two 'jumps' (or 'crashes' if we want to use the terminology of financial time series) with a decrease of 25 degrees in less than 24 hours! Obviously, we need to find other classes of models to replicate the kind of behavior we observe on temperatures…

ACT6420 Final Exam

Next Wednesday is the final exam (which counts for 30% of the grade). As announced this morning, the format will be close to that of the midterm, with 33 multiple-choice questions

  • a few general comprehension questions on time series modeling,
  • a few questions on the analysis of outputs obtained from the modeling of a series.

This session, the series to study is the traffic of an airport, over about fifteen years. The data are monthly, and are online via the following code

> base=read.table(
"http://freakonometrics.blog.free.fr/public/data/TS-examen.txt",
+ sep=";",header=TRUE)
> X=ts(base$X,start=c(base$A[1],base$M[1]),frequency=12)
> plot(X)

The appendices to be discussed at the exam are online. Is it necessary to add that I will not answer questions on this document before Wednesday?

Good luck.

Modeling and Forecasting, a Textbook Case

A few lines of code that we will come back to in the next class, with a log transformation and a linear trend. Consider searches for the keyword headphones, in Canada; the dataset is online on the old blog, at freakonometrics.blog.free.fr/…

> report=read.table(
+ "report-headphones.csv",
+ skip=4,header=TRUE,sep=",",nrows=464)
> source("http://freakonometrics.blog.free.fr/public/code/H2M.R")
> headphones=H2M(report,lang="FR",type="ts")
> plot(headphones)

But a linear model should not be appropriate, since the series explodes,

> n=length(headphones)
> X1=seq(12,n,by=12)
> Y1=headphones[X1]
> points(time(headphones)[X1],Y1,pch=19,col="red")
> X2=seq(6,n,by=12)
> Y2=headphones[X2]
> points(time(headphones)[X2],Y2,pch=19,col="blue")

It is then natural to take the logarithm of the series,

> plot(headphones,log="y")

This is the series we are going to model (but it is, of course, the original series that we will have to forecast in the end). We start by removing the trend (linear, here),

> X=as.numeric(headphones)
> Y=log(X)
> n=length(Y)
> T=1:n
> B=data.frame(Y,T)
> reg=lm(Y~T,data=B)
> plot(T,Y,type="l")
> lines(T,predict(reg),col="purple",lwd=2)

We then work on the residual series.

> Z=Y-predict(reg)
> acf(Z,lag=36,lwd=6)
> pacf(Z,lag=36,lwd=6)

We can try a seasonal differencing,

> DZ=diff(Z,12)
> acf(DZ,lag=36,lwd=6)
> pacf(DZ,lag=36,lwd=6)

We then fit an ARIMA process to the differenced series,

> mod=arima(DZ,order=c(1,0,0),
+ seasonal=list(order=c(1,0,0),period=12))
> mod

Coefficients:
ar1     sar1  intercept
0.7937  -0.3696     0.0032
s.e.  0.0626   0.1072     0.0245

sigma^2 estimated as 0.0046:  log likelihood = 119.47

But since it is the original series we are interested in, we use a SARIMA formulation,

> mod=arima(Z,order=c(1,0,0),
+ seasonal=list(order=c(1,1,0),period=12))

We then compute the forecast of this series.

> modpred=predict(mod,24)
> Zm=modpred$pred
> Zse=modpred$se

We also use the extrapolation of the linear trend,

> tendance=predict(reg,newdata=data.frame(T=n+(1:24)))

Finally, to get back to our initial series, we use the properties of the lognormal distribution, and more specifically the form of its mean, to predict the value of the series,
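
Recall (this is what the next line uses) that if $\log X\sim\mathcal{N}(\mu,\sigma^2)$, then $\mathbb{E}[X]=\exp(\mu+\sigma^2/2)$.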

> Ym=exp(Zm+tendance+Zse^2/2)

Graphically, we get

> plot(1:n,X,xlim=c(1,n+24),type="l",ylim=c(10,90))
> lines(n+(1:24),Ym,lwd=2,col="blue")

For confidence intervals, we can use the quantiles of the lognormal distribution,

> Ysup975=qlnorm(.975,meanlog=Zm+tendance,sdlog=Zse)
> Yinf025=qlnorm(.025,meanlog=Zm+tendance,sdlog=Zse)
> Ysup9=qlnorm(.9,meanlog=Zm+tendance,sdlog=Zse)
> Yinf1=qlnorm(.1,meanlog=Zm+tendance,sdlog=Zse)
> polygon(c(n+(1:24),rev(n+(1:24))),
+ c(Ysup975,rev(Yinf025)),col="orange",border=NA)
> polygon(c(n+(1:24),rev(n+(1:24))),
+ c(Ysup9,rev(Yinf1)),col="yellow",border=NA)

On the Difficulty of Making Numbers Talk…

A short article entitled "de la difficulté de faire parler des chiffres pour analyser la gravité des accidents de la route" (on the difficulty of making numbers talk when analyzing the severity of road accidents) has just been published in the latest issue of Variance. The full issue is online at http://ensae.org/…. All the popularization articles are online at http://freakonometrics.hypotheses.org/….

The code for the first graph (on fatalities) is

base=read.table(
"http://freakonometrics.free.fr/base-graph-accidents-graves.txt",
header=TRUE,sep=";")
base$date=as.Date(base$date)
base$dateavant=as.Date(base$dateavant)
base$dateapres=as.Date(base$dateapres)
plot(base$date,base$compte,main="Blessés graves sur route entre 2002 et 2009",
xlab="Date",
ylab="Nombre de blessés sur la route, par jour",col="white")
points(base$dateavant,base$compteavant,col="light green")
lines(base$dateavant,base$tendanceavant,col="red",lty=2)
lines(base$dateavant,base$splinesavant,lwd=3,col="red")
points(base$dateapres,base$compteapres,col="light blue")
lines(base$dateapres,base$tendanceapres,col="red",lty=2)
lines(base$dateapres,base$splinesapres,lwd=3,col="red")

while for the second one (on injuries) it is

base=read.table(
"http://freakonometrics.free.fr/base-graph-accidents-deces.txt",
header=TRUE,sep=";")
base$date=as.Date(base$date)
base$dateavant=as.Date(base$dateavant)
base$dateapres=as.Date(base$dateapres)
plot(base$date,base$compte,main="Blessés graves sur route entre 2002 et 2009",
xlab="Date",
ylab="Nombre de blessés sur la route, par jour",col="white")
points(base$dateavant,base$compteavant,col="light green")
lines(base$dateavant,base$tendanceavant,col="red",lty=2)
lines(base$dateavant,base$splinesavant,lwd=3,col="red")
points(base$dateapres,base$compteapres,col="light blue")
lines(base$dateapres,base$tendanceapres,col="red",lty=2)
lines(base$dateapres,base$splinesapres,lwd=3,col="red")

Estimation and Forecasting for Time Series

For the end of the forecasting models course, a few slides on the identification and estimation of SARIMA models, some additional material on tests (unit roots and non-stationarity, as well as seasonality), and finally, a few ideas on how to build forecasts (with a quantification of the uncertainty), with R code. The slides are online here (even if the cover page is identical to the other sets, this is new material). Now that the slides are finished (and online), the next posts will focus on modeling and on computational aspects.

Introduction to SARIMA Processes

A few more slides, which should cover the next two time series classes, on autoregressive (AR) and moving average (MA) processes, ARMA, ARIMA (integrated) and SARIMA (seasonal) models. I have added some notes on unit-root tests, and I will add a few slides next week on seasonality tests, together with some practical forecasting examples. The slides are online here,

Introduction to Time Series

Tomorrow, we will start with the modeling of time series. The slides for the session are online here. As a reminder, the (complete) lecture notes are also online. I will keep posting, on a regular basis, entries containing R commands.

On the agenda this week: the use of regression methods to extract trend and cycle, exponential smoothing (simple, double and seasonal), and a presentation of the important notions of the course (stationarity, autocorrelations, white noise, etc.).

Regression: Individual or Temporal Data?

Before saying more about the first assignment, a word to explain why, in the first part, it is important to avoid using temporal data (the course presents tools for modeling individual data). Consider a simple example (mentioned here several months ago), where we want to predict a chest measurement conditionally on a waist measurement (for a female individual, cf. the model opposite). The intuitive idea is that there should be a positive, possibly linear, relationship between the two measurements. From data, we should be able to quantify that relationship. For that, we have a dataset, online at

> mensurations<-read.table("
+ http://freakonometrics.free.fr/mensurations.csv",
+ header=TRUE,sep=";")
> head(mensurations[,-1])
POITRINE TAILLE HANCHE HAUTEUR POIDS      AGE
1       NA     NA     NA      NA    NA       NA
2       94     58     89     165    51 22.24230
3       NA     NA     NA      NA    NA       NA
4       94     58     89     165    51 22.40383
5       91     58     97     165    55       NA
6       NA     NA     NA      NA    NA       NA

If we take a quick look at the correlation between waist and chest measurements, it is weak,

> Z=mensurations[,c("TAILLE","POITRINE")]
> I=apply(Z,1,function(x) sum(is.na(x)))
> cor(Z[I==0,])
TAILLE   POITRINE
TAILLE   1.00000000 0.09700002
POITRINE 0.09700002 1.00000000

… but significantly non-null,

> cor.test(Z[,1],Z[,2],,method = "pearson")

Pearson's product-moment correlation

data:  Z[, 1] and Z[, 2]
t = 2.4559, df = 635, p-value = 0.01432
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.0194634 0.1733769
sample estimates:
cor
0.09700002

With a $p$-value of about 1.5%, we reject the null hypothesis $H_0$ that the correlation is zero (the test here is based on the Fisher transformation). We can also run a simple regression,

> reg=lm(POITRINE~TAILLE,data=mensurations)
> summary(reg)

Call:
lm(formula = POITRINE ~ TAILLE, data = mensurations)

Residuals:
Min      1Q  Median      3Q     Max
-9.7253 -3.4424 -0.2091  1.5576 13.8579

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 82.67798    2.82953  29.220   <2e-16 ***
TAILLE       0.11663    0.04749   2.456   0.0143 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.133 on 635 degrees of freedom
(22 observations deleted due to missingness)
Multiple R-squared: 0.009409,	Adjusted R-squared: 0.007849
F-statistic: 6.031 on 1 and 635 DF,  p-value: 0.01432

(note that the test on the correlation, the significance test on the – single – explanatory variable, and the global significance test all give exactly the same $p$-value, of about 1.5%). Here again, we see that the waist measurement has a significantly non-null effect… but a marginal one. Graphically, the prediction based on the waist measurement is the red line,

The average value is the dashed line. We can see that the difference is tiny,

> base=data.frame(TAILLE=seq(50,70,by=5))
> base$Pred.POITRINE=predict(reg,newdata=base)
> base$Moy.POITRINE=mean(POITRINE,na.rm=TRUE)
> base
TAILLE Pred.POITRINE Moy.POITRINE
1     50      88.50935     89.61538
2     55      89.09248     89.61538
3     60      89.67562     89.61538
4     65      90.25876     89.61538
5     70      90.84189     89.61538

The problem is that this dataset is actually made of individual data. These are the measurements of the ladies who posed as playmates in the U.S. version of Playboy magazine. If we restrict ourselves to sub-periods, for instance the 1970s, we get rather different predictions,

> submensurations=mensurations[abs(DATE-1975)<=5,]
> subreg=lm(POITRINE~TAILLE,data=submensurations)
> summary(subreg)

Call:
lm(formula = POITRINE ~ TAILLE, data = submensurations)

Residuals:
Min     1Q Median     3Q    Max
-8.655 -1.871 -0.655  1.750 13.155

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  66.1485     7.3839   8.958 6.12e-15 ***
TAILLE        0.4053     0.1230   3.296   0.0013 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.574 on 117 degrees of freedom
(2 observations deleted due to missingness)
Multiple R-squared: 0.08496,	Adjusted R-squared: 0.07714
F-statistic: 10.86 on 1 and 117 DF,  p-value: 0.001299

> base$Pred70s.POITRINE=predict(subreg,newdata=base)
> base
TAILLE Pred.POITRINE Moy.POITRINE Pred70s.POITRINE
1     50      88.50935     89.61538         86.41274
2     55      89.09248     89.61538         88.43917
3     60      89.67562     89.61538         90.46559
4     65      90.25876     89.61538         92.49201
5     70      90.84189     89.61538         94.51844

which can be visualized below,

The explanation is that the morphology of women (or at least of those who posed nude in Playboy) has changed a lot over 60 years, as mentioned previously. We can see it by looking at the evolution of the slope, by date ($\pm 5$ years),
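
A minimal sketch of that rolling-slope computation (hypothetical, assuming the mensurations data frame above has a DATE column giving the year, as used in the code below) could be

> annees=1958:2005
> pente=sapply(annees,function(a){
+ sub=mensurations[abs(mensurations$DATE-a)<=5,]
+ lm(POITRINE~TAILLE,data=sub)$coefficients["TAILLE"]})
> plot(annees,pente,type="l")
> abline(h=lm(POITRINE~TAILLE,data=mensurations)$coefficients["TAILLE"],lty=2)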

(the horizontal line being the slope obtained over the whole period). We can actually include the year in our model, by adding time as an explanatory variable in the regression,

> reg=lm(POITRINE~TAILLE+DATE,data=mensurations)
> summary(reg)

Call:
lm(formula = POITRINE ~ TAILLE + DATE, data = mensurations)

Residuals:
Min      1Q  Median      3Q     Max
-9.7304 -2.2184 -0.5405  2.0742 12.1041

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 368.296266  18.008455  20.451  < 2e-16 ***
TAILLE        0.323319   0.042140   7.673 6.39e-14 ***
DATE         -0.150301   0.009393 -16.002  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.491 on 634 degrees of freedom
(22 observations deleted due to missingness)
Multiple R-squared: 0.2944,	Adjusted R-squared: 0.2922
F-statistic: 132.3 on 2 and 634 DF,  p-value: < 2.2e-16

Visually, we would get the following prediction,

If we look around 1970,

> submensurations=mensurations[abs(DATE-1970)<=5,]
> subreg=lm(POITRINE~TAILLE,data=submensurations)
> predict(subreg,newdata=data.frame(TAILLE=c(55,70)))
1        2
89.14361 95.75476

while around 1990,

> submensurations=mensurations[abs(DATE-1990)<=5,]
> subreg=lm(POITRINE~TAILLE,data=submensurations)
> predict(subreg,newdata=data.frame(TAILLE=c(55,70)))
1        2
87.69853 92.70228

Note that in these last two computations we are running local regressions, which is not what is shown visually… If we want a better local model, we would have to leave the world of linear models and go a bit further. We will see at the end of the course whether we have time to discuss it.

That said, globally, we had a positive effect, but it would have been perfectly possible to get a (globally) negative effect. That is what happens with Simpson's paradox.