Tag Archives: AR(2)

Inference for AR(p) Time Series

Consider a (stationary) autoregressive process, say of order 2,

$$Y_t=\varphi_1 Y_{t-1}+\varphi_2 Y_{t-2}+\varepsilon_t$$

for some white noise $(\varepsilon_t)$ with variance $\sigma^2$. Here is some code to generate such a process,

> phi1=.25
> phi2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+e[t]
> Z=Z[800:1000]
> n=length(Z)
> plot(Z,type="l")

Here, we have to estimate two sets of parameters: the autoregressive coefficients $(\varphi_1,\varphi_2)$, and the variance $\sigma^2$ of the innovation process. Several techniques can be used to estimate those parameters.

  • using least square regression

A natural idea is to see here a regression model, since (if we consider a matrix formulation)

$$\underbrace{\begin{pmatrix}Y_3\\ \vdots\\ Y_n\end{pmatrix}}_{\boldsymbol{Y}}=\underbrace{\begin{pmatrix}Y_2 & Y_1\\ \vdots & \vdots\\ Y_{n-1} & Y_{n-2}\end{pmatrix}}_{\boldsymbol{X}}\begin{pmatrix}\varphi_1\\ \varphi_2\end{pmatrix}+\underbrace{\begin{pmatrix}\varepsilon_3\\ \vdots\\ \varepsilon_n\end{pmatrix}}_{\boldsymbol{\varepsilon}}$$

Here we can run (conditional) ordinary least squares estimation,

> base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
> regression=lm(Y~0+X1+X2,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X1 + X2, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.0268 -0.7063  0.1065  0.6925  3.2566 

Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1  0.23400    0.05463   4.283 2.88e-05 ***
X2  0.62863    0.05476  11.479  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.062 on 197 degrees of freedom
Multiple R-squared:  0.6349,	Adjusted R-squared:  0.6312 
F-statistic: 171.3 on 2 and 197 DF,  p-value: < 2.2e-16

so we get the following estimators for the autoregressive coefficients, and for the volatility of the noise,

> regression$coefficients
       X1        X2 
0.2339959 0.6286321 
> summary(regression)$sigma
[1] 1.061839
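
As a quick sanity check (this is not in the original post), the same conditional least-squares fit can also be obtained from built-in functions; a minimal sketch,

> ar.ols(Z,order.max=2,aic=FALSE,demean=FALSE,intercept=FALSE)  # least squares, no intercept
> arima(Z,order=c(2,0,0),include.mean=FALSE,method="CSS")       # conditional sum of squares

Both calls should return autoregressive coefficients essentially identical to the lm() estimates above.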
  • using Yule-Walker equations

As we’ve seen in class, we can easily get the following equations for the autocovariance functions,

$$\begin{cases}\gamma(1)=\varphi_1\gamma(0)+\varphi_2\gamma(1)\\ \gamma(2)=\varphi_1\gamma(1)+\varphi_2\gamma(0)\end{cases}$$

which can also be written (again, using a matrix expression)

$$\begin{pmatrix}\gamma(1)\\ \gamma(2)\end{pmatrix}=\begin{pmatrix}\gamma(0) & \gamma(1)\\ \gamma(1) & \gamma(0)\end{pmatrix}\begin{pmatrix}\varphi_1\\ \varphi_2\end{pmatrix}$$

So we just have to solve a simple linear system of equations. Note that if we divide by the variance, those equations can be written in terms of the autocorrelation functions,

$$\begin{pmatrix}\rho(1)\\ \rho(2)\end{pmatrix}=\begin{pmatrix}1 & \rho(1)\\ \rho(1) & 1\end{pmatrix}\begin{pmatrix}\varphi_1\\ \varphi_2\end{pmatrix}$$

The code is the following

> rho1=cor(Z[1:(n-1)],Z[2:n])
> rho2=cor(Z[1:(n-2)],Z[3:n])
> A=matrix(c(1,rho1,rho1,1),2,2)
> b=matrix(c(rho1,rho2),2,1)
> (PHI=solve(A,b))
          [,1]
[1,] 0.2256270
[2,] 0.6315329

Now, we need to extract the estimated innovation process, from this set of parameters

> estWN=base$Y-(PHI[1]*base$X1+PHI[2]*base$X2)
> sd(estWN)
[1] 1.058558

This estimator is probably not the best one (we can take into account that we’ve lost two degrees of freedom), but as a starting point, let us consider this one.
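
For comparison (again, not in the original code), the built-in Yule-Walker estimator can be called directly; a small sketch,

> fit=ar.yw(Z,order.max=2,aic=FALSE)
> fit$ar        # Yule-Walker estimates of phi1 and phi2
> fit$var.pred  # associated estimate of the innovation variance

The values should be close to (but not exactly equal to) the ones above, since ar.yw uses the standard autocovariance estimator rather than cor() on shifted subsamples.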

An alternative could be to include the variance term in the Yule-Walker equations, to get a three-dimensional linear system,

$$\begin{cases}\gamma_0=\varphi_1\gamma_1+\varphi_2\gamma_2+\sigma^2\\ \gamma_1=\varphi_1\gamma_0+\varphi_2\gamma_1\\ \gamma_2=\varphi_1\gamma_1+\varphi_2\gamma_0\end{cases}$$

It is not much more complicated to solve, actually,

> gamma0=var(Z[1:n])
> gamma1=var(Z[1:(n-1)],Z[2:n])
> gamma2=var(Z[1:(n-2)],Z[3:n])
> A=matrix(c(gamma1,gamma0,gamma1,gamma2,gamma1,gamma0,1,0,0),3,3)
> b=matrix(c(gamma0,gamma1,gamma2),3,1)
> (PHISIGMA=solve(A,b))
          [,1]
[1,] 0.2283151
[2,] 0.6283431
[3,] 1.1335501
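
Note that the third component is an estimate of the variance $\sigma^2$, not of the standard deviation $\sigma$. The empirical autocovariances could also be obtained directly from acf() (a sketch, using the usual $1/n$ normalisation instead of var() on shifted subsamples),

> g=acf(Z,lag.max=2,type="covariance",plot=FALSE)$acf   # gamma(0), gamma(1), gamma(2)
> A=matrix(c(g[2],g[1],g[2],g[3],g[2],g[1],1,0,0),3,3)
> b=matrix(g[1:3],3,1)
> solve(A,b)   # (phi1, phi2, sigma^2)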
  • using (conditional) likelihood estimators

Finally, we can assume some distribution for the innovation process. The standard model is a Gaussian model, i.e.

$$Y_t\mid Y_{t-1}=y_{t-1},\,Y_{t-2}=y_{t-2}$$

has a Gaussian distribution

$$\mathcal{N}(\varphi_1y_{t-1}+\varphi_2y_{t-2},\sigma^2)$$

In that case, the conditional log-likelihood (conditional since the first two observations are taken as given) is

$$\log\mathcal{L}(\varphi_1,\varphi_2,\sigma^2)=\sum_{t=3}^{n}\left[-\frac{1}{2}\log(2\pi\sigma^2)-\frac{(y_t-\varphi_1y_{t-1}-\varphi_2y_{t-2})^2}{2\sigma^2}\right]$$

which is coded below (the function returns the opposite of the log-likelihood, since optim minimises),

> CondLogLik=function(A,TS){
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]; L=0
+ for(t in 3:length(TS)){
+ L=L+dnorm(TS[t],mean=phi1*TS[t-1]+
+ phi2*TS[t-2],sd=sigma,log=TRUE)}
+ return(-L)}

Now, we can run standard optimization procedures,

> LogL=function(A) CondLogLik(A,TS=Z)
> optim(c(0,0,1),LogL)
$par
[1] 0.2339589 0.6285002 1.0565613

$value
[1] 293.3042

$counts
function gradient 
     106       NA 

$convergence
[1] 0

$message
NULL

It is also possible to consider a global maximum likelihood optimisation problem, since the variance matrix of the vector $\boldsymbol{Y}=(Y_1,\cdots,Y_n)$ has a known form.
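
Note that this full (Gaussian) maximum likelihood estimator is essentially what arima() computes with method="ML"; a one-line sketch, useful to benchmark the hand-made code below,

> arima(Z,order=c(2,0,0),include.mean=FALSE,method="ML")   # note: it reports sigma^2, not sigma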

  • using (unconditional) likelihood estimators

The variance matrix of $\boldsymbol{Y}=(Y_1,\cdots,Y_n)$ is $\boldsymbol{\Gamma}=[\gamma(\vert i-j\vert)]$, where the autocovariances are not known, but can easily be computed using a recursive relationship.

> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)                 # theoretical autocorrelations of the AR(2)
+ rho[1]=1
+ rho[2]=phi1/(1-phi2)
+ for(h in 3:n) rho[h]=phi1*rho[h-1]+phi2*rho[h-2]
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1-phi2)*sigma^2/((1+phi2)*((1-phi2)^2-phi1^2))   # variance gamma(0)
+ SIG=gamma0*SIG               # autocovariance matrix [gamma(|i-j|)]
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
Error in chol.default(x, pivot = FALSE) : 
Error in pd.solve(varcov, log.det = TRUE) : 
  x appears to be not positive definite

The problem is that there is a strong constraint on the pair $(\varphi_1,\varphi_2)$ for the process to be stationary (here, we are not far from the border of the triangle, where the process becomes non-stationary). To be more specific (this was mentioned in a previous post), we should have

$$\begin{cases}\varphi_2-\varphi_1<1\\ \varphi_2+\varphi_1<1\\ \vert\varphi_2\vert<1\end{cases}$$

i.e. in a standard matrix form

$$\begin{bmatrix}+1 & -1\\ -1 & -1\\ 0 & +1\end{bmatrix}\begin{bmatrix}\varphi_1\\ \varphi_2\end{bmatrix}>\begin{bmatrix}-1\\ -1\\ -1\end{bmatrix}$$

(we can add an additional constraint on the variance parameter, to ensure that it will be positive). To run a constrained optimization routine, consider

> U=matrix(c(1,0,0,-1,0,1,0,-1,0,0,1,0),4,3)
> C=c(0,0,0,-.99999)
> constrOptim(c(.1,.1,1),LogL,grad=NULL,ui=U,ci=C)
$par
[1] 0.2238892 0.6342850 1.0613388

$value
[1] 297.9202

$counts
function gradient 
     108       NA 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] 0.000189892

(here, to speed things up, we restricted the parameters so that they are positive).
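
Instead of forcing positivity, we could also encode the full stationarity triangle (plus the positivity of $\sigma$) in the constraints; a sketch of an alternative specification (recall that constrOptim requires ui %*% theta - ci to be positive),

> U=rbind(c(1,-1,0),   # phi1 - phi2 > -1
+         c(-1,-1,0),  # -phi1 - phi2 > -1
+         c(0,1,0),    # phi2 > -1
+         c(0,0,1))    # sigma > 0
> C=c(-.99999,-.99999,-.99999,1e-5)
> constrOptim(c(.1,.1,1),LogL,grad=NULL,ui=U,ci=C)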

  • comparing those estimates

Here, our five estimators are rather close. Let us run more samples to see more precisely how they behave. For the first parameter, $\widehat{\varphi}_1$, we get

and for the second one, $\widehat{\varphi}_2$, we have

The bias we observe is probably coming from the fact that, with this numerical example, we are not far away from the non-stationary case (the sum of the true parameters should be less than 1, and it is 0.95). When we estimate the parameters, we force them to be inside the triangle, since those parameters can be estimated only if the process is stationary.

Observe that the standard deviation of the innovation process, $\widehat{\sigma}$, is well estimated here,

(with clearly some estimators that perform better than others).
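
The simulation code itself is not reproduced in the post, but a minimal sketch of such a comparison (say 1,000 replications, only for the OLS, Yule-Walker and conditional maximum likelihood estimators of $\varphi_1$, with hypothetical object names, and assuming the CondLogLik function defined above has been loaded) could look like

> nsim=1000
> EST=matrix(NA,nsim,3)
> colnames(EST)=c("OLS","YW","cML")
> for(s in 1:nsim){
+ e=rnorm(1000); Z=rep(0,1000)
+ for(t in 3:1000) Z[t]=.25*Z[t-1]+.7*Z[t-2]+e[t]
+ Z=Z[800:1000]; n=length(Z)
+ base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
+ EST[s,1]=lm(Y~0+X1+X2,data=base)$coefficients[1]
+ r1=cor(Z[1:(n-1)],Z[2:n]); r2=cor(Z[1:(n-2)],Z[3:n])
+ EST[s,2]=solve(matrix(c(1,r1,r1,1),2,2),c(r1,r2))[1]
+ EST[s,3]=optim(c(0,0,1),function(A) CondLogLik(A,TS=Z))$par[1]
+ }
> boxplot(EST)

(this overwrites the Z and n used above; replacing the [1] indices by [2] gives the estimates of $\varphi_2$).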

 

Triangle for Parameters of AR(2) Stationary Processes

We’ve seen yesterday conditions on $(\phi_1,\phi_2)$ under which the canonical $AR(2)$ process $(X_t)$, satisfying

$$X_t=\phi_1 X_{t-1}+\phi_2 X_{t-2}+\varepsilon_t$$

is stationary. The condition is rather simple, since $(\phi_1,\phi_2)$ should lie in a triangular region. But the proof is a bit more tricky…

Recall that we want to parametrize the region

$$\{(\phi_1,\phi_2)\in\mathbb{R}^2:\ 1-\phi_1z-\phi_2z^2\neq 0,\ \forall z\in\mathbb{C},\ \vert z\vert\leq 1\}$$

Since we have a true $AR(2)$ process, $\phi_2\neq 0$. Our polynomial is here

$$\Phi(z)=1-\phi_1z-\phi_2z^2=\left(1-\frac{z}{\lambda_1}\right)\left(1-\frac{z}{\lambda_2}\right)$$

where the $\lambda_i$'s are the roots – in $\mathbb{C}$ – of $\Phi(\cdot)$. Consider now some kind of dual version of that polynomial,

$$\tilde\Phi(z)=\left(1-\lambda_1z\right)\left(1-\lambda_2z\right)=1+\frac{\phi_1}{\phi_2}z-\frac{1}{\phi_2}z^2$$

Having the roots of $\Phi(\cdot)$ outside the unit circle is the same as having the roots of $\tilde\Phi(\cdot)$ inside the unit circle, since the roots of $\tilde\Phi(\cdot)$ are the inverses of the roots of $\Phi(\cdot)$. Observe that we can write

$$\tilde\Phi(z)=-\frac{1}{\phi_2}(\underbrace{z^2-\phi_1z-\phi_2}_{\bar{\Phi}(z)})$$

Roots of $\bar{\Phi}(\cdot)$ are then

$$\xi=\frac{1}{2}\left(\phi_1\pm\sqrt{\phi_1^2+4\phi_2}\right)$$

From this point, we should discuss a little bit, depending on the value of $\Delta=\phi_1^2+4\phi_2$.

  • if $\Delta=\phi_1^2+4\phi_2=0$

Then there is a single (double) root, $\xi=\phi_1/2$, which has to lie in $(-1,1)$. So we need to have $\vert\phi_1\vert<2$ or, equivalently (since $\phi_2=-\phi_1^2/4$), $\phi_2>-1$.

  • if $\Delta=\phi_1^2+4\phi_2>0$

Then we get two roots in $\mathbb{R}$, and

$$-1<\frac{1}{2}\left(\phi_1\pm\sqrt{\phi_1^2+4\phi_2}\right)<1$$

means, equivalently, that

$$\phi_2>-1\ ;\quad \phi_2-\phi_1<1\ ;\quad \phi_2+\phi_1<1$$

  • if $\Delta=\phi_1^2+4\phi_2<0$

Then we have two (conjugate) roots in $\mathbb{C}$, and the squared norm of those roots is $\vert\xi\vert^2=-\phi_2$, which has to be smaller than 1. Thus, $\phi_2>-1$.

We get what was mentioned in the course: the canonical $AR(2)$ process has a stationary solution if, and only if,

$$\begin{cases}\phi_2-\phi_1<1\\ \phi_2+\phi_1<1\\ \vert\phi_2\vert<1\end{cases}$$

which is a triangular region, as sketched below.
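
A quick way to draw it (this sketch is not from the original post; the same polygon appears in the visualisation functions below) is

> u=seq(-2,2,by=.01)
> plot(0,0,col="white",xlim=c(-2.2,2.2),ylim=c(-1.2,1.2),xlab=expression(phi[1]),ylab=expression(phi[2]))
> polygon(c(-2,0,2,-2),c(-1,1,-1,-1),col="light yellow")   # stationarity triangle
> lines(u,-u^2/4)   # below this parabola, the two roots are complex conjugates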

Visualizing Autoregressive Time Series

In the MAT8181 graduate course on Time Series, we started discussing autoregressive models. Just to illustrate, here is some code to plot a – causal – $AR(1)$ process,

> graphar1=function(phi){
+ nf <- layout(matrix(c(1,1,1,1,2,3,4,5), 2, 4, byrow=TRUE), respect=TRUE)
+ n=10000   # sample size, not defined in the original code (at least 6000 is needed for the plot below)
+ e=rnorm(n)
+ X=rep(0,n)
+ for(t in 2:n) X[t]=phi*X[t-1]+e[t]
+ plot(X[1:6000],type="l",ylab="")
+ abline(h=mean(X),lwd=2,col="red")
+ abline(h=mean(X)+2*sd(X),lty=2,col="red")
+ abline(h=mean(X)-2*sd(X),lty=2,col="red")
+ u=seq(-1,1,by=.001)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-2,2),xlim=c(-2.5,2.5))
+ polygon(c(u,rev(u)),c(sqrt(1-u^2),rev(-sqrt(1-u^2))),col="light yellow")
+ abline(v=0,col="grey")
+ abline(h=0,col="grey")
+ points(1/phi,0,pch=19,col="red",cex=1.3)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-.2,.2),xlim=c(-1,1))
+ axis(1)
+ points(phi,0,pch=19,col="red",cex=1.3)
+ acf(X,lwd=3,col="blue",main="",ylim=c(-1,1))
+ pacf(X,lwd=3,col="blue",main="",ylim=c(-1,1),xlim=c(0,16))}

e.g.

> graphar1(.8)

or

> graphar1(-.7)

(with, on the bottom, the root of the characteristic polynomial, the value of the parameter $\phi_1$, the autocorrelation function $h\mapsto\rho(h)$ and the partial autocorrelation function $h\mapsto\psi(h)$).

Of course, it is possible to do something similar with $AR(2)$ processes,

> graphar2=function(phi1,phi2){
+ nf <- layout(matrix(c(1,1,1,1,2,3,4,5), 2, 4, byrow=TRUE), respect=TRUE)
+ n=10000   # again, the sample size is not defined in the original code
+ e=rnorm(n)
+ X=rep(0,n)
+ for(t in 3:n) X[t]=phi1*X[t-1]+phi2*X[t-2]+e[t]
+ plot(X[1:6000],type="l",ylab="")
+ abline(h=mean(X),lwd=2,col="red")
+ abline(h=mean(X)+2*sd(X),lty=2,col="red")
+ abline(h=mean(X)-2*sd(X),lty=2,col="red")
+ P=polyroot(c(1,-phi1,-phi2))
+ u=seq(-1,1,by=.001)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-2,2),xlim=c(-2.5,2.5))
+ polygon(c(u,rev(u)),c(sqrt(1-u^2),rev(-sqrt(1-u^2))),col="light yellow")
+ abline(v=0,col="grey")
+ abline(h=0,col="grey")
+ points(P,pch=19,col="red",cex=1.3)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,xlim=c(-2.1,2.1),ylim=c(-1.2,1.2))
+ polygon(c(-2,0,2,-2),c(-1,1,-1,-1),col="light green")
+ u=seq(-2,2,by=.001)
+ lines(u,-u^2/4)
+ abline(v=seq(-2,2,by=.2),col="grey",lty=2)
+ abline(h=seq(-1,1,by=.2),col="grey",lty=2)
+ segments(0,-1,0,1)
+ axis(1)
+ axis(2)
+ points(phi1,phi2,pch=19,col="red",cex=1.3)
+ acf(X,lwd=3,col="blue",main="",ylim=c(-1,1))
+ pacf(X,lwd=3,col="blue",main="",ylim=c(-1,1),xlim=c(0,16))}

For example,

> graphar2(.65,.3)

or

> graphar2(-1.4,-.7)

Simulating Time Series

A quick post to write down the code typed in class last week. Consider an autoregressive process of order 1, $X_t=\phi_1X_{t-1}+\varepsilon_t$, where $(\varepsilon_t)$ is a white noise; the process is stationary when $\phi_1$ belongs to the interval $(-1,1)$. The code to simulate such a process is

n=1000
bruit=rnorm(n)
phi1= .85
X=rep(NA,n)
X[1]=0
for(t in 2:n){X[t]=phi1*X[t-1]+bruit[t]}
plot(acf(X),lwd=5,col='blue')
plot(pacf(X),lwd=5,col='blue')
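
For the record (this was not in the code typed in class), the empirical autocorrelation function can be compared with the theoretical one, $\rho(h)=\phi_1^h$ for an AR(1), which ARMAacf computes,

plot(0:30,ARMAacf(ar=phi1,lag.max=30),type="h",lwd=5,col='blue',xlab="lag",ylab="ACF")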

or, with a negative first-order autocorrelation,

phi1= -0.7

We can also look at a second-order autoregressive process, $X_t=\phi_1X_{t-1}+\phi_2X_{t-2}+\varepsilon_t$,

as on the figure below (with, on the top left, the stationarity triangle for the pair of parameters).

phi1=  0.3
phi2=  0.5
X=rep(NA,n)
X[1:2]=0
for(t in 3:n){
X[t]=phi1*X[t-1]+phi2*X[t-2]+bruit[t]}

Just for a change, we can look at a first-order moving average process, $X_t=\varepsilon_t+\theta_1\varepsilon_{t-1}$, where $\theta_1$ is a real parameter.

theta1=  .8
X=rep(NA,n)
X[1]=0
for(t in 2:n){
X[t]=bruit[t]+theta1*bruit[t-1]}

or a second-order moving average process,

theta1= -.6
theta2=  .5
X=rep(NA,n)
X[1:2]=0
for(t in 3:n){
X[t]=bruit[t]+theta1*bruit[t-1]+
theta2*bruit[t-2]}
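
Note that all those processes could also be generated with the built-in arima.sim function; for instance (a sketch, with the same parameters as above),

X=arima.sim(n=n,model=list(ar=c(phi1,phi2)))       # the AR(2) with (0.3, 0.5)
X=arima.sim(n=n,model=list(ma=c(theta1,theta2)))   # the MA(2) with (-0.6, 0.5)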

 

Inference and autoregressive processes

Consider a (stationary) autoregressive process, say of order 2,

$$Y_t=\varphi_1 Y_{t-1}+\varphi_2 Y_{t-2}+\varepsilon_t$$

for some white noise $(\varepsilon_t)$ with variance $\sigma^2$. Here is some code to generate such a process,

> phi1=.5
> phi2=-.4
> sigma=1.5
> set.seed(1)
> n=240
> WN=rnorm(n,sd=sigma)
> Z=rep(NA,n)
> Z[1:2]=rnorm(2,0,1)
> for(t in 3:n){Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+WN[t]}
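
Before estimating anything, we can check that this pair of parameters yields a stationary (causal) process: the roots of $1-\varphi_1z-\varphi_2z^2$ should lie outside the unit circle (a quick check, not in the original post),

> Mod(polyroot(c(1,-phi1,-phi2)))   # both moduli equal sqrt(1/0.4), about 1.58, hence > 1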

Here, we have to estimate two sets of parameters: the autoregressive coefficients $(\varphi_1,\varphi_2)$, and the variance $\sigma^2$ of the innovation process. There are (at least) three techniques to estimate those parameters.

  • using least square regression

A natural idea is to see here a regression model, and thus, if we consider a matrix formulation, $\boldsymbol{Y}=\boldsymbol{X}\boldsymbol{\varphi}+\boldsymbol{\varepsilon}$, where the rows of $\boldsymbol{X}$ contain the lagged values $(Y_{t-1},Y_{t-2})$.

Here we can run (conditional) ordinary least squares estimation,

> base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
> regression=lm(Y~0+X1+X2,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X1 + X2, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-4.3491 -0.8890 -0.0762  0.9601  3.6105 

Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1  0.45107    0.05924   7.615 6.34e-13 ***
X2 -0.41454    0.05924  -6.998 2.67e-11 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.449 on 236 degrees of freedom
Multiple R-squared: 0.2561,	Adjusted R-squared: 0.2497
F-statistic: 40.61 on 2 and 236 DF,  p-value: 6.949e-16

> regression$coefficients
        X1         X2 
 0.4510703 -0.4145365 
> summary(regression)$sigma
[1] 1.449276
  • using Yule-Walker equations

As we’ve seen in class, we can easily get the following equations for the autocovariance functions,

$$\begin{cases}\gamma(1)=\varphi_1\gamma(0)+\varphi_2\gamma(1)\\ \gamma(2)=\varphi_1\gamma(1)+\varphi_2\gamma(0)\end{cases}$$

which can also be written

$$\begin{pmatrix}\gamma(1)\\ \gamma(2)\end{pmatrix}=\begin{pmatrix}\gamma(0) & \gamma(1)\\ \gamma(1) & \gamma(0)\end{pmatrix}\begin{pmatrix}\varphi_1\\ \varphi_2\end{pmatrix}$$

So we just have to solve a simple linear system of equations. Note that if we divide by the variance, those equations can be written in terms of the autocorrelation functions,

$$\begin{pmatrix}\rho(1)\\ \rho(2)\end{pmatrix}=\begin{pmatrix}1 & \rho(1)\\ \rho(1) & 1\end{pmatrix}\begin{pmatrix}\varphi_1\\ \varphi_2\end{pmatrix}$$

The code is the following

> rho1=cor(Z[1:(n-1)],Z[2:n])
> rho2=cor(Z[1:(n-2)],Z[3:n])
> A=matrix(c(1,rho1,rho1,1),2,2)
> b=matrix(c(rho1,rho2),2,1)
> (PHI=solve(A,b))
           [,1]
[1,]  0.4517579
[2,] -0.4155920

Now, we need to extract the estimated innovation process, from this set of parameters (note that it could be possible to include the variance term in Yule-Walker equations, to get a three dimensional linear equation)

> estWN=base$Y-(PHI[1]*base$X1+PHI[2]*base$X2)
> sd(estWN)
[1] 1.445706

This estimator is probably not the best one (we can take into account that we’ve lost two degrees of freedom), but as a starting point, let us consider this one.

  • using (conditional) likelihood estimators

Finally, we can assume some distribution for the innovation process. The standard model is a Gaussian model, i.e. $Y_t\mid Y_{t-1}=y_{t-1},Y_{t-2}=y_{t-2}$ has a $\mathcal{N}(\varphi_1y_{t-1}+\varphi_2y_{t-2},\sigma^2)$ distribution.

In that case, the conditional log likelihood (conditional since we set the first two observations here) is

> CondLogLik=function(A,TS){
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]	; L=0
+ for(t in 3:length(TS)){
+ L=L+dnorm(TS[t],mean=phi1*TS[t-1]+
+ phi2*TS[t-2],sd=sigma,log=TRUE)}
+ return(-L)}

Now, we can run standard optimization procedures,

> LogL=function(A) CondLogLik(A,TS=Z)
> optim(c(0,0,1),LogL)
$par
[1]  0.4509685 -0.4144938  1.4430930

$value
[1] 425.0164

$counts
function gradient 
      88       NA 

$convergence
[1] 0

$message
NULL

Here, our three estimators are rather close. Actually, if we generate 1,000 time series (of size 240), those are the Box-plots of our three estimators, for the first order autoregressive coefficient

for the second one,

and finally for the standard deviation of the innovation process

All those estimators behave nicely, and are rather close. Note that they all might be biased, but they are consistent (see, for instance, Davidson and MacKinnon's book for more details).