Category Archives: MAT8181

Triangle for Parameters of AR(2) Stationary Processes

Yesterday, we’ve seen conditions on https://latex.codecogs.com/gif.latex?(\phi_1,\phi_2) under which the canonical https://latex.codecogs.com/gif.latex?AR(2) process, https://latex.codecogs.com/gif.latex?(X_t), satisfying

https://latex.codecogs.com/gif.latex?X_t=\phi_1%20X_{t-1}+\phi_2%20X_{t-2}+\varepsilon_t

is stationary. The condition is rather simple, since https://latex.codecogs.com/gif.latex?(\phi_1,\phi_2) should belong to a triangular region. But the proof is a bit more tricky…

Recall that we want to parametrize the region

https://latex.codecogs.com/gif.latex?\{(\phi_1%20,\phi_2)\in\mathbb{R}^2:%201-\phi_1z-\phi_2z^2\neq%200,\forall%20z\in\mathbb{C},\vert\vert%20z\vert\vert%20\leq%201\}

Since we have a true https://latex.codecogs.com/gif.latex?AR(2) process, https://latex.codecogs.com/gif.latex?\phi_2\neq%200. The polynomial here is

https://latex.codecogs.com/gif.latex?\Phi(z)=1-\phi_1z-\phi_2z^2=\left(1-\frac{z}{\lambda_1}\right)\left(1-\frac{z}{\lambda_2}\right)

where the https://latex.codecogs.com/gif.latex?\lambda_i's are the roots – in https://latex.codecogs.com/gif.latex?\mathbb{C} – of https://latex.codecogs.com/gif.latex?\Phi(\cdot). Consider now some kind of dual version of that polynomial,

https://latex.codecogs.com/gif.latex?\tilde\Phi(z)=\left(1-{z}{\lambda_1}\right)\left(1-{z}{\lambda_2}\right)=1+\frac{\phi_1}{\phi_2}z-\frac{1}{\phi_2}z^2

Having the roots of https://latex.codecogs.com/gif.latex?\Phi(\cdot) outside the unit circle is the same as having the roots of https://latex.codecogs.com/gif.latex?\tilde\Phi(\cdot) inside the unit circle. Observe that we can write

https://latex.codecogs.com/gif.latex?\tilde\Phi(z)=-\frac{1}{\phi_2}(\underbrace{z^2-\phi_1%20z-\phi_2}_{\bar{\Phi}(z)})

Roots of https://latex.codecogs.com/gif.latex?\bar{\Phi}(\cdot) are then

https://latex.codecogs.com/gif.latex?\xi%20=%20\frac{1}{2}\left(\phi_1\pm\sqrt{\phi_1^2+4\phi_2}\right)

From this point, we have to distinguish a few cases, depending on the sign of https://latex.codecogs.com/gif.latex?\Delta=\phi_1^2+4\phi_2.

  • if https://latex.codecogs.com/gif.latex?\Delta=\phi_1^2+4\phi_2=0

Then there is one root, a double one, https://latex.codecogs.com/gif.latex?\xi=\phi_1/2. So we need https://latex.codecogs.com/gif.latex?\vert\phi_1\vert%20%3C2, or equivalently (since https://latex.codecogs.com/gif.latex?\phi_2=-\phi_1^2/4 here) https://latex.codecogs.com/gif.latex?\phi_2%3E-1.

  • if https://latex.codecogs.com/gif.latex?\Delta=\phi_1^2+4\phi_2%3E0

Then we have two distinct roots in https://latex.codecogs.com/gif.latex?\mathbb{R}, and

https://latex.codecogs.com/gif.latex?-1%3C%20\frac{1}{2}\left(\phi_1\pm\sqrt{\phi_1^2+4\phi_2}\right)%3C%201

means, equivalently (write that https://latex.codecogs.com/gif.latex?\bar{\Phi}(1)%3E0 and https://latex.codecogs.com/gif.latex?\bar{\Phi}(-1)%3E0, and that the product of the roots, https://latex.codecogs.com/gif.latex?-\phi_2, is less than 1), that

https://latex.codecogs.com/gif.latex?\phi_2%3E-1%20\%20;%20\%20\phi_2-\phi_1%3C1%20\%20;%20\%20\phi_2+\phi_1%3C1

  • if https://latex.codecogs.com/gif.latex?\Delta=\phi_1^2+4\phi_2%3C0

Then we have two (conjugate) roots in https://latex.codecogs.com/gif.latex?\mathbb{C}, and the squared norm of those roots is https://latex.codecogs.com/gif.latex?\vert\vert\xi\vert\vert^2=-\phi_2. Thus, https://latex.codecogs.com/gif.latex?\phi_2%3E-1.

We get what was mentioned in the course: the canonical https://latex.codecogs.com/gif.latex?AR(2) process has a stationary solution if, and only if,

https://latex.codecogs.com/gif.latex?\left\{\begin{array}{l}%20\phi_2-\phi_1%3C1%20\\\phi_2+\phi_1%3C1\\%20\vert\phi_2\vert%3C1\end{array}\right.

which is a triangular region (the green triangle plotted in the post on visualizing autoregressive time series, below).
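As a quick numerical sanity check (a sketch, not part of the original derivation), we can verify with polyroot that the triangle conditions match the location of the roots of https://latex.codecogs.com/gif.latex?\Phi(\cdot),

> check_ar2=function(phi1,phi2){
+ roots=polyroot(c(1,-phi1,-phi2))    # roots of 1-phi1*z-phi2*z^2
+ c(roots.outside=all(Mod(roots)>1),
+ in.triangle=(phi2-phi1<1)&(phi2+phi1<1)&(abs(phi2)<1))}
> check_ar2(.65,.3)    # inside the triangle: both flags TRUE
> check_ar2(.65,.5)    # phi1+phi2>1: both flags FALSE

The two flags agree for any pair we try (away from the boundary, where numerical tolerance matters).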

Causal Autoregressive Time Series

In the MAT8181 graduate course on Time Series, we will discuss (almost) only causal models. For instance, with https://latex.codecogs.com/gif.latex?AR(1),

https://latex.codecogs.com/gif.latex?X_t=\phi%20X_{t-1}+\varepsilon_t

with some white noise https://latex.codecogs.com/gif.latex?(\varepsilon_t), those models are obtained when https://latex.codecogs.com/gif.latex?\vert%20\phi\vert%20%3C1. In that case, we’ve seen that https://latex.codecogs.com/gif.latex?(\varepsilon_t) was actually the innovation process, and we can write

https://latex.codecogs.com/gif.latex?X_t%20=%20\sum_{h=0}^{+\infty}%20\phi^h%20\varepsilon_{t-h}

which is actually a mean-square convergent series (using standard arguments on series from analysis). From that expression, we can easily see that https://latex.codecogs.com/gif.latex?(X_t) is stationary, since https://latex.codecogs.com/gif.latex?\mathbb{E}(X_t)=0 (which does not depend on https://latex.codecogs.com/gif.latex?t) and

https://latex.codecogs.com/gif.latex?\text{cov}(X_t,X_{t-h})=\frac{\phi^h}{1-\phi^2}\sigma^2

(which does not depend on https://latex.codecogs.com/gif.latex?t).
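To check that numerically (a quick sketch, not in the original post), we can simulate a long causal https://latex.codecogs.com/gif.latex?AR(1) series with arima.sim, and compare the empirical lag-one autocovariance with its theoretical value,

> phi=.8
> X=arima.sim(n=1e6,list(ar=phi))    # causal AR(1), with sigma^2=1
> cov(X[-1],X[-1e6])                 # empirical autocovariance at lag one
> phi/(1-phi^2)                      # theoretical value, about 2.22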

Consider now the case where https://latex.codecogs.com/gif.latex?\vert%20\phi\vert%20%3E1. Clearly, we have some problem here, since

https://latex.codecogs.com/gif.latex?X_t%20=%20\sum_{h=0}^{+\infty}%20\phi^h%20\varepsilon_{t-h}

cannot be defined (the series does not converge, in https://latex.codecogs.com/gif.latex?L^2). Nevertheless, it is still possible to write

https://latex.codecogs.com/gif.latex?X_t=\frac{1}{\phi}%20X_{t{\color{Red}%20+1}}{\color{Red}%20-\frac{1}{\phi}}\varepsilon_{t{\color{Red}%20+1}}

It is then possible to iterate (as in the previous case) and write

https://latex.codecogs.com/gif.latex?X_t%20=%20\sum_{h={\color{Red}%201}}^{+\infty}%20\frac{-1}{\phi^h}%20\varepsilon_{t{\color{Red}%20+h}}

which is actually well defined. And in that case, the sequence of random variables https://latex.codecogs.com/gif.latex?(X_t) obtained from this equation is the unique stationary solution of the recursive equation https://latex.codecogs.com/gif.latex?X_t=\phi%20X_{t-1}+\varepsilon_t. This might be confusing, but this stationary solution should not be confused with the usual non-stationary solution of https://latex.codecogs.com/gif.latex?X_t=\phi%20X_{t-1}+\varepsilon_t obtained by iterating forward from some starting value https://latex.codecogs.com/gif.latex?X_0, as in the code written to generate a time series in the previous post.

Now, let us spend some time with this stationary time series, considered as unnatural in Brockwell and Davis (1991). One point is that, in the previous case (where https://latex.codecogs.com/gif.latex?\vert%20\phi\vert%20%3C1), https://latex.codecogs.com/gif.latex?(\varepsilon_t) was the innovation process, so https://latex.codecogs.com/gif.latex?X_t was not correlated with the future of the noise, https://latex.codecogs.com/gif.latex?\sigma\{\varepsilon_{t+1},\varepsilon_{t+2},\cdots\}. This is no longer the case when https://latex.codecogs.com/gif.latex?\vert%20\phi\vert%20%3E1.

All that looks nice, if you’re willing to understand things at some theoretical level. But what does all that mean, from a computational perspective? Consider some white noise (this noise does exist, whatever time series we then decide to build from it),

> n=10000
> e=rnorm(n)
> plot(e,type="l",col="red")

If we look at the simple case, to start with,

> phi=.8
> X=rep(0,n)
> for(t in 2:n) X[t]=phi*X[t-1]+e[t]

The time series looks stationary; the latest 1,000 observations can be plotted with something like
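> plot(X[(n-1000):n],type="l")    # the last 1,000 simulated values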

Now, if we use the cumulated sum of the noise,

> Y=rep(0,n)
> for(t in 2:n) Y[t]=sum(phi^((0:(t-1)))*e[t-(0:(t-1))])

we get exactly the same process! This should not surprise us, because that’s what the theory told us. Now, consider the problematic case, where https://latex.codecogs.com/gif.latex?\vert%20\phi\vert%20%3E1,
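(As a side check, not in the original post: since https://latex.codecogs.com/gif.latex?X_1 was set to zero, the two series actually differ by the term https://latex.codecogs.com/gif.latex?\phi^{t-1}e_1, which vanishes geometrically,

> max(abs(X[100:n]-Y[100:n]))    # of order phi^99, i.e. essentially zero

so the two constructions agree, except for the first few observations.)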

> phi=1.1
> X=rep(0,n)
> for(t in 2:n) X[t]=phi*X[t-1]+e[t]

Clearly, that series is non-stationary (just look at the first 1,000 values)

Now, if we look at the series obtained from the cumulated sum of future values of the noise

> Y=rep(0,n)
> for(t in 1:(n-1)) Y[t]=-sum((1/phi)^((1:(n-t)))*e[t+(1:(n-t))])   # minus sign, as in the formula above

We get something which is, actually, stationary,

So, what is this series, exactly? If we look at the autocorrelation function,

> acf(Y)

we get the autocorrelation function of a (stationary) https://latex.codecogs.com/gif.latex?AR(1) process,

> acf(Y)[1]

Autocorrelations of series ‘Y’, by lag

    1 
0.908 

> 1/phi
[1] 0.9090909

Observe that there is a white noise – call it https://latex.codecogs.com/gif.latex?(\eta_t) – such that

https://latex.codecogs.com/gif.latex?X_t=\frac{1}{\phi}X_{t-1}+\eta_t

This is what we call the canonical form of the stationary process https://latex.codecogs.com/gif.latex?(X_t).
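To see that noise in the simulation above (a quick sketch, not in the original post), compute https://latex.codecogs.com/gif.latex?\eta_t=X_t-X_{t-1}/\phi from the stationary series, and check that it is, empirically, a white noise with variance https://latex.codecogs.com/gif.latex?\sigma^2/\phi^2,

> eta=Y[2:(n-1)]-(1/phi)*Y[1:(n-2)]    # drop the last term, Y[n] was left at 0
> acf(eta)                             # no significant autocorrelation
> var(eta)*phi^2                       # should be close to var(e), i.e. 1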

Visualizing Autoregressive Time Series

In the MAT8181 graduate course on Time Series, we started discussing autoregressive models. Just to illustrate, here is some code to plot a (causal) https://latex.codecogs.com/gif.latex?AR(1) process,

> graphar1=function(phi){
+ nf <- layout(matrix(c(1,1,1,1,2,3,4,5), 2, 4, byrow=TRUE), respect=TRUE)
+ e=rnorm(n)                                  # n=10000, defined earlier
+ X=rep(0,n)
+ for(t in 2:n) X[t]=phi*X[t-1]+e[t]          # simulate the AR(1)
+ plot(X[1:6000],type="l",ylab="")
+ abline(h=mean(X),lwd=2,col="red")
+ abline(h=mean(X)+2*sd(X),lty=2,col="red")
+ abline(h=mean(X)-2*sd(X),lty=2,col="red")
+ u=seq(-1,1,by=.001)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-2,2),xlim=c(-2.5,2.5))
+ polygon(c(u,rev(u)),c(sqrt(1-u^2),rev(-sqrt(1-u^2))),col="light yellow")   # unit disk
+ abline(v=0,col="grey")
+ abline(h=0,col="grey")
+ points(1/phi,0,pch=19,col="red",cex=1.3)    # root of 1-phi*z
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-.2,.2),xlim=c(-1,1))
+ axis(1)
+ points(phi,0,pch=19,col="red",cex=1.3)      # the parameter phi
+ acf(X,lwd=3,col="blue",main="",ylim=c(-1,1))
+ pacf(X,lwd=3,col="blue",main="",ylim=c(-1,1),xlim=c(0,16))}

e.g.

> graphar1(.8)

or

> graphar1(-.7)

(with, on the bottom, the root of the characteristic polynomial, the value of the parameter https://latex.codecogs.com/gif.latex?\phi_{1}, the autocorrelation function https://latex.codecogs.com/gif.latex?h\mapsto\rho(h) and the partial autocorrelation function https://latex.codecogs.com/gif.latex?h\mapsto\psi(h)).

Of course, it is possible to do something similar with https://latex.codecogs.com/gif.latex?AR(2) processes,

> graphar2=function(phi1,phi2){
+ nf <- layout(matrix(c(1,1,1,1,2,3,4,5), 2, 4, byrow=TRUE), respect=TRUE)
+ e=rnorm(n)                                  # n=10000, defined earlier
+ X=rep(0,n)
+ for(t in 3:n) X[t]=phi1*X[t-1]+phi2*X[t-2]+e[t]   # simulate the AR(2)
+ plot(X[1:6000],type="l",ylab="")
+ abline(h=mean(X),lwd=2,col="red")
+ abline(h=mean(X)+2*sd(X),lty=2,col="red")
+ abline(h=mean(X)-2*sd(X),lty=2,col="red")
+ P=polyroot(c(1,-phi1,-phi2))                # roots of 1-phi1*z-phi2*z^2
+ u=seq(-1,1,by=.001)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-2,2),xlim=c(-2.5,2.5))
+ polygon(c(u,rev(u)),c(sqrt(1-u^2),rev(-sqrt(1-u^2))),col="light yellow")   # unit disk
+ abline(v=0,col="grey")
+ abline(h=0,col="grey")
+ points(P,pch=19,col="red",cex=1.3)          # the two roots
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,xlim=c(-2.1,2.1),ylim=c(-1.2,1.2))
+ polygon(c(-2,0,2,-2),c(-1,1,-1,-1),col="light green")   # the stationarity triangle
+ u=seq(-2,2,by=.001)
+ lines(u,-u^2/4)                             # below the parabola, roots are complex
+ abline(v=seq(-2,2,by=.2),col="grey",lty=2)
+ abline(h=seq(-1,1,by=.2),col="grey",lty=2)
+ segments(0,-1,0,1)
+ axis(1)
+ axis(2)
+ points(phi1,phi2,pch=19,col="red",cex=1.3)  # the pair (phi1,phi2)
+ acf(X,lwd=3,col="blue",main="",ylim=c(-1,1))
+ pacf(X,lwd=3,col="blue",main="",ylim=c(-1,1),xlim=c(0,16))}

For example,

> graphar2(.65,.3)

or

> graphar2(-1.4,-.7)

Sequences defined using a Linear Recurrence

In the introduction to the time series course (MAT8181) this morning, we did spend some time on the expression of (deterministic) sequences defined using a linear recurrence (we will need that later on, so I wanted to make sure that those results were familiar to everyone).

  • First order recurrence

The simplest case is the first order recurrence, https://latex.codecogs.com/gif.latex?u_n=a+b%20u_{n-1}, where https://latex.codecogs.com/gif.latex?b\neq%201 (for convenience). Observe that we can remove the constant, using a simple translation https://latex.codecogs.com/gif.latex?\underbrace{[u_n-m]}_{v_n}%20=%20b%20\underbrace{[u_{n-1}-m]}_{v_{n-1}} with https://latex.codecogs.com/gif.latex?%20m=a/(1-b). So, from this point on, we will always remove the constant in the recurrence equation. Thus, https://latex.codecogs.com/gif.latex?{v_n}%20=%20b{v_{n-1}}. From this equation, observe that https://latex.codecogs.com/gif.latex?{v_n}%20=%20b^n{v_{0}}, which is the general expression of https://latex.codecogs.com/gif.latex?{v_n}.
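As a quick numerical check (with made-up values, not from the lecture notes),

> a=2; b=.5; u0=3
> u=rep(u0,10)
> for(i in 2:10) u[i]=a+b*u[i-1]      # iterate the recurrence
> m=a/(1-b)                           # the fixed point
> max(abs(u-(m+b^(0:9)*(u0-m))))      # the closed form matches: difference is zero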

  • Second order recurrence

Consider now a second order recurrence, https://latex.codecogs.com/gif.latex?{v_n}%20=%20a{v_{n-1}}+b{v_{n-2}}. In order to find the general expression of https://latex.codecogs.com/gif.latex?{v_n}, define https://latex.codecogs.com/gif.latex?\boldsymbol{V}_n%20=(v_{n},v_{n-1})^{\sffamily%20T}. Then

https://latex.codecogs.com/gif.latex?%20\underbrace{\begin{bmatrix}v_n\\v_{n-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_n%20}=%20\underbrace{\begin{bmatrix}a&%20b%20\\%201%20&%200\end{bmatrix}}_B\underbrace{\begin{bmatrix}v_{n-1}%20\\v_{n-2}%20\end{bmatrix}%20}_{\boldsymbol{V}_{n-1}%20}

This time, we have a vectorial linear recurrence equation. But what we’ve done previously still holds. For instance,

https://latex.codecogs.com/gif.latex?%20{\boldsymbol{V}_n%20}=B{\boldsymbol{V}_{n-1}%20}=\cdots=B^n\boldsymbol{V}_{0}

What can we say about https://latex.codecogs.com/gif.latex?%20B^n? If https://latex.codecogs.com/gif.latex?B can be diagonalized, then https://latex.codecogs.com/gif.latex?%20B=P\Delta%20P^{-1} and https://latex.codecogs.com/gif.latex?%20B^n=P\Delta^n%20P^{-1}. Thus,

https://latex.codecogs.com/gif.latex?%20\underbrace{\begin{bmatrix}v_n\\v_{n-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_n%20}=%20B^n%20\underbrace{\begin{bmatrix}v_{0}%20\\v_{-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_{0}%20}=%20P\underbrace{\begin{bmatrix}\lambda_1^n&%200%20\\%200%20&%20\lambda_2^n\end{bmatrix}}_{\Delta^n}%20P^{-1}\underbrace{\begin{bmatrix}v_{0}%20\\v_{-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_{0}%20}

so what we get here is something like

https://latex.codecogs.com/gif.latex?v_n%20=%20\alpha%20\lambda_1^n%20+\beta\lambda_2^n

for some constants https://latex.codecogs.com/gif.latex?%20\alpha and https://latex.codecogs.com/gif.latex?%20\beta. Recall that https://latex.codecogs.com/gif.latex?\lambda_1 and https://latex.codecogs.com/gif.latex?\lambda_2 are the eigenvalues of the matrix https://latex.codecogs.com/gif.latex?B, and they are also the roots of the characteristic polynomial https://latex.codecogs.com/gif.latex?%20P(x)=x^2%20-%20ax%20-%20b. Since https://latex.codecogs.com/gif.latex?%20a and https://latex.codecogs.com/gif.latex?%20b are real-valued, the polynomial has two roots, possibly identical, possibly complex (but then conjugate). An interesting case is obtained when the roots are https://latex.codecogs.com/gif.latex?%20re^{\pm%20i\theta}. In that case,

https://latex.codecogs.com/gif.latex?%20v_n%20=r^n(A\cos(n\theta)%20+%20B\sin(n\theta))

for some (real) constants https://latex.codecogs.com/gif.latex?A and https://latex.codecogs.com/gif.latex?B. To visualize this general term, consider the following code. A first strategy is to define the sequence, given the two parameters, and two starting values. E.g.

> a=.5
> b=-.9
> u1=1; u0=1

Then, we iterate to generate the sequence,

> v=c(u1,u0)
> while(length(v)<100) v=c(a*v[1]+b*v[2],v)
> plot(0:99,rev(v))

It is also possible to use the generic expression we’ve just seen. Here, the roots of the characteristic polynomial are

> r=polyroot(c(-b, -a, 1))
> r
[1] 0.25+0.9151503i 0.25-0.9151503i
> plot(r,xlim=c(-1.1,1.1),ylim=c(-1.1,1.1),pch=19,col="red")
> u=seq(-1,1,by=.01)
> lines(u,sqrt(1-u^2),lty=2)
> lines(u,-sqrt(1-u^2),lty=2)

Since https://latex.codecogs.com/gif.latex?v_n%20=%20\alpha%20\lambda_1^n%20+\beta\lambda_2^n, taking https://latex.codecogs.com/gif.latex?n=0 and https://latex.codecogs.com/gif.latex?n=1 gives

https://latex.codecogs.com/gif.latex?%20\begin{cases}%20\alpha%20+%20\beta%20=%20v_0%20\\%20\alpha%20r_1%20+%20\beta%20r_2%20=%20v_1%20\end{cases}

so it is possible to derive numerical expressions for the two parameters. And if https://latex.codecogs.com/gif.latex?%20v_n%20=r^n(A\cos(n\theta)%20+%20B\sin(n\theta)), then https://latex.codecogs.com/gif.latex?A=\alpha+\beta while https://latex.codecogs.com/gif.latex?B=i(\alpha-\beta). Thus,

> A=sum(solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1)))     # A = alpha+beta
> B=-diff(solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1)))*complex(real=0,imaginary=1)   # B = i*(alpha-beta)

We can plot the sequence of points

> plot(0:99,rev(v))

and then we can add the sine wave,

> t=seq(0,100,by=.1)
> bv=function(t) Mod(r)[1]^t
> fv=function(t) Mod(r)[1]^t*(A*cos(t*Arg(r)[1])+B*sin(t*Arg(r)[1]))
> lines(t,Vectorize(bv)(t),col="red",lty=2)    # upper envelope, r^t
> lines(t,-Vectorize(bv)(t),col="red",lty=2)   # lower envelope, -r^t
> lines(t,Vectorize(fv)(t),col="blue")         # the sine wave through the points

We will see a lot of graphs like this in the course, when looking at autocorrelation functions.

  • Higher order recurrence

More generally, we can write

https://latex.codecogs.com/gif.latex?%20\underbrace{\begin{bmatrix}v_n\\v_{n-1}\\v_{n-2}\\%20\vdots%20\\%20v_{n-p+1}%20\end{bmatrix}%20}_{\boldsymbol{V}_n%20}=%20\underbrace{\begin{bmatrix}b_{1}%20&%20b_{2}%20&b_3&%20\cdots%20&%20b_{p}%20\\%201%20&%200%20&%200&%20\cdots%20&0\\%200%20&%201%20&%200&%20\cdots%20&0\\%20\vdots%20&%20\vdots%20&%20\vdots%20&%20\ddots%20&%20\vdots%20\\%200%20&%200%20&%20\cdots%20&%201%20&%200\end{bmatrix}}_B\underbrace{\begin{bmatrix}v_{n-1}%20\\v_{n-2}\\v_{n-3}%20\\%20\vdots%20\\%20v_{n-p}%20\end{bmatrix}%20}_{\boldsymbol{V}_{n-1}%20}

The matrix https://latex.codecogs.com/gif.latex?B is a so-called companion matrix. And similar results can be obtained for the expression of the general term of the sequence. If all that is not familiar, I strongly recommend reading carefully a textbook on sequences and linear recurrences.
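To illustrate (a small sketch, with made-up coefficients), we can build a companion matrix in R, check that one step of https://latex.codecogs.com/gif.latex?\boldsymbol{V}_n=B\boldsymbol{V}_{n-1} reproduces the recurrence, and verify that its eigenvalues are the roots of the characteristic polynomial,

> b=c(.4,.3,.2); p=length(b)      # hypothetical coefficients b_1, b_2, b_3
> B=rbind(b,cbind(diag(p-1),0))   # the companion matrix
> V=c(1,2,3)                      # starting values (v_2,v_1,v_0)
> (B%*%V)[1]                      # v_3 = .4*1+.3*2+.2*3 = 1.6
> eigen(B)$values                 # roots of x^3-.4x^2-.3x-.2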

Graduate Course on Time Series

This Winter, I will be giving a (graduate) course on time series, MAT8181. It is an ISM course, and even though it will probably be given in French, I will upload information here, in English. I will upload the (detailed) syllabus of the course during the Christmas holidays. But to give an overview, for those willing to register, the first part of the course will focus on linear models, univariate and then multivariate. The references for this first part are

In the second part, we will introduce non-linear models, used in financial econometrics, from ARCH to GARCH, as well as stochastic volatility models. The references for this second part are

[a pdf version can be found on Eric Zivot’s webpage]

Specific references and more details about the chapters will be given during the course. I will upload exercises this winter, as well as a list of articles that will be used for projects. Examples will be illustrated using R functions from dedicated packages.

Grades will be based on exercises (homework), a report (based on a published paper) and a final written exam.