
Sequences defined using a Linear Recurrence

In the introduction to the time series course (MAT8181) this morning, we spent some time on the expression of (deterministic) sequences defined by a linear recurrence (we will need those results later on, so I wanted to make sure they were familiar to everyone).

  • First order recurrence

The simplest case is the first order recurrence, $u_n = a + b u_{n-1}$, where $b \neq 1$ (for convenience). Observe that we can remove the constant using a simple translation, $\underbrace{[u_n - m]}_{v_n} = b\,\underbrace{[u_{n-1} - m]}_{v_{n-1}}$, if $m = a/(1-b)$. So, from this point on, we will always remove the constant from the recurrence equation, and work with $v_n = b v_{n-1}$. From this equation, observe that $v_n = b^n v_0$, which is the general expression of $v_n$.
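
As a quick sanity check (a minimal sketch in R, with arbitrary values for $a$, $b$ and $u_0$), we can verify that iterating $u_n = a + b u_{n-1}$ gives the same values as the closed form $u_n = m + b^n(u_0 - m)$:

> a=2; b=.5; u0=1                        # arbitrary values, with b != 1
> m=a/(1-b)                              # the translation constant m = a/(1-b)
> u=u0
> for(i in 1:10) u=c(u,a+b*u[length(u)]) # iterate u_n = a + b u_{n-1}
> u                                      # iterated values u_0, ..., u_10
> m+b^(0:10)*(u0-m)                      # closed form, should match the line above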

  • Second order recurrence

Consider now a second order recurrence, $v_n = a v_{n-1} + b v_{n-2}$. In order to find the general expression of $v_n$, define $\boldsymbol{V}_n = (v_n, v_{n-1})^{\mathsf{T}}$. Then
$$\underbrace{\begin{bmatrix}v_n\\v_{n-1}\end{bmatrix}}_{\boldsymbol{V}_n}=\underbrace{\begin{bmatrix}a & b\\ 1 & 0\end{bmatrix}}_{B}\underbrace{\begin{bmatrix}v_{n-1}\\v_{n-2}\end{bmatrix}}_{\boldsymbol{V}_{n-1}}$$
This time, we have a vectorial linear recurrence equation. But what we did previously still holds: for instance,
$$\boldsymbol{V}_n = B\boldsymbol{V}_{n-1} = \cdots = B^n\boldsymbol{V}_0$$
What can we say about $B^n$? If $B$ can be diagonalized, then $B = P\Delta P^{-1}$ and $B^n = P\Delta^n P^{-1}$. Thus,
$$\underbrace{\begin{bmatrix}v_n\\v_{n-1}\end{bmatrix}}_{\boldsymbol{V}_n}=B^n\underbrace{\begin{bmatrix}v_0\\v_{-1}\end{bmatrix}}_{\boldsymbol{V}_0}=P\underbrace{\begin{bmatrix}\lambda_1^n & 0\\ 0 & \lambda_2^n\end{bmatrix}}_{\Delta^n}P^{-1}\underbrace{\begin{bmatrix}v_0\\v_{-1}\end{bmatrix}}_{\boldsymbol{V}_0}$$
so what we get here is something like
$$v_n = \alpha\lambda_1^n + \beta\lambda_2^n$$
for some constants $\alpha$ and $\beta$. Recall that $\lambda_1$ and $\lambda_2$ are the eigenvalues of the matrix $B$, and that they are also the roots of the characteristic polynomial $P(x) = x^2 - ax - b$. Since $a$ and $b$ are real-valued, this polynomial has two roots, possibly identical, possibly complex (but then conjugate). An interesting case is obtained when the roots are $re^{\pm i\theta}$. In that case,
$$v_n = r^n\big(\alpha\cos(n\theta) + \beta\sin(n\theta)\big)$$
To visualize this general term, consider the following code. A first strategy is to define the sequence, given the two parameters and two starting values, e.g.

> a=.5
> b=-.9
> u1=1; u0=1

Then, we iterate to generate the sequence,

> v=c(u1,u0)
> while(length(v)<100) v=c(a*v[1]+b*v[2],v)
> plot(0:99,rev(v))
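
To see the matrix formulation at work (a quick check, reusing a, b, u0, u1 and v from above), we can build the matrix $B$, diagonalize it with eigen(), and verify that $B^{n-1}\boldsymbol{V}_1$, computed as $P\Delta^{n-1}P^{-1}\boldsymbol{V}_1$, returns the same value as the iteration:

> B=matrix(c(a,1,b,0),2,2)              # the matrix [[a,b],[1,0]]
> E=eigen(B)                            # B = P Delta P^{-1}
> P=E$vectors; D=diag(E$values)
> V1=c(u1,u0)                           # V_1 = (v_1, v_0)
> n=10
> Vn=P%*%(D^(n-1))%*%solve(P)%*%V1      # B^(n-1) V_1 = (v_n, v_(n-1))
> Re(Vn[1])                             # v_n from the matrix formula
> rev(v)[n+1]                           # v_n from the iteration above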

It is also possible to use the general expression we have just derived. Here, the roots of the characteristic polynomial are

> r=polyroot(c(-b, -a, 1))
> r
[1] 0.25+0.9151503i 0.25-0.9151503i
> plot(r,xlim=c(-1.1,1.1),ylim=c(-1.1,1.1),pch=19,col="red")
> u=seq(-1,1,by=.01)
> lines(u,sqrt(1-u^2),lty=2)
> lines(u,-sqrt(1-u^2),lty=2)
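
To connect the roots with the polar form $re^{\pm i\theta}$, we can extract their modulus and argument; since the product of the two roots equals $-b$, the modulus should be $\sqrt{-b}$ here (when the roots are complex conjugates):

> Mod(r)          # the common modulus of the two (conjugate) roots
> sqrt(-b)        # the product of the roots is -b, so the modulus is sqrt(-b)
> Arg(r)          # the arguments, +theta and -theta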

[Figure: the two conjugate roots plotted in the complex plane, with the unit circle (http://freakonometrics.hypotheses.org/files/2014/01/Selection_546.png)]

Since $v_n = \alpha\lambda_1^n + \beta\lambda_2^n$, with
$$\begin{cases}\alpha + \beta = v_0\\ \alpha\lambda_1 + \beta\lambda_2 = v_1\end{cases}$$
it is possible to derive numerical values for the two coefficients. And if $v_n = r^n\big(A\cos(n\theta) + B\sin(n\theta)\big)$, then $A = \alpha + \beta$ while $B = i(\alpha - \beta)$. Thus,

> A=Re(sum(solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1))))      # A = alpha + beta
> B=Re(-diff(solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1)))*complex(real=0,imaginary=1))   # B = i (alpha - beta)
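
We can also keep the two coefficients $\alpha$ and $\beta$ themselves (solving the same linear system as above) and check that the closed form $\alpha\lambda_1^n + \beta\lambda_2^n$ does reproduce the iterated sequence:

> ab=solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1))   # (alpha, beta)
> n=0:99
> w=Re(ab[1]*r[1]^n+ab[2]*r[2]^n)                   # alpha lambda1^n + beta lambda2^n
> max(abs(w-rev(v)))                                # numerically zero (up to rounding)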

We can plot the sequence of points

> plot(0:99,rev(v))

and then we can add the (damped) sine wave, together with its exponential envelope,

> t=seq(0,100,by=.1)
> bv=function(t) Mod(r)[1]^t                                           # envelope r^t
> fv=function(t) Mod(r)[1]^t*(A*cos(t*Arg(r)[1])+B*sin(t*Arg(r)[1]))   # closed form r^t (A cos + B sin)
> lines(t,Vectorize(bv)(t),col="red",lty=2)
> lines(t,-Vectorize(bv)(t),col="red",lty=2)
> lines(t,Vectorize(fv)(t),col="blue")

We will see a lot of graphs like this one in the course, when looking at autocorrelation functions.

  • Higher order recurrence

More generally, for a recurrence of order $p$, $v_n = b_1 v_{n-1} + b_2 v_{n-2} + \cdots + b_p v_{n-p}$, we can write
$$\underbrace{\begin{bmatrix}v_n\\v_{n-1}\\v_{n-2}\\ \vdots\\ v_{n-p+1}\end{bmatrix}}_{\boldsymbol{V}_n}=\underbrace{\begin{bmatrix}b_1 & b_2 & b_3 & \cdots & b_{p-1} & b_p\\ 1 & 0 & 0 & \cdots & 0 & 0\\ 0 & 1 & 0 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 1 & 0\end{bmatrix}}_{B}\underbrace{\begin{bmatrix}v_{n-1}\\v_{n-2}\\v_{n-3}\\ \vdots\\ v_{n-p}\end{bmatrix}}_{\boldsymbol{V}_{n-1}}$$
The matrix is a so-called companion matrix, and similar results can be obtained for the expression of the general term of the sequence. If all this is not familiar, I strongly recommend reading carefully a textbook on sequences and linear recurrences.
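
As a small sketch (with an arbitrary third order recurrence, $v_n = b_1 v_{n-1} + b_2 v_{n-2} + b_3 v_{n-3}$), we can build the companion matrix in R and check that its eigenvalues coincide with the roots of the characteristic polynomial $x^3 - b_1 x^2 - b_2 x - b_3$:

> b=c(.5,-.3,.2)                  # arbitrary coefficients b_1, b_2, b_3
> p=length(b)
> B=rbind(b,cbind(diag(p-1),0))   # companion matrix: first row b, then a shifted identity
> eigen(B)$values                 # eigenvalues of B
> polyroot(c(-rev(b),1))          # roots of x^3 - b1 x^2 - b2 x - b3 (same values, up to ordering)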

Graduate Course on Time Series

This winter, I will be giving a (graduate) course on time series, MAT8181. It is an ISM course, and even though it will probably be given in French, I will upload information here, in English. I will upload the (detailed) syllabus of the course during the Christmas holidays. But to give an overview, for those willing to register, the first part of the course will focus on linear models, univariate and then multivariate. The references for this first part are

In the second part, we will introduce non-linear models used in financial econometrics, from ARCH to GARCH, as well as stochastic volatility models. The references for this second part are

[a pdf version can be found on Eric Zivot’s webpage]

Specific references and more details about the chapters will be given during the course. I will upload exercises this winter, as well as a list of articles that will be used for projects. Examples will be illustrated using R functions from dedicated packages.

Grades will be based on exercises (homework), a report (based on a published paper), and a final written exam.