Inference for AR(p) Time Series

Consider a (stationary) autoregressive process, say of order 2,

https://latex.codecogs.com/gif.latex?Y_t%20=\varphi_1%20Y_{t-1}+\varphi_2%20Y_{t-2}+\varepsilon_t

for some white noise https://latex.codecogs.com/gif.latex?(\varepsilon_t), with variance https://latex.codecogs.com/gif.latex?\sigma^2. Here is some code to generate such a process,

> phi1=.25
> phi2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+e[t]
> Z=Z[800:1000]
> n=length(Z)
> plot(Z,type="l")

Here, we have to estimate two sets of parameters: the autoregressive coefficients https://latex.codecogs.com/gif.latex?(\varphi_1,\varphi_2), and the variance of the innovation process https://latex.codecogs.com/gif.latex?\sigma^2. Several techniques can be used to estimate those parameters.

  • using least squares regression

A natural idea is to see here a regression model, since (if we consider a matrix formulation)

https://latex.codecogs.com/gif.latex?\left[\begin{array}{c}%20Y_3\\%20\vdots\\%20Y_n\end{array}\right]=\left[\begin{array}{cc}%20Y_2%20&%20Y_1\\%20\vdots%20&%20\vdots\\%20Y_{n-1}%20&%20Y_{n-2}\end{array}\right]\left[\begin{array}{c}%20\varphi_1\\%20\varphi_2\end{array}\right]+\left[\begin{array}{c}%20\varepsilon_3\\%20\vdots\\%20\varepsilon_n\end{array}\right]

Here we can run a (conditional) ordinary least squares estimation,

> base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
> regression=lm(Y~0+X1+X2,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X1 + X2, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.0268 -0.7063  0.1065  0.6925  3.2566 

Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1  0.23400    0.05463   4.283 2.88e-05 ***
X2  0.62863    0.05476  11.479  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.062 on 197 degrees of freedom
Multiple R-squared:  0.6349,	Adjusted R-squared:  0.6312 
F-statistic: 171.3 on 2 and 197 DF,  p-value: < 2.2e-16

so we get the following estimates for the autoregressive coefficients, and for the volatility of the noise,

> regression$coefficients
       X1        X2 
0.2339959 0.6286321 
> summary(regression)$sigma
[1] 1.061839
  • using Yule-Walker equations

As we’ve seen in class, we can easily get the following equations for the autocovariance functions,

https://latex.codecogs.com/gif.latex?\left\{\begin{array}{l}%20\gamma_1=\varphi_1%20\gamma_0+\varphi_2%20\gamma_1%20\\%20\gamma_2=\varphi_1%20\gamma_1+\varphi_2%20\gamma_0\end{array}\right.

which can also be written (again, using a matrix expression)

https://latex.codecogs.com/gif.latex?\left[\begin{array}{c}%20\gamma_1\\%20\gamma_2\end{array}\right]=\left[\begin{array}{cc}%20\gamma_0%20&%20\gamma_1\\%20\gamma_1%20&%20\gamma_0\end{array}\right]\left[\begin{array}{c}%20\varphi_1\\%20\varphi_2\end{array}\right]

So we just have to solve a simple linear system of equations. Note that if we divide by the variance, those equations can be written in terms of the autocorrelation functions,

https://latex.codecogs.com/gif.latex?\left[\begin{array}{c}%20\rho_1\\%20\rho_2\end{array}\right]=\left[\begin{array}{cc}%201%20&%20\rho_1\\%20\rho_1%20&%201\end{array}\right]\left[\begin{array}{c}%20\varphi_1\\%20\varphi_2\end{array}\right]

The code is the following

> rho1=cor(Z[1:(n-1)],Z[2:n])
> rho2=cor(Z[1:(n-2)],Z[3:n])
> A=matrix(c(1,rho1,rho1,1),2,2)
> b=matrix(c(rho1,rho2),2,1)
> (PHI=solve(A,b))
          [,1]
[1,] 0.2256270
[2,] 0.6315329

Now, we need to extract the estimated innovation process from this set of parameters,

> estWN=base$Y-(PHI[1]*base$X1+PHI[2]*base$X2)
> sd(estWN)
[1] 1.058558

This estimator is probably not the best one (we can take into account that we’ve lost two degrees of freedom), but as a starting point, let us consider this one.
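If we want to take those lost degrees of freedom into account, a minimal adjustment (just a sketch) could be to divide the sum of squared residuals by the number of observations minus the two estimated coefficients, instead of using sd(),

> sqrt(sum(estWN^2)/(length(estWN)-2))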

An alternative could be to include the variance term in the Yule-Walker equations, to get a three-dimensional linear system,

https://latex.codecogs.com/gif.latex?\left\{\begin{array}{l}%20\gamma_0%20=%20\varphi_1%20\gamma_1+\varphi_2%20\gamma_2+\sigma^2\\%20\gamma_1=\varphi_1%20\gamma_0+\varphi_2%20\gamma_1%20\\%20\gamma_2=\varphi_1%20\gamma_1+\varphi_2%20\gamma_0\end{array}\right.

It is not much more complicated to solve, actually,

> gamma0=var(Z[1:n])
> gamma1=var(Z[1:(n-1)],Z[2:n])
> gamma2=var(Z[1:(n-2)],Z[3:n])
> A=matrix(c(gamma1,gamma0,gamma1,gamma2,gamma1,gamma0,1,0,0),3,3)
> b=matrix(c(gamma0,gamma1,gamma2),3,1)
> (PHISIGMA=solve(A,b))
          [,1]
[1,] 0.2283151
[2,] 0.6283431
[3,] 1.1335501
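As a quick sanity check, we could also compare with R’s built-in Yule-Walker routine (just a sketch; note that ar() demeans the series by default, so the numbers may differ slightly from ours),

> fit=ar(Z,order.max=2,aic=FALSE,method="yule-walker")
> fit$ar        # estimated autoregressive coefficients
> fit$var.pred  # estimated innovation variance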
  • using (conditional) likelihood estimators

Finally, we can assume some distribution for the innovation process. The standard model is a Gaussian model, i.e.

https://latex.codecogs.com/gif.latex?Y_t\vert%20Y_{t-1}=y_{t-1},Y_{t-2}=y_{t-2}

has a Gaussian distribution

https://latex.codecogs.com/gif.latex?\mathcal{N}(\varphi_1y_{t-1}+\varphi_2y_{t-2},\sigma^2)

In that case, the conditional log-likelihood (conditional since the first two observations are taken as given) is

https://latex.codecogs.com/gif.latex?\log\mathcal{L}(\varphi_1,\varphi_2,\sigma^2)=\sum_{t=3}^{n}\log%20f(y_t\vert%20y_{t-1},y_{t-2})=-\frac{n-2}{2}\log(2\pi\sigma^2)-\frac{1}{2\sigma^2}\sum_{t=3}^{n}(y_t-\varphi_1%20y_{t-1}-\varphi_2%20y_{t-2})^2

and the following function returns its opposite, so that it can be minimized,

> CondLogLik=function(A,TS){
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]; L=0
+ for(t in 3:length(TS)){
+ L=L+dnorm(TS[t],mean=phi1*TS[t-1]+
+ phi2*TS[t-2],sd=sigma,log=TRUE)}
+ return(-L)}

Now, we can run standard optimization procedures,

> LogL=function(A) CondLogLik(A,TS=Z)
> optim(c(0,0,1),LogL)
$par
[1] 0.2339589 0.6285002 1.0565613

$value
[1] 293.3042

$counts
function gradient 
     106       NA 

$convergence
[1] 0

$message
NULL
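For comparison, a closely related conditional approach is available through R’s arima() function, with the conditional sum of squares method (a sketch; include.mean=FALSE since the simulated series is centered),

> arima(Z,order=c(2,0,0),include.mean=FALSE,method="CSS")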

It is also possible to consider a global maximum likelihood optimisation problem, since the variance matrix of the vector https://latex.codecogs.com/gif.latex?\boldsymbol{Y}=(Y_1,\cdots,Y_t) has a known form.

  • using (unconditional) likelihood estimators

The variance matrix of https://latex.codecogs.com/gif.latex?\boldsymbol{Y}=(Y_1,\cdots,Y_t) is https://latex.codecogs.com/gif.latex?\boldsymbol{\Gamma}=[\gamma(\vert%20i-j\vert)], where the autocovariances are not known explicitly, but can easily be computed using a recursive relationship.

> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=1
+ rho[2]=phi1/(1-phi2)
+ for(h in 3:n) rho[h]=phi1*rho[h-1]+phi2*rho[h-2]
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1-phi2)*sigma^2/((1+phi2)*((1-phi2)^2-phi1^2))
+ SIG=gamma0*SIG
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
Error in chol.default(x, pivot = FALSE) : 
Error in pd.solve(varcov, log.det = TRUE) : 
  x appears to be not positive definite

The problem is that there is a strong constraint on the pair https://latex.codecogs.com/gif.latex?(\varphi_1,\varphi_2) to get a stationary process (we are not far away, here, from the border of the triangle, where the process becomes non-stationary). To be more specific (this was mentioned in a previous post), we should have

https://latex.codecogs.com/gif.latex?\left\{\begin{array}{l}%20\varphi_2-\varphi_1%3C1%20\\\varphi_2+\varphi_1%3C1\\%20\vert\varphi_2\vert%3C1\end{array}\right.

i.e. in a standard matrix form

https://latex.codecogs.com/gif.latex?\left[\begin{array}{cc}%20+1%20&%20-1%20\\%20-1%20&%20-1%20\\%200%20&%20+1\end{array}\right]\left[\begin{array}{c}%20\varphi_1%20\\%20\varphi_2\end{array}\right]%20%3E%20\left[\begin{array}{c}%20-1%20\\%20-1%20\\%20-1\end{array}\right]

(we can add an additional constraint on the variance parameter, to ensure that it will be positive). To run a constrained optimization routine, consider

> U=matrix(c(1,0,0,-1,0,1,0,-1,0,0,1,0),4,3)
> C=c(0,0,0,-.99999)
> constrOptim(c(.1,.1,1),LogL,grad=NULL,ui=U,ci=C)
$par
[1] 0.2238892 0.6342850 1.0613388

$value
[1] 297.9202

$counts
function gradient 
     108       NA 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] 0.000189892

(here, to speed up convergence, we also restrict the autoregressive parameters to be positive).
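Note that this exact (unconditional) Gaussian likelihood can also be maximized with arima(), which uses a state-space representation and therefore avoids building the full variance matrix (a sketch),

> arima(Z,order=c(2,0,0),include.mean=FALSE,method="ML")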

  • comparing those estimates

Here, our five estimators are rather close. Let us run more simulations to see more precisely how they behave (a sketch of the simulation loop is given at the end of this section). For the first parameter https://latex.codecogs.com/gif.latex?\widehat{\varphi_1}, we get

and for the second one, https://latex.codecogs.com/gif.latex?\widehat{\varphi_2}, we have

The bias we observe is probably coming from the fact that, with this numerical example, we are not far away from the non-stationary case (the sum of the true parameters should be less than 1, and it is 0.95). When we estimate the parameters, we force them to be inside the triangle, since those parameters can be estimated only if the process is stationary.

Observe that the standard deviation of the innovation process https://latex.codecogs.com/gif.latex?\widehat{\sigma} is well estimated here,

(with clearly some estimators that perform better than others).
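A minimal sketch of such a simulation loop (not the exact code used for the figures; here only with the least squares estimator, the other estimators being handled in exactly the same way on each simulated sample, and with EST as a placeholder name),

> ns=1000
> EST=matrix(NA,ns,2)
> for(s in 1:ns){
+ e=rnorm(1000)
+ Y=rep(0,1000)
+ for(t in 3:1000) Y[t]=phi1*Y[t-1]+phi2*Y[t-2]+e[t]
+ Y=Y[800:1000]
+ m=length(Y)
+ b=data.frame(Y=Y[3:m],X1=Y[2:(m-1)],X2=Y[1:(m-2)])
+ EST[s,]=lm(Y~0+X1+X2,data=b)$coefficients
+ }
> apply(EST,2,mean)  # to be compared with the true values .25 and .7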

 

Bias of Hill Estimators

In the MAT8595 course, we saw yesterday the Hill estimator of the tail index. To be more specific, we saw that if https://latex.codecogs.com/gif.latex?\overline{F}(x)=C%20x^{-\alpha}, with https://latex.codecogs.com/gif.latex?\alpha%3E0, then the Hill estimators of https://latex.codecogs.com/gif.latex?\alpha are given by

https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20=%20\left[\frac{1}{k}\sum_{i=0}^{k-1}%20\log%20X_{n-i,n}%20-\log%20X_{n-k,n}\right]^{-1}
for https://latex.codecogs.com/gif.latex?k\in\{1,2,\cdots,n\}. Then we said that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k satisfies some consistency property, in the sense that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20\overset{\mathbb{P}}{\rightarrow}%20\alpha if https://latex.codecogs.com/gif.latex?k\rightarrow\infty, but not too fast, i.e. https://latex.codecogs.com/gif.latex?k/n\rightarrow0 (under additional assumptions on the rate of convergence, it is possible to prove that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20\overset{a.s.}{\rightarrow}%20\alpha). Further, under additional technical conditions,

https://latex.codecogs.com/gif.latex?\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal%20L}{\rightarrow}\mathcal{N}(0,\alpha^2)

In order to illustrate this point, consider the following code. First, let us consider a Pareto survival function, and the associated quantile function

> alpha=1.5
> S=function(x){ifelse(x>1,x^(-alpha),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

The code here is obviously more complicated than necessary, since this power function can easily be inverted (a quicker version is sketched after the graphs below). But later on, we will consider a more complex survival function. Here are the survival function and the quantile function,

> u=seq(0,5,by=.01)
> plot(u,Vectorize(S)(u),type="l",col="red")
> u=seq(0,99/100,by=.01)
> plot(u,Vectorize(Q)(u),type="l",col="blue",ylim=c(0,20))
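As mentioned above, in this pure Pareto case the inverse is explicit, so a much quicker quantile function could simply be (a sketch, with Qexact as a hypothetical name),

> Qexact=function(p) (1-p)^(-1/alpha)  # exact inverse of S(x)=x^(-alpha), for x>1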

Here, we need the quantile function to generate a random sample from this distribution,

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

The Hill plot is here,

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

We can now generate thousands of random samples, and see how those estimators behave (for some specific values of https://latex.codecogs.com/gif.latex?k).

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }

and if we compute the average,

> plot(15*(1:10),apply(HillK,2,mean))

we do get a series of estimators that can be considered as unbiased.
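We can also use those simulations to have a look at the asymptotic variance mentioned above, comparing the empirical standard deviation of the estimators with https://latex.codecogs.com/gif.latex?\alpha/\sqrt{k} (a quick sketch),

> k=15*(1:10)
> cbind(k,empirical=apply(HillK,2,sd),theoretical=alpha/sqrt(k))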

So far, so good. Now, recall that being in the max-domain of attraction of the Fréchet distribution does not mean that https://latex.codecogs.com/gif.latex?\overline{F}(x)=C%20x^{-\alpha}, with https://latex.codecogs.com/gif.latex?\alpha%3E0, but it means that

https://latex.codecogs.com/gif.latex?\overline{F}(x)=%20x^{-\alpha}%20\mathcal{L}(x)

for some slowly varying function https://latex.codecogs.com/gif.latex?\mathcal{L}, not necessarily constant! In order to understand what could happen, we have to be slightly more specific. And this can be done only by looking at the second order regular variation property of the survival function. Assume here that there is some auxiliary function https://latex.codecogs.com/gif.latex?a such that

https://latex.codecogs.com/gif.latex?\lim_{t\rightarrow\infty}\frac{\overline{F}(xt)/\overline{F}(t)-x^{-\alpha}}{a(t)}=x^{-\alpha}\frac{1-x^{-\beta}}{\beta}

This (positive) constant https://latex.codecogs.com/gif.latex?\beta is – somehow – related to the speed of convergence of the ratio of the survival functions to the power function (see e.g. Geluk et al. (2000) for some examples).

To be more specific, assume that

https://latex.codecogs.com/gif.latex?\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot%20%20x^{-\alpha}

then the second order regular variation property is obtained using https://latex.codecogs.com/gif.latex?a(t)=\beta%20t^{-\beta}, and if https://latex.codecogs.com/gif.latex?k goes to infinity too fast, the estimator will be biased. More precisely (see Chapter 6 in Embrechts et al. (1997)), if https://latex.codecogs.com/gif.latex?k=O(n^{2\beta/(\alpha+2\beta)}), then, for some https://latex.codecogs.com/gif.latex?\lambda%3E0,

https://latex.codecogs.com/gif.latex?\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal%20L}{\rightarrow}\mathcal{N}\left(\frac{\alpha^3}{\beta-\alpha}\lambda,\alpha^2\right)

The intuitive interpretation of this result is that if https://latex.codecogs.com/gif.latex?k is too large, and if the underlying distribution is not exactly a Pareto distribution (and we do have this second order property), then Hill’s estimator is biased. This is what we mean when we say

  • if https://latex.codecogs.com/gif.latex?k is too large, https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k is a biased estimator
  • if https://latex.codecogs.com/gif.latex?k is too small, https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k is a volatile estimator

(the latter comes from properties of a sample mean: the more observations, the less volatile the mean).

Let us run some simulations to get a better understanding of what’s going on. Using the previous code, it is actually extremely simple to generate a random sample with survival function

https://latex.codecogs.com/gif.latex?\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot%20%20x^{-\alpha}

> beta=.5
> S=function(x){ifelse(x>1,.5*x^(-alpha)*(1+x^(-beta)),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

We can simply reuse the code above. Here, with

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

the Hill plot becomes

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

But it is based on one sample only. Again, consider thousands of samples, and let us see how Hill’s estimator behaves,

so that we can look at the (empirical) mean of those estimators.
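This second simulation is obtained with the same loop as before, simply with the new quantile function; a minimal sketch (with HillK2 as a placeholder name) could be,

> ns=10000
> HillK2=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK2[s,]=Vectorize(hillk)(15*(1:10))
+ }
> plot(15*(1:10),apply(HillK2,2,mean),type="b")
> abline(h=alpha,col="blue")

For large values of https://latex.codecogs.com/gif.latex?k, the average estimate should now drift away from the true https://latex.codecogs.com/gif.latex?\alpha, which is precisely the bias discussed above.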

Defining Properly MA(∞) Time Series

In order to properly define https://latex.codecogs.com/gif.latex?MA(\infty) series, we need to get back to some properties of infinite sequences, as briefly mentioned yesterday in the MAT8181 course. Consider some sequence https://latex.codecogs.com/gif.latex?(a_i)_{i\in\mathbb{N}}. The sequence is said to be summable if

https://latex.codecogs.com/gif.latex?S_n=\sum_{i=0}^n%20a_i

is convergent, i.e. if the limit of https://latex.codecogs.com/gif.latex?S_n exists when https://latex.codecogs.com/gif.latex?n\rightarrow\infty.

From the Cauchy criterion, https://latex.codecogs.com/gif.latex?\sum%20a_i converges if and only if, for each https://latex.codecogs.com/gif.latex?\eta%3E0, there is https://latex.codecogs.com/gif.latex?m\in\mathbb{N} for which

https://latex.codecogs.com/gif.latex?\vert%20a_i+a_{i+1}+\cdots+a_{j-1}+a_j\vert%3C\eta

when https://latex.codecogs.com/gif.latex?%20i,j%3Em. The sequence https://latex.codecogs.com/gif.latex?(a_i)_{i\in\mathbb{N}} is said to be absolutely summable if

https://latex.codecogs.com/gif.latex?%20\sum_{i=0}^\infty%20\vert%20a_i\vert%20%3C\infty

and square-summable if

https://latex.codecogs.com/gif.latex?%20\sum_{i=0}^\infty%20a_i^2%20%3C\infty

Observe that absolute summability implies square summability (since, for https://latex.codecogs.com/gif.latex?j large enough, https://latex.codecogs.com/gif.latex?%20\vert%20a_j\vert%20\leq1, and then https://latex.codecogs.com/gif.latex?%20a_j^2\leq\vert%20a_j\vert).
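Note that the converse does not hold: the harmonic sequence https://latex.codecogs.com/gif.latex?a_i=1/i is square-summable, but not absolutely summable. A quick numerical illustration (just a sketch),

> a=1/(1:1e6)
> sum(abs(a))  # partial sums grow like log(n), so the series diverges
> sum(a^2)     # converges, to pi^2/6 (about 1.645)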

Consider now some https://latex.codecogs.com/gif.latex?MA(\infty) time series

https://latex.codecogs.com/gif.latex?%20X_t=\sum_{h=0}^\infty%20\theta_h%20\varepsilon_{t-h}

If the sequence of coefficients https://latex.codecogs.com/gif.latex?%20(\theta_i) is square-summable, then

https://latex.codecogs.com/gif.latex?%20S_T%20=%20\sum_{h=0}^T%20\theta_h%20\varepsilon_{t-h}

converges in https://latex.codecogs.com/gif.latex?%20L_2 to some random variable as https://latex.codecogs.com/gif.latex?%20T\rightarrow\infty. This can be proved easily using the Cauchy criterion, in the sense that for any https://latex.codecogs.com/gif.latex?\eta%3E0, there is a https://latex.codecogs.com/gif.latex?%20T large enough such that, for any https://latex.codecogs.com/gif.latex?%20h,

https://latex.codecogs.com/gif.latex?\mathbb{E}\left[\left(\sum_{i=T+1}^{T+h}\theta_i\varepsilon_{t-i}\right)^2\right]=\sigma^2\sum_{i=T+1}^{T+h}\theta_i^2%3C\eta

In that case, if the sequence of coefficients https://latex.codecogs.com/gif.latex?%20(\theta_i) is square-summable, then https://latex.codecogs.com/gif.latex?%20(X_t) is stationary (in the https://latex.codecogs.com/gif.latex?%20L_2 sense) since the process is centered, and

https://latex.codecogs.com/gif.latex?\gamma(h)=\sigma^2%20\cdot%20\sum_{i=0}^\infty%20\theta_i%20\theta_{i+h}

for all https://latex.codecogs.com/gif.latex?h\in\mathbb{N}.
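As a quick numerical check of this formula (a sketch), consider the MA(∞) representation of an AR(1) process, with coefficients https://latex.codecogs.com/gif.latex?\theta_i=\phi^i, for which the autocovariances are known to be https://latex.codecogs.com/gif.latex?\gamma(h)=\sigma^2\phi^h/(1-\phi^2) (names such as gammah are just placeholders),

> phi=.5; sigma=1
> theta=phi^(0:200)  # truncated MA(infinity) coefficients of an AR(1)
> gammah=function(h) sigma^2*sum(theta[1:(201-h)]*theta[(1+h):201])
> Vectorize(gammah)(0:3)
> sigma^2*phi^(0:3)/(1-phi^2)  # theoretical autocovariances, for comparison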

Further, ergodicity of the time series, defined as the absolute summability of the autocovariance sequence, is obtained when the sequence of coefficients https://latex.codecogs.com/gif.latex?%20(\theta_i) is absolutely summable.