
Stationarity of ARCH processes

In the context of AR(1) processes, $X_t=\phi\,X_{t-1}+\eta_t$, we spent some time explaining what happens when $\phi$ is close to 1.

  • if $\vert\phi\vert<1$ the process is stationary,
  • if $\vert\phi\vert=1$ the process is a random walk,
  • if $\vert\phi\vert>1$ the process will explode (a quick simulation sketch of those three regimes is given below).
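Just to fix ideas, here is a minimal simulation sketch (the values of $\phi$ and the sample size are illustrative choices, not taken from the original figures):

> # simulate X_t = phi*X_{t-1} + eta_t for the three regimes of phi
> set.seed(1)
> n=200
> eta=rnorm(n)
> par(mfrow=c(1,3))
> for(phi in c(.8,1,1.05)){
+ X=rep(0,n)
+ for(t in 2:n) X[t]=phi*X[t-1]+eta[t]
+ plot(X,type="l",main=paste("phi =",phi))
+ }
> par(mfrow=c(1,1))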

Again, random walks are extremely interesting processes, with puzzling properties. For instance,

$$\limsup_{n\to\infty}X_n=+\infty\quad\text{and}\quad\liminf_{n\to\infty}X_n=-\infty\quad\text{almost surely},$$

and the process will cross the x-axis an infinite number of times…
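As a quick illustration (a sketch, with an arbitrary path length), one can simulate a long Gaussian random walk and count the number of sign changes along the path,

> # a long random walk, and the number of times its sign changes
> set.seed(1)
> S=cumsum(rnorm(1e5))
> sum(diff(sign(S))!=0)
> plot(S,type="l")
> abline(h=0,col="red")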

Recently, in the MAT8181 course, we studied carefully the properties of the ARCH(1) process, especially when the coefficient $\alpha$ is larger than 1. And again, what we get might be puzzling.

Consider some ARCH(1) process $(\varepsilon_t)$, with a Gaussian noise, i.e.

$$\varepsilon_t=\sigma_t\,\eta_t$$

where

$$\sigma_t^2=\omega+\alpha\,\varepsilon_{t-1}^2,$$

and $(\eta_t)$ is a sequence of i.i.d. $\mathcal N(0,1)$ variables. Here both $\omega$ and $\alpha$ have to be positive.

Recall that $\mathbb E[\varepsilon_t]=0$ since $\mathbb E[\varepsilon_t]=\mathbb E[\sigma_t]\cdot\mathbb E[\eta_t]=0$ ($\sigma_t$ depends only on past values, and is independent of $\eta_t$). Further

$$\mathbb E[\varepsilon_t^2]=\omega+\alpha\,\mathbb E[\varepsilon_{t-1}^2]$$

since $\mathbb E[\eta_t^2]=1$, so the variance exists, and is constant only if $\alpha\in[0,1)$, and in that case

$$\text{Var}(\varepsilon_t)=\frac{\omega}{1-\alpha}.$$
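As a quick sanity check of that formula (a sketch, with illustrative parameter values), we can compare the empirical variance of a long simulated path with $\omega/(1-\alpha)$,

> # ARCH(1) with alpha<1: empirical variance vs omega/(1-alpha)
> n=1e5; w=0.2; a=0.5
> set.seed(1)
> eta=rnorm(n); epsilon=rep(0,n); sigma2=rep(w,n)
> for(t in 2:n){
+ sigma2[t]=w+a*epsilon[t-1]^2
+ epsilon[t]=eta[t]*sqrt(sigma2[t])
+ }
> c(empirical=var(epsilon),theoretical=w/(1-a))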
Further, if $3\alpha^2<1$, the fourth moment can be obtained,

$$\mathbb E[\varepsilon_t^4]=\frac{3\,\omega^2\,(1+\alpha)}{(1-\alpha)(1-3\alpha^2)}$$

since $\mathbb E[\eta_t^4]=3$. Now, if we get back to the property obtained while studying the variance, what does that mean if $\alpha=1$, or if $\alpha>1$?

If we look at simulations, we can generate an ARCH(1) process with $\alpha=2$ (and $\omega=0.2$) for instance.

> n=600
> a=2
> w=0.2
> set.seed(1)
> eta=rnorm(n)
> epsilon=rnorm(n)
> sigma2=rep(w,n)
> for(t in 2:n){
+ sigma2[t]=w+a*epsilon[t-1]^2
+ epsilon[t]=eta[t]*sqrt(sigma2[t])
+ }
> plot(epsilon,type="l")

In order to understand what's going on, we should keep in mind that what we got is that $\alpha$ has to lie in $[0,1)$ to be able to compute the second moment of $\varepsilon_t$. But it is possible to have a stationary process with infinite variance. And actually, this is what we have here.

Write

$$\sigma_t^2=\omega+\alpha\,\varepsilon_{t-1}^2=\omega+\alpha\,\eta_{t-1}^2\,\sigma_{t-1}^2$$

and then, iterate

$$\sigma_t^2=\omega+\alpha\,\eta_{t-1}^2\left(\omega+\alpha\,\eta_{t-2}^2\,\sigma_{t-2}^2\right)$$

and iterate again, and again, and again…

$$\sigma_t^2=\omega\left(1+\sum_{i=1}^{\infty}u_i\right)$$

where

$$u_i=\alpha^i\,\eta_{t-1}^2\,\eta_{t-2}^2\cdots\eta_{t-i}^2.$$
Here, we have a sum of positive terms, and we can use the so-called Cauchy rule: define

$$a_n=\left(u_n\right)^{1/n}$$

then, if $\lim_{n\to\infty}a_n<1$, the series $\sum_n u_n$ converges. Here,

$$a_n=\left(\alpha^n\,\eta_{t-1}^2\cdots\eta_{t-n}^2\right)^{1/n}$$

which can also be written

$$a_n=\alpha\,\exp\!\left(\frac{1}{n}\sum_{i=1}^{n}\log\eta_{t-i}^2\right)$$

and from the law of large numbers, since we have here a sum of i.i.d. terms,

$$a_n\longrightarrow\alpha\,\exp\!\left(\mathbb E\!\left[\log\eta_t^2\right]\right)\quad\text{almost surely.}$$

So, if $\alpha\,\exp\!\left(\mathbb E[\log\eta_t^2]\right)<1$, then $\sigma_t^2$ will have a (finite) limit when the number of terms goes to infinity.

The condition above can be written

$$\mathbb E\!\left[\log\!\left(\alpha\,\eta_t^2\right)\right]<0$$

where $\mathbb E[\log(\alpha\,\eta_t^2)]$ is called the Lyapunov coefficient.

The equation

$$\mathbb E\!\left[\log\!\left(\alpha\,\eta_t^2\right)\right]=0,\qquad\text{i.e.}\qquad\alpha=\exp\!\left(-\mathbb E\!\left[\log\eta_t^2\right]\right),$$

is a condition on $\alpha$.

In the case where $\eta_t\sim\mathcal N(0,1)$, the numerical value of this upper bound is 3.56.

> 1/exp(mean(log(rnorm(1e7)^2)))
[1] 3.562517

In that case ($\alpha<3.56$), the variance may be infinite, but the series is stationary. On the other hand, if $\alpha>3.56$, then $\varepsilon_t^2$ will go to infinity almost surely, as $t$ goes to infinity.

But in order to observe this difference, we need a lot of observations. For instance, with one value of $\alpha$ below that bound (such as the $\alpha=2$ used above), and one value above it, we can easily see a difference. I do not say that it's easy to see that the distribution above has an infinite variance, but still. Actually, if we consider Hill's plot on the series above, on the tails of positive $\varepsilon_t$'s

> library(evir)
> hill(epsilon)

or on the tails of negative $\varepsilon_t$'s

> hill(-epsilon)

we can see that the tail index is (strictly) smaller than 2 (meaning that the moment of order 2 does not exist).
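For completeness, here is a minimal hand-rolled version of the Hill estimator (a sketch: the function name and the number $k=50$ of upper order statistics are arbitrary illustrative choices, and evir::hill should be preferred in practice),

> hill_tail_index=function(x,k){
+ x=sort(x[x>0],decreasing=TRUE)      # upper order statistics
+ 1/(mean(log(x[1:k]))-log(x[k+1]))   # reciprocal of the Hill estimate of 1/alpha
+ }
> hill_tail_index(epsilon,k=50)        # positive tail
> hill_tail_index(-epsilon,k=50)       # negative tail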

Why is it puzzling? Maybe because here, $(\varepsilon_t)$ is not weakly stationary (in the $L^2$ sense), but it is strongly stationary, which is not the usual way weak and strong are related. This might be why we will not call this strong stationarity, but strict stationarity.

Inference for ARCH processes

Consider some ARCH($p$) process, say an ARCH($1$),

$$\varepsilon_t=\sigma_t\,\eta_t$$

where

$$\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2$$

with a Gaussian (strong) white noise $(\eta_t)$.

> n=500
> a1=0.8
> a2=0.0
> w= 0.2
> set.seed(1)
> eta=rnorm(n)
> epsilon=rnorm(n)
> sigma2=rep(w,n)
> for(t in 3:n){
+ sigma2[t]=w+a1*epsilon[t-1]^2+a2*epsilon[t-2]^2
+ epsilon[t]=eta[t]*sqrt(sigma2[t])
+ }
> par(mfrow=c(1,1))
> plot(epsilon,type="l",ylim=c(min(epsilon)-.5,max(epsilon)))
> lines(min(epsilon)-1+sqrt(sigma2),col="red")

(the red line, at the bottom of the graph, is the conditional volatility process $(\sigma_t)$, shifted downward).

> par(mfrow=c(1,2))
> acf(epsilon,lag=50,lwd=2)
> acf(epsilon^2,lag=50,lwd=2)

We did mention in class that if $(\varepsilon_t)$ is an ARCH(1) process, then $(\varepsilon_t^2)$ is an AR(1) process. So a first idea is to consider a regression of $\varepsilon_t^2$ on $\varepsilon_{t-1}^2$, as we did for Gaussian AR(1) processes.
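To see why (a short derivation, not spelled out here originally), write $\varepsilon_t^2=\sigma_t^2\,\eta_t^2$ and add and subtract $\sigma_t^2$,

$$\varepsilon_t^2=\sigma_t^2+\sigma_t^2(\eta_t^2-1)=\omega+\alpha_1\,\varepsilon_{t-1}^2+\nu_t,\qquad\text{where}\quad\nu_t=\sigma_t^2(\eta_t^2-1),$$

and $(\nu_t)$ is a (weak) white noise, since $\mathbb E[\nu_t\mid\varepsilon_{t-1},\varepsilon_{t-2},\dots]=0$. The least-squares regression below exploits exactly this AR(1) representation of $(\varepsilon_t^2)$,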

> db=data.frame(Y=epsilon[2:n]^2,X1=epsilon[1:(n-1)]^2)
> summary(lm(Y~X1,data=db))

Call:
lm(formula = Y ~ X1, data = db)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.4538 -0.3618 -0.2626  0.0935  9.3667 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.34963    0.04342   8.052 6.08e-15 ***
X1           0.31123    0.04262   7.303 1.13e-12 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.8413 on 497 degrees of freedom
Multiple R-squared:  0.0969,	Adjusted R-squared:  0.09508 
F-statistic: 53.33 on 1 and 497 DF,  p-value: 1.129e-12

There is some significant autocorrelation here. But since our vectors cannot be considered as Gaussian, using least squares is perhaps not the best strategy. Actually, even if our series is not Gaussian, it is still conditionally Gaussian, since we assumed that $(\eta_t)$ is a Gaussian (strong) white noise,

$$\varepsilon_t\mid\varepsilon_{t-1}\sim\mathcal N\!\left(0,\,\omega+\alpha_1\,\varepsilon_{t-1}^2\right).$$

The (conditional) likelihood is then

$$\mathcal L(\omega,\alpha_1;\varepsilon_1,\dots,\varepsilon_n)=\prod_{t=2}^{n}\frac{1}{\sqrt{2\pi\left(\omega+\alpha_1\varepsilon_{t-1}^2\right)}}\exp\!\left(-\frac{\varepsilon_t^2}{2\left(\omega+\alpha_1\varepsilon_{t-1}^2\right)}\right)$$

and the log-likelihood is

$$\log\mathcal L(\omega,\alpha_1;\varepsilon_1,\dots,\varepsilon_n)=-\frac{1}{2}\sum_{t=2}^{n}\left[\log\!\left(\omega+\alpha_1\varepsilon_{t-1}^2\right)+\frac{\varepsilon_t^2}{\omega+\alpha_1\varepsilon_{t-1}^2}\right]+\text{constant}.$$

And a natural idea is to define

$$(\widehat\omega,\widehat\alpha_1)=\underset{(\omega,\alpha_1)}{\text{argmax}}\;\log\mathcal L(\omega,\alpha_1;\varepsilon_1,\dots,\varepsilon_n).$$
The code is simply

> X=epsilon
> loglik=function(param){
+ w=exp(param[1])
+ a1=exp(param[2])
+ s2=rep(w,n)
+ for(t in 2:length(X)){s2[t]=w+a1*X[t-1]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)
+ }
> OPT=optim(par=
+ coefficients(lm(Y~X1,data=db)),fn=loglik)
> exp(OPT$par)
(Intercept)          X1 
  0.2482241   0.5858578

(since the parameters have to be positive, we assume here that they can be written as exponentials of some real values). Observe that those values are closer to the ones used to generate our time series ($\omega=0.2$ and $\alpha_1=0.8$) than the least-squares estimates above.
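Approximate standard errors could also be extracted from the numerical Hessian returned by optim (a sketch, not in the original code; since the optimisation is done on the log-parameters, the delta method is used to transform the standard errors back),

> OPT=optim(par=log(c(.2,.2)),fn=loglik,hessian=TRUE)
> V=solve(OPT$hessian)       # covariance matrix of the log-parameters
> est=exp(OPT$par)           # back-transformed estimates
> se=est*sqrt(diag(V))       # delta method, since d/dx exp(x) = exp(x)
> cbind(estimate=est,std.error=se)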

If we use R functions to estimate those parameters, we get

> library(tseries)
> summary(garch(epsilon,c(0,1)))
...

Call:
garch(x = epsilon, order = c(0, 1))

Model:
GARCH(0,1)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.87023 -0.60836 -0.03426  0.66648  3.48443 

Coefficient(s):
    Estimate  Std. Error  t value Pr(>|t|)    
a0   0.24959     0.02470   10.104  < 2e-16 ***
a1   0.58306     0.09737    5.988 2.13e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

so that the confidence interval for $\alpha_1$ is

> summary(garch(epsilon,c(0,1)))$coef[2,1]+
+ c(-1.96,1.96)*summary(garch(epsilon,c(0,1)))$coef[2,2]
[1] 0.3922088 0.7739088

Actually, since our main interest is this $\alpha_1$ parameter, it is possible to use profile likelihood techniques,

> proflik=function(a){
+ loglik=function(w){
+ s2=rep(w,n)
+ for(t in 2:length(X)){s2[t]=w+a*X[t-1]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)}
+ return(-optim(par=.3,fn=loglik)$value)}

> A=seq(0,2,by=.05)
> P=Vectorize(proflik)(A)
> par(mfrow=c(1,1))
> plot(A,P,type="l")
> OPT=optimize(function(x) -proflik(x), interval=c(0,2))
> t=-OPT$objective-qchisq(.95,df=1)/2   # 95% cutoff: profile log-likelihood maximum minus qchisq(.95,1)/2
> abline(h=t,col="red")
> ainf=uniroot(function(x) proflik(x)-t,c(0,OPT$minimum))$root
> asup=uniroot(function(x) proflik(x)-t,c(OPT$minimum,2))$root
>  abline(v=ainf,lty=2)
>  abline(v=asup,lty=2)

Of course, all those techniques can be extended to higher order ARCH processes. For instance, if we assume that we have an ARCH(2) time series

$$\varepsilon_t=\sigma_t\,\eta_t$$

where now

$$\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2+\alpha_2\,\varepsilon_{t-2}^2$$

with a Gaussian (strong) white noise $(\eta_t)$. The log-likelihood is still

$$\log\mathcal L(\omega,\alpha_1,\alpha_2;\varepsilon_1,\dots,\varepsilon_n)=-\frac{1}{2}\sum_{t=3}^{n}\left[\log\sigma_t^2+\frac{\varepsilon_t^2}{\sigma_t^2}\right]+\text{constant}$$

and we can define

$$(\widehat\omega,\widehat\alpha_1,\widehat\alpha_2)=\underset{(\omega,\alpha_1,\alpha_2)}{\text{argmax}}\;\log\mathcal L(\omega,\alpha_1,\alpha_2;\varepsilon_1,\dots,\varepsilon_n).$$
The code above can be changed, to take into account this additional component,

> db=data.frame(Y=epsilon[3:n]^2,
+ X1=epsilon[2:(n-1)]^2,
+ X2=epsilon[1:(n-2)]^2)
> X=epsilon
> loglik=function(param){
+ w=exp(param[1])
+ a1=exp(param[2])
+ a2=exp(param[3])
+ s2=rep(w,n)
+ for(t in 3:length(X)){s2[t]=w+a1*X[t-1]^2+a2*X[t-2]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)
+ }
> OPT=optim(par=
+ coefficients(lm(Y~X1+X2,data=db)),fn=loglik)
> exp(OPT$par)
(Intercept)          X1          X2 
 0.22710526  0.59475474  0.04741294

We can also consider some Generalized ARCH process, e.g. a GARCH(1,1),

$$\varepsilon_t=\sigma_t\,\eta_t$$

where now

$$\sigma_t^2=\omega+\alpha\,\varepsilon_{t-1}^2+\beta\,\sigma_{t-1}^2.$$
Again, maximum likelihood techniques can be used. Actually, we can also code a Fisher-scoring type algorithm since, in a very general context, the score of observation $t$ is proportional to

$$\boldsymbol g_t=\frac{\varepsilon_t^2-\sigma_t^2}{\sigma_t^4}\cdot\frac{\partial\sigma_t^2}{\partial\boldsymbol\theta},$$

and the information matrix can be approximated by the outer product of those scores, so that the update is

$$\boldsymbol\theta_{k+1}=\boldsymbol\theta_k+\left(\sum_t\boldsymbol g_t\boldsymbol g_t^{\top}\right)^{-1}\left(\sum_t\boldsymbol g_t\right),$$

with here $\boldsymbol\theta=(\omega,\alpha,\beta)^{\top}$ and $\partial\sigma_t^2/\partial\boldsymbol\theta\approx(1,\varepsilon_{t-1}^2,\sigma_{t-1}^2)^{\top}$. Using this iterative algorithm, we get the following estimate for our GARCH process,

> X=epsilon
> theta=c(.2,.2,.2)
> G=rep(1,3)
> n=length(X)
> j=1
> while(sum(G^2)>1e-12){
+ s2=rep(theta[1],n)
+ for (i in 2:n){s2[i]=theta[1]+theta[2]*X[(i-1)]^2+theta[3]*s2[(i-1)]}  # GARCH(1,1) variance recursion
+ z=(X^2-s2)/s2^2                                           # (epsilon^2-sigma^2)/sigma^4
+ V=cbind(z[2:n],z[2:n]*X[1:(n-1)]^2,z[2:n]*s2[1:(n-1)])    # score contributions g_t
+ H=(t(V)%*%V)                                              # outer-product approximation of the information
+ G=apply(V,2,sum)                                          # gradient (sum of the scores)
+ theta=theta+solve(H)%*%G                                  # scoring update
+ j=j+1}
> as.numeric(theta)
[1] 0.20372918 0.59183911 0.08936159

The interesting point, here, is that we also derive the (asymptotic) variance of those estimators; the associated standard errors are obtained from the diagonal of $H^{-1}$,

> (stdev=sqrt(diag(solve(H))))
[1] 0.01849067 0.04950477 0.02937233
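As a sanity check (a sketch, not part of the original code), those values could be compared with the estimates, and standard errors, returned by the garch function from the tseries package used earlier, now fitted with order=c(1,1),

> library(tseries)
> summary(garch(epsilon,order=c(1,1)))$coef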