Inference for ARCH processes

Consider some ARCH($p$) process, say ARCH($1$),
$$\varepsilon_t=\sigma_t\,\eta_t$$
where
$$\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2,$$
with a Gaussian (strong) white noise $(\eta_t)$.

> n=500
> a1=0.8
> a2=0.0
> w= 0.2
> set.seed(1)
> eta=rnorm(n)
> epsilon=rnorm(n)
> sigma2=rep(w,n)
> for(t in 3:n){
+ sigma2[t]=w+a1*epsilon[t-1]^2+a2*epsilon[t-2]^2
+ epsilon[t]=eta[t]*sqrt(sigma2[t])
+ }
> par(mfrow=c(1,1))
> plot(epsilon,type="l",ylim=c(min(epsilon)-.5,max(epsilon)))
> lines(min(epsilon)-1+sqrt(sigma2),col="red")

(the red line, at the bottom of the graph, is the conditional volatility process $\sigma_t$, i.e. the square root of the conditional variance).

> par(mfrow=c(1,2))
> acf(epsilon,lag=50,lwd=2)
> acf(epsilon^2,lag=50,lwd=2)

We did mention in class that if $(\varepsilon_t)$ is an ARCH($1$) process, then $(\varepsilon_t^2)$ is an AR($1$) process. So a first idea is to consider a regression, as we did for a Gaussian AR($1$) process,

> db=data.frame(Y=epsilon[2:n]^2,X1=epsilon[1:(n-1)]^2)
> summary(lm(Y~X1,data=db))

Call:
lm(formula = Y ~ X1, data = db)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.4538 -0.3618 -0.2626  0.0935  9.3667 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.34963    0.04342   8.052 6.08e-15 ***
X1           0.31123    0.04262   7.303 1.13e-12 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.8413 on 497 degrees of freedom
Multiple R-squared:  0.0969,	Adjusted R-squared:  0.09508 
F-statistic: 53.33 on 1 and 497 DF,  p-value: 1.129e-12

There is some significant autocorrelation here. But since our vectors cannot be considered as Gaussian, using least squares is perhaps not the best strategy. Actually, even if our series is not Gaussian, it is still conditionally Gaussian, since we assumed that $(\eta_t)$ is a Gaussian (strong) white noise,
$$\varepsilon_t\mid\varepsilon_{t-1}\sim\mathcal{N}(0,\sigma_t^2),\qquad\text{with }\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2.$$

The likelihood is then
$$\mathcal{L}(\omega,\alpha_1)=\prod_{t=2}^{n}\frac{1}{\sqrt{2\pi\sigma_t^2}}\exp\left(-\frac{\varepsilon_t^2}{2\sigma_t^2}\right)$$

and the log-likelihood is
$$\log\mathcal{L}(\omega,\alpha_1)=-\frac{n-1}{2}\log(2\pi)-\frac{1}{2}\sum_{t=2}^{n}\log\sigma_t^2-\frac{1}{2}\sum_{t=2}^{n}\frac{\varepsilon_t^2}{\sigma_t^2}.$$

And a natural idea is to define
$$(\hat\omega,\hat\alpha_1)=\underset{(\omega,\alpha_1)}{\operatorname{argmax}}\,\log\mathcal{L}(\omega,\alpha_1).$$

The code is simply

> X=epsilon
> loglik=function(param){
+ w=exp(param[1])
+ a1=exp(param[2])
+ s2=rep(w,n)
+ for(t in 2:length(X)){s2[t]=w+a1*X[t-1]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)
+ }
> OPT=optim(par=
+ coefficients(lm(Y~X1,data=db)),fn=loglik)
> exp(OPT$par)
(Intercept)          X1 
  0.2482241   0.5858578

(since the parameters have to be positive, we assume here that they can be written as the exponential of some real values). Observe that those values are much closer to the ones used to generate our time series ($\omega=0.2$ and $\alpha_1=0.8$) than the least-squares estimates were.
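
As a side note, optim can also return the Hessian of the negative log-likelihood, which gives approximate standard errors on the original scale via the delta method. The lines below are a sketch, not part of the original analysis: the starting values and the name OPT2 are arbitrary, and loglik is the function defined above.

# sketch: standard errors from the Hessian, back-transformed by the delta method
OPT2 <- optim(par = log(c(.2, .5)), fn = loglik, hessian = TRUE)
se_log <- sqrt(diag(solve(OPT2$hessian)))  # std. errors of the log-parameters
est <- exp(OPT2$par)                       # back-transformed estimates
se  <- est * se_log                        # delta method: d exp(x)/dx = exp(x)
cbind(estimate = est, std.error = se)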

If we use R functions to estimate those parameters, we get

> library(tseries)
> summary(garch(epsilon,c(0,1)))
...

Call:
garch(x = epsilon, order = c(0, 1))

Model:
GARCH(0,1)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.87023 -0.60836 -0.03426  0.66648  3.48443 

Coefficient(s):
    Estimate  Std. Error  t value Pr(>|t|)    
a0   0.24959     0.02470   10.104  < 2e-16 ***
a1   0.58306     0.09737    5.988 2.13e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

so that the 95% confidence interval for $\alpha_1$ is

> summary(garch(epsilon,c(0,1)))$coef[2,1]+
+ c(-1.96,1.96)*summary(garch(epsilon,c(0,1)))$coef[2,2]
[1] 0.3922088 0.7739088

Actually, since our main interest is this $\alpha_1$ parameter, it is possible to use profile likelihood techniques,

> proflik=function(a){
+ loglik=function(w){
+ s2=rep(w,n)
+ for(t in 2:length(X)){s2[t]=w+a*X[t-1]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)}
+ return(-optim(par=.3,fn=loglik)$value)}

> A=seq(0,2,by=.05)
> P=Vectorize(proflik)(A)
> par(mfrow=c(1,1))
> plot(A,P,type="l")
> OPT=optimize(function(x) -proflik(x), interval=c(0,2))
> t=-OPT$objective-qchisq(.95,df=1)/2
> abline(h=t,col="red")
> ainf=uniroot(function(x) proflik(x)-t,c(0,OPT$minimum))$root
> asup=uniroot(function(x) proflik(x)-t,c(OPT$minimum,2))$root
>  abline(v=ainf,lty=2)
>  abline(v=asup,lty=2)
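
The two roots then give the profile-likelihood confidence interval for $\alpha_1$ (this line is a small addition, not part of the original output):

# profile-likelihood confidence interval for alpha_1
c(ainf, asup)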

Of course, all those techniques can be extended to higher order ARCH processes. For instance, if we assume that we have an ARCH($2$) time series
$$\varepsilon_t=\sigma_t\,\eta_t$$
where now
$$\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2+\alpha_2\,\varepsilon_{t-2}^2,$$
with a Gaussian (strong) white noise $(\eta_t)$. The log-likelihood is still
$$\log\mathcal{L}(\omega,\alpha_1,\alpha_2)=-\frac{n-2}{2}\log(2\pi)-\frac{1}{2}\sum_{t=3}^{n}\log\sigma_t^2-\frac{1}{2}\sum_{t=3}^{n}\frac{\varepsilon_t^2}{\sigma_t^2}$$
and we can define
$$(\hat\omega,\hat\alpha_1,\hat\alpha_2)=\underset{(\omega,\alpha_1,\alpha_2)}{\operatorname{argmax}}\,\log\mathcal{L}(\omega,\alpha_1,\alpha_2).$$

The code above can be changed to take this additional component into account,

> db=data.frame(Y=epsilon[3:n]^2,
+ X1=epsilon[2:(n-1)]^2,
+ X2=epsilon[1:(n-2)]^2)
> X=epsilon
> loglik=function(param){
+ w=exp(param[1])
+ a1=exp(param[2])
+ a2=exp(param[3])
+ s2=rep(w,n)
+ for(t in 3:length(X)){s2[t]=w+a1*X[t-1]^2+a2*X[t-2]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)
+ }
> OPT=optim(par=
+ coefficients(lm(Y~X1+X2,data=db)),fn=loglik)
> exp(OPT$par)
(Intercept)          X1          X2 
 0.22710526  0.59475474  0.04741294

We can also consider some Generalized ARCH process, e.g. a GARCH($1$,$1$),
$$\varepsilon_t=\sigma_t\,\eta_t$$
where now
$$\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2+\beta_1\,\sigma_{t-1}^2.$$
Again, maximum likelihood techniques can be used. Actually, we can also code a Fisher-scoring algorithm, since (in a very general context) the update is
$$\boldsymbol{\theta}^{(k+1)}=\boldsymbol{\theta}^{(k)}+\left(\sum_{t}\frac{\partial\log f_{\boldsymbol{\theta}}(\varepsilon_t)}{\partial\boldsymbol{\theta}}\frac{\partial\log f_{\boldsymbol{\theta}}(\varepsilon_t)}{\partial\boldsymbol{\theta}^\top}\right)^{-1}\left(\sum_{t}\frac{\partial\log f_{\boldsymbol{\theta}}(\varepsilon_t)}{\partial\boldsymbol{\theta}}\right),$$
with here $\boldsymbol{\theta}=(\omega,\alpha_1,\beta_1)$ and
$$\frac{\partial\log f_{\boldsymbol{\theta}}(\varepsilon_t)}{\partial\boldsymbol{\theta}}\propto\frac{\varepsilon_t^2-\sigma_t^2}{\sigma_t^4}\begin{pmatrix}1\\ \varepsilon_{t-1}^2\\ \sigma_{t-1}^2\end{pmatrix}.$$
Iterating this scoring update, we get the following estimates for our GARCH process,

> X=epsilon
> theta=c(.2,.2,.2)
> G=rep(1,3)
> n=length(X)
> j=1
> while(sum(G^2)>1e-12){
+ s2=rep(theta[1],n)
+ for (i in 2:n){s2[i]=theta[1]+theta[2]*X[(i-1)]^2+theta[3]*s2[(i-1)]}
+ z=(X^2-s2)/s2^2
+ V=cbind(z[2:n],z[2:n]*X[1:(n-1)]^2,z[2:n]*s2[1:(n-1)])
+ H=(t(V)%*%V)
+ G=apply(V,2,sum)
+ theta=theta+solve(H)%*%G
+ j=j+1}
> as.numeric(theta)
[1] 0.20372918 0.59183911 0.08936159

The interesting point here is that we also get (an estimate of) the asymptotic variance of the estimator, and hence its standard errors,

> (stdev=sqrt(diag(solve(H))))
[1] 0.01849067 0.04950477 0.02937233
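
For instance (a small addition, not in the original post), approximate 95% confidence intervals for the three parameters can be read directly from those standard errors:

# sketch: approximate 95% confidence intervals from the scoring output
cbind(estimate = as.numeric(theta),
      lower = as.numeric(theta) - 1.96 * stdev,
      upper = as.numeric(theta) + 1.96 * stdev)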

Modeling the Marginals and the Dependence separately

When introducing copulas, it is commonly admitted that copulas are interesting because they allow us to model the marginals and the dependence structure separately. The motivation is probably Sklar's theorem, which says that given some marginal cumulative distribution functions (say $F$ and $G$, in dimension 2) and a copula (denoted $C$), we can generate a multivariate cumulative distribution function with the marginals specified previously, using
$$H(x,y)=C\big(F(x),G(y)\big).$$
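
As a quick illustration (a sketch, not in the original post, using the copula package that is loaded further below), this construction is what mvdc implements: pick two margins and a copula, and you obtain a valid bivariate distribution,

library(copula)
# a bivariate distribution with lognormal margins glued together by a Gaussian copula
biv <- mvdc(normalCopula(0.8, dim = 2),
            margins = c("lnorm", "lnorm"),
            paramMargins = list(list(meanlog = 1, sdlog = 1),
                                list(meanlog = 2, sdlog = sqrt(2))))
set.seed(1)
sample_biv <- rMvdc(25, biv)   # simulate from the joint distribution
dMvdc(c(2, 5), biv)            # joint density at the point (2, 5)
pMvdc(c(2, 5), biv)            # joint cdf at the point (2, 5)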

But this separability might be misleading. Consider the case of a fully parametric model,
$$(X_i,Y_i)\sim H_{\alpha,\beta,\theta},\qquad H_{\alpha,\beta,\theta}(x,y)=C_\theta\big(F_\alpha(x),G_\beta(y)\big).$$

Assume that those distributions are continuous, so that we can write the likelihood using densities,
$$h_{\alpha,\beta,\theta}(x,y)=c_\theta\big(F_\alpha(x),G_\beta(y)\big)\,f_\alpha(x)\,g_\beta(y),$$

and the log-likelihood is
$$\log\mathcal{L}(\alpha,\beta,\theta)=\sum_{i=1}^n\log f_\alpha(X_i)+\sum_{i=1}^n\log g_\beta(Y_i)+\sum_{i=1}^n\log c_\theta\big(F_\alpha(X_i),G_\beta(Y_i)\big).$$

The first part is the log-likelihood if we consider the first marginal (only). The second part is the log-likelihood if we consider the second marginal (only). If the two components are not independent (i.e. the copula density $c_\theta$ is not equal to 1 everywhere), the third part cannot be considered as null, and so, in a general context,
$$(\hat\alpha,\hat\beta,\hat\theta)\neq(\tilde\alpha,\tilde\beta,\tilde\theta),$$

where
$$(\hat\alpha,\hat\beta,\hat\theta)=\underset{(\alpha,\beta,\theta)}{\operatorname{argmax}}\,\log\mathcal{L}(\alpha,\beta,\theta),$$

while $\tilde\alpha$ and $\tilde\beta$ maximize the two marginal log-likelihoods separately, and
$$\tilde\theta=\underset{\theta}{\operatorname{argmax}}\,\sum_{i=1}^n\log c_\theta\big(F_{\tilde\alpha}(X_i),G_{\tilde\beta}(Y_i)\big).$$

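To make the decomposition concrete, here is a small sketch (not in the original post; the parameter values are arbitrary) evaluating the three parts of the log-likelihood for a sample drawn from a Gaussian copula with lognormal margins:

library(copula)
set.seed(1)
cop <- normalCopula(0.8, dim = 2)
u0  <- rCopula(25, cop)             # sample from the copula
x   <- qlnorm(u0[, 1], 1, 1)        # first lognormal margin
y   <- qlnorm(u0[, 2], 2, sqrt(2))  # second lognormal margin
# the three parts of the log-likelihood, evaluated at the true parameters
part1 <- sum(dlnorm(x, 1, 1, log = TRUE))
part2 <- sum(dlnorm(y, 2, sqrt(2), log = TRUE))
part3 <- sum(dCopula(cbind(plnorm(x, 1, 1), plnorm(y, 2, sqrt(2))), cop, log = TRUE))
part1 + part2 + part3               # full log-likelihood
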
In order to illustrate this point, consider a bivariate lognormal distribution (obtained by taking the exponential of a Gaussian vector)

> mu1=1
> mu2=2
> MU=c(mu1,mu2)
> s1=1
> s2=sqrt(2)
> r=.8
> SIGMA=matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
> library(mnormt)
> set.seed(1)
> Z=exp(rmnorm(25,MU,SIGMA))

If we believe that marginals and correlations can be treated separately, we can start with marginal distributions.

> library(MASS)
> (p1=fitdistr(Z[,1],"lognormal"))
    meanlog      sdlog  
  1.1686652   0.9309119 
 (0.1861824) (0.1316508)
> (p2=fitdistr(Z[,2],"lognormal"))
    meanlog      sdlog  
  2.2181721   1.1684049 
 (0.2336810) (0.1652374)

Based on those marginal distributions, define the pseudo-observations $\hat U_i=F_{\tilde\alpha}(X_i)$ and $\hat V_i=G_{\tilde\beta}(Y_i)$, and consider the maximum likelihood estimator $\tilde\theta$ of the copula parameter obtained from this pseudo sample,
$$\tilde\theta=\underset{\theta}{\operatorname{argmax}}\,\sum_{i=1}^n\log c_\theta(\hat U_i,\hat V_i).$$

Numerically, we get the following (we consider a Gaussian copula here, which is indeed the true copula of the data generated above)

> library(copula)
> Gcop=normalCopula(.3,dim=2)
> U=cbind(plnorm(Z[,1],p1$estimate[1],p1$estimate[2]),
+ plnorm(Z[,2],p2$estimate[1],p2$estimate[2]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
      Estimate Std. Error z value Pr(>|z|)    
rho.1  0.86530    0.03799   22.77

But clearly, we did not treat the dependence structure separately, since the copula estimator was a function of the estimated marginal distributions,
$$\tilde\theta=\tilde\theta(\tilde\alpha,\tilde\beta).$$

If we consider a global optimization problem, then results are different. The joint density can be derived (see e.g. Mostafa & Mahmoud (1964)),
$$f(x_1,x_2)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-r^2}\,x_1x_2}\exp\left(-\frac{a_1^2-2r\,a_1a_2+a_2^2}{2(1-r^2)}\right),\qquad a_i=\frac{\log x_i-\mu_i}{\sigma_i},$$
which is coded as

> dbivlognorm=function(x,theta){
+ mu1=theta[1]
+ mu2=theta[2]
+ s1=theta[3]
+ s2=theta[4]
+ r=theta[5]
+ a1=(log(x[,1])-mu1)/s1
+ a2=(log(x[,2])-mu2)/s2
+ d=1/(2*pi*s1*s2*sqrt(1-r^2))*1/(x[,1]*x[,2])*
+ exp(-(a1^2-2*r*a1*a2+a2^2)/(2*(1-r^2)))
+ return(d)
+ }
> LogLik=function(theta){
+ return(-sum(log(dbivlognorm(Z,theta))))}
> optim(par=c(0,0,1,1,0),fn=LogLik)$par
[1] 1.1655359 2.2159767 0.9237853 1.1610132 0.8645052

The difference is not huge, but still, the estimators are not identical. From a statistical point of view, we can hardly treat the marginals and the dependence structure separately.

Another point we should keep in mind is that the estimation of the copula parameter depends on the margins, not only through the parameters, but more deeply, through the choice of the marginal distributions (that might be misspecified). For instance, if we assume that margins are exponentially distributed,

> (p1=fitdistr(Z[,1],"exponential"))
      rate   
  0.22288362 
 (0.04457672)
> (p2=fitdistr(Z[,2],"exponential"))
      rate   
  0.06543665 
 (0.01308733)

the estimation of the parameter of the Gaussian copula yields

> U=cbind(pexp(Z[,1],p1$estimate[1]),
+ pexp(Z[,2],p2$estimate[1]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
      Estimate Std. Error z value Pr(>|z|)    
rho.1  0.87421    0.03617   24.17   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The maximized loglikelihood is  15.4 
Optimization converged

The problem is that since we misspecify the marginal distributions, our pseudo sample still takes values in the unit square, but there is no chance that we get uniform margins. If we generate a sample of size 500 with the code above, and plot the pseudo-observations,

> x <- U[,1]; y <- U[,2]
> xhist <- hist(x, plot=FALSE) ; yhist <- hist(y, plot=FALSE)
> top <- max(c(xhist$counts, yhist$counts)) 
> nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE) 
> par(mar=c(3,3,1,1)) 
> plot(x, y, xlab="", ylab="",col="red",xlim=0:1,ylim=0:1) 
> par(mar=c(0,3,1,1))
> barplot(xhist$counts, axes=FALSE, ylim=c(0, top), 
+ space=0,col="light green") 
> par(mar=c(3,0,1,1))
> barplot(yhist$counts, axes=FALSE, xlim=c(0, top), 
+ space=0, horiz=TRUE,col="light blue")

If we compare with the previous case, where the marginal distributions were well specified, we can clearly see that the dependence structure of the pseudo sample depends on the marginal distributions,
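
A quick way to check this numerically (a sketch, not in the original post, and ignoring the fact that the rate parameters were estimated) is to test the uniformity of each coordinate of the pseudo sample obtained under the misspecified exponential margins:

# sketch: test whether each coordinate of the pseudo sample looks uniform on [0,1]
ks.test(U[, 1], "punif")
ks.test(U[, 2], "punif")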

Great Stats on Elsevier’s Website

A lot of researchers have recently been extremely unfair, criticizing Elsevier, one major publisher of academic journals. Yes, it is a big publisher. So what? When you publish a paper in any journal that belongs to that group, you now have access to amazing statistics, via Scopus.

Not only do you know how many times the pdf file has been downloaded (you can also get that information on arXiv or SSRN), but you can get much more!

You can even know how many times the pdf has been opened, and you can even visualize on which paragraphs readers spent the most time while reading it! This EyeTrackPDF tool is just amazing!

(you need to pay to get details per IP address; the standard graph is an average of eye-tracker information, from what I understood). You can see where the most interesting parts of the paper are, which is quite informative, actually.

Of course, on all the papers, what we observe is that people spend most of their time on the references, checking for their own names and making sure that their work was mentioned! You can even get an understanding score, which is an estimate of the number of readers who might have understood the paper.

Awesome, isn’t it?