After the presentations of the last sessions, the exam for the MAT8181 course, Time Series, took place this morning (and should be over in a few minutes, with some extra time for a few students, given the subway breakdown we had the joy of experiencing). The exam is online, and I have also written some elements of correction. In case of (minor or major) disagreement with my answers, please let me know quickly!
Stationarity of ARCH processes
In the context of AR(1) processes, $X_t=\phi\,X_{t-1}+\varepsilon_t$, we spent some time explaining what happens when $\phi$ is close to 1.
- if $|\phi|<1$, the process is stationary,
- if $\phi=1$, the process is a random walk,
- if $|\phi|>1$, the process will explode.
Again, random walks are extremely interesting processes, with puzzling properties. For instance, the variance of $X_t$ goes to infinity as $t\to\infty$, and yet the process will cross the x-axis an infinite number of times…
Recently, in the MAT8181 course, we studied carefully the properties of the ARCH(1) process, especially when the coefficient $\alpha$ is larger than 1. And again, what we get might be puzzling.
Consider some ARCH(1) process $(\varepsilon_t)$, with a Gaussian noise, i.e. $\varepsilon_t=\eta_t\,\sigma_t$, where $\sigma_t^2=\omega+\alpha\,\varepsilon_{t-1}^2$, and $(\eta_t)$ is a sequence of i.i.d. $\mathcal{N}(0,1)$ variables. Here both $\omega$ and $\alpha$ have to be positive.
Recall that $\mathbb{E}(\varepsilon_t)=0$, since $\mathbb{E}(\eta_t)=0$ and $\eta_t$ is independent of $\sigma_t$. Further, $\mathbb{E}(\varepsilon_t^2)=\omega+\alpha\,\mathbb{E}(\varepsilon_{t-1}^2)$ since $\mathbb{E}(\eta_t^2)=1$, so the variance exists, and is constant, only if $\alpha\in(0,1)$, and in that case $\text{Var}(\varepsilon_t)=\dfrac{\omega}{1-\alpha}$.
Further, if $3\alpha^2<1$, then the fourth moment can be obtained, $\mathbb{E}(\varepsilon_t^4)=\dfrac{3\,\omega^2(1+\alpha)}{(1-\alpha)(1-3\alpha^2)}$, since $\mathbb{E}(\eta_t^4)=3$. Now, if we get back to the property obtained while studying the variance, what does that mean if $\alpha=1$, or $\alpha>1$?
If we look at simulations, we can generate an ARCH(1) process with $\alpha=2$ and $\omega=0.2$, for instance.
> n=600
> a=2
> w=0.2
> set.seed(1)
> eta=rnorm(n)
> epsilon=rnorm(n)
> sigma2=rep(w,n)
> for(t in 2:n){
+ sigma2[t]=w+a*epsilon[t-1]^2
+ epsilon[t]=eta[t]*sqrt(sigma2[t])
+ }
> plot(epsilon,type="l")
In order to understand what's going on, we should keep in mind that what we got is that $\alpha$ has to lie in $(0,1)$ to be able to compute the second moment of $(\varepsilon_t)$. But it is possible to have a stationary process with infinite variance. And actually, this is what we have here.
Write
$$\varepsilon_t^2=\eta_t^2\,\sigma_t^2=\eta_t^2\,(\omega+\alpha\,\varepsilon_{t-1}^2)$$
and then, iterate,
$$\varepsilon_t^2=\eta_t^2\,\big(\omega+\alpha\,\eta_{t-1}^2\,(\omega+\alpha\,\varepsilon_{t-2}^2)\big)$$
and iterate again, and again, and again, to write $\varepsilon_t^2$ as a series,
$$\varepsilon_t^2=\omega\,\eta_t^2\Big(1+\sum_{k\ge 1}a_k\Big),$$
where
$$a_k=\alpha^k\,\eta_{t-1}^2\,\eta_{t-2}^2\cdots\eta_{t-k}^2.$$
Here, we have a sum of positive terms, and we can use the so-called Cauchy rule: define
$$\ell=\limsup_{k\to\infty}\,a_k^{1/k},$$
then, if $\ell<1$, the series converges. Here,
$$a_k^{1/k}=\Big(\alpha^k\prod_{j=1}^{k}\eta_{t-j}^2\Big)^{1/k}$$
which can also be written
$$a_k^{1/k}=\alpha\,\exp\Big(\frac{1}{k}\sum_{j=1}^{k}\log\eta_{t-j}^2\Big)$$
and from the law of large numbers, since we have here a sum of i.i.d. terms,
$$a_k^{1/k}\longrightarrow\alpha\,\exp\big(\mathbb{E}[\log\eta^2]\big)\quad\text{almost surely.}$$
So, if $\alpha\,\exp\big(\mathbb{E}[\log\eta^2]\big)<1$, then the series will have a (finite) limit when the number of terms goes to infinity.
The condition above can be written
$$\mathbb{E}\big[\log(\alpha\,\eta^2)\big]<0,$$
where the left-hand side is called the Lyapunov coefficient. The equation
$$\mathbb{E}\big[\log(\alpha\,\eta^2)\big]=0$$
is a condition on $\alpha$, namely $\alpha=1/\exp\big(\mathbb{E}[\log\eta^2]\big)$. In the case where $\eta\sim\mathcal{N}(0,1)$, the numerical value of this upper bound is 3.56.
> 1/exp(mean(log(rnorm(1e7)^2)))
[1] 3.562517
In that case ($1\le\alpha<3.56$), the variance may be infinite, but the series $(\varepsilon_t)$ is (strictly) stationary. On the other hand, if $\alpha>3.56$, then $\varepsilon_t^2$ will go to infinity almost surely, as $t$ goes to infinity.
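To illustrate, here is a minimal simulation sketch (the two values of $\alpha$, namely 2 and 4, are chosen for illustration, one on each side of that 3.56 bound, and the helper function simarch is mine): the empirical Lyapunov coefficient $\mathbb{E}[\log(\alpha\,\eta^2)]$ is negative in the first case, and positive in the second one.
> lyapunov=function(a) mean(log(a*rnorm(1e6)^2))   # Monte Carlo estimate of E[log(alpha*eta^2)]
> lyapunov(2)   # negative: strictly stationary, even though the variance is infinite (alpha>1)
> lyapunov(4)   # positive: the process explodes
> simarch=function(a,w=.2,n=1000){
+ eta=rnorm(n); epsilon=rep(0,n); sigma2=rep(w,n)
+ for(t in 2:n){
+ sigma2[t]=w+a*epsilon[t-1]^2
+ epsilon[t]=eta[t]*sqrt(sigma2[t])}
+ epsilon}
> par(mfrow=c(2,1))
> plot(simarch(2),type="l")   # huge spikes, but the series keeps coming back
> plot(simarch(4),type="l")   # explosive behaviour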
But in order to observe this difference, we need a lot of observations. For instance, with one value of $\alpha$ below that upper bound, and one above it, we can easily see a difference. I do not say that it's easy to see that the distribution above has an infinite variance, but still. Actually, if we consider Hill's plot on the series above, on the tails of positive $\varepsilon_t$'s,
> library(evir)
> hill(epsilon)
or on the tails of negative $\varepsilon_t$'s,
> hill(-epsilon)
we can see that the tail index is (strictly) smaller than 2 (meaning that the moment of order 2 does not exist).
Why is it puzzling? Maybe because here, $(\varepsilon_t)$ is not weakly stationary (in the $L^2$ sense), but it is strongly stationary. Which is not the usual way weak and strong are related. This might be why we will not call this strong stationarity, but strict stationarity.
Inference for ARCH processes
Consider some ARCH($p$) process, say an ARCH($1$),
$$\varepsilon_t=\sigma_t\,\eta_t$$
where
$$\sigma_t^2=\omega+\alpha\,\varepsilon_{t-1}^2$$
with a Gaussian (strong) white noise $(\eta_t)$.
> n=500
> a1=0.8
> a2=0.0
> w= 0.2
> set.seed(1)
> eta=rnorm(n)
> epsilon=rnorm(n)
> sigma2=rep(w,n)
> for(t in 3:n){
+ sigma2[t]=w+a1*epsilon[t-1]^2+a2*epsilon[t-2]^2
+ epsilon[t]=eta[t]*sqrt(sigma2[t])
+ }
> par(mfrow=c(1,1))
> plot(epsilon,type="l",ylim=c(min(epsilon)-.5,max(epsilon)))
> lines(min(epsilon)-1+sqrt(sigma2),col="red")
(the red line is the conditional variance process).
> par(mfrow=c(1,2))
> acf(epsilon,lag=50,lwd=2)
> acf(epsilon^2,lag=50,lwd=2)
We did mention in class that if $(\varepsilon_t)$ is an ARCH(1) process, then $(\varepsilon_t^2)$ is an AR(1) process. So a first idea is to consider a regression, as we did for Gaussian AR(1) processes,
> db=data.frame(Y=epsilon[2:n]^2,X1=epsilon[1:(n-1)]^2)
> summary(lm(Y~X1,data=db))

Call:
lm(formula = Y ~ X1, data = db)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.4538 -0.3618 -0.2626  0.0935  9.3667 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.34963    0.04342   8.052 6.08e-15 ***
X1           0.31123    0.04262   7.303 1.13e-12 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.8413 on 497 degrees of freedom
Multiple R-squared: 0.0969, Adjusted R-squared: 0.09508 
F-statistic: 53.33 on 1 and 497 DF, p-value: 1.129e-12
There is some significant autocorrelation here. But since our vectors cannot be considered as Gaussian, using least squares is perhaps not the best strategy. Actually, if our series is not Gaussian, it is still conditionally Gaussian, since we assumed that $(\eta_t)$ is a Gaussian (strong) white noise.
The likelihood is then
$$\mathcal{L}(\omega,\alpha)=\prod_{t=2}^{n}\frac{1}{\sqrt{2\pi\,\sigma_t^2}}\exp\Big(-\frac{\varepsilon_t^2}{2\,\sigma_t^2}\Big)$$
and the log-likelihood is
$$\log\mathcal{L}(\omega,\alpha)=-\frac{1}{2}\sum_{t=2}^{n}\log\sigma_t^2-\frac{1}{2}\sum_{t=2}^{n}\frac{\varepsilon_t^2}{\sigma_t^2}+\text{constant}.$$
And a natural idea is to define
$$(\widehat{\omega},\widehat{\alpha})=\underset{(\omega,\alpha)}{\text{argmax}}\;\log\mathcal{L}(\omega,\alpha).$$
The code is simply
> X=epsilon
> loglik=function(param){
+ w=exp(param[1])
+ a1=exp(param[2])
+ s2=rep(w,n)
+ for(t in 2:length(X)){s2[t]=w+a1*X[t-1]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)
+ }
> OPT=optim(par=
+ coefficients(lm(Y~X1,data=db)),fn=loglik)
> exp(OPT$par)
(Intercept)          X1 
  0.2482241   0.5858578 
(since the parameters have to be positive, we assume here that they can be written as the exponential of some real values). Observe that those values are closer to the ones used to generate our time series.
If we use R functions to estimate those parameters, we get
> library(tseries)
> summary(garch(epsilon,c(0,1)))
...
Call:
garch(x = epsilon, order = c(0, 1))

Model:
GARCH(0,1)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.87023 -0.60836 -0.03426  0.66648  3.48443 

Coefficient(s):
    Estimate  Std. Error  t value Pr(>|t|)    
a0   0.24959     0.02470   10.104  < 2e-16 ***
a1   0.58306     0.09737    5.988 2.13e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
so that the (approximate) 95% confidence interval for $\alpha$ is
> summary(garch(epsilon,c(0,1)))$coef[2,1]+
+ c(-1.96,1.96)*summary(garch(epsilon,c(0,1)))$coef[2,2]
[1] 0.3922088 0.7739088
Actually, since our main interest is this parameter, it is possible to use profile likelihood techniques,
> proflik=function(a){
+ loglik=function(w){
+ s2=rep(w,n)
+ for(t in 2:length(X)){s2[t]=w+a*X[t-1]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)}
+ return(-optim(par=.3,fn=loglik)$value)}
> A=seq(0,2,by=.05)
> P=Vectorize(proflik)(A)
> par(mfrow=c(1,1))
> plot(A,P,type="l")
> OPT=optimize(function(x) -proflik(x), interval=c(0,2))
> t=-OPT$objective-qchisq(.95,df=1)
> abline(h=t,col="red")
> ainf=uniroot(function(x) proflik(x)-t,c(0,OPT$minimum))$root
> asup=uniroot(function(x) proflik(x)-t,c(OPT$minimum,2))$root
> abline(v=ainf,lty=2)
> abline(v=asup,lty=2)
Of course, all those techniques can be extended to higher order ARCH processes. For instance, if we assume that we have an ARCH(2) time series
$$\varepsilon_t=\sigma_t\,\eta_t,$$
where now
$$\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2+\alpha_2\,\varepsilon_{t-2}^2,$$
with a Gaussian (strong) white noise $(\eta_t)$. The log-likelihood is still
$$\log\mathcal{L}(\omega,\alpha_1,\alpha_2)=-\frac{1}{2}\sum_{t}\log\sigma_t^2-\frac{1}{2}\sum_{t}\frac{\varepsilon_t^2}{\sigma_t^2}+\text{constant}$$
and we can define
$$(\widehat{\omega},\widehat{\alpha}_1,\widehat{\alpha}_2)=\underset{(\omega,\alpha_1,\alpha_2)}{\text{argmax}}\;\log\mathcal{L}(\omega,\alpha_1,\alpha_2).$$
The code above can be changed, to take into account this additional component,
> db=data.frame(Y=epsilon[3:n]^2,
+ X1=epsilon[2:(n-1)]^2,
+ X2=epsilon[1:(n-2)]^2)
> X=epsilon
> loglik=function(param){
+ w=exp(param[1])
+ a1=exp(param[2])
+ a2=exp(param[3])
+ s2=rep(w,n)
+ for(t in 3:length(X)){s2[t]=w+a1*X[t-1]^2+a2*X[t-2]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)
+ }
> OPT=optim(par=
+ coefficients(lm(Y~X1+X2,data=db)),fn=loglik)
> exp(OPT$par)
(Intercept)          X1          X2 
 0.22710526  0.59475474  0.04741294 
We can also consider some Generalized ARCH process, e.g. a GARCH(1,1),
$$\varepsilon_t=\sigma_t\,\eta_t,$$
where now
$$\sigma_t^2=\omega+\alpha\,\varepsilon_{t-1}^2+\beta\,\sigma_{t-1}^2.$$
Again, maximum likelihood techniques can be used. Actually, we can also code a Fisher-scoring algorithm, since (in a very general context) the update is
$$\boldsymbol{\theta}_{k+1}=\boldsymbol{\theta}_k+H_k^{-1}G_k,$$
where $G_k$ is the gradient of the log-likelihood and $H_k$ an estimate of the information matrix, with here $\boldsymbol{\theta}=(\omega,\alpha,\beta)$. Using a standard gradient-based algorithm, we get the following estimate for our GARCH process,
> X=epsilon
> theta=c(.2,.2,.2)
> G=rep(1,3)
> n=length(X)
> j=1
> while(sum(G^2)>1e-12){
+ s2=rep(theta[1],n)
+ for (i in 2:n){s2[i]=theta[1]+theta[2]*X[(i-1)]^2+theta[3]*s2[(i-1)]}
+ z=(X^2-s2)/s2^2
+ V=cbind(z[2:n],z[2:n]*X[1:(n-1)]^2,z[2:n]*s2[1:(n-1)])
+ H=(t(V)%*%V)
+ G=apply(V,2,sum)
+ theta=theta+solve(H)%*%G
+ j=j+1}
> as.numeric(theta)
[1] 0.20372918 0.59183911 0.08936159
The interesting point, here, is that we also derive the (asymptotic) standard errors of the estimators,
> (stdev=sqrt(diag(solve(H))))
[1] 0.01849067 0.04950477 0.02937233
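From those estimates and standard errors, a rough sketch of asymptotic 95% confidence intervals (a simple normal approximation, added here for illustration) would be
> cbind(estimate=as.numeric(theta),
+ lower=as.numeric(theta)-1.96*stdev,
+ upper=as.numeric(theta)+1.96*stdev)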
Independence and correlation
A short post to get back on a property I gave briefly in the MAT8595 class in January, and again in the MAT8181 class this week (to illustrate the distinction between weak and strong white noises).

Recall that (real-valued) random variables $X$ and $Y$ are independent if, for all $x,y$, $\mathbb{P}(X\le x,Y\le y)=\mathbb{P}(X\le x)\cdot\mathbb{P}(Y\le y)$. Another characterization, for integrable variables, is that for all (bounded, measurable) functions $h$ and $k$, $\mathbb{E}[h(X)\,k(Y)]=\mathbb{E}[h(X)]\cdot\mathbb{E}[k(Y)]$, which can be written, if the variables are square integrable, $\text{cov}\big(h(X),k(Y)\big)=0$. The idea to prove this characterization is to observe that, if $X$ and $Y$ are independent, the property holds for indicator functions; using a standard argument in integration theory, equality extends to step functions (not only indicators), then to positive measurable functions, and finally to integrable functions. Proving this result is not that difficult.

Observe that Rényi (1959) – inspired by Gebelein (1947) – followed by Sarmanov (1958) introduced the concept of maximal correlation, that can be related to this result,
$$r^{\star}(X,Y)=\sup_{h,k}\;\text{corr}\big(h(X),k(Y)\big),$$
where the supremum is taken over all functions $h$ and $k$ such that the correlation exists. Actually, it is possible to consider only transformations such that $\mathbb{E}[h(X)]=0$ and $\text{Var}(h(X))=1$ (and similarly for $k(Y)$); the idea is that we simply center and scale, which does not impact the correlation. Thus, $X$ and $Y$ are independent if and only if $r^{\star}(X,Y)=0$.

Algorithms to estimate that coefficient are interesting. The problem can be written, equivalently, as the minimization of $\mathbb{E}\big[(h(X)-k(Y))^2\big]$. And if the minimization is considered over $h$, assuming that $k$ is fixed, then the optimal transformation is (a rescaled version of) $h^{\star}(x)=\mathbb{E}[k(Y)\mid X=x]$. And similarly for $k$. So, using an iterative algorithm, it is possible to get $h^{\star}$ and $k^{\star}$ (see Breiman and Friedman (1985) for more details). Actually, those functions appear in nonlinear canonical analysis. As mentioned in Lancaster (1957), for a Gaussian random vector $(X,Y)$, $r^{\star}(X,Y)=|\text{corr}(X,Y)|$, and in that case $h^{\star}$ and $k^{\star}$ are affine functions. This can be related to Hermite's polynomials and to the expansion of the bivariate Gaussian density. I still hope that someone will go further for the project in the MAT8181 course.
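To experiment with maximal correlation numerically, here is a minimal sketch, assuming the acepack package (an implementation of Breiman and Friedman's ACE algorithm); the toy example ($Y$ equal to $X^2$ plus a small noise) is chosen so that the variables are (almost) uncorrelated, but clearly not independent.
> library(acepack)
> set.seed(1)
> X=rnorm(1000)
> Y=X^2+rnorm(1000)/10
> cor(X,Y)              # close to 0: no linear correlation
> fit=ace(X,Y)
> cor(fit$tx,fit$ty)    # close to 1: the maximal correlation detects the dependence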
Seasonal Unit Roots
As discussed in the MAT8181 course, there are – at least – two kinds of non-stationary time series: those with a trend, and those with a unit-root (they will be called integrated). Unit root tests cannot be used to assess whether a time series is stationary, or not. They can only detect integrated time series. And the same holds for seasonal unit root.
In a previous post, we've seen that it was difficult to model hourly temperature, since most tests do not reject the unit root hypothesis. Consider here the average monthly temperature, still in Montréal, QC.
> montreal=read.table("http://freakonometrics.free.fr/temp-montreal-monthly.txt")
> M=as.matrix(montreal[,2:13])
> X=as.numeric(t(M))
> tsm=ts(X,start=1948,freq=12)
> plot(tsm)
For those who don’t know Montréal, Winter and Summer are very different. We can visualize monthly differences using
> month=rep(1:12,length(tsm)/12)
> plot(month,as.numeric(tsm))
> lines(1:12,apply(M,2,mean),col="red",type="b",pch=19)
or, if we install the uroot package (which was removed from the CRAN repository), we can use
> library(uroot)
> bbplot(tsm)
or
> bb3D(tsm) Loading required package: tcltk
It looks like our time series is cyclic, because of the yearly seasonal pattern. The autocorrelation function is here
> acf(tsm,lag=120)
Again, this cycle can be visualized using
> persp(1948:2013,1:12,M,theta=-50,col="yellow",shade=TRUE,
+ xlab="Year",ylab="Month",zlab="Temperature",ticktype="detailed")
Now, the question is: is there a seasonal unit root? This would mean that our model should be of the form
$$\Phi(L)\,(1-L^{12})\,X_t=\Theta(L)\,\varepsilon_t.$$
If we forget about the autoregressive and the moving average components, we can estimate
$$X_t=\phi\,X_{t-12}+\varepsilon_t.$$
If there is a seasonal unit root, then $\phi$ should be close to 1. Somehow.
> arima(tsm,order=c(0,0,0),seasonal=list(order=c(1,0,0),period=12))

Call:
arima(x = tsm, order = c(0, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12))

Coefficients:
        sar1  intercept
      0.9702     6.4566
s.e.  0.0071     2.1515
It is not far away from 1. Actually, it cannot be too close to 1. If it was, then we would get an error message…
To illustrate some interesting models, let us consider also quarterly temperatures,
> N=cbind(apply(montreal[,2:4],1,sum),apply(montreal[,5:7],1,sum),apply(montreal[,8:10],1,sum),apply(montreal[,11:13],1,sum))
> X=as.numeric(t(N))
> tsq=ts(X,start=1948,freq=4)
> persp(1948:2013,1:4,N,theta=-50,col="yellow",shade=TRUE,
+ xlab="Year",ylab="Quarter",zlab="Temperature",ticktype="detailed")
(again, the aim is just to be able to write down some equations, if necessary)
Why not consider a model on the quarterly temperatures? Something like
$$Y_{t,q}=\sum_{q'=1}^{4}A_{q,q'}\,Y_{t-1,q'}+\eta_{t,q},$$
i.e., in vector form,
$$\boldsymbol{Y}_t=A\,\boldsymbol{Y}_{t-1}+\boldsymbol{\eta}_t,$$
where $A$ is some $4\times 4$ matrix. This model can easily be estimated,
> library(vars) > df=data.frame(N) > names(df)=paste("y",1:4,sep="") > model=VAR(df) > model VAR Estimation Results: ======================= Estimated coefficients for equation y1: ======================================= Call: y1 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const y1.l1 y2.l1 y3.l1 y4.l1 const -0.13943065 0.21451118 0.08921237 0.30362065 -34.74793931 Estimated coefficients for equation y2: ======================================= Call: y2 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const y1.l1 y2.l1 y3.l1 y4.l1 const 0.02520938 0.05288958 -0.13277377 0.05134148 40.68955266 Estimated coefficients for equation y3: ======================================= Call: y3 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const y1.l1 y2.l1 y3.l1 y4.l1 const 0.07740824 -0.21142726 0.11180518 0.12963931 56.81087283 Estimated coefficients for equation y4: ======================================= Call: y4 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const y1.l1 y2.l1 y3.l1 y4.l1 const 0.18842863 -0.31964579 0.25099508 -0.04452577 5.73228873
and the matrix $A$ is here
> A=rbind(
+ coefficients(model$varresult$y1)[1:4],
+ coefficients(model$varresult$y2)[1:4],
+ coefficients(model$varresult$y3)[1:4],
+ coefficients(model$varresult$y4)[1:4])
> A
           y1.l1       y2.l1       y3.l1       y4.l1
[1,] -0.13943065  0.21451118  0.08921237  0.30362065
[2,]  0.02520938  0.05288958 -0.13277377  0.05134148
[3,]  0.07740824 -0.21142726  0.11180518  0.12963931
[4,]  0.18842863 -0.31964579  0.25099508 -0.04452577
Since stationarity of this multivariate time series is closely related to the eigenvalues of this matrix, let us look at them,
> eigen(A)$values
[1]  0.35834830 -0.32824657 -0.14042175  0.09105836
> Mod(eigen(A)$values)
[1] 0.35834830 0.32824657 0.14042175 0.09105836
So it looks like there is no stationarity issue, here. A restricted model is the periodic autoregressive model, the so-called PAR(1) model, discussed by Paap and Franses,
$$Y_t=\alpha_{1,s}\,Y_{t-1}+\varepsilon_t,$$
where the autoregressive coefficient $\alpha_{1,s}$ depends on the season (quarter) $s\in\{1,2,3,4\}$. Keep in mind that this is a multivariate model, since the vector of the four quarterly observations satisfies
$$\Phi_0\,\boldsymbol{Y}_t=\Phi_1\,\boldsymbol{Y}_{t-1}+\boldsymbol{\eta}_t$$
for some matrices $\Phi_0$ and $\Phi_1$.
This model can be estimated using a specific package (one can also look at the vignette, to get a better understanding of the syntax)
> library(partsm) > detcomp <- list(regular=c(0,0,0), seasonal=c(1,0), regvar=0) > model=fit.ar.par(wts=tsq, detcomp=detcomp, type="PAR", p=1) > PAR.MVrepr(model) ---- Multivariate representation of a PAR model. Phi0: 1.000 0.000 0.000 0 -0.242 1.000 0.000 0 0.000 -0.261 1.000 0 0.000 0.000 -0.492 1 Phi1: 0 0 0 0.314 0 0 0 0.000 0 0 0 0.000 0 0 0 0.000 Eigen values of Gamma = Phi0^{-1} %*% Phi1: 0.01 0 0 0 Time varing accumulation of shocks: 0.010 0.040 0.155 0.314 0.002 0.010 0.037 0.076 0.001 0.003 0.010 0.020 0.000 0.001 0.005 0.010
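As a quick sanity check on that output (using the coefficients that appear, up to their sign, in the Phi0 and Phi1 matrices above), the non-zero eigenvalue of $\Gamma=\Phi_0^{-1}\Phi_1$ reported by PAR.MVrepr is simply the product of the four seasonal autoregressive coefficients,
> prod(c(0.242,0.261,0.492,0.314))
[1] 0.009757771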
Here, the characteristic equation is
$$1-\big(\alpha_{1,1}\,\alpha_{1,2}\,\alpha_{1,3}\,\alpha_{1,4}\big)\,z=0,$$
so there is a (seasonal) unit root if
$$\alpha_{1,1}\,\alpha_{1,2}\,\alpha_{1,3}\,\alpha_{1,4}=1.$$
Which is clearly not the case here, since the product is about 0.01. It is possible to perform the Canova-Hansen test,
> CH.test(tsq) ------ - ------ ---- Canova & Hansen test ------ - ------ ---- Null hypothesis: Stationarity. Alternative hypothesis: Unit root. Frequency of the tested cycles: pi/2 , pi , L-statistic: 1.122 Lag truncation parameter: 5 Critical values: 0.10 0.05 0.025 0.01 0.846 1.01 1.16 1.35
The idea is that the polynomial $1-L^4$ has four roots, in $\{1,-1,i,-i\}$, since
$$1-L^4=(1-L)(1+L)(1-iL)(1+iL).$$
If we get back to monthly data, $1-L^{12}$ has twelve roots (the twelfth roots of unity),
each of them having different interpretations.
Here we can have 1 cycle per year (on 12 months), 2 cycles per year (on 6 months), 3 cycles per year (on 4 months), 4 cycles per year (on 3 months), even 6 cycles per year (on 2 months). This will depend on the argument of the root, with respectively $\pi/6$, $\pi/3$, $\pi/2$, $2\pi/3$ and $\pi$ (the frequencies tested in the output below).
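As a quick numerical illustration, the twelve roots of $1-L^{12}$ (and their arguments) can be computed directly,
> roots=polyroot(c(1,rep(0,11),-1))   # roots of 1 - z^12, i.e. the twelfth roots of unity
> sort(round(Arg(roots)/pi,3))        # arguments, expressed as multiples of pi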
The output of the test is here
> CH.test(tsm) ------ - ------ ---- Canova & Hansen test ------ - ------ ---- Null hypothesis: Stationarity. Alternative hypothesis: Unit root. Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , L-statistic: 1.964 Lag truncation parameter: 20 Critical values: 0.10 0.05 0.025 0.01 2.49 2.75 2.99 3.27
And since the test statistic is below the critical values, we do not reject stationarity: there is no evidence of a seasonal unit root here. I can even mention the following testing procedure
> library(forecast)
> nsdiffs(tsm, test="ch")
[1] 0
where the output "1" means that there is a seasonal unit root and "0" that there is no seasonal unit root. Simple to read, isn't it? If we consider the periodic autoregressive model on the monthly data, the output is
> model=fit.ar.par(wts=tsm, detcomp=detcomp, type="PAR", p=1) > model ---- PAR model of order 1 . y_t = alpha_{1,s}*y_{t-1} + alpha_{2,s}*y_{t-2} + ... + alpha_{p,s}*y_{t-p} + coeffs*detcomp + epsilon_t, for s=1,2,...,12 ---- Autoregressive coefficients. s=1 s=2 s=3 s=4 s=5 s=6 s=7 s=8 s=9 s=10 s=11 s=12 alpha_1s 0.15 0.05 0.07 0.33 0.11 0 0.3 0.38 0.31 0.19 0.15 0.37
So, whatever the test, we always reject the assumption that there is a seasonal unit root. Which does not mean that we cannot have a strong cycle! Actually, the series is almost periodic. But there is no unit root! So all of this makes sense (I hardly believe that there might be a unit root – seasonal, or not – in temperatures).
Just to make sure that we get it right, consider two time series, inspired by the previous one. The first one is a periodic sequence (with a very, very small noise, just to avoid problems with non-definite matrices) and the second one is clearly integrated.
> Xp1=Xp2=as.numeric(t(M))
> for(t in 13:length(M)){
+ Xp1[t]=Xp1[t-12]
+ Xp2[t]=Xp2[t-12]+rnorm(1,0,2)
+ }
> Xp1=Xp1+rnorm(length(Xp1),0,.02)
> tsp1=ts(Xp1,start=1948,freq=12)
> tsp2=ts(Xp2,start=1948,freq=12)
> par(mfrow=c(2,1))
> plot(tsp1)
> plot(tsp2)
see also
> par(mfrow=c(1,2))
> bb3D(tsp1)
> bb3D(tsp2)
If we quickly look at those series, I would say that the first one has no unit root – even if it is not stationary, but it is because the series is periodic – while there is (are ?) unit root(s) for the second one. If we look at Canova-Hansen test, we get
> CH.test(tsp1) ------ - ------ ---- Canova & Hansen test ------ - ------ ---- Null hypothesis: Stationarity. Alternative hypothesis: Unit root. Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , L-statistic: 2.234 Lag truncation parameter: 20 Critical values: 0.10 0.05 0.025 0.01 2.49 2.75 2.99 3.27 > CH.test(tsp2) ------ - ------ ---- Canova & Hansen test ------ - ------ ---- Null hypothesis: Stationarity. Alternative hypothesis: Unit root. Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , L-statistic: 5.448 Lag truncation parameter: 20 Critical values: 0.10 0.05 0.025 0.01 2.49 2.75 2.99 3.27
I know that this package has been removed, so maybe I should not use it. Consider instead
> nsdiffs(tsp1, 12,test="ch") [1] 0 > nsdiffs(tsp2, 12,test="ch") [1] 1
Here we have the same conclusion. The first series does not have a unit root, but the second one does. But be careful: with the Osborn-Chui-Smith-Birchenhall test,
> nsdiffs(tsp1, 12,test="ocsb") [1] 1 > nsdiffs(tsp2, 12,test="ocsb") [1] 1
we have the feeling that there is also a unit root in our cyclic series…
So here, on low-frequency (monthly) data, we do reject the assumption of a unit root – even a seasonal one – for our temperature series. We still have our high-frequency problem to deal with, some day (but I don't think I'll have enough time to introduce long range dependence this session, unfortunately).
Linear ‘Prediction’ for AR Time Series
In the exercises for the MAT8181 course, there are two exercises (16 and 17) about prediction and extrapolation based on MA(1) and AR(1) time series. But before discussing those exercises (I had some requests for hints), I wanted to recall the definition of the linear prediction,
$$EL(Y\mid\boldsymbol{X})=\alpha+\boldsymbol{\beta}^{\top}\boldsymbol{X},$$
where the coefficients $\alpha$ and $\boldsymbol{\beta}$ are such that the prediction error is orthogonal to the constant and to the regressors, i.e.
$$\mathbb{E}\big[Y-EL(Y\mid\boldsymbol{X})\big]=0\quad\text{and}\quad\mathbb{E}\big[\big(Y-EL(Y\mid\boldsymbol{X})\big)\,X_i\big]=0\ \text{for all }i.$$
As discussed previously on this blog, we consider here a projection not on $\sigma(\boldsymbol{X})$ (this would be the conditional expectation) but on the linear subspace spanned by $1,X_1,\ldots,X_n$.
The goal of Exercise 2 was to establish an important result, in the context of Gaussian random vectors: if $(\boldsymbol{X},Y)$ is a (multivariate) Gaussian vector, then $\mathbb{E}(Y\mid\boldsymbol{X})=EL(Y\mid\boldsymbol{X})$, where $\boldsymbol{X}$ is the vector $(X_1,\ldots,X_n)$.
Keep those results in mind, and let us look at Exercise 17, for instance. Here, $(X_t)$ is an AR(1) process, with innovation $(\varepsilon_t)$,
One observation (say ) is missing. We have here 3 questions:
- what is the best linear prediction of given and
- what is the best linear prediction of given and
- what is the best linear prediction of given and
Case 1. Here, we have to compute
Since we have an AR(1) process, , and . Thus, from the relationship above
which can be written
i.e. . Which makes sense actually: the AR(1) process is Markovian of order one, so
And we've seen in class that, for an AR(1) process, the best linear prediction of the next value given the past is simply $\phi$ times the most recent observation.
So far, so good.
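As a quick numerical check of that Markov property (a small sketch on a simulated AR(1), with an arbitrary coefficient $\phi=0.7$): if we regress $X_{t+1}$ on both $X_t$ and $X_{t-1}$, the coefficient of $X_{t-1}$ should be close to 0, and the coefficient of $X_t$ close to $\phi$.
> set.seed(1)
> phi=.7
> X=arima.sim(n=10000,list(ar=phi))
> n=length(X)
> base=data.frame(Y=X[3:n],X1=X[2:(n-1)],X2=X[1:(n-2)])
> coefficients(lm(Y~0+X1+X2,data=base))   # should be close to (phi, 0)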
Case 2. Here, we have to compute
Since we have an AR(1) process, , and . Thus, from the relationship above
i.e. .
Case 3. Finally, we have to compute
One more time, since we have an AR(1) process, , and . So here, the relationship above becomes
Here, we can write
i.e.
So finally, what we got here is
and
The mean squared errors for each of those estimates are obtained by computing $\mathbb{E}\big[(X_t-\widehat{X}_t)^2\big]$.
I guess I should probably stop here… that’s a detailed hint actually.
Seasonal, or periodic, time series
Monday, in our MAT8181 class, we've discussed seasonal unit roots from a practical perspective (the theory will be briefly mentioned in a few weeks, once we've seen multivariate models). Consider some time series $(X_t)$, for instance traffic on French roads,
> autoroute=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/autoroute.csv",
+ header=TRUE,sep=";")
> X=autoroute$a100
> T=1:length(X)
> plot(T,X,type="l",xlim=c(0,120))
> reg=lm(X~T)
> abline(reg,col="red")
As discussed in a previous post, if there is a trend, we should remove it, and work on the residuals,
> Y=residuals(reg)
> acf(Y,lag=36,lwd=3)
We can observe that there is some seasonality, here. A first strategy might be to assume that there is a seasonal unit root, so we consider the seasonally differenced series $Z_t=Y_t-Y_{t-12}$, and we try to fit some ARMA process to it. Consider the empirical autocorrelation function of that time series,
> Z=diff(Y,12)
> acf(Z,lag=36,lwd=3)
or the partial autocorrelation function
> pacf(Z,lag=36,lwd=3)
The first graph might suggest an MA(1) structure, while the second graph might suggest an AR(1) time series. Let us try both.
> model1=arima(Z,order=c(0,0,1)) > model1 Call: arima(x = Z, order = c(0, 0, 1)) Coefficients: ma1 intercept -0.2367 -583.7761 s.e. 0.0916 254.8805 sigma^2 estimated as 8071255: log likelihood = -684.1, aic = 1374.2 > E1=residuals(model1) > acf(E1,lag=36,lwd=3)
which can be considered as a white noise (if you’re not convinced, try either Box-Pierce or Ljung-Box test). Similarly,
> model2=arima(Z,order=c(1,0,0)) > model2 Call: arima(x = Z, order = c(1, 0, 0)) Coefficients: ar1 intercept -0.3214 -583.0943 s.e. 0.1112 248.8735 sigma^2 estimated as 7842043: log likelihood = -683.07, aic = 1372.15 > E2=residuals(model2) > acf(E2,lag=36,lwd=3)
which can also be considered as a white noise. So what we have, so far, is
$$Z_t=(1-L^{12})\,Y_t=\phi\,Z_{t-1}+\varepsilon_t$$
for some white noise $(\varepsilon_t)$. This suggests the following SARIMA structure on $(Y_t)$,
> model2b=arima(Y,order=c(1,0,0),
+ seasonal = list(order = c(0, 1, 0),
+ period=12))
> model2b

Call:
arima(x = Y, order = c(1, 0, 0), seasonal = list(order = c(0, 1, 0), period = 12))

Coefficients:
          ar1
      -0.2715
s.e.   0.1130

sigma^2 estimated as 8412999:  log likelihood = -685.62,  aic = 1375.25
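As suggested above, the visual inspection of the residuals can be complemented by a formal test; a minimal sketch, on the residuals of that SARIMA model, would be
> Box.test(residuals(model2b),lag=12,type="Ljung-Box",fitdf=1)   # fitdf=1 for the estimated AR coefficient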
So far, so good. Now, what if we consider that we do not have a seasonal unit root, but simply a large autoregressive coefficient in some (seasonal) AR structure? Let us try something like
$$(1-\phi\,L)(1-\Phi\,L^{12})\,Y_t=\varepsilon_t,$$
where a natural guess is that the seasonal coefficient $\Phi$ should – probably – be close to one. Let us try this one,
> model3c=arima(Y,order=c(1,0,0), + seasonal = list(order = c(1, 0, 0), + period = 12)) > model3c Call: arima(x = Y, order = c(1, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12)) Coefficients: ar1 sar1 intercept -0.1629 0.9741 -684.9455 s.e. 0.1170 0.0115 3064.4040 sigma^2 estimated as 8406080: log likelihood = -816.11, aic = 1640.21
which is comparable with what we got previously (somehow), so we might assume that this model can be considered as an interesting one. We will discuss further the fact that the first coefficient might be considered as non-significant.
What is the difference between those two models? With a short-term horizon, the two models are comparable. Clearly
> library(forecast) > previ=function(model,h=36,b=40000){ + prev=forecast(model,h) + T=1:85 + Tfutur=86:(85+h) + plot(T,Y,type="l",xlim=c(0,85+h),ylim=c(-b,b)) + polygon(c(Tfutur,rev(Tfutur)),c(prev$lower[,2],rev(prev$upper[,2])),col="orange",border=NA) + polygon(c(Tfutur,rev(Tfutur)),c(prev$lower[,1],rev(prev$upper[,1])),col="yellow",border=NA) + lines(prev$mean,col="blue") + lines(Tfutur,prev$lower[,2],col="red") + lines(Tfutur,prev$upper[,2],col="red") + }
Now, from a (very) long-term perspective, the models are quite different: one is stationary, so the forecast will tend to the average value (here 0, since the trend was removed), while the other one is (seasonally) integrated, so the confidence interval will increase. For the non-stationary one, we get
> previ(model2b,600,b=60000)
and for the stationary one
> previ(model3c,600,b=60000)
But as mentioned in the introduction of this course, forecasts with those models are relevant only for a short-term horizon (say, not too large). And in that case, the prediction is almost the same here,
> previ(model2b,36,b=60000)
> previ(model3c,36,b=60000)
Now, if we come back on our second model, we did mention previously that the autoregressive coefficient might be considered as non-significant. What if we remove it?
> model3d=arima(Y,order=c(0,0,0), + seasonal = list(order = c(1, 0, 0), + period = 12)) > (model3d) Call: arima(x = Y, order = c(0, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12)) Coefficients: sar1 intercept 0.9662 -696.5661 s.e. 0.0134 3182.3017 sigma^2 estimated as 8918630: log likelihood = -817.03, aic = 1640.07
If we look at a (short-term) forecast, we get
> previ(model3d,36,b=32000)
Do you see any difference? To be honest, I don’t… If we look at the figures, we get
> cbind(forecast(model2b,12)$mean,forecast(model3c,12)$mean,forecast(model3d,12)$mean) Time Series: Start = 86 End = 97 Frequency = 1 1 -4908.4920 -5092.8999 -5520.8780 2 -10012.7837 -9640.8103 -9493.0339 3 -3880.2202 -3841.1960 -3828.2611 4 -18102.5211 -17638.4086 -17499.1828 5 -20602.7346 -20090.9117 -19934.1066 6 -10463.2212 -10209.0139 -10132.0439 7 2458.1538 2376.4897 2351.2377 8 -1680.3342 -1654.4844 -1647.0057 9 876.6837 836.2342 823.4934 10 18046.5642 17561.6520 17413.1463 11 21531.4820 20956.3451 20780.2836 12 -3217.6103 -3152.0446 -3132.4112
Figures are different, but not significantly (keep in mind the size of the confidence intervals). This might explain why, in R, when we ask for an autoregressive process of order $p$, we get a model with $p$ parameters to estimate, and even if some are not significant, we usually keep them all for the forecast. Most of the time, from a forecasting point of view, it's no big deal.
Filtering a Stationary Time Series
In the first part of the MAT8181 course, on linear (univariate) time series, I forgot to mention an important theorem. Let $(Y_t)$ be a stationary time series, and $(a_j)_{j\in\mathbb{Z}}$ a sequence of real numbers such that
$$\sum_{j\in\mathbb{Z}}|a_j|<\infty,$$
then the time series defined as
$$X_t=\sum_{j\in\mathbb{Z}}a_j\,Y_{t-j}$$
is a stationary time series. Further, one can easily get that
$$\gamma_X(h)=\sum_{j\in\mathbb{Z}}\sum_{k\in\mathbb{Z}}a_j\,a_k\,\gamma_Y(h+k-j).$$
This result can be used, if necessary, in the exercises (that might save some time actually). I did not include this property in my notes because it is a bit technical to establish that this sum exists, and that the time series is stationary. It is rather simple with the spectral density (since $f_X(\omega)=|A(e^{-i\omega})|^2\,f_Y(\omega)$, where $A(z)=\sum_j a_j z^j$ stands for the filter generating function), but I did not mention the spectral density since it requires some knowledge of Fourier analysis…
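To make the statement concrete, here is a small numerical sketch (a finite filter applied to a simulated AR(1); the filter weights and the autoregressive coefficient are arbitrary): the empirical autocovariance of the filtered series can be compared with the double sum above.
> set.seed(1)
> Y=arima.sim(n=1e5,list(ar=.6))               # a stationary series
> a=c(.5,.3,.2)                                # finite filter, so the sum of |a_j| is finite
> X=filter(Y,a,method="convolution",sides=1)   # X_t = .5*Y_t + .3*Y_{t-1} + .2*Y_{t-2}
> gammaY=function(h) .6^abs(h)/(1-.6^2)        # theoretical autocovariance of the AR(1)
> h=1
> sum(outer(1:3,1:3,function(j,k) a[j]*a[k]*gammaY(h+k-j)))   # gamma_X(1), from the formula
> acf(na.omit(X),plot=FALSE)$acf[h+1]*var(na.omit(X))         # empirical counterpart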
Identification of ARMA processes
Last week (in the MAT8181 course), in order to identify the orders of an ARMA process, we've seen the eacf method, and I mentioned the scan method, introduced in Tsay and Tiao (1985). The code below – to produce the output of the scan procedure – has been adapted from an old code by Steve Chen (where I included a visualization of the p-values, with red cells for small p-values and blue cells for larger ones).
The procedure was described in the course, last Thursday,
arma.scan=function(z,ar.max=15,ma.max=15,alpha=0.01) { ym=function(z,t,m){return(z[t:(t-m)])} n=length(z) z=z - mean(z) cmax=ma.max + 1 rmax=ar.max + 1 corref=matrix(0,nrow=rmax,ncol=cmax) cmj.table=matrix(0,nrow=rmax,ncol=cmax) pv=matrix(0,nrow=rmax,ncol=cmax) mark=matrix(rep("X",(rmax)*(cmax)),nrow=rmax,ncol=cmax) Rnames=paste("AR",0:(ar.max),sep="-") Cnames=paste("MA",0:(ma.max),sep="-") rownames(corref)=Rnames colnames(corref)=Cnames rownames(cmj.table)=Rnames colnames(cmj.table)=Cnames rownames(pv)=Rnames colnames(pv)=Cnames rownames(mark)=Rnames colnames(mark)=Cnames for (m in 0:ar.max) { m1=m+1 for (j in 0:ma.max) { j1=j+1 if (m == 0 && j != 0) { racf=acf(z,plot=FALSE)$acf[1:(j+1)] lamb=racf[j+1]^2 corref[m1,j]=round(lamb,4) dmj=1 + 2*sum(racf[1:j]^2) cmj=-1*(n-m-j)*log(1.0 - lamb/dmj) pvalue =pchisq(cmj,1,lower.tail=FALSE) pv[m1,j]=round(pvalue,4) cmj.table[m1,j]=round(cmj,4) mark[m1,j]=ifelse(pvalue > alpha,"O","X") } else if (m != 0 && j == 0) { racf=pacf(z,plot=FALSE)$acf[1:(m+1)] lamb=racf[m+1]^2 corref[m1,j1]=round(lamb,4) dmj = 1 cmj=-1*(n-m-j)*log(1.0 - lamb/dmj) pvalue =pchisq(cmj,1,lower.tail=FALSE) pv[m1,j1]=round(pvalue,4) cmj.table[m1,j1]=round(cmj,4) mark[m1,j1]=ifelse(pvalue > alpha,"O","X") } else { mat1=matrix(0,nrow=m1,ncol=m1) mat2=matrix(0,nrow=m1,ncol=m1) mat3=matrix(0,nrow=m1,ncol=m1) mat4=matrix(0,nrow=m1,ncol=m1) for (t in (j+m+2):n) { tj1=t-j-1 ym1=ym(z,tj1,m) ym2=ym(z,t,m) mat1=mat1 + as.matrix(ym1)%*%ym1 mat2=mat2 + as.matrix(ym1)%*%ym2 mat3=mat3 + as.matrix(ym2)%*%ym2 mat4=mat4 + as.matrix(ym2)%*%ym1 } b1=solve(mat1)%*%mat2 b2=solve(mat3)%*%mat4 A=b2%*%b1 eig <-eigen(A) eig.val <-eig$values eig.val=Re(eig.val) eig.len=length(eig.val) eig.vector=eig$vectors lamb=min(eig.val) eig.vector0=eig.vector[,which.min(eig.val)] eig.vector0 = eig.vector0/eig.vector0[1] resid=(1:n)*0 for (t in (j+m+1):n) { z0=z[seq(t,t-m,-1)] resid[t]=sum(z0 * eig.vector0) } jm1=j + m + 1 rx=Re(resid[jm1:n]) racf=acf(rx,plot=FALSE)$acf[1:j] dmj=1 + 2*sum(racf^2) cmj=-1*(n-m-j)*log(1.0 - lamb/dmj) pvalue =pchisq(cmj,df=1,lower.tail=FALSE) corref[m1,j1]=round(lamb,4) pv[m1,j1]=round(pvalue,4) cmj.table[m1,j1]=round(cmj,4) mark[m1,j1]=ifelse(pvalue > alpha,"O","X") } } } cat("\n\nSCAN: Smallest CANonical Correlation Method for ARIMA(p,d,q)\n\n") cat("Estimates of Squared Canonical Correlation \n\n") print(corref) cat("\n\nC(m,j)\n\n") print(cmj.table) cat("\n\nChi-Square(1) Test p-value\n\n") print(pv) cat("\nSCAN Matrix \n\n") print(mark) plot(0:1,0:1,col="white",xlim=c(0,nrow(pv)-1),ylim=c(0,ncol(pv)-1),axes=FALSE,xlab="AR",ylab="MA") axis(1); axis(2) library(RColorBrewer) CL=brewer.pal(6, "RdBu")[c(1,2,3,5)] cpv=matrix(as.numeric(cut(as.vector(pv),c(-1,.01,.05,.1,2))),nrow(pv),ncol(pv)) for(i in 1:nrow(pv)){ for(j in 1:ncol(pv)){ polygon(c(i-1,i-1,i,i)-.5,c(j-1,j,j,j-1)-.5, col=CL[cpv[i,j]]) }} }
Consider the following simulated time series,
> s=arima.sim(n=200,model=list(ar=c(0,0,0,.4,0,0,0,.5),ma=c(0,0,1)))
> plot(s,type="l")
The output is here
> arma.scan(s,6,6) SCAN: Smallest CANonical Correlation Method for ARIMA(p,d,q) Estimates of Squared Canonical Correlation MA-0 MA-1 MA-2 MA-3 MA-4 MA-5 MA-6 AR-0 0.0614 0.0104 0.1862 0.3516 0.0971 0.0128 0.0000 AR-1 0.0302 0.0294 0.1501 0.0943 0.0855 0.0127 0.0385 AR-2 0.3070 0.2781 0.2140 0.0006 0.1589 0.1884 0.2243 AR-3 0.1627 0.0037 0.1927 0.2311 0.1379 0.0207 0.0376 AR-4 0.2087 0.3947 0.3653 0.3075 0.1502 0.1364 0.1013 AR-5 0.1677 0.1219 0.0110 0.0263 0.0332 0.0350 0.0044 AR-6 0.0114 0.0485 0.0561 0.0427 0.0009 0.0089 0.0308 C(m,j) MA-0 MA-1 MA-2 MA-3 MA-4 MA-5 MA-6 AR-0 4.1161 0.6585 12.0315 20.6512 4.5388 0.5620 0.0000 AR-1 6.1127 1.9499 9.9356 4.9145 4.7219 0.4642 1.9015 AR-2 72.6011 19.1679 14.3512 0.0337 7.9668 9.6479 11.4573 AR-3 34.9724 0.2386 10.1620 13.4082 6.7875 0.8725 1.4071 AR-4 45.8691 27.5070 19.1422 20.2835 7.3339 5.5374 3.5874 AR-5 35.7981 8.0498 0.6280 1.3543 1.8470 1.7930 0.2338 AR-6 2.2147 3.1466 3.5990 1.9904 0.0511 0.4816 1.6440 Chi-Square(1) Test p-value MA-0 MA-1 MA-2 MA-3 MA-4 MA-5 MA-6 AR-0 0.0425 0.4171 0.0005 0.0000 0.0331 0.4534 0.0000 AR-1 0.0134 0.1626 0.0016 0.0266 0.0298 0.4957 0.1679 AR-2 0.0000 0.0000 0.0002 0.8543 0.0048 0.0019 0.0007 AR-3 0.0000 0.6252 0.0014 0.0003 0.0092 0.3503 0.2355 AR-4 0.0000 0.0000 0.0000 0.0000 0.0068 0.0186 0.0582 AR-5 0.0000 0.0046 0.4281 0.2445 0.1741 0.1806 0.6287 AR-6 0.1367 0.0761 0.0578 0.1583 0.8212 0.4877 0.1998 SCAN Matrix MA-0 MA-1 MA-2 MA-3 MA-4 MA-5 MA-6 AR-0 "O" "O" "X" "X" "O" "O" "X" AR-1 "O" "O" "X" "O" "O" "O" "O" AR-2 "X" "X" "X" "O" "X" "X" "X" AR-3 "X" "O" "X" "X" "X" "O" "O" AR-4 "X" "X" "X" "X" "X" "O" "O" AR-5 "X" "X" "O" "O" "O" "O" "O" AR-6 "O" "O" "O" "O" "O" "O" "O"
with the following graph
Of course, it is possible to ask for larger values,
> arma.scan(s,12,12)
The graph is now
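Since the eacf method was also mentioned at the beginning of this post, a quick comparison is possible; a minimal sketch, assuming the TSA package (which provides an eacf function), applied to the same simulated series, would be
> library(TSA)
> eacf(s,ar.max=12,ma.max=12)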
Temperatures Series as Random Walks
Last year, I did mention in a post that unit-root tests are dangerous, because they might lead us to strange models. For instance, in a post, I did obtain that the temperature observed in January 2013, in Montréal, might be considered as a random walk process (or at least an integrated process). The code to extract the data has changed (since the website has been updated), so here, we use
library(RCurl) library(XML) options(RCurlOptions = list(useragent = "R")) HEURE=0:23 extracttemp=function(Y,M,D){ url=paste( "http://climate.weather.gc.ca/climateData/hourlydata_e.html?timeframe=1&Prov=QC&StationID=5415&Year=",Y,"&Month=", M,"&Day=",D,sep="") wp <- getURLContent(url) doc <- htmlParse(wp, asText = TRUE) docName(doc) <- url tmp <- readHTMLTable(doc) basejour=data.frame(Year=Y,Month=M,Day=D, Hour=HEURE,Temp=as.numeric(as.character(data.frame(tmp[2])[,2]))[2:25]) return(basejour)} B=NULL for(y in 1955:2013){ for(d in 1:31){ B=rbind(B,extracttemp(y,1,d))}}
Here are all the temperatures observed, with 2013 highlighted in red,
plot(B$X,B$Temp,cex=.5,col="light blue",xlab="January, in Montreal",ylab="Temperature (Celsius)") I=which(B$Year==2013) lines(B$X[I],B$Temp[I],col="red")
In the previous post, only one test was used, and one year was considered. I was wondering if this behavior was observed only with the temperatures of 2013 (or not), and how the other tests (mentioned in a previous post too) were performing.
I might need a function, because those tests cannot be used if there is a missing value, even only one. So I did use the value observed one hour before (just to make sure that the tests can be done)
correcty=function(Y){ I=which(is.na(Y)) if(length(I)==0){Yc=Y} if(length(I)>0){Yc=Y;for(i in I) Yc[i]=Yc[i-1]} return(Yc) }
Now, we can compute the p-values, for all the years, and the three different tests (keeping in mind that two of them test whether the series has a unit root, and one whether the series is stationary),
DF=matrix(NA,2013-1954,3)
library(tseries)
for(y in 1955:2013){
Z=B$Temp[which(B$Year==y)]
Zc=correcty(Z)
DF[y-1954,2]=as.numeric(pp.test(Zc)$p.value)
DF[y-1954,1]=as.numeric(kpss.test(Zc)$p.value)
DF[y-1954,3]=as.numeric(adf.test(Zc)$p.value)
}
Visually, if red means stationary, and blue means non-stationary, we get
DFP=DF DFP[,1]=DF[,1]<.05 DFP[,2:3]=DF[,2:3]>.05 library(RColorBrewer) CL=brewer.pal(6, "RdBu") plot(0:1,0:1,xlim=c(1950,2015),ylim=c(0,3),axes=FALSE,xlab="",ylab="") axis(1) text(1952,.5,"KPSS") text(1952,1.5,"PP") text(1952,2.5,"ADF") for(y in 1955:2013){ for(i in 1:3){ polygon(y+c(-1,-1,1,1)/2.2,i-.5+c(-1,1,1,-1)/2.2,col=CL[1+(DFP[y-1954,i]==1)*5],border=NA)}}
Quite frequently, we conclude that the temperature is a random walk. Which does not make sense (from a physical point of view). But again, it might come from the fact that temperatures are stationary, but with some fractional behavior (as suggested in the previous post).
Unit Root Tests
This week, in the MAT8181 Time Series course, we've discussed unit root tests. According to Wold's theorem, if $(Y_t)$ is (weakly) stationary, then it can be written
$$Y_t=\mu_t+\sum_{j=0}^{\infty}\psi_j\,\varepsilon_{t-j},$$
where $(\varepsilon_t)$ is the innovation process, and where $(\mu_t)$ is some deterministic series (just to get a result as general as possible), as discussed in a previous post. To go one step further, there is also the Beveridge-Nelson decomposition: an integrated of order one process, defined as
$$\Delta X_t=(1-L)X_t=\mu+\Psi(L)\,\varepsilon_t,$$
can be represented as the sum of
a linear trend, a random walk, and a stationary remaining term,
i.e.
$$X_t=\underbrace{X_0+\mu\,t}_{\text{linear trend}}+\underbrace{\Psi(1)\sum_{s=1}^{t}\varepsilon_s}_{\text{random walk}}+\underbrace{\widetilde{\Psi}(L)\,\varepsilon_t}_{\text{stationary term}},$$
where $\widetilde{\Psi}(L)$ is the polynomial with terms $\widetilde{\psi}_j=-\sum_{k>j}\psi_k$.
For unit-root tests, we will use various representations of the process. In order to illustrate the implementation of those tests, consider the following series
> E=rnorm(240) > X=cumsum(E) > plot(X,type="l")
- Dickey Fuller (standard)
Here, for the simple version of the Dickey-Fuller test, we assume that
$$X_t=c+\phi\,X_{t-1}+\varepsilon_t,$$
and we would like to test if $\phi=1$ (or not). We can write the previous representation as
$$\Delta X_t=X_t-X_{t-1}=c+(\phi-1)\,X_{t-1}+\varepsilon_t=c+\gamma\,X_{t-1}+\varepsilon_t,$$
so we simply have to test if the regression coefficient $\gamma$ in the linear regression is – or not – null. Which can be done with Student's $t$ test. If we consider the previous model without the linear drift (i.e. $c=0$), we have to consider the following regression
> lags=0
> z=diff(X)
> n=length(z)
> z.diff=embed(z, lags+1)[,1]
> z.lag.1=X[(lags+1):n]
> summary(lm(z.diff~0+z.lag.1 ))

Call:
lm(formula = z.diff ~ 0 + z.lag.1)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.84466 -0.55723 -0.00494  0.63816  2.54352 

Coefficients:
         Estimate Std. Error t value Pr(>|t|)
z.lag.1 -0.005609   0.007319  -0.766    0.444

Residual standard error: 0.963 on 238 degrees of freedom
Multiple R-squared: 0.002461, Adjusted R-squared: -0.00173 
F-statistic: 0.5873 on 1 and 238 DF, p-value: 0.4442
Our testing procedure will be based on the Student’s t value,
> summary(lm(z.diff~0+z.lag.1 ))$coefficients[1,3] [1] -0.7663308
which is exactly the value computed using
> library(urca) > df=ur.df(X,type="none",lags=0) > df ############################################################### # Augmented Dickey-Fuller Test Unit Root / Cointegration Test # ############################################################### The value of the test statistic is: -0.7663
The interpretation of this value can be done using critical values (99%, 95%, 90%)
> qnorm(c(.01,.05,.1)/2) [1] -2.575829 -1.959964 -1.644854
If the test statistic exceeds those values, then the series is not stationary, since we cannot reject the assumption that $\gamma=0$. So we might conclude that there is a unit root. Actually, those critical values are obtained using
> summary(df) ############################################### # Augmented Dickey-Fuller Test Unit Root Test # ############################################### Test regression none Call: lm(formula = z.diff ~ z.lag.1 - 1) Residuals: Min 1Q Median 3Q Max -2.84466 -0.55723 -0.00494 0.63816 2.54352 Coefficients: Estimate Std. Error t value Pr(>|t|) z.lag.1 -0.005609 0.007319 -0.766 0.444 Residual standard error: 0.963 on 238 degrees of freedom Multiple R-squared: 0.002461, Adjusted R-squared: -0.00173 F-statistic: 0.5873 on 1 and 238 DF, p-value: 0.4442 Value of test-statistic is: -0.7663 Critical values for test statistics: 1pct 5pct 10pct tau1 -2.58 -1.95 -1.62
The problem with R is that there are several packages that can be used for unit root tests. Just to mention another one,
> library(tseries) > adf.test(X,k=0) Augmented Dickey-Fuller Test data: X Dickey-Fuller = -2.0433, Lag order = 0, p-value = 0.5576 alternative hypothesis: stationary
We do have here also a test where the null hypothesis is that there is a unit root. But the p-value is quite different. What is odd is that we have
> 1-adf.test(X,k=0)$p.value [1] 0.4423705 > df@testreg$coefficients[4] [1] 0.4442389
(but I think it is a coincidence).
- Augmented Dickey Fuller
It is possible to add some lags in the regression. For instance, we can consider
$$\Delta X_t=\gamma\,X_{t-1}+\psi\,\Delta X_{t-1}+\varepsilon_t.$$
Again, we have to check if one coefficient is null, or not. And this can be done using Student’s t test.
> lags=1 > z=diff(X) > n=length(z) > z.diff=embed(z, lags+1)[,1] > z.lag.1=X[(lags+1):n] > k=lags+1 > z.diff.lag = embed(z, lags+1)[, 2:k] > summary(lm(z.diff~0+z.lag.1+z.diff.lag )) Call: lm(formula = z.diff ~ 0 + z.lag.1 + z.diff.lag) Residuals: Min 1Q Median 3Q Max -2.87492 -0.53977 -0.00688 0.64481 2.47556 Coefficients: Estimate Std. Error t value Pr(>|t|) z.lag.1 -0.005394 0.007361 -0.733 0.464 z.diff.lag -0.028972 0.065113 -0.445 0.657 Residual standard error: 0.9666 on 236 degrees of freedom Multiple R-squared: 0.003292, Adjusted R-squared: -0.005155 F-statistic: 0.3898 on 2 and 236 DF, p-value: 0.6777 > summary(lm(z.diff~0+z.lag.1+z.diff.lag ))$coefficients[1,3] [1] -0.7328138
This value is the one obtained using
> df=ur.df(X,type="none",lags=1) > summary(df) ############################################### # Augmented Dickey-Fuller Test Unit Root Test # ############################################### Test regression none Call: lm(formula = z.diff ~ z.lag.1 - 1 + z.diff.lag) Residuals: Min 1Q Median 3Q Max -2.87492 -0.53977 -0.00688 0.64481 2.47556 Coefficients: Estimate Std. Error t value Pr(>|t|) z.lag.1 -0.005394 0.007361 -0.733 0.464 z.diff.lag -0.028972 0.065113 -0.445 0.657 Residual standard error: 0.9666 on 236 degrees of freedom Multiple R-squared: 0.003292, Adjusted R-squared: -0.005155 F-statistic: 0.3898 on 2 and 236 DF, p-value: 0.6777 Value of test-statistic is: -0.7328 Critical values for test statistics: 1pct 5pct 10pct tau1 -2.58 -1.95 -1.62
And again, other packages can be used:
> adf.test(X,k=1) Augmented Dickey-Fuller Test data: X Dickey-Fuller = -1.9828, Lag order = 1, p-value = 0.5831 alternative hypothesis: stationary
Hopefully, the conclusion is the same (we cannot reject the unit-root hypothesis, so the series should not be considered as stationary, but I am not sure about the computation of the p-value).
- Augmented Dickey Fuller with trend and drift
So far, we have not included the drift in our model. But this is simple to do (this will be called the augmented version of the previous procedure): we just have to include a constant in the regression,
> summary(lm(z.diff~1+z.lag.1+z.diff.lag )) Call: lm(formula = z.diff ~ 1 + z.lag.1 + z.diff.lag) Residuals: Min 1Q Median 3Q Max -2.91930 -0.56731 -0.00548 0.62932 2.45178 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.29175 0.13153 2.218 0.0275 * z.lag.1 -0.03559 0.01545 -2.304 0.0221 * z.diff.lag -0.01976 0.06471 -0.305 0.7603 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9586 on 235 degrees of freedom Multiple R-squared: 0.02313, Adjusted R-squared: 0.01482 F-statistic: 2.782 on 2 and 235 DF, p-value: 0.06393
The statistics of interest are obtained here considering some analysis of variance outputs, where this model is compared with the one without the integrated part, and the drift,
> summary(lm(z.diff~1+z.lag.1+z.diff.lag ))$coefficients[2,3] [1] -2.303948 > anova(lm(z.diff ~ z.lag.1 + 1 + z.diff.lag),lm(z.diff ~ 0 + z.diff.lag))$F[2] [1] 2.732912
Those two values are the ones obtained also with
> df=ur.df(X,type="drift",lags=1) > summary(df) ############################################### # Augmented Dickey-Fuller Test Unit Root Test # ############################################### Test regression drift Call: lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag) Residuals: Min 1Q Median 3Q Max -2.91930 -0.56731 -0.00548 0.62932 2.45178 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.29175 0.13153 2.218 0.0275 * z.lag.1 -0.03559 0.01545 -2.304 0.0221 * z.diff.lag -0.01976 0.06471 -0.305 0.7603 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9586 on 235 degrees of freedom Multiple R-squared: 0.02313, Adjusted R-squared: 0.01482 F-statistic: 2.782 on 2 and 235 DF, p-value: 0.06393 Value of test-statistic is: -2.3039 2.7329 Critical values for test statistics: 1pct 5pct 10pct tau2 -3.46 -2.88 -2.57 phi1 6.52 4.63 3.81
And we can also include a linear trend,
> temps=(lags+1):n > summary(lm(z.diff~1+temps+z.lag.1+z.diff.lag )) Call: lm(formula = z.diff ~ 1 + temps + z.lag.1 + z.diff.lag) Residuals: Min 1Q Median 3Q Max -2.87727 -0.58802 -0.00175 0.60359 2.47789 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.3227245 0.1502083 2.149 0.0327 * temps -0.0004194 0.0009767 -0.429 0.6680 z.lag.1 -0.0329780 0.0166319 -1.983 0.0486 * z.diff.lag -0.0230547 0.0652767 -0.353 0.7243 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9603 on 234 degrees of freedom Multiple R-squared: 0.0239, Adjusted R-squared: 0.01139 F-statistic: 1.91 on 3 and 234 DF, p-value: 0.1287 > summary(lm(z.diff~1+temps+z.lag.1+z.diff.lag ))$coefficients[3,3] [1] -1.98282 > anova(lm(z.diff ~ z.lag.1 + 1 + temps+ z.diff.lag),lm(z.diff ~ 1+ z.diff.lag))$F[2] [1] 2.737086
while the R function returns
> df=ur.df(X,type="trend",lags=1) > summary(df) ############################################### # Augmented Dickey-Fuller Test Unit Root Test # ############################################### Test regression trend Call: lm(formula = z.diff ~ z.lag.1 + 1 + tt + z.diff.lag) Residuals: Min 1Q Median 3Q Max -2.87727 -0.58802 -0.00175 0.60359 2.47789 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.3227245 0.1502083 2.149 0.0327 * z.lag.1 -0.0329780 0.0166319 -1.983 0.0486 * tt -0.0004194 0.0009767 -0.429 0.6680 z.diff.lag -0.0230547 0.0652767 -0.353 0.7243 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9603 on 234 degrees of freedom Multiple R-squared: 0.0239, Adjusted R-squared: 0.01139 F-statistic: 1.91 on 3 and 234 DF, p-value: 0.1287 Value of test-statistic is: -1.9828 1.8771 2.7371 Critical values for test statistics: 1pct 5pct 10pct tau3 -3.99 -3.43 -3.13 phi2 6.22 4.75 4.07 phi3 8.43 6.49 5.47
- KPSS test
Here, in the KPSS testing procedure, two models can be considered: with a drift, or with a linear trend. Here, the null hypothesis is that the series is stationary.
With a drift, the code is
> summary(ur.kpss(X,type="mu")) ####################### # KPSS Unit Root Test # ####################### Test is of type: mu with 4 lags. Value of test-statistic is: 0.972 Critical value for a significance level of: 10pct 5pct 2.5pct 1pct critical values 0.347 0.463 0.574 0.73
while it will be, in the case there is a trend
> summary(ur.kpss(X,type="tau")) ####################### # KPSS Unit Root Test # ####################### Test is of type: tau with 4 lags. Value of test-statistic is: 0.5057 Critical value for a significance level of: 10pct 5pct 2.5pct 1pct critical values 0.119 0.146 0.176 0.216
One more time, it is possible to use another package to get the same test (but again, a different output)
> kpss.test(X,"Level") KPSS Test for Level Stationarity data: X KPSS Level = 1.1997, Truncation lag parameter = 3, p-value = 0.01 > kpss.test(X,"Trend") KPSS Test for Trend Stationarity data: X KPSS Trend = 0.6234, Truncation lag parameter = 3, p-value = 0.01
At least, there is some kind of consistency, since we keep rejecting the stationarity assumption for that series.
- Phillips-Perron test
The Phillips-Perron test is based on the ADF procedure. The code is here
> PP.test(X) Phillips-Perron Unit Root Test data: X Dickey-Fuller = -2.0116, Truncation lag parameter = 4, p-value = 0.571
with again, a possible alternative with the other package
> pp.test(X) Phillips-Perron Unit Root Test data: X Dickey-Fuller Z(alpha) = -7.7345, Truncation lag parameter = 4, p-value = 0.6757 alternative hypothesis: stationary
- Comparison
I will not spend more time comparing the different codes, in R, to run those tests. Let us spend some additional time on a quick comparison of those three procedures. Let us generate some autoregressive processes, with more or less autocorrelation, as well as some random walks, and let us see how those tests perform:
> library(tseries)
> n=100
> AR=seq(1,.7,by=-.01)
> P=matrix(NA,3,length(AR))
> M1=matrix(NA,1000,length(AR))
> M2=matrix(NA,1000,length(AR))
> M3=matrix(NA,1000,length(AR))
> for(i in 1:length(AR)){
+ for(s in 1:1000){
+ if(i==1) X=cumsum(rnorm(n))
+ if(i!=1) X=arima.sim(n=n,list(ar=AR[i]))
+ M2[s,i]=as.numeric(pp.test(X)$p.value)
+ M1[s,i]=as.numeric(kpss.test(X)$p.value)
+ M3[s,i]=as.numeric(adf.test(X)$p.value)
+ }}
Here, we would like to count how many times the p-value of our tests exceeds 5%,
> prop05=function(x) mean(x>.05)
> P[1,]=1-apply(M1,2,prop05)
> P[2,]=apply(M2,2,prop05)
> P[3,]=apply(M3,2,prop05)
> plot(AR,P[1,],type="l",col="red",ylim=c(0,1),
+ ylab="proportion of non-stationary series",
+ xlab="autocorrelation coefficient")
> lines(AR,P[2,],type="l",col="blue")
> lines(AR,P[3,],type="l",col="green")
> legend(.7,1,c("ADF","KPSS","PP"),col=c("green","red","blue"),lty=1,lwd=1)
We can see here how poorly the Dickey-Fuller test behaves, since 50% (at least) of our autoregressive processes are considered as non-stationary.
Inference for ARMA(p,q) Time Series
As we mentioned in our previous post, as soon as we have a moving average part, inference becomes more complicated. Again, to illustrate, we do not need too general a model. Consider, here, some ARMA(1,1) process,
$$X_t=\phi\,X_{t-1}+\varepsilon_t+\theta\,\varepsilon_{t-1},$$
where $(\varepsilon_t)$ is some white noise, and assume further that $|\phi|<1$ and $|\theta|<1$ (so that the process is causal and invertible).
> theta=.7
> phi=.5
> n=1000
> Z=rep(0,n)
> set.seed(1)
> e=rnorm(n)
> for(t in 2:n) Z[t]=phi*Z[t-1]+e[t]+theta*e[t-1]
> Z=Z[800:1000]
> plot(Z,type="l")
- A two step procedure
To start with something simple, assume that we did miss the moving average component, and fitted an AR(1) model, $X_t=\phi\,X_{t-1}+u_t$. The estimator of $\phi$ – by least squares – is no longer consistent. But still, we can compute it,
> base=data.frame(Y=Z[2:n],X=Z[1:(n-1)])
> regression=lm(Y~0+X,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.2445 -0.7909  0.0626  0.9707  3.0685 

Coefficients:
  Estimate Std. Error t value Pr(>|t|)    
X  0.69571    0.05101   13.64   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.225 on 199 degrees of freedom
  (799 observations deleted due to missingness)
Multiple R-squared: 0.4832, Adjusted R-squared: 0.4806 
F-statistic: 186 on 1 and 199 DF, p-value: < 2.2e-16
and then, we can compute the autocorrelation of the residuals,
> n=200 > cor(residuals(regression)[2:n],residuals(regression)[1:(n-1)]) [1] 0.2663076
or, more formally, use the Durbin-Watson statistic, to get the autocorrelation of the residuals (and some significance test)
> library(car) > durbinWatsonTest(regression) lag Autocorrelation D-W Statistic p-value 1 0.2656555 1.46323 0 Alternative hypothesis: rho != 0
The point, here, is that we would like to assume that
$$u_t=\varepsilon_t+\theta\,\varepsilon_{t-1},$$
meaning that $(u_t)$ should be some MA(1) process. And since the first autocorrelation of an MA(1) process is
$$\rho(1)=\frac{\theta}{1+\theta^2},$$
$\theta$ should be a root of the quadratic equation
$$1-\frac{1}{\rho(1)}\,\theta+\theta^2=0,$$
> polyroot(c(1,-1/cor(residuals(regression)[2:n],residuals(regression)[1:(n-1)]),1))
[1] 0.2884681+0i 3.4665883+0i
Here, we do have two positive roots. I would go for the one smaller than one, in order to be able to invert the polynomial, if necessary…
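For instance, a small sketch to extract automatically the root with modulus smaller than one (the invertible choice, 0.288 here) would be
> r=polyroot(c(1,-1/cor(residuals(regression)[2:n],residuals(regression)[1:(n-1)]),1))
> Re(r[Mod(r)<1])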
- Use of the empirical autocorrelation function
An alternative might be to use properties of the autocorrelation function: for an ARMA(1,1) process,
$$\gamma(0)=\sigma^2\,\frac{1+2\phi\theta+\theta^2}{1-\phi^2},\qquad\gamma(1)=\phi\,\gamma(0)+\theta\,\sigma^2,$$
and
$$\gamma(2)=\phi\,\gamma(1).$$
Again, we have a set of three equations, with three unknown parameters, $\phi$, $\theta$ and $\sigma$. Numerically, it is possible to find some roots. If we run the code, we get
> v=c(as.numeric(acf(Z)$acf[2:3]),1)*var(Z) > library(rootSolve) > seteq=function(x){ + F1=v[1]-x[3]^2*(x[2]^2+2*x[1]*x[2]+1)/(1-x[1]^2) + F2=v[2]-(x[1]*v[1]+x[2]*x[3]^2) + F3=v[3]-x[1]*v[2] + return(c(F1,F2,F3))} > multiroot(f=seteq,start=c(.1,.1,1)) $root [1] 3.643734 -3.188145 1.427759 $f.root [1] 1.371170e-11 -3.714573e-11 0.000000e+00 $iter [1] 8 $estim.precis [1] 1.695248e-11
Here, we have a situation…
- Use of least square techniques
We can use, here, the algorithm described in the context of MA($q$) processes.
> V=function(p){
+ phi=p[1]
+ theta=p[2]
+ u=rep(0,length(Z))
+ for(t in 2:length(Z)) u[t]=Z[t]-phi*Z[t-1]-theta*u[t-1]
+ return(sum(u^2))
+ }
> p=optim(par=c(.1,.1),V)$par
[1] 0.3637783 0.7773845
> coef=c(p,sqrt(V(p)/(length(Z))))
which is not so bad. Actually, if we run that procedure on 1,000 samples, we get the following output
- Use of maximum likelihood techniques
Last, but not least, one more time, we can use (global) maximum likelihood techniques, since the process is a Gaussian process (all finite-dimensional vectors will have a joint Gaussian distribution) if we assume that the noise is Gaussian.
> library(mnormt) > GlobalLogLik=function(A,TS){ + n=length(TS) + phi=A[1]; theta=A[2] + sigma=A[3] + SIG=matrix(0,n,n) + rho=rep(0,n) + rho[1]=sigma^2*(theta^2+2*phi*theta+1)/(1-phi^2) + rho[2]=phi*rho[1]+theta*sigma^2 + for(h in 3:n) rho[h]=phi*rho[h-1] + for(i in 1:n){for(j in 1:n){ + SIG[i,j]=rho[abs(i-j)+1]}} + return(dmnorm(TS,rep(0,n),SIG,log=TRUE))} > LogL=function(A) -GlobalLogLik(A,TS=Z) > optim(c(.1,.1,1),LogL)$par [1] 0.3890991 0.7672036 1.0731340
It works fine, one more time. But maybe we got lucky here. We've seen in the post on autoregressive time series that the algorithm might fail if the time series is not stationary. In order to avoid such problems, we can consider a constrained optimization problem, where we simply recall that $|\phi|<1$,
> U=matrix(c(1,-1,0,0,0,0),2,3) > C=-c(.999,.999) > constrOptim(c(.1,.1,1),LogL,grad=NULL,ui=U,ci=C) $par [1] 0.3890991 0.7672036 1.0731340 $value [1] 300.1956 $counts function gradient 118 NA $convergence [1] 0 $message NULL $outer.iterations [1] 2 $barrier.value [1] -1.536358e-05
If we run that algorithm 1,000 times, on simulated time series (with the same parameters), we get
Inference for MA(q) Time Series
Yesterday, we’ve seen how inference for time series was possible. I started with that one because it is actually the simple case. For instance, we can use ordinary least squares. There might be some possible bias (see e.g. White (1961)), but asymptotically, estimators are fine (consistent, with asymptotic normality). But when the noise is (auto)correlated, then it is more complex. So, consider here some time series
for some white noise .
> theta1=.25
> theta2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=e[t]+theta1*e[t-1]+theta2*e[t-2]
> Z=Z[800:1000]
> plot(Z,type="l")
- Using the empirical autocorrelations
The first idea might be to use the first two (empirical) autocorrelations (the two that are supposed to be – theoretically – non null),
$$\rho(1)=\frac{\theta_1+\theta_1\theta_2}{1+\theta_1^2+\theta_2^2}\qquad\text{and}\qquad\rho(2)=\frac{\theta_2}{1+\theta_1^2+\theta_2^2},$$
with $\rho(h)=0$ when $h\ge 3$. We also have the following relationship on the variance of the process,
$$\gamma(0)=(1+\theta_1^2+\theta_2^2)\,\sigma^2.$$
With those three equations, for three unknown parameters, $\theta_1$, $\theta_2$ and $\sigma$, we simply have to solve (numerically) that system of equations,
> v=c(as.numeric(acf(Z)$acf[2:3]),var(Z))
> v
[1] 0.1658760 0.3823053 1.6379498
> library(rootSolve)
> seteq=function(x){
+ F1=v[1]-(x[1]+x[1]*x[2])/(1+x[1]^2+x[2]^2)
+ F2=v[2]-(x[2])/(1+x[1]^2+x[2]^2)
+ F3=v[3]-(1+x[1]^2+x[2]^2)*x[3]^2
+ return(c(F1,F2,F3))}
> multiroot(f=seteq,start=c(.1,.1,1))
$root
[1] 0.1400579 0.4766699 1.1461636

$f.root
[1]  7.876355e-10  4.188458e-09 -2.839977e-09

$iter
[1] 5

$estim.precis
[1] 2.605357e-09
We are a bit far away from the true values used to generate our sample. And if we consider 1,000 samples (instead of only one), we still have the bias, and a large variance, for our three estimators,
- Using least square techniques
We can try something quite different here. The problem we have is that we do not observe the noise $(\varepsilon_t)$: we only observe our series $(X_t)$. But we can try to rebuild that series (call it $(u_t)$ since we're not sure it will be a perfect reconstruction of the noise). As suggested in Box & Jenkins (1967), assume that the first two values are null. And then, use the recursion
$$u_t=X_t-\theta_1\,u_{t-1}-\theta_2\,u_{t-2},$$
and then, we can use least squares techniques, minimizing
$$\sum_t u_t^2.$$
The code will be
> V=function(p){
+ theta1=p[1]
+ theta2=p[2]
+ u=rep(0,length(Z))
+ for(t in 3:length(Z)) u[t]=Z[t]-theta1*u[t-1]-theta2*u[t-2]
+ return(sum(u^2))
+ }
If we try to minimize the sum of the squares of the residuals, we get
> optim(par=c(.1,.1),V)
$par
[1] 0.2751667 0.6723909

$value
[1] 225.8104

$counts
function gradient 
      77       NA 

$convergence
[1] 0

$message
NULL
which is close to the true values. Another good thing is that, if we compare that rebuilt noise with the true one (since we actually have it), then we have (almost) the same vector,
> plot(e[800:1000],col="blue",type="l")
> theta1=0.2751667
> theta2=0.6723909
> u=rep(0,length(Z))
> for(t in 3:length(Z)) u[t]=Z[t]-theta1*u[t-1]-theta2*u[t-2]
> lines(1:201,u,col="red")
So far, so good. And if we generate 1,000 samples, we can study the distribution of the two estimators.
It looks like we have some bias here. And since the two estimators should be negatively correlated, one over-estimates, while the other one under-estimates.
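A minimal sketch of that simulation exercise, reusing the same sum-of-squares criterion (here redefined for each simulated sample),

> # minimal sketch of the simulation study for the least squares estimators
> nsim=1000
> ESTls=matrix(NA,nsim,2)
> for(s in 1:nsim){
+ e=rnorm(1000)
+ X=rep(0,1000)
+ for(t in 3:1000) X[t]=e[t]+.25*e[t-1]+.7*e[t-2]
+ X=X[800:1000]
+ Vs=function(p){
+ u=rep(0,length(X))
+ for(t in 3:length(X)) u[t]=X[t]-p[1]*u[t-1]-p[2]*u[t-2]
+ sum(u^2)}
+ ESTls[s,]=optim(par=c(.1,.1),Vs)$par
+ }
> apply(ESTls,2,mean)   # average of the estimates of (theta1, theta2)
> apply(ESTls,2,sd)     # and their standard deviations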
- Using the (global) maximum likelihood technique
And a final method might be to use the maximum likelihood technique (globally). Again, if we assume a Gaussian i.i.d. noise, then the vector $\boldsymbol{X}=(X_1,\ldots,X_n)$ is Gaussian, with a simple variance matrix (since autocovariances vanish beyond lag 2, a lot of elements will be null),
> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ theta1=A[1]; theta2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=1
+ rho[2]=(theta1+theta1*theta2)/(1+theta1^2+theta2^2)
+ rho[3]=(theta2)/(1+theta1^2+theta2^2)
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1+theta1^2+theta2^2)*sigma^2
+ SIG=gamma0*SIG
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
$par
[1] 0.2584144 0.6826530 1.0669820

$value
[1] 298.8699

$counts
function gradient 
      86       NA 

$convergence
[1] 0

$message
NULL
Here, the values that maximize the likelihood (i.e. minimize the negative log-likelihood) are rather close to the ones used to generate our sample. And if we run this algorithm on 1,000 samples, we can see that those estimates are fine.
I could not find other ideas to estimate those parameters. I guess we can use the partial autocorrelation function, since we have relationships that can be related to Yule-Walker equations for AR(p) time series.
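Just as a quick illustration, the empirical (partial) autocorrelations of our simulated MA(2) series can be visualized with

> # empirical autocorrelations and partial autocorrelations of the simulated MA(2) series
> acf(Z)
> pacf(Z)

Unlike the autocorrelation function, which (theoretically) cuts off after lag 2, the partial autocorrelation function of a moving average process decays slowly.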
Inference for AR(p) Time Series
Consider a (stationary) autoregressive process, say of order 2,

$X_t=\phi_1 X_{t-1}+\phi_2 X_{t-2}+\varepsilon_t$

for some white noise $(\varepsilon_t)$ with variance $\sigma^2$. Here is a code to generate such a process,
> phi1=.25
> phi2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+e[t]
> Z=Z[800:1000]
> n=length(Z)
> plot(Z,type="l")
Here, we have to estimate two sets of parameters: the autoregressive coefficients $\phi_1$ and $\phi_2$, and the variance $\sigma^2$ of the innovation process $(\varepsilon_t)$. Several techniques can be used to estimate those parameters.
- using least squares regression
A natural idea is to see here a regression model, since (if we consider a matrix formulation)

$\begin{pmatrix}X_3\\ \vdots\\ X_n\end{pmatrix}=\begin{pmatrix}X_2 & X_1\\ \vdots & \vdots\\ X_{n-1} & X_{n-2}\end{pmatrix}\begin{pmatrix}\phi_1\\ \phi_2\end{pmatrix}+\begin{pmatrix}\varepsilon_3\\ \vdots\\ \varepsilon_n\end{pmatrix}$
Here we can run (conditional) ordinary least squares estimation,
> base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
> regression=lm(Y~0+X1+X2,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X1 + X2, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.0268 -0.7063  0.1065  0.6925  3.2566 

Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1  0.23400    0.05463   4.283 2.88e-05 ***
X2  0.62863    0.05476  11.479  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.062 on 197 degrees of freedom
Multiple R-squared: 0.6349, Adjusted R-squared: 0.6312 
F-statistic: 171.3 on 2 and 197 DF, p-value: < 2.2e-16
so we get the following estimators for the autoregressive coefficients, and for the standard deviation of the noise,
> regression$coefficients
       X1        X2 
0.2339959 0.6286321 
> summary(regression)$sigma
[1] 1.061839
- using Yule-Walker equations
As we've seen in class, we can easily get the following equations for the autocovariance functions,

$\gamma(1)=\phi_1\gamma(0)+\phi_2\gamma(1)$
$\gamma(2)=\phi_1\gamma(1)+\phi_2\gamma(0)$

which can also be written (again, using a matrix expression)

$\begin{pmatrix}\gamma(1)\\ \gamma(2)\end{pmatrix}=\begin{pmatrix}\gamma(0) & \gamma(1)\\ \gamma(1) & \gamma(0)\end{pmatrix}\begin{pmatrix}\phi_1\\ \phi_2\end{pmatrix}$

So we just have to solve a simple linear system of equations. Note that if we divide by the variance, those equations can be written in terms of the autocorrelation functions,

$\begin{pmatrix}\rho(1)\\ \rho(2)\end{pmatrix}=\begin{pmatrix}1 & \rho(1)\\ \rho(1) & 1\end{pmatrix}\begin{pmatrix}\phi_1\\ \phi_2\end{pmatrix}$
The code is the following
> rho1=cor(Z[1:(n-1)],Z[2:n])
> rho2=cor(Z[1:(n-2)],Z[3:n])
> A=matrix(c(1,rho1,rho1,1),2,2)
> b=matrix(c(rho1,rho2),2,1)
> (PHI=solve(A,b))
          [,1]
[1,] 0.2256270
[2,] 0.6315329
Now, we need to extract the estimated innovation process from this set of parameters,
> estWN=base$Y-(PHI[1]*base$X1+PHI[2]*base$X2)
> sd(estWN)
[1] 1.058558
This estimator is probably not the best one (we could take into account that we've lost two degrees of freedom when estimating the two autoregressive coefficients; a quick sketch of that adjustment is given below), but as a starting point, let us consider this one.
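A minimal sketch of that degrees-of-freedom adjustment, simply dividing the sum of squared residuals by the number of observations minus the number of estimated coefficients (which is one possible convention here),

> # degrees-of-freedom adjusted estimate of the standard deviation of the innovations
> sqrt(sum(estWN^2)/(length(estWN)-2))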
An alternative could be to include the variance term in the Yule-Walker equations, to get a three-dimensional linear system,

$\begin{pmatrix}\gamma(0)\\ \gamma(1)\\ \gamma(2)\end{pmatrix}=\begin{pmatrix}\gamma(1) & \gamma(2) & 1\\ \gamma(0) & \gamma(1) & 0\\ \gamma(1) & \gamma(0) & 0\end{pmatrix}\begin{pmatrix}\phi_1\\ \phi_2\\ \sigma^2\end{pmatrix}$
It is not much more complicated to solve, actually,
> gamma0=var(Z[1:n])
> gamma1=var(Z[1:(n-1)],Z[2:n])
> gamma2=var(Z[1:(n-2)],Z[3:n])
> A=matrix(c(gamma1,gamma0,gamma1,gamma2,gamma1,gamma0,1,0,0),3,3)
> b=matrix(c(gamma0,gamma1,gamma2),3,1)
> (PHISIGMA=solve(A,b))
          [,1]
[1,] 0.2283151
[2,] 0.6283431
[3,] 1.1335501
- using (conditional) likelihood estimators
Finally, we can assume some distribution for the innovation process. The standard model is a Gaussian model, i.e. $\varepsilon_t\sim\mathcal{N}(0,\sigma^2)$, so that, conditionally on the past, $X_t$ has a Gaussian distribution,

$X_t\mid X_{t-1},X_{t-2}\sim\mathcal{N}(\phi_1X_{t-1}+\phi_2X_{t-2},\sigma^2)$

In that case, the conditional log-likelihood (conditional, since the first two observations are here taken as given) is

$\log\mathcal{L}(\phi_1,\phi_2,\sigma)=\sum_{t=3}^{n}\log f_{\phi_1X_{t-1}+\phi_2X_{t-2},\sigma}(X_t)$

where $f_{\mu,\sigma}$ denotes the density of the $\mathcal{N}(\mu,\sigma^2)$ distribution,
> CondLogLik=function(A,TS){
+ phi1=A[1]; phi2=A[2]
+ sigma=A[3]; L=0
+ for(t in 3:length(TS)){
+ L=L+dnorm(TS[t],mean=phi1*TS[t-1]+
+ phi2*TS[t-2],sd=sigma,log=TRUE)}
+ return(-L)}
Now, we can run standard optimization procedures,
> LogL=function(A) CondLogLik(A,TS=Z)
> optim(c(0,0,1),LogL)
$par
[1] 0.2339589 0.6285002 1.0565613

$value
[1] 293.3042

$counts
function gradient 
     106       NA 

$convergence
[1] 0

$message
NULL
It is also possible to consider a global maximum likelihood optimisation problem, since the variance matrix of the vector $\boldsymbol{X}=(X_1,\ldots,X_n)$ has a known form.
- using (unconditional) likelihood estimators
The variance matrix of $\boldsymbol{X}=(X_1,\ldots,X_n)$ is $\boldsymbol{\Sigma}=[\gamma(|i-j|)]_{i,j=1,\ldots,n}$, where the autocovariances are not known, but can easily be computed using a recursive relationship.
> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ phi1=A[1]; phi2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=1
+ rho[2]=phi1/(1-phi2)
+ for(h in 3:n) rho[h]=phi1*rho[h-1]+phi2*rho[h-2]
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1-phi2)*sigma^2/((1+phi2)*((1-phi2)^2-phi1^2))
+ SIG=gamma0*SIG
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
Error in chol.default(x, pivot = FALSE) : 
Error in pd.solve(varcov, log.det = TRUE) : 
  x appears to be not positive definite
The problem is that there is a strong constraint on the pair $(\phi_1,\phi_2)$ to get a stationary process (we are not far away, here, from the border of the triangle, where the process becomes non-stationary). To be more specific (this was mentioned in a previous post), we should have

$\phi_1+\phi_2<1,\qquad \phi_2-\phi_1<1\qquad\text{and}\qquad \phi_2>-1$

i.e., in a standard matrix form,

$\begin{pmatrix}1 & 1\\ -1 & 1\\ 0 & -1\end{pmatrix}\begin{pmatrix}\phi_1\\ \phi_2\end{pmatrix}<\begin{pmatrix}1\\ 1\\ 1\end{pmatrix}$
(we can add an additional constraint on the variance parameter, to ensure that it will be positive). To run a constrained optimization routine, consider
> U=matrix(c(1,0,0,-1,0,1,0,-1,0,0,1,0),4,3)
> C=c(0,0,0,-.99999)
> constrOptim(c(.1,.1,1),LogL,grad=NULL,ui=U,ci=C)
$par
[1] 0.2238892 0.6342850 1.0613388

$value
[1] 297.9202

$counts
function gradient 
     108       NA 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] 0.000189892
(here, to make the optimization faster, we also restrict the parameters to be positive).
- comparing those estimates
Here, our five estimators are rather close. Let us run more samples to see more precisely how they behave, comparing the distributions of the estimators of the first parameter, $\phi_1$, and of the second one, $\phi_2$ (a minimal sketch of such a simulation study is given at the end of this section).
The bias we observe probably comes from the fact that, with this numerical example, we are not far away from the non-stationary case (the sum of the true parameters should be less than 1, and it is 0.95). When we estimate the parameters, we force them to be inside the triangle, since those parameters can be estimated only if the process is stationary.
Observe that the standard deviation of the innovation process is well estimated here,
(with, clearly, some estimators performing better than others).
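A minimal sketch of such a simulation study, for three of the estimators above (conditional least squares, Yule-Walker on the autocorrelations, and conditional maximum likelihood); the exact-likelihood estimators are left out of this sketch only because each evaluation of their likelihood involves an $n\times n$ matrix,

> # minimal sketch of a simulation study comparing three estimators of (phi1, phi2, sigma)
> nsim=1000
> RES=array(NA,c(nsim,3,3))  # replications x estimators (OLS, YW, cond. MLE) x parameters
> for(s in 1:nsim){
+ e=rnorm(1000)
+ X=rep(0,1000)
+ for(t in 3:1000) X[t]=.25*X[t-1]+.7*X[t-2]+e[t]
+ X=X[800:1000]
+ m=length(X)
+ B=data.frame(Y=X[3:m],X1=X[2:(m-1)],X2=X[1:(m-2)])
+ reg=lm(Y~0+X1+X2,data=B)
+ RES[s,1,]=c(coef(reg),summary(reg)$sigma)
+ r1=cor(X[1:(m-1)],X[2:m]); r2=cor(X[1:(m-2)],X[3:m])
+ phi=solve(matrix(c(1,r1,r1,1),2,2),c(r1,r2))
+ RES[s,2,]=c(phi,sd(B$Y-phi[1]*B$X1-phi[2]*B$X2))
+ RES[s,3,]=optim(c(0,0,1),function(A) CondLogLik(A,TS=X))$par
+ }
> apply(RES,2:3,mean)  # average estimates, one row per estimator
> apply(RES,2:3,sd)    # and their standard deviations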
Defining Properly MA(∞) Time Series
In order to properly define $MA(\infty)$ series, we need to get back to some properties of infinite sequences, as briefly mentioned yesterday in the MAT8181 course. Consider some sequence $(a_i)_{i\in\mathbb{N}}$. The sequence is said to be summable if

$S_n=\sum_{i=0}^n a_i$

is convergent, i.e. if the limit of $S_n$ exists when $n\rightarrow\infty$.
From the Cauchy criterion, $(S_n)$ converges if and only if for each $\eta>0$, there is $m\in\mathbb{N}$ for which

$|a_{n+1}+a_{n+2}+\cdots+a_{n+k}|<\eta$

when $n>m$, for all $k\geq 1$. The sequence $(a_i)_{i\in\mathbb{N}}$ is said to be absolutely summable if

$\sum_{i=0}^{\infty}|a_i|<\infty$

and square-summable if

$\sum_{i=0}^{\infty}a_i^2<\infty$
Observe that absolute summability implies square summability (since, for $i$'s large enough, $|a_i|\leq 1$, and then $a_i^2\leq |a_i|$).
Consider now some $MA(\infty)$ time series

$X_t=\sum_{h=0}^{\infty}\theta_h\varepsilon_{t-h}$

If the sequence of coefficients $(\theta_i)$ is square-summable, then the partial sum

$S_T(t)=\sum_{h=0}^{T}\theta_h\varepsilon_{t-h}$

converges in $L^2$ to some random variable as $T\rightarrow\infty$. This can be proved easily using the Cauchy criterion above, in the sense that for any $\eta>0$, there is a $T$ large enough such that, for any $T'>T$,

$\mathbb{E}\left[\left(\sum_{h=T+1}^{T'}\theta_h\varepsilon_{t-h}\right)^2\right]=\sigma^2\sum_{h=T+1}^{T'}\theta_h^2<\eta$

In that case, if the sequence of coefficients is square-summable, then $(X_t)$ is stationary (in the $L^2$ sense) since the process is centered, and

$\gamma(h)=\sigma^2\sum_{i=0}^{\infty}\theta_i\,\theta_{i+|h|}$

for all $h$.
Further, ergodicity of the time series, defined as the absolute summability of the autocovariance sequence, is obtained when the sequence of coefficients $(\theta_i)$ is absolutely summable.
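As a small numerical illustration (not part of the discussion above), one can approximate such a process with absolutely summable coefficients, say $\theta_h=0.8^h$ (a purely illustrative choice), by truncating the sum at a large lag, and compare the empirical variance with the theoretical one, $\sigma^2\sum_h\theta_h^2$,

> # minimal sketch: truncated MA(infinity) with theta_h = 0.8^h (illustrative choice)
> set.seed(1)
> H=200                 # truncation lag
> theta=.8^(0:H)
> n=5000
> eps=rnorm(n+H)
> X=rep(NA,n)
> for(t in 1:n) X[t]=sum(theta*eps[(t+H):t])
> var(X)                # empirical variance
> sum(theta^2)          # theoretical variance (with sigma=1), close to 1/(1-0.64)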