Seasonal Unit Roots

As discussed in the MAT8181 course, there are (at least) two kinds of non-stationary time series: those with a trend, and those with a unit root (the latter will be called integrated). Unit root tests cannot be used to assess whether a time series is stationary or not: they can only detect integrated time series. And the same holds for seasonal unit roots.

In a previous post, we've seen that it was difficult to model hourly temperatures, since most tests do not reject unit roots. Consider here the average monthly temperature, still in Montréal, QC.

> montreal=read.table("http://freakonometrics.free.fr/temp-montreal-monthly.txt")
> M=as.matrix(montreal[,2:13])
> X=as.numeric(t(M))
> tsm=ts(X,start=1948,freq=12)
> plot(tsm)

For those who don’t know Montréal, Winter and Summer are very different. We can visualize monthly differences using

> month=rep(1:12,length(tsm)/12)
> plot(month,as.numeric(tsm))
> lines(1:12,apply(M,2,mean),col="red",type="b",pch=19)

or, if we install the uroot package (which has been removed from the CRAN repository), we can use

> library(uroot)
> bbplot(tsm)

or

> bb3D(tsm)
Loading required package: tcltk

It looks like our time series is cyclic, because of the yearly seasonal pattern. The autocorrelation function is here

> acf(tsm,lag=120)

Again, this cycle can be visualized using

> persp(1948:2013,1:12,M,theta=-50,col="yellow",shade=TRUE,
+ xlab="Year",ylab="Month",zlab="Temperature",ticktype="detailed")

Now, the question is: is there a seasonal unit root? This would mean that our model should be something like

$$\Phi(L^{12})\,\phi(L)\,(1-L^{12})\,X_t=\Theta(L^{12})\,\theta(L)\,\varepsilon_t$$

If we forget about the autoregressive and the moving average components, we can estimate

$$X_t=\Phi\, X_{t-12}+\varepsilon_t$$

If there is a seasonal unit root, then $\Phi$ should be close to 1. Somehow.

> arima(tsm,order=c(0,0,0),seasonal=list(order=c(1,0,0),period=12))

Call:
arima(x = tsm, order = c(0, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12))

Coefficients:
        sar1  intercept
      0.9702     6.4566
s.e.  0.0071     2.1515

It is not far away from 1. Actually, it cannot be too close to 1: if it were, then arima would return an error message…
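Just to see what that error looks like, here is a small simulation sketch (seed, sample size and the object name Xsim are arbitrary): we generate a series with a true seasonal unit root, $X_t=X_{t-12}+\varepsilon_t$, and fit the same seasonal AR(1); arima will then usually stop with an error such as "non-stationary seasonal AR part from CSS", hence the try(),

> set.seed(1)
> Xsim=rnorm(600)
> for(t in 13:600) Xsim[t]=Xsim[t-12]+rnorm(1)  # seasonal random walk
> try(arima(Xsim,order=c(0,0,0),
+     seasonal=list(order=c(1,0,0),period=12)))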

To illustrate some interesting models, let us consider also quarterly temperatures,

> N=cbind(apply(montreal[,2:4],1,sum),apply(montreal[,5:7],1,sum),apply(montreal[,8:10],1,sum),apply(montreal[,11:13],1,sum))
> X=as.numeric(t(N))
> tsq=ts(X,start=1948,freq=4)
> persp(1948:2013,1:4,N,theta=-50,col="yellow",shade=TRUE,
+ xlab="Year",ylab="Quarter",zlab="Temperature",ticktype="detailed")

(again, the aim is just to be able to write down some equations, if necessary)

Why not consider a VAR(1) model on the quarterly temperatures? Something like

$$\boldsymbol{Y}_{\tau}=\boldsymbol{A}\,\boldsymbol{Y}_{\tau-1}+\boldsymbol{\varepsilon}_{\tau}$$

i.e.

$$\begin{pmatrix}Y_{1,\tau}\\ Y_{2,\tau}\\ Y_{3,\tau}\\ Y_{4,\tau}\end{pmatrix}=\boldsymbol{A}\begin{pmatrix}Y_{1,\tau-1}\\ Y_{2,\tau-1}\\ Y_{3,\tau-1}\\ Y_{4,\tau-1}\end{pmatrix}+\begin{pmatrix}\varepsilon_{1,\tau}\\ \varepsilon_{2,\tau}\\ \varepsilon_{3,\tau}\\ \varepsilon_{4,\tau}\end{pmatrix}$$

where $Y_{s,\tau}$ is the temperature of quarter $s$ in year $\tau$, and $\boldsymbol{A}$ is some $4\times 4$ matrix (a constant is also included in the estimation). This model can easily be estimated,

> library(vars)
> df=data.frame(N)
> names(df)=paste("y",1:4,sep="")
> model=VAR(df)
> model

VAR Estimation Results:
======================= 

Estimated coefficients for equation y1: 
======================================= 
Call:
y1 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

       y1.l1        y2.l1        y3.l1        y4.l1        const 
 -0.13943065   0.21451118   0.08921237   0.30362065 -34.74793931 

Estimated coefficients for equation y2: 
======================================= 
Call:
y2 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

      y1.l1       y2.l1       y3.l1       y4.l1       const 
 0.02520938  0.05288958 -0.13277377  0.05134148 40.68955266 

Estimated coefficients for equation y3: 
======================================= 
Call:
y3 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

      y1.l1       y2.l1       y3.l1       y4.l1       const 
 0.07740824 -0.21142726  0.11180518  0.12963931 56.81087283 

Estimated coefficients for equation y4: 
======================================= 
Call:
y4 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

      y1.l1       y2.l1       y3.l1       y4.l1       const 
 0.18842863 -0.31964579  0.25099508 -0.04452577  5.73228873

and the matrix $\boldsymbol{A}$ is here

> A=rbind(
+ coefficients(model$varresult$y1)[1:4],
+ coefficients(model$varresult$y2)[1:4],
+ coefficients(model$varresult$y3)[1:4],
+ coefficients(model$varresult$y4)[1:4])
> A
           y1.l1       y2.l1       y3.l1       y4.l1
[1,] -0.13943065  0.21451118  0.08921237  0.30362065
[2,]  0.02520938  0.05288958 -0.13277377  0.05134148
[3,]  0.07740824 -0.21142726  0.11180518  0.12963931
[4,]  0.18842863 -0.31964579  0.25099508 -0.04452577

Since stationarity of this multivariate time series is closely related to the eigenvalues of this matrix (all of them should lie strictly inside the unit circle), let us look at them,

> eigen(A)$values
[1]  0.35834830 -0.32824657 -0.14042175  0.09105836
> Mod(eigen(A)$values)
[1] 0.35834830 0.32824657 0.14042175 0.09105836
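Stationarity of the VAR(1) simply requires all those moduli to be strictly smaller than 1, which we can check in one line,

> all(Mod(eigen(A)$values)<1)
[1] TRUE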

So it looks like there is no stationarity issue, here. A restricted model is the periodic autoregressive model, the so-called PAR(1) model, discussed by Paap and Franses,

$$Y_t=\phi_s\,Y_{t-1}+\varepsilon_t\qquad\text{when observation } t \text{ falls in quarter } s\in\{1,2,3,4\}$$

which can be written, stacking the four quarters of year $\tau$ into the vector $\boldsymbol{Y}_\tau$, as

$$\boldsymbol{\Phi}_0\,\boldsymbol{Y}_{\tau}=\boldsymbol{\Phi}_1\,\boldsymbol{Y}_{\tau-1}+\boldsymbol{\varepsilon}_{\tau}$$

where

$$\boldsymbol{\Phi}_0=\begin{pmatrix}1&0&0&0\\ -\phi_2&1&0&0\\ 0&-\phi_3&1&0\\ 0&0&-\phi_4&1\end{pmatrix}$$

and

$$\boldsymbol{\Phi}_1=\begin{pmatrix}0&0&0&\phi_1\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}$$

Keep in mind that this is a VAR(1) model, since

$$\boldsymbol{Y}_{\tau}=\boldsymbol{\Phi}_0^{-1}\boldsymbol{\Phi}_1\,\boldsymbol{Y}_{\tau-1}+\boldsymbol{\Phi}_0^{-1}\boldsymbol{\varepsilon}_{\tau}$$

This model can be estimated using a specific package (one can also look at the vignette, to get a better understanding of the syntax)

> library(partsm)
> detcomp <- list(regular=c(0,0,0), seasonal=c(1,0), regvar=0)
> model=fit.ar.par(wts=tsq, detcomp=detcomp, type="PAR", p=1)
> PAR.MVrepr(model)
----
    Multivariate representation of a PAR model.

  Phi0:

  1.000  0.000  0.000 0
 -0.242  1.000  0.000 0
  0.000 -0.261  1.000 0
  0.000  0.000 -0.492 1

  Phi1:

 0 0 0 0.314
 0 0 0 0.000
 0 0 0 0.000
 0 0 0 0.000

  Eigen values of Gamma = Phi0^{-1} %*% Phi1:
0.01 0 0 0 

  Time varing accumulation of shocks:

 0.010 0.040 0.155 0.314
 0.002 0.010 0.037 0.076
 0.001 0.003 0.010 0.020
 0.000 0.001 0.005 0.010
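Observe, by the way, that the eigenvalue of $\Gamma=\boldsymbol{\Phi}_0^{-1}\boldsymbol{\Phi}_1$ reported above (0.01) is simply the product of the four periodic coefficients; a quick check, with the values copied from $\boldsymbol{\Phi}_0$ and $\boldsymbol{\Phi}_1$,

> phi=c(0.314,0.242,0.261,0.492)
> prod(phi)
[1] 0.009757771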

Here, the characteristic equation is

$$\det(\boldsymbol{\Phi}_0-\boldsymbol{\Phi}_1 z)=1-\phi_1\phi_2\phi_3\phi_4\,z=0$$

so there is a (seasonal) unit root if

$$\phi_1\phi_2\phi_3\phi_4=1$$

which is clearly not the case here, since the product computed above is far from 1. It is also possible to perform the Canova-Hansen test,

> CH.test(tsq)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/2 , pi , 

  L-statistic: 1.122  
  Lag truncation parameter: 5 

  Critical values:

  0.10 0.05 0.025 0.01
 0.846 1.01  1.16 1.35

The idea is that the polynomial $1-z^4$ has four roots, in $\{1,-1,i,-i\}$,

since

$$1-z^4=(1-z)(1+z)(1-iz)(1+iz)$$

If we get back to monthly data, $1-z^{12}$ has twelve roots,

$$z_k=e^{2\pi i k/12},\qquad k=0,1,\dots,11$$

each of them having different interpretations.

Here we can have 1 cycle per year (over 12 months), 2 cycles per year (over 6 months), 3 cycles per year (over 4 months), 4 cycles per year (over 3 months), or even 6 cycles per year (over 2 months). This will depend on the argument of the root, respectively

$$\frac{\pi}{6},\ \frac{\pi}{3},\ \frac{\pi}{2},\ \frac{2\pi}{3}\ \text{ and }\ \pi$$
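Those roots and their arguments can be computed quickly (a small sketch, nothing specific to the temperature data),

> k=0:11
> roots=exp(2i*pi*k/12)    # the twelve roots of 1 - z^12
> round(Arg(roots)/pi,3)   # arguments as multiples of pi: 0, 1/6, 1/3, 1/2, 2/3, 5/6, 1, and their negative counterparts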

The output of the test is here

> CH.test(tsm)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , 

  L-statistic: 1.964  
  Lag truncation parameter: 20 

  Critical values:

 0.10 0.05 0.025 0.01
 2.49 2.75  2.99 3.27

And since the L-statistic is below all the critical values, we do not reject the null of stationarity: there is no evidence of a seasonal unit root. I can even mention the following testing procedure

> library(forecast)
> nsdiffs(tsm, test="ch")
[1] 0

where the output "1" means that there is a seasonal unit root and "0" that there is none. Simple to read, isn't it? If we consider the periodic autoregressive model on the monthly data, the output is

> model=fit.ar.par(wts=tsm, detcomp=detcomp, type="PAR", p=1)
> model
----
  PAR model of order 1 .

  y_t = alpha_{1,s}*y_{t-1} + alpha_{2,s}*y_{t-2} + ... + alpha_{p,s}*y_{t-p} + coeffs*detcomp + epsilon_t,  for s=1,2,...,12
----
  Autoregressive coefficients. 

          s=1  s=2  s=3  s=4  s=5 s=6 s=7  s=8  s=9 s=10 s=11 s=12
alpha_1s 0.15 0.05 0.07 0.33 0.11   0 0.3 0.38 0.31 0.19 0.15 0.37

So, whatever the test, we always conclude that there is no seasonal unit root. Which does not mean that we cannot have a strong cycle! Actually, the series is almost periodic. But there is no unit root! So all of this makes sense (I hardly believe that there might be a unit root, seasonal or not, in temperatures).

Just to make sure that we get it right, consider two time series, inspired by the previous one. The first one is a periodic sequence (with a very, very small noise, just to avoid problems with non-definite matrices) and the second one is clearly integrated.

> Xp1=Xp2=as.numeric(t(M))
> for(t in 13:length(M)){
+ Xp1[t]=Xp1[t-12]
+ Xp2[t]=Xp2[t-12]+rnorm(1,0,2)
+ }
> Xp1=Xp1+rnorm(length(Xp1),0,.02)
> tsp1=ts(Xp1,start=1948,freq=12)
> tsp2=ts(Xp2,start=1948,freq=12)
> par(mfrow=c(2,1))
> plot(tsp1)
> plot(tsp2)

see also

> par(mfrow=c(1,2))
> bb3D(tsp1)
> bb3D(tsp2)

If we quickly look at those series, I would say that the first one has no unit root (it is not stationary, but only because the series is periodic) while there is (are?) unit root(s) in the second one. If we look at the Canova-Hansen test, we get

> CH.test(tsp1)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , 

  L-statistic: 2.234
  Lag truncation parameter: 20 

  Critical values:

 0.10 0.05 0.025 0.01
 2.49 2.75  2.99 3.27

> CH.test(tsp2)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , 

  L-statistic: 5.448  
  Lag truncation parameter: 20 

  Critical values:

 0.10 0.05 0.025 0.01
 2.49 2.75  2.99 3.27

I know that this package has been removed, so maybe I should not use it. Consider instead

> nsdiffs(tsp1, 12,test="ch")
[1] 0
> nsdiffs(tsp2, 12,test="ch")
[1] 1

Here we have the same conclusion. The first one does not have a unit root, but the second one does. But be careful: with the Osborn-Chui-Smith-Birchenhall test,

> nsdiffs(tsp1, 12,test="ocsb")
[1] 1
> nsdiffs(tsp2, 12,test="ocsb")
[1] 1

we have the feeling that there is also a unit root in our cyclic series…

So here, on this low-frequency (monthly) series, we do reject the assumption of a unit root, even a seasonal one, for our temperature series. We still have our high-frequency problem to deal with, some day (but I don't think I'll have enough time to introduce long range dependence this session, unfortunately).

Seasonal, or periodic, time series

Monday, in our MAT8181 class, we've discussed seasonal unit roots from a practical perspective (the theory will be briefly mentioned in a few weeks, once we've seen multivariate models). Consider some time series $(X_t)$, for instance traffic on French roads,

> autoroute=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/autoroute.csv",
+ header=TRUE,sep=";")
> X=autoroute$a100
> T=1:length(X)
> plot(T,X,type="l",xlim=c(0,120))
> reg=lm(X~T)
> abline(reg,col="red")

As discussed in a previous post, if there is a trend, we should remove it, and work on the residual series $(Y_t)$,

> Y=residuals(reg)
> acf(Y,lag=36,lwd=3)

We can observe that there is some seasonality here. A first strategy might be to assume that there is a seasonal unit root, so we consider $Z_t=(1-L^{12})Y_t=Y_t-Y_{t-12}$, and we try to find some ARMA process for it. Consider the empirical autocorrelation function of that time series,

> Z=diff(Y,12)
> acf(Z,lag=36,lwd=3)

or the partial autocorrelation function

> pacf(Z,lag=36,lwd=3)

The first graph might suggest a MA(1) structure, while the second graph might suggest an AR(1) time series. Let us try both.

> model1=arima(Z,order=c(0,0,1))
> model1

Call:
arima(x = Z, order = c(0, 0, 1))

Coefficients:
          ma1  intercept
      -0.2367  -583.7761
s.e.   0.0916   254.8805

sigma^2 estimated as 8071255:  log likelihood = -684.1,  aic = 1374.2

> E1=residuals(model1)
> acf(E1,lag=36,lwd=3)

which can be considered as a white noise (if you're not convinced, try a Box-Pierce or a Ljung-Box test, as in the sketch below).
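For instance, a minimal check on the residuals of the MA(1) model (the lag is arbitrary, and fitdf=1 accounts for the one estimated MA coefficient),

> Box.test(E1,lag=12,type="Ljung-Box",fitdf=1)

should return a large p-value, consistent with white noise. Similarly, for the AR(1) alternative,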

> model2=arima(Z,order=c(1,0,0))
> model2

Call:
arima(x = Z, order = c(1, 0, 0))

Coefficients:
          ar1  intercept
      -0.3214  -583.0943
s.e.   0.1112   248.8735

sigma^2 estimated as 7842043:  log likelihood = -683.07,  aic = 1372.15

> E2=residuals(model2)
> acf(E2,lag=36,lwd=3)

which can also be considered as a white noise. So what we have, so far, is either

$$(1-L^{12})\,Y_t=(1+\theta L)\,\varepsilon_t\qquad\text{or}\qquad(1-\phi L)(1-L^{12})\,Y_t=\varepsilon_t$$

for some white noise $(\varepsilon_t)$. This suggests the following SARIMA structure on $(Y_t)$ (using the autoregressive version),

> model2b=arima(Y,order=c(1,0,0),
+               seasonal = list(order = c(0, 1, 0),
+               period=12)) 
> model2b

Call:
arima(x = Y, order = c(1, 0, 0), seasonal = list(order = c(0, 1, 0), period = 12))

Coefficients:
          ar1
      -0.2715
s.e.   0.1130

sigma^2 estimated as 8412999:  log likelihood = -685.62,  aic = 1375.25

So far, so good. Now, what if we consider that we do not have a seasonal unit root, but simply a large autoregressive coefficient in some AR structure? Let us try something like

$$(1-\phi L)(1-\Phi L^{12})\,Y_t=\varepsilon_t$$

where a natural guess is that the coefficient $\Phi$ should (probably) be close to one. Let us try this one,

> model3c=arima(Y,order=c(1,0,0),
+               seasonal = list(order = c(1, 0, 0), 
+               period = 12))
> model3c

Call:
arima(x = Y, order = c(1, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12))

Coefficients:
          ar1    sar1  intercept
      -0.1629  0.9741  -684.9455
s.e.   0.1170  0.0115  3064.4040

sigma^2 estimated as 8406080:  log likelihood = -816.11,  aic = 1640.21

which is comparable with what we got previously (somehow), so we might assume that this model can be considered as an interesting one. We will discuss further the fact that the first coefficient might be considered as non-significant.
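As a quick (and rough) check of that last claim, one can compute the ratio of the estimate to its standard error, using only components of the fitted model,

> model3c$coef["ar1"]/sqrt(model3c$var.coef["ar1","ar1"])

which is about -0.163/0.117, i.e. roughly -1.4, so well within the usual +/- 1.96 band.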

What is the difference between those two models? With a short-term horizon, the two models are comparable. Clearly,

> library(forecast)
> previ=function(model,h=36,b=40000){
+ prev=forecast(model,h)
+ T=1:85
+ Tfutur=86:(85+h)
+ plot(T,Y,type="l",xlim=c(0,85+h),ylim=c(-b,b))
+ polygon(c(Tfutur,rev(Tfutur)),c(prev$lower[,2],rev(prev$upper[,2])),col="orange",border=NA)
+ polygon(c(Tfutur,rev(Tfutur)),c(prev$lower[,1],rev(prev$upper[,1])),col="yellow",border=NA)
+ lines(prev$mean,col="blue")
+ lines(Tfutur,prev$lower[,2],col="red")
+ lines(Tfutur,prev$upper[,2],col="red")
+ }

Now, on a (very) long-term perspective, the models are quite different: one is stationary, so the forecast will tend to the average value (here 0, since the trend was removed), while the other one is (seasonally) integrated, so the confidence interval will keep increasing. For the non-stationary one, we get

> previ(model2b,600,b=60000)

and for the stationary one

> previ(model3c,600,b=60000)

But as mentioned in the introduction of this course, forecasts with those models are relevant only for short-term horizons (say, not too large). And in that case, the predictions are almost the same here,

> previ(model2b,36,b=60000)

> previ(model3c,36,b=60000)

Now, if we come back on our second model, we did mention previously that the autoregressive coefficient might be considered as non-significant. What if we remove it?

> model3d=arima(Y,order=c(0,0,0),
+               seasonal = list(order = c(1, 0, 0), 
+               period = 12))
> (model3d)

Call:
arima(x = Y, order = c(0, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12))

Coefficients:
        sar1  intercept
      0.9662  -696.5661
s.e.  0.0134  3182.3017

sigma^2 estimated as 8918630:  log likelihood = -817.03,  aic = 1640.07

If we look at a (short-term) forecast, we get

> previ(model3d,36,b=32000)

Do you see any difference? To be honest, I don’t… If we look at the figures, we get

> cbind(forecast(model2b,12)$mean,forecast(model3c,12)$mean,forecast(model3d,12)$mean)
Time Series:
Start = 86 
End = 97 
Frequency = 1 
1   -4908.4920  -5092.8999  -5520.8780
2  -10012.7837  -9640.8103  -9493.0339
3   -3880.2202  -3841.1960  -3828.2611
4  -18102.5211 -17638.4086 -17499.1828
5  -20602.7346 -20090.9117 -19934.1066
6  -10463.2212 -10209.0139 -10132.0439
7    2458.1538   2376.4897   2351.2377
8   -1680.3342  -1654.4844  -1647.0057
9     876.6837    836.2342    823.4934
10  18046.5642  17561.6520  17413.1463
11  21531.4820  20956.3451  20780.2836
12  -3217.6103  -3152.0446  -3132.4112

Figures are different, but not significantly (keep in mind the width of the confidence intervals). This might explain why, in R, when we ask for an autoregressive process of order $p$, we get a model with $p$ parameters to estimate, and even if some are not significant, we usually keep them all for the forecast. Most of the time, from a forecasting point of view, it's no big deal.
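If one really wanted to drop a non-significant coefficient without changing the order, a possible sketch uses the fixed argument of arima (NA means "estimate", 0 means "constrain to zero"; the model name model3e is just for illustration),

> model3e=arima(Y,order=c(1,0,0),
+               seasonal=list(order=c(1,0,0),period=12),
+               fixed=c(0,NA,NA),transform.pars=FALSE)

where the coefficients are ordered ar1, sar1, intercept, so here ar1 is forced to 0 (which is essentially model3d above).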

Easter

This morning, there was an interesting post entitled “why does Easter move around so much?” online on http://economist.com/blogs/economist-explains/…

In my time series classes, I keep saying that sometimes, series can exhibit seasonality, but the seasonal effect can be quite irregular. It is the case for river levels, where snowmelt can have a huge impact, and it is irregular. Similarly, chocolate sales (even monthly, or quarterly) depend on Easter. Because it can be either in March or in April, the seasonal pattern is not as regular as flower sales, for instance (Valentine's Day being always on February 14th, as far as I remember). If we look at the word eggs on http://google.com/trends/q=eggs…, we do observe a cycle related to Easter.

The title of the article published on http://economist.com/blogs/economist-explains/… suggests that there is a lot of variability in Easter's date. Let us check! The answer to the question "When is Easter?" can be the following (if we want a short answer): Easter Sunday is the first Sunday after the first full moon after the vernal equinox. For more details, see e.g. http://ortelius.de/east. The algorithm used to compute the date of Easter is available online, on http://smart.net/~mmontes/….

> year = 2024                   # any (Gregorian) year; e.g. 2024 gives March 31
> century = year %/% 100        # integer division (%/%) and modulo (%%) throughout
> G = year %% 19
> K = (century - 17) %/% 25
> I = (century - century %/% 4 - (century - K) %/% 3 + 19*G + 15) %% 30
> I = I - (I %/% 28)*(1 - (I %/% 28)*(29 %/% (I + 1))*((21 - G) %/% 11))
> J = (year + year %/% 4 + I + 2 - century + century %/% 4) %% 7
> L = I - J
> EasterMonth = 3 + (L + 40) %/% 44
> EasterDay = L + 28 - 31*(EasterMonth %/% 4)

Actually, this algorithm can be found in some R packages. Here we use the dates of Easter between AD 1000 and AD 3000,

> library(timeDate)
> E=Easter(1000:3000)
> D=as.Date(E)
> table(months(D))/2001

    april     march 
0.7651174 0.2348826

(April being before March, in the alphabetical order) If we look at the distribution of the date, it is the following, the starting point being March 1st,

> J=as.numeric(D-as.Date(paste("01/03/",1000:3000,sep=""),"%d/%m/%Y"))
> hist(J,breaks=seq(20,55),col="light green")

And if we look at the autocorrelation function, we can observe that indeed, after 19 years, there is a strong correlation (which could be seen in the algorithm given previously, through the year %% 19 term),

> plot(acf(J))

But in order to get a better understanding of the dynamics, we can also look at transition matrices. Define

> Q=quantile(J,seq(0,1,by=.25))
> Q[1]=Q[1]-1
> C=cut(J,Q)

Then, the one year transition matrix is (in %)

> k=1; n=length(C)
> B=data.frame(X1=(C[1:(n-k)]),X2=(C[(k+1):n]))
> (T=table(B$X1,B$X2))

          (20,31] (31,39] (39,46] (46,55]
  (20,31]       0       0     265     277
  (31,39]     316       0      13     182
  (39,46]     224     264       0       0
  (46,55]       1     247     211       0
> P=T/apply(T,1,sum)
> round(P*1000)/10

          (20,31] (31,39] (39,46] (46,55]
  (20,31]     0.0     0.0    48.9    51.1
  (31,39]    61.8     0.0     2.5    35.6
  (39,46]    45.9    54.1     0.0     0.0
  (46,55]     0.2    53.8    46.0     0.0

I.e., if Easter was early in the year (say in March, in the first quartile), then, very likely, the year after it will be late in the year (with roughly a 50% chance of being in the third quartile, and 50% in the fourth one).
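And given the strong correlation at lag 19 observed on the autocorrelation function, one would expect the 19-year transition matrix to be much more concentrated on the diagonal; a quick sketch to check, reusing the code above,

> k=19; n=length(C)
> B19=data.frame(X1=C[1:(n-k)],X2=C[(k+1):n])
> T19=table(B19$X1,B19$X2)
> round(T19/apply(T19,1,sum)*1000)/10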