Category Archives: Time Series

Tents, Tweets, and Events: Ongoing Protests and Social Media

Our paper, entitled Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media, written with Marco Toledo Bastos and Dan Mercea, just appeared in the Journal of Communication.

Recent protest movements have fuelled deliberations about the extent to which social media ignite protests. In this paper we compare time-series data of Twitter, Facebook, and onsite protest activity to test the hypothesis of Granger-causality between social media streams and protestors attending demonstrations during the Indignados in Spain, the Occupy movement in the U.S., and the Vinegar protests in Brazil. After applying a Gaussianization procedure to the time series, we confirmed the hypothesis that contentious communication on Twitter and Facebook was Granger-causal of onsite protest activity during the Indignados and the Occupy protests, with bidirectional causality between online and onsite protest activity in the Occupy series. The Vinegar protests in Brazil presented Granger-causality only between Facebook and Twitter and between protestors and injured or arrested protestors. The results indicate that the causal relationship between online and onsite political activity varies considerably across different socioeconomic contexts with different levels of Internet penetration.
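
For readers curious about the mechanics, here is a minimal sketch of a Granger-causality test in R (not the pipeline used in the paper, which includes a Gaussianization step), on simulated series with hypothetical names, using grangertest from the lmtest package,

> library(lmtest)
> set.seed(1)
> n=200
> tweets=as.numeric(arima.sim(n=n,list(ar=.5)))  # hypothetical online activity
> protest=.4*c(0,tweets[-n])+rnorm(n)            # onsite activity, lagging tweets by construction
> db=data.frame(protest,tweets)
> grangertest(protest~tweets,order=1,data=db)    # H0: tweets do not Granger-cause protest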

Variance of the Average of a Sequence

In the case where $X_1,\dots,X_n$ are i.i.d. random variables with variance $\sigma^2$, then

$$\text{Var}(\overline{X}_n)=\text{Var}\left(\frac{1}{n}\sum_{t=1}^n X_t\right)=\frac{\sigma^2}{n}$$

Now, what if $X_1,\dots,X_n$ are identically distributed, but no longer independent? What if we have an autoregressive process? Assume that

$$X_t=\phi X_{t-1}+\varepsilon_t$$

Then

$$\text{Var}(\overline{X}_n)=\frac{1}{n^2}\sum_{s=1}^{n}\sum_{t=1}^{n}\text{Cov}(X_s,X_t)$$

can be written

$$\text{Var}(\overline{X}_n)=\frac{1}{n^2}\left[n\gamma(0)+2\sum_{h=1}^{n-1}(n-h)\gamma(h)\right]$$

Here, we will express the variance as a function of $\gamma(0)$ and $\phi$, but it is possible to use also $\sigma^2$, since, in the context of an AR(1) process,

$$\gamma(0)=\frac{\sigma^2}{1-\phi^2}$$

Now, since $\gamma(h)=\phi^{|h|}\gamma(0)$, we get

$$\text{Var}(\overline{X}_n)=\frac{\gamma(0)}{n^2}\left[n+2\sum_{h=1}^{n-1}(n-h)\phi^h\right]$$

which can be simplified, since

$$\sum_{h=1}^{n-1}(n-h)\phi^h=n\sum_{h=1}^{n-1}\phi^h-\sum_{h=1}^{n-1}h\,\phi^h$$

i.e.

$$\sum_{h=1}^{n-1}(n-h)\phi^h=\frac{(n-1)\phi^{-1}-n+\phi^{n-1}}{(\phi^{-1}-1)^2}$$

So, the variance of the mean can be written as

$$\text{Var}(\overline{X}_n)=\frac{\gamma(0)}{n^2}\left[n+2\,\frac{(n-1)\phi^{-1}-n+\phi^{n-1}}{(\phi^{-1}-1)^2}\right]$$

Observe that if $n$ is large enough,

$$\text{Var}(\overline{X}_n)\sim\frac{\gamma(0)}{n}\cdot\frac{1+\phi}{1-\phi}$$

This asymptotic relationship is actually well known. A simple way to get it is the following. One can write

$$\text{Var}(\overline{X}_n)=\frac{\gamma(0)}{n}\left[1+2\sum_{h=1}^{n-1}\left(1-\frac{h}{n}\right)\rho(h)\right]$$

or equivalently

$$\text{Var}(\overline{X}_n)=\frac{1}{n}\left[\gamma(0)+2\sum_{h=1}^{n-1}\left(1-\frac{h}{n}\right)\gamma(h)\right]$$

But actually, the first relationship is probably more interesting to get an asymptotic approximation,

$$\text{Var}(\overline{X}_n)\sim\frac{\gamma(0)}{n}\left[1+2\sum_{h=1}^{\infty}\rho(h)\right]$$

In the context of an AR(1) process, where $\rho(h)=\phi^h$, this can be written

$$\text{Var}(\overline{X}_n)\sim\frac{\gamma(0)}{n}\left[1+\frac{2\phi}{1-\phi}\right]=\frac{\gamma(0)}{n}\cdot\frac{1+\phi}{1-\phi}$$

Thus, we get the following well-known relationship,

$$\text{Var}(\overline{X}_n)\sim\frac{\sigma^2}{n\,(1-\phi)^2}$$

In the case where $(X_t)$ is an i.i.d. sequence, i.e. $\phi=0$, then we get the relationship mentioned initially. And in the case of a random walk, $\phi=1$… unfortunately, we cannot use that relationship. But observe that, starting from $X_0=0$,

$$X_t=\varepsilon_1+\varepsilon_2+\cdots+\varepsilon_t$$

i.e.

$$\overline{X}_n=\frac{1}{n}\sum_{t=1}^{n}\sum_{s=1}^{t}\varepsilon_s=\frac{1}{n}\sum_{s=1}^{n}(n-s+1)\,\varepsilon_s$$

which can be written

$$\text{Var}(\overline{X}_n)=\frac{\sigma^2}{n^2}\sum_{k=1}^{n}k^2=\frac{(n+1)(2n+1)}{6n}\,\sigma^2$$

If we compare the true value and the approximation, we get the following graph,

> V=function(phi,s2=1,n=100){
+ # exact (v1) and asymptotic (v2) variance of the mean of an AR(1)
+ g0=s2/(1-phi^2)
+ if(phi<1){
+ if(phi==0){v1=g0/n}
+ if(phi>0){v1=g0/n^2*(n+2*((n-1)*
+ phi^(-1)-n+phi^(n-1))/(phi^(-1)-1)^2)}
+ v2=g0/n*(1+phi)/(1-phi)
+ }
+ if(phi==1){
+ # random walk: exact formula, no asymptotic approximation
+ v1=(2*n+1)*(n+1)*s2/(6*n)
+ v2=NA
+ }
+ return(c(v1,v2))}
> 
> Vphi=function(phi) V(phi,1,100)
> x=seq(.01,1,by=.02)
> M=matrix(unlist(lapply(x,Vphi)),nrow=2)
> plot(x,M[1,],type="l",col="red",log="y",
+ ylab="Variance of the average (log scale)",
+ xlab="Autoregressive coefficient")
> lines(x,M[2,],col="blue")
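
As a quick sanity check – a sketch, with hypothetical settings $\phi=0.5$ and $10^4$ simulated paths – the empirical variance of the average of simulated AR(1) series can be compared with the exact value returned by V,

> phi=.5
> m=replicate(1e4,mean(arima.sim(n=100,list(ar=phi))))
> var(m)         # empirical variance of the average
> V(phi,1,100)   # exact value, and asymptotic approximation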

Google, and Forecasting

For the second assignment in the ACT6420 course (forecasting models), the goal is to forecast searches on Google, via https://www.google.com/trends/. Either you have a keyword that interests you, or you pick one of the series available in /ACT6420-TS/. Among the datasets that were put online, there is for instance the keyword gym.

> report=read.table(
+ "http://freakonometrics.free.fr/ACT6420-TS/report-GYM.csv",
+ skip=4,header=TRUE,sep=",",nrows=548)

Let us clean up the dataset a little bit (in particular the missing values at the end),

> tail(report)
                    Semaine gym
543 2014-05-25 - 2014-05-31  80
544 2014-06-01 - 2014-06-07  80
545 2014-06-08 - 2014-06-14  78
546 2014-06-15 - 2014-06-21  NA
547 2014-06-22 - 2014-06-28  NA
548 2014-06-29 - 2014-07-05  NA
> report=report[!is.na(report[,2]),]
> tail(report)
                    Semaine gym
540 2014-05-04 - 2014-05-10  79
541 2014-05-11 - 2014-05-17  80
542 2014-05-18 - 2014-05-24  79
543 2014-05-25 - 2014-05-31  80
544 2014-06-01 - 2014-06-07  80
545 2014-06-08 - 2014-06-14  78

The data here are weekly, as we can see graphically,

> hebdo=ts(report$gym,start=2004,frequency=52)
> hebdo
Time Series:
Start = c(2004, 1) 
End = c(2014, 25) 
Frequency = 52 
  [1]  68  60  60  53  55  50  49  49  51  48  48  45  45
 [14]  47  42  48  46  47  46  47  47  46  47  46  45  46
 [27]  46  50  48  48  52  51  57  55  53  56  55  50  48
 [40]  50  46  49  46  48  49  48  46  50  47  46  43  54
 [53]  69  64  63  60  57  57  53  54  55  54  50  53  54
 [66]  46  50  50  49  49  49  47  49  48  49  50  49  49
 [79]  47  47  50  49  52  51  55  55  55  54  54  52  53
 [92]  54  53  52  51  51  50  51  48  52  50  47  45  56
[105]  76  72  66  64  63  59  53  56  57  58  54  55  54
[118]  53  52  52  50  53  50  51  49  51  51  50  50  48
[131]  48  53  52  56  58  57  60  62  62  63  59  58  58
[144]  56  54  54  53  53  54  53  50  55  54  53  51  56
[157]  77  73  68  68  67  66  64  67  64  63  63  63  62
[170]  62  61  62  61  63  62  63  63  63  63  59  58  59
[183]  61  60  60  58  61  61  62  62  64  68  66  61  58
[196]  58  55  54  51  55  54  55  53  55  53  52  50  55
[209]  76  77  68  67  64  64  58  61  59  59  57  55  57
[222]  59  57  54  56  55  54  52  52  53  54  53  55  55
[235]  55  57  54  56  55  58  65  63  64  67  66  63  62
[248]  60  60  57  55  56  56  58  58  53  56  55  54  52
[261]  69  77  71  68  66  63  60  60  62  59  59  57  57
[274]  60  58  58  56  54  58  57  56  57  57  57  57  54
[287]  54  55  57  57  56  64  60  59  62  62  64  59  58
[300]  57  57  54  53  52  53  53  55  53  55  53  50  49
[313]  63  76  73  68  65  66  60  61  61  60  58  58  61
[326]  61  62  57  57  58  55  58  57  58  59  57  55  55
[339]  57  57  58  59  59  60  60  63  63  63  66  65  62
[352]  60  59  58  57  56  58  59  56  54  57  54  54  53
[365]  66  87  77  74  72  69  68  64  65  65  68  63  65
[378]  65  65  62  61  62  62  63  63  61  65  63  64  63
[391]  61  62  62  61  62  65  63  67  67  71  74  71  70
[404]  68  68  65  66  64  65  68  64  64  65  62  61  60
[417]  69  91  88  83  81  78  75  71  73  74  73  70  69
[430]  66  68  69  66  68  68  65  69  66  69  70  69  70
[443]  69  72  72  71  69  76  74  72  77  77  82  78  72
[456]  72  69  68  67  67  64  66  66  63  65  64  62  61
[469]  67  88  90  83  81  83  77  76  76  74  75  74  74
[482]  77  77  77  73  72  76  72  71  72  74  72  74  73
[495]  73  76  73  73  71  76  76  79  79  83  83  81  78
[508]  78  76  78  80  74  73  75  74  75  72  71  70  69
[521]  73  92 100  94  93  91  86  84  84  85  85  83  82
[534]  83  83  78  79  80  80  79  80  79  80  80  78

> plot(hebdo)

To get simpler models, we can aggregate the data and make it monthly (by linear interpolation). The function to use here is

H2M=function(BASE){
  # convert weekly Google Trends data into a monthly time series
  X=BASE[,2]
  Y=BASE[,1]
  # the first column contains "YYYY-MM-DD - YYYY-MM-DD": extract both dates
  date1=substr(as.character(Y),1,10)
  date2=substr(as.character(Y),14,23)
  D1=as.Date(date1,"%Y-%m-%d")
  D2=as.Date(date2,"%Y-%m-%d")
  vm=vy=N=NA
  for(t in 1:length(D1)){
    # the 7 days of week t, each carrying the weekly value
    mois=seq(D1[t],D2[t],length=7)
    vm=c(vm,as.POSIXlt(mois)$mon+1)
    vy=c(vy,as.POSIXlt(mois)$year+1900)
    N=c(N,rep(X[t],7))}
  N=N[-1]; vm=vm[-1]; vy=vy[-1]
  # average the daily values within each year-month
  YM=vy*100+vm
  Z=tapply(N,as.factor(YM),mean)
  Zts=ts(as.numeric(Z),start=c(2004,1),frequency=12)
  return(Zts)}

If we use this function on our data, we get

> mensuel=H2M(report)
> mensuel
          Jan      Feb      Mar      Apr      May      Jun
2004 60.25000 50.75862 47.51613 45.66667 46.67742 46.00000
2005 63.22581 55.10714 53.03226 49.10000 48.45161 49.10000
2006 68.87097 57.10714 55.51613 51.86667 50.70968 49.93333
2007 70.74194 65.57143 62.87097 61.60000 62.77419 59.96667
2008 70.45161 60.79310 57.19355 56.13333 52.96774 54.30000
2009 70.35484 61.25000 58.19355 57.13333 56.80645 56.00000
2010 69.87097 61.78571 59.45161 58.76667 57.16129 56.40000
2011 76.58065 66.21429 65.22581 62.66667 62.51613 63.16667
2012 85.00000 73.82759 69.93548 67.76667 67.25806 69.46667
2013 84.93548 76.39286 75.00000 74.80000 72.67742 73.13333
2014 94.29032 84.96429 83.29032 79.80000 79.54839 79.00000
          Jul      Aug      Sep      Oct      Nov      Dec
2004 47.80645 53.67742 52.63333 47.77419 47.96667 47.61290
2005 48.41935 53.51613 53.43333 52.41935 50.20000 49.74194
2006 52.48387 59.77419 59.66667 54.12903 52.86667 54.35484
2007 59.87097 62.03226 63.10000 54.45161 54.30000 54.09677
2008 55.45161 62.16129 63.96667 57.41935 56.23333 56.09677
2009 55.96774 61.12903 60.16667 54.29032 53.60000 53.35484
2010 58.12903 61.64516 63.43333 57.67742 56.73333 56.48387
2011 61.80645 66.22581 70.86667 65.77419 65.23333 63.19355
2012 71.48387 75.06452 75.80000 67.22581 64.90000 65.12903
2013 73.51613 78.93548 79.73333 76.41935 73.93333 72.80645
2014                                                      
> ts.plot(mensuel)

This dataset can now be used for the assignment. The goal here is to produce a forecast for the next 2 years, with a confidence interval. But I will say more about it by email.
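
As a possible starting point (a sketch, not the expected solution), one can fit a Holt-Winters model, with a seasonal component, on the monthly series, and forecast the next 24 months with a prediction interval,

> hw=HoltWinters(mensuel)
> p=predict(hw,n.ahead=24,prediction.interval=TRUE)
> plot(hw,p)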

Forecasting Time Series

In the second part of the forecasting models course, we will (somewhat) leave individual data aside, and talk about time series data. This week's slides (and probably next week's too) are online.

In parallel, I have put online some lecture notes on time series, which might be a useful complement. As Doug Martin said, "Time series is the worst subject to teach. First, you have to teach the standard theory. Then, if you are being honest, you have to tell the students 'none of this stuff works, and this is what people really do'" (quoted last week by David J. Thomson, at the SSC annual meeting). We will try to keep that in mind throughout the course!

Exam, Time Series

After the in-class presentations of the last few sessions, the exam for the MAT8181 Time Series course took place this morning (and should be over in a few minutes, with some extra time for some students, given the subway breakdown we had the joy of experiencing). The exam is online, and I have also written up some elements of a solution. In case of (minor or major) disagreement with my answers, please let me know quickly!

Stationarity of ARCH processes

In the context of AR(1) processes, $X_t=\phi X_{t-1}+\varepsilon_t$, we spent some time explaining what happens when $\phi$ is close to 1.

  • if $|\phi|<1$, the process is stationary,
  • if $\phi=1$, the process is a random walk,
  • if $|\phi|>1$, the process will explode.

Again, random walks are extremely interesting processes, with puzzling properties. For instance,

$$\limsup_{t\to\infty}X_t=+\infty\quad\text{and}\quad\liminf_{t\to\infty}X_t=-\infty$$

almost surely, and the process will cross the x-axis an infinite number of times…

Recently, in the MAT8181 course, we studied carefully the properties of the ARCH(1) process, especially when $\alpha$ exceeds 1. And again, what we get might be puzzling.

Consider some ARCH(1) process $(\varepsilon_t)$, with a Gaussian noise, i.e.

$$\varepsilon_t=\eta_t\,\sqrt{\sigma_t^2}$$

where

$$\sigma_t^2=\omega+\alpha\,\varepsilon_{t-1}^2$$

and $(\eta_t)$ is a sequence of i.i.d. $\mathcal{N}(0,1)$ variables. Here both $\omega$ and $\alpha$ have to be positive.

Recall that $\mathbb{E}[\varepsilon_t]=0$ since $\mathbb{E}[\eta_t]=0$. Further

$$\mathbb{E}[\varepsilon_t^2]=\omega+\alpha\,\mathbb{E}[\varepsilon_{t-1}^2]$$

since $\mathbb{E}[\eta_t^2]=1$, so the variance exists, and is constant only if $\alpha<1$, and in that case

$$\text{Var}(\varepsilon_t)=\frac{\omega}{1-\alpha}$$

Further, if $3\alpha^2<1$, then the fourth moment can be obtained,

$$\mathbb{E}[\varepsilon_t^4]=\frac{3\,\omega^2(1+\alpha)}{(1-\alpha)(1-3\alpha^2)}$$

since $\mathbb{E}[\eta_t^4]=3$. Now, if we get back on the property obtained while studying the variance, what does that mean if $\alpha=1$, or $\alpha>1$?
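
As a quick sanity check of the variance formula (a sketch, with hypothetical values $\omega=0.2$ and $\alpha=0.5$, so that the variance should be $0.2/(1-0.5)=0.4$),

> set.seed(1)
> n=1e5
> a=.5
> w=.2
> eta=rnorm(n)
> epsilon=rep(0,n)
> for(t in 2:n){epsilon[t]=eta[t]*sqrt(w+a*epsilon[t-1]^2)}
> var(epsilon)   # should be close to w/(1-a)=0.4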

If we look at simulations, we can generate an ARCH(1) process with $\alpha=2$ and $\omega=0.2$, for instance.

> n=600
> a=2
> w=0.2
> set.seed(1)
> eta=rnorm(n)
> epsilon=rnorm(n)
> sigma2=rep(w,n)
> for(t in 2:n){
+ sigma2[t]=w+a*epsilon[t-1]^2
+ epsilon[t]=eta[t]*sqrt(sigma2[t])
+ }
> plot(epsilon,type="l")

In order to understand what's going on, we should keep in mind that what we showed is that $\alpha$ has to lie in $[0,1)$ to be able to compute the second moment of $\varepsilon_t$. But it is possible to have a stationary process with infinite variance. And actually, this is what we have here.

Write

$$\sigma_t^2=\omega+\alpha\,\eta_{t-1}^2\,\sigma_{t-1}^2$$

and then, iterate

$$\sigma_t^2=\omega+\alpha\,\eta_{t-1}^2\left(\omega+\alpha\,\eta_{t-2}^2\,\sigma_{t-2}^2\right)$$

and iterate again, and again, and again…

$$\sigma_t^2=\omega\left(1+\sum_{i=1}^{\infty}a_i\right)$$

where

$$a_i=\alpha^i\,\eta_{t-1}^2\,\eta_{t-2}^2\cdots\eta_{t-i}^2$$

Here, we have a sum of positive terms, and we can use the so-called Cauchy rule: define

$$\ell=\limsup_{i\to\infty}\,a_i^{1/i}$$

then, if $\ell<1$, the series $\sum_i a_i$ converges. Here,

$$a_i^{1/i}=\alpha\left(\eta_{t-1}^2\cdots\eta_{t-i}^2\right)^{1/i}$$

which can also be written

$$a_i^{1/i}=\alpha\,\exp\left(\frac{1}{i}\sum_{j=1}^{i}\log\eta_{t-j}^2\right)$$

and from the law of large numbers, since we have here a sum of i.i.d. terms,

$$\frac{1}{i}\sum_{j=1}^{i}\log\eta_{t-j}^2\ \longrightarrow\ \mathbb{E}\left[\log\eta^2\right]\quad\text{a.s.}$$

So, if $\alpha\,\exp\!\left(\mathbb{E}\left[\log\eta^2\right]\right)<1$, then $\sigma_t^2$ will have a (finite) limit when $t$ goes to infinity.

The condition above can be written

$$\gamma=\mathbb{E}\left[\log(\alpha\,\eta^2)\right]<0$$

where $\gamma$ is called the Lyapunov coefficient.

The equation

$$\mathbb{E}\left[\log(\alpha\,\eta^2)\right]=0$$

is a condition on $\alpha$, namely $\alpha<\exp\!\left(-\mathbb{E}\left[\log\eta^2\right]\right)$.

In the case where $\eta\sim\mathcal{N}(0,1)$, the numerical value of this upper bound is 3.56.

> 1/exp(mean(log(rnorm(1e7)^2)))
[1] 3.562517
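
Actually, this upper bound has a closed form: for $\eta\sim\mathcal{N}(0,1)$, $\mathbb{E}[\log\eta^2]=-(\log 2+\gamma_E)$, where $\gamma_E$ is the Euler-Mascheroni constant, so the bound is $2e^{\gamma_E}$,

> 2*exp(-digamma(1))   # digamma(1) = -gamma_E
[1] 3.562145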

In that case ($1\leq\alpha<3.56$, e.g. $\alpha=2$ as above), the variance may be infinite, but the series is stationary. On the other hand, if $\alpha>3.56$ (so that $\mathbb{E}[\log(\alpha\,\eta^2)]>0$), then $\sigma_t^2$ will go to infinity almost surely, as $t$ goes to infinity.

But in order to observe this difference, we need a lot of observations. For instance, compare the path simulated above, with $\alpha=2$, with the same simulation run with $\alpha=4$.
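
The second series can be generated with the same code as above, only changing the value of $\alpha$ (a sketch; it is stored in epsilon4 so that the first series, used below, is not overwritten),

> a=4
> set.seed(1)
> eta=rnorm(n)
> epsilon4=rnorm(n)
> sigma2=rep(w,n)
> for(t in 2:n){
+ sigma2[t]=w+a*epsilon4[t-1]^2
+ epsilon4[t]=eta[t]*sqrt(sigma2[t])
+ }
> plot(epsilon4,type="l")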

We can easily see a difference. I do not say that it's easy to see that the distribution above has an infinite variance, but still. Actually, if we consider Hill's plot on the series above, on the tails of positive values of $\varepsilon_t$,

> library(evir)
> hill(epsilon)

or on the tails of negative ‘s

> hill(-epsilon)

we can see that the tail index is (strictly) smaller than 2 (meaning that the moment of order 2 does not exist).

Why is it puzzling? Maybe because here, $(\varepsilon_t)$ is not weakly stationary (in the $L^2$ sense), but it is strongly stationary. Which is not the usual way weak and strong are related. This might be why we will not call this strong stationarity, but strict stationarity.

Inference for ARCH processes

Consider some ARCH($p$) process, say ARCH(1),

$$\varepsilon_t=\eta_t\,\sqrt{\sigma_t^2}$$

where

$$\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2$$

with a Gaussian (strong) white noise $(\eta_t)$.

> n=500
> a1=0.8
> a2=0.0
> w= 0.2
> set.seed(1)
> eta=rnorm(n)
> epsilon=rnorm(n)
> sigma2=rep(w,n)
> for(t in 3:n){
+ sigma2[t]=w+a1*epsilon[t-1]^2+a2*epsilon[t-2]^2
+ epsilon[t]=eta[t]*sqrt(sigma2[t])
+ }
> par(mfrow=c(1,1))
> plot(epsilon,type="l",ylim=c(min(epsilon)-.5,max(epsilon)))
> lines(min(epsilon)-1+sqrt(sigma2),col="red")

(the red line is the conditional variance process).

> par(mfrow=c(1,2))
> acf(epsilon,lag=50,lwd=2)
> acf(epsilon^2,lag=50,lwd=2)

We did mention in class that if $(\varepsilon_t)$ is an ARCH(1), then $(\varepsilon_t^2)$ is an AR(1) process. So a first idea is to consider a regression, as we did for Gaussian AR(1) processes,

> db=data.frame(Y=epsilon[2:n]^2,X1=epsilon[1:(n-1)]^2)
> summary(lm(Y~X1,data=db))

Call:
lm(formula = Y ~ X1, data = db)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.4538 -0.3618 -0.2626  0.0935  9.3667 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.34963    0.04342   8.052 6.08e-15 ***
X1           0.31123    0.04262   7.303 1.13e-12 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.8413 on 497 degrees of freedom
Multiple R-squared:  0.0969,	Adjusted R-squared:  0.09508 
F-statistic: 53.33 on 1 and 497 DF,  p-value: 1.129e-12

There is some significant autocorrelation here. But since our vectors cannot be considered as Gaussian, using least squares is perhaps not the best strategy. Actually, if our series is not Gaussian, it is still conditionally Gaussian, since we assumed that $(\eta_t)$ is a Gaussian (strong) white noise,

$$\varepsilon_t\mid\varepsilon_{t-1}\sim\mathcal{N}(0,\sigma_t^2)\quad\text{with}\quad\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2$$

The likelihood is then

$$\mathcal{L}(\omega,\alpha_1)=\prod_{t=2}^{n}\frac{1}{\sqrt{2\pi\sigma_t^2}}\exp\left(-\frac{\varepsilon_t^2}{2\sigma_t^2}\right)$$

and the log-likelihood is (up to an additive constant)

$$\log\mathcal{L}(\omega,\alpha_1)=-\frac{1}{2}\sum_{t=2}^{n}\log(\sigma_t^2)-\frac{1}{2}\sum_{t=2}^{n}\frac{\varepsilon_t^2}{\sigma_t^2}$$

And a natural idea is to define

$$(\widehat{\omega},\widehat{\alpha}_1)=\underset{(\omega,\alpha_1)}{\text{argmax}}\ \log\mathcal{L}(\omega,\alpha_1)$$

The code is simply

> X=epsilon
> loglik=function(param){
+ w=exp(param[1])
+ a1=exp(param[2])
+ s2=rep(w,n)
+ for(t in 2:length(X)){s2[t]=w+a1*X[t-1]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)
+ }
> OPT=optim(par=
+ coefficients(lm(Y~X1,data=db)),fn=loglik)
> exp(OPT$par)
(Intercept)          X1 
  0.2482241   0.5858578

(since the parameters have to be positive, we assume here that they can be written as exponentials of some real values). Observe that those values are closer to the ones used to generate our time series.
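
As a side note, approximate standard errors can be obtained from the numerical Hessian (a sketch: since the optimization was on the log-scale, the delta method is used to get back to $(\omega,\alpha_1)$),

> OPT=optim(par=OPT$par,fn=loglik,hessian=TRUE)
> # delta method: se(exp(u)) is approximately exp(u)*se(u)
> exp(OPT$par)*sqrt(diag(solve(OPT$hessian)))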

If we use R functions to estimate those parameters, we get

> library(tseries)
> summary(garch(epsilon,c(0,1)))
...

Call:
garch(x = epsilon, order = c(0, 1))

Model:
GARCH(0,1)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.87023 -0.60836 -0.03426  0.66648  3.48443 

Coefficient(s):
    Estimate  Std. Error  t value Pr(>|t|)    
a0   0.24959     0.02470   10.104  < 2e-16 ***
a1   0.58306     0.09737    5.988 2.13e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

so that the 95% confidence interval for $\alpha_1$ is

> summary(garch(epsilon,c(0,1)))$coef[2,1]+
+ c(-1.96,1.96)*summary(garch(epsilon,c(0,1)))$coef[2,2]
[1] 0.3922088 0.7739088

Actually, since our main interest is this $\alpha_1$ parameter, it is possible to use profile likelihood techniques,

> proflik=function(a){
+ loglik=function(w){
+ s2=rep(w,n)
+ for(t in 2:length(X)){s2[t]=w+a*X[t-1]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)}
+ return(-optim(par=.3,fn=loglik)$value)}

> A=seq(0,2,by=.05)
> P=Vectorize(proflik)(A)
> par(mfrow=c(1,1))
> plot(A,P,type="l")
> OPT=optimize(function(x) -proflik(x), interval=c(0,2))
> # cutoff from the likelihood ratio test: chi-square(1)/2 below the maximum
> t=-OPT$objective-qchisq(.95,df=1)/2
> abline(h=t,col="red")
> ainf=uniroot(function(x) proflik(x)-t,c(0,OPT$minimum))$root
> asup=uniroot(function(x) proflik(x)-t,c(OPT$minimum,2))$root
> abline(v=ainf,lty=2)
> abline(v=asup,lty=2)

Of course, all those techniques can be extended to higher order ARCH processes. For instance, if we assume that we have an ARCH(2) time series

$$\varepsilon_t=\eta_t\,\sqrt{\sigma_t^2}$$

where now

$$\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2+\alpha_2\,\varepsilon_{t-2}^2$$

with a Gaussian (strong) white noise $(\eta_t)$. The log-likelihood is still

$$\log\mathcal{L}(\omega,\alpha_1,\alpha_2)=-\frac{1}{2}\sum_{t=3}^{n}\log(\sigma_t^2)-\frac{1}{2}\sum_{t=3}^{n}\frac{\varepsilon_t^2}{\sigma_t^2}$$

and we can define

$$(\widehat{\omega},\widehat{\alpha}_1,\widehat{\alpha}_2)=\underset{(\omega,\alpha_1,\alpha_2)}{\text{argmax}}\ \log\mathcal{L}(\omega,\alpha_1,\alpha_2)$$

The code above can be changed, to take into account this additional component,

> db=data.frame(Y=epsilon[3:n]^2,
+ X1=epsilon[2:(n-1)]^2,
+ X2=epsilon[1:(n-2)]^2)
> X=epsilon
> loglik=function(param){
+ w=exp(param[1])
+ a1=exp(param[2])
+ a2=exp(param[3])
+ s2=rep(w,n)
+ for(t in 3:length(X)){s2[t]=w+a1*X[t-1]^2+a2*X[t-2]^2}
+ logL=-.5*sum(log(s2))-.5*sum(X^2/s2)
+ return(-logL)
+ }
> OPT=optim(par=
+ coefficients(lm(Y~X1+X2,data=db)),fn=loglik)
> exp(OPT$par)
(Intercept)          X1          X2 
 0.22710526  0.59475474  0.04741294

We can also consider some Generalized ARCH process, e.g. a GARCH($p$,$q$), say a GARCH(1,1),

$$\varepsilon_t=\eta_t\,\sqrt{\sigma_t^2}$$

where now

$$\sigma_t^2=\omega+\alpha_1\,\varepsilon_{t-1}^2+\beta_1\,\sigma_{t-1}^2$$

Again, maximum likelihood techniques can be used. Actually, we can also code a Fisher-Scoring algorithm, since (in a very general context) the score iteration can be written

$$\theta_{k+1}=\theta_k+\left(\sum_{t}\nabla\ell_t(\theta_k)\,\nabla\ell_t(\theta_k)^{\top}\right)^{-1}\left(\sum_{t}\nabla\ell_t(\theta_k)\right)$$

where $\ell_t$ denotes the log-density contribution of observation $t$, with here $\theta=(\omega,\alpha_1,\beta_1)'$. Using this scoring algorithm, we get the following estimates for our GARCH process,

> X=epsilon
> theta=c(.2,.2,.2)   # starting values for (w, a1, b1)
> G=rep(1,3)
> n=length(X)
> j=1
> while(sum(G^2)>1e-12){
+ s2=rep(theta[1],n)
+ for (i in 2:n){s2[i]=theta[1]+theta[2]*X[(i-1)]^2+theta[3]*s2[(i-1)]}
+ z=(X^2-s2)/s2^2
+ V=cbind(z[2:n],z[2:n]*X[1:(n-1)]^2,z[2:n]*s2[1:(n-1)])   # scores w.r.t. (w, a1, b1)
+ H=(t(V)%*%V)               # outer product of the scores
+ G=apply(V,2,sum)           # gradient
+ theta=theta+solve(H)%*%G   # scoring update
+ j=j+1}
> as.numeric(theta)
[1] 0.20372918 0.59183911 0.08936159

The interesting point, here, is that we can also derive the (asymptotic) variance of the estimators,

> (stdev=sqrt(diag(solve(H))))
[1] 0.01849067 0.04950477 0.02937233
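
Here again, we can cross-check with the tseries package (in garch, the order argument is c(GARCH, ARCH), consistently with the c(0,1) call used above for the ARCH(1) fit),

> summary(garch(epsilon,order=c(1,1)))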

Seasonal Unit Roots

As discussed in the MAT8181 course, there are – at least – two kinds of non-stationary time series: those with a trend, and those with a unit root (they will be called integrated). Unit root tests cannot be used to assess whether a time series is stationary, or not. They can only detect integrated time series. And the same holds for seasonal unit roots.

In a previous post, we've seen that it was difficult to model hourly temperatures, since most tests do not reject the unit root hypothesis. Consider here the average monthly temperature, still in Montréal, QC.

> montreal=read.table("http://freakonometrics.free.fr/temp-montreal-monthly.txt")
> M=as.matrix(montreal[,2:13])
> X=as.numeric(t(M))
> tsm=ts(X,start=1948,freq=12)
> plot(tsm)

For those who don’t know Montréal, Winter and Summer are very different. We can visualize monthly differences using

> month=rep(1:12,length(tsm)/12)
> plot(month,as.numeric(tsm))
> lines(1:12,apply(M,2,mean),col="red",type="b",pch=19)

or, if we install the uroot package (which was removed from the CRAN repository, and has to be installed from the archives), we can use

> library(uroot)
> bbplot(tsm)

or

> bb3D(tsm)
Loading required package: tcltk

It looks like our time series is cyclic, because of the yearly seasonal pattern. The autocorrelation function is here

> acf(tsm,lag=120)

Again, this cycle can be visualized using

> persp(1948:2013,1:12,M,theta=-50,col="yellow",shade=TRUE,
+ xlab="Year",ylab="Month",zlab="Temperature",ticktype="detailed")

Now, the question is: is there a seasonal unit root? This would mean that our model should be something like

$$\Phi(L)\,(1-L^{12})\,X_t=\Theta(L)\,\varepsilon_t$$

If we forget about the autoregressive and the moving average components, we can estimate

$$X_t=\mu+\phi\,(X_{t-12}-\mu)+\varepsilon_t$$

If there is a seasonal unit root then $\phi$ should be close to 1. Somehow.

> arima(tsm,order=c(0,0,0),seasonal=list(order=c(1,0,0),period=12))

Call:
arima(x = tsm, order = c(0, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12))

Coefficients:
        sar1  intercept
      0.9702     6.4566
s.e.  0.0071     2.1515

It is not far away from 1. Actually, it cannot be too close to 1. If it was, then we would get an error message…

To illustrate some interesting models, let us consider also quarterly temperatures,

> N=cbind(apply(montreal[,2:4],1,sum),apply(montreal[,5:7],1,sum),apply(montreal[,8:10],1,sum),apply(montreal[,11:13],1,sum))
> X=as.numeric(t(N))
> tsq=ts(X,start=1948,freq=4)
> persp(1948:2013,1:4,N,theta=-50,col="yellow",shade=TRUE,
+ xlab="Year",ylab="Quarter",zlab="Temperature",ticktype="detailed")

(again, the aim is just to be able to write down some equations, if necessary)

Why not consider a VAR(1) model on the quarterly temperatures? Something like

$$\boldsymbol{Y}_\tau=A\,\boldsymbol{Y}_{\tau-1}+\boldsymbol{\varepsilon}_\tau$$

i.e.

$$\begin{pmatrix}Y_{1,\tau}\\Y_{2,\tau}\\Y_{3,\tau}\\Y_{4,\tau}\end{pmatrix}=A\begin{pmatrix}Y_{1,\tau-1}\\Y_{2,\tau-1}\\Y_{3,\tau-1}\\Y_{4,\tau-1}\end{pmatrix}+\boldsymbol{\varepsilon}_\tau$$

where $\boldsymbol{Y}_\tau$ is the vector of the four quarterly temperatures of year $\tau$, and $A$ is some $4\times 4$ matrix. This model can easily be estimated,

> library(vars)
> df=data.frame(N)
> names(df)=paste("y",1:4,sep="")
> model=VAR(df)
> model

VAR Estimation Results:
======================= 

Estimated coefficients for equation y1: 
======================================= 
Call:
y1 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

       y1.l1        y2.l1        y3.l1        y4.l1        const 
 -0.13943065   0.21451118   0.08921237   0.30362065 -34.74793931 

Estimated coefficients for equation y2: 
======================================= 
Call:
y2 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

      y1.l1       y2.l1       y3.l1       y4.l1       const 
 0.02520938  0.05288958 -0.13277377  0.05134148 40.68955266 

Estimated coefficients for equation y3: 
======================================= 
Call:
y3 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

      y1.l1       y2.l1       y3.l1       y4.l1       const 
 0.07740824 -0.21142726  0.11180518  0.12963931 56.81087283 

Estimated coefficients for equation y4: 
======================================= 
Call:
y4 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

      y1.l1       y2.l1       y3.l1       y4.l1       const 
 0.18842863 -0.31964579  0.25099508 -0.04452577  5.73228873

and the matrix $A$ is here

> A=rbind(
+ coefficients(model$varresult$y1)[1:4],
+ coefficients(model$varresult$y2)[1:4],
+ coefficients(model$varresult$y3)[1:4],
+ coefficients(model$varresult$y4)[1:4])
> A
           y1.l1       y2.l1       y3.l1       y4.l1
[1,] -0.13943065  0.21451118  0.08921237  0.30362065
[2,]  0.02520938  0.05288958 -0.13277377  0.05134148
[3,]  0.07740824 -0.21142726  0.11180518  0.12963931
[4,]  0.18842863 -0.31964579  0.25099508 -0.04452577

Since the stationarity of this multiple time series is closely related to the eigenvalues of this matrix, let us look at them,

> eigen(A)$values
[1]  0.35834830 -0.32824657 -0.14042175  0.09105836
> Mod(eigen(A)$values)
[1] 0.35834830 0.32824657 0.14042175 0.09105836

So it looks like there is no stationarity issue, here. A restricted model is the periodic autoregressive model, the so-called PAR(1) model, discussed by Paap and Franses,

$$y_t=\alpha_{s(t)}\,y_{t-1}+\varepsilon_t$$

where $s(t)\in\{1,2,3,4\}$ denotes the quarter of date $t$. Stacked per year, this is

$$\Phi_0\,\boldsymbol{Y}_\tau=\Phi_1\,\boldsymbol{Y}_{\tau-1}+\boldsymbol{\varepsilon}_\tau$$

where

$$\Phi_0=\begin{pmatrix}1&0&0&0\\-\alpha_2&1&0&0\\0&-\alpha_3&1&0\\0&0&-\alpha_4&1\end{pmatrix}$$

and

$$\Phi_1=\begin{pmatrix}0&0&0&\alpha_1\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}$$

Keep in mind that this is a VAR(1) model, since

$$\boldsymbol{Y}_\tau=\Phi_0^{-1}\Phi_1\,\boldsymbol{Y}_{\tau-1}+\Phi_0^{-1}\boldsymbol{\varepsilon}_\tau$$

This model can be estimated using a specific package (one can also look at the vignette, to get a better understanding of the syntax)

> library(partsm)
> detcomp <- list(regular=c(0,0,0), seasonal=c(1,0), regvar=0)
> model=fit.ar.par(wts=tsq, detcomp=detcomp, type="PAR", p=1)
> PAR.MVrepr(model)
----
    Multivariate representation of a PAR model.

  Phi0:

  1.000  0.000  0.000 0
 -0.242  1.000  0.000 0
  0.000 -0.261  1.000 0
  0.000  0.000 -0.492 1

  Phi1:

 0 0 0 0.314
 0 0 0 0.000
 0 0 0 0.000
 0 0 0 0.000

  Eigen values of Gamma = Phi0^{-1} %*% Phi1:
0.01 0 0 0 

  Time varing accumulation of shocks:

 0.010 0.040 0.155 0.314
 0.002 0.010 0.037 0.076
 0.001 0.003 0.010 0.020
 0.000 0.001 0.005 0.010

Here, the characteristic equation is

$$\det(\Phi_0-\Phi_1\,z)=1-\alpha_1\alpha_2\alpha_3\alpha_4\,z=0$$

so there is a (seasonal) unit root if

$$\alpha_1\alpha_2\alpha_3\alpha_4=1$$

Which is clearly not the case here, as checked below.
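
A quick numerical check (reading the $\alpha_s$ coefficients off the Phi0 and Phi1 matrices above): the product should match the nonzero eigenvalue of $\Gamma$, namely 0.01,

> prod(c(0.242,0.261,0.492,0.314))
[1] 0.009757771

It is also possible to perform the Canova-Hansen test,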

> CH.test(tsq)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/2 , pi , 

  L-statistic: 1.122  
  Lag truncation parameter: 5 

  Critical values:

  0.10 0.05 0.025 0.01
 0.846 1.01  1.16 1.35

The idea is that the polynomial $1-z^4$ has four roots,

$$z\in\{1,\,-1,\,i,\,-i\}$$

since

$$1-z^4=(1-z)(1+z)(1+z^2)$$

If we get back to monthly data, $1-z^{12}$ has twelve roots,

$$z=e^{2\pi i k/12},\quad k=0,1,\dots,11$$

each of them having different interpretations.

Here we can have 1 cycle per year (on 12 months), 2 cycles per year (on 6 months), 3 cycles per year (on 4 months), 4 cycles per year (on 3 months), 5 cycles per year (on 2.4 months), even 6 cycles per year (on 2 months). This will depend on the argument of the root, with respectively

$$\pm\frac{\pi}{6},\ \pm\frac{\pi}{3},\ \pm\frac{\pi}{2},\ \pm\frac{2\pi}{3},\ \pm\frac{5\pi}{6},\ \pi$$
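
Those arguments can be recovered numerically; a small sketch, computing the twelve roots of $z^{12}-1$ and their arguments,

> z=polyroot(c(-1,rep(0,11),1))   # coefficients of -1+z^12, in increasing order
> round(sort(Arg(z)),3)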

The output of the test is here,

> CH.test(tsm)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , 

  L-statistic: 1.1962 
  Lag truncation parameter: 20 

  Critical values:

  0.10 0.05 0.025 0.01
  2.49 2.75  2.99 3.27

We can also use the nsdiffs function from the forecast package, with the Canova-Hansen test,

> library(forecast)
> nsdiffs(tsm, test="ch")
[1] 0

where the output 1 means that there is a seasonal unit root, and 0 that there is no seasonal unit root. Simple to read, isn't it? If we consider the periodic autoregressive model on the monthly data, the output is

> model=fit.ar.par(wts=tsm, detcomp=detcomp, type="PAR", p=1)
> model
----
  PAR model of order 1.

  y_t = alpha_{1,s}*y_{t-1} + alpha_{2,s}*y_{t-2} + ... + alpha_{p,s}*y_{t-p} + coeffs*detcomp + epsilon_t, for s=1,...,12
----
  Autoregressive coefficients: 

          s=1  s=2  s=3  s=4  s=5  s=6  s=7  s=8  s=9 s=10 s=11 s=12
alpha_1s 0.15 0.15 0.37 0.33 0.71 1.00 0.33 0.38 0.01 0.91 0.15 0.17

So, whatever the test, we always reject the assumption that there is a seasonal unit root. Which does not mean that we cannot have a (strong) cycle! Actually, the series is almost periodic. But there is no unit root! So all of this makes sense (I would hardly believe that there might be unit roots – seasonal, or not – in temperature series).

To see how those tests behave on series where we know what is going on, let us generate two artificial series: the first one is periodic (with a very, very small noise, just to avoid problems of non-definite matrices), and the second one is clearly integrated,

> Xp1=Xp2=X=as.numeric(t(M))
> for(t in 13:length(X)){
+ Xp1[t]=Xp1[t-12]
+ Xp2[t]=Xp2[t-12]+rnorm(1,0,2)
+ }
> Xp1=Xp1+rnorm(length(Xp1),0,.01)
> tsp1=ts(Xp1,start=1948,freq=12)
> tsp2=ts(Xp2,start=1948,freq=12)
> par(mfrow=c(2,1))
> plot(tsp1)
> plot(tsp2)

> par(mfrow=c(1,2))
> bb3D(tsp1)
> bb3D(tsp2)

If we quickly look at those series, I would say that the first one has no unit root – even if it is not stationary, it is because the series is periodic – while there is (are?) unit root(s) for the second one. If we look at the Canova-Hansen test, we get

> CH.test(tsp1)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , 

  L-statistic: 2.2314 
  Lag truncation parameter: 20 

  Critical values:

  0.10 0.05 0.025 0.01
  2.49 2.75  2.99 3.27

> CH.test(tsp2)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , 

  L-statistic: 5.4648 
  Lag truncation parameter: 20 

  Critical values:

  0.10 0.05 0.025 0.01
  2.49 2.75  2.99 3.27

Here, stationarity is rejected for the second series, but not for the first one, as expected. We can also use the nsdiffs function,

> nsdiffs(tsp1, test="ch")
[1] 0

> nsdiffs(tsp2, test="ch")
[1] 1

Here we have the same conclusion: the first series does not have a seasonal unit root, but the second one has. But be careful: with the Osborn-Chui-Smith-Birchenhall test,

> nsdiffs(tsp1, test="ocsb")
[1] 1
> nsdiffs(tsp2, test="ocsb")
[1] 1

both series are found to have a seasonal unit root, the same way (but I don't think I will have enough time to introduce long range dependence this session, unfortunately).