
Non-Parametric Tests and Simulations

In the last statistics lecture, we presented goodness-of-fit tests. We illustrated those tests with the heights of a set of individuals (already used to present distribution fitting, and density estimation), denoted $\boldsymbol{x}=\{x_1,\cdots,x_n\}$.

> Davis=read.table(
+ "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
> Davis[12,c(2,3)]=Davis[12,c(3,2)]
> X=Davis$height
> n=length(X)

Let $(x_{i:n})$ denote the order statistics, in the sense that

$$x_{1:n}\leq x_{2:n}\leq\cdots\leq x_{n-1:n}\leq x_{n:n}$$

Among the graphical tools, we saw the PP plot (probability-probability plot) and the QQ plot (quantile-quantile plot). The code to create a PP plot can be the following

> PP=function(Y,F=pnorm){
+   n=length(Y)
+   x=F(sort(Y))
+   y=seq(1/n/2,1-1/n/2,by=1/n)
+   return(data.frame(x=x,y=y))
+ }

which plots (up to a small detail) the scatterplot

$$\left\{F_0(x_{i:n}),\frac{i}{n}\right\}$$
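
For instance, on the height data, a quick sketch (testing a Gaussian distribution with the sample mean and standard deviation; the reference diagonal is my own addition):

> pp=PP(X,F=function(x) pnorm(x,mean(X),sd(X)))
> plot(pp$x,pp$y,xlab="theoretical probabilities",ylab="empirical probabilities")
> abline(a=0,b=1,lty=2,col="red")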

and the one for the QQ plot would be

> QQ=function(Y,Q=qnorm){
+   n=length(Y)
+   x=Q(seq(1/n/2,1-1/n/2,by=1/n))
+   y=sort(Y)
+   return(data.frame(x=x,y=y))
+ }

which plots (again, up to a small detail) the scatterplot

$$\left\{F_0^{-1}\left(\frac{i}{n}\right),x_{i:n}\right\}$$

where $F_0$ is the distribution we wish to test, in the sense that we test $H_0:F=F_0$ against the alternative $H_1:F\neq F_0$.
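
Similarly, a sketch of the QQ plot on the same data (same Gaussian assumption as above):

> qq=QQ(X,Q=function(p) qnorm(p,mean(X),sd(X)))
> plot(qq$x,qq$y,xlab="theoretical quantiles",ylab="empirical quantiles")
> abline(a=0,b=1,lty=2,col="red")

Under $H_0$, the points should be close to the diagonal.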


PP Plot, QQ Plot and Kernel Density Estimation

Here is, again, the code to import the dataset with the heights (and weights) of a set of 200 individuals.

Davis=read.table(
"http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]
X=Davis$height
n=length(X)

The first thing we saw last week is the PP plot,

$$\left\{\frac{i}{n},F_\star(X_{i:n})\right\}$$

Here, we will 'test' a normal distribution (with the same mean and the same variance as our sample... let us keep things simple),

PP=data.frame(empirique=(1:n)/n,
theorique=pnorm(sort(X),mean(X),sd(X)))
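
A minimal sketch to visualise that PP plot (the reference diagonal is my own addition):

plot(PP$empirique,PP$theorique,
xlab="empirical probabilities",ylab="theoretical probabilities")
abline(a=0,b=1,lty=2,col="red")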


Temperature Series as Random Walks

Last year, I mentioned in a post that unit-root tests are dangerous, because they might lead us to strange models. For instance, in that post, I obtained that the temperature observed in January 2013, in Montréal, might be considered as a random walk (or at least an integrated process). The code to extract the data has changed (since the website has been updated), so here, we use

library(RCurl)
library(XML)
options(RCurlOptions = list(useragent = "R"))
HEURE=0:23
# scrape the hourly temperatures of a given day from the Environment Canada website
extracttemp=function(Y,M,D){
url=paste(
"http://climate.weather.gc.ca/climateData/hourlydata_e.html?timeframe=1&Prov=QC&StationID=5415&Year=",Y,"&Month=",
M,"&Day=",D,sep="")
wp <- getURLContent(url)
doc <- htmlParse(wp, asText = TRUE) 
docName(doc) <- url
tmp <- readHTMLTable(doc)
# the second table on the page contains the 24 hourly temperatures
basejour=data.frame(Year=Y,Month=M,Day=D,
Hour=HEURE,Temp=as.numeric(as.character(data.frame(tmp[2])[,2]))[2:25])
return(basejour)}
B=NULL
for(y in 1955:2013){
for(d in 1:31){
B=rbind(B,extracttemp(y,1,d))}}
B$X=(B$Day-1)*24+B$Hour  # hourly index within January (my reconstruction, used in the plots below)

Here are all the temperatures observed, with 2013 highlighted in red,

plot(B$X,B$Temp,cex=.5,col="light blue",xlab="January, in Montreal",ylab="Temperature (Celsius)")
I=which(B$Year==2013)
lines(B$X[I],B$Temp[I],col="red")

In the previous post, only one test was used, and only one year was considered. I was wondering whether this behavior was specific to the 2013 temperatures, and how the other tests (also mentioned in a previous post) would perform.

I need a small function first, because those tests cannot be used if there is even a single missing value. So missing observations are replaced by the value observed one hour earlier (just to make sure the tests can run),

# forward-fill missing values with the previous hour's observation
# (assumes the first observation of the series is not missing)
correcty=function(Y){
I=which(is.na(Y))
	if(length(I)==0){Yc=Y}
	if(length(I)>0){Yc=Y;for(i in I) Yc[i]=Yc[i-1]}	
return(Yc)
}
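
A quick check of that function on a toy vector (my own illustration):

correcty(c(-10,NA,NA,-8,NA,-7))
# -10 -10 -10  -8  -8  -7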

Now, we can compute the p-values, for all the years, and for the three different tests (keeping in mind that two of them, ADF and PP, test the null hypothesis that the series is non-stationary, while KPSS tests the null hypothesis that the series is stationary)

DF=matrix(NA,2013-1954,3)
library(tseries)  # pp.test, kpss.test and adf.test are in tseries (not urca)
for(y in 1955:2013){
Z=B$Temp[which(B$Year==y)]
	Zc=correcty(Z)
	DF[y-1954,2]=as.numeric(pp.test(Zc)$p.value)	
	DF[y-1954,1]=as.numeric(kpss.test(Zc)$p.value)	
	DF[y-1954,3]=as.numeric(adf.test(Zc)$p.value)
}

Visually, if red means stationary, and blue means non-stationary, we get

DFP=DF
DFP[,1]=DF[,1]<.05
DFP[,2:3]=DF[,2:3]>.05
library(RColorBrewer)
CL=brewer.pal(6, "RdBu")
plot(0:1,0:1,xlim=c(1950,2015),ylim=c(0,3),axes=FALSE,xlab="",ylab="")
axis(1)
text(1952,.5,"KPSS")
text(1952,1.5,"PP")
text(1952,2.5,"ADF")
for(y in 1955:2013){
for(i in 1:3){
polygon(y+c(-1,-1,1,1)/2.2,i-.5+c(-1,1,1,-1)/2.2,col=CL[1+(DFP[y-1954,i]==1)*5],border=NA)}}

Quite frequently, we conclude that the temperature is a random walk, which does not make sense from a physical point of view. But again, this might come from the fact that temperatures are stationary, with some fractional (long-memory) behavior (as suggested in the previous post).

Unit Root Tests

This week, in the MAT8181 Time Series course, we discussed unit root tests. According to Wold's theorem, if $(Y_t)$ is (weakly) stationary, then

$$Y_{t}=\sum_{j=0}^{\infty}\psi_{j}\varepsilon_{t-j}+\xi_{t}$$

where $(\varepsilon_{t})$ is the innovation process, and where $(\xi_{t})$ is some deterministic series (just to get a result as general as possible). Observe that

$$\sum_{j=0}^{\infty}|\psi_{j}|^{2}<\infty$$

as discussed in a previous post. To go one step further, there is also the Beveridge-Nelson decomposition: a process integrated of order one, defined through

$$\Delta Y_{t}=(1-L)Y_t=\sum_{j=0}^{\infty}\psi_{j}\varepsilon_{t-j}+\xi=\Psi(L)\varepsilon_{t}+\xi$$

can be represented as

a linear trend $+$ a random walk $+$ a stationary remaining term

i.e.

$$Y_{t}=\underbrace{Y_0+\xi t}_{\text{linear trend}}+\underbrace{\Psi(1)\sum_{i=1}^t\varepsilon_{i}}_{\text{random walk}}+\underbrace{\tilde\Psi(L)\varepsilon_0-\tilde\Psi(L)\varepsilon_t}_{\text{stationary term}}$$

where $\tilde\Psi(\cdot)$ is the polynomial with coefficients $\tilde\psi_j$, where

$$\tilde\psi_j=\sum_{i=j+1}^\infty\psi_i$$
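
As a quick numerical illustration (my own sketch, on an arbitrary ARMA(1,1) process), the $\psi_j$ weights can be obtained with ARMAtoMA, and the $\tilde\psi_j$ are simply their tail sums,

> psi=c(1,ARMAtoMA(ar=.6,ma=.3,lag.max=200))  # psi_0=1, then psi_1, psi_2, ...
> Psi1=sum(psi)                               # Psi(1), the loading of the random walk
> psitilde=rev(cumsum(rev(psi)))[-1]          # element j is psi_j+psi_{j+1}+..., i.e. psitilde contains psi~_0, psi~_1, ...
> head(psitilde)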

For unit-root tests, we will use various representations of the process. In order to illustrate the implementation of those tests, consider the following simulated random walk,

> set.seed(1)
> E=rnorm(240)
> X=cumsum(E)
> plot(X,type="l")

  • Dickey Fuller (standard)

Here, for the simple version of the Dickey-Fuller test, we assume that

$$Y_t=\alpha+\beta t+\varphi Y_{t-1}+\varepsilon_t$$

and we would like to test whether $\varphi=1$ (or not). We can rewrite that representation as

$$\Delta Y_t=\alpha+\beta t+[\varphi-1]Y_{t-1}+\varepsilon_t$$

so we simply have to test whether a regression coefficient is null (or not), which can be done with a Student t test. If we consider the previous model without the constant and the linear trend, we have to run the following regression

> lags=0
> z=diff(X)
> n=length(z)
> z.diff=embed(z, lags+1)[,1]
> z.lag.1=X[(lags+1):n]
> summary(lm(z.diff~0+z.lag.1 ))

Call:
lm(formula = z.diff ~ 0 + z.lag.1)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.84466 -0.55723 -0.00494  0.63816  2.54352 

Coefficients:
         Estimate Std. Error t value Pr(>|t|)
z.lag.1 -0.005609   0.007319  -0.766    0.444

Residual standard error: 0.963 on 238 degrees of freedom
Multiple R-squared:  0.002461,	Adjusted R-squared:  -0.00173 
F-statistic: 0.5873 on 1 and 238 DF,  p-value: 0.4442

Our testing procedure will be based on that Student t value,

> summary(lm(z.diff~0+z.lag.1 ))$coefficients[1,3]
[1] -0.7663308

which is exactly the value computed using

> library(urca)
> df=ur.df(X,type="none",lags=0)
> df

############################################################### 
# Augmented Dickey-Fuller Test Unit Root / Cointegration Test # 
############################################################### 

The value of the test statistic is: -0.7663

The interpretation of this value requires critical values (at the 99%, 95% and 90% levels), which look close to Gaussian quantiles,

> qnorm(c(.01,.05,.1)/2)
[1] -2.575829 -1.959964 -1.644854

even though, under the unit-root null, the statistic does not follow a Gaussian distribution (the Dickey-Fuller critical values just happen to be numerically close in this no-constant case). If the statistic is above those (negative) values, we cannot reject the hypothesis that $\varphi-1=0$, so we might conclude that there is a unit root, and that the series is not stationary. Actually, the proper critical values are obtained using

> summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression none 

Call:
lm(formula = z.diff ~ z.lag.1 - 1)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.84466 -0.55723 -0.00494  0.63816  2.54352 

Coefficients:
         Estimate Std. Error t value Pr(>|t|)
z.lag.1 -0.005609   0.007319  -0.766    0.444

Residual standard error: 0.963 on 238 degrees of freedom
Multiple R-squared:  0.002461,	Adjusted R-squared:  -0.00173 
F-statistic: 0.5873 on 1 and 238 DF,  p-value: 0.4442

Value of test-statistic is: -0.7663 

Critical values for test statistics: 
      1pct  5pct 10pct
tau1 -2.58 -1.95 -1.62
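
To see where those tabulated tau1 values come from, here is a small Monte Carlo sketch (my own illustration, with an arbitrary number of replications): simulate random walks under the null, compute the t statistic of the regression above, and look at its empirical quantiles,

> set.seed(123)
> ns=5000
> tstat=rep(NA,ns)
> for(s in 1:ns){
+ W=cumsum(rnorm(240))    # a random walk, i.e. a series with a unit root
+ dW=diff(W)
+ tstat[s]=summary(lm(dW~0+W[1:239]))$coefficients[1,3]
+ }
> quantile(tstat,c(.01,.05,.1))

The three quantiles should be close to the tabulated -2.58, -1.95 and -1.62.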

The problem with R is that there are several packages that can be used for unit root tests. Just to mention another one,

> library(tseries)
> adf.test(X,k=0)

	Augmented Dickey-Fuller Test

data:  X
Dickey-Fuller = -2.0433, Lag order = 0, p-value = 0.5576
alternative hypothesis: stationary

Here again, we have a test where the null hypothesis is that there is a unit root. But the p-value is quite different. What is odd is that we have

> 1-adf.test(X,k=0)$p.value
[1] 0.4423705
> df@testreg$coefficients[4]
[1] 0.4442389

(but I think it is a coincidence).

  • Augmented Dickey Fuller

It is possible to add some lags in the regression. For instance, we can consider

$$\Delta Y_t=\alpha+\beta t+[\varphi-1]Y_{t-1}+\psi\Delta Y_{t-1}+\varepsilon_t$$

Again, we have to check whether one coefficient is null or not, and this can be done using a Student t test.

> lags=1
> z=diff(X)
> n=length(z)
> z.diff=embed(z, lags+1)[,1]
> z.lag.1=X[(lags+1):n]
> k=lags+1
> z.diff.lag = embed(z, lags+1)[, 2:k]
> summary(lm(z.diff~0+z.lag.1+z.diff.lag ))

Call:
lm(formula = z.diff ~ 0 + z.lag.1 + z.diff.lag)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.87492 -0.53977 -0.00688  0.64481  2.47556 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
z.lag.1    -0.005394   0.007361  -0.733    0.464
z.diff.lag -0.028972   0.065113  -0.445    0.657

Residual standard error: 0.9666 on 236 degrees of freedom
Multiple R-squared:  0.003292,	Adjusted R-squared:  -0.005155 
F-statistic: 0.3898 on 2 and 236 DF,  p-value: 0.6777

> summary(lm(z.diff~0+z.lag.1+z.diff.lag ))$coefficients[1,3]
[1] -0.7328138

This value is the one obtained using

> df=ur.df(X,type="none",lags=1)
> summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression none 

Call:
lm(formula = z.diff ~ z.lag.1 - 1 + z.diff.lag)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.87492 -0.53977 -0.00688  0.64481  2.47556 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
z.lag.1    -0.005394   0.007361  -0.733    0.464
z.diff.lag -0.028972   0.065113  -0.445    0.657

Residual standard error: 0.9666 on 236 degrees of freedom
Multiple R-squared:  0.003292,	Adjusted R-squared:  -0.005155 
F-statistic: 0.3898 on 2 and 236 DF,  p-value: 0.6777

Value of test-statistic is: -0.7328 

Critical values for test statistics: 
      1pct  5pct 10pct
tau1 -2.58 -1.95 -1.62

And again, other packages can be used:

> adf.test(X,k=1)

	Augmented Dickey-Fuller Test

data:  X
Dickey-Fuller = -1.9828, Lag order = 1, p-value = 0.5831
alternative hypothesis: stationary

Fortunately, the conclusion is the same (we still cannot reject the unit-root hypothesis, so we do not conclude that the series is stationary, though I am not sure how the p-value is computed).

  • Augmented Dickey Fuller with trend and drift

So far, we have not included a drift in our model. But this is simple to do (this is the 'drift' version of the previous procedure): we just have to include a constant in the regression,

> summary(lm(z.diff~1+z.lag.1+z.diff.lag ))

Call:
lm(formula = z.diff ~ 1 + z.lag.1 + z.diff.lag)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.91930 -0.56731 -0.00548  0.62932  2.45178 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)  
(Intercept)  0.29175    0.13153   2.218   0.0275 *
z.lag.1     -0.03559    0.01545  -2.304   0.0221 *
z.diff.lag  -0.01976    0.06471  -0.305   0.7603  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9586 on 235 degrees of freedom
Multiple R-squared:  0.02313,	Adjusted R-squared:  0.01482 
F-statistic: 2.782 on 2 and 235 DF,  p-value: 0.06393

The statistics of interest are the t value of the lagged level, and an F statistic from an analysis of variance, where this model is compared with the one without the lagged level and without the drift,

> summary(lm(z.diff~1+z.lag.1+z.diff.lag ))$coefficients[2,3]
[1] -2.303948
> anova(lm(z.diff ~ z.lag.1 + 1 + z.diff.lag),lm(z.diff ~ 0 + z.diff.lag))$F[2]
[1] 2.732912

Those two values are the ones obtained also with

> df=ur.df(X,type="drift",lags=1)
> summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression drift 

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.91930 -0.56731 -0.00548  0.62932  2.45178 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)  
(Intercept)  0.29175    0.13153   2.218   0.0275 *
z.lag.1     -0.03559    0.01545  -2.304   0.0221 *
z.diff.lag  -0.01976    0.06471  -0.305   0.7603  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9586 on 235 degrees of freedom
Multiple R-squared:  0.02313,	Adjusted R-squared:  0.01482 
F-statistic: 2.782 on 2 and 235 DF,  p-value: 0.06393

Value of test-statistic is: -2.3039 2.7329 

Critical values for test statistics: 
      1pct  5pct 10pct
tau2 -3.46 -2.88 -2.57
phi1  6.52  4.63  3.81

And we can also include a linear trend,

> temps=(lags+1):n
> summary(lm(z.diff~1+temps+z.lag.1+z.diff.lag ))

Call:
lm(formula = z.diff ~ 1 + temps + z.lag.1 + z.diff.lag)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.87727 -0.58802 -0.00175  0.60359  2.47789 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)  
(Intercept)  0.3227245  0.1502083   2.149   0.0327 *
temps       -0.0004194  0.0009767  -0.429   0.6680  
z.lag.1     -0.0329780  0.0166319  -1.983   0.0486 *
z.diff.lag  -0.0230547  0.0652767  -0.353   0.7243  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9603 on 234 degrees of freedom
Multiple R-squared:  0.0239,	Adjusted R-squared:  0.01139 
F-statistic:  1.91 on 3 and 234 DF,  p-value: 0.1287

> summary(lm(z.diff~1+temps+z.lag.1+z.diff.lag ))$coefficients[3,3]
[1] -1.98282
> anova(lm(z.diff ~ z.lag.1 + 1 + temps+ z.diff.lag),lm(z.diff ~ 1+ z.diff.lag))$F[2]
[1] 2.737086

while the R function returns

> df=ur.df(X,type="trend",lags=1)
> summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression trend 

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + tt + z.diff.lag)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.87727 -0.58802 -0.00175  0.60359  2.47789 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)  
(Intercept)  0.3227245  0.1502083   2.149   0.0327 *
z.lag.1     -0.0329780  0.0166319  -1.983   0.0486 *
tt          -0.0004194  0.0009767  -0.429   0.6680  
z.diff.lag  -0.0230547  0.0652767  -0.353   0.7243  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9603 on 234 degrees of freedom
Multiple R-squared:  0.0239,	Adjusted R-squared:  0.01139 
F-statistic:  1.91 on 3 and 234 DF,  p-value: 0.1287

Value of test-statistic is: -1.9828 1.8771 2.7371 

Critical values for test statistics: 
      1pct  5pct 10pct
tau3 -3.99 -3.43 -3.13
phi2  6.22  4.75  4.07
phi3  8.43  6.49  5.47

  • KPSS test

Here, in the KPSS testing procedure, two models can be considered: with a drift, or with a linear trend. And here, the null hypothesis is that the series is stationary.

With a drift, the code is

> summary(ur.kpss(X,type="mu"))

####################### 
# KPSS Unit Root Test # 
####################### 

Test is of type: mu with 4 lags. 

Value of test-statistic is: 0.972 

Critical value for a significance level of: 
                10pct  5pct 2.5pct  1pct
critical values 0.347 0.463  0.574 0.739

while, in the case where there is a trend, it will be

> summary(ur.kpss(X,type="tau"))

####################### 
# KPSS Unit Root Test # 
####################### 

Test is of type: tau with 4 lags. 

Value of test-statistic is: 0.5057 

Critical value for a significance level of: 
                10pct  5pct 2.5pct  1pct
critical values 0.119 0.146  0.176 0.216

Once again, it is possible to use another package to run the same test (but, again, with a different output)

> kpss.test(X,"Level")

	KPSS Test for Level Stationarity

data:  X
KPSS Level = 1.1997, Truncation lag parameter = 3, p-value = 0.01

> kpss.test(X,"Trend")

	KPSS Test for Trend Stationarity

data:  X
KPSS Trend = 0.6234, Truncation lag parameter = 3, p-value = 0.01

At least there is some consistency, since we keep rejecting the stationarity assumption for that series.
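
To make the connection with the underlying formula explicit, here is a hand computation of the KPSS 'mu' statistic, as a sketch with zero lags (ur.kpss estimates the long-run variance with 4 lags by default, hence a slightly different value):

> e=X-mean(X)     # residuals from a regression on a constant only
> S=cumsum(e)     # partial sums of the residuals
> sum(S^2)/(length(X)^2*mean(e^2))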

  • Phillips-Perron test

The Phillips-Perron test is based on the ADF procedure, with a non-parametric correction of the statistic for serial correlation. The code is here

> PP.test(X)

	Phillips-Perron Unit Root Test

data:  X
Dickey-Fuller = -2.0116, Truncation lag parameter = 4, p-value = 0.571

with again, a possible alternative with the other package

> pp.test(X)

	Phillips-Perron Unit Root Test

data:  X
Dickey-Fuller Z(alpha) = -7.7345, Truncation lag parameter = 4, p-value
= 0.6757
alternative hypothesis: stationary

  •  Comparison

I will not spend more time comparing the different R packages implementing those tests. Let us instead spend some additional time on a quick comparison of the three procedures. Let us generate some autoregressive processes, with more or less autocorrelation, as well as some random walks, and let us see how those tests perform:

> n=100
> AR=seq(1,.7,by=-.01)
> P=matrix(NA,3,length(AR))
> M1=matrix(NA,1000,length(AR))
> M2=matrix(NA,1000,length(AR))
> M3=matrix(NA,1000,length(AR))
> library(tseries)

> for(i in 1:length(AR)){
+ for(s in 1:1000){
+ if(i==1) X=cumsum(rnorm(n))
+ if(i!=1) X=arima.sim(n=n,list(ar=AR[i]))
+ M2[s,i]=as.numeric(pp.test(X)$p.value)
+ M1[s,i]=as.numeric(kpss.test(X)$p.value)
+ M3[s,i]=as.numeric(adf.test(X)$p.value)
+ }}

Here, we count the proportion of series classified as non-stationary: for PP and ADF, the proportion of p-values above 5% (no rejection of the unit root), and for KPSS, the proportion of p-values below 5% (rejection of stationarity),

> prop05=function(x) mean(x>.05)
> P[1,]=1-apply(M1,2,prop05)
> P[2,]=apply(M2,2,prop05)
> P[3,]=apply(M3,2,prop05)
> plot(AR,P[1,],type="l",col="red",ylim=c(0,1),ylab="proportion of non-stationary series",xlab="autocorrelation coefficient")
> lines(AR,P[2,],type="l",col="blue")
> lines(AR,P[3,],type="l",col="green")
> legend(.7,1,c("ADF","KPSS","PP"),col=c("green","red","blue"),lty=1,lwd=1)

 

We can see here how poorly the Dickey-Fuller test behaves, since at least 50% of our autoregressive processes are classified as non-stationary.

A Random Walk? What Else?

Consider the following time series,

What does it look like? I know, this is a stupid game, but I keep using it in my time series courses. It does look like a random walk, doesn't it? If we use the Phillips-Perron test, yes, it does,

> PP.test(x)

	Phillips-Perron Unit Root Test

data:  x 
Dickey-Fuller = -2.2421, Truncation lag parameter = 6, p-value = 0.4758

If we look at the autocorrelation function, we do observe some persistence,

> acf(x,100)

Perhaps this persistence can be related to long range dependence, or to some fractional random walk. A natural idea could be to estimate the Hurst parameter, using for instance Beran (1992)'s estimator – based on Whittle (1956) – where we assume that the autocorrelation function satisfies

$$\rho(h)\sim C\,h^{2H-2}$$

as $h\to\infty$, for some $H\in(1/2,1)$ (the so-called Hurst index). But here, we start to observe unexpected outputs,

> library(longmemo)
> (d  <- WhittleEst(x))
'WhittleEst' Whittle estimator for  fractional Gaussian noise ('fGn');	 call:
WhittleEst(x = x)
	  time series of length  n = 759.

H = 0.9899335
coefficients 'eta' =
    Estimate Std. Error z value   Pr(>|z|)
H 0.98993350 0.02468323 40.1055 < 2.22e-16
 <==> d := H - 1/2 = 0.49 (0.025)

 $ vcov       : num [1, 1] 0.000609
  ..- attr(*, "dimnames")=List of 2
  .. ..$ : chr "H"
  .. ..$ : chr "H"
 $ periodogr.x: num [1:379] 1479.3 1077.3 371.7 287.2 51.2 ...
 $ spec       : num [1:379] 62.5 31.7 21.3 16.1 12.9 ...

or more precisely an unexpected value for the Hurst parameter, which should lie in $(0,1)$,

> confint(d)
      2.5 %   97.5 %
H 0.9415553 1.038312

Oops, perhaps we did miss something, because it looks like there is extremely strong persistence in our time series,

> plot(d)

It is probably time to ask where I found that series… To be honest, I borrowed it from a great Canadian website, http://climate.weatheroffice.gc.ca/climateData/. For instance, if you want the temperature we experienced a few days ago, you can use

> Y=2013
> M=1
> D=25
> url=paste(
+ "http://climate.weatheroffice.gc.ca/climateData/hourlydata_e.html?timeframe=1&Prov=QC&StationID=5415&hlyRange=1953-01-01|2013-02-01&Year=",Y,"&Month=",M,"&Day=",D,sep="")
> page=scan(url,what="character")

Yes, that series is the temperature we experienced in Montréal last month (an hourly time series). On the graph below, you can actually compare it with the temperatures experienced in Januarys over the past 60 years,

So it is not that surprising to see long range dependence models appearing (I wrote a paper on precisely that topic a few years ago). What I found puzzling is that the persistence is large, extremely large. And the problem is that I do not see how we can explain the 'jumps' that we observe on that series. For instance, consider the behavior of the series while I was in Europe, before January 20th: within 3 days, the temperature went down from 0°C to -20°C, up from -20°C to 0°C, and then down again from 0°C to -20°C (a nice И, if we use Cyrillic letters). Or how can we explain the oscillating behavior observed the week after, where the temperature went up from -25°C to (almost) +10°C in a few days? Within 10 days, we also observed two 'jumps' (or 'crashes' if we want to use the terminology of financial time series) with a decrease of 25 degrees in less than 24 hours! Obviously, we need to find other classes of models to replicate the kind of behavior we observe on temperatures…

Non-Stationarity (Unit Root) Tests

To start modelling a time series, the first step is to determine whether it can be considered stationary, the alternative being that it is integrated. The most classical test is the Dickey-Fuller test. Quite generally, consider an autoregressive model of order 1,

$$Y_t=\alpha+\beta t+\varphi Y_{t-1}+\varepsilon_t$$

That process can possibly be rewritten as

$$\Delta Y_t=\alpha+\beta t+[\varphi-1]Y_{t-1}+\varepsilon_t$$

with $\gamma=\varphi-1$ denoting the coefficient of $Y_{t-1}$. The process is non-stationary as soon as $\varphi=1$, or equivalently $\gamma=0$. If there is no linear trend, testing $\varphi=1$ or $\gamma=0$ can be done with a Student t test. The idea of Dickey and Fuller is to generalize that t test, by adding this linear trend (to start with).

To illustrate the use of this test, let us start with a “simple” case: we will integrate (cumulate) a simulated white noise.

>  set.seed(1)
>  E=rnorm(240)
>  X=cumsum(E)
>  plot(X,type="l")

We can check whether there is a non-null constant (we then speak of a model with a drift), or even a linear trend,

The first step could be to try a model with neither.

> df=ur.df(X,type="none",lags=1)
> summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression none

Call:
lm(formula = z.diff ~ z.lag.1 - 1 + z.diff.lag)

Residuals:
Min       1Q   Median       3Q      Max
-2.87492 -0.53977 -0.00688  0.64481  2.47556

Coefficients:
Estimate Std. Error t value Pr(>|t|)
z.lag.1    -0.005394   0.007361  -0.733    0.464
z.diff.lag -0.028972   0.065113  -0.445    0.657

Residual standard error: 0.9666 on 236 degrees of freedom
Multiple R-squared: 0.003292,	Adjusted R-squared: -0.005155
F-statistic: 0.3898 on 2 and 236 DF,  p-value: 0.6777

Value of test-statistic is: -0.7328

Critical values for test statistics:
1pct  5pct 10pct
tau1 -2.58 -1.95 -1.62
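
Rather than reading the values off the printed table, both the statistic and the critical values can be extracted from the object returned by ur.df (a small sketch of my own, using the teststat and cval slots):

> df@teststat                         # the tau1 statistic, -0.7328
> df@cval                             # the matrix of critical values
> df@teststat[1] > df@cval[1,"5pct"]  # TRUE: above -1.95, we cannot reject the unit root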

The critical region of the test is here (at the 5% level) the set of values below -1.95. Since our test statistic is -0.73, we are in the acceptance region, so we retain the unit-root hypothesis, i.e. the series is not stationary. But perhaps we should simply take a constant into account?

> df=ur.df(X,type="drift",lags=1)
> summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression drift

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)

Residuals:
Min       1Q   Median       3Q      Max
-2.91930 -0.56731 -0.00548  0.62932  2.45178

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.29175    0.13153   2.218   0.0275 *
z.lag.1     -0.03559    0.01545  -2.304   0.0221 *
z.diff.lag  -0.01976    0.06471  -0.305   0.7603
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9586 on 235 degrees of freedom
Multiple R-squared: 0.02313,	Adjusted R-squared: 0.01482
F-statistic: 2.782 on 2 and 235 DF,  p-value: 0.06393

Value of test-statistic is: -2.3039 2.7329

Critical values for test statistics:
1pct  5pct 10pct
tau2 -3.46 -2.88 -2.57
phi1  6.52  4.63  3.81

Two test statistics are computed here: the first one for the unit root, the second one for the constant. We observe that the test statistic for the unit root test (-2.3) is above all the associated critical values (given on the first line). We again accept the unit-root hypothesis. But maybe the model was wrong, and there was actually a linear trend?

> df=ur.df(X,type="trend",lags=1)
> summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression trend

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + tt + z.diff.lag)

Residuals:
Min       1Q   Median       3Q      Max
-2.87727 -0.58802 -0.00175  0.60359  2.47789

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.3227245  0.1502083   2.149   0.0327 *
z.lag.1     -0.0329780  0.0166319  -1.983   0.0486 *
tt          -0.0004194  0.0009767  -0.429   0.6680
z.diff.lag  -0.0230547  0.0652767  -0.353   0.7243
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9603 on 234 degrees of freedom
Multiple R-squared: 0.0239,	Adjusted R-squared: 0.01139
F-statistic:  1.91 on 3 and 234 DF,  p-value: 0.1287

Value of test-statistic is: -1.9828 1.8771 2.7371

Critical values for test statistics:
1pct  5pct 10pct
tau3 -3.99 -3.43 -3.13
phi2  6.22  4.75  4.07
phi3  8.43  6.49  5.47

This time, three test statistics are computed: the first one for the unit root test, and the next two for the tests on the constant and on the trend (the slope of the linear fit). Here again, the test value (-1.98) exceeds the critical values: the p-value would be above 10%. So once again, we retain the unit-root hypothesis.

But the autoregressive model of order 1 might itself be wrong. Hence the augmented Dickey-Fuller test. The idea is to consider, much more generally, an autoregressive process of a higher order,

$$Y_t=\alpha+\beta t+\varphi_1 Y_{t-1}+\cdots+\varphi_p Y_{t-p}+\varepsilon_t$$

As before, this model can be rewritten in terms of the variations (we then speak of an error-correction model)

$$\Delta Y_t=\alpha+\beta t+\gamma Y_{t-1}+\sum_{j=1}^{p-1}\psi_j\Delta Y_{t-j}+\varepsilon_t$$

where the new coefficients are obtained through recurrence relations, namely $\gamma=\varphi_1+\cdots+\varphi_p-1$ and $\psi_j=-(\varphi_{j+1}+\cdots+\varphi_p)$. So, once again, we end up testing $\varphi_1+\cdots+\varphi_p=1$, i.e. $\gamma=0$ (a quick numerical check of this rewriting is sketched below). And here again, we can choose what to do with the trend.
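
As a sanity check (my own sketch, on an arbitrary stationary AR(2) process), the regression on the differences should recover $\gamma=\varphi_1+\varphi_2-1=-0.2$ and $\psi_1=-\varphi_2=-0.3$,

> set.seed(1)
> phi=c(.5,.3)
> Y=arima.sim(n=5000,list(ar=phi))
> n=length(Y)
> dY=diff(Y)
> coef(lm(dY[2:(n-1)]~0+Y[2:(n-1)]+dY[1:(n-2)]))  # close to -0.2 and -0.3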
If we start by assuming that there is no trend,

> df=ur.df(X,type="none",lags=2)
> summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression none

Call:
lm(formula = z.diff ~ z.lag.1 - 1 + z.diff.lag)

Residuals:
Min       1Q   Median       3Q      Max
-2.86738 -0.53887 -0.02009  0.67058  2.45970

Coefficients:
Estimate Std. Error t value Pr(>|t|)
z.lag.1     -0.005168   0.007389  -0.699    0.485
z.diff.lag1 -0.029619   0.065289  -0.454    0.650
z.diff.lag2 -0.037856   0.065436  -0.579    0.563

Residual standard error: 0.9685 on 234 degrees of freedom
Multiple R-squared: 0.004704,	Adjusted R-squared: -0.008056
F-statistic: 0.3687 on 3 and 234 DF,  p-value: 0.7757

Value of test-statistic is: -0.6994

Critical values for test statistics:
1pct  5pct 10pct
tau1 -2.58 -1.95 -1.62

the value of the test statistic again exceeds the critical values, i.e. the p-value is above 10%. If we add a constant,

> df=ur.df(X,type="drift",lags=2)
> summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression drift

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)

Residuals:
Min       1Q   Median       3Q      Max
-2.91609 -0.56702  0.01025  0.62109  2.43970

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.31105    0.13332   2.333   0.0205 *
z.lag.1     -0.03744    0.01565  -2.392   0.0175 *
z.diff.lag1 -0.01917    0.06483  -0.296   0.7677
z.diff.lag2 -0.02794    0.06496  -0.430   0.6676
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9594 on 233 degrees of freedom
Multiple R-squared: 0.02664,	Adjusted R-squared: 0.0141
F-statistic: 2.125 on 3 and 233 DF,  p-value: 0.09774

Value of test-statistic is: -2.3924 2.9709

Critical values for test statistics:
1pct  5pct 10pct
tau2 -3.46 -2.88 -2.57
phi1  6.52  4.63  3.81

we again validate the unit-root hypothesis, and the same happens with a linear trend,

> df=ur.df(X,type="trend",lags=2)
> summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression trend

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + tt + z.diff.lag)

Residuals:
Min       1Q   Median       3Q      Max
-2.85835 -0.58826 -0.00867  0.61407  2.47280

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.3530591  0.1522373   2.319   0.0213 *
z.lag.1     -0.0338831  0.0168523  -2.011   0.0455 *
tt          -0.0005674  0.0009879  -0.574   0.5663
z.diff.lag1 -0.0237595  0.0654158  -0.363   0.7168
z.diff.lag2 -0.0328215  0.0656102  -0.500   0.6174
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9608 on 232 degrees of freedom
Multiple R-squared: 0.02802,	Adjusted R-squared: 0.01126
F-statistic: 1.672 on 4 and 232 DF,  p-value: 0.1573

Value of test-statistic is: -2.0106 2.0849 3.0185

Critical values for test statistics:
1pct  5pct 10pct
tau3 -3.99 -3.43 -3.13
phi2  6.22  4.75  4.07
phi3  8.43  6.49  5.47

Note, however, that those three models suggest not to keep that many lags, which do not seem significant (the lag order can actually be selected automatically, as sketched below).
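
A minimal sketch: ur.df has a selectlags argument (either "AIC" or "BIC") that keeps, below a maximal lag order, the number of lagged differences minimizing the chosen criterion,

> df=ur.df(X,type="drift",lags=5,selectlags="AIC")
> summary(df)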
We can of course run those tests on real data, for instance the level of the Nile river,

> library(datasets)
> NILE=Nile

or U.S. interest rates,

> base=read.table(
+ "http://freakonometrics.free.fr/basedata.txt",
+ header=TRUE)
> Y=base[,"R"]
> Y=Y[(base$yr>=1960)&(base$yr<=1996.25)]
> TAUX=ts(Y,frequency = 4, start = c(1960, 1))

On the latter, for instance, we reject the stationarity assumption (the unit root cannot be rejected),

>  df=ur.df(y=TAUX,lags=3,type="drift")
>  summary(df)

############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression drift

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)

Residuals:
Min      1Q  Median      3Q     Max
-3.1982 -0.2947 -0.0629  0.3524  3.1899

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.40609    0.16706   2.431 0.016362 *
z.lag.1     -0.06339    0.02500  -2.535 0.012354 *
z.diff.lag1  0.32613    0.08145   4.004 0.000102 ***
z.diff.lag2 -0.31102    0.08027  -3.875 0.000165 ***
z.diff.lag3  0.28712    0.08103   3.543 0.000541 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7805 on 137 degrees of freedom
Multiple R-squared: 0.2051,	Adjusted R-squared: 0.1819
F-statistic: 8.839 on 4 and 137 DF,  p-value: 2.227e-06

Value of test-statistic is: -2.5354 3.2457

Critical values for test statistics:
1pct  5pct 10pct
tau2 -3.46 -2.88 -2.57
phi1  6.52  4.63  3.81

But all sorts of other (more robust) tests can be used. The best known are the Phillips-Perron test and the so-called KPSS test. For the latter, we have to specify whether we assume the series has a constant mean, or whether a trend should be taken into account. If we assume a non-null constant but no trend, we use

> summary(ur.kpss(X,type="mu"))

####################### 
# KPSS Unit Root Test # 
####################### 

Test is of type: mu with 4 lags.

Value of test-statistic is: 0.972

Critical value for a significance level of:
10pct  5pct 2.5pct  1pct
critical values 0.347 0.463  0.574 0.739

while, to take a trend into account, we use

> summary(ur.kpss(X,type="tau"))

####################### 
# KPSS Unit Root Test # 
####################### 

Test is of type: tau with 4 lags.

Value of test-statistic is: 0.5057

Critical value for a significance level of:
10pct  5pct 2.5pct  1pct
critical values 0.119 0.146  0.176 0.216

Behind this lies a Lagrange multiplier test. The null hypothesis corresponds to the absence of a unit root: the larger the test statistic, the further we move away from stationarity (the null hypothesis). With both tests, we again reject the stationarity hypothesis for our simulated random walk.

For the Phillips-Perron test, we have a Dickey-Fuller type test,

> PP.test(X)

Phillips-Perron Unit Root Test

data:  X
Dickey-Fuller = -2.0116, Trunc lag parameter = 4, p-value = 0.571

Here again, the p-value leads us to retain the non-stationarity hypothesis. For comparison, if we had worked with the differenced series, we would have accepted the stationarity hypothesis,

> PP.test(diff(X))

Phillips-Perron Unit Root Test

data:  diff(X)
Dickey-Fuller = -15.9522, Trunc lag parameter = 4, p-value = 0.01