Correlation with constraints on pairs

An interesting question was posted on http://math.stackexchange.com/726205/…: if one knows the covariances $\text{Cov}(X,Y)$ and $\text{Cov}(Y,Z)$, is it possible to infer $\text{Cov}(X,Z)$? I asked myself a question close to this one a few weeks ago (which I might also relate to a question I asked a long time ago, about possible correlations between three exchange rates on financial markets). More precisely, if one knows the correlations $r(X,Y)$ and $r(Y,Z)$, is it possible to say something about $r(X,Z)$?

I could not find much detail (but maybe I did not look hard enough in the existing literature). My strategy was to consider the correlation matrix, and to use the fact that a correlation matrix is a symmetric, positive semidefinite matrix (also called a Gramian matrix, i.e. a matrix with no negative eigenvalues). Given the two known correlations, we can consider, as a function of the third correlation, an indicator of whether the smallest eigenvalue is non-negative or not. Then, I look at the range of values of the third correlation for which this holds, to get the minimum and the maximum possible value (I guess we can prove that the set of possible values is an interval). The code to get that is simply

corrminmax=function(r1,r2){
  # h(r3) is TRUE when the 3x3 correlation matrix built from (r1,r2,r3)
  # has a (strictly) positive smallest eigenvalue
  h=function(r3){
    R=matrix(c(1,r1,r2,r1,1,r3,r2,r3,1),3,3)
    return(min(eigen(R)$values)>0)}
  # grid search over candidate values for the third correlation
  vc=seq(-1,+1,length=1e4+1)
  vr=Vectorize(h)(vc)
  indx=which(vr==TRUE)
  # return the smallest and largest admissible values
  return(vc[range(indx)])
}

Using this code, it is possible to compute the smallest possible correlation for the third pair, as well as the largest one,

x1=seq(-1,1,by=.1)
x2=seq(-1,1,by=.1)
W=M=matrix(NA,length(x1),length(x2))
for(i in 1:length(x1)){
for(j in 1:length(x2)){
C=corrminmax(x1[i],x2[j])
W[i,j]=C[1]
M[i,j]=C[2]
}}

If we plot those matrices, we get

par(mfrow=c(1,2))
persp(x1,x2,W,zlim=c(-1,1),col="green",
shade=TRUE,theta=-30)
persp(x1,x2,M,zlim=c(-1,1),col="green",
shade=TRUE,theta=-30)

and if we plot the difference, to visualize the width of that interval, we clearly see that the widest range is obtained when the two given correlations are null (in that case, any value is admissible for the third correlation).
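As a side note (this analytic check is not in the original post), for a $3\times 3$ correlation matrix, positive semi-definiteness is equivalent to the third correlation lying in
$$\left[r_1r_2-\sqrt{(1-r_1^2)(1-r_2^2)},\ r_1r_2+\sqrt{(1-r_1^2)(1-r_2^2)}\right],$$
so the grid search above can be cross-checked against the closed-form bounds (up to the grid resolution, and the strict inequality used in corrminmax),

corrminmax_exact=function(r1,r2){
  d=sqrt((1-r1^2)*(1-r2^2))
  c(r1*r2-d, r1*r2+d)      # lower and upper bound for the third correlation
}
corrminmax(.6,.8)          # grid search
corrminmax_exact(.6,.8)    # closed-form bounds, here 0 and 0.96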

Back to what is at stake at the North Pole

With almost two weeks' delay, I wanted to come back, with two maps, to the discussion that took place during the conference "who owns the North Pole". Philippe Reka (a.k.a. @visionscarto) presented two maps that made me think at length about the countries that could take an interest in the North Pole (rather than formally claim it).

My starting point was to consider that the nations that could take an interest in the Pole were the regions that have a foothold beyond the Arctic Circle. We joked about the fact that Canada would not be Canada without its cold winters, and about the cultural importance of this North. But Philippe managed to take the opposite view of these (rather conventional) thoughts, by presenting a first map that suggests looking beyond the Arctic Circle, and including second-tier countries. Among them, China, or Singapore.

The question to ask is simple: how does one connect Europe, or even the East Coast of the United States, from China? As Marc Levinson noted in The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger, to understand how the economy works, and the associated geopolitical questions, one has to understand transportation, and maritime shipping in particular. And as we can see – perhaps even more convincingly on the map below – the current route, going through the Middle East, the Suez Canal and the Mediterranean Sea, is long, and dangerous (Philippe has done a fabulous job mapping piracy, by the way, but we will come back to that). If a route could be opened north of Russia, through the Pole, it would probably be the most natural way.

So the countries most interested in the Pole, and in the shipping routes that could be created by climate change (which will inevitably change the geography of the Arctic), are of course the bordering countries, but we should not forget the more "distant" countries, which will surely have their say!

Thanks, Philippe, for this refreshing view of the Pole, which really made me think about the power of cartographic visualization!

Somewhere else, part 123

Some writings worth reading,

and a bit of reading in French,

Did I miss something?

Independence and correlation

A short post to get back to a property I mentioned briefly in the MAT8595 class in January, and again in the MAT8181 class this week (to illustrate the distinction between weak and strong white noises). Recall that (real-valued) random variables $X$ and $Y$ are independent if, for all $x,y\in\mathbb{R}$,
$$\mathbb{P}(X\leq x,\,Y\leq y)=\mathbb{P}(X\leq x)\cdot\mathbb{P}(Y\leq y).$$
Another characterization, for integrable variables, is that, for all bounded measurable functions $h$ and $g$,
$$\mathbb{E}[h(X)\,g(Y)]=\mathbb{E}[h(X)]\cdot\mathbb{E}[g(Y)],$$
which can be written, if the variables are square integrable,
$$\text{corr}(h(X),g(Y))=0.$$
The idea to prove this characterization is to observe that, if $X$ and $Y$ are independent, the first property can be written
$$\mathbb{E}\big[\mathbf{1}_{(-\infty,x]}(X)\,\mathbf{1}_{(-\infty,y]}(Y)\big]=\mathbb{E}\big[\mathbf{1}_{(-\infty,x]}(X)\big]\cdot\mathbb{E}\big[\mathbf{1}_{(-\infty,y]}(Y)\big],$$
and, using a standard argument in integration theory, the equality remains valid for step functions (not only indicators), then for positive measurable functions, and finally for integrable functions. Proving this result is not that difficult.

Observe that Rényi (1959) – inspired by Gebelein (1947) – followed by Sarmanov (1958) introduced the concept of maximal correlation,
$$\rho^\star(X,Y)=\sup_{h,g}\ \text{corr}(h(X),g(Y)),$$
which can be related to this result, where the supremum is taken over all functions $h$ and $g$ such that the correlation exists. Actually, it is possible to consider only transformations such that $\mathbb{E}[h(X)]=0$ and $\text{Var}[h(X)]=1$ (and similarly for $g(Y)$): the idea is that we simply center and scale, which does not impact the correlation. Thus, $X$ and $Y$ are independent if and only if $\rho^\star(X,Y)=0$.

Algorithms to estimate that coefficient are interesting. The problem can be written, equivalently,
$$\min_{h,g}\ \mathbb{E}\big[\left(h(X)-g(Y)\right)^2\big],$$
over standardized transformations. And if the minimization is considered over $h$, assuming that $g$ is fixed, then the optimal transformation is
$$h^\star(x)\propto\mathbb{E}[g(Y)\mid X=x],$$
and similarly for $g^\star$. So, using an iterative algorithm, it is possible to get $h^\star$ and $g^\star$ (see Breiman and Friedman (1985) for more details). Actually, those functions appear in nonlinear canonical analysis. As mentioned in Lancaster (1957), for a Gaussian random vector $(X,Y)$ with correlation $\rho$, $\rho^\star(X,Y)=\vert\rho\vert$, and in that case $h^\star$ and $g^\star$ are affine functions. This can be related to Hermite polynomials and to the expansion of the bivariate Gaussian density. I still hope that someone will go further for the project in the MAT8181 course.
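To illustrate, here is a minimal sketch (not in the original post), assuming the acepack package is installed; its ace() function implements the alternating algorithm of Breiman and Friedman (1985). With $Y=X^2$ plus a small noise, the linear correlation is close to 0 while the maximal correlation is close to 1,

library(acepack)
set.seed(1)
x <- rnorm(1000)
y <- x^2 + rnorm(1000, sd=.1)
cor(x, y)             # almost no linear correlation
fit <- ace(x, y)      # alternating conditional expectations
cor(fit$tx, fit$ty)   # estimated maximal correlation, close to 1
plot(x, fit$tx)       # estimated optimal transformation of x, essentially a parabola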

Seasonal Unit Roots

As discussed in the MAT8181 course, there are – at least – two kinds of non-stationary time series: those with a trend, and those with a unit root (the latter will be called integrated). Unit root tests cannot be used to assess whether a time series is stationary or not: they can only detect integrated time series. And the same holds for seasonal unit roots.

In a previous post, we've seen that it was difficult to model hourly temperatures, since most tests do not reject unit roots. Consider here the average monthly temperature, still in Montréal, QC.

> montreal=read.table("http://freakonometrics.free.fr/temp-montreal-monthly.txt")
> M=as.matrix(montreal[,2:13])
> X=as.numeric(t(M))
> tsm=ts(X,start=1948,freq=12)
> plot(tsm)

For those who don't know Montréal, winter and summer are very different. We can visualize monthly differences using

> month=rep(1:12,length(tsm)/12)
> plot(month,as.numeric(tsm))
> lines(1:12,apply(M,2,mean),col="red",type="b",pch=19)

or, if we install the uroot package (which has been removed from the CRAN repository), we can use

> library(uroot)
> bbplot(tsm)

or

> bb3D(tsm)
Loading required package: tcltk

It looks like our time series is cyclic, because of the yearly seasonal pattern. The autocorrelation function is here

> acf(tsm,lag=120)

Again, this cycle can be visualized using

> persp(1948:2013,1:12,M,theta=-50,col="yellow",shade=TRUE,
+ xlab="Year",ylab="Month",zlab="Temperature",ticktype="detailed")

Now, the question is: is there a seasonal unit root? This would mean that our model should be such that $(1-L^{12})X_t$ is a stationary ARMA process,
$$\Phi(L)\,(1-L^{12})\,X_t=\Theta(L)\,\varepsilon_t.$$
If we forget about the autoregressive and the moving average components, we can estimate
$$X_t=\phi\,X_{t-12}+\varepsilon_t.$$
If there is a seasonal unit root, then $\phi$ should be close to 1. Somehow.

> arima(tsm,order=c(0,0,0),seasonal=list(order=c(1,0,0),period=12))

Call:
arima(x = tsm, order = c(0, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12))

Coefficients:
        sar1  intercept
      0.9702     6.4566
s.e.  0.0071     2.1515

It is not far away from 1. Actually, it cannot be too close to 1: if it were, we would get an error message…

To illustrate some interesting models, let us consider also quarterly temperatures,

> N=cbind(apply(montreal[,2:4],1,sum),apply(montreal[,5:7],1,sum),apply(montreal[,8:10],1,sum),apply(montreal[,11:13],1,sum))
> X=as.numeric(t(N))
> tsq=ts(X,start=1948,freq=4)
> persp(1948:2013,1:4,N,theta=-50,col="yellow",shade=TRUE,
+ xlab="Year",ylab="Quarter",zlab="Temperature",ticktype="detailed")

(again, the aim is just to be able to write down some equations, if necessary)

Why not consider a VAR(1) model on the quarterly temperatures? Something like
$$\boldsymbol{Y}_{\tau}=\boldsymbol{A}\,\boldsymbol{Y}_{\tau-1}+\boldsymbol{\varepsilon}_{\tau},$$
i.e.
$$\begin{pmatrix}y_{1,\tau}\\y_{2,\tau}\\y_{3,\tau}\\y_{4,\tau}\end{pmatrix}=\boldsymbol{A}\begin{pmatrix}y_{1,\tau-1}\\y_{2,\tau-1}\\y_{3,\tau-1}\\y_{4,\tau-1}\end{pmatrix}+\boldsymbol{\varepsilon}_{\tau},$$
where $\boldsymbol{Y}_\tau$ stacks the four quarterly temperatures of year $\tau$, and $\boldsymbol{A}$ is some $4\times 4$ matrix. This model can easily be estimated,

> library(vars)
> df=data.frame(N)
> names(df)=paste("y",1:4,sep="")
> model=VAR(df)
> model

VAR Estimation Results:
======================= 

Estimated coefficients for equation y1: 
======================================= 
Call:
y1 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

       y1.l1        y2.l1        y3.l1        y4.l1        const 
 -0.13943065   0.21451118   0.08921237   0.30362065 -34.74793931 

Estimated coefficients for equation y2: 
======================================= 
Call:
y2 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

      y1.l1       y2.l1       y3.l1       y4.l1       const 
 0.02520938  0.05288958 -0.13277377  0.05134148 40.68955266 

Estimated coefficients for equation y3: 
======================================= 
Call:
y3 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

      y1.l1       y2.l1       y3.l1       y4.l1       const 
 0.07740824 -0.21142726  0.11180518  0.12963931 56.81087283 

Estimated coefficients for equation y4: 
======================================= 
Call:
y4 = y1.l1 + y2.l1 + y3.l1 + y4.l1 + const 

      y1.l1       y2.l1       y3.l1       y4.l1       const 
 0.18842863 -0.31964579  0.25099508 -0.04452577  5.73228873

and the matrix $\boldsymbol{A}$ is here

> A=rbind(
+ coefficients(model$varresult$y1)[1:4],
+ coefficients(model$varresult$y2)[1:4],
+ coefficients(model$varresult$y3)[1:4],
+ coefficients(model$varresult$y4)[1:4])
> A
           y1.l1       y2.l1       y3.l1       y4.l1
[1,] -0.13943065  0.21451118  0.08921237  0.30362065
[2,]  0.02520938  0.05288958 -0.13277377  0.05134148
[3,]  0.07740824 -0.21142726  0.11180518  0.12963931
[4,]  0.18842863 -0.31964579  0.25099508 -0.04452577

Since stationarity of this multivariate time series is closely related to the eigenvalues of this matrix, let us look at them,

> eigen(A)$values
[1]  0.35834830 -0.32824657 -0.14042175  0.09105836
> Mod(eigen(A)$values)
[1] 0.35834830 0.32824657 0.14042175 0.09105836

So it looks like there is no stationarity issue here. A restricted model is the periodic autoregressive model, the so-called PAR(1) model, discussed by Paap and Franses,
$$y_t=\alpha_{1,s}\,y_{t-1}+\varepsilon_t,$$
where $s\in\{1,2,3,4\}$ denotes the quarter associated with date $t$, and the autoregressive coefficient is allowed to change with the season. Keep in mind that this is a VAR(1) model since, stacking the four quarters of year $\tau$ in $\boldsymbol{Y}_\tau$, it can be written
$$\boldsymbol{\Phi}_0\,\boldsymbol{Y}_{\tau}=\boldsymbol{\Phi}_1\,\boldsymbol{Y}_{\tau-1}+\boldsymbol{\varepsilon}_{\tau},
\qquad\text{i.e.}\qquad
\boldsymbol{Y}_{\tau}=\underbrace{\boldsymbol{\Phi}_0^{-1}\boldsymbol{\Phi}_1}_{\boldsymbol{\Gamma}}\,\boldsymbol{Y}_{\tau-1}+\boldsymbol{\Phi}_0^{-1}\boldsymbol{\varepsilon}_{\tau}.$$
This model can be estimated using a specific package (one can also look at the vignette, to get a better understanding of the syntax)

> library(partsm)
> detcomp <- list(regular=c(0,0,0), seasonal=c(1,0), regvar=0)
> model=fit.ar.par(wts=tsq, detcomp=detcomp, type="PAR", p=1)
> PAR.MVrepr(model)
----
    Multivariate representation of a PAR model.

  Phi0:

  1.000  0.000  0.000 0
 -0.242  1.000  0.000 0
  0.000 -0.261  1.000 0
  0.000  0.000 -0.492 1

  Phi1:

 0 0 0 0.314
 0 0 0 0.000
 0 0 0 0.000
 0 0 0 0.000

  Eigen values of Gamma = Phi0^{-1} %*% Phi1:
0.01 0 0 0 

  Time varing accumulation of shocks:

 0.010 0.040 0.155 0.314
 0.002 0.010 0.037 0.076
 0.001 0.003 0.010 0.020
 0.000 0.001 0.005 0.010

Here, the characteristic equation is
$$\det\left[\boldsymbol{\Phi}_0-\boldsymbol{\Phi}_1\,z\right]=0,$$
so there is a (seasonal) unit root if
$$\alpha_{1,1}\,\alpha_{1,2}\,\alpha_{1,3}\,\alpha_{1,4}=1.$$
Which is clearly not the case here: a quick check on the coefficients printed above is sketched below.
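Reading the four periodic coefficients off the $\boldsymbol{\Phi}_0$ and $\boldsymbol{\Phi}_1$ matrices printed above, their product should be the non-zero eigenvalue of $\boldsymbol{\Gamma}$ reported by PAR.MVrepr, and it is far from one (a small sketch, not in the original post),

alpha <- c(0.314, 0.242, 0.261, 0.492)  # coefficients read from Phi1 and Phi0 above
prod(alpha)                             # about 0.01, the eigenvalue of Gamma, far from 1

It is also possible to perform the Canova-Hansen test,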

> CH.test(tsq)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/2 , pi , 

  L-statistic: 1.122  
  Lag truncation parameter: 5 

  Critical values:

  0.10 0.05 0.025 0.01
 0.846 1.01  1.16 1.35

The idea is that the polynomial $1-z^4$ has four roots, in $\{1,-1,i,-i\}$, since
$$1-z^4=(1-z)(1+z)(1-iz)(1+iz).$$
If we get back to monthly data, $1-z^{12}$ has twelve roots, the twelfth roots of unity $e^{2\pi i k/12}$, $k=0,1,\ldots,11$, each of them having a different interpretation.

Here we can have 1 cycle per year (over 12 months), 2 cycles per year (over 6 months), 3 cycles per year (over 4 months), 4 cycles per year (over 3 months), even 6 cycles per year (over 2 months). This will depend on the argument of the root, respectively $\pi/6$, $\pi/3$, $\pi/2$, $2\pi/3$ and $\pi$.
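Just to visualize those roots (a small sketch, not in the original post), we can ask R for the roots of $1-z^{12}$,

r=polyroot(c(1,rep(0,11),-1))   # the twelve roots of 1 - z^12
round(Mod(r),3)                 # they all lie on the unit circle
round(Arg(r)/pi,3)              # their arguments, as multiples of pi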

The output of the test is here

> CH.test(tsm)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , 

  L-statistic: 1.964  
  Lag truncation parameter: 20 

  Critical values:

 0.10 0.05 0.025 0.01
 2.49 2.75  2.99 3.27

And since the test statistic is below the critical values, we do not reject the null of stationarity: there is no evidence of a seasonal unit root here. I can even mention the following testing procedure

> library(forecast)
> nsdiffs(tsm, test="ch")
[1] 0

where the output “1” means that there is a seasonal unit root and “0” that there is no seasonal unit root. Simple to read, isn’t it? If we consider the periodic autoregressive model on the monthly data, the output is

> model=fit.ar.par(wts=tsm, detcomp=detcomp, type="PAR", p=1)
> model
----
  PAR model of order 1 .

  y_t = alpha_{1,s}*y_{t-1} + alpha_{2,s}*y_{t-2} + ... + alpha_{p,s}*y_{t-p} + coeffs*detcomp + epsilon_t,  for s=1,2,...,12
----
  Autoregressive coefficients. 

          s=1  s=2  s=3  s=4  s=5 s=6 s=7  s=8  s=9 s=10 s=11 s=12
alpha_1s 0.15 0.05 0.07 0.33 0.11   0 0.3 0.38 0.31 0.19 0.15 0.37

So, whatever the test, we always reject the assumption that there is a seasonal unit root. Which does not mean that we cannot have a strong cycle! Actually, the series is almost periodic. But there is no unit root! So all of this makes sense (I can hardly believe that there could be a unit root – seasonal or not – in temperature series).

Just to make sure that we get it right, consider two time series, inspired by the previous one. The first one is a periodic sequence (with a very, very small noise, just to avoid problems with non-definite matrices) and the second one is clearly integrated.

> Xp1=Xp2=as.numeric(t(M))
> for(t in 13:length(M)){
+ Xp1[t]=Xp1[t-12]
+ Xp2[t]=Xp2[t-12]+rnorm(1,0,2)
+ }
> Xp1=Xp1+rnorm(length(Xp1),0,.02)
> tsp1=ts(Xp1,start=1948,freq=12)
> tsp2=ts(Xp2,start=1948,freq=12)
> par(mfrow=c(2,1))
> plot(tsp1)
> plot(tsp2)

see also

> par(mfrow=c(1,2))
> bb3D(tsp1)
> bb3D(tsp2)

If we quickly look at those series, I would say that the first one has no unit root – it is not stationary, but only because the series is periodic – while there is (at least) one unit root in the second one. If we look at the Canova-Hansen test, we get

> CH.test(tsp1)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , 

  L-statistic: 2.234
  Lag truncation parameter: 20 

  Critical values:

 0.10 0.05 0.025 0.01
 2.49 2.75  2.99 3.27

> CH.test(tsp2)

  ------ - ------ ----
  Canova & Hansen test
  ------ - ------ ----

  Null hypothesis: Stationarity.
  Alternative hypothesis: Unit root.
  Frequency of the tested cycles: pi/6 , pi/3 , pi/2 , 2pi/3 , 5pi/6 , pi , 

  L-statistic: 5.448  
  Lag truncation parameter: 20 

  Critical values:

 0.10 0.05 0.025 0.01
 2.49 2.75  2.99 3.27

I know that this package has been removed, so maybe I should not use it. Consider instead

> nsdiffs(tsp1, 12,test="ch")
[1] 0
> nsdiffs(tsp2, 12,test="ch")
[1] 1

Here we reach the same conclusion: the first series does not have a unit root, but the second one does. But be careful: with the Osborn-Chui-Smith-Birchenhall test,

> nsdiffs(tsp1, 12,test="ocsb")
[1] 1
> nsdiffs(tsp2, 12,test="ocsb")
[1] 1

we have the feeling that there is also a unit root in our cyclic series…

So here, on low-frequency (monthly or quarterly) data, we do reject the assumption of a unit root – even a seasonal one – in our temperature series. We still have our high-frequency problem to deal with, some day (but I don't think I'll have enough time to introduce long range dependence this session, unfortunately).

Somewhere else, part 122

Some writings worth reading

and a bit of reading in French,

Did I miss something?

Linear ‘Prediction’ for AR Time Series

In the exercises for the MAT8181 course, there are two exercises (16 and 17) about prediction and extrapolation based on MA(1) and AR(1) time series. But before discussing those exercises (I had some requests for hints), I wanted to recall the definition of the linear prediction,
$${}_{t}\widehat{X}_{t+h}=\text{proj}\left(X_{t+h}\mid\mathcal{H}_t\right),$$
where
$$\mathcal{H}_t=\overline{\text{span}}\left\{1,X_1,\ldots,X_t\right\}.$$
As discussed previously on this blog, we consider here a projection not onto $\sigma(X_1,\ldots,X_t)$ (which would give the conditional expectation) but onto the linear subspace spanned by the observations.

The goal of Exercise 2 was to establish an important result, in the context of Gaussian random vectors. If $(\boldsymbol{X},Y)$ is a (multivariate) Gaussian vector, then
$$\mathbb{E}[Y\mid\boldsymbol{X}]=\mathbb{E}[Y]+\boldsymbol{\Sigma}_{Y\boldsymbol{X}}\,\boldsymbol{\Sigma}_{\boldsymbol{X}\boldsymbol{X}}^{-1}\left(\boldsymbol{X}-\mathbb{E}[\boldsymbol{X}]\right),$$
where $\boldsymbol{\Sigma}_{Y\boldsymbol{X}}$ is the vector $\left(\text{Cov}(Y,X_1),\ldots,\text{Cov}(Y,X_n)\right)$.

Keep those results in mind, and let us look at Exercise 17, for instance. Here, $(X_t)$ is an AR(1) process, with innovation $(\varepsilon_t)$,
$$X_t=\phi\,X_{t-1}+\varepsilon_t.$$
One observation (say ) is missing. We have here 3 questions:

  • what is the best linear prediction of  given  and 
  • what is the best linear prediction of  given  and 
  • what is the best linear prediction of  given  and 

Case 1. Here, we have to compute

Since we have an AR(1) process, $\gamma(1)=\phi\,\gamma(0)$ and $\gamma(2)=\phi^{2}\,\gamma(0)$ (more generally, $\gamma(h)=\phi^{h}\gamma(0)$ for $h\geq 0$). Thus, from the relationship above

which can be written

i.e. the best linear prediction is simply $\phi$ times the most recent observation. Which makes sense actually: the AR(1) process is Markovian of order one, so
$$\mathbb{E}[X_t\mid X_{t-1},X_{t-2},\ldots]=\mathbb{E}[X_t\mid X_{t-1}].$$
And we have seen in class that, for an AR(1) process,
$$\mathbb{E}[X_t\mid X_{t-1}]=\phi\,X_{t-1}.$$
So far, so good.
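Just to double-check Case 1 numerically (a small sketch, not part of the exercise, and assuming the reading above that we project on the two most recent past values): the projection coefficients solve the normal equations built from the autocovariances, and the weight on the older observation is indeed zero,

phi=.7
gamma=function(h) phi^abs(h)       # autocovariance of an AR(1), up to the factor gamma(0)
G=matrix(c(gamma(0),gamma(1),
           gamma(1),gamma(0)),2,2) # covariance matrix of the two conditioning values
g=c(gamma(1),gamma(2))             # covariances with the value to predict
solve(G,g)                         # projection coefficients: (phi, 0)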

Case 2. Here, we have to compute

Since we have an AR(1) process,  and . Thus, from the relationship above

i.e. .

Case 3. Finally, we have to compute

One more time, since we have an AR(1) process,  and . So here, the relationship above becomes

Here, we can write

i.e.

So finally, what we got here is

and

The mean squared errors for each of those estimates are obtained computing

I guess I should probably stop here… that’s a detailed hint actually.

Somewhere else, part 121

Some writings worth reading

“Abstract Algebra is wonderful because you can prove things about groups that you don’t even understand.”

“And God said: ‘Thou shalt not divide by zero’”

“This exercise is so easy that it can be solved in a negative number of steps.”

“… And like everything trivial, we must prove it using a very obscure and indirect method that you will probably use only three or four times in your life at the very most.”

“There are a number of steps to follow when you come across an integration problem that requires trig substitution. The first step is to cry. The second step is to re-evaluate your life and wonder what you did to deserve this.”

“Real analysis is just triangle inequality with applications.”

“Topologies aren’t like Wisconsin, unions are still allowed!”

“We’re using a textbook so that way you guys don’t think I’m making all of this up.”

“I too have a lot of trouble visualizing 4-dimensional vectors… When I’m sober.”

and a bit of reading in French,

Did I miss something?

Seasonal, or periodic, time series

Monday, in our MAT8181 class, we discussed seasonal unit roots from a practical perspective (the theory will be briefly mentioned in a few weeks, once we've seen multivariate models). Consider some time series $(X_t)$, for instance traffic on French roads,

> autoroute=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/autoroute.csv",
+ header=TRUE,sep=";")
> X=autoroute$a100
> T=1:length(X)
> plot(T,X,type="l",xlim=c(0,120))
> reg=lm(X~T)
> abline(reg,col="red")

As discussed in a previous post, if there is a trend, we should remove it, and work on the residuals, $Y_t=X_t-\widehat{\beta}_0-\widehat{\beta}_1\,t$,

> Y=residuals(reg)
> acf(Y,lag=36,lwd=3)

We can observe that there is some seasonality here. A first strategy might be to assume that there is a seasonal unit root, so we consider $Z_t=(1-L^{12})Y_t=Y_t-Y_{t-12}$, and we try to find some ARMA process for it. Consider the empirical autocorrelation function of that time series,

> Z=diff(Y,12)
> acf(Z,lag=36,lwd=3)

or the partial autocorrelation function

> pacf(Z,lag=36,lwd=3)

The first graph might suggest an MA(1) structure, while the second graph might suggest an AR(1) time series. Let us try both.

> model1=arima(Z,order=c(0,0,1))
> model1

Call:
arima(x = Z, order = c(0, 0, 1))

Coefficients:
          ma1  intercept
      -0.2367  -583.7761
s.e.   0.0916   254.8805

sigma^2 estimated as 8071255:  log likelihood = -684.1,  aic = 1374.2

> E1=residuals(model1)
> acf(E1,lag=36,lwd=3)

which can be considered as a white noise (if you are not convinced, try either a Box-Pierce or a Ljung-Box test, for instance as sketched below).
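A quick check (the lag is chosen arbitrarily here, and fitdf accounts for the single estimated MA coefficient),

Box.test(E1, lag=12, type="Ljung-Box", fitdf=1)

Similarly,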

> model2=arima(Z,order=c(1,0,0))
> model2

Call:
arima(x = Z, order = c(1, 0, 0))

Coefficients:
          ar1  intercept
      -0.3214  -583.0943
s.e.   0.1112   248.8735

sigma^2 estimated as 7842043:  log likelihood = -683.07,  aic = 1372.15

> E2=residuals(model2)
> acf(E2,lag=36,lwd=3)

which can also be considered as a white noise. So what we have, so far, is
$$(1-\phi L)(1-L^{12})\,Y_t=\varepsilon_t,$$
for some white noise $(\varepsilon_t)$. This suggests the following SARIMA structure on $(Y_t)$,

> model2b=arima(Y,order=c(1,0,0),
+               seasonal = list(order = c(0, 1, 0),
+               period=12)) 
> model2b

Call:
arima(x = Y, order = c(1, 0, 0), seasonal = list(order = c(0, 1, 0), period = 12))

Coefficients:
          ar1
      -0.2715
s.e.   0.1130

sigma^2 estimated as 8412999:  log likelihood = -685.62,  aic = 1375.25

So far, so good. Now, what if we consider that we do not have a seasonal unit root, but simply a large autoregressive coefficient in some AR structure? Let us try something like
$$(1-\phi L)(1-\psi L^{12})\,Y_t=\varepsilon_t,$$
where a natural guess is that the coefficient $\psi$ should – probably – be close to one. Let us try this one,

> model3c=arima(Y,order=c(1,0,0),
+               seasonal = list(order = c(1, 0, 0), 
+               period = 12))
> model3c

Call:
arima(x = Y, order = c(1, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12))

Coefficients:
          ar1    sar1  intercept
      -0.1629  0.9741  -684.9455
s.e.   0.1170  0.0115  3064.4040

sigma^2 estimated as 8406080:  log likelihood = -816.11,  aic = 1640.21

which is comparable with what we got previously (somehow), so we might assume that this model can be considered as an interesting one. We will discuss further the fact that the first coefficient might be considered as non-significant.

What is the difference between those two models? Over a short-term horizon, the two models are comparable. Clearly,

> library(forecast)
> previ=function(model,h=36,b=40000){
+ prev=forecast(model,h)
+ T=1:85
+ Tfutur=86:(85+h)
+ plot(T,Y,type="l",xlim=c(0,85+h),ylim=c(-b,b))
+ polygon(c(Tfutur,rev(Tfutur)),c(prev$lower[,2],rev(prev$upper[,2])),col="orange",border=NA)
+ polygon(c(Tfutur,rev(Tfutur)),c(prev$lower[,1],rev(prev$upper[,1])),col="yellow",border=NA)
+ lines(prev$mean,col="blue")
+ lines(Tfutur,prev$lower[,2],col="red")
+ lines(Tfutur,prev$upper[,2],col="red")
+ }

Now, from a (very) long term perspective, the models are quite different: one is stationary, so the forecast will tend to the average value (here 0, since the trend was removed), while the other one is (seasonally) integrated, so the confidence interval will widen. For the non-stationary one, we get

> previ(model2b,600,b=60000)

and for the stationary one

> previ(model3c,600,b=60000)

But as mentioned in the introduction of this course, forecasts with those models are relevant only for short-term horizons (say, not too far ahead). And in that case, the predictions are almost the same here,

> previ(model2b,36,b=60000)

> previ(model3c,36,b=60000)

Now, if we come back to our second model, we did mention previously that the autoregressive coefficient might be considered as non-significant. What if we remove it?

> model3d=arima(Y,order=c(0,0,0),
+               seasonal = list(order = c(1, 0, 0), 
+               period = 12))
> (model3d)

Call:
arima(x = Y, order = c(0, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12))

Coefficients:
        sar1  intercept
      0.9662  -696.5661
s.e.  0.0134  3182.3017

sigma^2 estimated as 8918630:  log likelihood = -817.03,  aic = 1640.07

If we look at a (short-term) forecast, we get

> previ(model3d,36,b=32000)

Do you see any difference? To be honest, I don’t… If we look at the figures, we get

> cbind(forecast(model2b,12)$mean,forecast(model3c,12)$mean,forecast(model3d,12)$mean)
Time Series:
Start = 86 
End = 97 
Frequency = 1 
1   -4908.4920  -5092.8999  -5520.8780
2  -10012.7837  -9640.8103  -9493.0339
3   -3880.2202  -3841.1960  -3828.2611
4  -18102.5211 -17638.4086 -17499.1828
5  -20602.7346 -20090.9117 -19934.1066
6  -10463.2212 -10209.0139 -10132.0439
7    2458.1538   2376.4897   2351.2377
8   -1680.3342  -1654.4844  -1647.0057
9     876.6837    836.2342    823.4934
10  18046.5642  17561.6520  17413.1463
11  21531.4820  20956.3451  20780.2836
12  -3217.6103  -3152.0446  -3132.4112

Figures are different, but not significantly (keep in mind the size of the confidence interval). This might explain why, in R, when we ask for an autoregressive process of order $p$, we get a model with $p$ parameters to estimate, and even if some are not significant, we usually keep them for the forecast. Most of the time, from a forecasting point of view, it's no big deal.

Somewhere else, part 120

Some writings worth reading

and a bit of reading in French,

Did I miss something interesting?

Last maps of the North Pole

For once, I will post a few maps that I did not generate myself, to illustrate tomorrow evening's debate. A first map, from 1906,

and another one, from 1950,

As a reminder, Robert Peary is said to have been the first man to reach the North Pole, in April 1909 (by dog sled, during his eighth expedition). But many people seem to doubt it. Roald Amundsen and Umberto Nobile flew over it in an airship in 1926, while Ivan Papanin landed there, by plane, in 1937. It was not until Wally Herbert's expedition in 1969 that an explorer reached it on foot (or rather, by dog sled). I would also mention the map by Aleksandra and Daniel Mizielinski,

To come back to the evening's debate (who owns the North Pole?), let me post a few maps (in addition to the one in my previous post) on a possible partition of the Arctic region, going from a simplistic division

to more complex ones,

The idea (considered as a plausible solution) would be to divide the North Pole area among all the bordering countries, connecting each country's territory to the Arctic zone, up to the North Pole.

Because the debate is intense… For instance, the 2012 Danish expedition aboard the LOMROG aimed to study the Lomonosov Ridge – Хребет Ломоносова – in order to establish that the ridge connects the Pole not to Russia, but to Greenland. Indeed, the United Nations Convention on the Law of the Sea (UNCLOS) establishes that a country can ask to attach a territory to its 200-nautical-mile exclusive economic zone (EEZ) when that territory lies on an extension of its continental shelf. Since 2008, there has been an ice-free passage allowing east–west transit, opening the area up to maritime traffic.

Now, one can guess that if some countries are fighting over the region, it is not (only) because of Santa Claus (on that topic, one can re-read Canada issues Santa Claus a passport), but because of the oil and gas resources,

The USGS estimated that 13% of the world's undiscovered (and recoverable) crude oil, and 30% of its undiscovered natural gas, lies in the Arctic. The debate continues tomorrow evening.

Back to the North Pole?

For several days now, I have been trying to prepare as much as possible for Tuesday evening's debate, and the more I read, the more I find the question "who owns the North Pole" complex. Another recent example of a region being "claimed" by others could be Ukraine. To oversimplify that debate (too), I have the impression that part of Ukraine feels European, while another part feels Russian. One tool to visualize this sense of belonging is to use language data, for instance.

[map: File:Ukraine cencus 2001 Ukrainian.svg]

With data and a visualization on a map, one can try to shed a little light. But in the case of the North Pole, we cannot really use data of that kind (not that I know of, unfortunately). That said, one can learn fascinating things in an old issue of Courrier International on this very question:

In my previous post, I mentioned the quick discussion I had with my daughter. While taking my son to fencing on Friday evening, I explained the topic of the debate to him, and his answer was unambiguous: the North Pole belongs to whoever first planted a flag on it! Which is exactly the graphic on the poster of the debate. Now, I must admit that this kind of claim bothers me a little. The symbolism of planting a flag is troubling, when you think about it. One of the images that immediately comes to mind is the American flag planted on the Moon (echoing the flag raised during the battle of Iwo Jima, 硫黄島, in February and March 1945). At the time of the American flag on the Moon, it was in fact specified that « this act is intended as a symbolic gesture of national pride in achievement and is not to be construed as a declaration of national appropriation by claim of sovereignty », in a text adopted by the Senate. So maybe that answer is a bit too quick! I therefore wanted to come back to two key elements of the debate: the countries that could claim the North Pole, and the North Pole itself.

  • the countries around the North Pole

So far, I had made a few maps to visualize the Pole, relative to the countries that surround it (essentially Canada, Russia and Greenland). As contours for my maps, I had used the land boundaries of the countries, as defined on standard maps. But one can go further, as some suggest, by taking the 200-nautical-mile zone into account (the notion of Exclusive Economic Zone).

The (small) region in the centre is the one cut out on the Courrier International map, at the beginning of this post. As we can see on the map above, once those zones are added, our three nations are indeed very close to the Pole (the ice contour is the one observed over the 12 months of 2013),

To me, the advantage of these zones is that it is easier to look at the intersection between the ice area and these economic zones (and to compute its area) than to measure the length of the border between a country and the ice area. At the end of this post, we will try to see what proportion of the ice area overlaps an exclusive zone (I am glossing over the technical difficulty of computing the area of a region that contains the North Pole… oddly enough, it is complex, from a computational point of view; the most curious readers can have a look at the dedicated post).

That said, some might criticize the idea of taking this (horizontal) extension of territory into account, through these economic zones. What is the next step? A vertical appropriation, to take possession of the clouds? I am only half joking, since vertical ownership of space does exist, with airspace… The Russians have actually mentioned a vertical appropriation, but underground, since the Lomonosov Ridge – Хребет Ломоносова – would connect the Pole directly to Russian territory.

But I will stop my discussion of the countries here.

  • which North Pole?

I had somewhat left this question aside, but I think it is worth spending at least two minutes on it. What are we talking about when we talk about the North Pole? So far, I had used the usual definition of the North Pole, the northernmost point on Earth (which, I admit, is a somewhat tautological definition). In terms of coordinates, it is the point at latitude 90° North. It has no longitude, since it is precisely the point in the north where all meridians (and all time zones) meet. I had also said, in my previous post, that it is the point of the Earth's surface, in the northern hemisphere, located on the Earth's axis of rotation. Unfortunately, this geographic point is not fixed on the Earth's surface, because the Earth's axis of rotation varies (very slightly).

But there is another pole, the magnetic North Pole, which is the unique point on the Earth's surface where the (terrestrial) magnetic field points towards the centre of the Earth (and is orthogonal to the Earth's surface; if it points upwards, it is the magnetic South Pole). A priori, one might think this point is of little interest for the discussion, since claiming a moving point would be surprising. And it does move a lot, as the map below shows (the coordinates are easily available, for instance on http://ngdc.noaa.gov/geomag/data/poles/)

Using such fields, one can also define a gravitational North Pole (allow me to coin the term, by analogy with the magnetic North Pole), based not on the magnetic field but on the gravitational field. In English, one speaks of polar wander. Simply put, this North Pole is related to the distribution of mass on Earth, and of water in particular (the rest being fairly stable in comparison). And as Chen, Wilson, Ries and Tapley recently showed in Rapid ice melting drives Earth's pole to the east (an article noted by Richard Lovett, in May 2013, in Nature), the North Pole seems to be moving faster than expected, because of climate change.

But I think this discussion (however fascinating it might be) takes us away from what should be discussed on Tuesday evening. In fact, my point is that, rather than visualizing the North Pole as a point, it is more interesting to visualize a region. And rather than considering a region beyond the Arctic Circle (I had used circles in my previous post), one can instead use the ice area, as shown on the map

Once again, the problem is that this ice area varies enormously! We can observe that between one half and two thirds of the ice surface lies within the economic zones of the main bordering countries, Russia and Canada in particular.

And if we look more closely, we see that the areas covered by Russia and by Canada are comparable,

with, however, a cyclical pattern, the Canadian share being at its lowest in the fall,

I feel like I could spend hours on these maps. But I will probably stop here, as far as the preparation of the debate is concerned. And even if these debates leave little time for note-taking, I will try to find some time to come back to what was said, in the middle of the week, on the blog or on Twitter.

Moving the North Pole to the Equator

I am still working with @3wen on visualizations of the North Pole. So far, it was not that difficult to generate maps, but we started to have problems with the ice region in the Arctic. More precisely, it was complicated to compute the area of this region (even if we can easily get a shapefile). Consider the globe,

worldmap <- ggplot() + 
geom_polygon(data = world.df, aes(x = long, y = lat, group = group)) +
scale_y_continuous(breaks = (-2:2) * 30) +
scale_x_continuous(breaks = (-4:4) * 45)

and then, add three points in the northern hemisphere, and plot the associated triangle

P1 <- worldmap + geom_polygon(data = triangle, aes(x = long, y = lat, group = group), 
fill ="blue", alpha = 0.6, col = "light blue", size = .8)+
geom_point(data = triangle, aes(x = long, y = lat, group = group),colour = "red")+

for some given projection, e.g.

coord_map("ortho", orientation=c(61, -74, 0))

This can be done with the following function

proj1=function(x=75){
triangle <- data.frame(long=c(-70,-110,-90*(x<90)+90*(x>90)),
lat=c(60,60,x*(x<90)+(90-(x-90))*(x>90)),group=1, region=1)
worldmap <- ggplot() + 
geom_polygon(data = world.df, aes(x = long, y = lat, group = group)) +
scale_y_continuous(breaks = (-2:2) * 30) +
scale_x_continuous(breaks = (-4:4) * 45)
P1 <- worldmap + geom_polygon(data = triangle, aes(x = long, y = lat, group = group), 
fill ="blue", alpha = 0.6, col = "light blue", size = .8)+
geom_point(data = triangle, aes(x = long, y = lat, group = group),colour = "red")+
coord_map("ortho", orientation=c(61, -74, 0)) 
print(P1)
}

or

I am not sure I understand why the projection of the triangle is not convex on the graph above, but let's say it is not a big deal here. Actually, our problem is that we are interested in regions (polygons, from a geometrical point of view) that do contain the North Pole. And here, it starts to get messy. I can easily move the upper point to the other side of the globe, but the polygon is no longer correct,

I do understand that this is a non-trivial problem, but it means that it should not be that simple to compute the area of a polygon (a region) that contains the North Pole. Which is exactly what we observed in our computations. And I believe that one heuristic interpretation is related to the following graph

My skills in geometry are extremely poor. So do not expect me to go through the code of the function that computes the area of a polygon! Actually, my idea is the following: if the problem is that the North Pole is inside the region, let us consider some rotation, to shift the North Pole onto the Equator. The code that takes longitudes and latitudes and returns new longitudes and latitudes, after a rotation around the y-axis (the North Pole goes down, along the Greenwich meridian), is

rotation=function(Z,theta){
  # Z is a two-column matrix (longitude, latitude), in degrees
  lon=Z[,1]/180*pi; lat=Z[,2]/180*pi
  # convert to Cartesian coordinates on the unit sphere
  x=cos(lon)*cos(lat)
  y=sin(lon)*cos(lat)
  z=sin(lat)
  pt1=cbind(x,y,z)
  # rotation of angle theta around the y-axis
  M=matrix(c(cos(theta),0,-sin(theta),0,1,0,sin(theta),0,cos(theta)),3,3)
  pt2=t(M%*%t(pt1))
  # back to longitudes and latitudes, in degrees
  lat=asin(pt2[,3])*180/pi
  lon=atan2(pt2[,2],pt2[,1])*180/pi
  return(cbind(lon,lat))}
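As a quick check (not in the original post), the North Pole itself, at longitude 0 and latitude 90, should end up on the Equator, on the Greenwich meridian, and a point of the Equator on that meridian should be sent to the South Pole,

rotation(cbind(0,90),pi/2)   # essentially (0, 0): the Pole is now on the Equator
rotation(cbind(0,0),pi/2)    # (0, -90): this point of the Equator is sent to the South Pole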

With a rotation of angle ranging from $0$ (no change) to $\pi/2$ (the North Pole on the Equator), we get

From now on, it is possible to compute the area of any region containing the North Pole! One simply has to apply the rotation function to all the databases generated from the shapefiles (and then the opposite rotation, to get proper locations back)! We can then compute the centroid of the ice region, for example,

r.glace=glace
r.glace[,1:2]=rotation(glace[,1:2],pi/2)   # rotate the ice polygons away from the Pole
M=matrix(NA,length(unique(glace$id)),3)
j=0
for(i in unique(glace$id)){j=j+1
Polyglace <- as(r.glace[glace$id==i,c("long","lat")],"gpc.poly")
M[j,1]=area.poly(Polyglace)                # area of each rotated polygon
M[j,2:3]=centroid(r.glace[r.glace$id==i,c("long","lat")])
}
Z=c(weighted.mean(M[,2],M[,1]),weighted.mean(M[,3],M[,1]))
rotation(rbind(Z),-pi/2)[1,]               # rotate the centroid back to its original frame

And we get

and below, we can visualize all the locations of the centroid of the ice region over the past 25 years

Somewhere else, part 119

Some writings worth reading,

with a bit of reading in French,

Did I miss something?

Filtering a Stationary Time Series

In the first part of the MAT8181 course, on linear (univariate) time series, I forgot to mention an important theorem. Let $(X_t)$ be a stationary time series, and $(a_j)_{j\in\mathbb{Z}}$ a sequence of real numbers such that
$$\sum_{j\in\mathbb{Z}}\vert a_j\vert<\infty,$$
then the time series $(Y_t)$, defined as
$$Y_t=\sum_{j\in\mathbb{Z}}a_j\,X_{t-j},$$
is a stationary time series. Further, one can easily get that
$$\gamma_Y(h)=\sum_{j\in\mathbb{Z}}\sum_{k\in\mathbb{Z}}a_j\,a_k\,\gamma_X(h-j+k).$$
This result can be used, if necessary, in the exercises (that might save some time actually). I did not include this property in my notes because it is a bit technical to establish that this sum exists, and that the time series is stationary. It is rather simple with the spectral density (since $f_Y(\omega)=\vert A(e^{-i\omega})\vert^2\,f_X(\omega)$, where $A(z)=\sum_j a_jz^j$ stands for the generating function of the filter), but I did not mention the spectral density since it requires some knowledge of Fourier analysis…
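To illustrate (a small sketch, not in the course notes), we can filter a simulated AR(1) with a finite filter, and compare the sample variance of the filtered series with the value given by the formula above,

set.seed(1)
phi=.6
X=arima.sim(list(ar=phi),n=1e5)           # a stationary AR(1), unit innovation variance
a=c(.5,.3,.2)                             # a (finite) filter
Y=stats::filter(X,a,sides=1)              # Y_t = .5 X_t + .3 X_{t-1} + .2 X_{t-2}
gammaX=function(h) phi^abs(h)/(1-phi^2)   # autocovariance of the AR(1)
J=0:2
sum(outer(a,a)*gammaX(outer(J,J,"-")))    # theoretical variance of Y_t
var(Y,na.rm=TRUE)                         # sample variance, should be close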