Inference for ARMA(p,q) Time Series

As we mentioned in our previous post, as soon as we have a moving average part, inference becomes more complicated. Again, to illustrate, we do not need too general a model. Consider, here, some https://latex.codecogs.com/gif.latex?ARMA(1,1) process,

https://latex.codecogs.com/gif.latex?X_t=\phi%20X_{t-1}+\varepsilon_t+\theta%20\varepsilon_{t-1}

where https://latex.codecogs.com/gif.latex?(\varepsilon_t) is some white noise, and assume further that https://latex.codecogs.com/gif.latex?\theta+\phi\neq0.

> theta=.7
> phi=.5
> n=1000
> Z=rep(0,n)
> set.seed(1)
> e=rnorm(n)
> for(t in 2:n) Z[t]=phi*Z[t-1]+e[t]+theta*e[t-1]
> Z=Z[800:1000]
> plot(Z,type="l")

  • A two-step procedure

To start with something simple, assume that we did miss the moving average component,

https://latex.codecogs.com/gif.latex?X_t=\phi%20X_{t-1}+u_t

The estimator of https://latex.codecogs.com/gif.latex?\phi – by least squares – is no longer consistent. But still, we can compute it,

> base=data.frame(Y=Z[2:n],X=Z[1:(n-1)])
> regression=lm(Y~0+X,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.2445 -0.7909  0.0626  0.9707  3.0685 

Coefficients:
  Estimate Std. Error t value Pr(>|t|)    
X  0.69571    0.05101   13.64   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.225 on 199 degrees of freedom
  (799 observations deleted due to missingness)
Multiple R-squared:  0.4832,	Adjusted R-squared:  0.4806 
F-statistic:   186 on 1 and 199 DF,  p-value: < 2.2e-16

and then, we can compute the autocorrelation of the noise,

> n=200
> cor(residuals(regression)[2:n],residuals(regression)[1:(n-1)])
[1] 0.2663076

or, more formally, use the Durbin-Watson estimator, to get the autocorrelation of the noise (and some significance test),

> library(car)
> durbinWatsonTest(regression)
 lag Autocorrelation D-W Statistic p-value
   1       0.2656555       1.46323       0
 Alternative hypothesis: rho != 0

The point, here, is that we would like to assume that

https://latex.codecogs.com/gif.latex?u_t=\varepsilon_t+\theta\varepsilon_{t-1}

meaning that https://latex.codecogs.com/gif.latex?(u_t) should be some https://latex.codecogs.com/gif.latex?MA(1) process. And

https://latex.codecogs.com/gif.latex?\rho(1)=\frac{\theta}{1+\theta^2}

i.e. https://latex.codecogs.com/gif.latex?\theta is a root of the quadratic equation https://latex.codecogs.com/gif.latex?\theta^2-\theta/\rho(1)+1=0,

> polyroot(c(1,-1/cor(residuals(regression)[2:n],residuals(regression)[1:(n-1)]),1))
[1] 0.2884681+0i 3.4665883+0i

Here, we do have two positive roots. I would go for the one smaller than one, in order to be able to invert the polynomial, if necessary…
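To make that choice explicit, here is a minimal sketch (the name theta_est is mine) keeping only the root with modulus smaller than one as our estimate of https://latex.codecogs.com/gif.latex?\theta,

> rho1=cor(residuals(regression)[2:n],residuals(regression)[1:(n-1)])
> roots=polyroot(c(1,-1/rho1,1))
> (theta_est=Re(roots[Mod(roots)<1]))
[1] 0.2884681

That value could then be plugged back in, if we wanted to iterate the two-step procedure.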

  •  Use of the empirical autocorrelation function

An alternative might be to use properties of the autocorrelation function,

https://latex.codecogs.com/gif.latex?\gamma(0)=\frac{1+\theta^2+2\phi\theta}{1-\phi^2}\cdot\sigma^2

https://latex.codecogs.com/gif.latex?\gamma(1)=\phi\gamma(0)+\theta\sigma^2

and

https://latex.codecogs.com/gif.latex?\gamma(2)=\phi\gamma(1)

Again, we have a set of three equations, with three unknown parameters. Numerically, it is possible to find some roots. If we run the code, we get

> v=c(as.numeric(acf(Z)$acf[2:3]),1)*var(Z)
> library(rootSolve)
> seteq=function(x){
+ F1=v[1]-x[3]^2*(x[2]^2+2*x[1]*x[2]+1)/(1-x[1]^2)
+ F2=v[2]-(x[1]*v[1]+x[2]*x[3]^2)
+ F3=v[3]-x[1]*v[2]
+ return(c(F1,F2,F3))}
> multiroot(f=seteq,start=c(.1,.1,1))
$root
[1]  3.643734 -3.188145  1.427759

$f.root
[1]  1.371170e-11 -3.714573e-11  0.000000e+00

$iter
[1] 8

$estim.precis
[1] 1.695248e-11

Here, we have a problem: the roots we get are far from the true values (0.5, 0.7 and 1). One reason is that, in the code above, v stores (γ(1), γ(2), γ(0)) while the three equations use it as if it were (γ(0), γ(1), γ(2)); the ordering should be corrected before drawing any conclusion from this approach…

  • Use of least square techniques

We can use, here, the algorithm described in the context of https://latex.codecogs.com/gif.latex?MA(q) processes.

> V=function(p){
+ phi=p[1]
+ theta=p[2]
+ u=rep(0,length(Z))
+ for(t in 2:length(Z)) u[t]=Z[t]-phi*Z[t-1]-theta*u[t-1]
+ return(sum(u^2))
+ }
> p=optim(par=c(.1,.1),V)$par
[1] 0.3637783 0.7773845
> coef=c(p,sqrt(V(p)/(length(Z))))

which is not so bad. Actually, if we run that procedure on 1,000 samples, we get the following output
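The figure with those 1,000 estimates is not reproduced here, but a minimal sketch of that experiment (simply re-simulating the https://latex.codecogs.com/gif.latex?ARMA(1,1) series with the true values of https://latex.codecogs.com/gif.latex?\phi and https://latex.codecogs.com/gif.latex?\theta defined above, and re-running the minimization in a loop; the variable names are mine) could be

> nsim=1000
> EST=matrix(NA,nsim,2)
> for(s in 1:nsim){
+ e=rnorm(1000)
+ X=rep(0,1000)
+ for(t in 2:1000) X[t]=phi*X[t-1]+e[t]+theta*e[t-1]
+ X=X[800:1000]
+ Vs=function(p){
+ u=rep(0,length(X))
+ for(t in 2:length(X)) u[t]=X[t]-p[1]*X[t-1]-p[2]*u[t-1]
+ return(sum(u^2))}
+ EST[s,]=optim(par=c(.1,.1),Vs)$par
+ }
> apply(EST,2,mean)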

  • Use of maximum likelihood techniques

Last, but not least, one more time, we can use (global) maximum likelihood techniques, since the process is a Gaussian process (all finite dimensional vectors have a joint Gaussian distribution) if we assume that the noise is Gaussian.

> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ phi=A[1];  theta=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=sigma^2*(theta^2+2*phi*theta+1)/(1-phi^2)
+ rho[2]=phi*rho[1]+theta*sigma^2
+ for(h in 3:n) rho[h]=phi*rho[h-1]
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)$par
[1] 0.3890991 0.7672036 1.0731340

It works fine, one more time. But maybe we got lucky here. We’ve seen in the post on autoregressive time series that the algorithm might fail if the time series is not stationary. In order to avoid such problems, we can consider a constrained optimization problem, where we simply recall that https://latex.codecogs.com/gif.latex?\phi\in(-1,1),

> U=matrix(c(1,-1,0,0,0,0),2,3)
> C=-c(.999,.999)
> constrOptim(c(.1,.1,1),LogL,grad=NULL,ui=U,ci=C)
$par
[1] 0.3890991 0.7672036 1.0731340

$value
[1] 300.1956

$counts
function gradient 
     118       NA 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] -1.536358e-05

If we run that algorithm 1,000 times, on simulated time series (with the same parameters), we get

Inference for MA(q) Time Series

Yesterday, we’ve seen how inference for https://latex.codecogs.com/gif.latex?AR(p) time series was possible. I started with that one because it is actually the simplest case. For instance, we can use ordinary least squares. There might be some possible bias (see e.g. White (1961)), but asymptotically, estimators are fine (consistent, with asymptotic normality). But when the noise is (auto)correlated, then it is more complex. So, consider here some https://latex.codecogs.com/gif.latex?MA(2) time series,

https://latex.codecogs.com/gif.latex?X_t=\varepsilon_t+\theta_1\varepsilon_{t-1}+\theta_2\varepsilon_{t-2}

for some white noise https://latex.codecogs.com/gif.latex?(\varepsilon_t).

> theta1=.25
> theta2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=e[t]+theta1*e[t-1]+theta2*e[t-2]
> Z=Z[800:1000]
> plot(Z,type="l")

  • Using the empirical autocorrelations

The first idea might be to use the first two (empirical) autocorrelations (the two that are supposed to be – theoretically – non null),

https://latex.codecogs.com/gif.latex?\rho(1)=\frac{\theta_1+\theta_1\theta_2}{1+\theta_1^2+\theta_2^2}

https://latex.codecogs.com/gif.latex?\rho(2)=\frac{\theta_2}{1+\theta_1^2+\theta_2^2}

with https://latex.codecogs.com/gif.latex?\rho(h)=0 when https://latex.codecogs.com/gif.latex?h%3E2. We also have the following relationship on the variance of the process,

https://latex.codecogs.com/gif.latex?\gamma(0)=(1+\theta_1^2+\theta_2^2)\sigma^2

With those three equations, for three unknown parameters, https://latex.codecogs.com/gif.latex?\theta_1, https://latex.codecogs.com/gif.latex?\theta_2 and https://latex.codecogs.com/gif.latex?\sigma, we simply have to solve (numerically) that system of equations,

> v=c(as.numeric(acf(Z)$acf[2:3]),var(Z))
> v
[1] 0.1658760 0.3823053 1.6379498
> library(rootSolve)
> seteq=function(x){
+ F1=v[1]-(x[1]+x[1]*x[2])/(1+x[1]^2+x[2]^2)
+ F2=v[2]-(x[2])/(1+x[1]^2+x[2]^2)
+ F3=v[3]-(1+x[1]^2+x[2]^2)*x[3]^2
+ return(c(F1,F2,F3))}
> multiroot(f=seteq,start=c(.1,.1,1))
$root
[1] 0.1400579 0.4766699 1.1461636

$f.root
[1]  7.876355e-10  4.188458e-09 -2.839977e-09

$iter
[1] 5

$estim.precis
[1] 2.605357e-09

We are a bit far away from the true values used to generate our sample. And if we consider 1,000 samples (instead of only one), we still have the bias, and a large variance, for our three estimators,

http://freakonometrics.hypotheses.org/files/2014/01/Capture-d%E2%80%99e%CC%81cran-2014-01-29-a%CC%80-11.34.46.png

  • Using least square techniques

We can try something quite different here. The problem we have is that we do not observe the noise https://latex.codecogs.com/gif.latex?(\varepsilon_t), we only observe our series https://latex.codecogs.com/gif.latex?(X_t). But we can try to rebuild that series (call it https://latex.codecogs.com/gif.latex?(u_t) since we’re not sure it will be a reconstruction of the noise). As suggested in Box & Jenkins (1967), assume that the first two values are null, and then use the recursion

https://latex.codecogs.com/gif.latex?u_t=X_t-\theta_1%20u_{t-1}-\theta_2%20u_{t-2}

We can then use least square techniques, i.e. minimize https://latex.codecogs.com/gif.latex?\sum_t%20u_t^2.

The code will be

> V=function(p){
+ theta1=p[1]
+ theta2=p[2]
+ u=rep(0,length(Z))
+ for(t in 3:length(Z)) u[t]=Z[t]-theta1*u[t-1]-theta2*u[t-2]
+ return(sum(u^2))
+ }

If we try to minimize the sum of the squares of the residuals, we get

> optim(par=c(.1,.1),V)
$par
[1] 0.2751667 0.6723909

$value
[1] 225.8104

$counts
function gradient 
      77       NA 

$convergence
[1] 0

$message
NULL

which is close to the true values. Another good thing is that, if we compare that rebuilt noise with the true one (since we actually have it), then we get essentially the same vector,

> plot(e[800:1000],col="blue",type="l")
> theta1=0.2751667
> theta2=0.6723909
> u=rep(0,length(Z))
> for(t in 3:length(Z)) u[t]=Z[t]-theta1*u[t-1]-theta2*u[t-2]
> lines(1:201,u,col="red")

So far, so good. And if we look at 1,000 samples, we get

It looks like we have some bias here. And since the two estimators should be negatively correlated, one over-estimates, while the other one under-estimates.
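The figure is not reproduced here, but the experiment behind it can be sketched as follows (a minimal version, resetting the parameters to their true values, re-simulating the https://latex.codecogs.com/gif.latex?MA(2) series and re-running the minimization; the variable names are mine),

> theta1=.25; theta2=.7
> nsim=1000
> EST=matrix(NA,nsim,2)
> for(s in 1:nsim){
+ e=rnorm(1000)
+ X=rep(0,1000)
+ for(t in 3:1000) X[t]=e[t]+theta1*e[t-1]+theta2*e[t-2]
+ X=X[800:1000]
+ Vs=function(p){
+ u=rep(0,length(X))
+ for(t in 3:length(X)) u[t]=X[t]-p[1]*u[t-1]-p[2]*u[t-2]
+ return(sum(u^2))}
+ EST[s,]=optim(par=c(.1,.1),Vs)$par
+ }
> apply(EST,2,mean)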

  • Using the (global) maximum likelihood technique

And a final method might be to use the maximum likelihood technique (globally). Again, if we assume that we have a Gaussian i.i.d. noise, then the vector https://latex.codecogs.com/gif.latex?\boldsymbol{Y}=(Y_1,\cdots,Y_t) is Gaussian, with a simple variance matrix (since a lot of elements will be null),

> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ theta1=A[1];  theta2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=1
+ rho[2]=(theta1+theta1*theta2)/(1+theta1^2+theta2^2)
+ rho[3]=(theta2)/(1+theta1^2+theta2^2)
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1+theta1^2+theta2^2)*sigma^2
+ SIG=gamma0*SIG
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
$par
[1] 0.2584144 0.6826530 1.0669820

$value
[1] 298.8699

$counts
function gradient 
      86       NA 

$convergence
[1] 0

$message
NULL

Here, the values that maximize the likelihood (i.e. minimize the negative log-likelihood) are rather close to the ones used to generate our sample. And if we run this algorithm on 1,000 samples, we can see that those estimates are fine,

I could not find other ideas to estimate those parameters. I guess we can use the partial autocorrelation function, since we have relationships that can be related to Yule-Walker equations for https://latex.codecogs.com/gif.latex?AR(p) time series.

Scientific Journalism

A few days ago, Passeur de Sciences announced that the Monde.fr website would host several new science blogs. My neighbor @tomroud summed up the situation perfectly in 140 characters, “more blogs about science on lemonde.fr http://passeurdesciences.blog.lemonde.fr/…. But still no (real) blogs by scientists” (he added a few minutes later “so it seems that the future of science blogging is science journalism. Not sure this is progress…“), and @marc_rr continued the discussion on his blog – tout se passe comme si – with a long and fascinating debate. Since I am not used to posting comments, I would rather write a post on my own blog, because I fully second @tomroud’s view!

For a few days now I have been finalizing an article on academic blogs, and, several times, I have wondered about the distinction between the activity of an academic blogger and that of a science journalist. And the debate of the last few days (I should say of the last few hours) has clarified things a lot for me. I could level many criticisms at @passeursciences (and I already did it once, in Martingale et Journalisme Scientifique, following the publication of an article which, in a few lines, swept aside the work of hundreds of scientists, including two researchers who would receive a Nobel prize a few months later precisely for that work). And many compliments (I love the “Improbablologie“ column, even though I got into the habit of reading the original version on http://www.improbable.com/, because I love @marcabrahams’ humour). And in a comment on @marc_rr’s blog, I had the feeling that the debate was becoming simpler, and that the distinction between scientists and journalists was becoming clearer,

“The researcher-bloggers: you will probably find me harsh, but if I say that I have a hard time finding researchers who blog a/ very regularly (at least once or twice a week), b/ in French and without typos, c/ with an excellent level of popularization (hence without addressing the happy few in jargon) and d/ about current events, it is because that corresponds to reality. But it is probably also because my criteria are too high, or too journalistic” (to be continued…)

The problem of scientific jargon is an old one, which seems to terribly bother journalists, since they do not seem to master it. But I have always been annoyed by this attempt at levelling down (which is why I am taking up the keyboard today). This forced distancing takes a lot away, and quickly makes you lose all credibility. In his A History of Reading, Alberto Manguel quoted a Gaulish poet, Ausonius, who wrote

You have bought books and filled shelves, O lover of the Muses.
Does that mean you are now learned?
If you buy, today, stringed instruments, plectrum and lyre:
Do you believe that the kingdom of music will be yours tomorrow?

Indeed, I think that someone passionate about astronomy does not merely read Ciel et Espace: he will buy a telescope, and try to see the stars by himself. And someone interested in mathematics will go and read Choux Romanesco etc., with all the jargon that goes with it, and will want to go further. When we saw Escher’s figures, with the kids, we tried to understand why it works! So we took our pencils and our compasses, and we got started. Science is something you live. I think that is the big distinction between science journalists and academic bloggers. In an old post, on the importance of do-it-yourself, I had precisely explained that it was important for people to get their hands dirty. I can mention, for instance, another post (born from a discussion with @imparibus) at the time of the presidential elections. I had put graphs and computer code online to do a bit of forecasting, instead of waiting for someone else to do it. In a few days, I am supposed to take part in a (so-called general public) discussion on big data. And one of my points is that it is important to understand what is going on, how the algorithms that supposedly govern us actually work. The conference poster (I will come back to it soon on the blog) associates big data and big brother, and I think this association is (partially) wrong: we think of Big Brother if we feel dispossessed, and I do not think this is the case, provided we understand what can be done with big data, and what remains utopia… On the contrary, we should dive in, and make the jargon our own! I find this capitulation in the face of difficulty unbelievable!

As for talking about the happy few, it is so contemptuous! At the risk of shocking, there are people out there who want technical things, with jargon in them! In the article mentioned at the beginning, @passeursciences was pleased to have had 20 million page views since December 2011 (and not pages read, as the article says; unfortunately, we will never quite know what people do once they land on our pages). I congratulate him. Personally, I am far behind: since December 2012 (I migrated rather late) I barely total 2 million. Being a statistician, I would be the first to avoid giving too much weight to those numbers (especially since I occasionally publish in English, which can increase the number of readers, and many of my posts are resyndicated on blogs such as r-bloggers, architects.dzone or statsblogs – among those I know – which decreases the number of readers), but a quick look says that I have just 5 times fewer page views than a blog hosted by the biggest French daily newspaper!? While putting maths and chunks of code in almost all my posts?! I have a hard time understanding this idea of happy few, I am sorry… That said, I hope that those who do come are indeed happy.

PS: about French without typos, unfortunately I am all alone on my blog, and I have a very hard time proofreading myself. But I promise, one day I will treat myself to the luxury of having someone proofread my posts before I put them online! Ha ha, one can always dream…

Inference for AR(p) Time Series

Consider a (stationary) autoregressive process, say of order 2,

https://latex.codecogs.com/gif.latex?Y_t%20=\varphi_1%20Y_{t-1}+\varphi_2%20Y_{t-2}+\varepsilon_t

for some white noise https://latex.codecogs.com/gif.latex?(\varepsilon_t) with variance https://latex.codecogs.com/gif.latex?\sigma^2. Here is a code to generate such a process,

> phi1=.25
> phi2=.7
> n=1000
> set.seed(1)
> e=rnorm(n)
> Z=rep(0,n)
> for(t in 3:n) Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+e[t]
> Z=Z[800:1000]
> n=length(Z)
> plot(Z,type="l")

Here, we have to estimate two sets of parameters: the autoregressive coefficients https://latex.codecogs.com/gif.latex?\varphi_1 and https://latex.codecogs.com/gif.latex?\varphi_2, and the variance https://latex.codecogs.com/gif.latex?\sigma^2 of the innovation process https://latex.codecogs.com/gif.latex?(\varepsilon_t). Several techniques can be used to estimate those parameters.

  • using least square regression

A natural idea is to see here a regression model, https://latex.codecogs.com/gif.latex?Y_t=\varphi_1Y_{t-1}+\varphi_2Y_{t-2}+\varepsilon_t for https://latex.codecogs.com/gif.latex?t=3,\cdots,n, which can be written, in matrix form, https://latex.codecogs.com/gif.latex?\boldsymbol{Y}=\boldsymbol{X}\boldsymbol{\varphi}+\boldsymbol{\varepsilon}, where the rows of https://latex.codecogs.com/gif.latex?\boldsymbol{X} are the lagged pairs https://latex.codecogs.com/gif.latex?(Y_{t-1},Y_{t-2}).

Here we can run (conditional) ordinary least squares estimation,

> base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
> regression=lm(Y~0+X1+X2,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X1 + X2, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.0268 -0.7063  0.1065  0.6925  3.2566 

Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1  0.23400    0.05463   4.283 2.88e-05 ***
X2  0.62863    0.05476  11.479  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.062 on 197 degrees of freedom
Multiple R-squared:  0.6349,	Adjusted R-squared:  0.6312 
F-statistic: 171.3 on 2 and 197 DF,  p-value: < 2.2e-16

so we get the following estimators for the autoregressive coefficients, and for the volatility of the noise,

> regression$coefficients
       X1        X2 
0.2339959 0.6286321 
> summary(regression)$sigma
[1] 1.061839
  • using Yule-Walker equations

As we’ve seen in class, we can easily get the following equations for the autocovariance functions,

https://latex.codecogs.com/gif.latex?\left\{\begin{array}{l}\gamma(1)=\varphi_1\gamma(0)+\varphi_2\gamma(1)\\\gamma(2)=\varphi_1\gamma(1)+\varphi_2\gamma(0)\end{array}\right.

which can also be written (again, using a matrix expression)

https://latex.codecogs.com/gif.latex?\left[\begin{array}{cc}\gamma(0)&\gamma(1)\\\gamma(1)&\gamma(0)\end{array}\right]\left[\begin{array}{c}\varphi_1\\\varphi_2\end{array}\right]=\left[\begin{array}{c}\gamma(1)\\\gamma(2)\end{array}\right]

So we just have to solve a simple linear system of equations. Note that if we divide by the variance, those equations can be written in terms of the autocorrelation functions,

https://latex.codecogs.com/gif.latex?\left[\begin{array}{cc}1&\rho(1)\\\rho(1)&1\end{array}\right]\left[\begin{array}{c}\varphi_1\\\varphi_2\end{array}\right]=\left[\begin{array}{c}\rho(1)\\\rho(2)\end{array}\right]

The code is the following

> rho1=cor(Z[1:(n-1)],Z[2:n])
> rho2=cor(Z[1:(n-2)],Z[3:n])
> A=matrix(c(1,rho1,rho1,1),2,2)
> b=matrix(c(rho1,rho2),2,1)
> (PHI=solve(A,b))
          [,1]
[1,] 0.2256270
[2,] 0.6315329

Now, we need to extract the estimated innovation process, from this set of parameters

> estWN=base$Y-(PHI[1]*base$X1+PHI[2]*base$X2)
> sd(estWN)
[1] 1.058558

This estimator is probably not the best one (we can take into account that we’ve lost two degrees of freedom), but as a starting point, let us consider this one.
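If we want to take those two lost degrees of freedom into account, one possible (simple) adjustment is to divide the sum of squares by 197, as in the lm() output above, instead of using the standard deviation,

> sqrt(sum(estWN^2)/(length(estWN)-2))

but the difference should be marginal here.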

An alternative could be to include the variance term in the Yule-Walker equations, to get a three-dimensional linear system,

https://latex.codecogs.com/gif.latex?\left\{\begin{array}{l}%20\gamma_0%20=%20\varphi_1%20\gamma_1+\varphi_2%20\gamma_2+\sigma^2\\%20\gamma_1=\varphi_1%20\gamma_0+\varphi_2%20\gamma_1%20\\%20\gamma_2=\varphi_1%20\gamma_1+\varphi_2%20\gamma_0\end{array}\right.

It is not much more complicated to solve, actually,

> gamma0=var(Z[1:n])
> gamma1=var(Z[1:(n-1)],Z[2:n])
> gamma2=var(Z[1:(n-2)],Z[3:n])
> A=matrix(c(gamma1,gamma0,gamma1,gamma2,gamma1,gamma0,1,0,0),3,3)
> b=matrix(c(gamma0,gamma1,gamma2),3,1)
> (PHISIGMA=solve(A,b))
          [,1]
[1,] 0.2283151
[2,] 0.6283431
[3,] 1.1335501
  • using (conditional) likelihood estimators

Finally, we can assume some distribution for the innovation process. The standard model is a Gaussian model, i.e.

https://latex.codecogs.com/gif.latex?Y_t\vert%20Y_{t-1}=y_{t-1},Y_{t-2}=y_{t-2}

has a Gaussian distribution

https://latex.codecogs.com/gif.latex?\mathcal{N}(\varphi_1y_{t-1}+\varphi_2y_{t-2},\sigma^2)

In that case, the conditional log-likelihood (conditional since we condition on the first two observations here) is

> CondLogLik=function(A,TS){
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]; L=0
+ for(t in 3:length(TS)){
+ L=L+dnorm(TS[t],mean=phi1*TS[t-1]+
+ phi2*TS[t-2],sd=sigma,log=TRUE)}
+ return(-L)}

Now, we can run standard optimization procedures,

> LogL=function(A) CondLogLik(A,TS=Z)
> optim(c(0,0,1),LogL)
$par
[1] 0.2339589 0.6285002 1.0565613

$value
[1] 293.3042

$counts
function gradient 
     106       NA 

$convergence
[1] 0

$message
NULL

It is also possible to consider a global maximum likelihood optimisation problem, since the variance matrix of the vector https://latex.codecogs.com/gif.latex?\boldsymbol{Y}=(Y_1,\cdots,Y_t) has a known form.

  • using (unconditional) likelihood estimators

The variance matrix of https://latex.codecogs.com/gif.latex?\boldsymbol{Y}=(Y_1,\cdots,Y_t) is https://latex.codecogs.com/gif.latex?\boldsymbol{\Gamma}=[\gamma(\vert%20i-j\vert)], where the autocovariances are not known, but can easily be computed using a recursive relationship.

> library(mnormt)
> GlobalLogLik=function(A,TS){
+ n=length(TS)
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]
+ SIG=matrix(0,n,n)
+ rho=rep(0,n)
+ rho[1]=1
+ rho[2]=phi1/(1-phi2)
+ for(h in 3:n) rho[h]=phi1*rho[h-1]+phi2*rho[h-2]
+ for(i in 1:n){for(j in 1:n){
+ SIG[i,j]=rho[abs(i-j)+1]}}
+ gamma0=(1-phi2)*sigma^2/((1+phi2)*((1-phi2)^2-phi1^2))
+ SIG=gamma0*SIG
+ return(dmnorm(TS,rep(0,n),SIG,log=TRUE))}
> LogL=function(A) -GlobalLogLik(A,TS=Z)
> optim(c(.1,.1,1),LogL)
Error in chol.default(x, pivot = FALSE) : 
Error in pd.solve(varcov, log.det = TRUE) : 
  x appears to be not positive definite

The problem is that there is a strong constraint on the pair https://latex.codecogs.com/gif.latex?(\varphi_1,\varphi_2) to get a stationary process (we are not far away, here, from the border of the triangle, where the process becomes non-stationary). To be more specific (this was mentioned in a previous post), we should have

https://latex.codecogs.com/gif.latex?\left\{\begin{array}{l}%20\phi_2-\phi_1%3C1%20\\\phi_2+\phi_1%3C1\\%20\vert\phi_2\vert%3C1\end{array}\right.

i.e. in a standard matrix form

https://latex.codecogs.com/gif.latex?\left[\begin{array}{cc}%20+1%20&%20-1%20\\%20-1%20&%20-1%20\\%200%20&%20+1\end{array}\right]\left[\begin{array}{c}%20\varphi_1%20\\%20\varphi_2\end{array}\right]%20%3E%20\left[\begin{array}{c}%20-1%20\\%20-1%20\\%20-1\end{array}\right]

(we can add an additional constraint on the variance parameter, to ensure that it will be positive). To run a constrained optimization routine, consider

> U=matrix(c(1,0,0,-1,0,1,0,-1,0,0,1,0),4,3)
> C=c(0,0,0,-.99999)
> constrOptim(c(.1,.1,1),LogL,grad=NULL,ui=U,ci=C)
$par
[1] 0.2238892 0.6342850 1.0613388

$value
[1] 297.9202

$counts
function gradient 
     108       NA 

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] 0.000189892

(here, to speed things up, we also restrict the parameters to be positive).

  • comparing those estimates

Here, our five estimators are rather close. Let us run more samples to see more precisely how they behave. For the first parameter https://latex.codecogs.com/gif.latex?\widehat{\varphi_1}, we get

and for the second one, https://latex.codecogs.com/gif.latex?\widehat{\varphi_2}, we have

The bias we observe is probably coming from the fact that, with this numerical example, we are not far away from the non-stationary case (the sum of the true parameters should be less than 1, and it is 0.95). When we estimate the parameters, we force them to be inside the triangle, since those parameters can be estimated only if the process is stationary.

Observe that the standard deviation of the innovation process https://latex.codecogs.com/gif.latex?\widehat{\sigma} is well estimated here,

(with clearly some estimators that perform better than others).
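None of the figures are reproduced here, but such a comparison can be sketched with a small Monte Carlo loop (a minimal version, focusing only on the conditional least squares estimator of https://latex.codecogs.com/gif.latex?\varphi_1, with phi1 and phi2 the true values defined above; the other estimators could be stored in the same loop),

> nsim=1000
> est_phi1=rep(NA,nsim)
> for(s in 1:nsim){
+ e=rnorm(1000)
+ X=rep(0,1000)
+ for(t in 3:1000) X[t]=phi1*X[t-1]+phi2*X[t-2]+e[t]
+ X=X[801:1000]
+ est_phi1[s]=lm(X[3:200]~0+X[2:199]+X[1:198])$coefficients[1]
+ }
> hist(est_phi1,probability=TRUE,col="grey")
> abline(v=phi1,lty=2,col="red")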

 

Bias of Hill Estimators

In the MAT8595 course, we’ve seen, yesterday, the Hill estimator of the tail index. To be more specific, we did see that if https://latex.codecogs.com/gif.latex?\overline{F}(x)=C%20x^{-\alpha}, with https://latex.codecogs.com/gif.latex?\alpha%3E0, then Hill estimators for https://latex.codecogs.com/gif.latex?\alpha are given by

https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20=%20\left[\frac{1}{k}\sum_{i=0}^{k-1}%20\log%20X_{n-i,n}%20-\log%20X_{n-k,n}\right]^{-1}
for https://latex.codecogs.com/gif.latex?k\in\{1,2,\cdots,n\}. Then we did say that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k satisfies some consistency in the sense that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20\overset{\mathbb{P}}{\rightarrow}%20\alpha if https://latex.codecogs.com/gif.latex?k\rightarrow\infty, but not too fast, i.e. https://latex.codecogs.com/gif.latex?k/n\rightarrow0 (under additional assumptions on the rate of convergence, it is possible to prove that https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k%20\overset{a.s.}{\rightarrow}%20\alpha). Further, under additional technical conditions

https://latex.codecogs.com/gif.latex?\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal%20L}{\rightarrow}\mathcal{N}(0,\alpha^2)

In order to illustrate this point, consider the following code. First, let us consider a Pareto survival function, and the associated quantile function

> alpha=1.5
> S=function(x){ifelse(x>1,x^(-alpha),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

The code here is obviously too complicated, since this power function can easily be inverted. But later on, we will consider a more complex survival function. Here are the survival function, and the quantile function,

> u=seq(0,5,by=.01)
> plot(u,Vectorize(S)(u),type="l",col="red")
> u=seq(0,99/100,by=.01)
> plot(u,Vectorize(Q)(u),type="l",col="blue",ylim=c(0,20))

Here, we need the quantile function to generate a random sample from this distribution,

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

Hill plot is here

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

We can now generate thousands of random samples, and see how those estimators behave (for some specific https://latex.codecogs.com/gif.latex?k‘s).

> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }

and if we compute the average,

> plot(15*(1:10),apply(HillK,2,mean))

we do get a series of estimators that can be considered as unbiased.

So far, so good. Now, recall that being in the max-domain of attraction of the Fréchet distribution does not mean that https://latex.codecogs.com/gif.latex?\overline{F}(x)=C%20x^{-\alpha}, with https://latex.codecogs.com/gif.latex?\alpha%3E0, but it means that

https://latex.codecogs.com/gif.latex?\overline{F}(x)=%20x^{-\alpha}%20\mathcal{L}(x)

for some slowly varying function https://latex.codecogs.com/gif.latex?\mathcal{L}, not necessarily constant! In order to understand what could happen, we have to be slightly more specific. And this can be done only by looking at the second order regular variation property of the survival function. Assume, here, that there is some auxiliary function https://latex.codecogs.com/gif.latex?a such that

https://latex.codecogs.com/gif.latex?\lim_{t\rightarrow\infty}\frac{\overline{F}(xt)/\overline{F}(t)-x^{-\alpha}}{a(t)}=x^{-\alpha}\frac{1-x^{-\beta}}{\beta}

This (positive) constant https://latex.codecogs.com/gif.latex?\beta is – somehow – related to the speed of convergence of the ratio of the survival functions to the power function (see e.g. Geluk et al. (2000) for some examples).

To be more specific, assume that

https://latex.codecogs.com/gif.latex?\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot%20%20x^{-\alpha}

then, the second order regular variation property is obtained using https://latex.codecogs.com/gif.latex?a(t)=\beta%20t^{-\beta}, and if https://latex.codecogs.com/gif.latex?k goes to infinity too fast, the estimator will be biased. More precisely (see Chapter 6 in Embrechts et al. (1997)), if https://latex.codecogs.com/gif.latex?k=O(n^{2\beta/(\alpha+2\beta)}), then, for some https://latex.codecogs.com/gif.latex?\lambda%3E0,

https://latex.codecogs.com/gif.latex?\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal%20L}{\rightarrow}\mathcal{N}\left(\frac{\alpha^3}{\beta-\alpha}\lambda,\alpha^2\right)

The intuitive interpretation of this result is that if https://latex.codecogs.com/gif.latex?k is too large, and if the underlying distribution is not exactly a Pareto distribution (and we do have this second order property), then Hill’s estimator is biased. This is what we mean when we say

  • if https://latex.codecogs.com/gif.latex?k is too large, https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k is a biased estimator
  • if https://latex.codecogs.com/gif.latex?k is too small, https://latex.codecogs.com/gif.latex?\widehat{\alpha}_k is a volatile estimator

(the latter comes from properties of a sample mean: the more observations, the lower the volatility of the mean).
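To fix orders of magnitude in the numerical illustration below (where https://latex.codecogs.com/gif.latex?\alpha=1.5, https://latex.codecogs.com/gif.latex?\beta=0.5 and https://latex.codecogs.com/gif.latex?n=500), note that https://latex.codecogs.com/gif.latex?n^{2\beta/(\alpha+2\beta)}=500^{0.4}\approx%2012. The constant hidden in the https://latex.codecogs.com/gif.latex?O(\cdot) is of course unknown, so this is only indicative, but the values of https://latex.codecogs.com/gif.latex?k used in the simulation (from 15 to 150) are already of that order, or larger, so some bias should be expected.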

Let us run some simulations to get a better understanding of what’s going on. Using the previous code, it is actually extremely simple to generate a random sample with survival function

https://latex.codecogs.com/gif.latex?\overline{F}(x)=\underbrace{C(1+x^{-\beta})}_{\mathcal{L}(x)}\cdot%20%20x^{-\alpha}

> beta=.5
> S=function(x){ifelse(x>1,.5*x^(-alpha)*(1+x^(-beta)),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}

We can then reuse the code above. Here, with

> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))

the Hill plot becomes

> library(evir)
> hill(X)
> abline(h=alpha,col="blue")

But it’s based on one sample, only. Again, consider thousands of samples, and let us see how Hill’s estimator is behaving,
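The second simulation simply reuses the loop above, with the new quantile function Q (HillK2 is just a new name for the storage matrix),

> ns=10000
> HillK2=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK2[s,]=Vectorize(hillk)(15*(1:10))
+ }
> plot(15*(1:10),apply(HillK2,2,mean))
> abline(h=alpha,col="blue")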

so that the (empirical) mean of those estimators is

Defining Properly MA(∞) Time Series

In order to properly define https://latex.codecogs.com/gif.latex?MA(\infty) series, we need to get back on some properties of infinite sequences, as briefly mentioned yesterday in the MAT8181 course. Consider some sequence https://latex.codecogs.com/gif.latex?(a_i)_{i\in\mathbb{N}}. The sequence is said to be summable if

https://latex.codecogs.com/gif.latex?S_n=\sum_{i=0}^n%20a_i

is convergent, i.e. if the limit of https://latex.codecogs.com/gif.latex?S_n exists when https://latex.codecogs.com/gif.latex?n\rightarrow\infty.

From the Cauchy criterion, https://latex.codecogs.com/gif.latex?\sum%20a_i converges if and only if for each https://latex.codecogs.com/gif.latex?\eta%3E0, there is https://latex.codecogs.com/gif.latex?m\in\mathbb{N} for which

https://latex.codecogs.com/gif.latex?\vert%20a_i+a_{i+1}+\cdots+a_{j-1}+a_j\vert%3C\eta

when https://latex.codecogs.com/gif.latex?%20i,j%3Em. The sequence https://latex.codecogs.com/gif.latex?(a_i)_{i\in\mathbb{N}} is said to be absolutely summable if

https://latex.codecogs.com/gif.latex?%20\sum_{i=0}^\infty%20\vert%20a_i\vert%20%3C\infty

and square-summable if

https://latex.codecogs.com/gif.latex?%20\sum_{i=0}^\infty%20a_i^2%20%3C\infty

Observe that absolute summability will imply square summability (since for https://latex.codecogs.com/gif.latex?j‘s large enough https://latex.codecogs.com/gif.latex?%20\vert%20a_j\vert%20\leq1, and then https://latex.codecogs.com/gif.latex?%20a_j^2\leq\vert%20a_j\vert)

Consider now some https://latex.codecogs.com/gif.latex?MA(\infty) time series

https://latex.codecogs.com/gif.latex?%20X_t=\sum_{h=0}^\infty%20\theta_h%20\varepsilon_{t-h}

If the sequence of coefficients https://latex.codecogs.com/gif.latex?%20(\theta_i) is square-summable, then

https://latex.codecogs.com/gif.latex?%20S_T%20=%20\sum_{h=0}^T%20\theta_h%20\varepsilon_{t-h}

converges in https://latex.codecogs.com/gif.latex?%20L_2 to some random variable as https://latex.codecogs.com/gif.latex?%20T\rightarrow\infty. This can be proved easily using the Cauchy criterion, in the sense that for any https://latex.codecogs.com/gif.latex?\eta%3E0, there is a https://latex.codecogs.com/gif.latex?%20T large enough such that, for any https://latex.codecogs.com/gif.latex?%20h,

https://latex.codecogs.com/gif.latex?\mathbb{E}\left[(S_{T+h}-S_T)^2\right]=\sigma^2\sum_{i=T+1}^{T+h}\theta_i^2%3C\eta

In that case, if the sequence of coefficients https://latex.codecogs.com/gif.latex?%20(\theta_i) is square-summable, then https://latex.codecogs.com/gif.latex?%20(X_t) is stationary (in the https://latex.codecogs.com/gif.latex?%20L_2 sense) since the process is centered, and

https://latex.codecogs.com/gif.latex?\gamma(h)=\sigma^2%20\cdot%20\sum_{i=0}^\infty%20\theta_i%20\theta_{i+h}

for all https://latex.codecogs.com/gif.latex?h\in\mathbb{N}.

Further, ergodicity of the time series, defined as the absolute summability of the autocovariance sequence, is obtained when the sequence of coefficients https://latex.codecogs.com/gif.latex?%20(\theta_i) is absolutely summable.
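As a quick (and informal) numerical check of the autocovariance formula above, we can truncate an absolutely summable sequence of coefficients, say https://latex.codecogs.com/gif.latex?\theta_h=0.8^h, simulate a long series, and compare the empirical lag-one autocovariance with https://latex.codecogs.com/gif.latex?\sigma^2\sum_i\theta_i\theta_{i+1} (here https://latex.codecogs.com/gif.latex?\sigma^2=1, so both quantities should be close to https://latex.codecogs.com/gif.latex?0.8/(1-0.8^2)\approx%202.22),

> theta=.8^(0:100)
> set.seed(1)
> e=rnorm(1e5)
> X=as.numeric(filter(e,theta,method="convolution",sides=1))
> X=X[!is.na(X)]
> cov(X[2:length(X)],X[1:(length(X)-1)])
> sum(theta[1:100]*theta[2:101])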

Museum(s)

On Saturday afternoon, taking advantage of the (relative) warming, I walked with my daughter (the little one; the older kids are in Chile for the week) to the Museum of Fine Arts. Hanging out in a museum at the end of an afternoon must be one of the things I enjoyed most back when I was a student in Paris. But Montréal is not Paris (nor New York, as a colleague, a museum lover, pointed out to me). Still, the Montréal Museum of Fine Arts let me discover Dale Chihuly last summer (a surprising discovery). On Saturday, I thought we might go and see Peter Doig, but I remembered that the publisher la Pastèque was exhibiting some fifteen artists, and I felt like having a look.

The idea of the exhibition – it only came back to me in the middle of the visit – was that each artist had to select a piece from the museum, and… build on it. Some fifteen artists for the fifteen years of the publishing house. An interesting idea, isn’t it?

You enter the exhibition by discovering a few plates by Michel Rabagliati. He is somewhat the star of the exhibition, so I looked at them rather absent-mindedly, since I was mostly hoping to discover artists I did not know. But in 5 or 6 plates, I must admit that, once again, Michel blew me away. We discover a nice little story, in Spanish, about a poster hanger whose badly glued poster flies away with the wind, leaving only a fragment behind… From memory, an art critic discovers the remaining fragment,

and goes into raptures, hailing it as a work of genius, even though he has never seen the rest of the poster…

it is only when you turn around that you discover the fragment of the poster that remained… the baby sardine facing its mother, as my daughter explained to me. Ah, yes, it is a painting by Joan Miró. The idea is brilliant! Really! But what touched me most, I think, was the (filmed) discussion with Pascal Girard, about his encounter with bears, inspired by a drawing by Sarni (Sharni) Pootoogook, an Inuit artist, dating from the 60s,

I do not know whether it is the memory of my Californian holidays last summer (where we did, indeed, come across bears while hiking), or the reflection around “did I really see a bear? did it really happen the way I tell it? do I still remember what I saw, or has the story I told taken over from what I actually experienced?” that troubled me… but I remained glued to the video and to Pascal Girard’s discussion.

All his questioning, I live it on my blogs, where I spend my time trying to put my daily life into some kind of shape (maybe a bit too much, at times), exactly like Pascal Girard. Keeping a sense of proportion, of course, because I am not an artist! That said, when I was talking with Julien Prévieux (to prepare the Biennale d’Art Contemporain, in Rennes, in 2010), I told myself – several times – that artists and scientists have a lot in common.

I had forgotten how many questions a quick 30-minute visit to a museum can raise. I cannot wait to take the older kids there when they come back…

Triangle for Parameters of AR(2) Stationary Processes

We’ve seen yesterday the conditions on https://latex.codecogs.com/gif.latex?(\phi_1,\phi_2) under which the canonical https://latex.codecogs.com/gif.latex?AR(2) process, https://latex.codecogs.com/gif.latex?(X_t), satisfying

https://latex.codecogs.com/gif.latex?X_t=\phi_1%20X_{t-1}+\phi_2%20X_{t-2}+\varepsilon_t

has a stationary solution. The condition is rather simple, since https://latex.codecogs.com/gif.latex?(\phi_1,\phi_2) should lie in a triangular region. But the proof is a bit more tricky…

Recall that we want to parametrize the region

https://latex.codecogs.com/gif.latex?\{(\phi_1%20,\phi_2)\in\mathbb{R}^2:%201-\phi_1%20z-\phi_2%20z^2\neq%200,\forall%20z\in\mathbb{C},\vert\vert%20z\vert\vert%20\leq%201\}

Since we have a true https://latex.codecogs.com/gif.latex?AR(2) process, then https://latex.codecogs.com/gif.latex?\phi_2\neq%200. Our polynomial is here

https://latex.codecogs.com/gif.latex?\Phi(z)=1-\phi_1%20z-\phi_2%20z^2=\left(1-\frac{z}{\lambda_1}\right)\left(1-\frac{z}{\lambda_2}\right)

where https://latex.codecogs.com/gif.latex?\lambda_i‘s are the roots – in https://latex.codecogs.com/gif.latex?\mathbb{C} – of https://latex.codecogs.com/gif.latex?\Phi(\cdot). Consider now some kind of dual version of that polynomial,

https://latex.codecogs.com/gif.latex?\tilde\Phi(z)=\left(1-z\lambda_1\right)\left(1-z\lambda_2\right)=1+\frac{\phi_1}{\phi_2}z-\frac{1}{\phi_2}z^2

Having the roots of https://latex.codecogs.com/gif.latex?\Phi(\cdot) outside the unit circle is the same as having the roots of https://latex.codecogs.com/gif.latex?\tilde\Phi(\cdot) inside the unit circle. Observe that we can write

https://latex.codecogs.com/gif.latex?\tilde\Phi(z)=-\frac{1}{\phi_2}(\underbrace{z^2-\phi_1%20z-\phi_2}_{\bar{\Phi}(z)})

Roots of https://latex.codecogs.com/gif.latex?\bar{\Phi}(\cdot) are then

https://latex.codecogs.com/gif.latex?\xi%20=%20\frac{1}{2}\left(\phi_1\pm\sqrt{\phi_1^2+4\phi_2}\right)

From this point, we should discuss a little bit, depending on the value of https://latex.codecogs.com/gif.latex?\Delta=\phi_1^2+4\phi_2.

  • if https://latex.codecogs.com/gif.latex?\Delta=\phi_1^2+4\phi_2=0

Then there is one root (https://latex.codecogs.com/gif.latex?\xi=\phi_1/2), and only one. It should lie inside the unit circle, so we need to have https://latex.codecogs.com/gif.latex?\vert\phi_1\vert%20%3C2 or, equivalently, https://latex.codecogs.com/gif.latex?\phi_2%3E-1.

  • if https://latex.codecogs.com/gif.latex?\Delta=\phi_1^2+4\phi_2%3E0

Then we got roots in https://latex.codecogs.com/gif.latex?\mathbb{R}, and

https://latex.codecogs.com/gif.latex?-1%3C%20\frac{1}{2}\left(\phi_1\pm\sqrt{\phi_1^2+4\phi_2}\right)%3C%201

means, equivalently, that

https://latex.codecogs.com/gif.latex?\phi_2%3E-1%20\%20;%20\%20\phi_2-\phi_1%3C1%20\%20;%20\%20\phi_2+\phi_1%3C1

  • if https://latex.codecogs.com/gif.latex?\Delta=\phi_1^2+4\phi_2%3C0

Then we have two (conjugate) roots in https://latex.codecogs.com/gif.latex?\mathbb{C}, and the squared norm of those roots is https://latex.codecogs.com/gif.latex?\vert\vert\xi\vert\vert^2=-\phi_2. Thus, https://latex.codecogs.com/gif.latex?\phi_2%3E-1.

We get what was mentioned in the course: the canonical https://latex.codecogs.com/gif.latex?AR(2) has a stationary solution if, and only if

https://latex.codecogs.com/gif.latex?\left\{\begin{array}{l}%20\phi_2-\phi_1%3C1%20\\\phi_2+\phi_1%3C1\\%20\vert\phi_2\vert%3C1\end{array}\right.

which is a triangular region, see
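The figure is not reproduced here, but a minimal R sketch to visualize that triangular region (the dashed parabola https://latex.codecogs.com/gif.latex?\phi_1^2+4\phi_2=0 separating real roots, above, from complex roots, below) could be

> phi1=seq(-2,2,by=.01)
> plot(phi1,phi1,type="n",ylim=c(-1.2,1.2),xlab=expression(phi[1]),ylab=expression(phi[2]))
> polygon(c(-2,0,2),c(-1,1,-1),col="grey90",border="black")
> lines(phi1,-phi1^2/4,lty=2)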

Precision, with Imprecise Words

This morning, after my course on extreme values, some students did show me a question they got from practicals they were supposed to work on, with undergraduate students:

To be more specific, they wanted some feedback about point B. Now, let’s make it clear: I have no idea what “precision” and “variation” could mean… But let’s try and see if we can get something useful, that might help to understand the question. In order to illustrate, consider the following regression model,

> plot(cars,pch=19,col="black",cex=.8)
> abline(lm(dist~speed,data=cars),lty=2)

Here is the summary table of the linear regression model

> summary(lm(dist~speed,data=cars))

            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -17.5791     6.7584  -2.601   0.0123 *  
speed         3.9324     0.4155   9.464 1.49e-12 ***

My first idea was that “variation of the X’s” should be related to the “variance” of the explanatory variable. But it is stupid. For instance, if we transform the explanatory variable, say with a multiplicative factor of 100, then the variance of X will be 10,000 times larger. And the regression will be the same

> cars100=cars
> cars100$speed=100*cars$speed
> plot(cars100,pch=19,col="black",cex=.8)
> abline(lm(dist~speed,data=cars100),lty=2)

in the sense that

> summary(lm(dist~speed,data=cars100))

             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -17.57909    6.75844  -2.601   0.0123 *  
speed         0.39324    0.04155   9.464 1.49e-12 ***

And similarly if we divide by 100. So, I guess using some affine transformation of the explanatory variable is clearly not the way we should get a variable with more “variability”. Let us try something else. And keep in mind the following quantities,

> var(cars$speed)
[1] 27.95918
> sd(cars$speed)/mean(cars$speed)
[1] 0.3433535

with the variance, and the coefficient of variation. Consider the following modified dataset,

> carsg=cars
> carsg$speed[12]=8
> carsg$speed[23]=25
> carsg$speed[34]=24
> carsg$speed[39]=12

Four values were changed, here. Observe that, somehow, there is more variability

> var(carsg$speed)
[1] 31.84694
> sd(carsg$speed)/mean(carsg$speed)
[1] 0.3640845

But if we consider the output of the regression model, we get

> summary(lm(dist~speed,data=carsg))

            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -18.5681     5.3621  -3.463  0.00113 ** 
speed         3.9708     0.3254  12.201 2.55e-16 ***

It looks like we get more precision here on the slope, with a smaller variance, and a larger Student t-value. But what if we consider the following transformation,

> carsg=cars
> carsg$speed[11]=5
> carsg$speed[21]=25
> carsg$speed[31]=25
> carsg$speed[50]=7

Again, we have more variability here, on the explanatory variable,

> var(carsg$speed)
[1] 32.9898
> sd(carsg$speed)/mean(carsg$speed)
[1] 0.3754036

But this time,

> summary(lm(dist~speed,data=carsg))

            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -1.5078     8.0498  -0.187    0.852    
speed         2.9077     0.4932   5.896 3.61e-07 ***

the estimator of the slope has more variance, and we have a smaller Student t-value. So here, if we increase the “variability” of X, we can get… almost anything. The intuition about those two transformations is relatively simple. In the first case, I moved observations that were far away from the regression line – but in the center of the distribution of X – closer to the regression line, and more towards the border of the sample (to increase the variance)

(I would not call them outliers since outliers are defined as observations far away from the model, but on Y, not on X). In the second case, I did exactly the opposite.
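One way to see why “more variability in X” is not enough, by itself, is to recall the textbook formula for the standard error of the slope in a simple linear regression, https://latex.codecogs.com/gif.latex?\widehat{\sigma}/\sqrt{\sum_i(x_i-\bar{x})^2}: increasing the spread of the x’s helps only if the residual standard error https://latex.codecogs.com/gif.latex?\widehat{\sigma} does not increase at the same time. We can check that formula on the original dataset (it gives back the 0.4155 reported in the first summary table),

> reg=lm(dist~speed,data=cars)
> summary(reg)$sigma/sqrt(sum((cars$speed-mean(cars$speed))^2))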

I am not sure I understood this sentence correctly. But it looks like it is incorrect. Since there is only one false statement here, I will go for this one. What do you think?

Academic Tweets

Almost four years ago, I tweeted for the very first time in my life:

The goal was to mention a post on my blog (on optimal control). 15,000 tweets later, I wanted to get back on my experience on Twitter, trying to explain why I tweet, and why academics have some kind of legitimacy to be on Twitter (from my understanding of the goal of such a platform), and why they should use it, by having a Twitter account.

  • How does Twitter work?

I guess I should explain what Twitter is, in case some readers do not know it. “Trying to explain Twitter to the non-user has become something like the tech world’s Arthurian challenge, a seemingly impossible task that no-one is able to fully complete” as explained in Theorizing Twitter: Narratives and Identity. But let me try. Rules on Twitter are simple,

  1. you need to create an account, on http://twitter.com/. You have 160 characters for a short bio, and you can use an avatar
  2. then you can start tweeting, i.e. posting online messages with 140 characters
  3. you can also be passive, and simply follow some discussions… you can type something in the search window, and then you will see all the tweets containing that word (or that sentence), like why do people tweet. It is also possible to follow some hashtags (following the # symbol), or some people (following the @ symbol), such as @freakonometrics (that’s me).
  4. if you like my tweets (posted under the name @freakonometrics), you can follow me, and I will appear in your so-called TL (or timeline). You will become my follower (of course, if I find your tweets interesting, I might follow you back).
  5. you can also include a picture (one, only). Since a link will be mentioned in your tweet (automatically generated by Twitter), you will have less than 140 characters to tweet.
  6. you can add html links, that will be shortened by Twitter.
  7. if you include a link on a video (say on http://vimeo.com/) or a song (say on https://soundcloud.com/), then, the object will appear directly in the tweet: followers will see the movie, not only the tweet with a link, see e.g.

Then, there are (almost official) codes

  1. you can retweet a tweet, which means forwarding another user’s tweet to all of your followers. RTs are not endorsements (neither are tweets actually)… You can use the retweet button, or use the old-fashioned RT (for re-tweet) or MT (for modified tweet) if you had to shorten a tweet. It is important to mention how you got some information, in case you want to share it with your followers (it is called a mention, and it is like using a reference in a research paper). Another way of acknowledging the account that originally shared the content being tweeted is to HT the account (hat tip)
  2. you can reply to a tweet using the reply button. If you reply to one of my tweets, your tweet will start with @freakonometrics and only your followers also following me will see it in their tweet list. If you want to share the answer with everyone, the tweet should start with a dot, .@freakonometrics. It is also possible to send direct (private) messages… It is also possible to poke another account if you want to make sure that someone reads your tweet.
  3. it is possible to favorite a tweet, by clicking the yellow star next to the message. If a lot of the people I follow favorite a tweet, it will be more likely to appear in the discover window, even if I do not follow that account. It might also be interesting to favorite tweets since they can appear in some widgets you have on your blog. Keep in mind that some tweets are promoted, meaning that some company pays to have more exposure… promoted and favorited are quite different…
  4. there are some strange customs, such as the #FF for follow Friday. On Friday, you can share usernames of your favorite twitterers, the accounts you find interesting. I am not a social person, so I do not use that, and I do not know how to react when someone #FF me (usually I simply favorite, which is a simple way to say thank you without tweeting a stupid “thanks“)
  5. because of the 140 character constraint, a lot of strange words can be used on Twitter, such as “OH” which most often means “overheard“… just go on google to find more…

and there are more unofficial codes (here, it will be rather subjective, so comments are opened if you want to criticize)

  1. when someone RTs, MTs or HTs one of your tweets, it is common to favorite the tweet. Maybe common is not appropriate here: it is something people with a lot of followers did to me a few years ago, which helped me, somehow, to get more popular, and I also try to do it.
  2. do not steal tweets: you should always mention the account that originally shared the content. From an ethical point of view, that’s an obvious statement. But from a technical point of view, with the 140 character rule, it can be complex. I mean, if you want to add “HT @freakonometrics” you have only 120 characters left, so either you shorten the tweet, or you keep the tweet as it was and you tweet right after something like “previous tweet, HT @freakonometrics
  3. it is possible to delete a tweet. I do not know if there are rules here, because I’ve seen a lot of tweets with invalid urls, or big typos… I do delete some tweets. My rules are: (1) if I see a typo (or a problem with the url), I have the right to correct it, so I can write a new tweet, and delete the previous one; (2) if a tweet arouses controversy (and that was not the goal: with 140 characters, I got misunderstood), then I delete the tweet: I am not on Twitter to argue, only to share some content (we’ll get back on that point later on).

See http://ryan.cordells.us/s13dh/, or An Introduction to Social Media for Scientists, for some advice on how to start  tweeting. So now that we’ve seen how Twitter works, how can we use it? As academics.

  • Are Academics on Twitter?

Only « one in forty scholars are active on Twitter » as estimated in Priem et al. in 2011. And let’s face it honestly: academics are mainly skeptical about Twitter. For most of them, it’s for their kids, or perhaps for graduate students (if they have some ideas about what Twitter is). But not researchers. I follow (and am followed by) a lot of graduate students, PhDs or postdocs, and a few more senior researchers. Most of those senior, prominent scientists share extremely interesting information! I do not discuss the quality here, simply the fact that, indeed, not everyone is willing to go on Twitter. And be active.

But if not everyone is on Twitter, important people are. At least, in Economics. In January, I wanted to go to the conference of the American Economic Association, in Philadelphia, but I could not do it. That’s where the job market for PhD students in Economics took place. But it was extremely active on Twitter (you got frequent updates by following the #ASSA2014 hashtag). From my perspective (maybe also because of the people I follow), it looked like everyone there was on Twitter during that (major) event.

  • Twitter as a Bookmark

I do spend a lot of time online, reading articles, for work, for fun, and sometimes, I’d like to keep track of those articles, or posts on blogs, or articles on http://arxiv.org/, etc. The first motivation to use Twitter is that I need bookmarks. With Twitter, those bookmarks are public. It does not mean that I endorse what I read, it should be understood as « I found that interesting, and I want to bookmark it, to find it, someday ». A few years ago, it was difficult to get back old tweets, like those I posted in 2010 or 2011, so I decided to start my Somewhere Else chronicle on this blog: I simply post all the tweets I wrote, to mention readings worth reading, outside my blog, somewhere else. Like that one:

It is clearly not perfect. I got a lot of tweets with questions following that tweet (but I did not answer them, I shared the graph, I did not create it). The point was « that seems to be interesting, and I’d be glad to find it if someday I want to spend more time on linguistic issues ».

Like most academics, I do read a lot. And one of our duties is to share information… This was mentioned in Priem and Costello (2010), « the professional impact of Twitter may be particularly pronounced for scholars given that sharing information is a central component of their work » (see also Letierce et al. (2010)). They estimated that 30% of the tweets sent by academics contain a hyperlink to a peer-reviewed resource (usually a pdf file of a research paper). And it is not necessarily a paper published online 24 hours ago: it can also be a paper rediscovered accidentally, or a technical report written a few decades ago that has just been scanned. @coulmont went back on a « strange experience » (as he defined it) a few months ago, when I (re)discovered a post published on his blog almost one year before, tweeted it, and then a buzz started on that old post. At least, that’s what he told me, we’ll never know for sure if this was because of me, or not (I doubt it actually, there were a lot of good reasons to rediscover that post).

Traditionally, academic visibility is measured using citations, meaning that some work has been accepted by (so-called) peers, in the scientific community. Academics need to write to have an impact. But a lot of time is spent on reading. This reading activity is missed by standard citation counts (unless you publish a review, for instance). On Twitter, you can comment, even (briefly) discuss a publication, some tricks on computer codes, share graphical visualization, etc. It is not like posting an anonymous comment on some scientific blog (actually, several blogs do not have open comments any more). On Twitter, there is some kind of credibility, and not only from the academic resume: people know you and follow you. According to Scott Wagers, « good content is propagated rapidly, bad content is not ». Of course, it is not that simple. There is a time for tweeting, clearly. If your tweet gets picked up by someone with a lot of followers, then things might grow exponentially fast, and within a few hours, a tweet can be RTd a dozen, a hundred, a thousand times (we’ll get back on the viral effect of Twitter at the end of this post).

  • Live-Tweet in Conferences

Another popular use among academics is to use Twitter for live-tweets, see e.g. How People are using Twitter during conferences or the interesting “who is going to read 12,000 tweets?!“. But as mentioned on a lot of blogs, one should be careful about live-tweeting. As recalled in Live-tweeting at academic conferences, « with great power comes great responsibility ». Live tweeting is supposed to be fun, but stay polite, and respectful. And use quotation marks. Getting back on the so-called Twittergate (see An Idea is a Dangerous Thing to Quarantine #twittergate), Aaron Bady used that interesting image about Twitter within the academic community: « I conjured up the image of an appropriate cantankerous old professor yelling at a bunch of punk tweeters to get off his lawn, like Clint Eastwood in Gran Twittarino ».

Now, to be honest, I have been involved in some live-tweet only once, while I was giving a talk, at the World Social Science Forum, a few months ago

But I usually do not live-tweet, I do not feel comfortable with it (I prefer to take notes in my book, even if I might also write a post on my blog later on, but I always ask the speaker if I can quote what he or she said). « Some worried that having someone tweet their insights before they publish might increase the likelihood that they will be scooped by a colleague — although others regarded that notion as slightly paranoid » as mentioned in the Academic Twitterazzi. « The debate over live tweeting at conferences is, in many ways, about control and access: who controls conference space, presentation content, or access to knowledge? » wrote Roopika Risam in Conference Live Tweets: Twitter Good or #Twittergate?.

Beyond those pseudo-ethical considerations, there is also a more practical reason: in mathematics, it is quite difficult to live-tweet. I mean, it is difficult to write equations on Twitter, and a graph without the formal model is usually useless. There might be some interest when there are computation issues, to share a visualization for instance (there were interesting experiences in R conferences this summer, e.g. in Lyon).

  • Twitter to Meet People

It is also possible to start discussions on Twitter. But again, I am not a social guy, so I usually do not like that. I mean, if I share a link to an article, it is because I found it interesting. If someone wants me to go further, or to discuss it, why not…. but Twitter is not the place. I prefer my blog, where I do not have the 140 character constraint.

An important aspect of Twitter is that it speeds up connections between scientists. There is nothing new here. Traditionally, scientists have always interacted with other scientists, in sort of one-to-one interactions, attending seminars, conferences, discussions with colleagues. Using the words of Priem et al. (2013), « informal conversations have moved out of the faculty lounge to online social media platforms », such as Twitter. One of the interests is to join somehow a larger « virtual department » with colleagues that are not next door, but who might be far away and in other areas of research. One can even discuss with real people, outside academia. Since I have interests in risk modeling, finance, climate, computer science, mathematics, I can also discuss with people working on stock markets, in insurance companies, in data visualization startups, even journalists. The awesome point is that it becomes possible to interact with open-minded researchers. As mentioned in Fox (2012) – slightly changing the title – « blogging [and micro blogging] changed how economists share ideas ».

The first step in the scientific process is to find ideas, new ideas or concepts to investigate, datasets to describe. Following interesting people on Twitter can be that first step. The final step is to communicate findings, and to disseminate. The time when researchers were scanning the tables of contents of journals to find interesting articles is behind us. When disseminating on a blog, we can share code, graphs, datasets, links to additional material. On Twitter, we have to deal with the 140-character constraint, which makes it hard. One idea can be to use a nice visualization, a graph, a map.

  • Personal versus Institutional Accounts

On Twitter, I mainly follow researchers, and only a few institutional accounts. For instance, I like @HarvardBiz, to get updates about their blog and recent articles. It’s like using an RSS feed (except that I am not a big fan of RSS). I might also confess that I was asked, a long time ago, to be the Twitter manager of the @StatFr account, of the French Statistical Society. But I quickly faced two problems: it is very difficult to be active on two accounts at the same time, especially when they share the same goal (here, it was just tweeting links to interesting articles related to statistics – as well as activities of the association). And it is difficult to get a clear guideline. I mean, on my Twitter account, I tweet whatever I like. After a few days experiencing the @StatFr account, I had to argue with the President of the Society because of a link I mentioned in one of my tweets, which was too controversial (but, from my point of view, raised interesting statistical issues). So I have to confess I prefer to follow people, more than institutions or groups.

On the other hand, in All you can tweet, the academics behind the Nature Chemistry Twitter account (@NatureChemistry) looked back on four years of experience. Among the lessons learnt, collectively, it was mentioned that with the 140-character constraint, « clarity is a virtue », which is indeed one of the things you learn with Twitter. Furthermore, they mention that following important researchers in your scientific community on Twitter is interesting. Not only can you discover interesting information; with personal accounts, you can also learn more personal things. For instance, it’s always a pleasure to get news and updates from Emiliano,

 

  • The Impact of Twitter in Standard Process of Academic Dissemination

For academics still skeptical about the use of Twitter, I should probably mention Shuai et al. (2012), which showed (based on 4,600 papers) that papers mentioned on Twitter are downloaded more, and cited more (see also Eysenbach (2011)). Dissemination using Twitter can help to reach other researchers in your area, but also journalists, people working in the industry or for governmental organizations, even members of parliament… It is hard to follow the sequence, but here is how it looks

publication of an academic paper
(in a journal, or on arxiv)

tweet mentioning the paper

mention in a popular blog

mention in a newspaper

citations
(etc.)

Things can go viral easily with Twitter (again, we’ll get back to this point more specifically soon). But still. It is difficult to clearly understand what happened (in the newspaper, the researchers are usually mentioned, as well as the blog sharing the information, but that’s all…)

  • Having Fun on Twitter

A few weeks ago, I saw a nice graph, on a blog

and I wanted to reproduce it. With a code as simple as possible. Sure, it’s a geek’s thing to try to write code as short as possible. But that’s fun, really

 

It was worth writing a post on my blog, and with a couple of tweets, I keep track of that interesting mathematical problem.

People on Twitter also try – sometimes – to have fun, following some hashtags, such as, a few weeks ago,

(see also here for a collection of great tweets) or more recently #SixWordPeerReview,

  • Gaining Time to Discover Interesting Information

So, clearly, you can save time discovering interesting publications, simply by following interesting people. But, as you may imagine, the difficult point is to keep up with your TL. If you follow, say, 400 people, and each of them tweets, on average, 5 times a day (including RTs), that makes 2,000 tweets to read if you go on Twitter once a day. Much more if you go on Twitter once a week. To save some time, it is possible to use dedicated websites such as http://tweetedtimes.com/. Based on your TL, it is possible to get a subset of popular tweets and links shared by the people you follow. I believe that a similar algorithm is used on the https://twitter.com/i/discover page (probably based more on tweets favorited than RTed).

  • My Own Experience on Twitter

Now, to share a bit of my personal experience, I should probably mention that I do not have a cell phone, nor a tablet. So when I go on Twitter, it has to be on my laptop. It might be while cooking and preparing the lunch box for the kids, in the morning, just to quickly get the news from the night. It might also be in the office, when some code is running and I have to wait. Most of the time, it’s in the evening, once the kids are asleep.

If you look at my tweets, you can see that I RT a lot, the old way, from a few Twitter accounts (even if I have the feeling that only tweets published more than two years ago appear here)

  • Going viral on Twitter

To conclude, I should warn everyone that Twitter is addictive. And it can be exhilarating, and dangerous, when things start going viral… For instance, a few days ago, I posted a simple map (without proper references; they were mentioned in another tweet, since there were two references, one for each map), and then, within 24 hours, there were almost 1,000 RTs (not to mention tweets quoting that tweet, which were also RTed),

The interesting side is that it was a nice opportunity to understand how this viral process works, and @3wen published an interesting post on that experience.

  • Possible Conclusion (?)

Being on Twitter is an amazing experience. I have met (at least virtually) a lot of people whom I would never have interacted with if they were not on Twitter. David Monniaux, editor of the great blog http://david.monniaux.free.fr/dotclear/ used to be on Twitter (@monniaux), but – according to the legend I read on Twitter – being on Twitter was time-consuming (David used to interact a lot), so he decided to quit. So, I am glad that I have neither a smartphone nor a tablet! Because I would spend hours on Twitter! With moderation, Twitter is a great tool. But to be honest, it’s a little bit crowded… it is a place to be, but it’s hard to have a discussion with other people within 140 characters. I really believe it’s not the place to ask questions, and start a discussion. This is why I also try to go on other, more confidential, microblogging platforms to interact… But I won’t mention them here; I’d like to keep it that way.

  • To Go Further

Twitter for academic and engagement
Using Twitter for Curated Academic Content
Beyond citations: scholars’ visibility on the social Web
Live-tweeting at academic conferences
An Introduction to Social Media for Scientists
How People are using Twitter during conference
Using Twitter During an Academic Conference
Presenting for Twitter at Conferences
An Idea is a Dangerous Thing to Quarantine #twittergate
My Norm is More Normal Than Yours: Academic Tweeting and Loose Fish
“But who is going to read 12,000 tweets?!” How researchers can collect and share relevant social media content at conferences
Don’t Have Time to Tweet-bollocks! Twitter can even save you time as a scientist.
The role of Twitter in the life cycle of a scientific publication, and the associated infographics
Can tweets predict citations? (metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact)
All you can tweet
Twitter as tool for strengthen a scientific community
How and why scholars cite on Twitter
Prevalence and use of Twitter among scholars (on http://figshare.com/)
Twitter as a tool for conservation education and outreach: what scientific conferences can do to promote live tweeting
Understanding how Twitter is used to spread scientific messages

Likelihood Based Methods, for Extremes

This week, in the MAT8595 course, we will start the section on inference for extreme values. To start with something simple, we will use maximum likelihood techniques on a Generalized Pareto Distribution (we saw the Pickands-Balkema-de Haan theorem on Monday).

  • Maximum Likelihood Estimation

In the context of parametric models, the standard technique is to consider the maximum of the likelihood (or of the log-likelihood). Let https://latex.codecogs.com/gif.latex?\boldsymbol{\theta} denote the parameter, with true value https://latex.codecogs.com/gif.latex?\boldsymbol{\theta}_0. Given some – standard – technical assumptions (regularity conditions on the likelihood, on some neighbourhood of https://latex.codecogs.com/gif.latex?\boldsymbol{\theta}_0), then

https://latex.codecogs.com/gif.latex?\sqrt{n}\left(\widehat{\boldsymbol{\theta}}_n-\boldsymbol{\theta}_0\right)\rightarrow\mathcal{N}\left(\boldsymbol{0},I(\boldsymbol{\theta}_0)^{-1}\right)

where https://latex.codecogs.com/gif.latex%20?I denotes the Fisher information matrix (see any mathematical statistics textbook). Consider here some i.i.d. sample, from a Generalized Pareto Distribution, with parameter https://latex.codecogs.com/gif.latex?\boldsymbol{\theta}=(\xi,\sigma), so that

https://latex.codecogs.com/gif.latex?%20%20%20%20%20F_{(\xi,\sigma)}(x)%20=%20\begin{cases}%201%20-%20\left(1+%20\frac{\xi%20x}{\sigma}\right)^{-1/\xi}%20&,%20\xi%20\neq%200%20\\%201%20-%20\exp%20\left(-\frac{x}{\sigma}\right)%20&,%20\xi%20=%200%20\end{cases}

If we solve (numerically) the first order condition of the maximum likelihood, we get an estimator  https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) which satisfies

https://latex.codecogs.com/gif.latex?\sqrt{n}\left(\left[\begin{array}{c}\widehat{\xi}_n\\\widehat{\sigma%20}_n\end{array}\right]-\left[\begin{array}{c}\xi_0\\\sigma_0%20\end{array}\right]\right)\rightarrow%20\mathcal{N}\left(\left[\begin{array}{c}0\\0\end{array}\right],\left[\begin{array}{cc}(1+\xi_0)^2%20&%20\sigma_0[1+\xi_0]\\%20\sigma_0%20[1+\xi_0]%20&%202\sigma^2_0(1+\xi_0)%20\end{array}\right]\right)

The idea behind this asymptotic normality is the following: if the true distribution of the sample is a GPD with parameter https://latex.codecogs.com/gif.latex?\boldsymbol{\theta}_0, then, if https://latex.codecogs.com/gif.latex%20?n is large enough, https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) will have (approximately) a joint normal distribution. So if we generate a lot of samples (sufficiently large, say 200 observations each), then the scatterplot of the estimators should look like the scatterplot of a Gaussian distribution,

> library(evir)
> n=200
> param=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ }
> m=apply(param,2,mean)
> S=var(param)
> library(mnormt)
> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=101)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> COL=rev(heat.colors(100))
> image(x,y,t(z),col=COL)
> points(param)

and to get a 3d representation

> x=seq(min(param[,1])-.05,max(param[,1])+.05,length=31)
> y=seq(min(param[,2])-.05,max(param[,2])+.05,length=31)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> persp(x,y,t(z),shade=TRUE,col="green",theta=-30,phi=20,ticktype="detailed",
+ xlab="xi",ylab="sigma")

With 200 observations, if the true underlying distribution is a GPD, then, indeed, the joint distribution of https://latex.codecogs.com/gif.latex?\widehat{\boldsymbol{\theta}}_n=(\widehat{\xi}_n,\widehat{\sigma%20}_n) seems to be normal. That can be used to generate confidence intervals, for instance, or to define tests, as in the sketch below.
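For instance, a Wald-type confidence interval can be obtained directly from this asymptotic normality, using the standard errors returned by the gpd() function (a minimal sketch, on a single simulated sample; the par.ses component of the evir fit is assumed to contain those standard errors),

> library(evir)
> set.seed(1)
> x=rgpd(200,xi=1/1.5,beta=1)
> fit=gpd(x,0)
> # Wald-type 95% confidence interval for xi
> fit$par.ests["xi"]+c(-1,1)*qnorm(.975)*fit$par.ses["xi"]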

To go further, see any standard textbook on mathematical statistics, e.g. Casella & Berger (2002).

  • Delta Method

Another important property is the so-called delta-method (we saw on Monday, in class, that it is obtained easily using a first-order Taylor expansion). The idea is that if https://latex.codecogs.com/gif.latex%20?\widehat{\boldsymbol{\theta}}_n is asymptotically normal, and if https://latex.codecogs.com/gif.latex?h is sufficiently smooth, then https://latex.codecogs.com/gif.latex%20?h(\widehat{\boldsymbol{\theta}}_n) will also be asymptotically Gaussian. More precisely (see also the header of this blog),

https://latex.codecogs.com/gif.latex?\sqrt{n}\left(h(\widehat{\boldsymbol{\theta}}_n)-h(\boldsymbol{\theta}_0)\right)\rightarrow\mathcal{N}\left(0,\nabla%20h(\boldsymbol{\theta}_0)^\top%20I(\boldsymbol{\theta}_0)^{-1}\nabla%20h(\boldsymbol{\theta}_0)\right)

From this property, we can get the normality of https://latex.codecogs.com/gif.latex%20?\widehat{\alpha}_n=\widehat{\xi}_n^{-1} (which is another parametrization used in extreme value models), or of any quantile, https://latex.codecogs.com/gif.latex%20?\widehat{Q}_u=F^{-1}_{\widehat{\boldsymbol{\theta}}_n}(u)=h_u(\widehat{\xi}_n,\widehat{\sigma}_n). Let us run some simulations, one more time, to check that we actually have joint normality.

> library(evir)
> n=200
> param=riskm=matrix(NA,1000,2)
> for(s in 1:1000){
+ x=rgpd(n,xi=1/1.5,beta=1)
+ param[s,]=gpd(x,0)$par.ests
+ xihat=param[s,1]
+ shat=param[s,2]
+ q=shat * (.01^(-xihat) - 1)/xihat
+ tvar=q+(shat + xihat * q)/(1 - xihat)
+ riskm[s,]=c(1/xihat,q)
+ }
> m=apply(riskm,2,mean)
> S=var(riskm)
> library(mnormt)
> x=seq(min(riskm[,1])-.05,max(riskm[,1])+.05,length=101)
> y=seq(min(riskm[,2])-.05,max(riskm[,2])+.05,length=101)
> vx=rep(x,each=length(y))
> vy=rep(y,length(x))
> vz=dmnorm(cbind(vx,vy),m,S)
> z=matrix(vz,length(y),length(x))
> image(x,y,t(z),col=COL)
> points(riskm)

As we can see, with samples of size 200, we cannot use this asymptotic result: it looks like we do not have enough data. But if we run the same code with

> n=5000

We do get the joint normality of https://latex.codecogs.com/gif.latex%20?\widehat{\alpha}_n and https://latex.codecogs.com/gif.latex%20?\widehat{Q}_n(u). This is what we get from the result called delta-method in statistical textbooks. See again Casella & Berger (2002) for more details.
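Note that, instead of simulating, we can also get the order of magnitude of the standard error of https://latex.codecogs.com/gif.latex%20?\widehat{\alpha}_n=\widehat{\xi}_n^{-1} directly from the delta-method: with https://latex.codecogs.com/gif.latex?h(\xi)=1/\xi, the asymptotic variance of https://latex.codecogs.com/gif.latex%20?\widehat{\alpha}_n is the variance of https://latex.codecogs.com/gif.latex%20?\widehat{\xi}_n divided by https://latex.codecogs.com/gif.latex?\xi^4. A minimal sketch, plugging in the asymptotic variance https://latex.codecogs.com/gif.latex?(1+\xi)^2/n given above (the values of https://latex.codecogs.com/gif.latex?\xi and https://latex.codecogs.com/gif.latex%20?n are the ones used in the simulations),

> xi=1/1.5
> n=5000
> v.xi=(1+xi)^2/n    # asymptotic variance of the estimator of xi
> v.alpha=v.xi/xi^4  # delta-method, since h'(xi)=-1/xi^2
> sqrt(v.alpha)      # approximate standard error of alpha=1/xi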

  • Profile Likelihood

Another interesting tool is the concept of profile likelihood. This is interesting here since the main parameter of interest is the tail index https://latex.codecogs.com/gif.latex%20?\xi, https://latex.codecogs.com/gif.latex%20?\sigma being here some kind of auxiliary (nuisance) parameter. See Venzon & Moolgavkar (1988) for more details. Here, we will plot

http://freakonometrics.blog.free.fr/public/perso5/proflike01.gif

But more generally, it is possible to consider

http://freakonometrics.blog.free.fr/public/perso5/profilik06.gif

where http://freakonometrics.blog.free.fr/public/perso5/profilik03.gif is the set of parameters of interest. Then (under suitable standard conditions) we can prove that

http://freakonometrics.blog.free.fr/public/perso5/profilik05.gif

which can be used to derive confidence intervals. In the GPD case, for each https://latex.codecogs.com/gif.latex%20?\xi, we have to find an optimal https://latex.codecogs.com/gif.latex%20?\sigma^\star(\xi). We compute the (profile) likelihood, i.e. https://latex.codecogs.com/gif.latex%20?\mathcal{L}(\xi,\sigma^\star(\xi)), and then compute the maximum of this profile likelihood. This two-stage optimization is not necessarily numerically identical to the (global) maximization of the likelihood, as computed below

>  n=500
>  set.seed(1)
>  x=rgpd(n,xi=1/1.5,beta=1)
>  loglikelihood=function(xi,beta){
+  sum(log(dgpd(x,xi,mu=0,beta))) }
>  XIV=(1:300)/100;L=rep(NA,300)
>  for(i in 1:300){
+  XI=XIV[i]
+  profilelikelihood=function(beta){
+  -loglikelihood(XI,beta) }
+  L[i]=-optim(par=1,fn=profilelikelihood)$value }
>  plot(XIV,L,type="l")
>  XIV[which.max(L)]
[1] 0.67
>  gpd(x,0)$par.ests
       xi      beta 
0.6730145 0.9725483

We are not far away. Actually, if we want to compute the maximum of the profile likelihood (and not only compute the values of the profile likelihood on a grid, as before), we use

>  PL=function(XI){
+  profilelikelihood=function(beta){
+  -loglikelihood(XI,beta) }
+  return(optim(par=1,fn=profilelikelihood)$value)}
>  (OPT=optimize(f=PL,interval=c(0,3)))
$minimum
[1] 0.6731025

$objective
[1] 822.5574

Observe that, indeed, we are not far away from the maximum likelihood estimator of https://latex.codecogs.com/gif.latex%20?\xi (I believe that it’s mainly a computational issue here, and that the two are similar, here… actually, I’d be glad to hear about cases where the maximum of the profile likelihood is not the same as the maximum of the likelihood). The interesting point is that we can use this technique to compute a confidence interval, and even visualize it on a graph

>  up=OPT$objective
>  abline(h=-up)
>  # 95% cutoff for the profile log-likelihood: maximum minus qchisq(.95,1)/2
>  abline(h=-up-qchisq(p=.95,df=1)/2,col="red")
>  I=which(L>=-up-qchisq(p=.95,df=1)/2)
>  lines(XIV[I],rep(-up-qchisq(p=.95,df=1)/2,length(I)),
+  lwd=5,col="red")
>  abline(v=range(XIV[I]),lty=2,col="red")

The vertical lines are the lower and the upper bound of a 95% confidence interval for parameter https://latex.codecogs.com/gif.latex%20?\xi.
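Instead of reading the bounds off the grid, we can also solve for the two endpoints numerically, reusing the PL function and the OPT object defined above; a minimal sketch, with search intervals chosen by hand around the estimate,

>  cutoff=-OPT$objective-qchisq(p=.95,df=1)/2
>  g=function(xi) -PL(xi)-cutoff   # positive inside the confidence region
>  lower=uniroot(g,interval=c(.05,OPT$minimum))$root
>  upper=uniroot(g,interval=c(OPT$minimum,3))$root
>  c(lower,upper)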

To go further, see Murphy, S.A & van der Vaart, A.W. (2000). On Profile Likelihood.

Causal Autoregressive Time Series

In the MAT8181 graduate course on Time Series, we will discuss (almost) only causal models. For instance, with https://latex.codecogs.com/gif.latex?AR(1),

https://latex.codecogs.com/gif.latex?X_t=\phi%20X_{t-1}+\varepsilon_t

with some white noise https://latex.codecogs.com/gif.latex?(\varepsilon_t), those models are obtained when https://latex.codecogs.com/gif.latex?\vert%20\phi\vert%20%3C1. In that case, we’ve seen that https://latex.codecogs.com/gif.latex?(\varepsilon_t) was actually the innovation process, and we can write

https://latex.codecogs.com/gif.latex?X_t%20=%20\sum_{h=0}^{+\infty}%20\phi^h%20\varepsilon_{t-h}

which is actually a mean-square convergent series (using simple Analysis arguments on series). From that expression, we can easily see that https://latex.codecogs.com/gif.latex?(X_t) is stationary, since https://latex.codecogs.com/gif.latex?\mathbb{E}(X_t)=0 (which does not depend on https://latex.codecogs.com/gif.latex?t) and

https://latex.codecogs.com/gif.latex?\text{cov}(X_t,X_{t-h})=\frac{\phi^h}{1-\phi^2}\sigma^2 (which does not depend on https://latex.codecogs.com/gif.latex?t).
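As a quick sanity check, we can compare the empirical autocorrelations of a simulated causal series with the theoretical values https://latex.codecogs.com/gif.latex?\phi^h; a minimal sketch,

> set.seed(1)
> n=10000
> phi=.8
> e=rnorm(n)
> X=rep(0,n)
> for(t in 2:n) X[t]=phi*X[t-1]+e[t]
> cbind(empirical=acf(X,lag.max=5,plot=FALSE)$acf[2:6],
+ theoretical=phi^(1:5))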

Consider now the case where https://latex.codecogs.com/gif.latex?\vert%20\phi\vert%20%3E1. Clearly, we have some problem here, since

https://latex.codecogs.com/gif.latex?X_t%20=%20\sum_{h=0}^{+\infty}%20\phi^h%20\varepsilon_{t-h}

cannot be defined (the series does not converge, in https://latex.codecogs.com/gif.latex?L^2). Nevertheless, it is still possible to write

https://latex.codecogs.com/gif.latex?X_t=\frac{1}{\phi}%20X_{t{\color{Red}%20+1}}{\color{Red}%20-\frac{1}{\phi}}\varepsilon_{t{\color{Red}%20+1}}

But it is possible to iterate (as in the previous case) and write

https://latex.codecogs.com/gif.latex?X_t%20=%20\sum_{h={\color{Red}%201}}^{+\infty}%20\frac{-1}{\phi^h}%20\varepsilon_{t{\color{Red}%20+h}}

which is actually well defined. And in that case, the sequence of random variables https://latex.codecogs.com/gif.latex?(X_t) obtained from this equation is the unique stationary solution of the recursive equation https://latex.codecogs.com/gif.latex?X_t=\phi%20X_{t-1}+\varepsilon_t. This might be confusing, but this stationary solution should not be confused with the usual non-stationary solution of https://latex.codecogs.com/gif.latex?X_t=\phi%20X_{t-1}+\varepsilon_t obtained from some starting value https://latex.codecogs.com/gif.latex?X_0, as in the code written to generate a time series in the previous post.

Now, let us spend some time with this stationary time series, considered as unnatural in Brockwell and Davis (1991). One point is that, in the previous case (where https://latex.codecogs.com/gif.latex?\vert%20\phi\vert%20%3C1), https://latex.codecogs.com/gif.latex?(\varepsilon_t) was the innovation process, so the variable https://latex.codecogs.com/gif.latex?X_t was not correlated with the future of the noise, https://latex.codecogs.com/gif.latex?\sigma\{\varepsilon_{t+1},\varepsilon_{t+2},\cdots\}. This is no longer the case when https://latex.codecogs.com/gif.latex?\vert%20\phi\vert%20%3E1.

All that looks nice, if you’re willing to understand things at some theoretical level. What does all that mean from a computational perspective? Consider some white noise (this noise does exist, whatever time series we then decide to build from it)

> n=10000
> e=rnorm(n)
> plot(e,type="l",col="red")

If we look at the simple case, to start with,

> phi=.8
> X=rep(0,n)
> for(t in 2:n) X[t]=phi*X[t-1]+e[t]

The time series – the latest 1,000 observations – looks like

Now, if we use the cumulated sum of the noise,

> Y=rep(0,n)
> for(t in 2:n) Y[t]=sum(phi^((0:(t-1)))*e[t-(0:(t-1))])

we get

Which is exactly the same process ! This should not surprise us because that’s what the theory told us. Now, consider the problematic case, where https://latex.codecogs.com/gif.latex?\vert%20\phi\vert%20%3E1

> phi=1.1
> X=rep(0,n)
> for(t in 2:n) X[t]=phi*X[t-1]+e[t]

Clearly, that series is non-stationary (just look at the first 1,000 values)

Now, if we look at the series obtained from the cumulated sum of future values of the noise

> Y=rep(0,n)
> for(t in 1:(n-1)) Y[t]=sum((1/phi)^((1:(n-t)))*e[t+(1:(n-t))])

We get something which is, actually, stationary,

So, what is this series, exactly? If we look at the autocorrelation function,

> acf(Y)

we get the autocorrelation function of a (stationary) https://latex.codecogs.com/gif.latex?AR(1) process,

> acf(Y)[1]

Autocorrelations of series ‘Y’, by lag

    1 
0.908 

> 1/phi
[1] 0.9090909

Observe that there is a white noise – call it https://latex.codecogs.com/gif.latex?(\eta_t) – such that

https://latex.codecogs.com/gif.latex?X_t=\frac{1}{\phi}X_{t-1}+\eta_t

This is what we call the canonical form of the stationary process https://latex.codecogs.com/gif.latex?(X_t).
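As a quick check, we can recover that noise from the stationary series generated above, and verify that it shows no significant autocorrelation; a minimal sketch (the last value of Y is dropped, since it was not generated by the loop),

> eta=Y[2:(n-1)]-(1/phi)*Y[1:(n-2)]
> acf(eta)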

Visualizing Autoregressive Time Series

In the MAT8181 graduate course on Time Series, we started discussing autoregressive models. Just to illustrate, here is some code to plot a – causal – https://latex.codecogs.com/gif.latex?AR(1) process,

> graphar1=function(phi){
+ nf <- layout(matrix(c(1,1,1,1,2,3,4,5), 2, 4, byrow=TRUE), respect=TRUE)
+ e=rnorm(n)
+ X=rep(0,n)
+ for(t in 2:n) X[t]=phi*X[t-1]+e[t]
+ plot(X[1:6000],type="l",ylab="")
+ abline(h=mean(X),lwd=2,col="red")
+ abline(h=mean(X)+2*sd(X),lty=2,col="red")
+ abline(h=mean(X)-2*sd(X),lty=2,col="red")
+ u=seq(-1,1,by=.001)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-2,2),xlim=c(-2.5,2.5))
+ polygon(c(u,rev(u)),c(sqrt(1-u^2),rev(-sqrt(1-u^2))),col="light yellow")
+ abline(v=0,col="grey")
+ abline(h=0,col="grey")
+ points(1/phi,0,pch=19,col="red",cex=1.3)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-.2,.2),xlim=c(-1,1))
+ axis(1)
+ points(phi,0,pch=19,col="red",cex=1.3)
+ acf(X,lwd=3,col="blue",main="",ylim=c(-1,1))
+ pacf(X,lwd=3,col="blue",main="",ylim=c(-1,1),xlim=c(0,16))}

e.g. (the sample size n is used inside the function, so we set it first)

> n=10000
> graphar1(.8)

or

> graphar1(-.7)

(with, on the bottom, the root of the characteristic polynomial, the value of the parameter https://latex.codecogs.com/gif.latex?\phi_{1}, the autocorrelation function https://latex.codecogs.com/gif.latex?h\mapsto\rho(h) and the partial autocorrelation function https://latex.codecogs.com/gif.latex?h\mapsto\psi(h)).
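For comparison, the theoretical autocorrelation and partial autocorrelation functions of a causal https://latex.codecogs.com/gif.latex?AR(1) process can be obtained directly with the ARMAacf() function; a minimal sketch,

> phi=.8
> ARMAacf(ar=phi,lag.max=10)            # rho(h)=phi^h
> ARMAacf(ar=phi,lag.max=10,pacf=TRUE)  # only the first partial autocorrelation is non-zero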

Of course, it is possible to do something similar with https://latex.codecogs.com/gif.latex?AR(2) processes,

> graphar2=function(phi1,phi2){
+ nf <- layout(matrix(c(1,1,1,1,2,3,4,5), 2, 4, byrow=TRUE), respect=TRUE)
+ e=rnorm(n)
+ X=rep(0,n)
+ for(t in 3:n) X[t]=phi1*X[t-1]+phi2*X[t-2]+e[t]
+ plot(X[1:6000],type="l",ylab="")
+ abline(h=mean(X),lwd=2,col="red")
+ abline(h=mean(X)+2*sd(X),lty=2,col="red")
+ abline(h=mean(X)-2*sd(X),lty=2,col="red")
+ P=polyroot(c(1,-phi1,-phi2))
+ u=seq(-1,1,by=.001)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-2,2),xlim=c(-2.5,2.5))
+ polygon(c(u,rev(u)),c(sqrt(1-u^2),rev(-sqrt(1-u^2))),col="light yellow")
+ abline(v=0,col="grey")
+ abline(h=0,col="grey")
+ points(P,pch=19,col="red",cex=1.3)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,xlim=c(-2.1,2.1),ylim=c(-1.2,1.2))
+ polygon(c(-2,0,2,-2),c(-1,1,-1,-1),col="light green")
+ u=seq(-2,2,by=.001)
+ lines(u,-u^2/4)
+ abline(v=seq(-2,2,by=.2),col="grey",lty=2)
+ abline(h=seq(-1,1,by=.2),col="grey",lty=2)
+ segments(0,-1,0,1)
+ axis(1)
+ axis(2)
+ points(phi1,phi2,pch=19,col="red",cex=1.3)
+ acf(X,lwd=3,col="blue",main="",ylim=c(-1,1))
+ pacf(X,lwd=3,col="blue",main="",ylim=c(-1,1),xlim=c(0,16))}

For example,

> graphar2(.65,.3)

or

> graphar2(-1.4,-.7)

Making the Numbers Say… Whatever You Want

This weekend, Martin Grandjean posted an interesting entry on his blog about the use of statistics (for propaganda purposes). The exercise is not new, but Martin raises questions that are, unfortunately, important and complex. In a paragraph entitled “faire parler les chiffres… n’importe comment” (“making the numbers say whatever you want”, which I borrowed as a title – I admit I hesitated with “with great power comes great responsibility“), there is a (quick) analysis of a graph, shown below. The graph presents “statistics” related to demographic and immigration issues, except that, as Martin notes, there is a “small problem”: “these statistics are not really statistics. Or rather, they are a free, ‘linear’ extrapolation“.

I will come back to those points in two minutes; before that, I wanted to get back to a short sentence that appears a bit further down: “how can one dare to predict a trend over fifty years from a trend observed over ten years?“. This remark is important… because a lot of people ask that question, and statisticians have a duty to provide an answer. This problem was (more or less) put to me by a journalist a few years ago, about the solvency of insurance companies. With Solvency II, in Europe, companies must compute 99.5% VaRs, that is, quantiles related to events with a return period of 200 years. With, at best, 25 years of data! In that case, the nuance is that we are trying to estimate a quantile associated with a small probability from few data (and that we equate rarity with a time horizon). The same problem arises in hydrology, when we want to build a dyke high enough so that only a millennial event causes a flood, but have to design it with 50 years of data. That said, when an insurer does prospective mortality modeling, it often has 20 or 30 years of data, and must estimate the probability that a 25-year-old policyholder will still be alive in 50, or even 75, years. This is the kind of message I was trying to convey a few years ago in a round table on the sustainability of pension schemes, in France. In short, this exercise is a genuine exercise. And one has to understand the model well to understand what one is doing! Saying it is stupid will not be enough.

Because when we have few data (and I would say this is the case here), it is the model that drives the conclusions: all models look valid when we have few data, and the conclusions depend more on the chosen model than on the data. This is less of an issue when we have large volumes of data (not to mention big data, which I keep for another post).

Let us get back to the problem at hand, because it is interesting. The advertisement talks about “linear extrapolation”. “Extrapolation” means that we will build a model, and use it to make a forecast. And then there is “linear“. A priori, a linear model is not too ambiguous… except that the model can be on the level, or on the growth rate. If we use a very simple – constant – model for the growth rate, we get exponential growth; it is as simple as that. So the model can be linear, or exponential (we will have a look at both).

Another point. We have three populations here,

  • the Swiss,
  • the foreigners,
  • the total population.

The graph is a bit misleading… It gives the illusion that the first variable (the Swiss) and the third one (the total population) are extrapolated separately. The danger is that, in this case, the models should be constrained a minimum, since the Swiss and the foreigners must add up to the total. One could very well imagine that, after extrapolating, the three curves are no longer consistent. So we will have to be a bit careful… The simplest option is probably to build models for the Swiss and for the foreigners, making sure that both variables remain positive.

Last point. Are we allowed to use independent models? Can we assume that the two series (Swiss and foreigners) evolve independently of one another? A priori no, and in that case things get a bit more complex, since we need a bivariate model. In short, I find that this little example gives food for thought about what statistical modeling is. And about the incredible power that statisticians and econometricians have (since they are the ones building the model… “with great power comes great responsibility“, as the saying goes).

Because things have to be clear: it is impossible to be neutral! It does not make sense… We can build several models, which will all look valid, and then use a model selection criterion (the most usual ones being perhaps Akaike’s AIC, or Schwarz’s BIC). But there again, I have no way of saying: this model is the best, and it is neutral. No, it will be the best for a selection criterion that I choose.

To make things clearer, let us try to run a few regressions. The simplest model would be a linear one,

https://latex.codecogs.com/gif.latex?N_t=\beta_0+\beta_1%20t+\varepsilon_t

with – a priori – uncorrelated residuals https://latex.codecogs.com/gif.latex?(\varepsilon_t). This is what two linear regressions in Excel would give, for those who are not comfortable with modeling. Classically, this model is written

https://latex.codecogs.com/gif.latex?N_t\sim\mathcal{N}(\beta_0+\beta_1%20t,\sigma^2)

(conditionally on our explanatory variable). Let’s get going… Well, when I looked for the data, I did not find 3 observations, but a good thirty, for the resident population and for the foreigners. So we will use all the data to make our projection (and possibly see what a regression with only 3 observations would give)

> D=read.table("http://freakonometrics.free.fr/suisse.csv",sep=",",
+ header=TRUE)
> tail(D)
      X      N1      N3      N2
27 2006 1554527 7508739 5954212
28 2007 1602093 7593494 5991401
29 2008 1669715 7701856 6032141
30 2009 1714004 7785806 6071802
31 2010 1766277 7870134 6103857
32 2011 1815994 7954662 6138668

Since the model is linear, we can equivalently estimate it on any two of the three series. For simplicity, let us model the two series we are interested in. The estimation gives here

> reg2=lm((N2/100000)~X,data=D)
> reg3=lm((N3/100000)~X,data=D)
> pred2=predict(reg2,newdata=data.frame(X=1980:2060))
> pred3=predict(reg3,newdata=data.frame(X=1980:2060))
> lines(1980:2060,pred2,col=COL[1],lwd=2)
> lines(1980:2060,pred3,col=COL[6],lwd=2)

Now, we have to question the relevance of our model. Personally, I have rarely seen linear models used to describe population growth. But over the short run (a hundred years or so), why not. Even with a Malthusian-type model, it is possible. When working on count data, I am very fond of the Poisson regression,

https://latex.codecogs.com/gif.latex?N_t\sim\mathcal{P}(\beta_0+\beta_1%20t)

Unlike our first model, we now have heteroscedasticity: the larger the population, the larger the variance (and thus the error term). This is more realistic than the first model. And nothing is easier than estimating it,

> reg2=glm((N2/100000)~X,data=D,family=poisson(link="identity"))
> reg1=glm((N1/100000)~X,data=D,family=poisson(link="identity"))

By the way, we can take the opportunity to look at the confidence intervals (of our model), and put the conclusions into perspective,

> pred2p=predict(reg2,newdata=data.frame(X=1980:2060),type="response",se.fit=TRUE)
> pred1p=predict(reg1,newdata=data.frame(X=1980:2060),type="response",se.fit=TRUE)
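To visualize those intervals, we can draw approximate (pointwise) 95% bands, namely the fit plus or minus 1.96 standard errors; a minimal sketch, on the scale used in the regressions (hundreds of thousands of inhabitants), with plotting limits chosen by hand,

> plot(D$X,D$N2/100000,xlim=c(1980,2060),ylim=c(0,120),
+ xlab="year",ylab="population (x 100,000)")
> lines(1980:2060,pred2p$fit,lwd=2)
> lines(1980:2060,pred2p$fit+1.96*pred2p$se.fit,lty=2)
> lines(1980:2060,pred2p$fit-1.96*pred2p$se.fit,lty=2)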

The width of the confidence interval suggests we should be cautious before asserting anything… and we are actually lucky: we have here 35 years of data, not 3!

> I=which(D$X%in%c(1990,2000,2010))
> reg2=glm((N2/100000)~X,data=D[I,],family=poisson(link="identity"))
> reg1=glm((N1/100000)~X,data=D[I,],family=poisson(link="identity"))

Now, classically, when running a Poisson regression, one uses an exponential (log) link,

https://latex.codecogs.com/gif.latex?N_t\sim\mathcal{P}(\exp[\beta_0+\beta_1%20t])

This ensures that both counts always remain positive! Which is good, given our problem. But in return, we are constrained to have exponential growth (or decay) of our populations. We do not really have a choice (because, when you think about it, with a linear model, there is necessarily a date before which, or after which, the population is negative… which is awkward)

> reg2=glm((N2/100000)~X,data=D,family=poisson)
> reg1=glm((N1/100000)~X,data=D,family=poisson)
> pred2p=predict(reg2,newdata=data.frame(X=1980:2060),type="response")
> pred1p=predict(reg1,newdata=data.frame(X=1980:2060),type="response")

(the dotted curve in the background is the linear model).

Shall we continue? We can actually use a bivariate model, to say that our two curves evolve together. There is, for instance, the common-shock bivariate Poisson model, with joint density

https://latex.codecogs.com/gif.latex?f(z_1,z_2)=e^{-(\lambda_1+\lambda_2+\lambda_0)}\frac{\lambda_1^{z_1}}{z_1!}\cdot\frac{\lambda_2^{z_2}}{z_2!}\sum_{i=0}^{\min(z_1,z_2)}\binom{z_1}{i}\binom{z_2}{i}i!\left(\frac{\lambda_0}{\lambda_1\lambda_2}\right)^i

where the three intensities https://latex.codecogs.com/gif.latex?\lambda_1,\lambda_2,\lambda_0 are (here) log-linear functions of time. Under R, the package that used to fit this regression is no longer available, but never mind, we can recode it, it will take three minutes,

> # setup (assumed here): the two series, foreigners and Swiss, in hundreds of thousands
> Y=cbind(D$N1,D$N2)/100000
> X=D$X
> n=nrow(D)
> f=function(Z,L){
+ # joint density of the common-shock bivariate Poisson, L=(lambda1,lambda2,lambda0)
+ si=function(i) choose(Z[1],i)*choose(Z[2],i)*gamma(i+1)*
+ (L[3]/(L[1]*L[2]))^i
+ s=Vectorize(si)(0:min(Z))
+ p=exp(-sum(L))*L[1]^Z[1]/gamma(Z[1]+1)*L[2]^Z[2]/gamma(Z[2]+1)*sum(s)
+ return(p)
+ }
> minuslogL=function(B){
+ # negative log-likelihood, with log-linear intensities in the year X[i]
+ h=function(x) exp(x)
+ logL=function(i) log(f(Y[i,],
+ c(h(B[1]+B[2]*X[i]),h(B[3]+B[4]*X[i]),h(B[5]+B[6]*X[i]))))
+ return(-sum(Vectorize(logL)(1:n)))
+ }
> optim(c(lm(log(Y[,1])~X)$coefficients,
+ lm(log(Y[,2])~X)$coefficients,0,0),minuslogL)->maxL
> Bstar=maxL$par
> Bstar
  (Intercept)             X   (Intercept)             X               
-4.343823e+01  2.303903e-02 -3.430377e+00  3.746384e-03 

 1.506016e-02 -9.743153e-04 
> predbiv=function(x,B=Bstar){
+ h=function(x) exp(x)
+ return(c(h(B[1]+B[2]*x)+h(B[5]+B[6]*x),
+          h(B[3]+B[4]*x)+h(B[5]+B[6]*x)))
+ }
If we build our forecast, we end up quite close to our previous model,

We could also use a time series model: say that there is a linear, or exponential, trend, and that the noise can be modeled by an autoregressive process (bivariate or not),

https://latex.codecogs.com/gif.latex?N_{j,t}=\text{trend}_j(t)+u_{j,t}

with an autoregressive model for the noise,

https://latex.codecogs.com/gif.latex?u_{j,t}=\phi_j%20u_{j,t-1}+\varepsilon_{j,t}

or, much more generally,

https://latex.codecogs.com/gif.latex?\boldsymbol{u}_t=A\boldsymbol{u}_{t-1}+\boldsymbol{\varepsilon}_t

i.e. a vector autoregressive model. This model can be estimated quite easily,

> reg2=glm(N2~X,data=D,family=poisson)
> reg1=glm(N1~X,data=D,family=poisson)
> Z=cbind(residuals(reg2),residuals(reg1))
> library(vars)
> regvar=VAR(Z,p=1)

If we look at the fitted model, we can note that the autoregressive matrix is full, with causality running in both directions (which should further increase the width of our confidence intervals)

> summary(regvar)

VAR Estimation Results:

Estimation results for equation y1: 
=================================== 
y1 = y1.l1 + y2.l1 + const 

      Estimate Std. Error t value Pr(>|t|)    
y1.l1  1.13635    0.05191  21.891   <2e-16 ***
y2.l1 -0.11922    0.03411  -3.495   0.0016 ** 
const  0.73037    0.52316   1.396   0.1737    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Estimation results for equation y2: 
=================================== 
y2 = y1.l1 + y2.l1 + const 

      Estimate Std. Error t value Pr(>|t|)    
y1.l1  0.44698    0.12384   3.609  0.00118 ** 
y2.l1  0.83294    0.08139  10.234 5.76e-11 ***
const  0.87575    1.24811   0.702  0.48868    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
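To actually use this model for projections, the vars package can forecast the (bivariate) residual process, which could then be added back to the Poisson trends; a minimal sketch (the horizon of 49 years, to reach 2060, is arbitrary),

> fvar=predict(regvar,n.ahead=49)
> plot(fvar)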

Shall I continue? Come on, one last example for the road! Consider the following small model, built so that the ratio between the two populations always remains the same. That should put an end to all the debates… and to estimate it, we just have to add the constraint to the previous code (with maximum likelihood estimation)
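One possible way to impose that constraint (this is only one reading of it): force the two Poisson log-linear trends to share the same slope, so that the ratio of the two expected populations does not depend on the year. A minimal sketch, stacking the two series and fitting a common-slope model,

> DS=data.frame(N=c(D$N1,D$N2),X=rep(D$X,2),
+ group=rep(c("foreigners","Swiss"),each=nrow(D)))
> regc=glm(N~group+X,data=DS,family=poisson)  # same slope, different intercepts
> predc=predict(regc,newdata=data.frame(X=rep(1980:2060,2),
+ group=rep(c("foreigners","Swiss"),each=81)),type="response")

The ratio of the two fitted curves is then constant, equal to the exponential of the difference between the two intercepts.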

Now, since we had time series, I did not present nonparametric models, because it is tricky to make forecasts with those. On the other hand, if we had individual data, we could have had a field day, smoothing in every direction! Which would have given us even more latitude in our model. I am also keeping quiet about the fact that my models are probably too simple. What I mean – I do not know the procedures for obtaining Swiss nationality very well – is that when someone’s children and grandchildren were born in Switzerland, I find it hard to call that person a “foreigner“. After a few years, one could imagine a dynamic model where foreigners move from “foreigner” to “Swiss“. But maybe I have a Canadian bias in my perception of nationality.

What I wanted to say is that making forecasts from a (relatively) small dataset is never simple, and that the choice of model will completely determine the visualization we get for our data, and the type of forecast we are looking for. And do not ask me to be neutral, it is impossible… Once this problem is understood, we understand why some researchers claim that more than half of published studies (often based on small samples) are false. I would not say, personally, that they are “false“. Just that it is easy, with little data, to push whatever message you want! Here, it is easy to take apart (Martin did it in a few lines, and it took me a bit more, because I need a bit more time – and formalism – to tell my stories), and it was only a piece of propaganda. Keep in mind that many public health decisions are based on this kind of exercise. With, on top of that, data that is rarely public, and tainted with a lot of noise… Think of the studies on GMOs, on cell towers, on smoking (mentioned in a previous post), etc. And taking apart that kind of study takes much more time!