Visualizing Autoregressive Time Series

In the MAT8181 graduate course on Time Series, we started discussing autoregressive models. Just to illustrate, here is some code to plot an https://latex.codecogs.com/gif.latex?AR(1) – causal – process,

> graphar1=function(phi,n=1e4){
+ # n (the length of the simulated series) is made an argument here; it was
+ # presumably defined globally in the original post, with n at least 6000
+ nf <- layout(matrix(c(1,1,1,1,2,3,4,5), 2, 4, byrow=TRUE), respect=TRUE)
+ e=rnorm(n)
+ X=rep(0,n)
+ for(t in 2:n) X[t]=phi*X[t-1]+e[t]
+ plot(X[1:6000],type="l",ylab="")
+ abline(h=mean(X),lwd=2,col="red")
+ abline(h=mean(X)+2*sd(X),lty=2,col="red")
+ abline(h=mean(X)-2*sd(X),lty=2,col="red")
+ u=seq(-1,1,by=.001)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-2,2),xlim=c(-2.5,2.5))
+ polygon(c(u,rev(u)),c(sqrt(1-u^2),rev(-sqrt(1-u^2))),col="light yellow")
+ abline(v=0,col="grey")
+ abline(h=0,col="grey")
+ points(1/phi,0,pch=19,col="red",cex=1.3)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-.2,.2),xlim=c(-1,1))
+ axis(1)
+ points(phi,0,pch=19,col="red",cex=1.3)
+ acf(X,lwd=3,col="blue",main="",ylim=c(-1,1))
+ pacf(X,lwd=3,col="blue",main="",ylim=c(-1,1),xlim=c(0,16))}

e.g.

> graphar1(.8)

or

> graphar1(-.7)

(with, on the bottom, the root of the characteristic polynomial, the value of the parameter https://latex.codecogs.com/gif.latex?\phi_{1}, the autocorrelation function https://latex.codecogs.com/gif.latex?h\mapsto\rho(h) and the partial autocorrelation function https://latex.codecogs.com/gif.latex?h\mapsto\psi(h)).

Of course, it is possible to do something similar with https://latex.codecogs.com/gif.latex?AR(2) processes,

> graphar2=function(phi1,phi2,n=1e4){
+ # as above, n is made an argument; it was presumably defined globally in the original post
+ nf <- layout(matrix(c(1,1,1,1,2,3,4,5), 2, 4, byrow=TRUE), respect=TRUE)
+ e=rnorm(n)
+ X=rep(0,n)
+ for(t in 3:n) X[t]=phi1*X[t-1]+phi2*X[t-2]+e[t]
+ plot(X[1:6000],type="l",ylab="")
+ abline(h=mean(X),lwd=2,col="red")
+ abline(h=mean(X)+2*sd(X),lty=2,col="red")
+ abline(h=mean(X)-2*sd(X),lty=2,col="red")
+ P=polyroot(c(1,-phi1,-phi2))
+ u=seq(-1,1,by=.001)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,ylim=c(-2,2),xlim=c(-2.5,2.5))
+ polygon(c(u,rev(u)),c(sqrt(1-u^2),rev(-sqrt(1-u^2))),col="light yellow")
+ abline(v=0,col="grey")
+ abline(h=0,col="grey")
+ points(P,pch=19,col="red",cex=1.3)
+ plot(0:1,0:1,col="white",xlab="",ylab="",axes=FALSE,xlim=c(-2.1,2.1),ylim=c(-1.2,1.2))
+ polygon(c(-2,0,2,-2),c(-1,1,-1,-1),col="light green")
+ u=seq(-2,2,by=.001)
+ lines(u,-u^2/4)
+ abline(v=seq(-2,2,by=.2),col="grey",lty=2)
+ abline(h=seq(-1,1,by=.2),col="grey",lty=2)
+ segments(0,-1,0,1)
+ axis(1)
+ axis(2)
+ points(phi1,phi2,pch=19,col="red",cex=1.3)
+ acf(X,lwd=3,col="blue",main="",ylim=c(-1,1))
+ pacf(X,lwd=3,col="blue",main="",ylim=c(-1,1),xlim=c(0,16))}

For example,

> graphar2(.65,.3)

or

> graphar2(-1.4,-.7)

Somewhere else, part 110

Long time no see… here are some writings worth reading, published somewhere else,

Real satellite picture, taken above Korea (via https://google.ca/search?q=satellite+korea…). Note that the original (11 MB, high resolution) picture is at http://eoimages.gsfc.nasa.gov/images/…

along with a bit of reading in French,

Did I miss something?

Making the Numbers Talk… Any Which Way

This past weekend, Martin Grandjean posted an interesting entry on his blog about the use of statistics (for propaganda purposes). The exercise is not new, but Martin raises questions that are, unfortunately, important and complex. In a paragraph entitled "faire parler les chiffres… n'importe comment" ("making the numbers talk… any which way", which I borrowed as my title; I admit I hesitated with "with great power comes great responsibility"), we find a (quick) analysis of a graph, shown below. The graph presents "statistics" related to demographic and immigration issues, except that, as Martin notes, there is a "small problem": "these are not really statistics. Or rather, it is a free, 'linear' extrapolation".

I will come back to those points in a couple of minutes; before that, I wanted to return to a short sentence found a little further on: "how can one allow oneself to predict a trend over fifty years from a trend observed over ten years?". This remark is important… because many people ask themselves that question, and statisticians have to provide an answer. The problem was put to me (more or less) by a journalist a few years ago now, about the solvency of insurance companies. With Solvency II, in Europe, companies must compute a 99.5% VaR, that is, one associated with events having a 200-year return period. With, at best, 25 years of data! In that case, the subtlety is that we are trying to estimate a quantile associated with a small probability from little data (and that we identify rarity with a return period). The same problem arises in hydrology, when one wants to build a dike high enough that only a millennial event would cause a flood, but has to design it with 50 years of data. That said, when an insurer does prospective mortality modelling, it often has 20 or 30 years of data, and must estimate the probability that a 25-year-old policyholder will still be alive in 50, or even 75, years. This is the kind of message I was trying to convey a few years ago in a round table on the sustainability of pension schemes, in France. In short, this exercise is a genuine exercise. And one has to understand the model well in order to understand what one is doing! Saying that it is stupid will not be enough.

Because when there is little data (and I would say this is the case here), it is the model that dictates the conclusions: all models look valid when there is little data, and the conclusions depend more on the chosen model than on the data. This is less of an issue with large volumes of data (without talking about big data, I am keeping that for another post).

Let us come back to the problem at hand, because it is interesting. The ad is described as a "linear extrapolation". "Extrapolation" means that a model is built, and then used to make a prediction. And then there is "linear". A priori, a linear model is not too ambiguous… except that the model can be specified on the level, or on the growth rate. With a very simple, constant, model on the growth rate, we get exponential growth; it is as simple as that. So the model can be linear, or exponential (we will have a look at both).
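To make the distinction concrete, here is a minimal sketch (on simulated data, not the Swiss data used below) of the two readings of "linear": a linear model on the level, versus a constant growth rate, i.e. a linear model on the log, which extrapolates exponentially,

set.seed(1)
year=1980:2010
pop=100*exp(.012*(year-1980)+rnorm(length(year),0,.005))  # toy population series
reglin=lm(pop~year)        # linear in the level
regexp=lm(log(pop)~year)   # linear in the log, i.e. constant growth rate
new=data.frame(year=1980:2060)
plot(year,pop,xlim=c(1980,2060),ylim=c(80,350),ylab="population")
lines(new$year,predict(reglin,new),lty=2)       # linear extrapolation
lines(new$year,exp(predict(regexp,new)))        # exponential extrapolation

The two fits are almost indistinguishable on the observation period, but diverge noticeably fifty years out, which is precisely the point of the discussion.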

Another point: we have three populations here,

  • the Swiss, 
  • the foreigners, 
  • the total population, 

The graph is a bit misleading… It gives the illusion that what are being extrapolated are the first variable and the third one. The danger is that, in this case, the models need to be constrained at least a little, since the three series are linked: the Swiss plus the foreigners add up to the total population. One could well imagine that, by extrapolating the series separately, this constraint ends up being violated. So some care will be needed… The simplest option is probably to build models for the Swiss and the foreign populations, while making sure that both remain positive.

One last point. Are we allowed to fit independent models? Can we assume that the two series evolve independently of each other? A priori no, and in that case things get a bit more complex, since a bivariate model is needed. In short, I find that this little example gives food for thought about what statistical modelling is, and about the incredible power held by statisticians and econometricians (since they are the ones who build the model… "with great power comes great responsibility", as they say).

Because things need to be clear: it is impossible to be neutral! It does not even make sense… One can build several models, which will all look valid, and then use a model selection criterion (the most common ones being perhaps Akaike's AIC, or Schwarz's BIC). But there again, I have no way of saying: this model is the best, and it is neutral. No, it will be the best for a selection criterion that I chose.

To make things clearer, let us try to run some regressions. The simplest model would be a linear one,

with residuals that are, a priori, uncorrelated. This is what two linear regressions in Excel would give, for those who are not comfortable with modelling. Classically, this model is written
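The formula displayed at this point in the original post is not reproduced here; given the lm fits below (population regressed on the year X), the model is presumably of the form

Y_t=\beta_0+\beta_1\,t+\varepsilon_t,\qquad \varepsilon_t\sim\mathcal{N}(0,\sigma^2)\text{ i.i.d.}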

with Gaussian errors of constant variance (conditionally on our explanatory variable). Let us dive in… Well, when I looked for the data, I did not find 3 observations, but a good thirty or so, for the resident population and for the foreigners. So we will use all the data to build our projection (and possibly see what a regression with only 3 observations would give)

> D=read.table("http://freakonometrics.free.fr/suisse.csv",sep=",",
+ header=TRUE)
> tail(D)
      X      N1      N3      N2
27 2006 1554527 7508739 5954212
28 2007 1602093 7593494 5991401
29 2008 1669715 7701856 6032141
30 2009 1714004 7785806 6071802
31 2010 1766277 7870134 6103857
32 2011 1815994 7954662 6138668

Since we have a linear model, the estimation can equivalently be carried out on any two of the three series. For simplicity, let us model the two series we are interested in. The estimation gives here

> reg2=lm((N2/100000)~X,data=D)
> reg3=lm((N3/100000)~X,data=D)
> pred2=predict(reg2,newdata=data.frame(X=1980:2060))/100000
> pred3=predict(reg3,newdata=data.frame(X=1980:2060))/100000
> # the base plot of the observed series, and the colour palette COL, come from
> # earlier code that is not reproduced in the original post
> lines(1980:2060,pred2,col=COL[1],lwd=2)
> lines(1980:2060,pred3,col=COL[6],lwd=2)

Now, we have to ask whether our model is relevant. For my part, I have rarely seen linear models used to describe the evolution of populations. But over the short run (a hundred years or so), why not. Even with a Malthusian-type model, it is possible. Personally, when working with count data, I am quite fond of the Poisson regression,

Unlike our first model, we now have heteroscedasticity: the larger the population, the larger the variance (and hence the error term). This is more realistic than the first model. Nothing could be easier than estimating this model
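The formula shown in the original post is missing here; consistent with the glm calls below (Poisson family, identity link), the model is presumably

N_t\sim\mathcal{P}(\lambda_t),\qquad \lambda_t=\beta_0+\beta_1\,t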

> reg2=glm((N2/100000)~X,data=D,family=poisson(link="identity"))
> reg1=glm((N1/100000)~X,data=D,family=poisson(link="identity"))

We can also take this opportunity to look at the confidence intervals (of our model), and put the conclusions into perspective

> pred2p=predict(reg2,newdata=data.frame(X=1980:2060),type="response",se.fit=TRUE)
> pred1p=predict(reg1,newdata=data.frame(X=1980:2060),type="response",se.fit=TRUE)

The width of the confidence interval suggests that we should be cautious before claiming anything… and we are lucky here: we have 35 years of data, not 3!

> I=which(D$X%in%c(1990,2000,2010))
> reg2=glm((N2/100000)~X,data=D[I,],family=poisson(link="identity"))
> reg1=glm((N1/100000)~X,data=D[I,],family=poisson(link="identity"))

Now, classically, when running a Poisson regression, one uses an exponential (log) link,

This ensures that both counts always remain positive, which is good given our problem. But in return, we force an exponential growth (or decay) of our populations. We do not really have much of a choice (after all, with a linear model, there is necessarily a date before which, or after which, the population is negative… which is awkward)

> reg2=glm((N2/100000)~X,data=D,family=poisson)
> reg1=glm((N1/100000)~X,data=D,family=poisson)
> pred2p=predict(reg2,newdata=data.frame(X=1980:2060),type="response")
> pred1p=predict(reg1,newdata=data.frame(X=1980:2060),type="response")

(the dashed curve in the background is the linear model).

Shall we continue? We can actually use a bivariate model, to say that our two curves evolve together. There is for instance the Poisson model with a common shock, with joint density

Under R, the package that used to fit this regression is no longer available, but no matter, we can recode it, it will take three minutes,
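The joint density displayed in the original post is not reproduced here; the function f below implements what is, presumably, the usual common-shock bivariate Poisson probability mass function, with parameters λ1, λ2, λ3 (the components of L),

\mathbb{P}(Z_1=z_1,Z_2=z_2)=e^{-(\lambda_1+\lambda_2+\lambda_3)}\frac{\lambda_1^{z_1}}{z_1!}\frac{\lambda_2^{z_2}}{z_2!}\sum_{i=0}^{\min(z_1,z_2)}\binom{z_1}{i}\binom{z_2}{i}\,i!\left(\frac{\lambda_3}{\lambda_1\lambda_2}\right)^{i}

obtained by writing Z_1=X_1+X_0 and Z_2=X_2+X_0, with X_0, X_1, X_2 independent Poisson variables with means λ3, λ1 and λ2.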

> f=function(Z,L){
+ si=function(i) choose(Z[1],i)*choose(Z[2],i)*gamma(i+1)*
+ (L[3]/(L[1]*L[2]))^i
+ s=Vectorize(si)(0:min(Z))
+ p=exp(-sum(L))*L[1]^Z[1]/gamma(Z[1]+1)*L[2]^Z[2]/gamma(Z[2]+1)*sum(s)
+ return(p)
+ }
> # presumably, in the original post, X=D$X, Y=cbind(D$N1,D$N2)/100000 and n=nrow(D)
> minuslogL=function(B){
+ h=function(x) exp(x)
+ logL=function(i) log(f(Y[i,],
+ c(h(B[1]+B[2]*X[i]),h(B[3]+B[4]*X[i]),h(B[5]+B[6]*X[i]))))
+ return(-sum(Vectorize(logL)(1:n)))
+ }
> optim(c(lm(log(Y[,1])~X)$coefficients,
+ lm(log(Y[,2])~X)$coefficients,0,0),minuslogL)->maxL
> Bstar=maxL$par
> Bstar
  (Intercept)             X   (Intercept)             X               
-4.343823e+01  2.303903e-02 -3.430377e+00  3.746384e-03 

 1.506016e-02 -9.743153e-04 
> predbiv=function(x,B=Bstar){
+ h=function(x) exp(x)
+ return(c(h(B[1]+B[2]*x)+h(B[5]+B[6]*x),
+          h(B[3]+B[4]*x)+h(B[5]+B[6]*x)))
+ }

If we build our forecast with this model, we end up close to our previous model,

We could also use a time series model: assume a linear, or exponential, trend, and let the noise be described by an autoregressive model (bivariate or not),

with an autoregressive model for the noise,

or, much more generally,


i.e. a vector autoregressive model,

This model can be estimated fairly easily,

> reg2=glm(N2~X,data=D,family=poisson)
> reg1=glm(N1~X,data=D,family=poisson)
> Z=cbind(residuals(reg2),residuals(reg1))
> library(vars)
> regvar=VAR(Z,p=1)

If we look at the fitted model, we note that the autoregressive matrix is full, with causality running in every direction (which should further increase the width of our confidence intervals)

> summary(regvar)

VAR Estimation Results:

Estimation results for equation y1: 
=================================== 
y1 = y1.l1 + y2.l1 + const 

      Estimate Std. Error t value Pr(>|t|)    
y1.l1  1.13635    0.05191  21.891   <2e-16 ***
y2.l1 -0.11922    0.03411  -3.495   0.0016 ** 
const  0.73037    0.52316   1.396   0.1737    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Estimation results for equation y2: 
=================================== 
y2 = y1.l1 + y2.l1 + const 

      Estimate Std. Error t value Pr(>|t|)    
y1.l1  0.44698    0.12384   3.609  0.00118 ** 
y2.l1  0.83294    0.08139  10.234 5.76e-11 ***
const  0.87575    1.24811   0.702  0.48868    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Shall I go on? Come on, one last example for the road! Consider the following little model,

With this model, the ratio between the two populations will always stay the same. That should settle all the debates… and to estimate it, it suffices to add the constraint to the previous code (with maximum likelihood estimation)
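As an illustration (this is not the author's code), one way to impose a constant ratio is to force a common growth rate in the two exponential trends, and to drop the common shock; a minimal sketch, reusing f, X, Y and n defined above,

minuslogLc=function(B){   # B = (a1, a2, b): two intercepts, one common slope
  h=function(x) exp(x)
  logL=function(i) log(f(Y[i,],c(h(B[1]+B[3]*X[i]),h(B[2]+B[3]*X[i]),1e-10)))
  return(-sum(Vectorize(logL)(1:n)))
}
startc=c(lm(log(Y[,1])~X)$coefficients[1],
         lm(log(Y[,2])~X)$coefficients[1],
         lm(log(Y[,2])~X)$coefficients[2])
optim(startc,minuslogLc)$par

With the two fitted means proportional to exp(b*t), their ratio exp(a1-a2) does not change over time.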

Now, since we are dealing with time series here, I did not present any nonparametric models, because forecasting with them is tricky. With individual data, on the other hand, we could have had a field day, smoothing in every possible direction! Which would have given us even more latitude in our model. I am also glossing over the fact that my models are probably too simple. What I mean is (I do not know the Swiss naturalisation procedures well) that when someone's children, and grandchildren, were born in Switzerland, I find it hard to think of that person as a "foreigner". After a number of years, one could imagine a dynamic model in which people move from "foreigner" to "Swiss". But maybe I have a Canadian bias in my perception of nationality.

What I wanted to say is that making forecasts from a (relatively) small dataset is never simple, and that the choice of model completely determines the visualization we get for our data, and the type of forecast we are after. And do not ask me to be neutral, it is impossible… Once this problem is understood, one understands why some researchers argue that more than half of published studies (often based on small samples) are false. For my part, I would not say that they are "false". Just that, with little data, it is easy to convey whatever message you want! Here, it is easy to take apart (Martin did it in a few lines; it took me a bit longer, because I need a bit more time, and formalism, to tell my stories), and it was only a propaganda poster. Keep in mind that many public health decisions are based on this kind of exercise. With data that are rarely public, and contaminated with a lot of noise… Think of the studies on GMOs, on cell towers, on smoking (discussed in a previous post), etc. And taking apart that kind of study takes much more time!

A Few Days in Rennes

I left Montréal yesterday evening, right after class…

… and I will be passing through Rennes for a few days, the whole week. On the agenda: four days of work with co-authors, place Hoche, to finish papers that have been dragging on for (far) too long. I promise I will talk about them as soon as the documents are online! In the meantime, I get to rediscover the city for a few days, and I intend to make the most of it!

Statistical Interests in Large Cities

I always thought that there were some kinds of schools in statistics, areas (not to say universities or laboratories) where people share common interests in terms of statistical methodology. Like people with a strong interest in extreme values, or in Lévy processes. I wanted to check this point, so I extracted information about articles published in about 35 journals in statistics, probability and econometrics. I got all the information from files extracted from http://scopus.com/

> setwd("/home/arthur/Documents/scopus/")
> L=list.files()
> z=NULL
> for(i in 1:length(L)){
+ B=read.csv(L[i])
+ z=c(z,as.character(B$Source.title))
+ }

Here is the list of the publications I have used

> Z=sort(table(z),decreasing=TRUE)
> Z[1:34]
                                 Computational Statistics and Data Analysis 
                                                                       4000 
                                           Journal of Multivariate Analysis 
                                                                       4000 
                                                         Econometric Theory 
                                                                       2631 
                                              Annals of Applied Probability 
                                                                       2051 
                                                             Bioinformatics 
                                                                       2000 
                                                                 Biometrika 
                                                                       2000 
                                                    Journal of Econometrics 
                                                                       2000 
                              Journal of Statistical Planning and Inference 
                                                                       2000 
                            Journal of the American Statistical Association 
                                                                       2000 
                                                        Operations Research 
                                                                       2000 
                                                        Pattern Recognition 
                                                                       2000 
                                      Probability Theory and Related Fields 
                                                                       2000 
                                                          Signal Processing 
                                                                       2000 
                                             Journal of Applied Probability 
                                                                       1999 
                                Stochastic Processes and their Applications 
                                                                       1999 
                         Annals of the Institute of Statistical Mathematics 
                                                                       1985 
                                                       Annals of Statistics 
                                                                       1797 
                                                              Technometrics 
                                                                       1446 
                                       Journal of Machine Learning Research 
                                                                       1441 
                                                              Biostatistics 
                                                                       1120 
                                         Statistics and Probability Letters 
                                                                       1062 
                                                      Annals of Probability 
                                                                       1054 
                                                   Statistics and Computing 
                                                                        927 
                                            Advances in Applied Probability 
                                                                        895 
                                        Journal of Nonparametric Statistics 
                                                                        836 
                                                   Computational Statistics 
                                                                        813 
                                            Journal of Time Series Analysis 
                                                                        811 
                          Journal of Computational and Graphical Statistics 
                                                                        802 
     Journal of the Royal Statistical Society. Series C: Applied Statistics 
                                                                        794 
Journal of the Royal Statistical Society. Series B: Statistical Methodology 
                                                                        793 
                                                                 Biometrics 
                                                                        784 
                                                           Machine Learning 
                                                                        559 
                                                  SIAM Journal on Computing 
                                                                        433 
                                     International Journal of Biostatistics 
                                                                        368

The first problem is that it is difficult to extract the universities and locations of the contributors. Here is what we have in the dataset

> B$Authors.with.affiliations[1]
[1] Mischler, S., CEREMADE, UMR CNRS 7534, Universit\303\251 Paris-Dauphine, Place du 
Mar\303\251chal de Lattre de Tassigny, Paris Cedex 16, 75775, France; Mouhot, C., DPMMS,
Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, 
CB3 0WA, United Kingdom; Wennberg, B., Department of Mathematical Sciences, Chalmers 
University of Technology, G\303\266teborg, Sweden, Department of Mathematical Sciences, 
University of Gothenburg, G\303\266teborg, 41296, Sweden

The first step was to split that whole string on the commas

> setwd("/home/arthur/Documents/scopus/")
> L=list.files()
> v=NULL
> for(i in 1:length(L)){
+ B=read.csv(L[i])
+ A=B$Authors.with.affiliations
+ for(j in 1:length(A)){
+ x1=as.character(A[j])
+ x2=strsplit(x1,",")
+ v=c(v,x2[[1]])}
+ }

I end up with a very long vector here, which contains a lot of things!

> V=sort(table(v),decreasing=TRUE)
> names(V)[1:40]
 [1] " United States"                           
 [2] " Department of Statistics"                
 [3] " Department of Mathematics"               
 [4] " M."                                      
 [5] " J."                                      
 [6] " A."                                      
 [7] " S."                                      
 [8] " United Kingdom"                          
 [9] " France"                                  
[10] " D."                                      
[11] " P."                                      
[12] " Y."                                      
[13] " R."                                      
[14] " China"                                   
[15] " H."                                      
[16] " Germany"                                 
[17] " Department of Economics"                 
[18] " C."                                      
[19] " G."                                      
[20] " L."                                      
[21] " Canada"                                  
[22] " T."                                      
[23] " University of California"                
[24] " Department of Biostatistics"             
[25] " F."                                      
[26] " B."                                      
[27] " Department of Mathematics and Statistics"
[28] " E."                                      
[29] " K."                                      
[30] " N."                                      
[31] " Department of Computer Science"          
[32] " Japan"                                   
[33] " Australia"                               
[34] " X."                                      
[35] " Hong Kong"                               
[36] " Italy"                                   
[37] " W."                                      
[38] " Spain"

 

A lot of useless information, for sure, but also more valuable information. Like university names,

> names(V)[c(23,50,58,59,61,66,67,72,84,87,89)]
 [1] " University of California"     " Stanford University"         
 [3] " Chapel Hill"                  " University of Washington"    
 [5] " Stanford"                     " University of Michigan"      
 [7] " Carnegie Mellon University"   " Columbia University"         
 [9] " Cornell University"           " University of North Carolina"
[11] " Duke University"

or cities,

> names(V)[c(35,40,41,44,45,47,51,53,54,55,56,62,64,65,
+ 70,71,82,92,97)]
 [1] " Hong Kong"    " New York"     " Berkeley"     " Cambridge"   
 [5] " Boston"       " Seattle"      " London"       " Pittsburgh"  
 [9] " Los Angeles"  " Singapore"    " Beijing"      " Philadelphia"
[13] " Ann Arbor"    " Atlanta"      " Toronto"      " Baltimore"   
[17] " Chicago"      " San Diego"    " Tokyo"

I decided to focus on 90 locations. Each time I have a string which is the same as the name of one of my 90 cities, I keep it. So if there is a Prof. Ann Arbor, I will consider that person as a city. Here is the graph of all locations, with the number of "articles". Or contributors, rather: if four people in San Francisco published an article together, the article appears four times in my dataset. I did spend some time with Cambridge, and I decided to move Cambridge, MA to Boston, MA. Just for convenience.

> require("geosphere")
> require("maps")
> data(world.cities)
> data(us.cities)
> data(canada.cities)
> LOCALIZE=Vectorize(function(v){z=findLatLon(v)$latlon;if(is.na(z)){z=c(NA,NA)};return(z)})
> CITIES=names(V)[city]
> NCITIES=substr(CITIES,2,nchar(CITIES))
> NCITIES[substr(NCITIES,1,5)=="Paris"]="Paris"
> NCITIES=unique(NCITIES)
> LC=matrix(unlist(LOCALIZE(NCITIES)),nrow=2)
> BASELOC=data.frame(CITY=NCITIES,LAT=LC[2,],LON=LC[1,])

I did spend some time on some cities, such as Paris, or London, where a zip code was sometimes attached to the city name. I also had to fix a few problems… But after a few minutes, I was able to locate those cities.

Then, I wanted to extract information about all the publications. Keywords are interesting, but over 266,567 "publications", they are hard to use (sometimes the field is not filled in, sometimes it is extremely general, or extremely specialized). So I decided to extract words from the titles of the contributions.

> VCITY=NULL
> VKW=NULL
> VY=NULL
> VJ=NULL
> VA=NULL
> VW=NULL
> art=0
> for(i in 1:length(L)){
+ B=read.csv(L[i])
+ A=B$Authors.with.affiliations
+ for(j in 1:length(A)){
+ art=art+1
+ x1=as.character(A[j])
+ x2=strsplit(x1,",")
+ listu=which(x2[[1]]%in%CITIES)
+ if(length(listu)>0){
+ C=tolower(paste(" ",as.character(B[j,"Title"]),sep=""))
+ x3=strsplit(C," ")[[1]]
+ kx3=which(!x3%in%c("a","the","of","an","in","",
+ "for","and","with","on","to","using","from","under"))
+ x3=x3[kx3]
+ J=as.character(B[j,"Source.title"])
+ Y=B[j,"Year"]
+ n1=length(listu)
+ n2=length(x3)
+ VCITY=c(VCITY,rep(x2[[1]][listu],each=n2))
+ VKW=c(VKW,rep(x3,n1))
+ VY=c(VY,rep(Y,n1*n2))
+ VJ=c(VJ,rep(J,n1*n2))
+ VA=c(VA,rep(art,n1*n2))
+ VW=c(VW,rep(1/n2,n1*n2))
+ }}}
> BASEUNIV=data.frame(CITY=VCITY,KEYW=VKW,YEAR=VY,JOURNAL=VJ,INDICE=VA,W=VW)

Here, I got a huge dataset. One line is one city and one "word". Now, let us select one word, and let us plot how important that word is, in each city,

> Figure=function(keyword="bayesian"){
+ SBASEUNIV=BASEUNIV[BASEUNIV$KEYW==keyword,]
+ SB2=tapply(SBASEUNIV$W,SBASEUNIV$CITY,sum)
+ D=data.frame(CITY=names(SB2),CT=as.vector(SB2))
+ BASE=merge(BASELOC,D)
+ library(maps)
+ library(RColorBrewer)
+ CL=brewer.pal(6, "RdBu")
+ # SB (the total weight per city) is presumably defined earlier in the original
+ # post, e.g. SB=tapply(BASEUNIV$W,BASEUNIV$CITY,sum)
+ Y=SB2/SB*sum(SB,na.rm=TRUE)/sum(SB2,na.rm=TRUE)
+ X=cut(Y,breaks=c(0,.5,.75,1,1.333,2,10000))
+ levels(X)=1:6
+ library(maps)
+ map("world")
+ points(BASE$LON,BASE$LAT,pch=1,col=CL[as.numeric(X)],
+ cex=sqrt(Y*20),lwd=4)
+ }

In the code above, we compare with the independent case (as if cities and keywords were independent), since we normalize using

SB2/SB*sum(SB,na.rm=TRUE)/sum(SB2,na.rm=TRUE)

For Bayesian statistics (publications with the word bayesian in the title),

For nonparametric statistics (publications with the word nonparametric in the title),

For stochastic processes (publications with the word processes in the title),

(the problem here is that we cannot visualize the red circles: if, in a given city, no one published on a given topic, the circle would be strong red, but tiny, or even null… so we won't see it). I decided to keep the top 250 words that appeared in titles, and I removed standard common words, such as it, the, of, etc.

> listewords=names(sort(table(BASEUNIV$KEYW),decreasing=TRUE)[1:250])
> listewords=listewords[-c(1,2,3,4,7,15,24,42,129)]
> idx=which(BASEUNIV$KEYW%in%listewords)
> T=table(as.character(BASEUNIV$KEYW[idx]),BASEUNIV$CITY[idx])
> MATRICE=as.matrix(T)

I had a nice contingency table, with 90 cities, versus 200 words.

> library("FactoMineR")
> res.pca = PCA(t(MATRICE), scale.unit=TRUE, ncp=5, 
+ graph=FALSE)
> plot.PCA(res.pca, axes=c(1, 2), choix="ind")

Principal component analysis was disappointing,

So I decided to extract, per city, the largest contributions to the chi-square distance

> K2=chisq.test(MATRICE)
> M2=K2$expected

On the graph below, the green level is the theoretical count of each word, under an independence assumption, and the dark line is the observed one. For instance, in San Francisco, on top, we have words that were used less than expected (e.g. processes: given the total number of publications, we would expect 6 or 7 publications with the word processes, but there were actually 0), and below, words that were used intensively compared with the other cities (such as method and structure; the latter was expected two or three times, but appeared in 25 publications),
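A sketch (using the same objects as above) of how the largest contributions to the chi-square distance can be extracted for one city; the city name used here is just an illustration,

contrib=(MATRICE-M2)^2/M2                                    # cell-wise chi-square contributions
city=grep("San Francisco",colnames(MATRICE),value=TRUE)[1]   # city names may carry a leading space
top=names(sort(contrib[,city],decreasing=TRUE))[1:10]
cbind(observed=MATRICE[top,city],expected=round(M2[top,city],2))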

In Boston, MA,  we got

In New York City, NY

In Paris (France),

But to be honest, I was disappointed. I mean, yes, I can see on the previous graph, for instance, that there are a lot of people working on stochastic processes, with the words Brownian and Markov. But in most cases, I can hardly get an interpretation…

I tried a final graph, on interconnections between authors. The first point is that it is common to have joint publications with colleagues in the same city. The larger the point, the more joint papers,

But we can also add cross publications: the thinner the line, the fewer joint publications between the two places,

We can see that I missed, in the first part, the Cambridge-Boston distinction, since Cambridge should now stand for Cambridge, UK. But the line is clearly too thick to be explained only by collaborations between Cambridge, UK, and Boston, MA. Still, a lot of the lines can be explained, for instance Hong Kong and Shanghai, or Mexico and Guanajuato.
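A sketch (not the code used for the figure) of how the city-pair counts behind those lines could be built from the BASEUNIV dataset defined above,

# for each article (INDICE), list the distinct cities involved, then count city pairs
cities.per.article=lapply(split(as.character(BASEUNIV$CITY),BASEUNIV$INDICE),unique)
cities.per.article=cities.per.article[sapply(cities.per.article,length)>1]
edges=do.call(rbind,lapply(cities.per.article,function(v) t(combn(sort(v),2))))
E=aggregate(rep(1,nrow(edges)),by=list(city1=edges[,1],city2=edges[,2]),sum)
names(E)[3]="npapers"
head(E[order(-E$npapers),])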

If someone has better ideas on how to properly import the locations (or the affiliations, it might be fun to focus on universities), and perhaps the abstracts (rather than just the titles), I'd be glad to run the same study on economics journals…

Bias and MLE

Before leaving the office this evening, JP decided to knock at my door to ask me a "quick and very basic question" (as he put it). This is JP's strategy, and he knows it works. His question was, more or less: what do we know about the bias of maximum likelihood estimation when we have a small sample from a Gamma distribution? He was surprised by some results he got. If I wanted to be naughty, too, I would say that he was surprised to see how long his student had spent coding that in SAS. So he wanted to challenge me, and see how fast I could give him a valuable answer. Given the fact that I had to leave early because my elder son had a fencing competition, I tried to write a simple code to "visualize" the bias of the (first) parameter of a Gamma distribution, estimated by maximum likelihood.

Before showing the graph, I wanted to add that I hate one thing about mathematical statistics courses: we learn nothing interesting there. I mean, we see nice mathematical concepts, but after this class, you can hardly say anything when you see your first dataset. Like with real data. For instance, such a course usually emphasizes asymptotic results, using limit theorems. When you take this course, you learn a lot of things about maximum likelihood, for instance. You can compute the asymptotic variance and derive asymptotic confidence intervals. But are those results relevant when you have 50 observations? Is it possible, with 50 observations, to have a bias of the same size as the parameter?

As usual, one possible answer is "if you don't have a lot of observations, be Bayesian!". Maybe. Someday. What I tried here is to run simulations to see how the MLE estimators behave. Given an i.i.d. sample of size n from a Gamma distribution, consider the maximum likelihood estimators of its two parameters (the shape and the rate).

library(fitdistrplus)
maxl=function(x) fitdist(x,"gamma",method="mle")$estimate
VK=floor(exp(seq(log(5),log(200),length=25)))
V=NULL
for(k in 1:length(VK)){
n=VK[k]
N=5000
m=matrix(rgamma(n*N,1,2),n,N)
ss=apply(m,2,maxl)
V=rbind(V,ss)}
y=as.vector(V[seq(1,length(VK)*2,by=2),])
x=rep(c(VK),ncol(V))
boxplot(y~x,
xlab="Nb. observations (log scale)",ylim=c(0,6))
abline(h=1,lty=2,col="blue")

Here, in our simulations, the shape parameter was 1. On the graph, we have boxplots of the estimated shape parameter obtained in those scenarios. We clearly see the positive bias of the MLE. And the bias decreases with the sample size (as expected, since the MLE is asymptotically unbiased). We can also visualize the distribution of the estimator (instead of boxplots)
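For instance, a minimal sketch, reusing V and VK from the simulations above, drawing one density per sample size instead of boxplots,

shape=V[seq(1,length(VK)*2,by=2),]      # one row of estimated shapes per sample size
plot(NA,xlim=c(0,4),ylim=c(0,5),xlab="estimated shape parameter",ylab="density")
for(k in 1:length(VK)) lines(density(shape[k,]),col=rgb(0,0,1,.3))
abline(v=1,lty=2,col="red")             # true value of the shape parameter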

It is also possible to derive analytical results. David Cox and Joyce Snell did the maths in 1968 and actually did obtain analytical expressions for the biases. More recently, David Giles (a.k.a. @deagiles on Twitter) and Hui Feng did look at the behavior of bias-adjusted estimators, a few years ago. For instance, one can get that

where the correction term involves the so-called digamma function, and its first and second order derivatives (the exact expression is the one implemented in the code below), see e.g. Bowman and Shenton (1982) – yes, there is a book on the topic of estimating the parameters of the Gamma distribution…

Observe that the (first order) bias of the estimated shape parameter does not depend on the other parameter, while the bias of the second estimator does.

d1digamma=function(x,h=1e-7)
return((digamma(x+h)-digamma(x-h))/(2*h))
d2digamma=function(x,h=1e-7)
return((d1digamma(x+h)-d1digamma(x-h))/(2*h))
biasalpha=function(a,n){
return((a*d1digamma(a)-a^2*d2digamma(a)
-2)/(2*n*(a*d1digamma(a)-1)^2))
}

The way I compute it is probably not optimal, so if you want to improve it, please go ahead! If we compare the average bias obtained in our simulations with the one obtained from this first order approximation, we get

m=apply(V,1,mean)
plot(VK,m[seq(1,length(VK)*2,by=2)],type="b",col="red",xlab="Nb. observations (log scale)",log="x")
abline(h=1,lty=2,col="blue")
B=Vectorize(function(n) biasalpha(a=1,n))(1:200)
lines(1:200,B+1,col="orange")

Observe here that neglecting the higher order terms yields an underestimation of the real bias… Fun, isn't it?

Somewhere else, part 109

Some writings worth reading, published somewhere else,

see also NYC under the snow (via http://distilleryimage11.ak.instagram.com/…)

  • “World map of difference between solar time and clock time” (via http://imgur.com/8IFLFoJ , with some interpretation problems, since the difference from apparent solar time varies over the year)

with little reading in French these past few days,

Did I miss something interesting? (apart from the awesome http://www.amazon.com/gp/product/B00EV3JTAY)

Wasting Time (and Givin’ Up)

There was an interesting post, published a few days ago, entitled This Blog is a Waste of My Time. The funny thing is that I had exactly the same experience at the same time. Once 2013 ended, I wanted to update my resume. And I observed that I had zero publications in the past two years. Zero. Nada. Nothing published in 2012 and nothing published in 2013. Of course, it is mainly a timing issue, since several papers are still in the loop, and I might end up with a few papers published in 2014 (at least one was accepted during the first week of 2014). But still… When I decided to turn off my laptop yesterday evening at 2 a.m. (this morning, actually), I started wondering whether blogging wasn't a waste of time too. Or whether it was something else.

  • My Research is a Waste of My Time (and not only mine)

This will sound like a cliché, but academics do waste a lot of time when doing (or pretending to do) research,

  1. wasting time applying for grants: do I have to be more specific here? By wasting time I mean working for (almost) a month to fill in forms, and then getting a "your proposal is extremely interesting, you got positive feedback from the reviewers, unfortunately, there's no funding from the government…". We have all had this experience. We're wasting our time here… and the reviewers' time, too.
  2. wasting time in committees: as mentioned above, I have to spend time in research committees, reading grant applications, but also in faculty committees, discussing office allocation for instance. We have more postdocs, visitors and interns than available seats… and for some reason, I am on the bargaining committee, trying to argue with my colleagues that my postdoc staying for 6 weeks should be ahead of their visitor staying for 2 weeks on the priority list. I do have to do this, but you have to admit that, somehow, we are wasting the time of four tenured professors (plus me, I am not tenured) on some bullshit here… I am also on the PhD committee, where we receive applications. In December, we spent a lot of time on the application of one potentially interesting candidate (like many hours, discussing and arguing), and we are not even sure that, if we agree to enroll him in the PhD program, the candidate will join. If he is not coming, that will have been a waste of time.
  3. wasting time in the review process: it looks like I spend more time reading and writing reports on other people's papers than writing my own! Ok, that might be an actual quote I got from one of my referees on a recent paper… I keep saying that I should start saying "no, I am too busy" when an editor asks me for a review. But I also keep saying that a lot of bullshit manages to get published. So I cannot stand aside and wait. I mean, I could: I'm French and we're usually good at this kind of thing. But I'd rather be involved in the process, advise the editor if the paper is not worth it, and help to improve the paper if it might be interesting. But again, I spend a lot of time in this process. I know what others are doing, but I keep delaying my own work. You cannot find my name if you look for articles published in 2012 or 2013, but I am somewhere, as one of those anonymous referees thanked at the end of an article (who sometimes spent more time on the paper than the PhD supervisor who barely knows what the paper is about, but still has his – or her – name on it). There was an interesting post by Rob Hyndman entitled How to get your paper rejected quickly a few weeks ago. I still don't know if I agree with everything, but I agree that "reviewers spend a great deal of time providing comments, and it is disrespectful to ignore them" (I would say "might spend", but I do not want to argue on that point today). A lot of time is wasted in the publication process.
  4. wasting time trying to get data: before Julie started her internship in September, I tried to get datasets to work on demographic problems. I started discussing (and filling in) forms to get French datasets, and managed to get a smaller one in Québec. The agenda was simple: we work on the small dataset, write the code, and then, once we get the big dataset, we just run the code that we tested on the small one. After 6 months, I still wonder whether my request has been accepted, and whether, someday, I will be able to get access to that dataset. I know that the dataset exists. I mean, I know that two datasets exist, and I am just asking for a merge… but it looks like there might be ethical considerations, so it takes time.

I do waste a lot of time in the process of doing research, and I am not even mentioning procrastination here. Actually, I believe that procrastination is extremely important, and is not a waste of time… But I will get back to that point, someday, in another post.

  • My Teaching Related Duties are a Waste of My Time

I will not claim that teaching is a waste of time. I am still extremely pretentious, and I believe that by the end of my courses, my students could actually learn something… But the problem lies more with the associated duties. One might think of writing the exams (and sketches of solutions) or grading (since I do not have T.A.s to help). It takes time. A lot of time, actually. But I won't consider it wasted. Two short stories to explain what I mean (both occurred during the Autumn session)

  1. in September, I gave a course, and there were 4 tests scheduled. A few hours before the first one, I got an email from a student, asking me to reschedule it because he could not be there. He asked me to postpone his exam to a few days later. I said no, essentially because we had signed an agreement on the first day, and the student knew by then that he would not be able to be there for the exam. And he never told me before. I decided to stand on that principle. The thing is, the student invoked religious grounds, and I understood that things would soon get messy. But I had principles. I got some moral support from my colleagues, and from my Dean, but everyone was telling me that I was in charge of this battle ("we support you, but you're on your own") since we have our academic independence. I asked for legal backup from the Professors' Union (three times) and got no feedback. Then, I heard that a letter had been sent to the rector by a lawyer, and within 10 minutes, I gave the student everything he asked for. If he had asked me to hold the test on a Sunday, I would have said yes… Just because lawyers' basic rule is to waste other people's time, or money. So I gave up. I did not want to waste my time on that battle, on my own. The funny (?) side of this story is that so did the student: I agreed to postpone the test to the end of the session, he came to the second test (but never showed up in my class) and got a little more than 30%. I did not hear from him again, and he did not take the other tests. But I did waste quite some time, and had some bad nights and insomnia, too.
  2. in December, I was grading some homework I had given to my students (practical, on databases) and I saw on two forums that a pair of students was asking for help. Actually, it was not help but "could you please do this for me?". They did mention the number of their database (each group had a different database, and the person who posted the question on those forums was located in Montréal). This was fraud, or at least attempted fraud. So I gave them 0%, on that specific homework. The students confessed that they did ask for help on the forum (but they never asked me anything)… and I gave up. I mean, I decided to grade their work, and I filled in a fraud form, sent to the faculty, so that it would be someone else's problem. I did not want to spend time arguing that those students clearly should not pass the course (one of them got only 20% on the final written exam, the other one 36%): if they want to learn something, taking the course again in the Winter session would clearly be an opportunity to learn something… But they didn't get it, and I gave up.

I clearly waste time on a lot of things. But when I look back at the past four or five years, I might feel ashamed not to have more prestigious (somehow) publications, or lecture notes without typos everywhere, but at least the blog is something I am still proud of, sort of. And when I end up working, tired, around 2 a.m., I have the feeling that something is wrong, and that a lot of time has been wasted. And I have to confess that I think I should give up on something… But I don't think it will be my blogging activity.

Somewhere else, part 108

Some writings worth reading,

and "Global Traffic Map" http://telegeography.com/ … via @Geopolitics2020, see also http://wired.com/politics/security/…

and "Sex Ratio in Europe" from http://appsso.eurostat.ec.europa.eu/nui/… via http://joyofdata.de/blog/…

and a bit of reading in French,

Did I miss something?

Sequences defined using a Linear Recurrence

In the introduction to the time series course (MAT8181) this morning, we did spend some time on the expression of (deterministic) sequences defined using a linear recurrence (we will need that later on, so I wanted to make sure that those results were familiar to everyone).

  • First order recurrence

The most simple case is the first order recurrence, https://latex.codecogs.com/gif.latex?u_n=a+b%20u_{n-1} where https://latex.codecogs.com/gif.latex?b\neq%201 (for convenience). Observe that we can remove the constant, using a simple translation https://latex.codecogs.com/gif.latex?\underbrace{[u_n-m]}_{v_n}%20=%20b%20\underbrace{[u_{n-1}-m]}_{v_{n-1}} if https://latex.codecogs.com/gif.latex?%20m=a/(1-b). So, starting from this point, we will always remove the constant in the recurrent equation. Thus, https://latex.codecogs.com/gif.latex?{v_n}%20=%20b{v_{n-1}}. From this equation, observe that https://latex.codecogs.com/gif.latex?{v_n}%20=%20b^n{v_{0}}, which is the general expression of https://latex.codecogs.com/gif.latex?{v_n}.
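Just to visualize that first order case (this small chunk is not in the original post): when |b|<1, the sequence converges geometrically to the fixed point m = a/(1-b),

a=2; b=.8; m=a/(1-b)
u=10                                  # starting value u_0
for(i in 1:30) u=c(u,a+b*u[length(u)])
plot(0:30,u,type="b",xlab="n",ylab="u_n")
abline(h=m,lty=2)                     # u_n - m decays like b^n (u_0 - m)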

  • Second order recurrence

Consider now a second order recurrence, https://latex.codecogs.com/gif.latex?{v_n}%20=%20a{v_{n-1}}+b{v_{n-2}}. In order to find the general expression of https://latex.codecogs.com/gif.latex?{v_n}, define https://latex.codecogs.com/gif.latex?\boldsymbol{V}_n%20=(v_{n}},{v_{n-1}})^{\sffamily%20T}. Then https://latex.codecogs.com/gif.latex?%20\underbrace{\begin{bmatrix}v_n\\v_{n-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_n%20}=%20\underbrace{\begin{bmatrix}a&%20b%20\\%201%20&%200\end{bmatrix}}_B\underbrace{\begin{bmatrix}v_{n-1}%20\\v_{n-2}%20\end{bmatrix}%20}_{\boldsymbol{V}_{n-1}%20} This time, we have a vectorial linear recurrent equation. But what we’ve done previously still holds. For instance, https://latex.codecogs.com/gif.latex?%20{\boldsymbol{V}_n%20}=B{\boldsymbol{V}_{n-1}%20}=\cdots=B^n\boldsymbol{V}_{0} What could we say about https://latex.codecogs.com/gif.latex?%20B^n ? If https://latex.codecogs.com/gif.latex?B can be diagonalized, then https://latex.codecogs.com/gif.latex?%20B=P\Delta%20P^{-1} and https://latex.codecogs.com/gif.latex?%20B^n=P\Delta^n%20P^{-1}. Thus, https://latex.codecogs.com/gif.latex?%20\underbrace{\begin{bmatrix}v_n\\v_{n-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_n%20}=%20B^n%20\underbrace{\begin{bmatrix}v_{0}%20\\v_{-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_{0}%20}=%20P\underbrace{\begin{bmatrix}\lambda_1^n&%200%20\\%200%20&%20\lambda_2^n\end{bmatrix}}_{\Delta^n}%20P^{-1}\underbrace{\begin{bmatrix}v_{0}%20\\v_{-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_{0}%20} so what we’ll get here is something like https://latex.codecogs.com/gif.latex?v_n%20=%20\alpha%20\lambda_1^n%20+\beta\lambda_2^n for some constants https://latex.codecogs.com/gif.latex?%20\alpha and https://latex.codecogs.com/gif.latex?%20\beta. Recall that https://latex.codecogs.com/gif.latex?\lambda_1 and https://latex.codecogs.com/gif.latex?\lambda_2 are the eigenvalues of matrix https://latex.codecogs.com/gif.latex?B, and they are also the roots of the characteristic polynomial https://latex.codecogs.com/gif.latex?%20P(x)=x^2%20-%20ax%20-%20b. Since https://latex.codecogs.com/gif.latex?%20a and https://latex.codecogs.com/gif.latex?%20b are real-valued, there are two roots for the polynomial, possibly identical, possibly complex (but then conjugate). An interesting case is obtained when the roots are https://latex.codecogs.com/gif.latex?%20re^{\pm%20i\theta}. In that case https://latex.codecogs.com/gif.latex?%20v_n%20=r^n(\alpha\cos(n\theta)%20+%20\beta\sin(n\theta)) To visualize this general term, consider the following code. A first strategy is to define the sequence, given the two parameters, and two starting values. E.g.

> a=.5
> b=-.9
> u1=1; u0=1

Then, we iterate to generate the sequence,

> v=c(u1,u0)
> while(length(v)<100) v=c(a*v[1]+b*v[2],v)
> plot(0:99,rev(v))

It is also possible to use the generic expression we’ve just seen. Here, the roots of the characteristic polynomial are

> r=polyroot(c(-b, -a, 1))
> r
[1] 0.25+0.9151503i 0.25-0.9151503i
> plot(r,xlim=c(-1.1,1.1),ylim=c(-1.1,1.1),pch=19,col="red")
> u=seq(-1,1,by=.01)
> lines(u,sqrt(1-u^2),lty=2)
> lines(u,-sqrt(1-u^2),lty=2)

http://freakonometrics.hypotheses.org/files/2014/01/Selection_546.png

Since https://latex.codecogs.com/gif.latex?v_n%20=%20\alpha%20\lambda_1^n%20+\beta\lambda_2^n, then https://latex.codecogs.com/gif.latex?%20\begin{cases}%20\alpha%20+%20\beta%20=%20v_0%20\\%20\alpha%20r_1%20+%20\beta%20r_2%20=%20v_1%20\end{cases} it is possible to derive numerical expressions for the two parameters. If https://latex.codecogs.com/gif.latex?%20v_n%20=r^n(A\cos(n\theta)%20+%20B\sin(n\theta)), then https://latex.codecogs.com/gif.latex?A=\lambda+\mu while https://latex.codecogs.com/gif.latex?B=i(\lambda-\mu). Thus,

> A=sum(solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1)))
> B=diff(solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1)))* complex(real=0,imaginary=1)

We can plot the sequence of points

> plot(0:99,rev(v))

and then we can also plot the sine wave, too

> t=seq(0,100,by=.1)
> bv=function(t) Mod(r)[1]^t
> fv=function(t) Mod(r)[1]^t*(A*cos(t*Arg(r)[1])+B*sin(t*Arg(r)[1]))
> lines(t,Vectorize(bv)(t-1),col="red",lty=2)
> lines(t,-Vectorize(bv)(t-1),col="red",lty=2)
> lines(t,Vectorize(fv)(t-1),col="blue")

We will see a lot of graphs like this in the course, when looking at autocorrelation functions.

  • Higher order recurrence

More generally, we can write https://latex.codecogs.com/gif.latex?%20\underbrace{\begin{bmatrix}v_n\\v_{n-1}\\v_{n-2}\\%20\vdots%20\\%20v_{n-p+1}%20\end{bmatrix}%20}_{\boldsymbol{V}_n%20}=%20\underbrace{\begin{bmatrix}b_{1}%20&%20b_{2}%20&b_3&%20\cdots%20&%20b_{p}%20\\%201%20&%200%20&%200&%20\cdots%20&0\\%200%20&%201%20&%200&%20\cdots%20&0\\%20\vdots%20&%20\vdots%20&%20\vdots%20&%20\ddots%20&%20\vdots%20\\%200%20&%200%20&%200&%20\cdots%20&%200\end{bmatrix}}_B\underbrace{\begin{bmatrix}v_{n-1}%20\\v_{n-2}\\v_{n-3}%20\\%20\vdots%20\\%20v_{n-p}%20\end{bmatrix}%20}_{\boldsymbol{V}_{n-1}%20} The matrix is a so-called companion matrix. And similar results could be obtained for the expression of the general term of the sequence. If all that is not familiar, I strongly recommend carefully reading a textbook on sequences and linear recurrences.
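To echo the second order example above, here is a small sketch (not in the original post) of the companion matrix representation, for an order 3 recurrence with hypothetical coefficients,

b=c(.4,.3,.2)                          # v_n = .4*v_{n-1} + .3*v_{n-2} + .2*v_{n-3}
B=rbind(b,cbind(diag(2),0))            # 3x3 companion matrix
V=c(1,0,0)                             # (v_2, v_1, v_0)
v=rev(V)
for(i in 1:20){ V=B%*%V ; v=c(v,V[1]) }
plot(0:22,v,type="b",xlab="n",ylab="v_n")
polyroot(c(-b[3],-b[2],-b[1],1))       # roots of the characteristic polynomial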

Central Limit Theorem

This week, in the MAT8595 course, before proving the Fisher-Tippett theorem, we will get back to the proof of the Central Limit Theorem, and the class of stable distributions (in Lévy’s sense). In order to illustrate the impact of heavy tails on the behavior of the mean, consider a sequence of i.i.d. Gaussian random variables https://latex.codecogs.com/gif.latex?X_i‘s. On top, we visualize the sequence, and below, we visualize the associated random walk

https://latex.codecogs.com/gif.latex?S_n=\sum_{i=1}^n%20X_i

(the central limit theorem will give a limiting distribution for https://latex.codecogs.com/gif.latex?n^{-1}S_n in the case where the variance of the https://latex.codecogs.com/gif.latex?X_i‘s is finite)

If we consider a sequence of i.i.d. random variables https://latex.codecogs.com/gif.latex?X_i‘s with heavier tails (possibly with infinite variance), we can still define https://latex.codecogs.com/gif.latex?S_n, but as we can see below, https://latex.codecogs.com/gif.latex?S_n can be quite erratic.
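Since the figures are not reproduced here, a minimal sketch of that kind of visualization, comparing a Gaussian random walk with a heavy tailed one (Cauchy increments, hence infinite variance),

set.seed(1)
n=1000
par(mfrow=c(2,1))
plot(cumsum(rnorm(n)),type="l",xlab="n",ylab="S_n")    # finite variance increments
plot(cumsum(rcauchy(n)),type="l",xlab="n",ylab="S_n")  # heavy tailed increments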

As we will see this Thursday, the key to deriving stable distributions for the central limit theorem, or possible limiting distributions for the maximum, is Cauchy's functional equation. I strongly recommend looking at the proof.

Copulas and Extreme Values, Syllabus

The syllabus for the MAT8595 course, Copules et Valeurs Extremes, is online. The evaluation agreement will be signed during the first class, this Monday at 9:00 (room SH-2140). More posts will go online in the coming days, with a few exercises, and the articles that will serve as the basis for the projects, on http://freakonometrics.hypotheses.org/courses/copulas-and-extremes.

Jimmy, Mile End, and Québec

Following my post on Paul et le Québec, I had started a discussion (by email) with Antoine, in France, who wanted more references on comics from Québec. As it happens, over the holidays I treated myself to Non-Aventures by Jimmy Beaulieu. Jimmy Beaulieu has produced several, let's say, sensual books lately, on top of working on Magasin Général, by Régis Loisel and Jean-Louis Tripp (I will not talk about that series for now because, while I greatly enjoy reading it, I am wary of Régis Loisel, who decidedly does not know how to end his stories… I loved the first cycle of La Quête de l'oiseau du temps, and his superb adaptation of Peter Pan, but I did not like the 'sequels'. But we may come back to that some day).

In Non-Aventures, we find several (small) stories, detached from one another, whose only common point is being (as the book's subtitle puts it) pages told in the first person. There are a few reprints of old, out-of-print (or nearly so) publications, such as Le moral des troupes or Quelques pelures, but also (it seems to me) many new drawings.

Why am I talking about it here? Perhaps to finish my series on Québec comics, started with Guy Delisle and Michel Rabagliati. With Jimmy Beaulieu, we stay within autobiographical Québec comics. But instead of the self-deprecation of Guy Delisle, or the nostalgia of Michel Rabagliati, we get a staging of everyday life. In Montréal (most of the time). And we live it with him… One often recognizes oneself in several scenes. I admit I really liked his depiction of Montréal balconies (which strikes me as very accurate). Jimmy Beaulieu's book is very beautiful, and often very sensual… Otherwise, to discover life in Montréal, I can only recommend a book released two years ago, Mile End, by Michel Hellman, packed with small anecdotes. If you like wandering around Mile End, some stories will touch you more, but living in a neighbourhood that has a real neighbourhood life is enough to enjoy the book. There is, for instance, that delicious reflection on the history of the electricity poles in the streets (which can be read on http://editionspowpow.com/bandes-dessinees/mile-end/…).

As for the style… it feels like reading Trondheim, and that is always a pleasure! Two lovely comics to discover Québec. And I promise, next time I will talk about economics or science… more serious things, that is!

Multivariate Archimax copulas

Our paper, written jointly also with Anne-Laure Fougères, Christian Genest and Johanna Nešlehová, entitled Multivariate Archimax Copulas, should appear some day in the Journal of Multivariate Analysis.

A multivariate extension of the bivariate class of Archimax copulas was recently proposed by Mesiar & Jagr (2013), who asked under which conditions it holds. This paper answers their question and provides a stochastic representation of multivariate Archimax copulas. A few basic properties of these copulas are explored, including their minimum and maximum domains of attraction. Several non-trivial examples of multivariate Archimax copulas are also provided.

In this paper, we extend the class of Archimax copulas, introduced in dimension 2 in Bivariate Distributions with Given Extreme Value Attractor, by Philippe Capéraà, Anne-Laure Fougères and Christian Genest, inspired by some ideas mentioned in a paper published in Kybernetika a few years ago. I will try to post additional material, soon…