Somewhere else, part 27

Still a lot of interesting posts and articles, here and there,

with, as often, a few interesting posts in French,

Did I miss something ?

“Ranking the popularity of programming languages” on 

On the non-connectedness of the Vaucluse

The day before yesterday, José, a former colleague from Rennes, pointed out to me an oddity of cartography (and asked me about its impact on maps drawn with R). He actually made me discover that the Vaucluse département is not connected. As can be seen on the map alongside, there is the Enclave des Papes, which is enclosed within the Drôme, but administratively attached to the Vaucluse. Surprising, isn't it?

Now, this kind of situation also shows up in R. For instance, it is possible to work with islands, which are attached to one département or another. Let us see what happens here, with the standard maps of R,

>  library(maps)
>  france = map(database="france")
>  france$names
  [1] "Nord"                                
  [2] "Pas-de-Calais"

(…)

 [92] "Gard"                                
 [93] "Vaucluse"                            
 [94] "Tarn-et-Garonne"                     
 [95] "Alpes-Maritimes"                     
 [96] "Vaucluse"                            
 [97] "Tarn"                                
[108] "Hautes-Pyrenees"                     
[109] "Var:Iles d'Hyeres:I. du Levant"      
[110] "Var:Iles d'Hyeres:I. de Porquerolles"
[111] "Var:Iles d'Hyeres:I. de Port Cros"   
[112] "Haute-Corse"                         
[113] "Pyrenees-Orientales"                 
[114] "Corse du Sud"

We can see that the Vaucluse appears twice in the list of départements. Islands are attached to a département under a specific name (as can be seen with the île de Porquerolles, for instance). But not the Enclave des Papes. Indeed, if we look for the Vaucluse, it appears twice

>  which(substr(tolower(france$names),1,5)=="vaucl")
[1] 93 96

Consequently, if we color the Vaucluse, it is the whole département (enclave included) that is highlighted,

The code is here

>  dpt="Vaucluse"
>  couleur="red"
>  match=match.map(france,dpt)
>  color=couleur[match]
>  map(database="france", fill=TRUE, col=color)

We can also make the enclave stand out. To do so, it suffices to ask for the two regions to be colored differently,

>  match[which(match==1)[2]]=2
>  couleur=c("blue","red")
>  color=couleur[match]
>  map(database="france", fill=TRUE, col=color)
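As a side note (a small variant, not in the original post), the regions argument of map() can also be used to select and highlight both polygons named "Vaucluse" directly,

>  map(database="france")
>  map(database="france", regions="Vaucluse", fill=TRUE, col="red", add=TRUE)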

Ah, the joy of maps with R…

 

UEFA, is that it ?

Following my previous post, a few more things. As mentioned by Frédéric, it is – indeed – possible to compute the probability of all pairs. More precisely, all pairs are not equally likely to occur: some teams can play against (almost) everyone, while others cannot. From the previous table, it is possible to compute the probability that the last team plays against team 1. Or team 2 (numbers are from the xls file mentioned previously). To make it simple

> table(M[,2*n])/length(M[,2*n])*100

       1        2        3        5        7       10       11 
11.82500 12.61212 12.61212 13.25279 19.31173 18.70767 11.67856

Here, the last team (as I ranked them) has an 11.8% chance of playing against team 1, and 19.3% against team 7. If we compute all the probabilities, we obtain

> S
       1     2     3     5     7    10    11    13
4   0.00 14.16 14.16  0.00 22.22 21.25 13.05 15.13
6  12.52 13.19 13.19 14.11 20.13  0.00 12.35 14.47
8  18.78  0.00 19.54 21.50  0.00  0.00 18.39 21.76
9  18.78 19.54  0.00 21.50  0.00  0.00 18.39 21.76
12 14.68 15.54 15.54 16.56  0.00 23.19 14.47  0.00
14 11.64 12.37 12.37 13.05 18.96 18.25  0.00 13.34
15 11.77 12.55 12.55  0.00 19.36 18.59 11.64 13.50
16 11.82 12.61 12.61 13.25 19.31 18.70 11.67  0.00

that can be visualized below

White areas cannot be reached, while red ones are more likely. Here, we compute the probability that the home team (on the x-axis) plays against a given visitor team (on the y-axis). The fact that those probabilities are not uniform seems odd. But I guess it comes from those constraints…
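A minimal sketch of how such a heatmap can be drawn from the matrix S above (converting it to a matrix, and using an ad hoc color scale, both being assumptions here),

> S2=as.matrix(S)
> image(1:nrow(S2), 1:ncol(S2), S2, col=c("white",rev(heat.colors(24))),
+   xlab="home team", ylab="visitor team", axes=FALSE)
> axis(1, at=1:nrow(S2), labels=rownames(S2))
> axis(2, at=1:ncol(S2), labels=colnames(S2))
> box()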

Another weird point: it is possible to reach a deadlock. At least with the technique I have been using. So far, I did not count them. But we can, simply with the following code (counting, in na, the number of times a dead end is reached),

> U=c(4,6,8,9,12,14,15,16)
> a1=U[1]
> b1=U[2]
> c1=U[3]
> d1=U[4]
> e1=U[5]
> f1=U[6]
> g1=U[7]
> h1=U[8]
> a2=b2=c2=d2=e2=f2=g2=h2=NA
> na=0   # deadlock counter
> s=0    # counter of valid scenarios
> posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
> if(length(posa2)==0){na=na+1}
> for(a2 in posa2){
+ posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
+ if(length(posb2)==0){na=na+1}
+ for(b2 in posb2){
+ posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
+ if(length(posc2)==0){na=na+1}
+ for(c2 in posc2){
+ posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],
+ a2,b2,c2)
+ if(length(posd2)==0){na=na+1}
+ for(d2 in posd2){
+ pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],
+ a2,b2,c2,d2)
+ if(length(pose2)==0){na=na+1}
+ for(e2 in pose2){
+ posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],
+ a2,b2,c2,d2,e2)
+ if(length(posf2)==0){na=na+1}
+ for(f2 in posf2){
+ posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],
+ a2,b2,c2,d2,e2,f2)
+ if(length(posg2)==0){na=na+1}
+ for(g2 in posg2){
+ posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],
+ a2,b2,c2,d2,e2,f2,g2)
+ if(length(posh2)==0){na=na+1}
+ for(h2 in posh2){
+ s=s+1
+ V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
+ }}}}}}}}

With the initial ordering of home teams, the number of deadlocks was

> na
[1] 657

The probability of obtaining a deadlock is then

> 657/(657+5463)
[1] 0.1073529

(657 scenarios ended in a dead end, while 5463 ended well). The worst case was obtained when we considered

 [1]    6    4   16   14   12   15    8    9

In that case, the probability of obtaining a deadlock was

> 4047/(4047+5463)
[1] 0.4255521

Here, it clearly depends on the ordering. So if we draw – randomly – the order of the home teams, i.e.

> Urandom=sample(U,size=8)

the distribution of the probability of having a deadlock is
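For completeness, here is a minimal sketch of how that distribution could be obtained, assuming the nested loops above have been wrapped in a (hypothetical) function count_scenarios(order) returning the number of deadlocks na and of valid scenarios s for a given ordering of the home teams,

> p=rep(NA,500)
> for(sim in 1:500){
+   Urandom=sample(U,size=8)
+   res=count_scenarios(Urandom)   # hypothetical wrapper around the loops above
+   p[sim]=res$na/(res$na+res$s)
+ }
> hist(p,col="yellow",xlab="probability of a deadlock")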

All those computations were based on my understanding of the drawing procedure. But Kristof (aka @ciebiera), on his blog krzysztofciebiera.blogspot.ca/… obtained different results. For instance, based on my previous computations, the probability of obtaining identical pairs was 0.0183049% (1 chance out of 5463), but Kristof obtained – based on the UEFA procedure (as he called it) – a probability of 0.0181337%. Which is not – strictly – the same, but both computations yield relatively close results…

UEFA, what were the odds ?

Ok, I was supposed to take a break, but Frédéric, professor in Tours, came back to me this morning with a tickling question. He asked me what the odds were that the Champions League draw would produce exactly the same pairings in the practice draw and in the official one (see e.g. dailymail.co.uk/…).

To be honest, I don’t know much about soccer, so here is what happened, with the practice draw (on the left, on December 19th) and the official one (on the right, on December 20th),

UEFA

Clearly, the pairs are identical, but not the order. Actually, at first, I was surprised that even which team plays at home first was identical. But (it seems that) teams that play at home first are the ones that ended second after the previous stage of the competition.

And to be more specific about those draws, the pairs were obtained using real urns and real balls, so it is pure randomness (again, as far as I understood). But with very specific rules. For instance, two teams from the same country cannot play together (one against the other) at this stage. And teams that ended first after the previous round can only play against teams that ended second. Actually, Frédéric sent me an xls file, with a possibility matrix.

Let us find all possible pairs, regardless of which team plays at home first (again, we do not care here, since the order is defined by the rule mentioned above). Doing the maths might have been a bit complicated, with all those constraints. With a small code, it is possible to list all possible pairs, for those eight games. Let us import our possibility matrix,

 > n=16
 > uefa=read.table(
 + "http://freakonometrics.blog.free.fr/public/data/uefa.csv",
 + sep=",",header=TRUE)
 > LISTEIMPOSSIBLE=matrix(
 + (rep(1:n,n))*(uefa[1:n,2:(n+1)]=="NON"),n,n)
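To get a sense of what that matrix contains (a quick check, not in the original post): entry (i,j) equals i when team i cannot be paired with team j, and 0 otherwise, so each column lists the forbidden opponents of one team,

 > LISTEIMPOSSIBLE[,4]   # indices of the teams that cannot play against team 4 (0 otherwise)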

I can fix the first team (in my list, the fourth one is the first team that ended second). Then, I look at all possible second teams (that will play against the first one),

 > a1=1
 > "%notin%" <- function(x, table){x[match(x, table, nomatch = 0) == 0]}
 > posa2=((a1+1):n)%notin%LISTEIMPOSSIBLE[,a1]
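As a quick illustration of what this operator returns (a small check, not in the original post): it keeps the elements of x that do not belong to the table,

 > (1:8) %notin% c(2,4,7)
 [1] 1 3 5 6 8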

Then, consider the second team that ended second (the sixth one in my list). And look at all possible fourth teams (that will play this second game), i.e. excluding the ones that were already drawn, and those that are not possible,

 > b1=6
 > posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)

Etc. So, given the list of home teams,

 > a1=4
 > b1=6
 > c1=8
 > d1=9
 > e1=12
 > f1=14
 > g1=15
 > h1=16

consider the following loops (after initializing the counter s, and the matrix M in which all scenarios will be stored),

 > s=0
 > M=NULL
 > posa2=(1:n)%notin%c(LISTEIMPOSSIBLE[,a1])
 > for(a2 in posa2){
 + posb2=(1:n)%notin%c(LISTEIMPOSSIBLE[,b1],a2)
 + for(b2 in posb2){
 + posc2=(1:n)%notin%c(LISTEIMPOSSIBLE[,c1],a2,b2)
 + for(c2 in posc2){
 + posd2=(1:n)%notin%c(LISTEIMPOSSIBLE[,d1],a2,b2,c2)
 + for(d2 in posd2){
 + pose2=(1:n)%notin%c(LISTEIMPOSSIBLE[,e1],a2,b2,c2,d2)
 + for(e2 in pose2){
 + posf2=(1:n)%notin%c(LISTEIMPOSSIBLE[,f1],a2,b2,c2,d2,e2)
 + for(f2 in posf2){
 + posg2=(1:n)%notin%c(LISTEIMPOSSIBLE[,g1],a2,b2,c2,d2,e2,f2)
 + for(g2 in posg2){
 + posh2=(1:n)%notin%c(LISTEIMPOSSIBLE[,h1],a2,b2,c2,d2,e2,f2,g2)
 + for(h2 in posh2){
 + s=s+1
 + V=c(a1,a2,b1,b2,c1,c2,d1,d2,e1,e2,f1,f2,g1,g2,h1,h2)
 + cat(s,V,"\n") 
 + M=rbind(M,V)
 + }}}}}}}}

With the print option, we end up with

5461 4 13 6 11 8 5 9 2 12 10 14 3 15 7 16 1 
5462 4 13 6 11 8 5 9 2 12 10 14 7 15 1 16 3 
5463 4 13 6 11 8 5 9 2 12 10 14 7 15 3 16 1

i.e.

> nrow(M)
[1] 5463

possible pairs (the list can be found here, where numbers are the same as the ones in the csv file). This was the probability mentioned in a comment in the article mentioned previously dailymail.co.uk/…. So the probability of having exactly the same output in the practice and the official draws was (in %)

> 100/nrow(M)
[1] 0.01830496

Which is not that small when we think about it….

And if someone has a mathematical expression for this probability, I am interested. The only reliable method I found was to list all possible pairs (the csv file is available if someone wants to check). But I am not satisfied….
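Regarding a mathematical expression, one possible remark (a sketch under assumptions, not a claim about the official procedure): the number of admissible sets of eight pairs is the permanent of the 8×8 compatibility matrix A, with rows indexing the teams that ended first, columns the teams that ended second, and A[i,j]=1 when the pair is allowed (i.e. the number of perfect matchings in the corresponding bipartite graph). With such a matrix (to be built from the uefa.csv file), a small recursive expansion gives the count,

> perm=function(A){
+   # permanent, via expansion along the first row
+   if(nrow(A)==1) return(A[1,1])
+   total=0
+   for(j in 1:ncol(A)){
+     if(A[1,j]!=0) total=total+A[1,j]*perm(A[-1,-j,drop=FALSE])
+   }
+   return(total)
+ }
> # with A the 8x8 0/1 matrix of allowed pairs, perm(A) should match
> # the 5463 scenarios found by enumeration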

Somewhere else, part 26

One very interesting – not to say disturbing – post, this week

and as usual, a lot of interesting posts and articles, here and there,

  • on the “adequacy of scholars’ training” timeshighereducation.co.uk/ … “many are little or no better qualified than those they are teaching”
  • “The global diversity of birds in space and time” nature.com/…
  • “Banks are [officially] above the law” financialsense.com/…
  • Excel and operational risk wiscnews.com/… “‘operator error’ resulted in a spreadsheet underestimating the total cost” (about $400,000); another example of operational risk knoxnews.com/ … “one account wasn’t correctly linked into an Excel spreadsheet”; want more spreadsheet operational risk stories? eusprig.org/…
  • “What if We Made Fewer Ph.D.’s?” chronicle.com/…
  • is it legitimate to use probability in trials? maths.ed.ac.uk/~aar/… by Laurence Tribe in 1971
  • thomas.loc.gov/… when a “breakthrough in mathematics in the theory of vector bundles” is discussed at House of Representatives
  • “Assault Deaths Within the United States” on kjhealy’s blog kieranhealy.org/… via obouba
  • “November 2012 was the fifth-warmest November since records began in 1880” ncdc.noaa.gov/ …

As always, a few documents in French,
  • In France, “capital scolaire des membres des comités exécutifs du CAC 40” opesc.org/analyses/… (taking multiple positions into account)
  • “Salaire des enseignants (primaire et secondaire) européens ?” lemonde.fr/societe/…

Generating a non-homogeneous Poisson process

Consider a Poisson process $(N_t)_{t\geq 0}$, with non-homogeneous intensity $\lambda(t)$. Here, we consider a deterministic function, not a stochastic intensity. Define the cumulated intensity

$$\Lambda(t)=\int_0^t\lambda(s)\,ds$$

in the sense that the number of events that occurred between times $s$ and $t$ is a random variable that is Poisson distributed, with parameter $\Lambda(t)-\Lambda(s)$.

For example, consider here a cyclical Poisson process, with intensity

   lambda=function(x) 100*(sin(x*pi)+1)

To compute the cumulated intensity, consider a very general function

   Lambda=function(t) integrate(f=lambda,lower=0,upper=t)$value
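As a quick sanity check (not in the original post), the integral has a closed form here, since $\int_0^1 100(\sin(\pi s)+1)\,ds=100(1+2/\pi)\approx 163.66$,

   Lambda(1)    # should return approximately 163.662, i.e. 100*(1+2/pi)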

The idea is to generate a Poisson process on a finite interval $[0,T_{\max}]$.

The first code is based on a proposition from Çinlar (1975),

  1. start with $s=0$
  2. generate $u\sim\mathcal{U}([0,1])$
  3. set $s\leftarrow s-\log(u)$
  4. let $t$ denote $\Lambda^{-1}(s)=\inf\{v\geq 0:\Lambda(v)\geq s\}$
  5. deliver $t$
  6. go to step 2.

In order to get the infimum $\inf\{v:\Lambda(v)\geq s\}$ (numerically, over a fine grid), consider a code such as

   v=seq(0,Tmax,length=1000)
   t=min(v[which(Vectorize(Lambda)(v)>=s)])

(it might not be very efficient…. but it should work). Here, the code to generate that Poisson process is

   s=0; X=0; v=seq(0,Tmax,length=1000)   # start with X=0 so that the while condition is defined
   while(X[length(X)]<=Tmax){
     u=runif(1)
     s=s-log(u)
     t=min(v[which(Vectorize(Lambda)(v)>=s)])
     X=c(X,t)
   }

Here, we get the following histogram,

   hist(X,breaks=seq(0,max(X)+1,by=.1),col="yellow")
   u=seq(0,max(X),by=.02)
   lines(u,lambda(u)/10,lwd=2,col="red")

Consider now another strategy. The idea is to use the conditional distribution of the duration before the next event, given that one occurred at time $t$,

  1. start with $t=0$
  2. generate $x\sim F_t$, where $F_t(x)=1-\exp(-\Lambda(t+x)+\Lambda(t))$
  3. set $t\leftarrow t+x$
  4. deliver $t$
  5. go to step 2.

Here the algorithm is simple. On the computational side, at each step, we have to compute $F_t$ and then $F_t^{-1}$. To do so, since $F_t$ is increasing with values in $[0,1)$, we can use a dichotomic (bisection) algorithm,

   Ft=function(x) 1-exp(-Lambda(t+x)+Lambda(t))
   Ftinv=function(u){
     a=0
     b=Tmax
     for(j in 1:20){
       if(Ft((a+b)/2)<=u){binf=(a+b)/2;bsup=b}
       if(Ft((a+b)/2)>=u){bsup=(a+b)/2;binf=a}
       a=binf
       b=bsup
     }
   return((a+b)/2)
   }
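As a quick check of that bisection (assuming Tmax is set, and taking the current time to be t=0), twenty iterations shrink the bracket down to Tmax/2^20, so Ft(Ftinv(u)) should be very close to u,

   t=0
   Ft(Ftinv(.5))    # should be very close to 0.5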

Here the code is the following

   t=0; X=t
   while(X[length(X)]<=Tmax){
     Ft=function(x) 1-exp(-Lambda(t+x)+Lambda(t))
     Ftinv=function(u){
      a=0
      b=Tmax
      for(j in 1:20){
        if(Ft((a+b)/2)<=u){binf=(a+b)/2;bsup=b}
        if(Ft((a+b)/2)>=u){bsup=(a+b)/2;binf=a}
        a=binf
        b=bsup
      }
      return((a+b)/2)
     }
     x=Ftinv(runif(1))
     t=t+x
     X=c(X,t)
   }

The third code is based on a classical algorithm to generate a homogeneous Poisson process on a finite interval: first, we generate the number of events, then we draw uniform variates, and we sort them. Here, the strategy is similar, except that the draws won't be uniform any longer.

  1. generate the number of events on the time interval $[0,T_{\max}]$, $n\sim\mathcal{P}(\Lambda(T_{\max}))$
  2. generate independently $U_1,\dots,U_n$, where $U_i\sim F$ with $F(t)=\Lambda(t)/\Lambda(T_{\max})$
  3. set $t_i=U_{(i)}$, i.e. the ordered values $U_{(1)}\leq\cdots\leq U_{(n)}$
  4. deliver the $t_i$'s

This algorithm is extremely simple, and also very fast. There is only one function to invert, and it does not change within the loop,

   n=rpois(1,Lambda(Tmax))
   Ft=function(x) Lambda(x)/Lambda(Tmax)
   Ftinv=function(u){
     a=0
     b=Tmax
     for(j in 1:20){
       if(Ft((a+b)/2)<=u){binf=(a+b)/2;bsup=b}
       if(Ft((a+b)/2)>=u){bsup=(a+b)/2;binf=a}
       a=binf
       b=bsup
     }
     return((a+b)/2)
     }
   X0=rep(NA,n)
   for(i in 1:n){
     X0[i]=Ftinv(runif(1))
    }
   X=sort(X0)
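As a quick sanity check (not in the original post), the number of simulated points is, by construction, a Poisson draw with mean given by the integrated intensity over the interval,

   length(X)       # one draw from a Poisson distribution with mean Lambda(Tmax)
   Lambda(Tmax)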

Here is the associated histogram,

An alternative is based on a rejection (thinning) technique. Actually, it was the algorithm mentioned a few years ago on this blog (well, the previous one). Here, we need an upper bound $\overline{\lambda}$ for the intensity, so that computations can be much faster. Consider

  1. start with $t=0$
  2. generate $u\sim\mathcal{U}([0,1])$
  3. set $t\leftarrow t-\log(u)/\overline{\lambda}$
  4. generate $v\sim\mathcal{U}([0,1])$ (independent of $u$)
  5. if $v\leq\lambda(t)/\overline{\lambda}$ then deliver $t$
  6. go to step 2.

Here, consider a constant upper bound,

   lambdau=function(t) 200
   Lambdau=function(t) lambdau(t)*t

The code to generate a Poisson process is

   t=0
   X=0   # start with X=0 so that the while condition is defined
   while(X[length(X)]<=Tmax){
     u=runif(1)
     t=t-log(u)/lambdau(t)   # lambdau is a function, so it has to be evaluated
     if(runif(1)<=lambda(t)/lambdau(t)) X=c(X,t)
   }

The histogram is here

Finally, the last one is also based on a rejection technique, mixed with the second one. I.e. define

$$\overline{F}_t(x)=1-\exp\left(-\overline{\Lambda}(t+x)+\overline{\Lambda}(t)\right)=1-\exp(-\overline{\lambda}\,x)$$

(based on the constant upper bound $\overline{\lambda}$). The good thing is that this function can easily be inverted,

$$\overline{F}_t^{-1}(u)=-\frac{\log(1-u)}{\overline{\lambda}}$$

  1. start (as usual) with $t=0$
  2. generate $x\sim\overline{F}_t$
  3. set $t\leftarrow t+x$
  4. generate $v\sim\mathcal{U}([0,1])$
  5. if $v\leq\lambda(t)/\overline{\lambda}(t)$ then deliver $t$
  6. go to step 2.

Here, the algorithm is simply

   t=0
   X=0   # start with X=0 so that the while condition is defined
   while(X[length(X)]<=Tmax){
     Ftinvu=function(u) -log(1-u)/lambdau(t)
     x=Ftinvu(runif(1))
     t=t+x
     if(runif(1)<=lambda(t)/lambdau(t)) X=c(X,t)
   }

Obviously those five codes work, the first one being much slower than the others. But it might be because my strategy to seek the infimum is not great. And the last ones worked well because there were not many rejections; I guess it could be worse…

All those algorithms are mentioned in a nice survey written by Raghu Pasupathy, which can be downloaded from http://web.ics.purdue.edu/~pasupath/…. In the paper, non-homogeneous spatial Poisson processes are also mentioned…

 

Actuariat IARD

This winter (even if the news will not be official until the start of the term), I should be teaching ACT2040, non-life insurance (actuariat IARD). The course outline will be online soon, but I can already say that the course will be based on Volume 2 of the book written with Michel Denuit a few years ago, mathématiques de l’assurance non-vie. The course is a follow-up to ACT6420, forecasting methods, given this fall (which is a prerequisite listed on the registrar's website http://websysinfo.uqam.ca/…): I will therefore assume that the linear regression model is known (and understood), and that everyone knows how to use R and to read regression outputs. But the first lab sessions will come back to the use of R, and to the analysis of variance, which we did not really have time to cover in the regression course. For references on R, I recommend

  • “R pour les débutants” by Emmanuel Paradis, (PDF)
  • “Introduction à la programmation en S” by Vincent Goulet, (PDF)

for documents in French, or, for more complete documents, in English,

  • “R for Beginners” by Emmanuel Paradis (PDF),
  • “An Introduction to R” by Longhow Lam (PDF)
  • “The R language — a short companion” by Marc Vandemeulebroecke (PDF),
  • “The R Guide” by Jason Owen (PDF),
  • “Econometrics in R” by Grant Farnsworth (PDF) to go further with regressions,
  • “Practical Regression and Anova using R” by Julian Faraway (PDF) on the same topic,
  • “Statistics with R and S-Plus” by Hugo Quené (PDF),
  • “Statistical Computing and Graphics Course Notes” by Frank Harrell (PDF),
  • “Using R for Data Analysis and Graphics – Introduction, Examples and Commentary” by John Maindonald (PDF).

Otherwise, the slides of the first session are online here, and the course outline is

and I will soon post links to the datasets that we will use throughout the course, or in the lab sessions.

As for references, I will mention two books on which I will rely a lot, since I know them almost by heart. They are available at the Coopuqam

ACT2121 final exam

Monday's exam is online here, with elements of correction (with, as always, statistics on the answers). The grades will be published soon. Anyone who finds errors can contact me before I validate the grades. The exam corresponded to practice exam 15 from Jacques Labelle's book (I did not invent anything). For one of the questions, the correct answer was not among the proposed ones. So I finally graded the exam out of 29 (and applied a multiplicative factor to bring it back to a grade out of 30). As always, those who correctly predicted their number of correct answers got a bonus point.

Econometric Modeling in Finance and Insurance with the R language

On February 15th, IFM2, the Institute of Financial Mathematics in Montréal, will organize a (one-day) Executive workshop on Econometric Modeling in Finance and Insurance with the R language. The event is not yet mentioned in the calendar, but the syllabus can be downloaded here. Additional details (slides and R code) will be available soon, on this blog. The morning will be an introduction to the R language, and in the afternoon, we will focus on applications,

  • Principal components analysis and application to yield curves
  • Regression tree, logistic regression and application to credit scoring
  • Poisson regression and applications to claims reserving (IBNR) and projected mortality tables (LifeMetrics)

Somewhere else, part 25

Two interesting posts, especially since I just finished my lecture on predictive modeling in the UQAM actuarial program,

with – as usual – a lot of interesting posts and articles, here and there,

 

Did I miss something ?

ACT6420 final exam

Next Wednesday is the final exam (which counts for 30%). On the agenda, as announced this morning, the format will be close to that of the midterm exam, with 33 multiple-choice questions

  • a few general comprehension questions on time series modeling,
  • a few questions on the analysis of outputs obtained from modeling a series.

This term, the series to study will be the one describing the traffic of an airport, over some fifteen years. The data are monthly, and are online via the following code

> base=read.table(
"http://freakonometrics.blog.free.fr/public/data/TS-examen.txt",
+ sep=";",header=TRUE)
> X=ts(base$X,start=c(base$A[1],base$M[1]),frequency=12)
> plot(X)

The appendices to be discussed at the exam are online. Need I say that I will not answer questions about this document before Wednesday?

Good luck.

Actuarial Science 1, ACT2121, eighth session

For the eighth session of Actuarial Science 1 (ACT2121, preparation for the SOA Exam P), we will continue the exercises started last week. I am nevertheless posting a few additional exercises, for those who want to practice more (the file is online here). As a reminder (?), the final exam will take place next week, and will cover all the material. As always, 30 questions, 3 hours, starting at 1 pm (do I need to mention it?). This time, I will provide the official SOA table.

 


Modeling and forecasting, a textbook case

A few lines of code that we will go over again in the next class, with a log transformation and a linear trend. Consider search queries for the keyword headphones, in Canada; the dataset is online on the old blog, at freakonometrics.blog.free.fr/…

> report=read.table(
+ "report-headphones.csv",
+ skip=4,header=TRUE,sep=",",nrows=464)
> source("http://freakonometrics.blog.free.fr/public/code/H2M.R")
> headphones=H2M(report,lang="FR",type="ts")
> plot(headphones)

But the linear model should not be suitable, since the series explodes,

> n=length(headphones)
> X1=seq(12,n,by=12)
> Y1=headphones[X1]
> points(time(headphones)[X1],Y1,pch=19,col="red")
> X2=seq(6,n,by=12)
> Y2=headphones[X2]
> points(time(headphones)[X2],Y2,pch=19,col="blue")

It is then natural to take the logarithm of the series,

> plot(headphones,log="y")

This is the series we are going to model (though of course it is the original series, in the end, that we will have to forecast). We start by removing the trend (linear, here)

> X=as.numeric(headphones)
> Y=log(X)
> n=length(Y)
> T=1:n
> B=data.frame(Y,T)
> reg=lm(Y~T,data=B)
> plot(T,Y,type="l")
> lines(T,predict(reg),col="purple",lwd=2)

We then work on the residual series.

> Z=Y-predict(reg)
> acf(Z,lag=36,lwd=6)
> pacf(Z,lag=36,lwd=6)

We can try a seasonal differencing,

> DZ=diff(Z,12)
> acf(DZ,lag=36,lwd=6)
> pacf(DZ,lag=36,lwd=6)

We then fit an ARIMA process to the differenced series,

> mod=arima(DZ,order=c(1,0,0),
+ seasonal=list(order=c(1,0,0),period=12))
> mod

Coefficients:
         ar1     sar1  intercept
      0.7937  -0.3696     0.0032
s.e.  0.0626   0.1072     0.0245

sigma^2 estimated as 0.0046:  log likelihood = 119.47

But since it is the original series we are interested in, we use a SARIMA representation,

> mod=arima(Z,order=c(1,0,0),
+ seasonal=list(order=c(1,1,0),period=12))

We then compute the forecast of this series.

> modpred=predict(mod,24)
> Zm=modpred$pred
> Zse=modpred$se

We also use the extrapolation of the linear trend,

> tendance=predict(reg,newdata=data.frame(T=n+(1:24)))

To finally get back to our original series, we use the properties of the lognormal distribution, and more specifically the form of its mean, to predict the value of the series,
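The lognormal fact being used (a standard result, recalled here for completeness): if $Z\sim\mathcal{N}(\mu,\sigma^2)$, then
$$\mathbb{E}[e^{Z}]=\exp\left(\mu+\frac{\sigma^2}{2}\right),$$
hence the correction term Zse^2/2 added to the point forecast on the log scale, in the code below.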

> Ym=exp(Zm+tendance+Zse^2/2)

Graphically, we get

> plot(1:n,X,xlim=c(1,n+24),type="l",ylim=c(10,90))
> lines(n+(1:24),Ym,lwd=2,col="blue")

For confidence intervals, we can use the quantiles of the lognormal distribution,

> Ysup975=qlnorm(.975,meanlog=Zm+tendance,sdlog=Zse)
> Yinf025=qlnorm(.025,meanlog=Zm+tendance,sdlog=Zse)
> Ysup9=qlnorm(.9,meanlog=Zm+tendance,sdlog=Zse)
> Yinf1=qlnorm(.1,meanlog=Zm+tendance,sdlog=Zse)
> polygon(c(n+(1:24),rev(n+(1:24))),
+ c(Ysup975,rev(Yinf025)),col="orange",border=NA)
> polygon(c(n+(1:24),rev(n+(1:24))),
+ c(Ysup9,rev(Yinf1)),col="yellow",border=NA)