
Extracting information from a picture, round 1

This week, I wanted to extract the information displayed on the nice map below. I could not get access to the original dataset, at the zip-code level… so I was wondering whether (assuming the map had a high enough resolution) it was actually possible to recover the information with a few simple R functions…

As we can see, there are red and green areas on the map, and I would love to know which are the green and which are the red cities in France. One important issue is the background. Here it is nice and white… but white is a tricky color, achromatic and very light. More specifically, if I search for red areas, the white background is also very red. And very green, too. So, to avoid those issues, I used gimp to turn the background black: where it is black, it is neither red nor green!

Let us get the map, and extract information from the file

url="https://freakonometrics.hypotheses.org/files/2018/12/inondation3.png"
download.file(url,"inondation3.png")
image="inondation3.png"
library(pixmap)
library(png)
IMG=readPNG(image)

The information is stored in a three-dimensional array: dimension 1 is the height of the picture (in pixels), dimension 2 is the width, and the third one is either 1 (red), 2 (green) or 3 (blue), based on the rgb decomposition of each pixel. Then, I try to find the borders of the map

nl=dim(IMG)[1]
nc=dim(IMG)[2]
MAT=(IMG[,,1]+IMG[,,2])/2
x=apply(MAT,2,max)
plot(x,type="l")

When it is null, it means there is no color anywhere on that vertical line of pixels (that column of the matrix), i.e. it is completely black (initially, I used the mean function, but the maximum really behaves like a step function)

y=apply(MAT,1,max)
plot(y,type="l")

Let us find cutoff values, on the left and on the right, on top and on the bottom

image(1:nc,1:nl,t(MAT))
abline(v=min(which(x>.2)),col="blue")
abline(v=max(which(x>.2)),col="blue")
abline(h=min(which(y>.2)),col="blue")
abline(h=max(which(y>.2)),col="blue")

We obtain the following (forget about the fact that France is – somehow – upside-down: row 1 of the matrix is the top of the picture, while image() draws it at the bottom)

We can zoom in, just to make sure that our borders are fine

par(mfrow=c(1,2))
image(min(which(x>.2))+(-5):5,1:nl,t(MAT)[min(which(x>.2))+(-5):5,])
abline(v=min(which(x>.2))+(-5):5,col="white")
abline(v=min(which(x>.2)),col="blue")
x1=min(which(x>.2))-1

and similarly for the right border (the top and bottom cutoffs, y1 and y2, are obtained the same way, as sketched below)

image(max(which(x>.2))+(-5):5,1:nl,t(MAT)[max(which(x>.2))+(-5):5,])
abline(v=max(which(x>.2))+(-5):5,col="white")
abline(v=max(which(x>.2)),col="blue")
x2=max(which(x>.2))+1
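
The vertical cutoffs y1 and y2, used below, are not shown here; a minimal sketch, mirroring the convention used for x1 and x2 above, would be

y1=min(which(y>.2))-1
y2=max(which(y>.2))+1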

So far so good. Let us keep the subpart of the picture,

image(x1:x2,y1:y2,t(MAT)[x1:x2,y1:y2])

Now, let us focus on the red part / component of that picture

ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))

That's not bad, is it? And we can get a similar graph for the green component

VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Now, I wanted to overlay a map of France on top of it. Using shapefiles of administrative regions, it would then be possible to compute the proportion of red and green in each area (départements, cantons, etc). As a starting point (before going down to the 'départements'), let us use a standard shapefile for France

library(maptools)
library(PBSmapping)
url="http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds"
download.file(url,"FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR)
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

Here we try to rescale it: the left edge of the polygon should match the left edge of the picture, and similarly for the right edge, the top, and the bottom,

Unfortunately, even when changing the projection technique, I could not get the contour of France to match perfectly. I am quite sure it is a projection problem! But I did try a dozen popular ones, with no success… so if anyone has a clever idea…
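
For the record, here is a sketch of what I would try next: reprojecting the shapefile to Lambert-93 (EPSG:2154), the projection used by most French maps, before rescaling (only a sketch, and it does not claim to fix the mismatch)

library(rgdal)
FR93 = spTransform(FR, CRS("+init=epsg:2154"))   # reproject from long/lat (WGS84) to Lambert-93
PP93 = SpatialPolygons2PolySet(FR93)             # then rescale PP93$X and PP93$Y onto the pixel grid, as above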

Buying a (not too expensive) train ticket

Yesterday, I came across an article discussing train ticket prices in France (and the very high prices on some dates, for instance during the winter break). Anyone used to taking the train knows that the price you pay depends on when you buy the ticket (and on how flexible you can be regarding the time, or even the date, of the trip). Last summer, as part of his project for the Data Science for Actuaries program, for my course, Pierre suggested scraping the https://www.oui.sncf/ website to track the evolution of ticket prices.

The main difficulty is that https://www.oui.sncf/ relies on javascript both for the input forms and for displaying the results, which prevents the standard use of the rvest package, for instance. In another post, I had mentioned the use of wdman to scrape the forest-fire website. Here, Pierre suggested going through casperjs, and I will follow his strategy:

  • we will use casperjs, a browser emulator written in javascript. It emulates a real browser (the same engine as google chrome) and can execute the javascript embedded in the page
  • we will use a small bash script to launch the code (an R alternative is sketched right below)
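
If you prefer to stay within R, the casperjs script can also be launched directly with a system() call; a minimal sketch (the script name scrape_sncf.js and the output file are hypothetical)

# run the (hypothetical) casperjs scraper, keeping one output file per run
# (casperjs must be installed and available on the PATH)
outfile = paste0("prix_", format(Sys.time(), "%Y%m%d_%H%M"), ".json")
system(paste("casperjs scrape_sncf.js >", outfile))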

As for the bash part, it is just that I work on mac and linux. On a mac, quite a lot can be done in bash… Everything goes through variables, which can be defined and displayed, for instance to get the current time. I can also define a variable and increment it (handy for loops).

More interestingly, one can schedule tasks. To do so, we type the command that opens the crontab editor,

where I ask it to run an R script every hour, at 13:50, 14:50, 15:50, and so on. To run it every day at 13:50 only, I use the corresponding single entry,

and we then save the instruction.

We can check that the command will indeed be launched every day: it appears in the list of scheduled tasks.

Note that a script can be launched while passing arguments: here I tell it which object to work on (the first argument), while the second argument is used to create (and name) an output file.
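
Inside the scheduled R script, those two arguments can be read with commandArgs; a small sketch (the file names are made up)

# e.g. launched with:  Rscript moisson.R Paris-Rennes prix_rennes
args    = commandArgs(trailingOnly = TRUE)
trajet  = args[1]                                   # which object / route to work on
fichier = paste0(args[2], "_", Sys.Date(), ".csv")  # used to create (and name) the output file
# ... scraping code for 'trajet' ...
# write.csv(resultats, fichier, row.names = FALSE)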

In short, it is quite easy to launch code automatically, for scraping. On windows, one would use the task scheduler. A first script extracts the trains opened (for sale) during the day,

and a second one, the trains that have been open for more than 24 hours.

Then, we use http://docs.casperjs.org/en/latest/ to write our browser emulator (the code is available online).

We thus create a lot of files containing the information we want! I will skip the data-cleaning step, and simply present the information that can be extracted. In particular, Pierre stored the data between last March and June.

Here, we only look at a few main intercity routes,

library(readr)
library(rgdal)
nomFichier = tempfile(fileext = ".zip")
  download.file("https://freakonometrics.free.fr/CarteFrance.zip", destfile = nomFichier, mode = "wb")
  unzip(zipfile = nomFichier, exdir = getwd())
  download.file("https://freakonometrics.free.fr/LgTroncons.csv", destfile = "LgTroncons.csv", mode = "wb")
  download.file("https://freakonometrics.free.fr/CoordVilles.csv", destfile = "CoordVilles.csv", mode = "wb")
  fra0 = readOGR(dsn = paste(getwd(), "/CarteFrance", sep = ""), layer = "gadm36_FRA_0", verbose = F)
  LgTroncons = read_delim("LgTroncons.csv",";", escape_double = FALSE,locale = locale(decimal_mark = ","), trim_ws = TRUE)
CoordVilles = read_delim("CoordVilles.csv",";", escape_double = FALSE,locale = locale(decimal_mark = ","), trim_ws = TRUE)
NomsVilles = CoordVilles[CoordVilles$NOM_A_AFFICHER==1,]
library(ggplot2)
fr_df = fortify(fra0)
ggp = ggplot() + geom_polygon(data=fr_df, aes(long, lat,group = group), fill = "#3A8EBA") 
  ggp = ggp + geom_path(data = LgTroncons, aes(x = LONG, y = LAT, group = ID_TRONCON), colour= "#CC5500", lineend = "round", size=3) + geom_path(data = LgTroncons, aes(x=LONG, y=LAT, group = ID_TRONCON), colour="white", lineend = "round",  size=1.75)
  ggp = ggp + geom_point(data = NomsVilles, aes(x=LONG, y=LAT), colour = "blue", fill = "white", shape=21, size = NomsVilles$PT_SIZE, stroke = NomsVilles$PT_STROKE) + theme_void()
  ggp = ggp + geom_text(data = NomsVilles, aes(x=LONG, y=LAT, label=NOM),hjust = NomsVilles$H_AJUST,
vjust=NomsVilles$V_AJUST, colour = "white", fontface = "bold", size =3.25)+coord_fixed(1.47)
  ggp = ggp + ggtitle("Représentation des trajets étudiées") + theme(plot.title = element_text(hjust = 0.5, face="bold"))
  print(ggp)

We will work with the following routes,

Let us compare the tickets for a Paris-Rennes trip, based on the information harvested over three months, on Friday evenings, and in particular on two Fridays of June 2018 (June 15 and 22). On both days, there were 6 trains between 5 pm and 8 pm. On June 15, the first three started at a price of 45 €. On June 22, the first one also started at 45 €, but the next two were launched at 33 €. Quite quickly, prices went up to 45 €.

We can look at the evolution of the price

If we look at several destinations, we observe very different patterns,

  • for Le Mans, prices rise very quickly, starting at 15 €, reaching 18 € ten hours after the opening, 21 € the next day, and 27 € after a week. Within a month, prices have almost doubled.
  • for Rennes, the evolution is similar, going from 20 € to 25 € within a few hours, and to 40 € two weeks later!
  • for Toulouse, on the other hand, the initial price is higher, at 43 €, increasing by 3 € within 10 hours and 6 € within 16 hours, then staying at 49 €

But for all destinations, prices are increasing.

or, graphically

We can also draw a map. If we look at the opening prices, Lille, Le Mans and Rennes are rather cheap.

And the largest variations over 10 hours are observed for Nantes and Bordeaux.

Fun, isn't it?

Combining automatically factor levels in R

Each time we face real applications in an applied econometrics course, we have to deal with categorical variables. And the same question arises from students: how can we automatically combine factor levels? Is there a simple R function?

I did write a few blog posts on that topic over the past years, but so far, nothing really satisfying. Let me write down a few lines about what could be done. And if someone wants to write a nice R function, that would be awesome. To illustrate the idea, consider the following (simulated) dataset

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
             x2=cut(x2,breaks=
             c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
             labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...
table(b$x2)[LETTERS[1:10]]
 
 A  B  C  D  E  F  G  H  I  J 
11 12 23 34 23 36 12 32  3 14

There is one (continuous) dependent variable y, one continuous covariate x_1, and one categorical variable x_2, with ten levels here. We can plot the data using

plot(b$x1,y,col="white",xlim=c(0,1.1))
text(b$x1,y,as.character(b$x2),cex=.5)

The output of a linear regression yields the following predictions

for(i in 1:10){
p=function(x) predict(lm(y~x1+x2,data=b),newdata=data.frame(x1=x,x2=LETTERS[i]))
u=seq(-1,1.065,by=.01)
v=Vectorize(p)(u)
lines(u,v)}

The slope for x_1 is the same for all levels; we simply add a different constant for each one. As we can see, some levels are very close, so it seems legitimate to combine them into a single category. Here is the output of the linear regression,

summary(lm(y~x1+x2,data=b))
Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.843802   0.119655   7.052 3.23e-11 ***
x1           1.992878   0.053838  37.016  < 2e-16 ***
x2A          0.055500   0.131173   0.423   0.6727    
x2H          0.009293   0.121626   0.076   0.9392    
x2F         -0.177002   0.121020  -1.463   0.1452    
x2B         -0.218152   0.130192  -1.676   0.0955 .  
x2D         -0.206970   0.121294  -1.706   0.0896 .  
x2G         -0.407417   0.129999  -3.134   0.0020 ** 
x2C         -0.526708   0.123690  -4.258 3.24e-05 ***
x2J         -0.664281   0.128126  -5.185 5.54e-07 ***
x2E         -0.816454   0.123625  -6.604 3.94e-10 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
 
Residual standard error: 0.2014 on 189 degrees of freedom
Multiple R-squared:  0.8995,	Adjusted R-squared:  0.8942 
F-statistic: 169.1 on 10 and 189 DF,  p-value: < 2.2e-16
AIC(lm(y~x1+x2,data=b))
[1] -60.74443
BIC(lm(y~x1+x2,data=b))
[1] -21.16463

Here the reference category is "I", and it looks like we could actually combine it with several others. One strategy would be to select all categories that do not seem significantly different from the reference, and to run a (multiple) test

library(car)
linearHypothesis(lm(y~x1+x2,data=b), c("x2A = 0", "x2H = 0", "x2F = 0"))
 
Hypothesis:
x2A = 0
x2H = 0
x2F = 0
 
Model 1: restricted model
Model 2: y ~ x1 + x2
 
  Res.Df    RSS Df Sum of Sq      F Pr(>F)    
1    192 8.4651                               
2    189 7.6654  3   0.79971 6.5726  3e-04 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

It seems that we can combine those four categories together.

Here, we can see what’s going on when we change the reference category (actually, loop on all categories)

P=matrix(NA,nlevels(b$x2),nlevels(b$x2))
colnames(P)=rownames(P)=LETTERS[1:10]
plot(1:nlevels(b$x2),1:nlevels(b$x2),col="white",xlab="",ylab="",axes=F,xlim=c(0,10.5),
     ylim=c(0,10.5))
text(1:10,0,LETTERS[1:10])
text(0,1:10,LETTERS[1:10])
for(i in 1:nlevels(b$x2)){
#levels(b$x2)=LETTERS[1:10]
b$x2=relevel(b$x2,LETTERS[i])
p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
names(p)=substr(names(p),3,3)
P[LETTERS[i],names(p)]=p
p=P[LETTERS[i],]
idx=which(p>.05)
points(((1:10))[idx],rep(i,length(idx)),pch=1,cex=2)
idx=which(p>.1)
points(((1:10))[idx],rep(i,length(idx)),pch=19,cex=2)}

We are glad to see that it is symmetric: if "H" should be combined with "I", then "I" should also be combined with "H".

Here, black points indicate pairs with a p-value above 10%, and white points pairs with a p-value above 5%. This graph is actually hard to read… and it actually reminds us of Bertin (1967).

Here, we can manually predefine an ordering (we will see below how it might be automated)

LETTERSord=c("I","A","H","F","B","D","G","C","J","E")
P=matrix(NA,nlevels(b$x2),nlevels(b$x2))
colnames(P)=rownames(P)=LETTERSord
plot(1:nlevels(b$x2),1:nlevels(b$x2),col="white",xlab="",ylab="",axes=F,xlim=c(0,10.5),
     ylim=c(0,10.5))
ct=c(3,3,2,1,1)
abline(v=.5+c(0,cumsum(ct)),lty=2)
abline(h=.5+c(0,cumsum(ct)),lty=2)
text(1:10,0,LETTERSord)
text(0,1:10,LETTERSord)
for(i in 1:nlevels(b$x2)){
  #levels(b$x2)=LETTERS[1:10]
  b$x2=relevel(b$x2,LETTERSord[i])
  p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
  names(p)=substr(names(p),3,3)
  P[LETTERSord[i],names(p)]=p
  p=P[LETTERSord[i],]
  idx=which(p>.05)
  points(((1:10))[idx],rep(i,length(idx)),pch=1,cex=2)
  idx=which(p>.1)
  points(((1:10))[idx],rep(i,length(idx)),pch=19,cex=2)
}

Here we get the following

It looks like we have our combined categories…

Actually, it is possible to use another strategy. We start from some level, say “A”. Then, we merge it with all non-significantly different levels. If “B” is not one of them, we use it as the new reference. Etc.

for(i in 1:nlevels(b$x2)){
  if(LETTERS[i]%in%levels(b$x2)){
  b$x2=relevel(b$x2,LETTERS[i])
  p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
  names(p)=substr(names(p),3,nchar(p))
  idx=which(p>.05)
  mix=c(LETTERS[i],names(p)[idx])
  b$x2=recode(b$x2, paste("c('",paste(mix,collapse = "','"),"')='",paste(mix,collapse = "+"),"'",sep=""))
}}

The final categories are

table(b$x2)
 
A+I+H B+D+F   C+G     E     J 
   46    82    35    23    14

with the following regression output

summary(lm(y~x1+x2,data=b))
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.86407    0.03950  21.877  < 2e-16 ***
x1           1.99180    0.05323  37.417  < 2e-16 ***
x2B+D+F     -0.21517    0.03699  -5.817 2.44e-08 ***
x2C+G       -0.50545    0.04528 -11.164  < 2e-16 ***
x2E         -0.83617    0.05128 -16.305  < 2e-16 ***
x2J         -0.68398    0.06131 -11.156  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
 
Residual standard error: 0.2008 on 194 degrees of freedom
Multiple R-squared:  0.8975,	Adjusted R-squared:  0.8948 
F-statistic: 339.6 on 5 and 194 DF,  p-value: < 2.2e-16
AIC(lm(y~x1+x2,data=b))
[1] -66.76939
BIC(lm(y~x1+x2,data=b))
[1] -43.68117

This is consistent with the groups we got before. But actually, if we change the order, we can get different combinations. For instance, if we go from "J" to "A", instead of from "A" to "J", we obtain

for(i in nlevels(b$x2):1){
  #levels(b$x2)=LETTERS[1:10]
  if(LETTERS[i]%in%levels(b$x2)){
  b$x2=relevel(b$x2,LETTERS[i])
  p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
  names(p)=substr(names(p),3,nchar(p))
  idx=which(p>.05)
  mix=c(LETTERS[i],names(p)[idx])
  b$x2=recode(b$x2, paste("c('",paste(mix,collapse = "','"),"')='",paste(mix,collapse = "+"),"'",sep=""))
}}
table(b$x2)
 
          E         G+C I+A+B+D+F+H           J 
         23          35         128          14

with different information criteria here

AIC(lm(y~x1+x2,data=b))
[1] -36.61665
BIC(lm(y~x1+x2,data=b))
[1] -16.82675

I guess it would be necessary to run through the levels in a random order. Last, but not least, one can use regression trees (even if they are not, per se, in the syllabus of the course). The problem is that there is another explanatory variable that might interfere. So I would suggest (1) to fit a linear model y_i=\beta_0+\beta_1x_{1,i}+u_i and compute the residuals \widehat{u}_i, then (2) to run a regression tree to explain \widehat{u}_i with the categorical variable x_2 (I explained how trees are built when the explanatory variable is categorical in a previous post)

library(rpart)
library(rpart.plot)
b$e=residuals(lm(y~x1,data=b))
arbre=rpart(e~x2,data=b)
prp(arbre,type=2,extra=1)

Observe that the leaves contain the same groups as the ones we got before.

arbre
n= 200 
 
node), split, n, deviance, yval
      * denotes terminal node
 
1) root 200 22.563500  7.771561e-18  
  2) x2=G,C,J,E 72  4.441495 -3.232525e-01  
    4) x2=J,E 37  1.553520 -4.578492e-01 *
    5) x2=G,C 35  1.509068 -1.809646e-01 *
  3) x2=I,A,H,F,B,D 128  6.366628  1.818295e-01  
    6) x2=F,B,D 82  2.983381  1.048246e-01 *
    7) x2=I,A,H 46  2.030229  3.190993e-01 *

I guess that it should be possible to put all that in an R function, to suggest combinations of levels that might improve the regression.
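
Here is a rough sketch of what such a function could look like (only a starting point, reusing the merging strategy of the loops above; it assumes the y~x1+x2 structure of the toy example, and takes the p-value threshold and the ordering of the levels as tuning parameters)

library(car)
merge_levels = function(data, alpha = .05, order = levels(data$x2)){
  for(lev in order){
    if(lev %in% levels(data$x2)){
      data$x2 = relevel(data$x2, lev)
      p = summary(lm(y~x1+x2, data = data))$coefficients[-(1:2), 4]
      names(p) = gsub("^x2", "", names(p))
      mix = c(lev, names(p)[p > alpha])
      if(length(mix) > 1){
        data$x2 = recode(data$x2, paste0("c('", paste(mix, collapse = "','"), "')='", paste(mix, collapse = "+"), "'"))
      }
    }
  }
  data$x2
}
# table(merge_levels(b))   # to be run on the original (unmerged) version of b

Randomizing the order argument, and keeping the merge with the lowest AIC or BIC, would be a natural next step.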

Regression on a categorical variable, and ANOVA

This morning, in the STT5100 course, we discussed regression on a categorical variable. In particular, we started by looking at what the regression without an intercept would give, and how to interpret it. We used the dataset of students' weights and heights, together with the gender variable.

Davis=read.table(
  "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]
Davis=data.frame(Y=Davis$weight * 2.204622,
                 X1=Davis$sex)

We wanted to estimate the model y_i =\beta_F\boldsymbol{1}_F(x_i)+\beta_M\boldsymbol{1}_M(x_i)+\varepsilon_i. We had seen that we could use the matrix form

 X=cbind(Davis$X1=='F',Davis$X1=='M') 
 Y=Davis$Y

since the matrix \mathbf{X}^T\mathbf{X} is invertible (once the intercept has been removed)

 solve(t(X)%*%X)
            [,1]       [,2]
[1,] 0.008928571 0.00000000
[2,] 0.000000000 0.01136364

and the least squares estimator is therefore (as usual) \widehat{\mathbf{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}

 solve(t(X)%*%X) %*% (t(X)%*%Y)
         [,1]
[1,] 125.4272
[2,] 167.3258

which indeed corresponds to the R output,

 reg=lm(Y~0+X1,data=Davis)
 summary(reg)
 
Coefficients:
    Estimate Std. Error t value Pr(>|t|)    
X1F  125.427      1.960   64.00   <2e-16 ***
X1M  167.326      2.211   75.68   <2e-16 ***

Let us now consider the two subpopulations, with the weights of the women and the weights of the men

x=Y[X[,1]==1]
y=Y[X[,2]==1]
nx=length(x)
ny=length(y)

We had seen in class that the \widehat{\mathbf{\beta}}'s have a very simple interpretation, since \widehat{{\beta}}_M = \frac{1}{n_M}\sum_{i:x_i=M} y_i, in other words \widehat{{\beta}}_M is the average weight of the men. And indeed

 mean(y)
[1] 167.3258

Which is, in the end, very natural, or intuitive.

We can now wonder about the standard error of \widehat{{\beta}}_M. Intuitively, we would expect the variance of the estimator of a mean, that is, here

 sqrt(var(y)/ny)
[1] 2.794391
 sqrt(1/(ny-1)*sum( (y-mean(y))^2 )/ny)
[1] 2.794391

since, as a reminder, \text{Var}[\overline{y}]=\frac{\text{Var}(y)}{n}. As we saw in the multiple regression model, the variance of the estimator of \mathbf{\beta} is proportional to \sigma^2, the overall variance of the residuals (this is the homoscedasticity assumption: both groups are supposed to have the same variance). So let us compute the natural estimator of \sigma^2

 s2=1/(nx+ny-2)*(sum( (x-mean(x))^2 )+sum( (y-mean(y))^2))
 sqrt(s2/ny)
[1] 2.210863

and indeed, we recover the value given in the regression table

 sqrt(s2/nx)
[1] 1.959721

(and the same holds for the other coefficient).

We then looked at the regression as it is classically run in R: we keep the intercept, and drop one of the indicator variables (which then becomes the "reference category").

 X=cbind(1,Davis$X1=='M')

Here again, the model becomes identifiable, and we obtain

 solve(t(X)%*%X) %*% (t(X)%*%Y)
          [,1]
[1,] 125.42724
[2,]  41.89855

We had noted that this second value has an interpretation, as a differential with respect to the reference category

mean(y)-mean(x)
[1] 41.89855

The regression output is now

 reg2=lm(Y~X1,data=Davis)
 summary(reg2)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  125.427      1.960   64.00   <2e-16 ***
X1M           41.899      2.954   14.18   <2e-16 ***

And as I said, the Student t-test here corresponds to a test of equality between the average weight of the men and that of the women. And indeed, when we run the test, the difference is significant, as expected (for the same reason as above, we assume the same variance in the two groups)

 t.test(Y[X[,1]==1],Y[X[,2]==1],var.equal=TRUE)
 
	Two Sample t-test
 
data:  Y[X[, 1] == 1] and Y[X[, 2] == 1]
t = -6.4475, df = 286, p-value = 4.826e-10
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -30.62603 -16.30035
sample estimates:
mean of x mean of y 
 143.8626  167.3258

I am, however, a bit surprised that the p-values differ. My interpretation is that the p-values are (in any case) extremely small, so it matters little. Note also that X was redefined above as cbind(1,Davis$X1=='M'), so its first column is constant and Y[X[,1]==1] is the whole sample rather than just the women, which probably explains the discrepancy (the t-test above has 286 degrees of freedom). In fact, if we make the two variables independent (for instance by shuffling the variable \mathbf{y}), it works! Let us set

 Davis$Y=sample(Davis$Y)

which amounts to permuting all the observations of the dependent variable (but not the other variables!). The regression now gives

 reg2=lm(Y~X1,data=Davis)
 summary(reg2)
 
Call:
lm(formula = Y ~ X1, data = Davis)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-57.458 -22.184  -5.512  17.809 118.912 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 143.4382     2.7820   51.56   <2e-16 ***
X1M           0.9645     4.1940    0.23    0.818    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
 
Residual standard error: 29.44 on 198 degrees of freedom
Multiple R-squared:  0.000267,	Adjusted R-squared:  -0.004782 
F-statistic: 0.05289 on 1 and 198 DF,  p-value: 0.8183

in other words, gender is no longer significant, with a p-value of 81.8%, which is well above 5%. If we now run the test comparing the means of the two subgroups, we get

 Y=Davis$Y
 t.test(Y[X[,1]==1],Y[X[,2]==1],var.equal=TRUE)
 
	Two Sample t-test
 
data:  Y[X[, 1] == 1] and Y[X[, 2] == 1]
t = -0.22998, df = 198, p-value = 0.8183
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -9.235209  7.306165
sample estimates:
mean of x mean of y 
 143.4382  144.4027

and the test also has a p-value of 81.8%. The two tests are therefore strictly equivalent.
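
To see this equivalence directly, we can extract and compare the two p-values (a small check; the groups are built from Davis$X1 itself, which avoids the indicator-matrix subtlety mentioned above)

p_reg  = summary(lm(Y~X1, data=Davis))$coefficients["X1M", 4]
p_test = t.test(Davis$Y[Davis$X1=="F"], Davis$Y[Davis$X1=="M"], var.equal=TRUE)$p.value
c(p_reg, p_test)   # the two p-values coincide (0.8183 here)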

Public transportation in Paris

To continue the series of posts on visualization and on the manipulation of open data, I will reuse some of the code written by Tony, from the Data Science for Actuaries program, to visualize transportation in Paris (and the Paris region). If I find the time in the coming days, I will do an analysis of the subway network, compared with other large European cities. To begin with, let us get the data, provided by the open data portal of STIF, the Île-de-France transport authority (https://opendata.stif.info). The data are split by semester, which makes the code a bit heavy… but it does not make it any more complicated.

library(dplyr)
library(stringr)
library(ggplot2)
library(xlsx)
library(ggmap)

We start by reading all the files online

nbvalid = list()
download.file("https://opendata.stif.info/explore/dataset/emplacement-des-gares-idf-data-generalisee/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","Gares.csv")
gares = read.csv("Gares.csv", sep=";", header = TRUE)
distr_pers = list()
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-profils-horaires-par-jour-type-1er-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","horaires1.csv")
distr_pers$S1 = read.csv("horaires1.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-profils-horaires-par-jour-type-2e-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","horaires2.csv")
distr_pers$S2 = read.csv("horaires2.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-nombre-de-validations-par-jour-1er-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","validations1.csv")
nbvalid$S1 = read.csv("validations1.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-nombre-de-validations-par-jour-2e-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","validations2.csv")
nbvalid$S2 = read.csv("validations2.csv", sep=";", header = TRUE)
download.file("https://freakonometrics.free.fr/Correspondance_NOM.csv","Correspondance_NOM.csv")
Cooresp = read.csv("Correspondance_NOM.csv", sep=";", header = TRUE)

We then need to define the 2017 school holiday dates

Vacances = list()
Vacances$Noel = append(seq(from = as.Date("01/01/2017", format="%d/%m/%Y"), to=as.Date("02/01/2017", format="%d/%m/%Y"), by=1),seq(from = as.Date("24/12/2017", format="%d/%m/%Y"), to=as.Date("31/12/2017", format="%d/%m/%Y"), by=1))
Vacances$Ski = seq(from = as.Date("04/02/2017", format="%d/%m/%Y"), to=as.Date("19/02/2017", format="%d/%m/%Y"), by=1)
Vacances$Printemps = seq(from = as.Date("02/04/2017", format="%d/%m/%Y"), to=as.Date("17/04/2017", format="%d/%m/%Y"), by=1)
Vacances$Ete = seq(from = as.Date("08/07/2017", format="%d/%m/%Y"), to=as.Date("03/09/2017", format="%d/%m/%Y"), by=1)
Vacances$Toussaint = seq(from = as.Date("21/10/2017", format="%d/%m/%Y"), to=as.Date("05/11/2017", format="%d/%m/%Y"), by=1)
Vacances$All=Reduce(append,Vacances)

Then, a bit of cleaning is needed, because some stations are duplicated (for instance when both the RER and the metro stop there), and to recover their spatial location (latitude and longitude)

gares$NOM_LONG = as.character(gares$NOM_LONG)
DD = (gares$NOM_LONG[duplicated(gares$NOM_LONG)])
i = (gares$NOM_LONG %in% DD) & gares$MODE_=="Metro"
gares$NOM_LONG[i] = paste(gares$NOM_LONG[i],"M", sep="-")
i = (gares$NOM_LONG %in% DD) & gares$MODE_=="RER"
gares$NOM_LONG[i] = paste(gares$NOM_LONG[i],"R", sep="-")
gares$NOM_LONG=factor(gares$NOM_LONG)
 
a=as.character(gares$Geo.Point)
gares$Y=as.numeric(str_extract_all(a,"^[0-9]+.[0-9]+"))
gares$X=as.numeric(str_extract_all(a,"[0-9]+.[0-9]+$"))

We then count the number of ticket validations, per station

Manip_nbvalid = function(Data,DD,gares) {
  i=grep("^[a-zA-Z]+",as.character(Data$NB_VALD))
  Data$NB_VALD[i]=as.integer(5)
  i=is.na(Data$NB_VALD)
  Data$NB_VALD[i]=as.integer(5)
  Data$LIBELLE_ARRET=as.character(Data$LIBELLE_ARRET)
  i=(Data$LIBELLE_ARRET %in% DD) & Data$CODE_STIF_TRNS=="100"
  Data$LIBELLE_ARRET[i]=paste(Data$LIBELLE_ARRET[i],"M", sep="-")
  i=(Data$LIBELLE_ARRET %in% DD) & Data$CODE_STIF_TRNS=="800"
  Data$LIBELLE_ARRET[i]=paste(Data$LIBELLE_ARRET[i],"R", sep="-")
 
  for (i in seq(1,nrow(Cooresp))) { Data$LIBELLE_ARRET=gsub(as.character(Cooresp$nbval[i]),as.character(Cooresp$gares[i]),Data$LIBELLE_ARRET)
  }
gares$NOM_LONG=as.character(gares$NOM_LONG)
Data=dplyr::left_join(Data,gares[,c("NOM_LONG","X","Y")],by=c("LIBELLE_ARRET"="NOM_LONG"))
  Data=Data[is.na(Data$CODE_STIF_ARRET)==FALSE,]
  Data=Data[Data$CODE_STIF_ARRET!="ND",]
  Data$NB_VALD=as.integer(as.character(Data$NB_VALD))
  Data$JOUR=as.Date(Data$JOUR)
  Data$CODE_STIF_TRNS=factor(Data$CODE_STIF_TRNS)
  Data$CODE_STIF_RES=factor(Data$CODE_STIF_RES)
  Data$CODE_STIF_ARRET=factor(Data$CODE_STIF_ARRET)
  Data$LIBELLE_ARRET=factor(Data$LIBELLE_ARRET)
  Data$ID_REFA_LDA=factor(Data$ID_REFA_LDA)
  Data$CATEGORIE_TITRE=factor(Data$CATEGORIE_TITRE)
  Data$JOURSEM=weekdays(Data$JOUR)  
  return(Data)
}
nbvalid=lapply(nbvalid, Manip_nbvalid,DD=DD,gares=gares)

We now have all the counts, for all the stations. We then split them by one-hour time slot

Manip_dist_pers = function(DataFrame) {
  DataFrame=DataFrame[(DataFrame$TRNC_HORR_60)!="ND",]
  DataFrame$TRNC_HORR_60=factor(DataFrame$TRNC_HORR_60, levels = c("0H-1H", "1H-2H", "2H-3H", "3H-4H", "4H-5H", "5H-6H", "6H-7H", "7H-8H", "8H-9H", "9H-10H", "10H-11H", "11H-12H", "12H-13H", "13H-14H", "14H-15H", "15H-16H", "16H-17H", "17H-18H", "18H-19H", "19H-20H", "20H-21H", "21H-22H", "22H-23H", "23H-0H")) 
  DataFrame=DataFrame[(DataFrame$CODE_STIF_ARRET)!="ND",]
  DataFrame$CODE_STIF_ARRET=factor(DataFrame$CODE_STIF_ARRET)
DataFrame$TRANCHE=str_extract(as.character(DataFrame$TRNC_HORR_60),"^[0-9]{1,2}")
  return(DataFrame)
}
distr_pers=lapply(distr_pers, Manip_dist_pers)

We can then recover the distribution of validations, per day

distr_JOURV=list()
distr_JOURV$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM,CATEGORIE_TITRE) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOURV$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM,CATEGORIE_TITRE) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOURV$Y=rbind(distr_JOURV$S1,distr_JOURV$S2)
distr_JOUR=list()
distr_JOUR$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOUR$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOUR$Y=rbind(distr_JOUR$S1,distr_JOUR$S2)
distr_JOUR_Station=list()
distr_JOUR_Station$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM,CODE_STIF_ARRET,LIBELLE_ARRET) %>% summarise(NB_VALD=sum(NB_VALD), X=max(X), Y=max(Y))
distr_JOUR_Station$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM,CODE_STIF_ARRET,LIBELLE_ARRET) %>% summarise(NB_VALD=sum(NB_VALD), X=max(X), Y=max(Y))
Manip_dist_Jour = function(DataFrame) {
  DataFrame$JOURSEM=factor(DataFrame$JOURSEM,levels = c("lundi","mardi","mercredi","jeudi","vendredi","samedi","dimanche"))
  DataFrame$TypeJ=character(nrow(DataFrame))
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Ete]="Ete"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Noel]="Noel"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Ski]="Ski"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Printemps]="Printemps"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Toussaint]="Toussaint"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$All == FALSE]="HorsVacances"
  DataFrame$CAT_JOUR=character(nrow(DataFrame))
  DFr=list()
  ii=(DataFrame$JOURSEM!="samedi" & DataFrame$JOURSEM!="dimanche") & DataFrame$TypeJ!="HorsVacances"
  DataFrame$CAT_JOUR[ii]="JOVS"
  DFr$JOVS$Data = DataFrame[ii,]
  DFr$JOVS$Nom="Jours ouvrés Vacances Scolaires"
  ii=(DataFrame$JOURSEM!="samedi" & DataFrame$JOURSEM!="dimanche") & DataFrame$TypeJ=="HorsVacances"
  DataFrame$CAT_JOUR[ii]="JOHV"
  DFr$JOHV$Data = DataFrame[ii,]
  DFr$JOHV$Nom="Jours ouvés Hors Vacances Scolaires"
  ii=DataFrame$JOURSEM=="samedi" & DataFrame$TypeJ!="HorsVacances"
  DataFrame$CAT_JOUR[ii]="SAVS"
  DFr$SAVS$Data = DataFrame[ii,]
  DFr$SAVS$Nom="Samedi VS"
  ii=DataFrame$JOURSEM=="samedi" & DataFrame$TypeJ=="HorsVacances"
  DataFrame$CAT_JOUR[ii]="SAHV"
  DFr$SAHV$Data = DataFrame[ii,]
  DFr$SAHV$Nom="Samedi HV"
  ii=DataFrame$JOURSEM=="dimanche"
  DataFrame$CAT_JOUR[ii]="DIJFP"
  DFr$DIJFP$Data = DataFrame[ii,]
  DFr$DIJFP$Nom="Dimanche"
  return(list("TypeJ"=DFr,"Distr"=DataFrame))
}
res=Manip_dist_Jour(distr_JOUR$Y)
distr_TypeJ=res$TypeJ
distr_JOUR$Y=res$Distr
res=Manip_dist_Jour(distr_JOURV$Y)
distr_TypeJV=res$TypeJ
distr_TypeJ_Station=list()
res=Manip_dist_Jour(distr_JOUR_Station$S1)
distr_TypeJ_Station$S1=res$TypeJ
distr_JOUR_Station$S1=res$Distr
res=Manip_dist_Jour(distr_JOUR_Station$S2)
distr_TypeJ_Station$S2=res$TypeJ
distr_JOUR_Station$S2=res$Distr
rm(res)

We can then draw all sorts of graphs, for instance the number of validations per day, between January 1 and December 31, as a function of the day of the week.

g0 = ggplot(distr_JOUR$Y, aes(x=JOUR, y=NB_VALD, color = JOURSEM)) + geom_point()
g0 = g0 + labs(title="Nombres de validations chaque jours de 2017", x="Date", y="Nombre de validations")
g0

We can see the very sharp drop on weekdays during the summer holidays. Instead of looking at the whole year, we can look at what happens within a day

Fct_FqH = function(DataFrame,distr_pers) {
DataFrame=dplyr::full_join(DataFrame,distr_pers[,c("CAT_JOUR","CODE_STIF_ARRET","pourc_validations","TRANCHE","TRNC_HORR_60")],by=c("CODE_STIF_ARRET"="CODE_STIF_ARRET","CAT_JOUR"="CAT_JOUR"))
  DataFrame$NB_VALD=DataFrame$NB_VALD*DataFrame$pourc_validations
  return(DataFrame)
}
distr_JOUR_Station$S1=Fct_FqH(distr_JOUR_Station$S1, distr_pers$S1)
distr_JOUR_Station$S2=Fct_FqH(distr_JOUR_Station$S2, distr_pers$S2)
distr_JOUR_Station$Y=rbind(distr_JOUR_Station$S1,distr_JOUR_Station$S2)
distr_JOUR_Station$Y=distr_JOUR_Station$Y[is.na(distr_JOUR_Station$Y$NB_VALD)==FALSE,]

We can then draw a graph as a function of the time slot, for given periods, for instance weekdays outside school holidays (per hour, we get a boxplot here)

Graphique_HOR = function(DataFrame,TypeJ,NomJ) {
  # distribution of ridership, by time slot and type of day
  g1 = ggplot(DataFrame[DataFrame$CAT_JOUR==TypeJ,], aes(x=TRNC_HORR_60, y=pourc_validations, color = TRNC_HORR_60,las=2)) + geom_boxplot() + ylim(c(0,100))
  g1 = g1 + labs(title=paste(c("Distribution des validations par tranche horaire ",NomJ), sep="", collapse = ""), x="Jours", y="Nombre de validations") +
  theme(axis.text.x= element_text(size = 8, angle = 45))
  g1
}
Graphique_HOR(distr_JOUR_Station$Y,"JOHV","Jours ouvrés Hors Vacances Scolaires")

or on Saturdays

Graphique_HOR(distr_JOUR_Station$Y,"SAHV","Samedi Hors Vacances Scolaires")

We can try a bit of cartography. As with many subway/bus networks around the world, we often only have access to the entry nodes of the network (and not to the exit nodes). But it remains interesting, and very informative

get_Paris1 = get_map(c(2.3448688,48.8613029), zoom = 11)
Paris1 = ggmap(get_Paris1)

Per station and per hour, we can look at the number of ticket validations

Median_Valid = distr_JOUR_Station$Y %>% group_by(CAT_JOUR, LIBELLE_ARRET, X, Y) %>% summarise(NB_VALD=median(NB_VALD))
Median_Valid_Station = distr_JOUR_Station$Y %>% group_by(CAT_JOUR, TRNC_HORR_60,LIBELLE_ARRET, X, Y) %>% summarise(NB_VALD=median(NB_VALD))
 
Carte_Densite = function(Nom,Carte,TypeJ,HOR,DataFrame) {
if (HOR=="") {
    ii=DataFrame$CAT_JOUR==TypeJ
    NomSave=paste("Densité des validations",Nom,TypeJ)
  }
  else {
    ii=DataFrame$CAT_JOUR==TypeJ & DataFrame$TRNC_HORR_60==HOR
    NomSave=paste("Densité des validations",Nom,TypeJ,HOR)
  }
  U=DataFrame[ii,]
  n=round(log10(median(U$NB_VALD)))-1
  n=max(1,10^n)
  Nb_Repete_Stations=ceiling(U$NB_VALD/n)
  U$Size_Stations=U$NB_VALD/max(U$NB_VALD)
  Z=U[rep(1:nrow(U),Nb_Repete_Stations),]
  Carte_A= Carte + geom_point(aes(x=X,y=Y),data=Z,col="coral", size=10*Z$Size_Stations) +
    geom_density2d(data = Z, aes(x=X,y=Y), size = 0.5) + 
    stat_density2d(data = Z, aes(x=X,y=Y,fill = ..level.., alpha = ..level..),size = 0.01, bins = 16, geom = "polygon") +
    scale_fill_gradient(low = "chartreuse", high = "red",guide = FALSE) + 
    scale_alpha(range = c(0, 0.3), guide = FALSE) + ggtitle(NomSave) +
    theme(axis.title.x = element_blank(), axis.title.y = element_blank(), axis.text.x= element_blank(), axis.text.y = element_blank())
 
  suppressWarnings(print(Carte_A))
}

For instance, if we look at ticket validations between 5 and 6 am, we get

L=levels(Median_Valid_Station$TRNC_HORR_60)
Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[6],Median_Valid_Station)

with many towns in the near suburbs. Later in the day, between 11 am and noon, validations are located more in the heart of Paris, with La Défense on the left and Saint-Denis to the north

Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[12],Median_Valid_Station)

At the end of the day, it is Paris, and especially La Défense, that stand out

Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[19],Median_Valid_Station)

Fun, isn't it?

Making your own weather maps

Come on, a bit of scraping today (I promise, there will be other posts on the topic before the end of the week). I will draw on the code written by Romain, from the Data Science for Actuaries program. The goal is to retrieve daily data on temperatures, precipitation and wind speed, in France. We start by loading a few libraries,

library(plyr)
library(stringr)
library(OpenStreetMap)
library(leaflet)
library(shiny)
library(rsconnect)
library(mapview)
library(png)
library(magick)
library(yaml)
library(ggmap)   # used below for get_map() and ggmap()

We will retrieve the last two years but, as always, the code can easily be adapted.

annee = c(2016:2017)
mois = c('01','02','03','04','05','06','07','08','09','10','11','12')
aaaamm = sort(sub(pattern=' ',replacement = '', x=outer(annee,mois,paste)))
myCols = c(NA,NA,"NULL","NULL","NULL",NA,NA,"factor","NULL","NULL","NULL",NA,"NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL",NA,NA,NA,NA,NA,"NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL","NULL")
for (mois in aaaamm) {
  fichier = paste0("synop_",mois,".csv.gz")
  if (!file.exists(fichier)) {
    page = paste0('https://donneespubliques.meteofrance.fr/donnees_libres/Txt/Synop/Archive/synop.',mois,'.csv.gz')
    download.file(page, fichier, quiet = TRUE, cacheOK = FALSE)
  }
}  
for (mois in aaaamm) {
  fichier = paste0("synop_",mois,".csv.gz")
  if (mois=='201601') {
    data = read.table(gzfile(fichier),header=TRUE, sep=";",quote='"',fileEncoding="UTF-8",colClasses = myCols)
  } else {
    data = rbind(data,read.table(gzfile(fichier),header=TRUE, sep=";",quote='"',fileEncoding="UTF-8",colClasses = myCols))
  }
}

We drop one column that is of no use

data[,"X"] = NULL

and we tinker a little (otherwise we run into trouble with midnight, which is shifted by 24 hours)

data$temp = as.numeric(substr(data$date,1,14))
decalage_minuit = function(dateheure){
  r = dateheure
  if (dateheure-trunc(dateheure/1000000)*1000000==0){
    r = dateheure + 10000
  }
  return(r)
}
data$temp = sapply(data$temp,decalage_minuit)
data$jourheure_obs = strptime(data$temp, format = '%Y%m%d%H%M%S', 'UTC')
data$date = as.POSIXct((strptime(data$jourheure_obs,format = "%Y-%m-%d")))
data$heure = as.numeric(substr(data$jourheure_obs, start=12, stop=13))
data[,"temp"] = NULL
data$t = as.numeric(as.character(data$t))-273.15

yes, we have to convert to degrees Celsius. We can also work on the wind speeds

data$dd = as.numeric(as.character(data$dd))
data$ff = as.numeric(as.character(data$ff))/1000*3600
data$ff = round(data$ff/5)*5

which we convert to km/h, and finally the precipitation

data$rr1 = pmax(0,as.numeric(as.character(data$rr1)))
data$rr3 = pmax(0,as.numeric(as.character(data$rr3)))
data$rr6 = pmax(0,as.numeric(as.character(data$rr6)))
data$rr12 = pmax(0,as.numeric(as.character(data$rr12)))
data$rr24 = pmax(0,as.numeric(as.character(data$rr24)))

We can actually start with a simple visualization of the temperature series

data_temperature = subset(data, !is.na(data$t))
temp_min = aggregate(x=data_temperature$t,by=list(date=data_temperature$date),FUN=min)
names(temp_min)[2] = "temp_min"
temp_max = aggregate(x=data_temperature$t,by=list(date=data_temperature$date),FUN=max)
names(temp_max)[2] = "temp_max"
temp_moy = aggregate(x=data_temperature$t,by=list(date=data_temperature$date),FUN=mean)
names(temp_moy)[2] = "temp_moy"
temp_stat = cbind(temp_min,temp_max[],temp_moy)
rm(temp_min,temp_max,temp_moy)
temp_stat[,3] = NULL
temp_stat[,4] = NULL

with the maximum observed each day over all stations, the minimum, and the average across stations

plot(x=temp_stat$date,y=temp_stat$temp_moy,'l',ylim=c(-15,40),xlab='Date',ylab='Température (°C)')
title('Températures françaises métropolitaines de 01/2016 à 12/2017')
points(x=temp_stat$date,y=temp_stat$temp_min,col="blue",'l')
points(x=temp_stat$date,y=temp_stat$temp_max,col="red",'l')

To draw nice maps, we need some information about the weather stations.

stations = read.table('https://donneespubliques.meteofrance.fr/donnees_libres/Txt/Synop/postesSynop.csv',header=TRUE, sep=";",quote='"',fileEncoding="UTF-8")
stations = rename(stations, replace=c("ID"="numer_sta"))
stations = rename(stations, replace=c("Nom"="nom"))
stations = rename(stations, replace=c("Latitude"="latitude"))
stations = rename(stations, replace=c("Longitude"="longitude"))
stations = rename(stations, replace=c("Altitude"="altitude"))
data = merge(x=data,y=stations)

For the map, we restrict ourselves to data from metropolitan France

data = subset(data, data$latitude >= 39)
data = subset(data, data$latitude <= 53)
data = subset(data, data$longitude >= -5.3)
data = subset(data, data$longitude <= 9.8)
coordonnees_stations = unique(data[, c("nom","latitude","longitude")])

Let us start by visualizing the maximum observed on July 19, 2016

date_sel = "2016-07-19"
date_sel_OK = paste0(substr(date_sel,9,10),"/",substr(date_sel,6,7),"/",substr(date_sel,1,4))
donnees_carte_0 = subset(data, data$date==as.POSIXct(date_sel))
temp = aggregate(x=donnees_carte_0$t,by=list(numer_sta=donnees_carte_0$numer_sta),FUN="max")
donnees_carte_0 = merge(x=donnees_carte_0,y=temp)
donnees_carte_0 = rename(donnees_carte_0, replace=c("x"="TEMP"))
donnees_carte_0$TEMP = round(donnees_carte_0$TEMP)
donnees_carte = unique(donnees_carte_0[,c('numer_sta','nom','longitude','latitude','TEMP')])
get_france = get_map(c(lon=2.25,lat=46), zoom=5, col='bw')
colfunc = colorRampPalette(c("darkblue","royalblue","cyan","lightblue","orange","red","darkred"))
liste_couleurs = data.frame(TEMP=as.numeric(seq(-49,50,1)),couleur=as.character(colfunc(100)))
donnees_carte = merge(x=donnees_carte,y=liste_couleurs)
france_tmax = ggmap(get_france) + 
  ggtitle(paste0('Températures maximales du ', date_sel_OK)) +
  scale_x_continuous(limits = c(-5, 10), expand = c(0, 0)) +
  scale_y_continuous(limits = c(41.3, 51.1), expand = c(0, 0))
france_tmax = france_tmax + geom_label(data=donnees_carte,aes(x=longitude,y=latitude,label=TEMP),col=as.character(donnees_carte$couleur))
france_tmax

We can also try to visualize precipitation. We pick October 13, 2016 (yes, there were floods in the Montpellier area that day)

date_sel = "2016-10-13"
colfunc = colorRampPalette(c('white','darkblue'))
data$precip_en_mm = data$rr3
data_precipitation = subset(data, !is.na(data$precip_en_mm))
precipitations = aggregate(x=data_precipitation$precip_en_mm,by=list(date=data_precipitation$date,station=data_precipitation$nom),FUN=sum)
names(precipitations)[3] = "pp"
precipitations$pp = pmax(precipitations$pp,0)
donnees_carte = subset(precipitations, precipitations$date==date_sel)
qt_prec = quantile(round(donnees_carte$pp),seq(0, 1, 0.05))
donnees_carte$Couleur_prec = colfunc(21)[(findInterval(round(donnees_carte$pp), qt_prec, all.inside=TRUE))]
donnees_carte = rename(donnees_carte, replace=c("station"="nom"))
donnees_carte = merge(x=donnees_carte,y=coordonnees_stations)

We can try a leaflet map here (it will look nicer)

france = leaflet() %>% addTiles() %>% fitBounds(lng1=-4.412, lat1=41.92, lng2=9.485, lat2=50.57)
france = france %&gt;% addCircles(lng=donnees_carte$longitude, lat=donnees_carte$latitude, color=donnees_carte$Couleur_prec, opacity = 1, fillColor=donnees_carte$Couleur_prec, fillOpacity = 1, radius=donnees_carte$pp*500)
france

As often, I have trouble embedding leaflet output in wordpress, so in this post we will settle for a screenshot (but copying and pasting the code is enough to reproduce it).
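
One possible workaround (a sketch: it relies on mapview, loaded above, together with the webshot package and PhantomJS) is to export the leaflet map as a standalone html file plus a png screenshot

# webshot::install_phantomjs() installs the headless browser, if needed
mapshot(france, url = "carte_precipitations.html", file = "carte_precipitations.png")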

And to finish, the same thing with wind speeds. We pick March 6, 2017 (here again, the date is not chosen at random: that was storm Zeus)

date_sel = "2017-03-06"
date_sel_OK = paste0(substr(date_sel,9,10),"/",substr(date_sel,6,7),"/",substr(date_sel,1,4))
heure_sel = 18
donnees_carte_0 = subset(data, data$date==as.POSIXct(date_sel))
donnees_carte_0 = subset(donnees_carte_0, donnees_carte_0$heure==heure_sel)
donnees_carte = donnees_carte_0

For precipitation, circles of varying sizes were enough. Here, we will use arrows

logo = image_trim(image_read("https://image.freepik.com/icones-gratuites/fleche-noire-vers-le-haut_318-30934.jpg"))
if (file.exists('flecheblanche90.png')==FALSE){
  for (angle in seq(10,360, by=10)) {
    temp = image_rotate(logo,angle)
    image_write(temp,paste0('flecheblanche',angle,'.png'))
    temp = image_colorize(image=temp,opacity=40,color='green')
    image_write(temp,paste0('flecheverte',angle,'.png'))
    temp = image_colorize(image=temp,opacity=40,color='yellow')
    image_write(temp,paste0('flechejaune',angle,'.png'))
    temp = image_colorize(image=temp,opacity=40,color='orange')
    image_write(temp,paste0('flecheorange',angle,'.png'))
    temp = image_colorize(image=temp,opacity=40,color='red')
    image_write(temp,paste0('flecherouge',angle,'.png'))
    temp = image_colorize(image=temp,opacity=40,color='black')
    image_write(temp,paste0('flechenoire',angle,'.png'))
  }
}
arrowIcons = icons(
  iconUrl = ifelse(donnees_carte$dd == 0,
            ifelse(donnees_carte$ff<39,
                    'flecheverte360.png',
            ifelse(donnees_carte$ff<79,
                    'flechejaune360.png',
            ifelse(donnees_carte$ff<119,
                    'flecheorange360.png',
            ifelse(donnees_carte$ff<159,
                    'flecherouge360.png',
            ifelse(donnees_carte$ff>159,
                    'flechenoire360.png',
                    'flecheblanche360.png'))))),
            ifelse(donnees_carte$ff<39,
                   paste0('flecheverte',donnees_carte$dd,'.png'),
            ifelse(donnees_carte$ff<79,
                    paste0('flechejaune',donnees_carte$dd,'.png'),
            ifelse(donnees_carte$ff<119,
                    paste0('flecheorange',donnees_carte$dd,'.png'),
            ifelse(donnees_carte$ff<159,
                    paste0('flecherouge',donnees_carte$dd,'.png'),
            ifelse(donnees_carte$ff>159,
                    paste0('flechenoire',donnees_carte$dd,'.png'),
           paste0('flecheblanche',donnees_carte$dd,'.png'))))))),
  iconWidth = 30, iconHeight = 30)

This time, we are all set! We can draw the map

france = leaflet(options = leafletOptions()) %>% addTiles() %>% fitBounds(lng1=-4.412, lat1=41.92, lng2=9.485, lat2=50.57)
france = addMarkers(map = france,lng=donnees_carte$longitude,lat=donnees_carte$latitude,icon=arrowIcons,popup=as.character(donnees_carte$ff))
france

Visualizing the location of dinosaurs, thanks to quaternions…

Here is a post I have been dreaming of writing for years, and I finally found an excuse to do so, thanks to François from the Data Science for Actuaries program (from whom I will borrow the application below). Several years ago, in a post, I discussed the difficulties of working with spatial data, especially near the poles. The pole is a singular point in the classical (latitude, longitude) representation of the sphere. At the time, I had suggested a simplistic hack (pushing the singular points into a corner), but there is a cleaner approach, using Hamilton's quaternions. To introduce them quickly, together with their link with spatial data on the terrestrial globe, I will follow the wikipedia page, which explains the general idea remarkably clearly, and in particular the connection between quaternions and rotations (which are the building blocks of the representation of the sphere that is the globe).

Every rotation in dimension three consists of turning by some angle \alpha around some axis \vec{v}. For a small but nonzero angle (if the angle is zero, we get the identity rotation), the set of possible rotations is a small sphere surrounding the identity rotation, where each point of the sphere represents an axis pointing in a particular direction. Rotations with larger and larger angles move progressively away from the identity. So, in the neighborhood of the identity, the abstract space of rotations looks like ordinary three-dimensional space (which can also be seen as a central point surrounded by spheres of various radii).

We can identify the different directions from the pole (that is, the different meridians) with the different rotation axes, and the different distances from the North pole with the different angles: this gives an analogy for the space of rotations. But the surface of the sphere is two-dimensional, whereas the rotation axes already use three dimensions. The space of rotations is therefore modeled by a 3-dimensional sphere sitting inside a 4-dimensional space (a hypersphere). The ordinary sphere should then be thought of as a section of the hypersphere (just as a circle is a section of a sphere). We can take the section to represent, for instance, only the rotations whose axes lie in the xy plane. And we can now legitimately think of rotations as points on the sphere in dimension 4.

Let us get into the details. The surface of a sphere can be parametrized by two coordinates, such as latitude and longitude. But this causes serious problems at the poles (as I had noted in that old post). The hairy ball theorem actually shows that no two-parameter coordinate system can avoid this degeneracy. So we embed the sphere in three-dimensional space, parametrizing it with three cartesian coordinates (here w, x and y). By convention, the North pole is at (w,x,y)=(1,0,0), the South pole at (w,x,y)=(-1,0,0), and the equator is the circle of equations w=0 and x^2+y^2=1. A point (w,x,y) of the sphere represents a rotation of ordinary space around the horizontal axis directed by the vector \vec{v}=(x,y,0)^T, with angle \alpha=2\cos^{-1}(w)=2\sin^{-1}\sqrt{x^2+y^2}.
That is the general idea.

Pour parler un peu des quaternions, pour rappel, un plan en dimension 2 peut être paramétré en utilisant les nombres complexes, en introduisant un symbole abstrait \mathbf {i} qui vérifie la règle \mathbf {i}^2=-1. On peut faire la meme chose en dimension 4, en introduisant des symboles abstraits \mathbf {i}, \mathbf {j} et \mathbf {k}. La partie imaginaire {\displaystyle b\mathbf {i} +c\mathbf {j} +d\mathbf {k} } d’un quaternion se comporte comme un vecteur {\displaystyle {\vec {v}}={\begin{pmatrix}b\\c\\d\end{pmatrix}}} d’un espace vectoriel à trois dimensions.

Définissons le quaternion {\displaystyle \mathbf {q}=w+x\mathbf {i} +y\mathbf {j} +z\mathbf {k} =\cos(\alpha /2)+\frac{\vec {v}}{\|v\|}\sin(\alpha /2)} ou bien {\displaystyle \mathbf {q}=w+x\mathbf {i} +y\mathbf {j} +z\mathbf {k} =\cos(\alpha /2)+{\vec {u}}\sin(\alpha /2)} {\displaystyle {\vec {u}}} est un vecteur unitaire. Soit également {\displaystyle {\vec {v}}} un vecteur ordinaire de l’espace en 3 dimensions, considéré comme un quaternion avec une coordonnée réelle nulle. On pourrait que le produit de quaternions{\displaystyle \mathbf{q}{\vec {\nu}}\mathbf{q}^{-1}} renverrait le vecteur {\displaystyle {\vec {\nu}}} tourné d’un angle {\displaystyle \alpha } autour de l’axe dirigé par {\displaystyle {\vec {u}}} . Et c’est effectivement ce qui se passe. Cette opération est connue comme la conjugaison par \mathbf {q}.

Aussi, la multiplication de quaternions correspond à la composition de rotations, car si \mathbf {p} et \mathbf {q} sont des quaternions représentant des rotations, alors la rotation (conjugaison) par \mathbf {pq} est{\displaystyle \mathbf{pq}q{\vec {\nu}}(\mathbf{pq})^{-1}=\mathbf{pq}{\vec {\nu}}\mathbf{q}^{-1}\mathbf{p}^{-1}=\mathbf{p}(\mathbf{q}{\vec {\nu}}\mathbf{q}^{-1})\mathbf{p}^{-1}} ce qui revient à tourner (conjuguer) par \mathbf {q}, puis par \mathbf {p}.

Le quaternion inverse d’une rotation correspond à la rotation inverse, car {\displaystyle q^{-1}(q{\vec {v}}q^{-1})q={\vec {v}}} . Et assez naturellement, le carré d’un quaternion – noté \mathbf {q}^2 – correspond à la rotation de deux fois le même angle autour du même axe. Plus généralement, \mathbf {q}^{ {n}} correspond à une rotation de {n} fois l’angle autour du même axe que \mathbf {q}. Par convention, on peut considérer un réel arbitraire {r}, ce qui permet de calculer des rotations intermédiaires de façon fluide entre des rotations de l’espace.

Un petit exemple. Considérons la rotation f autour de l’axe dirigé par {\displaystyle {\vec {v}}=\mathbf {i} +\mathbf {j} +\mathbf {k} } et d’angle 120°, soit 2\pi/3.

La norme de {\displaystyle {\vec {v}}} est \sqrt{3} , le demi-angle est \pi/3 (ou 60°), le cosinus de ce demi-angle est 1/2, et le sinus est \sqrt{3}/2. Nous devons donc conjuguer avec le quaternion unitaire  {\displaystyle \mathbf{q}=\cos {\frac {\pi }{3}}+\sin {\frac {\pi }{3}}\cdot {\frac {1}{\sqrt {3}}}{\vec {v}}} soit \mathbf{q}={\frac {1}{2}}+{\frac {\sqrt {3}}{2}}\cdot {\frac {1}{\sqrt {3}}}{\vec {v}}={\frac {1}{2}}+{\frac {\sqrt {3}}{2}}\cdot {\frac {\mathbf {i} +\mathbf {j} +\mathbf {k} }{\sqrt {3}}} qui peut finalement s’écrire simplement{\frac {1+\mathbf {i} +\mathbf {j} +\mathbf {k} }{2}}

Si f est la fonction de rotation,{\displaystyle f(a\mathbf {i} +b\mathbf {j} +c\mathbf {k} )=\mathbf{q}(a\mathbf {i} +b\mathbf {j} +c\mathbf {k} )\mathbf{q}^{-1}}

On peut prouver que l’on obtient l’inverse d’un quaternion unitaire simplement en changeant le signe de ses coordonnées imaginaires. Autrement dit{\displaystyle \mathbf{q}^{-1}={\frac {1-\mathbf {i} -\mathbf {j} -\mathbf {k} }{2}}} et donc f(a\mathbf {i} +b\mathbf {j} +c\mathbf {k} ) s’ecrit {\displaystyle {\frac {1+\mathbf {i} +\mathbf {j} +\mathbf {k} }{2}}(a\mathbf {i} +b\mathbf {j} +c\mathbf {k} ){\frac {1-\mathbf {i} -\mathbf {j} -\mathbf {k} }{2}}} En appliquant les règles ordinaires de calcul avec les quaternions, on obtient {\displaystyle f(a\mathbf {i} +b\mathbf {j} +c\mathbf {k} )=c\mathbf {i} +a\mathbf {j} +b\mathbf {k} } Ah oui, et de la même manière qu’on peut associer une matrice 2\times2 à un nombre complexez=a+b\mathbf{i} ~\rightarrow~\begin{pmatrix}a&-b\\b&a\end{pmatrix} on peut associer une matrice 4\times4 à un quaternion\mathbf{q}=a+b\mathbf{i}+c\mathbf{j}+d\mathbf{k} ~\rightarrow~\begin{pmatrix}\quad a&\quad -b&\quad -c&\quad -d\\\quad b&\quad a&\quad -d&\quad c\\\quad c&\quad d&\quad a&\quad -b\\\quad d&\quad -c&\quad b&\quad a\end{pmatrix} (on peut aussi passer par unE matrice 2\times2 à coefficients complexes, mais ça ne servirait qu’à compliquer, ici).

We have seen that a rotation can be associated with a quaternion, and a quaternion with a matrix. If the rotation has axis \vec{OM}, where O is the center of the Earth and M a point on the Earth's surface, described by its latitude and longitude, and with angle \alpha (as described in a post on stackoverflow), we have the following R function

quat = function(lat,long,ang=NA){
  # convert the point (lat,long), in degrees, into a unit vector (x,y,z);
  # return either the pure quaternion (x,y,z,0) if no angle is given, or the
  # rotation quaternion of angle ang (in degrees) around that axis,
  # stored as (x,y,z,w) with the real part last
  n = length(lat)
  lat = lat/180*pi
  long = long/180*pi
  x = cos(lat) * cos(long)
  y = cos(lat) * sin(long)
  z = sin(lat)
  if (is.na(ang)){
    Q = matrix(c(x,y,z,rep(0,n)), ncol = 4)
  } else {
    Q = matrix(c(sin(ang/2*pi/180) * c(x,y,z), cos(ang/2*pi/180)), ncol =4)
  }
  return(Q)
}

together with its reciprocal, going from a quaternion back to a rotation, with the axis described as before (a point on the sphere – on the globe) and an angle (polar coordinates)

polaire = function(Q, digits=2) {
  Q = Q/norme(Q)
  ang = round(acos(Q[4])*2*180/pi,digits)
  n = norme(Q[1:3])
  x = Q[1]/n
  y = Q[2]/n
  z = Q[3]/n
  lat = asin(z) * 180/pi
  if (z**2 == 1){
    long = 0
  } else {
    phi = (x+1i*y)/sqrt(1-z**2)
    long = Im(log(phi)) * 180/pi 
  }
  c(lat,long,ang)
}

provided that we first define the norm of a quaternion

norme = function(Q){
  sqrt(sum(Q**2))
}

We can also define the quaternion product (all the operations are described on the wikipedia page), which will be denoted \otimes in what follows

pdt_quat = function(Q1,Q2){
  Q=rep(0,4)
  Q[1:3]  = Q1[4]*Q2[1:3]+Q2[4]*Q1[1:3]+pdt_vect(Q1[1:3],Q2[1:3])
  Q[4] = Q1[4]*Q2[4] - pdt_scal(Q1[1:3],Q2[1:3])
  return(Q)
}

but also a dot product

pdt_scal = function(M,N){
  return(sum(M*N))
}

a cross product

pdt_vect = function(M,N){
  return(c(M[2]*N[3]-M[3]*N[2],
           M[3]*N[1]-M[1]*N[3],
           M[1]*N[2]-M[2]*N[1]))
}

and finally the inverse of a quaternion, \mathbf{q}^{-1}

inv_quat = function(Q){
  # conjugate of Q, divided by its norm (equal to the inverse for unit quaternions)
  (c(0,0,0,2*Q[4])-Q)/norme(Q)
}

We can also ask for the coordinates of a point M' obtained as the image of a point M under the rotation \mathbf{q}

rotation = function(M, Q){
polaire(pdt_quat(pdt_quat(Q,M),inv_quat(Q)))[1:2]
}
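
As a quick sanity check (added here, not in the original post): rotating the point with latitude 0 and longitude 0 by 90° around the Earth's axis (the axis through the north pole, latitude 90) should leave the latitude at 0 and shift the longitude by 90°,

rotation(quat(0,0), quat(90,0,90))
# should return (lat,long) close to (0, 90), up to numerical noise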

Now we can move on to the serious stuff…

As mentioned in the introduction, quaternions make it possible to get around certain problems, such as handling objects (like the ice cap) located around the pole (which is a singular point in the polar-coordinate representation). Another application, presented by François, is the motion of tectonic plates. In particular, one can use a file of rotations of the tectonic plates between today and some date in the past (this idea can be found in the gplates project, programmed with python libraries). Or, more generally, between two dates t_1 and t_2. The file at our disposal thus contains quaternions \mathbf{q}^{P_0}_{t,P} for a date t and a plate P (defined as a polygon described by a collection of latitudes and longitudes), where the motion of the plate is described relative to a plate P_0. For computational convenience, we will assume that quaternions can be interpolated linearly, \mathbf{q}^{P_0}_{t,P}=(1-\lambda)\mathbf{q}^{P_0}_{t_1,P}+\lambda\mathbf{q}^{P_0}_{t_2,P} with \lambda=\frac{t-t_1}{t_2-t_1}, and that rotations can be composed recursively, in the sense that \mathbf{q}^{P_3}_{t,P_1}=\mathbf{q}^{P_3}_{t,P_2}\otimes\mathbf{q}^{P_2}_{t,P_1}. So, from our rotation file, we can create a function that computes all the rotation quaternions, for the given plates. A priori, a quaternion \mathbf{q}^{P_0}_{t,P} will only be computed once

projecteur = function(t,plates,rot){
  ll = length(plates)
  Q0 = matrix(rep(0,4),ll,4)
  for (i in seq(ll)){ 
    cur_plate = i
    Q = c(0,0,0,1)
    while(cur_plate > 0){
      df = rot[rot$Start <= t & rot$plate == plates[cur_plate],] # "<=" assumed (operator lost in the export)
      if (dim(df)[1] > 0) {
        Q1 = Q0[cur_plate,]
        cur_plate = 0 
        if (norme(Q1)==0){
          Q_St = quat(df$lat_St, df$lon_St, df$ang_St)
          Q_End = quat(df$lat_End, df$lon_End, df$ang_End)
          periode = df$End - df$Start
          pct = 0
          if (periode > 0) pct = min(max(0,(t - df$Start)/periode),1)
          Q1 = Q_St + pct * (Q_End - Q_St)
          cur_plate = which(plates==df$anchor)
          if (length(cur_plate) == 0) cur_plate = 0
        } 
        Q = pdt_quat(Q1,Q)
      } else {cur_plate = 0}
    }    
    Q0[i,] = Q
  }
  return(Q0)
}

We can now apply these tools to data. Here, three databases from the website http://paleobiodb.org will be used, to visualize where the dinosaurs lived:
– a collections database listing the excavation sites (and in particular their geolocation)
– an occurrences database listing the specimens found, per collection
– a specimens database describing the dinosaur specimens.

site="http://paleobiodb.org/data1.2/"
req=".txt?datainfo&amp;rowcount&amp;max_ma=999&amp;min_ma=0"
limit = "" #"&amp;limit=100"
names = c("colls/list", "occs/taxa", "occs/list")
destfile = rep('',length(names))
for (i in 1:length(names)) {
  destfile[i] = paste0("data/",sub("/","_",names[i]),".csv")
  download.file(paste0(site,names[i],req,limit),destfile=destfile[i])
}

For the collections data

library(dplyr) # for select()
collection = read.csv(destfile[1],skip=17,sep = ",", header = TRUE)
coll = select(collection, c(collection_no, lng, lat, max_ma, min_ma))
remove(collection)
names(coll) = c('no', 'lng', 'lat', 'max_ma', 'min_ma')

and for the occurrence data

occurence = read.csv(destfile[3],skip=17, sep = ",", header = TRUE)
occ = select(occurence, c(occurrence_no, collection_no, accepted_no))
remove(occurence)
names(occ) = c('no', 'coll_no', 'taxo_no')

The species we will study here are the dinosaurs of the orders Ornithischia and Saurischia (which include the main classical dinosaur families – as far as I understand)

taxonomie = read.csv(destfile[2],sep = ",", skip=20, header = TRUE)
taxo = select(taxonomie,c(orig_no, accepted_rank, accepted_name, parent_no, container_no))
remove(taxonomie)
names(taxo) = c('N0', 'rang0', 'nom0', 'parent', 'container')
taxo = taxo[!is.na(taxo$N0),]
taxo["N1"] = as.character(taxo$container)
taxo[taxo$container=='',]$N1 = as.character(taxo[taxo$container=='',]$parent)
taxo$container = as.factor(taxo$container)
taxo$parent = NULL
taxo$container = NULL
t0 = taxo
i = 0
while(dim(taxo[!is.na(taxo[paste0("N",i)]),])[1] > 0){
  i = i+1
  colnames(t0) = c(paste0('N',i),paste0('rang',i), paste0('nom',i), paste0('N',i+1))
  taxo = merge(taxo,t0,all.x=TRUE)
  if (i==10) break
}
t1 = select(taxo,N0)
for (niveau in levels(taxo$rang0)){
  if (niveau != ""){
    t1[niveau] = ""
    for (j in seq(0,i)){
      test = which(taxo[paste0("rang",j)]==niveau) 
      t1[test,niveau] = as.character(taxo[test,paste0("nom",j)])
    }  
  }
}
taxo = t1[t1[["unranked clade"]]%in%c("Ornithischia","Saurischia"),]
head(taxo,5)

We can then merge our databases

M1 = merge(occ,taxo, by.x=c('taxo_no'),by.y=c('N0'), all=FALSE)
M2 = merge(M1,coll,by.x=c('coll_no'),by.y=c('no'), all.x=TRUE)
paste(dim(M2)[1], "specimens étudiés")

which gives 9771 specimens

head(M2,5)
coll_no taxo_no     no class           family             genus
1    5195   55999 373398           Nodosauridae      Pawpawsaurus
2   10755   55580 130209       Chaoyangsauridae    Chaoyangsaurus 
3   10760   38561 144305        Dromaeosauridae                  
4   10764   66066 130295        Caudipterygidae       Caudipteryx
5   10764   66068 130294                        Protarchaeopteryx
  infraclass kingdom        order   phylum                   species
1                                 Chordata    Pawpawsaurus campbelli
2                                 Chordata     Chaoyangsaurus youngi
3                    Avetheropoda Chordata                          
4                    Avetheropoda Chordata          Caudipteryx zoui
5                    Avetheropoda Chordata Protarchaeopteryx robusta
  subclass subfamily subgenus suborder subspecies superclass superfamily
1                                                                       
2                                                                       
3                                                                       
4                                                                       
5                                                                       
  superphylum tribe unranked clade      lng      lat max_ma min_ma
1                     Ornithischia -97.3000 32.86667  105.3  99.60
2                     Ornithischia 123.9667 42.93330  150.8 132.90
3                       Saurischia  21.0500 46.11667   70.6  66.00
4                       Saurischia 120.7333 41.80000  130.0 122.46
5                       Saurischia 120.7333 41.80000  130.0 122.46

So much for the dinosaurs. We can now look for information on the tectonic plates,

chemin = "data/Shapefile"
download.file('https://www.earthbyte.org/webdav/ftp/earthbyte/GPlates/SampleData_GPlates2.0/Individual/FeatureCollections/Coastlines.zip', 'data/coastlines.zip')
coast_file = 'Matthews_etal_GPC_2016_Coastlines'
unzip(zipfile='data/coastlines.zip', exdir= chemin, junkpaths = TRUE)
continents = readOGR(dsn=chemin,layer=coast_file,verbose=TRUE)

Matthews et al. (2016) put online a rotation file simulating the drift of the plates

download.file('https://www.earthbyte.org/webdav/ftp/earthbyte/GPlates/SampleData_GPlates2.0/Individual/FeatureCollections/Rotations.zip', 'data/rot.zip')
rot_file = 'Matthews_etal_GPC_2016_410-0Ma_GK07.rot'
unzip(zipfile='data/rot.zip', files = c(paste0('Rotations/',rot_file)), exdir= 'data', junkpaths = TRUE)
rot = paste0("data/",rot_file)

We will fix a few anomalies

x = readLines(rot)
y = gsub( "!101 !", "!", x )
cat(y, file=rot, sep="\n")
remove(x,y)

and we load the data into memory (to build our visualization afterwards)

rot_file2 = read.csv(file=rot,header=FALSE,sep='', comment.char = '!')
ll = dim(rot_file2)[1]
rot_file3 = cbind(rot_file2[1:(ll-1),],rot_file2[2:ll,])
names(rot_file3) = c('plate','Start','lat_St', 'lon_St', 'ang_St', 'anchor','plate2','End','lat_End', 'lon_End', 'ang_End', 'anchor2') 
rot_file = rot_file3[rot_file3$plate==rot_file3$plate2,]
rot_file$plate2 = NULL
rot_file$anchor2 = NULL
PLATES = sort(unique(rot_file$plate))

We will only keep the dinosaurs that can be attached to a tectonic plate

X = M2 %>% select(lng, lat)
Y = SpatialPoints(X,proj4string = continents@proj4string)
plaques = over(Y,continents)$PLATEID1
filtre = which(!is.na(plaques))
X0       = X[filtre,]
NBX = dim(X0)[1]
print(paste0(NBX, " spécimens retenus"))

which still leaves 9617 specimens

PERIOD = M2 %>% select(max_ma, min_ma)
PERIOD  = PERIOD[filtre,]
plaques = plaques[filtre]
plaque_id = rep(0,NBX)
for (j in seq(1,NBX)){
    plaque_id[j]=which(PLATES==plaques[j])
}
dataX = M2[filtre,]
dataX["plaques"] = plaques

We're almost there… we will now go from 250 million years ago back to today, in steps of 10 million years

TMAX = 250
TMIN = 0
PAS = 10

For all these dates, we compute the quaternions

ROT = array(rep(0,4),c(TMAX/PAS,length(PLATES),4))
for (t in seq(1,TMAX/PAS)){
  ROT[t,,] = projecteur(t*PAS, PLATES, rot_file)
}
QM = list()
plaque=list()
xy = list()
ll = length(continents@polygons)
for (i in seq(1,ll)){
  M = continents@polygons[[i]]@Polygons[[1]]@coords
  xy[[i]] = M
  QM[[i]] = quat(M[,2],M[,1])
  plaque[[i]] = which(PLATES==continents$PLATEID1[i])
}
QX = quat(X0[,2],X0[,1])

We then project

projete=list()
Xt = list()
X = X0
cpt = 0
for (TIME in seq(TMAX,TMIN,-PAS)){
  # setTxtProgressBar(pb, -TIME)
  cpt = cpt+1
  projete[[cpt]] = continents
  Xt[[cpt]] = X0
  if (TIME > 0 ) {
    for (i in seq(1,ll)){
      M = xy[[i]]
      for (j in seq(1,dim(M)[1])){
        M[j,] = rev(rotation(QM[[i]][j,],ROT[TIME/PAS,plaque[[i]],]))
      }
      inf_180 = which(M[,1] < -90); inf_180_ = length(inf_180)
      sup_180 = which(M[,1] > 90); sup_180_ = length(sup_180)
      if (inf_180_ > 0 & sup_180_ > 0) {
        if(sup_180_>inf_180_) {
          M[inf_180,] = t(t(M[inf_180,]) + c(360,0))
        } else { M[sup_180,] = t(t(M[sup_180,]) - c(360,0))}
      } 
      projete[[cpt]]@polygons[[i]]@Polygons[[1]]@coords = M
    }
 
    # setTxtProgressBar(pb, -TIME + PAS/2)
    for (j in seq(1,NBX)){
      if (PERIOD$max_ma[j] > TIME & PERIOD$min_ma[j] <= TIME){
        X[j,] = rev(rotation(QX[j,],ROT[TIME/PAS,plaque_id[j],]))
      } else{
        X[j,] = c(NA, NA)
      }
    }
    filtre = which(!is.na(X$lng))
    if (length(filtre) > 0){
      Xt[[cpt]]=SpatialPointsDataFrame(coords = X[filtre,], data = dataX[filtre,])  
    } else {
      Xt[[cpt]]=X
    }
  }
}

We can make a first map, 70 million years before our era

library(leaflet)
t = 1 + (TMAX-70)/PAS
leaflet(options = leafletOptions(minZoom = 1)) %>%  
  addPolygons(data=projete[[t]], weight=2) %>% 
  addMarkers(data = Xt[[t]], 
             popup = ~paste(sep = "<br/>", # "<br/>" assumed for the popup line break (separator lost in the export)
                            paste("espèce :", Xt[[t]]$species),
                            paste("genre :", Xt[[t]]$genus),
                            paste("famille :", Xt[[t]]$family),
                            paste("ordre : ", Xt[[t]][["unranked clade"]]) ),
             icon = dinoIcon)

Here is a screenshot of the leaflet map thus created

(the idea is that we can zoom in, which makes the analysis more interactive)

But we can also go 200 million years before our era

t = 1 + (TMAX-200)/PAS
leaflet(options = leafletOptions(minZoom = 1)) %>%  
  addPolygons(data=projete[[t]], weight=2) %>% 
  addMarkers(data = Xt[[t]], 
             popup = ~paste(sep = "<br/>", # "<br/>" assumed, as above
                            paste("espèce :", Xt[[t]]$species),
                            paste("genre :", Xt[[t]]$genus),
                            paste("famille :", Xt[[t]]$family),
                            paste("ordre : ", Xt[[t]][["unranked clade"]]) ),
             icon = dinoIcon)

Fun, isn't it? In any case, thank you François for this nice application of quaternions! And thank you for suggesting to use something other than red dots on a map!

dinoIcon = makeIcon(iconUrl = "https://www.ludeek.com/wp-content/uploads/2015/03/uploadfsdfsdf1426350179.1426350368774.png",
                    iconWidth = 30, iconHeight = 50,
                    iconAnchorX = 15, iconAnchorY = 25) # to be defined before running the leaflet calls above

Sport in France

I wanted to take advantage of the start of the new term to put online a few posts on data science (as they say), in particular based on R projects from the Data Science pour l'Actuariat program. Last year, I had already published a post on sport ("le sport, une activité de riches"). This time, inspired by what Benoit proposed, we will look at who the members of the various sports federations are, and where they live. As always in R, we load the libraries we are going to use…

library(rgdal)
library(sp)
library(reshape2)
library(data.table)
library(ggplot2)
library(gridExtra)
library(ggmap)
library(RColorBrewer)
library(classInt)
library(backports)
library(OpenStreetMap)

A quick aside here: in practice, we rarely know beforehand what will be needed… ex post, this library loading gets moved back to the beginning. I think it would be better to load each library right where it is used. Anyway, next, we need the data

Url_Licences = "https://www.data.gouv.fr/s/resources/recensement-des-licences-et-clubs-aupres-des-federations-sportives-agreees-par-le-ministere-charge-d/20180131-163516/Licences_2015.csv"
Licences_2015 = read.csv(file=Url_Licences, header=TRUE, sep=",",stringsAsFactors = FALSE) 
Url_Federation = "http://freakonometrics.free.fr/Projet_R/Code_federation.csv"
Code_Fede = read.csv(Url_Federation, sep=";",header=FALSE, skip=3)
colnames(Code_Fede) = c("Code_Federation","Libelle_Federation")

Here we rename the variables, which will make things simpler later on, and we keep only a few rows of interest

Code_Fede = Code_Fede[c(1:31,33:92),c(1:2)]

We then need the coordinates of the cities, to draw a map

Commune = read.csv(file="https://www.data.gouv.fr/fr/datasets/r/554590ab-ae62-40ac-8353-ee75162c05ee", sep=";", header=TRUE)

In fact, only the latitude and the longitude are of interest to us

Geocod = colsplit(Commune$coordonnees_gps, ",", c("Latitude", "Longitude"))
Commune = data.frame(Commune,Geocod)

A bit of cleaning won't hurt

Commune$Ligne_5 = NULL
Commune$coordonnees_gps = NULL
doublons = which(duplicated(Commune$Code_commune_INSEE)) # detect duplicated rows
Commune_Indiv = Commune[-doublons,]

We now add a label for each sport

Licences_2015 = merge(x=Licences_2015, y=Code_Fede, by.x="fed_2014", by.y="Code_Federation", all.y=TRUE)

And we also remove the rows where the commune codes are missing (since those data cannot be used)

Licences_2015 = Licences_2015[!is.na(Licences_2015$newcog2),]

We need to be a bit careful with Paris and Marseille, since there the data are by arrondissement,

for (i in 1:nrow(Licences_2015)){
  if (Licences_2015[i,c("newcog2")]=="75056") {
    (Licences_2015[i,c("newcog2")] = "75101")}
  if (Licences_2015[i,c("newcog2")]=="13055") {
    (Licences_2015[i,c("newcog2")] = "13101")}}
Licences_2015 = merge(x=Licences_2015, y=Commune_Indiv, by.x="newcog2", by.y="Code_commune_INSEE", all.x=TRUE)

We're almost there. We now create the licence rate variable (number of licences divided by the population) for each commune

Licences_2015$Taux_Licencies = ifelse(Licences_2015$pop_2014 != 0,Licences_2015$l_2015/Licences_2015$pop_2014,0)

Now we can play! Or almost… we still have to do a few aggregations, depending on what we want to display.

df_Nb_Lic_Agg_Fed = aggregate(data.frame(
Nb_Licence = Licences_2015$l_2015,
Nb_hommes = Licences_2015$l_h_2015,
Nb_femmes = Licences_2015$l_f_2015,
NbLicences_0_4_Ans=Licences_2015$l_0_4_2015,
NbLicences_5_9_Ans=Licences_2015$l_5_9_2015,
NbLicences_10_14_Ans=Licences_2015$l_10_14_2015,
NbLicences_15_19_Ans=Licences_2015$l_15_19_2015,
NbLicences_20_29_Ans=Licences_2015$l_20_29_2015,
NbLicences_30_44_Ans=Licences_2015$l_30_44_2015,
NbLicences_45_59_Ans=Licences_2015$l_45_59_2015,
NbLicences_60_74_Ans=Licences_2015$l_60_74_2015,
NbLicences_75_Ans=Licences_2015$l_75_2015,
Nb_0_4_Ans=Licences_2015$pop_0_4_2014,
Nb_5_9_Ans=Licences_2015$pop_5_9_2014,
Nb_10_14_Ans=Licences_2015$pop_10_14_2014,
Nb_15_19_Ans=Licences_2015$pop_15_19_2014,
Nb_20_29_Ans=Licences_2015$pop_20_29_2014,
Nb_30_44_Ans=Licences_2015$pop_30_44_2014,
Nb_45_59_Ans=Licences_2015$pop_45_59_2014,
Nb_60_74_Ans=Licences_2015$pop_60_74_2014,
Nb_75_Ans=Licences_2015$pop_75_2014,
Pop_femmes=Licences_2015$popf_2014,
Pop_hommes=Licences_2015$poph_2014,
Pop_Totale=Licences_2015$pop_2014), 
by = list(Federation = Licences_2015$Libelle_Federation), sum, na.rm = TRUE)

We can then compute the "feminization rate" of each sport

df_Nb_Lic_Agg_Fed$tx_femmes = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence!=0,df_Nb_Lic_Agg_Fed$Nb_femmes/df_Nb_Lic_Agg_Fed$Nb_Licence,0)

or the breakdown by age group of the number of licence holders per federation

df_Nb_Lic_Agg_Fed$Nb_Licence_Norme = 
  df_Nb_Lic_Agg_Fed$NbLicences_0_4_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_5_9_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_10_14_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_15_19_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_20_29_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_30_44_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_45_59_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_60_74_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_75_Ans

For the 0-14 age group, we then set

df_Nb_Lic_Agg_Fed$Tx_Licences_0_14_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,      (df_Nb_Lic_Agg_Fed$NbLicences_0_4_Ans+df_Nb_Lic_Agg_Fed$NbLicences_5_9_Ans+df_Nb_Lic_Agg_Fed$NbLicences_10_14_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

and for the 15-29 age group

df_Nb_Lic_Agg_Fed$Tx_Licences_15_29_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,
(df_Nb_Lic_Agg_Fed$NbLicences_15_19_Ans+
df_Nb_Lic_Agg_Fed$NbLicences_20_29_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

for the 30-44 age group

df_Nb_Lic_Agg_Fed$Tx_Licences_30_44_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,(df_Nb_Lic_Agg_Fed$NbLicences_30_44_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

for the 45-59 age group

df_Nb_Lic_Agg_Fed$Tx_Licences_45_59_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,                                        (df_Nb_Lic_Agg_Fed$NbLicences_45_59_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

and for the 60-and-over age group (you get the idea)

df_Nb_Lic_Agg_Fed$Tx_Licences_60_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0, (df_Nb_Lic_Agg_Fed$NbLicences_60_74_Ans+ df_Nb_Lic_Agg_Fed$NbLicences_75_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

We move on to determining the top 25 federations in terms of number of licence holders

dt_Nb_Lic_Agg_Fed = data.table(df_Nb_Lic_Agg_Fed)
setorder(dt_Nb_Lic_Agg_Fed,-Nb_Licence,na.last=TRUE)
dt_Nb_Lic_Agg_Main_Fed = dt_Nb_Lic_Agg_Fed[1:25,]
graph1 = ggplot(data=dt_Nb_Lic_Agg_Main_Fed, aes(x=reorder(Federation,Nb_Licence), y=Nb_Licence)) + 
  geom_bar(stat="Identity",fill = "blue")+
  geom_text(aes(label=Nb_Licence),check_overlap = TRUE, vjust=0.5, hjust=0, color="blue")+
  ggtitle("TOP 25 des fédérations sportives en termes de licenciés")+
  ylim(0, 2500000)+
  xlab("Fédérations") + ylab("Nombre de licences")
graph1+coord_flip()

We then order by the proportion of women,

setorder(dt_Nb_Lic_Agg_Main_Fed,-tx_femmes,na.last=TRUE)
graph2 = ggplot(data=dt_Nb_Lic_Agg_Main_Fed) +
  aes(x =reorder(Federation,tx_femmes), y = tx_femmes) + geom_bar(stat="Identity",fill = "pink")+
geom_text(aes(label=paste(round(100*tx_femmes, 0), "%", sep="")),check_overlap = TRUE, vjust=0.5, hjust=0.5, color="black")+
xlab("Fédération") + ylab("part des licenciées femmes")+
ggtitle("la pratique sportive féminine par fédération")  
graph2+coord_flip()

And finally we look at the breakdown by age group

df_Nb_Lic_Agg_Main_Fed = data.frame(dt_Nb_Lic_Agg_Main_Fed)
Licence_Age = melt(df_Nb_Lic_Agg_Main_Fed, id=c("Federation"), measured=c("Tx_Licences_0_14_Ans","Tx_Licences_15_29_Ans", "Tx_Licences_30_44_Ans", "Tx_Licences_45_59_Ans","Tx_Licences_60_Ans"))
Licence_Age_Clean = Licence_Age[(Licence_Age$variable=="Tx_Licences_0_14_Ans" |       Licence_Age$variable=="Tx_Licences_15_29_Ans" | Licence_Age$variable=="Tx_Licences_30_44_Ans" |
Licence_Age$variable=="Tx_Licences_45_59_Ans" |
Licence_Age$variable=="Tx_Licences_60_Ans"),]  
dt_Licence_Age_Clean = data.table(Licence_Age_Clean)
setorder(dt_Licence_Age_Clean,-variable,na.last=TRUE)
setorder(Licence_Age_Clean,variable,na.last=TRUE)
graph3 = ggplot(data=Licence_Age_Clean, aes(x=Federation, y=value, fill=variable)) +
geom_bar(stat="identity")+
xlab("Fédération") + ylab("répartition par classe d'âge")+
ggtitle("Répartition des licenciés par classe d'âge")  
graph3+coord_flip()+scale_fill_brewer(palette="Paired")

Reading the graph above, the sports could be classified into 3 categories:

  • the "young people's sports": more than half of their licence holders are under 15; this is the case of gymnastics, judo, handball, swimming, and sailing.
  • the "older people's sports": here we unsurprisingly find hiking, cycle touring, golf, pétanque, shooting and underwater sports; three quarters of their licence holders are over 45.
  • the "sports for everyone", corresponding to those not yet mentioned, for which the age groups appear more balanced

Finally, we can look at a few sports on a map

map.France = get_map(location = c(lon=1.75, lat=46.70), zoom = 6)
Rugby_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="133",]
Voile_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="128",]
Ski_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="121",]
PetanQ_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="242",]
Rugby = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = Rugby_2015, colour="red", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("Rugby")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="red"))
Voile = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = Voile_2015, colour="blue", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("Voile")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="blue"))
Ski = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = Ski_2015, colour="grey", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("Ski")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="grey"))
Petanque = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = PetanQ_2015, colour="chocolate3", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("pétanque et jeu provençal")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="chocolate3"))
grid.arrange(Rugby,Voile,Ski,Petanque, ncol=2, nrow = 2,top="visualisation géographique de sports \n à fort ancrage régional")

Fun, isn't it?

Convex Regression Model

This morning during the lecture on nonlinear regression, I mentioned (very) briefly the case of convex regression. Since I forgot to mention the codes in R, I will publish them here. Assume that y_i=m(\mathbf{x}_i)+\varepsilon_i where m:\mathbb{R}^d\rightarrow \mathbb{R} is some convex function.

Then m is convex if and only if \forall\mathbf{x}_1,\mathbf{x}_2\in\mathbb{R}^d, \forall t\in[0,1], m(t\mathbf{x}_1+[1-t]\mathbf{x}_2) \leq tm(\mathbf{x}_1)+[1-t]m(\mathbf{x}_2). Hildreth (1954) proved that if m^\star=\underset{m \text{ convex}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-m(\mathbf{x}_i)\big)^2\right\rbrace then \mathbf{\theta}^\star=(m^\star(\mathbf{x}_1),\cdots,m^\star(\mathbf{x}_n)) is unique.

Let \mathbf{y}=\mathbf{\theta}+\mathbf{\varepsilon}; then \mathbf{\theta}^\star=\underset{\mathbf{\theta}\in \mathcal{K}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-\theta_i\big)^2\right\rbrace where \mathcal{K}=\{\mathbf{\theta}\in\mathbb{R}^n:\exists m\text{ convex },m(\mathbf{x}_i)=\theta_i\}. I.e. \mathbf{\theta}^\star is the projection of \mathbf{y} onto the (closed) convex cone \mathcal{K}. The projection theorem gives existence and uniqueness.

For convenience, in the application, we will consider the real-valued case, m:\mathbb{R}\rightarrow \mathbb{R}, i.e. y_i=m(x_i)+\varepsilon_i. Assume that observations are ordered x_1\leq x_2\leq\cdots \leq x_n. Here \mathcal{K}=\left\lbrace\mathbf{\theta}\in\mathbb{R}^n:\frac{\theta_2-\theta_1}{x_2-x_1}\leq \frac{\theta_3-\theta_2}{x_3-x_2}\leq \cdots \leq \frac{\theta_n-\theta_{n-1}}{x_n-x_{n-1}}\right\rbrace

Hence, this is a quadratic program with n-2 linear constraints.

m^\star is then a piecewise linear function (obtained by interpolating between consecutive pairs (x_i,\theta_i^\star)).
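
Just to make that quadratic program explicit, here is a minimal sketch (an illustration of mine, not the approach used in the post, which relies on the cobs package below), using the quadprog package on simulated data with distinct x values,

library(quadprog)
set.seed(1)
n = 50
x = sort(runif(n,-1,1))
y = x^2 + rnorm(n)/10
# minimize sum (y_i-theta_i)^2 subject to non-decreasing slopes (convexity)
d = diff(x)             # x_{i+1}-x_i, all positive here
A = matrix(0,n,n-2)
for(i in 1:(n-2)){
  A[i,i]   =  1/d[i]
  A[i+1,i] = -1/d[i]-1/d[i+1]
  A[i+2,i] =  1/d[i+1]
}
theta = solve.QP(Dmat=diag(n), dvec=y, Amat=A, bvec=rep(0,n-2))$solution
plot(x,y)
lines(x,theta,col="red",lwd=2)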

If m is differentiable, m is convex if and only if m(\mathbf{x})+ \nabla m(\mathbf{x})^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{x},\mathbf{y}

More generally, if m is convex, then for any \mathbf{x} there exists \xi_{\mathbf{x}}\in\mathbb{R}^n such that m(\mathbf{x})+ \xi_{\mathbf{x}}^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{y}.
\xi_{\mathbf{x}} is called a subgradient of m at {\mathbf{x}}, and the set of subgradients is the subdifferential \partial m(\mathbf{x})=\big\lbrace \xi:m(\mathbf{x})+ \xi^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}),\forall \mathbf{y}\in\mathbb{R}^n\big\rbrace

Hence, \mathbf{\theta}^\star is the solution of \text{argmin}\big\lbrace\|\mathbf{y}-\mathbf{\theta}\|^2\big\rbrace subject to \theta_i+\xi_i^{\text{T}}[\mathbf{x}_j-\mathbf{x}_i]\leq\theta_j,~\forall i,j and \xi_1,\cdots,\xi_n\in\mathbb{R}^n. Now, to do it for real, use the cobs package for constrained (b)splines regression,

library(cobs)

To get a convex regression, use

plot(cars)
x = cars$speed
y = cars$dist
rc = conreg(x,y,convex=TRUE)
lines(rc, col = 2)


Here we can get the values of the knots

rc
 
Call:  conreg(x = x, y = y, convex = TRUE) 
Convex regression: From 19 separated x-values, using 5 inner knots,
     7,    8,    9,   20,   23.
RSS =  1356; R^2 = 0.8766;
 needed (5,0) iterations

and actually, if we use them in a linear-spline regression, we get the same output here

library(splines)
reg = lm(dist~bs(speed,degree=1,knots=c(4,7,8,9,20,23,25)),data=cars)
u = seq(4,25,by=.1)
v = predict(reg,newdata=data.frame(speed=u))
lines(u,v,col="green")

Let us add vertical lines for the knots

abline(v=c(4,7,8,9,20,23,25),col="grey",lty=2)

Parallelizing Linear Regression or Using Multiple Sources

My previous post was explaining how mathematically it was possible to parallelize computation to estimate the parameters of a linear regression. More specifically, we have a matrix \mathbf{X}, which is an n\times k matrix, and \mathbf{y}, an n-dimensional vector, and we want to compute \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y} by splitting the job. Instead of using the n observations, we've seen that it was possible to compute "something" using the first n_1 rows, then the next n_2 rows, etc. Then, finally, we "aggregate" the m objects created to get our overall estimate.

Parallelizing on multiple cores

Let us see how it works from a computational point of view, by running each computation on a different core of the machine. Each core will act as a slave, computing what we've seen in the previous post. Here, the data we use are

y = cars$dist
X = data.frame(1,cars$speed)
k = ncol(X)

On my laptop, I have three cores, so we will split it in m=3 chunks

library(parallel)
library(pbapply)
ncl = detectCores()-1
cl = makeCluster(ncl)

This is more or less what we will do: we have our dataset, and we split the jobs,

We can then create lists containing elements that will be sent to each core, as Ewen suggested,

chunk = function(x,n) split(x, cut(seq_along(x), n, labels = FALSE))
a_parcourir = chunk(seq_len(nrow(X)), ncl)
for(i in 1:length(a_parcourir)) a_parcourir[[i]] = rep(i, length(a_parcourir[[i]]))
Xlist = split(X, unlist(a_parcourir))
ylist = split(y, unlist(a_parcourir))

It is also possible to simplify the QR functions we will use

compute_qr = function(x){
  list(Q=qr.Q(qr(as.matrix(x))),R=qr.R(qr(as.matrix(x))))
}
get_Vlist = function(j){
  Q3 = QR1[[j]]$Q %*% Q2list[[j]]
  t(Q3) %*% ylist[[j]]
}
clusterExport(cl, c("compute_qr", "get_Vlist"), envir=environment())

Then, we can run our functions on each core. The first one is

  QR1 = parLapply(cl=cl,Xlist, compute_qr)

note that it is also possible to use

  QR1 = pblapply(Xlist, compute_qr, cl=cl)

which will include a progress bar (that can be nice when the database is rather large). Then use

  library(magrittr) # for the pipe %>%
  R1 = pblapply(QR1, function(x) x$R, cl=cl) %>% do.call("rbind", .)
  Q1 = qr.Q(qr(as.matrix(R1)))
  R2 = qr.R(qr(as.matrix(R1)))
  Q2list = split.data.frame(Q1, rep(1:ncl, each=k))
  clusterExport(cl, c("QR1", "Q2list", "ylist"), envir=environment())
  Vlist = pblapply(1:length(QR1), get_Vlist, cl=cl)
  sumV = Reduce('+', Vlist)

and finally the output is

solve(R2) %*% sumV
         [,1]
X1 -17.579095
X2   3.932409

which is what we were expecting…
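
As a quick check (added here for convenience), these are indeed the coefficients of the standard least squares fit computed directly on the full dataset,

lm(dist~speed,data=cars)$coefficients
(Intercept)       speed 
 -17.579095    3.932409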

Using multiple sources

In practice, it might also happen that various "servers" have the data, but we cannot get a copy. But it is possible to run some functions on their servers, and get some output that we can use afterwards.

Datasets are supposed to be available somewhere. We can send a request, and get a matrix. Then we aggregate all of them, and send another request. That's what we will do here. Provider j should run f_1(\mathbf{X}) on his part of the data; that function will return R^{(1)}_j. More precisely, send the following to the first provider

function1 = function(subX){
return(qr.R(qr(as.matrix(subX))))}
R1 = function1(Xlist[[1]])

and actually, send that function to all providers, and aggregate the outputs

for(j in 2:m) R1 = rbind(R1,function1(Xlist[[j]]))

Then create the following objects on your side

Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list=list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*k+1:k,]

Finally, contact the providers one last time, and send them one of your objects

function2=function(subX,suby,Q){
Q1=qr.Q(qr(as.matrix(subX)))
Q2=Q
return(t(Q1%*%Q2) %*% suby)}

Provider j should then run f_2(\mathbf{X},\mathbf{y},Q_j^{(2)}) on his part of the data, using also Q_j^{(2)} as argument (which we obtained on our side), and that function will return (\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j)^{\text{T}}\mathbf{y}_j. For instance, ask the first provider to run

sumV = function2(Xlist[[1]],ylist[[1]], Q2list[[1]])

and do the same with all providers

for(j in 2:m) sumV = sumV+ function2(Xlist[[j]],ylist[[j]], Q2list[[j]])
solve(R2) %*% sumV
         [,1]
X1 -17.579095
X2   3.932409

which is what we were expecting…

Discrete or continuous modeling ?

Tuesday, we had our conference "Insurance, Actuarial Science, Data & Models", and Dylan Possamaï gave a very interesting concluding talk. In the introduction, he came back briefly to a nice discussion we usually have in economics about the kind of model we should consider. It was about optimal control. In many applications, we start with a one-period economy, then a two-period economy, and pretend that we can extend it to an n-period economy. And then, the continuous case can also be considered. A few years ago, I was working on sports games and optimal effort strategies (within a game – over a fixed time). It was a discrete model, and I was running simulations to get an efficient frontier, where coaches might say "ok, now we have a sufficient (positive) lead, and we are getting closer to the end of the game, so we can 'lower the effort', i.e. top players can relax a little bit" (it was on basketball games). I asked a good friend of mine, Romuald, to help me with some technical parts of the proofs, but he did not like my discrete-time model very much, and wanted to move to continuous time. And for six years now, we keep saying that someday we should get back to that paper….

My initial thought was that the difference was really "cultural": you are either a continuous-time sort of guy, or a discrete-time one (or maybe neither of the two, but that's another problem). He works with stochastic processes, I work with time series. Of course, we can find connections, but most of the time, the techniques are very different. And Tuesday, Dylan mentioned a very nice illustration showing that it's not necessarily a cultural difference, and that sometimes it is great to move to continuous time. So I wanted to illustrate that idea.

Consider for instance the following curve.

vu = seq(0,1,length=601)
vv = sin(vu*pi)
plot(vu,vv,type="l",lwd=2)

The goal is to find the value of the maximum, numerically. And here, there are two (very) different strategies

  • the discrete one: we see a (finite) collection of points – for instance, the graph above is a collection of 601 points (connected with a straight line) – and in that case, we need a standard algorithm (in O(n)) to get the value of the maximum
  • the continuous one: we see a function x\mapsto \sin(\pi x), and in that case, we use optimization routines

In the second case, use for instance

optim(0,function(x) -sin(pi*x))
$par
[1] 0.5
 
$value
[1] -1

For the first case, we can use the standard R function, and see how long it takes, using simulations, to get an approximation of the maximum

library(microbenchmark)
max_time = function(n) median(microbenchmark(max(sin(runif(n)*pi)))$time)
vn = 10^(seq(1,6,length=21))
vt = Vectorize(max_time)(vn)
plot(vn,vt/1e9,col="blue",pch=19,type="b",log="xy")

but of course, some home-made code can also be used

c_max = function(n=100){
  x = sin(runif(n)*pi)
  y = x[1]
  for(i in 2:length(x)) { 
    if(x[i] > y) { y = x[i] }}
  return(y)}
max_time = function(n) median(microbenchmark(c_max(n))$time)
vt = Vectorize(max_time)(vn) # recompute the timings with the home-made function
lines(vn,vt/1e9,type="b")

We can add that horizontal red line using

abline(h=median(microbenchmark(optim(.5,function(x) sin(pi*x)))$time)/1e9,lty=2,col="red")

So, indeed, it looks like the computational time to find the maximum in a list of n elements is linear in n, i.e. O(n). And the R code is faster than the home-made code. But also, interestingly, using continuous time (based on analysis techniques) can be much faster. So, sometimes, continuous time models can be much easier to solve, from a numerical perspective.

Classification from scratch, boosting 11/8

Eleventh post of our series on classification from scratch. Today, that should be the last one… unless I forgot something important. So today, we discuss boosting.

An econometrician perspective

I might start with a non-conventional introduction. But that’s actually how I understood what boosting was about. And I am quite sure it has to do with my background in econometrics.

The goal here is to solve something which looks like m^\star=\underset{m\in\mathcal{M}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m(\mathbf{x}_i))\right\rbrace for some loss function \ell, and for some set of predictors \mathcal{M}. This is an optimization problem. Well, the optimization is here in a function space, but still, it's simply an optimization problem. And from a numerical perspective, optimization is solved using gradient descent (this is why this technique is also called gradient boosting). And the gradient descent can be visualized like below

Again, the optimum is not some real value x^\star, but some function m^\star. Thus, here we will have something like m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(y_i,m^{(k-1)}(\mathbf{x}_i)+h(\mathbf{x}_i))\right\rbrace (as they write it in serious articles) where the term on the right can also be written m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(\underbrace{y_i-m^{(k-1)}(\mathbf{x}_i)}_{\varepsilon_{k,i}},h(\mathbf{x}_i))\right\rbrace I prefer the latter, because we see clearly that h is some model we fit on the remaining residuals.

We can rewrite it like that: define r_{i,k}=-\left.\frac{\partial \ell(y_i,m(\mathbf{x}_i))}{\partial m(\mathbf{x}_i)}\right\vert_{m(\mathbf{x}_i)=m^{(k-1)}(\mathbf{x}_i)} for all i=1,\cdots,n. The goal is to fit a model so that r_{i,k}=h^\star(\mathbf{x}_i), and when we have that optimal function, set m_k(\mathbf{x})=m_{k-1}(\mathbf{x})+\gamma_k h^\star(\mathbf{x}) (yes, we can include some shrinkage here).

Two important comments here. First of all, the idea might seem weird to any econometrician. We first fit a model to explain y by some covariates \mathbf{x}. Then we consider the residuals \widehat{\varepsilon}, and try to explain them with the same covariates \mathbf{x}. If you try that with a linear regression, you'd be done at the end of step 1, since the residuals \widehat{\varepsilon} are orthogonal to the covariates \mathbf{x}: there is no way we can learn from them. Here it works because we consider simple nonlinear models. And actually, something that can be used is to add a shrinkage parameter. Do not consider \widehat{\varepsilon}=y-\widehat{m}(\mathbf{x}) but \widehat{\varepsilon}=y-\gamma\widehat{m}(\mathbf{x}). The idea of weak learners is extremely important here. The more we shrink, the longer it will take, but that's not (too) important.

I should also mention that it's nice to keep learning from our mistakes. But somehow, we should stop, someday. I said that I would not mention this part in this series of posts, maybe later on. But heuristically, we should stop when we start to overfit. And this can be observed either using a training/validation split of the initial dataset or using cross-validation. I will get back to that issue later on in this post, but again, those ideas should probably be dedicated to another series of posts.

Learning with splines

Just to make sure we get it, let's try to learn with splines. Because standard splines have fixed knots, we do not really "learn" here (and after a few iterations we get to what we would have with a standard spline regression). So here, we will (somehow) optimize the knot locations. There is a package to do so. And just to illustrate, we use a Gaussian regression here, not a classification (we will do that later on). Consider the following dataset (with only one covariate)

n=300
 set.seed(1)
 u=sort(runif(n)*2*pi)
 y=sin(u)+rnorm(n)/4
 df=data.frame(x=u,y=y)

For an optimal choice of knot locations, we can use

library(freeknotsplines)
xy.freekt=freelsgen(df$x, df$y, degree = 1, numknot = 2, 555)

With 5% shrinkage, the code is simply the following

v=.05
 library(splines)
 xy.freekt=freelsgen(df$x, df$y, degree = 1, numknot = 2, 555)
 fit=lm(y~bs(x,degree=1,knots=xy.freekt@optknot),data=df)
 yp=predict(fit,newdata=df)
 df$yr=df$y - v*yp
 YP=v*yp
 for(t in 1:200){
   xy.freekt=freelsgen(df$x, df$yr, degree = 1, numknot = 2, 555)
   fit=lm(yr~bs(x,degree=1,knots=xy.freekt@optknot),data=df)
   yp=predict(fit,newdata=df)
   df$yr=df$yr - v*yp
   YP=cbind(YP,v*yp)}
 nd=data.frame(x=seq(0,2*pi,by=.01))
 viz=function(M){
    if(M==1)  y=YP[,1]
    if(M>1)   y=apply(YP[,1:M],1,sum)
    plot(df$x,df$y,ylab="",xlab="")
    lines(df$x,y,type="l",col="red",lwd=3)
    fit=lm(y~bs(x,degree=1,df=3),data=df)
    yp=predict(fit,newdata=nd)
    lines(nd$x,yp,type="l",col="blue",lwd=3)
    lines(nd$x,sin(nd$x),lty=2)}

To visualize the output after 100 iterations, use

viz(100)


Clearly, we see that we learn from the data here… Cool, isn’t it?

Learning with stumps (and trees)

Let us try something else. What if we consider, at each step, a regression tree instead of a linear-by-parts regression (which is what we considered with linear splines)?

library(rpart)
v=.1 
fit=rpart(y~x,data=df)
yp=predict(fit)
df$yr=df$y - v*yp
YP=v*yp
for(t in 1:100){
  fit=rpart(yr~x,data=df)
  yp=predict(fit,newdata=df)
  df$yr=df$yr - v*yp
  YP=cbind(YP,v*yp)}

Again, to visualise the learning process, use

viz=function(M){
y=apply(YP[,1:M],1,sum)
plot(df$x,df$y,ylab="",xlab="")
lines(df$x,y,type="s",col="red",lwd=3)
fit=rpart(y~x,data=df)
yp=predict(fit,newdata=nd)
lines(nd$x,yp,type="s",col="blue",lwd=3)
lines(nd$x,sin(nd$x),lty=2)}


This time, with those trees, it looks like not only do we have a good model, but also a different model from the one we can get using a single regression tree.

What if we change the shrinkage parameter?

viz=function(v=0.05){
  fit=rpart(y~x,data=df)
  yp=predict(fit)
  df$yr=df$y - v*yp
  YP=v*yp
  for(t in 1:100){
    fit=rpart(yr~x,data=df)
    yp=predict(fit,newdata=df)
    df$yr=df$yr - v*yp
    YP=cbind(YP,v*yp)}
  y=apply(YP,1,sum)
    plot(df$x,df$y,xlab="",ylab="")
    lines(df$x,y,type="s",col="red",lwd=3)
    fit=rpart(y~x,data=df)
    yp=predict(fit,newdata=nd)
    lines(nd$x,yp,type="s",col="blue",lwd=3)
    lines(nd$x,sin(nd$x),lty=2)}


There is clearly an impact of that shrinkage parameter. It has to be small to get a good model. This is the idea of using weak learners to get a good prediction.

Classification and Adaboost

Now that we understand how boosting works, let's try to adapt it to classification. It will be more complicated because residuals are usually not very informative in a classification problem. And it will be hard to shrink. So let's try something slightly different, to introduce the adaboost algorithm.

In our initial discussion, the goal was to minimize a convex loss function. Here, if we express classes as \{-1,+1\}, the loss function we consider is e^{-y\cdot m(\mathbf{x})} (this product y\cdot m(\mathbf{x}) was already discussed when we've seen the SVM algorithm). Note that the loss function related to the logistic model would be \log(1+e^{-y\cdot m(\mathbf{x})}).
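
Just to visualize the difference between those loss functions (a small illustration I add here, not in the original post), we can plot them as functions of the margin y\cdot m(\mathbf{x}), together with the 0/1 misclassification loss,

margin = seq(-2,2,by=.01)
plot(margin,exp(-margin),type="l",col="red",lwd=2,
     xlab="margin y.m(x)",ylab="loss")
lines(margin,log(1+exp(-margin)),col="blue",lwd=2)
lines(margin,1*(margin<0),lty=2)
legend("topright",c("exponential","logistic","0/1"),
       col=c("red","blue","black"),lty=c(1,1,2),lwd=c(2,2,1))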

What we do here is related to gradient descent (or the Newton algorithm). Previously, we were learning from our errors: at each iteration, the residuals were computed and a (weak) model was fitted to these residuals, and the contribution of this weak model was used in a gradient descent optimization process. Here things will be different, because (from my understanding) it is more difficult to play with residuals, since null residuals never exist in classification. So we will add weights. Initially, all the observations have the same weight. But iteratively, we will change them. We will increase the weights of the wrongly predicted individuals and decrease the ones of the correctly predicted individuals. Somehow, we want to focus more on the difficult predictions. That's the trick. And I guess that's why it performs so well. This algorithm is well described on wikipedia, so we will use it.

We start with \mathbf{\omega}_0=\mathbf{1}/n, then at each step fit a model (a classification tree) with weights \mathbf{\omega}_k (we did not discuss weights in the algorithms for trees, but it is actually straightforward in the formula). Let \widehat{h}_{\mathbf{\omega}_k} denote that model (i.e. the probability in each leaf). Then consider the classifier 2~\mathbf{1}[\widehat{h}_{\mathbf{\omega}_k}(\cdot)>0.5]-1, which returns a value in \{-1,+1\}. Then set \varepsilon_k=\sum_{i\in\mathcal{I}_k}\omega_i where \mathcal{I}_k is the set of misclassified individuals, \mathcal{I}_k=\big\lbrace i:2~\mathbf{1}[\widehat{h}_{\mathbf{\omega}_k}(\mathbf{x}_i)>0.5]-1\neq y_i\big\rbrace. Then set \alpha_k = \frac{1}{2} \ln \left(\frac{1-\epsilon_k}{\epsilon_k}\right) and update the model using m_{k+1}=m_k+\alpha_k\widehat{h}_{\mathbf{\omega}_k} as well as the weights \omega_{k+1,i}=\omega_{k,i}e^{-y_i \alpha_k \widehat{h}_{\mathbf{\omega}_k}(\mathbf{x}_i)} (of course, divide by the sum to ensure that the total sum is 1). And as previously, one can include some shrinkage. To visualize the convergence of the process, we will plot the total error on our dataset.

n_iter = 100
y = (myocarde[,"PRONO"]==1)*2-1
x = myocarde[,1:7]
error = rep(0,n_iter) 
f = rep(0,length(y)) 
w = rep(1,length(y)) 
alpha = 1
library(rpart)
for(i in 1:n_iter){
  w = exp(-alpha*y*f) *w 
  w = w/sum(w)
  rfit = rpart(y~., x, w, method="class")
  g = -1 + 2*(predict(rfit,x)[,2]>.5) 
  e = sum(w*(y*g<0))
  alpha = .5*log ( (1-e) / e )
  alpha = 0.1*alpha 
  f = f + alpha*g
  error[i] = mean(1*f*y<0)
}
plot(seq(1,n_iter),error,type="l",
     ylim=c(0,.25),col="blue",
     ylab="Error Rate",xlab="Iterations",lwd=2)


Here we face a classical problem in machine learning: we have a perfect model, with zero error. That is nice, but not interesting. It is also possible in econometrics, with polynomial fits: with 10 observations and a polynomial of degree 9, we have a perfect fit, but a poor model. Here it is the same. So the trick is to split our dataset in two: a training dataset and a validation one

set.seed(123)
id_train = sample(1:nrow(myocarde), size=45, replace=FALSE)
train_myocarde = myocarde[id_train,]
test_myocarde = myocarde[-id_train,]

We construct the model on the first one, and we check on the second one that it’s not that bad…

y_train = (train_myocarde[,"PRONO"]==1)*2-1
x_train =  train_myocarde[,1:7]
y_test = (test_myocarde[,"PRONO"]==1)*2-1
x_test = test_myocarde[,1:7]
train_error = rep(0,n_iter) 
test_error = rep(0,n_iter)
f_train = rep(0,length(y_train))
f_test = rep(0,length(y_test)) 
w_train = rep(1,length(y_train)) 
alpha = 1
for(i in 1:n_iter){
  w_train = w_train*exp(-alpha*y_train*f_train) 
  w_train = w_train/sum(w_train)
  rfit = rpart(y_train~., x_train, w_train, method="class")
  g_train = -1 + 2*(predict(rfit,x_train)[,2]>.5)
  g_test = -1 + 2*(predict(rfit,x_test)[,2]>.5)
  e_train = sum(w_train*(y_train*g_train<0))
  alpha = .5*log ( (1-e_train) / e_train )
  alpha = 0.1*alpha 
  f_train = f_train + alpha*g_train
  f_test = f_test + alpha*g_test
  train_error[i] = mean(1*f_train*y_train<0)
  test_error[i] = mean(1*f_test*y_test<0)}
plot(seq(1,n_iter),test_error,col='red')
lines(train_error,lwd=2,col='blue')


Here, as previously, after 80 iterations, we have a perfect model on the training dataset, but it behaves badly on the validation dataset. But with 20 iterations, it seems to be ok…

R function

Of course, it’s possible to use R functions,

library(gbm)
gbmWithCrossValidation = gbm(PRONO ~ .,distribution = "bernoulli",
data = myocarde,n.trees = 2000,shrinkage = .01,cv.folds = 5,n.cores = 1)
bestTreeForPrediction = gbm.perf(gbmWithCrossValidation)

Here cross-validation is considered, rather than a training/validation split, as well as forests instead of single trees, but overall, the idea is the same… Of course, the output is much nicer (here the shrinkage is a very small parameter, and learning is extremely slow)

Classification from scratch, trees 9/8

Ninth post of our series on classification from scratch. Today, we'll see the heuristics of the algorithm inside classification trees. And yes, I promised eight posts in that series, but clearly, that was not sufficient… sorry for the poor prediction.

Decision Tree

Decision trees are easy to read. So easy to read that they are everywhere

We start from the top, and we go down, with a binary choice, at each stop, each node. Let us see how it works on our dataset

library(rpart)
cart = rpart(PRONO~.,data=myocarde)
library(rpart.plot)
prp(cart,type=2,extra=1)


We start here with one single leaf. If we have two explanatory variables (the x-axis and the y-axis if we want to plot them), we will check what happens if we cut the leaf according to the value of the first variable (and there will be two subgroups, the one on the left and the one on the right)

or if we cut according to the second one (and there will be two subgroups, the one on top and the one below).

Why and where do we cut? Let us formalize a little bit. A node (a leaf) contains observations, i.e. \{(y_i,\mathbf{x}_i)\} for some i\in\mathcal{I}\subset\{1,\cdots,n\}. Hence, a leaf is characterized by \mathcal{I}. For instance, the first node in the tree is \mathcal{I}=\{1,\cdots,n\}. A (binary) split is based on one specific variable – say x_j – and a cutoff, say s. Then, there are two options:

  • either x_{i,j}\leq s, then observation i goes on the left, in \mathcal{I}_L
  • or x_{i,j}> s, then observation i goes on the right, in \mathcal{I}_R

Thus, \mathcal{I}=\mathcal{I}_L\cup\mathcal{I}_R.

Now, define some impurity index in each node. In the context of a classification tree, the most popular index (the so-called impurity index) is the Gini index, defined for node \mathcal{I} as G(\mathcal{I})=-\sum_{y\in\{0,1\}}p_y(1-p_y) where p_y is the proportion of individuals of type y in the leaf. I use this notation here because it can be extended to the case of more than one class. Here, we consider only binary classification. Now, why p_y(1-p_y)? Because we want leaves that are extremely homogeneous. In our dataset, out of 71 individuals, 42 died, 29 survived. A perfect classification would be obtained if we could split in two, with the 29 survivors on the left, and the 42 dead on the right. In that case, leaves would be perfectly homogeneous. So, when p_0\approx1 or p_1\approx1, we have strong homogeneity. If we want an index to maximize, -p_y(1-p_y) might be an interesting candidate. Furthermore, the worst case would be a leaf with p_0\approx1/2, which is exactly what we have here. Note that we can also write G(\mathcal{I})=-\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\left(1-\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\right) where n_{y,\mathcal{I}} is the number of individuals of type y in the leaf \mathcal{I}, and n_{\mathcal{I}} is the number of individuals in the leaf \mathcal{I}.
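
Just to visualize why (a small illustration added here): -p(1-p) is maximal (equal to zero) for pure leaves, p=0 or p=1, and minimal at p=1/2, the least homogeneous case,

p = seq(0,1,by=.01)
plot(p,-p*(1-p),type="l",lwd=2,xlab="p",ylab="-p(1-p)")
abline(v=.5,lty=2,col="red")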

If we do not split, we have index G(\mathcal{I})=-\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\left(1-\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\right) while if we split, define the index G(\mathcal{I}_L,\mathcal{I}_R)=-\sum_{x\in\{L,R\}}\frac{n_{\mathcal{I}_x}}{n_{\mathcal{I}}}\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\left(1-\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\right) The code to compute it would be

gini = function(y,classe){
  T = table(y,classe)
  nx = apply(T,2,sum)
  n = sum(T)
  pxy = T/matrix(rep(nx,each=2),nrow=2)
  omega = matrix(rep(nx,each=2),nrow=2)/n
  g = -sum(omega*pxy*(1-pxy))
  return(g)}

Actually, one can consider other indices, like the entropy measure E(\mathcal{I})=-\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\log\left(\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\right) while if we split, E(\mathcal{I}_L,\mathcal{I}_R)=-\sum_{x\in\{L,R\}}\frac{n_{\mathcal{I}_x}}{n_{\mathcal{I}}}\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\log\left(\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\right)

entropy = function(y,classe){
  T = table(y,classe)
  nx = apply(T,2,sum)
  n = sum(T)
  pxy = T/matrix(rep(nx,each=2),nrow=2)
  omega = matrix(rep(nx,each=2),nrow=2)/n
  g = sum(omega*pxy*log(pxy))
  return(g)}

This index was used originally in C4.5 algorithm.

Dividing a leaf (or not)

For instance, consider the very first split. Assume that we want to split according to the very first variable

CLASSE = myocarde[,1] <=100
table(CLASSE)
CLASSE
FALSE  TRUE 
   13    58

In that case, there will be 13 individuals on one side (the left, say), and 58 on the other side (the right).

gini(y=myocarde$PRONO,classe=CLASSE)
[1] -0.4640415

Initially, without any split, it was

-2*mean(myocarde$PRONO)*(1-mean(myocarde$PRONO))
[1] -0.4832375

which can actually also be obtained with

CLASSE = myocarde[,1] <= Inf
gini(y=myocarde$PRONO,classe=CLASSE)
[1] -0.4832375

There is a net gain in splitting of

gini(y=myocarde$PRONO,classe=(myocarde[,1]<=100))-
gini(y=myocarde$PRONO,classe=(myocarde[,1]<=Inf))
[1] 0.01919591

Now, how do we split? Which variable and which cutoff? Well… let’s try all possible splits… Here, we have 7 variables. We can consider all possible values, using

sort(unique(myocarde[,1]))

But in massive datasets, it can be very long. Here, I prefer

seq(min(myocarde[,1]),max(myocarde[,1]),length=101)

so that we try 101 values of possible cutoff. Overall, the number of computations is rather low, with 707 Gini indices to compute. Again, I won’t get back here on the motivations for such a technique to create partitions, I will keep that for the course in Barcelona, but it is fast.

mat_gini = mat_v=matrix(NA,7,101)
for(v in 1:7){
  variable=myocarde[,v]
  v_seuil=seq(quantile(myocarde[,v],
6/length(myocarde[,v])),
quantile(myocarde[,v],1-6/length(
myocarde[,v])),length=101)
  mat_v[v,]=v_seuil
  for(i in 1:101){
CLASSE=variable<=v_seuil[i]
mat_gini[v,i]=
  gini(y=myocarde$PRONO,classe=CLASSE)}}

Actually, the range of possible values is slightly different: I do not want cutoffs too far on the left or on the right… having a leaf with one or two observations is not the idea here. Now, if we plot all the functions, we get

par(mfrow=c(3,2))
for(v in 2:7){
  plot(mat_v[v,],mat_gini[v,],type="l",
  ylim=range(mat_gini),xlab="",ylab="",
  main=names(myocarde)[v]) 
  abline(h=max(mat_gini),col="blue")
}


Here, the most homogeneous leaves obtained using a cut in two parts are when we use variable ‘INSYS’. And the optimal cutoff value is close to 19. So far, that’s the only information we use. Well, actually no. If the gain is sufficiently large, we go for a split. Here, the gain is

gini(y=myocarde$PRONO,classe=(myocarde[,3]<19))-
gini(y=myocarde$PRONO,classe=(myocarde[,3]<=Inf))
[1] 0.2832801

which is large. Sufficiently large to go for it, and to split in two. Actually, we look at the relative gain

-(gini(y=myocarde$PRONO,classe=(myocarde[,3]<19))-
gini(y=myocarde$PRONO,classe=(myocarde[,3]<=Inf)))/
gini(y=myocarde$PRONO,classe=(myocarde[,3]<=Inf))
[1] 0.5862131

If that gain exceeds 1% (the default value in R), we split in two.

Then, we do it again. Twice. First, we go to the leaf on the left, with 27 observations, and we try to see if we can split it.

idx = which(myocarde$INSYS<19)
mat_gini = mat_v = matrix(NA,7,101)
for(v in 1:7){
  variable = myocarde[idx,v]
  v_seuil = seq(quantile(myocarde[idx,v],
7/length(myocarde[idx,v])),
quantile(myocarde[idx,v],1-7/length(
myocarde[idx,v])), length=101)
  mat_v[v,] = v_seuil
  for(i in 1:101){
    CLASSE = variable<=v_seuil[i]
    mat_gini[v,i]=
      gini(y=myocarde$PRONO[idx],classe=CLASSE)}}
par(mfrow=c(3,2))
for(v in 2:7){
  plot(mat_v[v,],mat_gini[v,],type="l",
       ylim=range(mat_gini),xlab="",ylab="",
       main=names(myocarde)[v]) 
  abline(h=max(mat_gini),col="blue")
}

The graph is now the following,

and observe that the best split is obtained using ‘REPUL’, with a cutoff around 1585. We check that the (relative) gain is sufficiently large, and then we go for it.
And then, we consider the other leaf, and we run the same code

idx = which(myocarde$INSYS>=19)
mat_gini = mat_v = matrix(NA,7,101)
for(v in 1:7){
  variable=myocarde[idx,v]
  v_seuil=seq(quantile(myocarde[idx,v],
6/length(myocarde[idx,v])),
quantile(myocarde[idx,v],1-6/length(
myocarde[idx,v])), length=101)
  mat_v[v,]=v_seuil
  for(i in 1:101){
    CLASSE=variable<=v_seuil[i]
    mat_gini[v,i]=
      gini(y=myocarde$PRONO[idx],
           classe=CLASSE)}}
par(mfrow=c(3,2))
for(v in 2:7){
  plot(mat_v[v,],mat_gini[v,],type="l",
       ylim=range(mat_gini),xlab="",ylab="",
       main=names(myocarde)[v]) 
  abline(h=max(mat_gini),col="blue")
}


Here, we should split according to ‘REPUL’, and the cutoff is about 1094. Here again, we have to make sure that the split is worth it. And we cut.

Now we have four leaves, and we should run the same code again. Actually, not on the very first one, which is homogeneous, but we should do the same for the other three. If we do it, we can see that we cannot split them any further: the gains would not be sufficiently large.
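
To avoid copying the same loops for every leaf, one can wrap the search for the best split into a small function; a sketch is given below (best_split is a name I introduce here, and it simply reuses the loops above on a subset of observations idx),

best_split = function(idx, n_cut=101){
  mat_gini = mat_v = matrix(NA,7,n_cut)
  for(v in 1:7){
    variable = myocarde[idx,v]
    v_seuil = seq(quantile(myocarde[idx,v],6/length(idx)),
                  quantile(myocarde[idx,v],1-6/length(idx)),length=n_cut)
    mat_v[v,] = v_seuil
    for(i in 1:n_cut){
      mat_gini[v,i] = gini(y=myocarde$PRONO[idx],
                           classe=(variable<=v_seuil[i]))}}
  g0   = gini(y=myocarde$PRONO[idx], classe=rep(TRUE,length(idx)))
  best = which(mat_gini==max(mat_gini,na.rm=TRUE), arr.ind=TRUE)[1,]
  list(variable      = names(myocarde)[best[1]],
       cutoff        = mat_v[best[1],best[2]],
       relative_gain = -(max(mat_gini,na.rm=TRUE)-g0)/g0)}
best_split(which(myocarde$INSYS<19))

Calling it on a leaf returns the splitting variable, the cutoff, and the associated relative gain.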

Now guess what… that’s exactly what we have obtained with our initial code.

Note that the case of categorical explanatory variables has been discussed in a previous post, a few years ago.

Application on our small dataset

On our small dataset, we obtain the following (after changing the default values since, by default, R does not want leaves with fewer than about 10 observations… and here, the dataset is too small for that).

tree = rpart(y ~ x1+x2,data=df, 
control = rpart.control(cp = 0.25,
minsplit = 7))
prp(tree,type=2,extra=1)

u = seq(0,1,length=101)
p = function(x,y){predict(tree,newdata=data.frame(x1=x,x2=y),type="prob")[,2]}
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

We have a nice and simple cut

With fewer observations required in the leaves, we can easily get a perfect model here

tree = rpart(y ~ x1+x2,data=df, 
control = rpart.control(cp = 0.25,
minsplit = 2))
prp(tree,type=2,extra=1)

u = seq(0,1,length=101)
p = function(x,y){predict(tree,newdata=data.frame(x1=x,x2=y),type="prob")[,2]}
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Nice, isn’t it? Now, just two little additional comments before growing some more trees…

Pruning

I did not mention pruning here, because there are two possible strategies when growing trees. Either we keep splitting until we obtain only homogeneous leaves and, once we have a big, deep tree, we prune it back; or we use the strategy mentioned here: at each step, we check whether the split is worth it, and if not, we stop.
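
For the first strategy, a minimal sketch with rpart: grow a deep tree with cp=0, then prune it back using the cross-validated error from the complexity table (big_tree and cp_opt are names I introduce here),

big_tree = rpart(PRONO~., data=myocarde,
                 control=rpart.control(cp=0, minsplit=2))
big_tree$cptable   # complexity table, with cross-validated errors
cp_opt = big_tree$cptable[which.min(big_tree$cptable[,"xerror"]),"CP"]
pruned = prune(big_tree, cp=cp_opt)
prp(pruned, type=2, extra=1)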

Variable Importance

An interesting tool is the variable importance function. The heuristic idea is that if we use variable ‘INSYS’ to split, it is an important variable, and its importance is related to the gain in Gini index. If we get back to the visualization of the tree, it seems that two variables are interesting here: ‘INSYS’ and ‘REPUL’. And we can get back to the previous computations to quantify how important both are.

This will be used in our next post, on random forests. But actually, with one single tree, it is not that simple, as we will see. Let us get back to the graph of the initial node.

Indeed, ‘INSYS’ is important, since we decided to use it. But what about ‘INCAR’ or ‘REPUL’? They were very close… And actually, in R, those surrogate splits are considered in the computation, as briefly explained in the vignette. Let us look more carefully at the output of the R function

cart = rpart(PRONO~., myocarde)
split = summary(cart)$splits

If we look at the first part of that object, we get

split
      count ncat    improve    index       adj
INSYS    71   -1 0.58621312   18.850 0.0000000
REPUL    71    1 0.55440034 1094.500 0.0000000
INCAR    71   -1 0.54257020    1.690 0.0000000
PRDIA    71    1 0.27284114   17.000 0.0000000
PAPUL    71    1 0.20466714   23.250 0.0000000

So indeed, ‘INSYS’ was the most important variable, but surrogate splits can also be considered, and ‘INCAR’ and ‘REPUL’ are indeed very important. The gain was 58% with ‘INSYS’ (as we obtained above), but the competing variables reached gains of 55% (nothing to be ashamed of). So it would be unfair to claim that they have no importance at all. And it is the same for the other leaves that we split,

REPUL    27    1 0.18181818 1585.000 0.0000000
PVENT    27   -1 0.10803571   14.500 0.0000000
PRDIA    27    1 0.10803571   18.500 0.0000000
PAPUL    27    1 0.10803571   22.500 0.0000000
INCAR    27    1 0.04705882    1.195 0.0000000

On the left, we did use ‘REPUL’ (with 18% gain), but ‘PVENT’, ‘PRDIA’ and ‘PAPUL’ were not that bad, with (almost) 11% gain… We can obtain variable importance by summing all those values, and we have

cart$variable.importance
     INSYS      REPUL      INCAR      PAPUL      PRDIA      FRCAR      PVENT 
10.3649847 10.0510872  8.2121267  3.2441501  2.8276121  1.8623046  0.3373771

that we can visualize using

barplot(t(cart$variable.importance),horiz=TRUE)


To be continued with more trees…

Classification from scratch, linear discrimination 8/8

Eighth post of our series on classification from scratch. The latest one was on the SVM, and today, I want to get back to some very old stuff, here also with a linear separation of the space, using Fisher’s linear discriminant analysis.

Bayes (naive) classifier

Consider the following naive classification rule m^\star(\mathbf{x})=\text{argmax}_y\{\mathbb{P}[Y=y\vert\mathbf{X}=\mathbf{x}]\} or, using Bayes’ rule, m^\star(\mathbf{x})=\text{argmax}_y\left\{\frac{\mathbb{P}[\mathbf{X}=\mathbf{x}\vert Y=y]\cdot\mathbb{P}[Y=y]}{\mathbb{P}[\mathbf{X}=\mathbf{x}]}\right\} (where \mathbb{P}[\mathbf{X}=\mathbf{x}] is the density in the continuous case).

In the case where y takes two values, the standard \{0,1\} here, one can rewrite the latter as m^\star(\mathbf{x})=\begin{cases}1\text{ if }\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})>\displaystyle{\frac{1}{2}}\\0\text{ otherwise}\end{cases} and the set \mathcal{D}_S =\left\{\mathbf{x},\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})=\frac{1}{2}\right\} is called the decision boundary.

Assume that \mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}_0) and \mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}_1); then explicit expressions can be derived, m^\star(\mathbf{x})=\begin{cases}1\text{ if }r_1^2< r_0^2+2\displaystyle{\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}+\log\frac{\vert\mathbf{\Sigma}_0\vert}{\vert\mathbf{\Sigma}_1\vert}}\\0\text{ otherwise}\end{cases} where r_y^2 is the Mahalanobis distance, r_y^2 = [\mathbf{x}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[\mathbf{x}-\mathbf{\mu}_y].

Let \delta_y be defined as \delta_y(\mathbf{x})=-\frac{1}{2}\log\vert\mathbf{\Sigma}_y\vert-\frac{1}{2}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]+\log\mathbb{P}(Y=y). The decision boundary of this classifier is \{\mathbf{x}\text{ such that }\delta_0(\mathbf{x})=\delta_1(\mathbf{x})\}, which is quadratic in {\color{blue}{\mathbf{x}}}. This is the quadratic discriminant analysis. This can be visualized below.
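
Those scores are easy to compute from empirical moments. A small sketch on the myocarde dataset (delta, X0 and X1 are names introduced here for illustration, and the observed group frequencies are used as estimates of \mathbb{P}(Y=y)),

delta = function(x, X_y, prior){
  mu = colMeans(X_y); S = var(X_y)
  -0.5*log(det(S)) - 0.5*t(x-mu)%*%solve(S)%*%(x-mu) + log(prior)}
X0 = as.matrix(myocarde[myocarde$PRONO=="0",1:7])
X1 = as.matrix(myocarde[myocarde$PRONO=="1",1:7])
p1 = mean(myocarde$PRONO=="1")
x  = as.numeric(myocarde[1,1:7])
delta(x,X1,p1) > delta(x,X0,1-p1)   # classify observation 1 as "1" when TRUE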

The decision boundary is here

But that can’t be the linear discriminant analysis, right? I mean, the frontier is not linear… Actually, in Fisher’s seminal paper, it was assumed that \mathbf{\Sigma}_0=\mathbf{\Sigma}_1.

In that case, actually, \delta_y(\mathbf{x})={\color{blue}{\mathbf{x}}}^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y-\frac{1}{2}\mathbf{\mu}_y^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y+\log\mathbb{P}(Y=y) and the decision frontier is now linear in {\color{blue}{\mathbf{x}}}. This is the linear discriminant analysis. This can be visualized below.

Here the two samples have the same variance matrix and the frontier is

Link with the logistic regression

Assume as previously that \mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}) and \mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}); then \log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})} is equal to \mathbf{x}^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]-\frac{1}{2}[\mathbf{\mu}_1+\mathbf{\mu}_0]^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]+\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}, which is linear in \mathbf{x}: \log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}=\mathbf{x}^{\text{T}}\mathbf{\beta}+\beta_0. Hence, when the two groups have Gaussian distributions with the same variance matrix, LDA and the logistic regression lead to the same classification rule.

Observe furthermore that the slope is proportional to \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0], as stated in Fisher’s article. But to obtain such a relationship, he observed that the ratio of between and within variances (in the two groups) was \frac{\text{variance between}}{\text{variance within}}=\frac{[\mathbf{\omega}^{\text{T}}\mathbf{\mu}_1-\mathbf{\omega}^{\text{T}}\mathbf{\mu}_0]^2}{\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_1\mathbf{\omega}+\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_0\mathbf{\omega}}, which is maximal when \mathbf{\omega} is proportional to \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0], when \mathbf{\Sigma}_0=\mathbf{\Sigma}_1.
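
That ratio is easy to compute numerically; a small sketch (fisher_ratio is a name introduced here, with \mathbf{\Sigma}_0, \mathbf{\Sigma}_1, \mathbf{\mu}_0, \mathbf{\mu}_1 replaced by their empirical counterparts on the myocarde dataset),

fisher_ratio = function(w){
  X0 = as.matrix(myocarde[myocarde$PRONO=="0",1:7])
  X1 = as.matrix(myocarde[myocarde$PRONO=="1",1:7])
  between = (sum(w*colMeans(X1)) - sum(w*colMeans(X0)))^2
  within  = t(w)%*%var(X1)%*%w + t(w)%*%var(X0)%*%w
  as.numeric(between/within)}

Once the vector \mathbf{\omega} is computed below, fisher_ratio(omega) should dominate fisher_ratio(rnorm(7)) for (essentially) any randomly chosen direction.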

Homebrew linear discriminant analysis

To compute the vector \mathbf{\omega}=\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0], use

m0 = apply(myocarde[myocarde$PRONO=="0",1:7],2,mean)
m1 = apply(myocarde[myocarde$PRONO=="1",1:7],2,mean)
Sigma = var(myocarde[,1:7])
omega = solve(Sigma)%*%(m1-m0)
omega
                 [,1]
FRCAR -0.012909708542
INCAR  1.088582058796
INSYS -0.019390084344
PRDIA -0.025817110020
PAPUL  0.020441287970
PVENT -0.038298291091
REPUL -0.001371677757

For the constant b – the decision boundary being \mathbf{\omega}^T\mathbf{x}=b here – if the two classes are equiprobable, use

b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2
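
With \mathbf{\omega} and b in hand, a minimal sketch of the resulting classification rule on the myocarde dataset (just to see how the homemade rule behaves; scores and pred are names introduced here),

scores = as.matrix(myocarde[,1:7])%*%omega      # omega'x for each observation
pred   = (scores > as.numeric(b))*1             # classify as 1 when omega'x > b
table(obs=myocarde$PRONO, pred=pred)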

Application (on the small dataset)

In order to visualize what’s going on, consider the small dataset, with only two covariates,

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
m0 = apply(df[df$y=="0",1:2],2,mean)
m1 = apply(df[df$y=="1",1:2],2,mean)
Sigma = var(df[,1:2])
omega = solve(Sigma)%*%(m1-m0)
omega
         [,1]
x1 -2.640613174
x2  4.858705676


Using the regular R function, we get

library(MASS)
fit_lda = lda(y ~x1+x2 , data=df)
fit_lda
 
Coefficients of linear discriminants:
            LD1
x1 -2.588389554
x2  4.762614663

which is proportional (hence equivalent, up to a scaling factor) to the coefficients we got with our own code. For the constant, use

b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2

If we plot it, we get the red straight line

plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")])
abline(a=b/omega[2],b=-omega[1]/omega[2],col="red")


As we can see (with the blue points), our red line passes through the midpoint of the segment joining the two barycenters

points(m0["x1"],m0["x2"],pch=4)
points(m1["x1"],m1["x2"],pch=4)
segments(m0["x1"],m0["x2"],m1["x1"],m1["x2"],col="blue")
points(.5*m0["x1"]+.5*m1["x1"],.5*m0["x2"]+.5*m1["x2"],col="blue",pch=19)

Of course, we can also use the R function

predlda = function(x,y) predict(fit_lda, data.frame(x1=x,x2=y))$class==1
vu = seq(-.1,1.1,length=251)   # grid for the contour plot (not defined earlier in this post)
vv = outer(vu,vu,predlda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5)


One can also consider the quadratic discriminant analysis, since it might be difficult to argue that \mathbf{\Sigma}_0=\mathbf{\Sigma}_1

fit_qda = qda(y ~x1+x2 , data=df)

The separation curve is here

plot(df$x1,df$x2,pch=19,
col=c("blue","red")[1+(df$y=="1")])
predqda=function(x,y) predict(fit_qda, data.frame(x1=x,x2=y))$class==1
vv=outer(vu,vu,predqda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5)

Classification from scratch, SVM 7/8

Seventh post of our series on classification from scratch. The latest one was on the neural nets, and today, we will discuss SVM, support vector machines.

A formal introduction

Here y takes values in \{-1,+1\}. Our model will be m(\mathbf{x})=\text{sign}[\mathbf{\omega}^T\mathbf{x}+b]. Thus, the space is divided by a (linear) border \Delta:\lbrace\mathbf{x}\in\mathbb{R}^p:\mathbf{\omega}^T\mathbf{x}+b=0\rbrace

The distance from point \mathbf{x}_i to \Delta is d(\mathbf{x}_i,\Delta)=\frac{\vert\mathbf{\omega}^T\mathbf{x}_i+b\vert}{\|\mathbf{\omega}\|}. If the space is linearly separable, the problem is ill posed (there is an infinite number of solutions). So consider
\max_{\mathbf{\omega},b}\left\lbrace\min_{i=1,\cdots,n}\left\lbrace\text{distance}(\mathbf{x}_i,\Delta)\right\rbrace\right\rbrace

The strategy is to maximize the margin. One can prove that we want to solve \max_{\mathbf{\omega},m}\left\lbrace\frac{m}{\|\mathbf{\omega}\|}\right\rbrace
subject to y_i\cdot(\mathbf{\omega}^T\mathbf{x}_i+b)\geq m, \forall i=1,\cdots,n. Again, the problem is ill posed (non identifiable), and we can consider m=1: \max_{\mathbf{\omega}}\left\lbrace\frac{1}{\|\mathbf{\omega}\|}\right\rbrace
subject to y_i\cdot(\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1, \forall i=1,\cdots,n. The optimization objective can be written \min_{\mathbf{\omega}}\left\lbrace\|\mathbf{\omega}\|^2\right\rbrace

The primal problem

In the separable case, consider the following primal problem, \min_{\mathbf{w}\in\mathbb{R}^d,b\in\mathbb{R}}\left\lbrace\frac{1}{2}\|\mathbf{\omega}\|^2\right\rbrace subject to y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1, \forall i=1,\cdots,n.

In the non-separable case, introduce slack (error) variables \mathbf{\xi}: if y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1, there is no error and \xi_i=0; otherwise \xi_i>0 measures how far the observation lies on the wrong side of the margin.

Let C denote the cost of misclassification. The optimization problem becomes \min_{\mathbf{w}\in\mathbb{R}^d,b\in\mathbb{R},{\color{red}{\mathbf{\xi}}}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\|\mathbf{\omega}\|^2 + C\sum_{i=1}^n\xi_i\right\rbrace subject to y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1-{\color{red}{\xi_i}}, with {\color{red}{\xi_i}}\geq 0, \forall i=1,\cdots,n.

Let us try to code this optimization problem. The dataset is here

n = length(myocarde[,"PRONO"])
myocarde0 = myocarde
myocarde0$PRONO = myocarde$PRONO*2-1
C = .5

and we have to set a value for the cost C. In the (linearly) constrained optimization function in R, we need to provide the objective function f(\mathbf{\theta}) and the gradient \nabla f(\mathbf{\theta}).

f = function(param){
  w  = param[1:7]
  b  = param[8]
  xi = param[8+1:nrow(myocarde)]
  .5*sum(w^2) + C*sum(xi)}
grad_f = function(param){
  w  = param[1:7]
  b  = param[8]
  xi = param[8+1:nrow(myocarde)]
  c(w,0,rep(C,length(xi)))}   # gradient of .5*sum(w^2) with respect to w is w

and (linear) constraints are written as \mathbf{U}\mathbf{\theta}-\mathbf{c}\geq \mathbf{0}

Ui = rbind(cbind(myocarde0[,"PRONO"]*as.matrix(myocarde[,1:7]),diag(n),myocarde0[,"PRONO"]),
cbind(matrix(0,n,7),diag(n,n),matrix(0,n,1)))
Ci = c(rep(1,n),rep(0,n))

Then we use

constrOptim(theta=p_init, f, grad_f, ui = Ui, ci = Ci)

Observe that something is missing here: we need a starting point for the algorithm, \mathbf{\theta}_0. Unfortunately, I could not think of a simple technique to get a valid starting point (that satisfies those linear constraints).

Let us try something else, because those functions are quite simple: either linear or quadratic. Actually, one can recognize in the separable case, but also in the non-separable case, a classic quadratic program \min_{\mathbf{z}\in\mathbb{R}^d}\left\lbrace\frac{1}{2}\mathbf{z}^T\mathbf{D}\mathbf{z}-\mathbf{d}^T\mathbf{z}\right\rbrace subject to \mathbf{A}\mathbf{z}\geq\mathbf{b}.

library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
D = diag(n+7+1)
diag(D)[8+0:n] = 0 
d = matrix(c(rep(0,7),0,rep(C,n)), nrow=n+7+1)
A = Ui
b = Ci
sol = solve.QP(D+eps*diag(n+7+1), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution
(omega = qpsol[1:7])
[1] -0.106642005446 -0.002026198103 -0.022513312261 -0.018958578746 -0.023105767847 -0.018958578746 -1.080638988521
(b     = qpsol[n+7+1])
[1] 997.6289927

Given an observation \mathbf{x}, the prediction is
y=\text{sign}[\mathbf{\omega}^T\mathbf{x}+b]

y_pred = 2*((as.matrix(myocarde0[,1:7])%*%omega+b)>0)-1

Observe that here, we do have a classifier, depending on whether the point lies on one side or the other of the separating line (or hyperplane). But we do not have a probability, because there is no probabilistic model here. So far.
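
We can still compute, for each observation, its signed distance to the separating hyperplane and use it as a raw (non-probabilistic) score; a sketch, reusing the omega and b obtained from the quadratic program above,

score = (as.matrix(myocarde0[,1:7])%*%omega + b)/sqrt(sum(omega^2))
head(data.frame(obs=myocarde0$PRONO, score=round(as.numeric(score),2)))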

The dual problem

The Lagrangian of the separable problem could be written, introducing Lagrange multipliers \mathbf{\alpha}\in\mathbb{R}^n, \mathbf{\alpha}\geq \mathbf{0}, as \mathcal{L}(\mathbf{\omega},b,\mathbf{\alpha})=\frac{1}{2}\|\mathbf{\omega}\|^2-\sum_{i=1}^n \alpha_i\big(y_i(\mathbf{\omega}^T\mathbf{x}_i+b)-1\big). Somehow, \alpha_i represents the influence of the observation (y_i,\mathbf{x}_i).

Consider the Dual Problem, with \mathbf{G}=[G_{ij}] and G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i
\min_{\mathbf{\alpha}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\mathbf{\alpha}^T\mathbf{G}\mathbf{\alpha}-\mathbf{1}^T\mathbf{\alpha}\right\rbrace
subject to \mathbf{y}^T\mathbf{\alpha}=\mathbf{0} and \mathbf{\alpha}\geq\mathbf{0}.

The Lagrangian of the non-separable problem could be written introducing Lagrange multipliers \mathbf{\alpha},{\color{red}{\mathbf{\beta}}}\in\mathbb{R}^n, \mathbf{\alpha},{\color{red}{\mathbf{\beta}}}\geq \mathbf{0}, and defining the Lagrangian \mathcal{L}(\mathbf{\omega},b,{\color{red}{\mathbf{\xi}}},\mathbf{\alpha},{\color{red}{\mathbf{\beta}}}) as \frac{1}{2}\|\mathbf{\omega}\|^2+{\color{blue}{C}}\sum_{i=1}^n{\color{red}{\xi_i}}-\sum_{i=1}^n \alpha_i\big(y_i(\mathbf{\omega}^T\mathbf{x}_i+b)-1+{\color{red}{\xi_i}}\big)-\sum_{i=1}^n{\color{red}{\beta_i}}{\color{red}{\xi_i}}
Somehow, \alpha_i represents the influence of the observation (y_i,\mathbf{x}_i).

The dual problem becomes, with \mathbf{G}=[G_{ij}] and G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i, \min_{\mathbf{\alpha}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\mathbf{\alpha}^T\mathbf{G}\mathbf{\alpha}-\mathbf{1}^T\mathbf{\alpha}\right\rbrace
subject to \mathbf{y}^T\mathbf{\alpha}=\mathbf{0}, \mathbf{\alpha}\geq\mathbf{0} and \mathbf{\alpha}\leq {\color{blue}{C}}.
As previously, one can also use quadratic programming

library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
Q = sapply(1:n, function(i) y[i]*t(X)[,i])
D = t(Q)%*%Q
d = matrix(1, nrow=n)
A = rbind(y,diag(n),-diag(n))
C = .5
b = c(0,rep(0,n),rep(-C,n))
sol = solve.QP(D+eps*diag(n), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution

The two problems are connected in the sense that for all \mathbf{x}, \mathbf{\omega}^T\mathbf{x}+b = \sum_{i=1}^n \alpha_i y_i (\mathbf{x}^T\mathbf{x}_i)+b

To recover the solution of the primal problem, \mathbf{\omega}=\sum_{i=1}^n \alpha_iy_i \mathbf{x}_i, thus

omega = apply(qpsol*y*X,2,sum)
omega
                           1                        FRCAR                        INCAR                        INSYS 
 0.0000000000000002439074265  0.0550138658687635215271960 -0.0920163239049630876653652  0.3609571899422952534486342 
                       PRDIA                        PAPUL                        PVENT                        REPUL 
-0.1094017965288692356695677 -0.0485213403643276475207813 -0.0660058643191372279579454  0.0010093656567606212794835

while b=y_i-\mathbf{\omega}^T\mathbf{x}_i for the support vectors lying exactly on the margin (but actually, one can also add a constant column to the matrix of explanatory variables, as we did above).
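
More precisely, b can be recovered by averaging y_i-\mathbf{\omega}^T\mathbf{x}_i over the support vectors, the points with 0<\alpha_i<C; a sketch (the 1e-5 tolerance is an arbitrary choice of mine),

alpha = qpsol                                   # the dual variables
sv    = which(alpha > 1e-5 & alpha < C - 1e-5)  # support vectors strictly inside (0,C)
b_sv  = mean(y[sv] - X[sv,]%*%omega)            # average of y_i - omega'x_i over them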

More generally, consider the following function (to make sure that \mathbf{D} is a positive-definite matrix, we use the nearPD function from the Matrix package).

library(Matrix)   # for nearPD
svm.fit = function(X, y, C=NULL) {
 n.samples = nrow(X)
 n.features = ncol(X)
 K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
 for (i in 1:n.samples){
  for (j in 1:n.samples){
   K[i,j] = X[i,] %*% X[j,] }}
 Dmat = outer(y,y) * K
 Dmat = as.matrix(nearPD(Dmat)$mat) 
 dvec = rep(1, n.samples)
 Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
 bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
 res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
 a = res$solution 
 bomega = apply(a*y*X,2,sum)
 return(bomega)
}

On our dataset, we obtain

M = as.matrix(myocarde[,1:7])
center = function(z) (z-mean(z))/sd(z)
for(j in 1:7) M[,j] = center(M[,j])
bomega = svm.fit(cbind(1,M),myocarde$PRONO*2-1,C=.5)
y_pred = 2*((cbind(1,M)%*%bomega)>0)-1
table(obs=myocarde0$PRONO,pred=y_pred)
    pred
obs  -1  1
  -1 27  2
  1   9 33

i.e. 11 misclassifications out of 71 points (which is also what we got with the logistic regression).

Kernel Based Approach

In some cases, it might be difficult to “separate” the two sets of points with a linear separator, like below,

It might be difficult here because we want to find a straight line in the two-dimensional space (x_1,x_2). But maybe we can distort the space, possibly by adding another dimension

That’s, heuristically, the idea: in the case above, in dimension 3, the set of points is now linearly separable. And the trick to do so is to use a kernel. The difficult task is to find a good one (if any).

A positive kernel on \mathcal{X} is a symmetric function k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R} such that for any n, \forall\alpha_1,\cdots,\alpha_n and \forall\mathbf{x}_1,\cdots,\mathbf{x}_n, \sum_{i=1}^n\sum_{j=1}^n\alpha_i\alpha_j k(\mathbf{x}_i,\mathbf{x}_j)\geq 0.
For example, the linear kernel is k(\mathbf{x}_i,\mathbf{x}_j)=\mathbf{x}_i^T\mathbf{x}_j. That’s what we’ve been using here, so far. One can also define the product kernel k(\mathbf{x}_i,\mathbf{x}_j)=\kappa(\mathbf{x}_i)\cdot\kappa(\mathbf{x}_j) where \kappa is some function \mathcal{X}\rightarrow\mathbb{R}.

Finally, the Gaussian kernel is k(\mathbf{x}_i,\mathbf{x}_j)=\exp[-\|\mathbf{x}_i-\mathbf{x}_j\|^2].

Since it is a function of \|\mathbf{x}_i-\mathbf{x}_j\|, it is also called a radial kernel.

linear.kernel = function(x1, x2) {
 return (x1%*%x2)
}
svm.fit = function(X, y, FUN=linear.kernel, C=NULL) {
 n.samples = nrow(X)
 n.features = ncol(X)
 K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
 for (i in 1:n.samples){
  for (j in 1:n.samples){
   K[i,j] = FUN(X[i,], X[j,])
  }
 }
 Dmat = outer(y,y) * K
 Dmat = as.matrix(nearPD(Dmat)$mat) 
 dvec = rep(1, n.samples)
 Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
 bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
 res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
 a = res$solution 
 bomega = apply(a*y*X,2,sum)
 return(bomega)
}
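
For instance, a Gaussian (radial) kernel can be plugged in through the FUN argument; a sketch (the bandwidth gamma below is an arbitrary choice of mine, and note that with a nonlinear kernel, predictions should be computed through the expansion \sum_i\alpha_iy_ik(\mathbf{x}_i,\mathbf{x})+b rather than by recovering a vector \mathbf{\omega} in the original space),

gaussian.kernel = function(x1, x2, gamma=1) {
 return(exp(-gamma*sum((x1-x2)^2)))
}
# it can then be passed to svm.fit, e.g.
# svm.fit(cbind(1,M), myocarde$PRONO*2-1, FUN=gaussian.kernel, C=.5)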

Link to the regression

To relate this duality optimization problem to OLS, recall that y=\mathbf{x}^T\mathbf{\omega}+\varepsilon, so that \widehat{y}=\mathbf{x}^T\widehat{\mathbf{\omega}}, where \widehat{\mathbf{\omega}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}.
But one can also write \widehat{y}=\mathbf{x}^T\widehat{\mathbf{\omega}}=\sum_{i=1}^n \widehat{\alpha}_i\cdot \mathbf{x}^T\mathbf{x}_i,
where \widehat{\mathbf{\alpha}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\widehat{\mathbf{\omega}}, or conversely, \widehat{\mathbf{\omega}}=\mathbf{X}^T\widehat{\mathbf{\alpha}}.
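
This can be checked numerically; a minimal sketch on the myocarde dataset (X_ols, y_ols, omega_hat and alpha_hat are names introduced here, just to verify that \mathbf{X}^T\widehat{\mathbf{\alpha}} gives back \widehat{\mathbf{\omega}}),

X_ols = as.matrix(cbind(1,myocarde[,1:7]))
y_ols = myocarde$PRONO
omega_hat = solve(t(X_ols)%*%X_ols)%*%t(X_ols)%*%y_ols
alpha_hat = X_ols%*%solve(t(X_ols)%*%X_ols)%*%omega_hat
max(abs(t(X_ols)%*%alpha_hat - omega_hat))   # numerically zero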

Application (on our small dataset)

One can actually use a dedicated R package to run an SVM. To get the linear kernel, use

library(kernlab)
df0 = df
df0$y = 2*(df$y=="1")-1
SVM1 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , type="C-svc")

Since the dataset is not linearly separable, there will be some mistakes here

table(df0$y,predict(SVM1))
 
     -1 1
  -1  2 2
  1   1 5

The problem with that function is that it cannot be used to get a prediction for points other than those in the sample (and I could extract neither \omega nor b from the 24 slots of that object). But it becomes possible by adding a small option to the function call

SVM2 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")

With that option, the distance is converted into some sort of probability. Someday, I will try to replicate the probabilistic version of SVM, I promise, but today, the goal is just to understand what is done when running the SVM algorithm. To visualize the prediction, use

pred_SVM2 = function(x,y){
return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="",
     ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")


Here the cost is C=.5, but of course, we can change it

SVM2 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")
pred_SVM2 = function(x,y){
return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="",
     ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")


As expected, we still have a linear separator, but a slightly different one. Now, let us consider the “Radial Basis Gaussian kernel”

SVM3 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "rbfdot" , prob.model=TRUE, type="C-svc")

Observe that here, we’ve been able to separate the white and the black points

table(df0$y,predict(SVM3))
 
     -1 1
  -1  4 0
  1   0 6
pred_SVM3 = function(x,y){
return(predict(SVM3,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}  # analogous to pred_SVM2, for the radial-kernel model
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="",
     ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM3(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")


Now, to be completely honest, while I understand the theory of the algorithm used to compute \omega and b with a linear kernel (using quadratic programming), I do not feel comfortable with this R function. Especially if you run it several times… you can get (with exactly the same set of parameters)

or

(to be continued…)