Category Archives: Graphics

Sports in France

I wanted to take advantage of the start of the new academic year to put a few posts online about data science (as people say), in particular based on R projects from the Data Science for Actuaries program. Last year, I had already published a post on sports (“le sport, une activité de riches“). This time, following what Benoit proposed, we will look at who the members of the various sports federations are, and where they live. As always in R, we start by loading the libraries we will use…

library(rgdal)
library(sp)
library(reshape2)
library(data.table)
library(ggplot2)
library(gridExtra)
library(ggmap)
library(RColorBrewer)
library(classInt)
library(backports)
library(OpenStreetMap)

A quick aside: in practice, we rarely know in advance which packages will be needed; here, the library calls were all gathered at the top after the fact. I think it would be better to load each package only at the point where it is actually used (a possible pattern is sketched just below). Then, we need the data.
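
Here is a minimal sketch of that load-on-demand pattern (not from the original post; the package name is just an example):

# not from the original post: install/attach a package only where it is needed
if(!requireNamespace("OpenStreetMap", quietly = TRUE)) install.packages("OpenStreetMap")
library(OpenStreetMap)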

Url_Licences = "https://www.data.gouv.fr/s/resources/recensement-des-licences-et-clubs-aupres-des-federations-sportives-agreees-par-le-ministere-charge-d/20180131-163516/Licences_2015.csv"
Licences_2015 = read.csv(file=Url_Licences, header=TRUE, sep=",", stringsAsFactors = FALSE)
Url_Federation = "http://freakonometrics.free.fr/Projet_R/Code_federation.csv"
Code_Fede = read.csv(Url_Federation, sep=";",header=FALSE, skip=3)
colnames(Code_Fede) = c("Code_Federation","Libelle_Federation")

Here we rename the variables (it will be simpler later on), and we keep only the rows of interest

Code_Fede = Code_Fede[c(1:31,33:92),c(1:2)]

We then need the coordinates of the cities, to draw a map

Commune = read.csv(file="https://www.data.gouv.fr/fr/datasets/r/554590ab-ae62-40ac-8353-ee75162c05ee", sep=";", header=TRUE)

Actually, we only need the latitude and the longitude

Geocod = colsplit(Commune$coordonnees_gps, ",", c("Latitude", "Longitude"))
Commune = data.frame(Commune,Geocod)

A bit of cleaning will not hurt

Commune$Ligne_5 = NULL
Commune$coordonnees_gps = NULL
doublons = which(duplicated(Commune$Code_commune_INSEE)) # flag the duplicated rows
Commune_Indiv = Commune[-doublons,]

We now add a label for each sport

Licences_2015 = merge(x=Licences_2015, y=Code_Fede, by.x="fed_2014", by.y="Code_Federation", all.y=TRUE)

And we also drop the rows where the commune code is missing (since those rows cannot be used)

Licences_2015 = Licences_2015[!is.na(Licences_2015$newcog2),]

We need to be a bit careful with Paris and Marseille, since for those cities the data are reported by arrondissement,

for (i in 1:nrow(Licences_2015)){
  if (Licences_2015[i,c("newcog2")]=="75056") {
    (Licences_2015[i,c("newcog2")] = "75101")}
  if (Licences_2015[i,c("newcog2")]=="13055") {
    (Licences_2015[i,c("newcog2")] = "13101")}}
Licences_2015 = merge(x=Licences_2015, y=Commune_Indiv, by.x="newcog2", by.y="Code_commune_INSEE", all.x=TRUE)

We are almost there. For each commune, we now create the licence-rate variable (number of licences divided by the population)

Licences_2015$Taux_Licencies = ifelse(Licences_2015$pop_2014 != 0,Licences_2015$l_2015/Licences_2015$pop_2014,0)

Now, we can play! Or almost… we still need a few aggregations, depending on what we want to display.

df_Nb_Lic_Agg_Fed = aggregate(data.frame(
Nb_Licence = Licences_2015$l_2015,
Nb_hommes = Licences_2015$l_h_2015,
Nb_femmes = Licences_2015$l_f_2015,
NbLicences_0_4_Ans=Licences_2015$l_0_4_2015,
NbLicences_5_9_Ans=Licences_2015$l_5_9_2015,
NbLicences_10_14_Ans=Licences_2015$l_10_14_2015,
NbLicences_15_19_Ans=Licences_2015$l_15_19_2015,
NbLicences_20_29_Ans=Licences_2015$l_20_29_2015,
NbLicences_30_44_Ans=Licences_2015$l_30_44_2015,
NbLicences_45_59_Ans=Licences_2015$l_45_59_2015,
NbLicences_60_74_Ans=Licences_2015$l_60_74_2015,
NbLicences_75_Ans=Licences_2015$l_75_2015,
Nb_0_4_Ans=Licences_2015$pop_0_4_2014,
Nb_5_9_Ans=Licences_2015$pop_5_9_2014,
Nb_10_14_Ans=Licences_2015$pop_10_14_2014,
Nb_15_19_Ans=Licences_2015$pop_15_19_2014,
Nb_20_29_Ans=Licences_2015$pop_20_29_2014,
Nb_30_44_Ans=Licences_2015$pop_30_44_2014,
Nb_45_59_Ans=Licences_2015$pop_45_59_2014,
Nb_60_74_Ans=Licences_2015$pop_60_74_2014,
Nb_75_Ans=Licences_2015$pop_75_2014,
Pop_femmes=Licences_2015$popf_2014,
Pop_hommes=Licences_2015$poph_2014,
Pop_Totale=Licences_2015$pop_2014), 
by = list(Federation = Licences_2015$Libelle_Federation), sum, na.rm = TRUE)

We can then compute the “feminization rate” of each sport (the share of women among licence holders)

df_Nb_Lic_Agg_Fed$tx_femmes = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence!=0,df_Nb_Lic_Agg_Fed$Nb_femmes/df_Nb_Lic_Agg_Fed$Nb_Licence,0)

or the breakdown, by age class, of the number of licence holders per federation

df_Nb_Lic_Agg_Fed$Nb_Licence_Norme = 
  df_Nb_Lic_Agg_Fed$NbLicences_0_4_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_5_9_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_10_14_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_15_19_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_20_29_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_30_44_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_45_59_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_60_74_Ans+
  df_Nb_Lic_Agg_Fed$NbLicences_75_Ans

For the 0-14 age class, we then set

df_Nb_Lic_Agg_Fed$Tx_Licences_0_14_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,      (df_Nb_Lic_Agg_Fed$NbLicences_0_4_Ans+df_Nb_Lic_Agg_Fed$NbLicences_5_9_Ans+df_Nb_Lic_Agg_Fed$NbLicences_10_14_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

and for the 15-29 age class

df_Nb_Lic_Agg_Fed$Tx_Licences_15_29_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,
(df_Nb_Lic_Agg_Fed$NbLicences_15_19_Ans+
df_Nb_Lic_Agg_Fed$NbLicences_20_29_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

for the 30-44 age class

df_Nb_Lic_Agg_Fed$Tx_Licences_30_44_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,(df_Nb_Lic_Agg_Fed$NbLicences_30_44_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

for the 45-59 age class

df_Nb_Lic_Agg_Fed$Tx_Licences_45_59_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0,                                        (df_Nb_Lic_Agg_Fed$NbLicences_45_59_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

and for the 60-and-over age class (you get the idea)

df_Nb_Lic_Agg_Fed$Tx_Licences_60_Ans = ifelse(df_Nb_Lic_Agg_Fed$Nb_Licence_Norme != 0, (df_Nb_Lic_Agg_Fed$NbLicences_60_74_Ans+ df_Nb_Lic_Agg_Fed$NbLicences_75_Ans)/df_Nb_Lic_Agg_Fed$Nb_Licence_Norme,0)

We then move on to identifying the top 25 federations in terms of number of licence holders

dt_Nb_Lic_Agg_Fed = data.table(df_Nb_Lic_Agg_Fed)
setorder(dt_Nb_Lic_Agg_Fed,-Nb_Licence,na.last=TRUE)
dt_Nb_Lic_Agg_Main_Fed = dt_Nb_Lic_Agg_Fed[1:25,]
graph1 = ggplot(data=dt_Nb_Lic_Agg_Main_Fed, aes(x=reorder(Federation,Nb_Licence), y=Nb_Licence)) + 
  geom_bar(stat="Identity",fill = "blue")+
  geom_text(aes(label=Nb_Licence),check_overlap = TRUE, vjust=0.5, hjust=0, color="blue")+
  ggtitle("TOP 25 des fédérations sportives en termes de licenciés")+
  ylim(0, 2500000)+
  xlab("Fédérations") + ylab("Nombre de licences")
graph1+coord_flip()

We then order the federations by their share of women,

setorder(dt_Nb_Lic_Agg_Main_Fed,-tx_femmes,na.last=TRUE)
graph2 = ggplot(data=dt_Nb_Lic_Agg_Main_Fed) +
  aes(x =reorder(Federation,tx_femmes), y = tx_femmes) + geom_bar(stat="Identity",fill = "pink")+
geom_text(aes(label=paste(round(100*tx_femmes, 0), "%", sep="")),check_overlap = TRUE, vjust=0.5, hjust=0.5, color="black")+
xlab("Fédération") + ylab("part des licenciées femmes")+
ggtitle("la pratique sportive féminine par fédération")  
graph2+coord_flip()

And finally, we look at the breakdown by age class

df_Nb_Lic_Agg_Main_Fed = data.frame(dt_Nb_Lic_Agg_Main_Fed)
Licence_Age = melt(df_Nb_Lic_Agg_Main_Fed, id=c("Federation"), measured=c("Tx_Licences_0_14_Ans","Tx_Licences_15_29_Ans", "Tx_Licences_30_44_Ans", "Tx_Licences_45_59_Ans","Tx_Licences_60_Ans"))
Licence_Age_Clean = Licence_Age[(Licence_Age$variable=="Tx_Licences_0_14_Ans" |       Licence_Age$variable=="Tx_Licences_15_29_Ans" | Licence_Age$variable=="Tx_Licences_30_44_Ans" |
Licence_Age$variable=="Tx_Licences_45_59_Ans" |
Licence_Age$variable=="Tx_Licences_60_Ans"),]  
dt_Licence_Age_Clean = data.table(Licence_Age_Clean)
setorder(dt_Licence_Age_Clean,-variable,na.last=TRUE)
setorder(Licence_Age_Clean,variable,na.last=TRUE)
graph3 = ggplot(data=Licence_Age_Clean, aes(x=Federation, y=value, fill=variable)) +
geom_bar(stat="identity")+
xlab("Fédération") + ylab("répartition par classe d'âge")+
ggtitle("Répartition des licenciés par classe d'âge")  
graph3+coord_flip()+scale_fill_brewer(palette="Paired")

Looking at the graph above, sports could be classified into three categories:

  • “young people’s sports”: more than half of their licence holders are under 15; this includes gymnastics, judo, handball, swimming, and sailing.
  • “older people’s sports”: unsurprisingly, we find hiking, cycle touring, golf, pétanque, shooting, and underwater sports here. For these federations, three quarters of the licence holders are over 45.
  • “sports for everyone”: the sports not mentioned yet, for which the age classes look more balanced.

Finally, we can look at a few sports on a map
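
One caveat: the data frame Licence_Max_2015 used below is not constructed anywhere in the post. Judging from its name, and from the maps (sports with strong regional roots), it presumably keeps, for each commune, the federation with the largest number of licences. A minimal sketch, under that assumption:

# assumption (not shown in the original post): for each commune, keep the row
# of the federation with the most licences in 2015
dt_lic = data.table(Licences_2015)
Licence_Max_2015 = as.data.frame(dt_lic[, .SD[which.max(l_2015)], by = newcog2])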

map.France = get_map(location = c(lon=1.75, lat=46.70), zoom = 6)
Rugby_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="133",]
Voile_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="128",]
Ski_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="121",]
PetanQ_2015 = Licence_Max_2015[Licence_Max_2015$fed_2014=="242",]
Rugby = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = Rugby_2015, colour="red", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("Rugby")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="red"))
Voile = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = Voile_2015, colour="blue", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("Voile")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="blue"))
Ski = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = Ski_2015, colour="grey", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("Ski")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="grey"))
Petanque = ggmap(map.France, extent = "normal") +
  geom_point(aes(x = Longitude, y = Latitude), data = PetanQ_2015, colour="chocolate3", alpha = 0.5, size=2.0, na.rm=TRUE)+
  theme_nothing(legend = TRUE) +
  theme(legend.position = "bottom")+
  ggtitle("pétanque et jeu provençal")+
  theme(plot.title = element_text(size = 10, face = "bold", hjust=0.5, color="chocolate3"))
grid.arrange(Rugby,Voile,Ski,Petanque, ncol=2, nrow = 2,top="visualisation géographique de sports \n à fort ancrage régional")

Fun, isn’t it?

Some sort of Otto Neurath (isotype picture) map

Yesterday evening, I was walking in Budapest, and I saw some nice map that was some sort of Otto Neurath style. It was hand-made but I thought it should be possible to do it in R, automatically.

A few years ago, Baptiste Coulmont published a nice blog post on the package osmar, that can be used to import OpenStreetMap objects (polygons, lines, etc) in R. We can start from there. More precisely, consider the city of Douai, in France,

The code to read information from OpenStreetMap is the following

library(osmar)
src = osmsource_api()
bb = center_bbox(3.07758808135,50.37404355, 1000, 1000)
ua = get_osm(bb, source = src)

We can extract a lot of things, like buildings, parks, churches, roads, etc. There are two kinds of objects so we will use two functions

listek = function(vc,type="polygons"){
nat_ids = find(ua, way(tags(k %in% vc)))
nat_ids = find_down(ua, way(nat_ids))
nat = subset(ua, ids = nat_ids)
nat_poly = as_sp(nat, type)}
 
listev = function(vc,type="polygons"){
  nat_ids = find(ua, way(tags(v %in% vc)))
  nat_ids = find_down(ua, way(nat_ids))
  nat = subset(ua, ids = nat_ids)
  nat_poly = as_sp(nat, type)}

For instance to get rivers, use

W=listek(c("waterway"))

and to get buildings

M=listek(c("building"))

We can also get churches

C=listev(c("church","chapel"))

but also train stations, airports, universities, hospitals, etc. It is also possible to get streets, or roads

H1=listek(c("highway"),"lines")
H2=listev(c("residential","pedestrian","secondary","tertiary"),"lines")

but it will be more difficult to use afterwards, so let’s forget about those.
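
One more detail: the plotting check below (and the whichidtf function further down) uses a few layers that are not constructed above, namely parks P and universities U (and two more layers, B and T, whose exact definition is not in the post). A minimal sketch, assuming standard OpenStreetMap tag values:

# assumptions: tag values for parks and universities; B and T are other layers
# built the same way, but their definition is not given in the post
P = listev(c("park","garden"))
U = listev(c("university","college"))
B = NULL
T = NULL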

We can check that we have everything we need

plot(M)
plot(W,add=TRUE,col="blue")
plot(P,add=TRUE,col="green")
if(!is.null(B)) plot(B,add=TRUE,col="red")
if(!is.null(C)) plot(C,add=TRUE,col="purple")
if(!is.null(T)) plot(T,add=TRUE,col="red")

Now, let us consider a rectangular grid. If there is a river in a cell, I want a river. If there is a church, I want a church, etc. Since there will be one (and only one) picture per cell, there will be priorities. But first we have to check intersections with polygons, between our grid, and the OpenStreetMap polygons.

library(sp)
library(raster)
library(rgdal)
library(rgeos)
library(maptools)
identification = function(xy,h,PLG){
  b=data.frame(x=rep(c(xy[1]-h,xy[1]+h),each=2),
               y=c(c(xy[2]-h,xy[2]+h,xy[2]+h,xy[2]-h)))
  pb1=Polygon(b)    
  Pb1 = list(Polygons(list(pb1), ID=1))
  SPb1 = SpatialPolygons(Pb1, proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs +towgs84=0,0,0"))
  UC=gUnionCascaded(PLG)
  return(gIntersection(SPb1,UC))
}

and then, we identify, as follows

whichidtf = function(xy,h){
  h=.7*h
  label="EMPTY"
if(!is.null(identification(xy,h,M))) label="HOUSE"
if(!is.null(identification(xy,h,P))) label="PARK"
if(!is.null(identification(xy,h,W))) label="WATER"
if(!is.null(identification(xy,h,U))) label="UNIVERSITY"
if(!is.null(identification(xy,h,C))) label="CHURCH"
return(label)
}

Let us use colored rectangles to make sure it works
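
The grid itself (the cell boundaries vx and vy, and the half-width h) is not defined in the post; here is a minimal sketch, splitting the bounding box of the buildings layer into a regular grid (the number of cells is an arbitrary choice):

# not in the original post: a regular grid over the bounding box of the buildings
bbx = bbox(M)
n_cells = 30
vx = seq(bbx[1,1], bbx[1,2], length = n_cells + 1)
vy = seq(bbx[2,1], bbx[2,2], length = n_cells + 1)
h = diff(vx)[1]/2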

nx=length(vx)
vx=as.numeric((vx[2:nx]+vx[1:(nx-1)])/2)
ny=length(vy)
vy=as.numeric((vy[2:ny]+vy[1:(ny-1)])/2)
 plot(M,border="white")
 for(i in 1:(nx-1)){
     for(j in 1:(ny-1)){
         lb=whichidtf(c(vx[i],vy[j]),h)
         if(lb=="HOUSE") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="grey")
         if(lb=="PARK") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="green")
         if(lb=="WATER") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="blue")
         if(lb=="CHURCH") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="purple")      
     }}

As a first step, let us agree that it works. To use pictures, I borrowed them from https://fontawesome.com/. For instance, we can have a tree

 library(png)
 library(grid)
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/tree.png","tree.png")
 tree = readPNG("tree.png")

Unfortunately, the color is not right (it is black), but that is easy to fix using the RGB decomposition of the PNG array

 rev_tree=tree
 rev_tree[,,2]=tree[,,4]

We can do the same for houses, churches and water actually

 download.file("http://freakonometrics.hypotheses.org/files/2018/05/angle-double-up.png","angle-double-up.png")
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/home.png","home.png")
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/church.png","curch.png")
water = readPNG("angle-double-up.png")
 rev_water=water
 rev_water[,,3]=water[,,4]
 home = readPNG("home.png")
 rev_home=home
 rev_home[,,4]=home[,,4]*.5
 church = readPNG("church.png")
 rev_church=church
 rev_church[,,1]=church[,,4]*.5
 rev_church[,,3]=church[,,4]*.5

and that’s almost it. We can then add it on the map

 plot(M,border="white")
 for(i in 1:(nx-1)){
   for(j in 1:(ny-1)){
     lb=whichidtf(c(vx[i],vy[j]),h)
     if(lb=="HOUSE")  rasterImage(rev_home,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="PARK") rasterImage(rev_tree,vx[i]-h*.9,vy[j]-h*.8,vx[i]+h*.9,vy[j]+h*.8)
     if(lb=="WATER") rasterImage(rev_water,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="CHURCH") rasterImage(rev_church,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)     
   }}

Nice, isn’t it? (at least as a first draft, done during the lunch break of the R conference in Budapest, today).

 

Visualizing the results of the first round

Just a few lines of code, to visualize the results of the first round of the French presidential election. The idea is to draw a fairly minimalist map, with circles centered on the centroids of the départements. We start by downloading the data for the base map, a 7z file from the IGN website.

download.file("https://wxs-telechargement.ign.fr/oikr5jryiph0iwhw36053ptm/telechargement/inspire/GEOFLA_THEME-DEPARTEMENTS_2016$GEOFLA_2-2_DEPARTEMENT_SHP_LAMB93_FXX_2016-06-28/file/GEOFLA_2-2_DEPARTEMENT_SHP_LAMB93_FXX_2016-06-28.7z",destfile = "dpt.7z")

This file contains some information about the centroids

library(maptools)
library(maps)
departements<-readShapeSpatial("DEPARTEMENT.SHP")
plot(departements)
points(departements@data$X_CENTROID,departements@data$Y_CENTROID,pch=19,col="red")

Since that does not work very well, we will redo it by hand, for instance for Ille-et-Vilaine,

pos=which(departements@data[,"CODE_DEPT"]==35)
Poly_35=departements[pos,]
plot(departements)
plot(Poly_35,col="yellow",add=TRUE)
departements@data[pos,c("X_CENTROID","Y_CENTROID")]
points(departements@data[pos,c("X_CENTROID","Y_CENTROID")],pch=19,col="red")
library(rgeos)
(ctd=gCentroid(Poly_35,byid=TRUE))
points(ctd,pch=19,col="blue")

Since this works better, we will use these centroids.

ctd=as.data.frame(gCentroid(departements,byid=TRUE))
plot(departements)
points(ctd,pch=19,col="blue")

Now, we need the election results, by département. We can scrape the website of the Ministry of the Interior. There is one page per département, so it is easy to loop over them. However, the url also contains the region code. Since I am a bit lazy, instead of building a lookup table, we simply try all the region codes until one works. The idea is to fetch the number of votes obtained by a given candidate.

candidat="M. Emmanuel MACRON"
library(XML)
voix=function(no=35){
testurl=FALSE
i=1
vect_reg=c("084","027","053","024","094","044","032","028","075","076","052",
"093","011","001","002","003","004")
region=NA
while((testurl==FALSE)&(i<=20)){
reg=vect_reg[i]
nodpt=paste("0",no,sep="")
# if(!is.na(as.numeric(no))){if(as.numeric(no)<10) nodpt=paste("00",no,sep="")}
url=paste("http://elections.interieur.gouv.fr/presidentielle-2017/",reg,"/",nodpt,"/index.html",sep="")
test=try(htmlParse(url),silent =TRUE)
if(!inherits(test, "try-error")){testurl=TRUE
region=reg}
i=i+1
}
tabs=readHTMLTable(url) # read the tables from the (successful) results page
tab=tabs[[2]]
nb=tab[tab[,1]==candidat,"Voix"]
a<-unlist(strsplit(as.character(nb)," "))
as.numeric(paste(a,collapse=""))}

We can then test it

> voix(35)
[1] 84648

Since it seems to work, we run it for all the départements

liste_dpt=departements@data$CODE_DEPT
nbvoix=Vectorize(voix)(liste_dpt)

We can then visualize the results on a map.

plot(departements,border=NA)
points(ctd,pch=19,col=rgb(1,0,0,.25),cex=nbvoix/50000)

And we can also try another candidate,

candidat="Mme Marine LE PEN"

and we get the following map

plot(departements,border=NA)
points(ctd,pch=19,col=rgb(0,0,1,.25),cex=nbvoix/50000)

Evolution of the number of job seekers

Let us continue with the projects submitted for the R project of the Data Science for Actuaries program: Meryem Schalck proposed a nice animation of the job-seeker statistics. From http://dares.travail-emploi.gouv.fr/, we can get the series of the number of job seekers, by département (among other breakdowns), as an Excel file. To keep things simple, I put a csv copy online,

ufc<- read.csv("http://freakonometrics.free.fr/DemandeV2.csv", header=TRUE, sep = ";")

To load the packages, Meryem suggests a nice little function

Chargepackages <- function(x){
x <- as.character(x)
if(!require(x, character.only = TRUE)){
install.packages(x, dependencies = TRUE)
require(x, character.only = TRUE)
}
}
packages <- list("sp", "rgdal", "animation", "rgeos", "mapproj", "maptools", "dplyr", "ggplot2", "RColorBrewer", "classInt", "PBSmapping", "ggmap", "splancs", "osmar", "maps", "scales", "osmar", "geosphere", "RCurl", "bitops", "XML", "splancs", "PBSmapping", "htmltools"
)
lapply(packages, Chargepackages)

Let us first rework the data a little

x<-ufc$nombre
y<-ufc$demande
dt<-as.Date(ufc$date,"%d/%m/%Y")
n<-length(x)

and then we build the animation,

colfunc <- colorRampPalette(c("green","yellow", "red"))
YlOrBr <- data.frame(col=colfunc(length(unique(y))),num=unique(y)[order(unique(y))])
ufc %>%
left_join(YlOrBr, by=c("demande"="num")) -> YlOrBr
oopt = ani.options(interval = 0.5)
library(scales)
YlOrBr$Date<-as.Date(YlOrBr$date,"%d/%m/%Y")
ani.record(reset = TRUE) # clear history before recording
for (i in 1:length(unique(dt))) {
YlOrBr[i>=x,] %>%
ggplot() +  geom_point(aes(x=Date,y=demande),colour=as.character(YlOrBr$col[i>=x]))+
scale_x_date(breaks=date_breaks("6 month"),limits = c(min(dt),max(dt))) +
ylim(c(1000,max(y))) +
ggtitle ("Nombre de demandeurs d'emploi par mois entre 01/2000 et 04/2016") +
xlab("Date") +
ylab("Nombre de demandeurs d'emploi cat A ensemble") +
theme(axis.text.x  = element_text(angle=45, vjust=0.5, size=5)) -> p
print(p)
ani.record()
}
oopts = ani.options(interval = 0.2)
ani.record()
saveHTML(ani.replay(),img.name = "record_plot")

and voilà

Inter-relationships in a matrix

Last week, I wanted to display inter-relationships between data in a matrix. My friend Fleur, from AXA, mentioned an interesting possible application, to car accidents. In car-against-car accidents, it might be interesting to see which parts of the cars were involved. On https://www.data.gouv.fr/fr/, we can find such a dataset, with a lot of information about car accidents involving bodily injuries (in France, a police report is mandatory, and all of them are collected in a big dataset… actually several datasets, with information on the people involved, the cars, the locations, etc). For 2014 claims, the dataset is

> base = read.csv("https://www.data.gouv.fr/s/resources/base-de-donnees-accidents-corporels-de-la-circulation-sur-6-annees/20150806-153355/vehicules_2014.csv")

Let us keep only claims involving two vehicles,

> T=table(base$Num_Acc)
> idx=names(T)[which(T==2)]

For 2014, we have 32,222 claims.

> length(idx)
[1] 32222

In this dataset, we have information about where each car was hit: codes ‘1’ to ‘8’ correspond to the front, back and side locations listed below, plus ‘9’ for multiple hits (in rollover accidents), while ‘0’ stands for missing information.

> nom=c("NA","Front","Front R",'Front L',"Back","Back R","Back L","Side R","Side L","Multiple")

Now, we simply have to go through our dataset, and get the matrix. My first idea was to get a symmetric one,

> B=base[base$Num_Acc %in% idx,]  
> B=B[order(B$Num_Acc),]
> M=matrix(0,10,10)
> for(i in seq(1,nrow(B),by=2)){
+   a=B$choc[i]+1
+   b=B$choc[i+1]+1
+   M[a,b]=M[a,b]+1
+   M[b,a]=M[b,a]+1
+ }
> rownames(M)=nom
> colnames(M)=nom

The problem, when we ask for a symmetric chord diagram, is that we cannot have Front – Front claims (since values on the diagonal are removed)

> library(circlize)
> chordDiagramFromMatrix(M,symmetric=TRUE)

So let’s pretend that there is some possible distinction in the dataset between the first and the second row: say the first one is the ‘responsible’ driver, or, for an insurer, the first one is your insured. Just to avoid this symmetry problem

> M=matrix(0,10,10)
> for(i in seq(1,nrow(B),by=2)){
+   a=B$choc[i]+1
+   b=B$choc[i+1]+1
+ M[a,b]=M[a,b]+1
+ }
> rownames(M)=paste("A",nom,sep=" ")
> colnames(M)=paste("B",nom,sep=" ")

If we visualize the chord diagram, this time it is more complex to analyze,

> chordDiagram(M)

Below we have the first row (say our driver, letter A) and on top, the second row (say the other driver, letter B),

In bodily injury claims, we observe a large proportion of Front – Front claims, as well as Front – Back. And, as expected, Back – Back claims are not that common…
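
As a quick check (not in the original post), one can list the most frequent configurations directly from the matrix:

> # not in the original post: the five most frequent shock configurations
> idx = order(M, decreasing = TRUE)[1:5]
> data.frame(A = rownames(M)[row(M)[idx]],
+            B = colnames(M)[col(M)[idx]],
+            count = M[idx])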

Minimalist Maps

This week, I mentioned a series of maps, on Twitter,

Friday evening, just before leaving the office to pick up the kids after their first week back in class, Matthew Champion sent me an email, asking for more details. He wanted to know if I had produced those graphs, and if he could mention them in a post. The truth is, I have no idea who produced those graphs, but I told him that one can easily reproduce them. For instance, for the cities, in R, use

> library(maps)
> data("world.cities")
> plot(world.cities$lon,world.cities$lat,
+ pch=19,cex=.7,axes=FALSE,xlab="",ylab="")

It is possible to get a more minimalist one by plotting only cities with more than 100,000 inhabitants, e.g.,

> world.cities2 = world.cities[
+ world.cities$pop>100000,]
> plot(world.cities2$lon,world.cities2$lat,
+ pch=19,cex=.7,axes=FALSE,xlab="",ylab="")

For the airports, it was slightly more complex since on http://openflights.org/data.html#airport, 6,977 airports are mentioned. But on http://www.naturalearthdata.com/, I found another dataset with only 891 airports.

> library(maptools)
> shape <- readShapePoints(
+ "~/data/airport/ne_10m_airports.shp")
> plot(shape,pch=19,cex=.7)

On the same website, one can find a dataset for ports,

> shape <- readShapePoints(
+ "~/data/airport/ne_10m_ports.shp")
> plot(shape,pch=19,cex=.7)

This is for graphs based on points. For those based on lines, for instance rivers, shapefiles can be downloaded from https://github.com/jjrom/hydre/tree/, and then, use

> require(maptools)
> shape <- readShapeLines(
+ "./data/river/GRDC_687_rivers.shp")
> plot(shape,col="blue")

For roads, the shapefile can be downloaded from http://www.naturalearthdata.com/

> shape <- readShapeLines(
+ "./data/roads/ne_10m_roads.shp")
> plot(shape,lwd=.5)

Last, but not least, for lakes, we need the polygons,

> shape <- readShapePoly(
+ "./data/lake/ne_10m_lakes.shp")
> plot(shape,col="blue",border="blue",lwd=2)

Nice, isn’t it? See “See the world differently with these minimalist maps” for Matthew’s post.

Shapefiles from Isodensity Curves

Recently, with @3wen, we wanted to play with isodensity curves. The problem is that it is difficult to get – numerically – the equation of the contour (even if we can easily plot it). Consider the following surface (just for fun, in order to illustrate the idea)

> f=function(x,y) x*y+(1-x)*(1-y)
> u=v=seq(0,1,length=21)
> v=seq(0,1,length=11)
> f=outer(u,v,f)
> angle=30 # the viewing angle is not set in the original snippet; 30 is an arbitrary choice
> persp(u,v,f,theta=angle,phi=10,box=TRUE,
+ shade=TRUE,ticktype="detailed",xlab="",
+ ylab="",zlab="",col="yellow")

For instance, assume that we want to locate areas where the density exceeds 0.7 (here in the lower left corner, SW, and the upper right corner, NE)

> image(u,v,f)
> contour(u,v,f,add=TRUE,levels=.7)
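
Since the post is truncated here, a minimal sketch (not from the original post) of the natural next step: contourLines() returns, numerically, the coordinates of each piece of the level curve, which can then be turned into polygons or shapefiles,

> cl = contourLines(u, v, f, levels = .7)
> length(cl) # one element per connected piece of the level curve
> lines(cl[[1]]$x, cl[[1]]$y, lwd = 2)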

Continue reading Shapefiles from Isodensity Curves

Kernel Density Estimation with Ripley’s Circumferential Correction

The revised version of the paper Kernel Density Estimation with Ripley’s Circumferential Correction is now online, on hal.archives-ouvertes.fr/.

In this paper, we investigate (and extend) Ripley’s circumference method to correct bias of density estimation of edges (or frontiers) of regions. The idea of the method was theoretical and difficult to implement. We provide a simple technique — based on properties of Gaussian kernels — to efficiently compute weights to correct border bias on frontiers of the region of interest, with an automatic selection of an optimal radius for the method. We illustrate the use of that technique to visualize hot spots of car accidents and campsite locations, as well as locations of bike thefts.

There are new applications, and new graphs, too

Most of the codes can be found on https://github.com/ripleyCorr/Kernel_density_ripley (as well as datasets).

Generating Hurricanes with a Markov Spatial Process

The National Hurricane Center (NHC) collects datasets with all storms in the North Atlantic, the North Atlantic Hurricane Database (HURDAT). For all storms, we have the location of the storm, every six hours (at midnight, six a.m., noon and six p.m.). Note that we also have the date, the maximal wind speed – on a 6 hour window – and the pressure in the eye of the storm.

It is possible to run the following function

library(XML)
extract.track=function(year=2012,p=TRUE){

Continue reading Generating Hurricanes with a Markov Spatial Process

Allez les Bleus !

In almost three weeks, the (FIFA) World Cup will start, in Brazil. I have to admit that I am not a big fan of soccer, so I will not talk too much about it. Actually, I wanted to talk about colors, and variations on some colors. For instance, there are a lot of blues. In order to visualize standard blues, let us consider the following figure, inspired by the well known chart of R colors,

BLUES=colors()[grep("blue",colors())]
RGBblues=col2rgb(BLUES)
library(grDevices)
HSVblues=rgb2hsv( RGBblues[1,], RGBblues[2,], RGBblues[3,])
HueOrderBlue=order( HSVblues[1,], HSVblues[2,], HSVblues[3,] )
SetTextContrastColor=function(color) ifelse( mean(col2rgb(color)) > 127, "black", "white")
TextContrastColor=unlist( lapply(BLUES, SetTextContrastColor) )
c=11
l=6
plot(0, type="n", ylab="", xlab="",axes=FALSE, ylim=c(0,11), xlim=c(0,6))
for (j in 1:11){
  for (i in 1:6){
  k=(j-1)*6 + i
rect(i-1,j-1,i,j, border=NA, col=BLUES[ HueOrderBlue[k] ])
text(i-.5,j-.5,paste(BLUES[ HueOrderBlue[k] ]), cex=0.75, col=TextContrastColor[ HueOrderBlue[k] ])}}

All the color names that contain “blue” are shown here.

 

Having the choice between several possible colors is interesting, but it can also be interesting to get a whole palette of blue colors. What we can get is the following

library(RColorBrewer)
blues=colorRampPalette(brewer.pal(9,"Blues"))(100)

In order to illustrate the use of palette colors, consider some data on soccer players (officially registered). The dataset – lic-2012-v1.csv – can be downloaded from http://data.gouv.fr/fr/dataset/… (I will also use a dataset we have on the location of all towns in France, with latitudes and longitudes)

base1=read.csv(
"http://freakonometrics.free.fr/popfr19752010.csv",
header=TRUE)
base1$cp=base1$dep*1000+base1$com
base2=read.csv("lic-2012-v1.csv", header=TRUE)
base2=base2[base2$fed_2012==111,]
names(base2)[1]="cp"
base2$cp=as.numeric(as.character(base2$cp))

The problem with France (I should probably say one of the many problems) is that régions and départements are not well coded in the standard functions. To explain where the départements are, let us use the dept.rda file; then, we can get a matching between the R names and the standard (administrative) ones,

base21=base2[,c("cp","l_2012","pop_2010")]
base21$dpt=trunc(base21$cp/1000)
library(maps)
load("dept.rda")
base21$nomdpt=dept$dept[match(as.numeric(base21$dpt),dept$CP)]
L=aggregate(base21$l_2012,by=list(Category=base21$nomdpt),FUN=sum)
P=aggregate(base21$pop_2010,by=list(Category=base21$nomdpt),FUN=sum)
base=data.frame(D=P$Category,Y=L$x/P$x,C=trunc(L$x/P$x/.0006))
france=map(database="france")
matche=match.map(france,base$D,exact=TRUE)
map(database="france", fill=TRUE,col=blues[base$C[matche]],resolution=0)

Here are the rates of soccer players (with respect to the total population). It is also possible to look at the rate not by département, but by town,

base10=base1[,c("cp","long","lat","pop_2010")]
base20=base2[,c("cp","l_2012")]
base=merge(base10,base20)
Y=base$l_2012/base$pop_2010
QY=as.numeric(cut(Y,c(0,quantile(Y,(1:99)/100),10),labels=1:100))
library(maps)
map("france",xlim=c(-1,1),ylim=c(46,48))
points(base$long,base$lat,cex=.4,pch=19,col=blues[QY])

The darker the dot, the more players. We can also zoom in, to get a better understanding, on the northern part of France, for instance, or on the southern part,
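
Zooming is just a matter of changing the bounding box passed to map(); for instance (the limits below are arbitrary, roughly the south-east of France),

# not in the original post: same plot, with a different bounding box
map("france", xlim = c(2.5, 7.5), ylim = c(42.5, 45.5))
points(base$long, base$lat, cex = .4, pch = 19, col = blues[QY])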

We can obtain a map which is not (too) far away from the one mentioned a few months ago on http://slate.fr/france/78502/.

Conditional densities, on one single graph

With Stéphane Tufféry we’ve been working on credit scoring[1], and we’ve been using the popular German credit dataset,

> myVariableNames <- c("checking_status","duration","credit_history",
+ "purpose","credit_amount","savings","employment","installment_rate",
+ "personal_status","other_parties","residence_since","property_magnitude",
+ "age","other_payment_plans","housing","existing_credits","job",
+ "num_dependents","telephone","foreign_worker","class")
> credit = read.table(
+ "http://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data",
+ header=FALSE,col.names=myVariableNames)
> credit$class <- credit$class-1

We wanted to get a nice code to produce a graph like the one below,

Yesterday, Stéphane came up with the following code, that can easily be adapted

> library(RColorBrewer)
> CL=brewer.pal(6, "RdBu")
> varQuanti = function(base,y,x)
+ {
+ layout(matrix(c(1, 2), 2, 1, byrow = TRUE),heights=c(3, 1))
+	par(mar = c(2, 4, 2, 1))
+	base0 <- base[base[,y]==0,]
+	base1 <- base[base[,y]==1,]
+	xlim1 <- range(c(base0[,x],base1[,x]))
+	ylim1 <- c(0,max(max(density(base0[,x])$y),max(density(base1[,x])$y)))
+	plot(density(base0[,x]),main=" ",col=CL[1],ylab=paste("Density of ",x),
+		 xlim = xlim1, ylim = ylim1 ,lwd=2)
+	par(new = TRUE)
+	plot(density(base1[,x]),col=CL[6],lty=1,lwd=2,
+		 xlim = xlim1, ylim = ylim1,xlab = '', ylab = '',main=' ')
+	legend("topright",c(paste(y," = 0"),paste(y," = 1")),
+		   lty=1,col=CL[c(1,6)],lwd=2)
+	texte <- c("Kruskal-Wallis'Chi² = \n\n",
+       round(kruskal.test(base[,x]~base[,y])$statistic*1000)/1000)
+	text(xlim1[2]*0.8, ylim1[2]*0.5, texte,cex=0.75)
+	boxplot(base[,x]~base[,y],horizontal = TRUE,xlab= y,col=CL[c(2,5)])
+}
> varQuanti(credit,"class","duration")

The code is not complex, but since I usually waste a lot of time on my graphs, I will try to upload more frequently short posts, dedicated to graphs, in R (without ggplot).

[1] For a chapter on statistical learning in the forthcoming Computational Actuarial Science with R.

Visualizing densities of spatial processes

We recently uploaded on http://hal.archives-ouvertes.fr/hal-00725090 a revised version of our work, with Ewen Gallic (a.k.a. @3wen) on Visualizing spatial processes using Ripley’s correction: an application to bodily-injury car accident location

In this paper, we investigate (and extend) Ripley’s circumference method to correct bias of density estimation of edges (or frontiers) of regions. The idea of the method was theoretical and difficult to implement. We provide a simple technique – based on properties of Gaussian kernels – to efficiently compute weights to correct border bias on frontiers of the region of interest, with an automatic selection of an optimal radius for the method. An illustration on the location of bodily-injury car accidents (and hot spots) in the western part of France is discussed, where a lot of accidents occur close to large cities, next to the sea.

Sketches of the R code can be found in the paper, to produce maps and to describe the impact of our boundary correction. For instance, in Finistère, the distribution of car accidents is the following (with a standard kernel on the left, and with correction on the right), with 186 claims (involving bodily injury)

and in Morbihan with 180 claims, observed in a specific year (2008 as far as I remember),

The code is the same as the one mentioned last year, except perhaps plotting functions. First, one needs to define a color scale and associated breaks

breaks <- seq(min(result$ZNA, na.rm = TRUE) * 0.95, max(result$ZNA, na.rm = TRUE) * 1.05, length = 21)
col <- rev(heat.colors(20))

to finally plot the estimation

image.plot(result$X, result$Y, result$ZNA, xlim = range(pol[, 1]),
  ylim = range(pol[, 2]), breaks = breaks, col = col,
  xlab = "", ylab = "", xaxt = "n", yaxt = "n", bty = "n",
  zlim = range(breaks), horizontal = TRUE)

It is possible to add a contour, the observations, and the border of the polygon

contour(result$X, result$Y, result$ZNA, add = TRUE, col = "grey")
points(X[, 1], X[, 2], pch = 19, cex = 0.2, col = "dodgerblue")
polygon(pol, lwd = 2)

Now, if one wants to improve the aesthetics of the map, by adding a Google Maps base map, the first thing to do – after loading the ggmap package – is to get the base map

theMap <- get_map(location = c(left = min(pol[, 1]), bottom = min(pol[, 2]),
  right = max(pol[, 1]), top = max(pol[, 2])),
  source = "google", messaging = FALSE, color = "bw")

Of course, data need to be put in the right format

getMelt <- function(smoothed){
  res <- melt(smoothed$ZNA)
  res[, 1] <- smoothed$X[res[, 1]]
  res[, 2] <- smoothed$Y[res[, 2]]
  names(res) <- list("X", "Y", "ZNA")
  return(res)
}
smCont <- getMelt(result)

Breaks and labels should be prepared

theLabels <- round(breaks, 2)
indLabels <- floor(seq(1, length(theLabels), length.out = 5))
indLabels[length(indLabels)] <- length(theLabels)
theLabels <- as.character(theLabels[indLabels])
theLabels[theLabels == "0"] <- "0.00"

Now, the map can be built

P <- ggmap(theMap)
P <- P + geom_point(aes(x = X, y = Y, col = ZNA), alpha = .3,
  data = smCont[!is.na(smCont$ZNA), ], na.rm = TRUE)

It is possible to add a contour

P <- P + geom_contour(data = smCont[!is.na(smCont$ZNA), ],
  aes(x = X, y = Y, z = ZNA), alpha = 0.5, colour = "white")

and colors need to be updated

P <- P + scale_colour_gradient(name = "", low = "yellow", high = "red",
  breaks = breaks[indLabels], limits = range(breaks),
  labels = theLabels)

To remove the axis legends and labels, the theme should be updated

P <- P + theme(panel.grid.minor = element_line(colour = NA),
  panel.background = element_rect(fill = NA, colour = NA),
  axis.text.x = element_blank(), axis.text.y = element_blank(),
  axis.ticks.x = element_blank(), axis.ticks.y = element_blank(),
  axis.title = element_blank(), rect = element_blank())

The final step, in order to draw the border of the polygon

polDF <- data.frame(pol)
colnames(polDF) <- list("lon", "lat")
(P <- P + geom_polygon(data = polDF, mapping = aes(x = lon, y = lat),
  colour = "black", fill = NA))

Then, we applied that methodology to estimate the road network density in those two regions, in order to understand whether a high accident intensity means that an area is dangerous, or whether it is simply because there is a lot of traffic (more traffic, more accidents),

We have been using the dataset obtained from the Geofabrik website, which provides OpenStreetMap data. Each observation is a section of a road, and contains a few points, identified by their geographical coordinates, that allow us to draw lines. We have used those points to estimate a proxy of road intensity, with weights going from 10 (highways) down to 1 (service roads).
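
The types.weights lookup table used in the sketch below is not given in the post; here is one possible version, consistent with weights going from 10 for highways down to 1 for service roads (the intermediate values are assumptions):

# assumption: a lookup table mapping OpenStreetMap road types to weights
types.weights = data.frame(
  type = c("motorway", "trunk", "primary", "secondary", "tertiary",
           "residential", "service"),
  weight = c(10, 8, 6, 4, 3, 2, 1))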

splitroad <- function(listroad, h = 0.0025){
  pts = NULL
  weights <- types.weights[match(unique(listroad$type), types.weights$type), "weight"]
  for(i in 1:(length(listroad) - 1)){
    d = diag(as.matrix(dist(listroad[[i]]))[, 2:nrow(listroad[[i]])])
  }
  return(pts)
}

See Ewen’s blog for more details on the code, http://editerna.free.fr/blog/…. Note that Ewen published a poster of the paper (in French) for the http://r2013-lyon.sciencesconf.org/ conference, organized in Lyon on June 27th-28th.

All comments are welcome…

Playing cards, with R

In my courses on R, I usually show how to insert a picture as a background for a graph. But it is also possible to see the picture as an object, and to insert it in a graph anywhere we like, as explained on the awesome blog http://rsnippets.blogspot.ca/… (in a post published in January 2012). I wanted to insert playing cards in a graph. Cards can be found, e.g., on wikipedia, even French versions, like the ones I used to play with when I was a kid (see e.g. the Jack of clubs, http://commons.wikimedia.org/…, or the Queen of hearts, http://commons.wikimedia.org/…). But those graphics are in svg format. So first, we have to export them to ppm, either using gimp, or online, with http://www.sciweavers.org/… for instance. Here, I have a copy of the 32 cards, and the code to read one of them, in R, is

library(pixmap)
card=read.pnm("1000px_10_of_clubs.ppm")

Then, I can plot the card using

plot(card,add=TRUE)

(on a predefined graph) The interesting part is that it is possible to plot the picture within a given box, but the box has to be specified when we read the image file, using

card=read.pnm("1000px_10_of_clubs.ppm",bbox=c(300,200,800,1100))
plot(card,add=TRUE)

If we want to visualize all the cards, first we have to store the pictures (the cards) in some R format, in a list, then check their dimensions, and then we can write a code to plot any of them, anywhere we like (again, the box has to be specified when we read the file, which might take a while)

L=list(cards="french cards")
L2=list(cards="french cards")
color=c("spades","clubs","hearts","diamonds")
nb=c("07","08","09","10","Jack","Queen","King","01")
N=1:32
for(n in N){
  i=trunc((n-1)/4)+1  #number
  j=(n-1)%%4+1        #color
  name_card=paste("1000px_",nb[i],"_of_",color[j],".ppm",sep="")
L[[n+1]]=read.pnm(name_card)  
L2[[n+1]]=name_card
}

Now, if we want to play one specific card (out of those 32), we can use

card_plot=function(id,loc){
usr <- par("usr")
pin <- par("pin")
card=L[[id+1]]
x.asp <- (card@size[2] * (usr[2] - usr[1]) / pin[1])
y.asp <- (card@size[1] * (usr[4] - usr[3]) / pin[2])
card.height <-.9
card.width <- card.height * x.asp / y.asp
y.0 <- loc[2]
x.0 <- loc[1]
bbox <- c(x.0, y.0, x.0 + card.width, y.0 + card.height)
card <- read.pnm(L2[[id+1]],bbox = bbox)
plot(card,add=TRUE)
}

Note that, here, first we read the file to check the dimensions, and then, we read it again, using the appropriate box (with height given, here 0.9). Now, it is possible to plot the 32 cards on the same graph, for a given ordering

seq_card_plot=function(seq_id){
  X=seq(0,7*.5,by=.5)
  Y=0:4
  table = plot(0:4,0:4,ylim=c(0,4),
  axes=FALSE,xlab="",ylab="",col="white")    
  for(n in 1:length(seq_id)){
  i=trunc((n-1)/4)+1  #number
  j=(n-1)%%4+1         #color
    card_plot(id=seq_id[n],loc=c(X[i],Y[j])) 
  }}

If we did not shuffle the cards, it would be

seq_card_plot(N)

But it is possible to shuffle the cards, of course,

set.seed(1)
seq_card_plot(sample(N))

Now, to be honest, I am a bit disappointed because I did not use the fact that I have vector based images here. So it should be possible to get much nicer images, I guess…

LaTeX in R graphs

A nice post was recently published on the rsnippets blog, about the tikzDevice R package. This package is – indeed – awesome, even if it has been removed from the CRAN website. Of course, it can be downloaded from the archive folder, on http://cran.r-project.org/…, but also (for a more recent version) from http://download.r-forge.r-project.org/…. But first, it is necessary to install the following package.

> install.packages("filehash")

Then, we download one of the tikzDevice archives, and install it, e.g. using (on a Mac) something like the command below.
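
The exact command did not survive in this post; a minimal sketch, assuming the archive was saved locally (the file name and version are hypothetical):

> # hypothetical file name: install the downloaded archive from a local file
> install.packages("tikzDevice_0.6.3.tar.gz", repos = NULL, type = "source")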

Then, we can load the library

> library("tikzDevice")

If we want to use nice LaTeX formulas, it might be necessary to load some (LaTeX) packages, and to specify the encoding format

> "options(tikzMetricPackages = c("\\usepackage[utf8]{inputenc}",
+ "\\usepackage[T1]{fontenc}", "\\usetikzlibrary{calc}", "\\usepackage{amssymb}"))

(this is detailed, e.g. in http://yihui.name/…), then, we write the code to plot a graph. The idea is to produce a tex file which contains the graph, or more precisely which will produce a pdf graph when we compile it. We start with

> tikz("normal-dist.tex", width = 8, height = 4, 
+ standAlone = TRUE,
+ packages = c("\\usepackage{tikz}",
+ "\\usepackage[active,tightpage,psfixbb]{preview}",
+ "\\PreviewEnvironment{pgfpicture}",
+ "\\setlength\\PreviewBorder{0pt}",
+ "\\usepackage{amssymb}"))

We will produce an 8×4 graph. The graph is the following,

> u=seq(-3,3,by=.01)
> plot(u,dnorm(u),type="l",axes=FALSE,xlab="",ylab="",col="white")
> axis(1)
> I=which((u>=0)&(u<=1))
> polygon(c(u[I],rev(u[I])),c(dnorm(u)[I],rep(0,length(I))),col="red",border=NA)
> lines(u,dnorm(u),lwd=2,col="blue")

We can add text (or TeX based text)

> text(-1.5, dnorm(-1.5)+.17, "$\\textcolor{blue}{X\\sim\\mathcal{N}(0,1)}$", cex = 1.5)
> text(1.75, dnorm(1.75)+.25, 
+ "$\\textcolor{red}{\\mathbb{P}(X\\in[0,1])=\\displaystyle{\\int_0^1 \\varphi(x)dx}}$", cex = 1.5)

And we end the file with a standard

> dev.off()

This will produce a .tex file. If we compile this file, we can generate a pdf file that can be inserted in lecture notes, slides or articles.
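
The compilation step can also be run from R (a sketch, not from the original post; it requires a working LaTeX installation):

> # compile the standalone .tex file into normal-dist.pdf
> tools::texi2pdf("normal-dist.tex")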

Nice, isn’t it ?

Happy St Patrick’s Day

I love Saint Patrick’s Day for, at least, two reasons. The first one is that, on March 17th, you can play The Pogues out loud; the second one is that it is the only day in the year when I really enjoy getting a Guinness in a pub. And Guinness is important in statistical science (I did mention a couple of hours ago – on this blog – that beers are important, for social reasons, in the academic world, but that was for other reasons…)

> theta=seq(0,pi/2,length=101)
> leaf=sin(2*theta)+.25*sin(6*theta)
> plot(0,0,col="white",xlim=c(-1.2,1.2),ylim=c(-1.2,1.2),axes=FALSE,xlab="",ylab="") # open an empty plot first (not in the original snippet)
> for(k in 0:3)
+ polygon(leaf*cos(theta+k*pi/2),leaf*sin(theta+k*pi/2),col="green")

As mentioned in all my statistics and econometrics courses, the history of statistics (I mean here mathematical statistics) is closely related to Guinness.

A long time ago, there was a Guinness Brewing Company of Dublin, which – as its name suggests – was an Irish brewing company. And the boss, who was to inherit the family business, decided to attract young students, trained in chemistry at Cambridge or Oxford.

In 1899, William Sealy Gosset, who had obtained a double degree in math and chemistry, left Oxford for Dublin. And to be quite honest, being a graduate in maths meant that he had studied differential equations and astronomy. Basically, mathematics was useless for Guinness, and he got there with his expertise in chemistry. In fact, William turned out to be also a very good administrator, but this has nothing to do with our story.

William had good memories of his studies in math, and he wondered if he could find a problem to look at. He started studies on workmanship, noting that conditions vary so much (temperature, hops, malt, manufacturing conditions…) that there were only few consistent data. The “law of errors” (the central limit theorem) cannot apply under these conditions.

In short, Bill (now we know each other a little, we’ll call him Bill) took many measurements, and noticed that the Poisson distribution could be an interesting model to work with. To make the story short, Bill managed to use statistical techniques to control the variance of the production, meaning that he was able to lower losses in the production of beer.

A nice application like this one deserved publication in a scientific journal… Well, of course the Poisson distribution had long been known (it was 1904, and a few months earlier Von Bortkiewicz had found elegant applications of this law, as discussed in a post a few weeks ago). But there was a disclosure issue there: Bill’s contract prohibited him from disclosing secrets to the competitors.

Meanwhile, Bill had met Karl Pearson, who was then the editor of Biometrika, and who encouraged him to publish his results. In 1906, Bill, who had helped Guinness earn a lot of money – doing applied mathematics can be useful – managed to take a sabbatical to work with Pearson at the Galton biometrics laboratory. Bill and Karl decided to publish the work under the pseudonym “Student”. The legend claims that they had hesitated to use “pupil”.

And for almost 30 years, “Mr Gosset”, honorable Guinness employee, led a dissolute life, publishing in statistical journals (after work at the brewery), always under the pseudonym “Student”. Of course, it might not be that simple. I mean, Bill had a family life, too. And his wife was the captain of the national hockey team. So I can hardly imagine Bill playing the smart ass and doing mathematical computations when it was time to wash the dishes or iron his shirt…

In 1908, he wrote a remarkable paper, “The probable error of a mean”, noticed, at least, by Ronald Fisher. In fact, Bill found that there was an interesting law but – like the normal one – it was difficult to manipulate to obtain confidence intervals. Without a computer, he had the idea of using Monte Carlo methods to tabulate quantiles and construct his tables. And he was probably the first one to look carefully at the problem of small samples, unlike Karl Pearson, who always put the focus on the asymptotic case.

In fact, looking at his small samples, he saw that the denominator had a distribution very close to the ones Karl had been manipulating, in particular the square root of a chi-square distribution. Well, of course, the normality assumption remained, but at least we had some results for finite samples!

For the story, William Gosset suggested using the letter z for his statistic, the ratio between the mean and the (empirical) standard deviation. But a few years later, statisticians became accustomed to using this letter for the Gaussian distribution (i.e. when the variance is known), and it became the standard to use the letter t instead. Hence, finally, the present name of the “Student t distribution”, and, in regression outputs, the “t-test”.

A legend (told by Harold Hotelling in his memoirs) claims that the Guinness family discovered this double life on the day of William Gosset’s death, in 1937, when mathematicians requested financial assistance to print a volume of the works of their employee. But another legend claims that Mr Guinness himself had suggested the nickname when Gosset expressed his intention to publish his research… So I guess we’ll never know. But at least, I’ll think about Bill when I get my first Guinness tonight (though I will probably not be able to tell this story anymore by the time I reach the fourth…)