Category Archives: Computer

A quelle distance d’une banque habite-t-on ?

As part of the R projects for the Data Science for Actuaries program, I will keep posting snippets of code that can be useful in a spatial context. The previous post, on mapping the Brexit vote, was picked up (and nicely improved) by our neighbours at rgeomatic. Today, I will build on the work of Etienne Flichy, which combines the distribution of the population over the territory with the location of bank branches.

We talk about banks here, but with a database of hairdressers, bakeries, etc., we could do exactly the same thing! (which means we will have plenty to play with when the SIRENE database is opened up, in the coming weeks). We will assume that we have a database with all bank branches, geocoded. For the exercise, we will use the location of bank branches, based on the data from cbanque.com. The site is fairly easy to scrape, once you look at how the pages are built, e.g. http://cbanque.com/pratique/agences/credit-cooperatif/35/. From there we collect the (postal) addresses, and we can then use https://adresse.data.gouv.fr/csv/ (or various other tools) to geocode them.
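For what it is worth, here is a rough sketch of those two steps in R. The CSS selector and the parsing of the JSON returned by the geocoding API are guesses to be adapted (and the CSV batch endpoint of adresse.data.gouv.fr is more convenient when there are thousands of addresses); the object names are mine.

library(rvest)
library(jsonlite)

# scrape one listing page -- the "address" selector is a guess,
# to be adapted to the actual structure of the cbanque.com pages
page = read_html("http://cbanque.com/pratique/agences/credit-cooperatif/35/")
adresses = page %>% html_nodes("address") %>% html_text(trim = TRUE)

# geocode one postal address with the adresse.data.gouv.fr API
# (single-address endpoint; the exact JSON structure may need checking)
geocode = function(adr){
  url = paste0("https://api-adresse.data.gouv.fr/search/?q=",
               URLencode(adr), "&limit=1")
  res = fromJSON(url)
  # GeoJSON convention: coordinates are (longitude, latitude)
  res$features$geometry$coordinates[[1]]
}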

Continue reading A quelle distance d’une banque habite-t-on ?

Cartographier le vote pour le Brexit

These days, I am deep into the R projects I had assigned for the Data Science for Actuaries program. Since I received plenty of interesting work, I thought I would try to turn some of it into blog posts (and, as a bonus, it lets me check that the code actually runs).

To start the series (yes, there will be more), Flavien Thery proposed a map of the Brexit vote. The first step is to get a base map (we use the second administrative level here)

library(sp)
library(raster)
download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/GBR_adm2.rds","GBR_adm2.rds")
UK=readRDS("GBR_adm2.rds")
UK@data[159,"HASC_2"]="GB.NR"  # fill in a missing HASC_2 code in the GADM data
plot(UK, xlim = c(-4,-2), ylim = c(50, 59), main="UK areas")

This map is a bit surprising… we are looking at the United Kingdom, which includes England, Wales, Scotland and Northern Ireland. But the Republic of Ireland is not on the map… We might as well add it (and it will look nicer if we colour the sea in blue)

download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/IRL_adm0.rds","IRL_adm0.rds")
IRL=readRDS("IRL_adm0.rds")
plot(IRL,add=TRUE)

And since we have started adding Ireland, why not France, which is just at the bottom right, and should be slightly visible…

download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds","FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
plot(FR,add=TRUE)

OK, let's stop there…

We can then get the Brexit referendum results (which I store on the blog, to save some time)

loc="https://f-origin.hypotheses.org/wp-content/blogs.dir/253/files/2016/12/EU-referendum-result-data.csv"
referendum=read.csv(loc,header=TRUE,dec=".",sep=",",stringsAsFactors = FALSE)
referendum=referendum[c(3,6,13,14)]
library(plyr)
referendum=ddply(referendum,.(Region,HASC_code),summarise,Remain=sum(Remain),Leave=sum(Leave))

We can check that Leave won, with 51.89% of the votes cast (consistent with what Wikipedia reports)

> sum(referendum$Leave)/(sum(referendum$Leave)+sum(referendum$Remain))
[1] 0.5189184

We can then look, region by region, at whether Leave or Remain won, using

referendum=referendum[c(referendum$Region!="Northern Ireland"),]
referendum=referendum[c(referendum$HASC_code!="Gibraltar"),]
row.names(referendum)=seq(1,nrow(referendum),1)
leave_or_remain=cbind(referendum,"Brexit?"=0)
leave_or_remain[,"Brexit?"]=ifelse(leave_or_remain$Remain<leave_or_remain$Leave,rgb(1,0,0,.7),rgb(0,0,1,.7))
map_data=data.frame(UK@data)
map_data=cbind(map_data,"Brexit"=0)
for (i in 1:nrow(map_data)){
  if(map_data[i,"NAME_1"]=="Northern Ireland"){
    map_data[i,"Brexit"]="blue"
  } else {
    map_data[i,"Brexit"]=as.character(leave_or_remain[leave_or_remain$HASC_code==map_data$HASC_2[i],'Brexit?'])
  }
}
plot(UK, col = map_data$Brexit, border = "gray1", xlim = c(-4,-2), ylim = c(50, 59), main="How the UK has voted?", bg="#A6CAE0")
plot(IRL, col = "lightgrey", border = "gray1",add=TRUE)
plot(FR, col = "lightgrey", border = "gray1",add=TRUE)

legend(-1,59,c("Leave","Remain"),fill=c("red","blue"),bty="n")

(we add a small legend to make the map clearer). But we can go further, and plot, for each region, the percentage of votes for Remain. To do so, we can use the cartography package

library(cartography)
cols <- carto.pal(pal1 = "red.pal",n1 = 5,pal2="green.pal",n2=5)
map_data=data.frame(UK@data)
map_data=cbind(map_data,"Percentage_Remain"=0)
for (i in 1:nrow(map_data)){
  if(map_data[i,"NAME_1"]=="Northern Ireland"){
    map_data[i,"Percentage_Remain"]=55.78
  } else {
    map_data[i,"Percentage_Remain"]=100*leave_or_remain[leave_or_remain$HASC_code==map_data$HASC_2[i],"Remain"]/(leave_or_remain[leave_or_remain$HASC_code==map_data$HASC_2[i],"Remain"]+leave_or_remain[leave_or_remain$HASC_code==map_data$HASC_2[i],"Leave"])
  }
}
plot(UK, col = "grey", border = "gray1", xlim = c(-4,-2), ylim = c(50, 59),bg="#A6CAE0")
plot(IRL, col = "lightgrey", border = "gray1",add=TRUE)
plot(FR, col = "lightgrey", border = "gray1",add=TRUE)
choroLayer(spdf = UK,
df = map_data,
var = "Percentage_Remain",
breaks = seq(0,100,10),
col = cols,
legend.pos = "topright",
legend.title.txt = "",
legend.values.rnd = 2,
add = TRUE)

(here again, a legend helps to read the map). Fun, isn't it?

R in Insurance, 2017

Following the successful conferences in London (2013, 2014, 2016) and in Amsterdam (2015), the next edition will take place in Paris: R in Insurance 2017 will be held at ENSAE on June 8.

This one-day conference will focus again on applications in insurance and actuarial science that use R, the lingua franca for statistical computation. The intended audience includes both academics and practitioners who are active in, or interested in, the applications of R in insurance. The two invited speakers are Katrien Antonio (KU Leuven) and Julie Seguela (Covea). It will be a nice event!

La centralisation française, et la répartition de la population sur le territoire

Following last night's post on the population distribution in France, I received several comments – on Twitter – noting that it is not surprising that France is so concentrated, in terms of population, given how centralized the country is (as opposed to Germany, for instance). As Wikipedia notes, on its page about centralism, "since the French Revolution (and even before), France has been a very centralist state".

A few years ago, Mattia Bunel (aka @mattiabunel) wrote a nice master's thesis on the influence of environmental constraints on the distribution of the French population between 1793 and 1999. Beyond the writing itself, there is above all a substantial amount of work on the data: Mattia reshaped the data from cassini.ehess.fr, and was kind enough to send me his dataset, with one row per French village, its area, and its population at several dates between 1793 and 1999. To extract the area (Mattia spent quite some time updating it, in particular for villages that merged), the code is simply

> base=read.csv("/Cassini/export.csv")
> dim(base)
[1] 41409    40
> S=base$Superficie
> n=nchar(as.character(S))
> S=substr(S,1,n-3)
> Surface=as.numeric(gsub(" ", "", as.character(S), fixed = TRUE))
> idx=which(!is.na(Surface))
> B=base[idx,]
> dim(B)
[1] 36576    40

This gives us the 36,000 or so French municipalities. We can then extract, for each year, the population. To get a Gini index computation consistent with what I did yesterday – in Non-Uniform Population Density in some European Countries – the idea is the following: suppose there are two villages, one of area 4 and population 4, and another of area 1 and population 3. In yesterday's post, I was reasoning per spatial unit (namely a small square)

and we do the same here. In other words, we spread the population uniformly within the village of area 4, which gives 4 "unit villages" with 1 inhabitant each,

We thus have 5 units, 4 with a population of 1, and 1 with a population of 3. From this sample – {1,1,1,1,3} – we can compute the Gini index. The function to extract the population, for a given year, and compute the Gini index is

> library(ineq)
> LC=function(annee){
+   nom=paste("X",annee,sep="")
+   P=B[,nom]
+   P=gsub(" ", "", as.character(P), fixed = TRUE)
+   Pop=as.numeric(substr(P,3,nchar(P)))
+   P=Pop[which(!is.na(Pop))]
+   S=Surface[which(!is.na(Pop))]
+   D=P/S
+   S1=round(S/20)
+   X=rep(NA,sum(S1))
+   s=0
+   for(i in 1:length(S1)){
+     X[s+1:S1[i]]=D[i]
+     s=s+S1[i]
+   }
+   Gini(X)
+ }
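Before running it on the real data, here is a quick sanity check on the toy sample {1,1,1,1,3} described above, using the same ineq package:

library(ineq)
Gini(c(1,1,1,1,3)) # about 0.23 for the toy sample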

If we compute this Gini index for every year, we get

> Y=names(base)[7:ncol(base)]
> Y=as.numeric(substr(Y,2,5))
> gini=Vectorize(LC)(Y)
> plot(Y,gini,type="b")

We find a Gini index above 0.7 today (the value we had obtained yesterday, with a rather different methodology), but above all, we see that the Gini index has kept increasing since the French Revolution… A quick interpretation would be that French centralism has not stopped growing for 200 years.

La donnée, pierre angulaire de notre économie

On Thursday, I will take part in the Rendez-vous Parlementaires de Bretagne sur l'Economie Numérique,

where I was asked to talk about data. I am supposed to have about fifteen minutes, but as always I have prepared enough material for a two-hour talk. The slides are now online, and I thought I could write a post detailing the substance of my talk,

Continue reading La donnée, pierre angulaire de notre économie

Non-Uniform Population Density in some European Countries

A few months ago, I did mention that France was a country with strong inequalities, especially when you look at higher education, and research teams. Paris has almost 50% of the CNRS researchers, while only 3% of the population lives there.

It looks like Paris is the only city, in France. And I wanted to check that, indeed, France is a country with strong inequalities, when we look at population density.

Using data from sedac.ciesin.columbia.edu, it is possible to get population density at a rather fine granularity,

> rm(list=ls())
> base=read.table(
+      "/home/charpentier/glp00ag.asc",
+      skip=6)
> X=t(as.matrix(base,ncol=8640))
> X=X[,ncol(X):1]

The scales for latitudes and longitudes can be obtained from the text file,

> #ncols         8640
> #nrows         3432
> #xllcorner     -180
> #yllcorner     -58
> #cellsize      0.0416666666667

Hence, we have

> library(maps)
> world=map(database="world")
> vx=seq(-180,180,length=nrow(X)+1)
> vx=(vx[2:length(vx)]+vx[1:(length(vx)-1)])/2
> vy=seq(-58,85,length=ncol(X)+1)
> vy=(vy[2:length(vy)]+vy[1:(length(vy)-1)])/2

If we plot our density, as in a previous post, on Where People Live,

> I=seq(1,nrow(X),by=10)
> J=seq(1,ncol(X),by=10)
> image(vx[I],vy[J],log(1+X[I,J]),
+ col=rev(heat.colors(101)))
> lines(world[[1]],world[[2]])

we can see that we have a match, between the big population matrix, and polygons of countries.

Consider France, for instance. We can download the contour polygon with higher precision,

> library(rgdal)
> fra=download.file(
+ "http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds",
+ "fr.rds")
> Fra=readRDS("fr.rds")
> n=length(Fra@polygons[[1]]@Polygons)
> L=rep(NA,n)
> for(i in 1:n) L[i]=nrow(Fra@polygons[[1]]@Polygons[[i]]@coords)
> idx=which.max(L)
> polygon_Fr=
+       Fra@polygons[[1]]@Polygons[[idx]]@coords
> min_poly=apply(polygon_Fr,2,min)
> max_poly=apply(polygon_Fr,2,max)
> idx_i=which((vx>min_poly[1])&(vx<max_poly[1]))
> idx_j=which((vy>min_poly[2])&(vy<max_poly[2]))
> sub_X=X[idx_i,idx_j]
> image(vx[idx_i],vy[idx_j],
+       log(sub_X+1),col=rev(heat.colors(101)),
+       xlab="",ylab="")
> lines(polygon_Fr)

We are now able to extract population information for France only (actually, only mainland France: islands are not considered here, to avoid complicated computations).

> library(sp)
> xy=expand.grid(x = vx[idx_i], y = vy[idx_j])
> dim(xy)
[1] 65730     2

Here, we have 65,730 small squares in the bounding box around (mainland) France.

> pip=point.in.polygon(xy[,1],xy[,2],
+     polygon_Fr[,1],polygon_Fr[,2])>0
> dim(pip)=dim(sub_X)
> Fr=sub_X[pip]
> sum(Fr)
[1] 58105272

Observe that the total population within the French polygon is close to 60 million people, which is consistent with actual figures. Now, if we look more carefully at repartition over the French territory

> library(ineq)
> Gini(Fr)
[1] 0.7296936

Gini coefficient is rather high (over 70%), but it is also possible to visualize Lorenz curve,

> LcF=Lc(Fr)
> plot(LcF)

Observe that in 5% of the territory, we can find almost 54% of the population

> 1-min(LcF$L[LcF$p>.95])
[1] 0.5462632

In order to compare with other countries, consider the following function, which wraps the steps above,

> LC=function(rds="fr.rds"){
+ Fra=readRDS(rds)
+ n=length(Fra@polygons[[1]]@Polygons)
+ L=rep(NA,n)
+ for(i in 1:n)
+ L[i]=nrow(Fra@polygons[[1]]@Polygons[[i]]@coords)
+ idx=which.max(L)
+ polygon_Fr=
+      Fra@polygons[[1]]@Polygons[[idx]]@coords
+ min_poly=apply(polygon_Fr,2,min)
+ max_poly=apply(polygon_Fr,2,max)
+ idx_i=which((vx>min_poly[1])&(vx<max_poly[1]))
+ idx_j=which((vy>min_poly[2])&(vy<max_poly[2]))
+ sub_X=X[idx_i,idx_j]
+ xy=expand.grid(x = vx[idx_i], y = vy[idx_j])
+ pip=point.in.polygon(xy[,1],xy[,2],
+     polygon_Fr[,1],polygon_Fr[,2])>0
+ dim(pip)=dim(sub_X)
+ Fr=sub_X[pip]
+ return(list(gini=Gini(Fr),LC=Lc(Fr)))
+ }
> FRA=LC()

For instance, consider Germany, or Italy

> deu=download.file(
+ "http://biogeo.ucdavis.edu/data/gadm2.8/rds/DEU_adm0.rds",
+ "deu.rds")
> DEU=LC("deu.rds")
> ita=download.file(
+ "http://biogeo.ucdavis.edu/data/gadm2.8/rds/ITA_adm0.rds",
+ "ita.rds")
> ITA=LC("ita.rds")

It is possible to plot the Lorenz curves together,

> plot(FRA$LC,col="blue")
> lines(DEU$LC,col="black")
> lines(ITA$LC,col="red")

Observe that France is clearly below the other ones. Compared with Germany, there is a significant difference

> FRA$gini
[1] 0.7296936
> DEU$gini
[1] 0.5088853

More precisely, while 54% of the French live on 5% of the territory, only 40% of Italians and 32% of Germans do,

> 1-min(FRA$LC$L[FRA$LC$p>.95])
[1] 0.5462632
> 1-min(ITA$LC$L[ITA$LC$p>.95])
[1] 0.3933227
> 1-min(DEU$LC$L[DEU$LC$p>.95])
[1] 0.3261124

How long could it take to run a regression

This afternoon, while I was discussing with Montserrat (aka @mguillen_estany), we were wondering how long it might take to run a regression model. More specifically, how long it might take if we use a Bayesian approach. My guess was that the time should probably be linear in n, the number of observations. But I thought it would be good to check.

Let us generate a big dataset, with one million rows,

> n=1e6
> X=runif(n)
> Y=2+5*X+rnorm(n)
> B=data.frame(X,Y)

Consider as a benchmark the standard linear regression,

> lm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = lm(Y~X,data=B[idx,])
+   summary(reg)
+ }

Here the regression is run on a smaller subset of the data. We can do the same with a Bayesian approach, using Stan,

> stan_lm ="
+ data {
+ int N;
+ vector[N] x;
+ vector[N] y;
+ }
+ parameters {
+ real alpha;
+ real beta;
+ real tau;
+ }
+ transformed parameters {
+ real sigma;
+ sigma <- 1 / sqrt(tau);
+ }
+ model{
+ y ~ normal(alpha + beta * x, sigma);
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ tau ~ gamma(0.001, 0.001);
+ }
+ "

Then, define (and compile) the model

> library(rstan)
> system.time( 
  stanmodel <<- stan_model(model_code = stan_lm))
utilisateur     système      écoulé 
      0.043       0.000       0.043

We want to see how long it might take to run a regression,

> lm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+       data = list(N=n,
+                   x=X[idx],
+                   y=Y[idx]),
+       iter = 1000, warmup=200)
+   summary(fit)
+ }

We use the following package to see how long it takes

> library(microbenchmark)
> time_lm = function(n){
+  M = microbenchmark(lm_freq(n),
+      lm_bayes(n),times=50)
+  return(apply( matrix(M$time,nrow=2),1,mean))
+ }

We can now compare the time it took with ten, one hundred, one thousand, and ten thousand observations,

> vN = c(10,100,1000,10000)
> T = Vectorize(time_lm)(vN)

we can then plot it

> plot(vN,T[2,]/1e6,log="xy",col="red",type="b",
+      xlab="Number of Observations",ylab="Time")
> lines(vN,T[1,]/1e6,col="blue",type="b")

It looks like (if we forget about the very small samples) the time it takes to run a regression is linear in the sample size, for both techniques (the frequentist and the Bayesian one).
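One way to make that "linear" claim slightly more formal is to regress log-time on log-size and look at the slope, a slope close to 1 meaning roughly linear scaling. A quick sketch, reusing the vN and T objects above (with only four sample sizes, this is indicative at best):

# estimated scaling exponents (slope of log-time against log-size),
# one for each of the two rows of T used in the plot above
coef(lm(log(T[1,]) ~ log(vN)))[2]
coef(lm(log(T[2,]) ~ log(vN)))[2]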

And actually, the same story holds for logistic regressions. Consider the following dataset

> n=1e6
> X=runif(n)
> S=-3+2*X+rnorm(n)
> Y=rbinom(n,size=1,prob=exp(S)/(1+exp(S)))
> B=data.frame(X,Y)

The frequentist version of the logistic regression is

> glm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = glm(Y~X,data=B[idx,],family=binomial)
+   summary(reg)
+ }

and the Bayesian one, using stan,

> stan_glm = "
+ data {
+ int N;
+ vector[N] x;
+ int<lower=0,upper=1> y[N];
+ }
+ parameters {
+ real alpha;
+ real beta;
+ }
+ model {
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ y ~ bernoulli_logit(alpha + beta * x);
+ }
+ "
> stanmodel = stan_model(model_code = stan_glm)
> glm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+        data = list(N=n,
+        x = X[idx],
+        y = Y[idx]),
+        iter = 1000, warmup=200)
+   summary(fit)
+ }

Again, we can see how long it takes to run those regression models

> time_glm = function(n){
+   M = microbenchmark(glm_freq(n),
+   glm_bayes(n),times=50)
+   return(apply( matrix(M$time,nrow=2),1,mean))
+ }
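The post stops here, but the timing comparison for the logistic case can be run exactly as in the linear case above. A possible sketch (T_glm is my own name):

# same experiment as before, now with the two logistic regressions
vN = c(10,100,1000,10000)
T_glm = Vectorize(time_glm)(vN)
plot(vN,T_glm[2,]/1e6,log="xy",col="red",type="b",
     xlab="Number of Observations",ylab="Time")
lines(vN,T_glm[1,]/1e6,col="blue",type="b")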

 

Where People Live, part 2

Following my previous post, I wanted to use another dataset to visualize where people live, on Earth. The dataset comes from sedac.ciesin.columbia.edu. Once you register, you can download the database

> base=read.table("glp00ag15.asc",skip=6)

The database is a ‘big’ 1440×572 matrix: in each cell (latitude and longitude) we have the population

>  X=t(as.matrix(base,ncol=1440))
>  dim(X)
[1] 1440  572

The dataset looks like

> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ log(X+1)[,ncol(X):1],col=rev(heat.colors(101)),
+ axes=FALSE,xlab="",ylab="")

Now, if we keep only places where people actually live (i.e. removing cold deserts and oceans) we get

> M=X>0
> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ M[,ncol(X):1],col=c("white","light green"),
+ axes=FALSE,xlab="",ylab="")

Then, we can visualize where 50% of the population lives,

> Order=matrix(rank(X,ties.method="average"),
+ nrow(X),ncol(X))
> idx=cumsum(sort(as.numeric(X),
+ decreasing=TRUE))/sum(X)
> M=(X>0)+(Order>length(X)-min(which(idx>.5)))
> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ M[,ncol(X):1],col=c("white",
+ "light green",col="red"),
+ axes=FALSE,xlab="",ylab="")

50% of the population lives in the red area, and 50% in the green area. More precisely, 50% of the population lives on 0.75% of the Earth,

> table(M)/length(X)*100
M
         0          1          2 
69.6233974 29.6267968  0.7498057

And 90% of the population lives in the red area below (5% of the surface of the Earth)

> M=(X>0)+(Order>length(X)-min(which(idx>.9)))
> table(M)/length(X)*100
M
        0         1         2 
69.623397 25.512335  4.864268 
> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ M[,ncol(X):1],col=c("white",
+ "light green",col="red"),
+ axes=FALSE,xlab="",ylab="")

Breizh Camp, Economics with Computers

I have been invited, as keynote speaker, at the 6th BreizhCamp, organized in Rennes, from March 23rd till March 25th, “la conférence des développeurs du grand ouest” as they call it. I am deeply honored, since it is a huge conference… The organizing committee asked me to give a (brief) talk on data, and big data. But data is just the visible tip of the iceberg, and I cannot talk about data without mentioning algorithms. So I will try to talk about algorithmics, econometrics, machine learning, and data (and big data, of course).

Slides are now online… More to come in the next days…

Where People Live

There was an interesting map on reddit this morning, with a visualisation of the latitude and longitude of where people live, on Earth. So I tried to reproduce it. To compute the density, I used a kernel-based approach

> library(maps)
> data("world.cities")
> X=world.cities[,c("lat","pop")]
> liss=function(x,h){
+   w=dnorm(x-X[,"lat"],0,h)
+   sum(X[,"pop"]*w)
+ }
> vx=seq(-80,80)
> vy=Vectorize(function(x) liss(x,1))(vx)
> vy=vy/max(vy)
> plot(world.cities$lon,world.cities$lat)
> for(i in 1:length(vx)) 
+ abline(h=vx[i],col=rgb(1,0,0,vy[i]),lwd=2.7)

For the other axis, we use a mirror technique, to ensure that -180 is close to +180

> Y=world.cities[,c("long","pop")]
> Ya=Y; Ya[,1]=Y[,1]-360
> Yb=Y; Yb[,1]=Y[,1]+360
> Y=rbind(Y,Ya,Yb)
> liss=function(y,h){
+   w=dnorm(y-Y[,"long"],0,h)
+   sum(Y[,"pop"]*w)
+ } 
> vx=seq(-180,180)
> vy=Vectorize(function(x) liss(x,1))(vx)
> vy=vy/max(vy)
> plot(world.cities$lon,world.cities$lat,pch=19)
> for(i in 1:length(vx)) 
+ abline(v=vx[i],col=rgb(1,0,0,vy[i]),lwd=2.7)

Now we can add the two, on the same graph
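For instance, a minimal sketch (the object names vlat, dlat, vlon, dlon are mine, and drawing latitude bands in red and longitude bands in blue is a choice, not necessarily what the original figure did):

# recompute both smoothed densities, then overlay them on the scatterplot
vlat = seq(-80,80)
dlat = Vectorize(function(z) sum(X[,"pop"]*dnorm(z-X[,"lat"],0,1)))(vlat)
dlat = dlat/max(dlat)
vlon = seq(-180,180)
dlon = Vectorize(function(z) sum(Y[,"pop"]*dnorm(z-Y[,"long"],0,1)))(vlon)
dlon = dlon/max(dlon)
plot(world.cities$long,world.cities$lat,pch=19,cex=.3)
for(i in 1:length(vlat)) abline(h=vlat[i],col=rgb(1,0,0,dlat[i]),lwd=2.7)
for(i in 1:length(vlon)) abline(v=vlon[i],col=rgb(0,0,1,dlon[i]),lwd=2.7)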

Spatial and Temporal Viz of Gas Price, in France

A great thing in France is that we can play with a great database of gas prices, in all gas stations, almost every day. The file is rather big, so let's make sure we have enough memory to run our code,

> rm(list=ls())

To extract the data, we first need to get the XML file, and then convert it into a more standard R object (say a list)

> year=2014
> loc=paste("http://donnees.roulez-eco.fr/opendata/annee/",year,sep="")
> download.file(loc,destfile="oil.zip")

Content type 'application/zip' length 15248088 bytes (14.5 MB)

> unzip("oil.zip", exdir="./")
> fichier=paste("PrixCarburants_annuel_",year,
+ ".xml",sep="")
> library(plyr)
> library(XML)
> library(lubridate)
> l=xmlToList(fichier)

We have a large dataset, with prices for various types of gas, for almost every gas station in France, almost every day in 2014. It is a 1.4Gb list, with 11,064 elements (each of them being a gas station)

> length(l)
[1] 11064
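Before writing the extraction function below, a quick look at the structure of one element can help (the fields shown here are the ones used in the code that follows):

# peek at the first gas station: XML attributes (id, coordinates, ...),
# plus address, town, opening information, and the price entries
names(l[[1]])
l[[1]]$.attrs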

There are two ways to look at the data. A first idea is to consider a gas station, and to extract the time series.

> time_series=function(no,type_gas="Gazole"){
+   prix=list()
+   date=list()
+   nom=list()
+   j=0
+   for(i in 1:length(l[[no]])){
+     v=names(l[[no]])
+     if(!is.null(v[i])){
+       if(v[i]=="prix"){
+         j=j+1
+  date[[j]]=as.character(l[[no]][[i]]["maj"])
+  prix[[j]]=as.character(l[[no]][[i]]["valeur"])
+  nom[[j]]=as.character(l[[no]][[i]]["nom"])
+       }}
+   }
+   id=which(unlist(nom)==type_gas)
+   n=length(id)
+   jour=function(j) as.Date(substr(date[[id[j]]],1,10),"%Y-%m-%d")
+   jour_heure=function(j) as.POSIXct(substr(date[[id[j]]],1,19), format = "%Y-%m-%d %H:%M:%S", tz = "UTC")
+   ext_y=function(j) substr(date[[id[j]]],1,4)
+   ext_m=function(j) substr(date[[id[j]]],6,7)
+   ext_d=function(j) substr(date[[id[j]]],9,10)
+   ext_h=function(j) substr(date[[id[j]]],12,13)
+   ext_mn=function(j) substr(date[[id[j]]],15,16)
+   prix_essence=function(i) as.numeric(prix[[id[i]]])/1000
+   base1=data.frame(indice=no,
+            id=l[[no]]$.attrs["id"],
+            adresse=l[[no]]$adresse,
+            ville=l[[no]]$ville,
+  lat=as.numeric(l[[no]]$.attrs["latitude"])/100000,
+  lon=as.numeric(l[[no]]$.attrs["longitude"])/100000,
+       cp=l[[no]]$.attrs["cp"],
+       saufjour=l[[no]]$ouverture["saufjour"], 
+       Y=unlist(lapply(1:n,ext_y)),
+       M=unlist(lapply(1:n,ext_m)),
+       D=unlist(lapply(1:n,ext_d)),
+       H=unlist(lapply(1:n,ext_h)),
+       MN=unlist(lapply(1:n,ext_mn)),
+    prix=unlist(lapply(1:n,prix_essence)))
+   
+   base1=base1[!is.na(base1$prix),]
+   
+   date_d=paste(year,"-01-01 12:00:00",sep="")
+   date_f=paste(year,"-12-31 12:00:00",sep="")
+   vecteur_date=seq(as.POSIXct(date_d, format =
+                 "%Y-%m-%d %H:%M:%S"),
+                    as.POSIXct(date_f, format = 
+                 "%Y-%m-%d %H:%M:%S"),by="days")
+   date=paste(base1$Y,"-",base1$M,"-",base1$D,
+   " ",base1$H,":",base1$MN,":00",sep="")
+   date_base=as.POSIXct(date, format = 
+                "%Y-%m-%d %H:%M:%S", tz = "UTC")
+   idx=function(t) sum(vecteur_date[t]>=date_base)
+   vect_idx=Vectorize(idx)(1:length(vecteur_date))
+   P=c(NA,base1$prix)
+   base2=ts(P[1+vect_idx],
+         start=year,frequency=365)
+   list(base=base1,
+        ts=base2)
+ }

To get a daily time series, some interpolation is necessary, since prices are observed at irregular dates: we simply carry the last observed price forward. Here, for instance, for the second gas station, we get

> plot(time_series(2)$ts,ylim=c(1,1.6),col="red")
> lines(time_series(2,"SP98")$ts,col="blue")

An alternative is to study gas prices from a spatial perspective. Given a date, we want the price at all stations. As previously, we keep the last price observed at each station,

> spatial=function(dt){
+   base=NULL
+   for(no in 1:length(l)){  
+     prix=list()
+     date=list()
+     j=0
+     for(i in 1:length(l[[no]])){
+     v=names(l[[no]])
+     if(!is.null(v[i])){
+       if(v[i]=="prix"){
+   j=j+1
+   date[[j]]=as.character(l[[no]][[i]]["maj"])
+       }}
+   }
+   n=j
+   D=as.Date(substr(unlist(date),1,10),"%Y-%m-%d")
+   k=which(D==D[which.max(D[D<=dt])])
+ if(length(k)>0){
+   B=Vectorize(function(i) l[[no]][[k[i]]])(1:length(k))
+ if("nom" %in%  rownames(B)){  
+   k=which(B["nom",]=="Gazole")
+   prix=as.numeric(B["valeur",k])/1000
+   if(length(prix)==0) prix=NA
+   base1=data.frame(indice=no,
+   lat=as.numeric(l[[no]]$.attrs["latitude"])/100000,
+   lon=as.numeric(l[[no]]$.attrs["longitude"])/100000,
+   gaz=prix)
+   base=rbind(base,base1)
+ }}}
+ return(base)}

For instance, for the 5th of May, 2014, we get the following dataset

> B=spatial(as.Date("2014-05-05"))

To visualize prices, consider only mainland France (excluding islands in the Pacific, or close to the Caribbean)

> idx=which((B$lon>(-10))&(B$lon<20)&
+ (B$lat>35)&(B$lat<55))
> B=B[idx,]
> Q=quantile(B$gaz,seq(0,1,by=.01),na.rm=TRUE)
> Q[1]=0
> x=as.numeric(cut(B$gaz,breaks=unique(Q)))
> CL=c(rgb(0,0,1,seq(1,0,by=-.025)),
+ rgb(1,0,0,seq(0,1,by=.025)))
> plot(B$lon,B$lat,pch=19,col=CL[x])

Red dots are the most expensive gas stations on that particular day.

If we add contours of the French regions, we get

> library(maps)
> map("france")
> points(B$lon,B$lat,pch=19,col=CL[x])

 

We can also focus on some specific region, say the South of Brittany.

> library(OpenStreetMap)
> map <- openmap(c(lat= 48,   lon= -3),
+                c(lat= 47,   lon= -2))
> map <- openproj(map) 
> plot(map)
> points(B$lon,B$lat,pch=19,col=CL[x])

As we can see on that map, there are regions that are rather empty, where the closest gas station might be a bit far away. Actually, it is possible to add Voronoi sets on the map,

> dB=data.frame(lon=B$lon,lat=B$lat)
> id=which(!duplicated(dB))

 

which could help to get the price of the closest gas station.

> library(tripack)
> V <- voronoi.mosaic(dB$lon[id],dB$lat[id])
> plot(V,add=TRUE)

It is possible to plot each polygon with the color of the gas station it contains. Actually, it is a bit tricky, since I could not find an R function to do this. So I did it manually,

> plot(map)
> P <- voronoi.polygons(V)
> library(sp)
> point_in_i=function(i,point) point.in.polygon(point[1],point[2],P[[i]][,1],P[[i]][,2])
> which_point=function(i) which(Vectorize(function(j) point_in_i(i,c(dB$lon[id[j]],dB$lat[id[j]])))(1:length(id))>0)
> for(i in 1:length(P)) polygon(P[[i]],col=CL[x[id[which_point(i)]]],border=NA)

With this map, we can see that there are blue areas, i.e. areas where all stations are cheap (because of competition), but in some places, a very expensive station is next to a very cheap one. I guess we should look closer at the dynamics… [to be continued….]

Inter-relationships in a matrix

Last week, I wanted to display inter-relationships between data in a matrix. My friend Fleur, from AXA, mentioned an interesting possible application, to car accidents. In car-against-car accidents, it might be interesting to see which parts of the cars were involved. On https://www.data.gouv.fr/fr/, we can find such a dataset, with a lot of information about car accidents involving bodily injuries (in France, a police report is mandatory, and all of them end up in a big dataset… actually several datasets, with information on the people involved, the cars, the locations, etc). For 2014 claims, the dataset is

> base = read.csv("https://www.data.gouv.fr/s/resources/base-de-donnees-accidents-corporels-de-la-circulation-sur-6-annees/20150806-153355/vehicules_2014.csv")

Let us keep only claims involving two vehicles,

> T=table(base$Num_Acc)
> idx=names(T)[which(T==2)]

For 2014, we have 32,222 claims.

> length(idx)
[1] 32222

In this dataset, we have information about where cars were hit,

plus ‘9’ for multiple hits (in rollover accidents), while ‘0’ stands for missing information.

> nom=c("NA","Front","Front R",'Front L',"Back","Back R","Back L","Side R","Side L","Multiple")

Now, we simply have to go through our dataset, and get the matrix. My first idea was to get a symmetric one,

> B=base[base$Num_Acc %in% idx,]  
> B=B[order(B$Num_Acc),]
> M=matrix(0,10,10)
> for(i in seq(1,nrow(B),by=2)){
+   a=B$choc[i]+1
+   b=B$choc[i+1]+1
+   M[a,b]=M[a,b]+1
+   M[b,a]=M[b,a]+1
+ }
> rownames(M)=nom
> colnames(M)=nom

The problem, when we ask for a symmetric chord diagram, is that we cannot have Front – Front claims (since values on the diagonal are removed)

> library(circlize)
> chordDiagramFromMatrix(M,symmetric=TRUE)

So let's pretend that there could be some distinction in the dataset between the first and the second row. Like the first one is the ‘responsible’ driver. Or, for an insurer, the first one is your insured. Just to avoid this symmetry problem

> M=matrix(0,10,10)
> for(i in seq(1,nrow(B),by=2)){
+   a=B$choc[i]+1
+   b=B$choc[i+1]+1
+ M[a,b]=M[a,b]+1
+ }
> rownames(M)=paste("A",nom,sep=" ")
> colnames(M)=paste("B",nom,sep=" ")

If we visualize the chord diagram, this time it is more complex to analyze,

> chordDiagram(M)

Below we have the first row (say our driver, letter A) and on top, the second row (say the other driver, letter B),

In bodily injury claims, we observe a large proportion of Front – Front claims, as well as Front – Back ones. And, as expected, Back – Back claims are not that common…
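Those proportions can be read directly from the matrix, for instance (using the row and column names defined above):

# share of two-car claims where both vehicles were hit at the front
M["A Front","B Front"]/sum(M)
# share of Front - Back collisions, in either direction
(M["A Front","B Back"]+M["A Back","B Front"])/sum(M)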

Visualising a Circular Density

This afternoon, Jean-Luc asked me for some help with an old post I published, minuit, l'heure du crime, and with some graphs published a few days later, where I used a different visualisation, in another post.

The idea is that the hour can be seen as circular, in the sense that 23:58 is actually very close to 00:03. So when we use a nonparametric kernel estimator of time events, we have to take that property into account. More specifically, consider the density of an angle, i.e. a function f(ω) ≥ 0 that integrates to 1 over a full turn, ∫ f(ω) dω = 1 for ω in [0, 2π), with a circular relationship, in the sense that f(ω + 2π) = f(ω).
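To see why this matters, here is a tiny illustration of the trick used below, on a toy sample (the object names are mine): a naive kernel estimator splits a cluster around midnight into two peaks, while replicating the data one period before and one period after, and keeping only the central day, restores a single mode.

# toy hours, clustered around midnight
h  = c(23.2,23.7,23.9,0.1,0.3,0.8,12)
d0 = density(h)                            # naive estimate: the midnight peak is split
d1 = density(c(h-24,h,h+24),bw=d0$bw)      # replicate the sample one day before and after
keep = (d1$x>=0)&(d1$x<24)
plot(d1$x[keep],3*d1$y[keep],type="l",     # rescale by 3: the mass was spread over 3 copies
     xlab="hour",ylab="density")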

In the dataset sent by Jean-Luc, we have some thefts in a big city, in France. The dataset is a simple spreadsheet with one column, with ‘00:20’ or ‘17:45’ inside. Those are the (more or less precise) reported times of the thefts, as declared to the police.

B=read.table("Temp_Heures_VV.csv",header=TRUE,
  sep=";")
HM=as.character(B[,1])
H=substr(HM,1,(nchar(HM)-3))
M=substr(HM,(nchar(HM)-1),(nchar(HM)))
X=as.numeric(H)+as.numeric(M)/60

The time is a number from 0 to 24.

U=seq(0,1,by=1/250)
O=U*2*pi
U12=seq(0,1,by=1/24)
O12=U12*2*pi
OM=2*pi*X/24
XL=c(X-24,X,X+24)
d=density(X)
d=density(XL,bw=d$bw,n=1500)
I=which((d$x>=6)&(d$x<=30))
Od=d$x[I]/24*2*3.141592-3.141592/2
Dd=d$y[I]/max(d$y)+1

The idea to get a nice density estimate is to use a simple mirror technique: we use three copies of the data, one for today, one shifted to yesterday, and one shifted to tomorrow. Of course, we then have to use a shorter bandwidth than the default on this tripled sample, so we reuse the one estimated on the original data.

R=1/24/max(d$y)/3+1 
plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),
     type="l",axes=FALSE,xlab="",ylab="")
for(i in 3.14159/12*(0:12)){ 
  segments(-cos(i),-sin(i),cos(i),sin(i),col="grey")} 
segments(.9*cos(O12),.9*sin(O12),
         1.1*cos(O12),1.1*sin(O12))
text(.7,0,"6")
text(-.7,0,"18")
text(0,-.7,"12")
text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)
AX=R*cos(Od);AY=-R*sin(Od)
BX=Dd*cos(Od);BY=-Dd*sin(Od)
COUL=rep("blue",length(AX))
COUL[R<Dd]="red"
CM=cm.colors(200)
a=trunc(100*Dd/R)
COUL=CM[a]
segments(AX,AY,BX,BY,col=COUL,lwd=2)
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2)

The dotted line would be a uniform distribution over the day. The true distribution is the black bold line. The purple area is where we have more crimes, and the blue area is where we have fewer crimes. The blue area is equal to the purple one. There is a clear symmetry in the evening around midnight (but not during the day: 6 am is not the same as 6 pm). This graph is the circular visualisation of the kernel density estimator, the same way the rose diagram is the circular visualisation of the histogram.

Playing with Leaflet (and Radar locations)

Yesterday, my friend Fleur showed me some interesting features of the leaflet package, in R.

library(leaflet)

In order to illustrate, consider the locations of (fixed) radars (speed cameras), in several European countries. To get the data, use

download.file("http://carte-gps-gratuite.fr/radars/zones-de-danger-destinator.zip","radar.zip")
unzip("radar.zip")
 
 ext_radar=function(nf){
radar=read.table(file=paste("destinator/",nf,sep=""), sep = ",", header = FALSE, stringsAsFactors = FALSE)
 radar$type <- sapply(radar$V3, function(x) {z=as.numeric(unlist(strsplit(x, " ")[[1]])); return(z[!is.na(z)])})
  radar <- radar[,c(1,2,4)]
  names(radar) <- c("lon", "lat", "type")
  return(radar)}
 
L=list.files("./destinator/")
nl=nchar(L)
id=which(substr(L,4,8)=="Radar" & substr(L,nl-2,nl)=="csv")
 
radar_E=NULL
for(i in id) radar_E=rbind(radar_E,ext_radar(L[i]))

(to be honest, if you run that code, you will get several countries, but not France… but if you want to add it, you should be able to do so…). The first tool is based on popups. If you click on a point on the map, you get some information, such as the speed limit where you can find a radar. To get a nice pictogram, use

fileUrl <- "http://evadeo.typepad.fr/.a/6a00d8341c87ef53ef01310f9238e6970c-800wi"
download.file(fileUrl,"radar.png", mode = 'wb')
RadarICON <- makeIcon(  iconUrl = fileUrl,   iconWidth = 20, iconHeight = 20)

And then, use the following code to get a dynamic map, specifying the variable that should be used for the popup

m <- leaflet(data = radar_E) 
m <- m %>% addTiles() 
m <- m %>% addMarkers(~lon, ~lat, icon = RadarICON, popup = ~as.character(type))
m

Because the picture is a bit heavy, with almost 20K points, let us focus only on France,
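As an aside, another way to keep the full map responsive with that many markers is to cluster them; a sketch using the clustering option of the leaflet package (m2 is my own name, and this is not what the rest of the post does):

# group nearby radars into clusters, so the map stays light with ~20K points
m2 <- leaflet(data = radar_E) %>% addTiles() %>%
  addMarkers(~lon, ~lat, icon = RadarICON,
             popup = ~as.character(type),
             clusterOptions = markerClusterOptions())
m2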

Continue reading Playing with Leaflet (and Radar locations)