Tag Archives: map

Fairness and discrimination, PhD Course, #4 Wasserstein Distances and Optimal Transport

For the fourth course, we will discuss Wasserstein distances and optimal transport. Last week, we mentioned distances, dissimilarities and divergences. But before talking about Wasserstein, we should mention the Cramér distance.

Cramer and Wasserstein distances

The definition of the Cramér distance, for k\geq1, is d_{C,k}(F_{{A}},F_{{B}})=\left(\int_{\mathbb{R}}\big|F_{{A}}(x)-F_{{B}}(x)\big|^k dx\right)^{1/k}

while the Wasserstein distance will be (also for k\geq1) W_k(F_{{A}},F_{{B}})=\left(\int_0^1\big|F_{{A}}^{-1}(u)-F_{{B}}^{-1}(u)\big|^k du\right)^{1/k}

If we consider cumulative distribution functions, for the first one (Cramer), we consider some sort of “vertical” distance, while for the second one (Wasserstein), we consider some “horizontal” one,

Obviously, when k=1, the two distances are identical

c1 = function(x) abs(pnorm(x,0,1)-pnorm(x,1,2))
w1 = function(x) abs(qnorm(x,0,1)-qnorm(x,1,2))
integrate(c1,-Inf,Inf)$value
[1] 1.166631
integrate(w1,0,1)$value
[1] 1.166636

But when k>1, it is no longer the case.

c2 = function(x) (pnorm(x,0,1)-pnorm(x,1,2))^2
w2 = function(u) (qnorm(u,0,1)-qnorm(u,1,2))^2
sqrt(integrate(c2,-Inf,Inf)$value)
[1] 0.5167714
sqrt(integrate(w2,0,1)$value)
[1] 1.414214

For instance, we can illustrate this with a simple multinomial distribution, and its distance to some binomial one, using parametric inference based on distance minimization, \theta^\star=\text{argmin}\{d(p,q_{\theta})\} (where here the multinomial distribution has parameters \boldsymbol{p}=(.5,.1,.4), taking values respectively in \{0,1,10\}, while the binomial distribution has probabilities \boldsymbol{q}_{\theta}=(1-\theta,\theta), taking values in \{0,10\})
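
Below is a minimal sketch of such a distance-based estimation (my own code, not the one used for the slides), comparing the value of \theta obtained by minimizing the Cramér distance with the one obtained by minimizing the Wasserstein distance (both with k=1),

x_p = c(0,1,10) ; p = c(.5,.1,.4)      # multinomial distribution
x_q = c(0,10)                          # support of the binomial q_theta = (1-theta, theta)
Fp = function(x) sum(p[x_p <= x])      # cdf of p
Fq = function(x,theta) sum(c(1-theta,theta)[x_q <= x])
grid_x = seq(-1,11,by=.01)
cramer = function(theta) sum(abs(sapply(grid_x,Fp)-sapply(grid_x,Fq,theta=theta)))*.01
Qp = function(u) x_p[min(which(cumsum(p) >= u))]   # quantile functions
Qq = function(u,theta) x_q[min(which(cumsum(c(1-theta,theta)) >= u))]
grid_u = seq(.001,.999,by=.001)
wasserstein = function(theta) mean(abs(sapply(grid_u,Qp)-sapply(grid_u,Qq,theta=theta)))
optimize(cramer,c(0,1))$minimum
optimize(wasserstein,c(0,1))$minimum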

One can prove that

while

When k=1, observe that the distance is easy to compute when distributions are ordered

When k=2, the two distances are not equal

In the Gaussian (and the Bernoulli) case, we can get an expression for the distance (and much more as we will see later on)
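
For instance, in the univariate Gaussian case, a classical closed-form expression is W_2^2\big(\mathcal{N}(\mu_{{A}},\sigma_{{A}}^2),\mathcal{N}(\mu_{{B}},\sigma_{{B}}^2)\big)=(\mu_{{A}}-\mu_{{B}})^2+(\sigma_{{A}}-\sigma_{{B}})^2, so for the \mathcal{N}(0,1) and \mathcal{N}(1,2) distributions used above, W_2^2=(0-1)^2+(1-2)^2=2, i.e. W_2=\sqrt{2}\approx 1.414214, which is exactly the value obtained numerically above.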

There are several representations for W_2

And finally, we can also discuss W_{\infty}

Wasserstein distances, and optimal transport

The Wasserstein distance can also be written using some sort of expected value, when considering random variables instead of distributions, and some best-case scenario, or cheapest transportation cost,

which leads to the so-called Kantorovich problem

An alternative way to look at this problem is to consider a transport map, and a push-forward measure

This is simply

Of course, such mappings exist

We can then consider the Monge problem

And interestingly, those two problems are (somehow) equivalent
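
To fix the notation (a standard recap, not the exact formulas from the slides): the Kantorovich problem minimizes the expected cost over couplings of the two distributions, while the Monge problem minimizes over deterministic (push-forward) maps,

W_k^k(\mu_{{A}},\mu_{{B}})=\inf_{\pi\in\Pi(\mu_{{A}},\mu_{{B}})}\mathbb{E}_\pi\big[d(X_{{A}},X_{{B}})^k\big]\quad\text{and}\quad\inf_{\mathcal{T}:\,\mathcal{T}_{\#}\mu_{{A}}=\mu_{{B}}}\mathbb{E}_{\mu_{{A}}}\big[d(X_{{A}},\mathcal{T}(X_{{A}}))^k\big]

and the two problems coincide whenever the optimal coupling is concentrated on the graph of a map, i.e. \pi^\star=(\text{id},\mathcal{T}^\star)_{\#}\mu_{{A}} (which is the case, for instance, for the quadratic cost and absolutely continuous \mu_{{A}}, by Brenier’s theorem).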

Discrete case

If \boldsymbol{a}_{{A}}\in\mathbb{R}_+^{\color{red}{n_{{A}}}} and \boldsymbol{a}_{{B}}\in\mathbb{R}_+^{\color{blue}{n_{{B}}}}, define U(\boldsymbol{a}_{{A}},\boldsymbol{a}_{{B}})=\big\lbrace M\in\mathbb{R}_+^{\color{red}{n_{{A}}}\times\color{blue}{n_{{B}}}}:M\boldsymbol{1}_{\color{blue}{n_{{B}}}}=\boldsymbol{a}_{A}\text{ and }{M}^\top\boldsymbol{1}_{\color{red}{n_{{A}}}}=\boldsymbol{a}_{B}\big\rbrace. For convenience, let U_{\color{red}{n_{{A}}},\color{blue}{n_{{B}}}} denote \displaystyle{U\left(\boldsymbol{1}_{n_{{A}}},\frac{\color{red}{n_{{A}}}}{\color{blue}{n_{{B}}}}\boldsymbol{1}_{n_{{B}}}\right)} (so that U_{\color{red}{n},\color{blue}{n}} is the set of permutation matrices associated with \mathcal{S}_n). Let C_{i,j}=d(x_i,y_{j})^k, so that W_k^k(\boldsymbol{x},\boldsymbol{y}) = \underset{P\in U_{\color{red}{n_{{A}}},\color{blue}{n_{{B}}}}}{\min} \Big\lbrace \langle P,C\rangle \Big\rbrace, where \langle P,C\rangle = \sum_{i=1}^{\color{red}{n_{{A}}}} \sum_{j=1}^{\color{blue}{n_{{B}}}} P_{i,j}C_{i,j}, and then consider P^* \in \underset{P\in U_{\color{red}{n_A},\color{blue}{n_B}}}{\text{argmin}} \Big\lbrace \langle P,C\rangle \Big\rbrace. For the slides, if we have the same sample sizes in the two groups, we have

we can illustrate below (with costs, or distances)

And with different group sizes,

i.e., if we consider real datasets
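
To make the linear program above concrete, here is a minimal sketch (my own toy example, not the code used for the slides, assuming the lpSolve package is available) that solves the discrete Kantorovich problem for two small distributions with different supports,

library(lpSolve)
x = c(0,1,10) ; a = c(.5,.1,.4)   # first distribution (weights a on points x)
y = c(0,10)   ; b = c(.6,.4)      # second distribution (weights b on points y)
C = abs(outer(x,y,"-"))           # cost matrix, here with k=1
nA = length(a) ; nB = length(b)
row_id = rep(1:nA,times=nB)       # row index of each entry of P (column-major order)
col_id = rep(1:nB,each=nA)        # column index of each entry of P
A_row = t(sapply(1:nA,function(i) as.numeric(row_id==i)))   # constraints: row sums = a
A_col = t(sapply(1:nB,function(j) as.numeric(col_id==j)))   # constraints: column sums = b
sol = lp("min",as.vector(C),rbind(A_row,A_col),rep("=",nA+nB),c(a,b))
P = matrix(sol$solution,nA,nB)    # optimal transport plan
sum(P*C)                          # Wasserstein-1 distance between the two distributions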

And as usual, we can consider some penalized version. Define \mathcal{E}(P) = -\sum_{i=1}^{\color{red}{n_{{A}}}} \sum_{j=1}^{\color{blue}{n_{{B}}}} P_{i,j}\log P_{i,j}, or \mathcal{E}'(P) = -\sum_{i=1}^{\color{red}{n_{{A}}}} \sum_{j=1}^{\color{blue}{n_{{B}}}} P_{i,j}\big[\log P_{i,j}-1\big]. Define P^*_\gamma = \underset{P\in U_{\color{red}{n_{{A}}},\color{blue}{n_{{B}}}}}{\text{argmin}} \Big\lbrace \langle P,C\rangle -\gamma \mathcal{E}(P) \Big\rbrace. The problem is then strictly convex.

Sinkhorn relaxation

This idea is related to the following theorem

Consider a simple optimal transportation problem between 6 points and 6 other points,

set.seed(123)
x = (1:6)/7
y = runif(9)
x
[1] 0.14 0.29 0.43 0.57 0.71 0.86
y[1:6]
[1] 0.29 0.79 0.41 0.88 0.94 0.05
library(T4transport)
Wxy = wasserstein(x,y[1:6])
Wxy$plan

that we can visualize below (the first observation of \boldsymbol{x} is matched with the last one of \boldsymbol{y}, the second observation of \boldsymbol{x} is matched with the first one of \boldsymbol{y}, etc)

We observe that we simply match according to ranks.
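
As a quick check (my addition), the optimal plan indeed matches observations having the same rank,

rank(x)        # x is already sorted, so the ranks are 1,...,6
rank(y[1:6])   # the first x is matched with the y of rank 1 (the last one), etc.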

But we can also use a penalized version

Sxy = sinkhorn(x, y[1:6], p = 2, lambda = 0.001)
Sxy$plan

here with a very small penalty

or a larger one

Sxy = sinkhorn(x, y[1:6], p = 2, lambda = 0.05)
Sxy$plan
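
Under the hood, the entropic-regularized plan can be obtained with simple alternating matrix-scaling iterations. Here is a toy sketch of the Sinkhorn iterations (my own illustration, not the implementation used in T4transport), for two discrete distributions with weights a and b and a cost matrix C,

sinkhorn_toy = function(C, a, b, gamma = .1, niter = 1000){
  K = exp(-C/gamma)                    # Gibbs kernel built from the cost matrix
  u = rep(1,length(a)) ; v = rep(1,length(b))
  for(it in 1:niter){                  # alternate the two scalings until the marginals match
    u = a/(K %*% v)
    v = b/(t(K) %*% u)
  }
  diag(as.vector(u)) %*% K %*% diag(as.vector(v))   # the (approximate) optimal plan
}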

In the discrete case, optimal transport can be related to the Hardy-Littlewood-Pólya inequality, which is connected to the idea of matching based on ranks (corresponding to a monotone mapping function)

We have then

In the bivariate discrete case, we have

Optimal mapping

We have mentioned that, in the univariate setting, the optimal map is \mathcal{T}^\star=F_{{B}}^{-1}\circ F_{{A}},

and clearly, \mathcal{T}^\star is increasing. In the Gaussian case, for example, x_{{B}}=\mathcal{T}^\star(x_{{A}})= \mu_{{B}}+\sigma_{{B}}\sigma_{{A}}^{-1} (x_{{A}}-\mu_{{A}}). In the multivariate case, we need a more general notion of monotonicity to define an “increasing” mapping \mathcal{T}^\star:\mathbb{R}^k\to\mathbb{R}^k.
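
As a quick numerical check (my addition), for the \mathcal{N}(0,1) and \mathcal{N}(1,2) distributions used at the beginning of the post, \mathcal{T}^\star(x)=F_{{B}}^{-1}(F_{{A}}(x)) should coincide with \mu_{{B}}+\sigma_{{B}}\sigma_{{A}}^{-1}(x-\mu_{{A}})=1+2x,

xA = seq(-2,2,by=.5)
cbind(qnorm(pnorm(xA,0,1),1,2), 1+2*xA)   # the two columns should be identical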

In the Gaussian case, for example, we have a linear mapping, \boldsymbol{x}_{{B}} = \mathcal{T}^\star(\boldsymbol{x}_{{A}})=\boldsymbol{\mu}_{{B}} + \boldsymbol{A}(\boldsymbol{x}_{{A}}-\boldsymbol{\mu}_{{A}}) where \boldsymbol{A} is a symmetric positive matrix that satisfies \boldsymbol{A}\boldsymbol{\Sigma}_{{A}}\boldsymbol{A}=\boldsymbol{\Sigma}_{{B}}, which has a unique solution given by \boldsymbol{A}=\boldsymbol{\Sigma}_{{A}}^{-1/2}\big(\boldsymbol{\Sigma}_{{A}}^{1/2}\boldsymbol{\Sigma}_{{B}}\boldsymbol{\Sigma}_{{A}}^{1/2}\big)^{1/2}\boldsymbol{\Sigma}_{{A}}^{-1/2}, where \boldsymbol{M}^{1/2} is the square root of the symmetric positive matrix \boldsymbol{M}, based on the Schur decomposition (\boldsymbol{M}^{1/2} is itself a symmetric positive matrix). In R, for example, use the expm package,

M = expm::sqrtm(matrix(c(1,1.2,1.2,2),2,2))
M
[,1] [,2]
[1,] 0.8244771 0.5658953
[2,] 0.5658953 1.2960565
M %*% M
[,1] [,2]
[1,] 1.0 1.2
[2,] 1.2 2.0
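
As a small check (my own example, with arbitrary covariance matrices), we can verify that the resulting matrix satisfies \boldsymbol{A}\boldsymbol{\Sigma}_{{A}}\boldsymbol{A}=\boldsymbol{\Sigma}_{{B}},

SigmaA = matrix(c(1,.5,.5,1),2,2)
SigmaB = matrix(c(2,1.2,1.2,3),2,2)
SA12  = expm::sqrtm(SigmaA)        # Sigma_A^{1/2}
SA12i = solve(SA12)                # Sigma_A^{-1/2}
A = SA12i %*% expm::sqrtm(SA12 %*% SigmaB %*% SA12) %*% SA12i
A %*% SigmaA %*% A                 # should return SigmaB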

Optimal mapping, on real data

To illustrate, it is possible to consider the optimal matching between the heights of n men and n women,

For another example (discussed in Optimal Transport for Counterfactual Estimation: A Method for Causal Inference – with a nice R notebook created by Ewen), consider Black and non-Black mothers in the U.S.

or the joint mapping, in dimension 2

We will spend more time on those functions (and the related concept) in a few weeks, when discussing barycenters and geodesics… More details in the slides (online) and in the forthcoming textbook,

Sharing pictures from holidays in the Canadian Rockies (with R)

My kids have a very popular blog (at least among their grandmothers) where they frequently post pictures from everyday life (since they live 5000km away from them), as well as pictures taken during holidays. This afternoon, I tried to use the popupImage function from the mapview package to post pictures on a map (to explain where we spent our holiday this summer). This post is just to keep track of that code.

First, we need to load the appropriate R packages

library(leaflet)
library(mapview)

Then, we take a picture, and we locate it, for instance Mirror Lake (on the trail to Lake Agnes). Since leaflet uses OpenStreetMap, I recommend using it also to get the coordinates (and not Google Maps… coordinates can be slightly different)

df=data.frame(lat =51.41603, long=-116.23946,
nom = "Miror Lake",photo="http://freakonometrics.free.fr/jaspeR/_DSC5967.jpg")

I guess you could also use the metadata, if you take pictures with a cell phone that records the location… but I am (very) old fashioned, and still use a camera to take pictures. Then you can add a dozen pictures

df=rbind(df, data.frame(lat =51.4164, long=-116.2442,
nom = "Lake Agnes",photo="http://freakonometrics.free.fr/jaspeR/_DSC6003.jpg"))
df=rbind(df, data.frame(lat =51.3215642,long=-116.193718,
nom="Moraine Lake",photo="http://freakonometrics.free.fr/jaspeR/_DSC5957.jpg"))

From that dataframe, we need two kinds of information: the location, and the url of the picture,

data_df=df[,c("lat","long")]
images = as.character(df$photo)

Then we can create the leaflet map (sorry for possible typos, but wordpress tends to mangle the > symbol, which makes the R pipe operator hard to read)

m = leaflet(data_df) %>%
  addTiles() %>%
  addCircleMarkers(
    fillOpacity = 0.8, radius = 5,
    lng = ~long, lat =~lat, 
    popup = popupImage(images)
  )

and export it (in a nice html file)

library(htmlwidgets)
saveWidget(m, file="jaspR.html")

Extracting information from a picture, round 2

Yesterday, I published a post on extracting information from a picture, but it did not work as expected. I claimed that it was because of the original graph I had. More precisely, the map was based on some weird projection that I could not reconcile with standard ones. So I decided to cheat a little bit, by creating my own map,

Colors are ugly, I know. But I got them using

u = seq(0,1,length=30)
couleurs = rgb(u,rev(u),0,1)

The picture is

url = "https://freakonometrics.hypotheses.org/files/2018/12/chomage3.png"
library(pixmap)
library(png)
IMG = readPNG(url)

I used those colors because it would make things easy when extracting reds and greens… (in the code below, x1, x2, y1 and y2 are the borders of the map, obtained as in yesterday’s post)

ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Let us see if the contour of France can be overlaid

library(maptools)
library(PBSmapping)
download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds","FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR)
par(mfrow=c(1,1))
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

We have a perfect match, don’t we…?

Let us now use a shapefile based on départements,

download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm2.rds","FRA_adm2.rds")
FR2=readRDS("FRA_adm2.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR2)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
k=35
pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
points(pX,pY)

For instance, the thirty-fifth polygon is the following

Let us extract the color inside that polygon

u=1:nrow(ROUGE)
v=1:ncol(ROUGE)

The code would be

pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
E=expand.grid(u,v)
M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
image(u,v,ROUGE*M,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(pX,pY)

Now, for each département, I extract the average value of red, and the average value of green,

extract_info = function(k){
  pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
  pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
  E=expand.grid(u,v)
  M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
  nom=FR2[FR2$OBJECTID ==k,c("NAME_2","CCA_2")]
  return(c(as.numeric(nom$CCA_2),sum(ROUGE[M==1])/sum(M),sum(VERT[M==1])/sum(M)))
}
donnees = Vectorize(extract_info)(1:95)
x2=donnees[1,]
y2=donnees[2,]/(donnees[2,]+donnees[3,])
df2=data.frame(dpt=x2,extract=y2)
x1=as.numeric(as.character(baseChomage$no))
y1=baseChomage$chomagePremierTrimestre2017
df1=data.frame(dpt=x1,obs=y1)
df=merge(df1,df2)
plot(df$obs,df$extract)

On the graph below, we have the original values on the x-axis (unemployment, in percent) and the “average value of red”. Note that the points are almost perfectly correlated… The accumulation of points can be explained by the fact that, on the original map, different values could be mapped to the same color

So far, I can claim that we’ve been able to extract useful information from the original picture.

Consider now the case where the original map is the following one

The picture can be downloaded using the following code

url = "https://freakonometrics.hypotheses.org/files/2018/12/chomage5.png"
library(pixmap)
library(png)
IMG = readPNG(url)

Here, the colors are obtained from a standard palette,

library(pals)
couleurs = rev(brewer.rdylgn(30))

Here again, we use our previous code to extract reds and greens

And if we use our function

extract_info = function(k){
  pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
  pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
  E=expand.grid(u,v)
  M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
  nom=FR2[FR2$OBJECTID ==k,c("NAME_2","CCA_2")]
  return(c(as.numeric(nom$CCA_2),sum(ROUGE[M==1])/sum(M),sum(VERT[M==1])/sum(M)))
}
donnees = Vectorize(extract_info)(1:95)
x2=donnees[1,]
y2=donnees[2,]/(donnees[2,]+donnees[3,])
df2=data.frame(dpt=x2,extract=y2)
x1=as.numeric(as.character(baseChomage$no))
y1=baseChomage$chomagePremierTrimestre2017
df1=data.frame(dpt=x1,obs=y1)
df=merge(df1,df2)
plot(df$obs,df$extract)

we obtain the following graph

Here again, we have a strong correlation, not to say comonotonic variables (in the sense that ranks are identical). Nice, isn’t it ?

Extracting information from a picture, round 1

This week, I wanted to get the information displayed on the nice map below. I could not get access to the original dataset, per zip code… and I was wondering if (assuming that the map had a high enough resolution) it was actually possible to extract the information, using a simple R function…

As we can see, there is red and green on the map, and I would love to know which are the green and the red cities, in France. One important issue is actually the background. Here it is nice, it is white… but white is a strange color, achromatic and very light. More specifically, if I look for red areas, the white background is also very red (and very green, too, since white has full red and green components). So, to avoid those issues, I used gimp to change the background to black: where it is black, it is neither red nor green!

Let us get the map, and extract information from the file

url="https://freakonometrics.hypotheses.org/files/2018/12/inondation3.png"
download.file(url,"inondation3.png")
image="inondation3.png"
library(pixmap)
library(png)
IMG=readPNG(image)

Information is stored in several matrices – or rather in a three-dimensional array. Dimension 1 is the height of the picture (in pixels), dimension 2 is the width, and the third one is either 1 (red), 2 (green) or 3 (blue), based on the rgb decomposition of each pixel. Then, I try to find the borders of the map

nl=dim(IMG)[1]
nc=dim(IMG)[2]
MAT=(IMG[,,1]+IMG[,,2])/2
x=apply(MAT,2,max)
plot(x,type="l")

When it is zero, it means there is no color in that column of the matrix, i.e. it is completely black (initially, I used the mean function, but the maximum really behaves like a step function)

y=apply(MAT,1,max)
plot(y,type="l")

Let us find cutoff values, on the left and on the right, on top and on the bottom

image(1:nc,1:nl,t(MAT))
abline(v=min(which(x>.2)),col="blue")
abline(v=max(which(x>.2)),col="blue")
abline(h=min(which(y>.2)),col="blue")
abline(h=max(which(y>.2)),col="blue")

We obtain the following (forget about the fact that – somehow – France is upside-down)

We can zoom in, just to make sure that our borders are fine

par(mfrow=c(1,2))
image(min(which(x>.2))+(-5):5,1:nl,t(MAT)[min(which(x>.2))+(-5):5,])
abline(v=min(which(x>.2))+(-5):5,col="white")
abline(v=min(which(x>.2)),col="blue")
x1=min(which(x>.2))-1

and similarly for the right border

image(max(which(x>.2))+(-5):5,1:nl,t(MAT)[max(which(x>.2))+(-5):5,])
abline(v=max(which(x>.2))+(-5):5,col="white")
abline(v=max(which(x>.2)),col="blue")
x2=max(which(x>.2))+1
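
The vertical cutoffs are obtained in exactly the same way (these two lines are missing above, but y1 and y2 are used below); presumably something like

y1=min(which(y>.2))-1
y2=max(which(y>.2))+1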

So far so good. Let us keep the subpart of the picture,

image(x1:x2,y1:y2,t(MAT)[x1:x2,y1:y2])

Now, let us focus on the red part / component of that picture

ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))

That’s not bad, is it? And we can get a similar graph for the green part

VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Now, I wanted to overlay a map of France on that one. Using shapefiles of administrative regions, it would then be possible to get the proportion of red and green parts (per département, canton, etc). As a starting point (before going to ‘départements’), let us use a standard shapefile for France

library(maptools)
library(PBSmapping)
url="http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds"
download.file(url,"FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR)
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

We try here to rescale it: the left border of the polygon should match the left border of the picture, the right border the right one, and the same holds for the top and the bottom,

Unfortunately, even when changing the projection technique, I could not perfectly match the contour of France. I am quite sure that it is a projection problem! But I did try a dozen popular ones, with no success… so if anyone has a clever idea…

Public Transport in Paris

To continue the series of posts on visualization and manipulation of open data, I will reuse some of Tony’s code, from the Data Science for Actuaries program, to visualize public transport in Paris (and the Paris region). If I have time in the coming days, I will do an analysis of the metro network, compared with other large European cities. To start with, we retrieve the data, provided by the open data website of the stif, the Ile de France transport authority (https://opendata.stif.info). The data are split by semester, which makes the code a bit heavy… but it is not really more complicated.

library(dplyr)
library(stringr)
library(ggplot2)
library(xlsx)
library(ggmap)

We start by reading all the files online

nbvalid = list()
download.file("https://opendata.stif.info/explore/dataset/emplacement-des-gares-idf-data-generalisee/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","Gares.csv")
gares = read.csv("Gares.csv", sep=";", header = TRUE)
distr_pers = list()
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-profils-horaires-par-jour-type-1er-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","horaires1.csv")
distr_pers$S1 = read.csv("horaires1.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-profils-horaires-par-jour-type-2e-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","horaires2.csv")
distr_pers$S2 = read.csv("horaires2.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-nombre-de-validations-par-jour-1er-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","validations1.csv")
nbvalid$S1 = read.csv("validations1.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-nombre-de-validations-par-jour-2e-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","validations2.csv")
nbvalid$S2 = read.csv("validations2.csv", sep=";", header = TRUE)
download.file("https://freakonometrics.free.fr/Correspondance_NOM.csv","Correspondance_NOM.csv")
Cooresp = read.csv("Correspondance_NOM.csv", sep=";", header = TRUE)

We then need to define the 2017 school holiday dates

Vacances = list()
Vacances$Noel = append(seq(from = as.Date("01/01/2017", format="%d/%m/%Y"), to=as.Date("02/01/2017", format="%d/%m/%Y"), by=1),seq(from = as.Date("24/12/2017", format="%d/%m/%Y"), to=as.Date("31/12/2017", format="%d/%m/%Y"), by=1))
Vacances$Ski = seq(from = as.Date("04/02/2017", format="%d/%m/%Y"), to=as.Date("19/02/2017", format="%d/%m/%Y"), by=1)
Vacances$Printemps = seq(from = as.Date("02/04/2017", format="%d/%m/%Y"), to=as.Date("17/04/2017", format="%d/%m/%Y"), by=1)
Vacances$Ete = seq(from = as.Date("08/07/2017", format="%d/%m/%Y"), to=as.Date("03/09/2017", format="%d/%m/%Y"), by=1)
Vacances$Toussaint = seq(from = as.Date("21/10/2017", format="%d/%m/%Y"), to=as.Date("05/11/2017", format="%d/%m/%Y"), by=1)
Vacances$All=Reduce(append,Vacances)

Then, some cleaning is needed, to handle duplicated stations (for instance when both the RER and the metro stop there), and to retrieve their spatial location (latitude and longitude)

gares$NOM_LONG = as.character(gares$NOM_LONG)
DD = (gares$NOM_LONG[duplicated(gares$NOM_LONG)])
i = (gares$NOM_LONG %in% DD) & gares$MODE_=="Metro"
gares$NOM_LONG[i] = paste(gares$NOM_LONG[i],"M", sep="-")
i = (gares$NOM_LONG %in% DD) & gares$MODE_=="RER"
gares$NOM_LONG[i] = paste(gares$NOM_LONG[i],"R", sep="-")
gares$NOM_LONG=factor(gares$NOM_LONG)
 
a=as.character(gares$Geo.Point)
gares$Y=as.numeric(str_extract_all(a,"^[0-9]+.[0-9]+"))
gares$X=as.numeric(str_extract_all(a,"[0-9]+.[0-9]+$"))

We then count the number of ticket validations, per station

Manip_nbvalid = function(Data,DD,gares) {
  i=grep("^[a-zA-Z]+",as.character(Data$NB_VALD))
  Data$NB_VALD[i]=as.integer(5)
  i=is.na(Data$NB_VALD)
  Data$NB_VALD[i]=as.integer(5)
  Data$LIBELLE_ARRET=as.character(Data$LIBELLE_ARRET)
  i=(Data$LIBELLE_ARRET %in% DD) & Data$CODE_STIF_TRNS=="100"
  Data$LIBELLE_ARRET[i]=paste(Data$LIBELLE_ARRET[i],"M", sep="-")
  i=(Data$LIBELLE_ARRET %in% DD) & Data$CODE_STIF_TRNS=="800"
  Data$LIBELLE_ARRET[i]=paste(Data$LIBELLE_ARRET[i],"R", sep="-")
 
  for (i in seq(1,nrow(Cooresp))) { Data$LIBELLE_ARRET=gsub(as.character(Cooresp$nbval[i]),as.character(Cooresp$gares[i]),Data$LIBELLE_ARRET)
  }
gares$NOM_LONG=as.character(gares$NOM_LONG)
Data=dplyr::left_join(Data,gares[,c("NOM_LONG","X","Y")],by=c("LIBELLE_ARRET"="NOM_LONG"))
  Data=Data[is.na(Data$CODE_STIF_ARRET)==FALSE,]
  Data=Data[Data$CODE_STIF_ARRET!="ND",]
  Data$NB_VALD=as.integer(as.character(Data$NB_VALD))
  Data$JOUR=as.Date(Data$JOUR)
  Data$CODE_STIF_TRNS=factor(Data$CODE_STIF_TRNS)
  Data$CODE_STIF_RES=factor(Data$CODE_STIF_RES)
  Data$CODE_STIF_ARRET=factor(Data$CODE_STIF_ARRET)
  Data$LIBELLE_ARRET=factor(Data$LIBELLE_ARRET)
  Data$ID_REFA_LDA=factor(Data$ID_REFA_LDA)
  Data$CATEGORIE_TITRE=factor(Data$CATEGORIE_TITRE)
  Data$JOURSEM=weekdays(Data$JOUR)  
  return(Data)
}
nbvalid=lapply(nbvalid, Manip_nbvalid,DD=DD,gares=gares)

We now have all the counts, for all the stations. We then split them by hourly time slot

Manip_dist_pers = function(DataFrame) {
  DataFrame=DataFrame[(DataFrame$TRNC_HORR_60)!="ND",]
  DataFrame$TRNC_HORR_60=factor(DataFrame$TRNC_HORR_60, levels = c("0H-1H", "1H-2H", "2H-3H", "3H-4H", "4H-5H", "5H-6H", "6H-7H", "7H-8H", "8H-9H", "9H-10H", "10H-11H", "11H-12H", "12H-13H", "13H-14H", "14H-15H", "15H-16H", "16H-17H", "17H-18H", "18H-19H", "19H-20H", "20H-21H", "21H-22H", "22H-23H", "23H-0H")) 
  DataFrame=DataFrame[(DataFrame$CODE_STIF_ARRET)!="ND",]
  DataFrame$CODE_STIF_ARRET=factor(DataFrame$CODE_STIF_ARRET)
DataFrame$TRANCHE=str_extract(as.character(DataFrame$TRNC_HORR_60),"^[0-9]{1,2}")
  return(DataFrame)
}
distr_pers=lapply(distr_pers, Manip_dist_pers)

We can then retrieve the distribution of validations, per day

distr_JOURV=list()
distr_JOURV$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM,CATEGORIE_TITRE) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOURV$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM,CATEGORIE_TITRE) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOURV$Y=rbind(distr_JOURV$S1,distr_JOURV$S2)
distr_JOUR=list()
distr_JOUR$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOUR$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOUR$Y=rbind(distr_JOUR$S1,distr_JOUR$S2)
distr_JOUR_Station=list()
distr_JOUR_Station$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM,CODE_STIF_ARRET,LIBELLE_ARRET) %>% summarise(NB_VALD=sum(NB_VALD), X=max(X), Y=max(Y))
distr_JOUR_Station$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM,CODE_STIF_ARRET,LIBELLE_ARRET) %>% summarise(NB_VALD=sum(NB_VALD), X=max(X), Y=max(Y))
Manip_dist_Jour = function(DataFrame) {
  DataFrame$JOURSEM=factor(DataFrame$JOURSEM,levels = c("lundi","mardi","mercredi","jeudi","vendredi","samedi","dimanche"))
  DataFrame$TypeJ=character(nrow(DataFrame))
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Ete]="Ete"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Noel]="Noel"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Ski]="Ski"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Printemps]="Printemps"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Toussaint]="Toussaint"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$All == FALSE]="HorsVacances"
  DataFrame$CAT_JOUR=character(nrow(DataFrame))
  DFr=list()
  ii=(DataFrame$JOURSEM!="samedi" & DataFrame$JOURSEM!="dimanche") & DataFrame$TypeJ!="HorsVacances"
  DataFrame$CAT_JOUR[ii]="JOVS"
  DFr$JOVS$Data = DataFrame[ii,]
  DFr$JOVS$Nom="Jours ouvrés Vacances Scolaires"
  ii=(DataFrame$JOURSEM!="samedi" & DataFrame$JOURSEM!="dimanche") & DataFrame$TypeJ=="HorsVacances"
  DataFrame$CAT_JOUR[ii]="JOHV"
  DFr$JOHV$Data = DataFrame[ii,]
  DFr$JOHV$Nom="Jours ouvrés Hors Vacances Scolaires"
  ii=DataFrame$JOURSEM=="samedi" & DataFrame$TypeJ!="HorsVacances"
  DataFrame$CAT_JOUR[ii]="SAVS"
  DFr$SAVS$Data = DataFrame[ii,]
  DFr$SAVS$Nom="Samedi VS"
  ii=DataFrame$JOURSEM=="samedi" & DataFrame$TypeJ=="HorsVacances"
  DataFrame$CAT_JOUR[ii]="SAHV"
  DFr$SAHV$Data = DataFrame[ii,]
  DFr$SAHV$Nom="Samedi HV"
  ii=DataFrame$JOURSEM=="dimanche"
  DataFrame$CAT_JOUR[ii]="DIJFP"
  DFr$DIJFP$Data = DataFrame[ii,]
  DFr$DIJFP$Nom="Dimanche"
  return(list("TypeJ"=DFr,"Distr"=DataFrame))
}
res=Manip_dist_Jour(distr_JOUR$Y)
distr_TypeJ=res$TypeJ
distr_JOUR$Y=res$Distr
res=Manip_dist_Jour(distr_JOURV$Y)
distr_TypeJV=res$TypeJ
distr_TypeJ_Station=list()
res=Manip_dist_Jour(distr_JOUR_Station$S1)
distr_TypeJ_Station$S1=res$TypeJ
distr_JOUR_Station$S1=res$Distr
res=Manip_dist_Jour(distr_JOUR_Station$S2)
distr_TypeJ_Station$S2=res$TypeJ
distr_JOUR_Station$S2=res$Distr
rm(res)

We can then draw all sorts of graphs, for example the number of validations, per day, between January 1st and December 31st, as a function of the day of the week.

g0 = ggplot(distr_JOUR$Y, aes(x=JOUR, y=NB_VALD, color = JOURSEM)) + geom_point()
g0 = g0 + labs(title="Nombres de validations chaque jours de 2017", x="Date", y="Nombre de validations")
g0

We can see the sharp drop on weekdays during the summer holidays. Instead of looking at the whole year, we can also look within a single day

Fct_FqH = function(DataFrame,distr_pers) {
DataFrame=dplyr::full_join(DataFrame,distr_pers[,c("CAT_JOUR","CODE_STIF_ARRET","pourc_validations","TRANCHE","TRNC_HORR_60")],by=c("CODE_STIF_ARRET"="CODE_STIF_ARRET","CAT_JOUR"="CAT_JOUR"))
  DataFrame$NB_VALD=DataFrame$NB_VALD*DataFrame$pourc_validations
  return(DataFrame)
}
distr_JOUR_Station$S1=Fct_FqH(distr_JOUR_Station$S1, distr_pers$S1)
distr_JOUR_Station$S2=Fct_FqH(distr_JOUR_Station$S2, distr_pers$S2)
distr_JOUR_Station$Y=rbind(distr_JOUR_Station$S1,distr_JOUR_Station$S2)
distr_JOUR_Station$Y=distr_JOUR_Station$Y[is.na(distr_JOUR_Station$Y$NB_VALD)==FALSE,]

We can then draw a graph, as a function of the hourly time slot, for certain periods, for example on weekdays outside school holidays (per hour, we have here a boxplot)

Graphique_HOR = function(DataFrame,TypeJ,NomJ) {
  # Plot of the distribution of traffic by hourly time slot and type of day
  g1 = ggplot(DataFrame[DataFrame$CAT_JOUR==TypeJ,], aes(x=TRNC_HORR_60, y=pourc_validations, color = TRNC_HORR_60,las=2)) + geom_boxplot() + ylim(c(0,100))
  g1 = g1 + labs(title=paste(c("Distribution des validations par tranche horaire ",NomJ), sep="", collapse = ""), x="Jours", y="Nombre de validations") +
  theme(axis.text.x= element_text(size = 8, angle = 45))
  g1
}
Graphique_HOR(distr_JOUR_Station$Y,"JOHV","Jours ouvrés Hors Vacances Scolaires")

or on Saturdays

Graphique_HOR(distr_JOUR_Station$Y,"SAHV","Samedi Hors Vacances Scolaires")

We can try a bit of cartography. As with many metro/bus networks around the world, we often only have access to the entry nodes of the network (and not the exit nodes). But it remains interesting, and very informative

get_Paris1 = get_map(c(2.3448688,48.8613029), zoom = 11)
Paris1 = ggmap(get_Paris1)

Per station, and per hour, we can look at the number of ticket validations

Median_Valid = distr_JOUR_Station$Y %>% group_by(CAT_JOUR, LIBELLE_ARRET, X, Y) %>% summarise(NB_VALD=median(NB_VALD))
Median_Valid_Station = distr_JOUR_Station$Y %>% group_by(CAT_JOUR, TRNC_HORR_60,LIBELLE_ARRET, X, Y) %>% summarise(NB_VALD=median(NB_VALD))
 
Carte_Densite = function(Nom,Carte,TypeJ,HOR,DataFrame) {
if (HOR=="") {
    ii=DataFrame$CAT_JOUR==TypeJ
    NomSave=paste("Densité des validations",Nom,TypeJ)
  }
  else {
    ii=DataFrame$CAT_JOUR==TypeJ & DataFrame$TRNC_HORR_60==HOR
    NomSave=paste("Densité des validations",Nom,TypeJ,HOR)
  }
  U=DataFrame[ii,]
  n=round(log10(median(U$NB_VALD)))-1
  n=max(1,10^n)
  Nb_Repete_Stations=ceiling(U$NB_VALD/n)
  U$Size_Stations=U$NB_VALD/max(U$NB_VALD)
  Z=U[rep(1:nrow(U),Nb_Repete_Stations),]
  Carte_A= Carte + geom_point(aes(x=X,y=Y),data=Z,col="coral", size=10*Z$Size_Stations) +
    geom_density2d(data = Z, aes(x=X,y=Y), size = 0.5) + 
    stat_density2d(data = Z, aes(x=X,y=Y,fill = ..level.., alpha = ..level..),size = 0.01, bins = 16, geom = "polygon") +
    scale_fill_gradient(low = "chartreuse", high = "red",guide = FALSE) + 
    scale_alpha(range = c(0, 0.3), guide = FALSE) + ggtitle(NomSave) +
    theme(axis.title.x = element_blank(), axis.title.y = element_blank(), axis.text.x= element_blank(), axis.text.y = element_blank())
 
  suppressWarnings(print(Carte_A))
}

For example, if we look at ticket validations between 5 and 6 in the morning, we get

L=levels(Median_Valid_Station$TRNC_HORR_60)
Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[6],Median_Valid_Station)

with many towns in the near suburbs. Later in the day, between 11am and noon, the validations are more concentrated in the heart of Paris, with La Défense on the left and Saint-Denis to the north

Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[12],Median_Valid_Station)

At the end of the day, Paris and especially La Défense stand out

Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[19],Median_Valid_Station)

Fun, isn’t it?

Linear Regression, with Map-Reduce

Sometimes, with big data, matrices are too big to handle, and we need tricks to still do the computations numerically. Map-Reduce is one of those: with several cores (or machines), it is possible to split the problem, map the computation on each machine, and then aggregate the results back at the end.

Consider the case of the linear regression, \mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon} (with classical matrix notations). The OLS estimate of \mathbf{\beta} is \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}. To illustrate, consider a not too big dataset, and run some regression.

lm(dist~speed,data=cars)$coefficients
(Intercept)       speed 
 -17.579095    3.932409
y=cars$dist
X=cbind(1,cars$speed)
solve(crossprod(X,X))%*%crossprod(X,y)
           [,1]
[1,] -17.579095
[2,]   3.932409

How is this computed in R? Actually, it is based on the QR decomposition of \mathbf{X}, \mathbf{X}=\mathbf{Q}\mathbf{R}, where \mathbf{Q} is an orthogonal matrix (ie \mathbf{Q}^T\mathbf{Q}=\mathbb{I}). Then \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{R}^{-1}\mathbf{Q}^T\mathbf{y}

solve(qr.R(qr(as.matrix(X)))) %*% t(qr.Q(qr(as.matrix(X)))) %*% y
           [,1]
[1,] -17.579095
[2,]   3.932409

So far, so good, we get the same output. Now, what if we want to parallelise the computations? Actually, it is possible.

Consider m blocks

m = 5

and split vectors and matrices
\mathbf{y}=\left[\begin{matrix}\mathbf{y}_1\\\mathbf{y}_2\\\vdots \\\mathbf{y}_m\end{matrix}\right] and \mathbf{X}=\left[\begin{matrix}\mathbf{X}_1\\\mathbf{X}_2\\\vdots\\\mathbf{X}_m\end{matrix}\right]=\left[\begin{matrix}\mathbf{Q}_1^{(1)}\mathbf{R}_1^{(1)}\\\mathbf{Q}_2^{(1)}\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{Q}_m^{(1)}\mathbf{R}_m^{(1)}\end{matrix}\right]
To split vectors and matrices, use (eg)

Xlist = list()
for(j in 1:m) Xlist[[j]] = X[(j-1)*10+1:10,]
ylist = list()
for(j in 1:m) ylist[[j]] = y[(j-1)*10+1:10]

and get a small QR decomposition for each subset

QR1 = list()
for(j in 1:m) QR1[[j]] = list(Q=qr.Q(qr(as.matrix(Xlist[[j]]))),R=qr.R(qr(as.matrix(Xlist[[j]]))))
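
This is the “map” step. To actually run it on several cores, a minimal sketch (my addition, not in the original post, assuming a Unix-like machine) could rely on the parallel package,

library(parallel)
map_step = function(j){
  qrj = qr(as.matrix(Xlist[[j]]))
  list(Q=qr.Q(qrj), R=qr.R(qrj))
}
QR1 = mclapply(1:m, map_step, mc.cores = 2)   # use mc.cores = 1 on Windows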

Consider the QR decomposition of \mathbf{R}^{(1)}, which is the first step of the reduce part, \mathbf{R}^{(1)}=\left[\begin{matrix}\mathbf{R}_1^{(1)}\\\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{R}_m^{(1)}\end{matrix}\right]=\mathbf{Q}^{(2)}\mathbf{R}^{(2)}, where \mathbf{Q}^{(2)}=\left[\begin{matrix}\mathbf{Q}^{(2)}_1\\\mathbf{Q}^{(2)}_2\\\vdots\\\mathbf{Q}^{(2)}_m\end{matrix}\right]

R1 = QR1[[1]]$R
for(j in 2:m) R1 = rbind(R1,QR1[[j]]$R)
Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list=list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*2+1:2,]

Define – as step 2 of the reduce part – \mathbf{Q}^{(3)}_j=\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j
and \mathbf{V}_j=\mathbf{Q}^{(3)T}_j\mathbf{y}_j

Q3list = list()
for(j in 1:m) Q3list[[j]] = QR1[[j]]$Q %*% Q2list[[j]]
Vlist = list()
for(j in 1:m) Vlist[[j]] = t(Q3list[[j]]) %*% ylist[[j]]

and finally set – as step 3 of the reduce part – \widehat{\mathbf{\beta}}=[\mathbf{R}^{(2)}]^{-1}\sum_{j=1}^m\mathbf{V}_j

sumV = Vlist[[1]]
for(j in 2:m) sumV = sumV+Vlist[[j]]
solve(R2) %*% sumV
           [,1]
[1,] -17.579095
[2,]   3.932409

It looks like we’ve been able to parallelise our linear regression…

Quickly Building a Zonier (Geographic Risk Zoning)

In the non-life insurance actuarial course, we briefly mentioned the idea of building a zonier. In other words, we want to create a polytomous variable (with 4 or 5 classes) representing the spatial component of the risk. We will try to segment the territory, by grouping the territories that are ‘close’ (in terms of risk) into a predefined number of classes, which makes it possible to take a (potential) spatial effect into account in the pricing. For the practical aspects, since we will manipulate spatial data, let us start by loading a few packages

library(maptools)
library(rgeos)
library(rgdal)
library(ggplot2)
library(plyr)
library(maptools)
library(cartography)

We will also need spatial data (say, on mainland France). To keep things simple, I will draw insee codes of communes at random, a thousand of them, and a variable Y that we are interested in. And I assume that it depends on some X that I do not observe, but which is related to spatial characteristics (roughly the latitude, i.e. a north/south positioning)

download.file("http://freakonometrics.free.fr/popfr19752010.csv","popfr.csv")
base = read.csv("popfr.csv",header=TRUE)
base$insee = base$dep*1000+base$com
n=1000
set.seed(123)
id=sample(1:nrow(base),size=n)
simbase=data.frame(insee=base$insee[id])
X=(46-base$lat[id])/2
simbase$Y=X+rnorm(n)

Here is the resulting dataset (but we will assume that X is not observed – it was only used to simulate the data)

> head(simbase)
  insee          Y
1 61499 -1.8573363
2 19181 -0.7059191
3 55307 -0.5923649
4 74030  0.6098773
5 30328 -0.4584795
6 10050 -1.2361240

Classically, Y can be a (normalized) regression residual, for instance. To visualize it, we need a base map

download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds","FRA_adm0.rds") 
FR0=readRDS("FRA_adm0.rds")

Then, we add the points (after merging our data with the insee database giving the latitude and longitude of the communes)

plot(FR0)
simbase = merge(simbase,base[,c("insee","long","lat")])
cols = rev(carto.pal(pal1 = "red.pal",n1 = 10,pal2="green.pal",n2=10))
bk = seq(-5,4.5,length=21)
cuty = cut(simbase$Y,breaks=bk,labels=1:20)
points(simbase$long,simbase$lat, col=cols[cuty],pch=19,cex=.5)

We will look at two techniques to build a zonier. The first one is to work with predefined zones, such as the départements,

simbase$dpt=trunc(simbase$insee/1000)
A=aggregate(x = simbase$Y,by=list(simbase$dpt),mean)
names(A)=c("dpt","y")
download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm2.rds","FRA_adm2.rds")
FR=readRDS("FRA_adm2.rds")
donnees_carte=data.frame(FR@data)
d=donnees_carte$CCA_2
d[d=="2A"]="201"
d[d=="2B"]="202"
donnees_carte$dpt=as.numeric(as.character(d))
donnees_carte=merge(donnees_carte,A,all.x=TRUE)
donnees_carte=donnees_carte[order(donnees_carte$OBJECTID),]
bk=seq(-2.75,2.75,length=21)
donnees_carte$cuty=cut(donnees_carte$y,breaks=bk,labels=1:20)
cols = rev(carto.pal(pal1 = "red.pal",n1 = 10,pal2="green.pal",n2=10))
plot(FR, col=cols[donnees_carte$cuty],xlim=c(-5.2,12))

in other words, we draw a choropleth map

(we have to make sure that the colors are put in the right places). Then, we can define two zones,

bk=seq(-2.75,2.75,length=3)
donnees_carte$cuty=cut(donnees_carte$y,breaks=bk,labels=1:2)
plot(FR, col=cols[c(4,16)][donnees_carte$cuty],xlim=c(-5.2,12))

or four

bk=seq(-2.75,2.75,length=5)
donnees_carte$cuty=cut(donnees_carte$y,breaks=bk,labels=1:4)
plot(FR, col=cols[c(3,8,12,17)][donnees_carte$cuty],xlim=c(-5.2,12))

We thus create a variable with 4 levels, based on the départements.

Another solution is to use the spatial data, obtained by merging with the insee database

simbase=merge(simbase,base[,c("insee","long","lat")])

We then set up a grid over France. It is a bit tedious: we need the contour polygon of France

P1=FR0@polygons[[1]]@Polygons[[355]]@coords
P2=FR0@polygons[[1]]@Polygons[[27]]@coords
plot(FR0,border=NA)
polygon(P1)
polygon(P2)

(here we only have mainland France and Corsica – hence two polygons)

and we start from a grid on a rectangle

grille<-expand.grid(seq(min(simbase$long),max(simbase$long),length=101),seq(min(simbase$lat),max(simbase$lat),length=101))
paslong=(max(simbase$long)-min(simbase$long))/100
paslat=(max(simbase$lat)-min(simbase$lat))/100

We then keep only the points that lie inside those polygons

f=function(i){ (point.in.polygon (grille[i, 1]+paslong/2 , grille[i, 2]+paslat/2 , P1[,1],P1[,2])>0)+(point.in.polygon (grille[i, 1]+paslong/2 , grille[i, 2]+paslat/2 , P2[,1],P2[,2])>0) }
indic=unlist(lapply(1:nrow(grille),f))
grille=grille[which(indic==1),]
points(grille[,1]+paslong/2,grille[,2]+paslat/2,cex=.4,pch=19,col="blue")

here is the result

Now, we could use kriging, but I preferred to try nearest neighbors. For each point of the grid, we take the average of the nearest neighbors (as the crow flies, i.e. with a Euclidean norm – on the sphere)

library(geosphere)
knn=function(i,k=20){
d=distHaversine(grille[i,1:2],simbase[,c("long","lat")], r=6378.137)
  r=rank(d)
  ind=which(r<=k)
  mean(simbase[ind,"Y"])
}
grille$y=Vectorize(knn)(1:nrow(grille))
bk=seq(-2.75,2.75,length=21)
grille$cuty=cut(grille$y,breaks=bk,labels=1:20)
cols <- rev(carto.pal(pal1 = "red.pal",n1 = 10,pal2="green.pal",n2=10))
points(grille[,1]+paslong/2,grille[,2]+paslat/2,col=cols[grille$cuty],pch=19)

Here, this gives the following map

but here again, we can keep only two levels

bk=seq(-2.75,2.75,length=3)
grille$cuty=cut(grille$y,breaks=bk,labels=1:2)
plot(FR0,border=NA)
polygon(P1)
polygon(P2)
points(grille[,1]+paslong/2,grille[,2]+paslat/2,col=cols[c(4,16)][grille$cuty],pch=19)

which gives the following zones

bk=seq(-2.75,2.75,length=5)
grille$cuty=cut(grille$y,breaks=bk,labels=1:4)
plot(FR0,border=NA)
polygon(P1)
polygon(P2)
points(grille[,1]+paslong/2,grille[,2]+paslat/2,col=cols[c(3,8,12,17)][grille$cuty],pch=19)

or, if we keep four levels

From there, to create a zoning variable, the simplest approach is, for any given point, to look for the closest grid point, and to assign it the value of the associated class

pred=function(z){
  d=distHaversine(z,grille[,1:2], r=6378.137)
  grille[which.min(d),"cuty"]
}
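
For example (my own usage illustration, with approximate coordinates for Paris),

pred(c(2.35,48.85))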

Easy, isn’t it?

Non-Uniform Population Density in some European Countries

A few months ago, I mentioned that France is a country with strong inequalities, especially when you look at higher education and research teams: Paris has almost 50% of the CNRS researchers, while only 3% of the population lives there.

It looks like Paris is the only city, in France. And I wanted to check that, indeed, France is a country with strong inequalities, when we look at population density.

Using data from sedac.ciesin.columbia.edu, it is possible to get population density at a rather fine granularity,

> rm(list=ls())
> base=read.table(
+      "/home/charpentier/glp00ag.asc",
+      skip=6)
> X=t(as.matrix(base,ncol=8640))
> X=X[,ncol(X):1]

The scales for latitudes and longitudes can be obtained from the text file,

> #ncols         8640
> #nrows         3432
> #xllcorner     -180
> #yllcorner     -58
> #cellsize      0.0416666666667

Hence, we have

> library(maps)
> world=map(database="world")
> vx=seq(-180,180,length=nrow(X)+1)
> vx=(vx[2:length(vx)]+vx[1:(length(vx)-1)])/2
> vy=seq(-58,85,length=ncol(X)+1)
> vy=(vy[2:length(vy)]+vy[1:(length(vy)-1)])/2

If we plot our density, as in a previous post, on Where People Live,

> I=seq(1,nrow(X),by=10)
> J=seq(1,ncol(X),by=10)
> image(vx[I],vy[J],log(1+X[I,J]),
+ col=rev(heat.colors(101)))
> lines(world[[1]],world[[2]])

we can see that we have a match, between the big population matrix, and polygons of countries.

Consider France, for instance. We can download the contour polygon with higher precision,

> library(rgdal)
> fra=download.file(
"http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds",
+ "fr.rds")
> Fra=readRDS("fr.rds")
> n=length(Fra@polygons[[1]]@Polygons)
> L=rep(NA,n)
> for(i in 1:n) L[i]=nrow(Fra@polygons[[1]]@Polygons[[i]]@coords)
> idx=which.max(L)
> polygon_Fr=
+       Fra@polygons[[1]]@Polygons[[idx]]@coords
> min_poly=apply(polygon_Fr,2,min)
> max_poly=apply(polygon_Fr,2,max)
> idx_i=which((vx>min_poly[1])&(vx<max_poly[1]))
> idx_j=which((vy>min_poly[2])&(vy<max_poly[2]))
> sub_X=X[idx_i,idx_j]
> image(vx[idx_i],vy[idx_j],
+       log(sub_X+1),col=rev(heat.colors(101)),
+       xlab="",ylab="")
> lines(polygon_Fr)

We are now able to extract information about the population of France only (actually, it is only mainland France; islands are not considered here… to avoid complicated computations)

> library(sp)
> xy=expand.grid(x = vx[idx_i], y = vy[idx_j])
> dim(xy)
[1] 65730     2

Here, we have 65,730 small squares, in France.

> pip=point.in.polygon(xy[,1],xy[,2],
+     polygon_Fr[,1],polygon_Fr[,2])>0
> dim(pip)=dim(sub_X)
> Fr=sub_X[pip]
> sum(Fr)
[1] 58105272

Observe that the total population within the French polygon is close to 60 million people, which is consistent with actual figures. Now, let us look more carefully at the distribution over the French territory

> library(ineq)
> Gini(Fr)
[1] 0.7296936

The Gini coefficient is rather high (over 70%), but it is also possible to visualize the Lorenz curve,

> LcF=Lc(Fr)
> plot(LcF)

Observe that on 5% of the territory, we can find almost 54% of the population (indeed, 1-L(0.95), where L denotes the Lorenz curve, is the share of the population living in the 5% most densely populated cells)

> 1-min(LcF$L[LcF$p>.95])
[1] 0.5462632

In order to compare with other countries, consider the following function

> LC=function(rds="fr.rds"){
+ Fra=readRDS(rds)
+ n=length(Fra@polygons[[1]]@Polygons)
+ L=rep(NA,n)
+ for(i in 1:n) 
+ L[i]=nrow(Fra@polygons[[1]]@Polygons[[i]]@coords)
+ idx=which.max(L)
+ polygon_Fr=
+      Fra@polygons[[1]]@Polygons[[idx]]@coords
+ min_poly=apply(polygon_Fr,2,min)
+ max_poly=apply(polygon_Fr,2,max)
+ idx_i=which((vx>min_poly[1])&(vx<max_poly[1]))
+ idx_j=which((vy>min_poly[2])&(vy<max_poly[2]))
+ sub_X=X[idx_i,idx_j]
+ xy=expand.grid(x = vx[idx_i], y = vy[idx_j])
+ dim(xy)
+ pip=point.in.polygon(xy[,1],xy[,2],
+     polygon_Fr[,1],polygon_Fr[,2])>0
+ dim(pip)=dim(sub_X)
+ Fr=sub_X[pip]
+ return(list(gini=Gini(Fr),LC=Lc(Fr)))
+ }
> FRA=LC()

For instance, consider Germany, or Italy

> deu=download.file(
"http://biogeo.ucdavis.edu/data/gadm2.8/rds/DEU_adm0.rds","deu.rds")
> DEU=LC("deu.rds")
> ita=download.file(
"http://biogeo.ucdavis.edu/data/gadm2.8/rds/ITA_adm0.rds","ita.rds")
> ITA=LC("ita.rds")

It is possible to plot Lorenz curve, together,

> plot(FRA$LC,col="blue")
> lines(DEU$LC,col="black")
> lines(ITA$LC,col="red")

Observe that France is clearly below the other ones. Compared with Germany, there is a significant difference

> FRA$gini
[1] 0.7296936
> DEU$gini
[1] 0.5088853

More precisely, while 54% of the French population lives on 5% of the territory, the corresponding figure is only about 39% in Italy, and 33% in Germany,

> 1-min(FRA$LC$L[FRA$LC$p>.95])
[1] 0.5462632
> 1-min(ITA$LC$L[ITA$LC$p>.95])
[1] 0.3933227
> 1-min(DEU$LC$L[DEU$LC$p>.95])
[1] 0.3261124

Where People Live, part 2

Following my previous post, I wanted to use another dataset to visualize where people live, on Earth. The dataset comes from sedac.ciesin.columbia.edu. Once you register, you can download the database

> base=read.table("glp00ag15.asc",skip=6)

The database is a ‘big’ 1440×572 matrix; in each cell (i.e. each latitude and longitude) we have the population

>  X=t(as.matrix(base,ncol=1440))
>  dim(X)
[1] 1440  572

The dataset looks like

> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ log(X+1)[,ncol(X):1],col=rev(heat.colors(101)),
+ axes=FALSE,xlab="",ylab="")

Now, if we keep only the places where people actually live (i.e. removing cold deserts and oceans) we get

> M=X>0
> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ M[,ncol(X):1],col=c("white","light green"),
+ axes=FALSE,xlab="",ylab="")

Then, we can visualize where 50% of the population lives,

> Order=matrix(rank(X,ties.method="average"),
+ nrow(X),ncol(X))
> idx=cumsum(sort(as.numeric(X),
+ decreasing=TRUE))/sum(X)
> M=(X>0)+(Order>length(X)-min(which(idx>.5)))
> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ M[,ncol(X):1],col=c("white",
+ "light green",col="red"),
+ axes=FALSE,xlab="",ylab="")

50% of the population lives in the red area, and 50% in the green area. More precisely, 50% of the population lives on 0.75% of the Earth,

> table(M)/length(X)*100
M
         0          1          2 
69.6233974 29.6267968  0.7498057

And 90% of the population lives in the red area below (5% of the surface of the Earth)

> M=(X>0)+(Order>length(X)-min(which(idx>.9)))
> table(M)/length(X)*100
M
        0         1         2 
69.623397 25.512335  4.864268 
> image(seq(-180,180,length=nrow(X)),
+ seq(-90,90,length=ncol(X)),
+ M[,ncol(X):1],col=c("white",
+ "light green",col="red"),
+ axes=FALSE,xlab="",ylab="")

Where People Live

There was an interesting map on reddit this morning, with a visualisation of the latitude and longitude of where people live, on Earth. So I tried to reproduce it. To compute the density, I used a kernel-based approach

> library(maps)
> data("world.cities")
> X=world.cities[,c("lat","pop")]
> liss=function(x,h){
+   w=dnorm(x-X[,"lat"],0,h)
+   sum(X[,"pop"]*w)
+ }
> vx=seq(-80,80)
> vy=Vectorize(function(x) liss(x,1))(vx)
> vy=vy/max(vy)
> plot(world.cities$lon,world.cities$lat)
> for(i in 1:length(vx)) 
+ abline(h=vx[i],col=rgb(1,0,0,vy[i]),lwd=2.7)

For the other axis, we use a mirror technique, to ensure that -180 is close to +180

> Y=world.cities[,c("long","pop")]
> Ya=Y; Ya[,1]=Y[,1]-360
> Yb=Y; Yb[,1]=Y[,1]+360
> Y=rbind(Y,Ya,Yb)
> liss=function(y,h){
+   w=dnorm(y-Y[,"long"],0,h)
+   sum(Y[,"pop"]*w)
+ } 
> vx=seq(-180,180)
> vy=Vectorize(function(x) liss(x,1))(vx)
> vy=vy/max(vy)
> plot(world.cities$lon,world.cities$lat,pch=19)
> for(i in 1:length(vx)) 
+ abline(v=vx[i],col=rgb(1,0,0,vy[i]),lwd=2.7)

Now we can add the two, on the same graph

Spatial and Temporal Viz of Gas Price, in France

A great thing in France is that we can play with a great database of gas prices, in all gas stations, almost every day. The file is rather big, so let’s make sure we have enough memory to run our code,

> rm(list=ls())

To extract the data, we should first extract the xml file, and then convert it into a more common R object (say a list)

> year=2014
> loc=paste("http://donnees.roulez-eco.fr/opendata/annee/",year,sep="")
> download.file(loc,destfile="oil.zip")

Content type 'application/zip' length 15248088 bytes (14.5 MB)

> unzip("oil.zip", exdir="./")
> fichier=paste("PrixCarburants_annuel_",year,
".xml",sep="")
> library(plyr)
> library(XML)
> library(lubridate)
> l=xmlToList(fichier)

We have a large dataset, with prices, for various types of gas, for almost every gas station in France, almost every day, in 2014. It is a 1.4Gb list, with 11,064 elements (each of them being a gas station)

> length(l)
[1] 11064

There are two ways to look at the data. A first idea is to consider a gas station, and to extract the time series.

> time_series=function(no,type_gas="Gazole"){
+   prix=list()
+   date=list()
+   nom=list()
+   j=0
+   for(i in 1:length(l[[no]])){
+     v=names(l[[no]])
+     if(!is.null(v[i])){
+       if(v[i]=="prix"){
+         j=j+1
+  date[[j]]=as.character(l[[no]][[i]]["maj"])
+  prix[[j]]=as.character(l[[no]][[i]]["valeur"])
+  nom[[j]]=as.character(l[[no]][[i]]["nom"])
+       }}
+   }
+   id=which(unlist(nom)==type_gas)
+   n=length(id)
+   jour=function(j) as.Date(substr(date[[id[j]]],1,10),"%Y-%m-%d")
+   jour_heure=function(j) as.POSIXct(substr(date[[id[j]]],1,19), format = "%Y-%m-%d %H:%M:%S", tz = "UTC")
+   ext_y=function(j) substr(date[[id[j]]],1,4)
+   ext_m=function(j) substr(date[[id[j]]],6,7)
+   ext_d=function(j) substr(date[[id[j]]],9,10)
+   ext_h=function(j) substr(date[[id[j]]],12,13)
+   ext_mn=function(j) substr(date[[id[j]]],15,16)
+   prix_essence=function(i) as.numeric(prix[[id[i]]])/1000
+   base1=data.frame(indice=no,
+            id=l[[no]]$.attrs["id"],
+            adresse=l[[no]]$adresse,
+            ville=l[[no]]$ville,
+  lat=as.numeric(l[[no]]$.attrs["latitude"])
/100000,
+  lon=as.numeric(l[[no]]$.attrs["longitude"])
/100000,
+       cp=l[[no]]$.attrs["cp"],
+       saufjour=l[[no]]$ouverture["saufjour"], 
+       Y=unlist(lapply(1:n,ext_y)),
+       M=unlist(lapply(1:n,ext_m)),
+       D=unlist(lapply(1:n,ext_d)),
+       H=unlist(lapply(1:n,ext_h)),
+       MN=unlist(lapply(1:n,ext_mn)),
+    prix=unlist(lapply(1:n,prix_essence)))
+   
+   base1=base1[!is.na(base1$prix),]
+   
+   date_d=paste(year,"-01-01 12:00:00",sep="")
+   date_f=paste(year,"-12-31 12:00:00",sep="")
+   vecteur_date=seq(as.POSIXct(date_d, format =
+                 "%Y-%m-%d %H:%M:%S"),
+                    as.POSIXct(date_f, format = 
+                 "%Y-%m-%d %H:%M:%S"),by="days")
+   date=paste(base1$Y,"-",base1$M,"-",base1$D,
+   " ",base1$H,":",base1$MN,":00",sep="")
+   date_base=as.POSIXct(date, format = 
+                "%Y-%m-%d %H:%M:%S", tz = "UTC")
+   idx=function(t) sum(vecteur_date[t]>=date_base)
+   vect_idx=Vectorize(idx)(1:length(vecteur_date))
+   P=c(NA,base1$prix)
+   base2=ts(P[1+vect_idx],
+         start=year,frequency=365)
+   list(base=base1,
+        ts=base2)
+ }

To get the time series, some interpolation is necessary (the last observed price is carried forward), since we have observations at irregular dates. Here, for instance, for the second gas station, we get

> plot(time_series(2)$ts,ylim=c(1,1.6),col="red")
> lines(time_series(2,"SP98")$ts,col="blue")

An alternative is to study gas prices from a spatial perspective. Given a date, we want the price in all stations. As previously, we keep the last price observed in each station,

> spatial=function(dt){
+   base=NULL
+   for(no in 1:length(l)){  
+     prix=list()
+     date=list()
+     j=0
+     for(i in 1:length(l[[no]])){
+     v=names(l[[no]])
+     if(!is.null(v[i])){
+       if(v[i]=="prix"){
+   j=j+1
+   date[[j]]=as.character(l[[no]][[i]]["maj"])
+       }}
+   }
+   n=j
+   D=as.Date(substr(unlist(date),1,10),"%Y-%m-%d")
+   k=which(D==D[which.max(D[D<=dt])])
+ if(length(k)>0){
+   B=Vectorize(function(i) l[[no]][[k[i]]])(1:length(k))
+ if("nom" %in%  rownames(B)){  
+   k=which(B["nom",]=="Gazole")
+   prix=as.numeric(B["valeur",k])/1000
+   if(length(prix)==0) prix=NA
+   base1=data.frame(indice=no,
+   lat=as.numeric(l[[no]]$.attrs["latitude"])
/100000,
+   lon=as.numeric(l[[no]]$.attrs["longitude"])
/100000,
+   gaz=prix)
+   base=rbind(base,base1)
+ }}}
+ return(base)}

For instance, for the 5th of May, 2014, we get the following dataset

> B=spatial(as.Date("2014-05-05"))

To visualize prices, consider only mainland France (excluding islands in the Pacific, or close to the Caribbean)

> idx=which((B$lon>(-10))&(B$lon<20)&
+ (B$lat>35)&(B$lat<55))
> B=B[idx,]
> Q=quantile(B$gaz,seq(0,1,by=.01),na.rm=TRUE)
> Q[1]=0
> x=as.numeric(cut(B$gaz,breaks=unique(Q)))
> CL=c(rgb(0,0,1,seq(1,0,by=-.025)),
+ rgb(1,0,0,seq(0,1,by=.025)))
> plot(B$lon,B$lat,pch=19,col=CL[x])

Red dots are the most expensive gas stations, on that particular day.

If we add contours of the French regions, we get

> library(maps)
> map("france")
> points(B$lon,B$lat,pch=19,col=CL[x])

 

We can also focus on some specific region, say the South of Brittany.

> library(OpenStreetMap)
> map <- openmap(c(lat= 48,   lon= -3),
+                c(lat= 47,   lon= -2))
> map <- openproj(map) 
> plot(map)
> points(B$lon,B$lat,pch=19,col=CL[x])

As we can see on that map, there are regions that are rather empty, where the closest gas station might be a bit far away. Actually, it is possible to add Voronoi sets on the map,

> dB=data.frame(lon=B$lon,lat=B$lat)
> id=which(!duplicated(dB))

 

which could help to get the price of the closest gas station.

> library(tripack)
> V <- voronoi.mosaic(dB$lon[id],dB$lat[id])
> plot(V,add=TRUE)

It is possible to plot each polygon with the color of the gas station it contains. Actually, it is a bit tricky, and I could not find an R function to do this. So I did it manually,

> plot(map)
> P <- voronoi.polygons(V)
> library(sp)
> point_in_i=function(i,point) point.in.polygon(point[1],point[2],P[[i]][,1],P[[i]][,2])
> which_point=function(i) which(Vectorize(function(j) point_in_i(i,c(dB$lon[id[j]],dB$lat[id[j]])))(1:length(id))>0)
> for(i in 1:length(P)) polygon(P[[i]],col=CL[x[id[which_point(i)]]],border=NA)

With this map, we can see that we have blue areas, i.e. all stations in a given area are cheap (because of competition), but in some places, a very expensive one is next to a very cheap one. I guess we should look closer at the dynamics… [to be continued….]

Clusters of (French) Regions

For the data science course of tomorrow, I just wanted to post some functions to illustrate cluster analysis. Consider the dataset of the first round of the 2012 French presidential election

> elections2012=read.table(
"http://freakonometrics.free.fr/elections_2012_T1.csv",sep=";",dec=",",header=TRUE)
> voix=which(substr(names(
+ elections2012),1,11)=="X..Voix.Exp")
> elections2012=elections2012[1:96,]
> X=as.matrix(elections2012[,voix])
> colnames(X)=c("JOLY","LE PEN","SARKOZY","MÉLENCHON","POUTOU","ARTHAUD","CHEMINADE","BAYROU","DUPONT-AIGNAN","HOLLANDE")
> rownames(X)=elections2012[,1]

The hierarchical cluster analysis is obtained using

> cah=hclust(dist(X))
> plot(cah,cex=.6)

To get five groups, we have to prune the tree

> rect.hclust(cah,k=5)
> groups.5 <- cutree(cah,5)
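
To see how many départements end up in each of the five clusters (a small addition, not in the original post), we can simply tabulate the groups,

> table(groups.5)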

We have to zoom in to visualize the French regions,

It is also possible to use

> library(dendroextras)
> plot(colour_clusters(cah,k=5))

And again, if we zoom-in, we get

The interpretation of the clusters can be obtained using

> aggregate(X,list(groups.5),mean)
  Group.1     JOLY   LE PEN  SARKOZY
1       1 2.185000 18.00042 28.74042
2       2 1.943824 23.22324 25.78029
3       3 2.240667 15.34267 23.45933
4       4 2.620000 21.90600 34.32200
5       5 3.140000  9.05000 33.80000
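To see how balanced those clusters are, one can also tabulate the group sizes (a small addition, with output omitted here),

> table(groups.5)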

It is also possible to visualize those clusters on a map, using

> library(RColorBrewer)
> CL=brewer.pal(8,"Set3")
> carte_classe <- function(groupes){
+ library(stringr)
+ elections2012$dep <- elections2012[,2]
+ elections2012$dep <- tolower(elections2012$dep)
+ elections2012$dep <- str_replace_all(elections2012$dep, pattern = " |-|'|/", replacement = "")
+ library(maps)
+ france<-map(database="france")
+ france$dep <- france$names
+ france$dep <- tolower(france$dep)
+ france$dep <- str_replace_all(france$dep, pattern = " |-|'|/", replacement = "")
+ corresp_noms <- elections2012[, c(1,2, ncol(elections2012))]
+ corresp_noms$dep[which(corresp_noms$dep %in% "corsesud")] <- "corsedusud"
+ col2001<-groupes+1
+ names(col2001) <- corresp_noms$dep[match(names(col2001), corresp_noms[,1])]
+ color <- col2001[match(france$dep, names(col2001))]
+ map(database="france", fill=TRUE, col=CL[color])
+ }
> carte_classe(cutree(cah,5))

or, if we simply want 4 clusters

> carte_classe(cutree(cah,4))
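In both cases, a legend makes the map easier to read. Since carte_classe() colors cluster k with CL[k+1], a minimal sketch (not in the original code) for the five-cluster map would be

> legend("bottomleft",legend=paste("cluster",1:5),
+ fill=CL[2:6],bty="n")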

 

Another Interactive Map for the Cholera Dataset

Following my previous post, François (aka @FrancoisKeck) posted a comment mentioning another package I could use to get an interactive map, the rleafmap package. And the heatmap was easy to include here.

The first part is still the same, to get the data,

> require(rleafmap)
> library(sp)
> library(rgdal)
> library(maptools)
> library(KernSmooth)
> setwd("/home/arthur/Documents/")
> deaths <- readShapePoints("Cholera_Deaths")
> df_deaths <- data.frame(deaths@coords)
> coordinates(df_deaths)=~coords.x1+coords.x2
> proj4string(df_deaths)=CRS("+init=epsg:27700") 
> df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))
> df=data.frame(df_deaths@coords)

To get a first visualisation, use

> stamen_bm <- basemap("stamen.toner")
> j_snow <- spLayer(df_deaths, stroke = FALSE)
> writeMap(stamen_bm, j_snow, width = 1000, height = 750, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 14)

and again, using the + and the – signs in the top left corner, we can zoom in or out. Or we can set the zoom manually,

> writeMap(stamen_bm, j_snow, width = 1000, height = 750, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16)

To get the heatmap, use

> library(spatstat)
> library(maptools)

> win <- owin(xrange = bbox(df_deaths)[1,] + c(-0.01,0.01), yrange = bbox(df_deaths)[2,] + c(-0.01,0.01))
> df_deaths_ppp <- ppp(coordinates(df_deaths)[,1],  coordinates(df_deaths)[,2], window = win)
> 
> df_deaths_ppp_d <- density.ppp(df_deaths_ppp,
+   sigma = min(bw.ucv(df[,1]),bw.ucv(df[,2])))
 
> df_deaths_d <- as.SpatialGridDataFrame.im(df_deaths_ppp_d)
> df_deaths_d$v[df_deaths_d$v < 10^3] <- NA

> stamen_bm <- basemap("stamen.toner")
> mapquest_bm <- basemap("mapquest.map")
 
> j_snow <- spLayer(df_deaths, stroke = FALSE)
> df_deaths_den <- spLayer(df_deaths_d, layer = "v", cells.alpha = seq(0.1, 0.8, length.out = 12))
> my_ui <- ui(layers = "topright")

> writeMap(stamen_bm, mapquest_bm, j_snow, df_deaths_den, width = 1000, height = 750, interface = my_ui, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16)

The amazing thing here is the set of options in the top right corner. For instance, we can remove some layers, e.g. the points

or to change the background

To get an html file, instead of a standard visualisation in RStudio, use

> writeMap(stamen_bm, mapquest_bm, j_snow, df_deaths_den, width = 450, height = 350, interface = my_ui, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16, directView ="browser")

which will generate the html page above (as well as some additional files, actually). Awesome, isn’t it?

Equidistant points on a map

This morning, I had a comment on a recent post, regarding a graph I uploaded on the blog, which was extracted from a paper now online (see http://hal.archives-ouvertes.fr/hal-00871883). Jo (from KUL, I guess I can share that piece of information) asked me

I was wondering whether you would want to share the R code for plotting figures 1 and 14? W.r.t. the former, the figure-in-figure is a nice touch; as to the latter, I am curious to know how you translated distance in km to the size parameters of the graph (par(“usr”)) for plotting the corresponding concentric circles (and the arrow indicating the radius) on top of your map.

At first, I thought I had made a mistake in my plot. I mean, each time I get a question, I become suspicious and start to wonder whether what I did was valid or not. Here was the graph

Let’s make it clear: I do not draw circles here. So yes, I believe that what I did is valid. What I did is simple. First, I get the background map,

library(maps)
map("world",xlim=c(130,150),ylim=c(25,45),fill=TRUE,col="light green")

Then, I need some function to compute distance from coordinates. The functions I use are

deg2rad = function(deg) return(deg*pi/180)
# great-circle distance on a sphere of radius 6371 km (spherical law of cosines);
# despite its name, DISTANCEDEG expects coordinates in radians, and returns kilometers
DISTANCEDEG = function(long1, lat1, long2, lat2) {
  R=6371
  d=acos(sin(lat1)*sin(lat2) + cos(lat1)*cos(lat2) * cos(long2-long1)) * R
  return(d)
}
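As an aside (not in the original post), the spherical law of cosines can lose numerical precision for very short distances; a haversine version, also taking radians and returning kilometers, would be

DISTANCEHAV = function(long1, lat1, long2, lat2) {
  R = 6371
  a = sin((lat2-lat1)/2)^2 + cos(lat1)*cos(lat2)*sin((long2-long1)/2)^2
  2*R*asin(sqrt(pmin(1,a)))   # guard against tiny rounding errors above 1
}

Both give essentially the same values at the distances used below.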

The center here will be Tokyo (東京),

X=139+45/60
Y=35+40/60

The idea now is simple: I generate a grid (here 501×501)

VX=seq(X-10,X+10,length=501)
VY=seq(Y-10,Y+10,length=501)
VtX=rep(VX,each=length(VY))
VtY=rep(VY,length(VX))
ZDeg=deg2rad(cbind(VtX,VtY))
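As a side note (not in the original code), the same grid can be built with expand.grid, which pairs the coordinates in exactly the same order,

grid = expand.grid(lat = VY, lon = VX)
VtX = grid$lon
VtY = grid$lat
ZDeg = deg2rad(cbind(VtX, VtY))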

I compute the distance from all those points to Tokyo, and check whether the distance is larger or smaller than a given value,

L=500
D1=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L

If the distance is smaller than 500 km, then I put a blue dot on the graph,

points(VtX[D1],VtY[D1],pch=19,cex=.2,col="light blue")

Then I use the same procedure for 250 km (obviously, it is more convenient to start with the larger distances and move to the smaller ones)

L=250
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtX[D],VtY[D],pch=19,cex=.2,col="light yellow")

Then, I drew an arrow to illustrate the largest distance

k=which.max(VtX[D1])
arrows((VtX[D1])[k],(VtY[D1])[k],X,Y,code=3,length=.1)
text(((VtX[D1])[k]+X)/2,Y+.35,"500 km")

And now, I have the graph.

Now, the point is that it should depend on the kind of projection we use, right? So here is a function that can be used with different kinds of projections (some slight changes are necessary, since the map is now centered on some point, and we can no longer use standard coordinates)

library(mapproj)
mapjapan = function(pr="conic",pm=45){
map("world","japan",fill=TRUE,col="light green",projection=pr, par=pm)
MP=mapproject(data.frame(x=X,y=Y),projection="")
Xp=MP$x
Yp=MP$y
VX=seq(X-10,X+10,length=501)
VY=seq(Y-10,Y+10,length=501)
VtX=rep(VX,each=length(VY))
VtY=rep(VY,length(VX))
MP=mapproject(data.frame(x=VtX,y=VtY),projection="")
VtXp=MP$x
VtYp=MP$y
ZDeg=deg2rad(cbind(VtX,VtY))
L=500
D1=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D1],VtYp[D1],pch=19,cex=.2,col="light blue")
L=250
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light yellow")
L=100
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light blue")
L=50
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light yellow")
points(Xp,Yp,pch=19,cex=.4,col="red")
map("world","japan",projection=pr, par=pm,add=TRUE)
}

The default function here produces a map based on a conic projection,

mapjapan()

But we can also use a Bonne projection (a pseudo-conic one, named after Rigobert Bonne)

mapjapan("bonne")

or a Lagrange projection,

mapjapan("lagrange",NULL)

and, as a last one, an Albers projection,

mapjapan("albers",c(30,40))

Of course, many more projections are possible!

We do not see much here, right? So let us play with a larger country to visualize something. Like Canada, and the distance to, say, Winnipeg.

The first thing to do is to define the coordinates of Winnipeg,

X=-(97+08/60)
Y=(49+53/60)

Then, we slightly change our function

mapcanada = function(pr="conic",pm=45){
map("world","canada",fill=TRUE,col="light green",projection=pr, par=pm)
MP=mapproject(data.frame(x=X,y=Y),projection="")
Xp=MP$x
Yp=MP$y
VX=seq(X-30,X+30,length=501)
VY=seq(Y-30,Y+30,length=501)
VtX=rep(VX,each=length(VY))
VtY=rep(VY,length(VX))
MP=mapproject(data.frame(x=VtX,y=VtY),projection="")
VtXp=MP$x
VtYp=MP$y
ZDeg=deg2rad(cbind(VtX,VtY))
L=2000
D1=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D1],VtYp[D1],pch=19,cex=.2,col="light blue")
L=1000
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light yellow")
L=500
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light blue")
L=200
D=DISTANCEDEG(ZDeg[,1],ZDeg[,2],deg2rad(X),deg2rad(Y))<L
points(VtXp[D],VtYp[D],pch=19,cex=.2,col="light yellow")
points(Xp,Yp,pch=19,cex=.4,col="red")
map("world","canada",projection=pr, par=pm,add=TRUE)
}

Now, we can have some fun

mapcanada()

mapcanada("bonne",45)

mapcanada("albers",c(30,40))

mapcanada("lagrange",NULL)

Fun, isn’t it? Changing the projection changes the shape of the equidistant curves.