Tag Archives: maps

Juxtaposer des cartes et ‘confounding’

To go a little further than my previous post, élections, pièges à com(mentaires à la con), I just wanted to show a few maps obtained by simulation. But first, let us spend two minutes on the notion of ‘confounding’. Following the diagram on Wikipedia, the idea is that, conditionally on $Z$ (our ‘confounding factor’), the variables $X$ and $Y$ are independent. But this conditional independence can (as is often the case) come with a strong correlation between $X$ and $Y$. One can for instance imagine a model of the form $X=Z+\varepsilon$, where $\varepsilon$ is an idiosyncratic component, and $Y=Z+\eta$ (where, again, $\eta$ is an idiosyncratic component).

For instance, we can assume that $Z$ is a variable related to the latitude of the département, say the average temperature (just to have a vague story to tell, and to be a little less dry)

> n=115
> X=matrix(rnorm(3*n),n,3)
> X[,1]=sort(X[,1])
> library(maps)
> map("france",fill =TRUE,
col= rgb(1,0,0,pnorm(X[,1])))

Now, suppose we have a variable $X$ that is highly correlated with $Z$, for instance sunscreen sales (in volume, again just to tell a story that illustrates my point a bit)

> r=.8
> X[,2]=r*X[,1]+sqrt(1-r^2)*X[,2]
> map("france",fill =TRUE,
col= rgb(0,1,0,pnorm(X[,2])))

which gives the following scatter plot (with a strong correlation with the ‘confounding factor’)
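
The scatter plot itself is not reproduced in this archive; here is a minimal sketch (reusing the simulated matrix X from the code above) of the kind of plot being described, the new variable against the ‘confounding factor’,

plot(X[,1], X[,2], xlab = "Z (temperature)", ylab = "X (sunscreen sales)")
abline(lm(X[,2] ~ X[,1]), col = "red")  # the strong linear relation with Z
cor(X[,1], X[,2])                       # should be close to r = .8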

Now suppose we have another variable $Y$, also highly correlated with $Z$, for instance the price of a pack of butter (which could be quite expensive in the south, where people cook with oil instead)

> r=.8
> X[,3]=r*X[,1]+sqrt(1-r^2)*X[,3]
> map("france",fill =TRUE,
col= rgb(0,1,0,pnorm(X[,3])))

with a correlation of the same order with the ‘confounding factor’

If we juxtapose the two maps, we get correlated variables,

but we cannot conclude, just by looking at these maps, that sunscreen sales drive the price of a pack of butter! And we can even accentuate the effect further by playing a bit with the colors. For instance, with

> map("france",fill =TRUE,col = rgb(1,0,0,qbeta(pnorm(X[,1]),.2,.5)))

we accentuate the gap between the north and the south a little (here I am playing with the color scale, not with the values!)

and on our two variables, we again get a juxtaposition that makes you want to tell a nice story, doesn't it?

Oh, and by the way, the correlation between my variables $X$ and $Y$ is relatively strong, of the order of 0.6 (higher than what we observed yesterday between the unemployment rate and the FN vote share in the regional elections)

> cor(X)
          [,1]      [,2]      [,3]
[1,] 1.0000000 0.7863838 0.7727523
[2,] 0.7863838 1.0000000 0.6312878
[3,] 0.7727523 0.6312878 1.0000000

but correlation is very sensitive to the marginal distributions, so it does not mean much. Even if, visually, the scatter plot may look like the one we had yesterday
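
Again, the scatter plot is not shown here; a minimal sketch to reproduce something similar, plotting the two variables of interest against each other,

plot(X[,2], X[,3], xlab = "sunscreen sales (X)", ylab = "price of butter (Y)")
abline(lm(X[,3] ~ X[,2]), col = "red")
cor(X[,2], X[,3])   # around .6, as in the correlation matrix above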

In short, juxtaposing maps is pretty. Drawing any conclusion from it is another story…

A quelle distance d’une banque habite-t-on ?

As part of the R project for the Data Science pour l'Actuariat training, I will keep posting pieces of code that can be useful in a spatial context. The previous post, on mapping the Brexit vote, was picked up (and much improved) by our neighbours at rgeomatic. Today, I will draw on the work of Etienne Flichy, which combines the distribution of the population across the territory with the location of bank branches.

We talk about banks here, but with a database of hairdressers, bakeries, etc., the same thing can be done! (needless to say, we will have fun when the sirene database is opened up, in the coming weeks). We will assume we have a database with all the banks geocoded. For the exercise, we will use the location of bank branches, using data from cbanque.com. It is fairly easy to scrape the site once you look at how the pages are built, e.g. http://cbanque.com/pratique/agences/credit-cooperatif/35/. From there we retrieve the (postal) addresses, and we can use https://adresse.data.gouv.fr/csv/ (or various other tools) to geolocate them.
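
As an illustration of that last step, here is a minimal sketch (not code from Etienne's project) that geocodes a single address; I am assuming here the https://api-adresse.data.gouv.fr/search/ endpoint behind adresse.data.gouv.fr and the shape of its GeoJSON answer, and the address is purely illustrative,

library(jsonlite)
geocode_fr = function(address){
  # query the (assumed) geocoding API, keep the best match only
  url = paste0("https://api-adresse.data.gouv.fr/search/?q=", URLencode(address), "&limit=1")
  res = fromJSON(url)
  res$features$geometry$coordinates[[1]]  # c(longitude, latitude)
}
geocode_fr("33 rue Croix des Petits Champs Paris")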

Continue reading A quelle distance d’une banque habite-t-on ?

Playing with Leaflet (and Radar locations)

Yesterday, my friend Fleur showed me some interesting features of the leaflet package in R.

library(leaflet)

In order to illustrate, consider the locations of (fixed) radars in several European countries. To get the data, use

download.file("http://carte-gps-gratuite.fr/radars/zones-de-danger-destinator.zip","radar.zip")
unzip("radar.zip")
 
ext_radar=function(nf){
  radar=read.table(file=paste("destinator/",nf,sep=""), sep = ",", header = FALSE, stringsAsFactors = FALSE)
  radar$type <- sapply(radar$V3, function(x) {z=as.numeric(unlist(strsplit(x, " ")[[1]])); return(z[!is.na(z)])})
  radar <- radar[,c(1,2,4)]
  names(radar) <- c("lon", "lat", "type")
  return(radar)}
 
L=list.files("./destinator/")
nl=nchar(L)
id=which(substr(L,4,8)=="Radar" & substr(L,nl-2,nl)=="csv")
 
radar_E=NULL
for(i in id) radar_E=rbind(radar_E,ext_radar(L[i]))

(to be honest, if you run that code, you will get several countries, but not France… but if you want to add it, you should be able to do so…). The first tool is based on popups. If you click on a point on the map, you get some information, such as the speed limit at the radar location. To get a nice pictogram, use

fileUrl <- "http://evadeo.typepad.fr/.a/6a00d8341c87ef53ef01310f9238e6970c-800wi"
download.file(fileUrl,"radar.png", mode = 'wb')
RadarICON <- makeIcon(  iconUrl = fileUrl,   iconWidth = 20, iconHeight = 20)

And then, use the following code to get a dynamic map, mentioning the variable that should be used for the popup

m <- leaflet(data = radar_E) 
m <- m %>% addTiles() 
m <- m %>% addMarkers(~lon, ~lat, icon = RadarICON, popup = ~as.character(type))
m

Because the picture is a bit heavy, with almost 20K points, let us focus only on France,
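
One way to lighten the map is simply to subset the data to a bounding box before building it; a small sketch (assuming the French radar locations have been added to radar_E, as suggested above, and using a rough box around metropolitan France),

radar_FR = radar_E[radar_E$lon > -5 & radar_E$lon < 9 &
                   radar_E$lat > 41 & radar_E$lat < 52, ]
m = leaflet(data = radar_FR) %>% addTiles() %>%
  addMarkers(~lon, ~lat, icon = RadarICON, popup = ~as.character(type))
m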

Continue reading Playing with Leaflet (and Radar locations)

Construction de cartes minimalistes

Last weekend, following the publication of See the world differently with these minimalist maps, there was quite a lot of activity around minimalist maps. In particular, Reka (aka @visionscarto) and Philippe (aka @recifs) suggested I write a post for Visions Carto on how these maps are built. I am flattered, even though I find my contribution to this project incredibly modest (and I always feel humble next to Reka's superb drawings).

I therefore refer to the post Cartes Minimalistes for more details, but for the more curious, here are two additional, more French, maps. The first one shows the railway lines,

library(maptools)
setwd("/home/freakonometrics/Documents/data")
loc="http://www.mapcruzin.com/download-shapefile/france-railways-shape.zip"
download.file(loc,destfile="rail_france.zip")
unzip("rail_france.zip", exdir="./rail_france/")
shap=readShapeLines("./rail_france/railways.shp")
plot(shap,lwd=.7)

and the second one, the roads in the Paris region,

loc="http://www.mapcruzin.com/download-shapefile/france-roads-shape.zip"
download.file(loc,destfile="road_france.zip")
unzip("road_france.zip", exdir="./road_france/")
shap=readShapeLines("./road_france/roads.shp")
plot(shap,lwd=.7,ylim=48.85+c(-.5,.5),
xlim=2.35+c(-.5,.5))

Next time, I will explain how to fix shapefiles when they cause trouble (I am thinking of the comment pointing out that it was a pity to have a road between the United Kingdom and Iceland).

Minimalist Maps

This week, I mentioned a series of maps, on Twitter,

Friday evening, just before leaving the office to pick up the kids after their first week back in class, Matthew Champion sent me an email asking for more details. He wanted to know whether I had produced those graphs, and whether he could mention them in a post. The truth is, I have no idea who produced those graphs, but I told him one can easily reproduce them. For instance, for the cities, in R, use

> library(maps)
> data("world.cities")
> plot(world.cities$lon,world.cities$lat,
+ pch=19,cex=.7,axes=FALSE,xlab="",ylab="")

It is possible to get a more minimalist one by plotting only cities with more than 100,000 inhabitants, e.g.,

> world.cities2 = world.cities[
+ world.cities$pop>100000,]
> plot(world.cities2$lon,world.cities2$lat,
+ pch=19,cex=.7,axes=FALSE,xlab="",ylab="")

For the airports, it was slightly more complex, since http://openflights.org/data.html#airport lists 6,977 airports. But on http://www.naturalearthdata.com/, I found another dataset with only 891 airports.

> library(maptools)
> shape <- readShapePoints(
+ "~/data/airport/ne_10m_airports.shp")
> plot(shape,pch=19,cex=.7)

On the same website, one can find a dataset for ports,

> shape <- readShapePoints(
+ "~/data/airport/ne_10m_ports.shp")
> plot(shape,pch=19,cex=.7)

This is for graphs based on points. For those based on lines, for instance rivers, shapefiles can be downloaded from https://github.com/jjrom/hydre/tree/, and then, use

> require(maptools)
> shape <- readShapeLines(
+ "./data/river/GRDC_687_rivers.shp")
> plot(shape,col="blue")

For roads, the shapefile can be downloaded from http://www.naturalearthdata.com/

> shape <- readShapeLines(
+ "./data/roads/ne_10m_roads.shp")
> plot(shape,lwd=.5)

Last, but not least, for lakes, we need the polygons,

> shape <- readShapePoly(
+ "./data/lake/ne_10m_lakes.shp")
> plot(shape,col="blue",border="blue",lwd=2)

Nice, isn’t it? See See the world differently with these minimalist maps for his post.

Interactive Maps for John Snow’s Cholera Data

This week, in Istanbul, for the second training on data science, we’ve been discussing classification and regression models, but also visualisation, including maps. And we had a brief introduction to the leaflet package,

devtools::install_github("rstudio/leaflet")
require(leaflet)

To see what can be done with that package, we will use John Snow’s cholera dataset one more time, discussed in previous posts (one with a visualisation on a Google Maps background, and a second one on an OpenStreetMap background),

library(sp)
library(rgdal)
library(maptools)
setwd("/cholera/")
deaths <- readShapePoints("Cholera_Deaths")
df_deaths <- data.frame(deaths@coords)
coordinates(df_deaths)=~coords.x1+coords.x2
proj4string(df_deaths)=CRS("+init=epsg:27700") 
df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))
df=data.frame(df_deaths@coords)
lng=df$coords.x1
lat=df$coords.x2

Once the leaflet package is installed, we can use it from the RStudio console (which is what we will do here), within R Markdown documents, or within Shiny applications. But because of restrictions on this blog (the rules of hypotheses.org), there will only be copies of my screen. If you run the code in RStudio, you will get interactive maps in the viewer window.

First step. To load a map, centered initially in London, use

m = leaflet()%>% addTiles() 
m %>% fitBounds(-.141,  51.511, -.133, 51.516)

In the viewer window of RStudio, it is just like OpenStreetMap, e.g. we can zoom in or out (with the standard + and – in the top left corner)

And we can add additional material, such as the location of the deaths from cholera (since we now have the same coordinate representation system here)

rd=.5
op=.8
clr="blue"
m = leaflet() %>% addTiles()
m %>% addCircles(lng,lat, radius = rd,opacity=op,col=clr)

We can also add a heatmap.

library(KernSmooth)
X=cbind(lng,lat)
kde2d <- bkde2D(X, bandwidth=c(bw.ucv(X[,1]),bw.ucv(X[,2])))

But there is no heatmap function (so far), so we have to build it manually,

x=kde2d$x1
y=kde2d$x2
z=kde2d$fhat
CL=contourLines(x , y , z)

We now have a list that contains lists of polygons corresponding to isodensity curves. To visualise one of them, use

m = leaflet() %>% addTiles() 
m %>% addPolygons(CL[[5]]$x,CL[[5]]$y,fillColor = "red", stroke = FALSE)

Of course, we can get the points and the polygon at the same time

m = leaflet() %>% addTiles() 
m %>% addCircles(lng,lat, radius = rd,opacity=op,col=clr) %>%
  addPolygons(CL[[5]]$x,CL[[5]]$y,fillColor = "red", stroke = FALSE)
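
A possible extension (a sketch, not in the original code): adding all the isodensity curves at once, with the fill opacity increasing with the level,

m = leaflet() %>% addTiles() %>%
  addCircles(lng, lat, radius = rd, opacity = op, col = clr)
for(i in seq_along(CL)){
  m = m %>% addPolygons(CL[[i]]$x, CL[[i]]$y, fillColor = "red",
                        fillOpacity = i / (2 * length(CL)), stroke = FALSE)
}
m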

Continue reading Interactive Maps for John Snow’s Cholera Data

Langue et Cartographie

Most of the standard world maps in R are in English. The other day, some students wanted to visualize data taken from a database where the country names are in English. To get a correspondence between English and French names, one can use the following table

> library(gdata)
> library(xlsx)
> download.file("http://www.stat.gouv.qc.ca/statistiques/divisions-territoriales/pays-liste-isq-web.xls","corresp")
>  xls_corresp <- read.xls("corresp",sheet=1,encoding="latin1")

Here we have

>  df_corresp <- data.frame(
+ FR=xls_corresp$X.5,
+ EN=xls_corresp$X.11)
> df_corresp[5:10,]
                    FR                 EN
5  Belgique-Luxembourg Belgium-Luxembourg
6    Îles du Pacifique    Pacific Islands
7          Afghanistan        Afghanistan
8       Afrique du Sud       South Africa
9           Îles Åland      Åland Islands
10             Albanie            Albania

To get a correspondence between the names used by R and those in the table at our disposal, we need to manipulate the character strings a bit,

>  df_corresp$FR = as.character(df_corresp$FR)
>  df_corresp$FR = iconv(df_corresp$FR, to="ASCII//TRANSLIT") 
>  df_corresp$FR = tolower(df_corresp$FR)
>  remove_minus = function(s) paste(unlist(strsplit(s, split='-',fixed=TRUE)),collapse="")
>  remove_space = function(s) paste(unlist(strsplit(s, split=' ',fixed=TRUE)),collapse="")
>  df_corresp$FR = sapply(df_corresp$FR,remove_minus)
>  df_corresp$FR = sapply(df_corresp$FR,remove_space)

> df_corresp$EN = as.character(df_corresp$EN)
> df_corresp$EN = iconv(df_corresp$EN, to="ASCII//TRANSLIT") 
> df_corresp$EN = tolower(df_corresp$EN)
> df_corresp$EN = sapply(df_corresp$EN,remove_minus)
> df_corresp$EN = sapply(df_corresp$EN,remove_space)
> split_dots = function(s) strsplit(s, split=':',fixed=TRUE)[[1]][1]

If we now look at the countries for which the English name can be matched with the country name used in R's database,

> library(maps)
>  world<-map(database="world")
>  world$pays_EN <- world$names  
>  world$pays_EN <- tolower(world$pays_EN)
>  world$pays_EN = sapply(world$pays_EN,remove_space) 
>  world$pays_EN = sapply(world$pays_EN,remove_minus) 
>  world$pays_EN = sapply(world$pays_EN,split_dots) 
>  world$pays_FR <- df_corresp$FR[match(world$pays_EN, df_corresp$EN)]

we get the following graph

>  color <- !is.na(world$pays_FR)
>  map(database="world", fill=TRUE, col=color)

The only countries for which there is no match are the United States of America (usa in R's database), Russia (ussr), the two Congos (the Democratic Republic and the other one), and Côte d'Ivoire. With a bit of tinkering on these 4 countries, we can get a full correspondence between the names used in R and the French names.
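
For instance, a minimal sketch of that tinkering (the strings below are illustrative and must be replaced by the exact cleaned names actually present in world$pays_EN and df_corresp$FR),

patch = c("usa"  = "etatsunis",
          "ussr" = "russie")
idx = world$pays_EN %in% names(patch)
world$pays_FR[idx] = patch[world$pays_EN[idx]]
# and similarly for the two Congos and Côte d'Ivoire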

John Snow, and Google Maps

In my previous post, I discussed how to use OpenStreetMap (and the standard plotting functions of R) to visualize John Snow’s dataset. But it is also possible to use Google Maps (and ggplot2-type graphs).

library(ggmap)
get_london <- get_map(c(-.137,51.513), zoom=17)
london <- ggmap(get_london)

Again, the tricky part comes from the fact that the coordinate representation system, here, is not the same as the one used on Robin Wilson’s blog.

> library(maptools)
> setwd("/cholera/")
> deaths <- readShapePoints("Cholera_Deaths")
> head(deaths@coords)
coords.x1 coords.x2
0  529308.7  181031.4
1  529312.2  181025.2
2  529314.4  181020.3
3  529317.4  181014.3
4  529320.7  181007.9
5  529336.7  181006.0
> X <- deaths@coords

or load X_deaths.RData directly. So now, we have to change the coordinate system,

df_deaths <- data.frame(X)
library(sp)
library(rgdal)
coordinates(df_deaths)=~coords.x1+coords.x2
proj4string(df_deaths)=CRS("+init=epsg:27700") 
df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))

Here, we have the same coordinate system as the one used in Google Maps. Now, we can add a layer, with the points,

london + geom_point(aes(x=coords.x1, y=coords.x2),data=data.frame(df_deaths@coords),col="red")

Again, it is possible to add the density, as an additional layer,

london + geom_point(aes(x=coords.x1, y=coords.x2), 
data=data.frame(df_deaths@coords),col="red")+
geom_density2d(data = data.frame(df_deaths@coords), 
aes(x = coords.x1, y=coords.x2), size = 0.3) + 
stat_density2d(data = data.frame(df_deaths@coords), 
aes(x = coords.x1, y=coords.x2,fill = ..level.., alpha = ..level..),size = 0.01, bins = 16, geom = "polygon") + scale_fill_gradient(low = "green", high = "red",guide = FALSE) + 
scale_alpha(range = c(0, 0.3), guide = FALSE)

 

John Snow, and OpenStreetMap

While I was preparing a training on data visualization, I wanted to get a nice visual for John Snow’s cholera dataset. This dataset can actually be found in a great package of famous historical datasets.

library(HistData)
data(Snow.deaths)
data(Snow.streets)

One can easily visualize the deaths, on a simplified map, with the streets (here simple grey segments, see Vincent Arel-Bundock’s post)

plot(Snow.deaths[,c("x","y")], col="red", pch=19, cex=.7,xlab="", ylab="", xlim=c(3,20), ylim=c(3,20))
slist <- split(Snow.streets[,c("x","y")],as.factor(Snow.streets[,"street"]))
invisible(lapply(slist, lines, col="grey"))

Continue reading John Snow, and OpenStreetMap

Subjective Ways of Cutting a Continuous Variable

You have probably seen @coulmont’s maps. If you haven’t, you should probably go and spend some time on his blog (but please, come back afterwards, I still have a story to tell you). Consider for instance the maps we obtained for a post published in Monkey Cage, a few months ago,

The code was discussed in a blog post (at that time, I spent some time on the econometric model, not really on the map).

My mentor in cartography, Reka (aka @visionscarto) taught me that maps were always subjective. And indeed.
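
As a small illustration of that subjectivity (a sketch of mine, not taken from the post), the very same continuous variable, cut into classes with two different rules, yields two rather different maps; it reuses the fact, exploited earlier on this page, that map("france") has about 115 polygons,

library(maps)
library(classInt)
library(RColorBrewer)
set.seed(1)
x = sort(rnorm(115))                                     # a fake continuous variable, one value per polygon
pal = brewer.pal(5, "RdYlBu")
brk_quantile = classIntervals(x, 5, style = "quantile")  # quantile breaks
brk_equal    = classIntervals(x, 5, style = "equal")     # equal-width breaks
par(mfrow = c(1, 2))
map("france", fill = TRUE, col = findColours(brk_quantile, pal))
map("france", fill = TRUE, col = findColours(brk_equal, pal))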

Continue reading Subjective Ways of Cutting a Continuous Variable

Generating Hurricanes with a Markov Spatial Process

The National Hurricane Center (NHC) collects datasets with all storms in the North Atlantic, the North Atlantic Hurricane Database (HURDAT). For all storms, we have the location of the storm every six hours (at midnight, six a.m., noon and six p.m.). Note that we also have the date, the maximal wind speed – over a 6-hour window – and the pressure in the eye of the storm.

It is possible to run the following function

library(XML)
extract.track=function(year=2012,p=TRUE){

Continue reading Generating Hurricanes with a Markov Spatial Process

Crowded Cities, Paris, Hong Kong and Montréal

Over the past years, I have lived in several cities, each of them completely different from the others. I lived in Paris, which is a big city in Europe, with large suburbs too (la banlieue).

Then I lived in Hong Kong, which is a larger city, in Asia.

It was crowded. At least, that was the feeling I had while living there. More recently, I have been living in Montréal, in North America. Montréal is a large city. Or, to be more specific, an island,

The three cities are quite different. Paris: 2.211 million inhabitants over 105.4 km² (density 21,057 inhabitants per km²). Montréal: 1.621 million inhabitants over an area more than three times larger, 365.1 km² (density 4,441 inhabitants per km²). Hong Kong: 7.234 million inhabitants over an area again three times larger, 1,104 km² (density 6,553 inhabitants per km²). In Hong Kong, there are several hills where it is not possible to build anything: on a large part of the island, the density is zero.

Continue reading Crowded Cities, Paris, Hong Kong and Montréal

Moving the North Pole to the Equator

I am still working with @3wen on visualizations of the North Pole. So far, it was not that difficult to generate maps, but we started to have problems with the ice region in the Arctic. More precisely, it was complicated to compute the area of this region (even if we can easily get a shapefile). Consider the globe,

worldmap <- ggplot() + 
geom_polygon(data = world.df, aes(x = long, y = lat, group = group)) +
scale_y_continuous(breaks = (-2:2) * 30) +
scale_x_continuous(breaks = (-4:4) * 45)

and then, add three points in the northern hemisphere, and plot the associated triangle

P1 <- worldmap + geom_polygon(data = triangle, aes(x = long, y = lat, group = group), 
fill ="blue", alpha = 0.6, col = "light blue", size = .8)+
geom_point(data = triangle, aes(x = long, y = lat, group = group),colour = "red")+

for some given projection, e.g.

coord_map("ortho", orientation=c(61, -74, 0))

This can be done with the following function

proj1=function(x=75){
triangle <- data.frame(long=c(-70,-110,-90*(x<90)+90*(x>90)),
lat=c(60,60,x*(x<90)+(90-(x-90))*(x>90)),group=1, region=1)
worldmap <- ggplot() + 
geom_polygon(data = world.df, aes(x = long, y = lat, group = group)) +
scale_y_continuous(breaks = (-2:2) * 30) +
scale_x_continuous(breaks = (-4:4) * 45)
P1 <- worldmap + geom_polygon(data = triangle, aes(x = long, y = lat, group = group), 
fill ="blue", alpha = 0.6, col = "light blue", size = .8)+
geom_point(data = triangle, aes(x = long, y = lat, group = group),colour = "red")+
coord_map("ortho", orientation=c(61, -74, 0)) 
print(P1)
}

or

I am not sure I understand why the projection of the triangle is not convex in the graph above, but let's say it is not a big deal here. Actually, the problem is that our interest is in regions (polygons, from a geometrical point of view) that contain the North Pole. And here it starts to get messy. I can easily move the upper point to the other side of the globe, but the polygon is not correct,

I do understand that this is a non-trivial problem, but it means that it should not be that simple to compute the area of a polygon (a region) that contains the North Pole. Which is exactly what we observed in our computations. And I believe that one heuristic interpretation is related to the following graph

My skills in geometry are extremely poor, so do not expect me to go through the code of the function that computes the area of a polygon! Actually, my idea is the following: if the problem is that the North Pole is in the region, let's apply a rotation to shift the North Pole onto the Equator. The code that, from latitudes and longitudes, gives the new latitudes and longitudes after a rotation around the y-axis (the North Pole goes down along the Greenwich meridian) is

rotation=function(Z,theta){
lon=Z[,1]/180*pi; lat=Z[,2]/180*pi
x=cos(lon)*cos(lat)
y=sin(lon)*cos(lat)
z=sin(lat)
pt1=cbind(x,y,z)
M=matrix(c(cos(theta),0,-sin(theta),0,1,0,sin(theta),0,cos(theta)),3,3)
pt2=t(M%*%t(pt1))
lat=asin(pt2[,3])*180/pi
lon=atan2(pt2[,2],pt2[,1])*180/pi
return(cbind(lon,lat))}
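
As a quick sanity check (not in the original post), rotating the North Pole itself by $\theta=\pi/2$ should send it, up to numerical rounding, onto the equator on the Greenwich meridian,

rotation(cbind(0, 90), pi/2)
# expected: lon close to 0, lat close to 0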

With a rotation angle going from $\theta=0$ (no change) to $\theta=\pi/2$ (the North Pole on the equator), we get

From now on, it is possible to compute the area of any region containing the North Pole! One simply applies the rotation function to all the databases generated from shapefiles (and then the opposite rotation to get back to the proper location). We can then compute the centroid of the ice region, for example,

r.glace=glace
r.glace[,1:2]=rotation(glace[,1:2],pi/2)
M=matrix(NA,length(unique(glace$id)),3)
j=0
for(i in unique(glace$id)){j=j+1
Polyglace <- as(r.glace[glace$id==i,c("long","lat")],"gpc.poly")
M[j,1]=area.poly(Polyglace)
M[j,2:3]=centroid(r.glace[r.glace$id==i,c("long","lat")])
}
Z=c(weighted.mean(M[,2],M[,1]),weighted.mean(M[,3],M[,1]))
rotation(rbind(Z),-pi/2)[1,]

And we get

and below, we can visualize all the locations of the centroid of the ice region in the past 25 years

Voting Twice in France

On the Monkey Cage blog, Baptiste Coulmont (a.k.a. @coulmont) recently published a post entitled “You can vote twice ! The many political appeals of proxy votes in France“, co-authored with Joël Gombin (a.k.a. @joelgombin) and myself. The study was initially written in French, as mentioned in a previous post. Baptiste posted additional information on his blog (http://coulmont.com/blog/…) and I also wanted to post some lines of code, to mention a model that was not used in that study (more complex to analyze, but more realistic, and with the same conclusions). The econometric study is based on aggregated votes, with a possible risk of ecological fallacy.

  • Regression Model: Possible Explanatory Variables

The first idea was to model proxies using a binomial regression, per polling station $i$: $Y_i\sim\mathcal{B}(n_i,p_i)$, where $Y_i$ denotes the number of proxy votes at station $i$, and $n_i$ denotes the number of registered voters. The proportion $p_i$ can be a function of possible explanatory variables (on Baptiste’s blog there is additional information about the datasets, obtained from insee.fr and opendata.paris.fr)

> bt1=read.table("paris2007-pres-t1.csv",header=TRUE,sep=";")
> bt2=read.table("paris2007-pres-t2.csv",header=TRUE,sep=";")
> bv=read.table("paris-bv-insee-07.csv",header=TRUE,sep=";")
> bv$BV=bv$BVCOM
> baset1=merge(bt1,bv,by="BV")
> baset2=merge(bt2,bv,by="BV")
> baset1$LOGEMENT=baset1$PROPRIO+baset1$LOCNONHLM+baset1$LOCHLM+baset1$GRATUIT
> baset2$LOGEMENT=baset2$PROPRIO+baset2$LOCNONHLM+baset2$LOCHLM+baset2$GRATUIT

For instance, assume that $p_i$ is a function of the proportion of homeowners (people owning the place they live in) in the neighborhood of the polling station, denoted $x_i$,

> variable="PROPRIO"
> reference="LOGEMENT"
> baset1$taux=baset1[,variable]/baset1[,reference]
> baset2$taux=baset2[,variable]/baset2[,reference]

We can consider a logistic regression, $\text{logit}(p_i)=\beta_0+\beta_1 x_i$,

or a logistic regression with splines, $\text{logit}(p_i)=h(x_i)$ for some smooth function $h(\cdot)$, if we do not want to assume a linear model

With cubic splines, the code is

> b=hist(baset1$taux,plot=FALSE)
> library(splines)
> regt1=glm(PROCURATIONS/INSCRITS~bs(taux,6),family=binomial,weights=INSCRITS,data=baset1)
> regt2=glm(PROCURATIONS/INSCRITS~bs(taux,6),family=binomial,weights=INSCRITS,data=baset2)
> u=seq(min(baset1$taux)+.015,max(baset1$taux)-.015,by=.001)
> ND=data.frame(taux=u)
> ug=seq(0,max(baset1$taux)+.05,by=.001)
> pt1=predict(regt1,newdata=ND,se=TRUE,type="response")
> pt2=predict(regt2,newdata=ND,se=TRUE,type="response")
> library(RColorBrewer)
> CL=brewer.pal(6, "RdBu")
> plot(ug,ug*1,col="white",xlab=variable,ylab="Taux de procuration",
+ ylim=c(0,.1))
> for(i in 1:(length(b$breaks)-1)){
+ polygon(b$breaks[i+c(0,0,1,1)],c(0,b$counts[i],b$counts[i],0)
+ /max(b$counts)*.05,col="light yellow",border=NA)}
> polygon(c(u,rev(u)),c(pt1$fit+2*pt1$se.fit,rev(pt1$fit-2*pt1$se.fit)),
+ border=NA,density=30,col=CL[4])

while a standard logistic regression would be

> lines(u,pt1$fit,col=CL[6],lwd=2)
> polygon(c(u,rev(u)),c(pt2$fit+2*pt2$se.fit,rev(pt2$fit-2*pt2$se.fit)),
+ border=NA,density=30,col=CL[3])
> lines(u,pt2$fit,col=CL[1],lwd=2)
> regt1l=glm(PROCURATIONS/INSCRITS~taux,family=binomial,weights=INSCRITS,data=baset1)
> regt2l=glm(PROCURATIONS/INSCRITS~taux,family=binomial,weights=INSCRITS,data=baset2)
> ND=data.frame(taux=ug)
> pt1l=predict(regt1l,newdata=ND,se=TRUE,type="response")
> pt2l=predict(regt2l,newdata=ND,se=TRUE,type="response")
> lines(ug,pt1l$fit,col=CL[5],lty=2)
> lines(ug,pt2l$fit,col=CL[2],lty=2)
> legend(0,.1,c("Second Tour","Premier Tour"),col=CL[c(1,6)],
+ lwd=2,lty=1,border=NA)

Here it is (the confidence region is for the spline regression), with the first round of the Presidential election in blue, and the second round in red (in France, it is a two-round system)

(the label on the y axis is not correct). We can also consider, as explanatory variable, the proportion of H.L.M. (low-cost or council housing),

While I do like the graph, unfortunately the interpretation of the coefficient $\beta_1$ might be complicated

> summary(regt1l)

Call:
glm(formula = PROCURATIONS/INSCRITS ~ taux, family = binomial, 
    data = baset1, weights = INSCRITS)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-12.9549   -1.5722    0.0319    1.6292   13.1303  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -3.70811    0.01516  -244.6   <2e-16 ***
taux         1.49666    0.04012    37.3   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 12507  on 836  degrees of freedom
Residual deviance: 11065  on 835  degrees of freedom
AIC: 15699

Number of Fisher Scoring iterations: 4

> summary(regt2l)

Call:
glm(formula = PROCURATIONS/INSCRITS ~ taux, family = binomial, 
    data = baset2, weights = INSCRITS)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-15.4872   -1.7817   -0.1615    1.6035   12.5596  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -3.24272    0.01230 -263.61   <2e-16 ***
taux         1.45816    0.03266   44.65   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 9424.7  on 836  degrees of freedom
Residual deviance: 7362.3  on 835  degrees of freedom
AIC: 12531

Number of Fisher Scoring iterations: 4
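
Since the slope of a logistic regression is hard to read directly, one way to look at it (a small sketch of mine, reusing regt1l and regt2l above, with illustrative values of taux) is through predicted proxy rates at a few values of the home-ownership rate,

ND = data.frame(taux = c(.2, .4, .6))
cbind(taux  = ND$taux,
      tour1 = predict(regt1l, newdata = ND, type = "response"),
      tour2 = predict(regt2l, newdata = ND, type = "response"))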

So we also considered a standard linear regression model for the proxy rate, per station, $p_i=\beta_0+\beta_1 x_i+\varepsilon_i$

(again, either a model with splines, or a standard linear model). The code is

> regt1=lm(PROCURATIONS/INSCRITS~bs(taux,6),weights=INSCRITS,data=baset1)
> regt2=lm(PROCURATIONS/INSCRITS~bs(taux,6),weights=INSCRITS,data=baset2)
> u=seq(min(baset1$taux)+.015,max(baset1$taux)-.015,by=.001)
> ND=data.frame(taux=u)
> ug=seq(0,max(baset1$taux)+.05,by=.001)
> pt1=predict(regt1,newdata=ND,se=TRUE,type="response")
> pt2=predict(regt2,newdata=ND,se=TRUE,type="response")
> library(RColorBrewer)
> CL=brewer.pal(6, "RdBu")
> plot(ug,ug*1,col="white",xlab=variable,ylab="Taux de procuration",
+ ylim=c(0,.1))
> for(i in 1:(length(b$breaks)-1)){
+ polygon(b$breaks[i+c(0,0,1,1)],c(0,b$counts[i],b$counts[i],0)
+ /max(b$counts)*.05,col="light yellow",border=NA)}
> polygon(c(u,rev(u)),c(pt1$fit+2*pt1$se.fit,rev(pt1$fit-2*pt1$se.fit)),
+ border=NA,density=30,col=CL[4])
> lines(u,pt1$fit,col=CL[6],lwd=2)
> polygon(c(u,rev(u)),c(pt2$fit+2*pt2$se.fit,rev(pt2$fit-2*pt2$se.fit)),
+ border=NA,density=30,col=CL[3])
> lines(u,pt2$fit,col=CL[1],lwd=2)
> regt1l=lm(PROCURATIONS/INSCRITS~taux,weights=INSCRITS,data=baset1)
> regt2l=lm(PROCURATIONS/INSCRITS~taux,weights=INSCRITS,data=baset2)
> ND=data.frame(taux=ug)
> pt1l=predict(regt1l,newdata=ND,se=TRUE,type="response")
> pt2l=predict(regt2l,newdata=ND,se=TRUE,type="response")
> lines(ug,pt1l$fit,col=CL[5],lty=2)
> lines(ug,pt2l$fit,col=CL[2],lty=2)
> legend(0,.1,c("Second Tour","Premier Tour"),col=CL[c(1,6)],
+ lwd=2,lty=1,border=NA)

Here, again, is the evolution as a function of the rate of homeownership,

The graph is rather close to the one before, and here, the interpretation of the summary table is more conventional,

> summary(regt1l)

Call:
lm(formula = PROCURATIONS/INSCRITS ~ taux, data = baset1, weights = INSCRITS)

Weighted Residuals:
    Min      1Q  Median      3Q     Max 
-1.9994 -0.2926  0.0011  0.3173  3.2072 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 0.021268   0.001739   12.23   <2e-16 ***
taux        0.054371   0.004812   11.30   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.646 on 835 degrees of freedom
Multiple R-squared:  0.1326,	Adjusted R-squared:  0.1316 
F-statistic: 127.7 on 1 and 835 DF,  p-value: < 2.2e-16

> summary(regt2l)

Call:
lm(formula = PROCURATIONS/INSCRITS ~ taux, data = baset2, weights = INSCRITS)

Weighted Residuals:
    Min      1Q  Median      3Q     Max 
-2.9029 -0.4148 -0.0338  0.4029  3.4907 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 0.033909   0.001866   18.17   <2e-16 ***
taux        0.079749   0.005165   15.44   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.6934 on 835 degrees of freedom
Multiple R-squared:  0.2221,	Adjusted R-squared:  0.2212 
F-statistic: 238.4 on 1 and 835 DF,  p-value: < 2.2e-16

We used this code to produce the graphs mentioned in the post. But before discussing the residuals of the multiple-regression model we considered, I wanted to share some awesome code that produces maps (I can say the code is awesome since Baptiste wrote most of it).

  • Visualization of Residuals on a Map of Paris

To plot the neighborhoods of the polling stations, once again, the post on Baptiste’s blog explains how the shapefile was obtained from cartelec.net

> library(maptools)
> library(rgdal)
> library(classInt)
> paris=readShapeSpatial("paris-cartelec.shp")

To visualize the proxy rate (the average of round one and round two), here is the code

> elec=data.frame()
> elec=cbind(bt1$BV,(bt1$PROCURATIONS+bt2$PROCURATIONS),(bt1$EXPRIMES+bt2$EXPRIMES))
> colnames(elec)=c("BV","PROCURATIONS","EXPRIMES")
> elec=as.data.frame(elec)
> elec$BV=bt1$BV

To get nice colors as a function of the rates, we use

> m=match(paris$BUREAU,elec$BV)
> plotvar=100*elec$PROCURATIONS/elec$EXPRIMES
> nclr=7
> plotclr=brewer.pal(nclr,"RdYlBu")[nclr:1] 
> class=classIntervals(plotvar[m], nclr, style="fisher",dataPrecision=1)
> colcode=findColours(class, plotclr)

and finally

> par(mar=c(1,1,1,1))
> plot(paris,col=colcode,border=colcode)
> legend(656274.9, 6867308,legend=names(attr(colcode,"table")), 
+ fill=attr(colcode, "palette"), cex=1, bty="n",
+ title="Frequence procurations (%)")

If we consider a model with three explanatory variables to explain the proxy rate,

> regt1=lm(PROCURATIONS/INSCRITS~I(POP65P/POP)+
+ I(PROPRIO/LOGEMENT)+I(CS3/POP1564),weights=INSCRITS,data=baset1)

we can plot the residuals using

> m=match(paris$BUREAU,elec$BV)
> plotvar=100*residuals(regt1)
> nclr=7
> plotclr=brewer.pal(nclr,"RdYlBu")[nclr:1] 
> class=classIntervals(plotvar[m], nclr, style="fisher",dataPrecision=1)
> colcode=findColours(class, plotclr)
> par(mar=c(1,1,1,1))
> plot(paris,col=colcode,border=colcode)
> legend(656274.9, 6867308,legend=names(attr(colcode,"table")), 
+ fill=attr(colcode, "palette"), cex=1, bty="n",title="Residus")

It might not be pure random spatial noise… But we could not do better with our small set of covariates.
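
One way to make that claim more precise (a sketch of mine, not part of the original study) would be a Moran's I test on the mapped residuals, using the spdep package and the same station-to-polygon matching as the plotting code above (which assumes baset1 and elec share the same row order),

library(spdep)
res_poly = residuals(regt1)[match(paris$BUREAU, elec$BV)]
keep = !is.na(res_poly)
nb = poly2nb(paris[keep, ])              # contiguity neighbours between polling-station polygons
lw = nb2listw(nb, zero.policy = TRUE)    # row-standardised spatial weights
moran.test(res_poly[keep], lw, zero.policy = TRUE)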