For our last course on flows and networks, I have uploaded additional material. Slides are now online
A markdown-based webpage is now online, with all the code.
To go a little further than my previous post, élections, pièges à com(mentaires à la con), I just wanted to show a few maps obtained by simulation. But first, let us spend two minutes on the notion of 'confounding'. Following the illustration on Wikipedia, the idea is that, conditionally on Z (our 'confounding factor'), the variables X and Y are independent. But this conditional independence can (as is often the case) go along with a strong correlation between X and Y. One can for instance consider a model of the form X = Z + \varepsilon, where \varepsilon is an idiosyncratic component, and Y = Z + \eta (where \eta is again an idiosyncratic component).
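Just to fix ideas, here is a minimal simulation sketch of that mechanism (with unit-variance components, so nothing here comes from the data used below): X and Y are strongly correlated, yet the correlation essentially vanishes once Z is controlled for.

set.seed(1)
n = 1e4
Z = rnorm(n)
X = Z + rnorm(n)   # X = Z + epsilon
Y = Z + rnorm(n)   # Y = Z + eta
cor(X, Y)                   # close to 0.5, driven entirely by Z
cor(residuals(lm(X ~ Z)),   # once Z is controlled for,
    residuals(lm(Y ~ Z)))   # the correlation is essentially zero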
For instance, suppose Z is a variable related to the latitude of the département, say the average temperature (just to have some vague story to tell, and to be a little less dry)
> n=115
> X=matrix(rnorm(3*n),n,3)
> X[,1]=sort(X[,1])
> library(maps)
> map("france",fill =TRUE,
col= rgb(1,0,0,pnorm(X[,1])))
Now, suppose we have a variable X strongly correlated with Z, for instance sales of sunscreen (in volume, again just to have a story that illustrates the point)
> r=.8
> X[,2]=r*X[,1]+sqrt(1-r^2)*X[,2]
> map("france",fill =TRUE,
col= rgb(0,1,0,pnorm(X[,2])))
which gives the following scatterplot (with a strong correlation with the 'confounding factor')
Now suppose we have another variable Y, also strongly correlated with Z, for instance the price of a pack of butter (which could be quite expensive in the south, where people tend to cook with oil instead)
> r=.8
> X[,3]=r*X[,1]+sqrt(1-r^2)*X[,3]
> map("france",fill =TRUE,
col= rgb(0,1,0,pnorm(X[,3])))
with a correlation of the same order with the 'confounding factor'
If we put the two maps side by side, we get correlated variables,
but we cannot conclude, just by looking at these maps, that sunscreen sales drive the price of a pack of butter! And we can even strengthen the effect by playing a little with the colours. For instance, with
> map("france",fill =TRUE,col = rgb(1,0,0,qbeta(pnorm(X[,1]),.2,.5)))
we emphasize the contrast between the north and the south a little more (here I am only playing with the colour scale, not with the values!)
and with our two variables, we again get a juxtaposition that makes you want to tell a nice story, doesn't it?
Oh, and the correlation between my variables X and Y is rather strong, around 0.6 (which is higher than what we observed yesterday between the unemployment rate and the FN vote share in the regional elections)
> cor(X)
          [,1]      [,2]      [,3]
[1,] 1.0000000 0.7863838 0.7727523
[2,] 0.7863838 1.0000000 0.6312878
[3,] 0.7727523 0.6312878 1.0000000
but correlation is very sensitive to the marginal distributions, so it does not mean much. Even if, visually, the scatterplot looks like the one we had yesterday
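As an aside, a quick way to see that sensitivity to the marginals (a small sketch, unrelated to the election data): a monotone transformation of one variable changes the Pearson correlation, while the rank (Spearman) correlation is left unchanged.

set.seed(1)
z = rnorm(1000)
x = z + rnorm(1000)
y = z + rnorm(1000)
cor(x, y)                              # Pearson, on the original scale
cor(exp(3*x), y)                       # Pearson, after distorting the marginal of x
cor(x, y, method = "spearman")         # rank correlation
cor(exp(3*x), y, method = "spearman")  # unchanged by the monotone transformation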
In short, putting maps side by side looks nice. But drawing any conclusion from it is another story…
As part of the R project for the Data Science for Actuaries program, I will keep posting pieces of code that can be useful in a spatial context. The previous post, on mapping the Brexit vote, was picked up (and much improved) by our neighbours at rgeomatic. Today, I will build on the work of Etienne Flichy, who combines the distribution of the population over the territory with the location of bank branches.
We talk about banks here, but with a database of hairdressers, bakeries, etc., one could do exactly the same! (which means we will have plenty to play with once the SIRENE database is opened up, in the coming weeks). Assume we have a database with all banks geocoded. For the exercise, we will use the location of bank branches, based on data from cbanque.com. The site is fairly easy to scrape once you look at how the pages are built, e.g. http://cbanque.com/pratique/agences/credit-cooperatif/35/. There we retrieve the (postal) addresses, and we can then use https://adresse.data.gouv.fr/csv/ (or various other tools) to geocode them.
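Just as a rough sketch (the endpoint and the structure of the response below are assumptions, based on the public Base Adresse Nationale search API, and should be checked against its documentation), geocoding a single address could look like this,

library(httr)
library(jsonlite)
adr = "55 rue du Faubourg Saint-Honoré Paris"   # any postal address
res = GET("https://api-adresse.data.gouv.fr/search/",
          query = list(q = adr, limit = 1))
geo = fromJSON(content(res, as = "text", encoding = "UTF-8"))
# GeoJSON convention: coordinates are (longitude, latitude)
geo$features$geometry$coordinates[[1]]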
Continue reading A quelle distance d’une banque habite-t-on ?
This Wednesday, I will give a graduate crash course on computational actuarial science, with R, which will be the second part of the lecture of Tuesday. Slides are now available,
Yesterday, my friend Fleur showed me some interesting features of the leaflet package, in R.
library(leaflet)
In order to illustrate, consider locations of (fixed) radars, in several European countries. To get the data, use
download.file("http://carte-gps-gratuite.fr/radars/zones-de-danger-destinator.zip","radar.zip")
unzip("radar.zip")
ext_radar=function(nf){
  radar=read.table(file=paste("destinator/",nf,sep=""),
    sep = ",", header = FALSE, stringsAsFactors = FALSE)
  radar$type <- sapply(radar$V3, function(x){
    z=as.numeric(unlist(strsplit(x, " ")[[1]]))
    return(z[!is.na(z)])})
  radar <- radar[,c(1,2,4)]
  names(radar) <- c("lon", "lat", "type")
  return(radar)}
L=list.files("./destinator/")
nl=nchar(L)
id=which(substr(L,4,8)=="Radar" & substr(L,nl-2,nl)=="csv")
radar_E=NULL
for(i in id) radar_E=rbind(radar_E,ext_radar(L[i]))
(to be honest, if you run that code, you will get several countries, but not France… but if you want to add it, you should be able to do so…). The first tool is based on popups: if you click on a point on the map, you get some information, such as the speed limit where the radar is located. To get a nice pictogram, use
fileUrl <- "http://evadeo.typepad.fr/.a/6a00d8341c87ef53ef01310f9238e6970c-800wi"
download.file(fileUrl,"radar.png", mode = 'wb')
RadarICON <- makeIcon(iconUrl = fileUrl, iconWidth = 20, iconHeight = 20)
And then, use the following code to get a dynamic map, mentioning the variable that should be used for the popup
m <- leaflet(data = radar_E)
m <- m %>% addTiles()
m <- m %>% addMarkers(~lon, ~lat, icon = RadarICON, popup = ~as.character(type))
m
Because the picture is a bit heavy, with almost 20K points, let us focus only on France,
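One simple way to do that kind of zoom (a sketch, with a purely indicative bounding box for metropolitan France) is to filter the points on longitude and latitude before building the map,

in_box = radar_E$lon > -5 & radar_E$lon < 9 &
         radar_E$lat > 41 & radar_E$lat < 52   # rough box around France
radar_FR = radar_E[in_box, ]
m <- leaflet(data = radar_FR) %>% addTiles() %>%
  addMarkers(~lon, ~lat, icon = RadarICON, popup = ~as.character(type))
m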
Last weekend, following the publication of See the world differently with these minimalist maps by Matthew Champion, there was quite a lot of activity around minimalist maps. In particular, Reka (aka @visionscarto) and Philippe (aka @recifs) suggested I write a post for Visions Carto on how those maps are built. I am flattered, even though I find my own contribution to the project incredibly modest (and I always feel humble next to Reka's superb drawings).
I therefore refer you to the post Cartes Minimalistes for more details, but for the most curious, here are two additional maps, more French. The first one shows the railways,
library(maptools)
setwd("/home/freakonometrics/Documents/data")
loc="http://www.mapcruzin.com/download-shapefile/france-railways-shape.zip"
download.file(loc,destfile="rail_france.zip")
unzip("rail_france.zip", exdir="./rail_france/")
shap=readShapeLines("./rail_france/railways.shp")
plot(shap,lwd=.7)
and the second one, the roads in the Paris region,
loc="http://www.mapcruzin.com/download-shapefile/france-roads-shape.zip"
download.file(loc,destfile="road_france.zip")
unzip("road_france.zip", exdir="./road_france/")
shap=readShapeLines("./road_france/roads.shp")
plot(shap,lwd=.7,ylim=48.85+c(-.5,.5),
  xlim=2.35+c(-.5,.5))
Next time, I will explain how to fix shapefiles when something is wrong with them (I am thinking of the comment pointing out that it was a pity to have a road between the United Kingdom and Iceland).
This week, I mentioned a series of maps, on Twitter,
some minimalist maps https://t.co/YCNPf3AR9n (poke @visionscarto) pic.twitter.com/Ip9Tylsbkv
— Arthur Charpentier (@freakonometrics) 2 Septembre 2015
Friday evening, just before leaving the office to pick up the kids after their first week back in class, Matthew Champion (aka @matthewchampion) sent me an email, asking for more details. He wanted to know if I had produced those graphs, and if he could mention them in a post. The truth is, I have no idea who produced those graphs, but I told him one can easily reproduce them. For instance, for the cities, in R, use
> library(maps)
> data("world.cities")
> plot(world.cities$lon,world.cities$lat,
+ pch=19,cex=.7,axes=FALSE,xlab="",ylab="")
It is possible to get a more minimalist one by plotting only cities with more than 100,000 inhabitants, e.g.,
> world.cities2 = world.cities[
+ world.cities$pop>100000,]
> plot(world.cities2$lon,world.cities2$lat,
+ pch=19,cex=.7,axes=FALSE,xlab="",ylab="")
For the airports, it was slightly more complex since on http://openflights.org/data.html#airport, 6,977 airports are mentioned. But on http://www.naturalearthdata.com/, I found another dataset with only 891 airports.
> library(maptools)
> shape <- readShapePoints(
+ "~/data/airport/ne_10m_airports.shp")
> plot(shape,pch=19,cex=.7)
On the same website, one can find a dataset for ports,
> shape <- readShapePoints(
+ "~/data/airport/ne_10m_ports.shp")
> plot(shape,pch=19,cex=.7)
This is for graphs based on points. For those based on lines, for instance rivers, shapefiles can be downloaded from https://github.com/jjrom/hydre/tree/, and then, use
> require(maptools)
> shape <- readShapeLines(
+ "./data/river/GRDC_687_rivers.shp")
> plot(shape,col="blue")
For roads, the shapefile can be downloaded from http://www.naturalearthdata.com/
> shape <- readShapeLines(
+ "./data/roads/ne_10m_roads.shp")
> plot(shape,lwd=.5)
Last, but not least, for lakes, we need the polygons,
> shape <- readShapePoly(
+ "./data/lake/ne_10m_lakes.shp")
> plot(shape,col="blue",border="blue",lwd=2)
Nice, isn’t it? See See the world differently with these minimalist maps for Matthew Champion‘s post.
This week, in Istanbul, for the second training on data science, we’ve been discussing classification and regression models, but also visualisation. Including maps. And we did have a brief introduction to the leaflet package,
devtools::install_github("rstudio/leaflet")
require(leaflet)
To see what can be done with that package, we will use John Snow's cholera dataset one more time, discussed in previous posts (one with a visualisation on a Google Maps background, and a second one on an OpenStreetMap background),
library(sp)
library(rgdal)
library(maptools)
setwd("/cholera/")
deaths <- readShapePoints("Cholera_Deaths")
df_deaths <- data.frame(deaths@coords)
coordinates(df_deaths)=~coords.x1+coords.x2
proj4string(df_deaths)=CRS("+init=epsg:27700")
df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))
df=data.frame(df_deaths@coords)
lng=df$coords.x1
lat=df$coords.x2
Once the leaflet package is installed, we can use it from the RStudio console (which is what we will do here), within R Markdown documents, or within Shiny applications. But because of the restrictions on this blog (rules of hypotheses.org), there will only be screenshots here. If you run the code in RStudio, you will get interactive maps in the viewer window.
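If you want to keep an interactive version outside RStudio, one option (a side note, not needed for what follows) is to export the map as a standalone HTML page with the htmlwidgets package,

library(htmlwidgets)
# 'm' is assumed to be a leaflet map object, as built below
saveWidget(m, file = "cholera_map.html", selfcontained = TRUE)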
First step. To load a map, centered initially in London, use
m = leaflet() %>% addTiles()
m %>% fitBounds(-.141, 51.511, -.133, 51.516)
In the viewer window of RStudio, it is just like OpenStreetMap, e.g. we can zoom in or zoom out (with the standard + and – in the top left corner)
And we can add additional material, such as the location of the deaths from cholera (since we now have the same coordinate representation system here)
rd=.5
op=.8
clr="blue"
m = leaflet() %>% addTiles()
m %>% addCircles(lng,lat, radius = rd, opacity = op, col = clr)
We can also add a heatmap.
library(KernSmooth)
X=cbind(lng,lat)
kde2d <- bkde2D(X, bandwidth=c(bw.ucv(X[,1]),bw.ucv(X[,2])))
But there is no heatmap function (so far) so we have to do it manually,
x=kde2d$x1
y=kde2d$x2
z=kde2d$fhat
CL=contourLines(x, y, z)
We now have a list of polygons corresponding to isodensity curves. To visualise one of them, use
m = leaflet() %>% addTiles()
m %>% addPolygons(CL[[5]]$x, CL[[5]]$y, fillColor = "red", stroke = FALSE)
Of course, we can display the points and the polygon at the same time
m = leaflet() %>% addTiles()
m %>% addCircles(lng, lat, radius = rd, opacity = op, col = clr) %>%
  addPolygons(CL[[5]]$x, CL[[5]]$y, fillColor = "red", stroke = FALSE)
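To get something closer to a full heatmap, one possibility (just a sketch, built on the objects defined above) is to loop over all the contour levels and stack semi-transparent polygons,

m = leaflet() %>% addTiles()
for(i in seq_along(CL)){
  # one semi-transparent polygon per isodensity curve
  m = m %>% addPolygons(CL[[i]]$x, CL[[i]]$y, fillColor = "red",
                        fillOpacity = .2, stroke = FALSE)
}
m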
Continue reading Interactive Maps for John Snow’s Cholera Data
Most of the standard world maps in R are in English. The other day, some students wanted to visualise data from a database where country names are in English. To get a correspondence between English and French names, one can use the following table
> library(gdata)
> library(xlsx)
> download.file("http://www.stat.gouv.qc.ca/statistiques/divisions-territoriales/pays-liste-isq-web.xls","corresp")
> xls_corresp <- read.xls("corresp",sheet=1,encoding="latin1")
Here we have
> df_corresp <- data.frame(
+ FR=xls_corresp$X.5,
+ EN=xls_corresp$X.11)
> df_corresp[5:10,]
                    FR                 EN
5  Belgique-Luxembourg Belgium-Luxembourg
6    Îles du Pacifique    Pacific Islands
7          Afghanistan        Afghanistan
8       Afrique du Sud       South Africa
9           Îles Åland      Åland Islands
10             Albanie            Albania
To match the names used in R with those in the table at our disposal, we need to do a bit of string manipulation,
> df_corresp$FR = as.character(df_corresp$FR)
> df_corresp$FR = iconv(df_corresp$FR, to="ASCII//TRANSLIT")
> df_corresp$FR = tolower(df_corresp$FR)
> remove_minus = function(s) paste(unlist(strsplit(s, split='-',fixed=TRUE)),collapse="")
> remove_space = function(s) paste(unlist(strsplit(s, split=' ',fixed=TRUE)),collapse="")
> df_corresp$FR = sapply(df_corresp$FR,remove_minus)
> df_corresp$FR = sapply(df_corresp$FR,remove_space)
> df_corresp$EN = as.character(df_corresp$EN)
> df_corresp$EN = iconv(df_corresp$EN, to="ASCII//TRANSLIT")
> df_corresp$EN = tolower(df_corresp$EN)
> df_corresp$EN = sapply(df_corresp$EN,remove_minus)
> df_corresp$EN = sapply(df_corresp$EN,remove_space)
> split_dots = function(s) strsplit(s, split=':',fixed=TRUE)[[1]][1]
If we look at the countries for which the English name could be matched with the country name used in the R database,
> library(maps)
> world <- map(database="world")
> world$pays_EN <- world$names
> world$pays_EN <- tolower(world$pays_EN)
> world$pays_EN = sapply(world$pays_EN,remove_space)
> world$pays_EN = sapply(world$pays_EN,remove_minus)
> world$pays_EN = sapply(world$pays_EN,split_dots)
> world$pays_FR <- df_corresp$FR[match(world$pays_EN, df_corresp$EN)]
we obtain the following graph
The only countries without a match are the United States of America (usa in the R database), Russia (ussr), the two Congos (the Democratic Republic and the other one), and Côte d'Ivoire. With a little manual tinkering on those four countries, we can get a full correspondence between the names used in R and the French names.
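For instance, a minimal sketch of that manual fix could look like this (the spellings below are only illustrative, to be adapted to whatever the target database actually uses, and the two Congos can be handled the same way),

# patch some of the unmatched R names by hand (illustrative spellings)
fix = c(usa = "etatsunis", ussr = "russie", ivorycoast = "cotedivoire")
idx = world$pays_EN %in% names(fix)
world$pays_FR[idx] = fix[world$pays_EN[idx]]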
In my previous post, I discussed how to use OpenStreetMaps (and standard plotting functions of R) to visualize John Snow’s dataset. But it is also possible to use Google Maps (and ggplot2 types of graphs).
library(ggmap)
get_london <- get_map(c(-.137,51.513), zoom=17)
london <- ggmap(get_london)
Again, the tricky part comes from the fact that the coordinate representation system, here, is not the same as the one used on Robin Wilson’s blog.
To get the data, one can reuse the code from the previous post, or simply load X_deaths.RData. So now, we have to change the coordinate system
df_deaths <- data.frame(X)
library(sp)
library(rgdal)
coordinates(df_deaths)=~coords.x1+coords.x2
proj4string(df_deaths)=CRS("+init=epsg:27700")
df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))
Here, we have the same coordinate system as the one used in Google Maps. Now, we can add a layer, with the points,
london + geom_point(aes(x=coords.x1, y=coords.x2),data=data.frame(df_deaths@coords),col="red")
Again, it is possible to add the density, as an additional layer,
london + geom_point(aes(x=coords.x1, y=coords.x2),
    data=data.frame(df_deaths@coords), col="red") +
  geom_density2d(data = data.frame(df_deaths@coords),
    aes(x = coords.x1, y = coords.x2), size = 0.3) +
  stat_density2d(data = data.frame(df_deaths@coords),
    aes(x = coords.x1, y = coords.x2, fill = ..level.., alpha = ..level..),
    size = 0.01, bins = 16, geom = "polygon") +
  scale_fill_gradient(low = "green", high = "red", guide = FALSE) +
  scale_alpha(range = c(0, 0.3), guide = FALSE)
While I was working for a training on data visualization, I wanted to get a nice visual for John Snow’s cholera dataset. This dataset can actually be found in a great package of famous historical datasets.
library(HistData)
data(Snow.deaths)
data(Snow.streets)
One can easily visualize the deaths, on a simplified map, with the streets (here simple grey segments, see Vincent Arel-Bundock’s post)
plot(Snow.deaths[,c("x","y")], col="red", pch=19, cex=.7,
  xlab="", ylab="", xlim=c(3,20), ylim=c(3,20))
slist <- split(Snow.streets[,c("x","y")],
  as.factor(Snow.streets[,"street"]))
invisible(lapply(slist, lines, col="grey"))
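On top of that simplified map, one can also sketch where the deaths concentrate (a quick addition to the plot above), for instance with a two-dimensional kernel density estimate from the MASS package, drawn as contour lines,

library(MASS)
# 2D kernel density of the deaths, overlaid on the previous plot
dens <- kde2d(Snow.deaths$x, Snow.deaths$y, n = 100,
              lims = c(3, 20, 3, 20))
contour(dens, add = TRUE, col = "blue", drawlabels = FALSE)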
You have probably seen @coulmont‘s maps. If you haven’t, you should probably go and spend some time on his blog (but please, come back afterwards, I still have a story to tell you). Consider for instance the maps we obtained for a post published in Monkey Cage, a few months ago,
The codes were discussed on a blog post (I spent some time on the econometric model, not really on the map, by that time).
My mentor in cartography, Reka (aka @visionscarto) taught me that maps were always subjective. And indeed.
Continue reading Subjective Ways of Cutting a Continuous Variables
The National Hurricane Center (NHC) collects datasets with all storms in the North Atlantic, the North Atlantic Hurricane Database (HURDAT). For all storms, we have the location of the storm every six hours (at midnight, six a.m., noon and six p.m.). We also have the date, the maximal wind speed – on a 6 hour window – and the pressure in the eye of the storm.
It is possible to run the following function
library(XML)
extract.track=function(year=2012,p=TRUE){
Continue reading Generating Hurricanes with a Markov Spatial Process
Over the past years, I have been living in different cities, each of them quite different from the others. I have been living in Paris, which is a big city in Europe, with a large suburban area, too (la banlieue).
Then, I’ve been living in Hong Kong, which is a larger city, in Asia.
It was crowded. I mean, it was the feeling I had, while I was living there. And more recently, I’ve been living in Montréal, in North America. Montreal is a large city. Or to be more specific, an island,
The three cities are quite different. Paris: 2.211 million inhabitants and 105.4 km² (a density of 21,057 inhabitants per km²). Montréal: 1.621 million inhabitants and roughly three times the area, 365.1 km² (density 4,441 inhabitants per km²). Hong Kong: 7.234 million inhabitants and again about three times the area, 1,104 km² (density 6,553 inhabitants per km²). In Hong Kong, there are several hills where it is not possible to build anything: on a large part of the island, the density is zero.
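As a quick sanity check on those densities (a small computation using only the figures quoted above), population divided by area gives roughly the numbers mentioned,

pop  = c(Paris = 2.211, Montreal = 1.621, HongKong = 7.234) * 1e6  # inhabitants
area = c(Paris = 105.4, Montreal = 365.1, HongKong = 1104)         # km^2
round(pop / area)   # inhabitants per km^2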
Continue reading Crowded Cities, Paris, Hong Kong and Montréal
I am still working with @3wen on visualizations of the North Pole. So far, it was not that difficult to generate maps, but we started to have problems with the ice region in the Arctic. More precisely, it was complicated to compute the area of this region (even if we can easily get a shapefile). Consider the globe,
worldmap <- ggplot() +
  geom_polygon(data = world.df, aes(x = long, y = lat, group = group)) +
  scale_y_continuous(breaks = (-2:2) * 30) +
  scale_x_continuous(breaks = (-4:4) * 45)
and then, add three points in the northern hemisphere, and plot the associated triangle
P1 <- worldmap + geom_polygon(data = triangle,
    aes(x = long, y = lat, group = group),
    fill = "blue", alpha = 0.6, col = "light blue", size = .8) +
  geom_point(data = triangle,
    aes(x = long, y = lat, group = group), colour = "red") +
for some given projection, e.g.
coord_map("ortho", orientation=c(61, -74, 0))
This can be done with the following function
proj1=function(x=75){
  triangle <- data.frame(
    long=c(-70,-110,-90*(x<90)+90*(x>90)),
    lat=c(60,60,x*(x<90)+(90-(x-90))*(x>90)),
    group=1, region=1)
  worldmap <- ggplot() +
    geom_polygon(data = world.df, aes(x = long, y = lat, group = group)) +
    scale_y_continuous(breaks = (-2:2) * 30) +
    scale_x_continuous(breaks = (-4:4) * 45)
  P1 <- worldmap + geom_polygon(data = triangle,
      aes(x = long, y = lat, group = group),
      fill = "blue", alpha = 0.6, col = "light blue", size = .8) +
    geom_point(data = triangle,
      aes(x = long, y = lat, group = group), colour = "red") +
    coord_map("ortho", orientation=c(61, -74, 0))
  print(P1)
}
or
I am not sure I understand why the projection of the triangle is not convex on the graph above, but let's say it is not a big deal here. Actually, our problem is that our interest is in regions (polygons, from a geometrical point of view) that do contain the North Pole. And here, it starts to get messy. I can easily move the upper point to the other side of the globe, but the polygon is not correct,
I do understand that this is a non-trivial problem, but it means that computing the area of a polygon (a region) that contains the North Pole should not be that simple. Which is exactly what we did observe in our computations. And I believe that one heuristic interpretation is related to the following graph
My skills in geometry are extremely poor, so do not expect me to go through the code of the function that computes the area of a polygon! Actually, my idea is the following: if the problem is that the North Pole is inside the region, let us apply a rotation to shift the North Pole onto the Equator. The code below takes latitudes and longitudes and returns new latitudes and longitudes, after a rotation around the y-axis (the North Pole moves down along the Greenwich meridian):
rotation=function(Z,theta){
  lon=Z[,1]/180*pi; lat=Z[,2]/180*pi
  x=cos(lon)*cos(lat)
  y=sin(lon)*cos(lat)
  z=sin(lat)
  pt1=cbind(x,y,z)
  M=matrix(c(cos(theta),0,-sin(theta),
             0,1,0,
             sin(theta),0,cos(theta)),3,3)
  pt2=t(M%*%t(pt1))
  lat=asin(pt2[,3])*180/pi
  lon=atan2(pt2[,2],pt2[,1])*180/pi
  return(cbind(lon,lat))}
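As a quick check (using only the function above), rotating the North Pole itself by \pi/2 should send it onto the Equator, at (0,0), and the opposite rotation should bring it back,

rotation(rbind(c(0,90)), pi/2)                   # (0, 0): on the Equator
rotation(rotation(rbind(c(0,90)), pi/2), -pi/2)  # back to (0, 90): the North Pole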
With a rotation angle going from 0 (no change) to \pi/2 (the North Pole on the Equator), we get
From now on, it is possible to compute the area of any region containing the North Pole! One simply applies the rotation function to all databases generated from the shapefiles (and then the opposite rotation to get proper locations back)! We can then compute the centroid of the ice region, for example,
r.glace=glace
r.glace[,1:2]=rotation(glace[,1:2],pi/2)
M=matrix(NA,length(unique(glace$id)),3)
j=0
for(i in unique(glace$id)){
  j=j+1
  Polyglace <- as(r.glace[glace$id==i,c("long","lat")],"gpc.poly")
  M[j,1]=area.poly(Polyglace)
  M[j,2:3]=centroid(r.glace[r.glace$id==i,c("long","lat")])
}
Z=c(weighted.mean(M[,2],M[,1]),weighted.mean(M[,3],M[,1]))
rotation(rbind(Z),-pi/2)[1,]
And we get
and below, we can visualize all the locations of the centroid of the ice region in the past 25 years