Tag Archives: ggmap

Car and Two-Wheeler Accidents in Paris

A few months ago, I started going back over some of the code I had seen in the R projects I had assigned for the Data Science pour l'Actuariat training program. I had begun with a post on mapping the Brexit vote (which was picked up, and improved, on the site of our neighbours at rgeomatic). Today, while tidying up my computer, I came across some code by Quentin on road accidents in Paris.

As always, we start by loading a few packages

require(downloader)
require(ggmap)
require(stringr)
options(encoding = "UTF-8")

and we grab the data, available online at https://opendata.paris.fr

path="https://opendata.paris.fr/explore/dataset/accidentologie/download/?format=csv&timezone=Europe/Berlin&use_labels_for_header=true"
dest="test"
download(path,destfile=dest,mode="wb")
loc=paste(getwd(),"/",dest,sep="" )
obj <- read.csv(file = loc, stringsAsFactors = FALSE, sep = ";")
file.remove(dest)

We can start by looking at when the accidents took place, on which day and at what time. We first aggregate the data

t=data.frame(obj,counter=1)
parJour=aggregate(t$counter, by=list(format(as.Date(t$Date),"%A")), sum)

and build a table that contains all the information, putting the days of the week back in Monday-to-Sunday order (aggregate returns them in alphabetical order)

vec_dat=data.frame();
vec_dat[ (1:3),1]=parJour$Group.1[3:5]
vec_dat[ (1:3),2]=parJour$x[3:5]
vec_dat[ 4,1]=parJour$Group.1[2]
vec_dat[ 4,2]=parJour$x[2]
vec_dat[ 5,1]=parJour$Group.1[7]
vec_dat[ 5,2]=parJour$x[7]
vec_dat[ 6,1]=parJour$Group.1[6]
vec_dat[ 6,2]=parJour$x[6]
vec_dat[ 7,1]=parJour$Group.1[1]
vec_dat[ 7,2]=parJour$x[1]

Let us add column names, to make things clearer

names(vec_dat)[1]="Jour";names(vec_dat)[2]="Nbsinistre"
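
The block above simply reorders the days by hand. For what it is worth, here is a more compact sketch doing the same thing with a factor (assuming format(..., "%A") returned French, lowercase day names, which depends on the locale):

# same reordering, using a factor with the days in Monday-to-Sunday order
# (assumes French day names, e.g. "lundi", as returned in a French locale)
jours <- c("lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi", "dimanche")
parJour$Group.1 <- factor(parJour$Group.1, levels = jours)
vec_dat <- parJour[order(parJour$Group.1), ]
names(vec_dat) <- c("Jour", "Nbsinistre")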

We can optionally choose a colour palette

Palette<-colorRampPalette(c("lightblue","darkblue"))

and draw a barplot of the number of accidents per day of the week, in Paris

barplot(vec_dat$Nbsinistre,names.arg=vec_dat$Jour,main="Répartition du nombre d'accidents par jour de la semaine ", col=Palette(10), xlab="jour de la semaine", ylab="nombre d'accidents")

We can then look at the hour of the day (raw counts, without any smoothing)

parHeure=aggregate(t$counter, by=list(format(strptime(t$Heure,format="%H:%M"),"%H")), sum)
plot(parHeure,type="l", col=c("blue"), main=" Répartition du nombre d'accidents par heure de la journée", xlab="heure de la journee", ylab="nombre d'accidents")
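
Since these are raw hourly counts, one can superimpose a crude smoother on top of them; a minimal sketch (converting the hour labels to numbers first, and using lowess, which is only one of many possible smoothers):

# overlay a rough lowess smoother on the raw hourly counts (illustration only)
heures <- as.numeric(parHeure$Group.1)
plot(heures, parHeure$x, type = "l", col = "blue",
     xlab = "heure de la journee", ylab = "nombre d'accidents")
lines(lowess(heures, parHeure$x, f = 1/4), col = "red", lwd = 2)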

Let us now move on to geolocation

t2 = subset(t, Coordonnées!= "")
coord=data.frame(t(matrix(as.numeric(unlist(strsplit(t2$Coordonnées,"[,]"))),nrow=2)))
t2$lat=coord[,1]
t2$lon=coord[,2]
t2 = subset(t2, (48.81 <= lat) & (lat <= 48.90) & (2.25 <= lon) & (lon <= 2.43))

We can look at a map of where the accidents took place

map<-get_map(location = "Paris", zoom=12, maptype="roadmap", color="bw")
vis <- ggmap(map) + geom_point(data = t2, aes(x=lon, y=lat, color=1)) + theme( legend.position="none") + labs(title="Cartographie des accidents à Paris")
vis

We can try a density estimate (a raw one, not corrected for road density or population density)

overlay<-stat_density2d(data=t2, aes(x=lon, y=lat, fill=..level.., alpha=..level..), contour=T, n=100, geom="polygon")
densi<- ggmap(map) + overlay + scale_fill_gradient("Accident Density") +scale_alpha(range=c(0.4,0.75),guide=FALSE) + guides(fill=guide_colorbar(barwidth=1.5, barheight = 10))
densi = densi + labs(title="Densite des accidents à Paris")
densi

which is also not corrected here for edge effects (giving the illusion that there are more accidents intra-muros than on the boulevard périphérique, for instance). We can also condition on explanatory variables, such as the type of vehicle involved, or the severity of the accident

densi_byGravUs<- densi + facet_wrap(~ Usager.1.Grav, nrow=2)
densi_byGravUs = densi_byGravUs + labs(title="Densité des accidents à Paris par gravité d'accident")
densi_byGravUs
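
The same kind of faceting can be used with the type of vehicle involved; a sketch, assuming a column such as Usager.1.Catégorie is available (the exact column name should be checked against the csv file):

# facet the density by vehicle category (the column name is an assumption)
densi_byVehic <- densi + facet_wrap(~ Usager.1.Catégorie, nrow = 2)
densi_byVehic + labs(title = "Densité des accidents à Paris par type de véhicule")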

In short, a lot can easily be done with these data.

John Snow, and Google Maps

In my previous post, I discussed how to use OpenStreetMap (and the standard plotting functions of R) to visualize John Snow's dataset. But it is also possible to use Google Maps (and ggplot2-type graphs).

library(ggmap)
get_london <- get_map(c(-.137,51.513), zoom=17)
london <- ggmap(get_london)

Again, the tricky part is that the coordinate reference system used here is not the same as the one used on Robin Wilson's blog.

> library(maptools)
> setwd("/cholera/")
> deaths <- readShapePoints("Cholera_Deaths")
> head(deaths@coords)
coords.x1 coords.x2
0  529308.7  181031.4
1  529312.2  181025.2
2  529314.4  181020.3
3  529317.4  181014.3
4  529320.7  181007.9
5  529336.7  181006.0
> X <- deaths@coords

or, use directly X_deaths.RData. So now, we have to change the coordinate system

df_deaths <- data.frame(X)
library(sp)
library(rgdal)
coordinates(df_deaths)=~coords.x1+coords.x2
proj4string(df_deaths)=CRS("+init=epsg:27700") 
df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))
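
For what it is worth, the same reprojection can also be written with the sf package (a sketch, starting from the same X matrix of coordinates):

library(sf)
# build an sf object in the British National Grid (EPSG:27700)...
deaths_sf <- st_as_sf(data.frame(X), coords = c("coords.x1", "coords.x2"), crs = 27700)
# ...and reproject it to WGS84 longitude/latitude, the system used by Google Maps
deaths_sf <- st_transform(deaths_sf, crs = 4326)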

Here, we have the same coordinate system as the one used in Google Maps. Now, we can add a layer, with the points,

london + geom_point(aes(x=coords.x1, y=coords.x2),data=data.frame(df_deaths@coords),col="red")

Again, it is possible to add the density, as an additional layer,

london + geom_point(aes(x=coords.x1, y=coords.x2), 
data=data.frame(df_deaths@coords),col="red")+
geom_density2d(data = data.frame(df_deaths@coords), 
aes(x = coords.x1, y=coords.x2), size = 0.3) + 
stat_density2d(data = data.frame(df_deaths@coords), 
aes(x = coords.x1, y=coords.x2,fill = ..level.., alpha = ..level..),size = 0.01, bins = 16, geom = "polygon") + scale_fill_gradient(low = "green", high = "red",guide = FALSE) + 
scale_alpha(range = c(0, 0.3), guide = FALSE)

 

Visualizing densities of spatial processes

We recently uploaded on http://hal.archives-ouvertes.fr/hal-00725090 a revised version of our work, with Ewen Gallic (a.k.a. @3wen) on Visualizing spatial processes using Ripley’s correction: an application to bodily-injury car accident location

In this paper, we investigate (and extend) Ripley's circumference method to correct the bias of density estimation at the edges (or frontiers) of regions. The idea of the method was theoretical and difficult to implement. We provide a simple technique, based on properties of Gaussian kernels, to compute efficiently the weights that correct the border bias on the frontiers of the region of interest, with an automatic selection of an optimal radius for the method. An illustration on the location of bodily-injury car accidents (and hot spots) in the western part of France is discussed, where a lot of accidents occur close to large cities, next to the sea.
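
To give the flavour of the correction, here is a minimal sketch of the idea (not the code from the paper): with a Gaussian kernel of bandwidth h, the weight at a given location is the inverse of the kernel mass that remains inside the region, which can be approximated, for instance, by Monte Carlo,

# illustration only: the border-correction weight at (x0, y0) is one over the
# Gaussian kernel mass falling inside the polygon 'pol' (two columns: x and y)
library(sp)   # for point.in.polygon()
border_weight <- function(x0, y0, h, pol, n = 1e4){
  u <- rnorm(n, mean = x0, sd = h)
  v <- rnorm(n, mean = y0, sd = h)
  inside <- point.in.polygon(u, v, pol[, 1], pol[, 2]) > 0
  1 / mean(inside)
}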

Sketches of the R code can be found in the paper, to produce the maps and to describe the impact of our boundary correction. For instance, in Finistère, the distribution of car accidents is the following (with a standard kernel on the left, and with the correction on the right), with 186 claims (involving bodily injury)

and in Morbihan with 180 claims, observed in a specific year (2008 as far as I remember),

The code is the same as the one mentioned last year, except perhaps for the plotting functions. First, one needs to define a color scale and the associated breaks

breaks <- seq(min(result$ZNA, na.rm = TRUE) * 0.95, max(result$ZNA, na.rm = TRUE) * 1.05, length = 21)
col <- rev(heat.colors(20))

to finally plot the estimation

library(fields)   # for image.plot()
image.plot(result$X, result$Y, result$ZNA, xlim = range(pol[, 1]),
  ylim = range(pol[, 2]), breaks = breaks, col = col,
  xlab = "", ylab = "", xaxt = "n", yaxt = "n", bty = "n",
  zlim = range(breaks), horizontal = TRUE)

It is possible to add a contour, the observations, and the border of the polygon

contour(result$X, result$Y, result$ZNA, add = TRUE, col = "grey")
points(X[, 1], X[, 2], pch = 19, cex = 0.2, col = "dodgerblue")
polygon(pol, lwd = 2)

Now, if one wants to improve the aesthetics of the map by adding a Google Maps base map, the first thing to do, after loading the ggmap package, is to get the base map

theMap <- get_map(location = c(left = min(pol[, 1]), bottom = min(pol[, 2]),
  right = max(pol[, 1]), top = max(pol[, 2])),
  source = "google", messaging = FALSE, color = "bw")

Of course, data need to be put in the right format

library(reshape2)   # for melt()
getMelt <- function(smoothed){
  res <- melt(smoothed$ZNA)
  res[, 1] <- smoothed$X[res[, 1]]
  res[, 2] <- smoothed$Y[res[, 2]]
  names(res) <- list("X", "Y", "ZNA")
  return(res)
}
smCont <- getMelt(result)

Breaks and labels should be prepared

theLabels <- round(breaks, 2)
indLabels <- floor(seq(1, length(theLabels), length.out = 5))
indLabels[length(indLabels)] <- length(theLabels)
theLabels <- as.character(theLabels[indLabels])
theLabels[theLabels == "0"] <- "0.00"

Now, the map can be built

P <- ggmap(theMap)
P <- P + geom_point(aes(x = X, y = Y, col = ZNA), alpha = .3,
  data = smCont[!is.na(smCont$ZNA), ], na.rm = TRUE)

It is possible to add a contour

P <- P + geom_contour(data = smCont[!is.na(smCont$ZNA), ],
  aes(x = X, y = Y, z = ZNA), alpha = 0.5, colour = "white")

and colors need to be updated

P <- P + scale_colour_gradient(name = "", low = "yellow", high = "red",
  breaks = breaks[indLabels], limits = range(breaks), labels = theLabels)

To remove the axis legends and labels, the theme should be updated

P <- P + theme(panel.grid.major = element_line(colour = NA),
  panel.grid.minor = element_line(colour = NA),
  panel.background = element_rect(fill = NA, colour = NA),
  axis.text.x = element_blank(), axis.text.y = element_blank(),
  axis.ticks.x = element_blank(), axis.ticks.y = element_blank(),
  axis.title = element_blank(), rect = element_blank())

The final step, in order to draw the border of the polygon

polDF <- data.frame(pol)
colnames(polDF) <- list("lon", "lat")
(P <- P + geom_polygon(data = polDF, mapping = aes(x = lon, y = lat), colour = "black", fill = NA))

Then, we applied that methodology to estimate the road network density in those two regions, in order to understand whether a high intensity means that an area is dangerous, or whether it is simply because there is a lot of traffic (more traffic, more accidents),

We have been using the dataset obtained from the Geofabrik website, which provides OpenStreetMap data. Each observation is a section of a road, and contains a few points, identified by their geographical coordinates, that allow us to draw lines. We have used those points to estimate a proxy of road intensity, with weights going from 10 (highways) to 1 (service roads).
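
The splitroad function below relies on a small lookup table, types.weights, mapping each road type to its weight; a sketch of what such a table could look like (the types and values here are illustrative, not necessarily the ones used in the paper):

# hypothetical weight table: OpenStreetMap road type -> weight
types.weights <- data.frame(
  type   = c("motorway", "trunk", "primary", "secondary", "tertiary", "residential", "service"),
  weight = c(10, 8, 6, 5, 4, 2, 1)
)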

splitroad <- function(listroad, h = 0.0025){
  pts = NULL
  weights <- types.weights[match(unique(listroad$type), types.weights$type), "weight"]
  for (i in 1:(length(listroad) - 1)){
    d = diag(as.matrix(dist(listroad[[i]]))[, 2:nrow(listroad[[i]])])
    # (abridged: the part of the loop that fills 'pts' is omitted here,
    #  see Ewen's blog for the complete function)
  }
  return(pts)
}

See Ewen's blog for more details on the code, http://editerna.free.fr/blog/…. Note that Ewen published a poster about the paper (in French) for the http://r2013-lyon.sciencesconf.org/ conference, which will be held in Lyon on June 27th-28th.

All comments are welcome…