Tag Archives: Ripley

Kernel Density Estimation with Ripley’s Circumferential Correction

The revised version of the paper Kernel Density Estimation with Ripley’s Circumferential Correction is now online, on hal.archives-ouvertes.fr/.

In this paper, we investigate (and extend) Ripley’s circumference method to correct the bias of density estimation at the edges (or frontiers) of regions. The idea of the method was theoretical and difficult to implement. We provide a simple technique, based on properties of Gaussian kernels, to efficiently compute weights that correct the border bias on the frontiers of the region of interest, with an automatic selection of an optimal radius for the method. We illustrate the use of that technique to visualize hot spots of car accidents and campsite locations, as well as locations of bike thefts.
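
In a nutshell, and with notation of my own (the paper states the estimator precisely, together with the automatic choice of the radius r), each observation near the border is reweighted by the fraction of a small disc around it that lies inside the region A,

\omega_i = \frac{\mathrm{area}\,(A \cap D(x_i,r))}{\mathrm{area}\,(D(x_i,r))},
\qquad
\hat{f}_c(x) \propto \sum_{i=1}^n \frac{1}{\omega_i}\, K_H(x - x_i), \quad x \in A,

where D(x_i, r) denotes the disc of radius r centred at observation x_i, and K_H the (Gaussian) kernel with bandwidth matrix H.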

There are new applications, and new graphs, too.

Most of the code can be found at https://github.com/ripleyCorr/Kernel_density_ripley (as well as the datasets).

Visualizing densities of spatial processes

We recently uploaded to http://hal.archives-ouvertes.fr/hal-00725090 a revised version of our work with Ewen Gallic (a.k.a. @3wen), Visualizing spatial processes using Ripley’s correction: an application to bodily-injury car accident location.

In this paper, we investigate (and extend) Ripley’s circumference method to correct the bias of density estimation at the edges (or frontiers) of regions. The idea of the method was theoretical and difficult to implement. We provide a simple technique, based on properties of Gaussian kernels, to efficiently compute weights that correct the border bias on the frontiers of the region of interest, with an automatic selection of an optimal radius for the method. An illustration on the locations of bodily-injury car accidents (and hot spots) in the western part of France is discussed, where many accidents occur close to large cities, next to the sea.

Sketches of the R code can be found in the paper, to produce maps and to describe the impact of our boundary correction. For instance, in Finistère, the distribution of car accidents is the following (with a standard kernel on the left, and with the correction on the right), with 186 claims (involving bodily injury)

and in Morbihan with 180 claims, observed in a specific year (2008 as far as I remember),

The code is the same as the one mentioned last year, except perhaps the plotting functions. First, one needs to define a color scale and the associated breaks

breaks <- seq(min(result$ZNA, na.rm = TRUE) * 0.95,
  max(result$ZNA, na.rm = TRUE) * 1.05, length = 21)
col <- rev(heat.colors(20))

to finally plot the estimation

image.plot(result$X, result$Y, result$ZNA,
  xlim = range(pol[, 1]), ylim = range(pol[, 2]),
  breaks = breaks, col = col,
  xlab = "", ylab = "", xaxt = "n", yaxt = "n", bty = "n",
  zlim = range(breaks), horizontal = TRUE)

It is possible to add a contour, the observations, and the border of the polygon

contour(result$X, result$Y, result$ZNA, add = TRUE, col = "grey")
points(X[, 1], X[, 2], pch = 19, cex = 0.2, col = "dodgerblue")
polygon(pol, lwd = 2)

Now, if one wants to improve the aesthetics of the map by adding a Google Maps base map, the first thing to do, after loading the ggmap package, is to get the base map

theMap <- get_map(location = c(left = min(pol[, 1]), bottom = min(pol[, 2]),
  right = max(pol[, 1]), top = max(pol[, 2])),
  source = "google", messaging = FALSE, color = "bw")

Of course, data need to be put in the right format

library(reshape2) # melt() comes from the reshape2 package
getMelt <- function(smoothed){
  res <- melt(smoothed$ZNA)
  res[, 1] <- smoothed$X[res[, 1]]
  res[, 2] <- smoothed$Y[res[, 2]]
  names(res) <- list("X", "Y", "ZNA")
  return(res)
}
smCont <- getMelt(result)

Breaks and labels should be prepared

theLabels <- round(breaks, 2)
indLabels <- floor(seq(1, length(theLabels), length.out = 5))
indLabels[length(indLabels)] <- length(theLabels)
theLabels <- as.character(theLabels[indLabels])
theLabels[theLabels == "0"] <- "0.00"

Now, the map can be built

P <- ggmap(theMap)
P <- P + geom_point(aes(x = X, y = Y, col = ZNA), alpha = .3,
  data = smCont[!is.na(smCont$ZNA), ], na.rm = TRUE)

It is possible to add a contour

P <- P + geom_contour(data = smCont[!is.na(smCont$ZNA), ],
  aes(x = X, y = Y, z = ZNA), alpha = 0.5, colour = "white")

and colors need to be updated

P <- P + scale_colour_gradient(name = "", low = "yellow", high = "red",
  breaks = breaks[indLabels], limits = range(breaks),
  labels = theLabels)

To remove the axis legends and labels, the theme should be updated

P <- P + theme(panel.grid.major = element_line(colour = NA),
  panel.grid.minor = element_line(colour = NA),
  panel.background = element_rect(fill = NA, colour = NA),
  axis.text.x = element_blank(), axis.text.y = element_blank(),
  axis.ticks.x = element_blank(), axis.ticks.y = element_blank(),
  axis.title = element_blank(), rect = element_blank())

The final step is to draw the border of the polygon

polDF <- data.frame(pol)
colnames(polDF) <- list("lon", "lat")
(P <- P + geom_polygon(data = polDF, mapping = aes(x = lon, y = lat),
  colour = "black", fill = NA))

Then, we’ve applied that methodology to estimate the road network density in those two regions, in order to understand whether a high accident intensity means that an area is dangerous, or whether it simply reflects heavy traffic (more traffic, more accidents),

We have been using the dataset obtained from the Geofabrik website, which provides OpenStreetMap data. Each observation is a section of a road, and contains a few points, identified by their geographical coordinates, that allow us to draw lines. We have used those points to estimate a proxy of road intensity, with weights ranging from 10 (highways) to 1 (service roads).

splitroad <- function(listroad, h = 0.0025){
  # sketch only: the full version, which interpolates points along each
  # road section, is detailed on Ewen's blog (link below)
  pts <- NULL
  weights <- types.weights[match(unique(listroad$type),
    types.weights$type), "weight"]
  for(i in 1:(length(listroad) - 1)){
    d <- diag(as.matrix(dist(listroad[[i]]))[, 2:nrow(listroad[[i]])])
  }
  return(pts)
}
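
Note that this sketch relies on a types.weights lookup table that is not reproduced here. A hypothetical version, consistent with the weights described above (10 for highways down to 1 for service roads; the intermediate road types and values are only a guess), could look like

# hypothetical lookup table: OpenStreetMap road type -> intensity weight
types.weights <- data.frame(
  type   = c("motorway", "primary", "secondary", "residential", "service"),
  weight = c(10, 7, 5, 2, 1))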

See Ewen’s blog for more details on the code, http://editerna.free.fr/blog/…. Note that Ewen did publish a poster of the paper (in French) for the http://r2013-lyon.sciencesconf.org/ conference, which will be organized in Lyon on June 27th-28th.

All comments are welcome…

Border bias and weighted kernels

With Ewen (aka @3wen), not only have we been playing on Twitter this month, we have also been working on kernel estimation for densities of spatial processes. Actually, it is only a part of what he has been working on, but that part on kernel estimation has been the opportunity to write a short paper, which can now be downloaded on hal.

The problem with kernels is that kernel density estimators suffer from a strong bias at borders. And with geographic data, it is not uncommon to have observations very close to the border (a frontier, or the ocean). With standard kernels, some weight is allocated outside the area: the density does not sum to one. And we should not look for a global correction, but for a local one. So we should use weighted kernel estimators (see the paper on hal for more details). The problem is that the weights can be difficult to derive when the shape of the support is a strange polygon. The idea is to use a property of product Gaussian kernels (with identical bandwidths): with the interpretation of having noisy observations, we can use the property of circular isodensity curves. And this can be related to Ripley’s (1977) circumferential correction. And the good point is that, with R, it is extremely simple to get the area of the intersection of two polygons. But we need to load some R packages first,

require(maps)
require(sp)
require(snow)
require(ellipse)
require(ks)
require(gpclib)
require(rgeos)
require(fields)
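
To see how simple that intersection-area computation is, here is a toy check with two overlapping unit squares (not taken from the paper; the coercion of a two-column matrix to a gpc.poly is the same one used below),

# two unit squares, shifted by 0.5: the overlap has area 0.5
sq1 <- as(cbind(c(0, 1, 1, 0), c(0, 0, 1, 1)), "gpc.poly")
sq2 <- as(cbind(c(.5, 1.5, 1.5, .5), c(0, 0, 1, 1)), "gpc.poly")
area.poly(intersect(sq1, sq2))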

To be more clear, let us illustrate that technique on a nice example. For instance, consider some bodily-injury car accidents in France, in 2008 (that I cannot upload, but I can upload a random sample),

base_cara=read.table(
"http://freakonometrics.blog.free.fr/public/base_fin_morb.txt",
sep=";",header=TRUE)
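
A quick check that the sample loaded as expected (the long and lat columns are the ones used below; the other columns depend on the extract),

dim(base_cara)
head(base_cara[, c("long", "lat")])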

The border of the support of our distribution of car accidents will be the contour of the Finistère département, which can be found in standard packages

geoloc=read.csv(
"http://freakonometrics.free.fr/popfr19752010.csv",
header=TRUE,sep=",",comment.char="",check.names=FALSE,
colClasses=c(rep("character",5),rep("numeric",38)))
geoloc=geoloc[,c("dep","com","com_nom",
"long","lat","pop_2008")]
geoloc$id=paste(sprintf("%02s",geoloc$dep),
sprintf("%03s",geoloc$com),sep="")
geoloc=geoloc[,c("com_nom","long","lat","pop_2008")]
head(geoloc)
france=map('france',namesonly=TRUE,
plot=FALSE)
francemap=map('france', fill=TRUE, col="transparent",
plot=FALSE)
detpartement_bzh=france[which(france%in%
c("Finistere","Morbihan","Ille-et-Vilaine",
"Cotes-Darmor"))]
bretagne=map('france',regions=detpartement_bzh,
fill=TRUE, col="transparent", plot=FALSE,exact=TRUE)
finistere=cbind(bretagne$x[321:678],bretagne$y[321:678])
FINISTERE=map('france',regions="Finistere", fill=TRUE,
col="transparent", plot=FALSE,exact=TRUE)
monFINISTERE=cbind(FINISTERE$x[c(8:414)],FINISTERE$y[c(8:414)])
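
A quick visual check of the extracted contour (not in the original post),

plot(monFINISTERE, type = "l", lwd = 2, asp = 1,
  xlab = "longitude", ylab = "latitude")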

Now we need simple functions,

cercle=function(n=200,centre=c(0,0),rayon)
{theta=seq(0,2*pi,length=n)
m=cbind(cos(theta),sin(theta))*rayon
m[,1]=m[,1]+centre[1]
m[,2]=m[,2]+centre[2]
colnames(m)=c("x","y")
return(m)}
# weight of a point x: fraction of the disc of radius 5/pi*h centred
# at x that lies inside the polygon POL
poids=function(x,h,POL)
{leCercle=cercle(centre=x,rayon=5/pi*h)
POLcercle=as(leCercle, "gpc.poly")
return(area.poly(intersect(POL,POLcercle))/
area.poly(POLcercle))}
# the definition of lissage continues in the snippets below
lissage = function(U,polygone,optimal=TRUE,h=.1)
{n=nrow(U)
IND=which(is.na(U[,1])==FALSE)
U=U[IND,]
if(optimal==TRUE) {H=Hpi(U,binned=FALSE);
H=matrix(c(sqrt(H[1,1]*H[2,2]),0,0,
sqrt(H[1,1]*H[2,2])),2,2)}
if(optimal==FALSE){H= matrix(c(h,0,0,h),2,2)}

before defining our weights (we are still inside the lissage function).

poidsU=function(i,U,h,POL)
{x=U[i,]
poids(x,h,POL)}
OMEGA=parLapply(cl,1:n,poidsU,U=U,h=sqrt(H[1,1]),
POL=as(polygone, "gpc.poly"))
OMEGA=do.call("c",OMEGA)
stopCluster(cl)
}else
{OMEGA=lapply(1:n,poidsU,U=U,h=sqrt(H[1,1]),
POL=as(polygone, "gpc.poly"))
OMEGA=do.call("c",OMEGA)}

Note that it is possible to parallelize the computation of those weights if there are a lot of observations; the cluster cl used by parLapply above is created with the following block, which goes just before that call,

if(n>=500)
{cl <- makeCluster(4,type="SOCK")
worker.init <- function(packages)
{for(p in packages){library(p, character.only=T)}
NULL}
clusterCall(cl, worker.init, c("gpclib","sp"))
clusterExport(cl,c("cercle","poids"))

Then, we can use standard bivariate kernel smoothing functions, but with the weights we just calculated, using a simple technique that can be related to one suggested in Ripley (1977),

fhat=kde(U,H,w=1/OMEGA,xmin=c(min(polygone[,1]),
min(polygone[,2])),xmax=c(max(polygone[,1]),
max(polygone[,2])))
fhat$estimate=fhat$estimate*sum(1/OMEGA)/n
vx=unlist(fhat$eval.points[1])
vy=unlist(fhat$eval.points[2])
VX = cbind(rep(vx,each=length(vy)))
VY = cbind(rep(vy,length(vx)))
VXY=cbind(VX,VY)
Ind=matrix(point.in.polygon(VX,VY, polygone[,1],
polygone[,2]),length(vy),length(vx))
f0=fhat
f0$estimate[t(Ind)==0]=NA
return(list(
X=fhat$eval.points[[1]],
Y=fhat$eval.points[[2]],
Z=fhat$estimate,
ZNA=f0$estimate,
H=fhat$H,
W=fhat$W))}
lissage_without_c = function(U,polygone,optimal=TRUE,h=.1)
{n=nrow(U)
IND=which(is.na(U[,1])==FALSE)
U=U[IND,]
if(optimal==TRUE) {H=Hpi(U,binned=FALSE);
H=matrix(c(sqrt(H[1,1]*H[2,2]),0,0,sqrt(H[1,1]*H[2,2])),2,2)}
if(optimal==FALSE){H= matrix(c(h,0,0,h),2,2)}
fhat=kde(U,H,xmin=c(min(polygone[,1]),
min(polygone[,2])),xmax=c(max(polygone[,1]),
max(polygone[,2])))
vx=unlist(fhat$eval.points[1])
vy=unlist(fhat$eval.points[2])
VX = cbind(rep(vx,each=length(vy)))
VY = cbind(rep(vy,length(vx)))
VXY=cbind(VX,VY)
Ind=matrix(point.in.polygon(VX,VY, polygone[,1],
polygone[,2]),length(vy),length(vx))
f0=fhat
f0$estimate[t(Ind)==0]=NA
return(list(
X=fhat$eval.points[[1]],
Y=fhat$eval.points[[2]],
Z=fhat$estimate,
ZNA=f0$estimate,
H=fhat$H,
W=fhat$W))}

So, now we can play with those functions,

base_cara_FINISTERE=base_cara[which(point.in.polygon(
base_cara$long,base_cara$lat,monFINISTERE[,1],
monFINISTERE[,2])==1),]
coord=cbind(as.numeric(base_cara_FINISTERE$long),
as.numeric(base_cara_FINISTERE$lat))
nrow(coord)
map(francemap)
lissage_FIN_withoutc=lissage_without_c(coord,
monFINISTERE,optimal=TRUE)
lissage_FIN=lissage(coord,monFINISTERE,
optimal=TRUE)
lesBreaks_sans_pop=range(c(
range(lissage_FIN_withoutc$Z),
range(lissage_FIN$Z)))
lesBreaks_sans_pop=seq(min(lesBreaks_sans_pop)*.95,
max(lesBreaks_sans_pop)*1.05,length=21)

plot_article=function(lissage,breaks,
polygone,coord){
par(mar=c(3,1,3,1))
image.plot(lissage$X,lissage$Y,(lissage$ZNA),
xlim=range(polygone[,1]),ylim=range(polygone[,2]),
breaks=breaks, col=rev(heat.colors(20)),xlab="",
ylab="",xaxt="n",yaxt="n",bty="n",zlim=range(breaks),
horizontal=TRUE)
contour(lissage$X,lissage$Y,lissage$ZNA,add=TRUE,
col="grey")
points(coord[,1],coord[,2],pch=19,cex=.1,
col="dodgerblue")
polygon(polygone,lwd=2)}

plot_article(lissage_FIN_withoutc,breaks=
lesBreaks_sans_pop,polygone=monFINISTERE,
coord=coord)

plot_article(lissage_FIN,breaks=
lesBreaks_sans_pop,polygone=monFINISTERE,
coord=coord)

If we look at the graphs, we have the following densities of car accidents, with a standard kernel on the left, and our proposal on the right (with local weight adjustment when the estimation is done next to the border of the region of interest),

Similarly, in Morbihan,

With those modified kernels, hot spots appear much more clearly. For more details, the paper is online on hal.