Border bias and weighted kernels

With Ewen (aka @3wen), not only have we been playing on Twitter this month, we have also been working on kernel estimation for densities of spatial processes. Actually, it is only a part of what he was working on, but that part on kernel estimation has been the opportunity to write a short paper, which can now be downloaded on HAL.

The problem with kernels is that kernel density estimators suffer from a strong bias on borders. And with geographic data, it is not uncommon to have observations very close to the border (a frontier, or the ocean). With standard kernels, some weight is allocated outside the area, so the density does not sum to one over the region of interest. And we should not look for a global correction, but for a local one. So we should use weighted kernel estimators (see the paper on HAL for more details). The problem is that the weights can be difficult to derive when the support is a complicated polygon. The idea is to use a property of product Gaussian kernels (with identical bandwidths): interpreting each observation as a noisy measurement, the isodensity curves of the kernel are circles. So the weight of an observation can be taken as the fraction of the area of a circle, centred on that observation, that lies inside the region, which can be related to the circumferential correction of Ripley (1977). And the good point is that, with R, it is extremely simple to get the area of the intersection of two polygons. But we need to load some R packages first,

require(maps)
require(sp)
require(snow)
require(ellipse)
require(ks)
require(gpclib)
require(rgeos)
require(fields)
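
To get an idea of how much weight can leak outside, consider a minimal sketch (with made-up numbers): a product Gaussian kernel centred at a point close to the frontier of the unit square,

x0=c(.05,.5)
h=.1
(pnorm(1,x0[1],h)-pnorm(0,x0[1],h))*
(pnorm(1,x0[2],h)-pnorm(0,x0[2],h))
# about .69: almost a third of the mass of
# that kernel falls outside the square

That local loss of mass is precisely what the weights below will correct.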

To make things clearer, let us illustrate that technique on a nice example. For instance, consider bodily injury car accidents in France, in 2008 (I cannot upload the full dataset, but I can upload a random sample),

base_cara=read.table(
"http://freakonometrics.blog.free.fr/public/base_fin_morb.txt",
sep=";",header=TRUE)

The border of the support of our distribution of car accidents will be the contour of the Finistère département, which can be found in standard packages,

geoloc=read.csv(
"http://freakonometrics.free.fr/popfr19752010.csv",
header=TRUE,sep=",",comment.char="",check.names=FALSE,
colClasses=c(rep("character",5),rep("numeric",38)))
geoloc=geoloc[,c("dep","com","com_nom",
"long","lat","pop_2008")]
geoloc$id=paste(sprintf("%02s",geoloc$dep),
sprintf("%03s",geoloc$com),sep="")
geoloc=geoloc[,c("com_nom","long","lat","pop_2008")]
head(geoloc)
france=map('france',namesonly=TRUE,
plot=FALSE)
francemap=map('france', fill=TRUE, col="transparent",
plot=FALSE)
departement_bzh=france[which(france%in%
c("Finistere","Morbihan","Ille-et-Vilaine",
"Cotes-Darmor"))]
bretagne=map('france',regions=departement_bzh,
fill=TRUE, col="transparent", plot=FALSE,exact=TRUE)
finistere=cbind(bretagne$x[321:678],bretagne$y[321:678])
FINISTERE=map('france',regions="Finistere", fill=TRUE,
col="transparent", plot=FALSE,exact=TRUE)
monFINISTERE=cbind(FINISTERE$x[c(8:414)],FINISTERE$y[c(8:414)])
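
The indices used above (321:678, and 8:414) are tied to the version of the polygon database shipped with the maps package, so it is worth checking visually that we did extract the contour of Finistère,

plot(monFINISTERE,type="l")
# a quick look at the extracted coastline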

Now we need simple functions,

cercle=function(n=200,centre=c(0,0),rayon)
{# n-point polygon approximating the circle with given centre and radius
theta=seq(0,2*pi,length=n)
m=cbind(cos(theta),sin(theta))*rayon
m[,1]=m[,1]+centre[1]
m[,2]=m[,2]+centre[2]
colnames(m)=c("x","y")
return(m)}
poids=function(x,h,POL)
{# fraction of the area of the circle centred at x
# that lies inside the polygon POL
leCercle=cercle(centre=x,rayon=5/pi*h)
POLcercle=as(leCercle, "gpc.poly")
return(area.poly(intersect(POL,POLcercle))/
area.poly(POLcercle))}
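
As a quick sanity check of those two functions (on a hypothetical unit square, not on our map), a point far from the border should keep all its weight, while a corner point should keep only a quarter of it,

carre=as(cbind(c(0,1,1,0),c(0,0,1,1)),"gpc.poly")
poids(c(.5,.5),h=.1,POL=carre)
# close to 1
poids(c(0,0),h=.1,POL=carre)
# close to .25

The main smoothing function computes those weights for all the observations, and then calls standard (weighted) kernel routines,
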
lissage = function(U,polygone,optimal=TRUE,h=.1)
{# remove observations with missing coordinates
IND=which(is.na(U[,1])==FALSE)
U=U[IND,]
n=nrow(U)
# bandwidth matrix: either a plug-in estimate, forced to be identical
# in both directions (so that isodensity curves of the kernel are
# circles), or the value given by the user
if(optimal==TRUE){H=Hpi(U,binned=FALSE)
H=matrix(c(sqrt(H[1,1]*H[2,2]),0,0,
sqrt(H[1,1]*H[2,2])),2,2)}
if(optimal==FALSE){H=matrix(c(h,0,0,h),2,2)}

before defining our weights: the weight of observation i is simply the poids function, evaluated at that observation,

poidsU=function(i,U,h,POL)
{x=U[i,]
poids(x,h,POL)}

Note that it is possible to parallelize the computation of the weights (here with the snow package) if there are a lot of observations; the cluster has to be set up before the weights are evaluated,

if(n>=500)
{cl <- makeCluster(4,type="SOCK")
worker.init <- function(packages)
{for(p in packages){library(p, character.only=TRUE)}
NULL}
clusterCall(cl, worker.init, c("gpclib","sp"))
clusterExport(cl,c("cercle","poids"))
OMEGA=parLapply(cl,1:n,poidsU,U=U,h=sqrt(H[1,1]),
POL=as(polygone, "gpc.poly"))
OMEGA=do.call("c",OMEGA)
stopCluster(cl)
}else
{OMEGA=lapply(1:n,poidsU,U=U,h=sqrt(H[1,1]),
POL=as(polygone, "gpc.poly"))
OMEGA=do.call("c",OMEGA)}

Then, we can use standard bivariate kernel smoothing functions, but with the weights we have just computed, using a simple technique related to the one suggested in Ripley (1977),

fhat=kde(U,H,w=1/OMEGA,xmin=c(min(polygone[,1]),
min(polygone[,2])),xmax=c(max(polygone[,1]),
max(polygone[,2])))
fhat$estimate=fhat$estimate*sum(1/OMEGA)/n
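# observations close to the border have a small OMEGA (only a small
# fraction of their circle lies inside the polygon), hence a weight
# 1/OMEGA larger than one; the rescaling above adjusts the estimate
# so that it integrates to one over the region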
vx=unlist(fhat$eval.points[1])
vy=unlist(fhat$eval.points[2])
VX = cbind(rep(vx,each=length(vy)))
VY = cbind(rep(vy,length(vx)))
VXY=cbind(VX,VY)
Ind=matrix(point.in.polygon(VX,VY, polygone[,1],
polygone[,2]),length(vy),length(vx))
f0=fhat
f0$estimate[t(Ind)==0]=NA
return(list(
X=fhat$eval.points[[1]],
Y=fhat$eval.points[[2]],
Z=fhat$estimate,
ZNA=f0$estimate,
H=fhat$H,
W=fhat$W))}
For comparison, here is the same function, without the border correction,

lissage_without_c = function(U,polygone,optimal=TRUE,h=.1)
{n=nrow(U)
IND=which(is.na(U[,1])==FALSE)
U=U[IND,]
if(optimal==TRUE) {H=Hpi(U,binned=FALSE);
H=matrix(c(sqrt(H[1,1]*H[2,2]),0,0,sqrt(H[1,1]*H[2,2])),2,2)}
if(optimal==FALSE){H= matrix(c(h,0,0,h),2,2)}
fhat=kde(U,H,xmin=c(min(polygone[,1]),
min(polygone[,2])),xmax=c(max(polygone[,1]),
max(polygone[,2])))
vx=unlist(fhat$eval.points[1])
vy=unlist(fhat$eval.points[2])
VX = cbind(rep(vx,each=length(vy)))
VY = cbind(rep(vy,length(vx)))
VXY=cbind(VX,VY)
Ind=matrix(point.in.polygon(VX,VY, polygone[,1],
polygone[,2]),length(vy),length(vx))
f0=fhat
f0$estimate[t(Ind)==0]=NA
return(list(
X=fhat$eval.points[[1]],
Y=fhat$eval.points[[2]],
Z=fhat$estimate,
ZNA=f0$estimate,
H=fhat$H,
W=fhat$W))}

So, now we can play with those functions,

base_cara_FINISTERE=base_cara[which(point.in.polygon(
base_cara$long,base_cara$lat,monFINISTERE[,1],
monFINISTERE[,2])==1),]
coord=cbind(as.numeric(base_cara_FINISTERE$long),
as.numeric(base_cara_FINISTERE$lat))
nrow(coord)
map(francemap)
lissage_FIN_withoutc=lissage_without_c(coord,
monFINISTERE,optimal=TRUE)
lissage_FIN=lissage(coord,monFINISTERE,
optimal=TRUE)
lesBreaks_sans_pop=range(c(
range(lissage_FIN_withoutc$Z),
range(lissage_FIN$Z)))
lesBreaks_sans_pop=seq(min(lesBreaks_sans_pop)*.95,
max(lesBreaks_sans_pop)*1.05,length=21)

plot_article=function(lissage,breaks,
polygone,coord){
par(mar=c(3,1,3,1))
image.plot(lissage$X,lissage$Y,(lissage$ZNA),
xlim=range(polygone[,1]),ylim=range(polygone[,2]),
breaks=breaks, col=rev(heat.colors(20)),xlab="",
ylab="",xaxt="n",yaxt="n",bty="n",zlim=range(breaks),
horizontal=TRUE)
contour(lissage$X,lissage$Y,lissage$ZNA,add=TRUE,
col="grey")
points(coord[,1],coord[,2],pch=19,cex=.1,
col="dodgerblue")
polygon(polygone,lwd=2)}

plot_article(lissage_FIN_withoutc,breaks=
lesBreaks_sans_pop,polygone=monFINISTERE,
coord=coord)

plot_article(lissage_FIN,breaks=
lesBreaks_sans_pop,polygone=monFINISTERE,
coord=coord)

If we look at the graphs, we have the following densities of car accidents, with a standard kernel on the left, and our proposal on the right (with a local weight adjustment when the estimation is done close to the border of the region of interest),

Similarly, in Morbihan,

With those modified kernels, hot spots appear much more clearly. For more details, the paper is online on HAL.

Is it really too unfair?

Since the beginning of the week, after dropping the kids off at the day camp of the Musée des Beaux Arts, I have been walking to the university. Every time, I tell myself that I could take the bus, but since none ever comes, I start walking, telling myself that if a bus passes, I will take it. And every morning, I get to the office without having been passed by a single bus. And of course, I have crossed plenty of them going the other way…

Is it really too unfair? As a statistician, I would say no, simply because of a selection bias. To formalize things a bit, let us forget about randomness, and assume that buses run in a purely deterministic way. Let us also assume that there are as many buses in one direction as in the other…

I have a distance $d$ to cover; assume the buses are spaced a distance $\delta$ apart, and that they travel at speed $v$. In other words, a bus passes every $\delta/v$ seconds (with the distance and the speed expressed in consistent units).

I walk at speed $v'$, with $v'<v$ (yes, we will assume that I am slower than the bus… which is not necessarily a mild assumption at rush hour, but the problem only makes sense if taking the bus gets me there faster). The time I need if I walk the whole way is $d/v'$. Now, let us count the buses coming the other way. I will cross all those already on my stretch of the route, and there are $d/\delta$ of them. In addition, I will cross all those that will reach the university while I am still walking, but are not there yet, namely $(d/v')/(\delta/v)=dv/(v'\delta)$, i.e. my remaining walking time divided by the time between two buses. That gives a total of $\frac{d}{\delta}\left(1+\frac{v}{v'}\right)$ buses crossed, going the other way.

Now, on my side of the road. I can again count all the buses that will reach the university while I am walking, except that this time I have to remove all those that are already on the route, which I will never see (since, by assumption, they go faster than I do). The number of buses that will overtake me is then $\frac{d}{\delta}\left(\frac{v}{v'}-1\right)$.

If I look at the ratio of the number of buses crossed to the number of buses that overtook me, I get

$$\frac{\frac{d}{\delta}\left(1+\frac{v}{v'}\right)}{\frac{d}{\delta}\left(\frac{v}{v'}-1\right)}=\frac{v+v'}{v-v'}$$

The sharpest readers will have noticed that this ratio is also the speed of the buses that cross me divided by the speed of the buses that overtake me, when I am the reference frame (both speeds being expressed relative to me): $v+v'$ against $v-v'$.

For instance, if I walk half as fast as the bus, three times more buses will pass in the opposite direction. And seven times more if I am just 25% slower than the bus. Which is not absurd, given that I make the trip in 20 minutes, and the bus in 15…
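
Those two numbers are easy to check in R (a minimal sketch, with made-up speeds: buses at 20 km/h, and me walking at 10, then 15, km/h),

v=20
vprime=c(10,15)
(v+vprime)/(v-vprime)
# 3 7: three times, then seven times, more buses in the opposite direction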

How to Talk About Books You Haven’t Read?

This summer, I read with great pleasure a short book (in French), translated as “How to Talk About Books You Haven’t Read?”, by Pierre Bayard, professor of French literature at the University of Paris 8. The book was mentioned on http://www.brainpickings.org/ and on http://www.nytimes.com/ (and can be found in French online). The book is great… and I have identified a lot of things that can be observed if you work as an academic. Not only in literature.

The book starts with Oscar Wilde’s famous quote, “I never read a book I must review; it prejudices you so”. So obviously, there must be good reasons not to read books. But first, Pierre Bayard suggests distinguishing among the books that you “haven’t read”.

The first illustration is based on the librarian in Robert Musil’s Der Mann ohne Eigenschaften. Good librarians do not read! More precisely, “The secret of a good librarian is that he never reads anything more of the literature in his charge than the title and the table of contents. Anyone who lets himself go and starts reading a book is lost as a librarian… He’s bound to lose perspective.” You should not read books if you want to keep a good perspective. Further, “We marched down the ranks in that colossal storehouse of books, and I don’t mind telling you I was not particularly overwhelmed […]. Still, after a while I couldn’t help starting to do some figuring in my head, and I got an unexpected answer. […] I had been thinking that if I read a book a day […] I could claim a certain position in the world of the intellect. […] But what do you suppose that librarian said to me as […] I asked him how many books they had in this crazy library? ‘Three and a half million’ he tells me. It would take me ten thousand years to carry out my plan.” There are so many books that it is clearly impossible to read everything. This was already mentioned in the Bible (Ecclesiastes 12:12): “the writing of many books is endless”. Actually, we have the same problem with academic research: papers are published every day, so, as for Musil’s librarian, there is no way to read all of them. Clearly, if we want an overview of a given scientific field, we just go through titles, and abstracts (if we have time)…

As a second illustration, Pierre Bayard mentions the poet Paul Valéry, who claimed that he remained a poor reader, because he looked in a book only for what might permit or forbid something in his own work (“je demeure peu lecteur, car je ne recherche dans un ouvrage que ce qui peut permettre ou interdire quelque chose à ma propre activité”). Similarly, in academic research, we read mainly to make sure that what we do is new, and has not been done before.

Then Pierre Bayard mentions books that we have merely heard of. To illustrate this point, he uses Umberto Eco’s Il nome della rosa. If you remember the book (or the movie), Baskerville finds the truth not because he has read a copy of the book, but because he has heard of it. And that is clearly enough! “Gradually this second book took shape in my mind as it had to be. I could tell you almost all of it, without reading the pages that were meant to poison me”. And trust me, academics do that all the time! Because there are books you are supposed to mention when you want to publish. But some of them are out of print, cannot be found in your library, and are way too expensive to buy. So you just listen to what people say about them.

So now we know that it is not necessary to have read a book to talk about it. But still, you have to behave properly if you want to remain credible. The example mentioned by Pierre Bayard is the Martins-Dexter mix-up in Graham Greene’s The Third Man: “‘If you want to know, I’ve never heard of him. What did he write?’ He didn’t realize it, but he was making an enormous impression. Only a great writer could have taken so arrogant, so original a stand.” A similar story can be found in David Lodge’s Campus Trilogy, where professors have to confess that they have not read certain books (in the so-called humiliation game), and the head of the literature department admits that he has never read Hamlet. It is rather difficult to admit that you have not read a book.

But similarly, the question “have you read the article?” is usually quite ambiguous. Claiming that you have read it can mean that you have fully understood the paper, its proofs, and its implications. So usually, I do not claim that I have read an article. Perhaps I might say that I found time to go through it. But not much more…

Funnier still is Pierre Siniac’s Ferdinaud Céline, where the author does not recognize the book he thought he wrote. It sounds like a joke, but I have to admit it is a feeling I have also experienced! When people mention your work, or talk about it, but not the way you expected. Like when you develop a nice theory, and propose an application at the end, but people only remember the 12% difference with the benchmark model, while the goal of the paper was simply to propose a new method, based on something else you had been working on. The article people read is usually not the one you wrote. Especially after a few rounds of revision, where referees have required many changes (and you made them, because you needed the publication).

Anyway, “How to Talk About Books You Haven’t Read?” by Pierre Bayard is a great book, which should be read by everyone!