Playing with Leaflet (and Radar locations)

Yesterday, my friend Fleur showed me some interesting features of the leaflet package in R.

library(leaflet)

In order to illustrate, consider locations of (fixed) radars, in several European countries. To get the data, use

download.file("http://carte-gps-gratuite.fr/radars/zones-de-danger-destinator.zip","radar.zip")
unzip("radar.zip")
 
ext_radar=function(nf){
  radar=read.table(file=paste("destinator/",nf,sep=""),
    sep=",", header=FALSE, stringsAsFactors=FALSE)
  # keep only the numeric part of the third column (the speed limit)
  radar$type <- sapply(radar$V3, function(x){
    z=as.numeric(unlist(strsplit(x, " ")[[1]]))
    return(z[!is.na(z)])})
  radar <- radar[,c(1,2,4)]
  names(radar) <- c("lon", "lat", "type")
  return(radar)}
 
L=list.files("./destinator/")
nl=nchar(L)
id=which(substr(L,4,8)=="Radar" & substr(L,nl-2,nl)=="csv")
 
radar_E=NULL
for(i in id) radar_E=rbind(radar_E,ext_radar(L[i]))

(To be honest, if you run that code, you will get several countries, but not France… but if you want to add it, you should be able to do so.) The first tool is based on popups: if you click on a point on the map, you get some information, such as the speed limit at the radar location. To get a nice pictogram, use

fileUrl <- "http://evadeo.typepad.fr/.a/6a00d8341c87ef53ef01310f9238e6970c-800wi"
download.file(fileUrl,"radar.png", mode = 'wb')
RadarICON <- makeIcon(iconUrl = fileUrl, iconWidth = 20, iconHeight = 20)

And then, use the following code to get a dynamic map, specifying the variable to be used for the popup

m <- leaflet(data = radar_E) 
m <- m %>% addTiles() 
m <- m %>% addMarkers(~lon, ~lat, icon = RadarICON, popup = ~as.character(type))
m

Because the picture is a bit heavy, with almost 20,000 points, let us focus only on France.
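The excerpt stops here, but one possible way to do that filtering (a sketch of mine, assuming French radar locations have been added to radar_E; the bounding box below is only a rough approximation of metropolitan France) is

# keep only points roughly inside metropolitan France (approximate box)
radar_FR = radar_E[radar_E$lon > -5.5 & radar_E$lon < 9.6 &
                   radar_E$lat > 41.3 & radar_E$lat < 51.1, ]

m <- leaflet(data = radar_FR)
m <- m %>% addTiles()
m <- m %>% addMarkers(~lon, ~lat, icon = RadarICON, popup = ~as.character(type))
m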

Continue reading Playing with Leaflet (and Radar locations)

Computational Time of Predictive Models

Tuesday, at the end of my 5-hour crash course on machine learning for actuaries, Pierre asked me an interesting question about the computational time of different techniques. I had been presenting the philosophy of various algorithms, but I forgot to mention computational time. So I wanted to try several classification algorithms on the dataset used to illustrate the techniques

> rm(list=ls())
> myocarde=read.table(
"http://freakonometrics.free.fr/myocarde.csv",
head=TRUE,sep=";")
> levels(myocarde$PRONO)=c("Death","Survival")

But the dataset is rather small, with 71 observations and 7 explanatory variables. So I decided to replicate the observations, and to add some covariates,

> idx=rep(1:nrow(myocarde),each=100)
> TPS=matrix(NA,30,10)
> myocarde_large=myocarde[idx,]
> k=23
> M=data.frame(matrix(rnorm(k*
+ nrow(myocarde_large)),nrow(myocarde_large),k))
> names(M)=paste("X",1:k,sep="")
> myocarde_large=cbind(myocarde_large,M)
> dim(myocarde_large)
[1] 7100   31
> object.size(myocarde_large)
2049.064 kbytes

The dataset is not big… but at least, it does not take 0.0001 sec. to run a regression. Actually, running a logistic regression takes about 0.1 second,

> system.time(fit <- glm(PRONO~.,
+ data=myocarde_large, family="binomial"))
       user      system     elapsed 
      0.114       0.016       0.134 
> object.size(fit)
9,313.600 kbytes

And I was surprised that the regression object was 9 MB, more than four times the size of the dataset. With a larger dataset, 100 times bigger,

> dim(myocarde_large_2)
[1] 710000     31

it takes 20 sec.
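(The construction of myocarde_large_2 is not shown in this excerpt; a possible reconstruction, using the same replication trick with each observation repeated 10,000 times — the names idx2 and M2 are mine — would be)

> idx2=rep(1:nrow(myocarde),each=10000)
> myocarde_large_2=myocarde[idx2,]
> M2=data.frame(matrix(rnorm(k*
+ nrow(myocarde_large_2)),nrow(myocarde_large_2),k))
> names(M2)=paste("X",1:k,sep="")
> myocarde_large_2=cbind(myocarde_large_2,M2)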

> system.time(fit<-glm(PRONO~.,
+ data=myocarde_large_2, family="binomial"))
       user      system     elapsed 
     16.394       2.576      19.819 
> object.size(fit)
909,025.600 kbytes

and the object is now almost a hundred times bigger, growing roughly linearly with the data.
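To compare algorithms beyond the logistic regression (which was the point of Pierre's question), the same system.time() wrapper can be applied to any fitting function; for instance, a sketch with a classification tree and a random forest (the packages and parameters below are my choice, not taken from the rest of the post):

> library(rpart)
> library(randomForest)
> system.time(fit_tree <- rpart(PRONO~.,
+ data=myocarde_large))
> system.time(fit_rf <- randomForest(PRONO~.,
+ data=myocarde_large, ntree=100))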

Continue reading Computational Time of Predictive Models

Heuristics on bias and variance for kernel density estimators

Consider the simple case of a moving histogram (which is a very simple kernel). The idea is to recall that

$$f(x) \sim \frac{F(x+h)-F(x-h)}{2h}$$

where

$$f(x)=\frac{dF(x)}{dx}$$

is the slope of $F$ close to point $x$.

Then we use the empirical cumulative distribution function to approximate that slope, i.e.

$$\widehat{f}(x) = \frac{\widehat{F}_n(x+h)-\widehat{F}_n(x-h)}{2h}$$

which can also be written

$$\widehat{f}(x) = \frac{1}{2nh}\sum_{i=1}^n \mathbf{1}\big(x_i \in [x-h,x+h]\big)$$

Consider now the density seen as a random variable,

$$\widehat{f}(x) = \frac{1}{2nh}\sum_{i=1}^n Y_i$$

where the $Y_i$'s are i.i.d., with $Y_i=\mathbf{1}(X_i\in[x-h,x+h])\sim\mathcal{B}(p_x)$, with

$$p_x = \mathbb{P}\big(X\in[x-h,x+h]\big) = F(x+h)-F(x-h)$$

Thus, observe that $\mathbb{E}\big[\widehat{f}(x)\big]=\dfrac{F(x+h)-F(x-h)}{2h}$, but that's not what we're looking for… From Taylor's expansion,

$$F(x\pm h) = F(x) \pm h f(x) + \frac{h^2}{2}f'(x) \pm \frac{h^3}{6}f''(x) + o(h^3)$$

thus

$$\mathbb{E}\big[\widehat{f}(x)\big] = f(x) + \frac{h^2}{6}f''(x) + o(h^2)$$

where the bias comes from approximating the slope of $F$ at $x$ by the slope of a chord. About the variance, since $\widehat{f}(x)$ is a (scaled) sum of i.i.d. Bernoulli variables,

$$\text{Var}\big[\widehat{f}(x)\big] = \frac{n\,p_x(1-p_x)}{(2nh)^2} = \frac{\big[F(x+h)-F(x-h)\big]\big[1-F(x+h)+F(x-h)\big]}{4nh^2}$$

thus, since $F(x+h)-F(x-h)\sim 2hf(x)$ as $h\downarrow 0$,

$$\text{Var}\big[\widehat{f}(x)\big] \sim \frac{f(x)}{2nh}$$

i.e. the variance is of order $(nh)^{-1}$.

We can observe that the bias,

$$\text{bias}\big[\widehat{f}(x)\big] \sim \frac{h^2}{6}f''(x)$$

is decreasing as $h\downarrow 0$, while the variance is increasing as $h\downarrow 0$. This is the standard bias-variance tradeoff in statistics.
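A quick numerical check of that tradeoff (this illustration is mine, not part of the original post): code the moving-histogram estimator directly from the formula above, and compare its bias and variance at a fixed point for several bandwidths,

> set.seed(1)
> x0=0
> h=c(.05,.2,.5,1)
> f_hat=function(x,X,h) mean(abs(X-x)<=h)/(2*h)
> est=sapply(h,function(b) replicate(10000,
+ f_hat(x0,rnorm(100),b)))
> colnames(est)=h
> apply(est,2,mean)-dnorm(x0)  # estimated bias at x0: grows (in absolute value) with h
> apply(est,2,var)             # estimated variance: decreases as h increases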

Convergence and Asymptotic Results

Last week, in our mathematical statistics course, we've seen the law of large numbers (that was proven in the probability course), claiming that

$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i \ \xrightarrow{\ \mathbb{P}\ }\ \mathbb{E}[X]$$

as $n\to\infty$, given a collection $\{X_1,\dots,X_n\}$ of i.i.d. random variables, with $\mathbb{E}[X_i]=\mathbb{E}[X]<\infty$.

To visualize that convergence, we can use

> m=100
> mean_samples=function(n=10){
+   X=matrix(rnorm(n*m),nrow=m,ncol=n)
+   return(apply(X,1,mean))
+ }
> B=matrix(NA,100,20)
> for(i in 1:20){
+   B[,i]=mean_samples(i*10)
+ }
> colnames(B)=as.character(seq(10,200,by=10))
> boxplot(B)

It is also possible to visualize the $\pm 1.96\,\sigma/\sqrt{n}$ bounds (used in the central limit theorem to get a non-degenerate limiting distribution); here $\sigma=1$, since the samples are standard Gaussian,

> u=seq(0,21,by=.2)
> v=sqrt(u*10)
> lines(u,1.96/v,col="red")
> lines(u,-1.96/v,col="red")

Yesterday, we discussed properties of the empirical cumulative distribution function,

$$\widehat{F}_n(x) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}(X_i \le x)$$

We have seen the Glivenko-Cantelli theorem, which states that (under mild assumptions)

$$\sup_{x\in\mathbb{R}}\big|\widehat{F}_n(x)-F(x)\big| \ \xrightarrow{\text{a.s.}}\ 0$$

To visualize that convergence, use the following code. Here I use the trick

$$\max\{a,b\} = \frac{a+b}{2}+\frac{|a-b|}{2}$$

to get the componentwise maximum of two matrices

> m=100
> inf_sample=function(n=10){
+ X=matrix(rnorm(n*m),nrow=m,ncol=n)
+ Xs=t(apply(X,1,sort))
+ Pe_inf=matrix(rep((0:(n-1))/n,
+ each=m),nrow=m,ncol=n)
+ Pe_sup=matrix(rep((0:n)/n,each=m),
+ nrow=m,ncol=n)
+ Pt=pnorm(Xs)
+ D1=abs(Pe_inf-Pt)
+ D2=abs(Pe_sup-Pt)
+ Df=(D1+D2)/2+abs(D2-D1)/2
+ return(apply(Df,1,max))
+ }
> B=matrix(NA,100,20)
> for(i in 1:20){
+   B[,i]=inf_sample(i*10)
+ }
> colnames(B)=as.character(seq(10,200,by=10))
> boxplot(B)

We have also discussed the pointwise asymptotic normality of the empirical cumulative distribution function,

$$\sqrt{n}\,\big(\widehat{F}_n(x)-F(x)\big)\ \xrightarrow{\ \mathcal{L}\ }\ \mathcal{N}\big(0,\,F(x)[1-F(x)]\big)$$

Here again, it is possible to visualize it. The first step is to compute several trajectories of the empirical cumulative distribution function

> u=seq(-3,3,by=.1)
> plot(u,u,ylim=c(0,1),col="white")
> M=matrix(NA,length(u),1000)
> for(m in 1:1000){
+ n=100
+ x=rnorm(n)
+ Femp=Vectorize(function(t) mean(x<=t))
+ v=Femp(u)
+ M[,m]=v
+ lines(u,v,col='light blue',type="s")
+ }

Note that we can compute (pointwise) confidence bands

> lines(u,apply(M,1,mean),col="red",type="l")
> lines(u,apply(M,1,function(x) quantile(x,.05)),
+ col="red",type="s")
> lines(u,apply(M,1,function(x) quantile(x,.95)),
+ col="red",type="s")

Now, if we focus on one specific point, we can visualize the asymptotic normality (i.e., the approximate normality when we have a sample of size 100)

> x0=-1
> y=M[which(u==x0),]
> hist(y,probability=TRUE,
+ breaks=seq(.015,0.55,by=.01))
> vu=seq(0,1,by=.001)
> lines(vu,dnorm(vu,pnorm(x0),
+ sqrt((pnorm(x0)*(1-pnorm(x0)))/100)),
+ col="red")

A Small Probability Exercise

Last Tuesday, we went through a series of probability exercises, and I wanted to come back to one exercise for which I had suggested a computer check.

To solve the exercise, I had suggested the following approach. $X$ is drawn uniformly from $\{2, 5, 8, \ldots, 299\}$ (100 values, in arithmetic progression with common difference 3), while $Y$ is drawn uniformly from $\{3, 7, 11, \ldots, 399\}$ (100 values, with common difference 4). One notes fairly easily that the smallest number belonging to both sets is 11, and since the lcm of 3 and 4 is 12, the common values are

$$11,\ 23,\ 35,\ \ldots,\ 299$$

(I leave the computations to check that, beyond 299, we are no longer in both initial sets). In short, only 25 numbers belong to both sets. The probability we are looking for is then given by

$$\mathbb{P}(X=Y)=\sum_{k}\mathbb{P}(X=k)\,\mathbb{P}(Y=k)$$

where the sum runs over those 25 numbers. We then have

$$\mathbb{P}(X=Y)=\sum_{k}\frac{1}{100}\cdot\frac{1}{100}$$

where each probability equals $1/100$ (we have one chance in 100 of drawing any given number from each set), and the sum has 25 terms. So the probability we are looking for is

$$\mathbb{P}(X=Y)=\frac{25}{100\times 100}=0.0025$$

To check it, we can use the following small piece of R code

> list_X=seq(2,by=3,length=100)
> list_Y=seq(3,by=4,length=100)
> n=1e8
> x=sample(list_X,size=n,replace=TRUE)
> y=sample(list_Y,size=n,replace=TRUE)
> sum(x==y)/n
[1] 0.00250221

which confirms the small computation we have just done.
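The exact value can also be obtained by enumerating the common elements of the two sets (a quick check of mine, consistent with the computation above),

> common=intersect(list_X,list_Y)
> length(common)
[1] 25
> length(common)/(100*100)
[1] 0.0025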

Building Minimalist Maps

Last weekend, following the publication of See the world differently with these minimalist maps, there was quite a lot of activity around minimalist maps. In particular, Reka and Philippe (aka @recifs) suggested that I write a post for Visions Carto on how these maps are built. I am flattered, even though I find my own contribution to this project incredibly modest (and I always feel humble next to Reka's superb drawings).

So I refer you to the post Cartes Minimalistes for more details but, for the more curious, here are two additional, more French, maps. The first one shows the railway network,

library(maptools)
setwd("/home/freakonometrics/Documents/data")
loc="http://www.mapcruzin.com/download-shapefile/france-railways-shape.zip"
download.file(loc,destfile="rail_france.zip")
unzip("rail_france.zip", exdir="./rail_france/")
shap=readShapeLines("./rail_france/railways.shp")
plot(shap,lwd=.7)

and the second one shows the roads in the Paris region,

loc="http://www.mapcruzin.com/download-shapefile/france-roads-shape.zip"
download.file(loc,destfile="road_france.zip")
unzip("road_france.zip", exdir="./road_france/")
shap=readShapeLines("./road_france/roads.shp")
plot(shap,lwd=.7,ylim=48.85+c(-.5,.5),
xlim=2.35+c(-.5,.5))

Next time, I will explain a bit how to fix shapefiles when something is wrong with them (I am thinking of the comment pointing out that it was a pity to have a road between the United Kingdom and Iceland).

Some thoughts on Economics, Mathematics, Econometrics, Statistics, Machine Learning, etc

There were a lot of posts recently related to those topics, starting with Noah Smith's piece entitled "Economics has a Math Problem" and, more recently, "Econometrics, Math, and Machine Learning…what?" by Matt Bogard. I do not yet have a clear view on those issues, but there are still a few thoughts that I wanted to share. I did not really want to, but I was asked, on Twitter, and I thought it might be good to write them down, to clarify some ideas I have, but also (probably, hopefully) to get interesting feedback.

Continue reading Some thoughts on Economics, Mathematics, Econometrics, Statistics, Machine Learning, etc

Minimalist Maps

This week, I mentioned a series of maps on Twitter.

Friday evening, just before leaving the office to pick up the kids after their first week back in class, Matthew Champion sent me an email asking for more details. He wanted to know if I had produced those graphs, and if he could mention them in a post. The truth is, I have no idea who produced those graphs, but I told him one can easily reproduce them. For instance, for the cities, in R, use

> library(maps)
> data("world.cities")
> plot(world.cities$lon,world.cities$lat,
+ pch=19,cex=.7,axes=FALSE,xlab="",ylab="")

It is possible to get a more minimalist one by plotting only cities with more than 100,000 inhabitants, e.g.,

> world.cities2 = world.cities[
+ world.cities$pop>100000,]
> plot(world.cities2$lon,world.cities2$lat,
+ pch=19,cex=.7,axes=FALSE,xlab="",ylab="")

For the airports, it was slightly more complex, since http://openflights.org/data.html#airport lists 6,977 airports. But on http://www.naturalearthdata.com/, I found another dataset with only 891 airports.

> library(maptools)
> shape <- readShapePoints(
+ "~/data/airport/ne_10m_airports.shp")
> plot(shape,pch=19,cex=.7)

On the same website, one can find a dataset for ports,

> shape <- readShapePoints(
+ "~/data/airport/ne_10m_ports.shp")
> plot(shape,pch=19,cex=.7)

This is for graphs based on points. For those based on lines, for instance rivers, shapefiles can be downloaded from https://github.com/jjrom/hydre/tree/, and then use

> require(maptools)
> shape <- readShapeLines(
+ "./data/river/GRDC_687_rivers.shp")
> plot(shape,col="blue")

For roads, the shapefile can be downloaded from http://www.naturalearthdata.com/

> shape <- readShapeLines(
+ "./data/roads/ne_10m_roads.shp")
> plot(shape,lwd=.5)

Last, but not least, for lakes, we need the polygons,

> shape <- readShapePoly(
+ "./data/lake/ne_10m_lakes.shp")
> plot(shape,col="blue",border="blue",lwd=2)

Nice, isn't it? See See the world differently with these minimalist maps for Matthew Champion's post.