Another Interactive Map for the Cholera Dataset

Following my previous post, François (aka @FrancoisKeck) posted a comment mentioning another package I could use to get an interactive map, the rleafmap package. And the heatmap is easy to include here.

The first part is still the same: getting the data,

> require(rleafmap)
> library(sp)
> library(rgdal)
> library(maptools)
> library(KernSmooth)
> setwd("/home/arthur/Documents/")
> deaths <- readShapePoints("Cholera_Deaths")
> df_deaths <- data.frame(deaths@coords)
> coordinates(df_deaths)=~coords.x1+coords.x2
> proj4string(df_deaths)=CRS("+init=epsg:27700") 
> df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))
> df=data.frame(df_deaths@coords)

To get a first visualisation, use

> stamen_bm <- basemap("stamen.toner")
> j_snow <- spLayer(df_deaths, stroke = FALSE)
> writeMap(stamen_bm, j_snow, width = 1000, height = 750, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 14)

and again, using the + and – buttons in the top left corner, we can zoom in or out. Or we can set the zoom level manually,

> writeMap(stamen_bm, j_snow, width = 1000, height = 750, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16)

To get the heatmap, use

> library(spatstat)
> library(maptools)

> win <- owin(xrange = bbox(df_deaths)[1,] + c(-0.01,0.01), yrange = bbox(df_deaths)[2,] + c(-0.01,0.01))
> df_deaths_ppp <- ppp(coordinates(df_deaths)[,1],  coordinates(df_deaths)[,2], window = win)
> 
> df_deaths_ppp_d <- density.ppp(df_deaths_ppp, 
  sigma = min(bw.ucv(df[,1]),bw.ucv(df[,2])))
 
> df_deaths_d <- as.SpatialGridDataFrame.im(df_deaths_ppp_d)
> df_deaths_d$v[df_deaths_d$v < 10^3] <- NA

> stamen_bm <- basemap("stamen.toner")
> mapquest_bm <- basemap("mapquest.map")
 
> j_snow <- spLayer(df_deaths, stroke = FALSE)
> df_deaths_den <- spLayer(df_deaths_d, layer = "v", cells.alpha = seq(0.1, 0.8, length.out = 12))
> my_ui <- ui(layers = "topright")

> writeMap(stamen_bm, mapquest_bm, j_snow, df_deaths_den, width = 1000, height = 750, interface = my_ui, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16)

The amazing thing here is the set of options in the top right corner. For instance, we can remove some layers, e.g. the points,

or to change the background

To get an html file, instead of a standard visualisation in RStudio, use

> writeMap(stamen_bm, mapquest_bm, j_snow, df_deaths_den, width = 450, height = 350, interface = my_ui, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16, directView ="browser")

which will generate the html file above (as well as some additional files, actually). Awesome, isn’t it?

Interactive Maps for John Snow’s Cholera Data

This week, in Istanbul, for the second training on data science, we’ve been discussing classification and regression models, but also visualisation, including maps. And we did have a brief introduction to the leaflet package,

devtools::install_github("rstudio/leaflet")
require(leaflet)

To see what can be done with that package, we will use, one more time, John Snow’s cholera dataset, discussed in previous posts (one with a visualisation on a Google Maps background, and a second one on an OpenStreetMap background),

library(sp)
library(rgdal)
library(maptools)
setwd("/cholera/")
deaths <- readShapePoints("Cholera_Deaths")
df_deaths <- data.frame(deaths@coords)
coordinates(df_deaths)=~coords.x1+coords.x2
proj4string(df_deaths)=CRS("+init=epsg:27700") 
df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))
df=data.frame(df_deaths@coords)
lng=df$coords.x1
lat=df$coords.x2

Once the leaflet package is installed, we can use it at the RStudio console (which is what we will do here), within R Markdown documents, or within Shiny applications. But because of restrictions on this blog (the rules of hypotheses.org), there will only be copies of my screen here. If you run the code in RStudio, you will get interactive maps in the viewer window.

First step. To load a map, centered initially in London, use

m = leaflet()%>% addTiles() 
m %>% fitBounds(-.141,  51.511, -.133, 51.516)

In the viewer window of RStudio, it is just like on OpenStreetMap, e.g. we can zoom in or out (with the standard + and – buttons in the top left corner)

And we can add additional material, such as the location of the deaths from cholera (since we now have the same coordinate representation system here)

rd=.5
op=.8
clr="blue"
m = leaflet() %>% addTiles()
m %>% addCircles(lng,lat, radius = rd,opacity=op,col=clr)

We can also add a heatmap.

library(KernSmooth)
X=cbind(lng,lat)
kde2d <- bkde2D(X, bandwidth=c(bw.ucv(X[,1]),bw.ucv(X[,2])))

But there is no heatmap function (so far) so we have to do it manually,

x=kde2d$x1
y=kde2d$x2
z=kde2d$fhat
CL=contourLines(x , y , z)

We now have a list that contains lists of polygons corresponding to isodensity curves. To visualise one of them, use

m = leaflet() %>% addTiles() 
m %>% addPolygons(CL[[5]]$x,CL[[5]]$y,fillColor = "red", stroke = FALSE)

Of course, we can get the points and the polygon at the same time

m = leaflet() %>% addTiles() 
m %>% addCircles(lng,lat, radius = rd,opacity=op,col=clr) %>%
  addPolygons(CL[[5]]$x,CL[[5]]$y,fillColor = "red", stroke = FALSE)
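There is no built-in heatmap layer, but a possible sketch is to loop over all the isodensity curves returned by contourLines and stack them as semi-transparent polygons (the opacity value below is arbitrary),

m = leaflet() %>% addTiles()
for(i in 1:length(CL)){
  # add one semi-transparent polygon per isodensity level
  m = m %>% addPolygons(CL[[i]]$x, CL[[i]]$y,
                        fillColor = "red", fillOpacity = .2, stroke = FALSE)
}
m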

Continue reading Interactive Maps for John Snow’s Cholera Data

Language and Cartography

Most of the standard world maps in R are in English. The other day, some students wanted to visualise data from a database in which country names are in English. To get a correspondence between English and French names, we can use the following file

> library(gdata)
> library(xlsx)
> download.file("http://www.stat.gouv.qc.ca/statistiques/divisions-territoriales/pays-liste-isq-web.xls","corresp")
>  xls_corresp <- read.xls("corresp",sheet=1,encoding="latin1")

Here we have

>  df_corresp <- data.frame(
+ FR=xls_corresp$X.5,
+ EN=xls_corresp$X.11)
> df_corresp[5:10,]
                    FR                 EN
5  Belgique-Luxembourg Belgium-Luxembourg
6    Îles du Pacifique    Pacific Islands
7          Afghanistan        Afghanistan
8       Afrique du Sud       South Africa
9           Îles Åland      Åland Islands
10             Albanie            Albania

To match the names used in R with those in the database at our disposal, we need to manipulate the character strings a little,

>  df_corresp$FR = as.character(df_corresp$FR)
>  df_corresp$FR = iconv(df_corresp$FR, to="ASCII//TRANSLIT") 
>  df_corresp$FR = tolower(df_corresp$FR)
>  remove_minus = function(s) paste(unlist(strsplit(s, split='-',fixed=TRUE)),collapse="")
>  remove_space = function(s) paste(unlist(strsplit(s, split=' ',fixed=TRUE)),collapse="")
>  df_corresp$FR = sapply(df_corresp$FR,remove_minus)
>  df_corresp$FR = sapply(df_corresp$FR,remove_space)

> df_corresp$EN = as.character(df_corresp$EN)
> df_corresp$EN = iconv(df_corresp$EN, to="ASCII//TRANSLIT") 
> df_corresp$EN = tolower(df_corresp$EN)
> df_corresp$EN = sapply(df_corresp$EN,remove_minus)
> df_corresp$EN = sapply(df_corresp$EN,remove_space)
> split_dots = function(s) strsplit(s, split=':',fixed=TRUE)[[1]][1]

If we look at the countries for which the English name could be matched with the country name used in R’s database,

> library(maps)
>  world<-map(database="world")
>  world$pays_EN <- world$names  
>  world$pays_EN <- tolower(world$pays_EN)
>  world$pays_EN = sapply(world$pays_EN,remove_space) 
>  world$pays_EN = sapply(world$pays_EN,remove_minus) 
>  world$pays_EN = sapply(world$pays_EN,split_dots) 
>  world$pays_FR <- df_corresp$FR[match(world$pays_EN, df_corresp$EN)]

we get the following graph

>  color <- !is.na(world$pays_FR)
>  map(database="world", fill=TRUE, col=color)

The only countries for which there is no match are the United States of America (usa in R’s database), Russia (ussr), the two Congos (the Democratic Republic and the other one), and Côte d’Ivoire. With a bit of manual tinkering on those 4 countries, we can get a full correspondence between the names used in R and the French names.
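As a sketch of that tinkering (the keys on the left and the cleaned French labels on the right are assumptions: they have to match the entries of world$pays_EN and the cleaning convention used for df_corresp$FR),

> corrections <- c("usa" = "etatsunis", "ussr" = "russie",
+   "ivorycoast" = "cotedivoire")  # the two Congos could be patched the same way
> idx <- is.na(world$pays_FR) & world$pays_EN %in% names(corrections)
> world$pays_FR[idx] <- corrections[world$pays_EN[idx]]
> map(database="world", fill=TRUE, col=!is.na(world$pays_FR))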

Splitting a Node in a Tree

If we grow a tree with standard functions in R, on the same dataset used to introduce classification trees in a previous post,

> MYOCARDE=read.table(
+ "http://freakonometrics.free.fr/saporta.csv",
+ head=TRUE,sep=";")
> library(rpart)
> cart<-rpart(PRONO~.,data=MYOCARDE)

we get

> library(rpart.plot)
> library(rattle)
> prp(cart,type=2,extra=1)

Continue reading Splitting a Node in a Tree

Regression Models, It’s Not Only About Interpretation

Yesterday, I uploaded a post where I tried to show that “standard” regression models were not performing badly, at least if you include (multivariate) splines to take into account joint effects and nonlinearities. So far, I have not discussed the possibly high number of features (but with bootstrap procedures, it is possible to assess something related to variable importance, which people from machine learning like).

But my post was not complete: I was simply plotting the predictions obtained by some models. And it “looked like” the regression was nice, but so were the random forest, the $k$-nearest neighbour and the boosting algorithm. What if we compare those models on new data?

Continue reading Regression Models, It’s Not Only About Interpretation

On Some Alternatives to Regression Models

When you start discussing with people in machine learning, you quickly hear something like “forget your econometric models, your GLMs, I can easily find a machine learning ‘model’ that can beat yours”. I am usually very sceptical, especially when I hear “easily” or “always”. I have no problem with the fact that I use old econometric models, but I had the feeling that things aren’t that easy. I can understand that we might have problems when we do have a lot of features (I am still working on that, and I’ll get back to this point soon), but I have the feeling that I can still capture interactions and non-linearities with standard econometric models, as well as with any machine learning algorithm.

Just to illustrate, consider the following ‘model’

$$\mathbb{E}[Y\vert\boldsymbol{X}=\boldsymbol{x}]=m(\boldsymbol{x})$$

where $m(\cdot)$ is (just to illustrate)

> n <- 5000
> rtf <- function(x1, x2) { sin(x1+x2)/(x1+x2) }
> xgrid <- seq(1,6,length=31)
> ygrid <- seq(1,6,length=31)
> zgrid <- outer(xgrid,ygrid,rtf)
> persp(xgrid,ygrid,zgrid,theta=30, phi=30, 
+ col="green", ticktype="detailed",shade=TRUE)
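To actually compare models, we would also need to draw a sample from that ‘model’; here is a minimal sketch, assuming covariates uniform on the same range as the grid and Gaussian noise (neither is specified above),

> set.seed(1)
> X1 <- runif(n,1,6)   # hypothetical design, uniform on the grid range
> X2 <- runif(n,1,6)
> Y <- rtf(X1,X2) + rnorm(n,sd=.1)   # Gaussian noise is an assumption
> base <- data.frame(Y,X1,X2)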

Continue reading On Some Alternatives to Regression Models

Vector Autoregressive Models

Consider here some $VAR(1)$ model,

$$\begin{bmatrix}Y_{1,t} \\ Y_{2,t}\end{bmatrix} = \begin{bmatrix}A_{1,1}&A_{1,2} \\ A_{2,1}&A_{2,2}\end{bmatrix}\begin{bmatrix}Y_{1,t-1} \\ Y_{2,t-1}\end{bmatrix} + \begin{bmatrix}\varepsilon_{1,t} \\ \varepsilon_{2,t}\end{bmatrix}$$

We’ve seen in class that stationarity of that time series, in the sense that $\mathbb{E}[\boldsymbol{Y}_t]=\boldsymbol{\mu}$ and $\text{Var}[\boldsymbol{Y}_t,\boldsymbol{Y}_{t-h}]=\boldsymbol{\Gamma}(h)$, holds if the roots (in $\mathbb{C}$) of the characteristic polynomial $P(z)=\text{det}(\mathbb{I}-\boldsymbol{A}z)$ are outside the unit circle.

To visualize this point, consider the following time series

$$\begin{bmatrix}Y_{1,t} \\ Y_{2,t}\end{bmatrix} = \begin{bmatrix}0.7&0.4 \\ 0.2&0.3\end{bmatrix}\begin{bmatrix}Y_{1,t-1} \\ Y_{2,t-1}\end{bmatrix} + \begin{bmatrix}\varepsilon_{1,t} \\ \varepsilon_{2,t}\end{bmatrix}$$

To generate that time series, we need to generate a bivariate white noise, i.e. $\text{Var}(\boldsymbol{\varepsilon}_t)=\boldsymbol{\Sigma}$ (not necessarily a diagonal matrix), and $\text{Var}(\boldsymbol{\varepsilon}_t,\boldsymbol{\varepsilon}_{t-h})=\boldsymbol{0}$. For instance

> n=500
> r=0.7
> set.seed(1)
> Z1=rnorm(n)
> Z2=rnorm(n)
> E1=Z1
> E2=r*Z1+sqrt(1-r^2)*Z2

To generate now our time series, use

> A=matrix(c(.7,.2,.4,.3),2,2)
> X1=X2=rep(0,n)
> for(t in 2:n){
+   X1[t]=A[1,1]*X1[t-1]+A[1,2]*X2[t-1]+E1[t]
+   X2[t]=A[2,1]*X1[t-1]+A[2,2]*X2[t-1]+E2[t]  
+ }

Here, we have

> plot(X1,type="l",col="red")
> lines(X2,col="blue")

Those two time series seem to be stationary. And, indeed,

> polyroot(c(1,-sum(diag(A)),det(A)))
[1] 1.18+0i 6.51-0i
> Mod(polyroot(c(1,-sum(diag(A)),det(A))))
[1] 1.18 6.51
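Equivalently (a quick sketch), the process is stationary when the eigenvalues of $\boldsymbol{A}$, which are the reciprocals of those roots, all lie inside the unit circle,

> Mod(eigen(A)$values)   # both moduli are below 1 (reciprocals of the roots above)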

Continue reading Vector Autoregressive Models

Forecast, Automatic Routines vs. Experience

This morning, in our Time Series course, we’ve been playing with some data I got from google.ca/trends/. Actually, we’ve been playing with an old version, downloaded 18 months ago (and discussed in a previous post, in French).

> urls = "http://freakonometrics.free.fr/report-headphones-2015.csv"
> report=read.table(
+ urls,skip=4,header=TRUE,sep=",",nrows=585)
> tail(report)
                    Semaine headphones
580 2015-02-08 - 2015-02-14         53
581 2015-02-15 - 2015-02-21         52
582 2015-02-22 - 2015-02-28         51
583 2015-03-01 - 2015-03-07         50
584 2015-03-08 - 2015-03-14         49
585 2015-03-15 - 2015-03-21         49

If we plot that weekly time series, we have

> plot(report[,2],type="l")

Working with weekly series is more complicated (at least to find a simple model, with only a few lags), so let us convert that series into a monthly one,

> source(
+   "http://freakonometrics.blog.free.fr/public/code/H2M.R")
> headphones=H2M(report,lang="FR",type="ts")
> plot(headphones)
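As a teaser for the “automatic routines” in the title, here is a hedged sketch using the forecast package (the actual post may proceed quite differently),

> library(forecast)
> fit = auto.arima(headphones)   # automatic ARIMA selection
> plot(forecast(fit, h=12))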

Continue reading Forecast, Automatic Routines vs. Experience

Growing some Trees

Consider here the dataset used in a previous post, about visualising a classification (with more than 2 features),

> MYOCARDE=read.table(
+ "http://freakonometrics.free.fr/saporta.csv",
+ header=TRUE,sep=";")

The default classification tree is

> library(rpart)
> library(rpart.plot)
> arbre = rpart(factor(PRONO)~.,data=MYOCARDE)
> rpart.plot(arbre,type=4,extra=6)

We can change the options here, such as the minimum number of observations per node,

> arbre = rpart(factor(PRONO)~.,data=MYOCARDE,
+       control=rpart.control(minsplit=10))
> rpart.plot(arbre,type=4,extra=6)

or

> arbre = rpart(factor(PRONO)~.,data=MYOCARDE,
+        control=rpart.control(minsplit=5))
> rpart.plot(arbre,type=4,extra=6)
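Another option we could play with (a sketch, not from the original post) is the complexity parameter cp; setting it to zero (together with a small minsplit) grows the largest possible tree,

> arbre = rpart(factor(PRONO)~.,data=MYOCARDE,
+        control=rpart.control(cp=0,minsplit=2))
> rpart.plot(arbre,type=4,extra=6)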

Continue reading Growing some Trees

How social media usage does and does not predict protests

This post, published today in Monkey Cage, is based on some research with Marco T. Bastos and Dan Mercea (recently published in the Journal of Communication; see also http://papers.ssrn.com)

A storm of protests in 2011 disputed the legitimacy of the institutional status quo across authoritarian and democratic countries alike. From the Arab Spring to the Indignados, from the London riots to the Occupy movement, a heated debate has ensued about whether people’s coming onto the streets may be aided in any significant manner by the use of social media. The debate has lingered for a decade now and is revisited with every new uprising that rises to international prominence.

Continue reading How social media usage does and does not predict protests

Growing one Tree

Consider the following toy dataset, with some spam/ham information, and two words, “viagra” and “lottery”.

> load("spam.RData")
> head(db)
      Y viagra lottery
27 spam      0       1
37  ham      0       1
57 spam      0       0
89  ham      0       0
20 spam      1       0
86  ham      0       0

For the first node, we compute the Gini index for the two variables,

> gini=function(variable){
+ T=table(db$Y,db[,variable])
+ nx=apply(T,2,sum)
+ ProbCond=T/matrix(rep(nx,each=2),2,2)
+ ProbCond
+ Gini=-ProbCond*(1-ProbCond)
+ sum(matrix(rep(nx,each=2),2,2)/sum(nx)*Gini)}
> gini("viagra")
[1] -0.44
> gini("lottery")
[1] -0.487

Here the Gini index is maximal for “viagra”, so that will be the first node.
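To pick the splitting variable programmatically, a small sketch reusing the gini function defined above,

> candidates = c("viagra","lottery")
> candidates[which.max(sapply(candidates, gini))]  # returns "viagra", consistent with the values above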

Continue reading Growing one Tree

Data Science pour l’Actuariat, Part 1

The Data Science pour l’Actuariat training programme started this week. I gave the first lecture, an introduction to advanced R, largely inspired by http://adv-r.had.co.nz/ (and by the introduction of the book Computational Actuarial Science, with R).

In the evening, Françoise Soulié Fogelman gave the inaugural lecture.

Some More Results on the Theory of Statistical Learning

Yesterday, I mentioned a popular graph that is often discussed when studying the theoretical foundations of statistical learning. But there is usually another one, which is the following,

As previously, it is a graph with the risk on the $y$-axis, the red line being computed on the training sample and the black line on the validation sample, as a function of something that can be related to the complexity of the model.

Let us get back to the underlying formulas. On the training sample, we have some empirical risk, defined as

$$R_n=\frac{1}{n}\sum_{i=1}^n L(y_i,\widehat{m}_n(\boldsymbol{x}_i))$$

for some loss function $L$. From the law of large numbers,

$$\lim_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^n U_i = \mathbb{E}[U]$$

when the $U_i$'s are i.i.d., with $U_i\sim U$. But here, we are interested in

$$\lim_{n\rightarrow\infty} \underbrace{\frac{1}{n}\sum_{i=1}^n L(y_i,\widehat{m}_n(\boldsymbol{x}_i))}_{R_n}$$

It is difficult to say anything about that limit, since the $(y_i,\boldsymbol{x}_i)$'s are independent, but the $L(y_i,\widehat{m}_n(\boldsymbol{x}_i))$'s are not, because of $\widehat{m}_n(\cdot)$ (which depends on the entire sample).

But if we look at the empirical risk on a validation sample

$$\lim_{n\rightarrow\infty} \underbrace{\frac{1}{n}\sum_{i=1}^n L(\tilde{y}_i,\widehat{m}_n(\tilde{\boldsymbol{x}}_i))}_{\tilde{R}_n} =\mathbb{E}[L(Y,\widehat{m}_n(\boldsymbol{X}))]$$

One can prove that, with probability $1-\alpha$,

$$\widehat{R}_n\leq R_n +\sqrt{\frac{VC\,[\log(2n/VC)+1]-\log[\alpha/4]}{n}}$$

which depends on $n$ (as discussed in the previous post), but also on that $VC$ parameter, the so-called Vapnik–Chervonenkis dimension. The part on the right-hand side is the blue curve on the graph above.

I won’t spend hours on that dimension $VC$, but the idea is that it is related to model complexity. For instance, in dimension one (one covariate), if $m(\cdot)$ is a polynomial of degree $d$, then $VC=d+1$. In dimension two (two covariates), if $m(\cdot)$ is a (bivariate) polynomial of degree $d$, then $VC=(d+1)(d+2)/2$, while it would be $2(d+1)$ if $m(\cdot)$ is additive, with two polynomials of degree $d$.

Let us try to get a graph which looks like the one above, using the same idea as the one in our previous post.

MissClassU=rep(NA,25)
MissClassV=rep(NA,25)
n=200
U=data.frame(X1=runif(n),X2=runif(n))
p=(U[,1]+U[,2])/2
U$Y=rbinom(n,size=1,prob=p)
V=data.frame(X1=runif(n),X2=runif(n))
p=(V[,1]+V[,2])/2
V$Y=rbinom(n,size=1,prob=p)
for(s in 1:25){
  reg=glm(Y~poly(X1,s)+poly(X2,s),data=U,
          family=binomial)
  pd=function(x1,x2) predict(reg,newdata=data.frame(X1=x1,X2=x2),type="response")>.5
  MissClassU[s]=mean(abs(pd(U$X1,U$X2)-U$Y))
  MissClassV[s]=mean(abs(pd(V$X1,V$X2)-V$Y))
}

If we plot the misclassification rate, as a function of the polynomial degree, in purple on the validation sample and in black on the training sample, we get
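(the figure itself is not reproduced here; a minimal plotting sketch would be)

plot(1:25, MissClassU, type="l", col="black",
     ylim=range(c(MissClassU,MissClassV)),
     xlab="polynomial degree", ylab="misclassification rate")
lines(1:25, MissClassV, col="purple")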

Again, it is on one sample only. We can run it on hundreds of samples, and see how the average misclassification risk changes with complexity.

library(splines)
MCU=rep(NA,500)
MCV=rep(NA,500)

missclassification=function(s){
  for(i in 1:500){
    U=data.frame(X1=runif(n),X2=runif(n))
    p=(U[,1]+U[,2])/2
    U$Y=rbinom(n,size=1,prob=p)
    reg=glm(Y~bs(X1,s)+bs(X2,s),data=U,
            family=binomial)
    pd=function(x1,x2) predict(reg,newdata=data.frame(X1=x1,X2=x2),type="response")>.5
    MCU[i]=mean(abs(pd(U$X1,U$X2)-U$Y))
    V=data.frame(X1=runif(n),X2=runif(n))
    p=(V[,1]+V[,2])/2
    V$Y=rbinom(n,size=1,prob=p)
    MCV[i]=mean(abs(pd(V$X1,V$X2)-V$Y))
  }
  MissClassU=mean(MCU)
  MissClassV=mean(MCV)
  return(c(MissClassU,MissClassV))
}
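The function is not actually called in the excerpt above; a usage sketch (the range of degrees below is an assumption) could be

degres = 1:15
res = sapply(degres, missclassification)
plot(degres, res[1,], type="l", col="black", ylim=range(res),
     xlab="degree", ylab="average misclassification rate")
lines(degres, res[2,], col="purple")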

Here, we cannot see the optimal dimension, because the risk on the validation samples keeps increasing. Which makes sense, since our data are generated from a linear model, so the optimal transformation should be obtained with a linear transformation (and not polynomials of higher degree).