All posts by Arthur Charpentier

Arthur Charpentier, professor of Actuarial Science in Montréal. Former assistant professor at ENSAE Paristech, associate professor at Ecole Polytechnique, and assistant professor in Economics at Université de Rennes 1. Graduate of ENSAE, with a Master in Mathematical Economics (Paris Dauphine) and a PhD in Mathematics (KU Leuven); Fellow of the French Institute of Actuaries.

Visualising a Classification in High Dimension, part 2

A few weeks ago, I published a post on Visualising a Classification in High Dimension, based on a principal component analysis, to get a projection on the first two components. Following that post, I was wondering what could be done in the context of a classification on categorical covariates. A natural idea would be to consider a correspondence analysis, and to run similar code.

Consider here the dataset used in a recent post,

> source("http://freakonometrics.free.fr/import_data_credit.R")

If we consider a correspondence analysis, we get

> library(FactoMineR)
> acm=MCA(train.db,quali.sup = 
+ which(names(train.db)=="class"),ncp=10)

For the covariates (including the variable we want to model, considered here as a supplementary variable), the visualisation, on the first two components, is

and for the individuals
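
For completeness, here is a sketch of how these two maps can be drawn with FactoMineR's plot method (the argument names below are my recollection of plot.MCA; the original post only displays the resulting figures),

> plot(acm, choix = "ind", invisible = "ind")  # categories of the covariates only
> plot(acm, choix = "ind", invisible = "var")  # individuals only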

Continue reading Visualising a Classification in High Dimension, part 2

Classification with Categorical Variables (the fuzzy side)

The Gaussian and the (log) Poisson regressions share a very interesting property: the average predicted value is the empirical mean of our sample.

> mean(predict(lm(dist~speed,data=cars)))
[1] 42.98
> mean(cars$dist)
[1] 42.98

One can prove that it is also the prediction for the average individual in our sample

> predict(lm(dist~speed,data=cars),
+ newdata=data.frame(speed=mean(cars$speed))) 
42.98

The geometric interpretation is that the regression line passes through the centroid,

> plot(cars)
> abline(lm(dist~speed,data=cars),col="red")
> abline(h=mean(cars$dist),col="blue")
> abline(v=mean(cars$speed),col="blue")
> points(mean(cars$speed),mean(cars$dist))

But for other models, this is no longer the case. Consider for instance a logistic regression. And to make things even more complicated, consider the case where all the explanatory variables are categorical: it is then much harder to define a prediction for the “average individual”, unless we consider some fuzzy interpretation of the regression.
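
For instance (a quick sketch of my own, on the mtcars dataset rather than the data used below), with a logistic regression the prediction at the average value of the covariate is no longer the average prediction,

> reg <- glm(am ~ wt, data = mtcars, family = binomial)
> mean(predict(reg, type = "response"))
> predict(reg, newdata = data.frame(wt = mean(mtcars$wt)),
+ type = "response")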

Continue reading Classification with Categorical Variables (the fuzzy side)

Another Interactive Map for the Cholera Dataset

Following my previous post, François (aka @FrancoisKeck) posted a comment mentioning another package I could use to get an interactive map, the rleafmap package. And the heatmap was easy to include here.

The first part is still the same, to get the data,

> require(rleafmap)
> library(sp)
> library(rgdal)
> library(maptools)
> library(KernSmooth)
> setwd("/home/arthur/Documents/")
> deaths <- readShapePoints("Cholera_Deaths")
> df_deaths <- data.frame(deaths@coords)
> coordinates(df_deaths)=~coords.x1+coords.x2
> proj4string(df_deaths)=CRS("+init=epsg:27700") 
> df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))
> df=data.frame(df_deaths@coords)

To get a first visualisation, use

> stamen_bm <- basemap("stamen.toner")
> j_snow <- spLayer(df_deaths, stroke = FALSE)
> writeMap(stamen_bm, j_snow, width = 1000, height = 750, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 14)

and again, using the + and – buttons in the top left area, we can zoom in or out. Or we can set the zoom level manually,

> writeMap(stamen_bm, j_snow, width = 1000, height = 750, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16)

To get the heatmap, use

> library(spatstat)
> library(maptools)

> win <- owin(xrange = bbox(df_deaths)[1,] + c(-0.01,0.01), yrange = bbox(df_deaths)[2,] + c(-0.01,0.01))
> df_deaths_ppp <- ppp(coordinates(df_deaths)[,1], coordinates(df_deaths)[,2], window = win)

> df_deaths_ppp_d <- density.ppp(df_deaths_ppp,
+ sigma = min(bw.ucv(df[,1]),bw.ucv(df[,2])))

> df_deaths_d <- as.SpatialGridDataFrame.im(df_deaths_ppp_d)
> df_deaths_d$v[df_deaths_d$v < 10^3] <- NA

> stamen_bm <- basemap("stamen.toner")
> mapquest_bm <- basemap("mapquest.map")
 
> j_snow <- spLayer(df_deaths, stroke = FALSE)
> df_deaths_den <- spLayer(df_deaths_d, layer = "v", cells.alpha = seq(0.1, 0.8, length.out = 12))
> my_ui <- ui(layers = "topright")

> writeMap(stamen_bm, mapquest_bm, j_snow, df_deaths_den, width = 1000, height = 750, interface = my_ui, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16)

The amazing thing here is the set of options in the top right corner. For instance, we can remove some layers, e.g. the points

or to change the background

To get an html file, instead of a standard visualisation in RStudio, use

> writeMap(stamen_bm, mapquest_bm, j_snow, df_deaths_den, width = 450, height = 350, interface = my_ui, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16, directView ="browser")

which will generate the html page above (as well as some additional files, actually). Awesome, isn’t it?

Interactive Maps for John Snow’s Cholera Data

This week, in Istanbul, for the second training on data science, we’ve been discussing classification and regression models, but also visualisation, including maps. And we had a brief introduction to the leaflet package,

devtools::install_github("rstudio/leaflet")
require(leaflet)

To see what can be done with that package, we will use, one more time, John Snow’s cholera dataset, discussed in previous posts (one with a visualisation on a Google Maps background, and a second one on an OpenStreetMap background),

library(sp)
library(rgdal)
library(maptools)
setwd("/cholera/")
deaths <- readShapePoints("Cholera_Deaths")
df_deaths <- data.frame(deaths@coords)
coordinates(df_deaths)=~coords.x1+coords.x2
# the coordinates are in the British National Grid (EPSG:27700)
proj4string(df_deaths)=CRS("+init=epsg:27700") 
# convert them to longitude / latitude (WGS84)
df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))
df=data.frame(df_deaths@coords)
lng=df$coords.x1
lat=df$coords.x2

Once the leaflet package is installed, we can use it at the RStudio console (which is what we will do here), within R Markdown documents, or within Shiny applications. But because of restrictions on this blog (the rules of hypotheses.org), I can only include copies of my screen here. If you run the code in RStudio, you will get interactive maps in the viewer window.

First step: to load a map, initially centered on London, use

m = leaflet()%>% addTiles() 
m %>% fitBounds(-.141,  51.511, -.133, 51.516)

In the RStudio viewer window, it behaves just like OpenStreetMap: we can zoom in or out (with the standard + and – buttons in the top left corner)

And we can add additional material, such as the locations of the deaths from cholera (since the coordinates are now in the same reference system)

rd=.5
op=.8
clr="blue"
m = leaflet() %>% addTiles()
m %>% addCircles(lng,lat, radius = rd,opacity=op,col=clr)

We can also add a heatmap, based on a bivariate kernel density estimate.

library(KernSmooth)
X=cbind(lng,lat)
kde2d <- bkde2D(X, bandwidth=c(bw.ucv(X[,1]),bw.ucv(X[,2])))

But there is no heatmap function in leaflet (so far), so we have to build it manually, from the contour lines of that density estimate,

x=kde2d$x1
y=kde2d$x2
z=kde2d$fhat
CL=contourLines(x , y , z)

We now have a list of polygons corresponding to isodensity curves. To visualise one of them, use

m = leaflet() %>% addTiles() 
m %>% addPolygons(CL[[5]]$x,CL[[5]]$y,fillColor = "red", stroke = FALSE)

Of course, we can display both the points and the polygon at the same time,

m = leaflet() %>% addTiles() 
m %>% addCircles(lng,lat, radius = rd,opacity=op,col=clr) %>%
  addPolygons(CL[[5]]$x,CL[[5]]$y,fillColor = "red", stroke = FALSE)
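
And since CL contains every contour line, a small loop can overlay them all at once (a sketch of mine, not from the original post; the colours and opacities are arbitrary),

m = leaflet() %>% addTiles()
for(i in 1:length(CL)){
  m = m %>% addPolygons(CL[[i]]$x, CL[[i]]$y, fillColor = "red", stroke = FALSE)
}
m %>% addCircles(lng, lat, radius = rd, opacity = op, col = clr)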

Continue reading Interactive Maps for John Snow’s Cholera Data

Language and Cartography

Most of the standard world maps in R are in English. The other day, some students wanted to visualise data from a database where the country names were in French. To get a correspondence between English and French names, we can use the following table,

> library(gdata)
> library(xlsx)
> download.file("http://www.stat.gouv.qc.ca/statistiques/divisions-territoriales/pays-liste-isq-web.xls","corresp")
>  xls_corresp <- read.xls("corresp",sheet=1,encoding="latin1")

Here, we have

>  df_corresp <- data.frame(
+ FR=xls_corresp$X.5,
+ EN=xls_corresp$X.11)
> df_corresp[5:10,]
                    FR                 EN
5  Belgique-Luxembourg Belgium-Luxembourg
6    Îles du Pacifique    Pacific Islands
7          Afghanistan        Afghanistan
8       Afrique du Sud       South Africa
9           Îles Åland      Åland Islands
10             Albanie            Albania

To get a correspondence between the names used in R and those in our database, we need to manipulate the character strings a little,

>  df_corresp$FR = as.character(df_corresp$FR)
>  df_corresp$FR = iconv(df_corresp$FR, to="ASCII//TRANSLIT") 
>  df_corresp$FR = tolower(df_corresp$FR)
>  remove_minus = function(s) paste(unlist(strsplit(s, split='-',fixed=TRUE)),collapse="")
>  remove_space = function(s) paste(unlist(strsplit(s, split=' ',fixed=TRUE)),collapse="")
>  df_corresp$FR = sapply(df_corresp$FR,remove_minus)
>  df_corresp$FR = sapply(df_corresp$FR,remove_space)

> df_corresp$EN = as.character(df_corresp$EN)
> df_corresp$EN = iconv(df_corresp$EN, to="ASCII//TRANSLIT") 
> df_corresp$EN = tolower(df_corresp$EN)
> df_corresp$EN = sapply(df_corresp$EN,remove_minus)
> df_corresp$EN = sapply(df_corresp$EN,remove_space)
> split_dots = function(s) strsplit(s, split=':',fixed=TRUE)[[1]][1]

If we look at the countries for which the English name could be matched with the country name used in R's map database,

> library(maps)
>  world<-map(database="world")
>  world$pays_EN <- world$names  
>  world$pays_EN <- tolower(world$pays_EN)
>  world$pays_EN = sapply(world$pays_EN,remove_space) 
>  world$pays_EN = sapply(world$pays_EN,remove_minus) 
>  world$pays_EN = sapply(world$pays_EN,split_dots) 
>  world$pays_FR <- df_corresp$FR[match(world$pays_EN, df_corresp$EN)]

we get the following map

>  color <- !is.na(world$pays_FR)
>  map(database="world", fill=TRUE, col=color)

The only countries for which we have no match are the United States of America (usa in R's database), Russia (ussr), the two Congos (the Democratic Republic and the other one), and Côte d'Ivoire. With a little manual work on those four countries, we can get a full correspondence between the names used in R and the French names.
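
A rough sketch of that manual patch (the strings below are assumptions, to be checked against the normalised names actually present in world$pays_EN and df_corresp$FR),

> # hypothetical normalised French names, to be adjusted after inspection
> world$pays_FR[world$pays_EN == "usa"] <- "etatsunis"
> world$pays_FR[world$pays_EN == "ussr"] <- "russie"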

Splitting a Node in a Tree

If we grow a tree with standard functions in R, on the same dataset used to introduce classification trees in a previous post,

> MYOCARDE=read.table(
+ "http://freakonometrics.free.fr/saporta.csv",
+ head=TRUE,sep=";")
> library(rpart)
> cart<-rpart(PRONO~.,data=MYOCARDE)

we get

> library(rpart.plot)
> library(rattle)
> prp(cart,type=2,extra=1)
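
As a side note (not in the original post), since the rattle package is loaded anyway, one could also get a fancier rendering of the same tree with

> fancyRpartPlot(cart)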

Continue reading Splitting a Node in a Tree

Regression Models, It’s Not Only About Interpretation

Yesterday, I uploaded a post where I tried to show that “standard” regression models were not performing badly. At least if you include splines (multivariate splines) to take into account joint effects and nonlinearities. So far, I have not discussed the possibly large number of features (but with bootstrap procedures, it is possible to assess something related to variable importance, which people from machine learning like).

But my post was not complete: I was simply plotting the predictions obtained by the models. And it “looked like” the regression was nice, but so were the random forest, the http://latex.codecogs.com/gif.latex?k-nearest neighbour and the boosting algorithm. What if we compare those models on new data?

Continue reading Regression Models, It’s Not Only About Interpretation

On Some Alternatives to Regression Models

When you start talking with people in machine learning, you quickly hear something like “forget your econometric models, your GLMs, I can easily find a machine learning ‘model’ that can beat yours”. I am usually very sceptical, especially when I hear “easily” or “always”. I have no problem with the fact that I use old econometric models, but I have the feeling that things aren’t that easy. I can understand that we might have problems when we have a lot of features (I am still working on that, I’ll get back to this point soon), but I believe I can still capture interactions and non-linearities with standard econometric models as well as any machine learning algorithm.

Just to illustrate, consider the following ‘model’

http://latex.codecogs.com/gif.latex?\mathbb{E}[Y\vert\boldsymbol{X}=\boldsymbol{x}]=m(\boldsymbol{x})

where http://latex.codecogs.com/gif.latex?m(\cdot) is (just to illustrate)

> n <- 5000
> rtf <- function(x1, x2) { sin(x1+x2)/(x1+x2) }
> xgrid <- seq(1,6,length=31)
> ygrid <- seq(1,6,length=31)
> zgrid <- outer(xgrid,ygrid,rtf)
> persp(xgrid,ygrid,zgrid,theta=30, phi=30, 
+ col="green", ticktype="detailed",shade=TRUE)

Continue reading On Some Alternatives to Regression Models

Vector Autoregressive Models

Consider here some http://latex.codecogs.com/gif.latex?VAR(1) model,

http://latex.codecogs.com/gif.latex?\begin{bmatrix}Y_{1,t}%20\\%20Y_{2,t}\end{bmatrix}%20=%20\begin{bmatrix}A_{1,1}&A_{1,2}%20\\%20A_{2,1}&A_{2,2}\end{bmatrix}\begin{bmatrix}Y_{1,t-1}%20\\%20Y_{2,t-1}\end{bmatrix}%20+%20\begin{bmatrix}\varepsilon_{1,t}%20\\%20\varepsilon_{2,t}\end{bmatrix}

We’ve seen in class that stationarity of that time series, in the sense that http://latex.codecogs.com/gif.latex?\mathbb{E}[\boldsymbol{Y}_t]=\boldsymbol{\mu} and http://latex.codecogs.com/gif.latex?\text{Var}[\boldsymbol{Y}_t,\boldsymbol{Y}_{t-h}]=\boldsymbol{\Gamma}(h), holds if the roots (in http://latex.codecogs.com/gif.latex?\mathbb{C}) of the characteristic polynomial http://latex.codecogs.com/gif.latex?P(z)=\text{det}(\mathbb{I}-\boldsymbol{A}z) lie outside the unit circle.

To visualize this point, consider the following time series

http://latex.codecogs.com/gif.latex?\begin{bmatrix}Y_{1,t}%20\\%20Y_{2,t}\end{bmatrix}%20=%20\begin{bmatrix}0.7&0.4%20\\%200.2&0.3\end{bmatrix}\begin{bmatrix}Y_{1,t-1}%20\\%20Y_{2,t-1}\end{bmatrix}%20+%20\begin{bmatrix}\varepsilon_{1,t}%20\\%20\varepsilon_{2,t}\end{bmatrix}

To generate that time series, we need to generate a bivariate white noise, i.e. http://latex.codecogs.com/gif.latex?\text{Var}(\boldsymbol{\varepsilon}_t)=\boldsymbol{\Sigma} (not necessarily a diagonal matrix), and http://latex.codecogs.com/gif.latex?\text{Var}(\boldsymbol{\varepsilon}_t,\boldsymbol{\varepsilon}_{t-h})=\boldsymbol{0}. For instance

> n=500
> r=0.7
> set.seed(1)
> Z1=rnorm(n)
> Z2=rnorm(n)
> E1=Z1
> E2=r*Z1+sqrt(1-r^2)*Z2

Now, to generate our time series, use

> A=matrix(c(.7,.2,.4,.3),2,2)
> X1=X2=rep(0,n)
> for(t in 2:n){
+   X1[t]=A[1,1]*X1[t-1]+A[1,2]*X2[t-1]+E1[t]
+   X2[t]=A[2,1]*X1[t-1]+A[2,2]*X2[t-1]+E2[t]  
+ }

Here, we have

> plot(X1,type="l",col="red")
> lines(X2,col="blue")

Those two time series seem to be stationary. And, indeed,

> polyroot(c(1,-sum(diag(A)),det(A)))
[1] 1.18+0i 6.51-0i
> Mod(polyroot(c(1,-sum(diag(A)),det(A))))
[1] 1.18 6.51
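
Equivalently (a quick side check, not in the excerpt above), the eigenvalues of the matrix A are the reciprocals of those roots, so stationarity requires them to have modulus strictly smaller than one,

> Mod(eigen(A)$values)
> # roughly 0.85 and 0.15, i.e. 1/1.18 and 1/6.51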

Continue reading Vector Autoregressive Models