Some Intuition About the Theory of Statistical Learning

While I was working on the theory of statistical learning, and the concept of consistency, I came across the following popular graph (e.g. from those slides, here in French)

The lower curve is the error on the training sample, as a function of the size of the training sample; the upper one is the error on a validation sample. Our learning process is consistent if the two curves converge.

I was wondering if it was possible to generate such a graph, with some data and some statistical model. And indeed, it is rather simple, and it gives a nice intuition about possible interpretations. Consider some (simple) classification problem; here, a logistic regression. We generate a sample of size n, fit our model, and compute the misclassification rate; then we generate another sample of size n, use the previous model to make predictions, and compute the misclassification rate again. And we play with n.

missclassification <- function(n){
  par(mfrow=c(1,2)) # two panels: training sample (left) and validation sample (right)
  U=data.frame(X1=runif(n),X2=runif(n))
  p=(U[,1]+U[,2])/2
  U$Y=rbinom(n,size=1,prob=p)
  reg=glm(Y~X1+X2,data=U,family=binomial)
  pd=function(x1,x2) predict(reg,newdata=data.frame(X1=x1,X2=x2),type="response")>.5
  x=seq(0,1,length=101)
  z=outer(x,x,pd)
  cl2=c(rgb(1,0,0,.4),rgb(0,0,1,.4))
  cl1=c("red","blue")
  image(x,x,z,col=cl2,xlab="",ylab="",main="Training Sample")
  points(U$X1,U$X2,pch=19,col=cl1[1+U$Y])
 
  V=data.frame(X1=runif(n),X2=runif(n))
  p=(V[,1]+V[,2])/2
  V$Y=rbinom(n,size=1,prob=p)
  image(x,x,z,col=cl2,xlab="",ylab="",main="Validation Sample")
  points(V$X1,V$X2,pch=19,col=cl1[1+V$Y])
 
  MissClassU=mean(abs(pd(U$X1,U$X2)-U$Y))
  MissClassV=mean(abs(pd(V$X1,V$X2)-V$Y))
return(c(MissClassU,MissClassV))
}
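
For instance, with a sample of size 100 (an arbitrary choice, just for illustration), calling the function draws the two panels and returns the training and validation misclassification rates:

missclassification(100)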

If we plot it, we get the following (in purple, the training sample, and in black, the validation sample)

The graph is not exactly the same as the one above, but that is probably due to the randomness of our samples. If we average over hundreds of samples, it should look fine.

MCU=rep(NA,500)
MCV=rep(NA,500)
n=250
  for(i in 1:500){
    U=data.frame(X1=runif(n),X2=runif(n))
    p=(U[,1]+U[,2])/2
    U$Y=rbinom(n,size=1,prob=p)
    reg=glm(Y~X1+X2,data=U,family=binomial)
    pd=function(x1,x2) predict(reg,newdata=data.frame(X1=x1,X2=x2),type="response")>.5
    MCU[i]=mean(abs(pd(U$X1,U$X2)-U$Y))
 
    V=data.frame(X1=runif(n),X2=runif(n))
    p=(V[,1]+V[,2])/2
    V$Y=rbinom(n,size=1,prob=p)
    MCV[i]=mean(abs(pd(V$X1,V$X2)-V$Y))
  }
  MissClassU=mean(MCU)
  MissClassV=mean(MCV)
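
To actually reproduce the learning-curve graph, we can wrap that loop in a function of the sample size, and plot the two average error rates against n. Here is a sketch under the same simulation setup (not code from the original post; the grid of sample sizes is arbitrary):

learning_curve=function(n,nsim=100){
  MCU=MCV=rep(NA,nsim)
  for(i in 1:nsim){
    U=data.frame(X1=runif(n),X2=runif(n))
    U$Y=rbinom(n,size=1,prob=(U$X1+U$X2)/2)
    reg=glm(Y~X1+X2,data=U,family=binomial)
    pd=function(D) predict(reg,newdata=D,type="response")>.5
    MCU[i]=mean(abs(pd(U)-U$Y))
    V=data.frame(X1=runif(n),X2=runif(n))
    V$Y=rbinom(n,size=1,prob=(V$X1+V$X2)/2)
    MCV[i]=mean(abs(pd(V)-V$Y))
  }
  c(train=mean(MCU),valid=mean(MCV))
}
 
ns=seq(20,500,by=20)                  # grid of sample sizes (arbitrary)
err=sapply(ns,learning_curve)         # 2 x length(ns) matrix of average errors
par(mfrow=c(1,1))                     # back to a single panel
plot(ns,err["valid",],type="l",xlab="sample size",
     ylab="misclassification rate",ylim=range(err))
lines(ns,err["train",],col="purple")  # training error, in purple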

Visualising a Classification in High Dimension

So far, when discussing classification, we have been playing with my toy dataset (actually, I should not claim it is mine: it is inspired by the one used in the introduction of Boosting, by Robert Schapire and Yoav Freund). But in real life, there are more observations, and more explanatory variables. With more than two explanatory variables, it starts to be more complicated to visualise. For instance, consider

MYOCARDE=read.table(
"http://freakonometrics.free.fr/saporta.csv",
head=TRUE,sep=";")

where we have observations of emergency-room patients admitted for myocardial infarction, and we want to understand who survived, in order to build a predictive model. But before running some classifier, let us visualise the data. Since we have seven explanatory variables plus our class (survival or death), we can start with a PCA.

library(FactoMineR) # PCA (on the continuous variables)
X=MYOCARDE[,1:7]
acp=PCA(X)

To add the death/survival variable, we can treat it as a numerical 0/1 variable (at least to get a direction)

MYOCARDE2=MYOCARDE
MYOCARDE2$PRONO=(MYOCARDE2$PRONO=="SURVIE")*1
acp=PCA(MYOCARDE2,quanti.sup=8,graph=TRUE)

The nice thing is that we can see here which variables are collinear with it. It is also possible to visualise the individuals, and the classes, too

acp=PCA(MYOCARDE,quali.sup=8,graph=TRUE)
plot(acp, habillage = 8,col.hab=c("red","blue"))


Supervised Classification, beyond the logistic

In our data-science class, after discussing the limitations of the logistic regression, e.g. the fact that the decision boundary is a straight line, we mentioned possible natural extensions. Let us consider our (now) standard dataset

 clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
 clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2))
 x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
 y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
 z <- c(1,1,1,1,1,0,0,1,0,0)
 df <- data.frame(x,y,z)
 plot(x,y,pch=19,cex=2,col=clr1[z+1])

One can consider a quadratic function of the covariates (instead of a linear one)

 reg=glm(z~x+y+I(x^2)+I(y^2)+I(x*y),
     data=df,family=binomial)
 summary(reg)
 
 pred_1 <- function(x,y){
 predict(reg,newdata=data.frame(x=x,
 y=y),type="response")>.5 }
 
 x_grid<-seq(0,1,length=101)
 y_grid<-seq(0,1,length=101)
 z_grid <- outer(x_grid,y_grid,pred_1)
 image(x_grid,y_grid,z_grid,col=clr2)
 points(x,y,pch=19,cex=2,col=clr1[z+1])


Supervised Classification, discriminant analysis

Another popular technique for classification (or at least, one that used to be popular) is (linear) discriminant analysis, introduced by Ronald Fisher in 1936. Consider the same dataset as in our previous post

> clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
> x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
> y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
> z <- c(1,1,1,1,1,0,0,1,0,0)
> df <- data.frame(x,y,z)
> plot(x,y,pch=19,cex=2,col=clr1[z+1])

The main interest of that technique is not so much its output as the fact that all computations can be done explicitly (and by hand), which helps to get a better understanding of the theoretical concepts behind classification.
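
The post then goes through those computations explicitly. Just to connect it with the previous posts (a minimal sketch, not taken from the post itself), the linear discriminant classifier can also be obtained directly with MASS::lda, and its decision region visualised exactly as we did for the logistic regression:

library(MASS) # for lda()
fit_lda=lda(z~x+y,data=df)
pred_lda=function(x,y) as.numeric(as.character(
  predict(fit_lda,newdata=data.frame(x=x,y=y))$class))
x_grid=seq(0,1,length=101)
y_grid=seq(0,1,length=101)
z_grid=outer(x_grid,y_grid,pred_lda)
image(x_grid,y_grid,z_grid,col=c(rgb(1,0,0,.2),rgb(0,0,1,.2)))
points(x,y,pch=19,cex=2,col=clr1[z+1])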


Tents, Tweets, and Events: Ongoing Protests and Social Media

Our paper, entitled Tents, Tweets, and Events: The Interplay Between Ongoing Protests and Social Media, written with Marco Toledo Bastos and Dan Mercea, just appeared in the Journal of Communication.

Recent protest movements have fuelled deliberations about the extent to which social media ignite protests. In this paper we compare time-series data of Twitter, Facebook, and onsite protest activity to test the hypothesis of Granger-causality between social media streams and protestors attending demonstrations during the Indignados in Spain, the Occupy movement in the U.S., and the Vinegar protests in Brazil. After applying a Gaussianization procedure to the time series, we confirmed the hypothesis that contentious communication on Twitter and Facebook was Granger-causal of onsite protest activity during the Indignados and the Occupy protests, with bidirectional causality between online and onsite protest activity in the Occupy series. The Vinegar protests in Brazil presented Granger-causality only between Facebook and Twitter and between protestors and injured or arrested protestors. The results indicate that the causal relationship between online and onsite political activity varies considerably across different socioeconomic contexts with different levels of Internet penetration.
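
Just to illustrate the kind of test used in the paper (a minimal sketch on simulated series, not the paper's data or code), a Granger-causality test between two time series can be run in R with lmtest::grangertest:

library(lmtest) # for grangertest()
set.seed(1)
n=200
tweets=as.numeric(arima.sim(list(ar=.5),n))   # hypothetical online-activity series
protest=.8*c(0,tweets[-n])+rnorm(n)           # onsite series driven by lagged tweets
grangertest(protest~tweets,order=1)           # do tweets Granger-cause protests?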

Supervised Classification, Logistic and Multinomial

In our Data Science course, we will start discussing classification techniques (in the context of supervised models). Consider the following case, with 10 points and two classes (red and blue)

> clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
> clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2))
> x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
> y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
> z <- c(1,1,1,1,1,0,0,1,0,0)
> df <- data.frame(x,y,z)
> plot(x,y,pch=19,cex=2,col=clr1[z+1])

To get a prediction, i.e. a partition of the space into two parts, consider some logistic regression

> reg=glm(z~x+y,data=df,family=binomial)
> summary(reg)
 
Call:
glm(formula = z ~ x + y, family = binomial, data = df)
 
Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.6593  -0.4400   0.2564   0.5830   1.5374  
 
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   -1.706      1.999  -0.854    0.393
x             -5.489      5.360  -1.024    0.306
y              8.568      5.515   1.554    0.120
 
(Dispersion parameter for binomial family taken to be 1)
 
    Null deviance: 13.4602  on 9  degrees of freedom
Residual deviance:  8.1445  on 7  degrees of freedom
AIC: 14.144
 
Number of Fisher Scoring iterations: 5

Given some point, the predicted class is obtained using

> pred_1 <- function(x,y){
+ predict(reg,newdata=data.frame(x=x,
+ y=y),type="response")>.5
+ }

(here, the predicted class is simply the one that is the most likely). To visualize it, use

> x_grid<-seq(0,1,length=101)
> y_grid<-seq(0,1,length=101)
> z_grid <- outer(x_grid,y_grid,pred_1)
> image(x_grid,y_grid,z_grid,col=clr2)
> points(x,y,pch=19,cex=2,col=clr1[z+1])

Since the logistic regression is a (generalized) linear model, the boundary that separates the two regions is a straight line.
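
Indeed, the predicted class changes where the predicted probability crosses 1/2, i.e. on the line where the linear predictor of the logistic regression equals zero. As a quick check (a small addition, not in the original post), that line can be drawn on top of the previous graph using the fitted coefficients:

> cf=coef(reg)
> abline(a=-cf[1]/cf[3],b=-cf[2]/cf[3])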


Modeling Earthquake Dynamics

In 2012, with Marilou Durand, a student at UQAM, we started working on the seismic gap hypothesis, see e.g. McCann et al. (1978) or Kagan & Jackson (1991), or, to be more specific, on the dynamics between earthquake magnitudes (or seismic moments) and inter-occurrence durations. Our paper should appear soon in the Journal of Seismology.

In this paper, we investigate questions arising in Parsons & Geist (2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos & Karlis (2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time since the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where parameters are functions of the magnitude of the previous earthquake. We use those two models, alternatively, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year, or a decade.
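
To give an idea of what such a conditional model can look like in practice (a sketch on simulated data, not the earthquake catalogue used in the paper), the waiting-time part can be written as a Gamma regression with a log link, where the expected waiting time depends on the magnitude of the previous event:

set.seed(1)
n=500
mag=5+rexp(n,rate=2)                  # hypothetical magnitudes of previous events
wait=rgamma(n,shape=2,rate=.5*mag)    # simulated waiting times, shorter after large events
reg_gamma=glm(wait~mag,family=Gamma(link="log"))
summary(reg_gamma)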

The paper is online at https://hal.archives-ouvertes.fr/.

Quebec Demography

This fall, I went to see Mommy, by Xavier Dolan. And I must admit I loved the film! I loved everything about it! The music, the framing, the pace, the actors (even if I had a hard time believing Anne Dorval, who will always be, for me, the mother from Les Parent, especially since in Montréal I kept running into Daniel Brière (the actor, not the Avalanche player), who lived next door and often went to the market at the same time as I did... so in my mind the two were forever linked... and the mother from the TV series and the one in Xavier Dolan's film are quite different).

The only criticism I could have made is about the subtitles (which were imposed on us, in France). In Québec, we had gotten used to living without subtitles, and we learned the language by speaking it, day after day. For instance, I learned what câlisser meant without having to open a dictionary, or to ask for a translation. Just as I discovered that there were variants, like décâlisser (and I even ended up understanding what it meant). But I had never needed to see the translation of the word written out. Seeing translations of words I had ended up discovering and understanding on my own unsettled me. I will enjoy watching the film again on DVD all the more, without the subtitles that were imposed on me at the cinema.

It is funny, because during the holidays I took the opportunity to finish Magasin Général, by Régis Loisel and Jean-Louis Tripp. And I enjoyed finding Québécois expressions throughout the book. Even though the story takes place in Notre Dame du Lac, on the lower Saint Lawrence (even if the name does not seem to exist anymore, since a few years ago), we are far from the dialogue one can hear out in the regions, and we get a pleasant version of what one might hear in Montréal...

In fact, as is indicated (too) discreetly, it is Jimmy Beaulieu (whose admirable work I had already highlighted in a previous post) who did the "translation" of the dialogue, in order to have (as the first pages of the book put it) dialogue in Québécois "that can be understood on both sides of the Atlantic". Which shows that French people from France can understand Québécois (with a little good will). I think I would have liked to have subtitles for Mommy in the same language as the one spoken in Magasin Général.

Reading the last two volumes of Magasin Général in particular (yes, I was a bit behind), with their stories about birth rates, reminded me of the work we did a little over a year ago with Julie, who had come to UQAM for an internship, and with whom we had explored the demography of Lac Saint-Jean (admittedly, compared to Magasin Général, that is on the other side of the Saint Lawrence).

The slides (from the internship defense) are online.