Category Archives: Regression

From multinomial regression to binary classification on some Siamese data

There are two kinds of people in the world: people who think there are two kinds of people in the world and people who don’t

(borrowed from Menand (2018)). Because things are always simpler when we face only a binary choice, aren’t they? But consider here the case where multiple options are possible, and let us see whether we cannot get back to simpler binary choices. Consider a collection of observations (y_i,\boldsymbol{x}_i) where y_i is some categorical variable, y_i\in\mathcal{A} with \mathcal{A}=\lbrace A_1,\cdots,A_\kappa \rbrace, i.e. \kappa possible categories. Let \mathcal{I}_k=\lbrace i:y_i= A_k \rbrace.

In a classical multinomial logistic regression, suppose that A_1 is the reference; then \mathbb{P}[Y=A_j|\boldsymbol{X}=\boldsymbol{x}]=\frac{\exp[\boldsymbol{x}^\top\boldsymbol{\beta}_j]}{1+\exp[\boldsymbol{x}^\top\boldsymbol{\beta}_2]+\cdots+\exp[\boldsymbol{x}^\top\boldsymbol{\beta}_\kappa]}. With a lot of categories, and a small number of observations, inference can be complicated, and non-robust.
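As a reference point, here is a minimal sketch of that classical multinomial fit in R, using nnet::multinom (the data frame db, with a factor Y and covariates X1 and X2, is hypothetical):

library(nnet)
# fit the multinomial logit, one vector of coefficients per non-reference category
fit = multinom(Y ~ X1 + X2, data = db, trace = FALSE)
head(predict(fit, newdata = db, type = "probs"))  # one column of probabilities per category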

  • the Siamese dataset

The name Siamese I use, here, comes from Siamese Networks. Or sort of… As we say in French, it is an « histoire de l’homme qui a vu l’homme qui a vu l’ours » (story of the man who saw the man who saw the bear). A few years ago, a student tried to explain to me the idea of Siamese Networks and this is what I understood. I might be completely wrong, but the idea I got from it did make sense, in my mind at least. That is the story of that blog post…

The idea of the siamese algorithm will be to consider all pairs of observations, (y_i,\boldsymbol{x}_i) and (y_j,\boldsymbol{x}_j) :

  1. \tilde y_{i,j}=\boldsymbol{1}(y_i=y_j) indicating if individuals i and j are in the same category
  2. \tilde{\boldsymbol{x}}_{i,j} is a collection of p-1 variables,
  • \tilde {x}_{k:i,j}={x}_{k:i}-{x}_{k:j} if x_k is continuous, or \tilde {x}_{k:i,j}=|{x}_{k:i}-{x}_{k:j}| (we can use another metric, e.g. \tilde {x}_{k:i,j}=|{x}_{k:i}-{x}_{k:j}|^2, and this is why I decided to use some GAM model in the logistic regression on the Siamese dataset)
  • \tilde {x}_{k:i,j}=({x}_{k:i},{x}_{k:j})\in\mathcal{X}_k\times\mathcal{X}_k if x_k is a categorical variable (taking values in the set \mathcal{X}_k), or \tilde {x}_{k:i,j}=\boldsymbol{1}({x}_{k:i}\neq{x}_{k:j})\in\{0,1\}

The original dataset was a n\times p matrix; if there are no categorical variables, it becomes a n(n-1)/2\times p matrix. The key point is that while the original variable y_i was multinomial, \tilde y_{i,j} is now binomial. For instance, if our initial dataset was the following, with two covariates, one continuous and one categorical

its siamese counterpart is the following
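Since the two tables are shown as images in the original post, here is a minimal sketch of the construction on a tiny, hypothetical toy dataset (y is the label, x1 is continuous, x2 is categorical):

toy = data.frame(y  = c("A","B","A"),
                 x1 = c(1.0, 2.5, 1.7),
                 x2 = c("u","u","v"))
pairs = t(combn(nrow(toy), 2))                 # the n(n-1)/2 pairs (i,j)
siam  = data.frame(
  ty  = toy$y[pairs[,1]] == toy$y[pairs[,2]],  # \tilde y_{i,j} = 1(y_i = y_j)
  tx1 = toy$x1[pairs[,1]] - toy$x1[pairs[,2]], # difference, for the continuous covariate
  tx2 = toy$x2[pairs[,1]] != toy$x2[pairs[,2]])# indicator, for the categorical one
siam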

  • Classification step

On the dataset (\tilde y_{i,j},\tilde {x}_{k:i,j})_{i,j}, fit a logistic regression, \mathbb{P}[\tilde Y=1|\tilde{\boldsymbol{X}}=\tilde{\boldsymbol{x}}]=\frac{\exp[\tilde{\boldsymbol{x}}^\top\boldsymbol{\beta}]}{1+\exp[\tilde{\boldsymbol{x}}^\top\boldsymbol{\beta}]} (or any classification model – CART, random forest, etc). But that is the easy part (unless n is large, because the siamese dataset has (roughly) n^2/2 rows). The difficult task is the prediction.

  • Prediction step

Consider a new input variable \boldsymbol{x}_{\cdot}, and define its siamese version, \tilde{\boldsymbol{x}}_{\cdot}=(\tilde{\boldsymbol{x}}_{\cdot,j})_j, i.e. a database with n rows. Then compute
p_{\cdot,j}=\mathbb{P}[\tilde Y=1|\tilde{\boldsymbol{X}}=\tilde{\boldsymbol{x}}_{\cdot,j}]=\frac{\exp[\tilde{\boldsymbol{x}}_{\cdot,j}^\top\boldsymbol{\beta}]}{1+\exp[\tilde{\boldsymbol{x}}_{\cdot,j}^\top\boldsymbol{\beta}]}, where p_{\cdot,j} is the probability that (y_j,\boldsymbol{x}_{j}) and (y_{\cdot},\boldsymbol{x}_{\cdot}) are in the same category, as well as p_{i,j}=\mathbb{P}[\tilde Y=1|\tilde{\boldsymbol{X}}=\tilde{\boldsymbol{x}}_{i,j}]=\frac{\exp[\tilde{\boldsymbol{x}}_{i,j}^\top\boldsymbol{\beta}]}{1+\exp[\tilde{\boldsymbol{x}}_{i,j}^\top\boldsymbol{\beta}]}
Let \boldsymbol{p}_{\cdot}=(p_{\cdot,j}), and similarly \boldsymbol{p}_{i}=(p_{i,j}). Then several techniques can be used to predict y_{\cdot}.

  1. \widehat{y}_{\cdot}=y_{j^\star} where {j^\star}=\underset{j=1,\cdots,n}{\text{argmax}}\{p_{\cdot,j}\}: the predicted class is the one of the observation that is the most likely to be in the same class
  2. \widehat{y}_{\cdot}=A_{\ell^\star} where {\ell^\star}=\underset{\ell=1,\cdots,\kappa}{\text{argmax}}\{\overline{p}_{\ell}\}, where \overline{p}_{\ell} = \frac{1}{n_{\ell}}\sum_{j\in\mathcal{I}_s} \boldsymbol{1}(y_j =A_{\ell}), with \mathcal{I}_s=\lbrace j:p_{\cdot,j}>s\rbrace: consider only the observations whose probability is sufficiently high, and predict the most important class (majority rule)
  3. \widehat{y}_{\cdot}=y_{i^\star} where {i^\star}=\underset{i=1,\cdots,n}{\text{argmax}}\{\theta_i\} where \theta_i=\cos(\boldsymbol{p}_{\cdot},\boldsymbol{p}_{i})=\displaystyle{\frac{\boldsymbol{p}_{\cdot}\cdot\boldsymbol{p}_{i}}{\|\boldsymbol{p}_{\cdot}\|\|\boldsymbol{p}_{i}\|}}
  4. \widehat{y}_{\cdot}=y_{i^\star} where {i^\star}=\underset{i=1,\cdots,n}{\text{argmin}}\{KL_{\cdot|i}\} and KL_{\cdot|i}=\displaystyle{\sum_{j=1}^n p_{\cdot,j}\log\frac{p_{\cdot,j}}{p_{i,j}}} (but one can select another metric)
  5. \widehat{y}_{\cdot}=y_{j^\star} where {j^\star}=\underset{j\in\mathcal{J}}{\text{argmax}}\{p_{\cdot,j}\} and \mathcal{J} is a random sample of \kappa observations, one in each group (one-shot procedure): the predicted class is the one of the observation that is the most likely to be in the same class

Heuristically, it can be related to some k-nearest-neighbors strategy: we assign the label that most neighbors have, where the total distance is (for the logistic regression) a weighted sum of the componentwise distances.

  • Simulation study

In order to test that technique, let us generate some multinomial model where y has 10 possible labels, with 6 (independent) covariates x_1,\cdots,x_6, and \mathbb{P}[Y=A_k|\boldsymbol{X}=\boldsymbol{x}]\propto \exp[\boldsymbol{x}^\top\boldsymbol{\beta}_k] (where the coefficients \boldsymbol{\beta}_k were generated randomly) for k\in\{1,2,\cdots,10\}, with n=700 observations.

n=700
X1=rnorm(n)
X2=rnorm(n)
X3=rnorm(n)
X4=rnorm(n)
X5=rnorm(n)
X6=rnorm(n)
X=cbind(1,X1,X2,X3,sqrt(abs(X4)),X5*X1,X6)
k = 10
 PARAM = matrix(rnorm(k*6),k,6)
 PARAM[,1]=PARAM[,1]-1
 PARAM=cbind(PARAM,0)
 P=matrix(NA,n,k-1)
 for(j in 1:(k-1)) P[,j] = X %*% (PARAM[j,])+rnorm(n)
 P=cbind(P,0)
S=apply(exp(P),1,sum)
Pb = exp(P)/S
tirage = function(i){
      sample(1:10,size=1,prob = Pb[i,])
}
Y = LETTERS[Vectorize(tirage)(1:n)]
dbase = data.frame(Y=as.factor(Y),X1,X2,X3,X4,X5,X6)

In the previous paragraph, I suggested taking the most likely one. Being wrong means that it was not the first choice. But perhaps being the second or the third choice is not that bad, actually. So in my simulations, I look at the proportion of predictions where our prediction is the correct one (top 1), where the true one is either the most likely or the second most likely (top 2), or in the top 3. That will be on my x-axis. I draw lines, but we simply have three points (top 1, top 2 and top 3). I compute the proportion of correct predictions, using cross-validation techniques (10-fold). The black lines correspond to the methods described above. The red one is the standard multinomial model (with a logistic link function). For the Siamese model, I tried several models. I tried a logistic regression, and some smooth version (GAM) on top

and a classification tree, on the left, as well as some random forest on the right, below.

It looks like the multinomial approach always performs better than any Siamese one… and to be honest, I am disappointed.

Here is the code I used when I considered a logistic regression on the Siamese dataset,

library(nnet)   # for multinom
credit = dbase  # the code below uses 'credit' for the simulated dataset
set.seed(1)
kfold = sample(rep(1:10,n/10))
 
KFOLDglm = function(i){
i_test=which(kfold==i)
i_calibration=which(kfold!=i)
y=credit[i_calibration,"Y"]
tirage = function(){
v=c(sample(i_calibration[y==levels(y)[1]],size=1),
    sample(i_calibration[y==levels(y)[2]],size=1),
    sample(i_calibration[y==levels(y)[3]],size=1),
    sample(i_calibration[y==levels(y)[4]],size=1),
    sample(i_calibration[y==levels(y)[5]],size=1),
    sample(i_calibration[y==levels(y)[6]],size=1),
    sample(i_calibration[y==levels(y)[7]],size=1),
    sample(i_calibration[y==levels(y)[8]],size=1),
    sample(i_calibration[y==levels(y)[9]],size=1),
    sample(i_calibration[y==levels(y)[10]],size=1))
names(v)=levels(y)
return(v)
}
 
LogisticModel <- multinom(Y ~ ., data = credit[i_calibration,], trace=FALSE)
 
comparaisonx = function(base,x=base[1,]){
  mix_base = base
  for(j in 1:ncol(base)){
    xj = as.numeric(x[j])-base[,j]
    mix_base[,j] = (xj)
  }
  mix_base
}
comparaisony = function(base,y=base[1]){
  as.factor(base == y)
}
creditx = credit[,-which(names(credit) == "Y")]
nc=length(i_calibration)
B=comparaisonx(base = creditx[i_calibration[2:nc],],x=creditx[i_calibration[1],])
B$Y=comparaisony(base = credit[i_calibration[2:nc],"Y"],y=credit[i_calibration[1],"Y"])
for(i in 2:(nc-1)){
  B0=comparaisonx(base = creditx[i_calibration[(i+1):nc],],x=creditx[i_calibration[i],])
  B0$Y=comparaisony(base = credit[i_calibration[(i+1):nc],"Y"],y=credit[i_calibration[i],"Y"])
  B=rbind(B,B0)
}
credit_mix = B
 
OneShotLogisticModel <- glm(Y ~ ., data = credit_mix, family=binomial)
A_ref = table(credit[i_calibration,"Y"])/length(i_calibration)
 
vect_oneshot = function(i){
  B2=comparaisonx(base = creditx[i_calibration,],x=creditx[i,])
  predict(OneShotLogisticModel,type="response",newdata=B2)
}
 
prediction_oneshot = function(i,type=1){
B2=comparaisonx(base = creditx[i_calibration,],x=creditx[i,])
p=predict(OneShotLogisticModel,type="response",newdata=B2)
y=credit[i_calibration,"Y"]
base = data.frame(p,y)
base = base[rev(order(base$p)),]
if(type==1){T = table(base$y[1:11])
return(names(which.max(T)))}
if(type==2){return(base$y[1])}
if(type==3){A=table(base$y[1:10])/10
T=A/A_ref
return(names(which.max(T)))}
if(type==4){
  costheta = rep(NA,length(i_calibration))
  for(j in 1:length(i_calibration)){
    vecteur_proba = vect_oneshot(i_calibration[j])
    costheta[j] = sum(vecteur_proba*p)/(sqrt(sum(vecteur_proba^2))*sqrt(sum(p^2)))
  }
  return(y[which.max(costheta)])}
if(type==5){
  kl = rep(NA,length(i_calibration))
  for(j in 1:length(i_calibration)){
    vecteur_proba = vect_oneshot(i_calibration[j])
    kl[j] = sum(p*log(vecteur_proba/p))
  }
  return(y[which.max(as.vector(kl))])}
if(type==6){ ## one shot : tirer au hasard un de chaque, et dire lequel est plus credible !
 
y=credit[i_calibration,"Y"]
tirage = function(){
    v=c(sample(i_calibration[y==levels(y)[1]],size=1),
        sample(i_calibration[y==levels(y)[2]],size=1),
        sample(i_calibration[y==levels(y)[3]],size=1),
        sample(i_calibration[y==levels(y)[4]],size=1),
        sample(i_calibration[y==levels(y)[5]],size=1),
        sample(i_calibration[y==levels(y)[6]],size=1),
        sample(i_calibration[y==levels(y)[7]],size=1),
        sample(i_calibration[y==levels(y)[8]],size=1),
        sample(i_calibration[y==levels(y)[9]],size=1),
        sample(i_calibration[y==levels(y)[10]],size=1))
    names(v)=levels(y)
    return(v)
  }
  pd=rep(NA,101)
for(ix in 1:101){
ids = tirage()
B2=comparaisonx(base = creditx[ids,],x=creditx[i,])
p=predict(OneShotLogisticModel,type="response",newdata=B2)
pd[ix]=levels(y)[which.max(p)]
}
levels(y)[which.max(table(pd))]
}
}
PRED0=as.character(credit[i_test,"Y"])
PRED1=as.character(predict(LogisticModel,type = "class",
                           newdata=credit[i_test,]))
PRED21=as.character(Vectorize(function(i) prediction_oneshot(i,type=1))
                    (i_test))
PRED22=as.character(Vectorize(function(i) prediction_oneshot(i,type=2))(i_test))
PRED23=as.character(Vectorize(function(i) prediction_oneshot(i,type=3))(i_test))
PRED24=as.character(Vectorize(function(i) prediction_oneshot(i,type=4))(i_test))
PRED25=as.character(Vectorize(function(i) prediction_oneshot(i,type=5))(i_test))
PRED26=as.character(Vectorize(function(i) prediction_oneshot(i,type=6))(i_test))
B=data.frame(PRED0,PRED1,PRED21,PRED22,PRED23,PRED24,PRED25,PRED26)
B
}
 
pb = txtProgressBar(min=0, max=1, style=3)
PREDICTION = KFOLDglm(1); setTxtProgressBar(pb, 1/10)
for(i in 2:10){
  PREDICTION = rbind(PREDICTION,KFOLDglm(i))
  setTxtProgressBar(pb, i/10)}
for(j in 1:8) PREDICTION[,j]=as.character(PREDICTION[,j])
L=list()
v=mean(PREDICTION[,1]!=PREDICTION[,2])
names(v)="logistic"
L[["logistic"]]=v
v=c(mean(PREDICTION[,1]!=PREDICTION[,3]),
  mean(PREDICTION[,1]!=PREDICTION[,4]),
  mean(PREDICTION[,1]!=PREDICTION[,5]),
  mean(PREDICTION[,1]!=PREDICTION[,6]),
  mean(PREDICTION[,1]!=PREDICTION[,7]),
  mean(PREDICTION[,1]!=PREDICTION[,8]))
names(v)=c("top10","max","10norm","cos","KL","OS")
L[["glm"]]=v

Some general thoughts on Partial Dependence Plots with correlated covariates

The partial dependence plot is a nice tool to analyse the impact of some explanatory variables when using nonlinear models, such as a random forest, or some gradient boosting. The idea (in dimension 2) is the following: given a model m(x_1,x_2) for \mathbb{E}[Y|X_1=x_1,X_2=x_2], the partial dependence plot for variable x_1 in model m is the function p_1 defined as x_1\mapsto\mathbb{E}_{\mathbb{P}_{X_2}}[m(x_1,X_2)]. This can be approximated, using some dataset, by \widehat{p}_1(x_1)=\frac{1}{n}\sum_{i=1}^n m(x_1,x_{2,i}). My concern here is the interpretation of that plot when there are some (strongly) correlated covariates. Let us generate some dataset to start with

n=1000
library(mnormt)
r=.7
set.seed(1234)
X = rmnorm(n,mean = c(0,0),varcov = matrix(c(1,r,r,1),2,2))
Y = 1+X[,1]-2*X[,2]+rnorm(n)/2
df = data.frame(Y=Y,X1=X[,1],X2=X[,2])

As we can see, the true model here is y_i=\beta_0+\beta_1 x_{1,i}+\beta_2x_{2,i}+\varepsilon_i where \beta_1 =1 and \beta_2=-2; the two variables are positively correlated, and the second one has a strong negative impact. Note that here

reg = lm(Y~.,data=df)
summary(reg)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.01414    0.01601   63.35   <2e-16 ***
X1           1.02268    0.02305   44.37   <2e-16 ***
X2          -2.03248    0.02342  -86.80   <2e-16 ***

If we estimate a wrongly specified model y_i=b_0+b_1 x_{1,i}+\eta_i, we would get

reg1 = lm(Y~X1,data=df)
summary(reg1)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.03522    0.04680  22.121   <2e-16 ***
X1          -0.44148    0.04591  -9.616   <2e-16 ***

Thus, on the proper model, \widehat{\beta}_1\sim+1.02, while \widehat{b}_1\sim-0.44 on the misspecified model.

Now, let us look at the partial dependence plot of the correct model, using standard R dedicated packages,

library(pdp) 
pdp::partial(reg, pred.var = "X1", plot = TRUE,
              plot.engine = "ggplot2")

which is the straight line y=1+x, corresponding to y=\beta_0+\beta_1x.

library(DALEX)
plot(DALEX::single_variable(DALEX::explain(reg,
data=df),variable = "X1",type = "pdp"))

which corresponds to the previous graph. Here, it is also possible to create our own function to compute that partial dependence plot,

pdp1 = function(x1){
  nd = data.frame(X1=x1,X2=df$X2)
  mean(predict(reg,newdata=nd))
}

that will be the straight line below (the dotted line is the theoretical one, y=1+x),

vx=seq(-3.5,3.5,length=101)
vpdp1 = Vectorize(pdp1)(vx)
plot(vx,vpdp1,type="l")
abline(a=1,b=1,lty=2)

which is very different from the univariate regression on x_1

abline(reg1,col="red")

Actually, the latter is very consistent with a local regression, only on x_1

library(locfit)
lines(locfit(Y~X1,data=df),col="blue")

Now, to get back to the definition of the partial dependence plot, x_1\mapsto\mathbb{E}_{\mathbb{P}_{X_2}}[m(x_1,X_2)], in the context of correlated variables, I was wondering if it would not make more sense to consider some local version, something like x_1\mapsto\mathbb{E}_{\mathbb{P}_{X_2|X_1}}[m(x_1,X_2)]. My intuition was that, somehow, it did not make any sense to consider any X_2 while X_1 was fixed (and equal to x_1); it would make more sense to look at more plausible values of X_2, given the value of X_1. And a natural estimate could be based on the k nearest neighbors, i.e. \tilde{p}_1(x_1)=\frac{1}{k}\sum_{i\in\mathcal{V}_k(x_1)} m(x_1,x_{2,i}) where \mathcal{V}_k(x_1) is the set of indices of the k observations whose x_{1,i} are the closest to x_1, i.e.

lpdp1 = function(x1){
  nd = data.frame(X1=x1,X2=df$X2)
  idx = rank(abs(df$X1-x1))
  mean(predict(reg,newdata=nd[idx<50,]))
}
vlpdp1 = Vectorize(lpdp1)(vx)
lines(vx,vlpdp1,col="darkgreen",lwd=2)

Surprisingly (?), this local partial dependence plot gives a curve that corresponds to the simple regression…
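Actually, a quick computation, under the Gaussian design used in this simulation, suggests why: since (X_1,X_2) is bivariate normal with unit variances and correlation r, \mathbb{E}[X_2|X_1=x_1]=r\,x_1, so that \mathbb{E}_{\mathbb{P}_{X_2|X_1}}[m(x_1,X_2)]=\beta_0+\beta_1x_1+\beta_2\,r\,x_1=1+(1-2\times 0.7)x_1=1-0.4\,x_1, which is essentially the slope \widehat{b}_1\sim-0.44 obtained with the misspecified simple regression.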

Regression on correlated variables, a bit of geometry

Over the past few days, I have been posting entries on regression with correlated variables, starting with the second kiss-cool effect, followed by a short post on cascade regressions. And since the first one, my colleague Olivier keeps asking for more details. So tonight, let us draw some pictures…

Here we have three vectors, \color{red}{\vec{x_1}}, \color{green}{\vec{x_2}} and \color{blue}{\vec{y}}. As a reminder, the length of my vectors is the variance of the associated variable, \|{\color{red}{\vec{x_1}}}\|=\text{Var}({\color{red}{x_1}}). Let us start by assuming that the variables \color{red}{x_1} and \color{green}{x_2} are independent (or more simply, uncorrelated), that is {\color{red}{\vec{x_1}}}\perp\color{green}{\vec{x_2}}. This is the picture below,

Here, the multiple linear regression is equivalent to the simple regressions, as discussed in the previous posts. Geometrically, we see that the coordinates of the projection of \color{blue}{\vec{y}} onto the space spanned by \color{red}{\vec{x_1}} and \color{green}{\vec{x_2}} (that is, the (x,y) plane) correspond to the black dot. That is the multiple regression. The two simple regressions are obtained by projecting respectively onto the space spanned by \color{red}{\vec{x_1}} (that is, the (x) axis) and onto \color{green}{\vec{x_2}} (that is, the (y) axis).

Now suppose that \color{red}{\vec{x_1}} and \color{green}{\vec{x_2}} are no longer orthogonal. More precisely, we will keep \color{red}{\vec{x_1}} and \color{blue}{\vec{y}} unchanged, and we will change \color{green}{\vec{x_2}}, while keeping its norm \|\color{green}{\vec{x_2}}\| unchanged. This is what we have in the picture below

We saw that we could no longer run two independent simple regressions, but that we could keep the first one, and then run a second one, provided we are a bit careful. For the first regression, we project \color{blue}{\vec{y}} onto \color{red}{\vec{x_1}}, and that part is easy, nothing has changed,

For the second step, recall that we have to project both \color{blue}{\vec{y}} and \color{green}{\vec{x_2}} onto the orthogonal complement of \color{red}{\vec{x_1}}, that is, here, the (y,z) plane, in red below

(I refer to the previous post for the algebraic formulas). We then obtain the following projections, which we could denote \Pi_{\color{red}{x_1^{\perp}}}\color{blue}{\vec{y}} and \Pi_{\color{red}{x_1^{\perp}}}\color{green}{\vec{x_2}}

As we can see, if we look at the back wall, in red, the projection of \Pi_{\color{red}{x_1^{\perp}}}\color{blue}{\vec{y}} onto \Pi_{\color{red}{x_1^{\perp}}}\color{green}{\vec{x_2}} is a bit larger; in other words, the coefficient of the simple regression will be a bit larger. Actually, if we want to do the computations, we see that the sine of the angle \langle\color{red}{\vec{x_1}},\color{green}{\vec{x_2}}\rangle comes into play here… The smaller the angle \langle\color{red}{\vec{x_1}},\color{green}{\vec{x_2}}\rangle, the more correlated the variables, and the larger the correction. Graphically, the correction I am talking about is the small segment, on the left of the picture
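To see where that sine comes from (a one-line sketch): projecting onto the orthogonal complement of \color{red}{\vec{x_1}} shortens \color{green}{\vec{x_2}} by a factor equal to the sine of the angle, \|\Pi_{\color{red}{x_1^{\perp}}}\color{green}{\vec{x_2}}\|=\|\color{green}{\vec{x_2}}\|\cdot\sin(\langle\color{red}{\vec{x_1}},\color{green}{\vec{x_2}}\rangle), which is how the sine (and hence the correlation) enters the correction.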

But let us simulate some data to visualize all this…

library(mnormt)
n=10000
r=0
set.seed(1)
Z=cbind(rnorm(n)*sqrt(2),rnorm(n)*sqrt(3))
df=data.frame(y=Z[,1]+Z[,2]+rnorm(n)*sqrt(3),x1=Z[,1],x2=Z[,2])

Here we simulate the explanatory variables independently. And we can check that the distances are indeed those of our drawing

var(df$x1)
[1] 2.049731
var(df$x2)
[1] 2.810419
var(df$y)
[1] 8.066665

and moreover, the variables are almost orthogonal

cor(df$x1,df$x2)
[1] 0.004845078

If we run the multiple regression

reg=lm(y~0+x1+x2,data=df)
summary(reg)
 
Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
x1  0.99904    0.01219   81.94   <2e-16 ***
x2  1.00090    0.01017   98.40   <2e-16 ***

The coefficients are identical (or almost; the small discrepancy comes from the fact that I removed the constant from the model, to stick as closely as possible to my drawing)

reg1=lm(y~0+x1,data=df)
summary(reg1)
 
Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
x1  1.00489    0.01711   58.75   <2e-16 ***

and for the second regression as well

reg2=lm(y~0+x2,data=df)
summary(reg2)
 
Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
x2 -1.98916    0.03528  -56.39   <2e-16 ***

We have already discussed this result at length (I refer to the previous post, on cascade regressions). Now let us look at what happens if we keep our observations \color{blue}{y_i} and \color{red}{x_{1,i}}, but the variable \color{green}{x_{2,i}} now becomes correlated with \color{red}{x_{1,i}}, say with a correlation of about 0.4

r=.4
df$x2=r*df$x1+sqrt(1-r^2)*df$x2
cor(df$x1,df$x2)
[1] 0.3461496

Running the multiple regression now gives

reg=lm(y~0+x1+x2,data=df)
summary(reg)
 
Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
x1   0.5622     0.0130   43.26   <2e-16 ***
x2   1.0921     0.0111   98.40   <2e-16 ***

which differs from the simple regressions, such as the first one

reg1=lm(y~0+x1,data=df)
summary(reg1)
 
Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
x1  1.00489    0.01711   58.75   <2e-16 ***

What the Frisch-Waugh theorem tells us is that we can run cascade regressions: we can keep this first regression, but for the second one, we regress the projections onto the orthogonal complement of \color{red}{\vec{x_1}} (the red wall at the back of our drawing). Here, we obtain

reg2bis=lm(residuals(lm(y~x1,data=df))~residuals(lm(x2~x1,data=df)))
summary(reg2bis)
 
Coefficients:
                                  Estimate Std. Error t value Pr(>|t|)    
residuals(lm(x2 ~ x1, data = df))   1.0921     0.0111    98.4   <2e-16 ***

which indeed gives a regression coefficient slightly larger than the one we had before, as we observed on the drawing. In fact, we can note that the relative difference is easily obtained from the correlation

1/sqrt(1-r^2)
[1] 1.091089

and as a reminder, from a geometric point of view, the correlation is the cosine of an angle, r=\cos(\langle\color{red}{\vec{x_1}},\color{green}{\vec{x_2}}\rangle). In other words, \sqrt{1-r^2}=\sin(\langle\color{red}{\vec{x_1}},\color{green}{\vec{x_2}}\rangle), that is, the correction is directly related to the sine of the angle \langle\color{red}{\vec{x_1}},\color{green}{\vec{x_2}}\rangle, as mentioned above… fun, isn’t it?

Cascade regressions

This past weekend, I posted a short entry on the second kiss-cool effect, where I recalled that running a regression on several correlated explanatory variables is not equivalent to running several simple regressions. That is what the Frisch-Waugh theorem says, published almost 90 years ago. But I hinted that it was possible to run cascade regressions, without going much further. One should reread Michael Lovell’s paper, which gives a geometric interpretation of this idea. But let us code it, to see it… Consider a dataset with 3 explanatory variables, related to the number of fires in Chicago

chicago=read.table("http://freakonometrics.free.fr/chicago.txt", header=TRUE,sep=";")

The multiple regression is written here y_i=\beta_0+\beta_1 x_{1,i}+\beta_2 x_{2,i}+\beta_3 x_{3,i}+\varepsilon_i. We can estimate these parameters by least squares,

reg=lm(Fire~.,data=chicago)
summary(reg)
 
Coefficients:
            Estimate Std. Error t value  Pr(>|t|)    
(Intercept) 22.07525    6.19447   3.564 0.000910 ***
X_1         -0.62764    5.28130  -0.119 0.905953    
X_2          0.22378    0.06161   3.632 0.000744 ***
X_3         -1.55059    0.38195  -4.060 0.000204 ***

and we then have the following prediction
\widehat{y}_i=\widehat{\beta}_0+\widehat{\beta}_1 x_{1,i}+\widehat{\beta}_2 x_{2,i}+\widehat{\beta}_3 x_{3,i}. Actually, I can even visualize this prediction.

BC = cbind(x0=coefficients(reg)[1],
           x1=coefficients(reg)[2]*chicago[,2],
           x2=coefficients(reg)[3]*chicago[,3],
           x3=coefficients(reg)[4]*chicago[,4],
           yp=predict(reg),y=chicago$Fire)
colr = 1:4
dessin_fire = function(i,M=BC){
plot(0:3,0:3,xlim=c(0,35),col="white",ylim=c(-1,3),xlab="",ylab="",axes=FALSE)
abline(v=M[i,1],lty=2)
rect(M[i,1],3,M[i,1]+M[i,2],2,col=colr[1],border=NA)
arrows(M[i,1],2.5,M[i,1]+M[i,2],2.5,lwd=2,length=.1,col="white")
rect(M[i,1]+M[i,2],2,M[i,1]+M[i,2]+M[i,3],1,col=colr[2],border=NA)
arrows(M[i,1]+M[i,2],1.5,M[i,1]+M[i,2]+M[i,3],1.5,lwd=2,length=.1,col="white") 
rect(M[i,1]+M[i,2]+M[i,3],1,M[i,1]+M[i,2]+M[i,3]+M[i,4],0,col=colr[3],border=NA)
segments(M[i,5],0,M[i,5],1,lwd=3)
arrows(M[i,1]+M[i,2]+M[i,3],.5,M[i,1]+M[i,2]+M[i,3]+M[i,4],.5,lwd=2,length=.1,col="white") 
abline(v=M[i,1],lty=2)
rect(M[i,5],-1,M[i,6],0,col=colr[4],density=20,border=NA)
segments(M[i,6],0,M[i,6],-1,lwd=3,col=colr[4])
text(26,2.5,expression(X[1]),pos=4,col=colr[1])
text(26,1.5,expression(X[2]),pos=4,col=colr[2])
text(26,.5,expression(X[3]),pos=4,col=colr[3])
text(26,-.5,expression(epsilon),pos=4,col=colr[4])
axis(1)}

For instance, for the twelfth observation of my dataset

dessin_fire(12)

we obtain

We read from top to bottom: we start with the constant, around 22. Then we add next to nothing, because x_1 was not significant in our regression. Then we add a little something because of x_2, to reach about 25, and x_3 makes us plunge by more than 20 points, to end up around 3.67. The black segment is the prediction we obtain from the three variables. In blue, we can even see the error.

Note that x_1 is not significant, while being relatively correlated with y

cor(chicago[,1:2])
          Fire       X_1
Fire 1.0000000 0.3773486
X_1  0.3773486 1.0000000

Indeed, if x_1 is not significant, it is because this variable is strongly correlated with another explanatory variable. Actually, we can run a series of cascade regressions, starting with x_1; more specifically, we can write \widehat{y}_i=\underbrace{\widehat{b}_0+\widehat{b}_1 x_{1,i}}_{(1)}+\underbrace{\widehat{b}_{0,2}+\widehat{b}_2 \tilde{x}_{2,i}}_{(2)}+\underbrace{\widehat{b}_{0,3}+\widehat{b}_3 \tilde{x}_{3,i}}_{(3)} where the first term (1) is obtained by simply regressing y on x_1 (yes, just a simple regression), y_i=b_0+b_1 x_{1,i}+\eta_i, then we add a small correction, to take into account what x_2 tells us once the first regression has been run. That is exactly what the Frisch-Waugh theorem says: we project y onto x_1^{\perp} (since that is precisely what x_1 does not explain), and x_2 onto x_1^{\perp}, and we regress one projection on the other, that is, for (2), \Pi_{x_1^{\perp}}y_i=b_{0,2}+b_2 \Pi_{x_1^{\perp}}x_{2,i}+\eta_{2,i}, and then, for (3), \Pi_{(x_1,x_2)^{\perp}}y_i=b_{0,3}+b_3 \Pi_{(x_1,x_2)^{\perp}}x_{3,i}+\eta_{3,i}. In terms of code, this gives

reg1=lm(Fire~X_1,data=chicago)
reg2=lm(residuals(lm(Fire~X_1,data=chicago))~residuals(lm(X_2~X_1,data=chicago)))
reg3=lm(residuals(lm(Fire~X_1+X_2,data=chicago))~residuals(lm(X_3~X_1+X_2,data=chicago)))

that is, here

BC123=cbind(x0=coefficients(reg1)[1],
            x1=coefficients(reg1)[2]*chicago[,2],
            x2=coefficients(reg2)[1]+coefficients(reg2)[2]*residuals(lm(X_2~X_1,data=chicago)),
            x3=coefficients(reg3)[1]+coefficients(reg3)[2]*residuals(lm(X_3~X_1+X_2,data=chicago)),
            yp=predict(reg),y=chicago$Fire)

Adapting the previous plotting function (a minimal sketch is given below), we obtain
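Here is a minimal sketch of that adaptation (assuming it simply reuses dessin_fire on the cascade decomposition BC123, which has the same column layout):

# hypothetical adaptation: same drawing, applied to the cascade decomposition
dessin_fire_123 = function(i) dessin_fire(i, M = BC123)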

dessin_fire_123(12)

In other words, we start with a simple regression, in red: we start from a constant around 3.5, then we increase our prediction, taking x_1 into account. Then we correct with x_2 (or rather with the part of x_2 not explained by x_1), on what had not been explained so far, which leads us to revise our prediction slightly downwards; then we take x_3 into account (or, again, its projection onto the orthogonal complement of x_1 and x_2). Note that the prediction is exactly the same as before.

But we can go further… why start with x_1? We could also start with x_2. We then consider \widehat{y}_i=\underbrace{\widehat{b}_0+\widehat{b}_2 x_{2,i}}_{(2)}+\underbrace{\widehat{b}_{0,1}+\widehat{b}_1 \tilde{x}_{1,i}}_{(1)}+\underbrace{\widehat{b}_{0,3}+\widehat{b}_3 \tilde{x}_{3,i}}_{(3)} where the first term (2) is obtained by simply regressing y on x_2, y_i=b_0+b_2 x_{2,i}+\eta_i, then we add a small correction, to take into account what x_1 tells us once the first regression has been run, for (1), \Pi_{x_2^{\perp}}y_i=b_{0,1}+b_1\Pi_{x_2^{\perp}}x_{1,i}+\eta_{1,i}, and then, for (3), \Pi_{(x_1,x_2)^{\perp}}y_i=b_{0,3}+b_3 \Pi_{(x_1,x_2)^{\perp}}x_{3,i}+\eta_{3,i}. That is,

reg1=lm(Fire~X_2,data=chicago)
reg2=lm(residuals(lm(Fire~X_2,data=chicago))~residuals(lm(X_1~X_2,data=chicago)))
reg3=lm(residuals(lm(Fire~X_1+X_2,data=chicago))~residuals(lm(X_3~X_1+X_2,data=chicago)))
BC213=cbind(x0=coefficients(reg1)[1],
            x1=coefficients(reg1)[2]*chicago[,3],
            x2=coefficients(reg2)[1]+coefficients(reg2)[2]*residuals(lm(X_1~X_2,data=chicago)),
            x3=coefficients(reg3)[1]+coefficients(reg3)[2]*residuals(lm(X_3~X_1+X_2,data=chicago)),
            yp=predict(reg),y=chicago$Fire)

and visually, we obtain (using the same kind of adaptation for dessin_fire_213, sketched below)
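(Again a minimal sketch, assuming the same kind of adaptation as above:)

# hypothetical adaptation, for the decomposition starting with x2
dessin_fire_213 = function(i) dessin_fire(i, M = BC213)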

dessin_fire_213(12)

This time, the constant is a bit higher, and we start with a regression on x_2 only, to get slightly below 10. Then we correct. Note that this second correction brings us back… to exactly the same place as in the previous case. But if we think in terms of successive projections, this should not be a surprise.

We can recap with another prediction, either with the multiple regression model (but here again, x_1 does not explain much),

dessin_fire(15)

or starting with a simple regression on x_1, and then running cascade regressions,

dessin_fire_123(15)

or, alternatively, starting with a simple regression on x_2

dessin_fire_213(15)

As announced, these three approaches are equivalent, and give exactly the same prediction.

Regression discontinuity model for TV series

In September, we are usually happy to see our favorite TV series back on air… Or not? Because, admit it, if we are happy to see those characters back, most of the time, we are disappointed, too. So why not look at the data, to confirm this feeling? Nazareno Andrade shared some nice codes to get IMDB ratings in a nice csv file (you can either use the large csv file, or run your own codes)

download.file("https://github.com/nazareno/imdb-series/raw/master/data/series_from_imdb.csv",
destfile="series_from_imdb.csv")
base = read.csv("series_from_imdb.csv")

It is a large dataset, with more than 64,000 episodes of almost 890 TV series,

str(base)
'data.frame':	64018 obs. of  18 variables:
 $ series_name: Factor w/ 889 levels "'Allo 'Allo!",..: 137 137 137 137 137 137 137 137 137 137 ...
 $ episode    : Factor w/ 54090 levels "-30-","¡Viva los muertos!",..: 32314 7446 16 7176 17748 9562 1379 36218 17845 5553 ...
 $ series_ep  : int  1 2 3 4 5 6 7 8 9 10 ...
 $ season     : int  1 1 1 1 1 1 1 2 2 2 ...
 $ season_ep  : int  1 2 3 4 5 6 7 1 2 3 ...
 $ user_rating: num  8.9 8.7 8.7 8.2 8.3 9.2 8.8 8.7 9.2 8.3 ...

Just pick a TV series, for instance Dan Harmon’s Community,

sbase = base[base$series_name=="Community",]

We can plot the evolution of the rating over the 110 episodes.

sbase=sbase[!duplicated(sbase[,c(1,2,4,5)]),]
sbase$series_ep=1:nrow(sbase)

(since there could be some problems with the data, such as duplicates, let us clean it quickly, as done above)

plot(sbase$series_ep,sbase$UserRating,xlab=sbase$series_name[1])
idx=c(0,which(diff(sbase$season)!=0),nrow(sbase))
abline(v=idx+.5,lty=2,col=colr[2])
a = unique(sbase$season)
for(u in a){
  ssbase = sbase[sbase$season==u,]
  reg = lm(UserRating~series_ep,data=ssbase)
  lines(ssbase$series_ep,predict(reg),col=colr[3],lwd=2)
}

The vertical lines are here to visualize the seasons. One issue is that the length of the seasons can vary over time. Consider Linwood Boomer’s Malcolm in the Middle,

sbase = base[base$series_name=="Malcolm in the Middle",]

or Craig Thomas and Carter Bays’s How I Met Your Mother,

sbase = base[base$series_name=="How I Met Your Mother",]

On those two, the evolution is rather stable. Look at AMC’s The Walking Dead,

sbase = base[base$series_name=="The Walking Dead",]

Now, look at Howard Gordon and Alex Gansa’s Homeland,

sbase = base[base$series_name=="Homeland",]

There is an issue here with the last episode of season 4, “Long Time Coming“, which has a very poor rating. If we remove that point, we get the thin line. Note that the regression line is always increasing. For Michael Hirst’s Vikings, we have

sbase = base[base$series_name=="Vikings",]

If we look more carefully at the previous graph, for five seasons (out of six), we have a positive slope. Well, to be honest, it is not significantly positive most of the time, but still. Out of 80 shows, and a total of 583 seasons, the slope is positive 75% of the time (433) and negative 25% of the time (150).

BASE = NULL
L80 = unique(base$series_name)   # the list of series considered (80 series in the post)
for(j in 1:length(L80)){
sbase=base[base$series_name==L80[j],]
sbase=sbase[!duplicated(sbase[,c(1,2,4,5)]),]
sbase=sbase[sbase$season>0,]
sbase$series_ep=1:nrow(sbase)
a=unique(sbase$season)
a=a[!is.na(a)]
for(u in a){
  ssbase=sbase[sbase$season==u,]
  reg=lm(UserRating~series_ep,data=ssbase)
  pente = NA
  if((!is.na(coefficients(reg)[2]))&(!is.na((summary(reg)$coefficients[2,4])))){
  if((summary(reg)$coefficients[2,4]<.05)&(coefficients(reg)[2]>0)) pente="positive"
  if((summary(reg)$coefficients[2,4]<.05)&(coefficients(reg)[2]<0)) pente="negative"
  sdf=data.frame(nom=sbase$series_name[1],season=u,slope=coefficients(reg)[2],
                 inf=confint(reg)[2,1],sup=confint(reg)[2,2],signe=pente)
  BASE=rbind(BASE,sdf)}
}}
 
str(BASE)
'data.frame':	583 obs. of  6 variables:
 $ nom   : Factor w/ 80 levels "Friends","Game of Thrones",..: 1 1 1 1 1 1 1 1 1 1 ...
 
mean(BASE$slope>0)
[1] 0.7427101
table(BASE$signe)
negative positive 
      15      144

Most of the time, the slope is not significant; to be more specific, it is not significant 72% of the time. But when it is, it is positive 90% of the time (144 seasons). Let us look at other TV series, for instance Joel Surnow and Robert Cochran’s 24,

sbase = base[base$series_name=="24",]

Álex Pina’s La Casa de Papel,

sbase = base[base$series_name=="La Casa de Papel",]

Steven Knight’s Peaky Blinders,

sbase = base[base$series_name=="Peaky Blinders",]

or David Simon’s The Wire,

sbase = base[base$series_name=="The Wire",]

The slope is increasing over almost all seasons. But a major drawback is that when we get back to our show, for a new season, we are usually disappointed. More specifically, we can quantify the difference in red below

that can be estimated using

ij = 1 # pick a pair of consecutive seasons (here the first one, for illustration)
sbase12 = sbase[sbase$season%in%c(a[ij],a[ij+1]),]
seuil = sbase12$series_ep[which(diff(sbase12$season)!=0)]+.5
s = function(x) (x-seuil)*(x>seuil)
reg = lm(UserRating~series_ep+s(series_ep)+I(series_ep>seuil),data=sbase12)

Here we have

summary(reg)
Coefficients:
                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)                8.45000    0.16338  51.719   <2e-16 ***
series_ep                  0.10000    0.03235   3.091 0.008598 ** 
s(series_ep)               0.02000    0.04218   0.474 0.643291    
I(series_ep > seuil)TRUE  -1.01778    0.20486  -4.968 0.000257 ***

so the drop of one point (out of 10) can indeed be claimed to be significant. That is the idea of regression discontinuity.

If we loop again over all our series (a sketch of that loop is given below), we have 485 pairs of consecutive seasons. As expected, in 75% of the cases, from season t-1 to season t, we observe a negative rupture. As previously, in 70% of the cases, it is not significant (with linear models before and after), and when it is significant, it is negative in 96% of the cases! But an alternative can be to use nonparametric models, on both sides.
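Here is a minimal sketch of what such a loop could look like (not the exact code used in the post: it reuses the piecewise-linear model above, and the list L80 of series considered previously):

RUPTURE = NULL
for(j in 1:length(L80)){
  sbase = base[base$series_name==L80[j],]
  sbase = sbase[!duplicated(sbase[,c(1,2,4,5)]),]
  sbase = sbase[!is.na(sbase$season) & sbase$season>0,]
  sbase$series_ep = 1:nrow(sbase)
  a = unique(sbase$season)
  if(length(a)>1) for(ij in 1:(length(a)-1)){
    sbase12 = sbase[sbase$season %in% c(a[ij],a[ij+1]),]
    seuil = sbase12$series_ep[which(diff(sbase12$season)!=0)]+.5
    if(length(seuil)!=1) next
    s = function(x) (x-seuil)*(x>seuil)
    reg = lm(UserRating~series_ep+s(series_ep)+I(series_ep>seuil), data=sbase12)
    cf = summary(reg)$coefficients
    if(nrow(cf)==4) RUPTURE = rbind(RUPTURE,
      data.frame(nom=L80[j], saut=cf[4,1], pval=cf[4,4]))
  }
}
mean(RUPTURE$saut<0)   # proportion of negative jumps between consecutive seasons
mean(RUPTURE$pval<.05) # proportion of significant jumps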

To illustrate, consider David Benioff and D. B. Weiss’s Game of Thrones,

sbase = base[base$series_name=="Game of Thrones",]

but let us remove the last season (no spoiler here, but clearly not worth watching)

Consider for instance the drop between season 1 and season 2,

library(rdd)
sbase12=sbase[sbase$season%in%c(1,2),]
lmr=RDestimate(UserRating~series_ep,data=sbase12,cutpoint=mean(range(sbase12$series_ep)))
plot(lmr)

This is very consistent with what we observed with our linear regressions actually,

seuil=10.5
s = function(x) (x-seuil)*(x>seuil)
reg = lm(UserRating~series_ep+s(series_ep)+I(series_ep>seuil),data=sbase12)
summary(reg)
 
Coefficients:
                          Estimate Std. Error t value Pr(>|t|)    
(Intercept)                8.70000    0.15458  56.281   <2e-16 ***
series_ep                  0.07273    0.02491   2.919  0.01003 *  
s(series_ep)               0.01455    0.03523   0.413  0.68520    
I(series_ep > seuil)TRUE  -0.94000    0.20316  -4.627  0.00028 ***

Here, the drop of one point is significant…

So, your favorite show had an outstanding finale? And you can’t wait to watch the new season… Well, statistically, it’s very likely that you will be disappointed by the first episode of the forthcoming season…

On the second kiss-cool effect (multiple regression, scoring and evaluation)

When I was a kid (a very long time ago, at a time when I watched quite a lot of television), there was an advertisement for kiss cool mints,

And when I present multiple regression to my students, I cannot help thinking about it… But before going further with the parallel, let us do a bit of mathematics.

Regression techniques give us nice theorems, often with an incredibly broad scope (up to a few small technical assumptions). For instance the Frisch-Waugh theorem, in multiple regression, which I have already discussed in older posts. One of its corollaries is that when the explanatory variables in a regression model are orthogonal, the multiple regression corresponds to a collection of simple regressions (in other words, the least-squares estimators coincide). Formally, if we consider the model y_i=\beta_0+\beta_1x_{1,i}+\beta_2x_{2,i}+\varepsilon_i (with the usual assumptions of regression models), then, if the variables x_1 and x_2 are uncorrelated, \widehat{\beta}_1 coincides with \widehat{b}_1 in the model y_i=b_0+b_1x_{1,i}+\eta_i. We can run a small simulation to confirm this (otherwise, of course, we can look at the proof, which can be found in any econometrics textbook, and which is actually a simple result of linear algebra, or geometry, with successive projections onto orthogonal subspaces – even if the geometric approach is due to Michael Lovell).

library(mnormt)
r = 0
S = matrix(c(1,r,r,1),2,2)
n = 1000
set.seed(1)
X = rmnorm(n,c(0,0),S)
E = rnorm(n,0,.3)
Y = 2+X[,1]-2*X[,2]+E
base = data.frame(Y=Y-mean(Y),X1=X[,1]-mean(X[,1]),X2=X[,2]-mean(X[,2]))

A small technical note: I will center the variables, so that I do not have to keep the constant in my model (it would make the notation heavier, and potentially muddle the post a little). The constant is zero here, as we can see,

reg = lm(Y~X1+X2,data=base)
summary(reg)
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  7.449e-18  9.777e-03     0.0        1    
X1           1.012e+00  9.171e-03   110.3   <2e-16 ***
X2          -1.988e+00  9.719e-03  -204.6   <2e-16 ***

In short, I run the regression without a constant

reg = lm(Y~0+X1+X2,data=base)
summary(reg)
 
Coefficients:
    Estimate Std. Error t value Pr(>|t|)    
X1  1.011520   0.009166   110.3   <2e-16 ***
X2 -1.988321   0.009714  -204.7   <2e-16 ***

Now, let us look at the two simple regressions

reg1 = lm(Y~0+X1,data=base)
summary(reg1)
 
Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1  1.01300    0.06006   16.86   <2e-16 ***

when we regress only on the first variable; and for the second one, we obtain

reg2 = lm(Y~0+X2,data=base)
summary(reg2)
 
Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X2 -1.98916    0.03528  -56.39   <2e-16 ***

In other words, our estimators are very close (if we had kept the constant, they would coincide).

Now, the big problem is that this result no longer holds when the explanatory variables are correlated. The Frisch-Waugh theorem explains how these estimators diverge, but the point here is that if the variables are correlated, using the simple regressions gives two biased estimators of the true parameters (of the multiple model). Let us redo the previous exercise

r=.9
S=matrix(c(1,r,r,1),2,2)
set.seed(1)
X=rmnorm(n,c(0,0),S)
Y = 2+X[,1]-2*X[,2]+E
base = data.frame(Y=Y-mean(Y),X1=X[,1]-mean(X[,1]),X2=X[,2]-mean(X[,2]))
reg = lm(Y~X1+X2,data=base)
summary(reg)
reg = lm(Y~0+X1+X2,data=base)
summary(reg)
 
Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1  0.98740    0.02205   44.79   <2e-16 ***
X2 -1.97321    0.02229  -88.54   <2e-16 ***

(we recover values close to the ones used to simulate our data, so everything is fine). On the other hand, for the simple regressions, we obtain

reg1 = lm(Y~0+X1,data=base)
summary(reg1)
 
Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X1 -0.78784    0.02726   -28.9   <2e-16 ***

and

reg2 = lm(Y~0+X2,data=base)
summary(reg2)
 
Coefficients:
   Estimate Std. Error t value Pr(>|t|)    
X2 -1.06543    0.01607  -66.31   <2e-16 ***

In other words, \widehat{b}_1\neq \widehat{\beta}_1 (and the same for the second one). Which means that if we build a prediction from this model, \widetilde{y}_i=\widehat{b}_1x_{1,i}+\widehat{b}_2x_{2,i}, we can potentially be very far from the “right” prediction \widehat{y}_i=\widehat{\beta}_1x_{1,i}+\widehat{\beta}_2x_{2,i} (which is unbiased, etc; I refer here to any econometrics course). We can see it on a picture,

Yp=reg1$coefficients[1]*base$X1+reg2$coefficients[1]*base$X2
plot(base$Y,predict(reg),ylim=range(Yp),col=rgb(0,0,1,.5),cex=.7,xlab="observé",ylab="prédit")
abline(a=0,b=1,lty=2)
points(base$Y,Yp,col=rgb(1,0,0,.5),cex=.7)
abline(lm(predict(reg)~base$Y),col="blue")
abline(lm(Yp~base$Y),col="red")

In blue, we have the predictions from the multiple linear model, and in red, those obtained by running two independent regressions… If we look on the right, the red model largely over-values, or say over-estimates, while it under-values on the left. The gap between the two lines can be interpreted here as a bias. On the graph below, we can visualize the distribution of the \widetilde{y}_i=\widehat{b}_1x_{1,i}+\widehat{b}_2x_{2,i}, in red, and of the \widehat{y}_i=\widehat{\beta}_1x_{1,i}+\widehat{\beta}_2x_{2,i}, in blue. This excess of dispersion, of variance, that we observe on the red points, I interpret as polarization

plot(density(Yp),col="red",lwd=2)
lines(density(predict(reg)),col="blue",lwd=2)

Actually, what the Frisch-Waugh theorem tells us (and I refer to my previous post for more details) is that we are allowed to run several regressions, but in cascade! and definitely not independently: I can explain y with the first variable x_1, and then regress the residual (what could not be explained) on the second one, x_2. This method gives the same prediction as the multiple model.
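A quick check, on the correlated data simulated above, that the cascade does reproduce the multiple regression exactly, while the two independent simple regressions do not:

reg   = lm(Y~0+X1+X2, data=base)
casc2 = lm(residuals(lm(Y~0+X1,data=base))~0+residuals(lm(X2~0+X1,data=base)))
c(coef(reg)["X2"], coef(casc2))         # same coefficient on X2
yhat_cascade = predict(lm(Y~0+X1,data=base)) + predict(casc2)
max(abs(predict(reg) - yhat_cascade))   # numerically zero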

We can go a bit further: we can play with the value of the correlation, to measure the gap between the two predictions (here I predict for a randomly chosen observation, the 78th)

comp=function(r){
S=matrix(c(1,r,r,1),2,2)
set.seed(1)
X=rmnorm(n,c(0,0),S)
Y = 2+X[,1]-2*X[,2]+E
base = data.frame(Y=Y-mean(Y),X1=X[,1]-mean(X[,1]),X2=X[,2]-mean(X[,2]))
reg = lm(Y~0+X1+X2,data=base)
reg1 = lm(Y~0+X1,data=base)
reg2 = lm(Y~0+X2,data=base)
y1=predict(reg)
y2=reg1$coefficients[1]*base$X1+reg2$coefficients[1]*base$X2
c(y1[78],y2[78],(y2[78]-y1[78])/y1[78])}
vR=seq(0,.98,by=.02)
vc=Vectorize(comp)(vR)
plot(vR,vc[3,]*100,ylab="Différence relative (%)",xlab="Corrélation",type="l")

Here we predict for an observation with a large y_i, which corresponds to the right-hand part of the previous graph (with the red and blue points).

This is what I call the second kiss-cool effect. In the first case, with independent variables (zero correlation, that is, on the left of my figure), we explain what we can with the first variable, and we add the impact of the second. And \widetilde{y}_i\sim\widehat{y}_i. The problem is that I am not allowed to consider two independent models when the variables are strongly correlated. Part of the explanation provided by the second variable was already included in the first. For instance, with two strongly (positively) correlated variables, the prediction obtained by adding the two effects estimated independently with two simple regressions, \widetilde{y}_i=\widehat{b}_1x_{1,i}+\widehat{b}_2x_{2,i}, over-estimates the “true” prediction \widehat{y}_i=\widehat{\beta}_1x_{1,i}+\widehat{\beta}_2x_{2,i} by 50% to 70%.

All researchers know this… we are talking about results from the very first course on linear models. And yet, in practice, this second method is still being used. A well-known example is evaluation (of students, of researchers, whatever). For instance, when evaluating a funding application for a researcher, we are asked to give a score

  • for scientific publications (number, quality, etc.)
  • for student supervision (number, level, etc.)
  • for the quality of the environment (prestige of the lab, etc.)
  • etc.

And at the end, everything is summed up. But as we can see, these variables are very, very correlated: if you are in a prestigious lab, you attract many student applications (and good ones), and having many students allows you to have more publications (if your name is added as co-author). In short, we are typically in a model with a double (or even triple) kiss-cool effect. Someone in a good lab will get a good score on the third item, but also a good one on the number of students, and also on publications. Adding up these scores is foolish, because we get an infernal spiral (the good ones are over-rated, and the less good ones under-rated); that is what my first picture, with the red and blue points, was showing. It is a divisive effect of strong polarization.

If we wanted to do things properly, what the Frisch-Waugh theorem says is that the scores should be assigned while correcting for the correlation between the variables:

  • start by computing a score for the quality of the environment (prestige of the lab, etc.)
  • given the environment, compute a score for student supervision (number, level, etc.)
  • given the environment, and given the student supervision, compute a score for publications
  • etc.

It is like the situation I used to see in France, where one variable could account for the researcher’s prestige (for instance, being a CNRS researcher gave a bonus) and another for the publication record. Except that the two are correlated. And most university rankings are built from scores that are far from independent.

In short, as long as evaluation is done by summing scores built on criteria that are often strongly correlated, we strongly polarize the population.

Is that a problem? A priori, yes. Because the message it sends is that there are two classes, the good and the bad, whereas in reality, the level is much more homogeneous than it seems. A small positive effect gets amplified by the fact that it will be reflected (positively) in many other variables. That is my kiss-cool effect. But one could argue that it is only a problem with the distribution of the final grades. If the ranking is preserved, one could say it is not a big deal. Unfortunately, that is not the case.

If we leave the very-high-correlation case aside for a moment, the ranks of the predictions (that is, the ranks of the researchers once the grades \widehat{y}_i or \widetilde{y}_i are given) are less correlated when the underlying correlation is large (but not too large)

comp=function(r){
S=matrix(c(1,r,r,1),2,2)
set.seed(1)
X=rmnorm(n,c(0,0),S)
Y = 2+X[,1]-2*X[,2]+E
base = data.frame(Y=Y-mean(Y),X1=X[,1]-mean(X[,1]),X2=X[,2]-mean(X[,2]))
reg = lm(Y~0+X1+X2,data=base)
reg1 = lm(Y~0+X1,data=base)
reg2 = lm(Y~0+X2,data=base)
y1=predict(reg)
y2=reg1$coefficients[1]*base$X1+reg2$coefficients[1]*base$X2
cor(y1,y2,method="spearman")}
vR=seq(0,.98,by=.02)
vc=Vectorize(comp)(vR)
plot(vR,vc,ylab="Corrélation de rangs",xlab="Corrélation",type="l")

In other words, if the variables x_1 and x_2 are only weakly correlated, we get (globally) the same ranks. On the other hand, if the correlation between x_1 and x_2 increases, the ranking of the \widetilde{y}_i is less and less consistent with that of the \widehat{y}_i (which should be the one we are after).

In short, it is about time we finally, seriously, understood the consequences of this nice paper, published almost 90 years ago

Combining automatically factor levels with trees

Last year, in a post, I discussed how to merge levels of factor variables, using combinatorial techniques (it was for my STT5100 course, and trees are not in the syllabus), with an extension on trees at the end of the post.

Consider the following (simulated) dataset

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
  x2=cut(x2,breaks=
  c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
  labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...
table(b$x2)[LETTERS[1:10]]
 
 A  B  C  D  E  F  G  H  I  J 
11 12 23 34 23 36 12 32  3 14

Just by looking at the data (see the previous post, and the quick computation below), we could easily get the feeling that 10 levels were too many.
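For instance, a quick look at the level-by-level averages of y (a minimal sketch):

round(sort(tapply(b$y, b$x2, mean)), 2)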

Following my post, Przemyslaw sent a comment suggesting to use

library(factorMerger)

It is indeed a nice package (unless you have really really big datasets with a lot of categories in your factor variables – as I experienced recently), and you can get great graphs

MF = mergeFactors(response = b$y, 
             factor = b$x2, 
             family = "gaussian")
plot(MF)

Here it suggests creating three categories. Recall that with Student t-tests (changing the reference), we got

Another interesting package, by Piro Polo, is

library(tree.bins)

To use it, we simply call the following function, and our dataset is automatically transformed: the continuous variables remain unchanged, and (possibly) the categories of categorical variables are merged

b.bins = tree.bins(data=b, y=y)
str(b.bins)
Classes ‘data.table’ and 'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: chr  "Group.4" "Group.4" "Group.4" "Group.4" ...
 - attr(*, ".internal.selfref")= 
table(b.bins$x2)

Group.1 Group.2 Group.3 Group.4 
     23      35      26     116

here in four groups. To get the correspondence, use

tree.bins(data=b, y=y, return = "lkup.list")
[[1]]
   x2 Categories
1   E    Group.1
2   G    Group.2
3   C    Group.2
4   B    Group.3
5   J    Group.3
6   I    Group.4
7   A    Group.4
8   H    Group.4
9   F    Group.4
10  D    Group.4

(we have a list with one element, one data frame, since there is only one factor variable). Cool, isn’t it? I miss Przemyslaw’s plot, but this is rather quick, and efficient…

 

Estimates on training vs. validation samples

Before moving to cross-validation, it was natural to say “I will burn 50% (say) of my data to train a model, and then use the remaining ones to fit the model”. For instance, we can use the training data for variable selection (e.g. using some stepwise procedure in a logistic regression), and then, once variables have been selected, fit the model on the remaining set of observations. A natural question is usually “does it really matter?”.

In order to visualize this problem, consider my (simple) dataset

MYOCARDE=read.table(
  "http://freakonometrics.free.fr/saporta.csv",
  head=TRUE,sep=";")

Let us generate 100 training samples (where we keep about 50% of the observations). On each of them, we use a stepwise procedure, and we keep the estimates of the remaining variables (and their standard errors, actually)

n=nrow(MYOCARDE)
M=matrix(NA,100,ncol(MYOCARDE))
colnames(M)=c("(Intercept)",names(MYOCARDE)[1:7])
S1=S2=M1=M2=M
for(i in 1:100){
idx = which(sample(0:1,size=n, replace=TRUE)==1)
reg=step(glm(PRONO=="DECES"~.,data=MYOCARDE[idx,],family=binomial))
nm=names(reg$coefficients)
M1[i,nm]=reg$coefficients
S1[i,nm]=summary(reg)$coefficients[,2]
f=paste("PRONO=='DECES'~",paste(nm[-1],collapse="+"),sep="")
reg=glm(f,data=MYOCARDE[-idx,],family=binomial)
M2[i,nm]=reg$coefficients
S2[i,nm]=summary(reg)$coefficients[,2]
}

Then, for the 7 covariates (and the constant) we can look at the value of the coefficient in the model fitted on the training sample, and the value in the model fitted on the validation sample (of course, only when they remained in the model after the stepwise selection)

for(j in 1:8){
idx=which(!is.na(M1[,j]))
plot(M1[idx,j],M2[idx,j])
abline(a=0,b=1,lty=2,col="gray")
segments(M1[idx,j]-2*S1[idx,j],M2[idx,j],M1[idx,j]+2*S1[idx,j],M2[idx,j])  
segments(M1[idx,j],M2[idx,j]-2*S2[idx,j],M1[idx,j],M2[idx,j]+2*S2[idx,j])  
}

For instance, with the intercept, we have the following

 

where horizontal segments are confidence intervals of the parameter in the model fitted on the training sample, and vertical ones in the model fitted on the validation sample. The green part means some sort of consistency, while the red one means that, actually, the coefficient was negative with one model and positive with the other one. Which is odd (but in that case, observe that the coefficients are rarely significant).

We can also visualize the joint distribution of the two estimators,

for(j in 1:8){
library(ks)
idx = which(!is.na(M1[,j]))
Z = cbind(M1[idx,j],M2[idx,j])
H = Hpi(x=Z)
fhat = kde(x=Z, H=H)
image(fhat$eval.points[[1]],
fhat$eval.points[[2]],fhat$estimate)
abline(a=0,b=1,lty=2,col="gray")
abline(v=0,lty=2)
abline(h=0,lty=2)
}

which are here, almost on the diagonal,

meaning that the intercept on the two samples is (more or less) the same. We can then look at other parameters (which is actually more interesting).

On that variable, it seems that it is significant on the training dataset (somehow, consistent with the fact that it remains in the model after the stepwise procedure) but not on the validation sample (or hardly significant).

Others are much more consistent (with some possible outliers)

 

 

On the next one, we have again significance on the training sample, but not on the validation sample,

 

 

and probably more interesting

where the two are very consistent.

Variance of the slope in a regression model

In my “applied linear models” exam, there was a tricky question (it was a multiple choice, so no details were asked). I was simply asking if the following statement was valid, or not

Consider a linear regression with one single covariate, y=\beta_0+\beta_1x_1+\varepsilon, and the least-squares estimates. The variance of the slope is \text{Var}[\widehat{\beta}_1]. Do we decrease this variance if we add one variable, and consider y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon?

For the exam, the expected answer was simply “no”. In a nutshell, there are two cases where we should expect different changes,

  • if x_1 and x_2 are highly correlated, then we should expect the variance to increase
  • if x_1 and x_2 are not correlated, then we should expect the variance to decrease

We did briefly observe (and discuss) those points on examples during the lecture… but I wanted to go a bit further, since I couldn’t find any analytical results. Let us generate a model y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon, and then compare the variance \text{Var}[\widehat{\beta}_1] on the two fitted models, depending on the correlation between x_1 and x_2

library(mnormt)
n=200
s=function(r=0){
S=matrix(c(1,r,r,1),2,2)
X=rmnorm(n,c(0,0),S)
B=data.frame(y=-2+X[,1]+X[,2]+rnorm(n)/2,
x1=X[,1],
x2=X[,2])
reg12=lm(y~x1+x2,data=B)
reg1=lm(y~x1,data=B)
k=summary(reg12)$coefficients[2,2]/summary(reg1)$coefficients[2,2]
k}

Let us generate 500 samples for each value of the correlation, from -0.9 to +0.9

M=NULL
for(r in ((-9):9)/10) M=cbind(M,Vectorize(s)(rep(r,500)))

and let us plot the ratio of the two estimated standard errors of \widehat{\beta}_1 (the ratio of the variances being simply its square)

plot(0:1,0:1,xlim=c(-1,1),ylim=c(0,2),col="white")
for(i in 1:19) points(rep((((-9):9)/10)[i],500),M[,i],col="light blue")
VM=apply(M,2,mean)
lines((((-9):9)/10),VM,col="red",lwd=2)
abline(h=1,lty=2)

If the ratio exceeds 1, the variance increases when adding a covariate.

Indeed, here, when the two variables are independent, the standard error of \widehat{\beta}_1 is roughly halved. But when the covariates are highly correlated, it is almost doubled…

Now, what if, actually, x_2 is not a real explanatory variable: the true model we generate is y=\beta_0+\beta_1x_1+\varepsilon. In that case,

s=function(r=0){
S=matrix(c(1,r,r,1),2,2)
X=rmnorm(n,c(0,0),S)
B=data.frame(y=-2+X[,1]+rnorm(n)/2,
x1=X[,1],
x2=X[,2])
reg12=lm(y~x1+x2,data=B)
reg1=lm(y~x1,data=B)
k=summary(reg12)$coefficients[2,2]/summary(reg1)$coefficients[2,2]
k}

we get our samples as previously

M=NULL
for(r in ((-9):9)/10) M=cbind(M,Vectorize(s)(rep(r,500)))

and we plot those ratios

plot(0:1,0:1,xlim=c(-1,1),ylim=c(0,2),col="white")
for(i in 1:19) points(rep((((-9):9)/10)[i],500),M[,i],col="light blue")
VM=apply(M,2,mean)
lines((((-9):9)/10),VM,col="red",lwd=2)
abline(h=1,lty=2)

In the case where we add a useless variable x_2, whatever its correlation with x_1, adding it will always, on average, increase the variance of \widehat{\beta}_1.
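As a quick sanity check (under the simplifying assumption, as in the simulation, that the covariates are centered with unit variance and correlation r, and with n large), the asymptotic variances are \text{Var}[\widehat{\beta}_1^{(x_1)}]\approx\sigma^2/n for the single-covariate model, and \text{Var}[\widehat{\beta}_1^{(x_1,x_2)}]\approx\sigma^2/(n(1-r^2)) once the useless x_2 is added. The variance ratio is therefore the variance inflation factor 1/(1-r^2)\geq 1 (the plotted ratio of standard errors being its square root), with equality only when r=0.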

Random thoughts on econometric models with (pure) random features

For my lectures on applied linear models, I wanted to illustrate the fact that the R^2 is never a good measure of the quality of a model, since it is quite easy to increase it artificially. Consider the following dataset

n=100
df=data.frame(matrix(rnorm(n*n),n,n))
names(df)=c("Y",paste("X",1:99,sep=""))

with one variable of interest y, and 99 features x_j, all of them being (by construction) independent. And we only have 100 observations… Consider here the regression on the first k features, and compute the R_k^2 of that regression

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$r.squared}

Let us see what’s going on…

plot(1:99,Vectorize(reg)(1:99))

(actually, it’s not exactly what we have on the graph… we have the average obtained over 1,000 samples randomly generated, with 90% confidence bands). Observe that \mathbb{E}[R^2_k]=k/n, i.e. if we add some pure random noise, we keep increasing the R^2 (up to 1, actually).
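Just to make that averaged curve reproducible, here is a minimal sketch of how it could be obtained (the number of simulations and the plotting details are assumptions for the illustration, not the exact code used for the figure, and it takes a while to run)

nsim=1000
R2=matrix(NA,nsim,99)
for(s in 1:nsim){
  dfs=data.frame(matrix(rnorm(n*n),n,n))
  names(dfs)=c("Y",paste("X",1:99,sep=""))
  for(k in 1:99){
    frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
    R2[s,k]=summary(lm(frm,data=dfs))$r.squared}}
plot(1:99,apply(R2,2,mean),type="l",ylim=0:1,xlab="k",ylab="R squared")
lines(1:99,apply(R2,2,quantile,.05),lty=2)
lines(1:99,apply(R2,2,quantile,.95),lty=2)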

Good news, as we’ve seen in the course, the adjusted R^2 – denoted \bar R^2 – might help. Observe that \mathbb{E}[\bar R^2_k]=0, so, in some sense, adding features does not help here…

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$adj.r.squared}
plot(1:99,Vectorize(reg)(1:99))

We can actually do the same with the Akaike criterion AIC_k and the Schwarz (Bayesian) criterion BIC_k.

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  AIC(model)}
plot(1:99,Vectorize(reg)(1:99))

For the AIC, the initial increase makes sense: we should not prefer the model with 10 covariates, compared with nothing. The strange thing is the far-right behavior: we prefer here 80 random noise features to none! Which I find hard to interpret… For the BIC the code is simply

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  BIC(model)}
plot(1:99,Vectorize(reg)(1:99))

and here also, we have the same pattern, where we prefer a big model with just pure noise to nothing…

A last one to conclude (or not): what about the leave-one-out cross-validation mean squared error? More precisely, CV=\frac{1}{n}\sum_{i=1}^n\widehat{\varepsilon}^2_{-i} where \widehat{\varepsilon}_{-i}=y_i-\widehat{y}_{-i}, and \widehat{y}_{-i} is the predicted value obtained with the model estimated when the i-th observation is deleted. One can prove that \widehat{\beta}_{-i}=\widehat{\beta}-(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i\hat\varepsilon_i(1-H_{i,i})^{-1} where H is the classical hat matrix, thus \widehat{\varepsilon}_{-i}=(1-H_{i,i})^{-1}\hat\varepsilon_i, i.e. we do not have to estimate n models (one per deleted observation)

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  h=lm.influence(model)$hat
  mean((residuals(model)/(1-h))^2)}
plot(1:99,Vectorize(reg)(1:99))

Here, it makes sense: adding noisy features yields overfitting, so the leave-one-out mean squared error keeps increasing!

That’s all nice, but it might not be very realistic… Here, for the model with only one variable, I just picked one at random… In practice, we try to get the “best” one… So a more natural idea would be to order the variables according to their correlation with y,

df=data.frame(matrix(rnorm(n*n),n,n))
df=df[,rev(order(abs(cor(df)[1,])))]
names(df)=c("Y",paste("X",1:99,sep=""))

and as before, we can plot the evolution of R^2_k as a function of k the number of features considered,

which is increasing, with a higher slope at the beginning… For \bar R^2_k, we might actually prefer correlated noise to nothing (which actually makes sense). So here, since we somehow chose our variables, \bar R^2_k seems to be always positive…

For AIC_k, here also, there is an initial improvement, before coming back to the original situation (with about 80 features); and here also, we observe the drop on the far right part of the graph

The BIC_k might like the top three features, but soon we have a deterioration… even if, here also, we have the drop at the far right (with more than 95 features… for 100 observations).

Finally, observe that here again, our (leave-one-out) cross-validation has not been misled by our noisy variables: its error keeps increasing, correctly flagging that the models get worse!

So it seems that cross-validation techniques are more robust than the AIC and BIC (even if we mentioned, in a previous post, connections between all those concepts) when we have a lot of noisy (non-relevant) features.

Foundations of Machine Learning, part 5

This post is the ninth (and probably last) one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 8 is online here.

Optimization and algorithmic aspects

In econometrics, (numerical) optimization became omnipresent as soon as we left the Gaussian model. We briefly mentioned it in the section on the exponential family, and the use of the Fisher score (gradient descent) to solve the first order condition \mathbf{X}^T \mathbf{W}(\beta)^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}. In learning, optimization is the central tool. And it is necessary to have effective optimization algorithms, to solve problems (described previously) of the form: \widehat{\beta}\in\underset{\beta\in\mathbb{R}^p}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda\Vert\boldsymbol{\beta}\Vert\right\rbrace In some cases, instead of global optimization, it is sufficient to consider optimization by coordinates (widely studied in Daubechies et al. (2004)). If f:\mathbb{R}^d\rightarrow\mathbb{R} is convex and differentiable, and if \mathbf{x} satisfies f(\mathbf{x}+h\mathbf{e}_i)\geq f(\mathbf{x}) for any h\in\mathbb{R} and i\in\{1,\cdots, d\}, then f(\mathbf{x})=\min\{f\}, where \mathbf{e}=(\mathbf{e}_i) is the canonical basis of \mathbb{R}^d. However, this property is not true in the non-differentiable case. But if we assume that the non-differentiable part is separable (additively), it becomes true again. More specifically, if f(\mathbf{x})=g(\mathbf{x})+\sum_{i=1}^d h_i(x_i) with \left\lbrace\begin{array}{l}g: \mathbb{R}^d\rightarrow\mathbb{R}\text{ convex and differentiable}\\h_i: \mathbb{R}\rightarrow\mathbb{R}\text{ convex}\end{array}\right. then a coordinate-wise minimizer is again a global minimizer. This was the case for the Lasso regression, \beta\mapsto\|\mathbf{y}-\beta_0-\mathbf{X}\beta\|_{\ell_2}+\lambda\|\beta\|_{\ell_1}, as shown by Tseng (2001). Getting back to our initial notations, we can use a coordinate descent algorithm: from an initial value \mathbf{x}^{(0)}, we consider (by iterating) x_j^{(k)}\in\underset{x_j}{\text{argmin}}\big\lbrace f(x_1^{(k)},\cdots,x_{j-1}^{(k)},x_j,x_{j+1}^{(k-1)},\cdots,x_d^{(k-1)})\big\rbrace for j=1,2,\cdots,d. These algorithmic problems and numerical issues may seem secondary to econometricians. However, they are essential in automatic learning: a technique is interesting if there is a stable and fast algorithm, which allows to obtain a solution. These optimization techniques can be transposed: for example, this coordinate descent technique can be used in the case of SVM methods (known as “support vector” methods) when the space is not linearly separable, and the classification error must be penalized (we will come back to this technique in the next section).
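To fix ideas, here is a minimal sketch of that coordinate descent idea for the Lasso (quadratic loss plus \lambda times the \ell_1 norm), written with a soft-thresholding update; the standardization of the covariates and the scaling of the objective are assumptions made for the illustration, this is not the implementation used in the paper.

soft = function(z,g) sign(z)*pmax(abs(z)-g,0)
lasso_cd = function(X,y,lambda,niter=100){
  X = scale(X); y = y-mean(y)
  n = nrow(X); p = ncol(X)
  beta = rep(0,p)
  for(k in 1:niter){
    for(j in 1:p){
      r = y - X[,-j,drop=FALSE] %*% beta[-j]   # partial residual, x_j excluded
      beta[j] = soft(sum(X[,j]*r)/n, lambda)   # coordinate-wise soft-thresholding
    }}
  beta}

Each coordinate update has a closed form (the soft-thresholding operator), which is precisely why coordinate descent became so popular for the Lasso.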

In-sample, out-of-sample and cross-validation

These techniques seem intellectually interesting, but we have not yet discussed the choice of the penalty parameter \lambda. But this problem is actually more general, because comparing two parameters \widehat{\beta}_{\lambda_1} and \widehat{\beta}_{\lambda_2} is actually comparing two models. In particular, if we use a Lasso method, with different thresholds \lambda, we compare models that do not have the same dimension. Previously, we have addressed the problem of model comparison from an econometric perspective (by penalizing overly complex models). In the learning literature, judging the quality of a model on the data used to construct it does not make it possible to know how the model will behave on new data. This is the so-called “generalization” problem. The traditional approach then consists in separating the sample (size n) into two parts: a part that will be used to train the model (the training database, in-sample, size m) and a part that will be used to test the model (the testing database, out-of-sample, size n-m). The latter then makes it possible to measure a real predictive risk. Suppose that the data are generated by a linear model y_i=\mathbf{x}_i^T \beta_0+\varepsilon_i where the \varepsilon_i are independent, centered draws from some distribution. The empirical quadratic risk in-sample is here \frac{1}{m}\sum_{i=1}^m\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big)=\mathbb{E}\big([\mathbf{x}_i^T \widehat{\beta}-\mathbf{x}_i^T \beta_0]^2\big), for any observation i. Assuming the residuals \varepsilon are Gaussian, we can show that this risk is worth \sigma^2 \text{trace}(\Pi_X)/m, i.e. \sigma^2 p/m. On the other hand, the empirical out-of-sample quadratic risk is here \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) where \mathbf{x} is a new observation, independent of the others. It can be noted that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big\vert \mathbf{x}\big)=\text{Var}\big(\mathbf{x}^T \widehat{\beta}\big\vert \mathbf{x}\big)=\sigma^2\mathbf{x}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}, and by integrating with respect to \mathbf{x}, \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T\beta_0]^2\big)=\sigma^2\text{trace}\big(\mathbb{E}[\mathbf{x}\mathbf{x}^T]\mathbb{E}\big[(\mathbf{X}^T\mathbf{X})^{-1}\big]\big). The expression is then different from that obtained in-sample, and using the Groves & Rothenberg (1969) bound, we can show that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \geq \sigma^2\frac{p}{m}, which is pretty intuitive, when we start thinking about it. Except in some simple cases, there is no simple (explicit) formula. Note, however, that if \mathbf{X}\sim\mathcal{N}(0,\sigma^2 \mathbb{I}), then \mathbf{X}^T \mathbf{X} follows a Wishart distribution, and it can be shown that \mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big)=\sigma^2\frac{p}{m-p-1}. If we now look at the empirical version: if \widehat{\beta} is estimated on the first m observations, \widehat{\mathcal{R}}^{\text{IS}}=\sum_{i=1}^m [y_i-\boldsymbol{x}_i^T\widehat{\boldsymbol{\beta}}]^2\text{ and }\widehat{\mathcal{R}}^{\text{OS}}=\sum_{i=m+1}^{n} [y_i-\boldsymbol{x}_i^T\widehat{\boldsymbol{\beta}}]^2 and as Leeb (2008) noted, \widehat{\mathcal{R}}^{\text{IS}}-\widehat{\mathcal{R}}^{\text{OS}}\approx 2\cdot\nu where \nu represents the number of degrees of freedom, which is not unlike the penalty used in the Akaike criterion.

Figure 4 shows the respective evolution of \widehat{\mathcal{R}}^{\text{IS}} and \widehat{\mathcal{R}}^{\text{OS}} according to the complexity of the model (number of degrees in a polynomial regression, number of nodes in splines, etc). The more complex the model, the more \widehat{\mathcal{R}}^{\text{IS}} will decrease (this is the red curve, below). But that’s not what we’re interested in here: we want a model that predicts well on new data (i.e. out-of-sample). As Figure 4 shows, if the model is too simple, it does not predict well (just as it does not fit well in-sample). But what we can see is that if the model is too complex, we are in a situation of “overfitting”: the model will start to model the noise. Of course, this figure should remind us of the one we’ve seen in our second post of that series

Figure 4: Generalization, under- and over-fitting
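Although the exact data behind Figure 4 are not reproduced here, a small simulation sketch (with an assumed data generating process and a polynomial regression of increasing degree) gives the same qualitative picture: the in-sample error keeps decreasing with the complexity, while the out-of-sample error eventually increases.

set.seed(1)
n=200; m=100
db=data.frame(x=runif(n))
db$y=sin(2*pi*db$x)+rnorm(n)/4
train=1:m; valid=(m+1):n
D=15; IS=OS=rep(NA,D)
for(d in 1:D){
  fit=lm(y~poly(x,d),data=db[train,])
  IS[d]=mean((db$y[train]-predict(fit))^2)
  OS[d]=mean((db$y[valid]-predict(fit,newdata=db[valid,]))^2)}
plot(1:D,OS,type="l",col="blue",ylim=range(c(IS,OS)),xlab="complexity (polynomial degree)",ylab="empirical quadratic risk")
lines(1:D,IS,col="red")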

Instead of splitting the database in two, with some of the data used to calibrate the model and some to study its performance, it is also possible to use cross-validation. To present the general idea, we can go back to the “jackknife”, introduced by Quenouille (1949) (and formalized by Quenouille (1956) and Tukey (1958)), a technique used in statistics to reduce bias. Indeed, if we assume that \{y_1,\cdots,y_n\} is a sample drawn according to a law F_\theta, and that we have an estimator T_n (\mathbf{y})=T_n (y_1,\cdots,y_n), but that this estimator is biased, with \mathbb{E}[T_n (\mathbf{Y})]=\theta+O(n^{-1}), it is possible to reduce the bias by considering \widetilde{T}_n(\mathbf{y})=\frac{1}{n}\sum_{i=1}^n T_{n-1}(\mathbf{y}_{(i)})\text{ where }\mathbf{y}_{(i)}=(y_1,\cdots,y_{i-1},y_{i+1},\cdots,y_n). It can then be shown that \mathbb{E}[\widetilde{T}_n(\mathbf{Y})]=\theta+O(n^{-2}). The idea of cross-validation is based on the idea of building an estimator by removing an observation. Since we want to build a predictive model, we will compare the forecast obtained with the estimated model, and the missing observation: \widehat{\mathcal{R}}^{\text{CV}}=\frac{1}{n}\sum_{i=1}^n \ell(y_i,\widehat{m}_{(i)}(\mathbf{x}_i)). We will speak here of the “leave-one-out” (loocv) method.

This technique reminds us of the traditional method used to find the optimal parameter in exponential smoothing methods for time series. In simple smoothing, we will construct a forecast from a time series as {}_t\widehat{y}_{t+1} =\alpha\cdot{}_{t-1}\widehat{y}_t +(1-\alpha)\cdot y_t, where \alpha\in[0,1], and we will consider as “optimal” \alpha^\star = \underset{\alpha\in[0,1]}{\text{argmin}}\left\lbrace \sum_{t=2}^T \ell({}_{t-1}\widehat{y}_{t},y_{t}) \right\rbraceas described by Hyndman et al (2009).

The main problem with the leave-one-out method is that it requires calibration of n models, which can be problematic in large dimensions. An alternative method is cross validation by k-blocks (called “k-fold cross validation”) which consists in using a partition of \{1,\cdots,n\} in k groups (or blocks) of the same size, \mathcal{I}_1,\cdots,\mathcal{I}_k, and let us note \mathcal{I}_{\bar j}=\{1,\cdots,n\}\setminus \mathcal{I}_j. By noting \widehat{m}_{(j)} built on the sample \mathcal{I}_{\bar j}, we then set:\widehat{\mathcal{R}}^{k-\text{ CV}}=\frac{1}{k}\sum_{j=1}^k \mathcal{R}_j\text{ where }\mathcal{R}_j=\frac{k}{n}\sum_{i\in\mathcal{I}_{{j}}} \ell(y_i,\widehat{m}_{(j)}(\mathbf{x}_i))Standard cross-validation, where only one observation is removed each time (loocv), is a special case, with k=n. Using k=5 or 10 has a double advantage over k=n: (1) the number of estimates to be made is much smaller, 5 or 10 rather than n; (2) the samples used for estimation are less similar and therefore less correlated to each other, which tends to avoid excess variance, as recalled by James et al. (2013).
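As an illustration, here is a minimal k-fold cross-validation sketch, for a linear model and the quadratic loss (the data frame db, the formula and k=10 are hypothetical placeholders, not objects defined in the text):

kfold_cv = function(db, formula, k=10){
  fold = sample(rep(1:k, length.out=nrow(db)))   # random partition into k blocks
  R = rep(NA,k)
  for(j in 1:k){
    fit  = lm(formula, data=db[fold!=j,])        # model fitted without block j
    yhat = predict(fit, newdata=db[fold==j,])
    R[j] = mean((db[fold==j, all.vars(formula)[1]] - yhat)^2)}
  mean(R)}
# usage (hypothetical variable names): kfold_cv(db, y ~ x1 + x2)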

Another alternative is to use bootstrap samples. Let \mathcal{I}_b be a sample of size n obtained by drawing with replacement in \{1,\cdots,n\}, indicating which observations (y_i,\mathbf{x}_i) will be kept in the learning population (at each draw). Note \mathcal{I}_{\bar b}=\{1,\cdots,n\}\setminus\mathcal{I}_b. By noting \widehat{m}_{(b)} the model built on sample \mathcal{I}_b, we then set: \widehat{\mathcal{R}}^{\text{ B}}=\frac{1}{B}\sum_{b=1}^B \mathcal{R}_b\text{ where }\mathcal{R}_b=\frac{n_{\overline{b}}}{n}\sum_{i\in\mathcal{I}_{\overline{b}}} \ell(y_i,\widehat{m}_{(b)}(\mathbf{x}_i)) where n_{\bar b} is the number of observations that have not been kept in \mathcal{I}_b. It should be noted that with this technique, on average e^{-1}\approx 36.8\% of the observations do not appear in the bootstrap sample, and we find an order of magnitude of the proportions used when creating a calibration sample, and a test sample. In fact, as Stone (1977) had shown, the minimization of AIC is to be compared to the cross-validation criterion, and Shao (1997) showed that the minimization of BIC corresponds to k-fold cross-validation, with k=n/\log n.

All those techniques here are mentioned in the “machine learning” section since they rely on automatic, computational techniques, and no probabilistic foundations are necessary. In many cases we did use the notation m^\star (at least in the first posts on “machine learning” techniques) to highlight the fact that we want some sort of “optimal” model – and to make a distinction with estimators \widehat{m} considered earlier, when we had some probabilistic framework. But of course, it is possible (and necessary) to build bridges between those two cultures…

References are online here. As explained in the introduction, it is some sort of online version of an introduction to our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics (in English and in French).

Foundations of Machine Learning, part 1

This post is the fifth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 4 is online here.

In parallel with these tools developed by, and for, economists, a whole literature has been developed on similar issues, centered on the problems of prediction and forecasting. For Breiman (2001a), a first difference comes from the fact that statistics has developed around the principle of inference (or to explain the relationship linking y to variables \mathbf{x}) while another culture is primarily interested in prediction. In a discussion that follows the article, David Cox states very clearly that in statistics (and econometrics) “predictive success (…) is not the primary basis for model choice”. We will get back here on the roots of automatic learning techniques. The important point, as we will see, is that the main concern of machine learning is related to the generalization properties of a model, i.e. its performance – according to a criterion chosen a priori – on new data, and therefore on out-of-sample data.

A learning machine

Today, we speak of “machine learning” to describe a whole set of techniques, often computational, as alternatives to the classical econometric approach. Before characterizing them as much as possible, it should be noted that historically other names have been given. For example, Friedman (1997) proposes to make the link between statistics (which closely resemble econometric techniques – hypothesis testing, ANOVA, linear regression, logistic regression, GLM, etc.) and what was then called “data mining” (which then included decision trees, nearest-neighbour methods, neural networks, etc.). The bridge between those two cultures corresponds to the “statistical learning” techniques described in Hastie et al (2009). But one should keep in mind that machine learning is a very large field of research.

The so-called “natural” learning (as opposed to machine learning) is that of children, who learn to speak, read and play. Learning to speak means segmenting and categorizing sounds, and associating them with meanings. A child also learns simultaneously the structure of his or her mother tongue and acquires a set of words describing the world around him or her. Several techniques are possible, ranging from rote learning, generalization, discovery, more or less supervised or autonomous learning, etc. The idea in artificial intelligence is to take inspiration from the functioning of the brain to learn, to allow “artificial” or “automatic” learning, by a machine. A first application was to teach a machine to play a game (tic-tac-toe, chess, go, etc.). An essential step is to explain the objective it must achieve to win. One historical approach has been to teach the machine the rules of the game. While this allows the machine to play, it will not help it to play well. Assuming that the machine knows the rules of the game, and that it has a choice between several dozen possible moves, which one should it choose? The classical approach in artificial intelligence uses the so-called min-max algorithm, with an evaluation function: in this algorithm, the machine searches forward in the tree of possible moves, as far as the calculation resources allow (about ten moves in chess, for example). Then, it calculates different criteria (which have been previously indicated to it) for all positions (number of pieces taken, or lost, occupancy of the center, etc., in our example of the chess game), and finally, the machine plays the move that allows it to maximize its gain. Another example may be the classification and recognition of images or shapes. For example, the machine must identify a handwritten digit (on checks, in the ZIP code on envelopes, etc). It is a question of predicting the value of a variable y, knowing that a priori y\in\{0,1,2,\cdots,8,9\}. A classical strategy is to provide the machine with training databases, in other words here millions of labelled (identified) images of handwritten digits. A simple (and natural) strategy is to use a decision criterion based on the closest neighbors whose labels are known (using a predefined metric).

The method of the closest neighbors (“k-nearest neighbors”) can be described as follows: we consider (as in the previous part) a set of n observations, i.e. pairs (y_i,\mathbf{x}_i) with \mathbf{x}_i\in\mathbb{R}^p. Let us consider a distance \Delta on \mathbb{R}^p (the Euclidean distance or the Mahalanobis distance, for example). Given a new observation \mathbf{x}\in\mathbb{R}^p, let us assume that the observations have been ordered as a function of their distance to \mathbf{x}, in the sense that \Delta(\mathbf{x}_1, \mathbf{x})\leq\Delta(\mathbf{x}_2, \mathbf{x})\leq\cdots\leq\Delta(\mathbf{x}_n, \mathbf{x}); then we can consider as prediction for y the average of the k nearest neighbours, \widehat{m}_k(\mathbf{x})=\frac{1}{k}\sum_{i=1}^k y_i. Learning here works by induction, based on a sample (called the learning – or training – sample).
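A minimal sketch of this estimator, with the Euclidean distance (the function name and the default k=10 are just illustrative choices):

knn_predict = function(X, y, x, k=10){
  d = sqrt(colSums((t(X)-x)^2))   # Euclidean distances between x and each row of X
  mean(y[order(d)[1:k]])          # average response of the k nearest neighbours
}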

Automatic learning includes those algorithms that give computers the ability to learn without being explicitly programmed (as Arthur Samuel defined it in 1959). The machine will then explore the data with a specific objective (such as searching for the nearest neighbours in the example just described). Tom Mitchell proposed a more precise definition in 1998: a computer program is said to learn from experience E in relation to a task T and a performance measure P, if its performance on T, measured by P, improves with experience E. Task T can be, for instance, a (credit) default scoring task, and the performance P can be the percentage of errors made. The system learns if the proportion of correctly predicted defaults increases with experience.

As we can see, machine learning is basically a problem of optimizing a criterion based on data (called, from now on, training data). Many textbooks on machine learning techniques propose algorithms, without ever mentioning any probabilistic model. In Watt et al (2016) for example, the word “probability” is mentioned only once, with this footnote that will surprise and make smile any econometrician, “the logistic regression can also be interpreted from a probabilistic perspective” (page 86). But many recent books offer a review of machine learning approaches using probabilistic theories, following the work of Valiant and Vapnik. By proposing the paradigm of “probably approximately correct” learning (PAC), a probabilistic flavor has been added to the previously very computational approach, by quantifying the error of the learning algorithm (usually in a classification problem).

To be continued (references are online here)…

Probabilistic Foundations of Econometrics, part 4

This post is the fourth one of our series on the history and foundations of econometric and machine learning models. Part 3 is online here.

Goodness of Fit, and Model

In the Gaussian linear model, the determination coefficient – noted R^2 – is often used as a measure of fit quality. It is based on the variance decomposition formula \underbrace{\frac{1}{n}\sum_{i=1}^n (y_i-\bar{y})^2}_{\text{total variance}}=\underbrace{\frac{1}{n}\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{\text{residual variance}}+\underbrace{\frac{1}{n}\sum_{i=1}^n (\widehat{y}_i-\bar{y})^2}_{\text{explained variance}} The R^2 is defined as the ratio of explained variance and total variance, another interpretation of this coefficient that we had introduced from the geometry of least squares: R^2= \frac{\sum_{i=1}^n (y_i-\bar{y})^2-\sum_{i=1}^n (y_i-\widehat{y}_i)^2}{\sum_{i=1}^n (y_i-\bar{y})^2}. The sums of squared errors in this expression can be rewritten in terms of the log-likelihood. However, it should be remembered that, up to one additive constant (obtained with a saturated model) in generalized linear models, deviance is defined by {Deviance}(\widehat{\beta}) = -2\log[\mathcal{L}], which can also be noted Deviance(\widehat{\mathbf{y}}). A null deviance can be defined as the one obtained without using the explanatory variables \mathbf{x}, so that \widehat{y}_i=\overline{y}. It is then possible to define, in a more general context (with a non-Gaussian distribution for y), R^2=\frac{{Deviance}(\overline{y})-{Deviance}(\widehat{\mathbf{y}})}{{Deviance}(\overline{y})}=1-\frac{{Deviance}(\widehat{\mathbf{y}})}{{Deviance}(\overline{y})}. However, this measure cannot be used to choose a model, if one wishes to have a relatively simple model in the end, because it increases artificially with the addition of explanatory variables without significant effect. We will then tend to prefer the adjusted R^2, \bar R^2 = {1-(1-R^{2})\cdot{n-1 \over n-p}} = R^{2}-\underbrace{(1-R^{2})\cdot{p-1 \over n-p}}_{\text{penalty}} where p is the number of parameters of the model. Measuring the quality of fit this way will penalize overly complex models.
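In R, this deviance-based (pseudo) R^2 can be computed directly from a fitted GLM; a small sketch on simulated data (the model and the data are of course arbitrary):

set.seed(1)
db = data.frame(x = rnorm(100))
db$y = rbinom(100, 1, exp(db$x)/(1+exp(db$x)))
reg = glm(y ~ x, family = binomial, data = db)
1 - deviance(reg)/reg$null.deviance   # deviance-based R^2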

This idea will be found in the Akaike criterion, where AIC=\text{Deviance}+2\cdot p, or in the Schwarz criterion, BIC=\text{Deviance}+\log(n)\cdot p. In large dimensions (typically p>\sqrt{n}), we will tend to use a corrected AIC, defined by AIC_c=\text{Deviance}+2\cdot p\cdot n/(n-p-1).

These criteria are used in so-called “stepwise” methods, i.e. iterative variable selection procedures. In the “forward” method, we start by regressing on the constant, then we add one variable at a time, retaining the one that lowers the AIC criterion the most, until adding a variable increases the AIC criterion of the model. In the “backward” method, we start by regressing on all variables, then we remove one variable at a time, removing the one that lowers the AIC criterion the most, until removing a variable increases the AIC criterion of the model.
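For instance, with R’s step() function (on a hypothetical data frame db with response y – the names are placeholders), forward and backward selection based on AIC can be written as:

null = lm(y ~ 1, data = db)
full = lm(y ~ ., data = db)
forward  = step(null, scope = formula(full), direction = "forward")
backward = step(full, direction = "backward")
# with k = log(nrow(db)) instead of the default k = 2, step() performs BIC-based selection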

Another justification for this notion of penalty (we will come back to this idea in machine learning) can be the following. Let us consider an estimator in the class of linear predictors, \mathcal{M}=\big\lbrace m:~m(\mathbf{x})=\mathbf{s}(\mathbf{x})^T\mathbf{y} \text{ where }\mathbf{S}=(\mathbf{s}(\mathbf{x}_1),\cdots,\mathbf{s}(\mathbf{x}_n))^T\text{ is some smoothing matrix}\big\rbrace and assume that y=m_0 (\mathbf{x})+\varepsilon, with \mathbb{E}[\varepsilon]=0 and \text{Var}[\varepsilon]=\sigma^2\mathbb{I}, so that m_0 (\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]. From a theoretical point of view, the quadratic risk, associated with an estimated model \widehat{m}, \mathbb{E}\big[(Y-\widehat{m}(\mathbf{X}))^2\big], is written \mathcal{R}(\widehat{m})=\underbrace{\mathbb{E}\big[(Y-m_0(\mathbf{X}))^2\big]}_{\text{error}}+\underbrace{\mathbb{E}\big[(m_0(\mathbf{X})-\mathbb{E}[\widehat{m}(\mathbf{X})])^2\big]}_{\text{bias}^2}+\underbrace{\mathbb{E}\big[(\mathbb{E}[\widehat{m}(\mathbf{X})]-\widehat{m}(\mathbf{X}))^2\big]}_{\text{variance}} if m_0 is the true model. The first term is sometimes called “Bayes error”, and does not depend on the estimator selected, \widehat{m}.

The empirical quadratic risk, associated with a model m, is here: \widehat{\mathcal{R}}_n(m)=\frac{1}{n}\sum_{i=1}^n (y_i-m(\mathbf{x}_i))^2 (by convention). We recognize here the mean square error, “mse”, which will more generally give the “risk” of the model m when using another loss function (as we will discuss later on). It should be noted that \displaystyle{\mathbb{E}[\widehat{\mathcal{R}}_n(m)]=\frac{1}{n}\|m_0(\mathbf{x})-m(\mathbf{x})\|^2+\frac{1}{n}\mathbb{E}\big(\|{Y}-m_0(\mathbf{X})\|^2\big)}. We can show that n\mathbb{E}\big[\widehat{\mathcal{R}}_n(\widehat{m})\big]=\mathbb{E}\big(\|Y-\widehat{m}(\mathbf{x})\|^2\big)=\|(\mathbb{I}-\mathbf{S})m_0\|^2+\sigma^2\|\mathbb{I}-\mathbf{S}\|^2 so that the (real) risk of \widehat{m} is: {\mathcal{R}}_n(\widehat{m})=\mathbb{E}\big[\widehat{\mathcal{R}}_n(\widehat{m})\big]+2\frac{\sigma^2}{n}\text{trace}(\boldsymbol{S}). So, if \text{trace}(\boldsymbol{S})\geq0 (which is not a too strong assumption), the empirical risk underestimates the true risk of the estimator. Actually, we recognize here the number of degrees of freedom of the model, the right-hand term corresponding to Mallows’ C_p, introduced in Mallows (1973) using not deviance but R^2.

Statistical Tests

The most traditional test in econometrics is probably the significance test, corresponding to the nullity of a coefficient in a linear regression model. Formally, it is the test of H_0:\beta_k=0 against H_1:\beta_k\neq 0. The so-called Student test, based on the statistic t_k=\widehat{\beta}_k/se_{\widehat{\beta}_k}, allows to decide between the two alternatives, using the test p-value, defined by \mathbb{P}[|T|>|t_k|] with T\overset{\mathcal{L}}{\sim} Std_\nu, where \nu is the number of (residual) degrees of freedom (\nu=n-p-1 for the standard linear model with p covariates and an intercept). In large dimension, however, this statistic is of very limited interest, given a significant FDR (“False Discovery Rate”). Classically, with a level of significance \alpha=0.05, 5% of the variables are falsely significant. Suppose that we have p=100 explanatory variables, but that 5 (only) are really significant. We can hope that these 5 variables will pass the Student test, but we can also expect that 5 additional variables (false positives) will emerge. We will then have 10 variables perceived as significant, while only half of them are, i.e. an FDR of 50%. In order to avoid this recurrent pitfall in multiple testing, it is natural to use the procedure of Benjamini & Hochberg (1995).
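In R, this correction is directly available through p.adjust; a small sketch on artificial p-values (95 null variables and 5 truly significant ones, as in the example above):

set.seed(1)
pval = c(runif(95), runif(5, 0, 0.0001))    # 95 null p-values, 5 very small ones
sum(pval < 0.05)                            # naive thresholding typically picks up false positives too
sum(p.adjust(pval, method = "BH") < 0.05)   # Benjamini-Hochberg correction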

From a correlation to some causal effect

Econometric models are used to implement public policy evaluations. It is therefore essential to fully understand the underlying mechanisms in order to know which variables actually make it possible to act on a variable of interest. But then we move on to another important dimension of econometrics. Jerry Neyman was responsible for the first work on the identification of causal mechanisms, and then Rubin (1974) formalized the approach, called the “Rubin causal model” in Holland (1986). The first approaches to the notion of causality in econometrics were based on the use of instrumental variables, regression discontinuity designs, difference-in-differences analysis, and natural (or not) experiments. Causality is usually inferred by comparing the effect of a policy – or more generally of a treatment – with its counterfactual, ideally given by a random control group. The causal effect of the treatment is then defined as \Delta=y_1-y_0, i.e. the difference between what the situation would be with treatment (noted t=1) and without treatment (noted t=0). The concern is that only y=t\cdot y_1+(1-t)\cdot y_0 and t are observed. In other words, the causal effect of variable t on y is not observed (since only one of the two potential outcomes – y_0 or y_1 – is observed for each individual), but it is also individual, and therefore a function of the covariates x. Generally, by making assumptions about the distribution of the triplet (Y_0,Y_1,T), some parameters of the causal effect distribution become identifiable, based on the density of the observable variables (Y,T). Classically, we will be interested in the moments of this distribution, in particular the average effect of treatment in the population, \mathbb{E}[\Delta], or even just the average effect of treatment on the treated, \mathbb{E}[\Delta|T=1]. If the outcome (Y_0,Y_1) is independent of the treatment assignment variable T, it can be shown that \mathbb{E}[\Delta]=\mathbb{E}[Y|T=1]- \mathbb{E} [Y|T=0]. But if this independence hypothesis is not verified, there is a selection bias, often associated with \mathbb{E}[Y_0|T=1]- \mathbb{E} [Y_0|T=0]. Rosenbaum & Rubin (1983) propose to use a propensity score, p(x)=\mathbb{P}[T=1|X=x], noting that if variable Y_0 is independent of access to treatment T conditionally on the explanatory variables X, then it is independent of T conditionally on the score p(X): it is sufficient to match observations using their propensity score. Heckman et al (2003) thus proposes a kernel estimator on the propensity score, which simply provides an estimator of the effect of the treatment on the treated.

To be continued; next time, we’ll introduce “machine learning techniques” (references mentioned above are online here)

Probabilistic Foundations of Econometrics, part 3

This post is the third one of our series on the history and foundations of econometric and machine learning models. Part 2 is online here.

Exponential family and linear models

The Gaussian linear model is a special case of a large family of linear models, obtained when the conditional distribution of Y (given the covariates) belongs to the exponential family f(y_i|\theta_i,\phi)=\exp\left(\frac{y_i\theta_i-b(\theta_i)}{a(\phi)}+c(y_i,\phi)\right) with \theta_i=\psi(\mathbf{x}_i^T \beta). Functions a, b and c are specified according to the type of exponential law (studied extensively in statistics since Darmois (1935), as Brown (1986) reminds us), and \psi is a one-to-one mapping that the user must specify. The log-likelihood then has a simple expression \log\mathcal{L}(\mathbf{\theta},\phi|\mathbf{y}) =\frac{\sum_{i=1}^ny_i\theta_i-\sum_{i=1}^nb(\theta_i)}{a(\phi)}+\sum_{i=1}^n c(y_i,\phi) and the first order condition is then written \frac{\partial \log \mathcal{L}(\mathbf{\theta},\phi|\mathbf{y})}{\partial \mathbf{\beta}} = \mathbf{X}^T\mathbf{W}^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0}, based on Müller’s (2011) notations, where \mathbf{W} is a weight matrix (which depends on \beta). Given the link between \theta and the expectation of Y, instead of specifying the function \psi(\cdot), we will tend to specify the link function g(\cdot) defined by \widehat{y}=m(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=g^{-1} (\mathbf{x}^T \beta). For the Gaussian linear regression we consider an identity link, while for the Poisson regression, the natural link (called canonical) is the logarithmic link. Here, as \mathbf{W} depends on \beta (with \mathbf{W}=\text{diag}(\nabla g(\widehat{\mathbf{y}})\text{Var}[\mathbf{y}])), there is generally no explicit formula for the maximum likelihood estimator. But an iterative algorithm makes it possible to obtain a numerical approximation. By setting \mathbf{z}=g(\widehat{\mathbf{y}})+(\mathbf{y}-\widehat{\mathbf{y}})\cdot\nabla g(\widehat{\mathbf{y}}), corresponding to the error term of a first-order Taylor expansion of g, we obtain an algorithm of the form \widehat{\beta}_{k+1}=[\mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{X}]^{-1} \mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{z}_k. By iterating, we will define \widehat{\beta}=\widehat{\beta}_{\infty}, and we can show that – with some additional technical assumptions (detailed in Müller (2011)) – this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\beta} -\beta)\overset{\mathcal{L}}{\rightarrow} \mathcal{N}(\mathbf{0},I(\beta)^{-1}) where numerically I(\beta)=\varphi\cdot[\mathbf{X}^T \mathbf{W}_\infty^{-1} \mathbf{X}].
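A minimal sketch of this iterative algorithm, for a Poisson regression with log link (the weights below are the standard IRLS / Fisher scoring weights, written with possibly slightly different notations than above):

irls_poisson = function(X, y, niter=25){
  X = cbind(1, X)                     # add the intercept
  beta = rep(0, ncol(X))
  for(k in 1:niter){
    eta = X %*% beta
    mu  = as.vector(exp(eta))         # inverse (log) link
    z   = eta + (y - mu)/mu           # working response
    beta = solve(t(X) %*% (mu * X), t(X) %*% (mu * z))   # weighted least squares step
  }
  beta}
# sanity check against glm() on simulated data:
# X = matrix(rnorm(200), 100, 2); y = rpois(100, exp(1 + X %*% c(.5, -.5)))
# cbind(irls_poisson(X, y), coef(glm(y ~ X, family = poisson)))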

From a numerical point of view, the computer will solve the first-order condition, and actually, the law of Y does not really intervene. For example, one can estimate a “Poisson regression” even when observations are not integers (but they need to be positive). In other words, the law of Y is only an interpretation here, and the algorithm could be introduced in a different way (as we will see later on), without necessarily having an underlying probabilistic model.

Logistic Regression

Logistic regression is the generalized linear model obtained with a Bernoulli distribution, and a link function which is the quantile function of a logistic distribution (which corresponds to the canonical link in the sense of the exponential family). Taking into account the form of the Bernoulli distribution, econometrics proposes a model for y_i\in\{0,1\}, in which the logarithm of the odds follows a linear model: \log\left(\frac{\mathbb{P}[Y=1\vert \mathbf{X}=\mathbf{x}]}{\mathbb{P}[Y\neq 1\vert \mathbf{X}=\mathbf{x}]}\right)=\beta_0+\mathbf{x}^T\beta or \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\frac{e^{\beta_0+\mathbf{x}^T\beta}}{1+ e^{\beta_0+\mathbf{x}^T\beta}}=H(\beta_0+\mathbf{x}^T\beta) where H(\cdot)=\exp(\cdot)/(1+\exp(\cdot)) is the cumulative distribution function of the logistic distribution. The estimation of (\beta_0,\beta) is performed by maximizing the likelihood: \mathcal{L}=\prod_{i=1}^n \left(\frac{e^{\mathbf{x}_i^T\mathbf{\beta}}}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{y_i}\left(\frac{1}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{1-y_i} It is said to be a linear model because the isoprobability curves here are the parallel hyperplanes \beta_0+\mathbf{x}^T\beta. Rather than this model, popularized by Berkson (1944), some will prefer the probit model (see Berkson, 1951), introduced by Bliss (1934). In this model: \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\Phi (\beta_0+\mathbf{x}^T\beta)

where \Phi denotes the cumulative distribution function of the standard normal distribution. This model has the advantage of having a direct link with the Gaussian linear model, since y_i=\mathbf{1}(y_i^\star>0) with y_i^\star=\beta_0+\mathbf{x}^T \beta+\varepsilon_i where the residuals are Gaussian, \mathcal{N}(0,\sigma^2). An alternative is to have centered residuals of unit variance, and to consider a latent modeling of the form y_i=\mathbf{1}(y_i^\star>\xi) (where \xi will be fixed). As we can see, these techniques are fundamentally linked to an underlying stochastic model. In the body of the article, we present several alternative techniques – from the learning literature – for this classification problem (with two classes, here 0 and 1).

Regression in high dimension

As we mentioned earlier, the first order condition \mathbf{X}^T (\mathbf{X}\widehat{\beta}-\mathbf{y})=\mathbf{0} is solved numerically by performing a QR decomposition, at a cost of O(np^2) operations (where p is the rank of \mathbf{X}^T \mathbf{X}). Numerically, this calculation can be long (either because p is large or because n is large), and a simpler strategy may be to sub-sample. Let n_s\ll n, and consider a sub-sample of size n_s of \{1,\cdots,n\}. Then \widehat{\beta}_s=(\mathbf{X}_s^T \mathbf{X}_s )^{-1} \mathbf{X}_s^T\mathbf{y}_s is a good approximation of \beta as shown by Dhillon et al. (2014). However, this algorithm is dangerous if some points have a high leverage (i.e. L_i=\mathbf{x}_i(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i^T). Tropp (2011) proposes to transform the data (in a linear way), but a more popular approach is to do non-uniform sub-sampling, with a probability related to the influence of observations (defined by I_i=\widehat{\varepsilon}_iL_i/(1-L_i)^2, and which unfortunately can only be calculated once the model is estimated).

In general, we will talk about massive data when the data table, of size n\times p, does not fit in the RAM of the computer. This situation is often encountered in statistical learning nowadays, very often with p\ll n. This is why, in practice, many libraries of algorithms assimilated to machine learning use iterative methods to solve the first-order condition. When the parametric model to be calibrated is indeed convex and semi-differentiable, it is possible to use, for example, the stochastic gradient descent method, as suggested by Bottou (2010). The latter avoids having to compute, at each iteration, the gradient over all the observations of the training sample. Rather than making an average descent at each iteration, we start by drawing (without replacement) an observation \mathbf{x}_i among the n available. The model parameters are then corrected so that the prediction made from \mathbf{x}_i is as close as possible to the true value y_i. The method is then repeated until all the data have been reviewed. In this algorithm there are therefore as many iterations as there are observations. Unlike the gradient descent algorithm (or Newton’s method), at each iteration only one gradient vector is calculated (and no longer n). However, it is sometimes necessary to run this algorithm several times to improve the convergence of the model parameters. If the objective is, for example, to minimize a loss function \ell between the estimator m_\beta (\mathbf{x}) and y (like the quadratic loss function, as in the Gaussian linear regression) the algorithm can be summarized as follows:

  • Step 0: Mix the data
  • Iteration step: For t=1,\cdots, n, we draw i\in\{1,\cdots,n\} without replacement, and we set \beta^{t+1} = \beta^{t} - \gamma_t\frac{ \partial{\ell(y_i,m_{\beta^t}(\mathbf{x}_i)) } }{ \partial{ \beta}}

This algorithm can be repeated several times as a whole depending on the user’s needs. The advantage of this method is that at each iteration, it is not necessary to calculate the gradient on all observations (no sum over the whole sample anymore). It is therefore suitable for large databases. This algorithm is based on a convergence in probability towards a neighborhood of the optimum (and not the optimum itself).
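A minimal sketch of that stochastic gradient descent, for the quadratic loss and a linear model (the decreasing learning rate \gamma_t=1/(t+10) is an arbitrary illustrative choice, not a recommendation from the references):

sgd_lm = function(X, y, nepochs=50){
  X = cbind(1, X)                       # add the intercept
  beta = rep(0, ncol(X)); t = 0
  for(e in 1:nepochs){
    for(i in sample(1:nrow(X))){        # step 0: shuffle, then draw without replacement
      t = t+1; gamma = 1/(t+10)
      grad = -2*(y[i]-sum(X[i,]*beta))*X[i,]   # gradient of (y_i - x_i' beta)^2
      beta = beta - gamma*grad}}
  beta}
# e.g. X = matrix(rnorm(200),100,2); y = as.vector(1 + X %*% c(2,-1) + rnorm(100)/2)
# cbind(sgd_lm(X, y), coef(lm(y ~ X)))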

(references will be given in the very last post of that series) To be continued

Probabilistic Foundations of Econometrics, part 2

This post is the second one of our series on the history and foundations of econometric and machine learning models. Part 1 is online here.

Geometric Properties of this Linear Model

Let’s define the scalar product in \mathbb{R}^n, ⟨\mathbf{a},\mathbf{b}⟩=\mathbf{a}^T\mathbf{b}, and let’s note \|\cdot\| the associated Euclidean norm, \|\mathbf{a}\|=\sqrt{\mathbf{a}^T\mathbf{a}} (denoted \|\cdot\|_{\ell_2} in the next post). Note \mathcal{E}_X the space generated by all linear combinations of the components of \mathbf{X} (adding the constant). If the explanatory variables are linearly independent, \mathbf{X} is a full (column) rank matrix and \mathcal{E}_X is a space of dimension p+1. Let’s assume from now on that the variables \mathbf{x} and y are centered here. Note that no distributional assumption is made in this section: the geometric properties are derived from the properties of expectation and variance in the set of finite variance variables.

With this notation, it should be noted that the linear model is written m(\mathbf{x})=⟨\mathbf{x},\beta⟩. The space H_z=\{\mathbf{x}\in\mathbb{R}^{p+1}:m(\mathbf{x})=z\} is an (affine) hyperplane that separates the space in two. Let’s define the orthogonal projection operator on \mathcal{E}_X, \Pi_X =\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T. Thus, the forecast of \mathbf{y} is: \widehat{\mathbf{y}}=\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T\mathbf{y}=\Pi_X\mathbf{y}. As \widehat{\varepsilon}=\mathbf{y}-\widehat{\mathbf{y}}=(\mathbb{I}-\Pi_X)\mathbf{y}=\Pi_{X^\perp}\mathbf{y}, we note that \widehat{\varepsilon}\perp\mathbf{x}, which will be interpreted as meaning that residuals are a term of innovation, unpredictable in the sense that \Pi_{X}\widehat{\varepsilon}=\mathbf{0}. The Pythagorean theorem is written here: \Vert \mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y} \Vert^2+\Vert \Pi_{ {X}^\perp}\mathbf{y} \Vert^2=\Vert \Pi_{ {X}}\mathbf{y}\Vert^2+\Vert \mathbf{y}-\Pi_{ {X}}\mathbf{y}\Vert^2=\Vert\widehat{\mathbf{y}}\Vert^2+\Vert\widehat{\mathbf{\varepsilon}}\Vert^2 which is classically translated in terms of the sum of squares: \underbrace{\sum_{i=1}^n y_i^2}_{n\times\text{total variance}}=\underbrace{\sum_{i=1}^n \widehat{y}_i^2}_{n\times\text{explained variance}}+\underbrace{\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{n\times\text{residual variance}} The coefficient of determination, R^2, is then interpreted as the square of the cosine of the angle \theta between \mathbf{y} and \Pi_X \mathbf{y}: R^2=\frac{\Vert \Pi_{{X}} \mathbf{y}\Vert^2}{\Vert \mathbf{y}\Vert^2}=1-\frac{\Vert \Pi_{ {X}^\perp} \mathbf{y}\Vert^2}{\Vert \mathbf {y}\Vert^2}=\cos^2(\theta). An important application was obtained by Frisch & Waugh (1933), when the explanatory variables are divided into two groups, \mathbf{X}=[\mathbf{X}_1 |\mathbf{X}_2], so that the regression becomes y=\beta_0+\mathbf{X}_1 \beta_1+\mathbf{X}_2 \beta_2+\varepsilon. Frisch & Waugh (1933) showed that two successive projections could be considered. Indeed, if \mathbf{y}_2^\star=\Pi_{X_1^\perp} \mathbf{y} and \mathbf{X}_2^\star=\Pi_{X_1^\perp}\mathbf{X}_2, we can show that \widehat{\beta} _2=[{\mathbf{X}_2^\star}^T \mathbf{X}_2^\star]^{-1}{\mathbf{X}_2^\star}^T \mathbf{y}_2^\star. In other words, the overall estimate is equivalent to the combination of independent estimates of the two models if \mathbf{X}_2^\star=\mathbf{X}_2, i.e. \mathbf{X}_2\in \mathcal{E}_{X_1}^\perp, which can be noted \mathbf{x}_1\perp\mathbf{x}_2. We obtain here the Frisch-Waugh theorem, which guarantees that if the explanatory variables between the two groups are orthogonal, then the overall estimate is equivalent to two independent regressions, on each of the sets of explanatory variables. This is a theorem of double projection, on orthogonal spaces. Many results and interpretations are obtained through geometric interpretations (fundamentally related to the links between conditional expectation and the orthogonal projection in the space of variables of finite variance).
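This double projection can easily be checked numerically; here is a small sketch (with arbitrary simulated data), where the coefficient of \mathbf{X}_2 in the full regression coincides with the one obtained after projecting both \mathbf{y} and \mathbf{X}_2 on the orthogonal of \mathbf{X}_1 (and the constant):

set.seed(1)
n = 100
X1 = rnorm(n); X2 = rnorm(n)
y = 1 + 2*X1 - X2 + rnorm(n)
b_full = coef(lm(y ~ X1 + X2))["X2"]
y_star  = residuals(lm(y ~ X1))        # projection of y on the orthogonal of (1, X1)
X2_star = residuals(lm(X2 ~ X1))       # projection of X2 on the orthogonal of (1, X1)
b_fw = coef(lm(y_star ~ 0 + X2_star))  # regression on the projected variable
c(b_full, b_fw)                        # identical values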

This geometric interpretation might help to get a better understanding of the problem of under-identification, i.e. the case where the real model would be y_i=\beta_0+ \mathbf{x}_1^T \beta_1+\mathbf{x}_2^T \beta_2+\varepsilon_i, but the estimated model is y_i=b_0+\mathbf{x}_1^T \mathbf{b}_1+\eta_i. The maximum likelihood estimator of \mathbf{b}_1 is \widehat{\mathbf{b}}_1=\mathbf {\beta}_1 + \underbrace{ (\mathbf {X}_1^T\mathbf {X}_1)^{-1} \mathbf {X}_1^T \mathbf {X}_{2} \mathbf{\beta}_2}_{\mathbf{\beta}_{12}}+\underbrace{(\mathbf{X}_1^{T}\mathbf{X}_1)^{-1} \mathbf{X}_1^T\varepsilon}_{\nu}so that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1+\beta_{12}, the bias ( \beta_{12}) being null only in the case where \mathbf{X}_1^T \mathbf{X}_2=\mathbf{0} (i. e. \mathbf{X}_1\perp \mathbf{X}_2 ): we find here a consequence of the Frisch-Waugh theorem.

On the other hand, over-identification corresponds to the case where the real model would be y_i=\beta_0+\mathbf{x}_1^T \beta_1+\varepsilon_i, but the estimated model is y_i=b_0+ \mathbf{x}_1^T \mathbf{b} _1+\mathbf{x}_2^T \mathbf{b}_2+\eta_i. In this case, the estimate is unbiased, in the sense that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1, but the estimator is not efficient. Later on, we will discuss an effective method for selecting variables (and avoiding over-identification).

From parametric to non-parametric

We can rewrite equation (4) in the form \widehat{\mathbf{y}}=\Pi_X\mathbf{y}, which helps us to see the forecast directly as a linear transformation of the observations. More generally, a linear predictor can be obtained by considering m(\mathbf{x})=\mathbf{s}_{\mathbf{x}}^T \mathbf{y}, where \mathbf{s}_{\mathbf{x}} is a weight vector, which depends on \mathbf{x}, interpreted as a smoothing vector. Using the vectors \mathbf{s}_{\mathbf{x}_i}, calculated from the observations \mathbf{x}_i, we obtain a matrix \mathbf{S} of size n\times n, and \widehat{\mathbf{y}}=\mathbf{S}\mathbf{y}. In the case of the linear regression described above, \mathbf{s}_{\mathbf{x}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{x}, and in that case \text{trace}(\mathbf{S}) is the number of columns in the \mathbf{X} matrix (the number of explanatory variables). In this context of more general linear predictors, \text{trace}(\mathbf{S}) is often seen as equivalent to the number of parameters (or complexity, or dimension, of the model), and \nu=n-\text{trace}(\mathbf{S}) is then the number of degrees of freedom (see Ruppert et al., 2003; Simonoff, 1996). The principle of parsimony says that we should minimize this dimension (the trace of the matrix \mathbf{S}) as much as possible. But in the general case, this dimension is more difficult to obtain explicitly.

The estimator introduced by Nadaraya (1964) and Watson (1964), in the case of a simple non-parametric regression, is also written in this form since \widehat{m}_h(x)=\mathbf{s}_{x}^T\mathbf{y}=\sum_{i=1}^n \mathbf{s}_{x,i}y_i where \mathbf{s}_{x,i}=\frac{K_h(x-x_i)}{K_h(x-x_1)+\cdots+K_h(x-x_n)} where K(\cdot) is a kernel function, which assigns a weight that is higher the closer x_i is to x, and h>0 is the bandwidth. The introduction of this meta-parameter h is an important issue, as it should be chosen wisely. Using asymptotic developments, we can show that if X has density f, \text{bias}[\widehat{m}_h(x)]=\mathbb{E}[\widehat{m}_h(x)]-m(x)\sim {h^2}\left(\frac{C_1 }{2}m''(x)+C_2 m'(x)\frac{f'(x)}{f(x)}\right) and \displaystyle{{\text{Var}[\widehat{m}_h(x)]\sim\frac{C_3}{{nh}}\frac{\sigma(x)}{f(x)}}} for some constants that can be estimated (see Simonoff (1996) for a discussion). These two functions evolve inversely with h, as shown in Figure 1 (where the meta-parameter on the x-axis is here, actually, h^{-1}). Keep in mind that we will see a similar graph in the context of machine learning models.
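A minimal Nadaraya-Watson sketch with a Gaussian kernel, just to illustrate the weights \mathbf{s}_{x,i} above (the bandwidth h is chosen by hand here, for illustration only):

nw = function(x0, x, y, h){
  w = dnorm((x0-x)/h)       # kernel weights K_h(x0 - x_i)
  sum(w*y)/sum(w)}          # weighted average of the responses
# x = runif(200); y = sin(2*pi*x)+rnorm(200)/4
# xs = seq(0,1,by=.01)
# plot(x,y); lines(xs, sapply(xs, nw, x=x, y=y, h=.05), col="red")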

Figure 1. Choice of meta-parameter and the Goldilocks problem: it must not be too large (otherwise there is too much variance), nor too small (otherwise there is too much bias).

The natural idea is then to try to minimize the mean square error, the MSE, defined as \text{bias}[\widehat{m}_h (x)]^2+\text{Var}[\widehat{m}_h (x)], and then integrate over x, which gives an optimal value for h of the form h^\star=O(n^{-1/5}), and reminds us of Silverman’s rule – see Silverman (1986). In larger dimensions, for continuous \mathbf{x} variables, a multivariate kernel with matrix bandwidth \mathbf{H} can be used, and \mathbb{E}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim m(\mathbf{x})+\frac{C_1}{2}\text{trace}\big(\mathbf{H}^Tm''(\mathbf{x})\mathbf{H}\big)+C_2\frac{m'(\boldsymbol{x})^T\mathbf{H}\mathbf{H}^T \nabla f(\mathbf{x})}{f(\mathbf{x})} while \text{Var}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim\frac{C_3}{n~\text{det}(\mathbf{H})}\frac{\sigma(\mathbf{x})}{f(\mathbf{x})}
If \mathbf{H} is a diagonal matrix, with the same term h on the diagonal, then h^\star=O(n^{-1/(4+\dim(\mathbf{x}))}). However, in practice, there will be more interest in the integrated version of the quadratic error, MISE(\widehat{m}_{h})=\mathbb{E}[MSE(\widehat{m}_{h}(X))]=\int MSE(\widehat{m}_{h}(x))dF(x) and we can prove that MISE[\widehat{m}_h]\sim \overbrace{\frac{h^4}{4}\left(\int x^2k(x)dx\right)^2\int\big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]^2dx}^{\text{bias}^2} +\overbrace{\frac{\sigma^2}{nh}\int k^2(x)dx \cdot\int\frac{dx}{f(x)}}^{\text{variance}} as n\rightarrow\infty and nh\rightarrow\infty. Here we find an asymptotic relationship that again recalls Silverman’s (1986) order of magnitude, h^\star =n^{-\frac{1}{5}}\left(\frac{C_1\int \frac{dx}{f(x)}}{C_2\int \big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]dx}\right)^{\frac{1}{5}}. The main problem here, in practice, is that many of the terms in the expression above are unknown. Automatic learning offers computational techniques, whereas the econometrician is used to searching for asymptotic (mathematical) properties.

To be continued (references mentioned above are online here)…