Tag Archives: regression

Random thoughts on econometric models with (pure) random features

For my lectures on applied linear models, I wanted to illustrate the fact that the R^2 is never a good measure of the goodness of fit of a model, since it is quite easy to increase it artificially. Consider the following dataset

n=100
df=data.frame(matrix(rnorm(n*n),n,n))
names(df)=c("Y",paste("X",1:99,sep=""))

with one variable of interest y, and 99 features x_j, all of them (by construction) independent. And we only have 100 observations… Consider here the regression on the first k features, and compute the R^2_k of that regression

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$r.squared}

Let us see what’s going on…

plot(1:99,Vectorize(reg)(1:99))

(actually, it’s not exactly what we have on the graph…. we have the average obtained over 1,000 samples randomly generated, with 90% confidence bands). Observe that \mathbb{E}[R^2_k]=k/n, i.e. if we add some pure random noise, we keep increasing the R^2 (up to 1, actually).
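For the record, the averaged curve and the 90% confidence bands can be obtained with a small simulation loop along the following lines (a sketch, reusing n and the reg function above, here for the R^2_k curve; the same loop works for the other criteria, and the number of replications is illustrative, 1,000 of them can take a while),

nsim = 1000
R2 = matrix(NA, nsim, 99)
for(s in 1:nsim){
  df = data.frame(matrix(rnorm(n*n), n, n))   # a fresh pure-noise dataset
  names(df) = c("Y", paste("X", 1:99, sep=""))
  R2[s,] = Vectorize(reg)(1:99)               # R^2_k for k = 1, ..., 99
}
plot(1:99, apply(R2, 2, mean), type="l")
lines(1:99, apply(R2, 2, quantile, .05), lty=2)   # 90% confidence band
lines(1:99, apply(R2, 2, quantile, .95), lty=2)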

Good news, as we’ve seen in the course, the adjusted R^2 – denoted \bar{R}^2 – might help. Observe that \mathbb{E}[\bar{R}^2_k]=0, so, in some sense, adding features does not help here…

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  summary(model)$adj.r.squared}
plot(1:99,Vectorize(reg)(1:99))

We can actually do the same with the Akaike criterion AIC_k and the Schwarz (Bayesian) criterion BIC_k.

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  AIC(model)}
plot(1:99,Vectorize(reg)(1:99))

For the AIC, the initial increase makes sense: we should not prefer the model with 10 covariates to the one with none. The strange thing is the behavior on the far right: there, we prefer 80 pure-noise features to none! Which I find hard to interpret… For the BIC the code is simply

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  BIC(model)}
plot(1:99,Vectorize(reg)(1:99))

and here also, we have the same pattern, where we prefer a big model with just pure noise to nothing…

A last one to conclude (or not): what about the leave-one-out cross-validation mean squared error? More precisely, CV=\frac{1}{n}\sum_{i=1}^n\widehat{\varepsilon}^2_{-i}, where \widehat{\varepsilon}_{-i}=y_i-\widehat{y}_{-i}, and \widehat{y}_{-i} is the predicted value for observation i obtained when the model is estimated with the ith observation deleted. One can prove that \widehat{\beta}_{-i}=\widehat{\beta}-(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i\hat\varepsilon_i(1-H_{i,i})^{-1}, where H is the classical hat matrix, thus \widehat{\varepsilon}_{-i}=(1-H_{i,i})^{-1}\hat\varepsilon_i, i.e. we do not have to estimate n models (one per deleted observation)

reg=function(k){
  frm=paste("Y~",paste("X",1:k,collapse="+",sep=""))
  model=lm(frm,data=df)
  h=lm.influence(model)$hat
  mean( (residuals(model)/(1-h))^2 )}
plot(1:99,Vectorize(reg)(1:99))

Here, it makes sense: adding noisy features yields overfitting! So the (out-of-sample) mean squared error is increasing!
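As a sanity check on the hat-matrix shortcut used above (a sketch, for one small value of k; the objects n and df are the ones defined earlier), we can compare it with explicit refitting,

k = 5
model = lm(paste("Y~",paste("X",1:k,collapse="+",sep="")),data=df)
h = lm.influence(model)$hat
cv1 = mean( (residuals(model)/(1-h))^2 )    # shortcut with the hat matrix
cv2 = mean(sapply(1:n, function(i){         # brute force: n models, one per deleted observation
  fit = lm(paste("Y~",paste("X",1:k,collapse="+",sep="")),data=df[-i,])
  (df$Y[i]-predict(fit,newdata=df[i,]))^2 }))
c(cv1,cv2)    # the two values coincide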

That’s all nice, but it might not be very realistic… Here, for my model with only one variable, I just picked one, at random… In practice, we try to get the “best” one… So a more natural idea would be to order the variables according to their correlation with y,

df=data.frame(matrix(rnorm(n*n),n,n))
df=df[,rev(order(abs(cor(df)[1,])))]
names(df)=c("Y",paste("X",1:99,sep=""))

and as before, we can plot the evolution of R^2_k as a function of k the number of features considered,

which is increasing, with a higher slope at the beginning… For the \bar R^2_k, we might actually prefer a correlated noise to nothing (which actually makes sense). So here, since we somehow chose our variables, \bar R^2_k seems to be always positive…

For the AIC_k, here also, there is an improvement at first, before coming back to the original situation (with about 80 features); and here also, we observe the drop on the far right part of the graph

The BIC_k might like the top three features, but soon there is a deterioration… even if here also we have the drop at the far right (with more than 95 features, for 100 observations).

Finally, observe that here again, our (leave-one-out) cross-validation criterion has not been misled by our noisy variables: it keeps increasing!

So it seems that cross-validation techniques are more robust than the AIC and BIC (even if we mentioned, in a previous post, connections between all those concepts) when we have a lot of noisy (non-relevant) features.

Probabilistic Foundations of Econometrics, part 2

This post is the second one of our series on the history and foundations of econometric and machine learning models. Part 1 is online here.

Geometric Properties of this Linear Model

Let’s define the scalar product in \mathbb{R}^n, ⟨\mathbf{a},\mathbf{b}⟩=\mathbf{a}^T\mathbf{b}, and let us denote by \|\cdot\| the associated Euclidean norm, \|\mathbf{a}\|=\sqrt{\mathbf{a}^T\mathbf{a}} (denoted \|\cdot\|_{\ell_2} in the next post). Denote by \mathcal{E}_X the space spanned by all linear combinations of the components of \mathbf{X} (adding the constant). If the explanatory variables are linearly independent, \mathbf{X} is a full (column) rank matrix and \mathcal{E}_X is a space of dimension p+1. Let’s assume from now on that the variables \mathbf{x} and y are centered here. Note that no distributional assumption is made in this section; the geometric properties are derived from the properties of expectation and variance in the space of finite variance variables.

With this notation, it should be noted that the linear model is written m(\mathbf{x})=⟨\mathbf{x},\beta⟩. The space H_z=\{\mathbf{x}\in\mathbb{R}^{p+1}:m(\mathbf{x})=z\} is an (affine) hyperplane that separates the space in two. Let’s define the orthogonal projection operator on \mathcal{E}_X, \Pi_X =\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T. Thus, the forecast that can be made for \mathbf{y} is \widehat{\mathbf{y}}=\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T\mathbf{y}=\Pi_X\mathbf{y}. Since \widehat{\varepsilon}=\mathbf{y}-\widehat{\mathbf{y}}=(\mathbb{I}-\Pi_X)\mathbf{y}=\Pi_{X^\perp}\mathbf{y}, we note that \widehat{\varepsilon}\perp\mathbf{x}, which will be interpreted as meaning that the residuals are a term of innovation, unpredictable in the sense that \Pi_{X}\widehat{\varepsilon}=\mathbf{0}. The Pythagorean theorem is written here: \Vert \mathbf{y} \Vert^2=\Vert \Pi_{X}\mathbf{y} \Vert^2+\Vert \Pi_{X^\perp}\mathbf{y} \Vert^2=\Vert \Pi_{X}\mathbf{y}\Vert^2+\Vert \mathbf{y}-\Pi_{X}\mathbf{y}\Vert^2=\Vert\widehat{\mathbf{y}}\Vert^2+\Vert\widehat{\mathbf{\varepsilon}}\Vert^2, which is classically translated in terms of sums of squares: \underbrace{\sum_{i=1}^n y_i^2}_{n\times\text{total variance}}=\underbrace{\sum_{i=1}^n \widehat{y}_i^2}_{n\times\text{explained variance}}+\underbrace{\sum_{i=1}^n (y_i-\widehat{y}_i)^2}_{n\times\text{residual variance}}. The coefficient of determination, R^2, is then interpreted as the squared cosine of the angle \theta between \mathbf{y} and \Pi_X \mathbf{y}: R^2=\frac{\Vert \Pi_{{X}} \mathbf{y}\Vert^2}{\Vert \mathbf{y}\Vert^2}=1-\frac{\Vert \Pi_{{X}^\perp} \mathbf{y}\Vert^2}{\Vert \mathbf{y}\Vert^2}=\cos^2(\theta).

An important application was obtained by Frisch & Waugh (1933), when the explanatory variables are divided into two groups, \mathbf{X}=[\mathbf{X}_1 |\mathbf{X}_2], so that the regression becomes y=\beta_0+\mathbf{X}_1 \beta_1+\mathbf{X}_2 \beta_2+\varepsilon. Frisch & Waugh (1933) showed that two successive projections could be considered. Indeed, if \mathbf{y}_2^\star=\Pi_{X_1^\perp} \mathbf{y} and \mathbf{X}_2^\star=\Pi_{X_1^\perp}\mathbf{X}_2, we can show that \widehat{\beta}_2=[{\mathbf{X}_2^\star}^T \mathbf{X}_2^\star]^{-1}{\mathbf{X}_2^\star}^T \mathbf{y}_2^\star. In other words, the overall estimate is equivalent to the combination of independent estimates of the two models if \mathbf{X}_2^\star=\mathbf{X}_2, i.e. \mathbf{X}_2\in \mathcal{E}_{X_1}^\perp, which can be noted \mathbf{x}_1\perp\mathbf{x}_2. We obtain here the Frisch–Waugh theorem, which guarantees that if the explanatory variables in the two groups are orthogonal, then the overall estimate is equivalent to two independent regressions, one on each set of explanatory variables. This is a theorem of double projection, on orthogonal spaces. Many results and interpretations are obtained through geometric interpretations (fundamentally related to the links between the conditional expectation and the orthogonal projection onto the space of variables of finite variance).
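As a quick numerical illustration of the Frisch–Waugh decomposition (a sketch on simulated data, with purely illustrative numbers): regressing y on both blocks of covariates, or regressing the \mathbf{X}_1-residualized y on the \mathbf{X}_1-residualized \mathbf{X}_2, should give the same coefficient for \mathbf{X}_2.

set.seed(1)
n  = 200
x1 = rnorm(n)
x2 = .5*x1 + rnorm(n)            # the two blocks are correlated
y  = 1 + 2*x1 - 3*x2 + rnorm(n)
coef(lm(y ~ x1 + x2))["x2"]      # full regression
y_star  = residuals(lm(y ~ x1))  # residualize y on x1
x2_star = residuals(lm(x2 ~ x1)) # residualize x2 on x1
coef(lm(y_star ~ 0 + x2_star))   # same slope as above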

This geometric interpretation might help to get a better understanding of the problem of under-identification, i.e. the case where the real model would be y_i=\beta_0+ \mathbf{x}_1^T \beta_1+\mathbf{x}_2^T \beta_2+\varepsilon_i, but the estimated model is y_i=b_0+\mathbf{x}_1^T \mathbf{b}_1+\eta_i. The maximum likelihood estimator of \mathbf{b}_1 is \widehat{\mathbf{b}}_1=\mathbf {\beta}_1 + \underbrace{ (\mathbf {X}_1^T\mathbf {X}_1)^{-1} \mathbf {X}_1^T \mathbf {X}_{2} \mathbf{\beta}_2}_{\mathbf{\beta}_{12}}+\underbrace{(\mathbf{X}_1^{T}\mathbf{X}_1)^{-1} \mathbf{X}_1^T\varepsilon}_{\nu}, so that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1+\beta_{12}, the bias (\beta_{12}) being null only in the case where \mathbf{X}_1^T \mathbf{X}_2=\mathbf{0} (i.e. \mathbf{X}_1\perp \mathbf{X}_2): we find here a consequence of the Frisch-Waugh theorem.

On the other hand, over-identification corresponds to the case where the real model would be y_i=\beta_0+\mathbf{x}_1^T \beta_1+\varepsilon_i, but the estimated model is y_i=b_0+ \mathbf{x}_1^T \mathbf{b}_1+\mathbf{x}_2^T \mathbf{b}_2+\eta_i. In this case, the estimate is unbiased, in the sense that \mathbb{E}[\widehat{\mathbf{b}}_1]=\beta_1, but the estimator is not efficient. Later on, we will discuss an effective method for selecting variables (and avoiding over-identification).

From parametric to non-parametric

We can rewrite equation (4) in the form \widehat{\mathbf{y}}=\Pi_X\mathbf{y}, which helps us see the forecast directly as a linear transformation of the observations. More generally, a linear predictor can be obtained by considering m(\mathbf{x})=\mathbf{s}_{\mathbf{x}}^T \mathbf{y}, where \mathbf{s}_{\mathbf{x}} is a weight vector, which depends on \mathbf{x}, interpreted as a smoothing vector. Using the vectors \mathbf{s}_{\mathbf{x}_i}, calculated from the observations \mathbf{x}_i, we obtain a matrix \mathbf{S} of size n\times n, and \widehat{\mathbf{y}}=\mathbf{S}\mathbf{y}. In the case of the linear regression described above, \mathbf{s}_{\mathbf{x}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{x}, and in that case \text{trace}(\mathbf{S}) is the number of columns in the \mathbf{X} matrix (the number of explanatory variables). In this context of more general linear predictors, \text{trace}(\mathbf{S}) is often seen as equivalent to the number of parameters (or complexity, or dimension, of the model), and \nu=n-\text{trace}(\mathbf{S}) is then the number of degrees of freedom (see Ruppert et al., 2003; Simonoff, 1996). The principle of parsimony says that we should minimize this dimension (the trace of the matrix \mathbf{S}) as much as possible. But in the general case, this dimension is harder to obtain explicitly.

The estimator introduced by Nadaraya (1964) and Watson (1964), in the case of a simple non-parametric regression, is also written in this form since \widehat{m}_h(x)=\mathbf{s}_{x}^T\mathbf{y}=\sum_{i=1}^n \mathbf{s}_{x,i}y_i where \mathbf{s}_{x,i}=\frac{K_h(x-x_i)}{K_h(x-x_1)+\cdots+K_h(x-x_n)}, where K(\cdot) is a kernel function, which assigns a weight that is larger the closer x_i is to x, and h>0 is the bandwidth. The introduction of this meta-parameter h is an important issue, as it should be chosen wisely. Using asymptotic developments, we can show that if X has density f, \text{bias}[\widehat{m}_h(x)]=\mathbb{E}[\widehat{m}_h(x)]-m(x)\sim {h^2}\left(\frac{C_1 }{2}m''(x)+C_2 m'(x)\frac{f'(x)}{f(x)}\right) and \displaystyle{{\text{Var}[\widehat{m}_h(x)]\sim\frac{C_3}{{nh}}\frac{\sigma(x)}{f(x)}}} for some constants that can be estimated (see Simonoff (1996) for a discussion). These two functions evolve in opposite directions as h varies, as shown in Figure 1 (where the meta-parameter on the x-axis is here, actually, h^{-1}). Keep in mind that we will see a similar graph in the context of machine learning models.

Figure 1. Choice of meta-parameter and the Goldilocks problem: it must not be too large (otherwise there is too much variance), nor too small (otherwise there is too much bias).
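For illustration, here is a minimal, hand-made Nadaraya–Watson estimator with a Gaussian kernel on simulated data (a sketch; the data, the bandwidth h=.5 and the function name nw are purely illustrative),

set.seed(1)
n = 200
x = runif(n, 0, 10)
y = sin(x) + rnorm(n)/4
nw = function(x0, h){
  w = dnorm((x0 - x)/h)   # kernel weights around x0
  sum(w*y)/sum(w)         # weighted average of the y's
}
u = seq(0, 10, by=.1)
plot(x, y, col="grey")
lines(u, sapply(u, nw, h=.5), col="blue", lwd=2)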

The natural idea is then to try to minimize the mean square error, the MSE, defined as \text{bias}[\widehat{m}_h (x)]^2+\text{Var}[\widehat{m}_h (x)], and then to integrate it over x, which gives an optimal value for h of the form h^\star=O(n^{-1/5}), and reminds us of Silverman’s rule – see Silverman (1986). In larger dimensions, for continuous \mathbf{x} variables, a multivariate kernel with matrix bandwidth \mathbf{H} can be used, and \mathbb{E}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim m(\mathbf{x})+\frac{C_1}{2}\text{trace}\big(\mathbf{H}^Tm''(\mathbf{x})\mathbf{H}\big)+C_2\frac{m'(\boldsymbol{x})^T\mathbf{H}\mathbf{H}^T \nabla f(\mathbf{x})}{f(\mathbf{x})} while \text{Var}[\widehat{m}_{\mathbf{H}}(\mathbf{x})]\sim\frac{C_3}{n~\text{det}(\mathbf{H})}\frac{\sigma(\mathbf{x})}{f(\mathbf{x})}
If \mathbf{H} is a diagonal matrix, with the same term h on the diagonal, then h^\star=O(n^{-1/(4+\dim(\mathbf{x}))}). However, in practice, there will be more interest in the integrated version of the quadratic error, MISE(\widehat{m}_{h})=\mathbb{E}[MSE(\widehat{m}_{h}(X))]=\int MSE(\widehat{m}_{h}(x))dF(x), and we can prove that MISE[\widehat{m}_h]\sim \overbrace{\frac{h^4}{4}\left(\int x^2k(x)dx\right)^2\int\big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]^2dx}^{\text{bias}^2} +\overbrace{\frac{\sigma^2}{nh}\int k^2(x)dx \cdot\int\frac{dx}{f(x)}}^{\text{variance}} as n→∞ and nh→∞. Here we find an asymptotic relationship that again recalls Silverman’s (1986) order of magnitude, h^\star =n^{-\frac{1}{5}}\left(\frac{C_1\int \frac{dx}{f(x)}}{C_2\int \big[m''(x)+2m'(x)\frac{f'(x)}{f(x)}\big]dx}\right)^{\frac{1}{5}}. The main problem here, in practice, is that many of the terms in the expression above are unknown. Machine learning offers computational techniques, where the econometrician is used to searching for asymptotic (mathematical) properties.
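Since those plug-in quantities are unknown in practice, one computational route (a sketch, reusing the toy x, y, n and the Gaussian kernel from the snippet above) is to pick h by leave-one-out cross-validation,

cv_h = function(h){
  loo = sapply(1:n, function(i){
    w = dnorm((x[i]-x[-i])/h)
    y[i] - sum(w*y[-i])/sum(w)   # prediction error at x_i, leaving observation i out
  })
  mean(loo^2)
}
hs = seq(.1, 2, by=.05)
cv = Vectorize(cv_h)(hs)
plot(hs, cv, type="l")
hs[which.min(cv)]   # cross-validated bandwidth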

To be continued (references mentioned above are online here)…

Regression on a categorical variable, and ANOVA

This morning, for the STT5100 course, we discussed regression on a categorical variable. In particular, we started by looking at what the regression without the intercept would give, and its interpretation. We relied on the dataset with the students’ weights and heights, and the gender variable.

Davis=read.table(
  "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]
Davis=data.frame(Y=Davis$weight * 2.204622,
                 X1=Davis$sex)

We wanted to estimate the model y_i =\beta_F\boldsymbol{1}_F(x_i)+\beta_M\boldsymbol{1}_M(x_i)+\varepsilon_i. We had seen that we could use the matrix form

 X=cbind(Davis$X1=='F',Davis$X1=='M') 
 Y=Davis$Y

since the matrix \mathbf{X}^T\mathbf{X} is invertible (once the intercept is removed)

 solve(t(X)%*%X)
            [,1]       [,2]
[1,] 0.008928571 0.00000000
[2,] 0.000000000 0.01136364

and therefore the least squares estimator is (classically) \widehat{\mathbf{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}

 solve(t(X)%*%X) %*% (t(X)%*%Y)
         [,1]
[1,] 125.4272
[2,] 167.3258

which indeed corresponds to the R output,

 reg=lm(Y~0+X1,data=Davis)
 summary(reg)
 
Coefficients:
    Estimate Std. Error t value Pr(>|t|)    
X1F  125.427      1.960   64.00   <2e-16 ***
X1M  167.326      2.211   75.68   <2e-16 ***

Let us now consider the two subpopulations, with the women’s weights and the men’s weights

x=Y[X[,1]==1]
y=Y[X[,2]==1]
nx=length(x)
ny=length(y)

We had seen in class that the \widehat{\mathbf{\beta}} have a very simple interpretation, since \widehat{{\beta}}_M = \frac{1}{n_M}\sum_{i:x_i=M} y_i, in other words \widehat{{\beta}}_M is the average weight of the men. And indeed

 mean(y)
[1] 167.3258

This is, in the end, very natural, or intuitive.

We can now wonder about the standard error of the estimator \widehat{{\beta}}_M. Intuitively, we would expect to get the variance of the estimator of the mean, that is, here

 sqrt(var(y)/ny)
[1] 2.794391
 sqrt(1/(ny-1)*sum( (y-mean(y))^2 )/ny)
[1] 2.794391

since, as a reminder, \text{Var}[\overline{y}]=\frac{\text{Var}(y)}{n}. As we saw in the multiple regression model, the variance of the estimator of \mathbf{\beta} is proportional to \sigma^2, the overall variance of the residuals (this is the homoscedasticity assumption! the two groups must have the same variance). So let us compute the natural estimator of \sigma^2

 s2=1/(nx+ny-2)*(sum( (x-mean(x))^2 )+sum( (y-mean(y))^2))
 sqrt(s2/ny)
[1] 2.210863

and indeed, we recover the value given in the regression table

 sqrt(s2/nx)
[1] 1.959721

(and the same for the other coefficient).

We then looked at the regression as it is classically done in R: we keep the intercept, and we remove one of the indicator variables (which then becomes the “reference category”).

 X=cbind(1,Davis$X1=='M')

Here again, the model becomes identifiable, and we obtain

 solve(t(X)%*%X) %*% (t(X)%*%Y)
          [,1]
[1,] 125.42724
[2,]  41.89855

We had noted that this second value can be interpreted as a differential with respect to the reference category

mean(y)-mean(x)
[1] 41.89855

The regression output is now

 reg2=lm(Y~X1,data=Davis)
 summary(reg2)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  125.427      1.960   64.00   <2e-16 ***
X1M           41.899      2.954   14.18   <2e-16 ***

And as I said, the Student t-test here corresponds to a test of equality between the average weight of the men and that of the women. And indeed, if we run the test, we see that the difference is significant, as expected (for the same reason as above, we assume the same variance in the two groups)

 t.test(Y[X[,1]==1],Y[X[,2]==1],var.equal=TRUE)
 
	Two Sample t-test
 
data:  Y[X[, 1] == 1] and Y[X[, 2] == 1]
t = -6.4475, df = 286, p-value = 4.826e-10
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -30.62603 -16.30035
sample estimates:
mean of x mean of y 
 143.8626  167.3258

I am, however, a bit surprised that the p-values are different. My interpretation is that the p-values are (in any case) very small, so it does not matter much. In fact, if we make the two variables independent (for instance by shuffling the variable \mathbf{y}), it works! Let us set

 Davis$Y=sample(Davis$Y)

which amounts to permuting all the observations of the dependent variable (but not the others!). The regression now gives

 reg2=lm(Y~X1,data=Davis)
 summary(reg2)
 
Call:
lm(formula = Y ~ X1, data = Davis)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-57.458 -22.184  -5.512  17.809 118.912 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 143.4382     2.7820   51.56   <2e-16 ***
X1M           0.9645     4.1940    0.23    0.818    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 29.44 on 198 degrees of freedom
Multiple R-squared:  0.000267,	Adjusted R-squared:  -0.004782 
F-statistic: 0.05289 on 1 and 198 DF,  p-value: 0.8183

in other words, gender is no longer significant, with a p-value of 81.8%, which is well above 5%. If we now run the test comparing the means of the two subgroups, we obtain

 Y=Davis$Y
 t.test(Y[X[,1]==1],Y[X[,2]==1],var.equal=TRUE)
 
	Two Sample t-test
 
data:  Y[X[, 1] == 1] and Y[X[, 2] == 1]
t = -0.22998, df = 198, p-value = 0.8183
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -9.235209  7.306165
sample estimates:
mean of x mean of y 
 143.4382  144.4027

and here the test also has a p-value of 81.8%. The two tests are therefore rigorously equivalent.
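To connect with the ANOVA mentioned in the title (a sketch that was not in the original post): on the original, unpermuted data, the F statistic of the one-way analysis of variance is simply the square of the t statistic of the X1M coefficient in the regression output above, so the two approaches are, again, equivalent. The data are re-read here, in an object named Davis0, only to avoid overwriting the permuted version,

 Davis0 = read.table(
   "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
 Davis0[12,c(2,3)] = Davis0[12,c(3,2)]
 Davis0 = data.frame(Y=Davis0$weight * 2.204622, X1=Davis0$sex)
 anova(lm(Y~X1,data=Davis0))   # the F statistic equals the square of the t statistic above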

Convex Regression Model

This morning during the lecture on nonlinear regression, I mentioned (very) briefly the case of convex regression. Since I forgot to mention the codes in R, I will publish them here. Assume that y_i=m(\mathbf{x}_i)+\varepsilon_i where m:\mathbb{R}^d\rightarrow \mathbb{R} is some convex function.

Then m is convex if and only if \forall\mathbf{x}_1,\mathbf{x}_2\in\mathbb{R}^d, \forall t\in[0,1], m(t\mathbf{x}_1+[1-t]\mathbf{x}_2) \leq tm(\mathbf{x}_1)+[1-t]m(\mathbf{x}_2). Hildreth (1954) proved that if m^\star=\underset{m \text{ convex}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-m(\mathbf{x_i})\big)^2\right\rbrace, then \mathbf{\theta}^\star=(m^\star(\mathbf{x_1}),\cdots,m^\star(\mathbf{x_n})) is unique.

Let \mathbf{y}=\mathbf{\theta}+\mathbf{\varepsilon}, then \mathbf{\theta}^\star=\underset{\mathbf{\theta}\in \mathcal{K}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-\theta_i\big)^2\right\rbrace where \mathcal{K}=\{\mathbf{\theta}\in\mathbb{R}^n:\exists m\text{ convex },m(\mathbf{x}_i)=\theta_i\}. I.e. \mathbf{\theta}^\star is the projection of \mathbf{y} onto the (closed) convex cone \mathcal{K}. The projection theorem gives existence and uniqueness.

For convenience, in the application, we will consider the real-valued case, m:\mathbb{R}\rightarrow \mathbb{R}, i.e. y_i=m(x_i)+\varepsilon_i. Assume that observations are ordered x_1\leq x_2\leq\cdots \leq x_n. Here \mathcal{K}=\left\lbrace\mathbf{\theta}\in\mathbb{R}^n:\frac{\theta_2-\theta_1}{x_2-x_1}\leq \frac{\theta_3-\theta_2}{x_3-x_2}\leq \cdots \leq \frac{\theta_n-\theta_{n-1}}{x_n-x_{n-1}}\right\rbrace

Hence, we have a quadratic program with n-2 linear constraints.

m^\star is then a piecewise linear function (the linear interpolation of consecutive pairs (x_i,\theta_i^\star)).
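For what it is worth, here is a minimal sketch of that quadratic program with the quadprog package, on simulated data with distinct, ordered x values (everything below, the data and the constraint matrix, is illustrative); the constraints simply encode the non-decreasing slopes defining \mathcal{K}.

library(quadprog)
set.seed(1)
n = 50
x = sort(runif(n,-1,1))         # distinct, ordered observations
y = x^2 + rnorm(n)/10           # convex signal plus noise
dx = diff(x)
A = matrix(0,n-2,n)             # row i: slope on [x_{i+1},x_{i+2}] minus slope on [x_i,x_{i+1}]
for(i in 1:(n-2)){
  A[i,i]   =  1/dx[i]
  A[i,i+1] = -1/dx[i]-1/dx[i+1]
  A[i,i+2] =  1/dx[i+1]
}
sol = solve.QP(Dmat=diag(n), dvec=y, Amat=t(A), bvec=rep(0,n-2))
theta = sol$solution
plot(x,y)
lines(x,theta,col="red",lwd=2)  # piecewise linear convex fit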

If m is differentiable, m is convex if m(\mathbf{x})+ \nabla m(\mathbf{x})^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y})

More generally, if m is convex, then there exists \xi_{\mathbf{x}}\in\mathbb{R}^d such that m(\mathbf{x})+ \xi_{\mathbf{x}}^{\text{ T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}).
\xi_{\mathbf{x}} is a subgradient of m at {\mathbf{x}}, and the set of subgradients is the subdifferential \partial m(\mathbf{x})=\big\lbrace \xi\in\mathbb{R}^d: m(\mathbf{x})+ \xi^{\text{ T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}),\forall \mathbf{y}\in\mathbb{R}^d\big\rbrace

Hence, \mathbf{\theta}^\star is the solution of \text{argmin}\big\lbrace\|\mathbf{y}-\mathbf{\theta}\|^2\big\rbrace \text{ subject to } \theta_i+\xi_i^{\text{ T}}[\mathbf{x}_j-\mathbf{x}_i]\leq\theta_j,~\forall i,j, for some \xi_1,\cdots,\xi_n\in\mathbb{R}^d. Now, to do it for real, use the cobs package for constrained (b)splines regression,

library(cobs)

To get a convex regression, use

plot(cars)
x = cars$speed
y = cars$dist
rc = conreg(x,y,convex=TRUE)
lines(rc, col = 2)


Here we can get the values of the knots

rc
 
Call:  conreg(x = x, y = y, convex = TRUE) 
Convex regression: From 19 separated x-values, using 5 inner knots,
     7,    8,    9,   20,   23.
RSS =  1356; R^2 = 0.8766;
 needed (5,0) iterations

and actually, if we use them in a linear-spline regression, we get the same output here

library(splines)
reg = lm(dist~bs(speed,degree=1,knots=c(7,8,9,20,23)),data=cars)
u = seq(4,25,by=.1)
v = predict(reg,newdata=data.frame(speed=u))
lines(u,v,col="green")

Let us add vertical lines for the knots

abline(v=c(4,7,8,9,20,23,25),col="grey",lty=2)

Parallelizing Linear Regression or Using Multiple Sources

My previous post explained how it was mathematically possible to parallelize the computation used to estimate the parameters of a linear regression. More specifically, we have a matrix \mathbf{X}, which is an n\times k matrix, and \mathbf{y}, an n-dimensional vector, and we want to compute \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y} by splitting the job. Instead of using the n observations at once, we’ve seen that it was possible to compute “something” using the first n_1 rows, then the next n_2 rows, etc. Then, finally, we “aggregate” the m objects created to get our overall estimate.

Parallelizing on multiple cores

Let us see how it works from a computational point of view, to run each computation on a different core of the machine. Each core will act as a slave, computing what we’ve seen in the previous post. Here, the data we use are

y = cars$dist
X = data.frame(1,cars$speed)
k = ncol(X)

On my laptop, I have three cores, so we will split it in m=3 chunks

library(parallel)
library(pbapply)
library(magrittr)   # for the %>% pipe used below
ncl = detectCores()-1
cl = makeCluster(ncl)

This is more or less what we will do: we have our dataset, and we split the jobs,

We can then create lists containing elements that will be sent to each core, as Ewen suggested,

chunk = function(x,n) split(x, cut(seq_along(x), n, labels = FALSE))
a_parcourir = chunk(seq_len(nrow(X)), ncl)
for(i in 1:length(a_parcourir)) a_parcourir[[i]] = rep(i, length(a_parcourir[[i]]))
Xlist = split(X, unlist(a_parcourir))
ylist = split(y, unlist(a_parcourir))

It is also possible to simplify the QR functions we will use

compute_qr = function(x){
  list(Q=qr.Q(qr(as.matrix(x))),R=qr.R(qr(as.matrix(x))))
}
get_Vlist = function(j){
  Q3 = QR1[[j]]$Q %*% Q2list[[j]]
  t(Q3) %*% ylist[[j]]
}
clusterExport(cl, c("compute_qr", "get_Vlist"), envir=environment())

Then, we can run our functions on each core. The first one is

  QR1 = parLapply(cl=cl,Xlist, compute_qr)

note that it is also possible to use

  QR1 = pblapply(Xlist, compute_qr, cl=cl)

which will include a progress bar (that can be nice when the database is rather large). Then use

  R1 = pblapply(QR1, function(x) x$R, cl=cl) %>% do.call("rbind", .)
  Q1 = qr.Q(qr(as.matrix(R1)))
  R2 = qr.R(qr(as.matrix(R1)))
  Q2list = split.data.frame(Q1, rep(1:ncl, each=k))
  clusterExport(cl, c("QR1", "Q2list", "ylist"), envir=environment())
  Vlist = pblapply(1:length(QR1), get_Vlist, cl=cl)
  sumV = Reduce('+', Vlist)

and finally the output is

solve(R2) %*% sumV
         [,1]
X1 -17.579095
X2   3.932409

which is what we were expecting…

Using multiple sources

In practice, it might also happen that various “servers” have the data, but we cannot get a copy. But it is possible to run some functions on their server, and get some output, that we can use afterwards.

Datasets are supposed to be available somewhere. We can send a request, and get a matrix. Then we aggregate all of them, and send another request. That’s what we will do here. Provider j should run f_1(\mathbf{X}) on his part of the data; that function will return R^{(1)}_j. More precisely, to the first provider, send

function1 = function(subX){
return(qr.R(qr(as.matrix(subX))))}
R1 = function1(Xlist[[1]])

and actually, send that function to all providers, and aggregate the output

m = length(Xlist)   # number of providers
for(j in 2:m) R1 = rbind(R1,function1(Xlist[[j]]))

Then create, on your side, the following objects

Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list=list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*k+1:k,]

Finally, contact one last time the providers, and send one of your objects

function2=function(subX,suby,Q){
Q1=qr.Q(qr(as.matrix(subX)))
Q2=Q
return(t(Q1%*%Q2) %*% suby)}

Provider j should then run f_2(\mathbf{X},\mathbf{y},Q_j^{(2)}) on his part of the data, using also Q_j^{(2)} as argument (that we obtained on our side), and that function will return (\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j)^{T}\mathbf{y}_j. For instance, ask the first provider to run

sumV = function2(Xlist[[1]],ylist[[1]], Q2list[[1]])

and do the same with all providers

for(j in 2:m) sumV = sumV+ function2(Xlist[[j]],ylist[[j]], Q2list[[j]])
solve(R2) %*% sumV
         [,1]
X1 -17.579095
X2   3.932409

which is what we were expecting…

Linear Regression, with Map-Reduce

Sometimes, with big data, matrices are too big to handle, and it is possible to use tricks to numerically still do the math. Map-Reduce is one of those. With several cores, it is possible to split the problem, to map on each machine, and then to aggregate it back at the end.

Consider the case of the linear regression, \mathbf{y}=\mathbf{X}\mathbf{\beta}+\mathbf{\varepsilon} (with classical matrix notations). The OLS estimate of \mathbf{\beta} is \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}. To illustrate, consider a not too big dataset, and run some regression.

lm(dist~speed,data=cars)$coefficients
(Intercept)       speed 
 -17.579095    3.932409
y=cars$dist
X=cbind(1,cars$speed)
solve(crossprod(X,X))%*%crossprod(X,y)
           [,1]
[1,] -17.579095
[2,]   3.932409

How is this computed in R? Actually, it is based on the QR decomposition of \mathbf{X}, \mathbf{X}=\mathbf{Q}\mathbf{R}, where \mathbf{Q} is an orthogonal matrix (ie \mathbf{Q}^T\mathbf{Q}=\mathbb{I}). Then \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{R}^{-1}\mathbf{Q}^T\mathbf{y}

solve(qr.R(qr(as.matrix(X)))) %*% t(qr.Q(qr(as.matrix(X)))) %*% y
           [,1]
[1,] -17.579095
[2,]   3.932409

So far, so good, we get the same output. Now, what if we want to parallelise computations. Actually, it is possible.

Consider m blocks

m = 5

and split vectors and matrices
\mathbf{y}=\left[\begin{matrix}\mathbf{y}_1\\\mathbf{y}_2\\\vdots \\\mathbf{y}_m\end{matrix}\right] and \mathbf{X}=\left[\begin{matrix}\mathbf{X}_1\\\mathbf{X}_2\\\vdots\\\mathbf{X}_m\end{matrix}\right]=\left[\begin{matrix}\mathbf{Q}_1^{(1)}\mathbf{R}_1^{(1)}\\\mathbf{Q}_2^{(1)}\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{Q}_m^{(1)}\mathbf{R}_m^{(1)}\end{matrix}\right]
To split vectors and matrices, use (eg)

Xlist = list()
for(j in 1:m) Xlist[[j]] = X[(j-1)*10+1:10,]
ylist = list()
for(j in 1:m) ylist[[j]] = y[(j-1)*10+1:10]

and get a small QR decomposition (per subset)

QR1 = list()
for(j in 1:m) QR1[[j]] = list(Q=qr.Q(qr(as.matrix(Xlist[[j]]))),R=qr.R(qr(as.matrix(Xlist[[j]]))))

Consider the QR decomposition of \mathbf{R}^{(1)}, which is the first step of the reduce part: \mathbf{R}^{(1)}=\left[\begin{matrix}\mathbf{R}_1^{(1)}\\\mathbf{R}_2^{(1)}\\\vdots \\\mathbf{R}_m^{(1)}\end{matrix}\right]=\mathbf{Q}^{(2)}\mathbf{R}^{(2)} where \mathbf{Q}^{(2)}=\left[\begin{matrix}\mathbf{Q}^{(2)}_1\\\mathbf{Q}^{(2)}_2\\\vdots\\\mathbf{Q}^{(2)}_m\end{matrix}\right]

R1 = QR1[[1]]$R
for(j in 2:m) R1 = rbind(R1,QR1[[j]]$R)
Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list=list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*2+1:2,]

Define – as step 2 of the reduce part – \mathbf{Q}^{(3)}_j=\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j
and \mathbf{V}_j=\mathbf{Q}^{(3)T}_j\mathbf{y}_j

Q3list = list()
for(j in 1:m) Q3list[[j]] = QR1[[j]]$Q %*% Q2list[[j]]
Vlist = list()
for(j in 1:m) Vlist[[j]] = t(Q3list[[j]]) %*% ylist[[j]]

and finally set – as step 3 of the reduce part – \widehat{\mathbf{\beta}}=[\mathbf{R}^{(2)}]^{-1}\sum_{j=1}^m\mathbf{V}_j

sumV = Vlist[[1]]
for(j in 2:m) sumV = sumV+Vlist[[j]]
solve(R2) %*% sumV
           [,1]
[1,] -17.579095
[2,]   3.932409

It looks like we’ve been able to parallelise our linear regression…

Quantile Regression (home made)

After my series of posts on classification algorithms, it’s time to get back to R codes, this time for quantile regression. Yes, I still want to get a better understanding of optimization routines, in R. Before looking at the quantile regression, let us compute the median, or a quantile, from a sample.

Median

Consider a sample \{y_1,\cdots,y_n\}. To compute the median, solve \min_\mu \left\lbrace\sum_{i=1}^n|y_i-\mu|\right\rbrace, which can be solved using linear programming techniques. More precisely, this problem is equivalent to \min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^na_i+b_i\right\rbrace with a_i,b_i\geq 0 and y_i-\mu=a_i-b_i, \forall i=1,\cdots,n.
To illustrate, consider a sample from a lognormal distribution,

n = 101 
set.seed(1)
y = rlnorm(n)
median(y)
[1] 1.077415

For the optimization problem, use the matrix form, with 3n constraints, and 2n+1 parameters,

library(lpSolve)
A1 = cbind(diag(2*n),0) 
A2 = cbind(diag(n), -diag(n), 1)
r = lp("min", c(rep(1,2*n),0),
rbind(A1, A2),c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y))
tail(r$solution,1) 
[1] 1.077415

It looks like it’s working well…

Quantile

Of course, we can adapt our previous code for quantiles

tau = .3
quantile(y,tau)
      30% 
0.6741586

The linear program is now \min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbrace with a_i,b_i\geq 0 and y_i-\mu=a_i-b_i, \forall i=1,\cdots,n. The R code is now

A1 = cbind(diag(2*n),0) 
A2 = cbind(diag(n), -diag(n), 1)
r = lp("min", c(rep(tau,n),rep(1-tau,n),0),
rbind(A1, A2),c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y))
tail(r$solution,1) 
[1] 0.6741586

So far so good…

Quantile Regression (simple)

Consider the following dataset, with rents of flats in a major German city, as a function of the surface area, the year of construction, etc.

base=read.table("http://freakonometrics.free.fr/rent98_00.txt",header=TRUE)

The linear program for the quantile regression is now \min_{\mathbf{\beta},\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbrace with a_i,b_i\geq 0 and y_i-[\beta_0^\tau+\beta_1^\tau x_i]=a_i-b_i, \forall i=1,\cdots,n. So use here

require(lpSolve) 
tau = .3
n=nrow(base)
X = cbind( 1, base$area)
y = base$rent_euro
A1 = cbind(diag(2*n), 0,0) 
A2 = cbind(diag(n), -diag(n), X) 
r = lp("min",
       c(rep(tau,n), rep(1-tau,n),0,0), rbind(A1, A2),
       c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y)) 
tail(r$solution,2)
[1] 148.946864   3.289674

Of course, we can use R function to fit that model

library(quantreg)
rq(rent_euro~area, tau=tau, data=base)
Coefficients:
(Intercept)        area 
 148.946864    3.289674

Here again, it seems to work quite well. We can use a different probability level, of course, and get a plot

plot(base$area,base$rent_euro,xlab=expression(paste("surface (",m^2,")")),
     ylab="rent (euros/month)",col=rgb(0,0,1,.4),cex=.5)
sf=0:250
yr=r$solution[2*n+1]+r$solution[2*n+2]*sf
lines(sf,yr,lwd=2,col="blue")
tau = .9
r = lp("min",
       c(rep(tau,n), rep(1-tau,n),0,0), rbind(A1, A2),
       c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y)) 
tail(r$solution,2)
[1] 121.815505   7.865536
yr=r$solution[2*n+1]+r$solution[2*n+2]*sf
lines(sf,yr,lwd=2,col="blue")

Quantile Regression (multiple)

Now that we understand how to run the optimization program with one covariate, why not try with two? For instance, let us see if we can explain the rent of a flat as a (linear) function of the surface and the age of the building.

require(lpSolve) 
tau = .3
n=nrow(base)
X = cbind( 1, base$area, base$yearc )
y = base$rent_euro
A1 = cbind(diag(2*n), 0,0,0) 
A2 = cbind(diag(n), -diag(n), X) 
r = lp("min",
       c(rep(tau,n), rep(1-tau,n),0,0,0), rbind(A1, A2),
       c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y)) 
tail(r$solution,3)
[1] 0.000000 3.257562 0.077501

Unfortunately, this time, it is not working well…

library(quantreg)
rq(rent_euro~area+yearc, tau=tau, data=base)
Coefficients:
 (Intercept)         area        yearc 
-5542.503252     3.978135     2.887234

Results are quite different. And actually, another technique can confirm the latter (IRLS – Iteratively Reweighted Least Squares)

eps = residuals(lm(rent_euro~area+yearc, data=base))
for(s in 1:500){
  reg = lm(rent_euro~area+yearc, data=base, weights=(tau*(eps>0)+(1-tau)*(eps<0))/abs(eps))
  eps = residuals(reg)
}
reg$coefficients
 (Intercept)         area        yearc 
-5484.443043     3.955134     2.857943

I could not figure out what went wrong with the linear program. Not only are the coefficients very different, but so are the predictions…

yr = r$solution[2*n+1]+r$solution[2*n+2]*base$area+r$solution[2*n+3]*base$yearc
plot(predict(reg),yr)
abline(a=0,b=1,lty=2,col="red")


It’s now time to investigate….
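One plausible culprit (an assumption worth checking, not a claim from the original post): lp() from lpSolve constrains all decision variables to be non-negative by default. That was harmless in the previous examples, where the estimated intercepts and slopes were positive, but it rules out the large negative intercept returned by rq(). A standard workaround is to split each coefficient into its positive and negative parts; a sketch, reusing X, y, n and tau from the block above:

A2 = cbind(diag(n), -diag(n), X, -X)   # y_i - x_i'(beta^+ - beta^-) = a_i - b_i
# no need for the A1 block here: lp() already forces all variables to be >= 0
r = lp("min",
       c(rep(tau,n), rep(1-tau,n), rep(0,6)),
       A2, rep("=",n), y)
beta = tail(r$solution,6)
beta[1:3] - beta[4:6]   # should now be close to the rq() coefficients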

Classification from scratch, overview 0/8

Before my course on « big data and economics » at the university of Barcelona in July, I wanted to upload a series of posts on classification techniques, to get an insight on machine learning tools.

According to some common idea, machine learning algorithms are black boxes. I wanted to come back to that saying. First of all, isn’t it the case also for regression models, like generalized additive models (with splines)? Do you really know what the algorithm is doing? Even the logistic regression: in textbooks, we can easily find math formulas. But what is really done when I run it, in R?
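As a small teaser (a sketch on simulated data, not one of the upcoming posts): for the logistic regression, glm essentially iterates weighted least squares steps (IRLS), which can be written in a few lines,

set.seed(1)
n = 200
X = cbind(1,rnorm(n),rnorm(n))
y = rbinom(n,1,1/(1+exp(-(X %*% c(-1,2,-1)))))
beta = rep(0,ncol(X))
for(it in 1:25){
  p = as.vector(1/(1+exp(-X %*% beta)))   # current fitted probabilities
  W = p*(1-p)                             # working weights
  z = X %*% beta + (y-p)/W                # working response
  beta = solve(t(X) %*% (W*X), t(X) %*% (W*z))
}
cbind(beta, coef(glm(y~0+X, family=binomial)))   # the two columns should match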

When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let’s try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R. What I plan to mention, within the next two weeks, will be

I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])

Here is some code to get a visualization of the prediction (here the probability to be a black point)

rmatrix_model = function(model){
u = seq(0,1,length=101)
p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
return(v)}
nice_graph=function(v){
u = seq(0,1,length=101)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))

Note that colors are defined here as

clr10= c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b")

or with some nonlinear model

The second one is a dataset I got from Gilbert Saporta, about heart attacks and death (our binary variable).

myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))

So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), or about comparing models, etc. The goal here is simply to understand what’s going on when we call either glm, glmnet, gam, random forest, svm, xgboost, or any function to get a predictive model.

On the interpretation of a regression model

Yesterday, NaytaData (aka @NaytaData ) posted a nice graph on reddit, with bicycle traffic and mean air temperature, in Helsinki, Finland, per day,

I found that graph interesting, so I did ask for the data (NaytaData kindly sent them to me tonight).

df=read.csv("cyclistsTempHKI.csv")
library(ggplot2)
ggplot(df, aes(meanTemp, cyclists)) +
  geom_point() +
  geom_smooth(span = 0.3)

But as mentioned by someone on twitter, the interpretation is somewhat trivial: people get out on their bike when the weather is nice. The hotter, the more cyclists on the road. Which is interpreted here in a causal way…

But actually, we can also visualize the data as follows, as suggested by Antoine Chambert-Loir

 ggplot(df, aes(cyclists, meanTemp)) +
  geom_point() +
  geom_smooth(span = 0.3)

The interpretation would be, somehow, that the more cyclists on the road, the hotter it is. Why not consider this causal interpretation here? Like cyclists go so fast, or sweat so much, that they increase the temperature…

Of course, it is the standard (recurrent) discussion “correlation is not causality”, but in regression models, we like to tell a story, to pretend that we have some sort of causal story. But we do not prove it. Here, we know that the first story is more credible than the second one, but how do we know that? To go further, how can we use machine learning techniques to prove causal relationships? How could a machine choose between the first and the second story?


Visualizing effects of a categorical explanatory variable in a regression

Recently, I’ve been working on two problems that might be related to semiotic issues in predictive modeling (i.e. instead of a standard regression table, how can we plot coefficient values in a regression model). To be more specific, I have a variable of interest Y that is observed for several individuals i, with explanatory variables \mathbf{x}_i, year t, in a specific region z_i\in\{A,B,C,D,E\}. Suppose that we have a simple (standard) linear model (forget about time here) y_i=\beta_0+\beta_1x_{1,i}+\cdots+\beta_kx_{k,i}+\sum_j \alpha_j \mathbf{1}(z_i\in j)+\varepsilon_i

Let us forget the temporal effect to focus on the spatial effect today. And consider some simulated dataset. There will be only one (continuous) explanatory variable. And I will generate correlated covariates, just to be more realistic.

n=1000
library(mnormt)
r=.5
Sigma=matrix(c(1,r,r,1), 2, 2)
set.seed(1)
X=rmnorm(n,c(0,0),Sigma)
X1=cut(X[,1],c(-100,quantile(X[,1],c(.1,.4,.7,.85)),
100),labels=LETTERS[1:5])
X2=X[,2]
Y=5+X[,1]-X[,2]+rnorm(n)/2
db=data.frame(Y,X1,X2)

Here we have y_i=\beta_0+\beta_1x_{1,i}+\sum_{j\in\{A,B,C,D,E\}} \alpha_j \mathbf{1}(z_i\in j)+\varepsilon_i. The goal here is to get a graph to visualize the vector \hat\alpha=(\hat\alpha_A,\cdots,\hat\alpha_E). Let us run the linear regression

reg1=lm(Y~X1+X2,data=db)
idx=which(substr(names(reg1$coefficients), 1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[2:5]
barplot(v1,col=rgb(0,0,1,.4))

Note that it is possible to add some sort of “confidence interval” to discuss significance (or to avoid spending hours discussing differences in bar heights that are not significantly different)

library(Hmisc)
sv1=summary(reg1)$coefficients[idx,2]
(bp1=barplot(v1,ylim=range(c(0,v1+2*sv1))))
errbar(bp1[,1],v1,v1-2*sv1,v1+2*sv1,add=TRUE)

My main concern here is the “reference” that is considered. Should A be the reference? Why not B?

db$X1=relevel(db$X1,"B")
reg1=lm(Y~X1+X2,data=db)
idx=which(substr(names(reg1$coefficients),1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[c(1,3:5)]
library(Hmisc)
sv1=summary(reg1)$coefficients[idx,2]
(bp1=barplot(v1))
errbar(bp1[,1],v1,v1-2*sv1,v1+2*sv1,add=TRUE)

Why not the smallest one? Why not the largest one?… What if there is no simple way to choose? Furthermore, let us get back to the original point, which is that there might be some temporal aspects. More precisely, we can have \hat\alpha^{(t)}=(\hat\alpha_A^{(t)},\cdots,\hat\alpha_E^{(t)}). If we also have \hat\alpha^{(t+1)} and we get another plot, how do we interpret it? If for E the bar is taller, it means that, relative to A, the difference has increased. I have the feeling that the interpretation is more complicated because we do not see, on that graph, changes in \hat\alpha^{(t)}_A.

Let us try something else. First, let us get back to the original setting

db$X1=relevel(db$X1,"A")

Consider here the regression without the intercept, so that all values remain

reg1=lm(Y~0+X1+X2,data=db)
idx=which(substr(names(reg1$coefficients),1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[1:5]
barplot(v1)

It can be hard to read, especially if Y takes (very) large values, and you think that barplots should start at 0. But still, having those 5 values is nice. Why not rescale that graph?

A natural idea may be to consider the case where no spatial component is considered, and to look at the difference with that reference.

reg1=lm(Y~1+X2,data=db)
reg2=lm(Y~0+X1+X2,data=db)
idx=which(substr(names(reg2$coefficients),1,2)=="X1")
v1=reg2$coefficients[idx]
v2=v1-reg1$coefficients["(Intercept)"]
barplot(v2,col=rgb(0,0,1,.4))
sv2=summary(reg2)$coefficients[idx,2]
(bp2=barplot(v2,ylim=range(c(v2-2*sv2,v2+2*sv2))))
errbar(bp2[,1],v2,v2-2*sv2,v2+2*sv2,add=TRUE)

I like that graph, I should admit it. Now, I still have some remaining questions. For instance, can we ensure that, when only the intercept is considered, the value of \hat\beta_0 is somewhere between \hat\beta_A,\cdots,\hat\beta_E? Is it possible that \hat\beta_A-\hat\beta_0,\cdots,\hat\beta_E-\hat\beta_0 are all positive? In that case, I would find that hard to interpret.

Actually, if I really want values that can be seen as compared to some average, why not consider a (weighted) average of \hat\beta_A,\cdots,\hat\beta_E? (the weights being the proportions in each region)

w=table(db$X1)
v3=v1-sum(w*v1)/sum(w)
sv3=sv2   # reuse the standard errors of reg2 (the shift is treated as a constant)
(bp3=barplot(v3,ylim=range(c(v3-2*sv3,v3+2*sv3))))
errbar(bp3[,1],v3,v3-2*sv3,v3+2*sv3,add=TRUE)

I like that one. But what if, instead of normalizing at the end, we normalize the original dependent variable. By “normalize”, I mean “rescale”, to have a centered variable.

db$Y0=db$Y-mean(db$Y)
reg3=lm(Y0~0+X1+X2,data=db)
v3=reg3$coefficients[idx]
sv3=summary(reg3)$coefficients[idx,2]
(bp3=barplot(v3,ylim=range(c(v3-2*sv3,v3+2*sv3))))
errbar(bp3[,1],v3,v3-2*sv3,v3+2*sv3,add=TRUE)

This one is nice, because it is extremely simple to explain. But what if, instead of a linear regression, we consider a logistic one (with Y\in\{0,1\})? Or a Poisson regression…

So maybe it cannot be the best solution here. Let us try something else… In insurance ratemaking, people like to use “zonier“. It is a two-stage regression. The idea is to run a regression without any spatial components, first. Then, consider the regression of residuals on spatial variables. Here, it would be something like

reg1=lm(Y~1+X2,data=db)
reg4=lm(residuals(reg1)~0+X1,data=db)

Since we focus on residuals, those are centered, and we have an easy interpretation of respective values

sv4=summary(reg4)$coefficients[idx,2]
v4=reg4$coefficients
(bp4=barplot(v4,names.arg=LETTERS[1:5]))
errbar(bp4[,1],v4,v4-2*sv4,v4+2*sv4,add=TRUE)

I guess that it can also be used in generalized linear models, with Pearson (or deviance) residuals.

Another possible idea can be the following. Again, the goal is not to have the true values, but to visualize on a graph how regions can be different. Here, all of them are significantly different. And in region A, Y is smaller, ceteris paribus (other things equal in the sense that we have taken into account x_1). And in region E it is larger. Here, the graph helps to “see” those differences.

Why not consider a completely different graph. What if we plot the vector a instead of \alpha, where a_A can be interpreted as the value of the coefficient if we consider region A against “not region A“. What if we consider 5 regressions where dichotomous versions of Z are considered: Z_j=\mathbf{1}_{Z=j}.

v5=sv5=rep(NA,5)
names(v5)=LETTERS[1:5]
for(k in 1:5){
reg=lm(Y~I(X1==LETTERS[k])+X2,data=db)
v5[k]=reg$coefficients[2]
sv5[k]=summary(reg)$coefficients[2,2]}

We can plot that sequence of values, including some confidence intervals (that would be related to significance with respect to all other regions)

(bp5=barplot(v5,ylim=range(c(v5-2*sv5,v5+2*sv5))))
errbar(bp5[,1],v5,v5-2*sv5,v5+2*sv5,add=TRUE)

Looking at values does not give intuitive results, but I have the feeling that it is easy to explain what we plot (we compare each region to “the rest of the world”), and the ordering of a seems to be consistent with \alpha (but I could not prove it).

Here are some ideas I got. I should be able to provide other graphs, but I would love to discuss with anyone on that topic, to find a proper and nice way to visualize the effects of a categorical explanatory variable in a regression model (that can be a logistic one). Comments are open…

The myth of interpretability of econometric models

There are important discussions nowadays about data modeling, to choose between the “two cultures” (as mentioned in Breiman (2001)), i.e. either econometrics models or machine/statistical learning models. We did discuss this issue recently in Econométrie et Machine Learning (so far only in French) with Emmanuel Flachaire and Antoine Ly. One argument often used by econometricians is the interpretability of econometric models. Or at least the attempt to get an interpretable model.

We also have this discussion in actuarial science, for instance in ratemaking (or insurance pricing). Machine learning based models usually perform better (for some a priori chosen metric), but actuaries claim that econometric models are more easily interpretable. In the actuarial literature, we assume that claim frequency Y is driven by some non-observable risk factor \Theta, and therefore, we do have heterogeneous risks in our portfolio. And it can be seen as legitimate to differentiate prices. Assume that this risk factor \Theta is strongly correlated with X_1, the age of the driver, because in our portfolio, old drivers tend to have more accidents. Here, we could pretend to have a “causal story” (as defined in Freedman (2009)) because of a possible interpretation of the model. So it is natural here to consider a regression model of Y on X_1 to derive our actuarial pricing model. But assume that, possibly, the risk factor \Theta is also strongly correlated with X_2, which can be related to spatial features (say latitude, which denotes a north/south position), because in our portfolio, drivers living in the south tend to have more accidents (roads are known to be more dangerous there). Here, we could pretend to have a second “causal story”.

Of course, since \Theta is strongly correlated with X_1 and X_2, it means that X_1 and X_2 are strongly correlated. Here also, this correlation can be interpreted (not in a causal way as previously, but still), since we know that old people like to live in southern regions. So, what should we do here? Let us run some simulations to illustrate.

 set.seed(123)
 n=1e5
 Theta=rnorm(n)
 X1=Theta+rnorm(n)/8
 X2=Theta+rnorm(n)/8
 L=exp(-3+Theta)
 Y=rpois(n,L)
 B=data.frame(Y,X1,X2)

Our first idea was to consider a model where Y is “explained” by the first variable X_1,

 g1=glm(Y~X1,data=B,family=poisson)
 summary(g1)
 
Coefficients:
         Estimate Std. Error z value Pr(>|z|)    
(Inter.) -2.97778    0.01544 -192.88   <2e-16 ***
X1        0.97926    0.01092   89.64   <2e-16 ***

As expected, our variable is “significant”, but also, probably more interestingly, X_2 has no impact on the residuals

 B$e1=residuals(g1,type="pearson")
 g1e=lm(e1~X2,data=B)
 summary(g1e)
 
Coefficients:
          Estimate Std. Error t value Pr(>|t|)
(Inter.) 0.0003618  0.0031696   0.114    0.909
X2       0.0028601  0.0031467   0.909    0.363

The interpretation is that once we corrected claim frequency for the age of the drivers, there is no spatial effect here. So, a good model should be based only on the age of the drivers.

But we can also consider the other story. We can consider a model where Y is “explained” by the second variable X_2,

 g2=glm(Y~X2,data=B,family=poisson)
summary(g2)
 
Coefficients:
         Estimate Std. Error z value Pr(>|z|)    
(Inter.) -2.97724    0.01544 -192.81   <2e-16 ***
X2        0.97915    0.01093   89.56   <2e-16 ***

Here also we have a valid model, that can be interpreted, and here also X_1 has no impact on the residuals

 B$e2=residuals(g2,type="pearson")
 g2e=lm(e2~X1,data=B)
 summary(g2e)
 
Coefficients:
          Estimate Std. Error t value Pr(>|t|)
(Inter.) 0.0004863  0.0031733   0.153    0.878
X1       0.0027979  0.0031504   0.888    0.374

The story is similar here. If we correct from the spatial pattern, claims frequency does not depend on the age of the driver.

So, what should we do now? We do have two models, and each of them is as interpretable as the other one. Note that we cannot use any statistical tool to distinguish between the two: they are comparable

 AIC(g1)
[1] 51013.39
 AIC(g2)
[1] 51013.15

Why not incorporate the two explanatory variables X_1 and X_2, at the same time, in our regression model, and let “the model” decide what to do…?

 g=glm(Y~X1+X2,data=B,family=poisson)
 summary(g)
 
Coefficients:
         Estimate Std. Error  z value Pr(>|z|)    
(Inter.) -2.98132    0.01547 -192.723    2e-16 ***
X1        0.49310    0.06226    7.920 2.38e-15 ***
X2        0.49375    0.06225    7.931 2.17e-15 ***

It looks like we completely lost the interpretability of the model, since our two explanatory variables are (strongly) correlated. Actually, instead of saying “use one, and drop the other one (since it brings no further information)”, it says “use both, each one will explain half of the effect”. Strange interpretation, isn’t it? So why not try some LASSO here?

library(glmnet)
fit=glmnet(x=as.matrix(B[,c("X1","X2")]), 
    y=B$Y,family="poisson")
plot(fit,xvar="lambda")

Here also, it says that we either keep both, or none. So it cannot be used for variable selection (which is an important motivation for using the LASSO technique). So, what should we do if we have several interpretable models, but no way to choose between them? Because usually, we claim that we prefer to use a model with an interpretation. But what should be done here?

What is a Linear Trend, by the way?

I had a very strange discussion on twitter (yes, another one), about regression curves. I think it started with a tweet based on some xkcd picture (just for fun, because it was New Year’s Day)

There were comments on that picture, by econometricians, mainly about ‘significant’ trends when datasets are very noisy. And I mentioned a graph that I saw earlier, a couple of days ago

Let us reproduce that graph (Roger kindly sent me the dataset)

db=data.frame(year=1990:2016,
ratio=c(.23,.27,.32,.37,.22,.26,.29,.15,.40,.28,.14,.09,.24,.18,.29,.51,.13,.17,.25,.13,.21,.29,.25,.2,.15,.12,.12))
library(ggplot2)

The graph is here (with the same aesthetic conventions as Roger’s initial graph, i.e. using some sort of barplot)

ggplot(db, aes(year, ratio)) +
geom_bar(stat="identity") +
stat_smooth(method = "lm", se = FALSE)

My point was that we miss the ‘confidence band’ of the regression

In R, at least, it is quite natural to get (and actually, it is the default version of the graph function)

ggplot(db, aes(year, ratio)) +
geom_bar(stat="identity") +
stat_smooth(method = "lm", se = TRUE)

It is hard to claim that the ‘regression line’ is significant (in the sense “significantly non-horizontal”). To be more specific, if we look at the output of the regression model, we get

summary(lm(ratio~year,data=db))

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 9.158531 4.549672 2.013 0.055 .
year -0.004457 0.002271 -1.962 0.061 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(which is exactly what Roger used in his graph to plot his red straight line). The p-value of the estimator of the slope, in a linear regression model, is here 6%. But I found Roger’s point puzzling


First of all, let us get back to a more standard graph, with a scatterplot, and not bars,

ggplot(db, aes(year, ratio)) +
stat_smooth(method = "lm") +
geom_point()

Here, we observe points \{y_{1990},y_{1991},\cdots,y_{2016}\}. In order to draw that blue line, we assume (Econometrics 101, actually) that those observations are realizations of random variables \{Y_{1990},Y_{1991},\cdots,Y_{2016}\}. Randomness here does not come from a survey, or from ‘balls in an urn’. Randomness is because hurricanes and floods are themselves seen as realizations of random events. Yes, there might be measurement errors, but that’s not where randomness comes from (here). When we talk about ‘randomness’, it should be related to ‘model error’, i.e. the error we make if we consider a linear model (here), that is

Y_t=\beta_0+\beta_1 t+\varepsilon_t

Even if observations are not obtained from balls in an urn, there is some kind of randomness here. Randomness means that we might have errors (random errors) around the estimated value (that is on the blue curve), y_t=\widehat{y}_t+\widehat{\varepsilon}_t. One might consider a nonlinear model to reduce the error,

ggplot(db, aes(year, ratio)) +
geom_point() +
geom_smooth()

but in that case, the danger is overfitting.

So yes, when we fit a linear model, there is always some kind of randomness, and it is possible to get a ‘confidence band’, that will be very useful for predictions (e.g. for reinsurance purpose here).
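For instance (a sketch, reusing the db data frame above; the year 2017 is just an illustration), the confidence band around the regression line and the (much wider) prediction interval for a new observation can be obtained with predict(),

model = lm(ratio~year, data=db)
predict(model, newdata=data.frame(year=2017), interval="confidence")   # band around the line
predict(model, newdata=data.frame(year=2017), interval="prediction")   # interval for a new observation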

What the ROC curve (and the AUC) does not tell us

While preparing a talk for next Tuesday, I was going through the results returned for an exercise, and I got a rather strange result with a classification model. I had given the same dataset this fall at ENSAE, so I had about thirty other models to compare with (or rather, on the same test set, I have about thirty predictions). The black observations are those obtained this fall (the line corresponds to the best AUCs on the test set), and the red observations are those I obtained for Tuesday’s talk (here again, the vertical line corresponds to the best models, in the AUC sense), for one observation of the test set,

These are the predicted probabilities (but I removed the scale).

For almost all my observations, the red values (predicted probabilities) are well above the others… But that does not change the fact that the AUC obtained (for the two red models) is very good. This is indeed an important (and well-known) result: the AUC criterion (and, more generally, the whole ROC curve) does not tell you anything about whether the predicted value is good or not. It only tells you whether the obtained ordering is correct. If the largest values are indeed the values for which we observe a 1, the AUC will be very good.

This is what we can observe in the small example below. Consider a rather simple simulated logistic model,

> n=1e3
> set.seed(1)
> x1=rnorm(n)
> x2=runif(n)
> u=-3+x2+x1
> p=exp(u)/(1+exp(u))
> y=rbinom(n,prob=p,size=1)
> library(ROCR)
> df=data.frame(y,x1,x2)
> mean(df$y)
[1] 0.116
> reg=glm(y~.,data=df,family=binomial)
> p=predict(reg,type="response")
> mean(p)
[1] 0.116
> pred1=prediction(p, df$y)
> L=performance(pred1, "tpr", "fpr")

The AUC is here

> auc=performance(pred1, "auc")@y.values[[1]]
> auc
[1] 0.7681191

and the ROC curve is the following,

> plot(unlist(L@x.values),unlist(L@y.values),
+ type="s",col="blue")

Suppose now that we change the intercept of the logistic model,

> reg$coefficients[1]=0

In that case, our prediction is rather poor, since the average predicted probability is now

> u=reg$coefficients[1]+reg$coefficients[2]*
+ df$x1+reg$coefficients[3]*df$x2
> p=exp(u)/(1+exp(u))
> mean(p)
[1] 0.6060676

(we are far from the 11.6% of 1s in the dataset). Yet the AUC is still good

> pred1=prediction(p, df$y)
> L=performance(pred1, "tpr", "fpr")
> auc=performance(pred1, "auc")@y.values[[1]]
> auc
[1] 0.7681191

(it is actually the same value as before, which makes sense since the ROC curve is identical)

> lines(unlist(L@x.values),unlist(L@y.values),
+ type="s",col="red")

In other words, these tools, classically used to judge the quality of a classifier, do not in any way allow us to say that the predicted probability is meaningful. These criteria only tell us that we identify fairly well the individuals most likely to have the response 1. Which is not so bad… but it is a different problem from having a probability that is relevant.

How long could it take to run a regression

This afternoon, while I was discussing with Montserrat (aka @mguillen_estany), we were wondering how long it might take to run a regression model. More specifically, how long it might take if we use a Bayesian approach. My guess was that the time should probably be linear in n, the number of observations. But I thought it would be good to check.

Let us generate a big dataset, with one million rows,

> n=1e6
> X=runif(n)
> Y=2+5*X+rnorm(n)
> B=data.frame(X,Y)

Consider as a benchmark the standard linear regression,

> lm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = lm(Y~X,data=B[idx,])
+   summary(reg)
+ }

Here the regression is run on a random subset of smaller size. We can do the same with a Bayesian approach, using stan,

> stan_lm ="
+ data {
+ int N;
+ vector[N] x;
+ vector[N] y;
+ }
+ parameters {
+ real alpha;
+ real beta;
+ real tau;
+ }
+ transformed parameters {
+ real sigma;
+ sigma <- 1 / sqrt(tau);
+ }
+ model{
+ y ~ normal(alpha + beta * x, sigma);
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ tau ~ gamma(0.001, 0.001);
+ }
+ "

Define then the model

> library(rstan)
> system.time( 
  stanmodel <<- stan_model(model_code = stan_lm))
utilisateur     système      écoulé 
      0.043       0.000       0.043

We want to see how long it might take to run a regression,

> lm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+       data = list(N=n,
+                   x=X[idx],
+                   y=Y[idx]),
+       iter = 1000, warmup=200)
+   summary(fit)
+ }

We use the following package to see how long it takes

> library(microbenchmark)
> time_lm = function(n){
+  M = microbenchmark(lm_freq(n),
+      lm_bayes(n),times=50)
+  return(apply( matrix(M$time,nrow=2),1,mean))
+ }

We can now compare the time it took with ten, one hundred, one thousand, and ten thousand observations,

> vN = c(10,100,1000,10000)
> T = Vectorize(time_lm)(vN)

we can then plot it

> plot(vN,T[2,]/1e6,log="xy",col="red",type="b",
+      xlab="Number of Observations",ylab="Time")
> lines(vN,T[1,]/1e6,col="blue",type="b")

It looks like (if we forget about the very small samples) the time it takes to run a regression is linear, with the two techniques (the frequentist and the Bayesian ones).
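To quantify that claim (a quick sketch, reusing vN and T from above; each row of T corresponds to one of the two methods), we can regress log-time on log-size and check that the slope is close to one,

> lm(log(T[1,])~log(vN))$coefficients   # a slope close to 1 suggests linear time
> lm(log(T[2,])~log(vN))$coefficients   # same check for the other method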

And actually, the same story holds for logistic regressions. Consider the following dataset

> n=1e6
> X=runif(n)
> S=-3+2*X+rnorm(n)
> Y=rbinom(n,size=1,prob=exp(S)/(1+exp(S)))
> B=data.frame(X,Y)

The frequentist version of the logistic regression is

> glm_freq = function(n){
+   idx = sample(1:1e6,size=n)
+   reg = glm(Y~X,data=B[idx,],family=binomial)
+   summary(reg)
+ }

and the Bayesian one, using stan,

> stan_glm = "
+ data {
+ int N;
+ vector[N] x;
+ int<lower=0,upper=1> y[N];
+ }
+ parameters {
+ real alpha;
+ real beta;
+ }
+ model {
+ alpha ~ normal(0, 10);
+ beta ~ normal(0, 10);
+ y ~ bernoulli_logit(alpha + beta * x);
+ }
+ "
> stanmodel = stan_model(model_code = stan_glm)
> glm_bayes = function(n){
+   idx = sample(1:1e6,size=n)
+   fit = sampling(stanmodel,
+        data = list(N=n,
+        x = X[idx],
+        y = Y[idx]),
+        iter = 1000, warmup=200)
+   summary(fit)
+ }

Again, we can see how long it takes to run those regression models

> time_glm = function(n){
+   M = microbenchmark(glm_freq(n),
+   glm_bayes(n),times=50)
+   return(apply( matrix(M$time,nrow=2),1,mean))
+ }