Category Archives: Regression

On the robustness of LASSO

Probably the last post on the lasso before the summer break… More specifically, I was wondering about the interpretation of the graphs \lambda\mapsto\widehat{\beta}_\lambda. We use them for variable selection, but my major concern was about confidence intervals: how much can we trust those lines?

As usual, a natural way is to use simulations on generated datasets. Consider for instance

set.seed(123)
Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
library(mnormt)
X = rmnorm(n,rep(0,3),Sigma)
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
              X5=runif(n),
              X6=exp(X[,3]),
              X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
              X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)

One can simulate many such datasets, and store the coefficient paths obtained on each of them

library(glmnet)
vlambda = exp(seq(-8,1,length=201))
VLASSO = list()
for(s in 1:100){ # regenerate df as above, then fit and store the path
  X = model.matrix(lm(Y~.,data=df))
  lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,
                 lambda=vlambda,standardize=TRUE)
  VLASSO[[s]] = as.matrix(lasso$beta)}

To visualize confidence bands, one can compute pointwise quantiles across those simulations

Q05=Q95=Qm=matrix(NA,9,201)
for(i in 1:nrow(Q05)){
  for(j in 1:ncol(Q05)){
    v = unlist(lapply(VLASSO,function(x) x[i,j]))
    Q05[i,j] = quantile(v,.05)
    Q95[i,j] = quantile(v,.95)
    Qm[i,j]  = mean(v)
  }}

and get the graph

library(RColorBrewer)
colrs = c(brewer.pal(8,"Set1"))
plot(lasso,col=colrs,"lambda",ylim=c(min(Q05),max(Q95)))
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
        c(Q05[2,],rev(Q95[2,])),col=colrs[1],border=NA)
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
        c(Q05[5,],rev(Q95[5,])),col=colrs[2],border=NA)
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
        c(Q05[8,],rev(Q95[8,])),col=colrs[3],border=NA)

An alternative (more realistic on real data) is to use bootstrapped versions of the dataset

id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
lasso = glmnet(x=X[id,],y=df[id,"Y"],family="gaussian",alpha=1,
               lambda=vlambda,standardize=TRUE)
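A minimal sketch of the full bootstrap loop (same structure as the simulation loop above, resampling rows instead of regenerating the data):

VLASSO = list()
for(s in 1:100){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE) # resample the rows
  lasso = glmnet(x=X[id,],y=df[id,"Y"],family="gaussian",alpha=1,
                 lambda=vlambda,standardize=TRUE)
  VLASSO[[s]] = as.matrix(lasso$beta)} # then compute Q05, Q95 and Qm as before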


So far, it looks like it is working very well. Now, what if we have a smaller dataset

n = 100

On simulated new samples, we get


while the bootstrap version is

There is more uncertainty, clearly, but the conclusion is not ambiguous here.

Now, what about real data? Consider the following

chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")
tail(chicago)
   Fire   X_1 X_2    X_3
42  4.8 0.152  19 13.323
43 10.4 0.408  25 12.960
44 15.6 0.578  28 11.260
45  7.0 0.114   3 10.080
46  7.1 0.492  23 11.428
47  4.9 0.466  27 13.731

with one variable of interest (the number of fires, per inhabitant) and 3 features. We can here use the bootstrap to generate samples, and then fit a lasso regression. On the original dataset, the regression is

X = model.matrix(lm(Fire~.,data=chicago))
vlambda = exp(seq(-4,2,length=201))
lasso = glmnet(x=X,y=chicago[,"Fire"],family="gaussian",alpha=1,
               lambda=vlambda,standardize=TRUE)

And if we just plot lines \lambda\mapsto\widehat{\beta}_\lambda we get

Now, consider bootstrap samples.

library(glmnet)
plot(lasso,col=colrs,"lambda",lwd=.2) # paths fitted on the original data
for(s in 1:100){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso_b = glmnet(x=X[id,],y=chicago[id,"Fire"],family="gaussian",alpha=1,
                   lambda=vlambda,standardize=TRUE)
  plot(lasso_b,col=colrs,"lambda",lwd=.2,add=TRUE)}

We get here

The interpretation here is much more difficult.

What about the order in which the variables enter the model?

library(glmnet)
vlambda = exp(seq(-4,2,length=201))
N = matrix(NA,100000,4)
for(s in 1:100000){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=chicago[id,"Fire"],
                 family="gaussian",alpha=1,
                 lambda=vlambda,standardize=TRUE)
  N[s,] = names(sort(apply(as.matrix(lasso$beta),
          1,function(x) sum(x!=0))))}

The ordering that was obtained on the original dataset shows up in almost 57% of the scenarios,

mean(apply(N,1,function(x) paste(x,collapse="")=="(Intercept)X_1X_2X_3"))
[1] 0.5693

We can look at all the cases,

L=as.character(c(123,132,213,231,312,321))
Li=paste("(Intercept)X_",substr(L,1,1),"X_",
         substr(L,2,2),"X_",substr(L,3,3),sep="")
g=function(y) mean(apply(N,1,function(x) paste(x,collapse="")==y))
vL=unlist(lapply(Li,g))
names(vL)=L
barplot(vL,las=2,horiz=TRUE)

Standardization in LASSO

The lasso regression is based on the idea of solving \widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_1}\rbrace where \Vert\mathbf{a}\Vert_{\ell_1}=\sum_{i=1}^d |a_i| for any \mathbf{a}\in\mathbb{R}^d. In a recent post, we've seen computational aspects of the optimization problem. But I went quickly through the story of the \ell_1-norm. It means, somehow, that the values of \beta_1 and \beta_2 should be comparable. With two significant variables on very different scales, we should expect the orders (or relative magnitudes) of \widehat{\beta}_1 and \widehat{\beta}_2 to be very different. So people say that it is therefore necessary to center and rescale (i.e. standardize) the variables.
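To see why scales matter, here is a small illustration (on toy simulated data, not from the post): the very same signal, expressed in different units, is penalized very differently when standardize=FALSE,

library(glmnet)
set.seed(1)
x = rnorm(100)
y = 2*x + rnorm(100)
X1 = cbind(x=x, noise=rnorm(100))       # x on its natural scale, true slope 2
X2 = cbind(x=x/100, noise=X1[,"noise"]) # same variable in other units, true slope 200
coef(glmnet(X1,y,lambda=.5,standardize=FALSE))["x",] # nonzero, only shrunk
coef(glmnet(X2,y,lambda=.5,standardize=FALSE))["x",] # exactly 0: same signal, killed
coef(glmnet(X2,y,lambda=.5,standardize=TRUE))["x",]  # nonzero again, once standardized

Only the unit of measurement changed between the two designs, but with standardize=FALSE, the \ell_1 penalty treats them very differently.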

Consider the following (simulated) dataset

set.seed(123)
Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
library(mnormt)
X = rmnorm(n,rep(0,3),Sigma)
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
X5=runif(n),X6=exp(X[,3]),
X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)
X = model.matrix(lm(Y~.,data=df))

Use the following colors for the graphs, and the following values of \lambda

library("RColorBrewer")
colrs = c(brewer.pal(8,"Set1"))[c(1,4,5,2,6,3,7,8)]
vlambda=exp(seq(-8,1,length=201))

The first regression we can run is a non-standardized one

library(glmnet)
lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,lambda=vlambda,standardize=FALSE)

We can visualize the graphs of \lambda\mapsto\widehat{\beta}_\lambda

idx = which(apply(lasso$beta,1,function(x) sum(x==0))<200)
plot(lasso,col=colrs,'lambda',xlim=c(-5.5,2.3),lwd=2)
legend(1.2,.9,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,lwd=2)

At least, observe that the most significant variables are the ones that were used to generate the data.

Now, consider the case where we standardize the data

lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,lambda=vlambda,standardize=TRUE)

The graphs of \lambda\mapsto\widehat{\beta}_\lambda

The graph is (strangely) very similar to the previous one. Except perhaps for the green curve. Maybe categorical variables are not similar to continuous ones… Because, somehow, standardization of categorical variables might not be natural…

Why not consider some home-made function? Let us transform (linearly) all variables in the X matrix (except the first one, which is the intercept)

Xc = X
for(j in 2:ncol(X)) Xc[,j]=(Xc[,j]-mean(Xc[,j]))/sd(Xc[,j])

Now, we can run our lasso regression on that one (keeping the intercept, since all the explanatory variables are centered, but y is not)

lasso = glmnet(x=Xc,y=df$Y,family="gaussian",alpha=1,intercept=TRUE,lambda=vlambda)

The plot is now

plot(lasso,col=colrs,"lambda",xlim=c(-6.7,1.3),lwd=2)
idx = which(apply(lasso$beta,1,function(x) sum(x==0))<length(vlambda))
legend(.15,.45,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,bty="n",lwd=2)

Actually, why not also center and rescale the y variable, and remove the intercept

Yc = (df[,"Y"]-mean(df[,"Y"]))/sd(df[,"Y"])
lasso = glmnet(x=Xc,y=Yc,family="gaussian",alpha=1,intercept=FALSE,lambda=vlambda)

Hopefully, those graphs are very consistent (and if we use them for variable selection, they suggest the variables that were actually used to generate the dataset). And having qualitative and quantitative variables is not a big deal. But still, I do not feel comfortable with the differences…

Convex Regression Model

This morning during the lecture on nonlinear regression, I mentioned (very) briefly the case of convex regression. Since I forgot to mention the codes in R, I will publish them here. Assume that y_i=m(\mathbf{x}_i)+\varepsilon_i where m:\mathbb{R}^d\rightarrow \mathbb{R} is some convex function.

Then m is convex if and only if \forall\mathbf{x}_1,\mathbf{x}_2\in\mathbb{R}^d, \forall t\in[0,1], m(t\mathbf{x}_1+[1-t]\mathbf{x}_2) \leq tm(\mathbf{x}_1)+[1-t]m(\mathbf{x}_2). Hildreth (1954) proved that if m^\star=\underset{m \text{ convex}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-m(\mathbf{x}_i)\big)^2\right\rbrace then \mathbf{\theta}^\star=(m^\star(\mathbf{x}_1),\cdots,m^\star(\mathbf{x}_n)) is unique.

Let \mathbf{y}=\mathbf{\theta}+\mathbf{\varepsilon}, then \mathbf{\theta}^\star=\underset{\mathbf{\theta}\in \mathcal{K}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-\theta_i\big)^2\right\rbrace where \mathcal{K}=\{\mathbf{\theta}\in\mathbb{R}^n:\exists m\text{ convex },m(\mathbf{x}_i)=\theta_i\}. I.e. \mathbf{\theta}^\star is the projection of \mathbf{y} onto the (closed) convex cone \mathcal{K}. The projection theorem gives existence and uniqueness.

For convenience, in the application, we will consider the real-valued case, m:\mathbb{R}\rightarrow \mathbb{R}, i.e. y_i=m(x_i)+\varepsilon_i. Assume that observations are ordered x_1\leq x_2\leq\cdots \leq x_n. Here \mathcal{K}=\left\lbrace\mathbf{\theta}\in\mathbb{R}^n:\frac{\theta_2-\theta_1}{x_2-x_1}\leq \frac{\theta_3-\theta_2}{x_3-x_2}\leq \cdots \leq \frac{\theta_n-\theta_{n-1}}{x_n-x_{n-1}}\right\rbrace

Hence, we have a quadratic program with n-2 linear constraints.

Moreover, m^\star is a piecewise linear function, obtained by interpolating the consecutive pairs (x_i,\theta_i^\star).
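As a side note, that quadratic program can be solved directly, for instance with the quadprog package; here is a minimal sketch, assuming distinct, ordered x values,

library(quadprog)
convex_fit = function(x,y){
  n = length(x)
  # one constraint per interior point: slope on [x_i,x_i+1] <= slope on [x_i+1,x_i+2]
  A = matrix(0,n,n-2)
  for(i in 1:(n-2)){
    d1 = x[i+1]-x[i]; d2 = x[i+2]-x[i+1]
    A[i,i]   =  1/d1
    A[i+1,i] = -1/d1-1/d2
    A[i+2,i] =  1/d2}
  # minimize ||y-theta||^2 subject to A'theta >= 0
  solve.QP(Dmat=diag(n),dvec=y,Amat=A,bvec=rep(0,n-2))$solution}

(On the cars dataset used below, tied values of speed would first have to be merged, e.g. by averaging dist per speed, to avoid zero-length intervals.)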

If m is differentiable, m is convex if and only if m(\mathbf{x})+ \nabla m(\mathbf{x})^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{x},\mathbf{y}.

More generally, if m is convex, then, for any \mathbf{x}, there exists \xi_{\mathbf{x}}\in\mathbb{R}^d such that m(\mathbf{x})+ \xi_{\mathbf{x}}^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{y}.
Such a \xi_{\mathbf{x}} is a subgradient of m at {\mathbf{x}}, and the set of subgradients is \partial m(\mathbf{x})=\big\lbrace \xi : m(\mathbf{x})+ \xi^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}),~\forall \mathbf{y}\in\mathbb{R}^d\big\rbrace

Hence, \mathbf{\theta}^\star is the solution of \text{argmin}\big\lbrace\|\mathbf{y}-\mathbf{\theta}\|^2\big\rbrace\text{ subject to }\theta_i+\xi_i^{\text{T}}[\mathbf{x}_j-\mathbf{x}_i]\leq\theta_j,~\forall i,j, for some \xi_1,\cdots,\xi_n\in\mathbb{R}^d. Now, to do it for real, use the cobs package, for constrained (B-)splines regression,

library(cobs)

To get a convex regression, use

plot(cars)
x = cars$speed
y = cars$dist
rc = conreg(x,y,convex=TRUE)
lines(rc, col = 2)


Here we can get the values of the knots

rc
 
Call:  conreg(x = x, y = y, convex = TRUE) 
Convex regression: From 19 separated x-values, using 5 inner knots,
     7,    8,    9,   20,   23.
RSS =  1356; R^2 = 0.8766;
 needed (5,0) iterations

and actually, if we use them in a linear-spline regression, we get the same output here

library(splines)
reg = lm(dist~bs(speed,degree=1,knots=c(4,7,8,9,20,23,25)),data=cars)
u = seq(4,25,by=.1)
v = predict(reg,newdata=data.frame(speed=u))
lines(u,v,col="green")

Let us add vertical lines for the knots

abline(v=c(4,7,8,9,20,23,25),col="grey",lty=2)

Classification from scratch, logistic with splines 2/8

Today, second post of our series on classification from scratch, following the brief introduction on the logistic regression.

Piecewise linear splines

To illustrate what's going on, let us start with a “simple” regression (with only one explanatory variable). The underlying idea is natura non facit saltus, “nature does not make jumps”, i.e. the processes governing natural things are continuous. That seems to be a rather strong assumption: we could imagine a fixed threshold to explain death. For instance, if patients die (for sure) when the “stroke index” exceeds a threshold, we might expect some discontinuity. Except that if that threshold is heterogeneous (a non-observable continuous variable), then we get back to the continuity assumption.

The simplest model we can think of to extend the linear model we've seen in the previous post is a piecewise linear function, with two parts: small values of x, and larger values of x. The most convenient way to do so is to use the positive part function (x-s)_+, which is the difference between x and s if that difference is positive, and 0 otherwise. For instance \beta_1 x+\beta_2(x-s)_+ is a piecewise linear function, continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, the trend is linear with slope \beta_1, and for larger values of x, it is linear with slope \beta_1+\beta_2. Hence, \beta_2 is interpreted as the change of slope.

And of course, it is possible to consider more than one knot. The function returning the positive part is the following

pos = function(x,s) (x-s)*(x>=s)

then we can use it directly in our regression model

reg = glm(PRONO~INSYS+pos(INSYS,15)+
pos(INSYS,25),data=myocarde,family=binomial)

The output of the regression is here

summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)     -0.1109     3.2783  -0.034   0.9730  
INSYS           -0.1751     0.2526  -0.693   0.4883  
pos(INSYS, 15)   0.7900     0.3745   2.109   0.0349 *
pos(INSYS, 25)  -0.5797     0.2903  -1.997   0.0458 *

Hence, the original slope, for very small values, is not significant; above 15, the slope becomes significantly positive; and above 25, there is a significant change again. We can plot it to see what's going on

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,type="l")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
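To make the change-of-slope interpretation explicit, we can also add up the coefficients per segment (a small check, on the scale of the linear predictor, using the coefficient names generated by the formula above),

b = coefficients(reg)
c(below_15 = b["INSYS"],
  from_15_to_25 = b["INSYS"]+b["pos(INSYS, 15)"],
  above_25 = b["INSYS"]+b["pos(INSYS, 15)"]+b["pos(INSYS, 25)"])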

Using bs() linear splines

Using the bs() function of the splines package, things are slightly different. We will use here so-called B-splines,

library(splines)

We can define spline functions with support (5,55) and with knots \{15,25\}

clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)


As we can see, the functions defined here are different from the ones before, but we still have (piecewise) linear functions on each segment (5,15), (15,25) and (25,55). And linear combinations of those functions (the two sets of functions) generate the same space. Said differently, even if the interpretation of the coefficients will be different, the predictions should be the same

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=1),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept)    -0.9863     2.0555  -0.480   0.6314  
bs(INSYS,..)1  -1.7507     2.5262  -0.693   0.4883  
bs(INSYS,..)2   4.3989     2.0619   2.133   0.0329 *
bs(INSYS,..)3   5.4572     5.4146   1.008   0.3135

Observe that there are three coefficients, as before, but again, the interpretation is here more complicated…

v=predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Nevertheless, the prediction is the same… and that’s nice.
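We can check that claim numerically, refitting both parameterizations and comparing the fitted probabilities (a quick sanity check),

reg_pos = glm(PRONO~INSYS+pos(INSYS,15)+pos(INSYS,25),
              data=myocarde,family=binomial)
reg_bs = glm(PRONO~bs(INSYS,knots=c(15,25),Boundary.knots=c(5,55),degree=1),
             data=myocarde,family=binomial)
max(abs(predict(reg_pos,type="response")-predict(reg_bs,type="response")))
# should be numerically zero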

Piecewise quadratic splines

Let us go one step further… Can we also have continuity of the derivative? Yes, and that's actually easy, considering parabolic functions. Instead of using a decomposition on x,(x-s_1)_+ and (x-s_2)_+, consider now a decomposition on x,x^{\color{red}{2}},(x-s_1)^{\color{red}{2}}_+ and (x-s_2)^{\color{red}{2}}_+.

 pos2 = function(x,s) (x-s)^2*(x>=s)
reg = glm(PRONO~poly(INSYS,2)+pos2(INSYS,15)+pos2(INSYS,25),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
                Estimate Std. Error z value Pr(>|z|)  
(Intercept)      29.9842    15.2368   1.968   0.0491 *
poly(INSYS, 2)1 408.7851   202.4194   2.019   0.0434 *
poly(INSYS, 2)2 199.1628   101.5892   1.960   0.0499 *
pos2(INSYS, 15)  -0.2281     0.1264  -1.805   0.0712 .
pos2(INSYS, 25)   0.0439     0.0805   0.545   0.5855

As expected, there are here five coefficients: the intercept and two for the part on the left (three parameters for the parabolic function), and then two additional terms for the part in the center – here (15,25) – and for the part on the right. Of course, for each portion, there is only one degree of freedom since we have a parabolic function (three coefficients) but two constraints (continuity, and continuity of the first order derivative).

On a graph, we get the following

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() quadratic splines

Of course, we can do the same with our R function. But as before, the basis functions are expressed differently

 x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",xlab="INSYS",col=clr6)


If we run the regression, we get

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=2),data=myocarde,
family=binomial)
summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)       7.186      5.261   1.366   0.1720  
bs(INSYS, ..)1  -14.656      7.923  -1.850   0.0643 .
bs(INSYS, ..)2   -5.692      4.638  -1.227   0.2198  
bs(INSYS, ..)3   -2.454      8.780  -0.279   0.7799  
bs(INSYS, ..)4    6.429     41.675   0.154   0.8774

But that’s not really a big deal since the prediction is exactly the same

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Cubic splines

Last, but not least, we can reach cubic splines. With our previous notation, we would consider a decomposition on (guess what) x,x^2,x^{\color{red}{3}},(x-s_1)^{\color{red}{3}}_+,(x-s_2)^{\color{red}{3}}_+, to get this time continuity, as well as continuity of the first two derivatives (and hence a very smooth function, since even the variations will be smooth). If we use the bs function, the basis is the following

B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=3)
matplot(x,B,type="l",lwd=2,col=clr6,lty=1,ylim=c(-.2,1.2))
abline(v=c(5,15,25,55),lty=2)

and the prediction will now be

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=3),
data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Two last things before concluding (for today): the location of the knots, and the extension to additive models.

Location of knots

In many applications, we do not want to specify the location of the knots. We just want – say – three (intermediary) knots. This can be done using

reg = glm(PRONO~1+bs(INSYS,degree=1,df=4),data=myocarde,family=binomial)

We can actually get the locations of the knots by looking at

attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 1L, knots = c(15.8, 21.4, 27.15), 
Boundary.knots = c(8.7, 54), intercept = FALSE)

which provides us with the location of the boundary knots (the minimum and the maximum of our sample) but also the three intermediary knots. Observe that, actually, those five values are just (empirical) quantiles

quantile(myocarde$INSYS,(0:4)/4)
   0%   25%   50%   75%  100% 
 8.70 15.80 21.40 27.15 54.00

If we plot the prediction, we get

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


If we look at what is computed before the logit transformation, we clearly see ruptures at the different quantiles

B = bs(x,degree=1,knots=c(15.8,21.4,27.15),Boundary.knots=c(8.7,54))
B = cbind(1,B)
y = B%*%coefficients(reg)
plot(x,y,type="l",col="red",lwd=2)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


Note that if we do not specify anything about the knots (number or location), we get no knots…

reg = glm(PRONO~1+bs(INSYS,degree=2),data=myocarde,family=binomial)
attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 2L, knots = numeric(0), 
Boundary.knots = c(8.7,54), intercept = FALSE)

and if we look at the prediction

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)


it is actually the same as a quadratic regression (as expected)

reg = glm(PRONO~1+poly(INSYS,degree=2),data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)

Additive models

Consider now the second dataset, with two variables. Consider here a model like
\mathbb{P}[Y|X_1=x_1,X_2=x_2]=\frac{\exp[\eta(x_1,x_2)]}{1+\exp[\eta(x_1,x_2)]}
where
\eta(x_1,x_2)=\beta_0+\color{red}{s_1(x_1)}+\color{blue}{s_2(x_2)}
with
\color{red}{s_1(x_1)}=\beta_{1,0}x_1+\beta_{1,1}(x_1-s_{11})_++\beta_{1,2}(x_1-s_{12})_+
and
\color{blue}{s_2(x_2)}=\beta_{2,0}x_2+\beta_{2,1}(x_2-s_{21})_++\beta_{2,2}(x_2-s_{22})_+
It might seem a little bit restrictive, but that's actually the idea of additive models.

reg = glm(y~bs(x1,degree=1,df=3)+bs(x2,degree=1,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Now, if we think about it, we've been able to get a “perfect” model, so, somehow, it no longer seems continuous…

persp(u,u,v,theta=20,phi=40,col="green")


Of course, it is… it is piecewise linear, with hyperplanes, some being almost vertical.

And one can also consider piecewise quadratic functions

reg = glm(y~bs(x1,degree=2,df=3)+bs(x2,degree=2,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Funny thing, we now have two “perfect” models, with different areas for the white and the black dots… Don’t ask me how to choose on that one.

In R, it is possible to use the mgcv package to run a gam regression. It is designed for generalized additive models, but here, we have only one variable, so it is hard to see the “additive” part, actually. And to be more specific, mgcv uses penalized quasi-likelihood from the nlme package (but we will get back to penalized routines later on).
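For completeness, a minimal sketch with mgcv on the myocarde dataset (the smoothing is done internally, so there are no knots to specify; this is an illustration, not the code from the post),

library(mgcv)
reg_gam = gam(PRONO~s(INSYS),data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg_gam,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)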

But maybe I should also mention another smoothing tool before, kernels (and maybe also k-nearest neighbors). To be continued

Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that given covariates \mathbf{x}, Y has a Bernoulli distribution,Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}The goal is to estimate parameter \mathbf{\beta}.

Recall that the heuristics for the use of that function for the probability is that\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}

Maximum of the (log-)likelihood function

The log-likelihood is here\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i) where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
 -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
 }

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~.,data=myocarde)$coefficients

Why not start with the parameters from the OLS regression? Somehow, we might think that, at least, the signs should be ok, for instance. Anyway, we need a starting point, and let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

 logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS    
 1.656926397  0.045234029 -2.119441743  0.204023835 
       PRDIA        PAPUL        PVENT        REPUL 
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)

simu = function(i){
logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, 
negLogLik, hessian=TRUE, method = "BFGS", 
control=list(abstol=1e-9))
logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])

Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) 
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) + 
(1-vY)*(-log(1 + exp(mX %*% vBeta)))) 
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable, 
method = 'L-BFGS-B', gr = likelihoodScore, 
mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let's be honest here, I do not feel comfortable with those techniques. So, what happened here?

Here, the technique we use is based on the following idea,\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}} The problem is that my computer does not know these first and second derivatives. So it computes them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
 -sum(y*log(logit(X%*%beta)) + 
(1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y)) 
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren't. From my point of view, that is a major problem (keep in mind that we do not deal here with massive data! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical expressions of those derivatives? Even if some people claim the opposite, it can actually be useful to do the maths, instead of considering only numerical values.

Newton (or Fisher) Algorithm

If you open any Econometrics textbooks (one can also try to derive it), you will get \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
while \frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X} where \mathbf{\Delta}_{old} is the diagonal matrix with entries p_{old,i}(1-p_{old,i}). The Newton iteration is then

Y=myocarde$PRONO
X=cbind(1,as.matrix(myocarde[,1:7]))
colnames(X)=c("Inter",names(myocarde[,1:7]))
 beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}

Observe that here, I use only nine iterations of the algorithm!

 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point

beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

Nice, isn’t it? Looks like we got our winner, don’t we? And one can use the inverse of the Hessian matrix to get standard deviations.
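For instance, a minimal sketch, using the Hessian evaluated at the last iterate of the Newton loop above,

pi = exp(X%*%beta[,10])/(1+exp(X%*%beta[,10]))
omega = matrix(0,nrow(X),nrow(X)); diag(omega) = (pi*(1-pi))
# the variance matrix is the inverse of minus the Hessian, i.e. of X' Omega X
sqrt(diag(solve(t(X)%*%omega%*%X)))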

Weighted Least-Squares

Let us go one step further. We've seen that we want to compute something like \mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z} (if we do substitute matrices in the analytical expressions) where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that's simply a standard weighted least-squares problem \mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbrace The only problem here is that the weights \mathbf{\Delta}_{old} are functions of the unknown \mathbf{\beta}_{old}. But actually, if we keep iterating, we should be able to solve it: given \mathbf{\beta}, we get the weights, and with the weights, we can use weighted OLS to get an updated \mathbf{\beta}. That's the idea of iteratively reweighted least squares.

The algorithm will be

df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
p = exp(X %*% beta) / (1+exp(X %*% beta))
omega = diag(nrow(df))
diag(omega) = (p*(1-p))
df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)
beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients
}

and the output is here

 beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA 
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171 
        PAPUL         PVENT         REPUL 
  0.016756506  -0.106776012  -0.003154187

which is almost what we've obtained before. Nice, isn't it? Actually, here we also have standard deviations for the estimators

summary( lm(Z~.,data=df[,-8], weights=diag(omega)))
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475

The standard glm function

Of course, it is possible to use an R built-in function to get our estimate

summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519

Application and visualisation

Let us visualize the prediction obtained from the logistic regression, on our second dataset

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Here the level curves (iso-probability curves) are linear, so the space is divided into two parts (0 and 1, survival and death, white and black) by a straight line (a hyperplane in higher dimension). Furthermore, since we have a linear model, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane), parallel to the first one.
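We can visualize that parallelism by adding level curves for other cutoffs on the previous graph (a one-line addition, reusing the u and v computed above),

contour(u,u,v,levels=c(.25,.75),add=TRUE,lty=2) # parallel to the .5 line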

Next time, we will introduce splines to smooth those continuous covariates… to be continued.

Classification from scratch, overview 0/8

Before my course on « big data and economics » at the university of Barcelona in July, I wanted to upload a series of posts on classification techniques, to get an insight on machine learning tools.

According to some common idea, machine learning algorithms are black boxes. I wanted to get back to that saying. First of all, isn't it the case also for regression models, like generalized additive models (with splines)? Do you really know what the algorithm is doing? Even for the logistic regression. In textbooks, we can easily find math formulas. But what is really done when I run it, in R?

When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let's try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R, within the next two weeks.

I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])

Here is some code to get a visualization of the prediction (here the probability to be a black point)

rmatrix_model = function(model){
u = seq(0,1,length=101)
p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
return(v)}
nice_graph=function(v){
u = seq(0,1,length=101)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))

Note that colors are defined here as

clr10= c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b")

or with some nonlinear model

The second one is a dataset I got from Gilbert Saporta, about heart attacks and death (our binary variable).

myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))

So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), about comparing models, etc. The goal here is simply to understand what's going on when we call either glm, glmnet, gam, random forest, svm, xgboost, or any function to get a predictive model.

On the interpretation of a regression model

Yesterday, NaytaData (aka @NaytaData ) posted a nice graph on reddit, with bicycle traffic and mean air temperature, in Helsinki, Finland, per day,

I found that graph interesting, so I did ask for the data (NaytaData kindly sent them to me tonight).

df=read.csv("cyclistsTempHKI.csv")
library(ggplot2)
ggplot(df, aes(meanTemp, cyclists)) +
  geom_point() +
  geom_smooth(span = 0.3)

But as mentioned by someone on twitter, the interpretation is somehow trivial: people get out on their bikes when the weather is nice. The hotter, the more cyclists on the road. Which is interpreted here in a causal way…

But actually, we can also visualize the data as follows, as suggested by Antoine Chambert-Loir

 ggplot(df, aes(cyclists, meanTemp)) +
  geom_point() +
  geom_smooth(span = 0.3)

The interpretation would be, somehow, that the more cyclists there are on the road, the hotter it is. Why not consider this causal interpretation here? Like cyclists go so fast, or sweat so much, that they increase the temperature…

Of course, it is the standard (recurrent) discussion “correlation is not causality”, but in regression models, we like to tell a story, to pretend that we have some sort of causal story. But we do not prove it. Here, we know that the first interpretation is more credible than the second one, but how do we know that? To go further, how can we use machine learning techniques to prove causal relationships? How could a machine choose between the first and the second story?


Visualizing effects of a categorical explanatory variable in a regression

Recently, I’ve been working on two problems that might be related to semiotic issues in predictive modeling (i.e. instead of a standard regression table, how can we plot coefficient values in a regression model). To be more specific, I have a variable of interest Y that is observed for several individuals i, with explanatory variables \mathbf{x}_i, year t, in a specific region z_i\in\{A,B,C,D,E\}. Suppose that we have a simple (standard) linear model (forget about time here) y_i=\beta_0+\beta_1x_{1,i}+\cdots+\beta_kx_{k,i}+\sum_j \alpha_j \mathbf{1}(z_i\in j)+\varepsilon_i

Let us forget the temporal effect to focus on the spatial effect today. And consider some simulated dataset. There will be only one (continuous) explanatory variable. And I will generate correlated covariates, just to be more realistic.

n=1000
library(mnormt)
r=.5
Sigma=matrix(c(1,r,r,1), 2, 2)
set.seed(1)
X=rmnorm(n,c(0,0),Sigma)
X1=cut(X[,1],c(-100,quantile(X[,1],c(.1,.4,.7,.85)),
100),labels=LETTERS[1:5])
X2=X[,2]
Y=5+X[,1]-X[,2]+rnorm(n)/2
db=data.frame(Y,X1,X2)

Here we have y_i=\beta_0+\beta_1x_{1,i}+\sum_{j\in\{A,B,C,D,E\}} \alpha_j \mathbf{1}(z_i\in j)+\varepsilon_i The goal here is to get a graph to visualize the vector \hat\alpha=(\hat\alpha_A,\cdots,\hat\alpha_E). Let us run the linear regression

reg1=lm(Y~X1+X2,data=db)
idx=which(substr(names(reg1$coefficients), 1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[2:5]
barplot(v1,col=rgb(0,0,1,.4))

Note that it is possible to add some sort of “confidence interval” to discuss significance (or to avoid spending hours discussing differences in bar heights that are not significantly different)

library(Hmisc)
sv1=summary(reg1)$coefficients[idx,2]
(bp1=barplot(v1,ylim=range(c(0,v1+2*sv1))))
errbar(bp1[,1],v1,v1-2*sv1,v1+2*sv1,add=TRUE)

My main concern here is the “reference” that is considered. Should A be the reference? Why not B?

db$X1=relevel(db$X1,"B")
reg1=lm(Y~X1+X2,data=db)
idx=which(substr(names(reg1$coefficients),1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[c(1,3:5)]
library(Hmisc)
sv1=summary(reg1)$coefficients[idx,2]
(bp1=barplot(v1))
errbar(bp1[,1],v1,v1-2*sv1,v1+2*sv1,add=TRUE)

Why not the smallest one? Why not the largest one?… What if there is no simple way to choose? Furthermore, let us get back to the original point, which is that there might be some temporal aspects. More precisely, we can have \hat\alpha^{(t)}=(\hat\alpha_A^{(t)},\cdots,\hat\alpha_E^{(t)}). If we also have \hat\alpha^{(t+1)} and we get another plot, how do we interpret it? If, for E, the bar is taller, it means that, relative to A, the difference has increased. I have the feeling that the interpretation is more complicated, because we do not see, on that graph, changes in \hat\alpha^{(t)}_A itself.

Let us try something else. First, let us get back to the original setting

db$X1=relevel(db$X1,"A")

Consider here the regression without the intercept, so that all values remain

reg2=lm(Y~0+X1+X2,data=db)
idx=which(substr(names(reg2$coefficients),1,2)=="X1")
v1=reg2$coefficients[idx]
names(v1)=LETTERS[1:5]
barplot(v1)

It can be hard to read, especially if Y takes (very) large values, and you think that barplots should start at 0. But still, having those 5 values is nice. Why not rescale that graph?

A natural idea may be to consider the case where no spatial component is considered, and to look at the difference with that reference.

reg1=lm(Y~1+X2,data=db)
reg2=lm(Y~0+X1+X2,data=db)
idx=which(substr(names(reg2$coefficients),1,2)=="X1")
v1=reg2$coefficients[idx]
v2=v1-reg1$coefficients["(Intercept)"]
barplot(v2,col=rgb(0,0,1,.4))
sv2=summary(reg2)$coefficients[idx,2]
(bp2=barplot(v2,ylim=range(c(v2-2*sv2,v2+2*sv2))))
errbar(bp2[,1],v2,v2-2*sv2,v2+2*sv2,add=TRUE)

I like that graph, I should admit it. Now, I still have some remaining questions. For instance, can we ensure that, when only the intercept is considered, the value of \hat\beta_0 is somewhere between \hat\beta_A,\cdots,\hat\beta_E? Is it possible that \hat\beta_A-\hat\beta_0,\cdots,\hat\beta_E-\hat\beta_0 are all positive? In that case, I would find that hard to interpret.
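On the simulated dataset, we can at least check empirically where the intercept-only value falls (a quick check, reusing reg1 and reg2 from above, not a proof),

reg1$coefficients["(Intercept)"]
range(reg2$coefficients[substr(names(reg2$coefficients),1,2)=="X1"])
# is the intercept-only estimate inside the range of the five coefficients?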

Actually, if I really want values that can be seen as compared to some average, why not consider a (weighted) average of \hat\beta_A,\cdots,\hat\beta_E? (weights being here proportion in each class, in each region)

w=table(db$X1)
v3=v1-sum(w*v1)/sum(w)
sv3=summary(reg2)$coefficients[idx,2]
(bp3=barplot(v3,ylim=range(c(v3-2*sv3,v3+2*sv3))))
errbar(bp3[,1],v3,v3-2*sv3,v3+2*sv3,add=TRUE)

I like that one. But what if, instead of normalizing at the end, we normalize the original dependent variable? By “normalize”, I mean “center”, to work with a centered variable.

db$Y0=db$Y-mean(db$Y)
reg3=lm(Y0~0+X1+X2,data=db)
v3=reg3$coefficients[idx]
sv3=summary(reg3)$coefficients[idx,2]
(bp3=barplot(v3,ylim=range(c(v3-2*sv3,v3+2*sv3))))
errbar(bp3[,1],v3,v3-2*sv3,v3+2*sv3,add=TRUE)

This one is nice, because it is extremely simple to explain. But what if, instead of a linear regression, we had a logistic one (with Y\in\{0,1\})? Or a Poisson regression…

So maybe it cannot be the best solution here. Let us try something else… In insurance ratemaking, people like to use a “zonier”. It is a two-stage regression. The idea is to run a regression without any spatial component first, and then to regress the residuals on the spatial variables. Here, it would be something like

reg1=lm(Y~1+X2,data=db)
reg4=lm(residuals(reg1)~0+X1,data=db)

Since we focus on residuals, those are centered, and we have an easy interpretation of respective values

v4=reg4$coefficients
sv4=summary(reg4)$coefficients[,2]
(bp4=barplot(v4,names.arg=LETTERS[1:5]))
errbar(bp4[,1],v4,v4-2*sv4,v4+2*sv4,add=TRUE)

I guess that it can also be used in generalized linear models, with Pearson (or deviance) residuals.

Another possible idea can be the following. Again, the goal is not to have the true values, but to visualize on a graph how regions can be different. Here, all of them are significantly different. And in region A, Y is smaller, ceteris paribus (other things equal in the sense that we have taken into account x_1). And in region E it is larger. Here, the graph helps to “see” those differences.

Why not consider a completely different graph? What if we plot the vector a instead of \alpha, where a_A can be interpreted as the value of the coefficient if we consider region A against “not region A”? What if we consider 5 regressions where dichotomous versions of Z are considered: Z_j=\mathbf{1}_{Z=j}?

v5=sv5=rep(NA,5)
names(v5)=LETTERS[1:5]
for(k in 1:5){
reg=lm(Y~I(X1==LETTERS[k])+X2,data=db)
v5[k]=reg$coefficients[2]
sv5[k]=summary(reg)$coefficients[2,2]}

We can plot that sequence of values, including some confidence intervals (that would be related to significance with respect to all other regions)

(bp5=barplot(v5,ylim=range(c(v5-2*sv5,v5+2*sv5))))
errbar(bp5[,1],v5,v5-2*sv5,v5+2*sv5,add=TRUE)

Looking at the values does not give intuitive results, but I have the feeling that it is easy to explain what we plot (we compare each region to “the rest of the world”), and the ordering of a seems to be consistent with that of \alpha (but I could not prove it).
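On the simulated data, we can compare the two orderings directly (a quick check, assuming v1 and v5 from above are still in the workspace),

rank(v5) # ordering of the region-versus-rest-of-the-world coefficients
rank(v1) # ordering of the coefficients from the no-intercept regression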

Here are some ideas I got. I should be able to provide other graphs, but I would love to discuss with anyone interested in that topic, to find a proper and nice way to visualize the effects of a categorical explanatory variable in a regression model (that can be a logistic one). Comments are open…

“Actuarial” justice, algorithms… and data

A little over a year ago, Virginie Gautron sent me a lot of documents on “actuarial justice”, a concept I was then discovering. To understand what it is about, I can point to pénalité et gestion des risques : vers une justice « actuarielle » en Europe ?, which gives a state of the art, in French. I admit I then put the topic aside (for lack of time), and the other day I devoured surveillé et punir by Nicolas Bourgoin, which contains a chapter on the subject.

Without wanting to fall into the cliché of the two cultures (à la C.P. Snow), one has to admit that, for an econometrician, reading legal scholars writing about predictive (or actuarial) models is a striking experience. And what troubles me (I have the same concern with psychology, a closely related topic) is that the emphasis is always on the model, the algorithm, that thing which seems to fascinate lawyers, whereas it seems to me that the heart of the problem is the data. We are told, for instance, that algorithms are racist (that is more or less Cathy O'Neil's thesis in Weapons of Math Destruction), but the problem is not the model. It is the fact that there is a huge selection bias in the data! And as the saying goes, “garbage in, garbage out”… or, to put it in a more polished way, I could attempt a “purgamentum init, exit purgamentum”, to speak like a lawyer. In other words, if bad data go in, do not expect much to come out…

To understand selection bias, one can reread Mostly Harmless Econometrics by Joshua Angrist and Jörn-Steffen Pischke. In the introduction, they ask the following simple question: after surgery, is it better to send patients back home, or to keep them in a convalescence room at the hospital? The trouble is that the data are biased: yes, looking at the raw data, those who went home recovered faster… but maybe people were allowed to go home precisely because they were doing well? And this selection bias shows up almost everywhere. Probably even more in problems related to justice.

Let us take a simple example. We will look at the probability of being guilty of a crime, and of being convicted by a court. That is my variable Y. For an econometric model, I need explanatory variables. I have a variable X_3 and another one X_4; the first one really influences the probability of committing a crime, while the other one is just noise. Now, let us get to the heart of the model. Suppose that I have a variable correlated with my wealth \tilde{X}_2, such as the place of residence, a racial criterion, or, to keep things simple, the diploma. Let X_1 denote the education level of the person (but once again, anything works: the idea is that this variable is correlated with wealth). And, crucially, this variable is not at all a causal variable in my model. The probability of committing a crime is not influenced by X_1: a person with a graduate degree has the same chance of committing a crime as a person who failed the brevet (the French middle-school diploma). We will even assume that the probability of committing a crime is independent of wealth. Wealth, in fact, allows me to pay for a good lawyer. And having a good lawyer lowers the probability of being convicted. To keep things simple, I have two levels of wealth, which give me either a court-appointed lawyer, or a lawyer who will perhaps get more involved (I am simplifying, for pedagogical purposes). In short, my model has the following form: the probability of being convicted of a crime is p=(0.025+0.05\cdot X_3)\cdot[k\cdot \mathbb{1}(X_2=R)+\mathbb{1}(X_2=P)] In other words, my probability is a (linearly) increasing function of X_3, and if I am rich (denoted X_2=R), I have a lower chance of being convicted, with a factor k\in[0,1], other things being equal, ceteris paribus, to speak Latin like the lawyers.

To simulate a dataset, consider a population of one million people, and assume that wealth is highly correlated with the education level. For the record, class A contains the most educated people, and I assume that a good lawyer divides by three the chance of being convicted.

n=1000000
r=.95
k=1/3
S=matrix(c(1,r,r,1),2,2)
library(mnormt)
set.seed(1)
vectX=rmnorm(n,varcov=S)
X1=cut(vectX[,1],breaks=c(-10,-1,1,10),c("C","B","A"))
X1=relevel(X1,"A")
X2=cut(vectX[,2],breaks=c(-10,1,10),c("P","R"))
X3=runif(n)
X4=runif(n)
Y1=rbinom(n,size = 1,prob=(.025+.05*X3)*(k*(X2=="R")+(X2=="P")))
B0=data.frame(X1,X2,X3,X4,Y1)

We can compute a few statistics to look at the characteristics of the convicted people (only)

B1=data.frame(X1=X1[Y1==1],X2=X2[Y1==1],X3=X3[Y1==1],X4=X4[Y1==1])

For instance, if I look at the proportion of the most educated people: they are about 15% of the total population, and 5% among convicted people

 mean(B0$X1=="A")
[1] 0.158734
 mean(B1$X1=="A")
[1] 0.05016072
 t.test(B0$X1=="A",B1$X1=="A")
data:  B0$X1 == "A" and B1$X1 == "A"
t = 219.5, df = 1283500, p-value < 2.2e-16
95 percent confidence interval:
 0.1076038 0.1095428
sample estimates:
 mean of x  mean of y 
0.15873400 0.05016072

In other words, we have a significant difference between those two percentages. Same thing if I look at the average value of X_3, which was 50% in the population and 58% in the convicted population

 t.test(B0$X3,B1$X3)
data:  B0$X3 and B1$X3
t = -163.46, df = 844940, p-value < 2.2e-16
95 percent confidence interval:
 -0.08455430 -0.08255058
sample estimates:
mean of x mean of y 
0.5000094 0.5835618

while for X_4, we get 50% in the total population, and 50% in the convicted population. But that was expected…

 t.test(B0$X4,B1$X4)
data:  B0$X4 and B1$X4
t = -0.81161, df = 48645, p-value = 0.417
95 percent confidence interval:
 -0.003869879  0.001603452
sample estimates:
mean of x mean of y 
0.5001326 0.5012659

Let us move to the second phase. There seems to be a real interest, in the literature on actuarial models, in building models for the probability of recidivism. Here again, I will make a strong assumption: going to prison does not influence the probability of reoffending at all. In other words, for this already-convicted population, I will re-run my simulations, with a probability of committing a crime that depends only on X_3, but still with our wealth effect, which buys a lawyer who lowers the probability of being convicted.

 set.seed(2)
 n1=nrow(B1)
 Y2=rbinom(n1,size = 1,prob=(.025+.05*B1$X3)*(k*(B1$X2=="R")+(B1$X2=="P")))
 B2=cbind(B1,Y2)
 reg=glm(Y2~X1+X3+X4,data=B2,family=binomial)

I have built here my model for the probability of recidivism. The model is built on the education level X_1 (which, once again, has no influence whatsoever on the probability of committing a crime), the variable X_3 (which does play a role) and the noise variable X_4.


I use a logistic score here, since it is the most natural choice.

 summary(reg)
 
Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -4.04753    0.11592 -34.916  < 2e-16 ***
X1C          0.61311    0.10914   5.618 1.93e-08 ***
X1B          0.68807    0.09977   6.896 5.34e-12 ***
X3           1.00610    0.08052  12.495  < 2e-16 ***
X4          -0.13562    0.07286  -1.861   0.0627 .

First thing to observe: the variable X_4 is borderline significant… Some authors on these topics would call it a weak signal. Yet, here, it is noise! Just noise!! And among the strong, clearly significant signals, we find X_1, our education level. Here, if you are poorly qualified (levels B or C), then you have a much higher chance of reoffending! Strongly significant! While, as a reminder, the probability of committing a crime, and of reoffending, is in no way caused by this variable! It is just a variable correlated with the wealth variable, which lowers the probability of being convicted.

More interesting: if I look at the value of the coefficient as a function of k, i.e. of the value of a good lawyer (which one can afford when rich), I obtain the following curve.

In other words, on the right-hand side: even if a good lawyer lowers the chance of being convicted by only 5%, the education level appears significantly related to the probability of recidivism, with a peak around 30%. If lawyers have an incredible power, for instance dividing the probability of being convicted by 20, then, strangely, the diploma variable becomes barely significant again, but I think it is because, by then, there is almost no highly educated person left in the sample.
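A minimal sketch of that experiment could be as follows (re-running the two-stage simulation for each value of k and extracting the z-value of the diploma variable; it assumes X1, X2, X3 and n from the simulation above are still in the workspace),

coef_k = function(k){
  Y1 = rbinom(n,size=1,prob=(.025+.05*X3)*(k*(X2=="R")+(X2=="P")))
  B1 = data.frame(X1=X1[Y1==1],X2=X2[Y1==1],X3=X3[Y1==1],X4=X4[Y1==1])
  Y2 = rbinom(nrow(B1),size=1,prob=(.025+.05*B1$X3)*(k*(B1$X2=="R")+(B1$X2=="P")))
  reg = glm(Y2~X1+X3+X4,data=cbind(B1,Y2),family=binomial)
  summary(reg)$coefficients["X1C","z value"]}
vk = seq(.05,.95,by=.05)
plot(vk,Vectorize(coef_k)(vk),type="l",xlab="k",ylab="z value of X1C")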

One could object that the correlation between wealth and education level is too strong here. With a correlation of 70% (in my latent Gaussian model), I obtain

which gives a smaller effect, but here again, if a good lawyer reduces the probability of being convicted by a factor somewhere between 5% and 90%, then the diploma variable will appear significant.

My model is not wrong here. It is essentially that I have a bias in my data. And not taking it into account means the model makes no sense, and cannot be interpreted without making mistakes (and here I pile up the blunders, seeing a weak signal where there is only noise, and seeing a strong signal for a variable that has no causal link). Let me insist heavily: the diploma plays no role whatsoever here. And one can replace the diploma by any variable correlated with wealth, such as being a tenant or a homeowner, etc.

In short, using econometric models in judicial problems is not simple! I talk here about selection bias, but the most accurate formulation would be a misspecification of the model (we do not use the right variables, but the right ones may be unobservable, or hard to justify ethically: can we ask for the income of the person to build a recidivism score, and get a score that effectively depends on income?). There is also an identification problem, since it is hard to know what influences committing a crime, and what influences the conviction. What we observe are convictions, not acts (I am not even talking about wrongful convictions here, which I discussed in another post, on the precautionary principle). But I will try to come back to this point after the holidays…

One last anecdote to finish. After discovering these models through Virginie Gautron, I talked with a friend, a presiding judge, who complained about the incompetence of the experts she had at hand. She wanted to do some housecleaning. In short, she asked me whether I could build a model assigning a score to an expert. And again, my answer was yes, building a model is easy, provided you have data. How can I get a score Y for an expert? Who knows what a good expert is? Someone against whom no counter-assessment is requested? Unfortunately, the justice system does not work that way, and such requests have nothing to do with the quality of the expert. And quality, from the judge's point of view, can be a very different notion from that of the two opposing parties… In short, I could easily build a model. But getting one that makes sense is a whole different story!

I Got The Feelin’

Last week, I went through my CD collection, trying to find records I had not listened to for a while. And I got the feeling that the music I listen to nowadays is slower than what I was listening to in my 20s. I was wondering whether that was an age issue, or whether music in the 90s was simply "faster" than what was released in 2015. So I tried to scrape the BPM database to get a more appropriate answer to this "feeling" of mine. I extracted two pieces of information: the BPM (beats per minute) and the year of release.

Here is a function to extract information from the website,

> library(XML)
> extractbpm = function(VBP,P){
+ # extract the track table for a given BPM value (VBP) and result page (P)
+ url=paste("https://www.bpmdatabase.com/music/search/?artist=&title=&bpm=",VBP,"&genre=&page=",P,sep="")
+ download.file(url,destfile = "page.html")
+ tables=readHTMLTable("page.html")
+ return(tables)}

For instance

> extractbpm(115,13)
$`track-table`
   Artist                             Title                                   Mix                   BPM Genre          Label                           Year
1  Eros Ramazzotti y Claudio Guidetti Dimelo A Mi                             —                     115 —              Sony                            2009
2  Everclear                          Volvo Driving Soccer Mom                —                     115 —              Capitol Records                 2003
3  Evils Toy                          Dear God                                —                     115 —              —                               —
4  Expose                             In Walked Love                          —                     115 —              Arista Records                  1994
5  Fabolous ft. 2 Chainz              When I Feel Like It                     Explicit              115 Urban          Def Jam/Island Def Jam          2013
6  Fabolous ft. 2 Chainz              When I Feel Like It                     —                     115 Urban          Def Jam/Island Def Jam          2013
7  Fabolous ft. 2 Chainz              When I Feel Like It                     Radio Edit            115 Urban          Def Jam/Island Def Jam          2013
8  Fanny Lu                           Fanfarron                               —                     115 Latin Pop      Universal Latino                2011
9  Featurecast                        Ain't My Style                          Psychemagik Dub       115 —              Jalapeno                        2012
10 Fem 2 Fem                          Obsession                               —                     115 —              Critique Records                1993
11 Fernando Villalona                 Mi Delito                               —                     115 —              Mt&vi Records/caminante Records 2001
12 Fever Ray                          Triangle Walks                          Rex The Dog Remix     115 —              Little Idiot/Mute               2012
13 Firstlove                          Freaky                                  —                     115 —              Jwp Music                       2000
14 Fito Blanko                        Pegadito Suavecito                      —                     115 Merengue Mambo Crown Loyalty                   2012
15 Flechazo Del Norte                 Mariposa Traicionera                    —                     115 —              Hacienda                        2010
16 Fluke                              Switch/Twitch                           Album Version         115 —              One Little Indian Records       2004
17 Flyleaf                            Something Better                        —                     115 Alternative    A&M/Octone                      2013
18 FM Static                          The Next Big Thing                      —                     115 —              Tooth & Nail Records            2007
19 Fonseca                            Eres Mi Sueno                           —                     115 Merengue Mambo 10                              2012
20 Fonseca ft. Maffio & Nayer         Eres Mi Sueno                           Urban Version         115 —              10                              2012
21 Francesca Battistelli              Have Yourself A Merry Little Christmas  —                     115 —              Word/Fervent/Warner Bros        2009
22 Frankie Ballard                    Young & Crazy                           —                     115 Country        Warner Bros                     2015
23 Frankie J.                         More Than Words                         Mynt Rocks Radio Edit 115 —              Columbia                        2005
24 Frank Sinatra                      The Hucklebuck                          —                     115 Jazz           Columbia                        1950
25 Franz Ferdinand                    The Dark Of The Matinée                 —                     115 New Wave       —                               2004

We have here one of the few old songs, a 1950 tune by Frank Sinatra. Now let us scrape the website with a simple loop (where the bpm goes from 40 to 200). Start with

> BASE=NULL
> vbp=40
> p=1

and then, a loop based on

> while(vbp<=200){
+ F=extractbpm(vbp,p)
+ if(length(F)==1){       # the page contains a track table: store it
+ BASE=rbind(BASE,F[[1]][,c("Artist","Title","BPM","Year")])
+ p=p+1}                  # and move to the next page
+ if(length(F)==0){       # no more pages for this bpm value
+ vbp=vbp+1
+ p=1}}

Then we should clean the dataset

BASE=BASE[!duplicated(BASE),]          # remove duplicated entries
BASE=BASE[-which(BASE$Year=="—"),]     # remove tunes with no year of release
BASE$y=as.numeric(as.character(BASE$Year))
BASE$bpm=as.numeric(as.character(BASE$BPM))
BASE=BASE[BASE$y>=1940,]               # keep tunes from 1940 onwards

and we end up with almost 50,000 tunes.
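Before plotting anything, a quick numerical check (using only the variables constructed above) gives the average BPM per decade,

BASE$decade=10*floor(BASE$y/10)            # decade of release
round(tapply(BASE$bpm,BASE$decade,mean),1) # average bpm per decade

and a boxplot, per year, gives a more complete picture,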

boxplot(BASE$bpm~as.factor(BASE$y),
col="light blue")

Over the past 20 years, it looks like the speed of tunes has declined (let us forget the tunes from 2017; clearly, we have a problem with that year…)

library(mgcv)
plot(BASE$y,BASE$bpm)
reg=gam(bpm~s(y),data=BASE)
B=data.frame(y=1950:2017)
p=predict(reg,newdata=B)
lines(B$y,p,lwd=3,col="red")

which is confirmed with a (smoothed) regression

p2=predict(reg,newdata=B,se.fit=TRUE)
plot(B$y,p2$fit,lwd=3,col="red",type="l",ylim=c(90,140))
lines(B$y,p2$fit+p2$se.fit)
lines(B$y,p2$fit-p2$se.fit)

even when incorporating the confidence band. The bumps are probably related to the smoothing parameters but, indeed, it looks like the average speed of music tunes has decreased, from 110-115 bpm in the 90s to less than 100 nowadays. Now, to be honest, I would love to have access to personal listening data from itunes, deezer or spotify, to get a better understanding (e.g. when in the week, or in the day, do we like to listen to faster music). But so far, I could not get access to such data. Too bad…

Regression with Splines: Should we care about Non-Significant Components?

Following this morning's course, I got a very interesting question from one of my students. The question was about non-significant components in a spline regression. Should we prefer a model with a small number of knots, where all components are significant, or one with a (much) larger number of knots, where many components are non-significant?

My initial intuition was to prefer the second alternative, as with autoregressive models in R. When we fit an AR(6) model, it is not really a big deal if most coefficients are not significant (except the last one); it won't affect the forecast much. So here, it might be the same. With a larger number of knots, we should be able to capture small bumps that we would never capture with a smaller number.

Here is what I have with a small number of knots, and cubic splines

and with a larger number of knots

In order to understand what’s going on, consider a simple model, with the two splines above, in red

> set.seed(1)
> library(splines)
> x=seq(0,1,by=.01)
> v=bs(x,10)
> x2=v[,2]
> x10=v[,10]
> set.seed(1)
> y=1+3*x2+5*x10+rnorm(length(x))/4
> y_test=1+3*x2+5*x10+rnorm(length(x))/4

Note that I have generated two sets of data here: one to train the model, and one to test it. The data look like this

> plot(x,y)

It is based on two splines,

> lines(x,1+3*x2+5*x10,col="red")
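To see which components we are talking about, one can also plot, separately, the ten basis functions, highlighting the two that actually enter the model (just a visual check),

> matplot(x,v,type="l",lty=1,col="grey")  # the ten basis functions
> lines(x,x2,col="red",lwd=2)             # the two used to generate y
> lines(x,x10,col="blue",lwd=2)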

If we use a spline model with 10 degrees of freedom, we get

> df=data.frame(x,y)
> reg=lm(y~bs(x,10),data=df)
> summary(reg)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.91671 0.17068   5.371 6.08e-07 ***
bs(x, 10)1   0.20485 0.32696   0.627    0.533    
bs(x, 10)2   3.15593 0.22534  14.005  < 2e-16 ***
bs(x, 10)3   0.04847 0.25075   0.193    0.847    
bs(x, 10)4   0.09373 0.21597   0.434    0.665    
bs(x, 10)5   0.11624 0.22939   0.507    0.614    
bs(x, 10)6   0.24829 0.22293   1.114    0.268    
bs(x, 10)7  -0.06825 0.23498  -0.290    0.772    
bs(x, 10)8   0.19633 0.26241   0.748    0.456    
bs(x, 10)9   0.27557 0.26976   1.022    0.310    
bs(x, 10)10  4.78134 0.24116  19.826  < 2e-16 ***

which makes sense, given what we generated: most of the components are not significant, except the second and the tenth. We can actually test that all the other components are null (simultaneously)

> library(car)
> A=matrix(0,8,11)
> colnames(A)=names(coefficients(reg))
> A[1,2]=A[2,4]=A[3,5]=A[4,6]=A[5,7]=
+ A[6,8]=A[7,9]=A[8,10]=1
> b=rep(0,8)
> linearHypothesis(reg, A,b)
Linear hypothesis test
 
Hypothesis:
bs(x, 10)1 = 0
bs(x, 10)3 = 0
bs(x, 10)4 = 0
bs(x, 10)5 = 0
bs(x, 10)6 = 0
bs(x, 10)7 = 0
bs(x, 10)8 = 0
bs(x, 10)9 = 0
 
Model 1: restricted model
Model 2: y ~ bs(x, 10)
 
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1     98 4.8766                           
2     90 4.6196  8   0.25701 0.6259  0.754

and indeed, those coefficients are jointly non-significant.

> yp10=predict(reg)
> lines(df$x,yp10,col="red")
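And since the initial question was about out-of-sample performance, here is a minimal sketch of the comparison, using the test sample generated above (the design points are the same, so fitted values can be compared with y_test directly),

> reg3=lm(y~bs(x,3),data=df)       # a small number of knots
> mean((y_test-predict(reg3))^2)   # out-of-sample error, small model
> mean((y_test-predict(reg))^2)    # out-of-sample error, ten degrees of freedom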


I Fought the (distribution) Law (and the Law did not win)

A few days ago, I was asked whether we should spend a lot of time choosing the distribution we use in GLMs for (actuarial) ratemaking. On that topic, I usually claim that the family is not the most important parameter of the regression model. Consider the following dataset

> db <- data.frame(x=c(1,2,3,4,5),y=c(1,2,4,2,6))
> plot(db,xlim=c(0,6),ylim=c(-1,8),pch=19)

To visualize a regression model, use the following code

> nd=data.frame(x=seq(0,6,by=.1))
> add_predict = function(reg){
+ prd1=predict(reg,newdata=nd,se.fit=TRUE,type="response")
+ y1=prd1$fit
+ y1_upp=prd1$fit+1.96*prd1$residual.scale*prd1$se.fit
+ y1_low=prd1$fit-1.96*prd1$residual.scale*prd1$se.fit
+ polygon(c(nd$x,rev(nd$x)),c(y1_upp,rev(y1_low)),
+ col="light green",angle=90,density=40,border=NA)
+ lines(nd$x,y1,col="red",lwd=2)
+ }

For instance, with a Poisson regression (with a log link function) we get

> plot(db)
> reg1=glm(y~x,family=poisson(link="log"),
+ data=db)
> add_predict(reg1)

while, with a Gaussian regression (but still with a log link function), we get

> plot(db)
> reg2=glm(y~x,family=gaussian(link="log"),
+ data=db)
> add_predict(reg2)

If we just care about the expected value of our prediction, the output is more or less the same

> plot(db)
> lines(nd$x,predict(reg1,newdata=nd,
+ type="response"),col="red",lwd=1.5)
> lines(nd$x,predict(reg2,newdata=nd,
+ type="response"),col="blue",lwd=1.5)

So, indeed, forget about the (distribution) law when running a GLM. Not convinced? Consider – on the same dataset – a Poisson regression (with an identity link function this time)

> plot(db)
> reg1=glm(y~x,family=poisson(link="identity"),
+ data=db)
> add_predict(reg1)

while, with a Gaussian regression (but still with an identity link function), we get

> plot(db)
> reg2=glm(y~x,family=gaussian(link="identity"),
+ data=db)
> add_predict(reg2)

Again, if we just plot the expected value of our prediction, the output is more or less the same

> plot(db)
> lines(nd$x,predict(reg1,newdata=nd,
+ type="response"),col="red",lwd=1.5)
> lines(nd$x,predict(reg2,newdata=nd,
+ type="response"),col="blue",lwd=1.5)
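To put a number on "more or less the same", one can also compare the two vectors of predictions directly,

> head(cbind(poisson=predict(reg1,newdata=nd,type="response"),
+ gaussian=predict(reg2,newdata=nd,type="response")))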

So, clearly, the simplistic message that you should not care too much about the (distribution) law seems to be valid…


Classification with Categorical Variables (the fuzzy side)

The Gaussian and the (log) Poisson regressions share a very interesting property, \frac{1}{n}\sum_{i=1}^n\widehat{y}_i=\frac{1}{n}\sum_{i=1}^ny_i, i.e. the average predicted value is the empirical mean of our sample.

> mean(predict(lm(dist~speed,data=cars)))
[1] 42.98
> mean(cars$dist)
[1] 42.98
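The same holds for the (log) Poisson regression, as a consequence of the first-order condition associated with the intercept,

> reg=glm(dist~speed,data=cars,family=poisson(link="log"))
> mean(predict(reg,type="response"))
[1] 42.98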

One can prove that it is also the prediction for the average individual in our sample

> predict(lm(dist~speed,data=cars),
+ newdata=data.frame(speed=mean(cars$speed))) 
42.98

The geometric interpretation is that the regression line passes through the centroid,

> plot(cars)
> abline(lm(dist~speed,data=cars),col="red")
> abline(h=mean(cars$dist),col="blue")
> abline(v=mean(cars$speed),col="blue")
> points(mean(cars$speed),mean(cars$dist))

But in other cases, this is no longer true. Consider for instance a logistic regression. And to make things even more complicated, consider the case where we only have categorical explanatory variables. In that context, it is more difficult to get a prediction for the "average individual", unless we consider some fuzzy interpretation of the regression.
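To fix ideas with a quick sketch (on a standard dataset, mtcars here, not the example discussed in the rest of this post): with a logistic regression, the average predicted probability still matches the empirical frequency, again because of the first-order condition on the intercept, but the prediction for the average individual does not,

> reg=glm(am~mpg,data=mtcars,family=binomial)
> mean(predict(reg,type="response"))   # equals mean(mtcars$am)
> predict(reg,newdata=data.frame(mpg=mean(mtcars$mpg)),
+ type="response")                     # not the same, because of the nonlinearity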


Regression Models, It’s Not Only About Interpretation

Yesterday, I uploaded a post in which I tried to show that "standard" regression models were not performing badly, at least if you include (multivariate) splines to take into account joint effects and nonlinearities. I did not discuss the possibly large number of features so far (but with bootstrap procedures, it is possible to assess something related to the variable importance that people in machine learning like).

But my post was not complete: I was simply plotting the predictions obtained by the various models. And it "looked like" the regression was fine, but so were the random forest, the k-nearest neighbour and the boosting algorithms. What if we compare those models on new data?
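Here is a minimal sketch of the kind of comparison I have in mind, on simulated data (the setup below is mine, for illustration only),

library(splines)
library(randomForest)
set.seed(1)
n = 1000
df = data.frame(x1=runif(n),x2=runif(n))
df$y = sin(6*df$x1)+(df$x2-.5)^2+rnorm(n)/4
id = sample(1:n,size=n/2)               # training / validation split
reg = lm(y~bs(x1)+bs(x2),data=df[id,])  # regression with splines
rf = randomForest(y~x1+x2,data=df[id,])
mean((df$y[-id]-predict(reg,newdata=df[-id,]))^2)  # out-of-sample MSE, splines
mean((df$y[-id]-predict(rf,newdata=df[-id,]))^2)   # out-of-sample MSE, random forest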
