Tag Archives: LASSO

On the robustness of LASSO

Probably the last post on the lasso before the summer break… More specifically, I was wondering about the interpretation of the graphs \lambda\mapsto\widehat{\beta}_\lambda. We use them for variable selection, but my main concern was about confidence intervals: how much can we trust those lines?

As usual, a natural way is to use simulations on generated datasets. Consider for instance

set.seed(123)   # fix the seed before generating the data, for reproducibility
Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
library(mnormt)
X = rmnorm(n,rep(0,3),Sigma)
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
                X5=runif(n),
                X6=exp(X[,3]),
                X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
                X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)
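
As a quick sanity check (a small sketch), a plain OLS fit should roughly recover the generating model; since R codes the X7 dummy as X7B, we expect an intercept close to 6 (=1+5), coefficients close to 1 for X1, -1 for X4 and -5 for X7B, and values close to 0 elsewhere,

# OLS sanity check: expect ~6 (intercept), ~1 (X1), ~-1 (X4), ~-5 (X7B)
round(coef(lm(Y~.,data=df)),2)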

One can then simulate several such datasets (re-running the generation above at each iteration of the loop), and store the estimated paths

library(glmnet)
vlambda = exp(seq(-8,1,length=201))
VLASSO = list()
for(s in 1:100){
  X = model.matrix(lm(Y~.,data=df))  # (re)generate df as above at each iteration
  lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,
                 lambda=vlambda,standardize=TRUE)
  VLASSO[[s]] = as.matrix(lasso$beta)}

To visualize confidence bands, one can compute quantiles

Q05=Q95=Qm=matrix(NA,9,201)
for(i in 1:nrow(Q05)){
  for(j in 1:ncol(Q05)){
    v = unlist(lapply(VLASSO,function(x) x[i,j]))
    Q05[i,j] = quantile(v,.05)
    Q95[i,j] = quantile(v,.95)
    Qm[i,j]  = mean(v)
  }}

and get the graph

library("RColorBrewer")
colrs = c(brewer.pal(8,"Set1"))
plot(lasso,col=colrs,xvar="lambda",ylim=c(min(Q05),max(Q95)))
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
        c(Q05[2,],rev(Q95[2,])),col=colrs[1],border=NA)
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
        c(Q05[5,],rev(Q95[5,])),col=colrs[2],border=NA)
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
        c(Q05[8,],rev(Q95[8,])),col=colrs[3],border=NA)

An alternative (more realistic when working with real data) is to use bootstrapped versions of the dataset

id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
lasso = glmnet(x=X[id,],y=df[id,"Y"],family="gaussian",alpha=1,
               lambda=vlambda,standardize=TRUE)


So far, it looks like it's working very well. Now, what if we have a smaller dataset?

n = 100

On newly simulated samples, we get


while the bootstrap version is

There is more uncertainty, clearly, but the conclusion is not ambiguous here.

Now, what about real data? Consider the following

chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")
tail(chicago)
   Fire   X_1 X_2    X_3
42  4.8 0.152  19 13.323
43 10.4 0.408  25 12.960
44 15.6 0.578  28 11.260
45  7.0 0.114   3 10.080
46  7.1 0.492  23 11.428
47  4.9 0.466  27 13.731

with one variable of interest (the number of fires, per inhabitant) and 3 features. We can use the bootstrap here to generate samples, and then fit lasso regressions. On the original dataset, the regression is

X = model.matrix(lm(Fire~.,data=chicago))
library(glmnet)
vlambda = exp(seq(-4,2,length=201))
lasso = glmnet(x=X,y=chicago[,"Fire"],family="gaussian",alpha=1,
               lambda=vlambda,standardize=TRUE)

And if we just plot the lines \lambda\mapsto\widehat{\beta}_\lambda, we get

Now, consider bootstrap samples.

vlambda = exp(seq(-4,2,length=201))
for(s in 1:100){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=chicago[id,"Fire"],family="gaussian",alpha=1,
                 lambda=vlambda,standardize=TRUE)
  plot(lasso,col=colrs,xvar="lambda",lwd=.2,add=TRUE)}

We get here

The interpretation here is much more difficult.

What about the order in which the variables enter the model?

N = matrix(NA,100000,4)
vlambda = exp(seq(-4,2,length=201))
for(s in 1:100000){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=chicago[id,"Fire"],
                 family="gaussian",alpha=1,
                 lambda=vlambda,standardize=TRUE)
  # order variables by how often (over lambda) their coefficient is non-null
  N[s,] = names(sort(apply(as.matrix(lasso$beta),
                           1,function(x) sum(x!=0))))}

The ordering that was obtained on the original dataset was the same in 57% of the scenarios,

mean(apply(N,1,function(x) paste(x,collapse="")=="(Intercept)X_1X_2X_3"))
[1] 0.5693

We can look at all the cases,

L=as.character(c(123,132,213,231,312,321))
Li=paste("(Intercept)X_",substr(L,1,1),"X_",
         substr(L,2,2),"X_",substr(L,3,3),sep="")
g=function(y) mean(apply(N,1,function(x) paste(x,collapse="")==y))
vL=unlist(lapply(Li,g))
names(vL)=L
barplot(vL,las=2,horiz=TRUE)

Standardization in LASSO

The lasso regression is based on the idea of solving\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_1}\rbracewhere\Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|for any \mathbf{a}\in\mathbb{R}^d. In a recent post, we've seen computational aspects of the optimization problem. But I went quickly through the story of the \ell_1-norm. It means, somehow, that the values of \beta_1 and \beta_2 should be comparable. With two equally significant variables measured on very different scales, we should expect the orders of magnitude of \widehat{\beta}_1 and \widehat{\beta}_2 to be very different. So people say that it is therefore necessary to center and scale (i.e. standardize) the variables.
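
To illustrate (a small sketch, with two made-up covariates x1 and x2 and an arbitrary value of \lambda), consider two equally significant variables, one measured on a scale one hundred times larger than the other. With standardize=FALSE, the same \lambda shrinks the small-scale coefficient heavily, but barely touches the large-scale one (which is small in absolute value, hence cheap for the \ell_1 penalty); with standardize=TRUE, both are shrunk by the same relative amount,

# two variables carrying the same signal, on very different scales
set.seed(1)
x1 = rnorm(200)
x2 = 100*rnorm(200)
y = x1 + x2/100 + rnorm(200)
library(glmnet)
# same lambda, without and then with internal standardization
round(as.matrix(coef(glmnet(cbind(x1,x2),y,lambda=.8,standardize=FALSE))),4)
round(as.matrix(coef(glmnet(cbind(x1,x2),y,lambda=.8,standardize=TRUE))),4)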

Consider the following (simulated) dataset

set.seed(123)
Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
library(mnormt)
X = rmnorm(n,rep(0,3),Sigma)
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
                X5=runif(n),X6=exp(X[,3]),
                X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
                X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)
X = model.matrix(lm(Y~.,data=df))

Use the following colors for the graphs, and the following values of \lambda,

library("RColorBrewer")
colrs = c(brewer.pal(8,"Set1"))[c(1,4,5,2,6,3,7,8)]
vlambda=exp(seq(-8,1,length=201))

The first regression we can run is a non-standardized one

library(glmnet)
lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,lambda=vlambda,standardize=FALSE)

We can visualize the graphs of \lambda\mapsto\widehat{\beta}_\lambda

idx = which(apply(lasso$beta,1,function(x) sum(x==0))<200)
plot(lasso,col=colrs,xvar="lambda",xlim=c(-5.5,2.3),lwd=2)
legend(1.2,.9,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,lwd=2)

At least, observe that the most significant variables are the ones that were used to generate the data.

Now, consider the case where we standardize the data

lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,lambda=vlambda,standardize=TRUE)

The graphs of \lambda\mapsto\widehat{\beta}_\lambda

The graph is (strangely) very similar to the previous one, except perhaps for the green curve. Maybe categorical variables are not similar to continuous ones… somehow, standardizing a categorical variable might not be natural…
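
To see why, note that a 0/1 dummy x with \mathbb{P}(x=1)=p has standard deviation \sqrt{p(1-p)}, so standardizing it simply rescales its two levels by 1/\sqrt{p(1-p)}: the effective penalty on its coefficient then depends on the frequency of the levels (a small sketch),

# the standard deviation of a dummy is sqrt(p(1-p)), close to .5 here
x = rbinom(1000,1,.5)
c(mean(x), sd(x), sqrt(mean(x)*(1-mean(x))))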

Why not consider some home-made standardization? Let us transform (linearly) all variables in the X matrix (except the first one, which is the intercept)

Xc = X
for(j in 2:ncol(X)) Xc[,j]=(Xc[,j]-mean(Xc[,j]))/sd(Xc[,j])

Now, we can run our lasso regression on that one (keeping the intercept, since all the variables are centered, but not y)

lasso = glmnet(x=Xc,y=df$Y,family="gaussian",alpha=1,intercept=TRUE,lambda=vlambda)

The plot is now

plot(lasso,col=colrs,xvar="lambda",xlim=c(-6.7,1.3),lwd=2)
idx = which(apply(lasso$beta,1,function(x) sum(x==0))<length(vlambda))
legend(.15,.45,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,bty="n",lwd=2)

Actually, why not also standardize the y variable, and remove the intercept altogether

Yc = (df[,"Y"]-mean(df[,"Y"]))/sd(df[,"Y"])
lasso = glmnet(x=Xc,y=Yc,family="gaussian",alpha=1,intercept=FALSE,lambda=vlambda)

Hopefully, those graphs are very consistent (and if we use them for variable selection, they suggest using the variables that were actually used to generate the dataset). And mixing qualitative and quantitative variables is not a big deal. But still, I do not feel comfortable with the differences…

Classification from scratch, penalized Lasso logistic 5/8

Fifth post of our series on classification from scratch. Following the previous post on penalization using the \ell_2 norm (the so-called Ridge regression), this time we will discuss penalization based on the \ell_1 norm (the so-called Lasso regression).

First of all, one should admit that if the name stands for least absolute shrinkage and selection operator, that's actually a very cool name… Funny story: a few years before, Leo Breiman introduced the garrote technique… "The garrote eliminates some variables, shrinks others, and is relatively stable".

I guess that, somehow, the lasso is an extension of the garrote technique

Normalization of the covariates

As previously, the first step will be to consider linear transformations of all covariates x_j to get centered and scaled variables (with unit variance)

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)

Lasso Regression (from scratch)

The heuristics about Lasso regression is the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue square is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem,

LogLik = function(bbeta){
  b0 = bbeta[1]
  beta = bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) -
        (1-y)*log(1 + exp(b0+X%*%beta)))}
# for this two-dimensional visualization, X is assumed to contain only two
# covariates (e.g. X[,1:2]), and outer() needs a vectorized function
u = seq(-4,4,length=251)
v = outer(u,u,Vectorize(function(x,y) LogLik(c(1,x,y))))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
polygon(c(-1,0,1,0),c(0,1,0,-1),border="blue")

The nice thing here is that it works as a variable selection tool, since some components can be null here. That's the idea behind the following (popular) graph


(with lasso on the left, and ridge on the right).

Heuristically, the mathematical explanation is the following. Consider a simple regression y_i=x_i\beta+\varepsilon_i, with an \ell_1-penalty and an \ell_2-loss function. The optimization problem becomes\min\big\{\mathbf{y}^T\mathbf{y}-2\mathbf{y}^T\mathbf{x}\beta+\beta\mathbf{x}^T\mathbf{x}\beta+2\lambda{\color{red}{|}}\beta{\color{red}{|}}\big\}The first order condition can be written-2\mathbf{y}^T\mathbf{x}+2\mathbf{x}^T\mathbf{x}\widehat{\beta}{\color{red}{\pm} }2\lambda=0(the sign in {\color{red}{\pm}} being the sign of \widehat{\beta}).
Assume that \mathbf{y}^T\mathbf{x}>0; then the solution is
\widehat{\beta}_{\lambda}^{lasso}=\max\left\lbrace\frac{\mathbf{y}^T\mathbf{x}-\lambda}{\mathbf{x}^T\mathbf{x}},0\right\rbrace(we get a corner solution when \lambda is large).
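
This closed-form expression can be checked numerically (a small sketch, with simulated x and y and an arbitrary value of \lambda),

set.seed(1)
x = rnorm(50)
y = 2*x + rnorm(50)   # here sum(x*y)>0, as assumed above
lambda = 5
f = function(b) sum((y-x*b)^2) + 2*lambda*abs(b)   # the objective above
optimize(f,c(-10,10))$minimum          # numerical minimizer
max((sum(x*y)-lambda)/sum(x^2),0)      # closed-form solution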

Optimization routine

As in our previous post, let us start with standard (R) optimization routines, such as BFGS

PennegLogLik = function(bbeta,lambda=0){
  b0 = bbeta[1]
  beta = bbeta[-1]
  # negative log-likelihood, plus the l1 penalty on the slopes
  -sum(-y*log(1 + exp(-(b0+X%*%beta))) -
         (1-y)*log(1 + exp(b0+X%*%beta)))+lambda*sum(abs(beta))
}
opt_lasso = function(lambda){
  beta_init = lm(PRONO~.,data=myocarde)$coefficients
  logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda),
                       hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))
  logistic_opt$par[-1]
}
v_lambda = c(exp(seq(-4,2,length=61)))
est_lasso = Vectorize(opt_lasso)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_lasso[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_lasso[i,],col=colrs[i],lwd=2)


But it is very erratic… and unstable.

Using glmnet

Just to compare, with R routines dedicated to lasso, we get the following

library(glmnet)
glm_lasso = glmnet(X, y, alpha=1)
plot(glm_lasso,xvar="lambda",col=colrs,lwd=2)

plot(glm_lasso,col=colrs,lwd=2)

If we look carefully at what's in the output, we can see that there is variable selection, in the sense that some \widehat{\beta}_{j,\lambda}=0, "really null"

glmnet(X, y, alpha=1,lambda=exp(-4))$beta
7x1 sparse Matrix of class "dgCMatrix"
               s0
FRCAR  .         
INCAR  0.11005070
INSYS  0.03231929
PRDIA  .         
PAPUL  .         
PVENT -0.03138089
REPUL -0.20962611

Of course, with our optimization routine, we cannot expect to have null values

opt_lasso(.2)
         FRCAR         INCAR         INSYS         PRDIA
  0.4810999782  0.0002813658  1.9117847987 -0.3873926427
          PAPUL         PVENT        REPUL 
 -0.0863050787 -0.4144139379 -1.3849264055

So clearly, it will be necessary to spend more time today, to understand how it works…

Orthogonal covariates

Before getting into the maths, observe that when covariates are orthogonal, there is some very clear “variable” selection process,

library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord
glm_lasso = glmnet(pca_X, y, alpha=1)
plot(glm_lasso,xvar="lambda",col=colrs)
plot(glm_lasso,col=colrs)
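
One can check that the principal component scores are indeed (numerically) orthogonal, which is why the coordinate-wise updates do not interact (a quick sketch),

# crossprod(pca_X)/n is, up to numerical error, a diagonal matrix
round(crossprod(pca_X)/nrow(pca_X),4)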

Interior Point approach

The penalty is now expressed using the \ell_1 norm, so, intuitively, it should be possible to consider algorithms related to linear programming. That was actually suggested in Koh, Kim & Boyd (2007), with a matlab implementation, see http://web.stanford.edu/~boyd/l1_logreg/. If I can find some time later on, maybe I will try to recode it. But actually, it is not the technique used in most R functions.

Now, to be honest, we face a double challenge today: the first one is to understand how the lasso works for the "standard" (least-squares) problem, the second one is to see how to adapt it to the logistic case.

Standard lasso (with weights)

If we get back to the original Lasso approach, the goal was to solve\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n [y_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2+\lambda \sum_j |\beta_j|\right\rbrace(with standard notations, as in wikipedia or Jocelyn Chi's post – most of the code in this section is inspired by Jocelyn's great post).

Observe that the intercept is not subject to the penalty. The first order condition is then\frac{\partial}{\partial\beta_0}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}-\beta_0\mathbf{1}\|^2=-2(\mathbf{y}-\mathbf{X}\mathbf{\beta}-\beta_0\mathbf{1})^T\mathbf{1}=0i.e.\beta_0=\frac{1}{n}(\mathbf{y}-\mathbf{X}\mathbf{\beta})^T\mathbf{1}Assume now that KKT conditions are satisfied; since we cannot differentiate (to find points where the gradient is \mathbf{0}), we can check whether \mathbf{0} belongs to the subdifferential at the minimum.

Namely\mathbf{0}\in\partial \left(\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right)=\frac{1}{2}\nabla\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\partial(\lambda\|\mathbf{\beta}\|_{\ell_1})
For the first term, we recognize \frac{1}{2}\nabla\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2=-\mathbf{X}^T(\mathbf{y}-\mathbf{X}\mathbf{\beta})=-\mathbf{g}so that the previous equation can be written g_k\in\partial(\lambda|\beta_k|)=\begin{cases}\{+\lambda\}&\text{ if }\beta_k>0 \\ \{-\lambda\}&\text{ if }\beta_k<0 \\ [-\lambda,+\lambda]&\text{ if }\beta_k=0\end{cases}i.e. if \beta_k\neq 0, then g_k = \text{sign}(\beta_k)\cdot\lambda.

Then we write the KKT conditions for this formulation and simplify them to produce a set of rules for checking our solution

We can split \beta_j into a sum of its positive and negative parts by replacing \beta_j with \beta_j^+-\beta_j^- where \beta_j^+,\beta_j^-\geq0. Then the Lasso problem becomes-\log\mathcal{L}(\mathbf{\beta})+\lambda\sum_j(\beta_j^++\beta_j^-)with constraints \beta_j^+\geq0 and \beta_j^-\geq0.

Let \alpha_j^+,\alpha_j^- denote the Lagrange multipliers for \beta_j^+,\beta_j^-, respectively.

The Lagrangian is then L({\mathbf{\beta}}) + \lambda \sum_{j} (\beta_{j}^{+} + \beta_{j}^{-}) - \sum_{j}\alpha_{j}^{+}\beta_{j}^{+} - \sum_{j} \alpha_{j}^{-}\beta_{j}^{-}.To satisfy the stationarity condition, we take the gradient of the Lagrangian with respect to \beta_{j}^{+} and set it to zero to obtain\nabla L({\mathbf{\beta}})_{j} + \lambda - \alpha_{j}^{+} = 0We do the same with respect to \beta_{j}^{-} to obtain-\nabla L({\mathbf{\beta}})_{j}+\lambda-\alpha_{j}^{-} = 0

As discussed in Jocelyn Chi’s post, primal feasibility requires that the primal constraints be satisfied so this gives us \beta_{j}^{+} \ge 0 and \beta_{j}^{-} \ge 0. Then dual feasibility requires non-negativity of the Lagrange multipliers so we get \alpha_{j}^{+} \ge 0 and \alpha_{j}^{-} \ge 0. And finally, complementary slackness requires that \alpha_{j}^{+}\beta_{j}^{+} = 0 and \alpha_{j}^{-}\beta_{j}^{-} = 0. We can simplify these conditions to obtain a simple set of rules for checking whether or not our solution is a minimum. The following is inspired by Jocelyn Chi’s post.

From \nabla L(\beta)_{j} + \lambda - \alpha_{j}^{+} = 0, we have \nabla L(\beta)_{j} + \lambda= \alpha_{j}^{+} \ge 0. This gives us \nabla L(\beta)_{j} \ge -\lambda. From -\nabla L(\beta)_{j} + \lambda - \alpha_{j}^{-} = 0, we have -\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} \ge 0. This gives us -\nabla L(\beta)_{j} \ge -\lambda, which gives us \nabla L(\beta)_{j} \le \lambda. Hence, \lvert \nabla L(\beta)_{j} \rvert \le \lambda \; \forall j

When \beta_{j}^{+} > 0, \lambda > 0, complementary slackness requires \alpha_{j}^{+} = 0. So \nabla L(\beta)_{j} + \lambda = \alpha_{j}^{+} = 0. Hence, \nabla L(\beta)_{j} = -\lambda < 0 since \lambda > 0. At the same time, -\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} \ge 0 so 2 \lambda = \alpha_{j}^{-} > 0 since \lambda > 0. Then complementary slackness requires \beta_{j}^{-} = 0. Hence, when \beta_{j}^{+} > 0, we have \beta_{j}^{-}=0 and \nabla L(\beta)_{j} = -\lambda

Similarly, when \beta_{j}^{-} > 0, \lambda > 0, complementary slackness requires \alpha_{j}^{-}=0. So -\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} = 0 and \nabla L(\beta)_{j}=\lambda>0 since \lambda > 0. Then from \nabla L(\beta)_{j} + \lambda = \alpha_{j}^{+} \ge 0 and the above, we get 2 \lambda = \alpha_{j}^{+} > 0. Then complementary slackness requires \beta_{j}^{+} = 0. Hence, when \beta_{j}^{-} > 0, we have \beta_{j}^{+}=0 and \nabla L(\beta)_{j} = \lambda.

Since \beta_{j} = \beta_{j}^{+} - \beta_{j}^{-}, this means that when \beta_{j} > 0, \nabla L(\beta)_{j} = -\lambda. And when \beta_{j} <0, \nabla L(\beta)_{j} = \lambda. Combining this with \lvert \nabla L(\beta)_{j} \rvert \le \lambda \; \forall j, we arrive at the same convergence requirements that we obtained before using subdifferential calculus.
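
These conditions can be checked numerically at a glmnet solution (a sketch, on the least-squares lasso with the standardized design X and the 0/1 response y used above; the value of \lambda is arbitrary, and the equalities only hold up to the convergence tolerance),

lambda = .05
fit = glmnet(X,y,family="gaussian",lambda=lambda,standardize=FALSE,intercept=FALSE)
b = as.numeric(fit$beta)
g = t(X)%*%(y-X%*%b)/nrow(X)       # gradient of the (1/2n) squared loss
cbind(beta=b,grad=as.numeric(g))   # |grad|<=lambda, = ±lambda on the active set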

For convenience, introduce the soft-thresholding functionS(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }\gamma<|z|\text{ and }z>0\\z+\gamma&\text{ if }\gamma<|z|\text{ and }z<0 \\0&\text{ if }\gamma\geq|z|\end{cases}
Noticing that the optimization problem \frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}can also be written (when the columns of \mathbf{X} are orthonormal)
\min\left\lbrace\sum_{j=1}^p -\widehat{\beta}_j^{ols}\cdot\beta_j+\frac{1}{2}\beta_j^2+\lambda|\beta_j|\right\rbraceobserve that\widehat{\beta}_{j,\lambda}=S(\widehat{\beta}_j^{ols},\lambda)which is a coordinate-wise update.

Now, if we consider a (slightly) more general problem, with weights in the first part\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n{\color{red}{\omega_i}} [y_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2+\lambda \sum_j |\beta_j|\right\rbracethe coordinate-wise update becomes
\widehat{\beta}_{j,\lambda,{\color{red}{\omega}}}=S(\widehat{\beta}_j^{{\color{red}{\omega-}}ols},\lambda)
An alternative is to set\mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}
so that the optimization problem can be written, equivalently
\min\left\lbrace\frac{1}{2n}\sum_{j=1}^p \|\mathbf{r}_j-\beta_j\mathbf{x}_j\|^2+\lambda |\beta_j|\right\rbrace
hence\min\left\lbrace\frac{1}{2n}\sum_{j=1}^p \beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^T\mathbf{x}_j+\lambda |\beta_j|\right\rbrace
and one gets
\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)
or, if we develop
\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
Again, if there are weights \mathbf{\omega}=(\omega_i), the coordinate-wise update becomes
\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
The code to compute this componentwise descent is

soft_thresholding = function(x,a){
  result = numeric(length(x))
  result[which(x > a)]  = x[which(x > a)] - a
  result[which(x < -a)] = x[which(x < -a)] + a
  return(result)
}

and the code for the coordinate descent is

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))   # uniform weights
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0list = numeric(length(maxiter+1))
  beta0 = sum(y-X%*%beta)/(length(y))  # intercept = mean of the residuals
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      # partial residuals, leaving out variable k, then soft-thresholding
      r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      beta[k] = (1/sum(omega*X[,k]^2))*soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta -
               beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
    if (norm(rbind(beta0list[j],betalist[[j]]) - rbind(beta0,beta),'F') < tol) { break }
  }
  return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }
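
For instance (a sketch, with an arbitrary and fairly small value of \lambda),

est = lasso_coord_desc(X,y,beta=rep(0,ncol(X)),lambda=.005)
est$beta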

Let’s keep that one warm, and let’s get back to our initial problem.

The lasso logistic regression

The trick here is that the logistic problem can be formulated as a quadratic programming problem. Recall that the log-likelihood is here \log\mathcal{L}=\frac{1}{n}\sum_{i=1}^n y_i\cdot(\beta_0+\mathbf{x}_i^T\mathbf{\beta})-\log[1+\exp(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]
which is a concave function of the parameters. Hence, one can use a quadratic approximation of the log-likelihood – using a Taylor expansion,\log\mathcal{L}\approx\log\mathcal{L}'=-\frac{1}{2n}\sum_{i=1}^n \omega_i\cdot[z_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2(up to an additive constant)
where z_i is the working response
z_i=(\beta_0+\mathbf{x}_i^T\mathbf{\beta})+\frac{y_i-p_i}{p_i[1-p_i]}
p_i is the predictionp_i = \frac{\exp[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{1+\exp[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}and the \omega_i are the weights, \omega_i = p_i[1-p_i].
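
As a quick check of the working-response idea (a sketch, using the unpenalized logistic fit): at the optimum, a single weighted least-squares regression of z_i on the covariates, with weights \omega_i, returns exactly the logistic regression coefficients,

fit = glm(y~X,family=binomial)
p = fit$fitted.values
z = predict(fit) + (y-p)/(p*(1-p))   # working response at the fitted values
omega = p*(1-p)
cbind(coef(lm(z~X,weights=omega)),coef(fit))   # the two columns coincide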

Thus, we obtain a penalized least-squares problem, and we can use what was done previously

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0 = sum(y-X%*%beta)/(length(y))
  # working response z and weights omega, from the current fit
  p = exp(beta0*rep(1,length(y)) + X%*%beta)/(1+exp(beta0*rep(1,length(y)) + X%*%beta))
  z = beta0*rep(1,length(y)) + X%*%beta + (y-p)/(p*(1-p))
  omega = p*(1-p)/(sum((p*(1-p))))
  beta0list = numeric(length(maxiter+1))
  beta0 = sum(y-X%*%beta)/(length(y))
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      r = z - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      beta[k] = (1/sum(omega*X[,k]^2))*soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    obj[j] = (1/2)*(1/length(y))*norm(omega*(z - X%*%beta -
               beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
    # update the quadratic approximation (z and omega) at the new fit
    p = exp(beta0*rep(1,length(y)) + X%*%beta)/(1+exp(beta0*rep(1,length(y)) + X%*%beta))
    z = beta0*rep(1,length(y)) + X%*%beta + (y-p)/(p*(1-p))
    omega = p*(1-p)/(sum((p*(1-p))))
    if (norm(rbind(beta0list[j],betalist[[j]]) - rbind(beta0,beta),'F') < tol) { break }
  }
  return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }
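
For instance (a sketch; the value of \lambda is arbitrary, the two penalty scalings are not identical, and the updates may be fragile when fitted probabilities get close to 0 or 1, so one should only compare the qualitative pattern, in particular which components are null),

cd = lasso_coord_desc(X,y,beta=rep(0,ncol(X)),lambda=.01)
cbind(cd$beta,as.numeric(glmnet(X,y,family="binomial",lambda=.01)$beta))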

It looks like what we get when calling glmnet… and here, we do have null components for \lambda large enough! Really null… and that's cool, actually.

Application on our second dataset

Consider now the second dataset, with two covariates. The code to get lasso estimates is

df0 = df
df0$y = as.numeric(df$y)-1
plot_lambda = function(lambda){
m = apply(df0,2,mean)
s = apply(df0,2,sd)
for(j in 1:2) df0[,j] = (df0[,j]-m[j])/s[j]
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1,lambda=lambda)
u = seq(0,1,length=101)
p = function(x,y){
  xt = (x-m[1])/s[1]
  yt = (y-m[2])/s[2]
  predict(reg,newx=cbind(x1=xt,x2=yt),type="response")}
v = outer(u,u,p)
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)   # z is the 0/1 label vector from earlier posts
contour(u,u,v,levels = .5,add=TRUE)}

Consider some small values of \lambda, so that we only have some shrinkage of the parameters,

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=exp(-2.8))
plot_lambda(exp(-2.8))


But with a larger \lambda, there is variable selection: here \widehat{\beta}_{1,\lambda}=0

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=exp(-2.1))
plot_lambda(exp(-2.1))


(to be continued…)

Graduate Course on Advanced Tools for Econometrics (2)

This Tuesday, I will be giving the second part of the (crash) graduate course on advanced tools for econometrics. It will take place in Rennes, in the IMAPP room, and I have been told that there will be a video link with Nantes and Angers. Slides for the morning are online, as well as slides for the afternoon.

In the morning, we will talk about variable selection and penalization, and in the afternoon, it will be about changing the loss function (quantile regression).

Graduate Course on Advanced Methods in Econometrics

I will give a short graduate course for PhD students, in Rennes, on Thursday mornings, in March (2nd, 9th, 23rd and 30th). The agenda will be

  1. Nonlinear Regression Models and Smoothing Techniques

  2. Bootstrapping and Regression

  3. Penalized Regression Models and LASSO

  4. Quantile Regression and Expectiles

There will be slides available by the end of February.


Actuariat de l’Assurance Non-Vie #9

For the ninth chapter of the non-life insurance actuarial science course at ENSAE, a small catch-all before tackling the modeling of liabilities, discussing Tweedie models (collective vs. individual models), variable selection, and model selection. The slides are online (as usual, the downloadable pdf version is more complete than the one on slideshare)