Lasso Regression (home made)

Again, this post is related to my MAT7381 course, where we will see that it is actually possible to write our own code to compute Lasso regression,
\min\left\lbrace\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right\rbrace
We have to define the soft-thresholding function
S(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }\gamma<|z|\text{ and }z>0\\z+\gamma&\text{ if }\gamma<|z|\text{ and }z<0 \\0&\text{ if }\gamma\geq|z|\end{cases}
The R function would be

soft_thresholding = function(x,a){
  sign(x) * pmax(abs(x)-a,0)
}
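
As a quick sanity check, we can apply it to a grid of values, with \gamma=1,

soft_thresholding(seq(-2,2,by=1),1)
[1] -1  0  0  0  1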

To solve our optimization problem, set
\mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}
so that the optimization problem in \beta_j can be written, equivalently,
\min_{\beta_j}\left\lbrace\frac{1}{2n}\sum_{i=1}^n [r_{i,j}-\beta_j x_{i,j}]^2+\lambda |\beta_j|\right\rbrace
hence, dropping terms that do not depend on \beta_j,
\min_{\beta_j}\left\lbrace\frac{1}{2n}\left[\beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^\top\mathbf{x}_j\right]+\lambda |\beta_j|\right\rbrace
and one gets
\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^\top\mathbf{x}_j,n\lambda)
or, if we develop,
\beta_{j,\lambda} = \frac{1}{\sum_i x_{i,j}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
Again, if the observations are weighted, with weights \mathbf{\omega}=(\omega_i), the coordinate-wise update becomes
\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
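
As a sanity check on this closed-form update (with unit weights), we can compare it with a brute-force minimization of the coordinate-wise objective, on simulated data; the vectors r and x, and the value of \lambda below, are just for the illustration,

set.seed(123)
n = 50
x = rnorm(n)
r = rnorm(n)
lambda = .01
# closed form: beta = S(r'x , n*lambda) / ||x||^2
beta_cf = soft_thresholding(sum(r*x), n*lambda)/sum(x^2)
# brute force: minimize (1/(2n)) * sum (r_i - beta*x_i)^2 + lambda*|beta|
f = function(b) sum((r-b*x)^2)/(2*n) + lambda*abs(b)
beta_num = optimize(f, interval=c(-10,10))$minimum
c(beta_cf, beta_num)

The two values should agree, up to the numerical tolerance of optimize.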
The code to compute this componentwise descent is

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))
  obj = numeric(maxiter+1)
  betalist = vector("list",maxiter+1)
  betalist[[1]] = beta
  beta0list = numeric(maxiter+1)
  beta0 = sum(y-X%*%beta)/(length(y))
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      # partial residuals, removing the contribution of variable k
      r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      # coordinate-wise update, by soft-thresholding
      beta[k] = (1/sum(omega*X[,k]^2))*
        soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    # value of the penalized objective, at the current iterate
    obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - 
           beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
    # stop when the parameters (almost) do not move anymore
    if (norm(rbind(beta0list[j],betalist[[j]]) - 
             rbind(beta0,beta),'F') < tol) { break } 
  } 
  return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }

For instance, consider the following (simple) dataset, with three covariates

chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")

that we can “normalize” (or “standardize”)

X = model.matrix(lm(Fire~.,data=chicago))[,2:4]
for(j in 1:3) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
y = chicago$Fire
y = (y-mean(y))/sd(y)

To initialize the algorithm, use the OLS estimate

beta_init = lm(Fire~0+.,data=chicago)$coef

For instance

lasso_coord_desc(X,y,beta_init,lambda=.001)
$obj
[1] 0.001014426 0.001008009 0.001009558 0.001011094 0.001011119 0.001011119
 
$beta
          [,1]
X_1  0.0000000
X_2  0.3836087
X_3 -0.5026137
 
$intercept
[1] 2.060999e-16
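
As a comparison, we can check against glmnet; this is only a sketch, since glmnet uses slightly different conventions (for the intercept, the standardization, and the scaling of \lambda), so coefficients should be close, but not necessarily identical,

library(glmnet)
fit = glmnet(X, y, alpha=1, lambda=.001, standardize=FALSE)
coef(fit)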

and we can get the standard Lasso plot (the regularization path of the coefficients) by looping over a grid of values of \lambda, as sketched below.
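
A minimal sketch of that loop (the grid of \lambda's is arbitrary, and each fit is warm-started from the previous solution) would be

lambda_grid = exp(seq(log(.001),log(2),length=25))
path = matrix(NA,length(lambda_grid),ncol(X))
b = beta_init
for(i in 1:length(lambda_grid)){
  fit = lasso_coord_desc(X,y,b,lambda=lambda_grid[i])
  path[i,] = fit$beta
  b = fit$beta
}
matplot(lambda_grid,path,type="l",lty=1,log="x",
        xlab=expression(lambda),ylab="coefficients")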

Quantile Regression (home made, part 2)

A few months ago, I posted a note with some home made codes for quantile regression… there was something odd in the output, but it was because there was a (small) mathematical problem in my equations. So, since I should teach those tomorrow, let me fix them.

Median

Consider a sample \{y_1,\cdots,y_n\}. To compute the median, solve
\min_\mu \left\lbrace\sum_{i=1}^n|y_i-\mu|\right\rbrace
which can be done using linear programming techniques. More precisely, this problem is equivalent to
\min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^na_i+b_i\right\rbrace
with a_i,b_i\geq 0 and y_i-\mu=a_i-b_i, \forall i=1,\cdots,n. Heuristically, the idea is to write y_i=\mu+\varepsilon_i, and then define the a_i's and b_i's so that \varepsilon_i=a_i-b_i and |\varepsilon_i|=a_i+b_i, i.e.
a_i=(\varepsilon_i)_+=\max\lbrace0,\varepsilon_i\rbrace=|\varepsilon_i|\cdot\boldsymbol{1}_{\varepsilon_i>0}
and
b_i=(-\varepsilon_i)_+=\max\lbrace0,-\varepsilon_i\rbrace=|\varepsilon_i|\cdot\boldsymbol{1}_{\varepsilon_i<0}
which denote respectively the positive and the negative parts of \varepsilon_i.
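
To see this decomposition in action, here is a two-line check in R, on arbitrary residuals \varepsilon_i,

eps = c(1.2,-.5,.3,-2.1)
a = pmax(eps,0) ; b = pmax(-eps,0)
all(eps == a-b, abs(eps) == a+b)
[1] TRUE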

Unfortunately (that was the error in my previous post), the standard form of linear programs is
\min_{\mathbf{z}}\left\lbrace\boldsymbol{c}^\top\mathbf{z}\right\rbrace\text{ s.t. }\boldsymbol{A}\mathbf{z}=\boldsymbol{b},\ \mathbf{z}\geq\boldsymbol{0}
With the a_i's and b_i's above, we're not far away. Except that we have \mu\in\mathbb{R}, while it should be nonnegative. So, similarly, set \mu=\mu^+-\mu^- where \mu^+=(\mu)_+ and \mu^-=(-\mu)_+.

Thus, let
\mathbf{z}=\big(\mu^+,\mu^-,\boldsymbol{a}^\top,\boldsymbol{b}^\top\big)^\top\in\mathbb{R}_+^{2n+2}
and then write the constraint as \boldsymbol{A}\mathbf{z}=\boldsymbol{b} with \boldsymbol{b}=\boldsymbol{y} and
\boldsymbol{A}=\big[\boldsymbol{1}_n\ \ {-\boldsymbol{1}_n}\ \ \mathbb{I}_n\ \ {-\mathbb{I}_n}\big]
And for the objective function
\boldsymbol{c}=\big(0,0,\boldsymbol{1}_n^\top,\boldsymbol{1}_n^\top\big)^\top\in\mathbb{R}_+^{2n+2}

To illustrate, consider a sample from a lognormal distribution,

n = 101 
set.seed(1)
y = rlnorm(n)
median(y)
[1] 1.077415

For the optimization problem, use the matrix form, with n equality constraints (plus the nonnegativity conditions) and 2n+2 parameters,

library(lpSolve) 
X = rep(1,n) 
A = cbind(X, -X, diag(n), -diag(n))
b = y
c = c(rep(0,2), rep(1,n),rep(1,n))
equal_type = rep("=", n) 
r = lp("min", c,A,equal_type,b)
head(r$solution,1)
[1] 1.077415
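
Strictly speaking, since \mu=\mu^+-\mu^-, the estimate is the difference between the first two components of the solution; here the sample is positive, so \mu^- is simply zero,

r$solution[1] - r$solution[2]
[1] 1.077415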

It looks like it’s working well…

Quantile

Of course, we can adapt our previous code for quantiles

tau = .3
quantile(y,tau)
      30% 
0.6741586

The linear program is now
\min_{q^+,q^-,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbrace
with a_i,b_i,q^+,q^-\geq 0 and y_i=q^+-q^-+a_i-b_i, \forall i=1,\cdots,n. The R code is now

c = c(rep(0,2), tau*rep(1,n),(1-tau)*rep(1,n))
r = lp("min", c,A,equal_type,b)
head(r$solution,1)
[1] 0.6741586
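
And we can wrap this in a small function, to compare several probability levels at once with R's quantile function (the grid of \tau's below is arbitrary),

lp_quantile = function(tau){
  c_obj = c(rep(0,2), tau*rep(1,n), (1-tau)*rep(1,n))
  r = lp("min", c_obj, A, equal_type, b)
  r$solution[1] - r$solution[2]
}
taus = c(.1,.25,.5,.75,.9)
cbind(lp = sapply(taus, lp_quantile), base = quantile(y, taus))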

So far so good…

Quantile Regression

Consider the following dataset, with rents of flats in a major German city, as a function of the surface, the year of construction, etc.

base=read.table("http://freakonometrics.free.fr/rent98_00.txt",header=TRUE)

The linear program for the quantile regression is now
\min_{\boldsymbol{\beta}^+,\boldsymbol{\beta}^-,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbrace
with a_i,b_i\geq 0 and y_i=\boldsymbol{x}_i^\top[\boldsymbol{\beta}^+-\boldsymbol{\beta}^-]+a_i-b_i, \forall i=1,\cdots,n, and \beta_j^+,\beta_j^-\geq 0, \forall j=0,\cdots,k. So use here

require(lpSolve) 
tau = .3
n=nrow(base)
X = cbind( 1, base$area)
y = base$rent_euro
K = ncol(X)
N = nrow(X)
A = cbind(X,-X,diag(N),-diag(N))
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
b = base$rent_euro
const_type = rep("=",N)
r = lp("min",c,A,const_type,b)
beta = r$solution[1:K] - r$solution[(1:K)+K]
beta
[1] 148.946864   3.289674

Of course, we can use the rq function from the quantreg package to fit that model

library(quantreg)
rq(rent_euro~area, tau=tau, data=base)
Coefficients:
(Intercept)        area 
 148.946864    3.289674

Here again, it seems to work quite well. We can use a different probability level, of course, and get a plot

plot(base$area,base$rent_euro,xlab=expression(paste("surface (",m^2,")")),
     ylab="rent (euros/month)",col=rgb(0,0,1,.4),cex=.5)
sf = 0:250
yr = beta[1] + beta[2]*sf
lines(sf,yr,lwd=2,col="blue")
tau = .9
# the objective depends on tau, so it must be recomputed
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
r = lp("min",c,A,const_type,b)
beta = r$solution[1:K] - r$solution[(1:K)+K]
beta
[1] 121.815505   7.865536
yr = beta[1] + beta[2]*sf
lines(sf,yr,lwd=2,col="blue")
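
And we can loop over a whole grid of probability levels (the grid below is arbitrary) to draw several quantile regression lines at once,

for(tau in c(.1,.25,.5,.75,.9)){
  c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
  r = lp("min",c,A,const_type,b)
  beta = r$solution[1:K] - r$solution[(1:K)+K]
  lines(sf, beta[1]+beta[2]*sf, lwd=2, col="red")
}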

And we can adapt the latter to multiple regression, of course,

X = cbind(1,base$area,base$yearc)
K = ncol(X)
N = nrow(X)
A = cbind(X,-X,diag(N),-diag(N))
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
b = base$rent_euro
const_type = rep("=",N)
r = lp("min",c,A,const_type,b)
beta = r$solution[1:K] - r$solution[(1:K)+K]
beta
[1] -5542.503252     3.978135     2.887234

to be compared with

library(quantreg)
rq(rent_euro~ area + yearc, tau=tau, data=base)
 
Coefficients:
 (Intercept)         area        yearc 
-5542.503252     3.978135     2.887234 
 
Degrees of freedom: 4571 total; 4568 residual