
Standardization in LASSO

The lasso regression is based on the idea of solving

\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_1}\rbrace

where

\Vert\mathbf{a}\Vert_{\ell_1}=\sum_{i=1}^d |a_i|

for any \mathbf{a}\in\mathbb{R}^d. In a recent post, we've seen computational aspects of the optimization problem. But I went quickly through the story of the \ell_1-norm. Using that norm means, somehow, that the values of \beta_1 and \beta_2 should be comparable. With two significant variables measured on very different scales, we should expect the orders of magnitude of \widehat{\beta}_1 and \widehat{\beta}_2 to be very different. So people say that it is therefore necessary to center and scale (or standardize) the variables.
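To illustrate why the scale matters, here is a minimal sketch (the two covariates and the value of \lambda below are made up for the illustration, they are not part of the dataset used afterwards): both variables contribute the same amount of signal to y, but the second one lives on a much smaller scale, and without standardization it is penalized out of the model.

library(glmnet)
set.seed(1)
x1 = rnorm(100)
x2 = rnorm(100)/1000           # same information as x1, but a much smaller scale
y = x1 + 1000*x2 + rnorm(100)
fit = glmnet(cbind(x1,x2), y, standardize=FALSE, lambda=.5)
coef(fit)                      # x2 is dropped, only because of its scale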

Consider the following (simulated) dataset

set.seed(123)                          # set the seed before any simulation
library(mnormt)
Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
X = rmnorm(n,rep(0,3),Sigma)           # three correlated Gaussian covariates
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
X5=runif(n),X6=exp(X[,3]),
X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)   # only X1, X4 and X7 are used
X = model.matrix(lm(Y~.,data=df))              # design matrix, with dummy variables

Use the following colors for the graphs, and the following grid of values for \lambda

library("RColorBrewer")
colrs = c(brewer.pal(8,"Set1"))[c(1,4,5,2,6,3,7,8)]
vlambda=exp(seq(-8,1,length=201))   # 201 values of lambda, on a log scale

The first regression we can run is a non-standardized one

library(glmnet)
lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,lambda=vlambda,standardize=FALSE)

We can visualize the graphs of \lambda\mapsto\widehat{\beta}_\lambda

idx = which(apply(lasso$beta,1,function(x) sum(x==0))<200)   # variables that enter the path
plot(lasso,col=colrs,"lambda",xlim=c(-5.5,2.3),lwd=2)
legend(1.2,.9,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,lwd=2)

At least, observe that the most significant variables are the ones that were used to generate the data.
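To check this, one can extract the estimated coefficients for a given penalty along that path (the value of \lambda below is arbitrary, any moderate value within vlambda would do)

coef(lasso, s=exp(-2))   # coefficients of the fit for lambda = exp(-2)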

Now, consider the case where we standardize the data

lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,lambda=vlambda,standardize=TRUE)

The graphs of \lambda\mapsto\widehat{\beta}_\lambda can be obtained with the same plotting code as before.
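For completeness, here are the plotting commands again (the axis limits and the legend position are simply copied from the previous plot, and the idx vector computed above is reused; both might need a small adjustment for this fit)

plot(lasso,col=colrs,"lambda",xlim=c(-5.5,2.3),lwd=2)
legend(1.2,.9,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,lwd=2)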

The graph is (strangely) very similar to the previous one, except perhaps for the green curve. Maybe categorical variables are not similar to continuous ones… Because, somehow, standardization of categorical variables might not be natural…

Why not consider some home-made standardization? Let us transform (linearly) all variables in the X matrix (except the first one, which is the intercept)

Xc = X
for(j in 2:ncol(X)) Xc[,j]=(Xc[,j]-mean(Xc[,j]))/sd(Xc[,j])   # center and scale each column, except the intercept
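The same transformation can also be written with the scale() function (a minor variant of the loop above)

Xc = cbind(X[,1],scale(X[,-1]))   # keep the intercept column untouched
colnames(Xc) = colnames(X)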

Now, we can run our lasso regression on that matrix (keeping the intercept, since all the covariates are centered, but y is not)

lasso = glmnet(x=Xc,y=df$Y,family="gaussian",alpha=1,intercept=TRUE,lambda=vlambda)

The plot is now

plot(lasso,col=colrs,"lambda",xlim=c(-6.7,1.3),lwd=2)
idx = which(apply(lasso$beta,1,function(x) sum(x==0))<length(vlambda))
legend(.15,.45,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,bty="n",lwd=2)

Actually, why not also center and scale the y variable, and remove the intercept

Yc = (df[,"Y"]-mean(df[,"Y"]))/sd(df[,"Y"])
lasso = glmnet(x=Xc,y=Yc,family="gaussian",alpha=1,intercept=FALSE,lambda=vlambda)
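Again, the plot is obtained with the same commands as before (here the legend position is just a guess, it may need adjusting)

plot(lasso,col=colrs,"lambda",lwd=2)
idx = which(apply(lasso$beta,1,function(x) sum(x==0))<length(vlambda))
legend("topright",legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,bty="n",lwd=2)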

Hopefully, those graphs are very consistent (and if we use them for variable selection, they suggest using the variables that were actually used to generate the dataset). And having qualitative and quantitative variables together is not a big deal. But still, I do not feel comfortable with the differences…

“standardized” version of the maximum

For the first homework, there was a tricky question in problem 29, chapter 5. Here M_n=\max\lbrace X_1,\dots,X_n\rbrace is the maximum of n random variables X_1,\dots,X_n, i.i.d. uniformly distributed on the unit interval [0,1]. I gave a hint last week about the cumulative distribution function of the maximum, i.e.

F_{M_n}(x)=\mathbb{P}(M_n\leq x)

is equal to the probability that all of them are smaller than x,

\mathbb{P}(M_n\leq x)=\mathbb{P}(X_1\leq x,\dots,X_n\leq x)

Then, we use independence to obtain that this probability is a product of equal quantities, since all the random variables are identically distributed, i.e.

\mathbb{P}(M_n\leq x)=\left[\mathbb{P}(X_1\leq x)\right]^n=x^n,\quad x\in[0,1]
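As a quick sanity check, that x^n formula can be compared with a small simulation (the values of n and x, and the number of simulations below, are arbitrary)

n = 10; nsim = 1e5
M = apply(matrix(runif(n*nsim),nsim,n),1,max)   # nsim simulated maxima
mean(M <= .9)   # empirical probability
.9^n            # theoretical value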

Then, the exercise asks the following: find a standardized version of the maximum so that the cumulative distribution function of that standardized version has a (non-degenerate) limit. A hint is given in the answers, at the end of the book.

Actually, the question is not that simple (see here for the history of that question).
What I said during the course is that if X is a random variable with finite variance, then

Z=\frac{X-\mathbb{E}[X]}{\sqrt{\text{Var}(X)}}

is a standardized (or normalized) version of X, in the sense that it is centered, i.e.

\mathbb{E}[Z]=0

and with a unit variance, i.e.

\text{Var}(Z)=1

This is the kind of standardization (or normalization) that is used in the central limit theorem, i.e. it is interesting when we study the core of the distribution (i.e. the mean).
Here we focus on the maximum (not on the expected value). Note that here

\mathbb{E}[M_n]=\frac{n}{n+1}

while

\text{Var}(M_n)=\frac{n}{(n+1)^2(n+2)}

(up to some typing mistakes). Thus, our previous standardization would be

Z_n=\frac{M_n-\frac{n}{n+1}}{\sqrt{\frac{n}{(n+1)^2(n+2)}}}

that can be simplified as

Z_n=\frac{(n+1)\sqrt{n+2}}{\sqrt{n}}\left(M_n-\frac{n}{n+1}\right)

Hence, that random variable can be approximated by

Z_n\approx n\left(M_n-\frac{n}{n+1}\right)

since \frac{(n+1)\sqrt{n+2}}{\sqrt{n}}\sim n as n\rightarrow\infty. Here, it is then possible to get

\mathbb{P}\left(n\left(M_n-\frac{n}{n+1}\right)\leq z\right)=\left(\frac{n}{n+1}+\frac{z}{n}\right)^n\rightarrow e^{z-1},\quad z\leq 1

since if x_n\rightarrow x, then \left(1+\frac{x_n}{n}\right)^n\rightarrow e^{x} (see the proof of the central limit theorem we got a few days ago).
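That e^{z-1} limit can also be checked numerically (a small simulation sketch; the choice n=100, the number of simulations and the grid of values of z are arbitrary)

n = 100; nsim = 1e5
M = apply(matrix(runif(n*nsim),nsim,n),1,max)
Z = (M-n/(n+1))/sqrt(n/((n+1)^2*(n+2)))   # standardized maxima
z = seq(-4,1,by=.5)
cbind(z, empirical=sapply(z,function(u) mean(Z<=u)), limit=exp(z-1))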
But this is usually not the way we work with maxima. Actually, Fréchet, Fisher, Tippett and Gnedenko proved that the appropriate standardization to work with maxima is to consider

\frac{M_n-x_F}{x_F-F^{-1}\left(1-\frac{1}{n}\right)}

where F is the cumulative distribution function of the X_i's (the random variables used to build up the maximum), and x_F is the upper bound of their support. This works since the X_i's have a finite support, i.e. the X_i's are bounded, with an upper limit (here x_F=1).
Note that

x_F-F^{-1}\left(1-\frac{1}{n}\right)\approx\frac{1}{n\,f(x_F)}

assuming that the density f associated with F exists (and is positive at x_F). Hence, here the standardization becomes

\frac{M_n-1}{1/n}=n(M_n-1)

which is exactly the one that John Rice is suggesting… And the proper motivation comes from extreme value theory, but it is a bit far away from what we shall see in that course…
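And one can check numerically that n(M_n-1) does have a (non-degenerate) limiting distribution, consistent with the e^{z-1} limit obtained above (again, a small simulation sketch with arbitrary n, number of simulations and grid)

n = 100; nsim = 1e5
M = apply(matrix(runif(n*nsim),nsim,n),1,max)
x = seq(-3,0,by=.5)
cbind(x, empirical=sapply(x,function(u) mean(n*(M-1)<=u)), limit=exp(x))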