After a few years, I decided to put online some lecture notes I had from a graduate course I gave over one (long) day in 2014, in Leuven, entitled “an introduction to multivariate and dynamic risk measures”. The notes are now available on HAL. I just hope that they might be useful to someone…
On the robustness of LASSO
Probably the last post on the lasso before the summer break… More specifically, I was wondering about the interpretation of the graphs \lambda\mapsto\widehat{\beta}_\lambda. We use them for variable selection, but my major concern was about confidence intervals: how can we trust those lines?
As usual, a natural way is to use simulations on generated datasets. Consider for instance
Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
library(mnormt)
X = rmnorm(n,rep(0,3),Sigma)
set.seed(123)
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
  X5=runif(n), X6=exp(X[,3]),
  X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
  X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)
One can then generate several simulated datasets, and store the lasso estimates obtained on each of them
library(glmnet)
vlambda = exp(seq(-8,1,length=201))
VLASSO = list()
for(s in 1:100){
  # regenerate df as above, and set X = model.matrix(lm(Y~.,data=df))
  lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,
    lambda=vlambda,standardize=TRUE)
  VLASSO[[s]] = as.matrix(lasso$beta)
}
To visualize confidence bands, one can compute pointwise quantiles over the simulated samples
Q05 = Q95 = Qm = matrix(NA,9,201)
for(i in 1:nrow(Q05)){
  for(j in 1:ncol(Q05)){
    v = unlist(lapply(VLASSO,function(x) x[i,j]))
    Q05[i,j] = quantile(v,.05)
    Q95[i,j] = quantile(v,.95)
    Qm[i,j] = mean(v)
  }}
and get the graph
library(RColorBrewer)
colrs = c(brewer.pal(8,"Set1"))
plot(lasso,col=colrs,"lambda",ylim=c(min(Q05),max(Q95)))
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
  c(Q05[2,],rev(Q95[2,])),col=colrs[1],border=NA)
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
  c(Q05[5,],rev(Q95[5,])),col=colrs[2],border=NA)
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
  c(Q05[8,],rev(Q95[8,])),col=colrs[3],border=NA)
An alternative (more realistic with real data) is to use bootstrapped versions of the dataset
id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
lasso = glmnet(x=X[id,],y=df[id,"Y"],family="gaussian",alpha=1,
  lambda=vlambda,standardize=TRUE)
So far, it looks like it’s working very well. Now, what if we have a smaller dataset?
n = 100
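Presumably the simulation loop above is simply rerun with this smaller n (keeping Sigma, vlambda and the libraries already loaded), along the lines of

VLASSO = list()
for(s in 1:100){
  # new simulated sample of size n = 100, generated as above
  X = rmnorm(n,rep(0,3),Sigma)
  df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
    X5=runif(n),X6=exp(X[,3]),
    X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
    X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
  df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)
  X = model.matrix(lm(Y~.,data=df))
  lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,
    lambda=vlambda,standardize=TRUE)
  VLASSO[[s]] = as.matrix(lasso$beta)
}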
On newly simulated samples, we get
while the bootstrap version is
There is more uncertainty, clearly, but the conclusion is not ambiguous here.
Now, what about real data? Consider the following
chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")
tail(chicago)
   Fire   X_1 X_2    X_3
42  4.8 0.152  19 13.323
43 10.4 0.408  25 12.960
44 15.6 0.578  28 11.260
45  7.0 0.114   3 10.080
46  7.1 0.492  23 11.428
47  4.9 0.466  27 13.731
with one variable of interest (the number of fires, per inhabitant) and 3 features. We can use the bootstrap here to generate samples, and then fit a lasso regression. On the original dataset, the regression is
X = model.matrix(lm(Fire~.,data=chicago))
id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
vlambda = exp(seq(-4,2,length=201))
lasso = glmnet(x=X[id,],y=chicago[id,"Fire"],family="gaussian",alpha=1,
  lambda=vlambda,standardize=TRUE)
And if we just plot the lines \lambda\mapsto\widehat{\beta}_\lambda, we get
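The plot is presumably obtained with a call like the ones used above, e.g.

plot(lasso,col=colrs,"lambda",lwd=2)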
Now, consider bootstrap samples.
library(glmnet)
vlambda = exp(seq(-4,2,length=201))
for(s in 1:100){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=chicago[id,"Fire"],family="gaussian",alpha=1,
    lambda=vlambda,standardize=TRUE)
  plot(lasso,col=colrs,"lambda",lwd=.2,add=TRUE)
}
We get here
The interpretation here is much more difficult.
What about the order in which the variables enter the model?
library(glmnet)
vlambda = exp(seq(-4,2,length=201))
N = matrix(NA,100000,4)
for(s in 1:100000){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=chicago[id,"Fire"],
    family="gaussian",alpha=1,
    lambda=vlambda,standardize=TRUE)
  N[s,] = names(sort(apply(as.matrix(lasso$beta),
    1,function(x) sum(x!=0))))
}
The ordering obtained on the original dataset was recovered in about 57% of the scenarios,
mean(apply(N,1,function(x) paste(x,collapse="")=="(Intercept)X_1X_2X_3"))
[1] 0.5693
We can look at all the cases,
L = as.character(c(123,132,213,231,312,321))
Li = paste("(Intercept)X_",substr(L,1,1),"X_",
  substr(L,2,2),"X_",substr(L,3,3),sep="")
g = function(y) mean(apply(N,1,function(x) paste(x,collapse="")==y))
vL = unlist(lapply(Li,g))
names(vL) = L
barplot(vL,las=2,horiz=TRUE)
Standardization in LASSO
The lasso regression is based on the idea of solving \widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_1}\rbrace where \Vert\mathbf{a}\Vert_{\ell_1}=\sum_{i=1}^d |a_i| for any \mathbf{a}\in\mathbb{R}^d. In a recent post, we’ve seen computational aspects of the optimization problem. But I went quickly through the story of the \ell_1-norm. Penalizing \|\mathbf{\beta}\|_{\ell_1} implicitly assumes that the values of \beta_1 and \beta_2 are comparable; with two significant variables on very different scales, we should expect the relative magnitudes of \widehat{\beta}_1 and \widehat{\beta}_2 to be very different. This is why people say it is necessary to center and scale (i.e. standardize) the variables.
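To see concretely why the scale matters, here is a small self-contained illustration (a sketch, not part of the original analysis): the same signal enters through two covariates measured on very different scales, and without standardization the lasso penalizes them very differently.

library(glmnet)
set.seed(1)
n = 1000
z  = rnorm(n)
x1 = rnorm(n)
x2 = 100*z                # same signal as z, but on a much larger scale
y  = x1 + z + rnorm(n)    # both variables matter equally
vlambda = exp(seq(-8,1,length=201))
lasso_raw = glmnet(x=cbind(x1,x2),y=y,alpha=1,lambda=vlambda,standardize=FALSE)
lasso_std = glmnet(x=cbind(x1,x2),y=y,alpha=1,lambda=vlambda,standardize=TRUE)
# count, for each variable, at how many of the 201 lambda values the
# coefficient is non-zero: without standardization x2 (tiny coefficient,
# huge scale) is hardly penalized and is never dropped on this grid,
# while x1 is dropped for the largest values of lambda; with
# standardization the two variables behave almost identically
apply(as.matrix(lasso_raw$beta)!=0,1,sum)
apply(as.matrix(lasso_std$beta)!=0,1,sum)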
Consider the following (simulated) dataset
Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
library(mnormt)
X = rmnorm(n,rep(0,3),Sigma)
set.seed(123)
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
  X5=runif(n),X6=exp(X[,3]),
  X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
  X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)
X = model.matrix(lm(Y~.,data=df))
Use the following colors for the graphs, and the following values of \lambda
library("RColorBrewer") colrs = c(brewer.pal(8,"Set1"))[c(1,4,5,2,6,3,7,8)] vlambda=exp(seq(-8,1,length=201)) |
The first regression we can run is a non-standardized one
library(glmnet)
lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,
  lambda=vlambda,standardize=FALSE)
We can visualize the graphs of \lambda\mapsto\widehat{\beta}_\lambda
idx = which(apply(lasso$beta,1,function(x) sum(x==0))<200)
plot(lasso,col=colrs,'lambda',xlim=c(-5.5,2.3),lwd=2)
legend(1.2,.9,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,lwd=2)
At least, observe that the most significant variables are the ones that were used to generate the data.
Now, consider the case where we standardize the data
lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,
  lambda=vlambda,standardize=TRUE)
The graphs of \lambda\mapsto\widehat{\beta}_\lambda
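are presumably obtained with the same plotting calls as above (possibly with a different xlim and legend position), e.g.

idx = which(apply(lasso$beta,1,function(x) sum(x==0))<200)
plot(lasso,col=colrs,'lambda',xlim=c(-5.5,2.3),lwd=2)
legend(1.2,.9,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,lwd=2)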
The graph is (strangely) very similar to the previous one, except perhaps for the green curve. Maybe categorical variables are not similar to continuous ones… Because, somehow, standardization of categorical variables might not be natural…
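One small check of what standardization does to a dummy variable (a sketch, not in the original analysis): the 0/1 indicator derived from X7 has a standard deviation close to .5 here (the two levels being roughly balanced), and that is roughly what the variable gets divided by when standardize=TRUE, so a dummy is rescaled quite differently from a variable that already has unit variance.

# standard deviation of the 0/1 dummy coding X7 in the design matrix
# (close to .5 since the two levels are roughly balanced)
sd(X[,"X7B"])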
Why not consider some home-made function? Let us transform (linearly) all the variables in the X matrix (except the first one, which is the intercept)
Xc = X
for(j in 2:ncol(X)) Xc[,j] = (Xc[,j]-mean(Xc[,j]))/sd(Xc[,j])
Now, we can run our lasso regression on that matrix (keeping the intercept, since all the variables are centered, but not y)
lasso = glmnet(x=Xc,y=df$Y,family="gaussian",alpha=1,
  intercept=TRUE,lambda=vlambda)
The plot is now
plot(lasso,col=colrs,"lambda",xlim=c(-6.7,1.3),lwd=2)
idx = which(apply(lasso$beta,1,function(x) sum(x==0))<length(vlambda))
legend(.15,.45,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,bty="n",lwd=2)
Actually, why not also center the y variable, and also remove the intercept?
Yc = (df[,"Y"]-mean(df[,"Y"]))/sd(df[,"Y"])
lasso = glmnet(x=Xc,y=Yc,family="gaussian",alpha=1,
  intercept=FALSE,lambda=vlambda)
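The corresponding paths can be visualized as before; a minimal sketch (legend position chosen arbitrarily here) would be

plot(lasso,col=colrs,"lambda",lwd=2)
idx = which(apply(lasso$beta,1,function(x) sum(x==0))<length(vlambda))
legend("topright",legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,bty="n",lwd=2)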
Hopefully, those graphs are very consistent (and if we use them for variable selection, they suggest keeping the variables that were actually used to generate the dataset). And having both qualitative and quantitative variables is not a big deal. But still, I do not feel comfortable with the differences…
Biometrics Conference, Barcelona
This week, I will be at the XXIX International Biometric Conference, in Barcelona, to give a talk on massive collaborative data to study mortality (in an invited session, on Tuesday afternoon). Slides are available online.
Convex Regression Model
This morning during the lecture on nonlinear regression, I mentioned (very) briefly the case of convex regression. Since I forgot to mention the R code, I will publish it here. Assume that y_i=m(\mathbf{x}_i)+\varepsilon_i where m:\mathbb{R}^d\rightarrow \mathbb{R} is some convex function.
Then m is convex if and only if \forall\mathbf{x}_1,\mathbf{x}_2\in\mathbb{R}^d, \forall t\in[0,1], m(t\mathbf{x}_1+[1-t]\mathbf{x}_2) \leq tm(\mathbf{x}_1)+[1-t]m(\mathbf{x}_2). Hildreth (1954) proved that if m^\star=\underset{m \text{ convex}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-m(\mathbf{x}_i)\big)^2\right\rbrace then \mathbf{\theta}^\star=(m^\star(\mathbf{x}_1),\cdots,m^\star(\mathbf{x}_n)) is unique.
Let \mathbf{y}=\mathbf{\theta}+\mathbf{\varepsilon}; then \mathbf{\theta}^\star=\underset{\mathbf{\theta}\in \mathcal{K}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-\theta_i\big)^2\right\rbrace where \mathcal{K}=\{\mathbf{\theta}\in\mathbb{R}^n:\exists m\text{ convex},\ m(\mathbf{x}_i)=\theta_i\}. I.e. \mathbf{\theta}^\star is the projection of \mathbf{y} onto the (closed) convex cone \mathcal{K}. The projection theorem gives existence and uniqueness.
For convenience, in the application, we will consider the real-valued case, m:\mathbb{R}\rightarrow \mathbb{R}, i.e. y_i=m(x_i)+\varepsilon_i. Assume that observations are ordered x_1\leq x_2\leq\cdots \leq x_n. Here \mathcal{K}=\left\lbrace\mathbf{\theta}\in\mathbb{R}^n:\frac{\theta_2-\theta_1}{x_2-x_1}\leq \frac{\theta_3-\theta_2}{x_3-x_2}\leq \cdots \leq \frac{\theta_n-\theta_{n-1}}{x_n-x_{n-1}}\right\rbrace
Hence, we have a quadratic program with n-2 linear constraints.
Moreover, m^\star is a piecewise linear function, obtained by interpolating the consecutive pairs (x_i,\theta_i^\star).
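To make the quadratic-programming formulation concrete, here is a minimal sketch (not part of the original lecture code) that solves the program directly with the quadprog package, on the cars dataset used below; ties in x are handled by averaging the responses and weighting accordingly, and the fit should essentially match the conreg output below.

library(quadprog)
x = sort(unique(cars$speed))
w = as.numeric(table(cars$speed))           # number of observations per x value
ybar = tapply(cars$dist, cars$speed, mean)  # average response per x value
n = length(x)
# one linear constraint per interior point: the slope between (x_i,x_{i+1})
# must not exceed the slope between (x_{i+1},x_{i+2})
A = matrix(0, n-2, n)
for(i in 1:(n-2)){
  A[i,i]   =  1/(x[i+1]-x[i])
  A[i,i+1] = -1/(x[i+1]-x[i]) - 1/(x[i+2]-x[i+1])
  A[i,i+2] =  1/(x[i+2]-x[i+1])
}
# weighted least squares projection onto the cone of convex sequences
qp = solve.QP(Dmat=diag(w), dvec=w*ybar, Amat=t(A), bvec=rep(0,n-2))
plot(cars)
lines(x, qp$solution, col="blue", lty=2)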
If m is differentiable, m is convex if and only if m(\mathbf{x})+ \nabla m(\mathbf{x})^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{x},\mathbf{y}.
More generally, if m is convex, then, for any \mathbf{x}, there exists \xi_{\mathbf{x}}\in\mathbb{R}^d such that m(\mathbf{x})+ \xi_{\mathbf{x}}^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{y}.
\xi_{\mathbf{x}} is called a subgradient of m at {\mathbf{x}}, and the set of subgradients is the subdifferential \partial m(\mathbf{x})=\big\lbrace \xi\in\mathbb{R}^d : m(\mathbf{x})+ \xi^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}),\ \forall \mathbf{y}\in\mathbb{R}^d\big\rbrace
Hence, \mathbf{\theta}^\star is the solution of \text{argmin}\big\lbrace\|\mathbf{y}-\mathbf{\theta}\|^2\big\rbrace\ \text{subject to }\theta_i+\xi_i^{\text{T}}[\mathbf{x}_j-\mathbf{x}_i]\leq\theta_j,~\forall i,j, for some \xi_1,\cdots,\xi_n\in\mathbb{R}^d. Now, to do it for real, use the cobs package for constrained (B-)splines regression,
library(cobs)
To get a convex regression, use
plot(cars)
x = cars$speed
y = cars$dist
rc = conreg(x,y,convex=TRUE)
lines(rc, col = 2)
Here we can get the values of the knots
rc
Call: conreg(x = x, y = y, convex = TRUE)
Convex regression: From 19 separated x-values, using 5 inner knots,
  7, 8, 9, 20, 23.
RSS = 1356; R^2 = 0.8766; needed (5,0) iterations
and actually, if we use them in a linear-spline regression, we get the same output here
library(splines)
reg = lm(dist~bs(speed,degree=1,knots=c(4,7,8,9,20,23,25)),data=cars)
u = seq(4,25,by=.1)
v = predict(reg,newdata=data.frame(speed=u))
lines(u,v,col="green")
Let us add vertical lines for the knots
abline(v=c(4,7,8,9,20,23,25),col="grey",lty=2)
7th Rencontres R
At the end of this week, the 7th Rencontres R conference will be held in Rennes. Ewen will give a (short) presentation on Friday morning about our work in demography… The slides are online (and so is the paper).
Summer School, Big Data and Economics
This week I will be giving a lecture at the 2018 edition of the Summer School at the UB School of Economics, in Barcelona. It will be a four-day crash course, starting on Tuesday (morning).
Lecture 1: Introduction: Why Big Data brings New Questions
Lecture 2: Simulation Based Techniques & Bootstrap
Lecture 3: Loss Functions: from OLS to Quantile Regression
Lecture 4: Nonlinearities and Discontinuities
Lecture 5: Cross-Validation and Out-of-Sample diagnosis
Lecture 6: Variable and model selection
Lecture 7: New Tools for Classification Problems
Lecture 8: New Tools for Time Series & Forecasting
Some slides are available on GitHub, and, probably more interesting, I will upload an R Markdown file with all the code.