# Interpretability and explainability of predictive models

In 400 AD, in his Confessiones, Augustine wrote

quid est ergo tempus? si nemo ex me quaerat, scio; si quaerenti explicare velim, nescio

that can be translated as

What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.

To go a little further (because often, if we are asked to explain, we have some ideas), in A Study in Scarlet by Sir Arthur Conan Doyle, published in 1887, we have the following exchange, between Sherlock Holmes and Doctor Watson

– “I wonder what that fellow is looking for?” I asked, pointing to a stalwart, plainly-dressed individual who was walking slowly down the other side of the street, looking anxiously at the numbers. He had a large blue envelope in his hand, and was evidently the bearer of a message.
– “You mean the retired sergeant of Marines,” said Sherlock Holmes.

then, as it turns out that the person is indeed a sergeant in the navy (as is another character in the story, someone named Arthur Charpentier), Dr. Watson asks him for an explanation: he wants to know how Holmes arrived at this conclusion

– “How in the world did you deduce that?” I asked.
“Deduce what?” said he, petulantly.
“Why, that he was a retired sergeant of Marines.”
“I have no time for trifles,” he answered, brusquely; then with a smile, “Excuse my rudeness. You broke the thread of my thoughts; but perhaps it is as well. So you actually were not able to see that that man was a sergeant of Marines?”
“No, indeed.”
– “It was easier to know it than to explain why I knew it. If you were asked to prove that two and two made four, you might find some difficulty, and yet you are quite sure of the fact. Even across the street I could see a great blue anchor tattooed on the back of the fellow’s hand. That smacked of the sea. He had a military carriage, however, and regulation side whiskers. There we have the marine. He was a man with some amount of self-importance and a certain air of command. You must have observed the way in which he held his head and swung his cane. A steady, respectable, middle-aged man, too, on the face of him – all facts which led me to believe that he had been a sergeant.”

(to be honest, it is Liu Cixin who talks about it in The Three-Body Problem). For the record, this is the first story of the Holmes-Watson couple, which introduces Sherlock Holmes’ working method. For those who are familiar with the short stories, this narrative approach will be widely used thereafter: Sherlock Holmes states a fact, Dr. Watson is astonished and asks for an explanation, and Sherlock Holmes explains, point by point, how he arrived at this conclusion. This is a bit like the approach we try to implement when we build a predictive model: on the basis of the Titanic data, if we predict that such and such a person will die, and that such and such a person will survive, we want to understand why the model arrives at this conclusion.

Again, this post is related to my MAT7381 course, where we will see that it is actually possible to write our own code to compute the Lasso regression, $$\min\left\lbrace\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right\rbrace$$We have to define the soft-thresholding function$$S(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }\gamma<|z|\text{ and }z>0\\z+\gamma&\text{ if }\gamma<|z|\text{ and }z<0 \\0&\text{ if }\gamma\geq|z|\end{cases}$$The R function would be

soft_thresholding = function(x,a){ sign(x) * pmax(abs(x)-a,0) }
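As a quick check (with arbitrary values), components smaller than a in absolute value are set exactly to zero, while the others are shrunk towards zero by a,

soft_thresholding(c(-3,-.5,0,.5,3), 1)
[1] -2  0  0  0  2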

To solve our optimization problem, set$$\mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}$$
so that the optimization problem can be written, equivalently
$$\min\left\lbrace\frac{1}{2n}\sum_{j=1}^p [\mathbf{r}_j-\beta_j\mathbf{x}_j]^2+\lambda |\beta_j|\right\rbrace$$
hence$$\min\left\lbrace\frac{1}{2n}\sum_{j=1}^p \beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^T\mathbf{x}_j+\lambda |\beta_j|\right\rbrace$$
and one gets
$$\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)$$
or, if we develop
$$\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)$$
Again, if there are weights $\mathbf{\omega}=(\omega_i)$, the coordinate-wise update becomes
$$\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)$$
The code to compute this componentwise descent is

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0list = numeric(length(maxiter+1))
  beta0 = sum(y-X%*%beta)/(length(y))
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      beta[k] = (1/sum(omega*X[,k]^2)) *
        soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - beta0*rep(1,length(y))),'F')^2 +
      lambda*sum(abs(beta))
    if (norm(rbind(beta0list[j],betalist[[j]]) - rbind(beta0,beta),'F') < tol) { break }
  }
  return(list(obj=obj[1:j],beta=beta,intercept=beta0))
}
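As a quick sanity check (not from the original post, the values are purely illustrative), we can run the function on a small simulated dataset where only the first and third covariates matter: the second coefficient should be set exactly to zero, while the other two are shrunk but keep the correct signs,

set.seed(1)
n = 200
Xs = matrix(rnorm(n*3),n,3)
ys = Xs %*% c(1,0,-2) + rnorm(n)
ys = (ys-mean(ys))/sd(ys)
# coordinate descent starting from beta = 0
lasso_coord_desc(Xs,ys,beta=rep(0,3),lambda=.001)$beta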

For instance, consider the following (simple) dataset, with three covariates

chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")

that we can “normalize” (or “standardize”)

X = model.matrix(lm(Fire~.,data=chicago))[,2:4]
for(j in 1:3) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
y = chicago$Fire
y = (y-mean(y))/sd(y)

To initialize the algorithm, we can use the OLS estimates

beta_init = lm(Fire~0+.,data=chicago)$coef

For instance

lasso_coord_desc(X,y,beta_init,lambda=.001)
$obj
[1] 0.001014426 0.001008009 0.001009558 0.001011094 0.001011119 0.001011119
$beta
          [,1]
X_1  0.0000000
X_2  0.3836087
X_3 -0.5026137
$intercept
[1] 2.060999e-16

and we can get the standard lasso path plot by looping over values of $\lambda$.

# Foundations of Machine Learning, part 4

This post is the eighth one of our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 7 is online here.

## Penalization and variable selection

One important concept in econometrics is Ockham’s razor – also known as the law of parsimony (lex parsimoniae) – which can be related to abductive reasoning.

Akaike’s criterion was based on a penalized likelihood, taking into account the complexity of the model (the number of explanatory variables retained). If, in econometrics, it is customary to maximize the likelihood (to build an asymptotically unbiased estimator) and to judge the quality of the model ex post by penalizing the likelihood, the strategy here will be to penalize ex ante in the objective function, even if it means building a biased estimator. Typically, we will build: $$(\widehat{\beta}_{0,\lambda},\widehat{\beta}_{\lambda})=\text{argmin}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}^T\beta)+\lambda \text{ penalization}( \boldsymbol{\beta})\right\rbrace, ~~~(11)$$where the penalty function will often be a norm $\|\cdot\|$ chosen a priori, and a penalty parameter $\lambda$ (we find, in a way, the distinction between AIC and BIC if the penalty function is the complexity of the model – the number of explanatory variables retained). In the case of the $\ell_2$ norm, we obtain the ridge estimator, and for the $\ell_1$ norm, the lasso estimator (“Least Absolute Shrinkage and Selection Operator”).

The penalty previously used involved the number of degrees of freedom of the model, so it may seem surprising to use $\|\beta\|_{\ell_2}$ as in the ridge regression. However, we can envisage a Bayesian view of this penalty. It should be recalled that in a Bayesian model: $$\underbrace{\mathbb{P}[\boldsymbol{\theta}\vert\boldsymbol{y}]}_{\text{posterior}} \propto \underbrace{\mathbb{P}[\boldsymbol{y}\vert\boldsymbol{\theta}]}_{\text{likelihood}} \cdot \underbrace{\mathbb{P}[\boldsymbol{\theta}]}_{\text{prior}}$$or$$\log\mathbb{P}[\boldsymbol{\theta}\vert\boldsymbol{y}]= \underbrace{\log \mathbb{P}[\boldsymbol{y}\vert\boldsymbol{\theta}]}_{\text{log likelihood}} + \underbrace{\log\mathbb{P}[\boldsymbol{\theta}]}_{\text{penalty}}$$In a Gaussian linear model, if we assume that the prior distribution of $\theta$ is a centred Gaussian, we find a penalty based on a quadratic form of the components of $\theta$.

Before going back in detail to these two estimators, obtained using the $\ell_1$ or $\ell_2$ norm, let us return for a moment to a very similar problem: the best choice of explanatory variables. Classically (and this will be even more true in large dimension), we can have a large number of explanatory variables, $p$, but many are just noise, in the sense that $\beta_j=0$ for a large number of $j$’s. Let $s$ be the number of (really) relevant covariates, $s=\#S$, with $$S=\{j=1,\cdots,p:\beta_j\neq 0\}$$If we note $\mathbf{X}_S$ the matrix composed of the relevant variables (in columns), then we assume that the real model is of the form $y=\mathbf{x}_S^T \beta_S+\varepsilon$.
Intuitively, an interesting estimator would then be $\widehat{\beta}_S=[\mathbf{X}_S^T \mathbf{X}_S ]^{-1} \mathbf{X}_S^T \mathbf{y}$, but this estimator is only theoretical because the set $S$ is unknown here. This estimator can actually be seen as the oracle estimator mentioned above. One may then be tempted to solve $$(\widehat{\beta}_{0,s},\widehat{\beta}_{s})=\underset{\beta_S\in\mathbb{R}^s}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}^T\beta_S)\right\rbrace,\text{ s.t. } \# {S}=s$$This problem was introduced by Foster & George (1994) using the $\ell_0$ notation. More precisely, let us define here the following three norms, where $\mathbf{a}\in\mathbb{R}^d$, $$\Vert\boldsymbol{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~ \Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|~~\text{ and }~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}$$

Table 1: Constrained optimization and regularization.

Let us consider the optimization problems in Table 1. If we consider the classical problem where the quadratic norm is used for $\ell$, the two problems of equation $(\ell_1)$ of Table 1 are equivalent, in the sense that, for any solution $(\beta^\star,s)$ to the left problem, there is $\lambda^\star$ such that $(\beta^\star,\lambda^\star)$ is a solution of the right problem; and vice versa. The result is also true for problems $(\ell_2)$. These are indeed convex problems. On the other hand, the two problems $(\ell_0)$ are not equivalent: if, for $(\beta^\star,\lambda^\star)$ solution of the right problem, there is $s^\star$ such that $\beta^\star$ is a solution of the left problem, the reverse is not true. More generally, if you want to use an $\ell_p$ norm, sparsity is obtained if $p\leq 1$, whereas you need $p\geq1$ for the optimization program to be convex.

One may be tempted to solve the penalized program $(\ell_0)$ directly, as suggested by Foster & George (1994). Numerically, it is a complex combinatorial problem in large dimension (Natarajan (1995) notes that it is an NP-hard problem), but it is possible to show that if $\lambda\sim\sigma^2 \log(p)$, then $$\mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \leq \underbrace{\mathbb{E}\big([\mathbf{x}_{ {S}}^T\widehat{\beta}_{{S}}-\mathbf{x}^T \beta_0]^2\big)}_{=\sigma^2 \#{S}}\cdot \big(4\log p+2+o(1)\big)$$Observe that in this case $$\widehat{\beta}_{\lambda,j}^{\text{sub}} = \left\lbrace\begin{array}{l}0 \text{ if } j\notin{S}_\lambda(\beta)\\ \widehat{\beta}_{j}^{\text{ols}} \text{ if } j\in{S}_\lambda(\beta),\end{array}\right.$$where $S_\lambda (\beta)$ refers to all non-zero coordinates when solving $(\ell_0)$.

The problem $(\ell_2)$ is strictly convex if $\ell$ is the quadratic norm; in other words, the ridge estimator is always well defined, with, in addition, an explicit form, $$\widehat{ {\beta}}_\lambda^{\text{ ridge}}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{y}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}(\mathbf{X}^T\mathbf{X})\widehat{ {\beta}}^{\text{ ols}}$$Therefore, it can be deduced that $$\text{bias}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=-\lambda[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}~\widehat{ {\beta}}^{\text{ ols}}$$and$$\text{Var}[\widehat{\beta}_\lambda^{\text{ ridge}}]=\sigma^2[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{X}[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}$$With a matrix of orthonormal explanatory variables (i.e.
$\mathbf{X}^T \mathbf{X}=\mathbb{I}$), the expressions can be simplified, $$\text{bias}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=-\frac{\lambda}{1+\lambda}~\widehat{ {\beta}}^{\text{ ols}}\text{ and }\text{Var}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=\frac{\sigma^2}{(1+\lambda)^2}\mathbb{I}$$Observe that $\text{Var}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]<\text{Var}[\widehat{ {\beta}}^{\text{ ols}}]$. And because $$\text{mse}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=\frac{p\sigma^2}{(1+\lambda)^2}+\frac{\lambda^2}{(1+\lambda)^2}\beta^T\beta$$we obtain an optimal value for $\lambda$: $\lambda^\star=p\sigma^2/\beta^T\beta$.

On the other hand, if $\ell$ is no longer the quadratic norm but the $\ell_1$ norm, the problem $(\ell_1)$ is not always strictly convex, and in particular, the optimum is not always unique (for example if $\mathbf{X}^T \mathbf{X}$ is singular). But if it is strictly convex, then the predictions $\mathbf{X}\beta$ will be unique. It should also be noted that two solutions are necessarily consistent in terms of signs of the coefficients: it is not possible to have $\beta_j<0$ for one solution and $\beta_j>0$ for another. From a heuristic point of view, the program $(\ell_1)$ is interesting because it often yields a corner solution, which corresponds to a solution of a problem of type $(\ell_0)$ – as shown visually on Figure 2.

Figure 2: Penalization based on norms $\ell_0$, $\ell_1$ and $\ell_2$ (from Hastie et al. (2016)).

Let us consider a very simple model: $y_i=x_i \beta+\varepsilon$, with an $\ell_1$ penalty and an $\ell_2$ loss function. The problem $(\ell_1)$ then becomes $$\min\big\{\mathbf{y}^T\mathbf{y}-2\mathbf{y}^T\mathbf{x}\beta+\beta\mathbf{x}^T\mathbf{x}\beta+2\lambda|\beta|\big\}$$The first order condition is then $$-2\mathbf{y}^T\mathbf{x} + 2\mathbf{x}^T\mathbf{x}\widehat{\beta}\pm 2\lambda=0$$where the sign of the last term depends on the sign of $\widehat\beta$. Suppose that the least squares estimator (obtained by setting $\lambda=0$) is (strictly) positive, i.e. $\mathbf{y}^T \mathbf{x}>0$. If $\lambda$ is not too large, we can imagine that $\widehat\beta$ has the same sign as $\widehat{\beta}^{\text{ols}}$, and therefore the condition becomes $-2\mathbf{y}^T \mathbf{x}+2\mathbf{x}^T \mathbf{x}\widehat\beta+2\lambda=0$, and the solution is $$\widehat{\beta}_{\lambda}^{\text{ lasso}}=\frac{\mathbf{y}^T\mathbf{x}-\lambda}{\mathbf{x}^T\mathbf{x}}$$By increasing $\lambda$, there will be a point at which $\widehat{\beta}_\lambda=0$. If we increase $\lambda$ a little more, $\widehat{\beta}_\lambda$ does not become negative, because in that case the sign of the last term of the first order condition changes, and we would then have to solve $$-2\mathbf{y}^T\mathbf{x} + 2\mathbf{x}^T\mathbf{x}\widehat{\beta}- 2\lambda=0$$whose solution is $$\widehat{\beta}_{\lambda}^{\text{ lasso}}=\frac{\mathbf{y}^T\mathbf{x}+\lambda}{\mathbf{x}^T\mathbf{x}}$$But this solution is positive (we assumed $\mathbf{y}^T \mathbf{x}>0$), so it is not possible to have $\widehat{\beta}_\lambda <0$ at the same time: a contradiction. Thus, beyond some value of $\lambda$, $\widehat{\beta}_\lambda=0$, which is a corner solution. Things are of course more complicated in larger dimensions (Tibshirani & Wasserman (2016) discuss the geometry of the solutions at length), but as Candès & Plan (2009) note, under minimal assumptions guaranteeing that the predictors are not strongly correlated, the lasso obtains a quadratic error almost as good as if we had an oracle providing perfect information on the set of $\beta_j$’s that are not zero.
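To make the corner solution concrete, here is a small numerical sketch (simulated data, not from the original text), reusing the soft_thresholding function defined earlier: in this one-covariate problem, $\widehat{\beta}_\lambda^{\text{lasso}}=S(\mathbf{y}^T\mathbf{x},\lambda)/\mathbf{x}^T\mathbf{x}$, which is exactly zero as soon as $\lambda\geq|\mathbf{y}^T\mathbf{x}|$,

set.seed(1)
n = 100
x = rnorm(n)
y = .5*x + rnorm(n)
lambda = seq(0,100,by=10)
# one-dimensional lasso estimate, as derived above
beta_lasso = sapply(lambda, function(l) soft_thresholding(sum(y*x),l)/sum(x^2))
rbind(lambda,beta_lasso)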
With some additional technical hypotheses, it can be shown that this estimator is “sparsistent”, in the sense that the support of $\widehat{\beta}_\lambda^{\text{lasso}}$ is that of $\beta$; in other words, the lasso has made it possible to select variables (more discussion on this point can be found in Hastie et al. (2016)). More generally, it can be shown that $\widehat{\beta}_\lambda^{\text{lasso}}$ is a biased estimator, but may be of sufficiently low variance that the mean squared error is lower than that of least squares. To compare the three techniques, relative to the least squares estimator (obtained when $\lambda=0$), if we assume that the explanatory variables are orthonormal, then $$\widehat{\beta}_{\lambda,j}^{\text{ subset}}=\widehat{\beta}_{j}^{\text{ ols}}\boldsymbol{1}_{|\widehat{\beta}_{j}^{\text{ ols}}|>b}, ~~\widehat{\beta}_{\lambda,j}^{\text{ ridge}}=\frac{\widehat{\beta}_{j}^{\text{ ols}}}{1+\lambda}~~\text{ and }~~\widehat{\beta}_{\lambda,j}^{\text{ lasso}}=\text{sign}[\widehat{\beta}_{j}^{\text{ ols}}]\cdot(|\widehat{\beta}_{j}^{\text{ ols}}|-\lambda)_+$$

Figure 3: Penalization based on norms $\ell_0$, $\ell_1$ and $\ell_2$ (from Hastie et al. (2016)).

To be continued, probably with a final post this week (references are online here)…

# On the robustness of LASSO

Probably the last post on the lasso, before the summer break… More specifically, I was wondering about the interpretation of the graphs $\lambda\mapsto\widehat{\beta}_\lambda$. We use them for variable selection, but my main concern was about confidence intervals: how much can we trust those lines? As usual, a natural approach is to use simulations on generated datasets. Consider for instance

Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
library(mnormt)
X = rmnorm(n,rep(0,3),Sigma)
set.seed(123)
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
  X5=runif(n), X6=exp(X[,3]),
  X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
  X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)

One can simulate many such datasets, and store the estimated coefficient paths (here in a list VLASSO, indexed by the simulation s)

vlambda = exp(seq(-8,1,length=201))
lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,
  lambda=vlambda,standardize=TRUE)
VLASSO[[s]] = as.matrix(lasso$beta)

To visualize confidence bands, one can compute quantiles

Q05 = Q95 = Qm = matrix(NA,9,201)
for(i in 1:nrow(Q05)){
  for(j in 1:ncol(Q05)){
    v = unlist(lapply(VLASSO,function(x) x[i,j]))
    Q05[i,j] = quantile(v,.05)
    Q95[i,j] = quantile(v,.95)
    Qm[i,j] = mean(v)
  }}

and get the graph

library(RColorBrewer)
colrs = c(brewer.pal(8,"Set1"))
plot(lasso,col=colrs,"lambda",ylim=c(min(Q05),max(Q95)))
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
  c(Q05[2,],rev(Q95[2,])),col=colrs[1],border=NA)
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
  c(Q05[5,],rev(Q95[5,])),col=colrs[2],border=NA)
polygon(c(log(lasso$lambda),rev(log(lasso$lambda))),
  c(Q05[8,],rev(Q95[8,])),col=colrs[3],border=NA)

An alternative (more realistic on real data) is to use bootstrapped version of the dataset

id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
lasso = glmnet(x=X[id,],y=df[id,"Y"],family="gaussian",alpha=1,
  lambda=vlambda,standardize=TRUE)
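A minimal sketch of the full bootstrap loop, mirroring the simulation loop above (the number of replications, 100, is arbitrary), could be

library(glmnet)
VLASSO = list()
for(s in 1:100){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=df[id,"Y"],family="gaussian",alpha=1,
    lambda=vlambda,standardize=TRUE)
  VLASSO[[s]] = as.matrix(lasso$beta)   # store one lasso path per bootstrap sample
}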

So far, it looks like it’s working very well. Now, what if we have a smaller dataset,

n = 100

On simulated new samples, we get

while the bootstrap version is

There is more uncertainty, clearly, but the conclusion is not ambiguous here.

Now, what about real data? Consider the following dataset

chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")
tail(chicago)
   Fire   X_1 X_2    X_3
42  4.8 0.152  19 13.323
43 10.4 0.408  25 12.960
44 15.6 0.578  28 11.260
45  7.0 0.114   3 10.080
46  7.1 0.492  23 11.428
47  4.9 0.466  27 13.731

with one variable of interest (the number of fires, per inhabitant) and three features. We can use the bootstrap here to generate samples, and then fit a lasso regression on each. On the original dataset, the regression is

X = model.matrix(lm(Fire~.,data=chicago))
vlambda = exp(seq(-4,2,length=201))
lasso = glmnet(x=X,y=chicago[,"Fire"],family="gaussian",alpha=1,
  lambda=vlambda,standardize=TRUE)

And if we just plot lines $\lambda\mapsto\widehat{\beta}_\lambda$ we get
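The plotting call itself is not shown in the post; presumably something close to the following (assuming colrs holds one colour per coefficient, e.g. from RColorBrewer),

library(RColorBrewer)
colrs = brewer.pal(4,"Set1")          # hypothetical choice: one colour per column of X
plot(lasso,col=colrs,"lambda",lwd=2)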

Now, consider bootstrap samples.

library(glmnet)
vlambda = exp(seq(-4,2,length=201))
for(s in 1:100){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=chicago[id,"Fire"],family="gaussian",alpha=1,
    lambda=vlambda,standardize=TRUE)
  plot(lasso,col=colrs,"lambda",lwd=.2,add=TRUE)
}

We get here

The interpretation here is much more difficult

N = matrix(NA,100000,4)
library(glmnet)
vlambda = exp(seq(-4,2,length=201))
for(s in 1:100000){
  id = sample(1:nrow(X),size=nrow(X),replace=TRUE)
  lasso = glmnet(x=X[id,],y=chicago[id,"Fire"],
    family="gaussian",alpha=1,
    lambda=vlambda,standardize=TRUE)
  N[s,] = names(sort(apply(as.matrix(lasso$beta),
    1,function(x) sum(x!=0))))}

The ordering that was obtained on the original dataset was the same in about 57% of the scenarios,

mean(apply(N,1,function(x) paste(x,collapse="")=="(Intercept)X_1X_2X_3"))
[1] 0.5693

We can look at all the cases,

L = as.character(c(123,132,213,231,312,321))
Li = paste("(Intercept)X_",substr(L,1,1),"X_",
  substr(L,2,2),"X_",substr(L,3,3),sep="")
g = function(y) mean(apply(N,1,function(x) paste(x,collapse="")==y))
vL = unlist(lapply(Li,g))
names(vL) = L
barplot(vL,las=2,horiz=TRUE)

# Standardization in LASSO

The lasso regression is based on the idea of solving$$\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_1}\rbrace$$where$$\Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|$$for any $\mathbf{a}\in\mathbb{R}^d$. In a recent post, we’ve seen computational aspects of the optimization problem. But I went quickly through the story of the $\ell_1$-norm. Using this norm means, somehow, that the values of $\beta_1$ and $\beta_2$ should be comparable, since they are penalized on the same scale. With two significant variables on very different scales, we should expect the orders of magnitude of $\widehat{\beta}_1$ and $\widehat{\beta}_2$ to be very different. This is why people say that it is necessary to center and scale (or standardize) the variables.

Consider the following (simulated) dataset

Sigma = matrix(c(1,.8,.2,.8,1,.4,.2,.4,1),3,3)
n = 1000
library(mnormt)
X = rmnorm(n,rep(0,3),Sigma)
set.seed(123)
df = data.frame(X1=X[,1],X2=X[,2],X3=X[,3],X4=rnorm(n),
  X5=runif(n),X6=exp(X[,3]),
  X7=sample(c("A","B"),size=n,replace=TRUE,prob=c(.5,.5)),
  X8=sample(c("C","D"),size=n,replace=TRUE,prob=c(.5,.5)))
df$Y = 1+df$X1-df$X4+5*(df$X7=="A")+rnorm(n)
X = model.matrix(lm(Y~.,data=df))

Use the following colors for the graphs, and the following values of $\lambda$

library("RColorBrewer")
colrs = c(brewer.pal(8,"Set1"))[c(1,4,5,2,6,3,7,8)]
vlambda = exp(seq(-8,1,length=201))

The first regression we can run is a non-standardized one

library(glmnet)
lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,lambda=vlambda,standardize=FALSE)

We can visualize the graphs of $\lambda\mapsto\widehat{\beta}_\lambda$

idx = which(apply(lasso$beta,1,function(x) sum(x==0)) < 200)
plot(lasso,col=colrs,'lambda',xlim=c(-5.5,2.3),lwd=2)
legend(1.2,.9,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,lwd=2)

At least, observe that the most significant variables are the ones that were used to generate the data.

Now, consider the case where we standardize the data

lasso = glmnet(x=X,y=df[,"Y"],family="gaussian",alpha=1,lambda=vlambda,standardize=TRUE)

The graphs of $\lambda\mapsto\widehat{\beta}_\lambda$
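The plotting calls are not repeated in the post; presumably the same ones as for the non-standardized fit, i.e.

idx = which(apply(lasso$beta,1,function(x) sum(x==0)) < 200)
plot(lasso,col=colrs,'lambda',xlim=c(-5.5,2.3),lwd=2)
legend(1.2,.9,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,lwd=2)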

The graph is (strangely) very similar to the previous one, except perhaps for the green curve. Maybe categorical variables are not similar to continuous ones… Somehow, standardizing a categorical variable might not be natural…

Why not consider some home-made function? Let us transform (linearly) all variables in the $X$ matrix (except the first one, which is the intercept)

Xc = X
for(j in 2:ncol(X)) Xc[,j] = (Xc[,j]-mean(Xc[,j]))/sd(Xc[,j])

Now, we can run our lasso regression on that one (keeping the intercept, since all the variables are centered, but not $y$)

lasso = glmnet(x=Xc,y=df$Y,family="gaussian",alpha=1,intercept=TRUE,lambda=vlambda)

The plot is now

plot(lasso,col=colrs,"lambda",xlim=c(-6.7,1.3),lwd=2)
idx = which(apply(lasso$beta,1,function(x) sum(x==0)) < length(vlambda))
legend(.15,.45,legend=paste('X',0:8,sep='')[idx],col=colrs,lty=1,bty="n",lwd=2)

Actually, why not also center the $y$ variable, and remove the intercept

Yc = (df[,"Y"]-mean(df[,"Y"]))/sd(df[,"Y"]) lasso = glmnet(x=Xc,y=Yc,family="gaussian",alpha=1,intercept=FALSE,lambda=vlambda)

Fortunately, those graphs are very consistent (and if we use them for variable selection, they suggest the variables that were actually used to generate the dataset). And having qualitative and quantitative variables together is not a big deal. But still, I do not feel comfortable with the differences…

# Classification from scratch, penalized Lasso logistic 5/8

Fifth post of our series on classification from scratch. Following the previous post on penalization using the $\ell_2$ norm (the so-called ridge regression), this time we will discuss penalization based on the $\ell_1$ norm (the so-called lasso regression).

First of all, one should admit that if the name stands for least absolute shrinkage and selection operator, it is actually a very cool name… Funny story: a few years before, Leo Breiman introduced the concept of the garrote technique… “The garrote eliminates some variables, shrinks others, and is relatively stable”.

I guess that, somehow, the lasso is an extension of the garrote technique.

## Normalization of the covariates

As previously, the first step will be to consider linear transformations of all covariates $x_j$ to get centered and scaled variables (with unit variance)

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)

## Lasso Regression (from scratch)

The heuristics about the lasso regression is the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue square is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem,

LogLik = function(bbeta){
  b0 = bbeta[1]
  beta = bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*log(1 + exp(b0+X%*%beta)))}
u = seq(-4,4,length=251)
v = outer(u,u,function(x,y) LogLik(c(1,x,y)))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
polygon(c(-1,0,1,0),c(0,1,0,-1),border="blue")

The nice thing here is that it works as a variable selection tool, since some components can be null here. That’s the idea behind the following (popular) graph (with the lasso on the left, and the ridge on the right).

Heuristically, the mathematical explanation is the following. Consider a simple regression $y_i=x_i\beta+\varepsilon$, with an $\ell_1$ penalty and an $\ell_2$ loss function. The optimization problem becomes$$\min\big\{\mathbf{y}^T\mathbf{y}-2\mathbf{y}^T\mathbf{x}\beta+\beta\mathbf{x}^T\mathbf{x}\beta+2\lambda{\color{red}{|}}\beta{\color{red}{|}}\big\}$$The first order condition can be written$$-2\mathbf{y}^T\mathbf{x}+2\mathbf{x}^T\mathbf{x}\widehat{\beta}{\color{red}{\pm} }2\lambda=0$$(the sign in ${\color{red}{\pm}}$ being the sign of $\widehat{\beta}$). Assume that $\mathbf{y}^T\mathbf{x}>0$, then the solution is $$\widehat{\beta}_{\lambda}^{lasso}=\max\left\lbrace\frac{\mathbf{y}^T\mathbf{x}-\lambda}{\mathbf{x}^T\mathbf{x}},0\right\rbrace$$(we get a corner solution when $\lambda$ is large).

## Optimization routine

As in our previous post, let us start with standard (R) optimization routines, such as BFGS

PennegLogLik = function(bbeta,lambda=0){
  b0 = bbeta[1]
  beta = bbeta[-1]
  -sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*log(1 + exp(b0+X%*%beta))) +
    lambda*sum(abs(beta))
}
opt_lasso = function(lambda){
  beta_init = lm(PRONO~.,data=myocarde)$coefficients
  logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda),
    hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))
  logistic_opt$par[-1]
}
v_lambda = c(exp(seq(-4,2,length=61)))
est_lasso = Vectorize(opt_lasso)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_lasso[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_lasso[i,],col=colrs[i],lwd=2)

But it is very erratic… or unstable.

## Using glmnet

Just to compare, with the R routines dedicated to the lasso, we get the following

library(glmnet)
glm_lasso = glmnet(X, y, alpha=1)
plot(glm_lasso,xvar="lambda",col=colrs,lwd=2)
plot(glm_lasso,col=colrs,lwd=2)

If we look carefully at what’s in the output, we can see that there is variable selection, in the sense that some $\widehat{\beta}_{j,\lambda}=0$, and they are “really” null

glmnet(X, y, alpha=1,lambda=exp(-4))$beta
7x1 sparse Matrix of class "dgCMatrix"
               s0
FRCAR  .         
INCAR  0.11005070
INSYS  0.03231929
PRDIA  .         
PAPUL  .         
PVENT -0.03138089
REPUL -0.20962611

Of course, with our optimization routine, we cannot expect to get null values

opt_lasso(.2)
        FRCAR         INCAR         INSYS         PRDIA 
 0.4810999782  0.0002813658  1.9117847987 -0.3873926427 
        PAPUL         PVENT         REPUL 
-0.0863050787 -0.4144139379 -1.3849264055

So clearly, it will be necessary to spend more time today, to understand how it works…

## Orthogonal covariates

Before getting into the maths, observe that when covariates are orthogonal, there is some very clear “variable” selection process,

library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord
glm_lasso = glmnet(pca_X, y, alpha=1)
plot(glm_lasso,xvar="lambda",col=colrs)
plot(glm_lasso,col=colrs)

## Interior Point approach

The penalty is now expressed using the $\ell_1$ norm, so intuitively, it should be possible to consider algorithms related to linear programming. That was actually suggested in Koh, Kim & Boyd (2007), with some implementation in matlab, see http://web.stanford.edu/~boyd/l1_logreg/. If I can find some time later on, maybe I will try to recode it. But actually, it is not the technique used in most R functions. Now, to be honest, we face a double challenge today: the first one is to understand how the lasso works for the “standard” (least squares) problem, the second one is to see how to adapt it to the logistic case.

## Standard lasso (with weights)

If we get back to the original lasso approach, the goal was to solve$$\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n [y_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2+\lambda \sum_j |\beta_j|\right\rbrace$$(with standard notations, as in wikipedia or Jocelyn Chi’s post – most of the code in this section is inspired by Jocelyn’s great post). Observe that the intercept is not subject to the penalty. The first order condition is then$$\frac{\partial}{\partial\beta_0}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}-\beta_0\mathbf{1}\|^2=(\mathbf{X}\mathbf{\beta}-\mathbf{y})^T\mathbf{1}+\beta_0\|\mathbf{1}\|^2=0$$i.e.$$\beta_0=\frac{1}{n}(\mathbf{y}-\mathbf{X}\mathbf{\beta})^T\mathbf{1}$$Assume now that the KKT conditions are satisfied; since we cannot differentiate (to find points where the gradient is $\mathbf{0}$), we can check whether $\mathbf{0}$ belongs to the subdifferential at the minimum. Namely$$\mathbf{0}\in\partial \left(\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right)=\frac{1}{2}\nabla\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\partial(\lambda\|\mathbf{\beta}\|_{\ell_1})$$For the term on the left, we recognize $$\frac{1}{2}\nabla\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2=-\mathbf{X}^T(\mathbf{y}-\mathbf{X}\mathbf{\beta})=-\mathbf{g}$$so that the previous equation can be written$$g_k\in\partial(\lambda|\beta_k|)=\begin{cases}\{+\lambda\}\text{ if }\beta_k>0 \\ \{-\lambda\}\text{ if }\beta_k<0 \\ [-\lambda,+\lambda]\text{ if }\beta_k=0\end{cases}$$i.e. if $\beta_k\neq 0$, then $g_k = \text{sign}(\beta_k)\cdot\lambda$.

We can then write the KKT conditions for this formulation and simplify them to produce a set of rules for checking our solution. We can split $\beta_j$ into a sum of its positive and negative parts by replacing $\beta_j$ with $\beta_j^+-\beta_j^-$ where $\beta_j^+,\beta_j^-\geq0$. Then the lasso problem becomes$$-\log\mathcal{L}(\mathbf{\beta})+\lambda\sum_j(\beta_j^++\beta_j^-)$$with constraints $\beta_j^+\geq 0$ and $\beta_j^-\geq0$. Let $\alpha_j^+,\alpha_j^-$ denote the Lagrange multipliers for $\beta_j^+$ and $\beta_j^-$, respectively. The Lagrangian is
$$L({\mathbf{\beta}}) + \lambda \sum_{j} (\beta_{j}^{+} + \beta_{j}^{-}) - \sum_{j}\alpha_{j}^{+}\beta_{j}^{+} - \sum_{j} \alpha_{j}^{-}\beta_{j}^{-}.$$To satisfy the stationarity condition, we take the gradient of the Lagrangian with respect to $\beta_{j}^{+}$ and set it to zero to obtain$$\nabla L({\mathbf{\beta}})_{j} + \lambda - \alpha_{j}^{+} = 0$$We do the same with respect to $\beta_{j}^{-}$ to obtain$$-\nabla L({\mathbf{\beta}})_{j}+\lambda-\alpha_{j}^{-} = 0$$

As discussed in Jocelyn Chi’s post, primal feasibility requires that the primal constraints be satisfied, so this gives us $\beta_{j}^{+} \ge 0$ and $\beta_{j}^{-} \ge 0$. Then dual feasibility requires non-negativity of the Lagrange multipliers, so we get $\alpha_{j}^{+} \ge 0$ and $\alpha_{j}^{-} \ge 0$. And finally, complementary slackness requires that $\alpha_{j}^{+}\beta_{j}^{+} = 0$ and $\alpha_{j}^{-}\beta_{j}^{-} = 0$. We can simplify these conditions to obtain a simple set of rules for checking whether or not our solution is a minimum. The following is inspired by Jocelyn Chi’s post.

From $\nabla L(\beta)_{j} + \lambda - \alpha_{j}^{+} = 0$, we have $\nabla L(\beta)_{j} + \lambda= \alpha_{j}^{+} \ge 0$. This gives us $\nabla L(\beta)_{j} \ge -\lambda$. From $-\nabla L(\beta)_{j} + \lambda - \alpha_{j}^{-} = 0$, we have $-\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} \ge 0$. This gives us $-\nabla L(\beta)_{j} \ge -\lambda$, which gives us $\nabla L(\beta)_{j} \le \lambda$. Hence, $\lvert \nabla L(\beta)_{j} \rvert \le \lambda \; \forall j$.

When $\beta_{j}^{+} > 0$ and $\lambda > 0$, complementary slackness requires $\alpha_{j}^{+} = 0$. So $\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{+} = 0$. Hence, $\nabla L(\beta)_{j} = -\lambda < 0$ since $\lambda > 0$. At the same time, $-\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} \ge 0$ so $2 \lambda = \alpha_{j}^{-} > 0$ since $\lambda > 0$. Then complementary slackness requires $\beta_{j}^{-} = 0$. Hence, when $\beta_{j}^{+} > 0$, we have $\beta_{j}^{-}=0$ and $\nabla L(\beta)_{j} = -\lambda$.

Similarly, when $\beta_{j}^{-} > 0$ and $\lambda > 0$, complementary slackness requires $\alpha_{j}^{-}=0$. So $-\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} = 0$ and $\nabla L(\beta)_{j}=\lambda>0$ since $\lambda > 0$. Then from $\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{+} \ge 0$ and the above, we get $2 \lambda = \alpha_{j}^{+} > 0$. Then complementary slackness requires $\beta_{j}^{+} = 0$. Hence, when $\beta_{j}^{-} > 0$, we have $\beta_{j}^{+}=0$ and $\nabla L(\beta)_{j} = \lambda$.

Since $\beta_{j} = \beta_{j}^{+} - \beta_{j}^{-}$, this means that when $\beta_{j} > 0$, $\nabla L(\beta)_{j} = -\lambda$, and when $\beta_{j} <0$, $\nabla L(\beta)_{j} = \lambda$. Combining this with $\lvert \nabla L(\beta)_{j} \rvert \le \lambda \; \forall j$, we arrive at the same convergence requirements that we obtained before using subdifferential calculus.

For convenience, introduce the soft-thresholding function$$S(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }\gamma<|z|\text{ and }z>0\\z+\gamma&\text{ if }\gamma<|z|\text{ and }z<0 \\0&\text{ if }\gamma\geq|z|\end{cases}$$Noticing that the optimization problem $$\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}$$can also be written (coordinate per coordinate, when the covariates are orthonormal) $$\min\left\lbrace\sum_{j=1}^p -\widehat{\beta}_j^{ols}\cdot\beta_j+\frac{1}{2}\beta_j^2+\lambda|\beta_j|\right\rbrace$$observe that$$\widehat{\beta}_{j,\lambda}=S(\widehat{\beta}_j^{ols},\lambda)$$which is a coordinate-wise update.
Now, if we consider a (slightly) more general problem, with weights in the first part,$$\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n{\color{red}{\omega_i}} [y_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2+\lambda \sum_j |\beta_j|\right\rbrace$$the coordinate-wise update becomes $$\widehat{\beta}_{j,\lambda,{\color{red}{\omega}}}=S(\widehat{\beta}_j^{{\color{red}{\omega-}}ols},\lambda)$$An alternative is to set$$\mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}$$so that the optimization problem can be written, equivalently, $$\min\left\lbrace\frac{1}{2n}\sum_{j=1}^p [\mathbf{r}_j-\beta_j\mathbf{x}_j]^2+\lambda |\beta_j|\right\rbrace$$hence$$\min\left\lbrace\frac{1}{2n}\sum_{j=1}^p \beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^T\mathbf{x}_j+\lambda |\beta_j|\right\rbrace$$and one gets $$\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)$$or, if we develop, $$\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)$$Again, if there are weights $\mathbf{\omega}=(\omega_i)$, the coordinate-wise update becomes $$\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)$$The code to compute this componentwise descent is

soft_thresholding = function(x,a){
  result = numeric(length(x))
  result[which(x > a)] = x[which(x > a)] - a
  result[which(x < -a)] = x[which(x < -a)] + a
  return(result)
}

and the code

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0list = numeric(length(maxiter+1))
  beta0 = sum(y-X%*%beta)/(length(y))
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      beta[k] = (1/sum(omega*X[,k]^2))*soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - beta0*rep(1,length(y))),'F')^2 +
      lambda*sum(abs(beta))
    if (norm(rbind(beta0list[j],betalist[[j]]) - rbind(beta0,beta),'F') < tol) { break }
  }
  return(list(obj=obj[1:j],beta=beta,intercept=beta0))
}

Let’s keep that one warm, and let’s get back to our initial problem.

## The lasso logistic regression

The trick here is that the logistic problem can be formulated as a quadratic programming problem. Recall that the log-likelihood is here $$\log\mathcal{L}=\frac{1}{n}\sum_{i=1}^n y_i\cdot(\beta_0+\mathbf{x}_i^T\mathbf{\beta})-\log[1+\exp(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]$$which is a concave function of the parameters. Hence, one can use a quadratic approximation of the log-likelihood – using a Taylor expansion – $$\log\mathcal{L}\approx\log\mathcal{L}'=\frac{1}{n}\sum_{i=1}^n \omega_i\cdot[z_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2$$where $z_i$ is the working response $$z_i=(\beta_0+\mathbf{x}_i^T\mathbf{\beta})+\frac{y_i-p_i}{p_i[1-p_i]}$$$p_i$ is the prediction$$p_i = \frac{\exp[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{1+\exp[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}$$and $\omega_i$ are the weights, $\omega_i = p_i[1-p_i]$. Thus, we obtain a penalized least-squares problem.
And we can use what was done previously

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0 = sum(y-X%*%beta)/(length(y))
  p = exp(beta0*rep(1,length(y)) + X%*%beta)/(1+exp(beta0*rep(1,length(y)) + X%*%beta))
  z = beta0*rep(1,length(y)) + X%*%beta + (y-p)/(p*(1-p))
  omega = p*(1-p)/(sum((p*(1-p))))
  beta0list = numeric(length(maxiter+1))
  beta0 = sum(y-X%*%beta)/(length(y))
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      r = z - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      beta[k] = (1/sum(omega*X[,k]^2))*soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    obj[j] = (1/2)*(1/length(y))*norm(omega*(z - X%*%beta - beta0*rep(1,length(y))),'F')^2 +
      lambda*sum(abs(beta))
    p = exp(beta0*rep(1,length(y)) + X%*%beta)/(1+exp(beta0*rep(1,length(y)) + X%*%beta))
    z = beta0*rep(1,length(y)) + X%*%beta + (y-p)/(p*(1-p))
    omega = p*(1-p)/(sum((p*(1-p))))
    if (norm(rbind(beta0list[j],betalist[[j]]) - rbind(beta0,beta),'F') < tol) { break }
  }
  return(list(obj=obj[1:j],beta=beta,intercept=beta0))
}

It looks like what we get when calling glmnet… and here, we do have null components for some $\lambda$ large enough! Really null… and that’s cool, actually.

## Application on our second dataset

Consider now the second dataset, with two covariates. The code to get the lasso estimates is

df0 = df
df0$y = as.numeric(df$y)-1
plot_lambda = function(lambda){
  m = apply(df0,2,mean)
  s = apply(df0,2,sd)
  for(j in 1:2) df0[,j] = (df0[,j]-m[j])/s[j]
  reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1,lambda=lambda)
  u = seq(0,1,length=101)
  p = function(x,y){
    xt = (x-m[1])/s[1]
    yt = (y-m[2])/s[2]
    predict(reg,newx=cbind(x1=xt,x2=yt),type="response")}
  v = outer(u,u,p)
  image(u,u,v,col=clr10,breaks=(0:10)/10)
  points(df$x1,df$x2,pch=19,cex=1.5,col="white")
  points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
  contour(u,u,v,levels = .5,add=TRUE)}

Consider some small value of $\lambda$, so that we only have some shrinkage of the parameters,

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=exp(-2.8))
plot_lambda(exp(-2.8))

But with a larger $\lambda$, there is variable selection: here $\widehat{\beta}_{1,\lambda}=0$

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=exp(-2.1))
plot_lambda(exp(-2.1))

This Tuesday, I will be giving the second part of the (crash) graduate course on advanced tools for econometrics. It will take place in Rennes, IMAPP room, and I have been told that there will be a video link with Nantes and Angers. Slides for the morning are online, as well as slides for the afternoon.

In the morning, we will talk about variable selection and penalization, and in the afternoon, it will be on changing the loss function (quantile regression).

# Actuariat de l’Assurance Non-Vie #9

This week, we finished the pricing models, with an extension on aggregate models (without separating frequency and average cost), on variable selection, and on model selection. The introductory slides are online.

On Thursday, March 23rd, I will give the third lecture of the PhD course on advanced tools for econometrics, on model selection and variable selection, where we will focus on ridge and lasso regressions. Slides are available online.

The first part was on Nonlinearities in Econometric models, and the second one on Simulations.

I will give a short graduate course for PhD students, in Rennes, on Thursday mornings, in March (2nd, 9th, 23rd and 30th). The agenda will be

1. Nonlinear Regression Models and Smoothing Techniques

2. Bootstrapping and Regression

3. Penalized Regression Models and LASSO

4. Quantile Regression and Expectiles

There will be slides available by the end of February.

# Actuariat de l’Assurance Non-Vie #9

For the ninth chapter of the non-life insurance actuarial science course at ENSAE, a small catch-all before tackling the modelling of liabilities: a few words on Tweedie models (collective model vs. individual models), on variable selection, and on model selection. The slides are online (as usual, the downloadable pdf version is more complete than the one on slideshare).