# Foundations of Machine Learning, part 4

This post is the eighth in our series on the history and foundations of econometric and machine learning models. The first four were on econometric techniques. Part 7 is online here.

## Penalization and variable selection

One important concept in econometrics is Ockham’s razor – also known as the law of parsimony (lex parsimoniae) – which can be related to abductive reasoning.

Akaike’s criterion was based on a penalized likelihood, taking into account the complexity of the model (the number of explanatory variables retained). While in econometrics it is customary to maximize the likelihood (to build an asymptotically unbiased estimator) and to judge the quality of the model ex post by penalizing the likelihood, the strategy here will be to penalize ex ante, in the objective function, even if it means building a biased estimator. Typically, we will build: $$(\widehat{\beta}_{0,\lambda},\widehat{\beta}_{\lambda})=\text{argmin}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda \text{ penalization}( \boldsymbol{\beta})\right\rbrace, ~~~(11)$$where the penalty function will often be a norm $\|\cdot\|$ chosen a priori, and $\lambda$ a penalty parameter (we recover, in a way, the distinction between AIC and BIC if the penalty function is the complexity of the model – the number of explanatory variables retained). With the $\ell_2$ norm we obtain the ridge estimator, and with the $\ell_1$ norm the lasso estimator (“Least Absolute Shrinkage and Selection Operator”). The penalty previously used involved the number of degrees of freedom of the model, so it may seem surprising to use $\|\beta\|_{\ell_2}$ as in the ridge regression. However, this penalty has a Bayesian interpretation. Recall that in a Bayesian model: $$\underbrace{\mathbb{P}[\boldsymbol{\theta}\vert\boldsymbol{y}]}_{\text{posterior}} \propto \underbrace{\mathbb{P}[\boldsymbol{y}\vert\boldsymbol{\theta}]}_{\text{likelihood}} \cdot \underbrace{\mathbb{P}[\boldsymbol{\theta}]}_{\text{prior}}$$or$$\log\mathbb{P}[\boldsymbol{\theta}\vert\boldsymbol{y}]= \underbrace{\log \mathbb{P}[\boldsymbol{y}\vert\boldsymbol{\theta}]}_{\text{log likelihood}} + \underbrace{\log\mathbb{P}[\boldsymbol{\theta}]}_{\text{penalty}}$$In a Gaussian linear model, if we assume that the prior distribution of $\boldsymbol{\theta}$ is a centred Gaussian, we obtain a penalty based on a quadratic form of the components of $\boldsymbol{\theta}$.
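To make the link explicit, here is a short sketch of that Bayesian argument in the Gaussian case (the variances $\sigma^2$ and $\tau^2$ are introduced only for this illustration): if $\boldsymbol{y}\vert\boldsymbol{\beta}\sim\mathcal{N}(\mathbf{X}\boldsymbol{\beta},\sigma^2\mathbb{I})$ and, a priori, $\boldsymbol{\beta}\sim\mathcal{N}(\boldsymbol{0},\tau^2\mathbb{I})$, then $$\log\mathbb{P}[\boldsymbol{\beta}\vert\boldsymbol{y}]=-\frac{1}{2\sigma^2}\|\boldsymbol{y}-\mathbf{X}\boldsymbol{\beta}\|_{\ell_2}^2-\frac{1}{2\tau^2}\|\boldsymbol{\beta}\|_{\ell_2}^2+\text{constant},$$so the posterior mode solves the ridge program of equation (11), with a quadratic loss and $\lambda=\sigma^2/\tau^2$.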

Before going back in detail to these two estimators, obtained with the $\ell_1$ or $\ell_2$ norm, let us return for a moment to a very similar problem: the best choice of explanatory variables. Classically (and this will be even more true in large dimension), we can have a large number of explanatory variables, $p$, but many are just noise, in the sense that $\beta_j=0$ for a large number of $j$'s. Let $s$ be the number of (really) relevant covariates, $s=\#S$, with $$S=\{j=1,\cdots,p:\beta_j\neq 0\}.$$ If we denote by $\mathbf{X}_S$ the matrix composed of the relevant variables (in columns), then we assume that the true model is of the form $y=\mathbf{x}_S^T \beta_S+\varepsilon$. Intuitively, an interesting estimator would then be $\widehat{\beta}_S=[\mathbf{X}_S^T \mathbf{X}_S ]^{-1} \mathbf{X}_S^T \mathbf{y}$, but this estimator is only theoretical, because the set $S$ is unknown here. It can actually be seen as the oracle estimator mentioned above. One may then be tempted to solve $$(\widehat{\beta}_{0,s},\widehat{\beta}_{s})=\underset{\beta_S\in\mathbb{R}^s}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_{i,S}^T\beta_S)\right\rbrace,\text{ s.t. } \# {S}=s$$This problem was introduced by Foster & George (1994) using the $\ell_0$ notation. More precisely, let us define the following three norms, for $\mathbf{a}\in\mathbb{R}^d$, $$\Vert\mathbf{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~ \Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|~~\text{ and }~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}$$
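To see why this formulation is combinatorial in nature, here is a minimal sketch of an exhaustive search over all subsets of a given size $s$, on simulated data (the design and all names are purely illustrative):

```r
set.seed(1)
n = 100; p = 8; s = 2
X = matrix(rnorm(n*p), n, p)
beta = c(2, -1, rep(0, p-2))            # only the first two covariates are relevant
y = drop(X %*% beta) + rnorm(n)
# exhaustive search: one OLS fit per subset of size s, keep the one with smallest RSS
subsets = combn(p, s)
rss = apply(subsets, 2, function(S) sum(residuals(lm(y ~ X[, S]))^2))
subsets[, which.min(rss)]               # should recover the true support {1, 2} here
choose(p, s)                            # the number of regressions to fit explodes with p
```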

Table 1: Constrained optimization and regularization.

| | constrained optimization | penalized optimization |
|:---:|:---|:---|
| $(\ell_0)$ | $\underset{\beta}{\min}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)\right\rbrace$ s.t. $\Vert\beta\Vert_{\ell_0}\leq s$ | $\underset{\beta}{\min}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda\Vert\beta\Vert_{\ell_0}\right\rbrace$ |
| $(\ell_1)$ | idem, s.t. $\Vert\beta\Vert_{\ell_1}\leq s$ | idem, with penalty $\lambda\Vert\beta\Vert_{\ell_1}$ |
| $(\ell_2)$ | idem, s.t. $\Vert\beta\Vert_{\ell_2}\leq s$ | idem, with penalty $\lambda\Vert\beta\Vert_{\ell_2}$ |

Let us consider the optimization problems in Table 1. If we consider the classical problem where the quadratic loss is used for $\ell$, the two problems in row $(\ell_1)$ of Table 1 are equivalent, in the sense that, for any solution $(\beta^\star,s)$ to the constrained (left) problem, there is a $\lambda^\star$ such that $(\beta^\star,\lambda^\star)$ is a solution of the penalized (right) problem, and vice versa. The result also holds for the problems $(\ell_2)$: these are indeed convex problems. On the other hand, the two problems $(\ell_0)$ are not equivalent: if $(\beta^\star,\lambda^\star)$ is a solution of the right problem, there is an $s^\star$ such that $\beta^\star$ is a solution of the left problem, but the converse is not true. More generally, if you want to use an $\ell_p$ norm, sparsity is obtained if $p\leq 1$, whereas you need $p\geq1$ to have a convex optimization program.
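That equivalence, in the convex case, is essentially a Lagrangian duality argument (sketched here under standard qualification conditions): the constrained problem can be written $$\underset{\beta:\|\beta\|\leq s}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)\right\rbrace = \underset{\beta}{\text{argmin}}\left\lbrace \sup_{\lambda\geq 0}\left[\sum_{i=1}^n \ell(y_i,\beta_0+\mathbf{x}_i^T\beta)+\lambda\big(\|\beta\|-s\big)\right]\right\rbrace$$and the supremum is attained at some $\lambda^\star\geq 0$, which yields the penalized form (the term $-\lambda^\star s$ does not depend on $\beta$ and can be dropped from the objective).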

One may be tempted to solve the penalized program $(\ell_0)$ directly, as suggested by Foster & George (1994). Numerically, it is a complex combinatorial problem in large dimension (Natarajan (1995) notes that it is an NP-hard problem), but it is possible to show that if $\lambda\sim\sigma^2 \log(p)$, then $$\mathbb{E}\big([\mathbf{x}^T \widehat{\beta}-\mathbf{x}^T \beta_0]^2\big) \leq \underbrace{\mathbb{E}\big([\mathbf{x}_{ {S}}^T\widehat{\beta}_{{S}}-\mathbf{x}^T \beta_0]^2\big)}_{=\sigma^2 \#{S}}\cdot \big(4\log p+2+o(1)\big)$$Observe that, in this case, $$\widehat{\beta}_{\lambda,j}^{\text{sub}} = \left\lbrace\begin{array}{l}0 \text{ if } j\notin{S}_\lambda(\beta)\\ \widehat{\beta}_{j}^{\text{ols}} \text{ if } j\in{S}_\lambda(\beta),\end{array}\right.$$where $S_\lambda (\beta)$ denotes the set of non-zero coordinates obtained when solving $(\ell_0)$.

The problem $(\ell_2)$ is strictly convex if $\ell$ is the quadratic loss; in other words, the ridge estimator is always well defined, and it has an explicit form, $$\widehat{ {\beta}}_\lambda^{\text{ ridge}}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{y}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}(\mathbf{X}^T\mathbf{X})\widehat{ {\beta}}^{\text{ ols}}$$Therefore, it can be deduced that $$\text{bias}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=-\lambda[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}~\widehat{ {\beta}}^{\text{ ols}}$$and$$\text{Var}[\widehat{\beta}_\lambda^{\text{ ridge}}]=\sigma^2[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{X}[\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I}]^{-1}$$With a matrix of orthonormal explanatory variables (i.e. $\mathbf{X}^T \mathbf{X}=\mathbb{I}$), these expressions simplify to $$\text{bias}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=-\frac{\lambda}{1+\lambda}~\widehat{ {\beta}}^{\text{ ols}}\text{ and }\text{Var}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=\frac{\sigma^2}{(1+\lambda)^2}\mathbb{I}$$Observe that $\text{Var}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]<\text{Var}[\widehat{ {\beta}}^{\text{ ols}}]$. And since $$\text{mse}[\widehat{ {\beta}}_\lambda^{\text{ ridge}}]=\frac{p\sigma^2}{(1+\lambda)^2}+\frac{\lambda^2}{(1+\lambda)^2}\beta^T\beta$$we obtain an optimal value for $\lambda$: $\lambda^\star=p\sigma^2/\beta^T\beta$.
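As a quick numerical illustration, here is a minimal sketch on simulated data (purely illustrative, all names are arbitrary), computing the ridge estimator from its closed form and checking the shrinkage towards 0:

```r
set.seed(1)
n = 100; p = 5
X = matrix(rnorm(n*p), n, p)
y = drop(X %*% c(1, -1, .5, 0, 0)) + rnorm(n)
# closed-form ridge estimator (X'X + lambda I)^{-1} X'y
ridge = function(lambda) solve(t(X) %*% X + lambda * diag(p)) %*% t(X) %*% y
b = cbind(ridge(0), ridge(10))          # lambda = 0 gives back OLS; lambda = 10 shrinks
colnames(b) = c("ols", "ridge10")
round(b, 3)
```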

On the other hand, if the penalty is the $\ell_1$ norm rather than the (squared) $\ell_2$ norm, the problem $(\ell_1)$ is not always strictly convex, and in particular the optimum is not always unique (for example if $\mathbf{X}^T \mathbf{X}$ is singular). But if it is strictly convex, then the predictions $\mathbf{X}\beta$ will be unique. It should also be noted that any two solutions are necessarily consistent in terms of the sign of the coefficients: it is not possible to have $\beta_j<0$ for one solution and $\beta_j>0$ for another. From a heuristic point of view, the program $(\ell_1)$ is interesting because it allows us to obtain, in many cases, a corner solution, which corresponds to a problem resolution of type $(\ell_0)$ – as shown visually in Figure 2.

Figure 2: Penalization based on norms $\ell_0$, $\ell_1$ and $\ell_2$ (from Hastie et al. (2016)).

Let us consider a very simple model: $y_i=x_i \beta+\varepsilon_i$, with an $\ell_1$ penalty and an $\ell_2$ loss function. The problem $(\ell_1)$ then becomes $$\min\big\{\mathbf{y}^T\mathbf{y}-2\mathbf{y}^T\mathbf{x}\beta+\beta\mathbf{x}^T\mathbf{x}\beta+2\lambda|\beta|\big\}$$The first order condition is then $$-2\mathbf{y}^T\mathbf{x} + 2\mathbf{x}^T\mathbf{x}\widehat{\beta}\pm 2\lambda=0$$where the sign of the last term depends on the sign of $\widehat{\beta}$. Suppose that the least squares estimator (obtained by setting $\lambda=0$) is (strictly) positive, i.e. $\mathbf{y}^T \mathbf{x}>0$. If $\lambda$ is not too large, we can expect $\widehat{\beta}$ to have the same sign as $\widehat{\beta}^{\text{ols}}$, and therefore the condition becomes $-2\mathbf{y}^T \mathbf{x}+2\mathbf{x}^T \mathbf{x}\widehat{\beta}+2\lambda=0$, whose solution is $$\widehat{\beta}_{\lambda}^{\text{ lasso}}=\frac{\mathbf{y}^T\mathbf{x}-\lambda}{\mathbf{x}^T\mathbf{x}}$$By increasing $\lambda$, there is a point at which $\widehat{\beta}_\lambda=0$. If we increase $\lambda$ a little more, $\widehat{\beta}_\lambda$ does not become negative, because in that case the sign of the last term of the first order condition changes, and we would then have to solve $$-2\mathbf{y}^T\mathbf{x} + 2\mathbf{x}^T\mathbf{x}\widehat{\beta}- 2\lambda=0$$whose solution is $$\widehat{\beta}_{\lambda}^{\text{ lasso}}=\frac{\mathbf{y}^T\mathbf{x}+\lambda}{\mathbf{x}^T\mathbf{x}}$$But this solution is positive (we assumed $\mathbf{y}^T \mathbf{x}>0$), so it is not possible to have $\widehat{\beta}_\lambda <0$ at the same time. Hence, beyond that threshold, $\widehat{\beta}_\lambda=0$, which is a corner solution. Things are of course more complicated in higher dimensions (Tibshirani & Wasserman (2016) go back at length on the geometry of the solutions), but as Candès & Plan (2009) note, under minimal assumptions guaranteeing that the predictors are not strongly correlated, the lasso achieves a quadratic error almost as good as if we had an oracle providing perfect information on the set of $\beta_j$‘s that are not zero. With some additional technical hypotheses, it can be shown that this estimator is “sparsistent”, in the sense that the support of $\widehat{\beta}_\lambda^{\text{lasso}}$ is that of $\beta$; in other words, the lasso makes it possible to select variables (more discussion on this point can be found in Hastie et al. (2016)).
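Going back to the one-dimensional computation above: it is simply a soft-thresholding operation, and a minimal sketch on simulated data (purely illustrative) makes the corner solution visible:

```r
set.seed(1)
n = 100
x = rnorm(n)
y = .3 * x + rnorm(n)
# single-covariate lasso: soft-thresholding of y'x, then rescaling by x'x
beta_lasso = function(lambda) sign(sum(y * x)) * pmax(abs(sum(y * x)) - lambda, 0) / sum(x^2)
lambda = seq(0, 60, length = 121)
plot(lambda, Vectorize(beta_lasso)(lambda), type = "l",
     xlab = expression(lambda), ylab = "coefficient")
# the path hits exactly 0 at lambda = |y'x| and stays there: the corner solution
abline(v = abs(sum(y * x)), lty = 2)
```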

More generally, it can be shown that $\widehat{\beta}_\lambda^{\text{lasso}}$ is a biased estimator, but its variance may be sufficiently low that the mean squared error is smaller than with least squares. To compare the three techniques relative to the least squares estimator (obtained when $\lambda=0$), if we assume that the explanatory variables are orthonormal, then $$\widehat{\beta}_{\lambda,j}^{\text{ subset}}=\widehat{\beta}_{j}^{\text{ ols}}\boldsymbol{1}_{|\widehat{\beta}_{j}^{\text{ ols}}|>b}, ~~\widehat{\beta}_{\lambda,j}^{\text{ ridge}}=\frac{\widehat{\beta}_{j}^{\text{ ols}}}{1+\lambda}$$and$$\widehat{\beta}_{\lambda,j}^{\text{ lasso}}=\text{sign}[\widehat{\beta}_{j}^{\text{ ols}}]\cdot(|\widehat{\beta}_{j}^{\text{ ols}}|-\lambda)_+$$
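These three transformations of the least squares estimator are easy to visualize; the following sketch (with an arbitrary threshold $b$ and penalty $\lambda$, chosen only for the picture) reproduces the usual plot:

```r
b_ols = seq(-3, 3, length = 201)        # value of the OLS estimate
b = 1; lambda = 1                       # arbitrary threshold and penalty, for illustration
subset_est = b_ols * (abs(b_ols) > b)                    # hard thresholding (best subset)
ridge_est  = b_ols / (1 + lambda)                        # proportional shrinkage (ridge)
lasso_est  = sign(b_ols) * pmax(abs(b_ols) - lambda, 0)  # soft thresholding (lasso)
plot(b_ols, b_ols, type = "l", lty = 2, xlab = "OLS estimate", ylab = "penalized estimate")
lines(b_ols, subset_est, col = "red")
lines(b_ols, ridge_est,  col = "blue")
lines(b_ols, lasso_est,  col = "darkgreen")
legend("topleft", c("best subset", "ridge", "lasso"),
       col = c("red", "blue", "darkgreen"), lty = 1, bty = "n")
```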

Figure 3: Penalization based on norms $\ell_0$, $\ell_1$ and $\ell_2$ (from Hastie et al. (2016)).

To be continued, probably with a final post this week (references are online here)…

# Classification from scratch, penalized Ridge logistic 4/8

Fourth post of our series on classification from scratch, following the previous post, which was some sort of detour on kernels. Today, we get back to the logistic model.

## Formal approach to the problem

We’ve seen before that the classical technique used to estimate the parameters of a parametric model is the maximum likelihood approach. More specifically, $$\widehat{\mathbf{\beta}}=\text{argmax}\lbrace \log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})\rbrace$$The objective function here focuses (only) on the goodness of fit. But usually, in econometrics, we believe in something like non sunt multiplicanda entia sine necessitate (“entities are not to be multiplied without necessity”), the parsimony principle: simpler theories are preferable to more complex ones. So we want to penalize overly complex models.

This is not a bad idea. It is mentioned here and there in econometrics textbooks, but usually for model choice, not for inference. Usually, we estimate parameters using maximum likelihood techniques, and then we use the AIC or BIC to compare two models. Recall that Akaike's criterion (AIC) is based on$$-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\text{dim}(\widehat{\mathbf{\beta}})$$We have on the left a measure of the goodness of fit, and on the right a penalty increasing with the “complexity” of the model.
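Just to fix ideas, on a toy logistic regression fitted on simulated data (purely illustrative), R's built-in AIC is exactly that quantity:

```r
set.seed(1)
n = 200
x1 = rnorm(n); x2 = rnorm(n)
y = rbinom(n, size = 1, prob = 1/(1 + exp(-(1 + x1 - x2))))
fit = glm(y ~ x1 + x2, family = binomial)
AIC(fit)                                                  # built-in criterion
-2 * as.numeric(logLik(fit)) + 2 * length(coef(fit))      # -2 log-likelihood + 2 dim(beta)
```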

Very quickly, here, the complexity is the number of covariates used. I will not enter into details about the concept of sparsity (and the true dimension of the problem); I recommend reading the book by Martin Wainwright, Robert Tibshirani and Trevor Hastie on that issue. But assume that we do not do any variable selection: we consider the regression on all covariates. Define$$\Vert\mathbf{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~\Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|,~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}$$for any $\mathbf{a}\in\mathbb{R}^d$. One might say that the AIC could be written$$-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\|\widehat{\mathbf{\beta}}\|_{\ell_0}$$And actually, this will be our objective function. More specifically, we will consider
$$\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|\rbrace$$for some norm $\|\cdot\|$. I will not get back here to the motivation and the (theoretical) properties of those estimators (that will actually be discussed at the Summer School in Barcelona, in July), but in this post I want to discuss the numerical algorithms to solve such an optimization problem, for $\|\cdot\|_{\ell_2}$ (the ridge regression) and for $\|\cdot\|_{\ell_1}$ (the lasso regression).

## Normalization of the covariates

The problem with $\|\mathbf{\beta}\|$ is that the norm should make sense, somehow: a coefficient $\beta_j$ is only small or large relative to the scale of the corresponding $x_j$'s. So the first step will be to apply a linear transformation to each covariate $x_j$, to get centered and scaled variables (with unit variance):

```r
y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
```

## Ridge Regression (from scratch)

Before running some code, recall that we want to solve something like$$\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_2}^2\rbrace$$In the case where we consider the log-likelihood of some Gaussian variable, we get the sum of the squares of the residuals, and we can obtain an explicit solution. But not in the context of a logistic regression. The heuristic behind ridge regression is the following graph: in the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue circle is the constraint we have if we rewrite the optimization problem as a constrained optimization problem: $$\min_{\mathbf{\beta}:\|\mathbf{\beta}\|^2_{\ell_2}\leq s} \lbrace \sum_{i=1}^n -\log\mathcal{L}(y_i,\beta_0+\mathbf{x}_i^T\mathbf{\beta}) \rbrace$$can be written equivalently (it is a strictly convex problem) as$$\min_{\mathbf{\beta},\lambda} \lbrace -\sum_{i=1}^n \log\mathcal{L}(y_i,\beta_0+\mathbf{x}_i^T\mathbf{\beta}) +\lambda \|\mathbf{\beta}\|_{\ell_2}^2 \rbrace$$Thus, the constrained maximum should lie in the blue disk.

```r
LogLik = function(bbeta){
  b0 = bbeta[1]
  beta = bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*log(1 + exp(b0+X%*%beta)))}
u = seq(-4,4,length=251)
v = outer(u,u,function(x,y) LogLik(c(1,x,y)))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
u = seq(-1,1,length=251)
lines(u,sqrt(1-u^2),type="l",lwd=2,col="blue")
lines(u,-sqrt(1-u^2),type="l",lwd=2,col="blue")
```

Let us consider the objective function, with the following code

```r
PennegLogLik = function(bbeta,lambda=0){
  b0 = bbeta[1]
  beta = bbeta[-1]
  -sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*log(1 + exp(b0+X%*%beta))) + lambda*sum(beta^2)
}
```

Why not try a standard optimisation routine? In the very first post of this series, we mentioned that using optimization routines was not clever, since they rely strongly on the starting point. But here, it is not the case

```r
lambda = 1
beta_init = lm(PRONO~.,data=myocarde)$coefficients
vpar = matrix(NA,1000,8)
for(i in 1:1000){
  vpar[i,] = optim(par = beta_init*rnorm(8,1,2), function(x) PennegLogLik(x,lambda),
                   method = "BFGS", control = list(abstol=1e-9))$par}
par(mfrow=c(1,2))
plot(density(vpar[,2]),ylab="",xlab=names(myocarde)[1])
plot(density(vpar[,3]),ylab="",xlab=names(myocarde)[2])
```

Clearly, even if we change the starting point, it looks like we converge towards the same value. That could be considered as the optimum. The code to compute $\widehat{\mathbf{\beta}}_{\lambda}$ would then be

```r
opt_ridge = function(lambda){
  beta_init = lm(PRONO~.,data=myocarde)$coefficients
  logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda),
                       method = "BFGS", control=list(abstol=1e-9))
  logistic_opt$par[-1]}
```

and we can visualize the evolution of $\widehat{\mathbf{\beta}}_{\lambda}$ as a function of ${\lambda}$

```r
v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(opt_ridge)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1])
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])
```

At least it seems to make sense: we can observe the shrinkage as $\lambda$ increases (we'll get back to that later on).

## Ridge, using Newton-Raphson

We've seen that we can also use the Newton-Raphson algorithm to solve this problem.
Without the penalty term, the algorithm was$$\mathbf{\beta}_{new} = \mathbf{\beta}_{old} - \left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}$$where $$\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})$$and$$\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}$$where $\mathbf{\Delta}_{old}$ is the diagonal matrix with terms $\mathbf{p}_{old}(1-\mathbf{p}_{old})$ on the diagonal. Thus$$\mathbf{\beta}_{new} = \mathbf{\beta}_{old} + (\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T[\mathbf{y}-\mathbf{p}_{old}]$$that we can also write$$\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}$$where $\mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]$. Here, on the penalized problem, we can easily prove that$$\frac{\partial\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}=\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}-2\lambda\mathbf{\beta}_{old}$$while$$\frac{\partial^2\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}-2\lambda\mathbb{I}$$Hence$$\mathbf{\beta}_{\lambda,new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}+2\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}$$The code is then

```r
Y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
X = cbind(1,X)
colnames(X) = c("Inter",names(myocarde[,1:7]))
beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)
for(s in 1:9){
  # iteratively reweighted least squares step, with the ridge term 2*lambda*I
  # (lambda was set to 1 earlier; note that the intercept is penalized here too)
  pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
  Delta = matrix(0,nrow(X),nrow(X)); diag(Delta) = (pi*(1-pi))
  z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
  B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
  beta = cbind(beta,B)}
beta[,8:10]
```

```
              [,1]        [,2]        [,3]
XInter  0.59619654  0.59619654  0.59619654
XFRCAR  0.09217848  0.09217848  0.09217848
XINCAR  0.77165707  0.77165707  0.77165707
XINSYS  0.69678521  0.69678521  0.69678521
XPRDIA -0.29575642 -0.29575642 -0.29575642
XPAPUL -0.23921101 -0.23921101 -0.23921101
XPVENT -0.33120792 -0.33120792 -0.33120792
XREPUL -0.84308972 -0.84308972 -0.84308972
```

Again, it seems that convergence is very fast.
And interestingly, with that algorithm, we can also derive the variance of the estimator$$\text{Var}[\widehat{\mathbf{\beta}}_{\lambda}]=[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{\Delta}\text{Var}[\mathbf{z}]\mathbf{\Delta}\mathbf{X}[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}$$where $\text{Var}[\mathbf{z}]=\mathbf{\Delta}^{-1}$. The code to compute $\widehat{\mathbf{\beta}}_{\lambda}$ as a function of $\lambda$ is then

```r
newton_ridge = function(lambda=1){
  beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
  for(s in 1:20){
    pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
    Delta = matrix(0,nrow(X),nrow(X)); diag(Delta) = (pi*(1-pi))
    z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
    B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
    beta = cbind(beta,B)}
  Varz = solve(Delta)
  Varb = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% t(X)%*%Delta %*% Varz %*%
         Delta %*% X %*% solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X)))
  return(list(beta=beta[,ncol(beta)],sd=sqrt(diag(Varb))))}
```

We can visualize the evolution of $\widehat{\mathbf{\beta}}_{\lambda}$ (as a function of $\lambda$)

```r
v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(function(x) newton_ridge(x)$beta)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])
```

and to get the evolution of the variance

```r
v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(function(x) newton_ridge(x)$sd)(v_lambda)
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i],lwd=2)
```

Recall that when $\lambda=0$ (on the left of the graphs), $\widehat{\mathbf{\beta}}_{0}=\widehat{\mathbf{\beta}}^{\text{ols}}$ (no penalty). Thus, as $\lambda$ increases, (i) the bias increases (the estimates shrink towards 0) and (ii) the variances decrease.

## Ridge, using glmnet

As always, there are R functions available to run a ridge regression. Let us use the glmnet function, with $\alpha=0$.
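As a reminder, glmnet fits the elastic net family of estimators, whose objective function is, roughly, $$\frac{1}{n}\sum_{i=1}^n -\log\mathcal{L}(y_i,\beta_0+\mathbf{x}_i^T\mathbf{\beta}) + \lambda\left[\frac{1-\alpha}{2}\|\mathbf{\beta}\|_{\ell_2}^2+\alpha\|\mathbf{\beta}\|_{\ell_1}\right]$$so that $\alpha=0$ corresponds to the ridge penalty, and $\alpha=1$ to the lasso one.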

```r
y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
library(glmnet)
glm_ridge = glmnet(X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)
```

with the regularization path plotted as a function of $\log\lambda$, or of the norm (the $\ell_1$ norm here, I don't know why). I don't know either why all the graphs obtained with different optimisation routines are so different… Maybe that will be for another post…

## Ridge with orthogonal covariates

An interesting case is obtained when the covariates are orthogonal. This can be obtained using a PCA of the covariates.

```r
library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord
```
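As a side check (assuming the pca_X object above is in the workspace), the principal component scores are indeed uncorrelated, which is what makes this case tractable:

```r
# the PC scores should have an identity correlation matrix, up to numerical precision
round(cor(pca_X), 10)
```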

Let us run a ridge regression on those (orthogonal) covariates

```r
library(glmnet)
glm_ridge = glmnet(pca_X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)
```

```r
plot(glm_ridge,col=colrs,lwd=2)
```

We clearly observe the shrinkage of the parameters, in the sense that $$\widehat{\mathbf{\beta}}_{\lambda}^{\perp}=\frac{\widehat{\mathbf{\beta}}^{\text{ols}}}{1+\lambda}$$

## Application

Let us try with our second set of data
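The df data frame, the clr10 palette and the z vector come from the first posts of this series; if they are not at hand, a toy stand-in (purely illustrative, not the original data) can be generated as follows:

```r
# hypothetical replacement for the objects used below (df, z, clr10)
set.seed(1)
n = 60
df = data.frame(x1 = runif(n), x2 = runif(n))
df$y = factor(ifelse(df$x1 + df$x2 + rnorm(n, 0, .3) > 1, "1", "0"))
z = as.numeric(df$y) - 1
library(RColorBrewer)
clr10 = colorRampPalette(brewer.pal(9, "RdYlBu"))(10)
```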

```r
df0 = df
df0$y = as.numeric(df$y)-1
plot_lambda = function(lambda){
  m = apply(df0,2,mean)
  s = apply(df0,2,sd)
  for(j in 1:2) df0[,j] = (df0[,j]-m[j])/s[j]
  reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0, lambda=lambda)
  u = seq(0,1,length=101)
  p = function(x,y){
    xt = (x-m[1])/s[1]
    yt = (y-m[2])/s[2]
    predict(reg,newx=cbind(x1=xt,x2=yt),type='response')}
  v = outer(u,u,p)
  image(u,u,v,col=clr10,breaks=(0:10)/10)
  points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
  contour(u,u,v,levels=.5,add=TRUE)
}
```

We can try various values of $\lambda$

```r
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(.2))
plot_lambda(.2)
```

or

```r
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(1.2))
plot_lambda(1.2)
```

The next step is to change the norm of the penalty, and to use the $\ell_1$ norm (to be continued…)