
Classification from scratch, penalized Lasso logistic 5/8

Fifth post of our series on classification from scratch, following the previous post on penalization using the \ell_2 norm (the so-called Ridge regression). This time, we will discuss penalization based on the \ell_1 norm (the so-called Lasso regression).

First of all, one should admit that the name, which stands for least absolute shrinkage and selection operator, is actually a very cool name… Funny story: a few years before, Leo Breiman had introduced the garrote technique… “The garrote eliminates some variables, shrinks others, and is relatively stable”.

I guess that, somehow, the lasso is an extension of the garrote technique.

Normalization of the covariates

As previously, the first step will be to consider linear transformations of all covariates x_j to get centered and scaled variables (with unit variance)

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)

Lasso Regression (from scratch)

The heuristics about Lasso regression is the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue square is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem,

LogLik = function(bbeta){
  b0=bbeta[1]
  beta=bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) - 
  (1-y)*log(1 + exp(b0+X%*%beta)))}
u = seq(-4,4,length=251)
v = outer(u,u,function(x,y) LogLik(c(1,x,y)))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
polygon(c(-1,0,1,0),c(0,1,0,-1),border="blue")

The nice thing here is that it works as a variable selection tool, since some components can be exactly null. That’s the idea behind the following (popular) graph


(with lasso on the left, and ridge on the right).

Heuristically, the maths explanation is the following. Consider a simple regression y_i=x_i\beta+\varepsilon, with an \ell_1-penalty and an \ell_2-loss function. The optimization problem becomes\min\big\{\mathbf{y}^T\mathbf{y}-2\mathbf{y}^T\mathbf{x}\beta+\beta\mathbf{x}^T\mathbf{x}\beta+2\lambda{\color{red}{|}}\beta{\color{red}{|}}\big\}The first order condition can be written-2\mathbf{y}^T\mathbf{x}+2\mathbf{x}^T\mathbf{x}\widehat{\beta}{\color{red}{\pm} }2\lambda=0(the sign in {\color{red}{\pm}} being the sign of \widehat{\beta}).
Assume that \mathbf{y}^T\mathbf{x}>0, then the solution is
\widehat{\beta}_{\lambda}^{lasso}=\max\left\lbrace\frac{\mathbf{y}^T\mathbf{x}-\lambda}{\mathbf{x}^T\mathbf{x}},0\right\rbrace(we get a corner solution when \lambda is large).
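Just as a quick sanity check of that closed form, on a toy simulated dataset (the names xx and yy below are arbitrary, chosen so that we do not overwrite the objects defined above),

set.seed(1)
xx = rnorm(100)
yy = 2*xx + rnorm(100)
lambda = 10
max((sum(yy*xx)-lambda)/sum(xx^2), 0)                                # closed form (yy'xx > 0 here)
optimize(function(b) sum((yy-b*xx)^2)+2*lambda*abs(b), c(-10,10))$minimum  # numerical minimizer

and both values should coincide (up to numerical precision).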

Optimization routine

As in our previous post, let us start with standard (R) optimization routines, such as BFGS

PennegLogLik = function(bbeta,lambda=0){
  b0=bbeta[1]
  beta=bbeta[-1]
 -sum(-y*log(1 + exp(-(b0+X%*%beta))) - 
(1-y)*log(1 + exp(b0+X%*%beta)))+lambda*sum(abs(beta))
}
opt_lasso = function(lambda){
beta_init = lm(PRONO~.,data=myocarde)$coefficients
logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda), 
hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))
logistic_opt$par[-1]
}
v_lambda=c(exp(seq(-4,2,length=61)))
est_lasso=Vectorize(opt_lasso)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_lasso[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_lasso[i,],col=colrs[i],lwd=2)


But it is very erratic… or unstable.

Using glmnet

Just to compare, with R routines dedicated to lasso, we get the following

library(glmnet)
glm_lasso = glmnet(X, y, alpha=1)
plot(glm_lasso,xvar="lambda",col=colrs,lwd=2)

plot(glm_lasso,col=colrs,lwd=2)

If we look carefully at what’s in the output, we can see that there is variable selection, in the sense that some \widehat{\beta}_{j,\lambda}=0, and they are “really null”

glmnet(X, y, alpha=1,lambda=exp(-4))$beta
7x1 sparse Matrix of class "dgCMatrix"
               s0
FRCAR  .         
INCAR  0.11005070
INSYS  0.03231929
PRDIA  .         
PAPUL  .         
PVENT -0.03138089
REPUL -0.20962611

Of course, with our optimization routine, we cannot expect to have null values

opt_lasso(.2)
         FRCAR         INCAR         INSYS         PRDIA
  0.4810999782  0.0002813658  1.9117847987 -0.3873926427
          PAPUL         PVENT        REPUL 
 -0.0863050787 -0.4144139379 -1.3849264055

So clearly, it will be necessary to spend more time today, to understand how it works…

Orthogonal covariates

Before getting into the maths, observe that when covariates are orthogonal, there is some very clear “variable” selection process,

library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord
glm_lasso = glmnet(pca_X, y, alpha=1)
plot(glm_lasso,xvar="lambda",col=colrs)
plot(glm_lasso,col=colrs)

Interior Point approach

The penalty is now expressed using the \ell_1 norm, so, intuitively, it should be possible to consider algorithms related to linear programming. That was actually suggested in Koh, Kim & Boyd (2007), with some implementation in Matlab, see http://web.stanford.edu/~boyd/l1_logreg/. If I can find some time, later on, maybe I will try to recode it. But actually, it is not the technique used in most R functions.

Now, to be honest, we face a double challenge today: the first one is to understand how the lasso works for the “standard” (least squares) problem; the second one is to see how to adapt it to the logistic case.

Standard lasso (with weights)

If we get back to the original Lasso approach, the goal was to solve\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n [y_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2+\lambda \sum_j |\beta_j|\right\rbrace(with standard notations, as in wikipedia or Jocelyn Chi’s post – most of the code in this section is inspired by Jocelyn’s great post).

Observe that the intercept is not subject to the penalty. The first order condition is then\frac{\partial}{\partial\beta_0}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}-\beta_0\mathbf{1}\|^2=2\left[(\mathbf{X}\mathbf{\beta}-\mathbf{y})^T\mathbf{1}+\beta_0\|\mathbf{1}\|^2\right]=0i.e.\beta_0=\frac{1}{n}(\mathbf{y}-\mathbf{X}\mathbf{\beta})^T\mathbf{1}For the other coefficients, assume now that the KKT conditions are satisfied: since we cannot differentiate the \ell_1 penalty (to find points where the gradient is \mathbf{0}), we check instead whether \mathbf{0} belongs to the subdifferential at the minimum.

Namely\mathbf{0}\in\partial \left(\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right)=\frac{1}{2}\nabla\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\partial(\lambda\|\mathbf{\beta}\|_{\ell_1})
For the first term, we recognize \frac{1}{2}\nabla\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2=-\mathbf{X}^T(\mathbf{y}-\mathbf{X}\mathbf{\beta})=-\mathbf{g}so that the previous equation can be written g_k\in\partial(\lambda|\beta_k|)=\begin{cases}\{+\lambda\}\text{ if }\beta_k>0 \\ \{-\lambda\}\text{ if }\beta_k<0 \\ [-\lambda,+\lambda]\text{ if }\beta_k=0\end{cases}i.e. if \beta_k\neq 0, then g_k = \text{sign}(\beta_k)\cdot\lambda, and |g_k|\leq\lambda otherwise.

Then we write the KKT conditions for this formulation and simplify them to produce a set of rules for checking our solution

We can split \beta_j into a sum of its positive and negative parts by replacing \beta_j with \beta_j^+-\beta_j^- where \beta_j^+,\beta_j^-\geq0. Then the Lasso problem becomes\min\big\lbrace-\log\mathcal{L}(\mathbf{\beta})+\lambda\sum_j(\beta_j^++\beta_j^-)\big\rbracewith constraints \beta_j^+\geq 0 and \beta_j^-\geq 0.

Let \alpha_j^+,\alpha_j^- denote the Lagrange multipliers for \beta_j^+,\beta_j^-, respectively.

The Lagrangian is thenL({\mathbf{\beta}}) + \lambda \sum_{j} (\beta_{j}^{+} + \beta_{j}^{-}) - \sum_{j}\alpha_{j}^{+}\beta_{j}^{+} - \sum_{j} \alpha_{j}^{-}\beta_{j}^{-}To satisfy the stationarity condition, we take the gradient of the Lagrangian with respect to \beta_{j}^{+} and set it to zero to obtain\nabla L({\mathbf{\beta}})_{j} + \lambda - \alpha_{j}^{+} = 0We do the same with respect to \beta_{j}^{-} to obtain-\nabla L({\mathbf{\beta}})_{j}+\lambda-\alpha_{j}^{-} = 0

As discussed in Jocelyn Chi’s post, primal feasibility requires that the primal constraints be satisfied so this gives us \beta_{j}^{+} \ge 0 and \beta_{j}^{-} \ge 0. Then dual feasibility requires non-negativity of the Lagrange multipliers so we get \alpha_{j}^{+} \ge 0 and \alpha_{j}^{-} \ge 0. And finally, complementary slackness requires that \alpha_{j}^{+}\beta_{j}^{+} = 0 and \alpha_{j}^{-}\beta_{j}^{-} = 0. We can simplify these conditions to obtain a simple set of rules for checking whether or not our solution is a minimum. The following is inspired by Jocelyn Chi’s post.

From \nabla L(\beta)_{j} + \lambda - \alpha_{j}^{+} = 0, we have \nabla L(\beta)_{j} + \lambda= \alpha_{j}^{+} \ge 0. This gives us \nabla L(\beta)_{j} \ge -\lambda. From -\nabla L(\beta)_{j} + \lambda - \alpha_{j}^{-} = 0, we have -\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} \ge 0. This gives us -\nabla L(\beta)_{j} \ge -\lambda, which gives us \nabla L(\beta)_{j} \le \lambda. Hence, \lvert \nabla L(\beta)_{j} \rvert \le \lambda \; \forall j

When \beta_{j}^{+} > 0, \lambda > 0, complementary slackness requires \alpha_{j}^{+} = 0. So \nabla L(\beta)_{j} + \lambda = \alpha_{j}^{+} = 0. Hence, \nabla L(\beta)_{j} = -\lambda < 0 since \lambda > 0. At the same time, -\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} \ge 0 so 2 \lambda = \alpha_{j}^{-} > 0 since \lambda > 0. Then complementary slackness requires \beta_{j}^{-} = 0. Hence, when \beta_{j}^{+} > 0, we have \beta_{j}^{-}=0 and \nabla L(\beta)_{j} = -\lambda

Similarly, when \beta_{j}^{-} > 0, \lambda > 0, complementary slackness requires \alpha_{j}^{-}=0. So -\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} = 0 and \nabla L(\beta)_{j}=\lambda>0 since \lambda > 0. Then from \nabla L(\beta)_{j} + \lambda = \alpha_{j}^{+} \ge 0 and the above, we get 2 \lambda = \alpha_{j}^{+} > 0. Then complementary slackness requires \beta_{j}^{+} = 0. Hence, when \beta_{j}^{-} > 0, we have \beta_{j}^{+}=0 and \nabla L(\beta)_{j} = \lambda.

Since \beta_{j} = \beta_{j}^{+} - \beta_{j}^{-}, this means that when \beta_{j} > 0, \nabla L(\beta)_{j} = -\lambda. And when \beta_{j} <0, \nabla L(\beta)_{j} = \lambda. Combining this with \lvert \nabla L(\beta)_{j} \rvert \le \lambda \; \forall j, we arrive at the same convergence requirements that we obtained before using subdifferential calculus.
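Those rules are easy to check numerically. Here is a minimal sketch of such a check, for the least-squares lasso written as \frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\lambda\|\mathbf{\beta}\|_{\ell_1} (no intercept); keep in mind that glmnet scales the loss by 1/n, so its \lambda corresponds (roughly) to n\lambda here,

lasso_kkt_check = function(X, y, beta, lambda, tol=1e-3){
  g = t(X) %*% (y - X %*% beta)       # g = X'(y - X beta), as above
  nonzero = which(beta != 0)
  zero    = which(beta == 0)
  c(all(abs(g[zero]) <= lambda + tol),                          # |g_k| <= lambda when beta_k = 0
    all(abs(g[nonzero] - lambda*sign(beta[nonzero])) <= tol))   # g_k = lambda sign(beta_k) otherwise
}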

For convenience, introduce the soft-thresholding functionS(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }\gamma<|z|\text{ and }z>0\\z+\gamma&\text{ if }\gamma<|z|\text{ and }z<0 \\0&\text{ if }\gamma\geq|z|\end{cases}
Noticing that, when the columns of \mathbf{X} are orthonormal, the optimization problem \frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}can also be written
\min\left\lbrace\sum_{j=1}^p -\widehat{\beta}_j^{ols}\cdot\beta_j+\frac{1}{2}\beta_j^2+\lambda|\beta_j|\right\rbrace(up to an additive constant), observe that\widehat{\beta}_{j,\lambda}=S(\widehat{\beta}_j^{ols},\lambda)which is a coordinate-wise update.
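Just to convince ourselves, a quick numerical check of that scalar problem (the values of b_ols and \lambda below are arbitrary, chosen for illustration),

b_ols = .8 ; lambda = .3
f = function(b) -b_ols*b + b^2/2 + lambda*abs(b)
optimize(f, c(-2,2))$minimum              # numerical minimizer
sign(b_ols)*max(abs(b_ols)-lambda,0)      # S(b_ols, lambda)

and the two values coincide.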

Now, if we consider a (slightly) more general problem, with weights in the first part\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n{\color{red}{\omega_i}} [y_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2+\lambda \sum_j |\beta_j|\right\rbracethe coordinate-wise update becomes
\widehat{\beta}_{j,\lambda,{\color{red}{\omega}}}=S(\widehat{\beta}_j^{{\color{red}{\omega-}}ols},\lambda)
An alternative is to set\mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}
so that the optimization problem in \beta_j (the other coordinates being held fixed) can be written, equivalently,
\min\left\lbrace\frac{1}{2n}\|\mathbf{r}_j-\beta_j\mathbf{x}_j\|^2+\lambda |\beta_j|\right\rbrace
hence\min\left\lbrace\frac{1}{2n}\left[\beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^T\mathbf{x}_j\right]+\lambda |\beta_j|\right\rbrace
and one gets
\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)
or, if we develop
\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
Again, if there are weights \mathbf{\omega}=(\omega_i), the coordinate-wise update becomes
\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
The code to compute this componentwise descent is

soft_thresholding = function(x,a){
  result = numeric(length(x))
  result[which(x > a)]  <- x[which(x > a)] - a
  result[which(x < -a)] <- x[which(x < -a)] + a
  return(result)
}

and the code

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
    beta0list = numeric(length(maxiter+1))
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[1] = beta0
    for (j in 1:maxiter){
      for (k in 1:length(beta)){
        r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
        beta[k] = (1/sum(omega*X[,k]^2))*soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
      }
      beta0 = sum(y-X%*%beta)/(length(y))
      beta0list[j+1] = beta0
      betalist[[j+1]] = beta
      obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - 
beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
      if (norm(rbind(beta0list[j],betalist[[j]]) - rbind(beta0,beta),'F') < tol) { break } 
    } 
return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }
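Just to check that the function runs, here is a possible call on the standardized covariates above (the value of \lambda is arbitrary, and note that the scaling of \lambda in this implementation is not the same as in glmnet, so the two values are not directly comparable),

fit_ls = lasso_coord_desc(X, y, beta=rep(0,ncol(X)), lambda=.001)
fit_ls$beta
fit_ls$intercept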

Let’s keep that one warm, and let’s get back to our initial problem.

The lasso logistic regression

The trick here is that the logistic problem can be formulated as a quadratic programming problem. Recall that the log-likelihood is here \log\mathcal{L}=\frac{1}{n}\sum_{i=1}^n y_i\cdot(\beta_0+\mathbf{x}_i^T\mathbf{\beta})-\log[1+\exp(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]
which is a concave function of the parameters. Hence, one can use a quadratic approximation of the log-likelihood – using a Taylor expansion around the current estimate,\log\mathcal{L}\approx\log\mathcal{L}'=\text{cst}-\frac{1}{2n}\sum_{i=1}^n \omega_i\cdot[z_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2
where z_i is the working response
z_i=(\beta_0+\mathbf{x}_i^T\mathbf{\beta})+\frac{y_i-p_i}{p_i[1-p_i]}
p_i is the predictionp_i = \frac{\exp[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{1+\exp[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}and \omega_i are weights \omega_i = p_i[1-p_i].

Thus, we obtain a penalized weighted least-squares problem, and we can use what was done previously

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0 = sum(y-X%*%beta)/(length(y))
  p = exp(beta0*rep(1,length(y)) + X%*%beta)/(1+exp(beta0*rep(1,length(y)) + X%*%beta))
  z = beta0*rep(1,length(y)) + X%*%beta + (y-p)/(p*(1-p))
  omega = p*(1-p)/(sum((p*(1-p))))
    beta0list = numeric(length(maxiter+1))
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[1] = beta0
    for (j in 1:maxiter){
      for (k in 1:length(beta)){
        r = z - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
       beta[k] = (1/sum(omega*X[,k]^2))*soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
      }
      beta0 = sum(y-X%*%beta)/(length(y))
      beta0list[j+1] = beta0
      betalist[[j+1]] = beta
      obj[j] = (1/2)*(1/length(y))*norm(omega*(z - X%*%beta - 
beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
  p = exp(beta0*rep(1,length(y)) + X%*%beta)/(1+exp(beta0*rep(1,length(y)) + X%*%beta))
  z = beta0*rep(1,length(y)) + X%*%beta + (y-p)/(p*(1-p))
  omega = p*(1-p)/(sum((p*(1-p))))
      if (norm(rbind(beta0list[j],betalist[[j]]) - 
rbind(beta0,beta),'F') < tol) { break } 
        } 
return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }

It looks like what we get when calling glmnet… and here, we do have null components for some \lambda large enough! Really null… and that’s cool, actually.
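For instance, a possible call (again with an arbitrary \lambda, on a scale that differs from glmnet’s), to be compared with the glmnet output shown earlier,

fit_logit = lasso_coord_desc(X, y, beta=rep(0,ncol(X)), lambda=.005)
fit_logit$beta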

Application on our second dataset

Consider now the second dataset, with two covariates. The code to get lasso estimates is

df0 = df
df0$y = as.numeric(df$y)-1
plot_lambda = function(lambda){
m = apply(df0,2,mean)
s = apply(df0,2,sd)
for(j in 1:2) df0[,j] <- (df0[,j]-m[j])/s[j]
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1,lambda=lambda)
u = seq(0,1,length=101)
p = function(x,y){
  xt = (x-m[1])/s[1]
  yt = (y-m[2])/s[2]
  predict(reg,newx=cbind(x1=xt,x2=yt),type="response")}
v = outer(u,u,p)
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)}

Consider some small value of \lambda, so that we only have some sort of shrinkage of the parameters,

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=exp(-2.8))
plot_lambda(exp(-2.8))


But with a larger \lambda, there is variable selection: here \widehat{\beta}_{1,\lambda}=0

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=exp(-2.1))
plot_lambda(exp(-2.1))


(to be continued…)

Classification from scratch, penalized Ridge logistic 4/8

Fourth post of our series on classification from scratch, following the previous post which was some sort of detour on kernels. But today, we’ll get back on the logistic model.

Formal approach of the problem

We’ve seen before that the classical estimation technique used to estimate the parameters of a parametric model was to use the maximum likelihood approach. More specifically, \widehat{\mathbf{\beta}}=\text{argmax}\lbrace \log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})\rbraceThe objective function here focuses (only) on the goodness of fit. But usually, in econometrics, we believe something like non sunt multiplicanda entia sine necessitate (“entities are not to be multiplied without necessity”), the parsimony principle, simpler theories are preferable to more complex ones. So we want to penalize for too complex models.

This is not a bad idea. It is mentioned here and there in econometrics textbooks, but usually for model choice, not for inference. Usually, we estimate parameters using maximum likelihood techniques, and then we use AIC or BIC to compare two models. Recall that the Akaike criterion (AIC) is based on-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\text{dim}(\widehat{\mathbf{\beta}})We have on the left a measure of the goodness of fit, and on the right, a penalty increasing with the “complexity” of the model.

Very quickly, here, the complexity is the number of variables used. I will not enter into details about the concept of sparsity (and the true dimension of the problem); I recommend reading the book by Martin Wainwright, Robert Tibshirani and Trevor Hastie on that issue. But assume that we do not make any variable selection, and that we consider the regression on all covariates. Define\Vert\mathbf{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~\Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|,~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}for any \mathbf{a}\in\mathbb{R}^d. One might say that the AIC could be written-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\|\widehat{\mathbf{\beta}}\|_{\ell_0}And actually, this will be our objective function. More specifically, we will consider
\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|\rbracefor some norm \|\cdot\|. I will not get back here on the motivation and the (theoretical) properties of those estimates (that will actually be discussed in the Summer School in Barcelona, in July), but in this post, I want to discuss the numerical algorithms used to solve such optimization problems, for \|\cdot\|_{\ell_2} (the Ridge regression) and for \|\cdot\|_{\ell_1} (the LASSO regression).

Normalization of the covariates

The problem with \|\mathbf{\beta}\| is that the norm should make sense, somehow: a small \mathbf{\beta}_j only makes sense relative to the scale (the “dimension”) of x_j. So, the first step will be to consider linear transformations of all covariates x_j to get centered and scaled variables (with unit variance)

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)

Ridge Regression (from scratch)

Before running some codes, recall that we want to solve something like\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_2}^2\rbrace In the case where we consider the log-likelihood of some Gaussian variable, we get the sum of squared residuals, and we can obtain an explicit solution. But not in the context of a logistic regression.
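In that Gaussian case, with the \lambda\|\mathbf{\beta}\|_{\ell_2}^2 penalty, the explicit solution is \widehat{\mathbf{\beta}}_{\lambda}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{y}. Just as an illustration (treating the 0/1 response as numeric, with an arbitrary \lambda and no intercept),

lambda = 1
solve(t(X)%*%X + lambda*diag(ncol(X))) %*% t(X) %*% y   # explicit ridge solution, least-squares case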

The heuristics about Ridge regression is the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue circle is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem : \min_{\mathbf{\beta}:\|\mathbf{\beta}\|^2_{\ell_2}\leq s} \lbrace \sum_{i=1}^n -\log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) \rbracecan be written equivalently (it is a strictly convex problem)\min_{\mathbf{\beta},\lambda} \lbrace -\sum_{i=1}^n \log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) +\lambda \|\mathbf{\beta}\|_{\ell_2}^2 \rbraceThus, the constrained maximum should lie in the blue disk

LogLik = function(bbeta){
  b0=bbeta[1]
  beta=bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) - 
  (1-y)*log(1 + exp(b0+X%*%beta)))}
u = seq(-4,4,length=251)
v = outer(u,u,function(x,y) LogLik(c(1,x,y)))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
u = seq(-1,1,length=251)
lines(u,sqrt(1-u^2),type="l",lwd=2,col="blue")
lines(u,-sqrt(1-u^2),type="l",lwd=2,col="blue")

Let us consider the objective function, with the following code

PennegLogLik = function(bbeta,lambda=0){
  b0   = bbeta[1]
  beta = bbeta[-1]
 -sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*
  log(1 + exp(b0+X%*%beta)))+lambda*sum(beta^2)
}

Why not try a standard optimisation routine? In the very first post of this series, we did mention that using optimization routines was not clever, since they were strongly relying on the starting point. But here, it is not the case

lambda = 1
beta_init = lm(PRONO~.,data=myocarde)$coefficients
vpar = matrix(NA,1000,8)
for(i in 1:1000){
vpar[i,] = optim(par = beta_init*rnorm(8,1,2), 
function(x) PennegLogLik(x,lambda), method = "BFGS", control = list(abstol=1e-9))$par}
par(mfrow=c(1,2))
plot(density(vpar[,2]),ylab="",xlab=names(myocarde)[1])
plot(density(vpar[,3]),ylab="",xlab=names(myocarde)[2])


Clearly, even if we change the starting point, it looks like we converge towards the same value. That could be considered as the optimum.

The code to compute \widehat{\mathbf{\beta}}_{\lambda} would then be

opt_ridge = function(lambda){
beta_init = lm(PRONO~.,data=myocarde)$coefficients
logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda), 
method = "BFGS", control=list(abstol=1e-9))
logistic_opt$par[-1]}

and we can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} as a function of {\lambda}

v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(opt_ridge)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1])
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])

At least it seems to make sense: we can observe the shrinkage as \lambda increases (we’ll get back to that later on).

Ridge, using the Newton-Raphson algorithm

We’ve seen that we can also use Newton Raphson to solve this problem. Without the penalty term, the algorithm was\mathbf{\beta}_{new} = \mathbf{\beta}_{old} - \left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}where
\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})and\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}where \mathbf{\Delta}_{old} is the diagonal matrix with terms \mathbf{p}_{old}(1-\mathbf{p}_{old}) on the diagonal.

Thus\mathbf{\beta}_{new} = \mathbf{\beta}_{old} + (\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T[\mathbf{y}-\mathbf{p}_{old}]that we can also write\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. Here, on the penalized problem, we can easily prove that\frac{\partial\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}=\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}-2\lambda\mathbf{\beta}_{old}while\frac{\partial^2\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}-2\lambda\mathbb{I}Hence\mathbf{\beta}_{\lambda,new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}+2\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}
The code is then

Y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
X = cbind(1,X)
colnames(X) = c("Inter",names(myocarde[,1:7]))
 beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   Delta = matrix(0,nrow(X),nrow(X));diag(Delta)=(pi*(1-pi))
   z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
   B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
   beta = cbind(beta,B)}
beta[,8:10]
              [,1]        [,2]        [,3]
XInter  0.59619654  0.59619654  0.59619654
XFRCAR  0.09217848  0.09217848  0.09217848
XINCAR  0.77165707  0.77165707  0.77165707
XINSYS  0.69678521  0.69678521  0.69678521
XPRDIA -0.29575642 -0.29575642 -0.29575642
XPAPUL -0.23921101 -0.23921101 -0.23921101
XPVENT -0.33120792 -0.33120792 -0.33120792
XREPUL -0.84308972 -0.84308972 -0.84308972

Again, it seems that convergence is very fast.

And interestingly, with that algorithm, we can also derive the variance of the estimator\text{Var}[\widehat{\mathbf{\beta}}_{\lambda}]=[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{\Delta}\text{Var}[\mathbf{z}]\mathbf{\Delta}\mathbf{X}[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}where\text{Var}[\mathbf{z}]=\mathbf{\Delta}^{-1}

The code to compute \widehat{\mathbf{\beta}}_{\lambda} as a function of \lambda is then

newton_ridge = function(lambda=1){
 beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:20){
   pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   Delta = matrix(0,nrow(X),nrow(X));diag(Delta)=(pi*(1-pi))
   z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
   B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
   beta = cbind(beta,B)}
Varz = solve(Delta)
Varb = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% t(X)%*% Delta %*% Varz %*%
  Delta %*% X %*% solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X)))
return(list(beta=beta[,ncol(beta)],sd=sqrt(diag(Varb))))}

We can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} (as a function of \lambda)

v_lambda=c(exp(seq(-2,5,length=61)))
est_ridge=Vectorize(function(x) newton_ridge(x)$beta)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])


and to get the evolution of the variance

v_lambda=c(exp(seq(-2,5,length=61)))
est_ridge=Vectorize(function(x) newton_ridge(x)$sd)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i],lwd=2)


Recall that when \lambda=0 (on the left of the graphs), \widehat{\mathbf{\beta}}_{0}=\widehat{\mathbf{\beta}}^{mco} (no penalty). Thus, as \lambda increases, (i) the bias increases (estimates tend to 0) and (ii) the variances decrease.

Ridge, using glmnet

As always, there are R functions available to run a ridge regression. Let us use the glmnet function, with \alpha=0

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
library(glmnet)
glm_ridge = glmnet(X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)

as a function of the norm

the \ell_1 norm here, I don’t know why. I don’t know either why all graphs obtained with different optimisation routines are so different… Maybe that will be for another post…

Ridge with orthogonal covariates

An interesting case is obtained when covariates are orthogonal. This can be obtained using a PCA of the covariates.

library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord

Let us run a ridge regression on those (orthogonal) covariates

library(glmnet)
glm_ridge = glmnet(pca_X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)

plot(glm_ridge,col=colrs,lwd=2)

We clearly observe the shrinkage of the parameters, in the sense that \widehat{\mathbf{\beta}}_{\lambda}^{\perp}=\frac{\widehat{\mathbf{\beta}}^{mco}}{1+\lambda}
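That identity is exact for the least-squares ridge with orthonormal covariates. A quick numerical check, rescaling the principal components to get orthonormal columns (the \lambda below is arbitrary, and glmnet parametrizes \lambda differently, so the check is only indicative of what glmnet does),

Q = qr.Q(qr(pca_X))                  # orthonormal columns spanning the principal components
lambda = 2
b_ols   = t(Q) %*% y                                        # OLS coefficients, since Q'Q = I
b_ridge = solve(t(Q)%*%Q + lambda*diag(ncol(Q))) %*% t(Q) %*% y
max(abs(b_ridge - b_ols/(1+lambda)))                        # numerically zero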

Application

Let us try with our second set of data

df0 = df
df0$y=as.numeric(df$y)-1
plot_lambda = function(lambda){
m = apply(df0,2,mean)
s = apply(df0,2,sd)
for(j in 1:2) df0[,j] = (df0[,j]-m[j])/s[j]
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0,lambda=lambda)
u = seq(0,1,length=101)
p = function(x,y){
  xt = (x-m[1])/s[1]
  yt = (y-m[2])/s[2]
  predict(reg,newx=cbind(x1=xt,x2=yt),type='response')}
v = outer(u,u,p)
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}

We can try various values of \lambda

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(.2))
plot_lambda(.2)


or

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(1.2))
plot_lambda(1.2)


Next step is to change the norm in the penalty, and use the \ell_1 norm (to be continued…)

Classification from scratch, logistic with kernels 3/8

Third post of our series on classification from scratch, following the previous post introducing smoothing techniques, with (b)-splines. We consider here kernel-based techniques. Note that here, we do not use the “logistic” model… it is purely non-parametric.

Kernel based estimates, from scratch

I like kernels because they are somehow very intuitive. With GLMs, the goal is to estimate \hat{m}(\mathbf{x})=\mathbb{E}(Y|\mathbf{X}=\mathbf{x}). Heuristically, we want to compute the (conditional) expected value on the neighborhood of \mathbf{x}. If we consider some spatial model, where \mathbf{x} is the location, we want the expected value of some variable Y, “on the neighborhood” of \mathbf{x}. A natural approach is to use some administrative region (county, département, region, etc.). This means that we have a partition of \mathcal{X} (the space where the variable(s) lie). This will yield the regressogram, introduced in Tukey (1961). For convenience, assume some interval / rectangle / box type of partition. In the univariate case, consider \hat{m}_{\mathbf{a}}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))}or the moving regressogram \hat{m}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])}In that case, the neighborhood is defined as the interval (x\pm h). That’s nice, but clearly very simplistic. If \mathbf{x}_i=\mathbf{x} and \mathbf{x}_j=\mathbf{x}-h+\varepsilon (with \varepsilon>0), both observations are used to compute the conditional expected value. But if \mathbf{x}_{j'}=\mathbf{x}-h-\varepsilon, only \mathbf{x}_i is considered, even if the distance between \mathbf{x}_{j} and \mathbf{x}_{j'} is extremely small. Thus, a natural idea is to use weights that are functions of the distance between the \mathbf{x}_{i}’s and \mathbf{x}. Use\tilde{m}(x)=\frac{\sum_{i=1}^ny_i\cdot k_h\left({x-x_i}\right)}{\sum_{i=1}^nk_h\left({x-x_i}\right)}where (classically)k_h(x)=k\left(\frac{x}{h}\right)for some kernel k (a non-negative function that integrates to one) and some bandwidth h. Usually, kernels are denoted with a capital letter K, but I prefer to use k, because it can be interpreted as the density of some random noise we add to all observations (independently).

Actually, one can derive that estimate by using kernel-based estimators of densities. Recall that\tilde{f}(\mathbf{y})=\frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n k\left(\mathbf{H}^{-1/2}(\mathbf{y}-\mathbf{y}_i)\right)
Now, use the fact that the expected value can be defined asm(x)=\int yf(y|x)dy=\frac{\int y f(y,x)dy}{\int f(y,x)dy}Consider now a bivariate (product) kernel to estimate the joint density. The numerator is estimated by\frac{1}{nh}\sum_{i=1}^n\int y_i k\left(t,\frac{x-x_i}{h}\right)dt=\frac{1}{nh}\sum_{i=1}^ny_i \kappa\left(\frac{x-x_i}{h}\right)while the denominator is estimated by\frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)=\frac{1}{nh}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right)In a general setting, we still use product kernels between Y and \mathbf{X} and write \widehat{m}_{\mathbf{H}}(\mathbf{x})=\displaystyle{\frac{\sum_{i=1}^ny_i\cdot k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}{\sum_{i=1}^n k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}}for some symmetric positive definite bandwidth matrix \mathbf{H}, and k_{\mathbf{H}}(\mathbf{x})=\det[\mathbf{H}]^{-1}k(\mathbf{H}^{-1}\mathbf{x})

Now that we know what kernel estimates are, let us use them. For instance, assume that k is the density of the \mathcal{N}(0,1) distribution. At point x, with a bandwidth h we get the following code

mean_x = function(x,bw){
  w = dnorm((myocarde$INSYS-x)/bw, mean=0,sd=1)
  weighted.mean(myocarde$PRONO,w)}
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


and of course, we can change the bandwidth.

v = Vectorize(function(x) mean_x(x,2))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


We observe what we can read in any textbook: with a smaller bandwidth, we get more variance and less bias. “More variance” means here more variability (since the neighborhood is smaller, there are fewer points to compute the average, so the estimate is more volatile), and “less bias” in the sense that the expected value is supposed to be computed at point x, so the smaller the neighborhood, the better.

Using ksmooth R function

Actually, there is a function in R to compute this kernel regression.

reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = 2*exp(1))
plot(reg$x,reg$y,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)

We can replicate our previous estimate. Nevertheless, the output is not a function, but two vectors (the x’s and the y’s). That’s nice to get a graph, but that’s all we get. Furthermore, as we can see, the bandwidth is not exactly the same as the one we used before. I did not find any information online, so I tried to compare it with the function we wrote before

g=function(bk=3){
reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = bk)
f=function(bm){
  v = Vectorize(function(x) mean_x(x,bm))(reg$x)
  z=reg$y-v
  sum((z[!is.na(z)])^2)}
optim(bk,f)$par}
x=seq(1,10,by=.1)
y=Vectorize(g)(x)
plot(x,y)
abline(0,exp(-1),col="red")
abline(0,.37,col="blue")


There is a slope of 0.37, which is actually e^{-1}. Coincidence ? I don’t know to be honest…
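Actually, one likely explanation, rather than a coincidence: according to the documentation of ksmooth, the kernel is scaled so that its quartiles are at \pm0.25\cdot\text{bandwidth}, which for a Gaussian kernel corresponds to a standard deviation of 0.25\cdot\text{bandwidth}/\Phi^{-1}(0.75)\approx 0.37\cdot\text{bandwidth}, close to (but not exactly) e^{-1},

.25/qnorm(.75)    # standard deviation per unit of bandwidth in ksmooth, about 0.3706
exp(-1)           # about 0.3679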

Application in higher dimension

Consider now our bivariate dataset, and consider some product of univariate (Gaussian) kernels

u = seq(0,1,length=101)
p = function(x,y){
  bw1 = .2; bw2 = .2
  w = dnorm((df$x1-x)/bw1, mean=0,sd=1)*
      dnorm((df$x2-y)/bw2, mean=0,sd=1)
  weighted.mean(df$y=="1",w)
}
v = outer(u,u,Vectorize(p))
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

We get the following prediction

Here, the different colors are probabilities.

k-nearest neighbors

An alternative is to consider a neighborhood not defined using a distance to point \mathbf{x} but the k-neighbors, with the n observations we got.\tilde{m}_k(\mathbf{x})=\frac{1}{n}\sum_{i=1}^n\omega_{i,k}(\mathbf{x})y_i
where \omega_{i,k}(\mathbf{x})=n/k if i\in\mathcal{I}_{\mathbf{x}}^k with
\mathcal{I}_{\mathbf{x}}^k=\{i:\mathbf{x}_i\text{ one of the }k\text{ nearest observations to }\mathbf{x}\}
The difficult part here is that we need a valid distance. If units are very different on each component, using the Euclidean distance will be meaningless. So, quite naturally, let us consider here the Mahalanobis distance

Sigma = var(myocarde[,1:7])
Sigma_Inv = solve(Sigma)
d2_mahalanobis = function(x,y,Sinv){as.numeric(x-y)%*%Sinv%*%t(x-y)}
k_closest = function(i,k){
  vect_dist = function(j) d2_mahalanobis(myocarde[i,1:7],myocarde[j,1:7],Sigma_Inv)
vect = Vectorize(vect_dist)((1:nrow(myocarde))) 
which(rank(vect)<=k)}

Here we have a function to find the k closest neighbors of some observation. Then two things can be done to get a prediction. The goal is to predict a class, so we can think of using a majority rule: the prediction for y_i is the class of the majority of its neighbors.

k_majority = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = sort(myocarde$PRONO[k_closest(i,k)])[(k+1)/2]
  return(Y)}

But we can also compute the proportion of black points among the closest neighbors. It can actually be interpreted as the probability to be black (that’s actually what was said at the beginning of this post, with kernels),

k_mean = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = mean(myocarde$PRONO[k_closest(i,k)])
  return(Y)}

We can see on our dataset the observation, the prediction based on the majority rule, and the proportion of dead individuals among the 7 closest neighbors

cbind(OBSERVED=myocarde$PRONO,
MAJORITY=k_majority(7),PROPORTION=k_mean(7))
      OBSERVED MAJORITY PROPORTION
 [1,]        1        1  0.7142857
 [2,]        0        1  0.5714286
 [3,]        0        0  0.1428571
 [4,]        1        1  0.5714286
 [5,]        0        1  0.7142857
 [6,]        0        0  0.2857143
 [7,]        1        1  0.7142857
 [8,]        1        0  0.4285714
 [9,]        1        1  0.7142857
[10,]        1        1  0.8571429
[11,]        1        1  1.0000000
[12,]        1        1  1.0000000

Here, we got a prediction for an observed point, located at \boldsymbol{x}_i, but actually, it is possible to seek the k closest neighbors of any point \boldsymbol{x}. Back on our univariate example (to get a graph), we have

mean_x = function(x,k=9){
  w = rank(abs(myocarde$INSYS-x),ties.method ="random")
  mean(myocarde$PRONO[which(w<=9)])}
u=seq(5,55,length=201)
v=Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


That’s not very smooth, but we do not have a lot of points either.

If we use that technique on our two-dimensional dataset, we obtain the following

Sigma_Inv = solve(var(df[,c("x1","x2")]))
u = seq(0,1,length=51)
p = function(x,y){
  k = 6
  vect_dist = function(j)  d2_mahalanobis(c(x,y),df[j,c("x1","x2")],Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(df)) 
  idx  = which(rank(vect)<=k)
  return(mean((df$y==1)[idx]))}
v = outer(u,u,Vectorize(p))
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

This is the idea of local inference, using either kernel on a neighborhood of \mathbf{x} or simply using the k nearest neighbors. Next time, we will investigate penalized logistic regressions, to be continued

Classification from scratch, logistic with splines 2/8

Today, second post of our series on classification from scratch, following the brief introduction on the logistic regression.

Piecewise linear splines

To illustrate what’s going on, let us start with a “simple” regression (with only one explanatory variable). The underlying idea is natura non facit saltus, “nature does not make jumps”, i.e. the processes governing natural things are continuous. That seems to be a rather strong assumption: we could imagine that there is a fixed threshold to explain death. For instance, if patients die (for sure) when the “stroke index” exceeds a threshold, we might expect some discontinuity. Except that if that threshold is a heterogeneous (non-observable continuous) variable, then we get back to the continuity assumption.

The simplest model we can think of to extend the linear model we’ve seen in the previous post is to consider a piecewise linear function, with two parts: small values of x, and larger values of x. The most convenient way to do so is to use the positive part function (x-s)_+, which is the difference between x and s if that difference is positive, and 0 otherwise. For instance, \beta_1 x+\beta_2(x-s)_+ is a piecewise linear function, continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, there is a linear part with slope \beta_1, and for larger values of x, there is a linear part with slope \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.

And of course, it is possible to consider more than one knot. The function to get the positive value is the following

pos = function(x,s) (x-s)*(x>=s)

then we can use it directly in our regression model

reg = glm(PRONO~INSYS+pos(INSYS,15)+
pos(INSYS,25),data=myocarde,family=binomial)

The output of the regression is here

summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)     -0.1109     3.2783  -0.034   0.9730  
INSYS           -0.1751     0.2526  -0.693   0.4883  
pos(INSYS, 15)   0.7900     0.3745   2.109   0.0349 *
pos(INSYS, 25)  -0.5797     0.2903  -1.997   0.0458 *

Hence, the original slope, for very small values, is not significant, but then, above 15, it becomes significantly positive. And above 25, there is a significant change again. We can plot it to see what’s going on

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,type="l")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() linear splines

Using the bs() function, things are slightly different. We will use here so-called b-splines,

library(splines)

We can define spline functions with support (5,55) and with knots \{15,25\}

clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degre=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)


As we can see, the functions defined here are different from the ones before, but we still have (piecewise) linear functions on each segment (5,15), (15,25) and (25,55). But linear combinations of those functions (the two sets of functions) will generate the same space. Said differently, even if the interpretation of the output is different, the predictions should be the same

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degre=1),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept)    -0.9863     2.0555  -0.480   0.6314  
bs(INSYS,..)1  -1.7507     2.5262  -0.693   0.4883  
bs(INSYS,..)2   4.3989     2.0619   2.133   0.0329 *
bs(INSYS,..)3   5.4572     5.4146   1.008   0.3135

Observe that there are three coefficients, as before, but again, the interpretation is here more complicated…

v=predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Nevertheless, the prediction is the same… and that’s nice.

Piecewise quadratic splines

Let us go one step further… Can we also have continuity of the derivative? Yes, and that’s easy actually, considering parabolic functions. Instead of using a decomposition on x,(x-s_1)_+ and (x-s_2)_+, consider now a decomposition on x,x^{\color{red}{2}},(x-s_1)^{\color{red}{2}}_+ and (x-s_2)^{\color{red}{2}}_+.

pos2 = function(x,s) (x-s)^2*(x>=s)
reg = glm(PRONO~poly(INSYS,2)+pos2(INSYS,15)+pos2(INSYS,25),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
                Estimate Std. Error z value Pr(>|z|)  
(Intercept)      29.9842    15.2368   1.968   0.0491 *
poly(INSYS, 2)1 408.7851   202.4194   2.019   0.0434 *
poly(INSYS, 2)2 199.1628   101.5892   1.960   0.0499 *
pos2(INSYS, 15)  -0.2281     0.1264  -1.805   0.0712 .
pos2(INSYS, 25)   0.0439     0.0805   0.545   0.5855

As expected, there are here five coefficients: the intercept and two for the part on the left (three parameters for the parabolic function), and then two additional terms for the part in the center – here (15,25) – and for the part on the right. Of course, for each portion, there is only one degree of freedom since we have a parabolic function (three coefficients) but two constraints (continuity, and continuity of the first order derivative).

On a graph, we get the following

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() quadratic splines

Of course, we can do the same with our R function. But as before, the basis functions are expressed differently here

 x = seq(0,60,by=.25)
B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degre=2)
matplot(x,B,type="l",xlab="INSYS",col=clr6)


If we run R code, we get

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degre=2),data=myocarde,
family=binomial)
summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)       7.186      5.261   1.366   0.1720  
bs(INSYS, ..)1  -14.656      7.923  -1.850   0.0643 .
bs(INSYS, ..)2   -5.692      4.638  -1.227   0.2198  
bs(INSYS, ..)3   -2.454      8.780  -0.279   0.7799  
bs(INSYS, ..)4    6.429     41.675   0.154   0.8774

But that’s not really a big deal since the prediction is exactly the same

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Cubic splines

Last, but not least, we can reach the cubic splines. With our previous notation, we would consider a decomposition on (guess what) x,x^2,x^{\color{red}{3}},(x-s_1)^{\color{red}{3}}_+,(x-s_2)^{\color{red}{3}}_+, to get, this time, continuity, as well as continuity of the first two derivatives (and thus a very smooth function, since even variations will be smooth). If we use the bs function, the basis is the following

B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degre=3)
matplot(x,B,type="l",lwd=2,col=clr6,lty=1,ylim=c(-.2,1.2))
abline(v=c(5,15,25,55),lty=2)

and the prediction will now be

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degre=3),
data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Two last things before concluding (for today), the location of the knots, and the extension to additive models.

Location of knots

In many applications, we do not want to specify the location of the knots. We just want – say – three (intermediary) knots. This can be done using

reg = glm(PRONO~1+bs(INSYS,degree=1,df=4),data=myocarde,family=binomial)

We can actually get the locations of the knots by looking at

attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 1L, knots = c(15.8, 21.4, 27.15), 
Boundary.knots = c(8.7, 54), intercept = FALSE)

which provides us with the location of the boundary knots (the minimum and the maximum of our sample) but also the three intermediary knots. Observe that actually, those five values are just (empirical) quantiles

quantile(myocarde$INSYS,(0:4)/4)
   0%   25%   50%   75%  100% 
 8.70 15.80 21.40 27.15 54.00

If we plot the prediction, we get

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


If we get back to what was computed before the logit transformation, we clearly see ruptures at the different quantiles

B = bs(x,degree=1,df=4)
B = cbind(1,B)
y = B%*%coefficients(reg)
plot(x,y,type="l",col="red",lwd=2)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


Note that if we do not specify anything about the knots (number or location), we get no knots…

reg = glm(PRONO~1+bs(INSYS,degree=2),data=myocarde,family=binomial)
attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 2L, knots = numeric(0), 
Boundary.knots = c(8.7,54), intercept = FALSE)

and if we look at the prediction

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)


actually, it is the same as a quadratic regression (as expected actually)

reg = glm(PRONO~1+poly(INSYS,degree=2),data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)

Additive models

Consider now the second dataset, with two variables. Consider here a model like
\mathbb{P}[Y|X_1=x_1,X_2=x_2]=\frac{\exp[\eta(x_1,x_2)]}{1+\exp[\eta(x_1,x_2)]}
where
\eta(x_1,x_2)=\beta_0+\color{red}{s_1(x_1)}+\color{blue}{s_2(x_2)}
\color{red}{s_1(x_1)}=\beta_{1,0}x_1+\beta_{1,1}(x_1-s_{11})_++\beta_{1,2}(x_1-s_{12})_+
and
\color{blue}{s_2(x_2)}=\beta_{2,0}x_2+\beta_{2,1}(x_2-s_{21})_++\beta_{2,2}(x_2-s_{22})_+
It might seem a little bit restrictive, but that’s actually the idea of additive models.

reg = glm(y~bs(x1,degree=1,df=3)+bs(x2,degree=1,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Now, if we think about it, we’ve been able to get a “perfect” model, so, somehow, it no longer seems continuous…

persp(u,u,v,theta=20,phi=40,col="green")


Of course, it is… it is piecewise linear, with hyperplanes, some being almost vertical.

And one can also consider piecewise quadratic functions

reg = glm(y~bs(x1,degree=2,df=3)+bs(x2,degree=2,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Funny thing, we now have two “perfect” models, with different areas for the white and the black dots… Don’t ask me how to choose on that one.

In R, it is possible to use the mgcv package to run a gam regression. It is used for generalized additive models, but here, we have only one variable, so it is difficult to see the “additive” part, actually. And to be more specific, mgcv is using penalized quasi-likelihood from the nlme package (but we’ll get back on penalized routines later on).

But maybe I should also mention another smoothing tool before, kernels (and maybe also k-nearest neighbors). To be continued

Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that given covariates \mathbf{x}, Y has a Bernoulli distribution,Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}The goal is to estimate parameter \mathbf{\beta}.

Recall that the heuristics for the use of that function for the probability is that\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}

Maximum of the (log)-likelihood function

The log-likelihood is here\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i) where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
 -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
 }

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~.,data=myocarde)$coefficients

Why not start with the OLS parameters? Somehow, we might think that, at least, the signs should be ok, for instance. Anyway, we need a starting point, so let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

 logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS    
 1.656926397  0.045234029 -2.119441743  0.204023835 
       PRDIA        PAPUL        PVENT        REPUL 
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)

simu = function(i){
logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, 
negLogLik, hessian=TRUE, method = "BFGS", 
control=list(abstol=1e-9))
logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])

Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) 
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) + 
(1-vY)*(-log(1 + exp(mX %*% vBeta)))) 
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable, 
method = 'L-BFGS-B', gr = likelihoodScore, 
mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let’s be honest here, I do not feel comfortable with those techniques. So, what happened here?

Here, the technique we use is based on the following idea,\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}The problem is that my computer does not know these first and second derivatives. So it will compute them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
 -sum(y*log(logit(X%*%beta)) + 
(1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y)) 
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren’t. From my point of view, that is a major problem (keep in mind that we do not deal here with massive data! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical expressions of those derivatives? Even if some people claim the opposite, sometimes it can actually be useful to do the maths, instead of relying only on numerical approximations.

Newton (or Fisher) Algorithm

If you open any Econometrics textbooks (one can also try to derive it), you will get \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
while\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}

Y=myocarde$PRONO
X=cbind(1,as.matrix(myocarde[,1:7]))
colnames(X)=c("Inter",names(myocarde[,1:7]))
 beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))  # fitted probabilities
   gradient=t(X)%*%(Y-pi)                      # score vector
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X                   # Hessian of the log-likelihood
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}  # Newton step

Observe that here, I use only nine iterations of the algorithm (the matrix beta has ten columns, including the starting point)!

 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point

beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

Nice, isn’t it? It looks like we have a winner. And one can use the inverse of the (negative) Hessian matrix to get standard errors.
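Here is a minimal sketch (not in the original post) of that last remark, reusing the Hessian and the matrix beta produced by the Newton loop above.

V = solve(-Hessian)   # asymptotic variance-covariance matrix of the estimator
# point estimates (last column of beta) together with their standard errors
cbind(estimate = beta[,10], std.error = sqrt(diag(V)))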

Weighted Least-Squares

Let us go one step further. We’ve seen that we want to compute something like\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}(if we substitute the matrices in the analytical expressions) where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that is simply a standard weighted least-squares problem\mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbraceThe only problem here is that the weights \mathbf{\Delta}_{old} are functions of the unknown \mathbf{\beta}_{old}. But actually, if we keep iterating, we should be able to solve it: given \mathbf{\beta}, we get the weights, and with the weights, we can use weighted OLS to get an updated \mathbf{\beta}. That is the idea of iteratively reweighted least squares.

The algorithm will be

df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
p = exp(X %*% beta) / (1+exp(X %*% beta))            # current fitted probabilities
omega = diag(nrow(df))
diag(omega) = (p*(1-p))                              # weights
df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)  # working response
beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients  # weighted OLS update
}

and the output is here

 beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA 
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171 
        PAPUL         PVENT         REPUL 
  0.016756506  -0.106776012  -0.003154187

which is almost what we obtained before. Nice, isn’t it? Actually, here we also get standard errors for the estimators

summary( lm(Z~.,data=df[,-8], weights=diag(omega)))
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475

The standard glm function

Of course, it is possible to use an R built-in function to get our estimate

summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519

Application and visualisation

Let us visualize the prediction obtained from the logistic regression, on our second dataset

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)  # clr10: the 10-colour palette defined in the overview post of this series
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Here, the level curves – or iso-probability curves – are linear, so the space is divided into two regions (0 and 1, survival and death, white and black) by a straight line (a hyperplane in higher dimension). Furthermore, since we have a linear model, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane), parallel to the first one.
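To make that last point visible, here is a small sketch (not in the original post), reusing u, v and the plot above: iso-probability curves drawn at several cutoffs are parallel straight lines.

# add iso-probability curves for a few other cutoffs on the existing plot
contour(u, u, v, levels = c(.25, .75), add = TRUE, lty = 2)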

Next time, we will introduce splines to smooth those continuous covariates… to be continued.

Classification from scratch, overview 0/8

Before my course on « big data and economics » at the University of Barcelona in July, I wanted to upload a series of posts on classification techniques, to give some insight into machine learning tools.

According to a common saying, machine learning algorithms are black boxes. I wanted to revisit that claim. First of all, isn’t that also the case for regression models, like generalized additive models (with splines)? Do you really know what the algorithm is doing? Even for logistic regression: in textbooks, we can easily find the math formulas, but what is really done when I run it in R?

When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let’s try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R. What I plan to cover, within the next two weeks, will be the posts of this series.

I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])

Here is some code to get a visualization of the prediction (here the probability to be a black point)

rmatrix_model = function(model){
u = seq(0,1,length=101)
p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
return(v)}
nice_graph=function(v){
u = seq(0,1,length=101)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))

Note that colors are defined here as

clr10= c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b")

or with some nonlinear model

The second one is a dataset I got from Gilbert Saporta, about heart attacks and survival (our binary variable).

myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))

So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), about comparing models, etc. The goal here is simply to understand what is going on when we call glm, glmnet, gam, random forest, svm, xgboost, or any other function used to get a predictive model.

Visualization of Airline Transportation Data

On Tuesday, I will be in Paris, as a member of the dataviz jury organized by the Direction Generale de l’Aviation Civile, during the “assises nationales du transport aérien“.

In this context, the Ministère de la transition écologique et solidaire is launching a call for projects for the design of an open-source application facilitating the visualisation and sharing of air transport data. Traffic volumes (passengers and aircraft movements), delays and emissions around airports are among the data collected by the DGAC, to be used to build an innovative, interactive and educational data-visualisation tool for professionals and the general public.

There were some nice studies based on those data, available from the dedicated website (even if it can sometimes be hard to get a clear understanding, but that is actually the main challenge with dataviz)

I can also upload some screenshots from the apps that were submitted, and there were nice things, such as

or the following one

Some candidates were selected to present their viz to the jury, and then there will be prizes. More to come on Wednesday, probably.

Some sort of Otto Neurath (isotype picture) map

Yesterday evening, I was walking in Budapest, and I saw a nice map in some sort of Otto Neurath style. It was hand-made, but I thought it should be possible to do it in R, automatically.

A few years ago, Baptiste Coulmont published a nice blog post on the package osmar, that can be used to import OpenStreetMap objects (polygons, lines, etc) in R. We can start from there. More precisely, consider the city of Douai, in France,

The code to read information from OpenStreetMap is the following

library(osmar)
src = osmsource_api()
bb = center_bbox(3.07758808135,50.37404355, 1000, 1000)
ua = get_osm(bb, source = src)

We can extract a lot of things, like buildings, parks, churches, roads, etc. There are two kinds of objects so we will use two functions

listek = function(vc,type="polygons"){
nat_ids = find(ua, way(tags(k %in% vc)))
nat_ids = find_down(ua, way(nat_ids))
nat = subset(ua, ids = nat_ids)
nat_poly = as_sp(nat, type)}
 
listev = function(vc,type="polygons"){
  nat_ids = find(ua, way(tags(v %in% vc)))
  nat_ids = find_down(ua, way(nat_ids))
  nat = subset(ua, ids = nat_ids)
  nat_poly = as_sp(nat, type)}

For instance to get rivers, use

W=listek(c("waterway"))

and to get buildings

M=listek(c("building"))

We can also get churches

C=listev(c("church","chapel"))

but also train stations, airports, universities, hospitals, etc. It is also possible to get streets, or roads

H1=listek(c("highway"),"lines")
H2=listev(c("residential","pedestrian","secondary","tertiary"),"lines")

but it will be more difficult to use afterwards, so let’s forget about those.

We can check that we have everything we need (the other layers used below – P for parks, U for universities, as well as B and T – are obtained with the same two functions)

plot(M)
plot(W,add=TRUE,col="blue")
plot(P,add=TRUE,col="green")
if(!is.null(B)) plot(B,add=TRUE,col="red")
if(!is.null(C)) plot(C,add=TRUE,col="purple")
if(!is.null(T)) plot(T,add=TRUE,col="red")

Now, let us consider a rectangular grid. If there is a river in a cell, I want a river; if there is a church, I want a church, etc. Since there will be one (and only one) picture per cell, there will be priorities. But first we have to check the intersections between our grid cells and the OpenStreetMap polygons.

library(sp)
library(raster)
library(rgdal)
library(rgeos)
library(maptools)
identification = function(xy,h,PLG){
  b=data.frame(x=rep(c(xy[1]-h,xy[1]+h),each=2),
               y=c(c(xy[2]-h,xy[2]+h,xy[2]+h,xy[2]-h)))
  pb1=Polygon(b)    
  Pb1 = list(Polygons(list(pb1), ID=1))
  SPb1 = SpatialPolygons(Pb1, proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs +towgs84=0,0,0"))
  UC=gUnionCascaded(PLG)
  return(gIntersection(SPb1,UC))
}

and then, we identify, as follows

whichidtf = function(xy,h){
  h=.7*h
  label="EMPTY"
if(!is.null(identification(xy,h,M))) label="HOUSE"
if(!is.null(identification(xy,h,P))) label="PARK"
if(!is.null(identification(xy,h,W))) label="WATER"
if(!is.null(identification(xy,h,U))) label="UNIVERSITY"
if(!is.null(identification(xy,h,C))) label="CHURCH"
return(label)
}

Let us use colored rectangles to make sure it works
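The grid breakpoints vx and vy (and the half-width h of a cell) are not defined in the snippet below; a minimal, hypothetical way to build them from the bounding box of the buildings layer could be

# hypothetical construction of the grid (this step is not shown in the original post)
bbx = bbox(M)                             # bounding box of the buildings layer
vx  = seq(bbx[1,1], bbx[1,2], length=26)  # grid breakpoints, west to east
vy  = seq(bbx[2,1], bbx[2,2], length=26)  # grid breakpoints, south to north
h   = diff(vx)[1]/2                       # half-width of a cell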

nx=length(vx)
vx=as.numeric((vx[2:nx]+vx[1:(nx-1)])/2)
ny=length(vy)
vy=as.numeric((vy[2:ny]+vy[1:(ny-1)])/2)
 plot(M,border="white")
 for(i in 1:(nx-1)){
     for(j in 1:(ny-1)){
         lb=whichidtf(c(vx[i],vy[j]),h)
         if(lb=="HOUSE") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="grey")
         if(lb=="PARK") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="green")
         if(lb=="WATER") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="blue")
         if(lb=="CHURCH") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="purple")      
     }}

As a first start, let us agree that it works. To use pics, I borrowed them from https://fontawesome.com/. For instance, we can have a tree

 library(png)
 library(grid)
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/tree.png","tree.png")
 tree = readPNG("tree.png")

Unfortunately, the colour is not good (it is black), but that is easy to fix using the RGBA decomposition provided by that package

 rev_tree=tree
 rev_tree[,,2]=tree[,,4]

We can do the same for houses, churches and water actually

 download.file("http://freakonometrics.hypotheses.org/files/2018/05/angle-double-up.png","angle-double-up.png")
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/home.png","home.png")
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/church.png","curch.png")
water = readPNG("angle-double-up.png")
 rev_water=water
 rev_water[,,3]=water[,,4]
 home = readPNG("home.png")
 rev_home=home
 rev_home[,,4]=home[,,4]*.5
 church = readPNG("church.png")
 rev_church=church
 rev_church[,,1]=church[,,4]*.5
 rev_church[,,3]=church[,,4]*.5

and that’s almost it. We can then add it on the map

 plot(M,border="white")
 for(i in 1:(nx-1)){
   for(j in 1:(ny-1)){
     lb=whichidtf(c(vx[i],vy[j]),h)
     if(lb=="HOUSE")  rasterImage(rev_home,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="PARK") rasterImage(rev_tree,vx[i]-h*.9,vy[j]-h*.8,vx[i]+h*.9,vy[j]+h*.8)
     if(lb=="WATER") rasterImage(rev_water,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="CHURCH") rasterImage(rev_church,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)     
   }}

Nice, isn’t it? (At least as a first draft, done during the lunch break of the R conference in Budapest, today.)

 

When “learning Python” becomes “practicing R” (spoiler)

15 years ago, a student of mine told me that I should start learning Python, that it was really a great language. Students started to learn it, but I kept postponing. A few years ago, I also started Python for Kids with my son, which is really nice actually. That was nice, but not really challenging. A few weeks ago, I also started a crash course in Python, taught by Pierre. The truth is I think I will probably give up. I keep telling myself (1) I can do anything much faster in R, (2) Python is not intuitive, especially when you have been practicing R for almost 20 years… Last week, I also had to link Python and R for our pricing game: Ali wrote some template code in Python, and I had to translate it into R. And it was difficult…

Anyway, since it was a school break this week, I said to my son that we should try to practice together, with a nice challenge. For those willing to try it, you’d better stop here, because I will spoil it.

Continue reading When “learning Python” becomes “practicing R” (spoiler)

Actuarial Pricing Game

Within a few days, we will launch the fourth actuarial pricing game.

  1. People have to register for this new game, by filling in a form online before February 28th, to get a training dataset. As in our previous editions, players can be individuals or teams (each team identified as one player).
  2. When registration closes, players will receive a training dataset (training.csv). The training dataset will be based on two years of data on household policies. There will be information about the insurance policy from underwriters, as well as claims data. They will also receive a pricing dataset (pricing.csv) with only underwriting information and no claims. The goal is to propose a premium for all records in the pricing dataset. Note that those registered will also receive additional information and template files after registration closes.
  3. Before April 9th, 2018, players will have to provide two files: a pricing function (either a formula.py Python function, or a formula.R R function) and a dataset with prices (prices.csv). The pricing function should be an understandable, explicit function which, once applied to the training dataset, yields a premium for each record (a purely hypothetical sketch is given right after this list). There will be limits on the complexity of this function. To ensure this, players will receive instructions on what can be used to construct formulas, either in Python or in R.
  4. Before May 14th, 2018, players are asked again to submit two files: a dataset with prices (second_prices.csv) and a pricing model (either a model.py Python file, or a model.R R file). For this round there are no restrictions on what can be used in the model, as long as the prices dataset (second_prices.csv) can be reproduced using the submitted model file. Players will again receive further instructions on this after registration.
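To fix ideas, here is a purely hypothetical sketch of what such a formula.R pricing function could look like (the column name sum_insured is made up for illustration; the actual templates and the list of allowed operations are only sent to registered players).

# purely hypothetical pricing function, in the spirit of formula.R
formula = function(dat){
  # dat: the pricing (or training) dataset; returns one premium per record
  100 * (1 + 0.002 * dat$sum_insured / 1000)
}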

Unicode, UTF-8… the basics of encoding

A quick little post, without any pretension. Just to come back to some weird things about encoding… I started getting interested in this 3 years ago, while discussing with R users in India and Japan. But let us go back to basics… If we look at the “unicode” page on Wikipedia (or more precisely, if we look at the source code), we can find some information (in the metadata): here, we see that the page is encoded in UTF-8

This is consistent, since more and more online documents seem to be encoded in that format. Going back to the source, encoding is related to the conversion to 0/1 bytes. Plenty of websites explain (roughly) the philosophy. When I was a kid, we were taught about ASCII characters, which use 7 bits, giving only 128 characters, numbered from 0 to 127. For instance “A” was the 65th character, and “a” the 97th. There were exotic symbols (such as “@”, the 64th) and punctuation, but no accents, such as “à”. If we want accented letters, we need more than 128 characters. Then came ISO/IEC 8859, which proposed to encode characters on 8 bits (instead of 7). ISO 8859-1, better known as “latin-1”, became very popular in Latin countries, with the “à“, and the standard kept being extended up to ISO 8859-15 (latin-9), which includes the “€” symbol. For Asian languages, further extensions were needed. In short, it is a never-ending game. In the 90s came the ISO 10646 standard, corresponding to the “Universal Character Set” (UCS), which led to Unicode. I believe purists distinguish between character sets (the set of characters, each one assigned a unique code point, the list of ASCII characters for instance) and the encoding (the way that code point is represented in memory). But let us keep things simple…
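Just to check those code points, directly in R (a small illustration, not in the original post),

utf8ToInt("A")   # 65
utf8ToInt("a")   # 97
utf8ToInt("@")   # 64
utf8ToInt("à")   # 224, outside the 7-bit ASCII range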

Where it gets complicated is that operating systems used (by default) different encodings. On Windows, UTF-16 and Unicode are mostly used. On Mac OS, there used to be something called MacRoman, but UTF-8 seems to prevail nowadays. Finally, on Linux, the historical default was latin-1, but UTF-8 is progressively taking over. In short, anyone who moves between operating systems knows it: accents are hell. And the main problem is that when you get a file, it is impossible to know its encoding a priori. Let us look at the following file, BaseANSI.txt
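To check which encoding your own session assumes by default, one can use (a small aside, not in the original post)

# locale and default character set of the current R session
Sys.getlocale("LC_CTYPE")
localeToCharset()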

We can try to see whether there is some kind of metadata in the document…

but there is none. The document is a raw file, with just the data and no metadata at all. Nothing about the encoding in particular, and that is what we are interested in today. As an anecdote, if instead of opening the file with my notepad on Windows we try on a Mac, we get

And when I go through the DOS terminal on Windows, it is the same thing (which is somewhat reassuring)

Let us look at another file, BaseUNICODE.txt. If you see any difference with the previous one, let me know…

Yet there is one: as a proof, in a terminal, we can read the accents in the file,

One last one? Let us look at BaseUTF8.txt. Here again, it looks like the same file. And yet it is not.

Let us start by downloading the three files (via R)

download.file(url="http://freakonometrics.free.fr/BaseUTF8.txt",
destfile="BaseUTF8.txt")
download.file(url="http://freakonometrics.free.fr/BaseANSI.txt",
destfile="BaseANSI.txt")
download.file(url="http://freakonometrics.free.fr/BaseUNICODE.txt",
destfile="BaseUNICODE.txt")

Technically, these documents are different, even if we could not see anything in the notepad: they do not have the same size!

file.size("BaseUNICODE.txt")
[1] 322
file.size("BaseANSI.txt")
[1] 163
file.size("BaseUTF8.txt")
[1] 171
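Another way to see the difference (a sketch, not in the original post) is to peek at the first raw bytes of each file: the UTF-16LE and UTF-8 files start with a byte-order mark (which is also why ÿþ and U+FEFF will show up below), while the single-byte file does not.

readBin("BaseUNICODE.txt", what = "raw", n = 4)   # ff fe ... : UTF-16LE byte-order mark
readBin("BaseUTF8.txt",    what = "raw", n = 4)   # ef bb bf ... : UTF-8 byte-order mark
readBin("BaseANSI.txt",    what = "raw", n = 4)   # plain single-byte characters, no mark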

We can try to read them, just to see…

read.table("BaseANSI.txt",header=TRUE,sep=";")
                       Département No Habitants
1                              Ain 01    631877
2                          Ardèche 07    324209
3                 Bouches-du-Rhône 13   2016622
4                     Corse-du-Sud 2A    152730
5 Côte-dOr;21;533147\nCôtes-dArmor 22    598357

We have a small issue with the apostrophe, but nothing serious

read.table("BaseANSI.txt",header=TRUE,sep=";",quote="\"")
       Département No Habitants
1              Ain 01    631877
2          Ardèche 07    324209
3 Bouches-du-Rhône 13   2016622
4     Corse-du-Sud 2A    152730
5        Côte-d'Or 21    533147
6    Côtes-d'Armor 22    598357

R reads this first dataset perfectly well (on my computer at least). Let us try a second one

read.table("BaseUTF8.txt",header=TRUE,sep=";",quote="\"")
    ï..DÃ.partement No Habitants
1               Ain 01    631877
2          Ardèche 07    324209
3 Bouches-du-Rhône 13   2016622
4      Corse-du-Sud 2A    152730
5        Côte-d'Or 21    533147
6    Côtes-d'Armor 22    598357

We obviously have a problem here, an encoding problem. The issue is that, usually, we do not know how a file was encoded. And as we have seen, there is no metadata giving us the encoding. What can we do? Fortunately, there is an R function that guesses the encoding. Literally.

library(readr)
guess_encoding("BaseUTF8.txt", n_max = 1000)
# A tibble: 3 x 2
  encoding   confidence
 
1 UTF-8           1.00 
2 ISO-8859-1      0.620
3 ISO-8859-2      0.430

In other words, the best guess here is a UTF-8 encoding. Let us try it,

read.table("BaseUTF8.txt",header=TRUE,sep=";",encoding="UTF-8",quote="\"")
  X.U.FEFF.Département No Habitants
1                  Ain 01    631877
2              Ardèche 07    324209
3     Bouches-du-Rhône 13   2016622
4         Corse-du-Sud 2A    152730
5            Côte-d'Or 21    533147
6        Côtes-d'Armor 22    598357

It works! Or rather, it almost works: I have an issue with my variable names. But let us say it is anecdotal here.

Let us try the last file

read.table("BaseUNICODE.txt",header=TRUE,sep=";",quote="\"")
  ÿþD
1  NA
2  NA
3  NA
4  NA
5  NA
6  NA
7  NA
8  NA

Since this does not work, let us try to get a best guess for the encoding

guess_encoding("BaseUNICODE.txt", n_max = 1000)
# A tibble: 3 x 2
  encoding   confidence
 
1 UTF-16LE        1.00 
2 ISO-8859-1      0.530
3 ISO-8859-2      0.270

We have one!

read.table("BaseUNICODE.txt",header=TRUE,encoding="UTF-16LE")
  ÿþD
1  \n
2   B
3  \n
4   C
5  \n
6   C
7  \n
8   C

Damn, it does not work… Fortunately, Ewen came to my rescue. The solution is rather strange

read.table("BaseUNICODE.txt",header=TRUE,fileEncoding="UTF-16LE", sep=";", quote="")
       Département No Habitants
1              Ain 01    631877
2          Ardèche 07    324209
3 Bouches-du-Rhône 13   2016622
4     Corse-du-Sud 2A    152730
5        Côte-d'Or 21    533147
6    Côtes-d'Armor 22    598357

Yes, we do not specify the encoding via “encoding” but via “fileEncoding”! Subtle, isn’t it… That being said, if we wanted to be consistent, we should use the function from the package to read the file, shouldn’t we? Since in the “readr” package there is a “read_table” function

read_table("BaseUNICODE.txt",locale=locale(encoding = "UTF-16LE"))
Error in guess_types_(datasource, tokenizer, locale, n = guess_max) : 
  Incomplete multibyte sequence

Except that when we run it, we get an error message… In short, encoding is complicated. That is what is explained in the package’s discussion forum, by the way. I am not sure this post will be of any use to anyone, but I wanted to keep a record of all this!
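For the record too, here is a possible workaround (a sketch, not in the original post): open a connection that re-encodes on the fly, and save a clean UTF-8 copy of the file once and for all.

# read the UTF-16LE file through a re-encoding connection, then write it back as UTF-8
con = file("BaseUNICODE.txt", encoding = "UTF-16LE")
txt = readLines(con)
close(con)
writeLines(enc2utf8(txt), "BaseUNICODE-utf8.txt", useBytes = TRUE)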

Using convolutions (S3) vs distributions (S4)

Usually, to illustrate the difference between S3 and S4 classes in R, I mention glm (from base) and vglm (from VGAM) that provide similar outputs, but one is based on S3 codes, while the second one is based on S4 codes. Another way to illustrate is to manipulate distributions.

Consider the case where we want to sum (independent) random variables, for instance two lognormal distributions. Let us try to compute the median of the sum.

The distribution function of the sum of two independent (positive) random variables is F_{S_2}(x)=\int_0^x F_{X_1}(x-y)dF_{X_2}(y)

pSum2 = function(x) integrate(function(y) 
plnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value

Let us visualize that cumulative distribution function

vx=seq(0.1,50,by=.1)
vy=Vectorize(pSum2)(vx)
plot(vx,vy,type="l",ylim=c(0,1))
abline(h=.5,lty=2)

Let us find an upper bound to compute (in a decent time) quantiles

pSum2(350)
[1] 0.99195

and then use the uniroot function to invert that function

qSum = function(u) uniroot(function(x) 
Vectorize(pSum2)(x)-u, interval=c(0,350))$root
vu=seq(.01,.99,by=.01)
vv=Vectorize(qSum)(vu)

The median is here

qSum(.5)
[1] 14.155

Why not consider the sum of three (independent) random variables? Its cumulative distribution function can be written using our previous function F_{S_3}(x)=\int_0^x F_{S_2}(x-y)dF_{X_3}(y)

pSum3 = function(x) integrate(function(y) 
pSum2(x-y)*dlnorm(y,2,2),0,x)$value

If we look at some values, we get

pSum3(4)
[1] 0.015624
pSum3(5)
Error in integrate(function(y) plnorm(x - y, 1, 2) * 
dlnorm(y, 2, 1),  : 
  maximum number of subdivisions reached

So obviously, there are computational issues here.

Let us consider the following alternative expression F_{S_3}(x)=\int_0^x F_{X_3}(x-y)dF_{S_2}(y). Of course, it is necessary here to compute the density of the sum of two variables

dSum2 = function(x) integrate(function(y) 
dlnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value
pSum3 = function(x) integrate(function(y) 
dlnorm(x-y,2,2)*dSum2(y),0,x)$value

Again, let us compute some values

pSum3(4)
[1] 0.0090285
pSum3(5)
[1] 0.01186

This one seems to work quite well. But it is just an illusion.

pSum3(9)
Error in integrate(function(y) dlnorm(x - y, 1, 2) *
 dlnorm(y, 2, 1),  : 
  maximum number of subdivisions reached

Clearly, with those S3-type functions, it will be complicated to run computations with 3 variables, or more.

Let us now consider distributions as S4 objects, using the following package

library(distr)
X1 = Lnorm(mean=1,sd=2)
X2 = Lnorm(mean=2,sd=1)
S2 = X1+X2

To compute the median, we simply have to use

distr::q(S2)(.5)
[1] 14.719

We can also visualize it easily

plot(q(S2))

which looks (very) close to what we got, manually.  But here, it is also possible to work with the sum of 3 (independent) random variables

X3 = Lnorm(mean=2,sd=2)
S3 = X1+X2+X3

To compute the median, use

distr::q(S3)(.5)
[1] 33.208

The function is here

plot(q(S3))
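As a quick sanity check (a sketch, not in the original post), one can compare these medians with a crude Monte Carlo approximation, simulating the three lognormal variables directly.

# crude Monte Carlo check of the medians obtained above
set.seed(1)
n  = 1e6
s2 = rlnorm(n, 1, 2) + rlnorm(n, 2, 1)
s3 = s2 + rlnorm(n, 2, 2)
median(s2)   # to be compared with distr::q(S2)(.5)
median(s3)   # to be compared with distr::q(S3)(.5)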

All (potential) terrorists?

At the end of last week, I started reading Surveiller et prévenir: L’ère de la pénalité prédictive by Nicolas Bourgoin, and at the beginning he mentions reinforced security measures, within a 20km radius of train stations, airports and ports, that could be adopted as part of the reform of the homeland security laws. This information was recently picked up by RTL,

The perimeter is also extended: checks will be allowed in the vicinity of international train stations (and no longer only inside them), as well as within a 20-kilometre radius around airports and ports.

or on the website of l’Obs

these check operations will be carried out “in the vicinity” of 373 train stations, ports and airports, within a 20-kilometre radius. A considerable extension, since until now these checks were confined to the inside of those publicly accessible spaces.

While reading the book, I found this 20km story unbelievable, since everyone lives within 20km of a train station (not knowing exactly which 373 stations were meant, I will assume that all train stations could be concerned). That is more or less what La Cimade notes, writing

The bill thus allows identity checks at the borders for a duration of 12 hours (instead of 6 today), and extends them to “the vicinity” of 373 train stations, ports and airports, as well as within a 20km radius of 118 border crossing points. Far beyond the mere borders of mainland France, almost the whole territory is covered. The measure thus constitutes a disproportionate infringement of people’s rights and freedoms. Entire cities, such as Paris and the whole Île-de-France region, Lyon, Nantes, Rennes, Bordeaux, Montpellier, Toulouse or Marseille, would be subject to a regime of legalised racial profiling. People perceived by the police as foreigners, whatever their actual situation in France, risk being the victims of these identity checks.

Being (very) sceptical by nature, I wanted to check for myself, not only in terms of area (which is what is claimed here) but, above all, in terms of population. The first step is to get the list of geolocated train stations, from https://ressources.data.sncf.com/. We can also get the list of airports from https://www.data.gouv.fr/ if we want to go further, but let us stick with train stations for now. The second step is to get the boundaries of the municipalities and their population. Actually, more than the area, what interests me most is the number of people living there. Such a file can also be found on https://www.data.gouv.fr/. But let us start by loading all our mapping packages,

library(maptools)
library(rgeos)
library(rgdal)
library(ggplot2)
library(plyr)

We can then load the train station data

loc = "/home/arthur/referentiel-gares-voyageurs.shp"
gare = readOGR(loc)

To see where these stations are, we get a base map

Proj = CRS("+proj=longlat +datum=WGS84")
France = readShapePoly("departements-20140306-100m.shp", verbose=TRUE, proj4string=Proj)
metropolitaine = as.character(1:95)
metropolitaine[1:9] = paste("0",1:9,sep="")
France = France[France$code_insee%in%metropolitaine,]

We also use a file with the boundaries of the municipalities,

loc2 = "/home/arthur/communes-20150101-5m.shp"
communes_lim = readOGR(loc2)
communes_lim = spTransform(communes_lim, CRS("+proj=longlat +datum=WGS84"))

and a file with the population of the municipalities

 base = read.csv(
"http://freakonometrics.free.fr/popfr19752010.csv",
header=TRUE)
base$insee = base$dep*1000+base$com

This time we are ready. Let us consider a 20km radius

r=20

We will build all the polygons corresponding to circles with a 20km radius, centred on the French train stations
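Note that the vectors u and v used in the loop below (the upper half of a unit circle) are not defined in the snippet; a minimal, hypothetical construction consistent with the 120 points per polygon could be

# hypothetical construction of u and v (this step is not shown in the original post)
u = seq(-1, 1, length = 60)   # abscissas of the upper half circle
v = sqrt(1 - u^2)             # corresponding ordinates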

PL = data.frame(i=1:(3000*120),id=rep(1:3000,each=120),lon=NA,lat=NA)
 for(i in 1:nrow(gare)){
   x=as.numeric(as.character(gare$longitude_w[i]))
   y=as.numeric(as.character(gare$latitude_wg[i]))
   vx=c(x+u*r/111,x+rev(u)*r/111)
   vy=c(y+v*r/111,y-rev(v)*r/111)
   polygon(vx,vy,border=NA,col="blue",pch=19)
   PL[PL$id==i,2:4]=data.frame(id=i,lon=vx,lat=vy) }

(since 1 degree is roughly 111km). We then have to tinker a bit to turn this into a collection of polygons, which we will then be able to manipulate as we wish

PL=PL[!is.na(PL$lat),]
 PLdf=PL[,c(3,4,2)]
 PLdf[,3]=as.factor(PLdf[,3])
 PL_list <- split(PLdf, PL$id)
 PL_list <- lapply(PL_list, 
  function(x) { x["id"] <- NULL; x })
 PPL <- sapply(PL_list, Polygon)
 PPL1 <- lapply(seq_along(PPL), 
    function(i) Polygons(list(PPL[[i]]),
        ID = names(PL_list)[i]  ))
PPLs <- SpatialPolygons(PPL1, 
proj4string = CRS("+proj=longlat +datum=WGS84") )

If we visualise these security perimeters, we get

G=gUnionCascaded(PPLs)
F=gUnionCascaded(France)
FG=gIntersection(F,G)
plot(F)
plot(G,add=TRUE,col="blue")

or, if we look at the intersection of the two,

plot(F)
plot(FG,add=TRUE)

To find the population of this vast region, we will assume that the population is uniformly distributed within each municipality, and look at the proportion of the municipality’s area that lies within 20km of a train station (this is roughly the technique we used in Kernel density estimation based on Ripley’s correction to correct for edge effects in spatial smoothing on maps, with Ewen Gallic). In short, for a given municipality, the following code returns the proportion of its area located within 20km of a train station, along with its population

f=function(i){
insee=as.numeric(as.character(communes_lim@data$insee[i]))
  POPULATION=base[base$insee==insee,"pop_2010"]
  B_list=list()
  for(j in 1:length(communes_lim@polygons[[1]]@Polygons)){ B_list[[j]]=data.frame(communes_lim@polygons[[i]]@Polygons[[j]]@coords,id=j)}
  B_list <- lapply(B_list,function(x) { x["id"] <- NULL; x })
  BL <- sapply(B_list, Polygon)
  BL1 <- lapply(seq_along(BL), function(i) Polygons(list(BL[[i]]),ID = names(PL_list)[i]  ))
  BLs <- SpatialPolygons(BL1, proj4string = CRS("+proj=longlat +datum=WGS84") ) 
  t=try(FGB<-gIntersection(BLs,FG),silent=TRUE)
  t1=try(l<-length(BLs@pointobj@coords),silent=TRUE)
  if((!inherits(t1, "try-error"))){ 
    a_list=list()
    for(j in 1:length(BLs@pointobj@coords)){
      a_list[[j]]=BLs@polyobj@polygons[[j]]@area}
    a1=sum(unlist(a_list))}
  if((inherits(t1, "try-error"))){ 
    a1=BLs@polygons[[1]]@area}
  a2=0
  if(!is.null(FGB)){
    t2=try(l<-length(FGB@pointobj@coords),silent=TRUE)
    if((!inherits(t2, "try-error"))){
      a_list=list()
      for(j in 1:length(FGB@pointobj@coords)){
        a_list[[j]]=FGB@polyobj@polygons[[j]]@area}
      a2=sum(unlist(a_list))}
    if((inherits(t2, "try-error"))){
      a2=FGB@polygons[[1]]@area}}
  p=c(1,NA,0)
  if((!inherits(t, "try-error"))&(!is.null(t))&(length(POPULATION)==0)) p=c(a2/a1,a1,0)
  if((!inherits(t, "try-error"))&(!is.null(t))&(length(POPULATION)>0) ) p=c(a2/a1,a1,POPULATION)
cat(i,insee,sum(unlist(lapply(B_list,nrow))),p,"\n")
  return(p)}

We then just have to run this over our 36,000 municipalities,

F = lapply(unique(communes_lim_df$id),f)

We then obtain

> F0=F[!is.na(F[,2]),]
> sum(F0[,1]*F0[,2])/sum(F0[,2])
[1] 0.9031736
> F=F1
> sum(F[,1]*F[,3])/sum(F[,3])
[1] 0.9661656

In other words, within a 20km radius of train stations (only), we have 90% of the territory, and more than 95% of the population. With 10km, we cover about 75% of the territory and more than 90% of the population, and with 5km, we cover less than 50% of the territory’s area and about 75% of the population.

We get results of the same order as those obtained in a previous post, which showed that 80% of the French population lives within 3km of a bank branch. We could of course add ports and airports, and remove a few stations… but I doubt the conclusion would be very different… Anyway, the code is available, it is enough to adapt it to another set of circle centres…