# Classification from scratch, penalized Ridge logistic 4/8

Fourth post of our series on classification from scratch, following the previous post, which was some sort of detour on kernels. Today, we get back to the logistic model.

## Formal approach to the problem

We’ve seen before that the classical technique used to estimate the parameters of a parametric model is the maximum likelihood approach. More specifically, $$\widehat{\mathbf{\beta}}=\text{argmax}\lbrace \log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})\rbrace$$The objective function here focuses (only) on the goodness of fit. But usually, in econometrics, we believe in something like non sunt multiplicanda entia sine necessitate (“entities are not to be multiplied without necessity”), the parsimony principle: simpler theories are preferable to more complex ones. So we want to penalize overly complex models.

This is not a bad idea. It is mentioned here and there in econometrics textbooks, but usually for model choice, not for inference. Usually, we estimate parameters using maximum likelihood techniques, and then we use AIC or BIC to compare two models. Recall that the Akaike (AIC) criterion is based on$$-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\text{dim}(\widehat{\mathbf{\beta}})$$We have on the left a measure of the goodness of fit, and on the right, a penalty increasing with the “complexity” of the model.
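Just to fix ideas, here is a small sanity check of that formula (a sketch, assuming the myocarde dataset used throughout this series is already loaded, as in the overview post)

```r
# AIC of a logistic regression, computed "by hand" and with the built-in function
reg = glm(PRONO ~ ., data=myocarde, family=binomial)
-2*as.numeric(logLik(reg)) + 2*length(coefficients(reg))
AIC(reg)   # same value
```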

Very quickly, here, the complexity is the number of covariates used. I will not go into details about the concept of sparsity (and the true dimension of the problem); I recommend reading the book by Martin Wainwright, Robert Tibshirani and Trevor Hastie on that issue. But assume that we do not perform any variable selection: we consider the regression on all covariates. Define$$\Vert\mathbf{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~\Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|,~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}$$for any $\mathbf{a}\in\mathbb{R}^d$. One might say that the AIC could be written$$-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\|\widehat{\mathbf{\beta}}\|_{\ell_0}$$And actually, this will be our objective function. More specifically, we will consider
$$\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|\rbrace$$for some norm $\|\cdot\|$. I will not get back here to the motivation and the (theoretical) properties of those estimates (that will actually be discussed in the Summer School in Barcelona, in July), but in this post, I want to discuss the numerical algorithm used to solve such an optimization problem, for $\|\cdot\|_{\ell_2}$ (the Ridge regression) and for $\|\cdot\|_{\ell_1}$ (the LASSO regression).
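To fix intuitions about those two penalties, here is a quick sketch (purely illustrative, not part of the estimation) of the corresponding unit balls in dimension two

```r
# Unit balls of the two norms (l2 in blue, l1 in red), purely for intuition;
# the corners of the l1 ball are what will eventually induce sparsity
u = seq(-1, 1, length=201)
plot(u, sqrt(1-u^2), type="l", col="blue", lwd=2, xlim=c(-1.2,1.2), ylim=c(-1.2,1.2),
     xlab=expression(beta[1]), ylab=expression(beta[2]))
lines(u, -sqrt(1-u^2), col="blue", lwd=2)   # l2 ball: beta1^2 + beta2^2 = 1
lines(u,  1-abs(u), col="red", lwd=2)       # l1 ball: |beta1| + |beta2| = 1
lines(u, -(1-abs(u)), col="red", lwd=2)
```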

## Normalization of the covariates

The problem with $\|\mathbf{\beta}\|$ is that the norm should make sense, somehow: whether a given $\mathbf{\beta}_j$ is “small” depends on the scale (the unit) of $x_j$. So, the first step will be to consider linear transformations of all covariates $x_j$, to get centered and scaled variables (with unit variance)

```r
y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
```

## Ridge Regression (from scratch)

Before running some code, recall that we want to solve something like$$\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_2}^2\rbrace$$In the case where we consider the log-likelihood of some Gaussian variable, we get the sum of the squares of the residuals, and we can obtain an explicit solution. But not in the context of a logistic regression. The heuristics behind Ridge regression is given by the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue circle is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem:$$\min_{\mathbf{\beta}:\|\mathbf{\beta}\|^2_{\ell_2}\leq s} \lbrace -\sum_{i=1}^n \log\mathcal{L}(y_i,\beta_0+\mathbf{x}_i^T\mathbf{\beta}) \rbrace$$can be written equivalently (it is a strictly convex problem)$$\min_{\mathbf{\beta},\lambda} \lbrace -\sum_{i=1}^n \log\mathcal{L}(y_i,\beta_0+\mathbf{x}_i^T\mathbf{\beta}) +\lambda \|\mathbf{\beta}\|_{\ell_2}^2 \rbrace$$Thus, the constrained optimum should lie in the blue disk.

```r
LogLik = function(bbeta){
  b0   = bbeta[1]
  beta = bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*log(1 + exp(b0+X%*%beta)))}
u = seq(-4,4,length=251)
v = outer(u,u,function(x,y) LogLik(c(1,x,y)))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
u = seq(-1,1,length=251)
lines(u,sqrt(1-u^2),type="l",lwd=2,col="blue")
lines(u,-sqrt(1-u^2),type="l",lwd=2,col="blue")
```

Let us consider the objective function, with the following code

```r
PennegLogLik = function(bbeta,lambda=0){
  b0   = bbeta[1]
  beta = bbeta[-1]
  -sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*log(1 + exp(b0+X%*%beta))) + lambda*sum(beta^2)
}
```

Why not try a standard optimisation routine? In the very first post of this series, we did mention that using optimization routines was not clever, since they rely strongly on the starting point. But here, it is not the case

```r
lambda = 1
beta_init = lm(PRONO~.,data=myocarde)$coefficients
vpar = matrix(NA,1000,8)
for(i in 1:1000){
  vpar[i,] = optim(par = beta_init*rnorm(8,1,2),
                   function(x) PennegLogLik(x,lambda),
                   method = "BFGS",
                   control = list(abstol=1e-9))$par}
par(mfrow=c(1,2))
plot(density(vpar[,2]),ylab="",xlab=names(myocarde)[1])
plot(density(vpar[,3]),ylab="",xlab=names(myocarde)[2])
```

Clearly, even if we change the starting point, it looks like we converge towards the same value. That could be considered as the optimum. The code to compute $\widehat{\mathbf{\beta}}_{\lambda}$ would then be

```r
opt_ridge = function(lambda){
  beta_init = lm(PRONO~.,data=myocarde)$coefficients
  logistic_opt = optim(par = beta_init*0,
                       function(x) PennegLogLik(x,lambda),
                       method = "BFGS",
                       control = list(abstol=1e-9))
  logistic_opt$par[-1]}
```

and we can visualize the evolution of $\widehat{\mathbf{\beta}}_{\lambda}$ as a function of ${\lambda}$

```r
v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(opt_ridge)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1])
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])
```

At least it seems to make sense: we can observe the shrinkage as $\lambda$ increases (we’ll get back to that later on).

## Ridge, using the Newton-Raphson algorithm

We’ve seen that we can also use Newton-Raphson to solve this problem.
Without the penalty term, the algorithm was$$\mathbf{\beta}_{new} = \mathbf{\beta}_{old} - \left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}$$where$$\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})$$and$$\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}$$where $\mathbf{\Delta}_{old}$ is the diagonal matrix with terms $\mathbf{p}_{old}(1-\mathbf{p}_{old})$ on the diagonal. Thus$$\mathbf{\beta}_{new} = \mathbf{\beta}_{old} + (\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T[\mathbf{y}-\mathbf{p}_{old}]$$that we can also write$$\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}$$where $\mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]$. Here, for the penalized problem, we can easily prove that$$\frac{\partial\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}=\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}-2\lambda\mathbf{\beta}_{old}$$while$$\frac{\partial^2\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}-2\lambda\mathbb{I}$$Hence$$\mathbf{\beta}_{\lambda,new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}+2\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}$$

The code is then

```r
Y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
X = cbind(1,X)
colnames(X) = c("Inter",names(myocarde[,1:7]))
beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)
for(s in 1:9){
  pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
  Delta = matrix(0,nrow(X),nrow(X)); diag(Delta) = (pi*(1-pi))
  z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
  B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
  beta = cbind(beta,B)}
beta[,8:10]
              [,1]        [,2]        [,3]
XInter  0.59619654  0.59619654  0.59619654
XFRCAR  0.09217848  0.09217848  0.09217848
XINCAR  0.77165707  0.77165707  0.77165707
XINSYS  0.69678521  0.69678521  0.69678521
XPRDIA -0.29575642 -0.29575642 -0.29575642
XPAPUL -0.23921101 -0.23921101 -0.23921101
XPVENT -0.33120792 -0.33120792 -0.33120792
XREPUL -0.84308972 -0.84308972 -0.84308972
```

Again, it seems that convergence is very fast.
And interestingly, with that algorithm, we can also derive the variance of the estimator$$\text{Var}[\widehat{\mathbf{\beta}}_{\lambda}]=[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{\Delta}\text{Var}[\mathbf{z}]\mathbf{\Delta}\mathbf{X}[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}$$where $\text{Var}[\mathbf{z}]=\mathbf{\Delta}^{-1}$.

The code to compute $\widehat{\mathbf{\beta}}_{\lambda}$ as a function of $\lambda$ is then

```r
newton_ridge = function(lambda=1){
  beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
  for(s in 1:20){
    pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
    Delta = matrix(0,nrow(X),nrow(X)); diag(Delta) = (pi*(1-pi))
    z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
    B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
    beta = cbind(beta,B)}
  Varz = solve(Delta)
  Varb = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% t(X)%*% Delta %*% Varz %*% Delta %*% X %*% solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X)))
  return(list(beta=beta[,ncol(beta)],sd=sqrt(diag(Varb))))}
```

We can visualize the evolution of $\widehat{\mathbf{\beta}}_{\lambda}$ (as a function of $\lambda$)

```r
v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(function(x) newton_ridge(x)$beta)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])
```

and to get the evolution of the variance

```r
v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(function(x) newton_ridge(x)$sd)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i],lwd=2)
```

Recall that when $\lambda=0$ (on the left of the graphs), $\widehat{\mathbf{\beta}}_{0}=\widehat{\mathbf{\beta}}^{mco}$ (the unpenalized estimator). Thus, as $\lambda$ increases, (i) the bias increases (estimates shrink towards 0) and (ii) the variances decrease.

## Ridge, using glmnet

As always, there are R functions available to run a ridge regression. Let us use the glmnet function, with $\alpha=0$

```r
y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
library(glmnet)
glm_ridge = glmnet(X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)
```

or as a function of the norm (the $\ell_1$ norm here, I don’t know why). I don’t know either why all the graphs obtained with the different optimisation routines are so different… Maybe that will be for another post…

## Ridge with orthogonal covariates

An interesting case is obtained when covariates are orthogonal. This can be obtained using a PCA of the covariates.

```r
library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord
```

Let us run a ridge regression on those (orthogonal) covariates

```r
library(glmnet)
glm_ridge = glmnet(pca_X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)
```

```r
plot(glm_ridge,col=colrs,lwd=2)
```

We clearly observe the shrinkage of the parameters, in the sense that $$\widehat{\mathbf{\beta}}_{\lambda}^{\perp}=\frac{\widehat{\mathbf{\beta}}^{mco}}{1+\lambda}$$
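This is exactly the formula obtained in the least squares case with orthonormal covariates, where it can be checked directly. Here is a minimal numerical sketch of that (Gaussian) case, purely as an illustration and not the logistic fit above; the orthonormalized design `Xo` is built just for this check

```r
# Shrinkage formula in the least squares case, with orthonormalized covariates
Xo = qr.Q(qr(scale(myocarde[,1:7])))        # orthonormal columns: t(Xo) %*% Xo = I
yc = myocarde$PRONO - mean(myocarde$PRONO)  # centered response
lambda = 2
b_ols   = solve(t(Xo) %*% Xo) %*% t(Xo) %*% yc
b_ridge = solve(t(Xo) %*% Xo + lambda*diag(7)) %*% t(Xo) %*% yc
cbind(b_ridge, b_ols/(1+lambda))            # the two columns coincide
```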

## Application

Let us try with our second set of data

```r
df0 = df
df0$y = as.numeric(df$y)-1
plot_lambda = function(lambda){
  m = apply(df0,2,mean)
  s = apply(df0,2,sd)
  for(j in 1:2) df0[,j] = (df0[,j]-m[j])/s[j]
  reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0, lambda=lambda)
  u = seq(0,1,length=101)
  p = function(x,y){
    xt = (x-m[1])/s[1]
    yt = (y-m[2])/s[2]
    predict(reg,newx=cbind(x1=xt,x2=yt),type='response')}
  v = outer(u,u,p)
  image(u,u,v,col=clr10,breaks=(0:10)/10)
  points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
  contour(u,u,v,levels = .5,add=TRUE)
}
```

We can try various values of $\lambda$

```r
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(.2))
plot_lambda(.2)
```

or

```r
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(1.2))
plot_lambda(1.2)
```

Next step is to change the norm of the penalty, with the $\ell_1$ norm (to be continued…)

# Classification from scratch, logistic with kernels 3/8

Third post of our series on classification from scratch, following the previous post introducing smoothing techniques, with (b)-splines. Consider here kernel-based techniques. Note that here, we do not use the “logistic” model… it is purely non-parametric.

## Kernel based estimates, from scratch

I like kernels because they are somehow very intuitive. As with GLMs, the goal is to estimate $\hat{m}(\mathbf{x})=\mathbb{E}(Y|\mathbf{X}=\mathbf{x})$. Heuristically, we want to compute the (conditional) expected value on the neighborhood of $\mathbf{x}$. If we consider some spatial model, where $\mathbf{x}$ is the location, we want the expected value of some variable $Y$, “on the neighborhood” of $\mathbf{x}$. A natural approach is to use some administrative region (county, département, region, etc). This means that we have a partition of $\mathcal{X}$ (the space where the variable(s) lie). This will yield the regressogram, introduced in Tukey (1961). For convenience, assume some interval / rectangle / box type of partition. In the univariate case, consider$$\hat{m}_{\mathbf{a}}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))}$$or the moving regressogram$$\hat{m}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])}$$In that case, the neighborhood is defined as the interval $(x\pm h)$. That’s nice, but clearly very simplistic. If $\mathbf{x}_i=\mathbf{x}$ and $\mathbf{x}_j=\mathbf{x}-h+\varepsilon$ (with $\varepsilon>0$), both observations are used to compute the conditional expected value. But if $\mathbf{x}_{j'}=\mathbf{x}-h-\varepsilon$, only $\mathbf{x}_i$ is considered, even if the distance between $\mathbf{x}_{j}$ and $\mathbf{x}_{j'}$ is extremely small. Thus, a natural idea is to use weights that are a function of the distance between the $\mathbf{x}_{i}$‘s and $\mathbf{x}$. Use$$\tilde{m}(x)=\frac{\sum_{i=1}^ny_i\cdot k_h\left({x-x_i}\right)}{\sum_{i=1}^nk_h\left({x-x_i}\right)}$$where (classically)$$k_h(x)=k\left(\frac{x}{h}\right)$$for some kernel $k$ (a non-negative function that integrates to one) and some bandwidth $h$. Usually, kernels are denoted with a capital letter $K$, but I prefer to use $k$, because it can be interpreted as the density of some random noise we add to all observations (independently). Actually, one can derive that estimate by using kernel-based estimators of densities. Recall that$$\tilde{f}(\mathbf{y})=\frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n k\left(\mathbf{H}^{-1/2}(\mathbf{y}-\mathbf{y}_i)\right)$$Now, use the fact that the expected value can be defined as$$m(x)=\int yf(y|x)dy=\frac{\int y f(y,x)dy}{\int f(y,x)dy}$$Consider now a bivariate (product) kernel to estimate the joint density.
The numerator is estimated by$$\frac{1}{nh}\sum_{i=1}^n\int y_i\, k\left(t,\frac{x-x_i}{h}\right)dt=\frac{1}{nh}\sum_{i=1}^ny_i\, \kappa\left(\frac{x-x_i}{h}\right)$$while the denominator is estimated by$$\frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right)$$In a general setting, we still use product kernels between $Y$ and $\mathbf{X}$ and write$$\widehat{m}_{\mathbf{H}}(\mathbf{x})=\displaystyle{\frac{\sum_{i=1}^ny_i\cdot k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}{\sum_{i=1}^n k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}}$$for some symmetric positive definite bandwidth matrix $\mathbf{H}$, and$$k_{\mathbf{H}}(\mathbf{x})=\det[\mathbf{H}]^{-1}k(\mathbf{H}^{-1}\mathbf{x})$$

Now that we know what kernel estimates are, let us use them. For instance, assume that $k$ is the density of the $\mathcal{N}(0,1)$ distribution. At point $x$, with a bandwidth $h$, we get the following code

```r
mean_x = function(x,bw){
  w = dnorm((myocarde$INSYS-x)/bw, mean=0, sd=1)
  weighted.mean(myocarde$PRONO,w)}
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
```

and of course, we can change the bandwidth.

```r
v = Vectorize(function(x) mean_x(x,2))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
```

We observe what we can read in any textbook: with a smaller bandwidth, we get more variance and less bias. “More variance” means here more variability (since the neighborhood is smaller, there are fewer points to compute the average, and the estimate is more volatile), and “less bias” in the sense that the expected value is supposed to be computed at point $x$, so the smaller the neighborhood, the better.

## Using the ksmooth R function

Actually, there is a function in R to compute this kernel regression.

```r
reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = 2*exp(1))
plot(reg$x,reg$y,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
```

We can replicate our previous estimate. Nevertheless, the output is not a function, but two series of vectors. That’s nice to get a graph, but that’s all we get. Furthermore, as we can see, the bandwidth is not exactly the same as the one we used before. I did not find any information online, so I tried to replicate the function we wrote before

```r
g = function(bk=3){
  reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = bk)
  f = function(bm){
    v = Vectorize(function(x) mean_x(x,bm))(reg$x)
    z = reg$y-v
    sum((z[!is.na(z)])^2)}
  optim(bk,f)$par}
x = seq(1,10,by=.1)
y = Vectorize(g)(x)
plot(x,y)
abline(0,exp(-1),col="red")
abline(0,.37,col="blue")
```

There is a slope of $0.37$, which is actually $e^{-1}$. Coincidence? I don’t know, to be honest…
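One possible explanation (my reading of the ksmooth documentation, so take it as an assumption rather than a certainty): the “normal” kernel in ksmooth is rescaled so that its quartiles, viewed as a density, are at $\pm 0.25\times$bandwidth, which gives an implied standard deviation suspiciously close to $e^{-1}$

```r
# Implied standard deviation of ksmooth's rescaled Gaussian kernel,
# if its quartiles are at +/- 0.25*bandwidth (see ?ksmooth)
0.25/qnorm(.75)
# about 0.3707, close to (but not exactly) exp(-1) = 0.3679, which would explain the slope
```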

## Application in higher dimension

Consider now our bivariate dataset, and consider some product of univariate (Gaussian) kernels

```r
u = seq(0,1,length=101)
p = function(x,y){
  bw1 = .2; bw2 = .2
  w = dnorm((df$x1-x)/bw1, mean=0, sd=1)*dnorm((df$x2-y)/bw2, mean=0, sd=1)
  weighted.mean(df$y=="1",w) }
v = outer(u,u,Vectorize(p))
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
```

We get the following prediction

Here, the different colors are probabilities.

## k-nearest neighbors

An alternative is to consider a neighborhood defined not by a distance to the point $\mathbf{x}$, but by its $k$ nearest neighbors, among the $n$ observations we have.$$\tilde{m}_k(\mathbf{x})=\frac{1}{n}\sum_{i=1}^n\omega_{i,k}(\mathbf{x})y_i$$
where $\omega_{i,k}(\mathbf{x})=n/k$ if $i\in\mathcal{I}_{\mathbf{x}}^k$ (and $0$ otherwise), with
$$\mathcal{I}_{\mathbf{x}}^k=\{i:\mathbf{x}_i\text{ one of the }k\text{ nearest observations to }\mathbf{x}\}$$
The difficult part here is that we need a valid distance. If units are very different on each component, using the Euclidean distance will be meaningless. So, quite naturally, let us consider here the Mahalanobis distance

```r
Sigma = var(myocarde[,1:7])
Sigma_Inv = solve(Sigma)
d2_mahalanobis = function(x,y,Sinv){as.numeric(x-y)%*%Sinv%*%t(x-y)}
k_closest = function(i,k){
  vect_dist = function(j) d2_mahalanobis(myocarde[i,1:7],myocarde[j,1:7],Sigma_Inv)
  vect = Vectorize(vect_dist)((1:nrow(myocarde)))
  which(rank(vect) <= k)}   # indices of the k smallest (Mahalanobis) distances
```

Here we have a function to find the $k$ closest neighbors of some observation. Then two things can be done to get a prediction. The goal is to predict a class, so we can think of using a majority rule: the prediction for $y_i$ is the value taken by the majority of its neighbors.

```r
k_majority = function(k){
  Y = rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = sort(myocarde$PRONO[k_closest(i,k)])[(k+1)/2]   # median of the k (odd) values
  return(Y)}
```

But we can also compute the proportion of black points among the closest neighbors. It can actually be interpreted as the probability to be black (that’s actually what was said at the beginning of this post, with kernels),

```r
k_mean = function(k){
  Y = rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = mean(myocarde$PRONO[k_closest(i,k)])
  return(Y)}
```

We can see on our dataset the observation, the prediction based on the majority rule, and the proportion of positive outcomes (PRONO equal to 1) among the 7 closest neighbors

```r
cbind(OBSERVED=myocarde$PRONO,
      MAJORITY=k_majority(7),PROPORTION=k_mean(7))
      OBSERVED MAJORITY PROPORTION
 [1,]        1        1  0.7142857
 [2,]        0        1  0.5714286
 [3,]        0        0  0.1428571
 [4,]        1        1  0.5714286
 [5,]        0        1  0.7142857
 [6,]        0        0  0.2857143
 [7,]        1        1  0.7142857
 [8,]        1        0  0.4285714
 [9,]        1        1  0.7142857
[10,]        1        1  0.8571429
[11,]        1        1  1.0000000
[12,]        1        1  1.0000000
```

Here, we got a prediction for an observed point, located at $\boldsymbol{x}_i$, but actually, it is possible to seek the $k$ closest neighbors of any point $\boldsymbol{x}$. Back on our univariate example (to get a graph), we have

```r
mean_x = function(x,k=9){
  w = rank(abs(myocarde$INSYS-x),ties.method ="random")
  mean(myocarde$PRONO[which(w<=k)])}   # average of the k closest observations
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
```

That’s not very smooth, but we do not have a lot of points either. If we use that technique on our two-dimensional dataset, we obtain the following

```r
Sigma_Inv = solve(var(df[,c("x1","x2")]))
u = seq(0,1,length=51)
p = function(x,y){
  k = 6
  vect_dist = function(j) d2_mahalanobis(c(x,y),df[j,c("x1","x2")],Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(df))
  idx = which(rank(vect) <= k)
  return(mean((df$y==1)[idx]))}
v = outer(u,u,Vectorize(p))
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
```

This is the idea of local inference, using either a kernel on a neighborhood of $\mathbf{x}$ or simply the $k$ nearest neighbors. Next time, we will investigate penalized logistic regressions (to be continued…)

# Classification from scratch, logistic with splines 2/8

Today, second post of our series on classification from scratch, following the brief introduction to the logistic regression.

## Piecewise linear splines

To illustrate what’s going on, let us start with a “simple” regression (with only one explanatory variable). The underlying idea is natura non facit saltus, “nature does not make jumps”, i.e. the processes governing natural things are continuous. That seems to be a rather strong assumption, because we could assume that there is a fixed threshold to explain death. For instance, if patients die (for sure) when the “stroke index” exceeds a threshold, we might expect some discontinuity. Except that if that threshold is a heterogeneous (non-observable, continuous) variable, then we get back to the continuity assumption.

The simplest model we can think of to extend the linear model we’ve seen in the previous post is a piecewise linear function, with two parts: small values of $x$, and larger values of $x$. The most convenient way to do so is to use the positive part function $(x-s)_+$, which is the difference between $x$ and $s$ if that difference is positive, and $0$ otherwise. For instance $$\beta_1 x+\beta_2(x-s)_+$$ is a piecewise linear function, continuous, with a “rupture” at knot $s$.

Observe also the following interpretation: for small values of $x$, there is a linear trend with slope $\beta_1$, and for larger values of $x$, the slope becomes $\beta_1+\beta_2$. Hence, $\beta_2$ is interpreted as the change of slope at the knot.
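Since the corresponding figure may help, here is a quick sketch of such a function, with arbitrary values ($\beta_1=1$, $\beta_2=-1.5$, knot $s=5$) chosen purely for illustration

```r
# Piecewise linear function beta1*x + beta2*(x-s)_+ with illustrative values:
# slope 1 below the knot, slope 1 - 1.5 = -0.5 above it
pos = function(x,s) (x-s)*(x>=s)
x = seq(0, 10, by=.1)
plot(x, 1*x - 1.5*pos(x,5), type="l", lwd=2, ylab="")
abline(v=5, lty=2)   # the knot, where the slope changes
```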

And of course, it is possible to consider more than one knot. The function to get the positive part is the following

```r
pos = function(x,s) (x-s)*(x>=s)
```

then we can use it directly in our regression model

```r
reg = glm(PRONO~INSYS+pos(INSYS,15)+pos(INSYS,25),data=myocarde,family=binomial)
```

The output of the regression is here

```r
summary(reg)

Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)     -0.1109     3.2783  -0.034   0.9730  
INSYS           -0.1751     0.2526  -0.693   0.4883  
pos(INSYS, 15)   0.7900     0.3745   2.109   0.0349 *
pos(INSYS, 25)  -0.5797     0.2903  -1.997   0.0458 *
```

Hence, the original slope, for very small values, is not significant, but then, above 15, it becomes significantly positive. And above 25, there is a significant change again. We can plot it to see what’s going on

```r
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,type="l")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
```

## Using bs() linear splines

Using the bs() function from the splines package, things are slightly different. We will use here so-called B-splines,

```r
library(splines)
```

We can define spline functions with support $(5,55)$ and with knots $\{15,25\}$

clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02") x = seq(0,60,by=.25) B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degre=1) matplot(x,B,type="l",lty=1,lwd=2,col=clr6)

as we can see, the functions defined here are different from the ones before, but we still have (piecewise) linear functions on each segment $(5,15)$, $(15,25)$ and $(25,55)$. But linear combinations of those functions (the two sets of functions) will generate the same space. Said differently, even if the interpretation of the output is different, predictions should be the same
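A small numerical check of that claim (a sketch, using the same knots as above): each column of the bs() basis should be an exact linear combination of $x$, $(x-15)_+$ and $(x-25)_+$, so regressing one basis on the other leaves essentially no residual

```r
# The two bases span the same space of continuous piecewise linear functions
# with knots at 15 and 25 (on the support of the boundary knots)
library(splines)
pos = function(x,s) (x-s)*(x>=s)
x = seq(5, 55, by=.25)
B = bs(x, knots=c(15,25), Boundary.knots=c(5,55), degree=1)
M = cbind(x, pos(x,15), pos(x,25))
fit = lm(as.matrix(B) ~ M)
max(abs(residuals(fit)))   # essentially zero (numerical noise)
```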

```r
reg = glm(PRONO~bs(INSYS,knots=c(15,25),Boundary.knots=c(5,55),degree=1),
          data=myocarde,family=binomial)
summary(reg)

Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept)    -0.9863     2.0555  -0.480   0.6314  
bs(INSYS,..)1  -1.7507     2.5262  -0.693   0.4883  
bs(INSYS,..)2   4.3989     2.0619   2.133   0.0329 *
bs(INSYS,..)3   5.4572     5.4146   1.008   0.3135  
```

Observe that there are three coefficients, as before, but again, the interpretation is here more complicated…

```r
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
```

Nevertheless, the prediction is the same… and that’s nice.

Let us go one step further… Can we also have continuity of the derivative? Yes, and that’s actually easy, considering parabolic functions. Instead of using a decomposition on $x,(x-s_1)_+$ and $(x-s_2)_+$, consider now a decomposition on $x,x^{\color{red}{2}},(x-s_1)^{\color{red}{2}}_+$ and $(x-s_2)^{\color{red}{2}}_+$.

```r
pos2 = function(x,s) (x-s)^2*(x>=s)
reg = glm(PRONO~poly(INSYS,2)+pos2(INSYS,15)+pos2(INSYS,25),
          data=myocarde,family=binomial)
summary(reg)

Coefficients:
                Estimate Std. Error z value Pr(>|z|)  
(Intercept)      29.9842    15.2368   1.968   0.0491 *
poly(INSYS, 2)1 408.7851   202.4194   2.019   0.0434 *
poly(INSYS, 2)2 199.1628   101.5892   1.960   0.0499 *
pos2(INSYS, 15)  -0.2281     0.1264  -1.805   0.0712 .
pos2(INSYS, 25)   0.0439     0.0805   0.545   0.5855  
```

As expected, there are here five coefficients: the intercept and two for the part on the left (three parameters for the parabolic function), and then two additional terms for the part in the center – here $(15,25)$ – and for the part on the right. Of course, for each portion, there is only one degree of freedom since we have a parabolic function (three coefficients) but two constraints (continuity, and continuity of the first order derivative).
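A quick numerical sketch of that smoothness claim, using the `reg` object fitted just above (the small step `eps` and the knot at 15 are illustrative choices): the linear predictor should have the same value and the same slope on both sides of the knot

```r
# Continuity and continuity of the first derivative of the linear predictor at x=15
eta = function(x) predict(reg, newdata=data.frame(INSYS=x))   # link scale
eps = 1e-4
c(eta(15-eps), eta(15+eps))                               # same value: continuity
c((eta(15)-eta(15-eps))/eps, (eta(15+eps)-eta(15))/eps)   # same slope (up to eps): smoothness
```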

On a graph, we get the following

```r
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
```

Of course, we can do the same with our R function. But as before, the basis of functions is expressed here differently

```r
x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",xlab="INSYS",col=clr6)
```

If we run the R code, we get

```r
reg = glm(PRONO~bs(INSYS,knots=c(15,25),Boundary.knots=c(5,55),degree=2),
          data=myocarde,family=binomial)
summary(reg)

Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)       7.186      5.261   1.366   0.1720  
bs(INSYS, ..)1  -14.656      7.923  -1.850   0.0643 .
bs(INSYS, ..)2   -5.692      4.638  -1.227   0.2198  
bs(INSYS, ..)3   -2.454      8.780  -0.279   0.7799  
bs(INSYS, ..)4    6.429     41.675   0.154   0.8774  
```

But that’s not really a big deal since the prediction is exactly the same

```r
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
```

## Cubic splines

Last, but not least, we can consider cubic splines. With our previous notions, we would consider a decomposition on (guess what) $x,x^2,x^{\color{red}{3}},(x-s_1)^{\color{red}{3}}_+,(x-s_2)^{\color{red}{3}}_+$, to get this time continuity, as well as continuity of the first two derivatives (and to get a very smooth function, since even variations will be smooth). If we use the bs function, the basis is the following

```r
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=3)
matplot(x,B,type="l",lwd=2,col=clr6,lty=1,ylim=c(-.2,1.2))
abline(v=c(5,15,25,55),lty=2)
```

and the prediction will now be

```r
reg = glm(PRONO~bs(INSYS,knots=c(15,25),Boundary.knots=c(5,55),degree=3),
          data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
```

Two last things before concluding (for today), the location of the knots, and the extension to additive models.

## Location of knots

In many applications, we do not want to specify the location of the knots. We just want – say – three (intermediary) knots. This can be done using

```r
reg = glm(PRONO~1+bs(INSYS,degree=1,df=4),data=myocarde,family=binomial)
```

We can actually get the locations of the knots by looking at

attr(reg$terms, "predvars")[[3]] bs(INSYS, degree = 1L, knots = c(15.8, 21.4, 27.15), Boundary.knots = c(8.7, 54), intercept = FALSE) which provides us with the location of the boundary knots (the minumun and the maximum from from our sample) but also the three intermediary knots. Observe that actually, those five values are just (empirical) quantiles quantile(myocarde$INSYS,(0:4)/4) 0% 25% 50% 75% 100% 8.70 15.80 21.40 27.15 54.00

If we plot the prediction, we get

```r
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)
```

If we get back to what was computed before the logit transformation, we clearly see that the breaks occur at the different quantiles

```r
B = bs(x,degree=1,df=4)
B = cbind(1,B)
y = B%*%coefficients(reg)
plot(x,y,type="l",col="red",lwd=2)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)
```

Note that if we do not specify anything about the knots (number or location), we get no knots…

```r
reg = glm(PRONO~1+bs(INSYS,degree=2),data=myocarde,family=binomial)
attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 2L, knots = numeric(0),
   Boundary.knots = c(8.7, 54), intercept = FALSE)
```

and if we look at the prediction

```r
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
```

actually, it is the same as a quadratic regression (as expected, actually)

```r
reg = glm(PRONO~1+poly(INSYS,degree=2),data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
```

## Additive models

Consider now the second dataset, with two variables. Consider here a model like $$\mathbb{P}[Y|X_1=x_1,X_2=x_2]=\frac{\exp[\eta(x_1,x_2)]}{1+\exp[\eta(x_1,x_2)]}$$ where $$\eta(x_1,x_2)=\beta_0+\color{red}{s_1(x_1)}+\color{blue}{s_2(x_2)}$$ with $$\color{red}{s_1(x_1)}=\beta_{1,0}x_1+\beta_{1,1}(x_1-s_{11})_++\beta_{1,2}(x_1-s_{12})_+$$ and $$\color{blue}{s_2(x_2)}=\beta_{2,0}x_2+\beta_{2,1}(x_2-s_{21})_++\beta_{2,2}(x_2-s_{22})_+$$ It might seem a little bit restrictive, but that’s actually the idea of additive models.

```r
reg = glm(y~bs(x1,degree=1,df=3)+bs(x2,degree=1,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
```

Now, if we think about it, we’ve been able to get a “perfect” model, so, somehow, it seems to be no longer continuous…

```r
persp(u,u,v,theta=20,phi=40,col="green")
```

Of course, it is… it is piecewise linear, with hyperplanes, some being almost vertical.

And one can also consider piecewise quadratic functions

```r
reg = glm(y~bs(x1,degree=2,df=3)+bs(x2,degree=2,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
```

Funny thing, we now have two “perfect” models, with different areas for the white and the black dots… Don’t ask me how to choose between them. In R, it is possible to use the mgcv package to run a gam regression. It is used for generalized additive models, but here, we have only one variable, so it is difficult to see the “additive” part, actually. And to be more specific, mgcv is using penalized quasi-likelihood from the nlme package (but we’ll get back to penalized routines later on). But maybe I should also mention another smoothing tool first: kernels (and maybe also $k$-nearest neighbors). To be continued…

# Classification from scratch, overview 0/8

Before my course on « big data and economics » at the University of Barcelona in July, I wanted to upload a series of posts on classification techniques, to get an insight on machine learning tools. According to some common idea, machine learning algorithms are black boxes. I wanted to get back on that saying. First of all, isn’t it the case also for regression models, like generalized additive models (with splines)? Do you really know what the algorithm is doing? Even the logistic regression. In textbooks, we can easily find math formulas. But what is really done when I run it, in R?

When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let’s try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R, within the next two weeks. I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)

```r
x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])
```

Here is some code to get a visualization of the prediction (here the probability to be a black point)

```r
rmatrix_model = function(model){
  u = seq(0,1,length=101)
  p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
  v = outer(u,u,p)
  return(v)}
nice_graph = function(v){
  u = seq(0,1,length=101)
  image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
  points(x,y,pch=19,cex=1.5,col="white")
  points(x,y,pch=c(1,19)[1+z],cex=1.5)
  contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))
```

Note that colors are defined here as

```r
clr10 = c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b")
```

or with some nonlinear model. The second one is a dataset I got from Gilbert Saporta, about heart attacks and death (our binary variable).
```r
myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))
```

So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), about comparing models, etc. The goal here is simply to understand what’s going on when we call either glm, glmnet, gam, random forest, svm, xgboost, or any other function, to get a predictive model.