
Convex Regression Model

This morning, during the lecture on nonlinear regression, I mentioned (very) briefly the case of convex regression. Since I forgot to mention the R code, I will publish it here. Assume that y_i=m(\mathbf{x}_i)+\varepsilon_i where m:\mathbb{R}^d\rightarrow \mathbb{R} is some convex function.

Then m is convex if and only if \forall\mathbf{x}_1,\mathbf{x}_2\in\mathbb{R}^d, \forall t\in[0,1], m(t\mathbf{x}_1+[1-t]\mathbf{x}_2) \leq tm(\mathbf{x}_1)+[1-t]m(\mathbf{x}_2). Hildreth (1954) proved that if m^\star=\underset{m \text{ convex}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-m(\mathbf{x}_i)\big)^2\right\rbrace then \mathbf{\theta}^\star=(m^\star(\mathbf{x}_1),\cdots,m^\star(\mathbf{x}_n)) is unique.

Let \mathbf{y}=\mathbf{\theta}+\mathbf{\varepsilon}; then \mathbf{\theta}^\star=\underset{\mathbf{\theta}\in \mathcal{K}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-\theta_i\big)^2\right\rbrace where \mathcal{K}=\{\mathbf{\theta}\in\mathbb{R}^n:\exists m\text{ convex },m(\mathbf{x}_i)=\theta_i\}. I.e. \mathbf{\theta}^\star is the projection of \mathbf{y} onto the (closed) convex cone \mathcal{K}. The projection theorem gives existence and uniqueness.

For convenience, in the application, we will consider the real-valued case, m:\mathbb{R}\rightarrow \mathbb{R}, i.e. y_i=m(x_i)+\varepsilon_i. Assume that observations are ordered x_1\leq x_2\leq\cdots \leq x_n. Here \mathcal{K}=\left\lbrace\mathbf{\theta}\in\mathbb{R}^n:\frac{\theta_2-\theta_1}{x_2-x_1}\leq \frac{\theta_3-\theta_2}{x_3-x_2}\leq \cdots \leq \frac{\theta_n-\theta_{n-1}}{x_n-x_{n-1}}\right\rbrace

Hence, we have a quadratic program with n-2 linear constraints.
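Just to make it concrete before using a dedicated package: that quadratic program can be solved directly with the quadprog package. Here is a minimal sketch (my own illustration, not what cobs does); since cars$speed contains ties, I average dist within each speed value, to get distinct ordered x's,

library(quadprog)
xy = aggregate(dist ~ speed, data = cars, FUN = mean)
x = xy$speed
y = xy$dist
n = length(x)
A = matrix(0, n-2, n)   # one constraint per triplet: slopes must be nondecreasing
for(i in 1:(n-2)){
  A[i, i]   =  1/(x[i+1]-x[i])
  A[i, i+1] = -1/(x[i+1]-x[i]) - 1/(x[i+2]-x[i+1])
  A[i, i+2] =  1/(x[i+2]-x[i+1])
}
sol = solve.QP(Dmat = diag(n), dvec = y, Amat = t(A), bvec = rep(0, n-2))
plot(x, y)
lines(x, sol$solution, col = "red")   # the fitted theta's, interpolated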

m^\star is then a piecewise linear function (obtained by interpolating the consecutive pairs (x_i,\theta_i^\star)).

If m is differentiable, m is convex if and only if m(\mathbf{x})+ \nabla m(\mathbf{x})^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{x},\mathbf{y}.

More generally, if m is convex, then there exists \xi_{\mathbf{x}}\in\mathbb{R}^n such that m(\mathbf{x})+ \xi_{\mathbf{x}}^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y})
\xi_{\mathbf{x}} is a subgradient of m at {\mathbf{x}}. And then the subdifferential is the set of all such subgradients, \partial m(\mathbf{x})=\big\lbrace \xi\in\mathbb{R}^n : m(\mathbf{x})+ \xi^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}),\forall \mathbf{y}\in\mathbb{R}^n\big\rbrace

Hence, \mathbf{\theta}^\star is the solution of \text{argmin}\big\lbrace\|\mathbf{y}-\mathbf{\theta}\|^2\big\rbrace\text{ subject to }\theta_i+\xi_i^{\text{T}}[\mathbf{x}_j-\mathbf{x}_i]\leq\theta_j,~\forall i,j and \xi_1,\cdots,\xi_n\in\mathbb{R}^n. Now, to do it for real, use the cobs package for constrained (b)splines regression,

library(cobs)

To get a convex regression, use

plot(cars)
x = cars$speed
y = cars$dist
rc = conreg(x,y,convex=TRUE)
lines(rc, col = 2)


Here we can get the values of the knots

rc
 
Call:  conreg(x = x, y = y, convex = TRUE) 
Convex regression: From 19 separated x-values, using 5 inner knots,
     7,    8,    9,   20,   23.
RSS =  1356; R^2 = 0.8766;
 needed (5,0) iterations

and actually, if we use them in a linear-spline regression, we get the same output here

library(splines)
reg = lm(dist~bs(speed,degree=1,knots=c(4,7,8,9,20,23,25)),data=cars)
u = seq(4,25,by=.1)
v = predict(reg,newdata=data.frame(speed=u))
lines(u,v,col="green")

Let us add vertical lines for the knots

abline(v=c(4,7,8,9,20,23,25),col="grey",lty=2)

Parallelizing Linear Regression or Using Multiple Sources

My previous post explained how it was mathematically possible to parallelize computation to estimate the parameters of a linear regression. More specifically, we have an n\times k matrix \mathbf{X} and an n-dimensional vector \mathbf{y}, and we want to compute \widehat{\mathbf{\beta}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y} by splitting the job. Instead of using the n observations all at once, we’ve seen that it was possible to compute “something” using the first n_1 rows, then the next n_2 rows, etc. Then, finally, we “aggregate” the m objects created to get our overall estimate.

Parallelizing on multiple cores

Let us see how it works from a computational point of view, to run each computation on a different core of the machine. Each core will be a slave, computing what we’ve seen in the previous post. Here, the data we use are

y = cars$dist
X = data.frame(1,cars$speed)
k = ncol(X)

On my laptop, I have three cores, so we will split it in m=3 chunks

library(parallel)
library(pbapply)
ncl = detectCores()-1
cl = makeCluster(ncl)

This is more or less what we will do: we have our dataset, and we split the jobs,

We can then create lists containing elements that will be sent to each core, as Ewen suggested,

chunk = function(x,n) split(x, cut(seq_along(x), n, labels = FALSE))
a_parcourir = chunk(seq_len(nrow(X)), ncl)
for(i in 1:length(a_parcourir)) a_parcourir[[i]] = rep(i, length(a_parcourir[[i]]))
Xlist = split(X, unlist(a_parcourir))
ylist = split(y, unlist(a_parcourir))

It is also possible to simplify the QR functions we will use

compute_qr = function(x){
  list(Q=qr.Q(qr(as.matrix(x))),R=qr.R(qr(as.matrix(x))))
}
get_Vlist = function(j){
  Q3 = QR1[[j]]$Q %*% Q2list[[j]]
  t(Q3) %*% ylist[[j]]
}
clusterExport(cl, c("compute_qr", "get_Vlist"), envir=environment())

Then, we can run our functions on each core. The first one is

  QR1 = parLapply(cl=cl,Xlist, compute_qr)

note that it is also possible to use

  QR1 = pblapply(Xlist, compute_qr, cl=cl)

which will include a progress bar (that can be nice when the database is rather large). Then use

  library(magrittr)   # for the pipe operator
  R1 = pblapply(QR1, function(x) x$R, cl=cl) %>% do.call("rbind", .)
  Q1 = qr.Q(qr(as.matrix(R1)))
  R2 = qr.R(qr(as.matrix(R1)))
  Q2list = split.data.frame(Q1, rep(1:ncl, each=k))
  clusterExport(cl, c("QR1", "Q2list", "ylist"), envir=environment())
  Vlist = pblapply(1:length(QR1), get_Vlist, cl=cl)
  sumV = Reduce('+', Vlist)

and finally the output is

solve(R2) %*% sumV
         [,1]
X1 -17.579095
X2   3.932409

which is what we were expecting…
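and this is indeed consistent with the standard (single machine) least squares output,

lm(dist ~ speed, data = cars)$coefficients
(Intercept)       speed 
 -17.579095    3.932409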

Using multiple sources

In practice, it might also happen that various “servers” have the data, but we cannot get a copy. But it is possible to run some functions on their server, and get some output, that we can use afterwards.

Datasets are supposed to be available somewhere. We can send a request, and get a matrix. Then we aggregate all of them, and send another request. That’s what we will do here. Provider j should run f_1(\mathbf{X}) on his part of the data, and that function will return R^{(1)}_j. More precisely, to the first provider, send

function1 = function(subX){
return(qr.R(qr(as.matrix(subX))))}
R1 = function1(Xlist[[1]])

and actually, send that function to all providers, and aggregate the output

m = ncl   # number of providers = number of chunks created above
for(j in 2:m) R1 = rbind(R1,function1(Xlist[[j]]))

Then create, on your side, the following objects

Q1 = qr.Q(qr(as.matrix(R1)))
R2 = qr.R(qr(as.matrix(R1)))
Q2list=list()
for(j in 1:m) Q2list[[j]] = Q1[(j-1)*k+1:k,]

Finally, contact one last time the providers, and send one of your objects

function2=function(subX,suby,Q){
Q1=qr.Q(qr(as.matrix(subX)))
Q2=Q
return(t(Q1%*%Q2) %*% suby)}

Provider j should then run f_2(\mathbf{X},\mathbf{y},Q_j^{(2)}) on his part of the data, using also Q_j^{(2)} as an argument (that we obtained on our side), and that function will return (\mathbf{Q}^{(1)}_j\mathbf{Q}^{(2)}_j)^{T}\mathbf{y}_j. For instance, ask the first provider to run

sumV = function2(Xlist[[1]],ylist[[1]], Q2list[[1]])

and do the same with all providers

for(j in 2:m) sumV = sumV+ function2(Xlist[[j]],ylist[[j]], Q2list[[j]])
solve(R2) %*% sumV
         [,1]
X1 -17.579095
X2   3.932409

which is what we were expecting…

Discrete or continuous modeling?

Tuesday, we had our conference “Insurance, Actuarial Science, Data & Models”, and Dylan Possamaï gave a very interesting concluding talk. In the introduction, he came back briefly on a nice discussion we usually have in economics, on the kind of model we should consider. It was about optimal control. In many applications, we start with a one-period economy, then a two-period economy, and pretend that we can extend it to an n-period economy. And then, the continuous case can also be considered. A few years ago, I was working on sports games, looking at optimal effort strategies (within a game, i.e. over a fixed time). It was with a discrete model; I was running simulations to get an efficient frontier, where coaches might say “ok, now we have enough (positive) difference, and we get closer to the end of the game, so we can ‘lower the effort’, i.e. top players can relax a little bit” (it was on basketball games). I asked a good friend of mine, Romuald, to help me on some technical parts of the proofs, but he did not like my discrete-time model so much, and wanted to move to continuous time. And for six years now, we keep saying that someday we should get back to that paper….

My initial thoughts were that the difference was really “cultural”: you are either a continuous-time sort of guy, or a discrete-time one (or maybe neither of the two, but that’s another problem). He works with stochastic processes, I work with time series. Of course, we can find connections, but most of the time, the techniques are very different. And Tuesday, Dylan mentioned a very nice illustration that it’s not necessarily a cultural difference: sometimes, it is simply great to move to continuous time. So I wanted to illustrate that idea.

Consider for instance the following curve.

vu = seq(0,1,length=601)
vv = sin(vu*pi)
plot(vu,vv,type="l",lwd=2)

The goal is to find the value of the maximum, numerically. And here, there are two (very) different strategies

  • the discrete one: we see a (finite) collection of points – for instance, the graph above is a collection of 601 points (connected with a straight line) – and in that case, we need a standard algorithm (in O(n)) to get the value of the maximum
  • the continuous one: we see a function x\mapsto \sin(\pi x), and in that case, we use optimization routines

In the second case, use for instance

optim(0,function(x) -sin(pi*x))
$par
[1] 0.5
 
$value
[1] -1

For the first case, we can use the standard R function, and see how long it takes to use simulations to get an approximation of the maximum

library(microbenchmark)
max_time = function(n) median(microbenchmark(max(sin(runif(n)*pi)))$time)
vn = 10^(seq(1,6,length=21))
vt = Vectorize(max_time)(vn)
plot(vn,vt/1e9,col="blue",pch=19,type="b",log="xy")

but of course, some home-made code can also be used

c_max = function(n=100){
  x = sin(runif(n)*pi)
  y = x[1]
  for(i in 2:length(x)) { 
    if(x[i] > y) { y = x[i] }}
  return(y)}
max_time = function(n) median(microbenchmark(c_max(n))$time)
vt2 = Vectorize(max_time)(vn)
lines(vn,vt2/1e9,type="b")

We can add that horizontal red line using

abline(h=median(microbenchmark(optim(.5,function(x) -sin(pi*x)))$time)/1e9,lty=2,col="red")

So, indeed, it looks like the computational time to find the maximum in a list of n elements is linear in n, i.e. O(n). And the built-in R function is faster than home-made code. But also, interestingly, using continuous time (based on analysis techniques) can be much faster. So, sometimes, continuous-time models can be much easier to solve, from a numerical perspective.

Classification from scratch, boosting 11/8

Eleventh post of our series on classification from scratch. Today, that should be the last one… unless I forgot something important. So today, we discuss boosting.

An econometrician perspective

I might start with a non-conventional introduction. But that’s actually how I understood what boosting was about. And I am quite sure it has to do with my background in econometrics.

The goal here is to solve something which looks like m^\star=\underset{m\in\mathcal{M}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m(\mathbf{x}_i))\right\rbrace for some loss function \ell, and for some set of predictors \mathcal{M}. This is an optimization problem. Well, optimization is here in a function space, but still, it is simply an optimization problem. And from a numerical perspective, optimization is solved using gradient descent (this is why this technique is also called gradient boosting). And the gradient descent can be visualized like below

Again, the optimum is not some real value x^\star, but some function m^\star. Thus, here we will have something like m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(y_i,m^{(k-1)}(\mathbf{x}_i)+h(\mathbf{x}_i))\right\rbrace (as they write it in serious articles) where the term on the right can also be written m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(\underbrace{y_i-m^{(k-1)}(\mathbf{x}_i)}_{\varepsilon_{k,i}},h(\mathbf{x}_i))\right\rbrace I prefer the latter, because we see clearly that h is some model we fit on the remaining residuals.

We can rewrite it like that: define r_{i,k}=-\left.\frac{\partial \ell(y_i,m(\mathbf{x}_i))}{\partial m(\mathbf{x}_i)}\right\vert_{m(\mathbf{x}_i)=m^{(k-1)}(\mathbf{x}_i)} for all i=1,\cdots,n. The goal is to fit a model h^\star so that r_{i,k}\approx h^\star(\mathbf{x}_i), and when we have that optimal function, set m_k(\mathbf{x})=m_{k-1}(\mathbf{x})+\gamma_k h^\star(\mathbf{x}) (yes, we can include some shrinkage here).
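To make that recipe explicit, here is a minimal generic sketch (of my own) for the quadratic loss – where the gradient residual r_{i,k} is simply y_i-m^{(k-1)}(\mathbf{x}_i) – with trees as weak learners, assuming a data frame df with columns x and y (as constructed below),

library(rpart)
boost = function(df, n_iter = 100, gamma = .1){
  m = rep(0, nrow(df))             # current fit, m^(k-1)(x_i)
  for(k in 1:n_iter){
    df$r = df$y - m                # gradient residuals, for the quadratic loss
    fit = rpart(r ~ x, data = df)  # fit a weak learner on those residuals
    m = m + gamma * predict(fit, newdata = df)  # shrunken update
  }
  m
}

(the spline and stump versions used below follow exactly that pattern).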

Two important comments here. First of all, the idea should seem weird to any econometrician. We first fit a model to explain y by some covariates \mathbf{x}. Then we consider the residuals \widehat{\varepsilon}, and try to explain them with the same covariates \mathbf{x}. If you try that with a linear regression, you’d be done at the end of step 1, since the residuals \widehat{\varepsilon} are orthogonal to the covariates \mathbf{x}: there is no way we can learn from them. Here it works because we consider simple non-linear models. And actually, something that can be used is to add a shrinkage parameter: do not consider \widehat{\varepsilon}=y-\widehat{m}(\mathbf{x}) but \widehat{\varepsilon}=y-\gamma\widehat{m}(\mathbf{x}). The idea of weak learners is extremely important here. The more we shrink, the longer it will take, but that’s not (too) important.

I should also mention that it’s nice to keep learning from our mistakes. But somehow, we should stop, someday. I said that I would not mention this part in this series of posts, maybe later on. But heuristically, we should stop when we start to overfit. And this can be observed either using a training/validation split of the initial dataset, or using cross-validation. I will get back to that issue later on in this post, but again, those ideas should probably be dedicated to another series of posts.

Learning with splines

Just to make sure we get it, let’s try to learn with splines. Because standard splines have fixed knots, we do not really “learn” here (after a few iterations we get what we would have obtained with a standard spline regression). So here, we will (somehow) optimize the knot locations. There is a package to do so. And just to illustrate, let us use a Gaussian regression here, not a classification (we will do that later on). Consider the following dataset (with only one covariate)

n=300
 set.seed(1)
 u=sort(runif(n)*2*pi)
 y=sin(u)+rnorm(n)/4
 df=data.frame(x=u,y=y)

For an optimal choice of knot locations, we can use

library(freeknotsplines)
xy.freekt=freelsgen(df$x, df$y, degree = 1, numknot = 2, 555)

With 5% shrinkage, the code is simply the following

v=.05
 library(splines)
 xy.freekt=freelsgen(df$x, df$y, degree = 1, numknot = 2, 555)
 fit=lm(y~bs(x,degree=1,knots=xy.freekt@optknot),data=df)
 yp=predict(fit,newdata=df)
 df$yr=df$y - v*yp
 YP=v*yp
 for(t in 1:200){
   xy.freekt=freelsgen(df$x, df$yr, degree = 1, numknot = 2, 555)
   fit=lm(yr~bs(x,degree=1,knots=xy.freekt@optknot),data=df)
   yp=predict(fit,newdata=df)
   df$yr=df$yr - v*yp
   YP=cbind(YP,v*yp)}
 nd=data.frame(x=seq(0,2*pi,by=.01))
 viz=function(M){
    if(M==1)  y=YP[,1]
    if(M>1)   y=apply(YP[,1:M],1,sum)
    plot(df$x,df$y,ylab="",xlab="")
    lines(df$x,y,type="l",col="red",lwd=3)
    fit=lm(y~bs(x,degree=1,df=3),data=df)
    yp=predict(fit,newdata=nd)
    lines(nd$x,yp,type="l",col="blue",lwd=3)
    lines(nd$x,sin(nd$x),lty=2)}

To visualize the output after 100 iterations, use

viz(100)


Clearly, we see that we learn from the data here… Cool, isn’t it?

Learning with stumps (and trees)

Let us try something else. What if we consider, at each step, a regression tree instead of the piecewise-linear regression we had with linear splines?

library(rpart)
v=.1 
fit=rpart(y~x,data=df)
yp=predict(fit)
df$yr=df$y - v*yp
YP=v*yp
for(t in 1:100){
  fit=rpart(yr~x,data=df)
  yp=predict(fit,newdata=df)
  df$yr=df$yr - v*yp
  YP=cbind(YP,v*yp)}

Again, to visualize the learning process, use

viz=function(M){
y=apply(YP[,1:M],1,sum)
plot(df$x,df$y,ylab="",xlab="")
lines(df$x,y,type="s",col="red",lwd=3)
fit=rpart(y~x,data=df)
yp=predict(fit,newdata=nd)
lines(nd$x,yp,type="s",col="blue",lwd=3)
lines(nd$x,sin(nd$x),lty=2)}


This time, with those trees, it looks like we not only have a good model, but also a model different from the one we would get using a single regression tree.

What if we change the shrinkage parameter?

viz=function(v=0.05){
  fit=rpart(y~x,data=df)
  yp=predict(fit)
  df$yr=df$y - v*yp
  YP=v*yp
  for(t in 1:100){
    fit=rpart(yr~x,data=df)
    yp=predict(fit,newdata=df)
    df$yr=df$yr - v*yp
    YP=cbind(YP,v*yp)}
  y=apply(YP,1,sum)
    plot(df$x,df$y,xlab="",ylab="")
    lines(df$x,y,type="s",col="red",lwd=3)
    fit=rpart(y~x,data=df)
    yp=predict(fit,newdata=nd)
    lines(nd$x,yp,type="s",col="blue",lwd=3)
    lines(nd$x,sin(nd$x),lty=2)}


There is clearly an impact of that shrinkage parameter. It has to be small to get a good model. This is the idea of using weak learners to get a good prediction.

Classification and Adaboost

Now that we understand how boosting works, let’s try to adapt it to classification. It will be more complicated because residuals are usually not very informative in a classification problem. And it will be hard to shrink. So let’s try something slightly different, and introduce the adaboost algorithm.

In our initial discussion, the goal was to minimize a convex loss function. Here, if we express classes as \{-1,+1\}, the loss function we consider is e^{-y\cdot m(\mathbf{x})} (this product y\cdot m(\mathbf{x}) was already discussed when we’ve seen the SVM algorithm). Note that the loss function related to the logistic model would be \log(1+e^{-y\cdot m(\mathbf{x})}).
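Just to visualize those loss functions, as functions of the product y\cdot m(\mathbf{x}),

u = seq(-2, 2, length = 201)
plot(u, exp(-u), type = "l", col = "red", lwd = 2,
     xlab = "y.m(x)", ylab = "loss")          # exponential loss (adaboost)
lines(u, log(1 + exp(-u)), col = "blue", lwd = 2)  # logistic loss
lines(u, 1*(u < 0), lty = 2)                  # 0/1 misclassification loss, for reference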

What we do here is related to gradient descent (or the Newton algorithm). Previously, we were learning from our errors: at each iteration, the residuals were computed, and a (weak) model was fitted to these residuals. Then the contribution of this weak model was used in a gradient descent optimization process. Here things will be different, because (from my understanding) it is more difficult to play with residuals: null residuals never exist in classification. So we will add weights. Initially, all the observations will have the same weight. But iteratively, we will change them. We will increase the weights of the wrongly predicted individuals, and decrease the ones of the correctly predicted individuals. Somehow, we want to focus more on the difficult predictions. That’s the trick. And I guess that’s why it performs so well. This algorithm is well described on wikipedia, so we will use it.

We start with \mathbf{\omega}_0=\mathbf{1}/n, then at each step fit a model (a classification tree) with weights \mathbf{\omega}_k (we did not discuss weights in the algorithms for trees, but it is actually straightforward in the formula). Let \widehat{h}_{\mathbf{\omega}_k} denote that model (i.e. the probability in each leaf). Then consider the classifier 2~\mathbf{1}[\widehat{h}_{\mathbf{\omega}_k}(\cdot)>0.5]-1, which returns a value in \{-1,+1\}. Then set \varepsilon_k=\sum_{i\in\mathcal{I}_k}\omega_i where \mathcal{I}_k is the set of misclassified individuals, \mathcal{I}_k=\big\lbrace i:2~\mathbf{1}[\widehat{h}_{\mathbf{\omega}_k}(\mathbf{x}_i)>0.5]-1\neq y_i\big\rbrace. Then set \alpha_k = \frac{1}{2} \ln \left(\frac{1-\varepsilon_k}{\varepsilon_k}\right) and finally update the model using m_{k+1}=m_k+\alpha_k\widehat{h}_{\mathbf{\omega}_k} as well as the weights \omega_{i,k+1}=\omega_{i,k}\, e^{-y_i \alpha_k \widehat{h}_{\mathbf{\omega}_k}(\mathbf{x}_i)} (of course, divide by the sum to ensure that the total sum is then 1). And as previously, one can include some shrinkage. To visualize the convergence of the process, we will plot the total error on our dataset.

n_iter = 100
y = (myocarde[,"PRONO"]==1)*2-1
x = myocarde[,1:7]
error = rep(0,n_iter) 
f = rep(0,length(y)) 
w = rep(1,length(y)) # initial weights (uniform; normalized inside the loop)
alpha = 1
library(rpart)
for(i in 1:n_iter){
  w = exp(-alpha*y*f) *w 
  w = w/sum(w)
  rfit = rpart(y~., x, w, method="class")
  g = -1 + 2*(predict(rfit,x)[,2]>.5) 
  e = sum(w*(y*g<0))
  alpha = .5*log ( (1-e) / e )
  alpha = 0.1*alpha 
  f = f + alpha*g
  error[i] = mean(1*f*y<0)
}
plot(seq(1,n_iter),error,type="l",
     ylim=c(0,.25),col="blue",
     ylab="Error Rate",xlab="Iterations",lwd=2)


Here we face a classical problem in machine learning: we have a perfect model, with zero error. That is nice, but not interesting. It is also possible in econometrics, with polynomial fits: with 10 observations and a polynomial of degree 9, we have a perfect fit, but a poor model. Here it is the same. So the trick is to split our dataset in two: a training dataset, and a validation one

set.seed(123)
id_train = sample(1:nrow(myocarde), size=45, replace=FALSE)
train_myocarde = myocarde[id_train,]
test_myocarde = myocarde[-id_train,]

We construct the model on the first one, and we check on the second one that it’s not that bad…

y_train = (train_myocarde[,"PRONO"]==1)*2-1
x_train =  train_myocarde[,1:7]
y_test = (test_myocarde[,"PRONO"]==1)*2-1
x_test = test_myocarde[,1:7]
train_error = rep(0,n_iter) 
test_error = rep(0,n_iter)
f_train = rep(0,length(y_train))
f_test = rep(0,length(y_test)) 
w_train = rep(1,length(y_train)) 
alpha = 1
for(i in 1:n_iter){
  w_train = w_train*exp(-alpha*y_train*f_train) 
  w_train = w_train/sum(w_train)
  rfit = rpart(y_train~., x_train, w_train, method="class")
  g_train = -1 + 2*(predict(rfit,x_train)[,2]>.5)
  g_test = -1 + 2*(predict(rfit,x_test)[,2]>.5)
  e_train = sum(w_train*(y_train*g_train<0))
  alpha = .5*log ( (1-e_train) / e_train )
  alpha = 0.1*alpha 
  f_train = f_train + alpha*g_train
  f_test = f_test + alpha*g_test
  train_error[i] = mean(1*f_train*y_train<0)
  test_error[i] = mean(1*f_test*y_test<0)}
plot(seq(1,n_iter),test_error,col='red')
lines(train_error,lwd=2,col='blue')


Here, as previously, after 80 iterations, we have a perfect model on the training dataset, but it behaves badly on the validation dataset. But with 20 iterations, it seems to be ok…

R function

Of course, it’s possible to use R functions,

library(gbm)
gbmWithCrossValidation = gbm(PRONO ~ .,distribution = "bernoulli",
data = myocarde,n.trees = 2000,shrinkage = .01,cv.folds = 5,n.cores = 1)
bestTreeForPrediction = gbm.perf(gbmWithCrossValidation)

Here cross-validation is considered, and not training/validation, as well as forests instead of single trees, but overall, the idea is the same… Of course, the output is much nicer (here the shrinkage is a very small parameter, and learning is extremely slow)

Classification from scratch, trees 9/8

Ninth post of our series on classification from scratch. Today, we’ll see the heuristics of the algorithm inside classification trees. And yes, I promised eight posts in that series, but clearly, that was not sufficient… sorry for the poor prediction.

Decision Tree

Decision trees are easy to read. So easy to read that they are everywhere

We start from the top, and we go down, with a binary choice at each step, at each node. Let us see how it works on our dataset

library(rpart)
cart = rpart(PRONO~.,data=myocarde)
library(rpart.plot)
prp(cart,type=2,extra=1)


We start here with one single leaf. If we have two explanatory variables (the x-axis and the y-axis if we want to plot them), we will check what happens if we cut the leaf according to the value of the first variable (and there will be two subgroups, the one on the left and the one on the right)

or if we cut according to the second one (and there will be two subgroups, the one on top and the one below).

Why and where do we cut? Let us formalize a little bit. A node (a leaf) contains observations, i.e. \{(y_i,\mathbf{x}_i)\} for some i\in\mathcal{I}\subset\{1,\cdots,n\}. Hence, a leaf is characterized by \mathcal{I}. For instance, the first node in the tree is \mathcal{I}=\{1,\cdots,n\}. A (binary) split is based on one specific variable – say x_j – and a cutoff, say s. Then, there are two options:

  • either x_{i,j}\leq s, then observation i goes on the left, in \mathcal{I}_L
  • or x_{i,j}> s, then observation i goes on the right, in \mathcal{I}_R

Thus, \mathcal{I}=\mathcal{I}_L\cup\mathcal{I}_R.

Now, define some impurity index for a node. In the context of a classification tree, the most popular index (the so-called impurity index) is the Gini index, defined for node \mathcal{I} as G(\mathcal{I})=-\sum_{y\in\{0,1\}}p_y(1-p_y) where p_y is the proportion of individuals in the leaf of type y. I use this notation here because it can be extended to the case of more than one class. Here, we consider only binary classification. Now, why p_y(1-p_y)? Because we want leaves that are extremely homogeneous. In our dataset, out of 71 individuals, 42 died, 29 survived. A perfect classification would be obtained if we could split in two, with the 29 survivors on the left, and the 42 dead on the right. In that case, leaves would be perfectly homogeneous. So, when p_0\approx1 or p_1\approx1, we have strong homogeneity. If we want an index to maximize, -p_y(1-p_y) might be an interesting candidate. Furthermore, the worst case would be a leaf with p_0\approx1/2, which is roughly what we have here, at the root node. Note that we can also write G(\mathcal{I})=-\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\left(1-\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\right) where n_{y,\mathcal{I}} is the number of individuals of type y in the leaf \mathcal{I}, and n_{\mathcal{I}} is the number of individuals in the leaf \mathcal{I}.
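For instance, at the root node of our dataset (42 dead, 29 survivors), with two classes the index is simply -2p(1-p),

p = 42/71
-2 * p * (1-p)
[1] -0.4832375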

If we do not split, the index is G(\mathcal{I})=-\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\left(1-\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\right) while, if we split, define the index G(\mathcal{I}_L,\mathcal{I}_R)=-\sum_{x\in\{L,R\}}\frac{n_{\mathcal{I}_x}}{n_{\mathcal{I}}}\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\left(1-\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\right). The code to compute it would be

gini = function(y,classe){
  T = table(y,classe)
  nx = apply(T,2,sum)
  n = sum(T)
  pxy = T/matrix(rep(nx,each=2),nrow=2)
  omega = matrix(rep(nx,each=2),nrow=2)/n
  g = -sum(omega*pxy*(1-pxy))
  return(g)}

Actually, one can consider other indices, like the entropy measure E(\mathcal{I})=-\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\log\left(\frac{n_{y,\mathcal{I}}}{n_{\mathcal{I}}}\right) while, if we split, E(\mathcal{I}_L,\mathcal{I}_R)=-\sum_{x\in\{L,R\}}\frac{n_{\mathcal{I}_x}}{n_{\mathcal{I}}}\sum_{y\in\{0,1\}}\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\log\left(\frac{n_{y,\mathcal{I}_x}}{n_{\mathcal{I}_x}}\right)

entropy = function(y,classe){
  T = table(y,classe)
  nx = apply(T,2,sum)
  n = sum(T)
  pxy = T/matrix(rep(nx,each=2),nrow=2)
  omega = matrix(rep(nx,each=2),nrow=2)/n
  g = sum(omega*pxy*log(pxy))
  return(g)}

This index was originally used in the C4.5 algorithm.

Dividing a leaf (or not)

For instance, consider the very first split. Assume that we want to split according to the very first variable

CLASSE = myocarde[,1] <=100
table(CLASSE)
CLASSE
FALSE  TRUE 
   13    58

In that case, there will be 13 individuals on one side, and 58 on the other side.

gini(y=myocarde$PRONO,classe=CLASSE)
[1] -0.4640415

Initially, without any split, it was

-2*mean(myocarde$PRONO)*(1-mean(myocarde$PRONO))
[1] -0.4832375

which can actually also be obtained with

CLASSE = myocarde[,1] <= Inf
gini(y=myocarde$PRONO,classe=CLASSE)
[1] -0.4832375

There is a net gain in splitting of

gini(y=myocarde$PRONO,classe=(myocarde[,1]<=100))-
gini(y=myocarde$PRONO,classe=(myocarde[,1]<=Inf))
[1] 0.01919591

Now, how do we split? Which variable and which cutoff? Well… let’s try all possible splits… Here, we have 7 variables. We can consider all possible values, using

sort(unique(myocarde[,1]))

But in massive datasets, it can be very long. Here, I prefer

seq(min(myocarde[,1]),max(myocarde[,1]),length=101)

so that we try 101 possible cutoff values. Overall, the number of computations is rather low, with 707 Gini indices to compute. Again, I won’t get back here on the motivations for such a technique to create partitions, I will keep that for the course in Barcelona, but it is fast.

mat_gini = mat_v = matrix(NA,7,101)
for(v in 1:7){
  variable = myocarde[,v]
  v_seuil = seq(quantile(myocarde[,v], 6/length(myocarde[,v])),
                quantile(myocarde[,v], 1-6/length(myocarde[,v])), length=101)
  mat_v[v,] = v_seuil
  for(i in 1:101){
    CLASSE = variable <= v_seuil[i]
    mat_gini[v,i] = gini(y=myocarde$PRONO, classe=CLASSE)}}

Actually, the range of possible values is slightly different: I do not want cutoffs too far on the left or on the right… having a leaf with one or two observations is not the idea here. Now, if we plot all the functions, we get

par(mfrow=c(3,2))
for(v in 2:7){
  plot(mat_v[v,],mat_gini[v,],type="l",
  ylim=range(mat_gini),xlab="",ylab="",
  main=names(myocarde)[v]) 
  abline(h=max(mat_gini),col="blue")
}


Here, the most homogeneous leaves are obtained when cutting in two parts using variable ‘INSYS’. And the optimal cutoff is close to 19. So far, that’s the only information we use. Well, actually no. If the gain is sufficiently large, we go for a split. Here, the gain is

gini(y=myocarde$PRONO,classe=(myocarde[,3]<19))-
gini(y=myocarde$PRONO,classe=(myocarde[,3]<=Inf))
[1] 0.2832801

which is large. Sufficiently large to go for it, and to split in two. Actually, we look at the relative gain

-(gini(y=myocarde$PRONO,classe=(myocarde[,3]<19))-
gini(y=myocarde$PRONO,classe=(myocarde[,3]<=Inf)))/
gini(y=myocarde$PRONO,classe=(myocarde[,3]<=Inf))
[1] 0.5862131

If that gain exceeds 1% (the default value in R), we split in two.

Then, we do it again. Twice. First, we go to the leaf on the left, with 27 observations, and we try to see if we can split it.

idx = which(myocarde$INSYS<19)
mat_gini = mat_v = matrix(NA,7,101)
for(v in 1:7){
  variable = myocarde[idx,v]
  v_seuil = seq(quantile(myocarde[idx,v], 7/length(myocarde[idx,v])),
                quantile(myocarde[idx,v], 1-7/length(myocarde[idx,v])), length=101)
  mat_v[v,] = v_seuil
  for(i in 1:101){
    CLASSE = variable <= v_seuil[i]
    mat_gini[v,i] = gini(y=myocarde$PRONO[idx], classe=CLASSE)}}
par(mfrow=c(3,2))
for(v in 2:7){
  plot(mat_v[v,],mat_gini[v,],type="l",
       ylim=range(mat_gini),xlab="",ylab="",
       main=names(myocarde)[v]) 
  abline(h=max(mat_gini),col="blue")
}

The graph is here the following,

and observe that the best split is obtained using ‘REPUL’, with a cutoff around 1585. We check that the (relative) gain is sufficiently large, and then we go for it.
And then, we consider the other leaf, and we run the same code

idx = which(myocarde$INSYS>=19)
mat_gini = mat_v = matrix(NA,7,101)
for(v in 1:7){
  variable = myocarde[idx,v]
  v_seuil = seq(quantile(myocarde[idx,v], 6/length(myocarde[idx,v])),
                quantile(myocarde[idx,v], 1-6/length(myocarde[idx,v])), length=101)
  mat_v[v,] = v_seuil
  for(i in 1:101){
    CLASSE = variable <= v_seuil[i]
    mat_gini[v,i] = gini(y=myocarde$PRONO[idx], classe=CLASSE)}}
par(mfrow=c(3,2))
for(v in 2:7){
  plot(mat_v[v,],mat_gini[v,],type="l",
       ylim=range(mat_gini),xlab="",ylab="",
       main=names(myocarde)[v]) 
  abline(h=max(mat_gini),col="blue")
}


Here, we should split according to ‘REPUL’, and the cutoff is about 1094. Here again, we have to make sure that the split is worth it. And we cut.

Now we have four leaves. And we should run the same code, again. Actually, not on the very first one, which is homogeneous. But we should do the same for the other three. If we do it, we can see that we cannot split them any further: the gains will not be sufficiently interesting.

Now guess what… that’s exactly what we have obtained with our initial code

Note that the case of categorical explanatory variables has been discussed in a previous post, a few years ago.

Application on our small dataset

On our small dataset, we obtain the following (after changing the default values, since in R, we should not have leaves with fewer than 10 observations… and here, the dataset is too small).

tree = rpart(y ~ x1+x2, data=df,
             control = rpart.control(cp = 0.25, minsplit = 7))
prp(tree,type=2,extra=1)
prp(tree,type=2,extra=1)

u = seq(0,1,length=101)
# df, z and the clr10 color palette come from previous posts in this series
p = function(x,y){predict(tree,newdata=data.frame(x1=x,x2=y),type="prob")[,2]}
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

We have a nice and simple cut

With less observations in the leaves, we can easily get a perfect model here

tree = rpart(y ~ x1+x2, data=df,
             control = rpart.control(cp = 0.25, minsplit = 2))
prp(tree,type=2,extra=1)

u = seq(0,1,length=101)
p = function(x,y){predict(tree,newdata=data.frame(x1=x,x2=y),type="prob")[,2]}
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Nice, isn’t it? Now, just two little additional comments before growing some more trees…

Pruning

I did not mention pruning here, because there are two possible strategies when growing trees. Either we keep splitting, until we obtain only homogeneous leaves; once we have a big, deep tree, we go for pruning. Or we use the strategy mentioned here: at each step, we check if the split is worth it. If not, we stop.
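Just to illustrate the first strategy (grow deep, then prune) with rpart – a sketch, using the cross-validated error stored in the complexity table to choose where to cut back,

big = rpart(PRONO~., data=myocarde,
            control=rpart.control(cp=0, minsplit=2))  # grow a deep tree
cp_opt = big$cptable[which.min(big$cptable[,"xerror"]), "CP"]
pruned = prune(big, cp=cp_opt)   # prune back to the cross-validated optimum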

Variable Importance

An interesting tool is the variable importance function. The heuristic idea is that if we use variable ‘INSYS’ to split, it is an important variable. And its importance is related to the gain in Gini index. If we get back to the visualization of the tree, it seems that two variables are interesting here: ‘INSYS’ and ‘REPUL’. And we should get back to previous computation to quantify how important both are.

This will be used in our next post, on random forests. But actually it is not the case here, with one single tree. Let us get back to the graph on the initial node.

Indeed, ‘INSYS’ is important, since we decided to use it. But what about ‘INCAR’ or ‘REPUL’? They were very close… And actually, in R, those surrogate splits are considered in the computation, as briefly explained in the vignette. Let us look more carefully at the output of the R function

cart = rpart(PRONO~., myocarde)
split = summary(cart)$splits

If we look at the first part of that object, we get

split
      count ncat    improve    index       adj
INSYS    71   -1 0.58621312   18.850 0.0000000
REPUL    71    1 0.55440034 1094.500 0.0000000
INCAR    71   -1 0.54257020    1.690 0.0000000
PRDIA    71    1 0.27284114   17.000 0.0000000
PAPUL    71    1 0.20466714   23.250 0.0000000

So indeed, ‘INSYS’ was the most important variable, but surrogate splits can also be considered, and ‘INCAR’ and ‘REPUL’ are indeed very important. The gain was 58% (as we obtained) using ‘INSYS’, but there were gains of 55% with the others (nothing to be ashamed of). So it would be unfair to claim that they have no importance at all. And it is the same for the other leaves that we split,

REPUL    27    1 0.18181818 1585.000 0.0000000
PVENT    27   -1 0.10803571   14.500 0.0000000
PRDIA    27    1 0.10803571   18.500 0.0000000
PAPUL    27    1 0.10803571   22.500 0.0000000
INCAR    27    1 0.04705882    1.195 0.0000000

On the left, we did use ‘REPUL’ (with an 18% gain), but ‘PVENT’, ‘PRDIA’ and ‘PAPUL’ were not that bad, with (almost) 11% gains… We can obtain variable importance by summing all those values, and we have

cart$variable.importance
     INSYS      REPUL      INCAR      PAPUL      PRDIA      FRCAR      PVENT 
10.3649847 10.0510872  8.2121267  3.2441501  2.8276121  1.8623046  0.3373771

that we can visualize using

barplot(t(cart$variable.importance),horiz=TRUE)


To be continued with more trees…

Classification from scratch, linear discrimination 8/8

Eighth post of our series on classification from scratch. The latest one was on the SVM, and today I want to get back to some very old stuff, here also with a linear separation of the space, using Fisher’s linear discriminant analysis.

Bayes (naive) classifier

Consider the following naive classification rule m^\star(\mathbf{x})=\underset{y}{\text{argmax}}\{\mathbb{P}[Y=y\vert\mathbf{X}=\mathbf{x}]\} or m^\star(\mathbf{x})=\underset{y}{\text{argmax}}\left\{\frac{\mathbb{P}[\mathbf{X}=\mathbf{x}\vert Y=y]\cdot\mathbb{P}[Y=y]}{\mathbb{P}[\mathbf{X}=\mathbf{x}]}\right\} (where \mathbb{P}[\mathbf{X}=\mathbf{x}] is the density in the continuous case).

In the case where y takes two values, which will be the standard \{0,1\} here, one can rewrite the latter as m^\star(\mathbf{x})=\begin{cases}1\text{ if }\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})>\displaystyle{\frac{1}{2}}\\0\text{ otherwise}\end{cases} and the set \mathcal{D}_S =\left\{\mathbf{x},\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})=\frac{1}{2}\right\} is called the decision boundary.

Assume that \mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}_0) and \mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}_1); then explicit expressions can be derived: m^\star(\mathbf{x})=\begin{cases}1\text{ if }r_1^2< r_0^2+2\displaystyle{\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}+\log\frac{\vert\mathbf{\Sigma}_0\vert}{\vert\mathbf{\Sigma}_1\vert}}\\0\text{ otherwise}\end{cases} where r_y^2 is the Mahalanobis distance, r_y^2 = [\mathbf{x}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[\mathbf{x}-\mathbf{\mu}_y]

Let \delta_y be defined as \delta_y(\mathbf{x})=-\frac{1}{2}\log\vert\mathbf{\Sigma}_y\vert-\frac{1}{2}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]+\log\mathbb{P}(Y=y). The decision boundary of this classifier is \{\mathbf{x}\text{ such that }\delta_0(\mathbf{x})=\delta_1(\mathbf{x})\}, which is quadratic in {\color{blue}{\mathbf{x}}}. This is quadratic discriminant analysis. This can be visualized below.
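Those scores are easy to code directly; a small sketch (my own illustration), with empirical means, variance matrices and class proportions as plug-in estimates,

delta = function(x, mu, Sigma, prior){
  -.5*log(det(Sigma)) - .5*t(x-mu) %*% solve(Sigma) %*% (x-mu) + log(prior)
}
# predict class 1 when delta(x, mu1, Sigma1, p1) > delta(x, mu0, Sigma0, p0)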

The decision boundary is here

But that can’t be the linear discriminant analysis, right? I mean, the frontier is not linear… Actually, in Fisher’s seminal paper, it was assumed that \mathbf{\Sigma}_0=\mathbf{\Sigma}_1.

In that case, actually, \delta_y(\mathbf{x})={\color{blue}{\mathbf{x}}}^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y-\frac{1}{2}\mathbf{\mu}_y^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y+\log\mathbb{P}(Y=y), and the decision frontier is now linear in {\color{blue}{\mathbf{x}}}. This is linear discriminant analysis. This can be visualized below

Here the two samples have the same variance matrix and the frontier is

Link with the logistic regression

Assume as previously that \mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}) and \mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}); then \log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})} is equal to \mathbf{x}^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]-\frac{1}{2}[\mathbf{\mu}_1+\mathbf{\mu}_0]^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]+\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}, which is linear in \mathbf{x}: \log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}=\mathbf{x}^{\text{T}}\mathbf{\beta}. Hence, when each group has a Gaussian distribution with the same variance matrix, LDA and the logistic regression lead to the same classification rule.

Observe furthermore that the slope is proportional to \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0], as stated in Fisher’s article. But to obtain such a relationship, he observed that the ratio of between and within variances (in the two groups) was \frac{\text{variance between}}{\text{variance within}}=\frac{[\mathbf{\omega}^{\text{T}}\mathbf{\mu}_1-\mathbf{\omega}^{\text{T}}\mathbf{\mu}_0]^2}{\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_1\mathbf{\omega}+\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_0\mathbf{\omega}}, which is maximal when \mathbf{\omega} is proportional to \mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0], when \mathbf{\Sigma}_0=\mathbf{\Sigma}_1=\mathbf{\Sigma}.

Homebrew linear discriminant analysis

To compute vector \mathbf{\omega}

m0 = apply(myocarde[myocarde$PRONO=="0",1:7],2,mean)
m1 = apply(myocarde[myocarde$PRONO=="1",1:7],2,mean)
Sigma = var(myocarde[,1:7])
omega = solve(Sigma)%*%(m1-m0)
omega
                 [,1]
FRCAR -0.012909708542
INCAR  1.088582058796
INSYS -0.019390084344
PRDIA -0.025817110020
PAPUL  0.020441287970
PVENT -0.038298291091
REPUL -0.001371677757

For the constant – in the equation \mathbf{\omega}^T\mathbf{x}+b=0 – if the classes are equiprobable, use

b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2

Application (on the small dataset)

In order to visualize what’s going on, consider the small dataset, with only two covariates,

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
m0 = apply(df[df$y=="0",1:2],2,mean)
m1 = apply(df[df$y=="1",1:2],2,mean)
Sigma = var(df[,1:2])
omega = solve(Sigma)%*%(m1-m0)
omega
         [,1]
x1 -2.640613174
x2  4.858705676


Using R regular function, we get

library(MASS)
fit_lda = lda(y ~x1+x2 , data=df)
fit_lda
 
Coefficients of linear discriminants:
            LD1
x1 -2.588389554
x2  4.762614663

which is (almost) the same coefficient as the one we got with our own code (lda uses the pooled within-class covariance matrix, while we used the overall one). For the constant, use

b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2

If we plot it, we get the red straight line

plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")])
abline(a=b/omega[2],b=-omega[1]/omega[2],col="red")


As we can see (with the blue points), our red line intersects the middle of the segment of the two barycenters

points(m0["x1"],m0["x2"],pch=4)
points(m1["x1"],m1["x2"],pch=4)
segments(m0["x1"],m0["x2"],m1["x1"],m1["x2"],col="blue")
points(.5*m0["x1"]+.5*m1["x1"],.5*m0["x2"]+.5*m1["x2"],col="blue",pch=19)

Of course, we can also use the R function

predlda = function(x,y) predict(fit_lda, data.frame(x1=x,x2=y))$class==1
vu = seq(-.1,1.1,length=251)   # grid for the contour
vv = outer(vu,vu,predlda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5)


One can also consider quadratic discriminant analysis, since it might be difficult to argue that \mathbf{\Sigma}_0=\mathbf{\Sigma}_1

fit_qda = qda(y ~x1+x2 , data=df)

The separation curve is here

plot(df$x1,df$x2,pch=19,
col=c("blue","red")[1+(df$y=="1")])
predqda=function(x,y) predict(fit_qda, data.frame(x1=x,x2=y))$class==1
vv=outer(vu,vu,predqda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5)

Classification from scratch, SVM 7/8

Seventh post of our series on classification from scratch. The latest one was on the neural nets, and today, we will discuss SVM, support vector machines.

A formal introduction

Here y takes values in \{-1,+1\}. Our model will be m(\mathbf{x})=\text{sign}[\mathbf{\omega}^T\mathbf{x}+b] Thus, the space is divided by a (linear) border\Delta:\lbrace\mathbf{x}\in\mathbb{R}^p:\mathbf{\omega}^T\mathbf{x}+b=0\rbrace

The distance from point \mathbf{x}_i to \Delta is d(\mathbf{x}_i,\Delta)=\frac{\vert\mathbf{\omega}^T\mathbf{x}_i+b\vert}{\|\mathbf{\omega}\|}. If the space is linearly separable, the problem is ill posed (there is an infinite number of solutions). So consider
\max_{\mathbf{\omega},b}\left\lbrace\min_{i=1,\cdots,n}\left\lbrace\text{distance}(\mathbf{x}_i,\Delta)\right\rbrace\right\rbrace

The strategy is to maximize the margin. One can prove that we want to solve \max_{\mathbf{\omega},b,m}\left\lbrace\frac{m}{\|\mathbf{\omega}\|}\right\rbrace
subject to y_i\cdot(\mathbf{\omega}^T\mathbf{x}_i+b)\geq m, \forall i=1,\cdots,n. Again, the problem is ill posed (non identifiable), and we can consider m=1: \max_{\mathbf{\omega},b}\left\lbrace\frac{1}{\|\mathbf{\omega}\|}\right\rbrace
subject to y_i\cdot(\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1, \forall i=1,\cdots,n. The optimization objective can then be written \min_{\mathbf{\omega}}\left\lbrace\|\mathbf{\omega}\|^2\right\rbrace

The primal problem

In the separable case, consider the following primal problem,\min_{\mathbf{w}\in\mathbb{R}^d,b\in\mathbb{R}}\left\lbrace\frac{1}{2}\|\mathbf{\omega}\|^2\right\rbracesubject to y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1, \forall i=1,\cdots,n.

In the non-separable case, introduce slack (error) variables \mathbf{\xi} : if y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1, there is no error \xi_i=0.

Let C denote the cost of misclassification. The optimization problem becomes\min_{\mathbf{w}\in\mathbb{R}^d,b\in\mathbb{R},{\color{red}{\mathbf{\xi}}}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\|\mathbf{\omega}\|^2 + C\sum_{i=1}^n\xi_i\right\rbracesubject to y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1-{\color{red}{\xi_i}}, with {\color{red}{\xi_i}}\geq 0, \forall i=1,\cdots,n.

Let us try to code this optimization problem. The dataset is here

n = length(myocarde[,"PRONO"])
myocarde0 = myocarde
myocarde0$PRONO = myocarde$PRONO*2-1
C = .5

and we have to set a value for the cost C. In the (linearly) constrained optimization function in R, we need to provide the objective function f(\mathbf{\theta}) and the gradient \nabla f(\mathbf{\theta}).

f = function(param){
  w  = param[1:7]
  b  = param[8]
  xi = param[8+1:nrow(myocarde)]
  .5*sum(w^2) + C*sum(xi)}
grad_f = function(param){
  w  = param[1:7]
  b  = param[8]
  xi = param[8+1:nrow(myocarde)]
  c(w,0,rep(C,length(xi)))}  # gradient of .5*sum(w^2) is w

and (linear) constraints are written as \mathbf{U}\mathbf{\theta}-\mathbf{c}\geq \mathbf{0}

Ui = rbind(cbind(myocarde0[,"PRONO"]*as.matrix(myocarde[,1:7]),diag(n),myocarde0[,"PRONO"]),
cbind(matrix(0,n,7),diag(n,n),matrix(0,n,1)))
Ci = c(rep(1,n),rep(0,n))

Then we use

constrOptim(theta=p_init, f, grad_f, ui = Ui, ci = Ci)

Observe that something is missing here: we need a starting point for the algorithm, \mathbf{\theta}_0. Unfortunately, I could not think of a simple technique to get a valid starting point (that satisfies those linear constraints).

Let us try something else. Because those functions are quite simple: either linear or quadratic. Actually, one can recognize in the separable case, but also in the non-separable case, a classic quadratic program\min_{\mathbf{z}\in\mathbb{R}^d}\left\lbrace\frac{1}{2}\mathbf{z}^T\mathbf{D}\mathbf{z}-\mathbf{d}\mathbf{z}\right\rbracesubject to \mathbf{A}\mathbf{z}\geq\mathbf{b}.

library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
D = diag(n+7+1)
diag(D)[8+0:n] = 0 
d = matrix(c(rep(0,7),0,rep(C,n)), nrow=n+7+1)
A = Ui
b = Ci
sol = solve.QP(D+eps*diag(n+7+1), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution
(omega = qpsol[1:7])
[1] -0.106642005446 -0.002026198103 -0.022513312261 -0.018958578746 -0.023105767847 -0.018958578746 -1.080638988521
(b     = qpsol[n+7+1])
[1] 997.6289927

Given an observation \mathbf{x}, the prediction is
y=\text{sign}[\mathbf{\omega}^T\mathbf{x}+b]

y_pred = 2*((as.matrix(myocarde0[,1:7])%*%omega+b)>0)-1

Observe that here, we do have a classifier, depending if the point lies on the left or on the right (above or below, etc) the separating line (or hyperplane). We do not have a probability, because there is no probabilistic model here. So far.

The dual problem

The Lagrangian of the separable problem could be written introducing Lagrange multipliers \mathbf{\alpha}\in\mathbb{R}^n, \mathbf{\alpha}\geq \mathbf{0} as\mathcal{L}(\mathbf{\omega},b,\mathbf{\alpha})=\frac{1}{2}\|\mathbf{\omega}\|^2-\sum_{i=1}^n \alpha_i\big(y_i(\mathbf{\omega}^T\mathbf{x}_i+b)-1\big)Somehow, \alpha_i represents the influence of the observation (y_i,\mathbf{x}_i).

Consider the Dual Problem, with \mathbf{G}=[G_{ij}] and G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i
\min_{\mathbf{\alpha}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\mathbf{\alpha}^T\mathbf{G}\mathbf{\alpha}-\mathbf{1}^T\mathbf{\alpha}\right\rbrace
subject to \mathbf{y}^T\mathbf{\alpha}=\mathbf{0} and \mathbf{\alpha}\geq\mathbf{0}.

The Lagrangian of the non-separable problem could be written introducing Lagrange multipliers \mathbf{\alpha},{\color{red}{\mathbf{\beta}}}\in\mathbb{R}^n, \mathbf{\alpha},{\color{red}{\mathbf{\beta}}}\geq \mathbf{0}, and define the Lagrangian \mathcal{L}(\mathbf{\omega},b,{\color{red}{\mathbf{\xi}}},\mathbf{\alpha},{\color{red}{\mathbf{\beta}}}) as\frac{1}{2}\|\mathbf{\omega}\|^2+{\color{blue}{C}}\sum_{i=1}^n{\color{red}{\xi_i}}-\sum_{i=1}^n \alpha_i\big(y_i(\mathbf{\omega}^T\mathbf{x}_i+b)-1+{\color{red}{\xi_i}}\big)-\sum_{i=1}^n{\color{red}{\beta_i}}{\color{red}{\xi_i}}
Somehow, \alpha_i represents the influence of the observation (y_i,\mathbf{x}_i).

The Dual Problem become with \mathbf{G}=[G_{ij}] and G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i\min_{\mathbf{\alpha}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\mathbf{\alpha}^T\mathbf{G}\mathbf{\alpha}-\mathbf{1}^T\mathbf{\alpha}\right\rbrace
subject to \mathbf{y}^T\mathbf{\alpha}=\mathbf{0}, \mathbf{\alpha}\geq\mathbf{0} and \mathbf{\alpha}\leq {\color{blue}{C}}.
As previously, one can also use quadratic programming

library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
Q = sapply(1:n, function(i) y[i]*t(X)[,i])
D = t(Q)%*%Q
d = matrix(1, nrow=n)
A = rbind(y,diag(n),-diag(n))
C = .5
b = c(0,rep(0,n),rep(-C,n))
sol = solve.QP(D+eps*diag(n), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution

The two problems are connected in the sense that, for all \mathbf{x}, \mathbf{\omega}^T\mathbf{x}+b = \sum_{i=1}^n \alpha_i y_i (\mathbf{x}^T\mathbf{x}_i)+b

To recover the solution of the primal problem, \mathbf{\omega}=\sum_{i=1}^n \alpha_iy_i \mathbf{x}_i, thus

omega = apply(qpsol*y*X,2,sum)
omega
                           1                        FRCAR                        INCAR                        INSYS 
 0.0000000000000002439074265  0.0550138658687635215271960 -0.0920163239049630876653652  0.3609571899422952534486342 
                       PRDIA                        PAPUL                        PVENT                        REPUL 
-0.1094017965288692356695677 -0.0485213403643276475207813 -0.0660058643191372279579454  0.0010093656567606212794835

while b=y_i-\mathbf{\omega}^T\mathbf{x}_i for any support vector (but actually, one can add the constant vector in the matrix of explanatory variables).

More generally, consider the following function (to make sure that D is a positive-definite matrix, we use the nearPD function from the Matrix package).

library(Matrix)   # for nearPD
svm.fit = function(X, y, C=NULL) {
 n.samples = nrow(X)
 n.features = ncol(X)
 K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
 for (i in 1:n.samples){
  for (j in 1:n.samples){
   K[i,j] = X[i,] %*% X[j,] }}
 Dmat = outer(y,y) * K
 Dmat = as.matrix(nearPD(Dmat)$mat) 
 dvec = rep(1, n.samples)
 Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
 bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
 res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
 a = res$solution 
 bomega = apply(a*y*X,2,sum)
 return(bomega)
}

On our dataset, we obtain

M = as.matrix(myocarde[,1:7])
center = function(z) (z-mean(z))/sd(z)
for(j in 1:7) M[,j] = center(M[,j])
bomega = svm.fit(cbind(1,M),myocarde$PRONO*2-1,C=.5)
y_pred = 2*((cbind(1,M)%*%bomega)>0)-1
table(obs=myocarde0$PRONO,pred=y_pred)
    pred
obs  -1  1
  -1 27  2
  1   9 33

i.e. 11 misclassifications, out of 71 points (which is also what we got with the logistic regression).

Kernel Based Approach

In some cases, it might be difficult to “separate” by a linear separators the two sets of points, like below,

It might be difficult, here, because we want to find a straight line in the two-dimensional space (x_1,x_2). But maybe, we can distort the space, possibly by adding another dimension

That’s heuristically the idea. Because on the case above, in dimension 3, the set of points is now linearly separable. And the trick to do so is to use a kernel. The difficult task is to find the good one (if any).

A positive kernel on \mathcal{X} is a function K:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R} symmetric, and such that for any n, \forall\alpha_1,\cdots,\alpha_n and \forall\mathbf{x}_1,\cdots,\mathbf{x}_n,\sum_{i=1}^n\sum_{j=1}^n\alpha_i\alpha_j k(\mathbf{x}_i,\mathbf{x}_j)\geq 0.
For example, the linear kernel is k(\mathbf{x}_i,\mathbf{x}_j)=\mathbf{x}_i^T\mathbf{x}_j. That’s what we’ve been using here, so far. One can also define the product kernel k(\mathbf{x}_i,\mathbf{x}_j)=\kappa(\mathbf{x}_i)\cdot\kappa(\mathbf{x}_j) where \kappa is some function \mathcal{X}\rightarrow\mathbb{R}.

Finally, the Gaussian kernel is k(\mathbf{x}_i,\mathbf{x}_j)=\exp[-\|\mathbf{x}_i-\mathbf{x}_j\|^2].

Since it is a function of \|\mathbf{x}_i-\mathbf{x}_j\|, it is also called a radial kernel.

linear.kernel = function(x1, x2) {
 return (x1%*%x2)
}
svm.fit = function(X, y, FUN=linear.kernel, C=NULL) {
 n.samples = nrow(X)
 n.features = ncol(X)
 K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
 for (i in 1:n.samples){
  for (j in 1:n.samples){
   K[i,j] = FUN(X[i,], X[j,])
  }
 }
 Dmat = outer(y,y) * K
 Dmat = as.matrix(nearPD(Dmat)$mat) 
 dvec = rep(1, n.samples)
 Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
 bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
 res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
 a = res$solution 
 bomega = apply(a*y*X,2,sum)
 return(bomega)
}
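For instance, a Gaussian (radial) kernel could be passed through the FUN argument,

gaussian.kernel = function(x1, x2) {
  return(exp(-sum((x1-x2)^2)))   # k(x_i,x_j) = exp(-||x_i - x_j||^2)
}

but note that the last line of svm.fit (which reconstructs a linear \omega from the dual solution) only makes sense with the linear kernel; with a general kernel, predictions should use the dual form \sum_i \alpha_i y_i k(\mathbf{x},\mathbf{x}_i)+b.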

Link to the regression

To relate this duality optimization problem to OLS, recall that y=\mathbf{x}^T\mathbf{\omega}+\varepsilon, so that \widehat{y}=\mathbf{x}^T\widehat{\mathbf{\omega}}, where \widehat{\mathbf{\omega}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}
But one can also write \widehat{y}=\mathbf{x}^T\widehat{\mathbf{\omega}}=\sum_{i=1}^n \widehat{\alpha}_i\cdot \mathbf{x}^T\mathbf{x}_i
where \widehat{\mathbf{\alpha}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\widehat{\mathbf{\omega}}, or conversely, \widehat{\mathbf{\omega}}=\mathbf{X}^T\widehat{\mathbf{\alpha}}.
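We can check that correspondence numerically (a quick illustration, on the myocarde covariates),

X = as.matrix(cbind(1, myocarde[,1:7]))
y = myocarde$PRONO
omega_hat = solve(t(X)%*%X) %*% t(X) %*% y    # OLS estimate
alpha_hat = X %*% solve(t(X)%*%X) %*% omega_hat
max(abs(t(X) %*% alpha_hat - omega_hat))      # zero, up to numerical error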

Application (on our small dataset)

One can actually use a dedicated R package to run a SVM. To get the linear kernel, use

library(kernlab)
df0 = df
df0$y = 2*(df$y=="1")-1
SVM1 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , type="C-svc")

Since the dataset is not linearly separable, there will be some mistakes here

table(df0$y,predict(SVM1))
 
     -1 1
  -1  2 2
  1   1 5

The problem with that function is that it cannot be used to get a prediction for points other than those in the sample (and I could extract neither \omega nor b from the 24 slots of that object). But it’s possible by adding a small option to the function

SVM2 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")

With that function, we convert the distance into some sort of probability. Someday, I will try to replicate the probabilistic version of SVM, I promise, but today, the goal is just to understand what is done when running the SVM algorithm. To visualize the prediction, use

pred_SVM2 = function(x,y){
return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="",
     ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")


Here the cost is C=.5, but of course, we can change it

SVM2 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")
pred_SVM2 = function(x,y){
return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="",
     ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")


As expected, we have a linear separator. But slightly different. Now, let us consider the “Radial Basis Gaussian kernel”

SVM3 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "rbfdot" , prob.model=TRUE, type="C-svc")

Observe that here, we’ve been able to separate the white and the black points

table(df0$y,predict(SVM3))
 
     -1 1
  -1  4 0
  1   0 6
pred_SVM3 = function(x,y){
return(predict(SVM3,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="",
     ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM3(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")


Now, to be completely honest: while I understand the theory of the algorithm used to compute \omega and b with a linear kernel (using quadratic programming), I do not feel comfortable with this R function. Especially if you run it several times… you can get (with exactly the same set of parameters)

or

(to be continued…)

Classification from scratch, neural nets 6/8

Sixth post of our series on classification from scratch. The latest one was on the lasso regression, which was still based on a logistic regression model, assuming that the variable of interest Y has a Bernoulli distribution. From now on, we will discuss techniques that did not originate from those probabilistic models, even if they might still have a probabilistic interpretation. Somehow. Today, we will start with neural nets.

Maybe I should start with a disclaimer. The goal is not to replicate well designed R functions, used for predictive modeling. It is simply to get a basic understanding of what’s going on.

Networks, nodes and edges

First of all, neural nets are nets, or networks. I will skip the parallel with “neural” stuff because it does not help me understand what is happening (all apologies for my poor knowledge of biology, and cells)

So, it’s about some network. Networks have nodes, and edges that connect nodes,

or maybe, to be more specific (at least it helped me understand what’s going on), some sort of flow network,

In such a network, we usually have (here multiple) sources (here \color{red}\{s_1,s_2,s_3\}), on the left, and a sink (here \{\color{blue}t\}), on the right. To continue with this metaphorical introduction, information from the sources should reach the sink. And usually, sources are explanatory variables, \{\mathbf{x}_1,\cdots,\mathbf{x}_p\}, and the sink is our variable of interest \mathbf{y}. And we want to create a graph, from the sources to the sink. We will have directed edges, with only one (unique) direction, where we will put weights. It is not a flow, the parallel with flows will stop here. For instance, the simplest network will be the following one, with no layer (i.e. no node between the source and the sink)

The output here is a binary variable y\in\{0,1\} (it can also be y\in\{-1,+1\} but here, it’s not a big deal). In our network, our output will be y\in(0,1), because it is easier to handle. For instance, consider y=f(\text{something}), for some function f taking values in (0,1). One can consider the sigmoid function f(x)=\frac{1}{1+e^{-x}}=\frac{e^{x}}{e^{x}+1}which is actually the logistic function (so we should not be surprised to have results somehow close to the logistic regression…). This function f is called the activation function, and there are thousands of such functions. If y\in\{-1,+1\}, people consider the hyperbolic tangentf(x)=\tanh(x)={\frac {(e^{x}-e^{-x})}{(e^{x}+e^{-x})}}or the inverse tangent function
f(x)=\tan ^{-1}(x)And as input for such a function, we consider a weighted sum of incoming nodes. So herey_i=f\left(\sum_{j=1}^p\omega_j x_{j,i}\right)We can also add a constant actuallyy_i=f\left(\omega_0+\sum_{j=1}^p\omega_j x_{j,i}\right)So far, we are not far away from the logistic regression. Except that our starting point was a probabilistic model, in the sense that the latter was interpreted as a probability (the probability that Y=1) and we wanted the model with the highest likelihood. But we’ll talk about the selection of weights later on. First, let us construct our first (very simple) neural network. First, we have the sigmoid function

sigmoid = function(x) 1 / (1 + exp(-x))
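Just to visualize the three activation functions mentioned above (the inverse tangent is rescaled by \pi/2 below so that all three take values in comparable ranges):

z = seq(-5, 5, length = 201)
plot(z, sigmoid(z), type = "l", lwd = 2, ylim = c(-1, 1), ylab = "")
lines(z, tanh(z), lwd = 2, col = "red")          # hyperbolic tangent
lines(z, atan(z)/(pi/2), lwd = 2, col = "blue")  # rescaled inverse tangent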

Then consider some weights. In our model with seven explanatory variables, we need 7 weights. Or 8 if we include the constant term. Let us consider \mathbf{\omega}=\mathbf{1},

weights_0 = rep(1,8)
X = as.matrix(cbind(1,myocarde[,1:7]))
y_5_1 = sigmoid(X %*% weights_0)

that’s kind of stupid, because all our predictions are equal to 1, here. Let us try something else, like \mathbf{\omega}=\widehat{\mathbf{\beta}}^{ols}. It is optimized for another criterion, somehow, but at least we have something to visualize what’s going on

weights_0 = lm(PRONO~.,data=myocarde)$coefficients

then use

y_5_1 = sigmoid(X %*% weights_0)

In order to see if we get a “good” prediction, let us plot the ROC curve, and compare it with the one we got with a (simple) logistic regression

library(ROCR)
pred = ROCR::prediction(y_5_1,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
reg = glm(PRONO~.,data=myocarde,family=binomial(link = "logit"))
y_0 = predict(reg,type="response")
pred0 = ROCR::prediction(y_0,myocarde$PRONO)
perf0 = ROCR::performance(pred0,"tpr", "fpr")
plot(perf0,add=TRUE,col="red")


That’s not bad for a very first attempt. Except that we’ve been cheating here, since we did use \mathbf{\omega}=\widehat{\mathbf{\beta}}^{ols}. How, for real, should we choose those weights?

Using a loss function

Well, if we want an “optimal” set of weights, we need to “optimize” an objective function. So we need to quantify the loss of a mistake, between the prediction, and the observation. Consider here a quadratic loss function

loss = function(weights){
  mean( (myocarde$PRONO-sigmoid(X %*% weights))^2) }

It might be stupid to use a quadratic loss function for a classification, but here, it’s not the point. We just want to understand what is the algorithm we use, and the loss function \ell is just one parameter. Then we want to solve\mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n\ell\left(y_i,f(\omega_0+\mathbf{x}_i^T\mathbf{\omega})\right)\right\rbraceThus, consider

weights_1 = optim(weights_0,loss)$par

(where the starting point is the OLS estimate). Again, to see what’s going on, let us visualize the ROC curve

y_5_2 = sigmoid(X %*% weights_1)
pred = ROCR::prediction(y_5_2,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
plot(perf0,add=TRUE,col="red")


That’s not amazing, but again, that’s only a first step.

A single layer

Let us add a single layer in our network.

Those nodes are connected to the sources (incoming from the sources), on the left, and then connected to the sink, on the right. Those nodes are not inter-connected. And again, for that network, we need edges (i.e. series of weights). For instance, on the network above, we did add one single layer, with (only) three nodes.

For such a network, the prediction formula is \mathbf{y}=f\left( \omega_0+ \sum_{h=1}^3\omega_h f_h\left(\omega_{h,0}+ \sum_{j=1}^p \omega_{h,j} x_j\right)\right)or more synthetically\mathbf{y}=f\left( \omega_0+ \sum_{h=1}^3 \omega_hf_h\left(\omega_{h,0}+ \mathbf{x}^T\mathbf{\omega}_h\right)\right)Usually, we consider the same activation function everywhere. Don’t ask me why, I find that weird.

Now, we have a lot of weights to choose. Let us use again OLS estimates

weights_1 <- lm(PRONO~1+FRCAR+INCAR+INSYS+PAPUL+PVENT,data=myocarde)$coefficients
X1 = as.matrix(cbind(1,myocarde[,c("FRCAR","INCAR","INSYS","PAPUL","PVENT")]))
weights_2 <- lm(PRONO~1+INSYS+PRDIA,data=myocarde)$coefficients
X2=as.matrix(cbind(1,myocarde[,c("INSYS","PRDIA")]))
weights_3 <- lm(PRONO~1+PAPUL+PVENT+REPUL,data=myocarde)$coefficients
X3=as.matrix(cbind(1,myocarde[,c("PAPUL","PVENT","REPUL")]))

In that case, we did specify edges, and which sources (explanatory variables) should be used for each additional node. Actually, here, other techniques could have been used, like using a PCA. Each node would then be one of the components. But we’ll use that idea later on…

X = cbind(sigmoid(X1 %*% weights_1), sigmoid(X2 %*% weights_2), sigmoid(X3 %*% weights_3))

But we’re not done here. Those were the weights from the sources to the nodes, in the layer. We still need the weights from the nodes to the sink. Here, let us use a simple average

weights = c(1/3,1/3,1/3)
y_5_3 <- sigmoid(X %*% weights)

Again, we can plot the ROC curve to see what we’ve done…

pred = ROCR::prediction(y_5_3,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
plot(perf0,add=TRUE,col="red")

On back propagation

Now, we need some optimal selection of those weights. Observe that with only 3 nodes, there are already (7+1)\times3+3=27 parameters in that model! Clearly, parsimony is not the major issue when you start using neural nets! If p(\mathbf{x})=f\left( \omega_0+ \sum_{h=1}^3 \omega_hf_h\left(\omega_{h,0}+ \mathbf{x}^T\mathbf{\omega}_h\right)\right)we want to solve\mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n\ell\left(y_i,p(\mathbf{x}_i)\right)\right\rbracefor some loss function, which is\mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n (y_i-p(\mathbf{x}_i))^2 \right\rbracefor the quadratic norm, or\mathbf{\omega}^\star=\text{argmin}\left\lbrace-\frac{1}{n}\sum_{i=1}^n \big(y_i\log p(\mathbf{x}_i)+[1-y_i]\log [1-p(\mathbf{x}_i)]\big) \right\rbraceif we want to use cross-entropy (note the minus sign: the log-likelihood is maximized, so its opposite is minimized).
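For what it’s worth, the cross-entropy version of the loss is also a one-liner (a sketch, with probabilities clipped to avoid log(0); we will stick to the quadratic loss below):

loss_entropy = function(y, p){
  p = pmin(pmax(p, 1e-12), 1 - 1e-12)   # clip to avoid log(0)
  -mean(y*log(p) + (1-y)*log(1-p))
}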

For convenience, let us center all the variables we create; otherwise, we get numerical problems.

center = function(z) (z-mean(z))/sd(z)
loss = function(weights){
weights_1 = weights[0+(1:7)]
weights_2 = weights[7+(1:7)]
weights_3 = weights[14+(1:7)]
weights_  = weights[21+1:4]
X1=X2=X3=as.matrix(myocarde[,1:7])
Z1 = center(X1 %*% weights_1)
Z2 = center(X2 %*% weights_2)
Z3 = center(X3 %*% weights_3)
X = cbind(1,sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
mean( (myocarde$PRONO-sigmoid(X %*% weights_))^2)}

Now that we have our objective function, consider some starting points. We can consider weights from a PCA, and then use a gradient descent algorithm,

library(factoextra)
pca = princomp(myocarde[,1:7])
W = get_pca_var(pca)$contrib
weights_0 = c(W[,1],W[,2],W[,3],c(-1,rep(1,3)/3))
weights_opt = optim(weights_0,loss)$par

The prediction is then obtained using

weights_1 = weights_opt[0+(1:7)]
weights_2 = weights_opt[7+(1:7)]
weights_3 = weights_opt[14+(1:7)]
weights_  = weights_opt[21+1:4]
X1=X2=X3=as.matrix(myocarde[,1:7])
Z1 = center(X1 %*% weights_1)
Z2 = center(X2 %*% weights_2)
Z3 = center(X3 %*% weights_3)
X = cbind(1,sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
y_5_4 = sigmoid(X %*% weights_)

And as previously, why not plot the ROC curve of that model

pred = ROCR::prediction(y_5_4,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
plot(perf0,add=TRUE,col="red")


That’s not too bad. But with 27 coefficients, that’s what we would expect, no?

Using nnet() function

That’s more or less what is done in neural nets functions. Let us now have a look at some dedicated R functions.

library(nnet)
myocarde_minmax = myocarde
minmax = function(z) (z-min(z))/(max(z)-min(z))
for(j in 1:7) myocarde_minmax[,j] = minmax(myocarde_minmax[,j])

Here, variables are linearly transformed, to take values in (0,1). Then we can construct a neural network with one single layer, and three nodes,

model_nnet = nnet(PRONO~.,data=myocarde_minmax,size=3)
summary(model_nnet)
a 7-3-1 network with 28 weights
options were -
 b->h1 i1->h1 i2->h1 i3->h1 i4->h1 i5->h1 i6->h1 i7->h1 
 -9.60  -1.79  21.00  14.72 -20.45  -5.05  14.37 -17.37 
 b->h2 i1->h2 i2->h2 i3->h2 i4->h2 i5->h2 i6->h2 i7->h2 
  4.72   2.83  -3.37  -1.64   1.49   2.12   2.31   4.00 
 b->h3 i1->h3 i2->h3 i3->h3 i4->h3 i5->h3 i6->h3 i7->h3 
 -0.58  -6.03  25.14  18.03  -1.19   7.52 -19.47 -12.95 
  b->o  h1->o  h2->o  h3->o 
 -1.32  29.00 -10.32  26.27

Here, we have the full network. And actually, there are (online) some functions that can be used to visualize that network

library(devtools)
source_url('https://gist.githubusercontent.com/fawda123/7471137/raw/466c1474d0a505ff044412703516c34f1a4684a5/nnet_plot_update.r')
plot.nnet(model_nnet)


Nice, isn’t it? We clearly see the intermediary layer, with three nodes, and on top the constants. Edges are the plain lines, the darker, the heavier (in terms of weights).

Using neuralnet()

Other R functions can actually be considered.

library(neuralnet)
model_nnet = neuralnet(formula(glm(PRONO~.,data=myocarde_minmax)),
myocarde_minmax,hidden=3, act.fct = sigmoid)
plot(model_nnet)


Again, for the same network structure, with one (hidden) layer, and three nodes in it.

Network with multiple layers

The good thing is that it’s possible to add more layers. Like two layers. Nodes from the first layer are no longer connected with the sink, but with nodes in the second layer. And those nodes will then be connected to the sink. We now have something like
p(\mathbf{x})=f\left( \omega_0+ \sum_{h=1}^3 \omega_h f_h\left(\omega_{h,0}+ \mathbf{z}_h^T\mathbf{\omega}_h\right)\right)where\mathbf{z}_h=f\left( \omega_{h,0}+ \sum_{j=1}^{k_h} \omega_{h,j} f_{h,j}\left(\omega_{h,j,0}+ \mathbf{x}^T\mathbf{\omega}_{h,j}\right)\right)I may be rambling here (a little bit) but that’s a lot of parameters. Here is the visualization of such a network,

library(neuralnet)
model_nnet = neuralnet(formula(glm(PRONO~.,data=myocarde_minmax)),
myocarde_minmax,hidden=c(3,2), act.fct = sigmoid)  # two hidden layers, with 3 and 2 nodes
plot(model_nnet)
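To make the formula above a bit more concrete, here is a minimal forward pass with two hidden layers (of sizes 3 and 2); the weight matrices and biases below are random placeholders, not the ones fitted by neuralnet:

f = function(z) 1/(1 + exp(-z))
forward = function(x, W1, b1, W2, b2, w3, b3){
  z1 = f(b1 + t(W1) %*% x)    # first hidden layer,  7 -> 3
  z2 = f(b2 + t(W2) %*% z1)   # second hidden layer, 3 -> 2
  f(b3 + sum(w3 * z2))        # output node,         2 -> 1
}
W1 = matrix(rnorm(7*3), 7, 3); b1 = rnorm(3)
W2 = matrix(rnorm(3*2), 3, 2); b2 = rnorm(2)
w3 = rnorm(2); b3 = rnorm(1)
forward(rnorm(7), W1, b1, W2, b2, w3, b3)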

Application

Let us get back on our simple dataset, with only two covariates.

library(neuralnet)
df_minmax =df
df_minmax$y=(df_minmax$y=="1")*1
minmax = function(z) (z-min(z))/(max(z)-min(z))
for(j in 1:2) df_minmax[,j] = minmax(df[,j])
X = as.matrix(cbind(1,df_minmax[,1:2]))

Consider only one layer, with two nodes

model_nnet = neuralnet(formula(lm(y~.,data=df_minmax)),
df_minmax,hidden=c(2))
plot(model_nnet)


Here, we did not specify it, but the activation function is the sigmoid (actually, it is called logistic here)

model_nnet$act.fct
function (x) 
{
    1/(1 + exp(-x))
}
 
attr(,"type")
[1] "logistic"
f=model_nnet$act.fct

The weights (on the figure) can be obtained using

w0 = model_nnet$weights[[1]][[2]][,1]
w1 = model_nnet$weights[[1]][[1]][,1]
w2 = model_nnet$weights[[1]][[1]][,2]

Now, to get our prediction,
we should usep(\mathbf{x})=f\left( \omega_0+ \omega_1 f(\omega_{1,0}+ \mathbf{x}^T\mathbf{\omega}_{1,1:2})+\omega_2 f(\omega_{2,0}+ \mathbf{x}^T\mathbf{\omega}_{2,1:2})\right)which can be obtained using

f(cbind(1,f(X%*%w1),f(X%*%w2))%*%w0)
              [,1]
 [1,] 0.7336477343
 [2,] 0.7317999050
 [3,] 0.7185803540
 [4,] 0.7404005280
 [5,] 0.7518482779
 [6,] 0.4939774149
 [7,] 0.4965876378
 [8,] 0.7101714888
 [9,] 0.5050760026
[10,] 0.5049877644

Unfortunately, it is not the output of the model here,

neuralnet::prediction(model_nnet)
Data Error:	0;
$rep1
       x1           x2              y
1  0.1250 0.0000000000  0.02030470787
2  0.0625 0.1176470588  0.89621706711
3  0.9375 0.2352941176  0.01995171956
4  0.0000 0.4705882353  1.10849420363
5  0.5000 0.4705882353 -0.01364966058
6  0.3125 0.5294117647 -0.02409150561
7  0.6875 0.8235294118  0.93743057765
8  0.3750 0.8823529412  1.01320924782
9  1.0000 0.9058823529  1.04805134309
10 0.5625 1.0000000000  1.00377379767

I found it odd, at first, to have outputs outside the (0,1) interval. But actually, those are the same values, up to a reordering of the observations: neuralnet::prediction() sorts them by their covariates. And the values lie outside (0,1) because, by default, neuralnet() uses linear.output=TRUE, meaning that no activation function is applied on the output node. The output is thereforep(\mathbf{x})=\omega_{0,0}+ \omega_{0,1} f(\omega_{1,0}+ \mathbf{x}^T\mathbf{\omega}_{1,1:2})+\omega_{0,2} f(\omega_{2,0}+ \mathbf{x}^T\mathbf{\omega}_{2,1:2})

cbind(1,f(X%*%w1),f(X%*%w2))%*%w0
                [,1]
 [1,]  1.01320924782
 [2,]  1.00377379767
 [3,]  0.93743057765
 [4,]  1.04805134309
 [5,]  1.10849420363
 [6,] -0.02409150561
 [7,] -0.01364966058
 [8,]  0.89621706711
 [9,]  0.02030470787
[10,]  0.01995171956

(to be continued…)

Classification from scratch, penalized Ridge logistic 4/8

Fourth post of our series on classification from scratch, following the previous post which was some sort of detour on kernels. But today, we’ll get back on the logistic model.

Formal approach of the problem

We’ve seen before that the classical estimation technique used to estimate the parameters of a parametric model was to use the maximum likelihood approach. More specifically, \widehat{\mathbf{\beta}}=\text{argmax}\lbrace \log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})\rbraceThe objective function here focuses (only) on the goodness of fit. But usually, in econometrics, we believe something like non sunt multiplicanda entia sine necessitate (“entities are not to be multiplied without necessity”), the parsimony principle, simpler theories are preferable to more complex ones. So we want to penalize for too complex models.

This is not a bad idea. It is mentioned here and there in econometrics textbooks, but usually for model choice, not for inference. Usually, we estimate parameters using maximum likelihood techniques, and then we use AIC or BIC to compare two models. Recall that the Akaike (AIC) criterion is based on-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\text{dim}(\widehat{\mathbf{\beta}})We have on the left a measure of the goodness of fit, and on the right, a penalty increasing with the “complexity” of the model.

Very quickly, here, the complexity is the number of variates used. I will not enter into details about the concept of sparsity (and the true dimension of the problem); I would recommend reading the book by Martin Wainwright, Robert Tibshirani and Trevor Hastie on that issue. But assume that we do not make any variable selection: we consider the regression on all covariates. Define\Vert\mathbf{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~\Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|,~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}for any \mathbf{a}\in\mathbb{R}^d. One might say that the AIC could be written-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\|\widehat{\mathbf{\beta}}\|_{\ell_0}And actually, this will be our objective function. More specifically, we will consider
\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|\rbracefor some norm \|\cdot\|. I will not get back here on the motivation and the (theoretical) properties of those estimates (that will actually be discussed in the Summer School in Barcelona, in July), but in this post, I want to discuss the numerical algorithm to solve such optimization problem, for \|\cdot\|_{\ell_2} (the Ridge regression) and for \|\cdot\|_{\ell_1} (the LASSO regression).
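In R, the three norms defined above are one-liners; for instance

norm_l0 = function(a) sum(a != 0)      # number of non-zero coefficients
norm_l1 = function(a) sum(abs(a))      # lasso penalty
norm_l2 = function(a) sqrt(sum(a^2))   # ridge penalty (squared, in practice)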

Normalization of the covariates

The problem with \|\mathbf{\beta}\| is that the norm should make sense, somehow: a \mathbf{\beta}_j is small only with respect to the “dimension” (the scale) of x_j. So, the first step will be to consider linear transformations of all covariates x_j, to get centered and scaled variables (with unit variance)

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)

Ridge Regression (from scratch)

Before running some codes, recall that we want to solve something like\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_2}^2\rbrace In the case where we consider the log-likelihood of some Gaussian variable, we get the sum of the square of the residuals, and we can obtain an explicit solution. But not in the context of a logistic regression.

The heuristics about Ridge regression is the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue circle is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem: \min_{\mathbf{\beta}:\|\mathbf{\beta}\|^2_{\ell_2}\leq s} \lbrace \sum_{i=1}^n -\log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) \rbracecan be written equivalently (it is a strictly convex problem)\min_{\mathbf{\beta},\lambda} \lbrace -\sum_{i=1}^n \log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) +\lambda \|\mathbf{\beta}\|_{\ell_2}^2 \rbraceThus, the constrained maximum should lie in the blue disk

# for the visualization, restrict the log-likelihood to the first two covariates
LogLik = function(bbeta){
  b0=bbeta[1]
  beta=bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X[,1:2]%*%beta))) - 
  (1-y)*log(1 + exp(b0+X[,1:2]%*%beta)))}
u = seq(-4,4,length=251)
v = outer(u,u,Vectorize(function(x,y) LogLik(c(1,x,y))))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
u = seq(-1,1,length=251)
lines(u,sqrt(1-u^2),type="l",lwd=2,col="blue")
lines(u,-sqrt(1-u^2),type="l",lwd=2,col="blue")

Let us consider the objective function, with the following code

PennegLogLik = function(bbeta,lambda=0){
  b0   = bbeta[1]
  beta = bbeta[-1]
 -sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*
  log(1 + exp(b0+X%*%beta)))+lambda*sum(beta^2)
}

Why not try a standard optimisation routine? In the very first post of this series, we did mention that using optimization routines was not clever, since they rely strongly on the starting point. But here, it is not the case

lambda = 1
beta_init = lm(PRONO~.,data=myocarde)$coefficients
vpar = matrix(NA,1000,8)
for(i in 1:1000){
vpar[i,] = optim(par = beta_init*rnorm(8,1,2), 
function(x) PennegLogLik(x,lambda), method = "BFGS", control = list(abstol=1e-9))$par}
par(mfrow=c(1,2))
plot(density(vpar[,2]),ylab="",xlab=names(myocarde)[1])
plot(density(vpar[,3]),ylab="",xlab=names(myocarde)[2])


Clearly, even if we change the starting point, it looks like we converge towards the same value. That could be considered as the optimum.

The code to compute \widehat{\mathbf{\beta}}_{\lambda} would then be

opt_ridge = function(lambda){
beta_init = lm(PRONO~.,data=myocarde)$coefficients
logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda), 
method = "BFGS", control=list(abstol=1e-9))
logistic_opt$par[-1]}

and we can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} as a function of {\lambda}

v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(opt_ridge)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1])
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])

At least it seems to make sense: we can observe the shrinkage as \lambda increases (we’ll get back to that later on).

Ridge, using the Newton-Raphson algorithm

We’ve seen that we can also use Newton Raphson to solve this problem. Without the penalty term, the algorithm was\mathbf{\beta}_{new} = \mathbf{\beta}_{old} - \left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}where
\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})and\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}where \mathbf{\Delta}_{old} is the diagonal matrix with terms \mathbf{p}_{old}(1-\mathbf{p}_{old}) on the diagonal.

Thus\mathbf{\beta}_{new} = \mathbf{\beta}_{old} + (\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T[\mathbf{y}-\mathbf{p}_{old}]that we can also write\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. Here, on the penalized problem, we can easily prove that\frac{\partial\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}=\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}-2\lambda\mathbf{\beta}_{old}while\frac{\partial^2\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}-2\lambda\mathbb{I}Hence\mathbf{\beta}_{\lambda,new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}+2\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}
The code is then

Y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
X = cbind(1,X)
colnames(X) = c("Inter",names(myocarde[,1:7]))
 beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   Delta = matrix(0,nrow(X),nrow(X));diag(Delta)=(pi*(1-pi))
   z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
   B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
   beta = cbind(beta,B)}
beta[,8:10]
              [,1]        [,2]        [,3]
XInter  0.59619654  0.59619654  0.59619654
XFRCAR  0.09217848  0.09217848  0.09217848
XINCAR  0.77165707  0.77165707  0.77165707
XINSYS  0.69678521  0.69678521  0.69678521
XPRDIA -0.29575642 -0.29575642 -0.29575642
XPAPUL -0.23921101 -0.23921101 -0.23921101
XPVENT -0.33120792 -0.33120792 -0.33120792
XREPUL -0.84308972 -0.84308972 -0.84308972

Again, it seems that convergence is very fast.

And interestingly, with that algorithm, we can also derive the variance of the estimator\text{Var}[\widehat{\mathbf{\beta}}_{\lambda}]=[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{\Delta}\text{Var}[\mathbf{z}]\mathbf{\Delta}\mathbf{X}[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}where\text{Var}[\mathbf{z}]=\mathbf{\Delta}^{-1}

The code to compute \widehat{\mathbf{\beta}}_{\lambda} as a function of \lambda is then

newton_ridge = function(lambda=1){
 beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:20){
   pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   Delta = matrix(0,nrow(X),nrow(X));diag(Delta)=(pi*(1-pi))
   z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
   B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
   beta = cbind(beta,B)}
Varz = solve(Delta)
Varb = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% t(X)%*% Delta %*% Varz %*%
  Delta %*% X %*% solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X)))
return(list(beta=beta[,ncol(beta)],sd=sqrt(diag(Varb))))}

We can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} (as a function of \lambda)

v_lambda=c(exp(seq(-2,5,length=61)))
est_ridge=Vectorize(function(x) newton_ridge(x)$beta)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])


and to get the evolution of the variance

v_lambda=c(exp(seq(-2,5,length=61)))
est_ridge=Vectorize(function(x) newton_ridge(x)$sd)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i],lwd=2)


Recall that when \lambda=0 (on the left of the graphs), \widehat{\mathbf{\beta}}_{0}=\widehat{\mathbf{\beta}}^{ols} (no penalty). Thus, as \lambda increases, (i) the bias increases (estimates tend to 0) and (ii) the variances decrease.

Ridge, using glmnet

As always, there are R functions available to run a ridge regression. Let us use the glmnet function, with \alpha=0

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
library(glmnet)
glm_ridge = glmnet(X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)

as a function of the norm

with the \ell_1 norm here, I don’t know why. I don’t know either why all the graphs obtained with different optimisation routines are so different… Maybe that will be for another post…

Ridge with orthogonal covariates

An interesting case is obtained when covariates are orthogonal. This can be obtained using a PCA of the covariates.

library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord

Let us run a ridge regression on those (orthogonal) covariates

library(glmnet)
glm_ridge = glmnet(pca_X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)

plot(glm_ridge,col=colrs,lwd=2)

We clearly observe the shrinkage of the parameters, in the sense that \widehat{\mathbf{\beta}}_{\lambda}^{\perp}=\frac{\widehat{\mathbf{\beta}}^{ols}}{1+\lambda}
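That shrinkage factor can be checked directly in the Gaussian (least-squares) case, where the algebra is explicit, on a small simulated orthonormal design (a sketch, not the logistic case above):

set.seed(1)
n = 100
Q = qr.Q(qr(matrix(rnorm(n*3), n, 3)))   # orthonormal columns: t(Q) %*% Q = I
yy = Q %*% c(2, -1, 3) + rnorm(n)
lambda = 2
b_ols   = t(Q) %*% yy                    # OLS estimate, since t(Q) %*% Q = I
b_ridge = solve(t(Q) %*% Q + lambda*diag(3), t(Q) %*% yy)
cbind(b_ridge, b_ols/(1 + lambda))       # the two columns coincide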

Application

Let us try with our second set of data

df0 = df
df0$y=as.numeric(df$y)-1
plot_lambda = function(lambda){
m = apply(df0,2,mean)
s = apply(df0,2,sd)
for(j in 1:2) df0[,j] = (df0[,j]-m[j])/s[j]
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0,lambda=lambda)
u = seq(0,1,length=101)
p = function(x,y){
  xt = (x-m[1])/s[1]
  yt = (y-m[2])/s[2]
  predict(reg,newx=cbind(x1=xt,x2=yt),type='response')}
v = outer(u,u,p)
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}

We can try various values of \lambda

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(.2))
plot_lambda(.2)


or

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(1.2))
plot_lambda(1.2)


Next step is to change the norm of the penalty, with the \ell_1 norm (to be continued…)

Classification from scratch, logistic with kernels 3/8

Third post of our series on classification from scratch, following the previous post introducing smoothing techniques, with (b)-splines. Consider here kernel based techniques. Note that here, we do not use the “logistic” model… it is purely non-parametric.

kernel based estimates, from scratch

I like kernels because they are somehow very intuitive. With GLMs, the goal is to estimate \hat{m}(\mathbf{x})=\mathbb{E}(Y|\mathbf{X}=\mathbf{x}). Heuristically, we want to compute the (conditional) expected value on the neighborhood of \mathbf{x}. If we consider some spatial model, where \mathbf{x} is the location, we want the expected value of some variable Y, “on the neighborhood” of \mathbf{x}. A natural approach is to use some administrative region (county, department, region, etc). This means that we have a partition of \mathcal{X} (the space where the variable(s) lie). This will yield the regressogram, introduced in Tukey (1961). For convenience, assume some interval / rectangle / box type of partition. In the univariate case, consider \hat{m}_{\mathbf{a}}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))}or the moving regressogram \hat{m}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])}In that case, the neighborhood is defined as the interval (x\pm h). That’s nice, but clearly very simplistic. If \mathbf{x}_i=\mathbf{x} and \mathbf{x}_j=\mathbf{x}-h+\varepsilon (with \varepsilon>0), both observations are used to compute the conditional expected value. But if \mathbf{x}_{j'}=\mathbf{x}-h-\varepsilon, only \mathbf{x}_i is considered, even if the distance between \mathbf{x}_{j} and \mathbf{x}_{j'} is extremely small. Thus, a natural idea is to use weights that are functions of the distance between the \mathbf{x}_{i}‘s and \mathbf{x}. Use\tilde{m}(x)=\frac{\sum_{i=1}^ny_i\cdot k_h\left({x-x_i}\right)}{\sum_{i=1}^nk_h\left({x-x_i}\right)}where (classically)k_h(x)=k\left(\frac{x}{h}\right)for some kernel k (a non-negative function that integrates to one) and some bandwidth h. Usually, kernels are denoted with a capital letter K, but I prefer to use k, because it can then be interpreted as the density of some random noise we add to all observations (independently).
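As an aside, the moving regressogram mentioned above takes only a couple of lines in R (here on the INSYS variable from the myocarde dataset, used below):

regressogram_x = function(x, h){
  idx = abs(myocarde$INSYS - x) <= h     # uniform neighborhood [x-h, x+h]
  mean(myocarde$PRONO[idx])
}
u = seq(5, 55, length = 201)
v = Vectorize(function(x) regressogram_x(x, 3))(u)
plot(u, v, ylim = 0:1, type = "l", col = "red")
points(myocarde$INSYS, myocarde$PRONO, pch = 19)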

Actually, one can derive that estimate by using kernel-based estimators of densities. Recall that\tilde{f}(\mathbf{y})=\frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n k\left(\mathbf{H}^{-1/2}(\mathbf{y}-\mathbf{y}_i)\right)
Now, use the fact that the expected value can be defined asm(x)=\int yf(y|x)dy=\frac{\int y f(y,x)dy}{\int f(y,x)dy}Consider now a bivariate (product) kernel to estimate the joint density. The numerator is then estimated by\frac{1}{nh^2}\sum_{i=1}^n\int y\, k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^ny_i \kappa\left(\frac{x-x_i}{h}\right)(for a product kernel whose first component has mean zero) while the denominator is estimated by\frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right)In a general setting, we still use product kernels between Y and \mathbf{X} and write \widehat{m}_{\mathbf{H}}(\mathbf{x})=\displaystyle{\frac{\sum_{i=1}^ny_i\cdot k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}{\sum_{i=1}^n k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}}for some symmetric positive definite bandwidth matrix \mathbf{H}, and k_{\mathbf{H}}(\mathbf{x})=\det[\mathbf{H}]^{-1}k(\mathbf{H}^{-1}\mathbf{x})

Now that we know what kernel estimates are, let us use them. For instance, assume that k is the density of the \mathcal{N}(0,1) distribution. At point x, with a bandwidth h we get the following code

mean_x = function(x,bw){
  w = dnorm((myocarde$INSYS-x)/bw, mean=0,sd=1)
  weighted.mean(myocarde$PRONO,w)}
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


and of course, we can change the bandwidth.

v = Vectorize(function(x) mean_x(x,2))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


We observe what we can read in any textbook: with a smaller bandwidth, we get more variance, and less bias. “More variance” means here more variability (since the neighborhood is smaller, there are fewer points to compute the average, and the estimate is more volatile), and “less bias” in the sense that the expected value is supposed to be computed at point x, so the smaller the neighborhood, the better.

Using ksmooth R function

Actually, there is a function in R to compute this kernel regression.

reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = 2*exp(1))
plot(reg$x,reg$y,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)

We can replicate our previous estimate. Nevertheless, the output is not a function, but two series of vectors. That’s nice to get a graph, but that’s all we get. Furthermore, as we can see, the bandwidth is not exactly the same as the one we used before. I did not find any information online, so I tried to replicate the function we wrote before

g=function(bk=3){
reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = bk)
f=function(bm){
  v = Vectorize(function(x) mean_x(x,bm))(reg$x)
  z=reg$y-v
  sum((z[!is.na(z)])^2)}
optim(bk,f)$par}
x=seq(1,10,by=.1)
y=Vectorize(g)(x)
plot(x,y)
abline(0,exp(-1),col="red")
abline(0,.37,col="blue")


There is a slope of 0.37. It is actually not e^{-1} (even if 0.37 is close): ksmooth scales its kernel so that the quartiles are at \pm 0.25\times\text{bandwidth}, so the standard deviation of the Gaussian kernel is 0.25/\Phi^{-1}(0.75)\approx 0.3706 times the bandwidth, which is the slope we observe.

Application in higher dimension

Consider now our bivariate dataset, and consider some product of univariate (Gaussian) kernels

u = seq(0,1,length=101)
p = function(x,y){
  bw1 = .2; bw2 = .2
  w = dnorm((df$x1-x)/bw1, mean=0,sd=1)*
      dnorm((df$x2-y)/bw2, mean=0,sd=1)
  weighted.mean(df$y=="1",w)
}
v = outer(u,u,Vectorize(p))
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

We get the following prediction

Here, the different colors are probabilities.

k-nearest neighbors

An alternative is to consider a neighborhood not defined using a distance to point \mathbf{x} but the k-neighbors, with the n observations we got.\tilde{m}_k(\mathbf{x})=\frac{1}{n}\sum_{i=1}^n\omega_{i,k}(\mathbf{x})y_i
where \omega_{i,k}(\mathbf{x})=n/k if i\in\mathcal{I}_{\mathbf{x}}^k with
\mathcal{I}_{\mathbf{x}}^k=\{i:\mathbf{x}_i\text{ one of the }k\text{ nearest observations to }\mathbf{x}\}
The difficult part here is that we need a valid distance. If units are very different on each component, using the Euclidean distance will be meaningless. So, quite naturally, let us consider here the Mahalanobis distance

Sigma = var(myocarde[,1:7])
Sigma_Inv = solve(Sigma)
d2_mahalanobis = function(x,y,Sinv){as.numeric(x-y)%*%Sinv%*%t(x-y)}
k_closest = function(i,k){
  vect_dist = function(j) d2_mahalanobis(myocarde[i,1:7],myocarde[j,1:7],Sigma_Inv)
  vect = Vectorize(vect_dist)((1:nrow(myocarde))) 
  which(rank(vect)<=k)}

Here we have a function to find the k closest neighbors of some observation. Then two things can be done to get a prediction. The goal is to predict a class, so we can think of using a majority rule: the prediction for y_i is the class of the majority of its k nearest neighbors.

k_majority = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = sort(myocarde$PRONO[k_closest(i,k)])[(k+1)/2]
  return(Y)}

But we can also compute the proportion of black points among the closest neighbors. It can actually be interpreted as the probability of being black (that’s actually what was said at the beginning of this post, with kernels),

k_mean = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = mean(myocarde$PRONO[k_closest(i,k)])
  return(Y)}

We can see on our dataset the observation, the prediction based on the majority rule, and the proportion of dead individuals among the 7 closest neighbors

cbind(OBSERVED=myocarde$PRONO,
MAJORITY=k_majority(7),PROPORTION=k_mean(7))
      OBSERVED MAJORITY PROPORTION
 [1,]        1        1  0.7142857
 [2,]        0        1  0.5714286
 [3,]        0        0  0.1428571
 [4,]        1        1  0.5714286
 [5,]        0        1  0.7142857
 [6,]        0        0  0.2857143
 [7,]        1        1  0.7142857
 [8,]        1        0  0.4285714
 [9,]        1        1  0.7142857
[10,]        1        1  0.8571429
[11,]        1        1  1.0000000
[12,]        1        1  1.0000000

Here, we got a prediction for an observed point, located at \boldsymbol{x}_i, but actually, it is possible to seek the k closest neighbors of any point \boldsymbol{x}. Back on our univariate example (to get a graph), we have

mean_x = function(x,k=9){
  w = rank(abs(myocarde$INSYS-x),ties.method ="random")
  mean(myocarde$PRONO[which(w<=k)])}
u=seq(5,55,length=201)
v=Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


That’s not very smooth, but we do not have a lot of points either.

If we use that technique on our two-dimensional dataset, we obtain the following

Sigma_Inv = solve(var(df[,c("x1","x2")]))
u = seq(0,1,length=51)
p = function(x,y){
  k = 6
  vect_dist = function(j)  d2_mahalanobis(c(x,y),df[j,c("x1","x2")],Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(df)) 
  idx  = which(rank(vect)<=k)
  return(mean((df$y==1)[idx]))}
v = outer(u,u,Vectorize(p))
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

This is the idea of local inference, using either a kernel on a neighborhood of \mathbf{x}, or simply the k nearest neighbors. Next time, we will investigate penalized logistic regressions, to be continued

Classification from scratch, logistic with splines 2/8

Today, second post of our series on classification from scratch, following the brief introduction on the logistic regression.

Piecewise linear splines

To illustrate what’s going on, let us start with a “simple” regression (with only one explanatory variable). The underlying idea is natura non facit saltus, “nature does not make jumps”, i.e. the processes governing natural things are continuous. That seems to be a rather strong assumption, because we could imagine that there is a fixed threshold to explain death. For instance, if patients die (for sure) when the “stroke index” exceeds a threshold, we might expect some discontinuity. Except that if that threshold is a heterogeneous (non-observable, continuous) variable, then we get back to the continuity assumption.

The most simple model we can think of to extend the linear model we’ve seen in the previous post is to consider a piecewise linear function, with two parts : small values of x, and larger values of x. The most convenient way to do so is to use the positive part function (x-s)_+ which is the difference between x and s if that difference is positive, and 0 otherwise. For instance \beta_1 x+\beta_2(x-s)_+ is the following piecewise linear function, continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, there is a linear increase, with slope \beta_1, and for larger values of x, a linear part with slope \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.

And of course, it is possible to consider more than one knot. The function to get the positive part is the following

pos = function(x,s) (x-s)*(x>=s)
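A quick picture of that slope change (here \beta_1=1, \beta_2=-2 and a knot at s=5, so slope 1 before the knot, and -1 after):

xx = seq(0, 10, by = .1)
plot(xx, 1*xx + (-2)*pos(xx, 5), type = "l", lwd = 2)  # slope 1, then 1-2 = -1
abline(v = 5, lty = 2)                                  # the knot s = 5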

then we can use it directly in our regression model

reg = glm(PRONO~INSYS+pos(INSYS,15)+
pos(INSYS,25),data=myocarde,family=binomial)

The output of the regression is here

summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)     -0.1109     3.2783  -0.034   0.9730  
INSYS           -0.1751     0.2526  -0.693   0.4883  
pos(INSYS, 15)   0.7900     0.3745   2.109   0.0349 *
pos(INSYS, 25)  -0.5797     0.2903  -1.997   0.0458 *

Hence, the original slope, for very small values, is not significant, but above 15, it becomes significantly positive. And above 25, there is a significant change again. We can plot it to see what’s going on

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,type="l")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() linear splines

Using b-splines, things are slightly different here. We will use so-called b-splines,

library(splines)

We can define spline functions with support (5,55) and with knots \{15,25\}

clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)


as we can see, the functions defined here are different from the ones before, but we still have (piecewise) linear functions on each segment (5,15), (15,25) and (25,55). But linear combinations of those functions (the two sets of functions) will generate the same space. Said differently, even if the interpretation of the output is different, the predictions should be the same

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=1),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept)    -0.9863     2.0555  -0.480   0.6314  
bs(INSYS,..)1  -1.7507     2.5262  -0.693   0.4883  
bs(INSYS,..)2   4.3989     2.0619   2.133   0.0329 *
bs(INSYS,..)3   5.4572     5.4146   1.008   0.3135

Observe that there are three coefficients, as before, but again, the interpretation is here more complicated…

v=predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Nevertheless, the prediction is the same… and that’s nice.

Piecewise quadratic splines

Let us go one step further… Can we have also the continuity of the derivative ? Yes, and that’s easy actually, considering parabolic functions. Instead of using a decomposition on x,(x-s_1)_+ and (x-s_2)_+ consider now a decomposition on x,x^{\color{red}{2}},(x-s_1)^{\color{red}{2}}_+ and (x-s_2)^{\color{red}{2}}_+.

pos2 = function(x,s) (x-s)^2*(x>=s)
reg = glm(PRONO~poly(INSYS,2)+pos2(INSYS,15)+pos2(INSYS,25),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
                Estimate Std. Error z value Pr(>|z|)  
(Intercept)      29.9842    15.2368   1.968   0.0491 *
poly(INSYS, 2)1 408.7851   202.4194   2.019   0.0434 *
poly(INSYS, 2)2 199.1628   101.5892   1.960   0.0499 *
pos2(INSYS, 15)  -0.2281     0.1264  -1.805   0.0712 .
pos2(INSYS, 25)   0.0439     0.0805   0.545   0.5855

As expected, there are here five coefficients: the intercept and two for the part on the left (three parameters for the parabolic function), and then two additional terms for the part in the center – here (15,25) – and for the part on the right. Of course, for each portion, there is only one degree of freedom since we have a parabolic function (three coefficients) but two constraints (continuity, and continuity of the first order derivative).

On a graph, we get the following

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() quadratic splines

Of course, we can do the same with our R function. But as before, the basis of functions is expressed differently here

 x = seq(0,60,by=.25)
B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",xlab="INSYS",col=clr6)


If we run R code, we get

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=2),data=myocarde,
family=binomial)
summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)       7.186      5.261   1.366   0.1720  
bs(INSYS, ..)1  -14.656      7.923  -1.850   0.0643 .
bs(INSYS, ..)2   -5.692      4.638  -1.227   0.2198  
bs(INSYS, ..)3   -2.454      8.780  -0.279   0.7799  
bs(INSYS, ..)4    6.429     41.675   0.154   0.8774

But that’s not really a big deal since the prediction is exactly the same

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Cubic splines

Last, but not least, we can reach the cubic splines. With our previous notions, we would consider a decomposition on (guess what) x,x^2,x^{\color{red}{3}},(x-s_1)^{\color{red}{3}}_+,(x-s_2)^{\color{red}{3}}_+, to get this time continuity, as well as continuity of the first two derivatives (and to get a very smooth function, since even variations will be smooth). If we use the bs function, the basis is the following

B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=3)
matplot(x,B,type="l",lwd=2,col=clr6,lty=1,ylim=c(-.2,1.2))
abline(v=c(5,15,25,55),lty=2)

and the prediction will now be

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=3),
data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
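For completeness, the by-hand counterpart, following the pos() / pos2() pattern used earlier (the prediction should again match the bs() version above, since both bases span the same space):

pos3 = function(x,s) (x-s)^3*(x>=s)   # cubic positive part
reg = glm(PRONO~poly(INSYS,3)+pos3(INSYS,15)+pos3(INSYS,25),
data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
lines(u,v,col="blue",lty=2)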


Two last things before concluding (for today), the location of the knots, and the extension to additive models.

Location of knots

In many applications, we do not want to specify the location of the knots. We just want – say – three (intermediary) knots. This can be done using

reg = glm(PRONO~1+bs(INSYS,degree=1,df=4),data=myocarde,family=binomial)

We can actually get the locations of the knots by looking at

attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 1L, knots = c(15.8, 21.4, 27.15), 
Boundary.knots = c(8.7, 54), intercept = FALSE)

which provides us with the location of the boundary knots (the minimum and the maximum of our sample) but also the three intermediary knots. Observe that actually, those five values are just (empirical) quantiles

quantile(myocarde$INSYS,(0:4)/4)
   0%   25%   50%   75%  100% 
 8.70 15.80 21.40 27.15 54.00

If we plot the prediction, we get

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


If we get back to what was computed before the logit transformation, we clearly see ruptures at the different quantiles

B = bs(x,degree=1,df=4)
B = cbind(1,B)
y = B%*%coefficients(reg)
plot(x,y,type="l",col="red",lwd=2)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


Note that if we do not specify anything about the knots (number or location), we get no knots…

reg = glm(PRONO~1+bs(INSYS,degree=2),data=myocarde,family=binomial)
attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 2L, knots = numeric(0), 
Boundary.knots = c(8.7,54), intercept = FALSE)

and if we look at the prediction

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)


actually, it is the same as a quadratic regression (as expected actually)

reg = glm(PRONO~1+poly(INSYS,degree=2),data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)

Additive models

Consider now the second dataset, with two variables. Consider here a model like
\mathbb{P}[Y|X_1=x_1,X_2=x_2]=\frac{\exp[\eta(x_1,x_2)]}{1+\exp[\eta(x_1,x_2)]}
where
\eta(x_1,x_2)=\beta_0+\color{red}{s_1(x_1)}+\color{blue}{s_2(x_2)}
\color{red}{s_1(x_1)}=\beta_{1,0}x_1+\beta_{1,1}(x_1-s_{11})_++\beta_{1,2}(x_1-s_{12})_+
and
\color{blue}{s_2(x_2)}=\beta_{2,0}x_2+\beta_{2,1}(x_2-s_{21})_++\beta_{2,2}(x_2-s_{22})_+
It might seem a little bit restrictive, but that’s actually the idea of additive models.

reg = glm(y~bs(x1,degree=1,df=3)+bs(x2,degree=1,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Now, if we think about it, we’ve been able to get a “perfect” model, so, somehow, it seems no longer continuous…

persp(u,u,v,theta=20,phi=40,col="green")


Of course, it is… it is piecewise linear, with hyperplanes, some being almost vertical.

And one can also consider piecewise quadratic functions

reg = glm(y~bs(x1,degree=2,df=3)+bs(x2,degree=2,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Funny thing, we now have two “perfect” models, with different areas for the white and the black dots… Don’t ask me how to choose on that one.

In R, it is possible to use the mgcv package to run a gam regression. It is used for generalized additive models, but here, we have only one variable, so it is difficult to see the “additive” part, actually. And to be more specific, mgcv is using penalized quasi-likelihood from the nlme package (but we’ll get back on penalized routines later on).

But maybe I should also mention another smoothing tool before, kernels (and maybe also k-nearest neighbors). To be continued

Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that given covariates \mathbf{x}, Y has a Bernoulli distribution,Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}The goal is to estimate parameter \mathbf{\beta}.

Recall that the heuristics for the use of that function for the probability is that\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}
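A one-line numerical check of that relation, for some arbitrary value of the score \mathbf{x}^T\mathbf{\beta}:

p = function(xb) exp(xb)/(1 + exp(xb))   # the logistic function
xb = 0.7
log(p(xb)/(1 - p(xb)))                    # returns 0.7, the log-odds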

Maximum of the (log)-likelihood function

The log-likelihood is here\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i) where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
 -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
 }

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~.,data=myocarde)$coefficients

Why not start with the parameters of the OLS estimate? Somehow, we might think that, at least, the signs should be ok, for instance. Anyway, we need a starting point, and let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

 logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS    
 1.656926397  0.045234029 -2.119441743  0.204023835 
       PRDIA        PAPUL        PVENT        REPUL 
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)

simu = function(i){
logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, 
negLogLik, hessian=TRUE, method = "BFGS", 
control=list(abstol=1e-9))
logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])

Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) 
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) + 
(1-vY)*(-log(1 + exp(mX %*% vBeta)))) 
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable, 
method = 'L-BFGS-B', gr = likelihoodScore, 
mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let’s be honest here, I do not feel comfortable with those techniques. So, what happened here?

Here, the technique we use is based on the following idea,\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}The problem is that my computer does not know this first and second derivatives. So it will compute them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
 -sum(y*log(logit(X%*%beta)) + 
(1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y)) 
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren’t. From my point of view, that is a major problem (keep in mind that we do not deal here with massive data! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical values of those derivatives? Even if some people claim the opposite, sometimes, it can actually be useful to do the maths, instead of considering only numerical values.

Newton (or Fisher) Algorithm

If you open any Econometrics textbooks (one can also try to derive it), you will get \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
while\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}

Y = myocarde$PRONO
X = cbind(1, as.matrix(myocarde[,1:7]))
colnames(X) = c("Inter", names(myocarde[,1:7]))
beta = as.matrix(lm(Y~0+X)$coefficients, ncol=1)
for(s in 1:9){
  pi = exp(X %*% beta[,s])/(1+exp(X %*% beta[,s]))   # fitted probabilities
  gradient = t(X) %*% (Y-pi)                          # score vector
  omega = matrix(0, nrow(X), nrow(X)); diag(omega) = pi*(1-pi)
  Hessian = -t(X) %*% omega %*% X                     # analytical Hessian
  beta = cbind(beta, beta[,s] - solve(Hessian) %*% gradient)}

Observe that here, I use only nine iterations of the algorithm!

beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187
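One way to convince ourselves that convergence was reached (a quick check, not in the original code) is to look at the change over the last iteration, which should be numerically zero,

max(abs(beta[,10]-beta[,9]))   # change in the last Newton step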

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point

beta = as.matrix(lm(Y~0+X)$coefficients, ncol=1)*runif(8)
for(s in 1:9){
  pi = exp(X %*% beta[,s])/(1+exp(X %*% beta[,s]))
  gradient = t(X) %*% (Y-pi)
  omega = matrix(0, nrow(X), nrow(X)); diag(omega) = pi*(1-pi)
  Hessian = -t(X) %*% omega %*% X
  beta = cbind(beta, beta[,s] - solve(Hessian) %*% gradient)}
beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

Nice, isn’t it? It looks like we have a winner. And one can use the inverse of (minus) the Hessian matrix to get standard errors of the estimators.
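For instance, with the Hessian computed in the last iteration of the loop above, a one-line sketch (not in the original code) gives those standard errors,

sqrt(diag(solve(-Hessian)))   # standard errors of the estimators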

Weighted Least-Squares

Let us go one step further. We’ve seen that we want to compute something like\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}(if we substitute matrices into the analytical expressions) where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that’s simply a standard weighted least-squares problem\mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbrace(note that the weight matrix is \mathbf{\Delta}_{old}, not its inverse, so that the first-order condition gives back the expression above). The only problem here is that the weights \mathbf{\Delta}_{old} are functions of the unknown \mathbf{\beta}_{old}. But if we keep iterating, we should be able to solve it: given a \mathbf{\beta}, we get the weights, and with the weights, we can use weighted OLS to get an updated \mathbf{\beta}. That’s the idea of iteratively reweighted least squares.

The algorithm will be

df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
  p = exp(X %*% beta)/(1+exp(X %*% beta))              # current probabilities
  omega = diag(nrow(df))
  diag(omega) = p*(1-p)                                # weights
  df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)  # working response
  beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients
}

and the output is here

 beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA 
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171 
        PAPUL         PVENT         REPUL 
  0.016756506  -0.106776012  -0.003154187

which is almost what we’ve obtained before. Nice, isn’t it? Actually, here we also get standard errors for the estimators

summary( lm(Z~.,data=df[,-8], weights=diag(omega)))
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475

The standard glm function

Of course, it is possible to use an R built-in function to get our estimate

summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519

Note that the point estimates match the weighted least-squares ones exactly, while the standard errors differ slightly: summary.lm estimates a dispersion parameter from the residuals, whereas the binomial glm fixes it at one.

Application and visualisation

Let us visualize the prediction obtained from the logistic regression, on our second dataset (the clr10 color palette used below is the one defined in the overview post, further down this page)

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Here level curves – or iso-probabilities – are linear, so the space is divided into two (0 and 1, survival and death, white and black) by a straight line (or a hyperplane in higher dimensions). Furthermore, since we have a linear model, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane) parallel to the first one.
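One quick way to visualize that claim is to add a couple of other iso-probability curves to the previous graph; they come out as straight lines, parallel to the .5 one,

contour(u,u,v,levels=c(.25,.75),add=TRUE,lty=2)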

Next time, we will introduce splines to smooth those continuous covariates… to be continued.

Classification from scratch, overview 0/8

Before my course on « big data and economics » at the University of Barcelona in July, I wanted to upload a series of posts on classification techniques, to get some insight into machine learning tools.

According to a common saying, machine learning algorithms are black boxes. I wanted to get back to that claim. First of all, isn’t it also the case for regression models, like generalized additive models (with splines)? Do you really know what the algorithm is doing? Even for logistic regression: in textbooks, we can easily find the math formulas, but what is really done when I run it, in R?

When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let’s try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R. What I plan to mention, within the next two weeks, will be

I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])

Here is some code to get a visualization of the predictions (here, the probability of being a black point)

rmatrix_model = function(model){
u = seq(0,1,length=101)
p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
return(v)}
nice_graph=function(v){
u = seq(0,1,length=101)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))

Note that colors are defined here as

clr10= c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b")

or with some nonlinear model

The second one is a dataset I got from Gilbert Saporta, about heart attacks; the binary variable of interest is survival versus death.

myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))

So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), about comparing models, etc. The goal here is simply to understand what’s going on when we call either glm, glmnet, gam, random forest, svm, xgboost, or any function that fits a predictive model.

On the interpretation of a regression model

Yesterday, NaytaData (aka @NaytaData) posted a nice graph on Reddit, showing daily bicycle traffic and mean air temperature in Helsinki, Finland,

I found that graph interesting, so I did ask for the data (NaytaData kindly sent them to me tonight).

df=read.csv("cyclistsTempHKI.csv")
library(ggplot2)
ggplot(df, aes(meanTemp, cyclists)) +
  geom_point() +
  geom_smooth(span = 0.3)

But as mentioned by someone on Twitter, the interpretation is somewhat trivial: people get out on their bikes when the weather is nice. The hotter it is, the more cyclists on the road. Which is interpreted here in a causal way…

But actually, we can also visualize the data as follows, as suggested by Antoine Chambert-Loir

 ggplot(df, aes(cyclists, meanTemp)) +
  geom_point() +
  geom_smooth(span = 0.3)

The interpretation would be, somehow, that the more cyclists there are on the road, the hotter it is. Why not consider this causal interpretation here? As if cyclists went so fast, or sweated so much, that they increased the temperature…

Of course, this is the standard (recurrent) discussion “correlation is not causality”, but in regression models, we like to tell a story, to pretend that we have some sort of causal story. But we do not prove it. Here, we know that the first story is more credible than the second one, but how do we know that? To go further, how can we use machine learning techniques to prove causal relationships? How could a machine choose between the first and the second story?

Some sort of Otto Neurath (isotype picture) map

Yesterday evening, I was walking in Budapest, and I saw a nice map in some sort of Otto Neurath (isotype) style. It was hand-made, but I thought it should be possible to produce it automatically in R.

A few years ago, Baptiste Coulmont published a nice blog post on the package osmar, which can be used to import OpenStreetMap objects (polygons, lines, etc.) into R. We can start from there. More precisely, consider the city of Douai, in France,

The code to read information from OpenStreetMap is the following

library(osmar)
src <- osmsource_api()
bb <- center_bbox(3.07758808135, 50.37404355, 1000, 1000)
ua <- get_osm(bb, source = src)

We can extract a lot of things, like buildings, parks, churches, roads, etc. Since OpenStreetMap tags are key/value pairs, we will use two functions: one matching on tag keys (k), and one matching on tag values (v)

listek = function(vc, type="polygons"){
  nat_ids <- find(ua, way(tags(k %in% vc)))
  nat_ids <- find_down(ua, way(nat_ids))
  nat <- subset(ua, ids = nat_ids)
  nat_poly <- as_sp(nat, type)}
 
listev = function(vc, type="polygons"){
  nat_ids <- find(ua, way(tags(v %in% vc)))
  nat_ids <- find_down(ua, way(nat_ids))
  nat <- subset(ua, ids = nat_ids)
  nat_poly <- as_sp(nat, type)}

For instance to get rivers, use

W=listek(c("waterway"))

and to get buildings

M=listek(c("building"))

We can also get churches

C=listev(c("church","chapel"))

but also train stations, airports, universities, hospitals, etc. It is also possible to get streets, or roads

H1=listek(c("highway"),"lines")
H2=listev(c("residential","pedestrian","secondary","tertiary"),"lines")

but it will be more difficult to use afterwards, so let’s forget about those.
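Note that the code below also uses a park layer P, a university layer U, and possibly B and T; those definitions were not shown in this post, but they could plausibly be built the same way (the tag values here are my guesses, based on standard OpenStreetMap conventions),

P = listev(c("park","garden"))  # hypothetical: green areas (leisure=park)
U = listev(c("university"))     # hypothetical: amenity=university
B = listev(c("bus_station"))    # hypothetical: amenity=bus_station
T = listev(c("station"))        # hypothetical: railway=station (note: masks the built-in T)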

We can check that we have everything we need

plot(M)
plot(W,add=TRUE,col="blue")
plot(P,add=TRUE,col="green")
if(!is.null(B)) plot(B,add=TRUE,col="red")
if(!is.null(C)) plot(C,add=TRUE,col="purple")
if(!is.null(T)) plot(T,add=TRUE,col="red")

Now, let us consider a rectangular grid. If there is a river in a cell, I want a river; if there is a church, I want a church, etc. Since there will be one (and only one) picture per cell, there will be priorities. But first, we have to check the intersections between the cells of our grid and the OpenStreetMap polygons.

library(sp)
library(raster)
library(rgdal)
library(rgeos)
library(maptools)
identification = function(xy, h, PLG){
  # square cell centered at xy, with half-width h
  b = data.frame(x = rep(c(xy[1]-h, xy[1]+h), each=2),
                 y = c(xy[2]-h, xy[2]+h, xy[2]+h, xy[2]-h))
  pb1 = Polygon(b)
  Pb1 = list(Polygons(list(pb1), ID=1))
  SPb1 = SpatialPolygons(Pb1, proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs +towgs84=0,0,0"))
  UC = gUnionCascaded(PLG)           # merge the layer into a single geometry
  return(gIntersection(SPb1, UC))    # NULL if the cell does not touch the layer
}
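For instance, to check whether a small cell around the center of our bounding box overlaps the river layer, one could call (the half-width .001 is arbitrary, for illustration)

identification(c(3.07758808135, 50.37404355), .001, W)

which returns the intersection as a geometry, or NULL if the cell does not touch any river.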

and then, we identify, as follows

whichidtf = function(xy, h){
  h = .7*h        # shrink the cell slightly
  label = "EMPTY"
  # later tests override earlier ones, so churches get the highest priority
  if(!is.null(identification(xy,h,M))) label="HOUSE"
  if(!is.null(identification(xy,h,P))) label="PARK"
  if(!is.null(identification(xy,h,W))) label="WATER"
  if(!is.null(identification(xy,h,U))) label="UNIVERSITY"
  if(!is.null(identification(xy,h,C))) label="CHURCH"
  return(label)
}
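Note that the grid itself (vx, vy, and the half-width h) was not defined above; a minimal sketch, assuming we simply split the bounding box of the buildings layer M into square-ish cells, could be

bb_M = bbox(M)        # bounding box of the buildings layer
n_cells = 25          # hypothetical number of cells per side
vx = seq(bb_M[1,1], bb_M[1,2], length=n_cells+1)
vy = seq(bb_M[2,1], bb_M[2,2], length=n_cells+1)
h = (vx[2]-vx[1])/2   # half-width of a cell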

Let us use colored rectangles to make sure it works

nx = length(vx)
vx = as.numeric((vx[2:nx]+vx[1:(nx-1)])/2)   # x mid-points of the cells
ny = length(vy)
vy = as.numeric((vy[2:ny]+vy[1:(ny-1)])/2)   # y mid-points of the cells
 plot(M,border="white")
 for(i in 1:(nx-1)){
     for(j in 1:(ny-1)){
         lb=whichidtf(c(vx[i],vy[j]),h)
         if(lb=="HOUSE") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="grey")
         if(lb=="PARK") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="green")
         if(lb=="WATER") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="blue")
         if(lb=="CHURCH") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="purple")      
     }}

As a first start, let us agree that it works. For the pictures, I borrowed icons from https://fontawesome.com/. For instance, we can have a tree

library(png)
library(grid)
download.file("http://freakonometrics.hypotheses.org/files/2018/05/tree.png","tree.png")
tree <- readPNG("tree.png")

Unfortunately, the color is not good (the icon is black), but that is easy to fix by playing with the RGBA channels returned by readPNG

rev_tree = tree
rev_tree[,,2] = tree[,,4]   # put the alpha mask into the green channel, turning the icon green

We can do the same for houses, churches and water actually

download.file("http://freakonometrics.hypotheses.org/files/2018/05/angle-double-up.png","angle-double-up.png")
download.file("http://freakonometrics.hypotheses.org/files/2018/05/home.png","home.png")
download.file("http://freakonometrics.hypotheses.org/files/2018/05/church.png","church.png")
water <- readPNG("angle-double-up.png")
rev_water = water
rev_water[,,3] = water[,,4]       # alpha mask into the blue channel
home <- readPNG("home.png")
rev_home = home
rev_home[,,4] = home[,,4]*.5      # make the house icon semi-transparent
church <- readPNG("church.png")
rev_church = church
rev_church[,,1] = church[,,4]*.5  # red channel
rev_church[,,3] = church[,,4]*.5  # and blue channel, for a purple icon

and that’s almost it. We can then add them to the map

 plot(M,border="white")
 for(i in 1:(nx-1)){
   for(j in 1:(ny-1)){
     lb=whichidtf(c(vx[i],vy[j]),h)
     if(lb=="HOUSE")  rasterImage(rev_home,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="PARK") rasterImage(rev_tree,vx[i]-h*.9,vy[j]-h*.8,vx[i]+h*.9,vy[j]+h*.8)
     if(lb=="WATER") rasterImage(rev_water,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="CHURCH") rasterImage(rev_church,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)     
   }}

Nice, isn’t it? (at least as a first draft, done during the lunch break of the R conference in Budapest, today).