Convex Regression Model

This morning during the lecture on nonlinear regression, I mentioned (very) briefly the case of convex regression. Since I forgot to mention the R code, I will publish it here. Assume that y_i=m(\mathbf{x}_i)+\varepsilon_i where m:\mathbb{R}^d\rightarrow \mathbb{R} is some convex function.

Then m is convex if and only if \forall\mathbf{x}_1,\mathbf{x}_2\in\mathbb{R}^d, \forall t\in[0,1], m(t\mathbf{x}_1+[1-t]\mathbf{x}_2) \leq tm(\mathbf{x}_1)+[1-t]m(\mathbf{x}_2). Hildreth (1954) proved that if m^\star=\underset{m \text{ convex}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-m(\mathbf{x}_i)\big)^2\right\rbrace then \mathbf{\theta}^\star=(m^\star(\mathbf{x}_1),\cdots,m^\star(\mathbf{x}_n)) is unique.

Let \mathbf{y}=\mathbf{\theta}+\mathbf{\varepsilon}; then \mathbf{\theta}^\star=\underset{\mathbf{\theta}\in \mathcal{K}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \big(y_i-\theta_i\big)^2\right\rbrace where \mathcal{K}=\{\mathbf{\theta}\in\mathbb{R}^n:\exists m\text{ convex },m(\mathbf{x}_i)=\theta_i\}. I.e. \mathbf{\theta}^\star is the projection of \mathbf{y} onto the (closed) convex cone \mathcal{K}. The projection theorem gives existence and uniqueness.

For convenience, in the application, we will consider the real-valued case, m:\mathbb{R}\rightarrow \mathbb{R}, i.e. y_i=m(x_i)+\varepsilon_i. Assume that observations are ordered x_1\leq x_2\leq\cdots \leq x_n. Here \mathcal{K}=\left\lbrace\mathbf{\theta}\in\mathbb{R}^n:\frac{\theta_2-\theta_1}{x_2-x_1}\leq \frac{\theta_3-\theta_2}{x_3-x_2}\leq \cdots \leq \frac{\theta_n-\theta_{n-1}}{x_n-x_{n-1}}\right\rbrace

Hence, this is a quadratic program with n-2 linear constraints.

Moreover, m^\star is a piecewise linear function, obtained by interpolating the consecutive pairs (x_i,\theta_i^\star).
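To make this explicit, here is a minimal sketch (mine, not from the lecture), using the quadprog package, and assuming that the x_i's are distinct (ties, as in the cars dataset used below, would have to be aggregated first):

library(quadprog)
convex_fit = function(x,y){
  o = order(x); x = x[o]; y = y[o]
  n = length(x)
  # n-2 linear constraints: consecutive slopes must be non-decreasing
  A = matrix(0,n-2,n)
  for(i in 1:(n-2)){
    A[i,i]   =  1/(x[i+1]-x[i])
    A[i,i+1] = -1/(x[i+1]-x[i]) - 1/(x[i+2]-x[i+1])
    A[i,i+2] =  1/(x[i+2]-x[i+1])
  }
  # minimize ||y-theta||^2, i.e. (1/2) theta' I theta - y' theta, subject to A theta >= 0
  theta = solve.QP(Dmat=diag(n), dvec=y, Amat=t(A), bvec=rep(0,n-2))$solution
  list(x=x, y=theta)
}

The convex fit is then the piecewise linear interpolation of the (x_i,\theta_i^\star)'s, e.g. lines(convex_fit(x,y), col=2) after plotting the data.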

If m is differentiable, m is convex if and only if m(\mathbf{x})+ \nabla m(\mathbf{x})^{\text{T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{x},\mathbf{y}

More generally, if m is convex, then, for any \mathbf{x}, there exists \xi_{\mathbf{x}}\in\mathbb{R}^n such that m(\mathbf{x})+ \xi_{\mathbf{x}}^{\text{ T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}) for all \mathbf{y}. \xi_{\mathbf{x}} is a subgradient of m at {\mathbf{x}}, and the set of subgradients is \partial m(\mathbf{x})=\big\lbrace \xi : m(\mathbf{x})+ \xi^{\text{ T}}\cdot[\mathbf{y}-\mathbf{x}] \leq m(\mathbf{y}),\forall \mathbf{y}\in\mathbb{R}^n\big\rbrace

Hence, \mathbf{\theta}^\star is the solution of \text{argmin}\big\lbrace\|\mathbf{y}-\mathbf{\theta}\|^2\big\rbrace subject to \theta_i+\xi_i^{\text{ T}}[\mathbf{x}_j-\mathbf{x}_i]\leq\theta_j for all i,j, for some \xi_1,\cdots,\xi_n\in\mathbb{R}^n. Now, to do it for real, use the cobs package for constrained (b-)splines regression,

library(cobs)

To get a convex regression, use

plot(cars)
x = cars$speed
y = cars$dist
rc = conreg(x,y,convex=TRUE)
lines(rc, col = 2)


Here we can get the values of the knots

rc
 
Call:  conreg(x = x, y = y, convex = TRUE) 
Convex regression: From 19 separated x-values, using 5 inner knots,
     7,    8,    9,   20,   23.
RSS =  1356; R^2 = 0.8766;
 needed (5,0) iterations

and actually, if we use them in a linear-spline regression, we get the same output here

library(splines)
reg = lm(dist~bs(speed,degree=1,knots=c(4,7,8,9,20,23,25)),data=cars)
u = seq(4,25,by=.1)
v = predict(reg,newdata=data.frame(speed=u))
lines(u,v,col="green")

Let us add vertical lines for the knots

abline(v=c(4,7,8,9,20,23,25),col="grey",lty=2)

Classification from scratch, neural nets 6/8

Sixth post of our series on classification from scratch. The latest one was on the lasso regression, which was still based on a logistic regression model, assuming that the variable of interest Y has a Bernoulli distribution. From now on, we will discuss techniques that did not originate from those probabilistic models, even if they might still have a probabilistic interpretation. Somehow. Today, we will start with neural nets.

Maybe I should start with a disclaimer. The goal is not to replicate well designed R functions, used for predictive modeling. It is simply to get a basic understanding of what’s going on.
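One practical remark: the code below assumes that the myocarde dataset, used in the previous posts of this series, is already loaded, with the 0/1 response PRONO and the seven explanatory variables (the path below is just a placeholder, see the first post for the actual file):

# myocarde = read.table(".../myocarde.csv", header=TRUE, sep=";")
# myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1    # if PRONO is coded SURVIE/DECES
str(myocarde)   # 71 observations, 7 covariates and the response PRONO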

Networks, nodes and edges

First of all, neural nets are nets, or networks. I will skip the parallel with the “neural” stuff because it does not help me understand what is happening (all apologies for my poor knowledge of biology, and cells)

So, it’s about some network. Networks have nodes, and edges that connect the nodes,

or maybe, to be more specific (at least it helped me understand what’s going on), some sort of flow network,

In such a network, we usually have (here multiple) sources (here \color{red}\{s_1,s_2,s_3\}), on the left, and a sink (here \{\color{blue}t\}), on the right. To continue with this metaphorical introduction, information from the sources should reach the sink. And usually, sources are explanatory variables, \{\mathbf{x}_1,\cdots,\mathbf{x}_p\}, and the sink is our variable of interest \mathbf{y}. And we want to create a graph, from the sources to the sink. We will have directed edges, with only one (unique) direction, on which we will put weights. It is not a flow, so the parallel with flow networks will stop here. For instance, the simplest network is the following one, with no layer (i.e. no node between the sources and the sink)

The output here is a binary variable y\in\{0,1\} (it can also be y\in\{-1,+1\} but here, it’s not a big deal). In our network, our output will be y\in(0,1), because it is easier to handle. For instance, consider y=f(\text{something}), for some function f taking values in (0,1). One can consider the sigmoid function f(x)=\frac{1}{1+e^{-x}}=\frac{e^{x}}{e^{x}+1} which is actually the logistic function (so we should not be surprised to get results somehow close to the logistic regression…). This function f is called the activation function, and there are thousands of such functions. If y\in\{-1,+1\}, people consider the hyperbolic tangent f(x)=\tanh(x)={\frac {(e^{x}-e^{-x})}{(e^{x}+e^{-x})}} or the inverse tangent function f(x)=\tan ^{-1}(x).
As input for such a function, we consider a weighted sum of the incoming nodes. So here y_i=f\left(\sum_{j=1}^p\omega_j x_{j,i}\right) and we can actually also add a constant, y_i=f\left(\omega_0+\sum_{j=1}^p\omega_j x_{j,i}\right) So far, we are not far away from the logistic regression. Except that our starting point was a probabilistic model, in the sense that the latter was interpreted as a probability (the probability that Y=1) and we wanted the model with the highest likelihood. But we’ll talk about the selection of weights later on. First, let us construct our first (very simple) neural network. First, we have the sigmoid function

sigmoid = function(x) 1 / (1 + exp(-x))
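Just to visualize the three activation functions mentioned above (sigmoid, hyperbolic tangent and arctangent), a quick plot of mine:

xx = seq(-4,4,length=201)
plot(xx,sigmoid(xx),type="l",col="red",ylim=c(-pi/2,pi/2),xlab="x",ylab="f(x)")
lines(xx,tanh(xx),col="blue")
lines(xx,atan(xx),col="darkgreen")
legend("topleft",c("sigmoid","tanh","arctan"),col=c("red","blue","darkgreen"),lty=1)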

Then consider some weights. In our model with seven explanatory variables, we need 7 weights. Or 8 if we include the constant term. Let us consider \mathbf{\omega}=\mathbf{1},

weights_0 = rep(1,8)
X = as.matrix(cbind(1,myocarde[,1:7]))
y_5_1 = sigmoid(X %*% weights_0)

that’s kind of stupid because all our predictions are (almost) equal to 1, here. Let us try something else, like \mathbf{\omega}=\widehat{\mathbf{\beta}}^{ols}. It is optimized, somehow (for another problem), but at least we have something to visualize what’s going on

weights_0 = lm(PRONO~.,data=myocarde)$coefficients

then use

y_5_1 = sigmoid(X %*% weights_0)

In order to see if we get a “good” prediction, let us plot the ROC curve, and compare it with the one we got with a (simple) logistic regression

library(ROCR)
pred = ROCR::prediction(y_5_1,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
reg = glm(PRONO~.,data=myocarde,family=binomial(link = "logit"))
y_0 = predict(reg,type="response")
pred0 = ROCR::prediction(y_0,myocarde$PRONO)
perf0 = ROCR::performance(pred0,"tpr", "fpr")
plot(perf0,add=TRUE,col="red")


That’s not bad for a very first attempt. Except that we’ve been cheating here, since we did use \mathbf{\omega}=\widehat{\mathbf{\beta}}^{ols}. How, for real, should we choose those weights?

Using a loss function

Well, if we want an “optimal” set of weights, we need to “optimize” an objective function. So we need to quantify the loss associated with a mistake, between the prediction and the observation. Consider here a quadratic loss function

loss = function(weights){
  mean( (myocarde$PRONO-sigmoid(X %*% weights))^2) }

It might be stupid to use a quadratic loss function for a classification, but here, it’s not the point. We just want to understand what is the algorithm we use, and the loss function \ell is just one parameter. Then we want to solve \mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n\ell\left(y_i,f(\omega_0+\mathbf{x}_i^T\mathbf{\omega})\right)\right\rbrace Thus, consider

weights_1 = optim(weights_0,loss)$par

(where the starting point is the OLS estimate). Again, to see what’s going on, let us visualize the ROC curve

y_5_2 = sigmoid(X %*% weights_1)
pred = ROCR::prediction(y_5_2,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
plot(perf0,add=TRUE,col="red")


That’s not amazing, but again, that’s only a first step.

A single layer

Let us add a single layer in our network.

Those nodes are connected to the sources (incoming edges, from the sources), on the left, and then connected to the sink, on the right. Those nodes are not inter-connected. And again, for that network, we need edges (i.e. series of weights). For instance, on the network above, we did add one single layer, with (only) three nodes.

For such a network, the prediction formula is \mathbf{y}=f\left( \omega_0+ \sum_{h=1}^3\omega_h f_h\left(\omega_{h,0}+ \sum_{j=1}^p \omega_{h,j} x_j\right)\right) or, more synthetically, \mathbf{y}=f\left( \omega_0+ \sum_{h=1}^3 \omega_hf_h\left(\omega_{h,0}+ \mathbf{x}^T\mathbf{\omega}_h\right)\right) Usually, we consider the same activation function everywhere. Don’t ask me why, I find that weird.
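Just to make that formula concrete, here is a generic sketch (a hypothetical helper of mine, not the code actually used below), where W is the (p+1)\times 3 matrix of input weights (constant terms first) and w the vector of the 3+1 output weights:

predict_1layer = function(x, W, w, f=sigmoid){
  hidden = f(cbind(1,x) %*% W)      # outputs of the three nodes of the layer
  as.vector(f(cbind(1,hidden) %*% w))
}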

Now, we have a lot of weights to choose. Let us use again OLS estimates

weights_1 <- lm(PRONO~1+FRCAR+INCAR+INSYS+PAPUL+PVENT,data=myocarde)$coefficients
X1 = as.matrix(cbind(1,myocarde[,c("FRCAR","INCAR","INSYS","PAPUL","PVENT")]))
weights_2 <- lm(PRONO~1+INSYS+PRDIA,data=myocarde)$coefficients
X2=as.matrix(cbind(1,myocarde[,c("INSYS","PRDIA")]))
weights_3 <- lm(PRONO~1+PAPUL+PVENT+REPUL,data=myocarde)$coefficients
X3=as.matrix(cbind(1,myocarde[,c("PAPUL","PVENT","REPUL")]))

In that case, we did specify the edges, i.e. which sources (explanatory variables) should be used for each additional node. Actually, here, other techniques could have been used, like a PCA: each node would then be one of the components. But we’ll use that idea later on…

X = cbind(sigmoid(X1 %*% weights_1), sigmoid(X2 %*% weights_2), sigmoid(X3 %*% weights_3))

But we’re not done here. Those were the weights from the sources to the nodes in the layer. We still need the weights from the nodes to the sink. Here, let us use a simple average

weights = c(1/3,1/3,1/3)
y_5_3 <- sigmoid(X %*% weights)

Again, we can plot the ROC curve to see what we’ve done…

pred = ROCR::prediction(y_5_3,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
plot(perf0,add=TRUE,col="red")

On back propagation

Now, we need some optimal selection of those weights. Observe that with only 3 nodes, there are already (7+1)\times3+3=27 parameters in that model! Clearly, parsimony is not the major issue when you start using neural nets! If p(\mathbf{x})=f\left( \omega_0+ \sum_{h=1}^3 \omega_hf_h\left(\omega_{h,0}+ \mathbf{x}^T\mathbf{\omega}_h\right)\right) we want to solve \mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n\ell\left(y_i,p(\mathbf{x}_i)\right)\right\rbrace for some loss function, which is \mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n (y_i-p(\mathbf{x}_i))^2 \right\rbrace for the quadratic norm, or \mathbf{\omega}^\star=\text{argmin}\left\lbrace-\frac{1}{n}\sum_{i=1}^n (y_i\log p(\mathbf{x}_i)+[1-y_i]\log [1-p(\mathbf{x}_i)]) \right\rbrace if we want to use the cross-entropy.

For convenience, let us center and scale all the variables we create; otherwise, we get numerical problems.

center = function(z) (z-mean(z))/sd(z)
loss = function(weights){
weights_1 = weights[0+(1:7)]
weights_2 = weights[7+(1:7)]
weights_3 = weights[14+(1:7)]
weights_  = weights[21+1:4]
X1=X2=X3=as.matrix(myocarde[,1:7])
Z1 = center(X1 %*% weights_1)
Z2 = center(X2 %*% weights_2)
Z3 = center(X3 %*% weights_3)
X = cbind(1,sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
mean( (myocarde$PRONO-sigmoid(X %*% weights_))^2)}
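As an aside, here is what the same objective would look like with the cross-entropy loss mentioned above (a variant of mine, not used in what follows):

loss_entropy = function(weights){
  weights_1 = weights[0+(1:7)]
  weights_2 = weights[7+(1:7)]
  weights_3 = weights[14+(1:7)]
  weights_  = weights[21+1:4]
  X1=X2=X3=as.matrix(myocarde[,1:7])
  Z1 = center(X1 %*% weights_1)
  Z2 = center(X2 %*% weights_2)
  Z3 = center(X3 %*% weights_3)
  X = cbind(1,sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
  p = sigmoid(X %*% weights_)
  # numerically, one might want to bound p away from 0 and 1 before taking logs
  -mean( myocarde$PRONO*log(p) + (1-myocarde$PRONO)*log(1-p) )}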

Now that we have our objective function, consider some starting point. We can take weights from a PCA, and then use an optimization algorithm,

library(factoextra)
pca = princomp(myocarde[,1:7])
W = get_pca_var(pca)$contrib
weights_0 = c(W[,1],W[,2],W[,3],c(-1,rep(1,3)/3))
weights_opt = optim(weights_0,loss)$par

The prediction is then obtained using

weights_1 = weights_opt[0+(1:7)]
weights_2 = weights_opt[7+(1:7)]
weights_3 = weights_opt[14+(1:7)]
weights_  = weights_opt[21+1:4]
X1=X2=X3=as.matrix(myocarde[,1:7])
Z1 = center(X1 %*% weights_1)
Z2 = center(X2 %*% weights_2)
Z3 = center(X3 %*% weights_3)
X = cbind(1,sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
y_5_4 = sigmoid(X %*% weights_)

And as previously, why not plot the ROC curve of that model

pred = ROCR::prediction(y_5_4,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
plot(perf0,add=TRUE,col="red")


That’s not too bad. But with 27 coefficients, that’s what we would expect, no?

Using nnet() function

That’s more or less what is done in neural nets functions. Let us now have a look at some dedicated R functions.

library(nnet)
myocarde_minmax = myocarde
minmax = function(z) (z-min(z))/(max(z)-min(z))
for(j in 1:7) myocarde_minmax[,j] = minmax(myocarde_minmax[,j])

Here, variables are linearly transformed, to take values in (0,1). Then we can construct a neural network with one single layer, and three nodes,

model_nnet = nnet(PRONO~.,data=myocarde_minmax,size=3)
summary(model_nnet)
a 7-3-1 network with 28 weights
options were -
 b->h1 i1->h1 i2->h1 i3->h1 i4->h1 i5->h1 i6->h1 i7->h1 
 -9.60  -1.79  21.00  14.72 -20.45  -5.05  14.37 -17.37 
 b->h2 i1->h2 i2->h2 i3->h2 i4->h2 i5->h2 i6->h2 i7->h2 
  4.72   2.83  -3.37  -1.64   1.49   2.12   2.31   4.00 
 b->h3 i1->h3 i2->h3 i3->h3 i4->h3 i5->h3 i6->h3 i7->h3 
 -0.58  -6.03  25.14  18.03  -1.19   7.52 -19.47 -12.95 
  b->o  h1->o  h2->o  h3->o 
 -1.32  29.00 -10.32  26.27

Here, it is the full network. And actually, there are (online) some functions that can be used to visualize that network

library(devtools)
source_url('https://gist.githubusercontent.com/fawda123/7471137/raw/466c1474d0a505ff044412703516c34f1a4684a5/nnet_plot_update.r')
plot.nnet(model_nnet)


Nice, isn’t it? We clearly see the intermediary layer, with three nodes, and on top the constants. Edges are the plain lines, the darker, the heavier (in terms of weights).

Using neuralnet()

Other R functions can actually be considered.

library(neuralnet)
model_nnet = neuralnet(formula(glm(PRONO~.,data=myocarde_minmax)),
myocarde_minmax,hidden=3, act.fct = sigmoid)
plot(model_nnet)


Again, for the same network structure, with one (hidden) layer, and three nodes in it.

Network with multiple layers

The good thing is that it’s possible to add more layers. Like two layers. Nodes from the first layer are no longer connected with the sink, but with nodes in the second layer. And those nodes will then be connected to the sink. We now have something like
p(\mathbf{x})=f\left( \omega_0+ \sum_{h=1}^3 \omega_h f_h\left(\omega_{h,0}+ \mathbf{z}_h^T\mathbf{\omega}_h\right)\right) where \mathbf{z}_h=f\left( \omega_{h,0}+ \sum_{j=1}^{k_h} \omega_{h,j} f_{h,j}\left(\omega_{h,j,0}+ \mathbf{x}^T\mathbf{\omega}_{h,j}\right)\right) I may be rambling here (a little bit) but that’s a lot of parameters. Here is the visualization of such a network,

library(neuralnet)
model_nnet = neuralnet(formula(glm(PRONO~.,data=myocarde_minmax)),
myocarde_minmax,hidden=c(3,2), act.fct = sigmoid)   # two hidden layers, with 3 and 2 nodes
plot(model_nnet)
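In terms of prediction, a generic sketch with two hidden layers would be (again, my own illustration, with W1 of dimension (p+1)\times H_1, W2 of dimension (H_1+1)\times H_2, and w a vector of H_2+1 output weights):

predict_2layer = function(x, W1, W2, w, f=sigmoid){
  h1 = f(cbind(1,x) %*% W1)     # first hidden layer
  h2 = f(cbind(1,h1) %*% W2)    # second hidden layer
  as.vector(f(cbind(1,h2) %*% w))
}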

Application

Let us get back on our simple dataset, with only two covariates.

library(neuralnet)
df_minmax =df
df_minmax$y=(df_minmax$y=="1")*1
minmax = function(z) (z-min(z))/(max(z)-min(z))
for(j in 1:2) df_minmax[,j] = minmax(df[,j])
X = as.matrix(cbind(1,df_minmax[,1:2]))

Consider only one layer, with two nodes

model_nnet = neuralnet(formula(lm(y~.,data=df_minmax)),
df_minmax,hidden=c(2))
plot(model_nnet)


Here, we did not specify it, but the activation function is the sigmoid (actually, it is called logistic here)

model_nnet$act.fct
function (x) 
{
    1/(1 + exp(-x))
}
 
attr(,"type")
[1] "logistic"
f=model_nnet$act.fct

The weights (on the figure) can be obtained using

w0 = model_nnet$weights[[1]][[2]][,1]
w1 = model_nnet$weights[[1]][[1]][,1]
w2 = model_nnet$weights[[1]][[1]][,2]

Now, to get our prediction, we should use p(\mathbf{x})=f\left( \omega_0+ \omega_1 f(\omega_{1,0}+ \mathbf{x}^T\mathbf{\omega}_{1,1:2})+\omega_2 f(\omega_{2,0}+ \mathbf{x}^T\mathbf{\omega}_{2,1:2})\right) which can be obtained using

f(cbind(1,f(X%*%w1),f(X%*%w2))%*%w0)
              [,1]
 [1,] 0.7336477343
 [2,] 0.7317999050
 [3,] 0.7185803540
 [4,] 0.7404005280
 [5,] 0.7518482779
 [6,] 0.4939774149
 [7,] 0.4965876378
 [8,] 0.7101714888
 [9,] 0.5050760026
[10,] 0.5049877644

Unfortunately, it is not the output of the model here,

neuralnet::prediction(model_nnet)
Data Error:	0;
$rep1
       x1           x2              y
1  0.1250 0.0000000000  0.02030470787
2  0.0625 0.1176470588  0.89621706711
3  0.9375 0.2352941176  0.01995171956
4  0.0000 0.4705882353  1.10849420363
5  0.5000 0.4705882353 -0.01364966058
6  0.3125 0.5294117647 -0.02409150561
7  0.6875 0.8235294118  0.93743057765
8  0.3750 0.8823529412  1.01320924782
9  1.0000 0.9058823529  1.04805134309
10 0.5625 1.0000000000  1.00377379767

If anyone has a clue, I’d be glad to know what went wrong here… I find it odd to have outputs outside the (0,1) interval, and the output does not seem to be p(\mathbf{x})=\omega_{0,0}+ \omega_{0,1} f(\omega_{1,0}+ \mathbf{x}^T\mathbf{\omega}_{1,1:2})+\omega_{0,2} f(\omega_{2,0}+ \mathbf{x}^T\mathbf{\omega}_{2,1:2}) either,

cbind(1,f(X%*%w1),f(X%*%w2))%*%w0
                [,1]
 [1,]  1.01320924782
 [2,]  1.00377379767
 [3,]  0.93743057765
 [4,]  1.04805134309
 [5,]  1.10849420363
 [6,] -0.02409150561
 [7,] -0.01364966058
 [8,]  0.89621706711
 [9,]  0.02030470787
[10,]  0.01995171956
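Actually (and this is only my reading of the output, so take it with a grain of salt), the ten values just above are exactly the ten values returned by neuralnet::prediction(), only in a different order, since that function sorts the observations by their covariates. And since linear.output=TRUE is the default in neuralnet(), the activation function is presumably not applied to the output node, which would explain both the match and the values outside the (0,1) interval. A quick check:

out_byhand = cbind(1,f(X%*%w1),f(X%*%w2))%*%w0
out_nnet = neuralnet::prediction(model_nnet)$rep1$y   # assuming the list structure printed above
round(sort(as.vector(out_byhand)) - sort(out_nnet), 10)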

(to be continued…)

Classification from scratch, penalized Ridge logistic 4/8

Fourth post of our series on classification from scratch, following the previous post which was some sort of detour on kernels. But today, we’ll get back on the logistic model.

Formal approach of the problem

We’ve seen before that the classical estimation technique used to estimate the parameters of a parametric model was to use the maximum likelihood approach. More specifically, \widehat{\mathbf{\beta}}=\text{argmax}\lbrace \log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})\rbrace The objective function here focuses (only) on the goodness of fit. But usually, in econometrics, we believe something like non sunt multiplicanda entia sine necessitate (“entities are not to be multiplied without necessity”), the parsimony principle: simpler theories are preferable to more complex ones. So we want to penalize for too complex models.

This is not a bad idea. It is mentioned here and there in econometrics textbooks, but usually for model choice, not for inference. Usually, we estimate parameters using maximum likelihood techniques, and then we use AIC or BIC to compare two models. Recall that Akaike’s criterion (AIC) is based on -2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\text{dim}(\widehat{\mathbf{\beta}}) We have on the left a measure of the goodness of fit, and on the right, a penalty increasing with the “complexity” of the model.

Very quickly, here, the complexity is the number of variates used. I will not enter into details about the concept of sparsity (and the true dimension of the problem); I recommend reading the book by Martin Wainwright, Robert Tibshirani and Trevor Hastie on that issue. But assume that we do not make any variable selection: we consider the regression on all covariates. Define \Vert\mathbf{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~\Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|,~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2} for any \mathbf{a}\in\mathbb{R}^d. One might say that the AIC could be written -2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\|\widehat{\mathbf{\beta}}\|_{\ell_0} And actually, this will be our objective function. More specifically, we will consider
\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|\rbrace for some norm \|\cdot\|. I will not get back here on the motivation and the (theoretical) properties of those estimates (that will actually be discussed in the Summer School in Barcelona, in July), but in this post, I want to discuss the numerical algorithm used to solve such an optimization problem, for \|\cdot\|_{\ell_2} (the Ridge regression) and for \|\cdot\|_{\ell_1} (the LASSO regression).

Normalization of the covariates

The problem with \|\mathbf{\beta}\| is that the norm should make sense, somehow: a small \beta_j has to be interpreted relative to the scale (the “dimension”) of x_j. So, the first step will be to consider linear transformations of all covariates x_j, to get centered and scaled variables (with unit variance)

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)

Ridge Regression (from scratch)

Before running some code, recall that we want to solve something like \widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_2}^2\rbrace In the case where we consider the log-likelihood of some Gaussian variable, we get the sum of squared residuals, and we can obtain an explicit solution. But not in the context of a logistic regression.

The heuristics behind Ridge regression can be visualized on the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue circle is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem: \min_{\mathbf{\beta}:\|\mathbf{\beta}\|^2_{\ell_2}\leq s} \left\lbrace \sum_{i=1}^n -\log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) \right\rbrace can be written equivalently (it is a strictly convex problem) \min_{\mathbf{\beta},\lambda} \left\lbrace -\sum_{i=1}^n \log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) +\lambda \|\mathbf{\beta}\|_{\ell_2}^2 \right\rbrace Thus, the constrained maximum should lie in the blue disk

# note: for this two-dimensional visualization, X is assumed to contain only
# two (centered and scaled) covariates
LogLik = function(bbeta){
  b0=bbeta[1]
  beta=bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) - 
  (1-y)*log(1 + exp(b0+X%*%beta)))}
u = seq(-4,4,length=251)
v = outer(u,u,Vectorize(function(x,y) LogLik(c(1,x,y))))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
u = seq(-1,1,length=251)
lines(u,sqrt(1-u^2),type="l",lwd=2,col="blue")
lines(u,-sqrt(1-u^2),type="l",lwd=2,col="blue")

Let us consider the objective function, with the following code

PennegLogLik = function(bbeta,lambda=0){
  b0   = bbeta[1]
  beta = bbeta[-1]
 -sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*
  log(1 + exp(b0+X%*%beta)))+lambda*sum(beta^2)
}

Why not try a standard optimisation routine? In the very first post of this series, we mentioned that using optimization routines was not clever, since they strongly rely on the starting point. But here, it is not the case

lambda = 1
beta_init = lm(PRONO~.,data=myocarde)$coefficients
vpar = matrix(NA,1000,8)
for(i in 1:1000){
vpar[i,] = optim(par = beta_init*rnorm(8,1,2), 
function(x) PennegLogLik(x,lambda), method = "BFGS", control = list(abstol=1e-9))$par}
par(mfrow=c(1,2))
plot(density(vpar[,2]),ylab="",xlab=names(myocarde)[1])
plot(density(vpar[,3]),ylab="",xlab=names(myocarde)[2])


Clearly, even if we change the starting point, it looks like we converge towards the same value. That could be considered as the optimum.

The code to compute \widehat{\mathbf{\beta}}_{\lambda} would then be

opt_ridge = function(lambda){
beta_init = lm(PRONO~.,data=myocarde)$coefficients
logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda), 
method = "BFGS", control=list(abstol=1e-9))
logistic_opt$par[-1]}

and we can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} as a function of {\lambda}

v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(opt_ridge)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1])
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])

At least it seems to make sense: we can observe the shrinkage as \lambda increases (we’ll get back to that later on).

Ridge, using the Newton-Raphson algorithm

We’ve seen that we can also use Newton-Raphson to solve this problem. Without the penalty term, the algorithm was \mathbf{\beta}_{new} = \mathbf{\beta}_{old} - \left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}} where \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old}) and \frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X} where \mathbf{\Delta}_{old} is the diagonal matrix with terms \mathbf{p}_{old}(1-\mathbf{p}_{old}) on the diagonal.

Thus \mathbf{\beta}_{new} = \mathbf{\beta}_{old} + (\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T[\mathbf{y}-\mathbf{p}_{old}] which we can also write \mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z} where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. Here, on the penalized problem, we can easily prove that \frac{\partial\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}=\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}-2\lambda\mathbf{\beta}_{old} while \frac{\partial^2\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}-2\lambda\mathbb{I} Hence \mathbf{\beta}_{\lambda,new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}+2\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}
The code is then

Y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
X = cbind(1,X)
colnames(X) = c("Inter",names(myocarde[,1:7]))
 beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   Delta = matrix(0,nrow(X),nrow(X));diag(Delta)=(pi*(1-pi))
   z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
   B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
   beta = cbind(beta,B)}
beta[,8:10]
              [,1]        [,2]        [,3]
XInter  0.59619654  0.59619654  0.59619654
XFRCAR  0.09217848  0.09217848  0.09217848
XINCAR  0.77165707  0.77165707  0.77165707
XINSYS  0.69678521  0.69678521  0.69678521
XPRDIA -0.29575642 -0.29575642 -0.29575642
XPAPUL -0.23921101 -0.23921101 -0.23921101
XPVENT -0.33120792 -0.33120792 -0.33120792
XREPUL -0.84308972 -0.84308972 -0.84308972

Again, it seems that convergence is very fast.

And interestingly, with that algorithm, we can also derive the variance of the estimator \text{Var}[\widehat{\mathbf{\beta}}_{\lambda}]=[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{\Delta}\text{Var}[\mathbf{z}]\mathbf{\Delta}\mathbf{X}[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1} where \text{Var}[\mathbf{z}]=\mathbf{\Delta}^{-1}

The code to compute \widehat{\mathbf{\beta}}_{\lambda} as a function of \lambda is then

newton_ridge = function(lambda=1){
 beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:20){
   pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   Delta = matrix(0,nrow(X),nrow(X));diag(Delta)=(pi*(1-pi))
   z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
   B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
   beta = cbind(beta,B)}
Varz = solve(Delta)
Varb = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% t(X)%*% Delta %*% Varz %*%
  Delta %*% X %*% solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X)))
return(list(beta=beta[,ncol(beta)],sd=sqrt(diag(Varb))))}

We can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} (as a function of \lambda)

v_lambda=c(exp(seq(-2,5,length=61)))
est_ridge=Vectorize(function(x) newton_ridge(x)$beta)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])


and to get the evolution of the variance

v_lambda=c(exp(seq(-2,5,length=61)))
est_ridge=Vectorize(function(x) newton_ridge(x)$sd)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i],lwd=2)


Recall that when \lambda=0 (on the left of the graphs), \widehat{\mathbf{\beta}}_{0}=\widehat{\mathbf{\beta}}^{mco} (no penalty). Thus, as \lambda increases, (i) the bias increases (estimates tend to 0) and (ii) the variances decrease.

Ridge, using glmnet

As always, there are R functions available to run a ridge regression. Let us use the glmnet function, with \alpha=0

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
library(glmnet)
glm_ridge = glmnet(X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)

or as a function of the norm (the \ell_1 norm here, I don’t know why. I don’t know either why all the graphs obtained with different optimisation routines are so different… maybe that will be for another post…)

Ridge with orthogonal covariates

An interesting case is obtained when covariates are orthogonal. This can be obtained using a PCA of the covariates.

library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord

Let us run a ridge regression on those (orthogonal) covariates

library(glmnet)
glm_ridge = glmnet(pca_X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)

plot(glm_ridge,col=colrs,lwd=2)

We clearly observe the shrinkage of the parameters, in the sense that (in the linear model, with orthonormal covariates) \widehat{\mathbf{\beta}}_{\lambda}^{\perp}=\frac{\widehat{\mathbf{\beta}}^{mco}}{1+\lambda}
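As a small sanity check (my own addition, in the linear/Gaussian case, and after orthonormalizing the columns so that \mathbf{X}^T\mathbf{X}=\mathbb{I}), the identity can be verified numerically:

Xo = qr.Q(qr(pca_X))                    # orthonormal columns, spanning the same space
beta_ols = t(Xo) %*% y                  # OLS estimate, since t(Xo) %*% Xo is the identity
lambda = 2
beta_ridge = solve(t(Xo)%*%Xo + lambda*diag(ncol(Xo))) %*% t(Xo) %*% y
max(abs(beta_ridge - beta_ols/(1+lambda)))   # numerically zero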

Application

Let us try with our second set of data

df0 = df
df0$y=as.numeric(df$y)-1
plot_lambda = function(lambda){
m = apply(df0,2,mean)
s = apply(df0,2,sd)
for(j in 1:2) df0[,j] = (df0[,j]-m[j])/s[j]
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0,lambda=lambda)
u = seq(0,1,length=101)
p = function(x,y){
  xt = (x-m[1])/s[1]
  yt = (y-m[2])/s[2]
  predict(reg,newx=cbind(x1=xt,x2=yt),type='response')}
v = outer(u,u,p)
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}

We can try various values of \lambda

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(.2))
plot_lambda(.2)


or

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(1.2))
plot_lambda(1.2)


Next step is to change the norm of the penalty, with the \ell_1 norm (to be continued…)

Classification from scratch, logistic with kernels 3/8

Third post of our series on classification from scratch, following the previous post introducing smoothing techniques, with (b)-splines. Consider here kernel based techniques. Note that here, we do not use the “logistic” model… it is purely non-parametric.

Kernel based estimates, from scratch

I like kernels because they are somehow very intuitive. With GLMs, the goal is to estimate \hat{m}(\mathbf{x})=\mathbb{E}(Y|\mathbf{X}=\mathbf{x}). Heuristically, we want to compute the (conditional) expected value on the neighborhood of \mathbf{x}. If we consider some spatial model, where \mathbf{x} is the location, we want the expected value of some variable Y, “on the neighborhood” of \mathbf{x}. A natural approach is to use some administrative region (county, department, region, etc). This means that we have a partition of \mathcal{X} (the space where the variable(s) lie). This will yield the regressogram, introduced in Tukey (1961). For convenience, assume some interval / rectangle / box type of partition. In the univariate case, consider \hat{m}_{\mathbf{a}}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))} or the moving regressogram \hat{m}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])} In that case, the neighborhood is defined as the interval (x\pm h). That’s nice, but clearly very simplistic. If \mathbf{x}_i=\mathbf{x} and \mathbf{x}_j=\mathbf{x}-h+\varepsilon (with \varepsilon>0), both observations are used to compute the conditional expected value. But if \mathbf{x}_{j'}=\mathbf{x}-h-\varepsilon, only \mathbf{x}_i is considered, even if the distance between \mathbf{x}_{j} and \mathbf{x}_{j'} is extremely small. Thus, a natural idea is to use weights that are a function of the distance between the \mathbf{x}_{i}‘s and \mathbf{x}. Use \tilde{m}(x)=\frac{\sum_{i=1}^ny_i\cdot k_h\left({x-x_i}\right)}{\sum_{i=1}^nk_h\left({x-x_i}\right)} where (classically) k_h(x)=k\left(\frac{x}{h}\right) for some kernel k (a non-negative function that integrates to one) and some bandwidth h. Usually, kernels are denoted with a capital letter K, but I prefer to use k, because it can be interpreted as the density of some random noise we add to all observations (independently).

Actually, one can derive that estimate by using kernel-based estimators of densities. Recall that \tilde{f}(\mathbf{y})=\frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n k\left(\mathbf{H}^{-1/2}(\mathbf{y}-\mathbf{y}_i)\right)
Now, use the fact that the expected value can be defined as m(x)=\int yf(y|x)dy=\frac{\int y f(y,x)dy}{\int f(y,x)dy} Consider now a bivariate (product) kernel to estimate the joint density. The numerator is estimated by \frac{1}{nh}\sum_{i=1}^n\int y_i k\left(t,\frac{x-x_i}{h}\right)dt=\frac{1}{nh}\sum_{i=1}^ny_i \kappa\left(\frac{x-x_i}{h}\right) while the denominator is estimated by \frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right) In a general setting, we still use product kernels between Y and \mathbf{X} and write \widehat{m}_{\mathbf{H}}(\mathbf{x})=\displaystyle{\frac{\sum_{i=1}^ny_i\cdot k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}{\sum_{i=1}^n k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}} for some symmetric positive definite bandwidth matrix \mathbf{H}, and k_{\mathbf{H}}(\mathbf{x})=\det[\mathbf{H}]^{-1}k(\mathbf{H}^{-1}\mathbf{x})
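And going back to the moving regressogram mentioned earlier, here is a quick sketch of mine (a uniform kernel, on the INSYS variable):

regressogram = function(x,h){
  idx = abs(myocarde$INSYS-x) <= h
  mean(myocarde$PRONO[idx])
}
u = seq(5,55,length=201)
v = Vectorize(function(x) regressogram(x,5))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)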

Now that we know what kernel estimates are, let us use them. For instance, assume that k is the density of the \mathcal{N}(0,1) distribution. At point x, with a bandwidth h we get the following code

mean_x = function(x,bw){
  w = dnorm((myocarde$INSYS-x)/bw, mean=0,sd=1)
  weighted.mean(myocarde$PRONO,w)}
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


and of course, we can change the bandwidth.

v = Vectorize(function(x) mean_x(x,2))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


We observe what we can read in any textbook: with a smaller bandwidth, we get more variance and less bias. “More variance” means here more variability (since the neighborhood is smaller, there are fewer points to compute the average, and the estimate is more volatile), and “less bias” in the sense that the expected value is supposed to be computed at point x, so the smaller the neighborhood, the better.

Using ksmooth R function

Actually, there is a function in R to compute this kernel regression.

reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = 2*exp(1))
plot(reg$x,reg$y,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)

We can replicate our previous estimate. Nevertheless, the output is not a function, but two vectors (of x and y values). That’s nice to get a graph, but that’s all we get. Furthermore, as we can see, the bandwidth is not exactly the same as the one we used before. I did not find any information online, so I tried to relate it to the function we wrote before

g=function(bk=3){
reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = bk)
f=function(bm){
  v = Vectorize(function(x) mean_x(x,bm))(reg$x)
  z=reg$y-v
  sum((z[!is.na(z)])^2)}
optim(bk,f)$par}
x=seq(1,10,by=.1)
y=Vectorize(g)(x)
plot(x,y)
abline(0,exp(-1),col="red")
abline(0,.37,col="blue")


There is a slope of 0.37, which is very close to e^{-1}. Coincidence? I don’t know, to be honest…
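For what it’s worth, my guess (based on my reading of the ksmooth documentation, so to be taken with care): the kernels in ksmooth are scaled so that their quartiles are at \pm 0.25\times bandwidth; for a Gaussian kernel, this corresponds to a standard deviation of 0.25/\Phi^{-1}(0.75)\approx 0.3706 times the bandwidth, which is very close to (but not exactly) e^{-1}\approx 0.3679,

0.25/qnorm(0.75)   # about 0.3706
exp(-1)            # about 0.3679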

Application in higher dimension

Consider now our bivariate dataset, and consider some product of univariate (Gaussian) kernels

u = seq(0,1,length=101)
p = function(x,y){
  bw1 = .2; bw2 = .2
  w = dnorm((df$x1-x)/bw1, mean=0,sd=1)*
      dnorm((df$x2-y)/bw2, mean=0,sd=1)
  weighted.mean(df$y=="1",w)
}
v = outer(u,u,Vectorize(p))
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

We get the following prediction

Here, the different colors are probabilities.

k-nearest neighbors

An alternative is to consider a neighborhood not defined using a distance to point \mathbf{x}, but through the k nearest neighbors, among the n observations we got. \tilde{m}_k(\mathbf{x})=\frac{1}{n}\sum_{i=1}^n\omega_{i,k}(\mathbf{x})y_i where \omega_{i,k}(\mathbf{x})=n/k if i\in\mathcal{I}_{\mathbf{x}}^k, with \mathcal{I}_{\mathbf{x}}^k=\{i:\mathbf{x}_i\text{ one of the }k\text{ nearest observations to }\mathbf{x}\}
The difficult part here is that we need a valid distance. If units are very different on each component, using the Euclidean distance will be meaningless. So, quite naturally, let us consider here the Mahalanobis distance

Sigma = var(myocarde[,1:7])
Sigma_Inv = solve(Sigma)
d2_mahalanobis = function(x,y,Sinv){as.numeric(x-y)%*%Sinv%*%t(x-y)}
k_closest = function(i,k){
  vect_dist = function(j) d2_mahalanobis(myocarde[i,1:7],myocarde[j,1:7],Sigma_Inv)
vect = Vectorize(vect_dist)((1:nrow(myocarde))) 
which(rank(vect)<=k)}

Here we have a function to find the k closest neighbors of some observation. Then two things can be done to get a prediction. The goal is to predict a class, so we can think of using a majority rule: the prediction for y_i is the same as the prediction of the majority of its neighbors.

k_majority = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = sort(myocarde$PRONO[k_closest(i,k)])[(k+1)/2]
  return(Y)}

But we can also compute the proportion of black points among the closest neighbors. It can actually be interpreted as the probability to be black (that’s actually what was said at the beginning of this post, with kernels),

k_mean = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = mean(myocarde$PRONO[k_closest(i,k)])
  return(Y)}

We can see on our dataset the observation, the prediction based on the majority rule, and the proportion of dead individuals among the 7 closest neighbors

cbind(OBSERVED=myocarde$PRONO,
MAJORITY=k_majority(7),PROPORTION=k_mean(7))
      OBSERVED MAJORITY PROPORTION
 [1,]        1        1  0.7142857
 [2,]        0        1  0.5714286
 [3,]        0        0  0.1428571
 [4,]        1        1  0.5714286
 [5,]        0        1  0.7142857
 [6,]        0        0  0.2857143
 [7,]        1        1  0.7142857
 [8,]        1        0  0.4285714
 [9,]        1        1  0.7142857
[10,]        1        1  0.8571429
[11,]        1        1  1.0000000
[12,]        1        1  1.0000000

Here, we got a prediction for an observed point, located at \boldsymbol{x}_i, but actually, it is possible to seek the k closest neighbors of any point \boldsymbol{x}. Back on our univariate example (to get a graph), we have

mean_x = function(x,k=9){
  w = rank(abs(myocarde$INSYS-x),ties.method ="random")
  mean(myocarde$PRONO[which(w<=k)])}
u=seq(5,55,length=201)
v=Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


That’s not very smooth, but we do not have a lot of points either.

If we use that technique on our two-dimensional dataset, we obtain the following

Sigma_Inv = solve(var(df[,c("x1","x2")]))
u = seq(0,1,length=51)
p = function(x,y){
  k = 6
  vect_dist = function(j)  d2_mahalanobis(c(x,y),df[j,c("x1","x2")],Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(df)) 
  idx  = which(rank(vect)<=k)
  return(mean((df$y==1)[idx]))}
v = outer(u,u,Vectorize(p))
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

This is the idea of local inference, using either kernel on a neighborhood of \mathbf{x} or simply using the k nearest neighbors. Next time, we will investigate penalized logistic regressions, to be continued

Classification from scratch, logistic with splines 2/8

Today, second post of our series on classification from scratch, following the brief introduction on the logistic regression.

Piecewise linear splines

To illustrate what’s going on, let us start with a “simple” regression (with only one explanatory variable). The underlying idea is natura non facit saltus, “nature does not make jumps”, i.e. the processes governing natural things are continuous. That seems to be a rather strong assumption, because one could imagine, instead, a fixed threshold to explain death. For instance, if patients die (for sure) when the “stroke index” exceeds a threshold, we might expect some discontinuity. Except that if that threshold is a heterogeneous (non-observable, continuous) variable, then we get back to the continuity assumption.

The most simple model we can think of to extend the linear model we’ve seen in the previous post is to consider a piecewise linear function, with two parts : small values of x, and larger values of x. The most convenient way to do so is to use the positive part function (x-s)_+ which is the difference between x and s if that difference is positive, and 0 otherwise. For instance \beta_1 x+\beta_2(x-s)_+ is the following piecewise linear function, continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, there is a linear increase, with slope \beta_1, and for larger values of x, the slope becomes \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.
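Just to visualize it, here is a quick sketch of mine, with arbitrary values of the parameters:

x = seq(0,10,by=.1)
s = 5; beta1 = 1; beta2 = -1.5
plot(x, beta1*x + beta2*pmax(x-s,0), type="l", lwd=2)
abline(v=s, lty=2)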

And of course, it is possible to consider more than one knot. The function to get the positive value is the following

pos = function(x,s) (x-s)*(x>=s)

then we can use it directly in our regression model

reg = glm(PRONO~INSYS+pos(INSYS,15)+
pos(INSYS,25),data=myocarde,family=binomial)

The output of the regression is here

summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)     -0.1109     3.2783  -0.034   0.9730  
INSYS           -0.1751     0.2526  -0.693   0.4883  
pos(INSYS, 15)   0.7900     0.3745   2.109   0.0349 *
pos(INSYS, 25)  -0.5797     0.2903  -1.997   0.0458 *

Hence, the original slope, for very small values, is not significant, but then, above 15, it becomes significantly positive. And above 25, there is a significant change again. We can plot it to see what’s going on

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,type="l")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() linear splines

Using a spline basis (as in GAM-type models), things are slightly different. We will use here so-called b-splines,

library(splines)

We can define spline functions with support (5,55) and with knots \{15,25\}

clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)


as we can see, the functions defined here are different from the ones before, but we still have (piecewise) linear functions on each segment (5,15), (15,25) and (25,55). And linear combinations of those functions (of either set) generate the same space. Said differently, even if the interpretation of the output is different, the predictions should be the same

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=1),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept)    -0.9863     2.0555  -0.480   0.6314  
bs(INSYS,..)1  -1.7507     2.5262  -0.693   0.4883  
bs(INSYS,..)2   4.3989     2.0619   2.133   0.0329 *
bs(INSYS,..)3   5.4572     5.4146   1.008   0.3135

Observe that there are three coefficients, as before, but again, the interpretation is here more complicated…

v=predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Nevertheless, the prediction is the same… and that’s nice.

Piecewise quadratic splines

Let us go one step further… Can we have also the continuity of the derivative ? Yes, and that’s easy actually, considering parabolic functions. Instead of using a decomposition on x,(x-s_1)_+ and (x-s_2)_+ consider now a decomposition on x,x^{\color{red}{2}},(x-s_1)^{\color{red}{2}}_+ and (x-s_2)^{\color{red}{2}}_+.

 pos2 = function(x,s) (x-s)^2*(x>=s)
reg = glm(PRONO~poly(INSYS,2)+pos2(INSYS,15)+pos2(INSYS,25),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
                Estimate Std. Error z value Pr(>|z|)  
(Intercept)      29.9842    15.2368   1.968   0.0491 *
poly(INSYS, 2)1 408.7851   202.4194   2.019   0.0434 *
poly(INSYS, 2)2 199.1628   101.5892   1.960   0.0499 *
pos2(INSYS, 15)  -0.2281     0.1264  -1.805   0.0712 .
pos2(INSYS, 25)   0.0439     0.0805   0.545   0.5855

As expected, there are here five coefficients: the intercept and two for the part on the left (three parameters for the parabolic function), and then two additional terms, one for the part in the center – here (15,25) – and one for the part on the right. Of course, for each portion, there is only one degree of freedom, since we have a parabolic function (three coefficients) but two constraints (continuity, and continuity of the first order derivative).

On a graph, we get the following

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() quadratic splines

Of course, we can do the same with our R function. But as before, the basis of functions is expressed differently here

x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",xlab="INSYS",col=clr6)


If we run R code, we get

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=2),data=myocarde,
family=binomial)
summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)       7.186      5.261   1.366   0.1720  
bs(INSYS, ..)1  -14.656      7.923  -1.850   0.0643 .
bs(INSYS, ..)2   -5.692      4.638  -1.227   0.2198  
bs(INSYS, ..)3   -2.454      8.780  -0.279   0.7799  
bs(INSYS, ..)4    6.429     41.675   0.154   0.8774

But that’s not really a big deal since the prediction is exactly the same

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Cubic splines

Last, but not least, we can reach the cubic splines. With our previous notions, we would consider a decomposition on (guess what) x,x^2,x^{\color{red}{3}},(x-s_1)^{\color{red}{3}}_+,(x-s_2)^{\color{red}{3}}_+, to get, this time, continuity, as well as continuity of the first two derivatives (and thus a very smooth function, since even variations will be smooth). If we use the bs function, the basis is the following

B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=3)
matplot(x,B,type="l",lwd=2,col=clr6,lty=1,ylim=c(-.2,1.2))
abline(v=c(5,15,25,55),lty=2)

and the prediction will now be

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=3),
data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Two last things before concluding (for today), the location of the knots, and the extension to additive models.

Location of knots

In many applications, we do not want to specify the location of the knots. We just want – say – three (intermediary) knots. This can be done using

reg = glm(PRONO~1+bs(INSYS,degree=1,df=4),data=myocarde,family=binomial)

We can actually get the locations of the knots by looking at

attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 1L, knots = c(15.8, 21.4, 27.15), 
Boundary.knots = c(8.7, 54), intercept = FALSE)

which provides us with the location of the boundary knots (the minimum and the maximum of our sample), but also the three intermediary knots. Observe that, actually, those five values are just (empirical) quantiles

quantile(myocarde$INSYS,(0:4)/4)
   0%   25%   50%   75%  100% 
 8.70 15.80 21.40 27.15 54.00

If we plot the prediction, we get

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


If we get back to what was computed before the logit transformation, we clearly see ruptures at the different quantiles

B = bs(x,degree=1,df=4)
B = cbind(1,B)
y = B%*%coefficients(reg)
plot(x,y,type="l",col="red",lwd=2)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


Note that if we do not specify anything about the knots (number or location), we get no knots…

reg = glm(PRONO~1+bs(INSYS,degree=2),data=myocarde,family=binomial)
attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 2L, knots = numeric(0), 
Boundary.knots = c(8.7,54), intercept = FALSE)

and if we look at the prediction

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)


actually, it is the same as a quadratic regression (as expected actually)

reg = glm(PRONO~1+poly(INSYS,degree=2),data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)

Additive models

Consider now the second dataset, with two variables. Consider here a model like
\mathbb{P}[Y|X_1=x_1,X_2=x_2]=\frac{\exp[\eta(x_1,x_2)]}{1+\exp[\eta(x_1,x_2)]}
where
\eta(x_1,x_2)=\beta_0+\color{red}{s_1(x_1)}+\color{blue}{s_2(x_2)}
\color{red}{s_1(x_1)}=\beta_{1,0}x_1+\beta_{1,1}(x_1-s_{11})_++\beta_{1,2}(x_1-s_{12})_+
and
\color{blue}{s_2(x_2)}=\beta_{2,0}x_2+\beta_{2,1}(x_2-s_{21})_++\beta_{2,2}(x_2-s_{22})_+
It might seem a little bit restrictive, but that’s actually the idea of additive models.

reg = glm(y~bs(x1,degree=1,df=3)+bs(x2,degree=1,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Now, if we think about it, we’ve been able to get a “perfect” model, so, somehow, it no longer seems to be continuous…

persp(u,u,v,theta=20,phi=40,col="green")


Of course, it is… it is piecewise linear, made of hyperplanes, some being almost vertical.

And one can also consider piecewise quadratic functions

reg = glm(y~bs(x1,degree=2,df=3)+bs(x2,degree=2,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Funny thing, we now have two “perfect” models, with different areas for the white and the black dots… Don’t ask me how to choose on that one.

In R, it is possible to use the mgcv package to run a gam regression. It is used for generalized additive models, but here, we have only one variable, so it is difficult to see the “additive” part, actually. And to be more specific, mgcv is using penalized quasi-likelihood from the nlme package (but we’ll get back on penalized routines later on).

But maybe I should also mention another smoothing tool before, kernels (and maybe also k-nearest neighbors). To be continued

Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that, given covariates \mathbf{x}, Y has a Bernoulli distribution, Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]} The goal is to estimate the parameter \mathbf{\beta}.

Recall that the heuristics for the use of that function for the probability is that\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}

Maximum of the (log-)likelihood function

The log-likelihood is here\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i) where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
 -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
 }

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~.,data=myocarde)$coefficients

Why not start with the OLS estimates? Somehow, we might think that, at least, the signs should be right, for instance. Anyway, we need a starting point, so let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

 logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS    
 1.656926397  0.045234029 -2.119441743  0.204023835 
       PRDIA        PAPUL        PVENT        REPUL 
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)

simu = function(i){
logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, 
negLogLik, hessian=TRUE, method = "BFGS", 
control=list(abstol=1e-9))
logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])

Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) 
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) + 
(1-vY)*(-log(1 + exp(mX %*% vBeta)))) 
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable, 
method = 'L-BFGS-B', gr = likelihoodScore, 
mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let’s be honest here, I do not feel comfortable with those techniques. So, what happened here?

Here, the technique we use is based on the following idea,\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}The problem is that my computer does not know this first and second derivatives. So it will compute them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
 -sum(y*log(logit(X%*%beta)) + 
(1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y)) 
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren’t. From my point of view, that is a major problem (keep in mind that we are not dealing with massive data here! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical expressions of those derivatives? Even if some people claim the opposite, it can sometimes be useful to do the maths, instead of relying only on numerical approximations.

Newton (or Fisher) Algorithm

If you open any Econometrics textbooks (one can also try to derive it), you will get \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
while\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}where \mathbf{\Delta}_{old} is the diagonal matrix with entries p_{old,i}(1-p_{old,i}).

Y=myocarde$PRONO
X=cbind(1,as.matrix(myocarde[,1:7]))
colnames(X)=c("Inter",names(myocarde[,1:7]))
 beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}

Observe that here, I use only nine iterations of the algorithm!

 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point

beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

Nice, isn’t it? Looks like we have a winner here. And one can use the inverse of the Hessian matrix to get standard errors for the estimators.
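
Since minus that Hessian is the observed Fisher information, here is a small sketch (reusing the Hessian and beta objects computed in the Newton loop above) of how one could extract those standard errors,

Vbeta = solve(-Hessian)                  # estimated variance matrix of the estimator
cbind(estimate = beta[,10], std.error = sqrt(diag(Vbeta)))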

Weighted Least-Squares

Let us go one step further. We’ve seen that we want to compute something like\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}(if we do substitute matrices in the analytical expressions) where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that’s simply a standard weighted least-squares problem\mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbraceThe only problem here is that the weights \mathbf{\Delta}_{old} are functions of the unknown \mathbf{\beta}_{old}. But actually, if we keep iterating, we should be able to solve it: given \mathbf{\beta} we get the weights, and given the weights, we can use weighted OLS to get an updated \mathbf{\beta}. That’s the idea of iteratively reweighted least squares.

The algorithm will be

df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
p = exp(X %*% beta) / (1+exp(X %*% beta))
omega = diag(nrow(df))
diag(omega) = (p*(1-p))
df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)
beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients
}

and the output is here

 beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA 
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171 
        PAPUL         PVENT         REPUL 
  0.016756506  -0.106776012  -0.003154187

which is almost what we obtained before. Nice, isn’t it? Actually, here we also get standard errors for the estimators

summary( lm(Z~.,data=df[,-8], weights=diag(omega)))
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475
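
Note that those standard errors are not exactly the ones of the logistic model: lm() scales the variance matrix by an estimated residual variance, while the logistic likelihood fixes the dispersion at one. A small sketch, dividing by that estimate, should give values very close to the glm output below,

fit = lm(Z~., data=df[,-8], weights=diag(omega))
summary(fit)$coefficients[,2] / summary(fit)$sigma   # remove the dispersion factor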

The standard glm function

Of course, it is possible to use an R built-in function to get our estimate

summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519

Application and visualisation

Let us visualize the prediction obtained from the logistic regression, on our second dataset

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Here the level curves – or iso-probability curves – are linear, so the space is divided in two (0 and 1, survival and death, white and black) by a straight line (or a hyperplane in higher dimension). Furthermore, since we have a linear model, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane), parallel to the first one.
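
To visualize that claim, a quick sketch: just add a couple of other level curves on the previous graph (reusing u and v computed above), and they should appear as parallel straight lines,

contour(u,u,v,levels = c(.25,.75),add=TRUE,lty=2)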

Next time, we will introduce splines to smooth those continuous covariates… to be continued.

Classification from scratch, overview 0/8

Before my course on « big data and economics » at the university of Barcelona in July, I wanted to upload a series of posts on classification techniques, to get an insight on machine learning tools.

According to some common idea, machine learning algorithms are black boxes. I wanted to get back on that saying. First of all, isn’t it the case also for regression models, like generalized additive models (with splines) ? Do you really know what the algorithm is doing ? Even the logistic regression. In textbooks, we can easily find math formulas. But what is really done when I run it, in R ?

When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let’s try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R. What I plan to mention, within the next two weeks, will be

I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])

Here is some code to get a visualization of the prediction (here, the probability of being a black point)

rmatrix_model = function(model){
u = seq(0,1,length=101)
p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
return(v)}
nice_graph=function(v){
u = seq(0,1,length=101)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))

Note that colors are defined here as

clr10= c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b")

or with some nonlinear model
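
For instance (just a sketch, not necessarily the model behind the original figure), a logistic regression with quadratic terms already gives nonlinear level curves, and can be visualized with the helpers defined above,

reg_nl = glm(y~poly(x1,2)+poly(x2,2),data=df,family=binomial)
nice_graph(rmatrix_model(reg_nl))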

The second one is a dataset I got from Gilbert Saporta, about heart attacks and death (our binary variable).

myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))

So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), or about comparing models, etc. The goal here is simply to understand what’s going on when we call either glm, glmnet, gam, random forest, svm, xgboost, or any function to get a predictive model.

Some sort of Otto Neurath (isotype picture) map

Yesterday evening, I was walking in Budapest, and I saw some nice map that was some sort of Otto Neurath style. It was hand-made but I thought it should be possible to do it in R, automatically.

A few years ago, Baptiste Coulmont published a nice blog post on the package osmar, that can be used to import OpenStreetMap objects (polygons, lines, etc) in R. We can start from there. More precisely, consider the city of Douai, in France,

The code to read information from OpenStreetMap is the following

library(osmar)
src <- osmsource_api()
bb <- center_bbox(3.07758808135,50.37404355, 1000, 1000)
ua <- get_osm(bb, source = src)

We can extract a lot of things, like buildings, parks, churches, roads, etc. There are two kinds of objects so we will use two functions

listek = function(vc,type="polygons"){
nat_ids <- find(ua, way(tags(k %in% vc)))
nat_ids <- find_down(ua, way(nat_ids))
nat <- subset(ua, ids = nat_ids)
nat_poly <- as_sp(nat, type)}
 
listev = function(vc,type="polygons"){
  nat_ids <- find(ua, way(tags(v %in% vc)))
  nat_ids <- find_down(ua, way(nat_ids))
  nat <- subset(ua, ids = nat_ids)
  nat_poly <- as_sp(nat, type)}

For instance to get rivers, use

W=listek(c("waterway"))

and to get buildings

M=listek(c("building"))

We can also get churches

C=listev(c("church","chapel"))

but also train stations, airports, universities, hospitals, etc. It is also possible to get streets, or roads

H1=listek(c("highway"),"lines")
H2=listev(c("residential","pedestrian","secondary","tertiary"),"lines")

but it will be more difficult to use afterwards, so let’s forget about those.

We can check that we have everything we need (the other layers used below, such as parks P or universities U, are obtained with the same two functions)

plot(M)
plot(W,add=TRUE,col="blue")
plot(P,add=TRUE,col="green")
if(!is.null(B)) plot(B,add=TRUE,col="red")
if(!is.null(C)) plot(C,add=TRUE,col="purple")
if(!is.null(T)) plot(T,add=TRUE,col="red")

Now, let us consider a rectangular grid. If there is a river in a cell, I want a river. If there is a church, I want a church, etc. Since there will be one (and only one) picture per cell, there will be priorities. But first we have to check intersections with polygons, between our grid, and the OpenStreetMap polygons.

library(sp)
library(raster)
library(rgdal)
library(rgeos)
library(maptools)
identification = function(xy,h,PLG){
  b=data.frame(x=rep(c(xy[1]-h,xy[1]+h),each=2),
               y=c(c(xy[2]-h,xy[2]+h,xy[2]+h,xy[2]-h)))
  pb1=Polygon(b)    
  Pb1 = list(Polygons(list(pb1), ID=1))
  SPb1 = SpatialPolygons(Pb1, proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs +towgs84=0,0,0"))
  UC=gUnionCascaded(PLG)
  return(gIntersection(SPb1,UC))
}

and then, we identify, as follows

whichidtf = function(xy,h){
  h=.7*h
  label="EMPTY"
if(!is.null(identification(xy,h,M))) label="HOUSE"
if(!is.null(identification(xy,h,P))) label="PARK"
if(!is.null(identification(xy,h,W))) label="WATER"
if(!is.null(identification(xy,h,U))) label="UNIVERSITY"
if(!is.null(identification(xy,h,C))) label="CHURCH"
return(label)
}
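
Note that the grid itself (the breaks vx and vy, and the half-width h) does not appear in the code above. A minimal sketch, assuming we simply split the bounding box of the buildings M into 30×30 cells, could be

bbx = bbox(M)
vx  = seq(bbx[1,1], bbx[1,2], length=31)
vy  = seq(bbx[2,1], bbx[2,2], length=31)
h   = diff(vx)[1]/2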

Let us use colored rectangles to make sure it works

nx=length(vx)
vx=as.numeric((vx[2:nx]+vx[1:(nx-1)])/2)
ny=length(vy)
vy=as.numeric((vy[2:ny]+vy[1:(ny-1)])/2)
 plot(M,border="white")
 for(i in 1:(nx-1)){
     for(j in 1:(ny-1)){
         lb=whichidtf(c(vx[i],vy[j]),h)
         if(lb=="HOUSE") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="grey")
         if(lb=="PARK") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="green")
         if(lb=="WATER") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="blue")
         if(lb=="CHURCH") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="purple")      
     }}

As a first start, let us agree that it works. To use pictograms, I borrowed them from https://fontawesome.com/. For instance, we can have a tree

 library(png)
 library(grid)
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/tree.png","tree.png")
 tree <- readPNG("tree.png")

Unfortunately, the color is not good (it is black), but that’s easy to fix using the RGBA channels returned by the png package

 rev_tree=tree
 rev_tree[,,2]=tree[,,4]

We can do the same for houses, churches and water actually

 download.file("http://freakonometrics.hypotheses.org/files/2018/05/angle-double-up.png","angle-double-up.png")
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/home.png","home.png")
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/church.png","curch.png")
water <- readPNG("angle-double-up.png")
 rev_water=water
 rev_water[,,3]=water[,,4]
 home <- readPNG("home.png")
 rev_home=home
 rev_home[,,4]=home[,,4]*.5
 church <- readPNG("church.png")
 rev_church=church
 rev_church[,,1]=church[,,4]*.5
 rev_church[,,3]=church[,,4]*.5

and that’s almost it. We can then add those icons to the map

 plot(M,border="white")
 for(i in 1:(nx-1)){
   for(j in 1:(ny-1)){
     lb=whichidtf(c(vx[i],vy[j]),h)
     if(lb=="HOUSE")  rasterImage(rev_home,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="PARK") rasterImage(rev_tree,vx[i]-h*.9,vy[j]-h*.8,vx[i]+h*.9,vy[j]+h*.8)
     if(lb=="WATER") rasterImage(rev_water,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="CHURCH") rasterImage(rev_church,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)     
   }}

Nice, isn’t it? (at least as a first draft, done during the lunch break of the R conference in Budapest, today).


Graduate Course on Advanced Tools for Econometrics (2)

This Tuesday, I will be giving the second part of the (crash) graduate course on advanced tools for econometrics. It will take place in Rennes, in the IMAPP room, and I have been told that there will be a video link with Nantes and Angers. Slides for the morning are online, as well as slides for the afternoon.

In the morning, we will talk about variable selection and penalization, and in the afternoon, it will be about changing the loss function (quantile regression).

When “learning Python” becomes “practicing R” (spoiler)

15 years ago, a student of mine told me that I should start learning Python, that it was really a great language. Students started to learn it, but I kept postponing. A few years ago, I also started Python for Kids, which is really nice actually, with my son. That was nice, but not really challenging. A few weeks ago, I also started a crash course in Python, taught by Pierre. The truth is I think I will probably give up. I keep telling myself that (1) I can do anything much faster in R, and (2) Python is not intuitive, especially when you’ve been practicing R for almost 20 years… Last week, I also had to link Python and R for our pricing game: Ali wrote some template codes in Python, and I had to translate them into R. And it was difficult…

Anyway, since it was a school break this week, I said to my son that we should try to practice together, with a nice challenge. For those willing to try it, you’d better stop here, because I will spoil it.


Using convolutions (S3) vs distributions (S4)

Usually, to illustrate the difference between S3 and S4 classes in R, I mention glm (from base) and vglm (from VGAM) that provide similar outputs, but one is based on S3 codes, while the second one is based on S4 codes. Another way to illustrate is to manipulate distributions.

Consider the case where we want to sum (independent) random variables, for instance two lognormal distributions. Let us try to compute the median of the sum.

The distribution function of the sum of two independent (positive) random variables is F_{S_2}(x)=\int_0^x F_{X_1}(x-y)dF_{X_2}(y)

pSum2 = function(x) integrate(function(y) 
plnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value

Let us visualize that cumulative distribution function

vx=seq(0.1,50,by=.1)
vy=Vectorize(pSum2)(vx)
plot(vx,vy,type="l",ylim=c(0,1))
abline(h=.5,lty=2)

Let us find an upper bound to compute (in a decent time) quantiles

pSum2(350)
[1] 0.99195

and then use the uniroot function to inverse that function

qSum = function(u) uniroot(function(x) 
Vectorize(pSum2)(x)-u, interval=c(0,350))$root
vu=seq(.01,.99,by=.01)
vv=Vectorize(qSum)(vu)

The median is here

qSum(.5)
[1] 14.155

Why not consider the sum of three (independent) random variables? Its cumulative distribution function can be written using our previous function, F_{S_3}(x)=\int_0^x F_{S_2}(x-y)dF_{X_3}(y)

pSum3 = function(x) integrate(function(y) 
pSum2(x-y)*dlnorm(y,2,2),0,x)$value

If we look at some values, we get

pSum3(4)
[1] 0.015624
pSum3(5)
Error in integrate(function(y) plnorm(x - y, 1, 2) * 
dlnorm(y, 2, 1),  : 
  maximum number of subdivisions reached

So obviously, there are computational issues here.

Let us consider the following alternative expression, F_{S_3}(x)=\int_0^x F_{X_3}(x-y)dF_{S_2}(y). Of course, it is necessary here to compute the density of the sum of two variables

dSum2 = function(x) integrate(function(y) 
dlnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value
pSum3 = function(x) integrate(function(y) 
dlnorm(x-y,2,2)*dSum2(y),0,x)$value

Again, let us compute some values

pSum3(4)
[1] 0.0090285
pSum3(5)
[1] 0.01186

This one seems to work quite well. But it is just an illusion.

pSum3(9)
Error in integrate(function(y) dlnorm(x - y, 1, 2) *
 dlnorm(y, 2, 1),  : 
  maximum number of subdivisions reached

Clearly, with those S3-type functions, it will be complicated to run computations with 3 variables, or more.

Let us consider distributions in the S4-type format of the following package

library(distr)
X1 = Lnorm(mean=1,sd=2)
X2 = Lnorm(mean=2,sd=1)
S2 = X1+X2

To compute the median, we simply have to use

distr::q(S2)(.5)
[1] 14.719
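
As a quick sanity check (just a sketch), we can also compare the cdf of that S4 object with our S3 convolution-based function from above; values should be roughly similar, since the distr package relies on numerical approximations for the convolution,

distr::p(S2)(20)
pSum2(20)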

We can also visualize it easily

plot(q(S2))

which looks (very) close to what we got, manually.  But here, it is also possible to work with the sum of 3 (independent) random variables

X3 = Lnorm(mean=2,sd=2)
S3 = X1+X2+X3

To compute the median, use

distr::q(S3)(.5)
[1] 33.208

The function is here

plot(q(S3))

(Advanced) R Crash Course, for Actuaries

The fourth year of the Data Science for Actuaries program started this morning. I will be there for the introduction to R. The slides are available online (created with slidify, the .Rmd file is also available)

A (standard) markdown version is also available (as well as the .Rmd file). I have to thank Ewen for his help with slidify (especially for the online quiz, and the integration of leaflet maps or the rgl animated graphs…)

Visualizing effects of a categorical explanatory variable in a regression

Recently, I’ve been working on two problems that might be related to semiotic issues in predictive modeling (i.e. instead of a standard regression table, how can we plot coefficient values in a regression model). To be more specific, I have a variable of interest Y that is observed for several individuals i, with explanatory variables \mathbf{x}_i, year t, in a specific region z_i\in\{A,B,C,D,E\}. Suppose that we have a simple (standard) linear model (forget about time here) y_i=\beta_0+\beta_1x_{1,i}+\cdots+\beta_kx_{k,i}+\sum_j \alpha_j \mathbf{1}(z_i\in j)+\varepsilon_i

Let us forget the temporal effect to focus on the spatial effect today. And consider some simulated dataset. There will be only one (continuous) explanatory variable. And I will generate correlated covariates, just to be more realistic.

n=1000
library(mnormt)
r=.5
Sigma=matrix(c(1,r,r,1), 2, 2)
set.seed(1)
X=rmnorm(n,c(0,0),Sigma)
X1=cut(X[,1],c(-100,quantile(X[,1],c(.1,.4,.7,.85)),
100),labels=LETTERS[1:5])
X2=X[,2]
Y=5+X[,1]-X[,2]+rnorm(n)/2
db=data.frame(Y,X1,X2)

Here we have y_i=\beta_0+\beta_1x_{1,i}+\sum_{j\in\{A,B,C,D,E\}} \alpha_j \mathbf{1}(z_i\in j)+\varepsilon_i The goal here is to get a graph to visualize the vector \hat\alpha=(\hat\alpha_A,\cdots,\hat\alpha_E). Let us run the linear regression

reg1=lm(Y~X1+X2,data=db)
idx=which(substr(names(reg1$coefficients), 1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[2:5]
barplot(v1,col=rgb(0,0,1,.4))

Note that it is possible to add some sort of “confidence interval” to discuss significance (or to avoid spending hours discussing differences in bar heights that are not significantly different)

library(Hmisc)
sv1=summary(reg1)$coefficients[idx,2]
(bp1=barplot(v1,ylim=range(c(0,v1+2*sv1))))
errbar(bp1[,1],v1,v1-2*sv1,v1+2*sv1,add=TRUE)

My main concern here is the “reference” that is considered. Should A be the reference? Why not B?

db$X1=relevel(db$X1,"B")
reg1=lm(Y~X1+X2,data=db)
idx=which(substr(names(reg1$coefficients),1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[c(1,3:5)]
library(Hmisc)
sv1=summary(reg1)$coefficients[idx,2]
(bp1=barplot(v1,ylim=range(c(0,v1+2*sv1))))
errbar(bp1[,1],v1,v1-2*sv1,v1+2*sv1,add=TRUE)

Why not the smallest one? Why not the largest one?… What if there is no simple way to choose? Furthermore, let us get back to the original point, which is that there might be some temporal aspects. More precisely, we can have \hat\alpha^{(t)}=(\hat\alpha_A^{(t)},\cdots,\hat\alpha_E^{(t)}). If we also have \hat\alpha^{(t+1)} and we get another plot, how do we interpret it? If for E the bar is taller, it means that, relative to A, the difference has increased. I have the feeling that the interpretation is more complicated because we do not see, on that graph, changes in \hat\alpha^{(t)}_A.

Let us try something else. First, let us get back to the original setting

db$X1=relevel(db$X1,"A")

Consider here the regression without the intercept, so that all five coefficients remain

reg1=lm(Y~0+X1+X2,data=db)
idx=which(substr(names(reg1$coefficients),1,2)=="X1")
v1=reg1$coefficients[idx]
names(v1)=LETTERS[1:5]
barplot(v1)

It can be hard to read, especially if Y takes (very) large values, and you think that barplots should start at 0. But still, having those 5 values is nice. Why not rescale that graph?

A natural idea may be to consider the case where no spatial component is included, and to look at the difference with that reference.

reg1=lm(Y~1+X2,data=db)
reg2=lm(Y~0+X1+X2,data=db)
idx=which(substr(names(reg2$coefficients),1,2)=="X1")
v1=reg2$coefficients[idx]
v2=v1-reg1$coefficients["(Intercept)"]
barplot(v2,col=rgb(0,0,1,.4))
sv2=summary(reg2)$coefficients[idx,2]
(bp2=barplot(v2,ylim=range(c(v2-2*sv2,v2+2*sv2))))
errbar(bp2[,1],v2,v2-2*sv2,v2+2*sv2,add=TRUE)

I like that graph, I should admit it. Now, I still have some remaining questions. For instance, can we ensure that, when only the intercept is considered, the value of \hat\beta_0 is somewhere between \hat\beta_A,\cdots,\hat\beta_E? Is it possible that \hat\beta_A-\hat\beta_0,\cdots,\hat\beta_E-\hat\beta_0 are all positive? In that case, I would find it hard to interpret.

Actually, if I really want values that can be compared to some average, why not consider a (weighted) average of \hat\beta_A,\cdots,\hat\beta_E? (weights being here the proportions of observations in each region)

w=table(db$X1)
v3=v1-sum(w*v1)/sum(w)
sv3=sv2   # reuse the standard errors from reg2 (treating the weighted average as fixed)
(bp3=barplot(v3,ylim=range(c(v3-2*sv3,v3+2*sv3))))
errbar(bp3[,1],v3,v3-2*sv3,v3+2*sv3,add=TRUE)

I like that one. But what if, instead of normalizing at the end, we normalize the original dependent variable. By “normalize”, I mean “rescale”, to have a centered variable.

db$Y0=db$Y-mean(db$Y)
reg3=lm(Y0~0+X1+X2,data=db)
v3=reg3$coefficients[idx]
sv3=summary(reg3)$coefficients[idx,2]
(bp3=barplot(v3,ylim=range(c(v3-2*sv3,v3+2*sv3))))
errbar(bp3[,1],v3,v3-2*sv3,v3+2*sv3,add=TRUE)

This one is nice, because it is extremely simple to explain. But what if, instead of a linear regression, we use a logistic one (with Y\in\{0,1\})? Or a Poisson regression…

So maybe it cannot be the best solution here. Let us try something else… In insurance ratemaking, people like to use a “zonier“. It is a two-stage regression. The idea is to first run a regression without any spatial component, and then to regress the residuals on the spatial variable. Here, it would be something like

reg1=lm(Y~1+X2,data=db)
db$E=residuals(reg1)
reg4=lm(E~0+X1,data=db)

Since we focus on residuals, those are centered, and we have an easy interpretation of respective values

idx=which(substr(names(reg4$coefficients),1,2)=="X1")
v4=reg4$coefficients[idx]
sv4=summary(reg4)$coefficients[idx,2]
(bp4=barplot(v4,names.arg=LETTERS[1:5]))
errbar(bp4[,1],v4,v4-2*sv4,v4+2*sv4,add=TRUE)

I guess that it can also be used in generalized linear models, with Pearson (or deviance) residuals.
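
For instance, here is a small sketch of the same two-stage idea with a logistic first stage and deviance residuals (Yb is a hypothetical binary outcome, created only for illustration),

db$Yb = (db$Y > median(db$Y))*1               # hypothetical binary response
reg_bin = glm(Yb~X2, data=db, family=binomial)
db$E = residuals(reg_bin, type="deviance")    # deviance residuals
reg_zone = lm(E~0+X1, data=db)
coef(reg_zone)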

Another possible idea can be the following. Again, the goal is not to have the true values, but to visualize on a graph how regions can be different. Here, all of them are significantly different. And in region A, Y is smaller, ceteris paribus (other things equal in the sense that we have taken into account x_1). And in region E it is larger. Here, the graph helps to “see” those differences.

Why not consider a completely different graph? What if we plot the vector a instead of \alpha, where a_A can be interpreted as the value of the coefficient if we consider region A against “not region A“? What if we consider 5 regressions where dichotomous versions of Z are considered, Z_j=\mathbf{1}_{Z=j}?

v5=sv5=rep(NA,5)
names(v5)=LETTERS[1:5]
for(k in 1:5){
reg=lm(Y~I(X1==LETTERS[k])+X2,data=db)
v5[k]=reg$coefficients[2]
sv5[k]=summary(reg)$coefficients[2,2]}

We can plot that sequence of values, including some confidence intervals (that would be related to significance with respect to all other regions)

(bp5=barplot(v5,ylim=range(c(v5-2*sv5,v5+2*sv5))))
errbar(bp5[,1],v5,v5-2*sv5,v5+2*sv5,add=TRUE)

Looking at the values does not give intuitive results, but I have the feeling that it is easy to explain what we plot (we compare each region to “the rest of the world”), and the ordering of a seems to be consistent with that of \alpha (but I could not prove it).

Here are some ideas I got. I should be able to provide other graphs, but I would love to discuss with anyone interested in that topic, to find a proper and nice way to visualize the effects of a categorical explanatory variable in a regression model (that can be a logistic one). Comments are open…

Holt-Winters with a Quantile Loss Function

Exponential Smoothing is an old technique, but it can perform extremely well on real time series, as discussed in Hyndman, Koehler, Ord & Snyder (2008),

when Gardner (2005) appeared, many believed that exponential smoothing should be disregarded because it was either a special case of ARIMA modeling or an ad hoc procedure with no statistical rationale. As McKenzie (1985) observed, this opinion was expressed in numerous references to my paper. Since 1985, the special case argument has been turned on its head, and today we know that exponential smoothing methods are optimal for a very general class of state-space models that is in fact broader than the ARIMA class.

Furthermore, I like it because I think it has nice pedagogical features. Consider simple exponential smoothing, L_{t}=\alpha Y_{t}+(1-\alpha)L_{t-1} where \alpha\in(0,1) is the smoothing weight. It is locally constant, in the sense that {}_{t}\hat Y_{t+h} = L_{t}

 library(datasets)
 X=as.numeric(Nile)
 SimpleSmooth = function(a){
  T=length(X)
  L=rep(NA,T)
  L[1]=X[1]
  for(t in 2:T){L[t]=a*X[t]+(1-a)*L[t-1]}
  return(L)
 }
 plot(X,type="b",cex=.6)
 lines(SimpleSmooth(.2),col="red")

When using the standard R function, we get

hw=HoltWinters(X,beta=FALSE,gamma=FALSE, l.start=X[1])
hw$alpha
[1] 0.2465579

Of course, one can replicate that optimal value

V=function(a){
     T=length(X)
     L=erreur=rep(NA,T)
     erreur[1]=0
     L[1]=X[1]
     for(t in 2:T){
         L[t]=a*X[t]+(1-a)*L[t-1]
         erreur[t]=X[t]-L[t-1] }
     return(sum(erreur^2))
}
optim(.5,V)$par
[1] 0.2464844

Here, the optimal value for \alpha is the one that minimizes the one-step prediction, for the \ell_2 loss function, i.e. \sum_{t=2}^n(Y_t-{}_{t-1}\hat Y_t)^2 where here {}_{t-1}\hat Y_t = L_{t-1}. But one can consider another loss function, for instance the quantile loss function, \ell_{\tau}(\varepsilon)=\varepsilon(\tau-\mathbb{I}_{\varepsilon\leq 0}). The optimal coefficient is then obtained using

HWtau=function(tau){
loss=function(e) e*(tau-(e<=0)*1)
 V=function(a){
  T=length(X)
  L=erreur=rep(NA,T)
  erreur[1]=0
  L[1]=X[1]
  for(t in 2:T){
  L[t]=a*X[t]+(1-a)*L[t-1]
  erreur[t]=X[t]-L[t-1] }
 return(sum(loss(erreur)))
 }
 optim(.5,V)$par
}

Here is the evolution of \alpha^\star_\tau as a function of \tau (the level of the quantile considered).

T=(1:49)/50
HW=Vectorize(HWtau)(T)
plot(T,HW,type="l")
abline(h= hw$alpha,lty=2,col="red")

Note that the optimal \alpha is decreasing with \tau. I wonder how general this result can be…
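
One quick way to probe that (just a sketch) is to rerun exactly the same code on another series from the datasets package, say lh, since V() and HWtau() use the global X (restored afterwards),

X = as.numeric(lh)                 # another series, just to see if the pattern persists
plot(T, Vectorize(HWtau)(T), type="l")
X = as.numeric(Nile)               # restore the series used in the rest of the post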

Of course, one can consider more general exponential smoothing, for instance the double one, with L_t=\alpha Y_t+(1-\alpha)[L_{t-1}+B_{t-1}]andB_t=\beta[L_t-L_{t-1}]+(1-\beta)B_{t-1}so that the prediction is now {}_{t}\hat Y_{t+h} = L_{t}+hB_t (it is now locally linear – and no longer constant).

hw=HoltWinters(X,gamma=FALSE,l.start=X[1])
hw$alpha
    alpha 
0.4200241 
hw$beta
      beta 
0.05973389

The code to compute the smoothed series is the following

DoubleSmooth = function(a,b){
  T=length(X)
  L=B=rep(NA,T)
  L[1]=X[1]; B[1]=0
  for(t in 2:T){
  L[t]=a*X[t]+(1-a)*(L[t-1]+B[t-1])
  B[t]=b*(L[t]-L[t-1])+(1-b)*B[t-1] }
 return(L+B)
 }

Here also it is possible to replicate R using the \ell_2 loss function

V=function(A){
     a=A[1]
     b=A[2]
     T=length(X)
     L=B=erreur=rep(NA,T)
     erreur[1]=0
     L[1]=X[1]; B[1]=X[2]-X[1]
     for(t in 2:T){
         L[t]=a*X[t]+(1-a)*(L[t-1]+B[t-1])
         B[t]=b*(L[t]-L[t-1])+(1-b)*B[t-1] 
         erreur[t]=X[t]-(L[t-1]+B[t-1]) }
     return(sum(erreur^2))
}
optim(c(.5,.05),V)$par
[1] 0.41904510 0.05988304

(up to numerical optimization approximation, I guess). But here also, a quantile loss function can be considered

HWtau=function(tau){
loss=function(e) e*(tau-(e<=0)*1)
 V=function(A){
  a=A[1]
  b=A[2]
  T=length(X)
  L=B=erreur=rep(NA,T)
  erreur[1]=0
  L[1]=X[1]; B[1]=X[2]-X[1]
  for(t in 2:T){
   L[t]=a*X[t]+(1-a)*(L[t-1]+B[t-1])
   B[t]=b*(L[t]-L[t-1])+(1-b)*B[t-1] 
   erreur[t]=X[t]-(L[t-1]+B[t-1]) }
  return(sum(loss(erreur)))
  }
     optim(c(.5,.05),V)$par
}

and we can plot those values on a graph

T=(1:49)/50
HW=Vectorize(HWtau)(T)
plot(HW[1,],HW[2,],type="l")
abline(v= hw$alpha,lwd=.4,lty=2,col="red")
abline(h= hw$beta,lwd=.4,lty=2,col="red")
points(hw$alpha,hw$beta,pch=19,col="red")

(with \alpha on the x-axis, and \beta on the y-axis). So here, it is extremely simple to change the loss function, but so far, it has to be done manually. Of course, one could also do it for the seasonal exponential smoothing model.
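
For the record, here is a sketch of the additive seasonal recursions (with period p), into which the same quantile-loss idea could be plugged; this is only an illustration (the Nile series used above is annual, so there is no real seasonality to capture),

TripleSmooth = function(a,b,g,p){
  T=length(X)
  L=B=S=rep(NA,T)
  L[p]=mean(X[1:p]); B[p]=0; S[1:p]=X[1:p]-mean(X[1:p])
  for(t in (p+1):T){
   L[t]=a*(X[t]-S[t-p])+(1-a)*(L[t-1]+B[t-1])
   B[t]=b*(L[t]-L[t-1])+(1-b)*B[t-1]
   S[t]=g*(X[t]-L[t])+(1-g)*S[t-p]
  }
  return(L+B+S)
}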