Tag Archives: logistic

Discrimination by proxy (a real case study)

Yesterday, with Laurence Barry, we posted a blog post “Who benefits from data sharing?” explaining why data sharing, in insurance, could end mutualization. Actually, it can also be harmful in the context of discrimination. Consider here the same dataset, with claim occurrence, in a real insurance portfolio,

library(InsurFair)
library(randomForest)

Consider a version of this dataset without the gender, and use variable importance to get a list of variables we can use in a predictive model

subfrenchmotor = frenchmotor[,-which(names(frenchmotor)=="sensitive")]
RF = randomForest(y~. ,data=subfrenchmotor)
vi = varImpPlot(RF , sort = TRUE)

We sort variables based on variable importance (the first one is the “most important” one), and add splines for three continuous variables

dfvi = data.frame(nom = names(subfrenchmotor)[-15], g = as.numeric(vi))
dfvi = dfvi[rev(order(dfvi$g)),]
nom = dfvi$nom
nom[1] = "bs(LicAge)"
nom[3] = "bs(DrivAge)"
nom[7] = "bs(BonusMalus)"

Then, the idea is simple: at stage k, we keep the k most important variables, and run a logistic regression on those k variables. Again, I should stress that the gender of the driver is not among those k variables. Then, we compute the average predicted claim frequency, for men and for women.

n=nrow(subfrenchmotor)
library(splines)
idx_F = which(frenchmotor$sensitive == "Female")
idx_M = which(frenchmotor$sensitive == "Male")
metric_gender = function(k = 3){
  if(k == 0){
    reg = glm(y~1, family=binomial, data=subfrenchmotor)
  }
  if(k > 0){
    vr = paste(nom[1:k], collapse = " + ")
    fm = paste("y ~ ", vr, sep="")
    reg = glm(fm, family=binomial, data=subfrenchmotor)
  }
  yp = predict(reg, type="response")
  yp_F = yp[idx_F]
  yp_M = yp[idx_M]
  sortie = c(mean(yp_F), mean(yp_M), quantile(yp_F,c(.1,.9)), quantile(yp_M,c(.1,.9)))
  names(sortie)[1:2] = c("mean_F","mean_M")
  sortie}

Let us now compute it for every number of variables k, from 0 to 15

N = 0:15
M = Vectorize(metric_gender)(N)

and plot it

plot(N,M[1,]*100, xlab="Number of predictive variables (without gender)", ylab=
"Average predicted claims frequency (%)", type="b", pch=19, col=COLORS[2], ylim=c(8.12,9))
lines(N, M[2,]*100, type="b", pch=15, col=COLORS[3])

Interestingly, we can clearly see that with 15 explanatory variables, even if our model is gender-blind (since gender is not in the training dataset), it reproduces the difference we observe in the data: the annual claim frequency is almost 9% for men and 8.2% for women.

Actually, it is not really possible to predict the gender from our 15 variables (below is the ROC curve of the logistic regression used to predict the gender)

library(ROCR)
metric_gender_2 = function(k = 3){
  if(k == 0){
    reg = glm((sensitive=="Female")~1, family=binomial, data=frenchmotor)
  }
  if(k > 0){
    vr = paste(nom[1:k], collapse = " + ")
    fm_genre = paste('(sensitive=="Female") ~ ', vr, sep="")
    reg = glm(fm_genre, family=binomial, data=frenchmotor)
  }
  pred = prediction(predict(reg, type="response"), (frenchmotor$sensitive=="Female"))
  performance(pred, "tpr", "fpr")}
plot(metric_gender_2(15))

but still, when using 15 variables, we observe discrimination in our portfolio, since the average predictions for men and women are significantly different (even if our models are, per se, gender-blind).
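
To quantify that gap, here is a minimal sketch (reusing nom, subfrenchmotor, idx_F and idx_M defined above; the Welch test is just one possible choice, not part of the original analysis): refit the logistic regression with the 15 ranked variables and compare the two groups of predictions,

fm  = paste("y ~ ", paste(nom[1:15], collapse = " + "), sep="")
reg = glm(fm, family=binomial, data=subfrenchmotor)
yp  = predict(reg, type="response")
t.test(yp[idx_F], yp[idx_M])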

From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration

Our paper From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic and François Hu is now online on ArXiv,

The assessment of binary classifier performance traditionally centers on discriminative ability using metrics, such as accuracy. However, these metrics often disregard the model’s inherent uncertainty, especially when dealing with sensitive decision-making domains, such as finance or healthcare. Given that model-predicted scores are commonly seen as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario using Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.

Estimates on training vs. validation samples

Before moving to cross-validation, it was natural to say “I will burn 50% (say) of my data to train a model, and then use the remaining observations to fit (or validate) the model”. For instance, we can use the training data for variable selection (e.g. using some stepwise procedure in a logistic regression), and then, once variables have been selected, fit the model on the remaining set of observations. A natural question is usually “does it really matter?”.

In order to visualize this problem, consider my (simple) dataset

MYOCARDE=read.table(
  "http://freakonometrics.free.fr/saporta.csv",
  head=TRUE,sep=";")

Let us generate 100 training samples (where we keep about 50% of the observations). On each of them, we use a stepwise procedure, and we keep the estimates of the remaining variables (and their standard deviation actually)

n=nrow(MYOCARDE)
M=matrix(NA,100,ncol(MYOCARDE))
colnames(M)=c("(Intercept)",names(MYOCARDE)[1:7])
S1=S2=M1=M2=M
for(i in 1:100){
  idx = which(sample(0:1, size=n, replace=TRUE) == 1)
  reg = step(glm(PRONO=="DECES"~., family=binomial, data=MYOCARDE[idx,]))
  nm = names(reg$coefficients)
  M1[i,nm] = reg$coefficients
  S1[i,nm] = summary(reg)$coefficients[,2]
  f = paste("PRONO=='DECES'~", paste(nm[-1], collapse="+"), sep="")
  reg = glm(f, family=binomial, data=MYOCARDE[-idx,])
  M2[i,nm] = reg$coefficients
  S2[i,nm] = summary(reg)$coefficients[,2]
}

Then, for the 7 covariates (and the constant) we can look at the value of the coefficient in the model fitted on the training sample, and the value on the model fitted on the validation sample (of course, only when they were remaining)

for(j in 1:8){
idx=which(!is.na(M1[,j]))
plot(M1[idx,j],M2[idx,j])
abline(a=0,b=1,lty=2,col="gray")
segments(M1[idx,j]-2*S1[idx,j],M2[idx,j],M1[idx,j]+2*S1[idx,j],M2[idx,j])  
segments(M1[idx,j],M2[idx,j]-2*S2[idx,j],M1[idx,j],M2[idx,j]+2*S2[idx,j])  
}

For instance, with the intercept, we have the following

 

where horizontal segments are confidence intervals of the parameter on the model fitted on the training sample, the vertical on the validation sample. The green part means some sort of consistency, while the red one means that actually, the coefficient was negative with one model, positive with the other one. Which is odd (but in that case, observe that coefficients are rarely significant).

We can also visualize the joint distribution of the two estimators,

for(j in 1:8){
library(ks)
idx = which(!is.na(M1[,j]))
Z = cbind(M1[idx,j],M2[idx,j])
H = Hpi(x=Z)
fhat = kde(x=Z, H=H)
image(fhat$eval.points[[1]],
fhat$eval.points[[2]],fhat$estimate)
abline(a=0,b=1,lty=2,col="gray")
abline(v=0,lty=2)
abline(h=0,lty=2)
}

which are here, almost on the diagonal,

meaning that the intercept on the two samples is (more or less) the same. We can then look at other parameters (which is actually more interesting).

On that variable, it seems that it is significant on the training dataset (somehow, it is consistent with the fact that it is remaining in the model after the stepwise procedure) but not on the validation sample (or hardly significant).

Others are much more consistent (with some possible outliers)

 

 

On the next one, we have again significance on the training sample, but not on the validation sample,

 

 

and probably more interesting

where the two are very consistent.

Probabilistic Foundations of Econometrics, part 3

This post is the third one of our series on the history and foundations of econometric and machine learning models. Part 2 is online here.

Exponential family and linear models

The Gaussian linear model is a special case of a large family of linear models, obtained when the conditional distribution of Y (given the covariates) belongs to the exponential family f(y_i|\theta_i,\phi)=\exp\left(\frac{y_i\theta_i-b(\theta_i)}{a(\phi)}+c(y_i,\phi)\right) with \theta_i=\psi(\mathbf{x}_i^T \beta). Functions a, b and c are specified according to the type of exponential law (studied extensively in statistics since Darmois (1935), as Brown (1986) reminds us), and \psi is a one-to-one mapping that the user must specify. The log-likelihood then has a simple expression \log\mathcal{L}(\mathbf{\theta},\phi|\mathbf{y}) =\frac{\sum_{i=1}^ny_i\theta_i-\sum_{i=1}^nb(\theta_i)}{a(\phi)}+\sum_{i=1}^n c(y_i,\phi) and the first order condition is then written \frac{\partial \log \mathcal{L}(\mathbf{\theta},\phi|\mathbf{y})}{\partial \mathbf{\beta}} = \mathbf{X}^T\mathbf{W}^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0} based on Müller’s (2011) notations, where \mathbf{W} is a weight matrix (which depends on \beta). Given the link between \theta and the expectation of Y, instead of specifying the function \psi(\cdot) , we will tend to specify the link function g(\cdot) defined by \widehat{y}=m(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=g^{-1} (\mathbf{x}^T \beta) For the Gaussian linear regression we consider an identity link, while for the Poisson regression, the natural link (called canonical) is the logarithmic link. Here, as \mathbf{W} depends on \beta (with \mathbf{W}=\text{diag}(\nabla g(\widehat{\mathbf{y}})\text{Var}[\mathbf{y}])), there is generally no explicit formula for the maximum likelihood estimator. But an iterative algorithm makes it possible to obtain a numerical approximation. By setting \mathbf{z}=g(\widehat{\mathbf{y}})+(\mathbf{y}-\widehat{\mathbf{y}})\cdot\nabla g(\widehat{\mathbf{y}}) corresponding to the error term of a first-order Taylor expansion of g, we obtain an algorithm of the form\widehat{\beta}_{k+1}=[\mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{X}]^{-1} \mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{z}_kBy iterating, we will define \widehat{\beta}=\widehat{\beta}_{\infty}, and we can show that – with some additional technical assumptions (detailed in Müller (2011)) – this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\beta} -\beta)\overset{\mathcal{L}}{\rightarrow} \mathcal{N}(\mathbf{0},I(\beta)^{-1}) where numerically I(\beta)=\varphi\cdot[\mathbf{X}^T \mathbf{W}_\infty^{-1} \mathbf{X}] .

From a numerical point of view, the computer will solve the first-order condition, and actually, the law of Y does not really intervene. For example, one can estimate a “Poisson regression” even when observations are not integers (but they need to be positive). In other words, the law of Y is only an interpretation here, and the algorithm could be introduced in a different way (as we will see later on), without necessarily having an underlying probabilistic model.
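
As a small simulated illustration of that point (hypothetical data, just a sketch): glm() only solves the first-order condition, so a “Poisson regression” can be estimated on positive, non-integer responses, R simply issues warnings,

set.seed(1)
n = 500
x = rnorm(n)
y = rgamma(n, shape=2, rate=2/exp(1+.5*x))   # positive, non-integer, with mean exp(1+.5x)
reg = glm(y~x, family=poisson(link="log"))   # warnings about non-integer y, but it runs
coefficients(reg)                            # should be close to (1, .5)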

Logistic Regression

Logistic regression is the generalized linear model obtained with a Bernoulli distribution, and a link function which is the quantile function of a logistic law (which corresponds to the canonical link in the sense of the exponential family). Taking into account the form of the Bernoulli distribution, econometrics proposes a model for y_i\in\{0,1\}, in which the logarithm of the odds follows a linear model: \log\left(\frac{\mathbb{P}[Y=1\vert \mathbf{X}=\mathbf{x}]}{\mathbb{P}[Y\neq 1\vert \mathbf{X}=\mathbf{x}]}\right)=\beta_0+\mathbf{x}^T\beta or \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\frac{e^{\beta_0+\mathbf{x}^T\beta}}{1+ e^{\beta_0+\mathbf{x}^T\beta}}=H(\beta_0+\mathbf{x}^T\beta) where H(\cdot)=\exp(\cdot)/(1+\exp(\cdot)) is the cumulative distribution function of the logistic law. The estimation of (\beta_0,\beta) is performed by maximizing the likelihood: \mathcal{L}=\prod_{i=1}^n \left(\frac{e^{\mathbf{x}_i^T\mathbf{\beta}}}{1+e^{\boldsymbol{x}_i^T\mathbf{\beta}}}\right)^{y_i}\left(\frac{1}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{1-y_i} It is said to be a linear model because isoprobability curves here are the parallel hyperplanes b+\mathbf{x}^T\beta . Rather than this model, popularized by Berkson (1944), some will prefer the probit model (see Berkson, 1951), introduced by Bliss (1934). In this model: \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\Phi (\beta_0+\mathbf{x}^T\beta)

where \Phi denotes the distribution function of the standard normal distribution. This model has the advantage of having a direct link with the Gaussian linear model, since y_i=\mathbf{1}(y_i^\star>0) with y_i^\star=\beta_0+\mathbf{x}_i^T \beta+\varepsilon_i where the residuals are Gaussian, \mathcal{N}(0,\sigma^2). An alternative is to have centered residuals of unit variance, and to consider a latent modeling of the form y_i=\mathbf{1}(y_i^\star>\xi) (where \xi will be fixed). As we can see, these techniques are fundamentally linked to an underlying stochastic model. In the body of the article, we present several alternative techniques – from the learning literature – for this classification problem (with two classes, here 0 and 1).
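
As a quick simulated sketch of that latent-variable formulation (hypothetical data and names), the probit model is simply a glm() with a probit link, to be compared with the logit one,

set.seed(123)
n = 1000
x = rnorm(n)
ystar = .5 + 2*x + rnorm(n)                              # Gaussian latent variable
y = (ystar > 0)*1                                        # observed binary response
coefficients(glm(y~x, family=binomial(link="probit")))   # close to (.5, 2)
coefficients(glm(y~x, family=binomial(link="logit")))    # same signs, different scale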

Regression in high dimension

As we mentioned earlier, the first order condition \mathbf{X}^T (\mathbf{X}\widehat{\beta}-\mathbf{y})=\mathbf{0} is solved numerically by performing a QR decomposition, at a cost of O(np^2) operations (where p is the rank of \mathbf{X}^T \mathbf{X}). Numerically, this calculation can be long (either because p is large or because n is large), and a simpler strategy may be to sub-sample. Let n_s\ll n, and consider a sub-sample of size n_s drawn from \{1,\cdots,n\}. Then \widehat{\beta}_s=(\mathbf{X}_s^T \mathbf{X}_s )^{-1} \mathbf{X}_s^T\mathbf{y}_s is a good approximation of \beta as shown by Dhillon et al. (2014). However, this algorithm is dangerous if some points have a high leverage (i.e. L_i=\mathbf{x}_i(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i^T). Tropp (2011) proposes to transform the data (in a linear way), but a more popular approach is to do non-uniform sub-sampling, with a probability related to the influence of observations (defined by I_i=\widehat{\varepsilon}_iL_i/(1-L_i)^2 , and which unfortunately can only be calculated once the model is estimated).
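
Here is a small simulated sketch of that idea (hypothetical design, purely illustrative): uniform sub-sampling versus non-uniform sub-sampling, with probabilities proportional to (the absolute value of) that influence measure,

set.seed(1)
n = 10000 ; p = 5 ; ns = 500
X = matrix(rnorm(n*p), n, p)
y = drop(X %*% rep(1,p) + rnorm(n))
L = rowSums((X %*% solve(crossprod(X))) * X)    # leverages L_i
e = residuals(lm(y ~ 0 + X))                    # requires fitting the full model first
I = e * L / (1-L)^2                             # influence I_i
idx_u = sample(1:n, size=ns)                    # uniform sub-sample
idx_w = sample(1:n, size=ns, prob=abs(I))       # influence-based sub-sample
rbind(uniform  = lm(y[idx_u] ~ 0 + X[idx_u,])$coefficients,
      weighted = lm(y[idx_w] ~ 0 + X[idx_w,])$coefficients)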

In general, we will talk about massive data when the data table, of size n\times p, does not fit in the RAM memory of the computer. This situation is often encountered in statistical learning nowadays, very often with p\ll n. This is why, in practice, many machine learning libraries rely on iterative methods to solve the first-order condition. When the parametric model to be calibrated is indeed convex and semi-differentiable, it is possible to use, for example, the stochastic gradient descent method, as suggested by Bottou (2010). This method frees us, at each iteration, from computing the gradient over every observation of our learning base. Rather than performing a descent based on the average gradient at each iteration, we start by drawing (without replacement) an observation \mathbf{x}_i among the n available. The model parameters are then corrected so that the prediction made from \mathbf{x}_i is as close as possible to the true value y_i. The method is then repeated until all the data have been reviewed. In this algorithm there are therefore as many iterations as there are observations. Unlike the gradient descent algorithm (or Newton’s method), at each iteration only one gradient vector is calculated (and no longer n). However, it is sometimes necessary to run this algorithm several times to improve the convergence of the model parameters. If the objective is, for example, to minimize a loss function \ell between the estimator m_\beta(\mathbf{x}) and y (like the quadratic loss function, as in the Gaussian linear regression), the algorithm can be summarized as follows:

  • Step 0: Mix the data
  • Iteration step: For t=1,\cdots, n, we pull i\in\{1,\cdots,n\} without replacement, and we set \beta^{t+1} = \beta^{t} - \gamma_t\frac{ \partial{\ell(y_i,m_{\beta^t}(X_i)) } }{ \partial{ \beta}}

This whole pass over the data can be repeated several times, depending on the user’s needs. The advantage of this method is that, at each iteration, it is not necessary to calculate the gradient on all observations (no more sum over the whole sample). It is therefore suitable for large databases. This algorithm relies on convergence in probability towards a neighborhood of the optimum (and not towards the optimum itself). A minimal sketch is given below.
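
A minimal sketch of that algorithm, for the quadratic loss and a linear model m_\beta(\mathbf{x})=\mathbf{x}^T\beta (the function name and the constant learning rate \gamma are purely illustrative),

sgd_lm = function(X, y, gamma=.01, epochs=5){
  X = cbind(1, as.matrix(X))                   # add the intercept
  beta = rep(0, ncol(X))
  for(e in 1:epochs){                          # the whole pass can be repeated several times
    for(i in sample(1:nrow(X))){               # step 0: shuffle, then one observation per step
      grad = -2*(y[i] - sum(X[i,]*beta))*X[i,] # gradient of (y_i - x_i'beta)^2 at current beta
      beta = beta - gamma*grad
    }
  }
  beta
}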

(references will be given in the very last post of that series) To be continued

Classification from scratch, bagging and forests 10/8

Tenth post of our series on classification from scratch. Today, we’ll see the heuristics of the algorithm inside bagging techniques.

Often, bagging is associated with trees, to generate forests. But actually, it is possible to use bagging with any kind of model. Recall that bagging means “bootstrap aggregation”. So, consider a model m:\mathcal{X}\rightarrow \mathcal{Y}. Let \widehat{m}_{S} denote the estimator of m obtained from sample S=\{y_i,\mathbf{x}_i\}, i\in\{1,\cdots,n\}.

Consider now some bootstrap sample, S_b=\{y_i,\mathbf{x}_i\} where i is randomly drawn from \{1,\cdots,n\} (with replacement). Based on that sample, estimate \widehat{m}_{S_b}. Then draw many samples, and consider the aggregation of the estimators obtained, using either a majority rule, or using the average of probabilities (if a probabilistic model was considered). Hence\widehat{m}^{bag}(\mathbf{x})=\frac{1}{B}\sum_{b=1}^B \widehat{m}_{S_b}(\mathbf{x})

Bagging logistic regression #1

Consider the case of the logistic regression. To generate a bootstrap sample, it is natural to use the technique described above, i.e. draw pairs (y_i,\mathbf{x}_i) randomly, uniformly (with probability 1/n) with replacement. Consider here the small dataset, just to visualize. For the b part of bagging, use the following code

L_logit = list()
n = nrow(df)
for(s in 1:1000){
  df_s = df[sample(1:n,size=n,replace=TRUE),]
  L_logit[[s]] = glm(y~., df_s, family=binomial)}

Then we should aggregate over the 1000 models, to get the agg part of bagging,

p = function(x){
  nd=data.frame(x1=x[1], x2=x[2]) 
  unlist(lapply(1:1000,function(z) predict(L_logit[[z]],newdata=nd,type="response")))}

We now have a prediction for any new observation

vu = seq(0,1,length=101)
vv = outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels = .5,add=TRUE)

Bagging logistic regression #2

Another technique that can be used to generate a bootstrap sample is to keep all \mathbf{x}_i‘s, but for each of them, to draw (randomly) a value for y, withY_{i,b}\sim\mathcal{B}(\widehat{m}_{S}(\mathbf{x}_i))since\widehat{m}(\mathbf{x})=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}].Thus, the code for the b part of bagging algorithm is now

L_logit = list()
n = nrow(df)
reg = glm(y~x1+x2, df, family=binomial)
for(s in 1:100){
  df_s = df
  df_s$y = factor(rbinom(n,size=1,prob=predict(reg,type="response")),labels=0:1)
  L_logit[[s]] = glm(y~., df_s, family=binomial)
}

The agg part of bagging algorithm remains unchanged. Here we obtain

vu = seq(0,1,length=101)
vv = outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels = .5,add=TRUE)


Of course, we can use that code to check the predictions obtained on the observations we have in our sample. Just for a change, consider here the myocarde data. The entire code is here

L_logit = list()
n = nrow(myocarde)
reg = glm(as.factor(PRONO)~., myocarde, family=binomial)
for(s in 1:1000){
  myocarde_s = myocarde
  myocarde_s$PRONO = 1*rbinom(n,size=1,prob=predict(reg,type="response"))
  L_logit[[s]] = glm(as.factor(PRONO)~., myocarde_s, family=binomial)
}
p = function(x){
  nd=data.frame(FRCAR=x[1], INCAR=x[2], INSYS=x[3], PRDIA=x[4], 
                PAPUL=x[5], PVENT=x[6], REPUL=x[7]) 
  unlist(lapply(1:1000,function(z) predict(L_logit[[z]],newdata=nd,type="response")))}

For the first observation, with our 1000 simulated datasets, and our 1000 models, we obtained the following estimates of the probability of dying.

histo = function(i){
x = as.numeric(myocarde[i,1:7])
v_x = p(x)
hist(v_x,proba=TRUE,breaks=seq(0,1,by=.05),xlab="",main="",
col=rep(c(rgb(0,0,1,.4),rgb(1,0,0,.4)),each=10),ylim=c(0,5))
segments(mean(v_x),0,mean(v_x),5,col="red",lty=2)
points(myocarde$PRONO[i],0,pch=19,cex=2)
xi = round(mean(v_x>.5)*1000)/10
text(.75,-.1,paste(xi,"%",sep=""),col=rgb(1,0,0,.6))}
histo(1)
histo(4)

Hence, for the first observation, in 77.8% of the models, the predicted probability was higher than 50%, and the average probability was actually close to 75%.

or, for observation 22, predictions very close to the first one (except that the first one died, while the 22nd survived)

histo(23)
histo(11)

and, we observe here

Bagging trees

Let’s now get back to our trees, mentioned in the previous post. Bagging was introduced in 1994 by Leo Breiman in Bagging Predictors. While the first section of that paper describes the procedure, the second one introduces “Bagging Classification Trees”. Trees are nice for interpretation, but most of the time, they are rather poor predictors. The idea of bagging was to improve the accuracy of classification trees.

The idea of bagging is to generate a lot of trees

library(rpart)
library(rpart.plot)
clr12 = c("#8dd3c7","#ffffb3","#bebada","#fb8072","#80b1d3","#fdb462","#b3de69","#fccde5","#d9d9d9","#bc80bd","#ccebc5","#ffed6f")
n = nrow(myocarde)
par(mfrow=c(4,3))
sed=c(1,2,4,5,6,10,11,21,22,24,27,28,30)
for(i in 1:12){
  set.seed(sed[i])
  idx = sample(1:n, size=n, replace=TRUE)
  cart = rpart(PRONO~., myocarde[idx,])
  prp(cart,type=2,extra=1,box.col=clr12[i])}


The strategy is actually the same as before. For the bootstrap part, store the trees in a list

L_tree = list()
for(s in 1:1000){
  idx = sample(1:n, size=n, replace=TRUE)
  L_tree[[s]] = rpart(as.factor(PRONO)~., myocarde[idx,])
}

and for the aggregation part, just take the average of predicted probabilities

p = function(x){
  nd=data.frame(FRCAR=x[1], INCAR=x[2], INSYS=x[3], PRDIA=x[4], 
                PAPUL=x[5], PVENT=x[6], REPUL=x[7]) 
  unlist(lapply(1:1000,function(z) predict(L_tree[[z]],newdata=nd,type="prob")[,2]))}

Because we cannot visualize the predictions with this example, let us run the same code on the smaller dataset

L_tree = list()
n = nrow(df)
for(s in 1:1000){
  idx = sample(1:n, size=n, replace=TRUE)
  L_tree[[s]] = rpart(y~x1+x2, df[idx,],control = rpart.control(cp = 0.25,
minsplit = 2))
}
p = function(x){
  nd=data.frame(x1=x[1], x2=x[2]) 
  unlist(lapply(1:1000,function(z) predict(L_tree[[z]],newdata=nd,type="prob")[,2]))}
vu=seq(0,1,length=101)
vv=outer(vu,vu,Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu,vu,vv,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(vu,vu,vv,levels = .5,add=TRUE)

From bags to forests

Here, we grew a lot of trees, but it is not stricto sensu a random forest algorithm, as introduced in 1995, in Random decision forests. Actually, the difference is in the creation of decision trees. To understand what happens, get back to the previous post on classification trees. As we’ve seen, when we have a node, we look at possible splits: we consider all possible variables, and all possible thresholds. The strategy here will be to draw randomly k variables out of p (with of course k<p, for instance k=\sqrt{p}). That's interesting in high dimension, because at each split, we would otherwise have to look at all variables and all cutoffs, and that can take quite some time (especially with the bootstrap procedure, where the goal will be to grow 1000 trees).
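
Just as a quick preview (a sketch using the randomForest package on the myocarde data, to be discussed properly next time): at each split, only mtry randomly drawn variables, typically \sqrt{p}, are considered,

library(randomForest)
RF = randomForest(as.factor(PRONO)~., data=myocarde,
                  ntree=1000, mtry=floor(sqrt(7)))   # 7 covariates, so mtry = 2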

To be continued…

Classification from scratch, penalized Lasso logistic 5/8

Fifth post of our series on classification from scratch, following the previous post on penalization using the \ell_2 norm (so-called Ridge regression), this time, we will discuss penalization based on the \ell_1 norm (the so-called Lasso regression).

First of all, one should admit that if the name stands for least absolute shrinkage and selection operator, that’s actually a very cool name… Funny story, a few years before, Leo Breiman introduced the garrote technique… “The garrote eliminates some variables, shrinks others, and is relatively stable”.

I guess that somehow, the lasso is the extension of the garrote technique

Normalization of the covariates

As previously, the first step will be to consider linear transformations of all covariates x_j to get centered and scaled variables (with unit variance)

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)

Lasso Regression (from scratch)

The heuristics about Lasso regression is the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue square is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem,

LogLik = function(bbeta){
  b0=bbeta[1]
  beta=bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) - 
  (1-y)*log(1 + exp(b0+X%*%beta)))}
u = seq(-4,4,length=251)
v = outer(u,u,function(x,y) LogLik(c(1,x,y)))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
polygon(c(-1,0,1,0),c(0,1,0,-1),border="blue")

The nice thing here is that it works as a variable selection tool, since some components can be null here. That’s the idea behind the following (popular) graph


(with lasso on the left, and ridge on the right).

Heuristically, the maths explanation is the following. Consider a simple regression y_i=x_i\beta+\varepsilon, with an \ell_1-penalty and an \ell_2-loss function. The optimization problem becomes\min\big\{\mathbf{y}^T\mathbf{y}-2\mathbf{y}^T\mathbf{x}\beta+\beta\mathbf{x}^T\mathbf{x}\beta+2\lambda{\color{red}{|}}\beta{\color{red}{|}}\big\}The first order condition can be written-2\mathbf{y}^T\mathbf{x}+2\mathbf{x}^T\mathbf{x}\widehat{\beta}{\color{red}{\pm} }2\lambda=0(the sign in {\color{red}{\pm}} being the sign of \widehat{\beta}).
Assume that \mathbf{y}^T\mathbf{x}>0, then the solution is
\widehat{\beta}_{\lambda}^{lasso}=\max\left\lbrace\frac{\mathbf{y}^T\mathbf{x}-\lambda}{\mathbf{x}^T\mathbf{x}},0\right\rbrace(we get a corner solution when \lambda is large).

Optimization routine

As in our previous post, let us start with standard (R) optimization routines, such as BFGS

PennegLogLik = function(bbeta,lambda=0){
  b0=bbeta[1]
  beta=bbeta[-1]
 -sum(-y*log(1 + exp(-(b0+X%*%beta))) - 
(1-y)*log(1 + exp(b0+X%*%beta)))+lambda*sum(abs(beta))
}
opt_lasso = function(lambda){
beta_init = lm(PRONO~.,data=myocarde)$coefficients
logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda), 
hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))
logistic_opt$par[-1]
}
v_lambda=c(exp(seq(-4,2,length=61)))
est_lasso=Vectorize(opt_lasso)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_lasso[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_lasso[i,],col=colrs[i],lwd=2)


But it is very erratic… and not very stable.

Using glmnet

Just to compare, with R routines dedicated to lasso, we get the following

library(glmnet)
glm_lasso = glmnet(X, y, alpha=1)
plot(glm_lasso,xvar="lambda",col=colrs,lwd=2)

plot(glm_lasso,col=colrs,lwd=2)

If we look carefully at what’s in the output, we can see that there is variable selection, in the sense that some \widehat{\beta}_{j,\lambda}=0 — “really null”

glmnet(X, y, alpha=1,lambda=exp(-4))$beta
7x1 sparse Matrix of class "dgCMatrix"
               s0
FRCAR  .         
INCAR  0.11005070
INSYS  0.03231929
PRDIA  .         
PAPUL  .         
PVENT -0.03138089
REPUL -0.20962611

Of course, with our optimization routine, we cannot expect to have null values

opt_lasso(.2)
         FRCAR         INCAR         INSYS         PRDIA
  0.4810999782  0.0002813658  1.9117847987 -0.3873926427
          PAPUL         PVENT        REPUL 
 -0.0863050787 -0.4144139379 -1.3849264055

So clearly, it will be necessary to spend more time today, to understand how it works…

Orthogonal covariates

Before getting into the maths, observe that when covariates are orthogonal, there is some very clear “variable” selection process,

library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord
glm_lasso = glmnet(pca_X, y, alpha=1)
plot(glm_lasso,xvar="lambda",col=colrs)
plot(glm_lasso,col=colrs)

Interior Point approach

The penalty is now expressed using the \ell_1 norm, so intuitively, it should be possible to consider algorithms related to linear programming. That was actually suggested in Koh, Kim & Boyd (2007), with some implementation in matlab, see http://web.stanford.edu/~boyd/l1_logreg/. If I can find some time, later on, maybe I will try to recode it. But actually, it is not the technique used in most R functions.

Now, to be honest, we face a double challenge today: the first one is to understand how the lasso works for the “standard” (least square) problem, the second one is to see how to adapt it to the logistic case.

Standard lasso (with weights)

If we get back to the original Lasso approach, the goal was to solve\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n [y_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2+\lambda \sum_j |\beta_j|\right\rbrace(with standard notations, as in wikipedia or Jocelyn Chi’s post – most of the code in this section is inspired by Jocelyn’s great post).

Observe that the intercept is not subject to the penalty. The first order condition is then\frac{\partial}{\partial\beta_0}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}-\beta_0\mathbf{1}\|^2=(\mathbf{X}\mathbf{\beta}-\mathbf{y})^T\mathbf{1}+\beta_0\|\mathbf{1}\|^2=0i.e.\beta_0=\frac{1}{n}(\mathbf{y}-\mathbf{X}\mathbf{\beta})^T\mathbf{1}Now assume that the KKT conditions are satisfied; since we cannot differentiate (to find points where the gradient is \mathbf{0}), we check instead whether \mathbf{0} belongs to the subdifferential at the minimum.

Namely\mathbf{0}\in\partial \left(\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right)=\frac{1}{2}\nabla\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\partial(\lambda\|\mathbf{\beta}\|_{\ell_1})
For the term on the left, we recognize \frac{1}{2}\nabla\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2=-\mathbf{X}^T(\mathbf{y}-\mathbf{X}\mathbf{\beta})=-\mathbf{g}so that the previous equation can be written g_k\in\partial(\lambda|\beta_k|)=\begin{cases}\{+\lambda\}\text{ if }\beta_k>0 \\ \{-\lambda\}\text{ if }\beta_k<0 \\ (-\lambda,+\lambda)\text{ if }\beta_k=0\end{cases}i.e. if \beta_k\neq 0, then g_k = \text{sign}(\beta_k)\cdot\lambda.

Then we write the KKT conditions for this formulation and simplify them to produce a set of rules for checking our solution

We can split \beta_j into a sum of its positive and negative parts by replacing \beta_j with \beta_j^+-\beta_j^- where \beta_j^+,\beta_j^-\geq0. Then the Lasso problem becomes-\log\mathcal{L}(\mathbf{\beta})+\lambda\sum_j(\beta_j^++\beta_j^-)with constraints \beta_j^+,\beta_j^-\geq0.

Let \alpha_j^+,\alpha_j^- denote the Lagrange multipliers for \beta_j^+,\beta_j^-, respectively.

L({\mathbf{\beta}}) + \lambda \sum_{j} (\beta_{j}^{+} + \beta_{j}^{-}) - \sum_{j}\alpha_{j}^{+}\beta_{j}^{+} - \sum_{j} \alpha_{j}^{-}\beta_{j}^{-}.To satisfy the stationarity condition, we take the gradient of the Lagrangian with respect to \beta_{j}^{+} and set it to zero to obtain\nabla L({\mathbf{\beta}})_{j} + \lambda - \alpha_{j}^{+} = 0We do the same with respect to \beta_{j}^{-} to obtain-\nabla L({\mathbf{\beta}})_{j}+\lambda-\alpha_{j}^{-} = 0

As discussed in Jocelyn Chi’s post, primal feasibility requires that the primal constraints be satisfied so this gives us \beta_{j}^{+} \ge 0 and \beta_{j}^{-} \ge 0. Then dual feasibility requires non-negativity of the Lagrange multipliers so we get \alpha_{j}^{+} \ge 0 and \alpha_{j}^{-} \ge 0. And finally, complementary slackness requires that \alpha_{j}^{+}\beta_{j}^{+} = 0 and \alpha_{j}^{-}\beta_{j}^{-} = 0. We can simplify these conditions to obtain a simple set of rules for checking whether or not our solution is a minimum. The following is inspired by Jocelyn Chi’s post.

From \nabla L(\beta)_{j} + \lambda - \alpha_{j}^{+} = 0, we have \nabla L(\beta)_{j} + \lambda= \alpha_{j}^{+} \ge 0. This gives us \nabla L(\beta)_{j} \ge -\lambda. From -\nabla L(\beta)_{j} + \lambda - \alpha_{j}^{-} = 0, we have -\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} \ge 0. This gives us -\nabla L(\beta)_{j} \ge -\lambda, which gives us \nabla L(\beta)_{j} \le \lambda. Hence, \lvert \nabla L(\beta)_{j} \rvert \le \lambda \; \forall j

When \beta_{j}^{+} > 0, \lambda > 0, complementary slackness requires \alpha_{j}^{+} = 0. So \nabla L(\beta)_{j} + \lambda = \alpha_{j}^{+} = 0. Hence, \nabla L(\beta)_{j} = -\lambda < 0 since \lambda > 0. At the same time, -\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} \ge 0 so 2 \lambda = \alpha_{j}^{-} > 0 since \lambda > 0. Then complementary slackness requires \beta_{j}^{-} = 0. Hence, when \beta_{j}^{+} > 0, we have \beta_{j}^{-}=0 and \nabla L(\beta)_{j} = -\lambda

Similarly, when \beta_{j}^{-} > 0, \lambda > 0, complementary slackness requires \alpha_{j}^{-}=0. So -\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} = 0 and \nabla L(\beta)_{j}=\lambda>0 since \lambda > 0. Then from \nabla L(\beta)_{j} + \lambda = \alpha_{j}^{+} \ge 0 and the above, we get 2 \lambda = \alpha_{j}^{+} > 0. Then complementary slackness requires \beta_{j}^{+} = 0. Hence, when \beta_{j}^{-} > 0, we have \beta_{j}^{+}=0 and \nabla L(\beta)_{j} = \lambda.

Since \beta_{j} = \beta_{j}^{+} - \beta_{j}^{-}, this means that when \beta_{j} > 0, \nabla L(\beta)_{j} = -\lambda. And when \beta_{j} <0, \nabla L(\beta)_{j} = \lambda. Combining this with \lvert \nabla L(\beta)_{j} \rvert \le \lambda \; \forall j, we arrive at the same convergence requirements that we obtained before using subdifferential calculus.

For convenience, introduce the soft-thresholding functionS(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }z>\gamma\\z+\gamma&\text{ if }z<-\gamma\\0&\text{ if }|z|\leq\gamma\end{cases}
Noticing that the optimization problem \frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}can also be written (when the covariates are orthonormal, \mathbf{X}^T\mathbf{X}=\mathbb{I})
\min\left\lbrace\sum_{j=1}^p -\widehat{\beta}_j^{ols}\cdot\beta_j+\frac{1}{2}\beta_j^2+\lambda|\beta_j|\right\rbraceobserve that\widehat{\beta}_{j,\lambda}=S(\widehat{\beta}_j^{ols},\lambda)which is a coordinate-wise update.

Now, if we consider a (slightly) more general problem, with weights in the first part\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n{\color{red}{\omega_i}} [y_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2+\lambda \sum_j |\beta_j|\right\rbracethe coordinate-wise update becomes
\widehat{\beta}_{j,\lambda,{\color{red}{\omega}}}=S(\widehat{\beta}_j^{{\color{red}{\omega-}}ols},\lambda)
An alternative is to set\mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}
so that the optimization problem can be written, equivalently
\min\left\lbrace\frac{1}{2n}\sum_{j=1}^p [\mathbf{r}_j-\beta_j\mathbf{x}_j]^2+\lambda |\beta_j|\right\rbrace
hence\min\left\lbrace\frac{1}{2n}\sum_{j=1}^p \beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^T\mathbf{x}_j+\lambda |\beta_j|\right\rbrace
and one gets
\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)
or, if we develop
\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
Again, if there are weights \mathbf{\omega}=(\omega_i), the coordinate-wise update becomes
\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
The code to compute this componentwise descent is

soft_thresholding = function(x,a){
  result = numeric(length(x))
  result[which(x > a)]  = x[which(x > a)] - a
  result[which(x < -a)] = x[which(x < -a)] + a
  return(result)
}

and the code

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
    beta0list = numeric(length(maxiter+1))
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[1] = beta0
    for (j in 1:maxiter){
      for (k in 1:length(beta)){
        r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
        beta[k] = (1/sum(omega*X[,k]^2))*soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
      }
      beta0 = sum(y-X%*%beta)/(length(y))
      beta0list[j+1] = beta0
      betalist[[j+1]] = beta
      obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - 
beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
      if (norm(rbind(beta0list[j],betalist[[j]]) - rbind(beta0,beta),'F') < tol) { break } 
    } 
return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }

Let’s keep that one warm, and let’s get back to our initial problem.

The lasso logistic regression

The trick here is that the logistic problem can be formulated as a quadratic programming problem. Recall that the log-likelihood is here \log\mathcal{L}=\frac{1}{n}\sum_{i=1}^n y_i\cdot(\beta_0+\mathbf{x}_i^T\mathbf{\beta})-\log[1+\exp(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]
which is a concave function of the parameters. Hence, one can use a quadratic approximation of the log-likelihood – using Taylor expansion,\log\mathcal{L}\approx\log\mathcal{L}'=\frac{1}{n}\sum_{i=1}^n \omega_i\cdot[z_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2
where z_i is the working response
z_i=(\beta_0+\mathbf{x}_i^T\mathbf{\beta})+\frac{y_i-p_i}{p_i[1-p_i]}
p_i is the predictionp_i = \frac{\exp[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{1+\exp[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}and \omega_i are weights \omega_i = p_i[1-p_i].

Thus, we obtain a penalized least-square problem. And we can use what was done previously

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0 = sum(y-X%*%beta)/(length(y))
  p = exp(beta0*rep(1,length(y)) + X%*%beta)/(1+exp(beta0*rep(1,length(y)) + X%*%beta))
  z = beta0*rep(1,length(y)) + X%*%beta + (y-p)/(p*(1-p))
  omega = p*(1-p)/(sum((p*(1-p))))
    beta0list = numeric(length(maxiter+1))
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[1] = beta0
    for (j in 1:maxiter){
      for (k in 1:length(beta)){
        r = z - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
       beta[k] = (1/sum(omega*X[,k]^2))*soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
      }
      beta0 = sum(y-X%*%beta)/(length(y))
      beta0list[j+1] = beta0
      betalist[[j+1]] = beta
      obj[j] = (1/2)*(1/length(y))*norm(omega*(z - X%*%beta - 
beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
  p = exp(beta0*rep(1,length(y)) + X%*%beta)/(1+exp(beta0*rep(1,length(y)) + X%*%beta))
  z = beta0*rep(1,length(y)) + X%*%beta + (y-p)/(p*(1-p))
  omega = p*(1-p)/(sum((p*(1-p))))
      if (norm(rbind(beta0list[j],betalist[[j]]) - 
rbind(beta0,beta),'F') < tol) { break } 
        } 
return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }

It looks like what we get when calling glmnet… and here, we do have null components for some \lambda large enough! Really null… and that’s cool actually.
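
For instance, a minimal call (a sketch, with the standardized matrix X and the 0/1 response y defined at the beginning of the post, and a purely illustrative value of \lambda),

lasso_coord_desc(X, y, beta=rep(0,7), lambda=.05)$beta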

Application on our second dataset

Consider now the second dataset, with two covariates. The code to get lasso estimates is

df0 = df
df0$y = as.numeric(df$y)-1
plot_lambda = function(lambda){
m = apply(df0,2,mean)
s = apply(df0,2,sd)
for(j in 1:2) df0[,j] <- (df0[,j]-m[j])/s[j]
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1,lambda=lambda)
u = seq(0,1,length=101)
p = function(x,y){
  xt = (x-m[1])/s[1]
  yt = (y-m[2])/s[2]
  predict(reg,newx=cbind(x1=xt,x2=yt),type="response")}
v = outer(u,u,p)
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)}

Consider some small value of \lambda, so that we only have some shrinkage of the parameters,

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=exp(-2.8))
plot_lambda(exp(-2.8))


But with a larger \lambda, there is variable selection: here \widehat{\beta}_{1,\lambda}=0

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=exp(-2.1))
plot_lambda(exp(-2.1))


(to be continued…)

Classification from scratch, logistic with kernels 3/8

Third post of our series on classification from scratch, following the previous post introducing smoothing techniques, with (b)-splines. Consider here kernel based techniques. Note that here, we do not use the “logistic” model… it is purely non-parametric.

kernel based estimator, from scratch

I like kernels because they are somehow very intuitive. With GLMs, the goal is to estimate \hat{m}(\mathbf{x})=\mathbb{E}(Y|\mathbf{X}=\mathbf{x}). Heuristically, we want to compute the (conditional) expected value on the neighborhood of \mathbf{x}. If we consider some spatial model, where \mathbf{x} is the location, we want the expected value of some variable Y, “on the neighborhood” of \mathbf{x}. A natural approach is to use some administrative region (county, department, region, etc). This means that we have a partition of \mathcal{X} (the space where the variable(s) lie). This will yield the regressogram, introduced in Tukey (1961). For convenience, assume some interval / rectangle / box type of partition. In the univariate case, consider \hat{m}_{\mathbf{a}}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))}or the moving regressogram \hat{m}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])}In that case, the neighborhood is defined as the interval (x\pm h). That’s nice, but clearly very simplistic. If \mathbf{x}_i=\mathbf{x} and \mathbf{x}_j=\mathbf{x}-h+\varepsilon (with \varepsilon>0), both observations are used to compute the conditional expected value. But if \mathbf{x}_{j'}=\mathbf{x}-h-\varepsilon, only \mathbf{x}_i is considered. Even if the distance between \mathbf{x}_{j} and \mathbf{x}_{j'} is extremely small. Thus, a natural idea is to use weights that are functions of the distance between the \mathbf{x}_{i}‘s and \mathbf{x}. Use\tilde{m}(x)=\frac{\sum_{i=1}^ny_i\cdot k_h\left({x-x_i}\right)}{\sum_{i=1}^nk_h\left({x-x_i}\right)}where (classically)k_h(x)=k\left(\frac{x}{h}\right)for some kernel k (a non-negative function that integrates to one) and some bandwidth h. Usually, kernels are denoted with capital letter K, but I prefer to use k, because it can be interpreted as the density of some random noise we add to all observations (independently).

Actually, one can derive that estimate by using kernel-based estimators of densities. Recall that\tilde{f}(\mathbf{y})=\frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n k\left(\mathbf{H}^{-1/2}(\mathbf{y}-\mathbf{y}_i)\right)
Now, use the fact that the expected value can be defined asm(x)=\int yf(y|x)dy=\frac{\int y f(y,x)dy}{\int f(y,x)dy}Consider now a bivariate (product) kernel to estimate the joint density. The numerator is estimated by\frac{1}{nh}\sum_{i=1}^n\int y_i k\left(t,\frac{x-x_i}{h}\right)dt=\frac{1}{nh}\sum_{i=1}^ny_i \kappa\left(\frac{x-x_i}{h}\right)while the denominator is estimated by\frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)=\frac{1}{nh}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right)In a general setting, we still use product kernels between Y and \mathbf{X} and write \widehat{m}_{\mathbf{H}}(\mathbf{x})=\displaystyle{\frac{\sum_{i=1}^ny_i\cdot k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}{\sum_{i=1}^n k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}}for some symmetric positive definite bandwidth matrix \mathbf{H}, and k_{\mathbf{H}}(\mathbf{x})=\det[\mathbf{H}]^{-1}k(\mathbf{H}^{-1}\mathbf{x})

Now that we know what kernel estimates are, let us use them. For instance, assume that k is the density of the \mathcal{N}(0,1) distribution. At point x, with a bandwidth h we get the following code

mean_x = function(x,bw){
  w = dnorm((myocarde$INSYS-x)/bw, mean=0,sd=1)
  weighted.mean(myocarde$PRONO,w)}
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


and of course, we can change the bandwidth.

v = Vectorize(function(x) mean_x(x,2))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


We observe what we can read in any textbook: with a smaller bandwidth, we get more variance, less bias. “More variance” means here more variability (since the neighborhood is smaller, there are fewer points to compute the average, and the estimate is more volatile), and “less bias” in the sense that the expected value is supposed to be computed at point x, so the smaller the neighborhood, the better.

Using ksmooth R function

Actually, there is a function in R to compute this kernel regression.

reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = 2*exp(1))
plot(reg$x,reg$y,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)

We can replicate our previous estimate. Nevertheless, the output is not a function, but two series of vectors. That’s nice to get a graph, but that’s all we get. Furthermore, as we can see, the bandwidth is not exactly the same as the one we used before. I did not find any information online, so I tried to replicate the function we wrote before

g=function(bk=3){
reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = bk)
f=function(bm){
  v = Vectorize(function(x) mean_x(x,bm))(reg$x)
  z=reg$y-v
  sum((z[!is.na(z)])^2)}
optim(bk,f)$par}
x=seq(1,10,by=.1)
y=Vectorize(g)(x)
plot(x,y)
abline(0,exp(-1),col="red")
abline(0,.37,col="blue")


There is a slope of 0.37, which is actually e^{-1}. Coincidence ? I don’t know to be honest…

Application in higher dimension

Consider now our bivariate dataset, and consider some product of univariate (Gaussian) kernels

u = seq(0,1,length=101)
p = function(x,y){
  bw1 = .2; bw2 = .2
  w = dnorm((df$x1-x)/bw1, mean=0,sd=1)*
      dnorm((df$x2-y)/bw2, mean=0,sd=1)
  weighted.mean(df$y=="1",w)
}
v = outer(u,u,Vectorize(p))
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

We get the following prediction

Here, the different colors are probabilities.

k-nearest neighbors

An alternative is to consider a neighborhood not defined using a distance to point \mathbf{x} but the k-neighbors, with the n observations we got.\tilde{m}_k(\mathbf{x})=\frac{1}{n}\sum_{i=1}^n\omega_{i,k}(\mathbf{x})y_i
where \omega_{i,k}(\mathbf{x})=n/k if i\in\mathcal{I}_{\mathbf{x}}^k with
\mathcal{I}_{\mathbf{x}}^k=\{i:\mathbf{x}_i\text{ one of the }k\text{ nearest observations to }\mathbf{x}\}
The difficult part here is that we need a valid distance. If units are very different on each component, using the Euclidean distance will be meaningless. So, quite naturally, let us consider here the Mahalanobis distance

Sigma = var(myocarde[,1:7])
Sigma_Inv = solve(Sigma)
d2_mahalanobis = function(x,y,Sinv){as.numeric(x-y)%*%Sinv%*%t(x-y)}
k_closest = function(i,k){
  vect_dist = function(j) d2_mahalanobis(myocarde[i,1:7],myocarde[j,1:7],Sigma_Inv)
vect = Vectorize(vect_dist)((1:nrow(myocarde))) 
which(rank(vect) <= k)}

Here we have a function to find the k closest neighbors of some observation. Then two things can be done to get a prediction. The goal is to predict a class, so we can think of using a majority rule: the prediction for y_i is the class of the majority of its neighbors.

k_majority = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = sort(myocarde$PRONO[k_closest(i,k)])[(k+1)/2]
  return(Y)}

But we can also compute the proportion of black points among the closest neighbors. It can actually be interpreted as the probability to be black (that’s actually what was said at the beginning of this post, with kernels),

k_mean = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = mean(myocarde$PRONO[k_closest(i,k)])
  return(Y)}

We can see on our dataset the observation, the prediction based on the majority rule, and the proportion of dead individuals among the 7 closest neighbors

cbind(OBSERVED=myocarde$PRONO,
MAJORITY=k_majority(7),PROPORTION=k_mean(7))
      OBSERVED MAJORITY PROPORTION
 [1,]        1        1  0.7142857
 [2,]        0        1  0.5714286
 [3,]        0        0  0.1428571
 [4,]        1        1  0.5714286
 [5,]        0        1  0.7142857
 [6,]        0        0  0.2857143
 [7,]        1        1  0.7142857
 [8,]        1        0  0.4285714
 [9,]        1        1  0.7142857
[10,]        1        1  0.8571429
[11,]        1        1  1.0000000
[12,]        1        1  1.0000000

Here, we got a prediction for an observed point, located at \boldsymbol{x}_i, but actually, it is possible to seek the k closest neighbors of any point \boldsymbol{x}. Back on our univariate example (to get a graph), we have

mean_x = function(x,k=9){
  w = rank(abs(myocarde$INSYS-x),ties.method ="random")
  mean(myocarde$PRONO[which(w<=9)])}
u=seq(5,55,length=201)
v=Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


That’s not very smooth, but we do not have a lot of points either.

If we use that technique on our two-dimensional dataset, we obtain the following

Sigma_Inv = solve(var(df[,c("x1","x2")]))
u = seq(0,1,length=51)
p = function(x,y){
  k = 6
  vect_dist = function(j)  d2_mahalanobis(c(x,y),df[j,c("x1","x2")],Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(df)) 
  idx  = which(rank(vect)<=k)
  return(mean((df$y==1)[idx]))}
v = outer(u,u,Vectorize(p))
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

This is the idea of local inference, using either kernel on a neighborhood of \mathbf{x} or simply using the k nearest neighbors. Next time, we will investigate penalized logistic regressions, to be continued

Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that given covariates \mathbf{x}, Y has a Bernoulli distribution,Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}The goal is to estimate parameter \mathbf{\beta}.

Recall that the heuristics for the use of that function for the probability is that\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}
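
As a quick sanity check of that equivalence: if \log\frac{p_{\mathbf{x}}}{1-p_{\mathbf{x}}}=\mathbf{x}^T\mathbf{\beta}, then \frac{p_{\mathbf{x}}}{1-p_{\mathbf{x}}}=\exp[\mathbf{x}^T\mathbf{\beta}], i.e. p_{\mathbf{x}}=(1-p_{\mathbf{x}})\exp[\mathbf{x}^T\mathbf{\beta}], and solving for p_{\mathbf{x}} gives back p_{\mathbf{x}}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}, as stated above.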

Maximum of the (log)-likelihood function

The log-likelihood is here\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i) where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
 -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
 }

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~.,data=myocarde)$coefficients

Why not start with the OLS parameters? Somehow, we might think that at least the signs should be correct, for instance. Anyway, we need a starting point, so let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

 logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS    
 1.656926397  0.045234029 -2.119441743  0.204023835 
       PRDIA        PAPUL        PVENT        REPUL 
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)

simu = function(i){
logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, 
negLogLik, hessian=TRUE, method = "BFGS", 
control=list(abstol=1e-9))
logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])

Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) 
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) + 
(1-vY)*(-log(1 + exp(mX %*% vBeta)))) 
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable, 
method = 'L-BFGS-B', gr = likelihoodScore, 
mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let’s be honest here, I do not feel comfortable with those techniques. So, what happened here ?

Here, the technique we use is based on the following idea,\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}The problem is that my computer does not know this first and second derivatives. So it will compute them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
 -sum(y*log(logit(X%*%beta)) + 
(1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y)) 
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren’t. From my point of view, that is a major problem (keep in mind that we do not deal here with massive data ! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical values of those derivatives ? Even if some people claim the opposite, sometimes, it can actually be useful to do the maths, instead of considering only numerical values.

Newton (or Fisher) Algorithm

If you open any Econometrics textbooks (one can also try to derive it), you will get \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
while\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}

Y=myocarde$PRONO
X=cbind(1,as.matrix(myocarde[,1:7]))
colnames(X)=c("Inter",names(myocarde[,1:7]))
 beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}

Observe that here, I run only nine iterations of the algorithm!

 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187
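The last columns barely move. A small sketch to quantify that, using the beta matrix built above,

# largest absolute change between the last two Newton iterations
max(abs(beta[,10] - beta[,9]))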

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point

beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

Nice, isn't it? It looks like we have our winner. And one can use the inverse of the Hessian matrix to get standard deviations of the estimators.
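For instance, a minimal sketch, reusing the Hessian computed at the last iteration of the loop above,

# standard errors from the observed information matrix, i.e. minus the
# Hessian of the log-likelihood evaluated at the optimum
sqrt(diag(solve(-Hessian)))

Those values should be close to the standard errors reported by glm below.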

Weighted Least-Squares

Let us go one step further. We've seen that we want to compute something like

\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}

(if we substitute the matrices in the analytical expressions) where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that is simply a standard weighted least-squares problem

\mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbrace

The only problem here is that the weights \mathbf{\Delta}_{old} are functions of the unknown \mathbf{\beta}_{old}. But actually, if we keep iterating, we should be able to solve it: given \mathbf{\beta} we get the weights, and with the weights, we can use weighted least squares to get an updated \mathbf{\beta}. That's the idea of iteratively reweighted least squares (IRLS).

The algorithm will be

df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
p = exp(X %*% beta) / (1+exp(X %*% beta))
omega = diag(nrow(df))
diag(omega) = (p*(1-p))
df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)
beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients
}

and the output is here

 beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA 
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171 
        PAPUL         PVENT         REPUL 
  0.016756506  -0.106776012  -0.003154187

which is almost what we obtained before. Nice, isn't it? Actually, here we also get standard deviations of the estimators (they differ slightly from the ones returned by glm below, since summary.lm rescales the covariance matrix by an estimated residual variance, while the binomial glm uses a dispersion parameter equal to one)

summary( lm(Z~.,data=df[,-8], weights=diag(omega)))
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475

The standard glm function

Of course, it is possible to use an R built-in function to get our estimate

summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519
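Note that glm fits this model precisely by iteratively reweighted least squares (Fisher scoring), essentially the algorithm coded above; a quick check, as a sketch, of how many iterations it needed,

glm(PRONO~.,data=myocarde,family=binomial(link = "logit"))$iter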

Application and visualisation

Let us visualise the predictions obtained from the logistic regression, on our second dataset

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Here the level curves, or iso-probability curves, are linear, so the space is divided into two regions (0 and 1, survival and death, white and black) by a straight line (a hyperplane in higher dimension). Furthermore, since the model is linear in the covariates, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane), parallel to the first one.
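To see this on the picture above, one can add the iso-probability curves for other cutoffs (a quick sketch, reusing the u and v objects computed above),

# iso-probability curves at 25% and 75%: with a linear score, they are
# straight lines, parallel to the 50% boundary plotted above
contour(u, u, v, levels = c(.25, .75), add = TRUE, lty = 2)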

Next time, we will introduce splines to smooth those continuous covariates… to be continued.

Regression on factors

Most of our intuitions about regression models come from the standard Gaussian linear model. One interesting feature is that, when we have a factor explanatory variable, the sum of the predictions per class equals the sum of the observations of the endogenous variable, per class. To be more specific, consider some factor variable https://latex.codecogs.com/gif.latex?x_1\in\{0,1\}, and a regression model

https://latex.codecogs.com/gif.latex?y_i=\beta_0+\beta_1%20\boldsymbol{1}(x_1=1)+\beta_2%20x_2+\varepsilon_i

Use ordinary least squares to fit that model

https://latex.codecogs.com/gif.latex?\widehat{y}_i=\widehat{\beta}_0+\widehat{\beta}_1%20\boldsymbol{1}(x_1=1)+\widehat{\beta}_2%20x_2

Then for all https://latex.codecogs.com/gif.latex?x\in\{0,1\}

https://latex.codecogs.com/gif.latex?\sum_{i:x_i=x}%20y_i%20=%20\sum_{i:x_i=x}%20\widehat{y}_i

> n=200
> X1=rep(0:1,each=n/2)
> set.seed(1)
> X2=runif(2*n)
> L=X1-X2
> B=data.frame(Y=rnorm(n,L),X1=as.factor(X1),X2=X2)
> pd=aggregate(x=B$Y,by=list(B$X1),mean)$x
> pd
[1] -0.4881735  0.5341301
> fit=lm(Y~X1+X2,data=B)
> B2=data.frame(x=B$X1,y=predict(fit))
> aggregate(x=B2$y,by=list(B2$x),mean)$x
[1] -0.4881735  0.5341301
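A quick way to see why this holds (a small sketch, reusing the fit and B objects above): the normal equations of ordinary least squares force the residuals to be orthogonal to every column of the design matrix, including the 0/1 indicator of the factor, so the residuals sum to (numerically) zero within each class,

> tapply(residuals(fit), B$X1, sum)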

Continue reading Regression on factors

Classification on the German Credit Database

In our data science course, this morning, we used random forests to improve predictions on the German Credit dataset. The dataset is

> url="http://freakonometrics.free.fr/german_credit.csv"
> credit=read.csv(url, header = TRUE, sep = ",")

Almost all variables are treated as numeric, but actually, most of them are factors,

> str(credit)
'data.frame':	1000 obs. of  21 variables:
 $ Creditability   : int  1 1 1 1 1 1 1 1 1 1 ...
 $ Account.Balance : int  1 1 2 1 1 1 1 1 4 2 ...
 $ Duration        : int  18 9 12 12 12 10 8  ...
 $ Purpose         : int  2 0 9 0 0 0 0 0 3 3 ...

(etc). Let us convert the categorical variables to factors,

> F=c(1,2,4,5,7,8,9,10,11,12,13,15,16,17,18,19,20)
> for(i in F) credit[,i]=as.factor(credit[,i])

Let us now create our training/calibration and validation/testing datasets, with respective proportions 2/3 and 1/3

> i_test=sample(1:nrow(credit),size=333)
> i_calibration=(1:nrow(credit))[-i_test]

The first model we can fit is a logistic regression, on selected covariates

> LogisticModel <- glm(Creditability ~ Account.Balance + Payment.Status.of.Previous.Credit + Purpose + 
Length.of.current.employment + 
Sex...Marital.Status, family=binomial, 
data = credit[i_calibration,])

Based on that model, it is possible to draw the ROC curve, and to compute the AUC (on the validation dataset)

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> library(ROCR)
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog1=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog1,"\n")
AUC:  0.7340997

An alternative is to consider a logistic regression on all explanatory variables

> LogisticModel <- glm(Creditability ~ ., 
+  family=binomial, 
+  data = credit[i_calibration,])

We might overfit here, and we should observe that on the ROC curve

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog2=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog2,"\n")
AUC:  0.7609792

There is a slight improvement here,  compared with the previous model, where only five explanatory variables were considered.

Consider now a classification tree (on all covariates)

> library(rpart)
> ArbreModel <- rpart(Creditability ~ ., 
+  data = credit[i_calibration,])

We can visualize the tree using

> library(rpart.plot)
> prp(ArbreModel,type=2,extra=1)

The ROC curve for that model is

> fitArbre <- predict(ArbreModel,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitArbre, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCArbre=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCArbre,"\n")
AUC:  0.7100323

As expected, a single tree has a lower performance than the logistic regression. A natural idea is then to grow several trees using some bootstrap procedure, and to aggregate their predictions.

> library(randomForest)
> RF <- randomForest(Creditability ~ .,
+ data = credit[i_calibration,])
> fitForet <- predict(RF,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitForet, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCRF=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCRF,"\n")
AUC:  0.7682367

Here this model is (slightly) better than the logistic regression. Actually, if we create many training/validation samples and compare the AUCs, we can observe that, on average, random forests perform better than logistic regressions,

> AUC=function(i){
+   set.seed(i)
+   i_test=sample(1:nrow(credit),size=333)
+   i_calibration=(1:nrow(credit))[-i_test]
+   LogisticModel <- glm(Creditability ~ ., 
+    family=binomial, 
+    data = credit[i_calibration,])
+   summary(LogisticModel)
+   fitLog <- predict(LogisticModel,type="response",
+                     newdata=credit[i_test,])
+   library(ROCR)
+   pred = prediction( fitLog, credit$Creditability[i_test])
+   AUCLog2=performance(pred, measure = "auc")@y.values[[1]] 
+   RF <- randomForest(Creditability ~ .,
+   data = credit[i_calibration,])
+   fitForet <- predict(RF,
+                       newdata=credit[i_test,],
+                       type="prob")[,2]
+   pred = prediction( fitForet, credit$Creditability[i_test])
+   AUCRF=performance(pred, measure = "auc")@y.values[[1]]
+   return(c(AUCLog2,AUCRF))
+ }
> A=Vectorize(AUC)(1:200)
> plot(t(A))
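To summarise those 200 replications, a small sketch (assuming A is the 2 x 200 matrix computed above, with the logistic AUCs in the first row and the random forest AUCs in the second),

> boxplot(t(A), names=c("logistic","random forest"), ylab="AUC")
> rowMeans(A)
> mean(A[2,] > A[1,])

where the last line gives the proportion of train/test splits on which the random forest beats the logistic regression.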

Choosing a Classifier

In order to illustrate the problem of choosing a classification model, consider some simulated data,

> n = 500
> set.seed(1)
> X = rnorm(n)
> ma = 10-(X+1.5)^2*2
> mb = -10+(X-1.5)^2*2
> M = cbind(ma,mb)
> set.seed(1)
> Z = sample(1:2,size=n,replace=TRUE)
> Y = ma*(Z==1)+mb*(Z==2)+rnorm(n)*5
> df = data.frame(Z=as.factor(Z),X,Y)

A first strategy is to split the dataset into two parts: a training dataset and a testing dataset.

> df1 = training = df[1:300,]
> df2 = testing  = df[301:500,]
  • The Holdout Method: Training and Testing Datasets

The two datasets can be visualised below, with the training dataset on top, and the testing dataset below

> plot(df1$X,df1$Y,pch=19,col=c(rgb(1,0,0,.4),
+ rgb(0,0,1,.4))[df1$Z])

Continue reading Choosing a Classifier

Variable Selection using Cross-Validation (and Other Techniques)

A natural technique to select variables in the context of generalized linear models is to use a stepwise procedure. It is natural, but controversial, as discussed by Frank Harrell in a great post, clearly worth reading. Frank mentioned about ten points against stepwise procedures.

  • It yields R-squared values that are badly biased to be high.
  • The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
  • The method yields confidence intervals for effects and predicted values that are falsely narrow (see Altman and Andersen (1989)).
  • It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem.
  • It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large (see Tibshirani (1996)).
  • It has severe problems in the presence of collinearity.
  • It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses.
  • Increasing the sample size does not help very much (see Derksen and Keselman (1992)).
  • It allows us to not think about the problem.
  • It uses a lot of paper.

Continue reading Variable Selection using Cross-Validation (and Other Techniques)

Visualising a Classification in High Dimension, part 2

A few weeks ago, I published a post on Visualising a Classification in High Dimension, based on the use of a principal component analysis to get a projection on the first two components. Following that post, I was wondering what could be done in the context of a classification on categorical covariates. A natural idea would be to consider a correspondence analysis, and to run a similar code.

Consider here the dataset used in a recent post,

> source("http://freakonometrics.free.fr/import_data_credit.R")

If we consider a correspondence analysis, we get

> library(FactoMineR)
> acm=MCA(train.db,quali.sup = 
+ which(names(train.db)=="class"),ncp=10)

For the covariates (including also the variable we want to model, considered here as a supplementary variable), the visualisation, on the first two components, is

and for the individuals

Continue reading Visualising a Classification in High Dimension, part 2

Visualising a Classification in High Dimension

So far, when discussing classification, we've been playing with my toy dataset (actually, I should not claim it's mine: it is inspired by the one used in the introduction of Boosting, by Robert Schapire and Yoav Freund). But in real life, there are more observations, and more explanatory variables. With more than two explanatory variables, it starts to be more complicated to visualise. For instance, consider

MYOCARDE=read.table(
"http://freakonometrics.free.fr/saporta.csv",
head=TRUE,sep=";")

where we have observations on patients admitted to the E.R. for myocardial infarction, and we want to understand who survived, in order to get a predictive model. But before running some classifier, let us visualise our data. Since we have seven explanatory variables, plus our class (survival or death), we can go for a PCA.

library(FactoMineR) # PCA (on the continuous variables)
X=MYOCARDE[,1:7]
acp=PCA(X)

To add the death/survival variable, treat it as a numerical 0/1 variable (at least to get a direction)

MYOCARDE2=MYOCARDE
MYOCARDE2$PRONO=(MYOCARDE2$PRONO=="SURVIE")*1
acp=PCA(MYOCARDE2,quanti.sup=8,graph=TRUE)

The nice thing is that we can see here which variables are collinear with that one. It is also possible to visualise individuals, and classes, too

acp=PCA(MYOCARDE,quali.sup=8,graph=TRUE)
plot(acp, habillage = 8,col.hab=c("red","blue"))

Continue reading Visualising a Classification in High Dimension

Supervised Classification, beyond the logistic

In our data-science class, after discussing the limitations of the logistic regression, e.g. the fact that the decision boundary is a straight line, we mentioned possible natural extensions. Let us consider our (now) standard dataset

 clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
 clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2))
 x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
 y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
 z <- c(1,1,1,1,1,0,0,1,0,0)
 df <- data.frame(x,y,z)
 plot(x,y,pch=19,cex=2,col=clr1[z+1])

One can consider a quadratic function of the covariates (instead of a linear one)

 reg=glm(z~x+y+I(x^2)+I(y^2)+I(x*y),
     data=df,family=binomial)
 summary(reg)
 
 pred_1 <- function(x,y){
 predict(reg,newdata=data.frame(x=x,
 y=y),type="response")>.5 }
 
 x_grid<-seq(0,1,length=101)
 y_grid<-seq(0,1,length=101)
 z_grid <- outer(x_grid,y_grid,pred_1)
 image(x_grid,y_grid,z_grid,col=clr2)
 points(x,y,pch=19,cex=2,col=clr1[z+1])

Continue reading Supervised Classification, beyond the logistic