# Discrimination by proxy (a real case study)

Yesterday, with Laurence Barry, we posted a blog post “Who benefits from data sharing?” explaining why data sharing, in insurance, could be the end of mutualization. It can also be problematic in the context of discrimination. Consider here the same dataset, with claim occurrence, from a real insurance portfolio,

library(InsurFair)
library(randomForest)

Consider a version of this dataset without the gender variable, and use variable importance to get a list of variables we can use in a predictive model

subfrenchmotor = frenchmotor[,-which(names(frenchmotor)=="sensitive")]
RF = randomForest(y~., data=subfrenchmotor)
vi = varImpPlot(RF, sort = TRUE)

We sort variables based on variable importance (the first one is the “most important” one), and add splines for three continuous variables

dfvi = data.frame(nom = names(subfrenchmotor)[-15], g = as.numeric(vi))
dfvi = dfvi[rev(order(dfvi$g)),]
nom = dfvi$nom
nom[1] = "bs(LicAge)"
nom[3] = "bs(DrivAge)"
nom[7] = "bs(BonusMalus)"

Then, the idea is simple: at stage $k$, we keep the $k$ most important variables, and run a logistic regression on those $k$ variables. Again, I should stress that the gender of the driver is not among those $k$ variables. We then compute the average predicted claim frequency, for men and for women.

n = nrow(subfrenchmotor)
library(splines)
idx_F = which(frenchmotor$sensitive == "Female")
idx_M = which(frenchmotor$sensitive == "Male")
metric_gender = function(k=3){
  if(k==0){
    reg = glm(y~1, family=binomial, data=subfrenchmotor)
  }
  if(k>0){
    vr = paste(nom[1:k], collapse = " + ")
    fm = paste("y ~ ", vr, sep="")
    reg = glm(fm, family=binomial, data=subfrenchmotor)
  }
  yp = predict(reg, type="response")
  yp_F = yp[idx_F]
  yp_M = yp[idx_M]
  sortie = c(mean(yp_F), mean(yp_M), quantile(yp_F, c(.1,.9)), quantile(yp_M, c(.1,.9)))
  names(sortie)[1:2] = c("mean_F","mean_M")
  sortie
}

Let us now compute it for all variables

N = 0:15
M = Vectorize(metric_gender)(N)

and plot it

plot(N, M[1,]*100, xlab="Number of predictive variables (without gender)",
     ylab="Average predicted claims frequency (%)",
     type="b", pch=19, col=COLORS[2], ylim=c(8.12,9))
lines(N, M[2,]*100, type="b", pch=15, col=COLORS[3])

Interestingly, we can clearly see that with 15 explanatory variables, even if our model is gender-blind (since gender is not in the training dataset), it reproduces the difference we can observe in the dataset: the annual claim frequency is almost 9% for men and 8.2% for women.

Actually, it is not possible to predict the gender from our 15 variables (below is the ROC curve of the logistic regression used to predict the gender)

library(ROCR)
metric_gender_2 = function(k=3){
  if(k==0){
    reg = glm((sensitive=="Female")~1, family=binomial, data=frenchmotor)
  }
  if(k>0){
    vr = paste(nom[1:k], collapse = " + ")
    fm_genre = paste('(sensitive=="Female") ~ ', vr, sep="")
    reg = glm(fm_genre, family=binomial, data=frenchmotor)
  }
  pred = prediction(predict(reg, type="response"), (frenchmotor$sensitive=="Female"))
  performance(pred, "tpr", "fpr")
}
plot(metric_gender_2(15))

but still, when using 15 variables, we obtain discrimination in our portfolio, since the average predictions for men and women are significantly different (even if our models are, per se, gender-blind).

# From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration

Our paper From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic and François Hu, is now online on ArXiv.

The assessment of binary classifier performance traditionally centers on discriminative ability using metrics, such as accuracy. However, these metrics often disregard the model’s inherent uncertainty, especially when dealing with sensitive decision-making domains, such as finance or healthcare. Given that model-predicted scores are commonly seen as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario using a Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.

# Estimates on training vs. validation samples

Before moving to cross-validation, it was natural to say “I will burn 50% (say) of my data to train a model, and then use the remaining observations to fit the model”. For instance, we can use the training data for variable selection (e.g. using some stepwise procedure in a logistic regression), and then, once variables have been selected, fit the model on the remaining set of observations. A natural question is usually “does it really matter?”. In order to visualize this problem, consider my (simple) dataset

MYOCARDE=read.table(
  "http://freakonometrics.free.fr/saporta.csv",
  head=TRUE, sep=";")

Let us generate 100 training samples (where we keep about 50% of the observations). On each of them, we use a stepwise procedure, and we keep the estimates of the remaining variables (and their standard deviations, actually)

n = nrow(MYOCARDE)
M = matrix(NA, 100, ncol(MYOCARDE))
colnames(M) = c("(Intercept)", names(MYOCARDE)[1:7])
S1 = S2 = M1 = M2 = M
for(i in 1:100){
  idx = which(sample(0:1, size=n, replace=TRUE)==1)
  reg = step(glm(PRONO=="DECES"~., data=MYOCARDE[idx,]))
  nm = names(reg$coefficients)
  M1[i,nm] = reg$coefficients
  S1[i,nm] = summary(reg)$coefficients[,2]
  f = paste("PRONO=='DECES'~", paste(nm[-1], collapse="+"), sep="")
  reg = glm(f, data=MYOCARDE[-idx,])
  M2[i,nm] = reg$coefficients
  S2[i,nm] = summary(reg)$coefficients[,2]
}

Then, for the 7 covariates (and the constant) we can look at the value of the coefficient in the model fitted on the training sample, and the value in the model fitted on the validation sample (of course, only when the variable was kept)

for(j in 1:8){
  idx = which(!is.na(M1[,j]))
  plot(M1[idx,j], M2[idx,j])
  abline(a=0, b=1, lty=2, col="gray")
  segments(M1[idx,j]-2*S1[idx,j], M2[idx,j], M1[idx,j]+2*S1[idx,j], M2[idx,j])
  segments(M1[idx,j], M2[idx,j]-2*S2[idx,j], M1[idx,j], M2[idx,j]+2*S2[idx,j])
}

For instance, with the intercept, we have the following

where horizontal segments are confidence intervals of the parameter for the model fitted on the training sample, and vertical ones for the validation sample. The green part means some sort of consistency, while the red one means that the coefficient was negative with one model, and positive with the other one. Which is odd (but in that case, observe that the coefficients are rarely significant).

We can also visualize the joint distribution of the two estimators,
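
One possible way to do so (a sketch, not in the original code) is a bivariate kernel density estimate of the pairs of estimates, here for the intercept (column j = 1), using MASS::kde2d,

library(MASS)
j = 1
idx = which(!is.na(M1[,j]) & !is.na(M2[,j]))
# bivariate kernel density of (training estimate, validation estimate)
dens = kde2d(M1[idx,j], M2[idx,j], n=50)
image(dens, xlab="estimate (training sample)", ylab="estimate (validation sample)")
contour(dens, add=TRUE)
abline(a=0, b=1, lty=2, col="gray")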

## Bagging logistic regression #2

Another technique that can be used to generate a bootstrap sample is to keep all $\mathbf{x}_i$‘s, but for each of them, to draw (randomly) a value for $y$, with$$Y_{i,b}\sim\mathcal{B}(\widehat{m}_{S}(\mathbf{x}_i))$$since$$\widehat{m}(\mathbf{x})=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}].$$Thus, the code for the b part of the bagging algorithm is now

L_logit = list()
n = nrow(df)
reg = glm(y~x1+x2, df, family=binomial)
for(s in 1:100){
  df_s = df
  df_s$y = factor(rbinom(n, size=1, prob=predict(reg, type="response")), labels=0:1)
  L_logit[[s]] = glm(y~., df_s, family=binomial)
}

The aggregation part of the bagging algorithm remains unchanged. Here we obtain

vu = seq(0,1,length=101)
vv = outer(vu, vu, Vectorize(function(x,y) mean(p(c(x,y)))))
image(vu, vu, vv, xlab="Variable 1", ylab="Variable 2", col=clr10, breaks=(0:10)/10)
points(df$x1, df$x2, pch=19, cex=1.5, col="white")
points(df$x1, df$x2, pch=c(1,19)[1+(df$y=="1")], cex=1.5)
contour(vu, vu, vv, levels=.5, add=TRUE)

Of course, we can use that code to check the predictions obtained on the observations we have in our sample. For a change, consider here the myocarde data. The entire code is here

L_logit = list()
n = nrow(myocarde)
reg = glm(as.factor(PRONO)~., myocarde, family=binomial)
for(s in 1:1000){
  myocarde_s = myocarde
  myocarde_s$PRONO = 1*rbinom(n, size=1, prob=predict(reg, type="response"))
  L_logit[[s]] = glm(as.factor(PRONO)~., myocarde_s, family=binomial)
}
p = function(x){
  nd = data.frame(FRCAR=x[1], INCAR=x[2], INSYS=x[3], PRDIA=x[4],
                  PAPUL=x[5], PVENT=x[6], REPUL=x[7])
  unlist(lapply(1:1000, function(z) predict(L_logit[[z]], newdata=nd, type="response")))
}

For the first observation, with our 1000 simulated datasets, and our 1000 models, we obtained the following estimation for the probability to die.

histo = function(i){
  x = as.numeric(myocarde[i,1:7])
  v_x = p(x)
  hist(v_x, proba=TRUE, breaks=seq(0,1,by=.05), xlab="", main="",
       col=rep(c(rgb(0,0,1,.4), rgb(1,0,0,.4)), each=10), ylim=c(0,5))
  segments(mean(v_x), 0, mean(v_x), 5, col="red", lty=2)
  points(myocarde$PRONO[i], 0, pch=19, cex=2)
  xi = round(mean(v_x > .5)*1000)/10
  text(.75, -.1, paste(xi, "%", sep=""), col=rgb(1,0,0,.6))
}
histo(1)
histo(4)

Hence, for the first observation, in 77.8% of the models, the predicted probability was higher than 50%, and the average probability was actually close to 75%.

or, for observation 22, predictions very close to the first one (except that the first one died, while the 22nd survived)

histo(23)
histo(11)

and, we observe here

## Bagging trees

Let’s now get back to our trees, mentioned in the previous post. Bagging was introduced in 1994 by Leo Breiman in Bagging Predictors. While the first section describes the procedure, the second one introduces “Bagging Classification Trees”. Trees are nice for interpretation, but most of the time, they are rather poor predictors. The idea of bagging was to improve the accuracy of classification trees.

The idea of bagging is to generate a lot of trees

library(rpart)
library(rpart.plot)
clr12 = c("#8dd3c7","#ffffb3","#bebada","#fb8072","#80b1d3","#fdb462",
          "#b3de69","#fccde5","#d9d9d9","#bc80bd","#ccebc5","#ffed6f")
n = nrow(myocarde)
par(mfrow=c(4,3))
sed = c(1,2,4,5,6,10,11,21,22,24,27,28,30)
for(i in 1:12){
  set.seed(sed[i])
  idx = sample(1:n, size=n, replace=TRUE)
  cart = rpart(PRONO~., myocarde[idx,])
  prp(cart, type=2, extra=1, box.col=clr12[i])
}

The strategy is actually the same as before. For the bootstrap part, store the trees in a list

L_tree = list()
for(s in 1:1000){
  idx = sample(1:n, size=n, replace=TRUE)
  L_tree[[s]] = rpart(as.factor(PRONO)~., myocarde[idx,])
}

and for the aggregation part, just take the average of predicted probabilities

p = function(x){
  nd = data.frame(FRCAR=x[1], INCAR=x[2], INSYS=x[3], PRDIA=x[4],
                  PAPUL=x[5], PVENT=x[6], REPUL=x[7])
  unlist(lapply(1:1000, function(z) predict(L_tree[[z]], newdata=nd, type="prob")[,2]))
}

Because with this example, we cannot visualize predictions, let us run the same code on the smaller dataset
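
Here is a minimal sketch of that, assuming the two-covariate toy dataset df and the palette clr10 from the bagged logistic example above are still in memory (rpart's default minsplit is relaxed so that trees can actually split on such a small sample):

library(rpart)
n = nrow(df)
L_tree_2 = list()
for(s in 1:100){
  idx = sample(1:n, size=n, replace=TRUE)
  L_tree_2[[s]] = rpart(y~x1+x2, df[idx,], method="class",
                        control=rpart.control(minsplit=2, cp=0))
}
p2 = function(x,y){
  nd = data.frame(x1=x, x2=y)
  mean(unlist(lapply(1:100, function(z) predict(L_tree_2[[z]], newdata=nd, type="prob")[,2])))
}
vu = seq(0,1,length=51)
vv = outer(vu, vu, Vectorize(p2))
image(vu, vu, vv, xlab="Variable 1", ylab="Variable 2", col=clr10, breaks=(0:10)/10)
points(df$x1, df$x2, pch=19, cex=1.5, col="white")
points(df$x1, df$x2, pch=c(1,19)[1+(df$y=="1")], cex=1.5)
contour(vu, vu, vv, levels=.5, add=TRUE)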

## Lasso Regression (from scratch)

The heuristics about Lasso regression is the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue square is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem,

LogLik = function(bbeta){
  b0 = bbeta[1]
  beta = bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*log(1 + exp(b0+X%*%beta)))
}
u = seq(-4,4,length=251)
v = outer(u, u, Vectorize(function(x,y) LogLik(c(1,x,y))))
image(u, u, v, col=rev(heat.colors(25)))
contour(u, u, v, add=TRUE)
polygon(c(-1,0,1,0), c(0,1,0,-1), border="blue")

The nice thing here is that it works as a variable selection tool, since some components can be null. That’s the idea behind the following (popular) graph

(with lasso on the left, and ridge on the right).

Heuristically, the maths explanation is the following. Consider a simple regression $y_i=x_i\beta+\varepsilon$, with an $\ell_1$-penalty and an $\ell_2$-loss function. The optimization problem becomes$$\min\big\{\mathbf{y}^T\mathbf{y}-2\mathbf{y}^T\mathbf{x}\beta+\beta\mathbf{x}^T\mathbf{x}\beta+2\lambda{\color{red}{|}}\beta{\color{red}{|}}\big\}$$The first order condition can be written$$-2\mathbf{y}^T\mathbf{x}+2\mathbf{x}^T\mathbf{x}\widehat{\beta}{\color{red}{\pm} }2\lambda=0$$(the sign in ${\color{red}{\pm}}$ being the sign of $\widehat{\beta}$).
Assume that $\mathbf{y}^T\mathbf{x}>0$, then solution is
$$\widehat{\beta}_{\lambda}^{lasso}=\max\left\lbrace\frac{\mathbf{y}^T\mathbf{x}-\lambda}{\mathbf{x}^T\mathbf{x}},0\right\rbrace$$(we get a corner solution when $\lambda$ is large).
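
A quick numerical check of that closed-form expression (a sketch, on simulated data, with a single covariate and no intercept):

set.seed(1)
x = rnorm(50)
y = .5*x + rnorm(50)
lambda = 2
# objective y'y - 2 y'x b + b^2 x'x + 2*lambda*|b|, i.e. the RSS plus the penalty
obj = function(b) sum((y - x*b)^2) + 2*lambda*abs(b)
optimize(obj, c(-2,2))$minimum
# closed-form solution (here y'x > 0)
max((sum(y*x) - lambda)/sum(x^2), 0)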

## Optimization routine

As in our previous post, let us start with standard (R) optimization routines, such as BFGS

PennegLogLik = function(bbeta, lambda=0){
  b0 = bbeta[1]
  beta = bbeta[-1]
  -sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*log(1 + exp(b0+X%*%beta))) + lambda*sum(abs(beta))
}
opt_lasso = function(lambda){
  beta_init = lm(PRONO~., data=myocarde)$coefficients
  logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda),
                       hessian=TRUE, method="BFGS", control=list(abstol=1e-9))
  logistic_opt$par[-1]
}
v_lambda = c(exp(seq(-4,2,length=61)))
est_lasso = Vectorize(opt_lasso)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda, est_lasso[1,], col=colrs[1], type="l")
for(i in 2:7) lines(v_lambda, est_lasso[i,], col=colrs[i], lwd=2)

But it is very erratic… and not stable.

## Using glmnet

Just to compare, with R routines dedicated to lasso, we get the following

library(glmnet)
glm_lasso = glmnet(X, y, alpha=1)
plot(glm_lasso, xvar="lambda", col=colrs, lwd=2)

plot(glm_lasso,col=colrs,lwd=2)

If we look carefully at what’s in the output, we can see that there is variable selection, in the sense that some $\widehat{\beta}_{j,\lambda}=0$, “really null”
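
To make those zeros visible, we can extract the coefficients for one given penalty (the value of s below is arbitrary):

coef(glm_lasso, s = exp(-2))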

## Interior Point approach

The penalty is now expressed using the $\ell_1$ norm, so intuitively, it should be possible to consider algorithms related to linear programming. That was actually suggested in Koh, Kim & Boyd (2007), with some implementation in matlab, see http://web.stanford.edu/~boyd/l1_logreg/. If I can find some time, later on, maybe I will try to recode it. But actually, it is not the technique used in most R functions.

Now, to be honest, we face a double challenge today: the first one is to understand how lasso works for the “standard” (least square) problem, the second one is to see how to adapt it to the logistic case.

## Standard lasso (with weights)

If we get back to the original Lasso approach, the goal was to solve$$\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n [y_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2+\lambda \sum_j |\beta_j|\right\rbrace$$(with standard notations, as in wikipedia or Jocelyn Chi’s post – most of the code in this section is inspired by Jocelyn’s great post).

Observe that the intercept is not subject to the penalty. The first order condition is then$$\frac{\partial}{\partial\beta_0}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}-\beta_0\mathbf{1}\|^2=(\mathbf{X}\mathbf{\beta}-\mathbf{y})^T\mathbf{1}+\beta_0\|\mathbf{1}\|^2=0$$i.e.$$\beta_0=\frac{1}{n}(\mathbf{y}-\mathbf{X}\mathbf{\beta})^T\mathbf{1}$$Assume now that the KKT conditions are satisfied; since we cannot differentiate (to find points where the gradient is $\mathbf{0}$), we can check whether $\mathbf{0}$ belongs to the subdifferential at the minimum.

Namely$$\mathbf{0}\in\partial \left(\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right)=\frac{1}{2}\nabla\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\partial(\lambda\|\mathbf{\beta}\|_{\ell_1})$$
For the first term, we recognize $$\frac{1}{2}\nabla\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2=-\mathbf{X}^T(\mathbf{y}-\mathbf{X}\mathbf{\beta})=-\mathbf{g}$$so that the previous equation can be written$$g_k\in\partial(\lambda|\beta_k|)=\begin{cases}\{+\lambda\}\text{ if }\beta_k>0 \\ \{-\lambda\}\text{ if }\beta_k<0 \\ [-\lambda,+\lambda]\text{ if }\beta_k=0\end{cases}$$i.e. if $\beta_k\neq 0$, then $g_k = \text{sign}(\beta_k)\cdot\lambda$.

Then we write the KKT conditions for this formulation and simplify them to produce a set of rules for checking our solution

We can split $\beta_j$ into a sum of its positive and negative parts by replacing $\beta_j$ with $\beta_j^+-\beta_j^-$ where $\beta_j^+,\beta_j^-\geq0$. Then the Lasso problem becomes$$-\log\mathcal{L}(\mathbf{\beta})+\lambda\sum_j(\beta_j^++\beta_j^-)$$with constraints $\beta_j^+\geq 0$ and $\beta_j^-\geq 0$.

Let $\alpha_j^+,\alpha_j^-$ denote the Lagrange multipliers for $\beta_j^+,\beta_j^-$, respectively.

$$L({\mathbf{\beta}}) + \lambda \sum_{j} (\beta_{j}^{+} + \beta_{j}^{-}) - \sum_{j}\alpha_{j}^{+}\beta_{j}^{+} - \sum_{j} \alpha_{j}^{-}\beta_{j}^{-}.$$To satisfy the stationarity condition, we take the gradient of the Lagrangian with respect to $\beta_{j}^{+}$ and set it to zero to obtain$$\nabla L({\mathbf{\beta}})_{j} + \lambda - \alpha_{j}^{+} = 0$$We do the same with respect to $\beta_{j}^{-}$ to obtain$$-\nabla L({\mathbf{\beta}})_{j}+\lambda-\alpha_{j}^{-} = 0$$

As discussed in Jocelyn Chi’s post, primal feasibility requires that the primal constraints be satisfied so this gives us $\beta_{j}^{+} \ge 0$ and $\beta_{j}^{-} \ge 0$. Then dual feasibility requires non-negativity of the Lagrange multipliers so we get $\alpha_{j}^{+} \ge 0$ and $\alpha_{j}^{-} \ge 0$. And finally, complementary slackness requires that $\alpha_{j}^{+}\beta_{j}^{+} = 0$ and $\alpha_{j}^{-}\beta_{j}^{-} = 0$. We can simplify these conditions to obtain a simple set of rules for checking whether or not our solution is a minimum. The following is inspired by Jocelyn Chi’s post.

From $\nabla L(\beta)_{j} + \lambda - \alpha_{j}^{+} = 0$, we have $\nabla L(\beta)_{j} + \lambda= \alpha_{j}^{+} \ge 0$. This gives us $\nabla L(\beta)_{j} \ge -\lambda$. From $-\nabla L(\beta)_{j} + \lambda - \alpha_{j}^{-} = 0$, we have $-\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} \ge 0$. This gives us $-\nabla L(\beta)_{j} \ge -\lambda$, which gives us $\nabla L(\beta)_{j} \le \lambda$. Hence, $\lvert \nabla L(\beta)_{j} \rvert \le \lambda \; \forall j$

When $\beta_{j}^{+} > 0, \lambda > 0$, complementary slackness requires $\alpha_{j}^{+} = 0$. So $\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{+} = 0$. Hence, $\nabla L(\beta)_{j} = -\lambda < 0$ since $\lambda > 0$. At the same time, $-\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} \ge 0$ so $2 \lambda = \alpha_{j}^{-} > 0$ since $\lambda > 0$. Then complementary slackness requires $\beta_{j}^{-} = 0$. Hence, when $\beta_{j}^{+} > 0$, we have $\beta_{j}^{-}=0$ and $\nabla L(\beta)_{j} = -\lambda$

Similarly, when $\beta_{j}^{-} > 0, \lambda > 0$, complementary slackness requires $\alpha_{j}^{-}=0$. So $-\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{-} = 0$ and $\nabla L(\beta)_{j}=\lambda>0$ since $\lambda > 0$. Then from $\nabla L(\beta)_{j} + \lambda = \alpha_{j}^{+} \ge 0$ and the above, we get $2 \lambda = \alpha_{j}^{+} > 0$. Then complementary slackness requires $\beta_{j}^{+} = 0$. Hence, when $\beta_{j}^{-} > 0$, we have $\beta_{j}^{+}=0$ and $\nabla L(\beta)_{j} = \lambda$.

Since $\beta_{j} = \beta_{j}^{+} - \beta_{j}^{-}$, this means that when $\beta_{j} > 0$, $\nabla L(\beta)_{j} = -\lambda$. And when $\beta_{j} <0$, $\nabla L(\beta)_{j} = \lambda$. Combining this with $\lvert \nabla L(\beta)_{j} \rvert \le \lambda \; \forall j$, we arrive at the same convergence requirements that we obtained before using subdifferential calculus.
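
Those conditions can also be checked numerically. Below is a sketch, on simulated data, using glmnet; note that glmnet minimizes $\frac{1}{2n}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|^2+\lambda\|\mathbf{\beta}\|_{\ell_1}$, so the gradient has to be scaled by $1/n$ (standardization and the intercept are switched off to keep the check simple):

library(glmnet)
set.seed(1)
ns = 100
Xs = matrix(rnorm(ns*5), ns, 5)
ys = as.numeric(Xs %*% c(1,-1,0,0,.5) + rnorm(ns))
lambda = .2
fit = glmnet(Xs, ys, alpha=1, lambda=lambda, intercept=FALSE, standardize=FALSE)
bs = as.numeric(coef(fit))[-1]
gs = t(Xs) %*% (ys - Xs %*% bs) / ns
# |g_j| <= lambda everywhere, and g_j = sign(beta_j)*lambda on the active set
cbind(beta=bs, gradient=as.numeric(gs))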

For convenience, introduce the soft-thresholding function$$S(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }\gamma<|z|\text{ and }z>0\\z+\gamma&\text{ if }\gamma<|z|\text{ and }z<0 \\0&\text{ if }\gamma\geq|z|\end{cases}$$
Noticing that the optimization problem $$\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}$$can also be written
$$\min\left\lbrace\sum_{j=1}^p -\widehat{\beta}_j^{ols}\cdot\beta_j+\frac{1}{2}\beta_j^2+\lambda|\beta_j|\right\rbrace$$(when the columns of $\mathbf{X}$ are orthonormal), observe that$$\widehat{\beta}_{j,\lambda}=S(\widehat{\beta}_j^{ols},\lambda)$$which is a coordinate-wise update.
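
A quick numerical check of that componentwise solution (a sketch: a single coordinate, minimizing the objective above with optimize, and comparing with the soft-thresholded OLS value):

soft = function(z, g) sign(z)*pmax(abs(z)-g, 0)
b_ols = .8
lambda = .3
f = function(b) -b_ols*b + b^2/2 + lambda*abs(b)
optimize(f, c(-5,5))$minimum   # numerical minimizer
soft(b_ols, lambda)            # soft-thresholded OLS value, here .5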

Now, if we consider a (slightly) more general problem, with weights in the first part$$\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n{\color{red}{\omega_i}} [y_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2+\lambda \sum_j |\beta_j|\right\rbrace$$the coordinate-wise update becomes
$$\widehat{\beta}_{j,\lambda,{\color{red}{\omega}}}=S(\widehat{\beta}_j^{{\color{red}{\omega-}}ols},\lambda)$$
An alternative is to set$$\mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}$$
so that the optimization problem can be written, equivalently
$$\min\left\lbrace\frac{1}{2n}\sum_{j=1}^p [\mathbf{r}_j-\beta_j\mathbf{x}_j]^2+\lambda |\beta_j|\right\rbrace$$
hence$$\min\left\lbrace\frac{1}{2n}\sum_{j=1}^p \beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^T\mathbf{x}_j+\lambda |\beta_j|\right\rbrace$$
and one gets
$$\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)$$
or, if we develop
$$\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)$$
Again, if there are weights $\mathbf{\omega}=(\omega_i)$, the coordinate-wise update becomes
$$\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)$$
The code to compute this componentwise descent is

soft_thresholding = function(x, a){
  result = numeric(length(x))
  result[which(x > a)] = x[which(x > a)] - a
  result[which(x < -a)] = x[which(x < -a)] + a
  return(result)
}

and the code

lasso_coord_desc = function(X, y, beta, lambda, tol=1e-6, maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y), length(y))
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0list = numeric(length(maxiter+1))
  beta0 = sum(y-X%*%beta)/(length(y))
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      beta[k] = (1/sum(omega*X[,k]^2))*soft_thresholding(t(omega*r)%*%X[,k], length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - beta0*rep(1,length(y))),'F')^2 +
      lambda*sum(abs(beta))
    if (norm(rbind(beta0list[j],betalist[[j]]) - rbind(beta0,beta),'F') < tol) { break }
  }
  return(list(obj=obj[1:j], beta=beta, intercept=beta0))
}

Let’s keep that one warm, and let’s get back to our initial problem.

## The lasso logistic regression

The trick here is that the logistic problem can be formulated as a quadratic programming problem. Recall that the log-likelihood is here $$\log\mathcal{L}=\frac{1}{n}\sum_{i=1}^n y_i\cdot(\beta_0+\mathbf{x}_i^T\mathbf{\beta})-\log[1+\exp(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]$$
which is a concave function of the parameters. Hence, one can use a quadratic approximation of the log-likelihood – using Taylor expansion,$$\log\mathcal{L}\approx\log\mathcal{L}'=\frac{1}{n}\sum_{i=1}^n \omega_i\cdot[z_i-(\beta_0+\mathbf{x}_i^T\mathbf{\beta})]^2$$
where $z_i$ is the working response
$$z_i=(\beta_0+\mathbf{x}_i^T\mathbf{\beta})+\frac{y_i-p_i}{p_i[1-p_i]}$$
$p_i$ is the prediction$$p_i = \frac{\exp[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{1+\exp[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}$$and $\omega_i$ are weights $\omega_i = p_i[1-p_i]$.
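
As a quick sanity check (a sketch, not needed for what follows): at the maximum likelihood estimate, a weighted least squares fit of the working response $z$ with weights $\omega$ gives back the glm coefficients,

fit = glm(as.factor(PRONO)~., data=myocarde, family=binomial)
Xd = model.matrix(fit)
eta = as.numeric(Xd %*% coef(fit))
p = exp(eta)/(1+exp(eta))
z = eta + (fit$y - p)/(p*(1-p))   # working response
omega = p*(1-p)                   # weights
cbind(glm=coef(fit), wls=coef(lm(z ~ 0 + Xd, weights=omega)))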

Thus, we obtain a penalized least-square problem. And we can use what was done previously

lasso_coord_desc = function(X, y, beta, lambda, tol=1e-6, maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0 = sum(y-X%*%beta)/(length(y))
  p = exp(beta0*rep(1,length(y)) + X%*%beta)/(1+exp(beta0*rep(1,length(y)) + X%*%beta))
  z = beta0*rep(1,length(y)) + X%*%beta + (y-p)/(p*(1-p))
  omega = p*(1-p)/(sum((p*(1-p))))
  beta0list = numeric(length(maxiter+1))
  beta0 = sum(y-X%*%beta)/(length(y))
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      r = z - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      beta[k] = (1/sum(omega*X[,k]^2))*soft_thresholding(t(omega*r)%*%X[,k], length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    obj[j] = (1/2)*(1/length(y))*norm(omega*(z - X%*%beta - beta0*rep(1,length(y))),'F')^2 +
      lambda*sum(abs(beta))
    p = exp(beta0*rep(1,length(y)) + X%*%beta)/(1+exp(beta0*rep(1,length(y)) + X%*%beta))
    z = beta0*rep(1,length(y)) + X%*%beta + (y-p)/(p*(1-p))
    omega = p*(1-p)/(sum((p*(1-p))))
    if (norm(rbind(beta0list[j],betalist[[j]]) - rbind(beta0,beta),'F') < tol) { break }
  }
  return(list(obj=obj[1:j], beta=beta, intercept=beta0))
}

It looks like what we get when calling glmnet… and here, we do have null components when $\lambda$ is large enough! Really null… and that’s cool actually.

## Application on our second dataset

Consider now the second dataset, with two covariates. The code to get lasso estimates is

df0 = df
df0$y = as.numeric(df$y)-1
plot_lambda = function(lambda){
  m = apply(df0, 2, mean)
  s = apply(df0, 2, sd)
  for(j in 1:2) df0[,j] <- (df0[,j]-m[j])/s[j]
  reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1, lambda=lambda)
  u = seq(0,1,length=101)
  p = function(x,y){
    xt = (x-m[1])/s[1]
    yt = (y-m[2])/s[2]
    predict(reg, newx=cbind(x1=xt,x2=yt), type="response")}
  v = outer(u, u, p)
  image(u, u, v, col=clr10, breaks=(0:10)/10)
  points(df$x1, df$x2, pch=19, cex=1.5, col="white")
  points(df$x1, df$x2, pch=c(1,19)[1+z], cex=1.5)
  contour(u, u, v, levels=.5, add=TRUE)
}

Consider some small value of $\lambda$, so that we only have some sort of shrinkage of the parameters,

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1)
par(mfrow=c(1,2))
plot(reg, xvar="lambda", col=c("blue","red"), lwd=2)
abline(v=exp(-2.8))
plot_lambda(exp(-2.8))

But with a larger $\lambda$, there is variable selection: here $\widehat{\beta}_{1,\lambda}=0$

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=1)
par(mfrow=c(1,2))
plot(reg, xvar="lambda", col=c("blue","red"), lwd=2)
abline(v=exp(-2.1))
plot_lambda(exp(-2.1))

# Classification from scratch, logistic with kernels 3/8

Third post of our series on classification from scratch, following the previous post introducing smoothing techniques, with (b)-splines. Consider here kernel-based techniques. Note that here, we do not use the “logistic” model… it is purely non-parametric.

## Kernel-based estimation, from scratch

I like kernels because they are somehow very intuitive. With GLMs, the goal is to estimate $\hat{m}(\mathbf{x})=\mathbb{E}(Y|\mathbf{X}=\mathbf{x})$. Heuristically, we want to compute the (conditional) expected value on the neighborhood of $\mathbf{x}$. If we consider some spatial model, where $\mathbf{x}$ is the location, we want the expected value of some variable $Y$, “on the neighborhood” of $\mathbf{x}$. A natural approach is to use some administrative region (county, département, region, etc). This means that we have a partition of $\mathcal{X}$ (the space where the variable(s) lie). This will yield the regressogram, introduced in Tukey (1961). For convenience, assume some interval / rectangle / box type of partition. In the univariate case, consider $$\hat{m}_{\mathbf{a}}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))}$$or the moving regressogram $$\hat{m}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])}$$In that case, the neighborhood is defined as the interval $(x\pm h)$. That’s nice, but clearly very simplistic. If $\mathbf{x}_i=\mathbf{x}$ and $\mathbf{x}_j=\mathbf{x}-h+\varepsilon$ (with $\varepsilon>0$), both observations are used to compute the conditional expected value. But if $\mathbf{x}_{j'}=\mathbf{x}-h-\varepsilon$, only $\mathbf{x}_i$ is considered, even if the distance between $\mathbf{x}_{j}$ and $\mathbf{x}_{j'}$ is extremely small. Thus, a natural idea is to use weights that are a function of the distance between the $\mathbf{x}_{i}$‘s and $\mathbf{x}$. Use$$\tilde{m}(x)=\frac{\sum_{i=1}^ny_i\cdot k_h\left({x-x_i}\right)}{\sum_{i=1}^nk_h\left({x-x_i}\right)}$$where (classically)$$k_h(x)=k\left(\frac{x}{h}\right)$$for some kernel $k$ (a non-negative function that integrates to one) and some bandwidth $h$. Usually, kernels are denoted with capital letter $K$, but I prefer to use $k$, because it can be interpreted as the density of some random noise we add to all observations (independently). Actually, one can derive that estimate by using kernel-based estimators of densities. Recall that$$\tilde{f}(\mathbf{y})=\frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n k\left(\mathbf{H}^{-1/2}(\mathbf{y}-\mathbf{y}_i)\right)$$Now, use the fact that the expected value can be defined as$$m(x)=\int yf(y|x)dy=\frac{\int y f(y,x)dy}{\int f(y,x)dy}$$Consider now a bivariate (product) kernel to estimate the joint density.
The numerator is estimated by$$\frac{1}{nh}\sum_{i=1}^n\int y_i k\left(t,\frac{x-x_i}{h}\right)dt=\frac{1}{nh}\sum_{i=1}^ny_i \kappa\left(\frac{x-x_i}{h}\right)$$while the denominator is estimated by$$\frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right)$$In a general setting, we still use product kernels between $Y$ and $\mathbf{X}$ and write $$\widehat{m}_{\mathbf{H}}(\mathbf{x})=\displaystyle{\frac{\sum_{i=1}^ny_i\cdot k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}{\sum_{i=1}^n k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}}$$for some symmetric positive definite bandwidth matrix $\mathbf{H}$, and $$k_{\mathbf{H}}(\mathbf{x})=\det[\mathbf{H}]^{-1}k(\mathbf{H}^{-1}\mathbf{x})$$

Now that we know what kernel estimates are, let us use them. For instance, assume that $k$ is the density of the $\mathcal{N}(0,1)$ distribution. At point $x$, with a bandwidth $h$, we get the following code

mean_x = function(x, bw){
  w = dnorm((myocarde$INSYS-x)/bw, mean=0, sd=1)
  weighted.mean(myocarde$PRONO, w)}
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,3))(u)
plot(u, v, ylim=0:1, type="l", col="red")
points(myocarde$INSYS, myocarde$PRONO, pch=19)

and of course, we can change the bandwidth.

v = Vectorize(function(x) mean_x(x,2))(u)
plot(u, v, ylim=0:1, type="l", col="red")
points(myocarde$INSYS, myocarde$PRONO, pch=19)

We observe what we can read in any textbook: with a smaller bandwidth, we get more variance, and less bias. “More variance” means here more variability (since the neighborhood is smaller, there are fewer points to compute the average, and the estimate is more volatile), and “less bias” in the sense that the expected value is supposed to be computed at point $x$, so the smaller the neighborhood, the better.

## Using the ksmooth R function

Actually, there is a function in R to compute this kernel regression.

reg = ksmooth(myocarde$INSYS, myocarde$PRONO, "normal", bandwidth = 2*exp(1))
plot(reg$x, reg$y, ylim=0:1, type="l", col="red", lwd=2, xlab="INSYS", ylab="")
points(myocarde$INSYS, myocarde$PRONO, pch=19)

We can replicate our previous estimate. Nevertheless, the output is not a function, but two series of vectors. That’s nice to get a graph, but that’s all we get. Furthermore, as we can see, the bandwidth is not exactly the same as the one we used before. I did not find any information online, so I tried to replicate the function we wrote before

g = function(bk=3){
  reg = ksmooth(myocarde$INSYS, myocarde$PRONO, "normal", bandwidth = bk)
  f = function(bm){
    v = Vectorize(function(x) mean_x(x,bm))(reg$x)
    z = reg$y - v
    sum((z[!is.na(z)])^2)}
  optim(bk, f)$par}
x = seq(1,10,by=.1)
y = Vectorize(g)(x)
plot(x, y)
abline(0, exp(-1), col="red")
abline(0, .37, col="blue")

There is a slope of $0.37$, which is actually $e^{-1}$. Coincidence ? I don’t know to be honest…
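
One possible explanation (a guess, based on the documentation of ksmooth, which states that kernels are scaled so that their quartiles are at $\pm0.25\times$bandwidth): the standard deviation of the Gaussian kernel used by ksmooth is then $0.25\,\text{bandwidth}/\Phi^{-1}(0.75)$, and

# scale factor between ksmooth's bandwidth and the Gaussian standard deviation
0.25/qnorm(0.75)   # about 0.3707, to be compared with exp(-1) = 0.3679

so the slope may simply reflect that rescaling, rather than $e^{-1}$.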

## Application in higher dimension

Consider now our bivariate dataset, and consider some product of univariate (Gaussian) kernels

u = seq(0,1,length=101)
p = function(x,y){
  bw1 = .2; bw2 = .2
  w = dnorm((df$x1-x)/bw1, mean=0, sd=1)*dnorm((df$x2-y)/bw2, mean=0, sd=1)
  weighted.mean(df$y=="1", w)
}
v = outer(u, u, Vectorize(p))
image(u, u, v, col=clr10, breaks=(0:10)/10)
points(df$x1, df$x2, pch=19, cex=1.5, col="white")
points(df$x1, df$x2, pch=c(1,19)[1+(df$y=="1")], cex=1.5)
contour(u, u, v, levels=.5, add=TRUE)

We get the following prediction

Here, the different colors are probabilities.

## k-nearest neighbors

An alternative is to consider a neighborhood not defined using a distance to point $\mathbf{x}$ but the $k$-neighbors, with the $n$ observations we got.$$\tilde{m}_k(\mathbf{x})=\frac{1}{n}\sum_{i=1}^n\omega_{i,k}(\mathbf{x})y_i$$
where $\omega_{i,k}(\mathbf{x})=n/k$ if $i\in\mathcal{I}_{\mathbf{x}}^k$ with
$$\mathcal{I}_{\mathbf{x}}^k=\{i:\mathbf{x}_i\text{ one of the }k\text{ nearest observations to }\mathbf{x}\}$$
The difficult part here is that we need a valid distance. If units are very different on each component, using the Euclidean distance will be meaningless. So, quite naturally, let us consider here the Mahalanobis distance

Sigma = var(myocarde[,1:7])
Sigma_Inv = solve(Sigma)
d2_mahalanobis = function(x, y, Sinv){as.numeric(x-y)%*%Sinv%*%t(x-y)}
k_closest = function(i, k){
  vect_dist = function(j) d2_mahalanobis(myocarde[i,1:7], myocarde[j,1:7], Sigma_Inv)
  vect = Vectorize(vect_dist)((1:nrow(myocarde)))
  which(rank(vect) <= k)}

Here we have a function to find the $k$ closest neighbors of some observation. Then two things can be done to get a prediction. The goal is to predict a class, so we can think of using a majority rule: the prediction for $y_i$ is the same as the class of the majority of its neighbors.

k_majority = function(k){
  Y = rep(NA, nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = sort(myocarde$PRONO[k_closest(i,k)])[(k+1)/2]
  return(Y)}

But we can also compute the proportion of black points among the closest neighbors. It can actually be interpreted as the probability to be black (that’s actually what was said at the beginning of this post, with kernels),

k_mean = function(k){
  Y = rep(NA, nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = mean(myocarde$PRONO[k_closest(i,k)])
  return(Y)}

We can see on our dataset the observation, the prediction based on the majority rule, and the proportion of dead individuals among the 7 closest neighbors

cbind(OBSERVED=myocarde$PRONO, MAJORITY=k_majority(7), PROPORTION=k_mean(7))
      OBSERVED MAJORITY PROPORTION
 [1,]        1        1  0.7142857
 [2,]        0        1  0.5714286
 [3,]        0        0  0.1428571
 [4,]        1        1  0.5714286
 [5,]        0        1  0.7142857
 [6,]        0        0  0.2857143
 [7,]        1        1  0.7142857
 [8,]        1        0  0.4285714
 [9,]        1        1  0.7142857
[10,]        1        1  0.8571429
[11,]        1        1  1.0000000
[12,]        1        1  1.0000000

Here, we got a prediction for an observed point, located at $\boldsymbol{x}_i$, but actually, it is possible to seek the $k$ closest neighbors of any point $\boldsymbol{x}$. Back on our univariate example (to get a graph), we have

mean_x = function(x, k=9){
  w = rank(abs(myocarde$INSYS-x), ties.method="random")
  mean(myocarde$PRONO[which(w <= k)])}
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,3))(u)
plot(u, v, ylim=0:1, type="l", col="red", lwd=2, xlab="INSYS", ylab="")
points(myocarde$INSYS, myocarde$PRONO, pch=19)

That’s not very smooth, but we do not have a lot of points either. If we use that technique on our two-dimensional dataset, we obtain the following

Sigma_Inv = solve(var(df[,c("x1","x2")]))
u = seq(0,1,length=51)
p = function(x,y){
  k = 6
  vect_dist = function(j) d2_mahalanobis(c(x,y), df[j,c("x1","x2")], Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(df))
  idx = which(rank(vect) <= k)
  return(mean((df$y==1)[idx]))}
v = outer(u, u, Vectorize(p))
image(u, u, v, xlab="Variable 1", ylab="Variable 2", col=clr10, breaks=(0:10)/10)
points(df$x1, df$x2, pch=19, cex=1.5, col="white")
points(df$x1, df$x2, pch=c(1,19)[1+z], cex=1.5)
contour(u, u, v, levels=.5, add=TRUE)

This is the idea of local inference, using either kernel on a neighborhood of $\mathbf{x}$ or simply using the $k$ nearest neighbors. Next time, we will investigate penalized logistic regressions, to be continued

# Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that given covariates $\mathbf{x}$, $Y$ has a Bernoulli distribution,$$Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}$$The goal is to estimate parameter $\mathbf{\beta}$.

Recall that the heuristics for the use of that function for the probability is that$$\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}$$
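
A one-line check of that equivalence (just plugging the expression of $p_{\mathbf{x}}$ back into the log-odds):

xb = .7
p = exp(xb)/(1+exp(xb))
log(p/(1-p))   # returns .7, i.e. the linear predictor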

## Maximum of the (log)-likelihood function

The log-likelihood is here$$\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i)$$ where $p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}$. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1, as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
  -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
}

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~., data=myocarde)$coefficients

Why not start with the parameters of the OLS estimate? Somehow, we might think that at least the signs should be ok, for instance. Anyway, we need a starting point, and let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS
 1.656926397  0.045234029 -2.119441743  0.204023835
       PRDIA        PAPUL        PVENT        REPUL
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)?

simu = function(i){
  logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, negLogLik, hessian=TRUE,
                         method="BFGS", control=list(abstol=1e-9))
  logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1], xlab=names(myocarde)[1])
hist(v_beta[,2], xlab=names(myocarde)[2])

Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1 + exp(mX %*% vBeta))
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) +
       (1-vY)*(-log(1 + exp(mX %*% vBeta))))
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY))
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable,
                         method = 'L-BFGS-B', gr = likelihoodScore,
                         mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let’s be honest here, I do not feel comfortable with those techniques. So, what happened here?

Here, the technique we use is based on the following idea,$$\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}$$The problem is that my computer does not know the first and second derivatives. So it will compute them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
  -sum(y*log(logit(X%*%beta)) + (1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y))
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
  return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init, 500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8), 500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren’t. From my point of view, that is a major problem (keep in mind that we do not deal here with massive data! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical expressions of those derivatives? Even if some people claim the opposite, sometimes, it can actually be useful to do the maths, instead of considering only numerical values.

## Newton (or Fisher) Algorithm

If you open any Econometrics textbooks (one can also try to derive it), you will get $$\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})$$
while$$\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}$$

Y = myocarde$PRONO
X = cbind(1, as.matrix(myocarde[,1:7]))
colnames(X) = c("Inter", names(myocarde[,1:7]))
beta = as.matrix(lm(Y~0+X)$coefficients, ncol=1)
for(s in 1:9){
  pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
  gradient = t(X)%*%(Y-pi)
  omega = matrix(0, nrow(X), nrow(X)); diag(omega) = (pi*(1-pi))
  Hessian = -t(X)%*%omega%*%X
  beta = cbind(beta, beta[,s]-solve(Hessian)%*%gradient)}

Observe that here, I use only ten iterations of the algorithm !

beta[,8:10]
                 [,1]          [,2]          [,3]
XInter  -10.187641685 -10.187641696 -10.187641696
XFRCAR    0.138178119   0.138178119   0.138178119
XINCAR   -5.862429035  -5.862429037  -5.862429037
XINSYS    0.717084018   0.717084018   0.717084018
XPRDIA   -0.073668171  -0.073668171  -0.073668171
XPAPUL    0.016756506   0.016756506   0.016756506
XPVENT   -0.106776012  -0.106776012  -0.106776012
XREPUL   -0.003154187  -0.003154187  -0.003154187

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point
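
For instance, a minimal sketch of that check (not in the original code): rescale the starting point at random and re-run the same iterations (depending on the draw, a few more iterations may be needed),

set.seed(123)
beta = as.matrix(lm(Y~0+X)$coefficients*runif(8), ncol=1)
for(s in 1:9){
  pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
  gradient = t(X)%*%(Y-pi)
  omega = matrix(0, nrow(X), nrow(X)); diag(omega) = (pi*(1-pi))
  Hessian = -t(X)%*%omega%*%X
  beta = cbind(beta, beta[,s]-solve(Hessian)%*%gradient)}
beta[,8:10]   # essentially the same values as above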

# Classification on the German Credit Database

In our data science course, this morning, we’ve used random forests to improve prediction on the German Credit Dataset. The dataset is

> url="http://freakonometrics.free.fr/german_credit.csv"
> credit=read.csv(url, header = TRUE, sep = ",")

Almost all variables are treated as numeric, but actually, most of them are factors,

> str(credit)
'data.frame':	1000 obs. of  21 variables:
 $ Creditability   : int  1 1 1 1 1 1 1 1 1 1 ...
 $ Account.Balance : int  1 1 2 1 1 1 1 1 4 2 ...
 $ Duration        : int  18 9 12 12 12 10 8 ...
 $ Purpose         : int  2 0 9 0 0 0 0 0 3 3 ...

(etc). Let us convert the categorical variables into factors,

> F=c(1,2,4,5,7,8,9,10,11,12,13,15,16,17,18,19,20)
> for(i in F) credit[,i]=as.factor(credit[,i])

Let us now create our training/calibration and validation/testing datasets, with proportion 1/3-2/3

> i_test=sample(1:nrow(credit),size=333)
> i_calibration=(1:nrow(credit))[-i_test]

The first model we can fit is a logistic regression, on selected covariates

> LogisticModel <- glm(Creditability ~ Account.Balance + Payment.Status.of.Previous.Credit + Purpose +
Length.of.current.employment +
Sex...Marital.Status, family=binomial,
data = credit[i_calibration,])

Based on that model, it is possible to draw the ROC curve, and to compute the AUC (on the validation dataset)

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> library(ROCR)
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog1=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog1,"\n")
AUC:  0.7340997

An alternative is to consider a logistic regression on all explanatory variables

> LogisticModel <- glm(Creditability ~ .,
+  family=binomial,
+  data = credit[i_calibration,])

We might overfit, here, and we should observe that on the ROC curve

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog2=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog2,"\n")
AUC:  0.7609792

There is a slight improvement here,  compared with the previous model, where only five explanatory variables were considered.

Consider now some regression tree (on all covariates)

> library(rpart)
> ArbreModel <- rpart(Creditability ~ .,
+  data = credit[i_calibration,])

We can visualize the tree using

> library(rpart.plot)
> prp(ArbreModel,type=2,extra=1)

The ROC curve for that model is

> fitArbre <- predict(ArbreModel,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitArbre, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCArbre=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCArbre,"\n")
AUC:  0.7100323

As expected, a single tree has a lower performance, compared with the logistic regression. And a natural idea is to grow several trees using some bootstrap procedure, and then to aggregate those predictions.

> library(randomForest)
> RF <- randomForest(Creditability ~ .,
+   data = credit[i_calibration,])
> fitForet <- predict(RF,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitForet, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCRF=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCRF,"\n")
AUC:  0.7682367

Here this model is (slightly) better than the logistic regression. Actually, if we create many training/validation samples, and compare the AUC, we can observe that – on average – random forests perform better than logistic regressions,

> AUC=function(i){
+   set.seed(i)
+   i_test=sample(1:nrow(credit),size=333)
+   i_calibration=(1:nrow(credit))[-i_test]
+   LogisticModel <- glm(Creditability ~ .,
+    family=binomial,
+    data = credit[i_calibration,])
+   summary(LogisticModel)
+   fitLog <- predict(LogisticModel,type="response",
+                     newdata=credit[i_test,])
+   library(ROCR)
+   pred = prediction( fitLog, credit$Creditability[i_test])
+   AUCLog2=performance(pred, measure = "auc")@y.values[[1]]
+   RF <- randomForest(Creditability ~ .,
+    data = credit[i_calibration,])
+   fitForet <- predict(RF,
+                       newdata=credit[i_test,],
+                       type="prob")[,2]
+   pred = prediction( fitForet, credit$Creditability[i_test])
+   AUCRF=performance(pred, measure = "auc")@y.values[[1]]
+   return(c(AUCLog2,AUCRF))
+ }
> A=Vectorize(AUC)(1:200)
> plot(t(A))

# Choosing a Classifier

In order to illustrate the problem of choosing a classification model, consider some simulated data,

> n = 500
> set.seed(1)
> X = rnorm(n)
> ma = 10-(X+1.5)^2*2
> mb = -10+(X-1.5)^2*2
> M = cbind(ma,mb)
> set.seed(1)
> Z = sample(1:2,size=n,replace=TRUE)
> Y = ma*(Z==1)+mb*(Z==2)+rnorm(n)*5
> df = data.frame(Z=as.factor(Z),X,Y)

A first strategy is to split the dataset in two parts, a training dataset, and a testing dataset.

> df1 = training = df[1:300,]
> df2 = testing  = df[301:500,]
• The Holdout Method: Training and Testing Datasets

The two datasets can be visualised below, with the training dataset on top, and the testing dataset below

> plot(df1$X,df1$Y,pch=19,col=c(rgb(1,0,0,.4),
+ rgb(0,0,1,.4))[df1$Z])

# Variable Selection using Cross-Validation (and Other Techniques)

A natural technique to select variables in the context of generalized linear models is to use a stepwise procedure. It is natural, but controversial, as discussed by Frank Harrell in a great post, clearly worth reading. Frank mentioned about 10 points against a stepwise procedure.

• It yields R-squared values that are badly biased to be high.
• The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
• The method yields confidence intervals for effects and predicted values that are falsely narrow (see Altman and Andersen (1989)).
• It yields p-values that do not have the proper meaning, and the proper correction for them is a difficult problem.
• It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large (see Tibshirani (1996)).
• It has severe problems in the presence of collinearity.
• It is based on methods (e.g., F tests for nested models) that were intended to be used to test prespecified hypotheses.
• Increasing the sample size does not help very much (see Derksen and Keselman (1992)).
• It allows us to not think about the problem.
• It uses a lot of paper.

# Visualising a Classification in High Dimension, part 2

A few weeks ago, I published a post on Visualising a Classification in High Dimension, based on the use of a principal component analysis, to get a projection on the first two components. Following that post, I was wondering what could be done in the context of a classification on categorical covariates. A natural idea would be to consider a correspondence analysis, and to run a similar code. Consider here the dataset used in a recent post,

> source("http://freakonometrics.free.fr/import_data_credit.R")

If we consider a correspondence analysis, we get

> library(FactoMineR)
> acm=MCA(train.db,quali.sup =
+ which(names(train.db)=="class"),ncp=10)

For the covariates (including also the variable we want to model, considered here as some supplementary variable), the visualisation – on the first two components – is

and for the individuals

# Visualising a Classification in High Dimension

So far, when discussing classification, we’ve been playing on my toy-dataset (actually, I should not claim it’s mine, it is inspired by the one used in the introduction of Boosting, by Robert Schapire and Yoav Freund). But in real life, there are more observations, and more explanatory variables. With more than two explanatory variables, it starts to be more complicated to visualise. For instance, consider

MYOCARDE=read.table(
  "http://freakonometrics.free.fr/saporta.csv",
  head=TRUE,sep=";")

where we have observations from people in E.R., for infarctus, and we want to understand who did survive, to get a predictive model. But before running some classifier, let us visualise our data. Since we have seven explanatory variables and our class (survival or death), we can go for a PCA.

library(FactoMineR) # PCA (on the continuous variables)
X=MYOCARDE[,1:7]
acp=PCA(X)

To add the death/survival variable, treat it as a numerical 0/1 variable (at least to get a direction)

MYOCARDE2=MYOCARDE
MYOCARDE2$PRONO=(MYOCARDE2$PRONO=="SURVIE")*1
acp=PCA(MYOCARDE2,quanti.sup=8,graph=TRUE)

The nice thing is that we see here which variables are collinear with that one. It is also possible to visualise individuals, and classes, too

acp=PCA(MYOCARDE,quali.sup=8,graph=TRUE)
plot(acp, habillage = 8,col.hab=c("red","blue"))

# Supervised Classification, beyond the logistic

In our data-science class, after discussing limitations of the logistic regression, e.g. the fact that the decision boundary is a straight line, we’ve mentioned possible natural extensions. Let us consider our (now) standard dataset

 clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2))
x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z <- c(1,1,1,1,1,0,0,1,0,0)
df <- data.frame(x,y,z)
plot(x,y,pch=19,cex=2,col=clr1[z+1])

One can consider a quadratic function of the covariates (instead of a linear one)

 reg=glm(z~x+y+I(x^2)+I(y^2)+I(x*y),
data=df,family=binomial)
summary(reg)

pred_1 <- function(x,y){
predict(reg,newdata=data.frame(x=x,
y=y),type="response")>.5 }

x_grid<-seq(0,1,length=101)
y_grid<-seq(0,1,length=101)
z_grid <- outer(x_grid,y_grid,pred_1)
image(x_grid,y_grid,z_grid,col=clr2)
points(x,y,pch=19,cex=2,col=clr1[z+1])