
Classification from scratch, penalized Ridge logistic 4/8

Fourth post of our series on classification from scratch, following the previous post, which was some sort of detour on kernels. Today, we get back to the logistic model.

Formal approach of the problem

We have seen before that the classical technique used to estimate the parameters of a parametric model is maximum likelihood. More specifically,
\widehat{\mathbf{\beta}}=\text{argmax}\lbrace \log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})\rbrace
The objective function here focuses (only) on the goodness of fit. But usually, in econometrics, we believe in something like non sunt multiplicanda entia sine necessitate (“entities are not to be multiplied without necessity”), the parsimony principle: simpler theories are preferable to more complex ones. So we want to penalize overly complex models.

This is not a bad idea. It is mentioned here and there in econometrics textbooks, but usually for model choice, not for inference. Usually, we estimate parameters using maximum likelihood techniques, and then we use AIC or BIC to compare two models. Recall that the Akaike (AIC) criterion is based on
-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\text{dim}(\widehat{\mathbf{\beta}})
On the left, we have a measure of the goodness of fit, and on the right, a penalty increasing with the “complexity” of the model.
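As a quick check (a small sketch, assuming the myocarde dataset used throughout this series is loaded), the AIC reported by R is exactly that quantity:

fit_glm = glm(PRONO~., data=myocarde, family=binomial)
-2*as.numeric(logLik(fit_glm)) + 2*length(coef(fit_glm))   # -2 log-likelihood + 2 dim(beta)
AIC(fit_glm)                                               # same value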

Very quickly, here, the complexity is the number of covariates used. I will not go into the details of the concept of sparsity (and the true dimension of the problem); I recommend the book by Martin Wainwright, Robert Tibshirani and Trevor Hastie on that issue. But assume that we do not do any variable selection: we consider the regression on all covariates. Define
\Vert\mathbf{a} \Vert_{\ell_0}=\sum_{i=1}^d \mathbf{1}(a_i\neq 0), ~~\Vert\mathbf{a} \Vert_{\ell_1}=\sum_{i=1}^d |a_i|,~~\Vert\mathbf{a} \Vert_{\ell_2}=\left(\sum_{i=1}^d a_i^2\right)^{1/2}
for any \mathbf{a}\in\mathbb{R}^d. One might say that the AIC could be written
-2\log\mathcal{L}(\widehat{\mathbf{\beta}}|\mathbf{x},\mathbf{y})+2\|\widehat{\mathbf{\beta}}\|_{\ell_0}
And actually, this will be our objective function. More specifically, we will consider
\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|\rbrace
for some norm \|\cdot\|. I will not get back here to the motivation and the (theoretical) properties of those estimators (that will actually be discussed at the Summer School in Barcelona, in July); in this post, I want to discuss the numerical algorithms used to solve such an optimization problem, for \|\cdot\|_{\ell_2} (the Ridge regression) and for \|\cdot\|_{\ell_1} (the LASSO regression).

Normalization of the covariates

The problem with \|\mathbf{\beta}\| is that the norm should make sense, somehow: whether \mathbf{\beta}_j is small or not depends on the scale (the “dimension”) of x_j. So the first step will be to consider linear transformations of all covariates x_j, to get centered and scaled variables (with unit variance)

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])  # center and scale each covariate
X = as.matrix(X)
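Equivalently (a one-line alternative producing the same standardization), one could use scale(),

X = scale(as.matrix(myocarde[,1:7]))   # centers and scales each column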

Ridge Regression (from scratch)

Before running some code, recall that we want to solve something like
\widehat{\mathbf{\beta}}_{\lambda}=\text{argmin}\lbrace -\log\mathcal{L}(\mathbf{\beta}|\mathbf{x},\mathbf{y})+\lambda\|\mathbf{\beta}\|_{\ell_2}^2\rbrace
In the case where we consider the log-likelihood of some Gaussian variable, we get the sum of the squares of the residuals, and we can obtain an explicit solution. But not in the context of a logistic regression.
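For comparison, here is a minimal sketch of that explicit solution in the Gaussian (least-squares) case, \widehat{\mathbf{\beta}}_{\lambda}=(\mathbf{X}^T\mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{y}, using the standardized X and the 0/1 response y defined above (and penalizing the intercept too, just to keep it short); there is no such closed form for the logistic loss,

lambda0 = 1
Xc = cbind(1, X)                                                  # add an intercept column
solve(crossprod(Xc) + lambda0*diag(ncol(Xc)), crossprod(Xc, y))   # closed-form ridge, linear model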

The heuristics behind Ridge regression can be seen on the following graph. In the background, we can visualize the (two-dimensional) log-likelihood of the logistic regression, and the blue circle is the constraint we have, if we rewrite the optimization problem as a constrained optimization problem:
\min_{\mathbf{\beta}:\|\mathbf{\beta}\|^2_{\ell_2}\leq s} \lbrace \sum_{i=1}^n -\log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) \rbrace
can be written equivalently (it is a strictly convex problem)
\min_{\mathbf{\beta},\lambda} \lbrace -\sum_{i=1}^n \log\mathcal{L}(y_i,\beta_0+\mathbf{x}^T\mathbf{\beta}) +\lambda \|\mathbf{\beta}\|_{\ell_2}^2 \rbrace
Thus, the constrained maximum should lie in the blue disk.

# for this two-dimensional picture, only two covariates are used (here, simply the first two columns of X)
X2 = X[,1:2]
LogLik = function(bbeta){
  b0   = bbeta[1]
  beta = bbeta[-1]
  sum(-y*log(1 + exp(-(b0+X2%*%beta))) - 
      (1-y)*log(1 + exp(b0+X2%*%beta)))}
u = seq(-4,4,length=251)
v = outer(u,u,Vectorize(function(x,y) LogLik(c(1,x,y))))
image(u,u,v,col=rev(heat.colors(25)))
contour(u,u,v,add=TRUE)
u = seq(-1,1,length=251)
lines(u,sqrt(1-u^2),type="l",lwd=2,col="blue")
lines(u,-sqrt(1-u^2),type="l",lwd=2,col="blue")

Let us consider the objective function, with the following code

PennegLogLik = function(bbeta,lambda=0){
  b0   = bbeta[1]
  beta = bbeta[-1]
  # penalized negative log-likelihood (the intercept b0 is not penalized)
 -sum(-y*log(1 + exp(-(b0+X%*%beta))) - (1-y)*
  log(1 + exp(b0+X%*%beta)))+lambda*sum(beta^2)
}
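As a sanity check (a small sketch, using the standardized X and the response y defined above): with \lambda=0, we should recover the negative log-likelihood of the unpenalized logistic regression, evaluated at the maximum likelihood estimator,

glm_std = glm(y~X, family=binomial)
PennegLogLik(coef(glm_std), lambda=0)   # negative log-likelihood at the MLE
-as.numeric(logLik(glm_std))            # should match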

Why not try a standard optimisation routine? In the very first post of this series, we mentioned that using optimization routines was not clever, since they rely strongly on the starting point. But here, it is not the case

lambda = 1
beta_init = lm(PRONO~.,data=myocarde)$coefficients
vpar = matrix(NA,1000,8)
for(i in 1:1000){
vpar[i,] = optim(par = beta_init*rnorm(8,1,2), 
function(x) PennegLogLik(x,lambda), method = "BFGS", control = list(abstol=1e-9))$par}
par(mfrow=c(1,2))
plot(density(vpar[,2]),ylab="",xlab=names(myocarde)[1])
plot(density(vpar[,3]),ylab="",xlab=names(myocarde)[2])


Clearly, even if we change the starting point, it looks like we converge towards the same value. That could be considered as the optimum.

The code to compute \widehat{\mathbf{\beta}}_{\lambda} would then be

opt_ridge = function(lambda){
beta_init = lm(PRONO~.,data=myocarde)$coefficients
logistic_opt = optim(par = beta_init*0, function(x) PennegLogLik(x,lambda), 
method = "BFGS", control=list(abstol=1e-9))
logistic_opt$par[-1]}

and we can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} as a function of {\lambda}

v_lambda = c(exp(seq(-2,5,length=61)))
est_ridge = Vectorize(opt_ridge)(v_lambda)
library("RColorBrewer")
colrs = brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1])
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])

At least it seems to make sense: we can observe the shrinkage as \lambda increases (we’ll get back to that later on).

Ridge, using the Newton-Raphson algorithm

We have seen that we can also use Newton-Raphson to solve this problem. Without the penalty term, the algorithm was
\mathbf{\beta}_{new} = \mathbf{\beta}_{old} - \left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}
where
\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
and
\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}
where \mathbf{\Delta}_{old} is the diagonal matrix with terms \mathbf{p}_{old}(1-\mathbf{p}_{old}) on the diagonal.

Thus
\mathbf{\beta}_{new} = \mathbf{\beta}_{old} + (\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T[\mathbf{y}-\mathbf{p}_{old}]
which we can also write
\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}
where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. For the penalized problem, we can easily prove that
\frac{\partial\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}=\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}}-2\lambda\mathbf{\beta}_{old}
while
\frac{\partial^2\log\mathcal{L}_p(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{\lambda,old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}-2\lambda\mathbb{I}
Hence
\mathbf{\beta}_{\lambda,new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}+2\lambda\mathbb{I})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}
The code is then

Y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
X = cbind(1,X)
colnames(X) = c("Inter",names(myocarde[,1:7]))
 beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   Delta = matrix(0,nrow(X),nrow(X));diag(Delta)=(pi*(1-pi))
   z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
   B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
   beta = cbind(beta,B)}
beta[,8:10]
              [,1]        [,2]        [,3]
XInter  0.59619654  0.59619654  0.59619654
XFRCAR  0.09217848  0.09217848  0.09217848
XINCAR  0.77165707  0.77165707  0.77165707
XINSYS  0.69678521  0.69678521  0.69678521
XPRDIA -0.29575642 -0.29575642 -0.29575642
XPAPUL -0.23921101 -0.23921101 -0.23921101
XPVENT -0.33120792 -0.33120792 -0.33120792
XREPUL -0.84308972 -0.84308972 -0.84308972

Again, it seems that convergence is very fast.

And interestingly, with that algorithm, we can also derive the variance of the estimator
\text{Var}[\widehat{\mathbf{\beta}}_{\lambda}]=[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}\mathbf{X}^T\mathbf{\Delta}\text{Var}[\mathbf{z}]\mathbf{\Delta}\mathbf{X}[\mathbf{X}^T\mathbf{\Delta}\mathbf{X}+2\lambda\mathbb{I}]^{-1}
where
\text{Var}[\mathbf{z}]=\mathbf{\Delta}^{-1}

The code to compute \widehat{\mathbf{\beta}}_{\lambda} as a function of \lambda is then

newton_ridge = function(lambda=1){
 beta = as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:20){
   pi = exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   Delta = matrix(0,nrow(X),nrow(X));diag(Delta)=(pi*(1-pi))
   z = X%*%beta[,s] + solve(Delta)%*%(Y-pi)
   B = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% (t(X)%*%Delta%*%z)
   beta = cbind(beta,B)}
Varz = solve(Delta)
Varb = solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X))) %*% t(X)%*% Delta %*% Varz %*%
  Delta %*% X %*% solve(t(X)%*%Delta%*%X+2*lambda*diag(ncol(X)))
return(list(beta=beta[,ncol(beta)],sd=sqrt(diag(Varb))))}

We can visualize the evolution of \widehat{\mathbf{\beta}}_{\lambda} (as a function of \lambda)

v_lambda=c(exp(seq(-2,5,length=61)))
est_ridge=Vectorize(function(x) newton_ridge(x)$beta)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i])


and to get the evolution of the variance

v_lambda=c(exp(seq(-2,5,length=61)))
est_ridge=Vectorize(function(x) newton_ridge(x)$sd)(v_lambda)
library("RColorBrewer")
colrs=brewer.pal(7,"Set1")
plot(v_lambda,est_ridge[1,],col=colrs[1],type="l")
for(i in 2:7) lines(v_lambda,est_ridge[i,],col=colrs[i],lwd=2)


Recall that when \lambda=0 (on the left of the graphs), \widehat{\mathbf{\beta}}_{0}=\widehat{\mathbf{\beta}}^{mco} (no penalty). Thus, as \lambda increases, (i) the bias increases (estimates tend to 0) and (ii) the variances decrease.
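As a quick numerical illustration of (ii), reusing the newton_ridge function above,

round(newton_ridge(lambda=.1)$sd, 3)
round(newton_ridge(lambda=10)$sd, 3)   # standard errors are (much) smaller with a larger penalty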

Ridge, using glmnet

As always, there are R functions available to run a ridge regression. Let us use the glmnet function, with \alpha=0

y = myocarde$PRONO
X = myocarde[,1:7]
for(j in 1:7) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
X = as.matrix(X)
library(glmnet)
glm_ridge = glmnet(X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)

and as a function of the norm of the coefficients,
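presumably via the default plot (a guess, since that call is not shown here; plot.glmnet uses xvar="norm" by default),

plot(glm_ridge,col=colrs,lwd=2)   # x-axis: the l1 norm of the coefficient vector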

The x-axis is the \ell_1 norm here, I don’t know why. I don’t know either why all the graphs obtained with different optimisation routines are so different… Maybe that will be for another post…

Ridge with orthogonal covariates

An interesting case is obtained when covariates are orthogonal. This can be obtained using a PCA of the covariates.

library(factoextra)
pca = princomp(X)
pca_X = get_pca_ind(pca)$coord
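We can check that those principal-component scores are indeed (numerically) uncorrelated, hence orthogonal once centered,

round(cor(pca_X), 4)   # (almost) the identity matrix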

Let us run a ridge regression on those (orthogonal) covariates

library(glmnet)
glm_ridge = glmnet(pca_X, y, alpha=0)
plot(glm_ridge,xvar="lambda",col=colrs,lwd=2)

plot(glm_ridge,col=colrs,lwd=2)

We clearly observe the shrinkage of the parameters, in the sense that \widehat{\mathbf{\beta}}_{\lambda}^{\perp}=\frac{\widehat{\mathbf{\beta}}^{mco}}{1+\lambda}
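That identity is the classical least-squares result with orthonormal covariates. Here is a minimal sketch of it (on an orthonormalized version of the principal-component scores, with the 0/1 response y, and an arbitrary \lambda; note that glmnet scales \lambda differently, so the check is done with the explicit ridge formula rather than with glmnet),

Xo = qr.Q(qr(pca_X))                                              # orthonormal columns: t(Xo)%*%Xo = identity
lambda0 = 2
b_ols   = t(Xo) %*% y                                             # least-squares estimate, since t(Xo)%*%Xo = I
b_ridge = solve(t(Xo)%*%Xo + lambda0*diag(ncol(Xo)), t(Xo)%*%y)   # ridge estimate (least squares)
cbind(b_ridge, b_ols/(1+lambda0))                                 # the two columns coincide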

Application

Let us try with our second set of data

df0 = df
df0$y=as.numeric(df$y)-1
plot_lambda = function(lambda){
m = apply(df0,2,mean)
s = apply(df0,2,sd)
for(j in 1:2) df0[,j] = (df0[,j]-m[j])/s[j]   # standardize the two covariates (locally)
reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0,lambda=lambda)
u = seq(0,1,length=101)
p = function(x,y){
  xt = (x-m[1])/s[1]
  yt = (y-m[2])/s[2]
  predict(reg,newx=cbind(x1=xt,x2=yt),type='response')}
v = outer(u,u,p)
image(u,u,v,col=clr10,breaks=(0:10)/10)   # clr10: the 10-colour palette used earlier in this series
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}

We can try various values of \lambda

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(.2))
plot_lambda(.2)


or

reg = glmnet(cbind(df0$x1,df0$x2), df0$y==1, alpha=0)
par(mfrow=c(1,2))
plot(reg,xvar="lambda",col=c("blue","red"),lwd=2)
abline(v=log(1.2))
plot_lambda(1.2)


The next step is to change the norm of the penalty, and consider the \ell_1 norm (to be continued…)

Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that, given covariates \mathbf{x}, Y has a Bernoulli distribution,
Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}
The goal is to estimate parameter \mathbf{\beta}.

Recall that the heuristics for the use of that function for the probability is that the log-odds are linear in the covariates,
\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}
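A short simulation may help fix ideas (a sketch with an arbitrary \mathbf{\beta}, not from the original post): if we draw data from that model, glm recovers \mathbf{\beta},

set.seed(1)
n = 1e4
x_sim = cbind(1, rnorm(n), rnorm(n))                       # design matrix, with an intercept
b_sim = c(-1, 2, .5)
y_sim = rbinom(n, size=1, prob=1/(1+exp(-x_sim%*%b_sim)))
coef(glm(y_sim ~ x_sim[,-1], family=binomial))             # close to (-1, 2, .5)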

Maximum of the (log-)likelihood function

The log-likelihood is here
\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i)
where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
 -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
 }

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~.,data=myocarde)$coefficients

Why not start with the OLS estimates? Somehow, we might think that, at least, the signs should be right, for instance. Anyway, we need a starting point, and let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

 logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS    
 1.656926397  0.045234029 -2.119441743  0.204023835 
       PRDIA        PAPUL        PVENT        REPUL 
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)

simu = function(i){
logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, 
negLogLik, hessian=TRUE, method = "BFGS", 
control=list(abstol=1e-9))
logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])

Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) 
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) + 
(1-vY)*(-log(1 + exp(mX %*% vBeta)))) 
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable, 
method = 'L-BFGS-B', gr = likelihoodScore, 
mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let’s be honest here, I do not feel comfortable with those techniques. So, what happened here?

Here, the technique we use is based on the following idea:
\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}
The problem is that my computer does not know these first and second derivatives. So it computes them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
 -sum(y*log(logit(X%*%beta)) + 
(1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y)) 
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren’t. From my point of view, that is a major problem (keep in mind that we do not deal here with massive data! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical expressions of those derivatives? Even if some people claim the opposite, it can sometimes actually be useful to do the maths, instead of considering only numerical approximations.

Newton (or Fisher) Algorithm

If you open any econometrics textbook (one can also try to derive it), you will get
\frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
while
\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}
where \mathbf{\Delta}_{old} is the diagonal matrix with terms \mathbf{p}_{old}(1-\mathbf{p}_{old}) on the diagonal. The code is then

Y=myocarde$PRONO
X=cbind(1,as.matrix(myocarde[,1:7]))
colnames(X)=c("Inter",names(myocarde[,1:7]))
 beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)                                 # X'(y-p)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X                              # -X' Delta X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}   # Newton-Raphson update

Observe that here, I use fewer than ten iterations of the algorithm!

 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point

beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

Nice, isn’t it? It looks like we got our winner, doesn’t it? And one can use the inverse of the Hessian matrix to get standard deviations.
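For instance (a small sketch, reusing the Hessian computed in the last iteration above), the standard deviations are the square roots of the diagonal of minus the inverse Hessian,

sqrt(diag(solve(-Hessian)))   # asymptotic standard errors, close to the glm output below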

Weighted Least-Squares

Let us go one step further. We have seen that we want to compute something like
\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}
(if we do substitute matrices in the analytical expressions) where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that is simply a standard weighted least-squares problem
\mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbrace
with weights \mathbf{\Delta}_{old}=\text{Var}[\mathbf{z}]^{-1}. The only problem here is that the weights \mathbf{\Delta}_{old} are functions of the unknown \mathbf{\beta}_{old}. But actually, if we keep iterating, we should be able to solve it: given \mathbf{\beta} we get the weights, and with the weights, we can use weighted OLS to get an updated \mathbf{\beta}. That is the idea of iteratively reweighted least squares (IRLS).

The algorithm will be

df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
p = exp(X %*% beta) / (1+exp(X %*% beta))
omega = diag(nrow(df))
diag(omega) = (p*(1-p))
df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)              # working response z
beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients    # weighted least-squares update
}

and the output is here

 beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA 
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171 
        PAPUL         PVENT         REPUL 
  0.016756506  -0.106776012  -0.003154187

which is almost what we obtained before. Nice, isn’t it? Actually, here we also get standard deviations for the estimators

summary( lm(Z~.,data=df[,-8], weights=diag(omega)))
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475

The standard glm function

Of course, it is possible to use an R built-in function to get our estimate

summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519

Application and visualisation

Let us visualize the prediction obtained from the logistic regression, on our second dataset

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
# clr10 is the 10-colour palette used throughout this series (e.g. clr10 = rev(heat.colors(10)))
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Here level curves – or iso-probability curves – are linear, so the space is divided in two (0 and 1, survival and death, white and black) by a straight line (or a hyperplane in higher dimension). Furthermore, since we have a linear model, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane), parallel to the first one.
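To see that, we can simply add a couple of other iso-probability curves on top of the previous picture (a small sketch, reusing u and v computed above): they are parallel straight lines,

contour(u, u, v, levels=c(.25,.75), add=TRUE, lty=2)   # other cutoffs give parallel lines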

Next time, we will introduce splines to smooth those continuous covariates… to be continued.

Newton-Raphson, with a picture

In a previous post, published a few months ago now, I found it a pity that we forget to cite the people who introduced fundamental concepts. I was thinking, in that case, of Bruno de Finetti. Browsing Google Books the other day, I had the opposite impression, finding it a pity that we only remember the person who introduced a concept, and not the one who understood its general scope. But perhaps I should give a bit of context first.

While working with a student this summer, we had to solve optimization problems (namely, estimating the parameters of a mixture of distributions by maximum likelihood), and I still remember telling him, “it's easy, you code a gradient descent, the Newton-Raphson method, and that's it, we'll deal with the constraints later” (yes, we had a simple set of constraints that could be handled by reparametrizing the optimization program). Newton's method relies on the idea that, to find the zero of a function f, one uses a sequence defined by the recurrence
x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}

The famous illustration of this formula is the classical drawing where one slides down the curve along its successive tangents.

From there, if we remember that the first-order condition for finding an extremum of a function amounts to setting its derivative to zero, we can use this method to look for an extremum, with a sequence defined by
x_{n+1}=x_n-\frac{f'(x_n)}{f''(x_n)}

This method can be generalized to any dimension, with the so-called gradient-descent type methods.
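For what it's worth, here is a tiny version of that recurrence in R (my own illustration, obviously not taken from those treatises), used to find the zero of f(x)=x^2-2,

f  = function(x) x^2 - 2
fp = function(x) 2*x                   # the derivative of f
x0 = 2
for(i in 1:6) x0 = x0 - f(x0)/fp(x0)   # Newton's recurrence
x0                                     # converges (fast) to sqrt(2)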

In short, these methods are very geometric. And I had always believed that the old mathematicians were all geometers. So I was hoping to find plenty of drawings in the texts of Newton and Raphson. But no…

Newton's method was introduced by Isaac Newton in De analysi per aequationes numero terminorum infinitas, written in 1669 and published in 1711. But it is mostly in De metodis fluxionum et serierum infinitarum, written in 1671, translated (from the Latin) and published under the title Methods of Fluxions in 1736, that one finds the idea of the so-called Newton method for finding the zeros of a real-valued function. It is that text which can be found online, digitized. But Newton did not propose a general method at all, since he applied it only to polynomials. In his defence, the notion of derivative was not (clearly) defined at that time… so imagining a descent along the tangent, in 1710, might not have been reasonable.

It was Thomas Simpson who proposed to generalize this method, setting up an iterative scheme for computing the solutions of a nonlinear equation, using derivatives (which he called fluxions, like Newton). In 1690, Raphson published a description of his method in Analysis aequationum universalis. Like Newton, he proposed a recursive computation of successive approximations of a zero of a polynomial (with a small nuance, since Newton tried to construct a sequence of polynomials, as explained by Ypma (1995) or Deuflhard (2005)).

Now, if we look at those publications, the drawing with the tangent is nowhere to be seen! Far from it! We see pages of algebraic computations.

It took me a while to discover a treatise tackling this problem with drawings. For lovers of Marseille mathematics, there is a refreshing treatise, dating from 1768, published moreover by a man who would later become mayor of Marseille (back when politicians did mathematics, as I discussed in an old post, with a proof of the Pythagorean theorem proposed by an amateur mathematician who would later become President of the United States of America), Mr Jean-Raymond Pierre Mouraille.

This treatise is fascinating, with a style from another era.

Mouraille clearly positions himself in Newton's lineage, though with a few reservations.

The most interesting part is the appendix pages, full of drawings, where at last we can discover the picture with the sequence defined by recurrence, and the descent along the tangent.

In short, everyone thinks of these drawings when hearing about the Newton-Raphson method, even though no geometric representation is presented in the original publications.

I do not know whether Jean-Raymond Pierre Mouraille was the first to represent the search for the zeros of a function in this way, but it is a pity that he is not mentioned more often…