Classification from scratch, logistic with splines 2/8

Today, the second post of our series on classification from scratch, following the brief introduction to logistic regression.

Piecewise linear splines

To illustrate what’s going on, let us start with a “simple” regression (with only one explanatory variable). The underlying idea is natura non facit saltus, “nature does not make jumps”, i.e. the processes governing natural phenomena are continuous. That seems to be a rather strong assumption: we could instead assume that there is a fixed threshold to explain death. For instance, if patients die (for sure) when the “stroke index” exceeds some threshold, we might expect some discontinuity. Except that if that threshold is heterogeneous (a non-observable continuous variable), then we get back to the continuity assumption.

The simplest model we can think of to extend the linear model we’ve seen in the previous post is a piecewise linear function, with two parts: small values of x, and larger values of x. The most convenient way to do so is to use the positive part function (x-s)_+, which is the difference between x and s if that difference is positive, and 0 otherwise. For instance, \beta_1 x+\beta_2(x-s)_+ is a piecewise linear function, continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, the trend is linear with slope \beta_1, and for larger values of x, it is linear with slope \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.
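For instance, a small sketch (with purely illustrative values \beta_1=1, \beta_2=-2 and knot s=3, not taken from any data) to visualize such a function:

x = seq(0,6,by=.1)
plot(x, 1*x - 2*(x-3)*(x>=3), type="l", lwd=2)   # beta1*x + beta2*(x-s)_+
abline(v=3, lty=2)                               # the knot s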

And of course, it is possible to consider more than one knot. The function to compute the positive part is the following

pos = function(x,s) (x-s)*(x>=s)

then we can use it directly in our regression model

reg = glm(PRONO~INSYS+pos(INSYS,15)+
pos(INSYS,25),data=myocarde,family=binomial)

The output of the regression is here

summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)     -0.1109     3.2783  -0.034   0.9730  
INSYS           -0.1751     0.2526  -0.693   0.4883  
pos(INSYS, 15)   0.7900     0.3745   2.109   0.0349 *
pos(INSYS, 25)  -0.5797     0.2903  -1.997   0.0458 *

Hence, the original slope, for very small values, is not significant, but then, above 15, it becomes significantly positive. And above 25, there is a significant change again. We can plot it to see what’s going on

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,type="l")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
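On the logit scale, the slopes of the three segments are \beta_1, \beta_1+\beta_2 and \beta_1+\beta_2+\beta_3; a quick sketch to read them off the fit above (assuming reg is the piecewise model just estimated):

cumsum(coefficients(reg)[2:4])   # slopes below 15, between 15 and 25, and above 25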

Using bs() linear splines

Using the bs() function from the splines package, things are slightly different. We will use here so-called B-splines,

library(splines)

We can define spline functions with support (5,55) and with knots \{15,25\}

clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)


as we can see, the functions defined here are different from the ones before, but we still have (piecewise) linear functions on each segment (5,15), (15,25) and (25,55). But linear combinations of those functions (of either set) generate the same space. Said differently, even if the interpretation of the output is different, the predictions should be the same

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=1),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept)    -0.9863     2.0555  -0.480   0.6314  
bs(INSYS,..)1  -1.7507     2.5262  -0.693   0.4883  
bs(INSYS,..)2   4.3989     2.0619   2.133   0.0329 *
bs(INSYS,..)3   5.4572     5.4146   1.008   0.3135

Observe that there are three coefficients, as before, but again, the interpretation is more complicated here…

v=predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Nevertheless, the prediction is the same… and that’s nice.
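To convince ourselves numerically, one can store the two fits under different names (reg_pos and reg_bs are hypothetical names, used only for this check) and compare the predictions on the grid:

reg_pos = glm(PRONO~INSYS+pos(INSYS,15)+pos(INSYS,25),
              data=myocarde,family=binomial)
reg_bs  = glm(PRONO~bs(INSYS,knots=c(15,25),Boundary.knots=c(5,55),degree=1),
              data=myocarde,family=binomial)
max(abs(predict(reg_pos,newdata=data.frame(INSYS=u),type="response")-
        predict(reg_bs,newdata=data.frame(INSYS=u),type="response")))
# should be (numerically) close to zero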

Piecewise quadratic splines

Let us go one step further… Can we also have continuity of the derivative? Yes, and that’s actually easy, considering parabolic functions. Instead of using a decomposition on x,(x-s_1)_+ and (x-s_2)_+, consider now a decomposition on x,x^{\color{red}{2}},(x-s_1)^{\color{red}{2}}_+ and (x-s_2)^{\color{red}{2}}_+.

 pos2 = function(x,s) (x-s)^2*(x>=s)
reg = glm(PRONO~poly(INSYS,2)+pos2(INSYS,15)+pos2(INSYS,25),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
                Estimate Std. Error z value Pr(>|z|)  
(Intercept)      29.9842    15.2368   1.968   0.0491 *
poly(INSYS, 2)1 408.7851   202.4194   2.019   0.0434 *
poly(INSYS, 2)2 199.1628   101.5892   1.960   0.0499 *
pos2(INSYS, 15)  -0.2281     0.1264  -1.805   0.0712 .
pos2(INSYS, 25)   0.0439     0.0805   0.545   0.5855

As expected, there are here five coefficients: the intercept and two for the part on the left (three parameters for the parabolic function), and then one additional term for the part in the center – here (15,25) – and one for the part on the right. Of course, for each portion, there is only one degree of freedom since we have a parabolic function (three coefficients) but two constraints (continuity, and continuity of the first order derivative).
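As a quick (hedged) numerical check that the first derivative is indeed continuous at the knots, one can compare finite differences of the linear predictor on both sides of 15 and 25 (assuming reg is the quadratic fit above):

eta = function(x) predict(reg,newdata=data.frame(INSYS=x),type="link")
h = 1e-4
sapply(c(15,25), function(s)
  c(left=(eta(s)-eta(s-h))/h, right=(eta(s+h)-eta(s))/h))
# the left and right slopes should (almost) coincide at each knot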

On a graph, we get the following

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() quadratic splines

Of course, we can do the same with our R function. But as before, the basis of functions is expressed differently here

 x = seq(0,60,by=.25)
B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",xlab="INSYS",col=clr6)


If we run the R code, we get

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=2),data=myocarde,
family=binomial)
summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)       7.186      5.261   1.366   0.1720  
bs(INSYS, ..)1  -14.656      7.923  -1.850   0.0643 .
bs(INSYS, ..)2   -5.692      4.638  -1.227   0.2198  
bs(INSYS, ..)3   -2.454      8.780  -0.279   0.7799  
bs(INSYS, ..)4    6.429     41.675   0.154   0.8774

But that’s not really a big deal since the prediction is exactly the same

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Cubic splines

Last, but not least, we can get to cubic splines. With our previous notation, we would consider a decomposition on (guess what) x,x^2,x^{\color{red}{3}},(x-s_1)^{\color{red}{3}}_+,(x-s_2)^{\color{red}{3}}_+, to get, this time, continuity, as well as continuity of the first two derivatives (and to get a very smooth function, since even variations will be smooth). If we use the bs function, the basis is the following

B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=3)
matplot(x,B,type="l",lwd=2,col=clr6,lty=1,ylim=c(-.2,1.2))
abline(v=c(5,15,25,55),lty=2)

and the prediction will now be

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=3),
data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
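For comparison, a small sketch of the same cubic fit using the truncated power basis x, x^2, x^3, (x-s_1)^3_+, (x-s_2)^3_+ mentioned above (pos3 is a name introduced here for illustration); the predicted curve should essentially overlay the one obtained with bs():

pos3 = function(x,s) (x-s)^3*(x>=s)
reg_tp = glm(PRONO~poly(INSYS,3)+pos3(INSYS,15)+pos3(INSYS,25),
             data=myocarde,family=binomial)
lines(u,predict(reg_tp,newdata=data.frame(INSYS=u),type="response"),
      col="blue",lty=2)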


Two last things before concluding (for today), the location of the knots, and the extension to additive models.

Location of knots

In many applications, we do not want to specify the location of the knots. We just want – say – three (intermediate) knots. This can be done using

reg = glm(PRONO~1+bs(INSYS,degree=1,df=4),data=myocarde,family=binomial)

We can actually get the locations of the knots by looking at

attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 1L, knots = c(15.8, 21.4, 27.15), 
Boundary.knots = c(8.7, 54), intercept = FALSE)

which provides us with the location of the boundary knots (the minimum and the maximum of our sample), but also the three intermediate knots. Observe that, actually, those five values are just (empirical) quantiles

quantile(myocarde$INSYS,(0:4)/4)
   0%   25%   50%   75%  100% 
 8.70 15.80 21.40 27.15 54.00
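As a small check (assuming reg is still the df=4 fit above), specifying those quartiles explicitly as knots should give the same fit:

reg_q = glm(PRONO~bs(INSYS,degree=1,knots=quantile(myocarde$INSYS,(1:3)/4)),
            data=myocarde,family=binomial)
max(abs(predict(reg)-predict(reg_q)))   # essentially zero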

If we plot the prediction, we get

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


If we look at what was computed before the logit transformation (the linear predictor), we clearly see ruptures at the different quantiles

B = bs(x,degree=1,knots=quantile(myocarde$INSYS,(1:3)/4),
       Boundary.knots=range(myocarde$INSYS))   # same basis (same knots) as in the regression
B = cbind(1,B)
y = B%*%coefficients(reg)
plot(x,y,type="l",col="red",lwd=2)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


Note that if we do not specify anything about the knots (number or location), we get no knots…

reg = glm(PRONO~1+bs(INSYS,degree=2),data=myocarde,family=binomial)
attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 2L, knots = numeric(0), 
Boundary.knots = c(8.7,54), intercept = FALSE)

and if we look at the prediction

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)


actually, it is the same as a quadratic regression (as expected)

reg = glm(PRONO~1+poly(INSYS,degree=2),data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)

Additive models

Consider now the second dataset, with two variables. Consider here a model like
\mathbb{P}[Y|X_1=x_1,X_2=x_2]=\frac{\exp[\eta(x_1,x_2)]}{1+\exp[\eta(x_1,x_2)]}
where
\eta(x_1,x_2)=\beta_0+\color{red}{s_1(x_1)}+\color{blue}{s_2(x_2)}
\color{red}{s_1(x_1)}=\beta_{1,0}x_1+\beta_{1,1}(x_1-s_{11})_++\beta_{1,2}(x_1-s_{12})_+
and
\color{blue}{s_2(x_2)}=\beta_{2,0}x_2+\beta_{2,1}(x_2-s_{21})_++\beta_{2,2}(x_2-s_{22})_+
It might seem a little bit restrictive, but that’s actually the idea of additive models.

reg = glm(y~bs(x1,degree=1,df=3)+bs(x2,degree=1,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
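# clr10, a palette of ten colours, was defined in the first post of this series;
# if it is not in the workspace, the following is a (purely cosmetic) stand-in
if(!exists("clr10")) clr10 = rev(heat.colors(10))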
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Now, if we think about it, we’ve been able to get a “perfect” model, so, somehow, it seems that it is no longer continuous…

persp(u,u,v,theta=20,phi=40,col="green")


Of course, it is… it is piecewise linear, with hyperplanes, some being almost vertical.

And one can also consider piecewise quadratic functions

reg = glm(y~bs(x1,degree=2,df=3)+bs(x2,degree=2,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Funny thing, we now have two “perfect” models, with different areas for the white and the black dots… Don’t ask me how to choose between those two.

In R, it is possible to use the mgcv package to run a gam regression. It is used for generalized additive models; with a single covariate, as in most of this post, it is difficult to see the “additive” part, actually. And to be more specific, mgcv is using penalized quasi-likelihood from the nlme package (but we’ll get back to penalized routines later on).
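As a (hedged) sketch of the syntax, something like the following, with a penalized smooth term s(INSYS):

library(mgcv)
reg_gam = gam(PRONO~s(INSYS),data=myocarde,family=binomial)
summary(reg_gam)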

But maybe I should also mention another smoothing tool first: kernels (and maybe also k-nearest neighbors). To be continued…

Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that, given covariates \mathbf{x}, Y has a Bernoulli distribution, Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}), with p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}. The goal is to estimate the parameter \mathbf{\beta}.

Recall that the heuristic for using that function for the probability is that \log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}
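Indeed, solving \log\frac{p_\mathbf{x}}{1-p_\mathbf{x}}=\mathbf{x}^T\mathbf{\beta} for p_\mathbf{x} gives back p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}, the expression above.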

Maximum of the (log-)likelihood function

The log-likelihood is here \log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i) where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
 -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
 }

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~.,data=myocarde)$coefficients

Why not start with the parameters from the OLS fit? Somehow, we might think that, at least, the signs should be ok, for instance. Anyway, we need a starting point, so let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

 logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS    
 1.656926397  0.045234029 -2.119441743  0.204023835 
       PRDIA        PAPUL        PVENT        REPUL 
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)

simu = function(i){
logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, 
negLogLik, hessian=TRUE, method = "BFGS", 
control=list(abstol=1e-9))
logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])

Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) 
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) + 
(1-vY)*(-log(1 + exp(mX %*% vBeta)))) 
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable, 
method = 'L-BFGS-B', gr = likelihoodScore, 
mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let’s be honest here, I do not feel comfortable with those techniques. So, what happened here?

Here, the technique we use is based on the following idea: \mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}. The problem is that my computer does not know these first and second derivatives, so it will compute them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
 -sum(y*log(logit(X%*%beta)) + 
(1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y)) 
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren’t. From my point of view, that is a major problem (keep in mind that we do not deal here with massive data! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical expressions of those derivatives? Even if some people claim the opposite, it can sometimes actually be useful to do the maths, instead of considering only numerical values.

Newton (or Fisher) Algorithm

If you open any econometrics textbook (one can also try to derive it), you will get \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old}) while \frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}, where \mathbf{\Delta}_{old} is the diagonal matrix with the p_{old,i}(1-p_{old,i})’s on the diagonal.

Y=myocarde$PRONO
X=cbind(1,as.matrix(myocarde[,1:7]))
colnames(X)=c("Inter",names(myocarde[,1:7]))
 beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}

Observe that here, I use only ten iterations of the algorithm !

 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point

beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

Nice, isn’t it? Looks like we got our winner, don’t we? And one can use the inverse of the Hessian matrix to get standard deviations.
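A small sketch, using the Hessian computed in the last Newton iteration above (minus the Hessian is \mathbf{X}^T\mathbf{\Delta}\mathbf{X}, whose inverse is the variance matrix of the estimator):

sqrt(diag(solve(-Hessian)))   # standard errors of the coefficients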

Weighted Least-Squares

Let us go one step further. We’ve seen that we want to compute something like \mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z} (if we substitute the matrices in the analytical expressions), where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that’s simply a standard weighted least-squares problem \mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbrace. The only problem here is that the weights \mathbf{\Delta}_{old} are functions of the unknown \mathbf{\beta}_{old}. But actually, if we keep iterating, we should be able to solve it: given \mathbf{\beta} we get the weights, and with the weights, we can use weighted OLS to get an updated \mathbf{\beta}. That’s the idea of iteratively reweighted least squares.

The algorithm will be

df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
p = exp(X %*% beta) / (1+exp(X %*% beta))
omega = diag(nrow(df))
diag(omega) = (p*(1-p))
df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)
beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients
}

and the output is here

 beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA 
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171 
        PAPUL         PVENT         REPUL 
  0.016756506  -0.106776012  -0.003154187

which is almost what we’ve obtained before. Nice, isn’t it? Actually, here we also have standard deviations of the estimators

summary( lm(Z~.,data=df[,-8], weights=diag(omega)))
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475

The standard glm function

Of course, it is possible to use an R built-in function to get our estimate

summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519

Application and visualisation

Let us visualize the prediction obtained from the logistic regression, on our second dataset

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
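# clr10 is the ten-colour palette used throughout this series; as a stand-in,
if(!exists("clr10")) clr10 = rev(heat.colors(10))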
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Here, level curves – or iso-probability curves – are linear, so the space is divided in two (0 and 1, survival and death, white and black) by a straight line (or a hyperplane in higher dimension). Furthermore, since we have a linear model, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane), parallel to the first one.
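A quick sketch to visualize this, reusing the objects above: adding iso-probability curves at other cutoffs to the previous plot indeed shows parallel straight lines,

contour(u,u,v,levels=c(.25,.75),add=TRUE,lty=2)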

Next time, we will introduce splines to smooth those continuous covariates… to be continued.