Tag Archives: logit

Classification from scratch, logistic with kernels 3/8

Third post of our series on classification from scratch, following the previous post introducing smoothing techniques, with (b)-splines. Consider here kernel based techniques. Note that here, we do not use the “logistic” model… it is purely non-parametric.

Kernel based estimates, from scratch

I like kernels because they are somehow very intuitive. With GLMs, the goal is to estimate \hat{m}(\mathbf{x})=\mathbb{E}(Y|\mathbf{X}=\mathbf{x}). Heuristically, we want to compute the (conditional) expected value on the neighborhood of \mathbf{x}. If we consider some spatial model, where \mathbf{x} is the location, we want the expected value of some variable Y "on the neighborhood" of \mathbf{x}. A natural approach is to use some administrative region (county, département, region, etc.). This means that we have a partition of \mathcal{X} (the space where the variable(s) lie). This yields the regressogram, introduced in Tukey (1961). For convenience, assume some interval / rectangle / box type of partition. In the univariate case, consider \hat{m}_{\mathbf{a}}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))}or the moving regressogram \hat{m}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])}In that case, the neighborhood is defined as the interval (x\pm h). That's nice, but clearly very simplistic. If \mathbf{x}_i=\mathbf{x} and \mathbf{x}_j=\mathbf{x}-h+\varepsilon (with \varepsilon>0), both observations are used to compute the conditional expected value. But if \mathbf{x}_{j'}=\mathbf{x}-h-\varepsilon, only \mathbf{x}_i is considered, even though the distance between \mathbf{x}_{j} and \mathbf{x}_{j'} is extremely small. Thus, a natural idea is to use weights that are functions of the distance between the \mathbf{x}_{i}'s and \mathbf{x}. Use\tilde{m}(x)=\frac{\sum_{i=1}^ny_i\cdot k_h\left({x-x_i}\right)}{\sum_{i=1}^nk_h\left({x-x_i}\right)}where (classically)k_h(x)=k\left(\frac{x}{h}\right)for some kernel k (a non-negative function that integrates to one) and some bandwidth h. Usually, kernels are denoted with a capital letter K, but I prefer to use k, because it can be interpreted as the density of some random noise we add to all observations (independently).
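As a side note, here is a minimal sketch of the moving regressogram above, using the myocarde data introduced later in this series (INSYS as the covariate, PRONO as the 0/1 response) and an arbitrary half-width h=5; the kernel estimator used below simply replaces these indicator weights with smooth ones,

mean_window = function(x, h=5){
  # plain average of the responses within the window [x-h, x+h]
  idx = which(abs(myocarde$INSYS-x) <= h)
  mean(myocarde$PRONO[idx])
}
u = seq(5,55,length=201)
v = Vectorize(mean_window)(u)
plot(u,v,ylim=0:1,type="s",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)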

Actually, one can derive that estimate by using kernel-based estimators of densities. Recall that\tilde{f}(\mathbf{y})=\frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n k\left(\mathbf{H}^{-1/2}(\mathbf{y}-\mathbf{y}_i)\right)
Now, use the fact that the expected value can be defined as m(x)=\int yf(y|x)dy=\frac{\int y f(y,x)dy}{\int f(y,x)dy}. Consider now a bivariate (product) kernel to estimate the joint density. The numerator is estimated by\frac{1}{nh^2}\sum_{i=1}^n\int y\, k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^ny_i \kappa\left(\frac{x-x_i}{h}\right)(with the change of variable t=(y-y_i)/h, and using the symmetry of the kernel), while the denominator is estimated by\frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right)where \kappa(u)=\int k(t,u)dt. In a general setting, we still use product kernels between Y and \mathbf{X} and write \widehat{m}_{\mathbf{H}}(\mathbf{x})=\displaystyle{\frac{\sum_{i=1}^ny_i\cdot k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}{\sum_{i=1}^n k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}}for some symmetric positive definite bandwidth matrix \mathbf{H}, and k_{\mathbf{H}}(\mathbf{x})=\det[\mathbf{H}]^{-1}k(\mathbf{H}^{-1}\mathbf{x})

Now that we know what kernel estimates are, let us use them. For instance, assume that k is the density of the \mathcal{N}(0,1) distribution. At point x, with a bandwidth h we get the following code

mean_x = function(x,bw){
  # Gaussian kernel weights centered at x, with bandwidth bw
  w = dnorm((myocarde$INSYS-x)/bw, mean=0,sd=1)
  # Nadaraya-Watson estimate: weighted average of the 0/1 responses
  weighted.mean(myocarde$PRONO,w)}
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


and of course, we can change the bandwidth.

v = Vectorize(function(x) mean_x(x,2))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


We observe what we can read in any textbook: with a smaller bandwidth, we get more variance and less bias. "More variance" means here more variability (since the neighborhood is smaller, there are fewer points to compute the average, and the estimate is more volatile), and "less bias" in the sense that the expected value is supposed to be computed at point x, so the smaller the neighborhood, the better.
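To visualise that trade-off, a quick sketch (the bandwidths 1, 3 and 10 are arbitrary choices) overlaying several estimates,

u = seq(5,55,length=201)
plot(myocarde$INSYS,myocarde$PRONO,pch=19,ylim=0:1,xlab="INSYS",ylab="")
bws = c(1,3,10)
cols = c("blue","red","orange")
for(i in 1:3){
  v = Vectorize(function(x) mean_x(x,bws[i]))(u)
  lines(u,v,col=cols[i],lwd=2)
}
legend("bottomright",legend=paste("bw =",bws),col=cols,lwd=2)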

Using ksmooth R function

Actually, there is a function in R to compute this kernel regression.

reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = 2*exp(1))
plot(reg$x,reg$y,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)

We can replicate our previous estimate. Nevertheless, the output is not a function, but two vectors (the x and y coordinates of the estimated curve). That's nice to get a graph, but that's all we get. Furthermore, as we can see, the bandwidth is not exactly the same as the one we used before. I did not find any information online, so I tried to match it with the function we wrote before

g=function(bk=3){
  # for a given ksmooth bandwidth bk, find the bandwidth bm of our mean_x
  # function that gives the closest fit to the ksmooth output
  reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = bk)
  f=function(bm){
    v = Vectorize(function(x) mean_x(x,bm))(reg$x)
    z = reg$y-v
    sum((z[!is.na(z)])^2)}
  optim(bk,f)$par}
x=seq(1,10,by=.1)
y=Vectorize(g)(x)
plot(x,y)
abline(0,exp(-1),col="red")
abline(0,.37,col="blue")


There is a slope of 0.37, which is close to e^{-1}. Coincidence? I don't know, to be honest…
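One plausible explanation: according to the ksmooth help page, kernels are scaled so that their quartiles are at ±0.25*bandwidth, which for a Gaussian kernel corresponds to a standard deviation of 0.25*bandwidth/qnorm(0.75), i.e. a slope of about 0.371 – numerically close to, but not exactly, e^{-1},

0.25/qnorm(0.75)
[1] 0.3706506
exp(-1)
[1] 0.3678794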

Application in higher dimension

Consider now our bivariate dataset, and consider some product of univariate (Gaussian) kernels

u = seq(0,1,length=101)
p = function(x,y){
  bw1 = .2; bw2 = .2
  w = dnorm((df$x1-x)/bw1, mean=0,sd=1)*
      dnorm((df$x2-y)/bw2, mean=0,sd=1)
  weighted.mean(df$y=="1",w)
}
v = outer(u,u,Vectorize(p))
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

We get the following prediction

Here, the different colors are probabilities.

k-nearest neighbors

An alternative is to consider a neighborhood defined not by a distance to point \mathbf{x}, but by the k nearest neighbors, among the n observations we have.\tilde{m}_k(\mathbf{x})=\frac{1}{n}\sum_{i=1}^n\omega_{i,k}(\mathbf{x})y_i
where \omega_{i,k}(\mathbf{x})=n/k if i\in\mathcal{I}_{\mathbf{x}}^k with
\mathcal{I}_{\mathbf{x}}^k=\{i:\mathbf{x}_i\text{ one of the }k\text{ nearest observations to }\mathbf{x}\}
The difficult part here is that we need a valid distance. If units are very different on each component, using the Euclidean distance will be meaningless. So, quite naturally, let us consider here the Mahalanobis distance

Sigma = var(myocarde[,1:7])
Sigma_Inv = solve(Sigma)
d2_mahalanobis = function(x,y,Sinv){as.numeric(x-y)%*%Sinv%*%t(x-y)}
k_closest = function(i,k){
  vect_dist = function(j) d2_mahalanobis(myocarde[i,1:7],myocarde[j,1:7],Sigma_Inv)
  vect = Vectorize(vect_dist)((1:nrow(myocarde)))
  # indices of the k smallest (squared) distances
  which(rank(vect) <= k)}

Here we have a function to find the k closest neighbors of some observation. Then two things can be done to get a prediction. The goal is to predict a class, so we can think of using a majority rule: the prediction for y_i is the class of the majority of its neighbors.

k_majority = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = sort(myocarde$PRONO[k_closest(i,k)])[(k+1)/2]
  return(Y)}

But we can also compute the proportion of black points among the closest neighbors. It can actually be interpreted as the probability of being black (which is actually what we did at the beginning of this post, with kernels),

k_mean = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = mean(myocarde$PRONO[k_closest(i,k)])
  return(Y)}

We can see on our dataset the observation, the prediction based on the majority rule, and the proportion of dead individuals among the 7 closest neighbors

cbind(OBSERVED=myocarde$PRONO,
MAJORITY=k_majority(7),PROPORTION=k_mean(7))
      OBSERVED MAJORITY PROPORTION
 [1,]        1        1  0.7142857
 [2,]        0        1  0.5714286
 [3,]        0        0  0.1428571
 [4,]        1        1  0.5714286
 [5,]        0        1  0.7142857
 [6,]        0        0  0.2857143
 [7,]        1        1  0.7142857
 [8,]        1        0  0.4285714
 [9,]        1        1  0.7142857
[10,]        1        1  0.8571429
[11,]        1        1  1.0000000
[12,]        1        1  1.0000000

Here, we got a prediction for an observed point, located at \boldsymbol{x}_i, but actually, it is possible to seek the k closest neighbors of any point \boldsymbol{x}. Back to our univariate example (to get a graph), we have

mean_x = function(x,k=9){
  # ranks of the distances to x (random tie-breaking), keep the k nearest
  w = rank(abs(myocarde$INSYS-x),ties.method ="random")
  mean(myocarde$PRONO[which(w<=k)])}
u=seq(5,55,length=201)
v=Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


That’s not very smooth, but we do not have a lot of points either.

If we use that technique on our two-dimensional dataset, we obtain the following

Sigma_Inv = solve(var(df[,c("x1","x2")]))
u = seq(0,1,length=51)
p = function(x,y){
  k = 6
  vect_dist = function(j)  d2_mahalanobis(c(x,y),df[j,c("x1","x2")],Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(df)) 
  idx  = which(rank(vect)<=k)
  return(mean((df$y==1)[idx]))}
v = outer(u,u,Vectorize(p))
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

This is the idea of local inference, using either a kernel on a neighborhood of \mathbf{x} or simply the k nearest neighbors. Next time, we will investigate penalized logistic regressions. To be continued…

Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that given covariates \mathbf{x}, Y has a Bernoulli distribution,Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}The goal is to estimate parameter \mathbf{\beta}.

Recall that the heuristics for the use of that function for the probability is that\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}
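Indeed (a one-line check), since 1-p_{\mathbf{x}}=(1+\exp[\mathbf{x}^T\mathbf{\beta}])^{-1}, we have

\frac{p_{\mathbf{x}}}{1-p_{\mathbf{x}}}=\exp[\mathbf{x}^T\mathbf{\beta}],\qquad\text{hence}\qquad\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}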

Maximum of the (log-)likelihood function

The log-likelihood is here\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i) where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
  # negative log-likelihood of the logistic model
  -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
 }

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~.,data=myocarde)$coefficients

Why not start with the parameters of the OLS fit? Somehow, we might think that at least the signs should be ok, for instance. Anyway, we need a starting point, so let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

 logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS    
 1.656926397  0.045234029 -2.119441743  0.204023835 
       PRDIA        PAPUL        PVENT        REPUL 
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)

simu = function(i){
logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, 
negLogLik, hessian=TRUE, method = "BFGS", 
control=list(abstol=1e-9))
logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])

Oops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) 
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) + 
(1-vY)*(-log(1 + exp(mX %*% vBeta)))) 
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable, 
method = 'L-BFGS-B', gr = likelihoodScore, 
mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let's be honest here, I do not feel comfortable with those techniques. So, what happened here?

Here, the technique we use is based on the following idea,\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}The problem is that my computer does not know these first and second derivatives. So it will compute them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
 -sum(y*log(logit(X%*%beta)) + 
(1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    # analytical gradient of the negative log-likelihood
    grad = (t(X)%*%(logit(X%*%beta) - y))
    # numerical Hessian (complex-step differentiation, from numDeriv)
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    # Newton-Raphson update (generalized inverse, from MASS)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
  return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren't. From my point of view, that is a major problem (keep in mind that we are not dealing with massive data here! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical values of those derivatives? Even if some people claim the opposite, sometimes it can actually be useful to do the maths, instead of considering only numerical values.

Newton (or Fisher) Algorithm

If you open any Econometrics textbooks (one can also try to derive it), you will get \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
while\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}where \mathbf{\Delta}_{old} is the diagonal matrix with entries p_{old,i}(1-p_{old,i}).

Y=myocarde$PRONO
X=cbind(1,as.matrix(myocarde[,1:7]))
colnames(X)=c("Inter",names(myocarde[,1:7]))
beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)
for(s in 1:9){
  pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
  gradient=t(X)%*%(Y-pi)
  omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
  Hessian=-t(X)%*%omega%*%X
  # Newton-Raphson update, stored as a new column
  beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}

Observe that here, I use only nine iterations of the algorithm!

 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point

beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

Nice, isn't it? Looks like we got our winner, didn't we? And one can use the inverse of the Hessian matrix to get standard deviations.
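For instance (a small sketch, not in the original code), using the Hessian computed at the last iteration of the loop above, standard errors are the square roots of the diagonal of the inverse of minus the Hessian,

sqrt(diag(solve(-Hessian)))

and those values should be close to the standard errors reported by glm further below.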

Weighted Least-Squares

Let us go one step further. We've seen that we want to compute something like\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}(if we substitute the matrices into the analytical expressions) where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that's simply a standard weighted least-squares problem\mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbraceThe only problem here is that the weights \mathbf{\Delta}_{old} are functions of the unknown \mathbf{\beta}_{old}. But actually, if we keep iterating, we should be able to solve it: given \mathbf{\beta} we get the weights, and with the weights, we can use weighted OLS to get an updated \mathbf{\beta}. That's the idea of iteratively reweighted least squares.

The algorithm will be

df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
  p = exp(X %*% beta) / (1+exp(X %*% beta))
  # diagonal weight matrix, with entries p(1-p)
  omega = diag(nrow(df))
  diag(omega) = (p*(1-p))
  # working response z, then weighted least squares
  df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)
  beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients
}

and the output is here

 beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA 
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171 
        PAPUL         PVENT         REPUL 
  0.016756506  -0.106776012  -0.003154187

which is almost what we've obtained before. Nice, isn't it? Actually, here we also get standard deviations for the estimators

summary( lm(Z~.,data=df[,-8], weights=diag(omega)))
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475

The standard glm function

Of course, it is possible to use an R built-in function to get our estimate

summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519

Application and visualisation

Let us visualize the prediction obtained from the logistic regression, on our second dataset

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Here level curves – or iso-probabilities – are linear, so the space is divided into two (0 and 1, survival and death, white and black) by a straight line (or a hyperplane in higher dimension). Furthermore, since we have a linear model, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane) parallel to the first one.
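As a quick check (a one-line addition to the previous code, reusing the u and v computed above), adding other iso-probability curves shows parallel straight lines,

contour(u,u,v,levels = c(.25,.75),add=TRUE,lty=2)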

Next time, we will introduce splines to smooth those continuous covariates… to be continued.

Classification from scratch, overview 0/8

Before my course on « big data and economics » at the University of Barcelona in July, I wanted to upload a series of posts on classification techniques, to get some insight into machine learning tools.

According to a common idea, machine learning algorithms are black boxes. I wanted to get back to that saying. First of all, isn't it also the case for regression models, like generalized additive models (with splines)? Do you really know what the algorithm is doing? Even for the logistic regression. In textbooks, we can easily find math formulas. But what is really done when I run it in R?

When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let's try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R. What I plan to mention, within the next two weeks, will be the posts of this series.

I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])

Here is some code to get a visualization of the prediction (here the probability to be a black point)

rmatrix_model = function(model){
u = seq(0,1,length=101)
p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
return(v)}
nice_graph=function(v){
u = seq(0,1,length=101)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))

Note that colors are defined here as

clr10= c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b")

or with some nonlinear model

The second one is a dataset I got from Gilbert Saporta, about heart attacks and survival (our binary variable).

myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))

So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), about comparing models, etc. The goal here is simply to understand what's going on when we call either glm, glmnet, gam, random forest, svm, xgboost, or any function to get a predictive model.

Multinomial Logit as an Iterated Logit Regression

For the second section of the course at ENSAE, yesterday, we saw how to run a multinomial logistic regression model. It is simply an extension of the binomial logistic regression. But actually, it is also possible to consider iterated binomial regressions.

Consider here a response variable Y with a multinomial distribution (3 factors to have something more general than the binomial), taking values \{A,B,C\}, with respective probabilities \mathbf{p}=(p_A,p_B,p_C). Here is a code to generate some multinomial variables

msample=function(A,B,C){
  # draw B values from A, row i of C giving the probabilities for draw i
  Y=rep(NA,B)
  for(i in 1:B){Y[i]=sample(A,size=1,prob=C[i,])}
  return(Y)
}

and here is a code to generate a dataset with n rows,

generate3=function(n,x,pb=c(-2,0)){
set.seed(x)
X1=runif(n)
X2=runif(n)
X3=runif(n)
s1=pb[1]+X1+X2
s2=pb[2]-X1+X2
P1=exp(s1)/(1+exp(s1)+exp(s2))
P2=exp(s2)/(1+exp(s1)+exp(s2))
Y=msample(0:2,n,cbind(1-P1-P2,P1,P2))
df=data.frame(Y=Y,X1=X1,X2=X2,X3=X3)
return(df)
}

Let us generate a training dataset and a validation one

pb=c(.31,.42)
DF1=generate3(1000,1,pb=pb)
DF2=generate3(500,2,pb=pb)

With a multinomial logistic regression
\mathbb{P}[Y=A|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}
\mathbb{P}[Y=B|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}
\mathbb{P}[Y=C|\mathbf{x}]=\frac{1}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}

For convenience, consider the most popular factor in our training dataset

modalite=names(sort(table(DF1$Y),decreasing = TRUE))

Consider a regression model on the simulated dataset (with several covariates), let us estimate it, and let us get predictions.

library(nnet)
reg=multinom(as.factor(Y) ~ ., data = DF1)
mp1=predict (reg, DF1, "probs")
mp2=predict (reg, DF2, "probs")

An alternative can be the following:
consider a first regression model on the Bernoulli variable Y_A=\mathbf{1}(Y=A). Actually, we will consider the most frequent factor, but for convenience, assume that it is A.
\mathbb{P}[Y_A=1|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}
On our dataset, estimate that model, and get predictions. In the case where Y\neq A, define another Bernoulli variable Y_B=\mathbf{1}(Y=B|Y\neq A). We can estimate that model and derive two probabilities, \mathbb{P}(Y=B|Y\neq A) and \mathbb{P}(Y=C|Y\neq A) (the sum of the two being equal to 1). Based on those two models, it is possible to compute the three probabilities we are looking for: \mathbb{P}[Y=A] is obtained from the first model, and we can derive the other two from \mathbb{P}[Y=B|Y\neq A]\cdot\mathbb{P}[Y\neq A] and \mathbb{P}[Y=C|Y\neq A]\cdot\mathbb{P}[Y\neq A].

reg1=glm((Y==modalite[1])~.,data=DF1,family=binomial)
reg2=glm((Y==modalite[2])~.,data=DF1[-which(DF1$Y==modalite[1]),],family=binomial)
p11=predict (reg1, newdata=DF1, type="response")
p12=predict (reg2, newdata=DF1, type="response")
p21=predict (reg1, newdata=DF2, type="response")
p22=predict (reg2, newdata=DF2, type="response")
mmp1=cbind(p11,(1-p11)*p12,(1-p11)*(1-p12))
mmp2=cbind(p21,(1-p21)*p22,(1-p21)*(1-p22))
colnames(mmp1)=colnames(mmp2)=modalite

Let us compare the predicted probabilities, on the same dataset (here the training dataset)

> mmp1[1:9,c("0","1","2")]
0 1 2
1 0.19728737 0.4991805 0.3035321
2 0.17244580 0.5648537 0.2627005
3 0.19291753 0.5971058 0.2099767
4 0.09087176 0.7787304 0.1303978
5 0.23400225 0.4083022 0.3576955
6 0.18063647 0.6637352 0.1556283
7 0.13188881 0.7402710 0.1278401
8 0.13776970 0.6524959 0.2097344
9 0.12325864 0.6790336 0.1977078
> mp1[1:9,c("0","1","2")]
0 1 2
1 0.19691036 0.5022692 0.3008205
2 0.17123189 0.5680647 0.2607034
3 0.19293066 0.5984402 0.2086291
4 0.08821851 0.7813318 0.1304497
5 0.23470739 0.4109990 0.3542936
6 0.18249687 0.6602168 0.1572863
7 0.13128711 0.7400898 0.1286231
8 0.13525341 0.6553618 0.2093848
9 0.12090016 0.6815915 0.1975084
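To quantify how close the two sets of predictions are (a small sketch, reordering the columns of the sequential predictions to match those returned by multinom), we can look at the largest absolute difference,

max(abs(mmp1[,colnames(mp1)]-mp1))
max(abs(mmp2[,colnames(mp2)]-mp2))

which should be small, of the order of the differences visible in the table above.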

The two are very close. So yes, it is possible to see the multinomial regression as some sequential binomial regressions.

Actuariat de l’Assurance Non-Vie #2

On Tuesday, we will start the actuarial science course, with an introductory lecture in the morning; in the afternoon, we will get down to more serious business, with an appetizer on classification models (of the type {occurrence} vs. {non-occurrence} of an accident). Here again, the slides are online.

I will also talk about the first assignment, which will deal with the models we will see (again) here.

 

Re-parametrization and Maximum Likelihood

The maximum likelihood estimator is invariant in the sense that for any bijective function g, if \widehat{\theta} is the maximum likelihood estimator of \theta, then g(\widehat{\theta}) is the maximum likelihood estimator of g(\theta). Let \eta=g(\theta), so that \theta=g^{-1}(\eta); the likelihood function in \eta is then \mathcal{L}^\star(\eta)=\mathcal{L}(g^{-1}(\eta)). And since \widehat{\theta} is the maximum likelihood estimator of \theta,

\mathcal{L}^\star(\eta)=\mathcal{L}(g^{-1}(\eta))\leq\mathcal{L}(\widehat{\theta})=\mathcal{L}^\star(g(\widehat{\theta}))

hence, g(\widehat{\theta}) is the maximum likelihood estimator of \eta=g(\theta).

For instance, the Bernoulli distribution is \mathcal{B}(p), with p\in(0,1) and

\mathbb{P}(X=x)=p^x(1-p)^{1-x},\ x\in\{0,1\}

Given a sample \{x_1,\cdots,x_n\}, the likelihood is

\mathcal{L}(p)=\prod_{i=1}^n p^{x_i}(1-p)^{1-x_i}=p^{\sum x_i}(1-p)^{n-\sum x_i}

The log-likelihood is then

\log\mathcal{L}(p)=\left(\sum x_i\right)\log p+\left(n-\sum x_i\right)\log(1-p)

with

\frac{\partial}{\partial p}\log\mathcal{L}(p)=\frac{\sum x_i}{p}-\frac{n-\sum x_i}{1-p}

Thus, the first order condition

\frac{\partial}{\partial p}\log\mathcal{L}(p)=0

is satisfied when \widehat{p}=\overline{x}=\frac{1}{n}\sum x_i. In order to illustrate, consider the following data


> set.seed(1)
> X=sample(0:1,size=15,replace=TRUE)
> X
[1] 0 0 1 1 0 1 1 1 1 0 0 0 1 0 1

The (negative) log-likelihood is here


> loglik=function(p){
+ -sum(log(dbinom(X,size=1,prob=p)))
+ }

that we can visualize below


> u=seq(0,1,by=.025)
> v=-Vectorize(loglik)(u)
> plot(u,v,type="l",xlab="",ylab="")

From the calculations above, we know that the maximum likelihood estimator for p is the sample mean \overline{x},


> mean(X)
[1] 0.5333333

The numerical version is


> (opt=optim(.5,loglik))
$par
[1] 0.5333008

$value
[1] 10.36385

$counts
function gradient
20 NA

$convergence
[1] 0

$message
NULL

Somehow, we were lucky here, because we did not say that the optimization was on the interval [0,1]. Nevertheless, our estimator for the probability belongs to [0,1]. In order to ensure that the optimal value is in [0,1], we can consider some constrained optimization routine


> constrOptim(.5, loglik, grad=NULL,ui=matrix(c(1,-1),2,1), ci=c(0,-1))
$par
[1] 0.5333008

$value
[1] 10.36385

$counts
function gradient
20 NA

$convergence
[1] 0

$message
NULL

$outer.iterations
[1] 2

$barrier.value
[1] 6.909277e-05

On the previous graph, we did – indeed – reach that maximum of the log-likelihood


> abline(v=opt$par,col="red")

An alternative is to consider \theta=\log\frac{p}{1-p}, i.e. p=\frac{e^{\theta}}{1+e^{\theta}} (as in the exponential family). The log-likelihood is then

\log\mathcal{L}(\theta)=\theta\sum x_i-n\log(1+e^{\theta})

since

\log p=\theta-\log(1+e^{\theta})\ \text{ and }\ \log(1-p)=-\log(1+e^{\theta})

Here

\frac{\partial}{\partial\theta}\log\mathcal{L}(\theta)=\sum x_i-n\frac{e^{\theta}}{1+e^{\theta}}

Thus, the first order condition

\frac{\partial}{\partial\theta}\log\mathcal{L}(\theta)=0

is satisfied when

\frac{e^{\widehat{\theta}}}{1+e^{\widehat{\theta}}}=\overline{x}

i.e.

\widehat{\theta}=\log\frac{\overline{x}}{1-\overline{x}}

From a numerical perspective, we have the same optimal value


> loglik=function(theta){
+ -sum(log(dbinom(X,size=1,prob=exp(theta)/(1+exp(theta)))))
+ }
> (opt=optim(0,loglik))
$par
[1] 0.1335938

$value
[1] 10.36385

$counts
function gradient
20 NA

$convergence
[1] 0

$message
NULL
> exp(opt$par)/(1+exp(opt$par))
[1] 0.5333489

Modelling Occurence of Events, with some Exposure

This afternoon, an interesting point was raised, and I wanted to get back to it (since I published a post on that same topic a long time ago). How can we adapt a logistic regression when all the observations do not have the same exposure? Here the model is the following:

  • the occurrence of an event Y_i^\star on the period [0,1] is unobserved
  • the occurrence of an event Y_i on [0,E_i] is observed (as well as E_i)

If we assume that the 'occurrence of an event' is the first occurrence of a Poisson process, we can prove that

\{Y_i^\star=0\}=\{\text{no event on }[0,E_i]\}\cap\{\text{no event on }(E_i,1]\}

i.e. no event occurs on [0,1] if no event occurs on [0,E_i] and no event occurs on (E_i,1]. Assuming independence between the two, we can prove that we have

\mathbb{P}(Y=0) = \mathbb{P}(N=0)^E

where N is the number of events on a period of length 1. In words, it means that the probability of not having a claim in the first six months of the year is the square root of the probability of not having a claim over the whole year. Which makes sense.
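This suggests a standard way to fit such a model (a sketch, not from the original post, with made-up names): if the number of events N_i on [0,1] is Poisson with \log\lambda_i=\mathbf{x}_i^T\mathbf{\beta}, then \mathbb{P}(Y_i=1)=1-\exp(-\lambda_i E_i), i.e. a binomial GLM with a complementary log-log link and \log E_i as an offset,

set.seed(123)
n = 1000
x = rnorm(n)
E = runif(n)                     # exposures in (0,1)
lambda = exp(-1+x)               # true annual intensity
Y = (rpois(n,lambda*E)>0)*1      # at least one event observed on [0,E]
reg = glm(Y~x, offset=log(E), family=binomial(link="cloglog"))
coef(reg)                        # should be close to (-1,1)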

Continue reading Modelling Occurence of Events, with some Exposure

There is no “Too Big” Data, is there?

A few years ago, a former classmate came back to me with a simple problem. He was working for some insurance company (and still is, don't worry, chatting with me is not yet a reason for dismissal), and his problem was that their dataset was too large to run (standard) codes to get a regression, and some predictions. My answer was to use sub-sampling techniques, and I still believe that this might be a good idea (actually, I wrote a long post on that issue, entitled too large datasets for regression? What about subsampling). But I wanted to go further, since I did not discuss predictions obtained with sub-sampling techniques.

So, consider here a logistic regression, based on some covariates. We have k explanatory variables (k will be large, but not too large) and n observations (with n\gg k), so we have a (potentially) big matrix product \mathbf{X}\mathbf{\beta}, with a large n\times k matrix \mathbf{X}. Here, assume that we have 100,000 individual observations, and 100 possible variables (and the intercept). Actually, in my model, only 2 variables were actually used in the real model. Assume further that explanatory variables are – potentially – correlated.

n=100000
library(mnormt)
k=50
r=.2
Sig=matrix(r,k,k)
diag(Sig)=1
X=rmnorm(n,varcov=Sig)
U=pnorm(rmnorm(n,varcov=Sig))
# only U1 and X1 enter the true model
p=exp(-U[,1]-X[,1]-1)/(1+exp(-U[,1]-X[,1]-1))
Y=rbinom(n,size=1,p)
df=data.frame(Y,U,X)
names(df)=c("Y",paste("U",1:50,sep=""),paste("X",1:50,sep=""))
reg=glm(Y~.,data=df,family="binomial")

In some sense, it is not too big, since we can run a regression on that dataset with a simple laptop (even if it can still be seen as a large dataset, in the sense discussed in http://businessweek.com/…). But let us consider an alternative strategy, to be able to get some predictions – or some model – in the case we cannot run a regression. Two strategies will be compared,

  • generate 100 datasets with n/10 observations each, by sub-sampling
  • generate 100 datasets with n/100 observations each, by sub-sampling,

On each dataset, we can now run a regression, and compare the estimation of the coefficients with the "true" regression (on the whole dataset, since here, we can still run it). Then, since out of 100 explanatory variables, only 2 were actually used to generate the output, we should probably remove unnecessary variables in our model. So, some stepwise procedures were used.

L1=L2=L1s=L2s=list()
library(MASS)
ns1=n/10
ns2=n/100
for(s in 1:100){
i=sample(1:n,size=ns1,replace=TRUE)
reg_sub=glm(Y~.,data=df[i,],family="binomial")
L1[[s]]=reg_sub
L1s[[s]]=stepAIC(reg_sub)
i=sample(1:n,size=ns2,replace=TRUE)
reg0=glm(Y~.,data=df[i,],family="binomial")
L2[[s]]=reg0
L2s[[s]]=stepAIC(reg0)
}

For instance, if we consider the very first coefficient which should appear in the regression (let us forget about the intercept), or the second coefficient (which was not considered to generate the dataset), we get

VC=c(-1,-1,rep(0,49),-1,rep(0,49))
coef=function(k){
C1=unlist(lapply(L1,function(x) coefficients(x)[k]))
C2=unlist(lapply(L2,function(x) coefficients(x)[k]))
m=summary(reg)$coefficients
u=seq(quantile(C2,.2),quantile(C2,.8),length=501)
v=dnorm(u,m[k,1],m[k,2])
plot(u,v,col="white",xlab="",ylab="",axes=FALSE)
axis(1)
polygon(c(u,rev(u)),c(v,rep(0,length(u))),col="grey",border=NA)
abline(v=VC[k],lty=2)
boxplot(C1,horizontal=TRUE,add=TRUE,at=max(v)/3)
boxplot(C2,horizontal=TRUE,add=TRUE,at=max(v)/3*2)
}

coef(2)

where the density in grey is the Gaussian density of some estimator obtained from the large (and complete) dataset and boxplots are estimates obtained on sub-samples (without the stepwise procedure, just to make sure I will keep that variable).

For coefficients associated to variables not used to generate the dataset, we get graphs like the following

So, clearly, the smaller the dataset, the larger the dispersion of the estimates. But so far, nothing new. In my previous post – too large datasets for regression? What about subsampling – my point was to discuss computational times, and a possible optimal size of sub-datasets. Now, what about the impact of sub-sampling on predictions? Here, we fit a model on a small sample, but we can get a prediction on the whole dataset. In order to describe the goodness of fit of our regression model, let us plot ROC curves. More specifically, three kinds of lines will be plotted,

  • the ROC curve for the predictions obtained with the model fitted on the complete dataset [red]
  • the ROC curves for the predictions obtained with the models fitted on the sub-samples [light blue]
  • the ROC curve for the predictions obtained by averaging the previous estimates [blue]
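Note that the helper ROC.curve is not defined in this excerpt; a minimal sketch consistent with how it is used below (a two-row matrix, first row the false positive rate, second row the true positive rate, over a grid of thresholds) could be

ROC.curve = function(S,Y,threshold=seq(0,1,by=.01)){
  FP = sapply(threshold, function(s) sum((S>s)&(Y==0))/sum(Y==0))
  TP = sapply(threshold, function(s) sum((S>s)&(Y==1))/sum(Y==1))
  rbind(FP,TP)
}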

S=predict(reg,type="response")
Y=df$Y
M.ROC=ROC.curve(S,Y)
plot(M.ROC[1,],M.ROC[2,],type="s",col="red")

Z=df$Y*0
for(si in 1:100){
S=predict(L1s[[si]],type="response",newdata=df)
Z=Z+S
Y=df$Y
M.ROC=ROC.curve(S,Y)
lines(M.ROC[1,],M.ROC[2,],type="s",col="light blue")
}

S=Z/100
Y=df$Y
M.ROC=ROC.curve(S,Y)
lines(M.ROC[1,],M.ROC[2,],type="s",col="blue",lwd=2)

If we consider sub-samples of size n/10, we get the following, and when we consider sub-samples of size n/100, without the stepwise procedure (most variables have a small coefficient, not significant) and after the stepwise procedure. Clearly – and that should not be a surprise – looking at predictions when the model was fitted on 1% of the dataset is not great (ROC curves are substantially below the red ROC curve). But the interesting point is that averaging yields great results. In terms of ROC curves, we have the same when

  • running one regression on our n\times 100 matrix
  • averaging predictions after running 100 regressions on some (n/100)\times 100 matrices

Except that the first one might not be possible to run, if the dataset were larger. And I have to admit that with the stepwise procedure, with 100 variables (where 98 should – theoretically – be removed), it took some time! But still, I have the feeling that sub-sampling is extremely promising in the context of too large datasets.

Introduction to logistic regression and trees

For the second ACT2040 lecture, we will finish the introduction (and the review of inferential statistics), then start the first big section, on logistic regression and classification trees. The dataset comes from Jed Frees' book, http://instruction.bus.wisc.edu/jfrees/…

> baseavocat=read.table("http://freakonometrics.free.fr/AutoBI.csv",
+ header=TRUE,sep=",")
> tail(baseavocat)
     CASENUM ATTORNEY CLMSEX MARITAL CLMINSUR SEATBELT CLMAGE  LOSS
1335   34204        2      2       2        2        1     26 0.161
1336   34210        2      1       2        2        1     NA 0.576
1337   34220        1      2       1        2        1     46 3.705
1338   34223        2      2       1        2        1     39 0.099
1339   34245        1      2       2        1        1     18 3.277
1340   34253        2      2       2        2        1     30 0.688

We have a dichotomous variable indicating whether a policyholder – following a car accident – was represented by an attorney (1 if yes, 2 if no). We know the policyholder's gender (1 for men, 2 for women) and marital status (1 if married, 2 if single, 3 for widowed, and 4 for divorced). We also know whether the policyholder was wearing a seatbelt when the accident occurred (1 if yes, 2 if no, 3 if the information is not known). Finally, there is some information on whether the driver of the vehicle was insured (1 if yes, 2 if no, 3 if the information is not known). We will recode the data a little, to make them easier to read.

The slides are online on the blog,

Theoretical complements on trees can be found online at http://genome.jouy.inra.fr/…, http://ensmp.fr/…, or http://ujf-grenoble.fr/… (for the record, we will only cover the CART method). I can also refer to the book (and blog) of Stéphane Tuffery, or (in English) to Richard Berk's book, a summary of which can be found online at http://crim.upenn.edu/….

Residuals from a logistic regression

I always claim that graphs are important in econometrics and statistics! Of course, it is usually not that simple. Let me come back to a recent experience. I got an email from Sami yesterday, sending me a graph of residuals, and asking me what can be done with a graph of residuals obtained from a logistic regression. To get a better understanding, let us consider the following dataset (those are simulated data, but let us assume – as in practice – that we do not know the true model; this is why I decided to embed the code in some R source file)

> source("http://freakonometrics.free.fr/probit.R")
> reg=glm(Y~X1+X2,family=binomial)

If we use R’s diagnostic plot, the first one is the scatterplot of the residuals, against predicted values (the score actually)

> plot(reg,which=1)

which is simply

> plot(predict(reg),residuals(reg))
> abline(h=0,lty=2,col="grey")

Why do we have those two lines of points? Because we predict a probability for a variable taking values 0 or 1. If the true value is 0, then we always overestimate, and residuals have to be negative (the blue points), and if the true value is 1, then we underestimate, and residuals have to be positive (the red points). And of course, there is a monotone relationship… We can see more clearly what's going on when we use colors

> plot(predict(reg),residuals(reg),col=c("blue","red")[1+Y])
> abline(h=0,lty=2,col="grey")

Points are exactly on a smooth curve, as a function of the predicted value,

Now, there is not much we can get from this graph. If we want to understand, we have to run a local regression, to see what's going on,

> lines(lowess(predict(reg),residuals(reg)),col="black",lwd=2)

This is exactly what we had with the first function. But with this local regression, we do not get confidence intervals. Can't we claim, on the graph above, that the plain dark line is very close to the dotted line?

> library(splines)
> rl=lm(residuals(reg)~bs(predict(reg),8))
> #rl=loess(residuals(reg)~predict(reg))
> y=predict(rl,se=TRUE)
> segments(predict(reg),y$fit+2*y$se.fit,predict(reg),y$fit-2*y$se.fit,col="green")

Yes, we can. And even if we have a guess that something could be done, what would this graph suggest?

Actually, that graph is probably not the only way to look at the residuals. Why not plot them against the two explanatory variables? For instance, if we plot the residuals against the second one, we get

> plot(X2,residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(X2,residuals(reg)),col="black",lwd=2)
> lines(lowess(X2[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(X2[Y==1],residuals(reg)[Y==1]),col="red")
> abline(h=0,lty=2,col="grey")

The graph is similar to the one we had earlier, and again, there is not much to say,

If we now look at the relationship with the first one, it starts to be more interesting,

> plot(X1,residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(X1,residuals(reg)),col="black",lwd=2)
> lines(lowess(X1[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(X1[Y==1],residuals(reg)[Y==1]),col="red")
> abline(h=0,lty=2,col="grey")

since we can clearly identify a quadratic effect. This graph suggests that we should run a regression on the square of the first variable. And it can be seen as a significant effect,

Now, if we run a regression including this quadratic effect, what do we have,

> reg=glm(Y~X1+I(X1^2)+X2,family=binomial)
> plot(predict(reg),residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(predict(reg)[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(predict(reg)[Y==1],residuals(reg)[Y==1]),col="red")
> lines(lowess(predict(reg),residuals(reg)),col="black",lwd=2)
> abline(h=0,lty=2,col="grey")

Actually, it looks like we are back where we were initially… So what is my point? My point is that

  • graphs (yes, plural) can be used to see what might go wrong, and to get more intuition about possible nonlinear transformations
  • graphs are not everything, and they will never be perfect! Here, in theory, the plain line should be a straight horizontal line. But we also want a model as simple as possible. So, at some stage, we should probably give up, and rely on statistical tests and confidence intervals. Yes, an almost flat line can be interpreted as flat.

Logistic regression and trees

For next Wednesday's class, we will use a dataset taken from Jed Frees' book, http://instruction.bus.wisc.edu/jfrees/…

> baseavocat=read.table("http://freakonometrics.free.fr/AutoBI.csv",header=TRUE,sep=",")
> tail(baseavocat)
     CASENUM ATTORNEY CLMSEX MARITAL CLMINSUR SEATBELT CLMAGE  LOSS
1335   34204        2      2       2        2        1     26 0.161
1336   34210        2      1       2        2        1     NA 0.576
1337   34220        1      2       1        2        1     46 3.705
1338   34223        2      2       1        2        1     39 0.099
1339   34245        1      2       2        1        1     18 3.277
1340   34253        2      2       2        2        1     30 0.688

We have a dichotomous variable indicating whether a policyholder – following a car accident – was represented by an attorney (1 if yes, 2 if no). We know the policyholder's gender (1 for men, 2 for women) and marital status (1 if married, 2 if single, 3 for widowed, and 4 for divorced). We also know whether the policyholder was wearing a seatbelt when the accident occurred (1 if yes, 2 if no, 3 if the information is not known). Finally, there is some information on whether the driver of the vehicle was insured (1 if yes, 2 if no, 3 if the information is not known). We will recode the data a little, to make them easier to read.

The slides for the class are online here,

On regression trees, I will put a post online to illustrate the method. In the meantime, theoretical complements can be found online at http://genome.jouy.inra.fr/…, http://ensmp.fr/…, or http://ujf-grenoble.fr/… (for the record, we will only cover the CART method). I can also refer to the book (and blog) of Stéphane Tuffery, or (in English) to Richard Berk's book, a summary of which can be found online at http://crim.upenn.edu/….

The following week, we will cover Poisson regression and minimum bias methods, and introduce generalized linear models. I refer to the chapter on a priori ratemaking in Denuit & Charpentier (2005), to chapters 12 and 13 of Frees (2010), or to chapters 5 and 6 of De Jong & Heller (2008). For the more curious who want to understand the links between generalized linear models and credibility ratemaking, I refer to the article by Klinker (2010)

Segmentation in ratemaking, complements

In the first P&C actuarial science lecture, we saw the importance of segmentation, and its implication for the calculation of premiums (moving from a mathematical expectation to a conditional expectation). To go a little further, a few complements,

for a more economics-oriented reading of the segmentation issue in insurance

or for a more legal reading

Otherwise, several popular-science articles can be read online,

The first lab session will take place on Monday, in the computer room. Karim will give an introduction to the R language, and to the manipulation of (qualitative and quantitative) variables. I will put the slides online at the end of the week, and the code will be put online during next week.

As announced yesterday, there will be no class next Wednesday. The following Wednesday, we will look at modelling indicator variables, i.e. logistic regression, and regression trees. We will assume that the linear model has been covered; here is a link to the slides of last term's ACT6420 course, lecture notes transparents1 and transparents2. It is also possible to reread Frees (2010), chapters 3, 4, 5 and 6.

To start practising logistic regression, we will use the following small function

logit = function(formula, lien="logit", data=NULL) {
  # wrapper around glm, binomial family, with the chosen link ("lien")
  glm(formula,family=binomial(link=lien),data)
}

Otherwise, the Casualty Actuarial Society has put several documents online on regression trees (which are not much covered in the books mentioned above),

for a comparison of all the methods

The slides will be put online at the end of next week. To be continued…

Building a ROC curve

Just before the holidays, Jean-Pierre Liégeois, a young reader from the Var, asked me by email, "starting from a logistic regression (or a 2×2 confusion matrix), how can one program, in R, the construction of the associated ROC curve". Before going further (and answering the question), let me point to an older post on confusion matrices. The idea is that we have a predictor of a variable taking values 0 and 1 (or, to use the classical terminology, "positive" and "negative"), for instance a logistic model. Formally, for all our observations, we have an observed value Y_i and (as I explained in another post) a score \widehat{S}_i. And it is this score that we will use to build the ROC curve. This score will be used to predict \widehat{Y}_i. The classification rule is then simple: we fix a threshold s, and

  • if \widehat{S}_i>s, then \widehat{Y}_i is "positive"
  • if \widehat{S}_i\leq s, then \widehat{Y}_i is "negative"

We can then build a so-called confusion matrix, which is simply a contingency table,

                                observed value Y_i
predicted value \widehat{Y}_i   "positive"   "negative"
"positive"                          TP           FP
"negative"                          FN           TN

where TP denotes the true positives, TN the true negatives, FP the false positives (or type I errors, in the terminology of decision theory, or of hypothesis testing), and FN the false negatives (or type II errors).
What about the implementation in R? Let us start by generating some data, and estimating a regression model.

set.seed(1)
n=50
X=rnorm(n)
Y=rbinom(n,size=1,prob=
exp(2*X-1)/(1+exp(2*X-1)))
B=data.frame(Y,X)
reg=glm(Y~X,family=binomial,data=B)
S=predict(reg,type="response")

We now have our observations (a variable taking values 0 or 1) and our score. We can then choose several possible values for the threshold, and visualize the true positive rate as a function of the false positive rate.

plot(0:1,0:1,xlab="False Positive Rate",
ylab="True Positive Rate",cex=.5)
for(s in seq(0,1,by=.01)){
Ps=(S>s)*1
FP=sum((Ps==1)*(Y==0))/sum(Y==0)
TP=sum((Ps==1)*(Y==1))/sum(Y==1)
points(FP,TP,cex=.5,col="red")
}

We then get the following graph,

If we connect the points, we get the ROC curve,

FP=TP=rep(NA,101)
plot(0:1,0:1,xlab="False Positive Rate",
ylab="True Positive Rate",cex=.5)
for(s in seq(0,1,by=.01)){
Ps=(S>s)*1
FP[1+s*100]=sum((Ps==1)*(Y==0))/sum(Y==0)
TP[1+s*100]=sum((Ps==1)*(Y==1))/sum(Y==1)
}
lines(c(FP),c(TP),type="s",col="red")

Actually, the code is fairly simple, and it can be found in several R packages, e.g.

library(ROCR)
pred=prediction(S,Y)
perf=performance(pred,"tpr", "fpr")
plot(perf,colorize = TRUE)

We can also have fun bootstrapping the sample to build confidence intervals, or fit theoretical models,

library(verification)
roc.plot(Y,S, xlab = "False Positive Rate",
ylab = "True Positive Rate", main = "", CI = TRUE,
n.boot = 100, plot = "both", binormal = TRUE)

or (again with confidence bounds obtained by bootstrap)

library(pROC)
PROC=plot.roc(Y,S,main="", percent=TRUE,
ci=TRUE)
SE=ci.se(PROC,specificities=seq(0, 100, 5))
plot(SE, type="shape", col="light blue")

Logistic regression, discriminant analysis… and XLStat

To answer a question I was asked: it is possible to get a free 30-day version of XLStat via http://www.01net.com/.

Otherwise, I can only encourage students to get started with R… unlike XLStat, a minimum of coding is required, but the investment is worth it. For those who might be afraid, I refer to the document written by Denis Poinsot, "R pour les Statophobes".

Otherwise, on discriminant analysis, once again, some things can be done with XLStat…


For logistic regression, one can have a look at

The logistic regression tools are not filed under data analysis,

By default, we then get a logistic regression on binary data (two classes),

but it can be extended to several classes,

As for the datasets, the files saporta.xls and bordeaux.xls are online on my page (the dataset taken from pages 414-415 of Saporta 1990 only contains 3/4 of the observations, because typing them by hand takes a while…).