Category Archives: Computer

Classification from scratch, logistic with splines 2/8

Today, second post of our series on classification from scratch, following the brief introduction on the logistic regression.

Piecewise linear splines

To illustrate what’s going on, let us start with a “simple” regression (with only one explanatory variable). The underlying idea is natura non facit saltus, “nature does not make jumps”, i.e. the equations governing natural processes are continuous. That seems to be a rather strong assumption: we could imagine that there is a fixed threshold to explain death. For instance, if patients die (for sure) when the “stroke index” exceeds a threshold, we might expect some discontinuity. Except that if that threshold is a heterogeneous (non-observable continuous) variable, then we get back to the continuity assumption.

The simplest way to extend the linear model we’ve seen in the previous post is to consider a piecewise linear function, with two parts: small values of x, and larger values of x. The most convenient way to do so is to use the positive part function (x-s)_+, which is the difference between x and s if that difference is positive, and 0 otherwise. For instance \beta_1 x+\beta_2(x-s)_+ is a piecewise linear function, continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, there is a linear increase, with slope \beta_1, and for larger values of x, the slope becomes \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.
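As a quick visual check, here is a minimal sketch, with arbitrary values for \beta_1, \beta_2 and the knot s (pmax(x-s,0) plays the role of the positive part here):

x = seq(0,10,by=.1)
s = 5 ; beta1 = 1 ; beta2 = -1.5
plot(x, beta1*x + beta2*pmax(x-s,0), type="l")   # slope beta1 below s, beta1+beta2 above
abline(v=s, lty=2)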

And of course, it is possible to consider more than one knot. The function returning the positive part is the following

pos = function(x,s) (x-s)*(x>=s)   # positive part (x-s)_+

then we can use it directly in our regression model

reg = glm(PRONO~INSYS+pos(INSYS,15)+
pos(INSYS,25),data=myocarde,family=binomial)

The output of the regression is here

summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)     -0.1109     3.2783  -0.034   0.9730  
INSYS           -0.1751     0.2526  -0.693   0.4883  
pos(INSYS, 15)   0.7900     0.3745   2.109   0.0349 *
pos(INSYS, 25)  -0.5797     0.2903  -1.997   0.0458 *

Hence, the original slope, for very small values, is not significant, but above 15, it becomes significantly positive. And above 25, there is a significant change again. We can plot it to see what’s going on

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,type="l")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
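To make the change-of-slope interpretation concrete, the slopes of the linear predictor (on the logit scale) on the three segments can be read directly from the coefficients, as a small sketch:

b = unname(coefficients(reg))
c(slope_below_15 = b[2],
  slope_15_to_25 = b[2]+b[3],
  slope_above_25 = b[2]+b[3]+b[4])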

Using bs() linear splines

Using the bs() function from the splines library, things are slightly different. We will use here so-called B-splines,

library(splines)

We can define spline functions with support (5,55) and with knots \{15,25\}

clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)


As we can see, the functions defined here are different from the ones before, but we still have (piecewise) linear functions on each segment (5,15), (15,25) and (25,55). And linear combinations of those functions (the two sets of functions) generate the same space. Said differently, even if the interpretation of the coefficients will be different, predictions should be the same

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=1),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept)    -0.9863     2.0555  -0.480   0.6314  
bs(INSYS,..)1  -1.7507     2.5262  -0.693   0.4883  
bs(INSYS,..)2   4.3989     2.0619   2.133   0.0329 *
bs(INSYS,..)3   5.4572     5.4146   1.008   0.3135

Observe that there are three coefficients, as before, but again, the interpretation is here more complicated…

v=predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Nevertheless, the prediction is the same… and that’s nice.
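As a quick numerical check (a sketch: the two bases span the same space of piecewise linear functions, so fitted probabilities should agree up to numerical error):

reg_pos = glm(PRONO~INSYS+pos(INSYS,15)+pos(INSYS,25),data=myocarde,family=binomial)
reg_bs  = glm(PRONO~bs(INSYS,knots=c(15,25),Boundary.knots=c(5,55),degree=1),
              data=myocarde,family=binomial)
max(abs(predict(reg_pos,type="response")-predict(reg_bs,type="response")))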

Piecewise quadratic splines

Let us go one step further… Can we also have continuity of the derivative? Yes, and that’s actually easy, considering parabolic functions. Instead of using a decomposition on x,(x-s_1)_+ and (x-s_2)_+, consider now a decomposition on x,x^{\color{red}{2}},(x-s_1)^{\color{red}{2}}_+ and (x-s_2)^{\color{red}{2}}_+.

pos2 = function(x,s) (x-s)^2*(x>=s)   # squared positive part (x-s)_+^2
reg = glm(PRONO~poly(INSYS,2)+pos2(INSYS,15)+pos2(INSYS,25),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
                Estimate Std. Error z value Pr(>|z|)  
(Intercept)      29.9842    15.2368   1.968   0.0491 *
poly(INSYS, 2)1 408.7851   202.4194   2.019   0.0434 *
poly(INSYS, 2)2 199.1628   101.5892   1.960   0.0499 *
pos2(INSYS, 15)  -0.2281     0.1264  -1.805   0.0712 .
pos2(INSYS, 25)   0.0439     0.0805   0.545   0.5855

As expected, there are five coefficients here: the intercept and two for the part on the left (three parameters for the parabolic function), and then two additional terms, one for the part in the center – here (15,25) – and one for the part on the right. Of course, for each portion, there is only one additional degree of freedom, since we have a parabolic function (three coefficients) but two constraints (continuity, and continuity of the first order derivative).
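A quick numerical check of those constraints, as a sketch (the left and right first derivatives of the linear predictor should match at the knot, here 15):

eta = function(x) predict(reg,newdata=data.frame(INSYS=x))   # linear predictor, on the logit scale
eps = 1e-4
(eta(15+2*eps)-eta(15+eps))/eps    # right derivative at the knot
(eta(15-eps)-eta(15-2*eps))/eps    # left derivative, should be almost identical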

On a graph, we get the following

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() quadratic splines

Of course, we can do the same with the bs() function. But as before, the basis functions are expressed differently here

x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",xlab="INSYS",col=clr6)


If we run R code, we get

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=2),data=myocarde,
family=binomial)
summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)       7.186      5.261   1.366   0.1720  
bs(INSYS, ..)1  -14.656      7.923  -1.850   0.0643 .
bs(INSYS, ..)2   -5.692      4.638  -1.227   0.2198  
bs(INSYS, ..)3   -2.454      8.780  -0.279   0.7799  
bs(INSYS, ..)4    6.429     41.675   0.154   0.8774

But that’s not really a big deal since the prediction is exactly the same

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Cubic splines

Last, but not least, we can reach the cubic splines. With our previous notation, we would consider a decomposition on (guess what) x,x^2,x^{\color{red}{3}},(x-s_1)^{\color{red}{3}}_+,(x-s_2)^{\color{red}{3}}_+, to get, this time, continuity, as well as continuity of the first two derivatives (and therefore a very smooth function, since even the variations will be smooth). If we use the bs function, the basis is the following

B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=3)
matplot(x,B,type="l",lwd=2,col=clr6,lty=1,ylim=c(-.2,1.2))
abline(v=c(5,15,25,55),lty=2)

and the prediction will now be

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=3),
data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)
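For completeness, the truncated-power counterpart of pos() and pos2() would be a pos3() function; a minimal sketch (since both bases span the same space of cubic splines, the predictions should essentially match the bs() fit above):

pos3 = function(x,s) (x-s)^3*(x>=s)   # cubed positive part
reg3 = glm(PRONO~poly(INSYS,3)+pos3(INSYS,15)+pos3(INSYS,25),
           data=myocarde,family=binomial)
lines(u,predict(reg3,newdata=data.frame(INSYS=u),type="response"),lty=3)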


Two last things before concluding (for today), the location of the knots, and the extension to additive models.

Location of knots

In many applications, we do not want to specify the location of the knots. We just want – say – three (intermediary) knots. This can be done using

reg = glm(PRONO~1+bs(INSYS,degree=1,df=4),data=myocarde,family=binomial)

We can actually get the locations of the knots by looking at

attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 1L, knots = c(15.8, 21.4, 27.15), 
Boundary.knots = c(8.7, 54), intercept = FALSE)

which provides us with the location of the boundary knots (the minimum and the maximum of our sample) as well as the three intermediary knots. Observe that, actually, those five values are just (empirical) quantiles

quantile(myocarde$INSYS,(0:4)/4)
   0%   25%   50%   75%  100% 
 8.70 15.80 21.40 27.15 54.00

If we plot the prediction, we get

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


If we look at what was computed before the logit transformation, we clearly see ruptures at the different quantiles

B = bs(x,degree=1,knots=c(15.8,21.4,27.15),Boundary.knots=c(8.7,54))   # same knots as in the regression above
B = cbind(1,B)
y = B%*%coefficients(reg)
plot(x,y,type="l",col="red",lwd=2)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


Note that if we do not specify anything about the knots (number or location), we get no interior knots…

reg = glm(PRONO~1+bs(INSYS,degree=2),data=myocarde,family=binomial)
attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 2L, knots = numeric(0), 
Boundary.knots = c(8.7,54), intercept = FALSE)

and if we look at the prediction

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)


it is actually the same as a quadratic regression (as expected)

reg = glm(PRONO~1+poly(INSYS,degree=2),data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)

Additive models

Consider now the second dataset, with two variables. Consider here a model like
\mathbb{P}[Y|X_1=x_1,X_2=x_2]=\frac{\exp[\eta(x_1,x_2)]}{1+\exp[\eta(x_1,x_2)]}
where
\eta(x_1,x_2)=\beta_0+\color{red}{s_1(x_1)}+\color{blue}{s_2(x_2)}
\color{red}{s_1(x_1)}=\beta_{1,0}x_1+\beta_{1,1}(x_1-s_{11})_++\beta_{1,2}(x_1-s_{12})_+
and
\color{blue}{s_2(x_2)}=\beta_{2,0}x_2+\beta_{2,1}(x_2-s_{21})_++\beta_{2,2}(x_2-s_{22})_+
It might seem a little bit restrictive, but that’s actually the idea of additive models.

reg = glm(y~bs(x1,degree=1,df=3)+bs(x2,degree=1,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Now, if we think about it, we’ve been able to get a “perfect” model, so, somehow, it seems no longer continuous…

persp(u,u,v,theta=20,phi=40,col="green")


Of course, it is… it is piecewise linear, with hyperplanes, some being almost vertical.

And one can also consider piecewise quadratic functions

reg = glm(y~bs(x1,degree=2,df=3)+bs(x2,degree=2,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Funny thing: we now have two “perfect” models, with different areas for the white and the black dots… Don’t ask me how to choose between them.

In R, it is possible to use the mgcv package to run a gam regression. It is used for generalized additive models, but here, we have only one variable, so it is difficult to see the “additive” part, actually. To be more specific, mgcv uses penalized quasi-likelihood from the nlme package (but we’ll get back to penalized routines later on).
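Just to show what the call looks like (this is not part of the from-scratch exercise, and the penalization makes it a different estimator), a minimal sketch with mgcv:

library(mgcv)
reg_gam = gam(PRONO~s(INSYS),data=myocarde,family=binomial)
plot(reg_gam)   # the estimated smooth term, on the logit scale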

But maybe I should first mention another smoothing tool: kernels (and maybe also k-nearest neighbors). To be continued…

Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that given covariates \mathbf{x}, Y has a Bernoulli distribution,Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}The goal is to estimate parameter \mathbf{\beta}.

Recall that the heuristics for the use of that function for the probability is that\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}
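A one-line sanity check of that heuristic (a sketch, with an arbitrary value of the linear predictor):

eta = 0.5                    # some value of x'beta
p = exp(eta)/(1+exp(eta))    # the logistic transform (this is plogis(eta))
log(p/(1-p))                 # the log-odds: we recover eta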

Maximum of the (log-)likelihood function

The log-likelihood is here\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i) where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
 -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
 }

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~.,data=myocarde)$coefficients

Why not start with the parameters from OLS? Somehow, we might think that, at least, the signs should be ok, for instance. Anyway, we need a starting point, so let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

 logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS    
 1.656926397  0.045234029 -2.119441743  0.204023835 
       PRDIA        PAPUL        PVENT        REPUL 
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)

simu = function(i){
logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, 
negLogLik, hessian=TRUE, method = "BFGS", 
control=list(abstol=1e-9))
logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])

Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) 
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) + 
(1-vY)*(-log(1 + exp(mX %*% vBeta)))) 
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable, 
method = 'L-BFGS-B', gr = likelihoodScore, 
mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let’s be honest here, I do not feel comfortable with those techniques. So, what happened here?

Here, the technique we use is based on the following idea,\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}The problem is that my computer does not know these first and second derivatives. So it will compute them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
 -sum(y*log(logit(X%*%beta)) + 
(1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y)) 
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren’t. From my point of view, that is a major problem (keep in mind that we do not deal here with massive data! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical values of those derivatives? Even if some people claim the opposite, it can sometimes actually be useful to do the maths, instead of considering only numerical values.

Newton (or Fisher) Algorithm

If you open any econometrics textbook (one can also try to derive it), you will get \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
while\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}

Y=myocarde$PRONO
X=cbind(1,as.matrix(myocarde[,1:7]))
colnames(X)=c("Inter",names(myocarde[,1:7]))
 beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}

Observe that here, I use only ten iterations of the algorithm !

 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point

beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

Nice, isn’t it? Looks like we got our winner, don’t we? And one can use the inverse of the Hessian matrix to get standard deviations.
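For instance, reusing the Hessian from the last iteration of the loop above, a small sketch (those values should be close to the standard errors returned by glm further below):

sqrt(diag(solve(-Hessian)))   # asymptotic standard errors, from the observed information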

Weighted Least-Squares

Let us go one step further. We’ve seen that we want to compute something like\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}(if we do substitute matrices in the analytical expressions) where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that’s simply a standard weighted least-squares problem\mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbraceThe only problem here is that the weights \mathbf{\Delta}_{old} are functions of the unknown \mathbf{\beta}_{old}. But actually, if we keep iterating, we should be able to solve it: given \mathbf{\beta} we get the weights, and with the weights, we can use weighted OLS to get an updated \mathbf{\beta}. That’s the idea of iteratively reweighted least squares.

The algorithm will be

df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
p = exp(X %*% beta) / (1+exp(X %*% beta))
omega = diag(nrow(df))
diag(omega) = (p*(1-p))
df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)
beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients
}

and the output is here

 beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA 
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171 
        PAPUL         PVENT         REPUL 
  0.016756506  -0.106776012  -0.003154187

which is almost what we’ve obtained before. Nice isn’t it ? Actually, here we also have standard deviations of estimators

summary( lm(Z~.,data=df[,-8], weights=diag(omega)))
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475

The standard glm function

Of course, it is possible to use an R built-in function to get our estimate

summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519

Application and visualisation

Let us visualize the prediction obtained from the logistic regression, on our second dataset

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Here, level curves – or iso-probabilities – are linear, so the space is divided into two regions (0 and 1, survival and death, white and black) by a straight line (or a hyperplane in higher dimension). Furthermore, since we have a linear model, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane), parallel to the first one.
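As a quick illustration, a one-line sketch adding, to the previous graph, the iso-probability curves at levels 0.25 and 0.75 (parallel to the 0.5 one):

contour(u,u,v,levels=c(.25,.75),add=TRUE,lty=2)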

Next time, we will introduce splines to smooth those continuous covariates… to be continued.

Classification from scratch, overview 0/8

Before my course on « big data and economics » at the university of Barcelona in July, I wanted to upload a series of posts on classification techniques, to get an insight on machine learning tools.

According to a common belief, machine learning algorithms are black boxes. I wanted to get back to that saying. First of all, isn’t it also the case for regression models, like generalized additive models (with splines)? Do you really know what the algorithm is doing? Even for the logistic regression: in textbooks, we can easily find the math formulas. But what is really done when I run it, in R?

When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let’s try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R, and that is what I plan to do within the next two weeks.

I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])

Here is some code to get a visualization of the prediction (here the probability to be a black point)

rmatrix_model = function(model){
u = seq(0,1,length=101)
p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
return(v)}
nice_graph=function(v){
u = seq(0,1,length=101)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))

Note that colors are defined here as

clr10= c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b")

or with some nonlinear model

The second one is a dataset I got from Gilbert Saporta, about heart attacks and death (our binary variable).

myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))

So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), about comparing models, etc. The goal here is simply to understand what’s going on when we call either glm, glmnet, gam, random forest, svm, xgboost, or any function to get a predictive model.

Visualization of Airline Transportation Data

Tuesday, I will be in Paris, as a member of the jury on dataviz, organized by the Direction Generale de l’Aviation Civile, during the “assises nationales du transport aérien“.

In this context, the Ministère de la transition écologique et solidaire is launching a call for projects for the design of an open-source application to make it easier to visualize and share air-transport data. Traffic volumes (passengers and aircraft movements), delays and emissions around airports are among the data collected by the DGAC, to be used to build an innovative, interactive and educational data-visualization tool for professionals and for the general public.

There were some nice studies based on those data, available from the dedicated website (even if, sometimes, it can be hard to get a clear understanding, but that’s actually the main challenge with dataviz)

I can also upload some screenshots of apps that were submitted, and there were nice things, such as

or the following one

Some candidates were selected to present their viz to the jury, and then there will be prizes. More to come on Wednesday, probably.

Some sort of Otto Neurath (isotype picture) map

Yesterday evening, I was walking in Budapest, and I saw some nice map that was some sort of Otto Neurath style. It was hand-made but I thought it should be possible to do it in R, automatically.

A few years ago, Baptiste Coulmont published a nice blog post on the package osmar, that can be used to import OpenStreetMap objects (polygons, lines, etc) in R. We can start from there. More precisely, consider the city of Douai, in France,

The code to read information from OpenStreetMap is the following

library(osmar)
src <- osmsource_api()
bb <- center_bbox(3.07758808135,50.37404355, 1000, 1000)
ua <- get_osm(bb, source = src)

We can extract a lot of things, like buildings, parks, churches, roads, etc. There are two kinds of queries (on tag keys and on tag values), so we will use two functions

listek = function(vc,type="polygons"){
nat_ids <- find(ua, way(tags(k %in% vc)))
nat_ids <- find_down(ua, way(nat_ids))
nat <- subset(ua, ids = nat_ids)
nat_poly <- as_sp(nat, type)}
 
listev = function(vc,type="polygons"){
  nat_ids <- find(ua, way(tags(v %in% vc)))
  nat_ids <- find_down(ua, way(nat_ids))
  nat <- subset(ua, ids = nat_ids)
  nat_poly <- as_sp(nat, type)}

For instance to get rivers, use

W=listek(c("waterway"))

and to get buildings

M=listek(c("building"))

We can also get churches

C=listev(c("church","chapel"))

but also train stations, airports, universities, hospitals, etc. It is also possible to get streets, or roads

H1=listek(c("highway"),"lines")
H2=listev(c("residential","pedestrian","secondary","tertiary"),"lines")

but it will be more difficult to use afterwards, so let’s forget about those.

We can check that we have everything we need (the other layers used below, P, B and T, are obtained with similar calls to the two functions above, not shown here)

plot(M)
plot(W,add=TRUE,col="blue")
plot(P,add=TRUE,col="green")
if(!is.null(B)) plot(B,add=TRUE,col="red")
if(!is.null(C)) plot(C,add=TRUE,col="purple")
if(!is.null(T)) plot(T,add=TRUE,col="red")

Now, let us consider a rectangular grid. If there is a river in a cell, I want a river. If there is a church, I want a church, etc. Since there will be one (and only one) picture per cell, there will be priorities. But first we have to check intersections with polygons, between our grid, and the OpenStreetMap polygons.

library(sp)
library(raster)
library(rgdal)
library(rgeos)
library(maptools)
identification = function(xy,h,PLG){
  b=data.frame(x=rep(c(xy[1]-h,xy[1]+h),each=2),
               y=c(c(xy[2]-h,xy[2]+h,xy[2]+h,xy[2]-h)))
  pb1=Polygon(b)    
  Pb1 = list(Polygons(list(pb1), ID=1))
  SPb1 = SpatialPolygons(Pb1, proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs +towgs84=0,0,0"))
  UC=gUnionCascaded(PLG)
  return(gIntersection(SPb1,UC))
}

and then, we identify, as follows

whichidtf = function(xy,h){
  h=.7*h
  label="EMPTY"
if(!is.null(identification(xy,h,M))) label="HOUSE"
if(!is.null(identification(xy,h,P))) label="PARK"
if(!is.null(identification(xy,h,W))) label="WATER"
if(!is.null(identification(xy,h,U))) label="UNIVERSITY"
if(!is.null(identification(xy,h,C))) label="CHURCH"
return(label)
}

Let us use colored rectangles to make sure it works

# vx and vy are assumed to contain the breakpoints of the grid, and h the half-width
# of a cell (defined beforehand, from the bounding box); we convert them to cell centers
nx=length(vx)
vx=as.numeric((vx[2:nx]+vx[1:(nx-1)])/2)
ny=length(vy)
vy=as.numeric((vy[2:ny]+vy[1:(ny-1)])/2)
 plot(M,border="white")
 for(i in 1:(nx-1)){
     for(j in 1:(ny-1)){
         lb=whichidtf(c(vx[i],vy[j]),h)
         if(lb=="HOUSE") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="grey")
         if(lb=="PARK") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="green")
         if(lb=="WATER") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="blue")
         if(lb=="CHURCH") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="purple")      
     }}

As a first start, let us agree that it works. For the pictures, I borrowed them from https://fontawesome.com/. For instance, we can have a tree

 library(png)
 library(grid)
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/tree.png","tree.png")
 tree <- readPNG("tree.png")

Unfortunately, the color is not good (it is black), but that’s easy to fix, playing with the RGB(A) channels of the image

 rev_tree=tree
 rev_tree[,,2]=tree[,,4]

We can do the same for houses, churches and water actually

 download.file("http://freakonometrics.hypotheses.org/files/2018/05/angle-double-up.png","angle-double-up.png")
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/home.png","home.png")
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/church.png","curch.png")
water <- readPNG("angle-double-up.png")
 rev_water=water
 rev_water[,,3]=water[,,4]
 home <- readPNG("home.png")
 rev_home=home
 rev_home[,,4]=home[,,4]*.5
 church <- readPNG("church.png")
 rev_church=church
 rev_church[,,1]=church[,,4]*.5
 rev_church[,,3]=church[,,4]*.5

and that’s almost it. We can then add it on the map

 plot(M,border="white")
 for(i in 1:(nx-1)){
   for(j in 1:(ny-1)){
     lb=whichidtf(c(vx[i],vy[j]),h)
     if(lb=="HOUSE")  rasterImage(rev_home,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="PARK") rasterImage(rev_tree,vx[i]-h*.9,vy[j]-h*.8,vx[i]+h*.9,vy[j]+h*.8)
     if(lb=="WATER") rasterImage(rev_water,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="CHURCH") rasterImage(rev_church,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)     
   }}

Nice, isn’t it? (at least as a first draft, done during the lunch break of the R conference in Budapest, today).

 

When “learning Python” becomes “practicing R” (spoiler)

15 years ago, a student of mine told me that I should start learning Python, that it was really a great language. Students started to learn it, but I kept postponing. A few years ago, I also started Python for Kids with my son, which is actually really nice. That was nice, but not really challenging. A few weeks ago, I also started a crash course in Python, taught by Pierre. The truth is, I think I will probably give up. I keep telling myself (1) I can do anything much faster in R, (2) Python is not intuitive, especially when you’ve been practicing R for almost 20 years… Last week, I also had to link Python and R for our pricing game: Ali wrote some template codes in Python, and I had to translate them into R. And it was difficult…

Anyway, since it was a school break this week, I said to my son that we should try to practice together, with a nice challenge. For those willing to try it, you’d better stop here, because I will spoil it.


Actuarial Pricing Game

Within a few days, we will launch the fourth actuarial pricing game.

  1. People have to register for this new game: fill in a form online before February 28th to get a training dataset. As in our previous editions, players can be individuals or teams (identified as one player).
  2. When registration closes, players will receive a training dataset (training.csv), based on two years of data on household policies, with information about the insurance policy from underwriters, as well as claims data. They will also receive a pricing dataset (pricing.csv) with only the underwriters’ information and no claims. The goal is to propose a premium for all records in the pricing dataset. Note that those registered will also receive additional information and template files after registration closes.
  3. Before April 9th, 2018, players will have to provide two files: a pricing function (either a formula.py Python function, or a formula.R R function) and a dataset with prices (prices.csv). The pricing function should be an understandable, explicit function which, once applied to the training dataset, yields a premium for each record (a toy sketch is given after this list). There will be limits on the complexity of this function; to ensure this, players will receive instructions on what can be used to construct formulas, either in Python or R.
  4. Before May 14th, 2018, players are asked again to submit two files: a dataset with prices (second_prices.csv) and a pricing model (either a model.py Python file, or a model.R R file). For this round, there are no restrictions on what can be used in the model, as long as the prices dataset (second_prices.csv) can be reproduced with the submitted model file. Players will again receive further instructions on this after registration.
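Purely as an illustration of what an “explicit pricing function” could look like, here is a hypothetical sketch (the actual template, the required function name and the available variables are sent to registered players):

# hypothetical sketch only: the function name and the variable nb_past_claims are made up
pricing_function = function(data){
  base_premium = 100
  base_premium * (1 + 0.25*(data$nb_past_claims > 0))
}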

Unicode, UTF-8… the basics of encoding

A quick post, without any pretension. Just to come back to some weird encoding issues… I started looking into this three years ago, while discussing with R users in India and Japan. But let us go back to basics… If we look at the “unicode” page on Wikipedia (or more precisely, if we look at the source code), we can find some information (in the metadata): in this case, we see that the page is encoded in UTF-8.

That is consistent, since more and more online documents seem to be encoded in that format. Going back to the source, encoding is about the translation of characters into bytes of 0s and 1s. Plenty of websites explain (roughly) the philosophy. When I was a kid, we were taught about ASCII characters, which use 7 bits and therefore allow only 128 characters, numbered from 0 to 127. For instance, “A” was the 65th character, and “a” the 97th. There were some exotic symbols (like “@”, which is the 64th) and punctuation, but no accents, like “à”. If we want letters with accents, we need more than 128 characters. Then came ISO/CEI 8859, which encodes characters on 8 bits (instead of 7). There is also ISO 8859-1, better known as “latin-1”, very popular in Latin countries, with the “à”, and the standard kept being enriched up to ISO 8859-15 (latin-9), which includes the “€” symbol. For Asian languages, it had to be extended even further. In short, it is a never-ending game. In the 90s came the ISO 10646 standard, corresponding to the Universal Character Set (UCS), which gave Unicode. I believe purists distinguish between the character set (the set of characters, each of which is given a unique code point, like the list of ASCII characters) and the encoding (which is the way that code point is represented in memory). But let us keep things simple…
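A quick check, in R, of the code points mentioned above (a small sketch):

utf8ToInt("A")   # 65
utf8ToInt("a")   # 97
utf8ToInt("@")   # 64
utf8ToInt("à")   # 224, beyond the 7-bit ASCII range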

Where it gets complicated is that operating systems used (by default) different encodings. On Windows, UTF-16 and Unicode are mostly used. On Mac OS, there was something called MacRoman, but it seems that UTF-8 is winning today. Finally, on Linux, the historical default was latin-1, but UTF-8 is progressively taking over. In short, everyone who moves between operating systems knows it: accents are hell. And the main problem is that, when you receive a file, it is impossible to know its encoding a priori. Let us look at the following file, BaseANSI.txt.

We can try to see whether there are some sort of metadata in the document…

but that is not the case. The document is a raw file, with just the data, and no metadata. Nothing about the encoding (in particular, and that is what we are interested in today). As an anecdote, if instead of opening the file with my notepad on Windows, we try on a Mac, we get

And when I go through the DOS terminal on Windows, it is the same (which is somewhat reassuring)

Let us look at another file, BaseUNICODE.txt. If you see any difference with the previous one, let me know…

Yet there is one: the proof is that, in a terminal, we can read the accents in the file,

One last one? Let us look at BaseUTF8.txt. Again, we get the impression that it is the same file. And yet it is not.

Let us start by downloading the three files (via R)

download.file(url="http://freakonometrics.free.fr/BaseUTF8.txt",
destfile="BaseUTF8.txt")
download.file(url="http://freakonometrics.free.fr/BaseANSI.txt",
destfile="BaseANSI.txt")
download.file(url="http://freakonometrics.free.fr/BaseUNICODE.txt",
destfile="BaseUNICODE.txt")

Technically, these documents are different, even if we could not see anything in the notepad: they do not have the same size!

file.size("BaseUNICODE.txt")
[1] 322
file.size("BaseANSI.txt")
[1] 163
file.size("BaseUTF8.txt")
[1] 171

We can try to read them, to see…

read.table("BaseANSI.txt",header=TRUE,sep=";")
                       Département No Habitants
1                              Ain 01    631877
2                          Ardèche 07    324209
3                 Bouches-du-Rhône 13   2016622
4                     Corse-du-Sud 2A    152730
5 Côte-dOr;21;533147\nCôtes-dArmor 22    598357

We have a small problem with the apostrophe, but nothing serious

read.table("BaseANSI.txt",header=TRUE,sep=";",quote="\"")
       Département No Habitants
1              Ain 01    631877
2          Ardèche 07    324209
3 Bouches-du-Rhône 13   2016622
4     Corse-du-Sud 2A    152730
5        Côte-d'Or 21    533147
6    Côtes-d'Armor 22    598357

R manages to read this first database perfectly (at least on my computer). Let us try a second one

read.table("BaseUTF8.txt",header=TRUE,sep=";",quote="\"")
    ï..DÃ.partement No Habitants
1               Ain 01    631877
2          Ardèche 07    324209
3 Bouches-du-Rhône 13   2016622
4      Corse-du-Sud 2A    152730
5        Côte-d'Or 21    533147
6    Côtes-d'Armor 22    598357

We clearly have a problem here, an encoding problem. The issue is that, usually, we do not know how the file was encoded. And as we have seen, there are no metadata giving us the encoding. What can we do? Fortunately, there is a function in R that guesses the encoding. Literally.

library(readr)
guess_encoding("BaseUTF8.txt", n_max = 1000)
# A tibble: 3 x 2
  encoding   confidence
 
1 UTF-8           1.00 
2 ISO-8859-1      0.620
3 ISO-8859-2      0.430

In other words, the best guess here is a UTF-8 encoding. Let us try it,

read.table("BaseUTF8.txt",header=TRUE,sep=";",encoding="UTF-8",quote="\"")
  X.U.FEFF.Département No Habitants
1                  Ain 01    631877
2              Ardèche 07    324209
3     Bouches-du-Rhône 13   2016622
4         Corse-du-Sud 2A    152730
5            Côte-d'Or 21    533147
6        Côtes-d'Armor 22    598357

It works! Or let us say it almost works: I have a problem with my variable names. But let us say that this is anecdotal here.

Let us try the last file

read.table("BaseUNICODE.txt",header=TRUE,sep=";",quote="\"")
  ÿþD
1  NA
2  NA
3  NA
4  NA
5  NA
6  NA
7  NA
8  NA

Since that does not work, let us get a best guess of the encoding

guess_encoding("BaseUNICODE.txt", n_max = 1000)
# A tibble: 3 x 2
  encoding   confidence
 
1 UTF-16LE        1.00 
2 ISO-8859-1      0.530
3 ISO-8859-2      0.270

We’ve got one!

read.table("BaseUNICODE.txt",header=TRUE,encoding="UTF-16LE")
  ÿþD
1  \n
2   B
3  \n
4   C
5  \n
6   C
7  \n
8   C

Damn, it does not work… Fortunately, Ewen came to my rescue. The solution is rather strange

read.table("BaseUNICODE.txt",header=TRUE,fileEncoding="UTF-16LE", sep=";", quote="")
       Département No Habitants
1              Ain 01    631877
2          Ardèche 07    324209
3 Bouches-du-Rhône 13   2016622
4     Corse-du-Sud 2A    152730
5        Côte-d'Or 21    533147
6    Côtes-d'Armor 22    598357

Yes, we do not pass the encoding via “encoding” but via “fileEncoding”! Subtle, isn’t it… That said, if we wanted to be consistent, we should use the function from the package to read the file, shouldn’t we? Since the “readr” package has a “read_table” function

read_table("BaseUNICODE.txt",locale=locale(encoding = "UTF-16LE"))
Error in guess_types_(datasource, tokenizer, locale, n = guess_max) : 
  Incomplete multibyte sequence

Except that when we run it, we get an error message… In short, encoding is complicated. That is what is explained in the package’s discussion forum, by the way. I am not sure this post will be of any use, but I wanted to keep a trace of all this!

Using convolutions (S3) vs distributions (S4)

Usually, to illustrate the difference between S3 and S4 classes in R, I mention glm (from base R) and vglm (from VGAM), which provide similar outputs, but the first one is based on S3 code, while the second one is based on S4 code. Another way to illustrate the difference is to manipulate distributions.
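A quick check of who is what (a small sketch; the cars dataset is only used to have a glm object at hand):

isS4(glm(dist~speed,data=cars))   # FALSE: glm returns an S3 object
library(distr)
isS4(Lnorm(mean=1,sd=2))          # TRUE: distributions in distr are S4 objects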

Consider the case where we want to sum (independent) random variables, for instance two lognormal distributions. Let us try to compute the median of the sum.

The distribution function of the sum of two independent (positive) random variables is F_{S_2}(x)=\int_0^x F_{X_1}(x-y)dF_{X_2}(y)

pSum2 = function(x) integrate(function(y) 
plnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value

Let us visualize that cumulative distribution function

vx=seq(0.1,50,by=.1)
vy=Vectorize(pSum2)(vx)
plot(vx,vy,type="l",ylim=c(0,1))
abline(h=.5,lty=2)

Let us find an upper bound to compute (in a decent time) quantiles

pSum2(350)
[1] 0.99195

and then use the uniroot function to inverse that function

qSum = function(u) uniroot(function(x) 
Vectorize(pSum2)(x)-u, interval=c(0,350))$root
vu=seq(.01,.99,by=.01)
vv=Vectorize(qSum)(vu)

The median is here

qSum(.5)
[1] 14.155

Why not consider the sum of three (independent) random variables? Its cumulative distribution function can be written using our previous function, F_{S_3}(x)=\int_0^x F_{S_2}(x-y)dF_{X_3}(y)

pSum3 = function(x) integrate(function(y) 
pSum2(x-y)*dlnorm(y,2,2),0,x)$value

If we look at some values, we get

pSum3(4)
[1] 0.015624
pSum3(5)
Error in integrate(function(y) plnorm(x - y, 1, 2) * 
dlnorm(y, 2, 1),  : 
  maximum number of subdivisions reached

So obviously, there are computational issues here.

Let us consider the following alternative expression, F_{S_3}(x)=\int_0^x F_{X_3}(x-y)dF_{S_2}(y). Of course, it is necessary here to compute the density of the sum of two variables

dSum2 = function(x) integrate(function(y) 
dlnorm(x-y,1,2)*dlnorm(y,2,1),0,x)$value
pSum3 = function(x) integrate(function(y) 
dlnorm(x-y,2,2)*dSum2(y),0,x)$value

Again, let us compute some values

pSum3(4)
[1] 0.0090285
pSum3(5)
[1] 0.01186

This one seems to work quite well. But it is just an illusion.

pSum3(9)
Error in integrate(function(y) dlnorm(x - y, 1, 2) *
 dlnorm(y, 2, 1),  : 
  maximum number of subdivisions reached

Clearly, with those S3-type functions, it will be complicated to run computations with 3 variables, or more.

Let us consider distributions in the S4-type format of the following package

library(distr)
X1 = Lnorm(mean=1,sd=2)
X2 = Lnorm(mean=2,sd=1)
S2 = X1+X2

To compute the median, we simply have to use

distr::q(S2)(.5)
[1] 14.719

We can also visualize it easily

plot(q(S2))

which looks (very) close to what we got, manually.  But here, it is also possible to work with the sum of 3 (independent) random variables

X3 = Lnorm(mean=2,sd=2)
S3 = X1+X2+X3

To compute the median, use

distr::q(S3)(.5)
[1] 33.208

The function is here

plot(q(S3))

Are we all (potential) terrorists?

At the end of last week, I started reading surveiller et prévenir: L’ère de la pénalité prédictive by Nicolas Bourgoin, and at the beginning, he mentions reinforced security measures within a 20 km radius of train stations, airports and ports, which could be introduced as part of the reform of the laws on domestic security. This information was recently picked up by rtl,

The perimeter is also enlarged: checks can take place around international train stations (and no longer only inside them) as well as within a 20 kilometre radius around airports and ports.

or on the website of l’obs

these checks will be carried out “around” 373 train stations, ports and airports, within a 20 kilometre radius. A considerable extension, since until now these checks were confined to the inside of those areas accessible to the public.

While reading the book, I found this 20 km story incredible, since everyone lives within 20 km of a train station (not knowing exactly which 373 stations were meant, I assume that all stations could be concerned). This is more or less what la cimade notes, writing

The bill thus plans to allow identity checks at borders for a duration of 12 hours (against 6 today), to extend them “around” 373 train stations, ports and airports, as well as within a 20 km radius of the 118 border crossing points. Far beyond the mere borders of mainland France, almost the whole territory is covered. The measure thus disproportionately infringes on people’s rights and freedoms. Entire cities, such as Paris and the whole Île-de-France region, Lyon, Nantes, Rennes, Bordeaux, Montpellier, Toulouse or Marseille, would be subject to a regime legalizing racial-profiling checks. People perceived by the police as being foreigners, whatever their situation in France, thus risk being the victims of these identity checks.

Being (very) sceptical by nature, I wanted to check it myself, not only in terms of surface area (which is what is said here) but mostly in terms of population. The first step is to get the list of geolocated train stations, from https://ressources.data.sncf.com/. We could also get the list of airports from https://www.data.gouv.fr/ if we wanted to go further, but let us stick to train stations for now. The second step is to get the boundaries of the communes and their population. Actually, more than the surface, what interests me most is the number of people living there. Such a file can also be found on https://www.data.gouv.fr/. But let us start by loading all our cartography packages,

library(maptools)
library(rgeos)
library(rgdal)
library(ggplot2)
library(plyr)
library(maptools)

We can then get the data on the train stations

loc = "/home/arthur/referentiel-gares-voyageurs.shp"
gare = readOGR(loc)

To see where those stations are, we get a base map

Proj = CRS("+proj=longlat +datum=WGS84")
France = readShapePoly("departements-20140306-100m.shp", verbose=TRUE, proj4string=Proj)
metropolitaine = as.character(1:95)
metropolitaine[1:9] = paste("0",1:9,sep="")
France = France[France$code_insee%in%metropolitaine,]

We also use a database with the boundaries of the communes,

loc2 = "/home/arthur/communes-20150101-5m.shp"
communes_lim = readOGR(loc2)
communes_lim = spTransform(communes_lim, CRS("+proj=longlat +datum=WGS84"))

and a database for the population of the communes

 base = read.csv(
"http://freakonometrics.free.fr/popfr19752010.csv",
header=TRUE)
base$insee = base$dep*1000+base$com

This time we are ready. Consider a radius of 20 km

r=20

We will construct all the polygons corresponding to a circle with a 20 km radius, centered on the French train stations

# u and v are assumed to be defined beforehand as half-circle coordinates, e.g.
# u = seq(-1,1,length=60) and v = sqrt(1-u^2), so that each station gets 120 points
PL = data.frame(i=1:(3000*120),id=rep(1:3000,each=120),lon=NA,lat=NA)
 for(i in 1:nrow(gare)){
   x=as.numeric(as.character(gare$longitude_w[i]))
   y=as.numeric(as.character(gare$latitude_wg[i]))
   vx=c(x+u*r/111,x+rev(u)*r/111)
   vy=c(y+v*r/111,y-rev(v)*r/111)
   polygon(vx,vy,border=NA,col="blue",pch=19)
   PL[PL$id==i,2:4]=data.frame(id=i,lon=vx,lat=vy) }

(since 1 degree is roughly 111 km). We then have to tinker a bit to build a collection of polygons, which we will then be able to manipulate as we wish

PL=PL[!is.na(PL$lat),]
 PLdf=PL[,c(3,4,2)]
 PLdf[,3]=as.factor(PLdf[,3])
 PL_list <- split(PLdf, PL$id)
 PL_list <- lapply(PL_list, 
  function(x) { x["id"] <- NULL; x })
 PPL <- sapply(PL_list, Polygon)
 PPL1 <- lapply(seq_along(PPL), 
    function(i) Polygons(list(PPL[[i]]),
        ID = names(PL_list)[i]  ))
PPLs <- SpatialPolygons(PPL1, 
proj4string = CRS("+proj=longlat +datum=WGS84") )

If we visualize those security perimeters, we get

G=gUnionCascaded(PPLs)
F=gUnionCascaded(France)
FG=gIntersection(F,G)
plot(F)
plot(G,add=TRUE,col="blue")

that is, if we look at the intersection between the two,

plot(F)
plot(FG,add=TRUE)

To find the population in this vast region, we will assume that the population is uniformly distributed over each commune, and we look at the proportion of the commune’s surface that lies within 20 km of a train station (this is roughly the technique we used, with Ewen Gallic, in Kernel density estimation based on Ripley’s correction, to correct for edge effects in spatial smoothing on maps). In short, for a given commune, the following code returns the proportion of its surface located within 20 km of a station, together with its population

f=function(i){
  # proportion of the surface of commune i located within 20 km of a station, and its population
  insee=as.numeric(as.character(communes_lim@data$insee[i]))
  POPULATION=base[base$insee==insee,"pop_2010"]
  B_list=list()
  for(j in 1:length(communes_lim@polygons[[i]]@Polygons)){
    B_list[[j]]=data.frame(communes_lim@polygons[[i]]@Polygons[[j]]@coords,id=j)}
  B_list <- lapply(B_list,function(x) { x["id"] <- NULL; x })
  BL <- sapply(B_list, Polygon)
  BL1 <- lapply(seq_along(BL), function(i) Polygons(list(BL[[i]]),ID = names(PL_list)[i]))
  BLs <- SpatialPolygons(BL1, proj4string = CRS("+proj=longlat +datum=WGS84"))
  t=try(FGB<-gIntersection(BLs,FG),silent=TRUE)
  t1=try(l<-length(BLs@pointobj@coords),silent=TRUE)
  if((!inherits(t1, "try-error"))){
    a_list=list()
    for(j in 1:length(BLs@pointobj@coords)){
      a_list[[j]]=BLs@polyobj@polygons[[j]]@area}
    a1=sum(unlist(a_list))}
  if((inherits(t1, "try-error"))){
    a1=BLs@polygons[[1]]@area}
  a2=0
  if(!is.null(FGB)){
    t2=try(l<-length(FGB@pointobj@coords),silent=TRUE)
    if((!inherits(t2, "try-error"))){
      a_list=list()
      for(j in 1:length(FGB@pointobj@coords)){
        a_list[[j]]=FGB@polyobj@polygons[[j]]@area}
      a2=sum(unlist(a_list))}
    if((inherits(t2, "try-error"))){
      a2=FGB@polygons[[1]]@area}}
  p=c(1,NA,0)
  if((!inherits(t, "try-error"))&(!is.null(t))&(length(POPULATION)==0)) p=c(a2/a1,a1,0)
  if((!inherits(t, "try-error"))&(!is.null(t))&(length(POPULATION)>0))  p=c(a2/a1,a1,POPULATION)
  cat(i,insee,sum(unlist(lapply(B_list,nrow))),p,"\n")
  return(p)}

We then just have to run it over our 36,000 communes,

F = lapply(unique(communes_lim_df$id),f)

We then obtain

> F0=F[!is.na(F[,2]),]
> sum(F0[,1]*F0[,2])/sum(F0[,2])
[1] 0.9031736
> F=F1
> sum(F[,1]*F[,3])/sum(F[,3])
[1] 0.9661656

In other words, within a 20 km radius of train stations (only), we cover 90% of the territory, and more than 95% of the population. If we go down to 10 km, we cover about 75% of the territory, and more than 90% of the population; and with 5 km, we cover less than 50% of the surface of the territory, and about 75% of the population.

We find results of the same order as those obtained in a previous post, which established that 80% of the French population lives within 3 km of a bank branch. We could of course add the ports and the airports, and remove a few train stations… but I doubt the conclusion would be very different… The codes are available anyway; one just has to adapt them to another database of circle centers…

Scraping to get information about doctors in Paris

Here are a few R codes, just for fun… I took my inspiration from Marine’s project for the Data Science pour l’Actuariat program. The idea was to scrape a website used to book appointments with doctors, to see where the doctors are (in Paris) and what their fees are. To give a foretaste, here are the graphs we want to produce


I Got The Feelin’

Last week, I was going through my CD collection, trying to find records I hadn’t been listening to for a while. And I got the feeling that the music I listen to nowadays is slower than the one I was listening to in my 20s. I was wondering if that was an age issue, or simply the fact that music in the 90s was “faster” than the music released in 2015. So I tried to scrape the BPM database to get a more appropriate answer to this “feeling” I have. I extracted two pieces of information: the BPM (beats per minute) and the year (of release).

Here is a function to extract information from the website,

> library(XML)
> extractbpm = function(VBP,P){
+ url=paste("https://www.bpmdatabase.com/music/search/?artist=&title=&bpm=",VBP,"&genre=&page=",P,sep="")
+ download.file(url,destfile = "page.html")
+ tables=readHTMLTable("page.html")
+ return(tables)}

For instance

> extractbpm(115,13)
$`track-table`
Artist Title
1 Eros Ramazzotti y Claudio Guidetti Dimelo A Mi
2 Everclear Volvo Driving Soccer Mom
3 Evils Toy Dear God
4 Expose In Walked Love
5 Fabolous ft. 2 Chainz When I Feel Like It
6 Fabolous ft. 2 Chainz When I Feel Like It
7 Fabolous ft. 2 Chainz When I Feel Like It
8 Fanny Lu Fanfarron
9 Featurecast Ain't My Style
10 Fem 2 Fem Obsession
11 Fernando Villalona Mi Delito
12 Fever Ray Triangle Walks
13 Firstlove Freaky
14 Fito Blanko Pegadito Suavecito
15 Flechazo Del Norte Mariposa Traicionera
16 Fluke Switch/Twitch
17 Flyleaf Something Better
18 FM Static The Next Big Thing
19 Fonseca Eres Mi Sueno
20 Fonseca ft. Maffio & Nayer Eres Mi Sueno
21 Francesca Battistelli Have Yourself A Merry Little Christmas
22 Frankie Ballard Young & Crazy
23 Frankie J. More Than Words
24 Frank Sinatra The Hucklebuck
25 Franz Ferdinand The Dark Of The Matinée
Mix BPM Genre Label Year
1 — 115 — Sony 2009
2 — 115 — Capitol Records 2003
3 — 115 — — —
4 — 115 — Arista Records 1994
5 Explicit 115 Urban Def Jam/Island Def Jam 2013
6 — 115 Urban Def Jam/Island Def Jam 2013
7 Radio Edit 115 Urban Def Jam/Island Def Jam 2013
8 — 115 Latin Pop Universal Latino 2011
9 Psychemagik Dub 115 — Jalapeno 2012
10 — 115 — Critique Records 1993
11 — 115 — Mt&vi Records/caminante Records 2001
12 Rex The Dog Remix 115 — Little Idiot/Mute 2012
13 — 115 — Jwp Music 2000
14 — 115 Merengue Mambo Crown Loyalty 2012
15 — 115 — Hacienda 2010
16 Album Version 115 — One Little Indian Records 2004
17 — 115 Alternative A&M/Octone 2013
18 — 115 — Tooth & Nail Records 2007
19 — 115 Merengue Mambo 10 2012
20 Urban Version 115 — 10 2012
21 — 115 — Word/Fervent/Warner Bros 2009
22 — 115 Country Warner Bros 2015
23 Mynt Rocks Radio Edit 115 — Columbia 2005
24 — 115 Jazz Columbia 1950
25 — 115 New Wave — 2004

We have here one of the few old songs, a 1950 tune by Frank Sinatra. To scrape the whole website, we use a simple loop (with the bpm ranging from 40 to 200). Start with

> BASE=NULL
> vbp=40
> p=1

and then, a loop based on

> while(vbp<=200){
+ F=extractbpm(vbp,p)
+ if(length(F)==1){
+ BASE=rbind(BASE,F[[1]][,c("Artist","Title","BPM","Year")])
+ p=p+1}
+ if(length(F)==0){
+ vbp=vbp+1
+ p=1}}

Then we should clean the dataset

BASE=BASE[!duplicated(BASE),]
BASE=BASE[-which(BASE$Year=="—"),]
BASE$y=as.numeric(as.character(BASE$Year))
BASE$bpm=as.numeric(as.character(BASE$BPM))
BASE=BASE[BASE$y>=1940,]

and we end up with almost 50,000 tunes.

boxplot(BASE$bpm~as.factor(BASE$y),
col="light blue")

Over the past 20 years, it looks like the speed of tunes has declined (let us set aside the 2017 tunes; clearly we have a data problem there…)

library(mgcv)
plot(BASE$y,BASE$bpm)
reg=gam(bpm~s(y),data=BASE)
B=data.frame(y=1950:2017)
p=predict(reg,newdata=B)
lines(B$y,p,lwd=3,col="red")

which is confirmed with a (smoothed) regression

p2=predict(reg,newdata=B,se.fit=TRUE)
plot(B$y,p2$fit,lwd=3,col="red",type="l",ylim=c(90,140))
lines(B$y,p2$fit+p2$se.fit)
lines(B$y,p2$fit-p2$se.fit)

even when incorporating the confidence band. The bumps are probably related to the smoothing parameters, but indeed, it looks like the average speed of music tunes has decreased, from 110-115 bpm in the 90's to less than 100 nowadays. Now, to be honest, I would love to have access to personal listening data from iTunes, Deezer or Spotify, to get a better understanding (e.g. when in the week, or in the day, do we listen to faster music). But so far, I have not been able to get access to such data. Too bad…
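To double-check the order of magnitude of that claim, one can simply average the BPM per decade on the same BASE data frame (a quick sketch, assuming the y and bpm columns created above):

# average BPM per decade (sketch)
BASE$decade = 10*floor(BASE$y/10)
aggregate(bpm ~ decade, data=BASE, FUN=mean)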

Votes at the Assemblée Nationale

One last little post based on the R projects I had assigned for the Data Science pour l'Actuariat training. Today, I come back to code taken from Raphaël's project, which scraped the data of the Assemblée Nationale. We start by loading the libraries we will need,

require(xml2)
require(downloader)
require(stringr)
require(classInt)
require(plotrix)
require(FactoMineR)
require(sp)

The first part of the data import retrieves all the deputies, i.e. all persons one of whose mandates (indexed from 1 to 248) is at the ASSEMBLEE. For each of these deputies, we retrieve the constituency and département numbers in order to build the IDEN code, the unique identifier of each constituency (DDCC: two digits for the département, two for the constituency), as well as the actor reference (acteurRef) and the mandate reference (uid), in order to build an identifier for each deputy of the form PA???XXXPMXXXXXX (each X being a digit, each ? being either a digit or nothing, since the PA codes do not all have the same number of characters). We then create variables that will be used later: the total number of votes, the number of FOR, AGAINST and ABSTENTION votes, and the name of the organ (the group the deputy belongs to at the Assembly).
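As a side note, a minimal sketch of the DDCC code construction (make_iden is a hypothetical helper; in the code below, the zero-padding is actually done later with nchar() and ifelse()):

library(stringr)
# hypothetical helper: two digits for the département, two for the constituency
make_iden = function(dept, circo) str_c(sprintf("%02d", dept), sprintf("%02d", circo))
make_iden(7, 3)   # "0703"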

path="http://data.assemblee-nationale.fr/static/openData/repository/AMO/deputes_senateurs_ministres_legislature/AMO20_dep_sen_min_tous_mandats_et_organes_XIV.csv.zip"
dest="deputes.zip"
download(path,destfile=dest,mode="wb")
loc=paste(getwd(),"/",dest,sep="" )
unzip(loc)
dest="acteurs.csv"
loc=paste(getwd(),"/",dest,sep="" )
t2=read.csv(loc,sep=";")
res=NULL
for (i in 1:248){
test=str_c("mandats.1..mandat.",i,"..typeOrgane.1.")
j=which(colnames(t2) == test)
# Keep only the persons whose i-th mandate is a deputy mandate, i.e. sitting at the Assemblée Nationale
t3=subset(t2,t2[,j]=="ASSEMBLEE")
if(nrow(t3)!=0){
# Build the names of the variables of interest by concatenating strings
circo=str_c("mandats.1..mandat.",i,"..election.1..lieu.1..numCirco.1.")
dept=str_c("mandats.1..mandat.",i,"..election.1..lieu.1..numDepartement.1.")
acteur=str_c("mandats.1..mandat.",i,"..acteurRef.1.")
mandat=str_c("mandats.1..mandat.",i,"..uid.1.")
# Retrieve the column index of the data frame whose name matches each of the desired variables
k=which(colnames(t3) == circo)
l=which(colnames(t3) == dept)
m=which(colnames(t3) == acteur)
n=which(colnames(t3) == mandat)
# Create a data frame corresponding to this mandate number
t4=data.frame(as.numeric(as.character(t3[,k])),as.numeric(as.character(t3[,l])),str_c(t3[,m],t3[,n]))
colnames(t4)=c("circo","dept","identifiant")
# And append these deputies to the overall table
res=rbind(res,t4)
}
}
res$iden=str_c(res$dept,res$circo)
res$nbVote=0
res$oui=0
res$non=0
res$abst=0
res$nomOrgane=0

Then, to retrieve the votes of each deputy at each ballot, we download the XML file from the data.assemblee-nationale.fr website.

path="http://data.assemblee-nationale.fr/static/openData/repository/LOI/scrutins/Scrutins_XIV.xml.zip"
dest="Scrutins_XIV.xml.zip"
download(path,destfile=dest,mode="wb")
loc=paste(getwd(),"/",dest,sep="" )
unzip(loc)
dest="Scrutins_XIV.xml"
loc=paste(getwd(),"/",dest,sep="" )

By walking down the hierarchy of the XML file (using the xml_children function), we obtain the children of each node.

t=read_xml(loc)
liste=xml_children(t)

The first level of nodes corresponds to the set of ballots. Looping over each of them, we retrieve from its children the result of the vote of each deputy for that ballot. We restrict ourselves to ballots whose publication mode (the 14th child) is "Decompte nominatif": indeed, for ballots published as "Decompte Dissident", only the votes of the deputies who did not vote with the majority are recorded, which would later introduce a bias.

The 16th child corresponds to the vote breakdown (ventilation) variable. This node gathers all the data about the outcome of the vote. For each group represented at the Assembly and present in the ventilation child, we retrieve the vote of each deputy of the group, whose possible values are "Non Votant", "Pour", "Contre" and "Abstention". Looping over all the ballots, we also count the number of votes cast by each deputy, and the name of his or her group (organeRef).

for (a in liste){
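# keep only the ballots whose publication mode (14th child) is a nominative count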
if(xml_text(xml_children(a)[14])=="DecompteNominatif"){
numero=xml_text(xml_children(a)[2])
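# 16th child: the vote breakdown (ventilation) node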
ventil=xml_children(a)[16]
groupe=xml_children(xml_children(xml_children(ventil)))
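# temporary column recording each deputy's vote (NV/POUR/CTRE/ABST) in this ballot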
tempDataFrame=data.frame(res$identifiant,NA)
colnames(tempDataFrame)=c("identifiant",str_c("scrutin",numero))
for (b in groupe){
intermediaire=xml_children(xml_children(b))
nomGroupe=xml_text(xml_children(b)[1])
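# i indexes the vote type: 1 = non-voting, 2 = for, 3 = against, 4 = abstention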
for (i in 1:4){
df3=data.frame(xml_text(xml_children(xml_children(intermediaire[3])[i])))
if(nrow(df3)!=0){
for (j in df3[,]){
if (i==1){
j=strsplit(strsplit(strsplit(j,"MG")[[1]],"PSE")[[1]],"PAN")[[1]]
}
res[res$identifiant==j,]$nomOrgane=as.character(nomGroupe)
if(i!=1)
res[res$identifiant==j,]$nbVote=res[res$identifiant==j,]$nbVote+1
if (i==1)
tempDataFrame[tempDataFrame$identifiant==j,2]="NV"
if (i==2){
res[res$identifiant==j,]$oui=res[res$identifiant==j,]$oui+1
tempDataFrame[tempDataFrame$identifiant==j,2]="POUR"
}
if (i==3){
res[res$identifiant==j,]$non=res[res$identifiant==j,]$non+1
tempDataFrame[tempDataFrame$identifiant==j,2]="CTRE"
}
if (i==4){
res[res$identifiant==j,]$abst=res[res$identifiant==j,]$abst+1
tempDataFrame[tempDataFrame$identifiant==j,2]="ABST"
}}}}}
res=data.frame(res, tempDataFrame[,2])
}}
res2=subset(res,res$dept<96)
res2$circo=ifelse(nchar(res2$circo)==1, str_c("0",res2$circo) , str_c("",res2$circo))
res2$dept=ifelse(nchar(res2$dept)==1, str_c("0",res2$dept) , str_c("",res2$dept))
res2$iden=str_c(res2$dept,res2$circo)
res2=subset(res2,!is.na(iden))
res2=res2[order(res2$iden),]
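# total number of votes per constituency (IDEN), summing over all the deputies who held the seat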
parIden=aggregate(res2$nbVote, by=list(res2$iden), sum)

For the sake of visualization, we restrict ourselves here to metropolitan France. The IDEN code of each constituency had to be standardized so that it matches the format used by the map of France (DDCC, two digits for the département, two for the constituency). The base map is the following

path="http://www.laspic.eu/data/documents/circosshp_v3.zip"
dest="circosshp_v3.zip"
download(path,destfile=dest,mode="wb")
loc=paste(getwd(),"/",dest,sep="" )
unzip(loc)
dest="circosSHP_v3.RData"
loc=paste(getwd(),"/",dest,sep="" )
load(loc)

We finally aggregate in order to obtain a number of votes per constituency. Note that in some constituencies, the deputy resigned or died and was therefore replaced by another deputy. I chose to sum the votes of all the deputies who represented a given constituency when mapping it: we reason per constituency, not per representative.

nuancier <- findColours(classIntervals(parIden$x, 6, style = "quantile"), smoothColors("white",98,"#0C3269"))
plot(fdc, col=nuancier)
leg <- findColours(classIntervals( round(parIden$x), 6, style="quantile"), smoothColors("white",98,"#0C3269"), under="moins de", over="plus de", between="–",cutlabels=FALSE)
legend("bottomleft",fill=attr(leg, "palette"), legend=names(attr(leg,"table")),title = "Nombre de Votes",bty="n")
title( main="Nombre de votes par circonscription",cex.main=1.5)

parIdenScrutin=aggregate(res2[,9:ncol(res2)], by=list(res2$iden),na.omit)
for (i in 1:nrow(parIdenScrutin))
parIdenScrutin$nomOrgane[i]=ifelse(parIdenScrutin$nomOrgane[i][[1]][1]!="0",as.character(parIdenScrutin$nomOrgane[i][[1]][1]),as.character(parIdenScrutin$nomOrgane[i][[1]][2]))
for (i in 3:ncol(parIdenScrutin))
parIdenScrutin[,i]=as.factor(levels(parIdenScrutin[,i][[1]])[as.numeric(parIdenScrutin[,i])])
parIdenScrutin=replace(parIdenScrutin,is.na(parIdenScrutin),"NV")
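# map the organeRef codes to parliamentary group labels (SER, LR, RRDP, UDI, GDR, NI)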
for (i in 1:nrow(parIdenScrutin)) {
if (parIdenScrutin$nomOrgane[i]=="PO656014" || parIdenScrutin$nomOrgane[i]=="PO713077" || parIdenScrutin$nomOrgane[i]=="PO656002")
parIdenScrutin$nomOrgane[i] <- "SER"
if (parIdenScrutin$nomOrgane[i] == "PO656006" || parIdenScrutin$nomOrgane[i] =="PO707869")
parIdenScrutin$nomOrgane[i] <- "LR"
if (parIdenScrutin$nomOrgane[i] == "PO656022")
parIdenScrutin$nomOrgane[i] <- "RRDP"
if (parIdenScrutin$nomOrgane[i] == "PO656010")
parIdenScrutin$nomOrgane[i] <- "UDI"
if (parIdenScrutin$nomOrgane[i] == "PO656018")
parIdenScrutin$nomOrgane[i] <- "GDR"
if (parIdenScrutin$nomOrgane[i] == "PO645633")
parIdenScrutin$nomOrgane[i] <- "NI"
}
parIdenScrutin$Group.1 <- as.factor(parIdenScrutin$Group.1)
parIdenScrutin$nomOrgane <- as.factor(as.character(parIdenScrutin$nomOrgane))

We can finish with a small correspondence analysis: the deputies voted in 644 ballots (in columns). For each ballot, a deputy may have voted for (POUR), against (CTRE), abstained (ABST), or not shown up. Keeping only the first two principal axes, we obtain

acm <- MCA(parIdenScrutin, quali.sup=2, graph=FALSE)
head(acm$eig,5)
plot(acm$ind$coord[, 1:2], type= "n", xlab=paste0("Axe 1 (" , round(acm$eig[1,2], 1), " %)"), ylab=paste0("Axe 2 (", round(acm$eig[2,2], 1), " %) "), main= "Nuage des individus selon les partis", cex.main=1, cex.axis=1, cex.lab=1, font.lab=3)
abline(h=0, v=0, col= "grey", lty=3, lwd=1)
points(acm$ind$coord[,1:2], col=as.numeric(parIdenScrutin$nomOrgane), pch=19, cex=0.5)
legend("topleft", legend=levels(parIdenScrutin$nomOrgane), bty= "o", text.col=1:10, col=1:10, pch=18, cex=0.8)
text(acm$ind$coord[,1:2], labels=rownames(acm$ind$coord), col=as.numeric(parIdenScrutin$nomOrgane), cex=0.7, pos=4)

Here we have the cloud of individuals. They are positioned according to their voting behaviour: the closest individuals are those who vote roughly the same way, while distant individuals vote differently at each ballot. The colors represent the party each deputy belongs to; they are shown for information, to validate (or not) the hypothesis of "group voting". On this graph, among the represented groups, two are very clear, on the left and on the right. A third one stands out in the upper-centre part of the graph, and finally a fourth group in the upper-right part.

If we focus on the colors (the parties), the two very distinct opposing groups are Les Républicains (LR, right-wing, on the left of the projection) and the Socialiste, Ecologiste et Républicain group (SER, left-wing, on the right of the projection). The third distinct group is the far-left GDR group (Gauche démocrate et républicaine). These three groups stand apart because they vote in opposite ways. Then, still looking at the parties, we note that the Union des Démocrates et Indépendants (UDI, right-wing) vote close to the LR group. The Radical, Républicain, démocrate et progressiste group (RRDP) votes in a way similar to SER. The votes of the non-registered deputies (NI) vary a lot from one ballot to the next; they are not close to any party.