Classification from scratch, logistic with kernels 3/8

Third post of our series on classification from scratch, following the previous post introducing smoothing techniques with b-splines. We consider here kernel-based techniques. Note that we do not use the “logistic” model here… it is purely non-parametric.

Kernel-based estimator, from scratch

I like kernels because they are somehow very intuitive. With GLMs, the goal is to estimate \hat{m}(\mathbf{x})=\mathbb{E}(Y|\mathbf{X}=\mathbf{x}). Heuristically, we want to compute the (conditional) expected value on the neighborhood of \mathbf{x}. If we consider some spatial model, where \mathbf{x} is the location, we want the expected value of some variable Y “on the neighborhood” of \mathbf{x}. A natural approach is to use some administrative region (county, département, region, etc.). This means that we have a partition of \mathcal{X} (the space where the variable(s) lie). This will yield the regressogram, introduced in Tukey (1961). For convenience, assume some interval / rectangle / box type of partition. In the univariate case, consider \hat{m}_{\mathbf{a}}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))} or the moving regressogram \hat{m}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])} In that case, the neighborhood is defined as the interval (x\pm h). That’s nice, but clearly very simplistic. If \mathbf{x}_i=\mathbf{x} and \mathbf{x}_j=\mathbf{x}-h+\varepsilon (with \varepsilon>0), both observations are used to compute the conditional expected value. But if \mathbf{x}_{j'}=\mathbf{x}-h-\varepsilon, only \mathbf{x}_i is considered, even if the distance between \mathbf{x}_{j} and \mathbf{x}_{j'} is extremely small. Thus, a natural idea is to use weights that are functions of the distance between the \mathbf{x}_{i}‘s and \mathbf{x}. Use \tilde{m}(x)=\frac{\sum_{i=1}^ny_i\cdot k_h\left({x-x_i}\right)}{\sum_{i=1}^nk_h\left({x-x_i}\right)} where (classically) k_h(x)=k\left(\frac{x}{h}\right) for some kernel k (a non-negative function that integrates to one) and some bandwidth h. Usually, kernels are denoted with a capital letter K, but I prefer to use k, because it can be interpreted as the density of some random noise we add to all observations (independently).
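To fix ideas, here is a minimal sketch of the moving regressogram (not in the original post), using the myocarde dataset introduced below, with INSYS as the covariate; the half-window h=3 is an arbitrary choice for illustration.

moving_regressogram = function(x, h=3){
  # average of PRONO among observations with INSYS in [x-h, x+h]
  idx = which(abs(myocarde$INSYS - x) <= h)
  mean(myocarde$PRONO[idx])   # NaN if the window is empty
}
u = seq(5, 55, length=201)
v = Vectorize(moving_regressogram)(u)
plot(u, v, ylim=0:1, type="s", col="red")
points(myocarde$INSYS, myocarde$PRONO, pch=19)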

Actually, one can derive that estimate by using kernel-based estimators of densities. Recall that\tilde{f}(\mathbf{y})=\frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n k\left(\mathbf{H}^{-1/2}(\mathbf{y}-\mathbf{y}_i)\right)
Now, use the fact that the expected value can be defined as m(x)=\int yf(y|x)dy=\frac{\int y f(y,x)dy}{\int f(y,x)dy}. Consider now a bivariate (product) kernel to estimate the joint density. The numerator is estimated by \frac{1}{nh}\sum_{i=1}^n\int y_i k\left(t,\frac{x-x_i}{h}\right)dt=\frac{1}{nh}\sum_{i=1}^ny_i \kappa\left(\frac{x-x_i}{h}\right) while the denominator is estimated by \frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right). In a general setting, we still use product kernels between Y and \mathbf{X} and write \widehat{m}_{\mathbf{H}}(\mathbf{x})=\displaystyle{\frac{\sum_{i=1}^ny_i\cdot k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}{\sum_{i=1}^n k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}} for some symmetric positive definite bandwidth matrix \mathbf{H}, and k_{\mathbf{H}}(\mathbf{x})=\det[\mathbf{H}]^{-1}k(\mathbf{H}^{-1}\mathbf{x})
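To make the computation of the numerator explicit (a short worked step, not in the original post), write the product kernel as k(u,v)=\kappa(u)\kappa(v), with \kappa symmetric and integrating to one. Then \int y\,\tilde{f}(y,x)dy=\frac{1}{nh^2}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right)\int y\,\kappa\left(\frac{y-y_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^n y_i\,\kappa\left(\frac{x-x_i}{h}\right) using the change of variable t=(y-y_i)/h (so that dy=h\,dt), \int\kappa(t)dt=1 and \int t\,\kappa(t)dt=0.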

Now that we know what kernel estimates are, let us use them. For instance, assume that k is the density of the \mathcal{N}(0,1) distribution. At point x, with a bandwidth h we get the following code

mean_x = function(x,bw){
  w = dnorm((myocarde$INSYS-x)/bw, mean=0,sd=1)
  weighted.mean(myocarde$PRONO,w)}
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,3))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


and of course, we can change the bandwidth.

v = Vectorize(function(x) mean_x(x,2))(u)
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


We observe what we can read in any textbook: with a smaller bandwidth, we get more variance and less bias. “More variance” means here more variability (since the neighborhood is smaller, there are fewer points to compute the average, and the estimate is more volatile), and “less bias” in the sense that the expected value is supposed to be computed at point x, so the smaller the neighborhood, the better.

Using the ksmooth R function

Actually, there is a function in R to compute this kernel regression.

reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = 2*exp(1))
plot(reg$x,reg$y,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)

We can replicate our previous estimate. Nevertheless, the output is not a function, but two vectors (of x and y values). That’s nice to get a graph, but that’s all we get. Furthermore, as we can see, the bandwidth is not exactly the same as the one we used before. I did not find any information online about that scaling, so I tried to match it with the function we wrote before

g=function(bk=3){
reg = ksmooth(myocarde$INSYS,myocarde$PRONO,"normal",bandwidth = bk)
f=function(bm){
  v = Vectorize(function(x) mean_x(x,bm))(reg$x)
  z=reg$y-v
  sum((z[!is.na(z)])^2)}
optim(bk,f)$par}
x=seq(1,10,by=.1)
y=Vectorize(g)(x)
plot(x,y)
abline(0,exp(-1),col="red")
abline(0,.37,col="blue")


There is a slope of 0.37, which is actually e^{-1}. Coincidence ? I don’t know to be honest…
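One possible explanation (a hedged aside, not in the original post): the ksmooth documentation states that the kernel is scaled so that its quartiles are at ±0.25*bandwidth. For a Gaussian kernel, the standard deviation used internally is therefore 0.25*bandwidth/qnorm(0.75), which is numerically close to (but not exactly) e^{-1}:

# ksmooth scales the Gaussian kernel so that its quartiles are at +/- 0.25*bandwidth
0.25/qnorm(0.75)   # approximately 0.3706
exp(-1)            # approximately 0.3679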

Application in higher dimension

Consider now our bivariate dataset, and consider some product of univariate (Gaussian) kernels

u = seq(0,1,length=101)
p = function(x,y){
  bw1 = .2; bw2 = .2
  w = dnorm((df$x1-x)/bw1, mean=0,sd=1)*
      dnorm((df$x2-y)/bw2, mean=0,sd=1)
  weighted.mean(df$y=="1",w)
}
v = outer(u,u,Vectorize(p))
image(u,u,v,col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

We get the following prediction

Here, the different colors are probabilities.

k-nearest neighbors

An alternative is to consider a neighborhood defined not through a distance threshold around point \mathbf{x}, but through the k nearest neighbors, among the n observations we have.\tilde{m}_k(\mathbf{x})=\frac{1}{n}\sum_{i=1}^n\omega_{i,k}(\mathbf{x})y_i
where \omega_{i,k}(\mathbf{x})=n/k if i\in\mathcal{I}_{\mathbf{x}}^k with
\mathcal{I}_{\mathbf{x}}^k=\{i:\mathbf{x}_i\text{ one of the }k\text{ nearest observations to }\mathbf{x}\}
The difficult part here is that we need a valid distance. If units are very different on each component, using the Euclidean distance will be meaningless. So, quite naturally, let us consider here the Mahalanobis distance

Sigma = var(myocarde[,1:7])
Sigma_Inv = solve(Sigma)
d2_mahalanobis = function(x,y,Sinv){as.numeric(x-y)%*%Sinv%*%t(x-y)}
k_closest = function(i,k){
  # squared Mahalanobis distances from observation i to all observations
  vect_dist = function(j) d2_mahalanobis(myocarde[i,1:7],myocarde[j,1:7],Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(myocarde))
  # indices of the k closest observations (observation i itself has distance 0)
  which(rank(vect)<=k)}

Here we have a function to find the k closest neighbors of some observation. Then two things can be done to get a prediction. The goal is to predict a class, so we can think of using a majority rule: the prediction for y_i is the class of the majority of its neighbors.

k_majority = function(k){
  Y=rep(NA,nrow(myocarde))
  # for each observation, take the median of PRONO among its k nearest neighbors
  # (i.e. the majority class, for odd k)
  for(i in 1:length(Y)) Y[i] = sort(myocarde$PRONO[k_closest(i,k)])[(k+1)/2]
  return(Y)}

But we can also compute the proportion of black points among the closest neighbors. It can actually be interpreted as the probability of being black (that’s actually what was said at the beginning of this post, with kernels),

k_mean = function(k){
  Y=rep(NA,nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = mean(myocarde$PRONO[k_closest(i,k)])
  return(Y)}

We can see, on our dataset, the observed value, the prediction based on the majority rule, and the proportion of survivors (PRONO equal to 1) among the 7 closest neighbors

cbind(OBSERVED=myocarde$PRONO,
MAJORITY=k_majority(7),PROPORTION=k_mean(7))
      OBSERVED MAJORITY PROPORTION
 [1,]        1        1  0.7142857
 [2,]        0        1  0.5714286
 [3,]        0        0  0.1428571
 [4,]        1        1  0.5714286
 [5,]        0        1  0.7142857
 [6,]        0        0  0.2857143
 [7,]        1        1  0.7142857
 [8,]        1        0  0.4285714
 [9,]        1        1  0.7142857
[10,]        1        1  0.8571429
[11,]        1        1  1.0000000
[12,]        1        1  1.0000000

Here, we got a prediction for an observed point, located at \boldsymbol{x}_i, but actually, it is possible to seek the k closest neighbors of any point \boldsymbol{x}. Back on our univariate example (to get a graph), we have

mean_x = function(x,k=9){
  w = rank(abs(myocarde$INSYS-x),ties.method ="random")
  mean(myocarde$PRONO[which(w<=k)])}
u = seq(5,55,length=201)
v = Vectorize(function(x) mean_x(x,9))(u)
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)


That’s not very smooth, but we do not have a lot of points either.

If we use that technique on our two-dimensional dataset, we obtain the following

Sigma_Inv = solve(var(df[,c("x1","x2")]))
u = seq(0,1,length=51)
p = function(x,y){
  k = 6
  vect_dist = function(j)  d2_mahalanobis(c(x,y),df[j,c("x1","x2")],Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(df)) 
  idx  = which(rank(vect)<=k)
  return(mean((df$y==1)[idx]))}
v = outer(u,u,Vectorize(p))
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)

This is the idea of local inference, using either a kernel on a neighborhood of \mathbf{x} or simply the k nearest neighbors. Next time, we will investigate penalized logistic regressions. To be continued…

Classification from scratch, logistic with splines 2/8

Today, second post of our series on classification from scratch, following the brief introduction on the logistic regression.

Piecewise linear splines

To illustrate what’s going on, let us start with a “simple” regression (with only one explanatory variable). The underlying idea is natura non facit saltus, “nature does not make jumps”, i.e. the processes governing natural things are continuous. That seems to be a rather strong assumption, because we could assume that there is a fixed threshold to explain death. For instance, if patients die (for sure) when the “stroke index” exceeds a threshold, we might expect some discontinuity. Except that if that threshold is heterogeneous (a non-observable continuous variable), then we get back to the continuity assumption.

The simplest model we can think of to extend the linear model we’ve seen in the previous post is a piecewise linear function, with two parts: small values of x, and larger values of x. The most convenient way to do so is to use the positive part function (x-s)_+, which is the difference between x and s if that difference is positive, and 0 otherwise. For instance \beta_1 x+\beta_2(x-s)_+ is a piecewise linear function, continuous, with a “rupture” at knot s.

Observe also the following interpretation: for small values of x, the slope is \beta_1, and for larger values of x, the slope becomes \beta_1+\beta_2. Hence, \beta_2 is interpreted as a change of the slope.

And of course, it is possible to consider more than one knot. The function to get the positive part is the following

pos = function(x,s) (x-s)*(x>=s)

then we can use it directly in our regression model

reg = glm(PRONO~INSYS+pos(INSYS,15)+
pos(INSYS,25),data=myocarde,family=binomial)

The output of the regression is here

summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)     -0.1109     3.2783  -0.034   0.9730  
INSYS           -0.1751     0.2526  -0.693   0.4883  
pos(INSYS, 15)   0.7900     0.3745   2.109   0.0349 *
pos(INSYS, 25)  -0.5797     0.2903  -1.997   0.0458 *

Hence, the original slope, for very small values, is not significant, but above 15, it becomes significantly positive. And above 25, there is a significant change again. We can plot it to see what’s going on

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,type="l")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() linear splines

Using the bs() function (as in GAM-type regressions), things are slightly different. We will use here so-called B-splines,

library(splines)

We can define spline functions with support (5,55) and with knots \{15,25\}

clr6 = c("#1b9e77","#d95f02","#7570b3","#e7298a","#66a61e","#e6ab02")
x = seq(0,60,by=.25)
B = bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=1)
matplot(x,B,type="l",lty=1,lwd=2,col=clr6)


as we can see, the functions defined here are different from the ones before, but we still have (piecewise) linear functions on each segment (5,15), (15,25) and (25,55). Linear combinations of those functions (the two sets of functions) generate the same space. Said differently, even if the interpretation of the output is different, the predictions should be the same

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=1),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept)    -0.9863     2.0555  -0.480   0.6314  
bs(INSYS,..)1  -1.7507     2.5262  -0.693   0.4883  
bs(INSYS,..)2   4.3989     2.0619   2.133   0.0329 *
bs(INSYS,..)3   5.4572     5.4146   1.008   0.3135

Observe that there are three coefficients, as before, but again, the interpretation is here more complicated…

v=predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Nevertheless, the prediction is the same… and that’s nice.

Piecewise quadratic splines

Let us go one step further… Can we also have continuity of the derivative? Yes, and that’s actually easy, considering parabolic functions. Instead of using a decomposition on x,(x-s_1)_+ and (x-s_2)_+, consider now a decomposition on x,x^{\color{red}{2}},(x-s_1)^{\color{red}{2}}_+ and (x-s_2)^{\color{red}{2}}_+.

 pos2 = function(x,s) (x-s)^2*(x>=s)
reg = glm(PRONO~poly(INSYS,2)+pos2(INSYS,15)+pos2(INSYS,25),
data=myocarde,family=binomial)
summary(reg)
 
Coefficients:
                Estimate Std. Error z value Pr(>|z|)  
(Intercept)      29.9842    15.2368   1.968   0.0491 *
poly(INSYS, 2)1 408.7851   202.4194   2.019   0.0434 *
poly(INSYS, 2)2 199.1628   101.5892   1.960   0.0499 *
pos2(INSYS, 15)  -0.2281     0.1264  -1.805   0.0712 .
pos2(INSYS, 25)   0.0439     0.0805   0.545   0.5855

As expected, there are here five coefficients: the intercept and two for the part on the left (three parameters for the parabolic function), and then two additional terms for the part in the center – here (15,25) – and for the part on the right. Of course, for each portion, there is only one degree of freedom since we have a parabolic function (three coefficients) but two constraints (continuity, and continuity of the first order derivative).
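To see where those two constraints come from (a short worked check, not in the original post): at knot s, the truncated term (x-s)^2_+ and its first derivative both vanish, so the value and the slope automatically match on both sides of the knot, and only the curvature can jump, (x-s)^2_+\big|_{x=s}=0, \qquad \frac{d}{dx}(x-s)^2_+\big|_{x=s}=2(x-s)_+\big|_{x=s}=0, \qquad \frac{d^2}{dx^2}(x-s)^2_+=2\cdot\mathbf{1}(x>s). Hence each term \beta\,(x-s)^2_+ adds a single free parameter: a jump of 2\beta in the second derivative at s.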

On a graph, we get the following

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2,xlab="INSYS",ylab="")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Using bs() quadratic splines

Of course, we can do the same with our R function. But as before, the basis of functions is expressed differently here

x = seq(0,60,by=.25)
B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=2)
matplot(x,B,type="l",xlab="INSYS",col=clr6)


If we run R code, we get

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=2),data=myocarde,
family=binomial)
summary(reg)
 
Coefficients:
               Estimate Std. Error z value Pr(>|z|)  
(Intercept)       7.186      5.261   1.366   0.1720  
bs(INSYS, ..)1  -14.656      7.923  -1.850   0.0643 .
bs(INSYS, ..)2   -5.692      4.638  -1.227   0.2198  
bs(INSYS, ..)3   -2.454      8.780  -0.279   0.7799  
bs(INSYS, ..)4    6.429     41.675   0.154   0.8774

But that’s not really a big deal since the prediction is exactly the same

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red")
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)

Cubic splines

Last, but not least, we can reach the cubic splines. With our previous notions, we would consider a decomposition on (guess what) x,x^2,x^{\color{red}{3}},(x-s_1)^{\color{red}{3}}_+,(x-s_2)^{\color{red}{3}}_+, to get this time continuity, as well as continuity of the first two derivatives (and to get a very smooth function, since even variations will be smooth). If we use the bs function, the basis is the following

B=bs(x,knots=c(15,25),Boundary.knots=c(5,55),degree=3)
matplot(x,B,type="l",lwd=2,col=clr6,lty=1,ylim=c(-.2,1.2))
abline(v=c(5,15,25,55),lty=2)

and the prediction will now be

reg = glm(PRONO~bs(INSYS,knots=c(15,25),
Boundary.knots=c(5,55),degree=3),
data=myocarde,family=binomial)
u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=c(5,15,25,55),lty=2)


Two last things before concluding (for today), the location of the knots, and the extension to additive models.

Location of knots

In many applications, we do not want to specify the location of the knots. We just want – say – three (intermediary) knots. This can be done using

reg = glm(PRONO~1+bs(INSYS,degree=1,df=4),data=myocarde,family=binomial)

We can actually get the locations of the knots by looking at

attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 1L, knots = c(15.8, 21.4, 27.15), 
Boundary.knots = c(8.7, 54), intercept = FALSE)

which provides us with the location of the boundary knots (the minimum and the maximum of our sample) but also the three intermediary knots. Observe that actually, those five values are just (empirical) quantiles

quantile(myocarde$INSYS,(0:4)/4)
   0%   25%   50%   75%  100% 
 8.70 15.80 21.40 27.15 54.00

If we plot the prediction, we get

v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


If we get back to what was computed before the logit transformation, we clearly see ruptures at the different quantiles

B = bs(x,degree=1,df=4)
B = cbind(1,B)
y = B%*%coefficients(reg)
plot(x,y,type="l",col="red",lwd=2)
abline(v=quantile(myocarde$INSYS,(0:4)/4),lty=2)


Note that if we do not specify anything about the knots (number or location), we get no knots…

reg = glm(PRONO~1+bs(INSYS,degree=2),data=myocarde,family=binomial)
attr(reg$terms, "predvars")[[3]]
bs(INSYS, degree = 2L, knots = numeric(0), 
Boundary.knots = c(8.7,54), intercept = FALSE)

and if we look at the prediction

u = seq(5,55,length=201)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)


it is actually the same as a quadratic regression (as expected)

reg = glm(PRONO~1+poly(INSYS,degree=2),data=myocarde,family=binomial)
v = predict(reg,newdata=data.frame(INSYS=u),type="response")
plot(u,v,ylim=0:1,type="l",col="red",lwd=2)
points(myocarde$INSYS,myocarde$PRONO,pch=19)

Additive models

Consider now the second dataset, with two variables. Consider here a model like
\mathbb{P}[Y=1|X_1=x_1,X_2=x_2]=\frac{\exp[\eta(x_1,x_2)]}{1+\exp[\eta(x_1,x_2)]}
where
\eta(x_1,x_2)=\beta_0+\color{red}{s_1(x_1)}+\color{blue}{s_2(x_2)}
\color{red}{s_1(x_1)}=\beta_{1,0}x_1+\beta_{1,1}(x_1-s_{11})_++\beta_{1,2}(x_1-s_{12})_+
and
\color{blue}{s_2(x_2)}=\beta_{2,0}x_2+\beta_{2,1}(x_2-s_{21})_++\beta_{2,2}(x_2-s_{22})_+
It might seem a little bit restrictive, but that’s actually the idea of additive models.

reg = glm(y~bs(x1,degree=1,df=3)+bs(x2,degree=1,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Now, if we think about it, we’ve been able to get a “perfect” model, so, somehow, it seems no longer continuous…

persp(u,u,v,theta=20,phi=40,col="green")


Of course, it is… it is piecewise linear, with hyperplanes, some being almost vertical.

And one can also consider piecewise quadratic functions

reg = glm(y~bs(x1,degree=2,df=3)+bs(x2,degree=2,df=3),data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(df$x1,df$x2,pch=19,cex=1.5,col="white")
points(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Funny thing, we now have two “perfect” models, with different areas for the white and the black dots… Don’t ask me how to choose on that one.

In R, it is possible to use the mgcv package to run a GAM regression. It is used for generalized additive models, but with a single covariate it is difficult to see the “additive” part, actually. And to be more specific, mgcv is using penalized quasi-likelihood from the nlme package (but we’ll get back to penalized routines later on).

But maybe I should first mention another smoothing tool, kernels (and maybe also k-nearest neighbors). To be continued…

Classification from scratch, logistic regression 1/8

Let us start today our series on classification from scratch

The logistic regression is based on the assumption that given covariates \mathbf{x}, Y has a Bernoulli distribution,Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}The goal is to estimate parameter \mathbf{\beta}.

Recall that the heuristics for the use of that function for the probability is that\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}
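Indeed, a one-line check (using the expression of p_{\mathbf{x}} above): since 1-p_{\mathbf{x}}=\frac{1}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}, we get \log\frac{p_{\mathbf{x}}}{1-p_{\mathbf{x}}}=\log\left(\exp[\mathbf{x}^T\mathbf{\beta}]\right)=\mathbf{x}^T\mathbf{\beta}.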

Maximum of the (log)-likelihood function

The log-likelihood is here\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i) where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function

y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
 -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
 }

We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm

beta_init = lm(PRONO~.,data=myocarde)$coefficients

Why not start with the OLS estimates? Somehow, we might think that at least the signs should be OK, for instance. Anyway, we need a starting point, and let us use that one.

logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE, method = "BFGS", control=list(abstol=1e-9))

Here, we obtain

 logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS    
 1.656926397  0.045234029 -2.119441743  0.204023835 
       PRDIA        PAPUL        PVENT        REPUL 
-0.102420095  0.165823647 -0.081047525 -0.005992238

Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)

simu = function(i){
logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init, 
negLogLik, hessian=TRUE, method = "BFGS", 
control=list(abstol=1e-9))
logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])

Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine

library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) 
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) + 
(1-vY)*(-log(1 + exp(mX %*% vBeta)))) 
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable, 
method = 'L-BFGS-B', gr = likelihoodScore, 
mX = X, vY = y, hessian=TRUE)

The optimum is here

attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244

Let’s be honest here, I do not feel comfortable with those techniques. So, what happened here?

Here, the technique we use is based on the following idea,\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}The problem is that my computer does not know these first and second derivatives. So it will compute them using approximation techniques.

Actually, it is possible to use functions dedicated to such computation

library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
 -sum(y*log(logit(X%*%beta)) + 
(1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y)) 
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
return(result)
}

With our OLS starting point, we obtain

opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315

But if we try with another starting point

opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064

Clearly, some coefficients are rather close. But others aren’t. From my point of view, that is a major problem (keep in mind that we do not deal here with massive data! There are only 7 explanatory variables, and only 71 observations).

Why not try to be clever, and use the analytical values of those derivatives? Even if some people claim the opposite, it can actually be useful to do the maths, instead of considering only numerical values.

Newton (or Fisher) Algorithm

If you open any Econometrics textbooks (one can also try to derive it), you will get \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
while\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}

Y=myocarde$PRONO
X=cbind(1,as.matrix(myocarde[,1:7]))
colnames(X)=c("Inter",names(myocarde[,1:7]))
 beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)
 for(s in 1:9){
   # predicted probabilities at the current beta
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   # gradient of the log-likelihood
   gradient=t(X)%*%(Y-pi)
   # Hessian = -X' Omega X, with Omega = diag(pi(1-pi))
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   # Newton step, stored as a new column
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}

Observe that here, I use fewer than ten iterations of the algorithm!

 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point

beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
 for(s in 1:9){
   pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
   gradient=t(X)%*%(Y-pi)
   omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
   Hessian=-t(X)%*%omega%*%X
   beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
 beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187

Nice, isn’t it? Looks like we got our winner, don’t we? And one can use the inverse of the Hessian matrix to get standard deviations.
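For instance (a minimal sketch, not in the original post, reusing the Hessian from the last iteration of the loop above), the variance matrix of the estimator is the inverse of minus that Hessian, and the standard deviations are the square roots of its diagonal

V = solve(-Hessian)   # (X' Omega X)^{-1}, the estimated variance matrix
sqrt(diag(V))         # standard deviations of the coefficients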

Weighted Least-Squares

Let us go one step further. We’ve seen that we want to compute something like\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}(if we do substitute matrices in the analytical expressions) where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that’s simply a standard weighted least-squares problem\mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbraceThe only problem here is that the weights \mathbf{\Delta}_{old} are functions of the unknown \mathbf{\beta}_{old}. But actually, if we keep iterating, we should be able to solve it: given \mathbf{\beta} we get the weights, and with the weights, we can use weighted OLS to get an updated \mathbf{\beta}. That’s the idea of iteratively reweighted least squares.
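To connect this with the Newton step above (a short worked step, not in the original post): plugging the gradient and the Hessian into the update gives \mathbf{\beta}_{new}=\mathbf{\beta}_{old}+(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T[\mathbf{y}-\mathbf{p}_{old}]=(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\big[\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}(\mathbf{y}-\mathbf{p}_{old})\big]=(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z} which is exactly the weighted least-squares solution above.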

The algorithm will be

df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
p = exp(X %*% beta) / (1+exp(X %*% beta))
omega = diag(nrow(df))
diag(omega) = (p*(1-p))
df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)
beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients
}

and the output is here

 beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA 
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171 
        PAPUL         PVENT         REPUL 
  0.016756506  -0.106776012  -0.003154187

which is almost what we’ve obtained before. Nice isn’t it ? Actually, here we also have standard deviations of estimators

summary( lm(Z~.,data=df[,-8], weights=diag(omega)))
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475

The standard glm function

Of course, it is possible to use an R built-in function to get our estimate

summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))
 
Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519

Application and visualisation

Let us visualize the prediction obtained from the logistic regression, on our second dataset

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)


Here, level curves – or iso-probability curves – are linear, so the space is divided in two (0 and 1, survival and death, white and black) by a straight line (or a hyperplane in higher dimension). Furthermore, since we have a linear model, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane), parallel to the first one.
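To visualize that last claim (a minimal sketch, not in the original post, reusing the u and v objects computed above), we can simply add level curves for other cutoffs on top of the previous image; with a linear predictor they appear as parallel straight lines

contour(u,u,v,levels = c(.25,.5,.75),add=TRUE,lty=c(2,1,2))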

Next time, we will introduce splines to smooth those continuous covariates… to be continued.

Classification from scratch, overview 0/8

Before my course on « big data and economics » at the university of Barcelona in July, I wanted to upload a series of posts on classification techniques, to get an insight on machine learning tools.

According to some common idea, machine learning algorithms are black boxes. I wanted to get back on that saying. First of all, isn’t it the case also for regression models, like generalized additive models (with splines) ? Do you really know what the algorithm is doing ? Even the logistic regression. In textbooks, we can easily find math formulas. But what is really done when I run it, in R ?

When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let’s try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R. What I plan to mention, within the next two weeks, will be

I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])

Here is some code to get a visualization of the prediction (here the probability to be a black point)

rmatrix_model = function(model){
u = seq(0,1,length=101)
p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
return(v)}
nice_graph=function(v){
u = seq(0,1,length=101)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))

Note that colors are defined here as

clr10= c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b")

or with some nonlinear model

The second one is a dataset I got from Gilbert Saporta, about heart attacks and death (our binary variable).

myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))

So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), about comparing models, etc. The goal here is simply to understand what’s going on when we call either glm, glmnet, gam, random forest, svm, xgboost, or any function to get a predictive model.

Visualization of Airline Transportation Data

Tuesday, I will be in Paris, as a member of the jury on dataviz, organized by the Direction Generale de l’Aviation Civile, during the “assises nationales du transport aérien“.

In this context, the Ministère de la transition écologique et solidaire is launching a call for projects for the design of an open-source application to ease the visualization and sharing of air transportation data. Traffic volumes (passengers and aircraft movements), delays and emissions around airports are among the data collected by the DGAC, to build an innovative, interactive and educational data-visualization tool for professionals and the general public alike.

There were some nice studies based on those data, available from the dedicated website (even if sometimes it can be hard to get a clear understanding, but that’s actually the main challenge with dataviz)

I can also upload some screenshots from some apps that were submitted, and there were nice things, such as

or the following one

Some candidates were selected to present their viz to the jury, and then, there will be prizes. More to come on Wednesday, probably.

On the interpretation of a regression model

Yesterday, NaytaData (aka @NaytaData ) posted a nice graph on reddit, with bicycle traffic and mean air temperature, in Helsinki, Finland, per day,

I found that graph interesting, so I did ask for the data (NaytaData kindly sent them to me tonight).

df=read.csv("cyclistsTempHKI.csv")
library(ggplot2)
ggplot(df, aes(meanTemp, cyclists)) +
  geom_point() +
  geom_smooth(span = 0.3)

But as mentioned by someone on twitter, the interpretation is somehow trivial : people get out on their bike when the weather is nice. The hotter, the more cyclists on the road. Which is interpreted here in a causal way…

But actually, we can also visualize the data as follows, as suggested by Antoine Chambert-Loir

 ggplot(df, aes(cyclists, meanTemp)) +
  geom_point() +
  geom_smooth(span = 0.3)

The interpretation would be, somehow, that the more cyclists on the road, the hotter it is. Why not consider this causal interpretation here ? Like cyclists go so fast, or sweat so much, that they increase temperature…

Of course, it is the standard (recurrent) discussion “correlation is not causality”, but in regression models, we like to tell a story, to pretend that we have some sort of a causal story. But we do not prove it. Here, we know that the first one is more credible than the second one, but how do we know that ? To go further, how can we use machine learning techniques to prove causal relationships ? How could a machine choose between the first and the second story ?

Some sort of Otto Neurath (isotype picture) map

Yesterday evening, I was walking in Budapest, and I saw a nice map in some sort of Otto Neurath style. It was hand-made, but I thought it should be possible to do it in R, automatically.

A few years ago, Baptiste Coulmont published a nice blog post on the package osmar, that can be used to import OpenStreetMap objects (polygons, lines, etc) in R. We can start from there. More precisely, consider the city of Douai, in France,

The code to read information from OpenStreetMap is the following

library(osmar)
src = osmsource_api()
bb = center_bbox(3.07758808135,50.37404355, 1000, 1000)
ua = get_osm(bb, source = src)

We can extract a lot of things, like buildings, parks, churches, roads, etc. Objects can be matched either on the tag key or on the tag value, so we will use two functions

listek = function(vc,type="polygons"){
nat_ids = find(ua, way(tags(k %in% vc)))
nat_ids = find_down(ua, way(nat_ids))
nat = subset(ua, ids = nat_ids)
nat_poly = as_sp(nat, type)}
 
listev = function(vc,type="polygons"){
  nat_ids = find(ua, way(tags(v %in% vc)))
  nat_ids = find_down(ua, way(nat_ids))
  nat = subset(ua, ids = nat_ids)
  nat_poly = as_sp(nat, type)}

For instance to get rivers, use

W=listek(c("waterway"))

and to get buildings

M=listek(c("building"))

We can also get churches

C=listev(c("church","chapel"))

but also train stations, airports, universities, hospitals, etc. It is also possible to get streets, or roads

H1=listek(c("highway"),"lines")
H2=listev(c("residential","pedestrian","secondary","tertiary"),"lines")

but it will be more difficult to use afterwards, so let’s forget about those.
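Note that the checking and labelling code below also uses a few layers that are not defined in the snippets above (parks P, universities U, and possibly B and T). A minimal sketch of how they could be built with the same helpers; the OSM tag values chosen here are assumptions, for illustration only

P = listev(c("park","garden"))          # green areas (assumed tag values)
U = listev(c("university","college"))   # universities (assumed tag values)
B = NULL                                # other layers used in the checks below,
T = NULL                                # set to NULL so the is.null() guards skip them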

We can check that we have everything we need

plot(M)
plot(W,add=TRUE,col="blue")
plot(P,add=TRUE,col="green")
if(!is.null(B)) plot(B,add=TRUE,col="red")
if(!is.null(C)) plot(C,add=TRUE,col="purple")
if(!is.null(T)) plot(T,add=TRUE,col="red")

Now, let us consider a rectangular grid. If there is a river in a cell, I want a river. If there is a church, I want a church, etc. Since there will be one (and only one) picture per cell, there will be priorities. But first, we have to check the intersections between our grid cells and the OpenStreetMap polygons.

library(sp)
library(raster)
library(rgdal)
library(rgeos)
library(maptools)
identification = function(xy,h,PLG){
  b=data.frame(x=rep(c(xy[1]-h,xy[1]+h),each=2),
               y=c(c(xy[2]-h,xy[2]+h,xy[2]+h,xy[2]-h)))
  pb1=Polygon(b)    
  Pb1 = list(Polygons(list(pb1), ID=1))
  SPb1 = SpatialPolygons(Pb1, proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs +towgs84=0,0,0"))
  UC=gUnionCascaded(PLG)
  return(gIntersection(SPb1,UC))
}

and then, we identify, as follows

whichidtf = function(xy,h){
  h=.7*h
  label="EMPTY"
if(!is.null(identification(xy,h,M))) label="HOUSE"
if(!is.null(identification(xy,h,P))) label="PARK"
if(!is.null(identification(xy,h,W))) label="WATER"
if(!is.null(identification(xy,h,U))) label="UNIVERSITY"
if(!is.null(identification(xy,h,C))) label="CHURCH"
return(label)
}

Let us use colored rectangles to make sure it works
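The grid itself (the vectors vx and vy of cell boundaries, and the half-width h) is not defined in the snippets above; here is a minimal sketch, assuming a regular 25×25 grid built on the bounding box of the buildings

bx = bbox(M)                              # bounding box of the building polygons
vx = seq(bx[1,1], bx[1,2], length=26)     # cell boundaries (x)
vy = seq(bx[2,1], bx[2,2], length=26)     # cell boundaries (y)
h  = diff(vx)[1]/2                        # half-width of a cell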

nx=length(vx)
vx=as.numeric((vx[2:nx]+vx[1:(nx-1)])/2)
ny=length(vy)
vy=as.numeric((vy[2:ny]+vy[1:(ny-1)])/2)
 plot(M,border="white")
 for(i in 1:(nx-1)){
     for(j in 1:(ny-1)){
         lb=whichidtf(c(vx[i],vy[j]),h)
         if(lb=="HOUSE") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="grey")
         if(lb=="PARK") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="green")
         if(lb=="WATER") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="blue")
         if(lb=="CHURCH") rect(vx[i]-h,vy[j]-h,vx[i]+h,vy[j]+h,col="purple")      
     }}

As a first start, let us agree that it works. To use pics, I borrowed them from https://fontawesome.com/. For instance, we can have a tree

 library(png)
 library(grid)
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/tree.png","tree.png")
 tree = readPNG("tree.png")

Unfortunately, the color is not good (it is black), but that’s easy to fix using the RGBA decomposition returned by that package

 rev_tree=tree
 rev_tree[,,2]=tree[,,4]

We can do the same for houses, churches and water actually

 download.file("http://freakonometrics.hypotheses.org/files/2018/05/angle-double-up.png","angle-double-up.png")
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/home.png","home.png")
 download.file("http://freakonometrics.hypotheses.org/files/2018/05/church.png","curch.png")
water = readPNG("angle-double-up.png")
 rev_water=water
 rev_water[,,3]=water[,,4]
 home = readPNG("home.png")
 rev_home=home
 rev_home[,,4]=home[,,4]*.5
 church = readPNG("church.png")
 rev_church=church
 rev_church[,,1]=church[,,4]*.5
 rev_church[,,3]=church[,,4]*.5

and that’s almost it. We can then add them on the map

 plot(M,border="white")
 for(i in 1:(nx-1)){
   for(j in 1:(ny-1)){
     lb=whichidtf(c(vx[i],vy[j]),h)
     if(lb=="HOUSE")  rasterImage(rev_home,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="PARK") rasterImage(rev_tree,vx[i]-h*.9,vy[j]-h*.8,vx[i]+h*.9,vy[j]+h*.8)
     if(lb=="WATER") rasterImage(rev_water,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)
     if(lb=="CHURCH") rasterImage(rev_church,vx[i]-h*.8,vy[j]-h*.8,vx[i]+h*.8,vy[j]+h*.8)     
   }}

Nice, isn’t it? (at least as a first draft, done during the lunch break of the R conference in Budapest, today).


Fake News, Wikipedia and Blockchain (Truth and Consensus)

(this article was initially written in French)

We must not lie, we are taught at a very young age, and yet we all do it all the time. Provocatively, Meyer (2011) says that you will lie to your wife in one in ten conversations. And if you’re not married, the risk would be even higher. Yet we seem to live with these little lies (to the point of being destabilized when we find ourselves in front of an overly honest person). But the debates on the “Fake News” reminded us that some lies have a price, to the point of creating doubt and permanent distrust. This plurality of words, and the absence of a reference word, is not unlike the philosophy conveyed by crypto-currencies: instead of a centralised mode of governance (validation, certification), it is a global validation by a network, a consensus, which will prevail. Have we changed our definition of what truth is?

The story of “the truth”

In Athens, during the pompous period of Athenian democracy (around the middle of the fifth century BC), every citizen (ὁ βουλεύμενος, “whoever wants it”) has the right to speak, to formulate a speech. We will talk about iségorie (ἰσηγορία) equality of speech. As is so often the case, this equality is in fact very unequal, since not everyone is equal in eloquence. In Gorgias, Plato opposes the latter to Socrates, questioning the basis and purpose of this discourse, which can be put at the service of the common good, of truth, but also be used for purposes of persuasion. Socrates seeks the truth, and the moral value of the stated opinions, fundamentally opposing philosophy and rhetoric. But for Plato, truth is not everything: the notion of the common good seems to prevail over truth, a lie can be lawful if it is just (in the service of this common good). Greek myths are built on this idea, as Veyne (1983) develops, wondering whether the Greeks believed in their myths. This is not a lie, but fiction, allegory. The word μῦθος certainly gave the word “mythomaniac” (by adding μανια, madness), but this word initially meant the speech, the word, the story. We could almost find the concept of “lying true” proposed by poet Louis Aragon.

By radically opposing Good and Evil, Christianity has changed our conception of truth. It is appropriate to confess, to admit one’s faults, if one aspires to salvation. Truth becomes sacred (and those who do not live with it will pay with their lives). Ecclesiastical authorities then define what is true; Foucault (1994b) spoke of “pastoral” power. “This form of power is oriented towards salvation (…) it is linked to a production of truth“. The opinion of the people is then guided and controlled by official representatives of the truth (ecclesiastical, then state).

However, this power will crumble over time. The multiple words will reappear, and with them, the “Fake News”. Just after the First World War, Marc Bloch noted that “false news is always born of collective representations that pre-date its birth; it is fortuitous only in appearance, or, more precisely, all that is fortuitous in it is the initial incident, absolutely unspecified, that triggers the work of imaginations but this setting in motion takes place only because the imaginations are already prepared and ferment deafly. An event, a bad perception for example that would not go in the direction that the minds of all already lean, could at most form the origin of an individual error, but not of a popular and widespread false news“. We find here an idea translated today under the name of “Fake News”. Recently, some have felt compelled to consider legislation, raising the question of freedom of speech, but also probably a form of mistrust by a certain elite towards a people they think incapable of judgment and discernment. But who can afford to decree what is true and what is not?

Science and Truth

There is a form of truth through authority. Following a (painful) fall while skiing, even if I have some vague knowledge of anatomy, I was better off consulting a doctor, who has expertise from his studies and practice. If he tells me that I have a fracture, it is not because he claims it (with authority) that it is true, but because it is the conclusion he reaches on the basis of solid reasoning, which other doctors might contest. Not having any skills, I rely on his authority. Just as I trust my lawyer if I need legal advice (but perhaps not my doctor). Faced with an expert, the difficulty is to find what weight, what credit, to give him.

Science, as it is taught, seems to state irrefutable truths. If my pen falls off the table, its speed will increase linearly, until it touches the ground. I would be foolish to refute this reality, which stems from the theory of universal gravitation enunciated by Isaac Newton in 1687 (in a much more general form than the fall of a pen, or an apple). But this vision is quite dated: “absolute truth” no longer exists, only agreement within the scientific community prevails. As van de Kerchove (2013) notes, mentioning the work of Karl Popper, facts in science are ultimately established in the same way as “jury proof” in English law. The main difference is the temporal aspect (a judge may set aside elements considered prescribed by law, even if they are scientifically enlightening) and the fact that justice judges a particular case, whereas science seeks general truths. It is the convergence of beliefs, based on collective discussions bringing a consensus, which makes it possible to establish scientific truths.
And scientific knowledge evolves over time, even allowing itself to become blurred! Instead of having a “true” or “false” result, we now have true results with a certain probability. In the 2001 IPCC report, it is described as “likely”, with 2 out of 3 chances, that “human activity has been the main cause of observed warming” since the middle of the 20th century. In 2007, the probability exceeded 90%, reaching 95% (then becoming “extremely probable”) in 2014. We will also remember this episode in July 2012, when CERN announced the discovery of an elementary particle (postulated by Higgs in the 1960s, the famous “Higgs boson”) “with a confidence level of 99.99997%”. Science allows itself doubt, and even better, manages to quantify it with amazing precision.

Because doubt still exists in science. This doubt, sometimes called “Cartesian“, allows science to advance, but also, on occasion, to retreat. This is what Oreskes & Conway (2014) points out, highlighting the strategy of certain industries that have funded research projects denying scientific evidence of the dangerousness of tobacco, DDT (insecticide), the reality of the ozone hole, environmental damage from acid rain… etc.. By confusing, discrediting studies – and the scientists who conducted them – doubt has turned against science, while further strengthening the evidence. By questioning the dangers of smoking, numerous studies have established that on the contrary, no doubt was permitted. But the strategy was to buy time (and in that sense the strategy worked).

Justice and truth

But scientists are not the only ones trying to establish the truth. To return to the debate launched by Dagorn (2018), is the tomato a fruit or a vegetable? The fruit is defined in botany as the result of fertilization of the plant: it is derived from the transformation of the pistil once it is fertilized, and will then carry seeds allowing the plant to reproduce. So the tomato is a fruit. But vegetables have no scientific definition, even if they seem to designate the edible parts of plants. That said, chemists who are interested in cooking seem to have decided, using the flavor of the food. The fruit is a sweet food, while the vegetable would be salty, bitter, or neutral. But if science does not seem to want to impose truth on the subject, justice has examined the question several times. In the United States, at the end of the 19th century, a tax existed on vegetables, but not on fruits. In 1887, John Nix sued New York Harbour treasurer Edward Hedden for taxing his tomato imports (recounted in Sterbenz (2013)). The United States Supreme Court ruled against him in 1893 (unanimously), explaining that the Customs Act referred to the common meaning of the terms “fruit” and “vegetable”, and not to the technical jargon of botanists. But this American vision opposes the one adopted in Europe, since a 1988 directive of the Council of the European Union classified tomatoes in the category of fruits (just like carrots or rhubarb branches, moreover).

Justice feels invested with a mission to seek the truth. This is what Jean Domat said in 1745 when he stated that “the Laws want something judged to pass for truth”. The presumption of truth attached to a judgment – “res judicata pro veritate habetur” – is thus found several times in the Civil Code. It may, however, be objected that the raison d’être of this principle is essentially to avoid the repetition of trials ad infinitum (otherwise remedies would not exist: a court decision may be challenged, but according to a very specific procedure). The search for the truth then takes place during an investigation, which has become a “way of authenticating the truth, of acquiring things that will be considered true, and of transmitting them“, as Foucault (1994a) notes.

If science can’t decide, sometimes justice is still asked to make decisions. This is what the decisions on whether the tomato is a fruit (or not) say, but it is also the essence of the precautionary principle, evoked in Charpentier (2016). And the situation becomes complex when laws go against a scientific truth. Baruch (2013) thus returns in detail to the episode of the adoption of the Boyer law tending to repress the denial of genocides, at the turn of the years 2011 and 2012 (the debate on the “memorial laws”), and above all the contestation aroused by article 4 of the law of 23 February – known as the Mekachera law – with numerous positions on the “positive role” of French colonisation. MPs wanted to impose a truth (by law) that went against scientific historical knowledge. And how can we forget the condemnation (following a trial) of Galileo in 1633, following the publication of the Dialogue on the Two Great Systems of the World, by those who supported geocentrism against heliocentrism (previously established by Nicolas Copernicus, among others). In a letter written to the Grand Duchess Christine of Lorraine – mentioned by Gingras, Keating and Limoges (1999) – Galileo notes that the court ruled, “forgetting in a certain way that the multiplication of discoveries contributes to the progress of research, to the development and strengthening of sciences and not to their weakening or destruction, and at the same time showing itself more attached to their own opinions than to the truth“.

Is the majority always right?

In a sitting court, for an accused to be found guilty, the law requires that a (strong) majority of the members of the court of assizes decide it. The majority then imposes its law. But the vote is taken after discussion, after consensus has been reached. According to Wikipedia, “consensus characterizes the existence among the members of a group of a general (tacit or manifest), positive and unanimous agreement that can allow a decision or action to be taken together without a prior vote or special deliberation”. And Wikipedia is built on this idea of consensus. The first principle is openness and transparency: everything is open, everyone can contribute by submitting any content, and a record is kept of the history of interventions. It is required to provide proof of any assertion, with a clear and searchable reference (we find here a basic principle of scientific publication). Then follows a discussion phase. The importance taken today by Wikipedia shows that this principle works.

This delegation of governance is found in the majority of cryptocurrencies. To explain how it works, Lamport, Shostak and Pease (1982) proposed the fable of the Byzantine generals, to illustrate this concept of “gentium consensus” in computing. Several armies are ready to attack the same city, but the only way to synchronize the different armies, to determine whether to attack or retreat, is to circulate a messenger on horseback. Each general then mandates a knight to carry the message “attack” or “retreat”, but there may be traitors among the generals. The idea is to bring out a global validation, to obtain a general vote, a consensus, starting from the fact that dishonest people are less numerous than honest people. And indeed, in this case, coordination is possible. As with the blockchain (and all smart contracts), validation is by consensus.
The majority rule, the basis of many democratic systems, seems obvious to us. But the notion of "argumentum ad populum" reminds us that a proposition is not true merely because most people believe it. While this rule seems natural in a democracy, as a way of associating the greatest number with decision-making in the city, in daily life we quickly see that it leads to many dead ends. Imagine an airline pilot who, faced with bad weather, has to decide whether or not to make an emergency landing. Should the passengers be asked to vote? One imagines that this would be inappropriate, if not ineffective, because what the pilot needs is an informed majority rather than a simple majority. In reality, as can be seen in some important democratic decisions, following the choice of the majority is above all a way to avoid bearing responsibility for a choice[ii].

The search for consensus is complicated, even impossible if we are to believe the literature on voting mechanisms and Arrow's impossibility theorem. Arrow (1951) – inspired by Condorcet's paradox – shows that there is no indisputable social-choice process that allows a hierarchy of preferences for a group to be derived from the aggregation of individual preferences. Moreover, even when a consensus is reached, not everyone is satisfied with the outcome. Some even speak of a "dictatorship of the majority". To quote Manin's argument (1985), "the adherence of the greatest number reflects the superior strength of one argument over the others", because argumentation and discussion matter: "this process makes the appearance of reasonable results more likely". But this search for consensus is necessarily imperfect: "the true source of legitimacy therefore remains unanimity; the majority will is not legitimate in itself, it is legitimate because it is decided to confer on it all the attributes of the unanimous will. (…) The majority principle is a simple necessity of fact, without any intelligible link with the principle of legitimacy; it is only a convenient convention", as Mineur (2010) recalls.
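
Condorcet's paradox, which inspired Arrow, can be reproduced with a small R example (three voters and three candidates with hypothetical preferences, not taken from Arrow (1951)): every pairwise vote has a clear majority winner, yet the collective preference is cyclic, so no coherent group ranking exists.

# rank given by each voter to candidates A, B and C (1 = most preferred)
prefs <- rbind(v1 = c(A = 1, B = 2, C = 3),   # voter 1: A > B > C
               v2 = c(A = 3, B = 1, C = 2),   # voter 2: B > C > A
               v3 = c(A = 2, B = 3, C = 1))   # voter 3: C > A > B
# pairwise majority: does a majority of voters prefer x to y?
beats <- function(x, y) mean(prefs[, x] < prefs[, y]) > 1/2
c(A_beats_B = beats("A", "B"),
  B_beats_C = beats("B", "C"),
  C_beats_A = beats("C", "A"))
# all three are TRUE: the majority preference is cyclic, hence no collective ranking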

A “post-truth” world?

The fact that science proposes vague truths[iii] does not make it easier to understand the world, and the term "post-truth" has been proposed to describe this world where the borderline between lies and truth, honesty and dishonesty, fiction and "non-fiction" is no longer very clear. The open data movement also proposes to introduce a form of transparency, with raw data, so that everyone can settle a debate for themselves. In the United States, several years ago, citizens were shocked to discover that many politicians had received (sometimes very) large sums of money from private companies. But companies have always wanted to finance politicians in order to secure possible support, should they need it. Between these two conflicting visions, the compromise was to impose transparency: politicians had to keep track of any sum paid by a third party, and this information had to be made public. But what to do with this information? In France, this type of transparency is now mandatory for doctors. The transparence-santé database makes accessible all the information declared by companies on the ties of interest they maintain with players in the health sector (including doctors). Who has taken the time to go to the site to check whether their doctor had a conflict of interest when prescribing a medication? For this somewhat idyllic vision forgets that "pure" data does not exist ("raw data is an oxymoron", to use the title of the book edited by Lisa Gitelman). One suspects that practices have changed since the law imposed this transparency. Data (and facts) do not exist without narration. In 1936, a publisher asked George Orwell to investigate working-class conditions in the north of England, in a mining town in the middle of the Depression. In the spring of 1937, The Road to Wigan Pier appeared, often regarded as a piece of reportage. Crick (1990) compared the notes taken in Orwell's diary with the book, to find out whether he was reporting raw sensations (facts) or whether he had re-staged them, thereby reconstructing his original vision. As he notes, "the bare style of the documentary is in reality a perfectly deliberate artistic creation". Leys (1984) goes even further in his analysis: "what Orwell's invisible and so effective art illustrates is that 'the truth of the facts' cannot exist in a pure state. Facts by themselves never form anything but meaningless chaos: only artistic creation can invest them with meaning, giving them form and rhythm (…) Literally, truth must be invented".

References

Arrow, K. (1951). Social Choice and Individual Values. Wiley.

Baruch, M-O. (2013). Des lois indignes ? Les historiens, la politique et le droit. Tallandier

Bloch, M. (1921) Réflexions d’un historien sur les fausses nouvelles de la guerre. Revue de synthèse historique, 33. https://bit.ly/2FC6GwZ

Charpentier, A. (2016). Les dérives du principe de précaution. Risques, 108.

Crick, B. (1990) Georges Orwell: Une Vie. Balland.

Dagorn, G. (2018). Aubergine, tomate, carotte… Savez-vous vraiment distinguer fruits et légumes ? lemonde.fr https://lemde.fr/2rkc7vt

Domat, J. (1745) Les lois civiles dans leur ordre naturel. https://bit.ly/2jrlNke

Foucault, M. (1994a) La vérité et les formes juridiques, in « Dits et écrits » tome II texte n°139, Gallimard.

Foucault, M. (1994b) Le sujet et le pouvoir, in « Dits et écrits » tome IV texte n°306, Gallimard.

Gingras, Y., Keating, P. & Limoges, C. (1999) Du scribe au savant : Les Porteurs du savoir de l’Antiquité à la révolution industrielle, Boréal.

Gitelman, L. (2013). Raw Data Is an Oxymoron. MIT Press.

Lamport, L., Shostak, R. & Pease, M. (1982). The Byzantine Generals Problem. ACM Transactions on Programming Languages and Systems, 4(3), July 1982.

Leys, S. (1984) Orwell, ou, L’horreur de la politique. Plon.

Manin, B. (1985). Volonté générale ou délibération. Le débat.

Meyer, P. (2011). Liespotting. St. Martin's Griffin.

Mineur, D. (2010). Les justifications de la règle de majorité en démocratie moderne. Raisons Politiques, 39, 127-149.

Oreskes, N. & Conway, E.M. (2014). Les marchands de doute : Ou comment une poignée de scientifiques ont masqué la vérité sur des enjeux de société tels que le tabagisme et le réchauffement climatique. Editions le Pommier.

Popper, K. (1973). La logique de la découverte scientifique. Payot.

Sterbenz, C. (2013). The Supreme Court Says The Tomato Is A Vegetable — Not A Fruit. Business Insider, 30 December 2013. https://read.bi/2I93gHn

van de Kerchove, M. (2013) Vérité judiciaire et para-judiciaire en matière pénale : quelle vérité ?. Droit et Societe, 84, 411-432.

Veyne, P. (1983). Les Grecs ont-ils cru à leurs mythes ? Essai sur l'imagination constituante. Seuil.

 

[i] Published online at https://bit.ly/2ri8V4t (historically, this letter also sheds light on the relationship between science and religion, Galileo trying to explain that heliocentrism is not contrary to the theories of Saint Augustine).

[ii] Among recent examples, one may think of the turbulence between 1965 and 2018 around the construction of the Notre-Dame-des-Landes airport, and the difficulty of making decisions (with a public inquiry in 2006 rejecting the airport at 67%, then the departmental referendum of 2016 where the refusal was this time 45%).

[iii] In the sense of fuzzy logic, where instead of the operators "true" (1) and "false" (0), one has real values in the interval [0,1].