Discrimination by proxy (a real case study)

Yesterday, with Laurence Barry, we posted a blog post “Who benefits from data sharing?”, explaining why data sharing, in insurance, could end mutualization. Actually, it can also be problematic in the context of discrimination. Consider here the same dataset, with claim occurrence, in a real insurance portfolio,

library(InsurFair)
library(randomForest)

Consider a version of this dataset without the gender variable, and use variable importance to get a list of variables we can use in a predictive model

subfrenchmotor = frenchmotor[,-which(names(frenchmotor)=="sensitive")]
RF = randomForest(y~. ,data=subfrenchmotor)
vi = varImpPlot(RF , sort = TRUE)

We sort variables based on variable importance (the first one is the “most important” one), and add splines for three continuous variables

dfvi = data.frame(nom = names(subfrenchmotor)[-15], g = as.numeric(vi))
dfvi = dfvi[rev(order(dfvi$g)),]
nom = dfvi$nom
nom[1] = "bs(LicAge)"
nom[3] = "bs(DrivAge)"
nom[7] = "bs(BonusMalus)"

Then, the idea is simple: at stage k, we keep the k most important variables, and run a logistic regression on those k variables. Again, I should stress that the gender of the driver is not among those k variables. Then, we compute the average predicted claim frequency, for men and for women.

n=nrow(subfrenchmotor)
library(splines)
idx_F = which(frenchmotor$sensitive == "Female")
idx_M = which(frenchmotor$sensitive == "Male")
metric_gender= function(k =3){
if(k==0){
reg = glm(y~1, family=binomial, data=subfrenchmotor)
yp = predict(reg, type="response")
yp_F = yp[idx_F]
yp_M = yp[idx_M]
sortie = c(mean(yp_F),mean(yp_M),quantile(yp_F,c(.1,.9)),quantile(yp_M,c(.1,.9)))
names(sortie)[1:2]=c("mean_F","mean_M")
}
if(k>0){
vr = paste(nom[1:k],collapse = " + ")
fm = paste("y ~ ",vr,sep="")
reg = glm(fm, family=binomial, data=subfrenchmotor)
yp = predict(reg, type="response")
yp_F = yp[idx_F]
yp_M = yp[idx_M]
sortie = c(mean(yp_F),mean(yp_M),quantile(yp_F,c(.1,.9)),quantile(yp_M,c(.1,.9)))
names(sortie)[1:2]=c("mean_F","mean_M")
}
sortie}

Let us now compute it for all values of k

N = 0:15
M = Vectorize(metric_gender)(N)

and plot it

plot(N,M[1,]*100, xlab="Number of predictive variables (without gender)", ylab=
"Average predicted claims frequency (%)", type="b", pch=19, col=COLORS[2], ylim=c(8.12,9))
lines(N, M[2,]*100, type="b", pch=15, col=COLORS[3])

Interestingly, we can clearly see that with 15 explanatory variables, even if our model is gender-blind (since gender is not in the training dataset), it reproduces the difference we observe in the data: the annual claim frequency is almost 9% for men and 8.2% for women.
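
As a quick sanity check (a small sketch, assuming y is the claim occurrence indicator in frenchmotor), the raw claim frequencies by gender can be computed directly from the data and compared with those model-based averages,

# observed claim occurrence, by gender, in the portfolio
prop.table(table(frenchmotor$sensitive, frenchmotor$y), margin = 1)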

Actually, it is not possible to predict the gender from our 15 variables (below is the ROC curve of the logistic regression used to predict the gender)

library(ROCR)
metric_gender_2 = function(k = 3){
  if(k == 0){
    reg = glm((sensitive=="Female")~1, family=binomial, data=frenchmotor)
  }
  if(k > 0){
    vr = paste(nom[1:k], collapse = " + ")
    fm_genre = paste('(sensitive=="Female") ~ ', vr, sep="")
    reg = glm(fm_genre, family=binomial, data=frenchmotor)
  }
  pred = prediction(predict(reg, type="response"), (frenchmotor$sensitive=="Female"))
  performance(pred,"tpr","fpr")}
plot(metric_gender_2(15))
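
To quantify this, one can also look at the AUC of that gender-prediction model (a small sketch, re-using the nom vector and the ROCR functions above; fm15, reg15 and pred15 are my own names; an AUC close to 1/2 means gender is essentially not predictable from those covariates),

# AUC of the logistic regression predicting gender from the 15 covariates
fm15 = as.formula(paste('(sensitive=="Female") ~ ', paste(nom[1:15], collapse = " + ")))
reg15 = glm(fm15, family = binomial, data = frenchmotor)
pred15 = prediction(predict(reg15, type = "response"), frenchmotor$sensitive == "Female")
performance(pred15, "auc")@y.values[[1]]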

but still, when using 15 variables, we obtain discrimination in our portfolio, since the average predictions for men and women are significantly different (even if our models are, per se, gender-blind).

Fairness in Multi-Task Learning via Wasserstein Barycenters

Our new paper, with François Hu and Philipp Ratz, Fairness in Multi-Task Learning via Wasserstein Barycenters, is now available.

Algorithmic Fairness is an established field in machine learning that aims to reduce biases in data. Recent advances have proposed various methods to ensure fairness in a univariate environment, where the goal is to de-bias a single task. However, extending fairness to a multi-task setting, where more than one objective is optimised using a shared representation, remains underexplored. To bridge this gap, we develop a method that extends the definition of Strong Demographic Parity to multi-task learning using multi-marginal Wasserstein barycenters. Our approach provides a closed form solution for the optimal fair multi-task predictor including both regression and binary classification tasks. We develop a data-driven estimation procedure for the solution and run numerical experiments on both synthetic and real datasets. The empirical results highlight the practical value of our post-processing methodology in promoting fair decision-making.

It will be presented in September, at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2023), in Torino.

Walking on eggshells with ChatGPT

With the release of GPT-4, and as a follow-up to my article La société du bullshit, here is some news about GPT. While I was talking about “bullshit”, the notion of “hallucinations” has been resurfacing lately; it was properly defined in Hallucinations in Neural Machine Translation, already 5 years ago. A recent article discussed it again this week. To illustrate, inspired by discussions with Louis Abraham during my visit to Paris, I tried to get some cooking advice

I tried other kinds of eggs, rabbit eggs

or whale eggs,

Louis pointed out to me that GPT-3 could easily be misled, unlike ChatGPT… so I gave it a try,

whether for rabbit eggs or beluga eggs, ChatGPT gives strange advice

And it does not stop there

I cannot help sharing this little conclusion

At the time, I wondered whether it had been caught up in my absurd game…

I even tried pig eggs

I ended up asking it the question directly,

and that was it: it had stepped out of my nonsense, and it was impossible to play any further…

I also had fun asking questions about its knowledge of family relationships. And it is quite easy to trap GPT-3

Impossible to get an “I don't know”. Here, it picks the only female first name that was mentioned… In short, there is no representation of what a family can be, which will be an essential step for the tool to work well.

Monty Hall problem, with Thompson sampling

We all know the Monty Hall problem. Recently, Jason Rosenhouse published a book on that topic (entitled The Monty Hall Problem, The Remarkable Story of Math’s Most Contentious Brain Teaser). The game is more or less described by the following question

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

While I was preparing some slides for a lecture on Bayesian modeling and thinking, I wanted to find an illustration of what is sometimes called the Bayesian brain, which can be related to updating beliefs as we gain experience. And I was looking for examples of Thompson sampling. Actually, it is possible to learn that switching is the optimal strategy, in the Monty Hall problem, just by playing the game sequentially and learning from previous plays. The following code is used to choose the door with the prize (the car), and the one we first select

set.seed(1)
n = 5000
listdoor = matrix(1:3,3,n)
door = listdoor
win = sample(1:3,size=n,replace=TRUE)
pick1 = sample(1:3,size=n,replace=TRUE)

Then, the presenter picks one, that is neither the car, nor the one we chose initially. The following trick can be used, to get the list of available choices

door[win+(0:(n-1))*3] = NA
door[pick1+(0:(n-1))*3] = NA
door[,1:10]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,]   NA   NA   NA    1   NA   NA    1   NA    1   NA
[2,]    2    2   NA   NA    2    2    2   NA   NA    2
[3,]    3   NA    3    3   NA   NA   NA    3   NA   NA

Then, the presenter picks one

presenter = apply(door,2, function(x) sample(x[!is.na(x)],size=1))
presenter[win != pick1] = apply(door,2,function(x) x[!is.na(x)])[win != pick1]
presenter = unlist(presenter)
presenter[1:10]
[1] 3 2 3 1 2 2 2 3 1 2

Now, let us consider the two possible strategies of the Monty Hall problem. The first one is to keep the door we chose initially

pick2a = pick1
gaina = (pick2a==win)
mean(gaina)
[1] 0.3392

As expected, on average, we win with (about) 1 chance out of 3. The second one is to (always) pick the other door (the one left). The code is close to the one we used before

door = listdoor
door[pick1+(0:(n-1))*3] = NA
door[presenter+(0:(n-1))*3] = NA
pick2b = apply(door,2,function(x) x[!is.na(x)])
gainb = (pick2b==win)
mean(gainb)
[1] 0.6608

If you know the Monty Hall problem, you know that the probability to win is now 2 chances out of 3 (which is what the maths tells us), and that is indeed what we get with the simulations.

Now, what if we don’t know how to do the maths, and we don’t want to compute it? We can use Thompson sampling to explore, and exploit. In a general context, we have to choose among K alternatives (here K=2, since we can either keep our initial choice, or pick the other door), and the output is \boldsymbol{X}=(X_1,\cdots, X_K), where X_k\sim\mathcal{B}(\theta_k), but \theta_k is unknown, and we will play the game, and learn. From the previous computations, we know that \theta_1=1/3 while \theta_2=2/3.

We use some prior distribution, \theta_k\sim\mathcal{B}eta(\alpha_k,\beta_k), since the Beta distribution is the conjugate of the Bernoulli. At time t, we draw K (independent) Beta variables B_k\sim\mathcal{B}eta(\alpha_k,\beta_k), and pick k^\star = \displaystyle{\underset{k=1,\cdots,K}{\text{argmax}}\{B_k\}}. We then play that strategy, observe the outcome X_{k^\star}, and update the parameters of the chosen arm (\alpha_{k^\star} increases by X_{k^\star}, and \beta_{k^\star} by 1-X_{k^\star}). Here the code will be

set.seed(2)
X = cbind(pick2a == win,pick2b == win)*1
AB1 = AB2 = tirage = matrix(NA,n,2)
choix = rep(NA,n)
k=1
AB1[k,] = AB2[k,] = c(1,1)
for(k in 1:(n-1)){
tirage[k,] = c(rbeta(1,AB1[k,1],AB1[k,2]),
rbeta(1,AB2[k,1],AB2[k,2]))
choix[k] = which.max(tirage[k,])
if(choix[k] == 1){
AB1[k+1,] = AB1[k,] + c(X[k,1],1-X[k,1])
AB2[k+1,] = AB2[k,] 
}
if(choix[k] == 2){
AB1[k+1,] = AB1[k,] 
AB2[k+1,] = AB2[k,] + c(X[k,2],1-X[k,2])
}}

Before showing some graphs, let us check that, indeed, we select the second strategy (which is here to select the other door) more often

AB1[n,]
[1] 5 13
AB2[n,]
[1] 3292 1693

Indeed, since the average of a Beta distribution, \mathcal{B}eta(\alpha,\beta) is \alpha/(\alpha+\beta)

AB2[n,1]/(sum(AB2[n,]))
[1] 0.6603811

i.e. the probability to win with this second strategy is about 2/3 (as obtained previously). We can visualize this on the animation below, with, in red, the first strategy (keep your initial choice) and, in green, the second one (select the other door), 0 and 1 indicating whether we lose or win. Then we can visualize the evolution of \alpha_2 and \beta_2 on top, and \alpha_1 and \beta_1 below (the index is time t). Finally, we have the two variables B_1 and B_2 drawn,

Of course, another simulation would have given different B_1‘s and B_2‘s, but finally, we learn that the second strategy is better, and we learn it quite fast…

Here is another one (just to confirm)

So clearly, even if we don’t know which strategy is optimal (keep our initial choice, or switch), a player who has played that game about 30 times should be able to figure out that switching is the better strategy.
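
As a side note (a hedged sketch, prob_switch_better being my own name), we can make that statement a bit more precise by computing, from the Beta posteriors stored above in AB1 and AB2, the posterior probability that switching beats keeping at time t,

# posterior probability that the "switch" arm has a higher success rate than the "keep" arm
prob_switch_better = function(t, nsim = 1e4){
  mean(rbeta(nsim, AB2[t,1], AB2[t,2]) > rbeta(nsim, AB1[t,1], AB1[t,2]))
}
Vectorize(prob_switch_better)(c(10, 30, 100))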

Lilliefors, Kolmogorov-Smirnov and cross-validation

In statistics, the Kolmogorov–Smirnov test is a popular procedure to test whether a sample \{x_1,\cdots,x_n\} is drawn from a distribution F, or usually F_{\theta_0}, where F_{\theta} is some parametric distribution. For instance, we can test H_0:X_i\sim\mathcal{N}(0,1) (where \theta_0=(\mu_0,\sigma_0^2)=(0,1)) using that test. More specifically, I wanted to discuss p-values today. Given n, let us draw \mathcal{N}(0,1) samples of size n, and compute the p-values of the Kolmogorov–Smirnov tests

n=300
p = rep(NA,1e5)
for(s in 1:1e5){
X = rnorm(n,0,1)
p[s] = ks.test(X,"pnorm",0,1)$p.value
}

We can visualise the distribution of the p-values below (I added some Beta distribution fit here)

library(fitdistrplus)
fit.dist = fitdist(p,"beta")
hist(p,probability = TRUE,main="",xlab="",ylab="")
vu = seq(0,1,by=.01)
vv = dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)

It looks like it is quite uniform (theoretically, the p-value is uniform). More specifically, the p-value was lower than 5% in 5% of the samples

mean(p<=.05)
[1] 0.0479

i.e. we wrongly reject H_0:X_i\sim\mathcal{N}(0,1) in 5% of the samples.

As discussed previously on this blog, in many cases we care about the distribution, and not really the parameters, so we wish to test something like H_0:X_i\sim\mathcal{N}(\mu,\sigma^2), for some \mu and \sigma^2. Therefore, a natural idea can be to test H_0:X_i\sim\mathcal{N}(\hat\mu,\hat\sigma^2), for some estimates of \mu and \sigma^2. That’s the idea of the Lilliefors test. More specifically, the Lilliefors test keeps the Kolmogorov–Smirnov statistic, but corrects the p-value. Indeed, if we draw many samples, and use the Kolmogorov–Smirnov statistic and its classical p-value to test H_0:X_i\sim\mathcal{N}(\hat\mu,\hat\sigma^2),

n=300
p = rep(NA,1e5)
for(s in 1:1e5){
X = rnorm(n,0,1)
p[s] = ks.test(X,"pnorm",mean(X),sd(X))$p.value
}

we see clearly that the distribution of p-values is no longer uniform

fit.dist = fitdist(p,"beta")
hist(p,probability = TRUE,main="",xlab="",ylab="")
vu = seq(0,1,by=.01)
vv = dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)

More specifically, if the x_i's are actually drawn from some Gaussian distribution, there is almost no chance to reject H_0, the p-value being almost never below 5%

mean(p<=.05)
[1] 0.00012
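
For comparison (a hedged sketch, assuming the nortest package is installed), the Lilliefors-corrected p-values should be approximately uniform again, with about 5% of them below 5%,

library(nortest)
p_lillie = rep(NA, 1e4)
for(s in 1:1e4){
  X = rnorm(n, 0, 1)
  p_lillie[s] = lillie.test(X)$p.value   # Kolmogorov-Smirnov statistic with Lilliefors p-value
}
mean(p_lillie <= .05)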

Usually, to interpret that result, the heuristic is that \hat\mu and \hat\sigma^2 are both based on the sample, while previously 0 and 1 were based on some prior knowledge. Somehow, it reminded me of the classical problem we mention when we introduce cross-validation, namely Goodhart’s law

When a measure becomes a target, it ceases to be a good measure

i.e. we cannot assess goodness of fit using the same data as the ones used to estimate the parameters. So here, why not use some hold-out (or cross-validation) procedure: split the dataset in two parts, \{x_1,\cdots,x_k\} (with k<n) to estimate the parameters \mu and \sigma^2, and then use \{x_{k+1},\cdots,x_n\} and the Kolmogorov–Smirnov statistic on it to test whether the x_i's are drawn from some Gaussian distribution. More precisely, will the p-value computed using the standard Kolmogorov–Smirnov procedure behave properly here? I tried two scenarios, with k/n being either 1/3 or 2/3,

p = matrix(NA,1e5,4)
for(s in 1:1e5){
X = rnorm(n,0,1)
p[s,1] = ks.test(X,"pnorm",0,1)$p.value
p[s,2] = ks.test(X,"pnorm",mean(X),sd(X))$p.value
p[s,3] = ks.test(X[1:200],"pnorm",mean(X[201:300]),sd(X[201:300]))$p.value
p[s,4] = ks.test(X[201:300],"pnorm",mean(X[1:200]),sd(X[1:200]))$p.value
}

Again, we can visualize the distributions of p-values,  in the case where 1/3 of the data is used to estimate \mu and \sigma^2, and 2/3 of the data is used to test

fit.dist = fitdist(p[,3],"beta")
hist(p[,3],probability = TRUE,main="",xlab="",ylab="")
vu=seq(0,1,by=.01)
vv=dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)


and in the case where 2/3 of the data is used to estimate \mu and \sigma^2, and 1/3 of the data is used to test

fit.dist = fitdist(p[,4],"beta")
hist(p[,4],probability = TRUE,main="",xlab="",ylab="")
vu=seq(0,1,by=.01)
vv=dbeta(vu,shape1 = fit.dist$estimate[1], shape2 = fit.dist$estimate[2])
lines(vu,vv,col="dark red", lwd=2)


Observe here that we (wrongly) reject H_0 too frequently, since the p-values are below 5% in 25% of the scenarios in the first case (less data used to estimate), and in 9% of the scenarios in the second case (less data used to test)

mean(p[,3]<=.05)
[1] 0.24168
mean(p[,4]<=.05)
[1] 0.09334

We can actually compute that rejection probability as a function of the proportion of data used to compute the statistic (i.e. 1-k/n)

n=300
p = matrix(NA,1e4,99)
for(s in 1:1e4){
  X = rnorm(n,0,1)
  KS = function(p) ks.test(X[1:(p*n)],"pnorm",mean(X[(p*n+1):n]),sd(X[(p*n+1):n]))$p.value
  p[s,] = Vectorize(KS)((1:99)/100)
}

The evolution of the probability is the following

prob5pc = apply(p,2,function(x) mean(x<=.05))
plot((1:99)/100,prob5pc)

so, it looks like we can use some sort of hold-out procedure to test H_0:X_i\sim\mathcal{N}(\mu,\sigma^2), for some \mu and \sigma^2, using the Kolmogorov–Smirnov test with \mu=\hat\mu and \sigma^2=\hat\sigma^2, but the proportion of data used to estimate those quantities should be (much) larger than the proportion used to compute the statistic. Otherwise, we clearly reject H_0 too frequently.
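
As a small follow-up (a hedged sketch, re-using the prob5pc vector computed above), we can also look up which split gives an empirical rejection rate closest to the nominal 5% level,

# proportion of data used for the test whose empirical rejection rate is closest to 5%
(1:99)[which.min(abs(prob5pc - .05))]/100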

Insurance Pricing Game

Would you like to put your data science skills to the test?

Imperial College London, Université du Québec à Montréal (UQAM), and actuarial institutes and societies in Singapore, the UK (including the IFoA) and Australia, as well as ASTIN and the Casualty Actuarial Society, are co-organising a global data science competition.

Would you like to accurately predict the cost of insurance by putting your data science skills to the test? We are hosting two competitions with separate datasets: a loss prediction competition on Kaggle with synthetic workers’ compensation data, and a pricing competition in a simulated market hosted on AI Crowd with real-world motor insurance contracts. Code can be written in either R or Python. The competition is sponsored by a number of different organisations, with a total of US$12,000 in cash prizes to be won. For more information about how to take part, please visit www.pricing-game.com

Hiding values in the output of the summary function for a (linear) regression

Since our Fall 2020 session will be 100% online (and off-site), I have to work hard this summer to prepare online quizzes and exams. I started playing intensively with Achim’s awesome r-exams package. But there are still a few things that I wanted to add, so I will write a series of posts on my blog to keep track of the updated functions I write. Most of them are modifications of R internal functions, so the code is hard to read. Here is the file, and I will update it frequently

url = "http://freakonometrics.free.fr/onlineExams.R"
source(url)

I have updated the summary function (more precisely the summary.lm function). To see how it works, consider a simple regression

library(car)
reg = lm(prestige ~ women, data=Prestige)
my_summary(reg)
 
Call:
lm(formula = prestige ~ women, data = Prestige)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-33.444 -12.391  -4.126  13.034  39.185 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 48.69300    2.30760  21.101   <2e-16 ***
women       -0.06417    0.05385  -1.192    0.236    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared:  0.014,	Adjusted R-squared:  0.004143
F-statistic:  1.42 on 1 and 100 DF,  p-value: 0.2362

A classical question I ask in my quizzes is to hide the p-value of the F-test, and ask what it is (to make sure that students understand the equivalence between the F and the t tests, in a simple regression). To hide the p-value, use

my_summary(reg, Fisher=TRUE)
 
Call:
lm(formula = prestige ~ women, data = Prestige)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-33.444 -12.391  -4.126  13.034  39.185 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 48.69300    2.30760  21.101   <2e-16 ***
women       -0.06417    0.05385  -1.192    0.236    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 17.17 on 100 degrees of freedom
Multiple R-squared:  0.014,	Adjusted R-squared:  0.004143
F-statistic:  1.42 on 1 and 100 DF,  p-value: ■■■■■

(and then, in a multiple choice exam, I ask if it is 1%, 5%, 12%, 23%, 47%, for example). That one was easy, since all those lines are based on the cat function, so I just modify it, if necessary

if(Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L], 
    digits = digits), "on", x$fstatistic[2L], "and", 
        x$fstatistic[3L], "DF,  p-value:", "■■■■■")
    if(!Fisher) cat("\nF-statistic:", formatC(x$fstatistic[1L], 
    digits = digits), "on", x$fstatistic[2L], "and", 
                   x$fstatistic[3L], "DF,  p-value:", format.pval(pf(x$fstatistic[1L], 
                   x$fstatistic[2L], x$fstatistic[3L], lower.tail = FALSE), 
                   digits = digits))

(here I use the unicode ‘black square‘ symbol to hide numbers). Of course, I can hide the value of \sigma, or the (adjusted or not) R^2, etc.
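
As a quick numerical illustration of that F/t equivalence (a small sketch with my own object names reg0, t_stat and F_stat, re-using the Prestige regression above),

reg0 = lm(prestige ~ women, data = Prestige)
t_stat = coef(summary(reg0))["women", "t value"]   # t statistic of the slope
F_stat = summary(reg0)$fstatistic["value"]         # overall F statistic
c(F = unname(F_stat), t_squared = t_stat^2)        # both equal (about) 1.42 here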

Now, a little bit more tricky: what if we want to change the regression table, with the coefficients, their standard errors, etc.? It is tricky since those values are numeric, with an appropriate format (not too many digits), but it can be done easily since that formatting is done through the printCoefmat function. So in my code, I have my internal function, where I ask to put some ‘black squares‘ (the right number of them, to keep a readable format) at some specific locations. Consider a more complex regression

reg = lm(prestige ~ ., data=Prestige)

and assume that we want to hide the value of the intercept, \widehat{\beta}_0 (i.e. located at (1,1) in the matrix), and the p-value of the t-test for the fourth one (i.e. located at (4,4) in the matrix, since the first column is \widehat{\beta}_3, the second one its standard error, the third one the t value, and the fourth one the p-value of the test). I use the following two vectors

vligne = c(1,4)
vcolonne = c(1,4)

with the rows and columns in the matrix (of course, the two should have the same length). Then, the good thing is that the printCoefmat function converts numerical values into characters (so that things actually look like columns). So we simply have to replace the digits with squares

Cf2=Cf
  if(length(vligne)>0){  
    for(i in 1:length(vligne)){
      long = nchar(Cf[vligne[i],vcolonne[i]])
      Cf2[vligne[i],vcolonne[i]] = paste(rep("■",long),collapse = "")
    }}

Then, we print the updated version of the table

print.default(Cf2, quote = quote, right = right, na.print = na.print,...)

For example, here, it would be

my_summary(reg, vligne=c(1,4), vcolonne=c(1,4))
 
Call:
lm(formula = prestige ~ ., data = Prestige)
 
Residuals:
     Min       1Q   Median       3Q      Max 
-12.9863  -4.9813   0.6983   4.8690  19.2402 
 
Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) ■■■■■■■■■■  8.018e+00  -1.513  0.13380    
education    3.933e+00  6.535e-01   6.019 3.64e-08 ***
income       9.946e-04  2.601e-04   3.824  0.00024 ***
women        1.310e-02  3.019e-02   0.434  ■■■■■■■    
census       1.156e-03  6.183e-04   1.870  0.06471 .  
typeprof     1.077e+01  4.676e+00   2.303  0.02354 *  
typewc       2.877e-01  3.139e+00   0.092  0.92718    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 7.037 on 91 degrees of freedom
  (4 observations deleted due to missingness)
Multiple R-squared:  0.841,	Adjusted R-squared:  0.8306
F-statistic: 80.25 on 6 and 91 DF,  p-value: < 2.2e-16

Of course, it is hand-made, and I do not check for typos (for instance, you should not ask to put squares in the seventh column), but it works well enough to generate random regressions in a quiz (or identical regressions on subsamples of a large dataset), and to hide values.

NSERC – Discovery Grants Program, over the past 5 years

In a previous post, I discussed how it was possible to scrape the NSERC website to get stats about discovery grants. Since we just got the new 2018 figures, I thought it would be a good opportunity to update my graphs,

library(XML)
library(stringr)
url="http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSC-ResultatsCSS_eng.asp"
download.file(url,destfile = "GSC.html")
library(XML)
tables=readHTMLTable("GSC.html")
GSC=tables[[1]]$V1
GSC=as.character(GSC[-(1:2)])
namesGSC=tables[[1]]$V2
namesGSC=as.character(namesGSC[-(1:2)])
Correction = function(x) as.numeric(gsub('[$,]', '', x))
YEAR=2013:2018
for(i in 1:length(YEAR)){
y=YEAR[i]
grants= function(gsc){
  url=paste("http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSCDetail-ResultatsCSSDetails_eng.asp?Year=",y,"&amp;GSC=",gsc,sep="")
  download.file(url,destfile = "GSC.html")
  library(XML)
  tables=readHTMLTable("GSC.html")
  X=as.character(tables[[1]]$"Awarded Amount")
  A=as.numeric(Vectorize(Correction)(X))
  return(c(median(A),mean(A),as.numeric(quantile(A,(1:99)/100))))
}
M=Vectorize(grants)(GSC[1:12])
plot(M[3:101,8],(1:99)/100,type="s",xlim=c(0,130000),xlab=
paste("Annual Discovery Grant (CAN) - ",y,sep=""),ylab="")
lines(M[3:101,5],(1:99)/100,type="s",col="red")
lines(M[3:101,4],(1:99)/100,type="s",col="blue")
abline(v=M[3,5],lty=2,col=rgb(1,0,0,.4))
idx=which(M[3:101,8]<M[3,5])
lines(M[2+idx,8],(idx)/100,type="s",lwd=4)
legend("bottomright",c("maths","physics","chemestry"),
col=c("black","red","blue"),lty=1,bty="n")}

With those functions, I plot the cumulative distribution functions for three disciplines, namely maths, physics and chemistry. I added a line for the lowest value in physics (the vertical line), and the bold line shows the proportion of researchers in maths who got less than the lowest amount in physics,

Hence, in 2013, 60% of the researchers in maths get less than any researcher in physics (and more than 90% in maths get less than any researcher in chemistry). Then, from 2014 to 2018, we get

It is rather constant: 50% of the researchers in mathematics in Canada get less than any researcher in physics, or in chemistry. I don’t understand why, but it’s interesting to observe that this is very stable…
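
That proportion can also be read directly from the M matrix computed in the loop above, for the last year considered, 2018 (a hedged check, using the same column conventions as the plot, with column 8 for mathematics and column 5 for physics, and rows 3 to 101 holding the 1%-99% percentiles),

# share of the mathematics grant distribution below the (1%) lowest physics grant
mean(M[3:101, 8] < M[3, 5])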

Extracting information from a picture, round 2

Yesterday, I published a post on extracting information from a picture, but it did not work as expected. I claimed that it was because of the original graph I had. More precisely, the map was based on some weird projection that I could not reconcile with standard shapefiles. So I decided to cheat a little bit, by creating my own map,

Colors are ugly, I know. But I got them using

u = seq(0,1,length=30)
couleurs = rgb(u,rev(u),0,1)

The picture is

url = "https://freakonometrics.hypotheses.org/files/2018/12/chomage3.png"
library(pixmap)
library(png)
IMG = readPNG(url)

I used those colors because it would make things easy when extracting reds and greens…

ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Let us see if the contour of France can be overlaid

library(maptools)
library(PBSmapping)
download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds","FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR)
par(mfrow=c(1,1))
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

We have a perfect match, don’t we…?

Let us now use a shapefile based on départements,

download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm2.rds","FRA_adm2.rds")
FR2=readRDS("FRA_adm2.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR2)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
k=35
pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
points(pX,pY)

For instance, the thirty-fifth polygon is the following

Let us extract the color inside that polygon

u=1:nrow(ROUGE)
v=1:ncol(ROUGE)

The code would be

pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
E=expand.grid(u,v)
M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
image(u,v,ROUGE*M,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(pX,pY)

Now, for each département, I extract the average value of red, and the average value of green,

extract_info = function(k){
  pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
  pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
  E=expand.grid(u,v)
  M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
nom=FR2[FR2$OBJECTID ==k,c("NAME_2","CCA_2")]
return(c(as.numeric(nom$CCA_2),sum(ROUGE[M==1])/sum(M),sum(VERT[M==1])/sum(M)))
}
donnees = Vectorize(extract_info)(1:95)
x2=donnees[1,]
y2=donnees[2,]/(donnees[2,]+donnees[3,])
df2=data.frame(dpt=x2,extract=y2)
x1=as.numeric(as.character(baseChomage$no))
y1=baseChomage$chomagePremierTrimestre2017
df1=data.frame(dpt=x1,obs=y1)
df=merge(df1,df2)
plot(df$obs,df$extract)

On the graph below, we have the original values on the x-axis (unemployment, in percent) and the “average value of red” on the y-axis. Note that the points are almost perfectly correlated… The accumulation of points can be explained by the fact that, on the original map, different values could be mapped to the same color
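
One way to quantify this (a small sketch, re-using the df data frame built just above) is to compute the linear and rank correlations between the observed unemployment rates and the extracted share of red,

cor(df$obs, df$extract)                       # Pearson correlation
cor(df$obs, df$extract, method = "spearman")  # rank (Spearman) correlation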

So far, I can claim that we’ve been able to extract useful information from the original picture.

Consider now the case where the original map is the following one

The picture can be downloaded using the following code

url = "https://freakonometrics.hypotheses.org/files/2018/12/chomage5.png"
library(pixmap)
library(png)
IMG = readPNG(url)

Here, the colors are obtained from a standard palette,

library(pals)
couleurs = rev(brewer.rdylgn(30))

Here again, we use our previous code to extract reds and greens

And if we use our function

extract_info = function(k){
  pX=(PP$X[PP$PID==k]-ax)/bx*max(u)
  pY=(PP$Y[PP$PID==k]-ay)/by*max(v)
  E=expand.grid(u,v)
  M=matrix(point.in.polygon(E[,1],E[,2],pX,pY)>0,length(u),length(v))
nom=FR2[FR2$OBJECTID ==k,c("NAME_2","CCA_2")]
return(c(as.numeric(nom$CCA_2),sum(ROUGE[M==1])/sum(M),sum(VERT[M==1])/sum(M)))
}
donnees = Vectorize(extract_info)(1:95)
x2=donnees[1,]
y2=donnees[2,]/(donnees[2,]+donnees[3,])
df2=data.frame(dpt=x2,extract=y2)
x1=as.numeric(as.character(baseChomage$no))
y1=baseChomage$chomagePremierTrimestre2017
df1=data.frame(dpt=x1,obs=y1)
df=merge(df1,df2)
plot(df$obs,df$extract)

we obtain the following graph

Here again, we have a strong correlation, not to say comonotonic variables (in the sense that the ranks are identical). Nice, isn’t it?

Extracting information from a picture, round 1

This week, I wanted to get the information displayed on the nice map below. I could not get access to the original dataset, per zip code… and I was wondering whether (assuming the map had a high enough resolution) it was actually possible to extract the information, using a simple R function…

As we can see, there is red and green on the map, and I would love to know which are the green and the red cities in France. One important issue is actually the background. Here it’s nice, it is white… but white is a strange color, achromatic and very light. More specifically, if I look for red areas, the background is very red. And very green, too. So, to avoid those issues, I used gimp to change the background into black. On the contrary, where it’s black, it’s neither red nor green!

Let us get the map, and extract information from the file

url="https://freakonometrics.hypotheses.org/files/2018/12/inondation3.png"
download.file(url,"inondation3.png")
image="inondation3.png"
library(pixmap)
library(png)
IMG=readPNG(image)

Information is stored in several matrices – or in arrays.  Dimension 1 is the height of the picture (in pixels), dimension 2 is the width, and the third one is either 1 (red), 2 (green) or 3 (blue), based on the rgb decomposition of each pixel. Then, I try to find the border of the map

nl=dim(IMG)[1]
nc=dim(IMG)[2]
MAT=(IMG[,,1]+IMG[,,2])/2
x=apply(MAT,2,max)
plot(x,type="l")

When it is zero, it means there is no color on that column of pixels, i.e. it is completely black (initially, I used the mean function, but the maximum really behaves like a step function)

y=apply(MAT,1,max)
plot(y,type="l")

Let us find cutoff values, on the left and on the right, on top and on the bottom

image(1:nc,1:nl,t(MAT))
abline(v=min(which(x>.2)),col="blue")
abline(v=max(which(x>.2)),col="blue")
abline(h=min(which(y>.2)),col="blue")
abline(h=max(which(y>.2)),col="blue")

We obtain the following (forget about the fact that – somehow – France is upside-down)

We can zoom in, just to make sure that our borders are fine

par(mfrow=c(1,2))
image(min(which(x>.2))+(-5):5,1:nl,t(MAT)[min(which(x>.2))+(-5):5,])
abline(v=min(which(x>.2))+(-5):5,col="white")
abline(v=min(which(x>.2)),col="blue")
x1=min(which(x>.2))-1

and on the right border (the vertical cutoffs y1 and y2 are obtained the same way, from the y vector)

image(max(which(x>.2))+(-5):5,1:nl,t(MAT)[max(which(x>.2))+(-5):5,])
abline(v=max(which(x>.2))+(-5):5,col="white")
abline(v=max(which(x>.2)),col="blue")
x2=max(which(x>.2))+1
y1=min(which(y>.2))-1
y2=max(which(y>.2))+1

So far so good. Let us keep the subpart of the picture,

image(x1:x2,y1:y2,t(MAT)[x1:x2,y1:y2])

Now, let us focus on the red part / component of that picture

ROUGE=t(IMG[,,1])[x1:x2,]
ROUGE=ROUGE[,y2:y1]
library(scales)
image(x1:x2,y1:y2,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))

That’s not bad, is it? And we can get a similar graph for the green part

VERT=t(IMG[,,2])[x1:x2,]
VERT=VERT[,y2:y1]
image(x1:x2,y1:y2,VERT,col=alpha(colour=rgb(0,1,0,1), alpha = seq(0,1,by=.01)))

Now, I wanted to overlay a map of France on that picture. Using shapefiles of administrative regions, it would then be possible to get the proportion of red and green in each area (départements, cantons, etc). As a starting point (before going to ‘départements’), let us use a standard shapefile for France

library(maptools)
library(PBSmapping)
url="http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds"
download.file(url,"FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
library(maptools)
PP = SpatialPolygons2PolySet(FR)
PP=PP[(PP$X<=8.25)&(PP$Y>=42.2),]
u=(x1:x2)-x1
v=(y1:y2)-y1
ax=min(PP$X)
bx=max(PP$X)-min(PP$X)
ay=min(PP$Y)
by=max(PP$Y)-min(PP$Y)
PP$X=(PP$X-ax)/bx*max(u)
PP$Y=(PP$Y-ay)/by*max(v)
image(u,v,ROUGE,col=alpha(colour=rgb(1,0,0,1), alpha = seq(0,1,by=.01)))
points(PP$X,PP$Y)

We try here to rescale it: the left border of the shapefile should match the left border of the picture, and similarly for the right border, the top, and the bottom,

Unfortunately, even when changing the projection technique, I could not perfectly match the contour of France. I am quite sure that it’s a projection problem! But I did try a dozen popular ones, with no success… so if anyone has a clever idea…

I code, therefore I am

On Wednesday, November 21, the 2018 edition of the HumanIA Colloquium will be held at the Agora Hydro-Québec of UQAM’s Complexe des sciences Pierre-Dansereau, starting at 9:30am. As part of the La France à l’UQAM week, the colloquium will be followed, in the afternoon, by a debate on the theme «Intelligence artificielle : l’erreur n’est-elle qu’humaine ?» (artificial intelligence: is error only human?). I will speak in a morning workshop, on the theme “je code, donc je suis“ (I code, therefore I am). Here are a few links to feed the discussion,

The “probability to win” is hard to estimate…

Real-time computation (or estimation) of the “probability to win” is difficult. We’ve seen that in soccer games, in elections… but actually, as a professor, I see it frequently when I grade my students.

Consider a classical multiple choice exam. After each question, imagine that you try to compute the probability that the student will pass. Consider here the case where we have 50 questions. Students pass when they have 25 correct answers, or more. Just for simulations, I will assume that students just flip a coin at each question… I have n students, and 50 questions

set.seed(1)
n=10
M=matrix(sample(0:1,size=n*50,replace=TRUE),50,n)

Let X_{i,j} denote the score of student i at question j. Let S_{i,j} denote the cumulated score, i.e. S_{i,j}=X_{i,1}+\cdots+X_{i,j}. At step j, I can get some sort of prediction of the final score, using \hat{T}_{i,j}=50\times S_{i,j}/j. Here is the code

SM=apply(M,2,cumsum)
NB=SM*50/(1:50)

We can actually plot it

plot(NB[,1],type="s",ylim=c(0,50))
abline(h=25,col="blue")
for(i in 2:n) lines(NB[,i],type="s",col="light blue")
lines(NB[,3],type="s",col="red")


But that’s simply the prediction of the final score, at each step. That’s not the computation of the probability to pass!

Let’s try to see how we can do it… If, after j questions, the student has 25 correct answers, the probability should be 1 (i.e. if S_{i,j}\geq 25), since he cannot fail. Another simple case is the following: if, after j questions, the number of points he can get with all correct answers until the end is not sufficient, he will fail. That means that if S_{i,j}+(50-j)< 25, the probability should be 0. Otherwise, computing the probability of success is quite straightforward. It is the probability to obtain at least 25-S_{i,j} correct answers, out of the 50-j remaining questions, when the probability of success is actually S_{i,j}/j. We recognize the survival probability of a binomial distribution. The code is then simply

PB=NB*NA
for(i in 1:50){
  for(j in 1:n){
    if(SM[i,j]>=25) PB[i,j]=1
    if(SM[i,j]+(50-i+1)<25)   PB[i,j]=0
    if((SM[i,j]<25)&(SM[i,j]+(50-i+1)>=25)) PB[i,j]=1-pbinom(25-SM[i,j],size=(50-i),prob=SM[i,j]/i)
  }}

So if we plot it, we get

plot(PB[,1],type="s",ylim=c(0,1))
abline(h=25,col="red")
for(i in 2:n) lines(PB[,i],type="s",col="light blue")
lines(PB[,3],type="s",col="red")

which is much more volatile than the previous curves we obtained! So yes, computing the “probability to win” is a complicated exercise! Don’t blame those who try, it is hard to do!

Of course, things are slightly different if my students don’t flip a coin… this is what we obtain if half of the students are good (2/3 probability to get a question correct) and half are not (1/3 chance),
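
Since the corresponding simulation code is not shown, here is a hedged sketch of what it could look like (M2 and SM2 are my own names; the rest of the code above can then be reused as is),

set.seed(1)
n = 10
prob = rep(c(2/3, 1/3), each = n/2)   # half "good" students, half "not so good"
M2 = sapply(prob, function(p) sample(0:1, size = 50, replace = TRUE, prob = c(1-p, p)))
SM2 = apply(M2, 2, cumsum)            # cumulated scores, as before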

If we look at the probability to pass, we usually do not have to wait until the end (the 50 questions) to know who passed and who failed

PS: I guess a less volatile solution can be obtained with a Bayesian approach… if I find some spare time this week, I will try to code it…
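
In the meantime, here is a minimal sketch of what such a Bayesian version could look like (my own code, with a uniform Beta(1,1) prior on the success probability, and a Monte Carlo approximation of the posterior predictive probability of passing),

prob_pass_bayes = function(S, j, n_total = 50, pass = 25, nsim = 1e4){
  if(S >= pass) return(1)                        # already passed
  if(S + (n_total - j) < pass) return(0)         # cannot pass anymore
  theta = rbeta(nsim, 1 + S, 1 + j - S)          # posterior draws of the success probability
  mean(1 - pbinom(pass - S - 1, size = n_total - j, prob = theta))
}
prob_pass_bayes(S = 15, j = 25)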

Buying a (not too expensive) train ticket

Yesterday, I came across an article discussing train ticket prices in France (and the very high prices on some dates, for instance during the winter break). Anyone used to taking the train knows that the price you pay depends on when you buy the ticket (and on how flexible you can be about the time, or even the date, of the trip). This summer, as part of a project for the Data Science pour l’Actuariat program, for my course, Pierre proposed to harvest the https://www.oui.sncf/ website to track how ticket prices evolve.

The main difficulty is that the https://www.oui.sncf/ website relies on JavaScript for the input forms and for displaying the results, which prevents the standard use of the rvest package, for instance. In another post, I had mentioned the use of wdman to scrape the forest fire website. Here, Pierre suggested going through casperjs, and I will more or less follow his strategy here:

  • we will use casperjs, a browser emulator written in JavaScript. It emulates a real browser (same engine as Google Chrome) and resolves the JavaScript embedded in the page
  • we will use a small bash script to launch the code

As for the bash part, it is just because I work on mac and linux. On a mac, you can do quite a lot of things in bash… Everything goes through variables, which can be defined and displayed, for instance

which gives us the time. I can also define a variable, and increment it (handy for loops)

More interestingly, tasks can be scheduled. To do so, we type

which opens an editor,

Here I ask it to run an R script every hour, at 13:50, 14:50, 15:50, etc. To ask for it every day at 13:50, I type

we then save the instruction

We can see that the command will be launched every day: it is in the list of scheduled tasks

note that a script can be launched while passing arguments: here I tell it which object to handle (the first argument), and the second argument is used to create (and name) a file.

In short, it is fairly easy to run code automatically, for scraping. On Windows, one goes through the task scheduler. A first script extracts the trains opened during the day,

and the second one, the trains opened for more than 24 hours,

Then, we use http://docs.casperjs.org/en/latest/ to code our web browser emulator (the code is online here).

This creates plenty of files, containing the information we want! I will skip the post-processing, and just present the information that can be drawn from it. In particular, Pierre stored the data between last March and June.

Here, we only consider a few main intercity routes,

library(readr)
library(rgdal)
nomFichier = tempfile(fileext = ".zip")
  download.file("https://freakonometrics.free.fr/CarteFrance.zip", destfile = nomFichier, mode = "wb")
  unzip(zipfile = nomFichier, exdir = getwd())
  download.file("https://freakonometrics.free.fr/LgTroncons.csv", destfile = "LgTroncons.csv", mode = "wb")
  download.file("https://freakonometrics.free.fr/CoordVilles.csv", destfile = "CoordVilles.csv", mode = "wb")
  fra0 = readOGR(dsn = paste(getwd(), "/CarteFrance", sep = ""), layer = "gadm36_FRA_0", verbose = F)
  LgTroncons = read_delim("LgTroncons.csv",";", escape_double = FALSE,locale = locale(decimal_mark = ","), trim_ws = TRUE)
CoordVilles = read_delim("CoordVilles.csv",";", escape_double = FALSE,locale = locale(decimal_mark = ","), trim_ws = TRUE)
NomsVilles = CoordVilles[CoordVilles$NOM_A_AFFICHER==1,]
library(ggplot2)
fr_df = fortify(fra0)
ggp = ggplot() + geom_polygon(data=fr_df, aes(long, lat,group = group), fill = "#3A8EBA") 
  ggp = ggp + geom_path(data = LgTroncons, aes(x = LONG, y = LAT, group = ID_TRONCON), colour= "#CC5500", lineend = "round", size=3) + geom_path(data = LgTroncons, aes(x=LONG, y=LAT, group = ID_TRONCON), colour="white", lineend = "round",  size=1.75)
  ggp = ggp + geom_point(data = NomsVilles, aes(x=LONG, y=LAT), colour = "blue", fill = "white", shape=21, size = NomsVilles$PT_SIZE, stroke = NomsVilles$PT_STROKE) + theme_void()
  ggp = ggp + geom_text(data = NomsVilles, aes(x=LONG, y=LAT, label=NOM),hjust = NomsVilles$H_AJUST,
vjust=NomsVilles$V_AJUST, colour = "white", fontface = "bold", size =3.25)+coord_fixed(1.47)
  ggp = ggp + ggtitle("Représentation des trajets étudiées") + theme(plot.title = element_text(hjust = 0.5, face="bold"))
  print(ggp)

We will work on the following routes,

Let us compare here the tickets for a Paris-Rennes trip, based on the information harvested over 3 months, for Friday evenings, and in particular two Fridays of June 2018 (June 15 and 22). For those two days, there were 6 trains, between 5pm and 8pm. For June 15, the first three started at a price of 45 €. For June 22, the first one started at 45 €, but the next two were launched at 33 €. Fairly quickly, the prices went up to 45 €.

We can look at how the price evolves over time

If we look at several destinations, we observe very different behaviours,

  • for Le Mans, prices rise very quickly, starting at 15 €, going up to 18 € ten hours after opening, 21 € the next day, and 27 € after a week. In one month, prices almost doubled.
  • for Rennes, we observe a similar evolution, going from 20 € to 25 € within a few hours, and 40 € two weeks later!
  • for Toulouse, on the other hand, the initial price is higher, 43 €, rising by 3 € in 10 hours, 6 € in 16 hours, and then staying at 49 €

But for all destinations, prices are increasing.

or, graphically

We can also draw a map. If we look at the opening prices, Lille, Le Mans and Rennes are rather cheap.

And the largest variations over 10 hours are observed for Nantes and Bordeaux.

Fun, isn’t it?

Public transit in Paris

To continue the series of posts on visualisation, and the manipulation of open data, I will reuse some code written by Tony, from the Data Science pour l’Actuariat program, to visualise public transit in Paris (and the Paris region). If I have time in the coming days, I will do an analysis of the metro network, compared with other large European cities. To begin with, we retrieve the data provided by the open data website of the STIF, the Île-de-France transport authority (https://opendata.stif.info). The data are split by half-year, which makes the code a bit heavy… but it is not really more complicated for all that.

library(dplyr)
library(stringr)
library(ggplot2)
library(xlsx)
library(ggmap)

We start by reading all the files online

nbvalid = list()
download.file("https://opendata.stif.info/explore/dataset/emplacement-des-gares-idf-data-generalisee/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","Gares.csv")
gares = read.csv("Gares.csv", sep=";", header = TRUE)
distr_pers = list()
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-profils-horaires-par-jour-type-1er-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","horaires1.csv")
distr_pers$S1 = read.csv("horaires1.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-profils-horaires-par-jour-type-2e-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","horaires2.csv")
distr_pers$S2 = read.csv("horaires2.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-nombre-de-validations-par-jour-1er-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","validations1.csv")
nbvalid$S1 = read.csv("validations1.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-nombre-de-validations-par-jour-2e-sem/download/?format=csv&amp;timezone=Europe/Berlin&amp;use_labels_for_header=true","validations2.csv")
nbvalid$S2 = read.csv("validations2.csv", sep=";", header = TRUE)
download.file("https://freakonometrics.free.fr/Correspondance_NOM.csv","Correspondance_NOM.csv")
Cooresp = read.csv("Correspondance_NOM.csv", sep=";", header = TRUE)

We then need to define the school holiday dates, for 2017

Vacances = list()
Vacances$Noel = append(seq(from = as.Date("01/01/2017", format="%d/%m/%Y"), to=as.Date("02/01/2017", format="%d/%m/%Y"), by=1),seq(from = as.Date("24/12/2017", format="%d/%m/%Y"), to=as.Date("31/12/2017", format="%d/%m/%Y"), by=1))
Vacances$Ski = seq(from = as.Date("04/02/2017", format="%d/%m/%Y"), to=as.Date("19/02/2017", format="%d/%m/%Y"), by=1)
Vacances$Printemps = seq(from = as.Date("02/04/2017", format="%d/%m/%Y"), to=as.Date("17/04/2017", format="%d/%m/%Y"), by=1)
Vacances$Ete = seq(from = as.Date("08/07/2017", format="%d/%m/%Y"), to=as.Date("03/09/2017", format="%d/%m/%Y"), by=1)
Vacances$Toussaint = seq(from = as.Date("21/10/2017", format="%d/%m/%Y"), to=as.Date("05/11/2017", format="%d/%m/%Y"), by=1)
Vacances$All=Reduce(append,Vacances)

Then, a bit of cleaning is needed, to deal with duplicated stations (for instance when both the RER and the metro stop there), and to recover their spatial location (latitude and longitude)

gares$NOM_LONG = as.character(gares$NOM_LONG)
DD = (gares$NOM_LONG[duplicated(gares$NOM_LONG)])
i = (gares$NOM_LONG %in% DD) & gares$MODE_=="Metro"
gares$NOM_LONG[i] = paste(gares$NOM_LONG[i],"M", sep="-")
i = (gares$NOM_LONG %in% DD) & gares$MODE_=="RER"
gares$NOM_LONG[i] = paste(gares$NOM_LONG[i],"R", sep="-")
gares$NOM_LONG=factor(gares$NOM_LONG)
 
a=as.character(gares$Geo.Point)
gares$Y=as.numeric(str_extract_all(a,"^[0-9]+.[0-9]+"))
gares$X=as.numeric(str_extract_all(a,"[0-9]+.[0-9]+$"))

We then count the number of ticket validations, for each station

Manip_nbvalid = function(Data,DD,gares) {
  i=grep("^[a-zA-Z]+",as.character(Data$NB_VALD))
  Data$NB_VALD[i]=as.integer(5)
  i=is.na(Data$NB_VALD)
  Data$NB_VALD[i]=as.integer(5)
  Data$LIBELLE_ARRET=as.character(Data$LIBELLE_ARRET)
  i=(Data$LIBELLE_ARRET %in% DD) & Data$CODE_STIF_TRNS=="100"
  Data$LIBELLE_ARRET[i]=paste(Data$LIBELLE_ARRET[i],"M", sep="-")
  i=(Data$LIBELLE_ARRET %in% DD) & Data$CODE_STIF_TRNS=="800"
  Data$LIBELLE_ARRET[i]=paste(Data$LIBELLE_ARRET[i],"R", sep="-")
 
  for (i in seq(1,nrow(Cooresp))) { Data$LIBELLE_ARRET=gsub(as.character(Cooresp$nbval[i]),as.character(Cooresp$gares[i]),Data$LIBELLE_ARRET)
  }
gares$NOM_LONG=as.character(gares$NOM_LONG)
Data=dplyr::left_join(Data,gares[,c("NOM_LONG","X","Y")],by=c("LIBELLE_ARRET"="NOM_LONG"))
  Data=Data[is.na(Data$CODE_STIF_ARRET)==FALSE,]
  Data=Data[Data$CODE_STIF_ARRET!="ND",]
  Data$NB_VALD=as.integer(as.character(Data$NB_VALD))
  Data$JOUR=as.Date(Data$JOUR)
  Data$CODE_STIF_TRNS=factor(Data$CODE_STIF_TRNS)
  Data$CODE_STIF_RES=factor(Data$CODE_STIF_RES)
  Data$CODE_STIF_ARRET=factor(Data$CODE_STIF_ARRET)
  Data$LIBELLE_ARRET=factor(Data$LIBELLE_ARRET)
  Data$ID_REFA_LDA=factor(Data$ID_REFA_LDA)
  Data$CATEGORIE_TITRE=factor(Data$CATEGORIE_TITRE)
  Data$JOURSEM=weekdays(Data$JOUR)  
  return(Data)
}
nbvalid=lapply(nbvalid, Manip_nbvalid,DD=DD,gares=gares)

We thus have all the counts, for all the stations. We then split them by hourly time slot

Manip_dist_pers = function(DataFrame) {
  DataFrame=DataFrame[(DataFrame$TRNC_HORR_60)!="ND",]
  DataFrame$TRNC_HORR_60=factor(DataFrame$TRNC_HORR_60, levels = c("0H-1H", "1H-2H", "2H-3H", "3H-4H", "4H-5H", "5H-6H", "6H-7H", "7H-8H", "8H-9H", "9H-10H", "10H-11H", "11H-12H", "12H-13H", "13H-14H", "14H-15H", "15H-16H", "16H-17H", "17H-18H", "18H-19H", "19H-20H", "20H-21H", "21H-22H", "22H-23H", "23H-0H")) 
  DataFrame=DataFrame[(DataFrame$CODE_STIF_ARRET)!="ND",]
  DataFrame$CODE_STIF_ARRET=factor(DataFrame$CODE_STIF_ARRET)
DataFrame$TRANCHE=str_extract(as.character(DataFrame$TRNC_HORR_60),"^[0-9]{1,2}")
  return(DataFrame)
}
distr_pers=lapply(distr_pers, Manip_dist_pers)

We can then recover the distribution of validations, per day

distr_JOURV=list()
distr_JOURV$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM,CATEGORIE_TITRE) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOURV$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM,CATEGORIE_TITRE) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOURV$Y=rbind(distr_JOURV$S1,distr_JOURV$S2)
distr_JOUR=list()
distr_JOUR$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOUR$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOUR$Y=rbind(distr_JOUR$S1,distr_JOUR$S2)
distr_JOUR_Station=list()
distr_JOUR_Station$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM,CODE_STIF_ARRET,LIBELLE_ARRET) %>% summarise(NB_VALD=sum(NB_VALD), X=max(X), Y=max(Y))
distr_JOUR_Station$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM,CODE_STIF_ARRET,LIBELLE_ARRET) %>% summarise(NB_VALD=sum(NB_VALD), X=max(X), Y=max(Y))
Manip_dist_Jour = function(DataFrame) {
  DataFrame$JOURSEM=factor(DataFrame$JOURSEM,levels = c("lundi","mardi","mercredi","jeudi","vendredi","samedi","dimanche"))
  DataFrame$TypeJ=character(nrow(DataFrame))
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Ete]="Ete"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Noel]="Noel"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Ski]="Ski"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Printemps]="Printemps"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Toussaint]="Toussaint"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$All == FALSE]="HorsVacances"
  DataFrame$CAT_JOUR=character(nrow(DataFrame))
  DFr=list()
  ii=(DataFrame$JOURSEM!="samedi" & DataFrame$JOURSEM!="dimanche") & DataFrame$TypeJ!="HorsVacances"
  DataFrame$CAT_JOUR[ii]="JOVS"
  DFr$JOVS$Data = DataFrame[ii,]
  DFr$JOVS$Nom="Jours ouvrés Vacances Scolaires"
  ii=(DataFrame$JOURSEM!="samedi" & DataFrame$JOURSEM!="dimanche") & DataFrame$TypeJ=="HorsVacances"
  DataFrame$CAT_JOUR[ii]="JOHV"
  DFr$JOHV$Data = DataFrame[ii,]
  DFr$JOHV$Nom="Jours ouvés Hors Vacances Scolaires"
  ii=DataFrame$JOURSEM=="samedi" & DataFrame$TypeJ!="HorsVacances"
  DataFrame$CAT_JOUR[ii]="SAVS"
  DFr$SAVS$Data = DataFrame[ii,]
  DFr$SAVS$Nom="Samedi VS"
  ii=DataFrame$JOURSEM=="samedi" & DataFrame$TypeJ=="HorsVacances"
  DataFrame$CAT_JOUR[ii]="SAHV"
  DFr$SAHV$Data = DataFrame[ii,]
  DFr$SAHV$Nom="Samedi HV"
  ii=DataFrame$JOURSEM=="dimanche"
  DataFrame$CAT_JOUR[ii]="DIJFP"
  DFr$DIJFP$Data = DataFrame[ii,]
  DFr$DIJFP$Nom="Dimanche"
  return(list("TypeJ"=DFr,"Distr"=DataFrame))
}
res=Manip_dist_Jour(distr_JOUR$Y)
distr_TypeJ=res$TypeJ
distr_JOUR$Y=res$Distr
res=Manip_dist_Jour(distr_JOURV$Y)
distr_TypeJV=res$TypeJ
distr_TypeJ_Station=list()
res=Manip_dist_Jour(distr_JOUR_Station$S1)
distr_TypeJ_Station$S1=res$TypeJ
distr_JOUR_Station$S1=res$Distr
res=Manip_dist_Jour(distr_JOUR_Station$S2)
distr_TypeJ_Station$S2=res$TypeJ
distr_JOUR_Station$S2=res$Distr
rm(res)

We can then draw all sorts of graphs, for instance the number of validations per day, between January 1st and December 31st, as a function of the day of the week.

g0 = ggplot(distr_JOUR$Y, aes(x=JOUR, y=NB_VALD, color = JOURSEM)) + geom_point()
g0 = g0 + labs(title="Nombres de validations chaque jours de 2017", x="Date", y="Nombre de validations")
g0

We can see the very sharp drop on weekdays during the summer holidays. Instead of looking at the whole year, we can look within the day

Fct_FqH = function(DataFrame,distr_pers) {
DataFrame=dplyr::full_join(DataFrame,distr_pers[,c("CAT_JOUR","CODE_STIF_ARRET","pourc_validations","TRANCHE","TRNC_HORR_60")],by=c("CODE_STIF_ARRET"="CODE_STIF_ARRET","CAT_JOUR"="CAT_JOUR"))
  DataFrame$NB_VALD=DataFrame$NB_VALD*DataFrame$pourc_validations
  return(DataFrame)
}
distr_JOUR_Station$S1=Fct_FqH(distr_JOUR_Station$S1, distr_pers$S1)
distr_JOUR_Station$S2=Fct_FqH(distr_JOUR_Station$S2, distr_pers$S2)
distr_JOUR_Station$Y=rbind(distr_JOUR_Station$S1,distr_JOUR_Station$S2)
distr_JOUR_Station$Y=distr_JOUR_Station$Y[is.na(distr_JOUR_Station$Y$NB_VALD)==FALSE,]

We can then draw a graph as a function of the time slot, for certain periods, for instance on weekdays outside school holidays (for each hour, we have a boxplot here)

Graphique_HOR = function(DataFrame,TypeJ,NomJ) {
  # plot of the distribution of traffic by hourly time slot and type of day
  g1 = ggplot(DataFrame[DataFrame$CAT_JOUR==TypeJ,], aes(x=TRNC_HORR_60, y=pourc_validations, color = TRNC_HORR_60,las=2)) + geom_boxplot() + ylim(c(0,100))
  g1 = g1 + labs(title=paste(c("Distribution des validations par tranche horaire ",NomJ), sep="", collapse = ""), x="Jours", y="Nombre de validations") +
  theme(axis.text.x= element_text(size = 8, angle = 45))
  g1
}
Graphique_HOR(distr_JOUR_Station$Y,"JOHV","Jours ouvrés Hors Vacances Scolaires")

or on Saturdays

Graphique_HOR(distr_JOUR_Station$Y,"SAHV","Samedi Hors Vacances Scolaires")

We can try a bit of cartography. As with many metro/bus networks around the world, we often only have access to the entry nodes of the network (and not the exit nodes). But it is still interesting, and very informative

get_Paris1 = get_map(c(2.3448688,48.8613029), zoom = 11)
Paris1 = ggmap(get_Paris1)

For each station, and each hour, we can look at the number of ticket validations

Median_Valid = distr_JOUR_Station$Y %>% group_by(CAT_JOUR, LIBELLE_ARRET, X, Y) %>% summarise(NB_VALD=median(NB_VALD))
Median_Valid_Station = distr_JOUR_Station$Y %>% group_by(CAT_JOUR, TRNC_HORR_60,LIBELLE_ARRET, X, Y) %>% summarise(NB_VALD=median(NB_VALD))
 
Carte_Densite = function(Nom,Carte,TypeJ,HOR,DataFrame) {
if (HOR=="") {
    ii=DataFrame$CAT_JOUR==TypeJ
    NomSave=paste("Densité des validations",Nom,TypeJ)
  }
  else {
    ii=DataFrame$CAT_JOUR==TypeJ & DataFrame$TRNC_HORR_60==HOR
    NomSave=paste("Densité des validations",Nom,TypeJ,HOR)
  }
  U=DataFrame[ii,]
  n=round(log10(median(U$NB_VALD)))-1
  n=max(1,10^n)
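  # each station is repeated roughly NB_VALD/n times (n is about one order of
  # magnitude below the median count), so the 2D density below is weighted by
  # traffic; the point size is also scaled by the station's share of the maximum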
  Nb_Repete_Stations=ceiling(U$NB_VALD/n)
  U$Size_Stations=U$NB_VALD/max(U$NB_VALD)
  Z=U[rep(1:nrow(U),Nb_Repete_Stations),]
  Carte_A= Carte + geom_point(aes(x=X,y=Y),data=Z,col="coral", size=10*Z$Size_Stations) +
    geom_density2d(data = Z, aes(x=X,y=Y), size = 0.5) + 
    stat_density2d(data = Z, aes(x=X,y=Y,fill = ..level.., alpha = ..level..),size = 0.01, bins = 16, geom = "polygon") +
    scale_fill_gradient(low = "chartreuse", high = "red",guide = FALSE) + 
    scale_alpha(range = c(0, 0.3), guide = FALSE) + ggtitle(NomSave) +
    theme(axis.title.x = element_blank(), axis.title.y = element_blank(), axis.text.x= element_blank(), axis.text.y = element_blank())
 
  suppressWarnings(print(Carte_A))
}

For instance, if we look at ticket validations between 5 and 6 in the morning, we get

L=levels(Median_Valid_Station$TRNC_HORR_60)
Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[6],Median_Valid_Station)

with a lot of the inner suburbs showing up. Later in the day, between 11 am and noon, validations are more concentrated in the heart of Paris, with La Défense on the left and Saint-Denis to the north

Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[12],Median_Valid_Station)

At the end of the day, Paris and especially La Défense stand out

Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[19],Median_Valid_Station)

Fun, isn't it?

Analysis of baccalauréat results in the general tracks (séries générales)

To continue with the manipulation of open data, I wanted to build on Cédric's project, from the Data Science pour l'Actuariat programme, on baccalauréat results. The data needed for this study are available from several websites.

This is by no means an in-depth analysis of the results, just a bit of visualization, without any further pretension! And even though we will not draw maps (I find them hard to read), we will still use spatial data: schools are geolocated, and we can obtain local information on the unemployment rate or the median income. And make some graphs.

With that preamble out of the way, let us start.

library(dplyr)
library(readxl)
library(sp)
library(ggmap)
library(raster)
library(leaflet)
library(DT)
library(cowplot)
library(gstat)
library(tmap)

We start by retrieving, for each school, the baccalauréat results.

url_resultat_etab = "https://data.education.gouv.fr/explore/dataset/fr-en-indicateurs-de-resultat-des-lycees-denseignement-general-et-technologique/download/?format=csv&timezone=Europe/Berlin&use_labels_for_header=true"
download.file(url_resultat_etab,destfile = paste0(librairie,"import_resultat_etab.csv"), method="curl")
df_resultat_etab = read.csv("import_resultat_etab.csv",header=TRUE, sep= ";", encoding="UTF-8")

As is often the case with data from French administrations, there are typographic issues. To keep things simple, we remove the accents and standardize the column names a bit

MiseEnForme_Colonnes = function(text) {
  text <- gsub("è", "e", text)
  text <- gsub("é", "e", text)
  text <- gsub("_", ".", text)
  text <- gsub("serie.", "", text)
  text <- gsub("Effectif.Presents.", "Effectif.", text)
  text <- gsub("Taux.","Tx.",text)
  text <- gsub("Brut.de.Reussite.", "Admis.Etab.", text)
  text <- gsub("Reussite.Attendu.", "Admis.", text)
  text <- gsub("brut", "Etab", text)
  text <- gsub("attendu", "Academie", text)
  text <- gsub("toutes.", "TOTAL", text)
  text <- gsub("Total.", "TOTAL", text)
  text <- gsub("..Etablissement", ".Etab", text)
  text <- gsub("Pourcentage", "Tx", text)
  return(text)
}
for(i in 1:ncol(df_resultat_etab)){
  colnames(df_resultat_etab)[i] &lt;- MiseEnForme_Colonnes(names(df_resultat_etab)[i])
}
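
As a quick sanity check, here is what this renaming does on one hypothetical raw column name (the actual raw headers may differ slightly):

MiseEnForme_Colonnes("Taux_Brut_de_Reussite_serie_S")
# [1] "Tx.Admis.Etab.S"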

We then remove the overseas departments and regions,

df_resultat_etab = df_resultat_etab[-which(toupper(df_resultat_etab$Departement) %in% c("GUADELOUPE","MAYOTTE","MARTINIQUE","REUNION","GUYANE")),]

retrieve the column names

Colonnes = colnames(df_resultat_etab)

and since we are interested in the first variables

Colonnes_Generiques = Colonnes[1:8]

we keep them, and then build a few statistics from the columns related to the L, ES and S tracks

Colonnes_Series = Colonnes[grepl("([a-zA-Z]*?.)*\\.S$|([a-zA-Z]*?.)*\\.ES$|([a-zA-Z]*?.)*\\.L$|([a-zA-Z]*?.)*\\.TOTAL$",Colonnes)]
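
To see what this regular expression keeps, a quick check on a few illustrative names (only columns ending in .S, .ES, .L or .TOTAL are retained):

grepl("([a-zA-Z]*?.)*\\.S$|([a-zA-Z]*?.)*\\.ES$|([a-zA-Z]*?.)*\\.L$|([a-zA-Z]*?.)*\\.TOTAL$",
      c("Effectif.S", "Tx.Admis.Etab.ES", "Effectif.L", "Tx.Admis.TOTAL", "Ville"))
# [1]  TRUE  TRUE  TRUE  TRUE FALSE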

And we finish with the remaining ones

Colonnes_Autres = Colonnes[grepl("(Tx.Bacheliers.*)|(Tx.acces.*)|(Effectif.de.*)|(libelle.region)|(code.region)|(element)",Colonnes)] 
df_resultat_etab = cbind(df_resultat_etab[Colonnes_Generiques],df_resultat_etab[Colonnes_Series],df_resultat_etab[Colonnes_Autres])

We can also geolocate the schools

url_carto_etab <- "https://www.data.gouv.fr/s/resources/adresse-et-geolocalisation-des-etablissements-denseignement-du-premier-et-second-degres/20160526-143453/DEPP-etab-1D2D.csv"
 
download.file(url_carto_etab,destfile=paste0(librairie,"import_carto_etab.csv"))
df_carto_etab = read.csv2("import_carto_etab.csv", header=TRUE)

Here we retrieve the geolocation of 66,556 schools across the country! We can cross this with socio-economic data at the commune level

nom_base_emploi = "base-cc-emploi-pop-act-2014"
url_baseemploi_popactive = paste0("https://www.insee.fr/fr/statistiques/fichier/2862207/",nom_base_emploi,".zip")
download.file(url_baseemploi_popactive,destfile=paste0(librairie,nom_base_emploi,".zip"))
unzip(paste0(nom_base_emploi,".zip"),overwrite = TRUE) 
df_base_emploi_source = read_excel(paste0(nom_base_emploi,".xls"),sheet="COM_2014",skip=5)

We exclude the overseas territories here

df_base_emploi_source <- df_base_emploi_source[-which(df_base_emploi_source$DEP %in% c("971","972","973","974","975")),]
df_base_emploi_colonnes = c("CODGEO","P14_POP1564","P14_H1564","P14_F1564","P14_ACT1564","P14_ACTOCC1564","P14_CHOM1564","P14_INACT1564", "P14_ETUD1564", "P14_RETR1564", "P14_AINACT1564", "P14_HCHOM1524", "P14_FCHOM1524", "C14_ACT1564","C14_ACT1564_CS1","C14_ACT1564_CS2","C14_ACT1564_CS3","C14_ACT1564_CS4","C14_ACT1564_CS5","C14_ACT1564_CS6","P14_POP15P")
df_base_emploi = df_base_emploi_source[,names(df_base_emploi_source) %in% df_base_emploi_colonnes]

and fix the usual issues with Corsica,

MiseEnForme_CodeGeo = function(text) {
  text &lt;- gsub("2A", "20", text)  
  text &lt;- gsub("2B", "20", text)  
}
df_base_emploi$CODGEO = MiseEnForme_CodeGeo(df_base_emploi$CODGEO)
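
As a quick check, on a few illustrative codes, the Corsican 2A/2B prefixes are indeed mapped back to 20 (the function returns its value invisibly, since its last statement is an assignment, hence the explicit print):

print(MiseEnForme_CodeGeo(c("2A004", "2B033", "75056")))
# [1] "20004" "20033" "75056"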

We can also use income data, by commune

nom_base_revenus = "indic-struct-distrib-revenu-2014-COMMUNES"
url_baserevenus = paste0("https://www.insee.fr/fr/statistiques/fichier/3126151/",nom_base_revenus,".zip")
download.file(url_baserevenus,destfile=paste0(librairie,nom_base_revenus,".zip"))
unzip(paste0(nom_base_revenus,".zip"),overwrite = TRUE)
df_base_revenus = read_excel("FILO_DISP_COM.xls",sheet="ENSEMBLE",skip=5)[,c(1,4,7)]
df_base_revenus$CODGEO = MiseEnForme_CodeGeo(df_base_revenus$CODGEO)

We retrieve spatial data for the communes

url_geoloc_communes = "http://www.nosdonnees.fr/wiki/images/b/b5/EUCircos_Regions_departements_circonscriptions_communes_gps.csv.gz"
download.file(url_geoloc_communes,destfile=paste0(librairie,"geoloc_communes.csv.gz"))
df_geoloc_communes = read.csv2(gzfile("geoloc_communes.csv.gz"),header=TRUE, stringsAsFactors = FALSE,encoding="UTF-8")

and as always, a few corrections are needed

df_geoloc_communes = df_geoloc_communes[-which(df_geoloc_communes$numéro_département %in% c("971","972","973","974","975")),]
df_geoloc_communes = df_geoloc_communes[,names(df_geoloc_communes) %in% c("code_insee","latitude","longitude","codes_postaux")]
df_geoloc_communes_nb <- nrow(df_geoloc_communes)

We then create a function to replace missing values and fix the decimal separators

MiseEnForme_CoordonneesGeo = function(valeur){
pretraitement = ifelse(as.character(valeur)=="-","0",as.character(valeur))
traitement = as.numeric(ifelse(pretraitement==".","0",gsub(pattern=",",replacement=".",pretraitement)))
  return(traitement)
}
df_geoloc_communes$latitude = MiseEnForme_CoordonneesGeo(df_geoloc_communes$latitude)
df_geoloc_communes$longitude = MiseEnForme_CoordonneesGeo(df_geoloc_communes$longitude)
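
A quick check on a few illustrative values (a missing coordinate coded "-" or ".", and a comma used as decimal separator):

MiseEnForme_CoordonneesGeo(c("48,85", "-", "."))
# [1] 48.85  0.00  0.00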

We then remove duplicate rows

df_geoloc_communes = unique(df_geoloc_communes)

We then rename the columns to be consistent with the other datasets

names(df_geoloc_communes) = c("Codes_Postaux","CODGEO","coordonnee_y","coordonnee_x")

We can then look for rows that are duplicated on the INSEE codes

liste_CODGEO2 = aggregate(x=df_geoloc_communes$Codes_Postaux,by=list(df_geoloc_communes$CODGEO),FUN="length")
list_geoloc_communes_CODGEO2 = liste_CODGEO2[liste_CODGEO2$x>1,1]
df_geoloc_communes_CODGEO2 = df_geoloc_communes[df_geoloc_communes$CODGEO %in% list_geoloc_communes_CODGEO2,1:2]

Here, a manual correction is needed for 4 cases: the rows for Lyon, Paris and Marseille are not geolocated, and the rows for the town of Laguépie are geolocated twice

df_geoloc_communes_propre = df_geoloc_communes[!df_geoloc_communes$CODGEO %in% list_geoloc_communes_CODGEO2,]
df_geoloc_communes_corrige = data.frame(Codes_Postaux=c("13001","69001","75001","82250"),                        CODGEO=c("13055","69123","75056","82088"),
coordonnee_y=c(43.3,45.75,48.85,44.15),
coordonnee_x=c(5.4,4.85,2.31,1.97))
df_geoloc_communes = rbind(df_geoloc_communes_propre,df_geoloc_communes_corrige)

We can finally merge the datasets

df_etab = merge(df_resultat_etab,df_carto_etab,by="Cod.Etab")

Some schools cannot be geolocated for some years

df_etab_total_nongeolocalises <- df_resultat_etab[!df_resultat_etab$Cod.Etab %in% df_carto_etab$Cod.Etab,]

Since the study only covers general and technological high schools (lycées), the dataframe is restricted to observations for lycées on the one hand, and for polyvalent, general, or general-and-technological schools on the other.

df_etab = df_etab[grep("LYCÉE",toupper(df_etab$nature_uai_libe)),]
df_etab = df_etab[grep("GÉNÉRAL|POLYVALENT",toupper(df_etab$nature_uai_libe)),]
df_etab_nongeolocalises = df_etab[df_etab$Cod.Etab %in% df_etab_total_nongeolocalises$Cod.Etab,]
df_etab_geolocalise = df_etab[!is.na(df_etab$coordonnee_x),]
df_etab_geolocalise = df_etab_geolocalise[!is.na(df_etab_geolocalise$coordonnee_y),]

Finally, we convert the geographic codes (read here as factors) into character strings (of 5 characters) so that the tables can be merged

ConvertCODGEO = function(code) {
  if(is.character(code)) {
    code_character = ifelse(nchar(code)<5, paste0("0",code), code)
    return(code_character)
  }
  else if(is.factor(code)){
    code_num = as.numeric(as.character(code))
    code_character = ifelse(code_num<10000, paste0("0",code_num), as.character(code_num))
    return(code_character)
  }
  else if(is.numeric(code)){
    code_character = ifelse(code<10000, paste0("0",code), as.character(code))
    return(code_character)
  } 
}
df_etab_geolocalise$Code.commune = ConvertCODGEO(df_etab_geolocalise$Code.commune) 
df_etab_geolocalise$Secteur.Public.Prive = sapply(df_etab_geolocalise$Secteur.Public.Prive,function(nature) {ifelse(nature=="PU","Lycées Publics","Lycées Privés")})
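
For instance, on illustrative codes only, numeric codes below 10000 get a leading zero so that every code ends up with 5 characters:

ConvertCODGEO(c(1001, 75056))
# [1] "01001" "75056"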

We then keep the schools whose commune is not missing

df_etab_geolocalise = df_etab_geolocalise[!is.na(df_etab_geolocalise$Code.commune),]

To finish, we build a table, so as to produce a graph

tbl_etab_nature_res_source = df_etab_geolocalise[,c(3,8,9,10,11,13,14,15)]
for(i in c(6,7,8)){
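  # in the selected columns, positions 3-5 hold the headcounts and positions 6-8
  # the admission rates for series L, ES and S; each pass stacks one series in long format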
  temp = tbl_etab_nature_res_source[!is.na(tbl_etab_nature_res_source[i]),c(1,2,i-3,i)]
  temp$Serie = ifelse(i==6,"L",ifelse(i==7,"ES","S"))
  names(temp)[2:4] = c("Nature","Effectif","Tx.Admis")
  if(i==6){
    tbl_etab_nature_result = temp
  }
  else{
    tbl_etab_nature_result = rbind(tbl_etab_nature_result,temp)
  }
}
graph = ggplot(tbl_etab_nature_result,aes(x=Effectif,y=Tx.Admis,colour=factor(Annee))) 
graph = graph + geom_point(alpha=0.45)
graph = graph + facet_grid(Serie~Nature)
graph = graph + xlab("Effectifs de l'établissement en terminale (par série)") + ylab("Taux d'admission (%)") 
graph = graph + scale_color_discrete(name="Année des\nrésultats")
graph = graph + theme(legend.title = element_text(size=9,face="bold"),
       legend.text = element_text(size=9),
       strip.background = element_rect(colour="black", fill="gray95"),
       panel.border = element_rect(linetype = "solid"),
       panel.grid.major = element_line(colour = "gray75",linetype = "dashed"),
       panel.grid.minor = element_line(colour = "gray95",linetype = "dashed"),
       axis.title.x = element_text(size=9, face="bold"),
       axis.text.x  = element_text(size=8),
       axis.title.y = element_text(size=9, face="bold"),
       axis.text.y  = element_text(size=8))
graph

Here we see how the results evolve with the size of the schools. To bring in the local socio-economic context (unemployment, occupational structure, population, median income), we now turn the commune-level data into a spatial object and build a regular grid over the country.

df_communes_CorrNaN = df_communes_Corr[which(!df_communes_Corr$TxChomage == "NaN" & !df_communes_Corr$TxCadres == "NaN" & !df_communes_Corr$TxOuvriers == "NaN" & !df_communes_Corr$NbPopulation == "NaN" & !df_communes_Corr$TxSenior == "NaN" & !df_communes_Corr$RevenusMedians == "NaN"),]
df_communes_sp = SpatialPointsDataFrame(coords = df_communes_CorrNaN[, c("coordonnee_x", "coordonnee_y")], data = df_communes_CorrNaN) 
Grille              = as.data.frame(makegrid(df_communes_sp, nsig=2, cellsize = 0.1))
names(Grille)       = c("X", "Y")
coordinates(Grille) = c("X", "Y")
gridded(Grille)     = TRUE  
fullgrid(Grille)    = TRUE  
proj4string(Grille) = proj4string(df_communes_sp)

We can then use kriging, to smooth our unemployment and income data a little

df_communes_sp.TxChomage = krige(TxChomage ~ 1, df_communes_sp, Grille, nmax=1)
df_communes_sp.RevenusMedians = krige(RevenusMedians ~ 1, df_communes_sp, Grille, nmax=1)
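# The extract() calls below rely on raster objects (R.TxChomage, R.TxCadres, etc.)
# and on sp_lycee_WGS84 (the lycées as spatial points), which are built in steps
# not reproduced here; a minimal sketch of the presumably missing conversion,
# turning each kriged grid into a raster (the same krige()/raster() pattern would
# be repeated for TxCadres, TxOuvriers, NbPopulation and TxSenior):
R.TxChomage      = raster(df_communes_sp.TxChomage)
R.RevenusMedians = raster(df_communes_sp.RevenusMedians)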
sp_lycee_WGS84@data$TxChomage      = extract(R.TxChomage,sp_lycee_WGS84)
sp_lycee_WGS84@data$TxCadres       = extract(R.TxCadres,sp_lycee_WGS84)
sp_lycee_WGS84@data$TxOuvriers     = extract(R.TxOuvriers,sp_lycee_WGS84)
sp_lycee_WGS84@data$NbPopulation   = extract(R.NbPopulation,sp_lycee_WGS84)
sp_lycee_WGS84@data$TxSenior       = extract(R.TxSenior,sp_lycee_WGS84)
sp_lycee_WGS84@data$RevenusMedians = extract(R.RevenusMedians,sp_lycee_WGS84)

We can finally conclude, with a generic visualization function

Creation_Graphique = function(df, AnneeObs_Ouv, AnneeObs_Clo, Effectifs, Abscisses, Ordonnees, TitreAbs, TitreOrd, CouleurGraph, CouleurLiss, Serie) {
  df_temp = df[which(df$Annee>=AnneeObs_Ouv & df$Annee<=AnneeObs_Clo),]
  df_temp = df_temp[which(!is.na(df_temp[,Effectifs]) & !is.na(df_temp[,Abscisses]) & !is.na(df_temp[,Ordonnees])),]
  df_temp = df_temp[!df_temp[,Effectifs]==0,]
  df_temp = df_temp[,c(Abscisses,Ordonnees)]
  graphique = ggplot(df_temp,aes(x = df_temp[,Abscisses],y = df_temp[,Ordonnees])) 
  graphique = graphique + geom_point(data = df_temp, aes(x = df_temp[,Abscisses],y = df_temp[,Ordonnees]),size=1, color=CouleurGraph,alpha=0.25) 
  graphique = graphique + geom_density2d(aes(colour=..level..),show.legend=F) + scale_colour_gradient(low="gray55",high="gray25") 
  graphique = graphique + scale_y_continuous(breaks= seq(80,100,by=2), limits = c(80,100))
  graphique = graphique + xlab(TitreAbs) + ylab(TitreOrd) 
  graphique = graphique + ggtitle(Serie) 
  graphique = graphique + theme(plot.title   = element_text(size=13,color=CouleurLiss, face="bold", hjust=0),
       axis.title.x = element_text(size=8, face="bold"),
       axis.text.x  = element_text(size=8),
       axis.title.y = element_text(size=8, face="bold"),
       axis.text.y  = element_text(size=8),
       panel.border = element_rect(linetype = "solid"),
       panel.grid.major = element_line(colour = "gray55",linetype = "dashed"),
       panel.grid.minor = element_line(colour = "gray75",linetype = "dashed")) 
graphique = graphique + stat_smooth(method = "loess",fill=CouleurLiss,color=CouleurLiss)
  return(graphique)
}
Production_Graphique_VI_1 = function(df, Titre_General, Axe_Abscisses, Titre_Abscisses, Annee_Observee_Ouv, Annee_Observee_Clo){
  Graph_S = Creation_Graphique(df, Annee_Observee_Ouv, Annee_Observee_Clo, "Effectif.S", Axe_Abscisses, "Tx.Admis.Etab.S", Titre_Abscisses, "Taux d'admission (%)", "dodgerblue3","dodgerblue4","Série S")
  Graph_ES = Creation_Graphique(df, Annee_Observee_Ouv, Annee_Observee_Clo, "Effectif.ES", Axe_Abscisses, "Tx.Admis.Etab.ES", Titre_Abscisses, "Taux d'admission (%)","darkorange2","darkorange3","Série ES")
  Graph_L = Creation_Graphique(df, Annee_Observee_Ouv, Annee_Observee_Clo, "Effectif.L", Axe_Abscisses, "Tx.Admis.Etab.L", Titre_Abscisses, "Taux d'admission (%)", "chartreuse4","darkgreen","Série L")
  Graph_TS = Creation_Graphique(df, Annee_Observee_Ouv, Annee_Observee_Clo, "Effectif.Etab", Axe_Abscisses, "Tx.Admis.Etab", Titre_Abscisses, "Taux d'admission (%)", "indianred1","red4","Toutes séries")
  p = plot_grid(Graph_S, Graph_ES, Graph_L, Graph_TS, ncol = 2, nrow = 2,align = 'hv',
  scale = c(0.95, 0.95, 0.95, 0.95),vjust = 0.9, hjust=-0.5)
  titre <- ggdraw() + draw_label(Titre_General,fontface="bold", size=10)
  plot_grid(titre, p, ncol = 1, rel_heights=c(.25,5))
}

If we now write

df_lycee <- sp_lycee_WGS84@data

we can draw a first graph, with the unemployment rate

Production_Graphique_VI_1(df              = df_lycee,
                          Titre_General   = "Taux d'admission par série en fonction du taux de chômage \n dans la population active - Tous lycées confondus",
                          Axe_Abscisses   = "TxChomage",
                          Titre_Abscisses = "Taux de chômage dans la population active (%)",
                          Annee_Observee_Ouv     = "2013",
                          Annee_Observee_Clo     = "2015")

We will not fall into the classic trap of ecological inference by claiming something as silly as "you are less likely to pass the bac if you are unemployed". But we can note that in areas with a high unemployment rate, baccalauréat results are worse.

We can then look at results as a function of the income in the lycée's commune

Production_Graphique_VI_1(df                     = df_lycee,
                          Titre_General          = "Taux d'admission par série en fonction du niveau des revenus disponibles médians - Tous lycées confondus",
                          Axe_Abscisses          = "RevenusMedians",
                          Titre_Abscisses        = "Quantile du niveau des revenus disponibles médians (%)",
                          Annee_Observee_Ouv     = "2013",
                          Annee_Observee_Clo     = "2015")

Fascinating, isn't it? But this is clearly just a first approach… it would be worth going further!