Tag Archives: data

IDSC’24, Insurance Data Science Conference, in Stockholm

I had a great time at the IDSC’24, the Insurance Data Science Conference, in Stockholm, over those two days…

I am glad to see so many people using the datasets of the CASdatasets package… and good news: Christophe Dutang, Julien Siharath, Ewen Gallic and I are working this summer to enrich it, with fresh new data and with vignettes! More about it this Fall!

From Contemplative to Predictive Modeling

As mentioned yesterday, I gave a talk this afternoon entitled From Contemplative to Predictive Modeling (in actuarial science and risk management). Slides are available online, but let me take some time to explain what I talked about…

It is usually claimed that actuaries build ‘predictive models’, but most of the time what they do is simply ‘contemplative modeling’, in the sense that they use past information and hope that the future will be more or less the same (corresponding to the idea of generalization in machine learning). In the context of climate change (but also when modeling insurance market competition) this is no longer the case: the data used to train models do not have the same distribution as the data we will face in the future.

Continue reading From Contemplative to Predictive Modeling

Discrimination by proxy (a real case study)

Yesterday, with Laurence Barry, we posted a blog post, “Who benefits from data sharing?”, explaining why data sharing, in insurance, could end mutualization. Actually, it can also be harmful in the context of discrimination. Consider here the same dataset, with claim occurrence, from a real insurance portfolio,

library(InsurFair)
library(randomForest)

Consider a version of this dataset without the gender variable, and use variable importance to rank the variables we can use in a predictive model

# remove the sensitive attribute (gender) from the dataset
subfrenchmotor = frenchmotor[,-which(names(frenchmotor)=="sensitive")]
# random forest on the remaining variables, to get variable importance
RF = randomForest(y~. ,data=subfrenchmotor)
vi = varImpPlot(RF, sort = TRUE)

We sort variables based on variable importance (the first one is the “most important” one), and add splines for three continuous variables

# variable importance, sorted in decreasing order (removing the response from the list of names)
dfvi = data.frame(nom = names(subfrenchmotor)[-15], g = as.numeric(vi))
dfvi = dfvi[rev(order(dfvi$g)),]
nom = dfvi$nom
# use splines (bs) for the three continuous variables
nom[1] = "bs(LicAge)"
nom[3] = "bs(DrivAge)"
nom[7] = "bs(BonusMalus)"

Then, the idea is simple: at stage k, we keep the k most important variables, and run a logistic regression on those k variables. Again, I should stress that the gender of the driver is not among those k variables. Then, we compute the average predicted claim frequency, for men and women.

n=nrow(subfrenchmotor)
library(splines)
idx_F = which(frenchmotor$sensitive == "Female")
idx_M = which(frenchmotor$sensitive == "Male")
metric_gender = function(k = 3){
  # logistic regression on the k most "important" variables (gender excluded),
  # then average predicted claim frequency for women (mean_F) and men (mean_M)
  if(k == 0){
    reg = glm(y ~ 1, family = binomial, data = subfrenchmotor)
  } else {
    vr = paste(nom[1:k], collapse = " + ")
    fm = paste("y ~ ", vr, sep = "")
    reg = glm(fm, family = binomial, data = subfrenchmotor)
  }
  yp = predict(reg, type = "response")
  yp_F = yp[idx_F]
  yp_M = yp[idx_M]
  sortie = c(mean(yp_F), mean(yp_M), quantile(yp_F, c(.1,.9)), quantile(yp_M, c(.1,.9)))
  names(sortie)[1:2] = c("mean_F", "mean_M")
  sortie}

Let us now compute it for all possible numbers of variables

N = 0:15
M = Vectorize(metric_gender)(N)

and plot it

plot(N,M[1,]*100, xlab="Number of predictive variables (without gender)", ylab=
"Average predicted claims frequency (%)", type="b", pch=19, col=COLORS[2], ylim=c(8.12,9))
lines(N, M[2,]*100, type="b", pch=15, col=COLORS[3])

Interestingly, we can clearly see that with 15 explanatory variables, even though our model is gender-blind (gender is not in the training dataset), it reproduces the difference we can observe in the data: the annual claim frequency is almost 9% for men and about 8.2% for women.

Actually, it is not really possible to predict the gender from our 15 variables (below is the ROC curve of the logistic regression used to predict the gender)

library(ROCR)
metric_gender_2 = function(k = 3){
  # ROC curve of a logistic regression trying to predict the gender
  # from the k most "important" (non-sensitive) variables
  if(k == 0){
    reg = glm((sensitive=="Female") ~ 1, family = binomial, data = frenchmotor)
  } else {
    vr = paste(nom[1:k], collapse = " + ")
    fm_genre = paste('(sensitive=="Female") ~ ', vr, sep = "")
    reg = glm(fm_genre, family = binomial, data = frenchmotor)
  }
  pred = prediction(predict(reg, type = "response"), (frenchmotor$sensitive == "Female"))
  performance(pred, "tpr", "fpr")}
plot(metric_gender_2(15))

but still, when using 15 variables, we obtain discrimination in our portfolio, since the average predictions for men and women are significantly different (even though our models are, per se, gender-blind).
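To quantify this, we can also compute the area under that ROC curve, with the same ROCR functions (a small sketch, refitting the gender model on all 15 variables):

# logistic regression of the gender on the 15 (non-sensitive) explanatory variables
fm_genre = paste('(sensitive=="Female") ~ ', paste(nom[1:15], collapse = " + "), sep = "")
reg_genre = glm(as.formula(fm_genre), family = binomial, data = frenchmotor)
pred_genre = prediction(predict(reg_genre, type = "response"),
                        frenchmotor$sensitive == "Female")
performance(pred_genre, "auc")@y.values[[1]]

An AUC close to 0.5 would confirm that the gender can hardly be recovered from those covariates, even though the gap in average predicted claim frequencies remains.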

Fairness and discrimination, PhD Course, #5 Models and Data

For the fifth course, we will discuss machine learning and standard techniques used to get predictive models, and to assess accuracy of those models.

GLM (possibly constrained)

Classically, we use a penalized version of least squares (but this can be adapted to GLMs, by penalizing the negative log-likelihood). Because of the Karush–Kuhn–Tucker conditions, having a constraint on the parameter is equivalent to the following penalized problem, when the constraint is on the \ell_2 norm of \boldsymbol{\beta},

We can also consider the \ell_1 norm of \boldsymbol{\beta},

Those two approaches can be seen as a trade-off between accuracy (here the empirical risk on the left) and complexity of the model (on the right). And we can also consider a mixture of the two norms,
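For reference, a standard way of writing those penalized problems (here for least squares) is \widehat{\boldsymbol{\beta}}=\text{argmin}\left\lbrace\sum_{i=1}^n (y_i-\boldsymbol{x}_i^\top\boldsymbol{\beta})^2+\lambda\|\boldsymbol{\beta}\|_{\ell_2}^2\right\rbrace for the ridge penalty, \widehat{\boldsymbol{\beta}}=\text{argmin}\left\lbrace\sum_{i=1}^n (y_i-\boldsymbol{x}_i^\top\boldsymbol{\beta})^2+\lambda\|\boldsymbol{\beta}\|_{\ell_1}\right\rbrace for the lasso, and a penalty \lambda\big(\alpha\|\boldsymbol{\beta}\|_{\ell_1}+(1-\alpha)\|\boldsymbol{\beta}\|_{\ell_2}^2\big) for the elastic-net mixture of the two.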

As we will see, it will also be possible to consider some penalty related to fairness and discrimination measures (in-processing).

Classifier and ROC Curves

We will also recall metrics used in the context of classification, such as the ROC curve

Each point of the curve corresponds, for a given threshold, to two areas under the distributions of the scores (in the two groups), namely the false positive rate and the true positive rate

Based on the ROC curve, we can define the AUC, the area under the ROC curve,

But for classifiers, the important challenge is to have calibrated scores, meaning that we want the score to be interpreted as the true underlying probability.

Calibration

Well-calibration is defined as follows

or (with different notations)
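Either way, the idea (in standard notation) is that a score \widehat{m} is well-calibrated when \mathbb{E}[Y|\widehat{m}(\boldsymbol{X})=p]=p for all p\in(0,1), i.e. among the individuals who receive a score of p, a proportion p actually experiences the event.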

It is a well-known property in several applications.

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-07.png

The plot on the right is the calibration plot,

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-10.png

We can easily get that plot

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-09.png
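For instance, here is a minimal sketch of how such a calibration plot can be obtained in R (score and y are hypothetical placeholders, respectively the vector of predicted probabilities and the observed 0/1 outcomes):

# bin the predicted scores, then compare, in each bin, the average predicted
# probability with the observed frequency of the event
calibration_plot = function(score, y, nbins = 10){
  brk = unique(quantile(score, seq(0, 1, length = nbins + 1)))
  bin = cut(score, breaks = brk, include.lowest = TRUE)
  x_pred = tapply(score, bin, mean)   # average predicted probability, per bin
  y_obs  = tapply(y, bin, mean)       # observed frequency of the event, per bin
  plot(x_pred, y_obs, pch = 19, xlim = c(0,1), ylim = c(0,1),
       xlab = "Predicted probability", ylab = "Observed frequency")
  abline(a = 0, b = 1, lty = 2)       # diagonal = perfect calibration
}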

This concept is related to the question “do the probabilities returned by some model represent real probabilities?” For instance, below, we have pictures generated as some sort of geodesic between two pictures, with a woman on the top left and a man in the bottom right, published in the New York Times. And below, the “probabilities” given by https://www.picpurify.com/demo-face-gender-age.html.

We could agree that it is rather strange that the probabilities (of being a man) do not increase continuously: at the top, with extremely high confidence, the model predicts that the picture is that of a woman, and at the bottom, also with extremely high confidence, that the person is a man…

Data, observations vs. experiments

Then, after concepts and notations related to models, we will talk about data, and more specifically the distinction between observational and experimental data.

Another popular classification is the one discussed by Judea Pearl.

So we will talk about association, correlation, causal inference, and counterfactuals.

“Correlated variables” or proxies

One important issue is that, with massive data, one can easily get a (good) proxy of almost any sensitive variable.

The concept is related to comonotonicity, or perfect correlation.

But this is clearly too strong, so we will discuss dependence measures, too.

Independence properties

Recall that independence is defined as follows

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-11.png

and we can consider a weaker form, based on null-covariance

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-12.png

or null-correlation
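In standard notation, independence means \mathbb{P}[X\leq x,Y\leq y]=\mathbb{P}[X\leq x]\times\mathbb{P}[Y\leq y] for all x and y, whereas the weaker versions only require \text{Cov}(X,Y)=0, or a null Pearson correlation r(X,Y)=\text{Cov}(X,Y)/\sqrt{\text{Var}(X)\text{Var}(Y)}=0.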

(as a side note, this correlation measure is bounded, and those bounds are related to the Hardy–Littlewood inequality and to optimal transport)

An interesting measure is the maximal correlation

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-13.png
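Formally, following Rényi, the maximal correlation is r^\star(X,Y)=\sup\{r(f(X),g(Y))\}, where the supremum is taken over all transformations f and g with finite, non-zero variance; a key property is that r^\star(X,Y)=0 if and only if X and Y are independent.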

or we can consider a weaker version, not considering all possible transformations, but only a subset

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-14.png

Another important concept is the one of conditional independence

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-16.png
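In standard notation, X\perp\!\!\!\perp Y\mid Z means that \mathbb{P}[X\leq x,Y\leq y|Z=z]=\mathbb{P}[X\leq x|Z=z]\times\mathbb{P}[Y\leq y|Z=z] for all x, y and z.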

(the latter will be used in the context of causal graphs).

Causality

Before talking about causality, recall what non-independence means…

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-17.png

We can then construct causal graphs, or “directed acyclic graphs”

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-20.png

where the nodes are the variables used in the model, including the outcome (usually at the end of the causal graph). Then we define paths

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-18.png

and the concept of d-separation

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-19.png

This concept is related to the statistical property of conditional independence

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-21.png

More precisely, we have the following Markov property on causal graphs

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-22.png

For example, for such a graphical model,

the joint distribution is \mathbb{P}[x_1,x_2,x_3,x_4]=\mathbb{P}[x_1]\times \mathbb{P}[x_2|x_1]\times \mathbb{P}[x_3|x_2]\times \mathbb{P}[x_4|x_3], and for the graphical model below

we have \mathbb{P}[x_1,x_2,x_3,x_4]=\mathbb{P}[x_1]\times \mathbb{P}[x_2]\times \mathbb{P}[x_3|x_1,x_2]\times \mathbb{P}[x_4|x_3]. Those graphs can be related to structural models (with idiosyncratic noise denoted U), since
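(in standard structural-equation notation, each variable is a deterministic function of its parents in the graph and of its own noise term, something like X_j=f_j(\text{parents}(X_j),U_j), with mutually independent noises U_j)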

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-23.png

Potential outcome

Another important concept is that of counterfactuals, and potential outcomes. In an ideal world, we would observe the outcome in both cases, with and without the treatment

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-24.png

but in real life, we only observe one of them,

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-25.png

And the goal will be, somehow, to estimate what the non-observed outcome would have been. Classical quantities we then wish to estimate are the average treatment effect, and its conditional version, based on some covariates.

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-26.png
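In standard potential-outcome notation, the average treatment effect is \tau=\mathbb{E}[Y(1)-Y(0)], and its conditional version, based on covariates \boldsymbol{X}, is \tau(\boldsymbol{x})=\mathbb{E}[Y(1)-Y(0)|\boldsymbol{X}=\boldsymbol{x}].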

This concept will actually be related to counterfactual fairness, when the “treatment” is the sensitive attribute.

Twin network representation of the counterfactual

Finally, we will consider a so-called “twin network representation”. Consider a DAG, associated with some simple structural model

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-27.png

Based on a structural model, we can recover the values of the idiosyncratic noise components

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-28.png

Then, we use those values on the twin representation, where the treatment is not 0, but 1. Counterfactuals are created by using the same noises

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-29.png

The difference between the two outcomes is the treatment effect, or the disparate treatment

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-30.png

or more generally, we write

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-31.png

This is an idea used in Plecko & Meinshausen, 2019, in the context of fairness, but we will discuss this more, later on…

Data Augmentation for Imbalanced Regression

Our paper Data Augmentation for Imbalanced Regression, written with Denys Pommeret and Sam Stocksieker, is now available on arXiv.

In this work, we consider the problem of imbalanced data in a regression framework when the imbalanced phenomenon concerns continuous or discrete covariates. Such a situation can lead to biases in the estimates. In this case, we propose a data augmentation algorithm that combines a weighted resampling (WR) and a data augmentation (DA) procedure. In a first step, the DA procedure permits exploring a wider support than the initial one. In a second step, the WR method drives the exogenous distribution to a target one. We discuss the choice of the DA procedure through a numerical study that illustrates the advantages of this approach. Finally, an actuarial application is studied.

Predictive probabilities, for the “100% actuaires, et 100% data science” day

This Thursday, I am (virtually) taking part in a workshop on predictive probabilities at the “100% actuaires, et 100% data science” day of the Institut des Actuaires, in France, with Nicolas Marescaux and Florence Picard.

Since the event is not offered in “hybrid” mode, I will be present through a short video, which will be broadcast during the workshop.

I will revisit, graphically, a point I regularly bring up, namely the fundamental variability that exists when modeling binary events, even if the true probability is known. In a Gaussian model, if Y\sim\mathcal{N}(\mu,\sigma^2), even knowing the mean \mu (perfectly), Y-\mu is a random variable, Y-\mu\sim\mathcal{N}(0,\sigma^2), which can have a small variance if \sigma is small. And in a regression model, Y|X=x\sim\mathcal{N}(\mu_x,\sigma_0^2) with \sigma_0^2\leq \sigma^2. But in GLMs, with classification or Poisson models, the variance does not shrink: if Y|X=x\sim\mathcal{B}(p_x), the variance will be p_x(1-p_x), which remains non-zero whenever p_x\in(0,1).

The other point is that, with a Bernoulli distribution, p can only really be known if we have many observations; that is the law of large numbers, otherwise there is variability. On the figure below, I consider several levels of the probability p, and I plot on the left the “confidence” interval for the frequency when we have n=100 observations, with the distribution of the frequency at the bottom. For instance, if p=15\%, with about a hundred observations, the frequency has a 90% chance of being between 10\% and 20\%. In other words, predicting a probability of having an accident of 15.723\% does not make much sense. In the middle, I look at the distribution of \overline{y}_n-p, and we see that a non-negligible share of variance remains. On the right is the relative deviation (\overline{y}_n-p)/p, which behaves in the opposite way to the absolute deviation: if p is small (say 2\%), we deviate little in absolute value (\pm 2\%), but it is not impossible (even with n=100 observations) to observe twice as many 1s as expected.
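To illustrate (a small simulation sketch, using the values from the figure as assumptions, p=15\% and n=100):

# central 90% range of the observed frequency, when the true probability p is known
n = 100
p = 0.15
qbinom(c(0.05, 0.95), size = n, prob = p) / n
# simulated distribution of the relative deviation (ybar - p)/p
ybar = rbinom(1e5, size = n, prob = p) / n
quantile((ybar - p) / p, probs = c(0.05, 0.95))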

Looking at \overline{y}_n-p amounts to assuming that we can know the true probability and that we use it: \overline{y} is the average annual loss, and p is the (annual) pure premium. So \overline{y}_n-p is the annual loss of the insurer. But instead of using p, we can imagine pooling risks: for instance, below, if p\in[0\%;4\%] we use the average value (i.e. 2\%). We see that this adds variability to the distribution of individual losses, all the more so when there are few classes.

More on this soon…

DataDay 2022 MAIF

This Wednesday, I will take part in the DataDay 2022 week at MAIF, to talk about data and climate…

I will start by going back (it is a talk for a general audience) over the specificities of the different types of data, starting with individual (cross-sectional) data vs. temporal data

In the first case, we will mostly talk (in climate applications) about spatial data

and in the second case, about time series,

We will finish with spatio-temporal data

Insurance Data Science Conference 2021 (online)

The Insurance Data Science Conference returns in 2021 for an on-line global event. The conference will run over three half-days (afternoons in Europe & Africa / mornings in the Americas). The conference brings together academics and practitioners in areas including data science, analytics, machine learning, artificial intelligence, computational statistics and software, as applied in the insurance industry. For more information, see https://insurancedatascience.org/

INF7100, statistics

The second part of my lectures on data science, for the INF7100 course, will deal with statistics, univariate and multivariate. The outline is the following

  • 201: From Statistics to Data Science pdf video (14:24)
  • 211: Usual Functions in Statistics (cumulative distribution function, density, histogram) pdf video (28:37)
  • 221: Statistical Indicators: Central Value (mean) pdf video (32:56)
  • 222: Statistical Indicators: Dispersion (variance, inequalities) pdf video (22:21)
  • 223: Statistical Indicators: Approximations (normal approximation) pdf video (18:42)
  • 224: Statistical Indicators: Quantiles pdf video (24:54)
  • 231: Inference (Bayesian statistics) pdf video (39:33)
  • 241: Statistical Tests (1) (tests, significance, p-value) pdf video (43:41)
  • 242: Statistical Tests (2) (errors) pdf video (16:51)
  • 261: Bivariate Statistics pdf video (25:16)
  • 271: Multivariate Statistics: Projections pdf video (29:06)
  • 272: Multivariate Statistics: Clusters pdf video (32:21)
  • 281: Networks and Graphs pdf video (32:40)
  • 291: Time Series Data pdf video (29:01)

 

INF7100, data

The first part of my lectures on data science, for the INF7100 course, will deal with data (and the distinction between observational data and experimental data). The outline is the following

  • 111: Observation vs. Experimentation pdf video (19:48)
  • 112: Observational Data and Biases pdf video (22:39)
  • 121: Simpson’s Paradox pdf video (22:24)
  • 122: Looking for Counterfactuals pdf video (25:27)
  • 123: A/B Testing and Reinforcement pdf video (12:13)
  • 131: Uncertainty (1) pdf video (39:01)
  • 132: Uncertainty (2) pdf video (41:45)

(yes, I switched to YouTube for this course… my Vimeo account was unfortunately suspended… and all the videos from my previous course are no longer available)

Testing for Covid-19 in the U.S.

For almost a month now, on a daily basis, we have been working with colleagues (Romuald, Chi and Mathieu) on modeling the dynamics of the recent pandemic. I learn a lot discussing with them, but we keep struggling with the tests. Paul, in Montréal, helped me a little bit, but I think we still have more work to do to get a better understanding. To be honest, we struggle with two very simple questions

  • how many people are tested on a daily basis ?

Recently, I discovered Modelling COVID-19 exit strategies for policy makers in the United Kingdom, which is very close to what we are trying to do… and in that document two interesting scenarios are discussed, with, for the first one, “1 million ‘reliable’ daily tests are deployed” (in the U.K.) and, for the second one, “5 million ‘useless’ daily tests are deployed”. There are about 65 million inhabitants in the U.K., so we are talking here about 1.5% of the population tested on a daily basis, or 7.69%! It could make sense, but our question was, at some point, is that realistic? Where are we today with testing? In the U.S., https://covidtracking.com/ collects interesting data, on a daily basis, per state.

url = "https://raw.githubusercontent.com/COVID19Tracking/covid-tracking-data/master/data/states_daily_4pm_et.csv"
download.file(url,destfile="covid.csv")
base = read.csv("covid.csv")

Unfortunately, there is no information about the population. That, we can find on Wikipedia. But in that table, the state is given by its full name (while the previous dataset uses its abbreviation). So we also need to match the two datasets properly,

url="https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States_by_population"
download.file(url,destfile = "popUS.html")
library(XML)
tables=readHTMLTable("popUS.html")
T=tables[[1]][3:54,c("V3","V4")]
names(T)=c("state","pop")
url="https://en.wikipedia.org/wiki/List_of_U.S._state_abbreviations"
download.file(url,destfile = "nameUS.html")
tables=readHTMLTable("nameUS.html")
T2=tables[[1]][13:63,c(1,4)]
names(T2)=c("state","symbol")
T=merge(T,T2)
T$population = as.numeric(gsub(",", "", T$pop, fixed = TRUE))
names(base)[2]="symbol"
base = merge(base,T[,c("symbol","population")])

Now our dataset is fine… and we can write a function to plot the (cumulative) number of people tested in the U.S. Here, we distinguish between positive and negative tests,

drawing = function(st ="NY"){
# left panel: cumulative tests (positive + negative) and positive tests, as a share of the state population
sbase=base[base$symbol==st,c("date","positive","negative","population")]
sbase$DATE = as.Date(as.character(sbase$date),"%Y%m%d")
sbase=sbase[order(sbase$DATE),]
par(mfrow=c(1,2))
plot(sbase$DATE,(sbase$positive+sbase$negative)/sbase$population,ylab="Proportion Test (/population of state)",type="l",xlab="",col="blue",lwd=3)
lines(sbase$DATE,sbase$positive/sbase$population,col="red",lwd=2)
legend("topleft",c("negative","positive"),lwd=2,col=c("blue","red"),bty="n")
title(st)
# right panel: share of tests that came back positive
plot(sbase$DATE,sbase$positive/(sbase$positive+sbase$negative),ylab="Ratio of positive tests",ylim=c(0,1),type="l",xlab="",col="black",lwd=3)
title(st)}

Let us start with New York

drawing("NY")

As of now, 4% of the entire population has been tested… over six weeks… The graph on the right is the proportion of tests that came back positive… I won’t get back to that one here today, I keep it for our work. In New Jersey, about 2.5% of the entire population has been tested, overall,

drawing("NJ")

Let us try one last state, Florida

drawing("FL")

As of today, it is 1.5% of the population, over six weeks. Overall, in the U.S., less than 0.1% of the population is tested on a daily basis, which is far from the 1.5% of the U.K. scenarios. Now, here comes the second question,

  • what are we actually testing for ?

On that one, my experience in biology is… very limited, and Paul helped me. This morning, he mentioned a nice report from a lab at UC Berkeley

One of my questions was, for instance: if you test positive, and you take the test again, can you test negative? Or, in the context of our data, do we test different people? Are some people tested on a regular basis (perhaps every week)? For instance, with molecular tests (Reverse Transcription Quantitative Polymerase Chain Reaction, RT-qPCR, also simply called PCR tests) we test whether someone is currently infected, while with antibody tests (using serological immunoassays that detect virus-specific antibodies — Immunoglobulin M (IgM) and G (IgG) — also called serology tests), we test for antibodies, i.e. some form of immunity. Which is rather different…

I have no idea what we have in our database, to be honest… and for the past six weeks, I have seen a lot of databases, and most of the time I don’t know how to interpret them, I don’t know what is measured… and it is scary. So, so far, we try to do some maths, and to test dynamics by tuning parameters “the best we can” (rather than estimating them). But if anyone has good references on testing, in the context of Covid-19 (for instance on the specificity and sensitivity of all those tests), I would love to hear about it!

INF7100, online

Due to the covid-19 context, the summer term will start one week later.

The INF7100 course, Initiation à la science des données et à l’intelligence artificielle – which was supposed to start on Tuesday, April 28 – will start on May 5. Registered students will receive a link to connect, for a general presentation. The course will be taught jointly with Marie-Jean Meurs (from the computer science department) and Jean-Hugues Roy (from the school of media). I will post information in the coming weeks about the topics I will cover (mostly on data science).

Analysis of baccalauréat results in the general tracks

To continue on manipulating public data, I wanted to build on Cédric’s project, from the Data Science pour l’Actuariat program, on baccalauréat results. The data needed for this study are available on several websites,

This is by no means an in-depth analysis of the results, just a bit of visualization, with no other ambition! Oh, and even though we are not going to draw maps (I find them hard to read), we will still use spatial data: schools are geolocated, and we can get local information on the unemployment rate or the median income. And make some graphs.

With this preamble out of the way, we can start.

library(dplyr)
library(readxl)
library(sp)
library(ggmap)
library(raster)
library(leaflet)
library(DT)
library(cowplot)
library(gstat)
library(tmap)

We will start by retrieving, for each school, the baccalauréat results.

librairie = ""   # folder prefix for downloaded files (here, the working directory)
url_resultat_etab = "https://data.education.gouv.fr/explore/dataset/fr-en-indicateurs-de-resultat-des-lycees-denseignement-general-et-technologique/download/?format=csv&timezone=Europe/Berlin&use_labels_for_header=true"
download.file(url_resultat_etab,destfile = paste0(librairie,"import_resultat_etab.csv"), method="curl")
df_resultat_etab = read.csv("import_resultat_etab.csv",header=TRUE, sep= ";", encoding="UTF-8")

As is often the case with data from French administrations, there are typography issues. To simplify, we will remove the accents and standardize the column names a bit

MiseEnForme_Colonnes = function(text) {
  text <- gsub("è", "e", text)  
  text <- gsub("é", "e", text)         
  text <- gsub("_", ".", text)
  text <- gsub("serie.", "", text)
  text <- gsub("Effectif.Presents.", "Effectif.", text)
  text <- gsub("Taux.","Tx.",text)
  text <- gsub("Brut.de.Reussite.", "Admis.Etab.", text)
  text <- gsub("Reussite.Attendu.", "Admis.", text)
  text <- gsub("brut", "Etab", text)
  text <- gsub("attendu", "Academie", text)
  text <- gsub("toutes.", "TOTAL", text)
  text <- gsub("Total.", "TOTAL", text)
  text <- gsub("..Etablissement", ".Etab", text)
  text <- gsub("Pourcentage", "Tx", text)
  return(text)
}
for(i in 1:ncol(df_resultat_etab)){
  colnames(df_resultat_etab)[i] <- MiseEnForme_Colonnes(names(df_resultat_etab)[i])
}

We will then remove the overseas departments and regions,

df_resultat_etab = df_resultat_etab[-which(toupper(df_resultat_etab$Departement) %in% c("GUADELOUPE","MAYOTTE","MARTINIQUE","REUNION","GUYANE")),]

retrieve the column names

Colonnes = colnames(df_resultat_etab)

and since we are interested in the first variables

Colonnes_Generiques = Colonnes[1:8]

we extract them, to build some statistics from the columns related to the L, ES and S tracks

Colonnes_Series = Colonnes[grepl("([a-zA-Z]*?.)*\\.S$|([a-zA-Z]*?.)*\\.ES$|([a-zA-Z]*?.)*\\.L$|([a-zA-Z]*?.)*\\.TOTAL$",Colonnes)]

And we finish with the other columns

Colonnes_Autres = Colonnes[grepl("(Tx.Bacheliers.*)|(Tx.acces.*)|(Effectif.de.*)|(libelle.region)|(code.region)|(element)",Colonnes)] 
df_resultat_etab = cbind(df_resultat_etab[Colonnes_Generiques],df_resultat_etab[Colonnes_Series],df_resultat_etab[Colonnes_Autres])

We can also geolocate the schools

url_carto_etab <- "https://www.data.gouv.fr/s/resources/adresse-et-geolocalisation-des-etablissements-denseignement-du-premier-et-second-degres/20160526-143453/DEPP-etab-1D2D.csv"
 
download.file(url_carto_etab,destfile=paste0(librairie,"import_carto_etab.csv"))
df_carto_etab = read.csv2("import_carto_etab.csv", header=TRUE)

We retrieve here the geolocation of 66,556 schools nationwide! We can cross this with socio-economic data at the commune (municipality) level

nom_base_emploi = "base-cc-emploi-pop-act-2014"
url_baseemploi_popactive = paste0("https://www.insee.fr/fr/statistiques/fichier/2862207/",nom_base_emploi,".zip")
download.file(url_baseemploi_popactive,destfile=paste0(librairie,nom_base_emploi,".zip"))
unzip(paste0(nom_base_emploi,".zip"),overwrite = TRUE) 
df_base_emploi_source = read_excel(paste0(nom_base_emploi,".xls"),sheet="COM_2014",skip=5)

We will exclude the overseas territories here

df_base_emploi_source <- df_base_emploi_source[-which(df_base_emploi_source$DEP %in% c("971","972","973","974","975")),]
df_base_emploi_colonnes = c("CODGEO","P14_POP1564","P14_H1564","P14_F1564","P14_ACT1564","P14_ACTOCC1564","P14_CHOM1564","P14_INACT1564", "P14_ETUD1564", "P14_RETR1564", "P14_AINACT1564", "P14_HCHOM1524", "P14_FCHOM1524", "C14_ACT1564","C14_ACT1564_CS1","C14_ACT1564_CS2","C14_ACT1564_CS3","C14_ACT1564_CS4","C14_ACT1564_CS5","C14_ACT1564_CS6","P14_POP15P")
df_base_emploi = df_base_emploi_source[,names(df_base_emploi_source) %in% df_base_emploi_colonnes]

and fix the classical issues with Corsica (the 2A and 2B department codes),

MiseEnForme_CodeGeo = function(text) {
  text <- gsub("2A", "20", text)  
  text <- gsub("2B", "20", text)  
  return(text)
}
df_base_emploi$CODGEO = MiseEnForme_CodeGeo(df_base_emploi$CODGEO)

We can also use income data, per commune

nom_base_revenus = "indic-struct-distrib-revenu-2014-COMMUNES"
url_baserevenus = paste0("https://www.insee.fr/fr/statistiques/fichier/3126151/",nom_base_revenus,".zip")
download.file(url_baserevenus,destfile=paste0(librairie,nom_base_revenus,".zip"))
unzip(paste0(nom_base_revenus,".zip"),overwrite = TRUE)
df_base_revenus = read_excel("FILO_DISP_COM.xls",sheet="ENSEMBLE",skip=5)[,c(1,4,7)]
df_base_revenus$CODGEO = MiseEnForme_CodeGeo(df_base_revenus$CODGEO)

We then retrieve spatial data about the communes

url_geoloc_communes = "http://www.nosdonnees.fr/wiki/images/b/b5/EUCircos_Regions_departements_circonscriptions_communes_gps.csv.gz"
download.file(url_geoloc_communes,destfile=paste0(librairie,"geoloc_communes.csv.gz"))
df_geoloc_communes = read.csv2(gzfile("geoloc_communes.csv.gz"),header=TRUE, stringsAsFactors = FALSE,encoding="UTF-8")

and, as always, a few corrections are needed

df_geoloc_communes = df_geoloc_communes[-which(df_geoloc_communes$numéro_département %in% c("971","972","973","974","975")),]
df_geoloc_communes = df_geoloc_communes[,names(df_geoloc_communes) %in% c("code_insee","latitude","longitude","codes_postaux")]
df_geoloc_communes_nb <- nrow(df_geoloc_communes)

We then create a function to replace missing values and to fix the decimal separators

MiseEnForme_CoordonneesGeo = function(valeur){
pretraitement = ifelse(as.character(valeur)=="-","0",as.character(valeur))
traitement = as.numeric(ifelse(pretraitement==".","0",gsub(pattern=",",replacement=".",pretraitement)))
  return(traitement)
}
df_geoloc_communes$latitude = MiseEnForme_CoordonneesGeo(df_geoloc_communes$latitude)
df_geoloc_communes$longitude = MiseEnForme_CoordonneesGeo(df_geoloc_communes$longitude)

We then remove duplicated rows

df_geoloc_communes = unique(df_geoloc_communes)

We then rename the columns to be consistent with the other datasets

names(df_geoloc_communes) = c("Codes_Postaux","CODGEO","coordonnee_y","coordonnee_x")

We can then look for duplicated rows on the INSEE codes

liste_CODGEO2 = aggregate(x=df_geoloc_communes$Codes_Postaux,by=list(df_geoloc_communes$CODGEO),FUN="length")
list_geoloc_communes_CODGEO2 = liste_CODGEO2[liste_CODGEO2$x>1,1]
df_geoloc_communes_CODGEO2 = df_geoloc_communes[df_geoloc_communes$CODGEO %in% list_geoloc_communes_CODGEO2,1:2]

Here, a manual correction is needed for 4 cases: the data for Lyon, Paris and Marseille are not geolocated, and the data for the town of Laguépie are geolocated twice

df_geoloc_communes_propre = df_geoloc_communes[!df_geoloc_communes$CODGEO %in% list_geoloc_communes_CODGEO2,]
df_geoloc_communes_corrige = data.frame(Codes_Postaux = c("13001","69001","75001","82250"),
                                        CODGEO = c("13055","69123","75056","82088"),
                                        coordonnee_y = c(43.3,45.75,48.85,44.15),
                                        coordonnee_x = c(5.4,4.85,2.31,1.97))
df_geoloc_communes = rbind(df_geoloc_communes_propre,df_geoloc_communes_corrige)

We can finally merge the datasets

df_etab = merge(df_resultat_etab,df_carto_etab,by="Cod.Etab")

Some schools cannot be geolocated for some years

df_etab_total_nongeolocalises <- df_resultat_etab[!df_resultat_etab$Cod.Etab %in% df_carto_etab$Cod.Etab,]

Since the study only covers general and technological high schools (lycées), the dataframe is restricted to observations corresponding to lycées on the one hand, and to polyvalent, general, or general and technological schools on the other hand.

df_etab = df_etab[grep("LYCÉE",toupper(df_etab$nature_uai_libe)),]
df_etab = df_etab[grep("GÉNÉRAL|POLYVALENT",toupper(df_etab$nature_uai_libe)),]
df_etab_nongeolocalises = df_etab[df_etab$Cod.Etab %in% df_etab_total_nongeolocalises$Cod.Etab,]
df_etab_geolocalise = df_etab[!is.na(df_etab$coordonnee_x),]
df_etab_geolocalise = df_etab_geolocalise[!is.na(df_etab_geolocalise$coordonnee_y),]

Finally, we convert the geographic codes (read here as factors) into character strings (of 5 characters) in order to merge the tables

ConvertCODGEO = function(code) {
  if(is.character(code)) {
    code_character = ifelse(nchar(code)<5, paste0("0",code), code)
    return(code_character)
  }
  else if(is.factor(code)){
    code_character = ifelse(code<10000, paste0("0",as.numeric(as.character(code))), as.numeric(as.character(code)))
    return(code_character)
  }
  else if(is.numeric(code)){
    code_character = ifelse(code<10000, paste0("0",code), as.character(code))
    return(code_character)
  } 
}
df_etab_geolocalise$Code.commune = ConvertCODGEO(df_etab_geolocalise$Code.commune) 
df_etab_geolocalise$Secteur.Public.Prive = sapply(df_etab_geolocalise$Secteur.Public.Prive,function(nature) {ifelse(nature=="PU","Lycées Publics","Lycées Privés")})

We then keep the schools whose commune is not missing

df_etab_geolocalise = df_etab_geolocalise[!is.na(df_etab_geolocalise$Code.commune),]

Finally, we build a dataset, to then produce a graph

tbl_etab_nature_res_source = df_etab_geolocalise[,c(3,8,9,10,11,13,14,15)]
for(i in c(6,7,8)){
  temp = tbl_etab_nature_res_source[!is.na(tbl_etab_nature_res_source[i]),c(1,2,i-3,i)]
  temp$Serie = ifelse(i==6,"L",ifelse(i==7,"ES","S"))
  names(temp)[2:4] = c("Nature","Effectif","Tx.Admis")
  if(i==6){
    tbl_etab_nature_result = temp
  }
  else{
    tbl_etab_nature_result = rbind(tbl_etab_nature_result,temp)
  }
}
graph = ggplot(tbl_etab_nature_result,aes(x=Effectif,y=Tx.Admis,colour=factor(Annee))) 
graph = graph + geom_point(alpha=0.45)
graph = graph + facet_grid(Serie~Nature)
graph = graph + xlab("Effectifs de l'établissement en terminale (par série)") + ylab("Taux d'admission (%)") 
graph = graph + scale_color_discrete(name="Année des\nrésultats")
graph = graph + theme(legend.title = element_text(size=9,face="bold"),
       legend.text = element_text(size=9),
       strip.background = element_rect(colour="black", fill="gray95"),
       panel.border = element_rect(linetype = "solid"),
       panel.grid.major = element_line(colour = "gray75",linetype = "dashed"),
       panel.grid.minor = element_line(colour = "gray95",linetype = "dashed"),
       axis.title.x = element_text(size=9, face="bold"),
       axis.text.x  = element_text(size=8),
       axis.title.y = element_text(size=9, face="bold"),
       axis.text.y  = element_text(size=8))
graph

We have here the evolution of the results as a function of the size of the schools.

df_communes_CorrNaN = df_communes_Corr[which(!df_communes_Corr$TxChomage == "NaN" & !df_communes_Corr$TxCadres == "NaN" & !df_communes_Corr$TxOuvriers == "NaN" & !df_communes_Corr$NbPopulation == "NaN" & !df_communes_Corr$TxSenior == "NaN" & !df_communes_Corr$RevenusMedians == "NaN"),]
df_communes_sp = SpatialPointsDataFrame(coords = df_communes_CorrNaN[, c("coordonnee_x", "coordonnee_y")], data = df_communes_CorrNaN) 
Grille              = as.data.frame(makegrid(df_communes_sp, nsig=2, cellsize = 0.1))
names(Grille)       = c("X", "Y")
coordinates(Grille) = c("X", "Y")
gridded(Grille)     = TRUE  
fullgrid(Grille)    = TRUE  
proj4string(Grille) = proj4string(df_communes_sp)

We can then do some kriging, to smooth our unemployment and income data a bit

df_communes_sp.TxChomage = krige(TxChomage ~ 1, df_communes_sp, Grille, nmax=1)
df_communes_sp.RevenusMedians = krige(RevenusMedians ~ 1, df_communes_sp, Grille, nmax=1)
sp_lycee_WGS84@data$TxChomage      = extract(R.TxChomage,sp_lycee_WGS84)
sp_lycee_WGS84@data$TxCadres       = extract(R.TxCadres,sp_lycee_WGS84)
sp_lycee_WGS84@data$TxOuvriers     = extract(R.TxOuvriers,sp_lycee_WGS84)
sp_lycee_WGS84@data$NbPopulation   = extract(R.NbPopulation,sp_lycee_WGS84)
sp_lycee_WGS84@data$TxSenior       = extract(R.TxSenior,sp_lycee_WGS84)
sp_lycee_WGS84@data$RevenusMedians = extract(R.RevenusMedians,sp_lycee_WGS84)

We can finally conclude, with a generic visualization function

Creation_Graphique = function(df, AnneeObs_Ouv, AnneeObs_Clo, Effectifs, Abscisses, Ordonnees, TitreAbs, TitreOrd, CouleurGraph, CouleurLiss, Serie) {
  df_temp = df[which(df$Annee>=AnneeObs_Ouv & df$Annee<=AnneeObs_Clo),]
  df_temp = df_temp[which(!is.na(df_temp[,Effectifs]) & !is.na(df_temp[,Abscisses]) & !is.na(df_temp[,Ordonnees])),]
  df_temp = df_temp[!df_temp[,Effectifs]==0,]
  df_temp = df_temp[,c(Abscisses,Ordonnees)]
  graphique = ggplot(df_temp,aes(x = df_temp[,Abscisses],y = df_temp[,Ordonnees])) 
  graphique = graphique + geom_point(data = df_temp, aes(x = df_temp[,Abscisses],y = df_temp[,Ordonnees]),size=1, color=CouleurGraph,alpha=0.25) 
  graphique = graphique + geom_density2d(aes(colour=..level..),show.legend=F) + scale_colour_gradient(low="gray55",high="gray25") 
  graphique = graphique + scale_y_continuous(breaks= seq(80,100,by=2), limits = c(80,100))
  graphique = graphique + xlab(TitreAbs) + ylab(TitreOrd) 
  graphique = graphique + ggtitle(Serie) 
  graphique = graphique + theme(plot.title   = element_text(size=13,color=CouleurLiss, face="bold", hjust=0),
       axis.title.x = element_text(size=8, face="bold"),
       axis.text.x  = element_text(size=8),
       axis.title.y = element_text(size=8, face="bold"),
       axis.text.y  = element_text(size=8),
       panel.border = element_rect(linetype = "solid"),
       panel.grid.major = element_line(colour = "gray55",linetype = "dashed"),
       panel.grid.minor = element_line(colour = "gray75",linetype = "dashed")) 
graphique = graphique + stat_smooth(method = "loess",fill=CouleurLiss,color=CouleurLiss)
  return(graphique)
}
Production_Graphique_VI_1 = function(df, Titre_General, Axe_Abscisses, Titre_Abscisses, Annee_Observee_Ouv, Annee_Observee_Clo){
  Graph_S = Creation_Graphique(df, Annee_Observee_Ouv, Annee_Observee_Clo, "Effectif.S", Axe_Abscisses, "Tx.Admis.Etab.S", Titre_Abscisses, "Taux d'admission (%)", "dodgerblue3","dodgerblue4","Série S")
  Graph_ES = Creation_Graphique(df, Annee_Observee_Ouv, Annee_Observee_Clo, "Effectif.ES", Axe_Abscisses, "Tx.Admis.Etab.ES", Titre_Abscisses, "Taux d'admission (%)","darkorange2","darkorange3","Série ES")
  Graph_L = Creation_Graphique(df, Annee_Observee_Ouv, Annee_Observee_Clo, "Effectif.L", Axe_Abscisses, "Tx.Admis.Etab.L", Titre_Abscisses, "Taux d'admission (%)", "chartreuse4","darkgreen","Série L")
  Graph_TS = Creation_Graphique(df, Annee_Observee_Ouv, Annee_Observee_Clo, "Effectif.Etab", Axe_Abscisses, "Tx.Admis.Etab", Titre_Abscisses, "Taux d'admission (%)", "indianred1","red4","Toutes séries")
  p = plot_grid(Graph_S, Graph_ES, Graph_L, Graph_TS, ncol = 2, nrow = 2,align = 'hv',
  scale = c(0.95, 0.95, 0.95, 0.95),vjust = 0.9, hjust=-0.5)
  titre <- ggdraw() + draw_label(Titre_General,fontface="bold", size=10)
  plot_grid(titre, p, ncol = 1, rel_heights=c(.25,5))
}

Here, we define

df_lycee <- sp_lycee_WGS84@data

and we can produce a first graph, with the unemployment rate

Production_Graphique_VI_1(df              = df_lycee,
                          Titre_General   = "Taux d'admission par série en fonction du taux de chômage \n dans la population active - Tous lycées confondus",
                          Axe_Abscisses   = "TxChomage",
                          Titre_Abscisses = "Taux de chômage dans la population active (%)",
                          Annee_Observee_Ouv     = "2013",
                          Annee_Observee_Clo     = "2015")

We are not going to fall into the trap of ecological inference by stating something as silly as “you are less likely to pass the baccalauréat when you are unemployed”. But we can note that in areas with a high unemployment rate, baccalauréat results are worse.

We can then look at the results as a function of the income level of the lycée’s commune

Production_Graphique_VI_1(df                     = df_lycee,
                          Titre_General          = "Taux d'admission par série en fonction du niveau des revenus disponibles médians - Tous lycées confondus",
                          Axe_Abscisses          = "RevenusMedians",
                          Titre_Abscisses        = "Quantile du niveau des revenus disponibles médians (%)",
                          Annee_Observee_Ouv     = "2013",
                          Annee_Observee_Clo     = "2015")

Fascinating, isn’t it? But this is clearly just a first look… one should go further!