Tag Archives: data

Criminal Records, the Right to be Forgotten, and Inclusive Insurance

I just uploaded an article, “Criminal Records, the Right to be Forgotten, and Inclusive Insurance: Risk, Rehabilitation, and Data Governance in Underwriting”, written with David Schraub.

Criminal history information (CHx) is increasingly used in insurance screening and underwriting, yet it remains under-theorized in the inclusive insurance literature despite its potential to widen protection gaps through quote-gating, opaque third-party data pipelines, and limited avenues for correction. This article argues that criminal-history underwriting is a joint problem of risk governance and information governance. Criminal justice information is time-sensitive and legally structured, but automated decision systems and vendor supply chains can operationalize it as durable stigma, particularly when records persist across brokers, archives, and derivative products that fail to reflect status changes such as sealing, expungement, or spentness. The article maps where CHx can enter the insurance lifecycle (pre-quote funnels, underwriting/pricing, claims and renewal) and identifies inclusion-relevant failure modes linked to provenance, data quality, update lag, and weak procedural safeguards. It then develops a conceptual framework distinguishing defensible risk relevance from digital punishment, and translates that framework into implementable mechanisms for more inclusive practice: redemption-by-design (time windows and decaying weights), data minimization and purpose limitation grounded in marginal predictive value, auditable vendor governance, and due-process safeguards providing actionable notice, explanation, and rapid correction. The article concludes with a governance standard for regulators and insurers: the key question is not whether CHx is used, but whether any use is demonstrably incremental, time-bounded, contestable, and governable across the data supply chain.
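Purely as an illustration (nothing below comes from the paper; the function name, the window and the half-life are arbitrary choices), a “redemption-by-design” weight combining a hard time window with a decaying weight could be sketched in R as follows,

# illustrative sketch only: a past record is ignored after `window` years,
# and its weight decays geometrically (halving every `half_life` years) before that
record_weight = function(years_since, window = 10, half_life = 3){
  ifelse(years_since > window, 0, 0.5^(years_since / half_life))
}
record_weight(c(0, 2, 5, 12))   # weights for records that are 0, 2, 5 and 12 years old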

IDSC’24, Insurance Data Science Conference, in Stockholm

I had a great time at IDSC’24, the Insurance Data Science Conference, in Stockholm, these past two days…

I am glad to see so many people using the datasets of the CASdatasets package… good news: we are working with Christophe Dutang, Julien Siharath and Ewen Gallic this summer to enrich it, with fresh new data and with vignettes! More about it this fall!

From Contemplative to Predictive Modeling

As mentioned yesterday, I gave a talk this afternoon entitled From Contemplative to Predictive Modeling (in actuarial science and risk management). Slides are available online, but maybe I can take some time to explain what I talked about…

It is usually claimed that actuaries build ‘predictive models’, but most of the time what they build is simply ‘contemplative modeling’, in the sense that they use past information and hope that the future will be more or less the same (corresponding to the idea of generalization in machine learning). In the context of climate change (but also when modeling insurance market competition) this is no longer the case: the data used to train the models do not have the same distribution as the data we will face in the future.

Continue reading From Contemplative to Predictive Modeling

Discrimination by proxy (a real case study)

Yesterday, with Laurence Barry, we posted a blog post “Who benefits from data sharing?”, explaining why data sharing, in insurance, could end mutualization. Actually, it can also be harmful in the context of discrimination. Consider here the same dataset, with claim occurrence, in a real insurance portfolio,

library(InsurFair)
library(randomForest)

Consider a version of this dataset without the gender, and use variable importance to get a list of variables we can use in a predictive model

subfrenchmotor = frenchmotor[, -which(names(frenchmotor) == "sensitive")]  # drop the sensitive (gender) variable
RF = randomForest(y ~ ., data = subfrenchmotor)
vi = varImpPlot(RF, sort = TRUE)   # returns the importance measures, invisibly

We sort variables based on variable importance (the first one is the “most important” one), and add splines for three continuous variables

dfvi = data.frame(nom = names(subfrenchmotor)[-15], g = as.numeric(vi))
dfvi = dfvi[rev(order(dfvi$g)),]
nom = dfvi$nom
nom[1] = "bs(LicAge)"
nom[3] = "bs(DrivAge)"
nom[7] = "bs(BonusMalus)"

Then, the idea is simple: at stage k, we keep the k most important variables, and run a logistic regression on those k variables. Again, I should stress that the gender of the driver is not among those k variables. Then, we compute the average predicted claim frequency, for men and women.

n = nrow(subfrenchmotor)
library(splines)
idx_F = which(frenchmotor$sensitive == "Female")
idx_M = which(frenchmotor$sensitive == "Male")
metric_gender = function(k = 3){
  # keep the k most important variables (intercept only if k = 0),
  # fit a logistic regression, and compare predictions for women and men
  fm = if(k == 0) "y ~ 1" else paste("y ~", paste(nom[1:k], collapse = " + "))
  reg = glm(as.formula(fm), family = binomial, data = subfrenchmotor)
  yp = predict(reg, type = "response")
  yp_F = yp[idx_F]
  yp_M = yp[idx_M]
  sortie = c(mean(yp_F), mean(yp_M), quantile(yp_F, c(.1, .9)), quantile(yp_M, c(.1, .9)))
  names(sortie)[1:2] = c("mean_F", "mean_M")
  sortie}

Let us now compute it for all variables

N = 0:15
M = Vectorize(metric_gender)(N)

and plot it

plot(N,M[1,]*100, xlab="Number of predictive variables (without gender)", ylab=
"Average predicted claims frequency (%)", type="b", pch=19, col=COLORS[2], ylim=c(8.12,9))
lines(N, M[2,]*100, type="b", pch=15, col=COLORS[3])

Interestingly, we can clearly see that with 15 explanatory variables, even if our model is gender-blind (since gender is not in the training dataset), our model reproduces the difference we can observe in the dataset: the annual claim frequency is almost 9% for men and 8.2% for women.

Actually, it is not possible to accurately predict the gender from our 15 variables (below is the ROC curve of the logistic regression used to predict the gender)

library(ROCR)
metric_gender_2 = function(k = 3){
  # logistic regression of the gender on the k most important variables
  fm_genre = if(k == 0) '(sensitive=="Female") ~ 1' else paste('(sensitive=="Female") ~', paste(nom[1:k], collapse = " + "))
  reg = glm(as.formula(fm_genre), family = binomial, data = frenchmotor)
  pred = prediction(predict(reg, type = "response"), (frenchmotor$sensitive == "Female"))
  performance(pred, "tpr", "fpr")}
plot(metric_gender_2(15))

but still, when using 15 variables, we obtain discrimination in our portfolio, since the average predictions for men and women are significantly different (even if our models are, per se, gender-blind).

Fairness and discrimination, PhD Course, #5 Models and Data

For this fifth course, we will discuss machine learning and standard techniques used to obtain predictive models, and to assess the accuracy of those models.

GLM (possibly constrained)

Classically, we use a penalized version of least squares (this can be adapted to GLMs, by penalizing the negative log-likelihood). Because of the Karush–Kuhn–Tucker conditions, having a constraint on the parameter is equivalent to a penalized problem; a first case is a constraint on the \ell_2 norm of \boldsymbol{\beta}.

We can also consider the \ell_1 norm of \boldsymbol{\beta}.

Those two approaches can be seen as a trade-off between accuracy (the empirical risk, on the left) and the complexity of the model (on the right). And we can also consider a mixture of the two norms, as written below.
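In standard (textbook) notation, with a quadratic loss, those three penalized problems (ridge, lasso and elastic net) can be written

\widehat{\boldsymbol{\beta}}_{\text{ridge}}=\text{argmin}\left\{\sum_{i=1}^n(y_i-\boldsymbol{x}_i^\top\boldsymbol{\beta})^2+\lambda\|\boldsymbol{\beta}\|_{2}^2\right\}

\widehat{\boldsymbol{\beta}}_{\text{lasso}}=\text{argmin}\left\{\sum_{i=1}^n(y_i-\boldsymbol{x}_i^\top\boldsymbol{\beta})^2+\lambda\|\boldsymbol{\beta}\|_{1}\right\}

\widehat{\boldsymbol{\beta}}_{\text{EN}}=\text{argmin}\left\{\sum_{i=1}^n(y_i-\boldsymbol{x}_i^\top\boldsymbol{\beta})^2+\lambda\big[\alpha\|\boldsymbol{\beta}\|_{1}+(1-\alpha)\|\boldsymbol{\beta}\|_{2}^2\big]\right\}

where \lambda\geq 0 controls the strength of the penalty, and \alpha\in[0,1] the mixture of the two norms.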

As we will see, it will also be possible to consider a penalty related to fairness and discrimination measures (in-processing).

Classifier and ROC Curves

We will also recall metrics used in the context of classification, such as the ROC curve

Each point of the curve can be related to two areas related to the distributions of the scores (in the two groups), for the same threshold – namely the false positive rate and true positive rate

Based on the ROC curve, we can define the AUC, the area under the ROC curve,
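As a minimal sketch in R (assuming a vector score of predicted probabilities and a vector y of observed 0/1 outcomes, and using the ROCR package, as later in this post),

library(ROCR)
# score: predicted probabilities, y: observed 0/1 outcomes (both assumed to exist)
pred = prediction(score, y)
plot(performance(pred, "tpr", "fpr"))     # ROC curve, true vs. false positive rate
performance(pred, "auc")@y.values[[1]]    # area under the ROC curve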

But for classifiers, the important challenge is to have calibrated scores, meaning that we want the score to be interpreted as the true underlying probability.

Calibration

Well-calibration is defined as follows: \mathbb{E}[Y|\widehat{s}(\boldsymbol{X})=s]=s, for all s,

or (with different notations) \mathbb{P}[Y=1|\widehat{s}(\boldsymbol{X})=s]=s, for all s.

It is a well-known property in several applications.

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-07.png

The plot on the right is the calibration plot,

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-10.png

We can easily get that plot

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-09.png
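For instance, a minimal sketch of such a calibration plot in R (again assuming vectors score and y; binning by deciles of the score is just one possible choice) would be

# group observations by deciles of the predicted score, and compare, in each bin,
# the average predicted score with the observed frequency of y = 1
bin = cut(score, breaks = quantile(score, seq(0, 1, by = .1)), include.lowest = TRUE)
pred_bin = tapply(score, bin, mean)
obs_bin  = tapply(y, bin, mean)
plot(pred_bin, obs_bin, xlab = "Average predicted score", ylab = "Observed frequency",
     pch = 19, xlim = 0:1, ylim = 0:1)
abline(a = 0, b = 1, lty = 2)   # perfect calibration corresponds to the diagonal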

This concept is related to the question “do probabilities returned by some model represent real probabilities?” For instance, below, we have pictures generated as some sort of geodesic between two pictures, with a woman on the top left and a man on the bottom right, published in the New York Times. And below that, the “probabilities” given by https://www.picpurify.com/demo-face-gender-age.html.

We could agree that it is rather strange that the probabilities (of being a man) do not increase continuously; moreover, at the top, with extremely high confidence, the model predicts that the picture is one of a woman, and at the bottom, also with extremely high confidence, that the person is a man…

Data, observations vs. experiments

Then, after concepts and notations related to models, we will talk about data, and more specifically about the distinction between observations and experiments.

Another popular classification is the one discussed by Judea Pearl.

So we will talk about association, correlation, causal inference, and counterfactuals.

“Correlated variables” or proxies

One important issue is that, with massive data, one can easily get a (good) proxy for almost any sensitive variable.

The concept is related to comonotonicity, or perfect correlation.

But this is clearly too strong, so we will discuss dependence measures, too.

Independence properties

Recall that independence is defined as follows

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-11.png

and we can consider a weaker form, based on null-covariance

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-12.png

or null-correlation

(sidenote, this correlation measure is bounded, and those bounds are related to Hardy-Littlewood inequality and optimal transport)

An interesting measure is the maximal correlation

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-13.png

or we can consider a weaker version, without considering all possible transformations, but only a subset

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-14.png
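For reference, the maximal correlation (the Hirschfeld–Gebelein–Rényi correlation) of X and Y is usually written

r^\star(X,Y)=\sup_{f,g}\,\text{corr}\big(f(X),g(Y)\big)

where the supremum is taken over all transformations f and g with finite, non-zero variance, while the weaker version restricts f and g to some given class of functions (linear ones, for instance).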

Another important concept is the one of conditional independence

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-16.png

(the latter will be used in the context of causal graphs).

Causality

Before talking about causality, recall what non-independence means…

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-17.png

We can then construct causal graphs, or “directed acyclic graphs”

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-20.png

where nodes are the variables used in the model, including the outcome (usually at the end of the causal graph). Then we define paths

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-18.png

and the concept of d-separation

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-19.png

This concept is related to the statistical property of conditional independence

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-21.png

More precisely, we have the following Markov property on causal graphs

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-22.png

For example, for such a graphical model,

the joint distribution is \mathbb{P}[x_1,x_2,x_3,x_4]=\mathbb{P}[x_1]\times \mathbb{P}[x_2|x_1]\times \mathbb{P}[x_3|x_2]\times \mathbb{P}[x_4|x_3], and for the graphical model below

we have \mathbb{P}[x_1,x_2,x_3,x_4]=\mathbb{P}[x_1]\times \mathbb{P}[x_2]\times \mathbb{P}[x_3|x_1,x_2]\times \mathbb{P}[x_4|x_3]. Those graphs can be related to structural models (with idiosyncratic noise denoted U), since

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-23.png

Potential outcome

Another important concept is the concept of counterfactuals, and potential outcomes. In an ideal world, we would have observed the outcome in both cases, with and without the treatment

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-24.png

but in real life, it’s only one of them,

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-25.png

And the goal will be, somehow, to estimate what the non-observed outcome would have been. The classical quantities we then wish to estimate are the average treatment effect, and its conditional version, based on some covariates.

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-26.png
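With the standard potential-outcome notation, where Y(0) and Y(1) denote the two potential outcomes, those two quantities are \text{ATE}=\mathbb{E}[Y(1)-Y(0)] and \text{CATE}(\boldsymbol{x})=\mathbb{E}[Y(1)-Y(0)|\boldsymbol{X}=\boldsymbol{x}].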

This concept will actually be related to counterfactual fairness, when the “treatment” is the sensitive attribute.

Twin network representation of the counterfactual

Finally, we will consider a so-called “twin network representation”. Consider a DAG, associated with some simple structural model

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-27.png

Based on a structural model, we can get values of idiosyncratic noise component

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-28.png

Then, we use those values on the twin representation, where the treatment is no longer 0, but 1. Counterfactuals are created by using the same noises

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-29.png

The difference between the two outcomes is the treatment effect, or the disparate treatment

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-30.png

or more generally, we write

https://freakonometrics.hypotheses.org/files/2024/01/cours-slides-fairness-31.png

This is an idea used in Plecko & Meinshausen, 2019, in the context of fairness, but we will discuss this more, later on…
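To make this concrete, here is a minimal R sketch (with a small linear structural model made up for the occasion, not the example from the slides) of the abduction, action and prediction steps behind the twin network,

# illustrative structural model (all equations and coefficients are assumptions):
# S -> X -> Y, with idiosyncratic Gaussian noises U_X and U_Y
set.seed(1)
n = 1000
S = rbinom(n, size = 1, prob = .5)    # the "treatment" (here, the sensitive attribute)
U_X = rnorm(n)
U_Y = rnorm(n)
X = 1 + 2 * S + U_X                   # structural equation for X
Y = -1 + X + .5 * S + U_Y             # structural equation for Y
# abduction: recover the noises implied by the data and the structural equations
hat_U_X = X - (1 + 2 * S)
hat_U_Y = Y - (-1 + X + .5 * S)
# action and prediction on the twin network: flip the treatment, keep the same noises
S_twin = 1 - S
X_twin = 1 + 2 * S_twin + hat_U_X
Y_twin = -1 + X_twin + .5 * S_twin + hat_U_Y
# individual treatment effect, Y(1) - Y(0), i.e. the disparate treatment
ITE = ifelse(S == 1, Y - Y_twin, Y_twin - Y)
mean(ITE)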

Data Augmentation for Imbalanced Regression

Our paper Data Augmentation for Imbalanced Regression, written with Denys Pommeret and Sam Stocksieker, is now available on arXiv.

In this work, we consider the problem of imbalanced data in a regression framework when the imbalanced phenomenon concerns continuous or discrete covariates. Such a situation can lead to biases in the estimates. In this case, we propose a data augmentation algorithm that combines a weighted resampling (WR) and a data augmentation (DA) procedure. In a first step, the DA procedure permits exploring a wider support than the initial one. In a second step, the WR method drives the exogenous distribution to a target one. We discuss the choice of the DA procedure through a numerical study that illustrates the advantages of this approach. Finally, an actuarial application is studied.

Predictive probabilities, for the 100% actuaires et 100% data science day

This Thursday, I am (virtually) taking part in a workshop on predictive probabilities at the 100% actuaires et 100% data science day of the Institut des Actuaires, in France, with Nicolas Marescaux and Florence Picard.

Since the event is not offered in a “hybrid” format, I will be present through a short video, which will be played during the workshop.

I will revisit, graphically, a point I regularly make, namely the fundamental variability that exists when modeling binary events, even when the true probability is known. In a Gaussian model, if Y\sim\mathcal{N}(\mu,\sigma^2), even when the mean \mu is (perfectly) known, Y-\mu is a random variable, Y-\mu\sim\mathcal{N}(0,\sigma^2), which can have a small variance, if \sigma is small. And in a regression model, Y|X=x\sim\mathcal{N}(\mu_x,\sigma_0^2), with \sigma_0^2\leq \sigma^2. But with GLMs, for classification or Poisson models, the variance does not shrink away: if Y|X=x\sim\mathcal{B}(p_x), the variance is p_x(1-p_x), which remains non-zero whenever p_x\in(0,1).

The other point is that, for a Bernoulli distribution, p can only really be known if we have many observations; that is the law of large numbers, otherwise there is variability. In the figure below, I consider several probability levels p, and I plot, on the left, the “confidence” interval for the observed frequency when we have n=100 observations, with, at the bottom, the distribution of that frequency. For instance, if p=15\%, with a hundred observations, the frequency has a 90% chance of being between 10\% and 20\%. In other words, predicting a probability of having an accident of 15.723\% does not make much sense. In the middle, I look at the distribution of \overline{y}_n-p, and we can see that a non-negligible part of the variance remains. On the right is the relative variation (\overline{y}_n-p)/p, which behaves in the opposite way to the absolute variation: if p is small (say 2\%), we deviate little in absolute terms (\pm 2\%), but it is not impossible (even with n=100 observations) to observe twice too many 1s.
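For the curious, a short R sketch (the simulation size is arbitrary, and the 90% range is only approximate) reproducing those orders of magnitude,

n = 100
p = .15
# 90% range for the observed frequency when n = 100 and p = 15%
qbinom(c(.05, .95), size = n, prob = p) / n
# simulated distribution of the absolute and relative deviations from p
ybar = rbinom(1e5, size = n, prob = p) / n
quantile(ybar - p, c(.05, .95))
quantile((ybar - p) / p, c(.05, .95))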

Looking at \overline{y}_n-p amounts to assuming that the true probability can be known, and that it is used: \overline{y}_n is the average yearly loss, and p is the (annual) pure premium. So \overline{y}_n-p is the annual loss. But instead of using p, we can imagine pooling risks: for instance, below, if p\in[0\%;4\%] we use the average value (namely 2\%). We can see that this adds variability to the distribution of individual losses, all the more so as there are few classes.

More on this later…

DataDay 2022 MAIF

This Wednesday, I will take part in the DataDay 2022 week at MAIF, to talk about data and climate…

I will start (it is a talk for a general audience) by going back over the specific features of the different types of data, beginning with individual data vs. temporal data

In the first case, we will mostly talk (for climate applications) about spatial data

and in the second case, about time series,

We will finish with spatio-temporal data

Insurance Data Science Conference 2021 (online)

The Insurance Data Science Conference returns in 2021 for an on-line global event. The conference will run over three half-days (afternoons in Europe & Africa / mornings in the Americas). The conference brings together academics and practitioners in areas including data science, analytics, machine learning, artificial intelligence, computational statistics and software, as applied in the insurance industry. For more information, see https://insurancedatascience.org/

INF7100, statistics

The second part of my lectures on data science, as part of the INF7100 course, will be about statistics, univariate and multivariate. The outline will be the following

  • 201: From Statistics to Data Science pdf video (14:24)
  • 211: Usual Functions in Statistics (cumulative distribution function, density, histogram) pdf video (28:37)
  • 221: Statistical Indicators: Central Value (mean) pdf video (32:56)
  • 222: Statistical Indicators: Dispersion (variance, inequalities) pdf video (22:21)
  • 223: Statistical Indicators: Approximations (normal approximation) pdf video (18:42)
  • 224: Statistical Indicators: Quantiles pdf video (24:54)
  • 231: Inference (Bayesian statistics) pdf video (39:33)
  • 241: Statistical Tests (1) (tests, significance, p-value) pdf video (43:41)
  • 242: Statistical Tests (2) (errors) pdf video (16:51)
  • 261: Bivariate Statistics pdf video (25:16)
  • 271: Multivariate Statistics: Projections pdf video (29:06)
  • 272: Multivariate Statistics: Clusters pdf video (32:21)
  • 281: Networks and Graphs pdf video (32:40)
  • 291: Time Series Data pdf video (29:01)

 

INF7100, data

The first part of my lectures on data science, as part of the INF7100 course, will be about data (and the distinction between observational data and experimental data). The outline is the following

  • 111: Observation vs. Experimentation pdf video (19:48)
  • 112: Observational Data and Biases pdf video (22:39)
  • 121: Simpson’s Paradox pdf video (22:24)
  • 122: Looking for Counterfactuals pdf video (25:27)
  • 123: A/B Testing and Reinforcement pdf video (12:13)
  • 131: Uncertainty (1) pdf video (39:01)
  • 132: Uncertainty (2) pdf video (41:45)

(yes, I switched to YouTube for this course… my Vimeo account has unfortunately been suspended… and none of the videos from my previous course are visible anymore)

Testing for Covid-19 in the U.S.

For almost a month, on a daily basis, we have been working with colleagues (Romuald, Chi and Mathieu) on modeling the dynamics of the recent pandemic. I learn a lot of things discussing with them, but we keep struggling with the tests. Paul, in Montréal, helped me a little bit, but I think we still have more to do to get a better understanding. To be honest, we struggle with two very simple questions

  • how many people are tested on a daily basis?

Recently, I discovered Modelling COVID-19 exit strategies for policy makers in the United Kingdom, which is very close to what we try to do… and in that document two interesting scenarios are discussed: in the first one, “1 million ‘reliable’ daily tests are deployed” (in the U.K.), and in the second, “5 million ‘useless’ daily tests are deployed”. There are about 65 million inhabitants in the U.K., so we are talking about 1.5% of the population tested, on a daily basis, or 7.69%! It could make sense, but our question was, at some point: is that realistic? Where are we today with testing? In the U.S., https://covidtracking.com/ collects interesting data, on a daily basis, per state.

url = "https://raw.githubusercontent.com/COVID19Tracking/covid-tracking-data/master/data/states_daily_4pm_et.csv"
download.file(url,destfile="covid.csv")
base = read.csv("covid.csv")

Unfortunately, there is no information about the population. That, we can find on Wikipedia. But in that table, the state is given by its full name (and by its symbol in the previous dataset). So we also need to match the two datasets properly,

url="https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States_by_population"
download.file(url,destfile = "popUS.html")
#pas contaminé 2/3 R=3
library(XML)
tables=readHTMLTable("popUS.html")
T=tables[[1]][3:54,c("V3","V4")]
names(T)=c("state","pop")
url="https://en.wikipedia.org/wiki/List_of_U.S._state_abbreviations"
download.file(url,destfile = "nameUS.html")
tables=readHTMLTable("nameUS.html")
T2=tables[[1]][13:63,c(1,4)]
names(T2)=c("state","symbol")
T=merge(T,T2)
T$population = as.numeric(gsub(",", "", T$pop, fixed = TRUE))
names(base)[2]="symbol"
base = merge(base,T[,c("symbol","population")])

Now our dataset is fine… and we can write a function to plot the (cumulative) proportion of people tested, per state. Here, we distinguish between positive and negative tests,

drawing = function(st ="NY"){
sbase=base[base$symbol==st,c("date","positive","negative","population")]
sbase$DATE = as.Date(as.character(sbase$date),"%Y%m%d")
sbase=sbase[order(sbase$DATE),]
par(mfrow=c(1,2))
plot(sbase$DATE,(sbase$positive+sbase$negative)/sbase$population,ylab="Proportion Test (/population of state)",type="l",xlab="",col="blue",lwd=3)
lines(sbase$DATE,sbase$positive/sbase$population,col="red",lwd=2)
legend("topleft",c("negative","positive"),lwd=2,col=c("blue","red"),bty="n")
title(st)
plot(sbase$DATE,sbase$positive/(sbase$positive+sbase$negative),ylab="Ratio of positive tests",ylim=c(0,1),type="l",xlab="",col="black",lwd=3)
title(st)}

Let us start with New York

drawing("NY")

As of now, 4% of the entire population has been tested… over 6 weeks… The graph on the right is the proportion of tests that came back positive… I won’t get back to that one here today, I keep it for our work. In New Jersey, about 2.5% of the entire population has been tested, overall,

drawing("NJ")

Let us try a last one, Florida

drawing("FL")

As of today, it is 1.5% of the population, over 6 weeks. Overall, in the U.S., less than 0.1% of the population is tested on a daily basis, which is far from the 1.5% in the U.K. scenarios. Now here comes the second question,

  • what are we actually testing for?

On that one, my experience in biology is… very limited, and Paul helped me. He mentioned this morning a nice report from a lab at UC Berkeley

One of my questions was, for instance: if you test positive, and you take the test again, can you test negative? Or, in the context of our data, do we test different people? Are some people tested on a regular basis (perhaps every week)? For instance, with molecular tests (Reverse Transcription Quantitative Polymerase Chain Reaction, RT-qPCR – also called PCR, Polymerase Chain Reaction, tests) we test whether someone is currently infected (and possibly infectious), while with antibody tests (using serological immunoassays that detect viral-specific antibodies — Immunoglobulin M (IgM) and G (IgG) — also called serology tests), we test for past infection and immunity. Which is rather different…

I have no idea what we have in our database, to be honest… and for the past six weeks, I have seen a lot of databases, and most of the time, I don’t know how to interpret them, I don’t know what is measured… and it is scary. So, so far, we try to do some maths, to test dynamics by tuning parameters “the best we can” (and not estimating them). But if anyone has good references on testing, in the context of Covid-19 (for instance on the specificity and sensitivity of all those tests), I would love to hear about it!

INF7100, remotely

Given the covid-19 context, the summer session will start one week later than planned.

The INF7100 course, Initiation à la science des données et à l’intelligence artificielle (an introduction to data science and artificial intelligence), which was supposed to start on Tuesday, April 28, will start on May 5. Registered students will receive a link to connect, for a general presentation. The course will be given jointly with Marie-Jean Meurs (from the computer science department) and Jean-Hugues Roy (from the media school). I will post, in the coming weeks, information about the topics I will cover (mostly on data science).