Category Archives: Seminar

“Scope and limits of artificial intelligence” at the SCOR foundation monthly webinar

This morning, I will give a talk on “scope and limits of artificial intelligence” at the SCOR Foundation monthly webinar. As discussed previously, we currently have ongoing research on discrimination and fairness funded by the foundation (newsletter #1 is online).

Insurance (and further motivations)

Since we will talk about fairness, I will start with a couple of motivations. The first one is about COMPAS, the recidivism risk-scoring algorithm used in some US courts.

Interestingly, we have the data to analyse that one. In the original analysis, conditional on not re-offending, the proportions of individuals wrongly classified as high risk in the two protected groups are significantly different, so the algorithm is racist.

The answer was that, actually, conditional on being classified as high risk, the probabilities of re-offending in the two protected groups are not significantly different, so the algorithm is not racist.

So clearly, we can already see that it will not be so easy, since, using the same data and the same models, two different conclusions can be obtained.

We will also discuss legal aspects.

This idea of a “determining actuarial factor” has been removed in Europe, but we can still find it in Québec.

I can also mention some recent projects, in Colorado, where insurers are asked to predict race and ethnicity (that specific topic is on our agenda for the summer).

And finally, I should stress that discrimination has little to do with the intention of the statistician. This is the idea of indirect discrimination.

I should also mention “redlining”. About a century ago, in the US, we started to see maps, created by the HOLC (Home Owners’ Loan Corporation, based on City Survey Files, 1935-1940). Those maps contained “red” areas and “green” areas. Bankers were supposed to avoid the red areas, because they were considered too risky.

As a side note, we nowadays see some blue-lining, related to climate risks.

“Blue-lining,” from the consumer’s perspective, is when banks or mortgage lenders draw lines of risk around certain streets or neighborhoods, often without clear disclosure.

Finally, I want to recall that algorithms simply tend to reproduce what can be observed in the data. If there is a difference between men and women, they will reproduce it.

A bit more on insurance

I should also stress an important problem (related to a paper we wrote, in French, a few years ago). Classically, when modeling a categorical variable, such as a binary variable y\in\{0,1\}, practitioners mostly care about predicting the right category. On the left, we have pictures of cats and dogs to train a model, and then we try it on a new picture, which is either a cat or a dog. Somehow, there is a ground truth, and it is possible to see whether we are right or wrong. The same holds if we want to detect a disease on medical images. Now, let us move to the right. In the middle, we have a model that predicts whether it will rain, or not. But here, perhaps, what we actually care about is the probability of rain. On the right, we have the actuarial problem of modeling claims frequencies. We do not want to predict who will claim a loss; we want a good estimator of the probability of claiming a loss. The challenge, clearly, is that we cannot observe that probability. We cannot observe the latent risk factor. We only observe whether people had an accident or not. But some people with a very small probability can still claim a loss, and very bad drivers can actually be very lucky, and have no accident in a given year.

Again, in insurance, we care more about the score, i.e. the estimated probability, than about the predicted class \widehat{y}. So we can slightly modify the standard fairness definitions, so that they are based not on the predicted class \widehat{y}, but on the score m(\boldsymbol{x},s). As we will discuss, there are usually three general definitions of so-called “group fairness”.
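As a reminder (this is the standard formalization from the fairness literature, written with the notations above, and not taken from the slides), the three criteria can be stated as (conditional) independence properties of the score:

- independence (demographic parity): m(\boldsymbol{X},S)\perp S
- separation (equalized odds): m(\boldsymbol{X},S)\perp S\mid Y
- sufficiency (calibration): Y\perp S\mid m(\boldsymbol{X},S)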

Quantifying unfairness with optimal transport

Let us start with demographic parity. A weak version is that, on average, scores in the two groups should be identical (or close). An alternative is the strong version, asking for equality in distribution: for any set \mathcal{I}\subset[0,1], the probability that the score is in \mathcal{I} (e.g. between 40% and 60%) should be the same in the two groups.

Mathematically, we need a distance between the distributions of scores in the two groups. A popular choice is the Wasserstein distance, which is related to optimal transport.
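As a small illustration (a minimal sketch on simulated scores, using scipy’s one-dimensional Wasserstein distance; the score distributions below are made up, and are not the ones from the talk):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(42)

# hypothetical scores m(x, s) in [0, 1], for two protected groups A and B
scores_A = rng.beta(2, 5, size=1000)   # group A: scores concentrated on lower values
scores_B = rng.beta(3, 4, size=1000)   # group B: slightly higher scores

# weak demographic parity: compare averages
print("difference in means:", scores_B.mean() - scores_A.mean())

# strong demographic parity: compare the two distributions
# (in dimension one, the Wasserstein distance averages the distance between matched quantiles)
print("Wasserstein distance:", wasserstein_distance(scores_A, scores_B))
```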

The empirical version is perhaps easier to understand, and the mapping is based on a matching of individuals.

As a cultural side note, a couple of slides to explain why this has to do with “optimal transport”, going back to Monge’s (1781) problem. It is all about transporting sand, grain by grain, from the hole to the pile. Below, we have a (purely) random transport, which is not efficient at all…

And then the optimal version (for a strictly convex cost function): the leftmost grain in the hole goes to the leftmost part of the pile, etc.

Mitigation

For mitigation (once we have observed that there is discrimination, as discussed previously), heuristically, we want the fair score to be somewhere in between the distributions of the two subgroups.

Being “in between” can be interpreted locally: for someone in group A, the fair score should be a weighted average (the weights are related to the proportions of the two groups) of the prediction obtained as someone in group A, and some sort of counterfactual in the other group, namely the prediction that person would have obtained if she had been in group B, at the same probability level.

For the other group, it is the opposite.
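Here is a minimal sketch of that construction, in dimension one, with empirical quantiles (the simulated scores and the function name are mine, for illustration only):

```python
import numpy as np

def barycenter_scores(scores, group):
    # For an individual in group g with score m, the "fair" score is the weighted
    # average (weights = group proportions) of m and its counterfactual in the other
    # group, i.e. the score at the same probability level in the other distribution.
    scores, group = np.asarray(scores, dtype=float), np.asarray(group)
    labels = np.unique(group)
    assert len(labels) == 2, "sketch written for a binary sensitive attribute"
    props = {g: np.mean(group == g) for g in labels}
    fair = np.empty_like(scores)
    for g in labels:
        other = labels[labels != g][0]
        m_g = scores[group == g]
        ranks = np.argsort(np.argsort(m_g)) / (len(m_g) - 1)         # empirical probability levels
        counterfactual = np.quantile(scores[group == other], ranks)  # same level, other group
        fair[group == g] = props[g] * m_g + props[other] * counterfactual
    return fair

# toy example: group "A" has lower scores, on average, than group "B"
rng = np.random.default_rng(1)
s = np.repeat(["A", "B"], 500)
m = np.where(s == "A", rng.beta(2, 5, 1000), rng.beta(3, 4, 1000))
m_fair = barycenter_scores(m, s)
print([round(m[s == g].mean(), 3) for g in ("A", "B")])
print([round(m_fair[s == g].mean(), 3) for g in ("A", "B")])
```

After mitigation, the distribution of the fair scores is the same in the two groups (both are mapped to the barycenter distribution).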

Beyond demographic parity

If we get back to our COMPAS example, demographic parity, in the standard classification-based definition, would be translated as \mathbb{P}[\widehat{y}=1\mid s=A]=\mathbb{P}[\widehat{y}=1\mid s=B], i.e. equal proportions of individuals classified as high risk in the two groups.

If we get back to the original motivations we gave, they had nothing to do with demographic parity: the first slide had to do with separation, or equalized odds, while the second one had to do with sufficiency, or calibration.

More generally, if we consider a weak version of the independence criteria, we have equality of moments, within each protected subgroup.
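In formulas (again, the standard weak versions, with the notations used above), for the two protected subgroups A and B:

- independence (weak): \mathbb{E}[m(\boldsymbol{X},S)\mid S=A]=\mathbb{E}[m(\boldsymbol{X},S)\mid S=B]
- separation (weak): \mathbb{E}[m(\boldsymbol{X},S)\mid Y=y, S=A]=\mathbb{E}[m(\boldsymbol{X},S)\mid Y=y, S=B], for y\in\{0,1\}
- sufficiency (weak): \mathbb{E}[Y\mid m(\boldsymbol{X},S)=p, S=A]=\mathbb{E}[Y\mid m(\boldsymbol{X},S)=p, S=B], for all p\in[0,1]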

Let us say a bit more about calibration. Calibration is deeply related to the interpretation of the “probabilities” returned by models as “real probabilities”. In machine learning, it is hard to define properly what those “probabilities” are.

Calibration is related to the following idea, discussed above: if we consider all cases where the predicted probability was 40% (or, say, close to 40%), then the proportion of 1’s should be close to 40%.
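A minimal sketch of that check, on simulated data (the deliberately distorted score below is only there to make the miscalibration visible):

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate "true" probabilities and the corresponding binary outcomes
p_true = rng.uniform(0, 1, 10_000)
y = rng.binomial(1, p_true)

# a deliberately miscalibrated score (pushed towards 0 and 1)
score = p_true**2 / (p_true**2 + (1 - p_true)**2)

# within each score bin, compare the average predicted probability with the proportion of 1's
bins = np.linspace(0, 1, 11)
labels = np.digitize(score, bins[1:-1])
for b in range(10):
    mask = labels == b
    print(f"predicted ~ {score[mask].mean():.2f}, observed {y[mask].mean():.2f}")
```

For a well-calibrated score, the two columns would match; here they do not.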

To conclude that digression, I can mention the following example, highlighting why we should be concerned about the probabilities returned by machine learning algorithms. Consider some pictures, generated by some algorithm, and more precisely, a flow of pictures, from a woman to a man.

Below, we can see the probabilities given by some online application, which returns the probability of being a woman, given a picture. Can’t we agree that it is surprising that those probabilities (of being a woman) do not decrease continuously, from the picture in the top left corner to the one in the bottom right corner?

Finally, I can also mention “individual fairness”, or “counterfactual fairness”. Here also, optimal transport can be used, to quantify counterfactual unfairness. But I will not dwell on this here.

Finally, an opening for next year’s agenda, with interpretability. Interpretability is a very important issue in actuarial science, which is not as objective as people might think, despite the popular

let the data speak for itself

In insurance, interpretation is very important, probably more important than model assumptions.

Interpretation becomes a key concept when dealing with multiple sensitive attributes.

To conclude, just a final reminder that dealing with mitigation is a complex philosophical problem….

Tomorrow, we will discuss this further at our workshop, in Québec City.

Brief talk on non-diversification of extreme risks, for France Stratégie

Tomorrow, I will give a (brief) talk at our working group, at France Stratégie, on (non-)diversification of extreme risks. Slides are online, and the results are related to recent papers by Paul Embrechts and Ruodu Wang. More precisely, here are some references.

But first, before discussing large risks, I need to go back (quickly) to the Pareto distribution.

To visualize Pareto tails, one can consider the Pareto plot. If the points are on a straight line with (negative) slope -\alpha, then the observations are Pareto distributed, with tail index precisely \alpha. Depending on the slope (compared with -1), i.e. on \alpha (compared with 1), risks have either finite or infinite mean.
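A minimal sketch of such a plot, on simulated data (the tail index \alpha=1.5 below is arbitrary, and only used for the simulation):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(123)
alpha = 1.5                               # tail index (finite mean, since alpha > 1)
x = np.sort(rng.pareto(alpha, 1000) + 1)  # Pareto samples, survival function x^(-alpha), x >= 1

# empirical survival probabilities
surv = 1 - np.arange(1, len(x) + 1) / (len(x) + 1)

plt.scatter(np.log(x), np.log(surv), s=5)
plt.xlabel("log(x)")
plt.ylabel("log P(X > x)")
# the points should be close to a straight line with slope -alpha
plt.show()
```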

Infinite mean is actually not that common. It is hard to visualize what it means, because for any (finite) n, the empirical average \displaystyle{\overline{x}=\frac{1}{n}\sum_{i=1}^nx_i} always exists. To visualize what’s going on, we can plot the ratio of the maximum \max\{x_i\} over the sum. That could be related to the concept of “top share” in inequality studies.

On the left, risks with finite variance (and, of course, finite mean). In the middle, infinite variance but finite mean. After a while, it becomes quite rare for the maximum to account for more than 1% of the total sum. With infinite mean, on the right (not too far from the limit, since \alpha is here 0.95, and finite mean means that \alpha exceeds one), the maximum keeps a non-negligible share of the total sum.
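A minimal sketch of that visualization, on simulated Pareto samples (the three tail indices below are mine, chosen to reproduce the three regimes):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
n = 10_000

for alpha in (2.5, 1.5, 0.95):        # finite variance / infinite variance / infinite mean
    x = rng.pareto(alpha, n) + 1      # Pareto samples, survival function x^(-alpha)
    ratio = np.maximum.accumulate(x) / np.cumsum(x)
    plt.plot(ratio, label=f"alpha = {alpha}")

plt.xlabel("number of observations")
plt.ylabel("max / sum")
plt.legend()
# with a finite mean, the ratio goes to 0; with an infinite mean, it does not vanish
plt.show()
```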

Now, if we get back to risks and insurance, let us recall some basics on stochastic dominance.
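As a reminder (the standard definition, not copied from the slide): X is dominated by Y in the sense of first-order stochastic dominance, X\preceq_{st}Y, if \mathbb{P}[X>t]\le\mathbb{P}[Y>t] for all t, or, equivalently, if \mathbb{E}[f(X)]\le\mathbb{E}[f(Y)] for every non-decreasing function f (whenever the expectations exist). In words, Y is the larger (riskier) position, whatever the threshold considered.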

Then we have the following results (this is actually the most important slide).

I did include a slide with the mathematical proof (which is actually quite lovely, and straightforward).

Online Seminar Finance & Modeling, Centre d’Économie de la Sorbonne

In a week, I will give a talk at the Modélisation Financière seminar (“Online Seminar Finance & Modeling” according to the invitation) on Using optimal transport to mitigate unfair predictions. Slides are now online.

The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of said models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination or at least mitigation is desirable. With the shift from more traditional models to machine-learning based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In this talk, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach we employ it on real data and discuss its implications. The talk will be based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155).

Talk at the statistics seminar (StatQAM)

Tomorrow, Ewen Gallic will present some recent work at the StatQAM statistics seminar, on calibration, with Agathe Fernandes Machado, François Hu, and Emmanuel Flachaire. It will be substantially based on our recent paper From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration.

The assessment of binary classifier performance traditionally centers on discriminative ability using metrics, such as accuracy. However, these metrics often disregard the model’s inherent uncertainty, especially when dealing with sensitive decision-making domains, such as finance or healthcare. Given that model-predicted scores are commonly seen as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario using Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.

To illustrate, consider predictions of the gender of the person in the picture, with the associated probabilities (confidence), obtained from https://www.picpurify.com/demo-face-gender-age.html, applied to fake pictures from https://www.nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html.

TD General Insurance Pricing Seminar

Tomorrow, I will give a talk at the TD General Insurance Pricing Seminar, on fairness and ethics in insurance. Slides are now online.

After a very general (and long) introduction, to motivate our recent work on discrimination, I will try to explain how to quantify possible discrimination (with respect to a binary sensitive attribute), using the Wasserstein distance and optimal transport,

and the use of Wasserstein barycenters to mitigate discrimination.

I will also mention our workshop in May, at Laval University.

Talk at the Groupe de travail ARC (Actuariat et Risques Contemporains), in Paris

This Friday, I will give a talk in Paris, on using optimal transport to mitigate unfair predictions, at the ARC Seminar.

The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of said models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination or at least mitigation is desirable. With the shift from more traditional models to machine-learning based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In this talk, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach we employ it on real data and discuss its implications. The talk will be based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155).

Slides are now available online.

Talk on Fairness and Discrimination in Insurance at the Thelem-ILB Chaire

This morning, I will be giving a talk for the Thelem-ILB Chaire (Thelem, historically a mutual fire insurance company in the Loiret department, founded in 1820, one of France’s oldest companies), on fairness and discrimination in insurance, and more specifically on counterfactual fairness and causal graphs. It is based on recent work with Olivier Côté, our PhD student at Laval (Québec), co-supervised with Marie-Pier Côté.

Slides are available online. I can mention here that we obtained a grant from the Thelem-ILB Chaire to sponsor a trip to Europe for Olivier, to attend the Insurance Data Science conference in London, and the 26th International Congress on Insurance: Mathematics and Economics in Edinburgh.

Round table on drought risk in France, at ENS Ker Lann

On Thursday, December 7, I will take part (remotely) in a round table on drought risk in France, at the École Normale Supérieure de Ker Lann, entitled “enjeux actuels et futurs des sécheresses” (current and future challenges of droughts). Since I was asked to present our recent work, I have prepared a few quick slides…

[update] the conference is now online on YouTube.

Econometrics Seminars at Université de Montréal

This Thursday, I will present at the CIREQ Séminaire Marcel-Dagenais en Économétrie at Université de Montréal, our paper Optimal Transport for Counterfactual Estimation: A Method for Causal Inference, written with Emmanuel Flachaire and Ewen Gallic.

Many problems ask a question that can be formulated as a causal question: “what would have happened if…?” For example, “would the person have had surgery if he or she had been Black?” To address this kind of questions, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we will have the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black) can have an impact on the values of x. We will see here that optimal transport allows us to change certain characteristics that are influenced by the variable we are trying to quantify the effect of. We propose here a mutatis mutandis version of the CATE, which will be done simply in dimension one by saying that the CATE must be computed relative to a level of probability, associated to the proportion of x (a single covariate) in the control population, and by looking for the equivalent quantile in the test population. In higher dimension, it will be necessary to go through transport, and an application will be proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black).
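In dimension one, the quantile-matching idea mentioned at the end of the abstract can be sketched as follows (simulated data; the two distributions and the function name are mine, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

# a single covariate x, observed in a "control" group and in a "treated" group
x_control = rng.normal(25, 4, 2000)
x_treated = rng.normal(27, 5, 2000)

def counterfactual(x, source, target):
    # map x from the source distribution to the target one, at the same probability level
    u = np.mean(source <= x)        # empirical probability level of x in the source group
    return np.quantile(target, u)   # corresponding quantile in the target group

x0 = 30.0
print(x0, "->", round(counterfactual(x0, x_control, x_treated), 2))
```

In higher dimension, this quantile matching is replaced by an optimal transport map between the two distributions, as mentioned in the abstract.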

Slides are available online. I will try to mention additional papers published this year, such as Fairness in Multi-Task Learning via Wasserstein Barycenters, Mitigating Discrimination in Insurance with Wasserstein Barycenters or more recently A Sequentially Fair Mechanism for Multiple Sensitive Attributes.

Presentation on fairness and discrimination in insurance, for Intact

Tomorrow morning, with Olivier and Marie-Pier Côté, we will be at the insurer Intact to talk about fairness and discrimination. Olivier will present his recent work on the use of causal models to build “fair” models in insurance. The paper (a fair price to pay: exploiting directed acyclic graphs for fairness in insurance) will be available soon!

A Fair price to pay: exploiting directed acyclic graphs for fairness in insurance

Tonight (Montréal time), Marie-Pier Côté will give a talk on “a fair price to pay: exploiting directed acyclic graphs for fairness in insurance”, based on recent joint work with our PhD student, Olivier Côté, in Melbourne, Australia.

Many jurisdictions have laws or guidelines stipulating that insurance companies must not discriminate on some specified policyholder characteristics. Omission of the prohibited variables from the models removes direct discrimination, but does not prevent proxy discrimination — a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of allowed covariates. In the actuarial literature, there remains some confusion on the definition of indirect discrimination: this impedes the understanding of the goals of each fairness methodology and their comparison. In the causal inference literature, many tools, such as directed acyclic graphs (DAGs), help uncover various types of biases. A DAG describes the causal relationships between variables of interest and has clear dependence implications. We exploit this tool for fairness to formally define direct and indirect discrimination, to discuss potential sources of bias, and to understand the properties of different fairness methodologies. Four families of fair scores (best-estimate, unaware, aware and corrective) are placed in the DAG representing the insurance pricing problem. This allows us to study their behaviour in terms of direct and indirect discrimination. A comprehensive pedagogical example illustrates our findings.

More to come soon…

Talk at the ESSEC Risk Seminar

Thursday, I will be at La Défense, in Paris (France), to give a talk at the ESSEC Risk Seminar, entitled Causal Inference and Counterfactuals with Optimal Transport, with Applications in Fairness and Discrimination. Slides are now available, and the talk will be based on some recent papers, starting with Mitigating Discrimination in Insurance with Wasserstein Barycenters (presented last weekend at BIAS 2023), but also Fairness in Multi-Task Learning via Wasserstein Barycenters, and A Sequentially Fair Mechanism for Multiple Sensitive Attributes. There is also the textbook that should appear before the winter.