Tag Archives: fairness

Optimal Transport for Counterfactual Estimation: A Method for Causal Inference

For those who wish to reproduce the techniques proposed in our paper, Optimal Transport for Counterfactual Estimation: A Method for Causal Inference, Ewen Gallic has put online some nice pages with the application mentioned in the paper (both univariate and bivariate, including bootstrap confidence intervals), as well as simpler examples that I use in the slides to present the method.

http://egallic.fr/Recherche/Transport_Counterfactual/

I will present this work at the Bachelier Seminar, in Paris, at the end of the week. Slides are online here.

Many problems ask a question that can be formulated as a causal one: “what would have happened if…?” For example, “would the person have had surgery if he or she had been Black?” To address this kind of question, computing an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates; a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity-score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we take the dual view: an intervention, or a change of treatment (even a purely hypothetical one, in a thought experiment, for example asking what would have happened if a person had been Black), can have an impact on the values of x. We show that optimal transport allows us to change the characteristics that are influenced by the variable whose effect we are trying to quantify. We propose a mutatis mutandis version of the CATE, which in dimension one simply means that the CATE is computed relative to a probability level, associated with the rank of x (a single covariate) in the control population, by looking for the equivalent quantile in the treated population. In higher dimension, optimal transport is needed, and an application is proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black).
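To give an idea of what the univariate construction looks like, here is a minimal sketch in Python, on made-up data (it is not the application from the paper, nor Ewen Gallic's pages): a control individual's covariate is transported to the treated population by matching empirical quantiles, T(x) = F1⁻¹(F0(x)), and the mutatis mutandis CATE compares the prediction at the transported value with the prediction at the original one.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic illustration (not the paper's data): x is a covariate whose
# distribution differs between the two groups, y the outcome of interest.
n = 5000
x0 = rng.normal(25, 4, n)                   # covariate in the control group
x1 = rng.normal(28, 5, n)                   # covariate in the treated group
y0 = 0.3 * x0 + rng.normal(0, 1, n)
y1 = 0.5 * x1 + rng.normal(0, 1, n)

# Outcome models m_0 and m_1 (here, simple linear fits, one per group)
b0, b1 = np.polyfit(x0, y0, 1), np.polyfit(x1, y1, 1)
m0 = lambda x: np.polyval(b0, x)
m1 = lambda x: np.polyval(b1, x)

def transport_1d(x, x_source, x_target):
    """Univariate optimal transport: send x to the value of the target sample
    with the same rank, i.e. T(x) = F_target^{-1}(F_source(x))."""
    u = np.mean(x_source <= x)              # empirical c.d.f. of the source
    return np.quantile(x_target, u)         # empirical quantile of the target

x_star = 24.0                               # an individual from the control group
x_t = transport_1d(x_star, x0, x1)          # its counterfactual covariate

# The ceteris paribus CATE keeps x fixed; the mutatis mutandis version
# also moves x through the transport map.
print(f"transported covariate : {x_t:.2f}")
print(f"ceteris paribus CATE  : {m1(x_star) - m0(x_star):.2f}")
print(f"mutatis mutandis CATE : {m1(x_t) - m0(x_star):.2f}")
```

In higher dimension the quantile trick no longer applies directly, which is where the optimal transport formulation of the paper comes in.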

A Fair Pricing Model via Adversarial Learning

We recently uploaded a revised version of our joint paper A Fair Pricing Model via Adversarial Learning, on ArXiv.

At the core of the insurance business lies classification between risky and non-risky insureds, actuarial fairness meaning that risky insureds should contribute more and pay a higher premium than non-risky or less-risky ones. Actuaries, therefore, use econometric or machine learning techniques to classify, but the distinction between a fair actuarial classification and ‘discrimination’ is subtle. For this reason, there is growing interest in fairness and discrimination in the actuarial community (Lindholm et al., 2022). Presumably, non-sensitive characteristics can serve as substitutes or proxies for protected attributes. For example, the color and model of a car, combined with the driver’s occupation, may lead to an undesirable gender bias in the prediction of car insurance prices. Fairness in insurance pricing is a relatively new and much-requested topic, especially in light of new laws and regulations and past issues encountered in practice (Embrechts and Wüthrich, 2022; Frees and Huang, 2021; Gao and Wüthrich, 2018). Consequently, companies and regulators are looking for new methodologies to ensure a sufficient level of fairness while maintaining an adequate accuracy of predictive models. This paper discusses the importance of adapting traditional fairness algorithms to specific real-life applications and, in particular, to insurance pricing. We claim that mitigating undesired biases with a generic fair algorithm can be counterproductive in insurance. We will show that traditional fair-ML methods, such as adversarial ones, are not currently adequate for insurance pricing. Therefore, for these purposes, we have developed a more suitable and effective framework to satisfy a fairness objective while maintaining a sufficient level of predictor accuracy. Inspired by recent approaches, Blier-Wong et al. (2021) and Wüthrich and Merz (2021), which have shown the value of autoencoders in pricing, we will show that this idea can be generalized to multiple pricing factors (geographic, car type) and that it is well adapted to a fairness context (since it allows us to debias the set of pricing components): we extend this main idea to a general framework in which a single whole pricing model is trained by generating the geographic and car pricing components needed to predict the pure premium while mitigating the unwanted bias according to the desired metric.

There are more examples in this revised version, including the case of a non-binary target and of a non-binary sensitive attribute, such as a spatial one.

Quantifying fairness and discrimination in predictive models 

In about ten days, late in the evening (Montréal time), I will attend the 16th Annual Conference of the Thailand Econometric Society (on Machine Learning for Econometrics and Related Topics), at Chiang Mai University (มหาวิทยาลัยเชียงใหม่). I will give an introductory talk, at 21:30 (with the jet lag), on quantifying fairness and discrimination in predictive models (the state-of-the-art paper I will present is online on arXiv), and the slides are now also available (I won’t be able to go to Chiang Mai, unfortunately, and will be on Zoom).

The analysis of discrimination has long interested economists and lawyers. In recent years, the computer science and machine learning literature has also taken up the subject, offering an interesting re-reading of the topic. These questions follow numerous criticisms of algorithms used to translate texts or to identify people in images. With the arrival of massive data and the use of increasingly opaque algorithms, it is not surprising to end up with discriminatory algorithms, because it has become easy to find a proxy for a sensitive variable by enriching the data indefinitely. According to Kranzberg (1986), “technology is neither good nor bad, nor is it neutral”, and therefore, “machine learning won’t give you anything like gender neutrality ‘for free’ that you didn’t explicitly ask for”, as claimed by Kearns et al. (2019). In this article, we will come back to the general context of predictive models for classification. We will present the main concepts of fairness, called group fairness, based on independence between the sensitive variable and the prediction, possibly conditioned on this or that information. We will then go further by presenting concepts of individual fairness. Finally, we will see how to correct a potential discrimination, in order to guarantee that a model is more ethical.
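As a complement, here is a small Python illustration of the group-fairness criteria mentioned above, on simulated predictions (the data, variable names and 0.5 threshold are made up for the example): demographic parity asks for independence between the prediction and the sensitive attribute, while equal opportunity and equalized odds condition on the true outcome.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: s is the sensitive attribute, y the true label,
# score a model's predicted probability (all simulated here).
n = 10_000
s = rng.integers(0, 2, n)
y = rng.binomial(1, 0.3 + 0.1 * s)
score = np.clip(0.3 * y + 0.1 * s + rng.normal(0.3, 0.15, n), 0, 1)
y_hat = (score > 0.5).astype(int)

def positive_rate(mask):
    return y_hat[mask].mean()

# Demographic parity: P(Y_hat = 1 | S = s) should not depend on s
dp_gap = positive_rate(s == 1) - positive_rate(s == 0)

# Equal opportunity: true positive rates should match across groups
tpr = [positive_rate((s == g) & (y == 1)) for g in (0, 1)]

# Equalized odds additionally requires equal false positive rates
fpr = [positive_rate((s == g) & (y == 0)) for g in (0, 1)]

print(f"demographic parity gap : {dp_gap:+.3f}")
print(f"TPR by group           : {tpr[0]:.3f} vs {tpr[1]:.3f}")
print(f"FPR by group           : {fpr[0]:.3f} vs {fpr[1]:.3f}")
```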

Montréal AI Symposium 2022

In about ten days (Saturday afternoon), I will be presenting a poster on fairness, discrimination and insurance at the Montréal AI Symposium, based on our joint paper The Fairness of Machine Learning in Insurance: New Rags for an Old Man?, written with Laurence Barry. Since the paper was quite literary, I used material from the document Insurance: Discrimination, Biases & Fairness to get a more visual poster. Additional information will come up during the discussion…

This is the poster used at the conference

What is the future of predictive probabilities in insurance?

This post was written with Laurence Barry and Ewen Gallic, in French, in November 2019 (see hal-02350006)

Insurance policies are classic examples of random contracts. This forces insurers to regularly quantify this uncertainty and to calculate probabilities in order to propose “fair” premiums for the commitments they are going to make. Isn’t it time to question this practice, at a time when artificial intelligence is exploding, offering predictive algorithms of unprecedented precision? At a time when big data / big brother could mean the disappearance of uncertainty itself?
Continue reading What is the future of predictive probabilities in insurance?

A fair pricing model via adversarial learning

Nice review of our paper, with Vincent Grari, on montrealethics.ai.

Sacrificing predictive performance is often viewed as an unacceptable option in machine learning. However, we note that satisfying a fairness objective can reduce predictive performance too much, especially with generic fair algorithms. Therefore, we have developed a more suitable and practical framework, using autoencoder techniques.


Talk at University of Illinois Urbana-Champaign

This Friday is our semester break in Montréal, so I will be giving a talk at the University of Illinois Urbana-Champaign on fairness and discrimination in actuarial pricing. As mentioned in the abstract, the talk will be based on two recent papers, The Fairness of Machine Learning in Insurance: New Rags for an Old Man? and A fair pricing model via adversarial learning.

Slides are now online.

A fair pricing model via adversarial learning

With Vincent Grari, Sylvain Lamprier and Marcin Detyniecki, we recently uploaded our paper A Fair Pricing Model via Adversarial Learning on ArXiv.

At the core of the insurance business lies classification between risky and non-risky insureds, actuarial fairness meaning that risky insureds should contribute more and pay a higher premium than non-risky or less-risky ones. Actuaries, therefore, use econometric or machine learning techniques to classify, but the distinction between a fair actuarial classification and “discrimination” is subtle. For this reason, there is growing interest in fairness and discrimination in the actuarial community (Lindholm, Richman, Tsanakas, and Wüthrich, 2022). Presumably, non-sensitive characteristics can serve as substitutes or proxies for protected attributes. For example, the color and model of a car, combined with the driver’s occupation, may lead to an undesirable gender bias in the prediction of car insurance prices. Surprisingly, we will show (1) that debiasing the predictor alone may be insufficient to maintain adequate accuracy. Indeed, the traditional pricing model is currently built in a two-stage structure that considers many potentially biased components, such as car or geographic risks. We will show that this traditional structure has significant limitations in achieving fairness. For this reason, we have developed a novel pricing model approach. Recently, some approaches (Blier-Wong, Cossette, Lamontagne, and Marceau, 2021; Wüthrich and Merz, 2021) have shown the value of autoencoders in pricing. In this paper, we will show that (2) this can be generalized to multiple pricing factors (geographic, car type), and (3) that it is perfectly adapted to a fairness context (since it allows us to debias the set of pricing components): we extend this main idea to a general framework in which a single whole pricing model is trained by generating the geographic and car pricing components needed to predict the pure premium while mitigating the unwanted bias according to the desired metric.
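The framework of the paper relies on autoencoded geographic and car components inside a single pricing model, which the sketch below does not reproduce. It only illustrates, in Python on simulated data, the generic adversarial idea it builds on: a predictor is trained against an adversary that tries to recover the sensitive attribute from the prediction, with a penalty weight (called lam here, a made-up name) tuning the accuracy/fairness trade-off.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Toy data: two covariates, one correlated with the binary sensitive
# attribute s; y is the target (say, a claim indicator).
n = 20_000
s = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(0, 1, n),
                     rng.normal(0, 1, n) + 1.5 * s,
                     np.ones(n)])
y = rng.binomial(1, sigmoid(0.8 * X[:, 0] + 0.8 * X[:, 1] - 1.0))

w = np.zeros(3)            # predictor weights (logistic regression)
a, b = 0.0, 0.0            # adversary: tries to recover s from the prediction
lam, lr, lr_adv = 2.0, 0.1, 0.5

for _ in range(2000):
    p = sigmoid(X @ w)                     # predicted probability of y
    q = sigmoid(a * p + b)                 # adversary's guess of s, from p

    # Adversary step: gradient descent on BCE(q, s)
    a -= lr_adv * np.mean((q - s) * p)
    b -= lr_adv * np.mean(q - s)

    # Predictor step: minimize BCE(p, y) - lam * BCE(q, s),
    # i.e. fit y while making s hard to recover from the prediction
    grad_logit = (p - y) - lam * (q - s) * a * p * (1 - p)
    w -= lr * (X.T @ grad_logit) / n

p = sigmoid(X @ w)
print(f"accuracy on y         : {np.mean((p > 0.5) == y):.3f}")
print(f"mean prediction, s = 0: {p[s == 0].mean():.3f}")
print(f"mean prediction, s = 1: {p[s == 1].mean():.3f}")
```

Setting lam to zero recovers a plain logistic fit; increasing it trades predictive accuracy for independence between the prediction and s, which is the tension the paper addresses with its autoencoder-based pricing components.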

Insurance, biases, discrimination and fairness, at IVADO

This Friday, I will give a talk at the IVADO working group, invited by Dany Plourde. The slides are now online. As an introduction, here is a little anecdote

The talk will be fairly standard, with a quick introduction on insurance, before coming back to three key notions: discrimination, biases and fairness measures

Since this is the first time I present in Québec, I will go over a few market statistics

and a comparison of states in the United States and provinces in Canada, to characterize possible forms of discrimination

On biases, I will come back to two of my favourites, Simpson’s paradox and ecological inference (a small numerical illustration of the former is sketched below)
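As a reminder of what Simpson’s paradox looks like, here is a tiny numerical example in Python (the numbers are invented and are not those used in the talk):

```python
# Simpson's paradox on made-up admission counts: group A has the higher
# acceptance rate in each department, yet group B has the higher rate once
# the departments are pooled, because the two groups mostly apply to
# departments with very different acceptance rates.
#                       (accepted, applied)
easy_dept = {"A": (9, 10),   "B": (80, 100)}   # A: 90 %, B: 80 %
hard_dept = {"A": (30, 100), "B": (2, 10)}     # A: 30 %, B: 20 %

for g in ("A", "B"):
    accepted = easy_dept[g][0] + hard_dept[g][0]
    applied = easy_dept[g][1] + hard_dept[g][1]
    print(f"{g}: pooled acceptance rate = {accepted / applied:.3f}")
# A: 39/110 = 0.355, B: 82/110 = 0.745 -> the within-department ordering is reversed
```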

I will also discuss Goodhart’s law and feedback bias

We will finish with a few definitions of fairness