Insurance, Biases, Discrimination and Fairness is officially announced for February 2024… (ISBN 978-3-031-49782-7)
Since I will use it for teaching in January and February, we will rely on v2, mentioned in a previous post (pdf available on my blog).
This Thursday, I will give a talk on fairness and ethics at the TD Insurance IA/ML annual conference. Slides are available (it is an overview of Insurance, Biases, Discrimination and Fairness v2, emphasizing the use of optimal transport techniques).
This Thursday, I will present our paper Optimal Transport for Counterfactual Estimation: A Method for Causal Inference, written with Emmanuel Flachaire and Ewen Gallic, at the CIREQ Séminaire Marcel-Dagenais en Économétrie at Université de Montréal.
Many problems ask a question that can be formulated as a causal question: “what would have happened if…?” For example, “would the person have had surgery if he or she had been Black?” To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we take the dual view: performing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black), can have an impact on the values of x. We will see that optimal transport allows us to change certain characteristics that are influenced by the variable whose effect we are trying to quantify. We propose a mutatis mutandis version of the CATE: in dimension one, this simply means that the CATE is computed at a given probability level, associated with the rank of x (a single covariate) in the control population, and by looking for the corresponding quantile in the treated population. In higher dimensions, it is necessary to go through (multivariate) optimal transport, and an application is proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black).
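To make the one-dimensional case concrete, here is a minimal sketch with simulated data (the two samples and the function transport_1d are purely illustrative, not taken from the paper): the mutatis mutandis counterfactual of a covariate value is obtained by quantile matching, i.e. by keeping the probability level fixed and reading off the corresponding quantile in the other population.

```r
# Minimal sketch of univariate optimal transport by quantile matching
# (simulated data, for illustration only).
set.seed(1)
x0 <- rnorm(1000, mean = 60, sd = 10)   # covariate in the control group
x1 <- rnorm(1000, mean = 55, sd = 12)   # covariate in the treated group

# Counterfactual value of x: same rank (probability level), matching quantile,
# i.e. x* = F1^{-1}(F0(x)).
transport_1d <- function(x, from, to) {
  u <- ecdf(from)(x)                      # probability level of x in the 'from' group
  quantile(to, probs = u, names = FALSE)  # corresponding quantile in the 'to' group
}

transport_1d(62, from = x0, to = x1)
```

In higher dimensions, ranks are no longer well defined, which is why a multivariate optimal transport map is needed.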
Slides are available online. I will try to mention additional papers published this year, such as Fairness in Multi-Task Learning via Wasserstein Barycenters, Mitigating Discrimination in Insurance with Wasserstein Barycenters or more recently A Sequentially Fair Mechanism for Multiple Sensitive Attributes.
Tomorrow morning, with Olivier and Marie-Pier Côté, we will be at the insurer Intact to talk about fairness and discrimination. Olivier will present his recent work on using causal models to build “fair” models in insurance. The paper (a fair price to pay: exploiting directed acyclic graphs for fairness in insurance) will be available soon!
In early 2010, Nicolas Sarkozy, followed by Jérôme Cahuzac, went “to war against fraudsters”. In March 2020, France (through the voice of its president) entered a “health war” against a coronavirus. And more recently, the sociologist Dominique Méda took up an expression of the economist Christian Gollier, who opened his book with the sentence “in this book, I express my hopes and my doubts about the possibility of winning the global war against climate disruption”. Invoking war probably helps to make an impression, but when we are at war against everything, one may wonder whether the association still means anything.
On May 16, 2024, with Marie-Pier Côté, we are organizing the second workshop on fairness and discrimination in insurance, at Université Laval. More information in the coming months.
On Monday, December 18th, 2023, we are organizing, at UQAM, a workshop on “Networks, Games and Risk”.
Decentralized risk-sharing markets are markets for risk exchange in which a pool of individuals agree to mutually insure each other, without recourse to a centralized insurance provider. Some important problems to examine in these markets are the following:
Examining these problems requires an interdisciplinary approach, drawing from economic theory, insurance and actuarial science, game theory, and related fields of application. The aim of this workshop is to bring together researchers from various fields to discuss open problems in the theory of decentralized risk-sharing along networks, as well as potential interdisciplinary approaches to tackle these problems.
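As a toy illustration of the mutual risk-sharing idea above (simulated losses; the uniform and proportional rules below are standard textbook allocations, not mechanisms discussed at the workshop), here is a minimal sketch of a pool in which participants share the total realized loss without a centralized insurer.

```r
# A toy risk-sharing pool (illustrative only): 5 participants face random losses
# and agree to share the pooled loss, without a centralized insurance provider.
set.seed(42)
expected <- c(50, 75, 100, 125, 150)      # expected losses of the participants
losses   <- rexp(5, rate = 1 / expected)  # realized losses

# Uniform rule: everyone pays the same share of the pooled loss.
uniform_share <- rep(sum(losses) / 5, 5)

# Proportional rule: shares proportional to expected losses.
prop_share <- sum(losses) * expected / sum(expected)

round(rbind(losses, uniform_share, prop_share), 2)
```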
Tonight (Montréal time), in Melbourne, Australia, Marie-Pier Côté will give a talk on “a fair price to pay: exploiting directed acyclic graphs for fairness in insurance”, based on recent joint work with our PhD student, Olivier Côté.
Many jurisdictions have laws or guidelines stipulating that insurance companies must not discriminate on some specified policyholder characteristics. Omission of the prohibited variables from the models removes direct discrimination, but does not prevent proxy discrimination — a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of allowed covariates. In the actuarial literature, there remains some confusion on the definition of indirect discrimination: this impedes the understanding of the goals of each fairness methodology and their comparison. In the causal inference literature, many tools, such as directed acyclic graphs (DAGs), help uncover various types of biases. A DAG describes the causal relationships between variables of interest and has clear dependence implications. We exploit this tool for fairness to formally define direct and indirect discrimination, to discuss potential sources of bias, and to understand the properties of different fairness methodologies. Four families of fair scores (best-estimate, unaware, aware and corrective) are placed in the DAG representing the insurance pricing problem. This allows us to study their behaviour in terms of direct and indirect discrimination. A comprehensive pedagogical example illustrates our findings.
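To illustrate the proxy discrimination mentioned in the abstract, here is a small simulated sketch (not the paper's pedagogical example): an “unaware” model that omits the sensitive attribute can still produce predictions that differ across groups when an allowed covariate is correlated with that attribute.

```r
# Toy illustration of proxy (indirect) discrimination, with simulated data.
set.seed(2)
n <- 10000
S <- rbinom(n, 1, 0.5)                      # sensitive attribute (prohibited)
X <- rnorm(n, mean = ifelse(S == 1, 1, 0))  # allowed covariate, correlated with S
Y <- rpois(n, lambda = exp(-2 + 0.5 * X))   # claim counts, driven by X

unaware <- glm(Y ~ X, family = poisson)     # 'unaware' score: S is not used
tapply(predict(unaware, type = "response"), S, mean)  # average prediction, by group
```

Even though S never enters the model, the average predicted claim frequency differs between the two groups, because X acts as a proxy for S.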
More to come soon…
In the Summer of 2022, my report Insurance, Biases, Discrimination and Fairness (v1) was officially uploaded on the website of the Institut Louis Bachelier. I spent another year adding illustrations and examples, and I sent the manuscript to the publisher at the beginning of the Summer of 2023. Because of delays, the book is not out yet, but the publisher allowed me to upload v2 of the document, Insurance, Biases, Discrimination and Fairness. Note that it will serve as the lecture notes of the doctoral course I will give this Winter at ENSAE, in Paris, France.
The R functions (and package) will be uploaded to https://github.com/freakonometrics/InsurFair soon.
This article, published in the latest issue of the journal Risques, was co-written with Nicolas Marescaux.
Recently, a column on France Info quoted a policymaker expressing his frustration with the Conseil d’orientation des retraites (COR). According to him, the COR, “by defining several forecast scenarios, prevented any consensus on the need (or not) for a reform”. This statement, although focused on the controversy around the pension reform, raises a broader question: how can we navigate, and communicate effectively, in an uncertain environment, especially when crucial decisions have to be made? The question is all the more relevant when put alongside the recent call for a moratorium on artificial intelligence research and the regulatory reforms related to climate change. Does uncertainty, by creating vagueness or a lack of confidence in the available information, make it harder to reach an agreement or a consensus on a given issue? And, by extension, does it slow down decision-making? (to be continued)