Eighteen months ago, with Marie-Pier Côté, we organized a workshop on fairness and discrimination in insurance at Laval University (in Québec City). We can now officially announce that the second workshop will take place in six months. More to come soon…
A Sequentially Fair Mechanism for Multiple Sensitive Attributes
A nice review of our paper with Philipp Ratz and François Hu has appeared on montrealethics.ai.
Ask a group of people which biases in machine learning should be reduced, and you are likely to be showered with suggestions, making it difficult to decide where to start. To enable an objective discussion, we study a way to sequentially get rid of biases and propose a tool that can efficiently analyze the effects that the order of correction has on outcomes.
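To see why the order of correction matters, here is a minimal numerical sketch, assuming the standard one-dimensional Wasserstein-barycenter correction for demographic parity (the helper fair_projection and the simulated data are illustrative, not the paper's implementation): correcting for attribute A and then for attribute B does not, in general, coincide with the reverse order.

```python
import numpy as np

def fair_projection(scores, group, weights=None):
    """1-D demographic-parity correction: map each group's scores to the
    Wasserstein barycenter of the group-wise score distributions."""
    groups = np.unique(group)
    if weights is None:
        weights = {g: np.mean(group == g) for g in groups}
    out = np.empty_like(scores, dtype=float)
    for g in groups:
        mask = group == g
        # within-group rank (empirical CDF level) of each score
        u = (np.argsort(np.argsort(scores[mask])) + 1) / (mask.sum() + 1)
        # barycenter quantile: weighted average of all group quantiles at level u
        out[mask] = sum(weights[h] * np.quantile(scores[group == h], u)
                        for h in groups)
    return out

rng = np.random.default_rng(0)
n = 5_000
A = rng.integers(0, 2, n)            # first sensitive attribute (toy data)
B = rng.integers(0, 2, n)            # second sensitive attribute (toy data)
score = rng.normal(0.5 * A + 0.2 * B, 1.0)

ab = fair_projection(fair_projection(score, A), B)   # correct A, then B
ba = fair_projection(fair_projection(score, B), A)   # correct B, then A
print(np.abs(ab - ba).mean())        # the two orders generally differ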
Podcast: IA, biais et éthique en assurance (AI, bias and ethics in insurance)
During my last visit to Paris, France, I took the opportunity to chat with Vivien and to record a podcast, as they say in France, or "balado", as we say here. It was a chance to look back at the Manuel d'Assurance, published a year earlier, and at the book Insurance, Biases, Discrimination and Fairness, announced for next winter… The podcast can be heard on YouTube and Spotify.
(image generated with ai-comic-factory)
Round table on drought risk in France, at ENS Ker Lann
On Thursday, December 7, I will take part (remotely) in a round table on drought risk in France, at the École Normale Supérieure de Ker Lann, entitled "enjeux actuels et futurs des sécheresses" (current and future challenges of droughts). Since I was asked to present our recent work, I have prepared a few quick slides…
[update] the conference is now online on YouTube.
Fondation SCOR, Fairness of predictive models: an application to insurance markets
The Scientific Council of the SCOR Foundation has decided to fund the research project "Fairness of predictive models: an application to insurance markets" until its anticipated completion in three years (2023-2025). The project will be hosted by the University of Quebec and directed by Arthur Charpentier, professor in the mathematics department of the University of Quebec in Montreal. The project aims to propose corrections to the artificial intelligence algorithms that can be used to determine the optimal pricing of individual policies, in order to remove or limit the biases likely to generate inequities, or even discrimination based on gender, race, religion, origin, etc., in the coverage offered by insurers or reinsurers to policyholders. The subject is of both theoretical interest (better control of the black boxes constituted by models based on artificial intelligence algorithms) and practical interest (reduction of the risks of discrimination and inequity). From this point of view, it is very topical for insurers and reinsurers facing major reputational challenges in the context of the growing importance of social networks. In addition to his role at the University of Quebec, Arthur Charpentier is a member of the Institute of Actuaries, an internationally recognized expert in actuarial science, and the author of numerous academic articles published in renowned actuarial journals, both national and international.
Measuring and Mitigating Biases in Motor Insurance Pricing
Our paper, Measuring and Mitigating Biases in Motor Insurance Pricing, with Mulah Moriah and Franck Vermet, is now available on arXiv.
The non-life insurance sector operates within a highly competitive and tightly regulated framework, confronting a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. These rates must conform to principles of transparency, explainability, and ethical considerations. Consequently, the act of pricing transcends mere statistical calculations and carries the weight of strategic and societal factors. These multifaceted concerns may drive insurers to establish equitable premiums, taking into account various variables. For instance, regulations mandate the provision of equitable premiums, considering factors such as policyholder gender or mutualist group dynamics in accordance with respective corporate strategies. Age-based premium fairness is also mandated. In certain insurance domains, variables such as the presence of serious illnesses or disabilities are emerging as new dimensions for evaluating fairness. Regardless of the motivating factor prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must possess the capability to define, measure, and ultimately mitigate any ethical biases inherent in its pricing practices while upholding standards of consistency and performance. This study seeks to provide a comprehensive set of tools for these endeavors and assess their effectiveness through practical application in the context of automobile insurance.
Networks, Games and Risk
Intelligence artificielle et individualisation des garanties en assurance: échec ou retard à l’allumage ?
With , we have just finalized an article, published in the working papers of the PARI chair, on the theme "Intelligence artificielle et individualisation des garanties en assurance : échec ou retard à l'allumage ?" (artificial intelligence and the individualization of insurance coverage: failure or delayed ignition?)
Behind the enthusiasm surrounding the use of artificial intelligence in the insurance sector lies a more nuanced reality. Consider, for instance, motor insurance and health insurance. Attempts to use artificial intelligence for pricing purposes have not, so far, produced the "paradigm shift" that had been announced. Why? Several reasons can be put forward, ranging from the fundamentals of insurance to the deliberate choice of some insurers not to touch risk pooling, in a context marked by two opposing trends: consumers' demand for ever more personalized services and products, and society's refusal of solutions that would leave some individuals by the wayside of insurability.
(to comply with the aesthetic constraints of the chair's working papers ("illustration by a Flemish painter"), nothing very exotic, hence thanks to Jan Havickszoon Steen's alchemist)
Maths for journalists (and probably many others) by Naël Shiab
I just want to mention the very nice work of Naël Shiab, with a lovely interactive notebook created to help journalists understand math concepts often used in news stories.
Fairness Explainability using Optimal Transport with Applications in Image Classification
A revised version of our paper "Fairness Explainability using Optimal Transport with Applications in Image Classification" is now online, with more discussion of counterfactuals.
Ensuring trust and accountability in Artificial Intelligence systems demands explainability of its outcomes. Despite significant progress in Explainable AI, human biases still taint a substantial portion of its training data, raising concerns about unfairness or discriminatory tendencies. Current approaches in the field of Algorithmic Fairness focus on mitigating such biases in the outcomes of a model, but few attempts have been made to try to explain why a model is biased. To bridge this gap between the two fields, we propose a comprehensive approach that uses optimal transport theory to uncover the causes of discrimination in Machine Learning applications, with a particular emphasis on image classification. We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions. This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias. Taking advantage of this interplay of enforcing and explaining fairness, our method holds significant implications for the development of trustworthy and unbiased AI systems, fostering transparency, accountability, and fairness in critical decision-making scenarios across diverse domains.
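One hedged way to make the "feature influence on the bias" idea concrete (a plausible reading of the abstract, not the paper's optimal-transport definition, and with an entirely hypothetical helper bias_influence): compare predictions before and after a fairness correction within regions defined by one feature, and call the feature influential where the correction moves predictions the most.

```python
import numpy as np

def bias_influence(pred, pred_fair, feature_values, bins=5):
    """Average absolute change induced by the fairness correction, within
    quantile bins of one feature (a toy proxy for 'influence on the bias')."""
    edges = np.quantile(feature_values, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(feature_values, edges[1:-1]), 0, bins - 1)
    return np.array([np.abs(pred[idx == b] - pred_fair[idx == b]).mean()
                     for b in range(bins)])

# synthetic illustration (all data and the 'fair' correction are made up)
rng = np.random.default_rng(3)
n = 4_000
x = rng.normal(size=n)                      # one explanatory feature
s = rng.integers(0, 2, n)                   # sensitive attribute
pred = x + 0.8 * s                          # biased score
pred_fair = pred - 0.8 * (s - s.mean())     # toy mean-shift correction
print(bias_influence(pred, pred_fair, x))   # per-bin influence profile
```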
Parametric Fairness with Statistical Guarantees
Our paper Parametric Fairness with Statistical Guarantees is now available on arXiv.
Algorithmic fairness has gained prominence due to societal and regulatory concerns about biases in Machine Learning models. Common group fairness metrics like Equalized Odds for classification or Demographic Parity for both classification and regression are widely used and a host of computationally advantageous post-processing methods have been developed around them. However, these metrics often limit users from incorporating domain knowledge. Despite meeting traditional fairness criteria, they can obscure issues related to intersectional fairness and even replicate unwanted intra-group biases in the resulting fair solution. To avoid this narrow perspective, we extend the concept of Demographic Parity to incorporate distributional properties in the predictions, allowing expert knowledge to be used in the fair solution. We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges like limited training data and constraints on total spending, offering a robust solution for real-life applications.
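As a rough illustration of the general idea (my hedged reading of the abstract, not the paper's estimator): choose a parametric family that encodes expert knowledge about the outcome, say a lognormal distribution for wages, fit it on the pooled predictions, and quantile-map each group's predictions onto that common target. The helper parametric_fair and the toy data below are illustrative.

```python
import numpy as np
from scipy import stats

def parametric_fair(scores, group, family=stats.lognorm):
    """Map each group's predictions onto one parametric target distribution
    fitted on the pooled predictions (a quantile transform per group)."""
    params = family.fit(scores, floc=0)      # expert-chosen target family
    out = np.empty_like(scores, dtype=float)
    for g in np.unique(group):
        mask = group == g
        # within-group rank (empirical CDF level) of each prediction
        u = (np.argsort(np.argsort(scores[mask])) + 1) / (mask.sum() + 1)
        out[mask] = family.ppf(u, *params)   # common target quantiles
    return out

# toy wage example with a simulated gender gap
rng = np.random.default_rng(1)
g = rng.integers(0, 2, 2_000)
wage = np.exp(rng.normal(10 + 0.3 * g, 0.5))
fair_wage = parametric_fair(wage, g)
```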
Forthcoming… Insurance, Biases, Discrimination and Fairness, v3
Insurance, Biases, Discrimination and Fairness is officially announced for February 2024… (ISBN 978-3-031-49782-7)
Since I will be using it for teaching in January and February, we will rely on v2, mentioned in a previous post (PDF available on my blog).
Fairness and ethics for Insurance Pricing, at TD Insurance Seminar
This Thursday, I will give a talk on fairness and ethics at the TD Insurance AI/ML annual conference. Slides are available (the talk is an overview of Insurance, Biases, Discrimination and Fairness v2, emphasizing the use of optimal transport techniques).
Econometrics Seminars at Université de Montréal
This Thursday, I will present our paper Optimal Transport for Counterfactual Estimation: A Method for Causal Inference, written with Emmanuel Flachaire and Ewen Gallic, at the CIREQ Séminaire Marcel-Dagenais en Économétrie at Université de Montréal.
Many problems ask a question that can be formulated as a causal question: "what would have happened if…?" For example, "would the person have had surgery if he or she had been Black?" To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we take the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black) can have an impact on the values of x. We will see here that optimal transport allows us to change certain characteristics that are influenced by the variable whose effect we are trying to quantify. We propose a mutatis mutandis version of the CATE: in dimension one, this simply means that the CATE must be computed relative to a probability level, associated with the proportion of x (a single covariate) in the control population, and by looking for the equivalent quantile in the test population. In higher dimension, it will be necessary to go through transport, and an application will be proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black).
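In dimension one, the quantile-matching construction described above takes only a few lines of code. A minimal sketch, on synthetic data, with a hypothetical helper counterfactual_1d (the paper's actual experiments use real birth data and multivariate transport):

```python
import numpy as np

def counterfactual_1d(x_control, x_test, x):
    """Mutatis mutandis counterfactual for a single covariate: send x to
    the quantile of the test population that matches its probability
    level (rank) in the control population."""
    u = np.mean(x_control <= x)        # probability level of x among controls
    return np.quantile(x_test, u)      # equivalent quantile in the test group

# toy illustration (all numbers are made up)
rng = np.random.default_rng(2)
x_control = rng.normal(60, 8, 10_000)  # covariate in the control population
x_test = rng.normal(66, 9, 10_000)     # same covariate in the test population
print(counterfactual_1d(x_control, x_test, 60.0))
```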
Slides are available online. I will try to mention additional papers published this year, such as Fairness in Multi-Task Learning via Wasserstein Barycenters, Mitigating Discrimination in Insurance with Wasserstein Barycenters or more recently A Sequentially Fair Mechanism for Multiple Sensitive Attributes.
Presentation on fairness and discrimination in insurance, for Intact
Tomorrow morning, with Olivier and Marie-Pier Côté, we will be at the insurer Intact to talk about fairness and discrimination. Olivier will present his recent work on using causal models to build "fair" models in insurance. The paper (a fair price to pay: exploiting directed acyclic graphs for fairness in insurance) will be available soon!