The Scientific Council of the SCOR Foundation has decided to fund the research project “Fairness of predictive models: an application to insurance markets” for its anticipated three-year duration (2023-2025). The project will be led by the University of Quebec and directed by Arthur Charpentier, professor in the mathematics department of the University of Quebec in Montreal. The project aims to propose corrections to the artificial intelligence algorithms that can be used to determine the optimal pricing of individual policies, in order to remove or limit the biases likely to generate inequities, or even discrimination based on gender, race, religion, origin, etc., in the coverage offered by insurers or reinsurers to policyholders. The subject is of both theoretical interest (better control of the black boxes constituted by models based on artificial intelligence algorithms) and practical interest (reduction of the risks of discrimination and inequity). From this point of view, it is highly topical for insurers and reinsurers, who face major reputational challenges given the growing importance of social networks. In addition to his role at the University of Quebec, Arthur Charpentier is a member of the Institute of Actuaries, an internationally recognized expert in actuarial science, and the author of numerous academic articles published in renowned actuarial journals, both national and international.
Fairness Explainability using Optimal Transport with Applications in Image Classification
A revised version of our paper “Fairness Explainability using Optimal Transport with Applications in Image Classification” is now online, with more discussion about counterfactuals.
Ensuring trust and accountability in Artificial Intelligence systems demands explainability of their outcomes. Despite significant progress in Explainable AI, human biases still taint a substantial portion of training data, raising concerns about unfairness or discriminatory tendencies. Current approaches in the field of Algorithmic Fairness focus on mitigating such biases in the outcomes of a model, but few attempts have been made to explain why a model is biased. To bridge this gap between the two fields, we propose a comprehensive approach that uses optimal transport theory to uncover the causes of discrimination in Machine Learning applications, with a particular emphasis on image classification. We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions. This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias. Taking advantage of this interplay between enforcing and explaining fairness, our method holds significant implications for the development of trustworthy and unbiased AI systems, fostering transparency, accountability, and fairness in critical decision-making scenarios across diverse domains.
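To fix ideas on the barycenter step in the simplest possible setting (a one-dimensional score and a binary sensitive attribute), here is a minimal R sketch; it is not the paper's code, and the simulated attribute s, the scores m and the helper fair_score are all illustrative assumptions. Each score is sent to the weighted average of the group-wise quantiles evaluated at its own within-group rank, which is the univariate Wasserstein-barycenter projection.

```r
set.seed(1)
n <- 1000
s <- rbinom(n, 1, .4)                                   # hypothetical binary sensitive attribute
m <- rnorm(n, mean = ifelse(s == 1, .6, .4), sd = .1)   # scores biased between the two groups

fair_score <- function(score, group) {
  p <- prop.table(table(group))                         # group proportions
  sapply(seq_along(score), function(i) {
    u <- ecdf(score[group == group[i]])(score[i])       # within-group rank of the score
    sum(sapply(names(p), function(g)                    # weighted average of group quantiles
      p[[g]] * quantile(score[group == g], probs = u, names = FALSE)))
  })
}

m_fair <- fair_score(m, s)
tapply(m_fair, s, mean)                                 # group means are now (almost) identical
```

On this toy example the two group distributions of m_fair essentially coincide, while the ordering of scores within each group is preserved.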
Parametric Fairness with Statistical Guarantees
Our paper Parametric Fairness with Statistical Guarantees is now available on ArXiv.
Algorithmic fairness has gained prominence due to societal and regulatory concerns about biases in Machine Learning models. Common group fairness metrics like Equalized Odds for classification or Demographic Parity for both classification and regression are widely used and a host of computationally advantageous post-processing methods have been developed around them. However, these metrics often limit users from incorporating domain knowledge. Despite meeting traditional fairness criteria, they can obscure issues related to intersectional fairness and even replicate unwanted intra-group biases in the resulting fair solution. To avoid this narrow perspective, we extend the concept of Demographic Parity to incorporate distributional properties in the predictions, allowing expert knowledge to be used in the fair solution. We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges like limited training data and constraints on total spending, offering a robust solution for real-life applications.
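As a rough illustration of what a parametric target can look like in the wage example, here is a small R sketch, which is my own toy construction rather than the method of the paper: the simulated attribute, the simulated wage predictions and the choice of a log-normal target (fitted with MASS::fitdistr) are all assumptions. Each group's predictions are mapped, through their within-group ranks, onto the quantiles of the common parametric distribution.

```r
library(MASS)
set.seed(123)
group <- rbinom(500, 1, .5)                                  # hypothetical sensitive attribute
wage  <- rlnorm(500, meanlog = 10 + .1 * group, sdlog = .5)  # biased wage predictions

fit <- fitdistr(wage, "lognormal")                           # parametric target chosen by the analyst
to_target <- function(x)                                     # within-group rank -> target quantile
  qlnorm((rank(x) - .5) / length(x),
         meanlog = fit$estimate["meanlog"], sdlog = fit$estimate["sdlog"])

wage_fair <- ave(wage, group, FUN = to_target)               # apply the map group by group
tapply(wage_fair, group, median)                             # both groups now follow the same law
```

The parametric family is where expert knowledge enters: a different target, or constraints on its parameters (for instance to respect a total spending budget), would change the fair predictions without changing the mechanics above.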
Forthcoming… Insurance, Biases, Discrimination and Fairness, v3
Insurance, Biases, Discrimination and Fairness is officially announced for February 2024… (ISBN 978-3-031-49782-7)
Since I will be teaching from the book in January and February, we will use v2, mentioned in a previous post (pdf available on my blog).
Presentation on fairness and discrimination in insurance, for Intact
Tomorrow morning, with Olivier and Marie-Pier Côté, we will be at the insurer Intact to talk about fairness and discrimination. Olivier will present his recent work on the use of causal models to propose “fair” models in insurance. The paper (a fair price to pay: exploiting directed acyclic graphs for fairness in insurance) will be available soon!
A Fair price to pay: exploiting directed acyclic graphs for fairness in insurance
Tonight (Montréal time), Marie-Pier Côté will give a talk in Melbourne, Australia, on “a fair price to pay: exploiting directed acyclic graphs for fairness in insurance”, based on recent joint work with our PhD student, Olivier Côté.
Many jurisdictions have laws or guidelines stipulating that insurance companies must not discriminate on some specified policyholder characteristics. Omission of the prohibited variables from the models removes direct discrimination, but does not prevent proxy discrimination — a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of allowed covariates. In the actuarial literature, there remains some confusion on the definition of indirect discrimination: this impedes the understanding of the goals of each fairness methodology and their comparison. In the causal inference literature, many tools, such as directed acyclic graphs (DAGs), help uncover various types of biases. A DAG describes the causal relationships between variables of interest and has clear dependence implications. We exploit this tool for fairness to formally define direct and indirect discrimination, to discuss potential sources of bias, and to understand the properties of different fairness methodologies. Four families of fair scores (best-estimate, unaware, aware and corrective) are placed in the DAG representing the insurance pricing problem. This allows us to study their behaviour in terms of direct and indirect discrimination. A comprehensive pedagogical example illustrates our findings.
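To make the direct/indirect distinction concrete, here is a purely illustrative DAG, not the pedagogical example of the paper: the three nodes (S for the protected characteristic, X for an allowed covariate, Y for the claim cost or technical price) are hypothetical, and the sketch uses the igraph package in R.

```r
library(igraph)
# S: protected characteristic, X: allowed rating factor, Y: claim cost / technical price
dag <- graph_from_literal(S -+ Y, S -+ X, X -+ Y)
is_dag(dag)   # TRUE: no directed cycle
plot(dag)
# direct discrimination          : the edge S -> Y
# indirect (proxy) discrimination: the path S -> X -> Y, which survives even when
# S is simply dropped from the pricing model (the "unaware" score)
```

In such a graph, an "unaware" score built on X alone still inherits information about S through the edge S -> X, which is precisely the proxy discrimination discussed in the abstract.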
More to come soon…
Insurance, biases, discrimination and fairness, v2
In the summer of 2022, my report Insurance, biases, discrimination and fairness (v1) was officially uploaded on the website of the Institut Louis Bachelier. I spent another year adding illustrations and examples, and I sent the manuscript to the publisher at the beginning of the summer of 2023. Because of delays, the book is not out yet, but the publisher allowed me to upload v2 of the document, Insurance, biases, discrimination and fairness. Note that it will serve as the lecture notes for the doctoral course I will give this winter at ENSAE, in Paris, France.
The R functions (and package) will be uploaded on https://github.com/freakonometrics/InsurFair soon.
Talk at the ESSEC Risk Seminar
Thursday, I will be at La Défense, in Paris (France), to give a talk at the ESSEC Risk Seminar, entitled Causal Inference and Counterfactuals with Optimal Transport, with Applications in Fairness and Discrimination. Slides are now available, and the talk will be based on some recent papers, starting with Mitigating Discrimination in Insurance with Wasserstein Barycenters (presented last weekend at BIAS 2023), but also Fairness in Multi-Task Learning via Wasserstein Barycenters, and A Sequentially Fair Mechanism for Multiple Sensitive Attributes. There is also the textbook, which should appear before the winter.
Fairness and Ethics in (insurance) Pricing
This Tuesday, I will give a talk on fairness at the Akur8 ratemaking seminar. Slides are available online.
Bias 2023 (3rd Workshop on Bias and Fairness in AI)
At the end of the week, François will be in Torino to present our work on discrimination mitigation in the context of insurance. Tomorrow, he will present our work on Mitigating Discrimination in Insurance with Wasserstein Barycenters.
Melting contestation: insurance fairness and machine learning
Our paper, Melting contestation: insurance fairness and machine learning, with Laurence Barry, is now published (in Ethics and Information Technology).
With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that they were organized around three types of bias: pure stereotypes, non-causal correlations, and causal effects that a society chooses to protect against; these are thus the main sources of dispute. The lens of this typology then allows us to look anew at the potential biases in insurance pricing implied by big data and machine learning, showing that despite utopian claims, social stereotypes continue to plague data, thus threatening to unconsciously reproduce these discriminations in insurance. To counter these effects, algorithmic fairness attempts to define mathematical indicators of non-bias. We argue that this may prove insufficient, since it assumes the existence of specific protected groups, which could only be made visible through public debate and contestation. These are less likely if the right to explanation is realized through personalized algorithms, which could reinforce the individualized perception of the social that blocks rather than encourages collective mobilization.
Fairness in Multi-Task Learning via Wasserstein Barycenters, at ECML PKDD 2023
Today, François presents our paper Fairness in Multi-Task Learning via Wasserstein Barycenters at ECML PKDD, in Torino. Slides are available online (and a poster can be found below).
The paper was actually published in Machine Learning and Knowledge Discovery in Databases: Research Track (295–312), available here.
A Sequentially Fair Mechanism for Multiple Sensitive Attributes
Our paper, A Sequentially Fair Mechanism for Multiple Sensitive Attributes, with François Hu and Philipp Ratz, is now available on ArXiv.
In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score. In recent years, the scientific community has developed a host of definitions and tools to solve this task, which work well in many practical applications. However, the applicability and effectiveness of these tools and definitions become less straightforward in the case of multiple sensitive attributes. To tackle this issue, we propose a sequential framework which progressively achieves fairness across a set of sensitive features. We accomplish this by leveraging multi-marginal Wasserstein barycenters, which extend the standard notion of Strong Demographic Parity to the case with multiple sensitive characteristics. This method also provides a closed-form solution for the optimal, sequentially fair predictor, permitting a clear interpretation of inter-sensitive feature correlations. Our approach seamlessly extends to approximate fairness, yielding a framework that accommodates the trade-off between risk and unfairness. This extension permits a targeted prioritization of fairness improvements for a specific attribute within a set of sensitive attributes, allowing for case-specific adaptation. A data-driven estimation procedure for the derived solution is developed, and comprehensive numerical experiments are conducted on both synthetic and real datasets. Our empirical findings decisively underscore the practical efficacy of our post-processing approach in fostering fair decision-making.
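For context, in the single-attribute, one-dimensional case, the Wasserstein-barycenter approach to Strong Demographic Parity admits a closed-form fair predictor; the notation below is mine and may differ from the paper's:

$$m^\star(x,s)\;=\;\sum_{s'}\mathbb{P}(S=s')\,F_{s'}^{-1}\!\big(F_s\big(m(x,s)\big)\big),$$

where $F_s$ denotes the distribution function of the score $m(X,S)$ given $S=s$. Loosely speaking, the sequential mechanism of the paper chains such corrections one sensitive attribute at a time, the multi-marginal barycenter playing the role of the common target distribution.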
Addressing Fairness and Explainability in Image Classification Using Optimal Transport
Our new paper, with François Hu and Philipp Ratz, Addressing Fairness and Explainability in Image Classification Using Optimal Transport, is now available on ArXiv.
Algorithmic Fairness and the explainability of potentially unfair outcomes are crucial for establishing trust and accountability of Artificial Intelligence systems in domains such as healthcare and policing. Though significant advances have been made in each of the fields separately, achieving explainability in fairness applications remains challenging, particularly so in domains where deep neural networks are used. At the same time, ethical data-mining has become ever more relevant, as it has been shown countless times that fairness-unaware algorithms result in biased outcomes. Current approaches focus on mitigating biases in the outcomes of the model, but few attempts have been made to explain why a model is biased. To bridge this gap, we propose a comprehensive approach that leverages optimal transport theory to uncover the causes and implications of biased regions in images, which easily extends to tabular data as well. Through the use of Wasserstein barycenters, we obtain scores that are independent of a sensitive variable but keep their marginal orderings. This step ensures predictive accuracy but also helps us to recover the regions most associated with the generation of the biases. Our findings hold significant implications for the development of trustworthy and unbiased AI systems, fostering transparency, accountability, and fairness in critical decision-making scenarios across diverse domains.
Insurance, biases, discrimination and fairness
The initial report was published almost one year ago (https://www.institutlouisbachelier.org/…). It took me one more year of additional work to get a real textbook…
(much more soon… now I need a break…)