Category Archives: Publications

Who benefits from data sharing?

This post was co-written with Laurence Barry and was originally published in French.

Recently, the European Commission has laid the groundwork for a new framework for accessing financial data (FIDA, or Financial Data Access), allowing consumers and businesses to authorize third parties to access their data held by financial institutions, including insurers.

One of the main arguments in favor of this regulation is transparency, or, as the texts put it, ‘promoting financial transparency.’ Yet it is difficult to argue against transparency unless one has something to hide. This is the famous ‘nothing to hide’ argument! As Solove (2011) reminds us, the British government used it to justify installing surveillance cameras: ‘if you’ve got nothing to hide, you’ve got nothing to fear.’ The academic Shoshana Zuboff is much more reserved, stating, ‘if you have nothing to hide, then you are nothing…’ Sharing personal data without limits, and without accountability for how this information is used, is dangerous, both for the individual doing so and collectively. We focus here on how insurers could use this additional information: this opening of data access significantly compromises the very idea of risk pooling and sharing.

Continue reading Who benefits from data sharing?

From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration

Our paper From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic, and François Hu, is now online on arXiv.

The assessment of binary classifier performance traditionally centers on discriminative ability using metrics such as accuracy. However, these metrics often disregard the model’s inherent uncertainty, especially in sensitive decision-making domains such as finance or healthcare. Given that model-predicted scores are commonly interpreted as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario, using a Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.
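
Since the abstract advocates local regressions as recalibration tools, here is a minimal sketch of the general idea, not the paper's Local Calibration Score or its actual pipeline: the data are simulated, and LOWESS (from statsmodels) stands in for whatever local-regression smoother one prefers.

```python
# Minimal recalibration sketch: fit a local regression (LOWESS) of outcomes on
# miscalibrated scores, then map scores through the fitted calibration curve.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(42)
n = 5000
p = rng.beta(2, 5, size=n)                     # true event probabilities
y = rng.binomial(1, p)                         # observed binary outcomes
s = np.clip(p ** 0.5, 1e-6, 1 - 1e-6)          # deliberately distorted scores

# Local regression of outcomes on scores gives a smooth calibration curve.
curve = lowess(y, s, frac=0.3, return_sorted=True)
s_recal = np.interp(s, curve[:, 0], np.clip(curve[:, 1], 0, 1))

def binned_gap(scores, outcomes, bins=10):
    """Mean absolute gap between average score and event rate within score bins."""
    edges = np.quantile(scores, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, scores, side="right") - 1, 0, bins - 1)
    return np.mean([abs(scores[idx == b].mean() - outcomes[idx == b].mean())
                    for b in range(bins) if np.any(idx == b)])

print("calibration gap before:", round(binned_gap(s, y), 4))
print("calibration gap after :", round(binned_gap(s_recal, y), 4))
```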

A Fair price to pay: exploiting causal graphs for fairness in insurance

Our paper, A fair price to pay: exploiting causal graphs for fairness in insurance, written with Olivier Côté and Marie-Pier Côté, is now available on SSRN.

In many jurisdictions, insurance companies must not discriminate on some given policyholder characteristics. Omission of prohibited variables from models prevents direct discrimination, but fails to address proxy discrimination, a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of acceptable covariates. The lack of formal definition for key fairness concepts, in particular indirect discrimination, hinders the fairness assessment of methodologies. We review causal inference notions and introduce a causal graph tailored for fairness in insurance. Exploiting these, we discuss potential sources of bias, formally define direct and indirect discrimination, and study the properties of fairness methodologies. A novel categorization of fair methodologies into five families (best-estimate, unaware, aware, hyperaware, and corrective) is constructed based on their expected fairness properties. A comprehensive pedagogical example illustrates the practical implications of our findings: the interplay between our fair score families, group fairness criteria, and sources of discrimination.
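
To make the “unaware” versus “aware” distinction concrete, here is a toy sketch under stated assumptions (synthetic data, linear models, a single proxy covariate); it is not the paper's pedagogical example. It simply shows that dropping the protected attribute does not remove the group gap when an acceptable covariate acts as a proxy.

```python
# Toy illustration of "unaware" (protected attribute dropped) versus "aware"
# (protected attribute kept) predictors, with a proxy covariate correlated
# with the protected attribute.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
s = rng.binomial(1, 0.5, n)                    # protected attribute
x_proxy = 0.8 * s + rng.normal(0, 0.5, n)      # acceptable covariate, correlated with s
x_other = rng.normal(0, 1, n)                  # covariate independent of s
y = 1.0 * x_other + 0.5 * x_proxy + rng.normal(0, 0.3, n)   # risk to be predicted

X_unaware = np.column_stack([x_proxy, x_other])
X_aware = np.column_stack([s, x_proxy, x_other])
pred_unaware = LinearRegression().fit(X_unaware, y).predict(X_unaware)
pred_aware = LinearRegression().fit(X_aware, y).predict(X_aware)

# Gap in average prediction between groups: a crude group-fairness diagnostic.
for name, pred in [("unaware", pred_unaware), ("aware", pred_aware)]:
    gap = pred[s == 1].mean() - pred[s == 0].mean()
    print(f"{name:8s} mean prediction gap between groups: {gap:+.3f}")
```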

Melting contestation: insurance fairness and machine learning

A nice review of our paper, written with Laurence Barry, is available on montrealethics.ai.

Machine learning tends to replace the actuary in the selection of features and the building of pricing models. However, avoiding subjective judgments thanks to automation does not necessarily mean that biases are removed. Nor does the absence of bias warrant fairness. This paper critically analyzes discrimination and insurance fairness with machine learning.

Melting contestation: insurance fairness and machine learning

A Sequentially Fair Mechanism for Multiple Sensitive Attributes

A nice review of our paper, written with Philipp Ratz and François Hu, is available on montrealethics.ai.

Ask a group of people which biases in machine learning should be reduced, and you are likely to be showered with suggestions, making it difficult to decide where to start. To enable an objective discussion, we study a way to sequentially get rid of biases and propose a tool that can efficiently analyze the effects that the order of correction has on outcomes.

A Sequentially Fair Mechanism for Multiple Sensitive Attributes

Measuring and Mitigating Biases in Motor Insurance Pricing

Our paper, Measuring and Mitigating Biases in Motor Insurance Pricing, written with Mulah Moriah and Franck Vermet, is now available on arXiv.

The non-life insurance sector operates within a highly competitive and tightly regulated framework, confronting a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. These rates must conform to principles of transparency, explainability, and ethical considerations. Consequently, the act of pricing transcends mere statistical calculations and carries the weight of strategic and societal factors. These multifaceted concerns may drive insurers to establish equitable premiums, taking into account various variables. For instance, regulations mandate the provision of equitable premiums, considering factors such as policyholder gender or mutualist group dynamics in accordance with respective corporate strategies. Age-based premium fairness is also mandated. In certain insurance domains, variables such as the presence of serious illnesses or disabilities are emerging as new dimensions for evaluating fairness. Regardless of the motivating factor prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must possess the capability to define, measure, and ultimately mitigate any ethical biases inherent in its pricing practices while upholding standards of consistency and performance. This study seeks to provide a comprehensive set of tools for these endeavors and assess their effectiveness through practical application in the context of automobile insurance.
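
As a purely illustrative companion to the abstract, here is a minimal measurement sketch, not the toolbox developed in the paper: it assumes hypothetical premiums produced by some pricing model and quantifies the gap between two groups with the mean difference and the one-dimensional Wasserstein distance.

```python
# Measuring a bias in hypothetical motor premiums: mean gap and 1D Wasserstein
# distance between the premium distributions of two groups.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
n = 5000
group = rng.binomial(1, 0.4, n)                          # e.g. a protected attribute
premium = rng.gamma(shape=2.0, scale=300 + 60 * group)   # hypothetical model output

mean_gap = premium[group == 1].mean() - premium[group == 0].mean()
w_dist = wasserstein_distance(premium[group == 1], premium[group == 0])
print(f"mean premium gap     : {mean_gap:.1f}")
print(f"Wasserstein distance : {w_dist:.1f}")
```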

Intelligence artificielle et individualisation des garanties en assurance: échec ou retard à l’allumage ?

With , we have just finalized an article, published in the working papers of the PARI chair, on the topic “Intelligence artificielle et individualisation des garanties en assurance : échec ou retard à l’allumage ?”.

Behind the enthusiasm sparked by the use of artificial intelligence in the insurance sector lies a more nuanced reality. Take motor insurance and health insurance, for instance. Attempts to use artificial intelligence for pricing have not, so far, produced the announced “paradigm shift”. Why? Several reasons can be put forward, ranging from the fundamentals of insurance to the deliberate choice of some insurers not to touch risk pooling, in a context marked by two opposing trends: consumers’ search for ever more personalized services and products, and society’s refusal of solutions that would leave some individuals stranded at the roadside of insurability.

(To comply with the aesthetic constraints of the chair’s working papers (“illustration by a Flemish painter”), nothing too exotic; thanks, therefore, to the alchemist of Jan Havickszoon Steen.)

Fairness Explainability using Optimal Transport with Applications in Image Classification

A revised version of our paper “Fairness Explainability using Optimal Transport with Applications in Image Classification” is now online, with more discussion about counterfactuals.

Ensuring trust and accountability in Artificial Intelligence systems demands explainability of its outcomes. Despite significant progress in Explainable AI, human biases still taint a substantial portion of its training data, raising concerns about unfairness or discriminatory tendencies. Current approaches in the field of Algorithmic Fairness focus on mitigating such biases in the outcomes of a model, but few attempts have been made to try to explain why a model is biased. To bridge this gap between the two fields, we propose a comprehensive approach that uses optimal transport theory to uncover the causes of discrimination in Machine Learning applications, with a particular emphasis on image classification. We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions. This allows us to derive a cohesive system which uses the enforced fairness to measure each feature’s influence on the bias. Taking advantage of this interplay of enforcing and explaining fairness, our method holds significant implications for the development of trustworthy and unbiased AI systems, fostering transparency, accountability, and fairness in critical decision-making scenarios across diverse domains.
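
To give a sense of the Wasserstein-barycenter step mentioned above, here is a one-dimensional sketch on hypothetical scores; it leaves out the image-classification and explainability parts entirely and only illustrates how pushing each group onto a common barycenter equalizes score distributions.

```python
# 1D Wasserstein barycenter of two score distributions: average the quantile
# functions, then transport each group's scores onto that barycenter.
import numpy as np

rng = np.random.default_rng(7)
scores_a = rng.beta(2, 5, 4000)     # scores for group A
scores_b = rng.beta(5, 2, 6000)     # scores for group B

w_a = len(scores_a) / (len(scores_a) + len(scores_b))
grid = np.linspace(0, 1, 501)       # probability levels
q_bary = w_a * np.quantile(scores_a, grid) + (1 - w_a) * np.quantile(scores_b, grid)

def push_to_barycenter(x, sample):
    """Empirical CDF of the group, then the barycenter quantile function."""
    ranks = np.searchsorted(np.sort(sample), x, side="right") / len(sample)
    return np.interp(ranks, grid, q_bary)

fair_a = push_to_barycenter(scores_a, scores_a)
fair_b = push_to_barycenter(scores_b, scores_b)
print("group means before:", round(scores_a.mean(), 3), round(scores_b.mean(), 3))
print("group means after :", round(fair_a.mean(), 3), round(fair_b.mean(), 3))
```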

Parametric Fairness with Statistical Guarantees

Our paper Parametric Fairness with Statistical Guarantees is now available on ArXiv.

Algorithmic fairness has gained prominence due to societal and regulatory concerns about biases in Machine Learning models. Common group fairness metrics like Equalized Odds for classification or Demographic Parity for both classification and regression are widely used and a host of computationally advantageous post-processing methods have been developed around them. However, these metrics often limit users from incorporating domain knowledge. Despite meeting traditional fairness criteria, they can obscure issues related to intersectional fairness and even replicate unwanted intra-group biases in the resulting fair solution. To avoid this narrow perspective, we extend the concept of Demographic Parity to incorporate distributional properties in the predictions, allowing expert knowledge to be used in the fair solution. We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges like limited training data and constraints on total spending, offering a robust solution for real-life applications.
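
The wage example can be sketched as follows, under heavy assumptions (synthetic lognormal wages, a lognormal target fitted on the pooled sample); the point is only to show where expert knowledge enters, namely in the choice of the parametric target distribution, and not to reproduce the paper’s method or its statistical guarantees.

```python
# Parametric flavour of Demographic Parity: map each group's wages onto an
# analyst-chosen parametric target distribution via quantile mapping.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
wages_a = rng.lognormal(mean=10.0, sigma=0.4, size=4000)
wages_b = rng.lognormal(mean=10.3, sigma=0.5, size=4000)

# Expert-chosen target: here, a lognormal fitted on the pooled wages.
pooled = np.concatenate([wages_a, wages_b])
shape, loc, scale = stats.lognorm.fit(pooled, floc=0)
target = stats.lognorm(shape, loc=loc, scale=scale)

def to_target(x, sample):
    """Empirical CDF of the group, then the target quantile function."""
    ranks = np.searchsorted(np.sort(sample), x, side="right") / (len(sample) + 1)
    return target.ppf(ranks)

fair_a, fair_b = to_target(wages_a, wages_a), to_target(wages_b, wages_b)
print("group mean wages before:", round(wages_a.mean()), round(wages_b.mean()))
print("group mean wages after :", round(fair_a.mean()), round(fair_b.mean()))
```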

Est-il nécessaire (et utile) d’être en guerre contre tout ?

In early 2010, Nicolas Sarkozy, followed by Jérôme Cahuzac, went to “war against fraudsters”. In March 2020, France (through the voice of its president) entered a “health war” against a coronavirus. More recently, the sociologist Dominique Méda took up the words of the economist Christian Gollier, who opens his book with the sentence “in this book, I express my hopes and doubts about the possibility of winning the world war against climate disruption”. Invoking war no doubt helps to strike people’s minds, but when we are at war against everything, one may wonder whether the association still carries any meaning.

Continue reading Est-il nécessaire (et utile) d’être en guerre contre tout ?

L’incertitude empêche-t-elle de prendre des décisions ?

This article, published in the latest issue of the journal Risques, was co-written with Nicolas Marescaux.

Recently, a column on France Info quoted a policymaker expressing his frustration with the Conseil d’orientation des retraites (COR). According to him, the COR, “by defining several forecast scenarios, prevented any consensus on the need (or not) for a reform”. This statement, although centred on the controversy over the pension reform, raises a broader question: how can one navigate and communicate effectively in an uncertain environment, especially when crucial decisions are at stake? The question is all the more relevant when set against the recent call for a moratorium on artificial intelligence research and the regulatory reforms linked to climate change. Does uncertainty, by creating vagueness or a lack of confidence in the available information, make it harder to reach an agreement or a consensus on a given issue? And, by extension, does it slow down the decision-making process? (to be continued)

Melting contestation: insurance fairness and machine learning

Our paper, Melting contestation: insurance fairness and machine learning, written with Laurence Barry, is now published in Ethics and Information Technology.

With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that they were organized along three types of bias: pure stereotypes, non-causal correlations, and causal effects that a society chooses to protect against; these are the main sources of dispute. The lens of this typology then allows us to look anew at the potential biases in insurance pricing implied by big data and machine learning, showing that, despite utopian claims, social stereotypes continue to plague data, thus threatening to unconsciously reproduce these discriminations in insurance. To counter these effects, algorithmic fairness attempts to define mathematical indicators of non-bias. We argue that this may prove insufficient, since it assumes the existence of specific protected groups, which could only be made visible through public debate and contestation. These are less likely if the right to explanation is realized through personalized algorithms, which could reinforce the individualized perception of the social that blocks rather than encourages collective mobilization.

A Sequentially Fair Mechanism for Multiple Sensitive Attributes

Our paper, A Sequentially Fair Mechanism for Multiple Sensitive Attributes, written with François Hu and Philipp Ratz, is now available on arXiv.

In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score. Throughout recent years, the scientific community has developed a host of definitions and tools to solve this task, which work well in many practical applications. However, the applicability and effectiveness of these tools and definitions becomes less straightforward in the case of multiple sensitive attributes. To tackle this issue, we propose a sequential framework which allows us to progressively achieve fairness across a set of sensitive features. We accomplish this by leveraging multi-marginal Wasserstein barycenters, which extend the standard notion of Strong Demographic Parity to the case with multiple sensitive characteristics. This method also provides a closed-form solution for the optimal, sequentially fair predictor, permitting a clear interpretation of inter-sensitive feature correlations. Our approach seamlessly extends to approximate fairness, enveloping a framework that accommodates the trade-off between risk and unfairness. This extension permits a targeted prioritization of fairness improvements for a specific attribute within a set of sensitive attributes, allowing for case-specific adaptation. A data-driven estimation procedure for the derived solution is developed, and comprehensive numerical experiments are conducted on both synthetic and real datasets. Our empirical findings decisively underscore the practical efficacy of our post-processing approach in fostering fair decision-making.
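
To illustrate the sequential idea only (the paper’s multi-marginal construction and closed-form solution are not reproduced here), the sketch below applies a one-dimensional quantile-averaging adjustment first with respect to one sensitive attribute and then with respect to a second, correlated one, tracking the group gaps after each step.

```python
# Sequential fairness sketch: make scores fair w.r.t. s1, then w.r.t. s2,
# using a 1D barycenter (weighted average of group quantile functions) at
# each step. With correlated attributes, the order of correction matters.
import numpy as np

rng = np.random.default_rng(5)
n = 8000
s1 = rng.binomial(1, 0.5, n)                        # first sensitive attribute
s2 = rng.binomial(1, 0.2 + 0.4 * s1)                # second, correlated attribute
score = rng.beta(2, 4, n) + 0.10 * s1 + 0.05 * s2   # biased raw scores

def make_fair_1d(score, group):
    """Push each group's scores onto the weighted barycenter of group quantiles."""
    grid = np.linspace(0, 1, 501)
    groups, counts = np.unique(group, return_counts=True)
    weights = counts / len(group)
    quantiles = sum(w * np.quantile(score[group == g], grid)
                    for g, w in zip(groups, weights))
    fair = np.empty_like(score)
    for g in groups:
        sample = np.sort(score[group == g])
        ranks = np.searchsorted(sample, score[group == g], side="right") / len(sample)
        fair[group == g] = np.interp(ranks, grid, quantiles)
    return fair

step1 = make_fair_1d(score, s1)     # fair with respect to s1
step2 = make_fair_1d(step1, s2)     # then fair with respect to s2
for name, sc in [("raw", score), ("after s1", step1), ("after s1, s2", step2)]:
    gap1 = sc[s1 == 1].mean() - sc[s1 == 0].mean()
    gap2 = sc[s2 == 1].mean() - sc[s2 == 0].mean()
    print(f"{name:12s} gap(s1) = {gap1:+.3f}   gap(s2) = {gap2:+.3f}")
```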