Category Archives: Publications

Croissance, décroissance, de quoi parle-t-on ?

This short post was co-written with Ewen Gallic.

"End of the world, end of the month, same fight," can regularly be read on signs and banners at various demonstrations, and it is also the title of economist Christian Gollier's inaugural lecture at the Collège de France, a reminder that climate change and the economy are locked in a fight that promises to be bloody. "Growth" appears to be a key element in this fight, but the fight will probably remain futile as long as the term is not clearly discussed, which would allow us to step out of often dogmatic trenches.
Continue reading Croissance, décroissance, de quoi parle-t-on ?

Oaxaca-Blinder decomposition of changes in means and inequality

Our paper, Oaxaca-Blinder decomposition of changes in means and inequality: A simultaneous approach, written with Emmanuel Flachaire, was just published in the Economics Bulletin.

In this paper, we show that a decomposition of changes in inequality, with the mean log deviation index, can be obtained directly from the Oaxaca-Blinder decompositions of changes in means of incomes and log-incomes. It allows practitioners to simultaneously conduct empirical analyses explaining which factors account for changes in means and in inequality indices between two distributions with strictly positive values.
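As a rough sketch of the mechanics (the notation below is ours, not necessarily the paper's): for two distributions A and B of positive incomes, with linear models fitted on incomes and on log-incomes, the classical Oaxaca-Blinder decomposition of the change in means reads

\[ \bar{y}_B - \bar{y}_A \;=\; (\bar{x}_B - \bar{x}_A)'\hat{\beta}_B \;+\; \bar{x}_A'(\hat{\beta}_B - \hat{\beta}_A), \]

a composition (endowment) term plus a coefficient term, and similarly for \(\overline{\log y}_B - \overline{\log y}_A\). Since the mean log deviation of a positive distribution can be written \(\mathrm{MLD} = \log \bar{y} - \overline{\log y}\), its change between the two distributions is

\[ \Delta\,\mathrm{MLD} \;=\; \big(\log \bar{y}_B - \log \bar{y}_A\big) \;-\; \big(\overline{\log y}_B - \overline{\log y}_A\big), \]

so it combines the change in the mean of incomes with the Oaxaca-Blinder decomposition of the change in the mean of log-incomes, which is the intuition behind the simultaneous approach.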

Note that the Oaxaca-Blinder decomposition actually originated in the work of Evelyn Kitagawa in the 1950s, to quantify gender discrimination in labour economics.

Kitagawa, E. M. (1955). Components of a difference between two rates. Journal of the American Statistical Association, 50 (272), 1168–1194.

Geospatial Disparities: A Case Study on Real Estate Prices in Paris

Our paper, Geospatial Disparities: A Case Study on Real Estate Prices in Paris, written with Agathe Fernandes Machado, François Hu, Philipp Ratz, and Ewen Gallic, is now online on arXiv.

Driven by an increasing prevalence of trackers, ever more IoT sensors, and the declining cost of computing power, geospatial information has come to play a pivotal role in contemporary predictive models. While enhancing prognostic performance, geospatial data also has the potential to perpetuate many historical socio-economic patterns, raising concerns about a resurgence of biases and exclusionary practices, with their disproportionate impacts on society. Addressing this, our paper emphasizes the crucial need to identify and rectify such biases and calibration errors in predictive models, particularly as algorithms become more intricate and less interpretable. The increasing granularity of geospatial information further introduces ethical concerns, as choosing different geographical scales may exacerbate disparities akin to redlining and exclusionary zoning. To address these issues, we propose a toolkit for identifying and mitigating biases arising from geospatial data. Extending classical fairness definitions, we incorporate an ordinal regression case with spatial attributes, deviating from the binary classification focus. This extension allows us to gauge disparities stemming from data aggregation levels and advocates for a less interfering correction approach. Illustrating our methodology using a Parisian real estate dataset, we showcase practical applications and scrutinize the implications of choosing geographical aggregation levels for fairness and calibration measures.
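To give a feel for what "disparities stemming from data aggregation levels" can mean in practice, here is a minimal sketch (not the paper's actual measure; the column names and the toy data are hypothetical): for an ordinal prediction such as a price band, we compare the within-area distribution of predicted bands to the overall one, at a coarse and at a fine spatial level.

import numpy as np
import pandas as pd

def ordinal_gap(df, pred_col, area_col, n_bands):
    # distribution of predicted bands over the whole dataset
    overall = np.bincount(df[pred_col], minlength=n_bands) / len(df)
    gaps = []
    for _, sub in df.groupby(area_col):
        local = np.bincount(sub[pred_col], minlength=n_bands) / len(sub)
        # Wasserstein-1 distance on {0,...,K-1} = sum of absolute CDF differences
        gaps.append(np.abs(np.cumsum(local - overall)).sum())
    return max(gaps)

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "band": rng.integers(0, 5, size=n),            # ordinal prediction (price band)
    "district": rng.integers(0, 20, size=n),       # coarse spatial level
    "neighborhood": rng.integers(0, 200, size=n),  # fine spatial level
})
# even with no true spatial effect, the finer level mechanically shows a larger gap
print("coarse level:", ordinal_gap(df, "band", "district", 5))
print("fine level:  ", ordinal_gap(df, "band", "neighborhood", 5))

On real data, moving to a finer aggregation level typically increases the largest within-area deviation, which is the kind of effect the paper is concerned with when choosing a geographical scale.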

Discussions autour du Manuel d’Assurance, partie 3

Following the discussion two weeks ago with Gilles, and the one the week before with Patrick, I am posting one last piece, based on a conversation the three of us had last fall with Patrick Thourot and Gilles Bénéplanc, together with Annie's team (which handles training within the Séroni group, as well as News Assurances Pro), to talk about the Manuel d'Assurance.

The book is still available online on the PUF website.

Who benefits from data sharing?

This post was co-written with Laurence Barry, originally in French.

Recently, the European Commission has laid the groundwork for a new framework for accessing financial data (FIDA, or Financial Data Access), allowing consumers and businesses to authorize third parties to access their data held by financial institutions, including insurers.

One of the main arguments in favor of this regulation is transparency, or as the texts put it, ‘promoting financial transparency.’ However, it is difficult to argue against transparency unless one has something to hide. This is the famous ‘nothing to hide’ argument! As Solove (2011) reminds us, the British government used it as an argument to install surveillance cameras: ‘if you’ve got nothing to hide, you’ve got nothing to fear.’ Academic Shoshana Zuboff is much more reserved, stating, ‘if you have nothing to hide, then you are nothing…’ Sharing personal data without limits or accountability for how this information is used is dangerous, both for the individual doing so and collectively. We focus here on how insurers could potentially use more information: this opening of data access significantly compromises the very idea of risk pooling and sharing.

Continue reading Who benefits from data sharing?

From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration

Our paper From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic, and François Hu, is now online on arXiv.

The assessment of binary classifier performance traditionally centers on discriminative ability using metrics such as accuracy. However, these metrics often disregard the model’s inherent uncertainty, especially when dealing with sensitive decision-making domains, such as finance or healthcare. Given that model-predicted scores are commonly seen as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario using a Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.
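As an illustration of the kind of local-regression recalibration advocated above (a minimal sketch only; it does not reproduce the paper's Local Calibration Score), one can smooth the binary outcomes against the predicted scores with LOWESS and use the fitted curve both as a calibration diagnostic and as a recalibration map:

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(42)

# toy, deliberately miscalibrated scores: true probability p, reported score p**2
p = rng.uniform(size=10_000)
y = rng.binomial(1, p)
score = p ** 2

# local regression of outcomes on scores: a smooth estimate of P(Y = 1 | score = s)
curve = lowess(y, score, frac=0.3, return_sorted=True)
grid, calib = curve[:, 0], curve[:, 1]

# use the smoothed curve as a recalibration map
recalibrated = np.interp(score, grid, calib)

def miscal(scores, outcomes):
    # mean absolute gap between the LOWESS estimate of P(Y=1 | score) and the diagonal
    sm = lowess(outcomes, scores, frac=0.3, return_sorted=True)
    return np.mean(np.abs(sm[:, 1] - sm[:, 0]))

print("before recalibration:", miscal(score, y))
print("after  recalibration:", miscal(recalibrated, y))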

A Fair price to pay: exploiting causal graphs for fairness in insurance

Our paper, A Fair Price to Pay: Exploiting Causal Graphs for Fairness in Insurance, written with Olivier Côté and Marie-Pier Côté, is now available on SSRN.

In many jurisdictions, insurance companies must not discriminate on some given policyholder characteristics. Omission of prohibited variables from models prevents direct discrimination, but fails to address proxy discrimination, a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of acceptable covariates. The lack of formal definition for key fairness concepts, in particular indirect discrimination, hinders the fairness assessment of methodologies. We review causal inference notions and introduce a causal graph tailored for fairness in insurance. Exploiting these, we discuss potential sources of bias, formally define direct and indirect discrimination, and study the properties of fairness methodologies. A novel categorization of fair methodologies into five families (best-estimate, unaware, aware, hyperaware, and corrective) is constructed based on their expected fairness properties. A comprehensive pedagogical example illustrates the practical implications of our findings: the interplay between our fair score families, group fairness criteria, and sources of discrimination.
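As a toy illustration of the distinction between direct and proxy discrimination on a causal graph (the variables and edges below are invented; they are not the causal graph proposed in the paper), one can enumerate the directed paths from the protected attribute to the premium:

# S is the protected attribute, Y the premium; the edge S -> Y is direct
# discrimination, while any other directed path S -> ... -> Y goes through
# acceptable covariates and corresponds to proxy (indirect) discrimination.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("S", "occupation"),   # protected attribute influences an acceptable covariate
    ("occupation", "mileage"),
    ("mileage", "Y"),
    ("vehicle_type", "Y"),
    ("S", "Y"),            # direct effect of the protected attribute on the premium
])

paths = list(nx.all_simple_paths(g, source="S", target="Y"))
direct = [p for p in paths if len(p) == 2]
indirect = [p for p in paths if len(p) > 2]

print("direct discrimination paths:", direct)
print("indirect (proxy) paths:     ", indirect)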

Melting contestation: insurance fairness and machine learning

Nice review of our paper, written with Laurence Barry, on montrealethics.ai.

Machine learning tends to replace the actuary in the selection of features and the building of pricing models. However, avoiding subjective judgments thanks to automation does not necessarily mean that biases are removed. Nor does the absence of bias warrant fairness. This paper critically analyzes discrimination and insurance fairness with machine learning.

Melting contestation: insurance fairness and machine learning

A Sequentially Fair Mechanism for Multiple Sensitive Attributes

Nice review of our paper, with Philipp Ratz and François Hu, on montrealethics.ai.

Ask a group of people which biases in machine learning should be reduced, and you are likely to be showered with suggestions, making it difficult to decide where to start. To enable an objective discussion, we study a way to sequentially get rid of biases and propose a tool that can efficiently analyze the effects that the order of correction has on outcomes.

A Sequentially Fair Mechanism for Multiple Sensitive Attributes

Measuring and Mitigating Biases in Motor Insurance Pricing

Our paper, Measuring and Mitigating Biases in Motor Insurance Pricing, written with Mulah Moriah and Franck Vermet, is now available on arXiv.

The non-life insurance sector operates within a highly competitive and tightly regulated framework, confronting a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. These rates must conform to principles of transparency, explainability, and ethical considerations. Consequently, the act of pricing transcends mere statistical calculations and carries the weight of strategic and societal factors. These multifaceted concerns may drive insurers to establish equitable premiums, taking into account various variables. For instance, regulations mandate the provision of equitable premiums, considering factors such as policyholder gender or mutualist group dynamics in accordance with respective corporate strategies. Age-based premium fairness is also mandated. In certain insurance domains, variables such as the presence of serious illnesses or disabilities are emerging as new dimensions for evaluating fairness. Regardless of the motivating factor prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must possess the capability to define, measure, and ultimately mitigate any ethical biases inherent in its pricing practices while upholding standards of consistency and performance. This study seeks to provide a comprehensive set of tools for these endeavors and assess their effectiveness through practical application in the context of automobile insurance.
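As a very small example of what "measuring" such a bias can look like (this is an illustration, not the toolkit developed in the paper; column names and values are hypothetical), one can compare group-average predicted premiums, overall and within a risk class:

import pandas as pd

def premium_gap(df, premium_col="premium", group_col="gender"):
    # gap between the largest and smallest group-average predicted premium
    means = df.groupby(group_col)[premium_col].mean()
    return means.max() - means.min()

# toy portfolio
df = pd.DataFrame({
    "premium": [520., 480., 610., 590., 505., 470.],
    "gender": ["F", "M", "F", "M", "F", "M"],
    "risk_class": ["A", "A", "B", "B", "A", "A"],
})
print("overall gap:", premium_gap(df))
# the same measure within each risk class
for rc, sub in df.groupby("risk_class"):
    print(f"gap within risk class {rc}:", premium_gap(sub))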

Intelligence artificielle et individualisation des garanties en assurance: échec ou retard à l’allumage ?

With , we have just finalized an article, published in the working papers of the PARI chair, on the theme "Intelligence artificielle et individualisation des garanties en assurance : échec ou retard à l'allumage ?".

Behind the enthusiasm sparked by the use of artificial intelligence in the insurance sector lies a more nuanced reality. Take motor insurance and health insurance, for instance. Attempts to use artificial intelligence for pricing have not, so far, brought about the announced "paradigm shift". Why? Several reasons can be put forward, ranging from the fundamentals of insurance to the deliberate choice of some insurers not to touch risk pooling, in a context marked by two opposing trends: consumers seeking ever more personalized services and products, and society's refusal of solutions that would leave some individuals by the wayside of insurability.

(to comply with the aesthetic constraints of the chair's working papers ("illustration by a Flemish painter"), nothing very exotic, so thanks to Jan Havickszoon Steen's alchemist)

Fairness Explainability using Optimal Transport with Applications in Image Classification

A revised version of our paper "Fairness Explainability using Optimal Transport with Applications in Image Classification" is now online, with more discussion about counterfactuals.

Ensuring trust and accountability in Artificial Intelligence systems demands explainability of its outcomes. Despite significant progress in Explainable AI, human biases still taint a substantial portion of its training data, raising concerns about unfairness or discriminatory tendencies. Current approaches in the field of Algorithmic Fairness focus on mitigating such biases in the outcomes of a model, but few attempts have been made to try to explain why a model is biased. To bridge this gap between the two fields, we propose a comprehensive approach that uses optimal transport theory to uncover the causes of discrimination in Machine Learning applications, with a particular emphasis on image classification. We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions. This allows us to derive a cohesive system which uses the enforced fairness to measure each feature’s influence on the bias. Taking advantage of this interplay of enforcing and explaining fairness, our method holds significant implications for the development of trustworthy and unbiased AI systems, fostering transparency, accountability, and fairness in critical decision-making scenarios across diverse domains.
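For intuition on the Wasserstein-barycenter step, here is a rough one-dimensional sketch (the paper's setting, image classification with identification of bias-associated regions, is considerably richer): each score is pushed through its group's empirical CDF and then through the barycenter quantile function, which in one dimension is the proportion-weighted average of the group quantile functions.

import numpy as np

rng = np.random.default_rng(1)

# toy scores for two groups with different distributions
scores = {"A": rng.beta(2, 5, size=4000), "B": rng.beta(5, 2, size=1000)}
weights = {g: len(s) for g, s in scores.items()}
total = sum(weights.values())

def ranks(x):
    # mid-rank empirical CDF values in (0, 1)
    return (np.argsort(np.argsort(x)) + 0.5) / len(x)

# barycenter quantile function on a grid of probability levels:
# the proportion-weighted average of the group quantile functions
levels = np.linspace(0, 1, 501)
bary_quantiles = sum((weights[g] / total) * np.quantile(scores[g], levels) for g in scores)

# fair (group-blind) scores: s -> Q_bary(F_group(s))
fair = {g: np.interp(ranks(s), levels, bary_quantiles) for g, s in scores.items()}

print("group means before:", {g: round(float(s.mean()), 3) for g, s in scores.items()})
print("group means after :", {g: round(float(f.mean()), 3) for g, f in fair.items()})

After the transport step, the two group score distributions coincide (up to sampling noise), which is what makes the subsequent comparison of original and fair predictions usable as an explanation of where the bias comes from.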