Tag Archives: fairness

A Fair Pricing Model via Adversarial Learning

We recently uploaded a revised version of our joint paper A Fair Pricing Model via Adversarial Learning, on ArXiv.

At the core of the insurance business lies classification between risky and non-risky insureds, actuarial fairness meaning that risky insureds should contribute more and pay a higher premium than non-risky or less-risky ones. Actuaries, therefore, use econometric or machine learning techniques to classify, but the distinction between a fair actuarial classification and "discrimination" is subtle. For this reason, there is a growing interest in fairness and discrimination in the actuarial community (Lindholm et al., 2022). Presumably, non-sensitive characteristics can serve as substitutes or proxies for protected attributes. For example, the color and model of a car, combined with the driver's occupation, may lead to an undesirable gender bias in the prediction of car insurance prices. Fairness in insurance pricing is a relatively new and much-requested topic, especially in light of new laws and regulations and past issues encountered in practice (Embrechts and Wüthrich, 2022; Frees and Huang, 2021; Gao and Wüthrich, 2018). Consequently, companies and regulators are looking for new methodologies to ensure a sufficient level of fairness while maintaining an adequate accuracy of predictive models. This paper discusses the importance of adapting the traditional fairness algorithms to specific real-life applications and, in particular, to insurance pricing. We claim that mitigating undesired biases with a generic fair algorithm can be counterproductive in insurance. We will show that traditional Fair-ML methods, such as adversarial ones, are not currently adequate for insurance pricing. Therefore, for these purposes, we have developed a more suitable and effective framework to satisfy a fairness objective while maintaining a sufficient level of predictor accuracy. Inspired by recent approaches, Blier-Wong et al. (2021) and Wüthrich and Merz (2021), that have shown the value of autoencoders in pricing, we will show that this idea can be generalized to multiple pricing factors (geographic, car type) and that it is well adapted to a fairness context (since it allows the set of pricing components to be debiased): we extend this main idea to a general framework in which a single whole pricing model is trained by generating the geographic and car pricing components needed to predict the pure premium while mitigating the unwanted bias according to the desired metric.
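To fix ideas, here is a minimal sketch, in Python/PyTorch, of the generic adversarial fairness mechanism mentioned above (the kind of Fair-ML building block the paper argues must be adapted for pricing). The names (predictor, adversary, lambda_fair) and dimensions are illustrative assumptions, not the paper's architecture: a predictor fits the pure premium while an adversary tries to recover the sensitive attribute from the price, and the predictor is penalized whenever the adversary succeeds.

# Toy sketch of adversarial debiasing for a pricing model (illustrative only).
import torch
import torch.nn as nn

n_features = 10          # tabular rating factors (assumed dimension)
predictor = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # tries to recover s from the price
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()
lambda_fair = 1.0        # accuracy / fairness trade-off (assumed value)

def training_step(x, y, s):
    # s: binary sensitive attribute, as a float tensor of shape (n, 1)
    # 1) update the adversary: predict s from the (detached) price
    price = predictor(x).detach()
    opt_adv.zero_grad()
    adv_loss = bce(adversary(price), s)
    adv_loss.backward()
    opt_adv.step()
    # 2) update the predictor: fit the pure premium while fooling the adversary
    price = predictor(x)
    opt_pred.zero_grad()
    loss = mse(price, y) - lambda_fair * bce(adversary(price), s)
    loss.backward()
    opt_pred.step()
    return loss.item()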

There are more examples in this revised version, including the case of a non-binary target and of a non-binary sensitive attribute, such as a spatial one.

Quantifying fairness and discrimination in predictive models 

In about ten days, late in the evening (Montréal time), I will attend the 16th Annual Conference of the Thailand Econometric Society (on Machine Learning for Econometrics and Related Topics), at Chiang Mai University (มหาวิทยาลัยเชียงใหม่). I will give an introductory talk (at 21:30, with the jet lag) on quantifying fairness and discrimination in predictive models (the state-of-the-art paper I will present is online on arXiv), and the slides are now also available (I won't be able to travel to Chiang Mai, unfortunately, so I will be on Zoom).

The analysis of discrimination has long interested economists and lawyers. In recent years, the literature in computer science and machine learning has also taken up the subject, offering an interesting re-reading of the topic. These questions follow numerous criticisms of algorithms used to translate texts or to identify people in images. With the arrival of massive data, and the use of increasingly opaque algorithms, it is not surprising to end up with discriminatory algorithms, because it has become easy to obtain a proxy of a sensitive variable by enriching the data indefinitely. According to Kranzberg (1986), "technology is neither good nor bad, nor is it neutral", and therefore, "machine learning won't give you anything like gender neutrality 'for free' that you didn't explicitly ask for", as claimed by Kearns et al. (2019). In this article, we come back to the general context of predictive models for classification. We will present the main fairness concepts, known as group fairness, based on independence between the sensitive variable and the prediction, possibly conditional on additional information. We will then go further, presenting the concepts of individual fairness. Finally, we will see how to correct a potential discrimination, in order to guarantee that a model is more ethical.
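As a small, self-contained illustration of these group-fairness criteria (a toy example with simulated data, not taken from the paper), demographic parity compares the rate of positive predictions across the two groups defined by the sensitive attribute, while equalized odds makes the same comparison conditional on the true outcome:

# Toy computation of two group-fairness gaps (simulated data, illustrative only).
import numpy as np

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=1000)                        # sensitive attribute
y = rng.integers(0, 2, size=1000)                        # true outcome
y_hat = (rng.random(1000) < 0.4 + 0.2 * s).astype(int)   # deliberately biased predictions

# Demographic parity: P(y_hat = 1 | s = 1) vs P(y_hat = 1 | s = 0)
dp_gap = abs(y_hat[s == 1].mean() - y_hat[s == 0].mean())

# Equalized odds: same comparison, conditional on the true outcome y
eo_gap = max(
    abs(y_hat[(s == 1) & (y == k)].mean() - y_hat[(s == 0) & (y == k)].mean())
    for k in (0, 1)
)
print(f"demographic parity gap: {dp_gap:.3f}, equalized odds gap: {eo_gap:.3f}")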

Montréal AI Symposium 2022

In about ten days (Saturday afternoon), I will be presenting a poster on fairness, discrimination and insurance at the Montréal AI Symposium, based on our joint paper The Fairness of Machine Learning in Insurance: New Rags for an Old Man?, written with Laurence Barry. Since the paper was quite literary, I used material from the document Insurance: Discrimination, Biases & Fairness to get a more visual poster. Additional information will come up while discussing…

This is the poster used at the conference.

What is the future of predictive probabilities in insurance?

This post was written with Laurence Barry and Ewen Gallic, in French, in November 2019 (see hal-02350006)

Insurance policies are classic examples of random contracts, which forces insurers to regularly quantify the underlying uncertainty and to compute probabilities in order to propose "fair" premiums for the commitments they make. Isn't it time to question this practice, at a time when artificial intelligence is exploding, offering predictive algorithms of a precision never seen before? At a time when big data / big brother could mean the disappearance of uncertainty itself?
Continue reading What is the future of predictive probabilities in insurance?

Talk at University of Illinois Urbana-Champaign

This Friday, during our semester break in Montréal, I will be giving a talk at the University of Illinois Urbana-Champaign, on fairness and discrimination in actuarial pricing. As mentioned in the abstract, the talk will be based on two recent papers, The Fairness of Machine Learning in Insurance: New Rags for an Old Man? and A Fair Pricing Model via Adversarial Learning.

Slides are now online.

A fair pricing model via adversarial learning

With Vincent Grari, Sylvain Lamprier and Marcin Detyniecki, we recently uploaded our paper A Fair Pricing Model via Adversarial Learning on ArXiv.

At the core of the insurance business lies classification between risky and non-risky insureds, actuarial fairness meaning that risky insureds should contribute more and pay a higher premium than non-risky or less-risky ones. Actuaries, therefore, use econometric or machine learning techniques to classify, but the distinction between a fair actuarial classification and "discrimination" is subtle. For this reason, there is a growing interest in fairness and discrimination in the actuarial community (Lindholm, Richman, Tsanakas, and Wüthrich, 2022). Presumably, non-sensitive characteristics can serve as substitutes or proxies for protected attributes. For example, the color and model of a car, combined with the driver's occupation, may lead to an undesirable gender bias in the prediction of car insurance prices. Surprisingly, we will show that (1) debiasing the predictor alone may be insufficient to maintain adequate accuracy. Indeed, the traditional pricing model is currently built in a two-stage structure that considers many potentially biased components such as car or geographic risks. We will show that this traditional structure has significant limitations in achieving fairness. For this reason, we have developed a novel pricing model approach. Recently, some approaches (Blier-Wong, Cossette, Lamontagne, and Marceau, 2021; Wüthrich and Merz, 2021) have shown the value of autoencoders in pricing. In this paper, we will show that (2) this can be generalized to multiple pricing factors (geographic, car type), and that (3) it is perfectly adapted to a fairness context (since it allows the set of pricing components to be debiased): we extend this main idea to a general framework in which a single whole pricing model is trained by generating the geographic and car pricing components needed to predict the pure premium while mitigating the unwanted bias according to the desired metric.
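As an illustration of the autoencoder idea mentioned above (a sketch with assumed dimensions and layer sizes, not the architecture of the paper), a set of raw geographic variables can be compressed into a low-dimensional pricing component, which is then fed into the premium model; it is this component, rather than the final price alone, that can be debiased.

# Toy autoencoder producing a low-dimensional geographic pricing component (illustrative only).
import torch
import torch.nn as nn

n_geo = 50   # e.g. territorial indicators and other geographic variables (assumed)
latent = 2   # dimension of the geographic pricing component (assumed)

encoder = nn.Sequential(nn.Linear(n_geo, 16), nn.ReLU(), nn.Linear(16, latent))
decoder = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(), nn.Linear(16, n_geo))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
mse = nn.MSELoss()

def autoencoder_step(x_geo):
    opt.zero_grad()
    z = encoder(x_geo)               # low-dimensional geographic component
    loss = mse(decoder(z), x_geo)    # reconstruction error
    loss.backward()
    opt.step()
    return z.detach(), loss.item()

# The latent component z replaces the raw geographic variables in the pricing model,
# and an adversarial penalty (as sketched earlier) can be applied to z itself.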

Insurance, biases, discrimination and fairness, at IVADO

This Friday, I will give a talk to the IVADO working group, invited by Dany Plourde. The slides are now online. As an introduction, this little anecdote

The talk will be fairly standard, with a quick introduction on insurance, before coming back to three key notions: discrimination, biases and fairness measures

Since this is the first time I give a talk in Québec, I will go over a few market statistics

and a comparison of states in the United States and provinces in Canada, to characterize possible discriminations

On biases, I will come back to two of my favourites, Simpson's paradox and ecological inference (a small numerical illustration of the former is sketched below)
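Here is a classic toy illustration of Simpson's paradox (invented numbers): group A does better than group B within each segment, yet worse overall once the segments are aggregated.

# Simpson's paradox on a small made-up acceptance dataset (illustrative only).
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "B", "B"],
    "segment":  ["easy", "hard", "easy", "hard"],
    "applied":  [100, 900, 900, 100],
    "accepted": [ 90, 270, 720,  20],
})
by_segment = df.assign(rate=df.accepted / df.applied)
overall = df.groupby("group")[["applied", "accepted"]].sum()
overall["rate"] = overall.accepted / overall.applied
print(by_segment)   # A beats B within each segment (0.90 vs 0.80, and 0.30 vs 0.20)
print(overall)      # ... but B beats A overall (0.74 vs 0.36)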

as well as Goodhart's law and feedback biases

We will finish with a few definitions of fairness

Insurance against Natural Catastrophes: Balancing Actuarial Fairness and Social Solidarity

Our research paper, Insurance against Natural Catastrophes: Balancing Actuarial Fairness and Social Solidarity, with Molly James and Laurence Barry, is now published in the Geneva Papers on Risk and Insurance.

Natural disasters offer a special case for the study of the private and public insurance mix. Indeed, the experience accumulated over the past decades has made it possible to transform poorly known hazards, long considered uninsurable, into risks that can be assessed with some precision. They exemplify, however, the limits of the risk-based premium method, as it might imply unaffordability for some. The French scheme reflects such ideas and offers wide coverage for moderate premiums to all, but is shaken by climate change: we show that some wealthier areas, which were not perceived as "at risk" in the past, have become exposed to submersion risk in the future. This, singularly, makes some well-off properties the potential main beneficiaries of a scheme that was historically thought to protect the worst-off. Acknowledging that some segmentation might become desirable, we examine several models for flood risk and the disparity in premiums they entail.
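To make the tension between actuarial fairness and solidarity concrete (with invented numbers, not the figures from the paper), here is a minimal sketch comparing a risk-based pure premium per zone with a flat premium pooled over all policyholders:

# Toy comparison of risk-based vs flat (solidarity) premiums (invented numbers).
import numpy as np

# three zones with different annual flood probabilities and insured values (assumed)
p_flood = np.array([0.001, 0.01, 0.05])           # annual probability of a flood loss
insured = np.array([200_000, 250_000, 400_000])   # value at risk per policy, in EUR
n_pol = np.array([10_000, 5_000, 1_000])          # number of policies per zone

pure_premium = p_flood * insured                            # actuarially fair premium per zone
flat_premium = (n_pol * pure_premium).sum() / n_pol.sum()   # same price for everyone

print("risk-based premiums:", pure_premium)    # roughly 200, 2500 and 20000 EUR
print("flat premium:       ", round(flat_premium, 2))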