A Fair Pricing Model via Adversarial Learning

We recently uploaded a revised version of our joint paper, A Fair Pricing Model via Adversarial Learning, on arXiv.

At the core of the insurance business lies the classification between risky and non-risky insureds, actuarial fairness meaning that risky insureds should contribute more and pay a higher premium than non-risky or less-risky ones. Actuaries therefore use econometric or machine learning techniques to classify, but the distinction between a fair actuarial classification and 'discrimination' is subtle, and there is growing interest in fairness and discrimination in the actuarial community (Lindholm et al., 2022). Indeed, seemingly non-sensitive characteristics can serve as substitutes or proxies for protected attributes. For example, the color and model of a car, combined with the driver's occupation, may lead to an undesirable gender bias in the prediction of car insurance prices.

Fairness in insurance pricing is a relatively new and highly requested topic, especially in light of new laws and regulations and of issues encountered in practice (Embrechts and Wüthrich, 2022; Frees and Huang, 2021; Gao and Wüthrich, 2018). Consequently, companies and regulators are looking for new methodologies that ensure a sufficient level of fairness while maintaining an adequate accuracy of predictive models. This paper discusses the importance of adapting traditional fairness algorithms to specific real-life applications and, in particular, to insurance pricing. We claim that mitigating undesired biases with a generic fair algorithm can be counterproductive in insurance, and we show that traditional fair-ML approaches, such as adversarial methods, are not currently adequate for insurance pricing. We therefore develop a more suitable and effective framework that satisfies a fairness objective while maintaining a sufficient level of predictive accuracy.

Inspired by recent approaches, Blier et al. (2021) and Wüthrich et al. (2021), which have shown the value of autoencoders in pricing, we show that this structure (1) can be generalized to multiple pricing factors (geographic, car type), and (2) is better adapted to a fairness context, since it allows the set of pricing components to be debiased. We extend this idea to a general framework in which a single, whole pricing model is trained by generating the geographic and car pricing components needed to predict the pure premium, while mitigating the unwanted bias according to the desired metric.
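To give a concrete (purely illustrative) idea of what an adversarial fair-ML approach looks like, here is a minimal PyTorch sketch of adversarial debiasing for a pricing-style regression task: a predictor is trained to fit the target while an adversary tries to recover a binary sensitive attribute from the predictions. The network architectures, the toy data and the `lambda_fair` trade-off parameter are assumptions made for illustration; this is not the architecture or training procedure used in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 8 rating factors, a positive "pure premium"-like target, a binary sensitive attribute.
n, d = 2048, 8
X = torch.randn(n, d)
S = (torch.rand(n) < 0.5).float()                       # sensitive attribute (e.g. gender)
y = torch.exp(0.3 * X[:, 0] - 0.2 * X[:, 1]) + 0.5 * S  # target that leaks information on S

predictor = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # reads the prediction only

opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()
lambda_fair = 1.0  # trade-off between pricing accuracy and (demographic-parity style) fairness

for step in range(2000):
    # 1) the adversary tries to recover S from the current predictions
    with torch.no_grad():
        y_hat = predictor(X)
    adv_loss = bce(adversary(y_hat).squeeze(), S)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) the predictor minimises pricing error while fooling the adversary
    y_hat = predictor(X)
    pred_loss = mse(y_hat.squeeze(), y) - lambda_fair * bce(adversary(y_hat).squeeze(), S)
    opt_pred.zero_grad(); pred_loss.backward(); opt_pred.step()
```

Increasing `lambda_fair` trades predictive accuracy for fairness: the more the predictor is rewarded for fooling the adversary, the less its output can depend on the sensitive attribute.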

There are more examples in this revised version, including the case of a non-binary target and of a non-binary sensitive attribute, such as a spatial one.
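For the non-binary case, a natural variant of the sketch above replaces the adversarial classifier with a regressor that tries to recover a continuous (e.g. spatial) sensitive attribute. The snippet below is again a purely illustrative assumption, showing only how the two losses change; it is not the exact penalty used in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(1)

# Toy data: positive (non-binary) target and a continuous, spatial-like sensitive attribute.
n, d = 512, 6
X = torch.randn(n, d)
s_spatial = X[:, 0] + 0.1 * torch.randn(n)      # e.g. a standardised geographic coordinate
y = torch.exp(0.4 * X[:, 0] + 0.2 * X[:, 1])    # pure-premium-like positive target

predictor = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # regressor, not classifier
mse = nn.MSELoss()
lambda_fair = 1.0

# One alternating step (wrap in a loop with two optimisers, as above, to actually train):
y_hat = predictor(X)
adv_loss = mse(adversary(y_hat.detach()).squeeze(), s_spatial)                 # adversary recovers location
pred_loss = mse(y_hat.squeeze(), y) - lambda_fair * mse(adversary(y_hat).squeeze(), s_spatial)
```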