According to actuarial standards of practice, insurance pricing relies on grouping policyholders by risk to set adequate premiums. Modern predictive models, especially machine learning, excel at detecting statistical associations to differentiate risks, but they can learn spurious or undesired correlations. This raises concerns when socioeconomic or demographic factors may (intentionally or inadvertently) affect the fairness of insurance pricing.
Fairness in insurance is difficult to operationalize due to its ambiguity. Fairness metrics from the machine learning literature lack the segment-specific relevance actuaries require and are expressed in abstract units that obscure real-world consequences. For actuaries to intervene, proxy effects and unfair biases must be quantified in insurance-relevant terms: dollars and people.
In this paper, we focus on fairness in actuarial pricing. We study the situation where insurance rates should be fair with respect to a categorical (or discretized) sensitive variable, such as race or economic status, and the latter is fully observed (despite the possible privacy challenges).
We argue that actuarial fairness, solidarity, and causality form the three core dimensions of fairness in insurance pricing:
– Actuarial fairness aligns premiums with expected losses, mitigating cross-subsidies,
– Solidarity aligns premiums across protected groups, mitigating disparities,
– Causality ensures models capture only true risk factors, mitigating proxy effects.
We translate these dimensions into a five-point spectrum of premiums (a code sketch follows the list):
– The best-estimate premium is the most accurate predictor of losses using all available information, including the sensitive variable,
– The unaware premium is the most accurate predictor of losses using all information except the sensitive variable,
– The aware premium is the most accurate predictor of losses when controlling for the sensitive variable,
– The corrective premium is the most accurate predictor that enforces similar premium distributions across levels of the sensitive variable,
– The hyperaware premium is the most accurate approximation of the corrective premium that does not directly discriminate on the sensitive variable.
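To fix ideas, here is a minimal Python sketch of how such a spectrum could be estimated. All names (`df`, `loss`, `s`, `X_cols`) are hypothetical, any regressor could replace the gradient boosting model, and the corrective step uses a simple quantile-barycenter alignment rather than the exact procedure of the paper.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.ensemble import GradientBoostingRegressor

def fit_predict(features, target, data):
    """Fit a generic regressor and return its in-sample predictions."""
    model = GradientBoostingRegressor().fit(data[features], data[target])
    return model.predict(data[features])

# Best-estimate: all available information, sensitive variable included.
df["pi_best"] = fit_predict(X_cols + ["s"], "loss", df)

# Unaware: the sensitive variable is simply omitted.
df["pi_unaware"] = fit_predict(X_cols, "loss", df)

# Aware: control for s during fitting, then average out its direct
# effect over its marginal distribution (binary s assumed here).
aware = GradientBoostingRegressor().fit(df[X_cols + ["s"]], df["loss"])
p1 = df["s"].mean()
df["pi_aware"] = ((1 - p1) * aware.predict(df[X_cols].assign(s=0))
                  + p1 * aware.predict(df[X_cols].assign(s=1)))

def align_distributions(pi, s):
    """Map each group's premiums to the quantile barycenter, so that
    premium distributions coincide across levels of s."""
    groups, counts = np.unique(s, return_counts=True)
    weights = counts / counts.sum()
    u = np.empty(len(pi))
    for g in groups:
        m = s == g
        u[m] = rankdata(pi[m]) / (m.sum() + 1)  # within-group ranks
    return sum(w * np.quantile(pi[s == g], u)
               for g, w in zip(groups, weights))

# Corrective: enforce similar premium distributions across groups.
df["pi_corrective"] = align_distributions(df["pi_best"].to_numpy(),
                                          df["s"].to_numpy())

# Hyperaware: closest approximation of the corrective premium that
# never uses the sensitive variable directly.
df["pi_hyperaware"] = fit_predict(X_cols, "pi_corrective", df)
```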
We define actuarially relevant local metrics that quantify the potential monetary impact of unfairness at the policyholder level. Proxy vulnerability is the difference between unaware and aware premiums; it locally measures how much the allowed variables pick up the signal of a missing sensitive variable. We also define post-pricing local metrics to evaluate the fairness of any pricing structure relative to the estimated spectrum.
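Continuing the sketch, these local metrics are simple per-policyholder differences, expressed in dollars; `price` stands for any candidate commercial price and is, again, a hypothetical column name.

```python
# Proxy vulnerability: how much the allowed variables pick up the
# signal of the missing sensitive variable, per policyholder.
df["proxy_vulnerability"] = df["pi_unaware"] - df["pi_aware"]

# Post-pricing metric: dollar deviation of a candidate commercial
# price from the corrective benchmark.
df["corrective_gap"] = df["price"] - df["pi_corrective"]
```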
We partition policyholders to expose the segments in which unfair discrimination is most severe. These components are integrated into a fairness assessment framework that pinpoints the segments most affected by unfairness and evaluates local metrics to diagnose unfairness and guide intervention.
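As an illustration of the segmentation step (a sketch only; the paper's own partitioning procedure may differ), a shallow regression tree fitted to a local metric yields interpretable segments that can be ranked by average monetary impact:

```python
from sklearn.tree import DecisionTreeRegressor

# Leaves of a shallow tree on the rating factors define segments in
# which the local unfairness metric is roughly homogeneous.
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=5_000)
tree.fit(df[X_cols], df["proxy_vulnerability"])
df["segment"] = tree.apply(df[X_cols])

# Rank segments by mean dollar impact to guide intervention.
print(df.groupby("segment")["proxy_vulnerability"]
        .agg(["mean", "size"])
        .sort_values("mean"))
```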
We illustrate our approach with a large case study inspired by industry practice. The analysis relies on a real dataset of approximately 768,000 vehicles insured in Québec (2016–2017), covering at-fault material damage claims. We examine the fairness of a pseudo-commercial price with respect to a discretized credit score: low (vulnerable group) vs. high. This sensitive variable measures the policyholder's economic precariousness.
– Proxy vulnerability is both material and skewed: while most policyholders may receive a modest rebate, a vulnerable minority could face 15–30% overpricing if the regulation only requires that the sensitive variable be omitted,
– Our integrated framework illustrates that fairness in insurance pricing can be assessed efficiently, with minimal analyst effort. The framework provides simultaneous diagnostics across the three fairness dimensions, translates unfairness into dollar terms at the individual level, and highlights disparities across population segments.
We provide additional information, together with the complete code illustrated on a comprehensive simulated-data example, in the online supplementary material.
Designed for routine portfolio monitoring, our toolbox delivers valuable insights whether or not the sensitive attribute is included in pricing, provided it is available for assessment. The toolbox scales to large datasets and rich covariate sets, making fairness operationalizable for actuaries: intuitive, practical, and encompassing the three fairness dimensions.
From Tuesday to Friday, I will attend the 60th Actuarial Research Conference in Toronto. With Olivier and Marie-Pier Côté, we will give a series of talks on fairness and discrimination.
I will give a talk in an Invited Session on Artificial Intelligence in Insurance (as will Marie-Pier Côté)
Olivier Côté will present in Session 1 – Bias in Assessing Financial Risk
With Olivier and Marie-Pier, we will present in one of the Casualty Actuarial Society Sponsored Sessions, Session 1 – A Scalable Toolbox for Exposing Indirect Discrimination in Insurance Rates
Fairness centres on people. In insurance, the scope of fairness should be the entire insured population, not solely an insurer's clients. However, each insurance company's portfolio represents a possibly skewed subsample. Models fit to these selection-biased data do not generalise well to the broader population of insureds. Two biases stem from portfolio composition: representation bias, when large prediction errors are made on individuals from infrequently observed subpopulations, and selection bias, when underwriting and marketing skew the portfolio away from the insured population. We examine how portfolio composition affects fair premium methodologies for mitigating direct and indirect discrimination on a protected attribute. We illustrate how unfairness mitigation based on a selection-biased portfolio does not yield a fair market from the perspective of insureds. Relying on causal inference and a portfolio composition indicator, we describe the selection mechanism and determine conditions under which each bias affects various fairness-adjusted premiums. We propose a method to recover the population-wide fairness-adjusted premiums from selection-biased data, by using a (third-party provided) unbiased estimate of the prohibited attribute distribution. We show that this approach effectively mitigates selection bias but leads to overall premiums that are not balanced. In a limiting case, we show that portfolio-specific fairness-aware premiums can lead to a market-wide unawareness strategy: portfolio composition opens the back door to proxy discrimination.
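To illustrate the recovery idea (a sketch under simplifying assumptions, not the paper's estimator): if selection operates through the prohibited attribute, reweighting the portfolio so that the attribute's distribution matches the third-party population estimate lets any premium model be refit at the population level. The DataFrame `df`, columns `s` and `loss`, rating factors `X_cols`, and the shares in `pop_share` are all hypothetical.

```python
from sklearn.ensemble import GradientBoostingRegressor

# Third-party, unbiased estimate of the prohibited attribute's
# population distribution (placeholder figures).
pop_share = {0: 0.35, 1: 0.65}

# Observed shares in the (selection-biased) portfolio.
port_share = df["s"].value_counts(normalize=True)

# Importance weights: population share over portfolio share.
df["w"] = df["s"].map(lambda g: pop_share[g] / port_share[g])

# Refit the premium model with these weights to approximate the
# population-wide fairness-adjusted premium.
model = GradientBoostingRegressor()
model.fit(df[X_cols], df["loss"], sample_weight=df["w"])
```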
In many jurisdictions, insurance companies must not discriminate on some given policyholder characteristics. Omission of prohibited variables from models prevents direct discrimination, but fails to address proxy discrimination, a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of acceptable covariates. The lack of formal definition for key fairness concepts, in particular indirect discrimination, hinders the fairness assessment of methodologies. We review causal inference notions and introduce a causal graph tailored for fairness in insurance. Exploiting these, we discuss potential sources of bias, formally define direct and indirect discrimination, and study the properties of fairness methodologies. A novel categorization of fair methodologies into five families (best-estimate, unaware, aware, hyperaware, and corrective) is constructed based on their expected fairness properties. A comprehensive pedagogical example illustrates the practical implications of our findings: the interplay between our fair score families, group fairness criteria, and sources of discrimination.
This morning, I will be giving a talk for the Thelem-ILB Chaire (Thelem, founded in 1820 and historically a mutual fire insurance company in the département du Loiret, is one of France's oldest companies) on fairness and discrimination in insurance, and more specifically on counterfactual fairness and causal graphs. It is based on recent work with Olivier Côté, our PhD student at Laval (Québec), co-supervised with Marie-Pier.
Tomorrow morning, with Olivier and Marie-Pier Côté, we will be at the insurer Intact to talk about fairness and discrimination. Olivier will present his recent work on using causal models to build "fair" pricing models in insurance. The paper (a fair price to pay: exploiting directed acyclic graphs for fairness in insurance) will be available soon!
Many jurisdictions have laws or guidelines stipulating that insurance companies must not discriminate on some specified policyholder characteristics. Omission of the prohibited variables from the models removes direct discrimination, but does not prevent proxy discrimination — a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of allowed covariates. In the actuarial literature, there remains some confusion on the definition of indirect discrimination: this impedes the understanding of the goals of each fairness methodology and their comparison. In the causal inference literature, many tools, such as directed acyclic graphs (DAGs), help uncover various types of biases. A DAG describes the causal relationships between variables of interest and has clear dependence implications. We exploit this tool for fairness to formally define direct and indirect discrimination, to discuss potential sources of bias, and to understand the properties of different fairness methodologies. Four families of fair scores (best-estimate, unaware, aware and corrective) are placed in the DAG representing the insurance pricing problem. This allows us to study their behaviour in terms of direct and indirect discrimination. A comprehensive pedagogical example illustrates our findings.
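For concreteness, here is a toy version of such a DAG in Python with networkx; the node set is illustrative (S: protected attribute, X: allowed covariates, U: unobserved risk factors, Y: losses), not the paper's exact graph.

```python
import networkx as nx

# Edges encode causal influence. The path S -> X -> Y is the proxy
# channel behind indirect discrimination; S -> Y is a direct effect.
dag = nx.DiGraph([
    ("S", "X"), ("S", "Y"),
    ("U", "X"), ("U", "Y"),
    ("X", "Y"),
])
assert nx.is_directed_acyclic_graph(dag)

# Every ancestor of Y that is also influenced by S is a potential
# proxy; here the allowed covariates X qualify.
print(nx.ancestors(dag, "Y"))  # {'S', 'U', 'X'}
```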
More to come soon…
"sendo l'intento mio scrivere cosa utile a chi la intende…"