Talk at the Financial Conduct Authority, UK

This morning (Montréal time), I will give a talk for the Financial Conduct Authority in London, in the UK, on “Demystify fairness and discrimination in insurance, and avoid some pitfalls”.

What makes insurance unique is that even statistical discrimination, which by definition involves no malicious intent, poses significant challenges. On the one hand, policymakers would like insurers to treat policyholders equally, without discrimination based on race, gender, age or other characteristics, even when it could make (statistical) sense to (indirectly) discriminate. On the other hand, discrimination between riskier and less risky policyholders lies at the core of actuarial work, and risk is often statistically correlated with the very sensitive characteristics that regulation would like to prohibit insurers from taking into account.

The analysis of possible discrimination in decision rules, whether human or algorithmic, is an old subject: most of the concepts date back at least to the 1950s, but recent developments in artificial intelligence have brought these issues back into the spotlight. Massive data make statistical or proxy discrimination easier, while black-box algorithms make decisions harder to understand. Not to mention the various regulations that make it difficult to collect sensitive information, and ultimately to test whether decisions are discriminatory, especially indirectly.
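To fix ideas, here is a minimal (entirely simulated, hypothetical) Python sketch of proxy discrimination, not taken from the talk: a pricing model never sees the sensitive attribute, yet a correlated rating variable re-introduces a gap in predicted premiums between the two protected groups.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (e.g., gender), never given to the model
s = rng.binomial(1, 0.5, n)

# A legitimate-looking rating variable correlated with s (a "proxy"),
# e.g., occupation or vehicle type in this toy setup
proxy = s + rng.normal(0.0, 0.5, n)

# Simulated pure premium, which here depends on the sensitive attribute
risk = 100 + 20 * s + rng.normal(0.0, 10.0, n)

# Pricing model trained WITHOUT the sensitive attribute
model = LinearRegression().fit(proxy.reshape(-1, 1), risk)
pred = model.predict(proxy.reshape(-1, 1))

# Average predicted premium still differs across protected groups:
# the proxy has re-introduced (indirect) discrimination
print(pred[s == 1].mean() - pred[s == 0].mean())
```

Note that checking the last line requires knowing s for each policyholder, which is exactly the information that regulation often prevents insurers from collecting, hence the difficulty of auditing indirect discrimination in practice.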