Category Archives: Research

Artificial Intelligence and Personalization of Insurance: Failure or Delayed Ignition?

Our joint paper, Artificial Intelligence and Personalization of Insurance: Failure or Delayed Ignition?, with Xavier Vamparys, has been published in Big Data & Society.

In insurance, there is still a significant gap between the anticipated disruption, due to big data and machine learning algorithms, and the actual implementation of behaviour-based personalization, as described by Meyers (2018). Here, we identify eight key factors that serve as fundamental obstacles to the radical transformation of insurance guarantees, aiming to closely align them with the risk profile of each policyholder. These obstacles include the collective nature of insurance, the entrenched beliefs of some insurance companies, challenges related to data collection and use for personalized pricing, limited interest from insurers in adopting new models, and policyholders' reluctance to embrace connected devices. Additionally, the hurdles of explainability, insurer inertia and ethical or societal considerations further complicate the path toward achieving highly individualized insurance pricing.

Assurance, IA, biais et équité

To accompany my talk to the Association des Masters d'Actuariat, I am posting a note that goes through the slides of my presentation, along with some of the accompanying commentary. First of all, apologies: the slides are in English. What I discuss is loosely based on recent work carried out with quite a few people, including Laurence Barry (of the PARI chair), Marie-Pier Côté (professor at Université Laval, in Québec City), Olivier Côté (PhD student I co-supervise with Marie-Pier), Agathe Fernandes Machado (PhD student at UQAM), Ewen Gallic (associate professor in Marseille), François Hu (former postdoctoral fellow, now a consultant in France), Philipp Ratz (former postdoctoral fellow, now a consultant in Switzerland), as well as Ana Patron Pinerez (a former intern from Colombia) and Mulah Moriah (PhD student in France, whom I supervised during his master's thesis).

Continue reading Assurance, IA, biais et équité

KNN and K-means in Gini Prametric Spaces

With Cassandra Mussard, who was an intern here last summer, and Stéphane Mussard, we uploaded a paper entitled KNN and K-means in Gini Prametric Spaces on ArXiv.

This paper introduces innovative enhancements to the K-means and K-nearest neighbors (KNN) algorithms based on the concept of Gini prametric spaces. Unlike traditional distance metrics, Gini-based measures incorporate both value-based and rank-based information, improving robustness to noise and outliers. The main contributions of this work include: proposing a Gini-based measure that captures both rank information and value distances; presenting a Gini K-means algorithm that is proven to converge and demonstrates resilience to noisy data; and introducing a Gini KNN method that performs competitively with state-of-the-art approaches such as Hassanat’s distance in noisy environments. Experimental evaluations on 14 datasets from the UCI repository demonstrate the superior performance and efficiency of Gini-based algorithms in clustering and classification tasks. This work opens new avenues for leveraging rank-based measures in machine learning and statistical analysis.
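To give a flavour of the idea, here is a minimal, hypothetical sketch in Python of a dissimilarity that mixes value-based and rank-based information, plugged into a brute-force KNN. It only illustrates the intuition of combining ranks and values; it is not the paper's actual definition of the Gini prametric.

import numpy as np
from scipy.stats import rankdata

def rank_value_dissimilarity(X, alpha=0.5):
    """Pairwise dissimilarities mixing value-based (L1) and rank-based (L1 on
    column-wise ranks) information; alpha tunes the weight given to ranks."""
    X = np.asarray(X, dtype=float)
    R = np.apply_along_axis(rankdata, 0, X)                 # ranks, column by column
    D_val = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)
    D_rank = np.abs(R[:, None, :] - R[None, :, :]).sum(axis=2)
    D_val /= D_val.max() if D_val.max() > 0 else 1.0        # put both parts on [0, 1]
    D_rank /= D_rank.max() if D_rank.max() > 0 else 1.0
    return alpha * D_rank + (1 - alpha) * D_val

def knn_predict(D, y, train_idx, test_idx, k=5):
    """Majority-vote KNN using a precomputed dissimilarity matrix D."""
    preds = []
    for i in test_idx:
        nearest = train_idx[np.argsort(D[i, train_idx])[:k]]
        labels, counts = np.unique(y[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# usage: D = rank_value_dissimilarity(X); knn_predict(D, y, np.arange(100), np.arange(100, 120))

With alpha = 1 the comparison relies only on ranks, which is what makes such measures insensitive to outliers in the raw values.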

Optimal Transport on Categorical Data for Counterfactuals using Compositional Data and Dirichlet Transport

Our recent paper, Optimal Transport on Categorical Data for Counterfactuals using Compositional Data and Dirichlet Transport, written with Agathe and Ewen, is now online.

Recently, optimal transport-based approaches have gained attention for deriving counterfactuals, e.g., to quantify algorithmic discrimination. However, in the general multivariate setting, these methods are often opaque and difficult to interpret. To address this, alternative methodologies have been proposed, using causal graphs combined with iterative quantile regressions (Plečko and Meinshausen, 2020) or sequential transport (Fernandes Machado et al., 2025) to examine fairness at the individual level, often referred to as “counterfactual fairness.” Despite these advancements, transporting categorical variables remains a significant challenge in practical applications with real datasets.
In this paper, we propose a novel approach to address this issue. Our method involves (1) converting categorical variables into compositional data and (2) transporting these compositions within the probabilistic simplex of \mathbb{R}^d. We demonstrate the applicability and effectiveness of this approach through an illustration on real-world data, and discuss limitations.

(Figure: https://freakonometrics.hypotheses.org/files/2025/01/transp2.png)

See https://github.com/fer-agathe/transport-simplex for the code.
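For readers who want a feel for steps (1) and (2), here is a minimal, hypothetical Python sketch: a categorical variable is represented by a composition (a probability vector over its levels), and log-ratio transforms move that composition between the simplex and Euclidean space, where a naive displacement stands in for the transport step. This is not the Dirichlet transport of the paper; the repository above contains the actual implementation.

import numpy as np

def clr(p, eps=1e-10):
    """Centered log-ratio transform: simplex of R^d -> R^d (components sum to 0)."""
    p = np.clip(p, eps, None)
    logp = np.log(p)
    return logp - logp.mean(axis=-1, keepdims=True)

def clr_inverse(z):
    """Back to the simplex (softmax inverts clr on the simplex)."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# toy compositions for a 3-level categorical variable, e.g. P(level | features)
p0 = np.array([0.7, 0.2, 0.1])   # composition in group 0
p1 = np.array([0.3, 0.4, 0.3])   # target composition in group 1 (illustration only)

# a naive "transport" in clr coordinates: move p0 towards p1 along a straight line
t = 1.0                                    # t in [0, 1]: how far to move
z = (1 - t) * clr(p0) + t * clr(p1)
counterfactual_composition = clr_inverse(z)
print(counterfactual_composition)          # lands back on the simplex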

Assurance des catastrophes : les routes de l’enfer sont pavées de bonnes intentions

That was the original title of the article Laurence Barry (co-holder of the PARI research chair, a research programme on the apprehension of risks and uncertainties, hosted under the aegis of the Institut Louis Bachelier in partnership with ENSAE/CREST and Sciences Po) and I wrote last week, which has been published on the website of the daily Le Monde.

I will post the original article online later; in the meantime, I can share a draft of the section on what has been happening in California (which has the advantage of being partly sourced). For the French case, I can point back to a short article we wrote a few months ago, Rapport Langreney : lutter contre le désengagement des assureurs dans la couverture des risques climatiques.

Continue reading Assurance des catastrophes : les routes de l’enfer sont pavées de bonnes intentions

A fair price to pay: Exploiting causal graphs for fairness in insurance

Our paper "A fair price to pay: Exploiting causal graphs for fairness in insurance", written with Olivier Côté and Marie-Pier Côté, just appeared in the Journal of Risk and Insurance.

In many jurisdictions, insurance companies are prohibited from discriminating based on certain policyholder characteristics. Exclusion of prohibited variables from models prevents direct discrimination, but fails to address proxy discrimination, a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of acceptable covariates. The lack of formal definition for key fairness concepts, in particular indirect discrimination, hinders effective fairness assessment. We review causal inference notions and introduce a causal graph tailored for fairness in insurance. Exploiting these, we discuss potential sources of bias, formally define direct and indirect discrimination, and study the theoretical properties of fairness methodologies. A novel categorization of fair methodologies into five families (best-estimate, unaware, aware, hyperaware, and corrective) is constructed based on their expected fairness properties. A comprehensive pedagogical example illustrates the implications of our findings: the interplay between our fair score families, group fairness criteria, and discrimination.
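As a toy illustration of two of the five families, here is a hypothetical Python sketch comparing an "unaware" premium (prohibited attribute dropped) with an "aware" one (attribute included) on simulated claim counts. The best-estimate, hyperaware and corrective families rely on the causal graph developed in the paper and are not reproduced here.

import numpy as np
import pandas as pd
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n = 5_000
protected = rng.integers(0, 2, n)                  # e.g. a prohibited characteristic
age = rng.uniform(18, 80, n)
proxy = age + 5 * protected + rng.normal(0, 2, n)  # acceptable covariate correlated with it
lam = np.exp(-2 + 0.01 * age + 0.3 * protected)    # true claim frequency
claims = rng.poisson(lam)

X_unaware = pd.DataFrame({"age": age, "proxy": proxy})
X_aware = X_unaware.assign(protected=protected)

unaware = PoissonRegressor().fit(X_unaware, claims)   # attribute excluded, but the proxy
aware = PoissonRegressor().fit(X_aware, claims)       # may still carry it (proxy discrimination)

print(pd.DataFrame({
    "unaware": unaware.predict(X_unaware),
    "aware": aware.predict(X_aware),
}).groupby(protected).mean())                          # average predicted premium by group

Even the unaware model ends up charging the two groups differently here, because the acceptable covariate acts as a proxy; this is exactly the phenomenon the paper's formal definitions are meant to capture.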

Le dernier numéro de l’Actuariel est paru

Hot off the press, the latest issue of l'Actuariel is out, with in particular an article, Open finance : Big bang annoncé dans l'assurance, for which I had given a long interview. Beyond a few ideas that can be found here and there, I can mention one short sentence that seems to have caught some attention…

"The debate is framed in a disingenuous way, leading citizens to believe that they will benefit from more personalized products, without recalling that insurance is very often a zero-sum game, and that if some pay less, it means others pay more," says Arthur Charpentier, professor at the Université du Québec à Montréal and Fellow actuary.

Optimal vaccination policy to prevent endemicity: a stochastic model

Our paper, Optimal vaccination policy to prevent endemicity: a stochastic model, written with Félix Foutel-Rodier and Hélène Guérin, was just published in the Journal of Mathematical Biology.

We examine here the effects of recurrent vaccination and waning immunity on the establishment of an endemic equilibrium in a population. An individual-based model that incorporates memory effects for the transmission rate during infection and for subsequent immunity is introduced, considering stochasticity at the individual level. By letting the population size go to infinity, we derive a set of equations describing the large-scale behavior of the epidemic. The analysis of the model's equilibria reveals a criterion for the existence of an endemic equilibrium, which depends on the rate of immunity loss and the distribution of time between booster doses. The outcome of a vaccination policy in this context is influenced by the efficiency of the vaccine in blocking transmission and the distribution pattern of booster doses within the population. Strategies with evenly spaced booster shots at the individual level prove to be more effective in preventing disease spread than irregularly spaced boosters, as longer intervals without vaccination increase susceptibility and facilitate more efficient disease transmission. We provide an expression for the critical fraction of the population required to adhere to the vaccination policy in order to eradicate the disease, which resembles a well-known threshold for preventing an endemic state with an imperfect vaccine. We also investigate the consequences of unequal vaccine access in a population and prove that, under reasonable assumptions, fair vaccine allocation is the optimal strategy to prevent endemicity.
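For reference, a standard version of that well-known threshold, for a vaccine of efficacy \varepsilon and basic reproduction number R_0, is the critical coverage

p_c = \frac{1}{\varepsilon}\left(1 - \frac{1}{R_0}\right),

which is only attainable (p_c \le 1) when \varepsilon > 1 - 1/R_0. The expression obtained in the paper plays the same role, but also involves the waning of immunity and the distribution of the time between booster doses.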

Data Augmentation with Variational Autoencoder for Imbalanced Dataset

Our paper, Data Augmentation with Variational Autoencoder for Imbalanced Dataset, written with Samuel Stocksieker and Denys Pommeret, is now online on ArXiv.

Learning from an imbalanced distribution presents a major challenge in predictive modeling, as it generally leads to a reduction in the performance of standard algorithms. Various approaches exist to address this issue, but many of them concern classification problems, with a limited focus on regression. In this paper, we introduce a novel method aimed at enhancing learning on tabular data in the Imbalanced Regression (IR) framework, which remains a significant problem. We propose to use variational autoencoders (VAE), which are known to be a powerful tool for synthetic data generation, offering an interesting approach to modeling and capturing latent representations of complex distributions. However, VAEs can be inefficient when dealing with IR. Therefore, we develop a novel approach for generating data, combining VAE with a smoothed bootstrap, specifically designed to address the challenges of IR. We numerically investigate the scope of this method by comparing it against its competitors on simulations and datasets known for IR.
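To fix ideas, here is a minimal, hypothetical Python sketch of the general mechanism: latent codes from a (here, stubbed) VAE are resampled with weights that favour rare target values, perturbed with Gaussian kernel noise (the smoothed bootstrap), and decoded into synthetic observations. The actual algorithm, losses and bandwidth choices are those described in the paper, not this toy version.

import numpy as np

rng = np.random.default_rng(1)

def encode(X):   # placeholder for a trained VAE encoder
    return X
def decode(Z):   # placeholder for a trained VAE decoder
    return Z

X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(scale=0.1, size=1000)

# weights inversely proportional to the local density of y: rare values drawn more often
hist, edges = np.histogram(y, bins=20)
density = hist[np.clip(np.digitize(y, edges[1:-1]), 0, len(hist) - 1)]
w = 1.0 / np.maximum(density, 1)
w /= w.sum()

Z = encode(X)
idx = rng.choice(len(Z), size=500, replace=True, p=w)         # biased resampling
h = 0.1 * Z.std(axis=0)                                       # smoothing bandwidth
Z_new = Z[idx] + rng.normal(scale=h, size=(500, Z.shape[1]))  # smoothed bootstrap in latent space
X_synthetic = decode(Z_new)
y_synthetic = y[idx]                                          # paired targets (simplification)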

Discounting the Future?

This post was written with Béatrice Cherrier (Research Director, CNRS-ENSAE / CREST).

The first lessons in insurance and financial mathematics address discounting and the value of time (to borrow Christian Gollier's expression), because insurers must account for this temporal aspect in medium-term annuity calculations. But do these discounting calculations, used for centuries to reflect individual decisions (of policyholders, investors, companies), still make sense when they are used to guide public policy decisions with long-term consequences, such as climate policies?
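A back-of-the-envelope calculation shows why the choice of rate becomes so contentious over long horizons: the present value of one euro received in t years at rate r is

PV(t, r) = \frac{1}{(1+r)^{t}}, \qquad PV(100, 1\%) \approx 0.37, \quad PV(100, 4\%) \approx 0.02, \quad PV(100, 7\%) \approx 0.001.

A damage of one euro occurring in a century is thus worth about 37 cents today at 1%, but roughly a tenth of a cent at 7%: over climate-policy horizons, almost everything hinges on the discount rate.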

When Kenneth Arrow joined the IPCC team in 1993, he expressed this concern to the coordinator of certain chapters: discounting in climate economics is as necessary as it is controversial. He wrote: “Your outline is very complete, with one exception. There needs to be discussion of discount rates. To a considerable extent, suggested policies require present costs (reduced carbon consumption) to prevent future disutilities and costs. Clearly, the tradeoff between present and future is very important, controversial though it be” (Cherrier and Duarte 2024).

The history of this transfer of a mathematical tool from the individual to the collective dimension since the 1930s, summarized here, is rich with lessons.
Continue reading Discounting the Future?

Selection bias in insurance: why portfolio-specific fairness fails to extend market-wide

With Marie-Pier Côté and Olivier Côté, we recently uploaded a short note, Selection bias in insurance: why portfolio-specific fairness fails to extend market-wide, now available on SSRN.

Fairness centres on people. In insurance, the scope of fairness should be the entire insured population, not solely an insurer’s clients. However, each insurance company’s portfolio represents a possibly skewed subsample. Models fit to these selection-biased data do not generalise well for the broader population of insureds. Two biases stem from portfolio composition: representation bias, when large prediction errors are made on individuals from subpopulations infrequently observed, and selection bias, when underwriting and marketing skew the portfolio away from the insured population. We examine how portfolio composition affects fair premium methodologies for mitigating direct and indirect discrimination on a protected attribute. We illustrate how unfairness mitigation based on a selection-biased portfolio does not yield a fair market from the perspective of insureds. Relying on causal inference and a portfolio composition indicator, we describe the selection mechanism and determine conditions under which each bias affects various fairness-adjusted premiums. We propose a method to recover the population-wide fairness-adjusted premiums from selection-biased data, by using a (third-party provided) unbiased estimate of the prohibited attribute distribution. We show that this approach effectively mitigates selection bias but leads to overall premiums that are not balanced. In a limiting case, we show that portfolio-specific fairness-aware premiums can lead to a market-wide unawareness strategy: portfolio composition opens the back door to proxy discrimination.
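To illustrate the reweighting intuition behind the last point, here is a hypothetical Python sketch (not the estimator derived in the note): if a third party provides the market-wide distribution of the prohibited attribute, portfolio records can be reweighted so that group-level quantities target the insured population rather than the selection-biased portfolio.

import numpy as np
import pandas as pd

portfolio = pd.DataFrame({
    "protected": np.repeat(["A", "B"], [8000, 2000]),   # portfolio is 80% / 20%
    "premium":   np.concatenate([np.full(8000, 520.0), np.full(2000, 680.0)]),
})

population_dist = {"A": 0.60, "B": 0.40}                 # third-party estimate for the market

portfolio_dist = portfolio["protected"].value_counts(normalize=True)
weights = portfolio["protected"].map(lambda s: population_dist[s] / portfolio_dist[s])

# portfolio-level vs population-level average premium
naive = portfolio["premium"].mean()
reweighted = np.average(portfolio["premium"], weights=weights)
print(naive, reweighted)   # 552.0 vs 584.0: the portfolio understates the market-wide level

In this toy portfolio the under-represented group is the more expensive one, so the portfolio-level average understates the market-wide one; the note studies how the same logic carries over to fairness-adjusted premiums and why the resulting premiums are no longer balanced overall.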

(to be continued…)