
The latest issue of l'Actuariel is out

Hot off the press: the latest issue of l'Actuariel has been published, featuring in particular an article, Open finance : Big bang annoncé dans l'assurance, for which I had given a long interview. Beyond a few ideas that can be found here and there, I can mention one short sentence that seems to have caught people's attention…

"The debate is framed in a disingenuous way, by leading citizens to believe that they will benefit from more personalised products, without recalling that insurance is very often a zero-sum game, and that if some pay less, it means that others pay more," says Arthur Charpentier, professor at the Université du Québec à Montréal and Fellow actuary.

Optimal vaccination policy to prevent endemicity: a stochastic model

Our paper, Optimal vaccination policy to prevent endemicity: a stochastic model, written with Félix Foutel-Rodier and Hélène Guérin, has just been published in the Journal of Mathematical Biology.

We examine here the effects of recurrent vaccination and waning immunity on the establishment of an endemic equilibrium in a population. An individual-based model that incorporates memory effects for the transmission rate during infection and for subsequent immunity is introduced, taking stochasticity into account at the individual level. By letting the population size go to infinity, we derive a set of equations describing the large-scale behavior of the epidemic. The analysis of the model's equilibria reveals a criterion for the existence of an endemic equilibrium, which depends on the rate of immunity loss and the distribution of time between booster doses. The outcome of a vaccination policy in this context is influenced by the efficiency of the vaccine in blocking transmissions and by the distribution pattern of booster doses within the population. Strategies with evenly spaced booster shots at the individual level prove to be more effective in preventing disease spread than irregularly spaced boosters, as longer intervals without vaccination increase susceptibility and facilitate more efficient disease transmission. We provide an expression for the critical fraction of the population required to adhere to the vaccination policy in order to eradicate the disease, which resembles a well-known threshold for preventing an endemic state with an imperfect vaccine. We also investigate the consequences of unequal vaccine access in a population and prove that, under reasonable assumptions, fair vaccine allocation is the optimal strategy to prevent endemicity.
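For context, the well-known threshold mentioned at the end of the abstract is the classical eradication condition for an imperfect vaccine: if the vaccine blocks a fraction ε of transmissions and the basic reproduction number is R0, the fraction of the population that must be vaccinated is (a standard textbook result, not the specific expression derived in the paper)

```latex
p_c \;=\; \frac{1}{\varepsilon}\left(1 - \frac{1}{R_0}\right),
```

so eradication by vaccination alone is only feasible when ε > 1 − 1/R0. The expression obtained in the paper is analogous, but the rate of immunity loss and the timing of booster doses enter the picture.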

Data Augmentation with Variational Autoencoder for Imbalanced Dataset

Our paper, Data Augmentation with Variational Autoencoder for Imbalanced Dataset, written with Samuel Stocksieker and Denys Pommeret, is now online on arXiv.

Learning from an imbalanced distribution presents a major challenge in predictive modeling, as it generally leads to a reduction in the performance of standard algorithms. Various approaches exist to address this issue, but many of them concern classification problems, with limited focus on regression. In this paper, we introduce a novel method aimed at enhancing learning on tabular data in the Imbalanced Regression (IR) framework, which remains a significant problem. We propose to use variational autoencoders (VAE), which are known to be a powerful tool for synthetic data generation, offering an interesting approach to modeling and capturing latent representations of complex distributions. However, VAEs can be inefficient when dealing with IR. Therefore, we develop a novel approach for generating data, combining VAE with a smoothed bootstrap, specifically designed to address the challenges of IR. We numerically investigate the scope of this method by comparing it against its competitors on simulations and on datasets known for IR.
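As a rough illustration of one ingredient of the approach, here is a minimal sketch of a smoothed bootstrap for a skewed regression target (this is not the authors' code, and it leaves out the VAE part entirely): observations with rare target values are resampled with replacement and perturbed with Gaussian kernel noise. All function names and parameter choices below are made up for the example.

```python
import numpy as np

def smoothed_bootstrap(X, y, n_synth=500, rare_quantile=0.9, bandwidth=0.1, seed=0):
    """Generate synthetic (X, y) pairs for rare (here: large) target values.

    Rows whose target lies above the chosen quantile are resampled with
    replacement, then jittered with Gaussian noise scaled by each column's
    standard deviation (a classical smoothed bootstrap).
    """
    rng = np.random.default_rng(seed)
    rare = y >= np.quantile(y, rare_quantile)            # crude rarity criterion
    X_rare, y_rare = X[rare], y[rare]

    idx = rng.integers(0, len(y_rare), size=n_synth)     # bootstrap resampling
    X_new = X_rare[idx] + bandwidth * X_rare.std(axis=0) * rng.standard_normal((n_synth, X.shape[1]))
    y_new = y_rare[idx] + bandwidth * y_rare.std() * rng.standard_normal(n_synth)
    return X_new, y_new

# toy usage: covariates with a heavily skewed regression target
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = np.exp(X[:, 0] + 0.5 * rng.normal(size=2000))
X_aug, y_aug = smoothed_bootstrap(X, y)
print(X_aug.shape, y_aug.shape)  # (500, 5) (500,)
```

In the paper, the smoothing is combined with a VAE generator rather than applied directly in the covariate space as above, so that the synthetic samples respect the joint structure of the data.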

Insurance analytics: prediction, explainability and fairness

This article was written jointly with Kjersti Aas (Norwegian Computing Center & Norwegian University of Science and Technology), Fei Huang (University of New South Wales) and Ronald Richman (Old Mutual Insure & University of the Witwatersrand), as the introduction to a special issue of the Annals of Actuarial Science.

The expanding application of advanced analytics in insurance has generated numerous opportunities, such as more accurate predictive modelling powered by Machine Learning and Artificial Intelligence (AI) methods, the utilization of novel and unstructured datasets, and the automation of key operations. Significant advances in these areas are being made through novel applications and adaptations of predictive modelling techniques for insurance purposes, while, concurrently, rapid advances in machine learning methods are being made outside of the insurance sector. However, these innovations also bring substantial challenges, particularly around the transparency, explanation, and fairness of complex algorithmic models and the economic and societal impacts of their adoption in decision-making. As insurance is a highly regulated industry, models may be required by regulators to be explainable, in order to enable analysis of the basis for decision making. Due to the societal importance of insurance, significant attention is being paid to ensuring that insurance models do not discriminate unfairly. In this special issue, we feature papers that explore key issues in insurance analytics, focusing on prediction, explainability, and fairness.



Post-Calibration Techniques: Balancing Calibration and Score Distribution Alignment (NeurIPS’24)

Agathe Fernandes Machado will soon be on her way to Vancouver, where she will attend the Thirty-Eighth Annual Conference on Neural Information Processing Systems (better known as NeurIPS 2024) to present a short paper, Post-Calibration Techniques: Balancing Calibration and Score Distribution Alignment.

A binary scoring classifier can appear well-calibrated according to standard calibration metrics, even when the distribution of scores does not align with the distribution of the true events. In this paper, we investigate the impact of post-processing calibration (sometimes called "recalibration") on the score distribution. Using simulated data, where the true probability is known, followed by real-world datasets with prior knowledge on event distributions, we compare the performance of an XGBoost model before and after applying calibration techniques. The results show that while applying methods such as Platt scaling, Beta calibration, or isotonic regression can improve the model's calibration, they may also lead to an increase in the divergence between the score distribution and the underlying event probability distribution.
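A small simulation in the spirit of the paper (a sketch only, not the authors' setup): scores are recalibrated with isotonic regression, and we report both a binned calibration error and a Wasserstein distance between the score distribution and the distribution of the true probabilities, before and after recalibration. The two metrics need not improve together, which is precisely the tension studied in the paper.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(42)

# simulate true probabilities, outcomes, and noisy (miscalibrated) scores
p_true = rng.beta(2, 5, size=20_000)                       # true event probabilities
y = rng.binomial(1, p_true)                                # observed binary outcomes
z = np.log(p_true / (1 - p_true)) + 0.8 * rng.standard_normal(p_true.size)
scores = 1 / (1 + np.exp(-z))                              # noisy scores, too extreme on average

def ece(s, y, bins=10):
    """Binned expected calibration error."""
    edges = np.linspace(0, 1, bins + 1)
    idx = np.clip(np.digitize(s, edges) - 1, 0, bins - 1)
    return sum((idx == b).mean() * abs(s[idx == b].mean() - y[idx == b].mean())
               for b in range(bins) if (idx == b).any())

def w1(a, b):
    """Approximate 1-Wasserstein distance between two samples (quantile coupling)."""
    qs = np.linspace(0, 1, 1001)
    return np.abs(np.quantile(a, qs) - np.quantile(b, qs)).mean()

# isotonic recalibration of the scores
recal = IsotonicRegression(out_of_bounds="clip").fit_transform(scores, y)

print("calibration error  before:", round(ece(scores, y), 4), " after:", round(ece(recal, y), 4))
print("W1 to true probas  before:", round(w1(scores, p_true), 4), " after:", round(w1(recal, p_true), 4))
```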

Discounting the Future?

This post was written with Béatrice Cherrier (Research Director, CNRS-ENSAE / CREST).

The first lessons in insurance and financial mathematics address discounting and the value of time, to borrow Christian Gollier's expression, because insurers must account for this temporal dimension in medium-term annuity calculations. But do these discounting calculations, used for centuries to reflect individual decisions (of policyholders, investors, companies), still make sense when used to guide public policy decisions with long-term consequences, such as climate policies?
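As a reminder of the textbook calculation behind these first lessons (standard actuarial notation, nothing specific to this post): the present value of a stream of cash flows C_t discounted at rate r, and the present value of an annuity-certain paying 1 per year for n years, are

```latex
PV \;=\; \sum_{t=1}^{n} \frac{C_t}{(1+r)^{t}},
\qquad
a_{\overline{n}|} \;=\; \sum_{t=1}^{n} (1+r)^{-t} \;=\; \frac{1-(1+r)^{-n}}{r}.
```

Over a few decades the choice of r is already decisive: at r = 4%, a payment of 1 due in 50 years is worth about 0.14 today, while at r = 1% it is worth about 0.61, which is why the discount rate becomes so contentious once the horizon is that of climate policy.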

When Kenneth Arrow joined the IPCC team in 1993, he expressed this concern to the coordinator of certain chapters: discounting in climate economics is as necessary as it is controversial. He wrote: “Your outline is very complete, with one exception. There needs to be discussion of discount rates. To a considerable extent, suggested policies require present costs (reduced carbon consumption) to prevent future disutilities and costs. Clearly, the tradeoff between present and future is very important, controversial though it be” (Cherrier and Duarte 2024).

The history of this transfer of a mathematical tool from the individual to the collective dimension since the 1930s, summarized here, is rich with lessons.

Julien Trufin, on “Predictive Modeling and Balance Property through Autocalibration”

This Thursday, Julien Trufin will be giving a talk at the CANSSI SSC Seminar, live from Montréal.

Machine learning techniques provide actuaries with predictors exhibiting high correlation with claim frequencies and severities. However, these predictors generally fail to achieve financial equilibrium and thus do not qualify as pure premiums. Autocalibration effectively addresses this issue since it ensures that every group of policyholders paying the same premium is on average self-financing. This talk proposes to look at recent results concerning autocalibration. In particular, we present a new characterization of autocalibration which enables us to identify whether a predictor is autocalibrated or not, we study a method (called balance correction) for obtaining an autocalibrated predictor from any regression model, we highlight the effect of balance correction on resulting pure premiums, and finally we go through some performance criteria that are particularly relevant for autocalibrated predictors.
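As a reminder of the definitions used in this literature (standard statements, not results from the talk itself): a predictor π is autocalibrated when, within every group of policyholders charged the same premium, expected claims equal that premium, and the balance property follows by taking expectations:

```latex
\mathbb{E}\big[\,Y \mid \pi(X)\,\big] \;=\; \pi(X) \ \text{a.s.}
\qquad \Longrightarrow \qquad
\mathbb{E}[Y] \;=\; \mathbb{E}\big[\pi(X)\big].
```

Balance correction, mentioned in the abstract, in essence replaces a candidate predictor π by an estimate of the conditional expectation of Y given π(X), which is autocalibrated by construction.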

Julien is actually with us for the entire week.

The role of government versus private sector provision of insurance

A short paper, The role of government versus private sector provision of insurance, has just been published in the Journal of Risk and Insurance.

Insurance markets are important for managing risk and promoting economic stability, since they play a key role in mitigating financial losses from unpredictable events such as natural disasters, cyberattacks, and health crises. However, these markets often face challenges, including market failures, information asymmetries, and correlated risks that can destabilize private insurers. In response, governments frequently intervene in insurance markets, either by providing insurance directly or by acting as a reinsurer of last resort. The interaction between government and private sector provision of insurance raises interesting and important questions about the appropriate role of each player in ensuring market efficiency and protecting individuals and businesses from catastrophic risks.

Selection bias in insurance: why portfolio-specific fairness fails to extend market-wide

With Marie-Pier Côté and Olivier Côté, we recently uploaded a short note, Selection bias in insurance: why portfolio-specific fairness fails to extend market-wide, now available on SSRN.

Fairness centres on people. In insurance, the scope of fairness should be the entire insured population, not solely an insurer’s clients. However, each insurance company’s portfolio represents a possibly skewed subsample. Models fit to these selection-biased data do not generalise well for the broader population of insureds. Two biases stem from portfolio composition: representation bias, when large prediction errors are made on individuals from subpopulations infrequently observed, and selection bias, when underwriting and marketing skew the portfolio away from the insured population. We examine how portfolio composition affects fair premium methodologies for mitigating direct and indirect discrimination on a protected attribute. We illustrate how unfairness mitigation based on a selection-biased portfolio does not yield a fair market from the perspective of insureds. Relying on causal inference and a portfolio composition indicator, we describe the selection mechanism and determine conditions under which each bias affects various fairness-adjusted premiums. We propose a method to recover the population-wide fairness-adjusted premiums from selection-biased data, by using a (third-party provided) unbiased estimate of the prohibited attribute distribution. We show that this approach effectively mitigates selection bias but leads to overall premiums that are not balanced. In a limiting case, we show that portfolio-specific fairness-aware premiums can lead to a market-wide unawareness strategy: portfolio composition opens the back door to proxy discrimination.
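To give a flavour of the reweighting idea behind the last point (an illustration only, with made-up names and numbers; the paper's actual correction is built on a causal model of the selection mechanism): if a third party provides an unbiased estimate of the population distribution of the protected attribute, portfolio observations can be importance-weighted so that attribute-level averages computed in the portfolio match their market-wide counterparts.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# third-party (unbiased) estimate of the protected-attribute distribution, market-wide
pop_share = {"A": 0.5, "B": 0.5}

# a selection-biased portfolio: group B is under-represented and riskier
n = 10_000
attr = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])          # skewed underwriting/marketing
claims = np.where(attr == "A",
                  rng.gamma(2.0, 100.0, size=n),              # mean claim 200
                  rng.gamma(2.0, 200.0, size=n))              # mean claim 400
portfolio = pd.DataFrame({"attr": attr, "claims": claims})

# importance weights: population share / portfolio share, per attribute level
port_share = portfolio["attr"].value_counts(normalize=True)
weights = portfolio["attr"].map(lambda a: pop_share[a] / port_share[a])

naive = portfolio["claims"].mean()
reweighted = np.average(portfolio["claims"], weights=weights)
print(f"portfolio average claim:             {naive:.1f}")       # close to 240
print(f"population-reweighted average claim: {reweighted:.1f}")  # close to 300
```

As the abstract notes, correcting in this way changes the overall premium level: the reweighted averages no longer match the portfolio's own experience, which is the balance issue the authors discuss.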

(to be continued…)

Algorithms and bureaucracy

Last week, I attended a series of talks on algorithms, AI, inequalities, injustice, and so on, and I had the feeling that many people were fighting the wrong battle, blaming far too many things "on the algorithms"… I will take this opportunity to dig out an article I wrote a few years ago, "L'intelligence artificielle dilue-t-elle la responsabilité ?" (does artificial intelligence dilute responsibility?).

We are being led to believe that artificial intelligence is a revolution. What if it were nothing of the sort? Could it not simply be the logical continuation of a process that goes back at least fifty years? Bureaucracy has pushed us to put in place, in every area of daily life, simple procedures that allow everyone to shed all responsibility, to no longer have to exercise any intelligence. Algorithms are frightening; we wonder where the "human" is in these decision-making procedures… What if the human had already disappeared a long time ago?

(to be continued…)

Talk at the Financial Conduct Authority, UK

This morning (Montréal time), I will give a talk for the Financial Conduct Authority in London, UK, on "Demystify fairness and discrimination in insurance, and avoid some pitfalls".

What is unique about insurance is that even statistical discrimination, which by definition is devoid of malicious intent, poses significant challenges. On the one hand, policymakers would like insurers to treat their policyholders equally, without discrimination based on race, gender, age or other characteristics, even if it could make (statistical) sense to (indirectly) discriminate. On the other hand, discrimination between risky and non-risky policyholders lies at the core of actuaries' work, and riskiness is often statistically correlated with sensitive characteristics that regulation would like to prohibit insurers from taking into account. The analysis of possible discrimination in decision rules, whether human or algorithmic, is an old subject: most of the concepts date back at least to the 1950s, but recent developments in artificial intelligence have brought these issues back into the spotlight. Massive data facilitate statistical or proxy discrimination, and black-box algorithms do not facilitate understanding. Not to mention the various regulations that make it difficult to collect sensitive information, and ultimately to test whether decisions are discriminatory, especially indirectly.

"sendo l'intento mio scrivere cosa utile a chi la intende…"