François presented our joint paper “A Sequentially Fair Mechanism for Multiple Sensitive Attributes”, written with Philipp Ratz, at the L^P seminar (ISFA-CNAM-HEC Lausanne) today.
Thanks François! (more to come soon)
That was the original title of the article we wrote last week with Laurence Barry (co-holder of the PARI research chair – a research programme on the apprehension of risks and uncertainties – placed under the aegis of the Institut Louis Bachelier, in partnership with ENSAE/CREST and Sciences Po), which was published on the website of the daily newspaper Le Monde.
I will put the original article online later, but in the meantime, I can post a draft of the section on what was happening in California (which has the advantage of being partly sourced). For the French case, here is again a link to a short article we wrote a few months ago, Rapport Langreney : lutter contre le désengagement des assureurs dans la couverture des risques climatiques.
Continue reading Assurance des catastrophes : les routes de l’enfer sont pavées de bonnes intentions
A few months ago, we spent time with Raphaël Suire writing a short article on the insurance market, or more specifically, “The Insurance Market in the Era of Digital Transitions: Relationships Between Insurers, Big Tech, and Insurtechs”. The report is now available on the website of the Society of Actuaries.
The digital revolution has profoundly transformed market dynamics, particularly within the insurance sector. This transformation encompasses the infrastructure and technologies that facilitate information exchange, the emergence of new business practices, a deluge of data, and the rise of innovative players capitalizing on these changes to deliver unique value propositions to customers. Traditional insurance companies face significant challenges and opportunities as they navigate competition from established Big Tech firms and agile insurtech startups. This study examines the disruptive nature of digital advancements, compelling historical players to confront the innovator’s dilemma (Christensen, 1997): whether to adapt and develop established practices or invest in new strategies to leverage digital opportunities. In doing so, they also come up against smaller, more agile start-ups. We highlight the necessity for insurance actors to rethink their roles in light of new market entrants and the evolving landscape shaped by Big Tech’s data monetization strategies. To analyze these dynamics, we propose an original framework in the form of a triangle of possibilities, which positions various market players and elucidates their strategic movements, innovations, and possible partnerships. This framework also aids in identifying competitive advantages and development trajectories, ultimately offering scenarios for the evolution of traditional insurance players in a digital and data-driven era.
Our paper “A fair price to pay: Exploiting causal graphs for fairness in insurance”, written with Olivier Côté and Marie-Pier Côté, just appeared in the Journal of Risk and Insurance.
In many jurisdictions, insurance companies are prohibited from discriminating based on certain policyholder characteristics. Exclusion of prohibited variables from models prevents direct discrimination, but fails to address proxy discrimination, a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of acceptable covariates. The lack of formal definition for key fairness concepts, in particular indirect discrimination, hinders effective fairness assessment. We review causal inference notions and introduce a causal graph tailored for fairness in insurance. Exploiting these, we discuss potential sources of bias, formally define direct and indirect discrimination, and study the theoretical properties of fairness methodologies. A novel categorization of fair methodologies into five families (best-estimate, unaware, aware, hyperaware, and corrective) is constructed based on their expected fairness properties. A comprehensive pedagogical example illustrates the implications of our findings: the interplay between our fair score families, group fairness criteria, and discrimination.
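To make the direct/indirect distinction concrete, here is a minimal toy sketch in Python (with networkx); the graph below is a hypothetical illustration of a proxy channel, not the causal graph introduced in the paper.

import networkx as nx

# Toy causal graph (illustrative assumption, not the paper's graph):
# S = sensitive attribute, X = acceptable covariate, Y = claim cost, P = premium
G = nx.DiGraph()
G.add_edges_from([
    ("S", "Y"),  # direct effect of the sensitive attribute on the outcome
    ("S", "X"),  # the sensitive attribute also drives an acceptable covariate...
    ("X", "Y"),  # ...which influences the outcome: the proxy (indirect) channel
    ("Y", "P"),  # the premium is built from the predicted outcome
])

# Enumerate the causal routes from S to the premium P
for path in nx.all_simple_paths(G, source="S", target="P"):
    print(" -> ".join(path))

Excluding S from the pricing model cuts the first route (S -> Y -> P) but leaves the second one (S -> X -> Y -> P) untouched, which is precisely the proxy discrimination mentioned above.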
The latest issue of Risques is out.
Hot off the press, the latest issue of l’Actuariel has just come out, with in particular an article, Open finance : Big bang annoncé dans l’assurance, for which I had given a long interview. Beyond a few ideas that can be found here and there, I can mention one short sentence that seems to have caught some attention…
“The debate is being framed in an insidious way, leading citizens to believe that they will benefit from more personalized products, without recalling that insurance is very often a zero-sum game and that if some pay less, it means that others pay more,” says Arthur Charpentier, professor at the Université du Québec à Montréal and qualified actuary (actuaire agrégé).
Our paper “Optimal vaccination policy to prevent endemicity: a stochastic model”, written with Félix Foutel-Rodier and Hélène Guérin, was just published in the Journal of Mathematical Biology.
We examine here the effects of recurrent vaccination and waning immunity on the establishment of an endemic equilibrium in a population. An individual-based model that incorporates memory effects for the transmission rate during infection and subsequent immunity is introduced, considering stochasticity at the individual level. By letting the population size go to infinity, we derive a set of equations describing the large-scale behavior of the epidemic. The analysis of the model’s equilibria reveals a criterion for the existence of an endemic equilibrium, which depends on the rate of immunity loss and the distribution of time between booster doses. The outcome of a vaccination policy in this context is influenced by the efficiency of the vaccine in blocking transmissions and the distribution pattern of booster doses within the population. Strategies with evenly spaced booster shots at the individual level prove to be more effective in preventing disease spread than irregularly spaced boosters, as longer intervals without vaccination increase susceptibility and facilitate more efficient disease transmission. We provide an expression for the critical fraction of the population required to adhere to the vaccination policy in order to eradicate the disease, which resembles a well-known threshold for preventing an endemic state with an imperfect vaccine. We also investigate the consequences of unequal vaccine access in a population and prove that, under reasonable assumptions, fair vaccine allocation is the optimal strategy to prevent endemicity.
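For reference, the well-known threshold the abstract alludes to is the classical critical vaccination coverage with an imperfect vaccine: under textbook assumptions, with basic reproduction number R_0 and vaccine efficacy against transmission \varepsilon,

\[
p_c = \frac{1}{\varepsilon}\left(1 - \frac{1}{R_0}\right),
\]

so that, for instance, with R_0 = 3 and \varepsilon = 0.8, roughly 83% of the population must be vaccinated. The expression derived in the paper plays an analogous role, but also reflects the rate of immunity loss and the spacing of booster doses.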
Emmanuel Flachaire and I have been asked to be the guest editors of a special issue of a French journal, the Revue d’Économie Politique. The volume will be out soon; the introduction is below…
Our paper Data Augmentation with Variational Autoencoder for Imbalanced Dataset, written with Samuel Stocksieker and Denys Pommeret, is now online on arXiv.
Learning from an imbalanced distribution presents a major challenge in predictive modeling, as it generally leads to a reduction in the performance of standard algorithms. Various approaches exist to address this issue, but many of them concern classification problems, with a limited focus on regression. In this paper, we introduce a novel method aimed at enhancing learning on tabular data in the Imbalanced Regression (IR) framework, which remains a significant problem. We propose to use variational autoencoders (VAEs), which are known to be a powerful tool for synthetic data generation, offering an interesting approach to modeling and capturing latent representations of complex distributions. However, VAEs can be inefficient when dealing with IR. Therefore, we develop a novel approach for generating data, combining a VAE with a smoothed bootstrap, specifically designed to address the challenges of IR. We numerically investigate the scope of this method by comparing it against its competitors on simulations and on datasets known for IR.
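As a rough illustration of the general idea (not the algorithm of the paper), the sketch below fits a small VAE on a toy right-skewed regression dataset and then generates synthetic rows for the under-represented region of the target by a smoothed bootstrap in latent space; the architecture, the rarity threshold, and the noise scale are all assumptions made for the illustration.

# Minimal PyTorch sketch: VAE on tabular rows + smoothed bootstrap in latent space
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Toy imbalanced regression data: the target y is rarely large (right-skewed)
n = 2000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = np.exp(0.8 * x1 + 0.3 * x2 + 0.2 * rng.normal(size=n))
X = np.column_stack([x1, x2, y]).astype(np.float32)   # features + target, one row per observation
X_t = torch.from_numpy(X)

class VAE(nn.Module):
    def __init__(self, d_in=3, d_lat=2, d_hid=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.mu, self.logvar = nn.Linear(d_hid, d_lat), nn.Linear(d_hid, d_lat)
        self.dec = nn.Sequential(nn.Linear(d_lat, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, d_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-2)
for _ in range(500):                       # quick training loop, for illustration only
    opt.zero_grad()
    xhat, mu, logvar = vae(X_t)
    recon = ((xhat - X_t) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
    (recon + 0.1 * kl).backward()
    opt.step()

# Smoothed bootstrap in latent space, restricted to the rare (large-y) rows:
# resample their latent codes with replacement and add Gaussian noise, then decode
with torch.no_grad():
    mu_all = vae.mu(vae.enc(X_t))
    rare = mu_all[torch.from_numpy(y > np.quantile(y, 0.9))]
    idx = torch.randint(len(rare), (500,))
    z_new = rare[idx] + 0.1 * torch.randn(500, rare.shape[1])
    synthetic = vae.dec(z_new).numpy()     # synthetic rows (x1, x2, y) for the rare region

print("synthetic rows:", synthetic[:3])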
This article was written jointly with Kjersti Aas (Norwegian Computing Center & Norwegian University of Science and Technology), Fei Huang (University of New South Wales) and Ronald Richman (Old Mutual Insure & University of the Witwatersrand), for the introduction of a special issue of the Annals of Actuarial Science.
The expanding application of advanced analytics in insurance has generated numerous opportunities, such as more accurate predictive modelling powered by Machine Learning and Artificial Intelligence (AI) methods, the utilization of novel and unstructured datasets, and the automation of key operations. Significant advances in these areas are being made through novel applications and adaptations of predictive modelling techniques for insurance purposes, while, concurrently, rapid advances in machine learning methods are being made outside of the insurance sector. However, these innovations also bring substantial challenges, particularly around the transparency, explanation, and fairness of complex algorithmic models and the economic and societal impacts of their adoption in decision-making. As insurance is a highly regulated industry, models may be required by regulators to be explainable, in order to enable analysis of the basis for decision-making. Due to the societal importance of insurance, significant attention is being paid to ensuring that insurance models do not discriminate unfairly. In this special issue, we feature papers that explore key issues in insurance analytics, focusing on prediction, explainability, and fairness.
Continue reading Insurance analytics: prediction, explainability and fairness
While Agathe has just arrived in Vancouver for the Thirty-Eighth Annual Conference on Neural Information Processing Systems (also known as NeurIPS 2024), we received the news that our paper Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness was accepted at the 39th Annual AAAI Conference on Artificial Intelligence.
Agathe Fernandes Machado will soon be on her way to Vancouver. She will attend the Thirty-Eighth Annual Conference on Neural Information Processing Systems (also known as NeurIPS 2024) to present a short paper on Post-Calibration Techniques: Balancing Calibration and Score Distribution Alignment.
A binary scoring classifier can appear well-calibrated according to standard calibration metrics, even when the distribution of scores does not align with the distribution of the true events. In this paper, we investigate the impact of post-processing calibration on the score distribution (sometimes named “recalibration”). Using simulated data, where the true probability is known, followed by real-world datasets with prior knowledge on event distributions, we compare the performance of an XGBoost model before and after applying calibration techniques. The results show that while applying methods such as Platt scaling, Beta calibration, or isotonic regression can improve the model’s calibration, they may also lead to an increase in the divergence between the score distribution and the underlying event probability distribution.
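To see the trade-off concretely, here is a small self-contained sketch (using scikit-learn's gradient boosting instead of XGBoost, purely for the illustration): a classifier is recalibrated with isotonic regression on a held-out set, and we then compare a calibration metric with the divergence between the score distribution and the true probabilities. The simulated data, the ECE and Kolmogorov-Smirnov diagnostics, and the split sizes are assumptions, not the paper's experimental design.

# Sketch: compare calibration (ECE) and score-distribution divergence
# before and after isotonic recalibration, on data with known true probabilities
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.isotonic import IsotonicRegression
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n = 20000
X = rng.normal(size=(n, 4))
p_true = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))   # known true event probabilities
y = rng.binomial(1, p_true)

train, calib, test = np.split(rng.permutation(n), [int(0.5 * n), int(0.75 * n)])
clf = GradientBoostingClassifier(random_state=0).fit(X[train], y[train])

scores_test = clf.predict_proba(X[test])[:, 1]
iso = IsotonicRegression(out_of_bounds="clip").fit(
    clf.predict_proba(X[calib])[:, 1], y[calib])
recal_test = iso.predict(scores_test)

def ece(p, y_obs, bins=10):
    # expected calibration error with equal-width bins
    idx = np.clip((p * bins).astype(int), 0, bins - 1)
    return sum(np.abs(p[idx == b].mean() - y_obs[idx == b].mean()) * (idx == b).mean()
               for b in range(bins) if (idx == b).any())

for name, s in [("raw", scores_test), ("isotonic", recal_test)]:
    # KS distance between the score distribution and the true-probability distribution
    ks = ks_2samp(s, p_true[test]).statistic
    print(f"{name:9s}  ECE={ece(s, y[test]):.4f}  KS(scores, true p)={ks:.4f}")

Platt scaling or Beta calibration could be slotted in the same way, by fitting the mapping on the calibration scores; the point is simply that the two diagnostics need not move in the same direction.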
This post was written with Béatrice Cherrier (Research Director, CNRS-ENSAE / CREST).
The first lessons in insurance and financial mathematics address discounting and the value of time, to borrow Christian Gollier’s expression, because insurers must account for this temporal aspect in medium-term annuity calculations. But do these discounting calculations, used for centuries to reflect individual decisions (of policyholders, investors, companies), still make sense when used to guide public policy decisions with long-term consequences, such as climate policies?
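As a reminder of why the choice of rate matters so much over climate-policy horizons, the standard present-value formula discounts a cash flow C_T received in T years at rate r:

\[
PV = \frac{C_T}{(1+r)^T},
\qquad \frac{1}{(1.01)^{100}} \approx 0.37,
\qquad \frac{1}{(1.04)^{100}} \approx 0.02 .
\]

One euro of damage avoided in a century is thus worth about 37 cents today at a 1% rate, but barely 2 cents at 4%, which is part of why the discount rate is so contested in climate economics.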
When Kenneth Arrow joined the IPCC team in 1993, he expressed this concern to the coordinator of certain chapters: discounting in climate economics is as necessary as it is controversial. He wrote: “Your outline is very complete, with one exception. There needs to be discussion of discount rates. To a considerable extent, suggested policies require present costs (reduced carbon consumption) to prevent future disutilities and costs. Clearly, the tradeoff between present and future is very important, controversial though it be” (Cherrier and Duarte 2024).
The history of this transfer of a mathematical tool from the individual to the collective dimension since the 1930s, summarized here, is rich with lessons.
Continue reading Discounting the Future?
The European Actuary no. 40 is out!