Optimal vaccination policy to prevent endemicity: a stochastic model

Our paper Optimal vaccination policy to prevent endemicity: a stochastic model, written with Félix Foutel-Rodier and Hélène Guérin, was just published in the Journal of Mathematical Biology.

We examine here the effects of recurrent vaccination and waning immunity on the establishment of an endemic equilibrium in a population. An individual-based model that incorporates memory effects for the transmission rate during infection and for subsequent immunity is introduced, with stochasticity at the individual level. By letting the population size go to infinity, we derive a set of equations describing the large-scale behavior of the epidemic. The analysis of the model’s equilibria reveals a criterion for the existence of an endemic equilibrium, which depends on the rate of immunity loss and the distribution of time between booster doses. The outcome of a vaccination policy in this context is influenced by the efficiency of the vaccine in blocking transmissions and the distribution pattern of booster doses within the population. Strategies with evenly spaced booster shots at the individual level prove to be more effective in preventing disease spread than irregularly spaced boosters, as longer intervals without vaccination increase susceptibility and facilitate more efficient disease transmission. We provide an expression for the critical fraction of the population required to adhere to the vaccination policy in order to eradicate the disease, which resembles a well-known threshold for preventing an endemic state with an imperfect vaccine. We also investigate the consequences of unequal vaccine access in a population and prove that, under reasonable assumptions, fair vaccine allocation is the optimal strategy to prevent endemicity.
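For context, the classical benchmark that this critical fraction is said to resemble is the textbook coverage threshold for an imperfect ("leaky") vaccine; it is given here only as a reference point, not as the paper's exact expression. With basic reproduction number \(R_0\) and vaccine efficacy \(\varepsilon\) in blocking transmission, eradication requires vaccinating a fraction at least
\[
p_c \;=\; \frac{1}{\varepsilon}\left(1 - \frac{1}{R_0}\right),
\]
which is attainable only when \(\varepsilon > 1 - 1/R_0\).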

Data Augmentation with Variational Autoencoder for Imbalanced Dataset

Our paper, Data Augmentation with Variational Autoencoder for Imbalanced Dataset, written with Samuel Stocksieker and Denys Pommeret, is now online on arXiv.

Learning from an imbalanced distribution presents a major challenge in predictive modeling, as it generally leads to a reduction in the performance of standard algorithms. Various approaches exist to address this issue, but many of them concern classification problems, with a limited focus on regression. In this paper, we introduce a novel method aimed at enhancing learning on tabular data in the Imbalanced Regression (IR) framework, which remains a significant problem. We propose to use variational autoencoders (VAE), a powerful tool for synthetic data generation that offers an appealing way of modeling and capturing latent representations of complex distributions. However, VAEs can be inefficient when dealing with IR. We therefore develop a novel data-generation approach that combines a VAE with a smoothed bootstrap, specifically designed to address the challenges of IR. We numerically investigate the scope of this method by comparing it against its competitors on simulations and on datasets known for IR.
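To give a rough idea of the smoothed-bootstrap ingredient (the VAE component does not fit in a few lines), here is a minimal, generic Python sketch that oversamples rare regions of the target and jitters the resampled observations with Gaussian noise. The weighting scheme and the bandwidth choice are illustrative assumptions, not the procedure used in the paper.

```python
# Minimal sketch of a smoothed bootstrap for imbalanced regression:
# resample observations with weights favouring rare target values,
# then perturb them with a Gaussian kernel. Generic illustration only,
# not the VAE-based method of the paper.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def smoothed_bootstrap(X, y, n_new=500, bandwidth=0.1):
    """Draw synthetic (X, y) pairs, favouring rare values of y."""
    dens = gaussian_kde(y)(y)                  # estimated density of the target
    w = 1.0 / dens
    w /= w.sum()                               # rare targets get larger weights
    idx = rng.choice(len(y), size=n_new, p=w)  # weighted bootstrap resampling
    # kernel smoothing: jitter the resampled points with Gaussian noise
    X_new = X[idx] + bandwidth * X.std(axis=0) * rng.standard_normal((n_new, X.shape[1]))
    y_new = y[idx] + bandwidth * y.std() * rng.standard_normal(n_new)
    return X_new, y_new

# toy data with a heavy-tailed (imbalanced) target
X = rng.normal(size=(2000, 5))
y = np.exp(X[:, 0] + 0.5 * rng.normal(size=2000))
X_aug, y_aug = smoothed_bootstrap(X, y)
print(y.mean(), y_aug.mean())  # the augmented sample over-represents rare, large targets
```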

Insurance analytics: prediction, explainability and fairness

This article was written jointly with Kjersti Aas (Norwegian Computing Center & Norwegian University of Science and Technology), Fei Huang (University of New South Wales) and Ronald Richman (Old Mutual Insure & University of the Witwatersrand), for the introduction of a special issue of the Annals of Actuarial Science.

The expanding application of advanced analytics in insurance has generated numerous opportunities, such as more accurate predictive modelling powered by Machine Learning and Artificial Intelligence (AI) methods, the utilization of novel and unstructured datasets, and the automation of key operations. Significant advances in these areas are being made through novel applications and adaptations of predictive modelling techniques for insurance purposes, while, concurrently, rapid advances in machine learning methods are being made outside of the insurance sector. However, these innovations also bring substantial challenges, particularly around the transparency, explanation, and fairness of complex algorithmic models and the economic and societal impacts of their adoption in decision-making. As insurance is a highly regulated industry, models may be required by regulators to be explainable, in order to enable analysis of the basis for decision making. Due to the societal importance of insurance, significant attention is being paid to ensuring that insurance models do not discriminate unfairly. In this special issue, we feature papers that explore key issues in insurance analytics, focusing on prediction, explainability, and fairness.



Post-Calibration Techniques: Balancing Calibration and Score Distribution Alignment (NeurIPS’24)

Agathe Fernandes Machado will soon be on her way to Vancouver. She will attend the Thirty-Eighth Annual Conference on Neural Information Processing Systems (also known as NeurIPS 2024) to present a short paper on Post-Calibration Techniques: Balancing Calibration and Score Distribution Alignment.

A binary scoring classifier can appear well-calibrated according to standard calibration metrics, even when the distribution of scores does not align with the distribution of the true events. In this paper, we investigate the impact of post-processing calibration on the score distribution (sometimes named “recalibration”). Using simulated data, where the true probability is known, followed by real-world datasets with prior knowledge on event distributions, we compare the performance of an XGBoost model before and after applying calibration techniques. The results show that while applying methods such as Platt scaling, Beta calibration, or isotonic regression can improve the model’s calibration, they may also lead to an increase in the divergence between the score distribution and the underlying event probability distribution.
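For readers who want to reproduce the phenomenon on synthetic data, here is a minimal sketch of the kind of post-processing involved, using scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost; the data-generating process, the split, and the model choices are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch: post-processing calibration (Platt scaling, isotonic regression)
# on held-out scores, and a quick look at how it reshapes the score distribution.
# Synthetic data and model choices are illustrative, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, weights=[0.8], random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.4, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
raw = clf.predict_proba(X_cal)[:, 1]          # uncalibrated scores on the held-out set

# Platt scaling: logistic regression fitted on the raw scores
platt = LogisticRegression().fit(raw.reshape(-1, 1), y_cal)
platt_scores = platt.predict_proba(raw.reshape(-1, 1))[:, 1]

# Isotonic regression: monotone, piecewise-constant recalibration map
iso = IsotonicRegression(out_of_bounds="clip").fit(raw, y_cal)
iso_scores = iso.predict(raw)

for name, s in [("raw", raw), ("Platt", platt_scores), ("isotonic", iso_scores)]:
    # mean score vs. observed event rate is a crude calibration check,
    # while the score spread hints at how the distribution was reshaped
    print(f"{name:9s} mean score = {s.mean():.3f} (event rate = {y_cal.mean():.3f}),"
          f" std = {s.std():.3f}")
```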

Discounting the Future?

This post is written with Béatrice Cherrier (Research Director, CNRS-ENSAE / CREST).

The first lessons in insurance and financial mathematics address discounting and the value of time, borrowing Christian Gollier’s expression, because insurers must account for this temporal aspect in medium-term annuity calculations. But do these discounting calculations, used for centuries to reflect individual decisions (of policyholders, investors, companies), still make sense when used to guide public policy decisions with long-term consequences, like climate policies?
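To recall the individual-level calculation at stake: at a constant annual discount rate \(r\), a cash flow \(C\) received in \(t\) years has present value
\[
PV = \frac{C}{(1+r)^t}.
\]
With \(r = 4\%\), for instance, one euro due in a century is worth roughly \(1/1.04^{100} \approx 0.02\) euros today, which is why the choice of the discount rate dominates any long-horizon cost-benefit analysis.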

When Kenneth Arrow joined the IPCC team in 1993, he expressed this concern to the coordinator of certain chapters: discounting in climate economics is as necessary as it is controversial. He wrote: “Your outline is very complete, with one exception. There needs to be discussion of discount rates. To a considerable extent, suggested policies require present costs (reduced carbon consumption) to prevent future disutilities and costs. Clearly, the tradeoff between present and future is very important, controversial though it be” (Cherrier and Duarte 2024).

The history of this transfer of a mathematical tool from the individual to the collective dimension since the 1930s, summarized here, is rich with lessons.