The initial report was published almost one year ago (https://www.institutlouisbachelier.org/…). It took me one more year of additional work to get a real textbook…
(much more soon… now I need a break…)
Our paper, Optimal vaccination policy to prevent endemicity: a stochastic model, written with Félix Foutel-Rodier and Hélène Guérin, is now available on arXiv.
We examine here the effects of recurrent vaccination and waning immunity on the establishment of an endemic equilibrium in a population. An individual-based model that incorporates memory effects for the transmission rate during infection and subsequent immunity is introduced, accounting for stochasticity at the individual level. By letting the population size go to infinity, we derive a set of equations describing the large-scale behavior of the epidemic. The analysis of the model’s equilibria reveals a criterion for the existence of an endemic equilibrium, which depends on the rate of immunity loss and the distribution of time between booster doses. The outcome of a vaccination policy in this context is influenced by the efficiency of the vaccine in blocking transmissions and the distribution pattern of booster doses within the population. Strategies with evenly spaced booster shots at the individual level prove to be more effective in preventing disease spread than strategies with irregularly spaced boosters, as longer intervals without vaccination increase susceptibility and facilitate more efficient disease transmission. We provide an expression for the critical fraction of the population required to adhere to the vaccination policy in order to eradicate the disease, which resembles a well-known threshold for preventing an endemic state with an imperfect vaccine. We also investigate the consequences of unequal vaccine access in a population and prove that, under reasonable assumptions, fair vaccine allocation is the optimal strategy to prevent endemicity.
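To give some intuition for why evenly spaced boosters dominate, here is a toy discrete-time, individual-based simulation (a deliberately crude sketch, not the measure-valued model of the paper; the linear waning of immunity, the homogeneous mixing, and all parameter values are assumptions of mine). It compares a fixed booster interval with exponentially distributed intervals of the same mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def long_run_prevalence(n=2000, days=2000, beta=0.25, gamma=0.1,
                        waning=1 / 180, mean_interval=180, regular=True):
    """Toy SIRS-type epidemic with waning immunity and booster doses.

    Each individual has a susceptibility in [0, 1] that grows linearly
    at rate `waning` and is reset to 0 by recovery or by a booster.
    `regular=True` spaces boosters exactly `mean_interval` days apart;
    otherwise the intervals are exponential with the same mean.
    """
    susceptibility = rng.uniform(0, 1, n)          # heterogeneous start
    infected = rng.random(n) < 0.01                # seed the epidemic
    if regular:                                    # stagger booster phases
        next_boost = rng.uniform(0, mean_interval, n)
    else:
        next_boost = rng.exponential(mean_interval, n)
    prevalence = []
    for t in range(days):
        force = beta * infected.mean()             # homogeneous mixing
        new_inf = ~infected & (rng.random(n) < force * susceptibility)
        recovered = infected & (rng.random(n) < gamma)
        infected = (infected | new_inf) & ~recovered
        susceptibility = np.minimum(susceptibility + waning, 1.0)
        susceptibility[recovered] = 0.0            # immunity after infection
        due = next_boost <= t
        susceptibility[due] = 0.0                  # booster restores immunity
        if regular:
            next_boost[due] = t + mean_interval
        else:
            next_boost[due] = t + rng.exponential(mean_interval, due.sum())
        prevalence.append(infected.mean())
    return np.mean(prevalence[-500:])              # average over the tail

print("evenly spaced boosters  :", long_run_prevalence(regular=True))
print("irregularly spaced ones :", long_run_prevalence(regular=False))
```

With the same mean time between shots, the exponential schedule leaves some individuals unprotected for long stretches, so average susceptibility, and with it the endemic prevalence, ends up higher, which matches the intuition in the abstract.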
Our new paper, with François Hu and Philipp Ratz, Mitigating Discrimination in Insurance with Wasserstein Barycenters, is now available on arXiv.
The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of said models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, eliminating, or at least mitigating, these biases is desirable. With the shift from more traditional models to machine-learning-based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In this article, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to mitigate the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach, we apply it to real data and discuss its implications.
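In one dimension the barycenter correction has a simple closed form: an individual in group a with score s is mapped to the weighted average of the group quantile functions evaluated at the rank of s within its own group, that is, sum_b w_b F_b^{-1}(F_a(s)). Below is a minimal sketch of that construction with empirical cdfs (the function name and the toy data are mine; this is not the authors' code):

```python
import numpy as np

def wasserstein_fair_scores(scores, group):
    """Push each group's score distribution to the 1-D Wasserstein
    barycenter of all groups, so the repaired scores no longer depend
    on the sensitive attribute in distribution.

    An individual in group a with score s is mapped to
        sum_b  w_b * F_b^{-1}(F_a(s)),
    with F_a the empirical cdf of group a and w_b the group weights.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.unique(group)
    weights = {g: np.mean(group == g) for g in groups}
    fair = np.empty_like(scores)
    for g in groups:
        mask = group == g
        s_g = np.sort(scores[mask])
        # empirical cdf rank of each score within its own group
        ranks = np.searchsorted(s_g, scores[mask], side="right") / s_g.size
        # average the group quantile functions at those ranks
        repaired = np.zeros(mask.sum())
        for h in groups:
            repaired += weights[h] * np.quantile(scores[group == h],
                                                 np.clip(ranks, 0, 1))
        fair[mask] = repaired
    return fair

# toy example: two groups with shifted premium distributions
rng = np.random.default_rng(1)
g = rng.integers(0, 2, 5000)
s = rng.gamma(2.0, 1.0, 5000) + 0.5 * g      # group 1 priced higher on average
s_fair = wasserstein_fair_scores(s, g)
print([round(s[g == k].mean(), 3) for k in (0, 1)])       # group means differ
print([round(s_fair[g == k].mean(), 3) for k in (0, 1)])  # nearly equal
```

After the repair, each group's scores follow the same barycenter distribution, so group means (and, more generally, group distributions) coincide up to sampling noise.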
(fictitious maps used in the article)
Our new paper, with François Hu and Philipp Ratz, Fairness in Multi-Task Learning via Wasserstein Barycenters, is now available.
Algorithmic Fairness is an established field in machine learning that aims to reduce biases in data. Recent advances have proposed various methods to ensure fairness in a univariate environment, where the goal is to de-bias a single task. However, extending fairness to a multi-task setting, where more than one objective is optimised using a shared representation, remains underexplored. To bridge this gap, we develop a method that extends the definition of Strong Demographic Parity to multi-task learning using multi-marginal Wasserstein barycenters. Our approach provides a closed-form solution for the optimal fair multi-task predictor, covering both regression and binary classification tasks. We develop a data-driven estimation procedure for the solution and run numerical experiments on both synthetic and real datasets. The empirical results highlight the practical value of our post-processing methodology in promoting fair decision-making.
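One way to get a feel for the goal is to apply the univariate barycenter repair to each task's output separately and check Strong Demographic Parity via the Kolmogorov–Smirnov distance between group distributions; this per-task shortcut is my own simplification for illustration, not the authors' multi-marginal estimator:

```python
import numpy as np
from scipy.stats import ks_2samp

def repair_task(scores, group):
    """1-D Wasserstein-barycenter repair of one task's scores."""
    groups, fair = np.unique(group), np.empty_like(scores)
    w = {g: np.mean(group == g) for g in groups}
    for g in groups:
        m = group == g
        ranks = np.searchsorted(np.sort(scores[m]), scores[m],
                                side="right") / m.sum()
        fair[m] = sum(w[h] * np.quantile(scores[group == h],
                                         np.clip(ranks, 0, 1))
                      for h in groups)
    return fair

# toy multi-task output: column 0 a regression score, column 1 a probability
rng = np.random.default_rng(2)
n = 4000
grp = rng.integers(0, 2, n)
Y = np.column_stack([
    rng.normal(grp, 1.0, n),                            # regression task
    1 / (1 + np.exp(-rng.normal(0.5 * grp, 1.0, n))),   # classification score
])
Y_fair = np.column_stack([repair_task(Y[:, j], grp) for j in range(2)])

for j in range(2):
    before = ks_2samp(Y[grp == 0, j], Y[grp == 1, j]).statistic
    after = ks_2samp(Y_fair[grp == 0, j], Y_fair[grp == 1, j]).statistic
    print(f"task {j}: KS distance before={before:.3f}, after={after:.3f}")
```

A KS distance near zero for every task is the empirical signature of Strong Demographic Parity in this multi-task sense: within each task, the two groups receive the same distribution of predictions.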
It will be presented in September, at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2023), in Torino.
The other day, I talked with Francesca Argiroffo, from RTS, about the decision by Allstate and State Farm to stop insuring new owners of homes or commercial buildings in California (and about the links between insurance and climate change). Everything is online, in an article entitled quand les assurances n’assurent plus, un autre effet du changement climatique ("when insurers no longer insure, another effect of climate change"), with an audio version as well, albeit of poor quality…
Next week, the Insurance Data Science Conference will take place at the Bayes Business School. The conference programme and abstract booklet are now online. Philipp and Olivier will present some recent work (unfortunately, I will not be able to attend, as I was already in London last month).
Olivier will present some recent work on Causal Inference and Fairness in Insurance Pricing