Just after arriving in Warsaw, I had a Zoom meeting with the SCOR Foundation for Science to present our first annual report for the SCOR research project. The report is available online, and the second newsletter will be uploaded soon.
In a couple of days, I will be in Warsaw for a summer school, to give a short course on “Insurance, Biases, Discrimination and Fairness”, invited by Łukasz Delong. Slides are now available.
This course will provide a state-of-the-art overview of fairness and discrimination in the context of insurance pricing (and, more generally, predictive models). As explained by Avraham et al. (2014), “insurance companies are in the business of discrimination. Insurers attempt to segregate insureds into separate risk pools based on the differences in their risk profiles, first, so that different premiums can be charged to the different groups based on their differing risks and, second, to incentivize risk reduction by insureds. This is why we let insurers discriminate. There are limits, however, to the types of discrimination that are permissible for insurers. But what exactly are those limits and how are they justified?”. First, we will return to the specific features of predictive models in insurance, and to the different stages at which discrimination can arise, insisting on possible biases in the data and in the models; we will also review the regulations in Europe and North America. Second, we will see how to quantify possible discrimination, focusing on the main measures of “group fairness”, before discussing individual approaches, in particular their connection with causal approaches. Indeed, the central question of discrimination is “would the price have been different if this person had been a man instead of a woman?”. We will see how to build a counterfactual that allows us to quantify possible discrimination. Finally, we will see how to correct for discrimination, focusing on in-processing approaches (through penalized models) and post-processing approaches (using optimal transport). This course will be based on the recent textbook, Charpentier (2024) Insurance, Biases, Discrimination and Fairness, Springer.
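To make the “group fairness” notion concrete, here is a minimal sketch in Python; the column names (gender, premium) and the simulated data are hypothetical, purely for illustration, and the metric shown is a demographic-parity-style gap between average predicted premiums across groups:

```python
import numpy as np
import pandas as pd

# Toy portfolio: 'gender' plays the role of the protected attribute and
# 'premium' the model's predicted price (both hypothetical, for illustration).
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], size=1000),
    "premium": rng.gamma(shape=2.0, scale=300.0, size=1000),
})

# Demographic parity, adapted to a regression-type output: compare the
# average predicted premium across the protected groups.
group_means = df.groupby("gender")["premium"].mean()
dp_gap = group_means.max() - group_means.min()
print(group_means)
print(f"demographic-parity gap: {dp_gap:.2f}")
```

Demographic parity requires the model output to be, on average, independent of the protected attribute; other group-fairness measures, such as equalized odds or calibration within groups, refine this by conditioning on the actual outcome.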
For the next six days, I will be at the Centre Culturel International de Cerisy, for a colloquium on “l’assurance face à ses ruptures” (insurance facing its disruptions). On the program: (13/9) the history of insurance, (14/9) financialization, (15/9) big data, (16/9) climate change, and (17/9) the role of the State. I will give a talk on Sunday morning, on the theme “Certitudes collectives et incertitudes individuelles, les données massives changent-elles la donne ?” (collective certainties and individual uncertainties: is big data a game changer?).
In this talk, we will begin with a historical and sociological detour, returning to the work of Émile Durkheim and Max Weber. These scholars showed that, although individual actions may be unpredictable, collective behaviors follow regular patterns. By aggregating individual actions, we observe regularities within larger groups, an idea illustrated by the predictability of suicide in the context of big data. We will then turn to Herbert Simon and his theory of bounded rationality, which suggests that despite the cognitive limits of individuals, making them individually unpredictable, predictable patterns emerge in aggregate decision-making. We will then explore epistemological questions by discussing the frequentist interpretation of probabilities, which requires repetition in order to be quantified, and the difficulty of assigning a probability to a single event. The Bayesian answer, which interprets probabilities as beliefs or scores, complicates the relationship with actuarial fairness. We will question the calibration of models and the interpretation of scores through simple examples, such as “a 70% chance that a military operation will succeed” or “a 70% chance of rain between 2pm and 3pm”. Finally, we will conclude by taking into account the temporal dimension of prediction: predicting an accident 15 minutes before it happens is not the same as predicting it a year in advance.
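To illustrate what “calibration” means for such scores: saying “70% chance of rain between 2pm and 3pm” is well calibrated if, among all the hours receiving that score, it rains about 70% of the time. A minimal sketch in Python, with simulated scores and outcomes that are calibrated by construction (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated scores, and outcomes drawn so that the scores are calibrated
# by construction (each event occurs with exactly its stated probability).
scores = rng.uniform(0, 1, size=10_000)
outcomes = rng.uniform(0, 1, size=10_000) < scores

# Reliability-diagram data: within each score bin, the empirical event
# frequency should match the average score if the model is calibrated.
bins = np.linspace(0, 1, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (scores >= lo) & (scores < hi)
    print(f"score in [{lo:.1f}, {hi:.1f}): "
          f"mean score = {scores[mask].mean():.3f}, "
          f"event frequency = {outcomes[mask].mean():.3f}")
```

For a single event, such as one military operation, this frequentist reading offers no repetition to average over, which is precisely the epistemological difficulty discussed in the talk.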
The slides are online. Incidentally, Le Monde, the French daily newspaper, published a feature on the Cerisy colloquia (organized since 1952) this summer, a month ago.
On Thursday 12th, I will attend the Mathematical Foundations of AI day, organized by the DATAIA Institute and SCAI (Sorbonne Center for Artificial Intelligence), in association with several scientific societies (namely the Fondation Mathématique Jacques Hadamard (FMJH), the Fondation Sciences Mathématiques de Paris (FSMP), the MALIA group of the Société Française de Statistique, and the Société Savante Francophone d’Apprentissage Machine (SSFAM)).
Slides are now online. Since I have one hour, I will present ideas I have been talking about for more than a year, along with two additional discussions.
Next week, I will be at the European Actuarial Journal Conference, at the Lisbon School of Economics and Management, EAJ’24.
I will give a talk on calibration of actuarial models, based on our recent paper with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic and François Hu, mainly “Probabilistic Scores of Classifiers, Calibration is not Enough” (as well as recent work on recalibration). Slides are available.
In binary classification tasks, accurate representation of probabilistic predictions is essential for various real-world applications such as predicting payment defaults or assessing medical risks. The model must then be well-calibrated to ensure alignment between predicted probabilities and actual outcomes. However, when score heterogeneity deviates from the underlying data probability distribution, traditional calibration metrics lose reliability, failing to align score distribution with actual probabilities. In this study, we highlight approaches that prioritize optimizing the alignment between predicted scores and true probability distributions over minimizing traditional performance or calibration metrics. When employing tree-based models such as Random Forest and XGBoost, our analysis emphasizes the flexibility these models offer in tuning hyperparameters to minimize the Kullback-Leibler (KL) divergence between predicted and true distributions. Through extensive empirical analysis across 10 UCI datasets and simulations, we demonstrate that optimizing tree-based models based on KL divergence yields superior alignment between predicted scores and actual probabilities without significant performance loss. In real-world scenarios, the reference probability is determined a priori as a Beta distribution estimated through maximum likelihood. Conversely, minimizing traditional calibration metrics may lead to suboptimal results, characterized by notable performance declines and inferior KL values. Our findings reveal limitations in traditional calibration metrics, which could undermine the reliability of predictive models for critical decision-making.
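As a rough illustration of the approach described in the abstract, the sketch below (in Python) fits a Beta distribution to scores by maximum likelihood and computes a discretized KL divergence against a reference distribution; the simulated data, the binning, and the chosen Beta parameters are assumptions of this illustration, not the paper’s exact procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical "true" probabilities and noisy model scores, for illustration.
true_p = rng.beta(2.0, 5.0, size=5_000)
scores = np.clip(true_p + rng.normal(0, 0.05, size=5_000), 1e-6, 1 - 1e-6)

# Fit a Beta distribution to the scores by maximum likelihood
# (location fixed at 0, scale fixed at 1).
a_hat, b_hat, _, _ = stats.beta.fit(scores, floc=0, fscale=1)

# Discretized KL divergence between the empirical score distribution and
# the reference Beta(2, 5) that generated the true probabilities.
bins = np.linspace(0, 1, 51)
emp, _ = np.histogram(scores, bins=bins)
emp = emp / emp.sum()
ref = np.diff(stats.beta.cdf(bins, 2.0, 5.0))
mask = (emp > 0) & (ref > 0)
kl = np.sum(emp[mask] * np.log(emp[mask] / ref[mask]))
print(f"fitted Beta: a = {a_hat:.2f}, b = {b_hat:.2f}; discretized KL = {kl:.4f}")
```

In the setting of the abstract, the fitted Beta plays the role of the a priori reference distribution, and hyperparameters of the tree-based models would be tuned to make such a divergence small.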
This week, Fallou Niakh will be in Vienna, at the workshop “Climate Change and Insurance”, to present our recent joint work with Philipp Ratz and Caroline Hillairet.