Tag Archives: Gallic

Probabilistic Scores of Classifiers, Calibration is not Enough

Our paper “Probabilistic Scores of Classifiers, Calibration is not Enough”, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic and François Hu, is now available at https://arxiv.org/abs/2408.03421.

In binary classification tasks, accurate representation of probabilistic predictions is essential for various real-world applications such as predicting payment defaults or assessing medical risks. The model must then be well-calibrated to ensure alignment between predicted probabilities and actual outcomes. However, when score heterogeneity deviates from the underlying data probability distribution, traditional calibration metrics lose reliability, failing to align score distribution with actual probabilities. In this study, we highlight approaches that prioritize optimizing the alignment between predicted scores and true probability distributions over minimizing traditional performance or calibration metrics. When employing tree-based models such as Random Forest and XGBoost, our analysis emphasizes the flexibility these models offer in tuning hyperparameters to minimize the Kullback-Leibler (KL) divergence between predicted and true distributions. Through extensive empirical analysis across 10 UCI datasets and simulations, we demonstrate that optimizing tree-based models based on KL divergence yields superior alignment between predicted scores and actual probabilities without significant performance loss. In real-world scenarios, the reference probability is determined a priori as a Beta distribution estimated through maximum likelihood. Conversely, minimizing traditional calibration metrics may lead to suboptimal results, characterized by notable performance declines and inferior KL values. Our findings reveal limitations in traditional calibration metrics, which could undermine the reliability of predictive models for critical decision-making.
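To give a rough idea of the kind of comparison involved, here is a minimal Python sketch, not the paper’s implementation: a Beta distribution is fitted by maximum likelihood to serve as the a priori reference, and the Kullback-Leibler divergence between the histogram of a model’s predicted scores and that reference is computed. All names and the synthetic data are hypothetical.

```python
import numpy as np
from scipy import stats

def kl_to_beta_reference(scores, ref_scores, n_bins=20, eps=1e-10):
    """Crude KL divergence between the distribution of predicted scores
    and a Beta reference fitted by maximum likelihood on ref_scores."""
    # Fit the Beta reference (location fixed at 0, scale at 1)
    a, b, _, _ = stats.beta.fit(ref_scores, floc=0, fscale=1)
    edges = np.linspace(0, 1, n_bins + 1)
    # Empirical bin probabilities of the model's predicted scores
    p, _ = np.histogram(scores, bins=edges)
    p = p / p.sum()
    # Bin probabilities under the fitted Beta reference
    q = np.diff(stats.beta.cdf(edges, a, b))
    return np.sum(p * np.log((p + eps) / (q + eps)))

# Hypothetical usage: compare a model's scores against a Beta reference
rng = np.random.default_rng(42)
ref = rng.beta(2, 5, size=5000)                      # stand-in for reference scores
scores_model = np.clip(ref + rng.normal(0, 0.05, 5000), 1e-6, 1 - 1e-6)
print(kl_to_beta_reference(scores_model, ref))
```

In the paper, a criterion of this type is what guides hyperparameter tuning of the tree-based models; the sketch above only illustrates the divergence computation itself.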

Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness

Our paper “Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness”, written with Agathe Fernandes Machado and Ewen Gallic, is now online.

In this paper, we link two existing approaches to derive counterfactuals: adaptations based on a causal graph, as suggested in Plečko and Meinshausen (2020), and optimal transport, as in De Lara et al. (2024). We extend “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, and use this counterfactual approach, referred to as sequential transport, to discuss individual fairness. After establishing the theoretical foundations of the proposed method, we demonstrate its application through numerical experiments on both synthetic and real datasets.
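For intuition only, here is a small Python sketch of the triangular, Knothe-type idea in two dimensions, using empirical quantiles: the first coordinate is transported through its marginal quantiles, and the second is transported conditionally on the first (crudely, within quantile bins of the first coordinate). This is an illustration of the mechanics under made-up data, not the sequential transport algorithm of the paper.

```python
import numpy as np

def marginal_quantile_map(x_src, x_tgt):
    """Map x_src to the target distribution by matching empirical quantiles."""
    ranks = np.argsort(np.argsort(x_src)) / (len(x_src) - 1)
    return np.quantile(x_tgt, ranks)

def knothe_like_transport(X0, X1, n_bins=5):
    """Rough 2D triangular transport: x1 via its marginal quantiles, then
    x2 via quantiles computed within bins of x1 (a crude conditional map)."""
    T = np.empty_like(X0, dtype=float)
    T[:, 0] = marginal_quantile_map(X0[:, 0], X1[:, 0])
    src_edges = np.quantile(X0[:, 0], np.linspace(0, 1, n_bins + 1))
    tgt_edges = np.quantile(X1[:, 0], np.linspace(0, 1, n_bins + 1))
    idx0 = np.clip(np.digitize(X0[:, 0], src_edges[1:-1]), 0, n_bins - 1)
    idx1 = np.clip(np.digitize(X1[:, 0], tgt_edges[1:-1]), 0, n_bins - 1)
    for b in range(n_bins):
        src, tgt = X0[idx0 == b, 1], X1[idx1 == b, 1]
        if len(src) > 1 and len(tgt) > 1:
            T[idx0 == b, 1] = marginal_quantile_map(src, tgt)
    return T

# Hypothetical usage with two synthetic groups
rng = np.random.default_rng(0)
X0 = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], 1000)
X1 = rng.multivariate_normal([1, 2], [[1, -.3], [-.3, 1]], 1000)
print(knothe_like_transport(X0, X1)[:3])
```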

IDSC’24, Insurance Data Science Conference, in Stockholm

A great time at IDSC’24, the Insurance Data Science Conference, in Stockholm over these two days…

I am glad to see so many people using the datasets from the CASdatasets package… Good news: we are working with Christophe Dutang, Julien Siharath and Ewen Gallic this summer to enrich it, with fresh new data and with vignettes! More about it this fall!

Growth, Degrowth: What Are We Talking About?

This little post was written with Ewen Gallic.

“End of the world, end of the month, same fight!” can often be seen on signs during various demonstrations in France, but it is also the title of the inaugural lecture at the Collège de France by economist Christian Gollier, reminding us that climate change and the economy are facing off in a battle that promises to be bloody. “Growth” seems to be a key element in this battle, but the battle will likely remain in vain as long as this term is not clearly discussed, allowing us to leave the often dogmatic trenches.


Geospatial Disparities: A Case Study on Real Estate Prices in Paris

Our paper, Geospatial Disparities: A Case Study on Real Estate Prices in Paris, written with Agathe Fernandes Machado, François Hu, Philipp Ratz and Ewen Gallic, is now online on arXiv.

Driven by an increasing prevalence of trackers, ever more IoT sensors, and the declining cost of computing power, geospatial information has come to play a pivotal role in contemporary predictive models. While enhancing prognostic performance, geospatial data also has the potential to perpetuate many historical socio-economic patterns, raising concerns about a resurgence of biases and exclusionary practices, with their disproportionate impacts on society. Addressing this, our paper emphasizes the crucial need to identify and rectify such biases and calibration errors in predictive models, particularly as algorithms become more intricate and less interpretable. The increasing granularity of geospatial information further introduces ethical concerns, as choosing different geographical scales may exacerbate disparities akin to redlining and exclusionary zoning. To address these issues, we propose a toolkit for identifying and mitigating biases arising from geospatial data. Extending classical fairness definitions, we incorporate an ordinal regression case with spatial attributes, deviating from the binary classification focus. This extension allows us to gauge disparities stemming from data aggregation levels and advocates for a less interfering correction approach. Illustrating our methodology using a Parisian real estate dataset, we showcase practical applications and scrutinize the implications of choosing geographical aggregation levels for fairness and calibration measures.
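As a toy illustration of why the aggregation level matters, and not the paper’s actual criterion, one can compare the distributions of predicted ordinal price bands across spatial groups at two geographical scales and measure the largest gap between groups. All column names and the data below are hypothetical.

```python
import numpy as np
import pandas as pd

def band_distribution_gap(df, band_col, group_col):
    """Largest pairwise gap (half L1 distance) between the distributions of
    predicted ordinal bands across spatial groups: a toy disparity measure."""
    dist = (df.groupby(group_col)[band_col]
              .value_counts(normalize=True)
              .unstack(fill_value=0.0))
    groups, gap = dist.index.tolist(), 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            gap = max(gap, 0.5 * np.abs(dist.loc[groups[i]] - dist.loc[groups[j]]).sum())
    return gap

# Hypothetical data: predicted price bands (0-4) and two aggregation levels
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "band": rng.integers(0, 5, 2000),
    "arrondissement": rng.integers(1, 21, 2000),   # coarse spatial scale
    "iris": rng.integers(1, 200, 2000),            # finer spatial scale
})
print(band_distribution_gap(df, "band", "arrondissement"))
print(band_distribution_gap(df, "band", "iris"))   # finer scales tend to show larger gaps
```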

Talk at the statistics seminar (StatQAM)

Tomorrow, Ewen Gallic will present some recent work on calibration at the StatQAM statistics seminar, joint with Agathe Fernandes Machado, François Hu, and Emmanuel Flachaire. It will largely be based on our recent paper From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration.

The assessment of binary classifier performance traditionally centers on discriminative ability using metrics such as accuracy. However, these metrics often disregard the model’s inherent uncertainty, especially when dealing with sensitive decision-making domains, such as finance or healthcare. Given that model-predicted scores are commonly seen as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario using a Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.
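As a rough illustration of the local-regression idea, and not of the Local Calibration Score itself, one can smooth the binary outcomes against the predicted scores with a lowess smoother and read a calibration curve off the fit; for a well-calibrated model the curve stays close to the diagonal. The data and the distortion below are made up.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def local_calibration_curve(y_true, y_score, frac=0.3):
    """Smooth observed outcomes against predicted scores with lowess;
    returns (sorted scores, smoothed event frequency)."""
    fit = lowess(y_true, y_score, frac=frac, return_sorted=True)
    return fit[:, 0], fit[:, 1]

# Hypothetical usage with a deliberately distorted (overconfident) score
rng = np.random.default_rng(123)
p_true = rng.uniform(0, 1, 5000)
y = rng.binomial(1, p_true)
score = np.clip(p_true ** 2, 1e-6, 1 - 1e-6)   # distortion of the true probability
s, freq = local_calibration_curve(y, score)
print(np.mean(np.abs(freq - s)))               # crude average gap to the diagonal
```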

To illustrate, consider predictions of the gender of the person in a picture, with the associated probabilities (confidence), obtained from https://www.picpurify.com/demo-face-gender-age.html, applied to fake (AI-generated) pictures from https://www.nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html.

Optimal Transport for Counterfactual Estimation: A Method for Causal Inference

With Emmanuel Flachaire and Ewen Gallic, we recently uploaded a paper entitled Optimal Transport for Counterfactual Estimation: A Method for Causal Inference on arXiv.

Many problems ask a question that can be formulated as a causal question: “what would have happened if…?” For example, “would the person have had surgery if he or she had been Black?” To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we take the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black) can have an impact on the values of x. We will see here that optimal transport allows us to change certain characteristics that are influenced by the variable whose effect we are trying to quantify. We propose a mutatis mutandis version of the CATE, obtained simply in dimension one by saying that the CATE must be computed relative to a level of probability, associated with the proportion of x (a single covariate) in the control population, and by looking for the equivalent quantile in the test population. In higher dimensions, it is necessary to go through optimal transport, and an application is proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black).
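In dimension one, the transport described above is simply quantile matching: take the probability level of a covariate value in the control population and look up the corresponding quantile in the treated population. Here is a minimal Python sketch of that step, with made-up numbers, purely illustrative and not the code used in the paper.

```python
import numpy as np

def univariate_transport(x, x_control, x_treated):
    """Map a covariate value x from the control distribution to the treated
    distribution by matching quantile levels (one-dimensional transport)."""
    u = np.mean(x_control <= x)          # probability level of x among controls
    return np.quantile(x_treated, u)     # corresponding quantile among treated

# Hypothetical example: a covariate observed in control vs treated populations
rng = np.random.default_rng(7)
x_control = rng.normal(3400, 450, 10_000)   # made-up control sample
x_treated = rng.normal(3200, 480, 10_000)   # made-up treated sample
print(univariate_transport(3400, x_control, x_treated))  # counterfactual value
```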

Slides from a talk given last week are online.

What is the future of predictive probabilities in insurance?

This post was written with Laurence Barry and Ewen Gallic, in French, in November 2019 (see hal-02350006).

Insurance policies are classic examples of random contracts. This forces insurers to regularly quantify this uncertainty and to calculate probabilities in order to propose “fair” premiums for the commitments they make. Isn’t it time to question this practice, at a time when artificial intelligence is exploding, offering predictive algorithms of a precision never seen before? At a time when big data / big brother could mean the disappearance of uncertainty itself?

“Family History” and Life Insurance

Today and tomorrow, I will attend the Online International Conference in Actuarial Science, Data Science and Finance, organised by colleagues in Lyon. But I won’t be in Lyon; I will be at home, in Montréal…

I will give a talk on Wednesday afternoon, on a joint paper with Ewen Gallic and Olivier Cabrignac. Slides are available here, and if I can get a copy of the video, I will share it…