Category Archives: Research

The paradoxes of segmentation and discrimination in insurance

"The decision cannot be racist, since it was made without any information about the person's ethnic origin." We have all heard this kind of statement at one time or another, whether about racism, ageism or sexism, and whether it concerns human decisions, models or algorithms. Yet "Kranzberg's law"1 reminds us that technology is neither good nor bad, but that it is not neutral either. Neutrality may only come at a certain price. And it is perhaps time to revisit the main principles of segmentation and fairness in insurance, to better understand what we are talking about when we raise the question of discrimination.

Figure: Krater depicting Theseus and Procrustes, and Theseus killing the boar of Crommyon (source: The Miriam and Ira D. Wallach Division of Art, 1862–1864)

Continue reading The paradoxes of segmentation and discrimination in insurance

  1. Kranzberg's law, stated by the historian of technology Melvin Kranzberg in 1986, was formulated in an article entitled "Technology and History: 'Kranzberg's Laws'", published in the journal Technology and Culture. Six laws are proposed, but the first is the best known, a reminder that the effects of technology depend on the social, political, economic and cultural context in which it is used: "Technology is neither good nor bad; nor is it neutral."

Measuring and correcting biases in AI systems

At noon today, I will take part (online) in the "Lundis de l'IA et de la finance", on the theme "measuring and correcting biases in AI systems", as part of a seminar co-organized by the Autorité de Contrôle Prudentiel et de Résolution (ACPR/Banque de France) and Télécom Paris. My talk, which opens the session, will revisit fairness in the context of insurance [the slides are available].

Continue reading Measuring and correcting biases in AI systems

Confidence and Fairness: Scientific Foundations in AI and Risk (mid-May in Paris)

In mid-May, we are organizing, with the SCOR Foundation for Science, a one-day workshop on Confidence and Fairness: Scientific Foundations in AI and Risk. Registration is now open! The agenda will be:

9:00 – registration
9:20 – introductory speech
9:30 – Arthur Charpentier
10:15 – coffee break
10:45 – Toon Calders
11:30 – Isabel Valera
12:15 – lunch break
13:15 – Jean-Michel Loubes
14:00 – Evgeny Chzhen
14:45 – Michele Loi
15:30 – coffee break
16:00 – Aurélie Lemmens
16:45 – François Hu and Antoine Ly
17:30 – closing cocktail

Les lundis de l’IA et de la finance

In ten days, I will take part (online) in the "Lundis de l'IA et de la finance", on the theme "measuring and correcting biases in AI systems". Co-organized by the Autorité de Contrôle Prudentiel et de Résolution (ACPR/Banque de France) and Télécom Paris, the "Lundis de l'IA et de la Finance" are a series of conferences on the regulation of AI in the financial sector. In this setting, regulators, researchers and other players in the financial industry meet every two months to discuss various topics in the field.

The program is incredible:

  • 17:00 – 17:10: Introduction, Olivier Fliche (ACPR/Banque de France) and/or Winston Maxwell (Télécom Paris)
  • 17:10 – 17:30: Arthur Charpentier (UQAM Montréal): presentation of work on fairness in insurance (where the core of the business is precisely to "discriminate" between risks), including a presentation of fairness metrics and their implications [the slides are available]
  • 17:30 – 17:50: Benoît Rottembourg and Jean-Michel Loubes (Inria): methods for identifying biases in a concrete case (unpaid bills in the telephone industry)
  • 17:50 – 18:10: David Cortés (AI-vidence) and/or Stephan Clémençon (Télécom Paris): presentation of an empirical method for correcting biases directly in the input data
  • 18:10 – 18:25: Questions / discussion
  • 18:25 – 18:30: Closing remarks, O. Fliche and/or D. Bounie


On my way to Toronto

Tomorrow, I will be on my way to Toronto (by train, as always). I will give a seminar on Monday at the University of Toronto. The (long) title is Using optimal transport to mitigate unfair predictions and quantify counterfactual fairness (slides are available).

Many industries are heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of such models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination, or at least a mitigation, is desirable. With the shift from more traditional models to machine-learning-based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In the first part of this seminar, we propose to mitigate possible discrimination (related to so-called « group fairness », i.e., discrepancies in score distributions) through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach, we employ it on real data and discuss its implications. This part is based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155). In the second part, we will focus on another aspect of discrimination, usually called « counterfactual fairness », where the goal is to quantify a potential discrimination « if that person had not been Black » or « if that person had not been a woman ». The standard approach, called « ceteris paribus » (everything else remains unchanged), is not sufficient to take indirect discrimination into account; we therefore consider a « mutatis mutandis » approach based on optimal transport. With multiple features, optimal transport becomes more challenging, and we suggest a sequential approach based on probabilistic graphical models. This part is based on recent work with Agathe Fernandes Machado and Ewen Gallic (2408.03425 and 2501.15549).
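
To give a rough idea of what the first part amounts to in practice, here is a minimal, self-contained sketch (mine, not code from the papers above): with a single discrete sensitive attribute, the "fair" score is obtained by quantile matching, each group's score distribution being pushed to the barycenter whose quantile function is the weighted average of the group quantile functions. The function name and the toy data are purely illustrative.

```python
# A minimal sketch (not the authors' code) of group-fairness post-processing
# with a one-dimensional Wasserstein barycenter: each group's scores are
# pushed, by quantile matching, to the barycenter of the per-group
# score distributions (weighted by group proportions).
import numpy as np

def barycenter_adjust(scores, groups):
    """Return scores mapped to the Wasserstein barycenter of the
    group-conditional score distributions."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()
    adjusted = np.empty_like(scores)
    for g in labels:
        mask = groups == g
        # empirical CDF of each score within its own group
        u = (np.argsort(np.argsort(scores[mask])) + 0.5) / mask.sum()
        # barycenter quantile = weighted average of the group quantiles
        adjusted[mask] = sum(w * np.quantile(scores[groups == lab], u)
                             for lab, w in zip(labels, weights))
    return adjusted

# toy example: two groups with shifted score distributions
rng = np.random.default_rng(0)
score = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])
group = np.array([0] * 500 + [1] * 500)
fair = barycenter_adjust(score, group)
print(round(score[group == 1].mean() - score[group == 0].mean(), 3))  # raw gap
print(round(fair[group == 1].mean() - fair[group == 0].mean(), 3))    # ~0 after adjustment
```

In one dimension no optimal-transport solver is needed: the barycenter's quantile function is simply the weighted average of the group quantile functions, which is the property this sketch exploits.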

EquiPy: Sequential Fairness using Optimal Transport in Python

Our article EquiPy: Sequential Fairness using Optimal Transport in Python, written with Agathe Fernandes Machado, Suzie Grondin, François Hu and Philipp Ratz, is now online. See also equilibration.github.io/equipy/ for the Python package.

Algorithmic fairness has received considerable attention due to the failures of various predictive AI systems that have been found to be unfairly biased against subgroups of the population. Many approaches have been proposed to mitigate such biases in predictive systems; however, they often struggle to provide accurate estimates and transparent correction mechanisms in the case where multiple sensitive variables, such as a combination of gender and race, are involved. This paper introduces a new open-source Python package, EquiPy, which provides an easy-to-use and model-agnostic toolbox for efficiently achieving fairness across multiple sensitive variables. It also offers comprehensive graphical utilities that enable the user to interpret the influence of each sensitive variable within a global context. EquiPy makes use of theoretical results that allow the complexity arising from the use of multiple variables to be broken down into easier-to-solve sub-problems. We demonstrate the ease of use for both mitigation and interpretation on publicly available data derived from the US Census and provide sample code for its use.
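
For readers who want to see what the sequential decomposition looks like, here is a deliberately simplified, from-scratch sketch. It does not reproduce EquiPy's own API (whose exact function names I will not assume here), and it treats each sensitive attribute marginally, whereas the paper uses the proper conditional construction; the point is only that fairness is imposed one sensitive variable at a time, with interpretable intermediate scores. All names and the toy data are illustrative.

```python
# A simplified, from-scratch sketch of the sequential idea (not EquiPy's API):
# fairness is imposed one sensitive variable at a time, and the intermediate
# scores show how much each variable contributes to the overall adjustment.
import numpy as np

def wasserstein_fair(scores, attr):
    """One-dimensional barycenter adjustment for a single sensitive attribute."""
    scores, attr = np.asarray(scores, dtype=float), np.asarray(attr)
    labels, counts = np.unique(attr, return_counts=True)
    weights = counts / counts.sum()
    out = np.empty_like(scores)
    for g in labels:
        m = attr == g
        u = (np.argsort(np.argsort(scores[m])) + 0.5) / m.sum()
        out[m] = sum(w * np.quantile(scores[attr == lab], u)
                     for lab, w in zip(labels, weights))
    return out

def sequential_fair(scores, sensitive):
    """Apply the adjustment attribute by attribute, keeping each step."""
    steps = {"raw": np.asarray(scores, dtype=float)}
    current = steps["raw"]
    for name, attr in sensitive.items():
        current = wasserstein_fair(current, attr)
        steps["fair in " + name] = current
    return steps

# toy example with two sensitive attributes (say, gender and race)
rng = np.random.default_rng(1)
n = 1000
gender = rng.integers(0, 2, n)
race = rng.integers(0, 2, n)
score = rng.beta(2, 4, n) + 0.10 * gender + 0.05 * race
for step, s in sequential_fair(score, {"gender": gender, "race": race}).items():
    print(f"{step:15s} gender gap {abs(s[gender==1].mean() - s[gender==0].mean()):.3f}, "
          f"race gap {abs(s[race==1].mean() - s[race==0].mean()):.3f}")
```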

Conference at CIMAT, D3

After the colloquium yesterday and our conference dinner in a beautiful restaurant downtown, we are back to work at the "Montréal – Guanajuato Workshop on Probability and Machine Learning". Third day, with a focus on machine learning. Courtney Paquette (McGill) was our first plenary speaker, on 'High-dimensional Optimization with Applications to Compute-Optimal Neural Scaling Laws'; then came Emilien Joly (CIMAT), our second plenary speaker, on 'GROS: A Unified Framework for Robust Aggregation in Metric Spaces with Applications to Machine Learning and Statistics'; Marouane Il Idrissi (UQAM); Wilson Zuniga Galingo (Texas); Juan Jiminez (Ottawa); James Melbourne (CIMAT), our third plenary speaker, on 'Towards optimal privacy mechanisms under estimated sensitivity'; and finally Imanol Nuñez Morales (CIMAT). It was a great workshop. Thanks again to our sponsors (Quantact, the SCOR Foundation for Science, CIMAT and the probability lab of the Centre de recherches mathématiques (CRM)), to the organizing team (mainly Dante Mata López, without whom nothing would have been possible), to the great and enthusiastic speakers we had, and for a terrific location…

Agathe at AAAI’25, Philadelphia

Agathe Fernandes Machado is currently in Philadelphia to present our paper Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness, written with Ewen Gallic.

In this paper, we link two existing approaches to derive counterfactuals: adaptations based on a causal graph, and optimal transport. We extend “Knothe’s rearrangement” and “triangular transport” to probabilistic graphical models, and use this counterfactual approach, referred to as sequential transport, to discuss fairness at the individual level. After establishing the theoretical foundations of the proposed method, we demonstrate its application through numerical experiments on both synthetic and real datasets.
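
As a rough illustration of the idea (my own sketch, not the implementation from the paper, which relies on probabilistic graphical models and proper conditional transport), the snippet below builds a counterfactual for a binary sensitive attribute S under an assumed ordering S → X1 → X2: X1 is transported by quantile matching between the two groups, and the conditional step for X2 is approximated by transporting the residual of a within-group linear regression. The structural equations and variable names are illustrative assumptions.

```python
# A rough sketch of sequential (triangular) transport for counterfactuals,
# under the assumed causal ordering S -> X1 -> X2. The conditional step for
# X2 is approximated by transporting the residual of a within-group linear
# regression of X2 on X1 (a simplification, for illustration only).
import numpy as np

def quantile_map(x, source, target):
    """Push x through the CDF of `source`, then through the quantile function of `target`."""
    u = (np.searchsorted(np.sort(source), x, side="right") + 0.5) / (len(source) + 1)
    return np.quantile(target, np.clip(u, 0.0, 1.0))

rng = np.random.default_rng(2)
n = 2000
S = rng.integers(0, 2, n)
X1 = 1.0 + 0.8 * S + rng.normal(0, 1, n)       # X1 depends on S
X2 = 0.5 * X1 + 0.5 * S + rng.normal(0, 1, n)  # X2 depends on X1 and S

i = np.flatnonzero(S == 0)[0]   # an individual observed with S = 0

# step 1: transport X1 from the S=0 distribution to the S=1 distribution
x1_cf = quantile_map(X1[i], X1[S == 0], X1[S == 1])

# step 2: transport X2 "conditionally" on X1, here via regression residuals
def fit_residuals(x1, x2):
    beta = np.polyfit(x1, x2, 1)              # slope and intercept, per group
    return beta, x2 - np.polyval(beta, x1)

beta0, res0 = fit_residuals(X1[S == 0], X2[S == 0])
beta1, res1 = fit_residuals(X1[S == 1], X2[S == 1])
res_cf = quantile_map(X2[i] - np.polyval(beta0, X1[i]), res0, res1)
x2_cf = np.polyval(beta1, x1_cf) + res_cf

print(f"factual        X1 = {X1[i]:.2f}, X2 = {X2[i]:.2f}")
print(f"counterfactual X1 = {x1_cf:.2f}, X2 = {x2_cf:.2f}")
```

The ordering of the variables matters, which is exactly why the paper anchors the sequential transport on a causal (probabilistic) graph rather than on an arbitrary ordering.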

Conference at CIMAT, D1

Day 1 of our conference at CIMAT, Guanajuato (Mexico), on "Probability and Machine Learning". Everyone is here, with Alma Sarai Hernandez Torres (UNAM), first plenary talk, on 'Uniform spanning trees: theory and applications'; Benjamin Côté (Waterloo); Elliot Paquette (McGill), second plenary talk, on 'From magic squares, through random matrices, and to the multiplicative chaos'; Tulio Gaxiola (UAS); Arturo Jaramillo (CIMAT), third plenary talk, on 'High-Frequency Statistics for Lévy Processes: A Stein's Method Perspective'; and Sayle Sigarreta Ricardo (BUAP). Great talks, great day, ending with flash presentations of posters and a short break before the tacos welcome evening (with posters). A big thank you to the presenters, and to the audience: the room was full all day, with active participation!

Conference at CIMAT, D-1

Pre-conference day at CIMAT, Guanajuato (Mexico), for "Probability and Machine Learning". We are finalizing the last details with the incredible local team, financial services, etc. So far, no last-minute imponderables, fingers crossed. At lunchtime, Hélène Guérin gives a presentation at the probability seminar, and tomorrow morning the conference begins. Thanks again to our sponsors: the SCOR Foundation for Science, CIMAT, the probability lab of the Centre de recherches mathématiques (CRM) and the actuarial science lab, Quantact.

Mathematics Colloquium at CIMAT

On Wednesday afternoon, I am invited to give a talk at the Mathematics Colloquium at the Centro de Investigación en Matemáticas (CIMAT). I will present an overview of recent work. The (long) title is Using optimal transport to mitigate unfair predictions and quantify counterfactual fairness (slides are available).

Many industries are heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of such models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination, or at least a mitigation, is desirable. With the shift from more traditional models to machine-learning-based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In the first part of this seminar, we propose to mitigate possible discrimination (related to so-called « group fairness », i.e., discrepancies in score distributions) through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach, we employ it on real data and discuss its implications. This part is based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155). In the second part, we will focus on another aspect of discrimination, usually called « counterfactual fairness », where the goal is to quantify a potential discrimination « if that person had not been Black » or « if that person had not been a woman ». The standard approach, called « ceteris paribus » (everything else remains unchanged), is not sufficient to take indirect discrimination into account; we therefore consider a « mutatis mutandis » approach based on optimal transport. With multiple features, optimal transport becomes more challenging, and we suggest a sequential approach based on probabilistic graphical models. This part is based on recent work with Agathe Fernandes Machado and Ewen Gallic (2408.03425 and 2501.15549).

Continue reading Mathematics Colloquium at CIMAT

Workshop on Probability and Machine Learning, in Guanajuato, Mexico

Next week, we are organizing the first Montréal – Guanajuato Workshop on Probability and Machine Learning, at CIMAT, the Centro de Investigación en Matemáticas.

Invited speakers include Arturo Jaramillo Gil (CIMAT), Saraí Hernández-Torres (UNAM), Emilien Joly (CIMAT), Sandra Palau (UNAM), Courtney Paquette (McGill), Elliot Paquette (McGill), José Luis Pérez (CIMAT), James Melbourne (CIMAT) and Jean-François Renaud (UQAM).

The Centro de Investigación en Matemáticas (CIMAT), located in Guanajuato, Mexico, is a leading research institution focused on mathematics, statistics, and computer science. Part of Mexico’s National System of Public Research Centers (CONACYT), CIMAT excels in both theoretical and applied research, fostering innovation and solving complex real-world problems. Its vibrant academic environment supports advanced studies, offering master’s and doctoral programs, while promoting interdisciplinary collaboration. Housed in the picturesque city of Guanajuato, a UNESCO World Heritage Site, CIMAT attracts top researchers and students from around the globe, contributing significantly to scientific and technological advancement in Mexico and beyond.

The first goal of the workshop is to bring together researchers and scholars from Québec and Mexico working in probability theory and machine learning. With an emphasis on both theoretical foundations and practical applications, the conference will feature research presentations from speakers at a range of career stages at the faculty level, fostering the exchange of ideas and opportunities for new collaborations.

The second goal is to encourage and promote student mobility between Mexico and Québec. The workshop will feature short talks by graduate students and postdoctoral fellows, who will have the opportunity to present their work and to exchange with different researchers. This will enable them to enrich their academic network, and may well open up mobility opportunities for them in the future. Already, Dante Mata Lopez is sharing an office with the team (Agathe, Marouane), and this summer two interns will join us: Allison Lara Nieva, from the Universidad Nacional Autónoma de México (Agathe Fernandes Machado will be involved in the supervision), and Fabian Dominguez Lopez, from the Universidad de Guanajuato, who will work with Hélène Guérin and Arsène Brice Zotsa Ngoufack.

Picture credit for the poster: Yuko Nishikawa, a Brooklyn-based Japanese multidisciplinary artist and designer known for her organic, dreamlike works. She grew up in the seaside town of Chigasaki (茅ヶ崎市), south of Tokyo.

Artificial Intelligence and Personalization of Insurance: Failure or Delayed Ignition?

Our joint paper, Artificial Intelligence and Personalization of Insurance: Failure or Delayed Ignition?, with Xavier Vamparys, has been published in Big Data & Society.

In insurance, there is still a significant gap between the anticipated disruption, due to big data and machine learning algorithms, and the actual implementation of behaviour-based personalization, as described by Meyers (2018). Here, we identify eight key factors that act as fundamental obstacles to the radical transformation of insurance guarantees that would closely align them with the risk profile of each policyholder. These obstacles include the collective nature of insurance, the entrenched beliefs of some insurance companies, challenges related to data collection and use for personalized pricing, limited interest from insurers in adopting new models, and policyholders' reluctance to embrace connected devices. Additionally, the hurdles of explainability, insurer inertia and ethical or societal considerations further complicate the path toward highly individualized insurance pricing.