Many jurisdictions have laws or guidelines stipulating that insurance companies must not discriminate on some specified policyholder characteristics. Omission of the prohibited variables from the models removes direct discrimination, but does not prevent proxy discrimination — a phenomenon especially prevalent when powerful predictive algorithms are fed with an abundance of allowed covariates. In the actuarial literature, there remains some confusion on the definition of indirect discrimination: this impedes the understanding of the goals of each fairness methodology and their comparison. In the causal inference literature, many tools, such as directed acyclic graphs (DAGs), help uncover various types of biases. A DAG describes the causal relationships between variables of interest and has clear dependence implications. We exploit this tool for fairness to formally define direct and indirect discrimination, to discuss potential sources of bias, and to understand the properties of different fairness methodologies. Four families of fair scores (best-estimate, unaware, aware and corrective) are placed in the DAG representing the insurance pricing problem. This allows us to study their behaviour in terms of direct and indirect discrimination. A comprehensive pedagogical example illustrates our findings.
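To make the notion of proxy discrimination concrete, here is a minimal simulation sketch (it is not taken from the paper; the variables D, X and Y are hypothetical) in which an "unaware" model that omits the protected attribute still produces group-dependent scores, because an allowed covariate acts as a proxy.

```python
# Minimal sketch (not the paper's own example): simulate proxy discrimination.
# A protected attribute D influences an allowed covariate X (a proxy) and the
# outcome Y. A model trained *without* D (an "unaware" score) still recovers D
# through X, so omitting D removes direct but not indirect discrimination.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50_000
D = rng.binomial(1, 0.5, n)                  # protected attribute
X = 2.0 * D + rng.normal(size=n)             # allowed covariate, proxied by D
Y = 1.0 * X + 0.5 * D + rng.normal(size=n)   # outcome depends on both

unaware = LinearRegression().fit(X.reshape(-1, 1), Y)   # D omitted
score = unaware.predict(X.reshape(-1, 1))

# The unaware score still differs between groups: indirect discrimination.
print(score[D == 1].mean() - score[D == 0].mean())
```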
This article, published in the latest issue of the journal Risques, was co-written with Nicolas Marescaux.
Recently, a column on France Info quoted a policymaker expressing his frustration with the Conseil d'orientation des retraites (COR). According to him, the COR, "by setting out several forecast scenarios, prevented any consensus on the necessity (or not) of a reform". This statement, although centred on the controversy over pension reform, raises a broader question: how can we navigate and communicate effectively in an uncertain environment, especially when crucial decisions have to be made? The question is all the more relevant when set against the recent call for a moratorium on artificial intelligence research and the regulatory reforms linked to climate change. Does uncertainty, by creating ambiguity or a lack of confidence in the available information, make it harder to reach an agreement or a consensus on a given question? And, by extension, does it slow down the decision-making process? (to be continued)
Friday (and Saturday), the 2023 NBER-NSF conference on time series will take place at UQAM. Philipp Ratz will present some recent work on Value-at-risk Forecasting via Sieves.
A previous version of the paper is available on arXiv.
With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of the discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that three types of bias have been the main sources of dispute: pure stereotypes, non-causal correlations, and causal effects that a society chooses to protect against. The lens of this typology then allows us to look anew at the potential biases in insurance pricing implied by big data and machine learning, showing that, despite utopian claims, social stereotypes continue to plague data and thus threaten to unconsciously reproduce these forms of discrimination in insurance. To counter these effects, algorithmic fairness attempts to define mathematical indicators of non-bias. We argue that this may prove insufficient, since it assumes the existence of specific protected groups, which can only be made visible through public debate and contestation. Such debate is less likely if the right to explanation is realized through personalized algorithms, which could reinforce the individualized perception of the social that blocks rather than encourages collective mobilization.
On Wednesday evening, I will take part in an event organized by Data Driven Montréal on the challenge of the insurability of climate risks. I have prepared a few slides to kick off the discussion.
In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score. In recent years, the scientific community has developed a host of definitions and tools to solve this task, which work well in many practical applications. However, the applicability and effectiveness of these tools and definitions become less straightforward in the case of multiple sensitive attributes. To tackle this issue, we propose a sequential framework that allows fairness to be achieved progressively across a set of sensitive features. We accomplish this by leveraging multi-marginal Wasserstein barycenters, which extend the standard notion of Strong Demographic Parity to the case of multiple sensitive characteristics. This method also provides a closed-form solution for the optimal, sequentially fair predictor, permitting a clear interpretation of inter-sensitive feature correlations. Our approach seamlessly extends to approximate fairness, providing a framework that accommodates the trade-off between risk and unfairness. This extension permits a targeted prioritization of fairness improvements for a specific attribute within a set of sensitive attributes, allowing for case-specific adaptation. A data-driven estimation procedure for the derived solution is developed, and comprehensive numerical experiments are conducted on both synthetic and real datasets. Our empirical findings decisively underscore the practical efficacy of our post-processing approach in fostering fair decision-making.
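As a rough illustration of the barycenter idea behind Strong Demographic Parity, here is a sketch for a single binary sensitive attribute and one-dimensional scores (it is not the authors' implementation, which handles several attributes sequentially): each group's scores are mapped through their own quantile ranks into the weighted mixture of group quantile functions.

```python
# Illustrative sketch (not the authors' code) of Wasserstein-barycenter
# post-processing for Strong Demographic Parity with one binary sensitive
# attribute; the paper applies this idea sequentially across several attributes.
import numpy as np

def barycenter_fair_scores(scores, sensitive):
    """Map each group's scores through their empirical ranks, then through the
    weighted mixture of group quantile functions (the 1-D barycenter)."""
    scores = np.asarray(scores, float)
    sensitive = np.asarray(sensitive)
    groups = np.unique(sensitive)
    weights = {g: np.mean(sensitive == g) for g in groups}
    fair = np.empty_like(scores)
    for g in groups:
        mask = sensitive == g
        s_g = scores[mask]
        # empirical CDF rank of each score within its own group
        u = (np.argsort(np.argsort(s_g)) + 1) / (len(s_g) + 1)
        # barycenter quantile: weighted average of the group quantile functions
        fair[mask] = sum(w * np.quantile(scores[sensitive == h], u)
                         for h, w in weights.items())
    return fair

# Example: group 1 has systematically higher raw scores; the adjusted
# scores share (approximately) the same distribution in both groups.
rng = np.random.default_rng(1)
a = rng.binomial(1, 0.4, 10_000)
raw = rng.normal(loc=a, scale=1.0)
adj = barycenter_fair_scores(raw, a)
print(adj[a == 1].mean() - adj[a == 0].mean())
```

Within each group, the mapping is monotone, so the marginal ordering of the raw scores is preserved while the group-wise distributions are aligned.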
Algorithmic Fairness and the explainability of potentially unfair outcomes are crucial for establishing trust and accountability of Artificial Intelligence systems in domains such as healthcare and policing. Though significant advances have been made in each of these fields separately, achieving explainability in fairness applications remains challenging, particularly so in domains where deep neural networks are used. At the same time, ethical data mining has become ever more relevant, as it has been shown countless times that fairness-unaware algorithms result in biased outcomes. Current approaches focus on mitigating biases in the outcomes of the model, but few attempts have been made to explain why a model is biased. To bridge this gap, we propose a comprehensive approach that leverages optimal transport theory to uncover the causes and implications of biased regions in images, and which extends easily to tabular data as well. Through the use of Wasserstein barycenters, we obtain scores that are independent of a sensitive variable but keep their marginal orderings. This step ensures predictive accuracy and also helps us recover the regions most associated with the generation of the biases. Our findings hold significant implications for the development of trustworthy and unbiased AI systems, fostering transparency, accountability, and fairness in critical decision-making scenarios across diverse domains.
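As a very rough, hypothetical illustration of the attribution idea on tabular data (this is not the paper's optimal-transport procedure; the function name is made up), one can rank features by how far apart their group-conditional distributions are, using the one-dimensional Wasserstein distance, to flag candidate "bias regions".

```python
# Rough illustration (not the paper's method): for tabular data, rank features
# by how strongly their distribution differs across the sensitive groups,
# using the 1-D Wasserstein distance. Features with large distances are
# candidates for the "regions" that drive the bias in the score.
import numpy as np
from scipy.stats import wasserstein_distance

def feature_bias_ranking(X, sensitive):
    """Return feature indices and distances, sorted by decreasing
    group-wise 1-D Wasserstein distance."""
    X = np.asarray(X, float)
    a = np.asarray(sensitive)
    dists = np.array([wasserstein_distance(X[a == 0, j], X[a == 1, j])
                      for j in range(X.shape[1])])
    order = np.argsort(dists)[::-1]
    return order, dists[order]

rng = np.random.default_rng(2)
a = rng.binomial(1, 0.5, 5_000)
X = rng.normal(size=(5_000, 3))
X[:, 1] += a          # feature 1 carries the group signal
order, d = feature_bias_ranking(X, a)
print(order, np.round(d, 3))
```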
In supervised learning, it is quite common to be confronted with imbalanced real-world datasets, a situation that makes learning difficult for standard algorithms. Research and solutions in imbalanced learning have mainly focused on classification tasks; despite its importance, very few solutions exist for imbalanced regression. In this paper, we propose a data augmentation procedure, the GOLIATH algorithm, based on kernel density estimates, which can be used in both classification and regression. This general approach encompasses two large families of synthetic oversampling: those based on perturbations, such as Gaussian noise, and those based on interpolations, such as SMOTE. It also provides an explicit form of these machine learning algorithms and an expression of their conditional densities, in particular for SMOTE, from which new synthetic data generators are deduced. We apply GOLIATH to imbalanced regression by combining such generators with a wild-bootstrap resampling technique for the target values. We evaluate the performance of GOLIATH in imbalanced regression settings, comparing it empirically with existing state-of-the-art techniques and demonstrating significant improvements.
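As a hedged sketch of perturbation-based oversampling in this spirit (this is not the GOLIATH implementation; the function name, weighting rule and bandwidth choice are illustrative), one can resample observations with weights inversely proportional to a kernel density estimate of the target and add Gaussian kernel noise.

```python
# Hedged sketch (not the GOLIATH implementation): a smoothed-bootstrap style
# generator for imbalanced regression. Rare target values are resampled more
# often (weights inversely proportional to a kernel density estimate of y),
# and covariates and targets are perturbed with Gaussian kernel noise, in the
# spirit of the perturbation-based oversampling family described in the paper.
import numpy as np
from scipy.stats import gaussian_kde

def smoothed_oversample(X, y, n_new, bandwidth=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    X, y = np.asarray(X, float), np.asarray(y, float)
    w = 1.0 / gaussian_kde(y)(y)          # up-weight rare target values
    w /= w.sum()
    idx = rng.choice(len(y), size=n_new, p=w, replace=True)
    X_new = X[idx] + bandwidth * X.std(axis=0) * rng.normal(size=(n_new, X.shape[1]))
    y_new = y[idx] + bandwidth * y.std() * rng.normal(size=n_new)
    return X_new, y_new

rng = np.random.default_rng(3)
y = rng.exponential(size=1_000)            # right-skewed, few large targets
X = np.column_stack([y + rng.normal(size=1_000), rng.normal(size=1_000)])
X_aug, y_aug = smoothed_oversample(X, y, n_new=500, rng=rng)
print(y.mean(), y_aug.mean())              # synthetic sample favours the tail
```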
"sendo l'intento mio scrivere cosa utile a chi la intende…"