Congratulations Philipp!
2024 Optimization Days, (algorithmic) collusions in games
Tomorrow, I will attend the 2024 Optimization Days in Montréal, where I will present some work we did last fall with Philipp Ratz and Suzie Grondin on (algorithmic) collusion in games, “Market Pricing with Reinforcement Learning” (the paper will be available soon).
Several recent articles have attempted to gain a better understanding of algorithmic collusion (Calvano et al. (2020), Klein (2021), Banchio & Mantegazza (2022), Rocher et al. (2023)). For example, in Calvano et al. (2020), a simulation study showed that, in a simplified market environment, basic Q-learning agents can learn to collude tacitly, in order to set higher prices and increase their combined profit. Inspired by the Iterated Prisoner's Dilemma, we derive a reinforcement learning algorithm to investigate and discuss several recent results and their robustness, and to explain how reinforcement learning differs from simpler strategies and which conditions lead to unfavorable outcomes from a consumer perspective. In particular, we first describe the reinforcement learning problem in a more general manner and investigate the influence of the hyper-parameters. We then consider two situations separately. The first, similar in spirit to Rocher et al. (2023), assumes that the market is in equilibrium and that an agent tries to exploit the pricing strategy of an incumbent agent. The second, more general, approach considers an agent that continuously updates its own policy.
The starting point was Calvano et al. (2020).
For classical games, the mathematical framework is the following
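Roughly speaking (a minimal sketch, with notations that may differ slightly from the slides), a two-player game in normal form is a collection \mathcal{G}=\big(\{1,2\},\,\mathcal{A}_1\times\mathcal{A}_2,\,(u_1,u_2)\big), where \mathcal{A}_i is the set of actions of player i and u_i:\mathcal{A}_1\times\mathcal{A}_2\to\mathbb{R} is the payoff of player i. A profile (a_1^\star,a_2^\star) is a Nash equilibrium if u_i(a_i^\star,a_{-i}^\star)\geq u_i(a_i,a_{-i}^\star) for all a_i\in\mathcal{A}_i and all i.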
for example, with the prisoner’s dilemma
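To fix ideas, a standard payoff table, with the usual ordering T > R > P > S (here T=5, R=3, P=1, S=0, convenient textbook values, not necessarily those used in the talk):

              Cooperate     Defect
 Cooperate    (3, 3)        (0, 5)
 Defect       (5, 0)        (1, 1)

Defection is a dominant strategy for both players, so the unique Nash equilibrium is (Defect, Defect), even though (Cooperate, Cooperate) would be better for both.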
Then, consider repeated games, and possible collusion
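For instance, with grim-trigger strategies (cooperate until the opponent defects, then defect forever), cooperation, i.e. the collusive outcome, can be sustained in the infinitely repeated game as soon as the discount factor \delta satisfies \delta\geq\frac{T-R}{T-P}; with the illustrative payoffs above, \delta\geq\frac{5-3}{5-1}=\frac{1}{2}.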
The next step is to include randomness, with (dynamic) stochastic games
and standard equations
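As a rough sketch, in the single-agent (Markov decision process) version, the central object is the state-action value function Q^\star, solving the Bellman equation Q^\star(s,a)=\mathbb{E}\big[r_{t+1}+\gamma\max_{a'}Q^\star(s_{t+1},a')\,\big|\,s_t=s,\,a_t=a\big]; in the multi-agent (stochastic game) version, the max is replaced by an equilibrium of the stage game induced by the players' value functions.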
(I quickly describe the different concepts.) Finally, we can move from here to reinforcement learning, and Q-learning.
The idea will be to play (or to interact) in order to learn that matrix,
with the following interpretations for the different parameters.
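In its textbook form, the update behind this is Q(s_t,a_t)\leftarrow(1-\alpha)\,Q(s_t,a_t)+\alpha\big(r_{t+1}+\gamma\max_{a'}Q(s_{t+1},a')\big), where \alpha is the learning rate (how much a new observation overrides the current estimate) and \gamma is the discount factor (how much future rewards matter relative to immediate ones).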
Then, we will play a little bit with the framework introduced for the prisoner's dilemma, for instance to understand the importance of \beta, used in the \epsilon-greedy approach, with \epsilon_t=\exp(-\beta t).
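To give an idea of what such simulations look like, here is a minimal (and deliberately simplified) Python sketch of two \epsilon-greedy Q-learning agents repeatedly playing the prisoner's dilemma, with the exploration schedule \epsilon_t=\exp(-\beta t); the payoffs and hyper-parameters are purely illustrative, not the ones used in the talk.

import numpy as np

# illustrative payoff matrix for the row player (actions: 0 = cooperate, 1 = defect)
payoff = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

def play(beta, alpha=0.1, gamma=0.95, n_rounds=50_000, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((2, 2))          # one row of action values per player (stateless variant)
    history = []
    for t in range(n_rounds):
        eps = np.exp(-beta * t)   # epsilon-greedy schedule, epsilon_t = exp(-beta t)
        actions = []
        for i in range(2):
            if rng.random() < eps:
                actions.append(int(rng.integers(2)))      # explore
            else:
                actions.append(int(np.argmax(Q[i])))      # exploit
        rewards = [payoff[actions[0], actions[1]],         # row player
                   payoff[actions[1], actions[0]]]         # column player, by symmetry
        for i in range(2):
            a = actions[i]
            Q[i, a] = (1 - alpha) * Q[i, a] + alpha * (rewards[i] + gamma * Q[i].max())
        history.append(actions)
    return Q, np.array(history)

Q, hist = play(beta=1e-4)
print(Q)                               # learned action values for both players
print(hist[-1000:].mean(axis=0))       # long-run frequency of defection

Small values of \beta keep the agents exploring for a long time; varying \beta (and \alpha, \gamma) is the kind of experiment mentioned above.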
That is our first take on the concept of collusion: agents do not need to “cooperate” for collusion to emerge.
Then, we will use the experiment of Calvano et al. (2020) to move on to more complex discussions…
Geospatial Disparities: A Case Study on Real Estate Prices in Paris
Our paper, Geospatial Disparities: A Case Study on Real Estate Prices in Paris, written with Agathe Fernandes Machado, François Hu, Philipp Ratz and Ewen Gallic, is now online on ArXiv.
Driven by an increasing prevalence of trackers, ever more IoT sensors, and the declining cost of computing power, geospatial information has come to play a pivotal role in contemporary predictive models. While enhancing prognostic performance, geospatial data also has the potential to perpetuate many historical socio-economic patterns, raising concerns about a resurgence of biases and exclusionary practices, with their disproportionate impacts on society. Addressing this, our paper emphasizes the crucial need to identify and rectify such biases and calibration errors in predictive models, particularly as algorithms become more intricate and less interpretable. The increasing granularity of geospatial information further introduces ethical concerns, as choosing different geographical scales may exacerbate disparities akin to redlining and exclusionary zoning. To address these issues, we propose a toolkit for identifying and mitigating biases arising from geospatial data. Extending classical fairness definitions, we incorporate an ordinal regression case with spatial attributes, deviating from the binary classification focus. This extension allows us to gauge disparities stemming from data aggregation levels and advocates for a less interfering correction approach. Illustrating our methodology using a Parisian real estate dataset, we showcase practical applications and scrutinize the implications of choosing geographical aggregation levels for fairness and calibration measures.
Talk at the 38th Annual AAAI Conference on Artificial Intelligence, in Vancouver
This week, François is in Vancouver, at the 38th Annual AAAI Conference on Artificial Intelligence, presenting our joint work, A Sequentially Fair Mechanism for Multiple Sensitive Attributes.
In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score. Throughout recent years, the scientific community has developed a host of definitions and tools to solve this task, which work well in many practical applications. However, the applicability and effectiveness of these tools and definitions become less straightforward in the case of multiple sensitive attributes. To tackle this issue, we propose a sequential framework, which allows us to progressively achieve fairness across a set of sensitive features. We accomplish this by leveraging multi-marginal Wasserstein barycenters, which extend the standard notion of Strong Demographic Parity to the case with multiple sensitive characteristics. This method also provides a closed-form solution for the optimal, sequentially fair predictor, permitting a clear interpretation of inter-sensitive feature correlations. Our approach seamlessly extends to approximate fairness, enveloping a framework accommodating the trade-off between risk and unfairness. This extension permits a targeted prioritization of fairness improvements for a specific attribute within a set of sensitive attributes, allowing for a case-specific adaptation. A data-driven estimation procedure for the derived solution is developed, and comprehensive numerical experiments are conducted on both synthetic and real datasets. Our empirical findings decisively underscore the practical efficacy of our post-processing approach in fostering fair decision-making.
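To give a flavour of the construction, for a single sensitive attribute S (a sketch of the standard barycenter-based formula, which the paper then iterates across attributes), the fair score is obtained as m^\star(x,s)=\sum_{s'}\mathbb{P}(S=s')\,F_{s'}^{-1}\big(F_s(m(x,s))\big), where m is the initial score, F_s the distribution of m(X,S) conditional on S=s, and F_{s'}^{-1} the corresponding quantile function.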
Talk at the Groupe de travail ARC (Actuariat et Risques Contemporains), in Paris
This Friday, I will give a talk in Paris, on using optimal transport to mitigate unfair predictions, at the ARC Seminar.
The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of said models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination or at least mitigation is desirable. With the shift from more traditional models to machine-learning based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In this talk, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach we employ it on real data and discuss its implications. The talk will be based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155).
Slides are now available online.
Talk at Akur8 internal weekly seminar
Today, we will be presenting our recent work on fairness in insurance at Akur8's internal weekly seminar. Slides are available online.
Reinforcement learning, games and collusion
This morning, Suzie Grondin explained to us the applications of RL (reinforcement learning) in game theory, to understand collusion, as the conclusion of her internship (a six-month gap year from ENSAE Paris) 😪. With a link to Luc Rocher's paper on adversarial approaches, and a quick introduction to “Offline-to-Online Reinforcement Learning”…
Thanks to the whole team (Philipp Ratz, François HU, Agathe Fernandes Machado, Dante Mata López) who came to listen to her, as well as Louis Abraham… Great work! Now we want a paper 😉!
The whole is greater than the sum of the parts
Good news: our paper, A Sequentially Fair Mechanism for Multiple Sensitive Attributes, written with Philipp Ratz and François Hu, will be presented in February in Vancouver, at the 38th Annual AAAI Conference on Artificial Intelligence. For a shorter version, see the review of the paper published last week on montrealethics.ai (as mentioned previously).
Also last week, the team launched the equipy Python package, with the code used in the paper,
pip install equipy
EquiPy is a Python package implementing sequential fairness on the predicted outputs of Machine Learning models, when dealing with multiple sensitive attributes. This post-processing method progressively achieves fairness across a set of sensitive features by leveraging multi-marginal Wasserstein barycenters, which extend the standard notion of Strong Demographic Parity to the case with multiple sensitive characteristics. This approach seamlessly extends to approximate fairness, enveloping a framework accommodating the trade-off between performance and unfairness.
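To illustrate the underlying idea (this is not the EquiPy API itself, just a toy numpy sketch of sequential quantile mapping towards one-dimensional Wasserstein barycenters, with hypothetical data and variable names):

import numpy as np

def barycenter_map(scores, group):
    """Map scores, group by group, to the 1d Wasserstein barycenter of the
    per-group score distributions, via empirical quantile mapping."""
    groups = np.unique(group)
    weights = {g: np.mean(group == g) for g in groups}
    fair = np.empty_like(scores, dtype=float)
    for g in groups:
        idx = (group == g)
        # within-group ranks, i.e. the empirical cdf F_g evaluated at each score
        u = (np.argsort(np.argsort(scores[idx])) + 1) / (idx.sum() + 1)
        # barycenter: weighted average of every group's quantile function at u
        fair[idx] = sum(weights[h] * np.quantile(scores[group == h], u) for h in groups)
    return fair

# sequential fairness across two (hypothetical) sensitive attributes
rng = np.random.default_rng(1)
n = 1_000
s1 = rng.integers(2, size=n)                       # first sensitive attribute
s2 = rng.integers(2, size=n)                       # second sensitive attribute
score = rng.normal(size=n) + 0.8 * s1 + 0.4 * s2   # toy score, biased in s1 and s2
step1 = barycenter_map(score, s1)                  # fair with respect to s1
step2 = barycenter_map(step1, s2)                  # then fair with respect to s2 as well

In general, the order in which the attributes are treated matters, which is precisely what the sequential framework makes explicit.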
(From left to right: Agathe, who just joined the PhD program; Suzie, MSc student at ENSAE, with us since May or June; Philipp, PhD student; François, postdoctoral fellow; and Dante, also a postdoctoral fellow, working on stochastic processes.) According to Aristotle (probably slightly misquoted),
the whole is greater than the sum of the parts
I couldn't agree more!
A Sequentially Fair Mechanism for Multiple Sensitive Attributes
Nice review of our paper, with Philipp Ratz and François Hu, on montrealethics.ai.
Ask a group of people which biases in machine learning should be reduced, and you are likely to be showered with suggestions, making it difficult to decide where to start. To enable an objective discussion, we study a way to sequentially get rid of biases and propose a tool that can efficiently analyze the effects that the order of correction has on outcomes.
Fairness Explainability using Optimal Transport with Applications in Image Classification
A revised version of our paper “Fairness Explainability using Optimal Transport with Applications in Image Classification” is now online, with more discussion about counterfactuals.
Ensuring trust and accountability in Artificial Intelligence systems demands explainability of its outcomes. Despite significant progress in Explainable AI, human biases still taint a substantial portion of its training data, raising concerns about unfairness or discriminatory tendencies. Current approaches in the field of Algorithmic Fairness focus on mitigating such biases in the outcomes of a model, but few attempts have been made to try to explain why a model is biased. To bridge this gap between the two fields, we propose a comprehensive approach that uses optimal transport theory to uncover the causes of discrimination in Machine Learning applications, with a particular emphasis on image classification. We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions. This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias. Taking advantage of this interplay of enforcing and explaining fairness, our method holds significant implications for the development of trustworthy and unbiased AI systems, fostering transparency, accountability, and fairness in critical decision-making scenarios across diverse domains.
Parametric Fairness with Statistical Guarantees
Our paper Parametric Fairness with Statistical Guarantees is now available on ArXiv.
Algorithmic fairness has gained prominence due to societal and regulatory concerns about biases in Machine Learning models. Common group fairness metrics like Equalized Odds for classification or Demographic Parity for both classification and regression are widely used and a host of computationally advantageous post-processing methods have been developed around them. However, these metrics often limit users from incorporating domain knowledge. Despite meeting traditional fairness criteria, they can obscure issues related to intersectional fairness and even replicate unwanted intra-group biases in the resulting fair solution. To avoid this narrow perspective, we extend the concept of Demographic Parity to incorporate distributional properties in the predictions, allowing expert knowledge to be used in the fair solution. We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges like limited training data and constraints on total spending, offering a robust solution for real-life applications.
Value-at-risk Forecasting via Sieves
Friday (and Saturday), the 2023 NBER-NSF conference on time series will take place at UQAM. Philipp Ratz will present some recent work on Value-at-risk Forecasting via Sieves.
A previous version of the paper is available on ArXiv.
A Sequentially Fair Mechanism for Multiple Sensitive Attributes
Our paper, A Sequentially Fair Mechanism for Multiple Sensitive Attributes, with François Hu and Philipp Ratz, is now available on ArXiv.
In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score. Throughout recent years, the scientific community has developed a host of definitions and tools to solve this task, which work well in many practical applications. However, the applicability and effectiveness of these tools and definitions become less straightforward in the case of multiple sensitive attributes. To tackle this issue, we propose a sequential framework, which allows us to progressively achieve fairness across a set of sensitive features. We accomplish this by leveraging multi-marginal Wasserstein barycenters, which extend the standard notion of Strong Demographic Parity to the case with multiple sensitive characteristics. This method also provides a closed-form solution for the optimal, sequentially fair predictor, permitting a clear interpretation of inter-sensitive feature correlations. Our approach seamlessly extends to approximate fairness, enveloping a framework accommodating the trade-off between risk and unfairness. This extension permits a targeted prioritization of fairness improvements for a specific attribute within a set of sensitive attributes, allowing for a case-specific adaptation. A data-driven estimation procedure for the derived solution is developed, and comprehensive numerical experiments are conducted on both synthetic and real datasets. Our empirical findings decisively underscore the practical efficacy of our post-processing approach in fostering fair decision-making.
Addressing Fairness and Explainability in Image Classification Using Optimal Transport
Our new paper, with François Hu and Philipp Ratz, Addressing Fairness and Explainability in Image Classification Using Optimal Transport, is now available on ArXiv.
Algorithmic Fairness and the explainability of potentially unfair outcomes are crucial for establishing trust and accountability of Artificial Intelligence systems in domains such as healthcare and policing. Though significant advances have been made in each of the fields separately, achieving explainability in fairness applications remains challenging, particularly so in domains where deep neural networks are used. At the same time, ethical data-mining has become ever more relevant, as it has been shown countless times that fairness-unaware algorithms result in biased outcomes. Current approaches focus on mitigating biases in the outcomes of the model, but few attempts have been made to try to explain why a model is biased. To bridge this gap, we propose a comprehensive approach that leverages optimal transport theory to uncover the causes and implications of biased regions in images, which easily extends to tabular data as well. Through the use of Wasserstein barycenters, we obtain scores that are independent of a sensitive variable but keep their marginal orderings. This step ensures predictive accuracy but also helps us to recover the regions most associated with the generation of the biases. Our findings hold significant implications for the development of trustworthy and unbiased AI systems, fostering transparency, accountability, and fairness in critical decision-making scenarios across diverse domains.
Mitigating Discrimination in Insurance with Wasserstein Barycenters
Our new paper, with François Hu and Philipp Ratz, Mitigating Discrimination in Insurance with Wasserstein Barycenters, is now available on ArXiv.
The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of said models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination or at least mitigation is desirable. With the shift from more traditional models to machine-learning based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In this article, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach we employ it on real data and discuss its implications.
(fictitious maps used in the article)