
EquiPy: Sequential Fairness using Optimal Transport in Python

Our article EquiPy: Sequential Fairness using Optimal Transport in Python, written with Agathe Fernandes Machado, Suzie Grondin, François Hu and Philipp Ratz, is now online. See also equilibration.github.io/equipy/ for the Python package.

Algorithmic fairness has received considerable attention due to the failures of various predictive AI systems that have been found to be unfairly biased against subgroups of the population. Many approaches have been proposed to mitigate such biases in predictive systems; however, they often struggle to provide accurate estimates and transparent correction mechanisms when multiple sensitive variables, such as a combination of gender and race, are involved. This paper introduces a new open-source Python package, EquiPy, which provides an easy-to-use and model-agnostic toolbox for efficiently achieving fairness across multiple sensitive variables. It also offers comprehensive graphical utilities that enable the user to interpret the influence of each sensitive variable within a global context. EquiPy makes use of theoretical results that allow the complexity arising from the use of multiple variables to be broken down into easier-to-solve sub-problems. We demonstrate the ease of use for both mitigation and interpretation on publicly available data derived from the US Census and provide sample code for its use.
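To give a rough idea of the mechanics behind the package, here is a minimal NumPy sketch of the one-dimensional optimal-transport step for a single sensitive attribute: group-conditional score distributions are pushed onto their Wasserstein barycenter by composing each group's empirical CDF with the weighted average of the group quantile functions. The function name, toy data and weighting are illustrative; this is not the EquiPy API, and the sequential mechanism for several sensitive attributes applies this kind of transport one attribute at a time.

```python
import numpy as np

def wasserstein_fair_scores(scores, groups):
    """Map group-conditional score distributions onto their 1D Wasserstein
    barycenter: each score goes through its own group's empirical CDF, then
    through the weighted average of all group quantile functions."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()

    fair = np.empty_like(scores)
    for g in labels:
        mask = groups == g
        s_g = np.sort(scores[mask])
        # empirical CDF rank of each score within its own group
        u = np.searchsorted(s_g, scores[mask], side="right") / s_g.size
        u = np.clip(u, 1e-6, 1 - 1e-6)
        # barycentric quantile: weighted average of every group's quantile function
        fair[mask] = sum(
            w * np.quantile(scores[groups == h], u)
            for h, w in zip(labels, weights)
        )
    return fair

# toy example: scores for two groups with shifted distributions
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=1000)
scores = rng.normal(loc=0.4 + 0.2 * groups, scale=0.1, size=1000)
fair_scores = wasserstein_fair_scores(scores, groups)
# after the transport, the two group-conditional distributions roughly coincide
print(np.quantile(fair_scores[groups == 0], [0.25, 0.5, 0.75]))
print(np.quantile(fair_scores[groups == 1], [0.25, 0.5, 0.75]))
```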

Beyond Human Intervention: Algorithmic Collusion through Multi-Agent Learning Strategies

Our paper, Beyond Human Intervention: Algorithmic Collusion through Multi-Agent Learning Strategies, with Suzie Grondin and Philipp Ratz, is now available online.

Collusion in market pricing is a concept associated with human actions to raise market prices through artificially limited supply. Recently, the idea of algorithmic collusion was put forward, where the human action in the pricing process is replaced by automated agents. Although experiments have shown that collusive market equilibria can be reached through such techniques, without the need for human intervention, many of the techniques developed remain susceptible to exploitation by other players, making them difficult to implement in practice. In this article, we explore a situation where an agent has a multi-objective strategy, and not only learns to unilaterally exploit market dynamics originating from other algorithmic agents, but also learns to model the behaviour of other agents directly. Our results show how common critiques about the viability of algorithmic collusion in real-life settings can be overcome through the usage of slightly more complex algorithms.
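To make the setting concrete, here is a small self-contained sketch of the standard algorithmic-pricing setup this line of work builds on: two independent Q-learning agents repeatedly set prices on a grid, each observing the other's last price. The demand function, price grid and learning parameters are purely illustrative assumptions, and the paper's multi-objective, opponent-modelling strategies are richer than this baseline.

```python
import numpy as np

# Two independent Q-learning agents repeatedly set prices on a small grid;
# each agent's state is the opponent's last posted price.
rng = np.random.default_rng(1)
prices = np.linspace(1.0, 2.0, 5)        # discrete price grid (illustrative)
n = prices.size
Q = np.zeros((2, n, n))                   # Q[agent, opponent_last_price, own_price]
alpha, gamma, eps = 0.1, 0.95, 0.1        # learning rate, discount, exploration
cost = 1.0

def profits(p0, p1):
    # simple logit-style demand split between the two firms (illustrative)
    d = np.exp(-3 * np.array([p0, p1]))
    share = d / d.sum()
    return (np.array([p0, p1]) - cost) * share

state = [0, 0]                            # index of the opponent's last price
for t in range(200_000):
    acts = []
    for i in range(2):
        if rng.random() < eps:
            acts.append(int(rng.integers(n)))          # explore
        else:
            acts.append(int(np.argmax(Q[i, state[i]])))  # exploit
    pi = profits(prices[acts[0]], prices[acts[1]])
    new_state = [acts[1], acts[0]]        # each agent observes the other's price
    for i in range(2):
        target = pi[i] + gamma * Q[i, new_state[i]].max()
        Q[i, state[i], acts[i]] += alpha * (target - Q[i, state[i], acts[i]])
    state = new_state

# inspect the greedy prices the agents end up playing
print(prices[int(np.argmax(Q[0, state[0]]))],
      prices[int(np.argmax(Q[1, state[1]]))])
```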

Talk in Stockholm, Sweden, at the Insurance Data Science Conference

This week, I will attend the Insurance Data Science conference in Sweden. It has been a while… I was a keynote speaker at the one in London, ten years ago (giving a talk I still get feedback about – Getting into Bayesian Wizardry… (with the eyes of a muggle actuary) – at that time, the conference was “R in Insurance”), and then we organized the one in Paris, back in 2017. Then we had the online events, but it was… different.

This time, I will get back to our recent paper A Sequentially Fair Mechanism for Multiple Sensitive Attributes, with François Hu and Philipp Ratz, and the equipy package, written with Agathe Fernandes-Machado and Suzie Grondin. The slides are available online.