Tag Archives: Bertille

Modeling and Understanding Indirect Discrimination in Algorithmic Fairness

In a couple of days, I will give a talk on “Modeling and Understanding Indirect Discrimination in Algorithmic Fairness” at the ESSEC Asia-Pacific campus, in Singapore. The abstract is

Observed disparities between groups in algorithmic decisions (whether in hiring, credit approval, or risk prediction) do not necessarily imply direct discrimination. They may also stem from legitimate differences in the distribution of explanatory attributes. Understanding and quantifying which components of these gaps are “explained” and which reflect direct or indirect discrimination lies at the core of modern causal approaches to algorithmic fairness. This talk will begin with an accessible introduction to group-gap decomposition, building on the classical Kitagawa–Oaxaca–Blinder econometric framework. This approach separates differences attributable to observable characteristics from residual components that may signal discriminatory effects. The second part will introduce recent developments leveraging optimal transport to construct individual-level counterfactuals, enabling estimation of direct and indirect causal effects for each observation. In particular, we will show how sequential transport mappings aligned with a causal graph can disentangle pathways and quantify the contribution of each mediator. This methodology overcomes the limitations of the traditional linear framework of Kitagawa, Oaxaca and Blinder, provides interpretable counterfactuals, and is well suited to complex empirical settings. The presentation will combine intuitive motivation, illustrative examples, and recent research insights, with the goal of making these tools accessible and useful to researchers in management science, applied economics, and data science.
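To make the first part concrete, here is a minimal sketch of the two-fold Kitagawa–Oaxaca–Blinder decomposition on simulated data. Everything in it (the data-generating process, the choice of group 0 as the reference) is a hypothetical illustration, not material from the talk.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: two groups that differ both in their covariate
# distributions and in the "returns" to those covariates.
n = 2000
X0 = rng.normal(loc=[0.0, 1.0], scale=1.0, size=(n, 2))  # group 0
X1 = rng.normal(loc=[0.5, 1.5], scale=1.0, size=(n, 2))  # group 1
y0 = 1.0 + X0 @ np.array([1.0, 0.5]) + rng.normal(size=n)
y1 = 1.5 + X1 @ np.array([1.2, 0.5]) + rng.normal(size=n)

# Group-specific linear models
m0 = LinearRegression().fit(X0, y0)
m1 = LinearRegression().fit(X1, y1)

xbar0, xbar1 = X0.mean(axis=0), X1.mean(axis=0)
total_gap = y1.mean() - y0.mean()

# Two-fold decomposition, with group 0 as the reference:
# "explained"   = gap due to different characteristics, (xbar1 - xbar0)' b0
# "unexplained" = gap due to different coefficients (incl. intercepts),
#                 the residual component that may signal discrimination
explained = (xbar1 - xbar0) @ m0.coef_
unexplained = (m1.intercept_ - m0.intercept_) + xbar1 @ (m1.coef_ - m0.coef_)

print(f"total gap:             {total_gap:.3f}")
print(f"explained:             {explained:.3f}")
print(f"unexplained:           {unexplained:.3f}")
print(f"explained + unexplain: {explained + unexplained:.3f}")  # equals the total gap
```

For the second part, the one-dimensional case already conveys the optimal-transport idea: between two univariate distributions, the optimal map is the increasing rearrangement T = F1^{-1} ∘ F0, which sends an individual to the value of the same rank in the other group. The sketch below implements this quantile matching empirically; the multivariate, graph-aligned sequential version discussed in the talk chains such conditional maps along a causal ordering.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-dimensional mediator, observed in two groups
x0 = rng.normal(0.0, 1.0, size=5000)  # group 0
x1 = rng.normal(0.8, 1.3, size=5000)  # group 1

def ot_map(x, source, target):
    """1D optimal transport map T(x) = F_target^{-1}(F_source(x)):
    send x to the same quantile of the target distribution."""
    rank = (source <= x).mean()       # empirical F_source(x)
    return np.quantile(target, rank)  # empirical F_target^{-1}

# Counterfactual mediator value of a group-0 individual "had they
# been in group 1", holding their rank in the distribution fixed
print(ot_map(0.5, x0, x1))  # close to 0.8 + 1.3 * 0.5 = 1.45
```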

Busy week in Singapore

It has been a busy week at the 40th Annual AAAI Conference on Artificial Intelligence, here in Singapore, where Bertille Tierny and François Hu have been giving talks (in the “main track”, in the “student track”, and in a workshop) to present our recent work, “Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint”. More to come very soon…

On my way to Singapore

By the end of the week, I will be in Singapore. I plan to spend some time at the 40th Annual AAAI Conference on Artificial Intelligence, where Bertille Tierny and François Hu will give talks (in the “main track” and in the “student track”) to present our recent work, “Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint”.

Then I will spend two weeks at ESSEC Asia-Pacific, invited by Pierre Alquier. A couple of talks are also scheduled.


Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint

Our paper “Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint”, with Bertille Tierny and François Hu, is now online on arXiv.

Linear models are widely used in high-stakes decision-making due to their simplicity and interpretability. Yet when fairness constraints such as demographic parity are introduced, their effects on model coefficients, and thus on how predictive bias is distributed across features, remain opaque. Existing approaches for linear models often rely on strong and unrealistic assumptions, or overlook the explicit role of the sensitive attribute, limiting their practical utility for fairness assessment. We extend the work of Chzhen and Schreuder (2022) and Fukuchi and Sakuma (2023) by proposing a post-processing framework that can be applied on top of any linear model to decompose the resulting bias into direct (sensitive-attribute) and indirect (correlated-features) components. Our method analytically characterizes how demographic parity reshapes each model coefficient, including those of both sensitive and non-sensitive features. This enables a transparent, feature-level interpretation of fairness interventions and reveals how bias may persist or shift through correlated variables. Our framework requires no retraining and provides actionable insights for model auditing and mitigation. Experiments on both synthetic and real-world datasets demonstrate that our method captures fairness dynamics missed by prior work, offering a practical and interpretable tool for responsible deployment of linear models.
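To fix ideas on the direct/indirect vocabulary, here is a minimal, self-contained sketch (simulated data, not the estimator from the paper) of how the mean prediction gap of a linear model splits exactly into a direct term, carried by the sensitive attribute’s own coefficient, and an indirect term, carried by non-sensitive features whose distributions differ across groups.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Hypothetical data: sensitive attribute a in {0, 1}; feature x1 is
# correlated with a, feature x2 is not.
n = 5000
a = rng.integers(0, 2, size=n)
x1 = 0.8 * a + rng.normal(size=n)  # correlated with a
x2 = rng.normal(size=n)            # independent of a
y = 1.0 * a + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

X = np.column_stack([a, x1, x2])
model = LinearRegression().fit(X, y)
beta_a, beta_x = model.coef_[0], model.coef_[1:]

# For a linear model that includes a, the mean prediction gap is exactly
#   gap = beta_a                                        (direct)
#       + sum_j beta_j * (E[x_j | a=1] - E[x_j | a=0])  (indirect)
mu1 = X[a == 1, 1:].mean(axis=0)
mu0 = X[a == 0, 1:].mean(axis=0)
direct = beta_a
indirect = beta_x @ (mu1 - mu0)

pred = model.predict(X)
gap = pred[a == 1].mean() - pred[a == 0].mean()
print(f"gap {gap:.3f} = direct {direct:.3f} + indirect {indirect:.3f}")
```

Note that simply dropping the sensitive attribute removes the direct term but leaves the indirect one (and can shift it through omitted-variable effects), which is one reason a feature-level view of how a demographic parity constraint reshapes each coefficient is informative.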

We will be in Singapore to present it at the end of January, at AAAI 2026, the 40th Annual AAAI Conference on Artificial Intelligence.