Observed disparities between groups in algorithmic decisions (whether in hiring, credit approval, or risk prediction) do not necessarily imply direct discrimination. They may also stem from legitimate differences in the distribution of explanatory attributes. Understanding and quantifying which components of these gaps are “explained” by such differences and which reflect direct or indirect discrimination lies at the core of modern causal approaches to algorithmic fairness. This talk will begin with an accessible introduction to group-gap decomposition, building on the classical Kitagawa–Oaxaca–Blinder econometric framework. This approach separates differences attributable to observable characteristics from residual components that may signal discriminatory effects. The second part will introduce recent developments leveraging optimal transport to construct individual-level counterfactuals, enabling estimation of direct and indirect causal effects for each observation. In particular, we will show how sequential transport mappings aligned with a causal graph can disentangle pathways and quantify the contribution of each mediator. This methodology overcomes the limitations of the traditional linear framework of Kitagawa, Oaxaca, and Blinder, provides interpretable counterfactuals, and is well suited to complex empirical settings. The presentation will combine intuitive motivation, illustrative examples, and recent research insights, with the goal of making these tools accessible and useful to researchers in management science, applied economics, and data science.
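For readers less familiar with the Kitagawa–Oaxaca–Blinder decomposition, here is a minimal sketch in Python, on purely illustrative simulated data (the group labels, covariates, and coefficients below are assumptions, not results from the talk). It shows the classical two-fold version with group B as the reference: the mean outcome gap splits exactly into an “explained” part, driven by differences in observable characteristics, and an “unexplained” part, driven by differences in coefficients, which is the component that may signal discriminatory effects.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ols(X, y):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Simulated data: two groups with different covariate distributions
# and slightly different outcome equations (illustrative only).
n = 2000
x_a = rng.normal(1.0, 1.0, size=(n, 1))   # group A has higher covariates on average
x_b = rng.normal(0.0, 1.0, size=(n, 1))
y_a = 1.0 + 2.0 * x_a[:, 0] + rng.normal(0.0, 0.5, n)
y_b = 0.5 + 2.0 * x_b[:, 0] + rng.normal(0.0, 0.5, n)

beta_a, beta_b = fit_ols(x_a, y_a), fit_ols(x_b, y_b)
xbar_a = np.concatenate([[1.0], x_a.mean(axis=0)])
xbar_b = np.concatenate([[1.0], x_b.mean(axis=0)])

# Two-fold decomposition with group B as reference:
# gap = (xbar_a - xbar_b)' beta_b  +  xbar_a' (beta_a - beta_b)
gap = y_a.mean() - y_b.mean()
explained = (xbar_a - xbar_b) @ beta_b      # differences in characteristics
unexplained = xbar_a @ (beta_a - beta_b)    # differences in "returns"

print(f"gap = {gap:.3f} = explained {explained:.3f} + unexplained {unexplained:.3f}")
```

Because each group-specific OLS fit has zero mean residual, the identity holds exactly in-sample; the choice of reference coefficients (group B here) is one of the modelling choices the talk revisits before moving to transport-based counterfactuals.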
Linear models are widely used in high-stakes decision-making due to their simplicity and interpretability. Yet when fairness constraints such as demographic parity are introduced, their effects on model coefficients, and thus on how predictive bias is distributed across features, remain opaque. Existing approaches for linear models often rely on strong and unrealistic assumptions, or overlook the explicit role of the sensitive attribute, limiting their practical utility for fairness assessment. We extend the work of Chzhen and Schreuder (2022) and Fukuchi and Sakuma (2023) by proposing a post-processing framework that can be applied on top of any linear model to decompose the resulting bias into direct (sensitive-attribute) and indirect (correlated-features) components. Our method analytically characterizes how demographic parity reshapes each model coefficient, including those of both sensitive and non-sensitive features. This enables a transparent, feature-level interpretation of fairness interventions and reveals how bias may persist or shift through correlated variables. Our framework requires no retraining and provides actionable insights for model auditing and mitigation. Experiments on both synthetic and real-world datasets demonstrate that our method captures fairness dynamics missed by prior work, offering a practical and interpretable tool for responsible deployment of linear models.
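To make the direct/indirect distinction concrete, here is a toy Python illustration on simulated data; it is not the paper's post-processing procedure, and all variable names and coefficients are assumptions chosen for the example. For a linear score fitted on a binary sensitive attribute s and a correlated feature x, the demographic-parity gap splits exactly into a direct term (through the coefficient on s) and an indirect term (through the group difference in x), and naively dropping s removes only the direct part.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: s is a binary sensitive attribute, x is a feature correlated with s.
n = 5000
s = rng.integers(0, 2, n).astype(float)
x = 1.5 * s + rng.normal(0.0, 1.0, n)                  # x carries information about s
y = 1.0 + 0.8 * s + 2.0 * x + rng.normal(0.0, 0.5, n)

# Fit an unconstrained linear model on (1, s, x).
Z = np.column_stack([np.ones(n), s, x])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
b0, b_s, b_x = beta
pred = Z @ beta

# Demographic-parity gap of the score, split exactly into a direct term
# (coefficient on s) and an indirect term (coefficient on x times the
# group difference in x).
dp_gap   = pred[s == 1].mean() - pred[s == 0].mean()
direct   = b_s * 1.0
indirect = b_x * (x[s == 1].mean() - x[s == 0].mean())
print(f"DP gap = {dp_gap:.3f} = direct {direct:.3f} + indirect {indirect:.3f}")

# Naive "fairness through unawareness": refitting without s removes the
# direct term, but bias persists through the correlated feature x.
Z_na = np.column_stack([np.ones(n), x])
beta_na, *_ = np.linalg.lstsq(Z_na, y, rcond=None)
pred_na = Z_na @ beta_na
print(f"DP gap without s = {pred_na[s == 1].mean() - pred_na[s == 0].mean():.3f}")
```

This persistence of the indirect component through correlated variables is exactly the kind of feature-level fairness dynamic the abstract refers to; the paper's framework characterizes it analytically rather than by refitting.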
We will be in Singapore to present it at the end of January, at AAAI 2026, the 40th Annual AAAI Conference on Artificial Intelligence.
"sendo l'intento mio scrivere cosa utile a chi la intende…"