IJCAI 2025

After Marouane’s presentation this weekend, Agathe presented twice at IJCAI 2025.

Early this week, it was the Doctoral Consortium, and yesterday, our joint paper on “Optimal Transport on Categorical Data for Counterfactuals using Compositional Data and Dirichlet Transport”.

Recently, optimal transport-based approaches have gained attention for deriving counterfactuals, e.g., to quantify algorithmic discrimination. However, in the general multivariate setting, these methods are often opaque and difficult to interpret. To address this, alternative methodologies have been proposed, using causal graphs combined with iterative quantile regressions (Plečko and Meinshausen (2020)) or sequential transport (Fernandes Machado et al. (2025)) to examine fairness at the individual level, often referred to as “counterfactual fairness.” Despite these advancements, transporting categorical variables remains a significant challenge in practical applications with real datasets. In this paper, we propose a novel approach to address this issue. Our method involves (1) converting categorical variables into compositional data and (2) transporting these compositions within the probabilistic simplex of \mathbb{R}^d. We demonstrate the applicability and effectiveness of this approach through an illustration on real-world data, and discuss limitations.
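To make the two steps concrete, here is a minimal Python sketch, under loud assumptions: it is not the paper’s Dirichlet transport. Categorical levels are represented as compositions (e.g., predicted class probabilities), mapped to Euclidean coordinates with the centered log-ratio (clr) transform from compositional data analysis, and moved from the source group to the reference group with a simple coordinate-wise Gaussian location-scale map; all function names are illustrative.

```python
import numpy as np

def clr(p, eps=1e-9):
    """Centered log-ratio transform: probability simplex -> R^d."""
    logp = np.log(p + eps)
    return logp - logp.mean(axis=1, keepdims=True)

def clr_inv(z):
    """Inverse clr (softmax): back onto the probability simplex."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def transport_compositions(p_src, p_ref):
    """Move source compositions toward the reference group using a
    coordinate-wise Gaussian location-scale map in clr coordinates."""
    z_src, z_ref = clr(p_src), clr(p_ref)
    mu_s, sd_s = z_src.mean(axis=0), z_src.std(axis=0) + 1e-9
    mu_r, sd_r = z_ref.mean(axis=0), z_ref.std(axis=0) + 1e-9
    return clr_inv((z_src - mu_s) / sd_s * sd_r + mu_r)

# Toy example: compositions over d = 3 categories for two groups,
# e.g., predicted class probabilities standing in for a categorical variable.
rng = np.random.default_rng(0)
p0 = rng.dirichlet([2.0, 5.0, 1.0], size=200)  # source group
p1 = rng.dirichlet([4.0, 2.0, 3.0], size=200)  # reference group
p0_star = transport_compositions(p0, p1)
print(p0_star.sum(axis=1)[:3])  # transported rows still sum to one
```

The point of the clr detour is that an ordinary Euclidean transport map can be applied while guaranteeing that the transported rows remain valid compositions on the simplex.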

IJCAI 2025 Workshop on Explainable Artificial Intelligence (XAI)

This week, Marouane will present recent work at the Workshop on Explainable Artificial Intelligence (XAI), at IJCAI in Montréal:

Explainable Artificial Intelligence (XAI) addresses the challenge of how to communicate and explain the decision-making of AI systems. The need for explainability increases as AI systems are deployed in critical applications, raising questions such as: how should explainable AI systems be designed? What queries should AI systems be able to answer about their models and decisions? How should user interfaces communicate decision making? What types of user interactions should be supported? And how should explanation quality be assessed?

The Explainable AI (XAI) workshop at IJCAI provides a forum for discussing recent research on XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in AI, human-computer interaction, and cognitive theories of explanation and transparency. This topic is of particular importance to, but not limited to, machine learning, AI planning, and knowledge representation & reasoning.

In addition to encouraging descriptions of original or recent contributions to XAI (e.g., theory, simulation studies, subject studies, demonstrations, applications), we welcome contributions that survey related work, describe key issues that require further research, or highlight relevant challenges of interest to the AI community and plans for addressing them.

The paper, Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models, is available on arXiv.

Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models

Our paper Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models, written with Marouane Il Idrissi and Agathe Fernandes Machado, is now online. It will be presented at the IJCAI 2025 Workshop on Explainable Artificial Intelligence (XAI), in Montréal this summer…

Cooperative game theory has become a cornerstone of post-hoc interpretability in machine learning, largely through the use of Shapley values. Yet, despite their widespread adoption, Shapley-based methods often rest on axiomatic justifications whose relevance to feature attribution remains debatable. In this paper, we revisit cooperative game theory from an interpretability perspective and argue for a broader and more principled use of its tools. We highlight two general families of efficient allocations, the Weber and Harsanyi sets, that extend beyond Shapley values and offer richer interpretative flexibility. We present an accessible overview of these allocation schemes, clarify the distinction between value functions and aggregation rules, and introduce a three-step blueprint for constructing reliable and theoretically grounded feature attributions. Our goal is to move beyond fixed axioms and provide the XAI community with a coherent framework to design attribution methods that are both meaningful and robust to shifting methodological trends.
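To illustrate the distinction the abstract draws between value functions and aggregation rules, here is a small self-contained sketch, assuming a toy cooperative game rather than any value function from the paper: the same Harsanyi dividends are aggregated two ways, equally (the Shapley value) and proportionally to singleton worths (the proportional Shapley value, a member of the Harsanyi set).

```python
from itertools import combinations

def harsanyi_dividends(v, players):
    """Harsanyi dividend of S: d(S) = sum over T subset of S of (-1)^{|S|-|T|} v(T)."""
    return {S: sum((-1) ** (len(S) - len(T)) * v(frozenset(T))
                   for r in range(len(S) + 1) for T in combinations(S, r))
            for k in range(1, len(players) + 1)
            for S in combinations(players, k)}

def shapley(v, players):
    """Shapley value: each dividend is split equally among coalition members."""
    div = harsanyi_dividends(v, players)
    return {i: sum(d / len(S) for S, d in div.items() if i in S) for i in players}

def proportional_shapley(v, players):
    """A Harsanyi allocation: dividends split in proportion to v({i}) > 0."""
    div = harsanyi_dividends(v, players)
    w = {i: v(frozenset({i})) for i in players}
    return {i: sum(d * w[i] / sum(w[j] for j in S)
                   for S, d in div.items() if i in S) for i in players}

# Toy 3-player game (any set function with v(emptyset) = 0 works here).
v = lambda S: 0.0 if not S else len(S) ** 2 + (0 in S)
players = (0, 1, 2)
print(shapley(v, players))               # sums to v({0, 1, 2}) = 10
print(proportional_shapley(v, players))  # same total, different split
```

Both allocations are efficient (they sum to the worth of the grand coalition), but they split the joint contributions differently, which is exactly the flexibility the paper argues for.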

Unveil Sources of Uncertainty: Feature Contribution to Conformal Prediction Intervals

With Marouane Il Idrissi, Agathe Fernandes Machado and Ewen Gallic, we recently uploaded a paper, “Unveil Sources of Uncertainty: Feature Contribution to Conformal Prediction Intervals”, on arXiv.

Cooperative game theory methods, notably Shapley values, have significantly enhanced machine learning (ML) interpretability. However, existing explainable AI (XAI) frameworks mainly attribute average model predictions, overlooking predictive uncertainty. This work addresses that gap by proposing a novel, model-agnostic uncertainty attribution (UA) method grounded in conformal prediction (CP). By defining cooperative games where CP interval properties, such as width and bounds, serve as value functions, we systematically attribute predictive uncertainty to input features. Extending beyond the traditional Shapley values, we use the richer class of Harsanyi allocations, and in particular the proportional Shapley values, which distribute attribution proportionally to feature importance. We propose a Monte Carlo approximation method with robust statistical guarantees to address computational feasibility, significantly improving runtime efficiency. Our comprehensive experiments on synthetic benchmarks and real-world datasets demonstrate the practical utility and interpretative depth of our approach. By combining cooperative game theory and conformal prediction, we offer a rigorous, flexible toolkit for understanding and communicating predictive uncertainty in high-stakes ML applications.
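As a rough illustration of the construction, assuming a split-conformal setup with a plain least-squares model refit on each feature subset (the paper’s value functions, Harsanyi allocations and Monte Carlo approximation are richer than this), here is a sketch where the conformal interval width defines the game and the classical, exhaustively computed Shapley rule does the attribution; all names are illustrative.

```python
import numpy as np
from itertools import combinations
from math import ceil, comb

def conformal_width(X_tr, y_tr, X_cal, y_cal, feats, alpha=0.1):
    """Split-conformal interval width for a least-squares model restricted
    to the feature subset `feats` (intercept-only model if `feats` is empty)."""
    A_tr = np.c_[np.ones(len(X_tr)), X_tr[:, feats]]
    A_cal = np.c_[np.ones(len(X_cal)), X_cal[:, feats]]
    beta, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)
    scores = np.sort(np.abs(y_cal - A_cal @ beta))  # calibration residuals
    k = ceil((len(scores) + 1) * (1 - alpha))       # conformal rank
    return 2 * scores[min(k, len(scores)) - 1]      # interval is pred +/- q

def shapley_width_attribution(X_tr, y_tr, X_cal, y_cal, alpha=0.1):
    """Attribute the width reduction v(S) = width(empty) - width(S) to each
    feature with the classical Shapley formula (exact enumeration)."""
    d = X_tr.shape[1]
    width = {S: conformal_width(X_tr, y_tr, X_cal, y_cal, list(S), alpha)
             for k in range(d + 1) for S in combinations(range(d), k)}
    phi = np.zeros(d)
    for i in range(d):
        for S, w_S in width.items():
            if i not in S:
                weight = 1.0 / (d * comb(d - 1, len(S)))  # Shapley weight
                phi[i] += weight * (w_S - width[tuple(sorted(S + (i,)))])
    return width[()], phi  # phi sums to width(empty) - width(all features)

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = 2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400)
base, phi = shapley_width_attribution(X[:200], y[:200], X[200:], y[200:])
print(base, phi)  # feature 0 should account for most of the width reduction
```

Here the attributions sum to the width reduction achieved by using all features, so each phi_i reads as how much of the shrinkage in predictive uncertainty feature i accounts for.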

SCOR Project Newsletter #2

The second newsletter related to the SCOR research project is now available. It is a brief summary of the second six-month block, from April to the end of September (the first one is available here).

As explained there, over the past six months we have had several interns, including Noé Bosc-Haddad, Florent Crouzet, Julien Siharath, Ana María Patrón Piñerez and Cassandra Mussard, as well as visits from Laurence Barry and Fei Huang. Philipp Ratz and Samuel Stocksieker defended their PhDs, François Hu finished his postdoctoral fellowship, and Marouane Il Idrissi and Arsene Zotsa just arrived. Agathe Fernandes Machado and Olivier Côté (co-supervised with Ewen Gallic and Marie-Pier Côté) finished their PhD coursework and are now working full-time on their research. We wrote papers and gave talks… Thank you to all those who have supported us, and who continue to support us. We have at least two more years to work on insurance and predictive models, fairness, calibration, discrimination, trust, explainability, interpretability, market equilibria, competition, generative models, and so much more… We’ve still got a lot of work to do, and plenty of enthusiasm!