IJCAI 2025 Workshop on Explainable Artificial Intelligence (XAI)

This week, Marouane will present recent work at the Workshop on Explainable Artificial Intelligence (XAI) at IJCAI in Montréal.

Explainable Artificial Intelligence (XAI) addresses the challenge of how to communicate and explain the decision-making of AI systems. The need for explainability increases as AI systems are deployed in critical applications, raising questions such as: How should explainable AI systems be designed? What queries should AI systems be able to answer about their models and decisions? How should user interfaces communicate decision-making? What types of user interactions should be supported? And how should explanation quality be assessed?

The Explainable AI (XAI) workshop at IJCAI provides a forum for discussing recent research on XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in AI, human-computer interaction, and cognitive theories of explanation and transparency. This topic is of particular importance to, but not limited to, machine learning, AI planning, and knowledge representation & reasoning.

In addition to encouraging descriptions of original or recent contributions to XAI (e.g., theory, simulation studies, subject studies, demonstrations, applications), we welcome contributions that survey related work, describe key issues that require further research, or highlight relevant challenges of interest to the AI community and plans for addressing them.

The paper, Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models, is available on arXiv.

Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models

Our paper Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models, with Marouane Il Idrissi and Agathe Fernandes Machado, is now online. It will be presented at the IJCAI 2025 Workshop on Explainable Artificial Intelligence (XAI) in Montréal this summer.

Cooperative game theory has become a cornerstone of post-hoc interpretability in machine learning, largely through the use of Shapley values. Yet, despite their widespread adoption, Shapley-based methods often rest on axiomatic justifications whose relevance to feature attribution remains debatable. In this paper, we revisit cooperative game theory from an interpretability perspective and argue for a broader and more principled use of its tools. We highlight two general families of efficient allocations, the Weber and Harsanyi sets, that extend beyond Shapley values and offer richer interpretative flexibility. We present an accessible overview of these allocation schemes, clarify the distinction between value functions and aggregation rules, and introduce a three-step blueprint for constructing reliable and theoretically grounded feature attributions. Our goal is to move beyond fixed axioms and provide the XAI community with a coherent framework to design attribution methods that are both meaningful and robust to shifting methodological trends.
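
To make the distinction between value functions and aggregation rules concrete, here is a minimal Python sketch (not the paper's code; the toy model, the baseline value function, and the alternative weighting are illustrative assumptions). It computes exact Shapley attributions for a three-feature model by enumerating coalitions, then reuses the same value function with a Banzhaf-style equal weighting of coalitions, a standard alternative to the Shapley rule, used here only to show that the weighting is a separate modelling choice rather than one of the efficient allocation families studied in the paper.

    from itertools import combinations
    from math import factorial

    # Toy model with three features (illustrative only).
    def model(x):
        return 2 * x[0] + x[1] * x[2] + x[0] * x[1] * x[2]

    # A simple "baseline" value function: features in coalition S keep their
    # observed values, the others are set to a reference point. Other value
    # functions (e.g. conditional expectations) fit the same interface.
    def value(S, x, baseline):
        z = [x[i] if i in S else baseline[i] for i in range(len(x))]
        return model(z)

    # Classical Shapley aggregation rule: weight of a coalition of size s
    # when attributing to one feature among n.
    def shapley_weight(s, n):
        return factorial(s) * factorial(n - s - 1) / factorial(n)

    # Generic attribution: weighted sum of the marginal contributions of
    # feature i over all coalitions of the remaining features.
    def attributions(x, baseline, weight=shapley_weight):
        n = len(x)
        phi = [0.0] * n
        for i in range(n):
            others = [j for j in range(n) if j != i]
            for s in range(n):
                for S in combinations(others, s):
                    marginal = value(set(S) | {i}, x, baseline) - value(set(S), x, baseline)
                    phi[i] += weight(s, n) * marginal
        return phi

    x, baseline = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
    print(attributions(x, baseline))                   # Shapley, approx. [4.0, 5.0, 5.0]

    # Same value function, different aggregation rule: Banzhaf-style equal
    # coalition weights. The attributions change, and they no longer sum to
    # the full prediction (this rule is not efficient).
    banzhaf = lambda s, n: 1.0 / 2 ** (n - 1)
    print(attributions(x, baseline, weight=banzhaf))   # approx. [3.5, 4.5, 4.5]

The brute-force enumeration is exponential in the number of features, so this is only meant to illustrate the two separate ingredients; choosing the value function and the aggregation rule deliberately is exactly the kind of decision the three-step blueprint in the paper is meant to make explicit.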