EXCD-ECAI25: Evaluating Explainable AI and Complex Decision-Making
Workshop at ECAI, Bologna, Italy, October 25-30, 2025
Conference website: https://sites.google.com/view/excd-2025
Submission link: https://easychair.org/conferences/?conf=excdecai25
While explainable artificial intelligence (XAI) has become massively popular and impactful in recent years and is now an integral part of all major AI venues, progress in the field is, to some degree, still hindered by a lack of agreed-upon evaluation methods and metrics. Many articles present only anecdotal evidence, and the large variation in explanation techniques and application domains makes it challenging to define, quantify, and compare the relevant performance criteria for XAI. This leads to a lack of standardized baselines and an established state of the art, making the contributions of newly proposed XAI methods difficult to evaluate. The discussion on how to evaluate explainability and interpretability, whether through user studies or with computational proxy measures, is ongoing.
This has been a particular (but not exclusive) challenge for complex decision-making approaches that go beyond classification or regression models. Therefore, this workshop brings together researchers interested in XAI in general, and in AI planning, reinforcement learning, and data-driven optimization in particular, to discuss recent developments in XAI evaluation and to collaboratively develop a roadmap for addressing this gap.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. We welcome papers of up to 7 pages plus references, following the ECAI format. Accepted papers will not be published in archival proceedings, which means you can submit your paper to another venue after the workshop. However, we aim to edit a special issue on the topic of the workshop, giving selected papers the opportunity to be published in extended versions.
List of Topics
- Evaluation metrics for XAI (even if not yet applied to complex decision-making problems);
- Benchmarks for XAI evaluation;
- LLMs and XAI evaluation;
- Agentic system evaluation;
- Reports on evaluation with different stakeholder groups in practice;
- Computational evaluation approaches;
- Evaluating interactive systems/open-ended interactions;
- Learnings from experimental studies in psychology and the social sciences;
- Evaluation methods for explainable autonomous agents;
- Evaluation of XAI in embodied systems/robotics;
- Impact of users' preferences;
- HCI for XAI evaluation;
- Testing actionable algorithmic recourse;
- Evaluating contestability of AI decisions;
- Evaluation metrics for XAI in combinatorial optimization;
- Evaluation of interpretable RL models;
- Evaluation of explanations in AI planning, e.g. based on model reconciliation;
- Performance trade-offs of interpretable methods for optimization.
Committees
Organizing committee
- Dr. Hendrik Baier
- Dr. Yingqian Zhang
- Mark Towers
- Balint Gyevnar
Contact
All questions about submissions should be emailed to h.j.s.baier@tue.nl