Authors: Feustel, Isabel; Rach, Niklas; Minker, Wolfgang; Ultes, Stefan (ORCID: 0000-0003-2667-3126)
Date issued: 2024
Date deposited: 2025-09-16
URI: https://fis.uni-bamberg.de/handle/uniba/110387
Abstract: Explainable artificial intelligence (XAI) is a rapidly evolving field that seeks to create AI systems that can provide human-understandable explanations for their decision-making processes. However, these explanations rely on model- and data-specific information only. To support better human decision-making, integrating domain knowledge into AI systems is expected to enhance understanding and transparency. In this paper, we present an approach for combining XAI explanations with domain knowledge within a dialogue system. We concentrate on techniques derived from the field of computational argumentation to incorporate domain knowledge and corresponding explanations into human-machine dialogue. We implement the approach in a prototype system for an initial user evaluation, where users interacted with the dialogue system to receive predictions from an underlying AI model. The participants were able to explore different types of explanations and domain knowledge. Our results indicate that users tend to evaluate model performance more effectively when domain knowledge is integrated. On the other hand, we found that domain knowledge was not frequently requested by users during dialogue interactions.
Language: eng
Title: Enhancing Model Transparency: A Dialogue System Approach to XAI with Domain Knowledge
Type: conference object
DOI: 10.18653/v1/2024.sigdial-1.22