Enhancing Model Transparency: A Dialogue System Approach to XAI with Domain Knowledge
Feustel, Isabel; Rach, Niklas; Minker, Wolfgang; et al. (2026): Enhancing Model Transparency: A Dialogue System Approach to XAI with Domain Knowledge, in: Bamberg: Otto-Friedrich-Universität, pp. 248–258.
Faculty/Chair:
Author:
Feustel, Isabel; Rach, Niklas; Minker, Wolfgang; et al.
Publisher Information:
Year of publication:
2026
Pages:
248–258
Source/Other editions:
Tatsuya Kawahara, Vera Demberg, Stefan Ultes, et al. (eds.), Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Kyoto: Association for Computational Linguistics, 2024, pp. 248–258
Year of first publication:
2024
Language:
English
Abstract:
Explainable artificial intelligence (XAI) is a rapidly evolving field that seeks to create AI systems that can provide human-understandable explanations for their decision-making processes. However, these explanations rely on model- and data-specific information only. To support better human decision-making, integrating domain knowledge into AI systems is expected to enhance understanding and transparency. In this paper, we present an approach for combining XAI explanations with domain knowledge within a dialogue system. We concentrate on techniques derived from the field of computational argumentation to incorporate domain knowledge and corresponding explanations into human-machine dialogue. We implement the approach in a prototype system for an initial user evaluation, where users interacted with the dialogue system to receive predictions from an underlying AI model. The participants were able to explore different types of explanations and domain knowledge. Our results indicate that users tend to evaluate model performance more effectively when domain knowledge is integrated. On the other hand, we found that domain knowledge was not frequently requested by users during dialogue interactions.
Keywords:
-
Peer Reviewed:
Yes
International Distribution:
Yes
Type:
Conference object
Activation date:
March 25, 2026
Permalink
https://fis.uni-bamberg.de/handle/uniba/114438