Generating Explanations for Conceptual Validation of Graph Neural Networks : An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs
Finzel, Bettina; Saranti, Anna; Angerschmid, Alessa; et al. (2022): Generating Explanations for Conceptual Validation of Graph Neural Networks : An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs, in: Künstliche Intelligenz : KI ; Forschung, Entwicklung, Erfahrungen ; Organ des Fachbereichs 1 Künstliche Intelligenz der Gesellschaft für Informatik e.V., GI, Berlin: Springer, vol. 36, no. 3–4, pp. 271–285, doi: 10.1007/s13218-022-00781-7.
Faculty/Chair:
Title of the Journal:
Künstliche Intelligenz : KI ; Forschung, Entwicklung, Erfahrungen ; Organ des Fachbereichs 1 Künstliche Intelligenz der Gesellschaft für Informatik e.V., GI
ISSN:
0933-1875
1610-1987
Publisher Information:
Berlin: Springer
Year of publication:
2022
Volume:
36
Issue:
3–4
Pages:
271–285
Language:
English
Abstract:
Graph Neural Networks (GNN) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain’s and user’s perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step to perform powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of the symbolic representations of symmetric and non-symmetric figures that are taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and human-expected relevance for concepts learned by GNNs. Our experiments conducted on our benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. Our findings open up a variety of avenues for future research on validatable explanations for GNNs.
GND Keywords:
Neuronales Netz
Wissensgraph
Explainable Artificial Intelligence
Induktive logische Programmierung
Keywords:
Graph neural networks (GNN)
Explainable AI (xAI)
Inductive logic programming (ILP)
Symbolic AI
Kandinsky pattern (KP)
DDC Classification:
RVK Classification:
Type:
Article
Activation date:
November 8, 2023
Permalink
https://fis.uni-bamberg.de/handle/uniba/91634