Enriching LIME with Inductive Logic Programming: Explaining Deep Learning Classifiers with Logic Rules in a Companion System Framework

Faculty/Professorship: Cognitive Systems; Fakultät Wirtschaftsinformatik und Angewandte Informatik: Abschlussarbeiten (final theses)
Author(s): Rabold, Johannes  
Publisher Information: Bamberg : Otto-Friedrich-Universität
Year of publication: 2022
Pages: v, 52 ; Illustrationen
Supervisor(s): Schmid, Ute  
Language(s): English
Master's thesis, Otto-Friedrich-Universität Bamberg, 2018
DOI: 10.20378/irb-46527
Licence: Creative Commons Attribution 4.0 International (CC BY 4.0)
URN: urn:nbn:de:bvb:473-irb-465273
With the rise of black-box classifiers such as deep learning networks, the need for interpretable and complete explanations becomes apparent. Users must be able to ask why a classifier inferred a particular result. Logic clauses induced by Inductive Logic Programming (ILP) systems are more expressive than visual explanations alone.
This thesis takes the ideas of LIME, a visual explanation framework, and enriches them with an ILP component to obtain comprehensible and powerful explanations for the inference results of deep learning networks on images. The background knowledge for the predicates is obtained both automatically and through an annotation system that lets humans label objects and relations. The human labeling system and the explanation component together form a Companion System in which not only does the AI help the user, but the user also helps the AI.
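The LIME core loop the abstract builds on can be illustrated with a minimal, self-contained sketch: perturb the instance in an interpretable space (superpixels switched on/off), query the black box, and fit a locally weighted linear model whose coefficients serve as the explanation. The `black_box` function and the segment setup below are hypothetical stand-ins for illustration, not the classifiers or segmentation used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a black-box image classifier: the "image" is a vector
# of d superpixel intensities, and the score depends only on superpixels
# 0 and 2 (a hypothetical model, chosen so the explanation is checkable).
d = 6
image = np.ones(d)

def black_box(x):
    # Score rises with superpixels 0 and 2 only.
    return 2.0 * x[:, 0] + 3.0 * x[:, 2]

# Perturb: random binary masks decide which superpixels stay visible.
n = 500
Z = rng.integers(0, 2, size=(n, d)).astype(float)
y = black_box(Z * image)               # query the black box

# Proximity kernel: perturbations that remove fewer superpixels
# (i.e. stay closer to the original image) get higher weight.
dist = 1.0 - Z.mean(axis=1)
w = np.exp(-(dist ** 2) / 0.25)

# Interpretable surrogate: weighted ridge regression via normal equations.
lam = 1e-3
A = Z.T @ (w[:, None] * Z) + lam * np.eye(d)
b = Z.T @ (w * y)
coef = np.linalg.solve(A, b)

# The largest coefficients point at the superpixels driving the prediction.
top2 = sorted(np.argsort(coef)[-2:].tolist())
print(top2)  # → [0, 2]
```

In the thesis's setting, the symbols attached to such influential superpixels (from automatic extraction or human annotation) would then feed an ILP system as background knowledge, so that the explanation becomes a logic clause rather than only a highlighted image region.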
GND Keywords: Künstliche Intelligenz; Deep learning; Induktive Logik; System; Framework <Informatik>
Keywords: Explainable AI, Deep Learning, Inductive Logic Programming, LIME, Companion System, Local Interpretable Model-Agnostic Explanations
DDC Classification: 004 Computer science  
RVK Classification: ST 301   
Type: Masterthesis
URI: https://fis.uni-bamberg.de/handle/uniba/46527
Release Date: 1 February 2023

File: fisba46527.pdf (3.36 MB, PDF)