Title: Enriching LIME with Inductive Logic Programming: Explaining Deep Learning Classifiers with Logic Rules in a Companion System Framework
Author: Rabold, Johannes (ORCID: 0000-0003-0656-5881)
Type: Master's thesis (Masterarbeit), Otto-Friedrich-Universität Bamberg, 2018
Date deposited: 2023-02-01
Date available: 2023-02-01
Date issued: 2022
Handle: https://fis.uni-bamberg.de/handle/uniba/46527
URN: urn:nbn:de:bvb:473-irb-465273
Language: English
Keywords: Explainable AI; Deep Learning; Inductive Logic Programming; LIME; Companion System; Local Interpretable Model-Agnostic Explanations
DDC: 004

Abstract: With the rise of black-box classifiers such as Deep Learning networks, the need for interpretable and complete explanations of their decisions becomes apparent. Users need the possibility to ask why a classifier inferred a particular result. Logic clauses induced by Inductive Logic Programming (ILP) systems are superior in expressiveness to visual explanations alone. This thesis takes the ideas of LIME, a visual explanation framework, and enriches them with an ILP component to obtain comprehensible and powerful explanations for the inference results of Deep Learning networks on images. The background knowledge for the predicates is obtained both automatically and through an annotation system that lets humans annotate labels and relations. Together, the human labeling system and the explanation component form a Companion System in which not only does the AI help the user, but the user also helps the AI.