Mutual Explanations for Cooperative Decision Making in Medicine




Faculty/Professorship: Cognitive Systems  
Author(s): Schmid, Ute; Finzel, Bettina
Publisher Information: Bamberg : Otto-Friedrich-Universität
Year of publication: 2022
Pages: 227-233
Source/Other editions: Künstliche Intelligenz : KI ; Forschung, Entwicklung, Erfahrungen, 34 (2020), 2, pp. 227-233 - ISSN: 1610-1987
Is version of: 10.1007/s13218-020-00633-2
Year of first publication: 2020
Language(s): English
Licence: Creative Commons - CC BY - Attribution 4.0 International 
URN: urn:nbn:de:bvb:473-irb-552219
Abstract: 
Exploiting mutual explanations for interactive learning is presented as part of an interdisciplinary research project on transparent machine learning for medical decision support. The focus of the project is to combine black-box deep learning approaches with interpretable machine learning for the classification of different types of medical images, uniting the predictive accuracy of deep learning with the transparency and comprehensibility of interpretable models. Specifically, we present an extension of the Inductive Logic Programming system Aleph that allows for interactive learning. Medical experts can ask for verbal explanations; they can correct classification decisions and, in addition, correct the explanations themselves. Thereby, expert knowledge is taken into account in the form of constraints for model adaptation.
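The interaction loop described in the abstract can be illustrated with a minimal sketch. It is written in Python for self-containment (the actual system extends the Prolog-based ILP system Aleph); all class, function, and literal names below are hypothetical and purely illustrative, assuming only that an expert's correction of an explanation is stored as a constraint that prunes later hypotheses.

# Minimal sketch of the mutual-explanation loop: the learner proposes a
# rule, the expert inspects its verbal explanation, and a correction is
# stored as a constraint that future hypotheses must satisfy. Names are
# illustrative, not part of the Aleph extension described in the paper.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    head: str
    body: frozenset  # body literals of the learned clause

    def explain(self):
        # Verbal explanation shown to the medical expert.
        return f"{self.head} because " + " and ".join(sorted(self.body))

@dataclass
class InteractiveLearner:
    constraints: set = field(default_factory=set)  # literals the expert rejected

    def violates(self, rule):
        # A hypothesis is pruned if it uses any literal the expert rejected.
        return bool(rule.body & self.constraints)

    def expert_correction(self, wrong_literal):
        # The expert's correction of an explanation becomes a constraint.
        self.constraints.add(wrong_literal)

    def select(self, candidates):
        # Return the first candidate consistent with all constraints.
        return next((r for r in candidates if not self.violates(r)), None)

learner = InteractiveLearner()
candidates = [
    Rule("tumour(X)", frozenset({"artifact(X)", "irregular_border(X)"})),
    Rule("tumour(X)", frozenset({"dark_region(X)", "irregular_border(X)"})),
]
print(learner.select(candidates).explain())   # explanation offered to the expert
learner.expert_correction("artifact(X)")      # expert: 'artifact' is not a valid reason
print(learner.select(candidates).explain())   # re-selected rule respects the constraint

In the actual Aleph extension, such corrections would presumably be expressed as Prolog integrity constraints rather than as a literal blacklist; the sketch only mirrors the explain-correct-adapt control flow.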
GND Keywords: Induktive logische Programmierung; Constraint <Künstliche Intelligenz>; Maschinelles Lernen; Entscheidungsunterstützung
Keywords: Human-AI partnership, Inductive Logic Programming, Explanations as constraints
DDC Classification: 004 Computer science  
RVK Classification: ST 302   
Type: Article
URI: https://fis.uni-bamberg.de/handle/uniba/55221
Release Date: 22. September 2022
Project: Transparenter Begleiter für medizinische Anwendung (Transparent Companion for Medical Applications)

File: fisba55221.pdf (928.67 kB, PDF)