The Next Generation of Medical Decision Support: A Roadmap Toward Transparent Expert Companions





Faculty/Professorship: Cognitive Systems  
Author(s): Kiefer, Sebastian; Finzel, Bettina; Schmid, Ute
By: Bruckert, Sebastian
Title of the Journal: Frontiers in Artificial Intelligence
ISSN: 2624-8212
Publisher Information: Lausanne: Frontiers Media
Year of publication: 2020
Volume: 3
Article Number: 507973
Pages: 1-13
Language(s): English
Remark: Secondary publication of the publisher's version on March 9, 2021
Licence: Creative Commons - CC BY - Attribution 4.0 International 
DOI: 10.3389/frai.2020.507973
URL: https://www.frontiersin.org/article/10.3389/fra...
Abstract: 
The increasing quality and performance of artificial intelligence (AI) in general, and machine learning (ML) in particular, has been accompanied by a wider use of these approaches in everyday life. As part of this development, ML classifiers have also gained importance for diagnosing diseases within biomedical engineering and the medical sciences. However, many of these ubiquitous high-performing ML algorithms exhibit a black-box nature, leading to opaque and incomprehensible systems that complicate human interpretation of single predictions or of the prediction process as a whole. This poses a serious challenge for human decision makers, who must develop trust, which is much needed in life-changing decision tasks. This paper addresses the question of how expert companion systems for decision support can be designed to be interpretable and therefore transparent and comprehensible for humans. In addition, an approach to interactive ML and human-in-the-loop learning is demonstrated in order to integrate human expert knowledge into ML models, so that humans and machines act as companions within a critical decision task. We especially address the problem of semantic alignment between ML classifiers and their human users as a prerequisite for semantically relevant and useful explanations and interactions. Our roadmap paper presents and discusses an interdisciplinary yet integrated Comprehensible Artificial Intelligence (cAI) transition framework with regard to the task of medical diagnosis. We explain and integrate relevant concepts and research areas to provide the reader with a hands-on cookbook for achieving the transition from opaque black-box models to interactive, transparent, comprehensible, and trustworthy systems. To make our approach tangible, we present suitable state-of-the-art methods with regard to the medical domain and include a realization concept for our framework. The emphasis is on the concept of Mutual Explanations (ME), which we introduce as a dialog-based, incremental process in order to build trust among human users of ML and to give them stronger participation in the learning process.
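
To make the human-in-the-loop idea from the abstract concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of an incremental feedback loop: a classifier flags its least confident predictions, an expert supplies corrected labels, and the model is retrained. It assumes a generic scikit-learn-style classifier; names such as expert_label are illustrative placeholders.

    # Hypothetical sketch of interactive, human-in-the-loop learning.
    # The expert dialog is simulated by reusing the known labels.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_pool, y_train, y_pool = train_test_split(
        X, y, test_size=0.8, random_state=0
    )

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    def expert_label(i):
        """Stand-in for a dialog with a medical expert (placeholder)."""
        return y_pool[i]

    for _ in range(3):  # a few incremental feedback rounds
        confidence = model.predict_proba(X_pool).max(axis=1)
        uncertain = np.argsort(confidence)[:10]  # least confident cases
        X_train = np.vstack([X_train, X_pool[uncertain]])
        y_train = np.concatenate(
            [y_train, [expert_label(i) for i in uncertain]]
        )
        keep = np.ones(len(X_pool), dtype=bool)
        keep[uncertain] = False
        X_pool, y_pool = X_pool[keep], y_pool[keep]
        model.fit(X_train, y_train)  # retrain with expert feedback

    print("pool accuracy after feedback:", model.score(X_pool, y_pool))

In the paper's framing, the plain label query would be replaced by a mutual-explanation dialog, but the incremental retraining structure is the same.
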
SWD Keywords: Artificial intelligence; Machine learning; Interpretation; Diagnosis; Trust; Health care; Decision support
Keywords: explainable artificial intelligence, interactive ML, interpretability, trust, medical diagnosis, medical decision support, companion
DDC Classification: 004 Computer science  
RVK Classification: ST 302   
Peer Reviewed: Yes
International Distribution: Yes
Open Access Journal: Yes
Document Type: Article
URI: https://fis.uni-bamberg.de/handle/uniba/49222
Release Date: 21 December 2020
Project: Transparent Medical Expert Companion
Open Access Publication Fund 2012-2020

File: fisba49222.pdf
Size: 1.93 MB
Format: Adobe PDF