The Next Generation of Medical Decision Support: A Roadmap Toward Transparent Expert Companions

Faculty/Professorship: Cognitive Systems  
Author(s): Kiefer, Sebastian; Finzel, Bettina; Schmid, Ute
By: Bruckert, Sebastian; ...
Publisher Information: Bamberg : Otto-Friedrich-Universität
Year of publication: 2020
Issue: 507973
Pages: 13
Source/Other editions: Frontiers in Artificial Intelligence, 3 (2020), 13 pp. - ISSN: 2624-8212
is version of: 10.3389/frai.2020.507973
Year of first publication: 2020
Language(s): English
Licence: Creative Commons - CC BY - Attribution 4.0 International 
The increasing quality and performance of artificial intelligence (AI) in general, and machine learning (ML) in particular, has led to wider use of these approaches in everyday life. As part of this development, ML classifiers have also gained importance for diagnosing diseases in biomedical engineering and the medical sciences. However, many of these ubiquitous high-performing ML algorithms have a black-box nature, leading to opaque and incomprehensible systems that complicate human interpretation of single predictions or of the whole prediction process. This poses a serious challenge for human decision makers, who must develop trust in the system, a necessity in life-changing decision tasks. This paper addresses the question of how expert companion systems for decision support can be designed to be interpretable, and therefore transparent and comprehensible, for humans. In addition, an approach to interactive ML and human-in-the-loop learning is demonstrated, which integrates human expert knowledge into ML models so that humans and machines act as companions within a critical decision task. We especially address the problem of Semantic Alignment between ML classifiers and their human users as a prerequisite for semantically relevant and useful explanations and interactions. Our roadmap paper presents and discusses an interdisciplinary yet integrated Comprehensible Artificial Intelligence (cAI) transition framework with regard to the task of medical diagnosis. We explain and integrate relevant concepts and research areas to provide the reader with a hands-on cookbook for achieving the transition from opaque black-box models to interactive, transparent, comprehensible, and trustworthy systems. To make our approach tangible, we present suitable state-of-the-art methods for the medical domain and include a realization concept for our framework.
The emphasis is on the concept of Mutual Explanations (ME), which we introduce as a dialog-based, incremental process designed to foster trust in human ML users and to give them stronger participation in the learning process.
GND Keywords: Künstliche Intelligenz; Maschinelles Lernen; Interpretation; Diagnose; Vertrauen; Gesundheitswesen; Entscheidungsunterstützung
Keywords: explainable artificial intelligence, interactive ML, interpretability, trust, medical diagnosis, medical decision support, companion
DDC Classification: 004 Computer science  
RVK Classification: ST 302   
Type: Article
Release Date: 21 December 2020
Project: Transparenter Begleiter für medizinische Anwendung
Open-Access-Publikationsfonds 2012-2020

File: fisba49222.pdf (1.93 MB, PDF)