Explainability is not Enough: Requirements for Human-AI-Partnership in Complex Socio-Technical Systems
Faculty/Professorship: Cognitive Systems  
Author(s): Wäfler, Toni; Schmid, Ute  
By: Waefler, Toni 
Publisher Information: Bamberg : Otto-Friedrich-Universität
Year of publication: 2021
Pages: 185-194
Source/Other editions: Proceedings of the 2nd European Conference on the Impact of Artificial Intelligence and Robotics (ECIAIR 2020) / ed. by Florinda Matos. Lisbon: ACPIL, 2020, pp. 185-194. - ISBN 9781912764747
Year of first publication: 2020
Language(s): English
DOI: 10.20378/irb-49775
Licence: Creative Commons - CC BY-NC-SA - Attribution - NonCommercial - ShareAlike 4.0 International 
URL: https://fis.uni-bamberg.de/handle/uniba/49781
URN: urn:nbn:de:bvb:473-irb-497751
Abstract: 
Explainability has been recognized as an important requirement of artificial intelligence (AI) systems. Transparent decision policies and explanations regarding why an AI system comes about a certain decision is a pre-requisite if AI is supposed to support human decision-making or if human-AI collaborative decision-making is envisioned. Human-AI interaction and joint decision-making is required in many real-world domains, where risky decisions have to be made (e.g. medical diagnosis) or complex situations have to be assessed (e.g. states of machines or production processes). However, in this paper we theorize that explainability is necessary but not sufficient. Coming from the point of view of work psychology we argue that for the human part of the human-AI system much more is required than intelligibility. In joint human-AI decision-making a certain role is assigned to the human, which normally encompasses tasks such as (i) verifying AI based decision suggestions, (ii) improving AI systems, (iii) learning from AI systems, and (iv) taking responsibility for the final decision as well as for compliance with legislation and ethical standards. Empowering the human to take this demanding role requires not only human expertise but e.g. also human motivation, which is triggered by a suitable task design. Furthermore, at work humans normally do not take decisions as lonely wolves but in formal and informal cooperation with other humans. Hence, to design effective explainability and to empower for true human-AI collaborative decision-making, embedding human-AI dyads into a socio-technical context is necessary. Coming from theory, this paper presents system design criteria on different levels substantiated by work psychology. The criteria are described and confronted with a use case scenario of AI-supported medical decision making in the context of digital pathology. On this basis, the need for further research is outlined.
SWD Keywords: Artificial Intelligence ; Machine Learning ; Human Factor ; Social Technology ; Motivation
Keywords: Companion Technology, Explainable AI, Interactive Learning, Human Factors, Socio-Technical Systems, Motivation
DDC Classification: 004 Computer science  
RVK Classification: ST 302   
Peer Reviewed: Yes
International Distribution: Yes
Document Type: Contribution to an Article Collection
URI: https://fis.uni-bamberg.de/handle/uniba/49775
Release Date: 18 May 2021

File: fisba49775.pdf (732.02 kB, Adobe PDF)