Explainability is not Enough: Requirements for Human-AI Partnership in Complex Socio-Technical Systems

Faculty/Professorship: Cognitive Systems  
Author(s): Wäfler, Toni; Schmid, Ute  
Title of the compilation: Proceedings of the 2nd European Conference on the Impact of Artificial Intelligence and Robotics (ECIAIR 2020)
Editors: Matos, Florinda
Corporate Body: European Conference on the Impact of Artificial Intelligence and Robotics (ECIAIR 2020), 2, 2020
Instituto Universitário de Lisboa (ISCTE-IUL)
Publisher Information: Lisbon: ACPIL
Year of publication: 2020
Pages: 185-194
ISBN: 9781912764747
Language(s): English
DOI: 10.34190/EAIR.20.007
URL: https://cogsys.uni-bamberg.de/publications/waef...
Explainability has been recognized as an important requirement for artificial intelligence (AI) systems. Transparent decision policies and explanations of how an AI system arrives at a certain decision are a prerequisite if AI is supposed to support human decision-making or if human-AI collaborative decision-making is envisioned. Human-AI interaction and joint decision-making are required in many real-world domains where risky decisions have to be made (e.g. medical diagnosis) or complex situations have to be assessed (e.g. states of machines or production processes). However, in this paper we theorize that explainability is necessary but not sufficient. From the point of view of work psychology, we argue that the human part of the human-AI system requires much more than intelligibility. In joint human-AI decision-making, a certain role is assigned to the human, which normally encompasses tasks such as (i) verifying AI-based decision suggestions, (ii) improving AI systems, (iii) learning from AI systems, and (iv) taking responsibility for the final decision as well as for compliance with legislation and ethical standards. Empowering the human to take this demanding role requires not only human expertise but also, for example, human motivation, which is triggered by suitable task design. Furthermore, at work humans normally do not make decisions as lone wolves but in formal and informal cooperation with other humans. Hence, to design effective explainability and to empower true human-AI collaborative decision-making, human-AI dyads must be embedded in a socio-technical context. Starting from theory, this paper presents system design criteria on different levels, substantiated by work psychology. The criteria are described and confronted with a use case scenario of AI-supported medical decision-making in the context of digital pathology. On this basis, the need for further research is outlined.
GND Keywords: Künstliche Intelligenz ; Maschinelles Lernen ; Faktor Mensch ; Sozialtechnologie ; Motivation
Keywords: Companion Technology, Explainable AI, Interactive Learning, Human Factors, Socio-Technical Systems, Motivation
DDC Classification: 004 Computer science  
RVK Classification: ST 302   
Peer Reviewed: Yes
International Distribution: Yes
Type: Contribution to an Article Collection
URI: https://fis.uni-bamberg.de/handle/uniba/49781
Release Date: 18 May 2021