Explainability is not Enough: Requirements for Human-AI-Partnership in Complex Socio-Technical Systems
Waefler, Toni; Schmid, Ute (2020): Explainability is not Enough: Requirements for Human-AI-Partnership in Complex Socio-Technical Systems, in: Florinda Matos (Ed.), Proceedings of the 2nd European Conference on the Impact of Artificial Intelligence and Robotics (ECIAIR 2020), Lisbon: ACPIL, pp. 185–194, doi: 10.34190/EAIR.20.007.
Author:
Waefler, Toni; Schmid, Ute
Title of the compilation:
Proceedings of the 2nd European Conference on the Impact of Artificial Intelligence and Robotics (ECIAIR 2020)
Editors:
Florinda Matos
Corporate Body:
European Conference on the Impact of Artificial Intelligence and Robotics (ECIAIR 2020), 2, 2020
Instituto Universitário de Lisboa (ISCTE-IUL)
Publisher Information:
Lisbon: ACPIL
Year of publication:
2020
Pages:
185–194
ISBN:
9781912764747
Language:
English
DOI:
10.34190/EAIR.20.007
Abstract:
Explainability has been recognized as an important requirement of artificial intelligence (AI) systems. Transparent decision policies and explanations of why an AI system arrives at a certain decision are a prerequisite if AI is supposed to support human decision-making or if human-AI collaborative decision-making is envisioned. Human-AI interaction and joint decision-making are required in many real-world domains where risky decisions have to be made (e.g. medical diagnosis) or complex situations have to be assessed (e.g. states of machines or production processes). In this paper, however, we theorize that explainability is necessary but not sufficient. From the point of view of work psychology, we argue that the human part of the human-AI system requires much more than intelligibility. In joint human-AI decision-making, a certain role is assigned to the human, which normally encompasses tasks such as (i) verifying AI-based decision suggestions, (ii) improving AI systems, (iii) learning from AI systems, and (iv) taking responsibility for the final decision as well as for compliance with legislation and ethical standards. Empowering the human to take this demanding role requires not only human expertise but also, for example, human motivation, which is triggered by suitable task design. Furthermore, at work humans normally do not take decisions as lone wolves but in formal and informal cooperation with other humans. Hence, to design effective explainability and to empower true human-AI collaborative decision-making, the human-AI dyad must be embedded in a socio-technical context. Starting from theory, this paper presents system design criteria at different levels, substantiated by work psychology. The criteria are described and confronted with a use-case scenario of AI-supported medical decision-making in the context of digital pathology. On this basis, the need for further research is outlined.
GND Keywords:
Künstliche Intelligenz ; Maschinelles Lernen ; Faktor Mensch ; Sozialtechnologie ; Motivation
Keywords:
Companion Technology, Explainable AI, Interactive Learning, Human Factors, Socio-Technical Systems, Motivation
Peer Reviewed:
Yes
International Distribution:
Yes
Type:
Contribution to an Article Collection
Activation date:
May 18, 2021
Permalink
https://fis.uni-bamberg.de/handle/uniba/49781