XAI in Medicine : Analysis and Evaluation of XAI Tools and Legal Liability for Neural Networks ; A Case Study on Tumor Image Classification
Tenne, Alina (2024): XAI in Medicine : Analysis and Evaluation of XAI Tools and Legal Liability for Neural Networks ; A Case Study on Tumor Image Classification, Bamberg: Otto-Friedrich-Universität, doi: 10.20378/irb-96672.
Author:
Tenne, Alina
Publisher Information:
Bamberg: Otto-Friedrich-Universität
Year of publication:
2024
Pages:
Supervisor:
Language:
English
Remark:
Master's thesis, Otto-Friedrich-Universität Bamberg, 2024
DOI:
10.20378/irb-96672
Abstract:
Artificial Intelligence (AI) and Machine Learning (ML) have immense potential to revolutionize various fields, especially in the domain of medicine. Deep learning models are increasingly used in the healthcare sector for image classification and disease diagnosis. However, the problem of explainability remains a major concern. It is still unclear how much explanatory power current XAI tools have.
Therefore, this analysis aims to evaluate explainable AI (XAI) tools based on the current Ethics Guidelines for Trustworthy AI from the European Commission. The purpose is to determine the extent to which current explanation algorithms provide trustworthy, transparent, and explanatory support for black box models. The issue of liability arises due to the lack of traceability and the increased use of black box models. It is unclear which organizations, individuals, or groups of people are liable in the event of a claim. This thesis analyzes the current draft legislation on the AI Act and the Legal Liability Directive with regard to the question of liability. Additionally, it examines the role of XAI tools in this context.
XAI tools currently provide extensive capabilities for visualising model decisions and explaining the factors that are most likely to have contributed to the outcome in an understandable manner. However, the analysis and evaluation of XAI tools revealed that there are some opportunities for improvement. Additionally, the availability of XAI tools heavily influences the issue of liability, as traceability and transparency are crucial elements for the legal implementation of new technologies.
GND Keywords:
Künstliche Intelligenz; Explainable Artificial Intelligence; Bildanalyse; Europäische Union; Richtlinie
Keywords:
Artificial Intelligence, Explainable AI, AI Act
DDC Classification:
RVK Classification:
Type:
Masterthesis
Activation date:
July 31, 2024
Permalink
https://fis.uni-bamberg.de/handle/uniba/96672