Concept Embedding Analysis Based Methods for the Safety Assurance of Deep Neural Networks : towards safe automotive computer vision applications
Schwalbe, Gesina (2022): Concept Embedding Analysis Based Methods for the Safety Assurance of Deep Neural Networks : towards safe automotive computer vision applications, Bamberg: Otto-Friedrich-Universität, doi: 10.20378/irb-57172.
Author: Schwalbe, Gesina
Publisher Information: Bamberg: Otto-Friedrich-Universität
Year of publication: 2022
Pages:
Supervisor:
Language: English
Remark: Cumulative dissertation, Otto-Friedrich-Universität Bamberg, 2022
DOI: 10.20378/irb-57172
Abstract:
Deep neural networks (DNNs) are regarded as a key technology for computer vision (CV) in automated driving and similar safety-critical domains. For admission to market, the safe operation of such systems has to be ensured. However, DNNs are inherently complex and opaque, which makes new approaches to safety argumentation and evidence acquisition indispensable. This thesis identifies and tackles key practical issues in the safety assurance of convolutional neural networks (CNNs) for automotive computer vision applications.
The key practical issue addressed here is that many safety requirements originate from symbolic domain knowledge that relates semantic concepts from natural language, such as “eyes usually belong to an obstacle”. This is a problem because the outputs of typical object detection CNNs are restricted to a few semantic label classes, and both the inputs and the intermediate outputs are non-symbolic. This thesis bridges the gap using methods from concept embedding analysis (CA). CA research aims to associate semantic concepts with items in the intermediate outputs of a DNN, thus providing access to symbolic knowledge encoded in the model. A baseline CA method is chosen based on a broad literature analysis and, over the course of several experimental studies, substantially improved in efficiency and performance. This allowed CA to be successfully applied to state-of-the-art object detection CNNs for the first time.
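To make the core idea tangible, below is a minimal, self-contained sketch of a linear concept probe in the spirit of CA methods such as TCAV or Net2Vec. The activations are synthetic stand-ins and all names are illustrative; this is not the exact method developed in the thesis.

# Minimal sketch of concept embedding analysis (CA) via a linear concept
# probe (in the spirit of TCAV / Net2Vec): fit a linear model on
# intermediate CNN activations to predict whether a semantic concept is
# present. All data below is synthetic and only illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for flattened activations of one CNN layer:
# 100 samples where the concept "eye" is visible, 100 where it is not.
acts_concept = rng.normal(loc=0.5, size=(100, 512))  # concept present
acts_random = rng.normal(loc=0.0, size=(100, 512))   # concept absent
X = np.vstack([acts_concept, acts_random])
y = np.array([1] * 100 + [0] * 100)

# The probe's weight vector is the "concept embedding": the direction in
# activation space associated with the concept.
probe = LogisticRegression(max_iter=1000).fit(X, y)
concept_vector = probe.coef_[0]

# High accuracy (ideally measured on held-out samples) indicates that the
# layer encodes the concept.
print("probe accuracy:", probe.score(X, y))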
Based on the improved CA method, diverse approaches are developed to provide evidence for different types of symbolic safety requirements. Concretely, CA is used (1) to verify the correct encoding of semantic relations, (2) to build global, interpretable, and inspectable proxy models, and (3) as part of a framework for the inspection and verification of compliance with symbolic fuzzy logic rules. The applicability of the approaches is demonstrated and evaluated on several state-of-the-art object detector CNNs and their backbones.
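The fuzzy-rule verification idea can likewise be sketched briefly. Assuming, hypothetically, that a concept probe yields a per-region “eye” score and the detector a per-region “obstacle” score, a fuzzy implication (here the Łukasiewicz implication, one common choice) turns the rule “eye implies obstacle” into a graded compliance score in [0, 1]; this illustrates the general technique, not the thesis's specific framework.

# Hedged sketch: scoring compliance with a symbolic fuzzy rule such as
# "eyes usually belong to an obstacle". Truth values come from
# (hypothetical) concept-probe and detector scores; the Lukasiewicz
# implication min(1, 1 - a + b) grades the rule "a implies b" in [0, 1].
import numpy as np

def lukasiewicz_implies(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fuzzy truth value of 'a implies b' (Lukasiewicz implication)."""
    return np.minimum(1.0, 1.0 - a + b)

# Illustrative per-region scores: the concept probe says "eye here",
# the object detector says "obstacle here".
eye_score = np.array([0.9, 0.8, 0.1, 0.7])
obstacle_score = np.array([0.95, 0.2, 0.05, 0.9])

compliance = lukasiewicz_implies(eye_score, obstacle_score)
print("rule compliance per region:", compliance)  # low values flag violations
print("mean compliance:", compliance.mean())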
Lastly, the contribution of CA-based evidence generation to safety assurance is highlighted. A template for a safety argumentation structure is developed, and a broad review of existing DNN-specific evidence generation methods is conducted. This reveals the need for the CA-based methods developed in this thesis and positions them properly within the overall safety argument.
Altogether, the work in this thesis constitutes a solid step towards the safer use of DNNs in automotive CV applications.
GND Keywords: Neuronales Netz (neural network); Maschinelles Sehen (computer vision); Technische Sicherheit (technical safety); Objekterkennung (object recognition); Autonomes Fahrzeug (autonomous vehicle)
Keywords: Deep neural network; Computer Vision; Safety; Automated Driving; Concept Embedding Analysis; Explainable Artificial Intelligence; XAI; Safety Argument; Verification; Object Detection
DDC Classification:
RVK Classification:
Type: Doctoral thesis
Activation date: February 2, 2023
Permalink: https://fis.uni-bamberg.de/handle/uniba/57172