Concept Embedding Analysis Based Methods for the Safety Assurance of Deep Neural Networks: Towards Safe Automotive Computer Vision Applications

Faculty/Professorship: Fakultät Wirtschaftsinformatik und Angewandte Informatik: Theses; Cognitive Systems
Author(s): Schwalbe, Gesina 
Publisher Information: Bamberg: Otto-Friedrich-Universität
Year of publication: 2022
Pages: 354; illustrations, diagrams
Supervisor(s): Schmid, Ute; Wolter, Diedrich; Lüttgen, Gerald
Language(s): English
Remark: Cumulative dissertation, Otto-Friedrich-Universität Bamberg, 2022
DOI: 10.20378/irb-57172
Licence: Creative Commons Attribution 4.0 International (CC BY)
URN: urn:nbn:de:bvb:473-irb-571725
Abstract: 
Deep neural networks (DNNs) are regarded as a key technology for computer vision (CV) in automated driving and similar safety-critical domains. For market approval, the safe operation of such systems has to be ensured. However, DNNs exhibit properties such as complexity and opaqueness, which make new approaches to safety argumentation and evidence acquisition inevitable. This thesis both identifies and tackles key practical issues associated with the safety assurance of deep convolutional neural networks (CNNs) in automotive computer vision applications.
The key practical issue handled here is that many safety requirements originate from symbolic domain knowledge that relates semantic concepts from natural language, such as “eyes usually belong to an obstacle”. This is problematic because the outputs of typical object detection CNNs are restricted to a few semantic label classes, and both the input and the intermediate outputs are non-symbolic. This thesis bridges this gap using methods from concept embedding analysis (CA). CA research aims to associate semantic concepts with items in the DNN intermediate outputs, thus providing access to symbolic knowledge encoded in the model. A baseline CA method is chosen based on a broad literature analysis and, in the course of several experimental studies, substantially improved with respect to efficiency and performance. This enabled, for the first time, the successful application of CA to state-of-the-art CNNs for object detection.
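To make the CA idea concrete, the following is a minimal sketch, assuming PyTorch: a linear concept probe (here a 1×1 convolution, in the spirit of Net2Vec-style approaches) is trained to predict a semantic concept mask, e.g. for “eye”, from intermediate CNN activations. All shapes, names, and data are illustrative stand-ins, not the thesis' actual setup.

```python
import torch
import torch.nn as nn

class ConceptProbe(nn.Module):
    """1x1 convolution mapping C activation channels to a concept heatmap."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.probe = nn.Conv2d(num_channels, 1, kernel_size=1)

    def forward(self, activations: torch.Tensor) -> torch.Tensor:
        # Sigmoid turns the linear response into per-pixel concept scores.
        return torch.sigmoid(self.probe(activations))

# Toy training loop on (activation, concept-mask) pairs; in practice the
# activations come from a hooked layer of the inspected CNN and the masks
# from concept annotations.
probe = ConceptProbe(num_channels=256)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for _ in range(100):
    acts = torch.randn(8, 256, 32, 32)        # stand-in activations
    masks = torch.rand(8, 1, 32, 32).round()  # stand-in binary concept masks
    loss = loss_fn(probe(acts), masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

If such a probe reaches high mask-prediction performance, the concept can be considered (linearly) encoded in the inspected layer.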
Based on the improved CA method, diverse approaches to provide evidence for different types of symbolic safety requirements are developed. Concretely, CA is used (1) to verify the correct encoding of semantic relations, (2) to build global, interpretable, and inspectable proxy models, and (3) as part of a framework for inspecting and verifying compliance with symbolic fuzzy logic rules. The applicability of these approaches is shown and evaluated on several state-of-the-art object detector CNNs and on backends of such detectors.
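As an illustration of use (3), the sketch below quantifies compliance with a rule like “eye(x) → obstacle(x)” on per-pixel scores in [0, 1]. The Łukasiewicz implication is just one common fuzzy semantics; the thesis framework may use a different logic, and all names and data here are hypothetical.

```python
import torch

def lukasiewicz_implication(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Truth of a -> b under Lukasiewicz fuzzy logic: min(1, 1 - a + b)."""
    return torch.clamp(1.0 - a + b, max=1.0)

def rule_compliance(premise: torch.Tensor, conclusion: torch.Tensor) -> float:
    """Aggregate per-pixel rule truth values into one compliance score."""
    return lukasiewicz_implication(premise, conclusion).mean().item()

eye = torch.rand(1, 32, 32)       # stand-in "eye" concept heatmap
obstacle = torch.rand(1, 32, 32)  # stand-in "obstacle" detection scores
print(f"rule compliance: {rule_compliance(eye, obstacle):.3f}")
```

A compliance score near 1 indicates the rule holds on the inspected outputs; systematically low scores flag potential violations worth closer inspection.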
Lastly, the contribution of CA-based evidence generation to safety assurance is highlighted. A template for a safety argumentation structure is developed, and a broad review of existing DNN-specific evidence generation methods is conducted. This reveals the need for the CA-based methods developed in this thesis and properly positions them within the overall safety argument.
Altogether, the works in this thesis constitute a solid step towards the safer usage of DNNs in automotive CV applications.
GND Keywords: Neuronales Netz; Maschinelles Sehen; Technische Sicherheit; Objekterkennung; Autonomes Fahrzeug
Keywords: Deep neural network, Computer Vision, Safety, Automated Driving, Concept Embedding Analysis, Explainable Artificial Intelligence, XAI, Safety Argument, Verification, Object Detection
DDC Classification: 004 Computer science; 620 Engineering
RVK Classification: ST 301   
Type: Doctoral thesis
URI: https://fis.uni-bamberg.de/handle/uniba/57172
Release Date: 2 February 2023

File: fisba57172.pdf (31.23 MB, PDF)