Author: Schwalbe, Gesina (ORCID: 0000-0003-2690-2478)
Date: 2023-02-02
Year of publication: 2022
Handle: https://fis.uni-bamberg.de/handle/uniba/57172
Description: Cumulative dissertation, Otto-Friedrich-Universität Bamberg, 2022

Title: Concept Embedding Analysis Based Methods for the Safety Assurance of Deep Neural Networks: towards safe automotive computer vision applications
Type: Doctoral thesis
Language: English
Keywords: Deep neural network; Computer Vision; Safety; Automated Driving; Concept Embedding Analysis; Explainable Artificial Intelligence; XAI; Safety Argument; Verification; Object Detection
DDC classification: 004; 620
URN: urn:nbn:de:bvb:473-irb-571725

Abstract:
Deep neural networks (DNNs) are regarded as a key technology for computer vision (CV) in automated driving and similar safety-critical domains. Before such systems can be admitted to the market, their safe operation has to be ensured. However, properties of DNNs such as their complexity and opaqueness make new approaches to safety argumentation and evidence acquisition inevitable. This thesis identifies and tackles key practical issues in the safety assurance of convolutional deep neural networks (CNNs) for automotive computer vision applications. The central issue addressed here is that many safety requirements originate from symbolic domain knowledge that relates semantic concepts from natural language, such as "eyes usually belong to an obstacle". This is a problem because the outputs of typical object detection CNNs are restricted to a few semantic label classes, and both the inputs and the intermediate outputs are non-symbolic. This thesis bridges the gap using methods from concept embedding analysis (CA). CA aims to associate semantic concepts with items in the intermediate outputs of a DNN, thus providing access to the symbolic knowledge encoded in the model. A baseline CA method is chosen based on a broad literature analysis and, in the course of several experimental studies, substantially improved with respect to efficiency and performance. This allowed CA to be applied successfully, for the first time, to state-of-the-art CNNs for object detection. Based on the improved CA method, diverse approaches for providing evidence for different types of symbolic safety requirements are developed. Concretely, CA is used (1) to verify the correct encoding of semantic relations, (2) to build global, interpretable, and inspectable proxy models, and (3) as part of a framework for the inspection and verification of compliance with symbolic fuzzy logic rules. The applicability of the approaches is demonstrated and evaluated on several state-of-the-art object detector CNNs and their backends. Lastly, the contribution of CA-based evidence generation to safety assurance is highlighted: a template for a safety argumentation structure is developed, and a broad review of existing DNN-specific evidence generation methods is conducted. This reveals the need for the CA-based methods developed in this thesis and positions them properly in the overall safety argument. Altogether, the work in this thesis constitutes a solid step towards the safer use of DNNs in automotive CV applications.
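Illustration: the abstract describes concept embedding analysis (CA) as associating semantic concepts with items in a DNN's intermediate outputs. The following is a minimal sketch of one common realization of this idea, a linear concept probe fitted on intermediate activations. It assumes a PyTorch/torchvision setup; the backbone (ResNet-18), the probed layer (layer3), and the randomly generated toy "concept" data are illustrative placeholders, not the thesis' actual method or experimental setup.

```python
# Minimal concept-probe sketch (assumed setup, not the thesis' exact method).
import torch
import torchvision

# Pretrained backbone whose intermediate outputs we want to probe.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()

# Capture activations of one intermediate layer via a forward hook.
acts = {}
def hook(_module, _inp, out):
    acts["feat"] = out.detach()
model.layer3.register_forward_hook(hook)

# Toy stand-in data: images labelled by whether a semantic concept
# (e.g. "eye") is present. In practice these come from a concept dataset.
images = torch.randn(64, 3, 224, 224)
concept_labels = torch.randint(0, 2, (64,)).float()

with torch.no_grad():
    model(images)
# Global average pooling turns each feature map into one vector per image.
features = acts["feat"].mean(dim=(2, 3))  # shape: (64, channels)

# Linear probe: if it separates concept from non-concept images well, the
# concept is (approximately) linearly encoded in this layer's activations.
probe = torch.nn.Linear(features.shape[1], 1)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-2)
loss_fn = torch.nn.BCEWithLogitsLoss()
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(probe(features).squeeze(1), concept_labels)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    predictions = (probe(features).squeeze(1) > 0).float()
    accuracy = (predictions == concept_labels).float().mean().item()
    print(f"concept probe accuracy: {accuracy:.2f}")
```

The probe's weight vector can then be read as a "concept vector" in the layer's activation space, which is what makes it possible to check symbolic relations between concepts (as in the evidence-generation approaches listed in the abstract) rather than only the detector's final label classes.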