Concept Enforcement and Modularization as Methods for the ISO 26262 Safety Argumentation of Neural Networks

Faculty/Professorship: University of Bamberg  
Author(s): Schwalbe, Gesina ; Schels, Martin
Publisher Information: Bamberg : Otto-Friedrich-Universität
Year of publication: 2020
Pages: 11
Source/Other editions: European Congress on Embedded Real Time Software and Systems ERTS, 10 (2020), 11 pp.
Language(s): English
DOI: 10.20378/irb-47276
Licence: Creative Commons - CC BY-NC-SA - Attribution - NonCommercial - ShareAlike 4.0 International 
URL: https://hal.archives-ouvertes.fr/hal-02442796
URN: urn:nbn:de:bvb:473-irb-472762
Abstract: 
Neural networks (NN) are prone to systematic faults which are hard to detect using the methods recommended by the ISO 26262 automotive functional safety standard. In this paper we propose a unified approach to two methods for NN safety argumentation: the assignment of human-interpretable concepts to the internal representations of NNs in order to enable modularization and formal verification. The feasibility of the required concept embedding analysis is demonstrated in a minimal example, and important aspects for generalization are investigated. The contribution of the methods is derived from a proposed generic argumentation structure for an NN model safety case.
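For illustration only, the following minimal sketch shows one common way such a concept embedding analysis can be carried out: a linear probe is fitted to intermediate-layer activations to test whether a human-interpretable concept is (approximately) linearly encoded in them. The synthetic activations, the concept labels, and the choice of a logistic-regression probe are assumptions made here for the sketch and are not taken from the paper.

    # Minimal sketch of a concept embedding analysis via a linear probe.
    # All data below is synthetic and purely illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for activations recorded at an intermediate NN
    # layer; in a real analysis these would come from the network under study.
    n_samples, n_units = 1000, 128
    activations = rng.normal(size=(n_samples, n_units))

    # Hypothetical concept annotations: 1 if the human-interpretable concept
    # is present in the corresponding input, 0 otherwise. Here they are
    # synthesized so the concept is linearly encoded in the first few units.
    concept_labels = (activations[:, :8].sum(axis=1) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        activations, concept_labels, test_size=0.3, random_state=0)

    # Linear concept probe: high held-out accuracy suggests the concept is
    # (approximately) linearly embedded in this layer's representation.
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"Concept probe test accuracy: {probe.score(X_test, y_test):.3f}")

A linear model is used here because the accuracy of a linear probe gives a direct, simple measure of how explicitly a concept is represented at a given layer; any comparable classifier could be substituted.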
GND Keywords: ISO/DIS 26262 ; Functional safety ; Machine learning ; Neural network ; Artificial intelligence
Keywords: concept enforcement, machine learning, neural networks, functional safety, ISO 26262, goal structuring notation, explainable AI
DDC Classification: 004 Computer science  
RVK Classification: ST 300   
Peer Reviewed: Yes
International Distribution: Yes
Open Access Journal: Yes
Type: Conference object
URI: https://fis.uni-bamberg.de/handle/uniba/47276
Release Date: 3 July 2020

File: fisba47276.pdf (787.5 kB, PDF)