Authors: Schwalbe, Gesina (ORCID: 0000-0003-2690-2478); Schels, Martin
Date issued: 2020-07-03
Year: 2020
URI: https://fis.uni-bamberg.de/handle/uniba/47276
Abstract: Neural networks (NNs) are prone to systematic faults that are hard to detect using the methods recommended by the ISO 26262 automotive functional safety standard. In this paper we propose a unified approach to two methods for NN safety argumentation: the assignment of human-interpretable concepts to the internal representations of NNs, which enables modularization and formal verification. The feasibility of the required concept embedding analysis is demonstrated in a minimal example, and important aspects for generalization are investigated. The contribution of the methods is derived from a proposed generic argumentation structure for an NN model safety case.
Language: English
Keywords: concept enforcement; machine learning; neural networks; functional safety; ISO 26262; goal structuring notation; explainable AI
DDC classification: 004
Title: Concept Enforcement and Modularization as Methods for the ISO 26262 Safety Argumentation of Neural Networks
Type: conference object
Related resource: https://hal.archives-ouvertes.fr/hal-02442796
URN: urn:nbn:de:bvb:473-irb-472762
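Note: The "concept embedding analysis" named in the abstract is often realized as a linear probe fitted on a layer's activations, where the fitted weight vector serves as the concept's embedding in activation space. The sketch below illustrates that idea only; it uses synthetic activations in place of a real network's, and the names acts, labels, and concept_dir are illustrative assumptions, not taken from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical setup: 'acts' stands in for intermediate-layer activations
    # of a trained NN (n_samples x n_units); 'labels' mark whether a
    # human-interpretable concept is present in the corresponding input.
    rng = np.random.default_rng(0)
    n_samples, n_units = 500, 64
    concept_dir = rng.normal(size=n_units)         # synthetic ground-truth concept direction
    acts = rng.normal(size=(n_samples, n_units))
    labels = (acts @ concept_dir > 0).astype(int)  # synthetic concept annotations

    # Linear probe: the fitted weight vector is the concept's embedding in
    # activation space; held-out accuracy indicates how well the layer
    # linearly encodes the concept.
    probe = LogisticRegression(max_iter=1000).fit(acts[:400], labels[:400])
    accuracy = probe.score(acts[400:], labels[400:])
    print(f"concept detectable with accuracy {accuracy:.2f}")

A high probe accuracy would support treating that layer's output as carrying the concept, which is the premise behind using such concepts for modularization and formal verification in the safety argument.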