KI Absicherung: DNN-specific Safety Concerns
For image-based perception, DNNs are the state-of-the-art method of choice, but they are not error-free. This creates challenges for safety-critical tasks realized with deep learning technologies, such as pedestrian detection for automated driving. Under some conditions, the output of the DNN might be wrong or inaccurate. For example, the pixel distribution of an image fed into an object-detection DNN might lead to false negative detections or to inaccuracies in the localization of some objects. This is due to the generally insufficient generalisation capability of DNNs in open-world context applications.
By insufficient generalisation capability, we refer to erroneous outputs produced by a DNN-based function at inference time. Since the data set with which such functions are trained is necessarily incomplete for open-world context applications, their generalisation to inputs outside this data set is not perfect. In other words, the DNN-based perception function has an incomplete input-to-output mapping. This is also known as the generalisation problem: the function is approximated from a set of data points sampled from an unknown population distribution, so its behaviour on unseen inputs is not fully determined.
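The effect can be made tangible with a toy example. The following sketch is purely illustrative and not part of the project's methodology or tooling: a simple function approximator is fitted to samples drawn from a limited input interval and then queried outside that interval, where the learned input-to-output mapping is unconstrained and the output becomes erroneous.

```python
# Toy illustration of the generalisation problem: a function approximator is
# fitted to samples from a limited region of the input space and then queried
# outside that region, where its output can be arbitrarily wrong.
# (Illustrative sketch only; perception DNNs are of course far more complex
# than this 1-D polynomial model.)
import numpy as np

rng = np.random.default_rng(0)

def true_function(x):
    return np.sin(x)

# "Training data": samples only from the interval [0, 3] -- an incomplete
# picture of the open-world input space.
x_train = rng.uniform(0.0, 3.0, size=200)
y_train = true_function(x_train) + rng.normal(scale=0.05, size=x_train.shape)

# Fit a simple function approximator (degree-9 polynomial).
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

x_in = 1.5   # in-distribution input: the approximation is close to the truth
x_out = 6.0  # out-of-distribution input: the mapping was never constrained here

print(f"in-distribution error:     {abs(model(x_in) - true_function(x_in)):.3f}")
print(f"out-of-distribution error: {abs(model(x_out) - true_function(x_out)):.3f}")
```

Running the sketch shows a small error inside the sampled interval and a large one outside it, which is exactly the situation a perception DNN faces when confronted with inputs not represented in its training data.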
Figure 1 illustrates the relation between “Insufficient Generalisation Capability” and “DNN-specific Safety Concerns” (SCs). SCs are underlying issues which may either lead to the insufficiency of the generalisation or merely make it difficult to argue the safety of the DNN-based part of the system (ref. [1]). When triggered via the input to the DNN, this insufficiency results in an erroneous output of the DNN.
Figure 1: Illustration of the relationship between Safety Concerns, Functional Insufficiency, Triggering Event and Erroneous Output, adapted from ref. [1] (O. Willers, S. Sudholt, S. Raafatnia, S. Abrecht: Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks).
In the KI Absicherung project, we have consolidated a collection of DNN-specific Safety Concerns. The resulting list builds upon three publications:
- [1] Willers O., Sudholt S., Raafatnia S., Abrecht S. (2020) Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks. In: Casimiro A., Ortmeier F., Schoitsch E., Bitsch F., Ferreira P. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops. SAFECOMP 2020. Lecture Notes in Computer Science, vol 12235. Springer, Cham. doi.org/10.1007/978-3-030-55583-2_25
- [2] Sämann T., Schlicht P., Hüger F. (2020) Strategy to Increase the Safety of a DNN-based Perception for HAD Systems. arxiv.org/pdf/2002.08935
- [3] Schwalbe G., Knie B., Sämann T., Dobberphul T., Gauerhof L., Raafatnia S., Rocco V. (2020) Structuring the Safety Argumentation for Deep Neural Network Based Perception in Automotive Applications. In: Casimiro A., Ortmeier F., Schoitsch E., Bitsch F., Ferreira P. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops. SAFECOMP 2020. Lecture Notes in Computer Science, vol 12235. Springer, Cham, pp. 383–394
The topic was assessed in multiple workshops with participants from machine learning and/or safety backgrounds, ultimately leading to a consolidated collection containing 14 plus 1 items. The list was furthermore divided into categories, as depicted in Fig. 2.
Figure 2: Safety Concerns divided into the categories “Safety Concerns related to DNN-characteristics (SC-1.X)”, “Data-related concerns (SC-2.X)”, “Metric-related concerns (SC-3.X)” and others (Continental).
Besides their direct bearing on the safety argumentation, DNN-specific Safety Concerns are also:
- used for structuring the safety argumentation. Evidence, mostly generated by mitigation approaches (for a discussion of some of the potential approaches see ref. [1]), is used in the safety case to argue why a safety concern is considered sufficiently mitigated. It should be noted that the argument includes other elements which might not be directly and/or completely produced by a specific mitigation approach, e.g., justifying the choice of a mechanism based on state-of-the-art literature.
- used to sort the methods and measures developed in the project for providing evidence of mitigation (see previous point). This enables an early identification of gaps, i.e. Safety Concerns for which no mechanism is available in the project. These gaps can then be closed by new mitigation approaches (a minimal sketch of such a gap check is given below).
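As a rough illustration of this gap analysis, the following sketch maps mitigation methods to the Safety Concerns they address and reports every concern without an assigned mechanism. All identifiers and method names are hypothetical placeholders and not taken from the actual project catalogue.

```python
# Minimal sketch of the gap check described above: Safety Concerns are keyed by
# their identifiers (SC-1.X, SC-2.X, ...), methods developed in the project are
# tagged with the concerns they provide mitigation evidence for, and every
# concern that no method covers is reported as a gap.
# All identifiers and method names below are hypothetical placeholders.
from typing import Dict, List, Set

safety_concerns: Dict[str, str] = {
    "SC-1.1": "Example concern related to DNN characteristics",
    "SC-2.1": "Example data-related concern",
    "SC-3.1": "Example metric-related concern",
}

# Mapping: method/measure -> Safety Concerns it produces mitigation evidence for.
methods: Dict[str, List[str]] = {
    "augmentation_based_testing": ["SC-2.1"],
    "uncertainty_estimation": ["SC-1.1"],
}

def find_gaps(concerns: Dict[str, str], methods: Dict[str, List[str]]) -> Set[str]:
    """Return the Safety Concern IDs not addressed by any method."""
    covered = {sc for addressed in methods.values() for sc in addressed}
    return set(concerns) - covered

print("Uncovered Safety Concerns:", sorted(find_gaps(safety_concerns, methods)))
# -> Uncovered Safety Concerns: ['SC-3.1']
```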
As the DNN-specific Safety Concerns are being used successfully as a tool in KI Absicherung, they might also support other projects within the KI Familie. For such use, they still need to be investigated and may need to be modified.
Authors of the article: Dominik Brüggemann – BUW, Hanno Gottschalk – BUW, Christian Hellert – Continental, Fabian Hüger – Volkswagen, PD Dr. Michael Mock – Fraunhofer IAIS, Shervin Raafatnia – Robert Bosch GmbH, Gesina Schwalbe – Continental