Bridging the gap between uncertain perception and reality when designing controller software

A research team consisting of Tsutomu Kobayashi, Ichiro Hasuo, Fuyuki Ishikawa and Shinya Katsumata at the National Institute of Informatics (NII, Japan) and Rick Salay and Krzysztof Czarnecki at the University of Waterloo (Canada) has developed a method that automatically transforms controller software models into models that meet safety requirements even when the state of the environment is uncertain. In addition to the transformation, the method generates formulas that represent the degree of uncertainty the controller software can tolerate. The method can be applied to various control systems that interact with the external environment, including autonomous vehicles.

This research was carried out as part of the ERATO MMSD project (* 1) funded by the Japan Science and Technology Agency (JST, Japan). The results were presented at the 13th NASA Formal Methods Symposium (online) on May 26, 2021.

Background

Control systems like drones and autonomous vehicles are critical to society. Since many of them are intended for use in the real world, these systems must function as safely as possible. Mathematically modeling such systems and demonstrating their safety is an effective way to ensure this.

The controller software in such systems determines its actions in response to the state of the environment, which is perceived with sensors. In reality, however, the systems can perceive values that deviate from the true values (perceptual uncertainty). This can lead to safety violations if the controller software reacts to the incorrectly perceived values. For example, if the sensor of an autonomous vehicle can detect the positions of other cars with an error of up to 1 m, the vehicle should keep a safety margin of at least 1 m (Figure 1 (a)).
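To make the idea concrete, here is a minimal sketch of such a margin check (our illustration with assumed numbers and function names such as is_safe_to_proceed; it is not the authors' formal model):

```python
# Hypothetical sketch: a distance check made robust against a bounded
# sensing error of up to 1 m, as in the example above.

SENSING_ERROR = 1.0   # assumed worst-case perception error in metres
SAFE_DISTANCE = 10.0  # illustrative minimum true distance required for safety

def is_safe_to_proceed(perceived_distance: float) -> bool:
    # The true distance may be as small as perceived_distance - SENSING_ERROR,
    # so that pessimistic estimate must stay above the safety threshold.
    return perceived_distance - SENSING_ERROR >= SAFE_DISTANCE
```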

Designing such uncertainty-aware and safe controller software is quite complicated, as developers have to verify that safety is guaranteed for every possible behavior of the system while taking into account differences between true values and perceived values (Figure 1 (b)). It is also difficult to estimate the level of uncertainty in advance, since perceptual uncertainty depends on the situation in which the controlled system is used, for example whether it is foggy or not. Instead of fixing a concrete perceptual uncertainty in the models, it is therefore better for flexible analysis to construct uncertainty-aware controller models and to calculate the degree of uncertainty that the designed controller can tolerate. Obtaining this limit as a formula, however, has proved difficult.

Methods and results

In this study, we propose a method that automatically transforms a model of an uncertainty-unaware controller into a model of a robustified controller, i.e., a new controller that behaves safely even under uncertainty (Figure 2).

The method consists of two steps.

The first step (uncertainty injection) converts the input model of an uncertainty-unaware controller into an intermediate model of an uncertainty-aware controller. Note that the behavior of the intermediate model is the same as that of the input model and is therefore still unsafe.

The next step (robustification) then converts the intermediate model into an uncertainty-aware and safe model. The behavior of the resulting controller is updated so that it operates safely even under uncertainty.

In addition, the method generates a formula that represents the degree of uncertainty that the output controller can tolerate.

In the first step (uncertainty injection), variables for perceived values are introduced that can deviate from the true values. While the input model is specified under the ideal assumption that the controller can refer to the true value when determining an action, the intermediate controller refers to the perceived value instead, reflecting the reality of perception. Although the intermediate controller determines its actions in reaction to perceived values, safety is not guaranteed, because its behavior still corresponds to that of the input controller. For example, an autonomous vehicle may misjudge the distance to another car to be 5 m when it is only 4 m, which can lead to a collision.
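In code, this first step can be pictured roughly as follows (a hypothetical toy rendering with invented names and thresholds; the paper works with formal specifications, not Python):

```python
import random

# Hypothetical sketch of uncertainty injection on a toy car-following
# controller; names and thresholds are illustrative, not taken from the paper.

EPSILON = 1.0  # assumed bound on the perception error (metres)

def input_controller(true_gap: float) -> str:
    # Input model: decides using the TRUE gap to the car ahead (ideal assumption).
    return "drive" if true_gap >= 5.0 else "brake"

def perceive(true_gap: float) -> float:
    # The perceived gap may deviate from the true gap by up to EPSILON.
    return true_gap + random.uniform(-EPSILON, EPSILON)

def intermediate_controller(perceived_gap: float) -> str:
    # Intermediate model: same decision logic, now applied to the perceived gap.
    # Still unsafe: a true gap of 4 m can be perceived as 5 m, so the car may
    # "drive" when it should brake -- exactly the collision scenario above.
    return "drive" if perceived_gap >= 5.0 else "brake"
```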

In the second step (robustification), case distinctions over the state of the environment are computed. For example, in some situations an autonomous vehicle may be unsure whether to drive or brake, owing to perceptual uncertainty about the distance between itself and the car in front. The robustification step exhaustively lists such unsafe cases so that they can be treated separately. In addition, the behavior of the controller is updated so that safety is guaranteed even under perceptual uncertainty: the method considers every possible true value that could underlie the perceived value and computes a safe behavior for each possibility.
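Continuing the same toy example, a robustified controller quantifies over every true value consistent with the perception (again an illustrative sketch, not the tool's generated model):

```python
# Hypothetical sketch of the robustified controller for the toy example above;
# EPSILON and the 5 m threshold are the same assumed values as before.

EPSILON = 1.0  # assumed bound on the perception error (metres)

def robustified_controller(perceived_gap: float) -> str:
    # Every true gap consistent with the perception lies in the interval
    # [perceived_gap - EPSILON, perceived_gap + EPSILON].
    worst_case_gap = perceived_gap - EPSILON
    # Drive only if EVERY possible true gap is safe; otherwise brake.
    # Perceived gaps between 5 m and 6 m form exactly the unsafe case that
    # the robustification step lists and resolves separately.
    return "drive" if worst_case_gap >= 5.0 else "brake"
```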

Constraints on the behavior of the generated controller are specified in such a way that safe operation is guaranteed even under uncertainty. Whether such constraints can be met, however, depends on the degree of uncertainty. As an extreme example, if a sensor detects the positions of other cars with an error of up to 100 km, it is impossible to ensure safety in many situations. This raises the question of how much uncertainty the generated controller can tolerate. Where is the limit?

To answer this, the proposed approach also generates this limit as a formula over the uncertainty. Developers can select suitable sensors from a predefined catalog using the formula as a criterion. In addition, the formula can be used for further uncertainty analyses, e.g., of how the uncertainty propagates when the controller is combined with other components.
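For the toy car-following controller sketched above, such a tolerance bound could take roughly the following shape (an illustrative reconstruction with symbols introduced here for the example; it is not the formula produced by the tool):

```latex
% Illustrative reconstruction (not the tool's output). Symbols (all assumed):
% d: true gap, \hat{d}: perceived gap, \varepsilon: perception-error bound,
% d_safe: minimum safe gap, d_max: largest gap that can occur.
% The robustified guard implies safety for every admissible true value:
\[
  \hat{d} - \varepsilon \ge d_{\mathrm{safe}}
  \;\wedge\;
  \lvert d - \hat{d} \rvert \le \varepsilon
  \;\Longrightarrow\;
  d \ge d_{\mathrm{safe}} .
\]
% Conversely, the controller is still guaranteed to drive when the true gap
% attains its maximum (every admissible perception passes the guard) only if
\[
  d_{\max} - \varepsilon \;\ge\; d_{\mathrm{safe}} + \varepsilon ,
  \qquad\text{i.e.}\qquad
  \varepsilon \;\le\; \tfrac{1}{2}\,(d_{\max} - d_{\mathrm{safe}}) ,
\]
% and an inequality of this kind plays the role of the generated
% tolerable-uncertainty bound.
```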

The method makes the construction of uncertainty-aware and safe controllers more systematic and less laborious. In addition, it enables developers to flexibly analyze different degrees of perceptual uncertainty. The method thus improves the overall safety of the real world, in which control systems are ubiquitous.

Future perspective

In addition to autonomous vehicles, the proposed method can be applied to various other control systems that interact with external environments. In future work we intend to generalize the method so that it can deal with a wider range of uncertainties. For example, we will address misclassification problems, such as an object classification module that assigns an object to the wrong class.

Comments from Tsutomu Kobayashi

“Control systems are critical because the usefulness of most software systems depends on their interactions with external environments. This research aims to help developers apply formal modeling approaches to realistic software by addressing the inevitable problem of controller systems: the gap between perception and reality. This allows developers to concentrate on the essentials of controller behavior. We believe the method is valuable and can be extended in a number of ways. We will continue to work towards the systematic and simple application of rigorous mathematical methods, to ensure a safe environment for all.”

Information on paper

Title

: Robustifying Controller Specifications of Cyber-Physical Systems Against Perceptual Uncertainty

Authors

: Tsutomu Kobayashi, Rick Salay, Ichiro Hasuo, Krzysztof Czarnecki, Fuyuki Ishikawa, Shin-ya Katsumata

Conference

: The 13th NASA Formal Methods Symposium (NFM 2021) https://shemesh.larc.nasa.gov/nfm2021/

Presentation date

: May 26, 2021 (US Eastern Time)

(* 1) ERATO Hasuo Metamathematics for Systems Design Project (ERATO-MMSD): a project funded under the Exploratory Research for Advanced Technology (ERATO) program of the Japan Science and Technology Agency (JST). The project conducts scientific research on the quality assurance of cyber-physical systems, a core element of Society 5.0. It focuses in particular on automated driving systems and studies reliability techniques such as modeling, formal verification and testing, as well as holistic, practice-oriented V&V techniques encompassing all of these. This challenge requires close collaboration between different academic areas such as software science and engineering, control theory and engineering, and artificial intelligence. The project therefore also focuses on (meta-)mathematical theories. https://www.jst.go.jp/erato/hasuo/en/

###

About the National Institute of Informatics (NII)

NII is Japan’s only academic research institute dedicated to the new discipline of informatics. Its mission is to “create future values” in informatics. NII conducts both long-term basic research and practical research aimed at solving social problems in a wide range of informatics research fields, from fundamental theories to current topics such as artificial intelligence, big data, the Internet of Things and information security.

As an inter-university research institute, NII builds and operates academic information infrastructure that is essential for the research and educational activities of the entire academic community (including the Science Information Network), and develops services such as those for providing academic content and service platforms. https://www.nii.ac.jp/en/

About the Research Organization for Information and Systems (ROIS)

ROIS is an umbrella organization of four national institutes (the National Institute of Polar Research, the National Institute of Informatics, the Institute of Statistical Mathematics and the National Institute of Genetics) and the Joint Support Center for Data Science Research. ROIS's mission is to promote integrated, cutting-edge research beyond the boundaries of these institutions, as well as to facilitate their research activities as members of inter-university research institutes.
