Mykel Kochenderfer

Stanford University

Building and Validating Safety Critical Decision Making Systems

Building safety-critical decision-making systems is complicated due to the vast spectrum of possible scenarios that may be encountered. There are often many low-probability edge cases that are difficult for human engineers to anticipate and resolve before deployment. This talk will discuss an approach to designing robust systems that involves the mathematical framework of partially observable Markov decision processes (POMDPs). Instead of relying on human engineers to explicitly construct the decision-making system, the approach involves specifying models of the dynamics, sensors, and objectives and using algorithms to optimize the decision strategy. Such an approach led to a new aircraft collision avoidance system that has been accepted for use worldwide, and it is the basis for ongoing work in automated driving. This talk will discuss how to validate the correct operation of these systems and outline the challenges in integrating greater levels of automation into safety-critical systems.
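As background for readers unfamiliar with the framework, a POMDP is commonly formalized as follows; this is standard textbook notation, not material from the talk itself. The transition, observation, and reward models correspond to the dynamics, sensor, and objective models mentioned in the abstract:

```latex
% Standard POMDP formulation (textbook notation; not taken from the talk)
\begin{align*}
\text{POMDP} &= (\mathcal{S}, \mathcal{A}, \mathcal{O}, T, R, Z, \gamma) \\
T(s' \mid s, a) &: \text{transition model (dynamics)} \\
Z(o \mid s', a) &: \text{observation model (sensors)} \\
R(s, a) &: \text{reward model (objectives)} \\
\intertext{Because the state is not directly observed, the agent maintains a belief $b$ over states, updated after taking action $a$ and observing $o$:}
b'(s') &\propto Z(o \mid s', a) \sum_{s \in \mathcal{S}} T(s' \mid s, a)\, b(s)
\end{align*}
```

Optimizing the decision strategy then means computing a policy that maps beliefs to actions so as to maximize expected discounted reward.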

Practical Challenges to the Implementation of Automated Driving Systems

This presentation aims to provide a realistic assessment of the state of the art in Automated Driving Systems, based on an understanding of long-term historical trends in transportation and the technical challenges that remain to be solved. It begins with the long history of prior efforts to automate driving and then clarifies the descriptions of automated driving systems based on their levels of automation and connectivity and their operational design domains. The importance of vehicle-to-vehicle and vehicle-to-infrastructure connectivity for achieving transportation system improvements from automation is emphasized, based on results of simulations calibrated to full-scale vehicle test results. The formidable unsolved challenges in perception technology and system safety assurance are then discussed as part of the explanation for why it will take multiple decades of further development before automated driving will be able to serve major fractions of surface transportation needs.

Steven E. Shladover, Sc.D.

California PATH, UC Berkeley, USA

Prof. Philip Koopman

Carnegie Mellon University

A Strategy for Evolving Self-Driving Car Safety Assurance

Assuring the safety of fully self-driving vehicles will require a dramatic increase in scope compared to previous automotive safety standards because there will no longer be a human driver to take ultimate responsibility for vehicle safety. Moreover, the use of nondeterministic algorithms and inductive learning techniques requires an assurance approach that goes beyond the classic "V" model. This talk will present a technology-neutral, goal-based safety argumentation approach to establishing and evolving self-driving car safety. Salient features include: treating uncertainty as a first-class citizen, permitting credit for data feedback paths in evolving safety, and enabling integration of evidence generated by diverse existing safety standards. Rather than mandating specific use of technology, the approach includes recommendations of best practices, anti-patterns that should be avoided, and topics that must be addressed to provide credible argumentation in support of safety.

Importance of Standardization and Shared Code for Evaluating Automated Systems

The vast number of scenarios a highly automated vehicle faces during its lifetime is infeasible to cover or predict. Deriving requirements and correspondingly aligned tests from these scenarios is complicated; achieving system robustness across all scenarios within this traffic space is equally complicated. Simulation allows engineers to extract edge-case scenarios in virtual environments running complete vehicle tests, enlarging the covered scenario test space. In this talk I will discuss the errors that occur when extracting scenarios from collected test-vehicle measurements and subsequently re-simulating newer versions of the software stack under test. The detailed errors for scenario extraction, re-simulation, and their assessment reveal the problems of scenario-based risk assessment across different test domains. Furthermore, the error chain for these processes shows the need not only for standardization, but also for sharing code, because errors originate within the different mathematical principles being used. For the industry-wide goal of evaluating automated systems, standardized code is indispensable, allowing companies to share scenarios and provide an overall safety assessment for future traffic analysis.
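One common way to quantify the re-simulation error discussed above is a trajectory-deviation metric between a recorded drive and its re-simulation. The sketch below is purely illustrative: the function name, the data layout, and the choice of mean Euclidean error are assumptions for this example, not part of any standardized toolchain:

```python
import math

def trajectory_error(recorded, resimulated):
    """Mean Euclidean position error between a recorded trajectory and its
    re-simulation, sampled at matching timestamps.

    Both inputs are lists of (x, y) positions in meters. This metric and
    these names are illustrative, not from a specific tool or standard.
    """
    if len(recorded) != len(resimulated):
        raise ValueError("trajectories must be sampled at the same timestamps")
    errors = [math.hypot(rx - sx, ry - sy)
              for (rx, ry), (sx, sy) in zip(recorded, resimulated)]
    return sum(errors) / len(errors)

# Example: a re-simulation that drifts 0.1 m laterally at every sample
recorded = [(t * 1.0, 0.0) for t in range(5)]
resim = [(x, y + 0.1) for (x, y) in recorded]
print(round(trajectory_error(recorded, resim), 6))  # 0.1
```

Comparing such error metrics across scenario-extraction and re-simulation pipelines is only meaningful if every party computes them the same way, which is one concrete reason the talk argues for shared code rather than standards documents alone.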

Thomas Kuehbeck

BMW Technology Office

2019 IEEE ICCVE Organizers

2019 IEEE ICCVE Patrons