Design and Synthesis of Certifiably Safe AI-enabled Autonomy with Focus on Human-in-the-Loop Human-in-the-Plant Systems
Description
The advent of Large Language Models (LLMs) and generative AI has introduced uncertainty into the operation of autonomous systems, with significant implications for safe and secure operation. This has led to the US government directive on the assurance and testing of AI trustworthiness. This tutorial introduces the audience to the emerging safety issues of AI-enabled autonomous systems (AAS) and how they affect dependable and safe design for real-life deployments. With the adoption of LLMs and deep AI methods, AAS are becoming increasingly vulnerable to uncertainties. The tutorial will introduce a new human-in-the-loop, human-in-the-plant design philosophy geared towards assured certifiability in the presence of human actions and AI uncertainties, while reducing data sharing between the AAS manufacturer and the certifier. We will provide a landscape of informal and formal approaches to ensuring AI-based AAS safety at every phase of the design lifecycle, identifying the gaps, the current research to fill those gaps, and tools for detecting commonly occurring software failures such as doping. The tutorial also emphasizes the need for operational safety of AI-based AAS and highlights the importance of explainability at every stage in enhancing trustworthiness. There has been significant research in the domain of model-based engineering attempting to solve this design problem. Observations from the deployment of an AAS are used to: a) ascertain whether the AAS used in practice matches the proposed safety-assured design; b) explain the reasons for any mismatch between AAS operation and the safety-assured design; c) generate evidence to establish the trustworthiness of an AAS; and d) generate novel practical scenarios in which an AAS is likely to fail.

- Relevance, target audience, and interest for the DAC community
AI has been widely adopted in domains including autonomous vehicles and IoT medical devices. In a competitive environment, engineers and researchers focus on developing innovative applications, while minimal attention is paid to safety engineering techniques that can cope with the fast pace of technological advances. As a result, recent failures and operational accidents of AI-based systems highlight a pressing need for suitably stringent safety monitoring techniques. We advocate a change from the linear AAS development lifecycle of design, validation, implementation, and verification to a circular one that incorporates feedback from the field of operation. In this circular AAS development lifecycle, operational data is used to identify novel states and fed back into the design. This enables an agile, proactive redesign policy that can predict failures and propose techniques to circumvent safety risks. The tools used in this circular lifecycle will provide interpretable reports to the appropriate stakeholders, such as certification agencies, developers, and users, at different stages. This tutorial directly relates to the Autonomous Systems and ML topics of DAC.
Event Type
Tutorial
Time
Monday, June 24, 1:30pm - 5:00pm PDT
Location
3002, 3rd Floor
Topics
AI