Talk: Trustworthy AI via Formal Verification and Adversarial Testing

Huan Zhang: Postdoctoral Researcher, Carnegie Mellon University

Event Details

Thursday, April 13, 2023
4-5 p.m.


Abstract: Applying deep learning to safety-critical tasks requires formally verifying the trustworthiness of the resulting models, ensuring properties such as safety, security, robustness, and correctness. Unfortunately, modern deep neural networks (DNNs) are largely “black boxes”, and existing tools can hardly reason about them formally. My talk presents a new framework for trustworthy AI, built on novel methods for formal verification and adversarial testing of DNNs.

In particular, I will introduce a novel framework called “linear bound propagation methods” that enables efficient formal verification of deep neural networks, illustrated by rigorously proving their safety and robustness. By exploiting the structure of the DNN verification problem, this framework achieves up to three orders of magnitude speedup over traditional verification algorithms. My work leads to the open-source α,β-CROWN verifier, winner of the 2021 and 2022 International Verification of Neural Networks Competitions (VNN-COMP), with applications in computer vision, reinforcement learning, and computer systems. Beyond verification, I will discuss the complementary problem of disproving the trustworthiness of AI-based systems through adversarial testing, including black-box adversarial attacks and theoretically principled attacks against deep reinforcement learning. Finally, I will conclude with an outlook on enabling trustworthy AI components in complex systems and engineering applications such as autonomous systems and control, paving the way for a safer, more reliable AI-powered future.
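To give a flavor of what bound propagation means, the sketch below implements interval bound propagation (IBP), a simpler precursor of the linear bound propagation methods the talk describes: given an input region, it pushes sound lower/upper bounds through each layer of a tiny ReLU network, so the final interval provably contains every possible output. The network shape and weights here are hypothetical toy values, not from α,β-CROWN; the actual CROWN family propagates linear (not just constant) bounds, which is much tighter.

```python
# Interval bound propagation (IBP) sketch: a simplified stand-in for the
# linear bound propagation used by CROWN-style verifiers.
# All weights and shapes below are hypothetical toy values.

def affine_bounds(W, b, lb, ub):
    """Propagate interval bounds through y = W x + b.

    For each output, a positive weight pulls the lower bound from the
    input's lower bound, a negative weight from its upper bound (and
    vice versa for the upper bound), so the result is a sound interval.
    """
    out_lb, out_ub = [], []
    for row, bias in zip(W, b):
        lo = bias + sum(w * (lb[j] if w >= 0 else ub[j]) for j, w in enumerate(row))
        hi = bias + sum(w * (ub[j] if w >= 0 else lb[j]) for j, w in enumerate(row))
        out_lb.append(lo)
        out_ub.append(hi)
    return out_lb, out_ub

def relu_bounds(lb, ub):
    """ReLU is monotone, so bounds pass through elementwise."""
    return [max(0.0, l) for l in lb], [max(0.0, u) for u in ub]

# Tiny 2-2-1 ReLU network with input x in [0, 1]^2 (hypothetical).
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [[1.0, 1.0]], [0.0]

lb, ub = [0.0, 0.0], [1.0, 1.0]          # input region
lb, ub = affine_bounds(W1, b1, lb, ub)   # first affine layer
lb, ub = relu_bounds(lb, ub)             # ReLU activation
lb, ub = affine_bounds(W2, b2, lb, ub)   # output layer
print(lb, ub)  # certified output interval: [0.0] [1.75]
```

If the certified output interval stays on the safe side of a decision threshold, the property is proved for the entire input region at once, with no sampling. The speedups mentioned above come from tighter linear relaxations and branch-and-bound on top of this basic propagation idea.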

Bio: Huan Zhang is a postdoctoral researcher at Carnegie Mellon University, working with Prof. Zico Kolter. He obtained his Ph.D. in Computer Science at UCLA in 2020, advised by Prof. Cho-Jui Hsieh. Huan aims to build trustworthy AI systems that can be safely and reliably used in mission-critical tasks, with a focus on using formal verification techniques to give provable performance guarantees for machine learning. He leads a multi-institutional team developing the α,β-CROWN neural network verifier, which won both VNN-COMP 2021 and VNN-COMP 2022. He has received several awards, including an IBM Ph.D. fellowship, the 2021 Adversarial Machine Learning Rising Star Award, and a Schmidt Futures AI2050 Early Career Fellowship with a $300,000 research grant.