ROWL: Reliable Open-World Learning
Also offered online
The real world is open and full of unknowns, presenting significant challenges for AI systems that must reliably handle diverse and sometimes anomalous inputs. Out-of-distribution (OOD) uncertainty arises when a machine learning model encounters a test-time input that differs from its training data and therefore should not be predicted by the model. As ML is deployed in more safety-critical domains, the ability to handle out-of-distribution data is central to building open-world learning systems. In this talk, I will discuss methods, challenges, and opportunities toward building ROWL (Reliable Open-World Learning).
To tackle these challenges, I will first describe mechanisms that improve OOD uncertainty estimation using calibrated softmax scores. I will then present recent advances using energy-based models, which produce a statistical measure that is provably aligned with the probability density of the training data. We show that the energy score is less susceptible to the overconfidence issue of the softmax score, and leads to state-of-the-art performance on common OOD detection benchmarks. Lastly, I will discuss how to robustify out-of-distribution detection algorithms in the presence of adversarial or natural image perturbations.
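To make the contrast between the two scoring functions concrete, here is a minimal NumPy sketch, assuming the standard formulations: the maximum softmax probability baseline, and the energy score defined as E(x) = -T · logsumexp(f(x)/T) over a model's logits. The logit values below are hypothetical, chosen only to illustrate the typical behavior; in practice the logits come from a trained classifier.

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Energy score E(x) = -T * logsumexp(logits / T).

    Lower (more negative) energy indicates the input is better
    supported by the training distribution; OOD inputs tend to
    receive higher energy.
    """
    z = logits / T
    m = z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    return -T * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

def max_softmax_score(logits):
    """Baseline OOD score: maximum softmax probability."""
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

# Hypothetical logits: an in-distribution input often yields one
# dominant logit, while an OOD input yields small, flat logits.
id_logits = np.array([10.0, 1.0, 0.5])
ood_logits = np.array([1.2, 1.0, 1.1])

print(energy_score(id_logits), energy_score(ood_logits))
print(max_softmax_score(id_logits), max_softmax_score(ood_logits))
```

A detector would then flag an input as OOD when its energy exceeds a threshold tuned on held-out in-distribution data. Note that the energy score uses all logits rather than only the largest normalized probability, which is one intuition for why it is less prone to the softmax overconfidence issue.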
Bio: Sharon Yixuan Li is an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin-Madison. Previously, she was a postdoctoral researcher in Stanford University's Computer Science Department, where she worked with Chris Ré. She obtained her Ph.D. from Cornell University in 2017, advised by John E. Hopcroft, Kilian Q. Weinberger, and Thorsten Joachims. She served as Program Chair and founding organizer of the ICML Workshop on Robustness and Uncertainty in Deep Learning (UDL) in 2019 and 2020. She has spent time at Google AI twice as an intern, and at Facebook AI as a Research Scientist. She was named to the 30 Under 30 Rising Stars in AI list in 2019, and to Forbes 30 Under 30 in Science in 2020.