Online Seminar: Do ImageNet Classifiers Generalize to ImageNet?
Ludwig Schmidt: Postdoctoral Researcher, UC Berkeley
Abstract: Progress on the ImageNet dataset seeded much of the excitement around the machine learning revolution of the past decade. In this talk, we analyze this progress to understand the obstacles blocking the path towards safe, dependable, and secure machine learning.
First, we will investigate the nature and extent of overfitting on ML benchmarks through reproducibility experiments for ImageNet and other key datasets. Our results show that overfitting through test set re-use is surprisingly absent, but that distribution shift poses a major open problem for reliable ML.
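At their core, these reproducibility experiments compare a model's accuracy on the original test set against its accuracy on a newly collected test set built with the same data-collection process. A minimal sketch of that comparison, using made-up predictions and labels rather than real ImageNet data:

```python
# Sketch: measure the accuracy gap between an original and a new test set.
# All predictions and labels below are illustrative placeholders.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical model outputs on the original test set and on a
# newly collected replication of it.
orig_preds, orig_labels = [0, 1, 2, 2, 1], [0, 1, 2, 0, 1]
new_preds, new_labels = [0, 2, 2, 1, 1], [0, 1, 2, 0, 1]

orig_acc = accuracy(orig_preds, orig_labels)  # 4/5 = 0.80
new_acc = accuracy(new_preds, new_labels)     # 3/5 = 0.60

# A consistent drop across many models suggests distribution shift
# between the two test sets, rather than overfitting via test set re-use.
print(f"original: {orig_acc:.2f}  new: {new_acc:.2f}  gap: {orig_acc - new_acc:.2f}")
```

If overfitting via test set re-use were the culprit, models tuned against the original test set would lose their advantage on the new one; instead, the talk's experiments find that the accuracy gap is consistent across models, pointing to distribution shift.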
In the second part, we will focus on a particular robustness issue, known as adversarial examples, and develop methods inspired by optimization and generalization theory to address it. We will conclude with a large experimental study of current robustness interventions that summarizes the main challenges going forward.
Bio: Ludwig Schmidt is a postdoctoral researcher at UC Berkeley working with Moritz Hardt and Ben Recht. Ludwig’s research interests revolve around the empirical and theoretical foundations of machine learning, often with a focus on making machine learning more reliable. Before Berkeley, Ludwig completed his PhD at MIT under the supervision of Piotr Indyk. Ludwig received a Google PhD fellowship, a Microsoft Simons fellowship, a best paper award at the International Conference on Machine Learning (ICML), and the Sprowls dissertation award from MIT.