Talk: Deep Learning Theory in the Age of Generative AI
Sadhika Malladi: Final-Year PhD Candidate, Computer Science, Princeton University
Event Details
LIVE STREAM: https://uwmadison.zoom.us/j/95799014225?pwd=abFKafuFuq0ipS44OE5JAqNAM9aYyr.1
Abstract: Modern deep learning has achieved remarkable results, but the design of training methodologies still relies largely on guess-and-check approaches. Thorough empirical studies of recent massive language models (LMs) are prohibitively expensive, underscoring the need for theoretical insights, but classical ML theory struggles to describe modern training paradigms. I present a novel approach to developing prescriptive theoretical results that translate directly into improved training methodologies for LMs. My research has yielded actionable improvements across the LM development pipeline. For example, my theory motivates the design of MeZO, a fine-tuning algorithm that reduces memory usage by up to 12x and halves the number of GPU-hours required. Throughout the talk, to underscore the prescriptiveness of my theoretical insights, I will demonstrate the success of these theory-motivated algorithms in empirical settings published after the theory was developed.
Bio: Sadhika Malladi is a final-year PhD student in Computer Science at Princeton University advised by Sanjeev Arora. Her research advances deep learning theory to capture modern-day training settings, yielding practical training improvements and meaningful insights into model behavior. She has co-organized multiple workshops, including Mathematical and Empirical Understanding of Foundation Models at ICLR 2024 and Mathematics for Modern Machine Learning (M3L) at NeurIPS 2024. She was named a 2025 Siebel Scholar.