MadS&P Seminar
Guest Talk by Anvith Thudi
Event Details
Title: Unlearning for Deep Learning Can Be Efficient
Abstract: Suppose that after training an expensive model, you realize you did not want to train on some parts of the dataset (e.g., because of data poisoning, inaccurate data, etc.). Machine unlearning aims to handle this scenario by efficiently producing the model that would have resulted from training without certain data points. However, unlearning is generally considered computationally hard for modern deep learning pipelines. This perceived difficulty has motivated a flurry of weaker methods that are often not robust under varying unlearning evaluations. In this talk we show how the status quo of "unlearning is hard" may no longer be accurate. We observe that the advent of in-context learning also makes exact unlearning efficient in the fine-tuning stage of a model. We then present recent per-instance privacy analyses which imply that many data points can be "strongly" unlearnt with no extra computation when training with (noisy) SGD. We conclude by discussing current difficulties in auditing machine unlearning.
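The "(noisy) SGD" mentioned in the abstract refers to SGD with per-example gradient clipping and Gaussian noise, as in differentially private training. Below is a minimal, hedged sketch of that training loop on a toy least-squares problem; all hyperparameters (`lr`, `sigma`, `clip`) and the toy data are illustrative choices, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = X @ w_true + small observation noise
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)

def noisy_sgd(X, y, lr=0.05, sigma=0.01, clip=1.0, epochs=50, seed=1):
    """SGD with per-example gradient clipping and Gaussian noise
    (the DP-SGD-style update the abstract alludes to)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Per-example gradient of the squared error
            g = 2.0 * (X[i] @ w - y[i]) * X[i]
            # Clip the gradient norm to `clip`, then add Gaussian noise
            g = g / max(1.0, np.linalg.norm(g) / clip)
            g = g + sigma * rng.normal(size=g.shape)
            w -= lr * g
    return w

w = noisy_sgd(X, y)
```

The per-example clipping bounds any single data point's influence on the update, and the added noise randomizes it further; per-instance privacy analyses of exactly this kind of training are what allow some points to be considered unlearnt "for free".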
Bio: Anvith Thudi is a Computer Science Ph.D. student at the University of Toronto, advised by Nicolas Papernot and Chris Maddison. His research interests span trustworthy machine learning and data curation, with a particular interest in unlearning and privacy and their connections to model performance. He is supported by a Vanier Canada Graduate Scholarship.
Join Zoom Meeting
https://uwmadison.zoom.us/j/94727012361?pwd=ziMYckmebClmUSYTzZTDIQR6KYYBqW.1
Meeting ID: 947 2701 2361
Passcode: 447648