Talk: Towards Secure and Regulated Machine Learning Systems

Emily Wenger: PhD Candidate, University of Chicago

Event Details

Date
Monday, February 6, 2023
Time
4 p.m.
Location
Description

LINK TO LIVE STREAM: https://uwmadison.zoom.us/j/91414671946?pwd=WUd6Y0hJUHEvS0kxK3J3cG5YODIrZz09

Abstract: Despite well-known vulnerabilities and privacy issues, machine learning (ML) models are often integrated into consumer-facing commercial products without meaningful consideration of consequences. This pervasive “build it, then fix it” approach to ML model use today is dangerous, privileging fast deployment over user safety. This approach must change, but moving towards more principled deployment is difficult. A significant gap exists between more theoretical academic work on ML safety and idealistic principles espoused by policymakers, hampering practical regulatory efforts. Bridging this gap requires turning theoretical ML knowledge into forward-thinking, pragmatic tools to inform and enable future model regulation. My research—spanning security, systems, and ML—fills this gap. I create practical techniques and tools that move us from “building, then fixing” models to instead securing and regulating models before and during deployment, with end-users in mind. This talk will highlight two key areas of my work. It will first showcase how I identify vulnerabilities in and caused by ML models and discuss a novel attack I discovered against computer vision models. Then, it will explore my work building practical tools that protect models and empower users, highlighting a privacy tool I made to disrupt unwanted facial recognition. It will conclude by discussing my vision for the future of secure and regulated ML. 

Bio: Emily Wenger is a final-year PhD candidate advised by Ben Zhao and Heather Zheng at the University of Chicago, where she researches open questions at the intersection of machine learning, security, and privacy. Her dissertation work revolves around the question of how users can retain personal privacy and agency in the brave new world of ubiquitous deep learning models, biometrics, and surveillance systems. Her research has been published at top venues in computer security and computer vision and featured by a wide range of media outlets, from the New York Times and MIT Technology Review to news agencies in 15+ countries. Emily has previously worked for the US Department of Defense as a mathematician and at Meta AI Research as an intern and researcher. She is the recipient of the GFSD, Harvey, and UChicago Neubauer fellowships, the UChicago Harper dissertation fellowship, and a Siebel Scholarship.

Cost
Free
