MadS&P Weekly Seminar - Guest Talk from Jaron Mink

Guest Talk from Jaron Mink: Human Factors in Secure and Non-Abusive AI

Event Details

Friday, April 26, 2024
11 a.m.

Abstract: Machine learning (ML) models are becoming increasingly powerful and integrated into security-critical systems; however, this shift has inevitably led to the misuse of ML systems and the exploitation of ML vulnerabilities. Because these risks are intertwined with the humans who interact with ML systems, understanding human behavior is essential for developing effective mitigations. In this talk, I will discuss how the behaviors of ML developers, ML users, and victims of ML misuse affect the security of systems. First, I will examine ML-enabled abuse, asking whether human expertise can be harnessed to detect deepfakes. Next, I will consider the difficulty of meaningfully using ML systems to perform practical security operations. Lastly, I will address the sociotechnical barriers that impede the implementation of defenses in ML systems. Through these works, I will demonstrate how understanding human behavior is necessary for informing the design of secure and non-abusive ML systems, and highlight promising directions for future work.

Bio: Jaron Mink is a Ph.D. candidate in the Computer Science Department at the University of Illinois Urbana-Champaign and will join the School of Computing and Augmented Intelligence at Arizona State University as an Assistant Professor in Fall 2024. He received his B.Sc. (magna cum laude) in Computer Science from the University of California, Los Angeles. Jaron investigates computer security threats, focusing on how human factors affect the security of machine learning systems. His work has appeared in top-tier venues (CHI, USENIX Security, IEEE S&P, WWW), has been covered by a variety of news outlets (New Scientist, The Transmitter, The 21st Show), and has been made into educational content (Futurum Careers). Jaron has also served as a consultant for Partnership on AI, an NGO that fosters the responsible development of AI systems, and is an awardee of the NSF Graduate Research Fellowship Program (GRFP).