Machine Learning Lunch Meeting

AI Security in the Era of Large Language Models and Agents

Event Details

Date
Friday, November 22, 2024
Time
12:30-1:30 p.m.
Location
Description

Everyone is invited to the weekly Machine Learning Lunch Meetings, held Fridays, 12:30-1:30 p.m. Faculty members from Computer Sciences, Statistics, ECE, and other departments will discuss their latest groundbreaking research in machine learning. This is an opportunity to network with faculty and fellow researchers and to learn about the cutting-edge research being conducted at our university. Please see our website for more information.

Speaker: Chaowei Xiao (iSchool)

Abstract: Large Language Models (LLMs) have demonstrated impressive capabilities, including instruction-following and safety alignment. These advances have enabled their deployment across diverse domains, including safety-critical applications, and as the decision-making core of agents managing complex tasks. Given their increasing adoption, it is imperative to investigate their security challenges.

In this talk, I will introduce emerging threats in LLMs and agents. I will show how these emerging threats parallel longstanding problems in adversarial machine learning, and how we can leverage the principles and advances of adversarial machine learning to address them. Specifically, I will first introduce my lab's recent work on discovering model-level vulnerabilities by building red-teaming tools from an adversarial perspective, and on how those vulnerabilities can be addressed. I will then delve into LLM-empowered agents, showing the unique security threats they present and emphasizing the critical need to study their vulnerabilities from both adversarial and system-level perspectives.

Cost
Free
