Online Seminar: Deep Rationalization: Enabling Two-way Communications between Human and AI

Shiyu Chang: Research Scientist, MIT-IBM Watson AI Lab and Affiliated Researcher, CSAIL MIT

Event Details

Date
Tuesday, April 21, 2020
Time
4-5 p.m.
Description

Abstract: Deep learning has achieved unprecedented success across many benchmark tasks, yet its use in mission-critical deployments remains limited for two main reasons: 1) neural predictions are rarely interpretable; 2) existing training paradigms require large amounts of data. These two seemingly distinct problems share a common underlying cause: the lack of efficient communication between humans and machines. As a result, humans can neither inject additional guidance into machines nor receive justifications beyond machine predictions. In this talk, I will introduce my recent work on "deep rationalization," which enables two-way communication between humans and machines via a language called rationales. In particular, I will first talk about how rationales are established to improve model interpretability. After that, I will discuss how human-generated rationales impact learning performance in low-resource scenarios.

Bio: Shiyu Chang is a research scientist at the MIT-IBM Watson AI Lab. He is also an affiliated researcher at CSAIL MIT, where he works closely with Prof. Regina Barzilay and Prof. Tommi Jaakkola. His research focuses on machine learning and its applications in natural language processing and computer vision. Most recently, he has been studying how machine predictions can be made more interpretable to humans, and how human intuition and rationalization can improve AI transferability, data efficiency, and adversarial robustness. His work has received many awards, including the Best Student Paper Award at ICDM 2014. Shiyu obtained his B.S. and Ph.D. from the University of Illinois at Urbana-Champaign, where he was advised by Prof. Thomas S. Huang.

Cost
Free