Talk: Causal Inference for Robust, Reliable, and Responsible NLP

Zhijing Jin: Ph.D. student, Max Planck Institute & ETH

Event Details

Date
Thursday, April 25, 2024
Time
12–1 p.m.
Description

Live stream: https://uwmadison.zoom.us/j/95216675725?pwd=ZzhOOWxGbTd2SnVSV3lsNU91WnBYUT09

Abstract: Despite the remarkable progress in large language models (LLMs), it is well known that natural language processing (NLP) models tend to fit spurious correlations, which can lead to unstable behavior under domain shifts or adversarial attacks. In my research, I develop a causal framework for robust and fair NLP that investigates how well the decision-making mechanisms of models align with the causal structure of human decision-making. Under this framework, I develop a suite of stress tests for NLP models across tasks such as text classification, natural language inference, and math reasoning, and I propose to enhance robustness by aligning the model's learning direction with the underlying data-generating direction. Using this causal inference framework, I also test the validity of causal and logical reasoning in models, with implications for fighting misinformation, and extend the reach of NLP by applying it to analyze the causality behind socially important phenomena, such as the causal analysis of policies and the measurement of gender bias. Together, these lines of work form a roadmap toward socially responsible NLP: ensuring the reliability of models and broadening their impact across social applications.
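To make the notion of a spurious-correlation stress test concrete, here is a minimal, hypothetical Python sketch (not code from the talk; the toy reviews and the "spurious" token "regal" are invented for illustration): a bag-of-words classifier that latches onto a token that merely co-occurs with the positive label degrades once that correlation flips at test time.

    # Minimal sketch of a spurious-correlation stress test (illustrative only;
    # the toy data and the spurious token are assumptions, not from the talk).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Training data: the token "regal" spuriously co-occurs with positive labels.
    train_texts = [
        "a regal triumph of a film",
        "regal and wonderful acting",
        "dull and tedious plot",
        "a boring, lifeless script",
    ]
    train_labels = [1, 1, 0, 0]

    # Shifted test data: the correlation flips, so "regal" now marks a negative review.
    test_texts = ["a regal snooze from start to finish", "wonderful and moving"]
    test_labels = [0, 1]

    vec = CountVectorizer().fit(train_texts)
    clf = LogisticRegression().fit(vec.transform(train_texts), train_labels)

    # If the model relied on the spurious token rather than sentiment words,
    # accuracy drops under this shift (here it misclassifies the first review).
    print(clf.score(vec.transform(test_texts), test_labels))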

Bio: Zhijing Jin (she/her) is a Ph.D. student at the Max Planck Institute & ETH. Her research focuses on socially responsible NLP via causal inference. Specifically, she works on expanding the impact of NLP by promoting NLP for social good, and on developing CausalNLP to improve the robustness, fairness, and interpretability of NLP models, as well as the causal analysis of social problems. She has received three Rising Star awards and two Ph.D. fellowships. Her work has been published at many NLP and AI venues (e.g., ACL, EMNLP, NAACL, NeurIPS, ICLR, AAAI) and featured in MIT News and ACM TechNews. She has co-organized five workshops (including the NLP for Positive Impact Workshop at EMNLP 2024 and the Moral AI Workshop at NeurIPS 2023), led the Tutorial on CausalNLP at EMNLP 2022, and served as Publications Chair for the 1st Conference on Causal Learning and Reasoning (CLeaR). To support diversity, she organizes the ACL Year-Round Mentorship Program. More information can be found on her personal website: zhijing-jin.com

Cost
Free
