Talk: Learning to Make Decisions from Data: Foundations and Algorithms
Tengyang Xie: Final-year PhD Candidate, University of Illinois at Urbana-Champaign
LIVE STREAM: https://uwmadison.zoom.us/j/92187492653?pwd=MlkxV0w4SVhvNmQxUGs4bDNkMXhBZz09
Abstract: Reinforcement learning—a powerful machine learning paradigm for sequential decision-making—has achieved exciting successes in recent years. Due to the active-learning nature of reinforcement learning, frequent interaction with the environment is usually viewed as a necessary condition. However, such interaction in the real world can be costly or even prohibited, which makes a purely data-driven learning paradigm preferable. In this talk, I will provide a general solution for data-driven reinforcement learning (a.k.a. offline RL), from theoretical foundations to algorithm design. I will show (1) what the fundamental solution concept of offline reinforcement learning is, and how to build a general theoretical framework with mild data requirements and flexible model choices; and (2) how to implement such a framework efficiently without losing theoretical guarantees. The resulting algorithm—Adversarially Trained Actor Critic (ATAC)—achieves state-of-the-art performance on several popular offline RL benchmarks, and, more importantly, ATAC provably and empirically achieves the best of both worlds from offline RL and imitation learning. Finally, I will discuss my ongoing work and future plans, including their potential impacts on broader areas.
Bio: Tengyang Xie is a final-year PhD candidate at the University of Illinois at Urbana-Champaign. He has broad interests in designing provably efficient and practical algorithms for machine learning, especially for reinforcement learning and representation learning. His PhD research focuses on the foundations and methodologies of data-driven reinforcement learning. His work received the Outstanding Paper Runner-up Award at ICML 2022.