Neyman Meets Causal Machine Learning
Presented by Kosuke Imai
Abstract: This year marks the centennial of Jerzy Neyman's dissertation paper, in which he showed how to evaluate the efficacy of a treatment using a randomized controlled trial (RCT) under a minimal set of assumptions. Today, scientists across disciplines use methodology based on his randomization inference framework when analyzing data from RCTs. In this talk, I will demonstrate that Neyman's classical inferential framework can provide a statistical performance guarantee for modern causal machine learning (ML). First, I show how to experimentally evaluate the efficacy of individualized treatment rules (ITRs), regardless of which ML algorithms are used to derive them. In particular, the proposed methodology can account for the additional uncertainty introduced by cross-fitting, a commonly used ML training and evaluation procedure. Second, I discuss how to statistically evaluate the heterogeneous treatment effects discovered by a generic ML algorithm, again without making strong assumptions about the properties of that algorithm. Finally, I will introduce a methodology that enables one to use any ML algorithm to identify "exceptional responders" who benefit most from the treatment. As in the other two cases, the proposed methodology accounts for statistical uncertainty without making any assumptions about the properties of the ML algorithm. This is joint work with Michael Lingzhi Li (Harvard Business School).
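As background for the talk, the inferential framework referenced above rests on Neyman's difference-in-means estimator and his conservative variance bound, which are valid under randomization alone. The following is a minimal illustrative sketch on simulated data (the data, effect size, and sample size are invented for illustration and are not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated completely randomized experiment: n units, half assigned to treatment.
n = 1000
treat = rng.permutation(np.array([1] * (n // 2) + [0] * (n // 2)))
y = 1.5 * treat + rng.normal(size=n)  # hypothetical true average effect of 1.5

y1, y0 = y[treat == 1], y[treat == 0]
n1, n0 = len(y1), len(y0)

# Neyman's difference-in-means estimator of the average treatment effect.
ate_hat = y1.mean() - y0.mean()

# Neyman's variance estimator: conservative (never anti-conservative) for the
# true variance under the randomization distribution, with no modeling assumptions.
var_hat = y1.var(ddof=1) / n1 + y0.var(ddof=1) / n0
se = np.sqrt(var_hat)

# Large-sample 95% confidence interval via the normal approximation.
ci = (ate_hat - 1.96 * se, ate_hat + 1.96 * se)
print(f"ATE estimate: {ate_hat:.3f}, SE: {se:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

The talk's contribution is to extend this assumption-light style of inference to quantities produced by ML algorithms, such as the value of an estimated ITR.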