
Machine Learning Lunch Meeting

Inference in Adaptive Experiments with Contextual Noise

Event Details

Date
Tuesday, April 22, 2025
Time
1-2 p.m.
Location
Description

General: The Machine Learning Lunch Meeting (MLLM) is a cross-disciplinary weekly seminar, open to all, where UW-Madison professors present their research in machine learning, spanning both theory and applications. The goal is to promote the great work at UW-Madison and to stimulate collaborations. Please see our website for more information.

Speaker: Yongyi Guo (STAT)

Abstract: We study statistical inference after adaptive experiments in a linear contextual bandit setting, a framework widely used in applications such as digital health. In practice, observed contexts often contain noise rather than the true underlying context, which can significantly impact decision-making and compromise the validity of inference. 

We develop valid inference methods when data with noisy contexts are collected by adaptive algorithms, and explore the general conditions under which valid inference can be achieved. A key condition we identify is policy convergence, which plays a fundamental role across various adaptive inference settings. This condition is broad and general, and we provide common examples of online algorithms that satisfy it. In contrast, standard contextual bandit algorithms such as LinUCB and Thompson Sampling, when naively applied to noisy contexts, fail to converge due to the misalignment between the observed (noisy) context and the reward, which depends on the true context. 
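The misalignment described above is the classical errors-in-variables problem: when the reward depends on the true context but estimation uses a noisy observation of it, least-squares estimates are attenuated toward zero. A minimal illustrative simulation (not the speaker's method; all parameter values here are hypothetical) of this attenuation:

```python
import numpy as np

# Illustrative sketch: rewards depend on the true context x, but the
# learner only observes a noisy version z = x + noise. Regressing the
# reward on z attenuates the estimate by Var(x) / (Var(x) + Var(noise)).
rng = np.random.default_rng(0)
n = 100_000
theta = 2.0                                 # true reward coefficient (hypothetical)
x = rng.normal(0.0, 1.0, n)                 # true (latent) context
z = x + rng.normal(0.0, 1.0, n)             # observed noisy context
r = theta * x + rng.normal(0.0, 0.1, n)     # reward depends on the TRUE context

# OLS using the true context recovers theta; OLS using the noisy
# context shrinks toward theta * 1/(1+1) = 1.0 in this setup.
theta_true_ctx = float(np.dot(x, r) / np.dot(x, x))
theta_noisy_ctx = float(np.dot(z, r) / np.dot(z, z))
print(theta_true_ctx, theta_noisy_ctx)
```

An algorithm such as LinUCB that estimates rewards from observed contexts inherits exactly this bias, which is one intuition for why its naive application fails to converge in the noisy-context setting.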

Our findings contribute to a broader understanding of how model misspecification affects statistical inference in adaptive decision-making, where data limitations and structural mismatches are common challenges.

Cost
Free
