
Summary of "Off-policy estimation with adaptively collected data: the power of online learning," by Jeonghwan Lee et al.


Off-policy estimation with adaptively collected data: the power of online learning

by Jeonghwan Lee, Cong Ma

First submitted to arXiv on: 19 Nov 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG); Optimization and Control (math.OC); Statistics Theory (math.ST)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)

The paper studies the estimation of linear functionals of treatment effects from adaptively collected data, with applications to off-policy evaluation and causal inference. It focuses on a class of augmented inverse propensity weighting (AIPW) estimators, which are semi-parametrically efficient but have lacked non-asymptotic theory when the data are collected adaptively. The authors establish generic upper bounds on the mean-squared error of these estimators; the bounds depend crucially on a sequentially weighted error between the treatment effect and its estimates. They then propose a general reduction scheme that uses online learning to produce a sequence of estimates minimizing this error, and instantiate it in three concrete settings: the tabular case, linear function approximation, and general function approximation for the outcome model. Finally, a local minimax lower bound shows that the AIPW estimator equipped with a no-regret online learning algorithm is instance-dependent optimal.

Low Difficulty Summary (original content by GrooveSquid.com)

The paper looks at how to use data that was collected in a changing, adaptive way to estimate how well a treatment or policy works compared with an alternative. This matters because we often want to know whether something new will work better than what we are currently doing. The authors study a special type of estimator that does a good job of finding the answer, but until now little was known about how well it works when the data is collected adaptively. They prove new guarantees for these estimators, which can help us make better decisions.

Keywords

» Artificial intelligence  » Inference  » Online learning