Achieving Exponential Asymptotic Optimality in Average-Reward Restless Bandits without Global Attractor Assumption

by Yige Hong, Qiaomin Xie, Yudong Chen, Weina Wang

First submitted to arXiv on: 28 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Optimization and Control (math.OC); Probability (math.PR)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The infinite-horizon average-reward restless bandit problem is tackled with a novel two-set policy. The policy maintains two dynamic subsets of arms: one whose state distribution is nearly optimal, controlled by an Optimal Local Control routine, and another that is driven toward the optimal state distribution and gradually merged into the first. Under the mild assumptions of aperiodic-unichain, non-degeneracy, and local stability, the two-set policy is shown to be asymptotically optimal with an O(exp(-C N)) optimality gap for an N-armed problem, i.e., it achieves exponential asymptotic optimality, whereas prior work either required a strong global attractor assumption or achieved only an O(1/√N) optimality gap. The authors also discuss the obstacles to weakening these assumptions by exhibiting examples where exponential asymptotic optimality is unachievable when any of the three assumptions is violated. In particular, they prove a lower bound for a large class of locally unstable restless bandits, highlighting the importance of local stability for exponential asymptotic optimality.
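
To make the two-set mechanism concrete, here is a minimal Python sketch of one decision epoch under the assumptions above. The callback names (`optimal_local_control`, `drive_toward_opt`), the convergence signal, and the budget bookkeeping are illustrative placeholders, not the paper's exact construction.

```python
import numpy as np

def two_set_policy_step(states, in_good_set, budget,
                        optimal_local_control, drive_toward_opt):
    """One decision epoch of a two-set policy (schematic sketch).

    states       : length-N integer array of arm states
    in_good_set  : length-N boolean mask for the "nearly optimal" subset
    budget       : number of arms allowed to be activated this epoch
    The two callbacks are hypothetical stand-ins for the paper's
    Optimal Local Control routine and its steering rule, respectively.
    """
    actions = np.zeros(len(states), dtype=int)

    # Subset 1: arms whose empirical state distribution is already close
    # to optimal are controlled by the Optimal Local Control routine.
    good = np.flatnonzero(in_good_set)
    actions[good] = optimal_local_control(states[good], budget)

    # Subset 2: the remaining arms are driven toward the optimal state
    # distribution using whatever activation budget is left over.
    rest = np.flatnonzero(~in_good_set)
    leftover = max(budget - int(actions[good].sum()), 0)
    actions[rest], converged = drive_toward_opt(states[rest], leftover)

    # Arms reported as converged are merged into the first subset,
    # so the "good" set grows over time until it covers almost all arms.
    in_good_set = in_good_set.copy()
    in_good_set[rest[converged]] = True
    return actions, in_good_set
```

Intuitively, once the first subset contains nearly all arms, the system's empirical state distribution stays close to the optimal one, which is the mechanism behind the O(exp(-C N)) optimality gap.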
Low Difficulty Summary (original content by GrooveSquid.com)
The paper studies a problem called the infinite-horizon average-reward restless bandit problem. The authors come up with a new way to solve it, called a "two-set policy": the options (arms) are divided into two groups, one that is already doing well and another that needs help getting better, with arms moving into the first group as they improve. They show that this approach works really well and is even better than previous methods in some cases.

Keywords

* Artificial intelligence