Near-optimal Per-Action Regret Bounds for Sleeping Bandits
by Quan Nguyen, Nishant A. Mehta
First submitted to arXiv on: 2 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract; read it on arXiv. |
Medium | GrooveSquid.com (original content) | The paper derives near-optimal per-action regret bounds for sleeping bandits, in which an adversary chooses both the set of available arms and their losses in every round. With K total arms, at most A arms available per round, and T rounds, the best known upper bound is O(K√(TA ln K)), which carries an extra multiplicative factor of K ln K over the minimax Ω(√(TA)) lower bound. The authors close this gap by directly minimizing the per-action regret using generalized versions of EXP3, EXP3-IX, and FTRL with Tsallis entropy, achieving near-optimal bounds of O(√(TA ln K)) and O(√(T√(AK))), respectively (see the sketch below the table). The results extend to bandits with advice from sleeping experts, generalizing EXP4 along the way, which yields new proofs of existing adaptive and tracking regret bounds for standard non-sleeping bandits. The paper also proves a lower bound showing that any minimax optimal algorithm has an action whose regret is sublinear in T but linear in the number of rounds in which that action is available. |
Low | GrooveSquid.com (original content) | The paper explores how to make good decisions when we don’t know what will happen next. It’s like choosing which arm of a slot machine to pull when some arms may be unavailable on any given turn, and you only see the result of the arm you actually pulled. The authors come up with new ways to make better choices and track progress over time. They show that their methods are close to the best possible and give examples of how they work in different situations. |
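
To make the algorithmic idea concrete, below is a minimal Python sketch of an EXP3-style learner restricted to the arms that are awake in each round. This is an illustrative sketch only, not the paper’s exact generalized EXP3, EXP3-IX, or FTRL algorithm; the callbacks `get_available` and `get_loss` are hypothetical stand-ins for the adversary’s choices of available arms and losses.

```python
import numpy as np

def sleeping_exp3(K, T, eta, get_available, get_loss, rng=None):
    """Sketch of an EXP3-style learner for sleeping bandits.

    K: total number of arms; T: horizon; eta: learning rate.
    get_available(t) -> iterable of awake arms in round t (adversary's choice).
    get_loss(t, arm) -> loss in [0, 1] of the pulled arm (bandit feedback).
    """
    rng = rng or np.random.default_rng()
    log_w = np.zeros(K)  # log-weights over all K arms
    for t in range(T):
        awake = np.array(sorted(get_available(t)))
        # Restrict the exponential-weights distribution to awake arms
        # (subtract the max log-weight for numerical stability).
        w = np.exp(log_w[awake] - log_w[awake].max())
        p = w / w.sum()
        i = rng.choice(len(awake), p=p)
        arm = awake[i]
        loss = get_loss(t, arm)
        # Importance-weighted loss estimate: unbiased for the pulled arm;
        # sleeping arms receive no update this round.
        log_w[arm] -= eta * loss / p[i]
    return log_w
```

Note how the weights of sleeping arms are left untouched in each round; this matches the intuition behind bounding each arm’s per-action regret over only the rounds in which that arm is available.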
Keywords
- Artificial intelligence
- Tracking