DOPL: Direct Online Preference Learning for Restless Bandits with Preference Feedback
by Guojun Xiong, Ujwal Dinesha, Debajoy Mukherjee, Jian Li, Srinivas Shakkottai
First submitted to arXiv on: 7 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Optimization and Control (math.OC); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract; read it on arXiv. |
Medium | GrooveSquid.com (original content) | This paper introduces Pref-RMAB, a novel restless multi-armed bandit (RMAB) model for constrained sequential decision-making in the presence of preference signals. Unlike traditional RMAB, which relies on scalar reward signals, Pref-RMAB observes only pairwise preference feedback from the activated arms at each decision epoch. To explore the unknown environment efficiently and adaptively collect preference data online, the authors propose a direct online preference learning (DOPL) algorithm for Pref-RMAB. DOPL achieves a sublinear regret of $\tilde{\mathcal{O}}(\sqrt{T\ln T})$, and experimental results demonstrate its effectiveness in real-world scenarios. (A toy sketch of the preference-feedback loop appears below the table.) |
Low | GrooveSquid.com (original content) | Restless multi-armed bandits (RMAB) describe decision problems with many options whose underlying states change over time. Usually each chosen option returns a score or reward that tells us how good it is, but sometimes we only get hints about which option is better, like "A is slightly better than B." This paper introduces Pref-RMAB, a way of making decisions that uses these hint-like comparisons instead of scores, together with an algorithm (DOPL) that learns from the hints to make good choices. In the authors' experiments, it does much better than previous methods. |
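To make the preference-feedback setting concrete, below is a minimal Python sketch of a dueling-style bandit loop in which the learner sees only pairwise comparisons, never scalar rewards. This is an illustration under our own assumptions, not the paper's DOPL algorithm: the Bradley-Terry comparison model, the UCB-style exploration bonus, and all names (`duel`, `true_utility`, `wins`) are hypothetical, and the sketch omits the restless state dynamics and activation-budget constraint that define Pref-RMAB.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (our assumption, not the paper's model): N arms with latent
# scalar utilities that the learner never observes directly.
N, T, K = 5, 2000, 2                      # arms, horizon, arms activated per epoch
true_utility = rng.uniform(0.0, 1.0, size=N)

# wins[i, j] counts how often arm i was preferred over arm j.
wins = np.zeros((N, N))

def duel(i: int, j: int) -> int:
    """Pairwise preference feedback drawn from a Bradley-Terry-style model:
    P(i preferred over j) = exp(u_i) / (exp(u_i) + exp(u_j))."""
    p = np.exp(true_utility[i]) / (np.exp(true_utility[i]) + np.exp(true_utility[j]))
    return i if rng.random() < p else j

score = np.zeros(N)
for t in range(1, T + 1):
    # Empirical preference probabilities plus an optimistic (UCB-style)
    # exploration bonus; DOPL's actual index policy is more involved.
    games = wins + wins.T
    p_hat = np.where(games > 0, wins / np.maximum(games, 1), 0.5)
    bonus = np.sqrt(np.log(t + 1) / np.maximum(games.sum(axis=1), 1))
    score = p_hat.mean(axis=1) + bonus

    top = np.argsort(score)[-K:]          # activate the K highest-scoring arms
    winner = duel(top[0], top[1])         # observe only a preference, no reward
    loser = top[1] if winner == top[0] else top[0]
    wins[winner, loser] += 1

print("estimated ranking:", np.argsort(-score))
print("true ranking:     ", np.argsort(-true_utility))
```

What the sketch does share with Pref-RMAB is the feedback model: the learner never observes a reward, only which of the activated arms was preferred at each epoch, and it must build its estimates from accumulated pairwise win counts.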