Summary of Online Bandit Learning with Offline Preference Data, by Akhil Agnihotri et al.
Online Bandit Learning with Offline Preference Data
by Akhil Agnihotri, Rahul Jain, Deepak Ramachandran, Zheng Wen
First submitted to arXiv on: 13 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Reinforcement Learning with Human Feedback (RLHF) is a key component in fine-tuning generative AI models for language and images. Human feedback typically takes the form of rank or preference judgments from raters, rather than elicited scores, which can be noisy. However, RL theory and algorithms often assume that reward feedback is available. To address this gap, the authors propose a posterior sampling algorithm for online learning that can leverage offline preference data generated by an expert of unknown competence. Their approach models the expert's competence in order to use such datasets effectively. They provide a novel theoretical analysis of Bayesian regret and an extensive empirical evaluation, showing a 25-50% regret reduction compared to baselines. A schematic sketch of this warm-started posterior sampling idea follows the table. |
Low | GrooveSquid.com (original content) | This paper explores how to use human feedback to improve AI models for language and images. Human feedback is often given as rankings or preferences rather than scores. The researchers propose a way to use this kind of feedback to train AI models online, with an algorithm that can also learn from offline datasets generated by experts even when the quality of those datasets is unknown. The results show that their approach reduces regret (a measure of how far the learner falls short of the best possible choices) by 25-50% compared to other methods. |
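
To make the idea of "posterior sampling warm-started with offline preference data" concrete, here is a minimal Python sketch on a Bernoulli multi-armed bandit. It is not the paper's algorithm or analysis: the arm count, the fixed `expert_competence` value, the simulated preference pairs, and the pseudo-count warm-start rule are all illustrative assumptions. The sketch simply shows how offline pairwise comparisons from an imperfect expert could bias the prior before standard Thompson (posterior) sampling runs online.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative setup (all values below are assumptions, not from the paper) ---
K = 5                                        # number of arms
true_means = rng.uniform(0.2, 0.8, size=K)   # unknown Bernoulli reward means
expert_competence = 0.8                      # assumed prob. the expert prefers the better arm

def simulate_offline_preferences(n_pairs):
    """Generate offline pairs (winner, loser) from a noisy expert."""
    prefs = []
    for _ in range(n_pairs):
        i, j = rng.choice(K, size=2, replace=False)
        better, worse = (i, j) if true_means[i] >= true_means[j] else (j, i)
        if rng.random() < expert_competence:
            prefs.append((better, worse))    # competent comparison
        else:
            prefs.append((worse, better))    # mistaken comparison
    return prefs

# Warm-start Beta posteriors with pseudo-counts weighted by assumed competence.
# This is a heuristic stand-in for the paper's posterior construction.
alpha = np.ones(K)
beta = np.ones(K)
for winner, loser in simulate_offline_preferences(200):
    alpha[winner] += expert_competence
    beta[loser] += expert_competence

# --- Online phase: standard Thompson sampling from the warm-started posterior ---
T = 2000
regret = 0.0
for t in range(T):
    samples = rng.beta(alpha, beta)          # one posterior sample per arm
    arm = int(np.argmax(samples))            # act greedily w.r.t. the sample
    reward = float(rng.random() < true_means[arm])
    alpha[arm] += reward                     # conjugate Beta-Bernoulli update
    beta[arm] += 1.0 - reward
    regret += true_means.max() - true_means[arm]

print(f"Cumulative regret over {T} rounds: {regret:.1f}")
```

Note that this sketch treats the expert's competence as a known constant, whereas the summarized paper models the competence of an expert whose quality is unknown; it is included only to illustrate why a preference-informed prior can reduce online regret relative to starting from a flat prior.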
Keywords
» Artificial intelligence » Fine tuning » Online learning » Reinforcement learning » RLHF