Summary of Dataset Reset Policy Optimization for RLHF, by Jonathan D. Chang et al.
Dataset Reset Policy Optimization for RLHF
by Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Kianté Brantley, Dipendra Misra, Jason D. Lee, Wen Sun
First submitted to arxiv on: 12 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Reinforcement Learning (RL) from human preference-based feedback is a popular paradigm for fine-tuning generative models. This framework typically consists of two steps: learning a reward model from an offline preference dataset and then running online RL to optimize the learned reward model. In this work, the authors propose a new algorithm called Dataset Reset Policy Optimization (DR-PO) that integrates the existing offline preference dataset into the online policy training procedure via dataset resets. Theoretical analysis shows that DR-PO learns at least as well as any policy covered by the offline dataset, under general function approximation and with finite sample complexity. Experimental results on both the TL;DR summarization and Anthropic Helpful and Harmful (HH) datasets demonstrate that generations from DR-PO outperform Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO) in terms of GPT-4 win rate. The authors provide code for this work at this URL. Keywords include model names like GPT-4 and Claude 3 Opus, methods like RLHF, datasets such as TL;DR summarization and HH, tasks like fine-tuning generative models, and subfields like reinforcement learning. (A hypothetical code sketch of the dataset-reset loop follows this table.) |
Low | GrooveSquid.com (original content) | This paper is about improving how computers learn from humans. Right now, we teach computers by giving them feedback on what they do well or poorly. This new algorithm makes that process better by using old data to help the computer learn faster and more accurately. The results show that this new algorithm works better than other methods at making computers generate text that is helpful and not harmful. You can find the code for this project on GitHub. |
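To make the dataset-reset idea in the medium summary more concrete, here is a minimal, hypothetical Python sketch of an online RLHF loop in which some rollouts are reset to partial completions drawn from the offline preference data rather than started from the bare prompt. All names (`policy.generate`, `policy.update`, `reward_model.score`, `offline_dataset`, `reset_prob`) are illustrative assumptions for this sketch, not the authors' actual implementation or API.

```python
# Hypothetical sketch of online RLHF with dataset resets (the DR-PO idea).
# Object names and methods below are illustrative assumptions, not the
# authors' released code.
import random

def train_with_dataset_resets(policy, reward_model, offline_dataset, prompts,
                              num_iters=1000, reset_prob=0.5):
    """Online policy optimization where each rollout either starts from the
    prompt (standard RLHF) or is reset to a partial completion taken from
    the offline preference dataset."""
    for _ in range(num_iters):
        prompt = random.choice(prompts)
        if random.random() < reset_prob and prompt in offline_dataset:
            # Dataset reset: begin generation from a prefix of an offline
            # completion for this prompt instead of from the prompt alone.
            offline_completion = random.choice(offline_dataset[prompt])
            cut = random.randint(0, len(offline_completion))
            start_state = prompt + offline_completion[:cut]
        else:
            start_state = prompt  # standard rollout from the initial state
        continuation = policy.generate(start_state)           # assumed API
        reward = reward_model.score(start_state + continuation)
        policy.update(start_state, continuation, reward)      # e.g., a PPO-style step
    return policy
```

In this sketch, the only difference from a standard PPO-style RLHF loop is the choice of starting state for each rollout; reward scoring and the policy update are left unchanged, which is how the offline preference data can be folded into online training without a separate offline RL stage.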
Keywords
* Artificial intelligence * Fine-tuning * GPT * Optimization * Reinforcement learning * RLHF * Summarization