Summary of DPO Meets PPO: Reinforced Token Optimization for RLHF, by Han Zhong et al.


DPO Meets PPO: Reinforced Token Optimization for RLHF

by Han Zhong, Zikang Shan, Guhao Feng, Wei Xiong, Xinle Cheng, Li Zhao, Di He, Jiang Bian, Liwei Wang

First submitted to arXiv on: 29 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper introduces Reinforced Token Optimization (RTO), a framework that models the Reinforcement Learning from Human Feedback (RLHF) problem as a Markov Decision Process (MDP). RTO learns a token-wise reward function from preference data and optimizes the policy against these dense, learned rewards. By integrating Direct Preference Optimization (DPO) with Proximal Policy Optimization (PPO), RTO outperforms PPO and direct preference learning algorithms in extensive experiments. The framework’s capabilities are demonstrated on the AlpacaEval 2 benchmark, where it achieves a 7.5-point improvement over PPO. The code and models are available at https://github.com/zkshan2002/RTO.
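To make the token-wise reward idea concrete, the sketch below shows one way such rewards can be computed: as the scaled log-probability ratio between a DPO-trained policy and a frozen reference model, evaluated at each response token. This is a minimal illustration, not the authors' released implementation; the model names, the beta value, and the helper function are assumptions made for the example.

```python
# Minimal sketch of DPO-style token-wise rewards (illustrative, not the paper's code).
# Assumes a causal LM "policy" (standing in for a DPO-trained model) and a frozen
# "ref" model, both loaded via Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_rewards(prompt, response, policy, ref, tokenizer, beta=0.1):
    """Return beta * (log pi(a_t | s_t) - log pi_ref(a_t | s_t)) for each response token."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logp_policy = torch.log_softmax(policy(full_ids).logits, dim=-1)
        logp_ref = torch.log_softmax(ref(full_ids).logits, dim=-1)
    # Logits at position t predict token t+1, so align predictions with shifted targets.
    targets = full_ids[:, 1:]
    lp_pol = logp_policy[:, :-1].gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    lp_ref = logp_ref[:, :-1].gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Index of the first response token in the shifted view
    # (assumes the prompt tokenization is a prefix of the full tokenization).
    start = prompt_ids.shape[1] - 1
    return beta * (lp_pol[:, start:] - lp_ref[:, start:])

# Example usage with placeholder models (gpt2 stands in for a DPO-trained policy):
# tok = AutoTokenizer.from_pretrained("gpt2")
# policy = AutoModelForCausalLM.from_pretrained("gpt2")
# ref = AutoModelForCausalLM.from_pretrained("gpt2")
# rewards = token_rewards("Question: What is RLHF? Answer:", " Learning from preferences.", policy, ref, tok)
```

In an RTO-style pipeline, per-token rewards like these would replace the single sequence-level score fed to a PPO trainer; the sketch only covers the reward computation itself.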
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making computers better at understanding what people mean when they give feedback. Right now, computers usually get only a single reward or punishment at the end of a whole answer, which makes it hard for them to figure out which parts were good and which were bad. The authors created a new way for computers to learn by giving them feedback on every word they write. This new method is called Reinforced Token Optimization (RTO). RTO works better than other methods in tests, especially at following human feedback. You can find the code and models used in this paper at https://github.com/zkshan2002/RTO.

Keywords

» Artificial intelligence  » Optimization  » Reinforcement learning from human feedback  » RLHF  » Token