Summary of It Takes Two: On the Seamlessness between Reward and Policy Model in RLHF, by Taiming Lu et al.


It Takes Two: On the Seamlessness between Reward and Policy Model in RLHF

by Taiming Lu, Lingfeng Shen, Xinyu Yang, Weiting Tan, Beidi Chen, Huaxiu Yao

First submitted to arXiv on: 12 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty: the medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high-difficulty version is the paper’s original abstract. Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper examines Reinforcement Learning from Human Feedback (RLHF), in which policy models (PMs) and reward models (RMs) are trained to align language models with human preferences. The authors identify a “saturation phenomenon”: continual improvements in the RM and PM do not translate into RLHF progress, which they attribute to data samples that induce a 35% mismatch rate between PM and RM judgments. To address this issue, they introduce the concept of seamlessness between the two models and propose SEAM, an automatic metric that quantifies the discrepancy between PM and RM judgments on a given data sample. They validate SEAM’s effectiveness in selecting relevant data for RL training, improving performance by 4.5%, and in guiding model augmentation, yielding a 4% improvement over standard methods.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about making language models better at doing what humans want by learning from human feedback. Right now, these models are not very good at understanding what humans want them to do, and the authors found that even as the individual models keep getting better, the overall system does not get better at following human intent. They call this problem “saturation.” To solve it, they came up with a new way of measuring how well the two models work together. This helps choose the right data for the models to learn from, so they do a better job overall.

Keywords

* Artificial intelligence
* Reinforcement learning from human feedback
* RLHF