Summary of Robust Preference Optimization Through Reward Model Distillation, by Adam Fisch et al.
Robust Preference Optimization through Reward Model Distillation
by Adam Fisch, Jacob Eisenstein, Vicky Zayats, Alekh Agarwal, Ahmad Beirami, Chirag Nagpal, Pete Shaw, Jonathan Berant
First submitted to arXiv on: 29 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each of the summaries below covers the same AI paper but is written at a different level of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper analyzes and addresses a failure mode of Direct Preference Optimization (DPO), a language model post-training method that trains a policy directly on preference data without a separate reward model or reinforcement learning stage. In practice, DPO is prone to overfitting and can produce degenerate policies. The authors instead use distillation to obtain a better proxy for the true preference distribution: they train the language model to match an explicit reward model fit to the preference data, and they optimize against a family of reward models to account for uncertainty in that fit. This improves robustness to distribution shift in the preference annotations while preserving the simplicity of DPO (see the sketch after this table for an illustrative objective). |
Low | GrooveSquid.com (original content) | The paper is about fixing a problem with a way of improving language models. The current method (called Direct Preference Optimization) often doesn’t work well because it can make the model do silly things instead of good things. The authors propose a new idea, called distillation, that helps avoid these problems: they train the language model to follow an expert’s advice about what is good and what is not, which makes the model more reliable. This matters because it will help us make better use of language models in real-life applications. |
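
The medium-difficulty summary describes training the language model to match an explicit reward model. As a rough illustration only, the PyTorch sketch below shows one way such a pairwise distillation objective could be written: the policy's DPO-style implicit reward margin is fit to the soft preference probabilities implied by an explicit reward model, rather than to hard 0/1 preference labels. All function and variable names here are hypothetical, this is not the authors' implementation, and the paper's pessimistic optimization over a family of reward models is omitted.

```python
# Illustrative sketch (hypothetical names, not the authors' code):
# pairwise reward model distillation in a DPO-style parameterization.
import torch
import torch.nn.functional as F


def distillation_loss(policy_logp_w, policy_logp_l,
                      ref_logp_w, ref_logp_l,
                      rm_score_w, rm_score_l,
                      beta=0.1):
    """Fit the policy's implied pairwise preference probability to the
    preference probability given by an explicit reward model.

    policy_logp_*: log pi(y | x) under the policy being trained, for the
                   preferred (w) and dispreferred (l) responses.
    ref_logp_*:    log pi_ref(y | x) under the frozen reference policy.
    rm_score_*:    scalar scores from a reward model trained on preference data.
    """
    # DPO-style implicit rewards: beta * log(pi / pi_ref).
    implicit_w = beta * (policy_logp_w - ref_logp_w)
    implicit_l = beta * (policy_logp_l - ref_logp_l)

    # Soft target: Bradley-Terry preference probability implied by the
    # explicit reward model's score difference.
    target_p = torch.sigmoid(rm_score_w - rm_score_l)

    # Cross-entropy between the policy-implied pairwise distribution and the
    # reward-model-implied one (vanilla DPO uses hard 0/1 labels instead).
    return F.binary_cross_entropy_with_logits(implicit_w - implicit_l, target_p)


# Toy usage with random per-example log-probabilities and reward scores.
batch = 4
policy_w = torch.randn(batch, requires_grad=True)
policy_l = torch.randn(batch, requires_grad=True)
loss = distillation_loss(policy_w, policy_l,
                         torch.randn(batch), torch.randn(batch),
                         torch.randn(batch), torch.randn(batch))
loss.backward()
print(float(loss))
```

The design choice illustrated here is that distilling soft reward-model probabilities, rather than fitting hard preference labels, gives the policy a smoother training signal; the robustness discussed in the paper additionally comes from optimizing against a family of reward models, which this sketch does not show.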
Keywords
» Artificial intelligence » Distillation » Language model » Optimization » Overfitting » Reinforcement learning