Self-Play Preference Optimization for Language Model Alignment

by Yue Wu, Zhiqing Sun, Huizhuo Yuan, Kaixuan Ji, Yiming Yang, Quanquan Gu

First submitted to arXiv on: 1 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed Self-Play Preference Optimization (SPPO) method treats language model alignment as a constant-sum two-player game and seeks its Nash equilibrium policy. The approach iteratively updates the policy and provably approximates the Nash equilibrium, using only 60k prompts from the UltraFeedback dataset and a pre-trained 0.4B-parameter preference model, PairRM. SPPO achieves a state-of-the-art length-controlled win rate against GPT-4-Turbo on AlpacaEval 2.0 and outperforms DPO and IPO on MT-Bench, Arena-Hard, and the Open LLM Leaderboard, all without additional external supervision from GPT-4 or other stronger language models.
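
To make the iterative update concrete, below is a minimal sketch of a loss one could use for a single SPPO iteration, under one common reading of the paper's update: the new policy is proportional to the old one reweighted by exp(eta * P(y beats pi_t | x)), which is fitted by regressing the log-ratio log(pi_theta / pi_t) onto eta * (P_hat - 1/2). This is an illustrative PyTorch sketch, not the authors' implementation; the names sppo_loss, logp_theta, logp_ref, pref_prob, eta, and all numbers are placeholders.

```python
# Minimal sketch of one SPPO-style iteration loss (illustrative, not the authors' code).
# Assumed form of the update: pi_{t+1}(y|x) proportional to
# pi_t(y|x) * exp(eta * P(y beats pi_t | x)), approximated by regressing
# log(pi_theta(y|x) / pi_t(y|x)) onto eta * (P_hat - 1/2).

import torch

def sppo_loss(logp_theta: torch.Tensor,
              logp_ref: torch.Tensor,
              pref_prob: torch.Tensor,
              eta: float) -> torch.Tensor:
    """Squared-error surrogate for a single SPPO iteration.

    logp_theta: summed log-probabilities of each sampled response under the
                policy being trained (pi_theta).
    logp_ref:   the same quantity under the frozen previous-iteration policy (pi_t).
    pref_prob:  estimated probability (e.g. from a preference model such as PairRM)
                that the response beats an average response from pi_t on the same prompt.
    eta:        step size / inverse temperature of the exponential weighting.
    """
    log_ratio = logp_theta - logp_ref        # log(pi_theta(y|x) / pi_t(y|x))
    target = eta * (pref_prob - 0.5)         # centered, scaled win probability
    return torch.mean((log_ratio - target) ** 2)

# Toy example: four sampled responses for one prompt, with made-up numbers.
logp_theta = torch.tensor([-12.3, -15.1, -11.8, -14.0], requires_grad=True)
logp_ref   = torch.tensor([-12.0, -15.0, -12.5, -13.7])
pref_prob  = torch.tensor([0.62, 0.41, 0.77, 0.50])   # hypothetical PairRM estimates
loss = sppo_loss(logp_theta, logp_ref, pref_prob, eta=1.0)
loss.backward()                                       # gradients flow into logp_theta
```

In practice the log-probabilities would come from summing token-level log-probs of each sampled response under the current and previous policies; the sketch only shows the shape of the regression target and how a preference model's win probabilities would enter the loss.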

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a new method for aligning language models with human preferences using self-play. It treats the problem as a game in which the model plays against copies of itself, with a preference model deciding which response wins each round. By repeating this game and updating the policy after each round, the method gradually finds a policy that no rival policy can consistently beat, which is close to the best possible solution. It reaches strong results without extra training data or help from stronger language models.

Keywords

» Artificial intelligence  » Alignment  » GPT  » Language model  » Optimization