


Transforming and Combining Rewards for Aligning Large Language Models

by Zihao Wang, Chirag Nagpal, Jonathan Berant, Jacob Eisenstein, Alex D’Amour, Sanmi Koyejo, Victor Veitch

First submitted to arXiv on: 1 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
This version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper addresses two key challenges in aligning language models with human preferences. The first arises when using a reward model learned from preference data to update the language model: any monotone transformation of the reward model preserves the preference ranking, so the choice of transformation is a genuine design decision. The authors identify a natural choice, dubbed the "LSC-transformation" (log-sigmoid-centered transformation), which emphasizes improving poorly-performing outputs and mitigates both underfitting and reward hacking. The second challenge is combining multiple reward models; the same transformation enables principled aggregation, linking summation of transformed rewards to logical conjunction (an output scores well only if it is good on every property). Experiments aligning language models with RLHF show substantial improvements over the baseline approach. A minimal code sketch of the transformation appears after the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
The paper is about finding a better way to connect language models with what humans like or dislike. The authors tackle two problems: how can we make sure the language model gets better at producing good outputs, and how can we combine multiple things we want the language model to be good at? They found a simple and clever solution that focuses on improving poorly-performing outputs, rather than just making the model even better at what it is already good at. This makes the model more helpful overall.

Keywords

» Artificial intelligence  » Language model  » Rlhf  » Sigmoid  » Underfitting