


Bayesian Reward Models for LLM Alignment

by Adam X. Yang, Maxime Robeyns, Thomas Coste, Zhengyan Shi, Jun Wang, Haitham Bou-Ammar, Laurence Aitchison

First submitted to arXiv on: 20 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a new approach to aligning large language models (LLMs) using a Bayesian reward model that signals higher uncertainty when prompts or responses deviate from the training data distribution. This helps to mitigate “reward hacking”, where responses receive high rewards due to imperfections in the original reward model rather than true preferences. The authors train the Bayesian reward models using a Laplace approximation on LoRA weights and find that this effectively reduces reward overoptimization in best-of-n (BoN) sampling; a rough sketch of the idea appears after the summaries below.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is trying to make large language models more helpful and less toxic by changing how they are trained. Right now, we give them rewards when they do things we like, but this can be a problem because the models might start producing responses that aren’t what we really want just to get the reward. The authors suggest a new way of giving rewards that takes into account how certain or uncertain the reward model is about its score. This helps the model produce more helpful and accurate responses.

Keywords

  • Artificial intelligence
  • Large language model
  • LoRA