
Summary of Mitigating the Alignment Tax of RLHF, by Yong Lin et al.


Mitigating the Alignment Tax of RLHF

by Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Jianmeng Liu, Jipeng Zhang, Rui Pan, Haoxiang Wang, Wenbin Hu, Hanning Zhang, Hanze Dong, Renjie Pi, Han Zhao, Nan Jiang, Heng Ji, Yuan Yao, Tong Zhang

First submitted to arXiv on: 12 Sep 2023

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper explores the “alignment tax” phenomenon that occurs in Large Language Models (LLMs) when Reinforcement Learning from Human Feedback (RLHF) is applied. During pre-training, LLMs acquire a wide range of abilities, but RLHF can cause them to forget these abilities. The researchers conducted experiments on OpenLLaMA-3B and found a significant alignment tax on NLP tasks. Surprisingly, efforts to mitigate forgetting often compromise RLHF performance, leading to a delicate balance between alignment and forgetting (a rough sketch of how such a tax could be quantified follows the summaries below).
Low Difficulty Summary (written by GrooveSquid.com, original content)
This study is about how big language models learn from humans. When we teach these models what’s right or wrong, they sometimes forget things they already knew. The researchers looked at this problem and found that it happens in many tasks related to natural language processing. They also discovered that trying to fix this forgetting problem can actually make the model worse at learning from humans. This is important because it means we need to find a way to balance these two goals.
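
As a minimal, hypothetical illustration of the “alignment tax” described in the medium difficulty summary, the Python sketch below compares a model’s benchmark scores before and after RLHF and reports the per-task drop. The task names, scores, and the alignment_tax helper are placeholders for illustration only; they are not taken from the paper or from any published codebase.

```python
"""Sketch (not from the paper): the "alignment tax" viewed as the drop in
NLP benchmark scores between a pre-RLHF model and its RLHF-aligned version.
All task names and numbers below are illustrative placeholders."""

from typing import Dict


def alignment_tax(pre_rlhf: Dict[str, float], post_rlhf: Dict[str, float]) -> Dict[str, float]:
    """Per-task score drop; a positive value means ability lost after RLHF."""
    return {task: pre_rlhf[task] - post_rlhf[task] for task in pre_rlhf}


if __name__ == "__main__":
    # Hypothetical benchmark accuracies for a base model (e.g. OpenLLaMA-3B)
    # and the same model after RLHF. Placeholder numbers only.
    pre = {"reading_comprehension": 0.62, "commonsense_qa": 0.58, "translation": 0.41}
    post = {"reading_comprehension": 0.55, "commonsense_qa": 0.51, "translation": 0.37}

    tax = alignment_tax(pre, post)
    for task, drop in tax.items():
        print(f"{task}: score dropped by {drop:.2f}")
    print(f"average alignment tax: {sum(tax.values()) / len(tax):.2f}")
```

In this framing, mitigating the tax means shrinking the per-task drops without also lowering the aligned model’s RLHF reward, which is the trade-off the summaries above describe.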

Keywords

* Artificial intelligence  * Alignment  * Natural language processing  * NLP  * Reinforcement learning  * RLHF