Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment

by Jiaxiang Li, Siliang Zeng, Hoi-To Wai, Chenliang Li, Alfredo Garcia, Mingyi Hong

First submitted to arXiv on: 28 May 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper explores techniques for aligning foundation models with human preferences, focusing on how reward learning can be brought into the supervised fine-tuning (SFT) stage that typically precedes Reinforcement Learning from Human Feedback (RLHF). The authors argue that fine-tuning foundation models on human demonstration data can be improved by leveraging Inverse Reinforcement Learning (IRL) to simultaneously build a reward model from that same data. This approach yields new algorithms for efficient and robust alignment, which are demonstrated to outperform existing methods on the HuggingFace Open LLM Leaderboard (a minimal code sketch of the joint idea follows the summaries below).

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about how we can make AI models work better with humans by learning from the examples humans provide. Right now, we fine-tune these models on human-written examples, but this process can be improved by also learning what humans like and dislike from those same examples. The authors propose a new method based on Inverse Reinforcement Learning (IRL) that does just this. They show that this approach makes AI models better at following human preferences, which is important for making sure AI systems are fair and make good decisions.

Keywords

» Artificial intelligence  » Alignment  » Fine-tuning  » Reinforcement learning  » Reinforcement learning from human feedback  » RLHF  » Supervised