Summary of Semi-Supervised Reward Modeling via Iterative Self-Training, by Yifei He et al.
Semi-Supervised Reward Modeling via Iterative Self-Training
by Yifei He, Haoxiang Wang, Ziyan Jiang, Alexandros Papangelis, Han Zhao
First submitted to arXiv on: 10 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | Semi-Supervised Reward Modeling (SSRM) is a novel approach that enhances traditional reward models (RMs) used in Reinforcement Learning from Human Feedback (RLHF). RMs are crucial for aligning large language models (LLMs) with human preferences, but conventional RM training relies on extensive human-annotated preference data, which poses scalability and cost challenges. SSRM addresses these limitations by exploiting unlabeled data through three iterative steps: pseudo-labeling unlabeled examples, selecting high-confidence examples via a confidence threshold, and supervised finetuning on the refined set (see the sketch after this table). Experiments show that SSRM improves RM performance without additional labeling cost and achieves performance comparable to models trained on an equivalent volume of labeled data. This reduces the dependency on large human-annotated datasets and, in turn, training time and cost.
Low | GrooveSquid.com (original content) | Imagine a way to teach computers what humans like or dislike without needing tons of labeled data. That’s what Semi-Supervised Reward Modeling (SSRM) does! It helps align computer models with human preferences. Right now, this alignment process needs lots of people to label data, which is slow and expensive. SSRM makes it possible to use more data without needing as many labels. The approach works in a few steps: the model labels some data itself, keeps only the examples it is most confident about, and then fine-tunes on those examples. It all adds up to better models that understand human preferences!
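Since the summaries describe SSRM only at a high level, here is a minimal, self-contained sketch of what one iterative self-training round could look like. Every name here (`score`, `finetune`, `ssrm_iteration`), the length-based stand-in reward, and the 0.9 confidence threshold are illustrative assumptions rather than the authors' implementation; a real reward model would score (prompt, response) pairs with a trained network and finetune with a preference (Bradley-Terry) loss.

```python
import math

def score(reward_model, response):
    """Stand-in reward: a real RM would score (prompt, response) with a trained network."""
    return reward_model["bias"] + reward_model["scale"] * len(response)

def finetune(reward_model, labeled_pairs):
    """Stand-in supervised finetuning step on (chosen, rejected) pairs."""
    # A real implementation would minimize a Bradley-Terry loss by gradient descent.
    reward_model["scale"] += 0.01 * len(labeled_pairs)
    return reward_model

def ssrm_iteration(reward_model, unlabeled_pairs, confidence_threshold=0.9):
    """One round: pseudo-label, filter by confidence, then finetune on the kept pairs."""
    pseudo_labeled = []
    for resp_a, resp_b in unlabeled_pairs:
        margin = score(reward_model, resp_a) - score(reward_model, resp_b)
        p_a_preferred = 1.0 / (1.0 + math.exp(-margin))   # Bradley-Terry preference probability
        confidence = max(p_a_preferred, 1.0 - p_a_preferred)
        if confidence >= confidence_threshold:             # keep only confident pseudo-labels
            chosen, rejected = (resp_a, resp_b) if p_a_preferred >= 0.5 else (resp_b, resp_a)
            pseudo_labeled.append((chosen, rejected))
    return finetune(reward_model, pseudo_labeled)

if __name__ == "__main__":
    rm = {"bias": 0.0, "scale": 0.1}                       # toy reward model parameters
    unlabeled = [("a longer answer", "short"), ("ok", "a very long rambling answer")]
    for _ in range(3):                                     # each round relabels with the improved model
        rm = ssrm_iteration(rm, unlabeled)
    print(rm)
```

Each round mirrors the three steps in the medium summary: the current model pseudo-labels the unlabeled pairs, only pairs whose preference probability clears the threshold are kept, and the model is finetuned on that refined set before the next round.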
Keywords
» Artificial intelligence » Alignment » Fine-tuning » Reinforcement learning » RLHF » Semi-supervised » Supervised