
Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment

by Keming Lu, Bowen Yu, Fei Huang, Yang Fan, Runji Lin, Chang Zhou

First submitted to arXiv on: 28 May 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper addresses the challenge of aligning Large Language Models (LLMs) with human-centric values while preserving their pre-trained abilities. The authors find that interpolating model parameters can adjust the trade-off between human preference and basic capabilities, reducing the alignment tax at the cost of alignment reward. Building on this, they propose the Online Merging Optimizer, which merges the RL policy and SFT models at each optimization step to regulate the training direction (a simplified code sketch of this idea follows the summaries below). The optimizer is evaluated across different LLM families (Qwen and LLaMA), model sizes (1.8B to 8B), RLHF algorithms (DPO and KTO), and existing model merging methods, showing significant gains in alignment reward while mitigating alignment tax across 14 benchmarks.
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper is about making computers understand human values better. Right now, computers are very good at doing tasks on their own, but they often don’t align with what humans want. The authors found a way to make computers adjust how well they do tasks based on human preferences, while still keeping their basic abilities. They did this by merging two different models together and adjusting the direction of training. This new method works well with many different types of computer models and improves how well they align with human values.
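
To make the mechanism described in the medium difficulty summary concrete, here is a minimal PyTorch sketch of online merging: after each gradient update, the policy weights are interpolated toward a frozen SFT reference model. This is only an illustrative simplification, not the paper’s actual Online Merging Optimizer; the linear interpolation rule and the `merge_alpha` weight are assumptions made for this sketch.

```python
# Minimal sketch of online merging during RLHF-style fine-tuning.
# NOTE: this is NOT the paper's exact Online Merging Optimizer. The
# per-step linear interpolation toward the SFT model and the value of
# merge_alpha are simplifying assumptions for illustration only.
import torch


def online_merge_step(policy, sft_model, optimizer, loss, merge_alpha=0.01):
    """Run one optimization step, then nudge the policy toward the SFT model.

    policy      : torch.nn.Module trained with an RLHF-style objective
    sft_model   : frozen torch.nn.Module holding the SFT reference weights
                  (assumed to share the policy's architecture)
    optimizer   : any torch.optim.Optimizer over policy.parameters()
    loss        : scalar training loss already computed for this batch
    merge_alpha : illustrative interpolation weight toward the SFT weights
    """
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Online merging: after the gradient update, interpolate each policy
    # parameter toward the corresponding SFT parameter, regularizing the
    # training direction so the policy does not drift too far from SFT.
    with torch.no_grad():
        for p, p_sft in zip(policy.parameters(), sft_model.parameters()):
            p.mul_(1.0 - merge_alpha).add_(p_sft, alpha=merge_alpha)
```

In this sketch, calling `online_merge_step` once per batch replaces the usual `zero_grad / backward / step` sequence; larger `merge_alpha` pulls the policy more strongly back toward the SFT model (less alignment tax, potentially less reward), while `merge_alpha = 0` recovers plain RLHF training.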

Keywords

  • Artificial intelligence
  • Alignment
  • LLaMA
  • Optimization
  • RLHF