
Summary of Length Desensitization in Direct Preference Optimization, by Wei Liu et al.


Length Desensitization in Direct Preference Optimization

by Wei Liu, Yang Bai, Chengcheng Han, Rongxiang Weng, Jun Xu, Xuezhi Cao, Jingang Wang, Xunliang Cai

First submitted to arXiv on: 10 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates Direct Preference Optimization (DPO), which is widely used in the Reinforcement Learning from Human Feedback (RLHF) phase to align Large Language Models (LLMs) with human preferences. DPO tends to over-optimize for verbosity, which hurts both model performance and user experience. The authors reveal a strong correlation between DPO's implicit reward and the length of the preference data, which makes training length-sensitive and drives models toward longer responses. To address this, they propose LD-DPO, a method that decouples the explicit preference for length from the other, implicit preferences in the data (a sketch of the underlying DPO objective appears after the summaries below). Experimental results on various benchmarks show that LD-DPO outperforms DPO and other baseline methods, reducing response length by 10-40% while staying aligned with human-like preferences.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how we can make large language models work better with humans. Right now, a technique called Direct Preference Optimization (DPO) is used to teach these models what humans like. But DPO has a problem: it makes the models talk too much! The researchers found that this happens because DPO's training signal rewards longer answers. They came up with a new idea called LD-DPO that fixes this by separating how long the model talks from what we actually want it to say. They tested it on several benchmarks and showed that it works better than the old way.

Keywords

» Artificial intelligence  » Optimization  » Reinforcement learning from human feedback