
Summary of Explaining Length Bias in LLM-Based Preference Evaluations, by Zhengyu Hu et al.


Explaining Length Bias in LLM-Based Preference Evaluations

by Zhengyu Hu, Linxin Song, Jieyu Zhang, Zheyuan Xiao, Tianfu Wang, Zhengyu Chen, Nicholas Jing Yuan, Jianxun Lian, Kaize Ding, Hui Xiong

First submitted to arXiv on: 1 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper decomposes the preference evaluation metric used with large language models (LLMs) into two components: desirability, which reflects how trustworthy and correct a response is, and information mass, which depends on response length. This decomposition reveals a bias toward longer responses that undermines the reliability of the evaluation. Empirical experiments confirm that response length affects judgments through its effect on information mass. To recover a reliable metric, the authors propose AdapAlpaca, which adjusts win-rate measurement to ensure a fair comparison of response quality.
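
To make the length effect concrete, here is a minimal diagnostic sketch (not from the paper; the `judge` callable, field names, and word-count length proxy are assumptions) that measures how often an LLM judge prefers the longer of two candidate responses. A rate well above 0.5 would suggest that information mass, rather than desirability alone, is driving the verdicts.

```python
# Hypothetical diagnostic for length bias in LLM-as-judge preference data.
# Assumptions (not from the paper): each pair has "prompt", "a", "b" fields,
# `judge` returns "a" or "b", and length is approximated by word count.
from typing import Callable, Dict, List


def longer_response_win_rate(
    pairs: List[Dict[str, str]],
    judge: Callable[[str, str, str], str],
) -> float:
    """Fraction of length-mismatched pairs where the judge picks the longer response."""
    longer_wins = 0
    counted = 0
    for ex in pairs:
        len_a = len(ex["a"].split())
        len_b = len(ex["b"].split())
        if len_a == len_b:
            continue  # equal-length pairs say nothing about length bias
        verdict = judge(ex["prompt"], ex["a"], ex["b"])
        longer = "a" if len_a > len_b else "b"
        counted += 1
        longer_wins += int(verdict == longer)
    return longer_wins / counted if counted else float("nan")
```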

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a new way to evaluate responses from large language models (LLMs) so that the judge isn't simply picking the longest answer. The authors break down how good an answer is into two parts: whether it is trustworthy and correct, and how much information it provides. By doing this, they found that the length of an answer really does affect how well it is judged. To fix the problem, they came up with a new way of measuring answer quality, called AdapAlpaca.
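
As a rough illustration of the idea behind AdapAlpaca, the sketch below computes win rates only between test and reference responses whose lengths fall in the same interval, so quality is compared at matched lengths. The bucket edges, field names, and `judge` callable are illustrative assumptions; the paper's actual procedure may differ.

```python
# Minimal sketch of length-matched win-rate measurement in the spirit of
# AdapAlpaca. Assumptions (not from the paper): length is word count, the
# interval edges below are arbitrary, and `judge` returns 1 when the test
# response is preferred over the reference.
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

LENGTH_BUCKETS: List[Tuple[int, int]] = [(0, 100), (100, 200), (200, 400), (400, 100_000)]


def bucket_of(length: int) -> int:
    """Index of the length interval a response falls into (last bucket catches overflow)."""
    for i, (lo, hi) in enumerate(LENGTH_BUCKETS):
        if lo <= length < hi:
            return i
    return len(LENGTH_BUCKETS) - 1


def length_matched_win_rates(
    pairs: List[Dict[str, str]],
    judge: Callable[[str, str, str], int],
) -> Dict[int, float]:
    """Per-bucket win rate of the test model, counting only length-matched pairs."""
    wins: Dict[int, int] = defaultdict(int)
    totals: Dict[int, int] = defaultdict(int)
    for ex in pairs:
        b_test = bucket_of(len(ex["test"].split()))
        b_ref = bucket_of(len(ex["reference"].split()))
        if b_test != b_ref:
            continue  # skip comparisons confounded by a length gap
        totals[b_test] += 1
        wins[b_test] += judge(ex["prompt"], ex["test"], ex["reference"])
    return {b: wins[b] / totals[b] for b in totals}
```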

Keywords

  • Artificial intelligence