Summary of Judging the Judges: A Systematic Study of Position Bias in LLM-as-a-Judge, by Lin Shi et al.


Judging the Judges: A Systematic Study of Position Bias in LLM-as-a-Judge

by Lin Shi, Chiyu Ma, Wenhua Liang, Weicheng Ma, Soroush Vosoughi

First submitted to arXiv on: 12 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com original content)
The proposed framework introduces a systematic approach to examining position bias in pairwise comparisons judged by Large Language Models (LLMs). The study measures repetition stability, position consistency, and preference fairness across 12 LLM judges evaluating tasks from MTBench and DevBench. The findings confirm that position bias is not due to random chance and highlight notable variations across judges and tasks. Furthermore, the quality gap between the compared solutions significantly affects position bias. These insights can inform debiasing strategies, guide judge model selection, and improve benchmark design.

Low Difficulty Summary (GrooveSquid.com original content)
LLMs are being used as judges to evaluate answers to different tasks, but they have biases of their own. One of these is position bias: the tendency to favor a solution based on where it appears in the prompt rather than on its quality. This study shows how to examine that bias with a framework built around repetition stability, position consistency, and preference fairness. The researchers tested 12 LLM judges across different tasks and found that position bias is not just random chance. They also discovered that the quality gap between the compared solutions matters a lot. These findings can help make LLMs better judges.

Keywords

» Artificial intelligence  » Prompt