Summary of Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking, by Benjamin Feuer et al.
Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking
by Benjamin Feuer, Micah Goldblum, Teresa Datta, Sanjana Nambiar, Raz Besaleli, Samuel Dooley, Max Cembalest, John P. Dickerson
First submitted to arXiv on: 23 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | The paper investigates whether LLM-judge preferences track concrete alignment metrics in post-trained language models. It introduces SOS-Bench, a large standardized benchmark for evaluating model alignment, and finds that LLM-judge preferences do not correlate with concrete measures of safety, world knowledge, or instruction following (a rough sketch of this correlation check appears after the table). Instead, the supervised fine-tuning (SFT) stage of post-training has the largest impact on alignment, with data scaling and prompt diversity as the driving factors. The paper also highlights implicit biases in LLM judges, which prioritize style over factuality and safety.
Low | GrooveSquid.com (original content) | The paper looks at how well language models line up with what humans actually want. It builds a benchmark, called SOS-Bench, to test this. The researchers found that what an AI judge likes about a model’s answers doesn’t match how well the model does on concrete tasks, like being safe or knowing facts. Instead, a training step called supervised fine-tuning matters most for getting good results. The paper also shows that AI judges are biased: they prefer answers that sound nice over answers that are accurate.
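
The central finding described in the medium summary, that judge preferences and concrete benchmark scores fail to correlate, boils down to a rank-correlation check across models. The minimal Python sketch below illustrates the idea; the model names and all scores are invented for illustration and do not come from the paper or from SOS-Bench.

```python
# Hypothetical sketch: do LLM-judge win rates track concrete benchmark scores?
# All numbers below are invented for illustration; they are NOT from the paper.
from scipy.stats import spearmanr

# Per-model scores: (LLM-judge win rate, ground-truth benchmark score, e.g. safety)
models = {
    "model_a": (0.72, 0.41),
    "model_b": (0.65, 0.58),
    "model_c": (0.58, 0.62),
    "model_d": (0.51, 0.49),
    "model_e": (0.44, 0.66),
}

judge_pref = [scores[0] for scores in models.values()]
benchmark = [scores[1] for scores in models.values()]

# Spearman's rho measures whether the two rankings agree; a value near zero
# (or negative) corresponds to the "no correlation" failure mode the paper reports.
rho, p_value = spearmanr(judge_pref, benchmark)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```

A rank correlation (rather than, say, raw accuracy) is a natural fit here because the question is whether a judge *orders* models the same way the ground-truth benchmarks do, not whether the two scales happen to match numerically.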
Keywords
» Artificial intelligence » Alignment » Fine tuning » Prompt » Supervised