Summary of HREF: Human Response-Guided Evaluation of Instruction Following in Language Models, by Xinxi Lyu et al.
HREF: Human Response-Guided Evaluation of Instruction Following in Language Models
by Xinxi Lyu, Yizhong Wang, Hannaneh Hajishirzi, Pradeep Dasigi
First submitted to arXiv on: 20 Dec 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
| --- | --- | --- |
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper re-examines design choices for automatically evaluating how well Large Language Models (LLMs) follow instructions, across a wide range of instruction-following tasks. The authors find that methods leveraging human-written responses make automatic evaluation more reliable, improving agreement with human judges by up to 3.2%. They also find that human-written responses offer a perspective orthogonal to model-generated responses, and they propose a new benchmark, Human Response-Guided Evaluation of Instruction Following (HREF), which comprises 4,258 samples across 11 task categories and employs a composite evaluation setup. Finally, the authors study the impact of key design choices in HREF and host a live leaderboard that evaluates LLMs on its private evaluation set. |
| Low | GrooveSquid.com (original content) | This paper helps us understand how to better test big language models' ability to follow instructions. Right now, we rely too heavily on these powerful models to judge themselves, which can be biased. The researchers tried out different ways of evaluating these models and found that using human-written responses makes the evaluation more reliable. They also created a new benchmark called Human Response-Guided Evaluation of Instruction Following (HREF), which uses a mix of human-written responses and model-based evaluation. This helps us see how well language models follow instructions. |
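The "agreement with human judges" mentioned in the summaries above is, at its core, a simple fraction: how often the automatic evaluator's verdict matches the human verdict on the same comparison. The sketch below illustrates that computation in Python; it is not the authors' HREF pipeline, and every name and data point in it is hypothetical, purely for illustration.

```python
# A minimal sketch (not the HREF implementation) of measuring how often an
# automatic judge agrees with human annotators on pairwise comparisons.
# All class names, fields, and toy data here are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class PairwiseJudgment:
    """One comparison: which of two model responses is preferred."""
    instruction: str
    human_choice: str  # "A", "B", or "tie" as judged by a human annotator
    auto_choice: str   # "A", "B", or "tie" as judged by the automatic evaluator


def agreement_rate(judgments: List[PairwiseJudgment]) -> float:
    """Fraction of comparisons where the automatic judge matches the human judge."""
    if not judgments:
        return 0.0
    matches = sum(1 for j in judgments if j.human_choice == j.auto_choice)
    return matches / len(judgments)


if __name__ == "__main__":
    # Toy example: the automatic judge agrees with humans on 3 of 4 comparisons.
    data = [
        PairwiseJudgment("Summarize this article.", "A", "A"),
        PairwiseJudgment("Translate this sentence to French.", "B", "B"),
        PairwiseJudgment("Write a haiku about rain.", "A", "B"),
        PairwiseJudgment("Extract all dates from the text.", "tie", "tie"),
    ]
    print(f"Agreement with human judges: {agreement_rate(data):.1%}")
```

Under this framing, the paper's reported gain means that supplying the automatic evaluator with a human-written reference response raises this agreement fraction by up to 3.2 percentage-style points relative to evaluating without one.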