
IdentifyMe: A Challenging Long-Context Mention Resolution Benchmark

by Kawshik Manikantan, Makarand Tapaswi, Vineet Gandhi, Shubham Toshniwal

First submitted to arXiv on: 12 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The original abstract is available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses a limitation in evaluating large language models' (LLMs') ability to understand references by introducing IdentifyMe, a new benchmark that presents mention resolution tasks in a multiple-choice question format (a hypothetical example item is sketched after these summaries). The benchmark features long narratives and employs heuristics to create a more challenging task, allowing for fine-grained analysis of model performance. Evaluations of both closed- and open-source LLMs reveal a significant performance gap between state-of-the-art sub-10B open models and closed ones. The paper also highlights the difficulty of resolving pronominal mentions, which carry little surface information, and the tendency of LLMs to confuse entities whose mentions overlap in nested structures. Overall, the study demonstrates the strong referential capabilities of state-of-the-art LLMs while indicating room for further improvement.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps us understand how well language models can figure out what is being referred to in a piece of text. The problem is that current ways of testing these models aren't very good at catching their mistakes. To fix this, the researchers created a new way of asking questions about references, called IdentifyMe. The benchmark works like a quiz with multiple-choice answers, which makes it easier to pinpoint exactly where the models go wrong. They tested several language models and found that some are much better than others at understanding references. The best model got 81.9% of the questions right! This shows that these language models are really good at understanding what's being referred to in a text, but there is still room for improvement.

Keywords

  • Artificial intelligence