
Summary of Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models, by Mosh Levy et al.


Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models

by Mosh Levy, Alon Jacoby, Yoav Goldberg

First submitted to arXiv on: 19 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper investigates how extending input length affects the capabilities of Large Language Models (LLMs). Despite their advancements, it is unclear whether LLMs perform consistently across different input lengths. The researchers introduce a novel QA reasoning framework to isolate this effect and find a notable degradation in reasoning performance at input lengths much shorter than the models' technical maximum. They also show that the traditional metric of next-word prediction correlates negatively with performance on their reasoning dataset. By analyzing these results, they identify failure modes that can guide future research and inform strategies to address these limitations in LLMs.

Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at what happens when the text given to a language model gets longer. Surprisingly, we don't really know whether these models work as well on long inputs as they do on short ones! The researchers came up with a special way to test this, using lots of versions of the same text, each with different amounts and types of extra words added. They found that as the input gets longer, language models get worse at answering questions, long before the input reaches the maximum length the models can technically handle. This happened on every version of their dataset, but some versions were more affected than others. The researchers looked at what went wrong and came up with ideas for how to make language models better in the future.

Keywords

» Artificial intelligence