


ReAttention: Training-Free Infinite Context with Finite Attention Scope

by Xiaoran Liu, Ruixiao Li, Qipeng Guo, Zhigeng Liu, Yuerong Song, Kai Lv, Hang Yan, Linlin Li, Qun Liu, Xipeng Qiu

First submitted to arXiv on: 21 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes ReAttention, a training-free approach that enables Large Language Models (LLMs) to support an effectively infinite context length with a finite attention scope. This addresses a key limitation on LLMs’ practical applications, which are currently restricted by the maximum context length the models support. The method performs a position-agnostic top-k attention step before the ordinary self-attention, which lets the model capture semantic relationships across long contexts. Experimental results on LongBench, L-Eval, and InfiniteBench show that ReAttention matches traditional methods while enabling mainstream LLMs such as LLaMA3.1-8B and Mistral-v0.3-7B to support context lengths of at least 1M tokens.
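
To make the mechanism concrete, here is a minimal sketch of the idea described above, assuming a standard decoder KV cache: relevance scores are computed without positional encodings, the top-k cached key/value pairs per head are kept, and ordinary scaled dot-product attention then runs over that reduced set. This is not the authors' implementation; the function name, tensor layout, and per-head selection granularity are illustrative assumptions.

import torch
import torch.nn.functional as F

def reattention_sketch(q, k_cache, v_cache, top_k=1024):
    """Illustrative sketch (not the paper's code): select the top-k cached
    key/value pairs per head with a position-agnostic score, then run
    standard attention over only that subset.

    q:       (batch, heads, 1, dim)    current query at a decoding step
    k_cache: (batch, heads, seq, dim)  cached keys, no positional encoding applied
    v_cache: (batch, heads, seq, dim)  cached values
    """
    top_k = min(top_k, k_cache.size(2))

    # 1) Position-agnostic scoring: plain dot products, no RoPE / position info.
    scores = torch.matmul(q, k_cache.transpose(-2, -1))        # (B, H, 1, seq)

    # 2) Keep only the top-k most relevant cache positions per head.
    idx = scores.topk(top_k, dim=-1).indices.squeeze(2)        # (B, H, top_k)
    gather_idx = idx.unsqueeze(-1).expand(-1, -1, -1, k_cache.size(-1))
    k_sel = k_cache.gather(2, gather_idx)                      # (B, H, top_k, dim)
    v_sel = v_cache.gather(2, gather_idx)                      # (B, H, top_k, dim)

    # 3) Ordinary self-attention over the selected, finite-scope subset.
    return F.scaled_dot_product_attention(q, k_sel, v_sel)     # (B, H, 1, dim)

# Toy usage: a much longer cache would be handled the same way; only top_k entries
# ever reach the attention kernel, which is what keeps the attention scope finite.
q = torch.randn(1, 4, 1, 64)
k = torch.randn(1, 4, 8192, 64)
v = torch.randn(1, 4, 8192, 64)
out = reattention_sketch(q, k, v, top_k=1024)                  # shape (1, 4, 1, 64)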

Low Difficulty Summary (original content by GrooveSquid.com)
The paper tackles a big limitation of Large Language Models (LLMs). Right now, these models can only handle text up to a fixed length, which makes them less useful for tasks that involve very long documents. The new approach, called ReAttention, lets LLMs work with much longer pieces of text without any extra training or special tools. This means they can be used in many more situations where people want computers to understand and generate human-like language.

Keywords

» Artificial intelligence  » Attention  » Context length  » Self attention