Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention

by Tsendsuren Munkhdalai, Manaal Faruqui, Siddharth Gopal

First submitted to arXiv on: 10 Apr 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes an efficient method for scaling Transformer-based Large Language Models (LLMs) to infinitely long inputs using a new attention technique called Infini-attention. Infini-attention incorporates a compressive memory into the vanilla attention mechanism, combining masked local attention and long-term linear attention in a single Transformer block. The approach is demonstrated on long-context language modeling benchmarks, including passkey context block retrieval over 1M-token sequences and book summarization with 500K-token inputs, using 1B and 8B LLMs. A rough code sketch of the mechanism appears after the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
This paper finds a way to make very big language models work well even when they’re faced with super long texts. The authors do this by inventing a new attention technique called Infini-attention, which acts like a special kind of memory that helps the model focus on what’s important. With it, these language models can handle tasks like summarizing entire books or finding specific information in really long passages.

Keywords

» Artificial intelligence  » Attention  » Summarization  » Transformer