LongHeads: Multi-Head Attention is Secretly a Long Context Processor

by Yi Lu, Xin Zhou, Wei He, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang

First submitted to arxiv on: 16 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Large language models (LLMs) have achieved impressive performance in numerous domains, but they often struggle to process lengthy inputs effectively and efficiently due to limited length generalization and the quadratic computational cost of attention. Our proposed framework, LongHeads, addresses these issues by unlocking the untapped potential of multi-head attention. Instead of attending to the full sequence, each head processes an in-distribution length of text, selecting the important context chunks via a chunk-selection strategy based on the correlation between query and key representations. Each head thereby attends only to tokens within its trained length, while the heads collectively cover the longer context. LongHeads runs in linear time and fits seamlessly with LLMs that use relative positional encoding. We demonstrate the efficacy of LongHeads by achieving 100% accuracy at 128k context length on the passkey retrieval task. (A code sketch of the chunk-selection step appears after the summaries below.)
Low Difficulty Summary (written by GrooveSquid.com, original content)
Large language models can process lots of information, but they often struggle to understand really long inputs because it’s hard for them to focus on what’s important. Our new method, called LongHeads, helps these models handle longer inputs by breaking them into smaller chunks that each attention head can focus on. This way, the model can still use all the information in the input, even when it’s very long.
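
To make the chunk-selection idea concrete, here is a minimal, hypothetical PyTorch sketch of a single LongHeads-style attention head. It is not the authors’ implementation: the function name, the fixed chunk size, and the use of a mean-pooled chunk key as the chunk representation are all assumptions made for illustration; the paper constructs its own chunk representation and integrates the mechanism with relative positional encoding.

```python
import torch
import torch.nn.functional as F

def longheads_head(q, k, v, chunk_size=256, n_select=4):
    """Sketch of one LongHeads-style head for a single query token.

    q: (d,) query vector; k, v: (seq_len, d) cached keys/values.
    Assumes seq_len is a multiple of chunk_size to keep the sketch short.
    """
    seq_len, d = k.shape
    num_chunks = seq_len // chunk_size

    # Split the long context into fixed-size, in-distribution chunks.
    kc = k.view(num_chunks, chunk_size, d)
    vc = v.view(num_chunks, chunk_size, d)

    # Chunk representation: mean of the chunk's keys (an assumption;
    # the paper derives its own chunk-level representation).
    chunk_keys = kc.mean(dim=1)                      # (num_chunks, d)

    # Select the chunks whose representations correlate most with the
    # query, so the head never attends beyond its trained length.
    top = (chunk_keys @ q).topk(min(n_select, num_chunks)).indices

    # Standard scaled dot-product attention over the selected chunks only.
    k_sel = kc[top].reshape(-1, d)                   # (n_select*chunk_size, d)
    v_sel = vc[top].reshape(-1, d)
    weights = F.softmax((k_sel @ q) / d ** 0.5, dim=-1)
    return weights @ v_sel                           # (d,)

# Toy usage: a 4096-token context, far longer than any single chunk.
torch.manual_seed(0)
d, L = 64, 4096
q, k, v = torch.randn(d), torch.randn(L, d), torch.randn(L, d)
print(longheads_head(q, k, v).shape)  # torch.Size([64])
```

Because each head scores the chunk representations against its own query, different heads can select different chunks, which is how the heads collectively cover a context far longer than any one head attends to.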

Keywords

  • Artificial intelligence
  • Attention
  • Generalization
  • Multi-head attention
  • Positional encoding