Summary of Simple Linear Attention Language Models Balance the Recall-Throughput Tradeoff, by Simran Arora et al.
Simple linear attention language models balance the recall-throughput tradeoff
by Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, Dylan Zinsley, James Zou, Atri Rudra, Christopher Ré
First submitted to arXiv on: 28 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | In a quest to improve the efficiency of attention-based language models without sacrificing recall, researchers explored a broad set of architectures. They identified a tradeoff between a model's state size and its recall ability, finding that efficient alternatives to attention struggled with recall. To bridge this gap, they proposed BASED, a simple architecture combining linear attention and sliding window attention (a minimal sketch of the two components follows this table). By tuning the window size and the linear attention feature dimension, BASED navigates this tradeoff, recovering the full quality of attention while maintaining small state sizes. They trained language models of up to 1.3 billion parameters, demonstrating that BASED is competitive with Mamba on perplexity and on real-world recall-intensive tasks. |
| Low | GrooveSquid.com (original content) | Language models can be very good at remembering things they've seen before! But they use a lot of memory to do so. To solve this problem, researchers looked at different ways to build language models. They found a tradeoff: methods that remembered things well used a lot of memory, while methods that used little memory were worse at remembering. They came up with a new idea called BASED, which combines two types of attention. By adjusting its settings, they could make it use less memory or get better at remembering things. |
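The medium summary describes BASED as a combination of two attention mechanisms. As a rough illustration only, here is a minimal PyTorch sketch of those two components. The `elu`-plus-one feature map, tensor shapes, and function names are assumptions made for brevity, not the paper's exact formulation (BASED itself uses a Taylor-series approximation of the softmax exponential as its feature map).

```python
import torch

def linear_attention(q, k, v):
    """Causal linear attention: a fixed-size running state instead of a
    growing key-value cache. The elu+1 feature map is an illustrative
    stand-in, not the feature map used in the paper."""
    phi = lambda x: torch.nn.functional.elu(x) + 1  # keeps features positive
    q, k = phi(q), phi(k)
    # Running sums act as the small "state" the summaries refer to.
    kv = torch.einsum("bnd,bne->bnde", k, v).cumsum(dim=1)  # sum of k_i outer v_i
    z = k.cumsum(dim=1)                                     # sum of k_i (normalizer)
    num = torch.einsum("bnd,bnde->bne", q, kv)
    den = torch.einsum("bnd,bnd->bn", q, z).clamp(min=1e-6).unsqueeze(-1)
    return num / den

def sliding_window_attention(q, k, v, window=64):
    """Exact softmax attention restricted to the `window` most recent tokens."""
    b, n, d = q.shape
    scores = torch.einsum("bqd,bkd->bqk", q, k) / d ** 0.5
    pos = torch.arange(n)
    # Mask out future tokens and anything older than the window.
    mask = (pos[None, :] > pos[:, None]) | (pos[:, None] - pos[None, :] >= window)
    scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```

The intuition behind combining the two: the sliding window gives exact recall over nearby tokens, while linear attention carries a compressed, fixed-size summary of everything older. Tuning the window size and the feature dimension is what trades state size against recall quality in the summaries above.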
Keywords
* Artificial intelligence
* Attention
* Perplexity
* Recall