Summary of Eigen Attention: Attention in Low-Rank Space for KV Cache Compression, by Utkarsh Saxena et al.


Eigen Attention: Attention in Low-Rank Space for KV Cache Compression

by Utkarsh Saxena, Gobinda Saha, Sakshi Choudhary, Kaushik Roy

First submitted to arXiv on: 10 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes Eigen Attention, a novel approach to reducing the memory usage of large language models (LLMs) during inference. By performing attention in a low-rank space, Eigen Attention shrinks the key-value (KV) cache, which becomes a critical memory cost at long context lengths and large batch sizes. The authors report up to a 40% reduction in KV cache size and up to a 60% reduction in attention operation latency with minimal performance drop, with experiments across the OPT, MPT, and Llama model families. A minimal sketch of the low-rank idea follows the summaries below.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces a new way to make large language models use less memory while they run. It does this by changing how the model pays attention to different parts of what it is processing, so the information it keeps around is stored in a smaller, compressed form. The results show that this can make the attention step up to 60% faster and cut memory use, which matters when working with long inputs or many requests at once, while the model stays almost as accurate.
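To make the low-rank attention idea more concrete, here is a minimal NumPy sketch of the general approach described above: build a low-rank basis from a sample of key vectors, cache the projected (smaller) keys and values, and run attention inside that reduced space. This is not the authors' released implementation; the SVD-based basis, the single basis shared by queries, keys, and values, and names such as low_rank_basis and low_rank_attention are illustrative assumptions.

```python
import numpy as np

def low_rank_basis(calib_keys: np.ndarray, rank: int) -> np.ndarray:
    """Top-`rank` right singular vectors of calibration key vectors.

    calib_keys: (num_tokens, head_dim) sample of keys from calibration data.
    Returns a (head_dim, rank) projection matrix (an "eigen" basis capturing
    most of the energy of the attention inputs).
    """
    _, _, vt = np.linalg.svd(calib_keys, full_matrices=False)
    return vt[:rank].T

def low_rank_attention(q, k, v, basis):
    """Attention where Q, K, V are first projected into the low-rank basis."""
    q_r = q @ basis                              # (seq_q, rank)
    k_r = k @ basis                              # cached instead of full-dim K
    v_r = v @ basis                              # cached instead of full-dim V
    scores = q_r @ k_r.T / np.sqrt(basis.shape[1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out_r = weights @ v_r                        # output stays in the low-rank space
    return out_r @ basis.T                       # map back to the original head dimension

# Usage sketch: with head_dim = 64 and rank = 32, the cached keys and values
# are half their original size, which is where the KV cache savings come from.
rng = np.random.default_rng(0)
calib = rng.standard_normal((1024, 64))
basis = low_rank_basis(calib, rank=32)
q = rng.standard_normal((8, 64))
k = rng.standard_normal((128, 64))
v = rng.standard_normal((128, 64))
out = low_rank_attention(q, k, v, basis)
print(out.shape)  # (8, 64)
```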

Keywords

» Artificial intelligence  » Attention  » Inference  » Llama