
Summary of Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference, by Harry Dong et al.


Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference

by Harry Dong, Xinyu Yang, Zhenyu Zhang, Zhangyang Wang, Yuejie Chi, Beidi Chen

First submitted to arxiv on: 14 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper addresses the memory bottleneck in large language models caused by the key-value (KV) cache, which stores previously computed keys and values during decoding to avoid recomputation. Existing methods prune or evict less important KV pairs to reduce the memory footprint, but can fall short on tasks that require recalling most tokens. The authors propose LESS, a constant-sized cache integrated with eviction-based methods, so that information from all tokens remains queryable throughout decoding. Experiments show that LESS narrows, and sometimes closes, the performance gap relative to caching everything, while remaining efficient. The implementation is open-sourced on GitHub.
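The core idea described above, a fixed-size state that absorbs evicted KV pairs instead of discarding them outright, can be sketched in plain Python. This is an illustrative toy, not the authors' implementation: the class name `LESSStyleCache` is hypothetical, and the "constant-sized cache" here is a crude running sum, where LESS instead learns a low-rank recurrent state.

```python
class LESSStyleCache:
    """Toy sketch of an eviction-based KV cache paired with a
    constant-sized state. At most `budget` recent KV pairs are kept
    exactly; evicted pairs are folded into two fixed-size vectors so
    their information is summarized rather than lost entirely."""

    def __init__(self, budget, dim):
        self.budget = budget
        self.dim = dim
        self.keys = []    # exact keys for recent tokens
        self.values = []  # exact values for recent tokens
        # Constant-size summaries of everything that was evicted.
        self.k_state = [0.0] * dim
        self.v_state = [0.0] * dim

    def append(self, k, v):
        """Cache a new KV pair; evict (and absorb) the oldest if over budget."""
        self.keys.append(k)
        self.values.append(v)
        if len(self.keys) > self.budget:
            old_k = self.keys.pop(0)
            old_v = self.values.pop(0)
            # Fold the evicted pair into the fixed-size state
            # (elementwise accumulation as a stand-in for a learned update).
            for i in range(self.dim):
                self.k_state[i] += old_k[i]
                self.v_state[i] += old_v[i]

    def memory_entries(self):
        """Total stored vectors: exact entries plus the two state vectors."""
        return len(self.keys) + 2
```

With a budget of 4 and any number of decoding steps, memory stays constant at 4 exact entries plus 2 state vectors, which is the property that lets the cache size stay flat as the sequence grows.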
Low Difficulty Summary (original content by GrooveSquid.com)
This paper tackles a problem in big language models called the memory bottleneck. When these models process text, they store old information so they can recall it later, but this takes up too much space in computers. Other solutions try to get rid of some of that stored information to save space, which doesn't work well when most of the old information needs to be remembered. The authors suggest a new method called LESS that keeps a small, fixed-size summary of the information that would otherwise be thrown away. This helps big language models work better while using less computer memory.

Keywords

  • Artificial intelligence
  • Recall