
Summary of ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference, by Hanshi Sun et al.


ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference

by Hanshi Sun, Li-Wen Chang, Wenlei Bao, Size Zheng, Ningxin Zheng, Xin Liu, Harry Dong, Yuejie Chi, Beidi Chen

First submitted to arXiv on: 28 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed ShadowKV system supports high-throughput inference for long-context large language models (LLMs) by reducing the memory footprint of the KV cache and minimizing decoding latency. To achieve this, ShadowKV stores a low-rank representation of the key cache on the GPU and offloads the value cache to the CPU, employing an accurate KV selection strategy that reconstructs a minimal set of sparse KV pairs on-the-fly during decoding. The system is evaluated on benchmarks including RULER, LongBench, and Needle In A Haystack, across models such as Llama-3.1-8B, Llama-3-8B-1M, GLM-4-9B-1M, Yi-9B-200K, Phi-3-Mini-128K, and Qwen2-7B-128K. The results show that ShadowKV can support up to 6x larger batch sizes and boost throughput by up to 3.04x on an A100 GPU without sacrificing accuracy, even surpassing the performance achievable with infinite batch size under the assumption of infinite GPU memory.
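
For readers who want a concrete picture of the mechanism, here is a minimal PyTorch sketch of the idea, not the authors' implementation: the function names (compress_keys, offload_values, gather_sparse_kv), the rank, and the tensor shapes are all illustrative assumptions.

```python
import torch

RANK = 64  # hypothetical rank, chosen only for illustration

def compress_keys(keys: torch.Tensor, rank: int = RANK):
    """Replace the full key cache with truncated-SVD factors kept on the GPU."""
    # keys: (seq_len, hidden_dim) for one layer; the shape is an assumption
    U, S, Vh = torch.linalg.svd(keys, full_matrices=False)
    # Keeping only the top-`rank` components shrinks storage from
    # seq_len * hidden_dim to roughly (seq_len + hidden_dim) * rank.
    return U[:, :rank] * S[:rank], Vh[:rank]

def offload_values(values: torch.Tensor) -> torch.Tensor:
    """Move the value cache into pinned CPU memory, freeing GPU space."""
    return values.cpu().pin_memory()

def gather_sparse_kv(key_factors, values_cpu, selected, device="cuda"):
    """Reconstruct only the selected KV pairs for the current decoding step."""
    US, Vh = key_factors
    keys_sel = US[selected] @ Vh                      # rebuild chosen key rows on GPU
    vals_sel = values_cpu[selected.cpu()].to(device)  # fetch chosen values from CPU
    return keys_sel, vals_sel
```

At decode time, a selection strategy (not shown here) would score regions of the cached context against the current query and pass the winning indices to gather_sparse_kv, so only a small fraction of the KV cache is ever materialized on the GPU.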
Low Difficulty Summary (written by GrooveSquid.com, original content)
ShadowKV is a new system that makes long-context LLMs faster and more memory-efficient. It keeps a small, compressed copy of part of the cached information on the GPU and moves the rest to ordinary computer memory, fetching only the pieces it needs at each step. This helps big models like Llama-3.1-8B or GLM-4-9B-1M process many long pieces of text at the same time without slowing down. The system is tested on many different models and datasets, and it performs well, sometimes even better than if there were no limit to how much memory it could use.

Keywords

  • Artificial intelligence
  • Inference
  • Llama