
Summary of Cross-Self KV Cache Pruning for Efficient Vision-Language Inference, by Xiaohuan Pei et al.


Cross-Self KV Cache Pruning for Efficient Vision-Language Inference

by Xiaohuan Pei, Tao Huang, Chang Xu

First submitted to arXiv on: 5 Dec 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
In this paper, the authors propose a novel approach to reducing memory and computation costs in long-context auto-regressive generation for vision-language models. They argue that existing methods overlook the distributional discrepancies between modalities, leading to inaccurate token importance estimation and over-pruning of critical visual tokens. To address this, they decompose attention scores into intra-modality and inter-modality attention, enabling more precise KV cache pruning. Additionally, they introduce an n-softmax function to counteract distribution shifts caused by pruning. The proposed method, Cross-Self Pruning (CSP), achieves competitive performance compared to models with full KV caches while significantly outperforming previous pruning methods on the MileBench benchmark. The authors demonstrate CSP’s effectiveness, achieving up to a 41% performance improvement on challenging tasks like conversational embodied dialogue while reducing the KV cache budget by 13.6%. A rough code sketch of this pruning idea follows the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
KV cache pruning is a technique for reducing memory and computation costs in long-context auto-regressive generation for vision-language models. Researchers have been working on this problem, but existing methods didn’t take into account the differences between words and pictures. The new method, called Cross-Self Pruning (CSP), does take these differences into account and is better at figuring out what’s important to keep in memory. This helps the model work more efficiently without sacrificing performance.

Keywords

» Artificial intelligence  » Attention  » Pruning  » Softmax  » Token