DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models

by Keda Tao, Can Qin, Haoxuan You, Yang Sui, Huan Wang

First submitted to arXiv on: 22 Nov 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high-difficulty version is the paper's original abstract; you can read it on the paper's arXiv page.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper addresses the challenge of efficiently processing complex video content with video large language models (VLLMs). Unlike single-image inputs, VLLMs attend to visual tokens from different frames at different decoding iterations, so one-shot pruning strategies are prone to removing important tokens. The authors introduce DyCoke, a training-free token compression method that optimizes token representation to accelerate VLLMs. DyCoke combines a temporal compression module, which minimizes redundancy across frames, with a dynamic KV cache reduction mechanism that prunes spatially redundant tokens during decoding. Experimental results show that DyCoke outperforms state-of-the-art (SoTA) counterparts, achieving a 1.5× inference speedup and a 1.4× memory reduction while maintaining performance, all without any training. (A rough illustrative sketch of the two mechanisms appears after the summaries below.)
Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps make video-processing computers faster and more efficient by reducing the amount of information they need to process. Big language models are good at understanding videos, but it takes a lot of computer power to do so. The authors came up with a new way to compress the data that these models use, called DyCoke. This method makes sure important parts of the video aren’t lost and can be applied without needing to train the model again. Tests show that this method is better than what’s currently available.
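
To make the medium summary's two mechanisms more concrete, here is a minimal, hypothetical PyTorch sketch, not the authors' implementation. The function names (`temporal_token_compression`, `prune_kv_cache`) and parameters (`sim_threshold`, `keep_ratio`) are illustrative choices for this sketch, not names from the paper.

```python
import torch
import torch.nn.functional as F

def temporal_token_compression(frame_tokens, sim_threshold=0.9):
    """Hypothetical stand-in for DyCoke's temporal compression module:
    drop a frame's visual tokens that are near-duplicates of the token
    at the same spatial position in the previous frame.

    frame_tokens: (num_frames, tokens_per_frame, dim)
    Returns a list of kept-token tensors, one per frame.
    """
    kept = [frame_tokens[0]]  # keep the first frame in full
    for t in range(1, frame_tokens.shape[0]):
        prev, curr = frame_tokens[t - 1], frame_tokens[t]
        # Cosine similarity of each token to its previous-frame counterpart.
        sim = F.cosine_similarity(curr, prev, dim=-1)  # (tokens_per_frame,)
        kept.append(curr[sim < sim_threshold])  # keep only tokens that changed
    return kept

def prune_kv_cache(keys, values, attn_scores, keep_ratio=0.5):
    """Hypothetical stand-in for DyCoke's dynamic KV cache reduction:
    at a decoding step, retain only the cached entries that received
    the highest attention from the current query. Running this per step
    is what makes the pruning "dynamic" rather than one-shot.

    keys, values: (num_cached_tokens, dim); attn_scores: (num_cached_tokens,)
    """
    k = max(1, int(keys.shape[0] * keep_ratio))
    idx = attn_scores.topk(k).indices.sort().values  # keep original token order
    return keys[idx], values[idx]

# Toy usage: 8 frames of 16 visual tokens, each of dimension 64.
tokens = torch.randn(8, 16, 64)
compressed = temporal_token_compression(tokens)
print([f.shape[0] for f in compressed])  # tokens kept per frame
```

The key design point the sketch illustrates is that neither function requires gradient updates: both operate on frozen representations and attention statistics at inference time, which is why the method is training-free.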

Keywords

» Artificial intelligence  » Inference  » One shot  » Pruning  » Token