
Summary of LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference, by Qichen Fu et al.


LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference

by Qichen Fu, Minsik Cho, Thomas Merth, Sachin Mehta, Mohammad Rastegari, Mahyar Najibi

First submitted to arXiv on: 19 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper introduces LazyLLM, a method for accelerating inference in transformer-based large language models. LazyLLM selectively computes the key-value (KV) cache only for the tokens that matter for predicting the next token, in both the prefilling and decoding stages. Rather than pruning the prompt all at once, the model can dynamically select different subsets of context tokens at each generation step, including tokens that were pruned in earlier steps. Extensive experiments show that LazyLLM integrates seamlessly with existing language models such as Llama 2 7B, accelerating generation without fine-tuning and without compromising accuracy. For instance, on multi-document question answering, LazyLLM speeds up the prefilling stage by 2.34x while maintaining accuracy. A rough code sketch of the token-selection idea follows the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
LazyLLM is a new way to make big language models work faster! It helps computers generate text by only looking at the most important parts of what they’re being asked. This makes generation much quicker without sacrificing accuracy. The researchers tested it on different tasks and found that it works well, even speeding up complex question-answering tasks.
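
To make the core idea more concrete, here is a minimal, illustrative sketch of attention-based KV pruning, not the authors' implementation: context tokens are ranked by how much attention they receive, and only the top fraction keep their KV cache entries. The function name prune_kv_by_attention and the keep_ratio parameter are hypothetical; the actual LazyLLM method prunes progressively across layers during prefilling and can bring previously dropped tokens back into the computation later.

```python
# Illustrative sketch only: rank context tokens by attention and keep the KV
# entries of the most important ones. Names and the keep_ratio heuristic are
# assumptions, not the paper's implementation.
import torch


def prune_kv_by_attention(keys, values, attn_probs, keep_ratio=0.5):
    """Keep only the context tokens that the current query attends to most.

    keys, values: (batch, heads, seq_len, head_dim) KV cache for one layer.
    attn_probs:   (batch, heads, seq_len) attention of the last token over the context.
    Returns pruned keys/values and the indices of the kept tokens.
    """
    batch, heads, seq_len, head_dim = keys.shape
    # Average attention across heads as a simple per-token importance score.
    importance = attn_probs.mean(dim=1)                                  # (batch, seq_len)
    k = max(1, int(seq_len * keep_ratio))
    # Pick the top-k tokens and keep them in their original order.
    keep_idx = importance.topk(k, dim=-1).indices.sort(dim=-1).values    # (batch, k)
    # Gather the surviving tokens' keys and values along the sequence dimension.
    idx = keep_idx[:, None, :, None].expand(batch, heads, k, head_dim)
    return keys.gather(2, idx), values.gather(2, idx), keep_idx


if __name__ == "__main__":
    torch.manual_seed(0)
    B, H, S, D = 1, 8, 16, 64
    keys, values = torch.randn(B, H, S, D), torch.randn(B, H, S, D)
    attn = torch.softmax(torch.randn(B, H, S), dim=-1)
    k_pruned, v_pruned, kept = prune_kv_by_attention(keys, values, attn, keep_ratio=0.25)
    print(k_pruned.shape, kept)  # torch.Size([1, 8, 4, 64]) and the kept token indices
```

In this toy version, subsequent attention only needs to run over the kept tokens, which is where the prefilling and decoding speedups come from; the paper's method additionally decides importance layer by layer and keeps an auxiliary cache so that pruned tokens can be reconsidered at later steps.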

Keywords

» Artificial intelligence  » Inference  » Llama  » Pruning  » Question answering  » Token  » Transformer