Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting

by Fangcheng Liu, Yehui Tang, Zhenhua Liu, Yunsheng Ni, Kai Han, Yunhe Wang

First submitted to arXiv on: 29 Apr 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Kangaroo is a self-speculative decoding framework that reuses a fixed shallow sub-network of a large language model as its own draft model, accelerating inference while leaving the sampling distribution unchanged. A lightweight adapter module bridges the gap between the shallow sub-network’s representations and those of the full model; this exit from the early layers is the first of the two early exits in the title. The second is an early exit during drafting: to avoid spending drafting steps on difficult tokens, Kangaroo halts the self-draft model as soon as the confidence of the current draft token falls below a threshold. Extensive experiments on Spec-Bench demonstrate Kangaroo’s effectiveness, with speedups of up to 1.68x while using 88.7% fewer additional parameters than Medusa-1.
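
To make the mechanism concrete, here is a minimal PyTorch sketch of the drafting loop; all names (`draft_tokens`, `shallow`, `adapter`, `lm_head`, `conf_threshold`) are hypothetical stand-ins, not the authors’ implementation. The first early exit is running only the model’s shallow layers (plus the adapter) to propose tokens; the second is stopping the draft as soon as a proposed token’s confidence drops below the threshold.

```python
import torch
import torch.nn as nn

def draft_tokens(tokens, shallow, adapter, lm_head,
                 conf_threshold=0.6, max_draft_len=8):
    """Kangaroo-style drafting sketch with double early exiting.

    tokens:  (1, seq_len) tensor of token ids
    shallow: embedding + the first few transformer blocks of the big model
    adapter: lightweight module bridging shallow and full representations
    lm_head: the big model's (shared) output head
    """
    draft = []
    for _ in range(max_draft_len):
        h = adapter(shallow(tokens))              # early exit 1: skip the deep layers
        probs = torch.softmax(lm_head(h[:, -1]), dim=-1)
        conf, nxt = probs.max(dim=-1)
        if conf.item() < conf_threshold:          # early exit 2: token too uncertain,
            break                                 # stop drafting here
        draft.append(nxt.item())
        tokens = torch.cat([tokens, nxt.unsqueeze(0)], dim=-1)
    return draft

# Toy demo with random stand-in modules (threshold 0.0 so drafting never
# stops early; with random weights no token would clear a real threshold):
vocab, dim = 100, 16
shallow = nn.Embedding(vocab, dim)   # stands in for embedding + early blocks
adapter = nn.Linear(dim, dim)
lm_head = nn.Linear(dim, vocab)
print(draft_tokens(torch.tensor([[1, 2, 3]]), shallow, adapter, lm_head,
                   conf_threshold=0.0))
```

The drafted tokens are then verified in a single forward pass of the full model, and any rejected tokens are resampled from the full model’s distribution, which is what keeps the procedure lossless.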

Low Difficulty Summary (original content by GrooveSquid.com)
Kangaroo is a new way to make big language models generate text faster without changing what they output. Instead of training a separate draft model, it reuses the first few layers of the big model as a built-in “draft” and adds a small adapter so the draft’s guesses line up with the full model’s. The draft proposes several tokens ahead and the big model checks them all at once, which cuts response time for applications that need fast answers. On the Spec-Bench benchmark, the researchers found this approach was faster than a similar method, Medusa-1, while adding far fewer extra parameters.

Keywords

  • Artificial intelligence
  • Inference
  • Token