
Summary of EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty, by Yuhui Li et al.


EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty

by Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang

First submitted to arXiv on: 26 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)

The paper presents an efficient speculative sampling framework called EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency) to address the time-consuming inference issue in Large Language Models (LLMs). By reconsidering autoregressive decoding at the feature level, the authors derive two key observations: that feature-level autoregression is more straightforward than token-level autoregression, and that uncertainty in feature-level autoregression constrains its performance. EAGLE incorporates a token sequence advanced by one time step to resolve this uncertainty, enabling precise second-to-top-layer feature prediction with minimal overhead. The authors evaluate EAGLE on various models and tasks, including dialogue, code generation, mathematical reasoning, and instruction following.

Low Difficulty Summary (written by GrooveSquid.com; original content)

The paper tries to make language models work faster! Right now, it takes a long time for these models to figure out what we want them to say next. The researchers came up with an idea called EAGLE that makes this process much quicker without losing any quality. They discovered that if they look at the model’s features instead of just individual words, it gets easier and more accurate. They also found that there’s some uncertainty involved in this process, but by looking ahead one step, they can remove this uncertainty and make the whole thing faster and better.
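To make the draft-then-verify idea behind speculative sampling concrete, here is a minimal toy sketch of the generic loop that EAGLE accelerates. Both "models" below (`target_next`, `draft_next`) are hypothetical deterministic stand-ins invented for illustration, not the paper's actual networks, and the example covers only greedy decoding, not the full rejection-sampling scheme:

```python
# Toy sketch of speculative decoding's draft-then-verify loop (greedy case).
# Assumption: both "models" are cheap deterministic stand-ins for demo purposes.

def target_next(tokens):
    # Stand-in for one expensive target-LLM forward pass:
    # next token is simply (sum of context) mod 97.
    return sum(tokens) % 97

def draft_next(tokens):
    # Stand-in for a cheap draft model; deliberately imperfect so that
    # some speculated tokens get rejected during verification.
    return (sum(tokens) + (1 if len(tokens) % 5 == 0 else 0)) % 97

def speculative_decode(prompt, n_new, k=4):
    tokens = list(prompt)
    goal = len(prompt) + n_new
    while len(tokens) < goal:
        # 1) Draft: generate k candidate tokens cheaply, autoregressively.
        draft, ctx = [], list(tokens)
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) Verify: check drafts against the target model. In a real system
        #    this is a single batched forward pass over all k positions.
        accepted = 0
        for i, t in enumerate(draft):
            if target_next(tokens + draft[:i]) == t:
                accepted += 1
            else:
                break
        tokens += draft[:accepted]
        # 3) On rejection (or after accepting all k), emit one token from the
        #    target so the loop always progresses and the output matches
        #    target-only greedy decoding exactly.
        if len(tokens) < goal:
            tokens.append(target_next(tokens))
    return tokens[:goal]
```

The key property, which EAGLE preserves, is losslessness: the output is identical to running the target model alone, while most target-model calls are amortized into batched verification. EAGLE's contribution is making the draft step far more accurate by predicting second-to-top-layer features (conditioned on the token sequence shifted one step ahead) instead of tokens directly.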

Keywords

* Artificial intelligence  * Autoregressive  * Inference  * Language model  * Token