
Summary of Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding, by Jie Ou et al.


Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding

by Jie Ou, Yueming Chen, Wenhong Tian

First submitted to arXiv on: 10 Apr 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces Adaptive N-gram Parallel Decoding (ANPD), a novel approach that accelerates inference in Large Language Models (LLMs) while preserving their original output. ANPD employs a two-stage process: rapid drafting by a lightweight N-gram module, followed by verification by the LLM itself. The method requires no retraining and no extra GPU memory, making it efficient and plug-and-play. The paper reports speedups of up to 3.67x on models such as LLaMA and its fine-tuned variants.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Large Language Models are super smart, but they take a long time to work because of how they generate text one piece at a time. This new approach, called ANPD, makes them faster without changing what they say. It's like having a super-fast writing partner whose drafts are checked by the original expert. The method is simple, doesn't require extra training or special hardware, and makes LLMs up to 3.67 times quicker.

Keywords

* Artificial intelligence  * Inference  * LLaMA  * N-gram