FlattenQuant: Breaking Through the Inference Compute-bound for Large Language Models with Per-tensor Quantization
by Yi Zhang, Fei Yang, Shuang Peng, Fangyu Wang, Aimin Pan
First submitted to arXiv on: 28 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Large language models (LLMs) have achieved state-of-the-art performance across many tasks, but their latency and GPU memory consumption hinder deployment. Efficient quantization methods exist, yet compute-bound issues persist with large batch sizes or long sequences. FlattenQuant reduces the maximum value of a tensor by flattening its large channels, enabling low-bit per-tensor quantization with minimal accuracy loss. The method uses 4 bits for the linear-layer calculations in LLMs and 8 bits for the remaining layers, achieving up to a 2x speedup and a 2.3x memory reduction while preserving accuracy (see the sketch after this table). |
Low | GrooveSquid.com (original content) | Large language models have been very good at doing many tasks, but they are not perfect: it takes them a long time to make decisions and they need a lot of computer power. Scientists have tried to make the models faster and use less computer power, but some problems still exist. A new idea called FlattenQuant helps solve one of these problems by letting the model work with smaller numbers without losing too much accuracy. |
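The medium-difficulty summary describes the core trick: lower a tensor's maximum value by flattening its large channels, so that a single per-tensor scale can represent the whole tensor at 4 bits. Below is a minimal Python sketch of that general idea, not the paper's implementation; the function names, the equal-split strategy, and the threshold value are assumptions made for illustration.

```python
import numpy as np

def flatten_for_per_tensor_quant(x, w, threshold):
    """Split each activation channel whose peak magnitude exceeds
    `threshold` into k equal sub-channels, and repeat the matching
    weight row k times, so that x_flat @ w_flat == x @ w.
    (Illustrative sketch, not the paper's actual API.)"""
    x_cols, w_rows = [], []
    for j in range(x.shape[1]):
        peak = np.abs(x[:, j]).max()
        k = max(1, int(np.ceil(peak / threshold)))  # pieces needed to get under threshold
        for _ in range(k):
            x_cols.append(x[:, j] / k)  # each sub-channel carries 1/k of the value
            w_rows.append(w[j])         # matching weight row is duplicated unchanged
    return np.stack(x_cols, axis=1), np.stack(w_rows, axis=0)

def quantize_per_tensor(t, n_bits=4):
    """Symmetric per-tensor quantization: a single scale for the whole tensor."""
    qmax = 2 ** (n_bits - 1) - 1        # 7 for INT4
    scale = np.abs(t).max() / qmax
    q = np.clip(np.round(t / scale), -qmax - 1, qmax)
    return q, scale

# Toy check: flattening caps the per-tensor max and preserves the matmul.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
x[:, 3] *= 20                           # make channel 3 an outlier
w = rng.normal(size=(16, 32))

x_flat, w_flat = flatten_for_per_tensor_quant(x, w, threshold=4.0)
assert np.allclose(x_flat @ w_flat, x @ w)
print(np.abs(x).max(), np.abs(x_flat).max())  # per-tensor max drops below the threshold
```

The trade-off this sketch makes explicit: per-tensor quantization needs only one scale, which is what allows the matrix multiply to run as a low-bit (e.g., INT4) kernel, and the flattening step pays for it with a somewhat wider matmul.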
Keywords
- Artificial intelligence
- Quantization