Summary of Accurate Block Quantization in LLMs with Outliers, by Nikita Trukhanov and Ilya Soloveychik
Accurate Block Quantization in LLMs with Outliers
by Nikita Trukhanov, Ilya Soloveychik
First submitted to arXiv on: 29 Mar 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Hardware Architecture (cs.AR); Numerical Analysis (math.NA)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The recent surge in demand for large-scale language models has exposed a critical shortage of hardware able to serve their compute and memory requirements efficiently. To address this, quantization techniques have been proposed that allow accurate low-precision processing of both weights and activations. Block Floating Point (BFP) formats, characterized by scale factors shared across a block of values, have shown promise in providing memory-efficient hardware support for tensor operations while maintaining high quantization accuracy. However, outliers in the weights and activations remain a significant challenge: a single large value inflates the shared scale and degrades the precision of every other value in its block. This paper proposes rearranging the outlier channels at compile time, enabling low-precision BFP formats without compromising model accuracy. The proposed methodology achieves 2x memory savings with minimal degradation in model performance (see the sketch after this table). |
Low | GrooveSquid.com (original content) | Large language models are supercomputer-hungry and can't fit on most computers. This is a problem because lots of people want to use these powerful models. To make them work better, scientists have been trying to shrink the models without losing their power. They have made progress with something called Block Floating Point formats, which help computers store and process a model's information more efficiently. The only challenge is that a few unusually large numbers inside the model can throw this compression off and make it less accurate. This paper presents a new way to fix the problem by rearranging those parts ahead of time, so they don't hurt how well the model works. |
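To make the BFP idea concrete, here is a minimal NumPy sketch of per-block shared-exponent quantization, together with a magnitude-based channel reordering in the spirit of the compile-time rearrangement the paper describes. The function names (`bfp_quantize`, `group_outlier_channels`), the block size, and the mantissa width are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: function names and parameters are assumptions,
# not the paper's actual implementation.
import numpy as np

def bfp_quantize(x: np.ndarray, block_size: int = 16, mantissa_bits: int = 4):
    """Quantize a 1-D float tensor in blocks that share one exponent.

    BFP stores a single exponent per block plus low-precision signed
    mantissas, which is what makes it memory-efficient in hardware.
    """
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    # Shared exponent: chosen from the block's largest magnitude, so one
    # outlier coarsens the resolution of every other value in its block.
    max_abs = np.abs(blocks).max(axis=1, keepdims=True)
    exp = np.ceil(np.log2(np.maximum(max_abs, 1e-30)))
    step = 2.0 ** exp / 2 ** (mantissa_bits - 1)  # quantization step size

    qmax = 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(blocks / step), -qmax - 1, qmax).astype(np.int8)
    return mantissas, step

def bfp_dequantize(mantissas: np.ndarray, step: np.ndarray, n: int) -> np.ndarray:
    """Reconstruct the first n values from mantissas and per-block steps."""
    return (mantissas * step).reshape(-1)[:n]

def group_outlier_channels(w: np.ndarray):
    """Reorder channels by peak magnitude so outliers share blocks.

    After sorting, large-magnitude channels no longer inflate the shared
    exponent of blocks that are otherwise small, tightening the error
    there; the permutation is returned so it can be undone after compute.
    """
    order = np.argsort(np.abs(w).max(axis=1))
    return w[order], order
```

A quick check of the outlier effect, with hypothetical values:

```python
rng = np.random.default_rng(0)
w = rng.normal(size=128)
w[::32] *= 50.0  # inject a few outliers
q, step = bfp_quantize(w)
err = np.abs(bfp_dequantize(q, step, len(w)) - w).mean()
```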
Keywords
» Artificial intelligence » Precision » Quantization