
ARB-LLM: Alternating Refined Binarizations for Large Language Models

by Zhiteng Li, Xianglong Yan, Tianao Zhang, Haotong Qin, Dong Xie, Jiang Tian, Zhongchao Shi, Linghe Kong, Yulun Zhang, Xiaokang Yang

First submitted to arXiv on: 4 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper’s original abstract, written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
Large Language Models (LLMs) have made significant advances in natural language processing, but their high memory and computational demands hinder practical deployment. To address this, the paper proposes ARB-LLM, a novel 1-bit post-training quantization technique tailored for LLMs. The approach combines an alternating refined binarization algorithm, a column-group bitmap (CGB) weight-partition strategy, and the use of calibration data to significantly reduce the distribution gap between binarized and full-precision weights. The authors show that their method outperforms state-of-the-art binarization methods for LLMs and even surpasses FP16 models of the same size.

Low Difficulty Summary (original content by GrooveSquid.com)
Large Language Models have made big progress in understanding language, but they need a lot of memory and computing power to work well. To make them more practical, the researchers developed a way to shrink model weights down to just 1 bit, which makes the models run faster and use much less memory. Their method, called ARB-LLM, works by repeatedly and alternately updating the binarization parameters so that the 1-bit weights stay close to the original ones (a rough sketch of this idea follows below). Extra steps, such as splitting the weights into column groups and using calibration data, help the method work well on big language models. The result is that ARB-LLM performs better than other binarization methods and even beats 16-bit (FP16) models of the same size.

Keywords

  • Artificial intelligence
  • Natural language processing
  • Precision
  • Quantization