
Summary of "LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices" by Jung Hyun Lee et al.


LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices

by Jung Hyun Lee, Jeonghoon Kim, June Yong Yang, Se Jung Kwon, Eunho Yang, Kang Min Yoo, Dongsoo Lee

First submitted to arXiv on: 16 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A novel post-training weight quantization method, Low-Rank Quantization (LRQ), is proposed to compress and accelerate large language models (LLMs). LRQ learns low-rank weight-scaling matrices that reconstruct the outputs of intermediate Transformer blocks, replacing the conventional full weight-scaling matrices. This simple yet effective approach reduces the number of learnable parameters while still allowing individual weights to be scaled, boosting the generalization capability of quantized LLMs. The proposed method outperforms prior work on LLM post-training quantization under various 8-bit and 4-bit quantization schemes. (An illustrative code sketch of the low-rank scaling idea follows the summaries below.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
Large language models are getting bigger! To make them run faster on our devices, researchers have been working on ways to shrink these models without losing their abilities. They found that, using special math tricks, they can make a model smaller while keeping it good at understanding language. This new method is called Low-Rank Quantization, and it's super helpful for making big language models work faster.
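The core mechanism described in the medium-difficulty summary can be illustrated with a short PyTorch sketch. This is a minimal sketch only, not the authors' implementation: the symmetric fake-quantizer with a straight-through estimator, the single toy linear layer, and the parameterization S = 1 + A·B of the low-rank weight-scaling matrix are assumptions made here for illustration; the exact parameterization and training recipe in the LRQ paper may differ.

```python
# Illustrative sketch of learning a low-rank weight-scaling matrix for
# post-training quantization (not the authors' code). One toy linear layer.
import torch


def fake_quantize(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Symmetric uniform fake-quantization with a straight-through estimator."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.detach().abs().max() / qmax
    w_scaled = w / scale
    # Straight-through estimator: round() in the forward pass, identity gradient.
    w_int = (torch.round(w_scaled) - w_scaled).detach() + w_scaled
    return w_int.clamp(-qmax - 1, qmax) * scale


torch.manual_seed(0)
d_out, d_in, rank = 128, 256, 8        # toy sizes; rank << min(d_out, d_in)

w = torch.randn(d_out, d_in)           # pretrained weight of one linear layer
x = torch.randn(64, d_in)              # calibration activations
target = x @ w.t()                     # full-precision output to reconstruct

# Assumed low-rank parameterization of the per-weight scaling matrix:
# S = 1 + A @ B, so only (d_out + d_in) * rank parameters are learned
# instead of a full d_out * d_in scaling matrix.
A = torch.zeros(d_out, rank, requires_grad=True)
B = torch.randn(rank, d_in, requires_grad=True)

opt = torch.optim.Adam([A, B], lr=1e-2)
for step in range(200):
    S = 1.0 + A @ B                    # per-weight scaling matrix
    w_q = fake_quantize(w * S)         # quantize the scaled weights
    loss = ((x @ w_q.t()) - target).pow(2).mean()  # output reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final reconstruction MSE: {loss.item():.6f}")
```

Initializing A to zero makes the initial scaling matrix all ones, so training starts from plain round-to-nearest quantization of the original weights and only learns per-weight rescaling where it reduces the block-output reconstruction error; because the rank is much smaller than the weight dimensions, only (d_out + d_in) * rank parameters are learned rather than d_out * d_in.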

Keywords

» Artificial intelligence  » Boosting  » Generalization  » Quantization  » Transformer