
Summary of Taming Sensitive Weights: Noise Perturbation Fine-tuning for Robust LLM Quantization, by Dongwei Wang et al.


Taming Sensitive Weights: Noise Perturbation Fine-tuning for Robust LLM Quantization

by Dongwei Wang, Huanrui Yang

First submitted to arxiv on: 8 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original GrooveSquid.com content)
The paper proposes an alternative approach to the problem of outlier weights in large language models (LLMs) when applying quantization for efficient deployment. Existing methods keep these sensitive weights in floating point or at higher precision, which complicates hardware deployment. The authors introduce Noise Perturbation Fine-tuning (NPFT), which identifies the outlier weights and adds random perturbations to them during fine-tuning to reduce the loss Hessian trace, making the model robust to quantization error without requiring any special treatment for the outliers. Applied to OPT and LLaMA models with both uniform and non-uniform quantizers, NPFT achieves stable performance improvements while also improving inference efficiency. Notably, even simple round-to-nearest (RTN) quantization can match GPTQ's performance when combined with NPFT on the 4-bit LLaMA2-7B benchmark.
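To make the idea concrete, below is a minimal PyTorch-style sketch of one noise-perturbation fine-tuning step, assuming the sensitive (outlier) weights have already been located, e.g. via a Hessian-based sensitivity score, and encoded as binary masks. The function name npft_step, the outlier_masks dictionary, and the noise_scale value are illustrative assumptions, not the authors' code.

import torch

def npft_step(model, batch, loss_fn, optimizer, outlier_masks, noise_scale=1e-3):
    """One NPFT-style fine-tuning step (a sketch, not the paper's exact recipe):
    temporarily add zero-mean random noise to the outlier weights, compute the
    loss under that perturbation, then apply the resulting gradients to the
    unperturbed weights so the loss becomes flatter around the sensitive weights."""
    originals = {}
    with torch.no_grad():
        for name, param in model.named_parameters():
            mask = outlier_masks.get(name)          # 1 where the weight is an outlier, 0 elsewhere
            if mask is None:
                continue
            originals[name] = param.detach().clone()
            noise = (torch.rand_like(param) - 0.5) * 2.0 * noise_scale
            param.add_(noise * mask)                # perturb only the outlier coordinates

    outputs = model(**batch)                        # forward pass on the perturbed model
    loss = loss_fn(outputs.logits, batch["labels"])  # assumes a HuggingFace-style output object
    optimizer.zero_grad()
    loss.backward()

    with torch.no_grad():                           # drop the injected noise before stepping
        for name, param in model.named_parameters():
            if name in originals:
                param.copy_(originals[name])

    optimizer.step()
    return loss.item()

Restoring the unperturbed weights before optimizer.step() means the gradients reflect the loss under perturbation while the update is applied to the original parameters, which is one plausible way to flatten the loss landscape around the sensitive weights.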
Low Difficulty Summary (original GrooveSquid.com content)
This paper helps large language models run better on devices with limited resources. It finds a way to make quantization (a process that stores a model's weights using fewer bits, so the model takes less memory) work better by taking care of “outlier” weights that are especially sensitive to errors. The authors create a new method called Noise Perturbation Fine-tuning, which makes the model perform well after quantization without needing special treatment for these outliers. They test it on the OPT and LLaMA model families and show that it works well with different types of quantizers.
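As a rough illustration of what quantization does in the simplest round-to-nearest (RTN) case mentioned above, the sketch below maps a weight tensor onto 2^n evenly spaced levels. The single per-tensor scale is a simplification, and the helper name rtn_quantize is hypothetical.

import torch

def rtn_quantize(weight: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Round-to-nearest (RTN) uniform quantization of a weight tensor.
    Each value is snapped to one of 2**n_bits evenly spaced levels."""
    qmax = 2 ** (n_bits - 1) - 1                   # e.g. 7 for 4-bit signed values
    scale = weight.abs().max() / qmax              # one scale per tensor (a simplification)
    q = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax)
    return q * scale                               # de-quantized ("fake quantized") weights

w = torch.randn(4, 4)
print(rtn_quantize(w, n_bits=4))

Calibration-based quantizers such as GPTQ normally outperform plain RTN; the paper's finding is that after NPFT fine-tuning, even this simplest quantizer becomes competitive.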

Keywords

» Artificial intelligence  » Fine tuning  » Inference  » Llama  » Quantization