
Summary of LLM-Barber: Block-Aware Rebuilder for Sparsity Mask in One-Shot for Large Language Models, by Yupeng Su et al.


LLM-Barber: Block-Aware Rebuilder for Sparsity Mask in One-Shot for Large Language Models

by Yupeng Su, Ziyi Guan, Xiaoqun Liu, Tianlai Jin, Dongkuan Wu, Graziano Chesi, Ngai Wong, Hao Yu

First submitted to arXiv on: 20 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper presents LLM-Barber, a novel one-shot pruning framework for large language models (LLMs) that efficiently prunes models with 7B to 13B parameters on a single A100 GPU in just 30 minutes. The framework rebuilds the sparsity mask of pruned models without retraining or weight reconstruction, ensuring global performance optimization across Self-Attention and MLP blocks. LLM-Barber introduces an innovative pruning metric that identifies weight importance using weights multiplied by gradients, achieving state-of-the-art results in both perplexity and zero-shot performance on various language benchmarks.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps us create better large language models. Right now, it takes a lot of work to make these models smaller without losing their ability to understand language. The authors developed a new way to do this that’s really fast and good at keeping the model’s abilities. They tested their method on some big models and showed it works well.

Keywords

» Artificial intelligence  » Mask  » One shot  » Optimization  » Perplexity  » Pruning  » Self attention  » Zero shot