Summary of Bypass Back-propagation: Optimization-based Structural Pruning For Large Language Models Via Policy Gradient, by Yuan Gao et al.
Bypass Back-propagation: Optimization-based Structural Pruning for Large Language Models via Policy Gradient
by Yuan Gao, Zujing Liu, Weizhong Zhang, Bo Du, Gui-Song Xia
First submitted to arXiv on: 15 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | A novel optimization-based structural pruning method is proposed to efficiently prune Large Language Models (LLMs) without relying on heuristically hand-crafted metrics or expensive weight finetuning. The approach learns probabilistic pruning masks by optimizing the loss of the pruned model, eliminating back-propagation through the LLM and requiring only forward passes. This allows for efficient optimization via a policy-gradient estimator, enabling global and heterogeneous pruning at structural granularities such as channels, heads, and layers. Experimental results on various datasets demonstrate that the method outperforms state-of-the-art approaches in perplexity and zero-shot tasks while operating efficiently on a single A100 GPU with 35GB of memory. A minimal code sketch of the idea appears after this table. |
Low | GrooveSquid.com (original content) | A new way to make big language models smaller and faster is developed. Instead of using rules or trial-and-error, this approach uses math to find the best parts of the model to remove. This makes it possible to prune (remove) more parts of the model than before, which can help with tasks like understanding text and generating responses. The method was tested on several datasets and showed better results than other approaches, while also being efficient in terms of computing resources. |
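
To make the medium-difficulty description more concrete, below is a minimal sketch of learning Bernoulli pruning masks with a REINFORCE-style policy-gradient estimator that needs only forward passes. The toy two-layer model, the `pruned_loss` function, and hyperparameters such as `sparsity_weight` are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch: learning per-channel Bernoulli pruning masks with a
# REINFORCE-style policy-gradient estimator. Only forward passes of the
# (toy) model are needed; no back-propagation through its weights.
# All names and hyperparameters here are assumptions for demonstration.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy two-layer model whose hidden channels we want to prune.
hidden_dim = 64
W1 = torch.randn(hidden_dim, 32)
W2 = torch.randn(10, hidden_dim)
x = torch.randn(128, 32)
y = torch.randint(0, 10, (128,))

# Logits of the per-channel keep-probabilities (the learned "pruning policy").
mask_logits = torch.zeros(hidden_dim)
lr, sparsity_weight = 0.5, 0.1
baseline = 0.0  # running-average baseline to reduce gradient variance

def pruned_loss(mask):
    """Forward pass only: task loss of the masked model plus a sparsity penalty."""
    h = torch.relu(x @ W1.t()) * mask       # zero out pruned channels
    logits = h @ W2.t()
    return F.cross_entropy(logits, y) + sparsity_weight * mask.mean()

for step in range(200):
    probs = torch.sigmoid(mask_logits)
    m = torch.bernoulli(probs)              # sample a binary mask from the policy
    loss = pruned_loss(m).item()
    baseline = 0.9 * baseline + 0.1 * loss
    # For Bernoulli(sigmoid(logits)), the gradient of log p(m) w.r.t. logits is (m - probs).
    grad = (loss - baseline) * (m - probs)  # REINFORCE estimate of d E[loss] / d logits
    mask_logits -= lr * grad                # update the mask distribution only
```

In the paper itself the masks cover structures such as channels, heads, and layers of an LLM and the loss is evaluated on real data; this sketch only illustrates the forward-pass-only, policy-gradient update of the mask distribution.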
Keywords
» Artificial intelligence » Optimization » Perplexity » Pruning » Zero shot