
Summary of Scalable Iterative Pruning of Large Language and Vision Models Using Block Coordinate Descent, by Gili Rosenberg et al.


Scalable iterative pruning of large language and vision models using block coordinate descent

by Gili Rosenberg, J. Kyle Brubaker, Martin J. A. Schuetz, Elton Yechao Zhu, Serdar Kadıoğlu, Sima E. Borujeni, Helmut G. Katzgraber

First submitted to arXiv on: 26 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Optimization and Control (math.OC); Quantum Physics (quant-ph)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; it is not reproduced here, but it can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, researchers introduce an iterative neural network pruning technique called the “iterative Combinatorial Brain Surgeon” (iCBS). Building upon the Combinatorial Brain Surgeon, iCBS solves an optimization problem over a subset of the network weights in a block-wise manner using block coordinate descent. This approach scales to very large models, including large language models (LLMs), for which a one-shot combinatorial optimization approach is not feasible. When applied to large models such as Mistral and DeiT, iCBS achieves higher performance metrics at the same density levels than existing pruning methods such as Wanda. The method also exposes a quality-time (or cost) tradeoff that one-shot pruning techniques do not offer, letting practitioners spend more compute for better pruned models or less compute when resources are limited.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Large language models (LLMs) can be highly effective, but they often require significant computational resources to train and deploy. To address this challenge, researchers have developed pruning techniques that remove redundant or unnecessary weights from neural networks. In this study, the authors propose an iterative approach called the “iterative Combinatorial Brain Surgeon” (iCBS). iCBS works by dividing the network weights into smaller blocks and optimizing each block separately using a technique called block coordinate descent. This keeps each optimization step small, which makes it possible to prune very large models such as LLMs. The authors test their method on several models, including Mistral and DeiT, and show that it can achieve better results than other pruning methods at the same level of compression. A code sketch of the block coordinate descent idea is shown below.
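
To make the block coordinate descent idea concrete, here is a minimal, hypothetical sketch in Python/NumPy. It is not the authors' implementation: the function name `icbs_style_prune`, the magnitude-based score function, and the greedy top-k block update are illustrative stand-ins for the paper's loss-based importance scores and combinatorial block sub-problem solver.

```python
import numpy as np


def icbs_style_prune(weights, score_fn, density, block_size=128, n_passes=3, seed=0):
    """Toy sketch of iterative, block-wise pruning via block coordinate descent.

    weights  : 1D array of flattened model weights
    score_fn : callable(weights, mask, idx) -> importance score for each weight
               in `idx` given the current mask (stand-in for a loss-based score)
    density  : fraction of weights to keep overall
    """
    rng = np.random.default_rng(seed)
    n = weights.size
    k_total = int(density * n)

    # One-shot initialization: keep the globally highest-scoring weights.
    init_scores = score_fn(weights, np.ones(n, dtype=bool), np.arange(n))
    mask = np.zeros(n, dtype=bool)
    mask[np.argsort(-init_scores)[:k_total]] = True

    for _ in range(n_passes):
        order = rng.permutation(n)  # visit weight blocks in random order
        for start in range(0, n, block_size):
            idx = order[start:start + block_size]
            k_block = int(mask[idx].sum())  # keep this block's budget fixed
            # Block sub-problem: with all other weights frozen, re-decide which
            # weights inside this block are kept. A greedy top-k by score stands
            # in for the combinatorial solver used in the real method.
            scores = score_fn(weights, mask, idx)
            keep = idx[np.argsort(-scores)[:k_block]]
            mask[idx] = False
            mask[keep] = True

    return weights * mask, mask


# Example usage with a simple magnitude-based score (hypothetical stand-in):
w = np.random.randn(10_000)
pruned, mask = icbs_style_prune(w, lambda w_, m, i: np.abs(w_[i]), density=0.5)
print(mask.mean())  # roughly 0.5 of the weights are kept
```

Because each block keeps a fixed budget of retained weights, the overall density is preserved while only a small sub-problem is solved at a time; running more passes trades additional compute for a better mask, which mirrors the quality-time tradeoff described above.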

Keywords

» Artificial intelligence  » Neural network  » One shot  » Optimization  » Pruning