
Summary of DRIVE: Dual Gradient-Based Rapid Iterative Pruning, by Dhananjay Saikumar et al.


DRIVE: Dual Gradient-Based Rapid Iterative Pruning

by Dhananjay Saikumar, Blesson Varghese

First submitted to arXiv on: 1 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Modern deep neural networks (DNNs) consist of millions of parameters, requiring high-performance computing during training and inference. Pruning is a solution that reduces space and time complexities by streamlining DNNs. Traditional pruning methods, such as iterative magnitude-based pruning (IMP), achieve up to 90% parameter reduction while retaining accuracy comparable to the original model. However, this approach relies on multiple train-prune-reset cycles, leading to impractical runtimes. To bridge this gap, we present Dual Gradient-Based Rapid Iterative Pruning (DRIVE), which leverages dense training for the initial epochs and employs a dual gradient-based metric for parameter ranking. DRIVE has been experimentally demonstrated on VGG and ResNet architectures using CIFAR-10/100, Tiny ImageNet, and ImageNet, achieving superior accuracy over other training-agnostic early pruning methods while being 43× to 869× faster than IMP for pruning.
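The abstract describes pruning as ranking parameters by a saliency score and zeroing out the lowest-ranked ones. As a rough illustration, the sketch below prunes a weight array to a target sparsity using a gradient-aware score. Note that the paper's actual dual gradient metric is not given in this summary; the product of weight and gradient magnitudes used here (`|w| * |g|`) is a common gradient-based proxy and is only an assumption for illustration.

```python
import numpy as np

def prune_by_score(weights, grads, sparsity):
    """Zero out the lowest-scoring fraction of parameters.

    Hypothetical saliency score: |w| * |g|. DRIVE's real dual
    gradient-based metric is not specified in this summary, so this
    is a stand-in to show the rank-and-mask mechanics of pruning.
    """
    scores = np.abs(weights) * np.abs(grads)
    k = int(sparsity * weights.size)          # number of params to remove
    threshold = np.partition(scores.ravel(), k)[k]
    mask = scores >= threshold                # keep only high-saliency params
    return weights * mask, mask

# Toy example: 4 weights with their (hypothetical) gradients.
w = np.array([0.1, -2.0, 0.5, 3.0])
g = np.array([1.0, 0.01, 2.0, 0.1])
pruned, mask = prune_by_score(w, g, sparsity=0.5)
# Half the parameters are zeroed; the rest are kept unchanged.
```

In iterative schemes like IMP this rank-and-mask step is repeated over many train-prune-reset cycles, which is the runtime cost DRIVE is designed to avoid.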
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about finding a way to make deep neural networks smaller and faster without sacrificing their ability to learn. Neural networks have millions of parameters, which makes them take up lots of space and time on computers. Pruning is one solution that can help with this problem. The researchers are trying to find the best way to prune these networks so they can be used more efficiently. They came up with a new method called DRIVE that is faster than other methods while still keeping the network’s ability to learn. This could lead to better and faster artificial intelligence in the future.

Keywords

* Artificial intelligence  * Inference  * Pruning  * ResNet