AutoDFP: Automatic Data-Free Pruning via Channel Similarity Reconstruction

by Siqi Li, Jun Chen, Jingyang Xiang, Chengrui Zhu, Yong Liu

First submitted to arXiv on: 13 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A novel structured pruning approach for neural networks is proposed, aiming to bridge the gap between massive network scale and limited hardware resources. Current pruning methods rely on training datasets, which imposes a high computational burden and limits their use in scenarios requiring privacy and security. Data-free methods have been proposed as an alternative, but they often require handcrafted parameter tuning and achieve only inflexible reconstruction. The Automatic Data-Free Pruning (AutoDFP) method achieves automatic pruning and reconstruction without fine-tuning. AutoDFP formulates data-free pruning as an optimization problem and addresses it effectively through reinforcement learning. The approach assesses channel similarity in each layer to guide the pruning and reconstruction process. Evaluation on multiple networks and datasets demonstrates strong compression results, including a 2.87% reduction in accuracy loss on CIFAR-10 compared to recent methods.
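
To make the channel-similarity reconstruction step concrete, here is a minimal NumPy sketch. It is an illustration, not the authors' implementation: it assumes two plain fully connected layers with no activation between them, picks pruning targets with a greedy heuristic rather than the paper's reinforcement-learned per-layer strategy, and uses a simple norm-ratio rescaling; the function names are ours.

```python
import numpy as np

def cosine_similarity_matrix(W):
    """Pairwise cosine similarity between output channels (rows of W)."""
    U = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    return U @ U.T

def prune_and_reconstruct(W1, W2, n_prune):
    """Remove n_prune output channels of layer 1 (rows of W1) and fold
    each removed channel into its most similar retained channel in the
    next layer's weights (columns of W2), so no training data is needed.
    Ignores any nonlinearity between the layers, so the merge is only
    approximate."""
    sim = cosine_similarity_matrix(W1)
    np.fill_diagonal(sim, -np.inf)               # ignore self-similarity
    # Prune the channels that have the closest sibling first.
    order = np.argsort(-sim.max(axis=1))
    pruned = set(order[:n_prune].tolist())
    keep = [i for i in range(W1.shape[0]) if i not in pruned]
    W2 = W2.copy()
    for j in pruned:
        i = max(keep, key=lambda k: sim[j, k])   # most similar kept channel
        # If channel j's weights are roughly `scale` times channel i's,
        # its activation is too, so route j's downstream weights to i.
        scale = np.linalg.norm(W1[j]) / (np.linalg.norm(W1[i]) + 1e-12)
        W2[:, i] += scale * W2[:, j]
    return W1[keep], W2[:, keep]

# Toy usage: prune 2 of 8 hidden channels in a small 2-layer MLP.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))
W1_s, W2_s = prune_and_reconstruct(W1, W2, n_prune=2)
print(W1_s.shape, W2_s.shape)                    # (6, 4) (3, 6)
```

The design point mirrored here is that the compensation step uses only the weights themselves, which is what makes the approach data-free.
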
Low Difficulty Summary (written by GrooveSquid.com, original content)
A new way to make neural networks smaller is introduced. Normally, making these networks smaller requires training with lots of data, which can be a problem if you need to keep that data private. Some other methods don't require training data, but they're not very flexible. This paper proposes a new method called AutoDFP that automatically makes neural networks smaller without needing any training data at all. It works by looking at how similar different parts of the network are and using that similarity to decide what to keep and what to get rid of, as sketched in the example below. The results show that this method can make neural networks much smaller while still keeping them able to do their job well.
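
For the "deciding what to keep" part, the sketch below is a deliberately simplified stand-in for the paper's reinforcement learning formulation: instead of a learned agent, it uses plain random search over per-layer pruning ratios, scored by a data-free proxy (how closely each pruned channel is matched by a retained one). All names, the proxy, and the search procedure are our assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def channel_similarity(W):
    """Pairwise cosine similarity between output channels (rows of W)."""
    U = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    S = U @ U.T
    np.fill_diagonal(S, -np.inf)   # a channel should not match itself
    return S

def prune_score(W, ratio):
    """Data-free proxy: how well the pruned channels are explained by
    their most similar retained channel (lower = safer to prune)."""
    n_prune = int(ratio * W.shape[0])
    if n_prune == 0:
        return 0.0
    S = channel_similarity(W)
    order = np.argsort(-S.max(axis=1))       # closest-sibling channels first
    pruned, keep = order[:n_prune], order[n_prune:]
    return float(np.mean(1.0 - S[np.ix_(pruned, keep)].max(axis=1)))

def search_ratios(layer_weights, budget=0.3, trials=200, seed=0):
    """Random search over per-layer pruning ratios averaging `budget`."""
    rng = np.random.default_rng(seed)
    best_r, best_score = None, np.inf
    for _ in range(trials):
        r = rng.dirichlet(np.ones(len(layer_weights)))
        r = np.clip(r * budget * len(layer_weights), 0.0, 0.9)
        score = sum(prune_score(W, ri) for W, ri in zip(layer_weights, r))
        if score < best_score:
            best_r, best_score = r, score
    return best_r

# Toy usage: three random layers, pruning about 30% of channels overall.
layers = [np.random.default_rng(i).normal(size=(16, 8)) for i in range(3)]
print(np.round(search_ratios(layers, budget=0.3), 2))
```
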

Keywords

* Artificial intelligence
* Fine tuning
* Optimization
* Pruning
* Reinforcement learning