Summary of Learning Effective Pruning at Initialization From Iterative Pruning, by Shengkai Liu et al.
Learning effective pruning at initialization from iterative pruning
by Shengkai Liu, Yaofeng Cheng, Fusheng Zha, Wei Guo, Lining Sun, Zhenshan Bing, Chenguang Yang
First submitted to arXiv on: 27 Aug 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | Pruning at initialization (PaI) reduces training costs by removing weights before training begins, which becomes increasingly important as networks grow. The paper explores how to improve PaI performance using ideas from iterative pruning. It introduces an end-to-end neural network, AutoSparse, that learns the correlation between weights' initial features and their importance scores, then uses those scores to prune parameters before training. The approach outperforms existing methods in high-sparsity settings and generalizes across models with a single iteration of iterative rewind pruning (IRP). Extensive experiments validate the factors influencing the method's performance, offering new practical insights into PaI research (a hedged code sketch of this score-then-mask pattern follows the table).
Low | GrooveSquid.com (original content) | Pruning at initialization (PaI) is a way to make neural networks more efficient. It removes parts of the network before training starts, which saves time and computing power. But current PaI methods have an accuracy gap compared with other pruning approaches. The paper asks whether ideas from iterative pruning can improve PaI. The authors develop a method called AutoSparse that learns how important each part of the network is from its initial state. This lets them prune the network more effectively, especially at very high sparsity. The results show the approach works better than existing methods and carries over to different types of networks.
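To make the score-then-mask idea concrete, below is a minimal, hypothetical PyTorch sketch of pruning at initialization with a learned scorer. The `WeightScorer` module, its two toy per-weight features (magnitude and sign), and the `prune_at_init` helper are illustrative assumptions, not the paper's actual AutoSparse design; the sketch only shows the general pattern of scoring initial weights and masking the lowest-scoring fraction before training.

```python
import torch
import torch.nn as nn

# Hypothetical scorer: a tiny MLP mapping per-weight features at
# initialization to an importance score. The architecture and the
# feature choice are assumptions for illustration only.
class WeightScorer(nn.Module):
    def __init__(self, n_features: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 1)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_weights, n_features) -> (num_weights,) scores
        return self.mlp(feats).squeeze(-1)

@torch.no_grad()
def prune_at_init(model: nn.Module, scorer: WeightScorer, sparsity: float):
    """Zero out the lowest-scoring fraction of weights before any training."""
    for p in model.parameters():
        if p.dim() < 2:  # skip biases/norm parameters, a common convention
            continue
        w = p.flatten()
        # Two toy per-weight features: magnitude and sign.
        feats = torch.stack([w.abs(), w.sign()], dim=-1)
        scores = scorer(feats)
        k = int(sparsity * w.numel())           # number of weights to drop
        threshold = scores.kthvalue(k).values   # k-th smallest score
        mask = (scores > threshold).float().view_as(p)
        p.mul_(mask)  # zero out pruned weights in place

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
prune_at_init(model, WeightScorer(), sparsity=0.9)  # prune 90% of weights
```

In the paper's setting, the scorer would itself be trained end to end, with supervision obtainable from a single iteration of iterative rewind pruning, and the resulting mask would be kept fixed (re-applied) throughout subsequent training; this sketch omits both steps.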
Keywords
- Artificial intelligence
- Neural network
- Pruning