Summary of "Investigating the Effect of Network Pruning on Performance and Interpretability" by Jonathan von Rad et al.


Investigating the Effect of Network Pruning on Performance and Interpretability

by Jonathan von Rad, Florian Seuffert

First submitted to arXiv on: 29 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com, original content)
The research paper investigates the impact of different pruning techniques on the classification performance and interpretability of GoogLeNet, a Deep Neural Network (DNN). The authors apply unstructured and structured pruning, as well as connection sparsity methods to the network and analyze its performance on the validation set of ImageNet. They also compare different retraining strategies, such as iterative pruning and one-shot pruning. The results show that with sufficient retraining epochs, the pruned networks can approximate or even surpass the performance of the default GoogLeNet in some cases. To assess interpretability, the authors use the Mechanistic Interpretability Score (MIS) and find no significant relationship between interpretability and pruning rate when using MIS as a measure. Additionally, they observe that networks with extremely low accuracy can still achieve high MIS scores, suggesting that MIS may not always align with intuitive notions of interpretability.
Low Difficulty Summary (GrooveSquid.com, original content)
The research paper looks at how to make Deep Neural Networks (DNNs) smaller without losing their ability to do tasks like image classification. The authors test different ways to “prune” the network, or remove parts of it, and see how it affects the network’s performance. They also try different methods for retraining the pruned network, such as doing multiple rounds of training. The results show that with enough training, the pruned networks can be just as good as the original one. The authors also use a special score to measure how well they can understand why the network is making certain decisions.
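The paper itself prunes a full GoogLeNet on ImageNet; purely as a toy illustration of the ideas summarized above, here is a minimal sketch of unstructured magnitude pruning and the one-shot vs. iterative schedules. The function names and the no-op `retrain` placeholder are our own inventions for clarity, not the authors' implementation:

```python
def magnitude_prune(weights, rate):
    """Unstructured magnitude pruning: zero out the `rate` fraction of
    weights with the smallest absolute value (one-shot pruning)."""
    k = int(len(weights) * rate)
    # indices of the k smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]


def iterative_prune(weights, target_rate, steps, retrain=lambda w: w):
    """Iterative pruning: reach `target_rate` sparsity gradually over
    `steps` rounds, calling a (placeholder) `retrain` step after each.
    Already-zeroed weights have the smallest magnitude, so they stay
    pruned as the cumulative rate grows."""
    for step in range(1, steps + 1):
        weights = magnitude_prune(weights, target_rate * step / steps)
        weights = retrain(weights)
    return weights


# Toy usage: prune half of six weights, one-shot vs. in three rounds.
w = [0.3, -0.05, 1.2, 0.01, -0.8, 0.02]
one_shot = magnitude_prune(w, 0.5)        # zeros 0.01, 0.02, -0.05
iterative = iterative_prune(w, 0.5, 3)    # same sparsity, reached gradually
```

In a real experiment like the paper's, `retrain` would be one or more epochs of fine-tuning on the training set, which is what lets the pruned network recover (or exceed) the original accuracy.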

Keywords

» Artificial intelligence  » Classification  » Image classification  » Neural network  » One shot  » Pruning