
Pruning Deep Convolutional Neural Network Using Conditional Mutual Information

by Tien Vu-Van, Dat Du Thanh, Nguyen Ho, Mai Vu

First submitted to arXiv on: 27 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This study proposes a structured filter-pruning approach for compressing Convolutional Neural Networks (CNNs) so they can be deployed on resource-limited hardware. The method uses Mutual Information (MI) and Conditional Mutual Information (CMI) to identify the most informative feature maps within each layer of the network. CMI values are computed with a matrix-based Rényi α-order entropy estimator; the approach then evaluates the layers successively, ranks the feature maps by their CMI values, and selectively retains only the most important ones. This reduces model size while preserving accuracy: pruning more than a third of the filters from VGG16 on the CIFAR-10 dataset costs only a 0.32% drop in test accuracy.
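To make the core idea concrete, here is an illustrative NumPy sketch of the matrix-based Rényi α-order entropy and the mutual-information score built from it, applied to a toy filter-ranking problem. This is not the authors' code: the function names, the Gaussian kernel with its bandwidth `sigma`, and the toy data are all assumptions for illustration, and the paper ranks filters by *conditional* MI (which adds further entropy terms for the conditioning variables) rather than the plain MI shown here.

```python
import numpy as np

def normalized_gram(x, sigma=1.0):
    """Gaussian Gram matrix of samples x (n, d), normalized to unit trace."""
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    K = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
    return K / np.trace(K)

def renyi_entropy(A, alpha=2.0):
    """Matrix-based Renyi alpha-order entropy from a unit-trace Gram matrix:
    S_alpha(A) = log2(sum_i lambda_i^alpha) / (1 - alpha)."""
    eig = np.clip(np.linalg.eigvalsh(A), 1e-12, None)
    return np.log2(np.sum(eig ** alpha)) / (1.0 - alpha)

def joint_gram(*mats):
    """Joint representation: Hadamard product of Gram matrices, renormalized."""
    A = np.ones_like(mats[0])
    for M in mats:
        A = A * M
    return A / np.trace(A)

def mutual_information(x, y, sigma=1.0):
    """MI(X; Y) = H(X) + H(Y) - H(X, Y), with matrix-based entropies."""
    Ax, Ay = normalized_gram(x, sigma), normalized_gram(y, sigma)
    return renyi_entropy(Ax) + renyi_entropy(Ay) - renyi_entropy(joint_gram(Ax, Ay))

# Toy demo: rank two "feature maps" by how much information they carry
# about the target y; a pruning pass would keep the top-ranked filters.
rng = np.random.default_rng(0)
n = 64
y = rng.standard_normal((n, 1))                        # stand-in for targets
informative = y + 0.05 * rng.standard_normal((n, 1))   # filter tracking y
noise = rng.standard_normal((n, 1))                    # uninformative filter
mi_informative = mutual_information(informative, y)
mi_noise = mutual_information(noise, y)
```

In this sketch the informative feature scores a higher MI with the target than the independent noise feature, which is the signal the pruning procedure exploits when deciding which filters to keep.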
Low Difficulty Summary (original content by GrooveSquid.com)
Imagine trying to figure out which parts of a hugely complex computer program actually matter. That's roughly what this study does, except the "program" is a computer model that learns from images. The researchers created a new way to make these image-recognition models smaller and faster by finding the parts that contribute most to recognizing things. They use special math to measure which bits carry the most information and then remove the rest. The result is a smaller program that still works almost as well: on a popular image recognition task, it was only slightly less accurate than the original.

Keywords

  • Artificial intelligence
  • Pruning