Towards Explaining Deep Neural Network Compression Through a Probabilistic Latent Space

by Mahsa Mozafari-Nia, Salimeh Yasaei Sekeh

First submitted to arXiv on: 29 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The proposed framework provides a novel theoretical explanation for deep neural network (DNN) compression techniques such as pruning and low-rank decomposition. By leveraging a probabilistic latent space of DNN weights, the authors introduce the notions of analogous projected patterns (AP2) and analogous-in-probability projected patterns (AP3) for DNNs, and show that these notions are related to the performance of compressed networks. A theoretical analysis explains the training process of compressed networks, and experiments with standard pre-trained benchmark networks on the CIFAR10 and CIFAR100 datasets validate the results.
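To make the two compression techniques named above concrete, here is a minimal, hypothetical sketch (not the paper's code) of magnitude pruning and low-rank decomposition applied to a single weight matrix; the matrix shape, sparsity level, and rank are illustrative assumptions:

```python
# Illustrative sketch of two common DNN compression techniques
# (not the paper's implementation); all sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))  # stand-in for one layer's weights

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude `sparsity` fraction of entries."""
    threshold = np.quantile(np.abs(weights), sparsity)
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

def low_rank_approx(weights, rank=32):
    """Best rank-`rank` approximation of `weights` via truncated SVD."""
    U, s, Vt = np.linalg.svd(weights, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

W_pruned = magnitude_prune(W, sparsity=0.5)
W_lowrank = low_rank_approx(W, rank=32)
print("fraction of zeroed weights:", np.mean(W_pruned == 0.0))
print("low-rank relative error:",
      np.linalg.norm(W - W_lowrank) / np.linalg.norm(W))
```

Both techniques shrink the effective parameter count of a layer; the paper's contribution is a probabilistic account of why such compressed weights can still perform well after fine-tuning.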
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper proposes a new way to understand how deep neural networks can be made smaller without losing their ability to learn. It does this by looking at the weights inside the network as if they were random points in space, which helps explain why some methods for compressing the network work better than others. The authors also show that certain properties of the network are related to its performance when it’s been compressed and fine-tuned.
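To give the "weights as random points" picture a tangible form, here is a toy, assumption-laden sketch (again, not the paper's AP2/AP3 definitions): it fits a simple univariate Gaussian to a layer's weights before and after pruning and measures the shift with a KL divergence; a small shift loosely mirrors the intuition that compression which barely moves the weight distribution tends to preserve performance:

```python
# Toy illustration (not the paper's metric): treat a layer's weights as
# samples from a distribution, fit a Gaussian before and after pruning,
# and measure the shift with a KL divergence. Shapes and the 50%
# sparsity level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))       # stand-in layer weights
threshold = np.quantile(np.abs(W), 0.5)   # magnitude-prune 50% of entries
W_pruned = np.where(np.abs(W) < threshold, 0.0, W)

def gaussian_kl(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for univariate Gaussians."""
    return 0.5 * (np.log(var1 / var0)
                  + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

w0, w1 = W.ravel(), W_pruned.ravel()
shift = gaussian_kl(w0.mean(), w0.var(), w1.mean(), w1.var())
print(f"KL divergence between weight distributions: {shift:.4f}")
```

The paper's actual AP2/AP3 notions are defined over projected patterns in a probabilistic latent space rather than a Gaussian fit to raw weights; this sketch only conveys the flavor of comparing a network's weight distribution before and after compression.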

Keywords

  • Artificial intelligence
  • Latent space
  • Neural network
  • Probability
  • Pruning