
Summary of MPruner: Optimizing Neural Network Size with CKA-Based Mutual Information Pruning, by Seungbeom Hu et al.


MPruner: Optimizing Neural Network Size with CKA-Based Mutual Information Pruning

by Seungbeom Hu, ChanJun Park, Andrew Ferraiuolo, Sang-Ki Ko, Jinwoo Kim, Haein Song, Jieung Kim

First submitted to arXiv on: 24 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes MPruner, a new pruning algorithm that uses mutual information, measured through vector similarity, to determine the optimal size of a neural network. The method clusters layers using the Centered Kernel Alignment (CKA) similarity metric, incorporating global information from the whole network for more precise and efficient layer-wise pruning (see the sketch below for the clustering idea). The authors evaluate MPruner across various architectures and configurations, demonstrating its versatility and providing practical guidelines. The results show that MPruner achieves up to a 50% reduction in parameters and memory usage for CNN and transformer-based models, with minimal loss in accuracy.

Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making neural networks smaller without losing their power. Neural networks need to be the right size so they don't use too much memory or take too long to run. One way to make them smaller is pruning, which is like editing out unnecessary parts. The problem with most pruning methods is that they don't consider how all the parts of the network work together. MPruner solves this by looking at how the parts of the network are connected. The results show that MPruner can make neural networks smaller without losing accuracy.

Keywords

» Artificial intelligence  » Alignment  » Clustering  » CNN  » Neural network  » Pruning  » Transformer