LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging

by Jinuk Kim, Marwa El Halabi, Mingi Ji, Hyun Oh Song

First submitted to arXiv on: 18 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read it on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes LayerMerge, a novel depth compression method that jointly prunes convolution layers and activation functions to achieve a desired inference speed-up with minimal performance loss. This addresses a key drawback of existing depth compression methods, which enlarge kernel sizes when merging consecutive convolution layers and thereby undermine the latency reduction. The authors formulate a surrogate optimization problem for selecting which layers to remove and solve it efficiently via dynamic programming. Empirically, LayerMerge outperforms existing depth compression and layer pruning methods across various network architectures on image classification and generation tasks.
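
To make the paper’s two central ideas concrete, here is a small, self-contained Python sketch. Part (a) demonstrates why merging two consecutive convolutions with no activation between them yields a single convolution with an enlarged kernel, the kernel-size blow-up the abstract refers to. Part (b) is a toy dynamic program for choosing merge boundaries under a latency budget. This is an illustration under made-up assumptions, not the authors’ implementation: the network size and the `seg_latency` and `seg_loss` functions are hypothetical stand-ins for the measured quantities the paper’s surrogate objective would use.

```python
import numpy as np

# Part (a): why naive merging blows up kernel sizes.
# Two consecutive convolutions with no activation in between are equivalent
# to a single convolution whose kernel is the convolution of the two kernels,
# so its size grows to k1 + k2 - 1. Per the abstract, this growth is what
# undermines the latency gains of prior depth compression methods.
k1 = np.array([1.0, 2.0, 1.0])            # 3-tap kernel
k2 = np.array([0.5, -0.5])                # 2-tap kernel
merged = np.convolve(k1, k2)              # 4-tap kernel (3 + 2 - 1)

x = np.random.randn(16)
two_layers = np.convolve(np.convolve(x, k1), k2)
one_layer = np.convolve(x, merged)
assert np.allclose(two_layers, one_layer)

# Part (b): a toy dynamic program for choosing merge boundaries.
# Keep a subset of layer boundaries; each resulting segment is merged into
# one layer. Minimize total (made-up) quality loss subject to a latency
# budget. seg_latency and seg_loss are hypothetical stand-ins.
N = 5            # number of layers in the toy network
BUDGET = 9       # latency budget, arbitrary units

def seg_latency(i, j):
    # Hypothetical latency of layers i..j-1 merged into one layer.
    return 1 + (j - i)

def seg_loss(i, j):
    # Hypothetical quality loss from merging layers i..j-1 together.
    return (j - i - 1) ** 2

# dp maps (boundary, latency_used) -> minimal loss for layers 0..boundary-1.
dp = {(0, 0): 0.0}
for pos in range(N):
    for (p, lat), loss in list(dp.items()):
        if p != pos:
            continue
        for nxt in range(pos + 1, N + 1):   # end of the next merged segment
            new_lat = lat + seg_latency(pos, nxt)
            if new_lat > BUDGET:
                continue
            key = (nxt, new_lat)
            cand = loss + seg_loss(pos, nxt)
            if cand < dp.get(key, float("inf")):
                dp[key] = cand

best = min(v for (p, _), v in dp.items() if p == N)
print(f"minimal toy quality loss within budget: {best}")
```

Because the dynamic program enumerates (boundary, latency) states rather than all subsets of boundaries, the toy search stays polynomial in the number of layers, which mirrors why a DP formulation is attractive for this selection problem.
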
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about finding a way to make neural networks run faster without losing their ability to do things correctly. Some people have already tried doing this by getting rid of extra parts in the network, but they didn’t think about how this might affect the size of certain important pieces. The authors of this paper came up with a new idea that takes into account both the layers and the activation functions in the network. They used math to figure out which parts to remove so the network runs faster without losing its ability to do things right. This approach worked better than other methods when tested on different types of networks.

Keywords

» Artificial intelligence  » Image classification  » Inference  » Optimization  » Pruning