Variational autoencoder-based neural network model compression

by Liang Cheng, Peiyuan Guan, Amir Taherkordi, Lei Liu, Dapeng Lan

First submitted to arXiv on: 25 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, which can be read on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)
This paper explores neural network model compression methods based on Variational Autoencoders (VAEs). The authors train four different neural network models (a feedforward neural network, a convolutional neural network, a recurrent neural network, and a long short-term memory network) on MNIST recognition and then compress each model’s parameters with a VAE. The compressed models are evaluated by reconstructing the original parameters from their latent-space representations. Results show that VAE-based compression achieves higher compression rates than traditional methods such as pruning and quantization without significantly affecting accuracy. This work provides a foundation for further exploring ways to efficiently store or transfer large-scale deep learning models.
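
To make the pipeline concrete, the sketch below shows one way such a scheme could look in PyTorch: flatten a trained model’s weights into a single vector, train a small VAE to encode that vector into a low-dimensional latent code, and reconstruct the weights from the code. This is an illustration under assumed details, not the authors’ implementation; the ParamVAE architecture, the layer sizes, latent_dim=32, and the trained_model placeholder are all assumptions.

    import torch
    import torch.nn as nn

    class ParamVAE(nn.Module):
        # Encoder/decoder sizes here are illustrative, not from the paper.
        def __init__(self, n_params, latent_dim=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_params, 256), nn.ReLU())
            self.mu = nn.Linear(256, latent_dim)
            self.logvar = nn.Linear(256, latent_dim)
            self.dec = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_params)
            )

        def forward(self, w):
            h = self.enc(w)
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization trick: z = mu + sigma * eps.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.dec(z), mu, logvar

    def flatten_params(model):
        # Concatenate all weights of a trained model into one vector.
        return torch.cat([p.detach().flatten() for p in model.parameters()])

    # trained_model is a placeholder for e.g. a CNN already trained on MNIST.
    w_target = flatten_params(trained_model)
    vae = ParamVAE(w_target.numel())
    opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
    for step in range(1000):
        w_recon, mu, logvar = vae(w_target)
        # Standard VAE objective: reconstruction error plus KL divergence.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        loss = nn.functional.mse_loss(w_recon, w_target, reduction="sum") + kl
        opt.zero_grad()
        loss.backward()
        opt.step()

After training, only the latent code (and the decoder) would need to be stored or transmitted; the evaluation step described above would then load the reconstructed vector back into the original architecture and measure its accuracy on MNIST.
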
Low Difficulty Summary (GrooveSquid.com original content)
This research looks at how to make neural networks smaller and easier to store. The authors took four different types of networks (used for things like image recognition) and fed their parameters, as “training data”, into something called a Variational Autoencoder. These VAEs can shrink the size of a network without making it much less accurate. This is important because we’ll soon have many big neural networks, so finding ways to save time and space will be crucial.

Keywords

» Artificial intelligence  » Deep learning  » Latent space  » Model compression  » Neural network  » Pruning  » Quantization