
Summary of Graph Expansion in Pruned Recurrent Neural Network Layers Preserve Performance, by Suryam Arnav Kalra et al.


Graph Expansion in Pruned Recurrent Neural Network Layers Preserve Performance

by Suryam Arnav Kalra, Arindam Biswas, Pabitra Mitra, Biswajit Basu

First submitted to arXiv on: 17 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
This version is the paper's original abstract; read it on arXiv.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper explores the expansion property of graphs in the context of deep neural networks. Specifically, it investigates how recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) can be pruned while maintaining their performance on real-time sequence learning tasks. The authors demonstrate that pruning RNNs and LSTMs to a high degree of sparsity leaves the underlying layer graphs sparse yet strongly connected, so that each layer retains its expansion property. Experimental results on benchmarks such as MNIST, CIFAR-10, and the Google Speech Commands dataset show that these expander graph properties are crucial for preserving classification accuracy.
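To make the idea concrete, here is a minimal sketch, not the authors' implementation: it uses simple magnitude pruning of a recurrent weight matrix and the spectral gap of the normalized bipartite layer graph as a rough proxy for expansion. The function names, the pruning criterion, and the spectral-gap check are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch: prune a recurrent weight matrix to a target sparsity,
# then check that the bipartite graph between the layer's units stays
# connected and keeps a nonzero spectral gap (a common proxy for expansion).
import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries until `sparsity` fraction is zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask


def spectral_gap(weights: np.ndarray) -> float:
    """Spectral gap of the normalized adjacency of the bipartite layer graph.

    A gap bounded away from zero indicates expander-like connectivity;
    a gap of zero means the pruned graph has become disconnected.
    """
    adjacency = (weights != 0).astype(float)        # unweighted bipartite graph
    n_out, n_in = adjacency.shape
    # Build the full symmetric adjacency of the bipartite graph.
    full = np.zeros((n_out + n_in, n_out + n_in))
    full[:n_out, n_out:] = adjacency
    full[n_out:, :n_out] = adjacency.T
    degrees = full.sum(axis=1)
    degrees[degrees == 0] = 1.0                     # isolated nodes: avoid division by zero
    d_inv_sqrt = 1.0 / np.sqrt(degrees)
    normalized = d_inv_sqrt[:, None] * full * d_inv_sqrt[None, :]
    eigenvalues = np.sort(np.linalg.eigvalsh(normalized))[::-1]
    return float(eigenvalues[0] - eigenvalues[1])   # gap between the top two eigenvalues


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hidden_size = 128
    recurrent = rng.normal(size=(hidden_size, hidden_size))
    for sparsity in (0.5, 0.8, 0.9, 0.95):
        pruned = magnitude_prune(recurrent, sparsity)
        print(f"sparsity={sparsity:.2f}  spectral gap={spectral_gap(pruned):.3f}")
```

Under this reading of the summary, the spectral gap should stay bounded away from zero as sparsity grows if the pruned layer is to keep expander-like connectivity and, with it, its classification accuracy.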
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about making deep learning models work well when computing resources are limited. It looks at how to make recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) smaller and sparser while keeping them accurate on tasks like recognizing speech or handwriting. The researchers show that by pruning these models carefully, they can keep their performance even when there's not much computer power available. They tested this idea on different datasets and found that it really works.

Keywords

  • Artificial intelligence
  • Classification
  • Deep learning
  • Pruning