Summary of Activity Sparsity Complements Weight Sparsity for Efficient RNN Inference, by Rishav Mukherji et al.
Activity Sparsity Complements Weight Sparsity for Efficient RNN Inference
by Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, Anand Subramoney
First submitted to arXiv on: 13 Nov 2023
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper but is written at a different level of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary: Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: This paper explores sparsity as a compression technique for neural networks, cutting computational requirements by reducing the number of nonzero parameters and the number of active units. While weight pruning is a well-known method, sparse activations have not been fully exploited in deep learning, despite occurring in both biological and artificial neural networks. The authors demonstrate that activity sparsity can be composed with parameter sparsity in recurrent neural networks (RNNs), using a GRU designed for activity sparsity. This approach achieves up to a 20x reduction in computation while keeping language modeling perplexity below 60 on the Penn Treebank task, outperforming previous sparse LSTMs and RNNs. The results suggest that making deep learning models activity sparse and porting them to neuromorphic devices is a viable strategy that does not compromise task performance. (A toy sketch of how the two kinds of sparsity compose follows this table.) |
Low | GrooveSquid.com (original content) | Low Difficulty Summary: Deep learning models are getting bigger and more powerful, but they also use a lot of energy and computing power. This paper shows how to make these models run more efficiently by using something called “sparse activations”. In simple terms, “sparse activations” means that instead of all the neurons in a model being active at once, only some of them are active at any given time. The authors test this idea on a language modeling task and show that it can reduce the amount of computation needed by as much as 20 times while still getting good results. |
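The way the two kinds of sparsity multiply their savings is easy to illustrate outside the paper's actual model. Below is a minimal NumPy sketch, not the authors' implementation: the unit count, pruning density, activity threshold, and the plain tanh update are illustrative assumptions standing in for the activity-sparse GRU described in the paper.

```python
# Minimal sketch (not the authors' code): shows how activity sparsity
# (few units active per step) composes with weight sparsity (a pruned
# weight matrix) to cut multiply-accumulate (MAC) operations in a
# recurrent update. All sizes and thresholds below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_units = 512
weight_density = 0.1      # fraction of weights kept after pruning (assumed)
activity_threshold = 1.0  # units below this magnitude count as inactive (assumed)

# Pruned recurrent weight matrix: most entries are exactly zero.
W = rng.standard_normal((n_units, n_units))
W *= rng.random((n_units, n_units)) < weight_density

h = rng.standard_normal(n_units)  # previous hidden state

# Dense baseline: every unit contributes to the update.
dense_update = np.tanh(W @ h)
dense_macs = n_units * n_units

# Activity-sparse step: only units above threshold emit an output,
# so only their columns of W are touched at all.
active = np.flatnonzero(np.abs(h) > activity_threshold)
sparse_update = np.tanh(W[:, active] @ h[active])

# With weight sparsity on top, only the nonzero weights inside those
# columns actually require a multiply-accumulate.
effective_macs = int(np.count_nonzero(W[:, active]))

print(f"active units: {len(active)} / {n_units}")
print(f"MACs: dense {dense_macs} vs weight+activity sparse {effective_macs} "
      f"({dense_macs / max(effective_macs, 1):.1f}x fewer)")
```

In the sketch the savings compose multiplicatively: skipping inactive units removes whole columns of work, and pruning removes most of what remains inside the surviving columns, which is the sense in which activity sparsity complements weight sparsity.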
Keywords
* Artificial intelligence
* Deep learning
* Pruning