Summary of Activation Map Compression through Tensor Decomposition for Deep Learning, by Le-Trung Nguyen et al.
Activation Map Compression through Tensor Decomposition for Deep Learning
by Le-Trung Nguyen, Aël Quélennec, Enzo Tartaglione, Samuel Tardieu, Van-Tam Nguyen
First submitted to arXiv on: 10 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract; read it on arXiv. |
Medium | GrooveSquid.com (original content) | The paper proposes a novel approach to reducing the memory footprint of backpropagation for on-device (edge AI) learning. The authors tackle the challenge by using tensor decomposition to compress activation maps, the intermediate outputs that must be stored for the backward pass. Specifically, they investigate Singular Value Decomposition (SVD) and its higher-order variant (HOSVD) to cut the memory required by backpropagation while preserving the features essential for learning; a minimal illustrative sketch follows the table below. The approach is demonstrated on mainstream architectures and tasks, showing gains in both generalization and memory footprint. |
Low | GrooveSquid.com (original content) | The paper helps solve a big problem in using artificial intelligence (AI) for things like smart homes or self-driving cars. Right now, AI models need too much memory to learn directly on devices like smartphones or cameras, even though that learning is important for making these devices smarter. The authors came up with a clever way to cut that memory by breaking the intermediate data a model keeps while learning into smaller pieces and storing only the most important parts. This makes it possible to train AI models on small devices without sacrificing their ability to learn and get better over time. |
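
To make the medium-difficulty description more concrete, the sketch below shows how a rank-truncated SVD can shrink an activation map that would otherwise be stored in full for the backward pass. This is a minimal illustrative sketch, not the paper’s exact algorithm: the NumPy implementation, the (batch, channel, height, width) flattening, the fixed rank, and the function names are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's exact algorithm): compress an
# activation map saved for backpropagation with a rank-truncated SVD.
import numpy as np

def compress_activation(act: np.ndarray, rank: int):
    """Keep only the leading `rank` SVD factors of a flattened activation."""
    n, c, h, w = act.shape              # assumed (batch, channel, H, W) layout
    mat = act.reshape(n, c * h * w)     # flatten all non-batch dimensions
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    # Storage drops from n*c*h*w values to rank*(n + c*h*w) + rank values.
    return u[:, :rank], s[:rank], vt[:rank, :], act.shape

def decompress_activation(u, s, vt, shape):
    """Rebuild an approximate activation when the backward pass needs it."""
    return ((u * s) @ vt).reshape(shape)

# Example: an 8x16x32x32 activation kept at rank 4.
act = np.random.randn(8, 16, 32, 32).astype(np.float32)
u, s, vt, shape = compress_activation(act, rank=4)
approx = decompress_activation(u, s, vt, shape)
print(act.size, "values stored ->", u.size + s.size + vt.size)
```

The HOSVD variant mentioned in the summary would instead truncate the activation along each tensor mode (batch, channel, height, width) separately rather than flattening it into a single matrix.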
Keywords
» Artificial intelligence » Backpropagation » Deep learning » Generalization