Summary of Structure-Preserving Network Compression Via Low-Rank Induced Training Through Linear Layers Composition, by Xitong Zhang et al.
Structure-Preserving Network Compression Via Low-Rank Induced Training Through Linear Layers Composition
by Xitong Zhang, Ismail R. Alkhouri, Rongrong Wang
First submitted to arXiv on: 6 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | This paper proposes Low-Rank Induced Training (LoRITa), a technique for compressing and pruning neural networks so they require less storage and computation when deployed on resource-limited devices. LoRITa promotes low-rankness by composing linear layers during training and applying singular value truncation afterward, eliminating the need for pre-trained models, rank selection, or per-iteration SVD computation. The approach achieves competitive or state-of-the-art results compared with leading structured pruning and low-rank training methods on MNIST, CIFAR10, CIFAR100, and ImageNet, using Fully Connected Networks, Vision Transformers, and Convolutional Neural Networks (see the sketch below this table). |
| Low | GrooveSquid.com (original content) | Deep learning is making huge progress on problems that were previously unsolvable. However, these powerful models are hard to use on devices with limited resources because they require a lot of storage and computing power. To solve this problem, researchers have been looking for ways to shrink, or "compress", models without losing performance. One way to do this is low-rank decomposition. In this paper, the authors propose a new method that does this more efficiently than previous approaches. |
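For readers who want a concrete picture of the idea in the medium-difficulty summary, here is a minimal PyTorch-style sketch: a linear layer is re-parameterized as a composition of linear factors during training, and singular value truncation is applied once after training. This is an illustration under assumptions, not the authors' implementation; the names `ComposedLinear` and `truncate_to_low_rank`, the factorization depth, and the energy-based truncation threshold are all hypothetical choices introduced here.

```python
# Hypothetical sketch of linear-layer composition plus post-training SVD
# truncation; not the paper's reference code.
import torch
import torch.nn as nn


class ComposedLinear(nn.Module):
    """One linear layer re-parameterized as a composition of linear maps.

    Training the composed factors (typically with weight decay) biases the
    effective weight W = W_k @ ... @ W_1 toward low rank, which is the role
    the summary attributes to "linear layer composition".
    """

    def __init__(self, in_features: int, out_features: int, depth: int = 2):
        super().__init__()
        # Only the final factor carries a bias, so the composition is a
        # single affine map with the same input/output shape as nn.Linear.
        factors = [nn.Linear(in_features, in_features, bias=False)
                   for _ in range(depth - 1)]
        factors.append(nn.Linear(in_features, out_features))
        self.factors = nn.Sequential(*factors)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.factors(x)

    def effective_weight(self) -> torch.Tensor:
        # Multiply the factors back into one weight matrix (bias untouched).
        w = None
        for layer in self.factors:
            w = layer.weight if w is None else layer.weight @ w
        return w


def truncate_to_low_rank(weight: torch.Tensor, energy: float = 0.99):
    """Post-training singular value truncation.

    Keeps the smallest rank whose singular values retain the requested
    fraction of spectral energy, returning two factors for compressed
    storage and inference.
    """
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    cumulative = torch.cumsum(S**2, dim=0) / torch.sum(S**2)
    rank = int((cumulative < energy).sum().item()) + 1
    left = U[:, :rank] * S[:rank]   # shape: (out_features, rank)
    right = Vh[:rank, :]            # shape: (rank, in_features)
    return left, right


if __name__ == "__main__":
    layer = ComposedLinear(in_features=256, out_features=128, depth=2)
    # ... train the full model as usual; no per-iteration SVD is needed ...
    W = layer.effective_weight()
    L, R = truncate_to_low_rank(W, energy=0.95)
    print(W.shape, L.shape, R.shape)  # e.g. (128, 256), (128, r), (r, 256)
```

After truncation, the dense layer can be replaced by the two smaller factors, so storage and compute scale with the retained rank rather than with the full weight matrix.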
Keywords
» Artificial intelligence » Deep learning » Pruning