Summary of OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition, by Stephen Zhang et al.
OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition
by Stephen Zhang, Vardan Papyan
First submitted to arXiv on: 20 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The recent shift to large-scale foundation models has brought significant success to deep learning, but also high costs due to memory consumption and compute requirements. To address this, researchers have developed post-hoc neural network pruning techniques that do not require retraining. However, existing methods often suffer a drop in model performance as compression increases. This paper presents OATS, a novel approach that decomposes model weights into a sum of sparse and low-rank matrices using the second moment information of the input embeddings. Without any retraining, OATS achieves state-of-the-art performance when compressing models by up to 60% on large language models such as Llama-3 and Phi-3, as well as vision transformers such as ViT and DINOv2, while delivering up to 1.37x CPU acceleration over a comparably pruned model. (A code sketch of the decomposition idea follows this table.) |
Low | GrooveSquid.com (original content) | Large foundation models have made deep learning super successful, but they’re also very expensive to use because they need so much memory and computer power. To make them more affordable, people are working on ways to remove parts of the model that aren’t needed. But most of these methods make the model perform worse as you get rid of more parts. This paper is about a new way to do this called OATS. It takes advantage of how the input information is structured and breaks the model’s weights down into simpler, easier-to-compute pieces. Without retraining, OATS can make models up to 60% smaller while still keeping their performance, and it can even speed them up by 1.37 times compared to other pruning methods. |
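The medium difficulty summary describes OATS as splitting each weight matrix into a sparse part plus a low-rank part, guided by the second moment of the input embeddings. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors’ exact algorithm: the function name `oats_like_decompose`, the `rank`, `keep_frac`, and `iters` parameters, and the alternating SVD-plus-thresholding scheme are illustrative assumptions based only on the summary above.

```python
import torch

def oats_like_decompose(W, X, rank=32, keep_frac=0.5, iters=20):
    """Hypothetical sketch: approximate W as (low-rank L) + (sparse S),
    weighting the fit by the second moment of calibration inputs X.

    W: (out_features, in_features) weight matrix
    X: (num_samples, in_features) calibration activations for that layer
    """
    # Outlier-aware scaling: emphasize input dimensions with high energy.
    second_moment = (X ** 2).mean(dim=0)           # (in_features,)
    d = second_moment.sqrt().clamp(min=1e-8)
    W_scaled = W * d                               # broadcast over columns

    S = torch.zeros_like(W_scaled)
    k = max(1, int(keep_frac * W_scaled.numel()))  # nonzeros kept in S
    for _ in range(iters):
        # Low-rank step: best rank-`rank` approximation of the residual.
        U, sig, Vh = torch.linalg.svd(W_scaled - S, full_matrices=False)
        L = U[:, :rank] @ torch.diag(sig[:rank]) @ Vh[:rank, :]
        # Sparse step: keep only the k largest-magnitude residual entries.
        resid = W_scaled - L
        thresh = resid.abs().flatten().kthvalue(resid.numel() - k + 1).values
        S = resid * (resid.abs() >= thresh)
    # Undo the scaling so that L + S approximates the original W.
    return L / d, S / d

# Illustrative usage on a single linear layer (names are assumptions):
# W = model.layer.weight.data              # (out, in)
# X = calibration_batch                    # (n, in) inputs to that layer
# L, S = oats_like_decompose(W, X, rank=64, keep_frac=0.3)
# model.layer.weight.data = L + S          # compressed reconstruction
```

In practice, a scheme like this would presumably store the low-rank part as two thin factors and the sparse part in a hardware-friendly sparsity pattern, which is what would make the CPU speedups reported in the summaries possible.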
Keywords
» Artificial intelligence » Deep learning » Llama » Neural network » Pruning » ViT