Summary of SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression, by Xin Wang et al.
SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression
by Xin Wang, Yu Zheng, Zhongwei Wan, Mi Zhang
First submitted to arXiv on: 12 Mar 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | A novel Large Language Model (LLM) compression method is proposed that addresses limitations of existing Singular Value Decomposition (SVD)-based techniques. SVD-LLM incorporates a truncation-aware data whitening technique to ensure a direct mapping between singular values and compression loss, and it applies parameter updates with sequential low-rank approximation to compensate for accuracy degradation after SVD truncation (see the code sketch after this table). Experimental results on 10 datasets and seven models from three LLM families at three scales demonstrate the superiority of SVD-LLM over state-of-the-art methods, especially at high model compression ratios. |
Low | GrooveSquid.com (original content) | LLMs have gotten bigger and better, but that’s a problem! They’re too big to be used in many real-life situations. One way to fix this is to compress them, making them smaller and more efficient. Singular Value Decomposition (SVD) is a technique that can help with this, but current SVD-based methods have some flaws. The new method, called SVD-LLM, fixes these problems by using special techniques to make sure the model doesn’t lose too much information when it’s compressed. It even adjusts the model’s weights after compression to keep its performance strong. The new method was tested on many different datasets and models, and it did really well! |
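To make the whitening idea concrete, here is a minimal NumPy sketch (not the authors’ implementation; the matrix sizes, variable names, and random calibration data are illustrative assumptions). It whitens a layer’s weight with the Cholesky factor of the calibration-activation covariance, truncates the SVD, and folds the whitening back out to obtain two low-rank factors:

```python
import numpy as np

# Illustrative sketch of truncation-aware SVD compression (assumed setup,
# not the paper's code). Whitening with the Cholesky factor S of the
# activation covariance makes each truncated singular value correspond
# directly to the loss it causes on the calibration outputs.
rng = np.random.default_rng(0)
d_in, d_out, n_samples, rank = 64, 64, 256, 16

W = rng.standard_normal((d_out, d_in))      # weight of a linear layer y = W @ x
X = rng.standard_normal((d_in, n_samples))  # calibration activations (assumed)

cov = X @ X.T + 1e-6 * np.eye(d_in)         # activation covariance, with jitter
S = np.linalg.cholesky(cov)                 # cov = S @ S.T

# SVD of the whitened weight, truncated to the target rank.
U, sigma, Vt = np.linalg.svd(W @ S, full_matrices=False)
U_r, sigma_r, Vt_r = U[:, :rank], sigma[:rank], Vt[:rank, :]

# Low-rank factors A @ B approximate W; the solve folds S back out
# (B = Vt_r @ inv(S) without forming the inverse explicitly).
A = U_r * sigma_r
B = np.linalg.solve(S.T, Vt_r.T).T

W_approx = A @ B
err = np.linalg.norm(W @ X - W_approx @ X) / np.linalg.norm(W @ X)
print(f"relative output error at rank {rank}: {err:.4f}")
```

In practice, the two factors A and B replace the original weight matrix as two smaller linear layers, which is where the memory savings come from; a follow-up parameter update (as the summary notes) can further reduce the accuracy loss.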
Keywords
- Artificial intelligence
- Large language model
- Model compression