Summary of Memory-Efficient LLM Training with Online Subspace Descent, by Kaizhao Liang et al.
Memory-Efficient LLM Training with Online Subspace Descent
by Kaizhao Liang, Bo Liu, Lizhang Chen, Qiang Liu
First submitted to arXiv on: 23 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Recently, memory-efficient Large Language Model (LLM) training algorithms have gained popularity. They exploit the low-rank structure of gradients to project optimizer states into a subspace using a projection matrix found by Singular Value Decomposition (SVD). However, convergence depends heavily on how that projection matrix is updated. This work provides the first convergence guarantee for arbitrary update rules of the projection matrix; the guarantee applies to a broad class of optimizers that can be analyzed with Hamiltonian Descent, including LION and Adam. Building on this insight, the authors propose Online Subspace Descent, a new family of subspace descent optimizers that needs no SVD. Instead of periodically recomputing eigenvectors, Online Subspace Descent updates the projection matrix with online Principal Component Analysis (PCA), which is flexible and adds minimal overhead to training (see the sketch after this table). The paper shows that for pretraining LLaMA models on the C4 dataset, Online Subspace Descent achieves better perplexity and downstream task performance than state-of-the-art low-rank training methods across different settings, narrowing the gap with full-rank baselines. |
| Low | GrooveSquid.com (original content) | This research focuses on making large language model training use less memory. Existing methods shrink the optimizer's memory by using a mathematical tool called Singular Value Decomposition (SVD) to project training information into a smaller space. The key finding is that training still converges reliably no matter how this projection is updated. The researchers also propose a new method, Online Subspace Descent, which replaces the SVD step with online Principal Component Analysis (PCA), a cheaper way to keep the projection up to date. They test their approach on large language models and show that it works better than other low-rank methods while staying close to full-memory training. |
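To make the projection idea above concrete, here is a minimal sketch in NumPy, assuming a rank-`r` projection, an Adam-like update on the projected states, and an Oja-style online PCA rule with QR re-orthonormalization. The function names, hyperparameters, and the specific PCA update are illustrative assumptions, not the authors' exact algorithm.

```python
# Minimal sketch (not the authors' code) of subspace descent with an
# online-PCA-style projection update, using NumPy and an Adam-like rule.
import numpy as np

def qr_orthonormalize(P):
    """Re-orthonormalize the projection matrix after an online PCA step."""
    Q, _ = np.linalg.qr(P)
    return Q

def subspace_adam_step(W, G, P, m, v, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, t=1):
    """One optimizer step with moments kept in the r-dimensional subspace.

    W: (d, n) weight matrix        G: (d, n) gradient
    P: (d, r) projection matrix    m, v: (r, n) first/second moments
    """
    g_low = P.T @ G                        # project the gradient into the subspace
    m = betas[0] * m + (1 - betas[0]) * g_low
    v = betas[1] * v + (1 - betas[1]) * g_low**2
    m_hat = m / (1 - betas[0]**t)          # bias correction
    v_hat = v / (1 - betas[1]**t)
    W = W - lr * (P @ (m_hat / (np.sqrt(v_hat) + eps)))  # map the update back
    return W, m, v

def online_pca_update(P, G, eta=1e-3):
    """Oja-style online PCA step: pull P toward the top singular directions of G.

    This is one standard online PCA rule, shown for illustration; the paper's
    specific update for P may differ.
    """
    P = P + eta * (G @ (G.T @ P))
    return qr_orthonormalize(P)

# Toy usage: a (d x n) weight with rank-r optimizer state instead of full-rank.
d, n, r = 256, 128, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(d, n))
P = qr_orthonormalize(rng.normal(size=(d, r)))
m = np.zeros((r, n))
v = np.zeros((r, n))
for t in range(1, 11):
    G = rng.normal(size=(d, n))            # stand-in for a real gradient
    W, m, v = subspace_adam_step(W, G, P, m, v, t=t)
    P = online_pca_update(P, G)            # projection refreshed without any SVD
```

Under these assumptions, the memory saving comes from storing the Adam moments as r × n matrices instead of d × n, and the projection matrix is refreshed with cheap matrix products each step rather than a periodic SVD.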
Keywords
» Artificial intelligence » Large language model » LLaMA » PCA » Perplexity » Pretraining » Principal component analysis