Summary of VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections, by Roy Miles et al.
VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections
by Roy Miles, Pradyumna Reddy, Ismail Elezi, Jiankang Deng
First submitted to arXiv on: 28 May 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Large language models (LLMs) have revolutionized natural language processing, but training them remains computationally and memory-intensive. This paper identifies the key components needed for effective model convergence with gradient descent and finds that intermediate activations can be heavily compressed without degrading performance. This insight leads to a cheap, memory-efficient algorithm for both fine-tuning and pre-training LLMs. The proposed method divides tokens into sub-tokens, projects them onto a fixed rank-1 subspace during the forward pass, and coarsely reconstructs the features during the backward pass (a rough code sketch of this idea appears after the table). The results demonstrate the effectiveness of the approach on the VTAB-1k fine-tuning benchmark, outperforming QLoRA for fine-tuning LLaMA and showing competitive performance against other memory-efficient pre-training methods on the C4 dataset. |
Low | GrooveSquid.com (original content) | Imagine a super-smart computer that can understand language. This computer is called a large language model (LLM). Right now, making these computers work well requires lots of processing power and memory. In this research paper, scientists discovered how to train them more efficiently without losing their abilities. They found that by breaking the information into smaller pieces and only roughly reassembling it during learning, they could greatly reduce the memory needed. This new method works well on a standard test benchmark and performs similarly to other top methods. |
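
To make the medium-difficulty description more concrete, here is a minimal, hypothetical PyTorch sketch of the compression idea it describes: a linear layer that, instead of saving its full input for backpropagation, splits each token into sub-tokens and stores only their projections onto a fixed rank-1 direction, then coarsely reconstructs the input in the backward pass. The class name `Rank1CompressedLinear`, the sub-token size `d_sub`, and the choice of projection vector `v` are illustrative assumptions, not the paper's exact implementation.

```python
import torch


class Rank1CompressedLinear(torch.autograd.Function):
    """Sketch: a linear layer that saves compressed activations.

    Rather than storing the full input x for the backward pass, each token is
    split into sub-tokens and every sub-token is projected onto a fixed
    rank-1 direction v, leaving one scalar per sub-token. The backward pass
    coarsely reconstructs the input as (scalar * v) to form the weight
    gradient. All sizes and the choice of v are illustrative assumptions.
    """

    @staticmethod
    def forward(ctx, x, weight, v):
        # x: (batch, tokens, d_model); weight: (d_out, d_model); v: (d_sub,)
        out = x @ weight.t()

        d_sub = v.numel()
        # Split each token into sub-tokens of size d_sub.
        x_sub = x.reshape(*x.shape[:-1], -1, d_sub)   # (..., n_sub, d_sub)
        # Project every sub-token onto the fixed direction v -> one scalar each.
        coeffs = x_sub @ v                             # (..., n_sub)

        # Only the scalars (plus weight and v) are kept, not the full input.
        ctx.save_for_backward(coeffs, weight, v)
        ctx.x_shape = x.shape
        return out

    @staticmethod
    def backward(ctx, grad_out):
        coeffs, weight, v = ctx.saved_tensors

        # Coarse reconstruction: each sub-token is approximated by coeff * v.
        x_hat = (coeffs.unsqueeze(-1) * v).reshape(ctx.x_shape)

        grad_x = grad_out @ weight                                    # exact input gradient
        grad_w = grad_out.flatten(0, -2).t() @ x_hat.flatten(0, -2)   # approximate weight gradient
        return grad_x, grad_w, None


# Tiny usage example with made-up sizes.
if __name__ == "__main__":
    batch, tokens, d_model, d_out, d_sub = 2, 4, 16, 8, 4
    x = torch.randn(batch, tokens, d_model, requires_grad=True)
    w = torch.randn(d_out, d_model, requires_grad=True)
    v = torch.ones(d_sub) / d_sub ** 0.5   # fixed, normalized projection direction

    y = Rank1CompressedLinear.apply(x, w, v)
    y.sum().backward()
    print(w.grad.shape)   # approximate weight gradient, same shape as w
```

The memory saving in this sketch comes from storing one scalar per sub-token instead of `d_sub` activation values; the trade-off is that the weight gradient is computed from a coarse reconstruction rather than the exact input, which the paper reports does not degrade convergence in practice.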
Keywords
» Artificial intelligence » Fine tuning » Gradient descent » Large language model » Llama » Natural language processing