Summary of "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" by Fangzhao Zhang et al.
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models
by Fangzhao Zhang, Mert Pilanci
First submitted to arXiv on: 4 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Numerical Analysis (math.NA); Optimization and Control (math.OC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper’s original abstract, available on arXiv. |
| Medium | GrooveSquid.com (original content) | Low-Rank Adaptation (LoRA) is a popular method for fine-tuning pre-trained models by learning an additive low-rank trainable update. This work enhances LoRA training by introducing an r × r preconditioner in each gradient step, where r is the LoRA rank; the preconditioner is derived from a novel Riemannian metric on the space of low-rank matrices. Theoretical analysis shows that it stabilizes feature learning in the infinite-width neural network setting. In practice, the preconditioner requires only minimal changes to existing optimizer code and incurs negligible storage and runtime overhead (a minimal code sketch follows this table). The results demonstrate significant improvements in convergence and reliability for both SGD and AdamW on large language models and text-to-image diffusion models, and training becomes more robust to hyperparameter choices such as the learning rate. |
| Low | GrooveSquid.com (original content) | This paper improves LoRA (Low-Rank Adaptation), a popular way to fine-tune pre-trained models. The authors add a small preconditioning step to each update that helps the model learn features more stably. The change makes training more reliable with optimizers such as SGD and AdamW, works well on large language models and text-to-image models, and makes training less sensitive to settings such as the learning rate. |
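To make the preconditioned step concrete, here is a minimal PyTorch sketch of how an r × r-preconditioned SGD update on the LoRA factors might look. This is an illustration under stated assumptions, not the authors' released code: the function name `preconditioned_lora_step`, the damping term `delta`, and the learning rate are all hypothetical, and the sketch assumes the low-rank update is parameterized as ΔW = B @ A with gradients already populated by a backward pass.

```python
import torch

def preconditioned_lora_step(A, B, lr=1e-3, delta=1e-8):
    """One preconditioned SGD step on LoRA factors A (r x n) and B (m x r).

    Assumes the low-rank update is dW = B @ A and that A.grad and B.grad
    were populated by loss.backward(). `delta` is a small damping term
    for numerical stability; it and `lr` are illustrative values, not
    settings taken from the paper.
    """
    r = A.shape[0]
    I = torch.eye(r, device=A.device, dtype=A.dtype)
    with torch.no_grad():
        # The r x r Gram matrices of the factors are the only extra
        # quantities the step needs, hence the small overhead.
        gram_A = A @ A.T  # (r, r)
        gram_B = B.T @ B  # (r, r)
        # Scale each factor's gradient by the inverse (damped) Gram
        # matrix of the other factor:
        #   B <- B - lr * grad_B (A A^T + delta I)^{-1}
        #   A <- A - lr * (B^T B + delta I)^{-1} grad_A
        B -= lr * torch.linalg.solve(gram_A + delta * I, B.grad.T).T
        A -= lr * torch.linalg.solve(gram_B + delta * I, A.grad)
        B.grad = None
        A.grad = None

# Usage (after a forward/backward pass on the adapted model):
#   loss.backward()
#   preconditioned_lora_step(A, B)
```

Because the linear solves involve only r × r matrices, with r typically far smaller than the weight dimensions, the per-step cost is tiny, which is consistent with the paper's claim of negligible storage and runtime overhead.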
Keywords
* Artificial intelligence * Fine-tuning * Hyperparameter * LoRA * Low-rank adaptation * Neural network