
Summary of From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients, by Ajay Jaiswal et al.


From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients

by Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, Jiawei Zhao, Yuandong Tian, Zhangyang Wang

First submitted to arXiv on: 15 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
In this paper, researchers explore how the weights of Large Language Models (LLMs) can be expressed in low-rank format, which can significantly reduce computational resources and memory usage. The study reveals that different layers within LLMs exhibit varying levels of converged low-rank structure, requiring a non-uniform rank reduction across them to minimize the performance drop due to compression. To achieve this, the authors propose Weight Low-Rank Projection (WeLore), a unified technique for weight compression and memory-efficient fine-tuning. WeLore identifies suitable rank reduction ratios based on singular values and categorizes weight matrices into Low-rank Components (LRCs) and Non-Low-rank Components (N-LRCs). The authors demonstrate that LRCs tend to have better fine-tuning capabilities and can closely mimic the training loss trajectory and performance of full fine-tuning, with a notable reduction in memory and compute footprint.
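
To make the categorization step more concrete, below is a minimal PyTorch sketch of how singular values might be used to decide, per weight matrix, whether it is compressed as a Low-rank Component (LRC) or kept dense as a Non-Low-rank Component (N-LRC). The function name, energy threshold, and rank-budget cutoff are illustrative assumptions, not the paper's exact criteria.

```python
# Hypothetical sketch of singular-value-based weight categorization,
# loosely following the WeLore idea summarized above. The energy
# threshold and rank-budget cutoff are illustrative assumptions.
import torch

def categorize_and_compress(weight: torch.Tensor,
                            energy_threshold: float = 0.90,
                            max_rank_fraction: float = 0.50):
    """Classify a weight matrix as low-rank (LRC) or non-low-rank (N-LRC).

    A matrix is treated as an LRC if a small fraction of its singular
    values captures most of its spectral energy; LRCs are returned as
    low-rank factors, while N-LRCs are kept dense.
    """
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)

    # Smallest rank k whose singular values cover `energy_threshold`
    # of the total squared spectral energy.
    energy = torch.cumsum(S**2, dim=0) / torch.sum(S**2)
    k = int(torch.searchsorted(energy, energy_threshold).item()) + 1

    full_rank = S.numel()
    if k <= max_rank_fraction * full_rank:
        # Low-rank Component: keep only the top-k factors A @ B.
        A = U[:, :k] * S[:k]   # shape (out_features, k)
        B = Vh[:k, :]          # shape (k, in_features)
        return "LRC", (A, B), k
    # Non-Low-rank Component: spectrum decays too slowly to compress.
    return "N-LRC", weight, full_rank
```

Applying such a rule layer by layer would naturally yield the non-uniform rank reduction described above, since matrices with fast-decaying spectra end up with small ranks while others are left uncompressed.
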
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper is about finding a way to make Large Language Models smaller and faster. It’s like compressing a big file on your computer, but for language models that need to be really smart! The researchers found that some parts of the model are more important than others and can be made smaller without losing too much information. They came up with a new method called WeLore that can make these models faster and use less memory. It’s like a superpower for computers!

Keywords

* Artificial intelligence
* Fine-tuning