

OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning

by Pengxiang Li, Lu Yin, Xiaowei Gao, Shiwei Liu

First submitted to arXiv on: 28 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper at different levels of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Outlier-weighed Layerwise Sampled Low-Rank Projection (OwLore) is a memory-efficient approach to fine-tuning Large Language Models (LLMs), whose substantial size makes full fine-tuning prohibitively expensive in memory. Whereas low-rank adaptation (LoRA) saves memory at a cost in performance by confining updates to low-rank adapters, OwLore assigns higher sampling probabilities to layers with more outliers, samples only a few layers per iteration, and fine-tunes their pre-trained weights directly. To further boost performance while preserving memory efficiency, OwLore incorporates gradient low-rank projection. Extensive experiments show that OwLore consistently outperforms baseline approaches across architectures including LLaMa2, LLaMa3, and Mistral, achieving up to a 1.1% average accuracy gain on the Commonsense Reasoning benchmark, a 3.0% improvement on MMLU, and a notable 10% boost on MT-Bench.
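
To make the layerwise sampling idea concrete, here is a minimal PyTorch sketch. It is not the paper’s implementation: the magnitude-based outlier score, the threshold tau, and the number of sampled layers k are illustrative assumptions standing in for OwLore’s actual outlier metric and hyperparameters.

```python
import torch
import torch.nn as nn

def outlier_score(weight: torch.Tensor, tau: float = 5.0) -> float:
    # Fraction of entries whose magnitude exceeds tau * the mean magnitude.
    # A simple stand-in for the paper's layerwise outlier measure; tau is
    # an illustrative threshold, not a value taken from the paper.
    mag = weight.abs()
    return (mag > tau * mag.mean()).float().mean().item()

def sample_layers_to_finetune(layers, k: int = 4) -> list:
    # Layers with more outliers get proportionally higher sampling
    # probability; k layers are drawn without replacement and unfrozen
    # for the current fine-tuning iteration.
    scores = torch.tensor([outlier_score(m.weight) for m in layers]) + 1e-8
    probs = scores / scores.sum()
    return torch.multinomial(probs, num_samples=k, replacement=False).tolist()

# Usage: freeze everything, then unfreeze only the sampled layers.
layers = [nn.Linear(512, 512) for _ in range(12)]
for m in layers:
    m.requires_grad_(False)
for i in sample_layers_to_finetune(layers, k=4):
    layers[i].requires_grad_(True)
```

Because the sampled layers are trained at full rank rather than through adapters, each selected layer receives an unconstrained update, while the frozen layers contribute no optimizer memory.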
Low Difficulty Summary (original content by GrooveSquid.com)
OwLore is a new way to fine-tune large language models without using too much memory or computing power. This helps us build bigger, more accurate language models that can do things like answer questions about the world. The idea behind OwLore is to look at which parts of the model matter most and update only those parts instead of updating everything, which makes fine-tuning faster and lighter on memory.
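
The memory savings come partly from the gradient low-rank projection mentioned in the medium summary. The following is a minimal GaLore-style sketch, not OwLore’s exact recipe: the rank of 8, the plain SGD step, and the toy loss are assumptions for illustration, and real implementations also keep optimizer state (e.g., Adam moments) in the projected space and refresh the projection basis only periodically.

```python
import torch

def project_gradient(grad: torch.Tensor, rank: int = 8):
    # Build a rank-`rank` basis from the gradient's top left singular
    # vectors; in practice the basis is recomputed only every few hundred
    # steps so the SVD cost is amortized.
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]            # (m, r) projection matrix
    return P, P.T @ grad       # (r, n) compressed gradient

# Usage sketch: a plain SGD step carried out in the low-rank subspace.
weight = torch.randn(512, 512, requires_grad=True)
loss = (weight ** 2).sum()     # toy loss standing in for the LM training loss
loss.backward()

P, g_low = project_gradient(weight.grad, rank=8)
with torch.no_grad():
    weight += P @ (-1e-3 * g_low)   # project the update back to full shape
```

The optimizer only ever sees the small (r, n) gradient, so its state shrinks by roughly a factor of m / r per projected weight matrix.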

Keywords

» Artificial intelligence  » Fine-tuning  » LoRA  » Low-rank adaptation