


PC-LoRA: Low-Rank Adaptation for Progressive Model Compression with Knowledge Distillation

by Injoon Hwang, Haewon Park, Youngwan Lee, Jooyoung Yang, SunJae Maeng

First submitted to arxiv on: 13 Jun 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The paper's original abstract serves as the high difficulty summary; read it at the link on the paper's arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces Progressive Compression LoRA (PC-LoRA), a novel approach that leverages Low-Rank Adaptation (LoRA) to simultaneously compress and fine-tune models. Unlike traditional LoRA methods, which keep the frozen pre-trained weights throughout fine-tuning, PC-LoRA gradually removes these weights during training, ultimately replacing them with low-rank adapters. This compression-fine-tuning hybrid achieves parameter/FLOPs reductions of 94.36%/89.1% for vision models such as ViT-B and 93.42%/84.2% for language models such as BERT. By eliminating the pre-trained weights from the final model, PC-LoRA yields much smaller networks while maintaining performance.
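The core idea, gradually attenuating the frozen pre-trained weights while trainable low-rank adapters take over, can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the class name `PCLoRALinear`, the linear decay schedule, and all dimensions are assumptions for the sake of the example (the paper's actual schedule and initialization may differ).

```python
import numpy as np

class PCLoRALinear:
    """Hypothetical sketch of a PC-LoRA linear layer.

    Output at training step t:  decay(t) * (W0 @ x) + B @ (A @ x)
    The frozen pre-trained weight W0 is scaled by a factor that goes
    from 1 to 0 over training; at the end only the low-rank adapters
    A and B remain, so W0 can be discarded entirely (compression).
    """

    def __init__(self, d_in, d_out, rank, total_steps, seed=0):
        rng = np.random.default_rng(seed)
        self.W0 = rng.standard_normal((d_out, d_in))        # frozen pre-trained weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01   # trainable low-rank factor
        self.B = np.zeros((d_out, rank))                    # trainable, zero-initialized
        self.total_steps = total_steps

    def decay(self, step):
        # Assumed linear schedule from 1 down to 0; the paper's exact
        # schedule is not reproduced here.
        return max(0.0, 1.0 - step / self.total_steps)

    def forward(self, x, step):
        lam = self.decay(step)
        return lam * (self.W0 @ x) + self.B @ (self.A @ x)

layer = PCLoRALinear(d_in=8, d_out=4, rank=2, total_steps=100)
x = np.ones(8)
y_start = layer.forward(x, step=0)    # full pre-trained contribution plus adapters
y_end = layer.forward(x, step=100)    # adapters only; W0 no longer contributes
```

At `step=100` the pre-trained term vanishes, so the deployed model needs only the small `A` and `B` matrices, which is where the reported parameter and FLOPs reductions come from.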
Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine being able to make computers smarter without using up too much power or storage space. This paper shows how to do just that by creating a new way to adapt and compress large AI models. Instead of needing the entire model, this method focuses on a few key parts that can still help the computer learn. This makes it more efficient and useful for real-world applications.

Keywords

» Artificial intelligence  » BERT  » Fine-tuning  » LoRA  » ViT