Summary of Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey, by Yi Xin et al.


Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey

by Yi Xin, Siqi Luo, Haodi Zhou, Junlong Du, Xiaohong Liu, Yue Fan, Qing Li, Yuntao Du

First submitted to arxiv on: 3 Feb 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com, original content)
The survey provides a comprehensive overview of parameter-efficient fine-tuning (PEFT) for visual tasks, which seeks to exceed the performance of full fine-tuning while modifying only a small number of parameters. The paper gives a formal definition of PEFT, reviews common pre-training methods, and categorizes existing PEFT methods into addition-based, partial-based, and unified-based approaches. It also introduces commonly used datasets and applications, and suggests potential future research challenges.

Low Difficulty Summary (GrooveSquid.com, original content)
Large-scale pre-trained vision models adapt well to a wide range of tasks. However, as these models grow to billions or even trillions of parameters, full fine-tuning becomes unsustainable due to its high computational and storage demands. This survey looks at parameter-efficient fine-tuning, which aims to exceed the performance of full fine-tuning with minimal parameter changes. The paper defines PEFT, discusses pre-training methods, and explores the addition-based, partial-based, and unified-based families of approaches. It also covers common datasets and applications.

Keywords

* Artificial intelligence  * Fine tuning  * Parameter efficient