
LPT: Long-tailed Prompt Tuning for Image Classification

by Bowen Dong, Pan Zhou, Shuicheng Yan, Wangmeng Zuo

First submitted to arXiv on: 3 Oct 2022

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed Long-tailed Prompt Tuning (LPT) method addresses the high computational cost and overfitting issues of fine-tuning for long-tailed classification by introducing trainable prompts into a frozen pretrained model. The approach uses two groups of prompts, shared and group-specific, learned through a two-phase training paradigm: the shared prompt first adapts the pretrained model to the target domain, and the group-specific prompts then gather discriminative features for samples with similar characteristics. By tuning only a few prompts while keeping the rest of the model fixed, LPT reduces training and deployment costs, retains strong generalization ability, and achieves performance comparable to previous whole-model fine-tuning methods on various long-tailed benchmarks.
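To make the mechanism above concrete, here is a minimal PyTorch sketch of prompt tuning with a shared prompt and a pool of group-specific prompts on a frozen backbone. The class name LPTSketch, the generic transformer encoder, the mean-token query, the cosine-similarity group selection, and all hyperparameters are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LPTSketch(nn.Module):
    """Illustrative prompt tuning with shared and group-specific prompts."""

    def __init__(self, backbone, embed_dim=192, prompt_len=4,
                 num_groups=8, num_classes=100):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # the pretrained model stays frozen
        # Phase 1 parameter: one shared prompt adapts features to the target domain.
        self.shared_prompt = nn.Parameter(torch.zeros(prompt_len, embed_dim))
        # Phase 2 parameters: group-specific prompts plus keys used to pick a group.
        # Note: argmax selection below is non-differentiable, so a separate
        # key-matching loss would be needed to train the keys in practice.
        self.group_prompts = nn.Parameter(torch.zeros(num_groups, prompt_len, embed_dim))
        self.group_keys = nn.Parameter(torch.randn(num_groups, embed_dim))
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):               # tokens: (B, N, D) patch embeddings
        B = tokens.size(0)
        shared = self.shared_prompt.expand(B, -1, -1)
        # Match each sample's mean feature against the group keys (an assumed
        # stand-in for the paper's query-based prompt selection).
        query = tokens.mean(dim=1)                                        # (B, D)
        sims = F.cosine_similarity(query.unsqueeze(1),
                                   self.group_keys.unsqueeze(0), dim=-1)  # (B, G)
        group = self.group_prompts[sims.argmax(dim=1)]                    # (B, L, D)
        # Prepend both prompt groups and run the frozen backbone.
        x = torch.cat([shared, group, tokens], dim=1)
        feat = self.backbone(x).mean(dim=1)                               # pooled feature
        return self.head(feat)


# Usage: only the prompt, key, and classifier parameters are left trainable.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=192, nhead=4, batch_first=True),
    num_layers=2)
model = LPTSketch(backbone)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
logits = model(torch.randn(8, 16, 192))      # 8 samples, 16 tokens each
```

A faithful implementation would also split training into the two phases the summary describes: first optimize the shared prompt alone, then freeze it and optimize the group-specific prompts.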
Low Difficulty Summary (written by GrooveSquid.com, original content)
Long-tailed classification is important because it helps machines understand data where some categories have many examples and others have very few. Many current methods take a big model that's trained on lots of data, then adjust the entire model for new data. However, this can be slow and can make the model worse at handling anything beyond that new data. A new method called Long-tailed Prompt Tuning tries to solve these problems by adding special trainable prompts to a frozen model. These prompts help the model learn about specific groups of data and become better at making predictions. By only changing a few parts of the model, LPT makes it faster and cheaper to use, while still doing well on many different types of data.

Keywords

* Artificial intelligence  * Classification  * Fine-tuning  * Generalization  * Overfitting  * Prompt