
Summary of Few-shot Class Incremental Learning with Attention-aware Self-adaptive Prompt, by Chenxi Liu et al.


Few-Shot Class Incremental Learning with Attention-Aware Self-Adaptive Prompt

by Chenxi Liu, Zhenyi Wang, Tianyi Xiong, Ruibo Chen, Yihan Wu, Junfeng Guo, Heng Huang

First submitted to arXiv on: 14 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes Attention-aware Self-adaptive Prompt (ASP), a novel framework for Few-Shot Class-Incremental Learning (FSCIL). Existing FSCIL methods fine-tune the entire backbone, which leads to overfitting, while recent prompt-based continual-learning approaches alleviate forgetting but rely on abundant data to train their prompts. ASP instead learns task-invariant prompts that capture knowledge shared across tasks and supplies task-relevant information through self-adaptive task-specific prompts. This design avoids overfitting on the base task and does not require large amounts of data in the few-shot incremental tasks. Extensive experiments on three benchmark datasets show that ASP outperforms state-of-the-art FSCIL and prompt-based CIL methods at both learning new classes and mitigating forgetting.

Low Difficulty Summary (written by GrooveSquid.com, original content)
FSCIL models must learn new classes from very little data while keeping their knowledge of old ones. Current methods often fine-tune the whole model, which tends to overfit and erase what was learned earlier. Newer approaches use prompts to help the model remember old things. This paper introduces a new method called ASP that helps prompts capture shared knowledge and adapt to new tasks. It's like having two sets of prompts: one for general knowledge and one for task-specific details. Together, they make the model work better in situations where there isn't much data.
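The two-prompt idea described in the summaries above can be sketched in code. This is an illustrative toy sketch, not the authors' implementation: all names, dimensions, and the softmax-over-keys weighting scheme are assumptions made here to show how a shared task-invariant prompt and a self-adaptively weighted task-specific prompt could be combined before being fed to a frozen backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

class ASPPromptSketch:
    """Toy sketch (not the paper's code): combines a shared task-invariant
    prompt with a self-adaptive, input-dependent task-specific prompt."""

    def __init__(self, embed_dim=8, prompt_len=2, n_keys=4):
        # Task-invariant prompt: one set of tokens shared across all tasks,
        # intended to capture knowledge common to every class.
        self.invariant_prompt = rng.standard_normal((prompt_len, embed_dim))
        # A small pool of task-specific prompts, each paired with a key vector.
        self.keys = rng.standard_normal((n_keys, embed_dim))
        self.specific_prompts = rng.standard_normal((n_keys, prompt_len, embed_dim))

    def __call__(self, feature):
        # Self-adaptive weighting (an assumption of this sketch): similarity
        # between the input feature and each key gives softmax weights over
        # the pool, so the specific prompt adapts to the current input.
        sims = self.keys @ feature
        w = np.exp(sims - sims.max())
        w /= w.sum()
        specific = np.tensordot(w, self.specific_prompts, axes=1)
        # Both prompt types would be prepended to the frozen backbone's
        # token sequence; here we just return the combined prompt tokens.
        return np.concatenate([self.invariant_prompt, specific], axis=0)

prompts = ASPPromptSketch()(rng.standard_normal(8))
print(prompts.shape)  # (prompt_len tokens from each of the two prompt types)
```

Only the prompts (and keys) would be trained in such a scheme, which is why it sidesteps the overfitting that comes from fine-tuning the whole backbone on a few-shot task.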

Keywords

  • Artificial intelligence
  • Attention
  • Few shot
  • Overfitting
  • Prompt