Summary of Visual Prompt Tuning in Null Space for Continual Learning, by Yue Lu et al.
Visual Prompt Tuning in Null Space for Continual Learning
by Yue Lu, Shizhou Zhang, De Cheng, Yinghui Xing, Nannan Wang, Peng Wang, Yanning Zhang
First submitted to arXiv on: 9 Jun 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on the paper's arXiv page. |
Medium | GrooveSquid.com (original content) | A novel approach to continual learning (CL) in vision-transformer models is proposed, based on orthogonal prompt tuning to overcome catastrophic forgetting. Each new task is learned by tuning the prompts in a direction orthogonal to the subspace spanned by previous tasks' features, so that new updates do not interfere with tasks already learned. The main challenges are the high-order, non-linear self-attention operation and the prompt distribution drift caused by LayerNorm. The authors theoretically deduce two consistency conditions under which prompt gradient orthogonal projection is achieved, guaranteeing that interference through the self-attention mechanism is eliminated, and they propose an effective null-space-based approximation for practical implementation (see the illustrative sketch below the table). Experiments on four class-incremental benchmarks with diverse pre-trained baseline models show the approach outperforming state-of-the-art methods. |
Low | GrooveSquid.com (original content) | This paper tackles a big problem in machine learning called catastrophic forgetting. When a computer learns many things and then needs to learn something new, it often forgets what it learned before! The researchers came up with a clever idea: make the computer learn the new thing in a way that doesn't disturb what it already knows. They did this by carefully adjusting the small hints (prompts) the computer uses when it looks at pictures. This helped the computer learn new things without forgetting the old ones. They tested their idea and it worked better than other methods, so computers can now learn many things in a row without getting confused. |
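
To make the "orthogonal to previous tasks' features" idea concrete, here is a minimal, generic sketch of null-space gradient projection in Python/PyTorch. It is not the paper's actual algorithm: it ignores the self-attention-specific consistency conditions and the LayerNorm drift the paper addresses, and the function name `null_space_projection`, the toy dimensions, and the random stand-in tensors are illustrative assumptions.

```python
import torch

def null_space_projection(feature_matrix: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Return a projector onto the null space of the row space of `feature_matrix`.

    `feature_matrix` stacks (approximations of) previous tasks' features,
    one feature vector per row, with dimension d equal to the prompt dimension.
    """
    # SVD of the accumulated features: the rows span the "old tasks" subspace.
    _, s, vh = torch.linalg.svd(feature_matrix, full_matrices=True)
    # Right singular vectors with (near-)zero singular values span the null space.
    rank = int((s > eps * s.max()).sum()) if s.numel() > 0 else 0
    null_basis = vh[rank:]                    # shape (d - rank, d)
    return null_basis.T @ null_basis          # (d, d) projector: P @ g lies in the null space

# Toy usage: project a prompt gradient so the update cannot disturb old-task features.
d = 16                                        # prompt embedding dimension (toy value)
old_features = torch.randn(8, d)              # stand-in for previous tasks' feature rows
grad = torch.randn(d)                         # stand-in for the raw prompt gradient
P = null_space_projection(old_features)
projected_grad = P @ grad                     # use this instead of `grad` in the optimizer step

# Sanity check: old features are (numerically) unaffected by the projected update direction.
print(torch.allclose(old_features @ projected_grad, torch.zeros(8), atol=1e-4))
```

In this simplified picture, any parameter update taken along `projected_grad` leaves the responses to previously seen features unchanged; the paper's contribution is showing what the analogous conditions and projection look like for prompts interacting through self-attention, and how to approximate them in practice.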
Keywords
» Artificial intelligence » Continual learning » Machine learning » Prompt » Self attention » Vision transformer