Summary of One-stage Prompt-based Continual Learning, by Youngeun Kim et al.
One-stage Prompt-based Continual Learning
by Youngeun Kim, Yuhang Li, Priyadarshini Panda
First submitted to arXiv on: 25 Feb 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This research introduces a one-stage Prompt-based Continual Learning (PCL) framework that cuts computational cost by about 50% while maintaining accuracy. It removes the extra feed-forward stage required by traditional two-stage PCL approaches by using intermediate-layer token embeddings directly as prompt queries. The authors also propose a Query-Pool Regularization (QR) loss that regulates the relationship between prompt queries and the prompt pool, improving representation power. Because the QR loss is applied only during training, the 50% cost reduction carries over to inference time. On public benchmarks such as CIFAR-100, ImageNet-R, and DomainNet, the approach outperforms prior two-stage PCL methods by 1.4% (see the illustrative sketch below the table). |
| Low | GrooveSquid.com (original content) | This research makes artificial intelligence (AI) better at learning new things without forgetting what it already knows. It improves an approach called Prompt-based Continual Learning (PCL). The old way of doing PCL was slow and used too much computing power, so the researchers found a faster way that uses less. They also developed a special trick to make the AI learn better. The new method works well on many different kinds of images, making it useful for things like self-driving cars or recognizing animals. |
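For readers who want a concrete picture of the two ideas in the medium summary, here is a minimal PyTorch-style sketch. It is an illustration under assumed shapes and names (`OneStagePromptModule`, `qr_loss`, and all pool/key sizes are hypothetical), not the authors' implementation:

```python
import torch
import torch.nn.functional as F

class OneStagePromptModule(torch.nn.Module):
    """Sketch of the one-stage idea: the prompt query is taken directly
    from an intermediate layer's token embeddings, so no separate query
    feed-forward pass through the backbone is needed."""

    def __init__(self, pool_size=10, prompt_len=5, embed_dim=768, top_k=3):
        super().__init__()
        self.top_k = top_k
        # Learnable prompt pool with one matching key per pooled prompt.
        self.prompts = torch.nn.Parameter(torch.randn(pool_size, prompt_len, embed_dim))
        self.keys = torch.nn.Parameter(torch.randn(pool_size, embed_dim))

    def forward(self, hidden_states):
        # hidden_states: (B, N, D) tokens from an intermediate ViT block.
        query = hidden_states[:, 0]  # use the [CLS] token embedding as the query
        # Query-to-key cosine similarity: (B, pool_size).
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)
        idx = sim.topk(self.top_k, dim=-1).indices        # (B, top_k)
        selected = self.prompts[idx].flatten(1, 2)        # (B, top_k * prompt_len, D)
        # Prepend the selected prompts for the remaining transformer blocks.
        return torch.cat([selected, hidden_states], dim=1), sim

def qr_loss(sim, top_k=3):
    """Hypothetical query-pool regularization, applied only during training:
    push the query's similarity to its matched keys toward 1. The paper's
    exact QR formulation may differ from this simplified form."""
    matched = sim.topk(top_k, dim=-1).values
    return (1.0 - matched).mean()
```

Because `qr_loss` touches only the training objective, dropping it at inference leaves the single forward pass untouched, which is where the claimed cost saving over two-stage PCL comes from.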
Keywords
» Artificial intelligence » Continual learning » Inference » Prompt » Regularization » Token