Summary of PASS++: A Dual Bias Reduction Framework for Non-Exemplar Class-Incremental Learning, by Fei Zhu et al.
PASS++: A Dual Bias Reduction Framework for Non-Exemplar Class-Incremental Learning
by Fei Zhu, Xu-Yao Zhang, Zhen Cheng, Cheng-Lin Liu
First submitted to arXiv on: 19 Jul 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper presents a novel approach to class-incremental learning (CIL), which aims to recognize new classes while preserving the discriminability of old classes without retraining on old data. The authors identify two inherent problems that cause catastrophic forgetting in CIL: representation bias and classifier bias. To reduce both, they propose a dual bias reduction framework that applies self-supervised transformation (SST) in input space and prototype augmentation (protoAug) in deep feature space: SST alleviates representation bias by encouraging generic, diverse representations, while protoAug counters classifier bias by augmenting stored prototypes of old classes (see the code sketch after this table). The authors further propose hardness-aware prototype augmentation and multi-view ensemble strategies, which yield significant additional improvements. The framework integrates easily with pre-trained models and performs comparably to state-of-the-art exemplar-based approaches without storing any samples of old classes. |
Low | GrooveSquid.com (original content) | The paper is about a new way for a model to learn new things while staying good at the things it learned before. This is called class-incremental learning (CIL). The authors found two big problems with CIL: representation bias and classifier bias. They propose a simple solution to these problems using self-supervised transformation (SST) and prototype augmentation (protoAug). These techniques help the model learn new things without forgetting what it already knows. The authors also add extra ideas, like paying more attention to the old classes that are easiest to confuse with new ones, which makes the method even better. |
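To make the two components concrete, here is a minimal, hypothetical PyTorch sketch of the ideas described in the medium summary. It assumes a rotation-based SST (one common way to realize self-supervised transformation) and Gaussian sampling around class-mean prototypes for protoAug; the names `backbone`, `classifier`, `prototypes`, and `radius`, and the convention of mapping a prototype to the 0-degree slot of the 4x-expanded label space, are illustrative assumptions, not the authors' exact implementation. The hardness-aware variant and the multi-view ensemble are omitted.

```python
import torch
import torch.nn.functional as F

def sst_batch(x, y):
    """Self-supervised transformation (SST), input space.

    Rotates each image by 0/90/180/270 degrees and gives each rotation
    its own label, so a task with C classes is trained over 4*C joint
    (class, rotation) labels. This is one common SST recipe; the
    paper's exact transform set may differ.
    """
    xs, ys = [], []
    for k in range(4):
        xs.append(torch.rot90(x, k, dims=(2, 3)))  # rotate NCHW batch
        ys.append(y * 4 + k)                       # joint label index
    return torch.cat(xs), torch.cat(ys)

def proto_aug_loss(classifier, prototypes, radius, batch_size):
    """Prototype augmentation (protoAug), deep feature space.

    Instead of storing old images, keep one mean feature (prototype)
    per old class and sample pseudo-features around it with Gaussian
    noise of scale `radius`; training the classifier on these samples
    helps retain old decision boundaries.
    """
    cls = torch.randint(0, prototypes.size(0), (batch_size,))
    base = prototypes[cls]
    feats = base + radius * torch.randn_like(base)
    logits = classifier(feats)
    # Map each class index to its 0-degree slot in the 4x label space
    # used by sst_batch (an assumed convention).
    return F.cross_entropy(logits, cls * 4)

# Usage in one training step on a new-task batch (x, y), assuming
# `backbone` and `classifier` modules and stored `prototypes`/`radius`:
#   x_aug, y_aug = sst_batch(x, y)
#   loss = F.cross_entropy(classifier(backbone(x_aug)), y_aug)
#   loss = loss + proto_aug_loss(classifier, prototypes, radius, x.size(0))
```

Keeping only one prototype vector per old class is what makes the approach non-exemplar: memory grows with the number of classes rather than with the number of stored images.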
Keywords
* Artificial intelligence
* Self-supervised