Summary of Class-incremental Novel Class Discovery, by Subhankar Roy et al.
Class-incremental Novel Class Discovery
by Subhankar Roy, Mingxuan Liu, Zhun Zhong, Nicu Sebe, Elisa Ricci
First submitted to arXiv on: 18 Jul 2022
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract, available on arXiv |
Medium | GrooveSquid.com (original content) | The paper tackles class-incremental Novel Class Discovery (class-iNCD): discovering new categories in an unlabelled dataset by leveraging a model pre-trained on a labelled dataset of related categories, while retaining recognition of the base classes. Inspired by rehearsal-based incremental learning methods, the proposed approach prevents forgetting of past information about the base classes by jointly exploiting base-class feature prototypes and feature-level knowledge distillation. In addition, a self-training clustering strategy discovers the novel categories and trains a joint classifier over both base and novel classes (a rough sketch of these components appears after this table). Experiments on three benchmarks show that the method outperforms state-of-the-art approaches. |
Low | GrooveSquid.com (original content) | We study how to find new categories in data by using an old model that was trained on different but related categories. The goal is not only to discover these new categories but also to remember what we learned about the older ones. To do this, we propose a new approach that combines ideas from rehearsal-based incremental learning and feature-level knowledge distillation. We also use self-training clustering to group new categories together and train a single classifier for both old and new categories. Our experiments show that our method works better than existing approaches on three common datasets. |
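To make the mechanics more concrete, here is a minimal PyTorch-style sketch of the three ingredients named in the summaries: feature-level knowledge distillation from the frozen pre-trained encoder, rehearsal of stored base-class feature prototypes, and self-training on pseudo-labelled data through a joint classifier. It is a reading aid under stated assumptions, not the authors' implementation; every class, function, and parameter name below is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative names only: `JointHead`, `feature_kd_loss`, `prototype_replay_loss`
# and `self_training_loss` are assumptions made for this sketch, not identifiers
# from the paper or its released code.

class JointHead(nn.Module):
    """A single linear classifier covering both base and novel classes."""
    def __init__(self, feat_dim: int, num_base: int, num_novel: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_base + num_novel)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.fc(feats)

def feature_kd_loss(feats_new: torch.Tensor, feats_old: torch.Tensor) -> torch.Tensor:
    # Feature-level knowledge distillation: keep the current encoder's features
    # close to those of the frozen encoder pre-trained on the base classes.
    return F.mse_loss(feats_new, feats_old.detach())

def prototype_replay_loss(head: JointHead, prototypes: torch.Tensor,
                          proto_labels: torch.Tensor) -> torch.Tensor:
    # Rehearsal with stored base-class feature prototypes: classify the
    # prototypes with the joint head so the base classes are not forgotten.
    return F.cross_entropy(head(prototypes), proto_labels)

def self_training_loss(head: JointHead, feats_unlabelled: torch.Tensor,
                       num_base: int, threshold: float = 0.9) -> torch.Tensor:
    # Self-training on unlabelled data: pseudo-label confident samples using
    # the novel part of the joint head, then train on those pseudo-labels.
    logits = head(feats_unlabelled)
    probs = logits[:, num_base:].softmax(dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf > threshold
    if mask.sum() == 0:
        return logits.new_zeros(())  # no confident samples in this batch
    return F.cross_entropy(logits[mask], pseudo[mask] + num_base)
```

In a discovery phase, these terms would typically be summed with tunable weights while the old encoder stays frozen to provide distillation targets; the specific weights, confidence threshold, and prototype construction are design choices not specified in this summary.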
Keywords
* Artificial intelligence
* Clustering
* Knowledge distillation
* Self-training