
Few-Shot Class-Incremental Learning with Prior Knowledge

by Wenhao Jiang, Duo Li, Menghan Hu, Guangtao Zhai, Xiaokang Yang, Xiao-Ping Zhang

First submitted to arXiv on: 2 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read it on arXiv.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes Learning with Prior Knowledge (LwPK), a novel approach to enhancing the generalization ability of pre-trained models in few-shot class-incremental learning (FSCIL). FSCIL aims to learn new classes while preserving knowledge of previous ones, but current methods often overlook the role of the pre-trained model. LwPK introduces prior knowledge via a few unlabeled data points from subsequent incremental classes: these are clustered, pseudo-labeled, and then trained jointly with the labeled base-class samples. This effectively allocates embedding space for both old and new class data, improving the model’s resilience against catastrophic forgetting. Experimental results demonstrate the effectiveness of LwPK, backed by theoretical analysis based on empirical risk minimization and class distance measurement.
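To make the pipeline concrete, here is a minimal sketch of the idea described above: cluster a handful of unlabeled samples from upcoming classes, pseudo-label each cluster, and train jointly with the labeled base-class data. Everything in it is an illustrative assumption (toy Gaussian embeddings, scikit-learn's KMeans, and a logistic-regression head standing in for the pre-trained model's classifier); it is a sketch of the technique, not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup: 3 labeled base classes, plus 2 future incremental classes for
# which we only have a few unlabeled samples. All sizes are illustrative.
n_base, n_inc, dim = 3, 2, 16
base_X = np.concatenate([rng.normal(loc=2.0 * c, size=(100, dim)) for c in range(n_base)])
base_y = np.repeat(np.arange(n_base), 100)
unlabeled_X = np.concatenate([rng.normal(loc=10.0 + 2.0 * c, size=(20, dim)) for c in range(n_inc)])

# Step 1: cluster the unlabeled incremental-class samples.
clusters = KMeans(n_clusters=n_inc, n_init=10, random_state=0).fit_predict(unlabeled_X)

# Step 2: convert cluster ids into pseudo-labels that continue after the
# base-class ids, reserving label (and embedding) space for classes that
# will only formally arrive later.
pseudo_y = clusters + n_base

# Step 3: train one classifier jointly on labeled base data and
# pseudo-labeled incremental data (a logistic-regression head stands in
# for the pre-trained model's classifier here).
X_joint = np.vstack([base_X, unlabeled_X])
y_joint = np.concatenate([base_y, pseudo_y])
clf = LogisticRegression(max_iter=1000).fit(X_joint, y_joint)

print("class ids allocated in the head:", clf.classes_)  # base ids + reserved new ids

Giving the pseudo-labels ids that continue past the base classes is what reserves room in the classifier for classes the model has not yet learned from labeled data.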
Low Difficulty Summary (written by GrooveSquid.com, original content)
This research paper tackles two big problems in machine learning: “forgetting” old knowledge when learning new things, and overfitting, where a model becomes too tuned to its training data. The team proposes a way to use a few examples of the new classes that will come later, even though they aren’t labeled, to help the model learn better. They do this by grouping similar unlabeled examples together, giving each group a temporary label, and using them to train the model along with the labeled data it already has. This helps the model remember what it learned before while leaving room for new things. The results show that the method works well and can improve machine learning models.

Keywords

  • Artificial intelligence
  • Embedding space
  • Few shot
  • Generalization
  • Machine learning