
Summary of CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning, by Junghun Oh et al.


CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning

by Junghun Oh, Sungyong Baik, Kyoung Mu Lee

First submitted to arXiv on: 8 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper tackles the challenging problem of few-shot class-incremental learning (FSCIL), where a model must learn new classes from only a few samples while preserving knowledge of previously learned classes. To reduce overfitting and forgetting, researchers often freeze a feature extractor trained on the base classes and focus on learning base-class representations that are both transferable to new classes and discriminative among known ones. Building on recent efforts to enhance transferability, the paper finds that encouraging features to spread within a more confined feature space strikes a better balance between these two goals. Contrary to prior belief, the authors argue that smaller inter-class distances are beneficial for FSCIL. Empirical results and an interpretation based on information bottleneck theory support this representation learning approach, while also raising new research questions and suggesting alternative directions for FSCIL.
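To make the setup referred to above more concrete, the sketch below shows the standard frozen-backbone, prototype-based FSCIL pipeline the summary describes: a feature extractor trained on base classes is frozen, each new class is represented by the mean of its few support features, and test samples are classified by cosine similarity to the nearest prototype. This is a minimal illustrative example, not the authors' code; the toy backbone, layer sizes, and helper names are assumptions, and the CLOSER-specific training objective (pulling class representations closer while spreading features) is not reproduced here.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy frozen feature extractor standing in for a backbone trained on base classes.
backbone = torch.nn.Sequential(
    torch.nn.Linear(32, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 16),
)
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)


def extract(x):
    """Return L2-normalized features so cosine similarity becomes a dot product."""
    with torch.no_grad():
        return F.normalize(backbone(x), dim=-1)


# Prototypes accumulated across sessions: one entry per class seen so far
# (index i corresponds to class label i, since classes arrive in order here).
prototypes = []


def add_classes(support_x, support_y):
    """Register new classes from a session by averaging their support features."""
    feats = extract(support_x)
    for cls in support_y.unique(sorted=True):
        prototypes.append(feats[support_y == cls].mean(dim=0))


def classify(x):
    """Nearest-prototype prediction over all classes learned so far."""
    feats = extract(x)                                      # (batch, dim)
    protos = F.normalize(torch.stack(prototypes), dim=-1)   # (classes, dim)
    return (feats @ protos.T).argmax(dim=-1)


# Base session: 5 classes with plenty of data (the representation is learned here).
base_x, base_y = torch.randn(100, 32), torch.randint(0, 5, (100,))
add_classes(base_x, base_y)

# Incremental session: 2 new classes, 5 support samples each; the backbone stays frozen.
new_x = torch.randn(10, 32)
new_y = torch.arange(5, 7).repeat_interleave(5)
add_classes(new_x, new_y)

print(classify(torch.randn(4, 32)))  # predictions over all 7 classes seen so far
```

In this kind of pipeline only the prototype list grows across sessions, which is why the quality of the base-class representation, the focus of the paper, largely determines incremental performance.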
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about teaching machines to learn new things quickly, even when they don’t have much data. The challenge is to teach them without forgetting what they already know. Researchers usually freeze a feature extractor that was trained on old data, so the machine does not overfit to the few new examples or forget what it already learned. The main goal is to make sure the machine’s representation of the world is good at transferring to new situations and at telling things apart. By building upon recent ideas, this paper finds that keeping different classes close together in that representation helps achieve both goals. This goes against what people thought before: they assumed it would be better if classes were kept far apart. The results from this study show why this approach works and suggest new areas for research.

Keywords

» Artificial intelligence  » Few shot  » Overfitting  » Representation learning  » Transferability