
Generative Multi-modal Models are Good Class-Incremental Learners

by Xusheng Cao, Haori Lu, Linlan Huang, Xialei Liu, Ming-Ming Cheng

First submitted to arXiv on: 27 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses the challenge of catastrophic forgetting in class-incremental learning (CIL), where discriminative models struggle to retain knowledge from previous tasks. The authors propose a generative multi-modal model (GMM) framework that directly generates labels for images using an adapted generative model. Text features are extracted from the generated textual information, and feature matching selects the most similar class label as the classification prediction (a minimal sketch of this matching step follows the summaries below). The GMM framework achieves significantly better results in long-sequence task scenarios, outperforming state-of-the-art methods by at least 14% accuracy under Few-shot CIL settings. The code is available on GitHub.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper solves a big problem called “catastrophic forgetting” that happens when we try to learn new things but forget old ones. The researchers found a way to use special models that can generate text and images together, which helps them remember what they learned before. This means they don’t forget as much as usual! They tested their idea and it worked really well.

Keywords

* Artificial intelligence
* Classification
* Few shot
* Generative model
* Multi modal