Summary of CEAT: Continual Expansion and Absorption Transformer for Non-Exemplar Class-Incremental Learning, by Xinyuan Gao et al.


CEAT: Continual Expansion and Absorption Transformer for Non-Exemplar Class-Incremental Learning

by Xinyuan Gao, Songlin Dong, Yuhang He, Xing Wei, Yihong Gong

First submitted to arXiv on: 11 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a new architecture called Continual Expansion and Absorption Transformer (CEAT) to tackle the plasticity-stability dilemma in continual learning. CEAT learns new tasks without forgetting old knowledge by adding expanded-fusion layers in parallel with the frozen previous parameters, then absorbing the extended parameters back into the backbone so the parameter count does not grow (minimal sketches of this expand-and-absorb step and of pseudo-feature generation follow these summaries). The model also incorporates a novel prototype contrastive loss to reduce the overlap between old and new classes in feature space. To address classifier bias towards new classes, a pseudo-feature generation approach is proposed. Experiments on three standard non-exemplar class-incremental learning (NECIL) benchmarks demonstrate significant improvements over previous works.

Low Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, researchers created a new way for machines to learn and remember information without forgetting what they already know. This is important because in real-world situations, machines need to be able to learn new things while still remembering old ones. The team called their new approach the Continual Expansion and Absorption Transformer (CEAT). CEAT helps machines learn by adding new layers that work together with old ones, then combining them to keep the total number of “brain cells” the same. To make sure the machine doesn’t get too good at recognizing new information and forget what it already knows, the team added a special trick called prototype contrastive loss. This helps the machine remember all the different types of things it has seen before. The researchers tested their approach on three big datasets and found that it worked much better than previous methods.
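
The snippet below is not the authors' implementation; it is a minimal PyTorch sketch, assuming a plain linear layer, of the expand-and-absorb idea described in the medium summary: a trainable branch is added in parallel with the frozen previous parameters, then folded back into the backbone so the parameter count stays constant. The class name ExpandedFusionLinear and all sizes are illustrative.

```python
# Minimal sketch (names and structure are illustrative, not the authors' code).
import torch
import torch.nn as nn

class ExpandedFusionLinear(nn.Module):
    """A frozen base linear layer with a trainable parallel expansion branch."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.expansion = nn.Linear(in_features, out_features, bias=False)
        nn.init.zeros_(self.expansion.weight)  # zero-init: behaves like base at first
        for p in self.base.parameters():       # freeze previously learned parameters
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Parallel fusion: frozen old path plus trainable expansion path.
        return self.base(x) + self.expansion(x)

    @torch.no_grad()
    def absorb(self) -> None:
        # Fold the trained expansion back into the base weights and reset it,
        # so the effective parameter count stays constant across tasks.
        self.base.weight += self.expansion.weight
        nn.init.zeros_(self.expansion.weight)

# Usage: train only layer.expansion on the new task, then absorb it.
layer = ExpandedFusionLinear(768, 768)
out = layer(torch.randn(4, 768))
layer.absorb()
```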
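
The summaries also mention generating pseudo-features for old classes to counter classifier bias. The paper's exact generation scheme is not given here, so the sketch below assumes the common non-exemplar practice of sampling Gaussian noise around stored class prototypes (one mean feature per old class); the function generate_pseudo_features and its parameters are hypothetical.

```python
# Minimal sketch (assumes prototype-plus-noise sampling; the paper's exact
# pseudo-feature generation scheme may differ).
import torch

def generate_pseudo_features(prototypes: torch.Tensor,
                             radius: float = 0.1,
                             samples_per_class: int = 4):
    """Sample pseudo-features around each old-class prototype.

    prototypes: tensor of shape (num_old_classes, feat_dim), one mean feature
    per old class. Returns pseudo-features and their labels, which can be
    mixed into the classifier's batch so old classes stay represented.
    """
    num_classes, feat_dim = prototypes.shape
    noise = torch.randn(num_classes, samples_per_class, feat_dim) * radius
    feats = prototypes.unsqueeze(1) + noise                      # (C, S, D)
    labels = torch.arange(num_classes).repeat_interleave(samples_per_class)
    return feats.reshape(-1, feat_dim), labels

# Usage: append the pseudo-features and labels to the new-task batch before
# computing the classification loss.
protos = torch.randn(10, 768)            # hypothetical stored prototypes
pseudo_feats, pseudo_labels = generate_pseudo_features(protos)
```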

Keywords

» Artificial intelligence  » Continual learning  » Contrastive loss  » Transformer