Summary of Balancing the Causal Effects in Class-Incremental Learning, by Junhao Zheng et al.


Balancing the Causal Effects in Class-Incremental Learning

by Junhao Zheng, Ruiyan Wang, Chongzhi Zhang, Huawen Feng, Qianli Ma

First submitted to arXiv on: 15 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper presents a novel approach to alleviate catastrophic forgetting in Pre-Trained Models (PTMs) during Class-Incremental Learning (CIL). Recent breakthroughs in visual and natural language processing tasks have highlighted the potential of PTMs to learn sequentially. However, existing studies emphasize the need to address the forgetting issue. The authors conduct a pilot study and causal analysis to identify the root cause of the problem, finding that imbalanced causal effects between new and old data lead to adaptation conflicts. They propose Balancing the Causal Effects (BaCE) in CIL, which introduces two objectives for building causal paths from both new and old data to predict new classes. Experimental results on continual image classification, text classification, and named entity recognition demonstrate BaCE’s superiority over various CIL methods.
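
The summary above only names BaCE's two objectives without spelling them out, so the sketch below is purely illustrative and is not the authors' actual causal-effect formulation. It shows one common way both new and old data can jointly shape the updated model in class-incremental learning: a classification loss over new data mixed with replayed old exemplars, plus a knowledge-distillation term against the frozen previous-task model. The function name `combined_cil_loss` and the hyperparameters `alpha` and `T` are hypothetical.

```python
# Illustrative sketch only: not BaCE's actual objectives. It combines a
# classification loss on mixed new/old examples with a distillation loss
# against the frozen previous-task model, so both new and old data
# influence the predictions made after the new-task update.
import torch
import torch.nn.functional as F

def combined_cil_loss(model, old_model, new_batch, old_exemplars, alpha=0.5, T=2.0):
    """new_batch / old_exemplars: (inputs, labels) tuples; alpha and T are hypothetical knobs."""
    x_new, y_new = new_batch
    x_old, y_old = old_exemplars

    # Classification path driven by both new data and replayed old data.
    x = torch.cat([x_new, x_old], dim=0)
    y = torch.cat([y_new, y_old], dim=0)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Distillation path: keep the current model consistent with the old
    # model on old-class logits, limiting interference from new-task updates.
    with torch.no_grad():
        old_logits = old_model(x)
    n_old_classes = old_logits.size(1)
    kd = F.kl_div(
        F.log_softmax(logits[:, :n_old_classes] / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    return ce + alpha * kd
```

This stand-in uses standard continual-learning ingredients (exemplar replay and distillation) to convey the balancing idea; the paper's contribution is a causal analysis and two dedicated objectives rather than this particular loss.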

Low Difficulty Summary (written by GrooveSquid.com, original content)
Class-Incremental Learning (CIL) is a big challenge in artificial intelligence: models must learn from new data without forgetting what they learned before. Recently, special kinds of AI models called Pre-Trained Models (PTMs) have become very good at learning new tasks one after another. However, these PTMs still forget old information when learning new things. The authors of this paper wanted to figure out why this happens and how to fix it. Their experiments show that the problem comes from how new and old data affect the model’s predictions in an imbalanced way. To solve this, they propose a new method called Balancing the Causal Effects (BaCE). This approach helps PTMs learn from both new and old data at the same time, so they don’t forget what they learned before.

Keywords

  • Artificial intelligence
  • Image classification
  • Named entity recognition
  • Natural language processing
  • Text classification