Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning

by Dipam Goswami, Albin Soutif–Cormerais, Yuyang Liu, Sandesh Kamath, Bartłomiej Twardowski, Joost van de Weijer

First submitted to arXiv on: 29 May 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper's original abstract, which can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)

This paper proposes a novel approach to feature drift estimation for exemplar-free continual learning. The method, called Adversarial Drift Compensation (ADC), reduces catastrophic forgetting by perturbing current-task samples so that they mimic old-class prototypes in the old model's embedding space. ADC estimates the drift between the old and new models from these perturbed images and compensates the stored prototypes accordingly. Because adversarial samples transfer well across feature spaces, the approach is computationally cheap. The authors demonstrate the effectiveness of ADC on several standard continual learning benchmarks and fine-grained datasets.
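
The sketch below illustrates the idea in PyTorch. It is a minimal, hypothetical rendering of the procedure as described in the summary, not the authors' implementation: the feature extractors old_model and new_model, the number of attack steps, the step size, and the signed-gradient update are all illustrative assumptions.

```python
import torch

def adversarial_drift_compensation(old_model, new_model, prototype,
                                    current_batch, steps=10, step_size=0.05):
    """Sketch of ADC-style prototype compensation (illustrative, not the paper's code).

    Perturbs current-task images so that their features under the *old* model move
    toward a stored old-class prototype, then measures how the features of those
    perturbed images shift under the *new* model and applies that shift to the
    prototype.
    """
    x = current_batch.clone().detach().requires_grad_(True)

    # 1) Adversarially push old-model features toward the old-class prototype.
    for _ in range(steps):
        feats = old_model(x)                                  # (B, d) embeddings
        loss = ((feats - prototype) ** 2).sum(dim=1).mean()   # distance to prototype
        (grad,) = torch.autograd.grad(loss, x)
        # Signed-gradient descent step on the inputs (FGSM-style; an assumption).
        x = (x - step_size * grad.sign()).detach().requires_grad_(True)

    # 2) Estimate the old-to-new feature drift on the perturbed samples.
    with torch.no_grad():
        drift = (new_model(x) - old_model(x)).mean(dim=0)     # (d,)

    # 3) Compensate: translate the stored prototype by the estimated drift.
    return prototype + drift
```

The key property being exploited, per the summary, is that adversarial perturbations crafted against the old model transfer to the new model's feature space, so the prototypes of old classes can be updated without storing any old exemplars.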

Low Difficulty Summary (written by GrooveSquid.com, original content)

This paper tackles a big problem in machine learning called “catastrophic forgetting”: when we train models to learn new things, they often forget what they learned before, which makes them poor at learning continuously. The authors propose a way to keep models from forgetting. Instead of storing old data, they nudge new data so that, to the model, it looks like the old classes, and then use it to update the model's stored idea of what each old class looks like. This helps the model remember earlier classes better. The authors tested their method on several standard datasets and showed that it works well.

Keywords

» Artificial intelligence  » Continual learning  » Embedding space  » Machine learning  » Transferability