
Summary of Overcoming Domain Drift in Online Continual Learning, by Fan Lyu et al.


Overcoming Domain Drift in Online Continual Learning

by Fan Lyu, Daofeng Liu, Linglan Zhao, Zhang Zhang, Fanhua Shang, Fuyuan Hu, Wei Feng, Liang Wang

First submitted to arxiv on: 15 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read it on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, the researchers propose a novel approach to online continual learning (OCL), which enables machine learning models to learn from new data streams while retaining previously acquired knowledge. The key challenge addressed is catastrophic forgetting, in which new tasks overwrite earlier learning and bias the model against prior knowledge. To mitigate this issue, the authors introduce Drift-Reducing Rehearsal (DRR), a strategy that anchors old tasks and reduces negative transfer. DRR consists of three components: a memory-selection scheme for representative samples, a two-level angular cross-task Contrastive Margin Loss (CML) that promotes intra-class compactness and inter-class discrepancy, and an optional Centroid Distillation Loss (CDL) that anchors knowledge in feature space for each previous task. Experiments on four benchmark datasets show that DRR effectively mitigates continual domain drift and achieves state-of-the-art OCL performance.
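The summary names the Centroid Distillation Loss (CDL) but gives no formula. As a rough illustration only, the idea of anchoring per-class feature centroids of old tasks might be sketched as below; the function name, arguments, and the squared-distance penalty are our own illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def centroid_distillation_loss(features, labels, anchored_centroids):
    """Illustrative sketch of a centroid-anchoring loss: penalize drift of
    per-class feature centroids away from centroids stored when a previous
    task finished.

    features: (N, D) array of current feature vectors
    labels: (N,) array of integer class ids
    anchored_centroids: dict mapping old-task class id -> (D,) stored centroid
    """
    total, counted = 0.0, 0
    for cls, anchor in anchored_centroids.items():
        mask = labels == cls
        if not mask.any():
            continue  # class absent from this mini-batch
        current = features[mask].mean(axis=0)  # current centroid of this class
        total += float(np.sum((current - anchor) ** 2))
        counted += 1
    return total / max(counted, 1)  # average over old classes seen in the batch
```

In a rehearsal setup this term would be one part of the overall objective, combined with the classification loss and a cross-task contrastive term on samples drawn from the memory buffer.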
Low Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, scientists are trying to help machine learning models learn new things without forgetting what they already know. This is called online continual learning (OCL). One big problem with OCL is that the model might forget what it learned earlier because of new tasks coming along. To solve this issue, the authors propose a new way to rehearse old knowledge and reduce negative transfer effects. They suggest selecting memory for representative samples, making sure the model doesn’t get too confused by new data, and adding a special loss function to help keep the model’s features in place. The results show that their approach works well on many datasets and can even achieve better performance than other methods.

Keywords

» Artificial intelligence  » Continual learning  » Distillation  » Loss function  » Machine learning