Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning

by Da-Wei Zhou, Zi-Wen Cai, Han-Jia Ye, Lijun Zhang, De-Chuan Zhan

First submitted to arXiv on: 1 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read it on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Domain-Incremental Learning (DIL) involves adapting models to new concepts across different domains. Recent pre-trained models provide a strong foundation for DIL, but learning new concepts often causes the forgetting of pre-trained knowledge: sequential updates can overwrite both the representation and the classifier with knowledge from the latest domain. To address this, the authors propose DUal ConsolidaTion (Duct), which unifies and consolidates historical knowledge at both the representation level and the classifier level. Duct merges the backbones from different training stages to create a representation space that captures task-specific features from all seen domains. An additional classifier consolidation process then aligns historical and estimated classifiers with the consolidated embedding space. Experimental results on four benchmark datasets demonstrate Duct's state-of-the-art performance. (A rough code sketch of the two consolidation steps appears after the summaries.)
Low Difficulty Summary (written by GrooveSquid.com, original content)
DIL is about teaching machines to learn new things across different areas of expertise. Right now, machines can learn from a lot of data, but they often forget what they learned before when learning something new. To solve this problem, we created a way to keep all the knowledge together at two levels: how the machine represents the world and what it knows about specific categories. We also developed a way to combine old and new information so machines can learn from everything they’ve seen.
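As promised above, here is a minimal Python/PyTorch sketch of the two consolidation steps from the medium summary. Plain parameter averaging as the merge rule, the alpha mixing weight, and all helper names are illustrative assumptions for this sketch, not the paper's exact procedure.

    import copy
    import torch

    def consolidate_representation(backbones):
        # Merge the backbone copies saved after each domain into one
        # multi-domain backbone. Here this is plain parameter averaging,
        # an assumed merge rule standing in for the paper's own.
        merged = copy.deepcopy(backbones[0])
        with torch.no_grad():
            for name, param in merged.named_parameters():
                stacked = torch.stack(
                    [dict(b.named_parameters())[name] for b in backbones]
                )
                param.copy_(stacked.mean(dim=0))
        return merged

    def consolidate_classifier(historical_heads, estimated_head, alpha=0.5):
        # Blend the historical classifier weights (one tensor per domain,
        # shape [num_classes, embed_dim]) with a classifier estimated in
        # the consolidated embedding space. alpha is an assumed mixing
        # weight, not a value taken from the paper.
        historical = torch.stack(list(historical_heads)).mean(dim=0)
        return alpha * historical + (1.0 - alpha) * estimated_head

In use, one would keep a copy of the backbone and classifier weights after finishing each domain, then call consolidate_representation on all saved backbones and consolidate_classifier on the saved heads together with a head re-estimated in the merged embedding space.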

Keywords

  • Artificial intelligence
  • Embedding space