
SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model

by Gengwei Zhang, Liyuan Wang, Guoliang Kang, Ling Chen, Yunchao Wei

First submitted to arXiv on: 9 Mar 2023

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper explores continual learning, in which a recognition model improves by learning from sequentially arriving data. It focuses on adapting pre-trained knowledge to each incremental task while preserving generalizability. The authors identify progressive overfitting as the key challenge and propose Slow Learner with Classifier Alignment (SLCA) to resolve it. SLCA selectively reduces the learning rate of the representation layer, models class-wise feature distributions, and aligns the classification layers post hoc. Experiments show substantial improvements across a variety of scenarios, outperforming state-of-the-art approaches by a large margin.
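The two mechanisms in the summary above can be sketched roughly as follows. This is an illustrative toy, not the authors' code: the parameter-group learning rates, the 2-D "features", and all hyperparameters are assumptions. It shows (1) the slow-learner idea of giving the representation a much smaller learning rate than the classifier, and (2) post-hoc classifier alignment by fitting a softmax head on pseudo-features sampled from saved per-class Gaussian statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Slow Learner: separate learning rates per parameter group
# (values are illustrative assumptions, not the paper's exact settings).
param_group_lrs = {"representation": 1e-4, "classifier": 1e-2}

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class ClassStats:
    """Per-class Gaussian statistics collected in feature space."""
    def __init__(self):
        self.stats = []  # list of (label, mean, covariance)

    def update(self, label, feats):
        cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
        self.stats.append((label, feats.mean(axis=0), cov))

    def sample(self, n_per_class):
        # Draw pseudo-features from every seen class's Gaussian.
        xs, ys = [], []
        for label, mean, cov in self.stats:
            xs.append(rng.multivariate_normal(mean, cov, size=n_per_class))
            ys.append(np.full(n_per_class, label))
        return np.vstack(xs), np.concatenate(ys)

def align_classifier(stats, n_classes, dim, epochs=2000):
    """Post-hoc alignment: gradient descent on sampled pseudo-features."""
    W, b = np.zeros((dim, n_classes)), np.zeros(n_classes)
    X, y = stats.sample(100)
    Y = np.eye(n_classes)[y]
    lr = param_group_lrs["classifier"]
    for _ in range(epochs):
        P = softmax(X @ W + b)
        G = (P - Y) / len(X)      # cross-entropy gradient
        W -= lr * (X.T @ G)
        b -= lr * G.sum(axis=0)
    return W, b

# Toy "features": two well-separated classes, as if produced by a
# slowly-updated pre-trained backbone (hypothetical data).
feats0 = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(200, 2))
feats1 = rng.normal(loc=[-2.0, -2.0], scale=1.0, size=(200, 2))

stats = ClassStats()
stats.update(0, feats0)
stats.update(1, feats1)

W, b = align_classifier(stats, n_classes=2, dim=2)
X_test = np.vstack([feats0, feats1])
y_test = np.concatenate([np.zeros(200, int), np.ones(200, int)])
acc = (np.argmax(X_test @ W + b, axis=1) == y_test).mean()
print(f"aligned-classifier accuracy: {acc:.2f}")
```

Because the classifier is re-fit from stored class statistics rather than from raw task data, classes from earlier tasks stay represented even after new tasks arrive, which is the intuition behind the alignment step.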
Low Difficulty Summary (written by GrooveSquid.com; original content)
The paper is about making machines smarter by teaching them new things over time. It’s like how humans learn new skills or facts as they grow older. The authors found that when we use pre-trained knowledge (like what we learned in school) to teach machines, it gets stuck and can’t adapt well to new information. To fix this, they developed a way to make the machine learn more efficiently by slowing down its learning process and adjusting how it categorizes things. This approach worked really well on many different types of data and even outperformed other methods. The authors hope that their work will help machines get better at learning over time.

Keywords

  • Artificial intelligence
  • Alignment
  • Classification
  • Continual learning
  • Overfitting