Summary of Class-Incremental Learning with CLIP: Adaptive Representation Adjustment and Parameter Fusion, by Linlan Huang et al.
Class-Incremental Learning with CLIP: Adaptive Representation Adjustment and Parameter Fusion
by Linlan Huang, Xusheng Cao, Haori Lu, Xialei Liu
First submitted to arXiv on: 19 Jul 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper tackles the challenging problem of class-incremental learning, where a model must classify data from an increasing number of classes over time. Pre-trained models like CLIP demonstrate good generalization ability and excel in this task with frozen parameters. However, fine-tuning these models for downstream tasks leads to severe forgetting of old classes. Most existing works assume uniform forgetting when acquiring new knowledge. The proposed method, Adaptive Representation Adjustment and Parameter Fusion (RAPF), addresses this issue by measuring the influence of new classes on old ones and adjusting representations using textual features during training. After training, RAPF employs decomposed parameter fusion to mitigate forgetting during adapter module fine-tuning. Experimental results on conventional benchmarks show that RAPF achieves state-of-the-art performance. |
Low | GrooveSquid.com (original content) | Class-incremental learning is a big problem in artificial intelligence. Imagine you’re trying to teach a computer to recognize different types of animals, and it starts out only knowing what a cat looks like. As time goes on, the computer should learn to recognize dogs, birds, and even more exotic animals. But most pre-trained models are not very good at this task because they tend to forget the old classes when learning new ones. A team of researchers has come up with a new way to train these models called Adaptive Representation Adjustment and Parameter Fusion (RAPF). RAPF measures how much the new classes affect the old classes and adjusts things accordingly during training. Then, after training, it merges the old and new model parameters so the model doesn’t forget what it learned earlier. The results show that RAPF is better than other methods at this task. |
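The two mechanisms described in the medium summary can be sketched in code. The snippet below is an illustrative toy, not the paper's implementation: it uses text-embedding similarity as a stand-in for "influence of new classes on old ones," nudges affected old-class features toward their own text anchors, and fuses old and new weight matrices through an SVD of their difference as a stand-in for "decomposed parameter fusion." All function names, thresholds, and the exact fusion rule are assumptions for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two sets of vectors."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def adjust_representations(old_feats, old_text, new_text,
                           sim_threshold=0.8, step=0.5):
    """Toy version of adaptive representation adjustment: when a new
    class's text embedding is highly similar to an old class's (likely
    interference), pull that old class's image feature toward its own
    text anchor to keep the classes separable. (Illustrative only; the
    paper's exact rule may differ.)"""
    influence = cosine_sim(old_text, new_text).max(axis=1)  # per old class
    adjusted = old_feats.copy()
    for c, infl in enumerate(influence):
        if infl > sim_threshold:
            adjusted[c] = (1.0 - step) * old_feats[c] + step * old_text[c]
    return adjusted

def fuse_parameters(w_old, w_new, keep=0.5):
    """Toy "decomposed parameter fusion": decompose the old-new weight
    difference with SVD and retain only its top-k directions on top of
    the new weights, so dominant old-task structure is preserved while
    the rest follows the newly fine-tuned weights. (An assumed fusion
    rule, not the paper's.)"""
    u, s, vt = np.linalg.svd(w_old - w_new, full_matrices=False)
    k = max(1, int(keep * len(s)))
    delta = (u[:, :k] * s[:k]) @ vt[:k]  # rank-k part of the difference
    return w_new + delta
```

With `keep=1.0`, `fuse_parameters` reproduces the old weights exactly; with small `keep`, it stays close to the new weights while keeping the strongest old-weight directions, which is the interpolation-in-a-decomposed-basis idea in miniature.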
Keywords
- Artificial intelligence
- Fine-tuning
- Generalization