Summary of FOCIL: Finetune-and-Freeze for Online Class Incremental Learning by Training Randomly Pruned Sparse Experts, by Murat Onur Yildirim et al.
FOCIL: Finetune-and-Freeze for Online Class Incremental Learning by Training Randomly Pruned Sparse Experts
by Murat Onur Yildirim, Elif Ceren Gok Yildirim, Decebal Constantin Mocanu, Joaquin Vanschoren
First submitted to arXiv on: 13 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | FOCIL is an approach to online class-incremental learning that avoids storing data from previous tasks, sidestepping the memory and computation costs and the privacy issues of replay buffers. For each task, it fine-tunes a randomly pruned sparse subnetwork of the backbone and then freezes the trained connections to prevent forgetting. Sparsity levels and learning rates are determined adaptively per task, and no replay data is stored. Experimental results on 10-Task CIFAR100, 20-Task CIFAR100, and 100-Task TinyImageNet demonstrate FOCIL's superiority over the state-of-the-art (SOTA). The code is publicly available at this GitHub URL. A minimal sketch of the mechanism appears after this table. |
| Low | GrooveSquid.com (original content) | FOCIL is a new approach for online continual learning. It helps machines learn from a series of new classes without storing all the data, which makes it more realistic and efficient. The method works by fine-tuning a small, randomly chosen part of the network for each task, then freezing those changes to prevent forgetting what was learned before. FOCIL adapts its settings for each task to get the best results, and it outperforms current methods on several benchmark datasets. |
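The medium-difficulty summary describes the core mechanism: draw a random sparse mask per task, fine-tune only those connections, then freeze them. The following is a minimal sketch of that idea, not the authors' code: the class name `RandomMaskedNet`, the fixed `sparsity` and `lr` values, and the single-head setup are illustrative assumptions (FOCIL determines sparsity and learning rate adaptively per task).

```python
# Minimal sketch of the finetune-and-freeze idea (assumed structure, not the paper's API).
import torch
import torch.nn as nn

class RandomMaskedNet(nn.Module):
    def __init__(self, in_dim=32 * 32 * 3, hidden=256, num_classes=100):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_classes)
        # 1 = connection frozen by a previous task, 0 = still free to train.
        self.frozen = {n: torch.zeros_like(p) for n, p in self.backbone.named_parameters()}

    def new_task_mask(self, sparsity=0.9):
        """Randomly prune: keep a (1 - sparsity) fraction of the still-free weights."""
        masks = {}
        for n, p in self.backbone.named_parameters():
            keep = (torch.rand_like(p) > sparsity) & (self.frozen[n] == 0)
            masks[n] = keep.float()
        return masks

    def train_task(self, loader, masks, lr=0.01, epochs=1):
        opt = torch.optim.SGD(self.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(self.head(self.backbone(x)), y)
                loss.backward()
                # Zero gradients outside this task's subnetwork, including
                # everything frozen by earlier tasks, so only the current
                # sparse expert is updated.
                for n, p in self.backbone.named_parameters():
                    if p.grad is not None:
                        p.grad *= masks[n]
                opt.step()
        # Freeze this task's trained connections so later tasks cannot overwrite them.
        for n in masks:
            self.frozen[n] = torch.clamp(self.frozen[n] + masks[n], max=1.0)
```

Because each task's connections are masked out of every later task's gradient updates, earlier subnetworks are untouched by new learning, which is what prevents forgetting without any replay storage.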
Keywords
* Artificial intelligence
* Continual learning
* Fine tuning