Summary of "Can We Understand Plasticity Through Neural Collapse?" by Guglielmo Bonifazi et al.
Can We Understand Plasticity Through Neural Collapse?
by Guglielmo Bonifazi, Iason Chalas, Gian Hess, Jakub Łucki
First submitted to arXiv on: 3 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper’s original abstract; read it on the arXiv page. |
| Medium | GrooveSquid.com (original content) | This paper investigates the connection between plasticity loss and neural collapse in deep learning. The authors find a significant correlation between the two phenomena during the initial training phase on the first task, and they introduce a regularization approach that mitigates neural collapse and thereby alleviates plasticity loss (see the illustrative sketch below the table). |
| Low | GrooveSquid.com (original content) | Deep learning has two recently identified problems: plasticity loss and neural collapse. This paper examines how they are related in different settings and finds that they are linked during early training on the first task. The authors also propose a way to curb neural collapse, which helps reduce plasticity loss. |
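The summaries above do not spell out the authors' regularizer, so the sketch below is only an illustration of the general idea: track within-class feature variability, the quantity that shrinks toward zero under neural collapse (the NC1 property), and add a penalty when it gets too small. Everything here is an assumption rather than the paper's method: the function names, the variance floor `floor`, and the weight `lam` are hypothetical.

```python
import torch

def within_class_variability(features: torch.Tensor,
                             labels: torch.Tensor,
                             num_classes: int) -> torch.Tensor:
    """Average within-class feature variance. Under neural collapse
    (NC1), this quantity shrinks toward zero during training."""
    variances = []
    for c in range(num_classes):
        class_feats = features[labels == c]
        if class_feats.shape[0] > 1:  # need >= 2 samples to measure variance
            variances.append(class_feats.var(dim=0, unbiased=False).mean())
    if not variances:
        return features.new_tensor(0.0)
    return torch.stack(variances).mean()

def anti_collapse_penalty(features, labels, num_classes, floor=1e-2):
    """Hypothetical regularizer: penalize the batch only when its
    within-class variability drops below a small floor."""
    return torch.relu(floor - within_class_variability(features, labels, num_classes))

# Hypothetical usage inside a training step, with `lam` as the
# regularization strength:
#   feats = model.backbone(x)   # penultimate-layer features
#   logits = model.head(feats)
#   loss = criterion(logits, y) + lam * anti_collapse_penalty(feats, y, num_classes)
```

One design note on this sketch: penalizing only below a floor, rather than maximizing variance outright, keeps the penalty inactive once features are sufficiently spread out, so it interferes less with the main task loss. The paper may well use a different formulation.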
Keywords
- Artificial intelligence
- Deep learning
- Regularization