Emergence in non-neural models: grokking modular arithmetic via average gradient outer product
by Neil Mallinar, Daniel Beaglehole, Libin Zhu, Adityanarayanan Radhakrishnan, Parthe Pandit, Mikhail Belkin
First submitted to arXiv on: 29 Jul 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty: the medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In this study, the researchers investigate “grokking” in neural networks trained on modular arithmetic tasks: the phenomenon in which test accuracy improves sharply only after the model has already reached 100% training accuracy. They show that this emergence is not unique to neural networks or to gradient descent-based optimization; it also occurs in Recursive Feature Machines (RFMs), kernel machines that learn features iteratively via the Average Gradient Outer Product (AGOP). The study identifies feature learning as the key mechanism behind grokking, specifically the learning of block-circulant features that encode the structure of modular arithmetic (see the sketches after this table). |
Low | GrooveSquid.com (original content) | Grokking is a fascinating phenomenon in which a neural network’s test accuracy jumps well after it has already reached 100% training accuracy. This study finds that grokking is not limited to neural networks or to any particular optimization method; instead, feature learning plays the crucial role. Using Recursive Feature Machines (RFMs), which equip kernel machines with feature learning, the researchers show that the right features can be learned quickly and accurately, leading to high test accuracy. |
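The following is a minimal, hypothetical sketch of the ingredients the summaries describe: modular addition with one-hot inputs, a kernel machine, and an AGOP-based feature-learning loop in the spirit of RFM. It is not the authors’ code; the Gaussian-Mahalanobis kernel, the modulus `p`, the bandwidth, the ridge term, and the trace normalization are all illustrative choices, and whether this toy run actually groks depends on such hyperparameters.

```python
import numpy as np

p = 19                                   # modulus for the task (a + b) mod p
rng = np.random.default_rng(0)

# All p^2 input pairs; each input is the concatenated one-hot encodings of a and b.
pairs = np.array([(a, b) for a in range(p) for b in range(p)])
X = np.zeros((p * p, 2 * p))
X[np.arange(p * p), pairs[:, 0]] = 1.0
X[np.arange(p * p), p + pairs[:, 1]] = 1.0
Y = np.eye(p)[(pairs[:, 0] + pairs[:, 1]) % p]    # one-hot labels

idx = rng.permutation(p * p)                      # random train/test split
n_train = (p * p) // 2
tr, te = idx[:n_train], idx[n_train:]

def kernel(A, B, M, bw):
    """Gaussian kernel with Mahalanobis distance (x - z)^T M (x - z)."""
    d2 = ((A @ M) * A).sum(1)[:, None] + ((B @ M) * B).sum(1)[None, :] - 2 * (A @ M) @ B.T
    return np.exp(-np.maximum(d2, 0.0) / (2 * bw ** 2))

M, bw, ridge = np.eye(2 * p), 2.0, 1e-3           # M = identity gives a plain kernel machine
for it in range(5):
    # 1. Fit kernel ridge regression with the current feature matrix M.
    K = kernel(X[tr], X[tr], M, bw)
    alpha = np.linalg.solve(K + ridge * np.eye(n_train), Y[tr])

    Kte = kernel(X[te], X[tr], M, bw)
    acc = ((Kte @ alpha).argmax(1) == Y[te].argmax(1)).mean()
    print(f"iteration {it}: test accuracy {acc:.3f}")

    # 2. AGOP update: M <- average over training points of grad f(x) grad f(x)^T.
    #    For this kernel, grad_x K(x, x_i) = K(x, x_i) * M (x_i - x) / bw^2.
    G = np.zeros((2 * p, 2 * p))
    for j in range(n_train):
        diffs = X[tr] - X[tr][j]                               # (n_train, d)
        grad = M @ diffs.T @ (K[j][:, None] * alpha) / bw**2   # (d, p), one column per class
        G += grad @ grad.T
    M = G / n_train
    M /= np.trace(M) / (2 * p)                    # rescale so distances stay comparable
```

In the paper’s experiments, test accuracy climbs across such feature-learning iterations long after the training data are fit perfectly, which is the grokking signature.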
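For intuition about the block-circulant features mentioned above, here is a purely illustrative snippet (the entry values are arbitrary): a circulant matrix satisfies C[i, j] = c[(i - j) mod p], so it is constant along wrapped diagonals, which is precisely the symmetry of addition mod p.

```python
import numpy as np

p = 5
c = np.arange(p)                      # arbitrary length-p vector of entries
C = np.array([[c[(i - j) % p] for j in range(p)] for i in range(p)])
print(C)
# [[0 4 3 2 1]
#  [1 0 4 3 2]
#  [2 1 0 4 3]
#  [3 2 1 0 4]
#  [4 3 2 1 0]]
```

The paper reports that the learned feature matrix (the AGOP) develops this kind of structure in blocks, which is what lets a kernel machine generalize on modular arithmetic.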
Keywords
* Artificial intelligence
* Gradient descent
* Optimization