Summary of Just How Flexible Are Neural Networks in Practice?, by Ravid Shwartz-Ziv, Micah Goldblum, Arpit Bansal, C. Bayan Bruss, Yann LeCun, and Andrew Gordon Wilson
Just How Flexible are Neural Networks in Practice?
by Ravid Shwartz-Ziv, Micah Goldblum, Arpit Bansal, C. Bayan Bruss, Yann LeCun, Andrew Gordon Wilson
First submitted to arXiv on: 17 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the paper’s original abstract on arXiv |
Medium | GrooveSquid.com (original content) | The paper investigates how well neural networks can fit their training data, challenging the common assumption that overparameterized models can always memorize their training sets. The authors find that standard optimizers, not just architecture, limit effective capacity: they converge to minima where the model fits significantly fewer samples than it has parameters (see the code sketch after this table). They also show that convolutional networks are more parameter-efficient than MLPs and vision transformers (ViTs), even on randomly labeled data. Although stochastic training is often thought to act as a regularizer, SGD actually finds minima that fit more training data than full-batch gradient descent. Finally, the difference between a model’s capacity to fit correctly labeled and incorrectly labeled samples is predictive of its generalization performance. |
Low | GrooveSquid.com (original content) | Neural networks are really good at learning from data, but did you know there’s a limit to how much data they can actually fit? Researchers found that in practice a neural network can only memorize a training set with far fewer examples than it has parameters, even when it has millions of parameters. They also discovered that some kinds of networks, like convolutional networks, get more out of each parameter than others, such as multi-layer perceptrons and vision transformers. This is important because it means we may need to rethink how we train our models to make them more accurate. |
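The measurement behind these findings is straightforward to sketch: train a network on subsets of increasing size and record the largest subset it can still fit perfectly. Below is a minimal, hypothetical sketch of that idea in PyTorch; it is not the authors’ code, and the model constructor, dataset, subset sizes, and hyperparameters are placeholder assumptions.

```python
# Hedged sketch (not the paper's implementation): estimate the largest
# training-set size a model can fit perfectly, a rough proxy for the
# "effective capacity" the summaries above describe.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset


def fits_perfectly(model_fn, dataset, n_samples, epochs=200, lr=0.1, device="cpu"):
    """Train a fresh model on the first n_samples examples and report
    whether it reaches 100% training accuracy."""
    subset = Subset(dataset, range(n_samples))
    loader = DataLoader(subset, batch_size=128, shuffle=True)
    model = model_fn().to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Check training accuracy on the same subset.
    model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
    return correct == n_samples


def max_fit_size(model_fn, dataset, lo=1_000, hi=50_000, **train_kwargs):
    """Binary search for the largest subset size the model can still fit,
    assuming (approximately) that fitting ability is monotone in subset size."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if fits_perfectly(model_fn, dataset, mid, **train_kwargs):
            lo = mid
        else:
            hi = mid - 1
    return lo
```

Running `max_fit_size` once on correctly labeled data and again on the same data with shuffled labels would give the two capacities whose difference the paper reports as a predictor of generalization.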
Keywords
» Artificial intelligence » Generalization » Gradient descent » Parameter efficient