
Summary of Bias of Stochastic Gradient Descent or the Architecture: Disentangling the Effects of Overparameterization of Neural Networks, by Amit Peleg and Matthias Hein


Bias of Stochastic Gradient Descent or the Architecture: Disentangling the Effects of Overparameterization of Neural Networks

by Amit Peleg, Matthias Hein

First submitted to arXiv on: 4 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
The paper investigates the factors that influence generalization in neural networks that fit the training data perfectly. While many explanations have been proposed, this study aims to disentangle the roles of optimization and architecture in achieving good generalization. The authors experiment with random and SGD-optimized networks that reach zero training error and find that, as width grows, overparameterization benefits generalization because of an implicit bias of stochastic gradient descent (SGD), rather than because the architecture itself favors simpler functions. For increasing depth, however, overparameterization becomes detrimental, which points to a bias of the architecture. A toy code sketch of this kind of width comparison follows the summaries below.

Low Difficulty Summary (GrooveSquid.com, original content)
This paper looks at why neural networks can work well even when they have more parts than they need. One idea is that the way we train the network nudges it toward simple solutions that generalize well. The researchers tested this by making networks fit the training data perfectly and then checking how they did on new, unseen data. They found that having lots of “neurons” (think of them like tiny calculators) can actually help if you update the network’s weights with a method called stochastic gradient descent. But if you add more layers to the network, it is better to have fewer neurons in each layer. This helps us understand why neural networks work well and how we can make them even better.
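
The medium difficulty summary describes comparing networks of different widths that are all trained with SGD to zero training error and then evaluated on unseen data. Below is a hypothetical, much-simplified sketch of that kind of width comparison on a synthetic task, written in PyTorch. It is not the authors’ code: the dataset, architecture, and hyperparameters are placeholder choices made only for illustration.

# Hypothetical sketch (not the paper's code): train small MLPs of increasing
# width with mini-batch SGD until they (nearly) interpolate a synthetic
# training set, then compare accuracy on held-out data.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n, d=20):
    # Linearly separable toy labels from a random hyperplane.
    X = torch.randn(n, d)
    w = torch.randn(d)
    y = (X @ w > 0).long()
    return X, y

X_train, y_train = make_data(200)
X_test, y_test = make_data(2000)

def mlp(width, depth, d=20, classes=2):
    layers, in_dim = [], d
    for _ in range(depth):
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    layers.append(nn.Linear(in_dim, classes))
    return nn.Sequential(*layers)

loss_fn = nn.CrossEntropyLoss()

for width in [8, 64, 512]:  # increasing overparameterization in width
    net = mlp(width, depth=2)
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    for epoch in range(300):  # run long enough to reach ~zero training error
        perm = torch.randperm(len(X_train))
        for i in range(0, len(X_train), 32):  # mini-batch SGD updates
            idx = perm[i:i + 32]
            opt.zero_grad()
            loss_fn(net(X_train[idx]), y_train[idx]).backward()
            opt.step()
    with torch.no_grad():
        train_acc = (net(X_train).argmax(1) == y_train).float().mean().item()
        test_acc = (net(X_test).argmax(1) == y_test).float().mean().item()
    print(f"width={width:4d}  train_acc={train_acc:.2f}  test_acc={test_acc:.2f}")

The paper’s actual experiments go further: they also compare against random networks that achieve zero training error, and they vary depth as well as width, which is where the architectural bias shows up.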

Keywords

  • Artificial intelligence
  • Generalization
  • Optimization
  • Stochastic gradient descent