Summary of On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory, by Guhan Chen et al.
On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory
by Guhan Chen, Yicheng Li, Qian Lin
First submitted to arXiv on: 8 Oct 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | The paper studies how random initialization affects wide neural networks within the Neural Tangent Kernel (NTK) theory. Most recent works sidestep this question by using specially mirrored architectures and initializations that force the network's output to be zero at initialization. The authors investigate whether conventional architectures with random initialization lead to different generalization capabilities for wide neural networks. They show that the training dynamics of gradient flow converge uniformly to NTK regression under random initialization, and they analyze the generalization error of wide neural networks trained by gradient descent. The results highlight both the benefits of mirrored initialization and the limitations of the NTK theory in explaining neural network performance.
Low | GrooveSquid.com (original content) | The paper looks at how starting a neural network with random weights affects its ability to learn. Most earlier work avoided this question by using special tricks that make the network produce zero output at the start. The authors ask whether the usual way of initializing a network changes how well it generalizes. They find that the way the network learns does converge to what the kernel theory predicts, but they also show that this theory cannot fully explain how well neural networks actually perform.
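The medium summary above says that gradient-flow training of a wide, randomly initialized network converges to regression with the network's Neural Tangent Kernel. A minimal sketch of that kernel object, assuming a toy two-layer ReLU network and synthetic data (all sizes, names, and targets below are illustrative choices for the demo, not taken from the paper):

```python
import numpy as np

# Toy two-layer ReLU network f(x) = a^T relu(W x) / sqrt(m) at a
# conventional random initialization (NTK-style 1/sqrt(m) scaling).
rng = np.random.default_rng(0)
d, m, n = 3, 2000, 20            # input dim, width, sample count (illustrative)
W = rng.normal(size=(m, d))      # first layer, random init
a = rng.normal(size=m)           # second layer, random init

X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])              # synthetic regression targets

def jacobian(X):
    """Per-example gradient of f(x) w.r.t. all parameters (a and W)."""
    pre = X @ W.T                            # (n, m) pre-activations
    Ja = np.maximum(pre, 0.0) / np.sqrt(m)   # df/da = relu(Wx)/sqrt(m)
    gate = (pre > 0.0) * a / np.sqrt(m)      # relu gates scaled by a
    JW = gate[:, :, None] * X[:, None, :]    # df/dW, shape (n, m, d)
    return np.hstack([Ja, JW.reshape(X.shape[0], -1)])

J = jacobian(X)
K = J @ J.T                      # empirical NTK Gram matrix at initialization

# NTK regression: the kernel predictor that wide-network gradient-flow
# training converges to (here, in-sample fit via a linear solve).
alpha = np.linalg.solve(K, y)
fit = K @ alpha                  # NTK-regression predictions on the training set
```

The kernel here is the Gram matrix of per-example parameter gradients; as the width `m` grows, this empirical kernel concentrates around the deterministic NTK that the theory studies.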
Keywords
» Artificial intelligence » Generalization » Gradient descent » Neural network » Regression