Summary of Neural Network Learns Low-Dimensional Polynomials with SGD Near the Information-Theoretic Limit, by Jason D. Lee, Kazusato Oko, Taiji Suzuki, and Denny Wu
Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit
by Jason D. Lee, Kazusato Oko, Taiji Suzuki, Denny Wu
First submitted to arXiv on: 3 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper studies gradient-based learning of a single-index target function under isotropic Gaussian data. The authors analyze the complexity of gradient-based training of neural networks and establish that a two-layer network optimized by an SGD-based algorithm learns a polynomial single-index target with sample and runtime complexity O(d⋅polylog d), where d is the input dimension. This is surprising: earlier works indicated that the information exponent p of the link function governs the necessary complexity, but that is not the case here. Instead, the authors show that the generative exponent p* ≤ p plays the crucial role in determining how many samples suffice for low generalization error (a toy sketch of the setup appears after this table). These findings bear directly on how neural networks learn low-dimensional structure from data. |
Low | GrooveSquid.com (original content) | This study looks at how well computers can learn from data. Specifically, it investigates how to teach artificial neural networks to recognize patterns in noisy data. The authors make some surprising discoveries about what makes this process efficient. They find that the number of samples needed to learn from the data is not determined by a property called the “information exponent”, but instead by another measure called the “generative exponent”. This has important implications for building better artificial intelligence systems. |
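To make the setting in the medium summary concrete, below is a minimal, self-contained sketch of the single-index data model f*(x) = σ*(⟨θ, x⟩) with isotropic Gaussian inputs, trained with plain online SGD on a two-layer ReLU network. This is not the paper's algorithm or its guarantees: the degree-3 Hermite link, network width, initialization, and learning rate are illustrative assumptions chosen only to show the setup.

```python
# Toy sketch of the single-index learning setup described above.
# Not the paper's algorithm: architecture, link function, and hyperparameters
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, m, n, lr, epochs = 32, 128, 8192, 1e-3, 20  # dimension, width, samples, step size

# Single-index target f*(x) = sigma*(<theta, x>) with link He_3(z) = z^3 - 3z,
# whose lowest nonzero Hermite coefficient sits at degree 3 (information exponent 3).
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)
link = lambda z: z**3 - 3.0 * z

X = rng.standard_normal((n, d))   # isotropic Gaussian inputs x ~ N(0, I_d)
y = link(X @ theta)               # noiseless single-index labels

# Two-layer ReLU network f(x) = a^T relu(W x + b).
W = rng.standard_normal((m, d)) / np.sqrt(d)
b = np.zeros(m)
a = rng.standard_normal(m) / np.sqrt(m)

for epoch in range(epochs):
    for i in rng.permutation(n):             # plain online SGD on the squared loss
        pre = W @ X[i] + b
        h = np.maximum(pre, 0.0)
        err = a @ h - y[i]
        grad_pre = err * a * (pre > 0)       # backprop through the ReLU
        a -= lr * err * h
        W -= lr * np.outer(grad_pre, X[i])
        b -= lr * grad_pre
    preds = np.maximum(X @ W.T + b, 0.0) @ a  # full-batch evaluation
    print(f"epoch {epoch}: train MSE {np.mean((preds - y) ** 2):.3f}")
```

This vanilla loop only illustrates the data model and a generic two-layer training procedure; it does not reproduce the near-information-theoretic O(d⋅polylog d) sample-complexity result, which relies on the specific SGD-based algorithm analyzed in the paper.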
Keywords
» Artificial intelligence » Generalization » Gradient descent