
Hardness of Learning Neural Networks under the Manifold Hypothesis

by Bobak T. Kiani, Jason Wang, Melanie Weber

First submitted to arXiv on: 3 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Differential Geometry (math.DG); Machine Learning (stat.ML)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper investigates how encoding geometric structure into the data affects the learnability of neural networks under the manifold hypothesis. The authors rigorously analyze the hardness of learning neural networks under minimal assumptions on the curvature and regularity of the manifold. They prove that learning is hard when only bounded curvature is assumed, but show that additional assumptions on the volume of the data manifold guarantee learnability via interpolation. The paper also comments on intermediate regimes: manifolds with heterogeneous features of the kind commonly found in real-world data.
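
To make the setting concrete, here is a minimal, self-contained sketch, not taken from the paper: the circle manifold, ambient dimension, sample sizes, and target function below are all illustrative assumptions. Data is sampled from a one-dimensional manifold embedded in a higher-dimensional space, and once the samples cover the manifold densely enough, plain nearest-neighbor interpolation already predicts well, a toy analogue of the "learnability via interpolation" regime described above.

```python
# Illustrative sketch (not the paper's construction): data on a 1-D manifold
# (a circle) embedded in a 10-D ambient space, labeled by a smooth function
# of the intrinsic coordinate.
import numpy as np

rng = np.random.default_rng(0)
ambient_dim, n_train, n_test = 10, 2000, 200

# Random orthonormal embedding of the circle's plane (R^2) into R^10.
basis, _ = np.linalg.qr(rng.standard_normal((ambient_dim, 2)))

def sample(n):
    theta = rng.uniform(0.0, 2 * np.pi, size=n)   # intrinsic coordinate on the circle
    points = np.stack([np.cos(theta), np.sin(theta)], axis=1) @ basis.T
    labels = np.sin(3 * theta)                    # smooth target defined on the manifold
    return points, labels

x_train, y_train = sample(n_train)
x_test, y_test = sample(n_test)

# 1-nearest-neighbor interpolation in the ambient space: each test point is
# assigned the label of its closest training point.
dists = np.linalg.norm(x_test[:, None, :] - x_train[None, :, :], axis=2)
pred = y_train[dists.argmin(axis=1)]

# Small when n_train covers the circle densely relative to its volume.
print(f"mean absolute error: {np.abs(pred - y_test).mean():.3f}")
```
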
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper looks at how well neural networks can learn when the data is assumed to lie near a low-dimensional surface, or “manifold”. Researchers already know that this kind of geometric structure can help learning, but it hadn’t been studied rigorously. The authors work out what assumptions about the manifold are needed for learning to work well. They show that if the manifold is curved in certain ways, learning gets harder, but if there is enough data to cover the manifold, learning becomes easier. This has implications for how we design and train neural networks.

Keywords

  • Artificial intelligence