Preconditioned Gradient Descent Finds Over-Parameterized Neural Networks with Sharp Generalization for Nonparametric Regression

by Yingzhen Yang

First submitted to arXiv on: 16 Jul 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG); Statistics Theory (math.ST)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a novel approach to nonparametric regression using over-parameterized two-layer neural networks. The authors demonstrate that Preconditioned Gradient Descent (PGD) with early stopping yields sharper generalization bounds than standard gradient descent, particularly in cases where the target function is affected by spectral bias. The results achieve a minimax optimal rate of O(1/n^{4α/(4α+1)}) and do not require distributional assumptions. Additionally, the authors establish uniform convergence to the Neural Tangent Kernel (NTK) during training and employ local Rademacher complexity to bound the generalization error. An illustrative code sketch of preconditioned gradient descent with early stopping appears after the summaries below.
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores a new way of performing nonparametric regression with neural networks. The authors find that a training method called Preconditioned Gradient Descent can make predictions more accurate than standard training. They show that this method works particularly well when the functions being predicted have certain patterns or biases. The results matter because they show how to obtain better generalization bounds without requiring specific assumptions about the data.

Keywords

* Artificial intelligence  * Early stopping  * Generalization  * Gradient descent  * Regression