Weak Correlations as the Underlying Principle for Linearization of Gradient-Based Learning Systems

by Ori Shem-Ur, Yaron Oz

First submitted to arXiv on: 8 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Statistical Mechanics (cond-mat.stat-mech); High Energy Physics – Theory (hep-th); Probability (math.PR); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper, written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper investigates deep learning models, particularly wide neural networks, as nonlinear dynamical physical systems. It finds that gradient-descent-based algorithms exhibit a linear structure in their parameter dynamics, akin to the neural tangent kernel. This linearity is attributed to weak correlations between the first-order and higher-order derivatives of the hypothesis function around its initial values. The study applies this framework to large-width neural networks, derives a bound on deviations from linearity during stochastic gradient descent training, and introduces a novel method for characterizing the asymptotics of random tensors.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how deep learning models work. It compares them to physical systems with lots of moving parts. The research finds that some learning algorithms behave like simple, predictable machines, making it easier to understand and improve them. By looking at how the algorithms change as they learn, scientists can better predict when they’ll start behaving in a straightforward way.

Keywords

  • Artificial intelligence
  • Deep learning
  • Gradient descent
  • Stochastic gradient descent