
Summary of SGD with Memory: Fundamental Properties and Stochastic Acceleration, by Dmitry Yarotsky et al.


SGD with memory: fundamental properties and stochastic acceleration

by Dmitry Yarotsky, Maksim Velikanov

First submitted to arXiv on: 5 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Optimization and Control (math.OC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper addresses an open problem for mini-batch stochastic gradient descent (SGD)-type algorithms on quadratic problems with a power-law spectrum. In the non-stochastic setting, Heavy Ball (HB) with a suitable schedule achieves the optimal exponent ξ in the loss convergence L_t ∼ C_L t^{-ξ}, but this no longer works in the presence of mini-batch noise. The authors propose first-order methods with an arbitrary fixed number M of auxiliary velocity vectors (memory-M algorithms), which they show retain the exponent ξ of plain GD but can have different constants C_L depending on the effective learning rate. Using characteristic polynomials and signal and noise propagators, they develop a general expansion of the loss, and they propose a memory-1 algorithm with a time-dependent schedule that improves on the exponent ξ of plain SGD.
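
As a rough illustration of what a "memory-1" method looks like in practice, the sketch below runs a Heavy-Ball-style mini-batch update (one auxiliary velocity vector) on a synthetic quadratic whose Hessian eigenvalues follow a power law. This is not the paper's algorithm or schedule: the problem setup, the noise model, and the fixed constants alpha and beta are illustrative assumptions.

    import numpy as np

    # Illustrative sketch only: a Heavy-Ball-style "memory-1" mini-batch method
    # on a quadratic L(w) = 0.5 * sum_k lam_k * (w_k - w*_k)^2 whose spectrum
    # follows a power law lam_k ~ k^(-nu). Constants and noise model are
    # assumptions, not the schedules derived in the paper.
    rng = np.random.default_rng(0)

    d = 500
    nu = 1.5
    lam = np.arange(1, d + 1) ** (-nu)                 # power-law eigenvalues (assumed)
    w_star = rng.normal(size=d) / np.sqrt(np.arange(1, d + 1))  # target vector (assumed)

    def minibatch_grad(w, batch_size=10):
        # Exact gradient plus noise whose scale mimics mini-batch sampling.
        g = lam * (w - w_star)
        noise = rng.normal(size=d) * np.sqrt(lam) / np.sqrt(batch_size)
        return g + noise

    w = np.zeros(d)
    v = np.zeros(d)                  # single auxiliary velocity vector (M = 1)
    alpha, beta = 0.5, 0.9           # fixed effective step and momentum (assumed)

    for t in range(1, 10_001):
        g = minibatch_grad(w)
        v = beta * v - alpha * g     # memory-1 velocity update
        w = w + v                    # parameter update uses the velocity
        if t % 2000 == 0:
            loss = 0.5 * np.sum(lam * (w - w_star) ** 2)
            print(f"t={t:6d}  loss={loss:.3e}")

With a fixed alpha and beta the iteration is stable and the loss decays toward a noise floor set by the mini-batch noise; the paper's point is that a time-dependent schedule (here left out) is what changes the convergence exponent.
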
Low Difficulty Summary (original content by GrooveSquid.com)
The paper tries to make mini-batch gradient descent converge faster by finding new ways to update the model. Right now, there is no good way to do this quickly while keeping training stable once mini-batch noise is involved. The authors study methods that keep a small "memory" of past updates: these shrink the loss at the same overall rate as usual but with better constants, and a version whose settings change over time makes the loss shrink faster than plain SGD.

Keywords

» Artificial intelligence  » Gradient descent  » Stochastic gradient descent