Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad
by Sayantan Choudhury, Nazarii Tupitsa, Nicolas Loizou, Samuel Horvath, Martin Takac, Eduard Gorbunov
First submitted to arXiv on: 5 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Optimization and Control (math.OC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: The paper introduces KATE, a novel scale-invariant adaptation of AdaGrad. It proves the scale-invariance of KATE for Generalized Linear Models and establishes a convergence rate of O(log T/√T) for smooth non-convex problems. In numerical experiments on image classification and text classification tasks, KATE consistently outperforms AdaGrad and matches or surpasses the performance of Adam. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: This paper creates an adaptive method called KATE that makes learning-rate tuning easier. It shows that KATE works well for a type of model called Generalized Linear Models, and it also works for more general problems. The algorithm is tested on image and text classification tasks and does better than other popular algorithms. |
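The summaries compare KATE against AdaGrad, the algorithm it modifies. As background for readers unfamiliar with AdaGrad, here is a minimal per-coordinate AdaGrad step: note the square root over the accumulated squared gradients, which (per the paper's title) is what KATE removes to obtain scale-invariance. This is a standard textbook sketch of AdaGrad, not the KATE update itself; the learning rate, epsilon, and toy objective are illustrative choices.

```python
import numpy as np

def adagrad_step(x, grad, accum, lr=0.1, eps=1e-8):
    """One diagonal (per-coordinate) AdaGrad step.

    AdaGrad scales each coordinate's step by 1 / sqrt(sum of past
    squared gradients). KATE, as the paper's title suggests, removes
    that square root, yielding a scale-invariant method.
    """
    accum = accum + grad**2                       # running sum of squared gradients
    x = x - lr * grad / (np.sqrt(accum) + eps)    # per-coordinate adaptive step
    return x, accum

# Toy usage: minimize f(x) = 0.5 * ||x||^2, whose gradient at x is x.
x = np.array([5.0, -3.0])
accum = np.zeros_like(x)
for _ in range(500):
    x, accum = adagrad_step(x, x, accum)
print(np.linalg.norm(x))  # the iterate's norm shrinks over the run
```

Because the per-coordinate step size depends on the accumulated gradient magnitudes, rescaling a coordinate of the problem changes AdaGrad's trajectory; removing the square root is what lets KATE avoid this sensitivity.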
Keywords
- Artificial intelligence
- Image classification
- Optimization
- Text classification