No More Adam: Learning Rate Scaling at Initialization is All You Need
by Minghao Xu, Lichuan Xiang, Xu Cai, Hongkai Wen
First submitted to arXiv on: 16 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper challenges the need for adaptive gradient methods in training deep neural networks, introducing SGD-SaI, a simple yet effective enhancement to stochastic gradient descent with momentum (SGDM). By scaling learning rates at initialization based on each parameter group’s gradient signal-to-noise ratio (g-SNR), SGD-SaI prevents training imbalances and halves optimizer memory usage compared to AdamW. Despite its simplicity, SGD-SaI consistently matches or outperforms AdamW on Transformer-based tasks such as ImageNet-1K classification with Vision Transformers and GPT-2 pretraining for large language models. The paper also demonstrates SGD-SaI’s robustness to hyperparameter variations and its practicality for diverse applications. A minimal code sketch of the scaling idea follows this table. |
| Low | GrooveSquid.com (original content) | SGD-SaI is a new way to train deep neural networks that doesn’t need adaptive methods like AdamW. Instead, it adjusts learning rates once, at the start of training, based on how much useful gradient signal each part of the network has. This helps prevent training imbalances and uses half the optimizer memory. SGD-SaI works well for tasks like image recognition and language modeling, and it handles different hyperparameter settings gracefully. |
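The core mechanism is simple enough to sketch. Below is a minimal, hypothetical PyTorch illustration of the idea as described in the summary: compute a per-parameter-tensor gradient signal-to-noise statistic from a single backward pass at initialization, freeze the resulting learning-rate scales, and then train with ordinary SGD with momentum. The `gsnr` proxy (mean gradient magnitude over elementwise standard deviation) and the function names here are assumptions for illustration, not the paper’s exact formulation.

```python
import torch
from torch import nn

def gsnr(grad: torch.Tensor, eps: float = 1e-8) -> float:
    """Hypothetical g-SNR proxy: gradient 'signal' (mean magnitude)
    over 'noise' (elementwise std). The paper's exact definition may
    differ; this is only an illustration."""
    if grad.numel() < 2:  # std is undefined for a single element
        return 1.0
    return (grad.abs().mean() / (grad.std() + eps)).item()

def build_sgd_sai(model: nn.Module, loss_fn, probe_batch,
                  base_lr: float = 0.1, momentum: float = 0.9):
    """Scale each parameter tensor's learning rate once, at
    initialization, using gradients from one probe batch; then
    train with plain SGD with momentum (no per-step adaptivity)."""
    inputs, targets = probe_batch
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()  # gradients at init

    param_groups = []
    for p in model.parameters():
        if p.grad is None:
            continue
        # Learning-rate scale is fixed here and never updated again.
        param_groups.append({"params": [p], "lr": base_lr * gsnr(p.grad)})
    model.zero_grad()
    # Optimizer state is just SGDM's momentum buffer, unlike Adam's
    # first- and second-moment estimates.
    return torch.optim.SGD(param_groups, momentum=momentum)

# Usage sketch:
# optimizer = build_sgd_sai(model, nn.CrossEntropyLoss(), (x, y))
```

Because the scales are fixed at initialization, the optimizer carries only SGDM’s momentum buffer rather than Adam-style moment estimates, which is where the roughly 50% memory saving over AdamW comes from.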
Keywords
» Artificial intelligence » Classification » Gpt » Hyperparameter » Pretraining » Stochastic gradient descent » Transformer