Summary of Scaling Exponents Across Parameterizations and Optimizers, by Katie Everett et al.
Scaling Exponents Across Parameterizations and Optimizers
by Katie Everett, Lechao Xiao, Mitchell Wortsman, Alexander A. Alemi, Roman Novak, Peter J. Liu, Izzeddin Gur, Jascha Sohl-Dickstein, Leslie Pack Kaelbling, Jaehoon Lee, Jeffrey Pennington
First submitted to arXiv on: 8 Jul 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on arXiv. |
Medium | GrooveSquid.com (original content) | This paper takes a fresh look at model scaling by re-examining a key assumption in prior work about the alignment between parameters and data, and derives new theoretical results under weaker assumptions and a broader set of optimizers. The authors empirically evaluate tens of thousands of models across combinations of optimizers, parameterizations, and learning rates to identify which choices let hyperparameters transfer across model sizes. Notably, they find that all parameterizations can achieve hyperparameter transfer, not just maximal update parameterization (muP). They also introduce Adam-atan2, a new version of the Adam optimizer that eliminates the epsilon hyperparameter entirely and improves numerical stability (see the sketch after the table). |
Low | GrooveSquid.com (original content) | This paper helps us understand how to make artificial intelligence models work better when they get bigger. Right now, making these models work well requires adjusting many small details. Researchers thought that some assumptions about how the model’s “parameters” (think of them like puzzle pieces) matched up with the data were important for this process. But what if those assumptions aren’t always true? The authors of this paper investigate and find new ways to make the models work better, without needing to adjust all those small details. They also introduce a new way to use the Adam optimizer that makes it more reliable and efficient. |
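A minimal sketch of the Adam-atan2 idea, assuming the standard Adam moment estimates m̂ and v̂: the usual per-coordinate step m̂ / (√v̂ + ε) is replaced by an atan2-based form, which behaves like the plain ratio when the second-moment term is large but stays bounded and well-defined as both moments shrink, so no ε is needed. The function names and toy values below are illustrative only, not the paper's exact prescription (which may include additional scaling constants).

```python
import math

def adam_update(m_hat, v_hat, eps=1e-8):
    # Standard Adam per-coordinate step direction: needs a small eps
    # to avoid dividing by zero when v_hat underflows.
    return m_hat / (math.sqrt(v_hat) + eps)

def adam_atan2_update(m_hat, v_hat):
    # Hypothetical Adam-atan2-style step direction: atan2(x, y) ~= x / y
    # when |x| << y, but remains bounded and well-defined even as both
    # moment estimates approach zero, so no eps hyperparameter is needed.
    return math.atan2(m_hat, math.sqrt(v_hat))

# Example: with tiny moment estimates, eps dominates the denominator of the
# standard update and shrinks it, while the atan2 form degrades gracefully.
print(adam_update(1e-12, 1e-24))        # ~1e-4, suppressed by eps
print(adam_atan2_update(1e-12, 1e-24))  # ~0.785, still a sensible step
```

Because atan2(c·m̂, c·√v̂) = atan2(m̂, √v̂) for any c > 0, the update is invariant to a common rescaling of the gradient moments, which matches the scale-invariance the authors highlight.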
Keywords
* Artificial intelligence
* Hyperparameter