
Summary of How to Set AdamW's Weight Decay As You Scale Model and Dataset Size, by Xi Wang et al.


How to set AdamW’s weight decay as you scale model and dataset size

by Xi Wang, Laurence Aitchison

First submitted to arXiv on: 22 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores how the AdamW weight decay hyperparameter should be set as model and dataset size change. It shows that the weights learned by AdamW can be understood as an exponential moving average (EMA) of recent updates, with an averaging timescale set jointly by the learning rate and the weight decay, which gives a principled way to choose the weight decay. The optimal EMA timescale is found to be roughly constant as model and dataset size change, leading to rules for scaling the weight decay hyperparameter as dataset size or model size grows. The study validates these findings on various architectures trained on different datasets, with implications for building larger models and optimizing their performance.
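To make the EMA idea concrete, here is a minimal numeric sketch (not from the paper’s code; the learning rate, weight decay, and synthetic update sequence are illustrative assumptions) showing that an AdamW-style parameter trajectory is exactly an EMA of its recent updates, with smoothing coefficient lr × wd and hence an averaging timescale of 1/(lr × wd) iterations:

```python
# A minimal numeric sketch (not from the paper's code) of the EMA view of
# AdamW's weight decay. The learning rate, weight decay, and synthetic
# update sequence below are illustrative assumptions.
import random

lr, wd = 1e-2, 0.1  # learning rate (eta) and weight decay (lambda)
random.seed(0)
# Stand-in for the Adam update direction m_hat / (sqrt(v_hat) + eps):
updates = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# AdamW-style step with decoupled weight decay:
#   theta <- theta - lr*wd*theta - lr*u
theta = 0.0
for u in updates:
    theta = theta - lr * wd * theta - lr * u

# The same trajectory written as an EMA of -u/wd with smoothing
# coefficient lr*wd, i.e. an averaging timescale of 1/(lr*wd) iterations:
ema = 0.0
for u in updates:
    ema = (1 - lr * wd) * ema + (lr * wd) * (-u / wd)

print(theta, ema)  # the two trajectories agree up to floating-point rounding
```

Because the two update rules are algebraically identical, tuning the weight decay is equivalent to tuning how many recent updates the weights average over, which is what makes the constant-timescale finding actionable.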
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper looks at how to make machine learning models train better by adjusting a special number called the AdamW weight decay hyperparameter. It shows that this number is connected to how big the model and dataset are, which matters when we want to build really large models. The researchers found that there is an ideal “averaging window” over recent updates that makes the model work well, and it stays roughly the same regardless of size. They also showed that as the dataset gets bigger this special number should get smaller, and as the model gets bigger it should get larger. This research helps us build better models and understand how to get them to work well.
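As one concrete reading of the dataset-size rule (a sketch under stated assumptions, not the paper’s code): if the averaging timescale should stay a constant fraction of training, and training iterations grow linearly with dataset size at a fixed learning rate, batch size, and number of epochs, then the weight decay must shrink by the same factor. The function name and the numbers in the example are hypothetical.

```python
# A hedged sketch of the dataset-size scaling rule implied by a roughly
# constant EMA timescale. Assumes the timescale is measured as a fraction
# of total training iterations, with learning rate, batch size, and epochs
# held fixed; the function name and example numbers are hypothetical.
def scaled_weight_decay(base_wd: float, base_dataset_size: int,
                        new_dataset_size: int) -> float:
    """Keep the EMA timescale 1/(lr*wd), as a fraction of training
    length, constant: iterations grow linearly with dataset size, so
    the weight decay must shrink by the same factor."""
    return base_wd * base_dataset_size / new_dataset_size

# Example: if wd = 0.1 was tuned on 1B tokens, then on 4B tokens:
print(scaled_weight_decay(0.1, 1_000_000_000, 4_000_000_000))  # 0.025
```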

Keywords

» Artificial intelligence  » Hyperparameter  » Machine learning