
Convergence of Sharpness-Aware Minimization Algorithms using Increasing Batch Size and Decaying Learning Rate

by Hinata Harada, Hideaki Iiduka

First submitted to arXiv on: 16 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Optimization and Control (math.OC)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
The sharpness-aware minimization (SAM) algorithm and its variants, including gap-guided SAM (GSAM), improve the generalization of deep neural network models by finding flat local minima of the empirical loss. Theoretical and practical studies have shown that increasing the batch size or decaying the learning rate helps avoid sharp local minima. This paper analyzes GSAM with an increasing batch size or a decaying learning rate, such as a cosine-annealed or linearly decaying schedule, and proves that the algorithm converges in these settings. Numerical comparisons of SAM and GSAM with and without an increasing batch size show that an increasing batch size or a decaying learning rate finds flatter local minima than a constant batch size and learning rate.
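
For reference, the schedules mentioned above are commonly written as follows; these are representative textbook forms, and the exact schedules and constants analyzed in the paper may differ.

% Illustrative schedules (not necessarily the paper's exact definitions)
\eta_t = \frac{\eta_{\max}}{2}\left(1 + \cos\frac{\pi t}{T}\right)   % cosine annealing over T epochs
\eta_t = \eta_{\max}\left(1 - \frac{t}{T}\right)                     % linear decay
b_t = b_0 \cdot 2^{\lfloor t/E \rfloor}                              % batch size doubled every E epochs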
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about how to make deep neural networks work better by finding the right way to train them. The idea is to use a special kind of training method called sharpness-aware minimization, which helps the network avoid getting stuck in bad places. Other researchers have found that increasing the batch size or slowing down the learning rate can also help the network find better solutions. In this paper, the authors test whether combining these ideas works even better. They show mathematically that it does and then compare the different approaches to see which one works best.
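
To make the mechanics concrete, here is a minimal Python (PyTorch) sketch of a SAM-style training loop that doubles the batch size at fixed intervals and cosine-anneals the learning rate. It is an illustrative reconstruction, not the authors' code: the perturbation radius rho, the doubling interval, and the helper names sam_step and cosine_lr are assumptions made for this example, and the GSAM-specific surrogate-gap term is omitted.

import math
import torch
from torch.utils.data import DataLoader

rho = 0.05             # SAM perturbation radius (assumed value for illustration)
eta_max, T = 0.1, 100  # peak learning rate and total number of epochs

def cosine_lr(epoch):
    # Cosine-annealed learning rate: eta_t = eta_max * (1 + cos(pi * t / T)) / 2
    return eta_max * (1.0 + math.cos(math.pi * epoch / T)) / 2.0

def sam_step(model, loss_fn, x, y, lr):
    # First pass: gradient at the current weights.
    loss_fn(model(x), y).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # Ascent step: perturb the weights toward higher loss within radius rho.
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / grad_norm)

    # Second pass: the gradient at the perturbed point drives the actual update.
    model.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / grad_norm)   # undo the perturbation
            p.sub_(lr * p.grad)           # gradient descent with the SAM gradient
    model.zero_grad()

def train(model, loss_fn, train_dataset, base_batch=128, double_every=30):
    for epoch in range(T):
        # Increasing batch size: double the batch every `double_every` epochs.
        batch_size = base_batch * (2 ** (epoch // double_every))
        loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
        lr = cosine_lr(epoch)
        for x, y in loader:
            sam_step(model, loss_fn, x, y, lr)

Increasing the batch size reduces the variance of the stochastic gradient as training progresses, which is why it can play a role similar to decaying the learning rate.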

Keywords

» Artificial intelligence  » Generalization  » Neural network  » SAM