Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning

by Jacob Mitchell Springer, Vaishnavh Nagarajan, Aditi Raghunathan

First submitted to arXiv on: 30 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read it on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Sharpness-Aware Minimization (SAM) has emerged as a promising alternative to stochastic gradient descent (SGD) for optimizing neural networks. While its original motivation was to bias networks towards flatter minima that generalize better, recent studies have found conflicting evidence on the relationship between flatness and generalization. Rather than weighing in on that debate, the authors identify an orthogonal effect of SAM: it balances diverse features by adaptively suppressing well-learned ones, allowing the remaining features to be learned. This mechanism benefits datasets with redundant or spurious features, where SGD falls prey to simplicity bias. The authors demonstrate the effect on real data using CelebA, Waterbirds, CIFAR-MNIST, and DomainBed (a generic sketch of the SAM update step appears after these summaries).

Low Difficulty Summary (written by GrooveSquid.com, original content)
SAM is a newer way to train neural networks that can work better than another method called stochastic gradient descent (SGD). People thought it was good because it makes the network find a “flatter” place to stop, but some other studies said this doesn’t really make a difference. Instead, the authors looked at what SAM actually does and found something new: it helps the network learn more features by easing off the ones it already knows well. This is helpful when there are extra things in the data that aren’t important. The authors tested SAM on real images and found that it learns better features than SGD.

Keywords

» Artificial intelligence  » Generalization  » SAM  » Stochastic gradient descent