


Improving Resistance to Noisy Label Fitting by Reweighting Gradient in SAM

by Hoang-Chau Luong, Thuc Nguyen-Quang, Minh-Triet Tran

First submitted to arXiv on: 26 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read it on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper tackles noisy labels in machine learning, which can cause models to overfit and generalize poorly. Sharpness-Aware Minimization (SAM) has been shown to improve generalization in classification tasks with noisy labels by slowing down the fitting of those labels, but its effectiveness in more realistic training settings remains underexplored. The authors analyze SAM’s behavior at each iteration and identify the components that drive its robustness to noisy labels. Building on this analysis, they propose SANER, a variant of SAM that reweights the gradient to better control how quickly noisy labels are fit (a schematic code sketch follows the summaries below). Experiments on CIFAR-10, CIFAR-100, and Mini-WebVision show that SANER outperforms SAM, with up to an 8% accuracy gain under 50% label noise.
Low Difficulty Summary (written by GrooveSquid.com, original content)
Noisy labels are a big problem for machine learning, making it hard to get good results. Sharpness-Aware Minimization (SAM) is a technique that helps by slowing down learning when the data is noisy. This paper looks at how SAM works and finds ways to make it better, leading to a new version called SANER that does even better than SAM. In tests on image datasets, SANER got more answers right than SAM.
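
To make the idea above more concrete, here is a minimal, self-contained PyTorch sketch of a SAM-style two-pass update with a hook for reweighting the gradient before the final parameter step. Only the abstract is summarized on this page, so the exact SANER rule is not reproduced: the names `sam_step` and `downweight_flipped`, the sign-flip heuristic, and all hyperparameters are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn

# Toy setup for illustration only: a linear classifier on random data.
torch.manual_seed(0)
model = nn.Linear(10, 3)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 10)            # 32 examples, 10 features
y = torch.randint(0, 3, (32,))     # 3 classes; in practice some labels may be noisy

rho = 0.05  # radius of SAM's worst-case weight perturbation (assumed value)


def sam_step(reweight=None):
    # Pass 1: gradient of the loss at the current weights.
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    first_grads = [p.grad.detach().clone() for p in model.parameters()]

    # Ascent step: move weights toward the local worst case within radius rho.
    grad_norm = torch.norm(torch.stack([g.norm() for g in first_grads]))
    perturbations = []
    with torch.no_grad():
        for p, g in zip(model.parameters(), first_grads):
            e = rho * g / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append(e)

    # Pass 2: gradient of the loss at the perturbed weights (the SAM gradient).
    optimizer.zero_grad()
    criterion(model(x), y).backward()

    # Undo the perturbation, optionally reweight the gradient, then update.
    with torch.no_grad():
        for p, e in zip(model.parameters(), perturbations):
            p.sub_(e)
    if reweight is not None:
        reweight(model, first_grads)
    optimizer.step()


def downweight_flipped(model, first_grads, alpha=0.5):
    # Hypothetical reweighting rule (NOT the paper's SANER formula):
    # shrink gradient coordinates whose sign flipped between the two passes,
    # on the intuition that unstable components are more likely noise-driven.
    with torch.no_grad():
        for p, g1 in zip(model.parameters(), first_grads):
            flipped = (p.grad * g1) < 0
            p.grad[flipped] *= alpha


sam_step()                               # plain SAM-style update
sam_step(reweight=downweight_flipped)    # update with a hypothetical gradient reweighting
```

The structural point the sketch captures is that SAM computes the gradient twice, at the original and at the perturbed weights, which gives a natural place to reweight gradient components before the parameter update; how SANER actually chooses those weights is detailed in the paper itself.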

Keywords

» Artificial intelligence  » Classification  » Generalization  » Machine learning  » Overfitting  » SAM