Summary of Foster Adaptivity and Balance in Learning with Noisy Labels, by Mengmeng Sheng et al.


Foster Adaptivity and Balance in Learning with Noisy Labels

by Mengmeng Sheng, Zeren Sun, Tao Chen, Shuchao Pang, Yucheng Wang, Yazhou Yao

First submitted to arXiv on: 3 Jul 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper addresses label noise in real-world data, which can degrade the generalization performance of deep neural networks. Existing methods rely on dataset-dependent prior knowledge and neglect class balance, leading to biased model performance. The authors propose a simple yet effective approach called SED, which tackles label noise in a self-adaptive and class-balanced manner. Specifically, the method combines a sample selection strategy with a mean-teacher model to correct noisy labels and re-weight the detected samples, and additionally applies consistency regularization to clean samples to further improve generalization. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of the approach.
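The summary mentions a mean-teacher model and consistency regularization; the paper's exact formulation is not reproduced here, but the general pattern behind these two ideas can be sketched as follows. All names, the momentum value, and the toy weights below are illustrative assumptions, not details taken from the paper:

```python
def ema_update(teacher, student, momentum=0.99):
    """Mean-teacher update: the teacher's weights track an exponential
    moving average (EMA) of the student's weights."""
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}

def consistency_loss(p_student, p_teacher):
    """Consistency regularization: penalize disagreement between the
    student's and teacher's predicted class probabilities (MSE here)."""
    return sum((a - b) ** 2 for a, b in zip(p_student, p_teacher)) / len(p_student)

# Toy example: two scalar "weights" and 3-class probability vectors.
teacher = {"w": 1.0, "b": 0.0}
student = {"w": 0.0, "b": 1.0}
teacher = ema_update(teacher, student, momentum=0.9)
# teacher["w"] = 0.9*1.0 + 0.1*0.0 = 0.9; teacher["b"] = 0.1

p_s = [0.7, 0.2, 0.1]
p_t = [0.6, 0.3, 0.1]
loss = consistency_loss(p_s, p_t)  # (0.01 + 0.01 + 0.0) / 3
```

In this family of methods the teacher, being a smoothed average of past students, tends to give more stable predictions, which is what makes it a plausible source of corrected labels for samples flagged as noisy.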
Low Difficulty Summary (original content by GrooveSquid.com)
The paper talks about a problem called label noise, which makes it hard for computers to learn from noisy data. Current methods don’t work well because they rely too much on specific information about each dataset and ignore an important aspect of the data called class balance. The authors propose a new method that tries to adapt to different situations and balances classes better. They use a combination of techniques, including correcting wrong labels and adjusting how much the model relies on certain samples. The results show that their approach works well on both artificial and real-world datasets.

Keywords

  • Artificial intelligence
  • Generalization
  • Regularization