
Summary of Learning Discriminative Dynamics with Label Corruption for Noisy Label Detection, by Suyeon Kim et al.


Learning Discriminative Dynamics with Label Corruption for Noisy Label Detection

by Suyeon Kim, Dongha Lee, SeongKu Kang, Sukang Chae, Sanghwan Jang, Hwanjo Yu

First submitted to arXiv on: 30 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The paper’s original abstract serves as the high difficulty summary.

Medium Difficulty Summary (GrooveSquid.com original content)
The proposed DynaCor framework is a novel approach for detecting incorrectly labeled instances in datasets with label noise, which can significantly degrade a model’s generalization performance. Unlike prior methods that rely on a single distinguishable training signal, such as the training loss, DynaCor leverages the full dynamics of training signals to distinguish clean from noisy labels. To do so, it introduces a label corruption strategy that intentionally corrupts some labels in the original dataset, indirectly simulating the model’s behavior on noisy labels. DynaCor then learns latent representations of the training dynamics that induce two clearly separable clusters, one for clean instances and one for noisy ones. In comprehensive experiments, the framework outperforms state-of-the-art competitors and demonstrates strong robustness to various noise types and noise rates.
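The core idea described above can be illustrated with a small sketch. This is not the authors' implementation, and the loss-trajectory shapes below are invented toy data; it only shows the mechanism the summary describes: corrupt a known subset of labels, record each instance's training dynamics (here, a per-epoch loss trajectory), cluster the trajectories into two groups, and use the intentionally corrupted instances to decide which cluster is the "noisy" one.

```python
# Toy sketch of the corruption-then-cluster idea (assumed simplification,
# not the paper's actual architecture or training procedure).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
epochs = 10

# Invented training dynamics: clean instances' losses decay quickly,
# while noisy instances' losses stay high (memorized only late, if at all).
clean = np.exp(-np.linspace(0, 3, epochs)) + 0.05 * rng.standard_normal((150, epochs))
noisy = 1.0 - 0.3 * np.linspace(0, 1, epochs) + 0.05 * rng.standard_normal((50, epochs))
dynamics = np.vstack([clean, noisy])  # one loss trajectory per instance

# Indices whose labels we corrupted on purpose, so we KNOW they are noisy.
corrupted_idx = np.arange(150, 170)

# Cluster the trajectories into two groups.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(dynamics)
labels = km.labels_

# The cluster holding most of the intentionally corrupted instances
# is identified as the "noisy" cluster; its members are flagged.
noisy_cluster = np.bincount(labels[corrupted_idx]).argmax()
predicted_noisy = np.where(labels == noisy_cluster)[0]
```

In this toy setting the deliberately corrupted instances act as anchors: they remove the need for a loss threshold, because whichever cluster they land in is, by construction, the noisy one.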

Low Difficulty Summary (GrooveSquid.com original content)
DynaCor is a new way to detect mistakes in labeled data. Such label noise can make machine learning models perform poorly. Previous approaches relied on certain training signals, such as the loss, to spot mislabeled examples, but these signals weren’t always reliable or general enough. DynaCor takes a different approach by looking at how the model’s behavior changes over time. It intentionally adds some wrong labels to the data and uses them to teach itself to recognize clean and noisy labels. This framework outperforms others in tests and can handle various types and amounts of noise.

Keywords

* Artificial intelligence  * Generalization  * Machine learning