
Learning with Imbalanced Noisy Data by Preventing Bias in Sample Selection

by Huafeng Liu, Mengmeng Sheng, Zeren Sun, Yazhou Yao, Xian-Sheng Hua, Heng-Tao Shen

First submitted to arXiv on: 17 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)
The proposed Class-Balance-based sample Selection (CBS) method addresses noisy labels in imbalanced datasets by preventing tail-class samples from being neglected during training. It also introduces Confidence-based Sample Augmentation (CSA) to enhance the reliability of selected clean samples, and rectifies noisy samples using their prediction history. To ensure the quality of the corrected labels, the Average Confidence Margin (ACM) metric is used, which leverages the model's evolving training dynamics. Finally, consistency regularization is applied to the filtered, label-corrected noisy samples to boost model performance.
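The summary does not give the paper's exact selection rule, but the core idea of class-balanced clean-sample selection can be sketched as follows. This is an illustrative assumption, not the authors' implementation: samples are ranked by loss within each class, and an equal per-class quota keeps tail classes from being crowded out of the selected clean set.

```python
import numpy as np

def class_balanced_selection(losses, labels, keep_frac=0.5):
    """Pick likely-clean samples per class by small loss, with an equal
    per-class quota so tail classes are not neglected (illustrative
    sketch; the per-class quota rule is an assumption)."""
    classes = np.unique(labels)
    # Equal quota per class, derived from the overall keep fraction.
    quota = max(1, int(keep_frac * len(labels) / len(classes)))
    selected = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        # Rank this class's samples by loss and keep the smallest-loss ones.
        ranked = idx[np.argsort(losses[idx])]
        selected.extend(ranked[:quota].tolist())
    return np.array(sorted(selected))

# Example: class 1 is a tail class, yet it still gets its quota of samples.
losses = np.array([0.1, 0.2, 0.9, 0.05, 0.8, 0.3])
labels = np.array([0, 0, 0, 0, 1, 1])
print(class_balanced_selection(losses, labels, keep_frac=0.5))  # → [3 5]
```

A plain global small-loss threshold would instead fill the clean set almost entirely with head-class samples, which is exactly the selection bias the method aims to prevent.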
Low Difficulty Summary (GrooveSquid.com original content)
The paper proposes a new method to deal with noisy labels and class imbalance in datasets. It uses two main techniques: CBS to choose clean samples and CSA to make them more reliable. Then, it corrects the labels of noisy samples using their prediction history. The quality of these corrected labels is measured by ACM, which helps remove low-quality ones. Finally, consistency regularization is applied to improve model performance.

Keywords

* Artificial intelligence
* Regularization