
Summary of Annot-Mix: Learning with Noisy Class Labels from Multiple Annotators via a Mixup Extension, by Marek Herde et al.


Annot-Mix: Learning with Noisy Class Labels from Multiple Annotators via a Mixup Extension

by Marek Herde, Lukas Lührs, Denis Huseljic, Bernhard Sick

First submitted to arXiv on: 6 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The paper’s original abstract serves as the high difficulty summary; it is available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, researchers explore ways to improve the generalization performance of neural networks when trained with noisy class labels. They focus on a popular regularization technique called mixup, which is designed to make memorizing false class labels more difficult. However, they note that in real-world scenarios, multiple annotators often provide class labels, and current approaches neglect this aspect. To address this, the authors propose an extension of mixup that can handle multiple class labels per instance while considering the origin of each label from different annotators. This new approach is integrated into a multi-annotator classification framework called annot-mix, which outperforms eight state-of-the-art methods on eleven datasets with noisy class labels provided by both human and simulated annotators.
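
To make the mixup idea more concrete, below is a minimal sketch in Python. The first function is standard mixup on one-hot labels; the second is a hypothetical multi-annotator variant that keeps one label row per annotator plus a mask marking which annotators labeled each instance, so every mixed label still records its annotator of origin. Function names, array shapes, and the mask-based weighting are illustrative assumptions, not the authors’ annot-mix implementation.

import numpy as np

def mixup(x, y_onehot, alpha=0.2, rng=None):
    # Standard mixup: convexly combine random pairs of inputs and one-hot labels.
    # x: (N, ...) batch of instances; y_onehot: (N, C) one-hot class labels.
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)                  # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))                # random partner for each instance
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix

def multi_annotator_mixup(x, y_annot, mask, alpha=0.2, rng=None):
    # Hypothetical multi-annotator variant (illustrative only, not the paper's exact method).
    # y_annot: (N, M, C) one-hot labels, one row per annotator m and instance n.
    # mask:    (N, M)    1 if annotator m labeled instance n, else 0.
    # Labels are mixed per annotator, so each mixed label keeps the identity of the
    # annotator it came from; annotators who labeled neither partner contribute nothing.
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    w_own = lam * mask                            # weight of each instance's own labels
    w_partner = (1.0 - lam) * mask[perm]          # weight of the mixing partner's labels
    y_mix = w_own[..., None] * y_annot + w_partner[..., None] * y_annot[perm]
    mask_mix = np.clip(mask + mask[perm], 0.0, 1.0)   # annotators present in the mix
    return x_mix, y_mix, mask_mix

The point of the sketch is only that each mixed label still carries the annotator it came from, which is what allows a multi-annotator classifier such as annot-mix to treat labels from different annotators differently.
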
Low Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, scientists want to make sure that artificial intelligence models work well even when the labels in their training data are sometimes wrong. They use a technique called mixup to help these models avoid memorizing incorrect labels. But what if multiple people are helping to label the data? That’s where the new approach comes in: it takes into account who provided each label and combines the labels accordingly to produce better results. The authors tested this new method and found that it outperforms other approaches on a variety of datasets.

Keywords

» Artificial intelligence  » Classification  » Generalization  » Regularization