Summary of A Noisy Elephant in the Room: Is Your Out-of-distribution Detector Robust to Label Noise?, by Galadrielle Humblot-Renaux, Sergio Escalera and Thomas B. Moeslund


A noisy elephant in the room: Is your out-of-distribution detector robust to label noise?

by Galadrielle Humblot-Renaux, Sergio Escalera, Thomas B. Moeslund

First submitted to arXiv on: 2 Apr 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com — original content)
The paper investigates the performance of 20 state-of-the-art out-of-distribution (OOD) detection methods when the underlying classifier is trained on unreliable labels, such as crowd-sourced or web-scraped data. The authors conduct extensive experiments across various datasets, noise levels, architectures, and checkpointing strategies to understand how class label noise affects OOD detection. Their findings highlight a previously overlooked limitation of existing methods: poor separation between incorrectly classified in-distribution (ID) samples and OOD samples.
Low Difficulty Summary (written by GrooveSquid.com — original content)
The paper looks at 20 ways to detect unusual pictures that aren’t part of what a computer vision system was trained on. These methods are meant to work after a classifier has already looked at the picture, but the authors want to know how well they do when the classifier itself was trained on bad or unreliable labels. They test these methods across different kinds of datasets, levels of noisy labels, and ways of training the model. The results show that existing methods have a big problem: they can’t tell apart pictures the classifier simply got wrong from pictures that are truly new.

Keywords

» Artificial intelligence