


Perceptual Quality-based Model Training under Annotator Label Uncertainty

by Chen Zhou, Mohit Prabhushankar, Ghassan AlRegib

First submitted to arXiv on: 15 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
This version is the paper's original abstract. Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper investigates the impact of annotator label uncertainty on model reliability and generalizability, particularly in scenarios where training data is labeled by multiple annotators. It highlights that low-quality annotations can significantly degrade model performance, making it crucial to develop effective methods for handling such uncertainty. The authors propose a framework that uses perceptual quality scores to identify a subset of low-quality samples and assign them de-aggregated labels, thereby enhancing model reliability without requiring massive additional annotations (a minimal sketch of this selection-and-relabeling idea follows the summaries).

Low Difficulty Summary (original content by GrooveSquid.com)
The paper explores the problem of annotator label uncertainty in machine learning, where different annotators may have varying levels of expertise or understanding. It shows that model performance can degrade significantly when training on such uncertain labels. To address this, the authors introduce an approach that uses perceptual quality scores to select a subset of samples and assign them de-aggregated labels, which can improve model reliability without requiring a large amount of additional annotated data.

Keywords

* Artificial intelligence
* Machine learning