
Summary of Implicit to Explicit Entropy Regularization: Benchmarking ViT Fine-tuning under Noisy Labels, by Maria Marrium et al.


Implicit to Explicit Entropy Regularization: Benchmarking ViT Fine-tuning under Noisy Labels

by Maria Marrium, Arif Mahmood, Mohammed Bennamoun

First submitted to arXiv on: 5 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com, original content)
The research paper investigates the vulnerability of Vision Transformers (ViTs) to noisy labels during fine-tuning, comparing their robustness with that of Convolutional Neural Networks (CNNs). The study evaluates two ViT backbones using three classification losses and six robust Noisy Label Learning (NLL) methods on six datasets. The findings suggest that entropy regularization can enhance the performance of established loss functions and improve the resilience of NLL methods across both ViT backbones.
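The paper's exact formulation of explicit entropy regularization is not given in this summary; a common form adds a weighted entropy term of the model's predicted class distribution to the classification loss. The sketch below is a minimal NumPy illustration under that assumption: the function names, the regularization weight `lam`, and the sign convention (subtracting entropy to discourage overconfident predictions, which is one way it is used under noisy labels) are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy_regularized_ce(logits, labels, lam=0.1):
    """Cross-entropy loss with an entropy regularization term.

    lam > 0 subtracts the mean prediction entropy, rewarding
    less-confident (higher-entropy) predictions -- one hedge
    against memorizing noisy labels. (Illustrative formulation.)
    """
    p = softmax(logits)
    n = logits.shape[0]
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    ent = -(p * np.log(p + 1e-12)).sum(axis=1).mean()
    return ce - lam * ent
```

With `lam=0` this reduces to plain cross-entropy, so the regularizer can be toggled on top of any of the benchmarked losses without changing the baseline.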

Low Difficulty Summary (GrooveSquid.com, original content)
Automatic annotation of large-scale datasets can introduce noisy labels into the training data, which affect the learning process of deep neural networks such as Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs). This study looks at how well ViTs handle noisy labels during fine-tuning. The authors compare the robustness of ViTs with that of CNNs using different classification losses and NLL methods on various datasets.

Keywords

» Artificial intelligence  » Classification  » Fine-tuning  » Regularization  » ViT