Summary of Smooth Pseudo-Labeling, by Nikolaos Karaliolios et al.


Smooth Pseudo-Labeling

by Nikolaos Karaliolios, Hervé Le Borgne, Florian Chabot

First submitted to arXiv on: 23 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original GrooveSquid.com content)
The proposed Smooth Pseudo-Labeling (SPL) loss function is a significant improvement over traditional Pseudo-Labeling (PL) methods in Semi-Supervised Learning (SSL). Standard PL thresholds model confidence, which introduces discontinuities in the derivative of the loss and causes instabilities when labels are scarce; SPL modifies the loss to remove these discontinuities. Tested on FixMatch, the modification improves performance in the scarce-label regime without adding any modules, hyperparameters, or computational overhead, while performance remains stable in the abundant-label regime. The paper also proposes a new benchmark in which labeled images are selected at random from the whole dataset, without imposing representation proportional to class frequency. In this setting, adding more labeled images does not necessarily improve performance, highlighting an issue that SSL algorithm design must address to make Active Learning algorithms more reliable and explainable. The proposed improvements are also robust to variations in hyperparameters and training parameters, making the algorithm more reliable and efficient.
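
To make the thresholding issue concrete, below is a minimal PyTorch sketch, a hedged illustration rather than the paper's exact formulation. It contrasts a FixMatch-style Pseudo-Labeling loss with a hard confidence mask against one plausible smooth variant whose per-example weight ramps up continuously from zero at the threshold. The function names and the default threshold tau=0.95 are assumptions, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def pl_loss_hard(logits_weak, logits_strong, tau=0.95):
        # Standard PL: a hard 0/1 mask switches each example's loss on
        # abruptly once its confidence crosses tau, which is the source
        # of the discontinuities discussed above.
        probs = torch.softmax(logits_weak.detach(), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= tau).float()
        ce = F.cross_entropy(logits_strong, pseudo, reduction="none")
        return (mask * ce).mean()

    def pl_loss_smooth(logits_weak, logits_strong, tau=0.95):
        # Illustrative smooth variant (an assumption, not necessarily the
        # paper's SPL loss): weight each example by how far its confidence
        # exceeds tau, normalized to [0, 1], so its contribution grows
        # continuously from zero instead of jumping at the threshold.
        probs = torch.softmax(logits_weak.detach(), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        weight = torch.clamp((conf - tau) / (1.0 - tau), min=0.0)
        ce = F.cross_entropy(logits_strong, pseudo, reduction="none")
        return (weight * ce).mean()

Following FixMatch, logits_weak would come from a weakly augmented view of each unlabeled image (used only to form pseudo-labels, hence the detach) and logits_strong from a strongly augmented view of the same image.
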
Low Difficulty Summary (original GrooveSquid.com content)
This paper improves on traditional Pseudo-Labeling methods by introducing a Smooth Pseudo-Labeling loss function. The goal of Semi-Supervised Learning is to use a small amount of labeled data together with a large amount of unlabeled data and still reach the performance of training on fully labeled data. Tested on FixMatch, the new method performs better when labels are scarce, and it makes the learning process more stable and reliable.

Keywords

» Artificial intelligence  » Active learning  » Loss function  » Semi-supervised