PSPU: Enhanced Positive and Unlabeled Learning by Leveraging Pseudo Supervision

by Chengjie Wang, Chengming Xu, Zhenye Gan, Jianlong Hu, Wenbing Zhu, Lizhuang Ma

First submitted to arXiv on: 9 Jul 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed Positive and Unlabeled (PU) learning framework, PSPU, addresses the issue of overfitted risk estimation in PU models by introducing pseudo-supervision. This is achieved through a two-step process: first, training the PU model, then using it to gather confident samples for pseudo-supervision. The framework also incorporates an additional consistency loss to mitigate noisy sample effects. PSPU demonstrates significant performance gains on MNIST, CIFAR-10, and CIFAR-100 datasets in both balanced and imbalanced settings, as well as competitive results on MVTecAD for industrial anomaly detection.
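The two-step process described above — train a PU model, then use its confident predictions on unlabeled data as pseudo-labels, regularized by a consistency loss — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, thresholds, and the use of a simple mean-squared consistency term are all assumptions made for the sketch.

```python
import numpy as np

def select_confident(probs, pos_thresh=0.95, neg_thresh=0.05):
    """Step 2 of the sketch: from a trained PU model's positive-class
    probabilities on unlabeled data, keep only samples the model is
    confident about and assign them pseudo-labels (1 = positive, 0 = negative).
    Thresholds are illustrative, not from the paper."""
    pos_idx = np.where(probs >= pos_thresh)[0]
    neg_idx = np.where(probs <= neg_thresh)[0]
    idx = np.concatenate([pos_idx, neg_idx])
    labels = np.concatenate([np.ones(len(pos_idx)), np.zeros(len(neg_idx))])
    return idx, labels

def consistency_loss(probs_view1, probs_view2):
    """A generic consistency term: penalize disagreement between the
    model's predictions on two augmented views of the same samples,
    which helps damp the effect of noisy pseudo-labels."""
    return float(np.mean((probs_view1 - probs_view2) ** 2))

# Example: four unlabeled samples scored by a (hypothetical) PU model.
probs = np.array([0.99, 0.50, 0.01, 0.97])
idx, labels = select_confident(probs)
# Sample 1 (prob 0.50) is ambiguous and is excluded from pseudo-supervision.
```

In a full training loop, the pseudo-labeled subset would feed a standard supervised classification loss, added to the PU risk and the consistency term.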
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces a new way to train machine learning models when only some positive examples are labeled and the rest of the data is unlabeled. This is called Positive and Unlabeled (PU) learning. The problem with PU learning is that a model can overfit to the examples it knows while remaining poor at handling data it hasn't seen labels for. To fix this, the authors created a new approach called PSPU. It works by first training a PU model, then using that model to find unlabeled examples it is confident about and treating them as extra labeled data to train the model again. This helps the model generalize beyond its original labeled examples. The authors tested their approach on several datasets and found that it worked well.

Keywords

  • Artificial intelligence
  • Anomaly detection
  • Machine learning