Summary of Boosting Single Positive Multi-label Classification with Generalized Robust Loss, by Yanxi Chen et al.
Boosting Single Positive Multi-label Classification with Generalized Robust Loss
by Yanxi Chen, Chunxiao Li, Xinyang Dai, Jinhuan Li, Weiyu Sun, Yiming Wang, Renyuan Zhang, Tinghe Zhang, Bo Wang
First submitted to arXiv on: 6 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper investigates Single Positive Multi-label Learning (SPML), where each image is annotated with only a single positive label, unlike traditional multi-label learning, which requires comprehensive annotations. Existing SPML methods focus on designing losses using mechanisms such as hard pseudo-labeling and robust losses, but these often produce unacceptable false negatives. To address this, the authors propose a generalized loss framework based on expected risk minimization that provides soft pseudo-labels, and they show that existing losses can be seamlessly converted into this framework. They also design a novel robust loss within the framework that balances false positives and false negatives and handles class imbalance. Experiments show that the approach significantly improves SPML performance and outperforms state-of-the-art methods on four benchmarks. A rough illustrative sketch of the soft pseudo-label idea appears below the table. |
Low | GrooveSquid.com (original content) | This paper is about a new way to do machine learning when we only have one labeled answer for each image, even though an image can have many correct labels. Right now, most multi-label models need every correct answer to be labeled, which can be hard to get. The authors make this easier by creating a method that only needs one labeled answer per image. They came up with a new way to do this and tested it on four different datasets. Their method worked really well and was better than most other methods. |
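
To make the medium summary a bit more concrete, here is a minimal, hypothetical PyTorch sketch of the general soft pseudo-label idea for single-positive multi-label training. It is not the paper's Generalized Robust Loss: the function name `spml_soft_pseudo_loss`, the `neg_weight` parameter, and the rule of reusing detached predictions as soft targets are illustrative assumptions; only the overall setup (one annotated positive per image, soft targets for the unobserved labels, down-weighting to handle class imbalance) comes from the summary above.

```python
# Minimal sketch (not the paper's exact loss): the one observed positive uses a
# standard BCE term with target 1, while all unobserved labels get a soft
# pseudo-label target taken from the model's own detached predictions.
import torch
import torch.nn.functional as F


def spml_soft_pseudo_loss(logits, observed_pos, neg_weight=0.3):
    """logits: (batch, num_classes); observed_pos: (batch,) index of the
    single annotated positive label for each image. Names and weights are
    illustrative assumptions."""
    probs = torch.sigmoid(logits)

    # Soft pseudo-labels for unobserved entries: reuse detached predictions so
    # confident entries are not forced toward hard zeros (fewer false negatives).
    targets = probs.detach().clone()
    targets.scatter_(1, observed_pos.unsqueeze(1), 1.0)  # known positive -> 1

    per_entry = F.binary_cross_entropy(probs, targets, reduction="none")

    # Down-weight the many unobserved entries so the single annotated positive
    # is not overwhelmed (a simple stand-in for class-imbalance handling).
    weights = torch.full_like(per_entry, neg_weight)
    weights.scatter_(1, observed_pos.unsqueeze(1), 1.0)
    return (weights * per_entry).mean()


if __name__ == "__main__":
    # Example usage with random data (assumed shapes only).
    logits = torch.randn(4, 20, requires_grad=True)
    observed_pos = torch.randint(0, 20, (4,))
    loss = spml_soft_pseudo_loss(logits, observed_pos)
    loss.backward()
    print(float(loss))
```

Reusing detached predictions as soft targets is just one common way to avoid pushing all unannotated labels toward hard zeros; the paper's actual pseudo-labeling rule and robust loss may differ.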
Keywords
» Artificial intelligence » Machine learning