Summary of Bidirectional Uncertainty-based Active Learning for Open Set Annotation, by Chen-Chen Zong et al.
Bidirectional Uncertainty-Based Active Learning for Open Set Annotation
by Chen-Chen Zong, Ye-Wen Wang, Kun-Peng Ning, Hai-Bo Ye, Sheng-Jun Huang
First submitted to arXiv on: 23 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on its arXiv page |
Medium | GrooveSquid.com (original content) | The proposed Bidirectional Uncertainty-based Active Learning (BUAL) framework tackles active learning in open-set scenarios by querying examples that are both likely to belong to known classes and highly informative. It first applies a Random Label Negative Learning method, which pushes unknown-class examples toward regions of high-confidence prediction, and then uses a Bidirectional Uncertainty sampling strategy that jointly estimates the uncertainty arising from positive and negative learning to keep sampling consistent and stable (sketched in code below). Extensive experiments on multiple datasets show that BUAL achieves state-of-the-art performance in open-set scenarios. |
Low | GrooveSquid.com (original content) | Active learning tries to pick the most useful examples to label from a big pool of unlabeled data that has some new classes mixed in with the ones you already know. Right now there are two main ways to do this: one looks for simple examples, and the other looks for complex examples. Both have problems: they can end up picking too many simple examples or missing important complex ones. This paper proposes a new method that combines the best of both worlds by picking examples that are likely from known classes and also very informative. The method, called Bidirectional Uncertainty-based Active Learning (BUAL), uses two main ideas: one pushes unknown-class examples toward regions with high-confidence predictions, and the other jointly estimates the uncertainty posed by both positive and negative learning to perform consistent and stable sampling. |
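To make the two ideas above more concrete, here is a minimal illustrative sketch in PyTorch. It is not the authors' implementation: the function names, the random-complementary-label loss form, the entropy-based uncertainty measure, the two-head setup (`pos_logits` / `neg_logits`), and the `balance` weight are all simplifying assumptions made for this summary.

```python
import torch


def random_label_negative_loss(logits, num_classes):
    # Negative learning with randomly assigned complementary labels:
    # for each unlabeled example, draw a random class k and penalize the
    # model for putting probability on it ("this example is NOT class k").
    probs = torch.softmax(logits, dim=1)
    rand_labels = torch.randint(0, num_classes, (logits.size(0),),
                                device=logits.device)
    p_rand = probs.gather(1, rand_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_rand + 1e-8).mean()


def entropy(probs):
    # Predictive entropy as a simple uncertainty measure.
    return -(probs * torch.log(probs + 1e-8)).sum(dim=1)


def bidirectional_uncertainty(pos_logits, neg_logits, balance=0.5):
    # Combine the uncertainty of the positively trained classifier with
    # that of the negatively trained one; examples scoring high on the
    # combined measure are candidates for annotation.
    u_pos = entropy(torch.softmax(pos_logits, dim=1))
    u_neg = entropy(torch.softmax(neg_logits, dim=1))
    return balance * u_pos + (1.0 - balance) * u_neg
```

In an active-learning round, one would score the unlabeled pool with `bidirectional_uncertainty` and send the top-scoring examples to the annotator; the exact scoring rule and weighting in the paper may differ from this sketch.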
Keywords
- Artificial intelligence
- Active learning