Forming Auxiliary High-confident Instance-level Loss to Promote Learning from Label Proportions
by Tianhao Ma, Han Chen, Juncheng Hu, Yungang Zhu, Ximing Li
First submitted to arXiv on: 15 Nov 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | In this paper, the authors study Learning from Label Proportions (LLP), a weakly supervised task in which classifiers are trained on bags of instances annotated only with each bag's class proportions. The mainstream approach augments the bag-level objective with an auxiliary instance-level loss built from pseudo-labels, i.e., the model's own predictions. The authors observe that these pseudo-labels are often inaccurate because the predictions become over-smoothed, especially when bags are large. To address this, they propose L^2P-AHIL, which uses a dual entropy-based weight (DEW) to adaptively measure pseudo-label confidence and form a high-confidence instance-level loss (see the sketch after this table). Experiments on benchmark datasets show that L^2P-AHIL outperforms existing baselines, with the gap widening as bag size grows. |
| Low | GrooveSquid.com (original content) | Learning from Label Proportions is a way for computers to learn from incomplete information. Usually a model needs a label for every example, but here it only sees groups of examples and the fraction of each group that belongs to each class. The problem is that the computer's guesses about individual examples can be too smooth and not very accurate. To fix this, the researchers created a new approach called L^2P-AHIL, which weighs how confident the model is in each guess and focuses on the ones it trusts most. In tests on real data, this approach worked much better than other methods. |
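To make the mechanics concrete, here is a minimal PyTorch sketch of the general recipe the medium summary describes: a bag-level proportion loss plus a confidence-weighted instance-level pseudo-label loss. The function names, the 0.5 mixing coefficient, and the plain entropy weighting are illustrative assumptions; the paper's actual DEW is more elaborate than this summary specifies.

```python
import torch
import torch.nn.functional as F


def bag_proportion_loss(logits: torch.Tensor, bag_proportions: torch.Tensor) -> torch.Tensor:
    """Bag-level loss: cross-entropy between the bag's known class
    proportions and the mean of the model's instance-level predictions."""
    probs = F.softmax(logits, dim=1)          # (bag_size, num_classes)
    pred_proportions = probs.mean(dim=0)      # estimated class proportions
    return -(bag_proportions * torch.log(pred_proportions + 1e-8)).sum()


def weighted_instance_loss(logits: torch.Tensor) -> torch.Tensor:
    """Auxiliary instance-level loss on pseudo-labels, down-weighted by
    prediction entropy. This is a simplified stand-in for the paper's
    dual entropy-based weight (DEW), whose exact form the summary does
    not give: over-smoothed (high-entropy) predictions get weights near
    0, confident (low-entropy) ones near 1."""
    probs = F.softmax(logits, dim=1)
    pseudo_labels = probs.argmax(dim=1)       # pseudo-labels from predictions
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(logits.size(1))))
    weights = (1.0 - entropy / max_entropy).detach()
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (weights * ce).mean()


# Example: one bag of 64 instances over 10 classes with uniform proportions.
logits = torch.randn(64, 10)
proportions = torch.full((10,), 0.1)
loss = bag_proportion_loss(logits, proportions) + 0.5 * weighted_instance_loss(logits)
```

Note that the confidence weights are detached from the computation graph, so the model cannot lower the loss simply by making its predictions more uniform; gradients flow only through the cross-entropy term on the pseudo-labels.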
Keywords
- Artificial intelligence
- Supervised