Summary of BEM: Balanced and Entropy-based Mix for Long-Tailed Semi-Supervised Learning, by Hongwei Zheng et al.
BEM: Balanced and Entropy-based Mix for Long-Tailed Semi-Supervised Learning
by Hongwei Zheng, Linyuan Zhou, Han Li, Jinming Su, Xiaoming Wei, Xiaoming Xu
First submitted to arXiv on: 1 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Balanced and Entropy-based Mix (BEM) is a novel data-mixing approach designed to tackle long-tailed semi-supervised learning (LTSSL). Current LTSSL methods re-balance the class distribution by quantity alone, neglecting the crucial aspect of uncertainty. BEM addresses this limitation with a class-balanced mix bank and an entropy-based learning approach that incorporates entropy-based sampling, selection, and loss functions to balance both data quantity and uncertainty. This approach significantly enhances various LTSSL frameworks and achieves state-of-the-art performance on multiple benchmarks. |
| Low | GrooveSquid.com (original content) | In this paper, researchers introduce a new way to combine data from different classes in long-tailed semi-supervised learning (LTSSL), called the Balanced and Entropy-based Mix (BEM). The problem with current methods is that they only try to balance the amount of data for each class, without considering how uncertain, or hard to classify, those examples are. BEM solves this with a special mix bank that stores data for each class, sampling more from classes with fewer examples, and then uses entropy-based ideas to pick the best data points and define the loss function. The approach can be used on its own or as an improvement to existing methods. |
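The core idea described above, weighting classes by both how rare they are (quantity) and how uncertain the model's predictions for them are (entropy), can be sketched roughly as follows. This is an illustrative approximation, not the paper's actual implementation; the function names, the `alpha` trade-off parameter, and the simple linear combination are all assumptions made for the sketch.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy of each row of predicted class probabilities."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def class_sampling_weights(labels, probs, num_classes, alpha=0.5):
    """Toy per-class sampling weights that grow when a class is rare
    (quantity term) and when its predictions are uncertain (entropy term).
    `alpha` trades off the two terms; all names here are illustrative,
    not taken from the BEM paper.
    """
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    inv_freq = 1.0 / np.maximum(counts, 1.0)  # rarer class -> larger weight

    ent = prediction_entropy(probs)
    mean_ent = np.array([
        ent[labels == c].mean() if (labels == c).any() else 0.0
        for c in range(num_classes)
    ])

    # Combine normalized quantity and uncertainty terms.
    w = (alpha * inv_freq / inv_freq.sum()
         + (1 - alpha) * mean_ent / max(mean_ent.sum(), 1e-12))
    return w / w.sum()
```

Under this sketch, a tail class with few examples and high predictive entropy receives a larger sampling weight, so the mix bank draws from it more often, which is the intuition the summaries attribute to BEM.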
Keywords
» Artificial intelligence » Loss function » Semi-supervised learning