Summary of Asymptotic Midpoint Mixup For Margin Balancing and Moderate Broadening, by Hoyong Kim et al.
Asymptotic Midpoint Mixup for Margin Balancing and Moderate Broadening
by Hoyong Kim, Semi Lee, Kangil Kim
First submitted to arXiv on: 26 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The proposed asymptotic midpoint mixup method addresses the intra-class collapse issue in coarse-to-fine transfer learning by balancing the margins of all classes and moderately broadening them until maximal confidence is reached. The approach generates augmented features by interpolation, moving them toward the midpoint of inter-class feature pairs. This induces two effects: balancing the margin and reducing intra-class collapse. The method outperforms other augmentation methods in both coarse-to-fine transfer learning and imbalanced learning on long-tailed datasets. |
| Low | GrooveSquid.com (original content) | The researchers created a new way to improve machine learning models by fixing a problem called intra-class collapse, which happens when features get too close together, making it hard for the model to tell classes apart. Their method, called asymptotic midpoint mixup, balances the margins between classes while keeping features from collapsing onto one another, which helps the model perform better in these situations. |
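The medium summary describes interpolating feature pairs from different classes toward their midpoint. As a rough sketch of that idea (not the authors' exact formulation; the function names and the interpolation schedule below are hypothetical), feature-level midpoint mixup might look like:

```python
import numpy as np

def midpoint_mixup(f_a, f_b, lam):
    """Interpolate a pair of inter-class features toward their midpoint.

    lam in [0, 0.5] controls how far each feature moves toward the
    midpoint; at lam = 0.5 both augmented features coincide with the
    exact midpoint (f_a + f_b) / 2.  (Illustrative sketch only.)
    """
    aug_a = (1.0 - lam) * f_a + lam * f_b
    aug_b = lam * f_a + (1.0 - lam) * f_b
    return aug_a, aug_b

def lam_schedule(step, total_steps, max_lam=0.5):
    """Hypothetical schedule: lam grows toward max_lam over training,
    so augmented features asymptotically approach the midpoint."""
    return max_lam * min(step / total_steps, 1.0)
```

Moving both features of an inter-class pair symmetrically toward the midpoint is what would balance the margin on both sides, consistent with the "margin balancing" effect the summary describes.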
Keywords
* Artificial intelligence * Machine learning * Transfer learning