Summary of Prior2Posterior: Model Prior Correction for Long-Tailed Learning, by S Divakar Bhat et al.
Prior2Posterior: Model Prior Correction for Long-Tailed Learning
by S Divakar Bhat, Amit More, Mudit Soni, Surbhi Agrawal
First submitted to arXiv on: 21 Dec 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper proposes Prior2Posterior (P2P), a novel approach to the imbalanced-data problem in long-tailed recognition. Existing solutions remove the bias caused by the imbalanced prior distribution, but the effective prior a model actually learns during training can differ from the empirical prior computed from class frequencies. P2P therefore estimates the model's effective prior and adjusts the predicted probabilities post hoc, i.e. after training, to remove it (see the sketch after the table). A theoretical analysis shows that the proposed correction is optimal for models trained with naive cross-entropy loss or logit-adjusted loss. Experiments demonstrate state-of-the-art performance on several benchmark datasets, and the post-hoc correction can further improve existing methods. |
| Low | GrooveSquid.com (original content) | The paper helps machines learn better by fixing a problem with imbalanced data. When there's much more of one kind of example than another, it's hard for machines to recognize all the kinds equally well. The authors found a way to keep machines from getting biased toward the common kind, so they do better overall. They tested their idea on several datasets and showed that it works really well. |
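To make the post-hoc correction concrete, here is a minimal sketch, not the authors' implementation: it assumes the effective prior is estimated by averaging the model's predicted class probabilities over a held-out set, and then removes that prior from the logits in log space. The function names and the estimator are illustrative assumptions; the paper derives its own calculation of the effective prior.

```python
import numpy as np

def estimate_effective_prior(val_probs):
    """Estimate the prior the model has effectively learned by averaging
    its predicted class probabilities over a held-out set.
    (One plausible estimator; the paper derives its own calculation.)"""
    return val_probs.mean(axis=0)

def p2p_correct(logits, effective_prior):
    """Post-hoc prior correction: divide the posterior by the learned
    (effective) prior, which in log space means subtracting its log
    from the logits before taking softmax/argmax."""
    return logits - np.log(effective_prior)

# Toy usage: three classes, with the model biased toward head class 0.
rng = np.random.default_rng(0)
val_probs = rng.dirichlet([8.0, 1.5, 0.5], size=1000)  # stand-in predictions
prior = estimate_effective_prior(val_probs)
test_logits = np.log(np.array([[0.7, 0.2, 0.1]]))
print(p2p_correct(test_logits, prior))  # tail-class scores are boosted
```

Because the correction only rescales logits at inference time, it needs no retraining, which is why it can be layered on top of existing long-tailed methods.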
Keywords
- Artificial intelligence
- Cross entropy