Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach
by Han Lu, Siyu Sun, Yichen Xie, Liqing Zhang, Xiaokang Yang, Junchi Yan
First submitted to arXiv on: 1 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: The paper proposes a new approach to improving classifier re-training in long-tailed recognition tasks. Building on the Decoupled Training paradigm, which separates representation learning from classifier re-training, the authors revisit existing classifier re-training methods on unified feature representations and evaluate their performance with a newly proposed metric, Logits Magnitude. They also introduce an approximate invariant, Regularized Standard Deviation, to optimize this metric during training. Based on this analysis, they develop a simple Logits Retargeting approach (LORT) that divides each one-hot label into a small probability for the true class and a large probability mass distributed over the negative classes (see the sketch after this table), achieving state-of-the-art performance on various imbalanced datasets. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: In this study, researchers looked at ways to improve how we train models for tasks where some classes have many more examples than others. They found a better way to measure model performance than the current standard, which led them to a new approach called Logits Retargeting (LORT) that helps models learn from imbalanced datasets. LORT is simple and doesn't require knowing how many samples each class has. The results show that LORT achieves better performance on a variety of datasets. |
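To make the label construction concrete, below is a minimal PyTorch sketch of how such a logits-retargeting target might look. This is an illustration of the idea described in the summary, not the authors' implementation: the function names and the `true_prob` value are hypothetical, and the paper's actual probabilities and training details may differ.

```python
import torch
import torch.nn.functional as F

def lort_targets(labels: torch.Tensor, num_classes: int,
                 true_prob: float = 0.1) -> torch.Tensor:
    """Split each one-hot label into a small probability on the true class
    and a large probability mass shared evenly by the negative classes.
    `true_prob` is a hypothetical hyperparameter, not the paper's value."""
    neg_prob = (1.0 - true_prob) / (num_classes - 1)
    targets = torch.full((labels.size(0), num_classes), neg_prob,
                         device=labels.device)
    targets[torch.arange(labels.size(0)), labels] = true_prob
    return targets

def lort_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Cross-entropy against the soft targets; PyTorch >= 1.10 accepts
    # class-probability targets in F.cross_entropy.
    return F.cross_entropy(logits, lort_targets(labels, logits.size(1)))
```

For example, with `logits = model(x)` and integer class labels `y`, `loss = lort_loss(logits, y)` would replace the usual hard-label cross-entropy. Note that the construction uses no per-class sample counts, consistent with the summary's point that LORT does not require prior knowledge of the class distribution.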
Keywords
» Artificial intelligence » Logits » One hot » Representation learning