Summary of "Optimizing for ROC Curves on Class-Imbalanced Data by Training over a Family of Loss Functions" by Kelsey Lieberman et al.
Optimizing for ROC Curves on Class-Imbalanced Data by Training over a Family of Loss Functions
by Kelsey Lieberman, Shuai Yuan, Swarna Kamlam Ravindran, Carlo Tomasi
First submitted to arXiv on: 8 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This research paper addresses the challenge of training reliable classifiers under severe class imbalance in computer vision. Although prior work has proposed techniques to mitigate the issue, the authors observe that even slight changes in hyperparameter values can produce highly variable performance on binary problems. To address this, they propose Loss Conditional Training (LCT), which trains over a family of loss functions rather than a single one. Tested on both CIFAR and Kaggle competition datasets, the method improves model performance and robustness to hyperparameter choices. |
| Low | GrooveSquid.com (original content) | This paper tackles a big problem in computer vision: the training data can have very different numbers of examples for each class. That makes it hard to train good models that work well on new data. The authors came up with an idea called Loss Conditional Training (LCT), which makes models less sensitive to small changes in how they are trained. They tested the method on two big datasets and found that it works better than other methods. |
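To make the core idea concrete, here is a minimal sketch of what "training over a family of loss functions" might look like: sample a loss hyperparameter per step, condition the model on it, and optimize that member of the family. The focal-loss family, the gamma range, the conditioning-by-concatenation scheme, and the synthetic imbalanced data below are all illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the Loss Conditional Training (LCT) idea (assumptions
# noted above): sample a loss hyperparameter gamma each step, feed it to the
# model as a conditioning input, and train with the corresponding loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionedNet(nn.Module):
    """Tiny binary classifier that takes the loss parameter as an extra input."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim + 1, hidden),  # +1 for the conditioning value
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, gamma: torch.Tensor) -> torch.Tensor:
        # Append the sampled loss parameter to every input row.
        cond = gamma.expand(x.shape[0], 1)
        return self.body(torch.cat([x, cond], dim=1)).squeeze(1)


def focal_loss(logits, targets, gamma):
    """Focal loss: one plausible loss family, parameterized by gamma >= 0."""
    p = torch.sigmoid(logits)
    pt = torch.where(targets.bool(), p, 1 - p)  # probability of the true class
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return ((1 - pt) ** gamma * bce).mean()


# Synthetic, class-imbalanced toy data (about 5% positives) to keep this runnable.
X = torch.randn(1000, 10)
y = (torch.rand(1000) < 0.05).float()

model = ConditionedNet(in_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    gamma = torch.empty(1).uniform_(0.0, 5.0)  # sample one member of the family
    loss = focal_loss(model(X, gamma), y, gamma)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the model is trained on many members of the loss family, the conditioning value can be swept at evaluation time without retraining, which is one way to read the paper's claim that LCT is more robust to hyperparameter choices than training with a single fixed loss.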
Keywords
* Artificial intelligence
* Hyperparameter