Summary of DSDE: Using Proportion Estimation to Improve Model Selection for Out-of-Distribution Detection, by Jingyao Geng et al.
DSDE: Using Proportion Estimation to Improve Model Selection for Out-of-Distribution Detection
by Jingyao Geng, Yuan Zhang, Jiaqi Huang, Feng Xue, Falong Tan, Chuanlong Xie, Shumei Zhang
First submitted to arXiv on: 3 Nov 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on arXiv. |
Medium | GrooveSquid.com (original content) | This paper proposes a novel approach for improving Out-of-Distribution (OoD) detection in machine learning models. The authors introduce the concept of a model library, which combines multiple models to improve performance and provide uncertainty quantification. They argue that existing methods focus too much on controlling the True Positive Rate (TPR) while neglecting the False Positive Rate (FPR). To address this issue, they propose inverting sequential p-value strategies and estimating the error rate by defining a rejection region. The resulting method, the DOS-Storey-based Detector Ensemble (DSDE), was tested on the CIFAR10 and CIFAR100 datasets. Experimental results show that DSDE reduces the FPR from 11.07% to 3.31% compared to the top-performing single-model detector. |
Low | GrooveSquid.com (original content) | This paper helps us understand how to make machine learning models better at noticing inputs they were never meant to recognize. Right now, most models are good at recognizing the kinds of data they were trained on, but they often get confused when they see something new. To fix this problem, the authors propose a way to combine multiple models so that, together, they work better and give more reliable results. They also suggest a new approach for estimating how confident we should be in a model's decisions. The authors tested their ideas on two different datasets and found that the method works well! |
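The medium-difficulty summary mentions p-values, a rejection region, and Storey-style proportion estimation. As a rough illustration only (not the paper's actual DSDE algorithm, whose details are not given here), the sketch below converts OoD detector scores into conformal p-values, estimates the in-distribution ("null") proportion with Storey's estimator, and flags OoD points via a Benjamini-Hochberg-style rejection region. All function names, parameters, and the score convention (larger score = more OoD-like) are illustrative assumptions.

```python
import numpy as np

def conformal_p_values(calib_scores, test_scores):
    """Empirical p-values: the fraction of in-distribution calibration
    scores at least as large (i.e., as OoD-like) as each test score."""
    calib = np.sort(np.asarray(calib_scores))
    n = len(calib)
    # searchsorted gives #{calib < s}, so n - rank = #{calib >= s}
    ranks = np.searchsorted(calib, np.asarray(test_scores), side="left")
    return (1 + n - ranks) / (n + 1)

def storey_pi0(p_values, lam=0.5):
    """Storey's estimator of the proportion of in-distribution samples:
    p-values above lam are assumed to come mostly from the null."""
    p = np.asarray(p_values)
    return min(1.0, (np.sum(p > lam) + 1) / (len(p) * (1 - lam)))

def storey_bh_reject(p_values, alpha=0.05, lam=0.5):
    """Flag test points as OoD using a Benjamini-Hochberg-style
    rejection region, scaled by Storey's null-proportion estimate."""
    p = np.asarray(p_values)
    m = len(p)
    pi0 = storey_pi0(p, lam)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / (m * pi0)
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank still under threshold
        reject[order[: k + 1]] = True
    return reject
```

A single-detector score is used here for brevity; the paper's ensemble idea would apply such a procedure across the p-values produced by a library of detectors rather than one.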
Keywords
- Artificial intelligence
- Machine learning