Summary of What If the Input is Expanded in OOD Detection?, by Boxuan Zhang et al.
What If the Input is Expanded in OOD Detection?
by Boxuan Zhang, Jianing Zhu, Zengmao Wang, Tongliang Liu, Bo Du, Bo Han
First submitted to arXiv on: 24 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper introduces a novel approach to out-of-distribution (OOD) detection, which is crucial for reliably deploying machine learning models. Existing methods identify OOD inputs by mining discriminative information from each individual input. In contrast, this work expands the representation dimension by applying different common corruptions to the input space. The authors observe a phenomenon they call confidence mutation: the confidence of OOD data drops significantly under corruption, whereas ID data retains a higher expected confidence because its semantic features resist corruption. Based on this insight, they propose a new scoring method, Confidence aVerage (CoVer), which averages the scores of the corrupted inputs and the original one to improve the separability between ID and OOD distributions in detection tasks (see the sketch after the table). Extensive experiments and analyses verify CoVer’s effectiveness, and the code is publicly available for further research. |
Low | GrooveSquid.com (original content) | This paper is about detecting when data is unusual or doesn’t belong. This matters because machine learning models can make mistakes when they aren’t sure what kind of data they’re dealing with. Current methods look at each piece of data on its own, but this approach also looks at how different kinds of changes affect the data. The authors found that unusual data becomes much less confident when it is changed in certain ways, while normal data stays more confident. Based on this discovery, they created a new method called CoVer that detects unusual data better. The paper tested the method and showed it works well, and the code used in the research is available online. |
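The medium summary describes CoVer as averaging confidence scores over the original input and several corrupted views of it. Below is a minimal sketch of that idea, assuming a PyTorch classifier and maximum-softmax-probability confidence; the `model`, `add_gaussian_noise`, and `gaussian_blur` names are hypothetical placeholders, not the authors’ released code.

```python
import torch
import torch.nn.functional as F

def msp_score(model, x):
    """Maximum softmax probability (confidence) for a batch of inputs."""
    with torch.no_grad():
        logits = model(x)
    return F.softmax(logits, dim=-1).max(dim=-1).values

def cover_score(model, x, corruptions):
    """CoVer-style score: average confidence over the original input
    and its corrupted versions. `corruptions` is a list of functions,
    each applying a common corruption (e.g., noise, blur) to the batch."""
    scores = [msp_score(model, x)]
    for corrupt in corruptions:
        scores.append(msp_score(model, corrupt(x)))
    return torch.stack(scores, dim=0).mean(dim=0)

# Usage (hypothetical corruption helpers): flag inputs whose averaged
# confidence falls below a chosen threshold as OOD.
# score = cover_score(model, images, [add_gaussian_noise, gaussian_blur])
# is_ood = score < threshold
```

Because ID inputs tend to keep higher confidence under corruption while OOD inputs lose it, the averaged score separates the two distributions more cleanly than confidence on the original input alone.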
Keywords
* Artificial intelligence
* Machine learning