Summary of Pursuing Feature Separation based on Neural Collapse for Out-of-Distribution Detection, by Yingwen Wu et al.
Pursuing Feature Separation based on Neural Collapse for Out-of-Distribution Detection
by Yingwen Wu, Ruiji Yu, Xinwen Cheng, Zhengbao He, Xiaolin Huang
First submitted to arXiv on: 28 May 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper at different levels of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract, available on arXiv. |
| Medium | GrooveSquid.com (original content) | The paper targets out-of-distribution (OOD) detection in deep neural networks (DNNs). A common scheme fine-tunes the model on auxiliary OOD datasets with a separation loss defined on model outputs; this paper instead defines the loss on features, binding the features of OOD data to a subspace orthogonal to the principal subspace that in-distribution (ID) features form under Neural Collapse (NC). The method achieves state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet benchmarks without additional data augmentation or sampling (see the sketch after this table). |
| Low | GrooveSquid.com (original content) | The paper proposes a new way to detect out-of-distribution data. It relies on a clustering property of in-distribution features called Neural Collapse, and pushes the features of OOD data into directions orthogonal to the ID features. The method is simple but effective, achieving state-of-the-art results on several benchmarks without extra data augmentation or sampling. |
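To make the core idea concrete, here is a minimal PyTorch sketch. This is not the authors' code: the function names, the use of penultimate-layer features, and the choice to estimate the ID principal subspace from class means via SVD are illustrative assumptions; the sketch only shows the general pattern of penalizing the component of OOD features that falls inside the ID principal subspace.

```python
import torch

def id_principal_basis(class_means: torch.Tensor) -> torch.Tensor:
    """Orthonormal basis of the subspace spanned by centered ID class means.

    class_means: (C, d) per-class mean of penultimate-layer features on the
    ID training set. Under Neural Collapse these means form a simplex ETF,
    so the centered means span a (C - 1)-dimensional principal subspace.
    """
    centered = class_means - class_means.mean(dim=0, keepdim=True)
    # Right singular vectors of the centered mean matrix span the subspace;
    # drop the last one, whose singular value is ~0 after centering.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    return vh[:-1].T  # (d, C - 1), columns orthonormal

def separation_loss(ood_feats: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Penalize the part of OOD features lying inside the ID principal subspace.

    ood_feats: (B, d) features of auxiliary OOD samples.
    basis:     (d, k) orthonormal basis from id_principal_basis().
    Driving this loss to zero pushes OOD features into the orthogonal
    complement of the ID subspace, separating them from ID features.
    """
    proj = ood_feats @ basis  # (B, k) coordinates inside the ID subspace
    return proj.pow(2).sum(dim=1).mean()

# Hypothetical usage during fine-tuning (shapes only):
# means = per-class feature means, e.g. shape (10, 512) for CIFAR-10
# basis = id_principal_basis(means)                      # (512, 9)
# loss  = ce_loss_on_id_batch + separation_loss(ood_feats, basis)
```

The sketch adds the separation term to a standard classification loss on ID data during fine-tuning; details such as feature normalization and loss weighting follow the paper rather than this illustration.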
Keywords
» Artificial intelligence » Data augmentation » Fine tuning » Loss function