Summary of NODI: Out-Of-Distribution Detection with Noise from Diffusion, by Jingqiu Zhou et al.
NODI: Out-Of-Distribution Detection with Noise from Diffusion
by Jingqiu Zhou, Aojun Zhou, Hongsheng Li
First submitted to arXiv on: 13 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper tackles out-of-distribution (OOD) detection, a crucial aspect of deploying machine learning models safely. Previous methods compute OOD scores from limited information in the in-distribution dataset and encode images with neural image encoders, yet they are rarely tested for robustness across different training methods and encoder architectures. This work introduces the diffusion process into the OOD task, integrating information from the whole training set into the predicted noise vectors. A closed-form solution is derived for the noise vector, which is then converted into an OOD score (see the sketch after this table). The method outperforms previous OOD methods across a range of image encoders, from ResNet to Vision Transformers, achieving a 3.5% performance gain with the MAE-based image encoder, and its robustness is demonstrated by testing different types of image encoders. |
| Low | GrooveSquid.com (original content) | This research is about making sure machine learning models don’t get confused when they see things they’re not used to. This is important because we want our models to make good decisions even if they encounter new situations. The paper introduces a new way to calculate how well a model does with unfamiliar data, and it uses information from the entire training set to do so. The method is tested on different types of image encoders and performs better than previous methods. It’s also more robust, meaning it can handle changes in how images are processed. |
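The summaries above describe turning a diffusion model’s predicted noise into an OOD score. The paper derives a closed-form solution for that noise vector; the sketch below does not reproduce that derivation, but illustrates the general recipe with a standard DDPM forward step and a learned noise predictor. Here `encoder`, `noise_predictor`, and the error-norm score are illustrative assumptions, not the paper’s exact formulation.

```python
import torch

@torch.no_grad()
def ood_score(x, encoder, noise_predictor, alpha_bar_t, n_samples=8):
    """Toy diffusion-based OOD score (illustrative, not the paper's
    closed-form method).

    x               : input image tensor, shape (C, H, W)
    encoder         : hypothetical image encoder (e.g. ResNet / ViT / MAE)
                      mapping an image batch to embedding vectors
    noise_predictor : hypothetical diffusion model trained to predict the
                      noise added to in-distribution embeddings
    alpha_bar_t     : cumulative noise-schedule term at timestep t, in (0, 1)
    """
    z = encoder(x.unsqueeze(0))  # embedding of shape (1, d)
    errors = []
    for _ in range(n_samples):
        eps = torch.randn_like(z)  # forward-process noise
        # standard DDPM forward step: z_t = sqrt(abar)*z + sqrt(1-abar)*eps
        z_t = alpha_bar_t**0.5 * z + (1 - alpha_bar_t)**0.5 * eps
        eps_hat = noise_predictor(z_t)         # predicted noise vector
        errors.append((eps_hat - eps).norm())  # noise-prediction error
    # The diffusion model saw in-distribution embeddings during training, so
    # it recovers their noise well; a large mean error flags an OOD input.
    return torch.stack(errors).mean().item()
```

In practice one would sweep the timestep t and pick a decision threshold on a held-out in-distribution split; a higher score marks the input as more likely out-of-distribution.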
Keywords
* Artificial intelligence * Diffusion * Encoder * Machine learning * MAE * ResNet