Summary of Learning Non-Linear Invariants for Unsupervised Out-of-Distribution Detection, by Lars Doorenbos et al.
Learning Non-Linear Invariants for Unsupervised Out-of-Distribution Detection
by Lars Doorenbos, Raphael Sznitman, Pablo Márquez-Neila
First submitted to arXiv on: 4 Jul 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper addresses the crucial issue of unsupervised out-of-distribution (U-OOD) detection for reliable deep learning models. Despite significant attention, most methods rely on heuristics rather than theoretical foundations. Recent work formalized U-OOD in terms of data invariants and achieved state-of-the-art results using affine invariants; however, restricting the invariants to affine functions limits the expressiveness of the approach. To address this, the authors propose a framework based on a normalizing-flow-like architecture that can learn non-linear invariants. This approach achieves state-of-the-art results on an extensive U-OOD benchmark, extends to tabular data, and retains the desirable properties of its affine counterpart.
Low | GrooveSquid.com (original content) | This paper helps us build better deep learning models by making sure they don’t get confused when they see new, unseen types of data. So far, most solutions have been based on rules of thumb rather than a clear understanding of what’s going on. Recently, scientists found a way to define this problem precisely and developed methods that work really well using special properties called affine invariants. However, these methods aren’t perfect because they can only capture certain types of patterns. To overcome this limitation, the authors created a new approach that can learn more complex patterns and still perform very well on big datasets.
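To make the invariant idea concrete, here is a minimal sketch of the *affine*-invariant baseline the paper builds on, not the authors' non-linear method: directions of near-zero variance in the training data act as approximate invariants, and an OOD score measures how strongly a test point violates them. All names and parameters below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic in-distribution data: points near a 2-D affine subspace of R^5.
basis = rng.normal(size=(2, 5))
train = rng.normal(size=(1000, 2)) @ basis + 0.01 * rng.normal(size=(1000, 5))

mean = train.mean(axis=0)
# Eigendecomposition of the covariance; eigenvectors with near-zero
# eigenvalues are (approximate) affine invariants of the training data.
cov = np.cov(train - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

def ood_score(x, k=3, eps=1e-9):
    """Mahalanobis-style deviation along the k lowest-variance directions."""
    proj = (x - mean) @ eigvecs[:, :k]
    return float(np.sum(proj**2 / (eigvals[:k] + eps)))

in_point = rng.normal(size=2) @ basis       # lies on the subspace
out_point = in_point + 5.0 * eigvecs[:, 0]  # violates an invariant
```

An in-distribution point scores low because it satisfies the learned invariants, while a point pushed along a low-variance direction scores high. The paper's contribution, in this framing, is replacing the linear projection with an invertible non-linear map so that curved invariants can be learned as well.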
Keywords
* Artificial intelligence * Attention * Deep learning * Unsupervised