Summary of Dimensionality-induced information loss of outliers in deep neural networks, by Kazuki Uematsu et al.
Dimensionality-induced information loss of outliers in deep neural networks
by Kazuki Uematsu, Kosuke Haruki, Taiji Suzuki, Mitsuhiro Kimura, Takahiro Takimoto, Hideyuki Nakagawa
First submitted to arXiv on: 29 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper investigates out-of-distribution (OOD) detection in deep neural networks (DNNs), aiming to understand how the differences between in-distribution (ID) and OOD samples arise at each processing step. The study finds that intrinsic low-dimensionalization inside the network is crucial, revealing how OOD samples become increasingly distinct from ID samples as features propagate to deeper layers. The authors propose a simple framework that explains several OOD properties, including the elimination of most of the information in OOD samples due to excessive attention to dataset bias. Building on this, they develop a dimensionality-aware OOD detection method based on the alignment between features and weights, achieving high performance on various datasets at reduced computational cost (an illustrative sketch of such an alignment-based score appears after this table). |
| Low | GrooveSquid.com (original content) | This paper looks at how to tell whether an input is "out-of-distribution" (something the computer wasn't trained on) or something it was trained on. The researchers want to know what makes these two kinds of input differ as they pass through a deep neural network. They found that the network squeezing its features into a simpler, lower-dimensional form helps explain why out-of-distribution inputs end up looking different. This insight lets them build a new way to detect out-of-distribution inputs, and it works well on many different datasets. |
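To make the flavor of the approach concrete, below is a minimal, hypothetical sketch of how a dimensionality-aware, alignment-based OOD score could be assembled: it combines how much of a penultimate-layer feature lies in a low-dimensional subspace fitted to ID training features with how well that feature aligns with the classifier weight of the predicted class. All function names, the subspace dimension, and the exact scoring rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of a dimensionality-aware, alignment-based OOD score.
# NOT the paper's exact method; it only illustrates where a low-dimensional
# ID subspace and feature-weight alignment could enter such a score.
import numpy as np


def fit_id_subspace(id_features: np.ndarray, dim: int):
    """Estimate a low-dimensional ID subspace via PCA on training features."""
    mean = id_features.mean(axis=0)
    centered = id_features - mean
    # Right singular vectors give the principal directions of the ID features.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:dim]  # mean: (D,), basis: (dim, D)


def ood_score(feature: np.ndarray, mean: np.ndarray, basis: np.ndarray,
              weights: np.ndarray) -> float:
    """Combine subspace energy and feature-weight alignment; lower = more OOD-like."""
    f = feature - mean
    proj = basis.T @ (basis @ f)  # component of f inside the ID subspace
    subspace_energy = np.linalg.norm(proj) / (np.linalg.norm(f) + 1e-12)
    # Cosine alignment between the feature and the predicted class's weight vector.
    logits = weights @ feature
    w = weights[int(np.argmax(logits))]
    alignment = float(feature @ w) / (np.linalg.norm(feature) * np.linalg.norm(w) + 1e-12)
    return float(subspace_energy * alignment)


# Toy usage with random stand-ins for penultimate features and classifier weights.
rng = np.random.default_rng(0)
id_feats = rng.normal(size=(1000, 512))   # ID training features (assumed given)
W = rng.normal(size=(10, 512))            # classifier weight matrix (assumed given)
mu, V = fit_id_subspace(id_feats, dim=32)
print(ood_score(rng.normal(size=512), mu, V, W))
```

The paper's method would differ in how the intrinsic dimensionality is estimated and how feature-weight alignment is measured; the sketch only shows the general shape of a score that is low when an input neither lies in the ID feature subspace nor aligns with the classifier weights.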
Keywords
- Artificial intelligence
- Alignment
- Attention
- Neural network