Summary of Representation Norm Amplification for Out-of-Distribution Detection in Long-Tail Learning, by Dong Geun Shin and Hye Won Chung


Representation Norm Amplification for Out-of-Distribution Detection in Long-Tail Learning

by Dong Geun Shin, Hye Won Chung

First submitted to arXiv on: 20 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper addresses the challenge of detecting out-of-distribution (OOD) samples in machine learning, particularly when models are trained on long-tailed datasets. Existing methods struggle to distinguish tail-class in-distribution samples from OOD samples. The authors introduce Representation Norm Amplification (RNA), a method that decouples OOD detection from in-distribution classification by using the norm of the representation as a new dimension for OOD detection. RNA outperforms state-of-the-art methods in both OOD detection and classification, with respective improvements of 1.70% and 9.46% on CIFAR10-LT and 2.43% and 6.87% on ImageNet-LT.

Low Difficulty Summary (original content by GrooveSquid.com)
The paper talks about how to make sure machine learning models don’t get confused when they see things they haven’t seen before. This is important because it helps the models be more reliable and accurate. The problem is that when models are trained on lots of data, but most of it is similar, they have trouble telling what’s normal and what’s not. The authors came up with a new way to solve this, called Representation Norm Amplification (RNA). It works by looking at how large the model’s internal representation of an input is. This helps the model detect when something is unusual, which makes it more reliable. The authors tested their method on two big datasets and showed that it works really well.

Keywords

» Artificial intelligence  » Classification  » Machine learning