

Uncertainty-Aware Out-of-Distribution Detection with Gaussian Processes

by Yang Chen, Chih-Li Sung, Arpan Kusari, Xiaoyang Song, Wenbo Sun

First submitted to arXiv on: 30 Dec 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG)

Links: Abstract of paper · PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The paper's original abstract, available via the links above.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Deep neural networks (DNNs) often fail to generalize well to unseen data, leading to overconfident wrong predictions. This is problematic for safety-critical applications where incorrect decisions can have severe consequences. Most existing out-of-distribution (OOD) detection methods rely on having a set of OOD samples available during training, which isn’t always possible in real-world scenarios. To overcome this limitation, we propose a Gaussian-process-based OOD detection method that uses only in-distribution data to establish a decision boundary. Our approach involves uncertainty quantification using a multi-class Gaussian process (GP) and a score function to separate in-distribution and potential OOD data based on their differences in posterior predictive distribution. We demonstrate the effectiveness of our method through case studies on conventional image classification datasets and real-world image datasets, outperforming state-of-the-art OOD detection methods when OOD samples are not observed during training.
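To make the pipeline concrete, here is a minimal sketch of GP-based OOD scoring on feature embeddings, trained on in-distribution data only. It is not the authors' implementation: it assumes features come from a pretrained DNN, uses scikit-learn's GaussianProcessClassifier with an RBF kernel as a stand-in for the paper's multi-class GP, and uses predictive entropy as a stand-in for the paper's score function; the kernel, threshold, and toy data are all illustrative assumptions.

```python
# Sketch: GP-based OOD detection using only in-distribution (ID) data.
# Assumptions (not from the paper): features are DNN embeddings, an RBF-kernel
# GaussianProcessClassifier stands in for the multi-class GP, and predictive
# entropy stands in for the paper's score function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF


def fit_gp_on_id(features_id: np.ndarray, labels_id: np.ndarray) -> GaussianProcessClassifier:
    """Fit a multi-class GP classifier using in-distribution features only."""
    gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
    gp.fit(features_id, labels_id)
    return gp


def ood_scores(gp: GaussianProcessClassifier, features: np.ndarray) -> np.ndarray:
    """Higher score = more uncertain posterior predictive = more OOD-like."""
    proba = gp.predict_proba(features)                    # posterior predictive probabilities
    return -(proba * np.log(proba + 1e-12)).sum(axis=1)   # predictive entropy


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for DNN embeddings: two ID classes plus a shifted OOD cluster.
    id_feats = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
    id_labels = np.array([0] * 50 + [1] * 50)
    ood_feats = rng.normal(8, 1, (20, 8))

    gp = fit_gp_on_id(id_feats, id_labels)
    # Decision boundary set from ID data alone, without any OOD samples.
    threshold = np.percentile(ood_scores(gp, id_feats), 95)
    flagged = ood_scores(gp, ood_feats) > threshold
    print(f"Flagged {flagged.mean():.0%} of the shifted samples as OOD")
```

In the paper itself, the score function is built directly from differences in the GP posterior predictive distribution between in-distribution and candidate data; the sketch above only illustrates the overall train-on-ID, score-at-test pipeline.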
Low Difficulty Summary (written by GrooveSquid.com, original content)
Deep neural networks can make mistakes when they encounter new data that is different from what they’ve seen before. This can be a big problem if the network is making decisions in situations where it’s not supposed to get things wrong. Most of the current ways to detect when this is happening rely on having some examples of what the new data might look like, which isn’t always possible. Our solution uses a type of math called Gaussian processes to figure out whether new data is likely to be something the network has seen before or not. We tested our method using pictures and found that it works better than other methods when you don’t have any examples of what the new data might look like.

Keywords

  • Artificial intelligence
  • Image classification