Summary of Hypothesis-Driven Deep Learning for Out of Distribution Detection, by Yasith Jayawardana et al.
Hypothesis-Driven Deep Learning for Out of Distribution Detection
by Yasith Jayawardana, Azeem Ahmad, Balpreet S. Ahluwalia, Rafi Ahmad, Sampath Jayarathna, Dushan N. Wadduwage
First submitted to arXiv on: 21 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The proposed hypothesis-driven approach quantifies whether a new sample is in-distribution (InD) or out-of-distribution (OoD) by analyzing the latent responses of a deep neural network (DNN). The method first computes an ensemble of OoD metrics, known as latent responses, and then formulates detection as a hypothesis test between these responses. Using permutation-based resampling, it infers the significance of the observed responses under a null hypothesis (a minimal sketch of such a test appears after this table). The authors demonstrate the method's effectiveness in detecting bacterial samples from classes unseen by a trained deep learning model, revealing interpretable differences between InD and OoD latent responses. This work has implications for systematic novelty detection and for informed decision-making with classifiers trained on only a subset of labels. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary The paper is about how to make sure that artificial intelligence models don’t get fooled by new data they’ve never seen before. Right now, there are many ways to check if a model is making good predictions or not, but they often don’t work well across different types of data and models. The authors propose a new way to solve this problem by looking at the “hidden” responses from the model when it sees new data. They use these hidden responses to decide whether the new data is normal or unusual. This approach has important implications for making decisions using artificial intelligence, especially in areas like healthcare. |
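To make the permutation-based testing idea concrete, below is a minimal, hypothetical sketch. The choice of OoD metric (distance to an InD latent centroid), the synthetic latent vectors, and all function names are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch: permutation-based hypothesis test on latent responses.
# The metric (distance to the InD latent centroid) and the synthetic data are
# stand-ins for the paper's latent responses, used only for illustration.
import numpy as np

def ood_metric(latents, centroid):
    """Per-sample score: Euclidean distance from the InD latent centroid."""
    return np.linalg.norm(latents - centroid, axis=1)

def permutation_p_value(ind_scores, query_score, n_permutations=10_000, seed=None):
    """Estimate how extreme the query score is under the InD null hypothesis."""
    rng = np.random.default_rng(seed)
    pooled = np.append(ind_scores, query_score)
    count = 0
    for _ in range(n_permutations):
        # Null hypothesis: the query is exchangeable with InD samples, so the
        # score landing in the query slot is a random draw from the pooled set.
        permuted = rng.permutation(pooled)
        count += permuted[-1] >= query_score
    return count / n_permutations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ind_latents = rng.normal(0.0, 1.0, size=(500, 64))  # stand-in InD latent responses
    ood_latent = rng.normal(3.0, 1.0, size=(1, 64))      # stand-in OoD sample
    centroid = ind_latents.mean(axis=0)
    ind_scores = ood_metric(ind_latents, centroid)
    query_score = ood_metric(ood_latent, centroid)[0]
    p = permutation_p_value(ind_scores, query_score, seed=1)
    print(f"permutation p-value: {p:.4f}")  # a small p-value flags the sample as OoD
```

In this toy setup, a small p-value indicates the new sample's latent response is unlikely under the InD null distribution, which is the kind of significance statement the paper's hypothesis-driven framing targets.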
Keywords
* Artificial intelligence * Deep learning * Neural network * Novelty detection