


Enhancing Out-of-Distribution Detection with Multitesting-based Layer-wise Feature Fusion

by Jiawei Li, Sitong Li, Shanshan Wang, Yicheng Zeng, Falong Tan, Chuanlong Xie

First submitted to arXiv on: 16 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed framework, Multitesting-based Layer-wise Out-of-Distribution (OOD) Detection (MLOD), is a novel approach for identifying distributional shifts in test samples at different feature levels through rigorous multiple testing. Unlike existing methods that focus only on the output or penultimate layer of a pre-trained deep neural network, MLOD examines features across layers, and it requires neither modifying the network structure nor fine-tuning the classifier. By integrating with distance-based inspection methods and utilizing feature extractors of varying depths, MLOD effectively enhances out-of-distribution detection performance. In particular, the MLOD-Fisher variant achieves consistently strong results, reducing the average false positive rate (FPR) from 24.09% to 7.47% compared to using only last-layer features.
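
The fusion step can be pictured as combining per-layer evidence with Fisher's method for pooling p-values. The sketch below is a minimal illustration, not the paper's implementation: the function names, the calibration procedure, and the 0.05 threshold are assumptions. Each layer's distance-based OOD score is converted to an empirical p-value against held-out in-distribution scores, and the layer-wise p-values are then fused with Fisher's combination statistic.

```python
# Illustrative sketch of layer-wise p-value fusion in the spirit of MLOD-Fisher.
# All names (empirical_p_value, fisher_fusion, calib) are hypothetical.
import numpy as np
from scipy import stats

def empirical_p_value(score, calib_scores):
    """Empirical p-value of a test score against in-distribution calibration
    scores, where a larger score means 'more OOD-like'."""
    n = len(calib_scores)
    return (np.sum(calib_scores >= score) + 1) / (n + 1)

def fisher_fusion(p_values):
    """Fisher's method: T = -2 * sum(log p_l) follows a chi-square distribution
    with 2L degrees of freedom under the in-distribution null."""
    p_values = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.log(p_values).sum()
    return stats.chi2.sf(statistic, df=2 * len(p_values))  # combined p-value

# Toy example: per-layer OOD scores for one test sample, calibrated against
# scores computed on held-out in-distribution data.
rng = np.random.default_rng(0)
calib = {layer: rng.normal(size=1000) for layer in range(4)}  # ID calibration scores
test_scores = {layer: 2.5 for layer in range(4)}              # test sample's layer scores

p_vals = [empirical_p_value(test_scores[l], calib[l]) for l in calib]
combined_p = fisher_fusion(p_vals)
print(f"combined p-value: {combined_p:.4f} -> flag OOD: {combined_p < 0.05}")
```

A small combined p-value indicates that at least some layers find the sample inconsistent with the training distribution, which is how evidence from shallow layers can be pooled with evidence from the last layer.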
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a new way to identify when test data differs from the data a machine learning model was trained on. This problem matters because a model may work poorly on real-world data that does not resemble its training data. The authors developed a method called MLOD, which examines features at several depths of the network to check whether they look unusual. Unlike other methods, MLOD does not require changing or fine-tuning the pre-trained model: it can work with any existing distance-based inspection method and draw on different parts of the model’s architecture. The authors tested their method on the CIFAR10 dataset and found that it detects unusual test data better than competing methods.
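
To make the "different parts of the model's architecture" idea concrete, the sketch below (a minimal illustration under assumed choices, not the paper's code) taps intermediate blocks of a frozen ResNet-18 with forward hooks and scores each layer's features by the distance to the nearest in-distribution class mean. The layer names and feature dimensions assume torchvision's ResNet-18, and `layer_means` stands in for statistics that would be estimated on in-distribution training data.

```python
# Minimal sketch (not the paper's code): read features at several depths of a
# frozen pre-trained classifier and compute simple distance-based OOD scores.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()  # frozen, no fine-tuning

features = {}
def save_output(name):
    def hook(module, inputs, output):
        # Global-average-pool the spatial map into one vector per sample.
        features[name] = output.mean(dim=(2, 3)).detach()
    return hook

# Tap blocks of varying depth without modifying the architecture.
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(save_output(name))

@torch.no_grad()
def layerwise_scores(x, layer_means):
    """layer_means[name] is a (num_classes, dim) tensor of per-class mean
    features from in-distribution data; returns one OOD score per layer."""
    model(x)
    return {name: torch.cdist(feats, layer_means[name]).min(dim=1).values
            for name, feats in features.items()}

# Usage with random stand-in inputs and class means (10 classes, as in CIFAR10).
x = torch.randn(2, 3, 224, 224)
layer_means = {"layer1": torch.randn(10, 64), "layer2": torch.randn(10, 128),
               "layer3": torch.randn(10, 256), "layer4": torch.randn(10, 512)}
print({name: s.shape for name, s in layerwise_scores(x, layer_means).items()})
```

Each layer's score could then be converted to a p-value and fused as in the earlier sketch, without ever retraining or restructuring the classifier.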

Keywords

* Artificial intelligence  * Fine tuning  * Machine learning