Summary of SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation, by Yixia Li et al.
SeTAR: Out-of-Distribution Detection with Selective Low-Rank Approximation
by Yixia Li, Boya Xiong, Guanhua Chen, Yun Chen
First submitted to arXiv on: 18 Jun 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This research proposes SeTAR, a novel training-free out-of-distribution (OOD) detection method that leverages selective low-rank approximation of weight matrices in vision-language and vision-only models. The method enhances OOD detection via post-hoc modification of the model’s weight matrices using a simple greedy search algorithm (see the sketch after this table). The researchers also propose SeTAR+FT, a fine-tuning extension that optimizes model performance for OOD detection tasks. Extensive evaluations on ImageNet1K and Pascal-VOC benchmarks show SeTAR’s superior performance, reducing the relative false positive rate by up to 18.95% and 36.80% compared to zero-shot and fine-tuning baselines. |
Low | GrooveSquid.com (original content) | SeTAR is a new way to detect when images are not what a neural network was trained on. Normally, these networks can get confused when shown something they’ve never seen before. SeTAR helps fix this problem by changing the way the network looks at things it’s not familiar with. This makes the network better at telling what’s in and out of its “comfort zone”. The researchers tested their method on lots of images, and it worked much better than other methods that were tried. |
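
The medium difficulty summary describes SeTAR’s mechanism at a high level: replace selected weight matrices with truncated-SVD (low-rank) approximations, chosen by a greedy search, with no training involved. The sketch below illustrates that idea in PyTorch. It is a minimal illustration under stated assumptions, not the paper’s implementation: the `layers` dictionary, the candidate keep-ratios, and the `score_fn` callback (e.g. an OOD metric computed on a held-out set) are all hypothetical placeholders.

```python
import torch

def low_rank_approx(W: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Truncated-SVD approximation of a weight matrix, keeping the
    top `keep_ratio` fraction of singular values."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    k = max(1, int(keep_ratio * S.numel()))
    return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

def greedy_search(layers: dict, score_fn, ratios=(0.9, 0.75, 0.5)):
    """Greedy, training-free pass over candidate layers: each weight
    matrix is replaced by a low-rank approximation only if that
    replacement improves the OOD score reported by `score_fn`.
    `layers` (name -> weight tensor) and `score_fn` are hypothetical
    stand-ins for the real model weights and evaluation pipeline."""
    best_score = score_fn(layers)
    for name, W in layers.items():
        untouched = W.clone()
        best_W = untouched
        for r in ratios:
            layers[name] = low_rank_approx(untouched, r)
            s = score_fn(layers)
            if s > best_score:
                best_score, best_W = s, layers[name]
        layers[name] = best_W  # keep the best variant, or the original W
    return layers, best_score
```

Because a search like this only evaluates the model and never backpropagates, the modification is post-hoc and training-free, which is what distinguishes SeTAR from its SeTAR+FT fine-tuning extension.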
Keywords
» Artificial intelligence » Fine tuning » Neural network » Zero shot