Summary of CLIP-driven Outliers Synthesis for Few-shot OOD Detection, by Hao Sun et al.
CLIP-driven Outliers Synthesis for few-shot OOD detection
by Hao Sun, Rundong He, Zhongyi Han, Zhicong Lin, Yongshun Gong, Yilong Yin
First submitted to arXiv on: 30 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Few-shot out-of-distribution (OOD) detection is a challenging task that involves recognizing OOD images from unseen classes using only a small number of labeled in-distribution (ID) images. Current approaches rely on large-scale vision-language models, such as CLIP, but overlook the critical issue of unreliable OOD supervision information, leading to biased boundaries between ID and OOD. To address this problem, the authors propose CLIP-driven Outliers Synthesis (CLIP-OS), which enhances patch-level features through patch uniform convolution and adaptively obtains ID-relevant information using CLIP-surgery-discrepancy. The method synthesizes reliable OOD data by mixing up ID-relevant features from different classes to provide OOD supervision, and then leverages the synthetic OOD samples via unknown-aware prompt learning to enhance the separability of ID and OOD (a minimal sketch of this feature-mixing step follows the table). Experimental results across multiple benchmarks demonstrate that CLIP-OS achieves superior few-shot OOD detection capability. |
| Low | GrooveSquid.com (original content) | This paper is about recognizing pictures that don’t belong to any class we’ve seen before, using only a small number of labeled pictures from classes we have seen. Right now, most methods use really big models like CLIP, but they overlook an important issue: they can’t provide reliable information for detecting out-of-distribution pictures. To fix this problem, the authors propose a new method called CLIP-OS. This method improves how the model looks at small pieces of images and decides what’s important and what’s not. It then uses this information to create fake out-of-distribution pictures that can be used to train the model. The results show that this method is really good at detecting out-of-distribution pictures. |
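
To make the feature-mixing idea above concrete, here is a minimal PyTorch sketch of mixup-style outlier synthesis. The function name `synthesize_outliers`, the Beta mixing coefficient, and the 512-dimensional feature size are illustrative assumptions; the full CLIP-OS pipeline (patch uniform convolution, CLIP-surgery-discrepancy, unknown-aware prompt learning) is not reproduced here.

```python
import torch

def synthesize_outliers(feats_a: torch.Tensor, feats_b: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Mix ID-relevant features from two different ID classes to obtain
    synthetic OOD features (a generic mixup-style sketch, not the paper's code).

    feats_a, feats_b: (N, D) feature tensors drawn from two different ID classes.
    """
    # Per-example mixing coefficient sampled from a Beta(alpha, alpha) distribution.
    lam = torch.distributions.Beta(alpha, alpha).sample((feats_a.size(0), 1))
    # A convex combination of features from different classes lies between
    # ID clusters and can serve as OOD supervision.
    return lam * feats_a + (1.0 - lam) * feats_b

# Hypothetical usage: treat real features as ID (label 0) and synthetic
# mixtures as OOD (label 1) when training an ID/OOD separability objective.
class_a_feats = torch.randn(8, 512)   # placeholder for ID-relevant CLIP features, class A
class_b_feats = torch.randn(8, 512)   # placeholder for ID-relevant features, class B
synthetic_ood = synthesize_outliers(class_a_feats, class_b_feats)
labels = torch.cat([torch.zeros(8), torch.ones(8)])  # 0 = ID, 1 = synthetic OOD
```

The intuition, per the summary, is that such cross-class mixtures fall between ID clusters, giving the model hard near-boundary negatives that sharpen the separation between ID and OOD.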
Keywords
- Artificial intelligence
- Few-shot
- Prompt