Summary of Generalized Out-of-Distribution Detection and Beyond in Vision Language Model Era: A Survey, by Atsuyuki Miyai et al.
Generalized Out-of-Distribution Detection and Beyond in Vision Language Model Era: A Survey
by Atsuyuki Miyai, Jingkang Yang, Jingyang Zhang, Yifei Ming, Yueqian Lin, Qing Yu, Go Irie, Shafiq Joty, Yixuan Li, Hai Li, Ziwei Liu, Toshihiko Yamasaki, Kiyoharu Aizawa
First submitted to arXiv on: 31 Jul 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper proposes a generalized out-of-distribution (OOD) detection framework that unifies five related problems: OOD detection, anomaly detection, novelty detection, open set recognition, and outlier detection. The survey argues that the rise of Vision Language Models (VLMs) has blurred the boundaries between these fields while making the core challenges of OOD detection and anomaly detection more demanding. It reviews OOD detection methodology, discusses the related tasks and how they relate to one another, and highlights significant shifts in problem settings and benchmarks, as well as the emergence of Large Vision Language Models (LVLMs) such as GPT-4V. (A minimal illustrative sketch of an OOD score follows this table.) |
Low | GrooveSquid.com (original content) | This paper looks at ways to detect when data falls outside what a machine learning system already knows, which matters because it helps keep those systems safe. Researchers have tackled this problem in different ways, such as looking for things that do not fit or spotting genuinely new information. The authors of this survey propose a way to group these approaches together and trace how they have changed over time. They also discuss how newer models make it harder to tell what is normal from what is not. |
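The summaries above describe OOD detection only at a high level. The snippet below is a minimal, hypothetical sketch (not taken from the paper) of one common flavor of OOD scoring in the VLM setting: cosine similarities between a precomputed image embedding and text-prompt embeddings for the in-distribution classes are turned into a temperature-scaled softmax, and a low maximum probability flags a possible OOD input. The function names, embedding dimensions, temperature, and threshold are all illustrative assumptions; a real system would use an actual VLM encoder and tune these values on in-distribution validation data.

```python
# Illustrative sketch only: a maximum-softmax OOD score over VLM embeddings.
# Assumes image/text embeddings were already produced by some vision-language
# encoder; the embeddings below are random stand-ins, not real model outputs.
import numpy as np

def ood_score(image_emb: np.ndarray, class_text_embs: np.ndarray, temperature: float = 0.01) -> float:
    """Return an in-distribution confidence score in (0, 1].

    image_emb:       (d,) embedding of the test image
    class_text_embs: (K, d) embeddings of prompts for the K in-distribution classes
    A *low* score suggests the image may be out-of-distribution.
    """
    # Cosine similarity between the image and each in-distribution class prompt.
    img = image_emb / np.linalg.norm(image_emb)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    sims = txt @ img                          # shape (K,)
    # Temperature-scaled softmax; the max probability is the ID-ness score.
    logits = sims / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return float(probs.max())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, K = 512, 10                            # hypothetical embedding size / class count
    image_emb = rng.normal(size=d)
    class_text_embs = rng.normal(size=(K, d))
    score = ood_score(image_emb, class_text_embs)
    threshold = 0.5                           # in practice chosen on held-in validation data
    verdict = "in-distribution" if score >= threshold else "possible OOD"
    print(f"ID confidence = {score:.3f} -> {verdict}")
```

This maximum-softmax-over-similarities rule is only one simple scoring scheme; the survey itself reviews a much broader range of methods and benchmarks for OOD detection and the related tasks.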
Keywords
» Artificial intelligence » Anomaly detection » GPT » Machine learning » Novelty detection » Outlier detection