Summary of OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation, by Kwanyoung Kim et al.
OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation
by Kwanyoung Kim, Yujin Oh, Jong Chul Ye
First submitted to arXiv on: 21 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the paper's original abstract here.
Medium | GrooveSquid.com (original content) | This paper proposes a novel approach to zero-shot semantic segmentation that leverages CLIP's multimodal knowledge. The method, called OTSeg, uses the Optimal Transport algorithm and a Sinkhorn attention mechanism to align text embeddings with pixel embeddings, allowing multiple text prompts to selectively focus on different semantic features within image pixels. The authors demonstrate state-of-the-art performance on three benchmark datasets, achieving significant gains on Zero-Shot Semantic Segmentation (ZS3) tasks. A minimal sketch of this Sinkhorn-based alignment appears after the table.
Low | GrooveSquid.com (original content) | This paper finds a way to make computers better at understanding what's happening in pictures using only text that describes what to look for. The authors use an established mathematical idea called Optimal Transport to help computer models focus on the right parts of the picture when figuring out what it shows. This helps the models do a much better job than before, especially when they don't have any example images to learn from beforehand.
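To make the Sinkhorn-based alignment concrete, here is a minimal, illustrative sketch of how entropy-regularized optimal transport can turn a text-prompt-to-pixel similarity matrix into an attention-like assignment. This is not the authors' implementation: the function name `sinkhorn_transport`, the regularization strength `eps`, the iteration count `n_iters`, and the toy embedding sizes are all assumptions chosen for illustration.

```python
import numpy as np

def sinkhorn_transport(cost, n_iters=50, eps=0.05):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    cost: (P, N) cost matrix between P text prompts and N pixel embeddings
          (e.g. 1 - cosine similarity). Returns a (P, N) transport plan whose
          marginals approximately match uniform distributions over prompts and pixels.
    """
    P, N = cost.shape
    K = np.exp(-cost / eps)          # Gibbs kernel
    a = np.full(P, 1.0 / P)          # uniform marginal over prompts
    b = np.full(N, 1.0 / N)          # uniform marginal over pixels
    u = np.ones(P)
    v = np.ones(N)
    for _ in range(n_iters):         # alternate scaling to match both marginals
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)   # transport plan T = diag(u) K diag(v)

# Toy example: 4 text prompts for one class vs. 100 pixel embeddings.
rng = np.random.default_rng(0)
text = rng.normal(size=(4, 512))
pixels = rng.normal(size=(100, 512))
text /= np.linalg.norm(text, axis=1, keepdims=True)
pixels /= np.linalg.norm(pixels, axis=1, keepdims=True)
cost = 1.0 - text @ pixels.T                           # cosine distance as transport cost
plan = sinkhorn_transport(cost)
attention = plan / plan.sum(axis=1, keepdims=True)     # row-normalize into per-prompt attention
print(attention.shape)                                 # (4, 100)
```

Row-normalizing the transport plan gives each text prompt its own attention distribution over pixels, which captures the intuition of letting multiple prompts attend to different semantic features of an image; the paper's actual architecture integrates this idea into CLIP's attention layers.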
Keywords
* Artificial intelligence
* Attention
* Semantic segmentation
* Zero-shot