Summary of Open-Vocabulary Segmentation with Unpaired Mask-Text Supervision, by Zhaoqing Wang et al.


Open-Vocabulary Segmentation with Unpaired Mask-Text Supervision

by Zhaoqing Wang, Xiaobo Xia, Ziye Chen, Xiao He, Yandong Guo, Mingming Gong, Tongliang Liu

First submitted to arXiv on: 14 Feb 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper introduces Unpair-Seg, a novel framework for open-vocabulary segmentation that learns from unpaired image-mask and image-text data. Current state-of-the-art approaches rely on labor-intensive paired annotations, whereas Unpair-Seg reduces annotation cost by leveraging this weaker form of supervision. The model predicts binary masks and treats confident mask-text matches as pseudo labels, which a feature adapter then uses to align region embeddings with text embeddings (see the sketch after the summaries below). To reduce noise in the mask-entity correspondence, a large vision-language model re-captions the images and extracts precise entities, and a multi-scale matching strategy further minimizes noisy pairings. Unpair-Seg achieves impressive performance, reaching 14.6% and 19.5% mIoU on the ADE-847 and PASCAL Context-459 datasets.

Low Difficulty Summary (original content by GrooveSquid.com)
Unpair-Seg is a new way to teach computers to identify objects in images without needing labels where every mask is carefully matched to a description. Normally, this kind of training requires very detailed paired labels, which are hard and expensive to collect. Instead, Unpair-Seg uses two separate kinds of training data: pictures with masks and pictures with words. It creates its own "fake" (pseudo) labels from the most confident matches between masks and words, and then adjusts how it looks at images so that regions and words line up with these labels. To keep the matches reliable, it also uses a big model that describes what is happening in each picture. This makes Unpair-Seg very good at finding and naming objects in pictures.
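
To make the pseudo-labelling and alignment step described in the medium-difficulty summary more concrete, here is a minimal PyTorch-style sketch. It is an illustration under stated assumptions, not the authors' implementation: the FeatureAdapter, confident_pseudo_pairs, and alignment_loss names, the cosine-similarity threshold, and the InfoNCE-style loss are choices made for this example, and the paper's multi-scale matching and vision-language re-captioning are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAdapter(nn.Module):
    """Hypothetical lightweight adapter that projects region (mask) embeddings
    into the text embedding space before alignment."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, region_emb: torch.Tensor) -> torch.Tensor:
        return self.proj(region_emb)


def confident_pseudo_pairs(region_emb, text_emb, threshold=0.3):
    """Toy mask-entity pseudo-labelling: match each region to its most similar
    entity by cosine similarity and keep only matches above a threshold."""
    sim = F.normalize(region_emb, dim=-1) @ F.normalize(text_emb, dim=-1).T
    best_sim, best_idx = sim.max(dim=-1)
    keep = best_sim > threshold
    region_idx = keep.nonzero(as_tuple=True)[0]
    return region_idx, best_idx[region_idx]


def alignment_loss(adapter, region_emb, text_emb, threshold=0.3, temperature=0.07):
    """InfoNCE-style loss pulling adapted region embeddings toward the text
    embeddings of their pseudo-labelled entities."""
    region_idx, text_idx = confident_pseudo_pairs(region_emb, text_emb, threshold)
    if region_idx.numel() == 0:  # no confident pairs in this batch
        return region_emb.sum() * 0.0
    adapted = F.normalize(adapter(region_emb[region_idx]), dim=-1)
    targets = F.normalize(text_emb, dim=-1)
    logits = adapted @ targets.T / temperature
    return F.cross_entropy(logits, text_idx)


# Toy usage: random vectors stand in for embeddings pooled from predicted
# binary masks and for embeddings of entities extracted from captions.
# Random features are nearly orthogonal, so the threshold is set to 0 here;
# with real (e.g. CLIP-like) features a higher threshold would make sense.
adapter = FeatureAdapter(dim=512)
regions = torch.randn(8, 512)
entities = torch.randn(5, 512)
loss = alignment_loss(adapter, regions, entities, threshold=0.0)
loss.backward()
```

In the actual method, the region and entity features would come from pretrained vision and language encoders and matching would happen at multiple scales; only the general idea of thresholded pseudo-pairs plus a contrastive alignment objective is shown here.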

Keywords

* Artificial intelligence
* Mask