OW-VISCapTor: Abstractors for Open-World Video Instance Segmentation and Captioning

by Anwesa Choudhuri, Girish Chowdhary, Alexander G. Schwing

First submitted to arXiv on: 4 Apr 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a novel task: “open-world video instance segmentation and captioning.” The task requires detecting, segmenting, tracking, and describing previously unseen objects with rich captions. To address it, the authors develop “abstractors” that connect a vision model to a language foundation model. Specifically, they combine a multi-scale visual feature extractor and a large language model (LLM) through an object abstractor and an object-to-text abstractor. The object abstractor introduces spatially diverse open-world object queries to discover unseen objects in videos, while the object-to-text abstractor uses masked cross-attention (sketched after the summaries) to generate rich captions. Compared to a baseline that jointly addresses instance segmentation and dense video object captioning, the approach achieves a 13% improvement on previously unseen objects and a 10% improvement on object-centric captions.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a new task called “open-world video instance segmentation and captioning.” It’s like teaching computers to identify and describe things they’ve never seen before in videos! The authors develop special tools, called “abstractors,” that connect two types of models: one for understanding images (a vision model) and one for understanding language (a language foundation model). These abstractors help the computer detect objects, track them over time, and write a description of each object. This approach does better than other methods at describing new objects in videos.

Keywords

» Artificial intelligence  » Cross attention  » Instance segmentation  » Large language model  » Tracking