Summary of Weakly Supervised Gaussian Contrastive Grounding with Large Multimodal Models for Video Question Answering, by Haibo Wang et al.
Weakly Supervised Gaussian Contrastive Grounding with Large Multimodal Models for Video Question Answering
by Haibo Wang, Chenghang Lai, Yixuan Sun, Weifeng Ge
First submitted to arXiv on: 19 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract; read it on the arXiv page. |
Medium | GrooveSquid.com (original content) | This paper proposes a weakly supervised framework for Video Question Answering (VideoQA) that improves the reasoning of Large Multimodal Models (LMMs) by supplying question-critical moments as visual inputs. Current LMMs sample frames uniformly from a video, which can miss the visual clues a question actually depends on. To address this, the method fuses each question-answer pair into an event description and scores frames against it with CLIP to obtain keyframe pseudo-labels. A lightweight Gaussian-based Contrastive Grounding (GCG) module is then trained to sample the question-critical frames as positive moments for the LMM (a code sketch of these two steps follows this table). The framework achieves substantial improvements over previous state-of-the-art methods on several VideoQA benchmarks. |
Low | GrooveSquid.com (original content) | Video Question Answering tries to answer questions about what is happening in a video. Large Multimodal Models do a great job with pictures and words, but they don't really pick out the important parts of videos. To fix this, the researchers propose training these models with weak supervision: they combine each question and answer into an event description to find the important moments in a video, then use a module called Gaussian-based Contrastive Grounding to select those moments as visual inputs for the model. This approach works better than previous methods on several VideoQA benchmarks. |
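The two steps described in the medium-difficulty summary (CLIP-based keyframe pseudo-labeling from a fused question-answer description, and Gaussian-based sampling of positive moments) can be illustrated with a minimal sketch. This is not the authors' code: the CLIP checkpoint, the naive question-answer fusion, the `top_k` cutoff, and the single-Gaussian `GaussianGrounding` module are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of:
# (1) scoring frames against a fused question+answer "event description"
#     with CLIP to obtain keyframe pseudo-labels, and
# (2) a learnable Gaussian over normalized time whose values weight frames,
#     so high-weight (question-critical) frames can be sampled as positive
#     moments for the LMM.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def keyframe_pseudo_labels(frames, question, answer, top_k=4):
    """frames: list of PIL.Image frames sampled from the video.
    Scores each frame against the fused event description and marks the
    top-k highest-scoring frames as pseudo-labeled keyframes."""
    event = f"{question} {answer}"  # naive fusion; the paper may phrase this differently
    inputs = processor(text=[event], images=frames, return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image.squeeze(-1)  # shape: (num_frames,)
    labels = torch.zeros_like(sims)
    labels[sims.topk(top_k).indices] = 1.0
    return sims, labels

class GaussianGrounding(torch.nn.Module):
    """Learns a Gaussian over normalized time [0, 1]; its values weight the
    frames so the highest-weight moments can be fed to the LMM as positives,
    while low-weight frames can serve as negatives for a contrastive loss."""
    def __init__(self):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.tensor(0.5))     # temporal center
        self.sigma = torch.nn.Parameter(torch.tensor(0.2))  # temporal width

    def forward(self, num_frames):
        t = torch.linspace(0.0, 1.0, num_frames)
        return torch.exp(-0.5 * ((t - self.mu) / self.sigma.clamp(min=1e-3)) ** 2)

# Example: weight 32 frames, then pick the 4 highest-weight frames for the LMM.
weights = GaussianGrounding()(num_frames=32)
positive_idx = weights.topk(4).indices
```

In the actual GCG framework, the Gaussian weights would presumably be trained to agree with the CLIP pseudo-labels, e.g. via a contrastive loss that treats pseudo-keyframes as positives and the remaining frames as negatives; that alignment is the weak supervision the summary describes.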
Keywords
» Artificial intelligence » Grounding » Question answering » Supervised