
Enhancing Temporal Modeling of Video LLMs via Time Gating

by Zi-Yuan Hu, Yiwu Zhong, Shijia Huang, Michael R. Lyu, Liwei Wang

First submitted to arXiv on: 8 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This research paper proposes a novel architecture for Video Large Language Models (Video LLMs) that addresses their tendency to neglect temporal information in video data. The proposed Time Gating Video LLM (TG-Vid) incorporates a Time Gating (TG) module that enables the model to robustly understand temporal information within videos. This is achieved through a time gating mechanism applied to the module's sub-components: gating spatial attention, gating temporal attention, and gating MLP (see the illustrative sketch after the summaries below). The paper evaluates TG-Vid on three temporally sensitive video benchmarks, MVBench, TempCompass, and NExT-QA, demonstrating significant performance gains over existing Video LLMs. Ablation studies further validate the effectiveness of the proposed Time Gating module.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This research proposes a new way for computers to understand videos by paying attention to time as well as space. Current models are not good at understanding what happens in a video over time, and this new model is designed to fix that problem. It does so with a technique called "time gating", which helps the computer focus on specific parts of the video and understand how they relate to each other over time. The researchers tested their model on three different video tasks and found that it performed much better than existing models.

Keywords

  * Artificial intelligence
  * Attention