Summary of TC-LLaVA: Rethinking the Transfer from Image to Video Understanding with Temporal Considerations, by Mingze Gao et al.
TC-LLaVA: Rethinking the Transfer from Image to Video Understanding with Temporal Considerations
by Mingze Gao, Jingyu Liu, Mingda Li, Jiangtao Xie, Qingbin Liu, Bo Zhao, Xi Chen, Hui Xiong
First submitted to arXiv on: 5 Sep 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract. |
Medium | GrooveSquid.com (original content) | This paper proposes two strategies to improve the performance of Multimodal Large Language Models (MLLMs) on video-related tasks. The first, Temporal-Aware Dual Rotary Position Embedding (RoPE), injects temporal position information into the attention computation to strengthen the model's temporal modeling. The second, a Frame-wise Block Causal Attention Mask, broadens visual token interactions within and across video frames while preserving the causal inference mechanism (a toy sketch of this masking idea follows the table). Applying both methods to LLaVA yields Temporal-Considered LLaVA (TC-LLaVA), which achieves state-of-the-art performance on various video understanding benchmarks with only supervised fine-tuning. |
Low | GrooveSquid.com (original content) | This research improves how well a computer program understands videos. The program, called Temporal-Considered LLaVA, uses two new techniques to better process video content. These techniques help it learn the relationships between different parts of a video, such as scenes or frames, which leads to better results on a range of video understanding tasks. |
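To make the frame-wise masking idea concrete, below is a minimal PyTorch sketch of how a frame-wise block causal attention mask could be constructed. This is an illustration, not the authors' implementation: the function name `frame_block_causal_mask` and the arguments `num_frames` and `tokens_per_frame` are assumed for the example, and the sketch covers only the visual tokens, leaving out how text tokens are interleaved in the full model.

```python
import torch

def frame_block_causal_mask(num_frames: int, tokens_per_frame: int) -> torch.Tensor:
    """Illustrative frame-wise block causal mask over visual tokens.

    Tokens within the same frame may attend to each other freely, while
    attention across frames is only allowed toward the same or earlier
    frames, preserving causal ordering in time. Returns a boolean matrix
    of shape (N, N) where True means attention is allowed.
    """
    n = num_frames * tokens_per_frame
    # Frame index of each visual token: 0, 0, ..., 1, 1, ..., F-1, F-1.
    frame_ids = torch.arange(n) // tokens_per_frame
    # Query token i may attend to key token j iff frame(j) <= frame(i).
    return frame_ids.unsqueeze(1) >= frame_ids.unsqueeze(0)

# Example: 3 frames with 2 visual tokens each gives a 6x6 mask that is
# block lower-triangular, with full attention inside each 2x2 block.
print(frame_block_causal_mask(3, 2).int())
```

In an attention layer, a mask like this would typically be converted to additive form (0 where attention is allowed, a large negative value where it is blocked) and added to the attention logits before the softmax, alongside the usual causal mask for the text tokens.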
Keywords
» Artificial intelligence » Attention » Embedding » Fine tuning » Inference » Mask » Supervised » Token