LVD-2M: A Long-take Video Dataset with Temporally Dense Captions

by Tianwei Xiong, Yuqing Wang, Daquan Zhou, Zhijie Lin, Jiashi Feng, Xihui Liu

First submitted to arXiv on: 14 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses a crucial challenge in training video generation models, which depend heavily on the quality of their training datasets. Most existing models are trained on short clips, while interest in generating longer videos is growing. However, the lack of suitable long videos hinders progress in this area. To overcome this limitation, the authors introduce a novel pipeline for selecting high-quality long-take videos and generating temporally dense captions. They define metrics to assess video quality, enabling them to select promising candidates from a large pool of source videos. A hierarchical captioning pipeline then annotates the selected videos with dense captions. The resulting dataset, LVD-2M, comprises 2 million long-take videos, each over 10 seconds long and annotated with temporally dense captions. To validate the dataset's effectiveness, video generation models are fine-tuned on it to generate long videos with dynamic motions. This work is expected to contribute significantly to future research in long video generation.
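To make the described pipeline more concrete, here is a minimal Python sketch of its two stages as the summary presents them: filtering for long, uncut, dynamic clips, and merging per-segment captions into one temporally dense caption. All names, fields, and thresholds below are illustrative assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative sketch only: the Clip fields, metric names, and thresholds
# are assumptions, not the paper's actual pipeline.

@dataclass
class Clip:
    path: str
    duration_s: float   # clip length in seconds
    scene_cuts: int     # number of detected hard cuts (assumed precomputed)
    motion: float       # e.g. normalized mean optical-flow magnitude

def is_long_take(clip: Clip, min_duration_s: float = 10.0,
                 min_motion: float = 0.2) -> bool:
    """Keep clips that are long enough, free of scene cuts, and dynamic."""
    return (clip.duration_s >= min_duration_s
            and clip.scene_cuts == 0
            and clip.motion >= min_motion)

def merge_dense_captions(segments: List[Tuple[float, float, str]]) -> str:
    """Join per-segment captions into one temporally dense caption.

    In a hierarchical pipeline, each short segment would first be captioned
    by a vision-language model; a language model could then smooth the result.
    """
    return " ".join(f"[{start:.0f}-{end:.0f}s] {text}"
                    for start, end, text in sorted(segments))

# Usage: filter a candidate pool, then attach a dense caption.
pool = [
    Clip("a.mp4", 12.5, 0, 0.6),
    Clip("b.mp4", 8.0, 0, 0.9),   # rejected: too short
    Clip("c.mp4", 15.0, 2, 0.4),  # rejected: contains scene cuts
]
keepers = [c for c in pool if is_long_take(c)]
caption = merge_dense_captions([(0.0, 5.0, "A dog runs across a field."),
                                (5.0, 10.0, "It jumps over a low fence.")])
```

In the real pipeline, the scene-cut and motion scores would come from video analysis tools and the per-segment captions from a vision-language model; this sketch only shows how the filtering criteria and temporally dense captions fit together.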
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps us make better artificial intelligence (AI) that can create longer videos. Right now, AI models are mostly trained on short clips because it’s hard to find good, long videos for them to learn from. The authors created a new way to pick the best long-take videos and add words to describe what’s happening in those videos. They made a big dataset with 2 million long-take videos, each over 10 seconds long, and added captions that tell you what’s happening at every moment. This will help make AI models better at creating longer videos with more movement.

Keywords

  • Artificial intelligence