Summary of TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation, by Hritik Bansal et al.
TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation
by Hritik Bansal, Yonatan Bitton, Michal Yarom, Idan Szpektor, Aditya Grover, Kai-Wei Chang
First submitted to arXiv on: 7 May 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper’s original abstract, available on the arXiv page. |
| Medium | GrooveSquid.com (original content) | This paper introduces Time-Aligned Captions (TALC), a framework that augments the text-conditioning mechanism of pre-trained text-to-video (T2V) generative models so they can generate multi-scene videos. TALC exploits the temporal alignment between video scenes and their descriptions, conditioning the visual features of earlier and later scenes on the representations of the first and second scene descriptions, respectively. This yields multi-scene videos that are visually consistent and adhere to the multi-scene text descriptions. The authors also fine-tune a pre-trained T2V model with the TALC framework, achieving a 29% relative gain in overall score (visual consistency and text adherence) in human evaluation. (A rough illustrative sketch of this conditioning idea follows the table.) |
| Low | GrooveSquid.com (original content) | This research helps computers create videos with multiple scenes that match what people write. Today’s models can only make short videos of a single scene, like a red panda climbing a tree. But real-life videos often have multiple scenes, like the red panda climbing the tree and then sleeping on it. The researchers created a new way to make these multi-scene videos using a framework called Time-Aligned Captions (TALC). They tested their method by fine-tuning a pre-trained model and got better results than earlier approaches. |
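The core mechanism described in the medium-difficulty summary is that each scene’s frames are conditioned on that scene’s own caption, rather than on a single merged caption. The snippet below is a minimal sketch of that idea, not the authors’ implementation: the module name `TimeAlignedConditioner`, the cross-attention choice, the dimensions, and the random stand-in features are all illustrative assumptions.

```python
# Minimal sketch of time-aligned text conditioning (assumed design,
# not the TALC authors' actual code): frames belonging to scene k
# cross-attend only to the embedding of caption k.
import torch
import torch.nn as nn

class TimeAlignedConditioner(nn.Module):
    """Cross-attends each scene's frame features to that scene's caption only."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, frame_feats, caption_embs, scene_ids):
        # frame_feats:  (num_frames, dim)         one visual feature per frame
        # caption_embs: (num_scenes, tokens, dim) text embeddings per scene
        # scene_ids:    (num_frames,)             scene index of each frame
        out = torch.empty_like(frame_feats)
        for scene in scene_ids.unique():
            mask = scene_ids == scene            # frames of this scene
            q = frame_feats[mask].unsqueeze(0)   # (1, n_scene_frames, dim)
            kv = caption_embs[scene].unsqueeze(0)
            attended, _ = self.attn(q, kv, kv)   # condition on own caption only
            out[mask] = attended.squeeze(0)
        return out

# Toy usage: 8 frames split into two scenes, each with its own caption.
torch.manual_seed(0)
dim = 64
frames = torch.randn(8, dim)                        # stand-in frame features
captions = torch.randn(2, 5, dim)                   # stand-in caption embeddings
scene_ids = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])  # first half = scene one
conditioner = TimeAlignedConditioner(dim)
conditioned = conditioner(frames, captions, scene_ids)
print(conditioned.shape)  # torch.Size([8, 64])
```

The key design point this sketch tries to capture is the alignment itself: because each frame only attends to the caption of its own scene, a later scene description cannot bleed into earlier frames, which is what lets the model follow multi-scene prompts.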
Keywords
- Artificial intelligence
- Alignment
- Fine-tuning