LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior

by Hanyu Wang, Saksham Suri, Yixuan Ren, Hao Chen, Abhinav Shrivastava

First submitted to arXiv on: 28 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
LARP is a novel video tokenizer designed to overcome limitations in current video tokenization methods for autoregressive (AR) generative models. Unlike traditional patchwise tokenizers, LARP employs a holistic tokenization scheme that captures more global and semantic representations. This design allows LARP to support an arbitrary number of discrete tokens, enabling adaptive and efficient tokenization based on the specific requirements of the task. The tokenizer also integrates a lightweight AR transformer as a training-time prior model that predicts the next token in its discrete latent space. By incorporating this prior model during training, LARP learns a latent space that is not only optimized for video reconstruction but also structured in a way conducive to autoregressive generation. Comprehensive experiments demonstrate LARP's strong performance, achieving state-of-the-art FVD on the UCF101 class-conditional video generation benchmark.
Low Difficulty Summary (written by GrooveSquid.com; original content)
LARP is a new way to break down videos into small pieces called tokens. This helps make it easier for computers to understand and generate videos. The old way was to look at tiny parts of the video, but LARP looks at the whole picture. It can also change how many tokens there are based on what task you’re doing. To make sure it works well, LARP uses a special computer program that helps predict what comes next. This makes it possible for computers to generate videos smoothly and accurately.
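The training idea in the medium summary, reconstructing the video well while also keeping the token sequence easy for an AR model to predict, can be sketched as a combined loss. This is a toy illustration only: the function names, the per-position cross-entropy prior, and the weighting `lambda_prior` are assumptions for exposition, not details taken from the paper.

```python
import math

def softmax(logits):
    """Convert a list of raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ar_prior_loss(logits, tokens):
    """Average cross-entropy of predicting token t+1 from the logits at position t.

    `logits[t]` is the prior model's score vector over the vocabulary at
    position t; `tokens` is the discrete token sequence from the tokenizer.
    """
    losses = []
    for t in range(len(tokens) - 1):
        probs = softmax(logits[t])
        losses.append(-math.log(probs[tokens[t + 1]] + 1e-9))
    return sum(losses) / len(losses)

def total_loss(recon_loss, logits, tokens, lambda_prior=0.1):
    # Joint objective: reconstruction quality plus how predictable
    # the token sequence is for the autoregressive prior.
    return recon_loss + lambda_prior * ar_prior_loss(logits, tokens)
```

The key design point the summary highlights is that the prior term is applied during tokenizer training, so the learned latent space is shaped to be AR-friendly rather than only reconstruction-friendly.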

Keywords

» Artificial intelligence  » Autoregressive  » Latent space  » Token  » Tokenization  » Tokenizer  » Transformer