

Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model

by Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, Omer Levy

First submitted to arXiv on: 20 Aug 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read it on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)
The proposed Transfusion approach combines a language modeling loss function with diffusion training to train a single transformer over mixed-modality sequences. Models are pretrained on a mixture of text and image data, which allows scaling laws to be established across a variety of benchmarks. The results show that Transfusion scales significantly better than quantizing images and training a language model over discrete image tokens. Introducing modality-specific encoding and decoding layers further improves performance and even allows each image to be compressed to just 16 patches. When scaled to 7B parameters and 2T multi-modal tokens, the method generates images and text on par with similar-scale diffusion models and language models.
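To make the combined objective concrete, the sketch below shows one way a single transformer could be trained with both losses: next-token cross-entropy on text tokens plus a denoising (diffusion-style) loss on noisy image patches. This is not the authors' implementation; the module names, shapes, simplified noise schedule, and the weight lambda_img are illustrative assumptions, and the paper's attention masking is omitted for brevity.

# Minimal sketch (not the authors' code): one transformer, two losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTransfusion(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256, n_layers=4, patch_dim=64):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.patch_in = nn.Linear(patch_dim, d_model)    # modality-specific encoder for image patches
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)    # predicts the next text token
        self.patch_out = nn.Linear(d_model, patch_dim)   # modality-specific decoder: predicts patch noise

    def forward(self, text_tokens, noisy_patches):
        # One sequence: text embeddings followed by noisy image-patch embeddings.
        h = torch.cat([self.text_embed(text_tokens), self.patch_in(noisy_patches)], dim=1)
        h = self.backbone(h)
        n_text = text_tokens.size(1)
        return self.lm_head(h[:, :n_text]), self.patch_out(h[:, n_text:])

def transfusion_loss(model, text_tokens, clean_patches, lambda_img=1.0):
    # Diffusion-style corruption: mix clean patches with Gaussian noise at a random strength
    # (a stand-in for a real noise schedule).
    noise = torch.randn_like(clean_patches)
    alpha = torch.rand(clean_patches.size(0), 1, 1)
    noisy_patches = alpha.sqrt() * clean_patches + (1 - alpha).sqrt() * noise

    logits, noise_pred = model(text_tokens[:, :-1], noisy_patches)
    lm_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                              text_tokens[:, 1:].reshape(-1))   # next-token prediction on text
    diff_loss = F.mse_loss(noise_pred, noise)                   # denoising objective on image patches
    return lm_loss + lambda_img * diff_loss

# Tiny usage example with random data.
model = ToyTransfusion()
text = torch.randint(0, 1000, (2, 17))   # batch of 2 sequences of 17 text tokens
patches = torch.randn(2, 16, 64)         # 16 patches per image, echoing the paper's most compressed setting
loss = transfusion_loss(model, text, patches)
loss.backward()

The key design point the sketch tries to illustrate is that there is no separate image model: the same backbone receives both modalities in one sequence, and only the input/output projections are modality-specific.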
Low Difficulty Summary (GrooveSquid.com original content)
Transfusion is a new way to train a model that can understand and work with different types of data, like text and images. This approach combines two techniques: language modeling and diffusion training. By pre-training on a mix of text and image data, the model learns to process mixed-modality sequences. The results show that Transfusion works better than other methods, and it can even compress images into small pieces. When scaled up, Transfusion can generate images and text similar to other advanced models.

Keywords

» Artificial intelligence  » Diffusion  » Language model  » Loss function  » Multi modal  » Scaling laws  » Text generation  » Transformer