TinyFusion: Diffusion Transformers Learned Shallow

by Gongfan Fang, Kunjun Li, Xinyin Ma, Xinchao Wang

First submitted to arXiv on: 2 Dec 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The researchers present TinyFusion, a method for pruning the depth of diffusion transformers to reduce inference overhead while maintaining strong performance. This is achieved through end-to-end learning with a differentiable sampling technique that simulates how the pruned model will perform after fine-tuning. Unlike existing methods that focus on minimizing immediate loss or error, the approach explicitly models post-fine-tuning performance. TinyFusion outperforms importance-based and error-based pruning methods and generalizes well across diverse architectures such as DiTs, MARs, and SiTs. For example, it can craft a shallow version of DiT-XL at just 7% of the original pre-training cost, achieving a 2x inference speedup with an FID score of 2.86.
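To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of differentiable depth pruning in the spirit of TinyFusion: each transformer block gets a learnable keep/drop decision sampled with a straight-through Gumbel-softmax, so the pruning mask is trained end to end alongside the weights. The class and parameter names (DepthPruner, tau) are illustrative assumptions, not the authors' code, and the paper additionally co-optimizes a weight update during the search so the mask reflects recoverability after fine-tuning rather than immediate loss.

```python
# Hypothetical sketch, not the authors' implementation: differentiable
# depth pruning via a straight-through Gumbel-softmax keep/drop mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthPruner(nn.Module):
    """Wraps a stack of transformer blocks with learnable keep/drop gates."""

    def __init__(self, blocks, tau=1.0):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.tau = tau  # Gumbel-softmax temperature
        # One learnable (keep, drop) logit pair per block.
        self.logits = nn.Parameter(torch.zeros(len(blocks), 2))

    def forward(self, x):
        # hard=True yields discrete 0/1 samples in the forward pass while
        # the straight-through estimator lets gradients reach the logits,
        # so the pruning decision itself is learned end to end.
        mask = F.gumbel_softmax(self.logits, tau=self.tau, hard=True)[:, 0]
        for keep, block in zip(mask, self.blocks):
            # A dropped block degenerates to the identity, which is
            # exactly what removing a layer means at inference time.
            x = keep * block(x) + (1.0 - keep) * x
        return x
```

A full training loop would combine the usual diffusion objective with a term steering the mask toward the target depth; the summary's key point is that candidate masks are scored by how well the pruned model can recover after fine-tuning, not by the loss right after layers are removed.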
Low Difficulty Summary (written by GrooveSquid.com, original content)
TinyFusion is a new way to make diffusion transformers faster without losing their ability to do tasks well. The team created a method that removes layers from these models in a way that leaves the smaller model able to keep learning and recover its quality after the changes. This makes it practical to use diffusion transformers in real-world applications, even though they are usually very large and slow. TinyFusion does better than other methods for pruning diffusion transformers, and it works well with different types of architectures.
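Building on the sketch above, once the mask has converged the surviving blocks can be extracted into a shallow model and briefly fine-tuned. The helper below (export_shallow) is an illustrative assumption, not part of the paper's code; it simply takes the argmax keep/drop decision per block.

```python
# Hypothetical follow-on to the DepthPruner sketch above.
import torch
import torch.nn as nn

@torch.no_grad()
def export_shallow(pruner: "DepthPruner") -> nn.Sequential:
    # After the search, pick the higher logit (keep vs. drop) per block;
    # the kept blocks form the shallow transformer, which is then
    # fine-tuned briefly (the summary reports an FID of 2.86 at a small
    # fraction of the original pre-training cost for DiT-XL).
    keep = pruner.logits.argmax(dim=-1) == 0
    return nn.Sequential(*(b for k, b in zip(keep, pruner.blocks) if k))
```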

Keywords

» Artificial intelligence  » Diffusion  » Fine tuning  » Generalization  » Inference  » Pruning