Summary of Attention-driven Training-free Efficiency Enhancement of Diffusion Models, by Hongjie Wang et al.
Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models
by Hongjie Wang, Difan Liu, Yan Kang, Yijun Li, Zhe Lin, Niraj K. Jha, Yuchen Liu
First submitted to arXiv on: 8 May 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Image and Video Processing (eess.IV); Signal Processing (eess.SP)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This research introduces Attention-driven Training-free Efficient Diffusion Model (AT-EDM), a framework that improves the inference efficiency of pre-trained diffusion models (DMs) without any retraining. The approach leverages attention maps to prune redundant tokens at run time. The researchers develop a novel token-ranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify redundant tokens, and a similarity-based recovery method to restore tokens for the convolution operation. They also propose Denoising-Steps-Aware Pruning (DSAP) to adjust the pruning budget across different denoising timesteps. Experiments show that AT-EDM achieves FID and CLIP scores similar to those of the full model while saving 38.8% of floating-point operations and delivering up to a 1.53x speed-up over Stable Diffusion XL. |
Low | GrooveSquid.com (original content) | This paper introduces a new way to make computer models create realistic images more efficiently. Right now, these models are very good at making pictures that look like real things, but they’re also very slow and use a lot of computer power. The researchers came up with a new idea called AT-EDM that helps the model focus on what’s important and ignore what’s not needed. This makes the model faster and uses less energy. They tested their idea and found it works well, making images just as good as the original model but much quicker. |
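The medium-difficulty summary above describes attention-driven token pruning ranked by a PageRank-style importance score. As a rough illustration of that general idea (not the authors' exact G-WPR algorithm), the sketch below scores tokens with a standard weighted-PageRank iteration over an attention map and keeps only the top-scoring ones; all function names and parameters here are illustrative assumptions:

```python
import numpy as np

def token_importance(attn, num_iters=10, d=0.85):
    """Weighted-PageRank-style token scores from an attention map.

    attn: (N, N) row-stochastic attention matrix (each row sums to 1).
    A token scores highly when it is strongly attended to by other
    high-scoring tokens. Generic illustration, not the paper's G-WPR.
    """
    n = attn.shape[0]
    score = np.full(n, 1.0 / n)
    for _ in range(num_iters):
        score = (1 - d) / n + d * (attn.T @ score)
    return score

def prune_tokens(tokens, attn, keep_ratio=0.5):
    """Keep the top-`keep_ratio` fraction of tokens by importance."""
    k = max(1, int(tokens.shape[0] * keep_ratio))
    # Sort kept indices so surviving tokens stay in their original order.
    keep = np.sort(np.argsort(token_importance(attn))[-k:])
    return tokens[keep], keep

# Toy usage: 8 random tokens with a softmax-normalized attention map.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))
logits = rng.normal(size=(8, 8))
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
pruned, kept = prune_tokens(tokens, attn, keep_ratio=0.5)
print(pruned.shape)  # (4, 4)
```

In the paper's full pipeline, pruned tokens are later restored by a similarity-based recovery step before convolution, and the pruning budget is varied across denoising timesteps (DSAP) rather than held fixed as in this sketch.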
Keywords
» Artificial intelligence » Attention » Diffusion » Diffusion model » Pruning