Summary of DDIL: Improved Diffusion Distillation with Imitation Learning, by Risheek Garrepalli et al.
DDIL: Improved Diffusion Distillation With Imitation Learning
by Risheek Garrepalli, Shweta Mahajan, Munawar Hayat, Fatih Porikli
First submitted to arXiv on: 15 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
| --- | --- | --- |
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper proposes diffusion distillation within imitation learning (DDIL), a framework for making diffusion-based generative models more practical. The authors identify covariate shift as a major limitation of multi-step distilled models: small prediction errors compound at inference time and degrade sample quality. To address this, they formulate distillation in an imitation-learning setting and train the student on both the data distribution and the student-induced distribution, which diversifies generations while preserving the marginal data distribution and correcting covariate shift. They also adopt a reflected diffusion formulation for distillation, reporting improved performance and stable training across distillation methods, and they compare against baseline algorithms such as progressive distillation (PD), latent consistency models (LCM), and distribution matching distillation (DMD2). An illustrative code sketch of the core training idea appears after this table. |
| Low | GrooveSquid.com (original content) | This paper helps make better computer programs that can create new images or text from existing ones. Right now, these programs are pretty good, but they take a long time to generate new things because they need to repeat some steps many times. The authors figured out why these programs sometimes don’t work well and came up with a new way to make them better, called diffusion distillation within imitation learning (DDIL). This method helps the programs create more varied and useful new things while making sure they don’t get confused or mixed up along the way. |
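
To make the medium-difficulty summary concrete, here is a minimal, hypothetical PyTorch sketch of a DDIL-style distillation step. It is not the authors’ implementation: every name (`TinyDenoiser`, `student_rollout`, the toy forward process, the placeholder step size) is an illustrative assumption. It only shows the core idea of combining a teacher-matching loss on noised real data (the data distribution) with a teacher-matching loss on states visited by the student’s own sampler (the student-induced distribution), which is the imitation-learning correction for covariate shift.

```python
# Hypothetical sketch of a DDIL-style distillation step (not the authors' code).
# Idea: match the teacher's predictions both on noised real data (data
# distribution) and on states the student itself visits during sampling
# (student-induced distribution), so errors do not compound at inference time.

import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for a diffusion network predicting noise from (x_t, t)."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, x_t, t):
        # Broadcast t as one extra scalar feature per sample.
        t_feat = t.expand(x_t.shape[0], 1)
        return self.net(torch.cat([x_t, t_feat], dim=-1))

def add_noise(x0, t):
    """Toy forward process: interpolate clean data toward Gaussian noise."""
    eps = torch.randn_like(x0)
    return (1 - t) * x0 + t * eps

@torch.no_grad()
def student_rollout(student, x_T, ts):
    """Run the student's own few-step sampler and record the states it visits.

    States are collected without gradients; the loss below only backpropagates
    through the student's predictions evaluated at these states."""
    x, states = x_T, []
    for t in ts:
        states.append((x.clone(), t))
        x = x - student(x, t) * 0.1  # toy update rule, placeholder step size
    return states

def ddil_style_step(teacher, student, opt, x0, mix=0.5):
    dim = x0.shape[-1]
    # (a) Data-distribution branch: noised real samples, as in plain distillation.
    t = torch.rand(1)
    x_t = add_noise(x0, t)
    loss_data = (student(x_t, t) - teacher(x_t, t).detach()).pow(2).mean()

    # (b) Student-induced branch: query the teacher on the states the student
    # actually visits during its own rollout (the covariate-shift correction).
    x_T = torch.randn(x0.shape[0], dim)
    ts = [torch.tensor([s]) for s in (0.9, 0.6, 0.3)]
    loss_student = 0.0
    for x_s, t_s in student_rollout(student, x_T, ts):
        loss_student = loss_student + (student(x_s, t_s) - teacher(x_s, t_s).detach()).pow(2).mean()
    loss_student = loss_student / len(ts)

    loss = (1 - mix) * loss_data + mix * loss_student
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

teacher, student = TinyDenoiser(), TinyDenoiser()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
print(ddil_style_step(teacher, student, opt, torch.randn(8, 16)))
```

Weighting the two branches with `mix` stands in for the paper’s idea of training on both distributions; the actual method also adopts a reflected diffusion formulation, which this toy sketch omits.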
Keywords
- Artificial intelligence
- Diffusion
- Distillation
- Inference