

Inference-Time Diffusion Model Distillation

by Geon Yeong Park, Sang Wan Lee, Jong Chul Ye

First submitted to arXiv on: 12 Dec 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper introduces Distillation++, a novel inference-time distillation framework that improves diffusion distillation models by incorporating teacher-guided refinement during sampling. Distilled student models typically lag behind pre-trained diffusion models, a gap that widens with distribution shifts and accumulated errors during multi-step sampling. Distillation++ narrows this gap by recasting student sampling as a proximal optimization problem with a score distillation sampling (SDS) loss, letting the pre-trained teacher steer the student's sampling trajectory toward the clean data manifold. The approach demonstrates substantial improvements over state-of-the-art distillation baselines, particularly in the early sampling stages.
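
Concretely, the summary above describes a loop in which the student's intermediate clean-image estimate is corrected by an SDS-style teacher gradient before the next sampling step. The PyTorch sketch below illustrates that idea under stated assumptions: the student and teacher callables, the noise schedule, and the re-noising rule are simplified placeholders for illustration, not the authors' released implementation.

import torch

def sds_refine(x0_hat, t, teacher, guidance_scale=1.0):
    """One SDS-style teacher correction: nudge the student's clean
    estimate x0_hat toward the teacher's data manifold."""
    noise = torch.randn_like(x0_hat)
    # Re-noise the estimate so the teacher sees an input at its expected
    # noise level; this flat schedule is an assumed toy value, not the paper's.
    alpha_t, sigma_t = 0.7, 0.7
    x_t_teacher = alpha_t * x0_hat + sigma_t * noise
    eps_teacher = teacher(x_t_teacher, t)  # teacher's noise prediction
    # SDS gradient: mismatch between the teacher's prediction and the
    # injected noise, applied directly to the estimate.
    return x0_hat - guidance_scale * (eps_teacher - noise)

@torch.no_grad()
def distillation_pp_sample(student, teacher, shape, steps=4):
    """Few-step student sampling with inference-time teacher guidance."""
    x_t = torch.randn(shape)
    for t in reversed(range(steps)):
        x0_hat = student(x_t, t)                 # student's clean-image estimate
        x0_hat = sds_refine(x0_hat, t, teacher)  # teacher-guided refinement
        # Toy re-noising to move to the next (lower) noise level.
        x_t = x0_hat if t == 0 else x0_hat + 0.1 * torch.randn(shape)
    return x_t

# Toy stand-ins so the sketch runs end to end.
student = lambda x, t: 0.9 * x
teacher = lambda x, t: 0.1 * torch.randn_like(x)
sample = distillation_pp_sample(student, teacher, (1, 3, 8, 8))

In this SDS-style view the teacher is only evaluated, never backpropagated through, so the correction adds modest inference cost on top of the student's few-step sampling.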
Low Difficulty Summary (original content by GrooveSquid.com)
The paper is about making a type of machine learning model work better by adding something called "teacher-guided refinement" to its process. The new approach is called Distillation++, and it is designed specifically for "diffusion distillation models", which generate images in only a few steps. While the model is generating, a larger, more accurate "teacher" model nudges its intermediate results toward cleaner images. With this technique, the model produces more accurate results without needing additional training data.

Keywords

» Artificial intelligence  » Diffusion  » Distillation  » Inference  » Machine learning  » Optimization  » Student model