Summary of Adv-KD: Adversarial Knowledge Distillation for Faster Diffusion Sampling, by Kidist Amde Mekonnen et al.
Adv-KD: Adversarial Knowledge Distillation for Faster Diffusion Sampling, by Kidist Amde Mekonnen, Nicola Dall'Asen, Paolo Rota. First…
Diffusion Actor-Critic: Formulating Constrained Policy Iteration as Diffusion Noise Regression for Offline Reinforcement Learning, by Linjiajie…
Slight Corruption in Pre-training Data Makes Better Diffusion Models, by Hao Chen, Yujin Han, Diganta Misra, …
Improving the Training of Rectified Flows, by Sangyun Lee, Zinan Lin, Giulia Fanti. First submitted to arXiv…
Don’t drop your samples! Coherence-aware training benefits Conditional diffusion, by Nicolas Dufour, Victor Besnier, Vicky Kalogeiton, …
KerasCV and KerasNLP: Vision and Language Power-Ups, by Matthew Watson, Divyashree Shivakumar Sreepathihalli, Francois Chollet, Martin…
Exploring Diffusion Models’ Corruption Stage in Few-Shot Fine-tuning and Mitigating with Bayesian Neural Networks, by Xiaoyu…
Transition Path Sampling with Improved Off-Policy Training of Diffusion Path Samplers, by Kiyoung Seong, Seonghyun Park, …
Learning from Random Demonstrations: Offline Reinforcement Learning with Importance-Sampled Diffusion Models, by Zeyu Fang, Tian Lan. First…
Diffusion Policies creating a Trust Region for Offline Reinforcement Learning, by Tianyu Chen, Zhendong Wang, Mingyuan…