Mitigating Embedding Collapse in Diffusion Models for Categorical Data
by Bac Nguyen, Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Stefano Ermon, and Yuki Mitsufuji
First submitted to arXiv on: 18 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract (available on arXiv). |
| Medium | GrooveSquid.com (original content) | Latent diffusion models have improved the handling of categorical data by leveraging continuous-state diffusion. However, most existing methods rely on fixed, pretrained embeddings, which limits their effectiveness. To address this limitation, we introduce CATDM, a framework that jointly trains the embedding and the latent diffusion model while keeping training stable. Our approach combines a joint embedding-diffusion variational lower bound with a Consistency-Matching (CM) regularizer to ensure recovery of the true data distribution, and adds a shifted cosine noise schedule and a random dropping strategy to further improve performance. Our experiments show that CATDM mitigates embedding collapse, achieving superior results on FFHQ, LSUN Churches, and LSUN Bedrooms for image generation, outperforming non-autoregressive models in machine translation, and achieving results competitive with previous methods in text generation. (A toy sketch of the noise schedule and joint training loss appears after this table.) |
| Low | GrooveSquid.com (original content) | Researchers have developed a new way to improve the quality of generated data such as images and text. They call it CATDM. The key idea is to train two parts together: one that learns a representation of the data (the embedding) and one that generates new samples (the diffusion model). Training them together helps prevent a problem called "embedding collapse," which degrades the quality of whatever gets generated. To make the method work well, the authors add extra ingredients such as a tailored noise schedule and a consistency-matching rule. They tested CATDM on several datasets and found that it generates better images than other methods and also does well on language tasks like machine translation and text generation. |
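Illustrative Code Sketch
To make the moving pieces named in the medium summary concrete, below is a minimal, self-contained PyTorch sketch of (1) a shifted cosine noise schedule and (2) one joint embedding-plus-diffusion training step. This is not the authors' implementation: the network, loss weights, shift value, and the form of the consistency term are illustrative assumptions. In particular, a plain agreement penalty between denoiser predictions at two noise levels stands in for the paper's Consistency-Matching regularizer, and the random dropping strategy is omitted.

```python
# Hedged sketch only -- NOT the CATDM authors' code. All hyperparameters
# and the consistency term are assumptions made for illustration.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def shifted_cosine_logsnr(t, shift=0.0):
    # Cosine schedule expressed as log-SNR: logSNR(t) = -2*log(tan(pi*t/2)).
    # An additive shift moves the whole schedule toward more or less noise;
    # the default of 0.0 is a placeholder, not a value from the paper.
    return -2.0 * torch.log(torch.tan(math.pi * t / 2.0)) + shift

class Denoiser(nn.Module):
    # Tiny MLP standing in for the real denoising network.
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim)
        )
    def forward(self, z_t, t):
        return self.net(torch.cat([z_t, t.unsqueeze(-1)], dim=-1))

def joint_loss(tokens, embed, model, cm_weight=0.1):
    x0 = embed(tokens)                                   # embeddings are trained jointly, not frozen
    t = torch.rand(tokens.shape[0]).clamp(1e-3, 1 - 1e-3)
    logsnr = shifted_cosine_logsnr(t)
    alpha = torch.sigmoid(logsnr).sqrt().unsqueeze(-1)   # variance-preserving coefficients
    sigma = torch.sigmoid(-logsnr).sqrt().unsqueeze(-1)
    z_t = alpha * x0 + sigma * torch.randn_like(x0)      # forward diffusion in embedding space
    x0_hat = model(z_t, t)                               # predict the clean embedding

    diffusion = F.mse_loss(x0_hat, x0)                   # denoising term of the lower bound
    logits = x0_hat @ embed.weight.t()                   # decode prediction back to categories
    recon = F.cross_entropy(logits, tokens)              # ties embeddings to the data

    # Stand-in "consistency" term: predictions from an independent second
    # noise level of the same x0 should agree with the first prediction.
    t2 = torch.rand(tokens.shape[0]).clamp(1e-3, 1 - 1e-3)
    logsnr2 = shifted_cosine_logsnr(t2)
    z_t2 = (torch.sigmoid(logsnr2).sqrt().unsqueeze(-1) * x0
            + torch.sigmoid(-logsnr2).sqrt().unsqueeze(-1) * torch.randn_like(x0))
    cm = F.mse_loss(model(z_t2, t2), x0_hat.detach())
    return diffusion + recon + cm_weight * cm

# Usage: one optimizer updates BOTH the embedding and the denoiser.
vocab, dim = 1000, 32
embed, model = nn.Embedding(vocab, dim), Denoiser(dim)
opt = torch.optim.Adam(list(embed.parameters()) + list(model.parameters()), lr=1e-4)
tokens = torch.randint(0, vocab, (64,))
loss = joint_loss(tokens, embed, model)
opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch, the cross-entropy decoding term is what gives the embedding a reason not to collapse: if all token embeddings drifted to a single point, the logits could no longer distinguish categories, so the loss would rise.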
Keywords
» Artificial intelligence » Autoregressive » Diffusion » Diffusion model » Embedding » Text generation » Translation