Summary of "Controlling the Fidelity and Diversity of Deep Generative Models via Pseudo Density," by Shuangqi Li et al.
Controlling the Fidelity and Diversity of Deep Generative Models via Pseudo Density
by Shuangqi Li, Chen Liu, Tong Zhang, Hieu Le, Sabine Süsstrunk, Mathieu Salzmann
First submitted to arXiv on: 11 Jul 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary This research introduces a novel approach to biasing deep generative models toward generating data with enhanced fidelity or increased diversity. The method manipulates the training and generated data distributions using a pseudo density metric based on nearest-neighbor information from real samples. It offers three distinct techniques: per-sample perturbation for precise adjustments, importance sampling during model inference to enhance either fidelity or diversity, and fine-tuning with importance sampling to control the generated distribution. The fine-tuning method demonstrably improves the Fréchet Inception Distance (FID) of pre-trained generative models within a minimal number of iterations. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper is about a new way to make deep learning models that create fake data better at creating realistic or diverse data. It uses a special measure called pseudo density, which looks at how similar each piece of fake data is to real data. This approach gives three ways to adjust the model: change one piece of fake data at a time, use importance sampling to make the generated data more like what you want, and fine-tune the model to learn this new distribution. The results show that this method can improve the quality of pre-trained models with just a few training steps. |
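To make the pseudo-density idea concrete, here is a minimal sketch of scoring generated samples by their distance to the k-th nearest real sample. This is a generic k-nearest-neighbor density proxy, not the paper's exact formulation; the function name and the choice of k are illustrative assumptions.

```python
import numpy as np

def pseudo_density(samples, real_data, k=5):
    """Score each sample by how densely real data surrounds it.

    A higher score means the sample lies in a region crowded with real
    examples (suggesting high fidelity); a lower score means it sits in
    a sparse region (more atypical/diverse). This is a generic k-NN
    density proxy, not the exact metric from the paper.
    """
    # Pairwise Euclidean distances from each sample to every real point.
    dists = np.linalg.norm(samples[:, None, :] - real_data[None, :, :], axis=-1)
    # Distance to the k-th nearest real neighbor: a small radius means
    # many real samples nearby, i.e. high pseudo density.
    kth = np.sort(dists, axis=1)[:, k - 1]
    return 1.0 / (kth + 1e-12)
```

Scores like these could then serve as importance weights at inference time: keeping or upweighting high-density samples would bias generation toward fidelity, while favoring low-density samples would bias it toward diversity.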
Keywords
» Artificial intelligence » Deep learning » Fine tuning » Inference » Nearest neighbor