Summary of HF-Diff: High-Frequency Perceptual Loss and Distribution Matching for One-Step Diffusion-Based Image Super-Resolution, by Shoaib Meraj Sami et al.
HF-Diff: High-Frequency Perceptual Loss and Distribution Matching for One-Step Diffusion-Based Image Super-Resolution
by Shoaib Meraj Sami, Md Mahedi Hasan, Jeremy Dawson, Nasser Nasrabadi
First submitted to arXiv on: 20 Nov 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary The paper's original abstract serves as the high-difficulty summary. |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary This paper improves single-step diffusion-based super-resolution by preserving high-frequency detail. The authors introduce a high-frequency perceptual loss based on an invertible neural network (INN) pretrained on ImageNet, which produces distinct feature maps for different high-frequency components of an image. During training, a constraint encourages the super-resolved (SR) image and the ground-truth (GT) image to share these high-frequency features, improving SR image quality. The authors further minimize the Jensen-Shannon divergence between GT and SR images in the DINO-v2 embedding space to match their distributions. The resulting method, dubbed HF-Diff, achieves state-of-the-art CLIPIQA scores on several benchmark datasets, including RealSR, RealSet65, DIV2K-Val, and ImageNet, and the high-frequency perceptual loss outperforms LPIPS- and VGG-based losses on multiple datasets. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper makes super-resolution better by keeping important details in the picture. The authors use a special type of neural network to find the important parts of an image and keep them while making it higher resolution. This helps the final result look more like the original, high-quality image. The authors also use another technique to match the distribution of the super-resolved image with that of the real image, which improves the results even further. Their method, called HF-Diff, achieves state-of-the-art performance on several benchmark datasets. |
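The distribution-matching idea in the medium-difficulty summary can be illustrated with a small sketch. This is not the authors' implementation: the embeddings below are toy stand-ins for DINO-v2 features of the GT and SR images, and the softmax normalization used to turn embeddings into probability distributions is an assumption made here purely so that the Jensen-Shannon divergence is well defined.

```python
import numpy as np

def softmax(x):
    # Turn an embedding vector into a probability distribution
    # (an assumption for this sketch, not the paper's exact recipe).
    e = np.exp(x - x.max())
    return e / e.sum()

def kl(p, q, eps=1e-12):
    # Kullback-Leibler divergence KL(p || q), with eps for numerical safety.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js_divergence(p, q):
    # Jensen-Shannon divergence: symmetric, bounded by ln(2) in nats.
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical embeddings standing in for DINO-v2 features
# of the ground-truth (GT) and super-resolved (SR) images.
gt_emb = np.array([0.2, 1.5, -0.3, 0.8])
sr_emb = np.array([0.1, 1.4, -0.2, 0.9])

# The distribution-matching term: smaller means the SR image's
# embedding distribution is closer to the GT image's.
loss = js_divergence(softmax(gt_emb), softmax(sr_emb))
```

Because the JS divergence is symmetric and bounded, it makes a stable training objective: identical distributions give a loss of zero, and the loss can never exceed ln(2), unlike plain KL divergence, which is asymmetric and unbounded.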
Keywords
» Artificial intelligence » Embedding space » Neural network » Super resolution