Summary of Robustly Overfitting Latents for Flexible Neural Image Compression, by Yura Perugachi-Diaz et al.
Robustly overfitting latents for flexible neural image compression
by Yura Perugachi-Diaz, Arwin Gansekoele, Sandjai Bhulai
First submitted to arXiv on: 31 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Neural image compression has made significant progress, with state-of-the-art models based on variational autoencoders outperforming classical codecs. These neural compression models encode an image into a quantized latent representation that can be efficiently transmitted and decoded back into a reconstructed image. While successful in practice, these models are limited by imperfect optimization and by encoder/decoder capacity constraints. Our work builds upon recent advances that use stochastic Gumbel annealing (SGA) to refine the latents of pre-trained neural image compression models (a schematic code sketch of this refinement loop follows the table). We introduce SGA+, which combines three methods that extend SGA, and demonstrate improved overall compression performance in terms of the rate-distortion (R-D) trade-off, outperforming predecessors on both the Tecnick and CLIC datasets. Our method is applicable to pre-trained hyperpriors and to more flexible models. A detailed analysis reveals that our proposed methods are less sensitive to hyperparameter choices, and each method can be extended to three-class rounding. |
Low | GrooveSquid.com (original content) | Neural image compression has come a long way! Researchers have developed special kinds of artificial intelligence called neural networks that can compress images really well. These networks take an image, squeeze it down into a small package of numbers, send that package to the other side, and put it back together as a picture. The problem is that these networks don’t always do a perfect job, because they are limited by how well their training went and how much capacity they have. Some researchers used an idea called stochastic Gumbel annealing (SGA) to make those compressed packages better after training. We took that idea and made it even better, calling it SGA+. Our new method combines three tweaks that improve image compression: it squeezes images into fewer bits with less quality loss, it is less sensitive to its settings, and it works with different types of pre-trained networks. |
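
For readers who want to see the core mechanism in code, below is a minimal PyTorch sketch of the general latent-refinement idea behind SGA: the continuous latents of a pre-trained model are optimized directly against a rate-distortion loss, with hard rounding replaced by a differentiable Gumbel-softmax sample whose temperature is annealed toward zero. The `model.encode`, `model.decode`, and `model.rate` hooks, the simplified rounding probabilities, and the hyperparameter values are illustrative assumptions, not the paper's exact SGA+ formulation.

```python
import torch
import torch.nn.functional as F

def sga_round(y, tau):
    # Differentiable stochastic rounding: each latent value is rounded to its
    # floor or its ceiling, with the choice sampled via Gumbel-softmax.
    # As tau is annealed toward zero, the soft sample approaches hard rounding.
    y_floor = torch.floor(y)
    # Unnormalized log-probabilities that favour the nearer integer; this is a
    # simplification of the atanh-based parameterization in the SGA literature.
    logits = torch.stack([-(y - y_floor), -(y_floor + 1.0 - y)], dim=-1)
    w = F.gumbel_softmax(logits, tau=tau, dim=-1)
    return w[..., 0] * y_floor + w[..., 1] * (y_floor + 1.0)

def refine_latents(model, x, steps=2000, lr=5e-3, lam=0.01):
    # model.encode, model.decode, and model.rate are hypothetical hooks into a
    # pre-trained compression model; only the latents y are optimized here,
    # while all model weights stay frozen.
    y = model.encode(x).detach().requires_grad_(True)
    opt = torch.optim.Adam([y], lr=lr)
    for t in range(steps):
        tau = max(0.5 * (1.0 - t / steps), 1e-3)  # linear temperature annealing
        y_hat = sga_round(y, tau)
        x_hat = model.decode(y_hat)
        # Rate-distortion objective: estimated bits plus weighted distortion.
        loss = model.rate(y_hat) + lam * F.mse_loss(x_hat, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # After refinement, hard-round the latents for actual transmission.
    return torch.round(y.detach())
```

The paper's SGA+ extends this template with three alternative rounding-probability functions, and its three-class rounding considers a third candidate integer beyond the floor/ceiling pair used in this sketch.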
Keywords
- Artificial intelligence
- Encoder decoder
- Hyperparameter
- Optimization