Summary of Optimal Eye Surgeon: Finding Image Priors Through Sparse Generators at Initialization, by Avrajit Ghosh et al.
Optimal Eye Surgeon: Finding Image Priors through Sparse Generators at Initialization
by Avrajit Ghosh, Xitong Zhang, Kenneth K. Sun, Qing Qu, Saiprasad Ravishankar, Rongrong Wang
First submitted to arXiv on: 7 Jun 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The Optimal Eye Surgeon (OES) framework is introduced for pruning and training deep image generator networks. Untrained deep convolutional networks tend to overfit noise in image restoration tasks; OES addresses this by adaptively pruning networks at random initialization to a level of underparameterization. The resulting subnetworks capture low-frequency image components without any training, making them effective image priors. These pruned subnetworks, termed Sparse-DIP, resist overfitting to noise when trained to fit noisy images. OES is shown to outperform other leading pruning methods, including those based on the Lottery Ticket Hypothesis, in image recovery tasks, and its masks and sparse-subnetwork characteristics transfer across images for image generation. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary Optimal Eye Surgeon (OES) is a new way to make deep learning models better at generating images. Normally, these models get too good at fitting the noise in an image instead of understanding what the image really looks like. OES fixes this by making the model smaller and simpler, so it can't overfit as much. This makes it work better for tasks like image restoration. The authors show that OES is even better than other ways to make models simpler, and they share their code online. |
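The core idea in the medium summary, pruning a network at random initialization down to a sparse subnetwork before any training, can be illustrated with a toy sketch. This is not the authors' OES objective (which learns the mask adaptively); it is a simplified, hypothetical mask-selection step that keeps only the highest-scoring fraction of randomly initialized weights, where `scores` stands in for whatever importance measure a pruning method assigns:

```python
import numpy as np

def prune_at_init(weights, scores, sparsity):
    """Zero out all but the top-(1 - sparsity) fraction of weights,
    ranked by an importance score. Illustrates pruning at
    initialization: weights are masked, never retrained here."""
    flat = scores.ravel()
    k = int(round((1.0 - sparsity) * flat.size))  # number of weights kept
    if k == 0:
        return np.zeros_like(weights)
    threshold = np.partition(flat, -k)[-k]        # k-th largest score
    mask = (scores >= threshold).astype(weights.dtype)
    return weights * mask

# Toy example: a 4x4 block of randomly initialized weights,
# pruned to 75% sparsity (only 4 of 16 weights survive).
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))   # random initialization
s = rng.random((4, 4))            # stand-in importance scores
sparse_w = prune_at_init(w, s, sparsity=0.75)
```

In OES the mask itself is optimized so that the surviving subnetwork acts as an image prior; the fixed `scores` array here is purely a placeholder for that learned selection.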
Keywords
» Artificial intelligence » Deep learning » Image generation » Overfitting » Pruning