Summary of Chasing Better Deep Image Priors Between Over- and Under-parameterization, by Qiming Wu et al.
Chasing Better Deep Image Priors between Over- and Under-parameterization
by Qiming Wu, Xiaohan Chen, Yifan Jiang, Zhangyang Wang
First submitted to arXiv on: 31 Oct 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | Deep neural networks (DNNs) are known to act as over-parameterized deep image priors (DIPs) that regularize various image inverse problems. Researchers have also proposed compact, under-parameterized image priors (e.g., deep decoders) that perform surprisingly well in image restoration despite their reduced capacity. This dichotomy prompts the question: can we identify “intermediate” parameterized image priors that balance performance, efficiency, and transferability? Drawing on the lottery ticket hypothesis (LTH), we propose a novel “lottery image prior” (LIP) that exploits DNNs’ inherent sparsity. Our results validate LIP’s superiority: we locate sparse subnetworks in over-parameterized DIPs at substantial sparsity levels that outperform deep decoders of comparable model size and transfer well across images and task types. We also extend LIP to compressive sensing image reconstruction, confirming its validity.
Low | GrooveSquid.com (original content) | This research is about finding the best way to restore damaged or unclear images using computer algorithms. Current methods are either very good at restoring images but demand a lot of computing power, or fast and efficient but less accurate. The researchers seek a middle ground that balances restoration quality against computational cost. Their idea, called the “lottery image prior”, uses special properties of deep learning models to make better predictions about what the restored image should look like. Their results show that the method restores images well and can even transfer its knowledge to other types of images.
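The lottery-ticket procedure behind LIP centers on magnitude-based pruning: train the over-parameterized network, keep only the largest-magnitude weights, and rewind the survivors to their initial values. Below is a minimal NumPy sketch of that one pruning-and-rewind step, not the paper's actual code; the weight shapes, the 80% sparsity level, and the stand-in for "training" are illustrative assumptions.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Return a 0/1 mask keeping the largest-magnitude (1 - sparsity) fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)                      # number of weights to prune
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w_init = rng.normal(size=(8, 8))                       # hypothetical initial weights
w_trained = w_init + 0.1 * rng.normal(size=(8, 8))     # stand-in for weights after training

mask = magnitude_mask(w_trained, sparsity=0.8)         # prune 80% of the weights
w_ticket = w_init * mask                               # rewind survivors to initialization
```

In the full lottery-ticket procedure this prune-and-rewind step is applied iteratively, retraining the masked network each round until the target sparsity is reached; the sketch shows a single round on one weight matrix.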
Keywords
» Artificial intelligence » Deep learning » Transferability