Summary of Spatial-frequency Dual-Domain Feature Fusion Network for Low-Light Remote Sensing Image Enhancement, by Zishu Yao et al.
Spatial-frequency Dual-Domain Feature Fusion Network for Low-Light Remote Sensing Image Enhancement
by Zishu Yao, Guodong Fan, Jinfu Fan, Min Gan, C.L. Philip Chen
First submitted to arXiv on: 26 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Image and Video Processing (eess.IV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract |
Medium | GrooveSquid.com (original content) | The proposed Dual-Domain Feature Fusion Network (DFFN) addresses the challenge of enhancing low-light remote sensing images. Traditional convolutional neural networks struggle to establish long-range correlations in such images because they rely on local correlations, while Transformer-based methods incur high computational cost on high-resolution inputs. DFFN divides the enhancement task into two sub-tasks: learning amplitude information to restore brightness and learning phase information to refine details. An information fusion affine block facilitates information exchange between the two phases (a minimal frequency-domain sketch follows the table). To address the lack of datasets for low-light remote sensing image enhancement, two new datasets are constructed. Evaluations show that the proposed method outperforms existing state-of-the-art methods. |
Low | GrooveSquid.com (original content) | The paper proposes a new way to improve low-light images taken from space. These images contain lots of detail and long-distance connections between features. Older computer vision techniques don't work well on them because they focus on small areas instead of the big picture. The researchers created a network that splits the task into two parts: making the image brighter and refining the details. They also designed special blocks that let the two parts share information. To test the method, they built two new datasets of low-light remote sensing images. Their results show the approach beats what's currently available. |
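
The frequency-domain split described in the medium summary can be illustrated with a short sketch. This is not the authors' DFFN implementation; it is a minimal PyTorch example, assuming a standard 2D FFT, of how an image can be separated into amplitude (brightness-related) and phase (detail-related) components and recombined. Function names such as `split_amplitude_phase` are hypothetical.

```python
import torch

def split_amplitude_phase(image: torch.Tensor):
    """Decompose an image tensor (B, C, H, W) into Fourier amplitude and phase."""
    spectrum = torch.fft.rfft2(image, norm="ortho")
    amplitude = torch.abs(spectrum)   # carries global brightness/illumination
    phase = torch.angle(spectrum)     # carries structural/detail information
    return amplitude, phase

def merge_amplitude_phase(amplitude: torch.Tensor, phase: torch.Tensor, size):
    """Recombine (possibly enhanced) amplitude and phase back into the spatial domain."""
    spectrum = torch.polar(amplitude, phase)
    return torch.fft.irfft2(spectrum, s=size, norm="ortho")

if __name__ == "__main__":
    x = torch.rand(1, 3, 256, 256)    # stand-in for a low-light remote sensing image
    amp, pha = split_amplitude_phase(x)
    # In DFFN, separate learned branches would enhance the amplitude for brightness
    # and refine the phase for detail; here both are simply passed through.
    y = merge_amplitude_phase(amp, pha, size=x.shape[-2:])
    print(torch.allclose(x, y, atol=1e-4))   # round-trip sanity check
```

In a full model, each branch would operate on one of these components and an affine fusion block would exchange features between them, as the summary describes.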
Keywords
» Artificial intelligence » Transformer