End-To-End Underwater Video Enhancement: Dataset and Model
by Dazhao Du, Enhan Li, Lingyu Si, Fanjiang Xu, Jianwei Niu
First submitted to arXiv on: 18 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper addresses underwater video enhancement (UVE), which aims to improve the visibility and quality of underwater videos. Existing methods primarily enhance individual frames, neglecting the relationships between consecutive frames. To close this gap, the authors build a novel dataset, SUVE, comprising 840 underwater-style videos paired with ground-truth reference videos. They then train UVENet, an underwater video enhancement model that exploits inter-frame connections for better performance (a toy sketch of this multi-frame idea follows the table). Experiments on both synthetic and real-world underwater videos demonstrate the effectiveness of their approach. |
| Low | GrooveSquid.com (original content) | This paper helps make underwater videos clearer, so scientists can learn more about our oceans. Right now, most methods improve each frame on its own, without considering how the frames are connected. To fix this, the researchers created a large dataset of 840 underwater-style videos, each paired with a clean reference video. They used this data to train a new model that enhances whole videos rather than single frames. Tests on both synthetic and real underwater videos show that the approach works well. |
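
To make the frame-versus-video distinction concrete, here is a minimal, hypothetical PyTorch sketch contrasting a per-frame enhancer with a multi-frame one that lets each output frame see its neighbors. This is not UVENet: the paper's actual architecture, alignment modules, and training details are not reproduced here, and all class and parameter names below are illustrative.

```python
# Hypothetical sketch only -- NOT the UVENet architecture from the paper.
import torch
import torch.nn as nn

class PerFrameEnhancer(nn.Module):
    """Baseline: enhances each frame independently, ignoring temporal context."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, clip):               # clip: (B, T, 3, H, W)
        b, t, c, h, w = clip.shape
        frames = clip.view(b * t, c, h, w)  # flatten time into the batch
        return self.net(frames).view(b, t, c, h, w)

class MultiFrameEnhancer(nn.Module):
    """Toy multi-frame variant: each output frame also sees its neighbors,
    stacked along the channel axis, so inter-frame cues can be exploited."""
    def __init__(self, window=3, ch=32):
        super().__init__()
        self.window = window
        self.net = nn.Sequential(
            nn.Conv2d(3 * window, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, clip):               # clip: (B, T, 3, H, W)
        b, t, c, h, w = clip.shape
        pad = self.window // 2
        out = []
        for i in range(t):
            # Clamp neighbor indices at the clip boundaries.
            idx = [min(max(i + d, 0), t - 1) for d in range(-pad, pad + 1)]
            stacked = clip[:, idx].reshape(b, c * self.window, h, w)
            out.append(self.net(stacked))
        return torch.stack(out, dim=1)      # (B, T, 3, H, W)

clip = torch.rand(1, 8, 3, 64, 64)          # dummy 8-frame underwater clip
print(MultiFrameEnhancer()(clip).shape)     # torch.Size([1, 8, 3, 64, 64])
```

Under these assumptions, a model like this would be trained on SUVE's paired clips with a pixel-wise loss (e.g., L1) against the ground-truth reference frames; the point of the sketch is only that the multi-frame variant can draw on temporal context that a per-frame model never sees.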