Summary of LoLI-Street: Benchmarking Low-Light Image Enhancement and Beyond, by Md Tanvir Islam, Inzamamul Alam, et al.
LoLI-Street: Benchmarking Low-Light Image Enhancement and Beyond
by Md Tanvir Islam, Inzamamul Alam, Simon S. Woo, Saeed Anwar, IK Hyun Lee, Khan Muhammad
First submitted to arXiv on: 13 Oct 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computational Engineering, Finance, and Science (cs.CE)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This research paper tackles low-light image enhancement (LLIE) in computer vision, a crucial task for applications like object detection, tracking, segmentation, and scene understanding. The authors highlight the limitations of current LLIE methods, which struggle with images captured in underexposed conditions, particularly in street scenes. To address this gap, they introduce a new dataset called LoLI-Street, featuring 33k paired low-light and well-exposed images from street scenes in developed cities and covering 19k object classes for object detection. They also propose a transformer-based LLIE model named “TriFuse” and use LoLI-Street to train and evaluate it alongside existing LLIE models. Benchmarking on their dataset and on mainstream datasets demonstrates significant improvements in image quality and object detection, benefiting applications like autonomous driving and surveillance systems. |
| Low | GrooveSquid.com (original content) | Low-light images are a big problem for computer vision tasks like object detection, tracking, and scene understanding. Right now, there aren’t many good ways to make these images better. This is especially true for street scenes, which are important for things like self-driving cars. The authors of this paper want to change that by creating a new dataset with 33k paired low-light and well-exposed images from city streets. They also propose a new way to improve low-light images using transformers and diffusion-based models. They test their methods on their own dataset and compare them to other popular methods, showing that their approach can make big improvements in image quality and object detection. |
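The paired structure of datasets like LoLI-Street is what makes full-reference image-quality benchmarking possible: each enhanced output can be scored directly against its well-exposed counterpart. As a rough illustration only (not the paper’s actual evaluation code), the sketch below computes PSNR and SSIM over hypothetical pairs of low-light and ground-truth images using scikit-image; the file paths and the `enhance` function are placeholders.

```python
# Minimal sketch of full-reference LLIE benchmarking on paired images.
# Assumes scikit-image, NumPy, and Pillow are installed; the paths and the
# enhance() function are hypothetical placeholders, not from the paper.
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def load_rgb(path):
    """Load an image as a float32 RGB array in [0, 1]."""
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

def enhance(low_light):
    """Placeholder enhancer (identity); a real LLIE model would go here."""
    return low_light

pairs = [("low/0001.png", "gt/0001.png")]  # hypothetical paired file list

psnr_scores, ssim_scores = [], []
for low_path, gt_path in pairs:
    low, gt = load_rgb(low_path), load_rgb(gt_path)
    pred = np.clip(enhance(low), 0.0, 1.0)
    psnr_scores.append(peak_signal_noise_ratio(gt, pred, data_range=1.0))
    ssim_scores.append(structural_similarity(gt, pred, data_range=1.0, channel_axis=-1))

print(f"PSNR: {np.mean(psnr_scores):.2f} dB  SSIM: {np.mean(ssim_scores):.4f}")
```

In practice, higher PSNR/SSIM against the well-exposed reference indicates better enhancement, and downstream object-detection accuracy on the enhanced images provides a complementary benchmark.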
Keywords
» Artificial intelligence » Diffusion » Object detection » Scene understanding » Tracking » Transformer