
Summary of Cross-View Geolocalization and Disaster Mapping with Street-View and VHR Satellite Imagery: A Case Study of Hurricane Ian, by Hao Li et al.


Cross-View Geolocalization and Disaster Mapping with Street-View and VHR Satellite Imagery: A Case Study of Hurricane IAN

by Hao Li, Fabian Deuser, Wenping Yin, Xuanshu Luo, Paul Walther, Gengchen Mai, Wei Huang, Martin Werner

First submitted to arXiv on: 13 Aug 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a novel disaster mapping framework, CVDisaster, that simultaneously estimates geolocation and damage perception from cross-view Street-View Imagery (SVI) and Very High-Resolution (VHR) satellite imagery. The framework consists of two cross-view models: CVDisaster-Geoloc for geolocalization and CVDisaster-Est for damage perception estimation. The geolocalization model uses a Siamese ConvNeXt image encoder trained with a contrastive learning objective, while the damage perception model is built on a Coupled Global Context Vision Transformer (CGCViT) formulated as a classification task. The framework is evaluated on a novel cross-view dataset (CVIAN) with extensive experiments on Hurricane IAN. Results show that CVDisaster achieves highly competitive performance (over 80% for geolocalization and 75% for damage perception estimation) with limited fine-tuning effort.
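To make the cross-view matching idea concrete, below is a minimal, hypothetical sketch of a Siamese ConvNeXt encoder trained with a symmetric InfoNCE-style contrastive objective that pulls matching street-view and satellite embeddings together. It is not the authors' CVDisaster-Geoloc code; the class names, embedding size, and temperature are illustrative assumptions.

```python
# Illustrative sketch of cross-view contrastive matching (not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import convnext_tiny


class SiameseConvNeXt(nn.Module):
    """Shared ConvNeXt backbone that embeds street-view and satellite crops."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        backbone = convnext_tiny(weights=None)               # pretrained weights optional
        backbone.classifier[2] = nn.Linear(768, embed_dim)   # swap the classification head
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so dot products act as cosine similarities.
        return F.normalize(self.backbone(x), dim=-1)


def infonce_loss(street_emb, sat_emb, temperature: float = 0.07):
    """Symmetric InfoNCE: matching street/satellite pairs lie on the diagonal."""
    logits = street_emb @ sat_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2


if __name__ == "__main__":
    model = SiameseConvNeXt()
    street = torch.randn(4, 3, 224, 224)     # toy street-view batch
    satellite = torch.randn(4, 3, 224, 224)  # corresponding satellite crops
    loss = infonce_loss(model(street), model(satellite))
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```

At inference time, geolocalization would then amount to embedding a query street-view image and retrieving the most similar satellite patch embedding; the retrieval setup and any hard-negative mining are left out of this sketch.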
Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps us respond better to natural disasters by creating a new way to map what happened during a disaster. The method combines two types of images: street-level photos taken from a car driving through the affected area and high-resolution satellite pictures. This makes it possible to figure out exactly where each street-level photo was taken and how badly things were damaged there. The researchers tested their idea using data from Hurricane IAN and found that it worked really well, even with limited training. They also shared all the data and code so others can use this method too.

Keywords

  • Artificial intelligence
  • Classification
  • Encoder
  • Fine tuning
  • Vision transformer