Summary of CRASAR-U-DROIDs: A Large Scale Benchmark Dataset for Building Alignment and Damage Assessment in Georectified sUAS Imagery, by Thomas Manzini et al.


CRASAR-U-DROIDs: A Large Scale Benchmark Dataset for Building Alignment and Damage Assessment in Georectified sUAS Imagery

by Thomas Manzini, Priyankari Perali, Raisa Karnik, Robin Murphy

First submitted to arxiv on: 24 Jul 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Robotics (cs.RO)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com, original content)
The Center for Robot Assisted Search And Rescue (CRASAR) has created a new dataset, CRASAR-U-DROIDs, aimed at improving machine learning and computer vision models for disaster response. The dataset consists of high-resolution georectified imagery from small uncrewed aerial systems (sUAS), collected across ten federally declared disasters. The imagery covers 67.98 square kilometers and contains over 21,000 building polygons with damage labels, and the dataset also includes spatial alignment annotations to enable more accurate machine learning models. CRASAR-U-DROIDs is the largest labeled dataset of sUAS orthomosaic imagery. (A short code sketch after these summaries illustrates how such imagery and polygon labels might be loaded.)

Low Difficulty Summary (GrooveSquid.com, original content)
The Center for Robot Assisted Search And Rescue has made a new dataset to help computers find damage after big disasters like hurricanes and fires. They used cameras on small flying machines called drones to take pictures of the affected areas. In those pictures they marked more than 21,000 buildings and noted how badly each one was damaged. This will help computers learn how to quickly identify which buildings need repair or rebuilding.

Keywords

» Artificial intelligence  » Alignment  » Machine learning