Abstract

Unmanned Aerial Vehicles (UAVs) equipped with bioradars are a life-saving technology that can enable identification of survivors under collapsed buildings in the aftermath of natural disasters such as earthquakes or gas explosions. However, these UAVs have to be able to autonomously navigate in disaster-struck environments and land on debris piles in order to accurately locate the survivors. This problem is extremely challenging: pre-existing maps cannot be leveraged for navigation due to structural changes that may have occurred, and existing landing site detection algorithms are not suitable for identifying safe landing regions on debris piles. In this work, we present a computationally efficient system for autonomous UAV navigation and landing that does not require any prior knowledge about the environment. We propose a novel landing site detection algorithm that computes costmaps based on several hazard factors, including terrain flatness, steepness, depth accuracy, and energy consumption. We also introduce a first-of-its-kind synthetic dataset of over 1.2 million images of collapsed buildings with ground truth depth, surface normals, semantics, and camera pose information. We demonstrate the efficacy of our system through experiments in a city-scale hyperrealistic simulation environment and in real-world scenarios with collapsed buildings.
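As a rough illustration of the costmap fusion described above, the following minimal Python sketch combines normalized per-hazard costmaps into a single landing costmap via a weighted sum. The weights, function names, and fusion scheme are illustrative assumptions for exposition, not the implementation from the paper.

import numpy as np

# Illustrative weights only; the actual weighting is not specified here.
WEIGHTS = {"flatness": 0.4, "steepness": 0.3, "depth_accuracy": 0.2, "energy": 0.1}

def fuse_costmaps(costmaps):
    """Weighted sum of per-hazard costmaps, each normalized to [0, 1].
    Lower fused cost indicates a safer landing region."""
    fused = np.zeros_like(next(iter(costmaps.values())), dtype=np.float64)
    for name, weight in WEIGHTS.items():
        fused += weight * costmaps[name]
    return fused

def best_landing_cell(fused):
    """Return the (row, col) index of the lowest-cost cell."""
    return np.unravel_index(np.argmin(fused), fused.shape)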

Overview of the System

[Figure: overview of the system]

Dataset

Overview

To facilitate training of neural networks and evaluation of alternative approaches for landing, we provide a synthetic dataset of collapsed buildings. The dataset consists of 1,281,125 RGB images with corresponding ground truth for depth, surface normals, semantics, and camera pose. To obtain diverse viewing angles, we varied the camera tilt from 0° to 55° in steps of 5°, the camera pan from 0° to 360° in steps of 45°, and the height of the UAV from 10 m to 30 m in steps of 5 m during data collection (see the sketch below). Annotations are provided for the following classes: sky, houses, road, rocks, flora, terrain, trees, cars, and others.
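For reference, the viewpoint grid above can be enumerated with a short Python snippet. The variable names are ours, and the pan sweep excludes the 360° sample, which coincides with 0°.

import itertools

TILTS_DEG = range(0, 60, 5)    # camera tilt: 0° to 55° in steps of 5°
PANS_DEG = range(0, 360, 45)   # camera pan: full circle in steps of 45° (360° = 0°)
HEIGHTS_M = range(10, 35, 5)   # UAV height: 10 m to 30 m in steps of 5 m

viewpoints = list(itertools.product(TILTS_DEG, PANS_DEG, HEIGHTS_M))
print(len(viewpoints))  # 480 camera configurations per capture location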

BibTeX

Please cite our work if you use the AutoLand Dataset or report results based on it.

@article{mittal2019autoland,
  author = {Mayank Mittal and Rohit Mohan and Wolfram Burgard and Abhinav Valada},
  title = {Vision-Based Autonomous UAV Navigation and Landing for Urban Search and Rescue},
  journal = {arXiv preprint arXiv:1906.01304},
  month = {June},
  year = {2019}
}

License Agreement

The data is provided for non-commercial use only. By downloading the data, you accept the license agreement, which can be downloaded here.

RGB + Camera Poses

[Sample RGB images with corresponding camera poses]

Depth

[Sample ground truth depth maps]

Surface Normals

[Sample ground truth surface normal maps]

Semantics

[Sample ground truth semantic segmentation maps]
