Open Access Article

Monocular Vision SLAM-Based UAV Autonomous Landing in Emergencies and Unknown Environments

by Tao Yang 1,2,*, Peiqi Li 1, Huiming Zhang 3, Jing Li 4,* and Zhi Li 1
1 SAIIP, School of Computer Science, Northwestern Polytechnical University, Xi’an 710072, China
2 Research & Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen 518057, China
3 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
4 School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
* Authors to whom correspondence should be addressed.
Electronics 2018, 7(5), 73;
Received: 17 April 2018 / Revised: 9 May 2018 / Accepted: 11 May 2018 / Published: 15 May 2018
(This article belongs to the Special Issue Autonomous Control of Unmanned Aerial Vehicles)
With the popularization and wide application of drones in military and civilian fields, drone safety must be considered. At present, the failure and crash rates of drones remain much higher than those of manned aircraft, so improved research on the safe landing and recovery of drones is imperative. However, most drone navigation methods rely on global positioning system (GPS) signals; when GPS signals are unavailable, these drones cannot land or be recovered properly. With the help of optical equipment and image recognition technology, however, the three-dimensional position and attitude of a drone can be obtained and its surrounding environment perceived. This paper proposes and implements a monocular vision-based autonomous drone landing system for emergencies and unstructured environments. In this system, a novel map representation is proposed that combines three-dimensional features with a mid-pass filter to remove noise and construct a grid map of different heights. In addition, region segmentation is presented to detect the edges of grid areas at different heights, improving the speed and accuracy of the subsequent landing-area selection. The proposed algorithm is evaluated on two tasks: scene reconstruction integrity and landing-site safety. First, the drone scans the scene and acquires key frames in a monocular visual simultaneous localization and mapping (SLAM) system in order to estimate the drone's pose and build a three-dimensional point cloud map. The filtered point cloud map is then converted into a grid map, which is further divided into regions to select an appropriate landing zone, enabling autonomous route planning. Finally, when the drone hovers above the landing site, it enters descent mode near the landing area. Experiments on multiple sets of real scenes show that the environmental perception and landing-area selection are highly robust and run in real time.
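To make the pipeline in the abstract concrete, the following is a minimal illustrative sketch (not the authors' code) of two of the described steps: binning a SLAM point cloud into a height grid map with a median-based filter to suppress noisy points, and selecting a landing cell whose neighbourhood is most planar. The grid resolution, flatness tolerance, and median filtering are assumptions made here for illustration; the paper's mid-pass filter and region segmentation are more elaborate.

```python
import numpy as np

def build_grid_map(points, cell=0.5, grid=20):
    """Bin 3D points (N x 3, metres) into a height grid, taking the
    median z per cell to suppress outlier map points."""
    heights = np.full((grid, grid), np.nan)
    ix = np.clip((points[:, 0] / cell).astype(int), 0, grid - 1)
    iy = np.clip((points[:, 1] / cell).astype(int), 0, grid - 1)
    for i in range(grid):
        for j in range(grid):
            z = points[(ix == i) & (iy == j), 2]
            if z.size:
                heights[i, j] = np.median(z)  # robust per-cell height
    return heights

def select_landing_cell(heights, flat_tol=0.1):
    """Return the cell whose 3x3 neighbourhood has the smallest height
    spread (most planar patch), if that spread is within flat_tol."""
    best, best_spread = None, np.inf
    for i in range(1, heights.shape[0] - 1):
        for j in range(1, heights.shape[1] - 1):
            patch = heights[i - 1:i + 2, j - 1:j + 2]
            if np.isnan(patch).any():
                continue
            spread = patch.max() - patch.min()
            if spread < best_spread:
                best, best_spread = (i, j), spread
    return best if best_spread <= flat_tol else None

# Synthetic scene: flat noisy ground with one 2 m high structure.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, (5000, 3))
pts[:, 2] = 0.02 * rng.standard_normal(5000)
pts[pts[:, 0] > 7, 2] += 2.0
grid_map = build_grid_map(pts)
landing_cell = select_landing_cell(grid_map)
```

In this sketch the selected cell lands in the flat region and away from the raised structure, mirroring the paper's idea of segmenting the grid map by height before choosing a safe landing zone.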
Keywords: UAV automatic landing; monocular visual SLAM; autonomous landing area selection
MDPI and ACS Style

Yang, T.; Li, P.; Zhang, H.; Li, J.; Li, Z. Monocular Vision SLAM-Based UAV Autonomous Landing in Emergencies and Unknown Environments. Electronics 2018, 7, 73.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
