UAV-Based Sensing Techniques, Applications and Prospective

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (10 February 2021) | Viewed by 28102

Special Issue Editors


Guest Editor
Computer Vision and Aerial Robotics (CVAR) Group, Centre for Automation and Robotics (UPM-CSIC), Universidad Politécnica de Madrid, Calle José Gutiérrez Abascal 2, 28006 Madrid, Spain
Interests: UAVs; object tracking; visual control and guidance; visual SLAM; stereo and omnidirectional vision; aerial robotics; computer vision; machine learning

Guest Editor
Computer Vision and Aerial Robotics (CVAR) Group, Centre for Automation and Robotics (UPM-CSIC), Universidad Politécnica de Madrid, Calle José Gutiérrez Abascal 2, 28006 Madrid, Spain
Interests: UAV; stereo vision; autonomous navigation; computer vision; image processing; pattern recognition; machine learning

Special Issue Information

The use of unmanned aerial vehicles (UAVs) has been increasing in the civil arena, where their high maneuverability can play an essential role in sensing and interpreting the environment by acquiring data from multiple key positions. The main challenge for these applications is whether the UAVs can accurately estimate their position and navigate relative to their environment and the objects they have to interact with (e.g., for inspection and physical manipulation).

These challenges require the successful exploitation of sensor fusion based on onboard sensors, in which vision and 3D LiDAR play a key role, not only in positioning but also in scene recognition, see-and-avoid, and control and navigation itself. Among the techniques now propelling the improvement of UAV autonomy are visual-inertial odometry (VIO), visual semantic SLAM, deep learning for object recognition and localization, and direct reinforcement learning for planning and control.

This Special Issue aims to bring together top researchers in these techniques, with the common objective of advancing the use of the UAV as a highly versatile aerial robot for sensing the environment in a vast number of industrial applications.

Prof. Dr. Pascual Campoy
Dr. Adrian Carrio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research


15 pages, 4610 KiB  
Article
An Improved Method for Stable Feature Points Selection in Structure-from-Motion Considering Image Semantic and Structural Characteristics
by Fei Wang, Zhendong Liu, Hongchun Zhu, Pengda Wu and Chengming Li
Sensors 2021, 21(7), 2416; https://doi.org/10.3390/s21072416 - 1 Apr 2021
Cited by 3 | Viewed by 1912
Abstract
Feature matching plays a crucial role in the process of 3D reconstruction based on the structure from motion (SfM) technique. For a large collection of oblique images, feature matching is one of the most time-consuming steps, and the matching result directly affects the accuracy of subsequent tasks. Therefore, how to extract reasonable feature points robustly and efficiently to improve the matching speed and quality has received extensive attention from scholars worldwide. Most studies perform quantitative feature point selection based on image Difference-of-Gaussian (DoG) pyramids in practice. However, the stability and spatial distribution of feature points are not sufficiently considered, so the selected feature points may not adequately reflect the scene structures and cannot guarantee the matching rate and the aerial triangulation accuracy. To address these issues, an improved method for stable feature point selection in SfM considering image semantic and structural characteristics is proposed. First, the visible-band difference vegetation index is used to identify the vegetation areas in oblique images, and the line features in the image are extracted by an optimized line segment detector algorithm. Second, a feature point two-tuple classification model is established, in which the vegetation area recognition result is used as the semantic constraint, the line feature extraction result is used as the structural constraint, and the feature points are divided into three types. Finally, a progressive selection algorithm for feature points is proposed, in which feature points in the DoG pyramid are selected by class and level until the required number of feature points is reached. Oblique images of a 40 km² area in Dongying City, China, were used for validation.
The experimental results show that, compared to the state-of-the-art method, the method proposed in this paper not only effectively reduces the number of feature points but also better reflects the scene structure. At the same time, the average reprojection error of the aerial triangulation decreases by 20%, the feature point matching rate increases by 3%, and the selected feature points are more stable and reasonable. Full article
(This article belongs to the Special Issue UAV-Based Sensing Techniques, Applications and Prospective)
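The progressive, class-by-class selection described in the abstract can be sketched as follows; the class numbering, ordering rule, and data layout here are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of progressive feature point selection: each
# point carries a class id (1 = structure-related, 2 = neutral,
# 3 = vegetation) and a DoG pyramid level; points are taken class by
# class, level by level, until the target count is reached.

def progressive_select(points, target):
    """points: list of (class_id, level, point); lower class_id and
    lower pyramid level are preferred."""
    ordered = sorted(points, key=lambda p: (p[0], p[1]))
    return ordered[:target]

pts = [(3, 0, "veg"), (1, 2, "line-hi"), (1, 0, "line-lo"), (2, 1, "plain")]
selected = progressive_select(pts, 2)
# keeps the two class-1 (structure-related) points
```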

21 pages, 14046 KiB  
Article
Optical Navigation Sensor for Runway Relative Positioning of Aircraft during Final Approach
by Antal Hiba, Attila Gáti and Augustin Manecy
Sensors 2021, 21(6), 2203; https://doi.org/10.3390/s21062203 - 21 Mar 2021
Cited by 10 | Viewed by 3771
Abstract
Precise navigation is often performed by fusing data from different sensors. Among these, optical sensors use image features to obtain the position and attitude of the camera. Runway-relative navigation during final approach is a special case where robust and continuous detection of the runway is required. This paper presents a robust threshold marker detection method for monocular cameras and introduces an on-board real-time implementation with flight test results. Results with narrow and wide field-of-view optics are compared. The image processing approach is also evaluated on image data captured by a different on-board system. The pure optical approach of this paper increases sensor redundancy because it does not require input from an inertial sensor, as most robust runway detectors do. Full article
(This article belongs to the Special Issue UAV-Based Sensing Techniques, Applications and Prospective)

18 pages, 4827 KiB  
Article
Assessment of DSM Based on Radiometric Transformation of UAV Data
by Muhammad Hamid Chaudhry, Anuar Ahmad, Qudsia Gulzar, Muhammad Shahid Farid, Himan Shahabi and Nadhir Al-Ansari
Sensors 2021, 21(5), 1649; https://doi.org/10.3390/s21051649 - 27 Feb 2021
Cited by 16 | Viewed by 3352
Abstract
The Unmanned Aerial Vehicle (UAV) is one of the latest technologies for high-spatial-resolution 3D modeling of the Earth. The objectives of this study are to assess low-cost UAV data using image radiometric transformation techniques and investigate their effects on the global and local accuracy of the Digital Surface Model (DSM). This research uses UAV Light Detection and Ranging (LIDAR) data from 80 m and UAV image data from 300 and 500 m flying height. RAW UAV images acquired from 500 m flying height are radiometrically transformed in Matrix Laboratory (MATLAB). UAV images from 300 m flying height are processed for the generation of a 3D point cloud and DSM in Pix4D Mapper. UAV LIDAR data are used for the acquisition of Ground Control Points (GCPs) and accuracy assessment of the UAV image data products. The accuracy of the enhanced DSM and of the DSM generated from the 300 m flight height was analyzed in terms of point cloud number, density, and distribution. The Root Mean Square Error (RMSE) of Z improves from ±2.15 m to ±0.11 m. For local accuracy assessment of the DSM, four different types of land cover are statistically compared with UAV LIDAR, showing that the enhancement technique is compatible with UAV LIDAR accuracy. Full article
(This article belongs to the Special Issue UAV-Based Sensing Techniques, Applications and Prospective)
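For context, the vertical RMSE figure quoted above is computed in the standard way; the elevations below are made-up placeholder values, not data from the study:

```python
import math

def rmse_z(pred_z, ref_z):
    """Root Mean Square Error of elevation (Z) between DSM values and
    reference check points."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred_z, ref_z)) / len(pred_z))

dsm_z = [10.2, 11.9, 9.4, 12.1]   # DSM elevations at check points (made up)
gcp_z = [10.0, 12.0, 9.5, 12.0]   # LIDAR-derived reference elevations (made up)
err = rmse_z(dsm_z, gcp_z)        # about 0.13 m for these values
```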

19 pages, 14690 KiB  
Article
Scan Pattern Characterization of Velodyne VLP-16 Lidar Sensor for UAS Laser Scanning
by H. Andrew Lassiter, Travis Whitley, Benjamin Wilkinson and Amr Abd-Elrahman
Sensors 2020, 20(24), 7351; https://doi.org/10.3390/s20247351 - 21 Dec 2020
Cited by 6 | Viewed by 4023
Abstract
Many lightweight lidar sensors employed for UAS lidar mapping feature a fan-style laser emitter-detector configuration which results in a non-uniform pattern of laser pulse returns. As the role of UAS lidar mapping grows in both research and industry, it is imperative to understand the behavior of the fan-style lidar sensor to ensure proper mission planning. This study introduces sensor modeling software for scanning simulation and analytical equations developed in-house to characterize the non-uniform return density (i.e., scan pattern) of the fan-style sensor, with special focus given to a popular fan-style sensor, the Velodyne VLP-16 laser scanner. The results indicate that, despite the high pulse frequency of modern scanners, areas of poor laser pulse coverage are often present along the scanning path under typical mission parameters. These areas of poor coverage appear in a variety of shapes and sizes which do not necessarily correspond to the forward speed of the scanner or the height of the scanner above the ground, highlighting the importance of scan simulation for proper mission planning when using a fan-style sensor. Full article
(This article belongs to the Special Issue UAV-Based Sensing Techniques, Applications and Prospective)
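The dependence of return coverage on platform speed and spin rate can be illustrated with back-of-envelope geometry; this is not the authors' simulation software, and the field-of-view figure is only the VLP-16's nominal ±15° vertical span:

```python
import math

def along_track_spacing(speed_mps, rotation_hz):
    """Gap between successive sweeps of a spinning fan-style lidar:
    the platform advances this far per revolution."""
    return speed_mps / rotation_hz

def cross_track_footprint(agl_m, half_fov_deg=15.0):
    """Ground width covered by the fan directly below the sensor,
    assuming the nominal +/-15 degree channel span of the VLP-16."""
    return 2 * agl_m * math.tan(math.radians(half_fov_deg))

gap = along_track_spacing(5.0, 10.0)   # 5 m/s at 10 Hz -> 0.5 m between sweeps
width = cross_track_footprint(50.0)    # roughly 26.8 m swath at 50 m AGL
```

Even this crude model shows why coverage gaps need not scale simply with speed or height, motivating full scan simulation for mission planning.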

21 pages, 14950 KiB  
Article
DeepPilot: A CNN for Autonomous Drone Racing
by Leticia Oyuki Rojas-Perez and Jose Martinez-Carranza
Sensors 2020, 20(16), 4524; https://doi.org/10.3390/s20164524 - 13 Aug 2020
Cited by 21 | Viewed by 5345
Abstract
Autonomous Drone Racing (ADR) was first proposed at IROS 2016. It called for the development of an autonomous drone capable of beating a human in a drone race. After almost five years, several teams have proposed different solutions with a common pipeline: gate detection; drone localization; and stable flight control. Recently, Deep Learning (DL) has been used for gate detection and for localization of the drone relative to the gate. However, recent competitions such as the Game of Drones, held at NeurIPS 2019, called for solutions where DL played a more significant role. Motivated by the latter, in this work, we propose a CNN approach called DeepPilot that takes camera images as input and predicts flight commands as output. These flight commands represent: the angular position of the drone's body frame in the roll and pitch angles, thus producing translational motion along those axes; rotational speed in the yaw angle; and vertical speed, referred to as altitude h. Values for these four flight commands, predicted by DeepPilot, are passed to the drone's inner controller, thus enabling the drone to navigate autonomously through the gates in the racetrack. For this, we assume that the next gate becomes visible immediately after the current gate has been crossed. We present evaluations in simulated racetrack environments where DeepPilot is run several times successfully to prove repeatability. On average, DeepPilot runs at 25 frames per second (fps). We also present a thorough evaluation of what we call a temporal approach, which consists of creating a mosaic image from consecutive camera frames that is passed as input to DeepPilot. We argue that this helps to learn the drone's motion trend relative to the gate, thus acting as a local memory that leverages the prediction of the flight commands. Our results indicate that this purely DL-based artificial pilot is feasible for the ADR challenge. Full article
(This article belongs to the Special Issue UAV-Based Sensing Techniques, Applications and Prospective)
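The temporal mosaic idea, tiling consecutive frames into a single input image, can be sketched with NumPy; the 2×3 tile layout and frame sizes here are illustrative guesses, not the paper's configuration:

```python
import numpy as np

def make_mosaic(frames, rows=2, cols=3):
    """Tile consecutive camera frames into one mosaic image, in the
    spirit of the paper's temporal input (layout is a guess)."""
    assert len(frames) == rows * cols
    strips = [np.hstack(frames[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(strips)

# six made-up 120x160 grayscale frames, each filled with its index
frames = [np.full((120, 160), i, dtype=np.uint8) for i in range(6)]
mosaic = make_mosaic(frames)   # single 240x480 image fed to the CNN
```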

21 pages, 7021 KiB  
Article
Deep Reinforcement Learning Approach with Multiple Experience Pools for UAV’s Autonomous Motion Planning in Complex Unknown Environments
by Zijian Hu, Kaifang Wan, Xiaoguang Gao, Yiwei Zhai and Qianglong Wang
Sensors 2020, 20(7), 1890; https://doi.org/10.3390/s20071890 - 29 Mar 2020
Cited by 31 | Viewed by 4476
Abstract
Autonomous motion planning (AMP) of unmanned aerial vehicles (UAVs) is aimed at enabling a UAV to safely fly to the target without human intervention. Recently, several emerging deep reinforcement learning (DRL) methods have been employed to address the AMP problem in some simplified environments, and these methods have yielded good results. This paper proposes a multiple experience pools (MEPs) framework leveraging human expert experiences for DRL to speed up the learning process. Based on the deep deterministic policy gradient (DDPG) algorithm, a MEP–DDPG algorithm was designed using model predictive control and simulated annealing to generate expert experiences. On applying this algorithm to a complex unknown simulation environment constructed based on the parameters of the real UAV, the training experiment results showed that the novel DRL algorithm resulted in a performance improvement exceeding 20% as compared with the state-of-the-art DDPG. The results of the experimental testing indicate that UAVs trained using MEP–DDPG can stably complete a variety of tasks in complex, unknown environments. Full article
(This article belongs to the Special Issue UAV-Based Sensing Techniques, Applications and Prospective)
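The core idea of drawing training minibatches from multiple experience pools, mixing expert demonstrations with the agent's own transitions, can be sketched as follows; the mixing ratio and transition format are illustrative assumptions, not the MEP–DDPG design:

```python
import random

def sample_batch(expert_pool, agent_pool, batch_size, expert_ratio=0.25):
    """Draw a minibatch mixing expert demonstrations with the agent's
    own transitions; the 0.25 ratio is an illustrative choice, not
    the paper's schedule."""
    n_expert = min(int(batch_size * expert_ratio), len(expert_pool))
    n_agent = batch_size - n_expert
    return random.sample(expert_pool, n_expert) + random.sample(agent_pool, n_agent)

# (state, action, reward, next_state) placeholders
expert = [("s", "a_expert", 1.0, "s'")] * 50
agent = [("s", "a_agent", 0.1, "s'")] * 200
batch = sample_batch(expert, agent, 32)   # 8 expert + 24 agent transitions
```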

Review


24 pages, 4248 KiB  
Review
A Review on Auditory Perception for Unmanned Aerial Vehicles
by Jose Martinez-Carranza and Caleb Rascon
Sensors 2020, 20(24), 7276; https://doi.org/10.3390/s20247276 - 18 Dec 2020
Cited by 11 | Viewed by 3966
Abstract
Although a significant amount of work has been carried out on visual perception in the context of unmanned aerial vehicles (UAVs), not as much has been done regarding auditory perception. The latter can complement the observation of the environment that surrounds a UAV by providing additional information that can be used to detect, classify, and localize audio sources of interest. Motivated by the usefulness of auditory perception for UAVs, we present a literature review that discusses the audio techniques and microphone configurations reported in the literature. A categorization of techniques is proposed based on the role a UAV plays in auditory perception (is it the one being perceived, or is it the perceiver?), as well as the objectives most commonly pursued in the current literature (detection, classification, and localization). This literature review aims to provide a concise landscape of the most relevant works on auditory perception in the context of UAVs to date, and provides insights into future avenues of research as a guide to those who are beginning to work in this field. Full article
(This article belongs to the Special Issue UAV-Based Sensing Techniques, Applications and Prospective)
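As a concrete taste of the localization objective discussed in the review, a classic building block is estimating the time difference of arrival (TDOA) between two microphones by cross-correlation; this generic sketch is not taken from the review itself:

```python
import numpy as np

def estimate_delay(sig_a, sig_b):
    """Estimate the sample delay of sig_b relative to sig_a by
    cross-correlation -- the basic ingredient of TDOA-based
    sound source localization."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

rng = np.random.default_rng(0)
s = rng.standard_normal(1000)                      # reference microphone signal
delayed = np.concatenate([np.zeros(5), s])[:1000]  # second mic lags by 5 samples
lag = estimate_delay(s, delayed)                   # -> 5
```

With the microphone spacing and the speed of sound, such a lag converts to an angle of arrival for the source.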
