Resilient UAV Autonomy and Remote Sensing

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: closed (31 May 2024) | Viewed by 27,371

Special Issue Editors

Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan 430079, China
Interests: image/LiDAR point cloud processing; sensor fusion; SLAM; unmanned systems; remote sensing methods for the power industry

Guest Editor
1. Hubei Key Laboratory of Intelligent Geo-Information Processing, School of Computer Sciences, China University of Geosciences, Wuhan 430074, China
2. Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong 999077, China
Interests: image retrieval; image matching; structure from motion; multi-view stereo; deep learning

Guest Editor
School of Safety Science and Emergency Management, Wuhan University of Technology, Luoshi Road 122, Wuhan 430079, China
Interests: laser scanning; point cloud segmentation; object recognition; semantic segmentation; image classification; instance segmentation

Guest Editor
College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, No. 2 Chongwen Road, Nan’an District, Chongqing 400065, China
Interests: multi-view stereo; LiDAR data processing; deep learning; computer vision

Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: UAV; mobile mapping; laser scanning; point clouds; inertial navigation

Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430205, China
Interests: HD maps; lane detection; 3D object detection; multi-sensor fusion

Special Issue Information

Dear Colleagues,

With the development of aerial imaging, oblique photogrammetry, laser scanning techniques, and unmanned aerial vehicles (UAVs), the accurate and efficient perception, reconstruction, and recognition of large-scale 3D scenes have become popular topics in photogrammetry and computer vision. However, several problems still urgently need to be solved, such as low processing efficiency, difficulty in rendering object details, and the poor robustness of dense 3D reconstruction in poorly textured and occluded areas. Motivated by this rapid development, we are excited to invite you to submit a research paper to this Special Issue of Drones, titled "Resilient UAV Autonomy and Remote Sensing". UAV data, consisting primarily of images and LiDAR point clouds, have been widely used in aerial surveillance, 3D reconstruction and visualization, autonomous driving, and smart cities. This Special Issue aims to promote further applications of UAV data, specifically in instance segmentation, object detection/tracking, SLAM, SfM, MVS, 3D mesh surface reconstruction, etc. Original submissions aligned with the above-mentioned research areas are highly welcome.

Papers are welcome from all fields directly related to these topics, including but not limited to the following:

  • Trajectory planning for UAV data acquisition;
  • Fusion of UAV sensor data (images/point clouds/GNSS/IMU);
  • Registration of UAV images/point clouds;
  • Real-time AI in the motion planning and control, data gathering, and data analysis of UAVs;
  • Image/LiDAR feature extraction, matching, and bundle adjustment between UAVs and UGVs;
  • Semantic/instance segmentation, classification, and object detection and tracking with UAV data using deep learning methods;
  • 3D reconstruction from UAV images/point clouds;
  • SfM and SLAM using UAV image/LiDAR data;
  • Cooperative perception and mapping utilizing multiple UAVs and UGVs;
  • Mobile edge computing (MEC) in UAVs;
  • UAV image/point cloud processing for inspection, surveillance, GNSS-denied environments (underground/indoor spaces), etc.;
  • UAV image/point cloud processing in the power and oil industries, hydraulics, agriculture, ecology, emergency response, and smart cities.

Dr. Chi Chen
Dr. San Jiang
Dr. Xijiang Chen
Dr. Mao Tian
Dr. Jianping Li
Dr. Jian Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • drones
  • UAV
  • UAV swarm
  • computer vision
  • photogrammetry
  • remote sensing
  • LiDAR
  • aerial imagery
  • image and point cloud fusion
  • detection and tracking
  • segmentation
  • SLAM
  • path planning
  • 3D reconstruction
  • 3D visualization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research

21 pages, 8600 KiB  
Article
Contour Extraction of UAV Point Cloud Based on Neighborhood Geometric Features of Multi-Level Growth Plane
by Xijiang Chen, Qing An, Bufan Zhao, Wuyong Tao, Tieding Lu, Han Zhang, Xianquan Han and Emirhan Ozdemir
Drones 2024, 8(6), 239; https://doi.org/10.3390/drones8060239 - 2 Jun 2024
Viewed by 957
Abstract
The extraction of contour points from UAV building point clouds is the basis for expressing a lightweight three-dimensional building outline. Previous unmanned aerial vehicle (UAV) building point cloud contour extraction methods have mainly focused on the roof contour and did not extract the wall contour. In view of this, an algorithm based on the neighborhood geometric features of fused region-growing planes is proposed to extract the boundary points of UAV building point clouds. Firstly, the region-growing planes are fused to obtain a more accurate segmentation plane. Then, the neighboring points are projected onto the neighborhood plane, and a vector between the object point and each neighborhood point is constructed. Finally, the azimuth of each vector is calculated, and the boundary points of each segmented plane are extracted according to the difference between adjacent azimuths. Experimental results show that the best boundary points are extracted when the number of neighboring points is 24 and the adjacent-azimuth difference threshold is 120°. The proposed method is superior to other methods in the contour extraction of UAV building point clouds. Moreover, it can extract not only the roof contour points but also the wall contour points, including window contour points.
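As an illustration of the azimuth-gap test described in this abstract, the following sketch is a simplified, hypothetical re-implementation rather than the authors' code: it fits a local plane by PCA, projects each point's k = 24 nearest neighbors into that plane, and flags the point as a boundary point when the largest gap between sorted neighbor azimuths exceeds the reported 120° threshold.

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_points(points, k=24, gap_deg=120.0):
    """Flag boundary points of a planar segment via the azimuth-gap test.

    A point is a boundary point when, after projecting its k nearest
    neighbors onto the locally fitted plane, the largest angular gap
    between consecutive neighbor azimuths exceeds `gap_deg`.
    """
    tree = cKDTree(points)
    flags = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k + 1)       # +1: the query returns p itself
        nbrs = points[idx[1:]] - p            # vectors point -> neighbors
        # Local plane by PCA: normal = eigenvector of smallest eigenvalue.
        w, v = np.linalg.eigh(nbrs.T @ nbrs)
        normal, u = v[:, 0], v[:, 1]
        w_axis = np.cross(normal, u)          # second in-plane axis
        # Project neighbor vectors into the plane and take their azimuths.
        az = np.arctan2(nbrs @ w_axis, nbrs @ u)
        az = np.sort(np.degrees(az) % 360.0)
        gaps = np.diff(np.concatenate([az, [az[0] + 360.0]]))  # wrap-around gap
        flags[i] = gaps.max() > gap_deg
    return flags
```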

20 pages, 4350 KiB  
Article
Easy Rocap: A Low-Cost and Easy-to-Use Motion Capture System for Drones
by Haoyu Wang, Chi Chen, Yong He, Shangzhe Sun, Liuchun Li, Yuhang Xu and Bisheng Yang
Drones 2024, 8(4), 137; https://doi.org/10.3390/drones8040137 - 2 Apr 2024
Cited by 1 | Viewed by 2305
Abstract
Fast and accurate pose estimation is essential for the local motion control of robots such as drones. At present, camera-based motion capture (Mocap) systems are mostly used by robots. However, this kind of Mocap system is easily affected by light noise and camera occlusion, and common commercial Mocap systems are expensive. To address these challenges, we propose Easy Rocap, a low-cost, open-source robot motion capture system that can quickly and robustly capture the accurate position and orientation of a robot. First, a real-time object detector is trained, and an object-filtering algorithm based on class and confidence is designed to eliminate false detections. Second, multiple-object tracking (MOT) is applied to maintain the continuity of the trajectories, and the epipolar constraint is applied to multi-view correspondences. Finally, the calibrated multi-view cameras are used to calculate the 3D coordinates of the markers and effectively estimate the 3D pose of the target robot. Our system takes in real-time multi-camera data streams, making it easy to integrate into the robot system. In the simulation scenario experiment, the average position estimation error of the method is less than 0.008 m, and the average orientation error is less than 0.65 degrees. In the real scenario experiment, we compared the localization results of our method with an advanced LiDAR-inertial Simultaneous Localization and Mapping (SLAM) algorithm. According to the experimental results, the SLAM algorithm drifts during turns, while our method overcomes the drift and accumulated errors of SLAM, making the trajectory more stable and accurate. In addition, the pose estimation speed of our system reaches 30 Hz.
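The marker triangulation step described above reduces, in its simplest form, to linear (DLT) triangulation from calibrated views. The sketch below is a generic textbook routine under that assumption, not the Easy Rocap source code:

```python
import numpy as np

def triangulate_dlt(projections, pixels):
    """Triangulate one marker from >= 2 calibrated views (linear DLT).

    projections: list of 3x4 camera matrices P = K [R | t]
    pixels:      list of (u, v) marker detections, one per view
    Returns the 3D point minimizing the algebraic reprojection error.
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])   # x-constraint of x ~ P X
        rows.append(v * P[2] - P[1])   # y-constraint
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)        # null vector = homogeneous solution
    X = vt[-1]
    return X[:3] / X[3]                # dehomogenize
```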

23 pages, 15746 KiB  
Article
IMUC: Edge–End–Cloud Integrated Multi-Unmanned System Payload Management and Computing Platform
by Jie Tang, Ruofei Zhong, Ruizhuo Zhang and Yan Zhang
Drones 2024, 8(1), 19; https://doi.org/10.3390/drones8010019 - 12 Jan 2024
Viewed by 2041
Abstract
Multi-unmanned systems are primarily composed of unmanned vehicles, drones, and multi-legged robots, among other unmanned robotic devices. By integrating and coordinating the operation of these devices, it is possible to achieve collaborative multitasking and autonomous operations in various environments. In the field of surveying and mapping, the traditional single-device data collection mode is no longer sufficient for data acquisition tasks in complex spatial scenarios (low-altitude, surface, indoor, underground, etc.). Faced with the data collection requirements of complex spaces, employing different types of robots for collaborative operations is an important means of improving operational efficiency. Additionally, the limited computational and storage capabilities of unmanned systems themselves pose significant challenges to multi-unmanned systems. Therefore, this paper designs an edge–end–cloud integrated multi-unmanned system payload management and computing platform (IMUC) that combines edge, end, and cloud computing. By utilizing the computational power and storage resources of the cloud, the platform enables cloud-based online task management and data acquisition visualization for multi-unmanned systems. The platform addresses the high complexity of task execution in various scenarios by considering factors such as space, time, and task completion. It performs data collection tasks at the end terminal, optimizes processing at the edge, and finally transmits the data to the cloud for visualization. The platform seamlessly integrates edge computing, terminal devices, and cloud resources, achieving efficient resource utilization and distributed execution of computing tasks. Test results demonstrate that the platform can complete the entire payload management and computation process for multi-unmanned systems in complex scenarios, with low response times and correct routing results, greatly enhancing operational efficiency in the field. These results validate the practicality and reliability of the platform, providing a new approach for the efficient operation of multi-unmanned systems in surveying and mapping, and for combining cloud computing with smart-city construction.

19 pages, 4759 KiB  
Article
Drone Multiline Light Detection and Ranging Data Filtering in Coastal Salt Marshes Using Extreme Gradient Boosting Model
by Xixiu Wu, Kai Tan, Shuai Liu, Feng Wang, Pengjie Tao, Yanjun Wang and Xiaolong Cheng
Drones 2024, 8(1), 13; https://doi.org/10.3390/drones8010013 - 4 Jan 2024
Cited by 1 | Viewed by 1988
Abstract
Quantitatively characterizing coastal salt-marsh terrain and its spatiotemporal changes is crucial for formulating comprehensive management plans and clarifying the dynamic evolution of carbon. Multiline light detection and ranging (LiDAR) exhibits great capability for measuring salt-marsh terrain, owing to its strong penetration performance and new scanning mode. The prerequisite for obtaining high-precision terrain is accurately filtering the salt-marsh vegetation points from the ground/mudflat points in the multiline LiDAR data. In this study, a new salt-marsh vegetation point-cloud filtering method is proposed for drone multiline LiDAR based on the extreme gradient boosting (XGBoost) model. Following the basic principle that vegetation and the ground exhibit different geometric and radiometric characteristics, the XGBoost model relates point categories to a series of selected geometric and radiometric metrics (distance, scan angle, elevation, normal vectors, and intensity), where the absent instantaneous scan geometry (distance and scan angle) of each point is accurately estimated from the scanning principles and the point-cloud spatial distribution characteristics of drone multiline LiDAR. Based on the constructed model, the combination of the selected features can accurately and intelligently predict the category of each point. The proposed method is tested in a coastal salt marsh in Shanghai, China, using a drone-borne 16-line LiDAR system. The results demonstrate that the average AUC and G-mean values of the proposed method are 0.9111 and 0.9063, respectively. The proposed method exhibits enhanced applicability and versatility and outperforms traditional and other machine-learning methods in areas with varying topography and vegetation growth status, showing promising potential for point-cloud filtering and classification, particularly in extreme environments where the terrain, land cover, and point-cloud distributions are highly complicated.
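A minimal sketch of the classification core follows, assuming the per-point feature vector named in the abstract (distance, scan angle, elevation, normal vector, and intensity) has already been computed; the hyperparameters are illustrative, not those of the paper:

```python
import numpy as np
import xgboost as xgb  # pip install xgboost

def train_filter(features: np.ndarray, labels: np.ndarray) -> xgb.XGBClassifier:
    """Train a binary vegetation/ground classifier on per-point metrics.

    features: one row per LiDAR return -- [range, scan_angle, elevation,
              nx, ny, nz, intensity] as in the paper's feature set.
    labels:   1 for vegetation, 0 for ground/mudflat.
    """
    model = xgb.XGBClassifier(
        n_estimators=400,          # assumed hyperparameters, not from the paper
        max_depth=6,
        learning_rate=0.1,
        objective="binary:logistic",
        eval_metric="auc",
    )
    model.fit(features, labels)
    return model

# Filtering then reduces to a per-point class lookup, e.g.:
#   ground_mask = train_filter(F_train, y_train).predict(F_new) == 0
```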

19 pages, 6120 KiB  
Article
MGFNet: A Progressive Multi-Granularity Learning Strategy-Based Insulator Defect Recognition Algorithm for UAV Images
by Zhouxian Lu, Yong Li and Feng Shuang
Drones 2023, 7(5), 333; https://doi.org/10.3390/drones7050333 - 22 May 2023
Cited by 3 | Viewed by 1586
Abstract
Due to the low efficiency and safety risks of manual insulator inspection, research on intelligent insulator inspection has gained wide attention. However, most existing defect recognition methods extract abstract features of the entire image directly with convolutional neural networks (CNNs), which lack multi-granularity feature information, rendering the network insensitive to small defects. To address this problem, we propose a multi-granularity fusion network (MGFNet) to diagnose the health status of insulators. MGFNet includes a traversal clipping module (TC), a progressive multi-granularity learning strategy (PMGL), and a region relationship attention module (RRA). TC effectively resolves distortion in insulator images and can provide a more detailed diagnosis of local insulator areas. PMGL acquires multi-granularity features of insulators and combines them to produce more resilient features. RRA utilizes non-local interactions to better learn the difference between normal features and defect features. To eliminate background interference in UAV images, MGFNet can be flexibly combined with object detection algorithms to form a two-stage object detection algorithm, which can accurately identify insulator defects in UAV images. The experimental results show that MGFNet achieves 91.27% accuracy, outperforming other advanced methods. Furthermore, successful deployment on a drone platform has enabled the real-time diagnosis of insulators, further confirming the practical application value of MGFNet.

23 pages, 5396 KiB  
Article
Extraction and Mapping of Cropland Parcels in Typical Regions of Southern China Using Unmanned Aerial Vehicle Multispectral Images and Deep Learning
by Shikun Wu, Yingyue Su, Xiaojun Lu, Han Xu, Shanggui Kang, Boyu Zhang, Yueming Hu and Luo Liu
Drones 2023, 7(5), 285; https://doi.org/10.3390/drones7050285 - 24 Apr 2023
Cited by 5 | Viewed by 2017
Abstract
The accurate extraction of cropland distribution is an important issue for precision agriculture and food security worldwide. The complex cropland characteristics of southern China pose great challenges to this extraction. In this study, aiming at the accurate extraction and mapping of cropland parcels across multiple crop growth stages in southern China, we explored a method based on unmanned aerial vehicle (UAV) data and deep learning algorithms. Our method considered cropland size, cultivation patterns, spectral characteristics, and the terrain of the study area. Four groups of experiments were performed, covering two aspects: the deep learning model architecture and the form of the UAV data. The optimal result, obtained in October 2021, demonstrated an overall accuracy (OA) of 95.9%, a Kappa coefficient of 89.2%, and an Intersection over Union (IoU) of 95.7%. The optimal method also showed remarkable results in mapping cropland distribution across multiple crop growth stages, with an average OA of 96.9%, an average Kappa coefficient of 89.5%, and an average IoU of 96.7% in August, November, and December of the same year. This study provides a valuable reference for the extraction of cropland parcels across multiple crop growth stages in southern China and in regions with similar characteristics.
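The three accuracy figures quoted above (OA, Kappa, and IoU) are standard confusion-matrix metrics; for reference, a minimal sketch of how they are computed for a binary cropland mask:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """OA, Cohen's kappa, and cropland IoU for binary masks (1 = cropland)."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    n = tp + tn + fp + fn
    oa = (tp + tn) / n                       # overall accuracy
    # Expected agreement by chance, for Cohen's kappa.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (oa - pe) / (1 - pe)
    iou = tp / (tp + fp + fn)                # Intersection over Union
    return oa, kappa, iou
```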

19 pages, 3065 KiB  
Article
Visual-Inertial Odometry Using High Flying Altitude Drone Datasets
by Anand George, Niko Koivumäki, Teemu Hakala, Juha Suomalainen and Eija Honkavaara
Drones 2023, 7(1), 36; https://doi.org/10.3390/drones7010036 - 4 Jan 2023
Cited by 11 | Viewed by 8858
Abstract
Positioning of unoccupied aerial systems (UAS, drones) is predominantly based on Global Navigation Satellite Systems (GNSS). Due to potential signal disruptions, redundant positioning systems are needed for reliable operation. The objective of this study was to implement and assess a redundant positioning system for high-flying-altitude drone operation based on visual-inertial odometry (VIO). A new sensor suite with stereo cameras and an inertial measurement unit (IMU) was developed, and a state-of-the-art VIO algorithm, VINS-Fusion, was used for localisation. Empirical testing of the system was carried out at flying altitudes of 40–100 m, which cover the common flight altitude range of outdoor drone operations. The performance of various implementations was studied, including stereo visual odometry (stereo-VO), monocular visual-inertial odometry (mono-VIO) and stereo visual-inertial odometry (stereo-VIO). The stereo-VIO provided the best results; a flight altitude of 40–60 m was optimal for the stereo baseline of 30 cm. The best positioning accuracy was 2.186 m for an 800 m-long trajectory. The performance of the stereo-VO degraded with increasing flight altitude due to the degrading base-to-height ratio. The mono-VIO provided acceptable results, although it did not reach the performance level of the stereo-VIO. This work presented new hardware and research results on localisation algorithms for high-flying-altitude drones. These are of great importance since autonomous drone use and beyond-visual-line-of-sight flying are increasing and will require redundant positioning solutions that compensate for potential disruptions in GNSS positioning. The data collected in this study are published for analysis and further studies.
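The degrading base-to-height ratio noted above can be made concrete with the standard stereo depth-error model sigma_Z = Z^2 * sigma_d / (f * B): with a fixed 30 cm baseline, depth noise grows with the square of the altitude. The numbers below are purely illustrative (the focal length and disparity noise are assumed, not taken from the paper):

```python
# Illustrative stereo depth-error model, not the paper's analysis.
f_px = 1400.0       # assumed focal length in pixels
baseline = 0.30     # stereo baseline from the paper, metres
sigma_d = 0.5       # assumed disparity noise, pixels

for z in (40.0, 60.0, 100.0):   # flight altitudes tested in the study
    sigma_z = z**2 * sigma_d / (f_px * baseline)
    print(f"altitude {z:5.1f} m -> depth sigma ~ {sigma_z:5.2f} m")
```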

23 pages, 13738 KiB  
Article
Graph-Based Image Segmentation for Road Extraction from Post-Disaster Aerial Footage
by Nicholas Paul Sebasco and Hakki Erhan Sevil
Drones 2022, 6(11), 315; https://doi.org/10.3390/drones6110315 - 26 Oct 2022
Cited by 4 | Viewed by 2176
Abstract
This research effort proposes a novel method for identifying and extracting roads from aerial images taken after a disaster using graph-based image segmentation. The dataset used consists of images taken by an Unmanned Aerial Vehicle (UAV) at the University of West Florida (UWF) after Hurricane Sally. Ground-truth masks were created for these images, which divide the image pixels into three categories: road, non-road, and uncertain. A pre-processing step using Catmull–Rom cubic interpolation to resize the images was implemented. Moreover, the Gaussian filter used in efficient graph-based image segmentation is replaced with a median filter, and the color space is converted from RGB to HSV. The efficient graph-based image segmentation is further modified by (i) changing the Moore pixel neighborhood to the Von Neumann pixel neighborhood, (ii) introducing a new adaptive isoperimetric quotient threshold function, (iii) changing the distance function used to create the graph edges, and (iv) changing the sorting algorithm so that the algorithm runs more efficiently. Finally, a simple function to automatically compute the k (scale) parameter is added. A new post-processing heuristic is proposed for road extraction, and the Intersection over Union (IoU) evaluation metric is used to quantify road extraction performance. The proposed method maintains high performance on all of the images in the dataset and achieves an IoU score significantly higher than that of a similar road extraction technique based on K-means clustering.
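For context, the Von Neumann (4-connected) pixel graph that replaces the Moore neighborhood can be built as follows; this sketch uses a plain Euclidean HSV edge weight (hue wrap-around ignored for brevity) and omits the paper's adaptive threshold, median filter, and custom distance function:

```python
import numpy as np

def von_neumann_edges(img_hsv: np.ndarray):
    """Build graph edges over the 4-connected (Von Neumann) neighborhood.

    img_hsv: H x W x 3 float array. Returns (u, v, w): node index pairs and
    Euclidean HSV edge weights, suitable for feeding a Felzenszwalb-style
    graph-based segmentation.
    """
    h, w, _ = img_hsv.shape
    ids = np.arange(h * w).reshape(h, w)
    parts = []
    for du, dv in ((0, 1), (1, 0)):            # right and down neighbors only
        a = ids[: h - du, : w - dv].ravel()
        b = ids[du:, dv:].ravel()
        diff = img_hsv[: h - du, : w - dv] - img_hsv[du:, dv:]
        wgt = np.linalg.norm(diff.reshape(-1, 3), axis=1)
        parts.append((a, b, wgt))
    u = np.concatenate([p[0] for p in parts])
    v = np.concatenate([p[1] for p in parts])
    wgt = np.concatenate([p[2] for p in parts])
    return u, v, wgt
```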

Review

23 pages, 3723 KiB  
Review
Target Localization for Autonomous Landing Site Detection: A Review and Preliminary Result with Static Image Photogrammetry
by Jayasurya Arasur Subramanian, Vijanth Sagayan Asirvadam, Saiful Azrin B. M. Zulkifli, Narinderjit Singh Sawaran Singh, N. Shanthi and Ravi Kumar Lagisetty
Drones 2023, 7(8), 509; https://doi.org/10.3390/drones7080509 - 2 Aug 2023
Cited by 2 | Viewed by 2946
Abstract
The advancement of autonomous technology in Unmanned Aerial Vehicles (UAVs) has piloted a new era in aviation. While UAVs were initially utilized only for military, rescue, and disaster-response purposes, they are now being utilized for domestic and civilian purposes as well. In order to deal with these expanded applications and to increase autonomy, the ability of UAVs to perform autonomous landing will be a crucial component. Autonomous landing capability is greatly dependent on computer vision, which offers several advantages such as low cost, self-sufficiency, strong anti-interference capability, and accurate localization when combined with an Inertial Navigation System (INS). Another significant benefit of this technology is its compatibility with LiDAR and Digital Elevation Models (DEMs), and the ability to seamlessly integrate these components. The landing area for UAVs can vary from static to dynamic or complex, depending on the environment. By comprehending these characteristics and the behavior of UAVs, this paper serves as a valuable reference for autonomous landing guided by computer vision and provides promising preliminary results with static image photogrammetry.
