Advanced Unmanned System Control and Data Processing

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: closed (16 April 2024) | Viewed by 20308

Special Issue Editors


Guest Editor
Department of Computer Science and Software Engineering, Monmouth University, W Long Branch, NJ, USA
Interests: machine learning; real-time systems; UAV control

Guest Editor
DiCES Department, University of Calabria, Rende (CS), Italy
Interests: data mining; urban data analysis and smart cities; energy-aware cloud computing

Guest Editor
Department of Computer Science and Technology, Tongji University, Shanghai, China
Interests: multi-agent systems; cyber-physical systems; machine learning

Guest Editor
School of Science, Nanjing University of Science and Technology, Nanjing, China
Interests: unmanned vehicles; control theory and application; machine learning

Special Issue Information

Dear Colleagues,

UAV technology is constantly evolving, with innovations in circuit boards, chipsets, peripherals, and software bringing more advanced drones to the market every few months. Drones are used for aerial photography, express shipping and delivery, search and rescue, geographic mapping, surveillance, and more. Research on the design, data processing, and applications of drones and general unmanned vehicle systems is therefore of great importance.

This Special Issue aims to report the most recent research results in UAV control and data processing. Papers on UAV tracking and control, UAV image-based object detection, UAV applications, and general machine learning algorithms are all welcome. This Special Issue is within the scope of Drones.

Prof. Dr. Jiacun Wang
Assoc. Prof. Dr. Eugenio Cesario
Prof. Dr. Guanjun Liu
Prof. Dr. Jun Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAV-based object detection
  • UAV image processing
  • Unmanned vehicle tracking and control
  • UAV applications
  • Deep learning, reinforcement learning, and transfer learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

16 pages, 5618 KiB  
Article
The Motion Estimation of Unmanned Aerial Vehicle Axial Velocity Using Blurred Images
by Yedong Mao, Quanxi Zhan, Linchuan Yang, Chunhui Zhang, Ge Xu and Runjie Shen
Drones 2024, 8(7), 306; https://doi.org/10.3390/drones8070306 - 8 Jul 2024
Viewed by 1118
Abstract
This study proposes a novel method for estimating the axial velocity of unmanned aerial vehicles (UAVs) using motion blur images captured in environments where GPS signals are unavailable and lighting conditions are poor, such as underground tunnels and corridors. By correlating the length of motion blur observed in images with the UAV’s axial speed, the method addresses the limitations of traditional techniques in these challenging scenarios. We enhanced the accuracy by synthesizing motion blur images from neighboring frames, which is particularly effective at low speeds where single-frame blur is minimal. Six flight experiments conducted in the corridor of a hydropower station demonstrated the effectiveness of our approach, achieving a mean velocity error of 0.065 m/s compared to ultra-wideband (UWB) measurements and a root-mean-squared error within 0.3 m/s. The results highlight the stability and precision of the proposed velocity estimation algorithm in confined and low-light environments. Full article
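The core idea of relating blur-streak length to speed can be illustrated with a minimal pinhole-camera sketch. This is not the paper's actual algorithm (which synthesizes blur from neighboring frames and handles axial, radial blur); the function name, parameters, and lateral-motion simplification here are assumptions for illustration only:

```python
def speed_from_blur(blur_px: float, depth_m: float,
                    focal_px: float, exposure_s: float) -> float:
    """Estimate speed (m/s) from a motion-blur streak.

    Pinhole approximation: a streak of blur_px pixels at scene depth
    depth_m corresponds to a displacement of blur_px * depth_m / focal_px
    metres swept during the exposure time.
    """
    displacement_m = blur_px * depth_m / focal_px
    return displacement_m / exposure_s
```

For example, a 50-pixel streak at 2 m depth with a 1000-pixel focal length and a 10 ms exposure implies a speed of 10 m/s.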
(This article belongs to the Special Issue Advanced Unmanned System Control and Data Processing)

19 pages, 7490 KiB  
Article
A Framework of Grasp Detection and Operation for Quadruped Robot with a Manipulator
by Jiamin Guo, Hui Chai, Qin Zhang, Haoning Zhao, Meiyi Chen, Yueyang Li and Yibin Li
Drones 2024, 8(5), 208; https://doi.org/10.3390/drones8050208 - 19 May 2024
Viewed by 988
Abstract
Quadruped robots equipped with manipulators need fast and precise grasping and detection algorithms for the transportation of disaster relief supplies. To address this, we developed a framework for these robots, comprising a Grasp Detection Controller (GDC), a Joint Trajectory Planner (JTP), a Leg Joint Controller (LJC), and a Manipulator Joint Controller (MJC). In the GDC, we proposed a lightweight grasp detection CNN based on DenseBlock called DES-LGCNN, which reduced algorithm complexity while maintaining accuracy by incorporating UP and DOWN modules with DenseBlock. For JTP, we optimized the model based on quadruped robot kinematics to enhance wrist camera visibility in dynamic environments. We integrated the network and model into our homemade robot control system and verified our framework through multiple experiments. First, we evaluated the accuracy of the grasp detection algorithm using the Cornell and Jacquard datasets. On the Jacquard dataset, we achieved a detection accuracy of 92.49% for grasp points within 6 ms. Second, we verified its visibility through simulation. Finally, we conducted dynamic scene experiments which consisted of a dynamic target scenario (DTS), a dynamic base scenario (DBS), and a dynamic target and base scenario (DTBS) using an SDU-150 physical robot. In all three scenarios, the object was successfully grasped. The results demonstrate the effectiveness of our framework in managing dynamic environments throughout task execution. Full article
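Accuracy figures on the Cornell and Jacquard datasets are conventionally computed with the rectangle metric: a predicted grasp counts as correct if its rectangle overlaps a ground-truth rectangle with IoU above 0.25 and the grasp angles differ by less than 30°. The sketch below illustrates that criterion under an assumed axis-aligned IoU (the benchmarks use rotated rectangles); all names are hypothetical and this is not the paper's own evaluation code:

```python
def iou_axis_aligned(a, b):
    # Boxes given as (x_min, y_min, x_max, y_max).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def grasp_success(pred_box, pred_angle_deg, gt_box, gt_angle_deg,
                  iou_thresh=0.25, angle_thresh_deg=30.0):
    # Grasp angles are orientations, so they wrap modulo 180 degrees.
    diff = abs(pred_angle_deg - gt_angle_deg) % 180.0
    diff = min(diff, 180.0 - diff)
    return (iou_axis_aligned(pred_box, gt_box) >= iou_thresh
            and diff <= angle_thresh_deg)
```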

22 pages, 23235 KiB  
Article
Efficient YOLOv7-Drone: An Enhanced Object Detection Approach for Drone Aerial Imagery
by Xiaofeng Fu, Guoting Wei, Xia Yuan, Yongshun Liang and Yuming Bo
Drones 2023, 7(10), 616; https://doi.org/10.3390/drones7100616 - 1 Oct 2023
Cited by 14 | Viewed by 5287
Abstract
In recent years, the rise of low-cost mini rotary-wing drone technology across diverse sectors has emphasized the crucial role of object detection within drone aerial imagery. Low-cost mini rotary-wing drones come with intrinsic limitations, especially in computational power and resource availability. This context underscores an urgent need for solutions that synergize low latency, high precision, and computational efficiency. Previous methodologies have primarily depended on high-resolution images, leading to considerable computational burdens. To enhance the efficiency and accuracy of object detection in drone aerial images, and building on YOLOv7, we propose the Efficient YOLOv7-Drone. Recognizing the common presence of small objects in aerial imagery, we eliminated the less efficient P5 detection head and incorporated the P2 detection head for increased precision in small object detection. To ensure efficient feature relay from the Backbone to the Neck, channels within the CBS module were optimized. To focus the model more on the foreground and reduce redundant computations, the TGM-CESC module was introduced, achieving the generation of pixel-level constrained sparse convolution masks. Furthermore, to mitigate potential data losses from sparse convolution, we embedded the head context-enhanced method (HCEM). Comprehensive evaluation using the VisDrone and UAVDT datasets demonstrated our model’s efficacy and practical applicability. The Efficient YOLOv7-Drone achieved state-of-the-art scores while ensuring real-time detection performance. Full article

17 pages, 539 KiB  
Article
An Observer-Based Adaptive Neural Network Finite-Time Tracking Control for Autonomous Underwater Vehicles via Command Filters
by Jun Guo, Jun Wang and Yuming Bo
Drones 2023, 7(10), 604; https://doi.org/10.3390/drones7100604 - 26 Sep 2023
Cited by 3 | Viewed by 1392
Abstract
Due to the hostile marine environment, there will inevitably be unpredictable factors during the operation of unmanned underwater vehicles, including changes in ocean currents, hull dimensions, and velocity measurement uncertainties. An improved finite-time adaptive tracking control problem is considered for autonomous underwater vehicles (AUVs) with uncertain dynamics, unknown external disturbances, and unavailable speed information. A state observer is designed to estimate the position and velocity of the vehicle via a neural network (NN) approach. The NN is used to estimate uncertainties and external disturbances. A finite-time controller is designed via backstepping and command filter techniques. A multi-input multi-output (MIMO) filter for AUVs is established, and the corresponding MIMO filter compensation signal is constructed to eliminate the effect of filtering error. All the signals of the closed-loop system are proved to be finite-time bounded. A comparative example is given to show the effectiveness of the method. Full article
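The command-filter idea mentioned in the abstract can be sketched in its simplest form: a first-order filter passes the virtual control through a low-pass dynamic, yielding both the filtered signal and its derivative so that backstepping need not differentiate the virtual control analytically. This is a generic illustration, not the paper's MIMO filter; the gain and discretization are assumptions:

```python
import numpy as np

def command_filter(v, dt, omega, x0=0.0):
    """First-order command filter x_dot = omega * (v - x).

    v:     array of virtual-control samples
    dt:    sample period (s)
    omega: filter bandwidth (rad/s), assumed positive
    Returns the filtered signal and its derivative (forward Euler).
    """
    x = np.empty(len(v), dtype=float)
    xdot = np.empty(len(v), dtype=float)
    state = x0
    for i, vi in enumerate(v):
        d = omega * (vi - state)   # derivative available without differentiation
        state = state + dt * d
        x[i] = state
        xdot[i] = d
    return x, xdot
```

With a constant input the filter output converges to that input at rate `omega`, which is why a compensation signal for the residual filtering error is needed during transients.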

22 pages, 907 KiB  
Article
Partially Observable Mean Field Multi-Agent Reinforcement Learning Based on Graph Attention Network for UAV Swarms
by Min Yang, Guanjun Liu, Ziyuan Zhou and Jiacun Wang
Drones 2023, 7(7), 476; https://doi.org/10.3390/drones7070476 - 20 Jul 2023
Cited by 5 | Viewed by 2569
Abstract
Multiple unmanned aerial vehicle (multi-UAV) systems have recently demonstrated significant advantages in some real-world scenarios, but the limited communication range of UAVs poses great challenges to multi-UAV collaborative decision-making. By modeling multi-UAV cooperation as a multi-agent system (MAS), cooperative decision-making among UAVs can be realized using multi-agent reinforcement learning (MARL). Following this paradigm, this work focuses on developing partially observable MARL models that capture important information from local observations in order to select effective actions. Previous related studies employ either probability distributions or a weighted mean field to update the average actions of neighborhood agents. However, they do not fully consider the feature information of surrounding neighbors, often resulting in a local optimum. In this paper, we propose a novel partially observable multi-agent reinforcement learning algorithm to remedy this flaw, based on a graph attention network and a partially observable mean field, named GPMF for short. GPMF uses a graph attention module and a mean field module to describe how an agent is influenced by the actions of other agents at each time step. The graph attention module consists of a graph attention encoder and a differentiable attention mechanism, outputting a dynamic graph to represent the effectiveness of neighborhood agents with respect to central agents. The mean field module approximates the effect of a neighborhood agent on a central agent as the average effect of effective neighborhood agents. Aiming at the typical task scenario of large-scale multi-UAV cooperative roundup, the proposed algorithm is evaluated based on the MAgent framework. Experimental results show that GPMF outperforms baselines including state-of-the-art partially observable mean field reinforcement learning algorithms, providing technical support for large-scale multi-UAV coordination and confrontation tasks in communication-constrained environments. Full article
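The coupling of attention with a mean-field update can be sketched as an attention-weighted mean action: neighbor actions are averaged with weights produced by comparing neighbor features against the central agent's feature. This is a generic scaled dot-product illustration, not GPMF's actual graph attention encoder; shapes and names are assumptions:

```python
import numpy as np

def attention_mean_action(center_feat, neigh_feats, neigh_actions):
    """Attention-weighted mean of neighbor actions.

    center_feat:   (d,)   feature of the central agent
    neigh_feats:   (k, d) features of k neighborhood agents
    neigh_actions: (k, m) their (one-hot or continuous) actions
    Returns an (m,) weighted mean action.
    """
    d = center_feat.shape[0]
    scores = neigh_feats @ center_feat / np.sqrt(d)   # scaled dot-product scores
    scores = scores - scores.max()                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax; sums to 1
    return weights @ neigh_actions
```

Replacing the uniform average of a plain mean field with such weights lets influential neighbors dominate the aggregate, which is the intuition behind weighting neighbors by their "effectiveness".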

21 pages, 1447 KiB  
Article
Diffusion Nonlinear Estimation and Distributed UAV Path Optimization for Target Tracking with Intermittent Measurements and Unknown Cross-Correlations
by Shen Wang, Yinya Li, Guoqing Qi and Andong Sheng
Drones 2023, 7(7), 473; https://doi.org/10.3390/drones7070473 - 18 Jul 2023
Cited by 1 | Viewed by 1412
Abstract
This paper focuses on distributed state estimation (DSE) and unmanned aerial vehicle (UAV) path optimization for target tracking. First, a diffusion cubature Kalman filter with intermittent measurements based on covariance intersection (DCKFI-CI) is proposed, to address state estimation with the existence of detection failure and unknown cross-correlations in the network. Furthermore, an alternative transformation of DCKFI-CI based on the information form is developed utilizing a pseudo measurement matrix. The performance of the proposed DSE algorithm is analyzed using the consistency and the bounded error covariance of the estimate. Additionally, the condition of the bounded error covariance is derived. In order to further improve the tracking performance, a UAV path optimization algorithm is developed by minimizing the sum of the trace of fused error covariance, based on the distributed optimization method. Finally, simulations were conducted to verify the effectiveness of the proposed algorithm. Full article
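The covariance intersection step at the heart of DCKFI-CI follows a standard form: two estimates with unknown cross-correlation are fused by a convex combination of their information matrices. The sketch below shows that textbook fusion rule only, not the paper's full diffusion filter; the fixed weight `omega` is an assumption (in practice it is often chosen to minimize the trace of the fused covariance):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega):
    """Fuse (x1, P1) and (x2, P2) with unknown cross-correlation.

    omega in [0, 1] weights the two information matrices; the result
    is consistent for any valid omega, which is why CI is safe when
    cross-correlations between estimators are unknown.
    """
    I1 = np.linalg.inv(P1)
    I2 = np.linalg.inv(P2)
    P = np.linalg.inv(omega * I1 + (1.0 - omega) * I2)
    x = P @ (omega * I1 @ x1 + (1.0 - omega) * I2 @ x2)
    return x, P
```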

23 pages, 6244 KiB  
Article
Hysteresis Modeling and Compensation for a Fast Piezo-Driven Scanner in the UAV Image Stabilization System
by Jinlei Lu, Jun Wang, Yuming Bo and Xianchun Zhang
Drones 2023, 7(6), 392; https://doi.org/10.3390/drones7060392 - 12 Jun 2023
Cited by 3 | Viewed by 1459
Abstract
The fast piezo-driven scanner (FPDS) compensates for vibrations in the unmanned aerial vehicle (UAV) image stabilization system. However, the hysteresis nonlinearity reduces the positioning accuracy of the FPDS. To address this challenge, this paper presents a novel weighted polynomial modified Bouc–Wen (WPMBW) model cascaded with a linear dynamic model to describe counterclockwise, asymmetric, and rate-dependent hysteresis loops of an FPDS. The proposed approach utilizes the weighted polynomial function to describe the asymmetric characteristic and the linear dynamic model to capture the rate-dependent behavior. By modifying the last two terms in the classical Bouc–Wen (CBW) model, the modified BW model directly characterizes the counterclockwise hysteresis loops with fewer parameters, circumventing the algebraic-loop problem that arises in the inverse CBW model. The pseudorandom binary sequence (PRBS) input is employed to decouple the linear dynamic model from the WPMBW model. The sinusoidal input is then applied to stimulate the hysteresis phenomenon, and the parameters of the WPMBW model are estimated by the particle swarm optimization (PSO) toolbox. Experimental results on a commercial FPDS show that the proposed model is superior to the CBW and traditional asymmetric BW models in modeling accuracy and feedforward hysteresis compensation. Full article
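For reference, the classical Bouc–Wen (CBW) model that the paper modifies can be simulated with forward-Euler integration of its hysteretic state. This sketch shows only the standard CBW form, not the proposed WPMBW model; the parameter values and the output convention y = k·u − h are assumptions for illustration:

```python
import numpy as np

def bouc_wen(u, dt, A=1.0, beta=0.5, gamma=0.3, n=1, k=1.0):
    """Classical Bouc-Wen hysteresis, forward-Euler integrated.

    u:  input voltage samples; dt: sample period
    h:  hysteretic state with
        h_dot = A*u_dot - beta*|u_dot|*|h|^(n-1)*h - gamma*u_dot*|h|^n
    Output y = k*u - h (displacement with hysteresis subtracted in).
    """
    h = 0.0
    y = np.empty(len(u), dtype=float)
    u_prev = u[0]
    for i, ui in enumerate(u):
        du = (ui - u_prev) / dt
        hdot = (A * du
                - beta * abs(du) * (abs(h) ** (n - 1)) * h
                - gamma * du * (abs(h) ** n))
        h += dt * hdot
        y[i] = k * ui - h
        u_prev = ui
    return y
```

Inverting this model for feedforward compensation requires solving for u given y, which is where the algebraic-loop problem mentioned in the abstract arises; the paper's modified form is designed to avoid it.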

19 pages, 4755 KiB  
Article
Research on Environment Perception System of Quadruped Robots Based on LiDAR and Vision
by Guangrong Chen and Liang Hong
Drones 2023, 7(5), 329; https://doi.org/10.3390/drones7050329 - 20 May 2023
Cited by 10 | Viewed by 3855
Abstract
Due to their high stability and adaptability, quadruped robots are currently widely discussed in the robotics field. To cope with complicated indoor or outdoor environments, quadruped robots should be equipped with an environment perception system, which mostly contains LiDAR or a vision sensor, on which SLAM (Simultaneous Localization and Mapping) is deployed. In this paper, comparative experimental platforms, including a quadruped robot and a vehicle, each with LiDAR and a vision sensor, are first established. Secondly, single-sensor SLAM, including LiDAR SLAM and visual SLAM, is investigated separately to highlight the advantages and disadvantages of each. Then, multi-sensor SLAM based on LiDAR and vision is addressed to improve environmental perception performance. Thirdly, YOLOv5 (You Only Look Once), improved by adding ASFF (adaptive spatial feature fusion), is employed for gesture-recognition image processing to achieve human–machine interaction. Finally, the challenges of environment perception systems for mobile robots are discussed based on a comparison between wheeled and legged robots. This research provides insight into the environment perception of legged robots. Full article
