Advances in Deep Learning for Drones and Its Applications: 2nd Edition

Special Issue Editors

Guest Editor
Tencent XR Vision Lab, Canberra, ACT 2601, Australia
Interests: UAV; robot vision; state estimation; deep learning in agriculture (horticulture); reinforcement learning

Guest Editor
Faculty of Aerospace Engineering, Delft University of Technology, 2600 AA Delft, The Netherlands
Interests: learning; planning; active sensing; environmental mapping; informative path planning; robotic decision-making; agricultural robotics

Guest Editor
Centre for Automation and Robotic Engineering Science, Department of Electrical and Computer Engineering, University of Auckland, Auckland, New Zealand
Interests: agricultural robots; IPT; smart farm; human-robot interaction

Special Issue Information

Dear Colleagues,

It is with great pleasure that we present the second edition of our Special Issue, following the strong interest in the first edition, which published 11 papers.

Drones, especially vertical takeoff and landing (VTOL) platforms, are extremely popular and useful for many tasks. The variety of commercially available VTOL platforms today indicates that they have left the research lab and are being utilized for real-world aerial work, such as vertical structure inspection, construction site surveying, and precision agriculture. These platforms offer high-level autonomous functionalities, minimizing user intervention, and can carry the payloads required for a given application.

In addition, we have witnessed rapid growth in machine learning, particularly deep learning. State-of-the-art deep learning techniques can already outperform human capabilities in many sophisticated tasks, such as autonomous driving, playing games such as Go or Dota 2 (reinforcement learning), and even medical image analysis (object detection and instance segmentation).

Building on these two cutting-edge technologies, there is growing interest in applying deep learning techniques to aerial robots in order to improve their capabilities and level of autonomy. This step change will play a pivotal role in both drone technology and the wider field of aerial robotics.

Within this context, we invite researchers to submit papers to this Special Issue focusing on current advances in deep learning for field aerial robots.

Papers are solicited on all areas directly related to these topics, including the following: 

  • Large-scale aerial datasets and standardized benchmarks for the training, testing, and evaluation of deep learning solutions;
  • Deep neural networks (DNNs) for field aerial robot perception (e.g., object detection or semantic classification for navigation);
  • Recurrent networks for state estimation and dynamic identification of aerial vehicles;
  • Deep reinforcement learning for aerial robots (discrete or continuous control) in dynamic environments;
  • Learning-based aerial manipulation and planning in cluttered environments;
  • Decision making or task planning using machine learning for field aerial robots;
  • Data analytics and real-time decision making with aerial robots-in-the-loop;
  • Aerial robots in agriculture using deep learning;
  • Aerial robots in inspection using deep learning;
  • Imitation learning for aerial robots (e.g., teach and repeat);
  • Multi-aerial-agent coordination using deep learning;
  • Innovative and novel mechanical and electrical design for aerial vehicles.

Dr. Inkyu Sa
Dr. Marija Popovic
Dr. Ho Seok Ahn
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAVs
  • aerial robots
  • drones
  • remote sensing
  • deep learning
  • deep neural networks
  • computer vision
  • robotic perception

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

24 pages, 7728 KiB  
Article
UVPose: A Real-Time Key-Point-Based Skeleton Detection Network for a Drone Countermeasure System
by Bodan Yao, Weijiao Wang, Zhaojie Wang and Qi Song
Drones 2025, 9(3), 214; https://doi.org/10.3390/drones9030214 - 17 Mar 2025
Viewed by 329
Abstract
In drone countermeasure systems, drone tracking is commonly conducted using object detection methods, which are typically limited to identifying the presence of a drone. To enhance the performance of such systems and improve the accuracy of drone flight posture prediction—while precisely capturing critical components such as rotors, mainboards, and flight trajectories—this paper introduces a novel drone key point detection model, UVPose, built upon the MMPose framework. First, we design an innovative backbone network, MDA-Net, based on the CSPNet architecture. This network improves multi-scale feature extraction and strengthens connections between low- and high-level features. To further enhance key point perception and pose estimation accuracy, a parallel attention mechanism, combining channel and spatial attention, is integrated. Next, we propose an advanced neck structure, RFN, which combines high-level semantic features from the backbone with rich contextual information from the neck. For the head, we adopt the SimCC method, optimized for lightweight, efficient, and accurate key point localization. Experimental results demonstrate that UVPose outperforms existing models, achieving a PCK of 79.2%, an AP of 67.2%, and an AR of 73.5%, with only 15.8 million parameters and 3.3 GFLOPs of computation. This balance between accuracy and resource efficiency makes UVPose well suited for deployment on edge devices.

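As a rough illustration of the SimCC-style head adopted in UVPose, the sketch below (PyTorch) casts keypoint localization as two one-dimensional classification problems, one over horizontal bins and one over vertical bins. The layer sizes, number of keypoints, and split ratio are assumed values for illustration only and do not reproduce the UVPose configuration.

import torch
import torch.nn as nn

class SimCCStyleHead(nn.Module):
    """Sketch of a SimCC-style keypoint head: each keypoint is localized by
    classifying its x and y coordinates over discretized 1D bins."""

    def __init__(self, in_channels=256, feat_hw=(8, 8), num_keypoints=9,
                 img_wh=(256, 256), split_ratio=2.0):
        super().__init__()
        h, w = feat_hw
        self.split_ratio = split_ratio
        self.x_bins = int(img_wh[0] * split_ratio)   # sub-pixel bins along x
        self.y_bins = int(img_wh[1] * split_ratio)   # sub-pixel bins along y
        # Per-keypoint feature projection, then two linear classifiers.
        self.kpt_conv = nn.Conv2d(in_channels, num_keypoints, kernel_size=1)
        self.fc_x = nn.Linear(h * w, self.x_bins)
        self.fc_y = nn.Linear(h * w, self.y_bins)

    def forward(self, feats):                         # feats: (B, C, H, W)
        k = self.kpt_conv(feats)                      # (B, K, H, W)
        k = k.flatten(2)                              # (B, K, H*W)
        return self.fc_x(k), self.fc_y(k)             # logits over x / y bins

# Usage: decode the most likely bin per keypoint back to pixel coordinates.
head = SimCCStyleHead()
logits_x, logits_y = head(torch.randn(1, 256, 8, 8))
px = logits_x.argmax(dim=-1).float() / head.split_ratio
py = logits_y.argmax(dim=-1).float() / head.split_ratio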

27 pages, 2843 KiB  
Article
GRU-Based Deep Learning Framework for Real-Time, Accurate, and Scalable UAV Trajectory Prediction
by Seungwon Yoon, Dahyun Jang, Hyewon Yoon, Taewon Park and Kyuchul Lee
Drones 2025, 9(2), 142; https://doi.org/10.3390/drones9020142 - 14 Feb 2025
Viewed by 871
Abstract
Trajectory prediction is critical for ensuring the safety, reliability, and scalability of unmanned aerial vehicles (UAVs) in urban environments. Despite advances in deep learning, existing methods often struggle with dynamic UAV conditions, such as rapid directional changes and limited forecasting horizons, while lacking comprehensive real-time validation and generalization capabilities. This study addresses these challenges by proposing a gated recurrent unit (GRU)-based deep learning framework optimized through Look_Back and Forward_Length labeling to capture complex temporal patterns. The model demonstrated state-of-the-art performance, surpassing existing UAV and aircraft trajectory prediction approaches, including FlightBERT++, in terms of both accuracy and robustness. It achieved reliable long-range predictions up to 4 s ahead, and its real-time feasibility was validated through its efficient resource utilization. The model's generalization capability was confirmed through evaluations on two independent UAV datasets, where it consistently predicted unseen trajectories with high accuracy. These findings highlight the model's ability to handle rapid maneuvers, extend prediction horizons, and generalize across platforms. This work establishes a robust trajectory prediction framework with practical applications in collision avoidance, mission planning, and anti-drone systems, paving the way for safer and more scalable UAV operations.

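To make the look-back/forward-length idea concrete, the following is a minimal GRU sketch (PyTorch) that maps a window of past states to a short horizon of future positions. The window length, hidden size, and three-dimensional state vector are illustrative assumptions and do not reproduce the authors' labeling scheme or hyperparameters.

import torch
import torch.nn as nn

class GRUTrajectoryPredictor(nn.Module):
    """Sketch: encode the last `look_back` states with a GRU, then regress
    the next `forward_length` positions in one shot."""

    def __init__(self, state_dim=3, hidden=128, forward_length=8):
        super().__init__()
        self.gru = nn.GRU(state_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, forward_length * state_dim)
        self.forward_length = forward_length
        self.state_dim = state_dim

    def forward(self, past):                 # past: (B, look_back, state_dim)
        _, h = self.gru(past)                # h: (num_layers, B, hidden)
        out = self.head(h[-1])               # use the top-layer final state
        return out.view(-1, self.forward_length, self.state_dim)

# Usage: a look-back window of 20 samples predicting 8 future 3D positions.
model = GRUTrajectoryPredictor()
future = model(torch.randn(4, 20, 3))        # -> (4, 8, 3)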

17 pages, 7514 KiB  
Article
Cloud–Edge Collaborative Strategy for Insulator Recognition and Defect Detection Model Using Drone-Captured Images
by Pengpei Gao, Tingting Wu and Chunhe Song
Drones 2024, 8(12), 779; https://doi.org/10.3390/drones8120779 - 21 Dec 2024
Cited by 1 | Viewed by 695
Abstract
In modern power systems, drones are increasingly being utilized to monitor the condition of critical power equipment. However, limited computing capacity is a key factor limiting the application of drones. To optimize the computational load on drones, this paper proposes a cloud–edge collaborative intelligence strategy applied to insulator identification and defect detection scenarios. Firstly, a low-computation method deployed at the edge is proposed for determining whether insulator strings are present in the captured images. Secondly, an efficient insulator recognition and defect detection method, I-YOLO (Insulator-YOLO), is proposed for cloud deployment. In the neck network, we integrate an I-ECA (Insulator-Enhanced Channel Attention) mechanism based on insulator characteristics to fuse features more comprehensively. In addition, we incorporate the insulator feature cross fusion network (I-FCFN) to enhance the detection of small-sized insulator defects. Experimental results demonstrate that the cloud–edge collaborative intelligence strategy performs exceptionally well in insulator-related tasks. The edge algorithm achieved an accuracy of 97.9% with only 0.7 GFLOPs, meeting the inspection requirements of drones. Meanwhile, the cloud model achieved an mAP50 of 96.2%, accurately detecting insulators and their defects.

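The division of labour described above, namely a low-computation edge check that forwards only promising frames to a heavier cloud detector, can be summarized with a simple gating loop. This is a minimal sketch under assumed interfaces: the function names, threshold, and callable models are hypothetical placeholders, not the paper's edge algorithm or I-YOLO implementation.

def edge_cloud_inspection(frames, edge_classifier, cloud_detector,
                          presence_threshold=0.5):
    """Sketch of a cloud-edge collaborative loop: a lightweight edge model
    filters frames, and only likely insulator images reach the cloud model."""
    results = []
    for frame in frames:
        # Edge step: low-FLOP binary check "does this frame contain insulators?"
        presence_score = edge_classifier(frame)
        if presence_score < presence_threshold:
            continue                  # drop the frame on the drone, save bandwidth
        # Cloud step: full recognition and defect detection on the kept frame.
        detections = cloud_detector(frame)
        results.append({"frame": frame, "detections": detections})
    return results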

25 pages, 7661 KiB  
Article
Application of Reinforcement Learning in Controlling Quadrotor UAV Flight Actions
by Shang-En Shen and Yi-Cheng Huang
Drones 2024, 8(11), 660; https://doi.org/10.3390/drones8110660 - 9 Nov 2024
Cited by 2 | Viewed by 2485
Abstract
The literature has extensively discussed reinforcement learning (RL) for controlling rotorcraft drones in flight traversal tasks. However, most studies lack adequate detail regarding the design of reward and punishment mechanisms, and there is limited exploration of the feasibility of applying reinforcement learning to actual flight control after simulation experiments. Consequently, this study focuses on the design of rewards and punishments and of the state input for RL. The simulation environment is constructed using AirSim and Unreal Engine, with onboard camera footage serving as the state input for reinforcement learning. The research investigates three RL algorithms suitable for discrete action training. The Deep Q Network (DQN), Advantage Actor–Critic (A2C), and Proximal Policy Optimization (PPO) algorithms were combined with three different reward and punishment design mechanisms for training and testing. The results indicate that employing the PPO algorithm along with a continuous return method as the reward mechanism allows for effective convergence during training, achieving a target traversal rate of 71% in the testing environment. Furthermore, this study proposes integrating the YOLOv7-tiny object detection (OD) system to assess the applicability of reinforcement learning in real-world settings. By unifying the state inputs of the simulated and OD environments and replacing the original simulated image inputs with a maximum dual-target approach, the experiments ultimately achieved a target traversal rate of 52%. In summary, this research formulates a logical framework for RL reward and punishment design, deployed with a real-time YOLO OD implementation, as a useful aid for related RL studies.

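As a hedged illustration of dense reward shaping of the kind a "continuous return" mechanism suggests, the sketch below rewards a quadrotor for progress toward the target gate at every step rather than only on success or failure. The function name, scale, and bonus values are hypothetical placeholders and do not reproduce the authors' reward and punishment design.

def continuous_return_reward(prev_dist, curr_dist, passed_gate, collided,
                             progress_scale=10.0, success_bonus=100.0,
                             crash_penalty=-100.0):
    """Sketch of a dense, 'continuous return' style reward: progress feedback
    toward the target gate instead of a sparse pass/fail signal."""
    if collided:
        return crash_penalty
    if passed_gate:
        return success_bonus
    # Positive reward for closing the distance to the gate, negative for drifting away.
    return progress_scale * (prev_dist - curr_dist)

# Usage: per-step reward while the quadrotor approaches a gate (5.0 m -> 4.6 m).
r = continuous_return_reward(prev_dist=5.0, curr_dist=4.6,
                             passed_gate=False, collided=False)
print(round(r, 2))   # 4.0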

Review

40 pages, 11010 KiB  
Review
PRISMA Review: Drones and AI in Inventory Creation of Signage
by Geovanny Satama-Bermeo, Jose Manuel Lopez-Guede, Javad Rahebi, Daniel Teso-Fz-Betoño, Ana Boyano and Ortzi Akizu-Gardoki
Drones 2025, 9(3), 221; https://doi.org/10.3390/drones9030221 - 19 Mar 2025
Viewed by 429
Abstract
This systematic review explores the integration of unmanned aerial vehicles (UAVs) and artificial intelligence (AI) in automating road signage inventory creation, employing the preferred reporting items for systematic reviews and meta-analyses (PRISMA) methodology to analyze recent advancements. The study evaluates cutting-edge technologies, including UAVs equipped with deep learning algorithms and advanced sensors such as light detection and ranging (LiDAR) and multispectral cameras, highlighting their roles in enhancing traffic sign detection and classification. Key challenges include detecting minor or partially obscured signs and adapting to diverse environmental conditions. The findings reveal significant progress in automation, with notable improvements in accuracy, efficiency, and real-time processing capabilities. However, limitations such as computational demands and environmental variability persist. By providing a comprehensive synthesis of current methodologies and performance metrics, this review establishes a robust foundation for future research to advance automated road infrastructure management and improve safety and operational efficiency in urban and rural settings.