Drones, Volume 9, Issue 12 (December 2025) – 19 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official version of record; papers are published in both HTML and PDF form. To view a paper in PDF format, click the "PDF Full-text" link and open the file with the free Adobe Reader.
19 pages, 3407 KB  
Article
Autonomous UAV-Based Volcanic Gas Monitoring: A Simulation-Validated Case Study in Santorini
by Theodoros Karachalios and Theofanis Orphanoudakis
Drones 2025, 9(12), 829; https://doi.org/10.3390/drones9120829 (registering DOI) - 29 Nov 2025
Abstract
Unmanned Aerial Vehicles (UAVs) can deliver rapid, spatially resolved measurements of volcanic gases that often precede eruptions, yet most deployments remain manual or preplanned and are slow to react to seismic unrest. In the present work, we describe a simulation-validated design of an earthquake-triggered, autonomous workflow for early detection of CO2 anomalies, demonstrated through a conceptual case study focused on the Santorini caldera. The system ingests real-time seismic alerts, generates missions automatically, and executes a two-stage sensing strategy: a fast scan to build a coarse CO2 heatmap followed by targeted high-precision sampling at emerging hotspots. Mission planning includes wind- and terrain-aware flight profiles, geofenced safety envelopes, and a facility-location approach to landing-site placement. For Santorini, we provide a ring of candidate launch/landing zones with wind-contingent usage, illustrate adaptive replanning driven by heatmap uncertainty, and outline calibration and quality-control steps for robust CO2 mapping. The proposed methodology offers an operational blueprint that links seismic triggers to actionable, georeferenced gas information and can be transferred to other island or caldera volcanoes. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Enhanced Emergency Response)
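The abstract does not spell out the facility-location formulation behind the landing-site placement. As a minimal sketch, a greedy k-center (farthest-point) heuristic spreads a fixed number of launch/landing zones over candidate sites; the ring coordinates below are hypothetical, not the paper's data.

```python
import math

def greedy_landing_sites(candidates, k):
    """Pick k launch/landing zones spread across candidate sites.

    Farthest-point (greedy k-center) heuristic: start from the first
    candidate, then repeatedly add the candidate farthest from every
    zone chosen so far, so the largest coverage gap shrinks fastest.
    """
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: min(math.dist(c, s) for s in chosen),
        )
        chosen.append(best)
    return chosen

# Hypothetical ring of 12 candidate sites (x, y) in km around a caldera.
ring = [(math.cos(i * math.pi / 6) * 3, math.sin(i * math.pi / 6) * 3)
        for i in range(12)]
sites = greedy_landing_sites(ring, 4)
```

On the symmetric ring, the second site chosen is the one diametrically opposite the first, which matches the intuition of maximizing dispersion.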
19 pages, 807 KB  
Article
DPAD: Distribution-Driven Perturbation-Adaptive Defense for UAV Time-Series Regression Under Hybrid Adversarial Attacks
by Bo Xu, Zhiqing Liu, Zhongjun Dong, Kaiqi Huang, Xiaopeng Huang, Haolin Zhu, Jun Wei, Yong Li, Yangbai Zhang and Xiuping Li
Drones 2025, 9(12), 828; https://doi.org/10.3390/drones9120828 (registering DOI) - 28 Nov 2025
Abstract
Time-series regression models are essential components in unmanned aerial vehicles (UAVs) for accurate trajectory and state prediction. Nevertheless, they remain vulnerable to hybrid adversarial attacks, which can compromise mission performance and cause severe economic loss. To address this challenge, we propose the Distribution-driven Perturbation-Adaptive Defense (DPAD) framework. DPAD improves perturbation detection with Gaussian Mixture Model (GMM)-based feature augmentation, raising the R2 of perturbation-strength prediction from 0.685 to 0.943, and dynamically selects a suitable defense sub-model or the original model for adaptive correction. Experiments on UAV_Delivery show that DPAD significantly enhances robustness, reducing prediction errors by about 80% under hybrid attacks while maintaining high accuracy on clean samples at an inference speed of 2.744 ms per sample. The framework thus offers a scalable and effective defense for UAV time-series regression models in complex adversarial scenarios. Full article
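The GMM-based feature augmentation is not detailed in the abstract; one plausible reading is that each sample's residual is augmented with its posterior responsibilities under a mixture fitted to residuals from known attack regimes. The sketch below assumes fixed, illustrative mixture parameters rather than anything fitted from the paper's data.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gmm_features(residual, components):
    """Augment a raw residual with GMM posterior responsibilities.

    `components` is a list of (weight, mean, std) tuples, assumed to have
    been fitted offline to residuals observed under known perturbation
    strengths. The responsibilities tell a downstream regressor *which*
    attack regime the sample resembles, not just how large the residual is.
    """
    likelihoods = [w * gaussian_pdf(residual, mu, s) for w, mu, s in components]
    total = sum(likelihoods)
    return [residual] + [l / total for l in likelihoods]

# Hypothetical mixture: "clean", "weak attack", "strong attack" regimes.
mix = [(0.6, 0.0, 0.1), (0.3, 0.5, 0.2), (0.1, 1.5, 0.4)]
features = gmm_features(0.55, mix)
```

A residual of 0.55 lands almost entirely in the "weak attack" component, so the augmented feature vector carries regime information that a raw magnitude would not.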
26 pages, 8112 KB  
Article
Advancing Real-Time Aerial Wildfire Detection Through Plume Recognition and Knowledge Distillation
by Pirunthan Keerthinathan, Juan Sandino, Sutharsan Mahendren, Anuraj Uthayasooriyan, Julian Galvez, Grant Hamilton and Felipe Gonzalez
Drones 2025, 9(12), 827; https://doi.org/10.3390/drones9120827 (registering DOI) - 28 Nov 2025
Abstract
Uncrewed aerial systems (UAS)-based remote sensing and artificial intelligence (AI) analysis enable real-time wildfire or bushfire detection, facilitating early response to minimize damage and protect lives and property. However, their effectiveness is limited by three issues: distinguishing smoke from fog, the high cost of manual annotation, and the computational demands of large models. This study addresses these three challenges by introducing plume as a new indicator to better distinguish smoke from similar visual elements, and by employing a hybrid annotation method using knowledge distillation (KD) to reduce expert labour and accelerate labelling. Additionally, it leverages lightweight YOLO Nano models trained with pseudo-labels generated from a fine-tuned teacher network to lower computational demands while maintaining high detection accuracy for real-time wildfire monitoring. Controlled pile burns in Canungra, QLD, Australia, were conducted to collect UAS-captured images over deciduous vegetation, which were subsequently augmented with the Flame2 dataset of wildfire images over coniferous vegetation. A Grounding DINO model, fine-tuned using few-shot learning, served as the teacher network to generate pseudo-labels for a significant portion of the Flame2 dataset. These pseudo-labels were then used to train student networks based on YOLO Nano architectures, specifically versions 5, 8, and 11 (YOLOv5n, YOLOv8n, YOLOv11n). The experimental results show that YOLOv8n and YOLOv5n achieved an mAP@0.5 of 0.721, and that plume detection outperforms smoke indicators (F1: 76.1–85.7% vs. 70%) in fog and wildfire scenarios. These findings underscore the value of incorporating plume as a distinct class and utilizing KD, both of which enhance detection accuracy and scalability, ultimately supporting more reliable and timely wildfire monitoring and response. Full article
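The core of the knowledge-distillation pipeline — turning confident teacher detections into training labels for a lightweight student — can be sketched in a few lines. The confidence threshold and the detection tuples are illustrative assumptions; the paper's actual filtering criteria are not given in the abstract.

```python
def make_pseudo_labels(teacher_detections, conf_thresh=0.5):
    """Turn teacher-network detections into student training labels.

    Each detection is (class_name, confidence, box). Only sufficiently
    confident teacher outputs become pseudo-labels; low-confidence ones
    are dropped so label noise does not propagate into the student.
    """
    return [(cls, box) for cls, conf, box in teacher_detections
            if conf >= conf_thresh]

# Hypothetical teacher outputs on one frame (boxes in pixels).
dets = [("plume", 0.91, (10, 20, 80, 120)),
        ("smoke", 0.34, (200, 40, 260, 90)),   # too uncertain: dropped
        ("plume", 0.77, (300, 10, 390, 100))]
labels = make_pseudo_labels(dets)
```

In the paper's setup the teacher is a few-shot fine-tuned Grounding DINO and the students are YOLO Nano variants; the filtering step above is the generic glue between them.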
26 pages, 2005 KB  
Article
PLMT-Net: A Physics-Aware Lightweight Network for Multi-Agent Trajectory Prediction in Interactive Driving Scenarios
by Wan Yu, Fuyun Liu, Huiqi Liu, Ming Chen and Liangliang Zhao
Drones 2025, 9(12), 826; https://doi.org/10.3390/drones9120826 (registering DOI) - 28 Nov 2025
Abstract
Accurate and efficient multi-agent trajectory prediction remains a core challenge for autonomous driving, particularly in modeling complex interactions while maintaining physical plausibility and computational efficiency. Many existing methods, especially those based on large transformer architectures, tend to overlook physical constraints, leading to unrealistic predictions and high deployment costs. In this work, we propose a lightweight trajectory prediction framework that integrates physical information to enhance interaction modeling and runtime performance. Our method introduces two physically inspired strategies: (1) a constraint-guided mechanism that filters irrelevant or distracting neighbors, and (2) a physics-aware attention module that steers attention weights toward physically plausible interactions. The overall architecture adopts a modular and vectorized design, effectively reducing model complexity and inference latency. Experiments on the Argoverse V1 dataset, comparing against multiple existing methods, demonstrate that our approach achieves a favorable balance among accuracy, physical feasibility, and efficiency, running in real time on a commodity desktop GPU. Future work will focus on validating its performance on embedded hardware. Full article
(This article belongs to the Section Innovative Urban Mobility)
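The constraint-guided neighbor filter can be illustrated with a crude kinematic reachability test: a neighbor is kept only if the gap between the two agents could close within the prediction horizon. The horizon and speed bound below are illustrative assumptions standing in for the paper's actual constraints.

```python
import math

def reachable_neighbors(ego, neighbors, horizon_s=3.0, v_max=20.0):
    """Keep only neighbors that could physically interact with the ego
    agent within the prediction horizon.

    A neighbor is kept if the separation can shrink to zero when both
    agents travel at most v_max (m/s) for horizon_s seconds; everything
    farther away cannot influence the ego trajectory and is dropped
    before attention is computed.
    """
    max_closing = 2 * v_max * horizon_s  # both agents moving toward each other
    return [n for n in neighbors if math.dist(ego, n) <= max_closing]

ego_pos = (0.0, 0.0)
others = [(30.0, 40.0), (500.0, 0.0), (80.0, 60.0)]  # positions in meters
kept = reachable_neighbors(ego_pos, others)
```

Pruning physically unreachable neighbors before attention is one simple way such a filter reduces both distraction and compute, which is the stated motivation of the module.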
30 pages, 4458 KB  
Article
Multi-UAV Cooperative Search in Partially Observable Low-Altitude Environments Based on Deep Reinforcement Learning
by Xiu-Xia Yang, Wen-Qiang Yao, Yi Zhang, Hao Yu and Chao Wang
Drones 2025, 9(12), 825; https://doi.org/10.3390/drones9120825 - 27 Nov 2025
Abstract
Multi-Unmanned Aerial Vehicle (Multi-UAV) cooperative search represents a cutting-edge research direction in the field of unmanned aerial vehicle applications. The use of multi-UAV systems for low-altitude target search and area surveillance has become an effective means of enhancing security capabilities. In practical scenarios, UAVs rely on onboard sensors to acquire environmental information; however, due to the limited perceptual range of these sensors, their observation capabilities are inherently local and constrained. This paper investigates the problem of multi-UAV cooperative search in partially observable low-altitude environments, where each UAV possesses a circular sensing range with a finite radius. Target location information is only obtained when a target enters the field of view of any UAV. The objective is to achieve cooperative search and sustain continuous surveillance while ensuring safety among UAVs and with the environment. To address this challenge, we propose a novel multi-agent deep reinforcement learning (MADRL) algorithm named Normalizing Graph Attention Soft Actor-Critic (NGASAC), which integrates a normalizing flow (NF) layer and a multi-head graph attention network (MHGAT). The normalizing flow maps traditional Gaussian sampling to a more complex action distribution, thereby enhancing the expressiveness and flexibility of the policy. Simultaneously, by constructing a multi-head graph attention network that captures “obstacle–target” relationships, the algorithm improves the UAVs’ ability to learn and reason about complex spatial topologies, leading to significantly better performance in cooperative search and stable surveillance of hidden targets. Simulation results demonstrate that NGASAC markedly outperforms baseline methods such as Multi-Agent Soft Actor-Critic (MASAC), Multi-Agent Proximal Policy Optimization (MAPPO), and Multi-Agent Deep Deterministic Policy Gradient (MADDPG) across multiple evaluation metrics, including success rate, task time, and obstacle-avoidance capability, and exhibits strong generalization and robustness. Full article
24 pages, 2374 KB  
Article
NightTrack: Joint Night-Time Image Enhancement and Object Tracking for UAVs
by Xiaomin Huang, Yunpeng Bai, Jiaman Ma, Ying Li, Changjing Shang and Qiang Shen
Drones 2025, 9(12), 824; https://doi.org/10.3390/drones9120824 - 27 Nov 2025
Abstract
UAV-based visual object tracking has recently become a prominent research focus in computer vision. However, most existing trackers are primarily benchmarked under well-illuminated conditions, largely overlooking the challenges that arise in night-time scenarios. Although attempts exist to restore image brightness via low-light image enhancement before feeding frames to a tracker, such two-stage pipelines often struggle to strike an effective balance between the competing objectives of enhancement and tracking. To address this limitation, this work proposes NightTrack, a unified framework that jointly optimizes low-light image enhancement and UAV object tracking. While boosting image visibility, NightTrack not only explicitly preserves but also reinforces the discriminative features required for robust tracking. To improve the discriminability of low-light representations, Pyramid Attention Modules (PAMs) are introduced to enhance multi-scale contextual cues. Moreover, by jointly estimating illumination and noise curves, NightTrack mitigates the adverse effects of low-light environments, leading to significant gains in precision and robustness. Experimental results on multiple night-time tracking benchmarks demonstrate that NightTrack outperforms state-of-the-art methods in night-time scenes, showing strong promise for further development. Full article
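Curve-based low-light enhancement of the kind alluded to here is commonly realized as an iterated quadratic pixel adjustment. The sketch below uses a fixed curve parameter for illustration; in a learned system such as the one described, the parameter would be predicted per pixel by the illumination-curve estimator.

```python
def enhance_pixel(x, alpha=0.6, iterations=4):
    """Brighten one normalized pixel value with a quadratic curve.

    Applies x <- x + alpha * x * (1 - x) repeatedly. The update maps
    [0, 1] into [0, 1] for 0 <= alpha <= 1, lifts dark values strongly,
    and leaves near-white values almost unchanged.
    """
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return x

dark, bright = enhance_pixel(0.1), enhance_pixel(0.9)
```

A dark pixel at 0.1 is lifted to roughly 0.47 while a bright pixel at 0.9 barely moves, which is the qualitative behavior such enhancement stages aim for.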
24 pages, 15280 KB  
Article
An Efficient and Accurate UAV State Estimation Method with Multi-LiDAR–IMU–Camera Fusion
by Junfeng Ding, Pei An, Kun Yu, Tao Ma, Bin Fang and Jie Ma
Drones 2025, 9(12), 823; https://doi.org/10.3390/drones9120823 - 27 Nov 2025
Abstract
State estimation plays a vital role in UAV navigation and control. With the continuous decrease in sensor cost and size, UAVs equipped with multiple LiDARs, Inertial Measurement Units (IMUs), and cameras have attracted increasing attention. Such systems can acquire rich environmental and motion information from multiple perspectives, thereby enabling more precise navigation and mapping in complex environments. However, efficiently utilizing multi-sensor data for state estimation remains challenging; in particular, the IMU biases and the UAV state are tightly coupled. To address these challenges, this paper proposes an efficient and accurate UAV state estimation method tailored for multi-LiDAR–IMU–camera systems. Specifically, we first construct an efficient distributed state estimation model. It decomposes the multi-LiDAR–IMU–camera system into a series of single LiDAR–IMU–camera subsystems, reformulating the complex coupling problem as an efficient distributed state estimation problem. Then, we derive an accurate feedback function to constrain and optimize the UAV state using the estimated subsystem states, thus enhancing overall estimation accuracy. Based on this model, we design an efficient distributed state estimation algorithm with multi-LiDAR–IMU–camera fusion, termed DLIC. DLIC achieves robust multi-sensor data fusion via shared feature maps, effectively improving both estimation robustness and accuracy. In addition, we design an accelerated image-to-point cloud registration module (A-I2P) to provide reliable visual measurements, further boosting state estimation efficiency. Extensive experiments are conducted on 18 real-world indoor and outdoor scenarios from the public NTU VIRAL dataset. The results demonstrate that DLIC consistently outperforms existing multi-sensor methods across key evaluation metrics, including RMSE, MAE, SD, and SSE. More importantly, our method runs in real time on a resource-constrained embedded device equipped with only an 8-core CPU, while maintaining low memory consumption. Full article
(This article belongs to the Special Issue Advances in Guidance, Navigation, and Control)
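The abstract's feedback function that combines subsystem estimates is not given, but the classic building block for merging independent estimates is inverse-variance weighting. The sketch below fuses scalar per-subsystem estimates under that assumption; the altitude numbers are hypothetical.

```python
def fuse_estimates(estimates):
    """Combine per-subsystem state estimates into one fused value.

    `estimates` is a list of (value, variance) pairs, one per
    LiDAR-IMU-camera subsystem. Inverse-variance weighting is the
    optimal linear fusion for independent Gaussian estimates and stands
    in here for the paper's derived feedback function.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    fused_var = 1.0 / total  # fused uncertainty never exceeds the best input
    return fused, fused_var

# Hypothetical altitude estimates (m) from three subsystems.
fused, var = fuse_estimates([(10.2, 0.04), (10.0, 0.01), (10.4, 0.16)])
```

The fused value is pulled toward the most certain subsystem (10.0 m, variance 0.01), and the fused variance is smaller than any single input's — the basic payoff of distributed estimation with feedback.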
24 pages, 35078 KB  
Article
AUP-DETR: A Foundational UAV Object Detection Framework for Enabling the Low-Altitude Economy
by Jiajing Xu, Xiaozhang Liu, Xiulai Li and Yuanyan Hu
Drones 2025, 9(12), 822; https://doi.org/10.3390/drones9120822 - 27 Nov 2025
Abstract
The ascent of the low-altitude economy underscores the critical need for autonomous perception in Unmanned Aerial Vehicles (UAVs), particularly within complex environments such as urban ports. However, existing object detection models often perform poorly when dealing with land–sea mixed scenes, extreme scale variations, and dense object distributions from a UAV’s aerial perspective. To address this challenge, we propose AUP-DETR, a novel end-to-end object detection framework for UAVs. This framework, built upon an efficient DETR architecture, features the innovative Fusion with Streamlined Hybrid Core (Fusion-SHC) module. This module effectively fuses low-level spatial details with high-level semantics to strengthen the representation of small aerial objects. Additionally, a Synergistic Spatial Context Fusion (SSCF) module adaptively integrates multi-scale features to generate rich and unified representations for the detection head. Moreover, the proposed Spatial Agent Transformer (SAT) efficiently models global context and long-range dependencies to distinguish heterogeneous objects in complex scenes. To advance related research, we have constructed the Urban Coastal Aerial Detection (UCA-Det) dataset, which is specifically designed for urban port environments. Extensive experiments on our UCA-Det dataset show that AUP-DETR outperforms the YOLO series and other advanced DETR-based models. Our model achieves an mAP50 of 69.68%, representing a 4.41% improvement over the baseline. Furthermore, experiments on the public VisDrone dataset validate its excellent generalization capability and efficiency. This research delivers a robust solution and establishes a new dataset for precise UAV perception in low-altitude economy scenarios. Full article
16 pages, 1776 KB  
Article
Height-Dependent Analysis of UAV Spectrum Occupancy for Cellular Systems Considering 3D Antenna Patterns
by Sung Joon Maeng
Drones 2025, 9(12), 821; https://doi.org/10.3390/drones9120821 - 26 Nov 2025
Abstract
Unmanned aerial vehicles (UAVs) are emerging as mobile aerial platforms for radio frequency (RF) spectrum sensing, enabling dynamic monitoring of the spectrum occupancy of cellular systems at different altitudes. The impact of UAV receiver antenna configurations, particularly with respect to altitude, is critical in determining occupancy performance. In this paper, we present a height-dependent analytical framework for UAV-based spectrum occupancy, focusing on how different receiver antenna configurations affect the sensed signal power. We consider two types of 3D antenna patterns: a typical dipole antenna and a downward directional antenna. Using a stochastic geometry-based approach, we derive closed-form expressions for the altitude-dependent sensed power under both antenna configurations. Moreover, we perform a ray-tracing-based analysis with a real-world 3D map and realistic antenna patterns. Monte Carlo simulations are conducted to validate the analytical results, revealing that both altitude and antenna directivity critically affect occupancy accuracy and coverage. Full article
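The altitude dependence of the sensed power comes from the interplay of slant range and the receiver's elevation-dependent gain. The sketch below uses textbook idealizations (free-space path loss, half-wave dipole pattern, a cosine-squared downward pattern) rather than the paper's closed-form expressions; frequency and transmit power are illustrative.

```python
import math

def sensed_power_db(h, ground_dist, freq_hz=2.0e9, tx_dbm=43.0, antenna="dipole"):
    """Idealized sensed power (dBm) at a UAV receiver at altitude h.

    Free-space path loss over the slant range, plus an elevation-
    dependent gain: a vertical half-wave dipole has a null toward the
    ground directly below, while a downward directional antenna is
    modeled as cos^2 of the off-nadir angle.
    """
    d = math.hypot(h, ground_dist)       # slant range to the transmitter, m
    theta = math.atan2(ground_dist, h)   # angle measured from nadir
    if antenna == "dipole":
        # Half-wave dipole with a vertical axis: null at theta = 0 (straight down).
        gain = (math.cos(0.5 * math.pi * math.cos(theta))
                / max(math.sin(theta), 1e-6)) ** 2
    else:  # downward directional pattern
        gain = math.cos(theta) ** 2 if theta < math.pi / 2 else 0.0
    lam = 3e8 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * d / lam)
    return tx_dbm + 10 * math.log10(max(gain, 1e-12)) - fspl_db

p_dipole = sensed_power_db(120.0, 50.0, antenna="dipole")
p_directional = sensed_power_db(120.0, 50.0, antenna="directional")
```

For a transmitter nearly below the UAV, the downward directional antenna senses far more power than the dipole, whose pattern null points at the ground — the qualitative altitude/directivity interaction the paper analyzes.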
27 pages, 3113 KB  
Article
Multimodal Fusion and Dynamic Resource Optimization for Robust Cooperative Localization of Low-Cost UAVs
by Hongfu Liu, Yajing Fu, Yangyang Ma and Wanpeng Zhang
Drones 2025, 9(12), 820; https://doi.org/10.3390/drones9120820 - 26 Nov 2025
Abstract
To overcome the challenges of low positioning accuracy and inefficient resource utilization in cooperative target localization by unmanned aerial vehicles (UAVs) in complex environments, this paper presents a cooperative localization algorithm that integrates multimodal data fusion with dynamic resource optimization. By leveraging a cross-modal attention mechanism, the algorithm effectively combines complementary information from visual, radar, and lidar sensors, thereby enhancing localization robustness under occlusions, poor illumination, and adverse weather conditions. Furthermore, a real-time resource scheduling model based on integer linear programming is introduced to dynamically allocate computational and communication resources, which mitigates node overload and minimizes resource waste. Experimental evaluations in scenarios including maritime search and rescue, urban occlusions, and dynamic resource fluctuations show that the proposed algorithm achieves significant improvements in positioning accuracy, resource efficiency, and fault recovery, demonstrating its potential as a viable solution for low-cost UAV swarm applications in complex environments. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
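The resource-scheduling objective — assigning tasks to nodes so no node overloads — can be illustrated with an exhaustive search that minimizes the worst node utilization. This brute-force stand-in is exact but only tractable for small swarms; the paper's integer-linear-programming formulation, and the task sizes below, are not taken from the abstract.

```python
from itertools import product

def assign_tasks(task_loads, node_capacities):
    """Exhaustively assign localization tasks to UAV compute nodes.

    Minimizes the worst node utilization (load / capacity), i.e. the
    overload criterion; every possible task-to-node mapping is scored
    and the best kept.
    """
    n = len(node_capacities)
    best, best_util = None, float("inf")
    for assignment in product(range(n), repeat=len(task_loads)):
        loads = [0.0] * n
        for task, node in zip(task_loads, assignment):
            loads[node] += task
        util = max(l / c for l, c in zip(loads, node_capacities))
        if util < best_util:
            best, best_util = assignment, util
    return best, best_util

# Hypothetical: four fusion tasks (GFLOPs) across three UAV nodes.
plan, worst = assign_tasks([4.0, 3.0, 2.0, 2.0], [6.0, 5.0, 4.0])
```

The optimum here keeps every node at or below 80% utilization; an ILP solver reaches the same answer without enumerating all 3^4 mappings, which is what makes the approach real-time at scale.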
47 pages, 150968 KB  
Article
Adaptive Refined Graph Convolutional Action Recognition Network with Enhanced Features for UAV Ground Crew Marshalling
by Qing Zhou, Liheng Dong, Zhaoxiang Zhang, Yuelei Xu, Feng Xiao and Yingxia Wang
Drones 2025, 9(12), 819; https://doi.org/10.3390/drones9120819 - 26 Nov 2025
Abstract
For unmanned aerial vehicle (UAV) ground crew marshalling tasks, the accuracy of skeleton-based action recognition is often limited by the high similarity of motion patterns across action categories as well as variations in individual performance. To address this issue, we propose an adaptive refined graph convolutional network with enhanced features for action recognition. First, a multi-order and motion feature modeling module is constructed, which integrates joint positions, skeletal structures, and angular encodings for multi-granularity representation. Static-domain and dynamic-domain features are then fused to enhance the diversity and expressiveness of the input representations. Second, a data-driven adaptive graph convolution module is designed, where inter-joint interactions are dynamically modeled through a learnable topology. Furthermore, an adaptive refinement feature activation mechanism is introduced to optimize information flow between nodes, enabling fine-grained modeling of skeletal spatial information. Finally, a frame-index semantic temporal modeling module is incorporated, where joint-type semantics and frame-index semantics are introduced in the spatial and temporal dimensions, respectively, to capture the temporal evolution of actions and comprehensively exploit spatio-temporal semantic correlations. The proposed method achieves accuracies of 89.4% and 94.2% under the X-Sub and X-View settings of the NTU-RGB+D 60 benchmark, respectively, and 81.7% and 83.3% under the corresponding settings of NTU-RGB+D 120. On the self-constructed UAV Airfield Ground Crew Dataset, it attains accuracies of 90.71% and 96.09% under the X-Sub and HO settings, respectively. Environmental robustness experiments demonstrate that under complex environmental conditions including illumination variations, haze, rain, shadows, and occlusions, the adoption of the Test + Train strategy reduces the maximum performance degradation from 3.1 percentage points to within 1 percentage point. Real-time performance testing shows that the system achieves an end-to-end inference latency of 24.5 ms (40.8 FPS) on the edge device NVIDIA Jetson Xavier NX, meeting real-time processing requirements and validating the efficiency and practicality of the proposed method on edge computing platforms. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
49 pages, 16677 KB  
Article
A Mission-Oriented Autonomous Missile Evasion Maneuver Decision-Making Method for Unmanned Aerial Vehicle
by Yuequn Luo, Chengwei Ruan, Dali Ding, Zehua Wang, Hang An, Fumin Wang, Mulai Tan, Anqiang Zhou and Huan Zhou
Drones 2025, 9(12), 818; https://doi.org/10.3390/drones9120818 - 26 Nov 2025
Abstract
The aerial game environment is complex. To enhance mission success rates, UAVs must comprehensively consider threats from various directions and distances, as well as autonomous evasion maneuver decision-making methods for multiple UAV platforms, rather than solely focusing on threats from specific directions and distances or decision-making methods for fixed UAV platforms. Accordingly, this study proposes an autonomous missile evasion maneuver decision-making method for UAVs, suitable for multi-scenario and multi-platform transferable mission requirements. A three-dimensional UAV-missile pursuit-evasion model is established, along with state-space, hierarchical maneuver action space, and reward function models for autonomous missile evasion. The auto-regressive multi-hybrid proximal policy optimization (ARMH-PPO) algorithm is proposed for this model, integrating autoregressive network structures and utilizing long short-term memory (LSTM) networks to extract temporal features. Drawing on exploration curriculum learning principles, temporal fusion of process and event reward functions is implemented to jointly guide the agent's learning process through human experience and strategy exploration. Additionally, a proportional-integral-derivative (PID) controller is introduced to govern the UAV's maneuver execution, reducing the coupling between maneuver control quantities and the simulation object. Simulation experiments and result analysis demonstrate that the proposed algorithm ranks first in both average reward value and average evasion success rate, with the average evasion success rate approximately 8% higher than that of the second-ranked algorithm. In the three initial scenarios where the missile is positioned laterally, head-on, and tail-behind the UAV, the UAV's missile evasion success rates are 95%, 70%, and 85%, respectively. Multi-platform simulation results further demonstrate that the decision model exhibits a certain degree of multi-platform transferability. Full article
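The role of the PID layer — decoupling the decision policy's commanded maneuver from the specific airframe — can be sketched with a minimal controller driving a toy integrator plant toward a commanded bank angle. The gains and the plant model are illustrative, not the paper's tuning.

```python
class PID:
    """Minimal PID controller tracking a commanded maneuver value.

    The decision layer outputs a target (e.g. a bank angle); the PID
    turns tracking error into a control rate, so the same policy can
    drive different platforms once the gains are retuned.
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order plant (rate = command) toward a 30-degree bank.
pid, angle = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.02), 0.0
for _ in range(1000):                       # 20 s of simulated time
    angle += pid.step(30.0, angle) * 0.02   # angle settles near the command
```

Because the policy only ever emits the 30-degree target, swapping the plant (a different UAV model) requires retuning the three gains rather than retraining the decision network — the decoupling the abstract describes.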
34 pages, 2994 KB  
Article
What Are the Preferences of Chinese Farmers for Drones (UAVs): Machine Learning in Technology Adoption Behavior
by Fanhao Yang, Jianya Zhao, Jinteng Liu, Zijia Luo, Xingchen Gu and Shu Wang
Drones 2025, 9(12), 817; https://doi.org/10.3390/drones9120817 - 25 Nov 2025
Abstract
With the continuous advancement of sustainable agriculture, drone technology has become a focus of attention. Current research primarily relies on classical models for questionnaire surveys and analyses within specific regions, rather than implementing macro-level investigations that incorporate innovative algorithms. This study designed a survey questionnaire to investigate Chinese farmers' preferences for agricultural drones and their technology adoption mechanisms in the context of sustainable agriculture. The Ant Colony Optimization-Decision Tree (ACO-DT) model and SHAP (Shapley Additive exPlanations) value analysis are applied to analyze the contribution of different indicators to technology adoption. The ACO-DT model outperformed traditional machine learning models, with an accuracy of approximately 0.85, a recall of 0.98, and an F1-score of 0.90, effectively identifying potential drone users. The SHAP analysis showed that "Time Required for Promotion" (average SHAP value exceeding 1.25) and "Understanding of UAV Agriculture" (average SHAP value of about 1.0) were the core influencing factors. Specifically, high-cognition farmers preferred shorter promotion cycles, while the low-cognition group favored longer cycles to reduce decision-making uncertainty. Practically, the study enriches agricultural technology adoption research methodologically and offers references for advancing smart agriculture and optimizing rural production factors. Full article
(This article belongs to the Section Drones in Agriculture and Forestry)
34 pages, 6981 KB  
Article
Increasing Automation on Mission Planning for Heterogeneous Multi-Rotor Drone Fleets in Emergency Response
by Ilham Zerrouk, Esther Salamí, Cristina Barrado, Gautier Hattenberger and Enric Pastor
Drones 2025, 9(12), 816; https://doi.org/10.3390/drones9120816 - 24 Nov 2025
Abstract
Drones are increasingly vital for disaster management, yet emergency fleets often consist of heterogeneous platforms, complicating task allocation. Efficient deployment requires rapid assignment based on vehicle and payload characteristics. This work proposes a three-step method composed of fleet analysis, area decomposition and trajectory generation for multi-rotor drone surveillance, aiming to achieve complete area coverage in minimal time while respecting no-fly zones. The three-step method generates optimized trajectories for all drones in less than 2 min, ensuring uniform precision and reduced flight distance compared to state-of-the-art methods, achieving mean distance gains of up to 9.31% with a homogeneous fleet of 10 drones. Additionally, a comparative analysis of area partitioning algorithms reveals that simplifying the geometry of the surveillance region can lead to more effective divisions and less complex trajectories. This simplification results in approximately 8.4% fewer turns, even if it slightly increases the total area to be covered. Full article
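After area decomposition, each drone typically covers its cell with a back-and-forth (boustrophedon) sweep; alternating pass direction is what keeps the turn count low, the metric the abstract reports. The sketch below generates such a sweep for a rectangular cell with an assumed sensor swath width.

```python
def lawnmower_waypoints(width, height, swath):
    """Boustrophedon (back-and-forth) sweep over a rectangular cell.

    Parallel passes are spaced one sensor swath apart, centered in each
    strip; reversing every other pass means each turn connects adjacent
    pass endpoints instead of flying back across the cell.
    """
    waypoints, y, leftward = [], swath / 2, False
    while y < height:
        row = [(0.0, y), (width, y)]
        waypoints.extend(reversed(row) if leftward else row)
        leftward = not leftward
        y += swath
    return waypoints

# Hypothetical 100 m x 40 m cell with a 10 m swath: four passes.
wps = lawnmower_waypoints(100.0, 40.0, 10.0)
```

Simplifying a cell's geometry before decomposition, as the paper's comparative analysis suggests, trades a slightly larger covered area for fewer of these direction reversals.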

38 pages, 5342 KB  
Article
Risk-Based Design of Urban UAS Corridors
by Cristian Lozano Tafur, Jaime Orduy Rodríguez, Didier Aldana Rodríguez, Danny Stevens Traslaviña, Sebastián Fernández Valencia and Freddy Hernán Celis Ardila
Drones 2025, 9(12), 815; https://doi.org/10.3390/drones9120815 - 24 Nov 2025
Viewed by 83
Abstract
The rapid expansion of Advanced Air Mobility (AAM) and Urban Air Mobility (UAM) poses significant challenges for the integration of Unmanned Aircraft Systems (UAS) into dense urban environments, where safety risks and population exposure are particularly high. This study proposes and applies a [...] Read more.
The rapid expansion of Advanced Air Mobility (AAM) and Urban Air Mobility (UAM) poses significant challenges for the integration of Unmanned Aircraft Systems (UAS) into dense urban environments, where safety risks and population exposure are particularly high. This study proposes and applies a methodology based on probabilistic assessment of both ground and air risk, grounded in the principles of safety management and the use of geospatial data from OpenStreetMap (OSM), official aeronautical charts, and digital urban models. The urban area is discretized into a spatial grid on which independent risks are calculated per cell and later combined through a cumulative probabilistic fusion model. The resulting risk estimates enable the construction of cost matrices compatible with path-search algorithms. The methodology is applied to a case study in Medellín, Colombia, connecting the Oviedo and San Diego shopping centers through Beyond Visual Line of Sight (BVLOS) operations of a DJI FlyCart 30 drone. Results show that planning with the A* algorithm produces safe routes that minimize exposure to critical areas such as hospitals and restricted air corridors, while maintaining operational efficiency metrics. This approach demonstrates a practical bridge between regulatory theory and operational practice in UAM corridor design, offering a replicable solution for risk management in urban scenarios. Full article
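The pipeline the abstract describes, independent per-cell risks combined by cumulative probabilistic fusion (P = 1 − (1 − Pg)(1 − Pa)) and fed to a path-search algorithm as a cost matrix, can be sketched as follows. The 4×4 grid, risk values, and the high-risk "hospital" block are hypothetical, for illustration only:

```python
import heapq

def fuse(p_ground, p_air):
    """Cumulative probabilistic fusion of independent risk layers:
    P = 1 - (1 - Pg)(1 - Pa)."""
    return 1.0 - (1.0 - p_ground) * (1.0 - p_air)

def a_star(cost, start, goal):
    """4-connected A* over a per-cell risk-cost grid. The Manhattan
    heuristic scaled by the minimum cell cost stays admissible."""
    rows, cols = len(cost), len(cost[0])
    cmin = min(min(row) for row in cost)
    h = lambda p: cmin * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return g, path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < rows and 0 <= c < cols:
                g2 = g + cost[r][c]  # cost accrues on entering a cell
                if g2 < best.get((r, c), float("inf")):
                    best[(r, c)] = g2
                    heapq.heappush(open_set, (g2 + h((r, c)), g2, (r, c), path + [(r, c)]))
    return float("inf"), []

# Hypothetical layers: a high-ground-risk block (e.g. a hospital) at the centre.
pg = [[0.01, 0.01, 0.01, 0.01],
      [0.01, 0.90, 0.90, 0.01],
      [0.01, 0.90, 0.90, 0.01],
      [0.01, 0.01, 0.01, 0.01]]
pa = [[0.05] * 4 for _ in range(4)]
cost = [[fuse(pg[r][c], pa[r][c]) for c in range(4)] for r in range(4)]
g, path = a_star(cost, (0, 0), (3, 3))
print(path)  # the route skirts the high-risk centre cells
```

The same structure scales to a city-sized grid; only the risk layers (built from OSM, aeronautical charts, and urban models in the paper) change.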
(This article belongs to the Section Innovative Urban Mobility)

37 pages, 5454 KB  
Article
An Improved Hybrid MRAC–LQR Control Scheme for Robust Quadrotor Altitude and Attitude Regulation
by Abdelrahman A. Alblooshi, Ishaq Hafez and Rached Dhaouadi
Drones 2025, 9(12), 814; https://doi.org/10.3390/drones9120814 - 24 Nov 2025
Viewed by 256
Abstract
This paper presents the design and analysis of a hybrid Model Reference Adaptive Controller combined with a Linear Quadratic Regulator (MRAC–LQR) for a quadrotor unmanned aerial vehicle (UAV), addressing challenges posed by nonlinear dynamics, underactuated configurations, and sensitivity to external disturbances. A baseline [...] Read more.
This paper presents the design and analysis of a hybrid Model Reference Adaptive Controller combined with a Linear Quadratic Regulator (MRAC–LQR) for a quadrotor unmanned aerial vehicle (UAV), addressing challenges posed by nonlinear dynamics, underactuated configurations, and sensitivity to external disturbances. A baseline MRAC scheme is first developed to ensure stable tracking under varying payloads and wind disturbances. The proposed cascaded hybrid MRAC–LQR framework incorporates integral action to improve steady-state accuracy while preserving the original adaptive update laws. Performance is compared to the existing parallel MRAC–LQR and MRAC–PID control schemes. Simulation results on a nonlinear quadrotor model demonstrate that MRAC–LQR significantly enhances tracking accuracy and disturbance rejection. While MRAC–PID achieves slightly lower tracking error at the cost of higher control effort, MRAC–LQR offers smoother transients and greater control efficiency. Full article
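A scalar analogue of the adaptive element can be sketched with the classical MIT-rule MRAC on a first-order plant. This omits the LQR stage, the integral action, and the full quadrotor dynamics; it is only a minimal illustration of model-reference adaptation, not the paper's cascaded controller:

```python
def simulate_mrac(k=2.0, gamma=0.5, dt=0.01, t_end=50.0):
    """Scalar MRAC sketch (MIT rule): adapt a feedforward gain theta so
    the plant dy/dt = -y + k*theta*r tracks the reference model
    dym/dt = -ym + r. The ideal gain is theta* = 1/k."""
    y = ym = theta = 0.0
    r = 1.0  # unit step reference
    for _ in range(int(t_end / dt)):
        e = y - ym                       # tracking error
        y += dt * (-y + k * theta * r)   # plant with adaptive feedforward
        ym += dt * (-ym + r)             # reference model
        theta += dt * (-gamma * e * ym)  # MIT-rule gradient update
    return theta, abs(y - ym)

theta, err = simulate_mrac()
print(theta, err)  # theta approaches the ideal gain 1/k = 0.5
```

In the paper's hybrid scheme, this adaptive loop is combined with an LQR inner loop and integral action so that steady-state accuracy does not rest on the adaptation alone.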

19 pages, 875 KB  
Article
CogMUS: A Soar-Based Cognitive Framework for Mission Understanding in Multi-UAV Cooperative Operation
by Jiaxin Hu, Tao Wang, Hongrun Wang and Jingshuai Cao
Drones 2025, 9(12), 813; https://doi.org/10.3390/drones9120813 - 24 Nov 2025
Viewed by 132
Abstract
The cooperative operation of multiple Unmanned Aerial Vehicles (multi-UAV) is emerging as a pivotal trend in future complex autonomous systems. To enable accurate mission understanding and efficient collaboration among UAVs in complex, dynamic, and uncertain operational environments, this paper introduces CogMUS, a novel [...] Read more.
The cooperative operation of multiple Unmanned Aerial Vehicles (multi-UAV) is emerging as a pivotal trend in future complex autonomous systems. To enable accurate mission understanding and efficient collaboration among UAVs in complex, dynamic, and uncertain operational environments, this paper introduces CogMUS, a novel cooperative mission understanding framework based on the Soar cognitive architecture. We first construct a mission understanding framework for UAV operations centered around five typical mission categories. Building on this foundation, we design a distributed cognitive model where each UAV is equipped with a Soar agent. This model leverages the synergy of working memory (WM), long-term memory (LTM), and the decision cycle (DC) to achieve key functionalities, including hierarchical mission decomposition, dynamic task allocation, and proactive airspace conflict detection and resolution. Through comprehensive simulation experiments, we validate the performance of the proposed CogMUS framework across key metrics, including task understanding accuracy, cooperative efficiency, and overall task completion rate. The results demonstrate that CogMUS exhibits superior adaptability to diverse scenarios, as well as remarkable scalability and robustness. Full article
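Soar's production-rule decision cycle is beyond a short example, but the hierarchical mission decomposition and dynamic task allocation it drives can be approximated with a greedy sketch. The mission templates, task names, and coordinates below are hypothetical, for illustration only:

```python
def decompose(mission):
    """Hierarchical decomposition sketch: a mission type expands into
    typed subtasks (hypothetical templates, not the paper's five
    mission categories)."""
    templates = {
        "area_search": ["survey_ne", "survey_nw", "survey_se", "survey_sw"],
        "relay": ["hold_station"],
    }
    return templates.get(mission, [mission])

def allocate(subtasks, uav_positions, task_positions):
    """Greedy allocation: each subtask goes to the nearest free UAV
    (squared straight-line distance as cost, one task per UAV)."""
    assignment, free = {}, set(uav_positions)
    for task in subtasks:
        tx, ty = task_positions[task]
        best = min(free, key=lambda u: (uav_positions[u][0] - tx) ** 2
                                     + (uav_positions[u][1] - ty) ** 2)
        assignment[task] = best
        free.discard(best)
        if not free:
            break
    return assignment

uavs = {"u1": (0, 0), "u2": (10, 0), "u3": (0, 10), "u4": (10, 10)}
tasks = {"survey_ne": (9, 9), "survey_nw": (1, 9),
         "survey_se": (9, 1), "survey_sw": (1, 1)}
plan = allocate(decompose("area_search"), uavs, tasks)
print(plan)  # each quadrant task lands on the UAV nearest to it
```

In CogMUS, the equivalent decisions are produced by each agent's Soar decision cycle over working and long-term memory rather than a centralized greedy loop.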

21 pages, 3527 KB  
Article
Real-Time Long-Range Control of an Autonomous UAV Using 4G LTE Network
by Mohamed Ahmed Mahrous Mohamed and Yesim Oniz
Drones 2025, 9(12), 812; https://doi.org/10.3390/drones9120812 - 21 Nov 2025
Viewed by 360
Abstract
The operational range and reliability of most commercially available UAVs employed in surveillance, agriculture, and infrastructure inspection missions are limited due to the use of short-range radio frequency connections. To alleviate this issue, the present work investigates the possibility of real-time long-distance UAV [...] Read more.
The operational range and reliability of most commercially available UAVs employed in surveillance, agriculture, and infrastructure inspection missions are limited by the use of short-range radio frequency connections. To alleviate this issue, the present work investigates the feasibility of real-time long-distance UAV control over a commercial 4G LTE network. The proposed system consists of a Raspberry Pi 4B as the onboard computer, connected to a Pixhawk-2.4 flight controller mounted on an F450 quadcopter platform. Flight tests were carried out in open-field conditions at altitudes up to 50 m above ground level (AGL). Communication between the UAV and the ground control station is established using TCP and UDP protocols. The flight tests demonstrated stable remote control operation, maintaining an average control delay of under 150 ms and a video resolution of 640×480, with LTE bandwidth ranging from 3 Mbps to 55 Mbps. The farthest recorded test distance of around 4200 km from the UAV to the operator also indicates the capability of LTE systems for beyond-visual-line-of-sight operations. The results show that 4G LTE offers an effective method for extending UAV range at a reasonable cost, although limitations remain in terms of network performance, flight time, and regulatory compliance. This study establishes essential groundwork for future UAV operations that will utilize 5G/6G and satellite communication systems. Full article
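The control-delay measurement the abstract reports can be approximated in principle with a standard-library UDP heartbeat loop. The sketch below runs against a local echo thread standing in for the onboard companion computer, so the numbers it prints reflect loopback latency rather than an LTE link:

```python
import socket
import threading
import time

def echo_server(sock):
    """Echo each datagram back to its sender (stand-in for the UAV's
    onboard computer replying over the LTE link)."""
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"stop":
            break
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # OS-assigned port on loopback
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
rtts = []
for seq in range(20):
    payload = f"hb:{seq}".encode()
    t0 = time.perf_counter()
    client.sendto(payload, ("127.0.0.1", port))
    data, _ = client.recvfrom(1024)
    rtts.append((time.perf_counter() - t0) * 1000.0)  # round trip, ms
    assert data == payload
client.sendto(b"stop", ("127.0.0.1", port))
avg_ms = sum(rtts) / len(rtts)
print(f"avg round-trip: {avg_ms:.3f} ms over {len(rtts)} heartbeats")
```

Replacing the loopback address with the drone's LTE endpoint turns the same loop into a field latency probe; UDP matches the low-latency telemetry path, while TCP suits command-and-control traffic that must not be dropped.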

30 pages, 34352 KB  
Review
Infrared and Visible Image Fusion Techniques for UAVs: A Comprehensive Review
by Junjie Li, Cunzheng Fan, Congyang Ou and Haokui Zhang
Drones 2025, 9(12), 811; https://doi.org/10.3390/drones9120811 - 21 Nov 2025
Viewed by 404
Abstract
Infrared–visible (IR–VIS) image fusion is becoming central to unmanned aerial vehicle (UAV) perception, enabling robust operation across day–night cycles, backlighting, haze or smoke, and large viewpoint or scale changes. However, for practical applications some challenges still remain: visible images are illumination-sensitive; infrared imagery [...] Read more.
Infrared–visible (IR–VIS) image fusion is becoming central to unmanned aerial vehicle (UAV) perception, enabling robust operation across day–night cycles, backlighting, haze or smoke, and large viewpoint or scale changes. However, several challenges remain for practical applications: visible images are illumination-sensitive; infrared imagery suffers from thermal crossover and weak texture; motion and parallax cause cross-modal misalignment; UAV scenes contain many small or fast targets; and onboard platforms face strict latency, power, and bandwidth budgets. Given these UAV-specific challenges and constraints, we provide a UAV-centric synthesis of IR–VIS fusion. We: (i) propose a taxonomy linking data compatibility, fusion mechanisms, and task adaptivity; (ii) critically review learning-based methods, including autoencoders, CNNs, GANs, Transformers, and emerging paradigms; (iii) compare explicit/implicit registration strategies and general-purpose fusion frameworks; and (iv) consolidate datasets and evaluation metrics to reveal UAV-specific gaps. We further identify open challenges in benchmarking, metrics, lightweight design, and integration with downstream detection, segmentation, and tracking, offering guidance for real-world deployment. A continuously updated bibliography and accompanying resources are provided and discussed in the main text. Full article
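Two simple non-learned fusion baselines, pixel-wise max-selection and weighted averaging, can be sketched in a few lines. The toy 3×3 arrays below stand in for registered IR and visible frames; these classical rules are the baselines against which the learning-based methods in the review are compared, not methods from the review itself:

```python
def fuse_max(ir, vis):
    """Pixel-wise max-selection fusion: keep whichever modality has the
    stronger response at each pixel (a classical baseline)."""
    return [[max(a, b) for a, b in zip(r1, r2)] for r1, r2 in zip(ir, vis)]

def fuse_weighted(ir, vis, w_ir=0.5):
    """Pixel-wise weighted-average fusion with IR weight w_ir."""
    return [[w_ir * a + (1 - w_ir) * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(ir, vis)]

# Toy 3x3 "images": a hot target visible only in IR, texture only in VIS.
ir  = [[0.1, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.1]]
vis = [[0.4, 0.5, 0.4], [0.5, 0.2, 0.5], [0.4, 0.5, 0.4]]
fused = fuse_max(ir, vis)
print(fused[1][1])  # 0.9: the IR hotspot survives fusion
```

Both rules assume the inputs are already registered; the explicit/implicit registration strategies surveyed in the review address exactly the cross-modal misalignment that breaks such per-pixel operations.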
