Search Results (414)

Search Parameters:
Keywords = UAV surveillance

24 pages, 4519 KiB  
Article
Aerial Autonomy Under Adversity: Advances in Obstacle and Aircraft Detection Techniques for Unmanned Aerial Vehicles
by Cristian Randieri, Sai Venkata Ganesh, Rayappa David Amar Raj, Rama Muni Reddy Yanamala, Archana Pallakonda and Christian Napoli
Drones 2025, 9(8), 549; https://doi.org/10.3390/drones9080549 - 4 Aug 2025
Viewed by 164
Abstract
Unmanned Aerial Vehicles (UAVs) have rapidly expanded into many essential applications, including surveillance, disaster response, agriculture, and urban monitoring. However, for UAVs to navigate safely and autonomously, the ability to detect obstacles and nearby aircraft remains crucial, especially under harsh environmental conditions. This study comprehensively analyzes the recent landscape of obstacle and aircraft detection techniques tailored to UAVs operating in difficult scenarios such as fog, rain, smoke, low light, motion blur, and cluttered environments. It starts with a detailed discussion of key detection challenges and continues with an evaluation of different sensor types, from RGB and infrared cameras to LiDAR, radar, sonar, and event-based vision sensors. Both classical computer vision methods and deep learning-based detection techniques are examined in detail, highlighting their performance strengths and limitations under degraded sensing conditions. The paper additionally offers an overview of suitable UAV-specific datasets and the metrics commonly used to evaluate detection systems. Finally, the paper examines open problems and emerging research directions, emphasising the demand for lightweight, adaptive, and weather-resilient detection systems appropriate for real-time onboard processing. This study aims to guide students and engineers towards developing more robust and intelligent detection systems for next-generation UAV operations. Full article

32 pages, 6588 KiB  
Article
Path Planning for Unmanned Aerial Vehicle: A-Star-Guided Potential Field Method
by Jaewan Choi and Younghoon Choi
Drones 2025, 9(8), 545; https://doi.org/10.3390/drones9080545 - 1 Aug 2025
Viewed by 368
Abstract
The utilization of Unmanned Aerial Vehicles (UAVs) in missions such as reconnaissance and surveillance has grown rapidly, underscoring the need for efficient path planning algorithms that ensure both optimality and collision avoidance. The A-star algorithm is widely used for global path planning due to its ability to generate optimal routes; however, its high computational cost makes it unsuitable for real-time applications, particularly in unknown or dynamic environments. For local path planning, the Artificial Potential Field (APF) algorithm enables real-time navigation by attracting the UAV toward the target while repelling it from obstacles. Despite its efficiency, APF suffers from local minima and limited performance in dynamic settings. To address these challenges, this paper proposes the A-star-Guided Potential Field (AGPF) algorithm, which integrates the strengths of A-star and APF to achieve robust performance in both global and local path planning. The AGPF algorithm was validated through simulations conducted in the Robot Operating System (ROS) environment. Simulation results demonstrate that AGPF produces smoother and more optimal paths than A-star, while avoiding the local minima issues inherent in APF. Furthermore, AGPF effectively handles moving and previously unknown obstacles by generating real-time avoidance trajectories, demonstrating strong adaptability in dynamic and uncertain environments. Full article
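For readers skimming the APF side of the AGPF abstract above, the attractive/repulsive force balance that classical APF builds on fits in a few lines. This is a generic sketch: the gains, influence radius, and toy geometry below are illustrative, not values from the paper.

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Classic artificial-potential-field force: attraction toward the
    goal plus repulsion from every obstacle inside influence radius d0.
    The gains k_att, k_rep and radius d0 are illustrative defaults."""
    force = k_att * (goal - pos)                      # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # Repulsion grows sharply as the vehicle nears the obstacle
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return force

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([3.0, 0.5])]      # one obstacle near the direct path
step = apf_force(pos, goal, obstacles)  # pulled toward goal, pushed off obstacle
```

Local minima arise when the attractive and repulsive terms cancel; AGPF's use of an A*-derived global path is one way to keep the local planner from stalling there.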

24 pages, 5286 KiB  
Article
Graph Neural Network-Enhanced Multi-Agent Reinforcement Learning for Intelligent UAV Confrontation
by Kunhao Hu, Hao Pan, Chunlei Han, Jianjun Sun, Dou An and Shuanglin Li
Aerospace 2025, 12(8), 687; https://doi.org/10.3390/aerospace12080687 - 31 Jul 2025
Viewed by 211
Abstract
Unmanned aerial vehicles (UAVs) are widely used in surveillance and combat for their efficiency and autonomy, whilst complex, dynamic environments challenge the modeling of inter-agent relations and information transmission. This research proposes a novel UAV tactical decision-making algorithm utilizing graph neural networks to tackle these challenges. The proposed algorithm employs a graph neural network to process the observed state information, the convolved output of which is then fed into a reconstructed critic network incorporating a Laplacian convolution kernel. This research first enhances the accuracy of obtaining unstable state information in hostile environments. The proposed algorithm uses this information to train a more precise critic network. In turn, this improved critic network guides the actor network to make decisions that better meet the needs of the battlefield. Coupled with a policy transfer mechanism, this architecture significantly enhances the decision-making efficiency and environmental adaptability within the multi-agent system. Results from the experiments show that the average effectiveness of the proposed algorithm across the six designed scenarios is 97.4%, surpassing the baseline by 23.4%. In addition, the integration of transfer learning makes the network convergence speed three times faster than that of the baseline algorithm. This algorithm effectively improves the information transmission efficiency between the environment and the UAV and provides strong support for UAV formation combat. Full article
(This article belongs to the Special Issue New Perspective on Flight Guidance, Control and Dynamics)

27 pages, 405 KiB  
Article
Comparative Analysis of Centralized and Distributed Multi-UAV Task Allocation Algorithms: A Unified Evaluation Framework
by Yunze Song, Zhexuan Ma, Nuo Chen, Shenghao Zhou and Sutthiphong Srigrarom
Drones 2025, 9(8), 530; https://doi.org/10.3390/drones9080530 - 28 Jul 2025
Viewed by 374
Abstract
Unmanned aerial vehicles (UAVs), commonly known as drones, offer unprecedented flexibility for complex missions such as area surveillance, search and rescue, and cooperative inspection. This paper presents a unified evaluation framework for the comparison of centralized and distributed task allocation algorithms specifically tailored to multi-UAV operations. We first contextualize the classical assignment problem (AP) under UAV mission constraints, including the flight time, propulsion energy capacity, and communication range, and evaluate optimal one-to-one solvers including the Hungarian algorithm, the Bertsekas ϵ-auction algorithm, and a minimum cost maximum flow formulation. To reflect the dynamic, uncertain environments that UAV fleets encounter, we extend our analysis to distributed multi-UAV task allocation (MUTA) methods. In particular, we examine the consensus-based bundle algorithm (CBBA) and a distributed auction 2-opt refinement strategy, both of which iteratively negotiate task bundles across UAVs to accommodate real-time task arrivals and intermittent connectivity. Finally, we outline how reinforcement learning (RL) can be incorporated to learn adaptive policies that balance energy efficiency and mission success under varying wind conditions and obstacle fields. Through simulations incorporating UAV-specific cost models and communication topologies, we assess each algorithm’s mission completion time, total energy expenditure, communication overhead, and resilience to UAV failures. Our results highlight the trade-off between strict optimality, which is suitable for small fleets in static scenarios, and scalable, robust coordination, necessary for large, dynamic multi-UAV deployments. Full article
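The classical one-to-one assignment problem (AP) that the centralized solvers above address can be reproduced in a few lines. The brute-force solver below is for illustration only (the Hungarian algorithm solves the same problem in O(n^3)), and the cost matrix is a toy example, not data from the paper.

```python
from itertools import permutations

def optimal_assignment(cost):
    """Exhaustive optimal one-to-one assignment: returns the task index
    assigned to each UAV and the total cost. Exponential in n, so only
    usable for tiny fleets; the Hungarian algorithm scales as O(n^3)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][j] for i, j in enumerate(perm))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# Toy cost matrix: cost[i][j] = energy for UAV i to service task j
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]
assignment, total = optimal_assignment(cost)  # UAV 0 -> task 1, 1 -> 0, 2 -> 2
```

Distributed methods such as CBBA reach comparable assignments by iterative bidding rather than a single centralized solve, which is what makes them usable under intermittent connectivity.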

24 pages, 12286 KiB  
Article
A UAV-Based Multi-Scenario RGB-Thermal Dataset and Fusion Model for Enhanced Forest Fire Detection
by Yalin Zhang, Xue Rui and Weiguo Song
Remote Sens. 2025, 17(15), 2593; https://doi.org/10.3390/rs17152593 - 25 Jul 2025
Viewed by 461
Abstract
UAVs are essential for forest fire detection given vast forest areas and the inaccessibility of high-risk zones, enabling rapid long-range inspection and detailed close-range surveillance. However, aerial photography faces challenges such as multi-scale target recognition and complex scenario adaptation (e.g., deformation, occlusion, lighting variations). RGB-Thermal fusion methods effectively integrate visible-light texture and thermal infrared temperature features, but current approaches are constrained by limited datasets and insufficient exploitation of cross-modal complementary information, ignoring cross-level feature interaction. To address data scarcity in wildfire scenarios, we constructed RGBT-3M, a time-synchronized, multi-scene, multi-angle aerial RGB-Thermal dataset with “Smoke–Fire–Person” annotations and modal alignment via the M-RIFT method. We then propose CP-YOLOv11-MF, a fusion detection model based on the YOLOv11 framework that progressively learns the complementary heterogeneous features of each modality. Experimental validation confirms the superiority of our method, with a precision of 92.5%, a recall of 93.5%, a mAP50 of 96.3%, and a mAP50-95 of 62.9%. The model’s RGB-Thermal fusion capability enhances early fire detection, offering a benchmark dataset and methodological advancement for intelligent forest conservation, with implications for AI-driven ecological protection. Full article
(This article belongs to the Special Issue Advances in Spectral Imagery and Methods for Fire and Smoke Detection)

20 pages, 6748 KiB  
Article
YOLO-SSFA: A Lightweight Real-Time Infrared Detection Method for Small Targets
by Yuchi Wang, Minghua Cao, Qing Yang, Yue Zhang and Zexuan Wang
Information 2025, 16(7), 618; https://doi.org/10.3390/info16070618 - 20 Jul 2025
Viewed by 507
Abstract
Infrared small target detection is crucial for military surveillance and autonomous driving. However, complex scenes and weak signal characteristics make the identification of such targets particularly difficult. This study proposes YOLO-SSFA, an enhanced You Only Look Once version 11 (YOLOv11) model with three modules: Scale-Sequence Feature Fusion (SSFF), LiteShiftHead detection head, and Noise Suppression Network (NSN). SSFF improves multi-scale feature representation through adaptive fusion; LiteShiftHead boosts efficiency via sparse convolution and dynamic integration; and NSN enhances localization accuracy by focusing on key regions. Experiments on the HIT-UAV and FLIR datasets show mAP50 scores of 94.9% and 85%, respectively. These findings showcase YOLO-SSFA’s strong potential for real-time deployment in challenging infrared environments. Full article

21 pages, 3826 KiB  
Article
UAV-OVD: Open-Vocabulary Object Detection in UAV Imagery via Multi-Level Text-Guided Decoding
by Lijie Tao, Guoting Wei, Zhuo Wang, Zhaoshuai Qi, Ying Li and Haokui Zhang
Drones 2025, 9(7), 495; https://doi.org/10.3390/drones9070495 - 14 Jul 2025
Viewed by 532
Abstract
Object detection in drone-captured imagery has attracted significant attention due to its wide range of real-world applications, including surveillance, disaster response, and environmental monitoring. Most existing methods are developed under closed-set assumptions, and although some recent studies have begun to explore open-vocabulary or open-world detection, their application to UAV imagery remains limited and underexplored. In this paper, we address this limitation by exploiting the relationship between images and textual semantics to extend object detection in UAV imagery to an open-vocabulary setting. We propose a novel and efficient detector named Unmanned Aerial Vehicle Open-Vocabulary Detector (UAV-OVD), specifically designed for drone-captured scenes. To facilitate open-vocabulary object detection, we propose improvements from three complementary perspectives. First, at the training level, we design a region–text contrastive loss to replace the conventional classification loss, allowing the model to align visual regions with textual descriptions beyond fixed category sets. Second, at the structural level, we introduce a multi-level text-guided fusion decoder that integrates visual features across multiple spatial scales under language guidance, improving overall detection performance and enhancing the representation and perception of small objects. Finally, from the data perspective, we enrich the original dataset with synonym-augmented category labels, enabling more flexible and semantically expressive supervision. Experiments on two widely used benchmark datasets demonstrate that our approach achieves significant improvements in both mAP and recall. For Zero-Shot Detection on xView, UAV-OVD achieves 9.9 mAP and 67.3 recall, 1.1 and 25.6 points higher than those of YOLO-World. In terms of speed, UAV-OVD achieves 53.8 FPS, nearly twice as fast as YOLO-World and five times faster than DetrReg, demonstrating its strong potential for real-time open-vocabulary detection in UAV imagery. Full article
(This article belongs to the Special Issue Applications of UVs in Digital Photogrammetry and Image Processing)

29 pages, 16466 KiB  
Article
DMF-YOLO: Dynamic Multi-Scale Feature Fusion Network-Driven Small Target Detection in UAV Aerial Images
by Xiaojia Yan, Shiyan Sun, Huimin Zhu, Qingping Hu, Wenjian Ying and Yinglei Li
Remote Sens. 2025, 17(14), 2385; https://doi.org/10.3390/rs17142385 - 10 Jul 2025
Viewed by 551
Abstract
Target detection in UAV aerial images has found increasingly widespread applications in emergency rescue, maritime monitoring, and environmental surveillance. However, traditional detection models suffer significant performance degradation due to challenges including substantial scale variations, high proportions of small targets, and dense occlusions in UAV-captured images. To address these issues, this paper proposes DMF-YOLO, a high-precision detection network based on YOLOv10 improvements. First, we design Dynamic Dilated Snake Convolution (DDSConv) to adaptively adjust the receptive field and dilation rate of convolution kernels, enhancing local feature extraction for small targets with weak textures. Second, we construct a Multi-scale Feature Aggregation Module (MFAM) that integrates dual-branch spatial attention mechanisms to achieve efficient cross-layer feature fusion, mitigating information conflicts between shallow details and deep semantics. Finally, we propose an Expanded Window-based Bounding Box Regression Loss Function (EW-BBRLF), which optimizes localization accuracy through dynamic auxiliary bounding boxes, effectively reducing missed detections of small targets. Experiments on the VisDrone2019 and HIT-UAV datasets demonstrate that DMF-YOLOv10 achieves 50.1% and 81.4% mAP50, respectively, significantly outperforming the baseline YOLOv10s by 27.1% and 2.6%, with parameter increases limited to 24.4% and 11.9%. The method exhibits superior robustness in dense scenarios, complex backgrounds, and long-range target detection. This approach provides an efficient solution for UAV real-time perception tasks and offers novel insights for multi-scale object detection algorithm design. Full article

22 pages, 3045 KiB  
Article
Optimization of RIS-Assisted 6G NTN Architectures for High-Mobility UAV Communication Scenarios
by Muhammad Shoaib Ayub, Muhammad Saadi and Insoo Koo
Drones 2025, 9(7), 486; https://doi.org/10.3390/drones9070486 - 10 Jul 2025
Viewed by 503
Abstract
The integration of reconfigurable intelligent surfaces (RISs) with non-terrestrial networks (NTNs), particularly those enabled by unmanned aerial vehicles (UAVs) or drone-based platforms, has emerged as a transformative approach to enhance 6G connectivity in high-mobility scenarios. UAV-assisted NTNs offer flexible deployment, dynamic altitude control, and rapid network reconfiguration, making them ideal candidates for RIS-based signal optimization. However, the high mobility of UAVs and their three-dimensional trajectory dynamics introduce unique challenges in maintaining robust, low-latency links and seamless handovers. This paper presents a comprehensive performance analysis of RIS-assisted UAV-based NTNs, focusing on optimizing RIS phase shifts to maximize the signal-to-interference-plus-noise ratio (SINR), throughput, energy efficiency, and reliability under UAV mobility constraints. A joint optimization framework is proposed that accounts for UAV path loss, aerial shadowing, interference, and user mobility patterns, tailored specifically for aerial communication networks. Extensive simulations are conducted across various UAV operation scenarios, including urban air corridors, rural surveillance routes, drone swarms, emergency response, and aerial delivery systems. The results reveal that RIS deployment significantly enhances the SINR and throughput while navigating energy and latency trade-offs in real time. These findings offer vital insights for deploying RIS-enhanced aerial networks in 6G, supporting mission-critical drone applications and next-generation autonomous systems. Full article
(This article belongs to the Section Drone Communications)
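The SINR gain from RIS phase-shift optimization discussed above comes from co-phasing the reflected paths so they add coherently at the receiver. A minimal narrowband sketch, with randomly drawn channels standing in for the base-station-to-RIS and RIS-to-UAV links:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                               # RIS elements
h = rng.normal(size=n) + 1j * rng.normal(size=n)     # base station -> RIS channel
g = rng.normal(size=n) + 1j * rng.normal(size=n)     # RIS -> UAV channel

# The optimal phase shift of element k cancels the phase of h_k * g_k,
# so all n reflected paths arrive in phase and add coherently.
theta = -np.angle(h * g)
aligned = np.abs(np.sum(h * g * np.exp(1j * theta))) ** 2
baseline = np.abs(np.sum(h * g)) ** 2                # unoptimized phases
```

With co-phasing, the received power equals `(sum_k |h_k g_k|)^2`, the triangle-inequality upper bound over all phase configurations; under UAV mobility the challenge is re-solving `theta` fast enough as `g` changes.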

19 pages, 598 KiB  
Article
Trajectory Planning and Optimisation for Following Drone to Rendezvous Leading Drone by State Estimation with Adaptive Time Horizon
by Javier Lee Hongrui and Sutthiphong Srigrarom
Aerospace 2025, 12(7), 606; https://doi.org/10.3390/aerospace12070606 - 4 Jul 2025
Viewed by 360
Abstract
With the rapid proliferation of drones for many purposes, counter-drone technology has become crucial. This expansion has created significant opportunities and applications, such as aerial surveillance, delivery services, agriculture monitoring, and, most importantly, security operations. Due to the relative simplicity of learning and operating a small-scale UAV, malicious organizations can field UAVs (drones) that pose substantial threats. Their interception may then be hindered by evasive manoeuvres performed by the malicious UAV (mUAV). Novice operators may also unintentionally fly UAVs into restricted airspace such as civilian airports, posing a hazard to other air operations. This paper explores predictive trajectory methods for the neutralisation of mUAVs by following drones, using state estimation techniques such as the extended Kalman filter (EKF) and particle filter (PF). Interception strategies and optimization techniques are analysed to improve interception efficiency and robustness. The novelty introduced by this paper is the implementation of an adaptive time horizon (ATH) and velocity control (VC) in the predictive process. Simulations in MATLAB were used to evaluate the effectiveness of trajectory prediction models and interception strategies against evasive manoeuvres. The tests demonstrated the following: the EKF predictive method achieved a significantly higher neutralisation rate (41%) than the PF method (30%) in linear trajectory scenarios, and a similar neutralisation rate of 5% in stochastic trajectory scenarios. After incorporating the adaptive time horizon (ATH) and velocity control (VC) measures, the EKF method achieved a 98% neutralisation rate, demonstrating a significant improvement in performance. Full article
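For a constant-velocity motion model with position-only measurements, the EKF prediction step used in abstracts like the one above reduces to the linear Kalman recursion. The noise covariances and measurement values below are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity dynamics
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = 1e-3 * np.eye(4)                        # process noise (illustrative)
R = 1e-2 * np.eye(2)                        # measurement noise (illustrative)

def kf_step(x, P, z):
    """One predict-update cycle of the (linear) Kalman filter."""
    # Predict: propagate state and covariance through the motion model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the position measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x = np.array([0.0, 0.0, 1.0, 0.0])   # start at origin, 1 m/s along x
P = np.eye(4)
z = np.array([0.1, 0.0])             # position measured after one step
x, P = kf_step(x, P, z)
```

The full EKF replaces `F` and `H` with Jacobians of nonlinear motion and measurement models; the adaptive time horizon in the paper governs how far ahead such predictions are rolled out.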

15 pages, 2091 KiB  
Review
AI Roles in 4R Crop Pest Management—A Review
by Hengyuan Yang, Yuexia Jin, Lili Jiang, Jia Lu and Guoqi Wen
Agronomy 2025, 15(7), 1629; https://doi.org/10.3390/agronomy15071629 - 3 Jul 2025
Viewed by 1012
Abstract
Insect pests are a major threat to agricultural production, causing significant crop yield reductions annually. Integrated pest management (IPM) is well-studied, but its precise application in farmlands is still challenging due to variable weather, diverse insect behaviors, crop variability, and soil heterogeneity. Recent advancements in Artificial Intelligence (AI) have shown the potential to revolutionize pest management by implementing 4R pest stewardship: right pest identification, right method selection, right control timing, and right action taken. This review explores the roles of AI technologies within the 4R framework, highlighting AI models for accurate pest identification, computer vision systems for real-time monitoring, predictive analytics for optimizing control timing, and tools for selecting and applying pest control measures. Innovations in remote sensing, UAV surveillance, and IoT-enabled smart traps further strengthen pest monitoring and intervention strategies. By integrating AI into 4R pest management, this study underscores the potential of precision agriculture to develop sustainable, adaptive, and highly efficient pest control systems. Despite these advancements, challenges persist in data availability, model generalization, and economic feasibility for widespread adoption. The lack of interpretability in AI models also makes some agronomists hesitant to adopt these technologies. Future research should focus on scalable AI solutions, interdisciplinary collaborations, and real-world validation to enhance AI-driven pest management in field crops. Full article

19 pages, 3044 KiB  
Review
Deep Learning-Based Sound Source Localization: A Review
by Kunbo Xu, Zekai Zong, Dongjun Liu, Ran Wang and Liang Yu
Appl. Sci. 2025, 15(13), 7419; https://doi.org/10.3390/app15137419 - 2 Jul 2025
Viewed by 628
Abstract
As a fundamental technology in environmental perception, sound source localization (SSL) plays a critical role in public safety, marine exploration, and smart home systems. However, traditional methods such as beamforming and time-delay estimation rely on manually designed physical models and idealized assumptions, which struggle to meet practical demands in dynamic and complex scenarios. Recent advancements in deep learning have revolutionized SSL by leveraging its end-to-end feature adaptability, cross-scenario generalization capabilities, and data-driven modeling, significantly enhancing localization robustness and accuracy in challenging environments. This review systematically examines the progress of deep learning-based SSL across three critical domains: marine environments, indoor reverberant spaces, and unmanned aerial vehicle (UAV) monitoring. In marine scenarios, complex-valued convolutional networks combined with adversarial transfer learning mitigate environmental mismatch and multipath interference through phase information fusion and domain adaptation strategies. For indoor high-reverberation conditions, attention mechanisms and multimodal fusion architectures achieve precise localization under low signal-to-noise ratios by adaptively weighting critical acoustic features. In UAV surveillance, lightweight models integrated with spatiotemporal Transformers address dynamic modeling of non-stationary noise spectra and edge computing efficiency constraints. Despite these advancements, current approaches face three core challenges: the insufficient integration of physical principles, prohibitive data annotation costs, and the trade-off between real-time performance and accuracy. Future research should prioritize physics-informed modeling to embed acoustic propagation mechanisms, unsupervised domain adaptation to reduce reliance on labeled data, and sensor-algorithm co-design to optimize hardware-software synergy. These directions aim to propel SSL toward intelligent systems characterized by high precision, strong robustness, and low power consumption. This work provides both theoretical foundations and technical references for algorithm selection and practical implementation in complex real-world scenarios. Full article
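The classical time-delay estimation baseline that the review above contrasts with deep methods can be illustrated with GCC-PHAT, which whitens the cross-spectrum so only phase (and hence delay) information remains. The signals below are synthetic.

```python
import numpy as np

def gcc_phat_delay(sig, ref):
    """Estimate the delay (in samples) of sig relative to ref using the
    phase-transform (PHAT) weighted generalized cross-correlation."""
    n = len(sig) + len(ref)
    cross = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cross /= np.maximum(np.abs(cross), 1e-12)     # PHAT: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    # Rearrange so index 0 corresponds to lag -max_shift
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return int(np.argmax(np.abs(cc)) - max_shift)

rng = np.random.default_rng(1)
ref = rng.normal(size=512)       # reference microphone signal
sig = np.roll(ref, 8)            # second microphone: delayed by 8 samples
delay = gcc_phat_delay(sig, ref)
```

Pairwise delays like this one, combined across a microphone array with known geometry, yield the source direction; the review's point is that this pipeline degrades in reverberant or non-stationary conditions where learned models hold up better.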

19 pages, 2267 KiB  
Article
Closed-Loop Aerial Tracking with Dynamic Detection-Tracking Coordination
by Yang Wang, Heqing Huang, Jiahao He, Dongting Han and Zhiwei Zhao
Drones 2025, 9(7), 467; https://doi.org/10.3390/drones9070467 - 30 Jun 2025
Viewed by 368
Abstract
Aerial tracking is an important service for many Unmanned Aerial Vehicle (UAV) applications. Existing work has failed to provide robust solutions when handling target disappearance, viewpoint changes, and tracking drifts in practical scenarios with limited UAV resources. In this paper, we propose a closed-loop framework integrating three key components: (1) a lightweight adaptive detection with multi-scale feature extraction, (2) spatiotemporal motion modeling through Kalman-filter-based trajectory prediction, and (3) autonomous decision-making through composite scoring of detection confidence, appearance similarity, and motion consistency. By implementing dynamic detection-tracking coordination with quality-aware feature preservation, our system enables real-time operation through performance-adaptive frequency modulation. Evaluated on VOT-ST2019 and OTB100 benchmarks, the proposed method yields marked improvements over baseline trackers, achieving a 27.94% increase in Expected Average Overlap (EAO) and a 10.39% reduction in failure rates, while sustaining a frame rate of 23–95 FPS on edge hardware. The framework achieves rapid target reacquisition during prolonged occlusion scenarios through optimized protocols, outperforming conventional methods in sustained aerial surveillance tasks. Full article
(This article belongs to the Section Drone Design and Development)
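The autonomous decision step described above, composite scoring over detection confidence, appearance similarity, and motion consistency, can be sketched as a weighted sum with an acceptance threshold. The weights and threshold below are invented for illustration and are not taken from the paper.

```python
def composite_score(conf, appearance_sim, motion_consistency,
                    weights=(0.4, 0.3, 0.3)):
    """Fuse three per-candidate cues, each in [0, 1], into one score.
    The weights are illustrative placeholders."""
    w1, w2, w3 = weights
    return w1 * conf + w2 * appearance_sim + w3 * motion_consistency

def select_target(candidates, threshold=0.6):
    """Pick the best-scoring candidate box; return None when no candidate
    clears the threshold, which would trigger re-detection."""
    best = max(candidates, key=lambda c: composite_score(*c), default=None)
    if best is None or composite_score(*best) < threshold:
        return None
    return best

# Two candidate boxes: (detector confidence, appearance sim., motion consistency)
candidates = [(0.9, 0.8, 0.7), (0.5, 0.4, 0.9)]
chosen = select_target(candidates)
```

Returning `None` rather than a weak match is what lets the closed loop fall back to full re-detection instead of drifting onto a distractor.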

23 pages, 1913 KiB  
Article
UAVRM-A*: A Complex Network and 3D Radio Map-Based Algorithm for Optimizing Cellular-Connected UAV Path Planning
by Yanming Chai, Yapeng Wang, Xu Yang, Sio-Kei Im and Qibin He
Sensors 2025, 25(13), 4052; https://doi.org/10.3390/s25134052 - 29 Jun 2025
Viewed by 334
Abstract
In recent research on path planning for cellular-connected Unmanned Aerial Vehicles (UAVs), leveraging navigation models based on complex networks and applying the A* algorithm has emerged as a promising alternative to more computationally intensive methods such as deep reinforcement learning (DRL). These approaches achieve performance close to that of DRL while addressing key challenges like long training times and poor generalization. However, conventional A* algorithms fail to consider critical UAV flight characteristics and lack effective obstacle avoidance mechanisms. To address these limitations, this paper presents a novel solution for path planning of cellular-connected UAVs, utilizing a 3D radio map for enhanced situational awareness. We propose an innovative path planning algorithm, UAVRM-A*, which builds upon the complex network navigation model and incorporates key improvements over traditional A*. Our experimental results demonstrate that the UAVRM-A* algorithm not only effectively avoids obstacles but also generates flight paths more consistent with UAV dynamics. Additionally, the proposed approach achieves performance comparable to DRL-based methods while significantly reducing radio outage duration and the computational time required for model training. This research contributes to the development of more efficient, reliable, and practical path planning solutions for UAVs, with potential applications in various fields, including autonomous delivery, surveillance, and emergency response operations. Full article
(This article belongs to the Special Issue Recent Advances in UAV Communications and Networks)
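The baseline grid A* that UAVRM-A* extends fits in a short sketch, using an admissible Manhattan heuristic on a 4-connected grid. The grid below is a toy example; the paper's version additionally weights cells by a 3D radio map and UAV dynamics.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns the list of cells from start to goal, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    open_set = [(h(start), 0, start, None)]   # (f = g + h, g, cell, parent)
    came, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:                        # already expanded with lower f
            continue
        came[cur] = parent
        if cur == goal:                        # reconstruct path by parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],     # wall forces a detour through the right column
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

Swapping the unit step cost `g + 1` for a radio-outage-aware cost is the kind of modification the UAVRM-A* abstract describes.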

28 pages, 11793 KiB  
Article
Unsupervised Multimodal UAV Image Registration via Style Transfer and Cascade Network
by Xiaoye Bi, Rongkai Qie, Chengyang Tao, Zhaoxiang Zhang and Yuelei Xu
Remote Sens. 2025, 17(13), 2160; https://doi.org/10.3390/rs17132160 - 24 Jun 2025
Cited by 1 | Viewed by 412
Abstract
Cross-modal image registration for unmanned aerial vehicle (UAV) platforms presents significant challenges due to large-scale deformations, distinct imaging mechanisms, and pronounced modality discrepancies. This paper proposes a novel multi-scale cascaded registration network based on style transfer that achieves superior performance: up to 67% reduction in mean squared error (from 0.0106 to 0.0068), 9.27% enhancement in normalized cross-correlation, 26% improvement in local normalized cross-correlation, and 8% increase in mutual information compared to state-of-the-art methods. The architecture integrates a cross-modal style transfer network (CSTNet) that transforms visible images into pseudo-infrared representations to unify modality characteristics, and a multi-scale cascaded registration network (MCRNet) that performs progressive spatial alignment across multiple resolution scales using diffeomorphic deformation modeling to ensure smooth and invertible transformations. A self-supervised learning paradigm based on image reconstruction eliminates reliance on manually annotated data while maintaining registration accuracy through synthetic deformation generation. Extensive experiments on the LLVIP dataset demonstrate the method’s robustness under challenging conditions involving large-scale transformations, with ablation studies confirming that style transfer contributes 28% MSE improvement and diffeomorphic registration prevents 10.6% performance degradation. The proposed approach provides a robust solution for cross-modal image registration in dynamic UAV environments, offering significant implications for downstream applications such as target detection, tracking, and surveillance. Full article
(This article belongs to the Special Issue Advances in Deep Learning Approaches: UAV Data Analysis)
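The normalized cross-correlation metric behind the 9.27% figure reported above can be computed as follows; the arrays are random stand-ins for a registered image pair, not data from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shaped images:
    1.0 for a perfect positive linear relationship, near 0 for
    unrelated content. Invariant to per-image gain and offset."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
registered = 2.0 * img + 1.0      # stand-in for a well-aligned second modality
score = ncc(img, registered)
```

Because NCC ignores gain and offset it tolerates brightness differences between modalities, but not the deeper appearance gap between visible and infrared imagery, which is why the paper bridges modalities with style transfer before measuring alignment.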
