Search Results (614)

Search Parameters:
Keywords = lightweight UAVs

25 pages, 2897 KB  
Review
Integrating UAVs and Deep Learning for Plant Disease Detection: A Review of Techniques, Datasets, and Field Challenges with Examples from Cassava
by Wasiu Akande Ahmed, Olayinka Ademola Abiola, Dongkai Yang, Seyi Festus Olatoyinbo and Guifei Jing
Horticulturae 2026, 12(1), 87; https://doi.org/10.3390/horticulturae12010087 - 12 Jan 2026
Viewed by 130
Abstract
Cassava remains a critical food-security crop across Africa and Southeast Asia but is highly vulnerable to diseases such as cassava mosaic disease (CMD) and cassava brown streak disease (CBSD). Traditional diagnostic approaches are slow, labor-intensive, and inconsistent under field conditions. This review synthesizes current advances in combining unmanned aerial vehicles (UAVs) with deep learning (DL) to enable scalable, data-driven cassava disease detection. It examines UAV platforms, sensor technologies, flight protocols, image preprocessing pipelines, DL architectures, and existing datasets, and it evaluates how these components interact within UAV–DL disease-monitoring frameworks. The review also compares model performance across convolutional neural network-based and Transformer-based architectures, highlighting metrics such as accuracy, recall, F1-score, inference speed, and deployment feasibility. Persistent challenges—such as limited UAV-acquired datasets, annotation inconsistencies, geographic model bias, and inadequate real-time deployment—are identified and discussed. Finally, the paper proposes a structured research agenda including lightweight edge-deployable models, UAV-ready benchmarking protocols, and multimodal data fusion. This review provides a consolidated reference for researchers and practitioners seeking to develop practical and scalable cassava-disease detection systems. Full article

25 pages, 7150 KB  
Article
Integrating Frequency-Spatial Features for Energy-Efficient OPGW Target Recognition in UAV-Assisted Mobile Monitoring
by Lin Huang, Xubin Ren, Daiming Qu, Lanhua Li and Jing Xu
Sensors 2026, 26(2), 506; https://doi.org/10.3390/s26020506 - 12 Jan 2026
Viewed by 160
Abstract
Optical Fiber Composite Overhead Ground Wire (OPGW) cables serve dual functions in power systems: lightning protection and critical communication infrastructure for real-time grid monitoring. Accurate OPGW identification during UAV inspections is essential to prevent miscuts and maintain power-communication functionality. However, detecting small, twisted OPGW segments among visually similar ground wires is challenging, particularly given the computational and energy constraints of edge-based UAV platforms. We propose OPGW-DETR, a lightweight detector based on the D-FINE framework, optimized for low-power operation to enable reliable detection. The model incorporates two key innovations: multi-scale convolutional global average pooling (MC-GAP), which fuses spatial features across multiple receptive fields and integrates spectrally motivated features for enhanced fine-grained representation, and a hybrid gating mechanism that dynamically balances global and spatial features while preserving original information through residual connections. By enabling real-time inference with minimal energy consumption, OPGW-DETR addresses UAV battery and bandwidth limitations while ensuring continuous detection capability. Evaluated on a custom OPGW dataset, the S-scale model achieves a 3.9% improvement in average precision (AP) and a 2.5% improvement in AP50 over the baseline. By mitigating misidentification risks, these gains improve communication reliability. As a result, uninterrupted grid monitoring becomes feasible in low-power UAV inspection scenarios, where accurate detection is essential to ensure communication integrity and safeguard the power grid. Full article
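The MC-GAP idea of pooling features over several receptive-field sizes can be illustrated with a small NumPy sketch. This is a simplified, pooling-only analogue, not the paper's design (the actual module is convolutional and adds spectrally motivated features); the function names, the 8×8 map, and the grid sizes are illustrative assumptions:

```python
import numpy as np

def block_average(feat, s):
    """Average-pool an (H, W) map over non-overlapping s x s blocks."""
    h, w = feat.shape
    return feat.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def pyramid_gap(feat, grids=(1, 2, 4)):
    """Pool the map into g x g grids of averages and concatenate, so one
    vector carries statistics at several receptive-field sizes."""
    h, _ = feat.shape
    return np.concatenate([block_average(feat, h // g).ravel() for g in grids])

feat = np.arange(64, dtype=float).reshape(8, 8)
print(pyramid_gap(feat).shape)  # (21,): 1 + 4 + 16 pooled averages
```

The first component is the ordinary global average; the finer grids preserve coarse spatial layout that a single global pool would discard.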
(This article belongs to the Section Internet of Things)

55 pages, 1599 KB  
Review
The Survey of Evolutionary Deep Learning-Based UAV Intelligent Power Inspection
by Shanshan Fan and Bin Cao
Drones 2026, 10(1), 55; https://doi.org/10.3390/drones10010055 - 12 Jan 2026
Viewed by 251
Abstract
With the rapid development of the power Internet of Things (IoT), the traditional manual inspection mode can no longer meet the growing demand for power equipment inspection. Unmanned aerial vehicle (UAV) intelligent inspection technology, with its efficient and flexible features, has become the mainstream solution. The rapid development of computer vision and deep learning (DL) has significantly improved the accuracy and efficiency of UAV intelligent inspection systems for power equipment. However, mainstream deep learning models have complex structures, and manual design is time-consuming and labor-intensive. In addition, the images collected during the power inspection process by UAVs have problems such as complex backgrounds, uneven lighting, and significant differences in object sizes, which require expert DL domain knowledge and many trial-and-error experiments to design models suitable for application scenarios involving power inspection with UAVs. In response to these difficult problems, evolutionary computation (EC) technology has demonstrated unique advantages in simulating the natural evolutionary process. This technology can independently design lightweight and high-precision deep learning models by automatically optimizing the network structure and hyperparameters. Therefore, this review summarizes the development of evolutionary deep learning (EDL) technology and provides a reference for applying EDL in object detection models used in UAV intelligent power inspection systems. First, the application status of DL-based object detection models in power inspection is reviewed. Then, how EDL technology improves the performance of the models in challenging scenarios such as complex terrain and extreme weather is analyzed by optimizing the network architecture. 
Finally, the challenges and future research directions of EDL technology in the field of UAV power inspection are discussed, including key issues such as improving the environmental adaptability of the model and reducing computing energy consumption, providing theoretical references for promoting the development of UAV power inspection technology to a higher level. Full article

56 pages, 1834 KB  
Review
Detection and Mitigation of Cyber Attacks on UAV Networks
by Jack Burbank, Toro Caleb, Emmanuela Andam and Naima Kaabouch
Electronics 2026, 15(2), 317; https://doi.org/10.3390/electronics15020317 - 11 Jan 2026
Viewed by 117
Abstract
The topic of Unmanned Aerial Vehicle (UAV) cybersecurity has received significant recent interest from the research community, with many methods proposed in the literature to detect and mitigate various types of attacks. This paper provides a comprehensive review of UAV cybersecurity, addressing all aspects of the UAV ecosystem and presenting a thorough review of the various types of UAV attacks, including a survey of recent real-world UAV cybersecurity incidents. UAV cybersecurity threat analysis and risk assessment methodologies are reviewed, discussing how potential attacks translate to UAV system risk. The various threat detection and countermeasure (mitigation) techniques are analyzed. Finally, this paper’s unique contribution is that it provides a survey of existing tools and datasets that are available to UAV cybersecurity researchers. A key identified research gap is the need to conduct real-world experimentation to validate proposed cybersecurity techniques. Many proposed approaches are computationally expensive or require additional redundant hardware onboard the UAV. Future research should focus on the development of lightweight methods that are practical for UAV adoption. Another key research gap is the relative lack of RemoteID cybersecurity research, despite its mandated adoption by UAVs. Lastly, this paper concludes that Global Positioning System (GPS)-related threats pose the greatest continued risk to UAVs. Full article
(This article belongs to the Special Issue Advances in UAV-Assisted Wireless Communications)

19 pages, 2336 KB  
Article
A Lightweight Upsampling and Cross-Modal Feature Fusion-Based Algorithm for Small-Object Detection in UAV Imagery
by Jianglei Gong, Zhe Yuan, Wenxing Li, Weiwei Li, Yanjie Guo and Baolong Guo
Electronics 2026, 15(2), 298; https://doi.org/10.3390/electronics15020298 - 9 Jan 2026
Viewed by 141
Abstract
Small-object detection in UAV remote sensing faces common challenges such as tiny target size, blurred features, and severe background interference. Furthermore, single imaging modalities exhibit limited representation capability in complex environments. To address these issues, this paper proposes CTU-YOLO, a UAV-based small-object detection algorithm built upon cross-modal feature fusion and lightweight upsampling. The algorithm incorporates a dynamic and adaptive cross-modal feature fusion (DCFF) module, which achieves efficient feature alignment and fusion by combining frequency-domain analysis with convolutional operations. Additionally, a lightweight upsampling module (LUS) is introduced, integrating dynamic sampling and depthwise separable convolution to enhance the recovery of fine details for small objects. Experiments on the DroneVehicle and LLVIP datasets demonstrate that CTU-YOLO achieves 73.9% mAP on DroneVehicle and 96.9% AP on LLVIP, outperforming existing mainstream methods. Meanwhile, the model possesses only 4.2 MB parameters and 13.8 GFLOPs computational cost, with inference speeds reaching 129.9 FPS on DroneVehicle and 135.1 FPS on LLVIP. This exhibits an excellent lightweight design and real-time performance while maintaining high accuracy. Ablation studies confirm that both the DCFF and LUS modules contribute significantly to performance gains. Visualization analysis further indicates that the proposed method can accurately preserve the structure of small objects even under nighttime, low-light, and multi-scale background conditions, demonstrating strong robustness. Full article
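The parameter savings behind the depthwise separable convolution used in the LUS module follow from simple arithmetic: a standard k×k convolution needs k·k·C_in·C_out weights, while the separable version needs only k·k·C_in (depthwise) plus C_in·C_out (pointwise). A quick check, with layer sizes chosen for illustration rather than taken from the paper:

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k filter per input channel, then a 1 x 1 pointwise mix."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 128)       # 147456
sep = separable_params(3, 128, 128)  # 1152 + 16384 = 17536
print(std / sep)                     # roughly 8.4x fewer parameters
```

This ratio is what lets a 4.2 MB model keep multi-scale upsampling affordable on UAV hardware.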
(This article belongs to the Special Issue AI-Driven Image Processing: Theory, Methods, and Applications)

30 pages, 4507 KB  
Article
Training-Free Lightweight Transfer Learning for Land Cover Segmentation Using Multispectral Calibration
by Hye-Jung Moon and Nam-Wook Cho
Remote Sens. 2026, 18(2), 205; https://doi.org/10.3390/rs18020205 - 8 Jan 2026
Viewed by 117
Abstract
This study proposes a lightweight framework for transferring pretrained land cover classification architectures without additional training. The system utilizes French IGN imagery and Korean UAV and aerial imagery. It employs FLAIR U-Net models with ResNet34 and MiTB5 backbones, along with the AI-HUB U-Net. The implementation consists of four sequential stages. First, we perform class mapping between heterogeneous schemes and unify coordinate systems. Second, a quadratic polynomial regression equation is constructed. This formula uses multispectral band statistics as hyperparameters and class-wise IoU as the dependent variable. Third, optimal parameters are identified using the stationary point condition of Response Surface Methodology (RSM). Fourth, the final land cover map is generated by fusing class-wise optimal results at the pixel level. Experimental results show that optimization is typically completed within 60 inferences. This procedure achieves IoU improvements of up to 67.86 percentage points compared to the baseline. For automated application, these optimized values from a source domain are successfully transferred to target areas. This includes transfers between high-altitude mountainous and low-lying coastal territories via proportional mapping. This capability demonstrates cross-regional and cross-platform generalization between ResNet34 and MiTB5. Statistical validation confirmed that the performance surface followed a systematic quadratic response. Adjusted R2 values ranged from 0.706 to 0.999, with all p-values below 0.001. Consequently, the performance function is universally applicable across diverse geographic zones, spectral distributions, spatial resolutions, sensors, neural networks, and land cover classes. This approach achieves more than a 4000-fold reduction in computational resources compared to full model training, using only 32 to 150 tiles. 
Furthermore, the proposed technique demonstrates 10–74× superior resource efficiency (resource consumption per unit error reduction) over prior transfer learning schemes. Finally, this study presents a practical solution for inference and performance optimization of land cover semantic segmentation on standard commodity CPUs, while maintaining equivalent or superior IoU. Full article
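The RSM step can be sketched in one dimension: fit a quadratic to the observed IoU values and solve the stationary-point condition dy/dx = b1 + 2·b2·x = 0, giving x* = −b1 / (2·b2). A minimal sketch with synthetic data; the paper's surface is multivariate over band statistics, and the numbers below are made up for illustration:

```python
import numpy as np

# synthetic "band statistic vs class-wise IoU" observations
x = np.linspace(0.0, 2.0, 9)
y = -0.3 * (x - 1.2) ** 2 + 0.8          # true optimum at x = 1.2

# least-squares fit of y = b0 + b1*x + b2*x^2 (coefficients low-to-high)
b0, b1, b2 = np.polynomial.polynomial.polyfit(x, y, 2)

# stationary-point condition of the fitted response surface
x_star = -b1 / (2.0 * b2)
print(round(x_star, 3))  # 1.2
```

Because the fitted surface is quadratic, the optimum comes from a closed-form solve instead of a training loop, which is what keeps the procedure within a few dozen inferences.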

19 pages, 1933 KB  
Article
ESS-DETR: A Lightweight and High-Accuracy UAV-Deployable Model for Surface Defect Detection
by Yunze Wang, Yong Yao, Heng Zheng and Yeqing Han
Drones 2026, 10(1), 43; https://doi.org/10.3390/drones10010043 - 8 Jan 2026
Viewed by 216
Abstract
Defects on large-scale structural surfaces can compromise integrity and pose safety hazards, highlighting the need for efficient automated inspection. UAVs provide a flexible and effective platform for such inspections, yet traditional vision-based methods often require high computational resources and show limited sensitivity to small defects, restricting practical UAV deployment. To address these challenges, we propose ESS-DETR, a lightweight and high-precision detection model designed for UAV-based surface inspection, built upon core modules: EMO-inspired lightweight backbone that integrates convolution and efficient attention mechanisms to reduce parameters; Scale-Decoupled Loss that adaptively balances targets of various sizes to enhance accuracy and robustness for small and irregular defect patterns frequently encountered in UAV imagery; and SPPELAN multi-scale fusion module that improves feature discrimination under complex reflections, shadows, and lighting variations typical of aerial inspection environments. Experimental results demonstrate that ESS-DETR reduces computational complexity from 103.4 to 60.5 GFLOPs and achieves a Precision of 0.837, Recall of 0.738, and mAP of 79, outperforming Faster R-CNN, RT-DETR, and YOLOv11, particularly for small-scale defects, confirming that ESS-DETR effectively balances accuracy, efficiency, and onboard deployability, providing a practical solution for intelligent UAV-based surface inspection. Full article

23 pages, 3153 KB  
Article
SSCW-YOLO: A Lightweight and High-Precision Model for Small Object Detection in UAV Scenarios
by Zhuolun He, Rui She, Bo Tan, Jiajian Li and Xiaolong Lei
Drones 2026, 10(1), 41; https://doi.org/10.3390/drones10010041 - 7 Jan 2026
Viewed by 390
Abstract
To address the problems of missed and false detections caused by insufficient feature quality in small object detection from UAV perspectives, this paper proposes a UAV small object detection algorithm based on YOLOv8 feature optimization. A spatial cosine convolution module is introduced into the backbone network to optimize spatial features, thereby alleviating the problem of small object feature loss and improving the detection accuracy and speed of the model. An improved C2f_SCConv feature fusion module is employed for feature integration, which effectively reduces feature redundancy in spatial and channel dimensions, thereby lowering model complexity and computational cost. Meanwhile, the WIoU loss function is used to replace the original CIoU loss function, reducing the interference of geometric factors in anchor box regression, enabling the model to focus more on low-quality anchor boxes, and enhancing its small object detection capability. Ablation and comparative experiments on the VisDrone dataset validate the effectiveness of the proposed algorithm for small object detection from UAV perspectives, while generalization experiments on the DOTA and SSDD datasets demonstrate that the algorithm possesses strong generalization performance. Full article
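Both CIoU and WIoU build on the plain intersection-over-union of two axis-aligned boxes; WIoU then reweights the loss with a distance-based focusing factor so low-quality anchors get more attention. A minimal sketch of the base IoU term only (the box format and example boxes are assumptions for illustration):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```

For small objects a one-pixel shift changes this ratio sharply, which is why the choice of IoU-based loss matters so much in UAV imagery.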

28 pages, 12490 KB  
Article
A Full-Parameter Calibration Method for an RINS/CNS Integrated Navigation System in High-Altitude Drones
by Huanrui Zhang, Xiaoyue Zhang, Chunhua Cheng, Xinyi Lv and Chunxi Zhang
Vehicles 2026, 8(1), 11; https://doi.org/10.3390/vehicles8010011 - 5 Jan 2026
Viewed by 156
Abstract
High-altitude long-endurance (HALE) UAVs require navigation payloads that are both fully autonomous and lightweight. This paper presents a full-parameter calibration method for a dual-axis rotational-modulation RINS/CNS integrated system in which the IMU is mounted on a two-axis indexing mechanism and the reconnaissance camera is reused as the star sensor. We establish a unified error propagation model that simultaneously covers IMU device errors (bias, scale, cross-axis/installation), gimbal non-orthogonality and encoder angle errors, and camera exterior/interior parameters (EOPs/IOPs), including Brown–Conrady distortion. Building on this model, we design an error-decoupled calibration path that exploits (i) odd/even symmetry under inner-axis scans, (ii) basis switching via outer-axis waypoints, and (iii) frequency tagging through rate-limited triangular motions. A piecewise-constant system (PWCS)/SVD analysis quantifies segment-wise observability and guides trajectory tuning. Simulation and hardware-in-the-loop results show that all parameter groups converge primarily within the segments that excite them; the final relative errors are typically ≤5% in simulation and 6–16% with real IMU/gimbal data and catalog-based star pixels. Full article
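The PWCS/SVD observability check amounts to stacking each motion segment's observation matrix and counting singular values above a tolerance: state directions in the null space are unobservable during that segment. A toy sketch of the rank test only (the 3-state example is illustrative, not the paper's full error-state model):

```python
import numpy as np

def observable_rank(obs_matrix: np.ndarray, tol: float = 1e-10) -> int:
    """Count the state directions a segment excites: the number of
    non-negligible singular values of its stacked observation matrix."""
    s = np.linalg.svd(obs_matrix, compute_uv=False)
    return int(np.sum(s > tol))

# a segment whose measurements only touch the first two of three states
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(observable_rank(H))  # 2: the third state is unobservable here
```

Running this per segment is what lets the authors confirm that each parameter group converges in the segments designed to excite it.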

25 pages, 18950 KB  
Article
Robust Object Detection for UAVs in Foggy Environments with Spatial-Edge Fusion and Dynamic Task Alignment
by Qing Dong, Tianxin Han, Gang Wu, Lina Sun and Yuchang Lu
Remote Sens. 2026, 18(1), 169; https://doi.org/10.3390/rs18010169 - 5 Jan 2026
Viewed by 213
Abstract
Robust scene perception in adverse environmental conditions, particularly under dense fog, presents a persistent and fundamental challenge to the reliability of object detection systems. To address this critical challenge, we propose Fog-UAVNet, a novel lightweight deep-learning architecture designed to enhance unmanned aerial vehicle (UAV) object detection performance in foggy environments. Fog-UAVNet incorporates three key innovations: the Spatial-Edge Feature Fusion Module (SEFFM), which enhances feature extraction by effectively integrating edge and spatial information, the Frequency-Adaptive Dilated Convolution (FADC), which dynamically adjusts to fog density variations and further enhances feature representation under adverse conditions, and the Dynamic Task-Aligned Head (DTAH), which dynamically aligns localization and classification tasks and thus improves overall model performance. To evaluate the effectiveness of our approach, we independently constructed a real-world foggy dataset and synthesized the VisDrone-fog dataset using an atmospheric scattering model. Extensive experiments on multiple challenging datasets demonstrate that Fog-UAVNet consistently outperforms state-of-the-art methods in both detection accuracy and computational efficiency, highlighting its potential for enhancing robust visual perception under adverse weather. Full article
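The atmospheric scattering model used to synthesize VisDrone-fog is commonly written I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)): β = 0 returns the clear image, and large β washes every pixel toward the airlight A. A minimal sketch (the pixel values, depths, and β are illustrative, not the paper's settings):

```python
import numpy as np

def add_fog(img: np.ndarray, depth: np.ndarray, beta: float,
            airlight: float = 1.0) -> np.ndarray:
    """Atmospheric scattering model: I = J * t + A * (1 - t),
    with transmission t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)
    return img * t + airlight * (1.0 - t)

img = np.full((2, 2), 0.2)      # clear scene radiance J
depth = np.full((2, 2), 50.0)   # per-pixel depth in metres
clear = add_fog(img, depth, beta=0.0)   # identical to img
dense = add_fog(img, depth, beta=1.0)   # approaches the airlight A
```

Varying β per image is a cheap way to generate a fog-density curriculum from any dataset that has depth, which is presumably how the synthetic split complements the real-world foggy data.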
(This article belongs to the Special Issue Efficient Object Detection Based on Remote Sensing Images)

27 pages, 26025 KB  
Article
LFP-Mono: Lightweight Self-Supervised Network Applying Monocular Depth Estimation to Low-Altitude Environment Scenarios
by Hao Cai, Jiafu Liu, Jinhong Zhang, Jingxuan Xu, Yi Zhang and Qin Yang
Computers 2026, 15(1), 19; https://doi.org/10.3390/computers15010019 - 4 Jan 2026
Viewed by 191
Abstract
For UAVs, the industry currently relies on expensive sensors for obstacle avoidance. A significant challenge arises from the scarcity of high-quality depth estimation datasets tailored for low-altitude environments, which hinders the advancement of self-supervised learning methods in these settings. Furthermore, mainstream depth estimation models capable of achieving obstacle avoidance through image recognition are built upon convolutional neural networks or hybrid Transformers. Their high computational costs make deployment on resource-constrained edge devices challenging. While existing lightweight convolutional networks reduce parameter counts, they struggle to simultaneously capture essential features and fine details in complex scenes. In this work, we introduce LFP-Mono as a lightweight self-supervised monocular depth estimation network. In the paper, we will detail the Pooling Convolution Downsampling (PCD) module, Continuously Dilated and Weighted Convolution (CDWC) module, and Cross-level Feature Integration (CFI) module. All results show that LFP-Mono outperforms existing lightweight methods on the KITTI benchmark, and by evaluating with the Make3D dataset, show that our method generalizes outdoors. Finally, by training and testing on the Syndrone dataset, baseline work shows that LFP-Mono exceeds state-of-the-art methods for low-altitude drone performance. Full article

16 pages, 3885 KB  
Article
Design and Evaluation of an Additively Manufactured UAV Fixed-Wing Using Gradient Thickness TPMS Structure and Various Shells and Infill Micro-Porosities
by Georgios Moysiadis, Savvas Koltsakidis, Odysseas Ziogas, Pericles Panagiotou and Dimitrios Tzetzis
Aerospace 2026, 13(1), 50; https://doi.org/10.3390/aerospace13010050 - 2 Jan 2026
Viewed by 332
Abstract
Unmanned Aerial Vehicles (UAVs) have become indispensable tools, playing a pivotal role in diverse applications such as rescue missions, agricultural surveying, and air defense. They significantly reduce operational costs while enhancing operator safety, enabling new strategies across multiple domains. The growing demand for UAVs calls for structural components that are not only robust and lightweight, but also cost-efficient. This research introduces a novel approach that employs a pressure distribution map on the external surface of a UAV wing to optimize its internal structure through a variable-thickness TPMS (Triply Periodic Minimal Surface) design. Beyond structural optimization, the study explores a second novel approach with the use of filaments containing chemical blowing agents printed at different temperatures for both the infill and shell, producing varying porosities. As a result, the tailoring of density and weight is achieved through two different methods, and case studies were developed by combining them. Compared to the conventionally manufactured wing, a weight reduction of up to 7% was achieved while the wing could handle the aerodynamic loads under extreme conditions. Beyond enabling lightweight structures, the process has the potential to be substantially faster and more cost-effective, eliminating the need for molds and advanced composite materials such as carbon fiber sheets. Full article
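TPMS infills are defined implicitly; the gyroid, a common choice, is the zero level set of f(x, y, z) = sin x·cos y + sin y·cos z + sin z·cos x, and solid material is kept where |f| < t. Making the threshold t a function of position is one way to realize a gradient-thickness design. A minimal sketch (the gyroid family and the thickness rule here are assumptions for illustration; the abstract does not name the exact TPMS used):

```python
import numpy as np

def gyroid(x, y, z):
    """Implicit gyroid field: the surface is its zero level set."""
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

def solid_mask(x, y, z, thickness):
    """Material where the field lies within +/- thickness of the surface;
    a spatially varying thickness yields a graded-density infill."""
    return np.abs(gyroid(x, y, z)) < thickness

print(gyroid(0.0, 0.0, 0.0))  # 0.0: the origin lies on the surface
```

Driving `thickness` from the wing's pressure-distribution map, as the paper proposes, puts material where the aerodynamic loads demand it.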

30 pages, 18696 KB  
Article
A Lightweight Multi-Module Collaborative Optimization Framework for Detecting Small Unmanned Aerial Vehicles in Anti-Unmanned Aerial Vehicle Systems
by Zhiling Chen, Kuangang Fan, Jingzhen Ye, Zhitao Xu and Yupeng Wei
Drones 2026, 10(1), 20; https://doi.org/10.3390/drones10010020 - 31 Dec 2025
Viewed by 441
Abstract
In response to the safety threats posed by unauthorized unmanned aerial vehicles (UAVs), the importance of anti-UAV systems is becoming increasingly apparent. In tasks involving UAV detection, small UAVs are particularly difficult to detect due to their low resolution. Therefore, this study proposed YOLO-CoOp, a lightweight multi-module collaborative optimization framework for detecting small UAVs. First, a high-resolution feature pyramid network (HRFPN) was proposed to retain more spatial information of small UAVs. Second, a C3k2-WT module integrated with wavelet transform convolution was proposed to enhance feature extraction capability and expand the model’s receptive field. Then, a spatial-channel synergistic attention (SCSA) mechanism was introduced to integrate spatial and channel information and enhance feature fusion. Finally, the DyATF method replaced the upsampling with Dysample and the confidence loss with adaptive threshold focal loss (ATFL), aiming to restore UAV details and balance positive–negative sample weights. The ablation experiments show that YOLO-CoOp achieves 94.3% precision, 93.1% recall, 96.2% mAP50, and 57.6% mAP50−95 on the UAV-SOD dataset, with improvements of 3.6%, 10%, 5.9%, and 5% over the baseline model, respectively. The comparison experiments demonstrate that YOLO-CoOp has fewer parameters while maintaining superior detection performance. Cross-dataset validation experiments also demonstrate that YOLO-CoOp exhibits significant performance improvements in small object detection tasks. Full article
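The focal-loss family that ATFL extends downweights easy examples via FL(p) = −(1 − p)^γ·log p; γ = 0 recovers plain cross-entropy. A minimal sketch of the base focal term only (ATFL's adaptive threshold between easy and hard samples is not shown, and the γ value is illustrative):

```python
import math

def focal_loss(p: float, gamma: float = 2.0) -> float:
    """Focal loss for a positive sample predicted with probability p."""
    return -((1.0 - p) ** gamma) * math.log(p)

# a confident prediction is damped far more than cross-entropy would be
print(focal_loss(0.9, gamma=2.0))  # about 0.00105
print(focal_loss(0.9, gamma=0.0))  # plain cross-entropy, about 0.105
```

Shrinking the gradient contribution of easy negatives is what rebalances the positive-negative sample weights the abstract refers to.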

28 pages, 3652 KB  
Article
A Ground-Based Visual System for UAV Detection and Altitude Measurement Deployment and Evaluation of Ghost-YOLOv11n on Edge Devices
by Hongyu Wang, Yifeng Qu, Zheng Dang, Duosheng Wu, Mingzhu Cui, Hanqi Shi and Jintao Zhao
Sensors 2026, 26(1), 205; https://doi.org/10.3390/s26010205 - 28 Dec 2025
Viewed by 422
Abstract
The growing threat of unauthorized drones to ground-based critical infrastructure necessitates efficient ground-to-air surveillance systems. This paper proposes a lightweight framework for UAV detection and altitude measurement from a fixed ground perspective. We introduce Ghost-YOLOv11n, an optimized detector that integrates GhostConv modules into YOLOv11n, reducing computational complexity by 12.7% while achieving 98.8% mAP0.5 on a comprehensive dataset of 8795 images. Deployed on a LuBanCat4 edge device with Rockchip RK3588S NPU acceleration, the model achieves 20 FPS. For stable altitude estimation, we employ an Extended Kalman Filter to refine measurements from a monocular ranging method based on similar-triangle geometry. Experimental results under ground monitoring scenarios show height measurement errors remain within 10% up to 30 m. This work provides a cost-effective, edge-deployable solution specifically for ground-based anti-drone applications. Full article
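Similar-triangle monocular ranging gives Z = f·H / h, where f is the focal length in pixels, H the drone's real size, and h its size in the image; the paper then smooths such measurements with an Extended Kalman Filter. A sketch of the geometric step with a plain scalar Kalman update in place of the full EKF (all numbers are illustrative, not from the paper):

```python
def range_from_pixels(focal_px: float, real_size_m: float, size_px: float) -> float:
    """Similar triangles: Z = f * H / h."""
    return focal_px * real_size_m / size_px

def kalman_update(est, var, meas, meas_var):
    """Scalar Kalman update used to smooth noisy range estimates."""
    k = var / (var + meas_var)           # Kalman gain
    return est + k * (meas - est), (1.0 - k) * var

z = range_from_pixels(focal_px=800.0, real_size_m=0.5, size_px=20.0)
print(z)  # 20.0 m

est, var = 25.0, 4.0                     # prior range estimate
est, var = kalman_update(est, var, meas=z, meas_var=1.0)
print(round(est, 2))  # 21.0: pulled strongly toward the measurement
```

Because h shrinks hyperbolically with distance, pixel noise grows with range, which is why filtering is needed to keep errors within the reported 10% at 30 m.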
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems—2nd Edition)

34 pages, 20157 KB  
Article
Dual-Level Attention Relearning for Cross-Modality Rotated Object Detection in UAV RGB–Thermal Imagery
by Zhuqiang Li, Zhijun Zhen, Shengbo Chen, Liqiang Zhang and Lisai Cao
Remote Sens. 2026, 18(1), 107; https://doi.org/10.3390/rs18010107 - 28 Dec 2025
Viewed by 427
Abstract
Effectively leveraging multi-source unmanned aerial vehicle (UAV) observations for reliable object recognition is often compromised by environmental extremes (e.g., occlusion and low illumination) and the inherent physical discrepancies between modalities. To overcome these limitations, we propose DLANet, a lightweight, rotation-aware multimodal object detection framework that introduces a dual-level attention relearning strategy to maximize complementary information from visible (RGB) and thermal infrared (TIR) imagery. DLANet integrates two novel components: the Implicit Fine-Grained Fusion Module (IF2M), which facilitates deep cross-modal interaction by jointly modeling channel and spatial dependencies at intermediate stages, and the Adaptive Branch Feature Weighting (ABFW) module, which dynamically recalibrates modality contributions at higher levels to suppress noise and pseudo-targets. This synergistic approach allows the network to relearn feature importance based on real-time scene conditions. To support industrial applications, we construct the OilLeak dataset, a dedicated benchmark for onshore oil-spill detection. The experimental results demonstrate that DLANet achieves state-of-the-art performance, recording an mAP0.5 of 0.858 on the public DroneVehicle dataset while maintaining high efficiency, with 39.04 M parameters and 72.69 GFLOPs, making it suitable for real-time edge deployment. Full article
(This article belongs to the Special Issue Advances in SAR, Optical, Hyperspectral and Infrared Remote Sensing)
