Search Results (2,384)

Search Parameters:
Keywords = unmanned aerial vehicle detection

26 pages, 2077 KiB  
Article
Benchmarking YOLO Models for Marine Search and Rescue in Variable Weather Conditions
by Aysha Alshibli and Qurban Memon
Automation 2025, 6(3), 35; https://doi.org/10.3390/automation6030035 - 2 Aug 2025
Abstract
Deep learning with unmanned aerial vehicles (UAVs) is transforming maritime search and rescue (SAR) by enabling rapid object identification in challenging marine environments. This study benchmarks the performance of YOLO models for maritime SAR under diverse weather conditions using the SeaDronesSee and AFO datasets. The results show that while YOLOv7 achieved the highest mAP@50, it struggled to detect small objects; YOLOv10 and YOLOv11, in contrast, delivered faster inference at a slight cost in precision. The key challenges discussed include environmental variability, sensor limitations, and scarce annotated data, which can be addressed by techniques such as attention modules and multimodal data fusion. Overall, the results provide practical guidance for deploying efficient deep learning models in SAR, emphasizing specialized datasets and lightweight architectures for edge devices.
(This article belongs to the Section Intelligent Control and Machine Learning)
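A comparison like this can be prototyped with the Ultralytics validation API. The sketch below is a minimal example, assuming a dataset YAML (the `seadronessee.yaml` name is hypothetical) and locally available weights; YOLOv7 is omitted because it is maintained in a separate codebase.

```python
from ultralytics import YOLO

DATA = "seadronessee.yaml"  # hypothetical dataset config
MODELS = ["yolov8n.pt", "yolov10n.pt", "yolo11n.pt"]

for weights in MODELS:
    model = YOLO(weights)
    # val() runs the standard evaluation loop and reports box metrics.
    metrics = model.val(data=DATA, imgsz=640)
    print(f"{weights}: mAP@50={metrics.box.map50:.3f}, "
          f"inference={metrics.speed['inference']:.1f} ms/img")
```
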
27 pages, 1382 KiB  
Review
Application of Non-Destructive Technology in Plant Disease Detection: Review
by Yanping Wang, Jun Sun, Zhaoqi Wu, Yilin Jia and Chunxia Dai
Agriculture 2025, 15(15), 1670; https://doi.org/10.3390/agriculture15151670 - 1 Aug 2025
Abstract
In recent years, research on plant disease detection has combined artificial intelligence, hyperspectral imaging, unmanned aerial vehicle remote sensing, and other technologies, driving the digital and intelligent transformation of pest and disease control in smart agriculture. This review systematically covers the research status of non-destructive techniques for plant disease identification and detection, introducing two main families of methods: spectral technology and imaging technology. It details the principles and application examples of each technology and summarizes their respective advantages and disadvantages. The review shows that non-destructive techniques can detect plant diseases and pests quickly, accurately, and without damage. In the future, integrating multiple non-destructive detection technologies, developing portable detection devices, and combining more efficient data processing methods will be the core development directions of this field.
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)

22 pages, 6482 KiB  
Article
Surface Damage Detection in Hydraulic Structures from UAV Images Using Lightweight Neural Networks
by Feng Han and Chongshi Gu
Remote Sens. 2025, 17(15), 2668; https://doi.org/10.3390/rs17152668 - 1 Aug 2025
Abstract
Timely and accurate identification of surface damage in hydraulic structures is essential for maintaining structural integrity and ensuring operational safety. Traditional manual inspections are time-consuming, labor-intensive, and prone to subjectivity, especially for large-scale or inaccessible infrastructure. Leveraging advancements in aerial imaging, unmanned aerial vehicles (UAVs) enable efficient acquisition of high-resolution visual data across expansive hydraulic environments. However, existing deep learning (DL) models often lack architectural adaptations for the visual complexities of UAV imagery, including low texture contrast, noise interference, and irregular crack patterns. To address these challenges, this study proposes a lightweight, robust, and high-precision segmentation framework, LFPA-EAM-Fast-SCNN, designed for pixel-level damage detection in UAV-captured images of hydraulic concrete surfaces. The model integrates an enhanced Fast-SCNN backbone for efficient feature extraction, a Lightweight Feature Pyramid Attention (LFPA) module for multi-scale context enhancement, and an Edge Attention Module (EAM) for refined boundary localization. On a custom UAV-based dataset, the proposed method achieves superior performance, with a precision of 0.949, a recall of 0.892, an F1 score of 0.906, and an IoU of 87.92%, outperforming U-Net, Attention U-Net, SegNet, DeepLab v3+, I-ST-UNet, and SegFormer, and it reaches a real-time inference speed of 56.31 FPS, significantly surpassing the other models. The results also demonstrate strong generalization and robustness under varying noise levels and damage scenarios, underscoring the framework's suitability for scalable, automated surface damage assessment in UAV-based remote sensing of civil infrastructure.
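The reported metrics follow directly from pixel-level confusion counts; the snippet below is a generic sketch (not the authors' code) for computing them from binary masks.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """Precision, recall, F1, and IoU for same-shape boolean masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)  # intersection over union
    return precision, recall, f1, iou
```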

18 pages, 10604 KiB  
Article
Fast Detection of Plants in Soybean Fields Using UAVs, YOLOv8x Framework, and Image Segmentation
by Ravil I. Mukhamediev, Valentin Smurygin, Adilkhan Symagulov, Yan Kuchin, Yelena Popova, Farida Abdoldina, Laila Tabynbayeva, Viktors Gopejenko and Alexey Oxenenko
Drones 2025, 9(8), 547; https://doi.org/10.3390/drones9080547 - 1 Aug 2025
Abstract
The accuracy of classification and localization of plants in images obtained from an unmanned aerial vehicle (UAV) is of great importance when implementing precision farming technologies. It allows for the effective application of variable-rate technologies, which not only saves chemicals but also reduces the environmental load on cultivated fields. Machine learning algorithms are widely used for plant classification, and the YOLO family in particular is studied for simultaneous identification, localization, and classification of plants. However, the quality of the algorithm depends significantly on the training set. The aim of this study is to detect not only the cultivated plant (soybean) but also the weeds growing in the field. The dataset developed in the course of the research addresses this by covering soybean and seven weed species common in the fields of Kazakhstan. The article describes an approach to preparing a training set of images for soybean fields using preliminary thresholding and bounding-box (Bbox) segmentation of labeled images, which improves the quality of plant classification and localization. The conducted computational experiments showed that Bbox segmentation gives the best results. The quality of classification and localization with Bbox segmentation increased significantly (F1 score from 0.64 to 0.959, mAP50 from 0.72 to 0.979); for the cultivated plant (soybean), the best classification results known to date were achieved with YOLOv8x on UAV images, with an F1 score of 0.984. At the same time, the plant detection rate increased 13-fold compared to the model proposed earlier in the literature.
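The preliminary thresholding step can be sketched with standard OpenCV calls. The excess-green index and Otsu threshold below are common choices assumed for illustration, not necessarily the authors' exact recipe.

```python
import cv2
import numpy as np

def vegetation_mask(bgr: np.ndarray) -> np.ndarray:
    """Threshold the excess-green (ExG) index to separate plants from soil."""
    b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
    exg = 2 * g - r - b
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu picks the binarization threshold from the ExG histogram.
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def crop_bboxes(image: np.ndarray, mask: np.ndarray, min_area: int = 100):
    """Yield per-plant patches from connected components of the mask."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            yield image[y:y + h, x:x + w]
```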

21 pages, 4657 KiB  
Article
A Semi-Automated RGB-Based Method for Wildlife Crop Damage Detection Using QGIS-Integrated UAV Workflow
by Sebastian Banaszek and Michał Szota
Sensors 2025, 25(15), 4734; https://doi.org/10.3390/s25154734 - 31 Jul 2025
Abstract
Monitoring crop damage caused by wildlife remains a significant challenge in agricultural management, particularly in large-scale monocultures such as maize. This study presents a semi-automated process for detecting wildlife-induced damage using RGB imagery acquired from unmanned aerial vehicles (UAVs). The method is designed for non-specialist users and is fully integrated within the QGIS platform. The proposed approach involves calculating three vegetation indices, Excess Green (ExG), Green Leaf Index (GLI), and Modified Green-Red Vegetation Index (MGRVI), from a standardized orthomosaic generated from the UAV-collected RGB images. Subsequently, an unsupervised k-means clustering algorithm was applied to divide the field into five vegetation vigor classes. Within each class, the 25% of pixels with the lowest average index values were preliminarily classified as damaged. A dedicated QGIS plugin enables drone data analysts (DDAs) to interactively adjust index thresholds based on visual interpretation. The method was validated on a 50-hectare maize field, where 7 hectares of damage (15% of the area) were identified. The results indicate a high level of agreement between the automated and manual classifications, with an overall accuracy of 81%. The highest concentration of damage occurred in the "moderate" and "low" vigor zones. Final products included vigor classification maps, binary damage masks, and summary reports in HTML and DOCX formats with visualizations and statistics. The results confirm the effectiveness and scalability of the proposed RGB-based procedure for crop damage assessment. The method offers a repeatable, cost-effective, and field-operable alternative to multispectral or AI-based approaches, making it suitable for integration with precision agriculture practices and wildlife population management.
(This article belongs to the Section Remote Sensors)
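The index computation, five-class clustering, and 25% rule map naturally onto NumPy and scikit-learn. The sketch below assumes normalized R, G, B bands as float arrays and is illustrative, not the plugin's actual code.

```python
import numpy as np
from sklearn.cluster import KMeans

def damage_candidates(r, g, b, n_classes=5, frac=0.25):
    """Cluster RGB vegetation indices into vigor classes, then flag the
    lowest `frac` of pixels per class as preliminary damage."""
    exg = 2 * g - r - b
    gli = (2 * g - r - b) / (2 * g + r + b + 1e-9)
    mgrvi = (g**2 - r**2) / (g**2 + r**2 + 1e-9)
    feats = np.stack([exg, gli, mgrvi], axis=-1).reshape(-1, 3)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(feats)
    score = feats.mean(axis=1)            # average index value per pixel
    damaged = np.zeros(len(feats), dtype=bool)
    for c in range(n_classes):
        idx = np.flatnonzero(labels == c)
        cut = np.quantile(score[idx], frac)
        damaged[idx[score[idx] <= cut]] = True
    return damaged.reshape(r.shape)
```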

28 pages, 5503 KiB  
Article
Feature Selection Framework for Improved UAV-Based Detection of Solenopsis invicta Mounds in Agricultural Landscapes
by Chun-Han Shih, Cheng-En Song, Su-Fen Wang and Chung-Chi Lin
Insects 2025, 16(8), 793; https://doi.org/10.3390/insects16080793 - 31 Jul 2025
Abstract
The red imported fire ant (RIFA; Solenopsis invicta) is an invasive species that severely threatens ecology, agriculture, and public health in Taiwan. In this study, the feasibility of applying multispectral imagery captured by unmanned aerial vehicles (UAVs) to detect RIFA mounds was evaluated in Fenlin Township, Hualien, Taiwan. A DJI Phantom 4 Multispectral drone collected reflectance in five bands (blue, green, red, red-edge, and near-infrared), from which derived indices (the normalized difference vegetation index, NDVI; the soil-adjusted vegetation index, SAVI; and the photochemical pigment reflectance index, PPR) and textural features were computed. According to analysis-of-variance F-scores and random forest recursive feature elimination, vegetation indices and spectral features (e.g., NDVI, NIR, SAVI, and PPR) were the most significant predictors of ecological characteristics such as vegetation density and soil visibility. Texture features exhibited moderate importance and the potential to capture intricate spatial patterns in nonlinear models. Despite limitations, including trade-offs related to flight height and environmental variability, the findings suggest that UAVs are an inexpensive, high-precision means of obtaining multispectral data for RIFA monitoring. These findings can inform efficient mass-detection protocols for integrated pest control, with broader implications for invasive species monitoring.
(This article belongs to the Special Issue Surveillance and Management of Invasive Insects)
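Both ranking steps named in the abstract have direct scikit-learn equivalents; the sketch below runs on random placeholder data, with the feature count and selection size assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, f_classif

rng = np.random.default_rng(0)
X = rng.random((200, 12))      # placeholder: bands, indices, textures
y = rng.integers(0, 2, 200)    # placeholder: mound / no-mound labels

# ANOVA F-scores rank features individually...
f_scores, _ = f_classif(X, y)

# ...while RF-RFE drops the weakest features recursively.
rfe = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
          n_features_to_select=5).fit(X, y)
print(np.argsort(f_scores)[::-1], rfe.support_)
```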

31 pages, 18320 KiB  
Article
Penetrating Radar on Unmanned Aerial Vehicle for the Inspection of Civilian Infrastructure: System Design, Modeling, and Analysis
by Jorge Luis Alva Alarcon, Yan Rockee Zhang, Hernan Suarez, Anas Amaireh and Kegan Reynolds
Aerospace 2025, 12(8), 686; https://doi.org/10.3390/aerospace12080686 - 31 Jul 2025
Abstract
The increasing demand for noninvasive inspection (NII) of complex civil infrastructure requires overcoming the limitations of traditional ground-penetrating radar (GPR) systems in addressing diverse and large-scale applications. The solution proposed in this study is an initial design that integrates a low-SWaP (Size, Weight, and Power) ultra-wideband (UWB) impulse radar with realistic electromagnetic modeling for deployment on unmanned aerial vehicles (UAVs). The system incorporates ultra-realistic antenna and propagation models, using Finite Difference Time Domain (FDTD) solvers and multilayered media to replicate realistic airborne sensing geometries. Verification and calibration are performed by comparing simulation outputs with laboratory measurements of varied material samples and target models. Custom signal processing algorithms are developed to extract meaningful features from complex electromagnetic environments and support anomaly detection, and machine learning (ML) techniques are trained on synthetic data to automate the identification of structural characteristics. The results demonstrate close agreement between simulations and measurements, as well as the potential for deploying this design in flight tests within realistic environments featuring complex electromagnetic interference.
(This article belongs to the Section Aeronautics)
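As a worked example of the layered-media physics such modeling rests on, the normal-incidence reflection coefficient at a boundary between non-magnetic, low-loss media with relative permittivities ε₁ and ε₂ is given by the standard GPR relation (generic, not this paper's specific model):

```latex
\Gamma = \frac{\sqrt{\varepsilon_1}-\sqrt{\varepsilon_2}}{\sqrt{\varepsilon_1}+\sqrt{\varepsilon_2}},
\qquad
\varepsilon_1 = 1\ (\text{air}),\ \varepsilon_2 = 6\ (\text{concrete})
\;\Rightarrow\;
\Gamma \approx \frac{1-2.45}{1+2.45} \approx -0.42 .
```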

28 pages, 4007 KiB  
Article
Voting-Based Classification Approach for Date Palm Health Detection Using UAV Camera Images: Vision and Learning
by Abdallah Guettaf Temam, Mohamed Nadour, Lakhmissi Cherroun, Ahmed Hafaifa, Giovanni Angiulli and Fabio La Foresta
Drones 2025, 9(8), 534; https://doi.org/10.3390/drones9080534 - 29 Jul 2025
Abstract
In this study, we introduce the application of deep learning (DL) models, specifically convolutional neural networks (CNNs), for detecting the health status of date palm leaves using images captured by an unmanned aerial vehicle (UAV). The UAV's dynamics are modeled using the Newton–Euler method to ensure stable flight and accurate image acquisition. The deep learning models are combined in a voting-based classification (VBC) system that brings together multiple CNN architectures, including MobileNet, a handcrafted CNN, VGG16, and VGG19, to enhance classification accuracy and robustness. The classifiers independently generate predictions, and a voting mechanism determines the final classification. This hybridization of image-based visual servoing (IBVS) and classification enables immediate adaptation to changing conditions, providing smooth flight as well as robust vision-based classification. The dataset used in this study was collected with a dual-camera UAV that captures high-resolution images for detecting pests on date palm leaves. After applying the proposed classification strategy, the voting method achieved an accuracy of 99.16% on the test set for detecting the health condition of date palm leaves, surpassing the individual classifiers. The results are discussed and compared to show the effectiveness of this classification technique.
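The voting stage itself is simple to express. Below is a generic hard-voting sketch over per-model softmax outputs, an illustration of the idea rather than the paper's implementation.

```python
import numpy as np

def majority_vote(prob_list):
    """Hard voting over class probabilities from several CNNs.

    Each array in `prob_list` is (n_samples, n_classes) softmax output
    from one model (e.g. MobileNet, VGG16, VGG19, a custom CNN).
    """
    votes = np.stack([p.argmax(axis=1) for p in prob_list])  # (models, samples)
    n_classes = prob_list[0].shape[1]
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, votes)
    return counts.argmax(axis=0)  # winning class per sample
```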

19 pages, 9284 KiB  
Article
UAV-YOLO12: A Multi-Scale Road Segmentation Model for UAV Remote Sensing Imagery
by Bingyan Cui, Zhen Liu and Qifeng Yang
Drones 2025, 9(8), 533; https://doi.org/10.3390/drones9080533 - 29 Jul 2025
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used for road infrastructure inspection and monitoring. However, challenges such as scale variation, complex background interference, and the scarcity of annotated UAV datasets limit the performance of traditional segmentation models. To address these challenges, this study proposes UAV-YOLOv12, a multi-scale segmentation model designed for UAV-based road imagery analysis. The proposed model builds on the YOLOv12 architecture by adding two key modules: a Selective Kernel Network (SKNet) that adjusts receptive fields dynamically, and a Partial Convolution (PConv) module that improves spatial focus and robustness in occluded regions. These enhancements help the model detect small and irregular road features in complex aerial scenes. Experimental results on a custom UAV dataset collected from national highways in Wuxi, China, show that UAV-YOLOv12 achieves F1-scores of 0.902 for highways (road-H) and 0.825 for paths (road-P), outperforming the original YOLOv12 by 5% and 3.2%, respectively, while inference time is held at 11.1 ms per image, supporting near real-time performance. Comparative evaluations with U-Net show further improvements of 7.1% and 9.5% on the two road classes. The model also exhibits strong generalization, achieving F1-scores above 0.87 on public datasets such as VHR-10 and the DroneVehicle dataset. These results demonstrate that UAV-YOLOv12 achieves high accuracy and robustness across diverse road environments and object scales.

28 pages, 10524 KiB  
Article
Automating Three-Dimensional Cadastral Models of 3D Rights and Buildings Based on the LADM Framework
by Ratri Widyastuti, Deni Suwardhi, Irwan Meilano, Andri Hernandi and Juan Firdaus
ISPRS Int. J. Geo-Inf. 2025, 14(8), 293; https://doi.org/10.3390/ijgi14080293 - 28 Jul 2025
Abstract
Before the development of 3D cadastre, cadastral systems were based on 2D representations, which now require transformation or updating. In this context, the first issue is that existing 2D rights are not aligned with recent 3D data acquired using advanced technologies such as Unmanned Aerial Vehicle–Light Detection and Ranging (UAV-LiDAR). The second issue is that point clouds of objects captured by UAV-LiDAR, such as fences and exterior building walls, are often neglected. However, these point cloud objects can be used to adjust 2D rights to correspond with recent 3D data and to update 3D building models at a higher level of detail. This research leverages such point cloud objects to automatically generate 3D rights and building models. By combining several algorithms (Iterative Closest Point (ICP), Random Forest (RF), Gaussian Mixture Model (GMM), Region Growing, and the PolyFit method) with the orthogonality concept, an automatic workflow for generating 3D cadastral models is developed. The proposed workflow improves the horizontal accuracy of the updated 2D parcels from 1.19 m to 0.612 m, and the floor area of the 3D models is accurate to approximately ±3 m². Furthermore, the resulting 3D building models provide approximately 43% to 57% of the elements required for 3D property valuation. The case study is located in Indonesia.
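Of the algorithms listed, ICP does the alignment work; the Open3D sketch below shows a minimal point-to-point registration, with the file names and correspondence distance as placeholder assumptions.

```python
import open3d as o3d

source = o3d.io.read_point_cloud("source.pcd")  # placeholder paths
target = o3d.io.read_point_cloud("target.pcd")

est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.5,  # metres; tune to point density
    estimation_method=est)
print(result.transformation)  # 4x4 rigid transform aligning source to target
```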

24 pages, 1530 KiB  
Article
A Lightweight Robust Training Method for Defending Model Poisoning Attacks in Federated Learning Assisted UAV Networks
by Lucheng Chen, Weiwei Zhai, Xiangfeng Bu, Ming Sun and Chenglin Zhu
Drones 2025, 9(8), 528; https://doi.org/10.3390/drones9080528 - 28 Jul 2025
Abstract
The integration of unmanned aerial vehicles (UAVs) into next-generation wireless networks greatly enhances the flexibility and efficiency of communication and distributed computation for ground mobile devices. Federated learning (FL) provides a privacy-preserving paradigm for device collaboration but remains highly vulnerable to poisoning attacks and is further challenged by the resource constraints and heterogeneous data common to UAV-assisted systems. Existing robust aggregation and anomaly detection methods often degrade in efficiency and reliability under these realistic adversarial and non-IID settings. To bridge these gaps, we propose FedULite, a lightweight and robust federated learning framework designed for UAV-assisted environments. FedULite features unsupervised local representation learning optimized for unlabeled, non-IID data, and leverages a robust, adaptive server-side aggregation strategy that uses cosine-similarity-based update filtering and dimension-wise adaptive learning rates to neutralize sophisticated data and model poisoning attacks. Extensive experiments across diverse datasets and adversarial scenarios demonstrate that FedULite reduces the attack success rate (ASR) from over 90% in undefended scenarios to below 5% while keeping the main-task accuracy loss within 2%, and it introduces negligible computational overhead compared to standard FedAvg, with approximately 7% additional training time.
(This article belongs to the Special Issue IoT-Enabled UAV Networks for Secure Communication)
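The filtering idea can be illustrated in a few lines; the sketch below is a simplified stand-in (median reference, hard threshold) rather than FedULite's actual rule.

```python
import numpy as np

def robust_aggregate(updates: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Cosine-similarity filtering before averaging client updates.

    `updates` has shape (n_clients, n_params); updates whose direction
    disagrees with the coordinate-wise median are dropped.
    """
    reference = np.median(updates, axis=0)
    sims = updates @ reference / (
        np.linalg.norm(updates, axis=1) * np.linalg.norm(reference) + 1e-12)
    keep = sims > threshold   # poisoned updates tend to anti-correlate
    return updates[keep].mean(axis=0)
```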

22 pages, 6010 KiB  
Article
Mapping Waterbird Habitats with UAV-Derived 2D Orthomosaic Along Belgium’s Lieve Canal
by Xingzhen Liu, Andrée De Cock, Long Ho, Kim Pham, Diego Panique-Casso, Marie Anne Eurie Forio, Wouter H. Maes and Peter L. M. Goethals
Remote Sens. 2025, 17(15), 2602; https://doi.org/10.3390/rs17152602 - 26 Jul 2025
Abstract
The accurate monitoring of waterbird abundance and habitat preferences is essential for effective ecological management and conservation planning in aquatic ecosystems. This study explores the efficacy of unmanned aerial vehicle (UAV)-based high-resolution orthomosaics for waterbird monitoring and mapping along the Lieve Canal, Belgium. We classified habitats into residential, industrial, riparian tree, and herbaceous vegetation zones and examined their influence on the spatial distribution of three focal waterbird species: Eurasian coot (Fulica atra), common moorhen (Gallinula chloropus), and wild duck (Anas platyrhynchos). Herbaceous vegetation zones consistently supported the highest waterbird densities, attributed to abundant nesting substrates and minimal human disturbance. UAV-based waterbird counts correlated strongly with ground-based surveys (R2 = 0.668), though species-specific detectability varied significantly with morphological visibility and ecological behavior: detection accuracy was highest for coots, intermediate for ducks, and lowest for moorhens, highlighting the crucial role of image resolution (ground sampling distance, GSD) in aerial monitoring. Operational challenges, including image occlusion and habitat complexity, underline the need for tailored survey protocols and advanced sensing techniques. Our findings demonstrate that UAV imagery provides a reliable and scalable method for monitoring waterbird habitats, offering critical insights for biodiversity conservation and sustainable management in aquatic landscapes.
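For context, the ground sampling distance that governs detectability follows the standard photogrammetric relation (generic, not specific to this study), with flight height H, sensor width s_w, focal length f, and image width n_w in pixels; the example values below are illustrative:

```latex
\mathrm{GSD} = \frac{H\,s_w}{f\,n_w},
\qquad
H = 50\,\mathrm{m},\; s_w = 13.2\,\mathrm{mm},\; f = 8.8\,\mathrm{mm},\; n_w = 5472\,\mathrm{px}
\;\Rightarrow\; \mathrm{GSD} \approx 1.37\,\mathrm{cm/px}.
```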

22 pages, 4664 KiB  
Article
Aerial Image-Based Crop Row Detection and Weed Pressure Mapping Method
by László Moldvai, Péter Ákos Mesterházi, Gergely Teschner and Anikó Nyéki
Agronomy 2025, 15(8), 1762; https://doi.org/10.3390/agronomy15081762 - 23 Jul 2025
Abstract
Accurate crop row detection is crucial for determining weed pressure (weeds per square meter). However, this task is complicated by the similarity between crops and weeds, the presence of missing plants within rows, and the varying growth stages of both. Our hypothesis was that in drone imagery captured at altitudes of 20–30 m—where individual plant details are not discernible—weed presence among crops can be statistically detected, allowing a weed distribution map to be generated. This study proposes a computer vision detection method, consisting of six main phases, that uses images captured by unmanned aerial vehicles (UAVs). The method was tested on 208 images. The algorithm performs well under normal conditions; however, when weed density is too high, it fails to detect the row direction properly and begins processing misleading data. To investigate these cases, 120 artificial datasets were created with varying parameters and the scenarios were analyzed. It was found that a ratio variable, the in-row concentration ratio (IRCR), can be used to determine whether a result is valid (usable) or invalid (to be discarded). The F1 score combines precision and recall through their harmonic mean; the "1" indicates that precision and recall are weighted equally, i.e., β = 1 in the general Fβ formula (reproduced below). In the case of moderate weed infestation, with 678 crop plants and 600 weeds present, the algorithm achieved an F1 score of 86.32% in plant classification, even with a 4% row disturbance level. Furthermore, the IRCR also indicates the level of weed pressure in the area. The correlation between the ground-truth weed-to-crop ratio and the weed/crop classification rate produced by the algorithm is 98–99%. As a result, the algorithm can filter out heavily infested areas that require blanket weed control and, in other cases, generate weed density maps to support precision weed management.
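The general formula referenced in the abstract, with precision P and recall R:

```latex
F_\beta = \frac{(1+\beta^2)\,P\,R}{\beta^2 P + R},
\qquad
F_1 = \frac{2\,P\,R}{P + R}.
```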

19 pages, 2726 KiB  
Article
Lightweight Detection of Inserted Chirp Symbols in Radio Transmission from Commercial UAVs
by Krzysztof K. Cwalina, Piotr Rajchowski and Jarosław Sadowski
Sensors 2025, 25(15), 4552; https://doi.org/10.3390/s25154552 - 23 Jul 2025
Abstract
Most small commercial unmanned aerial vehicles (UAVs) maintain continuous two-way radio communication with the controller. Signals emitted by the UAVs can be used to detect their presence, but because these drones use unlicensed frequency bands shared with many other wireless communication devices, UAV detection should rely on the unique characteristics of the transmitted signals. In this article, low-complexity methods for detecting chirp symbols in downlink transmission from a UAV produced by DJI are proposed. The presented methods were developed with a focus on detecting the presence of chirp symbols in a radio transmission without a priori knowledge or the need for center frequency estimation.
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
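A common low-complexity chirp detector (an illustration of the general approach, not the authors' method) multiplies the received baseband signal by a conjugate reference chirp, which compresses a matching chirp into a single FFT tone:

```python
import numpy as np

def chirp_present(x: np.ndarray, fs: float, f0: float, f1: float,
                  ratio_db: float = 10.0) -> bool:
    """Dechirp-and-FFT detection; f0/f1 are assumed chirp start/stop
    frequencies (Hz) and `ratio_db` is an assumed decision threshold."""
    n = len(x)
    t = np.arange(n) / fs
    k = (f1 - f0) / (n / fs)  # chirp rate in Hz/s
    ref = np.exp(-2j * np.pi * (f0 * t + 0.5 * k * t**2))
    spec = np.abs(np.fft.fft(x * ref))
    # A matching chirp collapses into one strong bin; use the
    # peak-to-mean ratio as a crude detection statistic.
    return spec.max() / spec.mean() > 10 ** (ratio_db / 20)
```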

26 pages, 78396 KiB  
Article
SWRD–YOLO: A Lightweight Instance Segmentation Model for Estimating Rice Lodging Degree in UAV Remote Sensing Images with Real-Time Edge Deployment
by Chunyou Guo and Feng Tan
Agriculture 2025, 15(15), 1570; https://doi.org/10.3390/agriculture15151570 - 22 Jul 2025
Abstract
Rice lodging severely affects crop growth, yield, and mechanized harvesting efficiency. The accurate detection and quantification of lodging areas are crucial for precision agriculture and timely field management. However, unmanned aerial vehicle (UAV)-based lodging detection faces challenges such as complex backgrounds, variable lighting, and irregular lodging patterns. To address these issues, this study proposes SWRD–YOLO, a lightweight instance segmentation model that enhances feature extraction and fusion using advanced convolution and attention mechanisms, and employs an optimized loss function to improve localization accuracy and achieve precise lodging area segmentation. Additionally, a grid-based lodging ratio estimation method is introduced that divides images into fixed-size grids, calculates local lodging proportions, and aggregates them into a robust overall severity assessment. Evaluated on a self-built rice lodging dataset, the model achieves 94.8% precision, 88.2% recall, 93.3% mAP@0.5, and 91.4% F1 score, with real-time inference at 16.15 FPS on an embedded NVIDIA Jetson Orin NX device. Compared to the baseline YOLOv8n-seg, precision, recall, mAP@0.5, and F1 score improved by 8.2%, 16.5%, 12.8%, and 12.8%, respectively. These results confirm the model's effectiveness and potential for deployment in intelligent crop monitoring and sustainable agriculture.
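The grid aggregation step maps directly onto NumPy; the cell size below is an assumed parameter, and the function illustrates the idea rather than reproducing the paper's code.

```python
import numpy as np

def lodging_ratio_map(mask: np.ndarray, cell: int = 64):
    """Split a binary lodging mask into fixed-size cells and return each
    cell's lodged fraction plus the aggregate severity."""
    h, w = mask.shape
    gh, gw = h // cell, w // cell
    cells = mask[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell)
    ratios = cells.mean(axis=(1, 3))  # per-cell lodged fraction
    return ratios, float(ratios.mean())
```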
