Search Results (2,840)

Search Parameters:
Keywords = unmanned aerial vehicle image

19 pages, 4142 KiB  
Article
Onboard Real-Time Hyperspectral Image Processing System Design for Unmanned Aerial Vehicles
by Ruifan Yang, Min Huang, Wenhao Zhao, Zixuan Zhang, Yan Sun, Lulu Qian and Zhanchao Wang
Sensors 2025, 25(15), 4822; https://doi.org/10.3390/s25154822 - 5 Aug 2025
Abstract
This study proposes and implements a dual-processor FPGA-ARM architecture to resolve the critical contradiction between massive data volumes and real-time processing demands in UAV-borne hyperspectral imaging. The integrated system incorporates a shortwave infrared hyperspectral camera, IMU, control module, heterogeneous computing core, and SATA SSD storage. Through hardware-level task partitioning, with the FPGA handling high-speed data buffering and the ARM handling core computational processing, it achieves a real-time end-to-end acquisition–storage–processing–display pipeline. The compact integrated device weighs merely 6 kg and consumes 40 W, making it suitable for airborne platforms. Experimental validation confirms the system's capability to store over 200 frames per second (at 640 × 270 resolution, matching the camera's maximum frame rate), its quick-look imaging capability, and its real-time processing efficacy, demonstrated via relative radiometric correction tasks (processing 5000 image frames within 1000 ms). This framework provides an effective technical solution to hyperspectral data processing bottlenecks on UAV platforms for dynamic scenario applications. Future work includes actual flight deployment to verify performance in operational environments.
(This article belongs to the Section Sensing and Imaging)
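
The relative radiometric correction used as the benchmark task above is not spelled out in the abstract. As a rough illustration only, the sketch below implements one common flat-field formulation (dark-frame subtraction plus per-pixel gain equalization); the function name and the reference frames are hypothetical, not taken from the paper.

```python
import numpy as np

def relative_radiometric_correction(frames, dark_frame, flat_frame):
    """Hypothetical per-pixel relative radiometric correction.

    frames:     (N, H, W) stack of raw sensor frames
    dark_frame: (H, W) averaged dark-current reference
    flat_frame: (H, W) averaged uniform-illumination reference
    """
    # Gain that flattens the fixed-pattern response of each pixel.
    flat = flat_frame.astype(np.float32) - dark_frame
    gain = flat.mean() / np.clip(flat, 1e-6, None)
    # Subtract dark current, then equalize pixel-to-pixel response.
    return (frames.astype(np.float32) - dark_frame) * gain

# Example at the paper's 640 x 270 frame size (100 frames for brevity).
rng = np.random.default_rng(0)
frames = rng.integers(0, 4096, size=(100, 270, 640)).astype(np.uint16)
dark = np.full((270, 640), 100.0, dtype=np.float32)
flat = np.full((270, 640), 2000.0, dtype=np.float32)
print(relative_radiometric_correction(frames, dark, flat).shape)
```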

20 pages, 1971 KiB  
Article
FFG-YOLO: Improved YOLOv8 for Target Detection of Lightweight Unmanned Aerial Vehicles
by Tongxu Wang, Sizhe Yang, Ming Wan and Yanqiu Liu
Appl. Syst. Innov. 2025, 8(4), 109; https://doi.org/10.3390/asi8040109 - 4 Aug 2025
Abstract
Target detection is essential in intelligent transportation and autonomous control of unmanned aerial vehicles (UAVs), with single-stage detection algorithms widely used due to their speed. However, these algorithms face limitations in detecting small targets, especially in UAV aerial photography, where small targets are often occluded, multi-scale semantic information is easily lost, and there is a trade-off between real-time processing and computational resources. Existing algorithms struggle to effectively extract multi-dimensional features and deep semantic information from images and to balance detection accuracy with model complexity. To address these limitations, we developed FFG-YOLO, a lightweight small-target detection method for UAVs based on YOLOv8. FFG-YOLO incorporates three modules: a feature enhancement block (FEB), a feature concat block (FCB), and a global context awareness block (GCAB). These modules strengthen feature extraction from small targets, resolve semantic bias in multi-scale feature fusion, and help differentiate small targets from complex backgrounds. We also improved the positioning accuracy of small targets using the Wasserstein distance loss function. Experiments showed that FFG-YOLO outperformed other algorithms, including YOLOv8n, in small-target detection thanks to its lightweight design, meeting the stringent real-time performance and deployment requirements of UAVs.
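
The abstract credits a Wasserstein distance loss for better small-target positioning but does not give its form. A formulation often used for tiny objects is the Normalized Wasserstein Distance between boxes modeled as 2D Gaussians; the sketch below is that published formulation, not necessarily FFG-YOLO's exact loss, and the constant c is a placeholder.

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein Distance between (cx, cy, w, h) boxes.

    Each box is modeled as a 2D Gaussian; the squared 2-Wasserstein
    distance between those Gaussians has the closed form below.
    c is a dataset-dependent constant (placeholder value here).
    """
    (cx1, cy1, w1, h1), (cx2, cy2, w2, h2) = box_a, box_b
    w2_sq = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
             + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)

def nwd_loss(pred_box, gt_box):
    # Loss shrinks as the predicted box approaches the ground truth.
    return 1.0 - nwd(pred_box, gt_box)

print(nwd_loss((10, 10, 4, 4), (11, 10, 4, 4)))  # small offset, small loss
```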

22 pages, 5136 KiB  
Article
Application of UAVs to Support Blast Design for Flyrock Mitigation: A Case Study from a Basalt Quarry
by Józef Pyra and Tomasz Żołądek
Appl. Sci. 2025, 15(15), 8614; https://doi.org/10.3390/app15158614 - 4 Aug 2025
Abstract
Blasting operations in surface mining pose a risk of flyrock, which is a critical safety concern for both personnel and infrastructure. This study presents the use of unmanned aerial vehicles (UAVs) and photogrammetric techniques to improve the accuracy of blast design, particularly in relation to controlling burden values and reducing flyrock. The research was conducted in a basalt quarry in Lower Silesia, where high rock fracturing complicated conventional blast planning. A DJI Mavic 3 Enterprise UAV was used to capture high-resolution aerial imagery, and 3D models were created using Strayos software. These models enabled precise analysis of bench face geometry and burden distribution with centimeter-level accuracy. The results showed a significant improvement in identifying zones with improper burden values and allowed for real-time corrections in blasthole design. Despite a ten-fold reduction in the number of images used, no loss in model quality was observed. UAV-based surveys followed software-recommended flight paths, and the application of this methodology reduced the flyrock range by an average of 42% near sensitive areas. This approach demonstrates the operational benefits and enhanced safety potential of integrating UAV-based photogrammetry into blasting design workflows.
(This article belongs to the Special Issue Advanced Blasting Technology for Mining)

23 pages, 4382 KiB  
Article
MTL-PlotCounter: Multitask Driven Soybean Seedling Counting at the Plot Scale Based on UAV Imagery
by Xiaoqin Xue, Chenfei Li, Zonglin Liu, Yile Sun, Xuru Li and Haiyan Song
Remote Sens. 2025, 17(15), 2688; https://doi.org/10.3390/rs17152688 - 3 Aug 2025
Abstract
Accurate and timely estimation of soybean emergence at the plot scale using unmanned aerial vehicle (UAV) remote sensing imagery is essential for germplasm evaluation in breeding programs, where breeders prioritize overall plot-scale emergence rates over subimage-based counts. This study proposes PlotCounter, a deep learning regression model based on the TasselNetV2++ architecture, designed for plot-scale soybean seedling counting. It employs a patch-based training strategy combined with full-plot validation to achieve reliable performance with limited breeding plot data. To incorporate additional agronomic information, PlotCounter is extended into a multitask learning framework (MTL-PlotCounter) that integrates sowing metadata such as variety, number of seeds per hole, and sowing density as auxiliary classification tasks. RGB images of 54 breeding plots were captured in 2023 using a DJI Mavic 2 Pro UAV and processed into an orthomosaic for model development and evaluation. PlotCounter achieves a root mean square error (RMSE) of 6.98 and a relative RMSE (rRMSE) of 6.93%. The variety-integrated MTL-PlotCounter, V-MTL-PlotCounter, performs best, with relative reductions of 8.74% in RMSE and 3.03% in rRMSE compared to PlotCounter, and outperforms representative YOLO-based models. Additionally, both PlotCounter and V-MTL-PlotCounter are deployed on a web-based platform powered by a multimodal large language model, enabling users to upload images via an interactive interface, automatically count seedlings, and analyze plot-scale emergence. This study highlights the potential of integrating UAV remote sensing, agronomic metadata, specialized deep learning models, and multimodal large language models for advanced crop monitoring.
(This article belongs to the Special Issue Recent Advances in Multimodal Hyperspectral Remote Sensing)
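
As a sketch of the multitask idea described above (a counting regressor with auxiliary classification heads for sowing metadata), the PyTorch fragment below is illustrative only; the feature dimension, class counts, loss choices, and weighting are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTLHead(nn.Module):
    """Illustrative multitask head: one count regressor plus auxiliary
    classifiers for variety, seeds per hole, and sowing density."""
    def __init__(self, feat_dim=128, n_variety=5, n_seeds=3, n_density=3):
        super().__init__()
        self.count = nn.Linear(feat_dim, 1)
        self.variety = nn.Linear(feat_dim, n_variety)
        self.seeds = nn.Linear(feat_dim, n_seeds)
        self.density = nn.Linear(feat_dim, n_density)

    def forward(self, feats):
        return (self.count(feats).squeeze(-1), self.variety(feats),
                self.seeds(feats), self.density(feats))

def mtl_loss(outputs, targets, aux_weight=0.1):
    """Counting loss plus down-weighted auxiliary classification losses."""
    count, variety, seeds, density = outputs
    t_count, t_variety, t_seeds, t_density = targets
    loss = F.l1_loss(count, t_count)
    for logits, t in ((variety, t_variety), (seeds, t_seeds),
                      (density, t_density)):
        loss = loss + aux_weight * F.cross_entropy(logits, t)
    return loss

feats = torch.randn(8, 128)
targets = (torch.rand(8) * 50, torch.randint(0, 5, (8,)),
           torch.randint(0, 3, (8,)), torch.randint(0, 3, (8,)))
print(mtl_loss(MTLHead()(feats), targets))
```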

27 pages, 1382 KiB  
Review
Application of Non-Destructive Technology in Plant Disease Detection: Review
by Yanping Wang, Jun Sun, Zhaoqi Wu, Yilin Jia and Chunxia Dai
Agriculture 2025, 15(15), 1670; https://doi.org/10.3390/agriculture15151670 - 1 Aug 2025
Abstract
In recent years, research on plant disease detection has combined artificial intelligence, hyperspectral imaging, unmanned aerial vehicle remote sensing, and other technologies, promoting the transformation of pest and disease control in smart agriculture towards digitalization and artificial intelligence. This review systematically elaborates on the research status of non-destructive detection techniques used for plant disease identification and detection, introducing two main types of methods: spectral technology and imaging technology. It details the principles and application examples of each technology and summarizes their advantages and disadvantages. The review clearly indicates that non-destructive detection techniques can detect plant diseases and pests quickly, accurately, and without damage. In the future, integrating multiple non-destructive detection technologies, developing portable detection devices, and combining more efficient data processing methods will become the core development directions of this field.
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)

22 pages, 6482 KiB  
Article
Surface Damage Detection in Hydraulic Structures from UAV Images Using Lightweight Neural Networks
by Feng Han and Chongshi Gu
Remote Sens. 2025, 17(15), 2668; https://doi.org/10.3390/rs17152668 - 1 Aug 2025
Abstract
Timely and accurate identification of surface damage in hydraulic structures is essential for maintaining structural integrity and ensuring operational safety. Traditional manual inspections are time-consuming, labor-intensive, and prone to subjectivity, especially for large-scale or inaccessible infrastructure. Leveraging advancements in aerial imaging, unmanned aerial vehicles (UAVs) enable efficient acquisition of high-resolution visual data across expansive hydraulic environments. However, existing deep learning (DL) models often lack architectural adaptations for the visual complexities of UAV imagery, including low-texture contrast, noise interference, and irregular crack patterns. To address these challenges, this study proposes a lightweight, robust, and high-precision segmentation framework, called LFPA-EAM-Fast-SCNN, specifically designed for pixel-level damage detection in UAV-captured images of hydraulic concrete surfaces. The DL-based model integrates an enhanced Fast-SCNN backbone for efficient feature extraction, a Lightweight Feature Pyramid Attention (LFPA) module for multi-scale context enhancement, and an Edge Attention Module (EAM) for refined boundary localization. Experimental results on a custom UAV-based dataset show that the proposed damage detection method achieves superior performance, with a precision of 0.949, a recall of 0.892, an F1 score of 0.906, and an IoU of 87.92%, outperforming U-Net, Attention U-Net, SegNet, DeepLab v3+, I-ST-UNet, and SegFormer. Additionally, it reaches a real-time inference speed of 56.31 FPS, significantly surpassing the other models. These results demonstrate the framework's strong generalization capability and robustness under varying noise levels and damage scenarios, underscoring its suitability for scalable, automated surface damage assessment in UAV-based remote sensing of civil infrastructure.
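
For reference, the precision, recall, F1, and IoU figures quoted above are the standard pixel-level quantities; a minimal computation over binary damage masks might look like this (names are illustrative).

```python
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-9):
    """Pixel-level precision, recall, F1, and IoU for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return precision, recall, f1, iou

pred = np.random.default_rng(0).random((256, 256)) > 0.5
print(segmentation_metrics(pred, pred))  # perfect overlap: all ones
```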

18 pages, 10604 KiB  
Article
Fast Detection of Plants in Soybean Fields Using UAVs, YOLOv8x Framework, and Image Segmentation
by Ravil I. Mukhamediev, Valentin Smurygin, Adilkhan Symagulov, Yan Kuchin, Yelena Popova, Farida Abdoldina, Laila Tabynbayeva, Viktors Gopejenko and Alexey Oxenenko
Drones 2025, 9(8), 547; https://doi.org/10.3390/drones9080547 - 1 Aug 2025
Abstract
The accuracy of classification and localization of plants in images obtained from an unmanned aerial vehicle (UAV) is of great importance when implementing precision farming technologies. It allows for the effective application of variable rate technologies, which not only saves chemicals but also reduces the environmental load on cultivated fields. Machine learning algorithms are widely used for plant classification, and the YOLO algorithm is studied for simultaneous identification, localization, and classification of plants. However, the quality of the algorithm depends significantly on the training set. The aim of this study is to detect not only the cultivated plant (soybean) but also the weeds growing in the field. The dataset developed in the course of the research addresses this issue by covering soybean as well as seven weed species common in the fields of Kazakhstan. The article describes an approach to preparing a training set of images for soybean fields using preliminary thresholding and bounding box (Bbox) segmentation of marked images, which improves the quality of plant classification and localization. The conducted research and computational experiments determined that Bbox segmentation shows the best results. The quality of classification and localization with Bbox segmentation increased significantly (F1 score from 0.64 to 0.959, mAP50 from 0.72 to 0.979); for the cultivated plant (soybean), the best classification results known to date were achieved with YOLOv8x on UAV images, with an F1 score of 0.984. At the same time, the plant detection rate increased 13-fold compared to the model proposed earlier in the literature.
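
The paper's preliminary-thresholding step is not detailed in the abstract; one plausible reading, sketched below under that assumption, is a vegetation mask from the Excess Green index (Otsu-thresholded) used to tighten marked bounding boxes before training. The function names and the choice of index are hypothetical.

```python
import cv2
import numpy as np

def vegetation_mask(image_bgr):
    """Excess Green index + Otsu threshold: a plausible pre-labeling step
    to separate plants from soil before refining Bbox annotations."""
    b, g, r = cv2.split(image_bgr.astype(np.float32) / 255.0)
    exg = 2 * g - r - b                      # Excess Green index
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def tighten_boxes(mask, boxes):
    """Shrink each marked (x, y, w, h) box to the vegetation pixels inside it."""
    tightened = []
    for x, y, w, h in boxes:
        ys, xs = np.nonzero(mask[y:y + h, x:x + w])
        if len(xs):
            tightened.append((x + xs.min(), y + ys.min(),
                              xs.max() - xs.min() + 1, ys.max() - ys.min() + 1))
    return tightened
```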

22 pages, 8105 KiB  
Article
Extraction of Sparse Vegetation Cover in Deserts Based on UAV Remote Sensing
by Jie Han, Jinlei Zhu, Xiaoming Cao, Lei Xi, Zhao Qi, Yongxin Li, Xingyu Wang and Jiaxiu Zou
Remote Sens. 2025, 17(15), 2665; https://doi.org/10.3390/rs17152665 - 1 Aug 2025
Abstract
The unique characteristics of desert vegetation, such as varied leaf morphology, discrete canopy structures, and sparse, uneven distribution, pose significant challenges for remote sensing-based estimation of fractional vegetation cover (FVC). Unmanned Aerial Vehicle (UAV) systems can accurately distinguish vegetation patches, extract weak vegetation signals, and navigate complex terrain, making them suitable for small-scale FVC extraction. In this study, we selected the floodplain fan with Caragana korshinskii Kom. as the constructive species in Hatengtaohai National Nature Reserve, Bayannur, Inner Mongolia, China, as our study area. We investigated remote sensing extraction of desert sparse vegetation cover by placing samples across three gradients: the top, middle, and edge of the fan. We then acquired UAV multispectral images; evaluated the applicability of various vegetation indices (VIs) using methods such as supervised classification, linear regression models, and machine learning; and explored the feasibility and stability of multiple machine learning models in this region. Our results indicate the following: (1) The multispectral vegetation indices are superior to the visible-band vegetation indices and more suitable for FVC extraction in vegetation-sparse desert regions. (2) Comparing five machine learning regression models, the XGBoost and KNN models exhibited relatively lower estimation performance in the study area. The spatial distribution of plots appeared to influence the stability of the SVM model when estimating FVC. In contrast, the RF and LASSO models demonstrated robust stability across both training and testing datasets. Notably, the RF model achieved the best inversion performance (R2 = 0.876, RMSE = 0.020, MAE = 0.016), indicating that RF is one of the most suitable models for retrieving FVC in naturally sparse desert vegetation. This study provides a valuable contribution to the limited existing research on remote sensing-based estimation of FVC and characterization of spatial heterogeneity in small-scale desert sparse vegetation ecosystems dominated by a single species.
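
The RF result above comes from a standard regression setup: vegetation-index features per plot against reference FVC. The sketch below reproduces that workflow with scikit-learn on synthetic stand-in data; the feature count and hyperparameters are assumptions, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 6 vegetation-index features per plot, FVC target in [0, 1].
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(120, 6))
y = np.clip(0.6 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.02, 120), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R2={r2_score(y_te, pred):.3f} "
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f} "
      f"MAE={mean_absolute_error(y_te, pred):.3f}")
```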

21 pages, 4657 KiB  
Article
A Semi-Automated RGB-Based Method for Wildlife Crop Damage Detection Using QGIS-Integrated UAV Workflow
by Sebastian Banaszek and Michał Szota
Sensors 2025, 25(15), 4734; https://doi.org/10.3390/s25154734 - 31 Jul 2025
Abstract
Monitoring crop damage caused by wildlife remains a significant challenge in agricultural management, particularly in large-scale monocultures such as maize. This study presents a semi-automated process for detecting wildlife-induced damage using RGB imagery acquired from unmanned aerial vehicles (UAVs). The method is designed for non-specialist users and is fully integrated within the QGIS platform. The proposed approach calculates three vegetation indices, Excess Green (ExG), Green Leaf Index (GLI), and Modified Green-Red Vegetation Index (MGRVI), from a standardized orthomosaic generated from UAV-collected RGB images. Subsequently, an unsupervised k-means clustering algorithm divides the field into five vegetation vigor classes. Within each class, the 25% of pixels with the lowest average index values are preliminarily classified as damaged. A dedicated QGIS plugin enables drone data analysts (DDAs) to interactively adjust index thresholds based on visual interpretation. The method was validated on a 50-hectare maize field, where 7 hectares of damage (15% of the area) were identified. The results indicate a high level of agreement between the automated and manual classifications, with an overall accuracy of 81%. The highest concentration of damage occurred in the "moderate" and "low" vigor zones. Final products included vigor classification maps, binary damage masks, and summary reports in HTML and DOCX formats with visualizations and statistical data. The results confirm the effectiveness and scalability of the proposed RGB-based procedure for crop damage assessment. The method offers a repeatable, cost-effective, and field-operable alternative to multispectral or AI-based approaches, making it suitable for integration with precision agriculture practices and wildlife population management.
(This article belongs to the Section Remote Sensors)
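
The index-and-clustering pipeline above maps directly to a few lines of array code. The sketch below follows the abstract's description (ExG, GLI, MGRVI; five k-means vigor classes; lowest 25% per class flagged as damaged); it is a simplification of the QGIS plugin, and ranking pixels by the mean of the three indices is an assumption about how "lowest average index values" is computed.

```python
import numpy as np
from sklearn.cluster import KMeans

def vigor_and_damage(rgb, n_classes=5, damaged_fraction=0.25, eps=1e-9):
    """k-means vigor classes from ExG/GLI/MGRVI, plus a damage mask
    flagging the lowest-index quartile within each class."""
    r, g, b = [rgb[..., i].astype(np.float32) / 255.0 for i in range(3)]
    exg = 2 * g - r - b
    gli = (2 * g - r - b) / (2 * g + r + b + eps)
    mgrvi = (g ** 2 - r ** 2) / (g ** 2 + r ** 2 + eps)
    feats = np.stack([exg, gli, mgrvi], axis=-1).reshape(-1, 3)

    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=0).fit_predict(feats)
    mean_idx = feats.mean(axis=1)        # per-pixel average of the indices

    damaged = np.zeros(len(feats), dtype=bool)
    for c in range(n_classes):
        in_c = labels == c
        cutoff = np.quantile(mean_idx[in_c], damaged_fraction)
        damaged[in_c] = mean_idx[in_c] <= cutoff
    return labels.reshape(rgb.shape[:2]), damaged.reshape(rgb.shape[:2])
```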

27 pages, 6715 KiB  
Article
Structural Component Identification and Damage Localization of Civil Infrastructure Using Semantic Segmentation
by Piotr Tauzowski, Mariusz Ostrowski, Dominik Bogucki, Piotr Jarosik and Bartłomiej Błachowski
Sensors 2025, 25(15), 4698; https://doi.org/10.3390/s25154698 - 30 Jul 2025
Abstract
Visual inspection of civil infrastructure for structural health assessment, as performed by structural engineers, is expensive and time-consuming. Automating this process is therefore highly attractive and has received significant attention in recent years. With the increasing capabilities of computers, deep neural networks have become a standard tool and can be used for structural health inspections. A key challenge, however, is the availability of reliable datasets. In this work, the U-net and DeepLab v3+ convolutional neural networks are trained on the synthetic Tokaido dataset. This dataset comprises images representative of data acquired by unmanned aerial vehicle (UAV) imagery and corresponding ground truth data. The data includes semantic segmentation masks both for categorizing structural elements (slabs, beams, and columns) and for assessing structural damage (concrete spalling or exposed rebars). Data augmentation, including both image quality degradation (e.g., brightness modification, added noise) and image transformations (e.g., image flipping), is applied to the synthetic dataset. The selected neural network architectures achieve excellent performance, reaching 97% accuracy and 87% Mean Intersection over Union (mIoU) on the validation data. They also demonstrate promising results in the semantic segmentation of real-world structures captured in photographs, despite being trained solely on synthetic data. Additionally, based on the obtained semantic segmentation results, DeepLab v3+ outperforms U-net in structural component identification, but not in the damage identification task.
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
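
The augmentation recipe above (brightness changes, added noise, flips) is easy to mirror for segmentation, where geometric transforms must be applied to image and mask together. A minimal sketch, with arbitrary probabilities and magnitudes:

```python
import numpy as np

def augment(image, mask, rng):
    """Paired augmentation: the mask follows geometric transforms,
    while photometric changes touch the image only."""
    if rng.random() < 0.5:                        # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                        # brightness modification
        image = np.clip(image * rng.uniform(0.7, 1.3), 0, 255)
    if rng.random() < 0.5:                        # additive Gaussian noise
        image = np.clip(image + rng.normal(0, 8, image.shape), 0, 255)
    return image.astype(np.uint8), mask
```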

28 pages, 4007 KiB  
Article
Voting-Based Classification Approach for Date Palm Health Detection Using UAV Camera Images: Vision and Learning
by Abdallah Guettaf Temam, Mohamed Nadour, Lakhmissi Cherroun, Ahmed Hafaifa, Giovanni Angiulli and Fabio La Foresta
Drones 2025, 9(8), 534; https://doi.org/10.3390/drones9080534 - 29 Jul 2025
Abstract
In this study, we introduce the application of deep learning (DL) models, specifically convolutional neural networks (CNNs), for detecting the health status of date palm leaves using images captured by an unmanned aerial vehicle (UAV). The UAV dynamics are modeled using the Newton–Euler method to ensure stable flight and accurate image acquisition. The deep learning models are combined in a voting-based classification (VBC) system that integrates multiple CNN architectures, including MobileNet, a handcrafted CNN, VGG16, and VGG19, to enhance classification accuracy and robustness. The classifiers independently generate predictions, and a voting mechanism determines the final classification. This hybridization of image-based visual servoing (IBVS) and classifiers enables immediate adaptation to changing conditions, providing smooth, straightforward flight as well as vision-based classification. The dataset used in this study was collected with a dual-camera UAV, which captures high-resolution images to detect pests in date palm leaves. After applying the proposed classification strategy, the implemented voting method achieved an impressive accuracy of 99.16% on the test set for detecting health conditions in date palm leaves, surpassing the individual classifiers. The obtained results are discussed and compared to show the effectiveness of this classification technique.
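
The abstract does not say whether the vote over MobileNet, the handcrafted CNN, VGG16, and VGG19 is hard (majority) or soft (probability averaging); the sketch below shows both variants over stacked softmax outputs, as one plausible reading.

```python
import numpy as np

def vote(prob_maps, weights=None):
    """Combine per-model class probabilities into ensemble predictions.

    prob_maps: list of (n_samples, n_classes) softmax outputs,
    one array per CNN in the ensemble.
    """
    probs = np.asarray(prob_maps)          # (n_models, n_samples, n_classes)
    n_classes = probs.shape[-1]
    votes = probs.argmax(-1)               # each model's predicted class
    # Hard voting: majority class per sample.
    hard = np.array([np.bincount(votes[:, i], minlength=n_classes).argmax()
                     for i in range(votes.shape[1])])
    # Soft voting: (optionally weighted) average of probabilities.
    soft = np.average(probs, axis=0, weights=weights).argmax(-1)
    return hard, soft

rng = np.random.default_rng(1)
models = [rng.dirichlet(np.ones(3), size=10) for _ in range(4)]
hard, soft = vote(models)
print(hard, soft)
```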

19 pages, 9284 KiB  
Article
UAV-YOLO12: A Multi-Scale Road Segmentation Model for UAV Remote Sensing Imagery
by Bingyan Cui, Zhen Liu and Qifeng Yang
Drones 2025, 9(8), 533; https://doi.org/10.3390/drones9080533 - 29 Jul 2025
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used for road infrastructure inspection and monitoring. However, challenges such as scale variation, complex background interference, and the scarcity of annotated UAV datasets limit the performance of traditional segmentation models. To address these challenges, this study proposes UAV-YOLOv12, a multi-scale segmentation model specifically designed for UAV-based road imagery analysis. The proposed model builds on the YOLOv12 architecture by adding two key modules: a Selective Kernel Network (SKNet) to adjust receptive fields dynamically and a Partial Convolution (PConv) module to improve spatial focus and robustness in occluded regions. These enhancements help the model better detect small and irregular road features in complex aerial scenes. Experimental results on a custom UAV dataset collected from national highways in Wuxi, China, show that UAV-YOLOv12 achieves F1-scores of 0.902 for highways (road-H) and 0.825 for paths (road-P), outperforming the original YOLOv12 by 5% and 3.2%, respectively. Inference speed is maintained at 11.1 ms per image, supporting near real-time performance. Moreover, comparative evaluations show that UAV-YOLOv12 improves on U-Net by 7.1% and 9.5% for these two classes, respectively. The model also exhibits strong generalization ability, achieving F1-scores above 0.87 on public datasets such as VHR-10 and the Drone Vehicle dataset. These results demonstrate that UAV-YOLOv12 achieves high accuracy and robustness across diverse road environments and object scales.

22 pages, 6010 KiB  
Article
Mapping Waterbird Habitats with UAV-Derived 2D Orthomosaic Along Belgium’s Lieve Canal
by Xingzhen Liu, Andrée De Cock, Long Ho, Kim Pham, Diego Panique-Casso, Marie Anne Eurie Forio, Wouter H. Maes and Peter L. M. Goethals
Remote Sens. 2025, 17(15), 2602; https://doi.org/10.3390/rs17152602 - 26 Jul 2025
Abstract
The accurate monitoring of waterbird abundance and their habitat preferences is essential for effective ecological management and conservation planning in aquatic ecosystems. This study explores the efficacy of unmanned aerial vehicle (UAV)-based high-resolution orthomosaics for waterbird monitoring and mapping along the Lieve Canal, Belgium. We systematically classified habitats into residential, industrial, riparian tree, and herbaceous vegetation zones, examining their influence on the spatial distribution of three focal waterbird species: Eurasian coot (Fulica atra), common moorhen (Gallinula chloropus), and wild duck (Anas platyrhynchos). Herbaceous vegetation zones consistently supported the highest waterbird densities, attributed to abundant nesting substrates and minimal human disturbance. UAV-based waterbird counts correlated strongly with ground-based surveys (R2 = 0.668), though species-specific detectability varied significantly due to morphological visibility and ecological behaviors. Detection accuracy was highest for coots, intermediate for ducks, and lowest for moorhens, highlighting the crucial role of image resolution (ground sampling distance, GSD) in aerial monitoring. Operational challenges, including image occlusion and habitat complexity, underline the need for tailored survey protocols and advanced sensing techniques. Our findings demonstrate that UAV imagery provides a reliable and scalable method for monitoring waterbird habitats, offering critical insights for biodiversity conservation and sustainable management practices in aquatic landscapes.

23 pages, 4324 KiB  
Article
Monitoring Nitrogen Uptake and Grain Quality in Ponded and Aerobic Rice with the Squared Simplified Canopy Chlorophyll Content Index
by Gonzalo Carracelas, John Hornbuckle and Carlos Ballester
Remote Sens. 2025, 17(15), 2598; https://doi.org/10.3390/rs17152598 - 25 Jul 2025
Abstract
Remote sensing tools have been proposed to assist with rice crop monitoring but have been developed and validated on ponded rice. This two-year study, conducted on a commercial rice farm with irrigation automation technology, aimed to (i) understand how canopy reflectance differs between high-yielding ponded and aerobic rice, (ii) validate the feasibility of using the squared simplified canopy chlorophyll content index (SCCCI2) for N uptake estimates, and (iii) explore the SCCCI2 and similar chlorophyll-sensitive indices for grain quality monitoring. Multispectral images were collected from an unmanned aerial vehicle during both rice-growing seasons. Above-ground biomass and nitrogen (N) uptake were measured at panicle initiation (PI). The performance of previously published single-vegetation-index models in estimating rice N uptake was assessed. Yield and grain quality were determined at harvest. Results showed that canopy reflectance in the visible and near-infrared regions differed between aerobic and ponded rice early in the growing season. Chlorophyll-sensitive indices showed lower values in aerobic rice than in ponded rice at PI, despite similar yields at harvest. The SCCCI2 model (RMSE = 20.52, Bias = −6.21 kg N ha−1, and MAPE = 11.95%) outperformed the other models assessed. The SCCCI2, squared normalized difference red edge index, and chlorophyll green index correlated at PI with the percentage of cracked grain, immature grain, and quality score, suggesting that grain milling quality parameters could be associated with N uptake at PI. This study highlights canopy reflectance differences between high-yielding aerobic (averaging 15 Mg ha−1) and ponded rice at key phenological stages and confirms the validity of a single-vegetation-index model based on the SCCCI2 for N uptake estimates in ponded and non-ponded rice crops.
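
In the agronomy literature the simplified canopy chlorophyll content index is the ratio NDRE/NDVI; squaring it to obtain SCCCI2 is inferred here from the index name rather than stated in the abstract, so treat the sketch below as an assumption to verify against the cited model.

```python
import numpy as np

def sccci2(nir, red, red_edge, eps=1e-9):
    """Squared simplified canopy chlorophyll content index (assumed form).

    SCCCI = NDRE / NDVI, with NDVI = (NIR - R) / (NIR + R)
    and NDRE = (NIR - RE) / (NIR + RE); SCCCI2 squares the ratio.
    """
    ndvi = (nir - red) / (nir + red + eps)
    ndre = (nir - red_edge) / (nir + red_edge + eps)
    return (ndre / (ndvi + eps)) ** 2

# Example reflectances from a red/red-edge/NIR multispectral band set.
print(sccci2(np.array([0.45]), np.array([0.08]), np.array([0.30])))
```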

22 pages, 4664 KiB  
Article
Aerial Image-Based Crop Row Detection and Weed Pressure Mapping Method
by László Moldvai, Péter Ákos Mesterházi, Gergely Teschner and Anikó Nyéki
Agronomy 2025, 15(8), 1762; https://doi.org/10.3390/agronomy15081762 - 23 Jul 2025
Abstract
Accurate crop row detection is crucial for determining weed pressure (weed items per square meter). However, this task is complicated by the similarity between crops and weeds, the presence of missing plants within rows, and the varying growth stages of both. Our hypothesis was that in drone imagery captured at altitudes of 20–30 m, where individual plant details are not discernible, weed presence among crops can be statistically detected, allowing for the generation of a weed distribution map. This study proposes a computer vision detection method, consisting of six main phases, that uses images captured by unmanned aerial vehicles (UAVs). The method was tested on 208 images. The algorithm performs well under normal conditions; however, when weed density is too high, it fails to detect the row direction properly and begins processing misleading data. To investigate these cases, 120 artificial datasets were created with varying parameters, and the scenarios were analyzed. It was found that a ratio variable, the in-row concentration ratio (IRCR), can be used to determine whether a result is valid (usable) or invalid (to be discarded). The F1 score is a metric combining precision and recall using a harmonic mean, where "1" indicates that precision and recall are equally weighted, i.e., β = 1 in the general Fβ formula. In a case of moderate weed infestation, with 678 crop plants and 600 weeds present, the algorithm achieved an F1 score of 86.32% in plant classification, even with a 4% row disturbance level. Furthermore, the IRCR also indicates the level of weed pressure in the area. The correlation between the ground truth weed-to-crop ratio and the weed/crop classification rate produced by the algorithm is 98–99%. As a result, the algorithm is capable of filtering out heavily infested areas that require full weed control and of generating weed density maps in other cases to support precision weed management.
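
The Fβ definition referenced above has the standard closed form; for completeness:

```python
def f_beta(precision, recall, beta=1.0):
    """General F-beta score; beta = 1 weights precision and recall equally,
    recovering the harmonic-mean F1 used in the paper."""
    return ((1 + beta ** 2) * precision * recall
            / (beta ** 2 * precision + recall))

print(round(f_beta(0.88, 0.85), 4))  # F1 for a sample precision/recall pair
```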
