Search Results (2,845)

Search Parameters:
Keywords = unmanned aerial vehicle images

17 pages, 1275 KiB  
Technical Note
Agronomic Information Extraction from UAV-Based Thermal Photogrammetry Using MATLAB
by Francesco Paciolla, Giovanni Popeo, Alessia Farella and Simone Pascuzzi
Remote Sens. 2025, 17(15), 2746; https://doi.org/10.3390/rs17152746 - 7 Aug 2025
Abstract
Thermal cameras are becoming popular in several applications of precision agriculture, including crop and soil monitoring, efficient irrigation scheduling, crop maturity assessment, and yield mapping. These sensors can now be integrated as payloads on unmanned aerial vehicles, providing the high spatial and temporal resolution needed to understand the variability of crop and soil conditions in depth. However, few commercial software programs, such as PIX4D Mapper, can process thermal images, and their functionalities are very limited. This paper reports on the implementation of a custom MATLAB® R2024a script to extract agronomic information from thermal orthomosaics obtained from images acquired by the DJI Mavic 3T drone. This approach enables us to evaluate the temperature at each point of an orthomosaic, create regions of interest, calculate basic statistics of the spatial temperature distribution, and compute the Crop Water Stress Index. In the authors’ opinion, the reported approach can be easily replicated and can serve as a valuable tool for scientists who work with thermal images in the agricultural sector.
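
The Crop Water Stress Index computation at the heart of this workflow is compact enough to sketch outside MATLAB. A minimal NumPy version under the standard empirical CWSI definition — the baseline temperatures and ROI data below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def crop_water_stress_index(t_canopy, t_wet, t_dry):
    """Empirical CWSI: 0 = well-watered canopy, 1 = fully water-stressed.

    t_canopy : array of canopy temperatures from the thermal orthomosaic (°C)
    t_wet    : lower baseline, temperature of a fully transpiring canopy (°C)
    t_dry    : upper baseline, temperature of a non-transpiring canopy (°C)
    """
    cwsi = (t_canopy - t_wet) / (t_dry - t_wet)
    return np.clip(cwsi, 0.0, 1.0)  # clamp values pushed outside the baselines by noise

# Illustrative region of interest: mean CWSI over a 100 x 100-pixel plot
rng = np.random.default_rng(0)
roi = rng.uniform(24.0, 38.0, (100, 100))  # stand-in for real orthomosaic temperatures
print(crop_water_stress_index(roi, t_wet=24.0, t_dry=40.0).mean())
```
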
22 pages, 5681 KiB  
Article
Automatic Detection System for Rainfall-Induced Shallow Landslides in Southeastern China Using Deep Learning and Unmanned Aerial Vehicle Imagery
by Yunfu Zhu, Bing Xia, Jianying Huang, Yuxuan Zhou, Yujie Su and Hong Gao
Water 2025, 17(15), 2349; https://doi.org/10.3390/w17152349 - 7 Aug 2025
Abstract
In southeastern China, seasonal rainfall is intense, mountains and hills are extensive, and many small-scale, shallow landslides occur after consecutive seasons of heavy rainfall. High-precision automated identification systems can quickly pinpoint the scope of a disaster and support critical decisions such as evacuating residents, managing engineering countermeasures, and assessing damage. Many systems have been designed to detect such shallow landslides, but few combine high resolution, high automation, and real-time landslide identification. Taking accuracy, automation, and real-time capability into account, we designed an automatic rainfall-induced shallow landslide detection system based on deep learning and Unmanned Aerial Vehicle (UAV) images. The system uses UAVs to capture high-resolution imagery, the U-Net (a U-shaped convolutional neural network) to combine multi-scale features, an adaptive edge enhancement loss function to improve landslide boundary identification, and the “UAV Cruise Geological Hazard AI Identification System” software with an automated processing chain. The system integrates UAV-specific preprocessing and achieves a processing speed of 30 s per square kilometer. It was validated in Wanli District, Nanchang City, Jiangxi Province. The results show a Mean Intersection over Union (MIoU) of 90.7% and a Pixel Accuracy of 92.3%. Compared with traditional methods, the system significantly improves the accuracy of landslide detection.
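
The two reported metrics are standard segmentation scores; a minimal NumPy sketch of how they might be computed for a landslide mask (the binary class labels are an assumption):

```python
import numpy as np

def miou_and_pixel_accuracy(pred, target, num_classes=2):
    """Mean Intersection over Union and Pixel Accuracy for segmentation maps.

    pred, target : integer class maps of equal shape
                   (assumed here: 0 = background, 1 = landslide)
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)  # skip classes absent from both maps
    pixel_acc = (pred == target).mean()
    return float(np.mean(ious)), float(pixel_acc)
```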

26 pages, 10480 KiB  
Article
Monitoring Chlorophyll Content of Brassica napus L. Based on UAV Multispectral and RGB Feature Fusion
by Yongqi Sun, Jiali Ma, Mengting Lyu, Jianxun Shen, Jianping Ying, Skhawat Ali, Basharat Ali, Wenqiang Lan, Yiwa Hu, Fei Liu, Weijun Zhou and Wenjian Song
Agronomy 2025, 15(8), 1900; https://doi.org/10.3390/agronomy15081900 - 7 Aug 2025
Abstract
Accurate prediction of chlorophyll content in Brassica napus L. (rapeseed) is essential for monitoring plant nutritional status and precision agricultural management. Most existing studies focus on a single cultivar, limiting general applicability. This study used unmanned aerial vehicle (UAV)-based RGB and multispectral imagery to evaluate the chlorophyll content of six rapeseed cultivars across mixed growth stages, including the seedling, bolting, and initial flowering stages. ExG-ExR threshold segmentation was applied to remove background interference. Subsequently, color and spectral indices were extracted from the segmented images and ranked according to their correlations with measured chlorophyll content. Partial Least Squares Regression (PLSR), Multiple Linear Regression (MLR), and Support Vector Regression (SVR) models were independently established using subsets of the top-ranked features. Model performance was assessed by comparing prediction accuracy (R2 and RMSE). Results demonstrated significant accuracy improvements following background removal, especially for the SVR model. Compared to data without background removal, accuracy increased notably by 8.0% (R2p improved from 0.683 to 0.763) for color indices and 3.1% (R2p from 0.835 to 0.866) for spectral indices. Additionally, stepwise fusion of spectral and color indices further improved prediction accuracy. Optimal results were obtained by fusing the top seven color features ranked by correlation with chlorophyll content, achieving an R2p of 0.878 and an RMSE of 52.187 μg/g. These findings highlight the effectiveness of background removal and feature fusion in enhancing chlorophyll prediction accuracy.
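
The ExG-ExR segmentation mentioned above has a well-known closed form: a pixel is vegetation where excess green exceeds excess red. A hedged NumPy sketch, assuming a normalized-RGB input:

```python
import numpy as np

def exg_exr_mask(rgb):
    """Vegetation mask via ExG - ExR > 0 on chromatic (normalized) RGB.

    rgb : float array of shape (H, W, 3) with values in [0, 1]
    """
    total = rgb.sum(axis=2) + 1e-8
    r, g, b = (rgb[..., i] / total for i in range(3))  # chromatic coordinates
    exg = 2.0 * g - r - b        # Excess Green index
    exr = 1.4 * r - g            # Excess Red index
    return (exg - exr) > 0.0     # True = vegetation, False = background
```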

13 pages, 14213 KiB  
Article
All-Weather Drone Vision: Passive SWIR Imaging in Fog and Rain
by Alexander Bessonov, Aleksei Rozanov, Richard White, Galih Suwito, Ivonne Medina-Salazar, Marat Lutfullin, Dmitrii Gusev and Ilya Shikov
Drones 2025, 9(8), 553; https://doi.org/10.3390/drones9080553 - 7 Aug 2025
Abstract
Short-wave-infrared (SWIR) imaging can extend drone operations into fog and rain, yet the optimum spectral strategy remains unclear. We evaluated a drone-borne quantum-dot SWIR camera inside a climate-controlled tunnel that generated calibrated advection fog, radiation fog, and rain. Images were captured with a broadband 400–1700 nm setting and three sub-band filters, each at four lens apertures (f/1.8–5.6). Entropy, structural-similarity index (SSIM), and peak signal-to-noise ratio (PSNR) were computed for every weather–aperture–filter combination. Broadband SWIR consistently outperformed all filtered configurations. The gain stems from higher photon throughput, which outweighs the modest scattering reduction offered by narrowband selection. Under passive illumination, broadband SWIR therefore represents the most robust single-camera choice for unmanned aerial vehicles (UAVs), enhancing situational awareness and flight safety in fog and rain.
(This article belongs to the Section Drone Design and Development)
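
The three quality metrics named in the abstract are standard; a sketch of a per-frame evaluation using scikit-image (the choice of a clear-air reference frame is an illustrative assumption):

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def frame_quality(image, reference):
    """Entropy, SSIM, and PSNR for one weather/aperture/filter combination.

    image, reference : 2D uint8 arrays (degraded frame vs. clear-air frame)
    """
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    entropy = -(p * np.log2(p)).sum()  # Shannon entropy of the gray-level histogram, in bits
    ssim = structural_similarity(image, reference, data_range=255)
    psnr = peak_signal_noise_ratio(reference, image, data_range=255)
    return entropy, ssim, psnr
```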

19 pages, 17158 KiB  
Article
Deep Learning Strategy for UAV-Based Multi-Class Damage Detection on Railway Bridges Using U-Net with Different Loss Functions
by Yong-Hyoun Na and Doo-Kie Kim
Appl. Sci. 2025, 15(15), 8719; https://doi.org/10.3390/app15158719 - 7 Aug 2025
Abstract
Periodic visual inspections are currently conducted to maintain the condition of railway bridges. These inspections rely on direct visual assessments by human inspectors, often requiring specialized equipment such as aerial ladders. However, this method is not only time-consuming and costly but also involves significant safety risks. Therefore, there is a growing need for a more efficient and reliable alternative to traditional visual inspections of railway bridges. In this study, we evaluated and compared the performance of damage detection using U-Net-based deep learning models on images captured by unmanned aerial vehicles (UAVs). The target damage types include cracks, concrete spalling and delamination, water leakage, exposed reinforcement, and paint peeling. To enable multi-class segmentation, the U-Net model was trained using three different loss functions: Cross-Entropy Loss, Focal Loss, and Intersection over Union (IoU) Loss. We compared these methods to determine their ability to distinguish actual structural damage from environmental factors and surface contamination, particularly under real-world site conditions. The results showed that the U-Net model trained with IoU Loss outperformed the others in terms of detection accuracy. When applied to field inspection scenarios, this approach demonstrates strong potential for objective and precise damage detection. Furthermore, the use of UAVs in the inspection process is expected to significantly reduce both time and cost in railway infrastructure maintenance. Future research will focus on extending the detection capabilities to additional damage types such as efflorescence and corrosion, aiming to ultimately replace manual visual inspections of railway bridge surfaces with deep-learning-based methods.
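
A soft, differentiable IoU loss of the kind compared here against Cross-Entropy and Focal Loss is commonly written as one minus the soft Jaccard index; a PyTorch sketch under that assumption (the paper's exact formulation may differ):

```python
import torch

def soft_iou_loss(logits, target, eps=1e-6):
    """1 - soft Jaccard index, averaged over classes and batch.

    logits : (N, C, H, W) raw network outputs
    target : (N, C, H, W) one-hot ground-truth masks
    """
    probs = torch.softmax(logits, dim=1)
    inter = (probs * target).sum(dim=(2, 3))
    union = (probs + target - probs * target).sum(dim=(2, 3))
    return (1.0 - (inter + eps) / (union + eps)).mean()
```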

19 pages, 4142 KiB  
Article
Onboard Real-Time Hyperspectral Image Processing System Design for Unmanned Aerial Vehicles
by Ruifan Yang, Min Huang, Wenhao Zhao, Zixuan Zhang, Yan Sun, Lulu Qian and Zhanchao Wang
Sensors 2025, 25(15), 4822; https://doi.org/10.3390/s25154822 - 5 Aug 2025
Abstract
This study proposes and implements a dual-processor FPGA-ARM architecture to resolve the tension between massive data volumes and real-time processing demands in UAV-borne hyperspectral imaging. The integrated system incorporates a shortwave infrared hyperspectral camera, an IMU, a control module, a heterogeneous computing core, and SATA SSD storage. Through hardware-level task partitioning—utilizing the FPGA for high-speed data buffering and the ARM for core computational processing—it achieves a real-time end-to-end acquisition–storage–processing–display pipeline. The compact integrated device weighs merely 6 kg and consumes 40 W, making it suitable for airborne platforms. Experimental validation confirms that the system can store over 200 frames per second (at 640 × 270 resolution, matching the camera’s maximum frame rate), provide quick-look imaging, and process data in real time, demonstrated via relative radiometric correction tasks (processing 5000 image frames within 1000 ms). This framework provides an effective technical solution to hyperspectral data processing bottlenecks on UAV platforms for dynamic scenario applications. Future work includes actual flight deployment to verify performance in operational environments.
(This article belongs to the Section Sensing and Imaging)
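
Relative radiometric correction of the kind used in the validation is often a two-point (dark/reference) per-detector normalization; a NumPy sketch under that common assumption — this is not the system's FPGA/ARM implementation:

```python
import numpy as np

def relative_radiometric_correction(frames, dark, reference):
    """Two-point per-detector normalization for pushbroom hyperspectral frames.

    frames    : (N, spatial, bands) raw frames, e.g., 640 x 270 per frame
    dark      : (spatial, bands) mean dark-current frame (shutter closed)
    reference : (spatial, bands) mean frame over a uniform calibration target
    """
    gain = 1.0 / np.clip(reference - dark, 1e-6, None)  # per-detector gain
    return (frames - dark) * gain                        # broadcasts over N

# Illustrative shapes matching the camera format named in the abstract
raw = np.random.rand(100, 640, 270).astype(np.float32)
dark = np.zeros((640, 270), np.float32)
ref = np.ones((640, 270), np.float32)
corrected = relative_radiometric_correction(raw, dark, ref)
```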

20 pages, 1971 KiB  
Article
FFG-YOLO: Improved YOLOv8 for Target Detection of Lightweight Unmanned Aerial Vehicles
by Tongxu Wang, Sizhe Yang, Ming Wan and Yanqiu Liu
Appl. Syst. Innov. 2025, 8(4), 109; https://doi.org/10.3390/asi8040109 - 4 Aug 2025
Abstract
Target detection is essential in intelligent transportation and in the autonomous control of unmanned aerial vehicles (UAVs), with single-stage detection algorithms widely used due to their speed. However, these algorithms face limitations in detecting small targets, especially in UAV aerial photography, where small targets are often occluded, multi-scale semantic information is easily lost, and there is a trade-off between real-time processing and computational resources. Existing algorithms struggle to effectively extract multi-dimensional features and deep semantic information from images and to balance detection accuracy with model complexity. To address these limitations, we developed FFG-YOLO, a lightweight small-target detection method for UAVs based on YOLOv8. FFG-YOLO incorporates three modules: a feature enhancement block (FEB), a feature concat block (FCB), and a global context awareness block (GCAB). These modules strengthen feature extraction from small targets, resolve semantic bias in multi-scale feature fusion, and help differentiate small targets from complex backgrounds. We also improved the positioning accuracy of small targets using a Wasserstein distance loss function. Experiments showed that FFG-YOLO outperformed other algorithms, including YOLOv8n, in small-target detection while remaining lightweight, meeting the stringent real-time performance and deployment requirements of UAVs.
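
A widely used Wasserstein-distance loss for small boxes is the Normalized Gaussian Wasserstein Distance (NWD), which models each box as a 2D Gaussian; a PyTorch sketch under the assumption that a formulation of this kind is meant (FFG-YOLO's exact loss may differ):

```python
import torch

def wasserstein_box_loss(pred, target, c=12.8):
    """Loss based on the Normalized Gaussian Wasserstein Distance (NWD).

    pred, target : (N, 4) boxes as (cx, cy, w, h); each box is modeled as a
    2D Gaussian with mean (cx, cy) and covariance diag((w/2)^2, (h/2)^2).
    c : dataset-dependent normalization constant.
    """
    p = torch.stack([pred[:, 0], pred[:, 1], pred[:, 2] / 2, pred[:, 3] / 2], dim=1)
    t = torch.stack([target[:, 0], target[:, 1], target[:, 2] / 2, target[:, 3] / 2], dim=1)
    w2 = torch.norm(p - t, dim=1)   # 2nd-order Wasserstein distance between the Gaussians
    nwd = torch.exp(-w2 / c)        # similarity in (0, 1]
    return (1.0 - nwd).mean()
```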

22 pages, 5136 KiB  
Article
Application of UAVs to Support Blast Design for Flyrock Mitigation: A Case Study from a Basalt Quarry
by Józef Pyra and Tomasz Żołądek
Appl. Sci. 2025, 15(15), 8614; https://doi.org/10.3390/app15158614 - 4 Aug 2025
Abstract
Blasting operations in surface mining pose a risk of flyrock, which is a critical safety concern for both personnel and infrastructure. This study presents the use of unmanned aerial vehicles (UAVs) and photogrammetric techniques to improve the accuracy of blast design, particularly in relation to controlling burden values and reducing flyrock. The research was conducted in a basalt quarry in Lower Silesia, where high rock fracturing complicated conventional blast planning. A DJI Mavic 3 Enterprise UAV was used to capture high-resolution aerial imagery, and 3D models were created using Strayos software. These models enabled precise analysis of bench face geometry and burden distribution with centimeter-level accuracy. The results showed a significant improvement in identifying zones with improper burden values and allowed for real-time corrections in blasthole design. Despite a ten-fold reduction in the number of images used, no loss in model quality was observed. UAV-based surveys followed software-recommended flight paths, and the application of this methodology reduced the flyrock range by an average of 42% near sensitive areas. This approach demonstrates the operational benefits and enhanced safety potential of integrating UAV-based photogrammetry into blasting design workflows.
(This article belongs to the Special Issue Advanced Blasting Technology for Mining)

23 pages, 4382 KiB  
Article
MTL-PlotCounter: Multitask Driven Soybean Seedling Counting at the Plot Scale Based on UAV Imagery
by Xiaoqin Xue, Chenfei Li, Zonglin Liu, Yile Sun, Xuru Li and Haiyan Song
Remote Sens. 2025, 17(15), 2688; https://doi.org/10.3390/rs17152688 - 3 Aug 2025
Abstract
Accurate and timely estimation of soybean emergence at the plot scale using unmanned aerial vehicle (UAV) remote sensing imagery is essential for germplasm evaluation in breeding programs, where breeders prioritize overall plot-scale emergence rates over subimage-based counts. This study proposes PlotCounter, a deep learning regression model based on the TasselNetV2++ architecture, designed for plot-scale soybean seedling counting. It employs a patch-based training strategy combined with full-plot validation to achieve reliable performance with limited breeding plot data. To incorporate additional agronomic information, PlotCounter is extended into a multitask learning framework (MTL-PlotCounter) that integrates sowing metadata such as variety, number of seeds per hole, and sowing density as auxiliary classification tasks. RGB images of 54 breeding plots were captured in 2023 using a DJI Mavic 2 Pro UAV and processed into an orthomosaic for model development and evaluation. PlotCounter performs effectively, achieving a root mean square error (RMSE) of 6.98 and a relative RMSE (rRMSE) of 6.93%. The variety-integrated MTL-PlotCounter, V-MTL-PlotCounter, performs best, with relative reductions of 8.74% in RMSE and 3.03% in rRMSE compared to PlotCounter, and outperforms representative YOLO-based models. Additionally, both PlotCounter and V-MTL-PlotCounter are deployed on a web-based platform, enabling users to upload images via an interactive interface, automatically count seedlings, and analyze plot-scale emergence, powered by a multimodal large language model. This study highlights the potential of integrating UAV remote sensing, agronomic metadata, specialized deep learning models, and multimodal large language models for advanced crop monitoring.
(This article belongs to the Special Issue Recent Advances in Multimodal Hyperspectral Remote Sensing)
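
The two error metrics used to score plot-scale counts are easy to reproduce; a quick NumPy sketch with illustrative counts:

```python
import numpy as np

def rmse_and_rrmse(predicted, observed):
    """Root mean square error and relative RMSE (% of the observed mean)."""
    predicted, observed = np.asarray(predicted, float), np.asarray(observed, float)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    return rmse, 100.0 * rmse / observed.mean()

# e.g., predicted vs. counted seedlings for a handful of breeding plots
print(rmse_and_rrmse([98, 105, 88, 120], [101, 99, 90, 118]))
```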

27 pages, 1382 KiB  
Review
Application of Non-Destructive Technology in Plant Disease Detection: Review
by Yanping Wang, Jun Sun, Zhaoqi Wu, Yilin Jia and Chunxia Dai
Agriculture 2025, 15(15), 1670; https://doi.org/10.3390/agriculture15151670 - 1 Aug 2025
Abstract
In recent years, research on plant disease detection has combined artificial intelligence, hyperspectral imaging, unmanned aerial vehicle remote sensing, and other technologies, promoting the transformation of pest and disease control in smart agriculture towards digitalization and artificial intelligence. This review systematically elaborates on the research status of non-destructive detection techniques used for plant disease identification and detection, mainly introducing two types of methods: spectral technology and imaging technology. It elaborates in detail on the principles and application examples of each technology and summarizes their advantages and disadvantages. This review clearly indicates that non-destructive detection techniques can achieve plant disease and pest detection quickly, accurately, and without damage. In the future, integrating multiple non-destructive detection technologies, developing portable detection devices, and combining more efficient data processing methods will become the core development directions of this field.
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)

22 pages, 6482 KiB  
Article
Surface Damage Detection in Hydraulic Structures from UAV Images Using Lightweight Neural Networks
by Feng Han and Chongshi Gu
Remote Sens. 2025, 17(15), 2668; https://doi.org/10.3390/rs17152668 - 1 Aug 2025
Abstract
Timely and accurate identification of surface damage in hydraulic structures is essential for maintaining structural integrity and ensuring operational safety. Traditional manual inspections are time-consuming, labor-intensive, and prone to subjectivity, especially for large-scale or inaccessible infrastructure. Leveraging advancements in aerial imaging, unmanned aerial vehicles (UAVs) enable efficient acquisition of high-resolution visual data across expansive hydraulic environments. However, existing deep learning (DL) models often lack architectural adaptations for the visual complexities of UAV imagery, including low-texture contrast, noise interference, and irregular crack patterns. To address these challenges, this study proposes a lightweight, robust, and high-precision segmentation framework, called LFPA-EAM-Fast-SCNN, specifically designed for pixel-level damage detection in UAV-captured images of hydraulic concrete surfaces. The developed DL-based model integrates an enhanced Fast-SCNN backbone for efficient feature extraction, a Lightweight Feature Pyramid Attention (LFPA) module for multi-scale context enhancement, and an Edge Attention Module (EAM) for refined boundary localization. Experiments on a custom UAV-based dataset show that the proposed damage detection method achieves superior performance, with a precision of 0.949, a recall of 0.892, an F1 score of 0.906, and an IoU of 87.92%, outperforming U-Net, Attention U-Net, SegNet, DeepLab v3+, I-ST-UNet, and SegFormer. Additionally, it reaches a real-time inference speed of 56.31 FPS, significantly surpassing the other models. These results demonstrate the proposed framework’s strong generalization capability and robustness under varying noise levels and damage scenarios, underscoring its suitability for scalable, automated surface damage assessment in UAV-based remote sensing of civil infrastructure.
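
The four reported scores for a binary damage mask can be computed directly from the confusion counts; a NumPy sketch (the binary labeling is an assumption):

```python
import numpy as np

def damage_metrics(pred, target):
    """Precision, recall, F1, and IoU for a binary damage mask (1 = damage)."""
    tp = np.logical_and(pred == 1, target == 1).sum()
    fp = np.logical_and(pred == 1, target == 0).sum()
    fn = np.logical_and(pred == 0, target == 1).sum()
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    f1 = 2 * precision * recall / (precision + recall + 1e-8)
    iou = tp / (tp + fp + fn + 1e-8)   # intersection over union of the damage class
    return precision, recall, f1, iou
```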

18 pages, 10604 KiB  
Article
Fast Detection of Plants in Soybean Fields Using UAVs, YOLOv8x Framework, and Image Segmentation
by Ravil I. Mukhamediev, Valentin Smurygin, Adilkhan Symagulov, Yan Kuchin, Yelena Popova, Farida Abdoldina, Laila Tabynbayeva, Viktors Gopejenko and Alexey Oxenenko
Drones 2025, 9(8), 547; https://doi.org/10.3390/drones9080547 - 1 Aug 2025
Abstract
The accuracy of classification and localization of plants in images obtained on board an unmanned aerial vehicle (UAV) is of great importance when implementing precision farming technologies. It allows for the effective application of variable rate technologies, which not only saves chemicals but also reduces the environmental load on cultivated fields. Machine learning algorithms are widely used for plant classification, and the YOLO algorithm in particular is applied for simultaneous identification, localization, and classification of plants. However, the quality of the algorithm depends significantly on the training set. The aim of this study is to detect not only a cultivated plant (soybean) but also the weeds growing in the field. The dataset developed in the course of the research addresses this issue by covering not only soybean but also seven weed species common in the fields of Kazakhstan. The article describes an approach to preparing a training set of images for soybean fields using preliminary thresholding and bounding box (Bbox) segmentation of marked images, which improves the quality of plant classification and localization. The conducted research and computational experiments determined that Bbox segmentation shows the best results. The quality of classification and localization with Bbox segmentation increased significantly (the f1 score rose from 0.64 to 0.959 and mAP50 from 0.72 to 0.979); for the cultivated plant (soybean), the best classification results known to date were achieved with YOLOv8x on UAV images, with an f1 score of 0.984. At the same time, the plant detection rate increased thirteen-fold compared to a model proposed earlier in the literature.
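
The preprocessing described—thresholding followed by Bbox segmentation—suggests deriving box labels from a binary vegetation mask; a hypothetical SciPy-based sketch of that step (the helper and its thresholds are illustrative, not the authors' pipeline):

```python
import numpy as np
from scipy import ndimage

def masks_to_yolo_boxes(mask, min_pixels=50):
    """Convert a binary vegetation mask into normalized YOLO boxes (cx, cy, w, h).

    mask : (H, W) boolean array, e.g., from preliminary ExG thresholding
    """
    h, w = mask.shape
    labeled, _ = ndimage.label(mask)                  # connected components = plants
    boxes = []
    for i, s in enumerate(ndimage.find_objects(labeled), start=1):
        if (labeled[s] == i).sum() < min_pixels:      # drop speckle components
            continue
        ys, xs = s
        cx = (xs.start + xs.stop) / 2 / w
        cy = (ys.start + ys.stop) / 2 / h
        boxes.append((cx, cy, (xs.stop - xs.start) / w, (ys.stop - ys.start) / h))
    return boxes
```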

22 pages, 8105 KiB  
Article
Extraction of Sparse Vegetation Cover in Deserts Based on UAV Remote Sensing
by Jie Han, Jinlei Zhu, Xiaoming Cao, Lei Xi, Zhao Qi, Yongxin Li, Xingyu Wang and Jiaxiu Zou
Remote Sens. 2025, 17(15), 2665; https://doi.org/10.3390/rs17152665 - 1 Aug 2025
Abstract
The unique characteristics of desert vegetation, such as varied leaf morphology, discrete canopy structures, and sparse, uneven distribution, pose significant challenges for remote sensing-based estimation of fractional vegetation cover (FVC). Unmanned Aerial Vehicle (UAV) systems can accurately distinguish vegetation patches, extract weak vegetation signals, and navigate complex terrain, making them suitable for small-scale FVC extraction. In this study, we selected the floodplain fan with Caragana korshinskii Kom. as the constructive species in Hatengtaohai National Nature Reserve, Bayannur, Inner Mongolia, China, as our study area. We investigated remote sensing extraction of sparse desert vegetation cover by placing sample plots across three gradients: the top, middle, and edge of the fan. We then acquired UAV multispectral images; evaluated the applicability of various vegetation indices (VIs) using methods such as supervised classification, linear regression models, and machine learning; and explored the feasibility and stability of multiple machine learning models in this region. Our results indicate the following: (1) The multispectral vegetation indices are superior to the visible-band vegetation indices and more suitable for FVC extraction in vegetation-sparse desert regions. (2) Comparing five machine learning regression models, the XGBoost and KNN models exhibited relatively lower estimation performance in the study area. The spatial distribution of plots appeared to influence the stability of the SVM model when estimating FVC. In contrast, the RF and LASSO models demonstrated robust stability across both training and testing datasets. Notably, the RF model achieved the best inversion performance (R2 = 0.876, RMSE = 0.020, MAE = 0.016), indicating that RF is one of the most suitable models for retrieving FVC in naturally sparse desert vegetation. This study provides a valuable contribution to the limited existing research on remote sensing-based estimation of FVC and characterization of spatial heterogeneity in small-scale desert sparse vegetation ecosystems dominated by a single species.
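
An RF inversion of FVC from vegetation-index features, as described in result (2), might look as follows with scikit-learn; the feature matrix and cover values below are random placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Placeholder design matrix: one row per sample plot, columns = vegetation
# indices extracted from UAV multispectral imagery
X = np.random.rand(120, 6)
y = np.random.rand(120)          # measured fractional vegetation cover in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

pred = rf.predict(X_te)
print(r2_score(y_te, pred),
      mean_squared_error(y_te, pred) ** 0.5,   # RMSE
      mean_absolute_error(y_te, pred))
```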

21 pages, 4657 KiB  
Article
A Semi-Automated RGB-Based Method for Wildlife Crop Damage Detection Using QGIS-Integrated UAV Workflow
by Sebastian Banaszek and Michał Szota
Sensors 2025, 25(15), 4734; https://doi.org/10.3390/s25154734 - 31 Jul 2025
Abstract
Monitoring crop damage caused by wildlife remains a significant challenge in agricultural management, particularly in large-scale monocultures such as maize. This study presents a semi-automated process for detecting wildlife-induced damage using RGB imagery acquired from unmanned aerial vehicles (UAVs). The method is designed for non-specialist users and is fully integrated within the QGIS platform. The proposed approach involves calculating three vegetation indices—Excess Green (ExG), Green Leaf Index (GLI), and Modified Green-Red Vegetation Index (MGRVI)—based on a standardized orthomosaic generated from RGB images collected via UAV. Subsequently, an unsupervised k-means clustering algorithm is applied to divide the field into five vegetation vigor classes. Within each class, the 25% of pixels with the lowest average index values are preliminarily classified as damaged. A dedicated QGIS plugin enables drone data analysts (DDAs) to interactively adjust index thresholds based on visual interpretation. The method was validated on a 50-hectare maize field, where 7 hectares of damage (15% of the area) were identified. The results indicate a high level of agreement between the automated and manual classifications, with an overall accuracy of 81%. The highest concentration of damage occurred in the “moderate” and “low” vigor zones. Final products included vigor classification maps, binary damage masks, and summary reports in HTML and DOCX formats with visualizations and statistical data. The results confirm the effectiveness and scalability of the proposed RGB-based procedure for crop damage assessment. The method offers a repeatable, cost-effective, and field-operable alternative to multispectral or AI-based approaches, making it suitable for integration with precision agriculture practices and wildlife population management.
(This article belongs to the Section Remote Sensors)
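
The clustering-and-threshold rule described above maps naturally onto a few lines of scikit-learn; a sketch assuming a single-index input (the actual plugin works inside QGIS):

```python
import numpy as np
from sklearn.cluster import KMeans

def flag_damage(index_map, n_classes=5, quantile=0.25):
    """Cluster an index map into vigor classes; flag the weakest pixels per class.

    index_map : (H, W) float array of one vegetation index (ExG, GLI, or MGRVI)
    Returns (vigor class map, boolean damage mask).
    """
    values = index_map.reshape(-1, 1)
    classes = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(values)
    classes = classes.reshape(index_map.shape)
    damage = np.zeros(index_map.shape, dtype=bool)
    for c in range(n_classes):
        in_class = classes == c
        threshold = np.quantile(index_map[in_class], quantile)
        damage |= in_class & (index_map <= threshold)  # weakest 25% of the class
    return classes, damage
```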

27 pages, 6715 KiB  
Article
Structural Component Identification and Damage Localization of Civil Infrastructure Using Semantic Segmentation
by Piotr Tauzowski, Mariusz Ostrowski, Dominik Bogucki, Piotr Jarosik and Bartłomiej Błachowski
Sensors 2025, 25(15), 4698; https://doi.org/10.3390/s25154698 - 30 Jul 2025
Abstract
Visual inspection of civil infrastructure for structural health assessment, as performed by structural engineers, is expensive and time-consuming. Automating this process is therefore highly attractive and has received significant attention in recent years. With the increasing capabilities of computers, deep neural networks have become a standard tool and can be used for structural health inspections. A key challenge, however, is the availability of reliable datasets. In this work, the U-net and DeepLab v3+ convolutional neural networks are trained on the synthetic Tokaido dataset. This dataset comprises images representative of data acquired by unmanned aerial vehicle (UAV) imagery and corresponding ground truth data. The data include semantic segmentation masks both for categorizing structural elements (slabs, beams, and columns) and for assessing structural damage (concrete spalling or exposed rebars). Data augmentation, including both image quality degradation (e.g., brightness modification, added noise) and image transformations (e.g., image flipping), is applied to the synthetic dataset. The selected neural network architectures achieve excellent performance, reaching 97% accuracy and 87% Mean Intersection over Union (mIoU) on the validation data. They also demonstrate promising results in the semantic segmentation of real-world structures captured in photographs, despite being trained solely on synthetic data. Based on the obtained segmentation results, DeepLabV3+ outperforms U-net in structural component identification, but not in the damage identification task.
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
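
The augmentation scheme described—photometric degradation plus geometric flips, applied consistently to image and mask—can be sketched in pure NumPy; the probabilities and noise level below are illustrative assumptions:

```python
import numpy as np

def augment(image, mask, rng):
    """Paired augmentation for an image and its segmentation mask.

    Geometric transforms are applied to both; photometric ones only to the image.
    image : (H, W, 3) uint8, mask : (H, W) integer class map
    """
    if rng.random() < 0.5:                       # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                       # brightness modification
        image = np.clip(image * rng.uniform(0.7, 1.3), 0, 255).astype(np.uint8)
    if rng.random() < 0.3:                       # additive Gaussian noise
        noise = rng.normal(0.0, 8.0, image.shape)
        image = np.clip(image + noise, 0, 255).astype(np.uint8)
    return image, mask

rng = np.random.default_rng(0)
img, msk = augment(np.zeros((256, 256, 3), np.uint8),
                   np.zeros((256, 256), np.int64), rng)
```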
