Search Results (2,472)

Search Parameters:
Keywords = UAV (unmanned aerial vehicle) images

22 pages, 5681 KiB  
Article
Automatic Detection System for Rainfall-Induced Shallow Landslides in Southeastern China Using Deep Learning and Unmanned Aerial Vehicle Imagery
by Yunfu Zhu, Bing Xia, Jianying Huang, Yuxuan Zhou, Yujie Su and Hong Gao
Water 2025, 17(15), 2349; https://doi.org/10.3390/w17152349 - 7 Aug 2025
Abstract
In the southeast of China, seasonal rainfall intensity is high, mountains and hills are extensive, and many small-scale, shallow landslides occur after consecutive seasons of heavy rainfall. High-precision automated identification systems can quickly pinpoint the scope of such a disaster and support critical decisions such as evacuation, engineering response, and damage assessment. Many systems have been designed to detect shallow landslides, but few combine high resolution, high automation, and real-time identification. Taking accuracy, automation, and real-time capability into account, we designed an automatic rainfall-induced shallow landslide detection system based on deep learning and Unmanned Aerial Vehicle (UAV) images. The system uses UAVs to capture high-resolution imagery, U-Net (a U-shaped convolutional neural network) to combine multi-scale features, and an adaptive edge enhancement loss function to improve landslide boundary identification, and it is delivered as the "UAV Cruise Geological Hazard AI Identification System" software with an automated processing chain. The system integrates UAV-specific preprocessing and achieves a processing speed of 30 s per square kilometer. It was validated in Wanli District, Nanchang City, Jiangxi Province. The results show a Mean Intersection over Union (MIoU) of 90.7% and a Pixel Accuracy of 92.3%. Compared with traditional methods, the system significantly improves the accuracy of landslide detection.
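
The reported MIoU and Pixel Accuracy follow standard definitions; as an illustration (not the authors' code), here is a minimal sketch of computing both from a confusion matrix over predicted and ground-truth label maps:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix from label maps."""
    mask = (y_true >= 0) & (y_true < num_classes)
    idx = num_classes * y_true[mask].astype(int) + y_pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_and_pixel_accuracy(y_true, y_pred, num_classes=2):
    cm = confusion_matrix(y_true, y_pred, num_classes)
    tp = np.diag(cm)                               # per-class true positives
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp   # per-class union
    iou = tp / np.maximum(union, 1)
    return iou.mean(), tp.sum() / cm.sum()

# Toy example: 1 = landslide, 0 = background
gt   = np.array([[0, 1], [1, 1]])
pred = np.array([[0, 1], [0, 1]])
miou, pa = miou_and_pixel_accuracy(gt, pred)
print(f"MIoU={miou:.3f}, PixelAccuracy={pa:.3f}")
```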

26 pages, 10480 KiB  
Article
Monitoring Chlorophyll Content of Brassica napus L. Based on UAV Multispectral and RGB Feature Fusion
by Yongqi Sun, Jiali Ma, Mengting Lyu, Jianxun Shen, Jianping Ying, Skhawat Ali, Basharat Ali, Wenqiang Lan, Yiwa Hu, Fei Liu, Weijun Zhou and Wenjian Song
Agronomy 2025, 15(8), 1900; https://doi.org/10.3390/agronomy15081900 - 7 Aug 2025
Abstract
Accurate prediction of chlorophyll content in Brassica napus L. (rapeseed) is essential for monitoring plant nutritional status and for precision agricultural management. Most existing studies focus on a single cultivar, limiting general applicability. This study used unmanned aerial vehicle (UAV)-based RGB and multispectral imagery to evaluate the chlorophyll content of six rapeseed cultivars across mixed growth stages, including the seedling, bolting, and initial flowering stages. ExG-ExR threshold segmentation was applied to remove background interference. Subsequently, color and spectral indices were extracted from the segmented images and ranked by their correlations with measured chlorophyll content. Partial Least Squares Regression (PLSR), Multiple Linear Regression (MLR), and Support Vector Regression (SVR) models were independently established using subsets of the top-ranked features. Model performance was assessed by comparing prediction accuracy (R2 and RMSE). Results demonstrated significant accuracy improvements following background removal, especially for the SVR model: compared with data without background removal, accuracy increased by 8.0% (R2p improved from 0.683 to 0.763) for color indices and 3.1% (R2p from 0.835 to 0.866) for spectral indices. Additionally, stepwise fusion of spectral and color indices further improved prediction accuracy. Optimal results were obtained by fusing the top seven color features ranked by correlation with chlorophyll content, achieving an R2p of 0.878 and an RMSE of 52.187 μg/g. These findings highlight the effectiveness of background removal and feature fusion in enhancing chlorophyll prediction accuracy.
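
ExG-ExR background removal has a compact closed form (ExG = 2g - r - b and ExR = 1.4r - g on chromatic coordinates, vegetation where ExG - ExR > 0); a hedged sketch of this thresholding step, assuming the standard Meyer-Neto formulation rather than the paper's exact implementation:

```python
import numpy as np

def exg_exr_mask(rgb):
    """Vegetation mask via ExG - ExR > 0; rgb is a float HxWx3 array in [0, 1]."""
    s = rgb.sum(axis=2, keepdims=True) + 1e-8
    r, g, b = np.moveaxis(rgb / s, 2, 0)   # chromatic coordinates
    exg = 2.0 * g - r - b
    exr = 1.4 * r - g
    return (exg - exr) > 0.0               # True = vegetation pixel

# Apply the mask before extracting color/spectral indices
rgb = np.random.rand(64, 64, 3)            # stand-in for an orthomosaic tile
mask = exg_exr_mask(rgb)
veg_pixels = rgb[mask]                     # background removed
print(mask.mean())                         # vegetation fraction of the tile
```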

13 pages, 14213 KiB  
Article
All-Weather Drone Vision: Passive SWIR Imaging in Fog and Rain
by Alexander Bessonov, Aleksei Rozanov, Richard White, Galih Suwito, Ivonne Medina-Salazar, Marat Lutfullin, Dmitrii Gusev and Ilya Shikov
Drones 2025, 9(8), 553; https://doi.org/10.3390/drones9080553 - 7 Aug 2025
Abstract
Short-wave-infrared (SWIR) imaging can extend drone operations into fog and rain, yet the optimum spectral strategy remains unclear. We evaluated a drone-borne quantum-dot SWIR camera inside a climate-controlled tunnel that generated calibrated advection fog, radiation fog, and rain. Images were captured with a broadband 400–1700 nm setting and three sub-band filters, each at four lens apertures (f/1.8–5.6). Entropy, structural-similarity index (SSIM), and peak signal-to-noise ratio (PSNR) were computed for every weather–aperture–filter combination. Broadband SWIR consistently outperformed all filtered configurations. The gain stems from higher photon throughput, which outweighs the modest scattering reduction offered by narrowband selection. Under passive illumination, broadband SWIR therefore represents the most robust single-camera choice for unmanned aerial vehicles (UAVs), enhancing situational awareness and flight safety in fog and rain.
(This article belongs to the Section Drone Design and Development)
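
The three image-quality metrics are standard; a sketch of how entropy, SSIM, and PSNR could be scored for one weather-aperture-filter combination using scikit-image (the frames here are synthetic stand-ins, not the tunnel imagery):

```python
import numpy as np
from skimage.measure import shannon_entropy
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def degradation_scores(clear, foggy):
    """Compare a fog/rain frame against a clear reference frame (2-D grayscale in [0, 1])."""
    return {
        "entropy": shannon_entropy(foggy),                          # information content
        "ssim": structural_similarity(clear, foggy, data_range=1.0),
        "psnr": peak_signal_noise_ratio(clear, foggy, data_range=1.0),
    }

clear = np.clip(np.random.rand(128, 128), 0, 1)   # stand-in reference frame
foggy = np.clip(clear * 0.6 + 0.35 + 0.02 * np.random.randn(128, 128), 0, 1)
print(degradation_scores(clear, foggy))
```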

19 pages, 17158 KiB  
Article
Deep Learning Strategy for UAV-Based Multi-Class Damage Detection on Railway Bridges Using U-Net with Different Loss Functions
by Yong-Hyoun Na and Doo-Kie Kim
Appl. Sci. 2025, 15(15), 8719; https://doi.org/10.3390/app15158719 - 7 Aug 2025
Abstract
Periodic visual inspections are currently conducted to maintain the condition of railway bridges. These inspections rely on direct visual assessments by human inspectors, often requiring specialized equipment such as aerial ladders. However, this method is not only time-consuming and costly but also involves significant safety risks. Therefore, there is a growing need for a more efficient and reliable alternative to traditional visual inspections of railway bridges. In this study, we evaluated and compared the performance of damage detection using U-Net-based deep learning models on images captured by unmanned aerial vehicles (UAVs). The target damage types include cracks, concrete spalling and delamination, water leakage, exposed reinforcement, and paint peeling. To enable multi-class segmentation, the U-Net model was trained using three different loss functions: Cross-Entropy Loss, Focal Loss, and Intersection over Union (IoU) Loss. We compared these methods to determine their ability to distinguish actual structural damage from environmental factors and surface contamination, particularly under real-world site conditions. The results showed that the U-Net model trained with IoU Loss outperformed the others in terms of detection accuracy. When applied to field inspection scenarios, this approach demonstrates strong potential for objective and precise damage detection. Furthermore, the use of UAVs in the inspection process is expected to significantly reduce both time and cost in railway infrastructure maintenance. Future research will focus on extending the detection capabilities to additional damage types such as efflorescence and corrosion, aiming to ultimately replace manual visual inspections of railway bridge surfaces with deep-learning-based methods.
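
An IoU loss for multi-class segmentation is typically implemented as a differentiable "soft" IoU over class probabilities; a minimal PyTorch sketch under that assumption (the authors' exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def soft_iou_loss(logits, target, num_classes, eps=1e-6):
    """1 - mean soft IoU. logits: (N,C,H,W); target: (N,H,W) integer class labels."""
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    union = (probs + onehot - probs * onehot).sum(dim=(0, 2, 3))
    return 1.0 - ((inter + eps) / (union + eps)).mean()

# Six classes: background + five damage types (cracks, spalling, leakage, rebar, peeling)
logits = torch.randn(2, 6, 64, 64, requires_grad=True)
target = torch.randint(0, 6, (2, 64, 64))
loss = soft_iou_loss(logits, target, num_classes=6)
loss.backward()
print(loss.item())
```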

20 pages, 1971 KiB  
Article
FFG-YOLO: Improved YOLOv8 for Target Detection of Lightweight Unmanned Aerial Vehicles
by Tongxu Wang, Sizhe Yang, Ming Wan and Yanqiu Liu
Appl. Syst. Innov. 2025, 8(4), 109; https://doi.org/10.3390/asi8040109 - 4 Aug 2025
Abstract
Target detection is essential in intelligent transportation and in the autonomous control of unmanned aerial vehicles (UAVs), and single-stage detection algorithms are widely used because of their speed. However, these algorithms face limitations in detecting small targets, especially in UAV aerial photography, where small targets are often occluded, multi-scale semantic information is easily lost, and there is a trade-off between real-time processing and computational resources. Existing algorithms struggle to effectively extract multi-dimensional features and deep semantic information from images and to balance detection accuracy with model complexity. To address these limitations, we developed FFG-YOLO, a lightweight small-target detection method for UAVs based on YOLOv8. FFG-YOLO incorporates three modules: a feature enhancement block (FEB), a feature concat block (FCB), and a global context awareness block (GCAB). These modules strengthen feature extraction from small targets, resolve semantic bias in multi-scale feature fusion, and help differentiate small targets from complex backgrounds. We also improved the positioning accuracy of small targets using a Wasserstein distance loss function. Experiments showed that FFG-YOLO outperformed other algorithms, including YOLOv8n, in small-target detection while remaining lightweight, meeting the stringent real-time performance and deployment requirements of UAVs.
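
The abstract does not spell out its Wasserstein loss; a common formulation for tiny objects is the Normalized Wasserstein Distance (NWD), which treats each box as a 2-D Gaussian. A sketch under that assumption (C is a dataset-dependent constant, chosen hypothetically here):

```python
import torch

def nwd_loss(pred, target, C=12.8):
    """1 - NWD between boxes given as (cx, cy, w, h); shape (N, 4).
    Each box is modeled as a Gaussian N([cx, cy], diag(w^2/4, h^2/4))."""
    w2 = ((pred[:, 0] - target[:, 0]) ** 2
          + (pred[:, 1] - target[:, 1]) ** 2
          + ((pred[:, 2] - target[:, 2]) / 2) ** 2
          + ((pred[:, 3] - target[:, 3]) / 2) ** 2)   # squared 2-Wasserstein distance
    nwd = torch.exp(-torch.sqrt(w2 + 1e-9) / C)
    return (1.0 - nwd).mean()

pred   = torch.tensor([[10.0, 10.0, 6.0, 6.0]], requires_grad=True)
target = torch.tensor([[12.0, 11.0, 5.0, 7.0]])
print(nwd_loss(pred, target))
```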

22 pages, 5136 KiB  
Article
Application of UAVs to Support Blast Design for Flyrock Mitigation: A Case Study from a Basalt Quarry
by Józef Pyra and Tomasz Żołądek
Appl. Sci. 2025, 15(15), 8614; https://doi.org/10.3390/app15158614 - 4 Aug 2025
Abstract
Blasting operations in surface mining pose a risk of flyrock, which is a critical safety concern for both personnel and infrastructure. This study presents the use of unmanned aerial vehicles (UAVs) and photogrammetric techniques to improve the accuracy of blast design, particularly in relation to controlling burden values and reducing flyrock. The research was conducted in a basalt quarry in Lower Silesia, where high rock fracturing complicated conventional blast planning. A DJI Mavic 3 Enterprise UAV was used to capture high-resolution aerial imagery, and 3D models were created using Strayos software. These models enabled precise analysis of bench face geometry and burden distribution with centimeter-level accuracy. The results showed a significant improvement in identifying zones with improper burden values and allowed for real-time corrections in blasthole design. Despite a ten-fold reduction in the number of images used, no loss in model quality was observed. UAV-based surveys followed software-recommended flight paths, and the application of this methodology reduced the flyrock range by an average of 42% near sensitive areas. This approach demonstrates the operational benefits and enhanced safety potential of integrating UAV-based photogrammetry into blasting design workflows.
(This article belongs to the Special Issue Advanced Blasting Technology for Mining)
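
As a rough illustration of burden checking with photogrammetric data (not the Strayos workflow): given a face point cloud and blasthole collar coordinates in one local frame, the burden can be approximated as the nearest face distance per hole. All coordinates and the design value below are hypothetical stand-ins:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical inputs: a photogrammetric point cloud of the free face (x, y, z)
# and planned blasthole collar positions, both in the same local coordinate frame.
face_points = np.random.rand(5000, 3) * [2.0, 30.0, 15.0]   # stand-in face cloud
holes = np.array([[3.0, 5.0, 12.0], [3.2, 10.0, 12.0]])     # stand-in collars

tree = cKDTree(face_points)
burden, _ = tree.query(holes)      # nearest face distance per hole ~ burden proxy
design_burden = 2.8                # assumed design value in metres
for i, b in enumerate(burden):
    flag = "OK" if abs(b - design_burden) < 0.3 else "ADJUST"
    print(f"hole {i}: burden {b:.2f} m -> {flag}")
```

In practice burden is checked along the full hole depth against the face surface, not only at the collar; this nearest-distance proxy just illustrates the geometric idea.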

23 pages, 4382 KiB  
Article
MTL-PlotCounter: Multitask Driven Soybean Seedling Counting at the Plot Scale Based on UAV Imagery
by Xiaoqin Xue, Chenfei Li, Zonglin Liu, Yile Sun, Xuru Li and Haiyan Song
Remote Sens. 2025, 17(15), 2688; https://doi.org/10.3390/rs17152688 - 3 Aug 2025
Abstract
Accurate and timely estimation of soybean emergence at the plot scale using unmanned aerial vehicle (UAV) remote sensing imagery is essential for germplasm evaluation in breeding programs, where breeders prioritize overall plot-scale emergence rates over subimage-based counts. This study proposes PlotCounter, a deep learning regression model based on the TasselNetV2++ architecture, designed for plot-scale soybean seedling counting. It employs a patch-based training strategy combined with full-plot validation to achieve reliable performance with limited breeding plot data. To incorporate additional agronomic information, PlotCounter is extended into a multitask learning framework (MTL-PlotCounter) that integrates sowing metadata such as variety, number of seeds per hole, and sowing density as auxiliary classification tasks. RGB images of 54 breeding plots were captured in 2023 using a DJI Mavic 2 Pro UAV and processed into an orthomosaic for model development and evaluation. PlotCounter achieves a root mean square error (RMSE) of 6.98 and a relative RMSE (rRMSE) of 6.93%. The variety-integrated MTL-PlotCounter, V-MTL-PlotCounter, performs best, with relative reductions of 8.74% in RMSE and 3.03% in rRMSE compared to PlotCounter, and outperforms representative YOLO-based models. Additionally, both PlotCounter and V-MTL-PlotCounter are deployed on a web-based platform powered by a multimodal large language model, enabling users to upload images via an interactive interface, automatically count seedlings, and analyze plot-scale emergence. This study highlights the potential of integrating UAV remote sensing, agronomic metadata, specialized deep learning models, and multimodal large language models for advanced crop monitoring.
(This article belongs to the Special Issue Recent Advances in Multimodal Hyperspectral Remote Sensing)
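
The two headline metrics are straightforward to reproduce; a small sketch with stand-in counts, assuming rRMSE is RMSE relative to the mean observed count:

```python
import numpy as np

def rmse_rrmse(y_true, y_pred):
    """RMSE and relative RMSE (RMSE divided by the mean observed count, in %)."""
    err = np.asarray(y_pred, float) - np.asarray(y_true, float)
    rmse = np.sqrt(np.mean(err ** 2))
    return rmse, 100.0 * rmse / np.mean(y_true)

# Stand-in plot-scale seedling counts: ground truth vs. model predictions
truth = np.array([98, 104, 87, 120, 95])
pred  = np.array([92, 110, 90, 113, 99])
rmse, rrmse = rmse_rrmse(truth, pred)
print(f"RMSE={rmse:.2f} plants, rRMSE={rrmse:.2f}%")
```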

22 pages, 6482 KiB  
Article
Surface Damage Detection in Hydraulic Structures from UAV Images Using Lightweight Neural Networks
by Feng Han and Chongshi Gu
Remote Sens. 2025, 17(15), 2668; https://doi.org/10.3390/rs17152668 - 1 Aug 2025
Abstract
Timely and accurate identification of surface damage in hydraulic structures is essential for maintaining structural integrity and ensuring operational safety. Traditional manual inspections are time-consuming, labor-intensive, and prone to subjectivity, especially for large-scale or inaccessible infrastructure. Leveraging advancements in aerial imaging, unmanned aerial vehicles (UAVs) enable efficient acquisition of high-resolution visual data across expansive hydraulic environments. However, existing deep learning (DL) models often lack architectural adaptations for the visual complexities of UAV imagery, including low-texture contrast, noise interference, and irregular crack patterns. To address these challenges, this study proposes a lightweight, robust, and high-precision segmentation framework, called LFPA-EAM-Fast-SCNN, specifically designed for pixel-level damage detection in UAV-captured images of hydraulic concrete surfaces. The developed DL-based model integrates an enhanced Fast-SCNN backbone for efficient feature extraction, a Lightweight Feature Pyramid Attention (LFPA) module for multi-scale context enhancement, and an Edge Attention Module (EAM) for refined boundary localization. Experimental results on a custom UAV-based dataset show that the proposed damage detection method achieves superior performance, with a precision of 0.949, a recall of 0.892, an F1 score of 0.906, and an IoU of 87.92%, outperforming U-Net, Attention U-Net, SegNet, DeepLab v3+, I-ST-UNet, and SegFormer. Additionally, it reaches a real-time inference speed of 56.31 FPS, significantly surpassing the other models. These results demonstrate the framework's strong generalization capability and robustness under varying noise levels and damage scenarios, underscoring its suitability for scalable, automated surface damage assessment in UAV-based remote sensing of civil infrastructure.
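
Precision, recall, F1, and IoU for a binary damage mask reduce to pixel counts of true and false positives and negatives; a minimal sketch with synthetic masks (not the custom UAV dataset):

```python
import numpy as np

def binary_seg_metrics(gt, pred):
    """Precision, recall, F1, and IoU for binary damage masks (bool arrays)."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    iou = tp / max(tp + fp + fn, 1)
    return precision, recall, f1, iou

gt   = np.zeros((256, 256), bool); gt[100:140, 50:200] = True     # stand-in crack mask
pred = np.zeros((256, 256), bool); pred[102:138, 55:205] = True   # stand-in prediction
print(binary_seg_metrics(gt, pred))
```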

18 pages, 10604 KiB  
Article
Fast Detection of Plants in Soybean Fields Using UAVs, YOLOv8x Framework, and Image Segmentation
by Ravil I. Mukhamediev, Valentin Smurygin, Adilkhan Symagulov, Yan Kuchin, Yelena Popova, Farida Abdoldina, Laila Tabynbayeva, Viktors Gopejenko and Alexey Oxenenko
Drones 2025, 9(8), 547; https://doi.org/10.3390/drones9080547 - 1 Aug 2025
Abstract
The accuracy of plant classification and localization in images obtained on board an unmanned aerial vehicle (UAV) is of great importance when implementing precision farming technologies. It enables effective application of variable rate technologies, which not only saves chemicals but also reduces the environmental load on cultivated fields. Machine learning algorithms are widely used for plant classification, and the YOLO algorithm in particular is studied for simultaneous identification, localization, and classification of plants. However, the quality of such an algorithm depends significantly on the training set. The aim of this study is the detection not only of a cultivated plant (soybean) but also of weeds growing in the field. The dataset developed in the course of the research addresses this by covering soybean as well as seven weed species common in the fields of Kazakhstan. The article describes an approach to preparing a training set of images for soybean fields using preliminary thresholding and bounding box (Bbox) segmentation of annotated images, which improves the quality of plant classification and localization. The conducted research and computational experiments determined that Bbox segmentation shows the best results: the quality of classification and localization increased significantly (f1 score from 0.64 to 0.959, mAP50 from 0.72 to 0.979), and for the cultivated plant (soybean) the best classification results known to date were achieved with YOLOv8x on UAV images, with an f1 score of 0.984. At the same time, the plant detection rate increased 13-fold compared with a previously published model.
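
One plausible reading of "preliminary thresholding and Bbox segmentation" is tightening each annotated box to the vegetation it contains; a hedged OpenCV sketch along those lines, using Otsu thresholding on an excess-green map (the authors' pipeline may differ):

```python
import cv2
import numpy as np

def tighten_bbox(image_bgr, bbox):
    """Shrink an annotated box to the vegetation it contains.
    bbox = (x, y, w, h); thresholding inside the box uses Otsu on an ExG map."""
    x, y, w, h = bbox
    crop = image_bgr[y:y + h, x:x + w].astype(np.float32)
    b, g, r = cv2.split(crop)
    exg = 2 * g - r - b                                   # excess-green map
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return bbox                                       # nothing green: keep original
    return (x + xs.min(), y + ys.min(), xs.ptp() + 1, ys.ptp() + 1)

img = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # stand-in UAV tile
print(tighten_bbox(img, (40, 60, 120, 120)))
```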

22 pages, 8105 KiB  
Article
Extraction of Sparse Vegetation Cover in Deserts Based on UAV Remote Sensing
by Jie Han, Jinlei Zhu, Xiaoming Cao, Lei Xi, Zhao Qi, Yongxin Li, Xingyu Wang and Jiaxiu Zou
Remote Sens. 2025, 17(15), 2665; https://doi.org/10.3390/rs17152665 - 1 Aug 2025
Abstract
The unique characteristics of desert vegetation, such as diverse leaf morphologies, discrete canopy structures, and sparse, uneven distribution, pose significant challenges for remote sensing-based estimation of fractional vegetation cover (FVC). Unmanned Aerial Vehicle (UAV) systems can accurately distinguish vegetation patches, extract weak vegetation signals, and navigate complex terrain, making them suitable for small-scale FVC extraction. In this study, we selected the floodplain fan with Caragana korshinskii Kom. as the constructive species in Hatengtaohai National Nature Reserve, Bayannur, Inner Mongolia, China, as our study area. We investigated remote sensing extraction of sparse desert vegetation cover by placing samples across three gradients: the top, middle, and edge of the fan. We then acquired UAV multispectral images; evaluated the applicability of various vegetation indices (VIs) using methods such as supervised classification, linear regression models, and machine learning; and explored the feasibility and stability of multiple machine learning models in this region. Our results indicate the following: (1) The multispectral vegetation indices are superior to the visible-band vegetation indices and more suitable for FVC extraction in vegetation-sparse desert regions. (2) Comparing five machine learning regression models, the XGBoost and KNN models exhibited relatively low estimation performance in the study area. The spatial distribution of plots appeared to influence the stability of the SVM model when estimating FVC. In contrast, the RF and LASSO models demonstrated robust stability across both training and testing datasets. Notably, the RF model achieved the best inversion performance (R2 = 0.876, RMSE = 0.020, MAE = 0.016), indicating that RF is one of the most suitable models for retrieving FVC in naturally sparse desert vegetation. This study contributes to the limited existing research on remote sensing-based estimation of FVC and on characterizing spatial heterogeneity in small-scale desert sparse vegetation ecosystems dominated by a single species.
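
A sketch of the RF regression step with scikit-learn, reporting the same R2/RMSE/MAE trio; the feature matrix here is a synthetic stand-in for the plot-level vegetation indices:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Stand-in design matrix: one row per plot, columns = multispectral VIs
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(120, 6))
fvc = np.clip(0.05 + 0.25 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.02, 120), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, fvc, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R2={r2_score(y_te, pred):.3f}, "
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f}, "
      f"MAE={mean_absolute_error(y_te, pred):.3f}")
```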

21 pages, 4657 KiB  
Article
A Semi-Automated RGB-Based Method for Wildlife Crop Damage Detection Using QGIS-Integrated UAV Workflow
by Sebastian Banaszek and Michał Szota
Sensors 2025, 25(15), 4734; https://doi.org/10.3390/s25154734 - 31 Jul 2025
Abstract
Monitoring crop damage caused by wildlife remains a significant challenge in agricultural management, particularly in large-scale monocultures such as maize. This study presents a semi-automated process for detecting wildlife-induced damage using RGB imagery acquired from unmanned aerial vehicles (UAVs). The method is designed for non-specialist users and is fully integrated within the QGIS platform. The proposed approach involves calculating three vegetation indices—Excess Green (ExG), Green Leaf Index (GLI), and Modified Green-Red Vegetation Index (MGRVI)—based on a standardized orthomosaic generated from RGB images collected via UAV. Subsequently, an unsupervised k-means clustering algorithm is applied to divide the field into five vegetation vigor classes. Within each class, the 25% of pixels with the lowest average index values are preliminarily classified as damaged. A dedicated QGIS plugin enables drone data analysts (DDAs) to interactively adjust index thresholds based on visual interpretation. The method was validated on a 50-hectare maize field, where 7 hectares of damage (15% of the area) were identified. The results indicate a high level of agreement between the automated and manual classifications, with an overall accuracy of 81%. The highest concentration of damage occurred in the "moderate" and "low" vigor zones. Final products included vigor classification maps, binary damage masks, and summary reports in HTML and DOCX formats with visualizations and statistical data. The results confirm the effectiveness and scalability of the proposed RGB-based procedure for crop damage assessment. The method offers a repeatable, cost-effective, and field-operable alternative to multispectral or AI-based approaches, making it suitable for integration with precision agriculture practices and wildlife population management.
(This article belongs to the Section Remote Sensors)
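
The clustering-and-percentile rule is explicit enough to sketch directly; a minimal scikit-learn version operating on ExG/GLI/MGRVI maps (index names from the abstract; the data here is synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans

def damage_mask(exg, gli, mgrvi, n_classes=5, pct=25):
    """Cluster pixels into vigor classes on (ExG, GLI, MGRVI), then flag the
    lowest `pct` percent of pixels (by mean index value) within each class."""
    feats = np.stack([exg.ravel(), gli.ravel(), mgrvi.ravel()], axis=1)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(feats)
    score = feats.mean(axis=1)                   # average index value per pixel
    damaged = np.zeros(len(score), bool)
    for c in range(n_classes):
        in_c = labels == c
        damaged[in_c] = score[in_c] <= np.percentile(score[in_c], pct)
    return damaged.reshape(exg.shape)

h = w = 64                                       # stand-in index maps
exg, gli, mgrvi = (np.random.rand(h, w) for _ in range(3))
print(damage_mask(exg, gli, mgrvi).mean())       # fraction flagged, roughly 0.25
```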

27 pages, 6715 KiB  
Article
Structural Component Identification and Damage Localization of Civil Infrastructure Using Semantic Segmentation
by Piotr Tauzowski, Mariusz Ostrowski, Dominik Bogucki, Piotr Jarosik and Bartłomiej Błachowski
Sensors 2025, 25(15), 4698; https://doi.org/10.3390/s25154698 - 30 Jul 2025
Abstract
Visual inspection of civil infrastructure for structural health assessment, as performed by structural engineers, is expensive and time-consuming. Automating this process is therefore highly attractive and has received significant attention in recent years. With the increasing capabilities of computers, deep neural networks have become a standard tool and can be used for structural health inspections. A key challenge, however, is the availability of reliable datasets. In this work, the U-Net and DeepLab v3+ convolutional neural networks are trained on the synthetic Tokaido dataset. This dataset comprises images representative of data acquired by unmanned aerial vehicle (UAV) imagery and corresponding ground truth. It includes semantic segmentation masks both for categorizing structural elements (slabs, beams, and columns) and for assessing structural damage (concrete spalling or exposed rebars). Data augmentation, including both image quality degradation (e.g., brightness modification, added noise) and image transformations (e.g., image flipping), is applied to the synthetic dataset. The selected neural network architectures achieve excellent performance, reaching 97% accuracy and 87% Mean Intersection over Union (mIoU) on the validation data, and they demonstrate promising results in the semantic segmentation of real-world structures captured in photographs, despite being trained solely on synthetic data. Based on the obtained segmentation results, DeepLab v3+ outperforms U-Net in structural component identification, but not in the damage identification task.
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
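
The described augmentation (brightness modification, added noise, flipping) can be sketched in a few lines; note that geometric transforms must be mirrored on the segmentation mask. A hedged NumPy version, not the authors' pipeline:

```python
import numpy as np

def augment(image, mask, rng):
    """Quality degradation plus a geometric transform of the kind described above:
    random brightness shift, additive Gaussian noise, and horizontal flip."""
    img = image.astype(np.float32) / 255.0
    img = np.clip(img + rng.uniform(-0.2, 0.2), 0, 1)           # brightness shift
    img = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)   # added noise
    if rng.random() < 0.5:                                      # flip image AND mask
        img, mask = img[:, ::-1], mask[:, ::-1]
    return (img * 255).astype(np.uint8), mask

rng = np.random.default_rng(42)
tile = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)    # stand-in image
seg  = np.random.randint(0, 4, (128, 128))                     # stand-in class mask
aug_img, aug_seg = augment(tile, seg, rng)
print(aug_img.shape, aug_seg.shape)
```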

28 pages, 4007 KiB  
Article
Voting-Based Classification Approach for Date Palm Health Detection Using UAV Camera Images: Vision and Learning
by Abdallah Guettaf Temam, Mohamed Nadour, Lakhmissi Cherroun, Ahmed Hafaifa, Giovanni Angiulli and Fabio La Foresta
Drones 2025, 9(8), 534; https://doi.org/10.3390/drones9080534 - 29 Jul 2025
Abstract
In this study, we apply deep learning (DL) models, specifically convolutional neural networks (CNNs), to detect the health status of date palm leaves using images captured by an unmanned aerial vehicle (UAV). The UAV dynamics are modeled using the Newton–Euler method to ensure flight stability and accurate image acquisition. The DL models are combined in a voting-based classification (VBC) system that brings together multiple CNN architectures, including MobileNet, a handcrafted CNN, VGG16, and VGG19, to enhance classification accuracy and robustness. The classifiers independently generate predictions, and a voting mechanism determines the final classification. This hybridization of image-based visual servoing (IBVS) and classification adapts immediately to changing conditions, providing smooth flight as well as reliable vision-based classification. The dataset used in this study was collected using a dual-camera UAV, which captures high-resolution images to detect pests in date palm leaves. After applying the proposed classification strategy, the implemented voting method achieved an accuracy of 99.16% on the test set for detecting health conditions in date palm leaves, surpassing the individual classifiers. The results are discussed and compared to show the effectiveness of this classification technique.
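
A minimal sketch of hard majority voting over per-model class probabilities, as one plausible reading of the VBC mechanism (ties here fall to the lower class index; the paper may resolve them differently):

```python
import numpy as np

def majority_vote(prob_list):
    """Hard voting over per-model class probabilities.
    prob_list: list of (N, C) arrays, one per CNN (e.g., MobileNet, custom CNN, VGG16, VGG19)."""
    votes = np.stack([p.argmax(axis=1) for p in prob_list])   # (n_models, N)
    n_classes = prob_list[0].shape[1]
    counts = np.apply_along_axis(np.bincount, 0, votes, None, n_classes)  # (C, N)
    return counts.argmax(axis=0)                              # winning class per sample

# Stand-in predictions from four models for 3 samples, 2 classes (healthy / infested)
rng = np.random.default_rng(1)
probs = [rng.dirichlet(np.ones(2), size=3) for _ in range(4)]
print(majority_vote(probs))
```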

19 pages, 9284 KiB  
Article
UAV-YOLO12: A Multi-Scale Road Segmentation Model for UAV Remote Sensing Imagery
by Bingyan Cui, Zhen Liu and Qifeng Yang
Drones 2025, 9(8), 533; https://doi.org/10.3390/drones9080533 - 29 Jul 2025
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used for road infrastructure inspection and monitoring. However, challenges such as scale variation, complex background interference, and the scarcity of annotated UAV datasets limit the performance of traditional segmentation models. To address these challenges, this study proposes UAV-YOLOv12, a multi-scale segmentation model specifically designed for UAV-based road imagery analysis. The proposed model builds on the YOLOv12 architecture by adding two key modules: a Selective Kernel Network (SKNet) that adjusts receptive fields dynamically, and a Partial Convolution (PConv) module that improves spatial focus and robustness in occluded regions. These enhancements help the model detect small and irregular road features in complex aerial scenes. Experimental results on a custom UAV dataset collected from national highways in Wuxi, China, show that UAV-YOLOv12 achieves F1-scores of 0.902 for highways (road-H) and 0.825 for paths (road-P), outperforming the original YOLOv12 by 5% and 3.2%, respectively. Inference takes 11.1 ms per image, supporting near real-time performance. Moreover, in comparative evaluations UAV-YOLOv12 outperforms U-Net by 7.1% and 9.5% on the two road classes. The model also exhibits strong generalization ability, achieving F1-scores above 0.87 on public datasets such as VHR-10 and the Drone Vehicle dataset. These results demonstrate that UAV-YOLOv12 achieves high accuracy and robustness across diverse road environments and object scales.
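
SKNet-style selective kernels fuse branches with different receptive fields via learned channel attention; a simplified PyTorch sketch of the idea (not the UAV-YOLOv12 module itself):

```python
import torch
import torch.nn as nn

class SelectiveKernel(nn.Module):
    """Simplified SKNet-style block: two branches with different receptive fields,
    fused by channel-wise soft attention. A sketch, not the authors' exact module."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)  # 5x5 field
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, 2 * channels))

    def forward(self, x):
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))                          # global descriptor (N, C)
        a = self.fc(s).view(x.size(0), 2, -1).softmax(dim=1)    # per-branch attention
        a3 = a[:, 0].unsqueeze(-1).unsqueeze(-1)
        a5 = a[:, 1].unsqueeze(-1).unsqueeze(-1)
        return a3 * u3 + a5 * u5                                # attention-weighted fusion

x = torch.randn(1, 32, 64, 64)
print(SelectiveKernel(32)(x).shape)                             # torch.Size([1, 32, 64, 64])
```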

22 pages, 6010 KiB  
Article
Mapping Waterbird Habitats with UAV-Derived 2D Orthomosaic Along Belgium’s Lieve Canal
by Xingzhen Liu, Andrée De Cock, Long Ho, Kim Pham, Diego Panique-Casso, Marie Anne Eurie Forio, Wouter H. Maes and Peter L. M. Goethals
Remote Sens. 2025, 17(15), 2602; https://doi.org/10.3390/rs17152602 - 26 Jul 2025
Abstract
The accurate monitoring of waterbird abundance and habitat preferences is essential for effective ecological management and conservation planning in aquatic ecosystems. This study explores the efficacy of unmanned aerial vehicle (UAV)-based high-resolution orthomosaics for waterbird monitoring and mapping along the Lieve Canal, Belgium. We systematically classified habitats into residential, industrial, riparian tree, and herbaceous vegetation zones, examining their influence on the spatial distribution of three focal waterbird species: Eurasian coot (Fulica atra), common moorhen (Gallinula chloropus), and wild duck (Anas platyrhynchos). Herbaceous vegetation zones consistently supported the highest waterbird densities, attributed to abundant nesting substrates and minimal human disturbance. UAV-based waterbird counts correlated strongly with ground-based surveys (R2 = 0.668), though species-specific detectability varied significantly with morphological visibility and ecological behavior. Detection accuracy was highest for coots, intermediate for ducks, and lowest for moorhens, highlighting the crucial role of image resolution (ground sampling distance, GSD) in aerial monitoring. Operational challenges, including image occlusion and habitat complexity, underline the need for tailored survey protocols and advanced sensing techniques. Our findings demonstrate that UAV imagery provides a reliable and scalable method for monitoring waterbird habitats, offering critical insights for biodiversity conservation and sustainable management practices in aquatic landscapes.
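
GSD ties detectability to flight parameters via a standard photogrammetric relation: GSD = sensor width x flight height / (focal length x image width). A tiny sketch with hypothetical camera values:

```python
def gsd_cm(sensor_width_mm, focal_mm, height_m, image_width_px):
    """Ground sampling distance in cm/pixel for a nadir-pointing camera."""
    return (sensor_width_mm * height_m * 100.0) / (focal_mm * image_width_px)

# Stand-in values (not the survey's actual camera parameters):
# 13.2 mm sensor width, 10.3 mm focal length, 5472 px wide, flown at 60 m.
print(f"GSD = {gsd_cm(13.2, 10.3, 60.0, 5472):.2f} cm/px")
```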
