Search Results (532)

Search Parameters:
Keywords = inspection UAVs

19 pages, 17158 KiB  
Article
Deep Learning Strategy for UAV-Based Multi-Class Damage Detection on Railway Bridges Using U-Net with Different Loss Functions
by Yong-Hyoun Na and Doo-Kie Kim
Appl. Sci. 2025, 15(15), 8719; https://doi.org/10.3390/app15158719 - 7 Aug 2025
Abstract
Periodic visual inspections are currently conducted to maintain the condition of railway bridges. These inspections rely on direct visual assessments by human inspectors, often requiring specialized equipment such as aerial ladders. However, this method is not only time-consuming and costly but also involves significant safety risks. Therefore, there is a growing need for a more efficient and reliable alternative to traditional visual inspections of railway bridges. In this study, we evaluated and compared the performance of damage detection using U-Net-based deep learning models on images captured by unmanned aerial vehicles (UAVs). The target damage types include cracks, concrete spalling and delamination, water leakage, exposed reinforcement, and paint peeling. To enable multi-class segmentation, the U-Net model was trained using three different loss functions: Cross-Entropy Loss, Focal Loss, and Intersection over Union (IoU) Loss. We compared these methods to determine their ability to distinguish actual structural damage from environmental factors and surface contamination, particularly under real-world site conditions. The results showed that the U-Net model trained with IoU Loss outperformed the others in terms of detection accuracy. When applied to field inspection scenarios, this approach demonstrates strong potential for objective and precise damage detection. Furthermore, the use of UAVs in the inspection process is expected to significantly reduce both time and cost in railway infrastructure maintenance. Future research will focus on extending the detection capabilities to additional damage types such as efflorescence and corrosion, aiming to ultimately replace manual visual inspections of railway bridge surfaces with deep-learning-based methods. Full article
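As a concrete illustration of the loss choice highlighted above, the following is a minimal sketch of a soft multi-class IoU (Jaccard) loss in PyTorch. It is a generic formulation, not the authors' code; the six-class layout (background plus five damage types) and the smoothing constant are assumptions.

```python
# Minimal sketch of a soft multi-class IoU (Jaccard) loss in PyTorch.
# The six-class layout (background + five damage types) and the smoothing
# constant are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn.functional as F

def soft_iou_loss(logits: torch.Tensor, target: torch.Tensor,
                  num_classes: int = 6, eps: float = 1e-6) -> torch.Tensor:
    """logits: (N, C, H, W) raw network outputs; target: (N, H, W) integer class indices."""
    probs = torch.softmax(logits, dim=1)                      # per-pixel class probabilities
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)                                          # sum over batch and spatial dims
    intersection = (probs * onehot).sum(dims)
    union = probs.sum(dims) + onehot.sum(dims) - intersection
    iou_per_class = (intersection + eps) / (union + eps)
    return 1.0 - iou_per_class.mean()                         # minimise 1 - mean IoU

# Usage with any segmentation network that outputs (N, 6, H, W) logits:
# loss = soft_iou_loss(model(images), masks)   # masks must be long-typed class indices
```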

20 pages, 9888 KiB  
Article
WeatherClean: An Image Restoration Algorithm for UAV-Based Railway Inspection in Adverse Weather
by Kewen Wang, Shaobing Yang, Zexuan Zhang, Zhipeng Wang, Limin Jia, Mengwei Li and Shengjia Yu
Sensors 2025, 25(15), 4799; https://doi.org/10.3390/s25154799 - 4 Aug 2025
Abstract
UAV-based inspections are an effective way to ensure railway safety and have gained significant attention. However, images captured during complex weather conditions, such as rain, snow, or fog, often suffer from severe degradation, affecting image recognition accuracy. Existing algorithms for removing rain, snow, and fog have two main limitations: they do not adaptively learn features under varying weather complexities and struggle with managing complex noise patterns in drone inspections, leading to incomplete noise removal. To address these challenges, this study proposes a novel framework for removing rain, snow, and fog from drone images, called WeatherClean. This framework introduces a Weather Complexity Adjustment Factor (WCAF) in a parameterized adjustable network architecture to process weather degradation of varying degrees adaptively. It also employs a hierarchical multi-scale cropping strategy to enhance the recovery of fine noise and edge structures. Additionally, it incorporates a degradation synthesis method based on atmospheric scattering physical models to generate training samples that align with real-world weather patterns, thereby mitigating data scarcity issues. Experimental results show that WeatherClean outperforms existing methods by effectively removing noise particles while preserving image details. This advancement provides more reliable high-definition visual references for drone-based railway inspections, significantly enhancing inspection capabilities under complex weather conditions and ensuring the safety of railway operations. Full article
(This article belongs to the Section Sensing and Imaging)
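The degradation synthesis step mentioned in the abstract builds on the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)). Below is a minimal sketch of fog synthesis under that model; using a constant transmission (i.e., ignoring per-pixel scene depth) is a simplifying assumption and not necessarily how WeatherClean constructs its training pairs.

```python
# Sketch of synthetic fog generation with the standard atmospheric scattering
# model I(x) = J(x) * t(x) + A * (1 - t(x)). A constant transmission t
# (no per-pixel depth) is a simplifying assumption for illustration only.
import numpy as np

def add_synthetic_fog(clean: np.ndarray, transmission: float = 0.6,
                      airlight: float = 0.9) -> np.ndarray:
    """clean: H x W x 3 float image in [0, 1]; returns a hazed copy."""
    foggy = clean * transmission + airlight * (1.0 - transmission)
    return np.clip(foggy, 0.0, 1.0)

# Example: degrade a dummy frame to create an (input, target) training pair.
clean_frame = np.random.rand(256, 256, 3).astype(np.float32)
foggy_frame = add_synthetic_fog(clean_frame, transmission=0.5)
```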

22 pages, 6482 KiB  
Article
Surface Damage Detection in Hydraulic Structures from UAV Images Using Lightweight Neural Networks
by Feng Han and Chongshi Gu
Remote Sens. 2025, 17(15), 2668; https://doi.org/10.3390/rs17152668 - 1 Aug 2025
Abstract
Timely and accurate identification of surface damage in hydraulic structures is essential for maintaining structural integrity and ensuring operational safety. Traditional manual inspections are time-consuming, labor-intensive, and prone to subjectivity, especially for large-scale or inaccessible infrastructure. Leveraging advancements in aerial imaging, unmanned aerial vehicles (UAVs) enable efficient acquisition of high-resolution visual data across expansive hydraulic environments. However, existing deep learning (DL) models often lack architectural adaptations for the visual complexities of UAV imagery, including low-texture contrast, noise interference, and irregular crack patterns. To address these challenges, this study proposes a lightweight, robust, and high-precision segmentation framework, called LFPA-EAM-Fast-SCNN, specifically designed for pixel-level damage detection in UAV-captured images of hydraulic concrete surfaces. The developed DL-based model integrates an enhanced Fast-SCNN backbone for efficient feature extraction, a Lightweight Feature Pyramid Attention (LFPA) module for multi-scale context enhancement, and an Edge Attention Module (EAM) for refined boundary localization. The experimental results on a custom UAV-based dataset show that the proposed damage detection method achieves superior performance, with a precision of 0.949, a recall of 0.892, an F1 score of 0.906, and an IoU of 87.92%, outperforming U-Net, Attention U-Net, SegNet, DeepLab v3+, I-ST-UNet, and SegFormer. Additionally, it reaches a real-time inference speed of 56.31 FPS, significantly surpassing other models. The experimental results demonstrate the proposed framework’s strong generalization capability and robustness under varying noise levels and damage scenarios, underscoring its suitability for scalable, automated surface damage assessment in UAV-based remote sensing of civil infrastructure. Full article
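For reference, the precision, recall, F1, and IoU figures reported above follow the usual pixel-level definitions; the sketch below shows how they are typically computed from binary damage masks. It is illustrative evaluation code, not the authors' implementation.

```python
# Generic pixel-level metrics for binary damage masks (damage vs. background),
# matching the definitions behind the precision, recall, F1, and IoU values above.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """pred, gt: boolean H x W arrays where True marks damaged pixels."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    iou = tp / (tp + fp + fn + 1e-9)
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}

# Example with random masks:
pred = np.random.rand(512, 512) > 0.5
gt = np.random.rand(512, 512) > 0.5
print(segmentation_metrics(pred, gt))
```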

18 pages, 74537 KiB  
Article
SDA-YOLO: Multi-Scale Dynamic Branching and Attention Fusion for Self-Explosion Defect Detection in Insulators
by Zhonghao Yang, Wangping Xu, Nanxing Chen, Yifu Chen, Kaijun Wu, Min Xie, Hong Xu and Enhui Zheng
Electronics 2025, 14(15), 3070; https://doi.org/10.3390/electronics14153070 - 31 Jul 2025
Abstract
To enhance the performance of UAVs in detecting insulator self-explosion defects during power inspections, this paper proposes an insulator self-explosion defect recognition algorithm, SDA-YOLO, based on an improved YOLOv11s network. First, the SODL is added to YOLOv11 to fuse shallow features with deeper features, thereby improving the model’s focus on small-sized self-explosion defect features. The OBB is also employed to reduce interference from the complex background. Second, the DBB module is incorporated into the C3k2 module in the backbone to extract target features through a multi-branch parallel convolutional structure. Finally, the AIFI module replaces the C2PSA module, effectively directing and aggregating information between channels to improve detection accuracy and inference speed. The experimental results show that the average accuracy of SDA-YOLO reaches 96.0%, 6.6% higher than that of the YOLOv11s baseline model. While maintaining high accuracy, SDA-YOLO reaches an inference speed of 93.6 frames/s, meeting the requirement for real-time detection of insulator faults. Full article

31 pages, 18320 KiB  
Article
Penetrating Radar on Unmanned Aerial Vehicle for the Inspection of Civilian Infrastructure: System Design, Modeling, and Analysis
by Jorge Luis Alva Alarcon, Yan Rockee Zhang, Hernan Suarez, Anas Amaireh and Kegan Reynolds
Aerospace 2025, 12(8), 686; https://doi.org/10.3390/aerospace12080686 - 31 Jul 2025
Abstract
The increasing demand for noninvasive inspection (NII) of complex civil infrastructures requires overcoming the limitations of traditional ground-penetrating radar (GPR) systems in addressing diverse and large-scale applications. The solution proposed in this study focuses on an initial design that integrates a low-SWaP (Size, Weight, and Power) ultra-wideband (UWB) impulse radar with realistic electromagnetic modeling for deployment on unmanned aerial vehicles (UAVs). The system incorporates ultra-realistic antenna and propagation models, utilizing Finite Difference Time Domain (FDTD) solvers and multilayered media, to replicate realistic airborne sensing geometries. Verification and calibration are performed by comparing simulation outputs with laboratory measurements using varied material samples and target models. Custom signal processing algorithms are developed to extract meaningful features from complex electromagnetic environments and support anomaly detection. Additionally, machine learning (ML) techniques are trained on synthetic data to automate the identification of structural characteristics. The results demonstrate accurate agreement between simulations and measurements, as well as the potential for deploying this design in flight tests within realistic environments featuring complex electromagnetic interference. Full article
(This article belongs to the Section Aeronautics)

21 pages, 7362 KiB  
Article
Multi-Layer Path Planning for Complete Structural Inspection Using UAV
by Ho Wang Tong, Boyang Li, Hailong Huang and Chih-Yung Wen
Drones 2025, 9(8), 541; https://doi.org/10.3390/drones9080541 - 31 Jul 2025
Abstract
This article addresses the path planning problem for complete structural inspection using an unmanned aerial vehicle (UAV). The proposed method emphasizes the scalability of the viewpoints and aims to provide practical solutions to different inspection distance requirements, eliminating the need for extra view-planning procedures. First, a mixed-viewpoint generation scheme is proposed. Then, the Multi-Layered Angle-Distance Traveling Salesman Problem (ML-ADTSP) is formulated and solved, with the aim of reducing overall energy consumption and inspection path complexity. A two-step Genetic Algorithm (GA) is used to solve the combinatorial optimization problem. The performance of different crossover functions is also discussed. The simulation results demonstrate that, by solving the ML-ADTSP, the mean accelerations of the UAV throughout the inspection path are flattened significantly, improving the overall path smoothness and reducing traversal difficulty. With minor low-level optimization, the proposed framework can be applied to inspect different structures. Full article
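To make the combinatorial core concrete, the sketch below shows a minimal genetic algorithm for a plain distance-only viewpoint tour, i.e., the kind of solver the ML-ADTSP builds on. The angle-dependent cost terms, layer structure, and two-step scheme of the paper are omitted, and the population size, generation count, and mutation rate are arbitrary choices.

```python
# Minimal GA for a distance-only viewpoint tour (plain TSP), illustrating the
# kind of combinatorial solver the ML-ADTSP relies on. Angle costs, layering,
# and the two-step scheme of the paper are omitted; parameters are arbitrary.
import random
import numpy as np

def tour_length(order, pts):
    return sum(np.linalg.norm(pts[order[i]] - pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def order_crossover(p1, p2):
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]                         # copy a slice from parent 1
    fill = [g for g in p2 if g not in child]     # keep parent-2 order elsewhere
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

def solve_tsp_ga(pts, pop_size=60, generations=300, mutation_rate=0.2):
    n = len(pts)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(o, pts))
        elite = pop[: pop_size // 2]             # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            c = order_crossover(*random.sample(elite, 2))
            if random.random() < mutation_rate:  # swap mutation
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    best = min(pop, key=lambda o: tour_length(o, pts))
    return best, tour_length(best, pts)

# Example: 30 random 3D viewpoints around a structure.
viewpoints = np.random.rand(30, 3) * 50.0
order, cost = solve_tsp_ga(viewpoints)
```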

17 pages, 4557 KiB  
Article
Potential of LiDAR and Hyperspectral Sensing for Overcoming Challenges in Current Maritime Ballast Tank Corrosion Inspection
by Sergio Pallas Enguita, Jiajun Jiang, Chung-Hao Chen, Samuel Kovacic and Richard Lebel
Electronics 2025, 14(15), 3065; https://doi.org/10.3390/electronics14153065 - 31 Jul 2025
Abstract
Corrosion in maritime ballast tanks is a major driver of maintenance costs and operational risks for maritime assets. Inspections are hampered by complex geometries, hazardous conditions, and the limitations of conventional methods, particularly visual assessment, which struggles with subjectivity, accessibility, and early detection, especially under coatings. This paper critically examines these challenges and explores the potential of Light Detection and Ranging (LiDAR) and Hyperspectral Imaging (HSI) to form the basis of improved inspection approaches. We discuss LiDAR’s utility for accurate 3D mapping and for providing a spatial framework, and HSI’s potential for objective material identification and surface characterization based on spectral signatures across the 400–1000 nm wavelength range (visible and near-infrared). Preliminary findings from laboratory tests are presented, demonstrating the basic feasibility of HSI for differentiating surface conditions (corrosion, coatings, bare metal) and relative coating thickness, alongside LiDAR’s capability for detailed geometric capture. Although these results do not represent a deployable system, they highlight how LiDAR and HSI could address key limitations of current practices and suggest promising directions for future research into integrated sensor-based corrosion assessment strategies. Full article

27 pages, 6715 KiB  
Article
Structural Component Identification and Damage Localization of Civil Infrastructure Using Semantic Segmentation
by Piotr Tauzowski, Mariusz Ostrowski, Dominik Bogucki, Piotr Jarosik and Bartłomiej Błachowski
Sensors 2025, 25(15), 4698; https://doi.org/10.3390/s25154698 - 30 Jul 2025
Abstract
Visual inspection of civil infrastructure for structural health assessment, as performed by structural engineers, is expensive and time-consuming. Therefore, automating this process is highly attractive and has received significant attention in recent years. With the increasing capabilities of computers, deep neural networks have become a standard tool and can be used for structural health inspections. A key challenge, however, is the availability of reliable datasets. In this work, the U-Net and DeepLab v3+ convolutional neural networks are trained on the synthetic Tokaido dataset. This dataset comprises images representative of unmanned aerial vehicle (UAV) imagery and the corresponding ground truth data. The data include semantic segmentation masks both for categorizing structural elements (slabs, beams, and columns) and for assessing structural damage (concrete spalling or exposed rebars). Data augmentation, including both image quality degradation (e.g., brightness modification, added noise) and image transformations (e.g., image flipping), is applied to the synthetic dataset. The selected neural network architectures achieve excellent performance, reaching values of 97% for accuracy and 87% for Mean Intersection over Union (mIoU) on the validation data. They also demonstrate promising results in the semantic segmentation of real-world structures captured in photographs, despite being trained solely on synthetic data. Additionally, based on the obtained semantic segmentation results, it can be concluded that DeepLab v3+ outperforms U-Net in structural component identification; however, this is not the case in the damage identification task. Full article
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
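The augmentation strategy described above pairs photometric degradation with geometric transforms. A minimal sketch of such paired augmentation, applied jointly to an image and its segmentation mask, is given below; the parameter ranges are illustrative assumptions, not the values used in the study.

```python
# Sketch of the kind of paired augmentation described above: photometric
# changes applied to the image only, geometric flips applied to both the image
# and its segmentation mask. Parameter ranges are illustrative assumptions.
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray, rng=np.random.default_rng()):
    """image: H x W x 3 float in [0, 1]; mask: H x W integer class labels."""
    if rng.random() < 0.5:                      # horizontal flip (both)
        image, mask = image[:, ::-1].copy(), mask[:, ::-1].copy()
    image = image * rng.uniform(0.7, 1.3)       # brightness modification (image only)
    image = image + rng.normal(0.0, 0.02, image.shape)   # additive Gaussian noise
    return np.clip(image, 0.0, 1.0), mask

# Example with a dummy synthetic sample:
img = np.random.rand(360, 640, 3)
msk = np.random.randint(0, 4, (360, 640))
aug_img, aug_msk = augment(img, msk)
```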

19 pages, 9284 KiB  
Article
UAV-YOLO12: A Multi-Scale Road Segmentation Model for UAV Remote Sensing Imagery
by Bingyan Cui, Zhen Liu and Qifeng Yang
Drones 2025, 9(8), 533; https://doi.org/10.3390/drones9080533 - 29 Jul 2025
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used for road infrastructure inspection and monitoring. However, challenges such as scale variation, complex background interference, and the scarcity of annotated UAV datasets limit the performance of traditional segmentation models. To address these challenges, this study proposes UAV-YOLOv12, a multi-scale segmentation model specifically designed for UAV-based road imagery analysis. The proposed model builds on the YOLOv12 architecture by adding two key modules: a Selective Kernel Network (SKNet) to adjust receptive fields dynamically and a Partial Convolution (PConv) module to improve spatial focus and robustness in occluded regions. These enhancements help the model better detect small and irregular road features in complex aerial scenes. Experimental results on a custom UAV dataset collected from national highways in Wuxi, China, show that UAV-YOLOv12 achieves F1-scores of 0.902 for highways (road-H) and 0.825 for paths (road-P), outperforming the original YOLOv12 by 5% and 3.2%, respectively. Inference speed is maintained at 11.1 ms per image, supporting near real-time performance. Moreover, comparative evaluations show that UAV-YOLOv12 improves on U-Net by 7.1% and 9.5%, respectively. The model also exhibits strong generalization ability, achieving F1-scores above 0.87 on public datasets such as VHR-10 and the Drone Vehicle dataset. These results demonstrate that the proposed UAV-YOLOv12 achieves high accuracy and robustness across diverse road environments and object scales. Full article

19 pages, 8766 KiB  
Article
Fusion of Airborne, SLAM-Based, and iPhone LiDAR for Accurate Forest Road Mapping in Harvesting Areas
by Evangelia Siafali, Vasilis Polychronos and Petros A. Tsioras
Land 2025, 14(8), 1553; https://doi.org/10.3390/land14081553 - 28 Jul 2025
Abstract
This study examined the integration of airborne Light Detection and Ranging (LiDAR), Simultaneous Localization and Mapping (SLAM)-based handheld LiDAR, and iPhone LiDAR to inspect forest road networks following forest operations. The goal was to overcome the challenges posed by dense canopy cover and to ensure accurate and efficient data collection and mapping. Airborne data were collected using the DJI Matrice 300 RTK UAV equipped with a Zenmuse L2 LiDAR sensor, which achieved a high point density of 285 points/m² at an altitude of 80 m. Ground-level data were collected using the BLK2GO handheld laser scanner (HPLS) with SLAM methods (LiDAR SLAM, Visual SLAM, Inertial Measurement Unit) and the iPhone 13 Pro Max LiDAR. Data processing included generating DEMs, DSMs, and True Digital Orthophotos (TDOMs) via DJI Terra, LiDAR360 V8, and Cyclone REGISTER 360 PLUS, with additional processing and merging using CloudCompare V2 and ArcGIS Pro 3.4.0. The pairwise comparison analysis between the ALS data and each alternative method revealed notable differences in elevation, highlighting discrepancies between methods. ALS + iPhone demonstrated the smallest deviation from ALS (MAE = 0.011, RMSE = 0.011, RE = 0.003%) and HPLS the largest deviation from ALS (MAE = 0.507, RMSE = 0.542, RE = 0.123%). The findings highlight the potential of fusing point clouds from diverse platforms to enhance forest road mapping accuracy. However, the selection of technology should consider trade-offs among accuracy, cost, and operational constraints. Mobile LiDAR solutions, particularly the iPhone, offer promising low-cost alternatives for certain applications. Future research should explore real-time fusion workflows and strategies to improve the cost-effectiveness and scalability of multisensor approaches for forest road monitoring. Full article
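The MAE, RMSE, and RE figures above compare each sensor combination against the airborne LiDAR reference. The sketch below shows one plausible way to compute these statistics from co-registered elevation grids; it assumes the rasters are already aligned on a common grid, whereas the study performed alignment in CloudCompare and ArcGIS Pro.

```python
# Sketch of elevation-difference statistics (MAE, RMSE, relative error) for
# comparing a point-cloud-derived DEM against an ALS reference. Assumes the
# two rasters are already co-registered on the same grid; the RE definition
# used here is one plausible choice, not necessarily the study's.
import numpy as np

def elevation_stats(dem_test: np.ndarray, dem_ref: np.ndarray) -> dict:
    """dem_test, dem_ref: H x W elevation grids in metres (NaN = no data)."""
    valid = ~np.isnan(dem_test) & ~np.isnan(dem_ref)
    diff = dem_test[valid] - dem_ref[valid]
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    rel = 100.0 * mae / np.mean(np.abs(dem_ref[valid]))   # relative error in %
    return {"MAE": mae, "RMSE": rmse, "RE_percent": rel}

# Example with synthetic grids:
ref = np.random.rand(100, 100) * 5 + 400           # reference elevations ~400-405 m
test = ref + np.random.normal(0, 0.02, ref.shape)  # a slightly noisy alternative source
print(elevation_stats(test, ref))
```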

27 pages, 405 KiB  
Article
Comparative Analysis of Centralized and Distributed Multi-UAV Task Allocation Algorithms: A Unified Evaluation Framework
by Yunze Song, Zhexuan Ma, Nuo Chen, Shenghao Zhou and Sutthiphong Srigrarom
Drones 2025, 9(8), 530; https://doi.org/10.3390/drones9080530 - 28 Jul 2025
Abstract
Unmanned aerial vehicles (UAVs), commonly known as drones, offer unprecedented flexibility for complex missions such as area surveillance, search and rescue, and cooperative inspection. This paper presents a unified evaluation framework for the comparison of centralized and distributed task allocation algorithms specifically tailored to multi-UAV operations. We first contextualize the classical assignment problem (AP) under UAV mission constraints, including the flight time, propulsion energy capacity, and communication range, and evaluate optimal one-to-one solvers including the Hungarian algorithm, the Bertsekas ϵ-auction algorithm, and a minimum cost maximum flow formulation. To reflect the dynamic, uncertain environments that UAV fleets encounter, we extend our analysis to distributed multi-UAV task allocation (MUTA) methods. In particular, we examine the consensus-based bundle algorithm (CBBA) and a distributed auction 2-opt refinement strategy, both of which iteratively negotiate task bundles across UAVs to accommodate real-time task arrivals and intermittent connectivity. Finally, we outline how reinforcement learning (RL) can be incorporated to learn adaptive policies that balance energy efficiency and mission success under varying wind conditions and obstacle fields. Through simulations incorporating UAV-specific cost models and communication topologies, we assess each algorithm’s mission completion time, total energy expenditure, communication overhead, and resilience to UAV failures. Our results highlight the trade-off between strict optimality, which is suitable for small fleets in static scenarios, and scalable, robust coordination, necessary for large, dynamic multi-UAV deployments. Full article
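As an example of the centralized baseline discussed above, the sketch below solves a one-to-one UAV-task assignment with the Hungarian algorithm via SciPy. The Euclidean-distance cost model is a placeholder; the paper's framework additionally accounts for flight time, propulsion energy, and communication range.

```python
# One-to-one UAV-task assignment with the Hungarian algorithm, the classical
# centralized baseline. The Euclidean-distance cost model is a placeholder for
# the richer cost terms (flight time, energy, communication) used in the paper.
import numpy as np
from scipy.optimize import linear_sum_assignment

uav_positions = np.random.rand(5, 2) * 1000.0    # 5 UAVs in a 1 km x 1 km area
task_positions = np.random.rand(5, 2) * 1000.0   # 5 inspection tasks

# cost[i, j] = distance UAV i must fly to reach task j
cost = np.linalg.norm(uav_positions[:, None, :] - task_positions[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)         # optimal one-to-one matching
for uav, task in zip(rows, cols):
    print(f"UAV {uav} -> task {task} ({cost[uav, task]:.1f} m)")
print("total distance:", cost[rows, cols].sum())
```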

24 pages, 12286 KiB  
Article
A UAV-Based Multi-Scenario RGB-Thermal Dataset and Fusion Model for Enhanced Forest Fire Detection
by Yalin Zhang, Xue Rui and Weiguo Song
Remote Sens. 2025, 17(15), 2593; https://doi.org/10.3390/rs17152593 - 25 Jul 2025
Abstract
UAVs are essential for forest fire detection due to vast forest areas and the inaccessibility of high-risk zones, enabling rapid long-range inspection and detailed close-range surveillance. However, aerial photography faces challenges like multi-scale target recognition and complex scenario adaptation (e.g., deformation, occlusion, lighting variations). RGB-Thermal fusion methods integrate visible-light texture and thermal infrared temperature features effectively, but current approaches are constrained by limited datasets and insufficient exploitation of cross-modal complementary information, ignoring cross-level feature interaction. A time-synchronized multi-scene, multi-angle aerial RGB-Thermal dataset (RGBT-3M), with “Smoke–Fire–Person” annotations and modal alignment via the M-RIFT method, was constructed to address the problem of data scarcity in wildfire scenarios. We then propose a CP-YOLOv11-MF fusion detection model based on the advanced YOLOv11 framework, which learns the complementary heterogeneous features of each modality in a progressive manner. Experimental validation demonstrates the superiority of our method, with a precision of 92.5%, a recall of 93.5%, a mAP50 of 96.3%, and a mAP50-95 of 62.9%. The model’s RGB-Thermal fusion capability enhances early fire detection, offering a benchmark dataset and methodological advancement for intelligent forest conservation, with implications for AI-driven ecological protection. Full article
(This article belongs to the Special Issue Advances in Spectral Imagery and Methods for Fire and Smoke Detection)

28 pages, 42031 KiB  
Article
A Building Crack Detection UAV System Based on Deep Learning and Linear Active Disturbance Rejection Control Algorithm
by Lei Zhang, Lili Gong, Le Wang, Zhou Wang and Song Yan
Electronics 2025, 14(15), 2975; https://doi.org/10.3390/electronics14152975 - 25 Jul 2025
Abstract
This paper presents a UAV-based real-time building crack detection system that integrates an improved YOLOv8 algorithm with Linear Active Disturbance Rejection Control (LADRC). The system is equipped with a high-resolution camera and sensors to capture high-definition images and height information. First, a trajectory tracking controller based on LADRC was designed for the UAV, which uses a linear extended state observer to estimate and compensate for unknown disturbances such as wind interference, significantly enhancing the flight stability of the UAV in complex environments and ensuring stable crack image acquisition. Second, we integrated a Convolutional Block Attention Module (CBAM) into the YOLOv8 model, dynamically enhancing crack feature extraction through both channel and spatial attention mechanisms and thereby improving recognition robustness in complex backgrounds. Finally, a skeleton extraction algorithm was applied for the secondary processing of the segmented cracks, enabling precise calculation of crack length and average width, with the results output to a user interface for visualization. The experimental results demonstrate that the system successfully identifies and extracts crack regions, accurately calculates crack dimensions, and enables real-time monitoring through high-speed data transmission to the ground station. Compared to traditional manual inspection methods, the system significantly improves detection efficiency while maintaining high accuracy and reliability. Full article
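The skeleton-based measurement step can be illustrated as follows: the skeleton length approximates crack length, and average width is estimated as mask area divided by skeleton length. This is a minimal sketch, not the authors' algorithm, and the pixel-to-millimetre scale factor is a hypothetical value that would in practice come from camera height and intrinsics.

```python
# Sketch of skeleton-based crack measurement: skeleton pixel count approximates
# crack length, and average width ~ mask area / skeleton length. The
# mm_per_pixel scale factor is a hypothetical placeholder, not a value from
# the paper; in the real system it would come from flight height and camera
# intrinsics.
import numpy as np
from skimage.morphology import skeletonize

def crack_dimensions(crack_mask: np.ndarray, mm_per_pixel: float = 0.8):
    """crack_mask: boolean H x W array produced by the segmentation model."""
    skeleton = skeletonize(crack_mask)
    length_px = skeleton.sum()                       # skeleton pixel count ~ crack length
    if length_px == 0:
        return 0.0, 0.0
    avg_width_px = crack_mask.sum() / length_px      # area / length ~ mean width
    return length_px * mm_per_pixel, avg_width_px * mm_per_pixel

# Example with a synthetic 3-pixel-wide horizontal crack:
mask = np.zeros((200, 400), dtype=bool)
mask[99:102, 50:350] = True
length_mm, width_mm = crack_dimensions(mask)
```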

18 pages, 4203 KiB  
Article
SRW-YOLO: A Detection Model for Environmental Risk Factors During the Grid Construction Phase
by Yu Zhao, Fei Liu, Qiang He, Fang Liu, Xiaohu Sun and Jiyong Zhang
Remote Sens. 2025, 17(15), 2576; https://doi.org/10.3390/rs17152576 - 24 Jul 2025
Abstract
With the rapid advancement of UAV-based remote sensing and image recognition techniques, identifying environmental risk factors from aerial imagery has emerged as a focal point in intelligent inspection during the construction phase of power transmission and distribution projects. The uneven spatial distribution of risk factors on construction sites, their weak texture signatures, and the inherently multi-scale nature of UAV imagery pose significant detection challenges. To address these issues, we propose a one-stage SRW-YOLO algorithm built upon the YOLOv11 framework. First, a P2-scale shallow feature detection layer is added to capture high-resolution fine details of small targets. Second, we integrate a one-shot aggregation module of reparameterized convolutions based on channel shuffle (RCS-OSA) into the shallow layers of the backbone and neck, enhancing feature extraction while significantly reducing inference latency. Finally, a WIoU v3 loss function with a dynamic non-monotonic focusing mechanism is employed to reweight low-quality annotations, thereby improving small-object localization accuracy. Experimental results demonstrate that SRW-YOLO achieves an overall precision of 80.6% and a mAP of 79.1% on the State Grid dataset, and exhibits similarly superior performance on the VisDrone2019 dataset. Compared with other one-stage detectors, SRW-YOLO delivers markedly higher detection accuracy, offering critical technical support for multi-scale, heterogeneous environmental risk monitoring during the construction phase of power transmission and distribution projects, and establishes a theoretical foundation for rapid and accurate inspection using UAV-based intelligent imaging. Full article

34 pages, 7293 KiB  
Article
Evaluation of Photogrammetric Methods for Displacement Measurement During Structural Load Testing
by Ante Marendić, Dubravko Gajski, Ivan Duvnjak and Rinaldo Paar
Remote Sens. 2025, 17(15), 2569; https://doi.org/10.3390/rs17152569 - 24 Jul 2025
Abstract
The safety and longevity of engineering structures depend on precise and timely monitoring, especially during load testing inspections. Conventional displacement measurement methods, such as LVDT sensors, GNSS, RTS, and levels, each present benefits and limitations in terms of accuracy, applicability, and practicality. Photogrammetry has emerged as a promising alternative, offering non-contact measurement, cost-effectiveness, and adaptability in challenging environments. This study investigates the potential of photogrammetric methods for determining structural displacements during load testing in real-world conditions, where such approaches remain underutilized. Two photogrammetric techniques were tested: (1) a single-image homography-based approach, and (2) a multi-image bundle block adjustment (BBA) approach using both UAV and tripod-mounted imaging platforms. Displacement results from both methods were compared against reference measurements obtained by traditional LVDT sensors and a robotic total station. The study evaluates the influence of different camera systems, image acquisition techniques, and processing methods on the overall measurement accuracy. The findings suggest that photogrammetric methods, especially when optimized, can provide reliable displacement data with sub-millimeter accuracy, highlighting their potential as a viable alternative or complement to established geodetic and sensor-based approaches in structural testing. Full article
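The single-image homography approach can be sketched as follows: control points of known planar coordinates define a homography per load stage, and displacement is the difference of the mapped target positions between stages. The control-point and target coordinates below are placeholders, not data from the study.

```python
# Sketch of the single-image homography idea: known planar control points
# define a homography for each load stage, and the target's displacement is
# the difference of its mapped object-plane positions between stages.
# All coordinates below are hypothetical placeholders, not study data.
import numpy as np
import cv2

# Object-plane coordinates of four control points (metres) and their pixel
# locations in the reference and loaded images.
obj_pts = np.array([[0, 0], [4, 0], [4, 2], [0, 2]], dtype=np.float32)
img_ref = np.array([[102, 610], [1820, 598], [1812, 180], [110, 188]], dtype=np.float32)
img_load = np.array([[101, 611], [1819, 600], [1811, 182], [109, 190]], dtype=np.float32)

H_ref, _ = cv2.findHomography(img_ref, obj_pts)
H_load, _ = cv2.findHomography(img_load, obj_pts)

def to_object(H, pixel_xy):
    p = np.array([[pixel_xy]], dtype=np.float32)     # shape (1, 1, 2) as OpenCV expects
    return cv2.perspectiveTransform(p, H)[0, 0]

target_px_ref, target_px_load = (960.0, 400.0), (960.0, 403.5)   # tracked target pixel
displacement = to_object(H_load, target_px_load) - to_object(H_ref, target_px_ref)
print("in-plane displacement (m):", displacement)
```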
