Search Results (230)

Search Parameters:
Keywords = autonomous inspection system

36 pages, 1840 KiB  
Review
Enabling Intelligent Industrial Automation: A Review of Machine Learning Applications with Digital Twin and Edge AI Integration
by Mohammad Abidur Rahman, Md Farhan Shahrior, Kamran Iqbal and Ali A. Abushaiba
Automation 2025, 6(3), 37; https://doi.org/10.3390/automation6030037 - 5 Aug 2025
Abstract
The integration of machine learning (ML) into industrial automation is fundamentally reshaping how manufacturing systems are monitored, inspected, and optimized. By applying machine learning to real-time sensor data and operational histories, advanced models enable proactive fault prediction, intelligent inspection, and dynamic process control—directly enhancing system reliability, product quality, and efficiency. This review explores the transformative role of ML across three key domains: Predictive Maintenance (PdM), Quality Control (QC), and Process Optimization (PO). It also analyzes how Digital Twin (DT) and Edge AI technologies are expanding the practical impact of ML in these areas. Our analysis reveals a marked rise in deep learning, especially convolutional and recurrent architectures, with a growing shift toward real-time, edge-based deployment. The paper also catalogs the datasets used, the tools and sensors employed for data collection, and the industrial software platforms supporting ML deployment in practice. This review not only maps the current research terrain but also highlights emerging opportunities in self-learning systems, federated architectures, explainable AI, and themes such as self-adaptive control, collaborative intelligence, and autonomous defect diagnosis—indicating that ML is poised to become deeply embedded across the full spectrum of industrial operations in the coming years.
(This article belongs to the Section Industrial Automation and Process Control)
23 pages, 10936 KiB  
Article
Towards Autonomous Coordination of Two I-AUVs in Submarine Pipeline Assembly
by Salvador López-Barajas, Alejandro Solis, Raúl Marín-Prades and Pedro J. Sanz
J. Mar. Sci. Eng. 2025, 13(8), 1490; https://doi.org/10.3390/jmse13081490 - 1 Aug 2025
Viewed by 209
Abstract
Inspection, maintenance, and repair (IMR) operations on underwater infrastructure remain costly and time-intensive because fully teleoperated remotely operated vehicles (ROVs) lack the range and dexterity necessary for precise cooperative underwater manipulation, and the alternative of using professional divers is ruled out due to the risk involved. This work presents and experimentally validates an autonomous, dual-I-AUV (Intervention–Autonomous Underwater Vehicle) system capable of assembling rigid pipeline segments through coordinated actions in a confined underwater workspace. The first I-AUV is a Girona 500 (4-DoF vehicle motion, pitch and roll stable) fitted with multiple payload cameras and a 6-DoF Reach Bravo 7 arm, giving the vehicle 10 total DoF. The second I-AUV is a BlueROV2 Heavy equipped with a Reach Alpha 5 arm, likewise yielding 10 DoF. The workflow comprises (i) detection and grasping of a coupler pipe section, (ii) synchronized teleoperation to an assembly start pose, and (iii) assembly using a kinematic controller that exploits the Girona 500’s full 10 DoF, while the BlueROV2 holds position and orientation to stabilize the workspace. Validation took place in a 12 m × 8 m × 5 m water tank. Results show that the paired I-AUVs can autonomously perform precision pipeline assembly in real water conditions, representing a significant step toward fully automated subsea construction and maintenance.
(This article belongs to the Section Ocean Engineering)
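The assembly relies on a whole-body kinematic controller for the redundant vehicle–arm system; the listing gives no equations, so the following is only a generic resolved-rate sketch in Python/NumPy under assumed interfaces (a hypothetical `jacobian(q)` callable and a 6-DoF task-space error), not the authors' controller.

```python
import numpy as np

def damped_pinv(J, damping=1e-2):
    """Damped least-squares pseudoinverse, robust near singularities."""
    return J.T @ np.linalg.inv(J @ J.T + (damping ** 2) * np.eye(J.shape[0]))

def resolved_rate_step(q, x_err, jacobian, gain=1.0, dt=0.05):
    """One control step: map a 6-DoF task-space error to whole-body rates.

    q        : current configuration (e.g., a 10-vector for vehicle + arm)
    x_err    : 6-vector task-space error (position and orientation)
    jacobian : callable returning the 6 x len(q) task Jacobian at q (assumed given)
    """
    J = jacobian(q)                    # 6 x n end-effector Jacobian
    v_cmd = gain * x_err               # proportional task-space velocity command
    q_dot = damped_pinv(J) @ v_cmd     # redundancy-resolving rate command
    return q + dt * q_dot              # integrate to the next configuration
```

A null-space projection term could be appended to `q_dot` to use the remaining degrees of freedom for secondary goals such as joint-limit avoidance.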

18 pages, 3506 KiB  
Review
A Review of Spatial Positioning Methods Applied to Magnetic Climbing Robots
by Haolei Ru, Meiping Sheng, Jiahui Qi, Zhanghao Li, Lei Cheng, Jiahao Zhang, Jiangjian Xiao, Fei Gao, Baolei Wang and Qingwei Jia
Electronics 2025, 14(15), 3069; https://doi.org/10.3390/electronics14153069 - 31 Jul 2025
Viewed by 177
Abstract
Magnetic climbing robots hold significant value for operations in complex industrial environments, particularly for the inspection and maintenance of large-scale metal structures. High-precision spatial positioning is the foundation for enabling autonomous and intelligent operations in such environments. However, the existing literature lacks a systematic and comprehensive review of spatial positioning techniques tailored to magnetic climbing robots. This paper addresses this gap by categorizing and evaluating current spatial positioning approaches. Initially, single-sensor-based methods are analyzed with a focus on external sensor approaches. Then, multi-sensor fusion methods are explored to overcome the shortcomings of single-sensor-based approaches. Multi-sensor fusion methods include simultaneous localization and mapping (SLAM), integrated positioning systems, and multi-robot cooperative positioning. To address non-uniform noise and environmental interference, both analytical and learning-based approaches are reviewed. Common analytical methods include Kalman-type filtering, particle filtering, and correlation filtering, while typical learning-based approaches involve deep reinforcement learning (DRL) and neural networks (NNs). Finally, challenges and future development trends are discussed. Multi-sensor fusion and lightweight design are the future trends in the advancement of spatial positioning technologies for magnetic climbing robots.
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)

26 pages, 2457 KiB  
Review
Crack Detection in Civil Infrastructure Using Autonomous Robotic Systems: A Synergistic Review of Platforms, Cognition, and Autonomous Action
by Rong Dai, Rui Wang, Chang Shu, Jianming Li and Zhe Wei
Sensors 2025, 25(15), 4631; https://doi.org/10.3390/s25154631 - 26 Jul 2025
Viewed by 490
Abstract
Traditional manual crack inspection methods often face limitations in terms of efficiency, safety, and consistency. To overcome these issues, a new approach based on autonomous robotic systems has gained attention, combining robotics, artificial intelligence, and advanced sensing technologies. However, most existing reviews focus on individual components in isolation and fail to present a complete picture of how these systems work together. This study focuses on robotic crack detection and proposes a structured framework that connects three core modules: the physical platform (robots and sensors), the cognitive core (crack detection algorithms), and autonomous action (navigation and planning). We analyze key technologies, their interactions, and the challenges involved in real-world implementation. The aim is to provide a clear roadmap of current progress and future directions, helping researchers and engineers better understand the field and develop smart, deployable systems for infrastructure crack inspection.

25 pages, 13994 KiB  
Article
A Semi-Autonomous Aerial Platform Enhancing Non-Destructive Tests
by Simone D’Angelo, Salvatore Marcellini, Alessandro De Crescenzo, Michele Marolla, Vincenzo Lippiello and Bruno Siciliano
Drones 2025, 9(8), 516; https://doi.org/10.3390/drones9080516 - 23 Jul 2025
Viewed by 509
Abstract
The use of aerial robots for inspection and maintenance in industrial settings demands high maneuverability, precise control, and reliable measurements. This study explores the development of a fully customized unmanned aerial manipulator (UAM), composed of a tilting drone and an articulated robotic arm, designed to perform non-destructive in-contact inspections of iron structures. The system is intended to operate in complex and potentially hazardous environments, where autonomous execution is supported by shared-control strategies that include human supervision. A parallel force–impedance control framework is implemented to enable smooth and repeatable contact between a sensor for ultrasonic testing (UT) and the inspected surface. During interaction, the arm applies a controlled push to create a vacuum seal, allowing accurate thickness measurements. The control strategy is validated through repeated trials in both indoor and outdoor scenarios, demonstrating consistency and robustness. The paper also addresses the mechanical and control integration of the complex robotic system, highlighting the challenges and solutions in achieving a responsive and reliable aerial platform. The combination of semi-autonomous control and human-in-the-loop operation significantly improves the effectiveness of inspection tasks in hard-to-reach environments, enhancing both human safety and task performance.
(This article belongs to the Special Issue Unmanned Aerial Manipulation with Physical Interaction)
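The parallel force–impedance framework itself is not detailed in the listing; the snippet below is a minimal 1-D sketch under assumed gains and sign conventions, combining a mass–damper–stiffness impedance along the contact normal with an integral force loop that shifts the position reference until a desired contact force is held against the surface.

```python
def force_impedance_step(x, x_dot, x_d, f_meas, f_d, x_f, dt,
                         m=1.0, d=40.0, k=400.0, k_f=0.002):
    """One step of a 1-D parallel force-impedance law (illustrative gains only).

    x, x_dot : tool position / velocity along the contact normal
    x_d      : nominal position reference from the planner
    f_meas   : measured force the tool applies to the surface
    f_d      : desired contact force to hold (e.g., for the UT probe's seal)
    x_f      : integrator state of the outer force loop
    """
    x_f += k_f * (f_d - f_meas) * dt     # force loop shifts the reference
    x_r = x_d + x_f                      # parallel composition of the two references
    # desired impedance: m*x_ddot + d*x_dot + k*(x - x_r) = -f_meas (surface reaction)
    x_ddot = (-f_meas - d * x_dot - k * (x - x_r)) / m
    x_dot += x_ddot * dt
    x += x_dot * dt
    return x, x_dot, x_f
```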

27 pages, 705 KiB  
Article
A Novel Wavelet Transform and Deep Learning-Based Algorithm for Low-Latency Internet Traffic Classification
by Ramazan Enisoglu and Veselin Rakocevic
Algorithms 2025, 18(8), 457; https://doi.org/10.3390/a18080457 - 23 Jul 2025
Viewed by 326
Abstract
Accurate and real-time classification of low-latency Internet traffic is critical for applications such as video conferencing, online gaming, financial trading, and autonomous systems, where millisecond-level delays can degrade user experience. Existing methods for low-latency traffic classification, reliant on raw temporal features or static statistical analyses, fail to capture dynamic frequency patterns inherent to real-time applications. These limitations hinder accurate resource allocation in heterogeneous networks. This paper proposes a novel framework integrating wavelet transform (WT) and artificial neural networks (ANNs) to address this gap. Unlike prior works, we systematically apply WT to commonly used temporal features—such as throughput, slope, ratio, and moving averages—transforming them into frequency-domain representations. This approach reveals hidden multi-scale patterns in low-latency traffic, akin to structured noise in signal processing, which traditional time-domain analyses often overlook. These wavelet-enhanced features train a multilayer perceptron (MLP) ANN, enabling dual-domain (time–frequency) analysis. We evaluate our approach on a dataset comprising FTP, video streaming, and low-latency traffic, including mixed scenarios with up to four concurrent traffic types. Experiments demonstrate 99.56% accuracy in distinguishing low-latency traffic (e.g., video conferencing) from FTP and streaming, outperforming k-NN, CNNs, and LSTMs; in mixed-traffic scenarios, the model achieves 74.2–92.8% accuracy. Notably, the method eliminates reliance on deep packet inspection (DPI), offering ISPs a privacy-preserving and scalable way to prioritize time-sensitive traffic. By bridging signal processing and deep learning, this work advances efficient bandwidth allocation and improves quality of service in heterogeneous network environments.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
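To make the dual-domain idea concrete, here is a rough sketch (not the authors' exact pipeline) that decomposes a fixed-length throughput window with a discrete wavelet transform and trains a multilayer perceptron on the resulting sub-band statistics; the window length, the 'db4' wavelet, and the network size are assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(throughput_window, wavelet="db4", level=3):
    """Decompose one fixed-length throughput window into wavelet-domain features."""
    coeffs = pywt.wavedec(throughput_window, wavelet, level=level)
    feats = []
    for c in coeffs:
        # summarize each sub-band with simple statistics (energy and spread)
        feats.extend([np.sum(c ** 2), np.std(c)])
    return np.array(feats)

def train_classifier(X_windows, y):
    """X_windows: (n_samples, window_len) throughput windows; y: traffic class labels."""
    X = np.vstack([wavelet_features(w) for w in X_windows])
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X, y)
    return clf
```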

36 pages, 7426 KiB  
Article
PowerLine-MTYOLO: A Multitask YOLO Model for Simultaneous Cable Segmentation and Broken Strand Detection
by Badr-Eddine Benelmostafa and Hicham Medromi
Drones 2025, 9(7), 505; https://doi.org/10.3390/drones9070505 - 18 Jul 2025
Viewed by 527
Abstract
Power transmission infrastructure requires continuous inspection to prevent failures and ensure grid stability. UAV-based systems, enhanced with deep learning, have emerged as an efficient alternative to traditional, labor-intensive inspection methods. However, most existing approaches rely on separate models for cable segmentation and anomaly detection, leading to increased computational overhead and reduced reliability in real-time applications. To address these limitations, we propose PowerLine-MTYOLO, a lightweight, one-stage, multitask model designed for simultaneous power cable segmentation and broken strand detection from UAV imagery. Built upon the A-YOLOM architecture, and leveraging the YOLOv8 foundation, our model introduces four novel specialized modules—SDPM, HAD, EFR, and the Shape-Aware Wise IoU loss—that improve geometric understanding, structural consistency, and bounding-box precision. We also present the Merged Public Power Cable Dataset (MPCD), a diverse, open-source dataset tailored for multitask training and evaluation. The experimental results show that our model achieves up to +10.68% mAP@50 and +1.7% IoU compared to A-YOLOM, while also outperforming recent YOLO-based detectors in both accuracy and efficiency. These gains are achieved with a smaller model memory footprint and a similar inference speed compared to A-YOLOM. By unifying detection and segmentation into a single framework, PowerLine-MTYOLO offers a promising solution for autonomous aerial inspection and lays the groundwork for future advances in fine-structure monitoring tasks.

22 pages, 3768 KiB  
Article
A Collaborative Navigation Model Based on Multi-Sensor Fusion of Beidou and Binocular Vision for Complex Environments
by Yongxiang Yang and Zhilong Yu
Appl. Sci. 2025, 15(14), 7912; https://doi.org/10.3390/app15147912 - 16 Jul 2025
Viewed by 346
Abstract
This paper addresses the issues of Beidou navigation signal interference and blockage in complex substation environments by proposing an intelligent collaborative navigation model based on Beidou high-precision navigation and binocular vision recognition. The model is designed with Beidou navigation providing global positioning references and binocular vision enabling local environmental perception through a collaborative fusion strategy. The Unscented Kalman Filter (UKF) is used to integrate data from multiple sensors to ensure high-precision positioning and dynamic obstacle avoidance capabilities for robots in complex environments. Simulation results show that the Beidou–Binocular Cooperative Navigation (BBCN) model achieves a global positioning error of less than 5 cm in non-interference scenarios, and an error of only 6.2 cm under high-intensity electromagnetic interference, significantly outperforming the single Beidou model’s error of 40.2 cm. The path planning efficiency is close to optimal (with an efficiency factor within 1.05), and the obstacle avoidance success rate reaches 95%, while the system delay remains within 80 ms, meeting the real-time requirements of industrial scenarios. The innovative fusion approach enables unprecedented reliability for autonomous robot inspection in high-voltage environments, offering significant practical value in reducing human risk exposure, lowering maintenance costs, and improving inspection efficiency in power industry applications. This technology enables continuous monitoring of critical power infrastructure that was previously difficult to automate due to navigation challenges in electromagnetically complex environments.
(This article belongs to the Special Issue Advanced Robotics, Mechatronics, and Automation)
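The abstract names the Unscented Kalman Filter as the fusion backbone but gives no model details; the sketch below, using the third-party filterpy library, fuses a global Beidou fix with a vision-derived position for a planar constant-velocity robot. The state layout, measurement stacking, and noise levels are assumptions, not the authors' design.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1  # control period [s]

def fx(x, dt):
    """Constant-velocity motion model: state = [px, py, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ x

def hx(x):
    """Measurement: stacked Beidou fix and vision-derived position, both in the map frame."""
    return np.array([x[0], x[1], x[0], x[1]])

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=4, dt=dt, fx=fx, hx=hx, points=points)
ukf.Q = np.eye(4) * 0.01                       # process noise (assumed)
ukf.R = np.diag([0.05, 0.05, 0.02, 0.02])      # Beidou vs. vision noise (assumed)

def fuse_step(beidou_xy, vision_xy):
    """One predict/update cycle; returns the fused planar position estimate."""
    ukf.predict()
    ukf.update(np.concatenate([beidou_xy, vision_xy]))
    return ukf.x[:2]
```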

18 pages, 3225 KiB  
Article
Autonomous Tracking of Steel Lazy Wave Risers Using a Hybrid Vision–Acoustic AUV Framework
by Ali Ghasemi and Hodjat Shiri
J. Mar. Sci. Eng. 2025, 13(7), 1347; https://doi.org/10.3390/jmse13071347 - 15 Jul 2025
Viewed by 292
Abstract
Steel lazy wave risers (SLWRs) are critical in offshore hydrocarbon transport for linking subsea wells to floating production facilities in deep-water environments. The incorporation of buoyancy modules reduces curvature-induced stress concentrations in the touchdown zone (TDZ); however, extended operational exposure under cyclic environmental and operational loads results in repeated seabed contact. This repeated interaction modifies the seabed soil over time, gradually forming a trench and altering the riser configuration, which significantly impacts stress patterns and contributes to fatigue degradation. Accurately reconstructing the riser’s evolving profile in the TDZ is essential for reliable fatigue life estimation and structural integrity evaluation. This study proposes a simulation-based framework for the autonomous tracking of SLWRs using a fin-actuated autonomous underwater vehicle (AUV) equipped with a monocular camera and multibeam echosounder. By fusing visual and acoustic data, the system continuously estimates the AUV’s position relative to the riser. A dedicated image processing pipeline, comprising bilateral filtering, edge detection, Hough transform, and K-means clustering, facilitates the extraction of the riser’s centerline and measures its displacement from nearby objects and seabed variations. The framework was developed and validated in the unmanned underwater vehicle (UUV) Simulator, a high-fidelity underwater robotics and pipeline inspection environment. Simulated scenarios included the riser’s dynamic lateral and vertical oscillations, in which the system demonstrated robust performance in capturing complex three-dimensional trajectories. The resulting riser profiles can be integrated into numerical models incorporating riser–soil interaction and non-linear hysteretic behavior, ultimately enhancing fatigue prediction accuracy and informing long-term infrastructure maintenance strategies.
(This article belongs to the Section Ocean Engineering)
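The named processing steps (bilateral filtering, edge detection, Hough transform, K-means clustering) map onto standard OpenCV and scikit-learn calls; the sketch below is a simplified single-frame version under assumed thresholds, not the authors' implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_riser_centerline(frame_bgr):
    """Rough riser centerline sample from one monocular frame (illustrative thresholds)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    smooth = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
    edges = cv2.Canny(smooth, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=80, maxLineGap=20)
    if lines is None:
        return None
    # collect line endpoints and cluster them into two groups (the riser's two edges)
    pts = lines.reshape(-1, 2).astype(float)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pts)
    edge_a = pts[labels == 0].mean(axis=0)
    edge_b = pts[labels == 1].mean(axis=0)
    return (edge_a + edge_b) / 2.0   # midpoint as a crude centerline estimate
```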

33 pages, 10063 KiB  
Article
Wide-Angle Image Distortion Correction and Embedded Stitching System Design Based on Swin Transformer
by Shiwen Lai, Zuling Cheng, Wencui Zhang and Maowei Chen
Appl. Sci. 2025, 15(14), 7714; https://doi.org/10.3390/app15147714 - 9 Jul 2025
Viewed by 358
Abstract
Wide-angle images often suffer from severe radial distortion, compromising geometric accuracy and challenging image correction and real-time stitching, especially in resource-constrained embedded environments. To address these challenges, this study proposes a wide-angle image correction and stitching framework based on a Swin Transformer, optimized for lightweight deployment on edge devices. The model integrates multi-scale feature extraction, Thin Plate Spline (TPS) control point prediction, and optical flow-guided constraints, balancing correction accuracy and computational efficiency. Experiments on synthetic and real-world datasets show that the method outperforms mainstream algorithms, with PSNR gains of 3.28 dB and 2.18 dB on wide-angle and fisheye images, respectively, while maintaining real-time performance. To validate practical applicability, the model is deployed on a Jetson TX2 NX device, and a real-time dual-camera stitching system is built using C++ and DeepStream. The system achieves 15 FPS at 1400 × 1400 resolution, with a correction latency of 56 ms and stitching latency of 15 ms, demonstrating efficient hardware utilization and stable performance. This study presents a deployable, scalable, and edge-compatible solution for wide-angle image correction and real-time stitching, offering practical value for applications such as smart surveillance, autonomous driving, and industrial inspection.
(This article belongs to the Special Issue Latest Research on Computer Vision and Image Processing)
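The correction step predicts Thin Plate Spline (TPS) control points; applying such a warp can be illustrated with OpenCV's shape module (opencv-contrib-python), assuming matching source and target control points have already been produced by an upstream model. This is only a sketch of the warping stage, not the paper's network.

```python
import cv2
import numpy as np

def apply_tps_warp(image, src_pts, dst_pts):
    """Warp `image` so that predicted source control points move onto target points.

    src_pts, dst_pts : (N, 2) arrays of matching control points, assumed to come
    from an upstream prediction model (hypothetical here).
    """
    tps = cv2.createThinPlateSplineShapeTransformer()
    src = np.asarray(src_pts, np.float32).reshape(1, -1, 2)
    dst = np.asarray(dst_pts, np.float32).reshape(1, -1, 2)
    matches = [cv2.DMatch(i, i, 0) for i in range(src.shape[1])]
    # warpImage expects the transformation estimated from target to source,
    # hence the (dst, src) argument order commonly used with this API
    tps.estimateTransformation(dst, src, matches)
    return tps.warpImage(image)
```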

22 pages, 3862 KiB  
Review
Rail Maintenance, Sensor Systems and Digitalization: A Comprehensive Review
by Higinio Gonzalez-Jorge, Eduardo Ríos-Otero, Enrique Aldao, Eduardo Balvís, Fernando Veiga-López and Gabriel Fontenla-Carrera
Future Transp. 2025, 5(3), 83; https://doi.org/10.3390/futuretransp5030083 - 1 Jul 2025
Viewed by 387
Abstract
Railway infrastructures necessitate the inspection of various elements to ensure operational safety. This study concentrates on five key components: rail, sleepers and ballast, track geometry, and catenary. The operational principles of the primary defect measurement sensors are elaborated, emphasizing the use of ultrasound, eddy currents, active and passive optical elements, accelerometers, and ground penetrating radar. Each sensor type is evaluated in terms of its advantages and limitations. Examples of mobile inspection platforms are provided, ranging from laboratory trains to draisines and track trolleys. The authors foresee future trends in railway inspection, including the implementation of IoT sensors, autonomous robots, and geospatial intelligence technologies. It is anticipated that the integration of sensors within both infrastructure and rolling stock will enhance maintenance and safety, with an increased utilization of autonomous robotic systems for hazardous and hard-to-reach areas.

22 pages, 7106 KiB  
Article
Enhancing Highway Scene Understanding: A Novel Data Augmentation Approach for Vehicle-Mounted LiDAR Point Cloud Segmentation
by Dalong Zhou, Yuanyang Yi, Yu Wang, Zhenfeng Shao, Yanjun Hao, Yuyan Yan, Xiaojin Zhao and Junkai Guo
Remote Sens. 2025, 17(13), 2147; https://doi.org/10.3390/rs17132147 - 23 Jun 2025
Viewed by 394
Abstract
The intelligent extraction of highway assets is pivotal for advancing transportation infrastructure and autonomous systems, yet traditional methods relying on manual inspection or 2D imaging struggle with sparse, occluded environments, and class imbalance. This study proposes an enhanced MinkUNet-based framework to address data scarcity, occlusion, and imbalance in highway point cloud segmentation. A large-scale dataset (PEA-PC Dataset) was constructed, covering six key asset categories, addressing the lack of specialized highway datasets. A hybrid conical masking augmentation strategy was designed to simulate natural occlusions and enhance local feature retention, while semi-supervised learning prioritized foreground differentiation. The experimental results showed that the overall mIoU reached 73.8%, with the IoU of bridge railings and emergency obstacles exceeding 95%. The IoU of columnar assets increased from 2.6% to 29.4% through occlusion perception enhancement, demonstrating the effectiveness of this method in improving object recognition accuracy. The framework balances computational efficiency and robustness, offering a scalable solution for sparse highway scenes. However, challenges remain in segmenting vegetation-occluded pole-like assets due to partial data loss. This work highlights the efficacy of tailored augmentation and semi-supervised strategies in refining 3D segmentation, advancing applications in intelligent transportation and digital infrastructure.
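The paper's hybrid conical masking strategy is not specified in the listing; as a loose illustration of the underlying idea, the sketch below removes the points of a LiDAR cloud that fall inside a randomly oriented cone, simulating a natural occlusion. The apex sampling, cone half-angle, and axis choice are assumptions.

```python
import numpy as np

def conical_mask_augment(points, half_angle_deg=15.0, rng=None):
    """Drop points that fall inside a random cone, simulating an occlusion.

    points : (N, 3) array of LiDAR points in the scene frame.
    Returns the augmented (masked) point cloud.
    """
    rng = np.random.default_rng() if rng is None else rng
    apex = points[rng.integers(len(points))]            # random apex on the cloud
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)                        # random unit cone axis
    v = points - apex
    dist = np.linalg.norm(v, axis=1) + 1e-9
    cos_angle = (v @ axis) / dist                       # angle between point ray and axis
    inside = cos_angle > np.cos(np.radians(half_angle_deg))
    return points[~inside]                              # keep points outside the cone
```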

51 pages, 13105 KiB  
Review
Current Status and Trends of Wall-Climbing Robots Research
by Shengjie Lou, Zhong Wei, Jinlin Guo, Yu Ding, Jia Liu and Aiguo Song
Machines 2025, 13(6), 521; https://doi.org/10.3390/machines13060521 - 15 Jun 2025
Viewed by 1256
Abstract
A wall-climbing robot is an electromechanical device capable of autonomous or semi-autonomous movement on intricate vertical surfaces (e.g., walls, glass facades, pipelines, ceilings, etc.), typically incorporating sensing and adaptive control systems to enhance task performance. It is designed to perform tasks such as inspection, cleaning, maintenance, and rescue while maintaining stable adhesion to the surface. Its applications span various sectors, including industrial maintenance, marine engineering, and aerospace manufacturing. This paper provides a systematic review of the physical principles and scalability of various attachment methods used in wall-climbing robots, with a focus on the applicability and limitations of different attachment mechanisms in relation to robot size and structural design. For specific attachment methods, the design and compatibility of motion and attachment mechanisms are analyzed to offer design guidance for wall-climbing robots tailored to different operational tasks. Additionally, this paper reviews localization and path planning methods for wall-climbing robots, comparing graph search, sampling-based, and feedback-based algorithms to guide strategy selection across varying environments and tasks. Finally, this paper outlines future development trends in wall-climbing robots, including the diversification of locomotion mechanisms, hybridization of attachment systems, and advancements in intelligent localization and path planning. This work provides a comprehensive theoretical foundation and practical reference for the design and application of wall-climbing robots.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

16 pages, 3447 KiB  
Review
Autonomous Mobile Inspection Robots in Deep Underground Mining—The Current State of the Art and Future Perspectives
by Martyna Konieczna-Fuławka, Anton Koval, George Nikolakopoulos, Matteo Fumagalli, Laura Santas Moreu, Victor Vigara-Puche, Jakob Müller and Michael Prenner
Sensors 2025, 25(12), 3598; https://doi.org/10.3390/s25123598 - 7 Jun 2025
Viewed by 996
Abstract
In this article, the current state of the art in autonomous mobile robots used for inspection in deep underground mining and exploration is described, and directions for future development are highlighted. The increasing demand for critical raw materials (CRMs) and deeper excavations pose a higher risk for people and require new solutions for the maintenance and inspection of both underground machines and excavations. Mitigation of risks and a reduction in accidents (fatal, serious, and light) may be achieved by the implementation of mobile or partly autonomous solutions such as drones for exploration or robots for exploration and initial excavation. This study examines various types of mobile unmanned robots, such as the legged ANYmal, robots on tracked chassis, and flying drones. The main scope of this review is the evaluation of effectiveness and technological advancement with respect to improving safety and efficiency in deep underground and abandoned mines. Notable possibilities include multi-sensor systems and cooperative behaviors in multi-robot systems. This study also highlights the challenges of working and navigating in deep underground mines, where GNSS/GPS positioning is unavailable. Mobile inspection robots have a major role to play in transforming underground operations; nevertheless, several aspects still need to be developed. Further improvement might focus on increasing autonomy, improving sensor technology, and integrating robots with existing mining infrastructure. This might lead to safer and more efficient extraction and the SmartMine of the future.
(This article belongs to the Section Sensors and Robotics)

25 pages, 11680 KiB  
Article
ETAFHrNet: A Transformer-Based Multi-Scale Network for Asymmetric Pavement Crack Segmentation
by Chao Tan, Jiaqi Liu, Zhedong Zhao, Rufei Liu, Peng Tan, Aishu Yao, Shoudao Pan and Jingyi Dong
Appl. Sci. 2025, 15(11), 6183; https://doi.org/10.3390/app15116183 - 30 May 2025
Viewed by 649
Abstract
Accurate segmentation of pavement cracks from high-resolution remote sensing imagery plays a crucial role in automated road condition assessment and infrastructure maintenance. However, crack structures often exhibit asymmetry, irregular morphology, and multi-scale variations, posing significant challenges to conventional CNN-based methods in real-world environments. In this work, we present ETAFHrNet, a novel attention-guided segmentation network designed to address the limitations of traditional architectures in detecting fine-grained and asymmetric patterns. The proposed ETAFHrNet focuses on two predominant pavement-distress morphologies—linear cracks (transverse and longitudinal) and alligator cracks—and has been empirically validated on their intersections and branching patterns over both asphalt and concrete road surfaces. ETAFHrNet integrates Transformer-based global attention and multi-scale hybrid feature fusion, enhancing both contextual perception and detail sensitivity. The network introduces two key modules: the Efficient Hybrid Attention Transformer (EHAT), which captures long-range dependencies, and the Cross-Scale Hybrid Attention Module (CSHAM), which adaptively fuses features across spatial resolutions. To support model training and benchmarking, we also propose QD-Crack, a high-resolution, pixel-level annotated dataset collected from real-world road inspection scenarios. Experimental results show that ETAFHrNet significantly outperforms existing methods—including U-Net, DeepLabv3+, and HRNet—in both segmentation accuracy and generalization ability. These findings demonstrate the effectiveness of interpretable, multi-scale attention architectures in complex object detection and image classification tasks, making our approach relevant for broader applications, such as autonomous driving, remote sensing, and smart infrastructure systems.
(This article belongs to the Special Issue Object Detection and Image Classification)