Article

A Wall-Climbing Robot with a Mechanical Arm for Weld Inspection of Large Pressure Vessels

Ming Zhong, Mingjian Pan, Zhengxiong Mao, Ruifei Lyu and Yaxin Liu
1 Robotics Institute, Harbin Institute of Technology, Weihai 264200, China
2 State Nuclear Power Demonstration Plant Co., Ltd., Rongcheng 264300, China
* Author to whom correspondence should be addressed.
Actuators 2025, 14(12), 607; https://doi.org/10.3390/act14120607
Submission received: 21 September 2025 / Revised: 9 November 2025 / Accepted: 2 December 2025 / Published: 12 December 2025
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)

Abstract

Inspecting the inner walls of large pressure vessels requires accurate weld seam recognition, complete coverage, and precise path tracking, particularly in low-feature environments. This paper presents a fully autonomous mobile robotic system that integrates weld seam detection, localization, and tracking to support ultrasonic testing. An improved Differentiable Binarization Network (DBNet) combined with the Spatially Variant Transformer (SVTR) model enhances digital stamp recognition, while weld paths are reconstructed from three-dimensional position data acquired via binocular stereo vision. To ensure complete traversal and accurate tracking, a global–local hierarchical planning strategy is implemented: the A-star (A*) algorithm performs global path planning, the Rapidly Exploring Random Tree Connect (RRT-Connect) algorithm handles local path generation, and point cloud normal–based spherical interpolation produces smooth tracking trajectories for robotic arm motion control. Experimental validation demonstrates a 94.7% digital stamp recognition rate, 95.8% localization success, 1.65 mm average weld tracking error, 2.12° normal fitting error, 98.2% seam coverage, and a tracking speed of 96 mm/s. These results confirm the system’s capability to automate weld seam inspection and provide a reliable foundation for subsequent ultrasonic testing in pressure vessel applications.

1. Introduction

As indispensable elements of contemporary industrial infrastructure, pressure vessels serve critical functions across sectors including nuclear energy, petroleum processing, and chemical engineering. Prolonged exposure to high temperatures, elevated pressures, and corrosive environments renders their structural materials prone to accelerated fatigue and degradation. Among these regions, weld seams constitute the most vulnerable zones, where creep cracking and fatigue fracture are most likely to initiate. The failure of these seams can ultimately lead to catastrophic structural collapse [1,2]. Currently, the inspection of pressure vessel welds remains predominantly manual, relying on handheld instruments operated by skilled technicians. For internal wall examinations, inspectors are frequently suspended by safety harnesses, resulting in exceptionally harsh working conditions, substantial physical demands, low inspection efficiency, and significant safety risks [3].
Robotic systems designed for large-scale weld inspection of pressure vessels face three principal challenges. First, they must achieve stable navigation within dark, confined environments characterized by complex geometries and multi-curved inner surfaces. Second, they require high inspection efficiency and comprehensive coverage, demanding both global path planning and precise local tracking along weld regions. Third, they must reliably detect and localize weld seams in low-feature environments to ensure accurate and consistent data acquisition.
Reliable adhesion and locomotion are essential for stable robotic operation on pressure vessel surfaces. Over the past decade, diverse climbing mechanisms have been investigated, including negative-pressure adhesion, magnetic attachment, and vacuum suction [4]. Jae-Hee Kim et al. developed a submersible inspection robot employing magnetic rollers to enhance both stability and mobility [5]. Similarly, researchers at Tohoku University in Japan introduced a wall-climbing robot equipped with multi-suction feet based on an innovative vacuum suction design [6].
Achieving fully autonomous robotic weld inspection requires both complete coverage of all weld seams and high-precision tracking during local inspection; path planning algorithms and end-effector tracking schemes are therefore core components of such systems. Classical global planners, such as Dijkstra's, A*, D*, and Floyd's algorithms, provide foundational solutions for shortest-path routing [7]. To enhance inspection performance, Govindaraju et al. proposed an optimization algorithm for coverage path planning (CPP) in multi-robot systems [8]. Fareh et al. introduced a strategy that improves both planning speed and quality [9]. Zhang et al. further refined path optimization using an improved Gray Wolf Algorithm (IGWA), achieving a 14.84% enhancement in performance [10].
The emergence of neural networks and advances in machine vision have brought weld seam detection via deep learning to the forefront of welding research. A review of prominent studies in recent years [11,12,13,14,15,16] shows that most of them leverage image processing and deep learning-based convolutional neural networks (CNNs) to automatically learn the complex mapping from weld seam images to weld seam features. For instance, Chen et al. developed an improved deep CNN for real-time seam detection in underwater marine structures [17]. Miao et al. enhanced image segmentation by extracting edge features through convolution and integrating them with integral images [11]. In low-feature environments, digital steel stamps are often used to aid in seam localization. Because these stamps carry semantic content, their detection falls under the scope of optical character recognition (OCR) [18,19,20,21,22]. Zhou et al. introduced a single-network model capable of detecting text regions in natural scenes with high speed and accuracy [22]. Shi et al. proposed an end-to-end neural architecture that integrates feature extraction, sequence modeling, and transcription [23]. Sun et al. applied a YOLO-based deep CNN to recognize handwritten identification numbers on steel billets [20].
To achieve autonomous weld seam detection and localization within large pressure vessels, we developed a compact wall-climbing robotic system capable of operating in complex, curved environments. The vessel interior is partitioned into multiple inspection regions to enable sequential route planning and ensure complete weld coverage. In low-feature environments where visual cues are limited, digital steel stamps are employed as auxiliary markers. The system integrates OCR-based recognition with point cloud data to determine the three-dimensional spatial coordinates of each stamp. Weld seams are subsequently reconstructed through spatial interpolation and used to generate reference trajectories for inspection. A vision-guided tracking algorithm for the robotic arm is then implemented to follow the reconstructed weld seams. This algorithm receives seam position data from the vision module, extracts the surrounding point cloud, estimates the local surface normal, and determines the target end-effector pose to achieve precise and stable tracking along the weld path.
The main contributions of this work are summarized as follows:
  • A hierarchical motion planning–based weld traversal strategy that ensures complete inspection coverage in large, curved, and segmented environments, demonstrating strong adaptability to diverse pressure vessel geometries.
  • An advanced weld seam identification framework that integrates DBNet and SVTR network architectures, incorporating improvements in bottom-up path design and spatial–channel feature extraction. This method achieves high detection accuracy while maintaining real-time performance in experimental evaluations.
The remainder of this paper is organized as follows. Section 2 details the inspection environment, hardware design of the robotic system, and overall workflow. Section 3 presents the methodologies for hierarchical traversal, stamped mark detection, and weld path reconstruction. Section 4 reports the experimental setup and corresponding results. Section 5 discusses the findings and concludes the study.

2. System Scheme

2.1. Working Environment and Requirements

In the practical scenario examined in this study, weld seams on the inner wall of a reactor pressure vessel are inspected for potential defects. The vessel measures 430 mm in diameter and 1280 mm in height. During scheduled maintenance, six welds located on the upper nozzles and two circumferential welds along the lower inner wall require inspection. According to nuclear power plant manufacturing standards, the weld surfaces are overlaid with a 6 mm-thick stainless-steel cladding to prevent corrosion and oxidation. This protective layer, however, obscures visual features indicative of weld locations or geometries, resulting in a low-feature surface.
As illustrated in Figure 1, numerical steel stamp marks are applied to the vessel surface prior to delivery to facilitate weld identification. Starting from the 0° reference position and proceeding clockwise in the top view, numerical marks (0, 1, 2, …) are imprinted at 187.5 mm intervals along the entire circumference of each weld.
Based on the preceding analysis of the operational background, the fundamental functional requirements for the large pressure vessel weld inspection robot are defined as follows:
  • The system must be capable of planning and executing complete traversal paths for all weld seams distributed along the vessel’s inner wall.
  • It should perform weld detection and localization using steel stamp markers in low-feature environments.
  • It must ensure high-precision tracking of weld seams across large, curved surfaces.
The corresponding technical specifications are summarized below:
  • The inspection system should achieve a weld coverage rate of at least 97%.
  • Weld seam localization accuracy in low-feature regions should exceed 95%, with positional deviations within 2 mm.
  • The tracking accuracy for weld seams should be maintained within 3 mm.

2.2. Overall Program

Considering the operational environment and system requirements, the robot is designed as a dual-wheel differential-drive platform equipped with a six-degree-of-freedom (6-DOF) manipulator (Elite Robot Co., Ltd., Suzhou, China). The perception module integrates an Intel RealSense D435i (Intel, Santa Clara, CA, USA) depth camera mounted on the manipulator’s end effector, enabling weld seam inspection in large-scale, curved, and low-feature environments. The depth camera offers a sensing precision of up to 10 mm, with a horizontal field of view of 91.2° and a vertical field of view of 65.5°. To satisfy spatial constraints and inspection demands, the robotic arm provides a maximum reach of 914 mm, weighs 20 kg, and supports a payload capacity of 6 kg. The control module incorporates an NVIDIA Jetson Orin embedded computer (NVIDIA, Santa Clara, CA, USA) to perform real-time data acquisition and processing from all onboard sensors. The complete hardware configuration—including the chassis motion, perception, manipulation, and control modules—is illustrated in Figure 2, which depicts the integrated system architecture of the robot.
Figure 3 presents the overall workflow of the proposed system, which comprises three main stages: Global Coverage Planning, Weld Inspection and Tracking, and Output Tracking Parameters. In the first stage, the system defines the inspection workspace, establishes waypoints, and performs global path planning to ensure full coverage of all weld regions. During the Weld Inspection and Tracking stage, the robot detects and recognizes numerical stamps, extracts the weld paths, and executes local operations—including collision checking, obstacle analysis, local path planning, and pose adjustment—to adapt to curved surfaces. Throughout the process, Output Tracking Parameters are continuously generated to guide the manipulator, maintaining precise motion and stable contact during ultrasonic inspection. The results, including tracking accuracy and inspection imagery, are then evaluated to verify whether the inspection quality meets the required standards before proceeding to the next region.

3. Materials and Methods

3.1. Weld Traversal and Inspection Methods for Large Pressure Vessels

The weld distribution within the pressure vessel is predetermined during the design stage, with weld seams primarily located along the circumferential joints of the vessel shell. To ensure complete traversal of these welds across large, curved surfaces, a hierarchical motion planning strategy is proposed. As illustrated in Figure 4, the planning framework operates on two levels, enabling efficient global coverage and precise local trajectory generation.
At the first level, global path planning integrates weld distribution, the robot’s workspace, and obstacle layout. A hybrid A*–ACO algorithm computes the optimal route between inspection zones, ensuring full coverage with minimal path length.
At the second level, local motion planning refines the trajectory after weld identification. The robotic arm follows the weld centerline precisely, dynamically adjusting its pose to maintain stable operation on curved surfaces.

3.1.1. Global Path Planning Strategy Based on the A* Algorithm

Global path planning algorithms such as Dijkstra’s, A*, D*, and Floyd’s algorithms have been widely applied in known environments [24]. After segmenting the inspection area based on weld distribution and defining path waypoints, the robot’s global path must be optimized for smoothness to minimize chassis wear from excessive steering. Safety considerations require the robot to avoid hazardous pipeline openings that pose fall risks. The A* algorithm effectively accounts for obstacles and efficiently plans optimal paths between waypoints [25]. However, A* primarily handles single start-to-goal pathfinding. When multiple goal points need to be visited, the problem can be formulated as a Travelling Salesman Problem (TSP), where the objective is to determine the optimal visiting sequence that minimizes the overall travel distance. To solve this, the ant colony optimization (ACO) algorithm is employed to determine the traversal order, and A* is subsequently used to plan the optimal paths between consecutive points.
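For concreteness, the following is a minimal sketch of this two-stage scheme, assuming a toy occupancy grid and illustrative ACO hyperparameters (not the authors' values): pairwise A* distances between waypoints form the TSP cost matrix, and a basic ant colony search returns the visiting order.

```python
# Sketch of ACO-ordered waypoint traversal with A* costs, under stated assumptions.
import heapq
import numpy as np

def astar(grid, start, goal):
    """4-connected A* on an occupancy grid (0 = free, 1 = obstacle); returns path length or inf."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    g = {start: 0}
    while open_set:
        _, cost, node = heapq.heappop(open_set)
        if node == goal:
            return cost
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + d[0], node[1] + d[1])
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and grid[nxt] == 0 and cost + 1 < g.get(nxt, np.inf)):
                g[nxt] = cost + 1
                heapq.heappush(open_set, (cost + 1 + h(nxt), cost + 1, nxt))
    return np.inf

def aco_order(dist, n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.5):
    """Ant colony optimization over a pairwise distance matrix; returns a visiting order."""
    n = len(dist)
    tau = np.ones((n, n))            # pheromone levels
    eta = 1.0 / (dist + 1e-9)        # heuristic desirability (shorter hop = better)
    best_order, best_len = None, np.inf
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        for _ in range(n_ants):
            order = [0]
            while len(order) < n:
                i = order[-1]
                mask = np.ones(n, bool); mask[order] = False
                w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                order.append(rng.choice(n, p=w / w.sum()))
            length = sum(dist[a][b] for a, b in zip(order, order[1:]))
            if length < best_len:
                best_order, best_len = order, length
        tau *= (1 - rho)             # pheromone evaporation
        for a, b in zip(best_order, best_order[1:]):
            tau[a][b] += 1.0 / best_len  # reinforce the best tour found so far
    return best_order

# Pairwise A* lengths between inspection waypoints feed the ACO sequencing step.
grid = np.zeros((50, 50)); grid[20:30, 10:40] = 1      # toy obstacle region
waypoints = [(5, 5), (45, 5), (45, 45), (5, 45)]       # hypothetical weld zones
dist = np.array([[astar(grid, a, b) for b in waypoints] for a in waypoints])
print(aco_order(dist))
```

In this arrangement, A* supplies obstacle-aware costs while ACO only reasons over the resulting distance matrix, mirroring the division of labor described above.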
As shown in Figure 5, the pressure vessel has a diameter of 6400 mm and a height of 22,000 mm. A two-dimensional grid map was constructed based on the upper half of the vessel to facilitate motion planning and simulation. White cells indicate accessible areas, while black cells show obstacles. Yellow dots mark weld locations, blue dashed lines outline the robot’s inspection range, and green cells represent the starting point.
Table 1 demonstrates that the A*-based global path planning method outperforms the zigzag coverage strategy on the vessel’s inner surface. It minimizes redundant traversal, thereby enhancing coverage efficiency and reducing overall inspection time.

3.1.2. Local Motion Planning for Precise Weld Tracking

After reaching the designated inspection point and obtaining a sequence of weld path coordinates from the vision system, the robot generates a collision-free and time-efficient trajectory from its current manipulator pose to the target position. To ensure stable weld tracking, trajectory planning constrains the end-effector orientation to remain within an acceptable deviation from the weld surface normal while minimizing motion-induced oscillations. Given these requirements, sampling-based planners from the Rapidly exploring Random Tree family are well-suited due to their efficiency in high-dimensional configuration spaces and proven effectiveness in robotic manipulation and navigation. In this study, four representative RRT variants—RRT, RRT*, RRT-Connect, and RRT-Informed—were evaluated in a simulated environment with obstacles of varying shapes, as illustrated in Figure 6, to compare their planning efficiency and path quality under identical start and goal conditions.
Table 2 presents a summary of the path planning experiments conducted in a simulated environment utilizing the OMPL library (version 1.5.0). Each planner was executed 50 times with identical start and goal configurations to determine the mean and standard deviation (SD) of performance metrics. The maximum number of iterations was set to 500, and the termination criterion required a Euclidean distance of less than 20 mm between the current node and the goal. Metrics, including path length, iteration count, and execution time, were recorded. RRT-Connect consistently demonstrated the highest efficiency and was therefore selected for subsequent weld inspection operations.
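A comparison in the spirit of Table 2 can be reproduced in outline with OMPL's Python bindings. In the sketch below, a per-attempt time budget stands in for the paper's 500-iteration cap and 20 mm goal tolerance, and the workspace bounds, box obstacle, and endpoints are illustrative placeholders rather than the authors' benchmark setup.

```python
# Minimal OMPL benchmarking sketch over four RRT variants, under stated assumptions.
import time
import numpy as np
from ompl import base as ob, geometric as og

space = ob.RealVectorStateSpace(3)
bounds = ob.RealVectorBounds(3)
bounds.setLow(0.0)
bounds.setHigh(1.0)
space.setBounds(bounds)
ss = og.SimpleSetup(space)

def is_state_valid(state):
    # Placeholder collision check: forbid a box-shaped obstacle in the middle.
    return not (0.4 < state[0] < 0.6 and 0.4 < state[1] < 0.6)

ss.setStateValidityChecker(ob.StateValidityCheckerFn(is_state_valid))
start, goal = ob.State(space), ob.State(space)
start[0], start[1], start[2] = 0.1, 0.1, 0.1
goal[0], goal[1], goal[2] = 0.9, 0.9, 0.9
ss.setStartAndGoalStates(start, goal)

planners = {"RRT": og.RRT, "RRT*": og.RRTstar,
            "RRT-Informed": og.InformedRRTstar, "RRT-Connect": og.RRTConnect}
for name, cls in planners.items():
    lengths, times = [], []
    for _ in range(50):                    # 50 runs per planner, as in the paper
        ss.setPlanner(cls(ss.getSpaceInformation()))
        t0 = time.perf_counter()
        if ss.solve(5.0):                  # 5 s budget per attempt (assumed)
            times.append(time.perf_counter() - t0)
            lengths.append(ss.getSolutionPath().length())
        ss.clear()                         # reset planning data between runs
    print(f"{name}: length {np.mean(lengths):.3f} ± {np.std(lengths):.3f}, "
          f"time {np.mean(times):.3f} ± {np.std(times):.3f} s")
```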
After path generation, trajectory planning is conducted for the manipulator. Because weld seams are on curved surfaces, minimizing the deviation between the end-effector trajectory normal and the weld surface normal is critical for accurate tracking. Trajectory planning uses surface normal vectors extracted from vision-based point cloud data along the weld centerline. These normals allow continuous adjustment of the end-effector pose, ensuring precise alignment with the weld path.
The raw point cloud is first denoised using a statistical outlier removal filter, which removes points whose average distance to their n nearest neighbors exceeds a Gaussian-based threshold. To further reduce wave-like artifacts from lighting or sensor noise, the Moving Least Squares (MLS) method is applied to smooth the surface while preserving key local geometric features. Surface normals are then computed with a mesh-based estimation approach: local triangular patches are generated from the k-nearest neighbors, and each point’s normal is estimated as the weighted average of its associated patches. The complete processing workflow is shown in Figure 7.
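A sketch of this conditioning pipeline using Open3D is shown below. Two substitutions are worth flagging: Open3D's covariance-based normal estimation stands in for the paper's mesh-weighted normal method, and the MLS smoothing step (available in PCL as MovingLeastSquares) is only marked as a placeholder. The file name and parameters are illustrative.

```python
# Point cloud conditioning sketch with Open3D, under the stated substitutions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("weld_region.pcd")   # hypothetical input cloud

# 1. Statistical outlier removal: drop points whose mean distance to their
#    20 nearest neighbors deviates by more than 2 sigma from the global mean.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# 2. (MLS smoothing would go here; PCL's MovingLeastSquares is the usual tool.)

# 3. Normal estimation from the k-nearest-neighbor local surface, oriented
#    consistently toward the sensor origin so tool poses point the right way.
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
pcd.orient_normals_towards_camera_location([0.0, 0.0, 0.0])

o3d.visualization.draw_geometries([pcd], point_show_normal=True)
```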
Once accurate surface normal vectors are obtained, the manipulator’s motion poses are derived from the weld seam positions. Due to the complexity of the curved surface and the requirements of pose interpolation, spherical linear interpolation (SLERP) with quaternions is used to interpolate between adjacent surface normals, generating a smooth sequence of end-effector orientations. This method ensures natural and continuous orientation transitions, effectively preventing abrupt changes and oscillations.
Assuming interpolation is performed between manipulator orientations $p$ and $q$, where the poses are represented by quaternions, the schematic diagram is shown in Figure 8.
Let $a(t)$ and $b(t)$ denote the interpolation coefficients corresponding to quaternions $p$ and $q$, respectively. The time corresponding to $p$ is $t_1$, the time corresponding to $q$ is $t_2$, and the quaternion corresponding to time $t$ is $r$. The proportional coefficient $t$ is defined as shown in Equation (1). Based on the geometric relationship between quaternions illustrated in Figure 8, the relationship in Equation (2) can be derived. The interpolated orientation $r(t)$ is then obtained through the interpolation process described in Equations (3) and (4).
$$t = \frac{t - t_1}{t_2 - t_1} \tag{1}$$
$$r(t) = a(t)\,p + b(t)\,q \tag{2}$$
$$a(t) = \frac{\sin\!\left((1 - t)\,\theta\right)}{\sin\theta}, \qquad b(t) = \frac{\sin(t\theta)}{\sin\theta} \tag{3}$$
$$r(t) = \frac{\sin\!\left((1 - t)\,\theta\right)}{\sin\theta}\,p + \frac{\sin(t\theta)}{\sin\theta}\,q \tag{4}$$
where the variables are defined as follows:
  • $\theta$ denotes the angle between $p$ and $q$;
  • $t$ denotes the time at which the interpolated orientation is $r(t)$.
Thus, the interpolated orientations between two surface normals can be computed to generate a smooth trajectory, enabling natural and continuous changes in the manipulator’s orientation while avoiding abrupt transitions or oscillations.
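The interpolation above maps directly to code. The following sketch implements Equations (1)–(4) with unit quaternions in [w, x, y, z] order (an assumed convention; any consistent one works), adding the standard sign flip and near-parallel fallback needed for numerical robustness:

```python
# Quaternion SLERP implementing Equations (1)-(4), with numerical safeguards.
import numpy as np

def slerp(p, q, t1, t2, t):
    p, q = np.asarray(p, float), np.asarray(q, float)
    s = (t - t1) / (t2 - t1)                 # Eq. (1): normalized parameter
    d = np.dot(p, q)
    if d < 0.0:                              # double cover: take the shorter arc
        q, d = -q, -d
    if d > 0.9995:                           # nearly parallel: fall back to LERP
        r = (1 - s) * p + s * q
        return r / np.linalg.norm(r)
    theta = np.arccos(d)                     # angle between p and q
    a = np.sin((1 - s) * theta) / np.sin(theta)   # Eq. (3)
    b = np.sin(s * theta) / np.sin(theta)
    return a * p + b * q                     # Eqs. (2) and (4)

# Example: interpolate halfway between two surface-normal-aligned orientations.
p = np.array([1.0, 0.0, 0.0, 0.0])                           # identity
q = np.array([np.cos(np.pi / 8), 0.0, np.sin(np.pi / 8), 0.0])  # 45° about y
print(slerp(p, q, 0.0, 1.0, 0.5))
```

Equivalent functionality is available off the shelf in scipy.spatial.transform.Slerp; the explicit form is shown here to match the equations.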

3.2. Weld Seam Path Detection and Extraction Using Digital Nameplate Information

After sequentially reaching each predefined inspection area via global path planning, the robot employs vision-based sensing to accurately locate the weld seam. However, direct weld seam recognition in low-texture environments is inherently challenging. To overcome this limitation, an auxiliary approach based on steel stamp information is introduced. In this method, the steel stamps are precisely detected, and their positions are used to indirectly infer the weld seam path. The actual steel stamp markings on the weld seams are illustrated in Figure 1.

3.2.1. OCR-Based Nameplate Detection and Recognition Using an Improved Algorithm

Steel imprints on the inner surfaces of pressure vessels contain critical semantic information. In conventional approaches, this information is extracted using OCR-based text detection and recognition algorithms. However, digital steel imprint detection remains challenging due to low contrast and small character size. Existing detection techniques can be broadly classified into three categories: traditional image processing methods, regression-based approaches such as CTPN and EAST, and segmentation-based approaches including PSENet, DBNet, and PAN [26]. This study achieves steel stamp detection and recognition through a tandem network architecture that integrates an improved DBNet-based detection module with an SVTR-based recognition module.
The DBNet framework, known for its high accuracy and real-time performance in scene text detection, is adopted and further enhanced with a Differentiable Binarization (DB) module to enable adaptive thresholding for precise segmentation of digital stamps in robotic tasks. The detection network employs ResNet-18 as the backbone to extract multi-level features through stacked convolutional and residual layers. A Feature Pyramid Network (FPN) is then used for multi-scale feature fusion, combining feature maps from 1/32 to 1/4 resolutions to integrate global semantic information with fine spatial details. The architecture of the ResNet-18 + FPN feature extraction and fusion network is illustrated in Figure 9.
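A sketch of this backbone assembled from standard torchvision components follows. The tapped channel widths match ResNet-18's stage outputs; the 256-channel FPN width is a common default, not necessarily the paper's value.

```python
# ResNet-18 + FPN feature extraction sketch, under stated assumptions.
import torch
from torchvision.models import resnet18
from torchvision.models.feature_extraction import create_feature_extractor
from torchvision.ops import FeaturePyramidNetwork

# Tap the four residual stages, whose outputs have strides 4, 8, 16, and 32.
body = create_feature_extractor(
    resnet18(weights=None),
    return_nodes={"layer1": "c2", "layer2": "c3", "layer3": "c4", "layer4": "c5"})
fpn = FeaturePyramidNetwork(in_channels_list=[64, 128, 256, 512], out_channels=256)

x = torch.randn(1, 3, 640, 640)
feats = fpn(body(x))                    # top-down fusion with lateral connections
for name, f in feats.items():
    print(name, tuple(f.shape))         # c2: 160x160 ... c5: 20x20 feature maps
```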
The fused features are subsequently fed into the DB module, where convolutional layers generate a probability map and a threshold map representing the pixel-wise likelihood of text regions and their corresponding adaptive thresholds, respectively. The differentiable binarization process is formulated as shown in Equation (5).
$$\hat{B}_{i,j} = \frac{1}{1 + e^{-k\,(P_{i,j} - T_{i,j})}} \tag{5}$$
where $P_{i,j}$ and $T_{i,j}$ denote the probability and adaptive threshold values at pixel $(i, j)$, and $k$ is an amplification factor.
The overall loss function combines three components: the probability map loss $L_p$, the binary map loss $L_b$ (both computed using weighted BCE to mitigate class imbalance), and the threshold map loss $L_t$ (the L1 distance between predicted and ground-truth thresholds), as shown in Equation (6).
$$L = L_p + \alpha L_b + \beta L_t \tag{6}$$
where α and β are weighting coefficients.
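As a concrete reference, a minimal PyTorch sketch of Equations (5) and (6) follows. The amplification factor k = 50 follows the original DBNet formulation; the unweighted BCE terms are a simplification of the weighted variants described above, and the loss weights shown are illustrative.

```python
# Differentiable binarization (Eq. 5) and composite loss (Eq. 6) in PyTorch.
import torch
import torch.nn.functional as F

def differentiable_binarization(P, T, k=50.0):
    """Eq. (5): soft binary map from probability map P and threshold map T."""
    return torch.sigmoid(k * (P - T))

def db_loss(P, B_hat, T, gt_prob, gt_thresh, thresh_mask, alpha=1.0, beta=10.0):
    """Eq. (6): L = Lp + alpha*Lb + beta*Lt. BCE terms are unweighted here for
    brevity; the paper uses weighted BCE to counter class imbalance."""
    L_p = F.binary_cross_entropy(P, gt_prob)           # probability map loss
    L_b = F.binary_cross_entropy(B_hat, gt_prob)       # binary map loss
    # L1 threshold loss, evaluated only inside the annotated border region.
    L_t = (torch.abs(T - gt_thresh) * thresh_mask).sum() / thresh_mask.sum().clamp(min=1)
    return L_p + alpha * L_b + beta * L_t
```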
Finally, the binary map is generated from the probability map using a fixed threshold; connected text regions are then extracted and their contours expanded with the Vatti clipping algorithm to obtain accurate bounding boxes of the digital stamps.
In scene text recognition, traditional architectures typically comprise a visual feature extractor and a sequence modeling module. SVTR introduces a unified visual-only framework that enables accurate text recognition without explicit sequence modeling. The network adopts a three-stage multi-scale backbone, where each stage alternates between local and global mixing blocks to capture both fine-grained character features and global contextual dependencies.
During feature extraction, the input image undergoes a progressive overlapping patch embedding process that converts it into low-dimensional feature patches. The global mixing block employs multi-head self-attention to model inter-character relationships and enhance global perception, while the local mixing block extracts stroke-level features within a sliding window to strengthen character shape representation. The outputs from each stage are downsampled and expanded through merging operations, and the final combining operation compresses the 2D feature map into a 1D sequence. A linear classifier subsequently generates the transcription sequence as the final recognition output. The overall SVTR architecture is illustrated in Figure 10.
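A heavily simplified sketch of the two mixing block types is given below. Global mixing uses multi-head self-attention over all patch tokens; for local mixing, a depthwise convolution is used here as a compact stand-in for SVTR's windowed attention. Real SVTR blocks add layer norms, MLPs, and merging stages, and all dimensions shown are illustrative.

```python
# Simplified global/local mixing blocks in the spirit of SVTR, under stated assumptions.
import torch
import torch.nn as nn

class GlobalMixing(nn.Module):
    def __init__(self, dim=192, heads=6):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
    def forward(self, tokens):                        # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)    # inter-character context
        return tokens + out

class LocalMixing(nn.Module):
    def __init__(self, dim=192, h=8, w=32):
        super().__init__()
        self.h, self.w = h, w
        # Depthwise 3x3 conv mixes each token with its spatial neighbors only.
        self.conv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
    def forward(self, tokens):                        # (B, H*W, C)
        B, N, C = tokens.shape
        x = tokens.transpose(1, 2).reshape(B, C, self.h, self.w)
        return tokens + self.conv(x).flatten(2).transpose(1, 2)

tokens = torch.randn(2, 8 * 32, 192)                  # patch embeddings
print(LocalMixing()(GlobalMixing()(tokens)).shape)    # torch.Size([2, 256, 192])
```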

3.2.2. Improved Digital Stamp Detection Algorithm

In real-world scenarios, prolonged exposure to harsh environments causes contour degradation, wear, and deformation of digital steel imprints. DBNet, which employs a differentiable binarization mechanism and a segmentation-based detection framework, provides flexible adaptation to irregular contours. However, the typically small size of steel imprints limits effective feature extraction. To overcome these challenges, the DBNet detection model is modified as follows:
1. Integration of a Convolutional Attention Module
The convolutional attention module integrates both channel and spatial attention mechanisms. Channel attention dynamically adjusts the weights of feature channels, enhancing the network’s focus on salient channel-level information in steel imprint images. Spatial attention, in turn, highlights critical spatial regions within the image.
In the proposed design, feature maps are first reweighted along the channel dimension to emphasize informative channels, followed by spatial weighting to focus on key regions. This sequential attention refinement enhances the network’s sensitivity to relevant features. The structure of the convolutional attention module is illustrated in Figure 11.
2. Reverse Multi-Scale Feature Fusion
In the original DBNet model, the Feature Pyramid Network (FPN) propagates semantically rich features through a top-down pathway with lateral connections. However, digital imprints are typically small and contain fine-grained details that are easily lost during this propagation process. To better preserve such information, a bottom-up reverse pathway is incorporated into the original FPN to enhance the extraction of low-level features.
Instead of relying on simple concatenation, features from different spatial scales are processed through multi-branch modules to capture information across varying receptive fields. Dilated convolutions are further introduced to expand the receptive field. The fused features are then adaptively combined, enabling dynamic receptive field adjustment and improving detection accuracy for small-scale targets. The network architecture of the enhanced steel imprint detection algorithm is illustrated in Figure 12.
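The two modifications above can be sketched compactly in PyTorch, under stated assumptions: the convolutional attention module follows the common CBAM pattern (channel attention, then spatial attention), and the adaptive multi-branch combination is simplified to a learned 1×1 fusion. Reduction ratio, kernel sizes, and dilation rates are illustrative defaults, not values from the paper.

```python
# Sketches of the attention module and multi-branch dilated fusion.
import torch
import torch.nn as nn

class ConvAttention(nn.Module):
    """Sequential channel-then-spatial attention refinement."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention: global avg/max pooling, shared MLP, sigmoid gate.
        gate = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True))
                             + self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * gate
        # Spatial attention: channel-wise avg/max maps, 7x7 conv, sigmoid gate.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

class DilatedFusion(nn.Module):
    """Parallel dilated 3x3 branches capture several receptive fields."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations)
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 256, 80, 80)                   # a fused FPN feature map
print(DilatedFusion(256)(ConvAttention(256)(x)).shape)
```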

3.2.3. Performance Evaluation of the Improved Digital Stamp Detection Algorithm

The robot in this study performs digital stamp detection and recognition within pressure vessel environments, for which no publicly available dataset currently exists. To overcome this limitation, a combination of a public dataset and a self-constructed steel stamp dataset was employed for model training. The ICDAR series dataset, which contains numerous natural scene text images with uneven illumination, occlusion, and blur, was selected as the public dataset due to its visual similarity to stamped text on pressure vessel surfaces. The custom dataset comprises real steel stamp images collected under normal, worn, and varying lighting conditions in simulated pressure vessel environments. In total, the constructed dataset includes 1345 training images and 169 validation images. The training process was carried out on a workstation equipped with an Intel i7-12700H processor (Intel, Santa Clara, CA, USA), an RTX 3060 GPU (NVIDIA, Santa Clara, CA, USA) (6 GB VRAM), and 16 GB of RAM.
To evaluate detection performance, the improved DBNet-based model was compared with DBNet, EAST, and TextBoxes++ on the same validation set under identical input resolution (640 × 640) and confidence threshold (0.5). Precision, recall, and detection speed (FPS) were measured on the same hardware platform, and the comparative results are presented in Table 3.
The improved algorithm achieves higher precision and recall than the other three models. Although the integration of the bottom-up multi-scale enhancement structure decreases the detection speed by 3.6 FPS compared with the original DBNet, it still maintains a processing rate of 21.6 FPS—exceeding the real-time threshold of 20 FPS. Therefore, the proposed method effectively balances accuracy and efficiency, meeting the real-time performance requirements for weld seam detection tasks.

3.2.4. The Method for Extracting the Weld Seam Parameters

After detecting and identifying the digital steel stamp, its pixel coordinates were obtained in the camera coordinate frame. To precisely determine the stamp’s position in the robot base coordinate system, an extrinsic calibration between the camera and the robotic arm’s end effector was performed using a calibration plate and the PnP algorithm, followed by numerical optimization. As shown in Figure 13, the resulting transformation matrix was validated and then applied to convert the stamp position from the camera frame to the robot base frame.
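For reference, a minimal sketch of this calibration step using OpenCV is shown below. Tsai's closed-form method stands in for the paper's unspecified numerical optimization, and all inputs (board geometry, detected corners, intrinsics, and gripper poses) are assumed to be collected during the calibration routine.

```python
# Hand-eye calibration sketch: solvePnP for board poses, calibrateHandEye for
# the fixed camera-to-gripper transform. Written as a function so inputs are
# explicit rather than invented.
import cv2
import numpy as np

def hand_eye_calibrate(board_points_3d, detections, K, dist_coeffs,
                       R_gripper2base, t_gripper2base):
    """Estimate the camera-to-gripper transform from N calibration views.

    board_points_3d: (M, 3) board corner coordinates in the board frame.
    detections: list of N (M, 2) arrays of detected corners per view.
    K, dist_coeffs: camera intrinsics and distortion coefficients.
    R_gripper2base, t_gripper2base: N rotations/translations of the gripper
    in the robot base frame, read from the arm controller at each view.
    """
    R_target2cam, t_target2cam = [], []
    for corners_2d in detections:
        # PnP recovers the board pose in the camera frame for this view.
        ok, rvec, tvec = cv2.solvePnP(board_points_3d, corners_2d, K, dist_coeffs)
        R, _ = cv2.Rodrigues(rvec)
        R_target2cam.append(R)
        t_target2cam.append(tvec)
    # Tsai's method solves AX = XB for the fixed camera-to-gripper transform.
    return cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
```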
Pressure vessels have two primary weld types: the circular main weld along the inner wall and the linear receiver weld connecting the vessel to the receiver. Each steel stamp corresponds to a pair of parameters $(\alpha, \beta)$ indicating its position on the weld. Using the steel stamp as the coordinate system origin, the weld path relative to the stamp is defined, as shown in Figure 14.
Equations (7) and (8) describe the coordinate systems of the main weld and receiver weld paths, respectively. Applying the transformation matrix, the weld path coordinates in the robot base coordinate system are obtained, as detailed in Equation (9). Detecting and localizing the steel mark thus provides the necessary basis for subsequent robotic arm weld tracking.
$$z_B^2 - 2 z_B R + y_B^2 = 0, \qquad x_B = 0 \tag{7}$$
$$z_B - R\cos\beta = \sqrt{R^2 - (y_B + R\sin\beta)^2}, \qquad x_B + r\cos\alpha = \pm\sqrt{r^2 - (y_B + r\sin\alpha)^2} \tag{8}$$
$$\begin{pmatrix} x_A \\ y_A \\ z_A \end{pmatrix} = T_B^A \begin{pmatrix} x_B \\ y_B \\ z_B \end{pmatrix} \tag{9}$$
where the variables are defined as follows:
  • $R$ denotes the radius of the pressure vessel;
  • $(x_A, y_A, z_A)$ denotes the weld path coordinates in the robotic-arm base coordinate system;
  • $(x_B, y_B, z_B)$ denotes the weld path coordinates in the steel stamp coordinate system;
  • $r$ is the radius of the receiver;
  • $(\alpha, \beta)$ denote the weld position parameters of the steel stamp;
  • $T_B^A$ denotes the transformation matrix from the steel stamp coordinate system to the robotic-arm base coordinate system.
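To make the reconstruction concrete, the following sketch samples the circular main weld of Equation (7) in the stamp frame and maps it into the base frame via Equation (9). The vessel radius, arc length, and transform are placeholder values; in operation, $T_B^A$ comes from the stamp localization step.

```python
# Main-weld path generation (Eq. 7) and base-frame transform (Eq. 9).
import numpy as np

def main_weld_path(R, n_points=100, arc=np.pi / 6):
    """Sample the circular main weld in the stamp frame B per Eq. (7)."""
    phi = np.linspace(-arc / 2, arc / 2, n_points)
    x = np.zeros_like(phi)
    y = R * np.sin(phi)
    z = R * (1.0 - np.cos(phi))      # satisfies z^2 - 2 z R + y^2 = 0
    return np.stack([x, y, z], axis=1)

def to_base_frame(points_B, T_BA):
    """Eq. (9): map stamp-frame points into the robot base frame A, using a
    homogeneous 4x4 transform."""
    homog = np.hstack([points_B, np.ones((len(points_B), 1))])
    return (T_BA @ homog.T).T[:, :3]

R_vessel = 3.2                        # 6400 mm diameter vessel, in meters
T_BA = np.eye(4)                      # placeholder: from stamp localization
path_A = to_base_frame(main_weld_path(R_vessel), T_BA)
print(path_A.shape)                   # (100, 3) weld waypoints in the base frame
```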

4. Experiment and Results

4.1. System Integration and Experimental Platform Construction

During practical pressure vessel inspections, the robot must traverse long distances, which requires extended cabling to maintain a stable power supply and reliable communication. To replicate these operational conditions, the experimental platform was equipped with a 30 m drag cable and a remote monitoring system, ensuring consistency with real-world applications. Weld seam nameplates were affixed to the vessel wall according to the positions of the steel stamps and the technical drawings provided by the industrial partner. For safety, a 2000 kg-rated safety rope was used to secure the robot and prevent accidental falls or damage. The experimental setup is shown in Figure 15.

4.2. Weld Seam Stamp Detection and Positioning Experiment

This experiment was conducted to validate the weld seam detection and localization algorithm under real-world conditions, focusing on absolute error and success rate. The robot sequentially navigated to each designated weld inspection area and paused for measurement. The binocular camera captured images and generated the 3D coordinates of the steel mark relative to the robot’s center, which were then compared with manually measured ground-truth values. After each inspection, the robot moved to the next area, repeating the process. Positioning errors were recorded across multiple trials, and successful localization was defined as an absolute error below 2 mm along the x, y, and z axes. To replicate operational variability, tests were conducted under different lighting conditions (from dim to bright), with multi-angle acquisition within ±45°, and at various detection distances. The steel mark recognition results on the curved weld surface are shown in Figure 16.
A total of 167 three-dimensional weld steel mark positions were measured during the detection and localization experiments. Among them, 160 were successfully localized, resulting in a success rate of 95.8%. Of the seven failures, six exhibited absolute errors between 2 mm and 5 mm. Table 4 summarizes the quantitative evaluation of the weld steel mark localization accuracy. Errors are reported as the mean ± standard deviation, providing both the average deviation and its variability across trials.
The results demonstrate that the proposed algorithm achieves an average absolute error of 1.13 mm, with most errors below 2 mm, thereby meeting the technical requirement of positional deviation within 2 mm under low-feature conditions. This level of performance provides a solid foundation for subsequent robotic arm weld seam trajectory planning.

4.3. Weld Tracking Experiment

To ensure accurate robotic arm tracking of weld seams during actual wall operations, tracking performance was evaluated by integrating extracted weld seam normal vector information into the trajectory planning process. Both tracking error and normal deviation angle were recorded as performance metrics. The experimental tracking results are presented in Figure 17.
The evaluation metrics are the three-dimensional coordinate error and the normal vector deviation angle of the robotic arm tracking the weld seam, calculated using Equations (10) and (11):
$$\varepsilon = \sqrt{(x - x_c)^2 + (y - y_c)^2 + (z - z_c)^2} \tag{10}$$
$$\theta = \arccos\frac{\mathbf{n} \cdot \mathbf{n}_c}{\|\mathbf{n}\|\,\|\mathbf{n}_c\|} \tag{11}$$
where the variables are defined as follows:
  • $\varepsilon$ represents the Euclidean distance error of the robotic arm tracking the weld;
  • $(x, y, z)$ and $(x_c, y_c, z_c)$ represent the recorded end-effector coordinates and the reference weld centerline coordinates, respectively;
  • $\mathbf{n}$ denotes the measured weld surface normal vector;
  • $\mathbf{n}_c$ denotes the reference normal vector captured during localization;
  • $\theta$ denotes the deviation angle between $\mathbf{n}$ and $\mathbf{n}_c$.
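A minimal implementation of these two metrics (positions in meters, normals as 3-vectors) is:

```python
# Equations (10) and (11) as code: Euclidean tracking error and normal
# deviation angle for each sampled point along the weld.
import numpy as np

def tracking_error(p, p_ref):
    """Eq. (10): Euclidean distance between tracked and reference positions."""
    return np.linalg.norm(np.asarray(p) - np.asarray(p_ref))

def normal_deviation(n, n_ref):
    """Eq. (11): angle (degrees) between measured and reference normals."""
    c = np.dot(n, n_ref) / (np.linalg.norm(n) * np.linalg.norm(n_ref))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))  # clip guards rounding

print(tracking_error([0.0, 0.0, 1.65e-3], [0.0, 0.0, 0.0]))          # 0.00165 m
print(normal_deviation([0.0, 0.0, 1.0],
                       [0.0, np.sin(0.037), np.cos(0.037)]))          # ~2.12°
```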
The tracking experiments were repeated twenty times, with 100 uniformly sampled data points collected in each trial to evaluate performance. The tracking error was measured using a laser triangulation sensor, which quantified the positional deviation between the end-effector trajectory and the weld centerline. The orientation deviation was obtained through a vision-based method by comparing the real-time surface normal of the weld region with the reference normal captured during localization. Figure 18 illustrates the tracking performance from a representative experiment.
Analysis of multiple experimental datasets indicates that the proposed method achieves an average positional error of 1.65 ± 0.52 mm for weld seam tracking, with most errors falling within 3 mm. The average deviation angle between the end-effector normal vector and the weld centerline normal vector is 2.12° ± 0.45°, with a maximum deviation of less than 3.5°. This level of tracking accuracy meets the design specifications for pressure vessel inspection robots and satisfies the precision requirements for subsequent ultrasonic testing.

4.4. Weld Inspection Integral Traversal Experiment

The overall flow of the robot inspection experiment is shown in Figure 19.
The overall weld inspection test for the pressure vessel inspection robot followed the procedures described above. Key operational data, including weld length, total time, weld detection coverage, and tracking speed, were recorded. The experimental platform, designed according to the robot’s workspace, was divided into two detection zones to ensure full weld coverage. The traversal process is shown in Figure 20.
The total inspected weld length was 225 mm, with the robot operating for 128.4 s. The weld detection coverage was 98.2%, and the robotic arm tracked the weld at 96 mm/s. The main observations are summarized as follows:
  • Chassis movement accounts for the largest portion of operation time. Robotic arm trajectory positioning, returning, and tracking consume similar durations, while detection and positioning require the least time. Increasing chassis speed could further improve efficiency.
  • Detection leakage primarily occurs at the two ends of the weld, influenced by edge determination and positioning errors. This issue can be effectively mitigated in a complete weld structure.
  • The robotic arm’s tracking speed of 96 mm/s complies with the ultrasonic flaw detection standard, which requires speeds below 150 mm/s.

5. Conclusions

This study addresses the challenges of low weld seam detection accuracy and failures in low-feature environments encountered by large wall-climbing robots during pressure vessel inspections. It highlights key technologies, including weld seam detection and positioning methods for low-feature scenarios, high-precision robotic arm weld tracking motion planning, and global weld traversal path planning. The integration of robot hardware and software systems was implemented, and an experimental platform was developed to validate the proposed solutions.
Experiments on steel mark detection and localization, robotic arm weld tracking, and overall inspection were conducted. Results indicate a steel mark detection accuracy of 94.7% and a positioning success rate of 95.8%. Robotic arm tracking achieved an average positional error of 1.65 mm and a normal deviation angle of 2.12°. Overall weld detection coverage reached 98.2%, with a tracking speed of 96 mm/s.
Compared with industry standards, the system’s tracking error of 1.65 mm and coverage rate of 98.2% meet the requirements for ultrasonic flaw detection and on-site inspection (≤2 mm, ≥95%). Relative to relevant domestic and international studies [4,12,27], this approach improves steel mark detection accuracy and localization success by 5–8 percentage points and reduces robotic arm tracking error by approximately 30%. Overall, the work demonstrates high accuracy, efficiency, and coverage in weld seam inspection under low-feature conditions in large pressure vessels, establishing a solid foundation for automated online inspection.

Author Contributions

Conceptualization, M.Z.; Funding acquisition, M.Z.; Investigation, Z.M., R.L. and Y.L.; Methodology, M.P., Z.M. and R.L.; Resources, Z.M. and R.L.; Software, M.P.; Supervision, Y.L. and M.Z.; Validation, Z.M. and R.L.; Writing—original draft, M.P.; Writing—review and editing, M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (Grant No. 2024YFB4709400), and partially supported by the Key R&D Program of Shandong Province, China (Grant No. 2023SFGC0101 and 2023CXGC010203).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Authors Zhengxiong Mao and Ruifei Lyu were employed by the State Nuclear Power Demonstration Plant Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Ma, Y.; He, S.; Song, D.; He, X.; Chen, T.; Shen, F.; Baig, O.; Luo, S.; Wang, J. Crack propagation characteristics and fatigue life of the large austenitic stainless steel hydrogen storage pressure vessel. Int. J. Press. Vessel. Pip. 2024, 210, 105226.
  2. Louis, H.K.; Ateya, A.A.E.; Amin, E. Evaluation of neutron radiation damage in the VVER-1200 reactor pressure vessel. Radiat. Phys. Chem. 2024, 221, 111738.
  3. Shi, Q.; Han, S.; Li, J.; Li, Y.; Li, J.; Yang, X. Research on the Application and Development of Intelligent Inspection System. In Proceedings of the 2020 IEEE International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 6–8 November 2020.
  4. Fang, G.; Cheng, J. Advances in Climbing Robots for Vertical Structures in the Past Decade: A Review. Biomimetics 2023, 8, 47.
  5. Kim, J.-H.; Lee, J.-C.; Choi, Y.-R. LAROB: Laser-Guided Underwater Mobile Robot for Reactor Vessel Inspection. IEEE/ASME Trans. Mechatron. 2014, 19, 1216–1225.
  6. Fujita, M.; Ikeda, S.; Fujimoto, T.; Shimizu, T.; Ikemoto, S.; Miyamoto, T. Development of universal vacuum gripper for wall-climbing robot. Adv. Robot. 2018, 32, 283–296.
  7. Šelek, A.; Seder, M.; Brezak, M.; Petrović, I. Smooth Complete Coverage Trajectory Planning Algorithm for a Nonholonomic Robot. Sensors 2022, 22, 9269.
  8. Govindaraju, M.; Fontanelli, D.; Kumar, S.S.; Pillai, A.S. Optimized Offline-Coverage Path Planning Algorithm for Multi-Robot for Weeding in Paddy Fields. IEEE Access 2023, 11, 109868–109884.
  9. Fareh, R.; Baziyad, M.; Rabie, T.; Bettayeb, M. Enhancing Path Quality of Real-Time Path Planning Algorithms for Mobile Robots: A Sequential Linear Paths Approach. IEEE Access 2020, 8, 167090–167104.
  10. Zhang, Q.; Zhao, J.; Pan, L.; Wu, X.; Hou, Y.; Qi, X. Optimal Path Planning for Mobile Robots in Complex Environments Based on the Gray Wolf Algorithm and Self-Powered Sensors. IEEE Sens. J. 2023, 23, 20756–20765.
  11. Miao, R.; Jiang, Z.; Zhou, Q.; Wu, Y.; Gao, Y.; Zhang, J.; Jiang, Z. Online inspection of narrow overlap weld quality using two-stage convolution neural network image recognition. Mach. Vis. Appl. 2021, 32, 27.
  12. Li, J.; Li, B.; Dong, L.; Wang, X.; Tian, M. Weld Seam Identification and Tracking of Inspection Robot Based on Deep Learning Network. Drones 2022, 6, 216.
  13. Wu, Y.; Li, Q. The Algorithm of Watershed Color Image Segmentation Based on Morphological Gradient. Sensors 2022, 22, 8202.
  14. Dhruva, K.D.; Fang, C.; Zheng, Y.; Gao, Y. Semi-supervised transfer learning-based automatic weld defect detection and visual inspection. Eng. Struct. 2023, 292, 116580.
  15. He, W.; Zhang, A.; Wang, P. Weld Cross-Section Profile Fitting and Geometric Dimension Measurement Method Based on Machine Vision. Appl. Sci. 2023, 13, 4455.
  16. Du, Y.; Liu, M.; Wang, J.; Liu, X.; Wang, K.; Liu, Z.; Dong, Q.; Yao, J.; Lu, D.; Su, Y. A wall climbing robot based on machine vision for automatic welding seam inspection. Ocean Eng. 2024, 310, 118825.
  17. Chen, R.; Hu, P.; Gui, X.; Hua, L. An on-line weld inspection method for underwater offshore structure based on an improved deep convolutional network. Nondestruct. Test. Eval. 2025, 40, 289–308.
  18. Dometios, A.C.; Papageorgiou, X.S.; Arvanitakis, A.; Tzafestas, C.S.; Maragos, P. Real-time end-effector motion behavior planning approach using on-line point-cloud data towards a user adaptive assistive bath robot. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017.
  19. Papageorgiou, X.S.; Dometios, A.C.; Tzafestas, C.S. Towards a User Adaptive Assistive Robot: Learning from Demonstration Using Navigation Functions. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021.
  20. Sun, Q.; Chen, D.; Wang, S.; Liu, S. Recognition Method for Handwritten Steel Billet Identification Number Based on Yolo Deep Convolutional Neural Network. In Proceedings of the 2020 Chinese Control and Decision Conference (CCDC), Hefei, China, 22–24 August 2020.
  21. Zhang, Z.; Yang, G.; Wang, C.; Chang, G. Recognition of Casting Embossed Convex and Concave Characters Based on YOLO v5 for Different Distribution Conditions. In Proceedings of the 2021 International Wireless Communications and Mobile Computing (IWCMC), Harbin, China, 28 June–2 July 2021.
  22. Zhou, X.; Yao, C.; Wen, H.; Wang, Y.; Zhou, S.; He, W.; Liang, J. EAST: An Efficient and Accurate Scene Text Detector. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
  23. Shi, B.; Bai, X.; Yao, C. An End-to-End Trainable Neural Network for Image-Based Sequence Recognition and Its Application to Scene Text Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2298–2304.
  24. Chiang, Y.-Y. Harvesting Geographic Features from Heterogeneous Raster Maps. Ph.D. Thesis, University of Southern California, Los Angeles, CA, USA, December 2010.
  25. He, Z.; Liu, C.; Chu, X.; Negenborn, R.R.; Wu, Q. Dynamic anti-collision A-star algorithm for multi-ship encounter situations. Appl. Ocean Res. 2022, 118, 102995.
  26. Huang, B.; Bai, A.; Wu, Y.; Yang, C.; Sun, H. DB-EAC and LSTR: DBnet based seal text detection and Lightweight Seal Text Recognition. PLoS ONE 2024, 19, e0301862.
  27. Zhao, M.; Liu, X.; Wang, K.; Liu, Z.; Dong, Q.; Wang, P.; Su, Y. Welding Seam Tracking and Inspection Robot Based on Improved YOLOv8s-Seg Model. Sensors 2024, 24, 4690.
Figure 1. Schematic of pressure vessel weld distribution: (a) Overall layout of pressure vessel welds; (b) Stencil markings in low-feature weld environments; (c) Distribution of numerical steel marks.
Figure 2. Schematic of the inspection robot structure.
Figure 3. Workflow of the robotic system for pressure vessel inspection.
Figure 4. Schematic of the hierarchical weld traversal planning: (a) Global path planning schematic; (b) Local motion planning schematic. In (a), the red line segments represent the actual weld distribution, while the blue line segments represent the planned global movement path of the robot.
Figure 5. Comparison of path planning results: (a) Path planning result using the A* algorithm; (b) Path planning result using a zigzag (boustrophedon) coverage algorithm.
Figure 6. Comparison of RRT algorithm variants: (a) RRT-Informed algorithm; (b) RRT* algorithm; (c) RRT algorithm; (d) RRT-Connect algorithm. The red line represents the optimal path selected from the set of candidate paths shown in green. Blue points denote the waypoints along the path, while black objects indicate obstacles. The letters 'S' and 'T' denote the start and target points, respectively. The cyan dashed ellipse in (a) represents the informative sampling domain constrained by the current best path cost.
Figure 7. Surface point cloud normal estimation process: (a) Original surface image; (b) Point cloud after preprocessing; (c) Point cloud with wavy artifacts (visualized in orange); (d) Smoothed point cloud (visualized in blue); (e) Estimated normal vectors of the point cloud, where the colors represent the orientation of the surface normals mapped to the RGB color space.
Figure 8. Quaternion spherical interpolation.
Figure 9. ResNet-18 + FPN feature extraction and fusion.
Figure 10. SVTR network architecture.
Figure 11. Schematic diagram of the convolutional attention module.
Figure 12. Improved detection network structure.
Figure 13. Coordinate relationship transformation diagram: (a) Hand–eye calibration coordinate transformation; (b) Target positioning coordinate transformation.
Figure 14. Weld path in the stamped mark frame.
Figure 15. Overall experimental environment of the inspection robot.
Figure 16. Multi-view and multi-lighting weld digital stamp detection results: (a–c) Bright conditions; (d–f) Moderate conditions; (g–i) Dark conditions. The red numbers indicate the detected character labels.
Figure 17. Experimental results of robotic arm weld tracking: (a) Camera extracting normal vector information; (b) Dynamic adjustment of end position; (c) Path tracking from the start point to the end point under the field of view.
Figure 18. Tracking experimental results: (a) Tracking Euclidean distance error; (b) Trajectory normal vector deviation.
Figure 19. Iteration detection process.
Figure 20. Traversal process: (a) Departure to the first position to be inspected; (b) The robot arm tracks the weld seam; (c) Moving to the next inspection position; (d) The camera recognizes the coordinates of the inspection stamp; (e) Guide the robot arm to track the weld seam; (f) Return to the starting point.
Table 1. Comparison of Path Planning Result Parameters.

Planning Method              Path Length (m)   Overlapping Distance (m)   Maximum Steering Angle (°)
Zigzag Coverage Algorithm    41                5                          90
A* Algorithm                 32                2                          60
Table 2. Comparison of Path Planning Algorithm Performance.

Algorithm      Path Length (m)   Number of Iterations   Execution Time (s)
RRT            31.045 ± 1.423    186 ± 22               0.786 ± 0.112
RRT*           23.793 ± 1.151    199 ± 18               1.599 ± 0.159
RRT-Informed   23.538 ± 0.947    169 ± 15               0.988 ± 0.083
RRT-Connect    23.634 ± 0.915    194 ± 17               0.207 ± 0.041
Table 3. Comparison of the detection performance of different algorithms.

Detection Model      Precision (%)   Recall (%)   Detection Speed (FPS)
TextBoxes++          87.7            85.8         5.7
DBNet                88.6            84.9         25.2
EAST                 90.5            88.1         10.1
Improved Algorithm   94.7            90.2         21.6
Table 4. Quantitative Evaluation of Weld Steel Mark Localization Accuracy.

Number of Total Samples   Number of Valid Localizations   Error (mm)    95% Confidence Interval (mm)
167                       160                             1.13 ± 0.43   [1.06, 1.20]