Review

A Review of Spatial Positioning Methods Applied to Magnetic Climbing Robots

Haolei Ru, Meiping Sheng, Jiahui Qi, Zhanghao Li, Lei Cheng, Jiahao Zhang, Jiangjian Xiao, Fei Gao, Baolei Wang and Qingwei Jia

1 School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
2 Ningbo Institute of Materials Technology and Engineering, CAS, Ningbo 315201, China
3 Ningbo Institute of Northwestern Polytechnical University, Ningbo 315103, China
4 Ningbo Weierskeler Intelligent Technology Limited Company, Ningbo 315502, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(15), 3069; https://doi.org/10.3390/electronics14153069
Submission received: 19 April 2025 / Revised: 30 June 2025 / Accepted: 18 July 2025 / Published: 31 July 2025
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)

Abstract

Magnetic climbing robots hold significant value for operations in complex industrial environments, particularly for the inspection and maintenance of large-scale metal structures. High-precision spatial positioning is the foundation for enabling autonomous and intelligent operations in such environments. However, the existing literature lacks a systematic and comprehensive review of spatial positioning techniques tailored to magnetic climbing robots. This paper addresses this gap by categorizing and evaluating current spatial positioning approaches. Initially, single-sensor-based methods are analyzed with a focus on external sensor approaches. Then, multi-sensor fusion methods are explored to overcome the shortcomings of single-sensor-based approaches. Multi-sensor fusion methods include simultaneous localization and mapping (SLAM), integrated positioning systems, and multi-robot cooperative positioning. To address non-uniform noise and environmental interference, both analytical and learning-based reinforcement approaches are reviewed. Common analytical methods include Kalman-type filtering, particle filtering, and correlation filtering, while typical learning-based approaches involve deep reinforcement learning (DRL) and neural networks (NNs). Finally, challenges and future development trends are discussed. Multi-sensor fusion and lightweight design are the future trends in the advancement of spatial positioning technologies for magnetic climbing robots.

1. Introduction

With the continuous advancement of modern industry, the demand for autonomous robot operations in complex environments is increasing sharply, especially for the inspection and maintenance of metal structures. In typical metal-structure environments, such as offshore platforms, ships, storage tanks, and pipelines, traditional manual operations are no longer practical due to their high safety risks and low efficiency [1,2,3,4]. Magnetic climbing robots, characterized by strong adhesion capabilities and flexible locomotion mechanisms, offer clear advantages in such environments. These robots require precise spatial positioning to follow predefined trajectories and complete tasks effectively; accurate positioning is also the foundation for path planning, defect detection, and autonomous decision making. Spatial positioning has therefore become one of the core research topics in the development of magnetic climbing robots. However, a systematic and comprehensive review of spatial positioning methods tailored to magnetic climbing robots is still lacking. This paper aims to fill this gap by systematically analyzing the mainstream spatial positioning methods relevant to such robots.
Compared to other mobile robots, magnetic climbing robots can maintain stable adhesion and movement on vertical walls, curved surfaces, and even inverted positions. They are therefore deployed in various hazardous and complex environments, reducing labor demands, material costs, and operational risks [5,6]. For the intelligent and autonomous evolution of magnetic climbing robots, the core technologies encompass magnetic adhesion mechanisms, motion control, environmental perception, path planning, and spatial positioning.
In practical applications, autonomous navigation on large-scale metal surfaces is necessary for magnetic climbing robots. Because working environments are complex, characterized by curved surfaces, multiple obstacles, and strong magnetic interference, traditional positioning methods are usually unsuitable. The development of robust, high-precision spatial positioning methods is therefore essential for more intelligent magnetic climbing robots. Accurate localization supports autonomous navigation and path planning, ensuring full coverage of the target operational area. During tasks such as defect detection, welding, or coating, precise spatial alignment with the actual structural geometry is required. Reliable positioning ensures task repeatability and data accuracy, which are crucial for consistent performance [7].
The primary spatial positioning methods include vision-based positioning, inertial navigation, magnetic field sensing, LiDAR, and multi-sensor fusion techniques. Figure 1 illustrates various complex operational scenarios, such as vertical ship hulls, cylindrical storage tanks, and uneven metallic surfaces, highlighting the demand for advanced spatial positioning systems capable of maintaining high precision in dynamic environments. Vision-based positioning relies on a camera to capture environmental images; feature matching and depth estimation are then performed to calculate the target location from the image information [8]. In inertial navigation, an inertial measurement unit (IMU) provides measurements that are used to estimate the position and attitude of the robot. Although inertial navigation adapts well to diverse environments, it suffers from accumulated drift errors [9]. In magnetic field sensing, subtle changes in the magnetic field distribution are captured by a magnetic sensor to calculate the robot's position on the metal surface. Although magnetic sensors work in low-light environments, their positioning precision is limited by magnetic field instability. LiDAR is another common choice, but its positioning accuracy degrades greatly in enclosed or highly reflective environments [10,11]. In recent years, the fusion of vision, IMU, and magnetic sensing has been proposed to improve overall positioning performance, with filter algorithms such as Kalman-type filtering, particle filtering, and correlation filtering used to enhance positioning accuracy. However, the optimal fusion strategy for multi-sensor data remains a significant research challenge [12,13,14].
Building on multi-sensor data fusion, the rise of deep learning has introduced new possibilities for the spatial positioning of magnetic climbing robots. Convolutional neural networks (CNNs) combined with long short-term memory (LSTM) networks have been employed to classify and fuse visual and IMU data [15,16]. The reported experimental results demonstrate that range errors have a markedly reduced impact on ultra-wideband (UWB) positioning accuracy and that the robustness of vision-based positioning is enhanced. Deep reinforcement learning (DRL)-based autonomous navigation methods [17,18] have been proposed to enhance the self-adaptiveness of robots in different operation environments, improving navigation accuracy and stability. Additionally, a temporal convolutional network (TCN)-based positioning model has been proposed to process complex magnetic field data [19], yielding notable improvements in positioning accuracy under magnetically unstable conditions. These deep learning techniques, particularly CNNs, DRL, and TCNs, show considerable potential for enhancing spatial positioning performance in dynamic and unstructured environments.
Various improvements are still needed for the further development of spatial positioning for magnetic climbing robots. Integrated with deep learning, multi-sensor fusion will be an effective way to enhance the positioning precision and system robustness of magnetic climbing robots in complex and hazardous operation environments. Because the computational resources of magnetic climbing robots are limited, lightweight deep learning models and optimized computation strategies are key to realizing high-precision, real-time spatial positioning. Positioning errors must also be addressed to improve the adaptability of robots in harsh conditions such as high temperature, high humidity, and strong magnetic interference. Moreover, integrating artificial intelligence with path planning will enable robots to adjust inspection trajectories autonomously and enhance operational efficiency. In contrast to prior reviews that primarily focus on mobile or industrial robots, this work provides a task-specific overview of spatial positioning technologies for magnetic climbing robots. By classifying current positioning methods from both sensor and algorithm perspectives and identifying practical deployment challenges, this review provides a comprehensive and structured survey of intelligent positioning systems in the domain of climbing robotics.

2. Single-Sensor-Based Spatial Positioning Methods

Currently, spatial positioning methods based on a single sensor are categorized into external sensor-based spatial positioning and onboard sensor-based spatial positioning.

2.1. External Sensor-Based Spatial Positioning

In external sensor-based spatial positioning, sensors installed outside the robot are used to obtain its position. As shown in Figure 2, external sensor-based positioning methods [20] include laser trackers, ultrasonic beacon arrays, and calibrated external cameras. These configurations typically provide high positioning accuracy within controlled environments. However, their dependence on costly infrastructure and their limited adaptability restrict their use in dynamic or cluttered field scenarios.
An ultrasonic beacon-based spatial positioning method was first proposed by Enjikalayil et al. [21] for magnetic climbing robots operating on ship hulls. Radio waves are used to establish the communication network, and the precise position is obtained to track the robot within the valid signal transmission range. However, due to the limited transmission range, this kind of system is only suitable for small to medium-sized ship hulls. A vertical-tank positioning method based on a laser tracker was established by Wang et al. [22] for magnetic climbing robots. Because a high-precision, real-time spatial position is obtained, the positioning error is less than 0.04% according to that paper. However, the requirement for high-cost equipment and a strictly controlled operation environment inhibits the wider adoption of laser tracker-based methods.
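For intuition, beacon-range positioning of this kind reduces to the classical trilateration problem. The following minimal sketch (not drawn from any of the cited systems; the beacon layout and noise level are hypothetical) linearizes the range equations and solves them by least squares:

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Estimate a 2D position from >= 3 beacon ranges via linear least squares.

    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a linear system A x = b.
    """
    b0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - b0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical beacon layout on a hull section and noisy range readings.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [10.0, 8.0]])
true_pos = np.array([4.0, 3.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1) + np.random.normal(0, 0.02, 4)
print(trilaterate(beacons, ranges))  # ~ [4.0, 3.0]
```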
Instead of a laser tracker, a low-cost RGB-D camera was applied to the positioning of climbing robots by Zhang et al. [23]. Nevertheless, the visible scope of a fixed camera is limited; if the camera is mounted on a gimbal, the visible scope can be effectively expanded, enhancing the positioning performance of the autonomous positioning method. A global positioning method based on external reference points was established by Gu et al. [24] for climbing robots, in which odometry derived from displacement sensor data was used to enhance the positioning accuracy. However, the accuracy of this global positioning method depends heavily on the layout of the reference points.
In summary, external sensor-based spatial positioning can guarantee positioning precision in specific, well-prepared operation environments. When those preconditions are not met, for example when sensor performance is limited or reference points are poorly placed, the positioning precision degrades heavily. Additionally, high-cost positioning devices and poor adaptability make external sensor-based spatial positioning less suitable for magnetic climbing robots operating in highly dynamic and obstacle-filled environments.

2.2. Onboard Sensor-Based Spatial Positioning

The commonly used onboard positioning sensors include 2D and 3D optical sensors. Figure 3 compares two major categories of onboard sensor-based spatial positioning. Subfigure (a) presents a 2D image-based method employing ArUco markers in conjunction with a fisheye camera, which is particularly suitable for structured environments where reference markers are clearly visible. Subfigure (b) illustrates a 3D vision-based method utilizing stereo or depth cameras to build spatial awareness from point clouds, offering superior adaptability and precision in complex and unstructured environments.
An A-IEF positioning method was proposed by Zhang et al. [25] to achieve high-precision robot localization by observing ArUco markers. A 2D fisheye camera, which works well in low-light environments, serves as the sensor for the A-IEF method; its principle is given in Figure 3a. However, since 2D image recognition may suffer from precision instability arising from data fluctuations during marker detection (e.g., ArUco recognition errors), the A-IEF positioning method is less suitable for real-world climbing robot operations.
Cameras or LiDARs are used to capture 3D point-cloud data in 3D optical sensor-based spatial positioning methods, and the precise spatial position is then calculated by processing the acquired point clouds [26]. Zhong et al. [27] proposed a 3D positioning method based on binocular vision for climbing robots; the calculated spatial position was used to guide the robots in rust-removal and painting operations. A global positioning method was developed by Wang et al. [28] based on LiDAR odometry: traditional odometry was replaced by PL-ICP LiDAR odometry, and an adaptive Monte Carlo localization (AMCL) algorithm was proposed to enhance the positioning precision. However, significant computational resources are required to calculate a high-precision spatial position with this global positioning method.
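To make the scan-matching step behind LiDAR odometry more concrete, the sketch below implements plain point-to-point ICP, a simpler relative of the PL-ICP (point-to-line) variant used by Wang et al. [28]; the point sets and iteration count are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iters=20):
    """Align a source scan to a target scan with point-to-point ICP (2D).

    Each iteration matches nearest neighbours, then solves the optimal
    rigid transform in closed form via SVD (Kabsch method).
    """
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        _, idx = tree.query(src)            # nearest-neighbour correspondences
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)   # cross-covariance of matched pairs
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                 # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total                 # pose increment between scans
```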
In summary, compared with 2D image-based positioning methods, 3D optical sensor-based positioning methods are more applicable to real-world applications. Likewise, onboard sensor-based spatial positioning is more adaptive and flexible in complex operation environments than external sensor-based positioning. However, optical sensors, especially vision sensors, are easily affected by lighting conditions and surface obstacles, and processing point-cloud data carries a high computational cost, which poses challenges for lightweight, resource-constrained climbing robots.

3. Multi-Sensor Fusion-Based Spatial Positioning Methods

In recent years, multi-sensor fusion technology has been increasingly applied to spatial positioning for magnetic climbing robots, particularly in complex application scenarios such as autonomous navigation and path planning. Because single sensors have inherent performance limitations, such as the accumulated drift of inertial measurement units (IMUs), the sensitivity of visual SLAM to lighting conditions and texture variations, and the environmental reflection sensitivity of LiDAR, multi-sensor fusion-based positioning is essential for improving positioning accuracy and system robustness, as shown in Figure 4. This section summarizes the widely used multi-sensor fusion-based spatial positioning methods, including multi-sensor fusion-based SLAM [29], multi-sensor integrated positioning systems, and multi-robot cooperative positioning.

3.1. Multi-Sensor Fusion-Based SLAMs

Simultaneous localization and mapping (SLAM) aims to achieve precise localization and map-building in unknown environments. However, due to inherent performance limitations, single-sensor SLAM often suffers from poor stability. As a consequence, multi-sensor fusion-based SLAM has become the primary research focus: multi-sensor coordination brings robustness, accuracy, and stability enhancements to SLAM systems [30].
A tightly coupled LiDAR–IMU fusion was proposed by Zhou et al. [31] to enable spatial positioning and map-building for quadruped robots, addressing the SLAM failure problem arising from unstable motion during operation. The VI-SLAM of Liu et al. [32] is based on mean filtering: the IMU data are preprocessed by mean filtering to eliminate random noise and angular errors, and the filtered IMU data, integrated with visual information, are fed into SLAM to enhance robustness and positioning accuracy. Figure 5 presents the VI-SLAM framework, which integrates visual and inertial data streams; by fusing motion and image cues, VI-SLAM improves positioning precision in texture-rich environments while mitigating the drift issues often found in IMU-only or vision-only systems. An IMU- and binocular vision-based SLAM was proposed by Wang et al. [33] for indoor localization, in which the binocular vision and IMU were initialized separately and a nonlinear optimization method was used to jointly optimize the visual and inertial constraints, yielding fused IMU and binocular vision estimates.
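As a rough illustration of the mean-filtering preprocessing described for VI-SLAM [32], the following sketch smooths raw IMU channels with a sliding-window average before fusion; the window length and data are assumptions, not values from the cited work:

```python
import numpy as np

def mean_filter(imu_samples, window=5):
    """Sliding-window mean filter for raw IMU channels (N x 6 array:
    three gyro axes + three accelerometer axes). Suppresses zero-mean
    random noise before visual-inertial fusion."""
    kernel = np.ones(window) / window
    # Filter each channel independently; mode='same' keeps the sample count.
    return np.column_stack([
        np.convolve(imu_samples[:, c], kernel, mode="same")
        for c in range(imu_samples.shape[1])
    ])

# Illustrative: a noisy constant-rate gyro/accel stream around 0.1.
raw = 0.1 + np.random.normal(0, 0.02, (200, 6))
smoothed = mean_filter(raw, window=9)
```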

3.2. Multi-Sensor Integrated Positioning Systems

Significant progress has been made with IMU-, camera-, and LiDAR-integrated multi-sensor fusion methods. As shown in Figure 6, Zhang et al. [34] proposed a factor graph-based multi-sensor fusion positioning method to improve the positioning accuracy and robustness of indoor mobile robots: sensor-factor node models were constructed for the IMU, odometry, and LiDAR, and factor graph theory was used to complete the data fusion. A multi-sensor fusion approach was developed by Zhai et al. [35] for the localization and mapping of two-wheeled inverted pendulum (TWIP) robots on approximately flat surfaces; the LiDAR, IMU, and odometry measurements were coupled through a factor graph, and ground-constraint and nonholonomic-constraint factors were introduced to enhance localization accuracy. This approach provides a novel solution for wall-climbing two-wheeled robots. IMU/LiDAR/camera automatic calibration methods were proposed by Liu et al. [36] and Hou et al. [37] to avoid manual annotation, and an online temporal calibration method was further established by Liu et al. [38] to complete the multi-sensor fusion: time-offset issues among the sensors were addressed, and a motion constraint model enhanced the calibration robustness. The motion accuracy of the robotic systems was markedly improved through these calibration-based multi-sensor fusion methods.
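A factor-graph fusion of this kind can be sketched, for illustration only, with the GTSAM library (the cited works do not necessarily use it): a prior factor anchors the first pose, between-factors encode odometry constraints, and nonlinear optimization estimates all poses jointly. The noise values below are arbitrary:

```python
import numpy as np
import gtsam

# Build a small pose graph: a prior on the start pose plus two
# odometry constraints (e.g., from wheel odometry or LiDAR matching).
graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.1), odom_noise))

# Rough initial guesses; the optimizer refines all poses jointly.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.0, 0.0, 0.0))
initial.insert(1, gtsam.Pose2(0.9, 0.1, 0.0))
initial.insert(2, gtsam.Pose2(2.1, -0.1, 0.1))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for k in range(3):
    print(result.atPose2(k))
```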

3.3. Multi-Robot Cooperative Positioning

Typical multi-robot cooperative positioning methods, such as feature-map optimization, base-station referencing, and LiDAR-based local alignment, are widely applied in engineering structural inspection and tank monitoring. As shown in Figure 7, multi-robot cooperative positioning enables multiple units to share environmental data and positioning references, improving system scalability and coverage. Cao et al. [39] proposed a UWB-based dynamic cooperative positioning method, in which three-robot formations establish a self-adaptive global coordinate system. Varghese et al. [40] designed a cooperative SLAM algorithm for hazardous industrial environments using ROS/Gazebo-based TurtleBot3 Waffle robots. Combining Mecanum wheels, LiDAR, and depth cameras, Dai et al. [41] developed a laser–visual fusion algorithm to facilitate multi-robot cooperative searching. Benefiting from dynamic task allocation and YOLO-based detection, the algorithm guarantees effective area coverage and closed-loop feedback.
The efficiency and scalability of multi-robot positioning methods have improved significantly, and the reliance on global infrastructure has been reduced accordingly. However, limitations remain in vertical environments (signal occlusion), under limited onboard computing resources, with environmental sensitivity in multi-modal fusion, and due to high system costs. Improvements in communication capability, edge computing, and lightweight AI algorithms are possible ways to overcome these limitations.

4. Spatial Positioning Reinforcement Methods

To cope with non-uniform noise and environmental interference, analytic methods and learning-based approaches [42] are widely used to reinforce the positioning precision of multi-sensor fusion-based spatial positioning methods. Analytic methods mainly include Kalman-type filtering, particle filtering, and correlation filtering, while learning-based approaches usually rely on machine learning algorithms; deep reinforcement learning (DRL) and neural networks (NNs) are the typical learning-based approaches for climbing robots.

4.1. Filtering-Based Reinforcement Methods

Yang et al. [43] designed a robot navigation and positioning system for air–ground cooperative operations. Based on AprilTag marker data and visual odometry, extended Kalman filtering (EKF) was selected as the reinforcement method to enhance UAV positioning precision in a GPS-denied environment. Xu et al. [44] proposed a visual positioning and navigation system based on ArUco markers and wheel encoders, in which unscented Kalman filtering (UKF) was used to mitigate camera-scale uncertainty and reduce visual positioning instability. González et al. developed a particle filtering (PF)-based fusion algorithm in which UWB beacon and odometry data were integrated to mitigate multi-path-induced errors. However, Kalman-type filtering algorithms are only applicable to systems with Gaussian noise.
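To make the filtering step concrete, the following is a minimal EKF sketch in the spirit of the marker-aided systems above: wheel odometry drives the prediction through a unicycle motion model, and an absolute position fix (e.g., derived from an ArUco or AprilTag observation) drives the update. The state layout and noise matrices are simplifying assumptions:

```python
import numpy as np

def ekf_step(x, P, u, z, dt, Q, R):
    """One EKF cycle for a planar robot with state x = [px, py, theta].

    Predict with velocity odometry u = (v, w), then correct with an
    absolute position fix z = (px, py) from a marker observation.
    """
    v, w = u
    # --- Predict: unicycle motion model ---
    x_pred = x + np.array([v * dt * np.cos(x[2]),
                           v * dt * np.sin(x[2]),
                           w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(x[2])],   # motion Jacobian
                  [0.0, 1.0,  v * dt * np.cos(x[2])],
                  [0.0, 0.0, 1.0]])
    P_pred = F @ P @ F.T + Q

    # --- Update: the marker gives (px, py) directly, so H is linear ---
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```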
Correlation filtering is a classical image-processing technique. As illustrated in Figure 8, its core principle lies in training a filter that maximizes the correlation response over the target region of the acquired image, allowing the system to maintain target lock even in visually noisy conditions. An advanced correlation filtering method was proposed by Li et al. [45] to enhance positioning accuracy by integrating target prior information, multi-feature fusion, and adaptive model updates. He et al. [29] developed an improved correlation filtering algorithm to suppress background noise interference and improve target positioning accuracy. Wang et al. [46] proposed a real-time robust correlation filter algorithm to obtain high-precision spatial positions under target deformation, scale variation, occlusion, and fast motion. Wang et al. [47] developed a real-time target tracking algorithm based on correlation filtering, markedly improving tracking accuracy by effectively separating targets from background noise in high-speed motion scenarios. Yu et al. [48] proposed a robot-vision-based pedestrian avoidance control method; benefiting from correlation filtering, the method achieved 98% tracking accuracy while maintaining real-time performance, according to the paper.
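The filter-training principle described above can be illustrated with a MOSSE-style correlation filter, one of the classical algorithms later categorized in Figure 9. In this sketch the filter is learned in the Fourier domain so that its response over training patches approximates a Gaussian peaked at the target centre; the patch format and regularizer are illustrative:

```python
import numpy as np

def train_mosse(patches, sigma=2.0, lam=1e-3):
    """Learn a correlation filter H* (conjugate form, Fourier domain)
    from grayscale training patches of identical shape."""
    h, w = patches[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Desired response: a Gaussian peaked at the patch centre.
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma**2))
    G = np.fft.fft2(g)
    A = np.zeros((h, w), complex)          # numerator:   sum(G * conj(F))
    B = np.zeros((h, w), complex)          # denominator: sum(F * conj(F))
    for p in patches:
        F = np.fft.fft2(p)
        A += G * np.conj(F)
        B += F * np.conj(F)
    return A / (B + lam)                   # lam regularizes the division

def respond(H_conj, patch):
    """Correlation response map; the argmax locates the target."""
    return np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))
```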
In summary, correlation filtering offers high computational efficiency and model robustness. It is an effective method for the spatial positioning of climbing robots to cope with complex environmental interference, such as lighting variations and occlusions, and can accelerate the lightweight development of magnetic climbing robots. Figure 9 categorizes classical correlation filter algorithms, showing how improvements such as multi-feature fusion and adaptive updates enhance positioning precision and robustness across diverse operational scenarios.

4.2. Deep Reinforcement Learning-Based Reinforcement Methods

Deep reinforcement learning (DRL) plays a crucial role in the machine vision-based spatial positioning of robots. Figure 10 introduces the role of DRL in dynamic positioning tasks: DRL can adapt to variable lighting, occlusions, and surface changes, enabling robots to make real-time navigation decisions with improved autonomy. The applications of DRL in spatial positioning mainly involve the following:
  • Spatial position prediction of climbing robots.
  • The elimination of dynamic-environment interference (e.g., surface texture, partial occlusion, reflected light) in robot positioning.
  • Target recognition and tracking.
DRL-based spatial position prediction has received considerable attention. A DRL-based navigation method was proposed by Zhang et al. [49] for the detection and trajectory prediction of pedestrians: the YOLO algorithm was used to construct a semantic map for identifying interactive objects, and an attention mechanism-based trajectory prediction model was established to improve navigation efficiency. Through the introduction of an interaction penalty term in the reward function, Liu et al. [50] developed a visual interaction navigation method that reduces energy consumption; a spatial position prediction mechanism was additionally employed to further optimize interactive decision making. To obtain an optimized plan, an improved path planning method was proposed by Sivaranjani et al. [51] by integrating deep Q-networks (DQNs) with the artificial potential field (APF); compared with traditional path planning, both training speed and accuracy are enhanced. The structure of the DQN used for path prediction is presented in Figure 11. This model learns optimal control strategies through reward feedback, which is crucial for safe and energy-efficient motion planning.
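For reference, the DQN structure of Figure 11 can be sketched as follows; the network width, replay settings, and hyperparameters are assumptions for illustration rather than the configurations used in the cited works:

```python
import random
from collections import deque

import torch
import torch.nn as nn

class DQN(nn.Module):
    """Q-network: maps a state vector to one Q-value per action."""
    def __init__(self, state_dim=4, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)

policy, target = DQN(), DQN()
target.load_state_dict(policy.state_dict())   # target net starts as a copy
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99
# Replay buffer holds (state, action, reward, next_state, done) tensors.
replay = deque(maxlen=10_000)

def train_batch(batch_size=64):
    """One temporal-difference update on a random replay minibatch."""
    if len(replay) < batch_size:
        return
    s, a, r, s2, done = map(torch.stack, zip(*random.sample(replay, batch_size)))
    q = policy(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():                     # bootstrap: r + gamma * max Q'
        y = r + gamma * target(s2).max(1).values * (1 - done)
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```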
DRL is also used to mitigate the impact of dynamic environmental interference on the spatial positioning accuracy of magnetic climbing robots. A highly efficient DRL-based view planning algorithm was developed by Wang et al. [52] to obtain the optimal viewpoint for the detection of complex products: a visibility estimation method was designed to quickly obtain the visible areas under a given viewpoint, and asynchronous advantage actor–critic was used to solve the view planning problem. A graph-based DRL algorithm [53] was proposed to detect covered objects; the DRL model explored fully occluded objects with a 100% success rate in simulation, and the robots' grasping and pushing performance improved to a 90% success rate, according to that paper.
In the field of target recognition and tracking, DRL is utilized to enhance system stability. The multiple-pools DQN (MP-DQN) [54] was developed to enhance obstacle avoidance and target tracking performance for drones, with a directional reward function established to improve environmental generalization. A DRL-based multi-feature fusion target tracking method (MFFT) was proposed by Wang et al. [55] to improve tracking robustness through the fusion of 3D and 2D target features; to overcome the low data utilization and instability of DRL, a compact asynchronous advantage actor–critic (Compact_A3C) model was introduced to optimize training, reduce computation time, and enhance algorithm stability. A path planning method based on DRL and gradient descent was developed by Guo et al. [56], addressing the problems of excessive waypoints and long computation times in multiple unmanned ground vehicle (multi-UGV) navigation.
DRL plays a significant role in spatial position prediction, environmental interference reduction, and target tracking. Nevertheless, DRL-based spatial positioning has rarely been addressed for magnetic climbing robots, and it is likely to enable more intelligent and lightweight systems.

4.3. Neural Network-Based Reinforcement Methods

With the development of deep learning, neural network-based spatial positioning methods have demonstrated powerful capabilities in the feature extraction and modeling of vision sensor data. The neural network is usually regarded as a reinforcement tool to enhance the accuracy and robustness of spatial positioning systems. In recent years, various deep learning models, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), temporal convolutional networks (TCNs), and attention mechanisms, have been applied to visual localization, indoor positioning, and multi-sensor fusion localization to achieve high-precision position information in complex environments.
A CNN-based visual positioning algorithm was designed by Wang et al. [57] to achieve high-precision positioning from image data; the CNN was combined with an LSTM to reduce the impact of random noise on positioning accuracy, which also helps decrease the number of training parameters and mitigate overfitting. A CNN-LSTM-based indoor positioning method was proposed by Yoon et al. [58] to calculate the distance between receiver and transmitter from sequential received signal strength indicator (RSSI) data. During training, a one-dimensional CNN extracts local features from the RSSI signals, and an LSTM network captures feature differences over time. As a result, the CNN-LSTM model yields more accurate distances and improves indoor positioning performance. As shown in Figure 12, the CNN-LSTM network structure is effective for modeling time-dependent signal patterns, such as RSSI variations in indoor positioning; combining spatial and temporal feature extraction delivers improved positioning accuracy.
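A compact version of such a CNN-LSTM pipeline might look as follows: a one-dimensional convolution extracts local features from an RSSI window, and an LSTM summarizes their temporal evolution before a regression head outputs a distance. The layer sizes are illustrative, not those of the cited model:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1D-CNN feature extractor + LSTM temporal model for RSSI sequences.

    Input: (batch, seq_len) RSSI window -> output: (batch, 1) distance.
    """
    def __init__(self, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, rssi):
        x = self.conv(rssi.unsqueeze(1))       # (B, 32, T) local features
        x = x.transpose(1, 2)                  # (B, T, 32) for the LSTM
        _, (h, _) = self.lstm(x)               # h: (1, B, hidden)
        return self.head(h[-1])                # distance estimate

model = CNNLSTM()
dist = model(torch.randn(8, 100))              # 8 windows of 100 RSSI samples
```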
A deep learning-based visual–inertial odometry method (VINet) was introduced to fuse visual and IMU data. A CNN-RNN end-to-end training structure is employed in VINet to calculate the spatial position from RGB images and IMU data, and a multi-rate LSTM handles the high-frequency IMU stream. Benefiting from the joint training strategy, high-precision position estimates are obtained even in complex environments. Figure 13 illustrates the VINet architecture, which integrates CNN-RNN modules for processing camera and IMU data in a unified framework; this end-to-end system produces robust pose estimates even in visually degraded or high-motion scenarios. Through the use of a neural network, a UWB-based indoor positioning method was proposed by Wu et al. [59] to enhance positioning accuracy: channel impulse response (CIR) data are automatically classified, and a CNN combined with an attention mechanism identifies channel conditions and predicts range errors. The predicted range errors are used to compensate the distance measurements, and the locations of unknown nodes are then obtained through a weighted least squares algorithm.
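The final weighted least squares solve mentioned above can be illustrated in a few lines: the error-compensated ranges are linearized as in trilateration, and each equation is weighted by the confidence assigned to its channel. The anchor geometry and weights below are hypothetical:

```python
import numpy as np

def wls_position(anchors, ranges, weights):
    """Weighted least squares multilateration (2D).

    anchors: (N, 2) UWB anchor positions; ranges: error-compensated
    distances; weights: per-measurement confidence (e.g., derived from
    a model's predicted range-error variance).
    """
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    W = np.diag(weights[1:])
    # Normal equations: (A^T W A) x = A^T W b
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

anchors = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 6.0], [0.0, 6.0]])
ranges = np.array([4.24, 4.24, 4.24, 4.24])   # tag near the centre (3, 3)
weights = np.array([1.0, 1.0, 0.8, 0.5])      # down-weight suspect channels
print(wls_position(anchors, ranges, weights)) # ~ [3.0, 3.0]
```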

5. Challenges and Trends for Spatial Positioning

5.1. Challenges for Spatial Positioning

Due to the complex operation environments and task requirements, significant challenges remain for the spatial positioning of magnetic climbing robots. The key challenges are summarized as follows.
  • Variations in surface shape and material properties: Differences in radiation properties, frictional coefficients, and geometry [60] heavily affect the sensor data quality and positioning accuracy of magnetic climbing robots.
  • Accumulated sensor errors: Positioning sensors such as IMUs suffer from error accumulation, and the resulting loss of precision makes them unsuitable on their own for long-duration or long-distance tasks [61].
  • Interference from the dynamic environment: Positioning stability is usually affected by wind force, lighting conditions, and abnormal vibration in high-altitude operations.
  • Constraints of the working environment: Magnetic climbing robots must frequently adjust their posture to adapt to complex geometries. Because the robot's posture changes continuously, the camera's viewpoint also changes constantly, which destabilizes vision-based positioning.
  • Energy consumption: The robot's battery power is limited, yet positioning sensors, vision systems, and onboard computing units typically consume considerable power. An optimal balance must be found among positioning accuracy, battery capacity, and system size.
  • Data synchronization: In large-scale tasks, a single robot often cannot cover the entire operation surface, and multi-robot cooperation is required for positioning and mapping [62,63]. Synchronizing communication data remains a challenge for multi-robot cooperation.
  • Limited computing resources: High-precision positioning and target recognition require substantial computing resources, yet magnetic climbing robots are designed to be lightweight, which makes integrating high-performance computing platforms difficult.

5.2. Trends for Spatial Positioning

Based on the reviewed spatial positioning methods and challenges, specific development requirements for magnetic climbing robots are given as follows.
  • Data-Processing Demands on Various Operation Surfaces: To cope with the impact of various operation surfaces on the sensor’s data quality, appropriate sensor combinations should be selected according to the environmental conditions, guided by fusion models that balance accuracy, energy, and computational load. To enable real-time deployment, future research should focus on lightweight machine learning models and efficient data-processing pipelines that are suitable for onboard computing.
  • Correction of Accumulated Sensor Errors: To realize the long-duration and long-distance operations, filtering algorithms, such as Kalman-type filtering, particle filtering, correlation filtering, etc., should be used for the rectification of accumulated sensor errors.
  • Handling Dynamic Environment Disturbances: Since the dynamic environment disturbances heavily affect the stability of sensor data in real applications, anti-interference algorithms and mechanical design methods should be considered to cope with the dynamic environment disturbances.
  • Intelligent Energy Consumption Management: Because battery capacity is limited, energy efficiency is crucial for the sustained operation of positioning sensors and computing units. Low-power sensors will be a major development trend, and the dynamic adjustment of sensor operation modes is an effective energy-saving method.
  • Multi-Robot Cooperation: Multi-robot cooperation is helpful for expanding the positioning range and improving task efficiency. The multi-robot cooperative SLAM algorithms are useful to realize map-sharing and the collaborative optimization of position and posture parameters under large-scale or complex operation environments.
  • Lightweight Algorithms: Image tracking should be integrated with spatial positioning to develop a lightweight algorithm to reduce computational complexity.
  • Deployment of AI models: More artificial intelligence models are expected to be applied to the localization of wall-climbing robots in the future.

6. Conclusions

The spatial positioning method plays a crucial role for magnetic climbing robots in industrial applications. Currently, most spatial positioning techniques can be categorized into single-sensor and multi-sensor fusion-based positioning methods. With the increasing complexity of industrial environments, the integration of lightweight multi-sensor positioning frameworks has emerged as a key direction in current technological development. Further, to deal with environmental interference and non-uniform noise, spatial positioning reinforcement methods, including analytic methods and learning-based techniques, have been introduced to improve the accuracy of multi-sensor fusion-based localization. However, due to the complexity of industrial scenarios, spatial positioning methods still face significant challenges.

Single-sensor approaches, such as those based on vision, LiDAR, or IMUs, are simple and cost-effective but often suffer from accumulated drift, limited fields of view, and environmental sensitivity, making them unreliable in unstructured or dynamic settings. Multi-sensor fusion strategies improve positioning accuracy and robustness by leveraging complementary sensor data, yet they typically bring increased system complexity, calibration overhead, and computational burden, issues that can hinder their application on lightweight climbing robots. Filtering-based reinforcement methods, such as Kalman filters, particle filters, and correlation filtering, are widely used for handling noise and uncertainty; however, they often depend on strict statistical assumptions (e.g., Gaussian noise) and may fail under high nonlinearity or dynamic disturbances. Meanwhile, machine learning and deep reinforcement learning (DRL) methods have shown growing potential in dynamic environment adaptation, decision making, and spatial prediction tasks, but they are constrained by the need for large training datasets, generalization issues, and high computational costs, which pose challenges for real-time deployment in embedded systems. Collectively, while each method provides unique advantages, none alone can fully satisfy the demands of robust, precise, and efficient localization in real-world magnetic climbing robot operations. Table 1 compares the four categories of techniques in terms of accuracy, robustness, real-time capability, and power consumption.
Multi-sensor fusion and lightweight design represent key and inevitable trends in the development of spatial positioning for magnetic climbing robots. Lightweight considerations must encompass both the physical robot structure and the positioning algorithms. Advances in these areas will make magnetic climbing robots more suitable for the inspection and maintenance of large-scale metal structures.

Author Contributions

Conceptualization, H.R. and J.Q.; methodology, H.R. and J.Q.; writing—original draft preparation, H.R.; writing—review and editing, H.R., M.S., J.Q. and J.Z.; visualization, Z.L., J.Z., B.W. and L.C.; supervision, M.S.; revision, F.G.; project administration, H.R. and J.X.; funding acquisition, H.R. and Q.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Key Technology Breakthrough Program of Ningbo under Grants 2023Z031, 2024Z170, and 2025Z188, and in part by the Major Industrial Science and Technology Project of Fenghua District under Grant 202411105.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Baolei Wang and Qingwei Jia were employed by Ningbo Weierskeler Intelligent Technology Limited Company. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Tao, B.; Gong, Z.; Ding, H. Climbing robots for manufacturing. Natl. Sci. Rev. 2023, 10, nwad042. [Google Scholar] [CrossRef]
  2. Bridge, B.; Sattar, T.P.; Leon-Rodriguez, H.E. Climbing robot cell for fast and flexible manufacture of large scale structures. In CAD/CAM Robotics and Factories of the Future, 22nd International Conference; Narosa Publishing House: New Delhi, India, 2006; pp. 584–597. [Google Scholar]
  3. Tao, J.; Yubo, H.; Lirong, S.; Wenfei, F.; Weijun, Z. Design and Research of Segmented-Stepping Tower Climbing Robot. Mach. Des. Res. 2024, 40, 46–51. [Google Scholar]
  4. Jiang, A.; Zhan, Q.; Zhang, Y. Development of New Magnetic Adhesion Wall Climbing Robot. Mach. Build. Autom. 2018, 47, 146–148+161. [Google Scholar]
  5. Fang, Y.; Wang, S.; Bi, Q.; Cui, D.; Yan, C. Design and technical development of wall-climbing robots: A review. J. Bionic Eng. 2022, 19, 877–901. [Google Scholar] [CrossRef]
  6. Schmidt, D.; Berns, K. Climbing robots for maintenance and inspections of vertical structures—A survey of design aspects and technologies. Robot. Auton. Syst. 2013, 61, 1288–1305. [Google Scholar] [CrossRef]
  7. Wang, H. Research on Rose Recognition and Spatial Localization Based on Monocular Vision. Master’s Thesis, Wuhan Institute of Technology, Wuhan, China, 2024. [Google Scholar]
  8. Shuwen, H.; Keyu, G.; Xiangyu, S.; Feng, H.; Shijie, S.; Huansheng, S. Multi-target 3D visual grounding method based on monocular images. J. Comput. Appl. 2025; 1–11, in press. [Google Scholar]
  9. Yan, W. Research on Motion Control and Positioning Method of Wall-Climbing Robot. Master’s Thesis, Changzhou University, Changzhou, China, 2023. [Google Scholar]
  10. Shao, S. Research on Global Navigation System of Indoor Unmanned Vehicle Based on Ultrasonic Positioning. Master’s Thesis, Northeast Petroleum University, Daqing, China, 2023. [Google Scholar]
  11. Wu, J. Research on Indoor LiDAR Localization and Mapping Technology of Mobile Robot. Master’s Thesis, Shanghai Ocean University, Shanghai, China, 2024. [Google Scholar]
  12. Zhang, Y. Research on Positioning of High-Altitude Strong Magnetic Climbing Robots Based on Multi-Sensor Fusion. Master’s Thesis, Fujian University of Technology, Fuzhou, China, 2023. [Google Scholar]
  13. Fang, X.; Liu, J.; Chen, Y. Research on autonomous movement of the wall-climbing robot based on SLAM. Manuf. Autom. 2023, 45, 85–88. [Google Scholar]
  14. Zhao, M. Research on Visual Positioning and Path-Planning Algorithms for Mobile Robots. Master’s Thesis, Inner Mongolia University, Hohhot, China, 2024. [Google Scholar]
  15. Li, D. Optimization Design and Algorithm Research on Ultra Wide Band Indoor Positioning. Master’s Thesis, Shandong Jianzhu University, Jinan, China, 2024. [Google Scholar]
  16. Xu, S. Non-Line-of-Sight Identification and Error Compensation for Indoor Positioning Based on UWB. Master’s Thesis, Nanjing University of Information Science & Technology, Nanjing, China, 2024. [Google Scholar]
  17. Liu, M.; Yang, S.; Rathee, A.; Du, W. Orientation Estimation Piloted by Deep Reinforcement Learning. In Proceedings of the 2024 IEEE/ACM Ninth International Conference on Internet-of-Things Design and Implementation (IoTDI), Hong Kong, China, 13–16 May 2024; pp. 134–145. [Google Scholar]
  18. Zeng, F.; Wang, C.; Ge, S.S. A Survey on Visual Navigation for Artificial Agents with Deep Reinforcement Learning. IEEE Access 2020, 8, 135426–135442. [Google Scholar] [CrossRef]
  19. Ouyang, G.; Abed-Meraim, K.; Ouyang, Z. Magnetic-Field-Based Indoor Positioning Using Temporal Convolutional Networks. Sensors 2023, 23, 1514. [Google Scholar] [CrossRef] [PubMed]
  20. Li, A.; Tao, B.; Ding, H. Research Progress and Application of Magnetic Adhesion Wall-climbing Robots. Robot 2025, 47, 123–144. [Google Scholar]
  21. Enjikalayil Abdulkader, R.; Veerajagadheswar, P.; Htet Lin, N.; Kumaran, S.; Vishaal, S.R.; Mohan, R.E. Sparrow: A magnetic climbing robot for autonomous thickness measurement in ship hull maintenance. J. Mar. Sci. Eng. 2020, 8, 469. [Google Scholar] [CrossRef]
  22. Wang, T.; Zhu, S.; Song, W.; Li, C.; Shi, H. Path planning for wall climbing robot in volume measurement of vertical tank. Robot 2023, 46, 36–44. [Google Scholar]
  23. Zhang, W.; Ding, Y.; Chen, Y.; Sun, Z. Autonomous positioning for wall climbing robots based on a combination of an external camera and a robot-mounted inertial measurement unit. J. Tsinghua Univ. (Sci. Technol.) 2022, 62, 1524–1531. [Google Scholar]
  24. Gu, Z.; Gong, Z.; Tao, B.; Yin, Z.; Ding, H. Global localization based on tether and visual-inertial odometry with adsorption constraints for climbing robots. IEEE Trans. Ind. Inform. 2023, 19, 6762–6772. [Google Scholar] [CrossRef]
  25. Zhang, W.; Yang, Y.; Huang, T.; Sun, Z. ArUco-assisted autonomous localization method for wall climbing robots. Robot 2024, 46, 27–35, 44. [Google Scholar]
  26. Tache, F.; Pomerleau, F.; Caprari, G.; Siegwart, R.; Bosse, M.; Moser, R. Three dimensional localization for the Magne Bike inspection robot. J. Field Robot. 2011, 28, 180–203. [Google Scholar] [CrossRef]
  27. Zhong, M.; Ma, Y.; Li, Z.; He, J.; Liu, Y. Facade protrusion recognition and operation-effect inspection methods based on binocular vision for wall-climbing robots. Appl. Sci. 2023, 13, 5721. [Google Scholar] [CrossRef]
  28. Wang, Z.; Yan, B.; Dong, M.; Wang, J.; Sun, P. A localization method of wall-climbing robot based on LiDAR and improved AMCL. Chin. J. Sci. Instrum. 2022, 43, 220–227. [Google Scholar]
  29. He, J. Research on Visual Object Tracking method based on Discriminative Correlation Filter. Master’s Thesis, Guilin University of Electronic Technology, Guilin, China, 2023. [Google Scholar]
  30. Gao, Q.; Lu, K.; Ji, Y.; Liu, J.; Xu, L.; Wei, G. Survey on the Research of Multi-sensor Fusion SLAM. Mod. Radar 2024, 46, 29–39. [Google Scholar]
  31. Zhou, Z.; Zhang, C.; Li, C.; Zhang, Y.; Shi, Y.; Zhang, W. A tightly-coupled LIDAR-IMU SLAM method for quadruped robots. Meas. Control 2024, 57, 1004–1013. [Google Scholar] [CrossRef]
  32. Liu, S.; Dong, N.; Mai, X. Research on Visual-Inertial Fusion SLAM Based on Mean Filtering Algorithm. In Proceedings of the 42nd Chinese Control Conference, Tianjin, China, 24–26 July 2023; pp. 392–397. [Google Scholar]
  33. Wang, C.; Wu, B.; Wang, H.; Zheng, H.; Wang, L. Indoor localization technology of SLAM based on binocular vision and IMU. In Proceedings of the 2022 4th International Conference on Robotics, Intelligent Control and Artificial Intelligence, Dongguan, China, 16–18 December 2022; pp. 441–446. [Google Scholar]
  34. Zhang, L.; Wu, X.; Gao, R. A multi-sensor fusion positioning approach for indoor mobile robot using factor graph. Measurement 2023, 216, 112926. [Google Scholar] [CrossRef]
  35. Zhai, Y.; Zhang, S. A Novel LiDAR–IMU–Odometer Coupling Framework for Two-Wheeled Inverted Pendulum (TWIP) Robot Localization and Mapping with Nonholonomic Constraint Factors. Sensors 2022, 22, 4778. [Google Scholar] [CrossRef]
  36. Liu, H.; Zhang, X.; Jiang, J. Spatiotemporal LiDAR-IMU-Camera Calibration: A Targetless and IMU-Centric Approach Based on Continuous-time Batch Optimization. In Proceedings of the 2022 34th Chinese Control and Decision Conference (CCDC), Hefei, China, 15–17 August 2022. [Google Scholar]
  37. Hou, L.; Xu, X.; Ito, T. An Optimization-Based IMU/LiDAR/Camera Co-calibration Method. In Proceedings of the 2022 7th International Conference on Robotics and Automation Engineering (ICRAE), Singapore, 18–20 November 2022; pp. 118–122. [Google Scholar]
  38. Liu, W.; Li, Z.; Sun, S.; Du, H.; Angel Sotelo, M. A novel motion-based online temporal calibration method for multi-rate sensors fusion. Inf. Fusion 2022, 88, 59–77. [Google Scholar] [CrossRef]
  39. Cao, Y.; Li, M.; Svogor, I.; Wei, S.; Beltrame, G. Dynamic Range-Only Localization for Multi-Robot Systems. IEEE Access 2018, 6, 46527–46537. [Google Scholar] [CrossRef]
  40. Varghese, G.; Reddy, T.G.C.; Menon, A.K. Multi-Robot System for Mapping and Localization. In Proceedings of the 2023 8th International Conference on Robotics and Automation Engineering (ICRAE), Singapore, 17–19 November 2023; pp. 79–84. [Google Scholar]
  41. Dai, Y.; Li, C.; Wang, D. Joint SLAM and Joint Search of Unmanned Car Cluster. In Proceedings of the 2023 38th Youth Academic Annual Conference of Chinese Association of Automation (YAC), Hefei, China, 27–29 August 2023; pp. 1015–1019. [Google Scholar]
  42. Zhuang, Y.; Sun, X.; Li, Y. Multi-sensor integrated navigation/positioning systems using data fusion: From analytics-based to learning-based approaches. Inf. Fusion 2023, 95, 62–90. [Google Scholar] [CrossRef]
  43. Yang, Y. Research on Key Technologies for Aerial Ground Cooperative Navigation and Localization of Warehouse Inspection Robots. Master’s Thesis, Southwest University of Science and Technology, Mianyang, China, 2024. [Google Scholar]
  44. Xu, J. Research on Indoor Mobile Robot Navigation System. Master’s Thesis, Guangdong University of Technology, Guangzhou, China, 2022. [Google Scholar]
  45. Li, C. Research on the Combination of Siamese Network and Correlation Filter in Visual Tracking. Ph.D. Thesis, Zhejiang University, Hangzhou, China, 2021. [Google Scholar]
  46. Wang, M. Research on Visual Object Tracking Algorithm Based on Correlation Filtering. Master’s Thesis, Beijing Jiaotong University, Beijing, China, 2020. [Google Scholar]
  47. Wang, Z.; Liu, K.; Qin, Y.; Miao, B.; Tian, G.; Tang, T. Real Time Tracking Algorithm for High Speed Railway Catenary Support Device Based on Correlation Filtering and Saliency Detection. China Transp. Rev. 2024, 46, 98–105. [Google Scholar]
  48. Yu, J.; Zhang, L.; Zhang, K. Autonomous Avoidance Pedestrian Control Method for Indoor Mobile Robot. J. Chin. Comput. Syst. 2020, 41, 1776–1782. [Google Scholar]
  49. Zhang, P. Design and Implementation of Interactive Navigation System for Mobile Robot in Dynamic Environments. Master’s Thesis, Southeast University, Nanjing, China, 2021. [Google Scholar]
  50. Liu, Q. Research on Indoor Interactive Visual Navigation Based on Deep Reinforcement Learning. Master’s Thesis, Xiangtan University, Xiangtan, China, 2023. [Google Scholar]
  51. Sivaranjani, A.; Vinod, B. Artificial Potential Field Incorporated Deep-Q-Network Algorithm for Mobile Robot Path Prediction. Intell. Autom. Soft Comput. 2023, 35, 1135–1150. [Google Scholar] [CrossRef]
  52. Wang, Y.; Peng, T.; Wang, W.; Luo, M. High-efficient view planning for surface inspection based on parallel deep reinforcement learning. Adv. Eng. Inform. 2023, 55, 101849. [Google Scholar] [CrossRef]
  53. Zuo, G.; Tong, J.; Wang, Z.; Gong, D. A Graph-Based Deep Reinforcement Learning Approach to Grasping Fully Occluded Objects. Cogn. Comput. 2022, 15, 36–49. [Google Scholar] [CrossRef]
  54. Jiang, W.; Xu, G.; Wang, Y. A Method for Autonomous Obstacle Avoidance and Target Tracking of Unmanned Aerial Vehicle. J. Astronaut. 2022, 43, 802–810. [Google Scholar]
  55. Wang, Z. Research on Target Tracking via Multiple-Feature Based on Deep Reinforcement Learning. Master’s Thesis, Liaoning Normal University, Dalian, China, 2022. [Google Scholar]
  56. Guo, H.; Xu, Y.; Ma, Y.; Xu, S.; Li, Z. Pursuit Path Planning for Multiple Unmanned Ground Vehicles Based on Deep Reinforcement Learning. Electronics 2023, 12, 4759. [Google Scholar] [CrossRef]
  57. Wang, D. Research and Application of CNN-Based Visual Localization. Master’s Thesis, Shanghai University of Electric Power, Shanghai, China, 2024. [Google Scholar]
  58. Yoon, J.; Kim, H.; Lee, D. Indoor Positioning Method by CNN-LSTM of Continuous Received Signal Strength Indicator. Electronics 2024, 13, 4518. [Google Scholar] [CrossRef]
  59. Wu, S.; Wang, X.; Zhang, L.; Xu, K.; Zhang, M.; Jin, S. Temporal convolutional neural network indoor UWB positioning method based on SimCLR-CIR-SC autonomous classification. J. Electron. Meas. Instrum. 2025, 39, 65–76. [Google Scholar]
  60. Xu, F.; Meng, F.; Jiang, Q.; Peng, G. Grappling claws for a robot to climb rough wall surfaces: Mechanical design, grasping algorithm, and experiments. Robot. Auton. Syst. 2020, 128, 103501. [Google Scholar] [CrossRef]
  61. Zhang, J.; Singh, S. Visual-LiDAR odometry and mapping: Low-drift, robust, and fast. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2174–2181. [Google Scholar]
  62. Zhou, X.; Wen, X.; Wang, Z.; Gao, Y.; Li, H.; Wang, Q.; Yang, T.; Lu, H.; Cao, Y.; Xu, C. Swarm of micro flying robots in the wild. Sci. Robot. 2022, 7, eabm5954. [Google Scholar] [CrossRef]
  63. Mehrez, M.W.; Mann, G.K.I.; Gosine, R.G. An optimization based approach for relative localization and relative tracking control in multi-robot systems. J. Intell. Robot. Syst. 2017, 85, 385–408. [Google Scholar] [CrossRef]
Figure 1. Challenges of spatial positioning for magnetic climbing robots.
Figure 2. External sensor-based spatial positioning methods.
Figure 3. Onboard sensor-based spatial positioning methods: (a) 2D image-based positioning; (b) 3D vision-based positioning.
Figure 4. The multi-sensor fusion-based SLAM framework.
Figure 5. The VI-SLAM framework.
Figure 6. Indoor positioning system.
Figure 7. Multi-robot cooperative positioning.
Figure 8. The principles of the correlation filtering algorithm.
Figure 9. Classical correlation filter algorithms.
Figure 10. Deep reinforcement learning-based reinforcement method.
Figure 11. Deep Q-network algorithm framework.
Figure 12. The CNN-LSTM network structure.
Figure 13. The proposed VINet architecture for visual–inertial odometry.
Table 1. A comparative analysis of the four technologies.

| Technology Category | Accuracy | Robustness | Real-Time Capability | Power Consumption |
|---|---|---|---|---|
| Single-Sensor Approaches | Medium: limited by sensor error accumulation | Low: sensitive to environmental changes and interference | High: low computational complexity | Low: simple hardware, low power consumption |
| Multi-Sensor Fusion Strategies | High: fusion of multiple sensors improves positioning accuracy | High: effectively compensates for single-sensor weaknesses | Medium: fusion computations are complex but can be optimized for real-time performance | Medium: multiple sensors and higher computational demand increase power usage |
| Filtering-Based Reinforcement Methods | Medium to High: effective error correction in Gaussian noise systems | Medium: works well with Gaussian noise, less so in complex noise environments | High: filtering algorithms are computationally efficient | Low to Medium: requires moderate processing power |
| Deep Reinforcement Learning-Based Methods | Relatively High: can adapt to complex environments and optimize dynamically | High: adapts to dynamic environments and nonlinear disturbances | Low to Medium: training requires heavy computation; inference can be optimized | High: deep learning models require significant computational resources |