Article

A Coarse-to-Fine Framework with Curvature Feature Learning for Robust Point Cloud Registration in Spinal Surgical Navigation

1
School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
2
Department of Orthopedics, Xuanwu Hospital Capital Medical University, Beijing 100053, China
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Bioengineering 2025, 12(10), 1096; https://doi.org/10.3390/bioengineering12101096
Submission received: 20 August 2025 / Revised: 29 September 2025 / Accepted: 9 October 2025 / Published: 12 October 2025
(This article belongs to the Section Biosignal Processing)

Abstract

In surgical navigation-assisted pedicle screw fixation, registration of cross-source pre- and intra-operative point clouds faces challenges such as large initial pose differences and low overlapping ratios. Classical algorithms based on feature descriptors have high computational complexity and are less robust to noise, degrading registration accuracy and navigation performance. To address these problems, this paper proposes a coarse-to-fine registration framework. In the coarse registration stage, a Point Matching algorithm based on Curvature Feature Learning (CFL-PM) is proposed. Through CFL-PM and Farthest Point Sampling (FPS), coarse registration of the overlapping regions between the two point clouds is achieved. In the fine registration stage, the Iterative Closest Point (ICP) algorithm is used for further optimization. The proposed method effectively addresses the challenges of noise, initial pose, and low overlapping ratio. In noise-free point cloud registration experiments, the average rotation and translation errors reached 0.34° and 0.27 mm. Under noisy conditions, the average rotation error of the coarse registration is 7.28° and the average translation error is 9.08 mm. Experiments on pre- and intra-operative point cloud datasets demonstrate that the proposed algorithm outperforms the compared algorithms in registration accuracy, speed, and robustness. Therefore, the proposed method can achieve the precise alignment required for surgical navigation-assisted pedicle screw fixation.

Graphical Abstract

1. Introduction

Pedicle screw fixation is a common spinal surgical method for treating spinal fractures, lumbar spondylolisthesis, and other diseases [1]. However, due to the spine’s complex structure and the high precision required, the procedure is challenging for surgeons. Surgical navigation systems improve accuracy and safety by providing precise positioning, and are widely adopted in neuro-navigation [2,3,4], spinal [5,6], laparoscopic [7,8], knee replacement [9,10], and mandibular surgeries [11,12], playing an increasingly important role in various fields [13]. Registration is one of the core technologies in surgical navigation systems, and its precision is an important factor in the accuracy of the navigation system [14]. Therefore, many scholars are actively working to improve the registration accuracy of navigation systems with a view to further optimizing system performance [15,16,17]. In a study of registration for neuro-surgical navigation systems [15], a surface registration method based on automated machine learning was proposed and experimentally shown to improve registration accuracy. In total knee arthroplasty assisted by a surgical navigation system [16], a point-to-surface iterative closest point registration algorithm achieved an RMSE of 0.3 mm.
Surgical navigation-assisted pedicle screw fixation still faces challenges in achieving accurate pre- and intra-operative registration. First, pre- and intra-operative point clouds from different imaging devices vary significantly in density and position. Second, intra-operative scanning limitations (such as limited anatomical exposure, soft tissue occlusion, and noise) further reduce the overlapping ratio between the pre- and intra-operative point clouds. The point cloud registration algorithms used in the above applications fail in this navigation-assisted context.
Among classical feature-extraction registration algorithms, Liu [18] proposed an algorithm integrating feature regions and Super4PCS for coarse registration under low overlap, which improves accuracy and success rate while reducing computation time compared to Super4PCS. Wang [19] developed a coarse registration algorithm using grid normal deviation angle statistics, improving alignment accuracy for partially overlapping point clouds. To address noise and low overlap, Zhang [20] proposed an SSR feature descriptor and designed SSR-Net; the results demonstrated the effectiveness of the method. Yan [21] proposed a hybrid approach combining SHOT with RANSAC for initial alignment, followed by symmetric ICP refinement, achieving clinically acceptable registration precision. Zhang [22] optimized registration by selecting FPS-based local regions for SAC-IA before ICP refinement, outperforming other methods in accuracy and success rate.
However, classical feature extraction-based point cloud registration algorithms have higher computational complexity, longer execution times, and weaker noise robustness than deep learning algorithms [23,24,25,26,27]. To address this, Zhang [28] proposed an end-to-end network that adaptively focuses on overlapping regions; experiments showed it outperforms existing methods when the overlap exceeds 30% and detects overlapping regions accurately. Zhou [29] introduced SCANet, a spatial and channel attention network that outperforms existing algorithms on partially overlapping point clouds with Gaussian noise. Li [30] proposed a transformer-based algorithm to address low-overlap challenges, employing a learnable geometric position update module and a deeper cross-attention module; results demonstrate improvements over existing similar methods.
These methods do not consider cross-source data and a low overlapping ratio simultaneously, and cannot be directly applied to pre- and intra-operative point cloud registration for pedicle screw fixation. Therefore, inspired by [31], this paper proposes a novel coarse-to-fine registration framework. The main contributions of this paper are as follows:
  • A novel and robust coarse-to-fine registration framework has been proposed to address significant initial pose differences, low overlapping ratio, and noise interference issues.
  • A novel Curvature Feature Learning-based Point Matching (CFL-PM) algorithm based on a curvature feature coder and graph attention network is proposed. The algorithm effectively generates more reliable correspondences for coarse registration and shows strong anti-interference ability against noise.
  • A challenging dataset consisting of cross-source, low-overlapping pre- and intra-operative point cloud pairs was constructed to simulate real surgical environments. The noise-free conditions simulate an ideal surgical scenario, while the noisy conditions simulate various noise sources present in the surgical field, such as soft tissue and blood. The results verify the feasibility and robustness of the proposed algorithm in a surgical navigation system.
The structure of this paper is as follows: Section 2 introduces the proposed algorithm; Section 3 describes the experimental setup, results, and analysis; Section 4 concludes the paper.

2. Materials and Methods

In surgical navigation system-assisted pedicle screw fixation, an important prerequisite for accurate intra-operative positioning is the alignment of the patient’s pre-operative point cloud with the actual intra-operative point cloud. The algorithmic framework proposed in this paper is shown in Figure 1.
The proposed algorithm identifies correspondences and performs registration by learning point features. During training, first, the pre- and intra-operative point clouds of corresponding spines, containing per-point curvature information, are input into the point-wise feature encoder to obtain the key point subsets X, Y and the corresponding feature vectors f_X, f_Y.
Secondly, the graph attention network produces the updated point features f̂_X, f̂_Y, which are used to determine the correspondences between points in the pre- and intra-operative point clouds. Thirdly, the Random Sample Consensus (RANSAC) method is used to remove wrong matches and perform coarse registration.
During testing, the pre-operative point cloud is sampled by FPS and divided into multiple local regions. Based on the CFL-PM algorithm, registration errors between the intra-operative point cloud and each local region are compared to identify the highest-overlap region. The local region with the minimum error provides the coarse registration transformation. Finally, ICP is used for further optimization to obtain the optimal rotation matrix R̂ and translation vector t̂.
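The test-time selection loop described above can be sketched as follows; `select_best_region` and `dummy_register` are illustrative names, and the toy scorer merely stands in for the trained CFL-PM matcher and ICP refinement, which are not reproduced here:

```python
import numpy as np

def select_best_region(intra_pc, regions, register_fn):
    """Coarse stage: register the intra-operative cloud against every
    candidate local region and keep the transform with the smallest error
    (cf. Equation (6))."""
    best = None
    for region in regions:
        R, t, err = register_fn(intra_pc, region)   # CFL-PM would go here
        if best is None or err < best[2]:
            best = (R, t, err)
    return best

def dummy_register(src, dst):
    # Hypothetical stand-in: identity pose, error = centroid distance.
    err = np.linalg.norm(src.mean(axis=0) - dst.mean(axis=0))
    return np.eye(3), np.zeros(3), err

rng = np.random.default_rng(0)
intra = rng.normal(size=(50, 3))
candidates = [intra + 5.0, intra + 0.1, intra + 2.0]  # three fake regions
R, t, err = select_best_region(intra, candidates, dummy_register)
```

The loop structure is what matters: one registration attempt per candidate region, with the minimum-error region carried into fine registration.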

2.1. Curvature Feature Encoder

The curvature feature encoder consists of a point-wise feature encoder and a graph attention network [31]. The network structure of a point-wise feature encoder is shown in Figure 2, including three Set Abstraction (SA) layers SA1–SA3 and one Feature Propagation (FP) layer [32].
The input to the curvature feature encoder consists of 3D point coordinates and a 1D per-point feature. The curvature value of each point is taken as the feature input; its calculation is shown in Equation (1) [33,34]:
σ = λ₃ / (λ₁ + λ₂ + λ₃)
where λ₁, λ₂, λ₃ are the eigenvalues of the covariance matrix of the locally fitted surface (λ₁ > λ₂ > λ₃). The curvature estimation neighborhood radius in this study is set to 5 mm.
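Equation (1) can be computed directly from the eigenvalues of each point's neighborhood covariance; a minimal NumPy sketch (brute-force neighbor search, adequate for small clouds):

```python
import numpy as np

def surface_variation(points, radius=5.0):
    """Per-point curvature from Equation (1): sigma = lam3 / (lam1 + lam2 + lam3),
    where the lambdas are eigenvalues of the covariance of the neighborhood
    within `radius` (5 mm in the paper)."""
    sigma = np.zeros(len(points))
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) <= radius]
        if len(nbrs) < 3:
            continue  # too few neighbours to fit a local surface
        lam = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))  # ascending order
        sigma[i] = lam[0] / lam.sum()  # lam[0] is the smallest (lam3)
    return sigma

# Points on a flat patch should have near-zero curvature.
rng = np.random.default_rng(0)
plane = np.column_stack([rng.normal(size=(200, 2)), np.zeros(200)])
sigma = surface_variation(plane, radius=10.0)
```

In practice a KD-tree would replace the brute-force distance test for large clouds.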
The pre- and intra-operative point clouds are input to the encoder separately; after the SA and FP layers, the coordinates of the sampled subsets (key points) of the pre- and intra-operative point clouds are output, denoted by X and Y, respectively, together with their feature vectors f_X, f_Y. Each SA layer applies FPS to the points from the previous layer, which reduces the number of key points and shortens computation time without losing information in the overlapping region. The hyper-parameters n, r, and L in the SA layers are selected through repeated experiments, keeping the values that give the best training performance.
However, the output feature vectors f_X, f_Y represent only local information and are unrelated to the global context. Moreover, a point’s features change with the density of its neighborhood, which makes correct correspondences harder to identify. To reduce the impact of these problems on registration, a graph attention network is added. Through self-attention and cross-attention between the source and target point clouds [35], the updated point features f̂_X, f̂_Y are obtained.
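A minimal, single-head sketch of this feature-update step (omitting the learned linear projections and multi-head structure of a full graph attention network; shapes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys):
    """Each query feature becomes a weighted mix of the key features:
    self-attention when queries == keys, cross-attention when the keys
    come from the other point cloud."""
    d = queries.shape[1]
    weights = softmax(queries @ keys.T / np.sqrt(d), axis=1)
    return weights @ keys

rng = np.random.default_rng(1)
f_X, f_Y = rng.normal(size=(64, 32)), rng.normal(size=(48, 32))
f_X_hat = attention(attention(f_X, f_X), f_Y)  # self- then cross-attention
```

Self-attention injects global context within one cloud; cross-attention lets each source feature attend to the target cloud, which is what makes the updated features comparable across the two clouds.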

2.2. Correspondence Identification and Transformation Matrix Estimation

The correspondences are obtained by comparing the features of the key points X and Y of the source and target point clouds. Using the feature vectors f̂_X, f̂_Y, the similarity function of key point matching (Equation (2)) represents the matching probability between X and Y:
ϕ = Softmax(f̂_X f̂_Yᵀ / T) ∈ ℝ^(N×N)
where N is the number of key points, T is a temperature hyper-parameter, and Softmax is the normalized exponential function. Each ϕ_ij represents the probability that the i-th source key point matches the j-th target key point, while each row ϕ_i represents that point’s matching distribution over all target key points.
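Equation (2) is a few lines in code; the feature counts and dimensions below are illustrative:

```python
import numpy as np

def match_probability(fX, fY, T=1e-2):
    """Equation (2): phi[i, j] is the probability that source key point i
    matches target key point j; each row is softmax-normalised over targets."""
    logits = fX @ fY.T / T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
phi = match_probability(rng.normal(size=(10, 16)), rng.normal(size=(12, 16)))
```

A small temperature T sharpens the distribution, pushing each row toward a hard one-to-one match.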
Next, since the two point clouds only partially overlap, only some points have corresponding relationships. Scanner sensor noise and differences in point density also cause encoding and matching errors. To reduce the influence of outliers, the RANSAC method is applied before the transformation matrix is estimated.
The training process optimizes the encoder by minimizing the loss function [31], which maximizes the matching probability of corresponding points and minimizes that of non-corresponding points under the ground-truth transformation, so that the transformation matrix between the source and target point clouds can be obtained and coarse registration achieved. The initial values of the hyper-parameter T in Equation (2) and of the loss-function hyper-parameter are set to 1 × 10⁻² and 10, respectively.
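A minimal sketch of the RANSAC outlier-rejection and rigid-transform estimation step, assuming putative correspondences are already given as paired arrays; the SVD-based least-squares fit is the standard Kabsch solution, not code from the paper, and the thresholds are illustrative:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def ransac_transform(src, dst, iters=200, thresh=1.0, seed=0):
    """Reject wrong matches: fit on random 3-point samples, keep the model
    with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inl = err < thresh
        if best_inliers is None or inl.sum() > best_inliers.sum():
            best_inliers = inl
    return kabsch(src[best_inliers], dst[best_inliers])

rng = np.random.default_rng(2)
src = rng.normal(size=(100, 3)) * 10
th = np.radians(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([5.0, -3.0, 2.0])
dst = src @ R_true.T + t_true
dst[:20] += rng.normal(size=(20, 3)) * 50.0   # corrupt 20% of the matches
R_est, t_est = ransac_transform(src, dst)
```

Even with a fifth of the matches badly wrong, the minimal 3-point hypotheses let the consensus set recover the clean transform.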

2.3. Pre- and Intra-Operative Registration of Cross-Source and Low Overlapping Ratio Point Clouds

In practical applications, the low overlapping ratio and large density difference present a significant challenge to identifying correct correspondences and achieving successful alignment. To address this issue, the CFL-PM method is combined with FPS to achieve pre- and intra-operative registration.
Firstly, the intra-operative point cloud is down-sampled to a density similar to that of the pre-operative point cloud. For the pre-operative point cloud Q, the FPS method is employed to obtain a set of sampling points Q′ (Figure 3a). A neighborhood search is then performed to divide the pre-operative point cloud into multiple local regions, which form a candidate set (Figure 3b). The search radius is set to r, and for each point p_i in the sampled points, its corresponding 3D region P_{p_i} can be expressed as Equation (3).
P_{p_i} = { p | p ∈ Q, p_i ∈ Q′, ‖p_i − p‖ ≤ r }
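The FPS sampling and the region construction of Equation (3) can be sketched as follows; `k=25` matches the number of local regions used in the experiments, while the radius value and point counts are illustrative:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from those already chosen."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    d = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(d.argmax())
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

def local_regions(Q, k=25, r=20.0):
    """Equation (3): for each FPS sample p_i, the region holds every point
    of Q within radius r of p_i."""
    samples = farthest_point_sampling(Q, k)
    return [Q[np.linalg.norm(Q - Q[i], axis=1) <= r] for i in samples]

Q = np.random.default_rng(0).normal(size=(500, 3)) * 30.0
regions = local_regions(Q, k=25, r=20.0)
```

FPS spreads the region centers across the whole cloud, so the candidate set covers the pre-operative surface without clustering.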
Next, the candidate set is traversed: based on the trained model, the depth features of each local region and of the intra-operative point cloud are computed, feature matching and pose estimation are performed, and the rotation and translation errors of the coarse alignment are then calculated [36], as shown in Equations (4) and (5):
ΔT = T (T_G)⁻¹ = [ΔR Δt; 0 1]
e_r = arccos((tr(ΔR) − 1) / 2),  e_t = ‖Δt‖
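Equations (4) and (5) in code, taking 4 × 4 homogeneous matrices for the estimated and ground-truth transforms:

```python
import numpy as np

def registration_errors(T_est, T_gt):
    """Equations (4)-(5): pose error of the estimate relative to ground truth.
    Returns e_r = arccos((trace(dR) - 1) / 2) in degrees and e_t = ||dt||."""
    dT = T_est @ np.linalg.inv(T_gt)
    dR, dt = dT[:3, :3], dT[:3, 3]
    cos = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)  # guard rounding
    return np.degrees(np.arccos(cos)), np.linalg.norm(dt)

# Example: a pure 30-degree rotation about z plus a translation of norm 3.
th = np.radians(30.0)
T = np.eye(4)
T[:3, :3] = [[np.cos(th), -np.sin(th), 0.0],
             [np.sin(th),  np.cos(th), 0.0],
             [0.0,         0.0,        1.0]]
T[:3, 3] = [1.0, 2.0, 2.0]
e_r, e_t = registration_errors(T, np.eye(4))
```

The clip guards against floating-point values marginally outside [−1, 1] before the arccos.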
Comparing the rotation errors e_r of the matches between the intra-operative point cloud and each local region, as shown in Equation (6), the local region with the smallest error is taken as the optimal local region, and its corresponding transformation is the coarse-registration transformation between the intra- and pre-operative point clouds.
{E; T} = min{e_r^1, e_r^2, …, e_r^i, …, e_r^N},  i = 1, 2, …, N
The FPS and CFL-PM methods determine point correspondences to complete the coarse registration of the optimal local region with the intra-operative point cloud. Finally, ICP fine registration ensures better alignment despite large initial positional differences and low overlapping ratio, achieving cross-source, low-overlapping ratio pre- and intra-operative point cloud registration.

3. Results and Discussion

3.1. Evaluation Metrics

To assess the accuracy and robustness of the proposed algorithm for pre- and intra-operative point cloud registration, it is compared with three algorithms: ICP based on FPFH features (FPFH + ICP) [37], ICP based on SHOT features (SHOT + ICP) [38], and ICP based on FPS and FPFH (FPS + FPFH + ICP) [22]. The registration performance is gauged by the rotation error e_r and translation error e_t. Additionally, the runtime considered is the time taken to register two point clouds, excluding the time to load the point cloud data.

3.2. Model Training and Testing

The patients’ CT data come from the SpineWeb Dataset [39,40] and the affiliated hospital, covering lumbar and cervical vertebrae. Pre-operative point cloud data are obtained through three-dimensional reconstruction of the patient’s pre-operative CT images. The reconstructed pre-operative models were 3D printed. To simulate an actual intra-operative procedure, a COMET6 structured-light scanner scanned the surgically exposed areas of the 3D-printed phantom to acquire intra-operative point cloud data. The intra-operative point cloud contains only a single spinous process. Figure 4 shows the cumulative distribution of point cloud pairs in the dataset in terms of rotation angle R_x, R_y, R_z, and translation distance t. The rotation angles and distances are obtained from the ground-truth matrix. The ground-truth transformation matrix for the initial cross-source point cloud samples was established by manual alignment and then further refined by the ICP algorithm [41]. After data augmentation, we obtained 142 point cloud samples in total.
The dataset for training and validation comprised the 142 samples, divided into a training set and a validation set at a 3:1 ratio. Each sample consists of a pair of pre- and intra-operative point clouds and a ground-truth transformation matrix for registration. The dataset includes vertebrae with varying degrees of degeneration and anatomical morphology to ensure diversity. The test set comprises additional data, as shown in Table 1. Since the pre-operative point cloud is divided into 25 local regions (as shown in Figure 3b), the intra-operative exposed spinous process point cloud must be matched sequentially against each local region. Therefore, each test case contains 25 pre- and intra-operative point cloud pairs.
During training, the Adam optimizer is used with the learning rate set to 1 × 10⁻⁴ and halved every five epochs. The batch size and number of epochs are set to 4 and 100, respectively.
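The schedule above is plain step decay; in PyTorch it corresponds to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)` wrapped around `Adam(params, lr=1e-4)`. As a standalone sketch:

```python
def learning_rate(epoch, base_lr=1e-4, step=5, gamma=0.5):
    """Step decay used in training: the learning rate is multiplied by
    `gamma` (here halved) every `step` epochs."""
    return base_lr * gamma ** (epoch // step)
```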
The results obtained to verify the trained model and the proposed algorithm are shown in Figure 5. The coarse registration achieves a rotation error below 15° and a translation error below 10 mm, with a registration time of approximately 0.1 s.
The operating system used in the experiment was Ubuntu 18.04, conducted on a NVIDIA Tesla V100 GPU with 32 GB memory and an Intel (R) Xeon (R) Silver 4116 CPU @ 2.10 GHz. The software environment includes CUDA 11.4, Python 3.9.12, and PyTorch 1.7.1.

3.3. Experiment on Pre- and Intra-Operative Registration of Cross-Source and Low Overlapping Ratio Point Clouds

In this experiment, the trained model registers intra- and pre-operative point clouds on a noise-free dataset, and evaluates the registration performance. The test data includes three cervical vertebrae and eight lumbar vertebrae cases. The pre-operative point clouds cover complete vertebrae, while intra-operative ones focus on surgically exposed areas. Table 1 details exposed sites, overlapping ratios, and initial pose differences between the pre- and intra-operative point clouds, providing insights into the registration effectiveness and challenges.
Figure 6 illustrates the registration results of point cloud data from Table 1 under various initial poses. The first column displays the initial positions of the pre- and intra-operative point clouds. The second column shows the initial pose of the intra-operative point cloud and pre-operative spinous point cloud selected through FPS. The third column displays the coarse registration results obtained from the trained model. The fourth column reveals the fine registration results after ICP refinement. The fifth column visualizes the ultimate registration result.
Table 2 compares the accuracy and efficiency of the proposed algorithm and the other three algorithms. The results show that the proposed algorithm outperforms the others. Under FPFH + ICP and SHOT + ICP, Data 1, 3, 6, and 8 cannot be successfully registered. Data 9 fails under FPFH + ICP, and Data 4 and 11 fail under SHOT + ICP. For Data 1, where the C1 spinous process is exposed, FPS + FPFH + ICP selected the best local region but struggled to align accurately during the ICP stage. In contrast, the proposed algorithm successfully registered all datasets. For the point cloud data that all methods register successfully, at the coarse registration stage the average rotation and translation errors of FPFH + ICP are 18.891° and 24.785 mm, respectively; for SHOT + ICP, 20.392° and 21.957 mm; and for FPS + FPFH + ICP, 9.317° and 9.326 mm. Under the proposed algorithm, the average rotation error is 5.679° and the translation error is 7.420 mm. The proposed algorithm’s coarse registration error is the smallest, providing a better initial pose for fine registration and accelerating convergence. Its final average rotation and translation errors are 0.342° and 0.268 mm, respectively, meeting clinical requirements. In terms of speed, the proposed algorithm is the fastest, taking only 9.862 s on average, compared with much longer times for the other algorithms.

3.4. Robustness Evaluation

To evaluate the robustness of the proposed algorithm to noise, Gaussian noise with different standard deviations is added to the intra-operative point cloud, and the results are compared with registration algorithms based on FPFH and SHOT. The intra-operative point cloud used in the experiment still consists of scans of the exposed surgical site. The pre-operative point cloud data are manually extracted from the complete lumbar and cervical vertebrae point clouds, specifically targeting the spinous processes of the respective vertebrae. The initial position and orientation obtained from the previous experiment are maintained, and the experimental data remain the same as in Table 1. Figure 7 visualizes the effect of adding noise with standard deviations of 0.5 mm, 0.75 mm, and 1.0 mm to the intra-operative point cloud of Data 1.
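The noise model used in this evaluation is simple to reproduce; a sketch assuming i.i.d. zero-mean Gaussian perturbation of every coordinate:

```python
import numpy as np

def add_gaussian_noise(points, std_mm, seed=0):
    """Perturb each coordinate with zero-mean Gaussian noise of standard
    deviation `std_mm`, matching the 0.5 / 0.75 / 1.0 mm robustness levels."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(scale=std_mm, size=points.shape)

clean = np.zeros((10000, 3))
noisy = add_gaussian_noise(clean, std_mm=1.0)
```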
The experiment measured the rotation error e_r and translation error e_t of pre- and intra-operative point cloud registration under noise of different standard deviations δ, as shown in Table 3. The results show that the registration errors of all three feature-matching algorithms increase with noise. Both the FPFH-based and SHOT-based coarse registrations produce results with errors greater than 60° or 60 mm. For cases where the coarse registration error stays below those bounds, the mean coarse registration error across different δ is smallest for the CFL-PM algorithm: the mean e_r is 7.28° and the mean e_t is 9.08 mm. The CFL-PM algorithm registers stably with or without noise and is more robust than the other two feature-matching algorithms.
Figure 8 visualizes the coarse registration results for the pre- and intra-operative point clouds of Data 9 with a Gaussian noise standard deviation of 1.0 mm, showing more intuitively that the proposed algorithm improves registration robustness.

4. Conclusions

This study presented a novel coarse-to-fine registration framework to address the critical challenges of large initial pose differences, low overlap, and noise in surgical navigation-assisted pedicle screw fixation. The proposed CFL-PM algorithm, rooted in curvature feature learning, demonstrated its efficacy as a robust solution for the critical coarse registration stage. The experimental results confirmed that the proposed method achieves superior accuracy (e.g., 0.34° rotation error and 0.27 mm translation error under ideal conditions), computational efficiency, and robustness to noise. Beyond these technical metrics, the proposed registration framework holds potential for enhancing clinical workflows in spinal surgery. For patient counseling, surgeons can explain surgical plans more effectively, helping to alleviate patient anxiety. For intra-operative judgment, the proposed method achieves robust and precise registration between pre- and intra-operative point clouds even under challenging conditions of noise and low overlap, so surgeons can make decisions more confidently based on the pre-determined optimal pedicle screw trajectory. Daly [42] proposed a markerless tracking system based on a clinical RGB-D camera that captures the point cloud of the intra-operative spine; the proposed CFL-PM algorithm could be utilized in this scenario to extract patient-specific spinal features and achieve precise registration between pre-operative and intra-operative point clouds. This would enable augmented visualization of the patient’s surgical field during surgical navigation-assisted pedicle screw fixation, allowing the procedure to be executed according to the screw trajectory planned pre-operatively.
Despite these encouraging results, several limitations of the current work should be acknowledged. Although the dataset used is sufficient for developing the algorithm, further validation through prospective clinical trials and larger datasets is necessary to establish its generalization across different patient groups and surgical environments. Future work may focus on integrating this algorithm into a clinical navigation platform and validating its benefits through clinical studies. At the same time, its applicability to other orthopedic navigation procedures beyond pedicle screw placement will also be explored.

Author Contributions

Conceptualization, L.Z. and W.W.; methodology, L.Z., B.W. and N.Z.; software, L.Z.; validation, L.Z. and T.L.; formal analysis, L.Z.; investigation, L.Z., B.W. and N.Z.; resources, W.W.; data curation, L.Z.; writing—original draft preparation, L.Z. and J.G.; writing—review and editing W.W., J.G., B.W. and N.Z.; visualization, L.Z. and T.L.; supervision, B.W. and N.Z.; project administration, B.W.; funding acquisition, B.W., N.Z. and W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by Beijing Natural Science Foundation (No. 4232002), Beijing Municipal Education Commission’s Research Program (No. KZ20231002537), Chinese Institutes for Medical Research, Beijing (No. CX24PY10) and National Natural Science Foundation of China (No. 61672362).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Medical Ethics Committee of Clinical Trials, Capital Medical University (2022SY127).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are available from the corresponding author upon reasonable request, subject to the approval of the Medical Ethics Committee of Capital Medical University.

Acknowledgments

The authors would like to acknowledge all funds received in support of this research. The authors would also like to thank the Affiliated Hospital of Capital Medical University for providing the clinical CT scan dataset acquired at their institution.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yan, K.; Zhang, Q.; Liu, B.; He, D.; Liu, Y.J.; Tian, W. The Clinical Application of Tianji II Robot-assisted Pedicle Screw Internal Fixation for Thoracolumbar Spine. Beijing Biomed. Eng. 2022, 41, 297–301. [Google Scholar]
  2. Smith, A.D.; Teague, A.J.; Naik, A.; Janbahan, M.; Smith, E.J.; Krist, D.T.; Parupalli, S.; Teal, K.; Hassaneen, W. Robotic External Ventricular Drain Placement for Acute Neurosurgical Care in Low-Resource Settings: Feasibility Considerations and a Prototype Design. Neurosurg. Focus 2022, 52, E14. [Google Scholar] [CrossRef]
  3. Wu, B.; Liu, P.; Xiong, C.; Li, C.; Zhang, F.; Shen, S.; Shao, P.; Yao, P.; Niu, C.; Xu, R. Stereotactic Co-Axial Projection Imaging for Augmented Reality Neuro-navigation: A Proof-Of-Concept Study. Quant. Imaging Med. Surg. 2022, 12, 3792–3802. [Google Scholar] [CrossRef]
  4. Li, Y.H.; Jiang, S.; Yang, Z.Y.; Yang, S.; Zhou, Z.Y. Microscopic augmented reality calibration with contactless line-structured light registration for surgical navigation. Med. Biol. Eng. Comput. 2025, 63, 1463–1479. [Google Scholar] [CrossRef] [PubMed]
  5. Tu, P.X.; Qin, C.X.; Guo, Y.; Li, D.Y.; Lungu, A.J.; Wang, H.X. Ultrasound Image Guided and Mixed Reality-Based Surgical System with Real-Time Soft Tissue Deformation Computing for Robotic Cervical Pedicle Screw Placement. IEEE Trans. Biomed. Eng. 2022, 69, 2593–2603. [Google Scholar] [CrossRef]
  6. Liu, Z.; Hsieh, C.; Hsu, W.; Tseng, C.; Chang, C. Two-Dimensional C-arm Robotic Navigation System (i-Navi) in Spine Surgery: A Pilot Study. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 2281–2290. [Google Scholar] [CrossRef] [PubMed]
  7. Li, D.; Wang, M. A 3D Image Registration Method for Laparoscopic Liver Surgery Navigation. Electronics 2022, 11, 1670. [Google Scholar] [CrossRef]
  8. Zhang, X.H.; Otoo, E.M.; Fan, Y.B.; Tao, C.J.; Wang, T.M.; Rhode, K. Autostereoscopic 3D Augmented Reality Navigation for Laparoscopic Surgery: A Preliminary Assessment. IEEE Trans. Biomed. Eng. 2022, 70, 1413–1421. [Google Scholar] [CrossRef]
  9. Maharjan, N.; Alsadoon, A.; Prasad, P.; Abdullah, S.; Rashid, T.A. A Novel Visualization System of Using Augmented Reality in Knee Replacement Surgery: Enhanced Bidirectional Maximum Correntropy Algorithm. Int. J. Med. Robot. Comput. Assist. Surg. 2021, 17, e2223. [Google Scholar] [CrossRef]
  10. Saadat, S.; Perriman, D.; Scarvell, L.M.; Smith, P.N.; Galvin, C.R.; Lynch, J.; Pickering, M.R. An Efficient Hybrid Method for 3D to 2D Medical Image Registration. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1313–1320. [Google Scholar] [CrossRef]
  11. Jiang, L.; Shao, L.J.; Wu, J.Y.; Xu, X.F.; Chen, X.R.; Zhang, S.L. LL-MAROCO: A Large Language Model-Assisted Robotic System for Oral and Craniomaxillofacial Osteotomy. Bioengineering 2025, 12, 629. [Google Scholar] [CrossRef]
  12. Aukema, L.M.N.; Geer, A.F.D.; Alphen, M.J.A.; Schreuder, W.H.; Veen, R.L.P.; Ruers, T.J.M.; Siepel, F.J.; Karakullukcu, M.B. Hybrid registration of the fibula for electromagnetically navigated osteotomies in mandibular reconstructive surgery: A phantom study. Int. J. Comput. Assist. Radiol. Surg. 2025, 20, 369–377. [Google Scholar] [CrossRef]
  13. Zhang, M.S.; Wu, B.; Ye, C.; Wang, Y.; Duan, J.; Zhang, X.; Zhang, N. Multiple Instruments Motion Trajectory Tracking in Optical Surgical Navigation. Opt. Express 2019, 27, 15827–15845. [Google Scholar] [CrossRef]
  14. Zheng, G.; Kowal, J.; González Ballester, M.A.; Caversaccio, M.; Nolte, L. Registration Techniques for Computer Navigation. Curr. Orthop. 2007, 21, 170–179. [Google Scholar] [CrossRef]
  15. Yoo, H.; Sim, T. Automated Machine Learning (AutoML)-Based Surface Registration Methodology for Image-Guided Surgical Navigation System. Med. Phys. 2022, 49, 4845–4860. [Google Scholar] [CrossRef]
  16. Wei, Y.K.; Fu, Z.Y.; Zhang, D.B.; Wang, G.Q.; Zhang, C.; Xie, X. A Registration Method for Total Knee Arthroplasty Surgical Robot. In Proceedings of the IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 4–6 March 2022. [Google Scholar]
  17. Lee, D.; Choi, A.; Mun, J.H. Deep Learning-Based Fine-Tuning Approach of Coarse Registration for Ear-Nose-Throat (ENT) Surgical Navigation Systems. Bioengineering 2024, 11, 941. [Google Scholar] [CrossRef] [PubMed]
  18. Liu, W.; Sun, W.; Wang, S.; Liu, Y. Coarse Registration of Point Clouds with Low Overlap Rate On Feature Regions. Signal Process. Image Commun. 2021, 98, 116428. [Google Scholar] [CrossRef]
  19. Wang, J.; Wu, B.; Kang, J. Registration of 3D Point Clouds Using a Local Descriptor Based On Grid Point Normal. Appl. Opt. 2021, 60, 8818–8828. [Google Scholar] [CrossRef] [PubMed]
  20. Zhang, Y.R.; Xu, J.B.; Zou, Y.N.; Liu, P.X. A Point Cloud Registration Method Based on Segmenting Sphere Region Feature Descriptor and Overlapping Region Matching Strategy. IEEE Sens. J. 2024, 24, 38387–38401. [Google Scholar] [CrossRef]
  21. Yan, W.Q.; Zhang, L.J.; Wang, W.; Wu, B. Preoperative and Intra-operative Point Cloud Registration Algorithm Based on SHOT and Symmetric ICP with Objective Function for Low Overlap Rates. Beijing Biomed. Eng. 2023, 42, 111–116. [Google Scholar]
  22. Zhang, L.J.; Wang, B.B.; Wang, W.; Wu, B.; Zhang, N. Point Cloud Registration Algorithm with Cross-Source and Low Overlapping Ratio for Pedicle Screw Fixation. Chin. J. Lasers 2023, 50, 0907108. [Google Scholar]
  23. Dai, Y.; Yang, X.; Hao, J.; Luo, H.L.; Mei, G.H.; Jia, F.C. Preoperative and Intraoperative Laparoscopic Liver Surface Registration Using Deep Graph Matching of Representative Overlapping Points. Int. J. Comput. Assist. Radiol. Surg. 2025, 20, 269–278. [Google Scholar] [CrossRef]
  24. Zhang, Z.Y.; Sun, J.D.; Dai, Y.C.; Fan, B.; He, M.Y. VRNet: Learning the Rectified Virtual Corresponding Points for 3D Point Cloud Registration. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 4997–5010. [Google Scholar] [CrossRef]
  25. Li, R.X.; Cai, Y.Y.; Davoodi, A.; Borghesan, G.; Vander Poorten, E. 3D Ultrasound Shape Completion and Anatomical Feature Detection for Minimally Invasive Spine Surgery. Med. Biol. Eng. Comput. 2025; in press. [Google Scholar] [CrossRef]
  26. Wang, L.P.; Yang, B.; Ye, H.L.; Cao, F.L. Two-view Point Cloud Registration Network: Feature and Geometry. Appl. Intell. 2024, 54, 3135–3151. [Google Scholar] [CrossRef]
  27. Huang, S.; Gojcic, Z.; Usvyatsov, M.; Wieser, A.; Schindler, K. PREDATOR: Registration of 3D Point Clouds with Low Overlap. In Proceedings of the 2021 Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021. [Google Scholar]
  28. Zhang, Z.; Chen, G.; Wang, X.; Wu, H. Sparse and Low-Overlapping Point Cloud Registration Network for Indoor Building Environments. J. Comput. Civ. Eng. 2021, 35, 402006901–402006913. [Google Scholar] [CrossRef]
  29. Zhou, R.; Li, X.; Jiang, W. SCANet: A Spatial and Channel Attention based Network for Partial-to-Partial Point Cloud Registration. Pattern Recognit. Lett. 2021, 151, 120–126. [Google Scholar] [CrossRef]
  30. Zhang, W.; Zhang, Y.; Li, J. A Two-Stage Correspondence-Free Algorithm for Partially Overlapping Point Cloud Registration. Sensors 2022, 22, 5023. [Google Scholar] [CrossRef] [PubMed]
  31. Arnold, E.; Mozaffari, S.; Dianati, M. Fast and Robust Registration of Partially Overlapping Point Clouds. IEEE Robot. Autom. Lett. 2021, 7, 1502–1509. [Google Scholar] [CrossRef]
  32. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  33. Liao, Y.W.; Xiao, C.X.; Zhou, M.H.; Peng, Q.S. Estimation of Principal Curvatures and Principal Normals in Point Sampled Surfaces. In Proceedings of the Second National Conference on Geometric Design and Computing, Huangshan, China, 15–18 April 2005. [Google Scholar]
  34. He, Y.; Yang, J.; Hou, X.; Pang, S.Y.; Chen, J. ICP Registration with DCA Descriptor for 3D Point Clouds. Opt. Express 2021, 29, 20423. [Google Scholar] [CrossRef]
  35. Xie, T.; Grossman, J.C. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Phys. Rev. Lett. 2018, 120, 145301.1–145301.6. [Google Scholar] [CrossRef]
  36. Ghorbani, F.; Ebadi, H.; Pfeifer, N.; Sedaghat, A. Uniform and Competency-Based 3D Keypoint Detection for Coarse Registration of Point Clouds with Homogeneous Structure. Remote Sens. 2022, 14, 4099. [Google Scholar] [CrossRef]
  37. Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D Registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009; pp. 3212–3217. [Google Scholar]
  38. Salti, S.; Tombari, F.; Stefano, L.D. SHOT: Unique Signatures of Histograms for Surface and Texture Description. Comput. Vis. Image Underst. 2014, 125, 251–264. [Google Scholar] [CrossRef]
  39. Aslan, M.S.; Ali, A.; Rara, H.; Arnold, B.; Farag, A.A.; Fahmi, R.; Xiang, P. A Novel 3D Segmentation of Vertebral Bones from Volumetric CT Images Using Graph Cuts. In Proceedings of the 5th International Symposium on Advances in Visual Computing (ISVC), Las Vegas, NV, USA, 30 November–2 December 2009; pp. 519–528. [Google Scholar]
  40. Aslan, M.S.; Shalaby, A.; Farag, A.A. Clinically Desired Segmentation Method for Vertebral Bodies. In Proceedings of the IEEE 10th International Symposium on Biomedical Imaging, San Francisco, CA, USA, 7–11 April 2013; pp. 840–843. [Google Scholar]
  41. Dong, Z.; Yang, B.; Liang, F.; Huang, R.; Scherer, S. Hierarchical Registration of Unordered TLS Point Clouds Based on Binary Shape Context Descriptor. J. Photogramm. Remote Sens. 2018, 144, 61–79. [Google Scholar] [CrossRef]
  42. Daly, C.; Marconi, E.; Riva, M.; Ekanayake, J.; Elson, D.S.; Rodriguez y Baena, F. Towards Markerless Intraoperative Tracking of Deformable Spine Tissue. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI, Daejeon, Republic of Korea, 23–27 September 2025; Volume 15968, pp. 627–637. [Google Scholar]
Figure 1. Pre- and intra-operative point clouds registration algorithm framework based on curvature feature encoder and graph attention network.
Figure 2. Point-wise feature encoder model architecture. (n: the number of output points; r: sampling radius; L: perceptron layers and nodes).
Figure 3. Pre-operative point cloud sampling points and the corresponding local regions. (a) Sampling points; (b) local area corresponding to sampling points.
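The sampling points shown in Figure 3 are produced by Farthest Point Sampling (FPS), which the framework uses to select coarse-registration keypoints. As a minimal sketch only — the function and variable names below are ours, not the authors' implementation — greedy FPS repeatedly picks the point farthest from the already-selected set:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: iteratively pick the point farthest from the chosen set."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(pts)))]          # random starting point
    dist = np.linalg.norm(pts - pts[idx[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))               # farthest from current set
        idx.append(nxt)
        # keep, for every point, its distance to the nearest selected point
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return np.array(idx)
```

Because each new point maximizes the minimum distance to the selected set, FPS covers the vertebral surface more uniformly than random subsampling, which is why it pairs well with per-point curvature features.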
Figure 4. Cumulative distribution of rotation angles and relative distances between two point cloud coordinate systems. (a) Cumulative distribution of rotation angles; (b) cumulative distribution of relative distances.
Figure 5. Cumulative distribution of rotation error, translation error, and coarse registration time on the validation set. (a) Cumulative distribution of rotation error; (b) cumulative distribution of translation error; (c) cumulative distribution of time.
Figure 6. Registration results of pre- and intra-operative point clouds with various initial poses (red indicates intra-operative point cloud; blue indicates pre-operative point cloud).
Figure 7. Intra-operative point clouds with different levels of Gaussian noise. (a) Point cloud with a standard deviation of 0.5 mm; (b) point cloud with a standard deviation of 0.75 mm; (c) point cloud with a standard deviation of 1.0 mm.
Figure 8. Coarse registration results with Gaussian noise standard deviation of 1. (a) Coarse registration based on SHOT; (b) coarse registration based on FPFH; (c) coarse registration based on CFL-PM. (Red indicates intra-operative point cloud after adding Gaussian noise; Green indicates pre-operative point cloud).
Table 1. Exposed sites and initial pose differences between pre- and intra-operative point clouds.
| Data | Exposed Site | Overlapping Ratio (%) | Rotation X (°) | Rotation Y (°) | Rotation Z (°) | Distance X (mm) | Distance Y (mm) | Distance Z (mm) |
|---|---|---|---|---|---|---|---|---|
| 1 | C1 | 2.04 | −93.94 | −83.85 | 150.77 | 3.63 | 21.99 | −31.57 |
| 2 | C2 | 1.98 | 78.07 | −51.48 | 10.12 | −48.17 | 60.95 | 37.35 |
| 3 | C2–C3 | 4.27 | −169.52 | −55.43 | −136.57 | 5.81 | 6.19 | 33.71 |
| 4 | L1 | 1.35 | −76.59 | 68.13 | −88.32 | −98.28 | −59.36 | 26.07 |
| 5 | L1 | 1.66 | 89.39 | 58.15 | 82.35 | −57.78 | −42.73 | 38.71 |
| 6 | L1 | 1.67 | 96.94 | 22.41 | 82.55 | −89.11 | 45.34 | 42.65 |
| 7 | L1 | 1.79 | 80.76 | −62.50 | −48.06 | −52.53 | 110.76 | −86.79 |
| 8 | L2 | 1.53 | 89.98 | −73.34 | −78.47 | 6.99 | 102.23 | 0.95 |
| 9 | L2 | 1.69 | 148.53 | 48.47 | 138.47 | 114.08 | −64.99 | −103.52 |
| 10 | L3 | 1.85 | 161.82 | −83.28 | −116.87 | −4.17 | −83.28 | −91.19 |
| 11 | L3 | 2.29 | 109.48 | 28.19 | 79.30 | −55.67 | 7.23 | 40.84 |
Table 2. Comparison of registration errors and runtime for pre- and intra-operative point clouds under different algorithms.
| Data | Exposed Site | Algorithm | Coarse e_r (°) | Coarse e_t (mm) | Coarse Time (s) | Fine e_r (°) | Fine e_t (mm) | Fine Time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 | C1 | [37] + ICP | / | / | / | / | / | / |
| 1 | C1 | [38] + ICP | / | / | / | / | / | / |
| 1 | C1 | [22] | 23.75 | 29.42 | 102.95 | / | / | / |
| 1 | C1 | Proposed | 10.11 | 19.04 | 8.11 | 0.43 | 0.37 | 0.33 |
| 2 | C2 | [37] + ICP | 36.64 | 44.52 | 27.56 | 0.43 | 0.23 | 0.40 |
| 2 | C2 | [38] + ICP | 29.37 | 15.03 | 37.56 | 0.32 | 0.31 | 0.34 |
| 2 | C2 | [22] | 7.43 | 8.97 | 115.30 | 0.21 | 0.37 | 0.20 |
| 2 | C2 | Proposed | 6.32 | 7.28 | 7.89 | 0.30 | 0.24 | 0.25 |
| 3 | C2–C3 | [37] + ICP | / | / | / | / | / | / |
| 3 | C2–C3 | [38] + ICP | / | / | / | / | / | / |
| 3 | C2–C3 | [22] | 18.89 | 23.73 | 124.56 | 1.07 | 0.83 | 0.47 |
| 3 | C2–C3 | Proposed | 2.30 | 0.70 | 9.85 | 0.43 | 0.36 | 0.30 |
| 4 | L1 | [37] + ICP | 19.82 | 40.49 | 47.01 | 0.99 | 0.41 | 0.35 |
| 4 | L1 | [38] + ICP | 20.62 | 35.22 | 64.90 | / | / | / |
| 4 | L1 | [22] | 8.43 | 4.84 | 115.18 | 0.25 | 0.85 | 0.08 |
| 4 | L1 | Proposed | 4.03 | 3.91 | 10.18 | 0.38 | 0.66 | 0.09 |
| 5 | L1 | [37] + ICP | 2.71 | 1.16 | 43.25 | 0.28 | 0.27 | 0.29 |
| 5 | L1 | [38] + ICP | 3.02 | 1.81 | 57.61 | 0.41 | 0.58 | 0.31 |
| 5 | L1 | [22] | 2.79 | 3.00 | 108.35 | 0.17 | 0.19 | 0.05 |
| 5 | L1 | Proposed | 5.36 | 7.38 | 10.68 | 0.37 | 0.10 | 0.04 |
| 6 | L1 | [37] + ICP | / | / | / | / | / | / |
| 6 | L1 | [38] + ICP | / | / | / | / | / | / |
| 6 | L1 | [22] | 4.12 | 8.51 | 113.54 | 0.59 | 0.60 | 0.06 |
| 6 | L1 | Proposed | 4.52 | 5.54 | 9.84 | 0.38 | 0.41 | 0.04 |
| 7 | L1 | [37] + ICP | 10.28 | 26.16 | 26.96 | 0.33 | 3.50 | 0.45 |
| 7 | L1 | [38] + ICP | 12.84 | 14.71 | 46.39 | 0.39 | 1.52 | 0.57 |
| 7 | L1 | [22] | 1.60 | 1.26 | 129.48 | 0.88 | 0.51 | 0.15 |
| 7 | L1 | Proposed | 6.83 | 11.45 | 9.88 | 0.30 | 0.25 | 0.18 |
| 8 | L2 | [37] + ICP | / | / | / | / | / | / |
| 8 | L2 | [38] + ICP | / | / | / | / | / | / |
| 8 | L2 | [22] | 4.84 | 3.11 | 115.44 | 0.31 | 0.47 | 0.31 |
| 8 | L2 | Proposed | 5.93 | 5.86 | 9.96 | 0.49 | 0.11 | 0.21 |
| 9 | L2 | [37] + ICP | / | / | / | / | / | / |
| 9 | L2 | [38] + ICP | 36.45 | 43.43 | 58.97 | 0.60 | 0.78 | 0.69 |
| 9 | L2 | [22] | 14.81 | 9.81 | 92.30 | 0.45 | 0.32 | 0.07 |
| 9 | L2 | Proposed | 6.89 | 7.99 | 11.10 | 0.31 | 0.16 | 0.06 |
| 10 | L3 | [37] + ICP | 35.13 | 26.52 | 47.65 | 0.33 | 0.38 | 0.44 |
| 10 | L3 | [38] + ICP | 19.97 | 21.54 | 55.70 | 0.71 | 0.66 | 0.40 |
| 10 | L3 | [22] | 9.27 | 9.80 | 117.78 | 0.35 | 0.32 | 0.26 |
| 10 | L3 | Proposed | 2.76 | 4.21 | 9.75 | 0.26 | 0.14 | 0.26 |
| 11 | L3 | [37] + ICP | 8.78 | 9.86 | 44.42 | 1.00 | 0.41 | 0.35 |
| 11 | L3 | [38] + ICP | / | / | / | / | / | / |
| 11 | L3 | [22] | 8.21 | 3.54 | 89.544 | 0.25 | 0.85 | 0.08 |
| 11 | L3 | Proposed | 7.43 | 8.94 | 9.486 | 0.13 | 0.13 | 0.07 |
Note: “/” indicates registration failure under the current algorithm.
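Tables 2 and 3 report the rotation error e_r and translation error e_t. The exact formulas are not reproduced in this excerpt, so the sketch below uses a common convention — stated here as an assumption, not the authors' definition — in which e_r is the geodesic angle between the estimated and ground-truth rotations and e_t is the Euclidean distance between the estimated and ground-truth translations:

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Angle of the relative rotation R_est^T R_gt, recovered from its trace."""
    R = R_est.T @ R_gt
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)  # guard rounding
    return float(np.degrees(np.arccos(cos_theta)))

def translation_error_mm(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth translations."""
    return float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))
```

For example, an estimate that is off by a 10° rotation about one axis yields e_r = 10°, and a 3 mm offset along one axis yields e_t = 3 mm.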
Table 3. Comparison of coarse registration errors with different standard deviation noise.
| Data | Exposed Site | Algorithm | e_r (°), δ = 0.25 mm | e_t (mm), δ = 0.25 mm | e_r (°), δ = 0.5 mm | e_t (mm), δ = 0.5 mm | e_r (°), δ = 0.75 mm | e_t (mm), δ = 0.75 mm | e_r (°), δ = 1.0 mm | e_t (mm), δ = 1.0 mm |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | C1 | [37] | 27.32 | 22.97 | 28.89 | 23.18 | / | / | / | / |
| 1 | C1 | [38] | 35.42 | 25.16 | 27.83 | 14.04 | 52.35 | 20.30 | 40.86 | 21.12 |
| 1 | C1 | CFL-PM | 4.21 | 7.81 | 8.76 | 15.70 | 12.28 | 20.57 | 17.26 | 20.20 |
| 2 | C2 | [37] | 32.64 | 17.49 | 33.99 | 14.58 | / | / | / | / |
| 2 | C2 | [38] | 36.85 | 20.71 | / | / | / | / | / | / |
| 2 | C2 | CFL-PM | 6.36 | 5.99 | 16.32 | 5.85 | 20.18 | 10.03 | 25.24 | 8.51 |
| 3 | C2–C3 | [37] | 11.25 | 3.81 | 20.66 | 6.28 | / | / | / | / |
| 3 | C2–C3 | [38] | 23.61 | 5.45 | 23.38 | 8.43 | 18.48 | 6.20 | 23.01 | 9.20 |
| 3 | C2–C3 | CFL-PM | 7.61 | 5.03 | 11.68 | 5.96 | 7.52 | 1.72 | 20.49 | 9.01 |
| 4 | L1 | [37] | 33.32 | 12.02 | 20.08 | 25.96 | 41.76 | 23.17 | 31.08 | 46.65 |
| 4 | L1 | [38] | / | / | / | / | / | / | / | / |
| 4 | L1 | CFL-PM | 3.22 | 3.08 | 2.32 | 2.20 | 3.93 | 4.07 | 7.55 | 10.19 |
| 5 | L1 | [37] | / | / | / | / | / | / | / | / |
| 5 | L1 | [38] | / | / | / | / | / | / | / | / |
| 5 | L1 | CFL-PM | 4.73 | 6.61 | 2.24 | 2.58 | 3.52 | 1.64 | 9.58 | 16.90 |
| 6 | L1 | [37] | 5.78 | 1.44 | 15.12 | 4.23 | 36.62 | 7.30 | / | / |
| 6 | L1 | [38] | 3.97 | 0.98 | 52.64 | 6.93 | 15.50 | 1.73 | 21.63 | 7.08 |
| 6 | L1 | CFL-PM | 3.24 | 0.30 | 3.99 | 1.21 | 3.10 | 5.04 | 6.26 | 2.84 |
| 7 | L1 | [37] | 20.28 | 24.90 | 46.56 | 27.81 | 19.68 | 24.23 | / | / |
| 7 | L1 | [38] | 13.91 | 16.95 | 35.54 | 24.44 | 6.40 | 14.67 | / | / |
| 7 | L1 | CFL-PM | 8.48 | 14.73 | 2.49 | 11.77 | 5.88 | 11.11 | 7.60 | 24.5 |
| 8 | L2 | [37] | 10.77 | 4.11 | 21.12 | 14.54 | 21.09 | 14.35 | / | / |
| 8 | L2 | [38] | 7.42 | 6.34 | 12.54 | 5.52 | 9.02 | 7.93 | 20.51 | 15.04 |
| 8 | L2 | CFL-PM | 2.52 | 2.10 | 4.56 | 3.46 | 5.54 | 6.99 | 5.66 | 9.31 |
| 9 | L2 | [37] | 26.79 | 13.78 | 22.16 | 16.58 | 33.39 | 15.64 | 21.18 | 16.43 |
| 9 | L2 | [38] | 14.41 | 5.40 | 13.98 | 6.80 | 35.32 | 19.01 | 23.75 | 14.79 |
| 9 | L2 | CFL-PM | 1.79 | 1.40 | 3.57 | 1.06 | 5.18 | 9.19 | 6.28 | 9.09 |
| 10 | L3 | [37] | 48.27 | 47.20 | 47.43 | 43.02 | 19.83 | 3.28 | 52.49 | 46.78 |
| 10 | L3 | [38] | 22.96 | 31.99 | 15.56 | 36.17 | 55.52 | 54.19 | 34.00 | 44.69 |
| 10 | L3 | CFL-PM | 8.98 | 28.11 | 2.37 | 24.78 | 6.63 | 31.85 | 5.26 | 29.11 |
| 11 | L3 | [37] | 10.78 | 7.30 | 16.41 | 2.27 | 26.03 | 7.52 | 52.79 | 10.39 |
| 11 | L3 | [38] | 9.67 | 5.71 | 5.49 | 0.77 | 6.45 | 1.20 | 16.98 | 3.18 |
| 11 | L3 | CFL-PM | 4.27 | 0.48 | 9.43 | 1.45 | 3.41 | 1.57 | 8.98 | 4.67 |
| Mean | — | [37] | 22.72 | 15.50 | 27.24 | 17.85 | 28.34 | 17.93 | 39.38 | 30.06 |
| Mean | — | [38] | 18.69 | 13.19 | 23.37 | 12.89 | 24.88 | 15.65 | 25.82 | 16.44 |
| Mean | — | CFL-PM | 5.04 | 6.88 | 6.16 | 6.91 | 7.02 | 9.43 | 10.92 | 13.12 |
Note: “/” indicates a rotation error greater than 60° or a translation error greater than 60 mm.

Share and Cite

MDPI and ACS Style

Zhang, L.; Wang, W.; Liu, T.; Guo, J.; Wu, B.; Zhang, N. A Coarse-to-Fine Framework with Curvature Feature Learning for Robust Point Cloud Registration in Spinal Surgical Navigation. Bioengineering 2025, 12, 1096. https://doi.org/10.3390/bioengineering12101096
