Article

Highly Accelerated Dual-Pose Medical Image Registration via Improved Differential Evolution

School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
*
Author to whom correspondence should be addressed.
Sensors 2025, 25(15), 4604; https://doi.org/10.3390/s25154604
Submission received: 7 June 2025 / Revised: 5 July 2025 / Accepted: 22 July 2025 / Published: 25 July 2025
(This article belongs to the Special Issue Recent Advances in X-Ray Sensing and Imaging)

Abstract

Medical image registration is an indispensable preprocessing step that aligns medical images to a common coordinate system before in-depth analysis, and registration precision is critical to the subsequent analysis. In addition to representative image features, the initial pose settings and the presence of multiple poses in images significantly affect registration precision, a factor largely neglected in state-of-the-art works. To address this, this paper proposes a dual-pose medical image registration algorithm based on improved differential evolution. Specifically, the proposed algorithm defines a composite similarity measure based on contour points and uses it to calculate the similarity between frontal–lateral Digitally Reconstructed Radiograph (DRR) images and X-ray images. To ensure accuracy in particular dimensions, the algorithm implements a dual-pose registration strategy. A Phased Differential Evolution (PDE) algorithm is proposed for iterative optimization, enhancing the optimizer's ability to search globally in low-dimensional space and aiding the discovery of globally optimal solutions. Extensive experimental results demonstrate that the proposed algorithm provides more accurate similarity metrics than conventional registration algorithms; the dual-pose registration strategy greatly reduces errors in specific dimensions, yielding reductions of 67.04% and 71.84% in rotation and translation errors, respectively. Additionally, the algorithm's lower complexity makes it more suitable for clinical applications.

1. Introduction

With the continuous improvement of medical standards, medical imaging technology plays an increasingly important role in modern clinical medicine [1]. It has become an indispensable component of clinical diagnosis, surgical planning, and condition assessment, among other application areas. Images from different modalities provide diverse diagnostic support and functional information, and a single image does not offer sufficient data for comprehensive analysis [2]. To enhance the quality of medical diagnosis and treatment, it is typically necessary to comprehensively analyze patients' image data from multiple modalities during clinical treatment. However, relying solely on doctors' experience to analyze medical images from different modalities is challenging. Medical image registration technology addresses this problem by mapping image information from multiple modalities into optimal spatial correspondence through geometric transformations. Medical image registration serves as the foundation and prerequisite for medical image fusion, holding significant value in both medical research and practical clinical applications [3,4,5].
Numerous medical image registration techniques have been proposed by domestic and international scholars to enhance the accuracy of 2D-3D medical image registration. These algorithms can be broadly categorized into two groups: those based on image features and those relying on image grayscale [6]. To accurately locate and track the patient’s lesions [7], feature-based methods typically depend on internal inherent features and markers that are either implanted in the patient’s body or applied to the skin’s surface before surgery [8]. Regodić M et al. [9] achieved image registration by automatically identifying pairs of skin-attached marker points to expedite the registration process. However, this approach falls short of meeting minimally invasive requirements. Yu et al. [10] proposed using a B-spline statistical transformation model to extract image features for implementing a non-rigid 2D-3D registration algorithm. Nevertheless, the model requires retraining to apply the algorithm to other sections, and the operational procedure is somewhat intricate. In general, feature-based methods do not necessitate highly accurate registration algorithms [11]. Techniques like training models or markers can be employed to retrieve spatial feature information of the lesion [12]. However, this approach is more labor-intensive in the feature extraction step, requires additional calibration equipment, and may pose risks to the patient’s body [13].
Ongoing research is focusing on medical image registration techniques based on image grayscale to address the requirements of clinical medical treatments [14]. To achieve optimal registration, the fundamental approach involves initially acquiring 2D images using Digitally Reconstructed Radiograph (DRR) technology. Subsequently, an optimization strategy is selected, and continuous iterations are performed to minimize the similarity difference between the reconstructed image and the X-ray image [15]. Various similarity measures have been proposed, including the grayscale-based SSD (Sum of Squared Differences) algorithm, the NCC (Normalized Cross Correlation) similarity algorithm, and approaches based on mutual information. Notably, recent years have witnessed the emergence of effective solutions facilitated by deep learning-based algorithms [16]. Siamese neural networks [17,18] belong to the category of few-shot learning networks, primarily achieving image matching by measuring the similarity between two input vectors. McCauley J et al. [19] used FaceNet to quantify the facial similarity of identical twins. Bai et al. [20] proposed SimGNN, a deep graph similarity learning model, with chemical compound similarity as one of its tasks. Nonetheless, deep learning models necessitate vast volumes of labeled data for training; insufficient data can make it difficult for these models to generalize to new examples, and the processing of extensive medical datasets is also time-consuming.
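As a concrete reference for the grayscale measures mentioned above, the SSD and NCC between two images can be sketched as follows; this is a minimal NumPy illustration, not code from the paper.

```python
import numpy as np

def ssd(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of Squared Differences: lower means more similar."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(np.sum((a - b) ** 2))

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized Cross Correlation: 1.0 for identical (non-constant) images."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)
```

SSD is minimized during optimization, while NCC is maximized; both assume comparable intensity distributions, which is why mutual information is preferred for multimodal pairs.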
In single-modality medical image registration, the grayscale-based method has demonstrated promising results [21]. However, in certain scenarios, relying on a single global gray value for the similarity measure may overlook crucial image details such as bone structures and muscle tissues. This can result in an algorithm lacking robustness and being susceptible to noise interference. The limitation of this method lies in its sensitivity to grayscale variations, presenting a significant challenge when dealing with multimodal or fuzzy medical images. In line with research on grayscale-based registration methods, a majority of studies focus on single-plane registration techniques. Banks et al. [22] simulated elements such as light sources and projection planes on a computer for registration, utilizing contour information to detect similarity. Although this technique yields favorable registration results, the error in the Z-axis is considerably larger than that in the X-axis and Y-axis. Investigation into the root cause of this issue reveals that the contour information extracted by the single posture method is missing relevant data in the Z-axis direction.
To address the aforementioned challenges, this paper introduces a dual-pose medical image registration algorithm grounded in enhanced differential evolution. The primary contributions of this research can be encapsulated as follows.
(1)
Dual-Pose Strategy for Dimensional Accuracy: To tackle the issue of significant errors in specific dimensions, we introduce a dual-pose strategy for medical image registration. Utilizing the prior DRR image from the frontal pose, which is derived from the single-posture image, we generate the DRR image for the lateral pose by amalgamating the transformation matrix employed for lateral pose conversion. This approach offers a novel perspective to validate the registration accuracy of the single-posture stance from a different viewpoint, effectively rectifying biases in specific dimensions.
(2)
Composite Similarity Measure for Fuzzy Image Challenges: To mitigate the interference from fuzzy images during registration, we design a composite similarity measure. This measure aims to precisely compute the composite similarity between the frontal–lateral posture DRR image and the X-ray image using contour-based similarity metrics, ensuring accurate registration.
(3)
Phased Differential Evolution (PDE) for Optimal Results: Addressing the propensity of the registration outcome to converge to local optima, we propose a Phased Differential Evolution (PDE) optimization algorithm. This iterative approach refines the objective function, continuously calculating the composite similarity between the frontal–lateral posture DRR image and the X-ray image to achieve optimal registration results.
(4)
Efficiency Enhancement via GPU-Accelerated Parallel Computation: To expedite the registration process, we employ multi-threaded parallel computation leveraging Graphics Processing Units (GPUs). This significantly boosts the efficiency of DRR image generation and minimizes data transmission overheads during the registration procedure.
Detailed descriptions of the associated algorithms follow below.

2. Method

2D-3D medical image registration aims to determine an optimal spatial position match for the medical image (floating image) through spatial transformation operations. This match ensures the alignment of the floating image with the spatial coordinates of the corresponding points on the reference image.
The medical image registration algorithm outlined in this paper comprises the following four main steps.
(1)
Acquisition of Reference Image Data: In contrast to traditional single-posture registration algorithms, this step involves acquiring X-ray image information from two frontal–lateral positions.
(2)
Generation of Frontal and Lateral DRR Images: After identifying the six-degree-of-freedom parameters of the initial pose through manual registration, DRR images for frontal and lateral poses are obtained. Utilizing the transformation matrix (LateralMat) for lateral pose projection, GPU parallel processing accelerates the rendering process to generate the DRR images.
(3)
Similarity Measure Calculation: Contour information is extracted from both the reference image and the floating image. The composite similarity between the DRR image and the reference image under the dual-pose configuration is then calculated.
(4)
Optimization Algorithm for Parameter Tuning: An optimization algorithm is employed to find the optimal parameters by identifying the smallest or largest value of the objective function.
The process of dual-pose medical image registration, based on the improved differential evolution, is illustrated in Figure 1.

3. A Dual-Pose Medical Image Registration Algorithm Based on Improved Differential Evolution

3.1. Acquisition of DRR in the Frontal and Lateral Positions

In conventional single-posture registration, a single planar image provides relatively limited information for medical image registration and proves ineffective in analyzing spatial information across all six degrees of freedom of the targeted body data [23,24]. The absence of corrective analysis for multiple postures results in significant registration errors, particularly in specific dimensions. Therefore, utilizing a dual-pose image as the reference image for registration, containing more comprehensive image and projection view information, enhances the potential for superior registration outcomes. The projection model is illustrated in Figure 2.
L_1 and L_2 represent simulated point light sources in the frontal and lateral positions, respectively. M_{3D} denotes the CT data, whose transformation is determined by the translation parameters t_X, t_Y, t_Z along the X, Y, and Z axes of the spatial coordinate system and the rotation parameters θ_X, θ_Y, θ_Z around the respective axes. DRR_f and DRR_l are the frontal and lateral DRR images generated from the CT body data after the frontal and lateral projections, respectively. When the CT body data undergo geometric transformations such as translation or rotation, the 2D image projected onto the projection plane undergoes corresponding transformations. In the registration of DRR-projected CT images and X-ray images, the core goal is to establish the spatial transformation relationship between them, so that the DRR projection accurately maps to the anatomical structures of the actual X-ray image. This process can be formalized as a parameter optimization problem: finding the optimal transformation parameters (T, R)^* that maximize the similarity S(·,·) between the transformed DRR image and the X-ray image. The mathematical expression of this optimization problem is Equation (1).
(T, R)^* = \arg\max_{(T,R)} S\left( \mathrm{DRR}\left( T(t_X, t_Y, t_Z) \cdot R(\theta_X, \theta_Y, \theta_Z) \cdot \mathrm{CT} \right), I_{\text{X-ray}} \right)
In Equation (1), T(t_X, t_Y, t_Z) denotes the translation matrix and R(θ_X, θ_Y, θ_Z) the rotation matrix; together they describe the translation and rotation relationships in the spatial transformation of the medical image. The transformation matrix (FrontalMat) is computed from the six-degree-of-freedom parameters of the frontal position as follows.
\mathrm{FrontalMat} = \mathrm{Mat}(t_X, t_Y, t_Z, \theta_X, \theta_Y, \theta_Z)
In Equation (2), Mat denotes the operator that assembles the transformation matrix from the six-degree-of-freedom parameters. LateralMat, the matrix of the lateral positional transformation, is then derived by Equation (3) from TransMat, the known frontal-to-lateral pose transformation matrix obtained from the image data. With this frontal-to-lateral transformation matrix, the DRR images of the frontal and lateral positions are finally obtained.
LateralMat = TransMat · FrontalMat
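Equations (2) and (3) can be sketched with 4×4 homogeneous matrices as follows. This is a minimal NumPy illustration: the Z-Y-X rotation order, the parameter values, and the example 90° frontal-to-lateral transform are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def rigid_matrix(tx, ty, tz, rx, ry, rz):
    """4x4 homogeneous transform from six DoF parameters (Mat in Equation (2)).

    Angles are in radians; the Rz @ Ry @ Rx composition order is an
    illustrative assumption, since the paper does not fix an order."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx        # rotation block
    M[:3, 3] = [tx, ty, tz]          # translation column
    return M

# FrontalMat from the six registration parameters (Equation (2)) ...
frontal_mat = rigid_matrix(5.0, -2.0, 10.0, 0.01, 0.02, 0.0)
# ... composed with the known frontal-to-lateral transform (Equation (3)).
trans_mat = rigid_matrix(0.0, 0.0, 0.0, 0.0, np.pi / 2, 0.0)  # 90° turn about Y
lateral_mat = trans_mat @ frontal_mat
```

Because both factors are rigid transforms, their product is again rigid, so the lateral DRR can be rendered directly from `lateral_mat` without re-estimating a second pose.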

3.2. Digitally Reconstructed Radiograph Imaging Based on GPU Parallel Acceleration

The Digitally Reconstructed Radiograph (DRR) [25] involves processing CT data through analog projection rendering to generate virtual X-ray 2D images, facilitating the conversion from 3D data to 2D images. Specifically, the core process of the DRR algorithm includes the following steps: First, the 3D CT volume data are transformed into the perspective of a virtual camera, defining the projection center, detector position, and display range. Then, virtual X-rays are emitted from each pixel of the detector, uniformly sampling the CT data along the ray direction (trilinear interpolation is commonly used to improve accuracy) and accumulating the attenuation values of points along the way. Finally, these accumulated values are converted into grayscale to generate a 2D projection image. This process is illustrated in Figure 3.
Traditional DRR algorithms are typically implemented on the CPU, and the generation process consumes a significant amount of time due to the simulation of a large number of X-rays with limited parallelism. To address the need for real-time processing, Fluck et al. [26] introduced an accelerated DRR generation method based on GPU. This approach significantly enhances the efficiency of image generation.
Since the simulated X-rays passing through the CT data do not intersect one another and the accumulated CT value is calculated in an identical manner for each ray, the computation matches the characteristics of CUDA parallel processing. Using the GPU for multi-threaded parallel computation to render multiple simulated X-rays simultaneously greatly improves the efficiency of DRR generation [27]. In traditional algorithms, efficiency is impacted by frequent read and write operations between main memory and graphics memory, and every CUDA invocation incurs an extra GPU startup overhead. To address this problem, an algorithm that generates multiple DRR images in parallel is designed: multiple sets of pose parameters are input, the pose data are transferred to graphics memory, and the DRR images corresponding to this set of poses are generated by CUDA parallel computation. Finally, the image data are sent back to main memory in sequence. The process of parallel computation is shown in Figure 4. Multiple sets of pose data are transmitted to the GPU at once, and multiple threads compute several DRR images in parallel. Compared with conventional methods, this process significantly reduces time overhead and improves execution efficiency because it requires only one GPU startup.
GPU acceleration represents an effective strategy for improving system performance and expanding its applicability, particularly in real-time processing scenarios with stringent computational requirements. Nonetheless, the system also exhibits stable performance on CPU-only platforms in resource-limited settings, reflecting its strong generalizability and adaptability across diverse deployment environments.
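The per-ray accumulation described above can be illustrated with a toy CPU stand-in: summing voxel attenuation along parallel rays and mapping the result to grayscale. This is a simplified parallel-beam sketch for intuition only; the actual method casts diverging rays from a point source with trilinear interpolation and runs the accumulation in CUDA.

```python
import numpy as np

def drr_parallel_projection(ct: np.ndarray, axis: int = 0) -> np.ndarray:
    """Toy DRR: accumulate attenuation along parallel rays.

    Summing voxels along one axis stands in for the per-ray accumulation;
    each output pixel is independent, which is what makes the real
    computation map naturally onto one GPU thread per ray."""
    accumulated = ct.astype(np.float64).sum(axis=axis)
    # Normalize accumulated attenuation and map to 8-bit grayscale.
    lo, hi = accumulated.min(), accumulated.max()
    if hi > lo:
        accumulated = (accumulated - lo) / (hi - lo)
    return (accumulated * 255).astype(np.uint8)

volume = np.random.rand(64, 64, 64)                # stand-in CT volume
frontal = drr_parallel_projection(volume, axis=0)  # frontal view
lateral = drr_parallel_projection(volume, axis=2)  # lateral view
```

Each output pixel depends only on its own ray, so a GPU version assigns one thread per pixel and, as in the batched scheme above, renders several poses per kernel launch.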

3.3. Similarity Measure

The similarity measure for medical images quantifies the degree of resemblance between a reference image and a floating image [28], providing a metric for assessing their likeness. This measure proves invaluable in the analysis and registration of multimodal images or those acquired from different devices, playing a crucial role in disease tracking, surgical navigation, and treatment plan development [29]. Fuzzy medical images, often characterized by uncertainties such as noise and artifacts, pose challenges in extracting local information. These uncertainties can hinder the accurate measurement of the degree of match between images. To address these challenges, this paper introduces a similarity measure based on contour points. The algorithm unfolds in the following steps.
In the initial step, the obtained frontal–lateral position X-ray images and frontal–lateral position DRR images undergo preprocessing. Specifically, contour extraction images are computed for both the frontal–lateral pose DRR images and the frontal–lateral pose X-ray images using the classical Canny operator [30]. The contour-extracted images derived from the DRR and X-ray images serve as the floating and reference images, respectively. The coordinates of the contour points in these images are stored in the variable S as outlined below.
S = { S 1 , S 2 , , S N }
In Equation (4), N represents the number of contour points. For each contour point of the reference image, a search region is defined, centered on the corresponding pixel coordinate in the floating image, and the search is conducted within this region. If a matching contour point is found, a weight is assigned according to its distance d_i from the center, yielding the matching score for that point. The matching score s_i for the i-th contour point is defined in Equation (5), where w_1 and w_2 are the weights for the near and far distance bands bounded by the radii a and b. The final similarity is calculated by summing the matching scores of all contour points and normalizing by the number of points, as shown in Equation (6).
s_i(d_i) = \begin{cases} 1, & d_i = 0 \\ w_1, & 0 < d_i \le a \\ w_2, & a < d_i \le b \\ 0, & d_i > b \end{cases}
\mathrm{Similarity} = \frac{1}{N} \sum_{i=1}^{N} s_i(d_i)
After the algorithm searches all the contour points of the reference image, it finally obtains the frontal-position similarity Sim_f and the lateral-position similarity Sim_l. The similarity measure fully considers the positional relation of the matched contour point pairs: it yields a high similarity value when the edge point positions of the reference and floating images differ little, and a low value otherwise. Using weighted scores for different distance bands around the central point improves the algorithm's accuracy and optimization efficiency. The measure is applied to medical images whose changes are not particularly significant and can effectively reflect subtle differences between images. Algorithm 1 is represented as follows.
Algorithm 1. Similarity measure based on contour points.
N : Number of contour points
S i r : Coordinates of the contour points of the reference image
S i t : Coordinates of the contour points of the image to be registered
1: Preprocess both images to remove noise
2: Compute contour images using the Canny operator
3: Calculate similarity:
       Set W d = 0
       for i = 1 : N do:
           Search the region around S i r for the nearest floating-image contour point and record its distance d i
           Compute the matching score s i ( d i ) by Equation (5)
           Accumulate W d = W d + s i ( d i )
       End(for)
4: Similarity = W d / N
Return: Similarity
End
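A minimal NumPy sketch of this measure follows. The radii a, b and weights w1, w2 are illustrative assumptions (the paper does not publish its values), and a brute-force nearest-point search stands in for the windowed region search.

```python
import numpy as np

def contour_similarity(ref_pts, flt_mask, a=2.0, b=5.0, w1=0.8, w2=0.4):
    """Contour-point similarity of Equations (5) and (6).

    ref_pts  : (N, 2) coordinates of the reference-image contour points.
    flt_mask : boolean contour mask of the floating image.
    For each reference point, d_i is the distance to the nearest floating
    contour point; the piecewise weights implement Equation (5)."""
    ys, xs = np.nonzero(flt_mask)
    flt_pts = np.stack([ys, xs], axis=1).astype(np.float64)
    total = 0.0
    for p in np.asarray(ref_pts, dtype=np.float64):
        d = np.sqrt(((flt_pts - p) ** 2).sum(axis=1)).min()
        if d == 0:
            total += 1.0       # exact match
        elif d <= a:
            total += w1        # near band
        elif d <= b:
            total += w2        # far band
        # d > b contributes 0
    return total / len(ref_pts)   # Equation (6): normalize by point count
```

In practice the contour masks would come from a Canny edge detector applied to the DRR and X-ray images, and the search would be restricted to a small window for speed.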

3.4. Composite Similarity

In traditional single-posture medical image registration, a single similarity metric is typically employed to assess the degree of matching between the reference image and the floating image. This approach encounters challenges in effectively handling multimodal medical images. Moreover, it exhibits high sensitivity to noise and artifacts in fuzzy images, potentially compromising the accuracy of similarity measurements and leading to suboptimal registration outcomes. To enhance the overall accuracy of image similarity assessment, particularly in the context of the dual-pose strategy, this paper introduces the concept of composite similarity—a composite measure designed to comprehensively evaluate similarity. This involves assigning weighting factors, denoted as a and b. The composite similarity is derived by assigning weights to the similarity scores calculated for the frontal and lateral positions in the preceding step, as illustrated in Equation (7).
S i m = a · Sim f + b · Sim l
The weight factors a, b ∈ (0, 1) in the equation should be chosen to appropriately reflect the significance of the two similarity measures in the final similarity. Since the frontal-pose image holds greater importance for the similarity measurement in medical image registration, while the lateral-pose image serves more as an auxiliary guide and contributes slightly less, it is established that a > b. Incorporating the composite similarity during the iterations guides the adjustment of the pose parameters toward values more conducive to accurate registration, thereby enhancing the reliability of the image similarity assessment.
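Equation (7) reduces to a weighted sum; a minimal sketch follows. The default weights 0.6/0.4 are illustrative, since the paper only requires a, b ∈ (0, 1) with a > b.

```python
def composite_similarity(sim_f: float, sim_l: float,
                         a: float = 0.6, b: float = 0.4) -> float:
    """Equation (7): weighted blend of frontal and lateral similarities.

    a > b reflects the frontal view's larger contribution; the exact
    values 0.6/0.4 are illustrative defaults, not taken from the paper."""
    assert 0 < b < a < 1, "weights must satisfy a, b in (0, 1) with a > b"
    return a * sim_f + b * sim_l
```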

3.5. Intelligent Optimization Algorithm

Utilizing the aforementioned composite similarity, the process of medical image registration is conceptualized as solving a multi-parameter optimization problem. This involves iteratively calculating and refining the optimal solution for the parameters through an optimization search method. The objective function of the optimization algorithm can be expressed as follows.
S_p^* = \arg\max_{S_p} \left[ a \cdot F_f\!\left(R_f, DRR_f^{S_p}\right) + b \cdot F_l\!\left(R_l, DRR_l^{S_p}\right) \right]
In Equation (8), S_p represents the spatial transformation applied to the CT data using DRR technology, encompassing the objective-function parameters (t_X, t_Y, t_Z, θ_X, θ_Y, θ_Z). R_f and R_l denote the reference images of the frontal and lateral postures, respectively. DRR_f and DRR_l refer to the floating images of the frontal and lateral postures under the transformation S_p. F_f and F_l represent the similarity measures Sim_f and Sim_l computed from the contours of the two sets of frontal and lateral images.
This paper employs Phased Differential Evolution (PDE) as the optimization algorithm for iterative optimization, aiming to find the optimal pose parameters. Compared to other optimization algorithms, the PDE algorithm has a greater capability to converge toward precise solutions and exhibits stronger global search abilities. Throughout the iterative process, PDE often demonstrates rapid convergence speed and is less prone to becoming trapped in local optima. Therefore, the PDE algorithm is highly suitable for the search task of target skeleton image registration. Simulating the principles of crossover and mutation in genetics, the differential evolution algorithm is designed using genetic operators [31]. The standard process of the differential evolution algorithm is illustrated in Figure 5.

3.5.1. Population Initialization

Randomly initialize NP D-dimensional parameter vectors as the population X, where each vector represents a candidate set of parameters:
X_{i,G} = \{ x_{i,G}^1, \dots, x_{i,G}^D \}, \quad i = 1, \dots, NP
The number of solutions, NP, is chosen according to the circumstances. In the process of medical image registration, each generated DRR image represents a set of pose parameters in the optimization algorithm. In this paper, NP is selected as 10, where each set of pose parameters requires six degrees of freedom. Thus, the dimension D is chosen as 6.
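The initialization of Equation (9) with NP = 10 and D = 6 can be sketched as below; the per-dimension search bounds (±20 mm translation, ±0.35 rad rotation) are illustrative assumptions, not values from the paper.

```python
import numpy as np

NP, D = 10, 6   # population size and six DoF dimensions, as in the paper
# Illustrative bounds: +/-20 mm translation, +/-0.35 rad rotation per axis.
lower = np.array([-20.0, -20.0, -20.0, -0.35, -0.35, -0.35])
upper = -lower

rng = np.random.default_rng(0)
# Each row is one individual X_{i,G} of Equation (9): (tX, tY, tZ, thX, thY, thZ).
population = lower + rng.random((NP, D)) * (upper - lower)
```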

3.5.2. Mutation

Once initialized, for each particle within the population, a specific mutation strategy can be employed to generate the associated mutation vector V i , G . The mutation operation in the PDE algorithm is a crucial component, where mutation is regarded as a stochastic element of change or disturbance. The mutation vector in the PDE algorithm is generated from parent differential vectors and combined with the parent individual vectors to create new individual vectors through crossover. To maintain population diversity, the following mutation strategy "DE/rand/2" [32] is employed, which utilizes five randomly selected individual vectors to generate the mutation vector, as depicted in Equation (10).
V_{i,G} = X_{r_1^i,G} + F \cdot \left( X_{r_2^i,G} - X_{r_3^i,G} \right) + F \cdot \left( X_{r_4^i,G} - X_{r_5^i,G} \right)
In Equation (10), the indices r 1 i , r 2 i , r 3 i , r 4 i , r 5 i are mutually exclusive integers randomly generated within the range [1, NP]. For each mutation vector, these indices are randomly generated once. The scaling factor F is a control parameter for scaling the differential vectors. If F is excessively small, it reduces the population diversity, leading to premature convergence of the algorithm. Conversely, if F is excessively large, it decreases the algorithm's convergence speed and reduces the accuracy of obtaining the global optimum solution. During the initial stages of evolution, due to the considerable diversity among individuals in the population, the differential vectors used as perturbations in mutation are also large. This significant perturbation among individuals favors global exploration in the algorithm. As the evolution progresses and the algorithm approaches convergence, the differences among individuals in the population decrease. Consequently, the differential vectors used as perturbations in mutation also adaptively reduce in size. This adaptive reduction in perturbation is beneficial for local exploration [33,34]. Furthermore, during iterative computation, the evolution rates of parameters across different dimensions in medical images vary. Parameters along the X- and Y-axes have a greater impact on similarity, while those along the Z-axis have a relatively smaller impact. Therefore, a slightly smaller F is set for this dimension compared to the others. Apart from the Z-axis, the evolution of the population is divided into two stages. Through extensive experimentation and comparison, it has been established that the F value is set to 0.5 for the first half stage and 0.8 for the latter half stage.
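A minimal sketch of DE/rand/2 mutation (Equation (10)) and the phased scaling factor follows, assuming a NumPy random generator; the per-axis F adjustment for the Z dimension is omitted for brevity.

```python
import numpy as np

def de_rand_2_mutation(population, i, F, rng):
    """DE/rand/2 mutation of Equation (10) for target index i.

    Five mutually distinct indices, all different from i, are drawn;
    two scaled difference vectors then perturb the base vector."""
    NP = len(population)
    candidates = [j for j in range(NP) if j != i]
    r1, r2, r3, r4, r5 = rng.choice(candidates, size=5, replace=False)
    return (population[r1]
            + F * (population[r2] - population[r3])
            + F * (population[r4] - population[r5]))

def phased_F(generation, max_generations):
    """Phased scaling factor: 0.5 for the first half, 0.8 for the second."""
    return 0.5 if generation <= max_generations / 2 else 0.8
```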

3.5.3. Crossover

After the mutation operation, for each particle X_{i,G} and its mutation vector V_{i,G}, a crossover operation is performed to generate the trial vector U_{i,G} = \{u_{i,G}^1, u_{i,G}^2, \dots, u_{i,G}^D\}. The most commonly used binomial crossover operation in PDE is depicted in Equation (11).
u_{i,G}^j = \begin{cases} v_{i,G}^j, & \text{if } rand_j[0,1) \le CR \text{ or } j = j_{rand} \\ x_{i,G}^j, & \text{otherwise} \end{cases}
In Equation (11), j = 1, 2, …, D. The crossover probability CR controls the influence of the parents on offspring generation, where CR ∈ [0, 1); a higher value indicates less influence from the parents. Therefore, the evolution process is divided into two parts using 0.5 as the threshold: the first half stage is set with CR > 0.5, and the second half stage is set with CR < 0.5.
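The binomial crossover of Equation (11) can be sketched as below, assuming a NumPy random generator; the forced index j_rand guarantees that at least one component is inherited from the mutant.

```python
import numpy as np

def binomial_crossover(x, v, CR, rng):
    """Binomial crossover of Equation (11).

    Each component is taken from the mutant v with probability CR;
    index j_rand ensures at least one mutant component survives."""
    D = len(x)
    j_rand = rng.integers(D)
    mask = rng.random(D) < CR   # rand_j[0,1) <= CR (strict/loose is equivalent a.s.)
    mask[j_rand] = True
    return np.where(mask, v, x)
```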

3.5.4. Selection

Compute the objective function values of all trial vectors U_{i,G}, then compare the objective function value of each trial vector with that of the corresponding vector X_{i,G} in the current population. If the objective function value of the trial vector is greater than that of the current vector, the trial vector U_{i,G} replaces X_{i,G} and enters the next generation of the population. The selection operation can be represented as follows.
X_{i,G+1} = \begin{cases} U_{i,G}, & \text{if } f(U_{i,G}) > f(X_{i,G}) \\ X_{i,G}, & \text{otherwise} \end{cases}
During the iterative optimization process, the CT images undergo pose parameter transformations generated randomly during population initialization. Each generated DRR image’s corresponding pose parameters are treated as a particle by the PDE algorithm. In each iteration, the similarity between each particle and the reference images in the frontal–lateral position is computed separately. The composite similarity, evaluated as Sim, measures the overall similarity of the images. Eventually, the particle with the highest similarity evaluation value represents the optimal particle. In the subsequent iteration of the population, the previous iteration’s optimal particle serves as the reference to generate new particles, aiming for higher similarity. When the maximum number of iterations is reached, the particle achieving the highest global fitness corresponds to the pose parameter combination with the highest similarity, and the corresponding DRR image represents the desired registration result. By refining the control parameters of population individuals and the mutation strategy, this algorithm exhibits efficient global optimization capabilities, preventing the algorithm from converging to local optima. Algorithm 2 is represented as follows.
Algorithm 2. Improved Differential Evolution.
N : Population size
D : Dimension of solution space
f ( x ) : Objective function (to be minimized)
[ x min , x max ] : Search bounds
G max : Maximum number of generations
CR : Crossover probability
F : Scaling factor (0.5 for first half, 0.8 for second half)
1: Initialize population P = { x i | i = 1 , 2 , , N } randomly within bounds
2: Compute fitness values f i = f ( x i ) for all individuals
3: Set initial optimal solution x best = arg min ( f i ) , f best = min ( f i )
4: Set generation counter g = 1
5: While g ≤ G max do:
       Set F = 0.5 if g ≤ G max / 2 , else F = 0.8
       for i = 1 : N do:
           Randomly select 5 distinct indices r 1 , r 2 , r 3 , r 4 , r 5 , all different from i
           Generate mutant vector:
               v i , G = x r 1 , G + F · ( x r 2 , G − x r 3 , G ) + F · ( x r 4 , G − x r 5 , G )
           Generate trial vector u i via crossover:
              for j = 1 : D do:
                  if rand ( 0 , 1 ) CR or j = randint ( 1 , D ) :
                      u i , j = v i , j
                  else:
                      u i , j = x i , j
           Evaluate fitness f u = f ( u i )
           if f u < f i , update x i = u i and f i = f u
       End(for)
       Update x best and f best if new minimum found
       Increment generation counter g = g + 1
6: Return x best and f best
End
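As a concrete illustration, the phased scheme of Algorithm 2 can be sketched in Python. This is a minimal sketch, not the authors' implementation: the function name `pde`, its default parameters, and the toy objective in the usage example below are ours; the paper applies this optimizer to the image-similarity objective rather than a benchmark function.

```python
import numpy as np

def pde(f, bounds, N=30, G_max=1000, CR=0.9, seed=0):
    """Phased DE sketch: DE/rand/2 mutation with scaling factor F = 0.5
    in the first half of the run and F = 0.8 in the second half."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    D = lo.size
    X = rng.uniform(lo, hi, size=(N, D))        # random initial population
    fit = np.array([f(x) for x in X])
    best = int(np.argmin(fit))
    x_best, f_best = X[best].copy(), fit[best]
    for g in range(1, G_max + 1):
        F = 0.5 if g <= G_max // 2 else 0.8      # phased scaling factor
        for i in range(N):
            # five distinct indices, all different from i
            r1, r2, r3, r4, r5 = rng.choice(
                [k for k in range(N) if k != i], 5, replace=False)
            # DE/rand/2 mutation, clipped back into the search bounds
            v = np.clip(X[r1] + F * (X[r2] - X[r3]) + F * (X[r4] - X[r5]), lo, hi)
            # binomial crossover with a guaranteed mutant gene at j_rand
            mask = rng.random(D) <= CR
            mask[rng.integers(D)] = True
            u = np.where(mask, v, X[i])
            f_u = f(u)
            if f_u < fit[i]:                     # greedy one-to-one selection
                X[i], fit[i] = u, f_u
                if f_u < f_best:
                    x_best, f_best = u.copy(), f_u
    return x_best, f_best
```

For example, `pde(lambda x: float(np.sum(x ** 2)), ([-5.0] * 5, [5.0] * 5), G_max=300)` drives a 5-dimensional sphere objective close to its minimum at the origin.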

4. Experiments and Results

4.1. Dataset and Experimental Setup

The experiment utilized a real dataset obtained from a hospital, consisting of several sets of spine, pelvis, and calcaneus medical images with various morphologies. The X-ray images were acquired with a frontal–lateral dual-plane imaging technique.
The hardware configuration used for the experiment consisted of an Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz, an NVIDIA GeForce GTX 1060 6 GB graphics card, and a PC running 64-bit Windows 10 Professional. The programming environments were MATLAB R2020a and Visual Studio 2022.

4.2. Experiments with Intelligent Optimization Algorithms

From the CEC (IEEE Congress on Evolutionary Computation) benchmark suites [35,36], we selected six representative benchmark functions to evaluate the performance of the PDE algorithm on complex optimization problems. Detailed information about these benchmark functions is presented in Table 1. F1 to F3 are unimodal benchmark functions designed to assess the algorithm's optimization capability and convergence speed. F4 and F5 are multimodal benchmark functions that primarily examine the algorithm's resilience against local optima. Lastly, F6 is a composite benchmark function.
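For reference, the three unimodal benchmarks in Table 1 have simple closed forms. The sketches below use the common textbook definitions (the exact CEC variants may add shifts or rotations); all three attain their theoretical optimum of 0 at the origin, as listed in Table 1.

```python
import numpy as np

def schwefel_2_22(x):
    """F1 (Schwefel 2.22): sum of |x_i| plus product of |x_i|."""
    a = np.abs(np.asarray(x, dtype=float))
    return float(a.sum() + a.prod())

def schwefel_1_2(x):
    """F2 (Schwefel 1.2): sum of squared prefix sums."""
    return float(np.sum(np.cumsum(np.asarray(x, dtype=float)) ** 2))

def schwefel_2_21(x):
    """F3 (Schwefel 2.21): maximum absolute coordinate."""
    return float(np.max(np.abs(np.asarray(x, dtype=float))))
```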
The experiment compared Particle Swarm Optimization (PSO), the Grey Wolf Optimizer (GWO), the Genetic Algorithm (GA), the Equilibrium Optimizer (EO), and the PDE algorithm on the benchmark functions. The maximum number of iterations was 1000, and all population sizes were set to 30. Figure 6 illustrates the convergence curves of the optimization algorithms on the benchmark functions.
In light of Figure 6, it is evident that PSO, GWO, and GA perform poorly on the unimodal functions F1–F3, with subpar convergence speed and search ability. The EO and PDE, on the other hand, approach the theoretical optimal values on the unimodal functions, with the PDE algorithm outperforming the other algorithms in both convergence speed and optimization accuracy. On the multimodal functions F4–F5, PSO, GWO, and GA often become trapped in local optima during the early and middle stages, leading to stagnation. The EO has a limited ability to escape local extremes, and its convergence is notably slow. In conclusion, the PDE algorithm outperforms the other optimization algorithms in convergence speed, optimization accuracy, and the ability to escape local extremes on most test functions, and it exhibits robust global search capabilities.
To assess the efficiency of the PDE optimization algorithm, this experiment utilized three sets of spinal data with different morphologies. Comparative experiments in single-posture medical image registration were conducted, pitting the PDE algorithm against the Equilibrium Optimizer (EO) and Particle Swarm Optimization (PSO), with the aim of validating the efficiency of the proposed algorithm. The experiment was standardized with 50 iterations and 10 particles for each data group, and the average of 10 tests per group served as the registration result. Manual registration was considered the optimal result, and successful registration was defined as rotation differences less than 3° and translation differences less than 3 mm, accounting for minor errors in manual registration. The errors in the six degrees of freedom and the average registration time after using each algorithm are presented in Table 2.
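The success criterion described above (all rotation differences below 3° and all translation differences below 3 mm relative to manual registration) can be expressed as a small helper. This is a sketch; the function name and signature are ours, not from the paper.

```python
def registration_success(rot_err_deg, trans_err_mm, rot_tol=3.0, trans_tol=3.0):
    """True if every rotation error is below rot_tol degrees and every
    translation error is below trans_tol millimetres."""
    return (all(e < rot_tol for e in rot_err_deg)
            and all(e < trans_tol for e in trans_err_mm))
```

Applied to the Spine 1 rows of Table 2, the average PDE errors satisfy the criterion while the average PSO errors do not.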
The registration success rate of PDE reached 80.56%, surpassing the success rates of the EO (58.33%) and PSO (52.78%). Furthermore, the proposed algorithm reduced the average registration time by 4.14% compared to the other two optimization algorithms. The experimental results indicate that the proposed optimization algorithm demonstrates higher registration efficiency and stronger robustness.

4.3. Experiments on GPU Parallel Generation of DRR

In order to validate the accelerated impact of parallel DRR generation using the CUDA architecture, this experiment employed five sets of spinal data with varying morphologies. The PDE optimization algorithm was applied to traditional medical image registration with 50 iterations and 10 particles. The results, presented in Table 3, highlight a notable reduction of 43.71% in the average registration time when employing GPUs for multi-threaded parallel computation. These findings underscore the efficiency gains achieved through parallel DRR generation based on the CUDA framework, resulting in a substantial decrease in the time required for DRR generation.
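The reported 43.71% figure is consistent with averaging the per-dataset time reductions from Table 3, as this short check (using the published timings) illustrates:

```python
# Timings from Table 3 (seconds), Spine 1 through Spine 5
conventional = [58.12, 63.72, 54.86, 46.78, 64.42]
gpu          = [31.00, 35.87, 30.19, 26.82, 38.29]

# Per-dataset relative time reduction, then the mean reduction
reductions = [1 - g / c for c, g in zip(conventional, gpu)]
avg_reduction = sum(reductions) / len(reductions)   # ≈ 0.4371, i.e. 43.71%

# Per-dataset speedup ratios, matching the last column of Table 3
speedups = [c / g for c, g in zip(conventional, gpu)]
```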

4.4. Dual-Pose Registration Experiments

Using the PDE optimization algorithm for single-posture registration as a baseline, we conducted additional dual-pose registration experiments on five sets of spinal data. The proposed optimization algorithm was configured with 50 iterations and 10 particles, and the average of 10 registrations per dataset was computed. The experimental results, illustrated in Figure 7a–e, show a success rate of 81.33% for the proposed algorithm. The pose parameters obtained from dual-pose registration exhibit reductions of 67.04% in average rotation error and 71.84% in average translation error compared to those obtained from single-posture registration. These results show that the proposed algorithm yields smaller registration errors and more precise registration results in the dual-pose registration experiments on the five sets of spinal data.

4.5. Comparative Experiments of Registration Algorithms

To assess the accuracy of the proposed medical image registration algorithm, spine, pelvis, and calcaneus data were used for registration. Comparative experiments were conducted against three classical registration algorithms and two deep learning algorithms: MI, NCC, SSD, Siamese Network, and FaceNet. The proposed registration algorithm was set with 50 iterations and 10 particles, and each dataset underwent 10 experiments. The experimental results are presented in Table 4: the success rate of this algorithm is 85.40%, while the success rates of MI, NCC, SSD, Siamese Network, and FaceNet are 23.33%, 22.17%, 21.25%, 22.80%, and 22.93%, respectively. These results demonstrate that compared to other registration algorithms, this algorithm exhibits lower registration errors, higher registration efficiency, and significantly higher registration success rates.
The registration image obtained by this algorithm and the outcomes of the other registration algorithms are displayed in Figure 8. The figure shows that, relative to the original X-ray reference image, the algorithm proposed in this paper yields a registration result with better translation and rotation accuracy in each degree of freedom than the other registration algorithms.

5. Conclusions

Traditional single-posture medical image registration is often imprecise, particularly for images with ambiguous features or objective functions prone to local optima, and it exhibits significantly larger errors in specific dimensions than in the other degrees of freedom. To address these challenges, this paper introduces a novel dual-pose medical image registration algorithm based on an improved differential evolution approach. The method devises a composite measure to assess medical image similarity, employs GPU-accelerated Digitally Reconstructed Radiograph (DRR) generation, integrates a dual-pose strategy to obtain the frontal–lateral images to be registered, and then uses PDE for iterative optimization. The result is a significant improvement in the similarity of the DRR images that serve as the final registration results.
The primary advantages of this algorithm are manifold. Dual-pose image registration incorporates richer projection-view information and image detail than single-posture registration, enabling a more comprehensive analysis of the six spatial degrees of freedom; this reduces rotation and translation errors and enhances the precision of medical image registration. The Phased Differential Evolution algorithm used for iterative optimization improves the accuracy of the registration results by refining the control parameters of the population individuals and the mutation strategy, thereby increasing the diversity of the particle population. Furthermore, GPU-based parallel acceleration reduces the GPU start-up overhead and the DRR image generation time within the ray-casting algorithm, accelerating the overall execution of the algorithm.
While the proposed dual-pose medical image registration algorithm shows promising results in reducing registration errors, medical image registration remains a complex and challenging research domain. Future research could explore several directions: integrating deep learning techniques to enhance image preprocessing and contour extraction; investigating multimodal medical image registration, including 3D-3D registration; and exploring real-time medical image registration technology and its practical application in relevant clinical experiments.

Author Contributions

Conceptualization, D.Z.; Methodology, D.Z. and W.L.; Software, F.X.; Validation, F.L.; Data curation, F.X.; Writing—original draft, W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ou, X.Y.; Chen, X.; Xu, X.N.; Xie, L.L.; Chen, X.F.; Hong, Z.Z.; Bai, H.; Liu, X.W.; Chen, Q.S.; Li, L.; et al. Recent Development in X-Ray Imaging Technology: Future and Challenges. Research 2021, 2021, 9892152. [Google Scholar] [CrossRef] [PubMed]
  2. Qin, C.X.; Cao, Z.G.; Fan, S.C.; Wu, Y.Q.; Sun, Y.; Politis, C.; Wang, C.L.; Chen, X.J. An oral and maxillofacial navigation system for implant placement with automatic identification of fiducial points. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 281–289. [Google Scholar] [CrossRef] [PubMed]
  3. Liao, R.; Zhang, L.; Sun, Y.; Miao, S.; Chefd’Hotel, C. A Review of Recent Advances in Registration Techniques Applied to Minimally Invasive Therapy. IEEE Trans. Multimedia 2013, 15, 983–1000. [Google Scholar] [CrossRef]
  4. Naik, R.R.; Anitha, H.; Bhat, S.N.; Ampar, N.; Kundangar, R. Realistic C-arm to PCT registration for vertebral localization in spine surgery. Med. Biol. Eng. Comput. 2022, 60, 2271–2289. [Google Scholar] [CrossRef] [PubMed]
  5. Frysch, R.; Pfeiffer, T.; Rose, G. A novel approach to 2D/3D registration of X-ray images using Grangeat’s relation. Med. Image Anal. 2021, 67, 101815. [Google Scholar] [CrossRef] [PubMed]
  6. Sotiras, A.; Davatzikos, C.; Paragios, N. Deformable Medical Image Registration: A Survey. IEEE Trans. Med. Imaging 2013, 32, 1153–1190. [Google Scholar] [CrossRef] [PubMed]
  7. Gobbi, D.; Comeau, R.; Lee, B.; Peters, T. Integration of intra-operative 3D ultrasound with pre-operative MRI for neurosurgical guidance. In Proceedings of the 22nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Cat. No. 00CH37143), Chicago, IL, USA, 23–28 July 2000; Volume 3, pp. 1738–1740. [Google Scholar] [CrossRef]
  8. Ban, Y.X.; Wang, Y.; Liu, S.; Yang, B.; Liu, M.Z.; Yin, L.R.; Zheng, W.F. 2D/3D Multimode Medical Image Alignment Based on Spatial Histograms. Appl. Sci. 2022, 12, 8261. [Google Scholar] [CrossRef]
  9. Regodic, M.; Bardosi, Z.; Freysinger, W. Automated fiducial marker detection and localization in volumetric computed tomography images: A three-step hybrid approach with deep learning. J. Med. Imaging 2021, 8, 025002. [Google Scholar] [CrossRef] [PubMed]
  10. Yu, W.M.; Tannast, M.; Zheng, G.Y. Non-rigid free-form 2D-3D registration using a B-spline-based statistical deformation model. Pattern Recognit. 2017, 63, 689–699. [Google Scholar] [CrossRef]
  11. Kuppala, K.; Banda, S.; Barige, T.R. An overview of deep learning methods for image registration with focus on feature-based approaches. Int. J. Image Data Fusion 2020, 11, 113–135. [Google Scholar] [CrossRef]
  12. Zhao, J.; Yang, H.; Ding, Y. Medical image registration algorithm research based on mutual information similarity measure. Proc. SPIE 2008, 6625, 51–59. [Google Scholar] [CrossRef]
  13. Tsai, T.Y.; Lu, T.W.; Chen, C.M.; Kuo, M.Y.; Hsu, H.C. A volumetric model-based 2D to 3D registration method for measuring kinematics of natural knees with single-plane fluoroscopy. Med. Phys. 2010, 37, 1273–1284. [Google Scholar] [CrossRef] [PubMed]
  14. Yan, L.; Wang, Z.Q.; Liu, Y.; Ye, Z.Y. Generic and Automatic Markov Random Field-Based Registration for Multimodal Remote Sensing Image Using Grayscale and Gradient Information. Remote Sens. 2018, 10, 1228. [Google Scholar] [CrossRef]
  15. Damas, S.; Cordón, O.; Santamaría, J. Medical Image Registration Using Evolutionary Computation: An Experimental Survey. IEEE Comput. Intell. Mag. 2011, 6, 26–42. [Google Scholar] [CrossRef]
  16. Ma, G.X.; Ahmed, N.K.; Willke, T.L.; Yu, P.S. Deep graph similarity learning: A survey. Data Min. Knowl. Discov. 2021, 35, 688–725. [Google Scholar] [CrossRef]
  17. Li, M.D.; Chang, K.; Bearce, B.; Chang, C.Y.; Huang, A.J.; Campbell, J.; Brown, J.M.; Singh, P.; Hoebel, K.V.; Erdoğmuş, D.; et al. Siamese neural networks for continuous disease severity evaluation and change detection in medical imaging. NPJ Digit. Med. 2020, 3, 48. [Google Scholar] [CrossRef] [PubMed]
  18. Taigman, Y.; Ming, Y.; Ranzato, M.; Wolf, L. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 1701–1708. [Google Scholar] [CrossRef]
  19. McCauley, J.; Soleymani, S.; Williams, B.; Dando, J.; Nasrabadi, N.; Dawson, J. Identical Twins as a facial similarity benchmark for human facial recognition. In Proceedings of the 2021 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 15–17 September 2021; pp. 1–5. [Google Scholar] [CrossRef]
  20. Yunsheng, B.; Hao, D.; Yizhou, S.; Wei, W. Convolutional set matching for graph similarity. arXiv 2018. [Google Scholar] [CrossRef]
  21. Spoerk, J.; Gendrin, C.; Weber, C.; Figl, M.; Pawiro, S.A.; Furtado, H.; Fabri, D.; Bloch, C.; Bergmann, H.; Gröller, E.; et al. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology. Z. Med. Phys. 2012, 22, 13–20. [Google Scholar] [CrossRef] [PubMed]
  22. Banks, S.A.; Hodge, W.A. Accurate measurement of three-dimensional knee replacement kinematics using single-plane fluoroscopy. IEEE Trans. Biomed. Eng. 1996, 43, 638–649. [Google Scholar] [CrossRef] [PubMed]
  23. Akter, M.; Lambert, A.J.; Pickering, M.R.; Scarvell, J.M.; Smith, P.N. A 2D-3D image registration algorithm using log-polar transforms for knee kinematic analysis. In Proceedings of the 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA), Fremantle, Australia, 3–5 December 2012; pp. 1–8. [Google Scholar] [CrossRef]
  24. Shamshad, F.; Khan, S.; Zamir, S.W.; Khan, M.H.; Hayat, M.; Khan, F.S.; Fu, H.Z. Transformers in medical imaging: A survey. Med. Image Anal. 2023, 88, 102802. [Google Scholar] [CrossRef] [PubMed]
  25. Almeida, D.F.; Astudillo, P.; Vandermeulen, D. Three-dimensional image volumes from two-dimensional digitally reconstructed radiographs: A deep learning approach in lower limb CT scans. Med. Phys. 2021, 48, 2448–2457. [Google Scholar] [CrossRef] [PubMed]
  26. Fluck, O.; Vetter, C.; Wein, W.; Kamen, A.; Preim, B.; Westermann, R. A survey of medical image registration on graphics hardware. Comput. Methods Programs Biomed. 2011, 104, E45–E57. [Google Scholar] [CrossRef] [PubMed]
  27. Tahmasebi, N.; Boulanger, P.; Yun, J.Y.; Fallone, G.; Noga, M.; Punithakumar, K. Real-Time Lung Tumor Tracking Using a CUDA Enabled Nonrigid Registration Algorithm for MRI. IEEE J. Transl. Eng. Health Med. 2020, 8, 4300308. [Google Scholar] [CrossRef] [PubMed]
  28. Xi, C.; Li, Z.; Zheng, Y. Deep similarity learning for multimodal medical images. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2018, 6, 248–252. [Google Scholar] [CrossRef]
  29. Chen, X.X.; Wang, X.M.; Zhang, K.; Fung, K.M.; Thai, T.C.; Moore, K.; Mannel, R.S.; Liu, H.; Zheng, B.; Qiu, Y.C. Recent advances and clinical applications of deep learning in medical image analysis. Med. Image Anal. 2022, 79, 102444. [Google Scholar] [CrossRef] [PubMed]
  30. Nikolic, M.; Tuba, E.; Tuba, M. Edge detection in medical ultrasound images using adjusted Canny edge detection algorithm. In Proceedings of the 2016 24th Telecommunications Forum (TELFOR), Belgrade, Serbia, 22–23 November 2016; pp. 1–4. [Google Scholar] [CrossRef]
  31. Deng, W.; Shang, S.F.; Cai, X.; Zhao, H.M.; Song, Y.J.; Xu, J.J. An improved differential evolution algorithm and its application in optimization problem. Soft Comput. 2021, 25, 5277–5298. [Google Scholar] [CrossRef]
  32. Torres-Cerna, C.E.; Alanis, A.Y.; Poblete-Castro, I.; Bermejo-Jambrina, M.; Hernandez-Vargas, E.A. A comparative study of differential evolution algorithms for parameter fitting procedures. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 4662–4666. [Google Scholar] [CrossRef]
  33. Gao, S.C.; Yu, Y.; Wang, Y.R.; Wang, J.H.; Cheng, J.J.; Zhou, M.C. Chaotic Local Search-Based Differential Evolution Algorithms for Optimization. IEEE Trans. Syst. Man Cybern.-Syst. 2021, 51, 3954–3967. [Google Scholar] [CrossRef]
  34. Bilal; Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar] [CrossRef]
  35. Yazdani, D.; Branke, J.; Omidvar, M.N.; Li, X.; Li, C.; Mavrovouniotis, M.; Nguyen, T.T.; Yang, S.; Yao, X. IEEE CEC 2022 competition on dynamic optimization problems generated by generalized moving peaks benchmark. arXiv 2021, arXiv:2106.06174. [Google Scholar] [CrossRef]
  36. Chen, Q.; Liu, B.; Zhang, Q.; Liang, J.; Suganthan, P.; Qu, B. Problem Definitions and Evaluation Criteria for CEC 2015 Special Session on Bound Constrained Single-Objective Computationally Expensive Numerical Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2014. [Google Scholar]
Figure 1. Dual-pose medical image registration process based on improved differential evolution.
Figure 2. Projection schematic diagram for obtaining the dual-pose DRR images.
Figure 3. Schematic diagram of the ray-casting projection algorithm.
Figure 4. The process of generating DRRs in parallel on the GPU.
Figure 5. Improved differential evolution algorithm process.
Figure 6. Convergence curves of the benchmark functions.
Figure 7. Comparative experiment of dual-pose strategy registration.
Figure 8. The registration results obtained by various algorithms.
Table 1. Benchmark functions.

| Function | Search Scope | Theoretical Optimal Value |
|---|---|---|
| F1: Schwefel 2.22 | [−10, 10] | 0 |
| F2: Schwefel 1.2 | [−100, 100] | 0 |
| F3: Schwefel 2.21 | [−100, 100] | 0 |
| F4: Generalized Penalized 1 | [−50, 50] | 0 |
| F5: Generalized Penalized 2 | [−50, 50] | 0 |
| F6: Kowalik | [−5, 5] | 0.0003075 |
Table 2. Intelligent optimization algorithm registration experiments.

| Bone Type | Algorithm | Avg. Error of X Rotation/° | Avg. Error of Y Rotation/° | Avg. Error of Z Rotation/° | Avg. Error of X Translation/mm | Avg. Error of Y Translation/mm | Avg. Error of Z Translation/mm | Avg. Registration Time/s |
|---|---|---|---|---|---|---|---|---|
| Spine 1 | PSO | 4.403 | 2.866 | 1.839 | 1.082 | 1.621 | 5.504 | 62.16 |
| | EO | 2.252 | 1.768 | 1.417 | 0.437 | 3.553 | 5.508 | 62.65 |
| | PDE (ours) | 1.732 | 1.353 | 0.698 | 0.354 | 0.409 | 2.731 | 58.12 |
| Spine 2 | PSO | 5.913 | 2.115 | 3.984 | 0.671 | 1.670 | 5.086 | 64.61 |
| | EO | 3.784 | 2.039 | 2.564 | 0.489 | 1.487 | 3.047 | 65.93 |
| | PDE (ours) | 1.095 | 1.739 | 1.753 | 0.229 | 1.156 | 1.937 | 63.72 |
| Spine 3 | PSO | 4.005 | 1.924 | 4.693 | 1.772 | 2.551 | 2.464 | 56.08 |
| | EO | 2.133 | 3.102 | 1.257 | 2.040 | 4.054 | 4.851 | 57.24 |
| | PDE (ours) | 2.046 | 1.479 | 1.701 | 1.438 | 1.271 | 1.880 | 54.86 |
Table 3. Experiments on GPU parallel generation of DRR.

| Bone Type | Conventional Registration Time/s | GPU-Parallel DRR Registration Time/s | Speedup Ratio |
|---|---|---|---|
| Spine 1 | 58.12 | 31.00 | 1.8749 |
| Spine 2 | 63.72 | 35.87 | 1.7762 |
| Spine 3 | 54.86 | 30.19 | 1.8168 |
| Spine 4 | 46.78 | 26.82 | 1.7445 |
| Spine 5 | 64.42 | 38.29 | 1.6825 |
Table 4. Comparison with registration results of the state-of-the-art algorithms. The average registration time is reported once per algorithm.

| Algorithm | Bone Type | Avg. Error of X Rotation/° | Avg. Error of Y Rotation/° | Avg. Error of Z Rotation/° | Avg. Error of X Translation/mm | Avg. Error of Y Translation/mm | Avg. Error of Z Translation/mm | Avg. Registration Time/s |
|---|---|---|---|---|---|---|---|---|
| SSD | Lumbar 1 | 3.723 | 4.578 | 3.986 | 3.052 | 5.005 | 8.690 | 61.08 |
| | Lumbar 2 | 3.690 | 2.468 | 2.726 | 2.267 | 16.194 | 8.639 | |
| | Lumbar 3 | 2.598 | 5.953 | 1.463 | 1.581 | 5.183 | 10.356 | |
| | Thoracic vertebra 1 | 13.035 | 8.614 | 3.915 | 8.009 | 1.223 | 8.464 | |
| | Thoracic vertebra 2 | 5.951 | 1.160 | 1.792 | 4.799 | 1.286 | 10.555 | |
| | Thoracic vertebra 3 | 5.142 | 1.712 | 3.950 | 5.484 | 8.544 | 15.533 | |
| | Thoracic vertebra 4 | 4.620 | 4.316 | 2.114 | 16.006 | 1.751 | 14.582 | |
| | Spine | 10.234 | 1.740 | 1.617 | 12.165 | 3.361 | 11.208 | |
| | Pelvis 1 | 11.077 | 5.547 | 2.511 | 3.837 | 2.663 | 6.221 | |
| | Pelvis 2 | 4.744 | 2.534 | 1.651 | 2.875 | 3.402 | 14.787 | |
| | Tibia | 4.550 | 9.989 | 9.410 | 10.999 | 4.369 | 5.166 | |
| | Calcaneus | 2.750 | 4.440 | 12.936 | 1.675 | 7.716 | 13.753 | |
| NCC | Lumbar 1 | 3.258 | 1.683 | 0.737 | 2.212 | 7.603 | 4.182 | 61.80 |
| | Lumbar 2 | 1.267 | 1.759 | 1.422 | 1.091 | 2.098 | 4.027 | |
| | Lumbar 3 | 3.898 | 1.172 | 0.560 | 2.163 | 7.645 | 5.649 | |
| | Thoracic vertebra 1 | 9.279 | 4.383 | 0.833 | 7.519 | 5.109 | 6.742 | |
| | Thoracic vertebra 2 | 6.402 | 4.854 | 4.558 | 2.816 | 6.941 | 4.325 | |
| | Thoracic vertebra 3 | 4.497 | 1.401 | 3.986 | 5.094 | 7.301 | 9.764 | |
| | Thoracic vertebra 4 | 9.643 | 1.157 | 1.281 | 9.412 | 0.184 | 7.363 | |
| | Spine | 7.598 | 0.115 | 1.024 | 8.685 | 9.761 | 11.525 | |
| | Pelvis 1 | 3.774 | 9.057 | 8.350 | 9.205 | 11.867 | 3.560 | |
| | Pelvis 2 | 8.481 | 4.837 | 6.500 | 8.868 | 3.868 | 4.276 | |
| | Tibia | 9.374 | 8.001 | 7.040 | 2.810 | 5.012 | 5.138 | |
| | Calcaneus | 0.207 | 6.988 | 2.897 | 2.899 | 1.881 | 15.986 | |
| MI | Lumbar 1 | 5.976 | 2.148 | 2.408 | 1.310 | 7.922 | 5.484 | 59.41 |
| | Lumbar 2 | 7.576 | 1.030 | 0.467 | 0.854 | 9.813 | 3.748 | |
| | Lumbar 3 | 4.917 | 4.354 | 0.276 | 3.746 | 6.800 | 3.947 | |
| | Thoracic vertebra 1 | 6.506 | 2.365 | 1.077 | 8.694 | 1.595 | 3.794 | |
| | Thoracic vertebra 2 | 1.422 | 0.936 | 3.358 | 3.156 | 2.775 | 6.203 | |
| | Thoracic vertebra 3 | 7.899 | 3.544 | 1.844 | 0.714 | 4.794 | 8.123 | |
| | Thoracic vertebra 4 | 6.546 | 2.873 | 6.148 | 0.469 | 2.295 | 5.637 | |
| | Spine | 5.510 | 0.364 | 3.185 | 8.540 | 4.761 | 2.559 | |
| | Pelvis 1 | 2.762 | 1.364 | 1.973 | 9.802 | 0.840 | 5.788 | |
| | Pelvis 2 | 3.879 | 3.054 | 1.795 | 5.028 | 1.882 | 7.962 | |
| | Tibia | 7.399 | 6.420 | 3.047 | 1.071 | 8.159 | 5.883 | |
| | Calcaneus | 5.769 | 6.759 | 4.747 | 1.359 | 4.488 | 7.308 | |
| Siamese Network [17] | Lumbar 1 | 12.321 | 1.156 | 2.452 | 2.390 | 4.013 | 9.817 | 85.90 |
| | Lumbar 2 | 10.792 | 3.091 | 0.907 | 3.632 | 6.290 | 8.256 | |
| | Lumbar 3 | 5.071 | 1.233 | 1.721 | 7.114 | 5.561 | 9.095 | |
| | Thoracic vertebra 1 | 4.698 | 4.640 | 1.096 | 6.680 | 3.713 | 14.319 | |
| | Thoracic vertebra 2 | 8.912 | 4.417 | 2.791 | 1.086 | 2.338 | 11.113 | |
| | Thoracic vertebra 3 | 2.169 | 1.273 | 3.541 | 1.478 | 6.854 | 12.723 | |
| | Thoracic vertebra 4 | 7.006 | 2.799 | 1.371 | 4.092 | 2.346 | 13.122 | |
| | Spine | 6.778 | 3.513 | 1.049 | 6.110 | 4.078 | 7.576 | |
| | Pelvis 1 | 10.407 | 3.111 | 5.189 | 4.311 | 5.125 | 6.297 | |
| | Pelvis 2 | 4.598 | 2.691 | 1.745 | 8.870 | 6.588 | 5.394 | |
| | Tibia | 7.967 | 9.225 | 3.046 | 0.104 | 10.223 | 11.749 | |
| | Calcaneus | 3.519 | 6.179 | 9.991 | 2.709 | 2.361 | 13.900 | |
| FaceNet [19] | Lumbar 1 | 10.492 | 1.529 | 2.118 | 1.048 | 2.653 | 6.935 | 83.97 |
| | Lumbar 2 | 12.086 | 2.478 | 1.754 | 1.487 | 8.517 | 14.821 | |
| | Lumbar 3 | 11.723 | 5.121 | 1.790 | 5.137 | 6.051 | 8.622 | |
| | Thoracic vertebra 1 | 6.208 | 2.650 | 1.207 | 3.987 | 7.017 | 11.059 | |
| | Thoracic vertebra 2 | 13.220 | 3.072 | 1.253 | 2.421 | 1.769 | 10.890 | |
| | Thoracic vertebra 3 | 9.350 | 2.131 | 6.797 | 0.339 | 10.311 | 12.198 | |
| | Thoracic vertebra 4 | 8.563 | 4.131 | 3.147 | 8.157 | 3.768 | 8.893 | |
| | Spine | 9.116 | 2.266 | 1.576 | 10.652 | 7.985 | 7.920 | |
| | Pelvis 1 | 4.897 | 3.762 | 8.898 | 9.023 | 9.158 | 9.114 | |
| | Pelvis 2 | 9.766 | 2.048 | 4.638 | 8.412 | 3.551 | 4.645 | |
| | Tibia | 4.457 | 6.847 | 11.349 | 1.032 | 8.848 | 15.491 | |
| | Calcaneus | 1.551 | 5.217 | 9.441 | 2.222 | 5.412 | 10.160 | |
| Ours | Lumbar 1 | 0.916 | 0.465 | 0.304 | 0.182 | 0.248 | 1.400 | 61.38 |
| | Lumbar 2 | 0.750 | 0.353 | 1.812 | 0.132 | 0.712 | 0.438 | |
| | Lumbar 3 | 0.492 | 1.171 | 0.657 | 0.065 | 0.670 | 0.577 | |
| | Thoracic vertebra 1 | 0.120 | 0.336 | 0.240 | 0.677 | 0.223 | 0.489 | |
| | Thoracic vertebra 2 | 0.212 | 0.555 | 0.834 | 0.348 | 0.639 | 0.375 | |
| | Thoracic vertebra 3 | 0.912 | 0.884 | 0.796 | 1.003 | 0.664 | 0.633 | |
| | Thoracic vertebra 4 | 0.986 | 0.425 | 0.765 | 1.036 | 1.674 | 0.401 | |
| | Spine | 0.727 | 0.271 | 0.414 | 0.277 | 0.730 | 0.837 | |
| | Pelvis 1 | 0.614 | 0.545 | 0.143 | 0.674 | 0.723 | 0.520 | |
| | Pelvis 2 | 0.914 | 0.832 | 0.416 | 0.266 | 0.215 | 0.297 | |
| | Tibia | 0.628 | 0.956 | 0.883 | 0.355 | 0.359 | 0.496 | |
| | Calcaneus | 0.294 | 0.158 | 0.904 | 0.331 | 0.753 | 0.217 | |