Article

Multi-Intensity Optimization-Based CT and Cone Beam CT Image Registration

School of Information Science and Engineering, Yunnan University, Kunming 650500, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(12), 1862; https://doi.org/10.3390/electronics11121862
Submission received: 19 May 2022 / Revised: 8 June 2022 / Accepted: 9 June 2022 / Published: 13 June 2022
(This article belongs to the Section Computer Science & Engineering)

Abstract

Cancer is a highly lethal disease that is mainly treated by image-guided radiotherapy. Because its low dose is less harmful to patients, cone beam CT is often used for target delineation in image-guided radiotherapy of various cancers, especially breast and lung cancer. However, breathing and heartbeat can cause position errors between images taken at different periods, and the low dose of cone beam CT also results in insufficient imaging clarity, rendering existing registration methods unable to meet the requirements of CT and cone beam CT registration tasks. In this paper, we propose a novel multi-intensity optimization-based CT and cone beam CT registration method. First, we use a multi-weighted mean curvature filtering algorithm to preserve the multi-intensity details of the input image pairs. Then, the strong edge retention results are registered using an intensity-based method to obtain the multi-intensity registration results. Next, a novel evaluation method called intersection mutual information is proposed to evaluate the registration accuracy of the different multi-intensity registration results. Finally, we determine the optimal registration transformation by intersection mutual information and apply it to the input image pairs to obtain the final registration results. The experimental results demonstrate the excellent performance of the proposed method, meeting the requirements of image-guided radiotherapy.

Graphical Abstract

1. Introduction

Cancer, with its remarkable incidence and mortality, is one of the most concerning diseases worldwide, especially in a country such as China, with its huge population base. According to the latest statistics published by the National Cancer Center (NCC) of China [1], more than four million new cancer cases and nearly two million cancer deaths occurred in China in 2016. Radiotherapy, together with surgery and chemotherapy as major or complementary approaches, therefore plays an important role as one of the three major strategies of cancer management [2,3,4].
Radiotherapy kills cancer cells by delivering high-energy radiation doses to the target area. During image-guided radiotherapy (IGRT), images taken at different periods (before and after radiotherapy) need to be registered for target delineation and postoperative curative effectiveness analysis [5]. IGRT represents a great breakthrough in modern radiation oncology, in which linear accelerators are equipped with imaging devices, thus providing verification images prior to and during treatment [6,7,8,9]. The extensively adopted IGRT methods can be classified into two categories: kilovoltage CT imaging using a cone X-ray beam (KVCT/CBCT) and an electronic portal imaging device (EPID) using a megavoltage (MV) X-ray beam. Typically, considering the difference in beam energy, CBCT is widely applied in the clinical IGRT routine [10,11,12,13]. The development of IGRT has promoted the research and application of related new technologies for cancer treatment [14,15,16]. For CBCT-based guided radiotherapy, image registration techniques are valuable because the shift data generated by comparing on-site images can be used to correct the treatment table at each fraction, resulting in more precise patient positioning. Specifically, in the process of IGRT, CT images (whose higher imaging dose is more harmful to patients) are first captured, and the cancerous areas are analyzed using these clear CT images before the radiotherapy plan is developed. Then, CBCT images (whose lower imaging dose is less harmful to patients) serve as a guide to the location of the patient’s cancerous areas during the radiotherapy sessions. However, current practice mainly relies on manual matching; although many registration methods have been proposed, most of them cannot meet the requirements of CT and cone beam CT (CBCT) image registration tasks. Some typical imaging styles of CT and CBCT in IGRT are shown in Figure 1.
As can be seen from Figure 1, unlike most existing image registration tasks, the CT and CBCT image registration task faces the following difficulties: (i) poor imaging quality. As shown in Pair 1 of Figure 1, CBCT images generally have poor imaging quality (low-dose imaging), and most existing intensity-based registration methods cannot find sufficient feature regions; (ii) imaging deformation. As shown in Pair 2, the yellow lines are of equal length, revealing that the CBCT image is deformed compared with the CT image, which indicates that rigid registration methods are not suitable for the CT–CBCT registration task; (iii) incomplete imaging. As shown in Pair 3 and Pair 4, the imaging region of the CBCT is incomplete compared with the corresponding CT image, so most visual-feature- and similarity-based registration methods cannot be applied to the CT–CBCT image registration task; (iv) inconsistent visual features. As shown in Pair 4, the poor quality of the CBCT image can produce messy feature points that are not strictly consistent with the CT image. It should be noted that these four issues can occur randomly or even simultaneously during CBCT imaging, which poses a great challenge to existing registration methods.
To address the above problems, we propose a novel multi-intensity optimization-based CT–CBCT registration method for image-guided radiotherapy. First, we propose a novel multi-weighted mean curvature (MWMC) filtering algorithm to process the input image pairs and obtain a series of multi-intensity image pairs of CT and CBCT. Then, the multi-intensity image pairs are registered using an intensity-based method. Next, we propose a novel evaluation index called intersection mutual information (IMI) to determine the optimal registration transformation among the different intensity image pairs. The final registration result can be produced by applying the optimal registration transformation to the source input image pairs. The experimental results demonstrate the excellent performance of the proposed method in CT–CBCT image registration tasks, with the registration accuracy and time cost meeting the requirements of IGRT.
The remainder of this paper is organized as follows: Section 2 introduces the proposed multi-intensity optimization-based CT–CBCT registration method; Section 3 provides the experimental results and their analysis; Section 4 draws the conclusion of our work and discusses prospects for future applications.

2. Materials and Methods

In this section, the proposed CT and CBCT registration method is introduced in detail. As shown in Figure 2, the proposed method has two main stages. First, the input CT–CBCT pairs are processed using the proposed multi-weighted mean curvature filtering algorithm (MWMC) to produce the multi-intensity edge preserving results. Then, the multi-intensity image pairs are registered using the intensity-based method. In addition, we propose a novel evaluation metric for incomplete image registration to measure the registration performance, called intersection mutual information (IMI). Using IMI, we can obtain the optimal registration transformation with different intensity image pairs, before finally applying this transformation to register the source input pairs.

2.1. Details of the Proposed Model

The proposed model in this paper mainly contains four steps: multi-weighted mean curvature filter, intensity-based registration, multi-intensity-based registration, and intersection mutual information-based optimal transformation selection. Below, these four steps are introduced in detail.

2.1.1. Multi-Weighted Mean Curvature Filter

Weighted mean curvature (WMC) [17] originates from mean curvature (MC) [18,19]. MC has been widely used in the field of image processing, and the MC of an image U is defined as
H(U) = \frac{1}{n} \nabla \cdot \left( \frac{\nabla U}{\|\nabla U\|_2} \right), \quad (1)
where \nabla and \nabla\cdot are the gradient and divergence operators, respectively. For 2D images, n = 2, and MC can be rewritten as
H = \frac{U_x^2 U_{yy} - 2 U_x U_y U_{xy} + U_y^2 U_{xx}}{2 \left( U_x^2 + U_y^2 \right)^{3/2}}. \quad (2)
MC is independent of image contrast, which makes it suitable for medical images. WMC is defined as
H_w(U) = n \|\nabla U\|_2 \, H(U). \quad (3)
For 2D images, Equation (3) can be rewritten as
H_w(U) = \|\nabla U\|_2 \, \nabla \cdot \left( \frac{\nabla U}{\|\nabla U\|_2} \right) = \Delta U - \frac{U_y^2 U_{yy} + 2 U_x U_y U_{xy} + U_x^2 U_{xx}}{U_x^2 + U_y^2}, \quad (4)
where Δ denotes the isotropic Laplace operator. The WMC regularization term is defined as
R_{H_w}(U) = \int \left\| H_w(U) \right\|^q \, dx, \quad (5)
where q is the scalar parameter defining the norm, q > 0; x \in \mathbb{R}^n is the spatial coordinate, and n is the dimension of the input image (n = 2 in this paper).
The proposed multi-weighted mean curvature (MWMC) is defined as
MWMC(I)_m = \begin{cases} I, & m = 0, \\ MWMC(I)_{m-1} + \delta \, H_w\!\left( MWMC(I)_{m-1} \right), & m \geq 1, \end{cases} \quad (6)
where m is the number of filtered image pairs obtained by MWMC, and δ is the size of the discrete timestep. Figure 3 shows a set of multi-intensity edge preserving results using MWMC with different m.
It can be seen from Figure 3 that edge preserving results with different intensity can be obtained using MWMC with different m. Figure 4 gives the results of color images using MWMC, where it can be clearly observed that MWMC can effectively retain the strong edge information of different intensities. The main motivation of the proposed method based on MWMC is that, by filtering out the chaotic textures and noisy points in CBCT and CT images, the registration accuracy can be improved. When m is large enough, only the significant information is retained in the input image, which helps us to improve the registration accuracy and efficiency.
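As a concrete illustration, the MWMC iteration of Equation (6) can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the finite-difference stencils, the periodic boundary handling via np.roll, and the small eps guarding the division are all our assumptions.

```python
import numpy as np

def weighted_mean_curvature(U, eps=1e-8):
    # central differences for first derivatives, 3-point stencils for second
    Ux  = (np.roll(U, -1, axis=1) - np.roll(U, 1, axis=1)) / 2.0
    Uy  = (np.roll(U, -1, axis=0) - np.roll(U, 1, axis=0)) / 2.0
    Uxx = np.roll(U, -1, axis=1) - 2.0 * U + np.roll(U, 1, axis=1)
    Uyy = np.roll(U, -1, axis=0) - 2.0 * U + np.roll(U, 1, axis=0)
    Uxy = (np.roll(Ux, -1, axis=0) - np.roll(Ux, 1, axis=0)) / 2.0
    # Eq. (4): H_w = Laplacian minus the second derivative along the gradient
    num = Uy**2 * Uyy + 2.0 * Ux * Uy * Uxy + Ux**2 * Uxx
    return (Uxx + Uyy) - num / (Ux**2 + Uy**2 + eps)

def mwmc(I, m, delta=0.1):
    # Eq. (6): iterate U <- U + delta * H_w(U), keeping every intermediate image
    U = np.asarray(I, dtype=float).copy()
    results = [U.copy()]
    for _ in range(m):
        U = U + delta * weighted_mean_curvature(U)
        results.append(U.copy())
    return results
```

Each entry of the returned list is one level of the multi-intensity stack; larger m keeps only the strong edges, as Figure 3 illustrates.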

2.1.2. Intensity-Based Registration

The registration problem can be defined as follows: find a suitable transformation such that a transformed version of a moving image is similar to a fixed image [20,21,22]; that is, given images s(m, n) and t(m, n), find the spatial transformation f such that s(f_x(m, n), f_y(m, n)) and t(m, n) are similar or consistent by optimizing
\min_{f, c} \; \psi(s(f), t, f), \quad (7)
where \psi(\cdot, \cdot, \cdot) denotes the difference measure. The spatial transformation can be expressed as
s(f_x(m, n), f_y(m, n)) = \sum_{p \in \mathbb{Z}} \sum_{q \in \mathbb{Z}} c[p, q] \, \varphi(f_x(m, n) - p) \, \varphi(f_y(m, n) - q), \quad (8)
where c[p, q] are the coefficients, and \varphi(x) is the image representation function. For our registration tasks, CT and CBCT are 2D images; hence, the optimization function using the least squares method can be defined as
\psi(s(f), t, f) = \sum_{m} \sum_{n} \left( s(f_x(m, n), f_y(m, n)) - t(m, n) \right)^2. \quad (9)
As the CT and CBCT images do not use the same imaging modality, there is no linear relationship between their intensity values. Accordingly, similarity measures such as the sum of squared differences and cross-correlation may not work. Thus, we use mutual information to measure the similarity of CT and CBCT images. In this paper, we used CT as the fixed image and CBCT as the moving image.
Let P_{s(f), t}(s(f), t) denote the joint probability distribution function of images s(f) and t. The mutual information [23] can be defined as
MI(s(f), t) = \sum_{s(f), t} P_{s(f), t}(s(f), t) \log \left( \frac{P_{s(f), t}(s(f), t)}{P_{s(f)}(s(f)) \, P_t(t)} \right). \quad (10)
Thus, the optimization function can be defined as
\max_{f} \; MI(s(f), t). \quad (11)
The process of intensity-based image registration is shown in Figure 5. Intensity-based image registration is an iterative process that begins with a random initial transformation matrix. The transformation matrix is applied to the input CBCT image with bilinear interpolation. When the transformation is finished, a similarity metric is used to compare the transformed CBCT with the input CT image. Next, the optimizer checks for a stop condition: the process stops when the similarity measure is large enough or the maximum number of iterations is reached.
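To make this loop concrete, the sketch below computes the MI of Equation (10) from a joint histogram and uses it to score candidate transformations. As a deliberately simplified stand-in for the optimizer, it exhaustively searches integer translations; the actual method optimizes a richer transformation with interpolation, so treat this purely as a didactic sketch.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Eq. (10): MI from the joint histogram of intensities
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint probability P(a, b)
    pa = p.sum(axis=1, keepdims=True)       # marginal P(a)
    pb = p.sum(axis=0, keepdims=True)       # marginal P(b)
    nz = p > 0                              # skip empty bins (avoid log 0)
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())

def register_translation(moving, fixed, max_shift=5):
    # evaluate every candidate shift and keep the MI-maximizing one
    best_mi, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            mi = mutual_information(np.roll(moving, (dy, dx), axis=(0, 1)), fixed)
            if mi > best_mi:
                best_mi, best_shift = mi, (dy, dx)
    return best_shift
```

Shifting a test image and registering it back recovers the inverse shift, since MI peaks at exact alignment.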

2.1.3. Multi-Intensity Based Registration

The intensity-based registration method introduced in Section 2.1.2 lacks robustness for CT–CBCT registration tasks. Figure 6 shows some intensity-based registration results using different similarity metrics. It can be found that CT–CBCT image registration cannot be achieved using the phase correlation and mean squares similarity metrics. Mutual information outperformed the other two similarity metrics. However, due to the imaging quality of CBCT images, the registration result based on mutual information still could not obtain ideal results (as shown by the white arrows in Figure 6).
The reason for the poor registration performance in Figure 6 is the poor imaging quality of CT and CBCT introduced in Section 1. Because intensity-based registration methods are more sensitive to local intensity information, noisy points and messy textures directly affect the registration performance. Therefore, we propose a multi-intensity-based registration method.
First, we generate a series of multi-intensity edge-preserving CT and CBCT image pairs using the MWMC proposed in Section 2.1.1. Then, the multi-intensity edge-preserving results are registered using the intensity-based method, allowing us to obtain transformation matrices of the input images with different intensities. Some results are given in Figure 7. It can be seen that excellent registration performance could be achieved with certain intensity edge-preserving results, such as m = 80 and m = 100. Therefore, after determining the optimal registration transformation, it can be directly applied to the input CT and CBCT images, thus obtaining an ideal registration result.

2.1.4. Intersection Mutual Information-Based Optimal Transformation Selection

Due to the incomplete imaging and poor imaging quality of CBCT, the calculation of mutual information can be vulnerable to noisy points and inconsistent imaging regions. In order to overcome this problem, we propose a novel similarity metric, i.e., intersection mutual information, which can be defined as follows:
IMI = MI\left\{ \left( R_{moved\_CBCT} \cap R_{CT} \right) \cdot I_{moved\_CBCT}, \; \left( R_{moved\_CBCT} \cap R_{CT} \right) \cdot I_{CT} \right\}, \quad (12)
where MI\{\cdot, \cdot\} can be computed using Equation (10), and R_I denotes the imaging region of image I. In our paper, the imaging region was segmented by the active contours (snakes) technique [24,25]. To illustrate the validity of the proposed IMI, Figure 8 presents a set of experiments. As can be seen from Figure 8b, the MI value of the CT and the moving CBCT was greater than the MI value of the CT and the moved CBCT, which is obviously unreasonable. In other words, we cannot directly use MI to measure the registration degree of the input image pairs. Using Equation (12), the obtained IMI value after registration is significantly higher than that before registration, indicating that we can measure the registration degree of the input image pairs using IMI.
Finally, the optimal transformation f o p t with different multi-intensity registration results can be determined by ranking their IMI.
f_{opt} = \arg\max_{1 \leq i \leq m} \; IMI(I_{cbct}(f_i), I_{ct}), \quad (13)
where m is the number of multi-intensity edge preserving image pairs obtained by MWMC. The optimal transformation can be found from the m transformations using Equation (13). Furthermore, the final registration result can be obtained by applying f o p t to the input CBCT.
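Assuming the imaging regions are available as binary masks (the paper obtains them with active contours; the toy masks in the test below are ours), Equation (12) reduces to computing MI over only those pixels that lie inside both regions:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Eq. (10): MI from the joint histogram of intensities
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    pa = p.sum(axis=1, keepdims=True)
    pb = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())

def intersection_mutual_information(moved_cbct, ct, region_cbct, region_ct, bins=32):
    # Eq. (12): restrict the comparison to the intersection of imaging regions
    mask = region_cbct & region_ct
    return mutual_information(moved_cbct[mask], ct[mask], bins)
```

Restricting the histogram to the intersection removes the contribution of pixels imaged in only one of the two scans, which is why IMI ranks registrations of incomplete CBCT images more reliably than plain MI.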

2.2. Sample and Data

The datasets used in our experiments were obtained from our cooperative medical facility, Affiliated Hospital of Yunnan University. The samples are shown in Figure 9. All datasets were anonymized. For each patient, a pair of treatment planning CT and CBCT images was obtained. All planning CT images were acquired using the same CT simulation system (Philips Brilliance Big Bore), with a slice thickness of 3 mm. Likewise, CBCT images were all obtained using the same imaging device integrated on linac (XVI, Elekta Solutions AB, Stockholm, Sweden) prior to treatment. The size of all images was uniformly set to 256 × 256 pixels in our experiments.

2.3. Measures of Parameters

The parameters required by the proposed method mainly include the multi-intensity retention parameter m and the intensity-based registration parameters. After extensive testing on CT and CBCT images, we found that the processing results remained stable once the multi-intensity parameter exceeded 100; therefore, we set m to 100 in this paper. For the CT and CBCT images exported from medical devices, the size was uniformly normalized to 250 × 250. In practical applications, the size can be adjusted as needed.

3. Experiments and Analysis

To illustrate the effectiveness of the proposed CT–CBCT registration method, we compare the proposed method with some typical registration methods using several datasets.

3.1. Experimental Settings

In our experiments, we compared the proposed method with the Advanced Normalization Tools (ANTs) [26,27,28,29,30,31], one of the most widely used medical image registration toolkits, with very good performance. An overview of the compared methods and similarity terms is given in Table 1. Among them, Affine denotes the affine transformation-based method, Rigid denotes the rigid transformation-based method, Similarity denotes the rotation and uniform scaling transformation-based method, and SyN denotes the symmetric diffeomorphic transformation-based method.

3.2. Experimental Results and Analysis

The experimental results using different medical image registration methods are shown in Figure 10. As can be seen from Pair 4, when the imaging regions of CT and CBCT images were relatively consistent, i.e., the CBCT image was relatively complete, the Affine, Similarity, SyN-based, and our proposed method could achieve better registration results. For CBCT images with partially missing regions, the Affine and SyN-based methods also achieved good registration performance, as shown in Pair 3. However, it can be seen from the enlarged region (yellow arrow) that the registration accuracy of the proposed method was superior to that of the Affine and SyN-based methods.
As shown in Figure 11 (Pair 1 and Pair 2), the compared methods were completely ineffective for severely incomplete imaging CBCT and CT image pairs. However, the proposed method also achieved good registration accuracy in these cases. These experiments illustrate that the proposed method can overcome the challenges in CT and CBCT image registration tasks introduced in Section 1.
To further validate the performance of the proposed CT–CBCT registration method, we performed a quantitative comparison of the different methods using the IMI and SSIM metrics [32,33]; the results are shown in Table 2. It can be seen that the IMI values obtained using our method were superior to those of the other methods. In particular, our method exhibited a great advantage for incomplete images, such as Pair 1 and Pair 2. This is consistent with the visual comparison shown in Figure 10. For relatively complete CBCT images, our method also achieved good performance (Pair 3 and Pair 4).
Table 3 presents the average values of IMI and SSIM using different registration methods on our datasets. It can be seen that our method achieved the best IMI and SSIM values, indicating its higher registration accuracy compared to other methods. More importantly, the proposed method is a promising candidate for application in IGRT.

4. Conclusions

In this paper, we proposed a novel multi-intensity optimization-based CT–CBCT registration method for IGRT tasks. The proposed method can overcome the problems caused by poor imaging quality, imaging deformation, incomplete imaging, and inconsistent visual features of CBCT and CT image pairs. The experimental results demonstrated the excellent performance of the proposed method in CT–CBCT image registration tasks and showed its potential application in IGRT. At present, this method can be applied to the evaluation of postoperative treatment effectiveness, as well as other medical applications. However, because the proposed method involves an optimization process, it is difficult to guide patient positioning during IGRT in real time. Thus, in the future, we will consider how to reduce the complexity of the proposed method and make it more suitable for IGRT.

Author Contributions

Conceptualization, L.X. and K.H.; methodology, L.X. and K.H.; software, L.X.; validation, J.G. and D.X.; formal analysis, L.X.; investigation, L.X.; resources, L.X.; data curation, L.X.; writing—original draft preparation, L.X.; writing—review and editing, K.H.; visualization, J.G.; supervision, D.X.; project administration, D.X.; funding acquisition, D.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China under Grant No. 62162068 and Grant No. 61761049, in part by the Yunnan Province Ten Thousand Talents Program and the Yunling Scholars Special Project under Grant YNWR-YLXZ-2018-022, and in part by the Yunnan Provincial Science and Technology Department–Yunnan University “Double First Class” Construction Joint Fund Project under Grant No. 2019FY003012.

Institutional Review Board Statement

Our study did not involve animals or any other conditions requiring ethical approval.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zheng, R.; Zhang, S.; Zeng, H.; Wang, S.; Sun, K.; Chen, R.; Li, L.; Wei, W.; He, J. Cancer incidence and mortality in China, 2016. J. Natl. Cancer Cent. 2022, 2, 1–9.
  2. Li, Z.; Fang, Y.; Chen, H.; Zhang, T.; Yin, X.; Man, J.; Yang, X.; Lu, M. Spatiotemporal trends of the global burden of melanoma in 204 countries and territories from 1990 to 2019: Results from the 2019 global burden of disease study. Neoplasia 2022, 24, 12–21.
  3. Coccia, M.; Roshani, S.; Mosleh, M. Scientific Developments and New Technological Trajectories in Sensor Research. Sensors 2021, 21, 7803.
  4. Zhang, X.; Wang, X.; Li, X.; Zhou, L.; Nie, S.; Li, C.; Wang, X.; Dai, G.; Deng, Z.; Zhong, R. Evaluating the impact of possible interobserver variability in CBCT-based soft-tissue matching using TCP/NTCP models for prostate cancer radiotherapy. Radiat. Oncol. 2022, 17, 1–9.
  5. Verellen, D.; Ridder, M.; Linthout, N.; Tournel, K.; Soete, G.; Storme, G. Innovations in image-guided radiotherapy. Nat. Rev. Cancer 2007, 7, 949–960.
  6. Clough, A.; Sanders, J.; Banfill, K.; Faivre-Finn, C.; Price, G.; Eccles, C.L.; Aznar, M.; Van Herk, M. A novel use for routine CBCT imaging during radiotherapy to detect COVID-19. Radiography 2022, 28, 17–23.
  7. Pollard, J.; Wen, Z.; Sadagopan, R.; Wang, J.; Ibbott, G. The future of image-guided radiotherapy will be MR guided. Br. J. Radiol. 2017, 90, 20160667.
  8. Åström, L.M.; Behrens, C.P.; Calmels, L.; Sjöström, D.; Geertsen, P.; Mouritsen, L.S.; Serup-Hansen, E.; Lindberg, H.; Sibolt, P. Online adaptive radiotherapy of urinary bladder cancer with full re-optimization to the anatomy of the day: Initial experience and dosimetric benefits. Radiother. Oncol. 2022, 171, 37–42.
  9. Coccia, M. Artificial intelligence technology in cancer imaging: Clinical challenges for detection of lung and breast cancer. J. Soc. Adm. Sci. 2019, 6, 82–98.
  10. Gong, J.; He, K.; Xie, L.; Xu, D.; Yang, T. A Fast Image Guide Registration Supported by Single Direction Projected CBCT. Electronics 2022, 11, 645.
  11. Papp, J.; Simon, M.; Csiki, E.; Kovács, Á. CBCT Verification of SRT for Patients with Brain Metastases. Front. Oncol. 2021, 11, 745140.
  12. Baeza, J.A.; Zegers, C.M.; de Groot, N.A.; Nijsten, S.M.; Murrer, L.H.; Verhoeven, K.; Boersma, L.; Verhaegen, F.; van Elmpt, W. Automatic dose verification system for breast radiotherapy: Method validation, contour propagation and DVH parameters evaluation. Phys. Med. 2022, 97, 44–49.
  13. Coccia, M. Probability of discoveries between research fields to explain scientific and technological change. Technol. Soc. 2022, 68, 101874.
  14. Coccia, M.; Finardi, U. New technological trajectories of non-thermal plasma technology in medicine. Int. J. Biomed. Eng. Technol. 2013, 11, 337–356.
  15. Liang, J.; Liu, Q.; Grills, I.; Guerrero, T.; Stevens, C.; Yan, D. Using previously registered cone beam computerized tomography images to facilitate online computerized tomography to cone beam computerized tomography image registration in lung stereotactic body radiation therapy. J. Appl. Clin. Med. Phys. 2022, 23, e13549.
  16. Woodford, K.; Panettieri, V.; Ruben, J.D.; Davis, S.; Tran Le, T.; Miller, S.; Senthi, S. Oesophageal IGRT considerations for SBRT of LA-NSCLC: Barium-enhanced CBCT and interfraction motion. Radiat. Oncol. 2021, 16, 218.
  17. Gong, Y.; Goksel, O. Weighted mean curvature. Signal Process. 2019, 164, 329–339.
  18. Taylor, J.E. II—mean curvature and weighted mean curvature. Acta Metall. Mater. 1992, 40, 1475–1485.
  19. Colding, T.; Minicozzi, W.; Pedersen, E. Mean curvature flow. Bull. Am. Math. Soc. 2015, 52, 297–333.
  20. Papenberg, N.; Schumacher, H.; Heldmann, S.; Wirtz, S.; Bommersheim, S.; Ens, K.; Modersitzki, J.; Fischer, B. A Fast and Flexible Image Registration Toolbox. In Bildverarbeitung für die Medizin; Springer: Berlin/Heidelberg, Germany, 2007; pp. 106–110.
  21. Johnson, H.J.; Christensen, G.E. Consistent landmark and intensity-based image registration. IEEE Trans. Med. Imaging 2002, 21, 450–461.
  22. Klein, S.; Staring, M.; Murphy, K.; Viergever, M.; Pluim, P. Elastix: A toolbox for intensity-based medical image registration. IEEE Trans. Med. Imaging 2009, 29, 196–205.
  23. Duncan, T.E. On the calculation of mutual information. SIAM J. Appl. Math. 1970, 19, 215–220.
  24. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331.
  25. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277.
  26. Avants, B.B.; Tustison, N.; Song, G. Advanced normalization tools (ANTS). Insight J. 2009, 2, 1–35.
  27. Tustison, N.J.; Cook, P.A.; Klein, A.; Song, G.; Das, S.R.; Duda, J.T.; Kandel, B.; Strien, N.; Stone, J.; Gee, J.; et al. Large-scale evaluation of ANTs and FreeSurfer cortical thickness measurements. Neuroimage 2014, 99, 166–179.
  28. Sanchez, C.E.; Richards, J.E.; Almli, C.R. Age-specific MRI templates for pediatric neuroimaging. Dev. Neuropsychol. 2012, 37, 379–399.
  29. Tustison, N.J. Explicit B-spline regularization in diffeomorphic image registration. Front. Neuroinform. 2013, 7, 39.
  30. Avants, B.B.; Tustison, N.; Song, G.; Cook, P.; Klein, A.; Gee, J. A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage 2011, 54, 2033–2044.
  31. Avants, B.B.; Epstein, C.L.; Grossman, M.; Gee, J. Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 2008, 12, 26–41.
  32. Omer, O.A.; Tanaka, T. Robust image registration based on local standard deviation and image intensity. In Proceedings of the 2007 6th International Conference on Information, Communications & Signal Processing, Singapore, 10–13 December 2007; pp. 1–5.
  33. Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study. J. Comput. Commun. 2019, 7, 8–18.
Figure 1. Typical imaging styles of CBCT and CT in IGRT (chest site).
Figure 2. Flowchart of the proposed method.
Figure 3. Multi-intensity edge preserving results using MWMC with different m. The first and third lines display the CBCT and CT images, whereas the second and fourth lines display the pseudo-color maps.
Figure 4. Multi-intensity edge preserving results of color image using MWMC with different m.
Figure 5. Flowchart of the intensity-based registration method.
Figure 6. Intensity-based registration result using different similarity metrics.
Figure 7. Multi-intensity-based registration results using different m.
Figure 8. Illustration of IMI. (a) The source CT, moving CBCT, moved CBCT, and IMI mask maps. (b) The MI and IMI values of CT and moving CBCT and of CT and moved CBCT. The y-axis represents the values of IMI and SSIM metrics.
Figure 9. Data used in our experiments (CBCT and CT images of the chest site).
Figure 10. Visual comparison of different medical image registration methods on our datasets.
Figure 11. Comparison of different methods according to IMI and SSIM. The y-axis represents the values of IMI and SSIM metrics for Pair 1 to Pair 4 using different registration methods.
Table 1. Compared methods. Similarity measure acronyms: CC = neighborhood cross-correlation, Mean Squares = mean squared difference, MI = mutual information.
Method      Transformation                 Similarity Measures
Affine      Affine registration            MI, Mean Squares, GC
Rigid       Rigid registration             MI, Mean Squares, GC
Similarity  Rotation + uniform scaling     MI, Mean Squares, GC
SyN         Symmetric diffeomorphic        CC, MI, Mean Squares, Demons
Table 2. Quantitative comparison of different methods according to IMI and SSIM metrics. The best results are shown in bold.
Method        Pair 1           Pair 2           Pair 3           Pair 4
              IMI     SSIM     IMI     SSIM     IMI     SSIM     IMI     SSIM
Affine        0.4746  0.3183   0.5032  0.2705   0.6904  0.3030   0.8766  0.3134
Rigid         0.5711  0.2542   0.5035  0.2973   0.6923  0.2636   0.8351  0.3515
Similarity    0.4727  0.2569   0.5549  0.3885   0.6912  0.2667   0.8767  0.3138
SyN           0.4742  0.2712   0.4813  0.2283   0.7236  0.2906   0.8835  0.3188
Our method    0.5940  0.3898   0.5826  0.3847   0.7676  0.3066   0.8926  0.3159
Table 3. The average IMI and SSIM metrics using different methods for all datasets. The best results are shown in bold.
Metric    Affine    Rigid     Similarity    SyN       Our Method
IMI       0.6362    0.6505    0.6488        0.6406    0.7092
SSIM      0.3013    0.2916    0.3064        0.2772    0.3492
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Xie, L.; He, K.; Gong, J.; Xu, D. Multi-Intensity Optimization-Based CT and Cone Beam CT Image Registration. Electronics 2022, 11, 1862. https://doi.org/10.3390/electronics11121862
