Article

miRID: Multi-Modal Image Registration Using Modality-Independent and Rotation-Invariant Descriptor

by
Thuvanan Borvornvitchotikarn
* and
Werasak Kurutach
Faculty of Information Science and Technology, Mahanakorn University of Technology, Bangkok 10530, Thailand
*
Author to whom correspondence should be addressed.
Symmetry 2020, 12(12), 2078; https://doi.org/10.3390/sym12122078
Submission received: 22 November 2020 / Revised: 8 December 2020 / Accepted: 10 December 2020 / Published: 15 December 2020
(This article belongs to the Section Computer)

Abstract: Axiomatically, symmetry is a fundamental property of the mathematical functions defining similarity measures, and similarity measures are important tools in many areas of computer science, including machine learning and image processing. In this paper, we investigate a new technique to measure the similarity between two images, a fixed image and a moving image, in multi-modal image registration (MIR). MIR is essential in medical image processing and useful in diagnosis and therapy guidance, but it remains a very challenging task due to the lack of robustness against rotational variance in the image transformation process. Our investigation leads to a novel local self-similarity descriptor, called the modality-independent and rotation-invariant descriptor (miRID). By relying on the mean of the intensity values, the miRID is simple to compute and can effectively handle the complicated intensity relationships between multi-modal images. Moreover, it overcomes the problem of rotational variance by sorting the numerical values, each of which is the absolute difference between a pixel's intensity and the mean of all pixel intensities within an image patch. The experimental results show that our method outperforms others in both multi-modal rigid and non-rigid image registration.

1. Introduction

Over the last decade, medical imaging has been an important tool in clinical practice and many biomedical studies [1]. In image-guided therapy, medical images from various sources provide different information and, therefore, are employed for different purposes. For example, X-ray, ultrasound, computed tomography (CT) and magnetic resonance (MR) imaging provide anatomical structure information, whereas positron emission tomography (PET) and single-photon emission computed tomography (SPECT) give metabolic and physiological information [2]. Consequently, medical practitioners have to use multi-modal medical images generated at different times and with different resolutions in order to make a better diagnosis of a disease [1]. Primarily, multi-modal image registration (MIR) is the underlying mechanism of an image-guided diagnostic tool. The aim of MIR is to align the corresponding features of two images, and it has been extended to combine the anatomical and functional structures from two different image modalities. Applications of image registration can therefore be classified into two major categories: mono-modal and multi-modal registration. The former involves alignment between images of the same modality and the latter between different modalities [2,3,4].
In general, CT and MR possess high spatial resolutions but limited functional information. On the other hand, PET and SPECT can provide physiological images but lack anatomical information [3]. In order to obtain more complete medical information, MIR has been proposed to align two images with different modalities, a moving one and a fixed one, so that both anatomical and functional information can be provided [5,6]. For example, the registration of CT and PET images has been used for positioning esophageal cancer patients, which is useful in detecting the tumor target and planning the radiation treatment in clinical therapy [7]. MR-CT registration has been used to delineate the clinical target volume in the image-guided radiotherapy of prostate cancer [8]. In fact, applications of MIR to medical multi-modal images face particular challenges due to the complicated relationships between the intensities of the two images; a difficult task is to define the similarity of the registered images [9].
During the last few decades, several similarity measures for MIR have been proposed. In order to qualify as a similarity measure, all of them must satisfy at least two fundamental mathematical properties: symmetry and reflexivity [10]. Initially, mutual information (MI) was extensively and successfully used in image processing to measure intensity relationships [4,6]. Borvornvitchotikarn and Kurutach [11] introduced a taxonomy of MI methods and explained some MI principles and their limitations. The performance of MI was enhanced by many researchers, for example through Tsallis and Renyi's entropy [12] and Jensen-Renyi's entropy [13]. Second-order MI (SMI) [14] incorporated spatial information by extending MI to the co-occurrence of intensity pairs of neighbors in the image. Regional MI (RMI) [15] incorporated regional information, and PCA-based regional MI (PRMI) [16] enhanced traditional RMI by using principal component analysis (PCA). Conditional MI (CMI), proposed by Loeckx et al. [17], used spatial information to improve the calculation of the joint intensity histogram. Moreover, Rivaz et al. [18] presented self-similarity α-MI (SeSaMI), which adds local structural information in the image to the MI, and they also proposed contextual conditioned MI (CoCoMI), which incorporates contextual information into CMI [19]. Although SeSaMI and CoCoMI achieved good results in MIR, they come at an increased computational cost.
Furthermore, Heinrich et al. [20] proposed a method called the modality independent neighbourhood descriptor (MIND) as a local self-similarity measure based on neighborhood information. MIND, which uses the sum of squared differences (SSD) to estimate similarity, is more accurate than several other methods. However, its evaluation relies heavily on the central pixel and is thus vulnerable to noise. Consequently, Heinrich et al. [21] proposed the so-called self-similarity context (SSC) to improve the noise robustness of MIND; SSC uses the Hamming distance of binary results instead of the sum of absolute differences (SAD). Moreover, Kasiri et al. [22] proposed a patch-based estimation, which applies MI to measure the similarity of patches. In another work by Kasiri et al. [23], a measure called sorted self-similarity (Ssesi) was proposed; it sorts the elements within the patch to solve the rotational variance problem of the SSD when evaluating patch similarity.
Local binary patterns (LBP) are among the most effective and frequently used methods in texture classification [24]. Recently, Jiang et al. [25] introduced a modality-independent local binary pattern (miLBP) descriptor, which outperforms SSC, SeSaMI, CoCoMI and the linear correlation of linear combination (LC2) in terms of robustness against noise and intensity non-uniformity in medical image processing. However, miLBP is still variant under rotational deformations. Borvornvitchotikarn and Kurutach [26] proposed a robust self-similarity descriptor (RSSD), which mitigates the rotational variance of miLBP and improves the accuracy of MIR. However, both miLBP and RSSD assess similarity based on the central pixel, and therefore any image artifact in that pixel can affect the performance of the descriptors.
To deal with the above-mentioned problems, we propose a novel robust local descriptor based on miLBP for the registration of different image modalities. Its fundamental technique relies on the mean of all pixels or voxels to estimate the self-similarity of the intensities within a patch. Moreover, it yields a rotation-invariant metric by sorting the relevant values in the region of interest. We call this method the modality-independent and rotation-invariant descriptor (miRID).
This paper is organized as follows. Section 2 provides our proposed method. Section 3 shows the experimental result and gives some discussion. Finally, Section 4 concludes this work.

2. The Proposed Method

2.1. Registration Framework

The goal of MIR is to find the best alignment between a moving image and a fixed image by maximizing the similarity (or minimizing the dissimilarity) between the two registered images. Generally, MIR has three main components: a similarity measure, a transformation and an optimization. The similarity measure estimates the (dis)similarity between the fixed image and the transformed moving image. The transformation model deforms the moving image to fit the fixed image. The optimization method selects the transformation parameters that give the best similarity [27]. In this work, the registration $\hat{T}$ of a moving image $I_m$ to a fixed image $I_f$ can be formulated as

$$\hat{T} = \arg\min_{T}\, D(I_f, T(I_m)) \tag{1}$$

where $D(I_f, T(I_m))$ is a dissimilarity measure between the fixed image and the transformed moving image, and $T$ denotes a transformation function; the registration seeks the $T$ that minimizes $D(I_f, T(I_m))$.
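As a toy illustration of Equation (1), the following Python sketch selects, from a set of candidate transformations, the one with the lowest dissimilarity. All names here are hypothetical, and a discrete search over candidates stands in for the continuous optimization described later in this section (this is not the MATLAB implementation used in the experiments):

```python
import numpy as np

def register(fixed, moving, transforms, D):
    """Return the candidate transform T minimizing D(I_f, T(I_m)), as in Equation (1)."""
    costs = [D(fixed, T(moving)) for T in transforms]
    return transforms[int(np.argmin(costs))]

# Toy example: 1D "images"; the candidate transforms are integer shifts.
fixed = np.array([0, 0, 1, 2, 1, 0, 0], dtype=float)
moving = np.roll(fixed, 2)                        # moving image, shifted by 2
shifts = [lambda im, s=s: np.roll(im, -s) for s in range(-3, 4)]
ssd = lambda a, b: float(((a - b) ** 2).sum())    # stand-in dissimilarity
best = register(fixed, moving, shifts, ssd)
print(best(moving))                               # realigned with the fixed image
```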

2.2. miLBP

miLBP is a technique based on the traditional LBP. However, instead of using a fixed, manually selected threshold as in LBP, miLBP introduced an adaptive threshold based on the standard deviation of all pixel intensities within the patch. miLBP is defined by Equation (2) [25], and its concept is illustrated in Figure 1:
$$miLBP = \sum_{p=1}^{P} s(|g_p - g_c|)\, 2^{p-1}, \qquad s(x) = \begin{cases} 1, & |g_p - g_c| > \delta \\ 0, & |g_p - g_c| \le \delta \end{cases} \tag{2}$$
where $P$ denotes the number of neighboring pixels, $c$ denotes the central pixel of the patch, $p$ denotes a neighboring pixel of $c$ (where $p$ is counted as 1 for the pixel at the upper-left corner and increased by 1 as it moves to the next position in a clockwise manner), $g_p$ denotes the intensity value of $p$, $g_c$ denotes the intensity value of $c$ and $\delta$ denotes the standard deviation of all pixel intensities within the patch.
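For concreteness, here is a minimal NumPy sketch of Equation (2) for a single 3 × 3 patch. The clockwise neighbor ordering starting from the upper-left corner follows the description above; the example patch values are invented for illustration:

```python
import numpy as np

def milbp(patch: np.ndarray) -> int:
    """Compute the miLBP code of a square 3x3 patch, following Equation (2)."""
    c = patch.shape[0] // 2
    g_c = patch[c, c]          # intensity of the central pixel
    delta = patch.std()        # adaptive threshold: std. dev. of all patch intensities

    # Clockwise neighbor coordinates, starting at the upper-left corner (p = 1).
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]

    code = 0
    for p, (row, col) in enumerate(coords, start=1):
        if abs(patch[row, col] - g_c) > delta:    # s(|g_p - g_c|)
            code += 2 ** (p - 1)
    return code

patch = np.array([[52, 55, 61],
                  [59, 70, 65],
                  [60, 62, 68]], dtype=float)
print(milbp(patch))
```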

2.3. miRID

This section proposes a novel local self-similarity descriptor that enhances the technique of miLBP. As mentioned previously, the miLBP method estimates the similarity value based on the central pixel; consequently, image artifacts within the central pixel can affect the performance of the descriptor. We therefore need a descriptor for multi-modal image registration that avoids using the central pixel when evaluating patch relationships. To accomplish this, we adopt the mean of all intensity values within the patch instead of the intensity of the central pixel. In addition, to make our method robust against rotational deformations, a sorting operation is performed on the absolute differences between each pixel's intensity and the mean of all pixel intensities within a patch of the image $I$.
The proposed method is defined by Equation (3):
$$miRID(I_n) = s\big(\mathrm{Sort}_{desc}(|g_i - M|)\big), \qquad s(x) = \begin{cases} 1, & \text{if } x > \delta \\ 0, & \text{otherwise} \end{cases} \tag{3}$$
where $g_i$ denotes the intensity value of the pixel at position $i$, $M$ denotes the mean intensity of all pixels within the patch, $|g_i - M|$ denotes the absolute difference between $g_i$ and $M$, and $\mathrm{Sort}_{desc}$ denotes the descending-order sorting operation.
In Equation (3), we investigate the dissimilarity measure within an image patch. Firstly, let $G$ denote the number of pixels in the patch; $G$ can be calculated by $G = (2r+1)^d$, where $r$ is the radius and $d$ is the dimension. For example, for a 3 × 3 patch in a 2D image, as shown in Figure 2, $G = 9$ ($r = 1$ and $d = 2$). Our work employs the mean of the pixel intensities within the patch (denoted by $M$), rather than the central pixel, in calculating the distance. More specifically, let $g_i$ denote the intensity of the pixel at position $i$ in the patch (i.e., $g_i \in \{g_1, g_2, g_3, \ldots, g_G\}$); the mean of the intensity values is then $M = \frac{1}{G}\sum_{i=1}^{G} g_i$, and the absolute mean difference is $|g_i - M|$. Secondly, the values $|g_i - M|$, $1 \le i \le G$, are arranged in descending order. The binary result of this sorting step is invariant when the image is rotated; this is the essential step that yields a rotation-invariant self-similarity descriptor. Formally, the sorted sequence is $\mathrm{Sort}_{desc}(|g_i - M|)$, $1 \le i \le G$. Finally, thresholding digitizes the values of $\mathrm{Sort}_{desc}(|g_i - M|)$, labeling each position $i$ with 0 or 1. The adaptive threshold used for the labels is defined by $\delta = \sqrt{\frac{1}{G}\sum_{i=1}^{G} (g_i - M)^2}$. Algorithm 1 shows the details of our calculation.
Algorithm 1. The Calculation of Our Descriptor
Input: image patch $I_n$ at position $n$ of $I$
Output: $miRID(I_n)$
 (1) $I_n = \{g_1, \ldots, g_G\}$
 (2) Calculate $G$, $M$, $\delta$
 (3) For ($i = 1$; $i \le G$; $i$++)
 (4)   ID[$i$] = $|g_i - M|$
 (5) End for
 (6) $\mathrm{Sort}_{desc}$(ID[1…G])
 (7) For ($i = 1$; $i \le G$; $i$++)
 (8)   If ID[$i$] > $\delta$ then ID[$i$] = 1 else ID[$i$] = 0
 (9) End for
 (10) $miRID(I_n)$ = ID[1…G]
 (11) Return $miRID(I_n)$
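Algorithm 1 translates directly into a short NumPy sketch. This is a non-authoritative illustration for a 2D patch (the patch is assumed square with an odd side length), not the authors' MATLAB/C implementation:

```python
import numpy as np

def mirid(patch: np.ndarray) -> np.ndarray:
    """Compute the miRID binary descriptor of an image patch (Algorithm 1)."""
    g = patch.astype(float).ravel()          # g_1 ... g_G
    G = g.size                               # G = (2r + 1)^d
    M = g.mean()                             # mean intensity within the patch
    delta = np.sqrt(((g - M) ** 2).mean())   # adaptive threshold (std. dev.)

    ID = np.abs(g - M)                       # absolute mean differences |g_i - M|
    ID = np.sort(ID)[::-1]                   # descending order -> rotation invariance
    return (ID > delta).astype(np.uint8)     # binarize against delta

patch = np.array([[52, 55, 61],
                  [59, 70, 65],
                  [60, 62, 68]], dtype=float)
print(mirid(patch))                 # identical for any rotation of the patch
print(mirid(np.rot90(patch)))
```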
The miRID is used to evaluate the dissimilarity between the fixed image $I_f$ and the moving image $I_m$ deformed by $\phi$. The dissimilarity measure $D(I_f, I_m; \phi)$, whose value lies within the range [0, 1], can be evaluated by Equation (4):

$$D(I_f, I_m; \phi) = \frac{1}{W}\, \Xi\big(\, miRID(I_f) \oplus miRID(I_m; \phi) \,\big) \tag{4}$$

where $D(I_f, I_m; \phi)$ denotes the dissimilarity between the images to be registered; $\Xi$ represents the bit count operation, which counts the number of bits having the value of 1; $\oplus$ is the Hamming distance (bitwise XOR) operation; $miRID(I_f)$ and $miRID(I_m; \phi)$ are the sorted binary patterns of bits within $I_f$ and the deformed $I_m$; and $W$ denotes the number of bits in the $miRID$ results.
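Equation (4) then amounts to a normalized Hamming distance between the two bit patterns. A minimal sketch (the aggregation over all patches of the image is omitted for brevity):

```python
import numpy as np

def dissimilarity(desc_f: np.ndarray, desc_m: np.ndarray) -> float:
    """Normalized Hamming distance between two miRID bit patterns (Equation (4))."""
    W = desc_f.size                        # number of bits in the descriptor
    xor = np.bitwise_xor(desc_f, desc_m)   # Hamming distance operation
    return float(xor.sum()) / W            # bit count, normalized to [0, 1]

d_f = np.array([1, 1, 0, 0, 1, 0, 0, 0, 0], dtype=np.uint8)
d_m = np.array([1, 0, 0, 0, 1, 0, 1, 0, 0], dtype=np.uint8)
print(dissimilarity(d_f, d_m))             # 2 differing bits out of 9
```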
Figure 3 shows a block diagram illustrating our registration model. Specifically, $miRID(I_f)$ and $miRID(I_m; \phi)$ represent the self-similarity values of the original fixed image and the transformed moving image, respectively; both can be evaluated by Equation (3). The dissimilarity between the miRID of $I_f$ and the miRID of the deformed $I_m$ is calculated by $D(I_f, I_m; \phi)$, as defined in Equation (4). For the optimization of $D(I_f, I_m; \phi)$, the gradient descent method [28] was used to find the optimal transformation. The value of $\phi_k$ is adjusted in the transformation process according to Equation (5); to find the optimal transformation, we minimize the dissimilarity value $D(I_f, I_m; \phi)$:
$$\phi_{k+1} = \phi_k - s_k\, \nabla D(I_f, I_m; \phi_k) \tag{5}$$

where $\nabla D(I_f, I_m; \phi_k)$ is the derivative of the cost function $D(I_f, I_m; \phi_k)$ with respect to $\phi_k$, $\phi_{k+1}$ is the next position, $\phi_k$ denotes the current position, $s_k$ represents the step size at the $k$th iteration and $\phi$ denotes the transformation parameters.
The gradient of the cost function $D(I_f, I_m; \phi)$ with respect to the transformation parameter $\phi$ is shown in Equation (6):

$$\nabla D(I_f, I_m; \phi_k) = \frac{\partial D(I_f, I_m; \phi_k)}{\partial \phi_k} \tag{6}$$
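A schematic gradient descent loop for Equations (5) and (6) might look as follows. Here `cost_fn` is a placeholder for $D(I_f, I_m; \phi)$, and a central finite-difference approximation stands in for the analytic derivative; both are assumptions for illustration:

```python
import numpy as np

def gradient_descent(cost_fn, phi0, step=0.1, iters=300, eps=1e-4):
    """Minimize a cost D(I_f, I_m; phi) over transformation parameters phi.

    The 300-iteration cap mirrors the stopping condition used in the
    experiments; the gradient is approximated by central differences.
    """
    phi = np.asarray(phi0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(phi)
        for j in range(phi.size):          # Equation (6), approximated numerically
            e = np.zeros_like(phi)
            e[j] = eps
            grad[j] = (cost_fn(phi + e) - cost_fn(phi - e)) / (2 * eps)
        phi = phi - step * grad            # Equation (5)
    return phi

# Toy cost: a quadratic bowl with its minimum at phi = (1, -2).
print(gradient_descent(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0]))
```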

3. Experimental Results and Discussion

In our experiments, the miRID local self-similarity descriptor was implemented using MATLAB R2019b with a mex file compiled from C. All experiments were run on a computer with an Intel® Core™ i7-7700 CPU at 3.60 GHz and 16.0 GB of RAM. We set a maximum of 300 iterations as the stopping condition in the registration experiments (see more details on the settings in the original work of Myronenko and Song [29]).
In order to demonstrate the efficiency of the proposed method, we employed simulated longitudinal relaxation time (T1), transverse relaxation time (T2) and proton density (PD) brain MR images with an image size of 181 × 217 × 181 voxels, 3% noise and 40% intensity non-uniformity from the BrainWeb dataset [30]. We also compared the registration performance of the proposed method with the MI [6], SSC [21], miLBP [25] and Ssesi [23] methods. They were quantitatively assessed by the target registration error (TRE) [20] and the registration time (Time).
To demonstrate rigid image registration, the moving image was rotated within the range of ±20° in the rotational experiments. The rigid transformation $T(I_m; \phi)$ can be expressed as

$$T(I_m; \phi) = R_\phi\, a, \quad \forall a \in I_m$$

where $R$ denotes the rotation matrix. All pixels $a$ in the moving image $I_m$ were treated as a rigid body that rotates with respect to the Euler angle $\theta$. In a 2D image, $\phi = \theta$, where $\theta$ represents the rotation around the axis normal to the image [31].
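In 2D, applying the rigid transform reduces to multiplying each pixel coordinate $a$ by the rotation matrix $R_\phi$. A minimal sketch (rotation about the coordinate origin; rotating about the image center would require an additional translation):

```python
import numpy as np

def rotate_coords(coords: np.ndarray, theta_deg: float) -> np.ndarray:
    """Apply the 2D rotation matrix R(theta) to an array of (x, y) coordinates."""
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return coords @ R.T

# Rotate a pixel at (10, 0) by 20 degrees, the maximum angle used in the experiments.
print(rotate_coords(np.array([[10.0, 0.0]]), 20.0))
```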
Table 1 compares the experimental results (in terms of TRE) of the multi-modal rigid image registrations carried out by the various approaches. It can be seen that the performances of the miRID and Ssesi were close (average TREs of 1.82 and 1.89 mm, respectively), but both were much better than the rest. However, the TRE of Ssesi was lower than the miRID's on the T1-PD image registration. Figure 4 shows the visual assessments of the alignment accuracies.
Free-form deformation (FFD) with B-spline control points was used for non-rigid image registration in this experiment. It can be expressed as

$$T(I_m; \phi) = a + \sum_{a_\phi \in N_a} p_j\, \beta^D\!\left(\frac{a - a_\phi}{\sigma}\right), \quad a \in I_m,$$

where $a_\phi$ represents the control points, $p_j$ denotes the B-spline coefficient vectors, $\beta^D$ denotes the cubic $D$-dimensional B-spline polynomial, $N_a$ denotes the set of B-spline control points at $a \in I_m$ and $\sigma$ is the B-spline control-point spacing, while the control points $a_\phi$ form a regular grid overlaid on the moving image $I_m$ [29].
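For reference, a common choice for the kernel $\beta$ in such FFD models is the standard cubic B-spline, with the $D$-dimensional version formed as a tensor product of 1D kernels; the sketch below assumes that standard form (the paper does not spell the polynomial out):

```python
import numpy as np

def cubic_bspline(x: np.ndarray) -> np.ndarray:
    """Standard cubic B-spline kernel beta(x); nonzero only for |x| < 2."""
    ax = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(ax)
    inner = ax < 1
    outer = (ax >= 1) & (ax < 2)
    out[inner] = 2.0 / 3.0 - ax[inner] ** 2 + ax[inner] ** 3 / 2.0
    out[outer] = (2.0 - ax[outer]) ** 3 / 6.0
    return out

print(cubic_bspline(np.array([0.0, 0.5, 1.0, 1.5, 2.0])))
```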
Table 2 compares the experimental results (in terms of TRE) of the non-rigid image registrations carried out by the various approaches. The registration performance of our miRID method was significantly better than that of the others. In our experiments, the proposed method gave the best approximation of the similarity metric for the complicated relationships between the pixel intensities of different modalities. The efficiency of the SSC method was close to ours, and it outperformed the remaining methods in T1-PD registration. Figure 5, Figure 6 and Figure 7 show the visual assessments of the alignment accuracies.
It can be seen from the experiments that our proposed method, the miRID, was able to handle the complexity of the intensity relationships between different modalities and was robust to rotational variance. Its computation of the similarity measure for multi-modal image registration is straightforward. We compared our approach with some well-known ones, namely MI, SSC, miLBP and Ssesi. For multi-modal rigid image registration, as shown in Table 1, the miRID obtained the highest accuracy in all modality pairs except T1-PD, where Ssesi was slightly better than our method. This confirms that adding the sorting operation to our method accomplishes rotation invariance.
For multi-modal non-rigid image registration, as shown in Table 2, the miRID method still achieved the best accuracy in all registration cases. In [25,26,32], the pairwise distance computations were defined by the absolute difference between the intensities of the neighboring pixels and the central pixel. This has the weakness that the performance of the self-similarity descriptor is directly affected by image artifacts within the central pixel. As Table 2 shows, miLBP performed efficiently for non-rigid transformations. However, in the corner and gradient regions of an image patch, $g_c$ may be very dark or very bright while $g_p$ differs significantly, so $|g_p - g_c|$ may be extremely high. Our method uses $M$ instead of $g_c$, which is better suited to estimating the opposite intensities of pixels in corresponding regions of multi-modal images. As a result, registration under the miRID method transforms the distorted images onto the fixed images well. In addition, in terms of registration time, as shown in Table 1 and Table 2, the miLBP and miRID methods ran at high registration speeds compared with the others.

4. Conclusions

In this paper, we have proposed a novel self-similarity descriptor, called miRID, for medical multi-modal image registration. Unlike MIND, miLBP and RSSD, this descriptor is simpler to compute and handles multi-modal image registration more efficiently. In particular, the experiments have shown that the proposed technique is robust against both rotational variance in the deformation process and image artifacts. Finally, the experimental results have also shown that our method outperformed the others in terms of alignment accuracy.

Author Contributions

Conceptualization, T.B. and W.K.; methodology, T.B. and W.K.; software, T.B.; writing—original draft preparation, T.B.; writing—review and editing, W.K.; supervision, W.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ou, Y. Development and Validations of a Deformable Registration Algorithm for Medical Images: Applications to Brain, Breast and Prostate Studies; University of Pennsylvania: Philadelphia, PA, USA, 2012.
2. Hill, D.L.; Batchelor, P.G.; Holden, M.; Hawkes, D.J. Medical image registration. Phys. Med. Biol. 2001, 46, R1.
3. Mani, V. Survey of medical image registration. J. Biomed. Eng. Technol. 2013, 1, 8–25.
4. Sotiras, A.; Davatzikos, C.; Paragios, N. Deformable medical image registration: A survey. IEEE Trans. Med. Imaging 2013, 32, 1153–1190.
5. De Nigris, D.; Collins, D.L.; Arbel, T. Multi-modal image registration based on gradient orientations of minimal uncertainty. IEEE Trans. Med. Imaging 2012, 31, 2343–2354.
6. Maes, F.; Collignon, A.; Vandermeulen, D.; Marchal, G.; Suetens, P. Multimodality image registration by maximization of mutual information. IEEE Trans. Med. Imaging 1997, 16, 187–198.
7. Jin, S.; Li, D.; Wang, H.; Yin, Y. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients. J. Appl. Clin. Med. Phys. 2013, 14, 50–61.
8. Korsager, A.S.; Carl, J.; Østergaard, L.R. Comparison of manual and automatic MR-CT registration for radiotherapy of prostate cancer. J. Appl. Clin. Med. Phys. 2016, 17, 294–303.
9. Degen, J.; Modersitzki, J.; Heinrich, M.P. Dimensionality reduction of medical image descriptors for multimodal image registration. Curr. Dir. Biomed. Eng. 2015, 1, 201–205.
10. Schroeder, M.J. Analogy in Terms of Identity, Equivalence, Similarity, and Their Cryptomorphs. Philosophies 2019, 4, 32.
11. Borvornvitchotikarn, T.; Kurutach, W. A taxonomy of mutual information in medical image registration. In Proceedings of the 2016 International Conference on Systems, Signals and Image Processing (IWSSIP), Bratislava, Slovakia, 23–25 May 2016; pp. 1–4.
12. Sahoo, S.; Nanda, P.K.; Samant, S. Tsallis and Renyi's embedded entropy based mutual information for multimodal image registration. In Proceedings of the 2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), Jodhpur, India, 18–21 December 2013; pp. 1–4.
13. He, Y.; Hamza, A.B.; Krim, H. A generalized divergence measure for robust image registration. IEEE Trans. Signal Process. 2003, 51, 1211–1220.
14. Rueckert, D.; Clarkson, M.; Hill, D.; Hawkes, D.J. Non-rigid registration using higher-order mutual information. In Proceedings of Medical Imaging 2000, San Diego, CA, USA, 13–15 February 2000; pp. 438–447.
15. Russakoff, D.B.; Tomasi, C.; Rohlfing, T.; Maurer, C.R., Jr. Image similarity using mutual information of regions. In Computer Vision-ECCV 2004, Proceedings of the European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 596–607.
16. Chen, Y.-W.; Lin, C.-L. PCA based regional mutual information for robust medical image registration. In Advances in Neural Networks–ISNN 2011, Proceedings of the International Symposium on Neural Networks, Guilin, China, 29 May–1 June 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 355–362.
17. Loeckx, D.; Slagmolen, P.; Maes, F.; Vandermeulen, D.; Suetens, P. Nonrigid image registration using conditional mutual information. IEEE Trans. Med. Imaging 2010, 29, 19–29.
18. Rivaz, H.; Karimaghaloo, Z.; Collins, D.L. Self-similarity weighted mutual information: A new nonrigid image registration metric. Med. Image Anal. 2014, 18, 343–358.
19. Rivaz, H.; Karimaghaloo, Z.; Fonov, V.S.; Collins, D.L. Nonrigid registration of ultrasound and MRI using contextual conditioned mutual information. IEEE Trans. Med. Imaging 2014, 33, 708–725.
20. Heinrich, M.P.; Jenkinson, M.; Bhushan, M.; Matin, T.; Gleeson, F.V.; Brady, M.; Schnabel, J.A. MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration. Med. Image Anal. 2012, 16, 1423–1435.
21. Heinrich, M.P.; Jenkinson, M.; Papiez, B.W.; Brady, S.M.; Schnabel, J.A. Towards realtime multimodal fusion for image-guided interventions using self-similarities. Med. Image Comput. Comput. Assist. Interv. 2013, 16, 187–194.
22. Kasiri, K.; Fieguth, P.; Clausi, D.A. Self-similarity measure for multi-modal image registration. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 4498–4502.
23. Kasiri, K.; Fieguth, P.; Clausi, D.A. Sorted self-similarity for multi-modal image registration. In Proceedings of the 2016 IEEE 38th Annual International Conference of the Engineering in Medicine and Biology Society (EMBC), Lake Buena Vista (Orlando), FL, USA, 16–20 August 2016; pp. 1151–1154.
24. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
25. Jiang, D.; Shi, Y.; Yao, D.; Wang, M.; Song, Z. miLBP: A robust and fast modality-independent 3D LBP for multimodal deformable registration. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 997–1005.
26. Borvornvitchotikarn, T.; Kurutach, W. Robust Self-Similarity Descriptor for Multimodal Image Registration. In Proceedings of the 2018 25th International Conference on Systems, Signals and Image Processing (IWSSIP), Maribor, Slovenia, 20–22 June 2018; pp. 1–4.
27. Crum, W.R.; Hartkens, T.; Hill, D. Non-rigid image registration: Theory and practice. Br. J. Radiol. 2004.
28. Klein, S.; Staring, M.; Pluim, J.P.W. Evaluation of optimization methods for nonrigid medical image registration using mutual information and B-splines. IEEE Trans. Image Process. 2007, 16, 2879–2890.
29. Myronenko, A.; Song, X. Intensity-based image registration by minimizing residual complexity. IEEE Trans. Med. Imaging 2010, 29, 1882–1891.
30. BrainWeb: Simulated Brain Database. 2010. Available online: http://brainweb.bic.mni.mcgill.ca/cgi/brainweb2 (accessed on 23 May 2018).
31. Yaniv, Z.; Lowekamp, B.C.; Johnson, H.J.; Beare, R. SimpleITK image-analysis notebooks: A collaborative environment for education and reproducible research. J. Digit. Imaging 2018, 31, 290–303.
32. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59.
Figure 1. Basic concept of the modality-independent local binary pattern (miLBP). (a) The miLBP of an image patch without rotation, and (b) the miLBP of the patch with a 90° counterclockwise rotation.
Figure 2. Concept of the modality-independent and rotation-invariant descriptor (miRID). (a) miRID of an image patch without rotation, and (b) miRID of an image patch with a 90° counterclockwise rotation in the proposed method.
Figure 3. Diagram of the proposed registration model.
Figure 4. An illustrative case in T1-T2 registration (a–c), T1-PD registration (d–f) and T2-PD registration (g–i) of the fortieth slice of the T1, T2 and PD images.
Figure 5. Visual results of the miRID method on the T1-T2 registration cases. (a,e,i,m) are the distorted 20th, 30th, 40th and 50th slices of the original T1 image, respectively; they were used as the moving images in the registrations. (b,f,j,n) are the 20th, 30th, 40th and 50th slices of the original T2 image, respectively; they were the fixed images in the registrations. The results of registering the moving images on the fixed images are shown in (c,g,k,o). The transformations of these registrations are shown in (d,h,l,p), respectively.
Figure 6. Visual results of the miRID method on the T1-PD registration cases. (a,e,i,m) are the distorted 20th, 30th, 40th and 50th slices of the original T1 image, respectively; they were used as the moving images in the registrations. (b,f,j,n) are the 20th, 30th, 40th and 50th slices of the original PD image, respectively; they were the fixed images in the registrations. The results of registering the moving images on the fixed images are shown in (c,g,k,o). The transformations of these registrations are shown in (d,h,l,p), respectively.
Figure 7. Visual results of the miRID method on the T2-PD registration cases. (a,e,i,m) are the distorted 20th, 30th, 40th and 50th slices of the original T2 image, respectively; they were used as the moving images in the registrations. (b,f,j,n) are the 20th, 30th, 40th and 50th slices of the original PD image, respectively; they were the fixed images in the registrations. The results of registering the moving images on the fixed images are shown in (c,g,k,o). The transformations of these registrations are shown in (d,h,l,p), respectively.
Table 1. The target registration error (TRE) results of multi-modal rigid image registration (mm).

Modality      | MI [6] | SSC [21] | miLBP [25] | Ssesi [23] | miRID
T1-T2         | 2.92   | 2.86     | 2.16       | 2.09       | 1.86
T1-PD         | 2.68   | 2.43     | 2.23       | 1.53       | 1.62
T2-PD         | 2.94   | 2.86     | 2.47       | 2.05       | 1.98
Average TRE   | 2.85   | 2.72     | 2.29       | 1.89       | 1.82
Average Time  | 35.2 s | 34.1 s   | 24.9 s     | 31.7 s     | 26.1 s
Table 2. The target registration error (TRE) results of multi-modal non-rigid image registration (mm).

Modality      | MI [6] | SSC [21] | miLBP [25] | Ssesi [23] | miRID
T1-T2         | 2.63   | 2.35     | 2.42       | 2.45       | 2.04
T1-PD         | 2.88   | 2.16     | 2.26       | 2.42       | 2.13
T2-PD         | 3.01   | 2.47     | 2.36       | 2.47       | 2.01
Average TRE   | 2.84   | 2.33     | 2.35       | 2.45       | 2.03
Average Time  | 42.8 s | 39.6 s   | 26.2 s     | 35.4 s     | 27.4 s
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
