Article

MRI and CT Fusion in Stereotactic Electroencephalography (SEEG)

by Jaime Pérez Hinestroza 1,*, Claudia Mazo 1,2, Maria Trujillo 1 and Alejandro Herrera 1,3

1 Multimedia and Computer Vision Group, Universidad del Valle, Cali 760042, Colombia
2 School of Computing, Faculty of Engineering and Computing, Glasnevin Campus, Dublin City University, 9 Dublin, Ireland
3 Clinica Imbanaco Grupo Quironsalud, Cali 760042, Colombia
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(22), 3420; https://doi.org/10.3390/diagnostics13223420
Submission received: 30 May 2023 / Revised: 3 September 2023 / Accepted: 5 September 2023 / Published: 9 November 2023
(This article belongs to the Special Issue Brain Imaging in Epilepsy - Volume 2)

Abstract

Epilepsy is a neurological disorder characterized by spontaneous, recurrent seizures. While 20% to 30% of epilepsy cases cannot be controlled with Anti-Epileptic Drugs, some of these cases can be addressed through surgical intervention. The success of such interventions depends largely on accurately locating the epileptogenic tissue, a task supported by diagnostic techniques such as Stereotactic Electroencephalography (SEEG). SEEG relies on multi-modal image fusion to aid electrode localization, using a pre-surgical Magnetic Resonance Image (MRI) and a post-surgical Computed Tomography (CT) image as inputs. To ensure the absence of artifacts or misregistration in the resulting images, a fusion method that accounts for the presence of electrodes is required. We propose an image fusion method for SEEG that incorporates the electrode segmentation from the CT as a sampling mask during registration. The method was validated using eight image pairs from the Retrospective Image Registration Evaluation (RIRE) project. After establishing a reference registration for the MRI and identifying eight reference points, we assessed the method's efficacy by comparing the Euclidean distances between these reference points and those derived from registration with a sampling mask. The results showed that the proposed method yielded an average error similar to registration without a sampling mask but reduced the dispersion of the error, with a standard deviation of 0.86 when a mask was used and 5.25 when no mask was used.

1. Introduction

Epilepsy is a neurological disorder with a worldwide prevalence of 0.8% to 1.2%, and 20% to 30% of cases cannot be controlled with Anti-Epileptic Drugs (AED) [1,2]. For those patients, surgical intervention is a valuable treatment option [3], with a success rate ranging from 30% to 70% [4].
The success of the surgical intervention depends on a precise localization of the epileptogenic tissue. Diagnostic techniques, including but not limited to Stereotactic Electroencephalography (SEEG), play a crucial role in achieving this accuracy [3,5,6]. SEEG measures the electrical activity within brain areas using deep electrodes, with Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images guiding the implantation and the electrode localization. However, given the limited structural detail in CT images, fusion with an MRI is required. This fusion provides a comprehensive representation of both the anatomical structures and the electrode positions in a unified image [7,8,9].
Image fusion is a processing technique that maps images into a common coordinate system and merges the aligned results into a single output. Numerous methods are available for image fusion; however, the performance of each technique is influenced by characteristics of the acquisition and the image type [10,11]. When external objects are present in an SEEG sequence, they may interfere with the registration process, which relies on similarity metrics computed from voxel data in both images [10,11]. Consequently, changes in the structural content of the CT image can affect the calculation of the similarity metrics, leading to misregistration in the fused images.
Based on the challenges associated with image registration, we conducted a systematic review using the methodology outlined by Kitchenham [12] for literature reviews in software engineering. Our review examined the techniques and tools used for brain image fusion between CT and MRI, as well as the validation techniques employed to measure their performance [13]. It revealed a notable absence of a standard method for validating CT and MRI image fusion, especially when external objects are present, together with a significant lack of validation methodologies for these techniques. This is particularly concerning given that the Retrospective Image Registration Evaluation (RIRE) project, once a standard methodology, is no longer in use. Our review also highlighted the importance of understanding the performance of image fusion techniques in applications such as SEEG that involve external objects. We found that methods using Mutual Information (MI) as the optimization metric exhibited superior performance in multi-modal image fusion.
These challenges were evident in SEEG examinations conducted at Clinica Imbanaco Grupo Quironsalud in Cali, Colombia, where we identified registration errors in the fusion of MRI and CT images attributable primarily to the presence of electrodes. These inconsistencies required manual adjustments to correctly align the misregistered MRI images. In response, we introduce an image fusion method that accounts for external elements, primarily in exams such as SEEG. It is important to note that while our method is designed to mitigate the impact of external objects in the images and to enhance the spatial accuracy of electrode localization, it does not extend to the analysis of the electrical signals from the deep electrodes; such an analysis falls outside the scope of this study and would demand a distinct analytical framework.

2. Fusion Method

The general procedure shown in Figure 1 consists of six main steps: (i) initial electrode segmentation in the CT image; (ii) generation of a mask of all non-electrode voxels in the CT image; (iii) registration of the MRI to the CT image using the non-electrode sampling mask to compute the transformation; (iv) segmentation of the brain from the registered MRI with the ROBEX tool and computation of a brain mask; (v) refinement of the electrode segmentation using the brain mask obtained in the previous step; and (vi) integration of the fully segmented electrodes with the registered MRI.

2.1. CT Electrode Segmentation

Given that our literature search did not identify any fusion method that takes the electrodes into account and uses their segmentation during registration [13], we devised our own segmentation procedure. This procedure employs thresholding and morphological operations, as shown in Figure 2 and Figure 3.
To extract the electrodes from the CT, we applied simple thresholding with a window of 1500 HU to 3000 HU (i). Subsequently, we computed a mask of the head tissue to remove the skull from the segmented image (viii). Next, we generated a brain mask for the MRI, previously aligned with the CT, using the ROBEX (Robust Brain Extraction) tool, a skull-stripping method based on the work of Iglesias et al. [14]. Finally, we applied this mask to the registered electrodes to remove any objects situated outside the brain.
To segment the skull, we applied a simple threshold with a window ranging from 300 HU to 1900 HU (ii). Subsequently, we performed morphological erosion with a 3 × 3 × 3 cross kernel to remove the electrodes from the skull, followed by morphological closing and dilation with a 4 × 4 × 4 ball kernel to connect all the bone tissue (iii). Finally, we applied a NOT operator to generate a mask of non-skull voxels (iv).
We generated a head mask using Otsu's thresholding (v) to exclude any object external to the head [15]. After that, we applied a morphological hole-filling operation to remove any internal gaps within the head (vi). Then, we created a brain mask by intersecting the head mask and the non-skull mask (vii). Finally, we combined this brain mask with the thresholded electrodes, ensuring the removal of most of the skull tissue (viii).
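As an illustration, a minimal SimpleITK sketch of this preliminary segmentation is given below. The HU windows follow the text; the function name, the choice of kernel radii (radius 1 for the 3 × 3 × 3 cross, radius 2 for the ball kernel), and other implementation details are assumptions rather than the released code.

import SimpleITK as sitk

def segment_electrodes_ct(ct):
    """Preliminary electrode segmentation from a CT volume (Section 2.1 sketch)."""
    # (i) electrode candidates: simple threshold between 1500 HU and 3000 HU
    electrodes = sitk.BinaryThreshold(ct, 1500, 3000, 1, 0)

    # (ii) skull candidates: threshold between 300 HU and 1900 HU
    skull = sitk.BinaryThreshold(ct, 300, 1900, 1, 0)

    # (iii) erode with a cross kernel to drop thin electrode voxels, then
    # close and dilate with a ball kernel to reconnect the bone tissue
    skull = sitk.BinaryErode(skull, [1, 1, 1], sitk.sitkCross)
    skull = sitk.BinaryMorphologicalClosing(skull, [2, 2, 2], sitk.sitkBall)
    skull = sitk.BinaryDilate(skull, [2, 2, 2], sitk.sitkBall)

    # (iv) NOT operator: mask of non-skull voxels
    no_skull = sitk.Not(skull)

    # (v) head mask with Otsu's threshold (head voxels -> 1, background -> 0)
    head = sitk.OtsuThreshold(ct, 0, 1)

    # (vi) fill internal gaps within the head
    head = sitk.BinaryFillhole(head)

    # (vii) brain-region mask = head AND non-skull
    brain_region = sitk.And(head, no_skull)

    # (viii) keep only thresholded electrode voxels inside the brain region
    return sitk.And(electrodes, brain_region)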

2.2. MRI Registration

For MRI registration, we employed an affine rigid transformation combined with a gradient descent algorithm, using Mutual Information (MI) as the similarity metric. We opted for this registration approach because MI is based on the joint probability distribution of the image intensities [10,11] and has been shown to be more effective in multi-modal registration [13]. To enhance the registration process, we add an additional step that samples only the voxels that do not contain electrodes when computing the MI. We achieve this by creating a mask through the application of a NOT operation to the segmented electrodes, as detailed in Section 2.1. Figure 4 shows a schema of the registration procedure.
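A minimal SimpleITK sketch of this masked registration is shown below. The sampling mask (the NOT of the segmented electrodes) is supplied as the fixed-image mask so that electrode voxels are excluded when the MI metric is evaluated; the rigid (Euler) transform, the optimizer settings, and the sampling percentage are illustrative assumptions, not the exact parameters used in the study.

import SimpleITK as sitk

def register_mri_to_ct(ct, mri, electrode_mask):
    """Rigid MI registration of the MRI to the CT using a non-electrode sampling mask."""
    fixed = sitk.Cast(ct, sitk.sitkFloat32)
    moving = sitk.Cast(mri, sitk.sitkFloat32)
    sampling_mask = sitk.Not(electrode_mask)          # voxels that do NOT contain electrodes

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricFixedMask(sampling_mask)             # restrict MI sampling to the mask
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.2)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()

    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)

    transform = reg.Execute(fixed, moving)
    mri_registered = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    return mri_registered, transform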

2.3. Final CT Electrode Segmentation

The preliminary electrode segmentation employs both thresholding and morphological operations. However, this approach might segment some bone tissue alongside the electrodes, which must be excluded from the final image. To address this, we use the aligned MRI to generate a brain mask with the aid of the ROBEX tool, as depicted in Figure 5.
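A short sketch of this cleanup is given below; it assumes that a binary brain mask has already been produced with ROBEX from the registered MRI and saved to disk (the file name is a placeholder).

import SimpleITK as sitk

def refine_electrodes(electrodes_ct, brain_mask):
    """Remove segmented voxels that fall outside the ROBEX brain mask (Section 2.3 sketch)."""
    # bring the brain mask onto the CT voxel grid; nearest neighbour keeps it binary
    brain_mask = sitk.Resample(brain_mask, electrodes_ct, sitk.Transform(),
                               sitk.sitkNearestNeighbor, 0)
    brain_mask = sitk.Cast(brain_mask, sitk.sitkUInt8)
    return sitk.And(electrodes_ct, brain_mask)

# Usage, assuming the ROBEX output was saved to disk:
# brain_mask = sitk.ReadImage("robex_brain_mask.nii.gz", sitk.sitkUInt8)
# electrodes_final = refine_electrodes(electrodes, brain_mask)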

2.4. Image Merging

Finally, we add the segmented electrodes to the aligned MRI to produce the fused image (Figure 6).
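A minimal sketch of the merging step, together with a driver that chains the helpers sketched in Sections 2.1, 2.2 and 2.3, is shown below. The helper names and the label value used to paint the electrodes into the MRI are illustrative assumptions, not the released implementation.

import SimpleITK as sitk

def merge_images(mri_registered, electrodes, label_value=3000.0):
    """Add the segmented electrodes to the registered MRI as bright voxels (Section 2.4 sketch)."""
    mri_f = sitk.Cast(mri_registered, sitk.sitkFloat32)
    overlay = sitk.Cast(electrodes, sitk.sitkFloat32) * label_value
    return mri_f + overlay

def fuse_seeg(ct, mri, robex_brain_mask):
    """End-to-end sketch of the fusion pipeline in Figure 1."""
    electrodes = segment_electrodes_ct(ct)                        # Section 2.1
    mri_reg, transform = register_mri_to_ct(ct, mri, electrodes)  # Section 2.2
    electrodes = refine_electrodes(electrodes, robex_brain_mask)  # Section 2.3
    return merge_images(mri_reg, electrodes)                      # Section 2.4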

3. Validation Method

Given the lack of a standardized validation methodology for multi-modal image fusion, as highlighted in our 2021 literature review [13], we employed two distinct methods to validate the proposed technique. First, we used the RIRE dataset to generate synthetic data. Second, we used four pairs of MRI and CT images from SEEG exams, measuring the performance by identifying five anatomical structures in the CT and MRI.

3.1. Validation Using RIRE Dataset

We selected eight images from the RIRE dataset containing both MRI and CT images. To simulate the presence of electrodes, we introduced cylinders into the CT images. Subsequently, we performed a rigid registration on the images without electrodes. The transformation obtained from this registration served as a reference for further analysis.
Furthermore, to measure the performance, we compared the location of brain structures in the registered images. This comparison was conducted using the Euclidean distance between the reference structure in the CT and the corresponding structure in the registered MRI.

3.1.1. Electrodes Generation

To obtain CT images with electrodes, we used the RIRE dataset and added cylinders to the images to simulate SEEG electrodes. These simulated electrodes were designed with a diameter of 3 mm and a length of 80 mm, mirroring the specifications of deep electrodes that feature eight contacts with a 10 mm spacing between them [16]. The gray values of the generated electrodes ranged between 1500 and 3000 HU. In total, we added 12 electrodes to the CT images, placing them at random orientations and positions within the brain tissue. Figure 7 shows an example of the generated images.
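The sketch below illustrates, under stated simplifications, how a synthetic electrode could be painted into a CT volume as a cylinder of the given dimensions and HU range. It works in spacing-scaled index space and ignores the image origin and direction matrix; the function name and parameter choices are assumptions for illustration only.

import numpy as np
import SimpleITK as sitk

def add_synthetic_electrode(ct, center_mm, axis, length_mm=80.0, radius_mm=1.5, hu=2500.0):
    """Paint a cylindrical 'electrode' of constant HU into a CT volume (simplified sketch)."""
    arr = sitk.GetArrayFromImage(ct).astype(np.float32)       # array is indexed (z, y, x)
    spacing = np.array(ct.GetSpacing())[::-1]                  # voxel spacing in mm, (z, y, x)
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)

    # voxel centres in spacing-scaled index space (mm), ignoring origin/direction
    grids = np.meshgrid(*[np.arange(s) for s in arr.shape], indexing="ij")
    pts = np.stack(grids, axis=-1) * spacing
    rel = pts - np.asarray(center_mm, dtype=float)

    t = rel @ axis                                             # projection onto the cylinder axis
    radial = np.linalg.norm(rel - t[..., None] * axis, axis=-1)
    inside = (np.abs(t) <= length_mm / 2.0) & (radial <= radius_mm)
    arr[inside] = hu                                           # gray value in the 1500-3000 HU range

    out = sitk.GetImageFromArray(arr)
    out.CopyInformation(ct)
    return out

# Example: one electrode through the middle of the volume along a random direction
# ct_with_electrode = add_synthetic_electrode(ct, center_mm=(100.0, 120.0, 120.0),
#                                             axis=np.random.randn(3))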

3.1.2. Reference Registration Images

To evaluate the performance of our method, we applied a rigid registration to the RIRE dataset without electrodes. This registration employed an affine transformation with a gradient descent algorithm. We then applied the resulting reference registration to the eight points defined in the RIRE dataset to calculate the reference points. Table 1 presents the original points used, while Table 2 displays the resulting reference points per image.

3.1.3. Registration Error Using Reference Points

We computed the error using the resulting points in Table 2 and compared them to the points resulting from the procedure described in Section 2. The error was calculated as the Euclidean distance between the reference points and the points obtained from our fusion method:
error = \sqrt{(P_{Rx} - P_{x})^{2} + (P_{Ry} - P_{y})^{2} + (P_{Rz} - P_{z})^{2}},
where P_{Rx}, P_{Ry}, P_{Rz} are the coordinates in mm of the reference points (Table 2), and P_{x}, P_{y}, P_{z} are the coordinates of the corresponding points per image after applying our fusion method.
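As a worked sketch of this computation, the snippet below applies a reference transform and an evaluated transform to the corner points of Table 1 and returns the per-point Euclidean distances in mm; the function and variable names are illustrative.

import numpy as np
import SimpleITK as sitk

def registration_errors(points_mm, reference_transform, evaluated_transform):
    """Euclidean distance between reference and evaluated transformed points (in mm)."""
    errors = []
    for p in points_mm:
        p_ref = np.array(reference_transform.TransformPoint(p))   # reference point (P_R)
        p_eval = np.array(evaluated_transform.TransformPoint(p))  # point from the evaluated fusion (P)
        errors.append(float(np.linalg.norm(p_ref - p_eval)))
    return errors

# Example with the first corner point of Table 1 and two transforms obtained
# from the reference and the evaluated registrations:
# errors = registration_errors([(0.0, 0.0, 0.0)], reference_tx, evaluated_tx)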

3.2. Validation with Brain Structures

We measured the performance of the fusion method using the following brain structures in the CT image as reference points: (i) the Sylvian aqueduct; (ii) the anterior commissure; (iii) the pineal gland; (iv) the right lens; (v) the left lens.
With the guidance of a medical expert, we manually localized the structures of interest, obtaining their positions in both the CT and the registered MRI using 3D Slicer version 4.11. Upon identifying these structures, we measured the error as the Euclidean distance between the reference structure point in the CT image and the corresponding point in the registered MRI, following the error equation in Section 3.1.3. For performance evaluation, we computed the error in images resulting from two distinct methods: (i) our proposed approach, which incorporates a sampling mask during registration, and (ii) a reference method from the existing literature that performs registration without a sampling mask. The purpose of this validation was to determine whether the use of a mask reduces registration errors. Figure 8 and Figure 9 show the methods that were compared.
In our validation process, we also employed global fusion metrics to assess potential distortions arising from the fusion procedures. The metrics we used include:
  • Mutual Information (MI):
Estimates the amount of information transferred from the source image into the fused image [17]. Given an input image I_i and the fused image I_f, the MI can be computed using the following equation:
MI(I_i, I_f) = H(I_i) + H(I_f) - H(I_i, I_f),
where H(I_i, I_f) is the joint entropy between the input and fused images, and H(I_i) and H(I_f) are the marginal entropies of the input and fused image, respectively.
  • Structural Similarity Index (SSIM):
Measures the preservation of structural information, separating the image into three components: luminance I, contrast C, and structure S [17]:
SSIM(I_i, I_f) = I(I_i, I_f)^{a} \cdot C(I_i, I_f)^{b} \cdot S(I_i, I_f)^{c}.
  • Root Mean Square Error (RMSE):
Measures the square root of the mean squared difference between the input and fused images [17]:
RMSE = \sqrt{\frac{1}{MN} \sum_{x=1}^{M} \sum_{y=1}^{N} \left( I_i(x,y) - I_f(x,y) \right)^{2}},
where I_i(x,y) and I_f(x,y) are the pixel values of the input and fused image, respectively, and M and N are the dimensions of the image.
  • Peak Signal-to-Noise Ratio (PSNR):
The PSNR is calculated from the RMSE, for an image of dimension M × N, as [17]
PSNR = 10 \cdot \log_{10}\left( \frac{(M \times N)^{2}}{RMSE} \right).
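A minimal NumPy sketch of three of these global metrics, following the equations above, is given below; the histogram bin count is an assumption, and SSIM is omitted since a ready-made implementation (for example, skimage.metrics.structural_similarity) is typically used instead.

import numpy as np

def _entropy(p):
    """Shannon entropy (base 2) of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(img_i, img_f, bins=64):
    """MI(Ii, If) = H(Ii) + H(If) - H(Ii, If), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(img_i.ravel(), img_f.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    return _entropy(px) + _entropy(py) - _entropy(pxy.ravel())

def rmse(img_i, img_f):
    """Root mean square error between the input and fused images."""
    diff = img_i.astype(np.float64) - img_f.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(img_i, img_f):
    """PSNR computed from the RMSE, following the formula given in the text above."""
    m, n = img_i.shape[:2]
    return float(10.0 * np.log10((m * n) ** 2 / rmse(img_i, img_f)))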

4. Results

We validated the performance of the fusion method following the two methodologies in Section 3. For the first method, we used eight pairs of CT and MRI images from the RIRE dataset, with the CT images generated as described in Section 3.1.1. We compared the procedure shown in Figure 8 against a fusion procedure that does not employ a sampling mask of the brain tissue, shown in Figure 9. Both methods used rigid registration with MI as the similarity metric and gradient descent for the optimization. We used descriptive statistics of central tendency and variation to compare the methods under the validation from Section 3.1; these results are summarized in the box plot shown in Figure 10. For the second validation, we faced a limitation in the number of images available for evaluation. Given this constraint, we compared the methods individually for each of the four cases and chose a scatter plot to visualize the error dispersion for both methods, since scatter plots allow a clear view of individual data points and make it easier to discern patterns or anomalies in a small dataset. The results of this comparison are illustrated in Figure 11.
Table 3 displays the Euclidean distances between the reference points and the points resulting from the transformations of the compared methods. From the data, we can observe that the Euclidean distance for our method is significantly lower in images 3, 6, and 8. This is mainly caused by differences in the original images, which show some variations in brain tissue, as shown in Figure 12, Figure 13 and Figure 14. Because some electrodes pass through these varying areas, the sampling in the registration does not use these voxels to compute the transformation, thus improving the registration when the mask is used. The results are summarized in Figure 10, where our method using a sampling mask yields a mean Euclidean distance of 1.3176 mm with a standard deviation of 0.8643, whereas the method without a sampling mask yields a mean Euclidean distance of 1.2789 mm with a standard deviation of 5.2511. These findings suggest that the mask improves the registration when there is a large difference in tissue between the MRI and CT images, owing to the reduced sampling of voxels from these varying tissues during the registration process.
For the second validation, we used the methodology described in Section 3.2 on four pairs of MRI and CT images obtained from Clínica Imbanaco Grupo Quirón Salud. Table 4 displays the localized points in the CT images that we used as a reference in our analysis. The localized points for the structures in the MRI registered with our proposed method are shown in Table 5, while the points in the MRI registered with the comparison method are displayed in Table 6.
After the structure localization, we computed the Euclidean distance between the points from the registered MRI and the points in the CT images. This process was applied to the images resulting from our proposed method and from the method that uses no sampling mask in the registration. The resulting Euclidean distances are shown in Table 7.
The validation results, displayed in Table 7, show that our method has a higher Euclidean distance compared to the method without a mask for images from patient 1, patient 3, and patient 4. However, our method achieves a lower Euclidean distance in images from patient 2. Further analysis of the two methods using global performance metrics, as presented in Table 8, reveals a relatively low difference, indicating minimal distortion in the images when comparing the two methods.
The validation with the limited dataset showed some promising results, but it still requires further refinement. Although we were able to reduce the Euclidean distances for all structures in patient 2's images, and for some structures, such as the anterior commissure and the pineal gland, in patient 3's images, our method displayed a higher Euclidean distance in the images for patient 1 and patient 4. We used the global fusion metrics in Table 8 to analyze whether the differences in distance were caused by any distortion introduced by the registration procedure; however, these metrics did not reveal any significant distortion-related difference between the registered images of the two methods.
While the application of the mask did increase the error for certain images, our method with the mask notably reduced the average error and overall dispersion, as depicted in Figure 11, which demonstrates its promising potential. However, given the limited dataset, further refinement is clearly necessary, and a larger dataset is required to conclusively affirm the improvement introduced by our approach under this validation methodology.

5. Discussion

Our proposed image fusion method between MRI and CT, which considers the electrodes, is useful in addressing the identified problem, where the presence of external objects produced a registration error. This approach improves the registration in the images using the RIRE dataset. However, in the second validation stage, our method demonstrated a lower average error, yet we observed instances where performance was lower when the sampling mask was applied. This could be attributed to the potential importance of information proximal to the electrode for the calculation of similarity metrics during the registration procedure. However, even in these cases, the error increase was minimal compared to scenarios where the error was lower.
Another challenge was the absence of a standardized validation method for multimodal fusion, prompting us to develop our own method using available data. This included the limited dataset from Imbanaco and the deprecated RIRE dataset.
We also found a lack of research on methods for electrode segmentation in SEEG. This necessitated the development of our own segmentation procedure, a task that was not initially included in the project scope.
All objectives of the project were achieved. The project successfully identified primary techniques for image registration and fusion between MRI and CT images, developed a method to fuse these images when external objects were present, and conducted an evaluation to measure and compare the performance of the designed method. In the evaluations, the method outperformed other existing state-of-the-art techniques in certain scenarios.

6. Conclusions

We have developed and presented an image fusion method for combining CT and MRI images from SEEG exams. Our approach aims to minimize misregistration errors between pre-surgical MRI and post-surgical CT images through the use of a sampling mask covering all non-electrode voxels in the post-surgical CT image.
We acknowledged the lack of a standard validation method for image fusion and registration in brain images, particularly when external objects are present in one of the images. We addressed this by employing two evaluation approaches: (i) a simulation-based evaluation with synthetic electrodes generated from the RIRE dataset; and (ii) an evaluation using four image pairs acquired from patients at Clinica Imbanaco Grupo Quirón Salud, in which we measured the error using five anatomical structures that can be localized in both the pre-surgical MRI and the post-surgical CT images.
Our findings indicate that the proposed method outperforms the existing state-of-the-art techniques in the simulation-based evaluation using the RIRE dataset. In the evaluation using clinical images, we observed that our method demonstrated superior performance in some cases, while showing a slight decrease in performance in others. Despite this variability, the overall average Euclidean distance was lower for our method, suggesting an improvement in registration accuracy.
We recommend enhancing the second validation methodology by increasing the number of images and refining the localization of brain structures to further reduce bias in the evaluation results. This would enable a more comprehensive assessment of the proposed fusion method’s performance for clinical scenarios.
In conclusion, our proposed image fusion method shows promise for improving the accuracy of the registration in SEEG. With further development and refinement, this approach has the potential to significantly impact the field of epilepsy treatment, offering further aid in the localization of epileptogenic tissue when SEEG is employed.

Author Contributions

Conceptualization, J.P.H., C.M. and M.T.; methodology, J.P.H., C.M. and M.T.; validation, J.P.H., C.M. and M.T.; formal analysis, J.P.H.; investigation, J.P.H.; resources, J.P.H.; data curation, J.P.H.; writing—original draft preparation, J.P.H.; writing—review and editing, J.P.H., C.M., M.T. and A.H.; visualization, J.P.H.; supervision, C.M., M.T. and A.H.; project administration, C.M. and M.T.; funding acquisition, M.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data and code supporting the findings of this study are openly available. The code used in this research has been made publicly accessible on GitHub at the following repository: https://github.com/andresprzh/SEEGFusion. This repository includes detailed documentation and source code relevant to the study. No new datasets were created or analyzed in this study; thus, data sharing beyond the provided code is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest in this work. This research is independent and has not been influenced by any commercial or financial relationships that could be construed as a potential conflict of interest. The study was conducted without the involvement of any companies or commercial entities and did not receive any private funding.

Abbreviations

The following abbreviations are used in this manuscript:
SEEG: Stereotactic Electroencephalography
MRI: Magnetic Resonance Imaging
CT: Computed Tomography
AED: Anti-Epileptic Drugs
MSD: Mean Squared Difference
MI: Mutual Information
RIRE: Retrospective Image Registration Evaluation Project
RMSE: Root Mean Square Error
PSNR: Peak Signal-to-Noise Ratio
NN: Artificial Neural Network methods
PCNN: Pulse Coupled Neural Network
SSIM: Structural Similarity Index
EN: Entropy
STD: Standard Deviation

References

  1. Hauser, W.; Hesdorffer, D.; Epilepsy Foundation of America. Epilepsy: Frequency, Causes, and Consequences; Number v. 376; Demos Medical Pub: New York, NY, USA, 1990. [Google Scholar]
  2. Murray, C.J.L.; Lopez, A.D.; World Health Organization. Global Comparative Assessments in the Health Sector: Disease Burden, Expenditures and Intervention Packages; Murray, C.J.L., Lopez, A.D., Eds.; World Health Organization: Geneva, Switzerland, 1994. [Google Scholar]
  3. Baumgartner, C.; Koren, J.P.; Britto-Arias, M.; Zoche, L.; Pirker, S. Presurgical epilepsy evaluation and epilepsy surgery. F1000Research 2019, 8. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, C.; Kwan, P. The Concept of Drug-Resistant Epileptogenic Zone. Front. Neurol. 2019, 10, 558. [Google Scholar] [CrossRef] [PubMed]
  5. Youngerman, B.E.; Khan, F.A.; McKhann, G.M. Stereoelectroencephalography in epilepsy, cognitive neurophysiology, and psychiatric disease: Safety, efficacy, and place in therapy. Neuropsychiatr. Dis. Treat. 2019, 15, 1701–1716. [Google Scholar] [CrossRef] [PubMed]
  6. Iida, K.; Otsubo, H. Stereoelectroencephalography: Indication and Efficacy. Neurol. Med.-Chir. 2017, 57, 375–385. [Google Scholar] [CrossRef] [PubMed]
  7. Gross, R.E.; Boulis, N.M. (Eds.) Neurosurgical Operative Atlas: Functional Neurosurgery, 3rd ed.; Thieme/AANS: Stuttgart, Germany, 2018. [Google Scholar]
  8. Perry, M.S.; Bailey, L.; Freedman, D.; Donahue, D.; Malik, S.; Head, H.; Keator, C.; Hernandez, A. Coregistration of multimodal imaging is associated with favourable two-year seizure outcome after paediatric epilepsy surgery. Epileptic Disord. 2017, 19, 40–48. [Google Scholar] [CrossRef] [PubMed]
  9. Mitchell, H.B. Image Fusion: Theories, Techniques and Applications; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  10. Xu, R.; Chen, Y.W.; Tang, S.; Morikawa, S.; Kurumi, Y. Parzen-Window Based Normalized Mutual Information for Medical Image Registration. IEICE Trans. Inf. Syst. 2008, 91, 132–144. [Google Scholar] [CrossRef]
  11. Oliveira, F.P.M.; Tavares, J.M.R.S. Medical image registration: A review. Comput. Methods Biomech. Biomed. Eng. 2014, 17, 73–93. [Google Scholar] [CrossRef] [PubMed]
  12. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering. 2007. Available online: https://www.elsevier.com/__data/promis_misc/525444systematicreviewsguide.pdf (accessed on 5 April 2023).
  13. Perez, J.; Mazo, C.; Trujillo, M.; Herrera, A. MRI and CT Fusion in Stereotactic Electroencephalography: A Literature Review. Appl. Sci. 2021, 11, 5524. [Google Scholar] [CrossRef]
  14. Iglesias, J.E.; Liu, C.Y.; Thompson, P.M.; Tu, Z. Robust Brain Extraction across Datasets and Comparison with Publicly Available Methods. IEEE Trans. Med. Imaging 2011, 30, 1617–1634. [Google Scholar] [CrossRef] [PubMed]
  15. Otsu, N. A threshold selection method from gray level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  16. Jayakar, P.; Gotman, J.; Harvey, A.S.; Palmini, A.; Tassi, L.; Schomer, D.; Dubeau, F.; Bartolomei, F.; Yu, A.; Kršek, P.; et al. Diagnostic utility of invasive EEG for epilepsy surgery: Indications, modalities, and techniques. Epilepsia 2016, 57, 1735–1747. [Google Scholar] [CrossRef] [PubMed]
  17. Du, J.; Li, W.; Lu, K.; Xiao, B. An overview of multi-modal medical image fusion. Neurocomputing 2016, 215, 3–20. [Google Scholar] [CrossRef]
Figure 1. Image fusion method with external object.
Figure 2. Electrode segmentation general procedure.
Figure 3. Electrode segmentation detailed procedure.
Figure 4. Registration procedure.
Figure 5. Final electrode procedure.
Figure 6. Image merging procedure.
Figure 7. Example of a generated image with electrodes.
Figure 8. Proposed method that uses a sampling mask for the registration.
Figure 9. Method to compare that does not use a sampling mask in the registration.
Figure 10. Box diagram of Euclidean distance for the different methods.
Figure 11. Scatter plot of the Euclidean distance between anatomical structures in the images from Imbanaco.
Figure 12. Image 3 fusion with no mask and with a mask.
Figure 13. Image 6 fusion with no mask and with a mask.
Figure 14. Image 8 fusion with no mask and with a mask.
Table 1. Original point positions in mm.

Point | X | Y | Z
1 | 0.0000 | 0.0000 | 0.0000
2 | 333.9870 | 0.0000 | 0.0000
3 | 0.0000 | 333.9870 | 0.0000
4 | 333.9870 | 333.9870 | 0.0000
5 | 0.0000 | 0.0000 | 112.0000
6 | 333.9870 | 0.0000 | 112.0000
7 | 0.0000 | 333.9870 | 112.0000
8 | 333.9870 | 333.9870 | 112.0000
Table 2. Reference resulting points for all eight images.

Image 1 (X, Y, Z) | Image 2 (X, Y, Z) | Image 3 (X, Y, Z)
3.4167, −22.2013, −2.6957 | 2.9734, −29.8271, −17.7596 | 7.3801, −30.8327, −32.4198
331.6863, −22.0400, −3.7915 | 332.0465, −27.5075, −16.6379 | 333.7592, −29.2687, −27.2289
4.6098, 309.1427, 1.7573 | 2.2699, 305.3511, −17.0817 | 9.9346, 301.5044, −31.1614
332.8794, 309.3039, 0.6615 | 331.3430, 307.6706, −15.9600 | 336.3137, 303.0685, −25.9705
3.6098, −21.9385, 107.9271 | 3.0988, −24.7405, 94.2014 | 7.8149, −28.1302, 73.5445
331.8794, −21.7772, 106.8313 | 332.1718, −22.4210, 95.3231 | 334.1940, −26.5661, 78.7354
4.8029, 309.4055, 112.3801 | 2.3953, 310.4376, 94.8793 | 10.3693, 304.2070, 74.8029
333.0724, 309.5667, 111.2843 | 331.4683, 312.7571, 96.0010 | 336.7485, 305.7711, 79.9938

Image 4 (X, Y, Z) | Image 5 (X, Y, Z) | Image 6 (X, Y, Z)
−4.4250, −22.1707, −6.5618 | 0.4343, −33.2795, −32.1174 | −14.1576, −32.7423, −23.8992
327.7407, −21.7509, −8.7225 | 333.7756, −31.6846, −31.6177 | 308.7199, −34.2773, −26.0497
−4.2159, 311.6391, −4.4026 | 1.3211, 301.9878, −33.3221 | −16.6352, 298.2377, −21.9366
327.9498, 312.0588, −6.5633 | 334.6624, 303.5827, −32.8224 | 306.2424, 296.7028, −24.0871
−4.0413, −22.2377, 106.3093 | 0.9893, −30.9427, 76.9555 | −13.7173, −30.1729, 85.4656
328.1243, −21.8180, 104.1486 | 334.3306, −29.3477, 77.4553 | 309.1603, −31.7078, 83.3151
−3.8322, 311.5720, 108.4685 | 1.8761, 304.3246, 75.7508 | −16.1948, 300.8072, 87.4282
328.3335, 311.9917, 106.3078 | 335.2174, 305.9196, 76.2505 | 306.6827, 299.2722, 85.2777

Image 7 (X, Y, Z) | Image 8 (X, Y, Z)
−7.6836, −35.2270, −19.1140 | 16.5968, −32.3464, −22.5351
330.7646, −33.7368, −18.2985 | 337.2910, −39.1214, −20.8741
−8.3642, 304.4994, −16.5283 | 34.2465, 299.5795, −22.6286
330.0840, 305.9896, −15.7128 | 354.9407, 292.8044, −20.9676
−7.7841, −37.0440, 92.8582 | 16.5872, −26.0946, 90.3929
330.6641, −35.5538, 93.6738 | 337.2813, −32.8696, 92.0538
−8.4647, 302.6823, 95.4439 | 34.2369, 305.8313, 90.2994
329.9835, 304.1725, 96.2594 | 354.9310, 299.0562, 91.9603

Coordinates in X, Y, and Z for the points obtained by applying the reference transform to 8 pairs of CT and MRI images from the RIRE dataset, without the synthesized electrodes; each row corresponds to the point with the same index in Table 1. These points were used as references to compute the Euclidean distance for the registration of the synthesized images with electrodes.
Table 3. Euclidean distance in mm for the image fusion methods.

Point | Image 1 Mask | Image 1 no Mask | Image 2 Mask | Image 2 no Mask | Image 3 Mask | Image 3 no Mask | Image 4 Mask | Image 4 no Mask
1 | 0.209 | 0.179 | 1.260 | 0.535 | 2.114 | 3.469 | 1.247 | 0.594
2 | 0.367 | 0.191 | 0.666 | 0.617 | 1.553 | 4.152 | 0.915 | 1.015
3 | 0.468 | 0.131 | 1.446 | 0.727 | 1.307 | 6.576 | 1.507 | 0.703
4 | 0.159 | 0.365 | 1.409 | 0.586 | 1.279 | 7.070 | 0.430 | 0.415
5 | 0.148 | 0.258 | 1.204 | 0.749 | 1.714 | 5.029 | 0.843 | 0.343
6 | 0.489 | 0.114 | 0.850 | 0.870 | 0.987 | 5.274 | 0.958 | 0.827
7 | 0.346 | 0.174 | 1.386 | 0.575 | 0.960 | 8.033 | 1.344 | 0.859
8 | 0.228 | 0.297 | 1.494 | 0.496 | 0.972 | 8.282 | 0.803 | 0.553

Point | Image 5 Mask | Image 5 no Mask | Image 6 Mask | Image 6 no Mask | Image 7 Mask | Image 7 no Mask | Image 8 Mask | Image 8 no Mask
1 | 3.064 | 2.998 | 0.737 | 5.057 | 1.327 | 0.909 | 3.161 | 10.620
2 | 3.059 | 2.964 | 1.366 | 17.157 | 0.954 | 0.892 | 1.108 | 5.676
3 | 2.612 | 2.584 | 1.595 | 9.568 | 0.328 | 0.436 | 3.041 | 9.223
4 | 1.601 | 1.543 | 2.803 | 21.541 | 0.614 | 0.098 | 1.396 | 7.952
5 | 2.175 | 2.189 | 1.594 | 6.484 | 1.089 | 0.598 | 3.258 | 10.370
6 | 1.832 | 1.782 | 2.116 | 17.945 | 0.615 | 0.546 | 1.854 | 6.935
7 | 3.247 | 3.209 | 1.661 | 10.419 | 0.705 | 0.611 | 3.014 | 8.926
8 | 2.223 | 2.141 | 2.948 | 22.187 | 0.899 | 0.404 | 1.836 | 8.886

Euclidean distance between the resulting points computed with the transformation from the proposed method using a sampling mask and from the method with no sampling mask; images 3, 6, and 8 show a marked reduction in the Euclidean distance when the mask was used.
Table 4. Reference structures in CT image.

Structure | Image 1 (X, Y, Z) | Image 2 (X, Y, Z)
Sylvian aqueduct | 1.619, 132.161, 128.294 | 1.719, 130.778, −390.919
Anterior commissure | 0.079, 149.323, 131.294 | 0.31, 145.902, −365.217
Right lens | 30.48, 227.407, 121.907 | 30.951, 230.123, −365.558
Left lens | −39.392, 221.421, 120.383 | −38.422, 226.1, −367.741
Pineal gland | 1.399, 132.381, 139.294 | 2.155, 125.729, −375.287

Structure | Image 3 (X, Y, Z) | Image 4 (X, Y, Z)
Sylvian aqueduct | −0.001, 133.384, −411.389 | 6.398, 149.516, −571.414
Anterior commissure | −0.425, 170.072, −410.326 | −0.19, 170.766, −565.312
Right lens | 29.609, 225.447, −457.272 | −0.743, 234.271, −598.263
Left lens | −36.389, 224.62, −453.821 | −54.969, 215.592, −584.7
Pineal gland | 1.039, 141.247, −398.385 | 9.875, 147.018, −558.196

Coordinates in X, Y, and Z of the located structures in the four CT images.
Table 5. Structures in MRI registered with mask.

Structure | Image 1 (X, Y, Z) | Image 2 (X, Y, Z)
Sylvian aqueduct | −0.497, 135.501, 131.223 | −0.941, 137.412, −374.917
Anterior commissure | −2.628, 159.994, 140.26 | −2.127, 163.213, −365.627
Right lens | 31.231, 226.372, 126.957 | 34.232, 237.695, −379.789
Left lens | −38.222, 220.542, 124.31 | −36.479, 233.46, −377.886
Pineal gland | −0.381, 134.024, 139.112 | −0.327, 131.274, −372.335

Structure | Image 3 (X, Y, Z) | Image 4 (X, Y, Z)
Sylvian aqueduct | −0.139, 146.147, −404.701 | 3.085, 150.405, −571.851
Anterior commissure | −2.833, 169.993, −409.066 | −0.279, 173.761, −569.139
Right lens | 28.638, 225.725, −456.317 | 13.94, 230.782, −603.875
Left lens | −37.283, 224.27, −452.711 | −46.124, 228.076, −603.892
Pineal gland | 0.002, 143.884, −399.845 | 5.151, 144.134, −566.054

Coordinates in X, Y, and Z of the located structures in the resulting fused images using a sampling mask in the registration procedure.
Table 6. Structures in MRI registered without mask.

Structure | Image 1 (X, Y, Z) | Image 2 (X, Y, Z)
Sylvian aqueduct | −0.469, 136.396, 131.874 | −0.435, 134.905, −378.153
Anterior commissure | −2.491, 160.081, 140.798 | −2.048, 158.782, −365.235
Right lens | 31.606, 226.472, 127.22 | 33.893, 231.655, −370.27
Left lens | −38.372, 220.462, 124.4 | −36.656, 226.912, −371.566
Pineal gland | 0.05, 131.032, 134.609 | 0.013, 129.051, −375.979

Structure | Image 3 (X, Y, Z) | Image 4 (X, Y, Z)
Sylvian aqueduct | 1.229, 146.558, −405.057 | 3.03, 151.037, −571.368
Anterior commissure | −0.905, 169.695, −409.541 | −0.098, 173.599, −569.108
Right lens | 29.558, 226.22, −455.332 | 14.847, 230.98, −603.507
Left lens | −36.204, 224.27, −452.584 | −45.157, 228.144, −605.603
Pineal gland | 1.401, 141.552, −399.967 | 4.646, 144.436, −565.84

Coordinates in X, Y, and Z of the located structures in the resulting fused images without using a sampling mask in the registration procedure.
Table 7. Euclidean distance between resulting registered images.

Structure | Image 1 Mask | Image 1 no Mask | Image 2 Mask | Image 2 no Mask | Image 3 Mask | Image 3 no Mask | Image 4 Mask | Image 4 no Mask
Sylvian aqueduct | 5.925 | 4.921 | 13.588 | 17.526 | 14.668 | 14.41 | 3.696 | 3.458
Anterior commissure | 14.583 | 14.198 | 13.094 | 17.487 | 0.994 | 2.719 | 4.738 | 4.86
Right lens | 5.511 | 5.209 | 5.762 | 16.451 | 2.089 | 1.391 | 6.774 | 16.101
Left lens | 4.254 | 4.191 | 4.291 | 12.683 | 1.299 | 1.468 | 26.282 | 24.544
Pineal gland | 5.059 | 2.429 | 4.013 | 6.754 | 1.651 | 3.188 | 9.615 | 9.612

Euclidean distance between the reference structures and the corresponding structures in the resulting fused image for the methods with and without the mask.
Table 8. Global fusion evaluation metrics in resulting registered images.

Metric | Image 1 Mask | Image 1 no Mask | Image 2 Mask | Image 2 no Mask | Image 3 Mask | Image 3 no Mask | Image 4 Mask | Image 4 no Mask
RMSE | 7334.771 | 7334.767 | 7399.4 | 7537.772 | 6194.527 | 6207.718 | 7655.544 | 7657.312
PSNR | 95.501 | 95.501 | 94.34 | 94.154 | 97.759 | 97.738 | 94.361 | 94.359
SSIM | 0.893 | 0.893 | 0.87 | 0.9 | 0.774 | 0.775 | 0.853 | 0.853
MI | 0.574 | 0.574 | 0.433 | 0.416 | 0.433 | 0.438 | 0.348 | 0.348

Global performance metrics for the fused images using the mask and without using the mask.
