Article

Image Fusion of High-Resolution DynaCT and T2-Weighted MRI for Image-Guided Programming of dDBS

1 Chair of Medical Systems Technology, Institute for Medical Technology, Faculty of Electrical Engineering and Information Technology, Otto von Guericke University Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
2 Institute for Diagnostic and Interventional Radiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany
3 Neuroradiology, Medical Faculty, Martin Luther University Halle-Wittenberg, Ernst-Grube-Straße 40, 06120 Halle, Germany
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Brain Sci. 2025, 15(5), 521; https://doi.org/10.3390/brainsci15050521
Submission received: 17 January 2025 / Revised: 19 February 2025 / Accepted: 16 May 2025 / Published: 19 May 2025
(This article belongs to the Section Sensory and Motor Neuroscience)

Abstract

Objectives: This study aimed to develop a semi-automated registration method for aligning preoperative non-contrast T2-weighted MRI with postoperative high-resolution cone-beam CT (DynaCT) in patients undergoing directional deep brain stimulation (dDBS) surgery targeting the subthalamic nucleus (STN). The aim was to facilitate image-guided programming of DBS devices and postoperative verification of the alignment of segmented contacts. Materials and Methods: A dataset of ten patients undergoing bilateral dDBS implantation was retrospectively collected, including postoperative DynaCT and preoperative non-contrast T2-weighted MRI. Because DynaCT and T2-weighted MRI share only limited anatomical information, a semi-automated registration method with manual initialization was used. Image visualization, initial alignment using a centered transformation initializer, and single-resolution image registration were performed with the SimpleITK library. Alignment accuracy was assessed via manual landmark-based alignment using anatomical landmarks and evaluation metrics such as the Target Registration Error (TRE). Results: The registration method successfully aligned all images. Quantitative evaluation revealed a mean TRE of 1.48 mm averaged across all subjects, indicating satisfactory alignment quality. Multiplanar reformations (MPRs) based on electrode-oriented normal vectors visualized the segmented contacts for verification of accurate electrode placement. Conclusions: The developed method demonstrated successful registration between preoperative non-contrast T2-weighted MRI and postoperative DynaCT despite their dissimilar anatomical information. This approach provides the accurate alignment crucial for DBS programming and postoperative verification, potentially reducing DBS programming time.
The study underscores the importance of image quality, manual initialization, and semi-automated registration methods for successful multimodal image registration in dDBS procedures targeting the STN.

1. Introduction

Directional DBS electrode models can steer the stimulating electric field in dDBS [1]. Knowing the exact orientation of the electrode and its contacts at the stimulation site is crucial for implantation and for programming the device [2,3,4]. However, the segmented contacts of the electrodes that steer the electric field elude conventional imaging modalities such as MRI and CT. Recent publications have shown that photon counting CT (PCCT) as well as high-resolution DynaCT (©Siemens, Erlangen, Germany) can be used to visualize segmented contacts. While PCCT is capable of displaying the contacts in a full field of view (FOV), it is not yet commonly available. Moreover, image-guided implantation and programming may be challenging due to patient transfers from the operating department to the imaging unit with PCCT. Conversely, high-resolution DynaCT can image the electrode contacts but lacks the ability to display the target region (e.g., subthalamic nucleus, STN) due to poor soft-tissue contrast. Its advantages, on the other hand, lie in its availability and its design, which allow for image-guided procedures within a hybrid operating room. In this proof-of-concept study, we developed a manually initialized intensity-based registration method to register non-contrast preoperative T2-weighted MRI with postoperative high-resolution DynaCT in patients with STN-DBS. In perspective, the fast and automatic fusion of these modalities would enable image-guided programming of the dDBS device within a hybrid operating room setup.
Prior research has demonstrated the feasibility of multimodal image fusion for DBS applications. A notable example is the fusion of flat detector computed tomography (FDCT) with CT, which has been shown to effectively preserve anatomical context while enhancing the visualization of segmented DBS electrode contacts in 3D [5]. This method allows for subsequent fusion with preoperative MRI, facilitating more accurate electrode placement. However, despite these advancements, existing studies have primarily relied on CT as an intermediary imaging step between FDCT and MRI, introducing an additional imaging layer and increased radiation exposure. In contrast, our study eliminates the need for CT, directly registering postoperative DynaCT with preoperative MRI, thus reducing total radiation exposure while maintaining high registration accuracy. Our work builds upon these prior methodologies by demonstrating the feasibility of semi-automated registration for accurate multimodal image fusion in DBS programming, optimizing the workflow for postoperative electrode programming without requiring an additional CT scan.

2. Materials and Methods

2.1. Data Acquisition

Hypothesizing that DynaCT and T2-weighted MRI would not share sufficient anatomical details for direct, fully automated registration, we decided to register DynaCT to T2-weighted MRI with manual initialization. In accordance with national and European data privacy policy, a retrospectively collected dataset was used. The dataset contained anonymized images from the clinical routine of ten patients (n = 10) who underwent bilateral dDBS implantation surgery. The DynaCT was performed within the first week after surgery, whereas the MRI study was performed approximately 2 weeks prior to electrode implantation. The average age of the included patients was 53 years, with equal representation of male and female patients (n = 5 each). For each patient, high-resolution DynaCT was acquired using an ArtisQ multi-purpose X-ray system with syngo DynaCT micro head (both ©Siemens, Erlangen, Germany). The settings for the high-resolution DynaCT were an anode voltage of 116–119 kV with a tube current ranging from 258 to 274 mA. The settings for T2-weighted MRI were as follows: Siemens 3T Skyra; sequences: 3D Spin Echo (3DSE) and 2D Turbo Spin Echo (2DTSE); TR: 2800 ms and 6950 ms; TE: 244 ms and 80 ms; flip angle: 120° and 180°; pixel bandwidth: 700 Hz/pixel and 250 Hz/pixel; acquisition time: ca. 9 min. Additional information regarding the images’ properties can be found in Table 1.

2.2. Image Visualization and Initial Alignment

To visualize the images, a multimodal display of the DynaCT and T2-weighted MRI was used, employing a windowing technique to enhance visual image quality. The display included both the fixed and the moving image, applying modality-specific window level values for DynaCT and MRI. An initial alignment was performed using a centered transformation initializer with an Euler 3D transformation for geometric alignment.

2.3. Manual Landmark-Based Alignment

As shown in Figure 1, manual landmark-based alignment was performed using pre-defined points on both fixed and moving images, placed on the anatomical landmarks of the semicircular canals. A Versor Rigid 3D transform was initialized from these landmarks to refine the initial transformation, replacing the purely geometric Euler 3D alignment with a landmark-optimized one.

2.4. Intensity-Based Image Registration

A single-resolution image registration approach was employed utilizing the SimpleITK library [6]. The registration parameters were configured as follows: the similarity metric was Mattes Mutual Information; a random sampling strategy was chosen with a sampling percentage of 0.01; and linear interpolation was used. The optimization algorithm was Gradient Descent with a learning rate of 0.33, running for 30 iterations, estimating the learning rate per iteration, and converging upon reaching a minimum threshold of 0.01. The initial transform was provided externally. Additionally, the resolution settings included shrink factors set to one and smoothing sigmas set to zero, with the smoothing sigmas specified in physical units. Executing the registration returned the final transformed image and the metric value achieved during the registration process.

2.5. Evaluation Metrics

For evaluation, TREs were computed for both the initial alignment and the final alignment, using utilities provided in the code. These metrics were calculated in millimeters, comprising mean and standard deviation values for both initial and final alignments. The TREs were assessed between the fixed image points and corresponding moving image points, providing insights into the accuracy of the alignment process.

2.6. Image Processing

Our method involves a partially automated process for multimodal 3D image registration, integrating both MRI and DynaCT data. Custom software performing all necessary computations was developed using SimpleITK v2.3.1 for Python. The workflow consists of manually choosing two different sets of anatomical landmarks, one for initialization and one for evaluation of the TRE, as can be seen in Figure 2. For both initialization and evaluation, a radiologist chose three corresponding landmarks within 3DSlicer 5.2.1 [7] before the images were rigidly registered via maximization of mutual information. After optimization of the rigid transformation parameters, the final transformation was obtained. For each subject, the TRE was calculated based on the dedicated evaluation landmarks and the final transformation, as shown in Table 1 and Figure 3. In addition, a multiplanar reformation (MPR) along an electrode-based normal vector was computed using Matlab R2024a to visualize the segmented contacts in the correct plane, as can be seen in Figure 4.

3. Results

All images were registered with high precision. The quantitative evaluation yielded a mean TRE of 1.48 mm averaged over all subjects. Carefully choosing the landmarks took approximately 7.5 min per patient; the final intensity-based registration took 5 s. Table 1 gives an overview of the mean TRE, standard deviation, and spatial information. After registration, an MPR along an electrode-based normal vector was computed using custom Matlab code. Figure 4 shows an exemplary MPR for the right electrode. The segmented contacts appear within the correct anatomical structure (STN). Accordingly, Figure 5 shows a non-angulated sagittal view in which the contacts, as well as the marker, are clearly delineated.
The results of our multimodal image registration approach are encouraging, demonstrating clear improvements in the accuracy of aligning DynaCT and T2-weighted MRI scans. Figure 3 provides a quantitative assessment of the TRE in all ten patients, showing the mean and standard deviation (SD) at three specific landmark locations per dataset; corresponding anatomical landmarks were identified qualitatively in both modalities, and the TRE was determined using a semi-automatic method. The figure is divided into two sections: the left part, Figure 3a (blue), shows the elevated TRE values before registration, while the right part, Figure 3b (orange), shows a marked reduction in TRE after registration. The computed means and SDs underscore the substantial improvement achieved across all ten patients and serve as robust indicators of the efficacy of our image registration methodology.
The higher SD in subject 5 is most likely attributable to inaccuracies when choosing the landmarks. This was caused by incomplete reconstruction of the semicircular canal due to the narrow FOV.

4. Discussion

The developed semi-automated registration method effectively aligned preoperative T2-weighted MRI with postoperative high-resolution DynaCT with a narrow FOV in dDBS patients. This potentially leverages workflows already established for determining electrode orientation [8,9] and may be used in a hybrid surgery programming scenario [4]. However, the inherent differences in anatomical structure and acquisition timeframes between preoperative MRI and postoperative DynaCT introduce additional challenges in image fusion. MRI provides superior soft-tissue contrast, while DynaCT excels in electrode visualization. Additionally, postoperative anatomical changes and potential shifts in electrode positioning further complicate registration accuracy [3]. The choice of appropriate initialization strategies is crucial to mitigate these discrepancies and improve alignment precision. Frameless, image-guided implantation has already been tested in DBS patients, but the assessment of the anatomical orientation of segmented contacts remains unsolved, though electrode tip positions are exploitable [8]. The accuracy of the image registration presented here is state of the art, with an overall mean TRE of 1.48 mm [10,11]. However, the method of calculating the registration accuracy differs from study to study, which should be considered when comparing results. Notably, in the present work, image pairs containing motion artifacts exhibited slightly higher TRE values; however, no statistical tests for outlier detection were performed in this context (see Table 1). Intuitively, the accuracy of image registration is largely dependent on image quality, leading to a combined error. These errors arise from two main sources: firstly, due to the geometric distortions of the modalities, and secondly, due to registration errors related to the optimization algorithm [10]. 
In DynaCT, cone-beam geometry leads to incomplete image reconstruction in the periphery; hence, distortions increase the farther a voxel is from the isocenter [12]. Additionally, scattered radiation and beam hardening affect image quality. In MRI, it is well known that magnetic field inhomogeneities lead to geometric inaccuracies [10]. Since manual registration methods are inherently subjective and rely on an individual’s expertise, combining them with an automated algorithm may improve robustness. To mitigate these challenges, our method incorporates a manual initialization step based on anatomical landmarks, allowing for accurate initial alignment before optimization. The selection of the semicircular canals as anatomical landmarks ensures consistency, as they are distinguishable in both MRI and DynaCT despite contrast differences. This approach helps compensate for postsurgical anatomical deviations and variations in head positioning between preoperative and postoperative imaging sessions. The final intensity-based transformation was completed in 5 s per patient using a single-resolution optimization scheme, enhancing computational efficiency.
Our findings are consistent with prior research on image fusion techniques for DBS electrode visualization. The fusion of highly resolved FDCT with CT has been shown to effectively visualize segmented contacts of DBS electrodes in 3D, preserving anatomical context and facilitating subsequent fusion with preoperative MRI, thereby enhancing the accuracy of electrode placement [5]. However, previous studies required CT to be used as an intermediary step between FDCT and MRI, adding an extra imaging layer and additional radiation exposure. In contrast, our approach eliminates the need for an intermediary CT scan by directly fusing postoperative DynaCT with preoperative MRI. This reduces the total radiation exposure to the patient while still ensuring accurate electrode visualization and registration. Moreover, the semi-automated registration process between FDCT and CT demonstrated good alignment with intracranial structures, with a mean TRE of 4.16 mm, suggesting its potential clinical applicability [5]. In comparison, our study achieved a mean TRE of 1.48 mm, indicating an improvement in alignment accuracy, which may further benefit image-guided DBS programming.

5. Conclusions and Future Work

Whether the accuracy of the proposed method is sufficient for verifying the position of segmented contacts within a target area remains to be explored. In this study, the semicircular canals were utilized as corresponding anatomical structures to initialize the automatic registration, as they exhibit a T2-hyperintense signal in MRI. The semicircular canals are also distinguishable in DynaCT, which makes them a suitable candidate for automated initialization. Yet, incomplete reconstructions of this structure in the narrow-FOV DynaCT must be taken into account when building a fully automated approach around this promising structure. High-resolution cochlea segmentation and registration has already been introduced, but was restricted to the cochlea itself and not used for global transformations [13]. An exhaustive parameter search may converge to a local maximum of the mutual information function, especially if the initialization is already close to the global optimum [14,15]. Hence, the optimization settings were kept to a less strict regime, counting on good initialization. Since the postoperative DynaCT was acquired shortly before discharge, in accordance with previous recommendations [16], the impact of surgery-related changes should be lower than in the first few postoperative hours [17]. The orientation of the electrodes remains stable from a certain postoperative point in time [17], yet caution is indicated because the preoperative MRI does not image the brain at the same time point as the DynaCT acquisition. However, with 5 s of runtime per patient, the method is fast enough, from a temporal perspective, to be applicable under intraoperative conditions. Additionally, determining the best combination of optimizers, similarity metrics, and interpolators is crucial for minimizing registration errors.
A suitable selection of registration parameters that balances accuracy and computational efficiency can significantly improve the accuracy of aligning multimodal images by reducing the TRE [18,19]. Phantom- and fiducial-based studies indicate that DynaCT–MRI pairs show the highest mean TRE [20]. Thus, it is likely that the mean TRE was not overestimated, even though manually chosen landmarks rather than fiducials were utilized. Hence, a phantom-based registration study of the proposed method is planned, as this will allow us to evaluate its accuracy with more certainty. All in all, the 3D visualization of segmented electrode contacts, as postulated in [4], was performed, as shown in Figure 6, demonstrating the potential for streamlining dDBS surgery, reducing its duration, and verifying the segmented contacts’ orientation without the need for stereotactic frames or micro-electrode recordings. While the semi-automatic registration method itself is well established, the specific combination of modalities in this context is novel. This work should therefore be viewed as a feasibility study or proof of concept, intended to avoid unnecessary investment in developing an algorithm that may not be viable. Additionally, the study underscores the importance of image quality and the impact of initialization, and highlights the potential of multimodal image fusion of DynaCT and MRI.
Despite the successful implementation of our semi-automated registration method, certain limitations must be acknowledged. The variability in anatomical structures due to postsurgical changes and differences in imaging time points can introduce challenges in achieving consistent registration accuracy. While the current approach successfully demonstrated a mean TRE of 1.48 mm, further validation using a larger dataset is needed to ensure robustness across different patient cases. Additionally, the reliance on manual landmark selection introduces subjectivity, which may influence registration accuracy. Future work will focus on integrating automated landmark detection techniques and deep learning-based registration frameworks to reduce user dependency and enhance reproducibility. Moreover, systematic parameter optimization will be explored to refine the selection of transformation models, similarity metrics, and interpolation methods, potentially improving the alignment of multimodal images even further.
Furthermore, we plan to use our dataset for supervised deep learning training, aiming to develop a fully automated registration pipeline. Manual labeling of our dataset remains challenging due to the variability in anatomical structures between MRI and DynaCT modalities. By leveraging our registered image pairs, we aim to create a training dataset that enables deep learning models to learn optimal registration transformations, reducing the need for manual intervention and enhancing registration accuracy in future studies.

Author Contributions

In this study, F.A.-J. and M.M. contributed equally to conducting the experiments that tested the image registration approach developed by F.A.-J. The manuscript was a collaborative effort between M.M. and F.A.-J. M.F., M.S., and C.H. critically revised the manuscript, with M.S. contributing the basic idea behind the methodological approach of our research using multimodal imaging. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Deutscher Akademischer Austauschdienst (DAAD) under the funding programme "Research Grants - Doctoral Programmes in Germany" (57440921). Additionally, it received support for the Book Processing Charge from the Open Access Publication Fund of Magdeburg University. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Institutional Review Board Statement

In accordance with EU and German data privacy law, ethics approval was waived because a retrospectively collected, anonymized dataset from a clinical routine was used. Approval by the ethics committee was waived on 12 May 2025 under R04-25.

Informed Consent Statement

All authors have consented to the publication of this manuscript.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to ethical considerations and patient confidentiality.

Conflicts of Interest

The authors declare that they have no competing interests.

References

1. Lozano, A.M.; Lipsman, N.; Bergman, H.; Brown, P.; Chabardes, S.; Chang, J.W.; Matthews, K.; McIntyre, C.C.; Schlaepfer, T.E.; Schulder, M.; et al. Deep brain stimulation: Current challenges and future directions. Nat. Rev. Neurol. 2019, 15, 148–160.
2. Egger, K.; Rau, A.; Urbach, H.; Reisert, M.; Reinacher, P.C. 3D X-ray based visualization of directional deep brain stimulation lead orientation. J. Neuroradiol. 2022, 49, 293–297.
3. Dembek, T.; Hoevels, M.; Hellerbach, A.; Horn, A.; Petry-Schmelzer, J.; Borggrefe, J.; Wirths, J.; Dafsari, H.; Barbe, M.; Visser-Vandewalle, V.; et al. Directional DBS leads show large deviations from their intended implantation orientation. Park. Relat. Disord. 2019, 67, 117–121.
4. Schmidt, J.M.; Buentjen, L.; Kaufmann, J.; Gruber, D.; Treuer, H.; Haghikia, A.; Voges, J. Deviation of the orientation angle of directional deep brain stimulation leads quantified by intraoperative stereotactic X-ray imaging. Neurosurg. Rev. 2022, 45, 2975–2982.
5. Al-Jaberi, F.; Moeskes, M.; Skalej, M.; Fachet, M.; Hoeschen, C. 3D-visualization of segmented contacts of directional deep brain stimulation electrodes via registration and fusion of CT and FDCT. EJNMMI Rep. 2024, 8, 17.
6. Beare, R.; Lowekamp, B.; Yaniv, Z. Image Segmentation, Registration and Characterization in R with SimpleITK. J. Stat. Softw. 2018, 86, 1–35.
7. Fedorov, A.; Beichel, R.; Kalpathy-Cramer, J.; Finet, J.; Fillion-Robin, J.C.; Pujol, S.; Bauer, C.; Jennings, D.; Fennessy, F.; Sonka, M.; et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn. Reson. Imaging 2012, 30, 1323–1341.
8. Soler-Rico, M.; Peeters, J.B.; Joris, V.; Delavallée, M.; Duprez, T.; Raftopoulos, C. MRI-guided DBS of STN under general anesthesia for Parkinson’s disease: Results and microlesion effect analysis. Acta Neurochir. 2022, 164, 2279–2286.
9. O’Gorman, R.L.; Jarosz, J.M.; Samuel, M.; Clough, C.; Selway, R.P.; Ashkan, K. CT/MR image fusion in the postoperative assessment of electrodes implanted for deep brain stimulation. Stereotact. Funct. Neurosurg. 2009, 87, 205–210.
10. Aldosary, G.; Szanto, J.; Holmes, O.; Lavigne, B.; Althobaity, W.; Sheikh, A.; Foottit, C.; Vandervoort, E. Geometric inaccuracy and co-registration errors for CT, DynaCT and MRI images used in robotic stereotactic radiosurgery treatment planning. Phys. Med. 2020, 69, 212–222.
11. Geevarghese, R.; O’Gorman Tuura, R.; Lumsden, D.E.; Samuel, M.; Ashkan, K. Registration accuracy of CT/MRI fusion for localisation of deep brain stimulation electrode position: An imaging study and systematic review. Stereotact. Funct. Neurosurg. 2016, 94, 159–163.
12. Orth, R.C.; Wallace, M.J.; Kuo, M.D.; Technology Assessment Committee of the Society of Interventional Radiology. C-arm cone-beam CT: General principles and technical considerations for use in interventional radiology. J. Vasc. Interv. Radiol. 2008, 19, 814–820.
13. Al-Dhamari, I.; Helal, R.; Abdelaziz, T.; Waldeck, S.; Paulus, D. Automatic cochlear multimodal 3D image segmentation and analysis using atlas–model-based method. Cochlear Implant. Int. 2024, 25, 46–58.
14. Maes, F.; Vandermeulen, D.; Suetens, P. Medical image registration using mutual information. Proc. IEEE Inst. Electr. Electron. Eng. 2003, 91, 1699–1722.
15. Chen, M.; Tustison, N.J.; Jena, R.; Gee, J.C. Image Registration: Fundamentals and Recent Advances Based on Deep Learning. In Machine Learning for Brain Disorders; Colliot, O., Ed.; Springer: New York, NY, USA, 2023; pp. 435–458.
16. Krüger, M.T.; Naseri, Y.; Cavalloni, F.; Reinacher, P.C.; Kägi, G.; Weber, J.; Brogle, D.; Bozinov, O.; Hägele-Link, S.; Brugger, F. Do directional deep brain stimulation leads rotate after implantation? Acta Neurochir. 2021, 163, 197–203.
17. Dembek, T.A.; Asendorf, A.L.; Wirths, J.; Barbe, M.T.; Visser-Vandewalle, V.; Treuer, H. Temporal stability of lead orientation in directional deep brain stimulation. Stereotact. Funct. Neurosurg. 2021, 99, 167–170.
18. Al-Jaberi, F.; Fachet, M.; Moeskes, M.; Skalej, M.; Hoeschen, C. Optimization Techniques for Semi-Automated 3D Rigid Registration in Multimodal Image-Guided Deep Brain Stimulation. Curr. Dir. Biomed. Eng. 2023, 9, 355–358.
19. Handa, B.; Singh, G.; Kamal, R.; Oinam, A.S.; Kumar, V. Evaluation method for the optimization of 3d rigid image registration on multimodal image datasets. Int. J. Eng. Adv. Technol. 2019, 9, 234.
20. Chung, H.T.; Kim, J.H.; Kim, J.W.; Paek, S.H.; Kim, D.G.; Chun, K.J.; Kim, T.H.; Kim, Y.K. Assessment of image co-registration accuracy for frameless gamma knife surgery. PLoS ONE 2018, 13, e0193809.
Figure 1. Exemplary overview of a landmark defined in 3DSlicer (yellow “L-3”) in axial (a) and sagittal view (b). As the Euclidean distance of the landmarks increases, the transformation becomes more accurate. Bilaterally, the tip of the anterior semicircular canals was also chosen. The high-resolution DynaCT is depicted in (a,c), whereas the T2-weighted MRI is shown in (b,d).
Figure 2. Schematic illustration of the proposed workflow. T2-weighted MRI and DynaCT DICOM-images were loaded into the 3DSlicer GUI. For each modality, two sets of corresponding anatomical landmarks were chosen utilizing 3DSlicer. One set was used for initializing the registration and another was used to determine the accuracy. Matching the control points led to a robust initial estimation of translation and rotation between the datasets. Subsequently, the final transformation was obtained via maximization of mutual information via gradient descent.
Figure 3. A quantitative assessment was conducted to calculate the mean and standard deviations for the TRE at three points within each patient image dataset. This evaluation involved comparing DynaCT and T2-weighted MRI images across ten datasets related to deep brain stimulation. The evaluator qualitatively identified corresponding anatomical landmarks in both image modalities for each dataset. TRE was determined using a semi-automatic method, and the resulting figure is divided into two sections. The left part, (a) shown in blue, represents the dataset before registration, while the right part, (b) depicted in orange, represents the dataset after registration.
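The TRE reported in Figure 3 is the Euclidean distance between corresponding landmark pairs, summarized by mean and standard deviation per dataset. A minimal NumPy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def target_registration_error(fixed_pts, moved_pts):
    """Mean and SD of Euclidean distances between corresponding
    landmark pairs; both inputs are N x 3 arrays in mm."""
    d = np.linalg.norm(np.asarray(fixed_pts) - np.asarray(moved_pts),
                       axis=1)
    return d.mean(), d.std()

# e.g. three landmark pairs with residual offsets of 1, 2 and 3 mm:
fixed = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0]])
moved = np.array([[1.0, 0, 0], [10, 2, 0], [0, 10, 3]])
mean_tre, sd_tre = target_registration_error(fixed, moved)  # mean 2.0 mm
```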
Figure 4. Fused image after multiplanar reformation (MPR): The images are displayed in accordance with scientific convention. Preoperative T2-weighted MRI (a), and cutout view of the same slice (b). Characteristic hypointensities depicting both the right subthalamic (yellow) and red nucleus (red) as well as the third ventricle (blue) on preoperative T2-weighted MRI. Each segmented contact point can be identified within the anatomical target area.
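The electrode-oriented MPR shown above requires an orthonormal image frame whose normal is the electrode axis. One standard way to construct such a frame is Gram–Schmidt orthogonalization against an arbitrary helper vector; the sketch below is a generic illustration of that construction, not the authors' implementation.

```python
import numpy as np

def electrode_frame(direction):
    """Orthonormal frame (u, v, n) for an MPR plane: n is the unit
    electrode axis, u and v span the reformatted slice."""
    n = np.asarray(direction, dtype=float)
    n = n / np.linalg.norm(n)
    # Pick a helper vector not parallel to n, then orthogonalize.
    helper = (np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9
              else np.array([0.0, 1.0, 0.0]))
    u = helper - n * np.dot(helper, n)
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    return u, v, n
```

The three vectors can serve directly as the direction matrix of a resampled slice, so each reformatted image plane stays perpendicular (or parallel) to the electrode.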
Figure 5. Sagittal non-contrast T2-weighted MRI with DynaCT superimposed using alpha blending. The subthalamic nucleus is displayed as a T2-hypointense streak (red arrow). All parts of the electrode that are relevant for optimizing the stimulation can be clearly delineated (yellow circle). The image sequence transitions from medial (a) to lateral (d). (a) Proximal circular contact within STN. (b) Proximal segmented contacts for steering the electric field. (c) Distal segmented contact. (d) Distal circular contact.
Figure 6. Axial non-contrast T2-weighted MRI with DynaCT superimposed using alpha blending. The subthalamic nucleus is displayed as a T2-hypointense streak (red arrow). All parts of the electrode that are relevant for optimizing the stimulation can be clearly delineated (yellow circle). The image sequence goes from caudal (a) to cranial (d). (a) Proximal circular contact within STN. (b) Proximal segmented contacts with blue stars indicating the segments for steering the electric field. (c) Distal segmented contact with blue stars indicating the segments for steering the electric field. (d) Distal circular contact.
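The overlays in Figures 5 and 6 use alpha blending, i.e. a convex combination of the two intensity images. A minimal sketch (the function name and the alpha value are assumptions for illustration):

```python
import numpy as np

def alpha_blend(mri_slice, ct_slice, alpha=0.4):
    """Blend a DynaCT overlay onto a T2-w MRI slice.
    Both inputs are 2-D arrays with intensities normalized to [0, 1];
    alpha controls the weight of the CT overlay."""
    return (1.0 - alpha) * mri_slice + alpha * ct_slice
```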
Table 1. The average and standard deviation of the TRE calculated for each MRI and DynaCT image pair following registration using the proposed method. The evaluation was conducted by a rater; detailed imaging parameters, including image dimensions and voxel sizes for both DynaCT and T2-weighted MRI, are listed.
| Patient | TRE (mm) | SD (mm) | Image Size DynaCT (pixels) | Voxel Size DynaCT (mm) | Image Size T2-w MRI (pixels) | Voxel Size T2-w MRI (mm) | Comment |
|---------|----------|---------|----------------------------|------------------------|------------------------------|--------------------------|---------|
| 01 | 2.61 | 0.55 | 512 × 512 × 497 | 0.2 × 0.2 × 0.2 | 192 × 192 × 160 | 1 × 1 × 1 | – |
| 02 | 1.24 | 0.35 | 512 × 512 × 497 | 0.2 × 0.2 × 0.2 | 192 × 192 × 160 | 1 × 1 × 1 | – |
| 03 | 1.05 | 0.24 | 512 × 512 × 497 | 0.2 × 0.2 × 0.2 | 192 × 192 × 160 | 1 × 1 × 1 | – |
| 04 | 1.07 | 0.21 | 512 × 512 × 497 | 0.2 × 0.2 × 0.2 | 192 × 192 × 160 | 1 × 1 × 1 | – |
| 05 | 1.94 | 1.21 | 512 × 512 × 497 | 0.2 × 0.2 × 0.2 | 192 × 192 × 160 | 1 × 1 × 1 | Motion artifacts MRI |
| 06 | 1.86 | 0.68 | 512 × 512 × 497 | 0.2 × 0.2 × 0.2 | 192 × 192 × 160 | 1 × 1 × 1 | Motion artifacts MRI |
| 07 | 0.72 | 0.28 | 512 × 512 × 497 | 0.2 × 0.2 × 0.2 | 192 × 192 × 160 | 1 × 1 × 1 | – |
| 08 | 1.94 | 0.51 | 512 × 512 × 497 | 0.2 × 0.2 × 0.2 | 192 × 192 × 160 | 1 × 1 × 1 | – |
| 09 | 1.05 | 0.37 | 512 × 512 × 497 | 0.2 × 0.2 × 0.2 | 192 × 192 × 160 | 1 × 1 × 1 | – |
| 10 | 1.30 | 0.27 | 512 × 512 × 497 | 0.2 × 0.2 × 0.2 | 240 × 320 × 80 | 0.8 × 0.8 × 2 | 2 mm slice thickness |

Share and Cite

MDPI and ACS Style

Al-Jaberi, F.; Moeskes, M.; Skalej, M.; Fachet, M.; Hoeschen, C. Image Fusion of High-Resolution DynaCT and T2-Weighted MRI for Image-Guided Programming of dDBS. Brain Sci. 2025, 15, 521. https://doi.org/10.3390/brainsci15050521

