Technical Note

A Practical Guide to Manual and Semi-Automated Neurosurgical Brain Lesion Segmentation

1 UCL Medical School, University College London, London WC1E 6DE, UK
2 Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
3 Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
4 High-Dimensional Neurology, Institute of Neurology, University College London, London WC1N 3BG, UK
* Author to whom correspondence should be addressed.
NeuroSci 2024, 5(3), 265-275; https://doi.org/10.3390/neurosci5030021
Submission received: 25 June 2024 / Revised: 30 July 2024 / Accepted: 31 July 2024 / Published: 2 August 2024

Abstract

The purpose of this article is to provide a practical guide to the manual and semi-automated image segmentation of common neurosurgical cranial lesions, namely meningioma, glioblastoma multiforme (GBM) and subarachnoid haemorrhage (SAH), for neurosurgical trainees and researchers. Materials and Methods: The medical images used were sourced from the Medical Image Computing and Computer Assisted Interventions Society (MICCAI) Multimodal Brain Tumour Segmentation Challenge (BRATS) image database and from the local Picture Archival and Communication System (PACS) record with consent. Image pre-processing was carried out using MRIcron software (v1.0.20190902). ITK-SNAP (v3.8.0) was used in this guideline due to its availability and powerful built-in segmentation tools, although others (Seg3D, FreeSurfer and 3D Slicer) are available. Quality control was achieved by having expert segmenters review the outputs. Results: A pipeline was developed to demonstrate the pre-processing and the manual and semi-automated segmentation of patient images for each cranial lesion, accompanied by image guidance and video recordings. Three sample segmentations were generated to illustrate potential challenges, and advice and solutions were provided in both text and video. Conclusions: Semi-automated segmentation methods enhance efficiency, increase reproducibility and are suitable for incorporation into future clinical practice. However, manual segmentation remains a highly effective technique in specific circumstances and provides the initial training sets for the development of more advanced semi- and fully automated segmentation algorithms.

1. Introduction

Image segmentation algorithms are powerful tools for the delineation of regions of interest in medical images obtained through various modalities such as magnetic resonance imaging (MRI) and computed tomography (CT) [1]. Segmentation allows the spread and borders of a pathology to be delineated efficiently, the three-dimensional spatial characteristics of a lesion to be defined, and radiomics to be used to determine clinical lesion characteristics such as volume, intensity and shape [2,3,4]. These features are valuable in clinical practice, informing treatment planning, surgical approach, prognosis and, in the long term, the follow-up of patients with neurosurgical brain lesions [5,6,7].
Segmentation methods can be broadly divided into manual, semi-automated and fully automated types, depending on the level of involvement from the segmenter [8]. Manual segmentation describes the hand-crafted process of outlining structures in medical images in a slice-by-slice manner [9]. Semi-automated segmentation relies on pre-coded computer algorithms for initial segmentation and requires manual inspection and editing afterwards [10]. Fully automated segmentation typically employs machine learning for algorithm development and aims to minimise the manual input [11].
Medical image segmentation can be undertaken with various software packages, including ITK-SNAP, Seg3D, FreeSurfer and 3D Slicer [12]. While manual, semi- and fully automated techniques have been widely used in research for common neurosurgical conditions like brain tumours [13], subarachnoid haemorrhage [14] and hydrocephalus [15], their clinical and educational potential in neurosurgery remains undervalued [12]. There is also currently a lack of training in segmentation in the U.K. and international neurosurgical resident curricula [16].
To fill this gap, we offer a detailed practical guide for novices on how to segment common cerebral lesions. Specifically, we demonstrate a pipeline that makes use of ITK-SNAP to delineate meningiomas, subarachnoid haemorrhage and glioblastomas. We chose these three conditions because they represent distinct pathologies requiring different segmentation methods, allowing us to demonstrate a wide variety of techniques.

2. Materials and Methods

The image processing pipeline (Figure 1) summarises our process.

2.1. Ethics

This article was written as an educational guide and is exempt from ethical committee approval. Where relevant, patients approved the use of their scans for research and educational purposes.

2.2. Hardware

Segmentations were performed on a 2015 Hewlett-Packard Pavilion laptop (made in China) with an Advanced Micro Devices (AMD) 2.00 GHz processor and 16 GB of Random Access Memory (RAM). Image pre-processing took around 15–60 s per scan. The manual segmentation of the meningioma required 15 min, compared to 20 min and 30 min for the GBM and SAH semi-automated segmentations, respectively.

2.3. Software

ITK-SNAP is an openly accessible and easy-to-use software package with powerful built-in semi-automated segmentation tools [17]. It is available for Windows, macOS and Linux [18] and allows for image processing, segmentation and visualisation. 3D Slicer, a general-purpose 3D medical image analysis tool, is a possible alternative that can perform similar contour-based segmentation [19].
MRIcron is a free cross-platform image viewer that can convert Digital Imaging and Communication in Medicine (DICOM) images to Neuroimaging Informatics Technology Initiative (NIfTI) format [20]. Matrix Laboratory (MATLAB) is a programming language and software that allows the calculation of post-segmentation error metrics, such as the Dice similarity coefficient, to evaluate the quality of segmentation [14,21].

2.4. Image Acquisition

Glioblastoma (GBM) scans were sourced from the Medical Image Computing and Computer Assisted Interventions Society (MICCAI) Multimodal Brain Tumour Segmentation Challenge (BRATS) [22,23,24,25,26,27]. This database provided the T1-weighted, T1-weighted + contrast, T2-weighted and T2 fluid-attenuated inversion recovery (FLAIR) scans for each patient, together with a ground truth manual segmentation. The inclusion criteria were a pathologically confirmed diagnosis of GBM and an available O6-methylguanine-DNA methyl transferase (MGMT) promoter methylation status. Demographically, 60% of the patients were male and 40% were female [22,28].
The scans for SAH and meningiomas were obtained from consenting patients at our institution. These images were originally downloaded from the Picture Archival and Communication System (PACS, Figure 1), a hospital system that allows medical professionals to access medical imaging [29]. All data used were anonymised. The SAH patient was a woman in her early 70s at the time of the scan, while the patient with the meningioma was a man in his early 40s with a grade II tumour.
PACS images are typically stored in DICOM format: the standard for raw images obtained from medical scanners [30]. DICOM files are converted to the NIfTI format using software like MRIcron (Figure 1) [20,31].

2.5. Training

Before segmentation, segmenters often require training to recognise the key radiological features of meningiomas, SAH and GBM, which are summarised in Table 1.

2.6. Segmentation

We utilise manual segmentation to delineate meningiomas, particularly given their potentially complex morphology around the skull base. This involves annotating the full extent of the meningioma across a sequential array of MRI slices, followed by interpolation to generate a 3D structure (Figure 2). A detailed step-by-step guide can be found in Supplementary Video S1 (introduction) and Supplementary Video S2 (meningioma segmentation).
We utilise semi-automated segmentation (classification) to delineate SAH (Figure 3A). In brief, different brain tissues are first manually labelled (Figure 3B). A contour is then initialised where the centre of the lesion is likely to be and evolved iteratively towards the lesion boundary (Figure 3C). Finally, the generated segmentation is inspected and edited manually (Figure 3D, Supplementary Video S3). Particular attention is given to diffuse areas of SAH over the falx cerebri and adjacent to the venous sinuses.
Using slight modifications of these steps, GBMs are segmented through a similar process (Figure 4, Supplementary Video S4). Particular attention is given to areas of oedema, which may or may not need to be included in the segmentation.
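The contour-growing idea underlying these semi-automated steps can be conveyed with a greatly simplified intensity-based region-growing sketch in Python. This is an illustrative toy, not ITK-SNAP's actual active contour algorithm: the image, seed and thresholds below are all invented for demonstration.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, lo, hi):
    """Grow a binary mask from a seed voxel, accepting 4-connected
    neighbours whose intensity lies within [lo, hi]."""
    mask = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if mask[y, x] or not (lo <= image[y, x] <= hi):
            continue
        mask[y, x] = True  # voxel accepted into the lesion
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1] and not mask[ny, nx]:
                queue.append((ny, nx))
    return mask

# Toy CT slice: hyperdense "blood" (~60 HU) on parenchyma (~30 HU).
img = np.full((5, 5), 30.0)
img[1:4, 1:4] = 60.0
mask = region_grow(img, seed=(2, 2), lo=50, hi=80)
print(int(mask.sum()))  # 9 voxels captured
```

In practice the grown contour would then be inspected and edited manually, exactly as described above.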

2.7. Methods of Quality Control

Quality control is integral to image segmentation in order to maintain the reliability, consistency and accuracy of derived radiomics [40]. In addition to segmenter training, commonly used quality control methods include the use of expert-defined segmentations, appropriate labelling and error metrics.

2.7.1. Expertly Defined Segmentations and the Imaging Ground Truth

During manual inspection, segmentations are generally checked by consultant radiologists, anatomists and other expert clinical neuroscientists to provide rigorous visual validation. This acts as a ground truth, which allows for a more reliable delineation of lesions and the exclusion of errors. If identified, the errors can then be fixed immediately with cross-checking [41]. However, there remains doubt as to what constitutes an ‘expert’ due to the varied knowledge base and techniques required [42].

2.7.2. Error Metrics

The Dice similarity coefficient and the Jaccard index are two commonly used statistical metrics for evaluating segmentation performance [43]. Both can be applied to check the consistency of segmentations of the same lesion type against the ground truth, or the consistency between segmenters. Both indices are calculated by comparing the segmentation produced by a particular method (A) against a gold standard (B, typically an expert manual segmentation). The coefficients range between 0 and 1, with 1 indicating perfect overlap and 0 indicating completely disjoint results.
While the Dice similarity coefficient more strongly weighs the commonalities between two objects and takes into account the total lesion volume, the Jaccard index penalises the differences between two objects and is not volume dependent.
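As a concrete illustration, both metrics can be computed directly from binary masks. A minimal sketch in Python with NumPy follows; the toy arrays and function names are illustrative.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index (intersection over union): |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union else 1.0

# Toy 2D "segmentations": method A vs gold standard B.
A = np.zeros((4, 4), dtype=bool); A[1:3, 1:3] = True   # 4 voxels
B = np.zeros((4, 4), dtype=bool); B[1:3, 1:4] = True   # 6 voxels
print(dice(A, B))     # 2*4 / (4+6) = 0.8
print(jaccard(A, B))  # 4 / 6 ≈ 0.667
```

Note that the two metrics are monotonically related (J = D / (2 − D)), so they rank segmentations identically; they differ in how harshly disagreement is penalised.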

2.8. Post-Segmentation Processing and Radiomics

After successful segmentation, quantitative features of the pathologies such as the size, shape, contrast-enhancement and texture (radiomics) can be extracted using pre-programmed algorithms [44]. A few examples of relevant radiomics features can be found in the following papers [45,46,47].
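As a simple example of such a feature, lesion volume can be computed from a binary segmentation mask and the voxel spacing recorded in the image header. The sketch below is illustrative (the spacing values are invented); dedicated packages compute far richer first-order, shape and texture feature sets.

```python
import numpy as np

def lesion_volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary segmentation mask in millilitres,
    given voxel spacing in mm along each axis (1 ml = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

# 10x10x10-voxel lesion at 2 mm isotropic spacing:
# 1000 voxels * 8 mm^3/voxel = 8000 mm^3 = 8 ml.
mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:15, 5:15, 5:15] = True
print(lesion_volume_ml(mask, spacing_mm=(2.0, 2.0, 2.0)))  # 8.0
```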

3. Results and Discussion

In this article, we provide a step-by-step guideline for the manual and semi-automated segmentation, using ITK-SNAP, of three common neurosurgical pathologies: meningioma, subarachnoid haemorrhage and glioblastoma.
Manual segmentation performed by expert surgeons and radiologists still currently remains the gold standard [48] and is particularly helpful when the lesion and surrounding tissue have similar signal intensities, causing automated algorithms to fail [49]. Semi-automated segmentation offers increased efficiency and improved repeatability while retaining an aspect of real-time quality control [9] and is particularly suitable for segmenting anatomically complex brain lesions where sparse training data sets exist [14].
Inexperienced trainees can quickly develop the ability to perform manual and semi-automated segmentation. In one study, a group of five participants, including a neurosurgeon, two biomedical engineers and two medical students, were given a standardised 10 min preparation time before segmenting four vestibular schwannoma scans using manual and semi-automated methods. Three of these participants were inexperienced in segmentation, while two were experts [9]. Against ground truth data, the inexperienced participants achieved a Dice score of 0.899, comparable to that of the expert segmenters. This suggests, albeit in a small sample, that segmentation skills can be trained over a short period of time [9].
Despite advancements in semi-automated and automated methods, manual segmentation remains commonly used and is typically held as the gold standard when performed by an expert [50]. Semi-automated segmentation, though faster and less labour-intensive, can miss key areas of brain lesions due to heterogeneity between lesions. Thus, semi-automated methods may still require supervision by a clinician and extensive manual checks [8].
Automated segmentation methods exist for meningioma and have been relatively effective in demarcating lesion boundaries [51,52,53]. However, meningiomas often contain heterogeneous areas of oedema and necrosis, which make automatic classification methods difficult to implement [13]. Automated, quantitative tools for the non-invasive sub-classification of meningiomas on multi-sequence MR images have recently become more available, following the publication of open-access manually segmented datasets.
While automated segmentation methods further improve efficiency, they require extensive coding and training on large databases [13], and usually still require manual quality control steps. The lack of publicly available data for spatially complex brain lesions, such as subarachnoid haemorrhage, limits their utility at present [54]. With greater capability for training, larger databases and more computational power, this will likely become a less pertinent issue [55].
While segmenting, special consideration should be paid to patient confidentiality. Article 9 of the EU General Data Protection Regulation (GDPR), for example, prohibits the processing and revealing of health and biometric data for the purpose of uniquely identifying a natural person, unless the data subject has given explicit consent [56]. Medical images from PACS stored in the DICOM format contain patients' protected health data in the header [57]. When converting DICOM files into NIfTI format using software like MRIcron [20], the header should be anonymised. However, patients' facial contours still remain in CT and MRI scans, and three-dimensional models of the patient's facial appearance can be reconstructed [57]. This issue may be resolved by defacing/skull-stripping algorithms [58] or the application of digital masks [59].
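The header anonymisation step can be illustrated with a plain dictionary standing in for a parsed DICOM header. In practice a library such as pydicom operates on the real tags; the attribute keywords below are standard DICOM names, but the subset chosen and the helper function are illustrative only.

```python
# Standard DICOM patient-identifying attribute keywords that should be
# blanked or replaced before sharing (an illustrative subset, not a
# complete de-identification profile).
IDENTIFYING_FIELDS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "InstitutionName", "AccessionNumber",
]

def anonymise_header(header: dict, replacement: str = "ANON") -> dict:
    """Return a copy of the header with identifying fields overwritten,
    leaving clinically relevant fields (modality, description) intact."""
    cleaned = dict(header)
    for field in IDENTIFYING_FIELDS:
        if field in cleaned:
            cleaned[field] = replacement
    return cleaned

header = {"PatientName": "DOE^JANE", "PatientID": "1234567",
          "Modality": "MR", "StudyDescription": "Brain with contrast"}
print(anonymise_header(header)["PatientName"])  # ANON
```

Note that header anonymisation alone is insufficient, as the paragraph above explains: facial contours in the pixel data must also be addressed.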
If neurosurgery is to become more automated and personalised as anticipated [60], early delineation and characterisation of brain lesions are needed to triage and optimise management. This requires fast, accurate lesion segmentation which can be embedded into the clinical workflow. Such algorithms require training sets of segmentations at scale to ensure both accuracy and precision. By educating clinicians to confidently carry out manual and semi-automated segmentation, the underpinning training data will be accrued. Moreover, by performing segmentations, neurosurgical trainees can actively improve their understanding of the radiology of brain pathologies and mentally rehearse relevant operative procedures [12].
In conclusion, neurosurgical image segmentation has great potential within clinical care, education and research. Our educational guideline provides a step-by-step pathway for neurosurgical trainees new to medical image segmentation. This allows them to effectively apply this within their research and practice.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/neurosci5030021/s1, Video S1: introduction. Video S2: meningioma segmentation. Video S3: SAH. Video S4: GBM.

Author Contributions

Conceptualization, A.S.P.; methodology, A.S.P.; software, R.J. and N.L.; validation, R.J. and N.L.; formal analysis, R.J., N.L. and F.L.; investigation, R.J. and N.L.; resources, R.J., A.S.P. and N.L.; data curation, R.J., F.L. and N.L.; writing—original draft preparation, R.J., F.L., N.L. and A.S.P.; writing—review and editing, R.J., F.L., N.L., A.S.P. and H.H.; visualization, R.J., F.L. and N.L.; supervision, A.S.P. and H.H.; project administration, A.S.P.; funding acquisition not relevant. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pham, D.L.; Xu, C.; Prince, J.L. Current methods in medical image segmentation. Annu. Rev. Biomed. Eng. 2000, 2, 315–337. [Google Scholar] [CrossRef] [PubMed]
  2. Kauke, M.; Safi, A.F.; Stavrinou, P.; Krischek, B.; Goldbrunner, R.; Timmer, M. Does Meningioma Volume Correlate With Clinical Disease Manifestation Irrespective of Histopathologic Tumor Grade? J. Craniofac. Surg. 2019, 30, e799. [Google Scholar] [CrossRef] [PubMed]
  3. Helland, R.H.; Ferles, A.; Pedersen, A.; Kommers, I.; Ardon, H.; Barkhof, F.; Bello, L.; Berger, M.S.; Dunås, T.; Nibali, M.C.; et al. Segmentation of glioblastomas in early post-operative multi-modal MRI with deep neural networks. Sci. Rep. 2023, 13, 18897. [Google Scholar] [CrossRef] [PubMed]
  4. Simi, V.R.; Joseph, J. Segmentation of Glioblastoma Multiforme from MR Images—A comprehensive review. Egypt. J. Radiol. Nucl. Med. 2015, 46, 1105–1110. [Google Scholar] [CrossRef]
  5. Hu, J.; Zhao, Y.; Li, M.; Liu, J.; Wang, F.; Weng, Q.; Wang, X.; Cao, D. Machine learning-based radiomics analysis in predicting the meningioma grade using multiparametric MRI. Eur. J. Radiol. 2020, 131, 109251. [Google Scholar] [CrossRef] [PubMed]
  6. Cepeda, S.; Pérez-Nuñez, A.; García-García, S.; García-Pérez, D.; Arrese, I.; Jiménez-Roldán, L.; García-Galindo, M.; González, P.; Velasco-Casares, M.; Zamora, T.; et al. Predicting Short-Term Survival after Gross Total or Near Total Resection in Glioblastomas by Machine Learning-Based Radiomic Analysis of Preoperative MRI. Cancers 2021, 13, 5047. [Google Scholar] [CrossRef] [PubMed]
  7. Pemberton, H.G.; Wu, J.; Kommers, I.; Müller, D.M.J.; Hu, Y.; Goodkin, O.; Vos, S.B.; Bisdas, S.; Robe, P.A.; Ardon, H.; et al. Multi-class glioma segmentation on real-world data with missing MRI sequences: Comparison of three deep learning algorithms. Sci. Rep. 2023, 13, 18911. [Google Scholar] [CrossRef] [PubMed]
  8. Trimpl, M.J.; Primakov, S.; Lambin, P.; Stride, E.P.J.; Vallis, K.A.; Gooding, M.J. Beyond automatic medical image segmentation-the spectrum between fully manual and fully automatic delineation. Phys. Med. Biol. 2022, 67, 12TR01. [Google Scholar] [CrossRef] [PubMed]
  9. McGrath, H.; Li, P.; Dorent, R.; Bradford, R.; Saeed, S.; Bisdas, S.; Ourselin, S.; Shapey, J.; Vercauteren, T. Manual segmentation versus semi-automated segmentation for quantifying vestibular schwannoma volume on MRI. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1445–1455. [Google Scholar] [CrossRef] [PubMed]
  10. MacKeith, S.; Das, T.; Graves, M.; Patterson, A.; Donnelly, N.; Mannion, R.; Axon, P.; Tysome, J. A comparison of semi-automated volumetric vs linear measurement of small vestibular schwannomas. Eur. Arch. Oto-Rhino-Laryngol. 2018, 275, 867–874. [Google Scholar] [CrossRef] [PubMed]
  11. Vaidyanathan, A.; van der Lubbe, M.F.J.A.; Leijenaar, R.T.H.; van Hoof, M.; Zerka, F.; Miraglio, B.; Primakov, S.; Postma, A.A.; Bruintjes, T.D.; Bilderbeek, M.A.L.; et al. Deep learning for the fully automated segmentation of the inner ear on MRI. Sci. Rep. 2021, 11, 2885. [Google Scholar] [CrossRef] [PubMed]
  12. Ann, C.N.; Luo, N.; Pandit, A.S. Letter: Image Segmentation in Neurosurgery: An Undervalued Skill Set? Neurosurgery 2022, 91, e31–e32. [Google Scholar] [CrossRef] [PubMed]
  13. Kang, H.; Witanto, J.N.; Pratama, K.; Lee, D.; Choi, K.S.; Choi, S.H.; Kim, K.; Kim, M.; Kim, J.W.; Kim, Y.H.; et al. Fully Automated MRI Segmentation and Volumetric Measurement of Intracranial Meningioma Using Deep Learning. J. Magn. Reson. Imaging 2023, 57, 871–881. [Google Scholar] [CrossRef] [PubMed]
  14. Street, J.S.; Pandit, A.S.; Toma, A.K. Predicting vasospasm risk using first presentation aneurysmal subarachnoid hemorrhage volume: A semi-automated CT image segmentation analysis using ITK-SNAP. PLoS ONE 2023, 18, e0286485. [Google Scholar] [CrossRef] [PubMed]
  15. Ziegelitz, D.; Hellström, P.; Björkman-Burtscher, I.M.; Agerskov, S.; Stevens-Jones, O.; Farahmand, D.; Tullberg, M. Evaluation of a fully automated method for ventricular volume segmentation before and after shunt surgery in idiopathic normal pressure hydrocephalus. World Neurosurg. 2023, 181, e303–e311. [Google Scholar] [CrossRef] [PubMed]
  16. Whitfield, P.; Thomson, S.; Brown, J.; Kitchen, N.; Edlmann, E. Neurosurgery Curriculum 2021. Published 4 August 2021. Available online: https://www.gmc-uk.org/-/media/documents/neurosurgery-curriculum-2021---minor-changes-approved-feb22_pdf-89622738.pdf (accessed on 7 October 2023).
  17. Buffinton, C.M.; Baish, J.W.; Ebenstein, D.M. An Introductory Module in Medical Image Segmentation for BME Students. Biomed. Eng. Educ. 2023, 3, 95–109. [Google Scholar] [CrossRef]
  18. Yushkevich, P.A.; Piven, J.; Hazlett, H.C.; Smith, R.G.; Ho, S.; Gee, J.C.; Gerig, G. User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. NeuroImage 2006, 31, 1116–1128. [Google Scholar] [CrossRef] [PubMed]
  19. Fedorov, A.; Beichel, R.; Kalpathy-Cramer, J.; Finet, J.; Fillion-Robin, J.-C.; Pujol, S.; Bauer, C.; Jennings, D.; Fennessy, F.; Sonka, M.; et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn. Reson. Imaging 2012, 30, 1323–1341. [Google Scholar] [CrossRef] [PubMed]
  20. Rorden, C.; Brett, M. Stereotaxic display of brain lesions. Behav. Neurol. 2000, 12, 191–200. [Google Scholar] [CrossRef] [PubMed]
  21. The MathWorks Inc. MATLAB, Version: 9.13.0 (R2022b); Version: 9.13.0 (R2022b); The MathWorks Inc.: Natick, MA, USA, 2022. Available online: https://www.mathworks.com (accessed on 8 November 2023).
  22. Baid, U.; Ghodasara, S.; Mohan, S.; Bilello, M.; Calabrese, E.; Colak, E.; Farahani, K.; Kalpathy-Cramer, J.; Kitamura, F.C.; Pati, S.; et al. The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification. arXiv 2021, arXiv:2107.02314. [Google Scholar] [CrossRef]
  23. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024. [Google Scholar] [CrossRef] [PubMed]
  24. Bakas, S.; Akbari, H.; Sotiras, A.; Bilello, M.; Rozycki, M.; Kirby, J.S.; Freymann, J.B.; Farahani, K.; Davatzikos, C. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci. Data 2017, 4, 170117. [Google Scholar] [CrossRef] [PubMed]
  25. Bakas, S.; Akbari, H.; Sotiras, A.; Bilello, M.; Rozycki, M.; Kirby, J.; Freymann, J.; Farahani, K.; Davatzikos, C. Segmentation Labels for the Pre-Operative Scans of the TCGA-GBM Collection; National Institutes of Health: Bethesda, MD, USA, 2017. [Google Scholar] [CrossRef]
  26. Bakas, S.; Akbari, H.; Sotiras, A.; Bilello, M.; Rozycki, M.; Kirby, J.; Freymann, J.; Farahani, K.; Davatzikos, C. Segmentation Labels for the Pre-Operative Scans of the TCGA-LGG Collection; National Institutes of Health: Bethesda, MD, USA, 2017. [Google Scholar] [CrossRef]
  27. Bakas, S.; Reyes, M.; Jakab, A.; Bauer, S.; Rempfler, M.; Crimi, A.; Shinohara, R.T.; Berger, C.; Ha, S.M.; Rozycki, M.; et al. Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge. arXiv 2019, arXiv:1811.02629. [Google Scholar] [CrossRef]
  28. Kim, B.-H.; Lee, H.; Choi, K.S.; Nam, J.G.; Park, C.-K.; Park, S.-H.; Chung, J.W.; Choi, S.H. Validation of MRI-Based Models to Predict MGMT Promoter Methylation in Gliomas: BraTS 2021 Radiogenomics Challenge. Cancers 2022, 14, 4827. [Google Scholar] [CrossRef] [PubMed]
  29. Larsson, W.; Aspelin, P.; Bergquist, M.; Hillergård, K.; Jacobsson, B.; Lindsköld, L.; Wallberg, J.; Lundberg, N. The effects of PACS on radiographer’s work practice. Radiography 2007, 13, 235–240. [Google Scholar] [CrossRef]
  30. Li, X.; Morgan, P.S.; Ashburner, J.; Smith, J.; Rorden, C. The first step for neuroimaging data analysis: DICOM to NIfTI conversion. J. Neurosci. Methods 2016, 264, 47–56. [Google Scholar] [CrossRef] [PubMed]
  31. Atkinson, D. Geometry in Medical Imaging: DICOM and NIfTI Formats. Zenodo 2022. [Google Scholar] [CrossRef]
  32. Watts, J.; Box, G.; Galvin, A.; Brotchie, P.; Trost, N.; Sutherland, T. Magnetic resonance imaging of meningiomas: A pictorial review. Insights Imaging 2014, 5, 113–122. [Google Scholar] [CrossRef] [PubMed]
  33. Hallinan, J.T.P.D.; Hegde, A.N.; Lim, W.E.H. Dilemmas and diagnostic difficulties in meningioma. Clin. Radiol. 2013, 68, 837–844. [Google Scholar] [CrossRef] [PubMed]
  34. Ginsberg, L.E. Radiology of meningiomas. J. Neurooncol. 1996, 29, 229–238. [Google Scholar] [CrossRef] [PubMed]
  35. Li, X.; Lu, Y.; Xiong, J.; Wang, D.; She, D.; Kuai, X.; Geng, D.; Yin, B. Presurgical differentiation between malignant haemangiopericytoma and angiomatous meningioma by a radiomics approach based on texture analysis. J. Neuroradiol. 2019, 46, 281–287. [Google Scholar] [CrossRef] [PubMed]
  36. Saad, A.F.; Chaudhari, R.; Fischbein, N.J.; Wintermark, M. Intracranial Hemorrhage Imaging. Semin. Ultrasound CT MRI 2018, 39, 441–456. [Google Scholar] [CrossRef] [PubMed]
  37. Provenzale, J.M.; Hacein-Bey, L. CT evaluation of subarachnoid hemorrhage: A practical review for the radiologist interpreting emergency room studies. Emerg. Radiol. 2009, 16, 441. [Google Scholar] [CrossRef] [PubMed]
  38. Abd-Elghany, A.A.; Naji, A.A.; Alonazi, B.; Aldosary, H.; Alsufayan, M.A.; Alnasser, M.; Mohammad, E.A.; Mahmoud, M.Z. Radiological characteristics of glioblastoma multiforme using CT and MRI examination. J. Radiat. Res. Appl. Sci. 2019, 12, 289–293. [Google Scholar] [CrossRef]
  39. Shukla, G.; Alexander, G.S.; Bakas, S.; Nikam, R.; Talekar, K.; Palmer, J.D.; Shi, W. Advanced magnetic resonance imaging in glioblastoma: A review. Chin. Clin. Oncol. 2017, 6, 40. [Google Scholar] [CrossRef] [PubMed]
  40. Fournel, J.; Bartoli, A.; Bendahan, D.; Guye, M.; Bernard, M.; Rauseo, E.; Khanji, M.Y.; Petersen, S.E.; Jacquier, A.; Ghattas, B. Medical image segmentation automatic quality control: A multi-dimensional approach. Med. Image Anal. 2021, 74, 102213. [Google Scholar] [CrossRef] [PubMed]
  41. Monereo-Sánchez, J.; de Jong, J.J.; Drenthen, G.S.; Beran, M.; Backes, W.H.; Stehouwer, C.D.; Schram, M.T.; Linden, D.E.; Jansen, J.F. Quality control strategies for brain MRI segmentation and parcellation: Practical approaches and recommendations—Insights from the Maastricht study. NeuroImage 2021, 237, 118174. [Google Scholar] [CrossRef] [PubMed]
  42. Lebovitz, S.; Levina, N.; Lifshitz-Assaf, H. Is AI Ground Truth Really True? The Dangers of Training and Evaluating AI Tools Based on Experts’ Know-What. MIS Q. 2021, 45, 1501–1526. [Google Scholar] [CrossRef]
  43. Eelbode, T.; Bertels, J.; Berman, M.; Vandermeulen, D.; Maes, F.; Bisschops, R.; Blaschko, M.B. Optimization for Medical Image Segmentation: Theory and Practice When Evaluating With Dice Score or Jaccard Index. IEEE Trans. Med. Imaging 2020, 39, 3679–3690. [Google Scholar] [CrossRef] [PubMed]
  44. McCague, C.; Ramlee, S.; Reinius, M.; Selby, I.; Hulse, D.; Piyatissa, P.; Bura, V.; Crispin-Ortuzar, M.; Sala, E.; Woitek, R. Introduction to radiomics for a clinical audience. Clin. Radiol. 2023, 78, 83–98. [Google Scholar] [CrossRef]
  45. Wu, G.; Chen, Y.; Wang, Y.; Yu, J.; Lv, X.; Ju, X.; Shi, Z.; Chen, L.; Chen, Z. Sparse Representation-Based Radiomics for the Diagnosis of Brain Tumors. IEEE Trans. Med. Imaging 2018, 37, 893–905. [Google Scholar] [CrossRef] [PubMed]
  46. Chen, C.; Ou, X.; Wang, J.; Guo, W.; Ma, X. Radiomics-Based Machine Learning in Differentiation Between Glioblastoma and Metastatic Brain Tumors. Front. Oncol. 2019, 9, 806. [Google Scholar] [CrossRef] [PubMed]
  47. Razek, A.A.K.A.; Alksas, A.; Shehata, M.; AbdelKhalek, A.; Baky, K.A.; El-Baz, A.; Helmy, E. Clinical applications of artificial intelligence and radiomics in neuro-oncology imaging. Insights Imaging 2021, 12, 152. [Google Scholar] [CrossRef] [PubMed]
  48. Bauer, S.; Wiest, R.; Nolte, L.P.; Reyes, M. A survey of MRI-based medical image analysis for brain tumor studies. Phys. Med. Biol. 2013, 58, R97. [Google Scholar] [CrossRef] [PubMed]
  49. Kaus, M.R.; Warfield, S.K.; Nabavi, A.; Black, P.M.; Jolesz, F.A.; Kikinis, R. Automated Segmentation of MR Images of Brain Tumors. Radiology 2001, 218, 586–591. [Google Scholar] [CrossRef] [PubMed]
  50. Wilke, M.; de Haan, B.; Juenger, H.; Karnath, H.-O. Manual, semi-automated, and automated delineation of chronic brain lesions: A comparison of methods. NeuroImage 2011, 56, 2038–2046. [Google Scholar] [CrossRef] [PubMed]
  51. Laukamp, K.R.; Pennig, L.; Thiele, F.; Reimer, R.; Görtz, L.; Shakirin, G.; Zopfs, D.; Timmer, M.; Perkuhn, M.; Borggrefe, J. Automated Meningioma Segmentation in Multiparametric MRI. Clin. Neuroradiol. 2021, 31, 357–366. [Google Scholar] [CrossRef] [PubMed]
  52. LaBella, D.; Adewole, M.; Alonso-Basanta, M.; Altes, T.; Anwar, S.M.; Baid, U.; Bergquist, T.; Bhalerao, R.; Chen, S.; Chung, V.; et al. The ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2023: Intracranial Meningioma. arXiv 2023, arXiv:2305.07642. [Google Scholar] [CrossRef]
  53. LaBella, D.; Khanna, O.; McBurney-Lin, S.; Mclean, R.; Nedelec, P.; Rashid, A.S.; Tahon, N.H.; Altes, T.; Baid, U.; Bhalerao, R.; et al. A multi-institutional meningioma MRI dataset for automated multi-sequence image segmentation. Sci. Data 2024, 11, 496. [Google Scholar] [CrossRef] [PubMed]
  54. Barros, R.S.; van der Steen, W.E.; Boers, A.M.; Zijlstra, I.; Berg, R.v.D.; El Youssoufi, W.; Urwald, A.; Verbaan, D.; Vandertop, P.; Majoie, C.; et al. Automated segmentation of subarachnoid hemorrhages with convolutional neural networks. Inform. Med. Unlocked 2020, 19, 100321. [Google Scholar] [CrossRef]
  55. Akkus, Z.; Galimzianova, A.; Hoogi, A.; Rubin, D.L.; Erickson, B.J. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J. Digit. Imaging 2017, 30, 449–459. [Google Scholar] [CrossRef] [PubMed]
  56. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). Off. J. Eur. Union 2016, L119. [Google Scholar]
  57. Avrin, D. HIPAA privacy and DICOM anonymization for research. Acad. Radiol. 2008, 15, 273. [Google Scholar] [CrossRef] [PubMed]
  58. Lotan, E.; Tschider, C.; Sodickson, D.K.; Caplan, A.L.; Bruno, M.; Zhang, B.; Lui, Y.W. Medical Imaging and Privacy in the Era of Artificial Intelligence: Myth, Fallacy, and the Future. J. Am. Coll. Radiol. JACR 2020, 17, 1159–1162. [Google Scholar] [CrossRef] [PubMed]
  59. Yang, Y.; Lyu, J.; Wang, R.; Wen, Q.; Zhao, L.; Chen, W.; Bi, S.; Meng, J.; Mao, K.; Xiao, Y.; et al. A digital mask to safeguard patient privacy. Nat. Med. 2022, 28, 1883–1892. [Google Scholar] [CrossRef] [PubMed]
  60. Planells, H.; Parmar, V.; Marcus, H.J.; Pandit, A.S. From theory to practice: What is the potential of artificial intelligence in the future of neurosurgery? Expert. Rev. Neurother. 2023, 23, 1041–1046. [Google Scholar] [CrossRef] [PubMed]
Figure 1. An example image-processing pipeline with image acquisition, pre-processing, segmentation and post-processing stages. (Single column with colour in print).
Figure 2. Manual segmentation of a convexity meningioma. (A) Original MRI. (B) Manual delineation of meningioma outline. (C) Interpolation of lesion through various slices. (Single column with colour in print).
Figure 3. Semi-automated segmentation of SAH. (A) Original CT. (B) Manual labelling of different brain tissues, i.e., classification. Red represents cerebrospinal fluid, green represents bone, blue represents brain parenchyma and yellow represents subarachnoid haemorrhage. (C) Evolution of contours. (D) Final segmentation at different levels after manual inspection and editing. (Single column with colour in print).
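The seed-then-evolve workflow shown in Figure 3 can be mimicked with a crude seeded region grower: after manually marking a seed inside the haemorrhage, the region expands to connected voxels within an intensity band. This is a minimal illustrative sketch in NumPy, not the level-set algorithm ITK-SNAP actually uses; the Hounsfield values and the synthetic volume are assumptions chosen to match the 15–25 HU blood–parenchyma contrast quoted in Table 1.

```python
from collections import deque

import numpy as np

def region_grow(image, seed, lo, hi):
    """Grow a region from `seed`, accepting 6-connected voxels whose
    intensity lies within [lo, hi] -- a crude stand-in for the contour
    evolution performed after seed placement."""
    mask = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        idx = queue.popleft()
        if mask[idx] or not (lo <= image[idx] <= hi):
            continue
        mask[idx] = True
        # Visit the six face-adjacent neighbours.
        for axis in range(image.ndim):
            for step in (-1, 1):
                nxt = list(idx)
                nxt[axis] += step
                if 0 <= nxt[axis] < image.shape[axis]:
                    queue.append(tuple(nxt))
    return mask

# Synthetic "CT": parenchyma at 35 HU with a 60 HU blob standing in
# for acute blood (roughly 25 HU denser, as per Table 1).
ct = np.full((20, 20, 20), 35.0)
ct[8:12, 8:12, 8:12] = 60.0
sah_mask = region_grow(ct, seed=(10, 10, 10), lo=50.0, hi=80.0)
print(sah_mask.sum())  # 4 x 4 x 4 = 64 voxels captured
```

On real scans this naive approach leaks through any connected bridge of similar intensity, which is why the final step in Figure 3 (manual inspection and editing) remains essential.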
Figure 4. Semi-automated segmentation of a glioblastoma. (A) Original MRI. (B) Capturing the extent of GBM through classification. Red represents glioblastoma, green represents cerebral oedema, blue represents brain parenchyma and yellow represents cerebrospinal fluid. (C) Derivation of 3D lesion volume. (Single column with colour in print).
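The final step in Figure 4, deriving a 3D lesion volume, reduces to counting labelled voxels and multiplying by the physical volume of one voxel. A minimal sketch, assuming a hypothetical binary label mask and voxel spacing read from the image header:

```python
import numpy as np

# Hypothetical 3D label mask (1 = glioblastoma, 0 = background) and
# voxel spacing in millimetres, as would be read from the image header.
label_mask = np.zeros((10, 10, 10), dtype=np.uint8)
label_mask[2:6, 2:6, 2:6] = 1          # a 4 x 4 x 4-voxel "lesion"
voxel_spacing_mm = (1.0, 1.0, 1.0)     # assumed isotropic 1 mm voxels

# Volume = number of labelled voxels x volume of a single voxel.
voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
lesion_volume_mm3 = int(label_mask.sum()) * voxel_volume_mm3
lesion_volume_ml = lesion_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

print(lesion_volume_ml)  # 64 voxels x 1 mm^3 = 0.064 mL
```

Note that anisotropic acquisitions (e.g. thick-slice clinical MRI) make the per-axis spacing matter; using a single nominal spacing for all axes would misreport the volume.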
Table 1. Key imaging sequences and radiological features of meningioma, glioblastoma and subarachnoid haemorrhage. CT: computed tomography, FLAIR: Fluid Attenuated Inversion Recovery. T1w: T1 weighted. T2w: T2 weighted. TIRM: Turbo Inversion Recovery Magnitude.
Lesion: Meningioma
Sequences for segmentation: T1w; T1w + contrast; T2w
Radiological features: Meningiomas are isointense to slightly hypointense on T1-weighted sequences and isointense to slightly hyperintense on T2-weighted sequences [32]. The two basic morphologies of meningioma are en plaque, with a sheet-like dural extension, and globose, with a broad dural attachment [33]. The thickened extended dura (commonly referred to as a dural tail) tends to extend away from the meningioma and can easily be missed [34]. Bone changes may be visible, such as hyperostosis, osteolysis, enlargement of the skull-base foramina and meningioma calcification [35].

Lesion: Subarachnoid haemorrhage (SAH)
Sequences for segmentation: CT non-contrast
Radiological features: Acute haemorrhage appears 15–25 Hounsfield units (HU) denser than normal grey and white matter on a CT scan [36]. Anatomically, SAH is typically found in the interpeduncular cistern, the Sylvian fissure, the occipital horns of the lateral ventricles and the deep sulci on either side of the medial longitudinal fissure [37].

Lesion: Glioblastoma (GBM)
Sequences for segmentation: T1w; T1w + contrast; T2w; T2 FLAIR/TIRM
Radiological features: GBMs are generally hyperintense on T2-weighted images but hypo- or isointense on T1-weighted images [38]. GBMs often have enhancing and non-enhancing components. Necrosis is typically visible as low signal intensity (SI) on contrast-enhanced T1 MRI and is located at the centre of the lesion [39]. Cystic components of a GBM are typically T2w hyperintense and T1 hypointense, with a well-defined thin wall. There can also be an area of oedema surrounding the tumour that is visible on T2 FLAIR scans [38].
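The HU rule in the SAH row of Table 1 translates directly into a global intensity threshold, which is often the first step before any seeding or contour refinement. The sketch below applies it to a tiny synthetic CT slice; the parenchymal mean of 35 HU and the slice values are illustrative assumptions, not calibrated measurements.

```python
import numpy as np

# Tiny synthetic CT slice in Hounsfield units: background parenchyma
# around 35 HU, with a few brighter voxels standing in for acute blood.
ct_slice = np.array([[35.0, 36.0, 58.0],
                     [34.0, 55.0, 60.0],
                     [33.0, 35.0, 37.0]])

parenchyma_mean = 35.0  # assumed mean brain attenuation (HU)

# Flag voxels at least 15 HU denser than parenchyma, the lower bound
# of the 15-25 HU margin quoted for acute blood in Table 1.
candidate_blood = ct_slice >= parenchyma_mean + 15.0

print(candidate_blood.sum())  # 3 voxels exceed the threshold
```

Bone (typically hundreds of HU) would also pass such a threshold, so in practice a skull mask or an upper HU bound is needed before this map is useful, mirroring the separate bone label used in Figure 3.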
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Jain, R.; Lee, F.; Luo, N.; Hyare, H.; Pandit, A.S. A Practical Guide to Manual and Semi-Automated Neurosurgical Brain Lesion Segmentation. NeuroSci 2024, 5, 265-275. https://doi.org/10.3390/neurosci5030021

