Data
  • Data Descriptor
  • Open Access

1 February 2020

Intracranial Hemorrhage Segmentation Using a Deep Convolutional Model

1. Department of Computer and Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA
2. Computer Engineering Department, University of Technology, Baghdad 10001, Iraq
3. Babylon Health Directorate, Babil 51001, Iraq
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Benchmarking Datasets in Bioinformatics

Abstract

Traumatic brain injuries may cause intracranial hemorrhage (ICH). ICH may lead to disability or death if it is not accurately diagnosed and treated in a time-sensitive manner. The current clinical protocol to diagnose ICH is to have radiologists examine Computerized Tomography (CT) scans to detect ICH and localize its regions. However, this process relies heavily on the availability of an experienced radiologist. In this paper, we designed a study protocol to collect a dataset of 82 CT scans of subjects with a traumatic brain injury. Next, the ICH regions were manually delineated in each slice by a consensus decision of two radiologists. The dataset is publicly available online at the PhysioNet repository for future analysis and comparisons. In addition to publishing the dataset, which is the main purpose of this manuscript, we implemented a deep Fully Convolutional Network (FCN), known as U-Net, to segment the ICH regions from the CT scans in a fully automated manner. As a proof of concept, the method achieved a Dice coefficient of 0.31 for the ICH segmentation based on 5-fold cross-validation.
Dataset: https://physionet.org/content/ct-ich/1.3.0/, doi:10.13026/w8q8-ky94.
Dataset License: Creative Commons Attribution 4.0 International Public License.

1. Introduction

Traumatic brain injury (TBI) is a major cause of death and disability in the United States; it contributed to about 30% of all injury deaths in 2013 [1]. After accidents with TBI, extra-axial intracranial lesions, such as intracranial hemorrhage (ICH), may occur. ICH is a critical medical lesion that results in a high rate of mortality [2]. It is considered clinically dangerous because of its high risk of turning into a secondary brain injury that may lead to paralysis and even death if it is not treated in a time-sensitive procedure. Depending on its location in the brain, ICH is divided into five sub-types: Intraventricular (IVH), Intraparenchymal (IPH), Subarachnoid (SAH), Epidural (EDH) and Subdural (SDH). In addition, an ICH that occurs within the brain tissue is called an intracerebral hemorrhage.
The Computerized Tomography (CT) scan is commonly used in the emergency evaluation of subjects with TBI for ICH [3]. The availability of the CT scan and its rapid acquisition time make it a preferred diagnostic tool over Magnetic Resonance Imaging for the initial assessment of ICH. CT scanners generate a sequence of images using X-ray beams, where brain tissues are captured with different intensities depending on their X-ray absorbency, measured in Hounsfield units (HU). CT scans are displayed using a windowing method, which transforms the HU numbers into grayscale values ([0, 255]) according to the window level and width parameters. By selecting different window parameters, different features of the brain tissues are displayed in the grayscale image (e.g., brain window, stroke window, and bone window) [4]. In CT images displayed using the brain window, the ICH regions appear as hyperdense regions with a relatively undefined structure. These CT images are examined by an expert radiologist to determine whether an ICH has occurred and, if so, to detect its type and region. However, this diagnosis process relies on the availability of a subspecialty-trained neuroradiologist and, as a result, could be time-inefficient and even inaccurate, especially in remote areas where specialized care is scarce.
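The windowing method described above is a simple clip-and-rescale of the HU values; a minimal sketch (the function name and defaults are ours, using the brain-window parameters reported later in this paper):

```python
import numpy as np

def apply_window(hu_image, level=40, width=120):
    """Map Hounsfield units to [0, 255] grayscale with a window level/width.

    level=40, width=120 is the brain window used in this dataset;
    the bone window would be level=700, width=3200.
    """
    lo = level - width / 2.0
    hi = level + width / 2.0
    windowed = np.clip(hu_image, lo, hi)
    # Linearly rescale the clipped HU range to 8-bit grayscale.
    return ((windowed - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

Values below the window floor map to 0 and values above the ceiling map to 255, which is why bone appears uniformly white in the brain window.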
Recent advances in convolutional neural networks (CNNs) have demonstrated excellent performance in automating multiple image classification and segmentation tasks [5]. Hence, we hypothesized that deep learning algorithms have the potential to automate the procedure of ICH detection and segmentation. We implemented a fully convolutional network (FCN), known as U-Net [6], to segment the ICH regions in each CT slice. An automated tool for ICH detection and classification can be used to assist junior radiology trainees when experts are not immediately available in emergency rooms, especially in developing countries or remote areas. Such a tool can significantly reduce the time and error in the ICH diagnosis.
Furthermore, there are two publicly available datasets for ICH classification, but no publicly available dataset for ICH segmentation. The first public dataset, called CQ500, consists of 491 head CT scans [7]; the second was published in September 2019 for the RSNA challenge at Kaggle and consists of over 25k CT scans. Arbabshirani et al. and Lee et al. indicated the availability of their data upon request [8,9]. Many papers proposed ICH segmentation approaches in addition to ICH detection and classification. However, many of these approaches were not validated due to the lack of public or private datasets with ICH masks [10,11,12,13,14], and the other approaches were validated on private datasets that have different characteristics, such as the number of CT scans and the diagnosed ICH types [9,15,16,17,18,19,20,21,22,23,24,25]. With these differences, an objective comparison between the different approaches is not feasible. Hence, there is a need for a dataset that can help to benchmark and extend the work in ICH segmentation. Therefore, the main focus of this work was collecting head CT scans with ICH segmentation masks and making them publicly available. We also performed a comprehensive literature review in the area of ICH detection and segmentation. Our contributions to fill the gap in knowledge are listed in Table 1.
Table 1. Contributions of this paper.
The paper is organized as follows. Section 2 provides a review of the ICH detection and segmentation methods proposed in the literature. Section 3 describes the study used to collect a dataset of CT scans, and the deep learning method used to perform the ICH segmentation. Section 4 describes three experiments that were performed using the proposed method and provides the results. We discuss the results in Section 5. The paper is concluded in Section 6.

3. Methods

This section first describes the dataset collection and annotation protocol, then it describes the FCN implemented in this work.

3.1. Dataset

A retrospective study was designed to collect head CT scans of subjects with TBI. The study was approved by the research and ethics board of the Iraqi Ministry of Health, Babil Office. The CT scans were collected between February and August 2018 from Al Hilla Teaching Hospital, Iraq. The CT scanner was a Siemens SOMATOM Definition AS with an isotropic in-plane resolution of 0.33 mm, a tube voltage of 100 kV, and a slice thickness of 5 mm. The information of each subject was anonymized. A total of 82 subjects (46 male) with an average age of 27.8 ± 19.5 years were included in this study (refer to Table 4 for the subject demographics). Each CT scan includes about 34 slices on average. Two radiologists annotated the non-contrast CT scans and recorded the ICH sub-types if an ICH was diagnosed. The two expert radiologists reviewed the non-contrast CT scans together, at the same time, to reduce the effort and time of the ICH segmentation process. Once they reached a consensus on the ICH diagnosis, which covered the presence of ICH as well as its shape and location, the delineation of the ICH regions was performed. The radiologists did not have access to the clinical history of the subjects.
Table 4. Subject demographics.
During the data collection process, Syngo by Siemens Medical Solutions was first used to read the CT DICOM files and save two videos (AVI format): one after windowing using the brain window (level = 40, width = 120) and one using the bone window (level = 700, width = 3200). Second, a custom tool was implemented in Matlab and used to record the radiologist annotations and to delineate the ICH regions. The gray-scale 650 × 650 images (JPG format) for each CT slice were also saved for both the brain and bone windows (Figure S1). The raw CT scans in DICOM format were anonymized and converted directly to NIfTI using the NiBabel library in Python. Likewise, the segmentation masks were saved as NIfTI files.
Out of the 82 subjects, 36 were diagnosed with an ICH with the following types: IVH, IPH, SAH, EDH, and SDH. See Figure 2 for some examples. One of the CT scans had a chronic ICH, and it was excluded from this study. Table 5 shows the number of slices with and without an ICH as well as the numbers with different ICH sub-types. It is important to note that the number of the CT slices for each ICH sub-type in this dataset is not balanced as the majority of the CT slices do not have an ICH. Besides that, the IVH was only diagnosed in five subjects and the SDH hemorrhage in only four subjects. Some slices were annotated with two or more ICH sub-types.
Figure 2. Samples from the dataset that show the different types of ICH (Intraventricular (IVH), Intraparenchymal (IPH), Subarachnoid (SAH), Epidural (EDH) and Subdural (SDH)).
Table 5. The number of slices with and without an ICH as well as different ICH sub-types.
As shown in Table 3, an average of only about 60 CT scans were used to test the ICH segmentation methods in the literature, so the 82 CT scans in our dataset constitute a comparable (and even larger) dataset. However, more CT scans with ICH masks are still preferable. The dataset, including both the CT scans and the ICH masks, was released in JPG and NIfTI formats at PhysioNet (https://physionet.org/content/ct-ich/1.3.0/), which is a repository of freely available medical research data [26,27]. The license is the Creative Commons Attribution 4.0 International Public License.

3.2. ICH Segmentation Using U-Net

A Fully Convolutional Network (FCN) is an end-to-end, one-stage algorithm used for semantic segmentation. Recently, FCNs have achieved state-of-the-art performance in many applications involving the delineation of objects. U-Net, developed by Ronneberger et al., is a type of FCN [6]. For biomedical image segmentation, U-Net was shown to be effective on small training datasets [6], which motivated us to use it for the ICH segmentation in our study. In this work, we investigated the first application of U-Net to ICH segmentation. The architecture of U-Net is illustrated in Figure 3.
Figure 3. The U-Net architecture implemented in this study. A sliding window of size 160 × 160 with a stride of 80 pixels was used to divide each CT slice into 49 windows before feeding them to the U-Net for the ICH segmentation.
The architecture is symmetrical because it is built upon two paths: a contracting path and an expansive path. In the contracting path, four blocks of typical convolutional-network components are used. Each block consists of two 3 × 3 convolutional layers with padding, each followed by a rectified linear unit (ReLU), and then a 2 × 2 max-pooling layer. The expansive path also contains four blocks, each consisting of two 3 × 3 convolutional layers followed by ReLUs. Each block is preceded by upsampling of the feature maps followed by a 2 × 2 convolution (up-convolution), the output of which is concatenated with the corresponding cropped feature map from the contracting path. These skip connections between the two paths provide the local, fine-grained spatial information alongside the global information while upsampling, enabling precise localization. After the last block in the expansive path, the feature maps are first filtered using two 3 × 3 convolutional filters to produce two images: one for the ICH regions and one for the background. The final stage is a 1 × 1 convolutional filter with a sigmoid activation layer that produces the ICH probability at each pixel. In summary, the network has 24 convolutional layers, four max-pooling layers, four upsampling layers, and four concatenations. No dense layer is used in this architecture, in order to reduce the number of parameters and the computation time.
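The architecture described above can be sketched in Keras (the library reported in Section 4). The number of filters per block (a base of 64 doubling at each level) and the use of padded convolutions, which make cropping unnecessary, are our assumptions; the paper does not state them:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 padded convolutions, each followed by a ReLU.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(160, 160, 1), base=64):
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []
    # Contracting path: four blocks, each followed by 2x2 max-pooling.
    for i in range(4):
        x = conv_block(x, base * 2 ** i)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base * 16)  # bottleneck
    # Expansive path: upsample, 2x2 up-convolution, skip concatenation, conv block.
    for i in reversed(range(4)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(base * 2 ** i, 2, padding="same", activation="relu")(x)
        x = layers.Concatenate()([skips[i], x])
        x = conv_block(x, base * 2 ** i)
    # Two 3x3 filters produce the ICH/background maps; a 1x1 convolution
    # with a sigmoid yields the per-pixel ICH probability.
    x = layers.Conv2D(2, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    model = Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="binary_crossentropy")
    return model
```

Counting the layers in this sketch gives 8 contracting, 2 bottleneck, 4 up-convolutions, 8 expansive, and the final 3 × 3 and 1 × 1 filters: 24 convolutional layers in total, matching the summary above.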

4. Results

We did not perform any preprocessing on the original CT slices, except for removing 5 pixels from the image borders, which contained only black regions with no important information. This process resulted in 640 × 640 CT slices. We performed three experiments to validate the performance of U-Net and compare it with a simple threshold-based method.
In the first experiment, a grid search was implemented to select the lower and upper thresholds of the ICH regions. The thresholds that resulted in the highest Jaccard index on the training data were selected and used in the testing procedure. Using U-Net with the full 640 × 640 CT slices is expected to bias the model toward the negative class (i.e., the non-ICH class) because only a small number of pixels belong to the positive class (i.e., the ICH class) in each CT scan. In the second experiment, we investigated this effect by training and testing U-Net on the full 640 × 640 CT slices. For the same reason, Kuo et al. [20] used 160 × 160 windows instead of the entire CT slice and achieved a more precise model. This approach can also balance the training data by undersampling the negative windows. Therefore, in the third experiment, each slice from the CT scan was first divided using a 160 × 160 window with a stride of 80 pixels. This process resulted in 49 overlapping windows of size 160 × 160, which were then passed through U-Net for the ICH segmentation. Next, the 160 × 160 masks of each CT scan were mapped back to their original spatial positions on the original CT slice. The overlapping masks were then averaged to produce the full 640 × 640 ICH masks. This process yielded four predictions for every pixel in the CT slice, except for those at the edges and corners, where two and one predictions were made, respectively. The average of all the predictions at every pixel provided the final prediction. Finally, two consecutive morphological operations were performed on the ICH masks: closing and opening. The closing operation fills in the gaps in the ICH regions, and the opening operation removes outliers and non-ICH regions.
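The window split-and-average procedure above can be sketched as follows (function names are ours). Merging divides the summed window predictions by the per-pixel window counts, which reproduces exactly the 4/2/1-prediction averaging for interior, edge, and corner pixels:

```python
import numpy as np

def split_windows(slice_img, win=160, stride=80):
    """Divide a 640x640 CT slice into 49 overlapping 160x160 windows."""
    starts = range(0, slice_img.shape[0] - win + 1, stride)  # 0, 80, ..., 480
    return np.array([slice_img[r:r + win, c:c + win]
                     for r in starts for c in starts])

def merge_windows(windows, size=640, win=160, stride=80):
    """Map window predictions back to the slice and average the overlaps."""
    total = np.zeros((size, size))
    counts = np.zeros((size, size))
    starts = list(range(0, size - win + 1, stride))
    k = 0
    for r in starts:
        for c in starts:
            total[r:r + win, c:c + win] += windows[k]
            counts[r:r + win, c:c + win] += 1
            k += 1
    # counts is 4 in the interior, 2 at edges, and 1 at corners.
    return total / counts
```

Splitting a slice and merging the unchanged windows returns the original slice, which is a convenient sanity check for the bookkeeping.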
For the evaluation, we used the slice-level Jaccard index (Equation (1)) and Dice similarity coefficient (Equation (2)) to quantify how well the model segmentation on each CT slice fits the gold-standard segmentation. In Equations (1) and (2), $R_{ICH}$ refers to the radiologists' segmentation and $\hat{R}_{ICH}$ to the segmentation produced by U-Net.
$$\mathrm{Jaccard\ Index} = \frac{|R_{ICH} \cap \hat{R}_{ICH}|}{|R_{ICH} \cup \hat{R}_{ICH}|} \qquad (1)$$
$$\mathrm{Dice} = \frac{2\,|R_{ICH} \cap \hat{R}_{ICH}|}{|R_{ICH}| + |\hat{R}_{ICH}|} \qquad (2)$$
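Equations (1) and (2) translate directly into operations on binary masks; a sketch with our own function names (the convention of returning 1.0 for two empty masks is our choice):

```python
import numpy as np

def jaccard_index(gt_mask, pred_mask):
    """Equation (1): intersection over union of the two binary masks."""
    gt, pred = gt_mask.astype(bool), pred_mask.astype(bool)
    union = np.logical_or(gt, pred).sum()
    return np.logical_and(gt, pred).sum() / union if union else 1.0

def dice_coefficient(gt_mask, pred_mask):
    """Equation (2): twice the intersection over the sum of mask sizes."""
    gt, pred = gt_mask.astype(bool), pred_mask.astype(bool)
    total = gt.sum() + pred.sum()
    return 2.0 * np.logical_and(gt, pred).sum() / total if total else 1.0
```

The two metrics are monotonically related (Dice = 2J/(1+J)), which is why the reported Jaccard index of 0.21 and Dice coefficient of 0.31 are consistent.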
Subject-based, 5-fold cross-validation (at the patient level) was used to train, validate, and test the developed model for all the experiments. For the first experiment, a grid search was implemented to select a lower threshold in the 100 to 210 range and an upper threshold in the 210 to 255 range. This process resulted in thresholds of 140 and 230 with a testing Jaccard index of 0.08 and Dice coefficient of 0.135.
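The threshold baseline can be sketched as an exhaustive search over lower and upper grayscale thresholds within the stated ranges, scored by the mean training Jaccard index; the grid step sizes and function names are our assumptions:

```python
import numpy as np

def threshold_segment(img, lower, upper):
    """Baseline: label a pixel as ICH if its grayscale value is in [lower, upper]."""
    return (img >= lower) & (img <= upper)

def grid_search_thresholds(images, masks,
                           lower_grid=range(100, 211, 10),
                           upper_grid=range(210, 256, 5)):
    """Return the (lower, upper) pair maximizing the mean Jaccard index."""
    def jaccard(gt, pred):
        union = np.logical_or(gt, pred).sum()
        return np.logical_and(gt, pred).sum() / union if union else 1.0
    best, best_score = None, -1.0
    for lo in lower_grid:
        for hi in upper_grid:
            score = np.mean([jaccard(m, threshold_segment(im, lo, hi))
                             for im, m in zip(images, masks)])
            if score > best_score:
                best, best_score = (lo, hi), score
    return best, best_score
```

The selected pair is then applied unchanged to the held-out test fold, mirroring the procedure described above.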
During the second and third experiments, we implemented the U-Net architecture, illustrated in Figure 3, in the Python environment using the Keras library with TensorFlow as the backend [33]. The shape of the input image was 640 × 640 in the second experiment and 160 × 160 in the third experiment. The 640 × 640 CT slices or the 160 × 160 windows and their corresponding segmentation masks were used to train the network in each experiment. In our dataset, 36 out of 82 subjects were diagnosed with ICH, resulting in only 318 ICH slices out of 2491 (i.e., about 13% of the slices). In order to avoid class-imbalance issues between data with and without ICH, we applied random undersampling to the training data and reduced the number of non-ICH samples to the same level as the data with ICH. At every cross-validation iteration, one fold of the CT scans was left out as a held-out set for testing, one fold was used for validation, and three folds were used for training. U-Net was trained for 150 epochs on the 640 × 640 CT slices or 160 × 160 windows and their corresponding segmentation masks. For our implementation, we used a GeForce RTX 2080 GPU with 11 GB of memory. The training took approximately 5 hours in each cross-validation iteration. During the training, at each iteration, random slices were selected from the training data, and data augmentation was performed randomly from the following linear transformations:
  • Rotation with a maximum of 20 degrees
  • Width shift with a maximum of 10% of the image width
  • Height shift with a maximum of 10% of the image height
  • Shear with a maximum intensity of 0.1
  • Zoom with a maximum range of 0.2
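The paper does not say how these transformations were implemented; one way to sketch them is with scipy.ndimage, drawing one set of random parameters and applying it identically to a slice and its mask so the pair stays aligned (the function name, interpolation orders, and the composition of shear and zoom into one affine are our choices):

```python
import numpy as np
from scipy import ndimage

def augment_pair(img, mask, rng):
    """Apply randomly drawn linear transformations, identically to a CT
    slice and its segmentation mask (illustrative, not the paper's code)."""
    angle = rng.uniform(-20, 20)                       # rotation, degrees
    dy = rng.uniform(-0.1, 0.1) * img.shape[0]         # height shift
    dx = rng.uniform(-0.1, 0.1) * img.shape[1]         # width shift
    shear = rng.uniform(-0.1, 0.1)                     # shear intensity
    zoom = 1.0 + rng.uniform(-0.2, 0.2)                # zoom range
    # Compose shear and zoom into one affine matrix; rotation and shift
    # use dedicated scipy.ndimage routines.
    affine = np.array([[zoom, shear], [0.0, zoom]])
    out = []
    for a, order in ((img, 1), (mask, 0)):  # nearest-neighbour for the mask
        a = ndimage.rotate(a, angle, reshape=False, order=order)
        a = ndimage.shift(a, (dy, dx), order=order)
        a = ndimage.affine_transform(a, affine, order=order)
        out.append(a)
    return out[0], out[1]
```

Using interpolation order 0 for the mask keeps it strictly binary after every transformation.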
The dataset includes subjects with a wide age range, which implies the presence of a wide range of head shapes and sizes. To account for such variability, we used the zooming and shearing transformations for data augmentation. The head orientation can also differ from subject to subject; hence, rotation, as well as width and height shifts, were applied to increase the model's generalizability. These linear transformations yield valid CT slices, as would be present in real CT data. It is worth mentioning that non-linear deformations may produce slices that would not be seen in real CT data; as a result, we used only linear transformations in our analysis. In addition, all the subjects entered the CT scanner with their heads facing the same direction, so horizontal flipping would lead to CT slices that would not be generated in the data acquisition process. Therefore, it was not used as an augmentation method.
The Adam optimizer with cross-entropy loss and a learning rate of 1 × 10⁻⁵ was used. A mini-batch size of 2 was used in the second experiment and 32 in the third experiment. The trained model was validated after each epoch. The best trained model, with the highest validation Jaccard index, was saved and used for testing. The training evaluation metric was the average cross-entropy loss.
In the second experiment, with the full CT slices, U-Net failed to detect any ICH regions and produced only black masks (i.e., the negative class with no ICH). Although we used only the CT slices with an ICH, there was only a small number of ICH pixels in the dataset, which biased the model towards the negative class with no ICH. Windowing the CT slices and undersampling the negative windows in the third experiment mitigated this bias. The 5-fold cross-validation of the developed U-Net resulted in a better performance in the third experiment, as shown in Table 6. The testing Jaccard index was 0.21 and the Dice coefficient was 0.31. The slice-level sensitivity was 97.2% and the slice-level specificity was 50.4%. Increasing the threshold on the predicted probability masks yielded a better testing specificity at the expense of the testing sensitivity, as shown in Table 7. Figure 4 provides the segmentation results of the trained U-Net on some test 160 × 160 windows, along with the radiologist delineation of the ICHs. The boundary effect of each predicted 160 × 160 mask was minimal: the boundaries show low probabilities for the non-ICH regions instead of zero, and they were zeroed out after thresholding and performing the morphological operations. The final segmented ICH regions after combining the windows, thresholding, and performing the morphological operations are shown for some CT slices in Figure 5. As shown in this figure, the model matched the radiologist ICH segmentation perfectly in the slices shown on the left side, but there were some false-positive ICH regions in the right-side slices. Note that the CT slice in the bottom-right panel of Figure 5 shows the ending of an EDH region, where it is partially segmented by the model.
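The thresholding and closing/opening postprocessing can be sketched with scipy.ndimage; the 0.5 default threshold matches the baseline operating point varied in Table 7, while the 3 × 3 structuring element is our assumption, as the paper does not report it:

```python
import numpy as np
from scipy import ndimage

def postprocess(prob_mask, threshold=0.5, structure_size=3):
    """Threshold the averaged probability mask, then apply morphological
    closing (fill gaps inside ICH regions) followed by opening (remove
    small false-positive specks)."""
    binary = prob_mask >= threshold
    structure = np.ones((structure_size, structure_size), dtype=bool)
    binary = ndimage.binary_closing(binary, structure=structure)
    binary = ndimage.binary_opening(binary, structure=structure)
    return binary
```

Raising `threshold` trades sensitivity for specificity, which is the effect reported in Table 7.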
Table 6. The testing results of the U-Net model trained on 160 × 160 windows and used for the ICH segmentation.
Table 7. The testing slice-level results of the U-Net model trained on 160 × 160 windows using different thresholds.
Figure 4. Samples from the windows of the testing CT slices are shown on the top. The mask or delineation of the ICH is shown with a red dotted line. The output of U-Net before thresholding and applying the morphological operations is shown on the bottom.
Figure 5. Samples from the testing CT slices along with the radiologist delineation of the ICH (red dotted lines) and the U-Net segmentation (green dotted lines) are provided. A precise match of the U-Net segmentation is shown in the slices on the left side. There are some false-positive regions in the slices on the right side.
The results based on the ICH sub-type showed that U-Net performed best for the SDH segmentation, with a Dice coefficient of 0.52. The average Dice scores for the segmentation of EDH, IVH, IPH, and SAH were 0.35, 0.3, 0.28, and 0.23, respectively. The minimum Dice coefficient and Jaccard index in Table 6 were zero, as U-Net failed to localize the ICH regions in the CT scans of two subjects. One of the subjects had only a small IPH region in one CT slice, and the other had only a small IPH region in two CT slices. The width and height of the IPH regions for these subjects were less than 10 mm, which sets the lower limit of the ICH segmentation by the proposed U-Net architecture. The results based on the subjects' age show that the Dice coefficient for subjects younger than 18 years old is 0.321, and 0.309 for subjects older than 18. This analysis confirms that there is no significant difference in the method's performance between subjects younger and older than 18 years old.

5. Discussion

The U-Net model based on the 160 × 160 windows of the CT slices resulted in a Dice coefficient of 0.31 for the ICH segmentation and a high sensitivity in detecting the ICH regions, and can be considered the baseline for this dataset. This performance is comparable to the deep learning methods in the literature that were trained on small datasets [22,23]. Kuang et al. reported a Dice coefficient of 0.65 when a semi-automatic method based on U-Net and a contour evolution was used for the ICH segmentation, and a Dice coefficient of 0.35 when only U-Net was used [23]. The performance of the U-Net trained in our study is comparable to their results, considering that we used a smaller dataset that had all the ICH sub-types, not only intracerebral hemorrhage. Nag et al. [22] tested an autoencoder and the active-contour Chan-Vese model on a dataset that did not contain any SDH cases and reported an average Jaccard index of 0.55. The autoencoder was trained on half of the dataset, and then the entire dataset was used for testing, which could inflate the average Jaccard index. The other deep learning-based models in [9,20,21,24] were trained and tested on larger datasets and achieved higher performance for the ICH segmentation. Chang et al. [21] reported an average Dice coefficient of 0.85, Lee et al. [9] reported a 78% overlap between the attention maps of their CNN model and the gold-standard bleeding points, Kuo et al. [20] reported 78% average precision, and Cho et al. [24] reported 80.19% precision and 82.15% recall. In addition to the deep learning methods, in the study of Shahangian et al. [18], DRLSE was used for the segmentation of EDH, IPH, and SDH, and Dice coefficients of 0.75, 0.62 and 0.37 were reported for each sub-type, respectively. Our method achieved a higher Dice coefficient of 0.52 in segmenting SDH. Some traditional methods reported better Dice coefficients (0.87 [17], 0.89 [19], and 0.82 [25]) for the ICH segmentation when a small dataset was used.
Regarding the ICH detection, U-Net achieved a slice-level sensitivity of 97.2% and a specificity of 50.4%, which is comparable to the results reported by Yuh et al. [10], when a 0.5 threshold was used. Increasing the threshold to 0.8 resulted in 73.7% sensitivity, 82.4% specificity, and 82.5% accuracy, which is comparable to some methods in the literature that were trained on large datasets [8,29]. In the work of Arbabshirani et al. [8], an ensemble of four 3D CNN models was trained on 10k CT scans and yielded 71.5% sensitivity and 83.5% specificity. In the work of Grewal et al. [29], a deep model based on DenseNet and an RNN achieved 81% accuracy.
One limitation of U-Net was the false-positive segmentation shown in Figure 5, which was the main reason for the method's low Dice coefficient. The false-positive segmentation was more prevalent near the bones, where the intensity in the grayscale image is similar to the intensity of the ICH regions. Another limitation is that the developed U-Net model failed to localize the ICH regions in the CT scans of two subjects who had small IPH regions. Hence, the current method, as it stands, can be used as assistive software for radiologists in the ICH segmentation, but it is not yet precise enough to be used as a standalone segmentation method. Future work can include collecting further CT scans and enhancing U-Net with a recurrent neural network, such as an LSTM network, to consider the relationship between adjacent slices when segmenting the ICH regions. Furthermore, we plan to improve the accuracy of our method by utilizing transfer learning; the publicly available datasets for ICH detection and classification can be used for this purpose.

6. Conclusions

ICH is a critical medical lesion that requires immediate medical attention, or it may turn into a secondary brain injury, which can lead to paralysis or even death. The contribution of this paper is two-fold. First, a new dataset of 82 CT scans was collected and made publicly available online at PhysioNet, to address the need for more publicly available benchmark datasets toward developing reliable techniques for ICH segmentation. Second, a deep learning method for the ICH segmentation was developed as a proof of concept. The developed method was assessed on the collected CT scans with 5-fold cross-validation. It resulted in a Dice coefficient of 0.31, which is comparable to the performance of deep learning methods reported in the literature that were trained on small datasets. The U-Net model developed in this manuscript can be used as add-on software to process the CT scans; the processed CT scans with the potential ICH areas can then be reviewed by radiologists. This preprocessing can help radiologists perform the final segmentation more effectively (with more accuracy) and efficiently (in a shorter time). Moreover, the paper provides a detailed review of the methods proposed for ICH detection and classification as well as segmentation.

Supplementary Materials

The following are available at https://www.mdpi.com/2306-5729/5/1/14/s1, Figure S1: Windowing of the CT scans.

Author Contributions

Conceptualization, M.D.H., M.S.C., A.D.S., H.F.A.-k., and B.G.; data curation, M.D.H., and Z.A.Y.; formal analysis, M.H.; investigation, H.F.A.-k., and B.G.; methodology, M.D.H., M.S.C., A.D.S., H.F.A.-k., and B.G.; resources, H.F.A.-k., Z.A.Y., and B.G.; software, M.D.H.; validation, M.D.H.; writing—original draft, M.D.H. and B.G.; writing—review and editing, M.D.H., M.S.C., A.D.S. and B.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We thank Mohammed Ali for the clinical support, and all the patients who participated in the data collection.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CT: Computerized Tomography
TBI: Traumatic brain injury
ICH: Intracranial hemorrhage
IVH: Intraventricular hemorrhage
IPH: Intraparenchymal hemorrhage
SAH: Subarachnoid hemorrhage
EDH: Epidural hemorrhage
SDH: Subdural hemorrhage
CNN: Convolutional neural networks
RNN: Recurrent neural network
FCN: Fully convolutional networks
LSTM: Long short-term memory network
AUC: Area under the ROC curve

References

  1. Taylor, C.A.; Bell, J.M.; Breiding, M.J.; Xu, L. Traumatic brain injury-related emergency department visits, hospitalizations, and deaths-United States, 2007 and 2013. Morb. Mortal. Wkly. Rep. Surveill. Summ. 2017, 66, 1–16. [Google Scholar] [CrossRef] [PubMed]
  2. van Asch, C.J.; Luitse, M.J.; Rinkel, G.J.; van der Tweel, I.; Algra, A.; Klijn, C.J. Incidence, case fatality, and functional outcome of intracerebral haemorrhage over time, according to age, sex, and ethnic origin: A systematic review and meta-analysis. Lancet Neurol. 2010, 9, 167–176. [Google Scholar] [CrossRef]
  3. Currie, S.; Saleem, N.; Straiton, J.A.; Macmullen-Price, J.; Warren, D.J.; Craven, I.J. Imaging assessment of traumatic brain injury. Postgrad. Med. 2016, 92, 41–50. [Google Scholar] [CrossRef] [PubMed]
  4. Xue, Z.; Antani, S.; Long, L.R.; Demner-Fushman, D.; Thoma, G.R. Window classification of brain CT images in biomedical articles. In AMIA Annual Symposium Proceedings; American Medical Informatics Association: Bethesda, MD, USA, 2012; Volume 2012, p. 1023. [Google Scholar]
  5. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
  6. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  7. Chilamkurthy, S.; Ghosh, R.; Tanamala, S.; Biviji, M.; Campeau, N.G.; Venugopal, V.K.; Mahajan, V.; Rao, P.; Warier, P. Deep learning algorithms for detection of critical findings in head CT scans: A retrospective study. Lancet 2018, 392, 2388–2396. [Google Scholar] [CrossRef]
  8. Arbabshirani, M.R.; Fornwalt, B.K.; Mongelluzzo, G.J.; Suever, J.D.; Geise, B.D.; Patel, A.A.; Moore, G.J. Advanced machine learning in action: Identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. NPJ Digit. Med. 2018, 1, 9. [Google Scholar] [CrossRef]
  9. Lee, H.; Yune, S.; Mansouri, M.; Kim, M.; Tajmir, S.H.; Guerrier, C.E.; Ebert, S.A.; Pomerantz, S.R.; Romero, J.M.; Kamalian, S.; et al. An explainable deep-learning algorithm for the detection of acute intracranial haemorrhage from small datasets. Nat. Biomed. Eng. 2019, 3, 173. [Google Scholar] [CrossRef]
  10. Yuh, E.L.; Gean, A.D.; Manley, G.T.; Callen, A.L.; Wintermark, M. Computer-aided assessment of head computed tomography (CT) studies in patients with suspected traumatic brain injury. J. Neurotrauma 2008, 25, 1163–1172. [Google Scholar] [CrossRef]
  11. Li, Y.; Wu, J.; Li, H.; Li, D.; Du, X.; Chen, Z.; Jia, F.; Hu, Q. Automatic detection of the existence of subarachnoid hemorrhage from clinical CT images. J. Med. Syst. 2012, 36, 1259–1270. [Google Scholar] [CrossRef]
  12. Li, Y.H.; Zhang, L.; Hu, Q.M.; Li, H.W.; Jia, F.C.; Wu, J.H. Automatic subarachnoid space segmentation and hemorrhage detection in clinical head CT scans. Int. J. Comput. Assist. Radiol. Surg. 2012, 7, 507–516. [Google Scholar] [CrossRef]
  13. Chilamkurthy, S.; Ghosh, R.; Tanamala, S.; Biviji, M.; Campeau, N.G.; Venugopal, V.K.; Mahajan, V.; Rao, P.; Warier, P. Development and validation of deep learning algorithms for detection of critical findings in head CT scans. arXiv 2018, arXiv:1803.05854. [Google Scholar]
  14. Ye, H.; Gao, F.; Yin, Y.; Guo, D.; Zhao, P.; Lu, Y.; Wang, X.; Bai, J.; Cao, K.; Song, Q.; et al. Precise diagnosis of intracranial hemorrhage and subtypes using a three-dimensional joint convolutional and recurrent neural network. Eur. Radiol. 2019, 29, 6191–6201. [Google Scholar] [CrossRef] [PubMed]
  15. Chan, T. Computer aided detection of small acute intracranial hemorrhage on computer tomography of brain. Comput. Med. Imaging Graph. 2007, 31, 285–298. [Google Scholar] [CrossRef] [PubMed]
  16. Prakash, K.B.; Zhou, S.; Morgan, T.C.; Hanley, D.F.; Nowinski, W.L. Segmentation and quantification of intra-ventricular/cerebral hemorrhage in CT scans by modified distance regularized level set evolution technique. Int. J. Comput. Assist. Radiol. Surg. 2012, 7, 785–798. [Google Scholar] [CrossRef] [PubMed]
  17. Bhadauria, H.; Dewal, M. Intracranial hemorrhage detection using spatial fuzzy c-mean and region-based active contour on brain CT imaging. Signal Image Video Process. 2014, 8, 357–364. [Google Scholar] [CrossRef]
  18. Shahangian, B.; Pourghassem, H. Automatic brain hemorrhage segmentation and classification algorithm based on weighted grayscale histogram feature in a hierarchical classification structure. Biocybern. Biomed. Eng. 2016, 36, 217–232. [Google Scholar] [CrossRef]
  19. Muschelli, J.; Sweeney, E.M.; Ullman, N.L.; Vespa, P.; Hanley, D.F.; Crainiceanu, C.M. PItcHPERFeCT: Primary intracranial hemorrhage probability estimation using random forests on CT. NeuroImage Clin. 2017, 14, 379–390. [Google Scholar] [CrossRef]
  20. Kuo, W.; Häne, C.; Yuh, E.; Mukherjee, P.; Malik, J. Cost-Sensitive active learning for intracranial hemorrhage detection. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 715–723. [Google Scholar]
  21. Chang, P.; Kuoy, E.; Grinband, J.; Weinberg, B.; Thompson, M.; Homo, R.; Chen, J.; Abcede, H.; Shafie, M.; Sugrue, L.; et al. Hybrid 3D/2D convolutional neural network for hemorrhage evaluation on head CT. Am. J. Neuroradiol. 2018, 39, 1609–1616. [Google Scholar] [CrossRef]
  22. Nag, M.K.; Chatterjee, S.; Sadhu, A.K.; Chatterjee, J.; Ghosh, N. Computer-assisted delineation of hematoma from CT volume using autoencoder and Chan Vese model. Int. J. Comput. Assist. Radiol. Surg. 2018, 14, 259–269. [Google Scholar] [CrossRef]
  23. Kuang, H.; Menon, B.K.; Qiu, W. Segmenting hemorrhagic and ischemic infarct simultaneously from follow-up non-contrast CT images in patients with acute ischemic stroke. IEEE Access 2019, 7, 39842–39851. [Google Scholar] [CrossRef]
  24. Cho, J.; Park, K.S.; Karki, M.; Lee, E.; Ko, S.; Kim, J.K.; Lee, D.; Choe, J.; Son, J.; Kim, M.; et al. Improving sensitivity on identification and delineation of intracranial hemorrhage lesion using cascaded deep learning models. J. Digit. Imaging 2019, 32, 450–461. [Google Scholar] [CrossRef] [PubMed]
  25. Gautam, A.; Raman, B. Automatic segmentation of intracerebral hemorrhage from brain CT images. In Machine Intelligence and Signal Analysis; Springer: Singapore, 2019; pp. 753–764. [Google Scholar]
  26. Hssayeni, M.D. Computed Tomography Images for Intracranial Hemorrhage Detection and Segmentation. 2019. Available online: https://physionet.org/content/ct-ich/1.3.0/ (accessed on 25 December 2019).
  27. Goldberger, A.L.; Amaral, L.A.N.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 2000, 101, e215–e220. [Google Scholar] [CrossRef] [PubMed]
  28. Prevedello, L.M.; Erdal, B.S.; Ryu, J.L.; Little, K.J.; Demirer, M.; Qian, S.; White, R.D. Automated critical test findings identification and online notification system using artificial intelligence in imaging. Radiology 2017, 285, 923–931. [Google Scholar] [CrossRef] [PubMed]
  29. Grewal, M.; Srivastava, M.M.; Kumar, P.; Varadarajan, S. RADnet: Radiologist level accuracy using deep learning for hemorrhage detection in CT scans. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 281–284. [Google Scholar]
  30. Jnawali, K.; Arbabshirani, M.R.; Rao, N.; Patel, A.A. Deep 3D convolution neural network for CT brain hemorrhage classification. In Medical Imaging 2018: Computer-Aided Diagnosis; International Society for Optics and Photonics: Washington, DC, USA, 2018; Volume 10575, p. 105751C. [Google Scholar]
  31. Chi, F.L.; Lang, T.C.; Sun, S.J.; Tang, X.J.; Xu, S.Y.; Zheng, H.B.; Zhao, H.S. Relationship between different surgical methods, hemorrhage position, hemorrhage volume, surgical timing, and treatment outcome of hypertensive intracerebral hemorrhage. World J. Emerg. Med. 2014, 5, 203. [Google Scholar] [CrossRef]
  32. Strub, W.; Leach, J.; Tomsick, T.; Vagal, A. Overnight preliminary head CT interpretations provided by residents: Locations of misidentified intracranial hemorrhage. Am. J. Neuroradiol. 2007, 28, 1679–1682. [Google Scholar] [CrossRef]
  33. Chollet, F. Keras. 2015. Available online: https://keras.io (accessed on 1 March 2019).
