Article

Enhanced Region Growing for Brain Tumor MR Image Segmentation

by Erena Siyoum Biratu 1, Friedhelm Schwenker 2, Taye Girma Debelee 1,3,*, Samuel Rahimeto Kebede 3,4, Worku Gachena Negera 3 and Hasset Tamirat Molla 5

1 College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa 120611, Ethiopia
2 Institute of Neural Information Processing, Ulm University, 89081 Ulm, Germany
3 Artificial Intelligence Center, Addis Ababa 40782, Ethiopia
4 Department of Electrical and Computer Engineering, Debre Berhan University, Debre Berhan 445, Ethiopia
5 College of Natural and Computational Science, Addis Ababa University, Addis Ababa 1176, Ethiopia
* Author to whom correspondence should be addressed.
J. Imaging 2021, 7(2), 22; https://doi.org/10.3390/jimaging7020022
Submission received: 19 November 2020 / Revised: 25 January 2021 / Accepted: 26 January 2021 / Published: 1 February 2021
(This article belongs to the Special Issue Advanced Computational Methods for Oncological Image Analysis)

Abstract: A brain tumor is one of the foremost reasons for the rise in mortality among children and adults. It is a mass of tissue that grows out of control of the normal forces that regulate growth inside the brain, appearing when one type of cell changes from its normal characteristics and grows and multiplies abnormally. This unusual growth of cells within the brain or skull, which can be cancerous or non-cancerous, has been a cause of death for adults in developed countries and for children in developing countries such as Ethiopia. Studies have shown that the region growing algorithm initializes its seed point either manually or semi-manually, which affects the segmentation result. In this paper, we therefore propose an enhanced region-growing algorithm with automatic seed point initialization. The performance of the proposed approach was compared with state-of-the-art deep learning algorithms on a common dataset, BRATS2015. In the proposed approach, we apply a thresholding technique to strip the skull from each input brain image. After skull stripping, the brain image is divided into 8 blocks. For each block, we compute the mean intensity and select the five blocks with the maximum mean intensities. Each of these five blocks is then used separately to seed the region growing algorithm, yielding five different regions of interest (ROIs) per skull-stripped input brain image. The five ROIs are evaluated against the ground truth (GT) using the dice similarity score (DSS), intersection over union (IoU), and accuracy (Acc), and the best region of interest is selected as the final ROI. Finally, the final ROI was compared with different state-of-the-art deep learning and region-based segmentation algorithms in terms of DSS. Our proposed approach was validated in three different experimental setups. In the first setup, 15 randomly selected brain images were used for testing, and the approach achieved a DSS value of 0.89. In the second and third setups, the proposed approach scored DSS values of 0.90 and 0.80 for 12 randomly selected and 800 brain images, respectively. The average DSS value over the three experimental setups was 0.86.

1. Introduction

Cancer is a critical health problem with a very high mortality rate worldwide, but deaths and illnesses can be prevented if it is diagnosed early. Globally, the mean five-year survival rate of cancer patients has increased from 49% to 67%, mainly owing to rapid progress in diagnostic and treatment techniques [1]. A brain tumor is one of the deadliest cancers among children and adults. It is an abnormal mass of brain tissue that grows out of the control of the normal forces that regulate growth inside the skull; these unusual growths can be cancerous or non-cancerous [2]. Despite the considerable research carried out over the past few decades, brain tumors remain among the common cancer types that cause the most deaths worldwide [3].
Brain tumors can be classified as primary or secondary depending on their point of origin. Primary brain tumors originate from the brain tissues, whereas secondary tumors originate elsewhere and spread to the brain via the hematogenous or lymphatic route. In terms of severity, brain tumors can be categorized as benign or malignant [4]:
  • Benign brain tumors grow slowly, do not metastasize to other body organs, and can often be removed, making them less destructive or even curable. They can still cause problems, since they can grow large and press on sensitive areas of the brain (the so-called mass effect); depending on their location, they can be life-threatening.
  • Malignant brain tumors contain cancerous cells and grow fast, over months to a few years. Unlike other malignancies, malignant brain tumors rarely spread to other body parts because of the tight junctions in the brain and spinal cord.

Brain Tumor Imaging Technologies

Medical imaging technologies have revolutionized medical diagnosis over the last 40 years, allowing doctors to detect tumors earlier and improve the prognosis by visualizing tissue structures [5]. The most common imaging modalities for the detection of brain tumors are computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) [5]. MRI is the most commonly used modality for diagnosing brain tumors, since it presents accurate details about the investigated tumor and carries little radiation risk. It can also differentiate soft tissue at high resolution and is more sensitive in detecting and visualizing subtle changes in tissue density and the physiological alterations associated with the tumor [6,7,8,9]. Usually, a single imaging modality is used in the diagnosis of brain tumors, but in some cases combining more than one modality through medical image registration can be advantageous. Rundo et al. [10] explored medical image registration, the process of combining information from different imaging modalities into a single dataset. Such fusion usually requires optimizing the similarity between the input images from the different modalities; CNN-based optimization for medical image registration was performed in [11].
MRI is a non-invasive imaging technique that produces three-dimensional anatomical images by measuring the energy released as protons return to their resting alignment after being excited within a strong magnetic field. MRI is particularly suited to the detection of abnormalities in soft tissues.
MRI images can be taken in many ways [12]. The most common and widely used modalities include:
  • T1-weighted: measures the time required for the magnetic vector to return to its resting state (T1 relaxation time).
  • T2-weighted: measures the time required for the axial spin to return to its resting state (T2 relaxation time).
  • Fluid-attenuated inversion recovery (T2-FLAIR): T2-weighted imaging with suppression of the cerebrospinal fluid (CSF) signal.
T1, especially with the addition of a contrast agent (gadolinium), is effective in the detection of new lesions, whereas T2 and FLAIR are effective in defining high-grade glial neoplasms (gliomas) and the surrounding edema, with FLAIR performing better in defining the actual volume of the neoplasm [13]. In this paper, we considered FLAIR images, since they are effective in the detection of gliomas (such as glioblastomas, astrocytomas, oligodendrogliomas, and ependymomas), which make up 81% of malignant brain tumors in adults [14].

2. Related Works

Medical images contain artifacts such as tags, noise, and other body parts outside the area of interest, all of which need to be removed [15]. Segmentation is then performed to extract the region of interest for the detection and classification steps. Recently, deep-learning-based methods have tried to combine the segmentation and classification of medical images into one process. Brain tumor segmentation can be categorized into region-based and deep-learning-based segmentation. Among region-based segmentation algorithms, we address clustering, region growing, and fuzzy C-means; among deep-learning methods, we address U-Net.

2.1. Region-Based Brain Tumor Segmentation

A large body of research has been carried out on the segmentation of medical images, such as those of breast cancer and brain tumors, using various segmentation methods [16]. However, the complexity and large variation of tissue structures, together with the indistinguishable boundaries between regions of human brain tissue, make brain tumor segmentation a challenging task [17]. In the past few years, different brain image segmentation approaches have been developed for MRI images and evaluated using different evaluation parameters.
One of the most common, easiest, and fastest algorithms for image segmentation is thresholding. The thresholding technique compares pixel intensities against one or more intensity threshold values, and it performs well when the intensity in the image is homogeneous. However, applying thresholding to brain tumor segmentation is not recommended for two reasons: selecting an optimal threshold is not easy, and the intensity within a brain tumor is not homogeneous [18]. Attempts have been made to address these problems using image enhancement techniques that more clearly differentiate tissue regions on MRI scans. Rundo et al. [19] proposed a novel medical image enhancement technique called MedGA, a pre-processing technique based on a genetic algorithm; however, MedGA needs user input to define the ROI in the MRI slices. Acharya and Kumar [20] proposed a particle-swarm-based contrast enhancement technique for brain MRI images and compared it with other contrast enhancement techniques, but they did not show its performance when applied as pre-processing for thresholding-based segmentation.
Another commonly used segmentation algorithm in medical imaging is the watershed algorithm, whose working principle resembles water flooding a rugged landscape [18]. The watershed algorithm can accurately segment multiple regions at the same time with a complete contour for each section, but it suffers from over-segmentation [21].
The region growing algorithm is one of the most successful approaches for brain tumor segmentation. It extracts regions of similar pixels [18]. Its performance is highly dependent on the initial seed point selection and on the type of similarity measure used between neighboring pixels. However, in most cases the optimal seed point is selected manually, as summarized in Table 1, which is a challenging task in addition to the algorithm's higher computational cost [18].
Salman et al. [22] and Sarathi et al. [23] reported that the region growing segmentation algorithm performs well for generating brain tumor ROIs. However, Salman et al. [22] manually selected the initial seed point for their region-growing approach, and Thiruvenkadam [24] noted that manual seed point selection is the most important step in region-growing-based brain tumor segmentation.
Cui et al. [17] fused two MRI modalities (FLAIR and T2) to generate initial seed points for the region growing algorithm. Their seed points are selected automatically, but the overall algorithm is not consistent: seed points are picked randomly from a set of potential seeds generated by calculating each seed's probability of belonging to a tumor region.
Sarathi et al. [23] proposed a wavelet-feature-based region growing segmentation algorithm for original 256 × 256 contrast-enhanced T1-weighted MRI images. To select seed points, they first convolved a 64 × 64 kernel with the 64 × 64 preprocessed brain images and then extracted wavelet features; significant wavelet feature points were then tried in turn as the initial seed until the best ROI was extracted. In their work, the mean, variance, standard deviation, and entropy were used as similarity properties to include or exclude the pixels neighboring the seed point. Their experimental results showed better performance with minimum computational time.
In [25], the intensity values of brain tissue from its different regions were considered when selecting the seed points. However, this requires the brain map structure and intensity information to be known in advance. Therefore, to obtain detailed information on the brain images, multi-modal images are preferred, and Ho et al. [25] used a fusion of multi-modal images to select the initial seed automatically.
Bauer et al. [26] used a soft-margin SVM classifier to segment brain tumors hierarchically by classifying MRI voxels. Twenty-eight features were extracted from the voxel intensities and from first-order textures computed on patches around each voxel. Conditional random field (CRF) regularization was applied to introduce spatial constraints, since the SVM classifier considers each voxel independently. The proposed algorithm achieved a DSS of 0.84, but the authors did not specify the size of the patches taken around the voxels when extracting texture features, and no comparison with state-of-the-art algorithms was performed.
Rundo et al. [27] used Fuzzy C-Means (FCM)-based segmentation to segment the whole tumor volume: the gross tumor volume (GTV) is segmented in a first step, and the necrosis volume is extracted from the GTV in a second step. However, the GTV step needs human intervention.

2.2. Deep Learning-Based Brain Tumor Segmentations

Deep learning has previously been applied to the classification and segmentation of medical images [28,29,30,31,32], and different versions of CNNs have been used to segment brain tumors from MRI scans.
Li et al. [33] applied generative adversarial networks (GANs) to augment brain datasets by generating realistic paired data; the proposed method can augment n data pairs into n² − n data pairs. Their augmentation technique was used to train and test different deep-learning-based segmentation techniques on the BRATS2017 dataset. The best performer, the U-Net algorithm, achieved a DSS of 0.754 when using the original dataset, which improved to 0.765 for whole-tumor segmentation. The U-Net architecture is symmetric and composed of an encoder and a decoder: the encoder extracts features from the input images, and the decoder constructs the segmentation from the extracted features [34]. U-Net has become the most popular semantic segmentation architecture in medical imaging [34]. In this paper, U-Net was implemented as a baseline for comparison with our proposed model.
Rundo et al. [35] modified the original U-Net architecture by adding squeeze-and-excitation (SE) blocks to every skip connection. They proposed two architectures: in the first, only the encoder block output is fed to SE blocks at the skip connections; in the second, each skip connection is modified by adding SE blocks at every encoder and decoder block and combining their outputs. The SE blocks are designed to model interdependencies between channels and increase the model's generalization capability when trained on different datasets, which in their case consisted of prostate MRI scans for zonal segmentation collected from various institutions. The SE blocks' adaptive feature recalibration significantly improved the performance of the U-Net architecture when trained across different datasets.

3. Materials and Methods

Figure 1 presents the flowchart of the proposed enhanced region-growing algorithm for brain tumor segmentation. Raw MRI images usually contain artifacts and non-brain parts that degrade segmentation quality, so a preprocessing step should be applied before any segmentation algorithm. The enhanced region-growing algorithm is then applied to generate candidate brain tumor regions. The methods used in this paper are detailed in Section 3.1 through Section 3.4.

3.1. Dataset

The image dataset used in this paper contains multimodal MRI scans of patients with gliomas (54 LGGs and 132 HGGs). It was used for the multimodal Brain Tumor Segmentation (BRATS) 2015 challenge and comes from the Virtual Skeleton Database (VSD) [36]. Specifically, it combines the training set (10 LGGs and 20 HGGs) used in the BRATS 2013 challenge [37] with 44 LGG and 112 HGG scans provided by the NIH Cancer Imaging Archive (TCIA). The data of each patient consist of native and contrast-enhanced (CE) T1-weighted volumes, as well as T2-weighted and T2 fluid-attenuated inversion recovery (FLAIR) MRI volumes.
The dataset includes ground truth (GT) for training segmentation models and for quantitative evaluation. The data from BRATS 2013 were annotated manually, whereas the data from TCIA were annotated automatically by fusing the expert-approved results of the segmentation algorithms that ranked high in the BRATS 2012 and 2013 challenges [37]. The GT segmentations comprise the enhancing part of the tumor (ET); the tumor core (TC), which is the union of the necrotic, non-enhancing, and enhancing parts of the tumor; and the whole tumor (WT), which is the union of the TC and the peritumoral edematous region.
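For concreteness, the following minimal sketch (an illustration under assumptions, not part of the original pipeline) shows how a BRATS 2015 FLAIR volume and its GT labels could be loaded for slice-wise processing with SimpleITK; the file names "flair.mha" and "gt.mha" are hypothetical placeholders.

```python
# Sketch: loading a BRATS 2015 volume and its GT labels for slice-wise
# processing. Requires SimpleITK; the paths are hypothetical placeholders.
import SimpleITK as sitk
import numpy as np

flair = sitk.GetArrayFromImage(sitk.ReadImage("flair.mha"))  # (slices, H, W)
gt = sitk.GetArrayFromImage(sitk.ReadImage("gt.mha"))        # integer labels

# Whole-tumor (WT) mask: the union of all non-zero GT labels
# (necrosis, edema, non-enhancing and enhancing tumor).
wt_mask = gt > 0

# Take a mid-axial slice and normalize its intensities to [0, 1].
im = flair[flair.shape[0] // 2].astype(np.float64)
im = (im - im.min()) / (im.max() - im.min() + 1e-8)
```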

3.2. Preprocessing

In digital image processing, preprocessing plays an important role in smoothing and normalizing the MRI images [38]; it also suppresses the impact of dark regions at the borders of the brain images [38].
The BRATS2015 dataset is available in a preprocessed format in which unwanted parts are already removed, but preprocessing remains essential for raw MRI data. Skull stripping is one of the most popular preprocessing techniques: it removes the skull, the non-cerebral tissue surrounding the brain, from the brain image. Distinguishing non-cerebral from intra-cranial tissue is difficult because their intensities are homogeneous [39]. In brain tumor segmentation, stripping the skull and other non-brain parts is a crucial but challenging step [40]. The challenge arises from the large anatomical variability among brains, the different acquisition methods for brain images, and the presence of artifacts, all of which complicate the design of a robust algorithm [40]. Ségonne et al. [40] proposed a hybrid skull-stripping approach that combines the watershed algorithm with a deformable surface model.
In the proposed approach, we applied thresholding and morphological operations for preprocessing (see Algorithm 1). Since the MRI images in the local dataset have three color channels, they were first converted to grayscale. Otsu's thresholding technique was then employed to determine the threshold between the background and the tissue regions. After thresholding, the largest binary object is retained, which extracts the brain and removes the skull and other tags from the image. Examples of the skull removal results are presented in Figure 2.
Algorithm 1 Skull Stripping
1: input: grayscale image im
2: Calculate Otsu's threshold: T ← graythresh(im)
3: Threshold the image: BW ← im2bw(im, T)
4: Open the binary image using a disk structuring element se: BW ← imopen(BW, se)
5: Dilate the binary image: BW ← imdilate(BW, se)
6: Select the largest binary object: BW ← largest_blob(BW)
7: Close the binary image: BW ← imclose(BW, se)
8: Fill holes in the binary image: BW ← imfill(BW, 'holes')
9: Remove the skull: stripped ← im; stripped(¬BW) ← 0
10: return stripped
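A rough Python analogue of Algorithm 1 is sketched below, using scikit-image and SciPy counterparts of the MATLAB-style calls in the pseudocode. It is an illustration under assumptions: in particular, the disk radius is a guess, since the structuring element size is not stated above.

```python
# Sketch of Algorithm 1 (skull stripping) with scikit-image/SciPy counterparts
# of the MATLAB-style calls above. The disk radius is an assumption.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import disk, binary_opening, binary_dilation, binary_closing
from scipy.ndimage import binary_fill_holes

def skull_strip(im):
    """im: 2D grayscale image as a float array; returns the skull-stripped image."""
    se = disk(3)                                  # disk structuring element (radius assumed)
    bw = im > threshold_otsu(im)                  # graythresh + im2bw
    bw = binary_opening(bw, se)                   # imopen
    bw = binary_dilation(bw, se)                  # imdilate
    lbl = label(bw)                               # largest_blob: keep biggest component
    if lbl.max() > 0:
        bw = lbl == np.argmax(np.bincount(lbl.ravel())[1:]) + 1
    bw = binary_closing(bw, se)                   # imclose
    bw = binary_fill_holes(bw)                    # imfill
    stripped = im.copy()
    stripped[~bw] = 0                             # zero out non-brain pixels
    return stripped
```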

3.3. Enhanced Region-Growing Approach

The proposed enhanced region-growing approach, which automatically detects the abnormality region and extracts the ROI for each brain image, is presented in Algorithm 2 and is the main contribution of this paper. Algorithm 1 first strips the skull from the original input brain image. The skull-stripped brain image is then divided into 32 blocks, or patches, of size 8 × 8. For each block i, the average (mean) intensity is computed as indicated in Equation (1):
AvgI_i = (1/64) Σ_{j=1}^{8} Σ_{k=1}^{8} I_{jk},  i = 1, …, 32    (1)
As shown in line 6 of Algorithm 2 and Equation (1), the mean intensities of the 32 blocks are computed, and only the five brightest blocks are selected as potential candidates whose centers serve as seed points for the region-growing segmentation algorithm (see Figure 3a,c,e,g). Lines 12 to 14 of Algorithm 2 generate the five ROIs with the region-growing segmentation algorithm; the results are then compared against the ground truth using the evaluation parameters, and the best ROI is selected as the final segmentation output (see Figure 3b,d,f,h). The threshold of the region-growing segmentation algorithm was determined experimentally to be 0.1, since most tumor regions appear homogeneous. Some inhomogeneous parts were accommodated with fill-hole operations, as shown in Figure 3h: in that particular brain image, the tumor core appears black and our algorithm might detect only the boundaries, so we applied fill-hole operations to include the core of the tumor.
Algorithm 2 Enhanced Region Growing Segmentation for Brain Tumor Segmentation
1: input: skull-stripped image im
2: Resize the image: im ← imresize(im, [256, 256])
3: Iterate over each 8 × 8 block:
4: for i = 1:8:256 do
5:  for j = 1:8:256 do
6:   Collect the mean of each block: mIs ← mean(im(i:i+7, j:j+7))
7:   Collect the center of each block: cBs ← [i+3, j+3]
8:  end for
9: end for
10: Select the top 5 blocks by mean intensity: [ind, vals] = max(mIs, 5); seeds = cBs(ind)
11: return seeds
12: for m = 1:5 do
13:  ROI_m = Region-growing(seed_m)
14: end for
15: Compare each ROI_m against the GT using the evaluation parameters, for m = 1:5
16: Select the best ROI as the final segmentation output.
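For illustration, a compact Python sketch of Algorithm 2 is given below. It is a sketch under assumptions, not the authors' implementation: skimage.segmentation.flood stands in for the region-growing step, with the paper's experimentally chosen 0.1 threshold used as the flood tolerance on intensities normalized to [0, 1], and binary_fill_holes implements the fill-hole operation mentioned above.

```python
# Sketch of Algorithm 2: block-mean seed selection followed by region growing.
# flood() from scikit-image stands in for the paper's region-growing step,
# with the experimentally chosen 0.1 threshold used as the flood tolerance.
import numpy as np
from skimage.transform import resize
from skimage.segmentation import flood
from scipy.ndimage import binary_fill_holes

def candidate_rois(im, block=8, n_seeds=5, tol=0.1):
    """im: skull-stripped 2D image with intensities in [0, 1]."""
    im = resize(im, (256, 256))
    means, centers = [], []
    for i in range(0, 256, block):            # Equation (1): mean of each 8x8 block
        for j in range(0, 256, block):
            means.append(im[i:i + block, j:j + block].mean())
            centers.append((i + block // 2, j + block // 2))
    top = np.argsort(means)[::-1][:n_seeds]   # five brightest blocks
    seeds = [centers[k] for k in top]
    # Grow a region from each seed and fill holes (e.g., a dark tumor core).
    rois = [binary_fill_holes(flood(im, s, tolerance=tol)) for s in seeds]
    return seeds, rois
```

The best of the five candidate ROIs is then chosen by scoring each against the GT with the evaluation parameters of Section 3.4.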

3.4. Evaluation Approach

The most common parameters used to evaluate the performance of segmentation algorithms are the dice similarity score (DSS), similarity index (SI), extra fraction (EF), overlap fraction (OF), Jaccard similarity index (JSI), accuracy (Acc), sensitivity (Sn), specificity (Sp), computational cost, root mean squared error (RMSE), and intersection over union (IoU). JSI is equivalent to IoU, and Sp to SI.
Consider true positives (TP) as the number of tumor region pixels correctly identified and classified, false positives (FP) as the number of normal region pixels in the input image identified as tumor region, false negatives (FN) as the number of tumor region pixels left undetected or misclassified, and true negatives (TN) as the number of normal region pixels in the input image identified as normal region.

3.4.1. Extra Fraction (EF)

Extra fraction refers to the number of pixels being falsely detected as a tumor region. A minimum extra fraction value means a better segmentation result [41].
EF = FP / (TP + FN)

3.4.2. Overlap Fraction (OF)

The overlap fraction, or sensitivity, refers to the fraction of the tumor region that is correctly identified [41].
OF = TP / (TP + FN)

3.4.3. Dice Similarity Score (DSS)

DSS measures the spatial overlap between the segmented target region and the ground truth.
DSS = 2TP / (2TP + FP + FN)
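These definitions map directly onto code. The following is a minimal sketch (an illustration, assuming binary numpy masks for both prediction and GT) computing EF, OF, DSS, and the related IoU and Acc used in our tables:

```python
# Minimal sketch of the evaluation metrics of Section 3.4 for binary masks.
import numpy as np

def segmentation_metrics(pred, gt):
    """pred, gt: boolean numpy arrays of the same shape; gt must be non-empty."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    return {
        "EF": fp / (tp + fn),                   # extra fraction (lower is better)
        "OF": tp / (tp + fn),                   # overlap fraction / sensitivity
        "DSS": 2 * tp / (2 * tp + fp + fn),     # dice similarity score
        "IoU": tp / (tp + fp + fn),             # intersection over union (= JSI)
        "Acc": (tp + tn) / pred.size,           # accuracy
    }
```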
In addition, we involved a radiologist to qualitatively evaluate the final ROIs obtained with the proposed approach on randomly selected brain images.

4. Experimental Results and Discussion

The first experimental result is the skull-stripped brain images shown in Figure 2, where Figure 2a,c,e,g are the original 256 × 256 brain images and Figure 2b,d,f,h are the corresponding skull-stripped images. Then, following Equation (1), we computed 32 block-average intensities for each skull-stripped brain image, selected the five highest, and used them as potential initial seed points for the region-growing algorithm, as indicated in Figure 3a,c,e,g. Using the five selected seed points per image, we generated five different ROIs, compared them against the respective GT, and selected the best ROI, as presented in Figure 3b,d,f,h.
To validate the proposed approach, we designed three different experimental setups for analysis. In our first experiment, we randomly selected 15 brain images from the BRATS2015 dataset. In the second experiment, we again randomly selected 12 brain images from the same dataset and finally, in the third experimental setup, we used 800 brain images from the same dataset used in the previous two experimental setups.
In all three experimental setups, the performance of the proposed approach was evaluated in terms of Acc, IoU, DSS, Sn, Sp, EF, OF, and PSNR. DSS is the metric most commonly used to evaluate segmentation algorithms, especially for deep learning methods. Higher values of Acc, IoU, DSS, Sn, Sp, OF, and PSNR indicate better performance, whereas for EF a lower value indicates better performance.
In the first experimental setup, 15 brain images were used for the analysis, and for each image the corresponding Acc, IoU, DSS, Sn, Sp, OF, EF, and PSNR were computed, as indicated in Table 2. The average values over the 15 brain images were used to compare the proposed approach with modified adaptive K-means (MAKM) and U-Net.
Table 2 indicates that the proposed algorithm outperformed MAKM and U-Net in terms of the average Acc, IoU, DSS, EF, and PSNR. However, it achieved lower average Sn, Sp, and OF, mainly because of the very low values of these parameters for images 14 and 15. Even so, the advantage of U-Net and MAKM is marginal: for OF and Sn, U-Net scored 4% and MAKM 1% higher than the proposed approach, and for Sp both were 1% higher.
Table 3 presents the comparison of the proposed approach, MAKM, and U-Net on the 12 randomly selected brain images from BRATS2015. The proposed approach scored better values of Acc, IoU, DSS, Sp, EF, and PSNR, namely 99.1%, 0.82, 0.90, 99.7%, 0.06, and 163.89, respectively, but lower values of Sn and OF than MAKM and U-Net. U-Net achieved the highest Sn and OF, 89.1% and 0.89, although the difference from the proposed approach was limited to roughly 2%.
Table 4 presents the experimental results of the proposed approach for 800 brain images, compared with MAKM and U-Net. The results indicate that the proposed approach again scored better values of Acc, IoU, DSS, Sp, EF, and PSNR, namely 98.72%, 0.67, 0.80, 99.8%, 0.06, and 157.0, respectively, but lower values of Sn and OF; the highest Sn and OF, 90.7% and 0.91, were scored by U-Net.
Table 5 compares the results of state-of-the-art deep learning algorithms on the BRATS2015 dataset with the performance of the proposed approach in the three experimental setups/cases. The proposed approach achieved DSS values of 0.89, 0.90, and 0.80 for case-1, case-2, and case-3, respectively, with an average of 0.86. No classifier was applied for the final segmentation in this paper, but the enhanced region growing algorithm was effective in generating candidate regions of interest; the best ROI was chosen against the GT from the generated ROIs for comparison with the other methods. The experimental results show that the proposed approach can generate the best ROI in most test cases. Nevertheless, a classifier trained on features extracted from the abnormal ROIs would still be required for the algorithm to detect and determine the tumor type.
Figure 4 presents the segmentation results of the proposed algorithm, MAKM, and U-Net in terms of ROIs and their respective ground truths. For im274, im473, im551, im1507, im781, and im733, the proposed approach produced ROIs that were almost identical to their respective ground truths (GTs), while it under-segmented im792 and im1238, as indicated in Figure 4. U-Net produced good segmentation results only for im274, im551, im1507, and im781 and was unable to detect the tumor region in im473, im792, im733, and im1238. MAKM over-segmented almost all of the randomly selected brain images, detecting normal parts of the brain as abnormal, with the exception of im274.
For comparison purposes, we evaluated the performance of the proposed approach against MAKM and U-Net. MAKM [46] is a modified version of the adaptive k-means algorithm proposed by Debelee et al., originally intended for the detection of cancer in mammographic images; the performance of the proposed approach was by far better. For U-Net, we first trained the architecture from scratch using 16,000 slices extracted from the MRI scans of 200 patients in the BRATS2015 dataset, with 80 slices per patient (slices 50 to 130). All 200 patients were affected by high-grade glioma, a fast-growing and rapidly spreading tumor. Training was performed for 50 epochs, until no significant improvement was observed. Since the BRATS2015 scans come with much of the preprocessing (such as tag removal and skull stripping) already performed, we applied only intensity normalization before training. We used DSS as the loss function (a sketch of such a loss is given below) to train the nine-layer U-Net architecture described in [47], which has an additional batch normalization after each convolutional layer. For evaluation, we randomly selected 15 brain images for testing after model validation; the test DSS was 14% lower than that of the proposed approach.
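For reference, a soft Dice loss of the kind described above can be sketched as follows. This is a numpy illustration of the idea, not the exact training code; in practice it is expressed in the deep learning framework's tensor operations so that it is differentiable.

```python
# Sketch of a soft Dice loss: minimizing it maximizes the Dice score.
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: predicted probabilities in [0, 1]; target: binary GT mask."""
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice
```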
Finally, we compared the performance of the proposed approach with U-Net and its variants on the BRATS2015 dataset. Daimary et al. [42] and Zhou et al. proposed U-Net variant architectures and scored DSS values of 0.73 and 0.87, respectively, below the 0.89 and 0.90 that the proposed approach scored in the first two experimental setups. Havaei et al. [43] evaluated their approach on the BRATS2015 dataset and achieved 0.88, 0.79, and 0.73 for the whole, core, and enhanced tumor regions, respectively.

5. Conclusions

Brain tumors are among the major cancer types responsible for high death rates worldwide. To combat this, a significant number of medical-image-analysis research works have been carried out on the detection and classification of different cancer types using deep learning and conventional/shallow machine learning approaches, the latter usually applied in combination with digital image processing techniques. In this article, we modified the existing and popular region-growing segmentation algorithm to detect the abnormality region in brain images. The main challenge of the region-growing algorithm is seed point initialization, on which obtaining the best ROI for a given input brain image depends. In the proposed approach, seed points are generated automatically for any input brain image; the approach was tested on the BRATS2015 dataset in three different experimental setups, and its results were compared with MAKM and with the U-Net architecture and its variants for brain tumor detection and segmentation. The experiments show that the proposed algorithm can detect brain tumor locations and extract the best ROIs, achieving higher performance than modified adaptive k-means. Almost all U-Net architectures and variants scored lower DSS values on the BRATS2015 brain tumor image dataset, and in most cases U-Net either over-segmented or missed the tumor region of the brain MRI images. The selection of the threshold point for the region-growing algorithm remains a limitation of the proposed approach and is left for future work.

Author Contributions

Conceptualization, E.S.B.; Methodology, E.S.B.; Validation, F.S., T.G.D., S.R.K., H.T.M., W.G.N.; Writing—original draft preparation, E.S.B.; Writing—review and editing, E.S.B., F.S., S.R.K., T.G.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sonar, P.; Bhosle, U.; Choudhury, C. Mammography classification using modified hybrid SVM-KNN. In Proceedings of the 2017 International Conference on Signal Processing and Communication (ICSPC), Coimbatore, India, 28–29 July 2017. [Google Scholar] [CrossRef]
  2. Yasiran, S.S.; Salleh, S.; Mahmud, R. Haralick texture and invariant moments features for breast cancer classification. AIP Conf. Proc. 2016, 1750, 020022. [Google Scholar]
  3. Aldape, K.; Brindle, K.M.; Chesler, L.; Chopra, R.; Gajjar, A.; Gilbert, M.R.; Gottardo, N.; Gutmann, D.H.; Hargrave, D.; Holland, E.C.; et al. Challenges to curing primary brain tumours. Nat. Rev. Clin. Oncol. 2019, 16, 509–520. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Zhao, Z.; Yang, G.; Lin, Y.; Pang, H.; Wang, M. Automated glioma detection and segmentation using graphical models. PLoS ONE 2018, 13, e0200745. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Birry, R.A.K. Automated Classification in Digital Images of Osteogenic Differentiated Stem Cells. Ph.D. Thesis, University of Salford, Salford, UK, 2013. [Google Scholar]
  6. Drevelegas, A.; Papanikolaou, N. Imaging modalities in brain tumors. In Imaging of Brain Tumors with Histological Correlations; Springer: Berlin/Heidelberg, Germany, 2011; pp. 13–33. [Google Scholar]
  7. Mechtler, L. Neuroimaging in Neuro-Oncology. Neurol. Clin. 2009, 27, 171–201. [Google Scholar] [CrossRef]
  8. Strong, M.J.; Garces, J.; Vera, J.C.; Mathkour, M.; Emerson, N.; Ware, M.L. Brain Tumors: Epidemiology and Current Trends in Treatment. J. Brain Tumors Neurooncol. 2015, 1, 1–21. [Google Scholar] [CrossRef]
  9. Mortazavi, D.; Kouzani, A.Z.; Soltanian-Zadeh, H. Segmentation of multiple sclerosis lesions in MR images: A review. Neuroradiology 2011, 54, 299–320. [Google Scholar] [CrossRef]
  10. Rundo, L.; Tangherloni, A.; Militello, C.; Gilardi, M.C.; Mauri, G. Multimodal medical image registration using Particle Swarm Optimization: A review. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016. [Google Scholar] [CrossRef]
  11. Balakrishnan, G.; Zhao, A.; Sabuncu, M.R.; Guttag, J.; Dalca, A.V. VoxelMorph: A Learning Framework for Deformable Medical Image Registration. IEEE Trans. Med. Imaging 2019, 38, 1788–1800. [Google Scholar] [CrossRef] [Green Version]
  12. MY-MS.org. MRI Basics. 2020. Available online: https://my-ms.org/mri_basics.htm (accessed on 1 October 2020).
  13. Stall, B.; Zach, L.; Ning, H.; Ondos, J.; Arora, B.; Shankavaram, U.; Miller, R.W.; Citrin, D.; Camphausen, K. Comparison of T2 and FLAIR imaging for target delineation in high grade gliomas. Radiat. Oncol. 2010, 5, 5. [Google Scholar] [CrossRef] [Green Version]
  14. Society, N.B.T. Quick Brain Tumor Facts. 2020. Available online: https://braintumor.org/brain-tumor-information/brain-tumor-facts/ (accessed on 3 October 2020).
  15. Rahimeto, S.; Debelee, T.; Yohannes, D.; Schwenker, F. Automatic pectoral muscle removal in mammograms. Evol. Syst. 2019. [Google Scholar] [CrossRef]
  16. Kebede, S.R.; Debelee, T.G.; Schwenker, F.; Yohannes, D. Classifier Based Breast Cancer Segmentation. J. Biomim. Biomater. Biomed. Eng. 2020, 47, 1–21. [Google Scholar]
  17. Cui, S.; Shen, X.; Lyu, Y. Automatic Segmentation of Brain Tumor Image Based on Region Growing with Co-constraint. In International Conference on Multimedia Modeling, Proceedings of the MMM 2019: MultiMedia Modeling, Thessaloniki, Greece, 8–11 January 2019; Springer: Berlin/Heidelberg, Germany, 2019; Volume 11295. [Google Scholar]
  18. Angulakshmi, M.; Lakshmi Priya, G.G. Automated Brain Tumour Segmentation Techniques—A Review. Int. J. Imaging Syst. Technol. 2017, 27, 66–77. [Google Scholar] [CrossRef] [Green Version]
  19. Rundo, L.; Tangherloni, A.; Cazzaniga, P.; Nobile, M.S.; Russo, G.; Gilardi, M.C.; Vitabile, S.; Mauri, G.; Besozzi, D.; Militello, C. A novel framework for MR image segmentation and quantification by using MedGA. Comput. Methods Programs Biomed. 2019, 176, 159–172. [Google Scholar] [CrossRef] [PubMed]
  20. Acharya, U.K.; Kumar, S. Particle swarm optimized texture based histogram equalization (PSOTHE) for MRI brain image enhancement. Optik 2020, 224, 165760. [Google Scholar] [CrossRef]
  21. Pandav, S. Brain tumor extraction using marker controlled watershed segmentation. Int. J. Eng. Res. Technol. 2014, 3, 2020–2022. [Google Scholar]
  22. Salman, Y. Validation techniques for quantitative brain tumors measurements. In Proceedings of the IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 17–18 January 2006. [Google Scholar]
  23. Sarathi, M.P.; Ansari, M.G.A.; Uher, V.; Burget, R.; Dutta, M.K. Automated Brain Tumor segmentation using novel feature point detector and seeded region growing. In Proceedings of the 2013 36th International Conference on Telecommunications and Signal Processing (TSP), Rome, Italy, 2–4 July 2013; pp. 648–652. [Google Scholar]
  24. Thiruvenkadam, K.; Perumal, N. Brain Tumor Segmentation of MRI Brain Images through FCM clustering and Seeded Region Growing Technique. Int. J. Appl. Eng. Res. 2015, 10, 427–432. [Google Scholar]
  25. Ho, Y.L.; Lin, W.Y.; Tsai, C.L.; Lee, C.C.; Lin, C.Y. Automatic Brain Extraction for T1-Weighted Magnetic Resonance Images Using Region Growing. In Proceedings of the 2016 IEEE 16th International Conference on Bioinformatics and Bioengineering (BIBE), Taichung, Taiwan, 31 October–2 November 2016; pp. 250–253. [Google Scholar]
  26. Bauer, S.; Nolte, L.P.; Reyes, M. Fully Automatic Segmentation of Brain Tumor Images Using Support Vector Machine Classification in Combination with Hierarchical Conditional Random Field Regularization. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011; pp. 354–361. [Google Scholar] [CrossRef] [Green Version]
  27. Rundo, L.; Militello, C.; Tangherloni, A.; Russo, G.; Vitabile, S.; Gilardi, M.C.; Mauri, G. NeXt for neuro-radiosurgery: A fully automatic approach for necrosis extraction in brain tumor MRI using an unsupervised machine learning technique. Int. J. Imaging Syst. Technol. 2017, 28, 21–37. [Google Scholar] [CrossRef]
  28. Debelee, T.G.; Schwenker, F.; Ibenthal, A.; Yohannes, D. Survey of deep learning in breast cancer image analysis. Evol. Syst. 2019. [Google Scholar] [CrossRef]
  29. Debelee, T.G.; Gebreselasie, A.; Schwenker, F.; Amirian, M.; Yohannes, D. Classification of Mammograms Using Texture and CNN Based Extracted Features. J. Biomim. Biomater. Biomed. Eng. 2019, 42, 79–97. [Google Scholar] [CrossRef]
  30. Debelee, T.G.; Kebede, S.R.; Schwenker, F.; Shewarega, Z.M. Deep Learning in Selected Cancers’ Image Analysis—A Survey. J. Imaging 2020, 6, 121. [Google Scholar] [CrossRef]
  31. Afework, Y.K.; Debelee, T.G. Detection of Bacterial Wilt on Enset Crop Using Deep Learning Approach. Int. J. Eng. Res. Afr. 2020, 51, 131–146. [Google Scholar] [CrossRef]
  32. Debelee, T.G.; Amirian, M.; Ibenthal, A.; Palm, G.; Schwenker, F. Classification of Mammograms Using Convolutional Neural Network Based Feature Extraction. In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering; Springer: Cham, Switzerland, 2018; Volume 244, pp. 89–98. [Google Scholar]
  33. Li, Q.; Yu, Z.; Wang, Y.; Zheng, H. TumorGAN: A Multi-Modal Data Augmentation Framework for Brain Tumor Segmentation. Sensors 2020, 20, 4203. [Google Scholar] [CrossRef] [PubMed]
  34. Ibtehaz, N.; Rahman, M.S. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw. 2020, 121, 74–87. [Google Scholar] [CrossRef] [PubMed]
  35. Rundo, L.; Han, C.; Nagano, Y.; Zhang, J.; Hataya, R.; Militello, C.; Tangherloni, A.; Nobile, M.S.; Ferretti, C.; Besozzi, D.; et al. USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Neurocomputing 2019, 365, 31–43. [Google Scholar] [CrossRef] [Green Version]
  36. Kistler, M.; Bonaretti, S.; Pfahrer, M.; Niklaus, R.; Büchler, P. The Virtual Skeleton Database: An Open Access Repository for Biomedical Research and Collaboration. J. Med. Internet Res. 2013, 15, e245. [Google Scholar] [CrossRef] [Green Version]
  37. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2014, 34, 1993–2024. [Google Scholar] [CrossRef]
  38. Zhao, J.; Meng, Z.; Wei, L.; Sun, C.; Zou, Q.; Su, R. Supervised Brain Tumor Segmentation Based on Gradient and Context-Sensitive Features. Front. Neurosci. 2019, 13, 1–11. [Google Scholar] [CrossRef]
  39. Reddy, B.; Reddy, P.B.; Kumar, P.S.; Reddy, S.S. Developing an Approach to Brain MRI Image Preprocessing for Tumor Detection. Int. J. Res. 2014, 1, 725–731. [Google Scholar]
  40. Ségonne, F.; Dale, A.M.; Busa, E.; Glessner, M.; Salat, D.; Hahn, H.K.; Fischl, B. A Hybrid Approach to the Skull Stripping Problem in MRI. Neuroimage 2004, 22, 1060–1075. [Google Scholar] [CrossRef]
  41. Vishnuvarthanan, G.; Rajasekaran, M.P.; Vishnuvarthanan, N.A.; Prasath, T.A.; Kannan, M. Tumor Detection in T1, T2, FLAIR and MPR Brain Images Using a Combination of Optimization and Fuzzy Clustering Improved by Seed-Based Region Growing Algorithm. Int. J. Imaging Syst. Technol. 2017, 27, 33–45. [Google Scholar] [CrossRef] [Green Version]
  42. Daimary, D.; Bora, M.B.; Amitab, K.; Kandar, D. Brain Tumor Segmentation from MRI Images using Hybrid Convolutional Neural Networks. Procedia Comput. Sci. 2020, 167, 2419–2428. [Google Scholar] [CrossRef]
  43. Havaei, M.; Dutil, F.; Pal, C.; Larochelle, H.; Jodoin, P.M. A Convolutional Neural Network Approach to Brain Tumor Segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; Springer: Cham, Switzerland, 2016; pp. 195–208. [Google Scholar] [CrossRef]
  44. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Deep Convolutional Neural Networks for the Segmentation of Gliomas in Multi-sequence MRI. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; Springer: Cham, Switzerland, 2016; pp. 131–143. [Google Scholar] [CrossRef]
  45. Malmi, E.; Parambath, S.; Peyrat, J.M.; Abinahed, J.; Chawla, S. CaBS: A Cascaded Brain Tumor Segmentation Approach. Proc. MICCAI Brain Tumor Segmentation (BRATS) 2015, 42–47. Available online: http://www2.imm.dtu.dk/projects/BRATS2012/proceedingsBRATS2012.pdf (accessed on 8 August 2020).
  46. Debelee, T.G.; Schwenker, F.; Rahimeto, S.; Yohannes, D. Evaluation of modified adaptive k-means segmentation algorithm. Comput. Vis. Media 2019. [Google Scholar] [CrossRef] [Green Version]
  47. Dong, H.; Yang, G.; Liu, F.; Mo, Y.; Guo, Y. Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks. In Communications in Computer and Information Science; Springer: Cham, Switzerland, 2017; pp. 506–517. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Flowchart of the proposed region growing algorithm. In this approach, the segmentation result is evaluated both by evaluation parameters and by physicians/radiologists.
Figure 2. Examples of original abnormal brain tumor images before and after skull removal. (a,c,e,g) original brain images with skull; (b,d,f,h) the corresponding skull-removed brain images.
Figure 3. Possible seed points and annotations generated using the proposed approach. (a,c,e,g) skull-removed brain images, each with five potential seed points; (b,d,f,h) the best ROI for each respective brain image.
Figure 4. Segmentation results on the BRATS2015 dataset.
Table 1. Related work in region growing seed selection and growth criteria.

Authors and Citation | Seed Selection | RG Criteria
Salman et al., 2006 [22] | Manual | Texture
Sarathi et al., 2013 [23] | Automatic | Variance, entropy
Thiruvenkadam, 2015 [24] | Manual | -
Ho et al., 2016 [25] | Automatic | Intensity
Cui et al., 2019 [17] | Semi-automatic | Intensity & spatial texture
Table 2. Performance comparison of RG with MAKM and U-Net for 15 randomly selected brain images from the BRATS2015 dataset.

Metric | Algorithm | im01 | im02 | im03 | im04 | im05 | im06 | im07 | im08 | im09 | im10 | im11 | im12 | im13 | im14 | im15 | Avg
Acc (%) | RG | 100 | 100 | 100 | 100 | 99 | 99 | 99 | 99 | 99 | 99 | 100 | 99 | 99 | 88 | 94 | 98
Acc (%) | MAKM | 99 | 99 | 99 | 99 | 82 | 99 | 99 | 99 | 99 | 86 | 86 | 80 | 87 | 99 | 87 | 93
Acc (%) | U-Net | 100 | 100 | 100 | 100 | 98 | 98 | 74 | 74 | 99 | 99 | 67 | 99 | 100 | 99 | 92 | 93
IoU | RG | 0.94 | 0.94 | 0.94 | 0.93 | 0.88 | 0.88 | 0.85 | 0.85 | 0.85 | 0.85 | 0.84 | 0.83 | 0.81 | 0.31 | 0.04 | 0.78
IoU | MAKM | 0.90 | 0.79 | 0.79 | 0.21 | 0.86 | 0.86 | 0.90 | 0.90 | 0.26 | 0.26 | 0.06 | 0.19 | 0.81 | 0.34 | 0.65 | 0.59
IoU | U-Net | 0.94 | 0.96 | 0.96 | 0.93 | 0.70 | 0.70 | 0.16 | 0.16 | 0.91 | 0.91 | 0.03 | 0.84 | 0.93 | 0.81 | 0.24 | 0.68
DSS | RG | 0.97 | 0.97 | 0.97 | 0.96 | 0.93 | 0.93 | 0.92 | 0.92 | 0.92 | 0.92 | 0.91 | 0.91 | 0.89 | 0.47 | 0.80 | 0.89
DSS | MAKM | 0.95 | 0.88 | 0.88 | 0.35 | 0.92 | 0.92 | 0.95 | 0.95 | 0.42 | 0.42 | 0.11 | 0.33 | 0.90 | 0.51 | 0.79 | 0.68
DSS | U-Net | 0.97 | 0.98 | 0.98 | 0.96 | 0.82 | 0.82 | 0.27 | 0.27 | 0.95 | 0.95 | 0.07 | 0.92 | 0.96 | 0.89 | 0.39 | 0.75
Sn (%) | RG | 97 | 95 | 95 | 98 | 88 | 88 | 87 | 87 | 85 | 85 | 85 | 83 | 81 | 100 | 100 | 90
Sn (%) | MAKM | 91 | 79 | 79 | 100 | 86 | 86 | 96 | 96 | 100 | 100 | 100 | 99 | 85 | 100 | 65 | 91
Sn (%) | U-Net | 100 | 98 | 98 | 93 | 95 | 95 | 99 | 99 | 100 | 100 | 90 | 100 | 96 | 88 | 65 | 94
Sp (%) | RG | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 84 | 0 | 92
Sp (%) | MAKM | 100 | 100 | 100 | 81 | 100 | 100 | 100 | 100 | 85 | 85 | 79 | 86 | 100 | 86 | 100 | 93
Sp (%) | U-Net | 100 | 100 | 100 | 100 | 98 | 98 | 73 | 73 | 99 | 99 | 67 | 99 | 100 | 99 | 93 | 93
EF | RG | 0.03 | 0.02 | 0.02 | 0.06 | 0.00 | 0.00 | 0.02 | 0.02 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 2.27 | 23.37 | 1.72
EF | MAKM | 0.01 | 0.00 | 0.00 | 3.79 | 0.00 | 0.00 | 0.06 | 0.06 | 2.79 | 2.79 | 16.11 | 4.09 | 0.04 | 1.92 | 0.00 | 2.11
EF | U-Net | 0.05 | 0.02 | 0.02 | 0.00 | 0.35 | 0.35 | 5.23 | 5.23 | 0.10 | 0.10 | 25.57 | 0.18 | 0.03 | 0.08 | 1.69 | 2.60
OF | RG | 0.97 | 0.95 | 0.95 | 0.98 | 0.88 | 0.88 | 0.87 | 0.87 | 0.85 | 0.85 | 0.85 | 0.83 | 0.81 | 1.00 | 1.00 | 0.90
OF | MAKM | 0.91 | 0.79 | 0.79 | 1.00 | 0.86 | 0.86 | 0.96 | 0.96 | 1.00 | 1.00 | 1.00 | 0.99 | 0.85 | 1.00 | 0.65 | 0.91
OF | U-Net | 1.00 | 0.98 | 0.98 | 0.93 | 0.95 | 0.95 | 0.99 | 0.99 | 1.00 | 1.00 | 0.90 | 1.00 | 0.96 | 0.88 | 0.65 | 0.94
PSNR | RG | 72.72 | 74.40 | 74.40 | 72.72 | 70.22 | 70.22 | 69.50 | 69.50 | 69.38 | 69.38 | 75.09 | 70.79 | 68.25 | 56.31 | 48.31 | 68.75
PSNR | MAKM | 70.63 | 69.38 | 69.38 | 55.51 | 69.67 | 69.67 | 71.02 | 71.02 | 56.66 | 56.66 | 55.02 | 56.88 | 68.19 | 57.03 | 66.53 | 64.22
PSNR | U-Net | 72.99 | 76.49 | 76.49 | 72.64 | 65.12 | 65.12 | 54.06 | 54.06 | 71.02 | 71.02 | 53.00 | 70.37 | 72.64 | 66.70 | 58.92 | 66.71
Table 3. Performance comparison of RG with MAKM and U-Net for 12 randomly selected brain images from the BRATS2015 dataset.

Metric | Algorithm | im081 | im274 | im473 | im551 | im06 | im973 | im689 | im792 | im1507 | im781 | im733 | im1238 | Avg
Acc (%) | RG | 99.6 | 99.8 | 97.4 | 99.6 | 99.6 | 99.7 | 100.0 | 99.1 | 98.7 | 99.2 | 99.7 | 96.8 | 99.1
Acc (%) | MAKM | 84.9 | 89.1 | 97.2 | 95.9 | 85.4 | 79.7 | 76.9 | 87.7 | 84.3 | 95.6 | 90.4 | 84.6 | 87.6
Acc (%) | U-Net | 99.8 | 99.8 | 93.3 | 99.8 | 99.8 | 98.7 | 99.8 | 89.2 | 99.5 | 99.5 | 99.1 | 86.6 | 97.1
IoU | RG | 0.91 | 0.92 | 0.62 | 0.92 | 0.92 | 0.94 | 0.89 | 0.77 | 0.80 | 0.88 | 0.85 | 0.47 | 0.82
IoU | MAKM | 0.05 | 0.01 | 0.61 | 0.50 | 0.23 | 0.02 | 0.02 | 0.23 | 0.29 | 0.58 | 0.04 | 0.28 | 0.24
IoU | U-Net | 0.95 | 0.93 | 0.39 | 0.94 | 0.95 | 0.76 | 0.61 | 0.25 | 0.92 | 0.93 | 0.45 | 0.31 | 0.70
DSS | RG | 0.95 | 0.96 | 0.76 | 0.96 | 0.96 | 0.97 | 0.94 | 0.87 | 0.89 | 0.94 | 0.92 | 0.64 | 0.90
DSS | MAKM | 0.09 | 0.01 | 0.75 | 0.67 | 0.38 | 0.03 | 0.03 | 0.37 | 0.44 | 0.74 | 0.09 | 0.44 | 0.34
DSS | U-Net | 0.98 | 0.96 | 0.56 | 0.97 | 0.97 | 0.86 | 0.76 | 0.40 | 0.96 | 0.96 | 0.62 | 0.47 | 0.79
Sn (%) | RG | 95.3 | 93.9 | 83.5 | 92.1 | 94.5 | 96.2 | 92.2 | 82.0 | 79.8 | 91.2 | 92.9 | 46.8 | 86.7
Sn (%) | MAKM | 18.2 | 3.5 | 89.1 | 99.3 | 100.0 | 7.4 | 100.0 | 96.9 | 99.4 | 99.9 | 25.8 | 98.5 | 69.8
Sn (%) | U-Net | 98.9 | 97.7 | 85.7 | 98.0 | 98.8 | 97.2 | 60.9 | 98.3 | 92.9 | 95.4 | 45.2 | 99.9 | 89.1
Sp (%) | RG | 99.8 | 100.0 | 98.2 | 100.0 | 99.9 | 99.9 | 100.0 | 99.8 | 100.0 | 99.8 | 99.8 | 100.0 | 99.7
Sp (%) | MAKM | 87.9 | 90.9 | 97.6 | 95.7 | 84.7 | 82.9 | 76.8 | 87.4 | 83.3 | 95.3 | 91.6 | 83.7 | 88.2
Sp (%) | U-Net | 99.8 | 99.9 | 93.7 | 99.8 | 99.8 | 98.8 | 100.0 | 88.9 | 99.9 | 99.8 | 100.0 | 85.8 | 97.2
EF | RG | 0.05 | 0.02 | 0.35 | 0.01 | 0.03 | 0.02 | 0.03 | 0.06 | 0.00 | 0.03 | 0.09 | 0.00 | 0.06
EF | MAKM | 0.18 | 0.03 | 0.89 | 0.99 | 1.00 | 0.07 | 1.00 | 0.97 | 0.99 | 1.00 | 0.26 | 0.98 | 0.70
EF | U-Net | 0.99 | 0.98 | 0.86 | 0.98 | 0.99 | 0.97 | 0.61 | 0.98 | 0.93 | 0.95 | 0.45 | 1.00 | 0.89
OF | RG | 0.95 | 0.94 | 0.83 | 0.92 | 0.94 | 0.96 | 0.92 | 0.82 | 0.80 | 0.91 | 0.93 | 0.47 | 0.87
OF | MAKM | 0.18 | 0.03 | 0.89 | 0.99 | 1.00 | 0.07 | 1.00 | 0.97 | 0.99 | 1.00 | 0.26 | 0.98 | 0.70
OF | U-Net | 0.99 | 0.98 | 0.86 | 0.98 | 0.99 | 0.97 | 0.61 | 0.98 | 0.93 | 0.95 | 0.45 | 1.00 | 0.89
PSNR | RG | 165.52 | 174.19 | 147.51 | 167.26 | 166.43 | 170.25 | 188.41 | 157.89 | 154.39 | 159.74 | 169.80 | 145.23 | 163.89
PSNR | MAKM | 129.72 | 132.99 | 146.42 | 142.69 | 130.06 | 126.75 | 125.49 | 131.81 | 129.36 | 142.02 | 134.30 | 129.55 | 133.43
PSNR | U-Net | 172.31 | 175.68 | 137.87 | 170.98 | 171.49 | 154.11 | 175.68 | 133.12 | 163.80 | 164.69 | 157.43 | 130.96 | 159.01
Table 4. Performance comparison of RG with MAKM and U-Net for 800 brain images from the BRATS2015 dataset.

Metric | Algorithm | im081 | im274 | im473 | im551 | im06 | im973 | im689 | im792 | im1507 | im781 | im733 | im1238 | im368 | im551 | Ovr_Avg
Acc (%) | RG | 99.6 | 99.8 | 97.4 | 99.6 | 99.6 | 99.7 | 100.0 | 99.1 | 98.7 | 99.2 | 99.7 | 96.8 | 95.2 | 97.8 | 98.72
Acc (%) | MAKM | 84.9 | 89.1 | 97.2 | 95.9 | 85.4 | 79.7 | 76.9 | 87.7 | 84.3 | 95.6 | 90.4 | 84.6 | 98.8 | 98.7 | 88.60
Acc (%) | U-Net | 99.8 | 99.8 | 93.3 | 99.8 | 99.8 | 98.7 | 99.8 | 89.2 | 99.5 | 99.5 | 77.6 | 86.6 | 83.8 | 99.8 | 98.20
IoU | RG | 0.91 | 0.92 | 0.62 | 0.92 | 0.92 | 0.94 | 0.89 | 0.77 | 0.80 | 0.88 | 0.85 | 0.47 | 0.28 | 0.77 | 0.67
IoU | MAKM | 0.05 | 0.01 | 0.61 | 0.50 | 0.23 | 0.02 | 0.02 | 0.23 | 0.29 | 0.58 | 0.04 | 0.28 | 0.81 | 0.85 | 0.34
IoU | U-Net | 0.95 | 0.93 | 0.39 | 0.94 | 0.95 | 0.76 | 0.61 | 0.25 | 0.92 | 0.93 | 0.45 | 0.31 | 0.26 | 0.27 | 0.60
DSS | RG | 0.95 | 0.96 | 0.76 | 0.96 | 0.96 | 0.97 | 0.94 | 0.87 | 0.89 | 0.94 | 0.92 | 0.87 | 0.43 | 0.96 | 0.80
DSS | MAKM | 0.09 | 0.01 | 0.75 | 0.67 | 0.38 | 0.03 | 0.03 | 0.37 | 0.44 | 0.74 | 0.09 | 0.34 | 0.90 | 0.92 | 0.45
DSS | U-Net | 0.98 | 0.96 | 0.56 | 0.97 | 0.97 | 0.86 | 0.76 | 0.40 | 0.96 | 0.96 | 0.62 | 0.47 | 0.42 | 0.43 | 0.69
Sn (%) | RG | 95.3 | 93.9 | 83.5 | 92.1 | 94.5 | 96.2 | 92.2 | 82.0 | 79.8 | 91.2 | 92.9 | 46.8 | 26.8 | 76.7 | 71.1
Sn (%) | MAKM | 18.2 | 3.5 | 89.1 | 99.3 | 100.0 | 7.4 | 100.0 | 96.9 | 99.4 | 99.9 | 25.8 | 98.5 | 82.4 | 85.5 | 89.6
Sn (%) | U-Net | 98.9 | 97.7 | 85.7 | 98.0 | 98.8 | 97.2 | 60.9 | 98.3 | 92.9 | 95.4 | 45.2 | 99.9 | 89.4 | 97.8 | 90.7
Sp (%) | RG | 99.8 | 100.0 | 98.2 | 100.0 | 99.9 | 99.9 | 100.0 | 99.8 | 100.0 | 99.8 | 99.8 | 100.0 | 100 | 100 | 99.8
Sp (%) | MAKM | 87.9 | 90.9 | 97.6 | 95.7 | 84.7 | 82.9 | 76.8 | 87.4 | 83.3 | 95.3 | 91.6 | 83.7 | 100 | 100 | 88.6
Sp (%) | U-Net | 99.8 | 99.9 | 93.7 | 99.8 | 99.8 | 98.8 | 100.0 | 88.9 | 99.9 | 99.8 | 100.0 | 85.8 | 83.5 | 75.7 | 92.1
EF | RG | 0.05 | 0.02 | 0.35 | 0.01 | 0.03 | 0.02 | 0.03 | 0.06 | 0.00 | 0.03 | 0.09 | 0.00 | 0 | 0 | 0.06
EF | MAKM | 0.18 | 0.03 | 0.89 | 0.99 | 1.00 | 0.07 | 1.00 | 0.97 | 0.99 | 1.00 | 0.26 | 0.98 | 0.82 | 0.85 | 0.90
EF | U-Net | 0.99 | 0.98 | 0.86 | 0.98 | 0.99 | 0.97 | 0.61 | 0.98 | 0.93 | 0.95 | 0.45 | 1.00 | 0.89 | 0.98 | 0.91
OF | RG | 0.95 | 0.94 | 0.83 | 0.92 | 0.94 | 0.96 | 0.92 | 0.82 | 0.80 | 0.91 | 0.93 | 0.47 | 0.27 | 0.77 | 0.71
OF | MAKM | 0.18 | 0.03 | 0.89 | 0.99 | 1.00 | 0.07 | 1.00 | 0.97 | 0.99 | 1.00 | 0.26 | 0.98 | 0.82 | 0.85 | 0.90
OF | U-Net | 0.99 | 0.98 | 0.86 | 0.98 | 0.99 | 0.97 | 0.61 | 0.98 | 0.93 | 0.95 | 0.45 | 1.00 | 0.89 | 0.98 | 0.91
PSNR | RG | 165.52 | 174.19 | 147.51 | 167.26 | 166.43 | 170.25 | 188.41 | 157.89 | 154.39 | 159.74 | 169.80 | 145.23 | 141.3 | 149.8 | 157.0
PSNR | MAKM | 129.72 | 132.99 | 146.42 | 142.69 | 130.06 | 126.75 | 125.49 | 131.81 | 129.36 | 142.02 | 134.30 | 129.55 | 155.0 | 154.1 | 138.6
PSNR | U-Net | 172.31 | 175.68 | 137.87 | 170.98 | 171.49 | 154.11 | 175.68 | 133.12 | 163.80 | 164.69 | 157.43 | 130.96 | 129.1 | 125.8 | 152.0
Table 5. Comparison of the proposed approach with U-Net and its variants on the BRATS2015 dataset.

Authors, Year and Citation | Model | Dataset | DSS
Daimary et al. [42] | U-SegNet | BRATS2015 | 0.73
Zhou et al., 2019 | OM-Net + CGAp | BRATS2015 | 0.87
Kayalibay et al., 2017 | CNN + 3D filters | BRATS2015 | 0.85
Isensee et al., 2018 | U-Net + more filters + data augmentation + dice-loss | BRATS2015 | 0.85
Kamnitsas et al., 2016 | 3D CNN + CRF | BRATS2015 | 0.85
Qin et al., 2018 | AFN-6 | BRATS2015 | 0.84
Havaei et al. [43] | CNN (whole) | BRATS2015 | 0.88
Havaei et al. [43] | CNN (core) | BRATS2015 | 0.79
Havaei et al. [43] | CNN (enhanced) | BRATS2015 | 0.73
Pereira et al. [44] | CNN (whole) | BRATS2015 | 0.87
Pereira et al. [44] | CNN (core) | BRATS2015 | 0.73
Pereira et al. [44] | CNN (enhanced) | BRATS2015 | 0.68
Malmi et al. [45] | CNN (whole) | BRATS2015 | 0.80
Malmi et al. [45] | CNN (core) | BRATS2015 | 0.71
Malmi et al. [45] | CNN (enhanced) | BRATS2015 | 0.64
Taye et al., 2018 [46] | MAKM | BRATS2015 | 0.68
Re-implemented | U-Net | BRATS2015 | 0.75
Erena et al., 2020 | Case-1: Proposed approach (15 randomly selected images) | BRATS2015 | 0.89
Erena et al., 2020 | Case-2: Proposed approach (12 randomly selected images) | BRATS2015 | 0.90
Erena et al., 2020 | Case-3: Proposed approach (800 brain images) | BRATS2015 | 0.80
Erena et al., 2020 | Average: Proposed approach | BRATS2015 | 0.86