
Toward Effective Medical Image Analysis Using Hybrid Approaches—Review, Challenges and Applications

College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
LR-SITI Laboratoire Signal Image et Technologies de l’Information, Université de Tunis El Manar, Tunis 1002, Tunisia
Department of Industrial and Systems Engineering, College of Engineering, University of Jeddah, Jeddah 21589, Saudi Arabia
Author to whom correspondence should be addressed.
Information 2020, 11(3), 155;
Submission received: 18 January 2020 / Revised: 6 March 2020 / Accepted: 7 March 2020 / Published: 13 March 2020


Accurate medical image analysis plays a vital role in several clinical applications. Nevertheless, the immense and complex volume of data to be processed makes it difficult to design effective algorithms. The first aim of this paper is to examine this area of research and to provide some relevant reference sources related to medical image analysis. Then, an effective hybrid solution is proposed to further improve the expected results. It exploits the benefits of cooperation between different complementary approaches, such as statistical-based, variational-based and atlas-based techniques, while reducing their drawbacks. In particular, a pipeline framework involving a preprocessing step, a classification step and a refinement step based on a variational method is developed to accurately identify pathological regions in biomedical images. The preprocessing step removes noise and improves the quality of the images. The classification then relies on both a symmetry-axis detection step and non-linear learning with the SVM algorithm. Finally, a level set-based model is applied to refine the boundary detection of the region of interest. In this work we show that an accurate initialization step can enhance final performance. Results are reported for the challenging application of brain tumor segmentation.

1. Introduction: Medical Image Analysis Challenges

Precise analysis of medical images, such as the segmentation, detection and quantification of tumors and cancers, is an important task for many clinical applications, including medical content-based image retrieval, 3D pathology modelling, construction of normal and abnormal templates (atlases), diagnosis, and therapy evaluation [1,2,3,4]. However, several issues and challenges persist because of the immense and complex volume of data to be processed. Designing effective algorithms is difficult due to the sheer size of the datasets coupled with the inter-class and intra-class variability of anatomical shape and appearance. On the other hand, manually delineating the boundaries of specific regions in medical images is impractical. An automated and robust process is therefore essential.
Several image processing-based techniques have been proposed in the literature, and many of them play a vital role in medical imaging applications. However, many of these approaches still require improvement. To overcome some existing limitations, this research focuses on the development of an effective hybrid framework for a specific task, image segmentation. Our key idea is based on the principle of cooperation between complementary algorithms derived from variational models, statistical classification techniques, and atlas-guided methods in order to achieve high performance. Our justification for choosing among the aforementioned approaches is as follows: (1) classification techniques have been used successfully to identify large anatomical structures, but they fail in the presence of noise; (2) variational models have been applied successfully to the localization of particular anatomical structures, but they often need an accurate initialization step and, moreover, fail to identify small lesions; (3) atlas-based registration techniques have been used extensively to identify anatomical structures through nonlinear registration, but they cannot be used directly to segment unexpected structures (i.e., tumors or lesions). Combining the right approaches into a single powerful framework can help achieve good results. Our main contribution is therefore the implementation of an effective hybrid solution that further improves the expected results (i.e., a framework that exploits the cooperation of different complementary approaches).
The organization of this paper is as follows. In the next three sections, three categories of approaches are reviewed. Since an exhaustive review of all segmentation methods is impossible in a single article, we restrict ourselves to presenting only some relevant approaches related to medical image analysis. In Section 5, a hybrid framework and experiments for the challenging application of brain tumor segmentation are given. Finally, in Section 6, we present our conclusions and future work.

2. Atlas-Guided Methods

An atlas (or prior template) is defined as a reference in which specific structures in the image are placed in a standardized coordinate system. In medical image analysis, atlas-guided methods have raised much interest since they exploit prior knowledge to achieve a precise objective (i.e., image segmentation and image registration). The required information about the size, shape and location of different anatomical structures is obtained directly from a constructed digital anatomical atlas. This paradigm is of great interest for many applications (e.g., surgical planning, surgical navigation, image-guided surgery, automatic labelling, morphological and morphometric studies of brain anatomy, three-dimensional visualization, interactive segmentation, multi-modality fusion, quantitative assessment of diseases, functional mapping). Atlas-based segmentation is usually viewed as a registration problem. Registration is defined as the process of finding a geometric transformation between two images that maps pixels from one input image to homologous pixels in the other. Labels in the atlas are then transferred, and the warping process allows simultaneous segmentation of several structures. It is important to distinguish between pixel-based and model-based techniques, rigid and non-rigid registration, and intra-subject and inter-subject registration.
Atlas-based segmentation depends mainly on two components: (1) the choice of the registration technique and (2) the prior model used. Registration of medical images has been described in many publications (see Reference [5] for more details on this topic). The first stereotactic atlas of brain function and anatomy was proposed in Reference [6]. Although this atlas was widely used for anatomic localization, it could not easily evolve. For this reason, a digitized brain atlas [7] was developed to overcome these drawbacks and to provide more detail. The design of the standard atlas-guided segmentation is given in Figure 1.
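The label-transfer idea behind atlas-guided segmentation can be sketched as follows; the displacement field is assumed to come from a separate registration step, and all names are illustrative:

```python
import numpy as np

def warp_labels(atlas_labels, displacement):
    """Transfer atlas labels to a target image grid by pulling each
    target pixel back through a precomputed displacement field
    (nearest-neighbour interpolation, so labels stay discrete)."""
    h, w = atlas_labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # displacement[..., 0] = row offset, displacement[..., 1] = column offset
    src_y = np.clip(np.rint(ys + displacement[..., 0]), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + displacement[..., 1]), 0, w - 1).astype(int)
    return atlas_labels[src_y, src_x]

# Toy example: a 4x4 atlas with one labelled structure, shifted one pixel right.
atlas = np.zeros((4, 4), dtype=int)
atlas[1:3, 0:2] = 7                      # structure labelled "7" in the atlas
shift = np.zeros((4, 4, 2))
shift[..., 1] = -1                       # pull labels one column to the right
warped = warp_labels(atlas, shift)
```

In a real pipeline the displacement field is the output of the (rigid or non-rigid) registration component, and the same warp segments all atlas structures at once.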
In Reference [7], a brain segmentation technique was proposed in which a non-linear spatial transformation was identified that best maps the template onto the given image. In Reference [8], the authors proposed a pipeline of steps involving intensity normalization, non-rigid registration, atlas alignment and EM-based classification of major structures in MRI, where EM is used to simultaneously estimate image artifacts and anatomical label maps.
A typical case of medical image analysis is the delineation of brain tumors, which remains a difficult problem. In fact, small lesions are difficult to distinguish from noise and other structures. Moreover, tumor structures have no equivalent in the atlas and vary greatly in size, shape, location, tissue composition and homogeneity. The variability induced by the tumor shape thus leads to inappropriate registration between the template and the input image.
Manually segmenting tumor regions slice-by-slice is very laborious. Although it is difficult to use an atlas for extracting pathological structures, a variety of methods suggest using an atlas in different ways for this purpose. An algorithm iterating both nonlinear registration and classification steps to identify normal and anomalous structures was proposed in Reference [9]. In this case, a nonlinear registration is performed to spatially adapt (align and register) the template to match the individual image. All phases are repeated until the classification step and the matched anatomy agree.
Another work [10] is based on the combination of multi-parameter images (T1, T2 and PD MRI), a classification algorithm and a prior knowledge system. Here, the knowledge-based system allows the image to be detected and labelled. In Reference [11], the authors introduce a tumor “seed” in the atlas. This work was extended in Reference [12] by developing a radially symmetric model of tumor growth. A nonrigid registration between the two input images (the one to be treated and the atlas) generates an early deformation that inserts the so-called “pathology seed” into the template (atlas). Seeding a synthetic tumor into the atlas generates a “template with lesion” that is then used for lesion detection. The last step is the deformation of the seeded atlas using optical flow principles and a model of lesion growth. The main problem with this method is the early registration, which is not obvious when the tumor is located near the border of the brain.

3. Variational Deformable Models

Variational models are another common family of approaches broadly applied in several applications and have also been explored in medical image analysis [13]. Variational methods are more effective than classical edge detection approaches since they offer an appropriate framework for merging different sources of information and provide a coherent support for analyzing discrete contours and surfaces. In particular, the so-called “level-set” method [14] is one of the attractive approaches in shape modelling. It allows the contour topology to be handled without intervention and intrinsic properties (e.g., curvature) to be computed in a very simple way. Moreover, no parameterization step is required. The key idea behind the level-set approach [15] is to embed the evolution of a given 2D contour as the zero level set of a three-dimensional surface. Basically, the level-set function ψ is determined by solving a PDE of the form:
\frac{\partial \psi}{\partial t} = F \, |\nabla \psi|, \quad (1)
where F is a speed function. It depends on several parameters such as the image gradient and the geometric curvature. It should be noted that several speed (evolution) functions have been developed for the level-set algorithm. We can roughly categorize them as using edge-based, region-based or prior-based information.
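As an illustration, the generic evolution equation can be discretized with a simple explicit scheme; the sketch below assumes a constant speed F (a practical implementation would use upwind differencing and periodically reinitialize ψ as a signed distance function):

```python
import numpy as np

def evolve_level_set(psi, speed, dt=0.1, steps=50):
    """Explicit update psi <- psi + dt * F * |grad psi|.
    `speed` may be a scalar or an array the same shape as psi."""
    for _ in range(steps):
        gy, gx = np.gradient(psi)
        grad_norm = np.sqrt(gx**2 + gy**2)
        psi = psi + dt * speed * grad_norm
    return psi

# Initialize psi as the signed distance to a small circle (positive inside)
# and expand it with a positive speed F.
n = 64
ys, xs = np.mgrid[0:n, 0:n]
psi0 = 10.0 - np.sqrt((ys - 32)**2 + (xs - 32)**2)
psi = evolve_level_set(psi0, speed=1.0)
# The zero level set (psi >= 0 region) grows under a positive speed.
```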
Edge-based information: the authors of Reference [16] were the first to define an edge-based term (the local gradient of the image) to be integrated into the level-set function. For object segmentation and tracking, this edge term is used to determine when the curve evolution should stop. The proposed level-set equation also involves a constant term (c) for convergence and a mean curvature term (k) for smoothing. It must be pointed out that local gradient information alone is not enough, notably in the presence of noise and blur. The proposed level-set function is expressed as follows:
\frac{\partial \psi}{\partial t} = g(|\nabla I|)\,(c + \gamma k)\,|\nabla \psi|. \quad (2)
Region-based information: to deal with the limitations of the edge-based term, another term, called “region-based information”, was defined for the level-set function. It has the advantage of offering more information and being more robust to noise in the image. The most widely used region-based term in the literature is the one proposed in Reference [17]. It is defined as:
\frac{\partial \psi}{\partial t} = \delta(\psi)\left[\alpha k - (I - c_1)^2 + (I - c_2)^2\right], \quad (3)
where c_1 is the average image intensity inside the region of interest (ROI) and c_2 is the average intensity outside the ROI. In Reference [18], another evolution equation was developed; its limitation is that it can only detect the enhanced parts of the ROI. Their level-set function is defined as:
\frac{\partial \psi}{\partial t} = \left[P(A) - P(B) + c_1 k\right] |\nabla \psi| + c_2 \nabla^2 \psi, \quad (4)
where P(A) and P(B) represent the a posteriori probabilities of the ROI “A” and the background “B”, respectively, and c_1 and c_2 are two constants. In Reference [19], the authors proposed another region term (see Equation (5)). In this model, the deformed contour shrinks if the boundary encompasses portions of the background and grows if the frontier is inside the ROI.
\frac{\partial \psi}{\partial t} = \alpha\, D(x)\,|\nabla \psi| + (1 - \alpha)\,k\,|\nabla \psi|, \quad (5)
where D is a data term that expands or contracts the model toward the required features; k is the curvature of the surface; the parameter α ∈ [0, 1] governs the smoothness of the model; and T supervises the brightness property.
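The region statistics used in the region-based term of Reference [17] are simply the mean intensities on either side of the contour. A minimal sketch, assuming ψ > 0 marks the inside of the ROI (variable names are illustrative):

```python
import numpy as np

def region_means(image, psi):
    """c1: mean intensity inside the current contour (psi > 0),
    c2: mean intensity outside (psi <= 0)."""
    inside = psi > 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[~inside].mean() if (~inside).any() else 0.0
    return c1, c2

def region_speed(image, psi):
    """Region force: positive where the pixel intensity looks more like
    the inside region, negative where it looks like the outside region."""
    c1, c2 = region_means(image, psi)
    return -(image - c1)**2 + (image - c2)**2

# Bright 20x20 square (intensity 1.0) on a dark background (0.0).
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
psi = np.where(img > 0.5, 1.0, -1.0)     # contour already on the square
c1, c2 = region_means(img, psi)
F = region_speed(img, psi)
```

When the contour matches the object, the force is positive inside the bright region and negative outside, so the contour is in equilibrium.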
Another evolution equation was proposed in Reference [20]. The proposed evolution function is based on the concept of a fuzzy decision (Equation (6)). The latter has the advantage of fusing both local (gradient) and global information into the same term.
\frac{\partial \psi}{\partial t} = g\bigl(s(P_T, I)\bigr)\,(\rho k - \nu)\,|\nabla \psi|, \quad (6)
where P_T is a transition probability between the inside and the outside of the ROI, g is a function that stops the evolution of the model at the border of the ROI, and s is the output of a fuzzy decision system.
Shape prior-based information: this third type of term for the level-set approach is presented in different forms in the literature [21,22,23,24,25]. One of the interesting works that investigates a priori knowledge was proposed under the name “Active Appearance Models” [26]. The key idea is to determine, from a training set, the average shape of the required object using principal component analysis (PCA) on specific points positioned on all learned shapes. Given that the points are selected manually, the algorithm becomes tedious and impractical. In Reference [21], shapes are represented by a signed distance function and PCA is applied to this set of training distance functions. The proposed evolution equation is expressed as follows:
\frac{\partial \psi}{\partial t} = g(|\nabla I|)(c + \alpha k)\,|\nabla \psi| + \lambda\left(\psi^*(t) - \psi(t)\right), \quad (7)
where ψ* is the prior level-set shape function. Other works were published in References [24,25] in order to model shape variation using nonparametric techniques involving kernel density estimation. In this manner, arbitrary distributions of the shape prior can be approximated.
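The idea of Reference [21] — PCA on signed distance functions — can be sketched as follows. The circular training shapes and function names are illustrative, and a real system would rigidly align the shapes before the PCA:

```python
import numpy as np

def shape_pca(sdf_stack, n_modes=2):
    """sdf_stack: (n_shapes, H, W) signed distance functions.
    Returns the mean shape and the first principal modes of variation."""
    n, h, w = sdf_stack.shape
    X = sdf_stack.reshape(n, h * w)
    mean = X.mean(axis=0)
    # SVD of the centred data gives the principal components.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean.reshape(h, w), vt[:n_modes].reshape(n_modes, h, w)

# Training set: circles of slightly different radii as signed distance maps
# (positive inside the circle, negative outside).
ys, xs = np.mgrid[0:32, 0:32]
shapes = np.stack([r - np.sqrt((ys - 16)**2 + (xs - 16)**2) for r in (6, 8, 10)])
mean_shape, modes = shape_pca(shapes)
# The zero level set of mean_shape is a circle of the average radius (8),
# and the first mode captures the radius variation.
```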

4. Statistical Classification and Segmentation of Medical Images

Classification techniques have been applied successfully to the determination of major anatomical structures in medical imaging such as MRI and CT. For more details, the reader can refer to a large class of pattern classification-based methods [27,28,29,30,31,32,33,34,35,36,37,38,39,40]. The Support Vector Machine (SVM) is one of the most widely used techniques. In this research, we investigate this method for segmenting tumor regions in different medical images. In the following section we briefly review the details of SVM needed to show how it can be applied in our context.

Linear and Non-Linear Support Vector Machines (SVM)

The Support Vector Machine (SVM) is one of the best-known machine learning algorithms for data classification [41]. It was developed as a statistical learning technique for both regression and classification, and it has been employed effectively in several related applications [42,43,44,45]. SVM can be used in either supervised or unsupervised settings. We distinguish in particular between the linear and non-linear cases.
Suppose that we have a set of pixels x_i, i = 1, …, N with two possible classes y_i ∈ {−1, +1}, for images with and without pathologies. The key idea behind SVM is to find a linear or non-linear boundary (also called a hyperplane) w^T x_i + b = 0 that separates the positive examples from the negative ones, where w is a weight vector, x_i is the input vector and b is the bias term. This hyperplane is intended to correctly classify the training samples. The search for a solution thus reduces to minimizing the following objective function:
\min_{w,b} \; \varphi(w) = \frac{\|w\|^2}{2} \quad \text{subject to} \quad y_i(w^T x_i + b) \geq 1, \quad i = 1, \ldots, N. \quad (8)
Sometimes the calculated hyperplane may be undesirable if the data are noisy. It is then better to soften the boundary by introducing a vector of slack variables ξ_i, i = 1, …, N, which measure the amount of constraint violation, allowing some input data to be misclassified. The problem becomes:
\min_{w,b} \; \varphi(w) = \frac{\|w\|^2}{2} + C \sum_{i=1}^{N} \xi_i \quad \text{subject to} \quad y_i(w^T x_i + b) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad (9)
where the regularization parameter C controls the trade-off between minimizing the misclassification cost and maximizing the margin. If C is too large, the algorithm will overfit the dataset. Each misclassified example x_i incurs a cost ξ_i. The problem is then expressed compactly in Lagrangian form by introducing multipliers α_i and β_i and minimizing the following function:
L = \frac{\|w\|^2}{2} + C \sum_{i=1}^{N} \xi_i - \sum_{i=1}^{N} \alpha_i \left[ y_i(w^T x_i + b) - 1 + \xi_i \right] - \sum_{i=1}^{N} \beta_i \xi_i \quad \text{subject to} \quad \alpha_i \geq 0 \text{ and } \beta_i \geq 0. \quad (10)
The weight vector w can be obtained from the Karush-Kuhn-Tucker (KKT) conditions [46] as w = \sum_{i=1}^{N} \alpha_i y_i x_i. The decision function is then f(x) = \mathrm{sign}(w \cdot x - b).
In the linear case, a separating hyperplane may be used to divide the data, but in practice the data can rarely be separated linearly. Other forms are therefore developed for the non-linear case using the so-called kernel trick K(x, y) = φ(x) · φ(y). Here, kernels play an important role in implicitly mapping the data to a high-dimensional space. In the literature, the best-known SVM kernels are the linear, polynomial, radial basis and sigmoid kernels. The decision function then becomes f(x) = \mathrm{sign}\left(\sum_{i=1}^{N} \alpha_i y_i K(x_i, x) - b\right).
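As an illustration of the kernel trick, the sketch below uses a dual-form (kernel) perceptron rather than a full SVM solver; the decision function has the same kernelized form sign(∑ α_i y_i K(x_i, x)). The XOR data, the γ value and the helper names are illustrative:

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """RBF kernel K(x, y) = exp(-gamma * ||x - y||^2): the kernel trick
    replaces the inner product phi(x).phi(y) without computing phi."""
    return np.exp(-gamma * np.sum((x - y)**2))

def kernel_perceptron(X, y, gamma=0.5, epochs=20):
    """Dual-form perceptron: a stand-in for the SVM dual that shows how
    a kernelized decision separates data no hyperplane can."""
    n = len(X)
    alpha = np.zeros(n)
    K = np.array([[rbf_kernel(X[i], X[j], gamma) for j in range(n)]
                  for i in range(n)])
    for _ in range(epochs):
        for i in range(n):
            if np.sign(np.sum(alpha * y * K[:, i])) != y[i]:
                alpha[i] += 1.0
    return alpha

def predict(X_train, y, alpha, x, gamma=0.5):
    k = np.array([rbf_kernel(xi, x, gamma) for xi in X_train])
    return int(np.sign(np.sum(alpha * y * k)))

# XOR data: not linearly separable, but separable with the RBF kernel.
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y = np.array([-1, -1, 1, 1])
alpha = kernel_perceptron(X, y)
preds = [predict(X, y, alpha, xi) for xi in X]
```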

5. A Unified Framework for Brain Tumor Segmentation

One of the important applications of biomedical image analysis is brain cancer segmentation in magnetic resonance imaging (MRI). It is a critical step for many clinical applications and a difficult task due to the complexity of MR images. Recently, some promising works related to this area of research have been published using various approaches [32,35,47,48,49,50,51]. Deep learning-based approaches have also been used extensively for image classification, segmentation, enhancement and registration; they can be applied, for example, to image registration [52], segmentation [53] or detection [54]. Deep learning-based approaches have been proposed as an efficient alternative for learning from large-scale medical imaging data. A summary of several papers applying the most successful deep learning algorithms to medical image analysis is given in References [55,56,57,58,59]. In particular, the detection of anatomical brain structures and brain lesions and the prediction of Alzheimer’s disease using deep learning have gained interest [60,61]. An overview of current deep learning-based segmentation approaches is given in Reference [62].
In order to reach more accurate results, we propose a hybrid framework that deals with the main issues related to the complexity of brain MRI, such as the effect of noise and intensity variations within and between soft tissues. The developed hybrid approach (Figure 2) is a pipeline framework that involves a preprocessing step, a classification step and a refinement step with a new formulation of the level set-based evolution equation that we developed previously in Reference [48].
The preprocessing step improves the quality of the MRI images. In our work, we consider noise reduction, image registration and intensity normalization. Local noise is reduced with an anisotropic diffusion filter as proposed in Reference [63]. Input images are also aligned by computing a transformation that allows us to exploit multiple MR modalities. In order to detect the presence of any possible tumor in the input image, we extract the sagittal symmetry axis of the brain as developed in Reference [64] and then compare the two brain hemispheres. The output of this step drives the next step, image classification via the supervised non-linear learning algorithm SVM. Finally, a variational model is applied to extract the region of interest. We use the adaptive speed function for level set-based segmentation proposed in Reference [48]. Our key motivation is to exploit the collaboration of different sources of information (boundary and regional) within the same evolution equation. The adaptive level-set function, in conjunction with the previous steps, provides more robust and appropriate segmentation results. The modified speed function that we implemented is given as follows:
\frac{\partial \psi}{\partial t} = \left[\alpha_r F_{region}(I) + \alpha_b \,\frac{c + k}{1 + |\nabla I|}\right] |\nabla \psi|, \quad (11)
where α_b and α_r are real coefficients, c is a constant and k is a curvature term calculated from the level-set function as in Reference [15]. The region-based term F_{region}(I) is defined as:
F_{region}^{i+1}(I) = \begin{cases} I - (m_{T_i} - k \cdot \sigma_{T_i}) & \text{if } I < m_{T_i} \\ (m_{T_i} + k \cdot \sigma_{T_i}) - I & \text{otherwise,} \end{cases} \quad (12)
where k is a weighting term, σ_{T_i} is the variance value, m_{T_i} is the mean value, T_{i+1} is a threshold value, and T is the index associated with the region to be segmented.
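The piecewise region term above can be sketched directly in code; a minimal illustration, treating k as a constant weight and assuming m and σ are the current mean and standard deviation of the region of interest:

```python
import numpy as np

def region_force(image, m, sigma, k=1.0):
    """Piecewise region term: positive (expanding) where the intensity lies
    inside the band [m - k*sigma, m + k*sigma], negative (contracting)
    where it falls outside that band."""
    lower = m - k * sigma
    upper = m + k * sigma
    return np.where(image < m, image - lower, upper - image)

# Hypothetical intensities: values near the region mean (1.0) give a
# positive force, values far from it a negative force.
img = np.array([0.0, 0.5, 1.0, 2.0])
F = region_force(img, m=1.0, sigma=0.3)
```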
Experiments were carried out on the Brain Tumor MRI Database [65]. The input data consist of T1 and T1w images, where T1w denotes the modality obtained after injection of a contrast agent. The sheer size of the dataset is 25 × 181 = 4525 slices, and it already contains a ground truth (labelled images) for comparison. The total number of images used in this work is 1280. We apply the 10-fold cross-validation principle: the dataset is divided into 10 subsets (1280/10 = 128 images per subset); the first subset is chosen as the test subset and the remaining 9 (9 × 128 = 1152 images) are used for training; then the second subset is used for testing and the remaining 9 for training, and so on. The average result is then computed. Each image in the dataset is represented by a feature vector, and the classification process is based on these feature vectors rather than on pixel values. In our work, nine visual features based on texture characteristics are computed from the GLCM (Gray Level Co-occurrence Matrix): contrast, correlation, cluster shade, dissimilarity, energy, entropy, homogeneity, mean, and standard deviation. We use an SVM with a Radial Basis Function (RBF) kernel. Its hyperparameters are gamma (the radius of the RBF) and C; C must keep the training error as small as possible while still generalizing well. The parameters used are gamma = 0.1 and C = 1.0. The computational complexity of the RBF SVM is O(n²), where n is the number of input dimensions, and the level-set approach also has a complexity of O(n²); the proposed method therefore has a complexity comparable to the baseline methods. The validation process involves different measures based on the true positives (TP), false positives (FP), false negatives (FN) and true negatives (TN), as follows:
  • Sensitivity = TP / (TP + FN)
  • Specificity = TN / (FP + TN)
  • Similarity index (SI) = 2·TP / (2·TP + FN + FP),
where the similarity index (or Dice similarity coefficient) computes the normalized intersection of two segmentations (the ground truth and the proposed segmentation). Some of the obtained results are given in Table 1, Table 2 and Table 3. Qualitative and quantitative evaluations are depicted in Table 4 and in Figure 3 and Figure 4. The results show that the detected tumor boundaries are very close to the expert’s delineations and very satisfactory compared to the ground truth. They prove the merit of the proposed method, given that the average similarity index is above 80%, which indicates strong agreement. Our results are also very competitive according to the comparative study in Table 5. An accuracy above 80% is strongly acceptable compared to the ground truth, given that any similarity index above 0.7 is considered a good result, as stated in Reference [66]. These results illustrate the advantage of the proposed method in offering high segmentation performance: merging different sources of information (i.e., the combination of local and global information) and different complementary approaches into the same hybrid framework increases the segmentation accuracy.
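The validation measures above can be computed directly from binary masks; a minimal sketch (the toy masks are illustrative):

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Sensitivity, specificity and Dice similarity index from binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (fp + tn)
    dice = 2 * tp / (2 * tp + fn + fp)
    return sensitivity, specificity, dice

# Toy masks: a ground-truth square vs. a prediction shifted down one row.
truth = np.zeros((10, 10), dtype=bool)
truth[2:6, 2:6] = True                   # 16 true pixels
pred = np.zeros((10, 10), dtype=bool)
pred[3:7, 2:6] = True
sens, spec, dice = segmentation_scores(pred, truth)
```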

6. Conclusions and Discussion

In this paper, we reviewed relevant works in the area of medical image analysis and then proposed an effective hybrid solution for medical image segmentation. The developed framework exploits the benefits of cooperation between complementary approaches such as statistical-based, variational-based and atlas-based techniques. We demonstrated the importance of each step in the pipeline: each step serves as an effective initialization for the next, so that a more stable and accurate process is established. The results obtained for brain tumor segmentation show the merits of this cooperation of different algorithms. Despite the good results achieved, some improvements need to be addressed in future work, since the current approach cannot solve all possible issues and presents some weaknesses. For instance, the robustness of the initialization step depends mainly on the registration preprocessing step; applying a more efficient registration algorithm would give a more accurate initialization and avoid significant errors in parameter estimation. In particular, an intensity-based FFD registration method could be an effective solution in our case. The worst results are due to low contrast between the tumor region and normal tissues. Moreover, only some types of tumor have been treated in this study; the developed approach should therefore be evaluated on a larger dataset in the future. We also plan to design a more robust speed function that takes into account three different sources of information (or terms): regional (F_r), boundary (F_b) and shape (F_s) information. We believe that a more general speed function could lead to better performance.
On the other hand, given the complexity of anatomical structures, the problem of segmenting a specific region of interest with a hybrid process can be approached in different ways: with a sequential or an iterative strategy. Instead of using a sequential strategy based on a pipeline of algorithms, it may be better to apply an iterative process, which requires defining mutual constraints between all the algorithms involved. In this case, certain principles of information fusion can be exploited as well. Another interesting direction for future work is the selection of relevant visual features, which may increase the capability of our hybrid framework.

7. Data Availability

The data used to support the findings of this study are available from the corresponding author [65] upon request.

Author Contributions

Conceptualization, S.B. and R.A.; methodology, S.R. and A.A.; software, S.B. and R.A.; validation, S.R. and A.A.; formal analysis, S.B. and R.A.; investigation, S.R. and A.A.; resources, S.B. and R.A.; data curation, S.R. and A.A.; writing–original draft preparation, S.B. and R.A.; writing–review and editing, S.R. and A.A.; All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.


  1. Zhou, T.; Thung, K.; Liu, M.; Shi, F.; Zhang, C.; Shen, D. Multi-modal latent space inducing ensemble SVM classifier for early dementia diagnosis with neuroimaging data. Med. Image Anal. 2020, 60, 101630. [Google Scholar] [CrossRef] [PubMed]
  2. Fan, J.; Cao, X.; Yap, P.; Shen, D. BIRNet: Brain image registration using dual-supervised fully convolutional networks. Med. Image Anal. 2019, 54, 193–206. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, J.; Wang, Q.; Peng, J.; Nie, D.; Zhao, F.; Kim, M.; Zhang, H.; Wee, C.Y.; Wang, S.; Shen, D. Multi-task diagnosis for autism spectrum disorders using multi-modality features: A multi-center study. Hum. Brain Mapp. 2017, 38, 3081–3097. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Zhang, Y.; Zhang, H.; Chen, X.; Liu, M.; Zhu, X.; Lee, S.; Shen, D. Strength and similarity guided group-level brain functional network construction for MCI diagnosis. Pattern Recognit. 2019, 88, 421–430. [Google Scholar] [CrossRef] [PubMed]
  5. Maintz, J.; Viergever, M. A survey of medical image registration. Med. Image Anal. 1998, 2, 1–36. [Google Scholar]
  6. Talairach, J.; Tournoux, P. Co-Planar Stereotaxic Atlas of the Human Brain; Georg Thieme Verlag: Stuttgart, Germany, 1988. [Google Scholar]
  7. Collins, D.; Holmes, C.; Peters, T.; Evans, A. Automatic 3D model-based neuroanatomical segmentation. Hum. Brain Mapp. 1995, 3, 190–208. [Google Scholar] [CrossRef]
  8. Pohl, K.M.; Fisher, J.; Grimson, W.E.L.; Kikinis, R.; Wells, W.M. A Bayesian model for joint segmentation and registration. NeuroImage 2006, 31, 228–239. [Google Scholar] [CrossRef] [Green Version]
  9. Warfield, S.; Kaus, M.; Jolesz, F.; Kikinis, R. Adaptive, Template Moderated, Spatially Varying Statistical Classification. Med. Image Anal. 2000, 4, 43–55. [Google Scholar] [CrossRef]
  10. Clark, M. Knowledge-Guided Processing of Magnetic Resonance Images of the Brain. Ph.D. Thesis, Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA, 2000. [Google Scholar]
  11. Dawant, B.M.; Hartmann, S.L.; Pan, S.; Gadamsetty, S. Brain atlas deformation in the presence of small and large space-occupying tumors. Comput. Aided Surg. 2002, 7, 1–10. [Google Scholar] [CrossRef]
  12. Cuadra, M.; Pollo, C.; Bardera, A.; Cuisenaire, O.; Villemure, J.; Thiran, J. Atlas-based Segmentation of Pathological MR Brain Using a Model of Lesion Growth. IEEE Trans. Med. Imag. 2004, 23, 1301–1314. [Google Scholar]
  13. Liew, A.; Yan, H. Current Methods in the Automatic Tissue Segmentation of 3D Magnetic Resonance Brain Images. Curr. Med. Imag. Rev. 2006, 2, 91–103. [Google Scholar] [CrossRef] [Green Version]
  14. Sethian, J. Level Set Methods and Fast Marching Methods: Evolving Interfaces in Geometry, Fluid Mechanics, Computer Vision, and Materials Science, 2nd ed.; Cambridge University Press: Cambridge, UK, 1999. [Google Scholar]
  15. Osher, S.; Sethian, J. Fronts Propagating with Curvature-Dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations. J. Comput. Phys. 1988, 79, 12–49. [Google Scholar] [CrossRef] [Green Version]
  16. Malladi, R.; Sethian, J.A.; Vemuri, B.C. Shape modeling with front propagation: A level set approach. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 158–174. [Google Scholar] [CrossRef] [Green Version]
  17. Chan, T.; Vese, L. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Ho, S.; Bullitt, E.; Gerig, G. Level set evolution with region competition: Automatic 3-D segmentation of brain tumors. In Proceedings of the 16th International Conference on Pattern Recognition (ICPR), 2002; pp. 532–535. [Google Scholar]
  19. Lefohn, A.; Cates, J.; Whitaker, R. Interactive, GPU-based level sets for 3D brain tumor segmentation. In Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2003; pp. 564–572. [Google Scholar]
  20. Ciofolo, C.; Barillot, C. Brain segmentation with competitive level sets and fuzzy control. Inf. Process. Med. Imag. 2005, 19, 333–344. [Google Scholar] [CrossRef] [Green Version]
  21. Leventon, M.E.; Grimson, W.E.L.; Faugeras, O. Statistical shape influence in geodesic active contours. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2000; pp. 316–323. [Google Scholar] [CrossRef] [Green Version]
  22. Cremers, D.; Kohlberger, T.; Schnörr, C. Shape statistics in kernel space for variational image segmentation. Pattern Recog. 2003, 36, 1929–1943. [Google Scholar] [CrossRef] [Green Version]
  23. Charpiat, G.; Faugeras, O.; Keriven, R. Approximations of Shape Metrics and Application to Shape Warping and Empirical Shape Statistics. Found. Comput. Math. 2005, 5, 1–58. [Google Scholar] [CrossRef]
  24. Cremers, D.; Osher, S.J.; Soatto, S. Kernel Density Estimation and Intrinsic Alignment for Shape Priors in Level Set Segmentation. Int. J. Comput. Vis. 2006, 69, 335–351. [Google Scholar] [CrossRef]
  25. Kim, J.; Çetin, M.; Willsky, A.S. Nonparametric Shape Priors for Active Contour-Based Image Segmentation. Signal Process. 2007, 87, 3021–3044. [Google Scholar] [CrossRef] [Green Version]
  26. Cootes, T.F.; Edwards, G.J.; Taylor, C.J. Active Appearance Models. In Proceedings of the European Conference on Computer Vision (ECCV), 1998; pp. 484–498. [Google Scholar]
  27. Alroobaea, R.; Alsufyani, A.; Ansari, M.A.; Rubaiee, S.; Algarni, S. Supervised Machine Learning of KFCG Algorithm and MBTC features for efficient classification of Image Database and CBIR Systems. Int. J. Appl. Eng. Res. 2018, 13, 6795–6804. [Google Scholar]
  28. Zhang, Y.; Brady, M.; Smith, S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med. Imag. 2001, 20, 45–57. [Google Scholar] [CrossRef]
  29. Bourouis, S.; Hamrouni, K. An efficient framework for brain tumor segmentation in magnetic resonance images. In Proceedings of the IEEE 2008 First Workshops on Image Processing Theory, Tools and Applications, Sousse, Tunisia, 23–26 November 2008; pp. 1–5. [Google Scholar]
  30. Bourouis, S.; Zaguia, A.; Bouguila, N. Hybrid Statistical Framework for Diabetic Retinopathy Detection. In Proceedings of the Image Analysis and Recognition—15th International Conference, ICIAR 2018, Póvoa de Varzim, Portugal, 27–29 June 2018; pp. 687–694. [Google Scholar]
  31. Prastawa, M.; Bullitt, E.; Ho, S.; Gerig, G. A brain tumor segmentation framework based on outlier detection. Med. Image Anal. (MedIA) 2004, 8, 275–283. [Google Scholar] [CrossRef]
  32. Liu, J.; Udupa, J.; Odhner, D.; Hackney, D.; Moonis, G. A system for brain tumor volume estimation via MR imaging and fuzzy connectedness. Comput. Med. Imag. Graph. 2005, 29, 21–34. [Google Scholar] [CrossRef]
  33. Channoufi, I.; Najar, F.; Bourouis, S.; Azam, M.; Halibas, A.S.; Alroobaea, R.; Al-Badi, A. Flexible Statistical Learning Model for Unsupervised Image Modeling and Segmentation. In Mixture Models and Applications; Springer: Berlin, Germany, 2020; pp. 325–348. [Google Scholar]
  34. Alhakami, W.; ALharbi, A.; Bourouis, S.; Alroobaea, R.; Bouguila, N. Network Anomaly Intrusion Detection Using a Nonparametric Bayesian Approach and Feature Selection. IEEE Access 2019, 7, 52181–52190. [Google Scholar] [CrossRef]
  35. Corso, J.; Sharon, E.; Dube, S.; El-Saden, S.; Sinha, U.; Yuille, A. Efficient multilevel brain tumor segmentation with integrated Bayesian model classification. IEEE Trans. Med. Imag. 2008, 27, 629–640. [Google Scholar] [CrossRef] [Green Version]
  36. Bourouis, S.; Hamrouni, K.; Betrouni, N. Automatic MRI Brain Segmentation with Combined Atlas-Based Classification and Level-Set Approach. In Proceedings of the Image Analysis and Recognition, 5th International Conference, ICIAR 2008, Póvoa de Varzim, Portugal, 25–27 June 2008; pp. 770–778. [Google Scholar]
  37. Fitton, I.; Cornelissen, S.A.P.; Duppen, J.C.; Steenbakkers, R.J.H.M.; Peeters, S.T.H.; Hoebers, F.J.P.; Kaanders, J.H.A.M.; Nowak, P.J.C.M.; Rasch, C.R.N.; van Herk, M. Semi-automatic delineation using weighted CT-MRI registered images for radiotherapy of nasopharyngeal cancer. Med. Phys. 2011, 38, 4662–4666. [Google Scholar] [CrossRef]
  38. Channoufi, I.; Bourouis, S.; Bouguila, N.; Hamrouni, K. Color image segmentation with bounded generalized Gaussian mixture model and feature selection. In Proceedings of the 4th International Conference on Advanced Technologies for Signal and Image Processing, ATSIP 2018, Sousse, Tunisia, 21–24 March 2018; pp. 1–6. [Google Scholar]
  39. Seim, H.; Kainmueller, D.; Heller, M.; Lamecker, H.; Zachow, S.; Hege, H.C. Automatic Segmentation of the Pelvic Bones from CT Data based on a Statistical Shape Model. Eur. Worksh. Visual Comput. Biomed. 2008, 8, 224–230. [Google Scholar]
  40. Vincent, G.; Wolstenholme, C.; Scott, I.; Bowes, M. Fully automatic segmentation of the knee joint using active appearance models. Proc. Med. Image Anal. Clin. 2010, 1, 224. [Google Scholar]
  41. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar]
  42. Bourouis, S.; Zaguia, A.; Bouguila, N.; Alroobaea, R. Deriving Probabilistic SVM Kernels From Flexible Statistical Mixture Models and its Application to Retinal Images Classification. IEEE Access 2019, 7, 1107–1117. [Google Scholar] [CrossRef]
  43. Najar, F.; Bourouis, S.; Bouguila, N.; Belghith, S. Unsupervised learning of finite full covariance multivariate generalized Gaussian mixture models for human activity recognition. Multimed. Tools Appl. 2019, 78, 18669–18691. [Google Scholar] [CrossRef]
  44. Byun, H.; Lee, S.W. A survey on pattern recognition applications of support vector machines. Int. J. Pattern Recognit. Artif. Intell. 2003, 17, 459–486. [Google Scholar] [CrossRef]
  45. Hu, W. Robust Support Vector Machines for Anomaly Detection. In Proceedings of the 2003 International Conference on Machine Learning and Applications (ICMLA 03), Los Angeles, CA, USA, 23–24 June 2003; pp. 23–24. [Google Scholar]
  46. Platt, J. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods—Support Vector Learning; MIT Press: Cambridge, MA, USA, 1999; pp. 185–208. [Google Scholar]
  47. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Trans. Med. Imag. 2016, 35, 1240–1251. [Google Scholar] [CrossRef]
  48. Bourouis, S. Adaptive Variational Model and Learning-based SVM for Medical Image Segmentation. In Proceedings of the ICPRAM 2015—Proceedings of the International Conference on Pattern Recognition Applications and Methods, Lisbon, Portugal, 10–12 January 2015; Volume 1, pp. 149–156. [Google Scholar]
  49. Cobzas, D.; Birkbeck, N.; Schmidt, M.; Jagersand, M.; Murtha, A. A 3d variational brain tumor segmentation using a high dimensional feature set. In Proceedings of the IEEE 11th International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8. [Google Scholar]
  50. Bourouis, S.; Hamrouni, K. 3D segmentation of MRI brain using level set and unsupervised classification. Int. J. Image Graph. (IJIG) 2010, 10, 135–154. [Google Scholar] [CrossRef]
  51. Ilunga-Mbuyamba, E.; Aviña-Cervantes, J.G.; Cepeda-Negrete, J.; Ibarra-Manzano, M.A.; Chalopin, C. Automatic selection of localized region-based active contour models using image content analysis applied to brain tumor segmentation. Comput. Biol. Med. 2017, 91, 69–79. [Google Scholar] [CrossRef]
  52. Balakrishnan, G.; Zhao, A.; Sabuncu, M.R.; Guttag, J.V.; Dalca, A.V. VoxelMorph: A Learning Framework for Deformable Medical Image Registration. IEEE Trans. Med. Imag. 2019, 38, 1788–1800. [Google Scholar] [CrossRef] [Green Version]
  53. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015—18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  54. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  55. Litjens, G.J.S.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Shen, D.; Wu, G.; Suk, H.I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef] [Green Version]
  57. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Med. Phys. 2019, 29, 102–127. [Google Scholar] [CrossRef] [PubMed]
  58. Ker, J.; Wang, L.; Rao, J.; Lim, T. Deep learning applications in medical image analysis. IEEE Access 2017, 6, 9375–9389. [Google Scholar] [CrossRef]
  59. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imag. 2018, 9, 611–629. [Google Scholar] [CrossRef] [Green Version]
  60. Lin, W.; Tong, T.; Gao, Q.; Guo, D.; Du, X.; Yang, Y.; Guo, G.; Xiao, M.; Du, M.; Qu, X.; et al. Convolutional neural networks-based MRI image analysis for the alzheimer’s disease prediction from mild cognitive impairment. Front. Neurosci. 2018, 12, 777. [Google Scholar] [CrossRef]
  61. Deepak, S.; Ameer, P.M. Brain tumor classification using deep CNN features via transfer learning. Comput. Biol. Med. 2019, 111, 103345. [Google Scholar] [CrossRef]
  62. Akkus, Z.; Galimzianova, A.; Hoogi, A.; Rubin, D.L.; Erickson, B.J. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J. Digit. Imag. 2017, 30, 449–459. [Google Scholar] [CrossRef] [Green Version]
  63. Perona, P.; Malik, J. Scale-Space and Edge Detection Using Anisotropic Diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 629–639. [Google Scholar] [CrossRef] [Green Version]
  64. Tuzikov, A.V.; Colliot, O.; Bloch, I. Evaluation of the symmetry plane in 3D MR brain images. Pattern Recognit. Lett. 2003, 24, 2219–2233. [Google Scholar] [CrossRef] [Green Version]
  65. Prastawa, M.; Bullitt, E.; Gerig, G. Simulation of Brain Tumors in MR Images for Evaluation of Segmentation Efficacy. Med. Image Anal. (MedIA) 2009, 13, 297–311. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Zijdenbos, A.; Dawant, B.; Margolin, R.; Palmer, A. Morphometric Analysis of White Matter Lesions in MR Images: Method and Validation. IEEE Trans. Med. Imag. 1994, 13, 716–724. [Google Scholar] [CrossRef] [Green Version]
  67. Anitha, V.; Murugavalli, S. Brain tumour classification using two-tier classifier with adaptive segmentation technique. IET Comput. Vis. 2016, 10, 9–17. [Google Scholar] [CrossRef]
  68. Zikic, D.; Glocker, B.; Konukoglu, E.; Criminisi, A.; Demiralp, C.; Shotton, J.; Thomas, O.M.; Das, T.; Jena, R.; Price, S.J. Decision forests for tissue-specific segmentation of high-grade gliomas in multi-channel MR. In International Conference on Medical Image Computing And Computer-Assisted Intervention; Springer: Berlin, Germany, 2012; pp. 369–376. [Google Scholar]
  69. Bauer, S.; Nolte, L.; Reyes, M. Fully Automatic Segmentation of Brain Tumor Images Using Support Vector Machine Classification in Combination with Hierarchical Conditional Random Field Regularization. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2011—14th International Conference, Toronto, ON, Canada, 18–22 September 2011; pp. 354–361. [Google Scholar]
  70. Njeh, I.; Sallemi, L.; Ayed, I.B.; Chtourou, K.; Lehéricy, S.; Galanaud, D.; Hamida, A.B. 3D multimodal MRI brain glioma tumor and edema segmentation: A graph cut distribution matching approach. Comput. Med. Imag. Graph. 2015, 40, 108–119. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Atlas-based segmentation framework.
Figure 2. Proposed hybrid framework for brain tumor segmentation.
Figure 3. Result of the tumor detection: (a) T1-weighted magnetic resonance imaging (MRI) (T1w, acquired before contrast injection); (b) T1-weighted MRI; (c) final segmented tumor region (red) and the initial detection obtained by SVM classification (green).
Figure 4. Illustration of selected brain tumor segmentation results. (a) The symmetry axis detection step. (b–d) The final segmented region of interest (brain tumor) in different MR images.
Table 1. Similarity index obtained on several different samples of brain tumor from the dataset in Reference [65].
Slice indices 1–30 (per-slice similarity index values omitted).
Table 2. Sensitivity measures obtained on several different samples of brain tumor from the dataset in Reference [65].
Slice indices 1–30 (per-slice sensitivity values omitted).
Table 3. Specificity measures obtained on several different samples of brain tumor from the dataset in Reference [65].
Slice indices 1–30 (per-slice specificity values omitted).
Table 4. Average measures obtained on the dataset: “Brain Tumor in MRI” [65].
Similarity Index (%) | Sensitivity (%) | Specificity (%) (average values omitted).
Table 5. Similarity index (%) for brain tumor detection using different approaches for the public dataset [65].
Approach                 Similarity Index (%)
Anitha et al. [67]       85.0
Bourouis et al. [50]     78.5
Zikic et al. [68]        71
Bauer et al. [69]        62
Njeh et al. [70]         89
Our framework            80.9
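The three measures reported in Tables 1–5 are standard overlap statistics: the similarity index follows Zijdenbos et al. [66] (equivalent to the Dice coefficient), while sensitivity and specificity are derived from true/false positive and negative voxel counts against a ground-truth mask. As a minimal sketch of how these values can be computed from binary segmentation masks (the function name and array layout are illustrative, not taken from the paper):

```python
import numpy as np

def overlap_metrics(seg: np.ndarray, gt: np.ndarray):
    """Return (similarity index, sensitivity, specificity) in percent
    for a binary segmentation `seg` against a binary ground truth `gt`."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(seg, gt).sum()      # correctly segmented tumor voxels
    fp = np.logical_and(seg, ~gt).sum()     # voxels wrongly labeled as tumor
    fn = np.logical_and(~seg, gt).sum()     # missed tumor voxels
    tn = np.logical_and(~seg, ~gt).sum()    # correctly rejected background
    dice = 2.0 * tp / (2.0 * tp + fp + fn)  # similarity index [66]
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 100.0 * dice, 100.0 * sensitivity, 100.0 * specificity
```

Applied slice by slice, this yields per-slice scores as in Tables 1–3; averaging over all slices gives the summary values of Table 4.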

Share and Cite

MDPI and ACS Style

Bourouis, S.; Alroobaea, R.; Rubaiee, S.; Ahmed, A. Toward Effective Medical Image Analysis Using Hybrid Approaches—Review, Challenges and Applications. Information 2020, 11, 155.
