Review

A Review of Recent Advances in Brain Tumor Diagnosis Based on AI-Based Classification

1 Department of Radiological Sciences, College of Applied Medical Sciences, King Saud bin Abdulaziz University for Health Sciences, Jeddah City 22384, Saudi Arabia
2 King Abdullah International Medical Research Center, Jeddah City 22384, Saudi Arabia
3 Medical Imaging Department, Ministry of the National Guard—Health Affairs, Jeddah City 11426, Saudi Arabia
Diagnostics 2023, 13(18), 3007; https://doi.org/10.3390/diagnostics13183007
Submission received: 23 June 2023 / Revised: 14 September 2023 / Accepted: 19 September 2023 / Published: 20 September 2023
(This article belongs to the Special Issue Advances in Medical Image Processing, Segmentation and Classification)

Abstract:
Brain tumors are caused by uncontrolled, rapid cell proliferation, and early cancer detection is vitally important to save many lives. Brain tumors can be divided into several categories depending on their kind, place of origin, pace of development, and stage of progression; as a result, tumor classification is crucial for targeted therapy. Brain tumor segmentation aims to accurately delineate the areas of a brain tumor. Manually identifying the proper type of brain tumor requires a specialist with a thorough understanding of brain illnesses, and processing many images is time-consuming and tiresome. Therefore, automatic segmentation and classification techniques are required to speed up and enhance the diagnosis of brain tumors. Tumors can be quickly and safely detected by brain scans using imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), and others. Machine learning (ML) and artificial intelligence (AI) have shown promise in developing algorithms that aid in automatic classification and segmentation utilizing various imaging modalities. The right segmentation method must be used to precisely classify patients with brain tumors to enhance diagnosis and treatment. This review describes multiple types of brain tumors, publicly accessible datasets, enhancement methods, segmentation, feature extraction, classification, machine learning techniques, deep learning (DL), and transfer learning as applied to the study of brain tumors. In this study, we attempted to synthesize brain cancer imaging modalities with automatic, computer-assisted methodologies for brain cancer characterization in ML and DL frameworks. Identifying the current problems with the engineering methodologies in use and predicting a future paradigm are additional goals of this article.

1. Introduction

The human brain, which serves as the control center for all the body’s organs, is a highly developed organ that enables a person to adapt to and withstand various environmental situations [1]. The human brain allows people to express themselves in words, carry out activities, and convey thoughts and feelings. Cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) are the three major tissue components of the human brain. The gray matter regulates brain activity and comprises neurons and glial cells. The cerebral cortex is connected to other brain areas through white matter fibers comprising many myelinated axons. The corpus callosum, a substantial band of white matter fibers, connects the left and right hemispheres of the brain [2]. A brain tumor is an uncontrolled, aberrant growth of brain cells. Because the human skull is a rigid and volume-restricted structure, any unanticipated growth may affect human functioning, depending on the area of the brain involved. Additionally, it might spread to other organs, further jeopardizing human functions [3]. Early cancer detection makes effective treatment planning possible, which is crucial for the healthcare sector [4]. Cancer is difficult to cure, and the odds of survival are significantly reduced if it spreads to nearby cells. Undoubtedly, many lives could be preserved if cancer were detected at its earliest stage using quick and affordable diagnostic methods. Both invasive and noninvasive approaches may be utilized to diagnose brain cancer. In a biopsy, an incision is made to extract a lesion sample for analysis. It is regarded as the gold standard for cancer diagnosis: pathologists examine several cell characteristics of the tumor specimen under a microscope to verify the malignancy.
Noninvasive techniques include physical examinations of the body and imaging modalities employed for imaging the brain [5]. Compared with brain biopsy, imaging modalities such as CT scans and MRI images are more rapid and safer. Radiologists use these imaging techniques to identify brain problems, evaluate disease progression, and plan surgeries [6]. However, interpreting brain scans to diagnose illness is subject to inter-reader variability, with accuracy that depends on the medical practitioner’s competency [5]. It is crucial to accurately identify the type of brain disorder to reduce diagnostic errors. Utilizing computer-aided diagnosis (CAD) technologies can improve accuracy. The fundamental idea behind CAD is to offer a computer result as an additional guide to help radiologists interpret images and shorten the reading time. This enhances the accuracy and stability of radiological diagnosis [7]. Several CAD-based artificial intelligence techniques, such as machine learning (ML) and deep learning (DL), are described in this review for diagnosing tissues and segmenting tumors. The segmentation process is a crucial aspect of image processing. It includes a procedure for extracting the area that helps determine whether a region is infected. Using MRI images to segment brain tumors presents various challenges, including image noise, low contrast, ill-defined borders, shifting intensities inside tissues, and tissue-type variation.
Detecting and segmenting brain tumors is the most complex and crucial task in many medical image applications because it often requires a great deal of data and information, and tumors come in a variety of shapes and sizes. Automatic or semiautomatic detection and segmentation, aided by AI, is currently crucial in medical diagnostics. Before therapies such as chemotherapy, radiation, or brain surgery, medical professionals must verify the boundaries and extent of the brain cancer and ascertain precisely where it lies and which locations are affected. This review examines the output of various algorithms used in segmenting and detecting brain tumors.
The review is structured as follows: Types of brain tumors are described in Section 2. The imaging modalities utilized in brain imaging are discussed in Section 3. The review algorithms used in the study are provided in Section 4. A review of the relevant state-of-the-art is provided in Section 5. The review is discussed in Section 6. The work’s conclusion is presented in Section 7.

2. Types of Brain Tumors

The three main parts of the brain are the brain stem, cerebrum, and cerebellum [1]. The cerebellum is the second-largest component of the brain and manages bodily motor activities, including balance, posture, walking, and general coordination of movements. It is positioned behind the brain and connected to the brain stem. Internal white matter, small but deeply positioned volumes of gray matter, and a very thin gray matter outer cortex can all be found in the cerebellum and cerebrum. The brainstem links to the spinal cord. It is situated at the brain’s base. Vital bodily processes, including motor, sensory, cardiac, respiratory, and reflex functions, are all under the control of the brainstem. Its three structural components are the medulla oblongata, pons, and midbrain [2]. A brain tumor is the medical term for an unexpected growth of brain cells [8]. According to the tumor’s location, the kind of tissue involved, and whether it is malignant or benign, scientists have categorized several types of brain tumors based on the location of origin (primary or secondary) and additional contributing elements [9]. The World Health Organization (WHO) has categorized brain tumors into 120 kinds. This categorization is based on the cell’s origin and behavior, ranging from less aggressive to more aggressive. Certain tumor forms are also graded, with grade I being the least malignant (e.g., meningiomas, pituitary tumors) and grade IV being the most malignant; although grading systems differ by tumor type, the grade denotes the pace of growth [10]. The most frequent type of brain tumor in adults is glioma, which may be classified into high-grade glioma (HGG) and low-grade glioma (LGG). The WHO further categorizes LGG as grade I–II tumors and HGG as grade III–IV tumors. To reduce diagnostic errors, accurate identification of the specific type of brain disorder is crucial for treatment planning. A summary of various types of brain tumors is provided in Table 1.

3. Imaging Modalities

For many years, the detection of brain abnormalities has involved the use of several medical imaging methods. The two brain imaging approaches are structural and functional scanning [11]. Structural imaging provides different measurements relating to brain anatomy, tumor location, traumas, and other brain illnesses [12]. Functional imaging methods pick up finer-scale metabolic alterations, lesions, and visualizations of brain activity. Techniques including CT, MRI, single-photon emission computed tomography (SPECT), positron emission tomography (PET), functional MRI (fMRI), and ultrasound (US) are utilized to localize brain tumors and characterize their size, shape, location, and other features [13].

3.1. MRI

MRI is a noninvasive procedure that utilizes nonionizing, safe radiation [14] to display the 3D anatomical structure of any region of the body without the need for cutting the tissue. To acquire images, it employs RF pulses and an intense magnetic field [15].
The patient is positioned within an intense magnetic field. The water molecules of the human body are initially in their equilibrium state when the magnets are off. The magnetic field is then activated. Under the effect of this powerful magnetic field, the body’s water molecules align with the field’s direction [14]. Protons are stimulated to spin against the magnetic field and realign by the application of a high-energy RF pulse in the direction of the magnetic field. When the RF pulse is stopped, the water molecules return to their equilibrium state and align with the magnetic field once more [14]. This causes the water molecules to emit RF energy, which the scanner detects and transforms into visual images [16]. The tissue structure determines the amount of RF energy the water molecules release. As shown in Figure 1, a healthy brain contains white matter (WM), gray matter (GM), and CSF in a structural MRI scan [17]. The primary difference between these tissues in a structural MRI scan is based on the amount of water they contain, with WM constituting 70% water and GM containing 80% water. The CSF is almost entirely composed of water, as shown in Figure 1.
Figure 2 illustrates the fundamental MRI planes used to visualize the anatomy of the brain: axial, coronal, and sagittal. T1, T2, and FLAIR MRI sequences are most often employed for brain analysis [14]. A T1-weighted scan can distinguish between gray and white matter. T2-weighted imaging is sensitive to water content and is therefore ideally suited to conditions in which water accumulates within the tissues of the brain.
In pathology, FLAIR is utilized to differentiate between CSF and abnormalities in the brain. Gray-level intensity values in pixel spaces form an image during an MRI scan. The values of the gray-level intensity are dependent on the cell density. On T1 and T2 images of a tumor brain, the intensity level of the tumorous tissues differs [16]. The properties of various MRI sequences are shown in Table 2.
Most tumors show low or medium gray intensity on T1-w. On T2-w, most tumors exhibit bright intensity [17]. Examples of MRI tumor intensity level are shown in Figure 3.
Another type of MRI identified as functional magnetic resonance imaging (fMRI) [18] measures changes in blood oxygenation to interpret brain activity. An area of the brain that is more active begins to use more blood and oxygen. As a result, an fMRI correlates the location and mental process to map the continuing activity in the brain.

3.2. CT

CT scanners provide finely detailed images of the interior of the body using a revolving X-ray beam and a row of detectors. On a computer, specific algorithms process the images captured from various angles to create cross-sectional images of the entire body [19]. A CT scan can offer more precise images of the skull, spine, and other bone structures close to a brain tumor, as shown in Figure 4. Patients typically receive contrast injections to highlight aberrant tissues, and may occasionally ingest dye to improve the image. When an MRI is unavailable, or the patient has an implant such as a pacemaker, a CT scan may be performed to diagnose a brain tumor. The benefits of CT scanning are low cost, improved detection of tissue classes, quick imaging, and more widespread availability. However, the radiation dose in a CT scan is 100 times greater than in a standard X-ray examination [19].

3.3. PET

An example of a nuclear medicine technique that analyzes the metabolic activity of biological tissues is positron emission tomography (PET) [20]. Therefore, to help evaluate the tissue being studied, a small amount of a radioactive tracer is utilized throughout the procedure. Fluorodeoxyglucose (FDG) is a popular PET agent for imaging the brain. To provide more conclusive information on malignant (cancerous) tumors and other lesions, PET may also be utilized in conjunction with other diagnostic procedures like CT or MRI. PET scans an organ or tissue by utilizing a scanning device to find photons released by a radionuclide at that site [20]. The chemical compounds that are normally utilized by the specific organ or tissue throughout its metabolic process are combined with a radioactive atom to create the tracer used in PET scans, as shown in Figure 5.

3.4. SPECT

A nuclear imaging examination called a single-photon emission computed tomography (SPECT) combines CT with a radioactive tracer. The tracer is what enables medical professionals to observe the blood flow to tissues and organs [21]. A tracer is injected into the patient’s bloodstream prior to the SPECT scan. The radiolabeled tracer generates gamma rays that the CT scanner can detect since it is radiolabeled. Gamma-ray information is gathered by the computer and shown on the CT cross-sections. A 3D representation of the brain can be created by adding these cross-sections back together [21].

3.5. Ultrasound

An ultrasound is a specialized imaging technique that provides details useful in cancer diagnosis, especially for soft tissues. It is frequently employed as the initial step in the typical cancer diagnostic procedure [22]. One advantage of ultrasound is that a test can be completed swiftly and affordably without subjecting the patient to radiation. However, ultrasound cannot independently confirm a cancer diagnosis and cannot generate images with the level of resolution or detail of a CT or MRI scan. During a conventional ultrasound examination, a medical expert gently moves a transducer over the patient’s skin across the region of the body being examined. The transducer generates a succession of high-frequency sound waves, which “bounce off” the patient’s internal organs. The resulting echoes return to the ultrasound device, which transforms the sound waves into a 2D image that may be observed in real time on a monitor. According to [22], US probes have been applied in brain tumor resection. The shape and strength of ultrasonic echoes change according to the density of the tissue being assessed. An ultrasound can detect tumors that may be malignant because solid masses and fluid-filled cysts reflect sound waves differently.

4. Classification and Segmentation Method

As stated in the introduction, brain tumors are a leading cause of death worldwide. Computer-aided detection and diagnosis refers to software that utilizes DL, ML, and computer vision to analyze radiological and pathological images. It has been created to assist radiologists in diagnosing human disease in various body regions, including applications for brain tumors. This review explores different CAD-based artificial intelligence approaches, including ML and DL, for automatically classifying and segmenting tumors.

4.1. Classification Methods

Classification is an approach in which related data are grouped together according to common features. A classifier is a model created to predict the class label of a sample from its distinctive features. Predicting the desired class for each type of data is the fundamental goal of classification. Deep learning and machine learning techniques are used for the classification of medical images. The key distinction between the two is the approach used to obtain the features for the classification process.

4.1.1. Machine Learning

ML is a branch of AI that allows computers to learn without being explicitly programmed. Classifying medical images, including lesions, into various groups using input features has become one of the latest applications of ML. There are two types of ML algorithms: supervised learning and unsupervised learning [23]. ML algorithms learn from labeled data in supervised learning. Unsupervised learning is the process by which ML systems attempt to comprehend the interdata relationship using unlabeled data. ML has been employed to analyze brain cancers in the context of brain imaging [24]. The main stages of ML classification are image preprocessing, feature extraction, feature selection, and classification. Figure 6 illustrates the process architecture.
1. Data Acquisition
As previously noted, brain cancer images can be collected using several imaging modalities, such as MRI, CT, and PET, which effectively visualize aberrant brain tissues.
2. Preprocessing
Preprocessing is a very important stage in the medical field. Normally, noise enhancement or reduction in images occurs during preprocessing. Medical noise significantly reduces image quality, making them diagnostically inefficient. To properly classify medical images, the preprocessing stage must be effective enough to eliminate as much noise as possible without affecting essential image components [25]. This procedure is carried out using a variety of approaches, including cropping, image scaling, histogram equalization, filtering using a median filter, and image adjusting [26].
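The filtering and enhancement steps above can be sketched in a few lines of NumPy. This is an illustrative toy implementation of histogram equalization and median filtering, not the pipeline of any particular paper; real systems would typically rely on a library such as OpenCV or scikit-image.

```python
import numpy as np

def histogram_equalization(img):
    """Spread intensity values across the full 0-255 range via the CDF."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_norm = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # map CDF to [0, 1]
    return (cdf_norm[img] * 255).astype(np.uint8)

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Toy 4x4 "scan" with one salt-noise pixel
scan = np.array([[10, 10, 10, 10],
                 [10, 255, 10, 10],
                 [10, 10, 10, 10],
                 [10, 10, 10, 10]], dtype=np.uint8)
denoised = median_filter(scan)
print(denoised[1, 1])  # the noisy outlier is suppressed -> 10
```

The median filter removes impulse ("salt-and-pepper") noise without blurring edges as strongly as a mean filter, which is why it is a common preprocessing choice for medical scans.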
3. Feature extraction
The process of converting images into features based on several image characteristics in the medical field is known as feature extraction. These features carry the same information as the original images but are entirely different. This technique has the advantages of enhancing classifier accuracy, decreasing overfitting risk, allowing users to analyze data, and speeding up training [27]. Texture, contrast, brightness, shape, gray level co-occurrence matrix (GLCM) [28], Gabor transforms [29], wavelet-based features [30], 3D Haralick features [31], and histogram of local binary patterns (LBP) [32] are some of the examples of the various types of features.
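As an illustration of texture features, a gray-level co-occurrence matrix and two Haralick-style statistics (contrast and energy) can be computed directly in NumPy. The 4-level toy image below is an assumption for demonstration only.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for the pixel offset (dy, dx)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()  # normalize to joint probabilities

def glcm_features(p):
    """Haralick-style contrast and energy from a normalized GLCM."""
    idx = np.arange(p.shape[0])
    i, j = np.meshgrid(idx, idx, indexing='ij')
    contrast = np.sum(p * (i - j) ** 2)  # weights large gray-level jumps
    energy = np.sum(p ** 2)              # high for uniform textures
    return contrast, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img)
contrast, energy = glcm_features(p)
print(round(contrast, 3), round(energy, 3))
```

In practice, such statistics would be computed at several offsets and orientations and concatenated into the feature vector fed to the classifier.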
4. Feature selection
This technique attempts to rank the features in descending order of importance or relevance, with the top-ranked features being employed in classification. As a result, multiple feature selection techniques are needed to reduce redundant information and discriminate between relevant and irrelevant features [33]; examples include principal component analysis (PCA) [34], the genetic algorithm (GA) [35], and independent component analysis (ICA) [36].
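PCA, for instance, can be sketched with a singular value decomposition: center the feature matrix and project it onto the top right-singular vectors. The random 10-feature matrix below is purely illustrative.

```python
import numpy as np

def pca(X, n_components=2):
    """Project feature vectors onto the top principal components."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                     # 100 samples, 10 raw features
Z = pca(X, n_components=3)
print(Z.shape)  # (100, 3)
```

Because SVD orders the singular values, the first retained component always carries the most variance, the second the next most, and so on.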
5. ML algorithm
Machine learning aims to divide the input information into separate groups based on common features or patterns of behavior. K-nearest neighbors (KNN) [35], artificial neural networks (ANN) [37], random forests (RF) [38], and SVM [39] are examples of supervised methods. These techniques include two stages: training and testing. During training, the data are manually labeled with human involvement. The model is first constructed in this stage, after which it is used in the testing stage to determine the classes of unlabeled data. The KNN algorithm works by finding the points that are closest to each other, computing the distance between them using one of several approaches, including the Hamming, Manhattan, Euclidean, and Minkowski distances [35].
The support vector machine (SVM) technique is frequently employed for classification tasks. In this approach, each sample is represented as a point in an n-dimensional space, with each feature forming one coordinate. The objective of the SVM method is to identify a boundary across this n-dimensional space, referred to as a hyperplane, that separates the classes [39]. There are numerous ways to place a hyperplane, but the one with the maximum margin is the best. The margin is the separation between the hyperplane and the most extreme data points of each class, known as the support vectors.
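A minimal sketch of the KNN idea just described, with a configurable Minkowski distance (p = 1 gives the Manhattan distance, p = 2 the Euclidean). The two toy clusters standing in for "tumor"/"no tumor" feature vectors are an assumption for demonstration.

```python
import numpy as np
from collections import Counter

def minkowski(a, b, p=2):
    """Minkowski distance: p=1 is Manhattan, p=2 is Euclidean."""
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

def knn_predict(X_train, y_train, x, k=3, p=2):
    """Label x by majority vote among its k nearest training points."""
    dists = [minkowski(xi, x, p) for xi in X_train]
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Two toy feature clusters standing in for class 0 / class 1
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.15, 0.1])))  # 0
print(knn_predict(X_train, y_train, np.array([1.0, 0.95])))  # 1
```

Note that KNN has no explicit training step beyond storing the labeled data; all work happens at query time, which is why distance choice and k dominate its behavior.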

4.1.2. Extreme Learning Machine (ELM)

Another field that requires less computation than conventional neural networks is the extreme learning machine (ELM). It is based on a real-time classification and regression technique known as the single-layer feed-forward neural network (SLFFNN). The input-to-hidden layer weights in the ELM are initialized randomly, whereas the hidden-to-output layer weights are computed using the Moore–Penrose pseudoinverse [40] to obtain a least-squares solution. As a result, classification accuracy is increased while network complexity and training time are reduced and learning speed is improved.
Additionally, the random hidden-layer weights give the network the capacity to handle tasks similar to those addressed by other ML techniques such as KNN, SVM, and Bayesian networks [40]. As shown in Figure 7, the ELM network is composed of three fully connected layers. Only the weights between the hidden and output layers vary during training; the weights between the input and hidden layers are fixed at random initially and remain so throughout.
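The ELM training rule described above fits in a few lines of NumPy: random fixed input weights, then a single pseudoinverse solve for the output weights. The toy binary target below (label = sign of the first feature) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, Y, hidden=50):
    """Random fixed input weights; output weights via Moore-Penrose pseudoinverse."""
    W = rng.normal(size=(X.shape[1], hidden))   # input-to-hidden weights (never trained)
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                # least-squares hidden-to-output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.normal(size=(200, 5))
Y = (X[:, 0] > 0).astype(float).reshape(-1, 1)  # toy binary target
W, b, beta = elm_train(X, Y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == Y)
print(acc)
```

The key point is that training is a single linear solve rather than an iterative gradient descent, which is the source of the speed advantage over back-propagation.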

4.1.3. Deep Learning (DL)

Over the past few years, deep learning, a branch of machine learning, has been utilized extensively to create automatic, semiautomatic, and hybrid models that can accurately detect and segment tumors in the shortest time possible [41]. DL can learn the features that are significant for a problem by utilizing a training corpus of sufficient diversity and quality. Deep learning [42] has achieved excellent success in tackling the limitations of ML by combining the feature extraction and selection phases into the training process [43]. Deep learning is inspired by an understanding of the neural networks within the human brain. DL models are often represented as a sequence of layers, each generated by a weighted sum of information from the previous layer. The data are represented by the first layer, while the output is represented by the last layer [44]. Because multiple layers make it possible to reproduce complex mapping functions, deep learning models can tackle extremely difficult problems while often requiring less human interaction than conventional ML techniques.
The most common DL model used for the categorization and segmentation of images is the convolutional neural network (CNN). A CNN analyzes the spatial relationship of pixels in a hierarchical manner: convolving the images with learned filters creates a hierarchy of feature maps. This convolution operation is performed in several layers so that the features become translation- and distortion-invariant and hence highly accurate [45]. Figure 8 illustrates the main process in DL.
Preprocessing is primarily used to eliminate unnecessary variation from the input image and make training the model easier. Further steps, such as resizing and normalization, are required to work within the limits of neural network models. All images must be resized before being entered into CNN classification models, since DL requires inputs of a constant size [46]. Images larger than the desired size can be reduced by downscaling, interpolation, or cropping background pixels [46].
Many images are required for CNN-based classification. Data augmentation is one of the most important data strategies for addressing issues with unequal distribution and data paucity [47].
A CNN’s architecture is composed of three primary layers: convolutional, pooling, and fully connected. The convolutional layer is the main layer, able to extract image features such as edges and boundaries. Based on the desired prediction results, this layer may automatically learn many filters in parallel from the training dataset. The convolutional layer creates features, while the pooling layer oversees data reduction, which minimizes the size of those features and reduces the demand for computing resources. Every neuron in the final, fully connected layer is coupled to every neuron in the previous layer. This layer serves as a classifier for the feature vector produced by the preceding layers [48,49]. A CNN is trained much like other neural networks: it continually modifies its filters and weights by taking the error at the output and propagating it backward. In addition, a CNN standardizes the output utilizing a softmax function [50]. Many types of CNN architecture exist, including ResNet, AlexNet, and cascade-CNN, among others [51].
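A forward pass through these three layer types can be sketched in plain NumPy (convolution + ReLU, max pooling, a fully connected layer, and softmax). The random filter and weights stand in for learned parameters; this is a minimal illustration of the data flow, not a trainable network.

```python
import numpy as np

def conv2d(img, kern):
    """Valid 2D convolution (cross-correlation) of a single channel."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))              # toy single-channel "scan"
kern = rng.normal(size=(3, 3))             # one filter (random stand-in for learned)
feat = np.maximum(conv2d(img, kern), 0)    # convolution + ReLU -> 6x6 feature map
pooled = max_pool(feat)                    # pooling -> 3x3 map
W = rng.normal(size=(pooled.size, 2))      # fully connected layer, 2 classes
probs = softmax(pooled.flatten() @ W)
print(probs.shape)                         # (2,); probabilities sum to 1
```

Training would repeat this pass, compare `probs` to the label, and back-propagate the error to update `kern` and `W`, exactly as described above.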

4.2. Segmentation Method

Brain tumor segmentation, which has been employed in some research, is an important step in improving disease diagnosis, evaluation, treatment plans, and clinical trials. The purpose of segmentation in tumor classification is to detect the tumor location from brain scans, improve representation, and allow quantitative evaluations of image structures during the feature extraction step [52]. Brain tumor segmentation can be accomplished in two ways: manually and completely automatically [53].
Manual tumor segmentation from brain scans is a difficult and time-consuming procedure. Furthermore, the artifacts created during the imaging procedure result in poor-quality images that are difficult to analyze. Additionally, due to uneven lesions, geographical flexibility, and unclear borders, manual detection of brain tumors is challenging. This section discusses several automated brain tumor segmentation strategies to help radiologists overcome these issues.

4.2.1. Region-Based Segmentation

A region in an image is a collection of related pixels that comply with specific homogeneity requirements, such as shape, texture, and pixel intensity values [54]. In region-based segmentation, the image is divided into disjoint areas to precisely identify the target region [55]. When grouping pixels together, region-based segmentation takes into consideration the pixel values, such as gray-level variance and difference, as well as their spatial closeness, such as the Euclidean distance or region density. K-means clustering [56] and fuzzy c-means (FCM) [56] are the most commonly used techniques in this method.
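A minimal k-means intensity clustering of the kind used in region-based segmentation might look as follows. The quantile-based initialization and the three-intensity toy image are assumptions for demonstration; library implementations add smarter seeding and convergence checks.

```python
import numpy as np

def kmeans_segment(img, k=3, iters=20):
    """Cluster pixel intensities into k regions (e.g., CSF / GM / WM)."""
    pixels = img.reshape(-1, 1).astype(float)
    # Initialize centers at spread-out intensity quantiles (illustrative choice)
    centers = np.quantile(pixels, np.linspace(0.05, 0.95, k)).reshape(-1, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)  # nearest center
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()          # recompute means
    return labels.reshape(img.shape)

# Toy image with three intensity populations
img = np.array([[10, 12, 11, 200],
                [11, 10, 198, 202],
                [90, 92, 91, 199],
                [89, 91, 10, 12]])
seg = kmeans_segment(img, k=3)
print(len(np.unique(seg)))  # 3 distinct regions
```

Here only intensity is clustered; FCM generalizes this by giving each pixel a fractional membership in every cluster rather than a hard label.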

4.2.2. Thresholding Methods

The thresholding approach is a straightforward and effective way to separate the necessary region [57], but finding an optimum threshold in low-contrast images may be challenging.
Threshold values are chosen using histogram analysis of image intensity [58]. There are two types of thresholding techniques: local and global. The global thresholding approach is the best choice for segmentation if the objects and the background have highly uniform brightness or intensity. The Gaussian distribution approach may be used to obtain the ideal threshold value [59]. Otsu thresholding [38] is the most popular of these techniques.
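Otsu's method chooses the threshold that maximizes the between-class variance of the intensity histogram. A direct NumPy sketch (an exhaustive search over all 256 candidate thresholds, written for clarity rather than speed):

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold maximizing between-class variance of the histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total          # background class weight
        w1 = hist[t:].sum() / total          # foreground class weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / hist[:t].sum()
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / hist[t:].sum()
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal toy data: dark background pixels and a bright "lesion"
img = np.concatenate([np.full(100, 30), np.full(50, 200)])
t = otsu_threshold(img)
mask = img >= t   # binary segmentation mask
print(t)
```

For a clearly bimodal histogram like this one, any threshold between the two modes is optimal, and the search returns the first such value.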

4.2.3. Watershed Techniques

The intensities of the image are analyzed using watershed techniques [60]. Topological watershed [61], marker-based watershed [62], and image IFT watershed [63] are a few examples of watershed algorithms.

4.2.4. Morphological-Based Method

The morphology technique relies on the morphology of image features. It is mostly used for extracting details from images based on shape representation. Dougherty [64] defines dilation and erosion as the two basic operations: dilation expands the boundaries of objects in an image, while erosion shrinks them.
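On binary masks, dilation and erosion reduce to a logical OR/AND over a sliding window. A toy NumPy sketch on a synthetic 3 × 3 "tumor mask" (the square structuring element is an illustrative choice):

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation: a pixel turns on if any pixel in its k x k window is on."""
    pad = k // 2
    p = np.pad(mask, pad)                       # pad with False
    out = np.zeros_like(mask)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].any()
    return out

def erode(mask, k=3):
    """Binary erosion: a pixel stays on only if its whole k x k window is on."""
    pad = k // 2
    p = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].all()
    return out

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True          # 3x3 square "tumor mask"
print(dilate(mask).sum())      # grows to a 5x5 square -> 25
print(erode(mask).sum())       # shrinks to a single pixel -> 1
```

Composing the two gives the classic cleanup operators: erosion followed by dilation (opening) removes small speckles, and the reverse (closing) fills small holes in a segmentation mask.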

4.2.5. Edge-Based Method

Edge detection is performed using variations in image intensity. Pixels at an edge are those where the image’s intensity function abruptly changes. Edge-based segmentation techniques include the Sobel, Roberts, Prewitt, and Canny operators [65]. Reference [66] offers an enhanced edge detection approach for tumor segmentation, in which an automated image-dependent threshold is combined with the Sobel operator to identify the edges of the brain tumor.
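A Sobel edge map is a pair of convolutions followed by a gradient-magnitude threshold. The step-edge test image and the fixed threshold value below are illustrative assumptions; [66] derives its threshold automatically from the image.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # vertical-gradient kernel

def convolve_valid(img, kern):
    """Valid-mode 2D convolution (cross-correlation)."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def sobel_edges(img, thresh=100):
    gx = convolve_valid(img, SOBEL_X)
    gy = convolve_valid(img, SOBEL_Y)
    mag = np.hypot(gx, gy)          # gradient magnitude
    return mag > thresh             # binary edge map via thresholding

# Vertical step edge: dark left half, bright right half
img = np.zeros((6, 6))
img[:, 3:] = 255.0
edges = sobel_edges(img)
print(edges.any())  # True: the step boundary is detected
```

Only pixels whose 3 × 3 window straddles the intensity step produce a large gradient, so the flat regions on either side stay off in the edge map.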

4.2.6. Neural-Networks-Based Method

Neural-network-based segmentation techniques employ computer models of artificial neural networks consisting of weighted connections between processing units (called neurons). At the connections, the weights act as multipliers, and training is necessary to acquire the coefficient values. A variety of neural network designs have been used in the segmentation of medical images and other fields. Some of the techniques utilized in the segmentation process include the multilayer perceptron (MLP), Hopfield neural networks (HNN) [67], the back-propagation learning algorithm, SVM-based segmentation [68], and self-organizing map (SOM) neural networks [67].

4.2.7. DL-Based Segmentation

The primary strategy in DL-based brain tumor segmentation is to pass an image through a series of deep learning structures and then segment the input image based on the extracted deep features [69]. Many deep learning methods, such as deep CNNs and related architectures, have been suggested for segmenting brain tumors.
A deep learning system called semantic segmentation [70] arranges pixels in an image according to semantic categories. The objective is to create a dense pixel-by-pixel segmentation map of the image, and each pixel is given an assigned category or entity.

4.3. Performance Evaluation

An important component of every research work is evaluating the classification and segmentation performance. The primary goal of this evaluation is to measure and analyze the model’s capability for segmentation or diagnostic purposes. Segmentation is a crucial step in improving the diagnostic process, as mentioned before, but for this to occur, the segmentation must be as accurate as feasible. Additionally, the diagnostic approach utilized must be evaluated while taking complexity and time into account [71].
True positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are the four basic counts used to evaluate any segmentation or classification algorithm. In a segmentation method, a TP is a pixel correctly predicted as belonging to the specified class, and a TN is a pixel correctly predicted as not belonging to it, based on the ground truth. An FP occurs when the model wrongly predicts that a pixel belongs to a specific class when it does not, and an FN occurs when the model wrongly predicts that a pixel does not belong to a class when it does [71].
In classification tasks, a TP refers to an image that is correctly assigned to the positive category according to the ground truth. Similarly, a TN occurs when the model correctly classifies an image in the negative category. In contrast, an FP occurs when the model wrongly assigns an image to the positive class while the actual datum is in the negative category, and an FN occurs when the model assigns an image to the negative class while it actually belongs to the positive category. From these four elements, different performance measures can be derived to expand the analysis.
Accuracy (ACC) measures a model’s ability to correctly categorize all pixels/classes, whether positive or negative. Sensitivity (SEN) is the percentage of correctly predicted positive images/pixels among all actual positive samples; it evaluates a model’s ability to recognize relevant samples or pixels. Specificity (SPE) is the percentage of actual negatives that are correctly predicted as negative [71].
Precision (PR), or positive predictive value (PPV), measures how frequently a positive prediction of the model is correct; it gives the percentage of predicted positives that are truly positive. The most frequently used statistic that combines SEN and precision is the F1 score [72], which is the harmonic mean of the two.
The Jaccard index (JI), also known as intersection over union (IoU), calculates the percentage of overlap between the model’s prediction output and the annotation ground-truth mask.
The spatial overlap between the model’s segmented region and the ground-truth tumor region is measured by the Dice similarity coefficient (DSC). A DSC value of zero means there is no spatial overlap between the model’s result and the actual tumor location, whereas a value of one means complete spatial overlap. The receiver operating characteristic (ROC) curve, which plots SEN against the false positive rate, is summarized by the area under the curve (AUC), a measure of a classifier’s ability to discriminate between classes.
The similarity between the segmentation produced by the model and the expert-annotated ground truth is measured by the similarity index (SI), which describes how comparable the identified tumor region is to that of the input image [71]. Table 3 summarizes the different performance equations.
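The metrics above can be computed directly from the four confusion counts. The following NumPy sketch evaluates a hypothetical binary segmentation mask against its ground truth; note that for binary masks the F1 score and the DSC coincide.

```python
import numpy as np

def binary_metrics(pred, truth):
    """Common evaluation metrics from binary prediction/ground-truth masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)                  # sensitivity / recall
    spe = tn / (tn + fp)                  # specificity
    pr = tp / (tp + fp)                   # precision / PPV
    f1 = 2 * pr * sen / (pr + sen)        # harmonic mean of PR and SEN
    ji = tp / (tp + fp + fn)              # Jaccard index / IoU
    dsc = 2 * tp / (2 * tp + fp + fn)     # Dice similarity coefficient
    return dict(ACC=acc, SEN=sen, SPE=spe, PR=pr, F1=f1, JI=ji, DSC=dsc)

# Toy flattened masks: 1 = tumor pixel, 0 = background
pred  = np.array([1, 1, 0, 0, 1, 0])
truth = np.array([1, 0, 0, 0, 1, 1])
m = binary_metrics(pred, truth)
```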

5. Literature Review

5.1. Article Selection

The major goal of this study is to review and understand the brain tumor classification and detection strategies developed worldwide between 2010 and 2023. The present study reviews the most popular techniques for detecting brain cancer that have been made available globally, in addition to examining how successful CAD systems are in this process.
We did not target any one publisher specifically, but we utilized articles from a variety of sources to account for the diversity of knowledge in a particular field. We collected appropriate articles from several internet scientific research article libraries. We searched the pertinent publications using IEEE Explore, Medline, ScienceDirect, Google Scholar, and ResearchGate.
Each time, the filter choice for the year (2010 to 2023) was chosen so that only papers from the chosen period were presented. Most frequently, we used terms like “detection of MRI images using deep learning,” “classification of brain tumor from CT/MRI images using deep learning,” “detection and classification of brain tumor using deep learning,” “CT brain tumor,” “PET brain tumor,” etc. This study offers an analysis of 53 chosen publications.

5.2. Publicly Available Datasets

The researchers tested the proposed methods on several publicly accessible datasets. In this part, several significant and challenging datasets are covered; the most challenging MRI datasets are the BRATS datasets. Table 4 presents a summary of the dataset names.

5.3. Related Work

In addition to the several techniques for segmenting brain tumors that we already highlighted, this section presents a summary of studies that use artificial intelligence to classify brain tumors.

5.3.1. MRI Brain Tumor Segmentation

This section describes the various machine learning, deep learning, region growth, thresholding, and literature-proposed brain tumor segmentation strategies.
To segment brain tumors, Gordillo et al. [80] built a fuzzy logic system using features extracted from MR images and expert knowledge. The system learns unsupervised and is fully automated. In trials conducted on two different forms of brain tumor, glioblastoma multiforme and meningioma, segmentation with this approach proved satisfactory, with accuracy ranging from a minimum of 71% to a maximum of 93%.
Employing fuzzy c-means clustering on MRI, Rajendran [81] presented a fuzzy-logic-based analysis for segmenting brain tumors. The tumor membership output of the fuzzy clustering was used to initialize a region-based technique that iteratively progresses toward the final tumor border. The approach was tested on 15 MR images with manual segmentation ground truth available. The overall result was suitable, with a sensitivity of 96.37% and an average Jaccard coefficient of 83.19%.
An SVM classifier was applied by Kishore et al. to categorize tumor pixels using vectors of features from MR images, such as mean intensity and LBP. Level sets and region-growing techniques were used for the segmentation. The experiments on their suggested methods used MR images with tumor regions manually defined by 11 different participants. Their suggested methods are effective, with a DSC score of 0.69 [82].
A framework for segmenting tumorous MRI 3D images was presented by Abbasi and Tajeripour [38]. The first phase improves the input image’s contrast using bias field correction. The data capacity is reduced using the multilevel Otsu technique in the second phase. LBP in three orthogonal planes and an enhanced histogram of images are employed in the third stage, the feature extraction step. Lastly, the random forest is employed as a classifier for distinguishing tumorous areas since it can work flawlessly with large inputs and has a high level of segmentation accuracy. The overall outcome was acceptable, with a mean Jaccard value of 87% and a DSC of 93%.
By combining the K-means and FCM-clustering approaches, Almahfud et al. [83] suggested a technique for segmenting human brain MRI images to identify brain cancers. Because K-means is sensitive to color variations, it can rapidly and effectively find optima but is prone to local outliers, so the K-means results are clustered again with FCM to refine the cluster boundaries along the convex contour, improving the results while keeping the calculation simple. Morphology and noise reduction procedures are also applied to increase accuracy. Sixty-two brain MRI scans were used in the study, and the accuracy rate was 91.94%.
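To make the FCM step concrete, here is a minimal NumPy sketch of fuzzy c-means on a synthetic one-dimensional intensity vector. It is a simplified illustration (single feature, fixed iteration count, two clusters), not the authors' implementation.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on a 1-D intensity vector x."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)              # fuzzily weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12  # distances to centers
        u = 1.0 / (d ** (2 / (m - 1)))                   # standard membership update
        u /= u.sum(axis=0)
    return centers, u

# Synthetic intensities: dark background pixels vs. bright tumor-like pixels
x = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])
centers, u = fuzzy_c_means(x)
labels = u.argmax(axis=0)     # defuzzify: hard label = cluster of maximum membership
```

As the text notes, a hard-clustering pass (e.g., K-means) can seed such soft memberships, which FCM then refines.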
Pereira et al. [69] proposed an automated segmentation technique based on a CNN architecture that explores small 3 × 3 kernels. Given the smaller number of weights in the network, using small kernels enables the creation of more intricate architectures and helps prevent overfitting. Additionally, they investigated intensity normalization as a preprocessing step, which, combined with data augmentation, was highly successful for segmenting brain tumors in MRI images. Their proposal was validated on the BRATS database, yielding Dice similarity coefficients of 0.88, 0.83, and 0.77 on the Challenge dataset for the whole, core, and enhancing regions, respectively.
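The role of a small 3 × 3 kernel can be shown with a plain NumPy sliding-window operation (implemented, as in CNN layers, as cross-correlation); the image and filter below are toy assumptions, not the authors' network.

```python
import numpy as np

def conv2d_3x3(img, kernel):
    """Valid-mode 2-D cross-correlation with a single 3 x 3 kernel (no padding, stride 1)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)   # toy 5 x 5 intensity patch
edge_kernel = np.array([[0., 0., 0.],
                        [-1., 1., 0.],
                        [0., 0., 0.]])           # horizontal difference filter
feat = conv2d_3x3(img, edge_kernel)              # 3 x 3 feature map
```

Stacking many such 3 × 3 layers yields a large receptive field with far fewer weights than a single large kernel, which is the overfitting argument made above.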
The authors of [84] suggested a unique approach for segmenting brain tumors according to the properties of separated local squares. The suggested procedure essentially consists of three parts. In the first stage, an image was divided into homogenous sections with roughly comparable properties and sizes using a super-pixel segmentation technique. The second phase was the extraction of gray statistical features and textural information. In the last phase of building the segmentation model, super-pixels were identified as either tumor regions or nontumor regions using SVM. They tested the suggested technique on 20 images from the BRATS dataset, where a DSC of 86.12% was attained.
The CAD system suggested by Gupta et al. [85] offers a noninvasive method for the accurate tumor segmentation and detection of gliomas. The system takes advantage of the super pixels’ combined properties and the FCM-clustering technique. The suggested CAD method recorded 98% accuracy for glioma detection in both low-grade and high-grade tumors.
Brain tumor segmentation using a CNN whose outputs are passed to an SVM classifier was proposed by Cui et al. [68]. Their algorithm comprises two cascaded phases. In the initial step, they trained a CNN to learn the mapping from the image region to the tumor label region. In the testing phase, they passed the testing image and the CNN’s predicted label output to an SVM classifier for precise segmentation. Tests and evaluations show that the suggested structure outperforms separate SVM-based or CNN-based segmentation, with a DSC of 86.12%.
The two-pathway-group CNN architecture described by Razzak et al. is a novel approach for brain tumor segmentation that simultaneously takes advantage of local and global contextual traits. This approach imposes equivariance in the 2PG-CNN model through parameter sharing to prevent instability and overfitting. The output of a basic CNN is handled as an extra source and combined at the last layer of the 2PG CNN, where the cascade architecture was included. When a group CNN was embedded into a two-route architecture for model validation on BRATS datasets, the results were a DSC of 89.2%, PR of 88.22%, and SEN of 88.32% [86].
A semantic segmentation model for the segmentation of brain tumors from multimodal 3D MRIs of the BRATS dataset was published in [87]. After experimenting with several normalization techniques, the authors found that group-norm and instance-norm performed equally well. They also tested more advanced data augmentation methods, such as random histogram pairing, linear image transformations, rotations, and random image filtering, but these did not show any significant benefit. Further, raising the network depth had no positive effect on performance, whereas increasing the number of filters consistently produced better results. On the BRATS final testing dataset, they obtained Dice coefficients of 0.826, 0.882, and 0.837 for the enhancing tumor, whole tumor, and tumor core, respectively.
CNN was used by Karayegen and Aksahin [88] to offer a semantic segmentation approach for autonomously segmenting brain tumors on BRATS image datasets that include images from four distinct imaging modalities (T1, T1C, T2, and FLAIR). This technique was effectively applied, and images were shown in a variety of planes, including sagittal, coronal, and axial, to determine the precise tumor location and parameters such as height, breadth, and depth. In terms of tumor prediction, the evaluation findings of the semantic segmentation carried out using these networks are highly encouraging. The mean IoU and mean prediction ratio were calculated to be 86.946 and 91.718, respectively.
A novel, completely automatic method for segmenting brain tumor regions was proposed by Ullah et al. [89] using a multiscale residual attention CNN (MRA-UNet). To maintain sequential information, MRA-UNet takes three sequential slices as its input. By employing multiscale learning in a cascade path, it can exploit an adaptable region-of-interest strategy and precisely segment the enhancing and core tumor regions. On the BRATS-2020 dataset, their method produced strong outcomes, with an overall Dice score of 90.18%.
A new technique for segmenting brain tumors using the fuzzy Otsu thresholding morphology (FOTM) approach was presented by Wisaeng and Sa-Ngiamvibool [90]. The values of each histogram in the original MRI image were modified by using a color normalization preprocessing method in conjunction with histogram specification. The findings unambiguously demonstrate average accuracy indices of 93.77%, 94.32%, and 94.37% for glioma, meningioma, and pituitary images, respectively. A summary of MRI brain tumor segmentation is provided in Table 5.

5.3.2. MRI Brain Tumor Classification Using ML

The automated classification of brain cancers using MRI images has been the subject of several studies. Data cleaning, feature extraction, and feature selection are the basic steps in the machine learning (ML) process that have been used for this purpose. Building an ML model based on labeled samples is the last step. A summary of MRI brain tumor classification using ML is provided in Table 6.
An NN-based technique to categorize a given MR brain image as either normal or abnormal is presented in [91]. In this method, features were first extracted from images using the wavelet transform, and then the dimensionality of the features was reduced using PCA methodology. The reduced features were routed to a back-propagation NN that uses a scaled conjugate gradient (SCG) to determine the best weights for the NN. This technique was used on 66 images, 18 of which were normal and 48 abnormal. On training and test images, the classification accuracy was 100%.
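The PCA dimensionality-reduction step of such pipelines can be sketched with an SVD in NumPy. The feature matrix below is a random stand-in for wavelet coefficients, and the choice of 19 components is illustrative, not taken from [91].

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                              # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)    # rows of Vt = principal axes
    return Xc @ Vt[:k].T                                 # reduced k-dimensional features

rng = np.random.default_rng(2)
# Hypothetical wavelet-derived feature matrix: 66 images x 1024 coefficients
X = rng.normal(size=(66, 1024))
X_reduced = pca_reduce(X, 19)    # compact features to feed the back-propagation NN
```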
An automated and efficient CAD method based on ensemble classifiers was proposed by Arakeri and Reddy [36] for classifying brain cancers on MRI images as benign or malignant. A tumor’s texture, shape, and border properties were extracted to represent it, and the ICA approach was used to select the most significant features. An ensemble classifier, consisting of SVM, ANN, and kNN classifiers, is trained on these features to describe the tumor. A dataset consisting of 550 patients’ T1- and T2-weighted MR images was used for the experiments. With an accuracy of 99.09% (sensitivity 100% and specificity 98.21%), the experimental findings demonstrated that the suggested classification approach achieves strong agreement with the combined classifier and is extremely successful in the identification of brain tumors. Figure 9 illustrates the CAD method based on ensemble classifiers.
In [92], the authors suggested a novel, wavelet-energy-based method for automatically classifying MR images of the human brain into normal or abnormal. The classifier was SVM, and biogeography-based optimization (BBO) was utilized to enhance the SVM’s weights. They succeeded in achieving 99% precision and 97% accuracy.
Amin et al. [28] suggest an automated technique to distinguish between malignant and benign brain MRI images. The segmentation of potential lesions has used a variety of methodologies. Then, considering shape, texture, and intensity, a feature set was selected for every candidate lesion. The SVM classifier is then used on the collection of features to compare the proposed framework’s precision using various cross-validations. Three benchmark datasets, including Harvard, Rider, and Local, are used to verify the suggested technique. For the procedure, the average accuracy was 97.1%, the area under the curve was 0.98, the sensitivity was 91.9%, and the specificity was 98.0%.
A suitable CAD approach for classifying brain tumors is proposed in [93]. The database includes meningioma, astrocytoma, normal brain areas, and other primary brain tumors. Radiologists selected 20 × 20 regions of interest (ROIs) for every image in the dataset. Altogether, these ROIs were used to extract 371 intensity and texture features. The three classes were then separated using an ANN classifier, and the overall classification accuracy was 92.43%.
Four hundred twenty-eight T1 MR images from 55 individuals were used in a varied dataset for multiclass brain tumor classification [94]. A content-based active contour model extracted 856 ROIs, from which 218 intensity and texture features were extracted. PCA was employed in this study to reduce the size of the feature space. An ANN was then used to classify the six categories, and the classification accuracy reached 85.5%.
A unique strategy for classifying brain tumors in MRI images was proposed in [95], employing improved structural descriptors and a hybrid kernel SVM. To improve the texture feature extraction process using statistical parameters and thus better classify the image, they used GLCM and histograms to derive the texture features of every region. Different kernels were combined to create a hybrid kernel SVM classifier to enhance the classification process. They applied this technique to axial T1 brain MRI images only and achieved 93% accuracy with their suggested strategy.
A hybrid system composed of two ML techniques was suggested in [96] for classifying brain tumors. For this, 70 brain MR images overall (60 abnormal, 10 normal) were considered. DWT was used to extract features from the images, and the total number of features was reduced using PCA. Following feature extraction, a feed-forward back-propagation ANN (FP-ANN), which updates its weights with the back-propagation learning method, and KNN were applied individually to the reduced features. Using KNN and FP-ANN, this technique achieves 97% and 98% accuracy, respectively [96].
A strategy for classifying brain MRI images was presented in [97]. Initially, they used an enhanced image improvement method that comprises two distinct steps: noise removal and contrast enhancement using histogram equalization. Then, using a DWT to extract features from an improved MR brain image, they further decreased these features by mean and standard deviation. Finally, they developed a sophisticated deep neural network (DNN) to classify the brain MRI images as abnormal or normal, and their strategy achieved 95.8%.
Table 6. MRI brain tumor classification using ML.

| Ref. | Scan | Year | Feature Extraction | Feature Selection | Classification | Acc. |
|------|------|------|--------------------|-------------------|----------------|------|
| [96] | MRI | 2010 | GLCM | PCA | ANN and KNN | 98% and 97% |
| [91] | MRI | 2011 | Wavelet | PCA | Back-propagation NN | 100.00% |
| [94] | MRI | 2013 | Intensity and texture | PCA | ANN | 85.50% |
| [95] | MRI | 2014 | GLCM | - | SVM | 93.00% |
| [36] | MRI | 2015 | Texture and shape | ICA | SVM | 99.09% |
| [92] | MRI | 2015 | Wavelet | - | SVM | 97.00% |
| [28] | MRI | 2017 | Texture and shape | - | SVM | 97.10% |
| [93] | MRI | 2017 | Intensity and texture | - | ANN | 92.43% |
| [97] | MRI | 2020 | DWT | Mean and standard deviation | DNN | 95.8% |

5.3.3. MRI Brain Tumor Classification Using DL

Difficulties remain in categorizing brain cancers from an MRI scan, despite encouraging developments in ML algorithms for the classification of brain tumors into their different types. These difficulties mostly result from ROI detection, and typical labor-intensive manual feature extraction methods may not be effective enough [98]. Owing to the nature of deep learning, the categorization of brain tumors is now a data-driven problem rather than a challenge based on manually created features [99]. CNN is one of the deep learning models frequently utilized in brain tumor classification tasks and has produced significant results [100].
According to a study [101], the CNN algorithm can be used to classify the severity of gliomas into two categories (low or high severity) as well as into multiple grades of severity (Grades II, III, and IV). The classifier reached accuracy rates of 71% and 96%.
A DL approach based on a CNN was proposed by Sultan et al. [7] to classify different kinds of brain tumors using two publicly available datasets. The proposed method’s block diagram is presented in Figure 10. The first dataset divides tumors into meningioma, pituitary, and glioma tumors; the other distinguishes among Grade II, III, and IV gliomas. The first and second datasets contain 233 and 73 patients, with a total of 3064 and 516 T1 images, respectively. The suggested network configuration achieves the best overall accuracy, 96.13% and 98.7%, for the two studies, which constitutes significant performance [7].
Similarly, ref. [102] showed how to classify brain MRI scan images into malignant and benign using CNN algorithms in conjunction with augmenting data and image processing. They evaluated the effectiveness of their CNN model with pretrained VGG-16, Inception-v3, and ResNet-50 models using the transfer learning methodology. Even though the experiment was carried out on a relatively small dataset, the results reveal that the model’s accuracy result is quite strong and has a very low complexity rate, as it obtained 100% accuracy, compared to VGG-16’s 96%, ResNet-50’s 89%, and Inception-V3’s 75%. The structure of the suggested CNN architecture is shown in Figure 11.
For accurate glioma grade prediction, researchers developed a customized CNN-based deep learning model [103] and evaluated its performance against AlexNet, GoogleNet, and SqueezeNet via transfer learning. They trained and evaluated the models on 104 clinical glioma patients (50 LGGs and 54 HGGs). The training data was expanded using a variety of data augmentation methods, and a five-fold cross-validation procedure was used to assess each model’s performance. According to the study’s findings, their specially created deep CNN model matched or outperformed the pretrained models. The custom model’s accuracy, sensitivity, F1 score, specificity, and AUC values were, respectively, 0.971, 0.980, 0.970, 0.963, and 0.989.
A novel transfer learning-based active learning paradigm for classifying brain tumors was proposed by Ruqian et al. [104]. Figure 12 describes the active learning workflow. On an MRI training dataset of 203 patients and a baseline validation dataset of 66 patients, they used a 2D slice-based technique to train and fine-tune the model. Their suggested approach allowed the model to reach an area under the ROC curve (AUC) of 82.89%. To further investigate the robustness of their strategy, the researchers built a balanced dataset and ran the same process on it; compared to the baseline AUC of 78.48%, the model’s AUC was 82%.
A total of 131 patients with glioma were enrolled in [105]. A rectangular ROI containing around 80% of the tumor was used to segment the tumor images. The test dataset was then created by randomly selecting 20% of the patient-level data. Models previously trained on the expansive natural image database ImageNet were applied to the MRI images, and AlexNet and GoogleNet were both trained from scratch and fine-tuned. Five-fold cross-validation (CV) was used on the patient-level split to evaluate the classification task. The averaged performance metrics for validation accuracy, test accuracy, and test AUC from the five-fold CV of GoogleNet were, respectively, 0.867, 0.909, and 0.939.
Hamdaoui et al. [106] proposed an intelligent medical decision-support system for identifying and categorizing brain tumors using images from the risk of malignancy index. They employed deep transfer learning principles to compensate for the scarcity of training data required to construct a CNN model. For this, they selected seven CNN architectures pretrained on ImageNet and carefully fine-tuned them on brain tumor MRI data gathered from the BRATS database, as shown in Figure 13. To increase their model’s accuracy, only the prediction that receives the highest score among those made by the seven pretrained CNNs is produced. They evaluated the effectiveness of the primary two-class model, which covers LGG and HGG brain cancers, using ten-fold cross-validation. The test accuracy, F1 score, test precision, and test sensitivity of their suggested model were 98.67%, 98.06%, 98.33%, and 98.06%, respectively.
A new AI diagnosis model called EfficientNetB0 was created by Khazaee et al. [107] to assess and categorize human brain gliomas utilizing sequences from MR images. They used a common dataset (BRATS-2019) to validate the new AI model, and they showed that the AI components—CNN and transfer learning—provided outstanding performance for categorizing and grading glioma images, with 98.8% accuracy.
In [70], the researchers developed a model using transfer learning and a pretrained ResNet18 to identify basal ganglia germinomas more accurately. In this retrospective analysis, 73 patients with basal ganglia germinoma were enrolled. Brain tumors were manually segmented based on both T1 and T2 data, and the T1 sequence was utilized to create the tumor classification model. Transfer learning and a 2D convolutional network were used. The model was trained with five-fold cross-validation, resulting in a mean AUC of 88%.
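The essence of such transfer learning, freezing a pretrained backbone and training only a new classification head, can be sketched in NumPy. Here a fixed random projection stands in for the pretrained convolutional features, and the data, head size, and learning rate are toy assumptions rather than any of the cited setups.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a pretrained backbone: a frozen projection whose weights
# are NOT updated during fine-tuning; only the new head below is trained.
W_frozen = rng.normal(size=(16, 32))
W_saved = W_frozen.copy()                        # kept to verify the backbone stays frozen

def extract_features(x):
    f = np.maximum(x @ W_frozen, 0.0)            # frozen ReLU features
    return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-9)

# Toy two-class data standing in for tumor/non-tumor images
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

F = extract_features(X)
w, b = np.zeros(32), 0.0                         # new classification head

def log_loss(w, b):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

loss_before = log_loss(w, b)
for _ in range(500):                             # gradient descent on the head only
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.05 * (F.T @ (p - y)) / len(y)
    b -= 0.05 * np.mean(p - y)
loss_after = log_loss(w, b)
```

In frameworks such as PyTorch or Keras, the same idea is expressed by loading ImageNet weights, disabling gradients on the backbone, and replacing the final layer.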
Researchers suggested an effective hyperparameter optimization method for CNN based on Bayesian optimization [108]. This method was assessed by categorizing 3064 T1 images into three types of brain cancers (glioma, pituitary, and meningioma). Five popular deep pretrained models are compared to the improved CNN’s performance using transfer learning. Their CNN achieved 98.70% validation accuracy after applying Bayesian optimization.
A novel generated transfer DL model was developed by Alanazi et al. [109] for the early diagnosis of brain cancers into their different categories, such as meningioma, pituitary, and glioma. Several layers of the models were first constructed from scratch to test the performance of standalone CNN models performed for brain MRI images. The weights of the neurons were then revised using the transfer learning approach to categorize brain MRI images into tumor subclasses using the 22-layer, isolated CNN model. Consequently, the transfer-learned model that was created had an accuracy rate of 95.75%.
Rizwan et al. [110] suggested a method to identify various brain tumor classes using a Gaussian CNN on two datasets. One dataset is employed to categorize lesions into pituitary, glioma, and meningioma; the other distinguishes between the three glioma grades (II, III, and IV). The first and second datasets contain 233 and 73 patients, with a total of 3064 and 516 contrast-enhanced T1 images, respectively. For the two datasets, the suggested method has an accuracy of 99.8% and 97.14%.
A seven-layer CNN was suggested in [111] to assist with the three-class categorization of brain MR images. To decrease computing time, separable convolution was used. The suggested separable CNN model achieved 97.52% accuracy on a publicly available dataset of 3064 images.
Several pretrained CNNs were utilized in [112], including GoogleNet, AlexNet, ResNet50, ResNet101, VGG-16, VGG-19, InceptionResNetV2, and Inceptionv3. To accommodate additional image categories, the final few layers of these networks were modified. Data from the clinical, Harvard, and Figshare repositories were used to assess these models, with the dataset divided into training and testing splits in a 60:40 ratio. The validation on the test set demonstrates that, compared to the other proposed models, AlexNet with transfer learning delivered the best performance in the shortest time. The suggested method obtained accuracies of 100%, 94%, and 95.92% on the three datasets and is more generic because it does not require any manually created features.
The suggested framework [113] describes three experiments that classified brain malignancies such as meningiomas, gliomas, and pituitary tumors using three designs of CNN (AlexNet, VGGNet, and GoogleNet). Using the MRI slices of the brain tumor dataset from Figshare, each study then investigates transfer learning approaches like fine-tuning and freezing. The data augmentation approaches are applied to the MRI slices for results generalization, increasing dataset samples, and minimizing the risk of overfitting. The fine-tuned VGG16 architecture attained the best accuracy at 98.69% in terms of categorization in the proposed studies.
An effective hybrid optimization approach was used in [114] for the segmentation and classification of brain tumors. To improve categorization, the CNN features were extracted. The suggested chronological Jaya honey badger algorithm (CJHBA) was used to train the deep residual network (DRN), which was used to conduct the classification by using the retrieved features as input. The Jaya algorithm, the honey badger algorithm (HBA), and the chronological notion are all combined in the proposed CJHBA. Using BRATS-2018, the performance is assessed. The highest accuracy is 92.10%. A summary of MRI brain tumor classification using DL is provided in Table 7.

5.3.4. Hybrid Techniques

Hybrid strategies use multiple approaches to achieve high accuracy, emphasizing each approach’s benefits while minimizing its drawbacks. Typically, the first method employs a segmentation technique to identify the affected part of the brain, and the second method performs the classification. Hybrid techniques are summarized in Table 8.
An integrated SVM- and ANN-based classification method can be found in [115]. The FCM method is first used to segment the brain MRI images, with updated membership and k values that diverge from the standard method. Two types of features were extracted from the segmented images to distinguish and categorize tumors. The first category, statistical features, was used with SVM to differentiate between normal and abnormal brain MRI images; this SVM technique has an accuracy rate of 97.44%. Area, perimeter, orientation, and eccentricity were additional criteria used to distinguish the tumor and the various malignant stages I through IV. The tumor categories and stages of malignant tumors are classified with the ANN back-propagation technique, and this suggested strategy has a 97.37% accuracy rate for categorizing tumor stages.
A hybrid segmentation strategy using ANN was suggested in [116] to enhance the brain tumor’s classification outcomes. First, the tumor region was segmented using skull stripping and thresholding. The segmented tumor was subsequently recognized using the canny algorithm, and the features of the identified tumor cell region were then used as the input of the ANN for classification; 98.9% accuracy can be attained with the provided strategy.
A system that can detect, identify, and categorize the different types of tumors in T1 and T2 image sequences was proposed by Ramdlon et al. [52]. Only the axial sections of the MRI results, which are divided into three classifications (glioblastoma, astrocytoma, and oligodendroglioma), are used for the data analysis in this method. Basic image processing techniques were used to identify the tumor region, including image enhancement, binarization, morphology, and watershed. Following segmentation and shape feature extraction, a KNN classifier was used to classify the tumors; 89.5% of tumors were correctly classified.
Gurbina et al. [30] described an integrated DWT and SVM classification method. The initial segmentation of the brain MRI images was performed using Otsu’s method. DWT features were obtained from the segmented images to identify and categorize tumors, and brain MRI images were divided into benign and malignant categories using an SVM classifier. This SVM method has a 99% accuracy rate.
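Otsu's method itself is short enough to sketch: it picks the threshold that maximizes the between-class variance of the intensity histogram. The bimodal "image" below is a synthetic stand-in for a brain slice with a bright tumor-like region, and intensities are assumed to lie in [0, 1].

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()             # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:t] * centers[:t]).sum() / w0        # class mean intensities
        mu1 = (p[t:] * centers[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[t]
    return best_t

# Synthetic bimodal "image": dark background plus a bright tumor-like region
img = np.concatenate([np.full(900, 0.2), np.full(100, 0.8)])
t = otsu_threshold(img)
mask = img > t    # binary tumor mask
```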
The objective of the study in [117] is multilevel segmentation for effective feature extraction and brain tumor classification from MRI data. After preprocessing the MRI image data, the authors used thresholding, the watershed algorithm, and morphological methods for segmentation. Features were extracted through a CNN, and an SVM classified the tumor images as malignant or noncancerous. The proposed algorithm has an overall accuracy of 87.4%.
The classification of brain tumors into three types—glioblastoma, sarcoma, and metastatic—has been proposed by the authors of [118]. The authors first used FCM clustering to segment the brain tumor and then DWT to extract the features. PCA was then used to minimize the characteristics. Using six layers of DNN, categorization was completed. The suggested method displays 98% accuracy.
The method presented by Babu et al. [119] focused on segmenting and categorizing brain cancers from MRI images. The procedure comprises four steps: image denoising, tumor segmentation, feature extraction, and hybrid classification. They applied thresholding to extract tumors from the brain MRI images and a wavelet-based method to extract features; the final hybrid classification was performed using a CNN. The experimental findings showed a segmentation accuracy of 95.23%, while the suggested optimized CNN reached a classification accuracy of 99%.
Ansari [120] suggested an improved SVM as a novel algorithm, recommending four steps for identifying and classifying brain tumors from MRI data: preprocessing, image segmentation, feature extraction, and image categorization. Tumors were segmented using a fuzzy clustering approach, and key features were extracted using the GLCM; an improved SVM was then used in the classification stage. The suggested approach has an 88% accuracy rate.
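GLCM texture features of the kind used here can be sketched in plain numpy; the quantization level, single horizontal offset, and toy images are illustrative choices, not the exact configuration of [120]:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast, energy, and homogeneity from a horizontal-offset GLCM."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    p = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        p[i, j] += 1                      # count co-occurrences at offset (0, 1)
    p /= p.sum()
    di, dj = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    contrast = float(np.sum(p * (di - dj) ** 2))
    energy = float(np.sum(p ** 2))
    homogeneity = float(np.sum(p / (1.0 + np.abs(di - dj))))
    return contrast, energy, homogeneity

flat = np.full((8, 8), 5.0)                              # perfectly uniform texture
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0   # high-contrast texture
```

A uniform patch yields zero contrast and maximal energy and homogeneity, while a checkerboard yields high contrast: exactly the kind of discriminative signal a texture-based classifier exploits.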
A fully automated system for segmenting and diagnosing brain tumors was proposed by Farajzadeh et al. [121]. First, five distinct preprocessing techniques are applied to an MR image, the image is passed through a DWT, and six local attributes are extracted from it. The processed images are then fed to a neural network that extracts higher-order attributes; another network weighs these features and concatenates them with the initial MR image. The concatenated data are finally fed to a hybrid U-Net to segment the tumor and classify the image. For segmenting and categorizing brain tumors, they attained accuracy rates of 98.93% and 98.81%, respectively.
Table 8. Hybrid techniques.
| Ref. | Year | Segmentation Method | Feature Extraction | Classifier | Accuracy |
|------|------|---------------------|--------------------|------------|----------|
| [115] | 2017 | FCM | shape and statistical | SVM and ANN | 97.44% and 97.37% |
| [118] | 2017 | FCM | DWT and PCA | CNN | 98.00% |
| [52] | 2019 | watershed | shape | KNN | 89.50% |
| [30] | 2019 | Otsu's | DWT | SVM | 99.00% |
| [117] | 2020 | thresholding and watershed | CNN | SVM | 87.40% |
| [116] | 2020 | Canny | GLCM and Gabor | ANN | 98.90% |
| [119] | 2023 | thresholding | wavelet | CNN | 99.00% |
| [120] | 2023 | fuzzy clustering | GLCM | improved SVM | 88.00% |
| [121] | 2023 | U-Net | DWT | CNN | 98.93% |

5.3.5. Various Segmentation and Classification Methods Employing CT Images

Wavelet statistical texture features (WST) and wavelet co-occurrence texture features (WCT) were combined in [122] to automatically segment brain tumors in CT images. After a GA was used to choose the best texture features, two different NN classifiers were tested for segmenting the tumor region. This approach is shown to provide good outcomes, with an accuracy rate above 97%. The architecture of the NN is shown in Figure 14.
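The GA-based feature selection step can be sketched as follows; the toy dataset, the KNN-based fitness function, and all GA hyperparameters are illustrative assumptions rather than the configuration of [122]:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, d = 120, 10
X = rng.normal(size=(n, d))
y = (X[:, 2] + X[:, 7] > 0).astype(int)   # only features 2 and 7 are informative

def fitness(mask):
    """Cross-validated KNN accuracy using only the selected features."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, d)).astype(bool)   # random binary chromosomes
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]      # selection: keep top half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, d)                      # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        children.append(child ^ (rng.random(d) < 0.1))  # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
```

The surviving chromosome is a binary mask over the feature set; selection pressure steers it toward subsets that discard uninformative texture features.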
A novel dominant feature extraction methodology for the segmentation and classification of tumors in brain CT images, utilizing an SVM with GA feature selection, was presented in [123]. FCM and K-means were used during the segmentation step, and the GLCM and WCT during the feature extraction stage. This approach is shown to provide positive results, with an accuracy rate above 98%.
An improved semantic segmentation model for CT images was suggested in [124], and classification is also performed in that work. In the proposed architecture, a semantic segmentation network with several convolutional and pooling layers first segments the brain image; the GoogLeNet model then divides the tumor into three groups: meningioma, glioma, and pituitary tumor. The overall accuracy achieved with this strategy was 99.6%.
A unique correlation learning mechanism (CLM) utilizing a CNN and an ANN was proposed by Woźniak et al. [125]. The CNN used a support neural network to determine the best filters for its convolution and pooling layers; consequently, the main neural classifier became more efficient and learned more quickly. Results indicated that the CLM model can achieve 96% accuracy, 95% precision, and 95% recall.
Nanmaran et al. [126] examined the contribution of image fusion to an enhanced brain tumor classification framework; such fusion-based tumor categorization models can be applied more successfully to personalized therapy. A discrete cosine transform (DCT)-based fusion technique was used to combine MRI and SPECT images of benign and malignant brain tumors. SVM, KNN, and decision-tree classifiers were tested on features extracted from the fused images; the SVM outperformed the KNN and decision-tree classifiers, with an overall accuracy of 96.8%, specificity of 93%, recall of 94%, precision of 95%, and an F1 score of 91%. Table 9 summarizes segmentation and classification methods employing CT images.
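The DCT-based fusion idea can be sketched with a simple coefficient-max rule; the rule, the random stand-in images, and the orthonormal transform choice are assumptions, not necessarily the exact scheme of [126]:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(img_a, img_b):
    """Fuse two registered images by keeping, per DCT coefficient,
    whichever source has the larger magnitude."""
    ca = dctn(img_a, norm="ortho")
    cb = dctn(img_b, norm="ortho")
    fused = np.where(np.abs(ca) >= np.abs(cb), ca, cb)
    return idctn(fused, norm="ortho")

rng = np.random.default_rng(0)
mri = rng.random((32, 32))    # stand-in for an MRI slice
spect = rng.random((32, 32))  # stand-in for a registered SPECT slice
fused = dct_fuse(mri, spect)
```

The max-magnitude rule favors whichever modality carries the stronger response at each frequency, which is why fused images can expose features that neither modality shows clearly alone. Fusing an image with itself recovers the image exactly, a quick sanity check on the transform pair.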

6. Discussion

Most brain tumor segmentation and classification strategies are presented in this review. The quantitative efficiency of numerous conventional ML- and DL-based algorithms is covered in this article. Figure 15 displays the total number of publications published between 2010 and 2022 used in this review. Figure 16 displays the total number of articles published that perform classification, segmentation, or both.
Early brain tumor segmentation relied on traditional image segmentation methods such as region growing, whose biggest challenges are noise, low image quality, and sensitivity to the initial seed point. The second generation of segmentation methods classified pixels into multiple classes using unsupervised ML, such as FCM and K-means; these techniques are nevertheless quite sensitive to noise. Pixel-level classification approaches using conventional supervised ML were then introduced to overcome this difficulty. These techniques are frequently combined with feature engineering, which extracts tumor-descriptive information for model training, and postprocessing helps further improve the results. Deep learning-based approaches accomplish end-to-end tumor segmentation from an MRI image through a pipeline of component networks; these models frequently eliminate the requirement for manually built features by extracting tumor-descriptive information automatically. However, their application in the medical domain is limited by the need for large training datasets and the complexity of interpreting the models.
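A minimal example of the second-generation, unsupervised pixel-clustering approach (here K-means on intensities; the synthetic slice is an illustrative assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic slice: dim background with a brighter patch standing in for a tumor.
img = rng.normal(0.2, 0.03, size=(48, 48))
img[15:30, 15:30] = rng.normal(0.8, 0.03, size=(15, 15))

# Cluster pixel intensities into two classes and keep the brighter cluster.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(img.reshape(-1, 1))
labels = km.labels_.reshape(img.shape)
mask = labels == np.argmax(km.cluster_centers_.ravel())
```

With clean intensities this works well; the noise sensitivity discussed above appears when the two intensity distributions overlap, which is exactly where supervised, feature-based methods gain an advantage.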
In addition to the segmentation of the tumor region from the MRI scan, classification of the tumor into its appropriate type is crucial for diagnosis and treatment planning, which in today's medical practice necessitates a biopsy. Several approaches using shallow ML and DL have been put forth for classifying brain tumors. Shallow ML techniques typically include preprocessing, ROI identification, and feature extraction steps. Extracting descriptive information is difficult because of the inherent noise of MRI acquisition as well as differences in the shape, size, and position of tumor tissue. As a result, deep learning algorithms are currently the most advanced method for classifying many types of brain tumors, including astrocytomas, gliomas, meningiomas, and pituitary tumors. This review has covered several such classification approaches.
The noisy nature of MRI images is one of the most frequent difficulties in ML-based segmentation and classification of brain tumors. To increase the precision of segmentation and classification models, noise estimation and denoising of MRI images is a vital preprocessing operation. Accordingly, several methods have been suggested for denoising MRI images, including the median filter [115], the Wiener filter and DWT [30], and DL-based methods [117].
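As a concrete illustration of the denoising step, the sketch below applies a median filter (one of the approaches cited above) to impulse-corrupted data; the synthetic image and noise model are assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[20:44, 20:44] = 1.0           # a bright square standing in for anatomy

# Salt-and-pepper corruption, the case where a median filter shines.
noisy = clean.copy()
spots = rng.random(clean.shape)
noisy[spots < 0.05] = 0.0           # pepper
noisy[spots > 0.95] = 1.0           # salt

denoised = median_filter(noisy, size=3)
err_noisy = np.abs(noisy - clean).mean()
err_denoised = np.abs(denoised - clean).mean()
```

The 3x3 median suppresses isolated impulses while preserving the edge of the square, which is why medians are preferred over linear smoothing for this noise type.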
Large amounts of data are needed for DL models to operate effectively, but few suitable datasets are available. Data augmentation helps expand small datasets and build a well-generalized model. However, no common augmentation method for MRI images has yet been established: although researchers have presented many techniques, their primary goal is simply to increase the number of images, and they mostly ignore spatial and textural relationships. A standardized augmentation technique is needed so that comparative analyses can be conducted on a common foundation.
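A minimal augmentation sketch of the kind discussed above; the specific transforms, noise level, and helper name `augment` are illustrative assumptions (and, as noted, such naive transforms ignore spatial and textural relationships):

```python
import numpy as np

def augment(img, rng):
    """Yield simple label-preserving variants: flips, 90-degree rotations, noise."""
    out = [img, np.fliplr(img), np.flipud(img)]
    out += [np.rot90(img, k) for k in (1, 2, 3)]
    out.append(np.clip(img + rng.normal(0, 0.02, img.shape), 0.0, 1.0))
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))   # stand-in for a normalized MRI slice
batch = augment(img, rng)    # 7 variants from a single image
```

Each call multiplies the effective dataset size sevenfold, but anatomically aware augmentation (elastic deformations, intensity nonuniformity) is what an MRI-specific standard would need to specify.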

7. General Problems and Challenges

For ML, features are first extracted manually and then fed into the ML-based classification system. Continuous variation within image classes makes ML-based image classification challenging. Furthermore, the distance metrics used by feature extraction methods make it difficult to reliably determine the similarity between two images.
Deep learning analyzes numerous parameters and optimizes them to extract and select features on its own. However, this feature selection is not fully intelligent: pooling, which is typically applied to reduce the number of parameters, can also eliminate features that would be useful to the system as a whole.
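The parameter-reducing pooling step referred to above can be made concrete; this numpy sketch shows that 2x2 max pooling keeps one value per window and discards the other three:

```python
import numpy as np

def max_pool2x2(x):
    """2x2 max pooling: keeps the strongest response per window,
    discarding the other three values (and their positions)."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
p = max_pool2x2(x)   # 16 activations reduced to 4
```

Three quarters of the activations, along with their exact positions, are gone after one pooling layer; this is the information loss the paragraph above warns about.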
Furthermore, DL models are data hungry, and training them couples those data with millions or even billions of parameters. Enormous amounts of memory and GPU-based computing are therefore required, but the high cost of such hardware puts it beyond the reach of many. Consequently, many researchers must build models that fit within their available budgets, which significantly impacts the quality of their studies.
The noisy nature of MRI images is one of the most frequent difficulties in ML-based brain tumor detection and classification. Preprocessing is necessary to remove all forms of noise from the data and make it more suitable for the task at hand. Preprocessing difficulties exist in all the available datasets; the BraTS datasets, for example, suffer from motion artifacts and noise. No established preprocessing standard currently exists, and subpar application software can decrease rather than improve image quality.

7.1. Brain Cancer and Other Brain Disorders

7.1.1. Stroke

Hemorrhagic strokes result from blood vessel injury or aberrant vascular structure, while ischemic strokes occur when the brain's blood supply is cut off. Although strokes and brain tumors are two distinct illnesses, the connections between them have been studied [127].
The authors of [127] discovered that stroke patients are more likely to develop brain cancer than other types of cancer. Another intriguing conclusion of the study is that women between the ages of 40 and 60 and elderly stroke patients are at particularly high risk of developing brain cancer.

7.1.2. Alzheimer’s Disease

Short-term memory loss is an initial symptom of Alzheimer's disease (AD), a chronic neurodegenerative illness that worsens as it progresses [108]. Although AD and cancer are two distinct diseases, several studies have found an inverse association between them. The research found that patients who had had cancer showed a 33% lower risk of AD than individuals without cancer over a mean follow-up of 10 years. Another intriguing finding was that people with AD had a 61% lower risk of developing cancer.

8. Future Directions

The main applications of CADx systems are currently in education and training; clinical practice is not yet one of them, and CADx-based systems are not widely used in clinics. One cause is the absence of established techniques for assessing CADx systems in a practical environment. The performance metrics outlined in this study provide a helpful and necessary baseline for comparing algorithms, but because they depend heavily on the training set, more advanced evaluation tools are required.
It is noteworthy that the image formats used to train the models were those characteristic of AI research (PNG) rather than those of radiology (DICOM, NIfTI). Many of the articles analyzed also lacked authors with clinical backgrounds.
A different but related technical issue that may affect the performance of CADx systems in practice is the need to train physicians to interact with such systems and interpret their results for diagnostic decisions; this issue remains to be addressed in the papers included in this review. Greater participation by doctors in the research process would benefit both the relevance of projects and the acceptance of their findings.

9. Conclusions

A brain tumor is an abnormal growth of brain tissue that affects the brain's ability to function normally. The primary objective in medical image processing is to extract accurate and helpful information, with the fewest possible errors, using algorithms. The four steps involved in segmenting and categorizing brain tumors from MRI data are preprocessing, image segmentation, feature extraction, and image classification. Diagnosis, treatment strategy, and patient follow-up can all be greatly enhanced by automating the segmentation and categorization of brain tumors. Creating a fully autonomous system that can be deployed on clinical floors remains difficult because of the irregular size, shape, and appearance of tumors. The review's primary goal is to present the state of the art in the field of brain cancer, including the pathophysiology of the disease, imaging technologies, WHO classification standards for tumors, primary methods of diagnosis, and CAD algorithms for brain tumor classification using ML and DL techniques. Automating the segmentation and categorization of brain tumors with deep learning offers many advantages over region-growing and shallow ML systems, primarily because of DL algorithms' powerful feature-learning capabilities. Although DL techniques have made a substantial contribution, a general technique is still needed. This study reviewed 53 studies that used ML and DL to classify brain tumors from MRI and examined the challenges and obstacles that CAD brain tumor classification techniques face in practical application and advancement, together with a thorough examination of the variables that might impact classification accuracy. The publicly available datasets used in the experiments evaluated in this paper, including their MRI sequences and the web addresses of their online repositories, are listed in Table 4.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Watson, C.; Kirkcaldie, M.; Paxinos, G. The Brain: An Introduction to Functional Neuroanatomy. 2010. Available online: http://ci.nii.ac.jp/ncid/BB04049625 (accessed on 22 May 2023).
  2. Jellinger, K.A. The Human Nervous System Structure and Function, 6th edn. Eur. J. Neurol. 2009, 16, e136. [Google Scholar] [CrossRef]
  3. DeAngelis, L.M. Brain tumors. N. Engl. J. Med. 2001, 344, 114–123. [Google Scholar] [CrossRef]
  4. Louis, D.N.; Perry, A.; Wesseling, P.; Brat, D.J.; Cree, I.A.; Figarella-Branger, D.; Hawkins, C.; Ng, H.K.; Pfister, S.M.; Reifenberger, G.; et al. The 2021 WHO Classification of Tumors of the Central Nervous System: A summary. Neuro-Oncology 2021, 23, 1231–1251. [Google Scholar] [CrossRef]
  5. Hayward, R.M.; Patronas, N.; Baker, E.H.; Vézina, G.; Albert, P.S.; Warren, K.E. Inter-observer variability in the measurement of diffuse intrinsic pontine gliomas. J. Neuro-Oncol. 2008, 90, 57–61. [Google Scholar] [CrossRef]
  6. Mahaley, M.S., Jr.; Mettlin, C.; Natarajan, N.; Laws, E.R., Jr.; Peace, B.B. National survey of patterns of care for brain-tumor patients. J. Neurosurg. 1989, 71, 826–836. [Google Scholar] [CrossRef] [PubMed]
  7. Sultan, H.H.; Salem, N.M.; Al-Atabany, W. Multi-Classification of Brain Tumor Images Using Deep Neural Network. IEEE Access 2019, 7, 69215–69225. [Google Scholar] [CrossRef]
  8. Johnson, D.R.; Guerin, J.B.; Giannini, C.; Morris, J.M.; Eckel, L.J.; Kaufmann, T.J. 2016 Updates to the WHO Brain Tumor Classification System: What the Radiologist Needs to Know. RadioGraphics 2017, 37, 2164–2180. [Google Scholar] [CrossRef] [PubMed]
  9. Buckner, J.C.; Brown, P.D.; O’Neill, B.P.; Meyer, F.B.; Wetmore, C.J.; Uhm, J.H. Central Nervous System Tumors. Mayo Clin. Proc. 2007, 82, 1271–1286. [Google Scholar] [CrossRef] [PubMed]
  10. World Health Organization: WHO, “Cancer”. July 2019. Available online: https://www.who.int/health-topics/cancer (accessed on 30 March 2022).
  11. Amyot, F.; Arciniegas, D.B.; Brazaitis, M.P.; Curley, K.C.; Diaz-Arrastia, R.; Gandjbakhche, A.; Herscovitch, P.; Hinds, S.R.; Manley, G.T.; Pacifico, A.; et al. A Review of the Effectiveness of Neuroimaging Modalities for the Detection of Traumatic Brain Injury. J. Neurotrauma 2015, 32, 1693–1721. [Google Scholar] [CrossRef]
  12. Pope, W.B. Brain metastases: Neuroimaging. Handb. Clin. Neurol. 2018, 149, 89–112. [Google Scholar] [CrossRef]
  13. Abd-Ellah, M.K.; Awad, A.I.; Khalaf, A.A.; Hamed, H.F. A review on brain tumor diagnosis from MRI images: Practical implications, key achievements, and lessons learned. Magn. Reson. Imaging 2019, 61, 300–318. [Google Scholar] [CrossRef] [PubMed]
  14. Ammari, S.; Pitre-Champagnat, S.; Dercle, L.; Chouzenoux, E.; Moalla, S.; Reuze, S.; Talbot, H.; Mokoyoko, T.; Hadchiti, J.; Diffetocq, S.; et al. Influence of Magnetic Field Strength on Magnetic Resonance Imaging Radiomics Features in Brain Imaging, an In Vitro and In Vivo Study. Front. Oncol. 2021, 10, 541663. [Google Scholar] [CrossRef] [PubMed]
  15. Sahoo, L.; Sarangi, L.; Dash, B.R.; Palo, H.K. Detection and Classification of Brain Tumor Using Magnetic Resonance Images. In Advances in Electrical Control and Signal Systems: Select Proceedings of AECSS, Bhubaneswar, India, 8–9 November 2019; Springer: Singapore, 2020; Volume 665, pp. 429–441. [Google Scholar] [CrossRef]
  16. Kaur, R.; Doegar, A. Localization and Classification of Brain Tumor using Machine Learning & Deep Learning Techniques. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 59–66. [Google Scholar]
  17. The Radiology Assistant: Multiple Sclerosis 2.0. 1 December 2021. Available online: https://radiologyassistant.nl/neuroradiology/multiple-sclerosis/diagnosis-and-differential-diagnosis-3#mri-protocol-ms-brain-protocol (accessed on 22 May 2023).
  18. Savoy, R.L. Functional magnetic resonance imaging (fMRI). In Encyclopedia of Neuroscience; Elsevier: Charlestown, MA, USA, 1999. [Google Scholar]
  19. Luo, Q.; Li, Y.; Luo, L.; Diao, W. Comparisons of the accuracy of radiation diagnostic modalities in brain tumor. Medicine 2018, 97, e11256. [Google Scholar] [CrossRef]
  20. Positron Emission Tomography (PET). Johns Hopkins Medicine. 20 August 2021. Available online: https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/positron-emission-tomography-pet (accessed on 20 May 2023).
  21. Mayfield Brain and Spine. SPECT Scan. 2022. Available online: https://mayfieldclinic.com/pe-spect.htm (accessed on 22 May 2023).
  22. Sastry, R.; Bi, W.L.; Pieper, S.; Frisken, S.; Kapur, T.; Wells, W.; Golby, A.J. Applications of Ultrasound in the Resection of Brain Tumors. J. Neuroimaging 2016, 27, 5–15. [Google Scholar] [CrossRef]
  23. Nasrabadi, N.M. Pattern recognition and machine learning. J. Electron. Imaging 2007, 16, 49901. [Google Scholar]
  24. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine learning for medical imaging. Radiographics 2017, 37, 505–515. [Google Scholar] [CrossRef]
  25. Mohan, M.R.M.; Sulochana, C.H.; Latha, T. Medical image denoising using multistage directional median filter. In Proceedings of the 2015 International Conference on Circuits, Power and Computing Technologies [ICCPCT-2015], Nagercoil, India, 9–20 March 2015. [Google Scholar]
  26. Borole, V.Y.; Nimbhore, S.S.; Kawthekar, S.S. Image processing techniques for brain tumor detection: A review. Int. J. Emerg. Trends Technol. Comput. Sci. (IJETTCS) 2015, 4, 2. [Google Scholar]
  27. Ziedan, R.H.; Mead, M.A.; Eltawel, G.S. Selecting the Appropriate Feature Extraction Techniques for Automatic Medical Images Classification. Int. J. 2016, 4, 1–9. [Google Scholar]
  28. Amin, J.; Sharif, M.; Yasmin, M.; Fernandes, S.L. A distinctive approach in brain tumor detection and classification using MRI. Pattern Recognit. Lett. 2017, 139, 118–127. [Google Scholar] [CrossRef]
  29. Islam, A.; Reza, S.M.; Iftekharuddin, K.M. Multifractal texture estimation for detection and segmentation of brain tumors. IEEE Trans. Biomed. Eng. 2013, 60, 3204–3215. [Google Scholar] [CrossRef]
  30. Gurbină, M.; Lascu, M.; Lascu, D. Tumor detection and classification of MRI brain image using different wavelet transforms and support vector machines. In Proceedings of the 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 1–3 July 2019; pp. 505–508. [Google Scholar]
  31. Xu, X.; Zhang, X.; Tian, Q.; Zhang, G.; Liu, Y.; Cui, G.; Meng, J.; Wu, Y.; Liu, T.; Yang, Z.; et al. Three-dimensional texture features from intensity and high-order derivative maps for the discrimination between bladder tumors and wall tissues via MRI. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 645–656. [Google Scholar] [CrossRef]
  32. Kaplan, K.; Kaya, Y.; Kuncan, M.; Ertunç, H.M. Brain tumor classification using modified local binary patterns (LBP) feature extraction methods. Med. Hypotheses 2020, 139, 109696. [Google Scholar] [CrossRef]
  33. Afza, F.; Khan, M.S.; Sharif, M.; Saba, T. Microscopic skin laceration segmentation and classification: A framework of statistical normal distribution and optimal feature selection. Microsc. Res. Tech. 2019, 82, 1471–1488. [Google Scholar] [CrossRef]
  34. Lakshmi, A.; Arivoli, T.; Rajasekaran, M.P. A Novel M-ACA-Based Tumor Segmentation and DAPP Feature Extraction with PPCSO-PKC-Based MRI Classification. Arab. J. Sci. Eng. 2017, 43, 7095–7111. [Google Scholar] [CrossRef]
  35. Adair, J.; Brownlee, A.; Ochoa, G. Evolutionary Algorithms with Linkage Information for Feature Selection in Brain Computer Interfaces. In Advances in Computational Intelligence Systems; Springer Nature: Cham, Switzerland, 2016; pp. 287–307. [Google Scholar]
  36. Arakeri, M.P.; Reddy, G.R.M. Computer-aided diagnosis system for tissue characterization of brain tumor on magnetic resonance images. Signal Image Video Process. 2015, 9, 409–425. [Google Scholar] [CrossRef]
  37. Wang, S.; Zhang, Y.; Dong, Z.; Du, S.; Ji, G.; Yan, J.; Phillips, P. Feed-forward neural network optimized by hybridization of PSO and ABC for abnormal brain detection. Int. J. Imaging Syst. Technol. 2015, 25, 153–164. [Google Scholar] [CrossRef]
  38. Abbasi, S.; Tajeripour, F. Detection of brain tumor in 3D MRI images using local binary patterns and histogram orientation gradient. Neurocomputing 2017, 219, 526–535. [Google Scholar] [CrossRef]
  39. Zöllner, F.G.; Emblem, K.E.; Schad, L.R. SVM-based glioma grading: Optimization by feature reduction analysis. Z. Med. Phys. 2012, 22, 205–214. [Google Scholar] [CrossRef]
  40. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
  41. Bhatele, K.R.; Bhadauria, S.S. Brain structural disorders detection and classification approaches: A review. Artif. Intell. Rev. 2019, 53, 3349–3401. [Google Scholar] [CrossRef]
  42. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef]
  43. Hu, A.; Razmjooy, N. Brain tumor diagnosis based on metaheuristics and deep learning. Int. J. Imaging Syst. Technol. 2020, 31, 657–669. [Google Scholar] [CrossRef]
  44. Tandel, G.S.; Balestrieri, A.; Jujaray, T.; Khanna, N.N.; Saba, L.; Suri, J.S. Multiclass magnetic resonance imaging brain tumor classification using artificial intelligence paradigm. Comput. Biol. Med. 2020, 122, 103804. [Google Scholar] [CrossRef] [PubMed]
  45. Sahaai, M.B. Brain tumor detection using DNN algorithm. Turk. J. Comput. Math. Educ. (TURCOMAT) 2021, 12, 3338–3345. [Google Scholar]
  46. Hashemi, M. Enlarging smaller images before inputting into convolutional neural network: Zero-padding vs. interpolation. J. Big Data 2019, 6, 98. [Google Scholar] [CrossRef]
  47. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Briefings Bioinform. 2017, 19, 1236–1246. [Google Scholar] [CrossRef]
  48. Gorach, T. Deep convolutional neural networks—A review. Int. Res. J. Eng. Technol. (IRJET) 2018, 5, 439. [Google Scholar]
  49. Ogundokun, R.O.; Maskeliunas, R.; Misra, S.; Damaševičius, R. Improved CNN Based on Batch Normalization and Adam Optimizer. In Proceedings of the Computational Science and Its Applications–ICCSA 2022 Workshops, Malaga, Spain, 4–7 July 2022; Part V. pp. 593–604. [Google Scholar]
  50. Ismael SA, A.; Mohammed, A.; Hefny, H. An enhanced deep learning approach for brain cancer MRI images classification using residual networks. Artif. Intell. Med. 2020, 102, 101779. [Google Scholar] [CrossRef]
  51. Baheti, P. A Comprehensive Guide to Convolutional Neural Networks. V7. Available online: https://www.v7labs.com/blog/convolutional-neural-networks-guide (accessed on 24 April 2023).
  52. Ramdlon, R.H.; Kusumaningtyas, E.M.; Karlita, T. Brain Tumor Classification Using MRI Images with K-Nearest Neighbor Method. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019; pp. 660–667. [Google Scholar] [CrossRef]
  53. Gurusamy, R.; Subramaniam, V. A machine learning approach for MRI brain tumor classification. Comput. Mater. Contin. 2017, 53, 91–109. [Google Scholar]
  54. Pohle, R.; Toennies, K.D. Segmentation of medical images using adaptive region growing. In Proceedings of the Medical Imaging 2001: Image Processing, San Diego, CA, USA, 4–10 November 2001; Volume 4322, pp. 1337–1346. [Google Scholar] [CrossRef]
  55. Dey, N.; Ashour, A.S. Computing in medical image analysis. In Soft Computing Based Medical Image Analysis; Academic Press: Cambridge, MA, USA, 2018; pp. 3–11. [Google Scholar]
  56. Hooda, H.; Verma, O.P.; Singhal, T. Brain tumor segmentation: A performance analysis using K-Means, Fuzzy C-Means and Region growing algorithm. In Proceedings of the 2014 IEEE International Conference on Advanced Communications, Control and Computing Technologies, Ramanathapuram, India, 8–10 May 2014; pp. 1621–1626. [Google Scholar]
  57. Sharif, M.; Tanvir, U.; Munir, E.U.; Khan, M.A.; Yasmin, M. Brain tumor segmentation and classification by improved binomial thresholding and multi-features selection. J. Ambient. Intell. Humaniz. Comput. 2018, 1–20. [Google Scholar] [CrossRef]
  58. Shanthi, K.J.; Kumar, M.S. Skull stripping and automatic segmentation of brain MRI using seed growth and threshold techniques. In Proceedings of the 2007 International Conference on Intelligent and Advanced Systems, Kuala Lumpur, Malaysia, 25–28 November 2007; pp. 422–426. [Google Scholar] [CrossRef]
  59. Zhang, F.; Hancock, E.R. New Riemannian techniques for directional and tensorial image data. Pattern Recognit. 2010, 43, 1590–1606. [Google Scholar] [CrossRef]
  60. Singh, N.P.; Dixit, S.; Akshaya, A.S.; Khodanpur, B.I. Gradient Magnitude Based Watershed Segmentation for Brain Tumor Segmentation and Classification. In Advances in Intelligent Systems and Computing; Springer Nature: Cham, Switzerland, 2017; pp. 611–619. [Google Scholar] [CrossRef]
  61. Couprie, M.; Bertrand, G. Topological gray-scale watershed transformation. Vis. Geom. VI 1997, 3168, 136–146. [Google Scholar] [CrossRef]
  62. Khan, M.S.; Lali, M.I.U.; Saba, T.; Ishaq, M.; Sharif, M.; Saba, T.; Zahoor, S.; Akram, T. Brain tumor detection and classification: A framework of marker-based watershed algorithm and multilevel priority features selection. Microsc. Res. Tech. 2019, 82, 909–922. [Google Scholar] [CrossRef]
  63. Lotufo, R.; Falcao, A.; Zampirolli, F. IFT-Watershed from gray-scale marker. In Proceedings of the XV Brazilian Symposium on Computer Graphics and Image Processing, Fortaleza, Brazil, 10 October 2003. [Google Scholar] [CrossRef]
  64. Dougherty, E.R. An Introduction to Morphological Image Processing; SPIE Optical Engineering Press: Bellingham, WA, USA, 1992. [Google Scholar]
  65. Kaur, D.; Kaur, Y. Various image segmentation techniques: A review. Int. J. Comput. Sci. Mob. Comput. 2014, 3, 809–814. [Google Scholar]
  66. Aslam, A.; Khan, E.; Beg, M.S. Improved Edge Detection Algorithm for Brain Tumor Segmentation. Procedia Comput. Sci. 2015, 58, 430–437. [Google Scholar] [CrossRef]
  67. Egmont-Petersen, M.; de Ridder, D.; Handels, H. Image processing with neural networks—A review. Pattern Recognit. 2002, 35, 2279–2301. [Google Scholar] [CrossRef]
  68. Cui, B.; Xie, M.; Wang, C. A Deep Convolutional Neural Network Learning Transfer to SVM-Based Segmentation Method for Brain Tumor. In Proceedings of the 2019 IEEE 11th International Conference on Advanced Infocomm Technology (ICAIT), Jinan, China, 18–20 October 2019; pp. 1–5. [Google Scholar] [CrossRef]
  69. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Trans. Med. Imaging 2016, 35, 1240–1251. [Google Scholar] [CrossRef]
  70. Ye, N.; Yu, H.; Chen, Z.; Teng, C.; Liu, P.; Liu, X.; Xiong, Y.; Lin, X.; Li, S.; Li, X. Classification of Gliomas and Germinomas of the Basal Ganglia by Transfer Learning. Front. Oncol. 2022, 12, 844197. [Google Scholar] [CrossRef]
  71. Biratu, E.S.; Schwenker, F.; Ayano, Y.M.; Debelee, T.G. A survey of brain tumor segmentation and classification algorithms. J. Imaging 2021, 7, 179. [Google Scholar] [CrossRef]
  72. Wikipedia Contributors. F Score. Wikipedia. 2023. Available online: https://en.wikipedia.org/wiki/F-score (accessed on 22 May 2023).
  73. Brain Tumor Segmentation (BraTS) Challenge. Available online: http://www.braintumorsegmentation.org/ (accessed on 22 May 2023).
  74. RIDER NEURO MRI—The Cancer Imaging Archive (TCIA) Public Access—Cancer Imaging Archive Wiki. Available online: https://wiki.cancerimagingarchive.net/display/Public/RIDER+NEURO+MRI (accessed on 22 May 2023).
  75. Harvard Medical School Data. Available online: http://www.med.harvard.edu/AANLIB/ (accessed on 16 March 2021).
  76. The Cancer Genome Atlas. TCGA. Available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-GBM (accessed on 22 May 2023).
  77. The Cancer Genome Atlas. TCGA-LGG. Available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-LGG (accessed on 22 May 2023).
  78. Cheng, J. Figshare Brain Tumor Dataset. 2017. Available online: https://figshare.com/articles/dataset/brain_tumor_dataset/1512427/5 (accessed on 13 May 2022).
  79. IXI Dataset—Brain Development. Available online: https://brain-development.org/ixi-dataset/ (accessed on 22 May 2023).
  80. Gordillo, N.; Montseny, E.; Sobrevilla, P. A new fuzzy approach to brain tumor segmentation. In Proceedings of the 2010 IEEE International Conference, Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar] [CrossRef]
  81. Rajendran; Dhanasekaran, R. A hybrid Method Based on Fuzzy Clustering and Active Contour Using GGVF for Brain Tumor Segmentation on MRI Images. Eur. J. Sci. Res. 2011, 61, 305–313. [Google Scholar]
  82. Reddy, K.K.; Solmaz, B.; Yan, P.; Avgeropoulos, N.G.; Rippe, D.J.; Shah, M. Confidence guided enhancing brain tumor segmentation in multi-parametric MRI. In Proceedings of the 9th IEEE International Symposium on Biomedical Imaging, Barcelona, Spain, 2–5 May 2012; pp. 366–369. [Google Scholar] [CrossRef]
  83. Almahfud, M.A.; Setyawan, R.; Sari, C.A.; Setiadi, D.R.I.M.; Rachmawanto, E.H. An Effective MRI Brain Image Segmentation using Joint Clustering (K-Means and Fuzzy C-Means). In Proceedings of the 2018 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia, 21–22 November 2018; pp. 11–16. [Google Scholar]
  84. Chen, W.; Qiao, X.; Liu, B.; Qi, X.; Wang, R.; Wang, X. Automatic brain tumor segmentation based on features of separated local square. In Proceedings of the 2017 Chinese Automation Congress (CAC), Jinan, China, 20–22 October 2017. [Google Scholar]
  85. Gupta, N.; Mishra, S.; Khanna, P. Glioma identification from brain MRI using superpixels and FCM clustering. In Proceedings of the 2018 Conference on Information and Communication Technology (CICT), Jabalpur, India, 26–28 October 2018. [Google Scholar] [CrossRef]
  86. Razzak, M.I.; Imran, M.; Xu, G. Efficient Brain Tumor Segmentation with Multiscale Two-Pathway-Group Conventional Neural Networks. IEEE J. Biomed. Health Inform. 2018, 23, 1911–1919. [Google Scholar] [CrossRef] [PubMed]
  87. Myronenko, A.; Hatamizadeh, A. Robust Semantic Segmentation of Brain Tumor Regions from 3D MRIs. In Proceedings of the International MICCAI Brainlesion Workshop, Singapore, 18 September 2020; pp. 82–89. [Google Scholar] [CrossRef]
  88. Karayegen, G.; Aksahin, M.F. Brain tumor prediction on MR images with semantic segmentation by using deep learning network and 3D imaging of tumor region. Biomed. Signal Process. Control. 2021, 66, 102458. [Google Scholar] [CrossRef]
  89. Ullah, Z.; Usman, M.; Jeon, M.; Gwak, J. Cascade multiscale residual attention CNNs with adaptive ROI for automatic brain tumor segmentation. Inf. Sci. 2022, 608, 1541–1556. [Google Scholar] [CrossRef]
  90. Wisaeng, K.; Sa-Ngiamvibool, W. Brain Tumor Segmentation Using Fuzzy Otsu Threshold Morphological Algorithm. IAENG Int. J. Appl. Math. 2023, 53, 1–12. [Google Scholar]
  91. Zhang, Y.; Dong, Z.; Wu, L.; Wang, S. A hybrid method for MRI brain image classification. Expert Syst. Appl. 2011, 38, 10049–10053. [Google Scholar] [CrossRef]
  92. Yang, G.; Zhang, Y.; Yang, J.; Ji, G.; Dong, Z.; Wang, S.; Feng, C.; Wang, Q. Automated classification of brain images using wavelet-energy and biogeography-based optimization. Multimed. Tools Appl. 2015, 75, 15601–15617. [Google Scholar] [CrossRef]
  93. Tiwari, P.; Sachdeva, J.; Ahuja, C.K.; Khandelwal, N. Computer Aided Diagnosis System—A Decision Support System for Clinical Diagnosis of Brain Tumours. Int. J. Comput. Intell. Syst. 2017, 10, 104–119. [Google Scholar] [CrossRef]
  94. Sachdeva, J.; Kumar, V.; Gupta, I.; Khandelwal, N.; Ahuja, C.K. Segmentation, Feature Extraction, and Multiclass Brain Tumor Classification. J. Digit. Imaging 2013, 26, 1141–1150. [Google Scholar] [CrossRef]
  95. Jayachandran, A.; Dhanasekaran, R. Severity Analysis of Brain Tumor in MRI Images Using Modified Multitexton Structure Descriptor and Kernel-SVM. Arab. J. Sci. Eng. 2014, 39, 7073–7086. [Google Scholar] [CrossRef]
  96. El-Dahshan, E.-S.A.; Hosny, T.; Salem, A.-B.M. Hybrid intelligent techniques for MRI brain images classification. Digit. Signal Process. 2010, 20, 433–441. [Google Scholar] [CrossRef]
  97. Ullah, Z.; Farooq, M.U.; Lee, S.-H.; An, D. A hybrid image enhancement based brain MRI images classification technique. Med. Hypotheses 2020, 143, 109922. [Google Scholar] [CrossRef] [PubMed]
  98. Kang, J.; Ullah, Z.; Gwak, J. MRI-Based Brain Tumor Classification Using Ensemble of Deep Features and Machine Learning Classifiers. Sensors 2021, 21, 2222. [Google Scholar] [CrossRef]
  99. Díaz-Pernas, F.; Martínez-Zarzuela, M.; Antón-Rodríguez, M.; González-Ortega, D. A Deep Learning Approach for Brain Tumor Classification and Segmentation Using a Multiscale Convolutional Neural Network. Healthcare 2021, 9, 153. [Google Scholar] [CrossRef] [PubMed]
  100. Badža, M.M.; Barjaktarović, M. Classification of Brain Tumors from MRI Images Using a Convolutional Neural Network. Appl. Sci. 2020, 10, 1999. [Google Scholar] [CrossRef]
  101. Ertosun, M.G.; Rubin, D.L. Automated Grading of Gliomas using Deep Learning in Digital Pathology Images: A modular approach with ensemble of convolutional neural networks. In Proceedings of the AMIA Annual Symposium, San Francisco, CA, USA, 14–18 November 2015; Volume 2015, pp. 1899–1908. [Google Scholar]
  102. Khan, H.A.; Jue, W.; Mushtaq, M.; Mushtaq, M.U. Brain tumor classification in MRI image using convolutional neural network. Math. Biosci. Eng. 2020, 17, 6203–6216. [Google Scholar] [CrossRef]
  103. Özcan, H.; Emiroğlu, B.G.; Sabuncuoğlu, H.; Özdoğan, S.; Soyer, A.; Saygı, T. A comparative study for glioma classification using deep convolutional neural networks. Math. Biosci. Eng. MBE 2021, 18, 1550–1572. [Google Scholar] [CrossRef]
  104. Hao, R.; Namdar, K.; Liu, L.; Khalvati, F. A Transfer Learning–Based Active Learning Framework for Brain Tumor Classification. Front. Artif. Intell. 2021, 4, 635766. [Google Scholar] [CrossRef]
  105. Yang, Y.; Yan, L.-F.; Zhang, X.; Han, Y.; Nan, H.-Y.; Hu, Y.-C.; Hu, B.; Yan, S.-L.; Zhang, J.; Cheng, D.-L.; et al. Glioma Grading on Conventional MR Images: A Deep Learning Study with Transfer Learning. Front. Neurosci. 2018, 12, 804. [Google Scholar] [CrossRef]
  106. El Hamdaoui, H.; Benfares, A.; Boujraf, S.; Chaoui, N.E.H.; Alami, B.; Maaroufi, M.; Qjidaa, H. High precision brain tumor classification model based on deep transfer learning and stacking concepts. Indones. J. Electr. Eng. Comput. Sci. 2021, 24, 167–177. [Google Scholar] [CrossRef]
  107. Khazaee, Z.; Langarizadeh, M.; Ahmadabadi, M.E.S. Developing an Artificial Intelligence Model for Tumor Grading and Classification, Based on MRI Sequences of Human Brain Gliomas. Int. J. Cancer Manag. 2022, 15, e120638. [Google Scholar] [CrossRef]
  108. Amou, M.A.; Xia, K.; Kamhi, S.; Mouhafid, M. A Novel MRI Diagnosis Method for Brain Tumor Classification Based on CNN and Bayesian Optimization. Healthcare 2022, 10, 494. [Google Scholar] [CrossRef] [PubMed]
  109. Alanazi, M.; Ali, M.; Hussain, J.; Zafar, A.; Mohatram, M.; Irfan, M.; AlRuwaili, R.; Alruwaili, M.; Ali, N.T.; Albarrak, A.M. Brain Tumor/Mass Classification Framework Using Magnetic-Resonance-Imaging-Based Isolated and Developed Transfer Deep-Learning Model. Sensors 2022, 22, 372. [Google Scholar] [CrossRef] [PubMed]
  110. Rizwan, M.; Shabbir, A.; Javed, A.R.; Shabbr, M.; Baker, T.; Al-Jumeily, D. Brain Tumor and Glioma Grade Classification Using Gaussian Convolutional Neural Network. IEEE Access 2022, 10, 29731–29740. [Google Scholar] [CrossRef]
  111. Isunuri, B.V.; Kakarla, J. Three-class brain tumor classification from magnetic resonance images using separable convolution based neural network. Concurr. Comput. Pract. Exp. 2021, 34, e6541. [Google Scholar] [CrossRef]
  112. Kaur, T.; Gandhi, T.K. Deep convolutional neural networks with transfer learning for automated brain image classification. J. Mach. Vis. Appl. 2020, 31, 20. [Google Scholar] [CrossRef]
  113. Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A Deep Learning-Based Framework for Automatic Brain Tumors Classification Using Transfer Learning. Circuits Syst. Signal Process. 2019, 39, 757–775. [Google Scholar] [CrossRef]
  114. Deepa, S.; Janet, J.; Sumathi, S.; Ananth, J.P. Hybrid Optimization Algorithm Enabled Deep Learning Approach Brain Tumor Segmentation and Classification Using MRI. J. Digit. Imaging 2023, 36, 847–868. [Google Scholar] [CrossRef]
  115. Ahmmed, R.; Swakshar, A.S.; Hossain, M.F.; Rafiq, M.A. Classification of tumors and it stages in brain MRI using support vector machine and artificial neural network. In Proceedings of the 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’s Bazar, Bangladesh, 16–18 February 2017. [Google Scholar]
  116. Sathi, K.A.; Islam, S. Hybrid Feature Extraction Based Brain Tumor Classification using an Artificial Neural Network. In Proceedings of the 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 30–31 October 2020; pp. 155–160. [Google Scholar] [CrossRef]
  117. Islam, R.; Imran, S.; Ashikuzzaman; Khan, M.A. Detection and Classification of Brain Tumor Based on Multilevel Segmentation with Convolutional Neural Network. J. Biomed. Sci. Eng. 2020, 13, 45–53. [Google Scholar] [CrossRef]
  118. Mohsen, H.; El-Dahshan, E.A.; El-Horbaty, E.M.; Salem, A.M. Classification using deep learning neural networks for brain tumors. Future Comput. Inform. J. 2017, 3, 68–71. [Google Scholar] [CrossRef]
  119. Babu, P.A.; Rao, B.S.; Reddy, Y.V.B.; Kumar, G.R.; Rao, J.N.; Koduru, S.K.R. Optimized CNN-based Brain Tumor Segmentation and Classification using Artificial Bee Colony and Thresholding. Int. J. Comput. Commun. Control. 2023, 18, 577. [Google Scholar] [CrossRef]
  120. Ansari, A.S. Numerical Simulation and Development of Brain Tumor Segmentation and Classification of Brain Tumor Using Improved Support Vector Machine. Int. J. Intell. Syst. Appl. Eng. 2023, 11, 35–44. [Google Scholar]
  121. Farajzadeh, N.; Sadeghzadeh, N.; Hashemzadeh, M. Brain tumor segmentation and classification on MRI via deep hybrid representation learning. Expert Syst. Appl. 2023, 224, 119963. [Google Scholar] [CrossRef]
  122. Padma, A.; Sukanesh, R. A wavelet based automatic segmentation of brain tumor in CT images using optimal statistical texture features. Int. J. Image Process. 2011, 5, 552–563. [Google Scholar]
  123. Padma, A.; Sukanesh, R. Automatic Classification and Segmentation of Brain Tumor in CT Images using Optimal Dominant Gray level Run length Texture Features. Int. J. Adv. Comput. Sci. Appl. 2011, 2, 53–121. [Google Scholar] [CrossRef]
  124. Ruba, T.; Tamilselvi, R.; Beham, M.P.; Aparna, N. Accurate Classification and Detection of Brain Cancer Cells in MRI and CT Images using Nano Contrast Agents. Biomed. Pharmacol. J. 2020, 13, 1227–1237. [Google Scholar] [CrossRef]
  125. Woźniak, M.; Siłka, J.; Wieczorek, M.W. Deep neural network correlation learning mechanism for CT brain tumor detection. Neural Comput. Appl. 2021, 35, 14611–14626. [Google Scholar] [CrossRef]
  126. Nanmaran, R.; Srimathi, S.; Yamuna, G.; Thanigaivel, S.; Vickram, A.S.; Priya, A.K.; Karthick, A.; Karpagam, J.; Mohanavel, V.; Muhibbullah, M. Investigating the Role of Image Fusion in Brain Tumor Classification Models Based on Machine Learning Algorithm for Personalized Medicine. Comput. Math. Methods Med. 2022, 2022, 7137524. [Google Scholar] [CrossRef]
  127. Burns, A.; Iliffe, S. Alzheimer’s disease. BMJ 2009, 338, b158. [Google Scholar] [CrossRef]
Figure 1. Healthy brain MRI image showing white matter (WM), gray matter (GM), and CSF [17].
Figure 2. Fundamental MRI planes: (a) coronal, (b) sagittal, and (c) axial.
Figure 3. MRI brain tumor: (a) FLAIR image, (b) T1 image, and (c) T2 image [17].
Figure 4. CT brain tumor.
Figure 5. PET brain tumor.
Figure 6. ML block diagram.
Figure 7. Extreme learning machine.
Figure 8. DL block diagram.
Figure 9. CAD method based on ensemble classifiers.
Figure 10. A block schematic showing the suggested approach. Reprinted (adapted) with permission from [7]. Copyright 2019 IEEE.
Figure 11. Proposed method. Reprinted (adapted) with permission from [102]. Copyright 2020 Mathematical Biosciences and Engineering.
Figure 12. Workflow of the suggested active learning framework based on transfer learning. Reprinted (adapted) with permission from [104]. Copyright 2021 Frontiers in Artificial Intelligence.
Figure 13. Proposed process for deep transfer learning. Reprinted (adapted) with permission from [106]. Copyright 2021 Indonesian Journal of Electrical Engineering and Computer Science.
Figure 14. Architecture of NN.
Figure 15. Number of articles published from 2010 to 2022.
Figure 16. Number of articles published that perform classification, segmentation, or both.
Table 1. Types of brain tumors.

| Types of Tumors Based on | Type | Comment |
|---|---|---|
| Nature | Benign | Less aggressive and grows slowly |
| | Malignant | Life-threatening and rapidly expanding |
| Origin | Primary tumor | Originates directly in the brain |
| | Secondary tumor | Develops in another area of the body, such as the lung or breast, before migrating to the brain |
| Grading | Grade I | Mostly regular in shape and slow-growing |
| | Grade II | Abnormal in appearance but still relatively slow-growing |
| | Grade III | Grow more quickly than grade II tumors |
| | Grade IV | Reproduce at the highest rate |
| Progression stage | Stage 0 | Malignant but do not invade neighboring cells |
| | Stages 1–3 | Malignant and spreading increasingly quickly |
| | Stage 4 | The malignancy invades every part of the body |
Table 2. Properties of various MRI sequences.

| Tissue | T1 | T2 | FLAIR |
|---|---|---|---|
| White matter | Bright | Dark | Dark |
| Gray matter | Gray | Dark | Dark |
| CSF | Dark | Bright | Dark |
| Tumor | Dark | Bright | Bright |
Table 3. Performance equations.

| Parameter | Equation |
|---|---|
| ACC | (TP + TN) / (TP + FN + FP + TN) |
| SEN | TP / (TP + FN) |
| SPE | TN / (TN + FP) |
| PR | TP / (TP + FP) |
| F1_SCORE | 2 · PR · SEN / (PR + SEN) |
| DSC | 2TP / (2TP + FP + FN) |
| Jaccard | TP / (TP + FP + FN) |
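The formulas in Table 3 all derive from the four confusion-matrix counts (TP, TN, FP, FN). As a minimal illustration (a hypothetical helper, not drawn from any of the reviewed papers), they can be computed as:

```python
def confusion_metrics(tp, tn, fp, fn):
    """Compute the Table 3 evaluation metrics from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fn + fp + tn)   # accuracy
    sen = tp / (tp + fn)                    # sensitivity (recall)
    spe = tn / (tn + fp)                    # specificity
    pr = tp / (tp + fp)                     # precision
    f1 = 2 * pr * sen / (pr + sen)          # F1 score
    dsc = 2 * tp / (2 * tp + fp + fn)       # Dice similarity coefficient
    jaccard = tp / (tp + fp + fn)           # Jaccard index (IoU)
    return {"ACC": acc, "SEN": sen, "SPE": spe, "PR": pr,
            "F1": f1, "DSC": dsc, "Jaccard": jaccard}
```

Note that the F1 score and DSC formulas are algebraically identical, which is why segmentation papers report them interchangeably.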
Table 4. Summary of the datasets.

| Dataset | MRI Sequences | Source |
|---|---|---|
| BRATS | T1, T2, FLAIR | [73] |
| RIDER | T1, T2, FLAIR | [74] |
| Harvard | T2 | [75] |
| TCGA | T1, T2, FLAIR | [76,77] |
| Figshare | T1 | [78] |
| IXI | T1, T2 | [79] |
Table 5. MRI brain tumor segmentation.

| Ref. | Scan | Year | Technique | Method | Performance Metrics | Result |
|---|---|---|---|---|---|---|
| [80] | MRI | 2010 | Region-based | FCM | Acc | 93.00% |
| [81] | MRI | 2011 | Region-based | FCM | Jaccard | 83.19% |
| [82] | MRI | 2012 | NN | LBP with SVM | DSC | 69.00% |
| [69] | MRI | 2016 | DL | CNN | DSC | 88.00% |
| [84] | MRI | 2017 | NN | GLCM with SVM | DSC | 86.12% |
| [38] | MRI | 2017 | NN | LBP with RF | Jaccard and DSC | 87% and 93% |
| [85] | MRI | 2018 | Region-based | FCM | Acc | 98.00% |
| [83] | MRI | 2018 | Region-based | FCM and k-means | Acc | 91.94% |
| [68] | MRI | 2019 | DL and NN | CNN with SVM | DSC | 88.00% |
| [86] | MRI | 2019 | DL | Two-path CNN | DSC | 89.20% |
| [87] | MRI | 2019 | DL | Semantic | Acc | 88.20% |
| [88] | MRI | 2021 | DL | Semantic | IoU | 91.72% |
| [89] | MRI | 2022 | DL | MRA-UNet | DSC | 98.18% |
| [90] | MRI | 2023 | Region-based | Fuzzy Otsu threshold | Acc | 94.37% |
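Several of the region-based entries in Table 5 ([80,81,83,85]) rest on clustering voxel intensities. A toy sketch of the k-means step on intensities follows; the quantile initialization and function name are illustrative choices, not taken from those papers, and real pipelines add preprocessing (skull stripping, bias-field correction) and spatial or fuzzy-membership features:

```python
import numpy as np

def kmeans_segment(image, k=3, iters=20):
    """Cluster voxel intensities into k classes (e.g., background, tissue, tumor)."""
    x = image.astype(float).ravel()
    # Spread initial centers across the intensity range via quantiles.
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each voxel to its nearest center, then recompute the centers.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == c].mean() if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels.reshape(image.shape)
```

FCM differs from this sketch mainly in replacing the hard `argmin` assignment with soft membership weights, which is what makes it more robust at blurred tumor boundaries.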
Table 7. MRI brain tumor classification using DL.

| Ref. | Scan | Year | Technique | Method | Result | Performance Metrics |
|---|---|---|---|---|---|---|
| [101] | MRI | 2015 | DL | Custom CNN | 96.00% | Acc |
| [7] | MRI | 2019 | DL | Custom CNN | 98.70% | Acc |
| [102] | MRI | 2020 | DL | VGG-16, Inception-v3, ResNet-50 | 96%, 75%, 89% | Acc |
| [103] | MRI | 2021 | DL | AlexNet, GoogleNet, SqueezeNet | 97.10% | Acc |
| [104] | MRI | 2021 | DL | Custom CNN | 82.89% | ROC |
| [105] | MRI | 2018 | DL | AlexNet | 90.90% | Test acc |
| [106] | MRI | 2021 | DL | Multi-CNN structure | 98.67%, 98.06%, 98.33%, 98.06% | Accuracy, F1 score, precision, sensitivity |
| [107] | MRI | 2022 | DL | EfficientNetB0 | 98.80% | Acc |
| [70] | MRI | 2022 | DL | ResNet18 | 88.00% | AUC |
| [108] | MRI | 2022 | DL | Custom CNN | 98.70% | Acc |
| [109] | MRI | 2022 | DL | Custom CNN | 95.75% | Acc |
| [110] | MRI | 2022 | DL | Gaussian CNN | 99.80% | Acc |
| [111] | MRI | 2020 | DL | Seven-layer CNN | 97.52% | Acc |
| [112] | MRI | 2021 | DL | AlexNet | 100.00% | Acc |
| [113] | MRI | 2019 | DL | VGG16 | 98.69% | Acc |
| [114] | MRI | 2023 | DL | CNN | 92.10% | Acc |
Table 9. Various segmentation and classification methods employing CT images.

| Ref. | Year | Type | Segmentation | Feature Extraction | Feature Selection | Classification | Result |
|---|---|---|---|---|---|---|---|
| [122] | 2011 | CT | NN | WCT and WST | GA | – | 97.00% |
| [123] | 2011 | CT | FCM and k-means | GLCM and WCT | GA | SVM | 98.00% |
| [124] | 2020 | CT | Semantic | – | – | GoogleNet | 99.60% |
| [125] | 2021 | CT | – | – | – | CNN | 96.00% |
| [126] | 2022 | SPECT/MRI | – | DCT | – | SVM | 96.80% |
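Several pipelines above build texture descriptors from a gray-level co-occurrence matrix (GLCM), as in [123] here and [84] in Table 5. A bare-bones sketch of one such feature (contrast, for horizontally adjacent pixels only) is shown below; the function name, quantization scheme, and single offset are illustrative simplifications, and production code typically uses a library implementation over multiple offsets and angles:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """GLCM contrast for horizontally adjacent pixels (offset (0, 1)).
    Assumes img has at least one nonzero value (for quantization)."""
    # Quantize intensities into `levels` gray levels.
    q = np.floor(img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of left/right neighbor gray levels.
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()  # normalize to a joint probability p(i, j)
    idx = np.arange(levels)
    # Contrast = sum over (i, j) of p(i, j) * (i - j)^2.
    return np.sum(glcm * (idx[:, None] - idx[None, :]) ** 2)
```

A uniform image yields zero contrast, while a checkerboard (maximally dissimilar neighbors) yields the maximum value, which is what makes this feature useful for separating homogeneous tissue from textured tumor regions.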
Kaifi, R. A Review of Recent Advances in Brain Tumor Diagnosis Based on AI-Based Classification. Diagnostics 2023, 13, 3007. https://doi.org/10.3390/diagnostics13183007