Review

A Systematic Analysis of Neural Networks, Fuzzy Logic and Genetic Algorithms in Tumor Classification

1 Doctoral School of Multidisciplinary Engineering Sciences, Széchenyi István University, 9026 Győr, Hungary
2 College of Pharmacy, University of Basrah, Basrah 61004, Iraq
3 Department of Telecommunications, Széchenyi István University, 9026 Győr, Hungary
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(9), 5186; https://doi.org/10.3390/app15095186
Submission received: 17 March 2025 / Revised: 24 April 2025 / Accepted: 29 April 2025 / Published: 7 May 2025

Abstract

This study explores existing research on neural networks, fuzzy logic-based models, and genetic algorithms applied to brain tumor classification (BTC). A systematic review of 53 studies was conducted following PRISMA guidelines, covering search strategy, selection criteria, quality assessment, and data extraction. Articles were collected from three scientific databases: Web of Science, Scopus, and IEEE. The review primarily focuses on practical contributions, with most studies emphasizing applications over conceptual insights. Key methods in the field demonstrate significant impact and innovation. Commonly used training and testing mechanisms include dataset splitting, augmentation, and validation techniques, highlighting their widespread adoption for performance evaluation. The analysis of evaluation metrics shows that accuracy and the DICE score are the most frequently used, alongside sensitivity, specificity, recall, and other domain-specific measures. The variety of metrics underscores the need for tailored approaches based on dataset characteristics and research objectives. By highlighting trends, challenges, and research gaps, this review provides actionable insights for advancing BTC research. It offers a comprehensive overview of techniques and evaluation methods to guide future developments in this critical domain.

1. Introduction

Brain tumors are among the most dangerous types of cancer and require precise and timely classification to improve patient outcomes. Traditional methods of brain tumor diagnosis rely heavily on manual interpretation by radiologists, which can be time-consuming, subjective, and prone to errors. Early diagnosis is crucial in determining the appropriate treatment path, including surgery, radiotherapy, or chemotherapy [1]. In recent years, integrating AI-based technologies—such as artificial neural networks (ANNs), fuzzy logic-based neural networks, and genetic algorithms—into brain tumor diagnosis has transformed the field. These AI-based technologies enable automatic, high-speed, and accurate tumor classification, enhance clinical decision-making, and reduce human error.
Artificial neural networks (ANNs) have received significant attention in medical image analysis and classification due to their ability to solve complex computational problems, particularly in medical diagnostics that require high accuracy [2]. One of the key applications of ANNs is brain tumor classification (BTC). BTC systems rely on medical imaging data, such as MRI scans, to detect, diagnose, and plan treatment for brain tumors at an early stage [3]. Additionally, BTC systems assist radiologists and clinicians in accurately distinguishing between different types of brain tumors, such as gliomas, meningiomas, and pituitary tumors [1]. The typical workflow of BTC algorithms consists of several key stages: (1) data acquisition, where MRI scans are collected; (2) preprocessing, which includes noise reduction and image normalization to enhance quality; (3) segmentation, which identifies regions of interest, particularly tumor areas; (4) feature extraction, where characteristics such as tumor size and texture are identified; (5) classification, using techniques like ANNs, fuzzy logic-based neural networks (FLNNs), or genetic algorithms (GAs) to determine the tumor type; and (6) evaluation, where the model’s performance is assessed based on accuracy, precision, recall, and other relevant metrics [4].
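To make this workflow concrete, the following minimal Python sketch (illustrative only; the threshold, the two hand-crafted features, and the random stand-in for an MRI slice are hypothetical and not taken from any reviewed study) chains stages 2-5 together:

```python
import numpy as np

def preprocess(scan: np.ndarray) -> np.ndarray:
    """Stage 2: cast to float and z-score normalize the intensities."""
    scan = scan.astype(np.float32)
    return (scan - scan.mean()) / (scan.std() + 1e-8)

def segment(scan: np.ndarray) -> np.ndarray:
    """Stage 3: isolate a region of interest with a simple intensity threshold."""
    return (scan > scan.mean() + scan.std()).astype(np.uint8)

def extract_features(scan: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stage 4: hand-crafted descriptors such as region area and mean intensity."""
    area = mask.sum()
    mean_intensity = scan[mask == 1].mean() if area > 0 else 0.0
    return np.array([area, mean_intensity])

def classify(features: np.ndarray) -> str:
    """Stage 5: placeholder decision rule; an ANN, FLNN, or GA-tuned model would go here."""
    return "tumor" if features[0] > 100 else "no tumor"

scan = np.random.rand(128, 128)        # stand-in for one acquired MRI slice (stage 1)
norm = preprocess(scan)                # stage 2
mask = segment(norm)                   # stage 3
feats = extract_features(norm, mask)   # stage 4
prediction = classify(feats)           # stage 5; stage 6 would compare predictions to labels
```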
Deep learning models, a subset of ANNs, have demonstrated their ability to learn complex patterns in data. Numerous studies have explored the effectiveness of popular deep learning models, such as convolutional neural networks (CNNs), in BTC [5]. CNNs are particularly well-suited for image processing tasks and have been widely applied in medical imaging. However, despite their success, several challenges remain. One major challenge is the variability in datasets used across different studies, often leading to inconsistent performance when applied to new or diverse patient populations. Another challenge is comparing results across studies due to the lack of standardized evaluation metrics and transparent training and testing mechanisms [6].
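As a concrete illustration of the kind of CNN classifier these studies build on, the sketch below uses the Keras API; the 128 x 128 single-channel input and the four-class output are assumptions chosen for illustration rather than a configuration reported in the reviewed papers.

```python
# Minimal CNN classifier sketch (Keras); the architecture and hyperparameters are illustrative only.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),            # one grayscale MRI slice
    layers.Conv2D(32, 3, activation="relu"),      # learn local texture filters
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),        # e.g., glioma / meningioma / pituitary / no tumor
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.2, epochs=10)  # with real, labeled MRI data
```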
Fuzzy logic-based neural networks (FLNNs) represent another promising BTC approach. FLNNs combine the strengths of fuzzy logic and neural networks to address uncertainties and imprecise data in medical image classification [7]. Fuzzy logic enables the system to interpret data more flexibly using linguistic variables such as “small”, “medium”, or “large” [8]. FLNNs enhance the accuracy of BTC systems by mitigating noise and ambiguity often present in medical images. Combining fuzzy rules derived from expert knowledge and neural network learning capabilities improves the system’s interpretability and robustness.
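A minimal sketch of how such linguistic variables can be expressed numerically is given below; the triangular membership functions and the diameter ranges are hypothetical, chosen only to illustrate the fuzzification step that an FLNN would build on.

```python
import numpy as np

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership: rises from a to a peak at b, then falls to c."""
    return float(np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0))

tumor_diameter_mm = 22.0  # hypothetical measurement extracted from a segmented MRI
memberships = {
    "small":  triangular(tumor_diameter_mm, 0, 10, 25),
    "medium": triangular(tumor_diameter_mm, 15, 30, 45),
    "large":  triangular(tumor_diameter_mm, 35, 60, 100),
}
# A 22 mm tumor is partly "small" and partly "medium"; fuzzy rules combined with
# neural network learning would reason over these membership degrees.
print(memberships)
```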
Another approach involves using GAs inspired by natural selection. GAs are employed to optimize the parameters of ANNs and to select the most relevant features from MRI images for accurate classification. Integrating GAs into ANNs enhances BTC accuracy by optimizing feature selection and adjusting the hyperparameters of ANN models. Moreover, the ability of GAs to evolve through crossover and mutation processes enables efficient optimization of complex models such as ANNs [9].
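The sketch below illustrates the selection, crossover, and mutation loop on a toy hyperparameter search; the two genes (learning rate and hidden-unit count) and the surrogate fitness function are hypothetical stand-ins for actually training and validating an ANN.

```python
import random

def fitness(genome):
    """Surrogate for validation accuracy; peaks near lr = 0.01 and 64 hidden units."""
    lr, hidden = genome
    return -abs(lr - 0.01) * 100 - abs(hidden - 64) / 64

def crossover(a, b):
    return (a[0], b[1])  # child takes the learning rate of one parent, hidden units of the other

def mutate(genome, rate=0.3):
    lr, hidden = genome
    if random.random() < rate:
        lr *= random.uniform(0.5, 2.0)
    if random.random() < rate:
        hidden = max(8, hidden + random.choice([-16, 16]))
    return (lr, hidden)

population = [(random.uniform(1e-4, 1e-1), random.choice([16, 32, 64, 128])) for _ in range(10)]
for _ in range(20):                                   # evolve for 20 generations
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                          # selection (elitism)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children
best_lr, best_hidden = max(population, key=fitness)
```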
The primary challenge in applying ANNs, FLNNs, and GAs in BTC lies in the diversity and inconsistency of methods, datasets, and evaluation metrics. Additionally, the lack of standardized datasets and preprocessing techniques complicates the assessment of model generalizability. This systematic review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [10]. The study aims to consolidate the current body of knowledge in this field by extracting and analyzing key data from 53 relevant studies. It examines various types of neural networks, fuzzy logic-based models, genetic algorithms, the datasets used, segmentation approaches, and evaluation metrics. This review provides insights to guide future BTC research and development by identifying common trends, research gaps, and challenges.
This study aims to answer the following questions:
  • What are the trends in applying ANNs for segmentation tasks, as reflected in the types of ANNs, datasets, and segmentation approaches used in the recent literature?
  • How do different training and testing mechanisms and evaluation metrics correlate with performance outcomes (e.g., accuracy, precision) in ANN-based segmentation models across the selected papers?
Unlike previous systematic reviews that primarily focus on specific models, this study uniquely integrates and compares artificial neural networks, fuzzy logic-based neural networks, and genetic algorithms within the same systematic framework. Moreover, it contributes by classifying studies into conceptual and practical categories, detailing the train-test mechanisms, segmentation approaches, and evaluation metrics.
The remainder of this paper is organized as follows: Section 2 explains the study’s methodology. Section 3 presents the descriptive analysis of the selected studies. Section 4 provides qualitative insights and discusses key findings. Finally, Section 5 concludes the research with recommendations for future work.

2. Methodology

The PRISMA methodology is one of the most widely used frameworks for reporting meta-analyses and systematic reviews [10]. It has been applied in various research fields, including deep learning, machine learning methods, and IPv6 security issues [11]. Figure 1 illustrates the methodology followed in this paper.

2.1. PRISMA Basic Steps

The PRISMA framework consists of four main stages:
  • Search strategy;
  • Selection criteria;
  • Ensuring result quality;
  • Data extraction.
Systematic research requires defining and developing a search strategy to identify relevant literature. This strategy includes three major scientific databases: Web of Science, Scopus, and IEEE. These databases are among the largest repositories of scientific and technical publications and contain a wealth of high-quality research. One of the search terms used in the databases was “brain tumor”. In the initial research phase, the search scope was limited to studies published between 2014 and 2024.
The study included research from all countries but was limited to publications written in English. In the Web of Science database, the “publication topic” option was used as a filter to focus on image processing with medical images. This refinement reduced the number of studies from 113 to 99, as 14 were excluded. In the IEEE database, the “publishing topic” criterion was applied to focus on medical image analysis, excluding 18 studies from a total of 101, leaving 83 relevant studies. For the Scopus database, the selection criterion was based on “keywords”, yielding 41 relevant papers out of 58, with 17 papers excluded due to irrelevance. The final number of selected research papers across all databases was 223. Table 1 presents the selection criteria applied in this study.
Three main steps were taken to improve the quality of the search.
Firstly, the “document type” field was used in the Web of Science and Scopus databases to filter search results, while the “document identifier” field was applied in the IEEE database. Additionally, a filter was implemented to exclude review articles and conference papers. As a result, six articles were excluded from Web of Science, two from Scopus, and four from IEEE.
The second step was removing the duplicates, which was performed using the duplicate removal features in Microsoft Excel in two stages:
A. Removing duplicates within individual databases
  • Web of Science: in total, 5 papers removed;
  • IEEE: in total, 4 papers removed;
  • Scopus: in total, 10 papers removed.
B. Matching across all three databases and removing duplicates
  • Web of Science: in total, 15 additional papers removed;
  • IEEE: in total, 13 additional papers removed.
This step excluded 26 papers from Web of Science, 21 from IEEE, and 12 from Scopus.
The last step, the evaluation step, was divided into two parts. The first part focused on analyzing the abstracts of the papers, while the second part examined citations and the year of publication.
A. Extracting articles that employed magnetic resonance imaging (MRI)
The abstract summarizes the research work, making it useful for identifying relevant studies. Analyzing and evaluating the abstract content helped isolate research that specifically employed MRI. Any study that did not rely on MRI for conducting experiments was excluded. As a result, 28 articles were excluded from Web of Science, 22 from IEEE, and 10 from the Scopus database.
B. Extracting highly cited articles
This step involved identifying and selecting articles with high citation counts from the past decade (2014–2024). As a result, 12 papers were excluded from Web of Science, 29 from IEEE, and 10 from the Scopus database.
The statistical summary of the studies is presented in Table 2, under the Stage of Maintaining Quality section.

2.2. Data Analysis

The data analysis focused on identifying key insights concerning the review’s principles and research questions. Classification and analysis were conducted based on titles, keywords, and abstracts, as they contained the primary data and were examined rigorously. This study followed the same approach as Abdullah et al. [11].
Each article was classified as either a “conceptual contribution” or a “practical contribution”.
  • A conceptual contribution refers to descriptive, comparative, analytical, or review-based research;
  • A practical contribution indicates that the research involved designing, developing, implementing a program, or presenting a novel algorithm.
Another level of analysis was based on the research domain. This study explores brain tumor classification, focusing on the parameters used in related research. For instance, some researchers aim to classify and detect brain tumors by applying machine learning or deep learning techniques to MRI images; such studies were therefore grouped according to the methods applied and their reliance on MRI data for brain tumor detection.
The study considers two primary perspectives:
  • The “field” perspective examines the specific parameters researchers worked with;
  • The “parameter” perspective reflects the targeted application areas.
Additionally, a third classification focuses on the specific problems the studies aimed to address, such as data preprocessing or brain tumor classification.

3. Results

This section presents the results of applying the PRISMA framework, including a descriptive analysis and classification of the literature.

3.1. Descriptive Analysis

In response to the first research question, “What are the trends in applying ANNs for segmentation tasks, as reflected in the types of ANNs, datasets, and segmentation approaches used in the recent literature?” several descriptive statistical analyses were conducted. The following subsections detail these statistical analyses.

3.2. Trends in Citations and Publications over Time

This subsection examines citation trends and the number of publications from 2014 to 2024 across different indexing platforms, including IEEE, Scopus, and Web of Science. Figure 2, Figure 3 and Figure 4 illustrate these trends, providing insights into the growth and impact of research on ANNs for segmentation tasks over the past decade.
The abnormal rise in citation trends observed in Figure 2 can be attributed to the influence of several highly cited research papers published in 2018 and 2019. Notably, studies such as Wang et al. (2018) [12], which introduced advancements in interactive medical image segmentation using deep learning with image-specific fine-tuning, and Mallick et al. (2019) [13], who developed a deep wavelet autoencoder-based approach for brain MRI cancer detection, significantly contributed to the spike in citations.
Additional works by Hossain et al. (2019) [14] and Hemanth et al. (2019) [15] on brain tumor detection using convolutional neural networks and machine learning approaches further fueled this sharp increase, reflecting a surge in interest and acknowledgment of these impactful studies.
For Figure 3, the abnormal increase in citations can be attributed to the influence of highly cited research papers, particularly the comprehensive survey by Litjens et al. (2017) [16], titled “A Survey on Deep Learning in Medical Image Analysis”. This foundational work provided an extensive overview of deep learning advancements in medical imaging and became a critical reference for subsequent studies in the field. Its widespread citation underscores the survey’s pivotal role in shaping research directions and methodologies, leading to the marked rise in citation counts observed in Scopus.
The figures illustrate a consistent upward trend in citations and publications from 2014 to 2024 across IEEE, Scopus, and Web of Science. This steady increase highlights the growing relevance and research interest in artificial neural network (ANN) applications for segmentation tasks, with notable growth in recent years. The patterns depicted emphasize the broader academic community’s engagement and the significant impact of ANN advancements on segmentation research, reflecting the field’s evolving influence and importance in scientific literature.

3.3. AI Techniques Used in Brain Tumor Classification

This subsection provides a detailed analysis of the AI techniques applied in the reviewed literature for brain tumor classification tasks. The findings, illustrated in Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, reveal a predominant use of various AI methods.
Figure 5 presents the overall distribution of AI techniques, indicating that neural networks account for 69% of all approaches. Among these, convolutional neural networks (CNNs) are the most widely utilized, as depicted in Figure 6 (56%), followed by U-Net (37%) and transformers (2%).
Figure 7 details machine learning methods, highlighting supervised learning techniques, such as naïve Bayes, support vector machines (SVM), and k-nearest neighbors (KNN). SVMs contribute 54% to the supervised learning category, followed by KNN (15%) and extreme learning machine (ELM) (15%).
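A minimal example of this supervised pattern is sketched below, assuming scikit-learn and a synthetic feature matrix standing in for MRI-derived descriptors such as GLCM or wavelet features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # 200 images, 16 extracted features each (synthetic)
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic tumor / no-tumor labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```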
Unsupervised techniques, shown in Figure 8, include K-means and fuzzy c-means (FCM) clustering, each representing 50% of the methods.
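A corresponding unsupervised sketch is shown below: K-means clustering over pixel intensities as a crude segmentation step (scikit-learn assumed; fuzzy c-means follows the same pattern with soft memberships but is not part of scikit-learn and is omitted here).

```python
import numpy as np
from sklearn.cluster import KMeans

slice_2d = np.random.rand(64, 64)                  # stand-in for one MRI slice
pixels = slice_2d.reshape(-1, 1)                   # one intensity feature per pixel
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
segmented = labels.reshape(slice_2d.shape)         # cluster index assigned to each pixel
```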
Figure 9 presents the usage of other algorithms, such as generative adversarial networks (GANs) and adversarial networks like MTANS, which represent emerging trends but are less frequently applied.
This comprehensive analysis underscores the diversity and evolution of AI methodologies in advancing brain tumor classification.

3.4. Dataset Analysis

After examining AI techniques, this subsection explores the types of datasets used in the reviewed studies. As depicted in Figure 10, the analysis reveals that 13 datasets were sourced from hospitals, emphasizing real-world medical data for developing and validating AI models. These hospital-based sources include the Tianjin Universal Center of Medical Imaging and Diagnostic (TUCMD), Seoul National University Hospital, and The Cancer Imaging Archive, together covering more than 2000 patients from over 200 institutions. In contrast, 38 datasets were obtained from publicly available repositories, such as Kaggle (BRATS 2013, BRATS 2014, BRATS 2015, BRATS 2016, BRATS 2019, BTD20, the BraTS dataset, BraTS 2017, BraTS 2018, BraTS 2021, FigShare, SARTAJ, and Br35H). These datasets span different ages, from newborns to adults, as well as different tumor types, highlighting the accessibility and utility of internet-based data resources.
This dataset analysis underscores the importance of diverse and comprehensive data in advancing brain tumor research.
In addition to model-related challenges, computational issues within datasets also pose significant barriers. These include the need for extensive data cleaning to address noise, labeling errors, and format inconsistencies. Missing data are another common issue that can affect model training and evaluation. Furthermore, large dataset volumes require substantial computational resources for storage, processing, and training, which may not be accessible in all research settings. These challenges must be addressed to ensure reliable and scalable model development.

3.5. Segmentation Approaches

This subsection provides an overview of the segmentation approaches used in brain tumor classification, as presented in Figure 11. The analysis reveals that ANNs are the most widely adopted, accounting for 47% of all segmentation techniques. Within this category, commonly used approaches include CNNs, U-Net, context-guided attentive conditional random fields (CRFs), transformers, and the NeuroXAI framework.
Machine learning approaches constitute 18% of the segmentation methods. Several algorithms are employed, such as fuzzy c-means clustering, K-means clustering, and GANs.
The remaining 34% of approaches fall under “Other Techniques”. These include essential image processing methods such as image normalization, feature extraction, filtering and denoising, edge detection, and mathematical transformations. These techniques are crucial for preparing and refining medical images.
This distribution underscores the dominant role of ANNs while highlighting the complementary use of machine learning and image processing techniques in brain tumor segmentation research.

3.6. Literature Classification

A comprehensive qualitative analysis was conducted in response to the first research question. The following subsections describe the steps taken to complete this analysis.

3.6.1. Classification of Articles into “Conceptual Contribution” and “Practical Contribution” Groups

Each article was classified as either a “conceptual contribution” or a “practical contribution”.
  • A conceptual contribution refers to research that involves description, comparison, analysis, or review-based studies on the topic;
  • A practical contribution indicates that the research involved designing, developing, or implementing a program or presenting a novel algorithm.
This study includes 53 research papers, analyzed based on their research methodology. The articles were categorized as either conceptual or practical contributions, as shown in Table 3, which presents the classification of the 53 articles.
The practical contribution category represents the majority, with 47 articles, while the conceptual contribution category consists of only six studies. This suggests that researchers focus more on developing practical applications than conceptual or theoretical discussions.

3.6.2. Separating Articles That Address Segmentation

After classifying the articles, those specifically focused on brain tumor segmentation were identified, as this was a key aspect of the second research question. The selection process involved examining each article’s title, abstract, and methodology.
As a result, 43 out of 53 articles were selected for further analysis. Figure 12 illustrates the percentage of articles focused on brain tumor segmentation compared to those addressing other research topics.

3.6.3. Studying the Train/Test Mechanism, Evaluation Metrics, and Segmentation Approaches

At this stage, the articles focused on brain tumor segmentation were examined. The primary focus was training and testing techniques, evaluation metrics, and segmentation approaches. These aspects were extracted from the abstracts and methodologies of the selected studies.
The findings reveal various training and testing methods, as well as different evaluation metrics, depending on the role of segmentation. These results are summarized in Table 4.

4. Discussion

The review reveals that practical contributions dominate the field, with 47 studies focusing on practical applications, compared to only six studies classified as conceptual contributions. This trend highlights a strong inclination among researchers over the past decade toward advancing practical aspects of the field rather than theoretical frameworks.
Furthermore, this review identified two pivotal works [32,44] that introduced groundbreaking methods widely recognized as innovations. These studies have garnered substantial citations, underscoring their significant impact and contributions to the field.
The training and testing mechanisms discussed in this review incorporate various methods and techniques. The most frequently used approach is the one-split method, employed in 26 papers. This method involves dividing the dataset into two parts, typically allocating the majority for training and the remainder for testing (e.g., 80–20% or 70–30% splits). Dataset size and model complexity often determine the split ratio.
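A minimal sketch of the one-split mechanism, assuming scikit-learn and an already prepared feature matrix (the placeholder data and the 80-20 ratio are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 32)               # placeholder feature matrix
y = np.random.randint(0, 2, 500)          # placeholder labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)   # 80% training, 20% testing
```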
The second most common approach is data augmentation, which has been applied in 20 papers to increase dataset size and diversity by introducing transformations to the existing data. The validation technique, used in 14 papers, divides datasets into three parts for training, validation, and testing.
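The sketch below illustrates both mechanisms on placeholder data: a simple flip/rotation augmentation implemented directly with NumPy, followed by a 70-15-15 train/validation/test split (the transformations and ratios are illustrative, not drawn from any specific reviewed paper).

```python
import numpy as np
from sklearn.model_selection import train_test_split

def augment(image: np.ndarray) -> list:
    """Return the original slice plus a horizontally flipped and a 90-degree rotated copy."""
    return [image, np.fliplr(image), np.rot90(image)]

X = np.random.rand(100, 64, 64)                                 # placeholder MRI slices
y = np.random.randint(0, 2, 100)
X_aug = np.concatenate([np.stack(augment(img)) for img in X])   # three samples per original slice
y_aug = np.repeat(y, 3)

X_train, X_tmp, y_train, y_tmp = train_test_split(X_aug, y_aug, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)
```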
Additional techniques include the following:
  • Support vector machines (SVM), noted in five papers;
  • MATLAB, Keras, and Z-score normalization.
These methods were employed in the 43 papers that utilized segmentation approaches to evaluate model performance. This review highlights the popularity of techniques such as dataset splitting, augmentation, and validation. They are essential steps in deep learning workflows, each with specific advantages and limitations. Splitting the dataset into training, validation, and test sets helps assess model performance objectively, but improper splitting may lead to biased results. Data augmentation enhances model generalization by creating diverse samples from limited data; however, excessive or unrealistic transformations can introduce noise. Validation techniques, such as cross-validation, improve reliability but increase computational cost. A balanced combination of these widely adopted techniques is therefore required to ensure robust and meaningful model evaluation.
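For completeness, a minimal 5-fold cross-validation sketch (scikit-learn assumed, synthetic data); each fold serves once as the held-out set, improving the reliability of the estimate at the cost of training the model five times.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.rand(200, 16)                        # placeholder features
y = np.random.randint(0, 2, 200)                   # placeholder labels
scores = cross_val_score(SVC(), X, y, cv=5, scoring="accuracy")
print(scores.mean(), scores.std())                 # average performance and its spread
```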
The evaluation metrics summarized in Table 5 demonstrate the diversity of approaches used to assess model performance. The DICE score and accuracy are popular metrics because of their straightforward application and the immediate insight they provide into model performance. Among these, the accuracy score, cited in 25 papers, measures the ratio of correctly identified instances (true positives and true negatives) relative to the total number of instances. The DICE score, used in 23 papers, evaluates the overlap between predicted and actual sets, making it a crucial metric for image segmentation and binary classification tasks. Sensitivity, sometimes referred to as recall, measures the proportion of actual positives that are correctly identified by the model. Specificity measures the proportion of actual negatives that are accurately identified as such, thus avoiding false positives. The F1 score is the harmonic mean of precision (the number of true positives divided by the number of all predicted positives) and recall.
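Accuracy, sensitivity, specificity, precision, and the F1 score can all be computed from the confusion matrix; the short sketch below does so for a toy binary prediction.

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # toy ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # toy model predictions

tp = np.sum((y_true == 1) & (y_pred == 1))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                  # also called recall
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
f1          = 2 * precision * sensitivity / (precision + sensitivity)
```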
Additional metrics such as true positives (TP), true negatives (TN), false positives (FP), false negatives (FN), false positive rate (FPR), negative predictive value (NPV), boundary displacement error (BDE), mean intersection over union (mIoU), false classification rate, pixel accuracy, mismatch, volumetric overlap, Matthews correlation coefficient (MCC), and area under the curve (AUC) further illustrate the wide range of evaluation methods utilized in the literature.
The variety of evaluation metrics reflects the demands of different tasks and datasets. While accuracy remains a widely used metric, its limitations in addressing imbalanced datasets necessitate supplementary measures such as sensitivity, specificity, and F1 score for a more comprehensive analysis. In image segmentation tasks, metrics like the DICE score, Jaccard coefficient, and Hausdorff95 distance provide deeper insights into model performance. Domain-specific metrics such as PSNR and MSE are also critical in evaluating image quality in relevant contexts.
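As an illustration of the overlap metrics, the sketch below computes the DICE score and the Jaccard index (IoU) on two synthetic binary masks; Hausdorff95, PSNR, and MSE are omitted for brevity.

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred   = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True     # predicted mask
target = np.zeros((64, 64), dtype=bool); target[15:45, 15:45] = True   # ground-truth mask
print(dice(pred, target), iou(pred, target))
```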
The selection of appropriate metrics should align with the research objectives, dataset characteristics, and application requirements to ensure thorough and meaningful assessments of model efficacy.
GANs, while powerful generative models, have faced criticisms (notably training instability and mode collapse) that can limit their practical application, especially when compared with more established architectures such as CNNs and U-Nets for specific tasks like image segmentation.
Emerging techniques like federated learning and self-supervised learning hold significant promise in the realm of tumor classification, yet they remain relatively underexplored. Federated learning offers a solution to privacy concerns by allowing multiple institutions to collaboratively train models without the need to share sensitive patient data.
On the other hand, self-supervised learning is valuable in situations where annotated data are scarce. It allows models to extract meaningful patterns and representations from large volumes of unlabeled data.

5. Conclusions

This systematic review analyzed literature on image processing, particularly medical images, focusing on brain tumor detection using magnetic resonance imaging (MRI). Fifty-three articles from the past ten years were reviewed and selected for their relevance to brain tumor detection. The analysis showed that 43 of the 53 studies employed brain tumor segmentation techniques, and 47 included practical applications of their proposed methods. Most studies relied on convolutional neural networks (CNNs) and machine learning (ML) models. Neural networks are recognized for their strong learning ability and accuracy, but require large datasets. Fuzzy logic offers interpretability and handles uncertainty well, but may lack precision in complex patterns. Genetic algorithms are effective in feature selection and optimization, but are computationally intensive. By comparing these techniques, the review provides insights into their suitability for different tumor classification tasks.
However, several limitations were identified, including long training times, particularly in cases involving hybrid models or complex ML algorithms. Additionally, some studies used insufficient dataset sizes, limiting their ability to achieve optimal accuracy. These findings emphasize the need for further in-depth research to enhance accuracy, as the results directly impact patient outcomes. Several approaches can be adopted to improve model accuracy when medical image data are limited, such as data augmentation, transfer learning, and synthetic data generation.
Prioritizing tumor segmentation, localization, and classification remains critical to improving the precision and reliability of brain tumor detection methods. Future studies should explore larger, more diverse datasets and develop optimized computational strategies to mitigate processing inefficiencies. Additionally, integrating multi-modal imaging techniques and leveraging advanced AI models could drive significant breakthroughs in early tumor detection, ultimately leading to improved clinical decision-making and patient survival rates.

Ethical Considerations

While this study does not directly utilize patient data, it is essential to emphasize the ethical standards pertinent to research of this nature. Upholding patient privacy and adhering to data protection regulations like GDPR or HIPAA is a crucial obligation in clinical and health data research. The articles utilized for this study had obtained necessary ethical approval, ensuring compliance with institutional and regulatory guidelines. Any future work involving direct data collection will explicitly address obtaining ethical approvals as well as establishing protocols for handling such information.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kumar, K.A.; Prasad, A.; Metan, J. A hybrid deep CNN-Cov-19-Res-Net Transfer learning architype for an enhanced Brain tumor Detection and Classification scheme in medical image processing. Biomed. Signal Process. Control 2022, 76, 103631. [Google Scholar]
  2. Abdou, M.A. Literature review: Efficient deep neural networks techniques for medical image analysis. Neural Comput. Appl. 2022, 34, 5791–5812. [Google Scholar] [CrossRef]
  3. Muhammad, K.; Khan, S.; Del Ser, J.; De Albuquerque, V.H.C. Deep learning for multigrade brain tumor classification in smart healthcare systems: A prospective survey. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 507–522. [Google Scholar] [CrossRef] [PubMed]
  4. Kumar, S.; Choudhary, S.; Jain, A.; Singh, K.; Ahmadian, A.; Bajuri, M.Y. Brain tumor classification using deep neural network and transfer learning. Brain Topogr. 2023, 36, 305–318. [Google Scholar] [CrossRef]
  5. Vankdothu, R.; Hameed, M.A. Brain tumor MRI images identification and classification based on the recurrent convolutional neural network. Meas. Sens. 2022, 24, 100412. [Google Scholar] [CrossRef]
  6. Ghorbian, M.; Ghorbian, S.; Ghobaei-arani, M. A comprehensive review on machine learning in brain tumor classification: Taxonomy, challenges, and future trends. Biomed. Signal Process. Control 2024, 98, 106774. [Google Scholar] [CrossRef]
  7. Torshabi, A.E. Investigation the efficacy of fuzzy logic implementation at image-guided radiotherapy. J. Med. Signals Sens. 2022, 12, 163–170. [Google Scholar] [CrossRef]
  8. Zadeh, L.A. Fuzzy logic. Computer 1988, 21, 83–93. [Google Scholar] [CrossRef]
  9. Sun, Y.; Xue, B.; Zhang, M.; Yen, G.G.; Lv, J. Automatically designing CNN architectures using the genetic algorithm for image classification. IEEE Trans. Cybern. 2020, 50, 3840–3854. [Google Scholar] [CrossRef]
  10. Galeazzi, C.; Sacchetti, A.; Cisbani, A.; Babini, G. The PRISMA program. In Proceedings of the IGARSS 2008—2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008; p. IV-105. [Google Scholar]
  11. Abdullah, S.A.; Al Ashoor, A.A. Ipv6 security issues: A systematic review following prisma guidelines. Baghdad Sci. J. 2022, 19, 1430. [Google Scholar] [CrossRef]
  12. Wang, G.; Li, W.; Zuluaga, M.A.; Pratt, R.; Patel, P.A.; Aertsen, M.; Doel, T.; David, A.L.; Deprest, J.; Ourselin, S. Interactive medical image segmentation using deep learning with image-specific fine tuning. IEEE Trans. Med. Imaging 2018, 37, 1562–1573. [Google Scholar] [CrossRef] [PubMed]
  13. Mallick, P.K.; Ryu, S.H.; Satapathy, S.K.; Mishra, S.; Nguyen, G.N.; Tiwari, P. Brain MRI image classification for cancer detection using deep wavelet autoencoder-based deep neural network. IEEE Access 2019, 7, 46278–46287. [Google Scholar] [CrossRef]
  14. Hossain, T.; Shishir, F.S.; Ashraf, M.; Al Nasim, M.A.; Shah, F.M. Brain tumor detection using convolutional neural network. In Proceedings of the 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019; pp. 1–6. [Google Scholar]
  15. Hemanth, G.; Janardhan, M.; Sujihelen, L. Design and implementing brain tumor detection using machine learning approach. In Proceedings of the 3rd International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 23–25 April 2019; pp. 1289–1294. [Google Scholar]
  16. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed]
  17. Díaz-Pernas, F.J.; Martínez-Zarzuela, M.; Antón-Rodríguez, M.; González-Ortega, D. A deep learning approach for brain tumor classification and segmentation using a multiscale convolutional neural network. Healthcare 2021, 9, 153. [Google Scholar] [CrossRef]
  18. Ghafourian, E.; Samadifam, F.; Fadavian, H.; Jerfi Canatalay, P.; Tajally, A.; Channumsin, S. An ensemble model for the diagnosis of brain tumors through MRIs. Diagnostics 2023, 13, 561. [Google Scholar] [CrossRef]
  19. Menze, B.; Isensee, F.; Wiest, R.; Wiestler, B.; Maier-Hein, K.; Reyes, M.; Bakas, S. Analyzing magnetic resonance imaging data from glioma patients using deep learning. Comput. Med. Imaging Graph. 2021, 88, 101828. [Google Scholar] [CrossRef]
  20. Kurdi, S.Z.; Ali, M.H.; Jaber, M.M.; Saba, T.; Rehman, A.; Damaševičius, R. Brain tumor classification using meta-heuristic optimized convolutional neural networks. J. Pers. Med. 2023, 13, 181. [Google Scholar] [CrossRef]
  21. Daimary, D.; Bora, M.B.; Amitab, K.; Kandar, D. Brain tumor segmentation from MRI images using hybrid convolutional neural networks. Procedia Comput. Sci. 2020, 167, 2419–2428. [Google Scholar] [CrossRef]
  22. Reddy, D.; Bhavana, V.; Krishnappa, H. Brain tumor detection using image segmentation techniques. In Proceedings of the 2018 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 3–5 April 2018; pp. 18–22. [Google Scholar]
  23. Ramalakshmi, K.; Rajagopal, S.; Kulkarni, M.B.; Poddar, H. A hyperdimensional framework: Unveiling the interplay of RBP and GSN within CNNs for ultra-precise brain tumor classification. Biomed. Signal Process. Control. 2024, 96, 106565. [Google Scholar]
  24. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans. Med. Imaging 2016, 35, 1240–1251. [Google Scholar] [CrossRef]
  25. Balamurugan, T.; Gnanamanoharan, E. Brain tumor segmentation and classification using hybrid deep CNN with LuNetClassifier. Neural Comput. Appl. 2023, 35, 4739–4753. [Google Scholar] [CrossRef]
  26. Liu, Z.; Tong, L.; Chen, L.; Zhou, F.; Jiang, Z.; Zhang, Q.; Wang, Y.; Shan, C.; Li, L.; Zhou, H. Canet: Context aware network for brain glioma segmentation. IEEE Trans. Med. Imaging 2021, 40, 1763–1777. [Google Scholar] [CrossRef] [PubMed]
  27. Lin, J.; Lin, J.; Lu, C.; Chen, H.; Lin, H.; Zhao, B.; Shi, Z.; Qiu, B.; Pan, X.; Xu, Z. CKD-TransBTS: Clinical knowledge-driven hybrid transformer with modality-correlated cross-attention for brain tumor segmentation. IEEE Trans. Med. Imaging 2023, 42, 2451–2461. [Google Scholar] [CrossRef] [PubMed]
  28. Al-Masni, M.A.; Kim, D.-H. CMM-Net: Contextual multi-scale multi-level network for efficient biomedical image segmentation. Sci. Rep. 2021, 11, 10191. [Google Scholar] [CrossRef]
  29. Zhang, D.; Huang, G.; Zhang, Q.; Han, J.; Han, J.; Yu, Y. Cross-modality deep feature learning for brain tumor segmentation. Pattern Recognit. 2021, 110, 107562. [Google Scholar] [CrossRef]
  30. Xu, Q.; Ma, Z.; Na, H.; Duan, W. DCSAU-Net: A deeper and more compact split-attention U-Net for medical image segmentation. Comput. Biol. Med. 2023, 154, 106626. [Google Scholar] [CrossRef]
  31. Peng, J.; Kim, D.D.; Patel, J.B.; Zeng, X.; Huang, J.; Chang, K.; Xun, X.; Zhang, C.; Sollee, J.; Wu, J. Deep learning-based automatic tumor burden assessment of pediatric high-grade gliomas, medulloblastomas, and other leptomeningeal seeding tumors. Neuro-Oncology 2022, 24, 289–299. [Google Scholar] [CrossRef]
  32. Ladkat, A.S.; Bangare, S.L.; Jagota, V.; Sanober, S.; Beram, S.M.; Rane, K.; Singh, B.K. Deep Neural Network-Based Novel Mathematical Model for 3D Brain Tumor Segmentation. Comput. Intell. Neurosci. 2022, 2022, 4271711. [Google Scholar] [CrossRef]
  33. Preetha, C.J.; Meredig, H.; Brugnara, G.; Mahmutoglu, M.A.; Foltyn, M.; Isensee, F.; Kessler, T.; Pflüger, I.; Schell, M.; Neuberger, U. Deep-learning-based synthesis of post-contrast T1-weighted MRI for tumour response assessment in neuro-oncology: A multicentre, retrospective cohort study. Lancet Digit. Health 2021, 3, e784–e794. [Google Scholar] [CrossRef]
  34. Xu, Y.; He, X.; Xu, G.; Qi, G.; Yu, K.; Yin, L.; Yang, P.; Yin, Y.; Chen, H. A medical image segmentation method based on multi-dimensional statistical features. Front. Neurosci. 2022, 16, 1009581. [Google Scholar] [CrossRef]
  35. Telrandhe, S.R.; Pimpalkar, A.; Kendhe, A. Detection of brain tumor from MRI images by using segmentation & SVM. In Proceedings of the 2016 World Conference on Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave), Coimbatore, India, 29 February–1 March 2016; pp. 1–6. [Google Scholar]
  36. Abd El Kader, I.; Xu, G.; Shuai, Z.; Saminu, S.; Javaid, I.; Salim Ahmad, I. Differential deep convolutional neural network model for brain tumor classification. Brain Sci. 2021, 11, 352. [Google Scholar] [CrossRef] [PubMed]
  37. Zeineldin, R.A.; Karar, M.E.; Elshaer, Z.; Coburger, J.; Wirtz, C.R.; Burgert, O.; Mathis-Ullrich, F. Explainability of deep neural networks for MRI analysis of brain tumors. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1673–1683. [Google Scholar] [CrossRef] [PubMed]
  38. Micallef, N.; Seychell, D.; Bajada, C.J. Exploring the u-net++ model for automatic brain tumor segmentation. IEEE Access 2021, 9, 125523–125539. [Google Scholar] [CrossRef]
  39. Athisayamani, S.; Antonyswamy, R.S.; Sarveshwaran, V.; Almeshari, M.; Alzamil, Y.; Ravi, V. Feature extraction using a residual deep convolutional neural network (ResNet-152) and optimized feature dimension reduction for MRI brain tumor classification. Diagnostics 2023, 13, 668. [Google Scholar] [CrossRef]
  40. Choi, Y.S.; Bae, S.; Chang, J.H.; Kang, S.-G.; Kim, S.H.; Kim, J.; Rim, T.H.; Choi, S.H.; Jain, R.; Lee, S.-K. Fully automated hybrid approach to predict the IDH mutation status of gliomas via deep learning and radiomics. Neuro-Oncology 2021, 23, 304–313. [Google Scholar] [CrossRef]
  41. Hu, M.; Zhong, Y.; Xie, S.; Lv, H.; Lv, Z. Fuzzy system based medical image processing for brain disease prediction. Front. Neurosci. 2021, 15, 714318. [Google Scholar] [CrossRef] [PubMed]
  42. Conte, G.M.; Weston, A.D.; Vogelsang, D.C.; Philbrick, K.A.; Cai, J.C.; Barbera, M.; Sanvito, F.; Lachance, D.H.; Jenkins, R.B.; Tobin, W.O. Generative adversarial networks to synthesize missing T1 and FLAIR MRI sequences for use in a multisequence brain tumor segmentation model. Radiology 2021, 299, 313–323. [Google Scholar] [CrossRef]
  43. Khan, A.H.; Abbas, S.; Khan, M.A.; Farooq, U.; Khan, W.A.; Siddiqui, S.Y.; Ahmad, A. Intelligent model for brain tumor identification using deep learning. Appl. Comput. Intell. Soft Comput. 2022, 2022, 8104054. [Google Scholar] [CrossRef]
  44. Archana, K.; Komarasamy, G. A novel deep learning-based brain tumor detection using the Bagging ensemble with K-nearest neighbor. J. Intell. Syst. 2023, 32, 20220206. [Google Scholar] [CrossRef]
  45. Zhang, J.; Xie, Y.; Wang, Y.; Xia, Y. Inter-slice context residual learning for 3D medical image segmentation. IEEE Trans. Med. Imaging 2020, 40, 661–672. [Google Scholar] [CrossRef] [PubMed]
  46. Gunasekara, S.R.; Kaldera, H.; Dissanayake, M.B. A systematic approach for MRI brain tumor localization and segmentation using deep learning and active contouring. J. Healthc. Eng. 2021, 2021, 6695108. [Google Scholar] [CrossRef] [PubMed]
  47. Anantharajan, S.; Gunasekaran, S.; Subramanian, T.; Venkatesh, R. MRI brain tumor detection using deep learning and machine learning approaches. Meas. Sens. 2024, 31, 101026. [Google Scholar] [CrossRef]
  48. Chen, G.; Ru, J.; Zhou, Y.; Rekik, I.; Pan, Z.; Liu, X.; Lin, Y.; Lu, B.; Shi, J. MTANS: Multi-scale mean teacher combined adversarial network with shape-aware embedding for semi-supervised brain lesion segmentation. NeuroImage 2021, 244, 118568. [Google Scholar] [CrossRef]
  49. Gumaei, A.; Hassan, M.M.; Hassan, M.R.; Alelaiwi, A.; Fortino, G. A hybrid feature extraction method with regularized extreme learning machine for brain tumor classification. IEEE Access 2019, 7, 36266–36273. [Google Scholar] [CrossRef]
  50. Jia, Z.; Chen, D. Brain tumor identification and classification of MRI images using deep learning techniques. IEEE Access 2020. [Google Scholar] [CrossRef]
  51. Ullah, M.S.; Khan, M.A.; Almujally, N.A.; Alhaisoni, M.; Akram, T.; Shabaz, M. BrainNet: A fusion assisted novel optimal framework of residual blocks and stacked autoencoders for multimodal brain tumor classification. Sci. Rep. 2024, 14, 5895. [Google Scholar] [CrossRef]
  52. Tripathy, S.; Singh, R.; Ray, M. Automation of brain tumor identification using efficientnet on magnetic resonance images. Procedia Comput. Sci. 2023, 218, 1551–1560. [Google Scholar] [CrossRef]
  53. Srinivasan, S.; Francis, D.; Mathivanan, S.K.; Rajadurai, H.; Shivahare, B.D.; Shah, M.A. A hybrid deep CNN model for brain tumor image multi-classification. BMC Med. Imaging 2024, 24, 21. [Google Scholar] [CrossRef]
  54. Tahir, B.; Iqbal, S.; Usman Ghani Khan, M.; Saba, T.; Mehmood, Z.; Anjum, A.; Mahmood, T. Feature enhancement framework for brain tumor segmentation and classification. Microsc. Res. Tech. 2019, 82, 803–811. [Google Scholar] [CrossRef]
  55. Wadhwa, A.; Bhardwaj, A.; Verma, V.S. A review on brain tumor segmentation of MRI images. Magn. Reson. Imaging 2019, 61, 247–259. [Google Scholar] [CrossRef]
  56. Farchi, N.R.K.T.A. Automatic classification of preprocessed MRI brain tumors images using deep convolutional neural network. IJTPE J. 2023, 15, 68–73. [Google Scholar]
  57. Chaudhary, J.; Rani, R.; Kamboj, A. Deep learning-based approach for segmentation of glioma sub-regions in MRI. Int. J. Intell. Comput. Cybern. 2020, 13, 389–406. [Google Scholar] [CrossRef]
  58. Ramamoorthy, M.; Qamar, S.; Manikandan, R.; Jhanjhi, N.Z.; Masud, M.; AlZain, M.A. Earlier detection of brain tumor by pre-processing based on histogram equalization with neural network. Healthcare 2022, 10, 1218. [Google Scholar] [CrossRef]
  59. Kiranmayee, B.; Rajinikanth, T.; Nagini, S. Enhancement of SVM based MRI brain image classification using pre-processing techniques. Indian J. Sci. Technol. 2016, 9, 1–7. [Google Scholar] [CrossRef]
  60. Lavanya, N.; Nagasundaram, S. Improving Brain Tumor MRI Images with Pre-Processing Techniques for Noise Removal. In Proceedings of the 2023 International Conference on Sustainable Communication Networks and Application (ICSCNA), Theni, India, 15–17 November 2023; pp. 1530–1538. [Google Scholar]
  61. Harish, S.; Ahammed, G.A. Integrated modelling approach for enhancing brain MRI with flexible pre-processing capability. Int. J. Electr. Comput. Eng. 2019, 9, 2416. [Google Scholar]
  62. Poornachandra, S.; Naveena, C. Pre-processing of mr images for efficient quantitative image analysis using deep learning techniques. In Proceedings of the 2017 International Conference on Recent Advances in Electronics and Communication Technology (ICRAECT), Bangalore, India, 16–17 March 2017; pp. 191–195. [Google Scholar]
  63. Archa, S.; Kumar, C.S. Segmentation of brain tumor in MRI images using CNN with edge detection. In Proceedings of the 2018 International Conference on Emerging Trends and Innovations in Engineering and Technological Research (ICETIETR), Ernakulam, India, 11–13 July 2018; pp. 1–4. [Google Scholar]
  64. Ali, T.M.; Nawaz, A.; Ur Rehman, A.; Ahmad, R.Z.; Javed, A.R.; Gadekallu, T.R.; Chen, C.-L.; Wu, C.-M. A sequential machine learning-cum-attention mechanism for effective segmentation of brain tumor. Front. Oncol. 2022, 12, 873268. [Google Scholar] [CrossRef] [PubMed]
  65. Russo, C.; Liu, S.; Di Ieva, A. Spherical coordinates transformation pre-processing in Deep Convolution Neural Networks for brain tumor segmentation in MRI. Med. Biol. Eng. Comput. 2022, 60, 121–134. [Google Scholar] [CrossRef]
  66. Biratu, E.S.; Schwenker, F.; Ayano, Y.M.; Debelee, T.G. A survey of brain tumor segmentation and classification algorithms. J. Imaging 2021, 7, 179. [Google Scholar] [CrossRef]
  67. Kapoor, L.; Thakur, S. A survey on brain tumor detection using image processing techniques. In Proceedings of the 2017 7th International Conference on Cloud Computing, Data Science & Engineering-Confluence, Noida, India, 12–13 January 2017; pp. 582–585. [Google Scholar]
  68. Aggarwal, M.; Tiwari, A.K.; Sarathi, M.P.; Bijalwan, A. An early detection and segmentation of Brain Tumor using Deep Neural Network. BMC Med. Inform. Decis. Mak. 2023, 23, 78. [Google Scholar] [CrossRef]
Figure 1. Research Methodology.
Figure 2. Number of citations and publications per year from 2014 to 2024 in IEEE.
Figure 3. Number of citations and publications per year from 2014 to 2024 in Scopus.
Figure 4. Number of citations and publications per year from 2014 to 2024 in Web of Science.
Figure 5. Type of ANN.
Figure 6. Neural network types.
Figure 7. Machine learning types (supervised learning).
Figure 8. Machine learning types (unsupervised learning).
Figure 9. Other algorithms used in BTC.
Figure 10. Dataset sources in tumor classification research.
Figure 11. Distribution of segmentation techniques used in tumor classification studies.
Figure 12. Distribution of articles on brain tumor segmentation vs. other research topics.
Table 1. Selection criteria.

| | Web of Science | IEEE | Scopus |
|---|---|---|---|
| Keyword | "brain tumor" | "brain tumor" | "brain tumor" |
| Criterion applied | "publication topic" | "publishing topic" | "keyword" |
| Value of criterion | Image processing with a medical image | Analysis of medical image | Image processing in the medical field |
| Records returned | 113 | 101 | 58 |
| Records excluded | 14 | 18 | 17 |
| Records extracted | 99 | 83 | 41 |
Table 2. Stage of maintaining quality.

| Task | Step | Excluded (Web of Science) | Excluded (IEEE) | Excluded (Scopus) |
|---|---|---|---|---|
| Filter | Counting articles, reviews, and conference papers | 6 | 4 | 2 |
| Duplicate removal | Exclusion based on duplicates in the same database | 5 | 4 | 10 |
| Duplicate removal | Exclusion based on duplicates between the three databases | 15 | 13 | 0 |
| Evaluation | Analyzing the abstracts of the papers | 28 | 22 | 10 |
| Evaluation | Citations and the year of publication of the papers | 12 | 29 | 10 |
Table 3. Article classification.

| No. | Authors | Approach | Type |
|---|---|---|---|
| 1 | Díaz-Pernas et al., 2021 [17] | CNN, sliding mechanism | Practical |
| 2 | Ghafourian et al., 2023 [18] | Social spider optimization (SSO) algorithm | Practical |
| 3 | Menze et al., 2021 [19] | U-Net, intensity normalization, spatial harmonization | Conceptual |
| 4 | Kurdi et al., 2023 [20] | Fuzzy c-means clustering | Practical |
| 5 | Vankdothu & Hameed, 2022 [5] | K-means clustering, gray level co-occurrence matrix (GLCM) | Practical |
| 6 | Daimary et al., 2020 [21] | CNN, U-Net | Practical |
| 7 | Reddy et al., 2018 [22] | K-means clustering | Practical |
| 8 | Ramalakshmi et al., 2024 [23] | Gray standard normalization, regional binary patterns | Practical |
| 9 | Pereira et al., 2016 [24] | CNN, intensity normalization, bias field correction | Practical |
| 10 | Balamurugan & Gnanamanoharan, 2023 [25] | Hybrid FCM-GMM (fuzzy c-means and Gaussian mixture model) | Practical |
| 11 | Liu et al., 2021 [26] | U-Net, context-guided attentive CRFs | Practical |
| 12 | Lin et al., 2023 [27] | Modality-correlated cross-attention (MCCA), Trans and CNN feature calibration (TCFC), transformer | Practical |
| 13 | Al-Masni & Kim, 2021 [28] | U-Net, inversion recovery (IR) | Practical |
| 14 | Zhang et al., 2021 [29] | GANs for knowledge transfer, cross-modality feature transition (CMFT), and cross-modality feature fusion (CMFF) | Practical |
| 15 | Xu et al., 2023 [30] | Primary feature conservation (PFC) strategy and compact split-attention (CSA) | Practical |
| 16 | Peng et al., 2022 [31] | 3D U-Net | Practical |
| 17 | Ladkat et al., 2022 [32] | 3D attention U-Net, a mathematical model for pixel enhancement | Practical |
| 18 | Preetha et al., 2021 [33] | CNN | Practical |
| 19 | Xu et al., 2022 [34] | CNN, transformer | Practical |
| 20 | Telrandhe et al., 2016 [35] | K-means clustering, median filtering, and skull masking | Practical |
| 21 | Abd El Kader et al., 2021 [36] | CNN | Practical |
| 22 | Zeineldin et al., 2022 [37] | NeuroXAI framework | Conceptual |
| 23 | Micallef et al., 2021 [38] | U-Net | Practical |
| 24 | Athisayamani et al., 2023 [39] | Canny Mayfly algorithm (ACMA), spatial gray level dependence matrix (SGLDM) | Practical |
| 25 | Choi et al., 2021 [40] | CNN | Practical |
| 26 | Hu et al., 2021 [41] | HPU-Net, fuzzy c-means clustering | Practical |
| 27 | Conte et al., 2021 [42] | U-Net | Practical |
| 28 | Khan et al., 2022 [43] | CNN | Practical |
| 29 | Archana & Komarasamy, 2023 [44] | U-Net | Practical |
| 30 | Zhang et al., 2020 [45] | Context residual module, inter-slice context information | Practical |
| 31 | Gunasekara et al., 2021 [46] | CNN, Chan–Vese algorithm | Conceptual |
| 32 | Anantharajan et al., 2024 [47] | Fuzzy c-means clustering, adaptive contrast enhancement algorithm (ACEA), and a median filter | Practical |
| 33 | Chen et al., 2021 [48] | Teacher model with adversarial learning, signed distance maps (SDM) | Practical |
| 34 | Gumaei et al., 2019 [49] | The paper does not focus on image segmentation but utilizes a principal component analysis-normalized GIST (PCA-NGIST) feature extraction method without segmentation | Practical |
| 35 | Jia & Chen, 2020 [50] | FAHS-SVM, skull stripping, morphological operations, and wavelet transformation | Practical |
| 36 | Ullah et al., 2024 [51] | ResNet-50 and stacked autoencoders | Practical |
| 37 | Tripathy et al., 2023 [52] | Cropping and converting to grayscale, the binary thresholding method | Practical |
| 38 | Srinivasan et al., 2024 [53] | Deep convolutional neural network (CNN) | Practical |
| 39 | Tahir et al., 2019 [54] | Otsu method, wavelet denoising, and histogram equalization | Practical |
| 40 | Wadhwa et al., 2019 [55] | The paper reviews multiple approaches, focusing on a hybrid model combining fully convolutional neural networks (FCNN) with conditional random fields (CRF) | Conceptual |
| 41 | Farchi, 2023 [56] | The paper focuses on classification, not segmentation, but employs image preprocessing techniques such as resizing, grayscale conversion, image smoothing, and enhancement prior to classification | Practical |
| 42 | Chaudhary et al., 2020 [57] | Binary segmentation | Practical |
| 43 | Ramamoorthy et al., 2022 [58] | Otsu method | Practical |
| 44 | Kiranmayee et al., 2016 [59] | Fuzzy c-means clustering, thresholding method, Watershed segmentation | Practical |
| 45 | Lavanya & Nagasundaram, 2023 [60] | Filters | Practical |
| 46 | Harish & Ahammed, 2019 [61] | The segmentation aspect is not the focus of the paper; instead, it discusses image enhancement techniques to improve the quality of brain MRI images | Practical |
| 47 | Poornachandra & Naveena, 2017 [62] | N4ITK algorithm | Conceptual |
| 48 | Archa & Kumar, 2018 [63] | CNN, wavelet transform, median filtering | Practical |
| 49 | Ali et al., 2022 [64] | U-Net, Markov random field (MRF) | Practical |
| 50 | Russo et al., 2022 [65] | CNN, transforming Cartesian coordinates into spherical coordinates | Practical |
| 51 | Biratu et al., 2021 [66] | CNN | Conceptual |
| 52 | Kapoor & Thakur, 2017 [67] | K-means clustering, fuzzy c-means clustering, genetic algorithms, thresholding method, Watershed segmentation | Conceptual |
| 53 | Aggarwal et al., 2023 [68] | ResNet | Practical |
Table 4. Summary of training/testing techniques, evaluation metrics, and segmentation approaches in brain tumor segmentation studies.
Table 4. Summary of training/testing techniques, evaluation metrics, and segmentation approaches in brain tumor segmentation studies.
No. | Authors | Training/Testing (TT) Mechanism | Evaluation Metrics | Segmentation Approach
1 | Díaz-Pernas et al., 2021 [17] | Split 80-20, cross-validation method | DICE score, predicted tumor type accuracy score, and sensitivity | CNN, sliding mechanism
2 | Ghafourian et al., 2023 [18] | Split 70-30, cross-validation method | Accuracy, sensitivity, specificity, F1 score | Social spider optimization (SSO) algorithm
3 | Menze et al., 2021 [19] | Augmentation | Hausdorff distance, volumetric mismatch | U-Net, intensity normalization, spatial harmonization
4 | Kurdi et al., 2023 [20] | MATLAB tool | Pixel accuracy, accuracy, sensitivity, specificity, error rate | Fuzzy c-means clustering
5 | Vankdothu & Hameed, 2022 [5] | Split 70-30 | DICE score, accuracy, sensitivity, specificity | K-means clustering, gray-level co-occurrence matrix (GLCM)
6 | Daimary et al., 2020 [21] | Split 60-40 | DICE score, accuracy | CNN, U-Net
7 | Reddy et al., 2018 [22] | – | DICE score, accuracy, precision, recall, true positive (TP), true negative (TN), false positive (FP), false negative (FN) | K-means clustering
8 | Ramalakshmi et al., 2024 [23] | – | False classification ratio, accuracy, sensitivity, specificity | Gray standard normalization, regional binary patterns
9 | Pereira et al., 2016 [24] | Cross-validation method | DICE score, sensitivity, positive predictive value (PPV) | CNN, intensity normalization, bias field correction
10 | Balamurugan & Gnanamanoharan, 2023 [25] | Split 70-30 | Accuracy, sensitivity, specificity, precision, recall, F-score, DICE similarity index (DSI) | Hybrid FCM-GMM (fuzzy c-means and Gaussian mixture model)
11 | Liu et al., 2021 [26] | Cross-validation method | Sensitivity, specificity, Hausdorff 95 distance | U-Net, context-guided attentive CRFs
12 | Lin et al., 2023 [27] | Split 80-10-10 | DICE score, sensitivity, Hausdorff 95 distance | Modality-correlated cross-attention (MCCA), Trans and CNN feature calibration (TCFC), transformer
13 | Al-Masni & Kim, 2021 [28] | Split 80-20 | DICE score, accuracy, sensitivity, specificity, Jaccard index, Matthews correlation coefficient (MCC), and area under the curve (AUC) | U-Net, inversion recovery (IR)
14 | Zhang et al., 2021 [29] | Split 80-20 | DICE score, sensitivity, specificity, Hausdorff 95 distance | GANs for knowledge transfer, cross-modality feature transition (CMFT) process, and cross-modality feature fusion (CMFF)
15 | Xu et al., 2023 [30] | Split 70-10-20, augmentation, Adam optimization | DICE score, accuracy, precision, recall, mean intersection over union (mIoU) | Primary feature conservation (PFC) strategy and compact split-attention (CSA)
16 | Peng et al., 2022 [31] | Split 80-20, augmentation | DICE score, ICC | 3D U-Net
17 | Ladkat et al., 2022 [32] | Split 77-23 | Sensitivity, specificity, Hausdorff 95 distance | 3D attention U-Net, a mathematical model for pixel enhancement
18 | Preetha et al., 2021 [33] | Split 80-0-20, cross-validation method | DICE score, C-index, SSIM | CNN
19 | Xu et al., 2022 [34] | Adam optimization | Hausdorff 95 distance, mean intersection over union (mIoU) | CNN, transformer
20 | Telrandhe et al., 2016 [35] | SVM | Accuracy | K-means clustering, median filtering, and skull masking
21 | Abd El Kader et al., 2021 [36] | Split 80-0-20, cross-validation method | DICE score, accuracy, sensitivity, specificity, F1 score, precision | CNN
22 | Zeineldin et al., 2022 [37] | Augmentation, Z-score | DICE score | NeuroXAI framework
23 | Micallef et al., 2021 [38] | Split 80-20, cross-validation method, augmentation | DICE score, sensitivity, specificity, Hausdorff 95 distance | U-Net
24 | Athisayamani et al., 2023 [39] | Split 70-15-15, augmentation | Accuracy, sensitivity, specificity, and recall | Canny Mayfly algorithm (ACMA), spatial gray-level dependence matrix (SGLDM)
25 | Choi et al., 2021 [40] | – | Accuracy, precision, and recall | CNN
26 | Hu et al., 2021 [41] | Cross-validation method | DICE score, Jaccard coefficient | HPU-Net, fuzzy c-means clustering
27 | Conte et al., 2021 [42] | Split 64-20-16, cross-validation method | DICE score | U-Net
28 | Khan et al., 2022 [43] | Split 87-13, cross-validation method | Accuracy, error rate | CNN
29 | Archana & Komarasamy, 2023 [44] | Split 80-20 | DICE score, accuracy, F1 score, precision, recall | U-Net
30 | Zhang et al., 2020 [45] | Split 75-0-25, cross-validation, augmentation | Hausdorff 95 distance | Context residual module, inter-slice context information
31 | Gunasekara et al., 2021 [46] | Split 80-0-20, cross-validation | DICE score, peak signal-to-noise ratio | CNN, Chan–Vese algorithm
32 | Anantharajan et al., 2024 [47] | SVM | Accuracy, sensitivity, specificity, peak signal-to-noise ratio, Jaccard coefficient (JC) | Fuzzy c-means clustering, adaptive contrast enhancement algorithm (ACEA), and a median filter
33 | Chen et al., 2021 [48] | Split 80-0-20, cross-validation method | False positive rate (FPR), true positive rate, positive predictive value (PPV) | Teacher model with adversarial learning, signed distance maps (SDM)
34 | Jia & Chen, 2020 [50] | SVM | Accuracy, sensitivity, specificity | FAHS-SVM, skull stripping, morphological operations, and wavelet transformation
35 | Tripathy et al., 2023 [52] | Split 67-13-20 | DICE score, accuracy, sensitivity, specificity, F1 score, precision | Cropping and converting to grayscale, binary thresholding
36 | Srinivasan S et al., 2024 [53] | 5-fold cross-validation; dataset split into 60% training, 20% validation, 20% testing | Accuracy, sensitivity, specificity, precision, ROC-AUC | Classification task only (binary, multi-type, and tumor grade)
37 | Tahir et al., 2019 [54] | Split 90-0-10, cross-validation method, SVM | Accuracy, sensitivity, specificity | Wavelet denoising and histogram equalization
38 | Ramamoorthy et al., 2022 [58] | – | Accuracy, specificity, precision | Otsu method
39 | Kiranmayee et al., 2016 [59] | SVM | DICE score, accuracy | Fuzzy c-means clustering, thresholding method, watershed segmentation
40 | Archa & Kumar, 2018 [63] | Augmentation | DICE score, accuracy | CNN, wavelet transform, median filtering
41 | Ali et al., 2022 [64] | – | Accuracy, sensitivity, precision | U-Net, Markov random field (MRF)
42 | Russo et al., 2022 [65] | Split 80-20, cross-validation method, augmentation | DICE score, sensitivity, specificity, Hausdorff 95 distance | CNN, transforming Cartesian coordinates to spherical coordinates, Canny edge detection
43 | Aggarwal et al., 2023 [68] | Split 34-46 | Sensitivity, specificity, Jaccard coefficient, peak signal-to-noise ratio, mean square error (MSE) | ResNet
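Because the DICE score and the Hausdorff 95 distance are the segmentation metrics reported most often in Table 4, the following NumPy/SciPy sketch shows one way to compute them from a pair of binary masks. Extracting the mask surface by binary erosion is a simplification, so the values may differ slightly from those produced by challenge-specific tooling (e.g., the BraTS evaluation scripts); the helper names are illustrative only.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """DICE similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))

def _surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def hausdorff95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance between mask surfaces.
    Assumes both masks are non-empty; `spacing` gives the voxel size."""
    surf_p, surf_g = _surface(pred), _surface(gt)
    # Distance of every voxel to the nearest surface voxel of the *other* mask.
    dist_to_g = distance_transform_edt(~surf_g, sampling=spacing)
    dist_to_p = distance_transform_edt(~surf_p, sampling=spacing)
    distances = np.hstack([dist_to_g[surf_p], dist_to_p[surf_g]])
    return float(np.percentile(distances, 95))

if __name__ == "__main__":
    gt = np.zeros((64, 64), dtype=bool);  gt[20:40, 20:40] = True
    pred = np.zeros_like(gt);             pred[22:42, 21:41] = True
    print(f"DICE: {dice_score(pred, gt):.3f}, HD95: {hausdorff95(pred, gt):.2f} px")
```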
Table 5. Distribution of metrics in the articles.
No. | Authors | Accuracy | Sensitivity | Specificity | Recall (TPR) | F1 Score | Precision (PPV) | DICE Score | Hausdorff 95 Distance | Other Classification Metrics | Other Segmentation Metrics
1 | Díaz-Pernas et al., 2021 [17] | XX X
2 | Ghafourian et al., 2023 [18] | XXX X
3 | Menze et al., 2021 [19] |  X
4 | Kurdi et al., 2023 [20] | XXX XX
5 | Vankdothu & Hameed, 2022 [5] | XXX X
6 | Daimary et al., 2020 [21] | X X
7 | Reddy et al., 2018 [22] | X X XX X
8 | Ramalakshmi et al., 2024 [23] | XXX X
9 | Pereira et al., 2016 [24] |  X XX
10 | Balamurugan & Gnanamanoharan, 2023 [25] | XXXXXXX
11 | Liu et al., 2021 [26] |  XX X
12 | Lin et al., 2023 [27] |  XX X
13 | Al-Masni & Kim, 2021 [28] | XXX X X
14 | Zhang et al., 2021 [29] |  XX XX
15 | Xu et al., 2023 [30] | X X XX X
16 | Peng et al., 2022 [31] |  X X
17 | Ladkat et al., 2022 [32] |  XX X
18 | Preetha et al., 2021 [33] |  X XX
19 | Xu et al., 2022 [34] |  X X
20 | Telrandhe et al., 2016 [35] | X
21 | Abd El Kader et al., 2021 [36] | XXX XX
22 | Zeineldin et al., 2022 [37] |  X
23 | Micallef et al., 2021 [38] |  XX XX
24 | Athisayamani et al., 2023 [39] | XXXX
25 | Choi et al., 2021 [40] | X X X
26 | Hu et al., 2021 [41] |  X X
27 | Conte et al., 2021 [42] |  X
28 | Khan et al., 2022 [43] | X X
29 | Archana & Komarasamy, 2023 [44] | X XXXX
30 | Zhang et al., 2020 [45] |  X
31 | Gunasekara et al., 2021 [46] |  X X
32 | Anantharajan et al., 2024 [47] | XXX X
33 | Chen et al., 2021 [48] |  X X X
34 | Jia & Chen, 2020 [50] | XXX
35 | Tripathy et al., 2023 [52] | XXX XXX
36 | Srinivasan S et al., 2024 [53] | XXX X X
37 | Tahir et al., 2019 [54] | XXX
38 | Ramamoorthy et al., 2022 [58] | X X X
39 | Kiranmayee et al., 2016 [59] | X X
40 | Archa & Kumar, 2018 [63] | X X
41 | Ali et al., 2022 [64] | XX X
42 | Russo et al., 2022 [65] |  XX XX
43 | Aggarwal et al., 2023 [68] |  XX XX
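The classification metrics tallied in Table 5 (accuracy, sensitivity, specificity, recall/TPR, precision/PPV, and F1 score) all derive from the same confusion matrix. The sketch below computes them for a binary labelling task with plain NumPy; it is an illustration of the standard definitions rather than a replication of any cited study's evaluation code.

```python
import numpy as np

def classification_metrics(y_true, y_pred, eps: float = 1e-8) -> dict:
    """Accuracy, sensitivity (recall/TPR), specificity, precision (PPV) and
    F1 score for binary labels, derived from the confusion matrix counts."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)     # true positives
    tn = np.sum(~y_pred & ~y_true)   # true negatives
    fp = np.sum(y_pred & ~y_true)    # false positives
    fn = np.sum(~y_pred & y_true)    # false negatives
    sensitivity = tp / (tp + fn + eps)           # recall / true positive rate
    specificity = tn / (tn + fp + eps)
    precision = tp / (tp + fp + eps)             # positive predictive value
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f1": 2 * precision * sensitivity / (precision + sensitivity + eps),
    }

if __name__ == "__main__":
    y_true = [1, 1, 0, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
    print(classification_metrics(y_true, y_pred))
```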