Brain Tumor Recognition Using Artificial Intelligence Neural-Networks (BRAIN): A Cost-Effective Clean-Energy Platform

Brain tumors necessitate swift detection and classification for optimal patient outcomes. Deep learning has been extensively utilized to recognize complex tumor patterns in magnetic resonance imaging (MRI) images, aiding in tumor diagnosis, treatment, and prognostication. However, model complexity and limited generalizability with unfamiliar data hinder appropriate clinical integration. Although studies have identified machine learning tools to overcome these obstacles, only some are cost-effective, generalizable, and usable regardless of experience. The objective of this study was to develop a clean-energy, cloud-based deep learning platform to classify brain tumors. Three datasets totaling 2611 axial MRI images were used to train our multi-layer convolutional neural network (CNN). Our platform automatically optimized every combination of transfer learning and data augmentation features to provide the highest predictive accuracy for our classification task. The proposed system identified and classified brain tumors successfully and efficiently, with an overall precision value of 96.8% [95% CI: 93.8–97.6]. Using clean-energy supercomputing resources and cloud platforms cut our workflow to 103 min, $0 in total cost, and a negligible carbon footprint (0.0014 kg CO2-eq). By leveraging automated optimized learning, we developed a cost-effective deep learning (DL) platform that accurately classified brain tumors from axial MRI images of different levels.


Introduction
Brain tumors constitute a complex set of pathologies that are associated with poor survival. Brain tumors pose a significant healthcare burden, affecting approximately 308,000 people worldwide in 2020 and costing nearly two hundred billion dollars in the United States alone [1,2]. Brain tumors can be classified as either primary (originating in the brain) or secondary (originating elsewhere in the body and spreading to the brain). They can be intra-axial (e.g., gliomas) or extra-axial (e.g., meningiomas and pituitary adenomas) and can be further sub-classified based on their cellular and molecular features [3]. Intra-axial tumors are more difficult to treat and are usually discovered at advanced stages, when significant mass effect disrupts cortical areas. The most common primary brain tumors include gliomas (astrocytoma, glioblastoma, and ependymoma), pituitary adenomas, meningiomas, and schwannomas [4]. Despite tremendous advances in many areas of medicine, survival outcomes remain very low [3]. The complexity of brain tissue and the variability in survival rates necessitate the development of more reliable early detection techniques. Accurate classification of brain tumors is critical for effective treatment planning, as different tumor types may require different treatment approaches.
The gold standard for identifying and diagnosing brain tumors is pathological assessment of tissue morphology. However, non-invasive modalities capable of accurately classifying tumor type and grade are highly desirable [5]. Currently, magnetic resonance imaging (MRI) is widely used to provide clinicians with important information on lesion location, structural features (i.e., vascularity, cellularity, and perfusion), and dynamics with normal tissue (i.e., metastasis and mass effect) [6]. Classification is traditionally performed manually, which is very time-consuming, prone to error, and potentially not as capable of integrating nuanced macroscopic patterns [7]. Consequently, there has been a surge in interest in computer-aided diagnosis (CAD) to improve early detection and treatment options, such as with colorectal polyp and lung cancer classification [8,9].
Deep learning (DL), a branch of artificial intelligence inspired by the understanding of the neural networks within humans, has shown great potential for improving the accuracy and efficiency of brain tumor classification [10,11]. By training deep neural networks on large datasets of brain tumor imaging data, researchers and clinicians can develop highly accurate models that can automatically identify and classify different types of brain tumors [1,10–12]. The use of DL in brain tumor classification has the potential to improve patient outcomes by enabling faster and more accurate diagnosis and treatment planning [8,9]. By reducing the burden on healthcare providers, DL can help allocate more time for clinicians to perform higher-level care [13].

Literature Review
A systematic review was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The main objective was to survey the types of model algorithms and evaluate the subsequent accuracy metrics of studies that have utilized CNNs to automate brain tumor classification. The PubMed, Embase, and Cochrane Library databases were queried from inception to 22 October 2023 with the following search criteria: ("deep learning" OR "machine learning") AND "brain tumor" AND ("MRI" OR "MR" OR "CT") AND ("image classification" OR "brain tumor classification"). Studies published in the English language were included if they directly utilized CNN for the purposes of brain tumor classification. Studies were excluded if they did not utilize CNN or if they were review articles. There were no restrictions on publication date. Three authors (M.S.G., E.T., and T.S.) performed the database searches and screened the titles and abstracts for relevance in accordance with the inclusion and exclusion criteria. Any discrepancies were resolved by consensus among all authors. Eligible articles were pooled, and duplicates were removed. Full-text articles and their respective reference lists were reviewed for final study inclusion. Data were extracted from all eligible articles by four independent authors (M.S.G., E.T., T.S., and J.W.) who utilized a preformed data extraction sheet to collect the following information: author name, year, brain tumor subtypes classified, model algorithm, accuracy (averaged if multiple models were used), dataset size, and dataset source. A total of 920 articles were identified through our systematic literature search (Figure 1). After screening, duplicate removal, and appraisal for relevancy, 47 articles were included for final review.

Data
We utilized three different publicly accessible datasets via the Kaggle repository (https://www.kaggle.com) [14–16]. Our training dataset included 2611 T1-weighted contrast-enhanced images (TWCEI), of which 495 were tumor-free, 195 showed glioblastoma, 548 astrocytoma, 908 meningioma, and 465 schwannoma (Figure 2). Tumor diagnoses were identified via pathological assessment. All images were obtained in the axial plane. Of the 2611 total images, 2088 (80%) were used for training, 263 (10%) for validation, and 260 (10%) for testing. To further validate the generalizability of our model's performance, we conducted additional testing on two separate test datasets (100 images from each dataset, distinct from our training, validation, and test data). All images used throughout our workflow were 630 × 630 pixels in size.
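The 80/10/10 split described above can be sketched in a few lines of Python. This is a schematic only: the file names and labels below are hypothetical placeholders, not the actual Kaggle data.

```python
import random

def split_dataset(items, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle and split a list of (image_path, label) pairs into
    train/validation/test subsets, mirroring an 80/10/10 split."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# With 2611 images this yields 2088 / 261 / 262; the paper reports
# 2088 / 263 / 260, so the exact remainder allocation differs slightly.
dataset = [(f"img_{i}.png", i % 5) for i in range(2611)]  # hypothetical placeholders
train, val, test = split_dataset(dataset)
```

Shuffling before splitting matters here: the source datasets group images by diagnosis, and an unshuffled split would leave some classes out of the validation and test sets.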

Workflow
Our workflow prioritized cost-effectiveness without sacrificing classification accuracy (Figure 3). To develop our cloud-based platform, we utilized Google Colaboratory, which allowed us to develop our code, leverage cloud-based Python notebooks, and harness supercomputing resources. Initially, we performed a series of learned optimizations to improve our model's performance. We utilized various permutations of EfficientNet, BERT, Transformer, TensorFlow Hub, PyTorch Hub, ResNet-50, VGG-16, Inception V3, DenseNet201, Xception, MobileNet, and ResNet to achieve the highest possible prediction capabilities [17]. To further optimize our BRAIN model, we implemented data processing and augmentation techniques as previously described in the literature [17–21]. We expanded our data by incorporating varying combinations of noise injection, pixelation, image translation, cropping, resizing, and zooming [17,19,20]. Our BRAIN workflow automatically optimized our CNN model with the best possible combination of transfer learning and data augmentation features to provide the highest predictive accuracy for our classification task. Finally, to test the generalizability of our model, we tested BRAIN on two external datasets, each with distinct MRI parameters.
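The automated optimization described above can be pictured as an exhaustive search over backbone and augmentation-feature combinations. The sketch below is schematic: `evaluate` stands in for actual model training and validation, and its accuracy formula is a made-up placeholder, not the study's results.

```python
import itertools

BACKBONES = ["ResNet-50", "VGG-16", "InceptionV3", "DenseNet201", "Xception", "MobileNet"]
AUGMENTATIONS = ["noise", "translation", "crop", "zoom"]

def evaluate(backbone, augs):
    """Placeholder: in the real workflow this would train a CNN with the
    given backbone and augmentation set, then return validation accuracy."""
    return 0.9 + 0.01 * len(augs) - 0.001 * BACKBONES.index(backbone)

def search_best():
    """Try every (backbone, augmentation-subset) pair and keep the best."""
    best = (None, None, -1.0)
    for backbone in BACKBONES:
        # every subset of augmentation features (2^4 = 16 combinations)
        for r in range(len(AUGMENTATIONS) + 1):
            for augs in itertools.combinations(AUGMENTATIONS, r):
                acc = evaluate(backbone, augs)
                if acc > best[2]:
                    best = (backbone, augs, acc)
    return best

backbone, augs, acc = search_best()
```

In practice the combinatorial cost of this search is what makes cheap cloud compute attractive: each candidate configuration is an independent training run.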

Improvements to CNN Models for Brain Tumors Classification
To improve upon previously reported efforts in brain tumor classification, we utilized data pre-processing (DP) and transfer learning (TL) techniques to improve the efficiency and effectiveness of feature training [17–21]. Our DP workflow implemented TensorFlow, Keras, and PyTorch to perform data augmentation, including different permutations of image resizing, cropping, noise injection, translation, and pixel normalization [17–21]. By artificially expanding various feature parameters of our training dataset, DP can increase the generalizability of a model to robustly train on new images while reducing model overfitting [22].
TL has been well studied in medical oncology to facilitate image classification [23]. TL is an approach in which a pre-trained model, previously trained on a large dataset, is adapted to a new, related task with comparatively less data. By leveraging the knowledge learned from the initial training task, transfer learning can improve performance and reduce the training time and resource allocation for a new model. In our TL workflow, we utilized GoogleNet, AlexNet, EfficientNet, BERT, Transformer, TensorFlow Hub, PyTorch Hub, ResNet-50, VGG-16, Inception V3, DenseNet201, Xception, and MobileNet models to transfer the trained parameter weights to our new CNN model. We fine-tuned this model using Bayesian optimization to automate robust hyperparameter space scanning and optimization [17–23]. Our improved CNN model was then used for the final brain tumor classification.
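The hyperparameter scan can be illustrated with a minimal tuning loop. The study used Bayesian optimization; a plain random search is shown here only as a stand-in for the sample-evaluate-keep-best pattern, with a toy objective in place of real validation accuracy.

```python
import random

# Hypothetical search space; the study's actual hyperparameters are not specified.
SPACE = {
    "learning_rate": (1e-5, 1e-2),
    "dropout": (0.0, 0.5),
}

def sample(rng):
    """Draw one candidate configuration uniformly from the search space."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}

def objective(params):
    """Toy stand-in for validation accuracy; peaks near lr=1e-3, dropout=0.3."""
    return 1.0 - abs(params["learning_rate"] - 1e-3) - abs(params["dropout"] - 0.3)

def tune(trials=50, seed=0):
    """Evaluate `trials` random configurations and return the best one found."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        p = sample(rng)
        s = objective(p)
        if s > best_score:
            best_params, best_score = p, s
    return best_params, best_score

params, score = tune()
```

Bayesian optimization replaces the uniform `sample` with a surrogate model that proposes configurations near previously good ones, usually reaching a comparable optimum in far fewer trials.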


Systematic Review
Forty-seven studies that utilized CNN for brain tumor classification were included (Table 1) [13,21]. Since their inception, CNNs have continued to undergo serial refinements to maximize the robustness of brain tumor classification [17]. In 2012, AlexNet was the first revolutionary advancement, introducing consecutively stacked convolutional layers. It became one of the best models for brain tumor image classification, with some studies achieving nearly 99% accuracy [18,34,35,49]. The next breakthrough was the Visual Geometry Group (VGG) network, which built upon AlexNet, allowing for multiple layers and further increasing the depth of CNN architectures [29,30,58,70]. Despite increasing classification accuracies, one issue that quickly arose was the large computational expense and model overfitting seen with CNN-based classification models [21]. GoogleNet, a 22-layer deep CNN developed at Google, addressed many of these problems by employing the Inception module, which used a single convolutional image block (max pooling and three varying filter sizes) that could be concatenated and passed onto the next layer [71]. Many studies have incorporated these frameworks in combination, some adding extra convolutional layers to further reduce computational expense [25,34,48–52]. Given the limited availability of large, robust training datasets, many CNN architectures rely on transfer learning and data augmentation techniques to increase model efficiency and performance. However, issues regarding vanishing gradients leading to accuracy saturation have necessitated further development of CNN architecture.
Vanishing gradients occur during the training of CNNs, in which the gradients used to update earlier network layers "vanish" during backpropagation, limiting the ability to learn and tune those layers [21]. ResNet helped to mitigate such issues by incorporating residual skip-connection blocks that contain altered activation functions called rectified linear units (ReLU) [21,47,49,52]. More recently, DenseNet was created to further reduce the accuracy loss associated with vanishing gradients by implementing dense connections (DenseBlocks) between layers. By feeding the feature maps of all preceding layers forward as input to all subsequent layers, network connections are effectively shortened, leading to fewer parameters and more accurate and efficient computation [72].
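The benefit of a skip connection can be seen with a toy scalar example: when the learned transform is near zero, a plain layer collapses the signal, while a residual block preserves it (and, symmetrically, preserves gradient flow during backpropagation). This illustrates the principle only, not the study's architecture.

```python
def relu(x):
    """Rectified linear unit on a scalar."""
    return max(0.0, x)

def plain_block(x, w):
    """A plain layer: the output depends entirely on the transformed input."""
    return relu(w * x)

def residual_block(x, w):
    """A ResNet-style block: the identity shortcut adds x back, so even when
    the learned transform w*x is near zero, the signal survives."""
    return relu(x + w * x)

# With a near-zero weight, the plain block collapses the signal,
# while the residual block passes it through almost unchanged.
x, w = 1.0, 1e-6
```

Stacking many plain blocks multiplies such near-zero factors together, which is exactly how gradients shrink exponentially with depth; the additive shortcut breaks that chain.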
These landmark advancements in CNN architecture have allowed others to fine-tune and extract various elements for more advanced classification tasks. Ge C et al. utilized a multistream CNN not only to classify tumor subtypes but also to subcategorize gliomas with or without IDH and 1p19q mutations [28]. Similarly, 3D CNNs and hybrid CNNs were utilized to classify low- versus high-grade gliomas [32,33,41,48]. Xie et al. were able to further categorize brain tumors into four grades (Grade I, II, III, and IV) and five levels (AST-II, AST-III, OLI-II, OLI-III, and GBM classifications) [21]. To date, the continued development of CNN architectures is expanding classification and segmentation capabilities for diagnosing and managing brain tumors. However, important limitations regarding computational costs, clean-energy computing, model generalizability, and the learning curve limit the integration and widespread use of DL in various clinical settings [21]. In an effort to address these limitations, we propose a clean-energy, cloud-based computing platform that enables virtually any physician, regardless of computational knowledge, to classify brain tumors cost-effectively.

Evaluation Metrics
To evaluate our deep learning model based on CNN architecture for the classification of brain tumors (tumor-free, glioblastoma, astrocytoma, meningioma, and schwannoma), we used standard evaluation measures, including precision, recall, and accuracy. From the predictions obtained on our dataset, four statistical indices, namely true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN), were calculated and used to evaluate the performance of the proposed classification system (Rasool et al.) [73]: Precision = TP/(TP + FP); Recall = TP/(TP + FN); Accuracy = (TP + TN)/(TP + TN + FP + FN).
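These indices map to the reported metrics as follows. This is a minimal sketch using the standard definitions; the counts below are illustrative, not the study's actual confusion matrix.

```python
def precision(tp, fp):
    """Fraction of positive predictions that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual positives that are detected."""
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts for a single class (hypothetical, not from the study):
tp, fp, fn, tn = 90, 3, 5, 150
```

For a five-label task like this one, each metric is computed per class (treating that class as "positive") and then averaged across classes.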

Performance Analysis
Figures 4 and 5 illustrate the overall performance and the area under the precision-recall curve. Our results show that the proposed system classified the brain tumors successfully and efficiently, with an overall precision value of 96.8% across all five labels using single-label classification. When analyzed for individual labeling accuracy, our model demonstrated AUC (area under the receiver operating characteristic curve) values of 99.2% for glioblastoma, 98.4% for schwannoma, 97.7% for astrocytoma, 97.2% for meningioma, and 94.5% for normal tumor-free controls. After deploying our model to classify images from two different external datasets, we achieved average AUC values of 98.2% for glioblastoma, 94.5% for schwannoma, 97.7% for astrocytoma, 98.3% for meningioma, and 99.1% for normal tumor-free controls. These findings highlight the robust performance of our optimized CNN in classifying four major subtypes of brain tumors, as well as its generalizability in maintaining classification accuracy with images acquired using different MRI parameters.

Cost-Effectiveness
In addition to improving the classification accuracy of our BRAIN model, we developed a cost-effective platform that reduced our overall resource use. By focusing on enhancing automated optimized learning, we reduced the total time of our entire workflow (i.e., from image acquisition through CNN model training to model performance evaluation) to 103 min. By using cloud computing and storage platforms and clean-energy supercomputing resources, we cut our total cost to $0. Owing to rising concerns about the increasing carbon footprint of artificial intelligence, we ensured that our platform utilized clean energy. We estimated our carbon footprint using the following machine learning emissions calculator formula [74]: 250 W × 1 h = 0.25 kWh; 0.25 kWh × 0.57 kg CO2-eq/kWh = 0.14 kg CO2-eq. Future iterations of model training can utilize specific computing resources that reduce carbon emissions to 0 kg CO2-eq.
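The emissions estimate above follows the standard formula energy (kWh) × grid carbon intensity, which can be expressed as a small helper:

```python
def carbon_footprint_kg(power_watts, hours, grid_intensity_kg_per_kwh):
    """Estimate CO2-equivalent emissions: energy used (kWh) multiplied by the
    carbon intensity of the electricity grid (kg CO2-eq per kWh)."""
    energy_kwh = power_watts * hours / 1000.0
    return energy_kwh * grid_intensity_kg_per_kwh

# The values used above: 250 W for 1 h at 0.57 kg CO2-eq/kWh -> ~0.14 kg CO2-eq
footprint = carbon_footprint_kg(250, 1, 0.57)
```

The grid-intensity factor is the lever here: the same training run on a low-carbon grid (intensity near zero) produces a proportionally negligible footprint.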


Our Model Compared to the Current Literature
Several studies have explored the use of deep learning models for brain tumor classification based on magnetic resonance imaging (MRI) scans, using various algorithms and techniques. A survey conducted by Biratu et al. provides an overview of brain tumor segmentation and classification algorithms, highlighting the challenges and time-consuming nature of manual segmentation [12]. The research community has been striving to develop automatic methods that are non-invasive and can aid in the diagnosis process, avoiding the need for biopsy [10,21].
Our model is unique compared to previous models due to its innovative approach to brain tumor classification. We have developed a purely cloud-based platform that combines the strengths of deep learning models with the versatility of automated cloud storage, computing, and deployment. These benefits can potentially enhance the accuracy and efficiency of brain tumor classification, ultimately improving patient outcomes and treatment planning.
Our platform offers several notable benefits compared to existing approaches. Firstly, our model takes into consideration the environmental impact by prioritizing a low carbon footprint. We have implemented efficient algorithms and optimized computational processes to minimize energy consumption during the classification process. This eco-friendly approach aligns with the growing need for sustainable solutions in healthcare [10,21].
Ease of use is another significant advantage of our platform. We have designed an intuitive user interface that simplifies the workflow for medical professionals. The platform provides a seamless experience, allowing users to upload MRI scans, initiate the classification process, and access the results effortlessly. This user-friendly interface ensures that our platform can be readily adopted in clinical settings, even by users with limited technical experience with machine learning platforms.
Furthermore, our platform leverages available resources effectively. By utilizing deep learning models and advanced image processing techniques, we optimize the use of computational resources without compromising accuracy. This resource-efficient approach enables scalability, making our platform suitable for handling large datasets and accommodating future advancements in brain tumor classification [10,21,75].

Current Limitations of Brain Tumor Classification Models
While image classification models for brain tumors have shown promising results, they still face certain limitations. One of the main challenges is the need for large annotated datasets for training deep learning models effectively. The availability of high-quality labeled data is crucial for achieving optimal performance. However, obtaining such datasets can be a time-consuming and resource-intensive process. Additionally, the manual annotation process itself can introduce subjective biases and inconsistencies [9].
Overfitting is another common issue with deep learning platforms. This occurs when an algorithm learns the systematic and random fluctuations in the data so well that it negatively impacts the ability to classify new data. This leads to reduced generalizability, which is needed to adequately perform in various clinical settings [22]. Brain tumor classification models trained on specific datasets may not perform as effectively on unseen data from different sources or patient populations. This lack of generalizability can hinder the widespread adoption of these models in clinical practice [22]. Further research and development are required to address these limitations and enhance the robustness of the models.
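One common guard against the overfitting described above is early stopping: halt training once validation loss stops improving, before the model starts memorizing noise in the training data. A minimal sketch (the patience value and loss curve are illustrative):

```python
def early_stopping(val_losses, patience=3):
    """Return the epoch (index) at which training should stop: when validation
    loss has not improved for `patience` consecutive epochs."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1

# Validation loss improves, then rises as the model begins to overfit:
losses = [0.9, 0.7, 0.6, 0.65, 0.66, 0.7, 0.75]
stop = early_stopping(losses)
```

In frameworks such as Keras and PyTorch this pattern is typically combined with checkpointing, so the weights from the best-validation epoch (not the stopping epoch) are the ones kept.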

Limitations of Our Models
While our platform offers numerous benefits, it also has certain limitations. One of the primary limitations is the interpretability of the deep learning models used for classification. Deep learning models are seldom analyzed to understand which features are most important, making it challenging to understand the underlying decision-making process. Interpretability is crucial in the medical domain, where clinicians need to justify and explain the reasoning behind diagnostic results [76]. Addressing this limitation and improving the interpretability of our models are areas of ongoing research. Another limitation is the potential for false positives or false negatives in classification results. Despite the high accuracy achieved by our models, there is always a possibility of misclassification. False positives can lead to unnecessary interventions or treatments, while false negatives may result in missed diagnoses. Continuous refinement and validation of our models are necessary to minimize these errors and ensure reliable outcomes.

Future Direction of Brain Tumor Classification
Looking ahead, the future of image classification for brain tumors holds great potential. Improvements in data availability, including larger and more diverse datasets, will contribute to the development of more accurate and robust models. Additionally, more refined models can better grade tumor progression or be used to predict tumor recurrence. Surgically, deep learning models can harness image segmentation to quantify unresected tumor volumes.
To address the limitations of current models, future research should focus on interpretability and generalizability. Developing techniques to enhance the explainability of deep learning models will increase their acceptance and trust in clinical practice. Interrogating the explainability of models can allow clinicians to understand the steps and features that most greatly influence classification tasks [76]. Additionally, exploring transfer learning and domain adaptation methods can improve the generalizability of models, enabling them to perform well with new clinical data.
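One simple explainability technique in this vein is occlusion sensitivity: systematically mask parts of the input and measure how much the model's score changes, revealing which regions drive the prediction. A toy 1-D sketch, where the score function is a hypothetical stand-in for a trained classifier:

```python
def occlusion_importance(image, score_fn):
    """Estimate per-pixel importance by zeroing each pixel in turn and
    measuring how much the classifier's score drops."""
    base = score_fn(image)
    importance = []
    for i in range(len(image)):
        occluded = list(image)
        occluded[i] = 0.0  # mask one pixel
        importance.append(base - score_fn(occluded))
    return importance

# Toy 1-D "image" and a score that depends only on the middle pixel:
score = lambda img: img[2] * 2.0
imp = occlusion_importance([0.5, 0.1, 0.9, 0.2, 0.4], score)
```

On real MRI inputs the same idea is applied with sliding 2-D patches, producing a heat map a clinician can compare against the visible lesion.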
Moreover, the integration of multimodal data, such as combining MRI with other imaging modalities or clinical data (i.e., pathological specimens), holds promise for improving the accuracy and reliability of brain tumor classification. Fusion of complementary information from different sources can provide a more comprehensive understanding of tumor characteristics, help predict tumor burden and recurrence, and aid in treatment planning [10,21].

Conclusions
We have developed an improved CNN model that can accurately classify brain tumors in brain MR images for the purpose of diagnosing brain cancer. Our experimental results showed that our proposed model performs robustly and can be easily applied in healthcare systems. Our model offers several advantages over existing approaches, including a low carbon footprint, ease of use, and resource optimization. However, challenges remain in the field of image classification for brain tumors, such as the need for larger annotated datasets and improved interpretability. By addressing these limitations and focusing on interpretability and generalizability, the future of image classification for brain tumors holds great promise for enhancing diagnostic accuracy, treatment planning, and patient outcomes.

Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagram.


Figure 2. Brain tumor MRI image subtypes. Red circles outline respective tumor locations.

Figure 4. Performance of brain tumor classification (note: numbers may not add up to 100 due to rounding).



Table 1. Studies utilizing CNN for brain tumor classification.