Article

A Novel Approach for Classifying Brain Tumours Combining a SqueezeNet Model with SVM and Fine-Tuning

1 Faculty of Computing, Universiti Teknologi Malaysia, Johor Bahru 81310 UTM, Malaysia
2 Department of Computer Science, College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
* Author to whom correspondence should be addressed.
Electronics 2023, 12(1), 149; https://doi.org/10.3390/electronics12010149
Submission received: 26 November 2022 / Revised: 17 December 2022 / Accepted: 23 December 2022 / Published: 29 December 2022
(This article belongs to the Special Issue Advances in Signal, Image and Information Processing)

Abstract

Brain cancer affects both the young and the elderly and can be fatal in either group. Outcomes improve considerably when brain tumours are diagnosed and treated promptly. In medical image processing, deep learning methods play an essential role in helping clinicians diagnose a wide range of diseases. Classifying brain tumours is a critical step that relies heavily on the physician's experience and training, so an intelligent system for detecting and classifying these tumours is needed to support the non-invasive diagnosis of brain tumours from MRI (magnetic resonance imaging) images. This work presents a novel hybrid CNN-based deep learning structure to distinguish between three distinct types of human brain tumour in MRI scans. The proposed method follows two classification approaches. The first combines a pre-trained CNN (SqueezeNet) for feature extraction with an SVM classifier for pattern classification. The second combines a finely tuned SqueezeNet with a soft-max classifier. To evaluate the efficacy of the suggested method, MRI scans of the brain comprising 1937 images of glioma tumours, 926 images of meningioma tumours, 926 images of pituitary tumours, and 396 images of a normal brain were analysed. According to the experimental results, the finely tuned SqueezeNet model obtained an accuracy of 96.5%, while using SqueezeNet as a feature extractor with an SVM classifier increased recognition accuracy to 98.7%.

1. Introduction

Brain tumours are abnormal growths of brain tissue that can disrupt normal brain function [1]. The human brain contains billions of active cells, which makes its analysis challenging. Brain tumours are now one of the leading causes of death across all age groups, in both children and adults. Primary brain tumours are diagnosed in approximately 250,000 people globally each year, accounting for fewer than 2% of all cancer cases [2]. The World Health Organisation (WHO) states that approximately 150 distinct varieties of brain tumours can affect humans, including both benign and malignant types. Benign tumours remain confined within the brain, whereas malignant brain tumours, commonly referred to as brain cancer, can spread to other parts of the body [3,4]. Human brain tumours are treated with surgery, radiation therapy, and chemotherapy.
Early detection and accurate grading of brain tumours are critical for patient survival. The most common imaging modalities are ultrasound, computerised tomography (CT), and magnetic resonance imaging (MRI). Because brain tissue is dense and tumours vary widely in appearance, manual analysis of these images is extremely challenging, so a computer-based, automated method for tumour detection is highly beneficial [5]. In recent years, radiologists have been able to find brain tumours quickly and without surgery by using machine learning techniques and deep learning models to enhance tumour-detection algorithms [6]. Advances in deep neural network modelling have led to new methods for detecting and classifying brain tumours from medical images [7,8].
Medical images play an essential role in helping clinicians identify various diseases. Numerous non-invasive imaging modalities (MRI, CT, PET, X-ray, and SPECT) are used to study brain tumours [9,10]. MRI and CT are two common ways to examine abnormalities in the location, shape, or size of brain tissue, which can help detect tumours while they are still at an early stage [11].
Image classification, a machine learning (ML) technique, is used to train a computer system to act as an "expert system". One application in the medical domain uses medical imaging for diagnosis and education. In the classification of medical images such as brain scans, pre-processing and feature extraction steps are used to determine and classify the type of tumour in the image [12,13]. Among the many classification techniques, SVM is a popular choice that can be applied to various datasets, including medical image datasets [14].
Deep learning techniques have recently yielded promising results for data classification, clustering, and rule mining across different data fields [15]. Deep learning is a type of artificial neural network whose structure is inspired by the human nervous system. It differs from other methods in that it extracts features automatically from data representations. The Convolutional Neural Network (CNN) is the deep learning technique used most often in medical image analysis. The CNN model makes it possible to process medical images quickly and to automatically extract structured features and representations [16,17]. A CNN typically contains several layers: an input layer, an output layer, and many hidden layers. The hidden layers consist of a series of linked convolutional layers, the ReLU layer provides the activation function, and the network is trained by backpropagation [18].
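To make this layer structure concrete, the following minimal sketch (illustrative only and not taken from the paper; the layer sizes are arbitrary assumptions) stacks an input, a hidden convolutional layer with ReLU activation, and a fully connected output layer, with backpropagation performed during training:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal CNN: input -> hidden convolutional layer + ReLU -> fully connected output."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # hidden convolutional layer
            nn.ReLU(),                                    # activation function
            nn.MaxPool2d(2),                              # downsample 224 -> 112
        )
        self.classifier = nn.Linear(16 * 112 * 112, num_classes)  # output layer

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Backpropagation happens when loss.backward() is called during training.
model = TinyCNN()
logits = model(torch.randn(2, 1, 224, 224))  # two dummy 224x224 grayscale images
print(logits.shape)  # torch.Size([2, 4])
```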
This study classifies brain tumours from MRI images of the human brain using a hybrid CNN-SVM method. The primary contributions of this research are as follows:
It demonstrates the use of a hybrid deep learning model to detect early-stage brain tumours, which can expedite treatment and prevent the spread of malignant tissue.
It demonstrates the accurate performance of both classification approaches (SVM and soft-max fine-tuning) for classifying MRI brain tumours based on the CNN feature-extraction structure.
It demonstrates that a hybrid deep learning classification method combining SqueezeNet and SVM produces more accurate results than conventional methods.
It helps radiologists minimise errors when diagnosing tumours from magnetic resonance imaging (MRI) images without invasive procedures.
The rest of this paper is organised as follows: Section 2 discusses similar efforts (related work). The strategy and details of the proposed method are described in Section 3. Discussion of the proposed method is presented in Section 4. Conclusions are presented in Section 5.

2. Related Work

Recent years have seen the widespread adoption of ML and DL techniques for detecting and classifying brain tumours using different image modalities, especially MRI of the human brain. This section discusses the most recent and relevant research on the topic of this paper. Afshar et al. [19] designed a capsule network (CapsNet) that analyses MRI brain images together with the coarse tumour boundaries to classify brain cancers, achieving an accuracy of 90.89%. Barjaktarovic et al. [20] proposed a novel CNN architecture for brain tumour classification based on adapting a pre-trained network to T1-weighted contrast-enhanced MRI images; evaluated with 10-fold cross-validation on augmented images, the model reached 96.56% accuracy. Mzoughi et al. [21] proposed a fully automated 3D Convolutional Neural Network (CNN) model for grading glioma brain tumours into low-grade and high-grade gliomas, using an intensity-normalisation and adaptive contrast-enhancement pre-processing technique; the model achieved a validation accuracy of 96.49% on the BraTS 2018 dataset. Hashemzehi et al. [22] combined CNN and NADE in a hybrid model for detecting brain cancer in MRI scans. Using 3064 contrast-enhanced T1-weighted images, they evaluated how well the model could distinguish three types of brain tumour and achieved 96% accuracy. Based on MRI scans of meningioma, pituitary, and glioma tumours, Diaz-Pernas et al. [23] proposed a fully automated segmentation and classification algorithm. They used a CNN to implement a multi-scale approach inspired by human visual processing and achieved a 97% success rate on 3064 slices taken from 233 patients. Sajja et al. [24] constructed a VGG16 network to classify malignant and benign brain tumours using a 577-image T1-weighted BraTS brain tumour dataset, achieving 96.70% accuracy. Tazin et al. [25] developed a brain tumour detection method based on a modified MobileNetV2 and achieved an accuracy of 92%. Fatih et al. [26] proposed a brain tumour classification method combining a CNN structure with neutrosophic expert maximum-fuzzy-sure entropy (NS-CNN). The neutrosophic set expert maximum-fuzzy-sure method was used to segment the brain tumour, a CNN architecture then extracted tumour features, and an SVM used those features to decide whether the tumour was benign or malignant. Paul et al. [27] used images of axial brain tumours to train and compare two classification approaches, including a fully connected CNN; the CNN architecture, with two convolutional layers followed by two fully connected layers, achieved 91.43% accuracy. Khawaldeh et al. [28] proposed a non-invasive grading system for glioma brain tumours based on a modified version of the AlexNet CNN architecture, performing classification on whole-brain MRI scans with image-level rather than pixel-level labels.

3. The Proposed Approach

The main objective of this paper is to describe a new hybrid deep learning method. The proposed approach examines two distinct hybridisations. The first model, known as SN-SVM, combines SqueezeNet and SVM. The second model, known as SN-FT, combines SqueezeNet with fine-tuning. The block diagram of the proposed methodology is depicted in Figure 1, and the following subsections discuss the proposed strategy in detail.

3.1. Feature Extraction

Feature extraction uses a SqueezeNet model that has already been trained. The SqueezeNet network functions as a fixed feature extractor: the input image is propagated forward until it reaches a pre-specified layer (the feature extraction layer), where the process terminates and that layer's outputs are used as the features. The pre-trained SqueezeNet CNN deep learning model is utilised in this proposed approach. SqueezeNet aims to provide a smaller neural network with fewer parameters that can fit into computer memory and be transmitted more easily over a computer network [19,20]. SqueezeNet is an advanced CNN model that uses only 3 × 3 and 1 × 1 convolutional kernels, and its basic building block is called the fire module.
A fire module comprises a "squeeze" convolutional layer followed by an "expand" layer. An input image is first passed through a standalone convolutional layer called "conv1". The squeeze layer uses only 1 × 1 filters, and its outputs are fed into the expand layer, which consists of a mixture of 1 × 1 and 3 × 3 convolution filters that capture spatial information (feature extraction) at various scales, as illustrated in Figure 2. The conv1 layer is followed by eight fire modules, numbered "fire2" through "fire9". Max-pooling with a stride of 2 is performed after the conv1, fire4, fire8, and conv10 layers. ReLU activation is applied to the squeeze and expand layers within each fire module, and a dropout layer is added after the fire9 module to reduce overfitting. Downsampling is placed relatively late in the network, yielding the SqueezeNet variant with a "complex bypass" [20,21,22]. Figure 2 illustrates a zoomed-out view of the SqueezeNet architecture.
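For concreteness, the following is a minimal sketch of a fire module (an illustrative PyTorch re-implementation, not the authors' code; the channel counts shown follow the commonly cited fire2 configuration and are an assumption):

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet fire module: 1x1 squeeze followed by parallel 1x1 and 3x3 expand convolutions."""
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))                     # squeeze layer (1x1 filters only)
        return torch.cat([self.relu(self.expand1x1(x)),    # expand layer captures two scales
                          self.relu(self.expand3x3(x))], dim=1)

# Example: fire2-like module with 16 squeeze filters and 64 + 64 expand filters.
fire2 = Fire(in_ch=96, squeeze_ch=16, expand1x1_ch=64, expand3x3_ch=64)
out = fire2(torch.randn(1, 96, 55, 55))
print(out.shape)  # torch.Size([1, 128, 55, 55])
```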
After using SqueezeNet to extract medical features from the MRI brain images, we obtained feature vectors of length 1000. The network was trained with an initial learning rate of 0.8, decreased linearly throughout the training process, and a training batch size of 32; the layers of the SqueezeNet network are detailed in Table 1.
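As a minimal sketch of this feature-extraction step (the paper's experiments were run in MATLAB; here we assume a recent PyTorch/torchvision installation with the pre-trained squeezenet1_1 weights, and the image path is hypothetical), the 1000-dimensional output of the network's final layer can be taken as the feature vector for each MRI image:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pre-trained SqueezeNet used purely as a fixed feature extractor.
model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # MRI slices are grayscale; replicate to 3 channels
    transforms.ToTensor(),
])

def extract_features(image_path: str) -> torch.Tensor:
    """Return the 1000-D output of SqueezeNet's final layer for one MRI image."""
    img = preprocess(Image.open(image_path)).unsqueeze(0)   # shape (1, 3, 224, 224)
    with torch.no_grad():
        feats = model(img)                                   # shape (1, 1000)
    return feats.squeeze(0)

# Hypothetical path; replace with an image from the brain MRI dataset.
# features = extract_features("dataset/glioma/image_001.png")
```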

3.2. Classification

The hybrid deep learning algorithm builds on the SqueezeNet CNN model with two classification methods. The first combines SqueezeNet with an SVM classifier (SN-SVM); the second combines SqueezeNet's automatic feature extraction with a soft-max classifier and fine-tuning (SN-FT), as illustrated in Figure 1. This hybrid deep learning algorithm classifies MRI images of the human brain as normal (not-tumour) or abnormal (tumour) tissue, and classifies tumour types as meningioma, pituitary, or glioma.
In the supervised approach, a computer system is provided with a training collection of examples together with their target labels. After training on this labelled set, the system can respond accurately to new inputs. In the medical domain, supervised classification can label brain tumour images as normal, meningioma, cystic oligodendroglioma, lymphoma, ependymoma, glioblastoma multiforme, anaplastic astrocytoma, etc. The trained system must then predict one of these classes for unseen inputs, a process referred to as multi-class classification [23].
In the unsupervised classification approach, the system groups the data itself rather than being trained on labelled examples. Unsupervised learning develops a clustering method that categorises input items into clusters that were not previously defined, forming groups based on similarity. In the medical domain, such clustering can, for example, separate brain images into "normal" and "abnormal" groups [23].

3.2.1. SN-SVM Model

The first method combines the SqueezeNet model with a support vector machine (SVM) classifier. After feature extraction by SqueezeNet, the SVM determines whether an MRI image of the human brain shows normal (not-tumour) or abnormal (tumour) tissue and classifies tumours as meningioma, pituitary, or glioma.
SVM is a commonly used classifier technique that is related to supervised learning methods and is applied in many fields, such as face analysis, medical image analysis, handwriting analysis, etc. It is especially useful for pattern detection and regression-based applications [24,25]. The SVM classification function solves the learning problem by separating the different sets and finding an “optimal” hyperplane.
The algorithm of SVM is divided into three parts:
  • The basic case of linearly separable classes;
  • The extension to the non-linearly separable case using kernel functions;
  • The application to non-separable classes.
The Radial Basis Function (RBF) kernel is one of the most frequently used kernel functions. When there is no prior knowledge of the data and discontinuities are acceptable, it is a suitable default choice [26]. Figure 3 illustrates the SVM technique, in which a hyperplane separates two classes, (A) and (B), based on their features.
The proposed method uses an SVM classifier to identify normal and abnormal tissue in MRI brain scans (not-tumour versus glioma, meningioma, or pituitary tumours). After features have been extracted from the brain MRI scans by the convolution layers of the SqueezeNet model, the SVM algorithm is applied at the fully connected layer (FCL) to categorise tumours. Figure 4 illustrates the structure of the hybrid classification model (SN-SVM) and explains the steps of the SN-SVM method. The main steps of the hybrid SN-SVM model for classifying MRI brain images are as follows (a code sketch of this pipeline is given after the list):
  • Feature extraction step:
    - The SqueezeNet model automatically generates features from the MRI brain image dataset.
    - The extracted features are passed to the Fully Connected Layer (FCL).
  • Classification step:
    - The SVM algorithm is applied at the Fully Connected Layer (FCL), also called the soft-max layer.
    - The features are passed to the SVM module for training and testing on the MRI brain image dataset.
    - An RBF function is used as the kernel in the hybrid model, defined as k(x, y) = exp(−‖x − y‖/σ), where x and y are samples in the feature space of the training set and σ is the kernel width parameter.
    - The output has two main classes:
      1. Not-Tumour;
      2. Tumour, which has three sub-classes: Meningioma, Pituitary, and Glioma.
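The classification step of SN-SVM can then be sketched as follows (an illustrative scikit-learn sketch rather than the authors' MATLAB implementation; the feature and label files are hypothetical placeholders for the 1000-D SqueezeNet features of Section 3.1 and their class labels):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# X: (n_samples, 1000) SqueezeNet features, y: labels in
# {"not_tumour", "meningioma", "pituitary", "glioma"} -- hypothetical files.
X = np.load("squeezenet_features.npy")
y = np.load("labels.npy", allow_pickle=True)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)   # 80/20 split as in the paper

# RBF-kernel SVM applied to the extracted deep features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```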

3.2.2. SN-FT Model

The second proposed method is supervised classification that fine-tunes the pre-trained SqueezeNet before classifying brain tissue. Fine-tuning is a process that retunes or tweaks a model that has already been trained (pre-trained) for one task so that it can perform a second, similar task [27,28]. Assuming the new task is similar to the old one, using an already-trained artificial neural network saves the time and effort of building one from scratch [29,30].
This approach combines the SqueezeNet model with a fine-tuning technique (SN-FT). SqueezeNet extracts features from the MRI images, and fine-tuning is then used to classify the brain tissue as glioma, meningioma, pituitary tumour, or not-tumour. The pre-trained SqueezeNet model used for feature extraction is retuned for the new task by freezing the weights of the first few layers of the network. A new output (soft-max) layer is then added to the target model, with the number of outputs equal to the number of categories in the target dataset. Finally, the SqueezeNet model is retrained on the new dataset with the learning rate of the initial layers set to zero; freezing these layers also increases the network's training speed. Figure 5 illustrates how the CNN-based hybrid deep learning with output fine-tuning is assembled, and Figure 6 shows the structure of the fine-tuning process.
Figure 5 explains the steps of the SN-FT method. The main steps of the fine-tuning algorithm, after feature extraction from the input images via the convolutional layers (CL) of the SqueezeNet model, are as follows (a code sketch of these steps is given after the list):
  • Feature extraction step:
    - The SqueezeNet model automatically generates features from the MRI human brain image dataset.
    - The extracted features are passed to the Fully Connected Layer (FCL).
  • Classification step:
    - Load the pre-trained SqueezeNet CNN model (all model structure and associated parameters are copied from the pre-trained SqueezeNet, except for the output layer).
    - Replace the pre-trained network's last (soft-max) layer with a new output layer.
    - Add an output layer to the target model with as many outputs as there are categories.
    - Freeze the weights of some initial layers of the pre-trained network, because these layers capture common features such as curves and edges that are also relevant to the new task.
    - Train the new model structure while keeping the frozen weights fixed, so that the network learns dataset-specific features in the remaining layers.
    - The output layer distinguishes the following classes:
      • Not-Tumour;
      • Meningioma Tumour;
      • Glioma Tumour;
      • Pituitary Tumour.
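A minimal sketch of these fine-tuning steps is shown below (illustrative PyTorch, not the authors' MATLAB code; the choice of how many early blocks to freeze and the optimiser settings are assumptions):

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # not-tumour, meningioma, glioma, pituitary

# 1. Load the pre-trained SqueezeNet model.
model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)

# 2. Freeze the weights of the early layers, which capture generic edges and curves.
for param in model.features[:6].parameters():   # freezing the first blocks is an assumption
    param.requires_grad = False

# 3. Replace the output layer: SqueezeNet classifies via a final 1x1 convolution.
model.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
model.num_classes = NUM_CLASSES

# 4. Train only the remaining (unfrozen) parameters; soft-max is applied inside the loss.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step with dummy data:
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```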

4. Results and Discussion

Numerous experimental assessments were conducted to evaluate the performance of the hybrid deep learning algorithm based on the SqueezeNet CNN model. All experiments were implemented in MATLAB R2021b. The hybrid algorithm uses the SqueezeNet model in two configurations: the first combines SqueezeNet with an SVM classifier (SN-SVM), and the second combines it with fine-tuning (SN-FT). The algorithm classifies MRI brain images as glioma, meningioma, or pituitary tumour, or as not-tumour. The accuracy of each configuration is as follows: SN-SVM achieves 98.7% and SN-FT achieves 96.5%. Based on these results, the SN-SVM algorithm offers the superior performance and the best accuracy of the two proposed configurations.

4.1. Dataset

This study applied data augmentation methods to a dataset consisting of 3460 unique brain MRI images [31]. Jun Cheng first published the dataset online in 2017, and Sartaj Bhuvaji released the most recent revision in 2020 [32]. The dataset, sourced from kaggle.com, contains 3064 T1-weighted, contrast-enhanced MRI images covering the three most common types of brain tumour: 708 meningiomas, 1426 gliomas, and 930 pituitary tumours. The images were obtained from 233 patients in three orientations: axial (994 images), sagittal (1025 images), and coronal (1045 images). The data were randomly divided into training and testing sets, with 80% of the data allocated to the former and 20% to the latter. Each main folder contains four sub-folders holding the MRI scans of the various tumour types [32]. Figure 7 shows the meningioma, pituitary, glioma, and not-tumour classes used in this dataset and displays each tumour type's axial, sagittal, and coronal planes.
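As a brief illustration of this 80/20 split (a sketch assuming the Kaggle images have been unpacked into one folder per class; the folder name is hypothetical), the data can be loaded and divided as follows:

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: dataset/{glioma,meningioma,pituitary,no_tumour}/*.jpg
full_set = datasets.ImageFolder("dataset", transform=transform)

n_train = int(0.8 * len(full_set))                       # 80% training
n_test = len(full_set) - n_train                         # 20% testing
train_set, test_set = torch.utils.data.random_split(
    full_set, [n_train, n_test],
    generator=torch.Generator().manual_seed(0))          # reproducible random split

train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=32)
print(len(train_set), "training images,", len(test_set), "testing images")
```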

4.2. Evaluation Measures

This study aims to evaluate the efficacy of a proposed hybrid classification based on the CNN structure in differentiating between three types of brain tumours (meningioma, pituitary, and glioma) and the normal brain (No_Tumour). The effectiveness of the proposed approach was measured using conventional evaluation criteria. Accuracy, precision, recall, and the F1-score were used as measures in this investigation.
Four statistical indices—true negative (TN), true positive (TP), false positive (FP), and false negative (FN)—were used to measure the performance of the proposed classification system by utilising the MRI dataset [33,34].
Accuracy = (TP + TN) / (TP + FP + TN + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Error Rate = (FP + FN) / (TP + FN + FP + TN)
F1-score = 2 × (Precision × Recall) / (Precision + Recall)
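For reference, the sketch below computes these measures per class from a multi-class confusion matrix in the one-vs-rest manner implied by Tables 2 and 3 (illustrative Python with made-up counts, not the evaluation script used in the paper):

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """Compute accuracy, precision, recall, F1, and error rate for each class
    from a confusion matrix cm[i, j] = count of true class i predicted as class j."""
    total = cm.sum()
    results = {}
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fp = cm[:, k].sum() - tp
        fn = cm[k, :].sum() - tp
        tn = total - tp - fp - fn
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        results[k] = {
            "accuracy": (tp + tn) / total,
            "precision": precision,
            "recall": recall,
            "f1": 2 * precision * recall / (precision + recall),
            "error_rate": (fp + fn) / total,
        }
    return results

# Toy 4-class confusion matrix (glioma, meningioma, pituitary, not-tumour) with made-up counts.
cm = np.array([[280, 3, 1, 1],
               [4, 180, 2, 0],
               [1, 1, 183, 1],
               [0, 2, 0, 77]])
print(per_class_metrics(cm)[0])
```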

4.3. Performance Analysis

All experiments were conducted on a laptop with an SSD hard drive, 20 GB of RAM, and an Intel Core i7 processor. The MRI brain images were resized to 224 × 224 pixels. The dataset was separated into training and testing sets containing 80% and 20% of the data, respectively. The following sections detail the outcomes of applying the proposed CNN-based hybrid deep learning classification to the brain image dataset using the two combined methodologies, the first combining the SqueezeNet technique with the SVM technique (SN-SVM) and the second combining the SqueezeNet technique with the fine-tuning technique (SN-FT), to identify the tumour type from MRI brain images. The overall confusion matrices are depicted in Figure 8 and Figure 9; the proposed system classified brain tumours with 98.7% accuracy using the SN-SVM method and 96.5% accuracy using the SN-FT method, with an input image size of 224 × 224 pixels in both cases.
Figure 8 and Figure 9 show the recall and precision for each type of MRI brain image (meningioma tumour, pituitary tumour, glioma tumour, and not-tumour) and the error rate of the two hybrid methods (SN-SVM and SN-FT). This study used both normal and abnormal MRI brain tissue images; normal brain tissue corresponds to not-tumour, while abnormal tissue corresponds to a tumour such as a meningioma, pituitary, or glioma tumour. The proposed hybrid classification method performs efficient automatic brain tumour classification, and its performance was evaluated using recall, precision, and F1-score. Table 2 and Table 3 compare the two proposed hybrid deep learning methods (SN-SVM and SN-FT).
Figure 10 depicts the performance of the proposed methods (SN-SVM and SN-FT) for the classification of MRI brain images to determine whether or not they have a tumour, as well as classifying the tumour types as glioma, meningioma, and pituitary tumours based on recall, precision, and F1-Score parameters.
Figure 11 and Table 4 display the accuracy and error rate of each proposed hybrid deep learning method (SN-SVM and SN-FT) for classifying MRI brain images as having a tumour (glioma, meningioma, or pituitary tumour) or not. The accuracy of each method is as follows: SN-SVM achieves 98.73% and SN-FT achieves 96.51%. Based on these results, the SN-SVM algorithm delivers the superior performance and the best accuracy.
Figure 11 illustrates that the proposed SN-SVM classifier achieved the best performance, with 98.7% accuracy, 98.7% recall, 98.3% precision, 98.5% F1-score, and a 1.3% error rate.
Table 5 compares the accuracy, precision, and F1-score of our proposed methods with those of related work. As shown in Table 5, five approaches recently described in the literature were used for the comparison. The best of these results, 97.00% accuracy, 95.80% precision, and 96.07% F1-score, obtained by the multi-scale CNN approach [35], were superior to those of our SN-FT approach. However, our SN-SVM approach exceeds all of the compared methods, with 98.73% accuracy, 98.39% precision, and 98.56% F1-score.
Figure 12 compares all of the aforementioned methods that used contrast-enhanced brain tumour MRI images in their experiments. The proposed SN-SVM method outperforms the others in terms of accuracy, with a value of almost 99%.

4.4. Model Evaluation Using Public Dataset

To validate the performance of the proposed hybrid classification method, an additional benchmark dataset was used in this section. We applied an MRI brain image dataset gathered from different patients across several hospitals, the WHO (World Health Organization), and the Whole Brain Atlas site, and published on www.kaggle.com (https://www.kaggle.com/datasets/navoneel/brain-mri-images-for-brain-tumor-detection (accessed on 10 November 2022)). It contains 253 images: 155 images of persons with brain tumours and 98 images of persons without brain tumours. Some sample images from this MRI brain dataset are shown in Figure 13.
In this experiment, the dataset was divided into 80% and 20% segments for training and testing, respectively. The outcomes of the proposed approach and other results reported in the literature are shown in Table 6. The results indicate that the proposed method achieves comparable results. Moreover, the proposed approach has several advantages compared with [40,41,42], because the hybrid model combines the benefits of the SqueezeNet deep learning model for automatic feature extraction with the strength of the SVM classifier for classification.
The results of the hybrid deep learning classification based on combining the SqueezeNet technique with the SVM (SN-SVM) and fine-tuning (SN-FT) techniques are shown in the confusion matrices of Figure 14, which present the testing results for MRI brain images classified as tumour or not-tumour.

5. Conclusions

This paper discussed a new CNN architecture for efficient automated classification of brain tumours into three classes: meningioma, glioma, and pituitary. It introduced a novel CNN-SVM-based hybrid deep learning classification model that combines the SqueezeNet technique with SVM to identify tumour types from MRI images of the human brain. Our architecture efficiently classified the brain tumours into three classes with high performance, as measured by accuracy, recall, precision, error rate, and F1-score, across all dataset cases: T1-weighted, T2-weighted, Fluid Attenuated Inversion Recovery (FLAIR), and T1-weighted contrast-enhanced (T1ce). The system classifies MRI brain images as tumour or not-tumour and further classifies tumours into three types, meningioma, glioma, and pituitary, to support diagnosis of brain tumours at an early stage. Based on quantitative results from the brain tumour dataset, the SN-SVM approach proved highly effective, achieving a recall of 98.7%, precision of 98.3%, F1-score of 98.5%, and classification accuracy of 98.7% on the test set.
In future work, other CNN models such as AlexNet or ResNet will be combined with other machine learning techniques to classify brain tumour types from MRI brain images.

Author Contributions

Conceptualization, M.R., N.A.I., A.A.-D. and W.M.S.Y.; methodology, N.A.I., W.M.S.Y. and A.A.; software, M.R. and N.A.I.; validation, A.A., N.A.I. and M.R.; formal analysis, M.R. and N.A.I.; investigation, all authors; writing—original draft preparation, M.R. and A.A.-D.; writing—review and editing, M.R., W.M.S.Y. and A.A.; visualization, W.M.S.Y. and N.A.I.; supervision, N.A.I., A.A. and A.A.-D; project administration, A.A.-D. and W.M.S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rasool, M.; Ismail, N.A.; Boulila, W.; Ammar, A.; Samma, H.; Yafooz, W.M.; Emara, A.H.M. A Hybrid Deep Learning Model for Brain Tumour Classification. Entropy 2022, 24, 799. [Google Scholar] [CrossRef]
  2. Nayak, D.R.; Padhy, N.; Mallick, P.K.; Zymbler, M.; Kumar, S. Brain Tumor Classification Using Dense Efficient-Net. Axioms 2022, 11, 34. [Google Scholar] [CrossRef]
  3. Pradhan, A.; Mishra, D.; Das, K.; Panda, G.; Kumar, S.; Zymbler, M. On the Classification of MR Images Using “ELM-SSA” Coated Hybrid Model. Mathematics 2021, 9, 2095. [Google Scholar] [CrossRef]
  4. Wild, C.P.; Stewart, B.W.; Wild, C. World Cancer Report 2014; World Health Organization: Geneva, Switzerland, 2014. [Google Scholar]
  5. Reddy, A.V.; Krishna, C.; Mallick, P.K.; Satapathy, S.K.; Tiwari, P.; Zymbler, M.; Kumar, S. Analyzing MRI scans to detect glioblastoma tumor using hybrid deep belief networks. J. Big Data 2020, 7, 1–17. [Google Scholar] [CrossRef]
  6. Nayak, D.R.; Padhy, N.; Mallick, P.K.; Bagal, D.K.; Kumar, S. Brain Tumour Classification Using Noble Deep Learning Approach with Parametric Optimization through Metaheuristics Approaches. Computers 2022, 11, 10. [Google Scholar] [CrossRef]
  7. Mansour, R.F.; Escorcia-Gutierrez, J.; Gamarra, M.; Díaz, V.G.; Gupta, D.; Kumar, S. Artificial intelligence with big data analytics-based brain intracranial hemorrhage e-diagnosis using CT images. In Neural Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–13. [Google Scholar]
  8. Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A deep learning-based framework for automatic brain tumors classification using transfer learning. Circuits Syst. Signal Process. 2020, 39, 757–775. [Google Scholar] [CrossRef]
  9. Kaus, M.R.; Warfield, S.K.; Nabavi, A.; Black, P.M.; Jolesz, F.A.; Kikinis, R. Automated segmentation of MR images of brain tumors. Radiology 2001, 218, 586–591. [Google Scholar] [CrossRef] [Green Version]
  10. Jayadevappa, D.; Srinivas Kumar, S.; Murty, D. Medical image segmentation algorithms using deformable models: A review. Iete Tech. Rev. 2011, 28, 248–255. [Google Scholar] [CrossRef]
  11. Tiwari, A.; Srivastava, S.; Pant, M. Brain tumor segmentation and classification from magnetic resonance images: Review of selected methods from 2014 to 2019. In Pattern Recognition Letters; Elsevier: Amsterdam, The Netherlands, 2020; Volume 131, pp. 244–260. [Google Scholar]
  12. Gosavi, D.; Dere, S.; Bhoir, D.; Rathod, M. Brain Tumor Classification Using GLCM Features and Neural Network. In Proceedings of the 2nd International Conference on Advances in Science & Technology (ICAST), Mumbai, India, 8–9 April 2019. [Google Scholar]
  13. Giraddi, S.; Vaishnavi, S. Detection of Brain Tumor using Image Classification. In Proceedings of the 2017 International Conference on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC), Mysore, India, 8–9 September 2017; pp. 640–644. [Google Scholar]
  14. Soofi, A.A.; Awan, A. Classification techniques in machine learning: Applications and issues. J. Basic Appl. Sci. 2017, 13, 459–465. [Google Scholar] [CrossRef]
  15. Sultana, J. Predicting Indian Sentiments of COVID-19 Using MLP and Adaboost. Turk. J. Comput. Math. Educ. (Turcomat) 2021, 12, 706–714. [Google Scholar]
  16. Işın, A.; Direkoğlu, C.; Şah, M. Review of MRI-based brain tumor image segmentation using deep learning methods. Procedia Comput. Sci. 2016, 102, 317–324. [Google Scholar] [CrossRef]
  17. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Für Med. Phys. 2019, 29, 102–127. [Google Scholar] [CrossRef]
  18. Dheir, I.M.; Mettleq, A.S.A.; Elsharif, A.A.; Abu-Naser, S.S. Classifying nuts types using convolutional neural network. Int. J. Acad. Inf. Syst. Res. (Ijaisr) 2019, 3, 12–18. [Google Scholar]
  19. Abhinav, G. Deep Learning Reading Group: SqueezeNet. Available online: https://www.kdnuggets.com/2016/09/deep-learning-reading-group-squeezenet.html (accessed on 5 May 2021).
  20. Gholami, A.; Kwon, K.; Wu, B.; Tai, Z.; Yue, X.; Jin, P.; Zhao, S.; Keutzer, K. Squeezenext: Hardware-aware neural network design. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1638–1647. [Google Scholar]
  21. Chappa, R.T.N.; El-Sharkawy, M. Squeeze-and-Excitation SqueezeNext: An Efficient DNN for Hardware Deployment. In Proceedings of the 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2020; pp. 0691–0697. [Google Scholar]
  22. Beheshti, N.; Johnsson, L. Squeeze u-net: A memory and energy efficient image segmentation network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Virtual, 14–19 June 2020; pp. 364–365. [Google Scholar]
  23. Latif, J.; Xiao, C.; Imran, A.; Tu, S. Medical imaging using machine learning and deep learning algorithms: A review. In Proceedings of the 2019 2nd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Sindh, Pakistan, 30–31 January 2019; pp. 1–5. [Google Scholar]
  24. Nguyen, L. Tutorial on support vector machine. Appl. Comput. Math. 2017, 6, 1–15. [Google Scholar]
  25. Sharma, R.; Sungheetha, A. An efficient dimension reduction based fusion of CNN and SVM model for detection of abnormal incident in video surveillance. J. Soft Comput. Paradig. (Jscp) 2021, 3, 55–69. [Google Scholar] [CrossRef]
  26. Bhavsar, H.; Panchal, M.H. A review on support vector machine for data classification. Int. J. Adv. Res. Comput. Eng. Technol. (Ijarcet) 2012, 1, 185–189. [Google Scholar]
  27. Renda, A.; Frankle, J.; Carbin, M. Comparing rewinding and fine-tuning in neural network pruning. arXiv 2020, arXiv:2003.02389. [Google Scholar]
  28. Nagabandi, A.; Kahn, G.; Fearing, R.S.; Levine, S. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 7559–7566. [Google Scholar]
  29. Gong, K.; Guan, J.; Liu, C.-C.; Qi, J. PET image denoising using a deep neural network through fine tuning. IEEE Trans. Radiat. Plasma Med Sci. 2018, 3, 153–161. [Google Scholar] [CrossRef]
  30. Dash, A.K.; Mohapatra, P. A Fine-tuned deep convolutional neural network for chest radiography image classification on COVID-19 cases. Multimed. Tools Appl. 2022, 81, 1055–1075. [Google Scholar] [CrossRef]
  31. Cheng, J. Brain Tumor Dataset. Available online: https://figshare.com/articles/dataset/brain_tumor_dataset/1512427/5 (accessed on 2 April 2017).
  32. Kadam, A.; Bhuvaji, S.; Bhumkar, P.; Dedge, S.; Kanchan, S. Brain Tumor Classification (MRI). Available online: https://www.kaggle.com/sartajbhuvaji/brain-tumor-classification-mri (accessed on 5 May 2021).
  33. Alqudah, A.M.; Albadarneh, A.; Abu-Qasmieh, I.; Alquran, H. Developing of robust and high accurate ECG beat classification by combining Gaussian mixtures and wavelets features. Australas. Phys. Eng. Sci. Med. 2019, 42, 149–157. [Google Scholar] [CrossRef]
  34. Alqudah, A.M.; Alquraan, H.; Qasmieh, I.A.; Alqudah, A.; Al-Sharu, W. Brain Tumor Classification Using Deep Learning Technique—A Comparison between Cropped, Uncropped, and Segmented Lesion Images with Different Sizes. arXiv 2020, arXiv:2001.08844. [Google Scholar] [CrossRef]
  35. Díaz-Pernas, F.J.; Martínez-Zarzuela, M.; Antón-Rodríguez, M.; González-Ortega, D. A deep learning approach for brain tumor classification and segmentation using a multiscale convolutional neural network. Healthcare 2021, 9, 153. [Google Scholar] [CrossRef] [PubMed]
  36. Badža, M.M.; Barjaktarović, M.Č. Classification of brain tumors from MRI images using a convolutional neural network. Appl. Sci. 2020, 10, 1999. [Google Scholar] [CrossRef] [Green Version]
  37. Hashemzehi, R.; Mahdavi, S.J.S.; Kheirabadi, M.; Kamel, S.R. Detection of brain tumors from MRI images base on deep learning using hybrid model CNN and NADE. Biocybern. Biomed. Eng. 2020, 40, 1225–1232. [Google Scholar] [CrossRef]
  38. Sajja, V.R. Classification of Brain Tumors using Fuzzy C-means and VGG16. Turk. J. Comput. Math. Educ. (Turcomat) 2021, 12, 2103–2113. [Google Scholar]
  39. Tazin, T.; Sarker, S.; Gupta, P.; Ayaz, F.I.; Islam, S.; Monirujjaman Khan, M.; Bourouis, S.; Idris, S.A.; Alshazly, H. A Robust and Novel Approach for Brain Tumor Classification Using Convolutional Neural Network. Comput. Intell. Neurosci. 2021, 2021, 2392395. [Google Scholar] [CrossRef]
  40. Khalil, M.; Ayad, H.; Adib, A. Performance evaluation of feature extraction techniques in MR-Brain image classification system. Procedia Comput. Sci. 2018, 127, 218–225. [Google Scholar] [CrossRef]
  41. Leo, M.J. MRI Brain Image Segmentation and Detection Using K-NN Classification. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2019; Volume 1362, p. 012073. [Google Scholar]
  42. Khawaldeh, S.; Pervaiz, U.; Rafiq, A.; Alkhawaldeh, R.S. Noninvasive grading of glioma tumor using magnetic resonance imaging with convolutional neural networks. Appl. Sci. 2018, 8, 27. [Google Scholar] [CrossRef]
Figure 1. The proposed methodology.
Figure 2. SqueezeNet architecture with the squeeze and expand process.
Figure 3. The SVM linear solution with the decision boundary.
Figure 4. Structure of the SN-SVM model.
Figure 5. The finely tuned model.
Figure 6. (Left): The original CNN network architecture. (Right): Removing the FC layers from the CNN (this serves as our extracted features).
Figure 7. The brain tumour types in three planes.
Figure 8. Results of the SN-SVM method.
Figure 9. Results of the SN-FT method.
Figure 10. Performance results of the SN-SVM and SN-FT methods for classifying tumour types (glioma, meningioma, and pituitary tumour) and not-tumour.
Figure 11. Performance results of the SN-SVM and SN-FT methods.
Figure 12. Comparison of the related works based on accuracy, precision, and F1-score.
Figure 13. Sample images from the MRI brain dataset.
Figure 14. Results of the proposed method using the public dataset. (A): Results of the SN-SVM method. (B): Results of the SN-FT method.
Table 1. Details of layers of the SqueezeNet network.

Layer No | Layer | Layer Name | Layer Properties
1 | Conv1 | Image Input | 224 × 224 × 1 images
  |  | Convolutional | (1 × 1) filters instead of (3 × 3) with [2, 2] stride
  |  | Non-Linear Layer | ReLU activation
2 | Max Pooling | Max Pooling | (3 × 3) max pooling with [2, 2] stride and [0 0 0 0] padding
3 | Fire2 | Convolutional | 96 (1 × 1) filters instead of (3 × 3) with [2, 2] stride
  |  | Non-Linear Layer | ReLU activation
4 | Fire3 | Convolutional | 128 (1 × 1) filters instead of (3 × 3) with [2, 2] stride
  |  | Non-Linear Layer | ReLU activation
5 | Fire4 | Convolutional | 128 (3 × 3) conv with [2, 2] stride and [2, 2] padding
  |  | Non-Linear Layer | ReLU activation
6 | Max Pooling | Max Pooling | (3 × 3) max pooling with [2, 2] stride and [0 0 0 0] padding
7 | Fire5 | Convolutional | 256 (3 × 3) conv with [2, 2] stride and [2, 2] padding
  |  | Non-Linear Layer | ReLU activation
8 | Fire6 | Convolutional | 256 (3 × 3) conv with [2, 2] stride and [2, 2] padding
  |  | Non-Linear Layer | ReLU activation
9 | Fire7 | Convolutional | 384 (3 × 3) conv with [2, 2] stride and [2, 2] padding
  |  | Non-Linear Layer | ReLU activation
10 | Fire8 | Convolutional | 384 (3 × 3) conv with [2, 2] stride and [2, 2] padding
  |  | Non-Linear Layer | ReLU activation
11 | Max Pooling | Max Pooling | (3 × 3) max pooling with [2, 2] stride and [0 0 0 0] padding
12 | Fire9 | Convolutional | 512 (3 × 3) conv with [2, 2] stride and [2, 2] padding
  |  | Non-Linear Layer | ReLU activation
13 | Dropout | Dropout | 50% dropout
14 | Conv10 | Convolutional | 512 (3 × 3) conv with [2, 2] stride and [2, 2] padding
  |  | Non-Linear Layer | ReLU activation
15 | Max Pooling | Max Pooling | (3 × 3) max pooling with [2, 2] stride and [0 0 0 0] padding
16 | Fully Connected | Fully Connected | 1000 hidden neurons in a fully connected (FC) layer
17 | Soft-max | Soft-max | Soft-max
18 | Output | Classification Output | 4 output classes: "1" for meningioma tumour, "2" for glioma tumour, "3" for pituitary tumour, "4" for not-tumour
Table 2. Results of each type of tumour in the SN-SVM proposed method.

Tumour Types | Recall | Precision | F1-Score
Glioma | 98.9% | 98.9% | 98.9%
Meningioma | 97.9% | 98.9% | 98.3%
Pituitary | 99.4% | 99.4% | 99.4%
Not_Tumour | 98.7% | 96.3% | 97.5%
Table 3. Results of each type of tumour in the SN-FT proposed method.

Tumour Types | Recall | Precision | F1-Score
Glioma | 98.4% | 92% | 95%
Meningioma | 93.5% | 97.2% | 95.3%
Pituitary | 96.7% | 100% | 98.3%
Not_Tumour | 98.7% | 98.8% | 98.8%
Table 4. Results of the performance of the SN-SVM and SN-FT methods.

Method | Recall | Precision | F1-Score | Error Rate | Accuracy
SqueezeNet with SVM (SN-SVM) | 98.7% | 98.3% | 98.5% | 1.3% | 98.7%
SqueezeNet with Fine-Tuning (SN-FT) | 96.8% | 97% | 96.8% | 3.5% | 96.5%
Table 5. Comparisons with the literature.

Ref. | Method | Accuracy | Precision | F1-Score
Barjaktarovic et al. [36] | CNN | 96.56% | 94.81% | 94.94%
Hashemzehi et al. [37] | CNN and NADE | 96.00% | 94.49% | 94.56%
Diaz-Pernas et al. [35] | Multi-scale CNN | 97.00% | 95.80% | 96.07%
Sajja et al. [38] | Deep CNN (VGG16) | 96.70% | 97.05% | 97.05%
Tazin et al. [39] | MobileNetV2 | 92.00% | 92.50% | 92.00%
Proposed method | SN-SVM | 98.73% | 98.39% | 98.56%
Proposed method | SN-FT | 96.5% | 96.8% | 96.8%
Table 6. Performance results using the public dataset.

Ref. | Method | Accuracy
Barjaktarovic et al. [36] | CNN | 96.56%
Hashemzehi et al. [37] | CNN and NADE | 96.00%
Diaz-Pernas et al. [35] | Multi-scale CNN | 97.00%
Sajja et al. [38] | Deep CNN (VGG16) | 96.70%
Tazin et al. [39] | MobileNetV2 | 92.00%
Proposed method | SN-SVM | 100%
Proposed method | SN-FT | 98%