Article

A Novel Hybrid Machine Learning-Based System Using Deep Learning Techniques and Meta-Heuristic Algorithms for Various Medical Datatypes Classification

by
Yezi Ali Kadhim
1,2,3,
Mehmet Serdar Guzel
4 and
Alok Mishra
5,6,*
1
College of Engineering, University of Baghdad, Jadriyah, Baghdad 10071, Iraq
2
Department of Modeling and Design of Engineering Systems (MODES), Atilim University, Ankara 06830, Turkey
3
Department of Electrical and Electronics Engineering, Atilim University, Incek, Ankara 06830, Turkey
4
Department of Computer Engineering, Ankara University, Yenimahalle, Ankara 06100, Turkey
5
Faculty of Engineering, Norwegian University of Science and Technology, 7034 Trondheim, Norway
6
Department of Software Engineering, Atilim University, Incek, Ankara 06830, Turkey
*
Author to whom correspondence should be addressed.
Diagnostics 2024, 14(14), 1469; https://doi.org/10.3390/diagnostics14141469
Submission received: 22 May 2024 / Revised: 27 June 2024 / Accepted: 2 July 2024 / Published: 9 July 2024
(This article belongs to the Special Issue Impact of Deep Learning in Biomedical Engineering)

Abstract
Medicine is one of the fields in which the advancement of computer science is making significant progress. Some diseases require an immediate diagnosis in order to improve patient outcomes. The use of computers in medicine improves precision and accelerates data processing and diagnosis. To categorize biological images, this research employed hybrid machine learning, a combination of various deep learning approaches with a meta-heuristic algorithm. Two different medical datasets were used, one covering magnetic resonance imaging (MRI) of brain tumors and the other chest X-rays (CXRs) of COVID-19 patients. These datasets were fed into a combined network in which deep learning techniques based on a convolutional neural network (CNN) or an autoencoder extract features, and a subsequent meta-heuristic step, the particle swarm optimization (PSO) algorithm, selects the optimal features. This combination seeks to reduce the dimensionality of the datasets while maintaining their original performance, an innovative approach that ensures highly accurate classification results across various medical datasets. Several classifiers were employed to predict the diseases. For the COVID-19 dataset, the highest accuracy, 99.76%, was obtained with the CNN-PSO-SVM combination, while for the brain tumor dataset, the highest accuracy, 99.51%, was achieved with the autoencoder-PSO-KNN combination.

1. Introduction

Medicine, the cornerstone of healthcare, is pivotal in improving and sustaining human health. It encompasses a vast array of disciplines, from preventive care to sophisticated treatment modalities, all of which are aimed at enhancing the quality of life. In recent times, the focus on medicine combined with computer science has intensified in order to save time and effort and delve deeper into understanding disease and diagnosing it.
This research proposes an automated approach that leverages advantageous features to reduce prediction errors and enhance diagnostic quality. The method utilizes a variety of machine learning and deep learning techniques to classify COVID-19 and brain tumors based on the analysis of MRI and CXR images. A brain tumor is an abnormal and unwanted growth of tissue cells in the brain that causes neurological complications in patients. Nowadays, as a result of environmental factors and human health behaviors, instances of these tumors are increasing rapidly [1]. Handling this situation requires a combination of a computer-aided diagnostic (CAD) system and a medical image processing method that generates high-quality images of the afflicted body part, typically human soft tissue. Magnetic resonance imaging is a brain imaging technique that provides significant information, allowing a physician or a CAD system to identify whether a patient has a tumor and, if a tumor is identified, to discriminate between its forms so that the patient can receive suitable early treatment. Unlike X-ray imaging, MRI reveals every essential detail without exposing the patient to radiation [2]. It is a versatile method, since the contrast between one tissue and another can be altered by changing the imaging protocol; images with high contrast can be produced, for example, by adjusting the radio frequency and gradient pulses. There are two types of brain tumors: benign and malignant [3]. Benign tumors are non-cancerous, whereas malignant tumors are cancerous and can develop as a result of cancer in any region of the body, not just the brain. The COVID-19 pandemic, on the other hand, began in China in the last few months of 2019 before affecting the entire world. The virus was officially named SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) by the World Health Organization (WHO) [4]. The virus spread quickly, infecting an enormous number of people, and the WHO declared it a “pandemic”. It is commonly known that the two main ways in which the virus spreads are through the air and by physical contact. The virus reportedly attacks the lungs directly, resulting in severe pneumonia [5]. Being an RNA virus, it is difficult and time-consuming to identify, and early identification is essential to reducing COVID-19’s effects [6,7]. With early detection, individuals infected with the virus have a better chance of surviving the potentially fatal situation and receiving timely medical attention. However, laboratory testing is expensive, time-consuming, and inconvenient, and its actual sensitivity is low [8,9]. Such a test is insufficient on its own; medical image processing methods such as chest X-rays (CXRs) are required to support it.
Kennedy and Eberhart developed the particle swarm optimization (PSO) technique in the mid-1990s. Drawing on a large quantity of integrated knowledge about the design space, the PSO technique distributes a set of randomly initialized solutions (the initial population) in the design space and searches for the best solution over several iterations (movements), from which all members of the group benefit. The PSO approach is based on how flocks of birds, schools of fish, and other animal groups use information sharing to adapt to their environment, find abundant food supplies, and avoid predators such as fishermen; this strategy confers an evolutionary advantage. References [10,11] describe the motion simulations that motivated this optimization approach and provide a detailed history of the PSO algorithm’s development. The genetic algorithm (GA) is also used to handle complicated optimization problems because it can cope with continuous and discrete variables, restricted nonlinear constraints, and objectives without requiring gradient information. This study uses a formal hypothesis testing procedure to examine the computational efficacy and efficiency of GA and PSO, with the goal of supporting the claim that, although PSO’s computational efficiency is superior to GA’s, it is just as successful at locating optimal overall solutions. The comparison first summarizes the PSO and GA versions employed, then compares the effectiveness of GA and PSO using three well-known criteria, and finally evaluates how well both techniques work in practice on two optimization problems; the corresponding results are reported later in this article.
The foraging activity of ants is another intriguing natural behavior, displaying a highly cognitive nature. Ants have a cunning way of finding food that can be exploited to reduce output error and discover the quickest path to a food source. In this study, the behavior of ants is first introduced, followed by a modified ant colony optimization approach for feature selection; in the second phase, the COVID-19 and brain tumor data are classified using deep learning and autoencoder-based techniques [12].
The primary objective of this study is to utilize PSO in conjunction with deep learning techniques to attain high accuracy in classifying COVID-19 or brain tumor datasets.
The aim of incorporating the PSO algorithm, inspired by the natural swarm behavior observed in fish and birds, is to select significant and effective features while eliminating redundant and irrelevant ones from complex datasets. This algorithm is integrated with deep learning techniques to extract and select optimal features, thereby improving model performance. In recent years, there has been a surge in deep learning-based studies focused on classifying brain tumors and COVID-19. For example, ref. [13] achieved an accuracy of 97.5% in MRI brain tumor classification using a CNN-based approach, simplifying the system’s complexity by employing a deeper architecture. To reduce feature dimensionality, Kang et al. [14] employed an ensemble of deep features, complemented by a support vector machine (SVM) with a radial basis function (RBF) kernel for accurate tumor classification. A hybrid feature extraction technique based on the regularized extreme learning machine was presented by Gumaei et al. [15]. They evaluated their suggested strategy using brain MRI scans, with min–max normalization employed for preprocessing and contrast enhancement; their experiments yielded an accuracy of 94.233%.
However, these studies have several drawbacks in terms of the complexity of implementation, parameter selection in preprocessing steps, accurately determining the coarse structures of deep neural networks (DNNs), and the difficulty of implementing complex DNN structures and training algorithms. Furthermore, in terms of data limitations, a common challenge is the lack of an adequate amount of reliable data, and these studies are typically limited to a single medical dataset for training the algorithms. This can lead to imbalances within the datasets used for multiclass classification. These gaps highlight the need for improved algorithms that can handle complex data. This research is motivated by the increasing complexity of diagnosing clinical big data in the context of a growing patient population, and PSO was selected for its robust search capabilities in identifying optimal features from the data.
The principal contributions of this study are as follows:
  • Development of interpretable models: We focus on developing models that not only provide accurate predictions but also offer insights into the features, extracting them from different medical image datasets using several deep learning techniques: autoencoders and pre-trained CNNs (namely, AlexNet, GoogLeNet, ResNet50, and DenseNet201). These are coupled with the PSO meta-heuristic algorithm in the next step, and in the final step the results are predicted by different classifiers (SVM, KNN, DT, etc.). Ultimately, we utilize the particle swarm optimization (PSO) method to enhance detection precision by selecting the most significant and effective features while eliminating redundant ones from the different datasets.
  • The results of this study were proven through the use of two different medical datasets, the first being for a brain tumor imaged with MRI, and the second being a completely different dataset for the lungs of COVID-19 patients that were imaged with CXR.
  • We demonstrate the novelty and superiority of this proposed feature selection combination over existing diagnostic baseline models.
  • We validate the effectiveness of the PSO algorithm in feature selection compared to genetic algorithms (GAs) and ant colony optimization (ACO) for various medical datasets, including MRI and X-ray images. This algorithm showed superiority over other heuristic algorithms in different datasets.
  • This proposed method will aid in improving understanding and interpretation by medical professionals.
The remainder of this paper is organized as follows: the related research is presented in Section 2; the datasets and their preprocessing, methodologies, deep learning techniques, and the proposed feature selector are described in Section 3; and the results are also presented, discussed, and analyzed in Section 4. Then, finally, the conclusion, limitations, and prospects for future work recommendations are given in Section 5.

2. Related Works

In [16], the PSO method was used for feature selection in the automated classification of brain images using wavelet energy and biogeography-based optimization. Different classifiers were used and tested; the best result was obtained using the PSO method. In [17], fusion-based feature extraction was used for COVID-19 diagnosis, and a deep learning method was used to classify the healthy and non-healthy images. Domingos Alves et al. [18] suggested using deep features with PSO-optimized XGBoost for the classification of COVID-19 patients based on chest X-ray images. A hybrid PSO and SVM classifier model is presented in [19] for brain tumor classification. For feature learning, a brand-new deep learning model termed MRSDAE is put forth in [20]; the approach works well for extracting characteristics from vibration signals, and the parameters and structure of the proposed method were developed simultaneously using PSO.
In [21], the authors used PSO for feature selection from COVID-19 data and applied voting classifier algorithms on CT images. In [22], a real-time application based on PSO, a one-dimensional CNN, and an SVM is used for the classification of medical data.
For feature extraction from MRI brain images, Kaplan et al. presented an approach based on the modified local binary pattern [23]. In their approach, the image is first normalized and smoothed using filtering, after which the two separate modified LBP methods—nLBP and LBP—are utilized. The nLBP method was shown to have a high accuracy of 94.56%. They employed machine learning techniques like ANN, Random Forest, K-NN, and decision trees for classification.
In [24], the authors propose the classification of liver and brain tumor diseases using the CNN, discrete wavelet transform, and LSTM. The dataset from Firat University, which includes 56 benign and 56 cancerous photos, is used in their research. The authors were able to accurately diagnose the liver tumor and brain tumor with 98.60% and 99.10% accuracy, respectively.
For MRI classification, Swati et al. [25] employed transfer learning and fine tuning. They applied the pre-trained CNN model and the novel transfer-learning-based fine-tuning scenario. Analyzing the 5-fold cross-validation number, they obtained 94.82% accuracy.
Brain tumor diagnosis in MRI images is conducted using a hybrid technique based on neural autoregressive distribution estimates and convolutional neural networks [26].
In [27], the authors used VGG16 and VGG19 for feature extraction. For the validation of the results, the BraTS datasets are used. The accuracy obtained from their method was 97.8%, 96.9%, and 92.5% for BraTs2015, BraTs2017, and BraTs2018, respectively.
In [28], the authors extracted features using the discrete wavelet transform (DWT) method. Pathak et al. [29] used the cost-sensitive top-2 smooth loss function to utilize and enhance the accuracy of the results.
The Bag of Words (BoW) technique was utilized by Cheng et al. [30] to extract characteristics from the images, and SVM was used to classify them. BoW was originally used to identify text sources [31,32]. This approach is ineffective for identifying the features of medical images, and in [30,33,34] the authors employed deep learning with the SVM classifier as their sole feature selector in order to choose the most useful features and produce high-quality results. To extract as many of the most useful features as possible and categorize the dataset using an SVM classifier, a modified deep CNN network was employed [35].
The authors in [36] employ the 2D discrete wavelet transform (DWT) and a 2D Gabor filter. The facial recognition systems employed in [37,38] also benefit from these feature extraction techniques. To detect and categorize images of brain cancers, Ghassemi et al. [39] coupled a Generative Adversarial Network (GAN) and a ConvNet (random split). GAN techniques have recently been applied to face recognition systems [40,41]. Human faces have much higher resolutions than tumor images, and they also contain more visible objects; ConvNets, however, can be combined with such techniques to achieve high accuracy.
Umut Özkaya et al. [42] achieved an accuracy score of 98.27% through the use of a CNN and data fusion; because data fusion produces strong features, this method can be recognized as having a higher performance than other methods. When the feature extraction approach applied to COVID-19 images is not paired with a feature selection method, as in studies using an ANN [43], a DNN [44], Random Forest [45], a tailored CNN [46], DenseNet [47], a DarkNet-19-based CNN [48], or other deep learning models [8], high accuracy cannot be assumed.
In [49], S. Asif et al. significantly improved Mpox detection accuracy by using a novel CGO-Ensemble framework, which integrated transfer-learning models with feature layers and residual blocks. For weight allocation, they also used the Chaos Game Optimization (CGO) algorithm on two widely recognized benchmark datasets: the Mpox Skin Lesion Dataset (MSLD) and the Mpox Skin Image Dataset (MSID). The accuracies were 100% for MSLD and 94.16% for MSID. In [50], S. Asif et al. developed the MO-WAE model, which uses particle swarm optimization for optimal weight distribution and combines DenseNet201, MobileNet, and DenseNet169 with extra layers for better classification. This model successfully detects Mpox with an impressive 97.78% accuracy.
In [51], S. Asif et al. proposed a novel deep-stacked ensemble model called “BMRI-NET”. This model detects brain tumors from MRI results and achieves an accuracy of 98.69% on a Figshare brain MRI dataset, containing three types of brain tumors (meningiomas, gliomas, and pituitary tumors) and consisting of 3064 images. In [52], W. Wang et al. introduced a self-tuning convolutional neural network (PSTCNN) guided by PSO. Used on a dataset for COVID-19 diagnosis, this method achieved an accuracy of 93.99%, sensitivity of 93.65%, and specificity of 94.32%. In [53], S. Punitha et al. introduced a technique utilizing an Artificial Bee Colony (ABC)-optimized ANN to classify the patients into two classes, either COVID-19 or non-COVID-19. The ABC algorithm is used to optimize the ANN’s input features, initial weights, and hidden nodes. This study has an acceptable accuracy of 92.37%.
In [54], S. K. Rajeev et al. proposed a method that employed an Improved Gabor Wavelet Transform (IGWT) for feature extraction. The optimal features were selected using the Black Widow Adaptive Red Deer optimization (BWARD) algorithm, and an Elman-BiLSTM network was used to classify brain tumors from MRI images. This method achieved an accuracy of 98.4%. In [55], S. Rajakumar et al. developed a deep learning framework that was improved by a new political exponential Deer Hunting Optimization Algorithm (DHOA) and they used classifiers from the Pyramid Scene Parsing Network (PSPNet), Shepard convolutional neural network (ShCNN), and Deep CNN. This approach segments and classifies MRI images of brain tumors, achieving a 92.9% tumor classification accuracy.
In [56], Geetha et al. developed a deep learning model for the classification of brain tumors using the Sine Cosine Archimedes Optimization Algorithm (SCAOA). MRI brain images were preprocessed and segmented, and after that, the features were extracted. This model achieved a sensitivity of 92.3%, a specificity of 92.0%, and an accuracy of 93.0%.

3. Material and Methods

In this study, two significant types of medical datasets were implemented: MRI scans for brain tumors and X-rays for COVID-19. The images from both datasets served as inputs for the combined detection system.
Preprocessing was applied to these datasets and introduced to our approaches. Our methods involved two combinations of deep learning approaches with meta-heuristic algorithms, specifically pre-trained CNN with PSO and autoencoders with PSO. The results of the features were introduced to several classifiers in order to evaluate our approaches.

3.1. Datasets

3.1.1. COVID-19 Dataset

COVID-19 (coronavirus disease 2019) is a severe acute respiratory illness caused by the coronavirus strain SARS-CoV-2. Before spreading globally, the first cases were found in Wuhan, Hubei, China, in late December 2019 [57]. The WHO classified the illness as a pandemic on 11 March 2020.
The dataset contains 6432 X-ray images in total [58]; image sizes vary and are not fixed, and all images have been modified. The dataset is organized so that 80% of the images are used for training and the remainder for testing, as represented in Figure 1. It consists of three classes: COVID-19, pneumonia, and normal. The images are divided into 460 COVID-19, 3418 pneumonia, and 1266 normal images for the training and validation of the model, while 116 COVID-19, 855 pneumonia, and 317 normal samples are used to test the model. A sample from each class in the COVID-19 dataset is shown in Figure 2.

3.1.2. Brain Tumor Dataset

The proposed method was examined and tested using data on brain tumors gathered between 2005 and 2010 from Tianjin Medical University General Hospital and Nanfang Hospital in Guangzhou, Guangdong, China [59]. Three distinct types of brain tumors are represented in this brain tumor dataset, which consists of 3064 T1-weighted contrast-enhanced images from 233 patients: meningioma (708 slices), glioma (1426 slices), and pituitary tumor (930 slices). We randomly divided these images into two groups, using 80% for training and validation and 20% for testing the model. This means that the pituitary sample is divided into 744 slices to train the model and 186 slices for testing; the glioma sample is divided into 1141 slices for training, with the remaining 285 slices used for testing; and the meningioma sample is divided into 566 slices for training the model and 142 slices for testing. This split is represented in Figure 3. The images are 512 × 512 pixels in size and are available as .png files. Figure 4 presents sample images from the brain tumor dataset, showcasing examples from each of the three tumor classes.
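For illustration, a stratified 80/20 split such as the one described above can be reproduced with a short script like the one below; this is only a sketch of the procedure, with random placeholder arrays standing in for the loaded MRI slices, and it is not the code used in this study.

```python
# Minimal sketch of the 80/20 stratified split described above (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical placeholders: 3064 flattened slices with labels
# 0 = meningioma, 1 = glioma, 2 = pituitary.
rng = np.random.default_rng(0)
images = rng.random((3064, 64 * 64)).astype(np.float32)
labels = np.repeat([0, 1, 2], [708, 1426, 930])

X_train, X_test, y_train, y_test = train_test_split(
    images, labels,
    test_size=0.20,       # 20% held out for testing
    stratify=labels,      # preserves per-class proportions (e.g., 186 pituitary test slices)
    random_state=0)
```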

3.1.3. Dataset Preprocessing

Each brain tumor image is provided at a resolution of 512 × 512 pixels, while COVID-19 images vary in size. It is important to prepare these image datasets beforehand to improve the quality of features, which helps to enhance predictions for both CXR and MRI images. The processed data are then used for feature extraction using either an autoencoder with PSO or a CNN with a PSO model, as shown in Figure 5.
Two different preprocessing pipelines are used, one for each model. For the autoencoder with PSO, the original RGB images are converted to grayscale and resized to 64 × 64 pixels; the final preprocessing step converts these images (matrices) into arrays (vectors) before feeding them into the model. For the CNN with PSO, preprocessing involves resizing the images to 227 × 227 pixels for AlexNet and to 224 × 224 pixels for the other pre-trained CNN models used.
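A minimal sketch of these two preprocessing pipelines is given below, assuming Pillow and NumPy; the function names and file paths are hypothetical, and the actual experiments in this study were implemented in MATLAB 2021b.

```python
# Illustrative preprocessing sketch (not the authors' exact code).
import numpy as np
from PIL import Image

def preprocess_for_autoencoder(path):
    """Grayscale, resize to 64x64, and flatten to a 4096-element vector."""
    img = Image.open(path).convert("L")          # RGB -> grayscale
    img = img.resize((64, 64))
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

def preprocess_for_cnn(path, size=227):
    """Keep 3 channels and resize (227x227 for AlexNet, 224x224 for the other CNNs)."""
    img = Image.open(path).convert("RGB")
    img = img.resize((size, size))
    return np.asarray(img, dtype=np.float32) / 255.0
```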

3.2. Methodology

In the training stage of the first combined model, the modified and augmented training images are fed into the autoencoder network to perform feature extraction. The PSO feature selection algorithm then enhances the accuracy of the model by choosing the best features. After obtaining the best features, several learnable classifiers are applied, such as discriminant, ensemble, Naive Bayes, support vector machine, decision tree, and k-nearest neighbors classifiers. These classifiers categorize the data based on the labels of the input type and learn from the characteristics selected via the PSO method. To demonstrate the quality of this study for disease diagnosis, the PSO feature selection algorithm is also applied with another deep learning technique in a second combination that deals specifically with image processing: several pre-trained convolutional neural networks are applied to train the model and extract features from the input dataset, and PSO is again used to select features. The selected features are then introduced into the learnable classifiers for the detection of our problems. To validate the results, several evaluation parameters are calculated, such as accuracy, sensitivity, and specificity. The classifiers are trained in a supervised fashion to learn the weights of each label and tested to calculate the learning rate. Figure 6 shows the overall framework of the proposed system.
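As an illustration of the final classification stage, the brief sketch below trains several of the learnable classifiers listed above on a PSO-selected feature subset; the feature matrix, labels, and selection mask are random placeholders standing in for the outputs of the extraction and selection steps, not quantities from this study.

```python
# Sketch of the final classification stage: several learnable classifiers are
# trained on a PSO-selected feature subset. All data below are hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.random((500, 2048)), rng.integers(0, 3, 500)
X_test,  y_test  = rng.random((200, 2048)), rng.integers(0, 3, 200)
mask = rng.random(2048) > 0.5                      # stand-in binary PSO selection mask

classifiers = {
    "SVM": SVC(), "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(), "NB": GaussianNB(),
    "Discriminant": LinearDiscriminantAnalysis(),
    "Ensemble": RandomForestClassifier(),
}
for name, clf in classifiers.items():
    clf.fit(X_train[:, mask], y_train)             # train on selected features only
    print(name, clf.score(X_test[:, mask], y_test))
```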

3.2.1. Feature Extraction with CNN

A pre-trained convolutional neural network model is used in the initial stage to extract significant features from the images. One of the pre-trained architectures used is AlexNet, a popular model employed in many studies on image classification. This step aims to extract informative, high-level information from the input images. CNNs are inspired by the human visual system and are made up of input, convolutional, pooling, and fully connected layers, as shown in Figure 7, which represents a simple CNN structure containing the core operations of a convolutional neural network model. Because CNNs use shared weights rather than fully connected weights to reduce computation, they are now frequently used to solve a variety of computer vision problems. The following feature dimensions are obtained from the different models (a brief extraction sketch follows the list):
  • AlexNet: Input size is 227 × 227 × 3, and the number of features is 4096.
  • GoogLeNet: Input size is 224 × 224 × 3, and the number of features is 1000.
  • ResNet50: Input size is 224 × 224 × 3, and the number of features is 2048.
  • DenseNet201: Input size is 224 × 224 × 3, and the number of features is 1920.
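The following sketch illustrates deep-feature extraction with a pre-trained ResNet-50 (2048 features per image) using the torchvision library; it is an approximation of the idea under stated assumptions (the file path is hypothetical and input normalization is omitted), not a reproduction of the MATLAB implementation used in this study.

```python
# Hedged sketch of deep-feature extraction with a pre-trained ResNet-50.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()        # drop the 1000-class head -> 2048-d features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),    # 227x227 would be used for AlexNet
    transforms.ToTensor(),
])

def extract_features(path):
    """Return a 2048-element feature vector for one image (path is hypothetical)."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(x).squeeze(0).numpy()
```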

3.2.2. Feature Extraction with Autoencoders

An autoencoder is a neural network that has been trained to replicate its input at its output. Deep neural networks can be trained using autoencoders. Insofar as no labeled data are required, the process of training an autoencoder is unsupervised. The optimization of a cost function continues to be the foundation of the training process. The discrepancy between the input x and the output reconstruction \hat{x} is measured by the cost function.
An encoder plus a decoder comprises an autoencoder. Although the encoder and decoder can each have numerous levels, let us assume for the sake of simplicity that they each only have one layer.
When an autoencoder receives a vector x \in \mathbb{R}^{D_x} as input, it maps it onto another vector z \in \mathbb{R}^{D^{(1)}}, as seen below:
z = h^{(1)}\left(W^{(1)} x + b^{(1)}\right)    (1)
where the superscript (1) denotes the first layer, h^{(1)}: \mathbb{R}^{D^{(1)}} \rightarrow \mathbb{R}^{D^{(1)}} is the encoder's transfer function, W^{(1)} \in \mathbb{R}^{D^{(1)} \times D_x} is its weight matrix, and b^{(1)} \in \mathbb{R}^{D^{(1)}} is its bias vector. The decoder then approximates the original input vector x from the encoded representation z as follows:
\hat{x} = h^{(2)}\left(W^{(2)} z + b^{(2)}\right)    (2)
where the superscript (2) indicates the second layer, h^{(2)}: \mathbb{R}^{D_x} \rightarrow \mathbb{R}^{D_x} is the decoder's transfer function, W^{(2)} \in \mathbb{R}^{D_x \times D^{(1)}} is its weight matrix, and b^{(2)} \in \mathbb{R}^{D_x} is its bias vector.
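A minimal NumPy sketch of Equations (1) and (2), with a single-layer encoder and decoder and randomly initialized weights purely for illustration, is shown below.

```python
# Minimal sketch of Equations (1) and (2): a one-layer encoder/decoder.
# Weight shapes follow the definitions above; values here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
D_x, D_1 = 4096, 200                       # input and hidden dimensions (illustrative)
W1, b1 = rng.normal(size=(D_1, D_x)) * 0.01, np.zeros(D_1)
W2, b2 = rng.normal(size=(D_x, D_1)) * 0.01, np.zeros(D_x)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def encode(x):                             # z = h1(W1 x + b1), Eq. (1)
    return sigmoid(W1 @ x + b1)

def decode(z):                             # x_hat = h2(W2 z + b2), Eq. (2)
    return sigmoid(W2 @ z + b2)

x = rng.random(D_x)                        # one flattened 64x64 image
x_hat = decode(encode(x))
print(x_hat.shape)                         # (4096,)
```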

Sparse Autoencoders

By including a regularizer in the cost function, it is possible to promote the sparsity of an autoencoder [60,61]. This regularizer is based on a neuron's average output activation value. The average output activation of neuron i is calculated as follows:
\hat{\rho}_i = \frac{1}{n}\sum_{j=1}^{n} z_i^{(1)}(x_j) = \frac{1}{n}\sum_{j=1}^{n} h\left(w_i^{(1)T} x_j + b_i^{(1)}\right)    (3)
where n is the total number of training samples, x_j is the jth training sample, b_i^{(1)} is the ith entry of the bias vector b^{(1)}, and w_i^{(1)T} is the ith row of the weight matrix W^{(1)}. A neuron is said to be "firing" if its output activation value is high. A low average output activation value indicates that the hidden-layer neuron is activated by only a few of the training examples. By adding a term to the cost function that keeps the values of \hat{\rho}_i low, the autoencoder is compelled to learn a representation in which each hidden-layer neuron fires for only a small number of training samples. In other words, each neuron carves out a niche for itself by responding to a trait that appears in only a small fraction of the training examples.

Sparsity Regularization

The sparsity regularizer attempts to constrain the sparsity of the hidden layer's output. Sparsity can be promoted by using a regularization term that takes a large value when a neuron's average activation value \hat{\rho}_i and its target value \rho are not similar [61,62]. One such sparsity regularization term is the Kullback–Leibler divergence:
\Omega_{sparsity} = \sum_{i=1}^{D^{(1)}} KL\left(\rho \,\|\, \hat{\rho}_i\right) = \sum_{i=1}^{D^{(1)}} \left[ \rho \log\frac{\rho}{\hat{\rho}_i} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_i} \right]    (4)
The Kullback–Leibler divergence is a function that assesses how different two distributions are. In this instance, it takes the value zero when \rho and \hat{\rho}_i are equal and increases as they diverge. To minimize the cost function, this term must be small; as a result, \rho and \hat{\rho}_i must be relatively close to one another. When training an autoencoder, the desired average activation value can be specified using the SparsityProportion name–value pair argument.
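The sketch below computes the per-neuron average activations of Equation (3) and the Kullback–Leibler sparsity penalty of Equation (4) for a hypothetical matrix of hidden activations; it is illustrative only.

```python
# Sketch of Equations (3) and (4): average hidden activations and the
# Kullback–Leibler sparsity penalty. Z is a hypothetical matrix of hidden
# activations for n training samples (rows) and D1 hidden neurons (columns).
import numpy as np

def kl_sparsity(Z, rho=0.05):
    """Omega_sparsity for a target average activation rho."""
    rho_hat = Z.mean(axis=0)                                # Eq. (3): per-neuron averages
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)              # numerical safety
    kl = rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
    return kl.sum()                                         # Eq. (4)

Z = np.random.default_rng(0).random((100, 200)) * 0.1       # mostly small activations
print(kl_sparsity(Z))
```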

L2 Regularization

When training a sparse autoencoder, the sparsity regularizer could be driven to small values simply by increasing the weights w^{(l)} and decreasing the values of z^{(1)} [60,61]. This is prevented by adding a regularization term on the weights to the cost function, which penalizes large weights by adding their squared values to the cost. The so-called L2 regularization term is defined as follows:
\Omega_{weights} = \frac{1}{2}\sum_{l}^{L}\sum_{j}^{n}\sum_{i}^{k}\left(w_{ji}^{(l)}\right)^2    (5)
where L is the number of hidden layers, n is the number of observations, and k is the number of variables in the training data.
There are two options for the transfer function: logistic sigmoid function and positive saturating linear transfer function. These transfer functions are illustrated in Equations (6) and (7), respectively.
f(x) = \frac{1}{1 + e^{-x}}    (6)
f(x) = \begin{cases} 0, & \text{if } x \le 0 \\ x, & \text{if } 0 < x < 1 \\ 1, & \text{if } x \ge 1 \end{cases}    (7)
For the encoder transfer function, we used the sigmoid function. This function’s curve is shown in Figure 8.
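Both transfer functions can be written in a few lines, as in the sketch below; the printed values are only for illustration.

```python
# Sketch of the two transfer functions in Equations (6) and (7):
# the logistic sigmoid and the positive saturating linear ("satlin") function.
import numpy as np

def logistic_sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def satlin(x):
    # 0 for x <= 0, x for 0 < x < 1, 1 for x >= 1
    return np.clip(x, 0.0, 1.0)

x = np.linspace(-3, 3, 7)
print(logistic_sigmoid(x))
print(satlin(x))
```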

Deep Sparse Autoencoders

The cost function used for training a sparse autoencoder, specified as the comma-separated pair consisting of the loss function and the mean squared error with sparsity, is the adjusted mean squared error function [60]:
E = \underbrace{\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\left(x_{kn} - \hat{x}_{kn}\right)^2}_{\text{mean squared error}} + \lambda \times \underbrace{\Omega_{weights}}_{\text{L2 regularization}} + \beta \times \underbrace{\Omega_{sparsity}}_{\text{sparsity regularization}}    (8)
where \lambda is the coefficient for the L2 regularization term and \beta is the coefficient for the sparsity regularization term. While training an autoencoder, the values of \lambda and \beta can be provided using the L2 weight regularization and sparsity regularization name–value pair arguments, respectively.
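The adjusted cost of Equation (8) can be assembled from the terms defined above, as in the following sketch; the reconstructions, weight matrices, and sparsity value are random stand-ins rather than quantities from a trained autoencoder.

```python
# Sketch of the adjusted cost in Equation (8): mean squared reconstruction
# error plus L2 weight regularization (lambda) and sparsity regularization (beta).
import numpy as np

def sparse_ae_cost(X, X_hat, weight_mats, omega_sparsity, lam=0.001, beta=4.0):
    """E = mean squared error + lam * Omega_weights + beta * Omega_sparsity."""
    mse = np.sum((X - X_hat) ** 2) / X.shape[0]                       # (1/N) sum_n sum_k
    omega_weights = 0.5 * sum(np.sum(W ** 2) for W in weight_mats)    # Eq. (5)
    return mse + lam * omega_weights + beta * omega_sparsity

rng = np.random.default_rng(0)
X, X_hat = rng.random((10, 4096)), rng.random((10, 4096))             # inputs and reconstructions
W1, W2 = rng.normal(size=(200, 4096)), rng.normal(size=(4096, 200))   # encoder/decoder weights
print(sparse_ae_cost(X, X_hat, [W1, W2], omega_sparsity=3.2))
```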

3.2.3. Particle Swarm Optimization Algorithm for Feature Selection

PSO Explanation and Literature

The hypothesis of employing a multitude of simple interacting agents (particles, populations) to construct seemingly intricate collective behaviors was originally adopted to render natural phenomena in computer animation. As part of his work at Lucasfilm, Reeves, one of the field's pioneers, used particle systems in which many individual components combine to generate a fuzzy object. A particle system generates a succession of moving points, normally started at predetermined positions; color, texture, a finite lifespan, and other attributes were included in the graphic simulation, and multiple random factors were used to perturb the velocity vectors. Each particle then moved to its next position by following its velocity vector, leaving its previous location behind, and a forcing term was applied to the trajectory to make the motion appear natural. These systems were studied in depth to produce social effects and realistic interactions in graphical settings.
For some animations (such as a flock of birds), it was important to portray group behaviors with more dynamism than mere particles provide. It was conceivable to script every member's activity in a file, but doing so was quite laborious, and it was also challenging to obtain behavior that looked natural. Reynolds built his higher-level flocking algorithm on this kind of particle system: he took the particle's prior motion into account and added additional elements, including inclinations, positional awareness, and information exchange. The extra actions of the group members followed the fundamental principles of group membership, such as avoiding collisions, matching one's velocity vector to that of the group as a whole, and seeking a better position than the others. While enhancing member intelligence, these fundamental models did away with the requirement for scripted routes. However, giving individuals more liberty can lead to issues such as incompatibility; to tackle this, Reynolds introduced specific moves based on an order of precedence, although such choices can be arbitrary and illogical. Consider a straightforward implementation in which every particle is aware of the motions of the entire population: as the particle count rises, the problem may become extremely challenging or even intractable. Reynolds suggested the neighborhood system as a solution, which is also employed in nature because of members' limited visibility, although later studies suggest that this strategy alters the swarm's collective behavior.
By including social behaviors, Kennedy and Eberhart aimed to extend Reynolds' model. More significantly, they replaced the simple goal of finding a nest in the Heppner and Grenander flocking algorithm with the more practical one of locating food. This inspired researchers to apply the approach to challenging mathematical problems, in which the objective function of the problem is viewed as the fitness function of the population. Because these methods are more universal than the bird model, they are now described more abstractly, and a more effective and straightforward model was created by eliminating redundant and inefficient variables [62,63,64,65,66].

PSO for Feature Selection

Obtaining the optimum solution for the entire swarm and for each individual particle is the goal of PSO, which updates particle positions and velocities over time. The positions and velocities are updated iteratively with equations based on the velocity equation below, in which uniform random variables between 0 and 1 provide the random variation. Here, w is the inertia factor, \alpha is the self-confidence learning constant, \beta is the swarm-influence learning constant, r_1 and r_2 are random values between zero and one, and v_{i,k} is the velocity of particle i at iteration k. PB is the best position that particle i has ever achieved, GB is the best position that any member of the population has ever achieved, and x_{i,k} is the position of particle i at iteration k.
The resulting update for the next particle velocity and position is expressed as follows:
v_{i,k+1} = w \times v_{i,k} + \alpha \times r_1 \times \left(PB - x_{i,k}\right) + \beta \times r_2 \times \left(GB - x_{i,k}\right)    (9)
x_{i,k+1} = x_{i,k} + v_{i,k+1}    (10)
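A hedged sketch of PSO-based feature selection following Equations (9) and (10) is given below. Each particle is a binary mask over the features; velocities are updated with the inertia, self-confidence, and swarm-influence terms, and a sigmoid of the velocity gives the probability of keeping each feature, a common binary-PSO variant assumed here for illustration rather than taken from this study. The fitness is a cross-validated KNN accuracy on a small stand-in dataset, and all constants are illustrative.

```python
# Illustrative binary PSO for feature selection (Equations (9) and (10)).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)           # stand-in dataset, not the paper's
n_particles, n_feats, n_iter = 10, X.shape[1], 15
w, alpha, beta = 0.7, 1.5, 1.5                        # inertia and learning constants

def fitness(mask):
    """Cross-validated accuracy of a KNN classifier on the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

pos = rng.random((n_particles, n_feats)) > 0.5        # binary positions (feature masks)
vel = rng.normal(scale=0.1, size=(n_particles, n_feats))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1 = rng.random((n_particles, n_feats))
    r2 = rng.random((n_particles, n_feats))
    vel = (w * vel
           + alpha * r1 * (pbest.astype(float) - pos.astype(float))    # PB - x
           + beta  * r2 * (gbest.astype(float) - pos.astype(float)))   # GB - x
    pos = rng.random((n_particles, n_feats)) < 1 / (1 + np.exp(-vel))  # sigmoid -> binary
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", gbest.sum(), "best CV accuracy:", pbest_fit.max())
```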
Figure 9 illustrates how particle position and velocity are updated, where p_i is the best local solution and p_g is the best global solution. Particle swarms converge to solutions of similar quality significantly more quickly and effectively than genetic algorithms, according to a considerable body of research by Hassan et al. [67].
Figure 10 depicts the algorithm’s flowchart, which demonstrates how the algorithm works. Moreover, the objective function results in convergence, as illustrated in Figure 11.

4. Results and Discussion

The approach proposed in this study was trained on an Intel(R) Core(TM) i7-6500U CPU at 2.50 GHz/2.60 GHz using MATLAB 2021b. A comprehensive study was conducted to determine the best combined technique. In the first stage, an autoencoder method was applied to two common medical datasets, featuring COVID-19 and brain tumor patients. We utilized the PSO method to separate out and choose the key features from the input training dataset. In the second stage, we applied a pre-trained CNN with the PSO algorithm to classify both of the datasets used in the first model. The performance of the models, derived from the confusion matrix, was measured using several key criteria, including F1-score, recall, accuracy, precision, and others. For multiclass classification, overall accuracy, the per-class detection rate, and the per-class FP rate are used. Our basic terms are True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN): True Positive (TP) denotes positive instances that are classified as positive, True Negative (TN) denotes negative instances that are classified as negative, False Positive (FP) denotes negative instances that are classified as positive, and False Negative (FN) denotes positive instances that are classified as negative.
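For reference, the sketch below computes these metrics per class from a confusion matrix using a one-vs-rest reduction; the labels are illustrative and the exact averaging used in the result tables may differ.

```python
# Sketch of the evaluation metrics derived from the confusion matrix
# (accuracy, sensitivity/TPR, specificity/TNR, precision/PPV, F1-score, MR).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])      # illustrative labels only
y_pred = np.array([0, 1, 1, 1, 2, 2, 1, 1, 0, 2])
cm = confusion_matrix(y_true, y_pred)

for c in range(cm.shape[0]):
    TP = cm[c, c]
    FN = cm[c, :].sum() - TP
    FP = cm[:, c].sum() - TP
    TN = cm.sum() - TP - FN - FP
    acc = (TP + TN) / cm.sum()
    tpr = TP / (TP + FN) if TP + FN else 0.0            # sensitivity / recall
    tnr = TN / (TN + FP) if TN + FP else 0.0            # specificity
    ppv = TP / (TP + FP) if TP + FP else 0.0            # precision
    f1 = 2 * ppv * tpr / (ppv + tpr) if ppv + tpr else 0.0
    print(f"class {c}: ACC={acc:.2f} TPR={tpr:.2f} TNR={tnr:.2f} "
          f"PPV={ppv:.2f} F1={f1:.2f} MR={1 - acc:.2f}")
```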

4.1. Autoencoder with PSO for COVID-19 Dataset

In this study, different scenarios are implemented and evaluated to assess the performance of this model and to perform a comprehensive comparison of the combinations. In the first stage, the autoencoder was applied to the COVID-19 dataset with six different types of classifiers. Table 1 shows the results for the autoencoder with the PSO algorithm on the COVID-19 dataset.
Accuracy is the most important parameter for evaluating this classification model, and it depends on the true values of the classified test images. The SVM classifier obtained the highest accuracy, 98.83%, corresponding to a misclassification rate of 1.17%.

4.2. Autoencoder with PSO for Brain Tumor Dataset

This model also proved successful in classifying medical images when applied to the second well-known dataset, the brain tumor dataset, which likewise consists of three classes (meningioma, glioma, and pituitary brain tumors). Table 2 shows the results for the autoencoder with the PSO algorithm on the brain tumor dataset.
The autoencoder training model, with the aid of the PSO algorithm to select the best features, proved accurate: good-quality classification of the brain tumor classes was obtained, and the KNN classifier reached 99.51% accuracy.

4.3. Pre-Trained CNN with PSO for COVID-19 Dataset

For the second scenario, several pre-trained CNNs (AlexNet, GoogLeNet, ResNet50, and DenseNet201) were applied. A CNN is a form of artificial neural network used in image processing and detection that was created primarily to process pixel input; it uses a multilayer perceptron-like architecture designed for modest processing demands. The pre-trained CNNs were also combined with PSO in order to verify the power of the feature selection method used in this study, and six types of classifiers were used to evaluate the classification model. Table 3 presents the results of the four pre-trained CNNs with PSO on the COVID-19 dataset.
As stated in Table 3, the ResNet-50 pre-trained network outperformed the other three pre-trained CNNs when evaluated with the PSO feature selection algorithm. For the COVID-19 dataset, the SVM classifier produced an excellent result, with 99.76% classification accuracy.

4.4. Pre-Trained CNN with PSO for Brain Tumor Dataset

The brain tumor dataset was used in this evaluation of the pre-trained CNNs with PSO. Table 4 shows the diagnosis rates for brain tumors with this model.
When the pre-trained ResNet-50 is applied, the discriminant classifier obtains a respectable 98.85% accuracy, as shown in Table 4.

4.5. Comparison Tables

Table 5 shows a comparison of previous studies with the proposed method for medical dataset diagnosis.
In [19], the SVM classifier was used to classify the MRI brain tumors after a machine learning model was built using the PSO approach for feature selection.
For the two classes (benign and malignant), the hybrid model of PSO and SVM obtained 95.23%, an acceptable result compared to the SVM alone, which obtained only 86.82%; this indicates that feature selection impacts the classification process [73] and that the PSO algorithm was successful in the hybrid model. There is no feature extraction method in that hybrid model, which is why our proposed method was able to obtain a higher accuracy rate even though it was tested on a three-class dataset.
The accuracies of the approaches utilizing the GLCM [68,69] are 82.00% and 96.50%, respectively. The gray-level co-occurrence matrix is helpful for textured images such as fingerprint and palmprint images because it can provide accurate findings for this type of dataset [74]. Because textural features alone are not always discriminative in medical images, the method employed in [68], which used only the GLCM, could not produce high-quality results. However, when the authors combined the GLCM with the pre-trained VGG-16 CNN [69], the results improved, which indicates that employing the GLCM alone will not improve a system's performance. The authors in [75] utilized VGG-19 with a Softmax classifier and achieved 94.58% accuracy. This again suggests that the high-accuracy findings in [69] were obtained because the GLCM and a VGG network were combined.
However, in [17], more combinations were introduced in order to achieve higher accuracy than the above studies. That study introduced FM-CNN, a fusion-based feature extraction model that uses a convolutional neural network (CNN) to perform automated COVID-19 diagnosis. Preprocessing, feature extraction, and classification are the three key stages of the FM-CNN model. Initially, noise is removed from the input chest X-ray (CXR) images using Wiener Filtering (WF)-based preprocessing. The preprocessed images are then passed to a fusion-based feature extraction model that uses local binary patterns (LBP), the gray-level co-occurrence matrix (GLCM), and the gray-level run-length matrix (GLRM). Finally, the particle swarm optimization (PSO) algorithm is used to select the best subset of features. For the three classes of the dataset used (COVID-19, pneumonia, and normal), the performance of this model was 98.06%, which is lower than that of the proposed method, 99.76%. As mentioned above, the GLCM is effective in capturing the texture of images [74]; the GLRM is a texture representation model that extracts the spatial features of each pixel in relation to high-order statistics [76]; and LBP is an efficient texture feature extraction method that is very popular for face detection and pattern recognition [77]. Here, PSO proved its efficacy by selecting features and enhancing the accuracy of the diagnosis of the medical dataset; nevertheless, this model achieved a lower performance than the proposed method.
In [78], the authors performed texture feature extraction in addition to PCA to identify the best features. However, the best features of medical data cannot be found via PCA feature selection [79]. Additionally, none of the other approaches listed in Table 5 apply combined methods for feature extraction and feature selection, and the outcomes cannot be improved using ineffectual features. The use of comparable approaches for feature extraction and selection on the same datasets that we utilized provides the basis for this related-work comparison.
Sachdeva et al. [78] applied PCA to the intensity and texture features they recommended. This approach to PCA finds the Eigenvalues of the features, so the features associated with high Eigenvalues are the ones most likely to be located in the image. However, in images of brain tumors, some regions of the image are poorly represented by high Eigenvalues, leading to errors and a less accurate result.
In [50,70,80], the features from images of brain tumors and COVID-19 are extracted using capsule networks (CapsNets). This method does not achieve high accuracy on two-dimensional signals such as those used in medical imaging; its performance on 1D signals, such as those used recently in voice recognition, is superior to that on 2D signals [81].
A deep convolutional autoencoder alone cannot produce satisfactory results [71,72]: the autoencoder also extracts many unimportant features from the images, so a selection step is needed to boost the useful ones and choose the best subset.
For the proposed method, the highest accuracy of 99.76% was achieved by the model that combined the CNN ResNet50 for feature extraction with the PSO algorithm for feature selection, with classification performed by the SVM learnable classifier on the best features obtained from this model. Like other heuristic optimization techniques, PSO is a derivative-free methodology and is one of the most effective ways of solving global optimization problems. Compared to other heuristic optimization methods, PSO has a straightforward concept and coding implementation, and it is less sensitive to the type of objective function than conventional mathematical techniques and other heuristic techniques [82]. The PSO method also has some limitations [83]: it is unable to solve problems involving non-coordinated systems, such as the solution of the energy field and the moving rules of the particles in the energy field, and it commonly suffers from partial optimism, which reduces the precision with which it regulates its speed and direction. The proposed method has been compared with another meta-heuristic method, ant colony optimization (ACO), in terms of the time and accuracy of the models, as shown in Table 6.
Table 6 shows that the combination of the deep learning method with the PSO feature selection algorithm consumed much more time than the same method with the ACO algorithm, since the ACO is inspired by the foraging behavior of ant colonies; this behavior rests on the ants' indirect communication, which allows them to find quick routes between their nest and food sources [83].
Both the ACO and PSO algorithms can act as data clustering algorithms, since they employ swarm dynamics. The ACO, however, works best when applied to problems with clear sources and destinations, whereas PSO can handle numerous objectives, dynamic optimization, and constraints all at once. PSO is more appropriate for problems that call for ambiguous solutions, while ACO is more appropriate for problems that call for clear-cut answers [83].
Because it avoids becoming stuck in local minima, ant colony optimization surpasses the genetic algorithm (GA) in speed of reaching the global minimum point. ACO aims to find the best answers to numerous optimization problems by replicating the cognitive behavior of ants. Due to its benefits, which include ease of implementation, a limited number of parameters, adaptability, and so on, it has attracted significant interest on a global scale [84]. Ant colony optimization is straightforward, adaptable, reliable, scalable, and self-organizing; compared to genetic algorithms (GAs), it has fewer control parameters, individuals are capable of carrying out several tasks at once, and swarm intelligence (SI) uses less memory. These techniques also have an advantage over simulated annealing and genetic algorithms when the graph may change dynamically, since the ant colony algorithm can run continuously and adapt to changes in real time [84].
For this reason, the performance of the deep learning method can be much more accurate in classifying the medical dataset when it is combined with meta-heuristic algorithms for feature selection.

5. Conclusions

In this study, five models were employed on various medical datasets, each comprising two stages: a deep learning technique and a meta-heuristic algorithm. The models, totaling ten combinations, were evaluated with six learnable classifiers for optimal detection accuracy. The first stage involved feature extraction using either an autoencoder or pre-trained CNNs (AlexNet, GoogLeNet, ResNet50, or DenseNet201). The second stage utilized a meta-heuristic algorithm, either PSO, ACO, or GA. PSO showed superior performance, as shown in Table 6, in terms of enhancing feature selection and improving accuracy by choosing the most salient visual attributes to limit the volume of data that needed to be processed across the CXR and MRI datasets. Finally, the selected features were classified in the third stage using learnable classifiers (decision tree, SVM, KNN, ensemble, Naive Bayes, and discriminant classifiers) to process the acquired features and assess the model's correctness.
This study achieved a satisfying classification accuracy using these combined models on CXR and MRI images of COVID-19 and brain tumor patients, respectively. The best model for the CXR dataset was ResNet-50 combined with PSO and SVM, achieving 99.76% accuracy. For the MRI dataset, the highest accuracy of 99.51% was obtained using the combination of an autoencoder, PSO, and KNN.
To lower the misclassification error rate, the PSO meta-heuristic method is used to look for the most crucial features in the accessible feature set.
The effectiveness of the suggested approach is assessed using two different medical datasets with different characteristics, the CXR and MRI image datasets. The CXR data are used to determine the patient's status as COVID-19, pneumonia, or normal, while the MRI images are used to determine the type of brain tumor present (meningioma, glioma, or pituitary). The accuracy of the entire system is indeed significantly impacted by the removal of weak, redundant, and noisy features. When compared to the other state-of-the-art techniques listed in Table 5, the suggested system has the highest accuracy. The main goal of the proposed approach is to assist the medical field in performing earlier detection of a patient's status.

6. Limitations

The limitations of this work include the following concerns:
  • Its reliance on supervised learning with labeled data limits automation and may prevent deep learning's potential from being fully exploited.
  • The constraint of a small dataset in the medical field restricts the generalizability and robustness of an approach.
  • The need for human participation in the diagnostic prediction process hinders scalability and real-time application in clinical settings.
  • The deep learning models employed are computationally intensive, requiring substantial hardware resources for training and inference, which may limit their applicability in resource-constrained settings.
  • The complexity of segmenting brain tumors and COVID-19 lesions, which may require specialized expertise and resources.

7. Future Work

  • We should aim to include a larger and more diverse dataset, covering various demographics and imaging devices, to improve the robustness and generalizability of the models.
  • We should address data labeling challenges, as the process can be time-consuming and error-prone, affecting model quality.
  • Involving close collaboration with healthcare professionals to ensure the method’s relevance, feasibility, and usability in clinical practice can lead to locating the exact segmentation of the tumor.
  • After the classification of these tumors, the segmentation of tumors is needed in order to perform exact detection: knowing the shape and size will aid the doctors in identifying the level at which the tumor is.
  • Focus on optimizing model architectures and hyperparameters to enhance performance while reducing computational requirements, making the models more feasible for real-time clinical use.

Author Contributions

Conceptualization, Y.A.K.; data curation, Y.A.K.; formal analysis, Y.A.K., M.S.G. and A.M.; investigation, Y.A.K. and A.M.; methodology, Y.A.K., M.S.G. and A.M.; supervision, A.M. and M.S.G.; validation, Y.A.K., M.S.G. and A.M.; visualization, Y.A.K., M.S.G. and A.M.; writing, original draft, Y.A.K.; writing, review and editing, Y.A.K., M.S.G. and A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors thank the Norwegian University of Science and Technology for its support through the open access fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Both datasets are available in a publicly accessible repository; COVID-19: The data presented in this study are openly available in [Kaggle], reference number [59]. Brain Tumor: The data presented in this study are openly available in [FigShare] at [https://doi.org/10.6084/m9.figshare.1512427.v5], reference number [60].

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

ACC: Accuracy
TPR: Sensitivity, Recall, Hit Rate, True Positive Rate
TNR: Specificity, Selectivity, True Negative Rate
FPR: False Positive Rate
FNR: False Negative Rate
PPV: Precision, Positive Predictive Value
NPV: Negative Predictive Value
F1 Score: F1 Score
MR: Misclassification Rate

References

  1. Devi, A. Brain tumor detection. IJITR 2015, 3, 1950–1952.
  2. Bhattacharyya, D.; Kim, T. Brain tumor detection using MRI image analysis. In Proceedings of the International Conference on Ubiquitous Computing and Multimedia Applications, Daejeon, Republic of Korea, 13–15 April 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 307–314.
  3. Zhang, Y.-D.; Wu, L. An MR brain images classifier via principal component analysis and kernel support vector machine. Prog. Electromagn. Res. 2012, 130, 369–388.
  4. Lippi, G.; Plebani, M.; Henry, B.M. Thrombocytopenia is associated with severe coronavirus disease 2019 (COVID-19) infections: A meta-analysis. Clin. Chim. Acta 2020, 506, 145–148.
  5. Wu, Y.; Xu, X.; Chen, Z.; Duan, J.; Hashimoto, K.; Yang, L.; Liu, C.; Yang, C. Nervous system involvement after infection with COVID-19 and other coronaviruses. Brain. Behav. Immun. 2020, 87, 18–22.
  6. Xu, G.; Yang, Y.; Du, Y.; Peng, F.; Hu, P.; Wang, R.; Yin, M.; Li, T.; Tu, L.; Sun, J. Clinical pathway for early diagnosis of COVID-19: Updates from experience to evidence-based practice. Clin. Rev. Allergy Immunol. 2020, 59, 89–100.
  7. Lalmuanawma, S.; Hussain, J.; Chhakchhuak, L. Applications of machine learning and artificial intelligence for COVID-19 (SARS-CoV-2) pandemic: A review. Chaos Solitons Fractals 2020, 139, 110059.
  8. Ardakani, A.A.; Kanafi, A.R.; Acharya, U.R.; Khadem, N.; Mohammadi, A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput. Biol. Med. 2020, 121, 103795.
  9. Al-Tawfiq, J.A.; Memish, Z.A. Diagnosis of SARS-CoV-2 infection based on CT scan vs. RT-PCR: Reflecting on experience from MERS-CoV. J. Hosp. Infect. 2020, 105, 154–155.
  10. Yang, X.-S.; Cui, Z.; Xiao, R.; Gandomi, A.H.; Karamanoglu, M. Swarm Intelligence and Bio-Inspired Computation: Theory and Applications; Newnes: London, UK, 2013; ISBN 0124051774.
  11. Barolli, A.; Sakamoto, S.; Barolli, L.; Takizawa, M. Performance analysis of simulation system based on particle swarm optimization and distributed genetic algorithm for WMNs considering different distributions of mesh clients. In Proceedings of the International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, Matsue, Japan, 4–6 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 32–45.
  12. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
  13. Seetha, J.; Raja, S.S. Brain tumor classification using convolutional neural networks. Biomed. Pharmacol. J. 2018, 11, 1457.
  14. Kang, J.; Ullah, Z.; Gwak, J. MRI-based brain tumor classification using ensemble of deep features and machine learning classifiers. Sensors 2021, 21, 2222.
  15. Gumaei, A.; Hassan, M.M.; Hassan, M.R.; Alelaiwi, A.; Fortino, G. A hybrid feature extraction method with regularized extreme learning machine for brain tumor classification. IEEE Access 2019, 7, 36266–36273.
  16. Yang, G.; Zhang, Y.; Yang, J.; Ji, G.; Dong, Z.; Wang, S.; Feng, C.; Wang, Q. Automated classification of brain images using wavelet-energy and biogeography-based optimization. Multimed. Tools Appl. 2016, 75, 15601–15617.
  17. Shankar, K.; Mohanty, S.N.; Yadav, K.; Gopalakrishnan, T.; Elmisery, A.M. Automated COVID-19 diagnosis and classification using convolutional neural network with fusion based feature extraction model. Cogn. Neurodyn. 2023, 17, 1–14.
  18. Júnior, D.A.D.; da Cruz, L.B.; Diniz, J.O.B.; da Silva, G.L.F.; Junior, G.B.; Silva, A.C.; de Paiva, A.C.; Nunes, R.A.; Gattass, M. Automatic method for classifying COVID-19 patients based on chest X-ray images, using deep features and PSO-optimized XGBoost. Expert Syst. Appl. 2021, 183, 115452.
  19. Kumar, A.; Ashok, A.; Ansari, M.A. Brain tumor classification using hybrid model of PSO and SVM classifier. In Proceedings of the 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), Greater Noida, India, 12–13 October 2018; pp. 1022–1026.
  20. Yu, J.-B. Evolutionary manifold regularized stacked denoising autoencoders for gearbox fault diagnosis. Knowl.-Based Syst. 2019, 178, 111–122.
  21. El-Kenawy, E.-S.M.; Ibrahim, A.; Mirjalili, S.; Eid, M.M.; Hussein, S.E. Novel feature selection and voting classifier algorithms for COVID-19 classification in CT images. IEEE Access 2020, 8, 179317–179335.
  22. Navaneeth, B.; Suchetha, M. PSO optimized 1-D CNN-SVM architecture for real-time detection and classification applications. Comput. Biol. Med. 2019, 108, 85–92.
  23. Kaplan, K.; Kaya, Y.; Kuncan, M.; Ertunç, H.M. Brain tumor classification using modified local binary patterns (LBP) feature extraction methods. Med. Hypotheses 2020, 139, 109696.
  24. Kutlu, H.; Avcı, E. A novel method for classifying liver and brain tumors using convolutional neural networks, discrete wavelet transform and long short-term memory networks. Sensors 2019, 19, 1992. [Google Scholar] [CrossRef] [PubMed]
  25. Swati, Z.N.K.; Zhao, Q.; Kabir, M.; Ali, F.; Ali, Z.; Ahmed, S.; Lu, J. Brain tumor classification for MR images using transfer learning and fine-tuning. Comput. Med. Imaging Graph. 2019, 75, 34–46. [Google Scholar] [CrossRef] [PubMed]
  26. Hashemzehi, R.; Mahdavi, S.J.S.; Kheirabadi, M.; Kamel, S.R. Detection of brain tumors from MRI images base on deep learning using hybrid model CNN and NADE. Biocybern. Biomed. Eng. 2020, 40, 1225–1232. [Google Scholar] [CrossRef]
  27. Khan, M.A.; Ashraf, I.; Alhaisoni, M.; Damaševičius, R.; Scherer, R.; Rehman, A.; Bukhari, S.A.C. Multimodal brain tumor classification using deep learning and robust feature selection: A machine learning application for radiologists. Diagnostics 2020, 10, 565. [Google Scholar] [CrossRef] [PubMed]
  28. Barstugan, M.; Ozkaya, U.; Ozturk, S. Coronavirus (COVID-19) classification using ct images by machine learning methods. arXiv 2020, arXiv:2003.09424. [Google Scholar]
  29. Pathak, Y.; Shukla, P.K.; Tiwari, A.; Stalin, S.; Singh, S. Deep transfer learning based classification model for COVID-19 disease. IRBM 2020, 43, 87–92. [Google Scholar] [CrossRef] [PubMed]
  30. Cheng, J.; Huang, W.; Cao, S.; Yang, R.; Yang, W.; Yun, Z.; Wang, Z.; Feng, Q. Enhanced performance of brain tumor classification via tumor region augmentation and partition. PLoS ONE 2015, 10, e0140381. [Google Scholar] [CrossRef] [PubMed]
  31. Huang, C.-R.; Lee, L.-H. Contrastive approach towards text source classification based on top-bag-of-word similarity. In Proceedings of the 22nd Pacific Asia Conference on Language, Information and Computation, Cebu City, Philippines, 2008; pp. 404–410. [Google Scholar]
  32. Rui, W.; Xing, K.; Jia, Y. BOWL: Bag of word clusters text representation using word embeddings. In Proceedings of the International Conference on Knowledge Science, Engineering and Management, Passau, Germany, 5–7 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 3–14. [Google Scholar]
  33. Thejaswini, P.; Bhat, M.B.; Prakash, M.K. Detection and classification of tumour in brain MRI. Int. J. Eng. Manufact. (IJEM) 2019, 9, 11–20. [Google Scholar]
  34. Sethy, P.K.; Behera, S.K.; Ratha, P.K.; Biswas, P. Detection of coronavirus disease (COVID-19) based on deep features and Support Vector Machine. Int. J. Math. Eng. Manag. Sci. 2020, 5, 643–651. [Google Scholar] [CrossRef]
  35. Deepak, S.; Ameer, P.M. Brain tumor classification using deep CNN features via transfer learning. Comput. Biol. Med. 2019, 111, 103345. [Google Scholar] [CrossRef] [PubMed]
  36. Ismael, M.R.; Abdel-Qader, I. Brain tumor classification via statistical features and back-propagation neural network. In Proceedings of the 2018 IEEE international conference on electro/information technology (EIT), Rochester, MI, USA, 3–5 May 2018; pp. 252–257. [Google Scholar]
  37. Ahmed, S.; Frikha, M.; Hussein, T.D.H.; Rahebi, J. Face Recognition System using Histograms of Oriented Gradients and Convolutional Neural Network based on with Particle Swarm Optimization. In Proceedings of the 2021 International Conference on Electrical, Communication and Computer Engineering (ICECCE), Kuala Lumpur, Malaysia, 12–13 June 2021; pp. 1–5. [Google Scholar]
  38. Allagwail, S.; Gedik, O.S.; Rahebi, J. Face recognition with symmetrical face training samples based on local binary patterns and the Gabor filter. Symmetry 2019, 11, 157. [Google Scholar] [CrossRef]
  39. Ghassemi, N.; Shoeibi, A.; Rouhani, M. Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images. Biomed. Signal Process. Control 2020, 57, 101678. [Google Scholar] [CrossRef]
  40. Alashik, K.M.; Yildirim, R. Human Identity Verification From Biometric Dorsal Hand Vein Images Using the DL-GAN Method. IEEE Access 2021, 9, 74194–74208. [Google Scholar] [CrossRef]
  41. Hussin, S.H.S.; Yildirim, R. StyleGAN-LSRO Method for Person Re-Identification. IEEE Access 2021, 9, 13857–13869. [Google Scholar] [CrossRef]
  42. Özkaya, U.; Öztürk, Ş.; Barstugan, M. Coronavirus (COVID-19) classification using deep features fusion and ranking technique. In Big Data Analytics and Artificial Intelligence Against COVID-19: Innovation Vision and Approach; Springer: Berlin/Heidelberg, Germany, 2020; pp. 281–295. [Google Scholar]
  43. Mollalo, A.; Rivera, K.M.; Vahedi, B. Artificial neural network modeling of novel coronavirus (COVID-19) incidence rates across the continental United States. Int. J. Environ. Res. Public Health 2020, 17, 4204. [Google Scholar] [CrossRef] [PubMed]
  44. Gozes, O.; Frid-Adar, M.; Sagie, N.; Zhang, H.; Ji, W.; Greenspan, H. Coronavirus detection and analysis on chest ct with deep learning. arXiv 2020, arXiv:2004.02640. [Google Scholar]
  45. Yeşilkanat, C.M. Spatio-temporal estimation of the daily cases of COVID-19 in worldwide using random forest machine learning algorithm. Chaos Solitons Fractals 2020, 140, 110210. [Google Scholar] [CrossRef] [PubMed]
  46. Wang, L.; Lin, Z.Q.; Wong, A. Covid-net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest x-ray images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef] [PubMed]
  47. Li, X.; Zhu, D. Covid-xpert: An ai powered population screening of COVID-19 cases using chest radiography images. arXiv 2020, arXiv:2004.03042. [Google Scholar]
  48. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792. [Google Scholar] [CrossRef] [PubMed]
  49. Asif, S.; Zhao, M.; Li, Y.; Tang, F.; Zhu, Y. CGO-Ensemble: Chaos Game Optimization Algorithm-Based Fusion of Deep Neural Networks for Accurate Mpox Detection. Neural Netw. 2024, 173, 106183. [Google Scholar] [CrossRef]
  50. Asif, S.; Zhao, M.; Tang, F.; Zhu, Y.; Zhao, B. Metaheuristics optimization-based ensemble of deep neural networks for Mpox disease detection. Neural Netw. 2023, 167, 342–359. [Google Scholar] [CrossRef] [PubMed]
  51. Asif, S.; Zhao, M.; Chen, X.; Zhu, Y. BMRI-NET: A deep stacked ensemble model for multi-class brain tumor classification from MRI images. Interdiscip. Sci. Comput. Life Sci. 2023, 15, 499–514. [Google Scholar] [CrossRef] [PubMed]
  52. Wang, W.; Pei, Y.; Wang, S.-H.; manuel Gorrz, J.; Zhang, Y.-D. PSTCNN: Explainable COVID-19 diagnosis using PSO-guided self-tuning CNN. Biocell 2023, 47, 373–384. [Google Scholar] [CrossRef] [PubMed]
  53. Punitha, S.; Stephan, T.; Kannan, R.; Mahmud, M.; Kaiser, M.S.; Belhaouari, S.B. Detecting COVID-19 from lung computed tomography images: A swarm optimized artificial neural network approach. IEEE Access 2023, 11, 12378–12393. [Google Scholar] [CrossRef]
  54. Rajeev, S.K.; Rajasekaran, M.P.; Vishnuvarthanan, G.; Arunprasath, T. A biologically-inspired hybrid deep learning approach for brain tumor classification from magnetic resonance imaging using improved gabor wavelet transform and Elmann-BiLSTM network. Biomed. Signal Process. Control 2022, 78, 103949. [Google Scholar] [CrossRef]
  55. Rajakumar, S.; Agalya, V.; Rajeswari, R.; Pachlor, R. Political exponential deer hunting optimization-based deep learning for brain tumor classification using MRI. Signal Image Video Process. 2023, 17, 3451–3459. [Google Scholar] [CrossRef]
  56. Geetha, M.; Srinadh, V.; Janet, J.; Sumathi, S. Hybrid archimedes sine cosine optimization enabled deep learning for multilevel brain tumor classification using mri images. Biomed. Signal Process. Control 2024, 87, 105419. [Google Scholar] [CrossRef]
  57. Zhu, H.; Wei, L.; Niu, P. The novel coronavirus outbreak in Wuhan, China. Glob. Health Res. Policy 2020, 5, 6. [Google Scholar] [CrossRef] [PubMed]
  58. Li, A.C.; Lee, D.T.; Misquitta, K.K.; Uno, K.; Wald, S. COVID-19 detection from chest radiographs using machine learning and convolutional neural networks. medRxiv 2020. [Google Scholar] [CrossRef]
  59. Cheng, J.; Yang, W.; Huang, M.; Huang, W.; Jiang, J.; Zhou, Y.; Yang, R.; Zhao, J.; Feng, Y.; Feng, Q. Retrieval of brain tumors by adaptive spatial pooling and fisher vector representation. PLoS ONE 2016, 11, e0157112. [Google Scholar] [CrossRef]
  60. Møller, M.F. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. 1993, 6, 525–533. [Google Scholar] [CrossRef]
  61. Olshausen, B.A.; Field, D.J. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vis. Res. 1997, 37, 3311–3325. [Google Scholar] [CrossRef]
  62. Fernandez-Viagas, V.; Ruiz, R.; Framinan, J.M. A new vision of approximate methods for the permutation flowshop to minimise makespan: State-of-the-art and computational evaluation. Eur. J. Oper. Res. 2017, 257, 707–721. [Google Scholar] [CrossRef]
  63. Beni, G. Swarm intelligence. Complex Soc. Behav. Syst. Game Theory Agent-Based Models 2020, 791–818. [Google Scholar] [CrossRef]
  64. Nguyen, B.H.; Xue, B.; Zhang, M. A survey on swarm intelligence approaches to feature selection in data mining. Swarm Evol. Comput. 2020, 54, 100663. [Google Scholar] [CrossRef]
  65. Niu, W.; Feng, Z.; Feng, B.; Xu, Y.; Min, Y. Parallel computing and swarm intelligence based artificial intelligence model for multi-step-ahead hydrological time series prediction. Sustain. Cities Soc. 2021, 66, 102686. [Google Scholar] [CrossRef]
  66. Cho, W.K.T. An evolutionary algorithm for subset selection in causal inference models. J. Oper. Res. Soc. 2017, 69, 630–644. [Google Scholar]
  67. Hassan, R.; Cohanim, B.; De Weck, O.; Venter, G. A comparison of particle swarm optimization and the genetic algorithm. In Proceedings of the Collection of Technical Papers—AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Austin, TX, USA, 18–21 April 2005; Volume 2, pp. 1138–1150. [Google Scholar]
  68. Widhiarso, W.; Yohannes, Y.; Prakarsah, C. Brain tumor classification using gray level co-occurrence matrix and convolutional neural network. IJEIS (Indones. J. Electron. Instrum. Syst.) 2018, 8, 179–190. [Google Scholar] [CrossRef]
  69. Belaid, O.N.; Loudini, M. Classification of Brain Tumor by Combination of Pre-Trained VGG16 CNN. J. Inf. Technol. Manag. 2020, 12, 13–25. [Google Scholar]
  70. Afshar, P.; Mohammadi, A.; Plataniotis, K.N. Brain tumor type classification via capsule networks. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 3129–3133. [Google Scholar]
  71. Li, D.; Fu, Z.; Xu, J. Stacked-autoencoder-based model for COVID-19 diagnosis on CT images. Appl. Intell. 2021, 51, 2805–2817. [Google Scholar] [CrossRef]
  72. Khozeimeh, F.; Sharifrazi, D.; Izadi, N.H.; Joloudari, J.H.; Shoeibi, A.; Alizadehsani, R.; Gorriz, J.M.; Hussain, S.; Sani, Z.A.; Moosaei, H. CNN AE: Convolution Neural Network combined with Autoencoder approach to detect survival chance of COVID-19 patients. arXiv 2021, arXiv:2104.08954. [Google Scholar]
  73. Mohanaiah, P.; Sathyanarayana, P.; GuruKumar, L. Image texture feature extraction using GLCM approach. Int. J. Sci. Res. Publ. 2013, 3, 1–5. [Google Scholar]
  74. Latha, Y.L.M.; Prasad, M.V.N.K. GLCM based texture features for palmprint identification system. In Computational Intelligence in Data Mining-Volume 1; Springer: Berlin/Heidelberg, Germany, 2015; pp. 155–163. [Google Scholar]
  75. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182. [Google Scholar] [CrossRef]
  76. Mohanty, A.K.; Beberta, S.; Lenka, S.K. Classifying benign and malignant mass using GLCM and GLRLM based texture features from mammogram. Int. J. Eng. Res. Appl. 2011, 1, 687–693. [Google Scholar]
  77. Ammar, M.; Mahmoudi, S.; Stylianos, D. A Set of Texture-Based Methods for Breast Cancer Response Prediction in Neoadjuvant Chemotherapy Treatment. In Soft Computing Based Medical Image Analysis; Elsevier: Amsterdam, The Netherlands, 2018; pp. 137–147. [Google Scholar]
  78. Sachdeva, J.; Kumar, V.; Gupta, I.; Khandelwal, N.; Ahuja, C.K. Segmentation, feature extraction, and multiclass brain tumor classification. J. Digit. Imaging 2013, 26, 1141–1150. [Google Scholar] [CrossRef] [PubMed]
  79. Gárate-Escamila, A.K.; El Hassani, A.H.; Andrès, E. Classification models for heart disease prediction using feature selection and PCA. Inform. Med. Unlocked 2020, 19, 100330. [Google Scholar] [CrossRef]
  80. Afshar, P.; Heidarian, S.; Naderkhani, F.; Oikonomou, A.; Plataniotis, K.N.; Mohammadi, A. Covid-caps: A capsule network-based framework for identification of COVID-19 cases from X-ray images. Pattern Recognit. Lett. 2020, 138, 638–643. [Google Scholar] [CrossRef]
  81. Wu, X.; Liu, S.; Cao, Y.; Li, X.; Yu, J.; Dai, D.; Ma, X.; Hu, S.; Wu, Z.; Liu, X. Speech emotion recognition using capsule networks. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 6695–6699. [Google Scholar]
  82. Lee, K.Y.; Park, J.-B. Application of particle swarm optimization to economic dispatch problem: Advantages and disadvantages. In Proceedings of the 2006 IEEE PES Power Systems Conference and Exposition, Atlanta, GA, USA, 29 October–1 November 2006; pp. 188–192. [Google Scholar]
  83. Selvi, V.; Umarani, R. Comparative analysis of ant colony and particle swarm optimization techniques. Int. J. Comput. Appl. 2010, 5, 1–6. [Google Scholar] [CrossRef]
  84. Kadhim, Y.A.; Khan, M.U.; Mishra, A. Deep learning-based computer-aided diagnosis (cad): Applications for medical image datasets. Sensors 2022, 22, 8999. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Distribution of COVID-19, pneumonia, and normal CXR images in training, validation, and testing datasets.
Figure 2. Three classes of the COVID-19 dataset: (1) COVID-19, (2) pneumonia, and (3) normal.
Figure 3. Distribution of meningioma, glioma, and pituitary MRI images in training, validation, and testing datasets.
Figure 4. Three classes of the brain tumor dataset: (1) meningioma, (2) glioma, and (3) pituitary brain tumors.
Figure 5. Image dataset preprocessing steps.
Figure 6. The diagram for different combinations of the proposed models.
Figure 7. Basic CNN structure.
Figure 8. Sigmoid function.
Figure 9. Velocity and position situations [67].
Figure 10. PSO flowchart.
Figure 11. Convergence of objective function.
Table 1. Autoencoder with PSO feature selection method on the COVID-19 dataset (all values in %).
Method | ACC | TPR | TNR | FPR | FNR | PPV | NPV | F1-Score | MR
Decision Tree | 98.05 | 93.33 | 98.47 | 1.53 | 6.67 | 84.48 | 99.40 | 88.68 | 1.95
SVM | 98.83 | 97.19 | 98.98 | 1.02 | 2.81 | 89.65 | 99.74 | 93.27 | 1.17
KNN | 97.90 | 92.38 | 98.39 | 1.61 | 7.62 | 83.62 | 99.31 | 87.78 | 2.10
Ensemble | 98.29 | 94.33 | 98.64 | 1.36 | 5.67 | 86.20 | 99.48 | 90.08 | 1.71
Naïve Bayes | 90.37 | 87.23 | 99.34 | 0.66 | 12.77 | 93.96 | 90.01 | 90.47 | 9.63
Discriminant | 98.60 | 97.11 | 98.73 | 1.27 | 2.89 | 87.06 | 99.74 | 91.81 | 1.40
Table 2. Autoencoder with PSO feature selection method on the brain tumor dataset (all values in %).
Method | ACC | TPR | TNR | FPR | FNR | PPV | NPV | F1-Score | MR
Decision Tree | 94.12 | 89.47 | 96.21 | 3.79 | 10.53 | 91.39 | 95.31 | 90.42 | 5.88
SVM | 95.75 | 89.60 | 98.78 | 1.22 | 10.40 | 97.31 | 95.08 | 93.30 | 4.25
KNN | 99.51 | 98.41 | 99.94 | 0.06 | 1.59 | 99.98 | 99.29 | 99.19 | 0.49
Ensemble | 94.61 | 87.68 | 98.04 | 1.96 | 12.32 | 95.69 | 94.14 | 91.51 | 5.39
Naïve Bayes | 79.44 | 76.93 | 98.39 | 1.61 | 23.07 | 97.31 | 71.66 | 85.93 | 20.56
Discriminant | 97.22 | 92.46 | 99.51 | 0.49 | 7.54 | 98.92 | 96.48 | 95.58 | 2.78
Table 3. Multiple pre-trained CNNs with the PSO feature selection approach on the COVID-19 dataset (all values in %).
Pre-trained CNN with PSO (COVID-19 dataset)
Classifiers | ACC | TPR | TNR | FPR | FNR | PPV | NPV | F1-Score | MR
Pre-trained CNN (AlexNet) + PSO
Decision Tree | 96.42 | 89.74 | 97.63 | 2.37 | 10.26 | 90.05 | 98.46 | 89.89 | 3.58
SVM | 99.45 | 99.92 | 99.40 | 0.60 | 0.08 | 96.95 | 99.11 | 98.41 | 0.55
KNN | 98.52 | 99.10 | 98.40 | 1.60 | 0.90 | 93.33 | 98.40 | 96.13 | 1.48
Ensemble | 98.91 | 98.74 | 98.81 | 1.19 | 1.26 | 97.42 | 98.52 | 98.08 | 1.09
Naïve Bayes | 95.72 | 96.36 | 99.46 | 0.54 | 3.64 | 94.82 | 95.81 | 95.58 | 4.28
Discriminant | 99.68 | 99.87 | 99.65 | 0.35 | 0.13 | 97.89 | 99.14 | 98.87 | 0.32
Pre-trained CNN (GoogleNet) + PSO
Decision Tree | 96.97 | 90.13 | 98.20 | 1.80 | 9.87 | 92.16 | 98.89 | 91.13 | 3.03
SVM | 98.75 | 99.01 | 98.90 | 1.10 | 0.99 | 96.02 | 99.91 | 97.49 | 1.25
KNN | 98.83 | 99.08 | 98.73 | 1.27 | 0.92 | 92.98 | 99.91 | 95.93 | 1.17
Ensemble | 98.75 | 99.01 | 98.73 | 1.27 | 0.99 | 96.60 | 99.91 | 97.79 | 1.25
Naïve Bayes | 98.52 | 96.40 | 99.23 | 0.77 | 3.60 | 93.05 | 99.14 | 94.70 | 1.48
Discriminant | 98.83 | 99.02 | 98.81 | 1.19 | 0.98 | 97.19 | 99.91 | 98.10 | 1.17
Pre-trained CNN (ResNet 50) + PSO
Decision Tree | 97.51 | 92.85 | 97.89 | 2.11 | 7.15 | 90.29 | 99.40 | 91.55 | 2.49
SVM | 99.76 | 99.89 | 99.74 | 0.26 | 0.11 | 97.41 | 99.18 | 98.63 | 0.24
KNN | 98.99 | 98.13 | 99.06 | 0.94 | 1.87 | 94.38 | 99.82 | 96.22 | 1.01
Ensemble | 99.22 | 99.81 | 99.15 | 0.85 | 0.19 | 96.49 | 99.60 | 98.12 | 0.78
Naïve Bayes | 99.06 | 98.82 | 99.91 | 0.09 | 1.18 | 99.13 | 99.06 | 98.97 | 0.94
Discriminant | 99.66 | 99.40 | 99.64 | 0.36 | 0.60 | 97.31 | 99.63 | 98.34 | 0.34
Pre-trained CNN (DenseNet 201) + PSO
Decision Tree | 97.82 | 94.84 | 98.14 | 1.86 | 5.16 | 92.63 | 99.57 | 93.72 | 2.18
SVM | 99.37 | 99.09 | 99.40 | 0.60 | 0.91 | 96.25 | 99.91 | 97.65 | 0.63
KNN | 98.75 | 99.01 | 98.73 | 1.27 | 0.99 | 94.61 | 99.91 | 96.76 | 1.25
Ensemble | 98.91 | 98.46 | 98.81 | 1.19 | 1.54 | 96.84 | 98.91 | 97.64 | 1.09
Naïve Bayes | 99.06 | 97.71 | 99.57 | 0.43 | 2.29 | 95.68 | 99.40 | 96.68 | 0.94
Discriminant | 98.75 | 99.01 | 98.73 | 1.27 | 0.99 | 97.19 | 99.91 | 98.09 | 1.25
Table 4. Multiple pre-trained CNNs with the PSO feature selection approach on the brain tumor dataset (all values in %).
Pre-trained CNN with PSO (brain tumor dataset)
Classifiers | ACC | TPR | TNR | FPR | FNR | PPV | NPV | F1-Score | MR
Pre-trained CNN (AlexNet) + PSO
Decision Tree | 89.39 | 82.70 | 92.28 | 7.72 | 17.30 | 82.25 | 92.50 | 82.47 | 10.61
SVM | 97.87 | 96.25 | 98.59 | 1.41 | 3.75 | 96.77 | 98.36 | 96.51 | 2.13
KNN | 98.69 | 97.34 | 99.29 | 0.71 | 2.66 | 98.38 | 98.82 | 97.86 | 1.31
Ensemble | 95.75 | 93.47 | 96.73 | 3.27 | 6.53 | 92.47 | 97.18 | 92.97 | 4.25
Naïve Bayes | 88.09 | 86.93 | 96.33 | 3.67 | 13.07 | 92.47 | 92.98 | 89.61 | 11.91
Discriminant | 97.55 | 95.72 | 98.35 | 1.65 | 4.28 | 96.23 | 98.12 | 95.97 | 2.45
Pre-trained CNN (GoogleNet) + PSO
Decision Tree | 87.76 | 81.00 | 90.55 | 9.45 | 19.00 | 77.95 | 92.03 | 79.45 | 12.24
SVM | 96.41 | 93.15 | 97.87 | 2.13 | 6.85 | 95.16 | 96.95 | 94.14 | 3.59
KNN | 93.80 | 87.23 | 97.32 | 2.68 | 12.77 | 94.08 | 93.67 | 90.53 | 6.20
Ensemble | 92.98 | 85.92 | 96.37 | 3.63 | 14.08 | 91.93 | 93.44 | 88.82 | 7.02
Naïve Bayes | 90.86 | 86.50 | 96.72 | 3.28 | 13.50 | 93.01 | 89.92 | 89.64 | 9.14
Discriminant | 96.41 | 92.67 | 99.03 | 0.97 | 7.33 | 97.84 | 95.78 | 95.18 | 3.59
Pre-trained CNN (ResNet 50) + PSO
Decision Tree | 90.21 | 85.39 | 92.18 | 7.82 | 14.61 | 81.75 | 93.91 | 83.53 | 9.79
SVM | 97.87 | 96.75 | 98.36 | 1.64 | 3.25 | 96.23 | 98.59 | 96.49 | 2.13
KNN | 98.85 | 98.90 | 98.83 | 1.17 | 1.10 | 97.31 | 99.53 | 98.10 | 1.15
Ensemble | 96.73 | 95.60 | 97.21 | 2.79 | 4.40 | 93.54 | 98.12 | 94.56 | 3.27
Naïve Bayes | 90.51 | 88.72 | 97.95 | 2.05 | 11.28 | 95.69 | 91.08 | 92.07 | 9.49
Discriminant | 97.22 | 94.96 | 99.27 | 0.73 | 5.04 | 98.38 | 97.42 | 96.64 | 2.78
Pre-trained CNN (DenseNet 201) + PSO
Decision Tree | 93.47 | 88.42 | 95.74 | 4.26 | 11.58 | 90.32 | 94.84 | 89.36 | 6.53
SVM | 96.90 | 96.13 | 97.22 | 2.78 | 3.87 | 93.54 | 98.36 | 94.82 | 3.10
KNN | 97.71 | 95.74 | 98.58 | 1.42 | 4.26 | 96.77 | 98.12 | 96.25 | 2.29
Ensemble | 96.73 | 94.62 | 97.65 | 2.35 | 5.38 | 94.62 | 97.65 | 94.62 | 3.27
Naïve Bayes | 93.31 | 95.49 | 96.84 | 3.16 | 4.51 | 93.01 | 96.64 | 94.23 | 6.69
Discriminant | 96.90 | 94.96 | 98.11 | 1.89 | 5.04 | 95.69 | 97.42 | 95.32 | 3.10
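Tables 3 and 4 rely on deep features taken from pre-trained CNNs before the PSO stage. The snippet below is only a sketch of how such features can be obtained with PyTorch/torchvision (the exact toolchain, layer choice, and image pipeline used in the paper are not specified here): the classification head of a pre-trained ResNet-50 is replaced with an identity so that each CXR or MRI image yields a 2048-dimensional feature vector.

```python
# Sketch (assumes a recent torchvision): deep-feature extraction from a
# pre-trained ResNet-50 by dropping its final fully connected layer.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()      # keep the 2048-D pooled features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path: str) -> torch.Tensor:
    """Return a 2048-D feature vector for one image (the path is hypothetical)."""
    img = Image.open(image_path).convert("RGB")         # grayscale CXR/MRI -> 3 channels
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).squeeze(0)
```

The resulting feature vectors are what the feature-selection stage (PSO, ACO, or GA) would then prune before classification.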
Table 5. Comparison of the proposed method with previous methods on the COVID-19 and brain tumor datasets.
Ref. | Journal | Year | Methods | Dataset | Classes | Results
[19] | IEEE International Conference | 2018 | PSO + SVM | Brain tumor | Benign, malignant | 95.23%
[17] | Springer Link | 2021 | CNN + (GLCM), (GLRM), (LBP) + PSO | COVID-19 | COVID-19, pneumonia, normal | 98.06%
[8] | Journal Pre-proof | 2020 | CNN | COVID-19 | COVID-19, non-COVID-19 | 86.27%
[68] | Indonesian Journal of Electronics and Instrumentation Systems (IJEIS) | 2018 | GLCM + CNN | Brain tumor | Meningioma, glioma, pituitary | 82.00%
[69] | The Information Technology Management (ICCMIT’20) | 2020 | GLCM + VGG16 + Softmax | Brain tumor | Meningioma, glioma, pituitary | 96.50%
[70] | Scientific Reports—Computer Science | 2018 | Capsule networks (CapsNets) + Softmax | Brain tumor | Meningioma, glioma, pituitary | 86.56%
[71] | Springer Link | 2020 | Stacked autoencoder + Softmax | COVID-19 | Positive, negative | 94.70%
[72] | Scientific Reports—Computer Science | 2021 | CNN + Autoencoder + SVM | COVID-19 (private) | COVID-19, normal | 96.05%
Proposed method | | 2021 | CNN + PSO + SVM | COVID-19 | COVID-19, pneumonia, normal | 99.76%
Proposed method | | 2021 | Autoencoder + PSO + KNN | Brain tumor | Meningioma, glioma, pituitary | 99.51%
Table 6. Comparison between PSO, ACO, and GA in terms of accuracy and runtime (h:mm:ss) on the COVID-19 and brain tumor datasets.
Dataset | Combined Methods | PSO Classifier | PSO Acc. | PSO Time | ACO Classifier | ACO Acc. | ACO Time | GA Classifier | GA Acc. | GA Time
COVID-19 | Autoencoder | SVM | 98.83% | 1:15:00 | SVM | 98.68% | 0:27:00 | KNN | 97.98% | 1:00:00
COVID-19 | CNN (AlexNet) | Discriminant | 99.68% | 4:12:00 | Discriminant | 99.53% | 1:17:00 | KNN | 98.60% | 2:07:00
COVID-19 | CNN (GoogleNet) | Discriminant | 98.83% | 1:00:00 | SVM | 98.91% | 0:21:00 | Naïve Bayes | 98.13% | 0:40:00
COVID-19 | CNN (ResNet 50) | SVM | 99.76% | 2:05:00 | SVM | 99.61% | 0:43:00 | KNN | 98.60% | 1:05:00
COVID-19 | CNN (DenseNet 201) | SVM | 99.37% | 2:00:00 | Naïve Bayes | 99.14% | 0:39:00 | KNN | 98.75% | 1:04:00
Brain Tumor | Autoencoder | KNN | 99.51% | 0:23:00 | KNN | 99.18% | 0:11:00 | Ensemble | 96.24% | 0:16:00
Brain Tumor | CNN (AlexNet) | KNN | 98.69% | 0:58:00 | Discriminant | 98.69% | 0:12:00 | Ensemble | 94.61% | 0:30:00
Brain Tumor | CNN (GoogleNet) | Discriminant | 96.41% | 0:15:00 | Discriminant | 96.73% | 0:05:00 | KNN | 93.96% | 0:09:00
Brain Tumor | CNN (ResNet 50) | KNN | 98.85% | 0:30:00 | KNN | 97.87% | 0:11:00 | Ensemble | 97.06% | 0:16:00
Brain Tumor | CNN (DenseNet 201) | KNN | 97.71% | 0:27:00 | SVM | 98.20% | 0:09:00 | Ensemble | 96.57% | 0:18:00
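For readers who want to see mechanically what the PSO column of Table 6 is comparing, the following is a small, self-contained sketch of binary PSO feature selection wrapped around an SVM-accuracy fitness (NumPy and scikit-learn assumed). The swarm size, iteration count, and inertia/acceleration coefficients are placeholder values rather than the settings used in this study; the sigmoid transfer function corresponds to the curve shown in Figure 8.

```python
# Illustrative binary PSO feature selection with an SVM-accuracy fitness.
# Not the authors' implementation; hyperparameters are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(X, y, mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

def binary_pso_select(X, y, n_particles=20, n_iter=30, w=0.7, c1=1.5, c2=1.5):
    n_feat = X.shape[1]
    pos = rng.random((n_particles, n_feat)) > 0.5        # bit mask per particle
    vel = rng.uniform(-1, 1, (n_particles, n_feat))      # velocity per particle
    pbest = pos.copy()
    pbest_fit = np.array([fitness(X, y, p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = (w * vel
               + c1 * r1 * (pbest.astype(float) - pos.astype(float))
               + c2 * r2 * (gbest.astype(float) - pos.astype(float)))
        prob = 1.0 / (1.0 + np.exp(-vel))                 # sigmoid transfer (cf. Figure 8)
        pos = rng.random((n_particles, n_feat)) < prob
        fit = np.array([fitness(X, y, p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest                                          # boolean mask of selected features

# Tiny synthetic example standing in for the deep features:
X = rng.random((60, 40))
y = np.tile([0, 1], 30)
mask = binary_pso_select(X, y, n_particles=8, n_iter=5)
print(f"selected {int(mask.sum())} of {X.shape[1]} features")
```

The fitness is what dominates the runtimes reported in Table 6: every particle at every iteration requires retraining the wrapped classifier on the candidate feature subset.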
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
