Article

Binary Starling Murmuration Optimizer Algorithm to Select Effective Features from Medical Data

by Mohammad H. Nadimi-Shahraki 1,2,3,*, Zahra Asghari Varzaneh 4, Hoda Zamani 1,2 and Seyedali Mirjalili 3,5,*
1 Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad 8514143131, Iran
2 Big Data Research Center, Najafabad Branch, Islamic Azad University, Najafabad 8514143131, Iran
3 Centre for Artificial Intelligence Research and Optimisation, Torrens University Australia, Brisbane, QLD 4006, Australia
4 Department of Computer Science, Faculty of Mathematics and Computer, Shahid Bahonar University of Kerman, Kerman 7616914111, Iran
5 Yonsei Frontier Lab, Yonsei University, Seoul 03722, Republic of Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(1), 564; https://doi.org/10.3390/app13010564
Submission received: 20 November 2022 / Revised: 20 December 2022 / Accepted: 29 December 2022 / Published: 31 December 2022

Abstract:
Feature selection is an NP-hard problem of removing irrelevant and redundant features with no predictive information to increase the performance of machine learning algorithms. Many wrapper-based methods using metaheuristic algorithms have been proposed to select effective features. However, they perform differently on medical data, and most of them cannot find those effective features that fulfill the required accuracy in diagnosing important diseases such as Diabetes, Heart problems, Hepatitis, and Coronavirus, which are the targeted datasets in this study. To tackle this drawback, an algorithm is needed that can strike a balance between local and global search strategies when selecting effective features from medical datasets. In this paper, a new binary optimizer algorithm named BSMO is proposed. It is based on the recently proposed starling murmuration optimizer (SMO), which has a high ability to solve different complex and engineering problems, and it is expected that BSMO can also effectively find an optimal subset of features. The BSMO algorithm uses two distinct approaches when searching medical datasets for effective features. In the first approach, binary versions of BSMO are developed using several S-shaped and V-shaped transfer functions, whereas in the second, each dimension of a continuous solution generated by SMO is simply mapped to 0 or 1 using a variable threshold. The performance of the proposed BSMO was evaluated using four targeted medical datasets, and the results were compared with well-known binary metaheuristic algorithms in terms of different metrics, including fitness, accuracy, sensitivity, specificity, precision, and error. Finally, the superiority of the proposed BSMO algorithm was statistically analyzed using the Friedman non-parametric test. The statistical and experimental tests proved that the proposed BSMO attains better performance than competitive algorithms such as ACO, BBA, bGWO, and BWOA in selecting effective features from the medical datasets targeted in this study.

1. Introduction

With recent advancements in medical information technology, a huge volume of raw medical data is rapidly generated from different medical resources such as medical examinations, radiology, laboratory tests, mobile health applications, and wearable healthcare technologies [1,2,3]. Extracting informative knowledge from these medical data using artificial intelligence and machine learning algorithms can help provide faster treatment and significantly reduce patient mortality rates [4,5]. The application of these algorithms to some diseases, such as Diabetes, Heart problems, Hepatitis, and Coronavirus, is more common than to others due to their high epidemic and mortality rates, expensive tests, and the special experience they require [6,7,8]. One of the main challenges in such disease datasets is the existence of redundant and irrelevant features [9], which can decrease the effectiveness of disease diagnosis systems. In medical data mining and machine learning [10,11], one of the most crucial preprocessing steps is feature selection, which eliminates redundant and irrelevant features to uncover effective ones. Since there are $2^N$ distinct feature subsets in a dataset with N features, the feature selection problem is NP-hard [12,13]. Therefore, evaluating all feature subsets to find effective features is very costly, and each feature added to the dataset doubles the number of subsets to evaluate [13,14].
Filter-based, wrapper-based, and embedded methods are the three main categories of feature selection techniques [15,16]. The classification algorithm is not involved in filter-based methods, which typically operate based on feature ranking. Wrapper-based methods, as opposed to filter-based methods, use a classifier to evaluate individual candidate subsets of features [17,18]. Embedded methods combine the qualities of filter and wrapper methods, and the feature selection algorithm is integrated as part of the learning algorithm [16]. Many wrapper feature selection methods based on metaheuristic algorithms have been proposed [15,16] that can effectively solve the feature selection problem, an NP-hard problem, in a reasonable response time [19,20]. The main goal of using metaheuristic algorithms is to search the feature space and find near-optimal solutions effectively. Metaheuristic algorithms are recognized as robust solvers for a variety of problem types, such as continuous [21], discrete [22,23,24], and constrained [25,26] problems. Particle swarm optimization (PSO) [27], ant colony optimization (ACO) [28], differential evolution (DE) [29], the cuckoo optimization algorithm (COA) [30], krill herd (KH) [31], the social spider algorithm (SSA) [32], the crow search algorithm (CSA) [33], the grasshopper optimization algorithm (GOA) [34], the quantum-based avian navigation optimizer algorithm (QANA) [35], and the African vultures optimization algorithm (AVOA) [36] are some of the successful metaheuristic algorithms that have been developed to solve feature selection problems.
Many metaheuristic-based methods have been proposed to select features from medical data [37,38,39]. However, only a few of them can select effective features that provide acceptable accuracy in diagnosing all the diseases targeted in this study, including Diabetes, Heart problems, Hepatitis, and Coronavirus [40]. The main reason for this drawback is the generation and storage of many irrelevant and redundant features in medical processes, which reduces the efficiency of the classification algorithms used in disease diagnosis systems. Therefore, a metaheuristic algorithm is needed that selects useful and effective features from medical datasets by striking a proper balance between local and global search strategies. Responding to this need, particularly for the datasets targeted in the scope of this study, is our motivation for introducing binary versions of the newly proposed starling murmuration optimizer (SMO) algorithm [41], which can balance its search strategies efficiently. The SMO algorithm uses a dynamic multi-flock construction and three search strategies: separating, diving, and whirling. In SMO, starlings in large flocks turn, dive, and whirl across the sky. The separating search strategy enriches population diversity by employing the quantum harmonic oscillator. With the help of a quantum random dive operator, the diving search strategy enhances exploration. In contrast, the whirling search strategy exploits the cohesion force in the vicinity of promising regions. The SMO algorithm has shown a high ability to solve different complex and engineering problems, but it had not yet been developed for solving feature selection problems. The binary version of SMO, or BSMO, is expected to solve the feature selection problem effectively.
The BSMO algorithm generates candidate subsets of features using two different approaches. The first approach develops binary versions of BSMO using several S-shaped and V-shaped transfer functions. In the second approach, BSMO maps each dimension of a continuous solution generated by SMO to 0 or 1 using a variable threshold method. The scope of this study is limited to selecting effective features from four targeted datasets: Diabetes, Heart, Hepatitis, and Coronavirus. The performance of the BSMO variants is assessed on the targeted datasets in terms of fitness, accuracy, sensitivity, specificity, precision, and error. The results are contrasted with competing binary algorithms: ant colony optimization (ACO) [28], the binary bat algorithm (BBA) [42], binary grey wolf optimization (bGWO) [43], and the binary whale optimization algorithm (BWOA) [39]. The main contributions of this study can be summarized as follows.
  • Developing the BSMO algorithm as a binary version of the SMO algorithm.
  • Transferring the continuous solutions to binary ones effectively using two different approaches: S-shaped and V-shaped transfer functions, and a variable threshold method.
  • Evaluating BSMO on medical datasets targeted in this study and comparing its performance with other popular feature selection algorithms.
  • Finding satisfactory results in selecting effective features from the targeted medical datasets.
The rest of this paper is organized as follows. The related works are reviewed in Section 2. A description of the standard SMO algorithm is presented in Section 3. The details of the proposed BSMO algorithm are presented in Section 4. Section 5 includes the experimental evaluation and the comparison between the proposed BSMO and contender algorithms. Section 6 concludes this study and its findings and suggests some future works.

2. Related Works

Real-world optimization problems have different properties and involve various intricacies, creating critical challenges for the optimization algorithms that solve them. Generally, optimization problems in mechanical and engineering applications mostly involve multiple properties, such as linear and non-linear constraints on decision variables and non-differentiable objective and constraint functions. Therefore, many constraint-handling methods, such as penalty functions (static, dynamic, annealing, adaptive, co-evolutionary, and the death penalty), have been developed to cope with such challenges [44]. Other optimization problems, especially in feature selection applications, mostly involve intricacies such as discrete search spaces, irrelevant and redundant features, and high-dimensional feature spaces. Feature selection is a common way, in the preprocessing phase, to cope with such intricacies by selecting only a small subset of relevant features from the original dataset [45,46]. Feature selection reduces the feature space's dimensionality, speeds up the learning process, simplifies the learned model, and boosts classifier performance by eliminating redundant and irrelevant features [47,48,49].
The topic of feature selection is presented as a binary optimization problem with the conflicting objectives of reducing the number of features and enhancing classification accuracy. Each solution is represented by a D-dimensional binary vector that takes only the values 0 and 1, where 0 signifies that the corresponding feature is not selected and 1 indicates that it is selected. The number of dimensions in this binary vector corresponds to the number of features in the initial dataset. Feature selection is applied in many machine learning and data mining tasks, including intrusion detection [50,51,52,53], spam detection [54,55], financial problem prediction [56], and classification [57,58,59]. In particular, finding an optimal subset of features from medical datasets is a challenging problem that many researchers have recently considered. Metaheuristic algorithms are recognized as prominent solvers for optimization problems, especially feature selection. Based on the source of their inspiration, metaheuristic algorithms may be divided into eight groups: physics-based, biology-based, swarm-based, social-based, music-based, sport-based, chemistry-based, and math-based [60,61,62]. Since most metaheuristic algorithms are proposed for continuous problems, many binarization methods, such as logical operators, variable threshold methods, and transfer functions, have been developed to map the continuous feature space to a binary one. In the literature, the most famous transfer functions are S-shaped [63], V-shaped [64,65,66], U-shaped [67,68], X-shaped [69], and Z-shaped [70]. This section presents an overview of the most recent related work on metaheuristics for the wrapper feature selection problem in medical data classification.
Nadimi-Shahraki et al. [40] proposed an improved whale optimization algorithm called BE-WOA. In BE-WOA, a pooling mechanism and three effective search strategies (migration, preferential selection, and surrounded prey) are used to improve WOA in selecting effective features from medical datasets. BE-WOA was also applied to predict Coronavirus disease 2019 (COVID-19), and the obtained results prove the efficiency of the algorithm. The gene selection technique is used for high-dimensional datasets where the number of samples is small and the number of features is large; gene selection is the process of finding the best feature subset in a dataset [71]. For gene selection, Alirezanejad et al. [72] developed two heuristics, Xvariance and mutual congestion. This approach first ranks the features; then, using Monte Carlo cross-validation, ten subsets of features are chosen based on forward feature selection (FFS). To enhance the results, majority voting is applied to the features selected in the prior stage to calculate accuracy, sensitivity, specificity, and the Matthews correlation coefficient.
Asghari Varzaneh et al. [73] proposed a new COVID-19 intubation prediction strategy using a binary version of the horse herd optimization algorithm to select the effective features. The test results showed that the proposed feature selection method outperforms other methods. Pashaei et al. [74] introduced two binary variants of the chimp optimization algorithm using S-shaped and V-shaped transfer functions for biomedical data classification. In a recent study, Nadimi-Shahraki et al. [75] proposed a binary version of the quantum-based avian navigation optimizer algorithm (BQANA) to select the optimal feature subset from high-dimensional medical datasets. The reported results show that BQANA using a threshold method dominates all contender algorithms. Alweshah et al. [76] proposed a greedy crossover (GC) operator strategy to boost the exploration capability of the coronavirus herd immunity optimizer (CHIO). Several medical datasets were then used to evaluate the performance of the proposed algorithm in addressing the feature selection problem in medical diagnosis. The results indicated that the GC operator strikes a balance between the search strategies of the CHIO algorithm.
For challenges involving medical feature selection, Anter et al. [77] proposed a hybrid crow search optimization algorithm combined with chaos theory and the fuzzy c-means algorithm (CFCSA). The suggested algorithm avoids local optima and improves the CSA's convergence using chaos theory and a global optimization method. The test results show the efficiency and stability of CFCSA on medical datasets and real-world problems. Singh et al. [78] proposed a hybrid ensemble filter-wrapper feature selection algorithm to improve the performance of classifiers in medical data applications. In this algorithm, a filter-based method based on weight points is first used to rank the features. Then, the sequential forward selection algorithm is used as a wrapper-based feature selection method to generate an optimal feature subset. To propose a binary version of the atom search optimization algorithm (ASO), Too et al. [79] applied four S-shaped and four V-shaped transfer functions to solve the feature selection problem; among the eight presented binary versions, BASO based on the S1-shaped transfer function had the highest performance. Moreover, Mirjalili et al. [67] proposed a new binary version of the PSO algorithm using a U-shaped transfer function to transform continuous velocity values into binary values. The results show that U-shaped transfer functions significantly increase the performance of BPSO.
Elgamal et al. [80] enhanced the reptile search algorithm (RSA) by employing a chaotic map and the simulated annealing algorithm to tackle feature selection issues for high-dimensional medical datasets. Applying chaos theory to RSA improves its exploration ability, and hybridizing RSA with the simulated annealing algorithm helps it avoid local optima trapping. Many other metaheuristic algorithms have been proposed to solve feature selection problems, such as the binary ant lion optimizer (BALO) [81], the return-cost-based binary firefly algorithm (Rc-BBFA) [82], the chaotic dragonfly algorithm (CDA) [83], the binary chimp optimization algorithm (BChOA) [84], the altruistic whale optimization algorithm (AltWOA) [85], the binary African vulture optimization algorithm (BAVOA) [86], and the binary dwarf mongoose optimization algorithm (BDMSAO) [87].
Studying the related works shows that various metaheuristic algorithms have been used to select effective features from medical data. However, most of them cannot find effective features that provide an acceptable diagnosis of important diseases such as Diabetes, Heart, Hepatitis, and Coronavirus. To respond to this weakness, the BSMO algorithm is introduced in this study to develop a new wrapper feature selection method for these diseases.

3. Starling Murmuration Optimizer (SMO)

SMO is a population-based metaheuristic algorithm recently developed by Zamani et al. [41]. The SMO algorithm models the starlings' behavior during their stunning murmurations using three new search strategies: separating, diving, and whirling. The starling population is denoted by $S = \{s_1, s_2, \ldots, s_N\}$, where N is the population size. The position of each starling $s_i$ at iteration t is denoted by a vector $X_i(t) = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$, and its fitness value is expressed by $F_i(t)$. In the first iteration, each $X_i(t)$ is initialized by a uniform random distribution in a D-dimensional search space using Equation (1), where $X^L$ and $X^U$ are the lower and upper bounds of the search space, respectively, and $rand(0,1)$ is a random value between 0 and 1.
$$X_i(t) = X^L + rand(0,1) \times (X^U - X^L), \quad i = 1, 2, \ldots, N \qquad (1)$$
For the rest of the iterations, the population of starlings is moved using the separating, diving, and whirling search strategies. The details of these search strategies are discussed in the following sections.
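As a concrete illustration of Equation (1), the initialization step can be sketched in a few lines of NumPy (the original implementation is in MATLAB; all names below are illustrative):

```python
import numpy as np

def initialize_population(n, dim, x_low, x_up, rng):
    """Equation (1): uniform random positions X_i(1) in a D-dimensional space."""
    return x_low + rng.random((n, dim)) * (x_up - x_low)

rng = np.random.default_rng(42)
X = initialize_population(30, 13, 0.0, 1.0, rng)  # e.g., N = 30 starlings, D = 13 features
```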

3.1. Separating Search Strategy

The separating search strategy promotes diversity throughout the population. In this strategy, first, a portion of starlings of size $P_{sep}$ is randomly selected to separate from the population S using Equation (2). Then, some dimensions of the selected starlings are updated using Equation (3), where $X_G(t)$ is the global best position, and $X_{r_1}(t)$ and $X_{r_2}(t)$ are randomly selected positions. In each iteration, the best positions obtained so far are stored; these positions are joined with the separated positions of size $P_{sep}$, and the random positions are ultimately selected from these sets. $Q_1(y)$ is a separation operator calculated using Equation (4), where $\alpha$ is a parameter of the quantum harmonic oscillator, m and k are the particle's mass and strength, respectively, and h is Planck's constant. Moreover, $H_n$ is the Hermite polynomial with integer index n, and y is a random number.
$$P_{sep} = \frac{\log(t + D)}{\log(MaxIt) \times 2} \qquad (2)$$
$$X_i(t+1) = X_G(t) + Q_1(y) \times (X_{r_1}(t) - X_{r_2}(t)) \qquad (3)$$
$$Q_1(y) = \left(\frac{\alpha}{2^n \times n! \times \pi^{1/2}}\right)^{1/2} H_n(\alpha \times y) \times e^{-0.5 \times \alpha^2 \times y^2}, \quad \alpha = (m \times k)^{1/4} \qquad (4)$$
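For illustration only, the separation operator $Q_1(y)$ of Equation (4) can be evaluated with SciPy's physicists' Hermite polynomials; n, m, and k are assumed here to be user-set constants:

```python
import numpy as np
from scipy.special import eval_hermite, factorial

def separation_operator(y, n=1, m=1.0, k=1.0):
    """Q1(y) of Equation (4): quantum-harmonic-oscillator wave function."""
    alpha = (m * k) ** 0.25
    norm = (alpha / (2.0 ** n * factorial(n) * np.sqrt(np.pi))) ** 0.5
    return norm * eval_hermite(n, alpha * y) * np.exp(-0.5 * alpha ** 2 * y ** 2)
```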
The rest of the starlings, of size Ń = N − P_sep, are flocked using a dynamic multi-flock construction to search the problem space using either the diving or the whirling search strategy. Each iteration creates a dynamic multi-flock using k non-empty flocks f_1, …, f_k. First, the k best starlings are separated from the population Ń and stored in a matrix R; then the rest of the population (Ń − R) is divided among the k flocks. Finally, each position of R is assigned to a flock such that f_1 ← {R_1 ∪ f_1}, …, f_k ← {R_k ∪ f_k}.
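A minimal sketch of this dynamic multi-flock construction, assuming a minimization problem so that the lowest fitness values identify the k best starlings (names are illustrative):

```python
import numpy as np

def build_flocks(fitness, k, rng):
    """Seed k flocks with the k best starlings and split the rest among them."""
    order = np.argsort(fitness)            # minimization: best starlings first
    R, rest = order[:k], order[k:].copy()  # R holds the k flock leaders
    rng.shuffle(rest)                      # random partition of the remaining birds
    flocks = np.array_split(rest, k)
    return [np.append(f, R[j]) for j, f in enumerate(flocks)]  # f_j <- f_j U {R_j}
```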
As shown in Equation (6), the diving and whirling search strategies are assigned to the flocks based on the quality of each flock. The quality of each flock, $Q_q(t)$, is evaluated using Equation (5), where k is the number of flocks, $sf_{ij}(t)$ is the fitness value of starling $s_i$ in flock $f_j$, and n is the number of starlings in each flock. The parameter $\mu_Q(t)$ in Equation (6) denotes the average quality of all flocks.
$$Q_q(t) = \frac{\sum_{i=1}^{k} \frac{1}{n} \sum_{j=1}^{n} sf_{ij}(t)}{\frac{1}{n} \sum_{i=1}^{n} sf_{qi}(t)} \qquad (5)$$
$$X_i(t+1) = \begin{cases} \text{Diving search strategy} & Q_q(t) \leq \mu_Q(t) \\ \text{Whirling search strategy} & Q_q(t) > \mu_Q(t) \end{cases} \qquad (6)$$

3.2. Diving Search Strategy

The diving search strategy encourages the selected flocks ($Q_q(t) \leq \mu_Q(t)$) to explore the search space effectively. The starlings are moved using upward and downward quantum random dives (QRD). The starlings of a flock switch between these quantum dives using the two quantum probabilities shown in Equation (7), where $|\psi_{Up}(X_i)\rangle$ and $|\psi_{Down}(X_i)\rangle$ are the upward and downward probabilities computed using Equations (8) and (9). Parameters φ and θ are set by the user, and $|\psi(\delta^2)\rangle$ is an inverse-Gaussian distribution computed using Equation (10), where the values of λ and µ are set by the user, and y is a random number.
$$QRD = \begin{cases} \text{Upward quantum dive} & |\psi_{Up}(X_i)\rangle > |\psi_{Down}(X_i)\rangle \\ \text{Downward quantum dive} & |\psi_{Up}(X_i)\rangle \leq |\psi_{Down}(X_i)\rangle \end{cases} \qquad (7)$$
$$|\psi_{Up}(X_i)\rangle = e^{i\varphi} \cos\theta \times |\psi(\delta^2)\rangle - e^{i\varphi} \sin\theta \times |\psi(\delta^2)\rangle \qquad (8)$$
$$|\psi_{Down}(X_i)\rangle = e^{i\varphi} \sin\theta \times |\psi(\delta^2)\rangle + e^{i\varphi} \cos\theta \times |\psi(\delta^2)\rangle \qquad (9)$$
$$|\psi(\delta^2)\rangle = \sqrt{\frac{\lambda}{2 \times \pi \times y^3}} \times e^{-\frac{\lambda (y - \mu)^2}{2 \times \mu^2 \times y}} \qquad (10)$$
The downward and upward quantum dives are computed using Equations (11) and (12), respectively, where $|\psi(R_D)\rangle$ is selected from the set R, $|\psi(X_i)\rangle$ is the position of starling $s_i$ in the current iteration, the position $|\psi(X_r)\rangle$ is randomly selected among the flocks assigned to the diving strategy, and $|\psi(X_j)\rangle$ is randomly selected from the population S and the set of best starlings. $|\psi(\delta_1)\rangle$ is a random position selected from the set of best starlings obtained since the first iteration and the starling population S.
$$|\psi(t+1, X_i)\rangle = |\psi(R_D)\rangle - |\psi_{Down}(X_i)\rangle \times (|\psi(X_i)\rangle - |\psi(X_r)\rangle) \qquad (11)$$
$$|\psi(t+1, X_i)\rangle = |\psi(R_D)\rangle + |\psi_{Up}(X_i)\rangle \times (|\psi(X_i)\rangle - |\psi(X_j)\rangle + |\psi(\delta_1)\rangle) \qquad (12)$$

3.3. Whirling Search Strategy

Starlings of a flock exploit the search space using the whirling search strategy when the quality of the flock exceeds the average quality of all flocks ($Q_q(t) > \mu_Q(t)$). The whirling search strategy is defined in Equation (13), where $X_i(t+1)$ is the next position of starling $s_i$ at iteration t, the position $X_{RW}(t)$ is randomly selected from the set R of the flocks considered for the whirling search strategy, and $X_N(t)$ is randomly selected from all flocks that use the whirling search strategy. $C_i(t)$ is the cohesion operator calculated using Equation (14), where $\xi(t)$ is a random number in the interval (0, 1).
$$X_i(t+1) = X_i(t) + C_i(t) \times (X_{RW}(t) - X_N(t)) \qquad (13)$$
$$C_i(t) = \cos(\xi(t)) \qquad (14)$$
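As an illustration, a single whirling move of Equations (13) and (14) is one vector update; this sketch assumes the positions $X_{RW}$ and $X_N$ have already been sampled as described above:

```python
import numpy as np

def whirling_move(x_i, x_rw, x_n, rng):
    """Equations (13)-(14): cohesion-driven move toward a promising region."""
    c = np.cos(rng.random())   # cohesion operator C_i(t) = cos(xi(t)), xi ~ U(0, 1)
    return x_i + c * (x_rw - x_n)
```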
The pseudocode of the SMO algorithm is shown in Algorithm 1.
Algorithm 1: Starling Murmuration Optimizer (SMO)
  Input: N (population size), k (number of flocks), and MaxIt (maximum number of iterations).
  Output: Global best solution.
1:  Begin
2:    Randomly distribute N starlings in the search space.
3:    Set t = 1.
4:    While t ≤ MaxIt
5:      Separate a portion of starlings of size P_sep from the population using Equation (2).
6:      Flock the rest of the population into k flocks using the dynamic multi-flock construction.
7:      Compute the quality of each flock f_q using Equation (5).
8:      For q = 1 : k
9:        If Q_q(t) ≤ µ_Q(t)
10:         Move the starlings of flock f_q using the diving strategy.
11:       Else
12:         Move the starlings of flock f_q using the whirling strategy.
13:       End if
14:     End for
15:     Update the positions of the starlings and the global best solution.
16:     t = t + 1.
17:   End while
18:   Return the position of the best starling as the global best solution.
19: End

4. Binary Starling Murmuration Optimizer (BSMO)

SMO is a new metaheuristic algorithm that effectively solves various engineering and complex problems. However, the ability of the SMO algorithm to solve feature selection problems has not been studied yet, which is the motivation for this study. In this study, a binary starling murmuration optimizer (BSMO) is proposed to select effective features from the datasets of four important targeted diseases consisting of Diabetes, Heart problems, Hepatitis, and Coronavirus. The proposed BSMO is developed using two different approaches. The first approach uses S-shaped and V-shaped transfer functions, whereas the second approach maps the continuous search space to 0 or 1 using a threshold value.
Suppose matrix X represents the population of starlings in BSMO; then Figure 1 shows the representation scheme of the proposed BSMO algorithm for solving the feature selection problem. Figure 1a–c show the starling Si, the binary vector Bi, and the selected feature set SFi. Each starling Si is transformed by the different transfer functions into the binary vector Bi, in which a value of 1 for an element means the corresponding feature is selected to form the selected feature set SFi. Accordingly, the BSMO algorithm uses the fitness function defined in Equation (15) [83,88].
$$Fit_i = \alpha E + \beta \frac{|SF_i|}{D} \qquad (15)$$
where E determines the error rate of the classification algorithm, and $|SF_i|$ and D are the number of selected features in the subset $SF_i$ and the total number of features in the dataset, respectively. α and $\beta = 1 - \alpha$ are two constants that control the significance of classification accuracy and feature-subset reduction, respectively. Since accuracy is more important than the number of features, β is usually much smaller than α; in this study, α = 0.99 and β = 0.01, according to [89].
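A sketch of this wrapper fitness, assuming the k-NN classifier with k = 5 and 10-fold cross-validation described in Section 5.1, and taking the error rate E as 1 minus the cross-validated accuracy (one common choice):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def bsmo_fitness(mask, X, y, alpha=0.99, beta=0.01):
    """Equation (15): Fit = alpha * E + beta * |SF| / D."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:
        return 1.0                                  # penalize empty subsets
    knn = KNeighborsClassifier(n_neighbors=5)
    acc = cross_val_score(knn, X[:, selected], y, cv=10).mean()
    return alpha * (1.0 - acc) + beta * selected.size / X.shape[1]
```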

4.1. BSMO Using S-Shaped Transfer Function (S-BSMO)

This method uses the sigmoid (S-shaped) transfer function to map the continuous SMO to a binary version of the algorithm. Updating the positions of the starlings through the transfer function S places them in a binary search space, where their position vectors take only the values "0" or "1". The sigmoid function S2, formulated in Equation (16), was first used in BPSO to develop a binary PSO [89,90].
$$S(x_i^d(t+1)) = \frac{1}{1 + e^{-x_i^d(t)}} \qquad (16)$$
where $x_i^d(t)$ and $S(x_i^d(t+1))$ denote the position of the ith search agent in dimension d at iteration t and the probability of changing its binary position value, respectively. Since the calculated value of S is still continuous, it must be compared with a threshold value to create a binary value. Therefore, the new position of the search agent is updated using Equation (17), where $b_i^d(t+1)$ is the binary position of the ith search agent in dimension d, and r is a random value between 0 and 1.
$$b_i^d(t+1) = \begin{cases} 0 & \text{if } r < S(x_i^d(t+1)) \\ 1 & \text{if } r \geq S(x_i^d(t+1)) \end{cases} \qquad (17)$$
In addition to the transfer function S2 introduced in Equation (16), three other S-shaped transfer functions, S1, S3, and S4, are used. All four transfer functions are formulated in Table 1 and shown visually in Figure 2. According to the figure, as the slope of the transfer function increases, the probability of changing the position value increases. Therefore, S1 yields the highest probability and S4 the lowest probability of effectively updating agents' positions and finding the optimal solution.
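Since Table 1 is not reproduced here, the sketch below assumes the standard S1-S4 formulations from the binary-PSO literature [63]; the binarization rule follows Equation (17) as written above:

```python
import numpy as np

S_SHAPED = {                                        # standard S-shaped family [63]
    "S1": lambda x: 1.0 / (1.0 + np.exp(-2.0 * x)),
    "S2": lambda x: 1.0 / (1.0 + np.exp(-x)),       # Equation (16)
    "S3": lambda x: 1.0 / (1.0 + np.exp(-x / 2.0)),
    "S4": lambda x: 1.0 / (1.0 + np.exp(-x / 3.0)),
}

def s_shaped_binarize(x, name, rng):
    """Equation (17): b = 0 where r < S(x), b = 1 otherwise."""
    r = rng.random(np.shape(x))
    return np.where(r < S_SHAPED[name](x), 0, 1)
```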

4.2. BSMO Using V-Shaped Transfer Function (V-BSMO)

In this approach, the V-shaped transfer function is used to calculate the probability of changing the positions of the agents in the SMO algorithm. Probability values are calculated using the V-shaped (hyperbolic tangent) transfer function in Equation (18) [64], where $x_i^d(t)$ indicates the position value of the ith search agent in dimension d at iteration t.
$$V(x_i^d(t+1)) = |\tanh(x_i^d(t))| \qquad (18)$$
Since the V-shaped transfer function differs from the S-shaped transfer function, Equation (19) [64] is used to update the position of each search agent after the probability values are calculated.
$$b_i^d(t+1) = \begin{cases} \neg x_i^d(t) & \text{if } r < V(x_i^d(t+1)) \\ x_i^d(t) & \text{if } r \geq V(x_i^d(t+1)) \end{cases} \qquad (19)$$
where $b_i^d(t+1)$ indicates the binary position of the ith search agent at iteration t + 1 in dimension d, $\neg x_i^d(t)$ indicates the complement of $x_i^d(t)$, and r is a random number in [0, 1]. Unlike the S-shaped transfer function, the V-shaped transfer function does not force the search agents to 0 or 1. According to Equation (19), if the value of V is small and less than the value of r, the binary position of the search agent in dimension d does not change. On the other hand, if the calculated value of the transfer function is greater than or equal to r, the position of the search agent is changed to the complement of the current binary position. Table 1 gives the mathematical formulations of transfer functions V1, V2, V3, and V4, and Figure 2 represents them visually. According to Figure 2, V1 has the highest probability, and V2, V3, and V4 have progressively lower probabilities of moving the positions of search agents [89].
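Analogously, a sketch of the V-shaped binarization, assuming the standard V1-V4 forms from [64] in place of Table 1, where b_prev is the agent's previous binary position:

```python
import numpy as np
from scipy.special import erf

V_SHAPED = {                                             # standard V-shaped family [64]
    "V1": lambda x: np.abs(erf(np.sqrt(np.pi) / 2.0 * x)),
    "V2": lambda x: np.abs(np.tanh(x)),                  # Equation (18)
    "V3": lambda x: np.abs(x / np.sqrt(1.0 + x ** 2)),
    "V4": lambda x: np.abs(2.0 / np.pi * np.arctan(np.pi / 2.0 * x)),
}

def v_shaped_binarize(x, b_prev, name, rng):
    """Equation (19): flip the previous bit where r < V(x), keep it otherwise."""
    r = rng.random(np.shape(x))
    return np.where(r < V_SHAPED[name](x), 1 - b_prev, b_prev)
```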

4.3. BSMO Using Variable Threshold Method (Threshold-BSMO)

In this approach, SMO transforms the continuous solutions into binary form using the variable threshold method defined in Equation (20), where $b_i^d(t+1)$ is the new binary position of the ith search agent, and the threshold θ, set by the user, is 0.5.
$$b_i^d(t+1) = \begin{cases} 1 & \text{if } x_i^d(t+1) > \theta \\ 0 & \text{if } x_i^d(t+1) \leq \theta \end{cases} \qquad (20)$$
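This rule reduces to a one-line sketch with θ = 0.5 as stated above:

```python
import numpy as np

def threshold_binarize(x, theta=0.5):
    """Equation (20): select feature d (bit 1) wherever x > theta."""
    return (np.asarray(x) > theta).astype(int)
```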
Figure 3 represents the flowchart of the proposed BSMO algorithm, a binary version of the SMO algorithm for solving the feature selection problem. As shown in this figure, the optimization process starts by initializing the input variables, including the maximum number of iterations (MaxIt), population size (N), problem dimension (D), and number of flocks (k). First, N starlings are randomly distributed in a D-dimensional search space. Then, a portion of starlings ($P_{sep}$), determined using Equation (2), is randomly selected to separate from the population and explore the search space using the separating strategy defined in Equation (3). The rest of the starlings are partitioned among different flocks to exploit the search space using the whirling strategy defined in Equation (13) or to explore it using the diving strategy defined in Equation (7). The solutions obtained from these search strategies are mapped to binary values using the two binarization approaches demonstrated in Table 1 and Equation (20); that is, the solutions are restricted to the binary values 0 or 1 using Equations (17), (19), and (20). Finally, the solutions are evaluated using Equation (15). The optimization process is repeated until the termination condition, MaxIt, is satisfied, and the global best solution is reported as the output variable.

4.4. The Computational Complexity of the BSMO Algorithm

Since BSMO has six distinct phases (initialization, separating search strategy, multi-flock construction, diving or whirling search strategy, mapping, and fitness evaluation), its computational complexity can be computed as follows. The initialization phase's computational complexity is O(ND), considering N starlings are randomly allocated in a D-dimensional search space using Equation (1). Then, a portion of the starlings is randomly selected using Equation (2) to explore the search space with computational complexity O(ND). The cost of the multi-flock construction phase to build k flocks by partitioning N starlings is O(NlogN + k). In the next phase, the cost for each flock containing a subpopulation of n starlings to determine its quality using Equation (5) is O(nD), and the cost of moving it by either the diving or the whirling search strategy is also O(nD); thus, the overall complexity of this phase is O(knD), or O(ND) in the worst case. In the mapping phase, the continuous solutions are transformed into binary ones based on Table 1 and Equation (20) with computational complexity O(ND). Finally, in the fitness evaluation phase, the quality of binary solutions is assessed using Equation (15), which involves a K-fold cross-validation method, a k-NN classifier, and updating. The computational complexity of K-fold cross-validation with M samples is O(KM); since K is a constant, this equals O(M). Training the k-NN classifier with M samples and D features is O(MD), and the complexity of updating is O(ND). Since these phases are repeated T times, the total computational complexity of BSMO is O(ND + T(ND + NlogN + k + ND + ND + M + MD + ND)), which is equal to O(TD(N + M)).

5. Experimental Evaluation

The performance of the proposed BSMO algorithm is assessed in finding the optimal feature subset from the targeted datasets, Diabetes, Heart, Hepatitis, and Coronavirus disease 2019, downloaded from [91,92]. The outcomes of the nine BSMO variants are then compared with those of the competitive algorithms ACO [28], BBA [42], bGWO [43], and BWOA [39].
All experiments are run under the same experimental conditions. BSMO and all comparative algorithms are implemented in MATLAB R2019b. All experiments are run on an Intel(R) Core(TM) i5-3770 CPU at 3.4 GHz with 8 GB RAM under 64-bit Windows 10.

5.1. Parameter Settings of Algorithms and k-NN Classifier

In this study, the k-nearest neighbor (k-NN) classifier with k = 5 is used to classify the feature subsets in all algorithms [93]. To train the k-NN classifier, each dataset is randomly partitioned into training and testing sets using K-fold cross-validation with K = 10: one fold is used for the testing set, and the remaining K − 1 folds are used for the training set [94,95].
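The evaluation protocol can be sketched as follows; this is an illustrative re-implementation, not the authors' MATLAB code:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

def cv_accuracy(X, y, n_neighbors=5, n_folds=10, seed=0):
    """10-fold cross-validated accuracy of a 5-NN classifier (Section 5.1)."""
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in kf.split(X):
        knn = KNeighborsClassifier(n_neighbors=n_neighbors)
        knn.fit(X[train_idx], y[train_idx])                  # K - 1 folds for training
        scores.append(knn.score(X[test_idx], y[test_idx]))   # 1 fold for testing
    return float(np.mean(scores))
```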
For a fair comparison, all results were obtained under the same experimental conditions. The common parameters of BSMO and the comparative algorithms, such as the termination criterion and population size (N), are identical. In most optimization algorithms, the termination criterion is defined by the maximum number of iterations (MaxIt) or the maximum number of function evaluations (MaxFEs), where MaxIt = MaxFEs/N; here, MaxIt is set to 300 and N to 30. Due to the stochastic nature of the algorithms, all simulations are conducted over 15 independent runs. All results are reported using the standard statistical metrics maximum (Max), average (Avg), and minimum (Min). In each table, the best result is highlighted in boldface.
Table 2 shows the parameter values used for BSMO and the other comparative algorithms. The parameter values of all contender algorithms were set as in their original papers. Moreover, a sensitivity analysis of the key parameters of the BSMO algorithm, flock size (k) and population size (N), is performed to tune their values using the offline parameter tuning method. The tuning results are reported in Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6 of Appendix A in terms of fitness, error, accuracy, sensitivity, specificity, and precision metrics.

5.2. Evaluation Criteria

The performance of the proposed BSMO and the contender algorithms is assessed using evaluation criteria including fitness, accuracy, sensitivity, specificity, precision, and error. The fitness metric is computed using Equation (15). Accuracy, sensitivity, specificity, precision, and error are calculated using Equations (21)–(25) [96,97]. In these equations, TP and TN specify the numbers of positive and negative samples correctly classified by the classifier, respectively; FN is the number of positive samples incorrectly predicted as negative; and FP is the number of negative samples incorrectly predicted as positive [98].
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (21)$$
$$\text{Sensitivity} = \frac{TP}{TP + FN} \qquad (22)$$
$$\text{Specificity} = \frac{TN}{TN + FP} \qquad (23)$$
$$\text{Precision} = \frac{TP}{TP + FP} \qquad (24)$$
The error metric is computed using the mean square error (MSE) defined in Equation (25), where N is the number of samples, $y_i$ is the observed value, and $\hat{y}_i$ is the predicted value. Moreover, no constraint-handling methods are used in evaluating the proposed algorithm, since no constraints are considered in the feature selection problem.
$$\text{Error} = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 \qquad (25)$$
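For reference, these metrics follow directly from the confusion-matrix counts; a minimal sketch (note the specificity denominator TN + FP):

```python
import numpy as np

def classification_metrics(tp, tn, fp, fn):
    """Equations (21)-(24) from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }

def mse(y_true, y_pred):
    """Equation (25): mean square error over N samples."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))
```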

5.3. Numerical Results and Discussion

In this section, the simulation results of the proposed BSMO algorithm on the targeted medical datasets are presented.

5.3.1. Comparison of Algorithms to Detect Diabetes Disease

The Pima Indian Diabetes dataset [91] consists of eight features and 768 samples: 268 labeled diabetes-positive and 500 labeled diabetes-negative. The objective for this dataset is to detect whether or not a patient has diabetes. Table 3 shows that the proposed Threshold-BSMO achieves the best performance compared with all comparative algorithms.

5.3.2. Comparison of Algorithms to Detect Heart Disease

The Statlog (Heart) dataset [91] consists of 13 features and 270 samples with no missing values, used to detect the absence or presence of heart disease. In this dataset, 120 samples are labeled with the presence of heart disease and 150 with its absence. The performance of the nine BSMO variants is assessed and compared with well-known optimizers for diagnosing heart disease. The results in Table 4 show that the proposed Threshold-BSMO obtains a minimum fitness value of 0.1322 and a maximum accuracy of 87.037%, better than the other algorithms.

5.3.3. Comparison of Algorithms to Detect Hepatitis Disease

The Hepatitis disease dataset [91] is a complex dataset with many missing values that records occurrences of hepatitis in patients. It consists of 19 features and 155 samples, of which 123 belong to the live class and 32 to the die class. The optimization algorithms try to find the feature set that detects hepatitis with the highest accuracy. In this evaluation, the performance of the proposed algorithm is assessed and reported in Table 5. The results show that BSMO using the variable threshold obtains the optimal feature set with the minimum fitness value. Additionally, Threshold-BSMO achieves the highest classification accuracy compared with the contender algorithms.

5.3.4. Comparison of Algorithms to Detect Coronavirus Disease 2019 (COVID-19)

The COVID-19 pandemic is an infectious disease caused by the severe acute respiratory syndrome coronavirus [99], which emerged in Wuhan, China, in December 2019 and has profoundly affected human life [100]. Early detection of Coronavirus disease can reduce the transmission rate and slow the epidemic outbreak, and many optimization algorithms have been developed to help alleviate this global crisis [101]. In this section, the performance of the proposed algorithm is evaluated on the Coronavirus disease 2019 (COVID-19) dataset [92]. This dataset consists of two classes, death or recovery, and 13 features, including location, country, gender, age, whether the patient visited Wuhan, whether the patient is from Wuhan, fever, cough, cold, fatigue, body pain, malaise, and the number of days between the symptoms being noticed and admission to the hospital. The results reported in Table 6 indicate that the proposed Threshold-BSMO outperforms all contender algorithms and BSMO variants in detecting COVID-19.

5.4. Convergence Comparison

In addition, to compare the efficiency of BSMO with the other comparative algorithms, convergence curves were drawn for each dataset used in the evaluation. Figure 4 shows the convergence curves of all algorithms based on fitness value. According to the figure, Threshold-BSMO achieves the highest efficiency in diagnosing the Diabetes, Hepatitis, Heart, and Coronavirus 2019 diseases, with the lowest fitness value compared with the competitive algorithms.

5.5. Statistical Analysis

To compare the algorithms fairly and to choose the best transfer function for mapping continuous solutions to binary ones, Friedman's statistical test was used to rank the algorithms. Table 7 shows the results of Friedman's test according to the fitness values of the algorithms, in which Threshold-BSMO is the best variant for selecting effective features from the Diabetes, Heart, Hepatitis, and Coronavirus datasets.

6. Conclusions

Many metaheuristic algorithms have been applied in wrapper-based methods to select effective features from medical data; however, most cannot find those features that fulfill an acceptably accurate diagnosis of diseases. To deal with this weakness, a new binary metaheuristic algorithm named the binary starling murmuration optimizer (BSMO) is proposed to select effective features from different important diseases such as Diabetes, Heart, Hepatitis, and Coronavirus. The proposed BSMO uses two different approaches, S-shaped and V-shaped transfer functions and a variable threshold method, to convert continuous solutions to binary ones. Moreover, metrics such as fitness, accuracy, sensitivity, specificity, precision, and error were used to assess the proposed BSMO's performance against competing algorithms. Finally, the Friedman non-parametric test was used to show the proposed algorithm's superiority statistically. The statistical and experimental tests proved that the proposed BSMO algorithm is very competitive in selecting effective features from the targeted medical datasets, and the proposed Threshold-BSMO can effectively find the optimal feature subset for the Diabetes, Heart, Hepatitis, and Coronavirus diseases. Overall, considering the fitness criterion as the main criterion for identifying the most effective binary algorithm in selecting effective features from the medical datasets targeted in this study, Threshold-BSMO was the superior variant compared with the contender algorithms.
Although the proposed algorithm selects effective features better than the other comparative algorithms, this study was limited to the four targeted disease datasets. Therefore, the proposed BSMO algorithm can be applied to, and improved for, other real-world applications. Moreover, a self-adaptive parameter tuning method can be applied instead of the trial-and-error method used for tuning some of BSMO's parameters. BSMO can also be armed with other binarization techniques and transfer functions for selecting effective features in other applications. In addition, SMO's search strategies can be hybridized with other metaheuristic algorithms to generate better candidate continuous solutions.

Author Contributions

Conceptualization, M.H.N.-S., Z.A.V. and H.Z.; methodology, M.H.N.-S., Z.A.V. and H.Z.; software, M.H.N.-S., Z.A.V. and H.Z.; validation, M.H.N.-S., Z.A.V., H.Z. and S.M.; formal analysis, M.H.N.-S., Z.A.V., H.Z. and S.M.; investigation, M.H.N.-S., Z.A.V. and H.Z.; resources, M.H.N.-S. and H.Z.; data curation, M.H.N.-S., Z.A.V., H.Z. and S.M.; writing, original draft preparation, M.H.N.-S., Z.A.V. and H.Z.; writing—review and editing, M.H.N.-S., Z.A.V., H.Z. and S.M.; visualization, M.H.N.-S., Z.A.V. and H.Z.; supervision, M.H.N.-S. and S.M.; project administration, M.H.N.-S. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data and code used in the research may be obtained from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The performance of metaheuristic optimization algorithms strongly depends on selecting proper values for their parameters. Therefore, in this section, the sensitivity of the BSMO algorithm to different values of its key parameters, flock size (k) and population size (N), is analyzed, and the parameters are tuned using the offline parameter tuning method. The detailed results of the pretests and experiments for tuning the BSMO's parameter values to find its best robustness in solving feature selection problems on the targeted medical datasets are reported in Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6 in terms of fitness, error, accuracy, sensitivity, specificity, and precision. The Friedman ranks in Table A1 and Table A2 specify the highest performance of BSMO when k and N are equal to 5 and 30, respectively.
Table A1. Parameter settings of the BSMO algorithm in terms of fitness values.

k = 3, N = 30 (first four data columns) and k = 5, N = 20 (last four data columns):

| Algorithm | Metric | Diabetes | Heart | Hepatitis | COVID-19 | Diabetes | Heart | Hepatitis | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1-BSMO | Avg | 0.2349 | 0.1463 | 0.1235 | 0.0519 | 0.2354 | 0.1482 | 0.1238 | 0.0523 |
| | Min | 0.2295 | 0.1382 | 0.1129 | 0.0493 | 0.2305 | 0.1418 | 0.1014 | 0.0490 |
| S2-BSMO | Avg | 0.2358 | 0.1486 | 0.1222 | 0.0517 | 0.2368 | 0.1492 | 0.1230 | 0.0522 |
| | Min | 0.2317 | 0.1411 | 0.1067 | 0.0497 | 0.2292 | 0.1403 | 0.1080 | 0.0505 |
| S3-BSMO | Avg | 0.2370 | 0.1487 | 0.1196 | 0.0516 | 0.2367 | 0.1506 | 0.1242 | 0.0519 |
| | Min | 0.2331 | 0.1403 | 0.1069 | 0.0493 | 0.2241 | 0.1403 | 0.1138 | 0.0485 |
| S4-BSMO | Avg | 0.2360 | 0.1497 | 0.1225 | 0.0519 | 0.2369 | 0.1517 | 0.1234 | 0.0521 |
| | Min | 0.2305 | 0.1432 | 0.1118 | 0.0505 | 0.2319 | 0.1403 | 0.1050 | 0.0505 |
| V1-BSMO | Avg | 0.2338 | 0.1419 | 0.1103 | 0.0513 | 0.2361 | 0.1418 | 0.1109 | 0.0519 |
| | Min | 0.2305 | 0.1358 | 0.0990 | 0.0479 | 0.2319 | 0.1380 | 0.0994 | 0.0510 |
| V2-BSMO | Avg | 0.2335 | 0.1413 | 0.1096 | 0.0515 | 0.2365 | 0.1428 | 0.1108 | 0.0510 |
| | Min | 0.2319 | 0.1380 | 0.0995 | 0.0493 | 0.2345 | 0.1387 | 0.1059 | 0.0497 |
| V3-BSMO | Avg | 0.2341 | 0.1410 | 0.1091 | 0.0507 | 0.2347 | 0.1423 | 0.1103 | 0.0508 |
| | Min | 0.2319 | 0.1351 | 0.0981 | 0.0497 | 0.2305 | 0.1395 | 0.1003 | 0.0475 |
| V4-BSMO | Avg | 0.2330 | 0.1410 | 0.1092 | 0.0505 | 0.2344 | 0.1422 | 0.1101 | 0.0514 |
| | Min | 0.2240 | 0.1380 | 0.0990 | 0.0482 | 0.2319 | 0.1380 | 0.0999 | 0.0486 |
| Threshold-BSMO | Avg | 0.2314 | 0.1375 | 0.1044 | 0.0487 | 0.2324 | 0.1395 | 0.1144 | 0.0497 |
| | Min | 0.2268 | 0.1308 | 0.0884 | 0.0463 | 0.2254 | 0.1337 | 0.0978 | 0.0482 |

Friedman rank: 2 (k = 3, N = 30); 4 (k = 5, N = 20).

k = 5, N = 30 (first four data columns) and k = 7, N = 30 (last four data columns):

| Algorithm | Metric | Diabetes | Heart | Hepatitis | COVID-19 | Diabetes | Heart | Hepatitis | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1-BSMO | Avg | 0.2342 | 0.1460 | 0.1265 | 0.0515 | 0.2352 | 0.1463 | 0.1230 | 0.0517 |
| | Min | 0.2266 | 0.1411 | 0.1147 | 0.0493 | 0.2330 | 0.1374 | 0.1146 | 0.0497 |
| S2-BSMO | Avg | 0.2352 | 0.1481 | 0.1218 | 0.0516 | 0.2344 | 0.1462 | 0.1208 | 0.0518 |
| | Min | 0.2267 | 0.1424 | 0.1118 | 0.0490 | 0.2293 | 0.1403 | 0.1088 | 0.0509 |
| S3-BSMO | Avg | 0.2373 | 0.1495 | 0.1213 | 0.0517 | 0.2360 | 0.1484 | 0.1211 | 0.0517 |
| | Min | 0.2291 | 0.1432 | 0.1051 | 0.0497 | 0.2331 | 0.1432 | 0.1128 | 0.0505 |
| S4-BSMO | Avg | 0.2368 | 0.1492 | 0.1209 | 0.0516 | 0.2367 | 0.1497 | 0.1238 | 0.0518 |
| | Min | 0.2291 | 0.1403 | 0.1070 | 0.0490 | 0.2331 | 0.1440 | 0.1120 | 0.0501 |
| V1-BSMO | Avg | 0.2344 | 0.1423 | 0.1109 | 0.0510 | 0.2343 | 0.1427 | 0.1096 | 0.0509 |
| | Min | 0.2294 | 0.1387 | 0.0977 | 0.0497 | 0.2293 | 0.1411 | 0.0990 | 0.0489 |
| V2-BSMO | Avg | 0.2343 | 0.1417 | 0.1106 | 0.0509 | 0.2339 | 0.1410 | 0.1098 | 0.0508 |
| | Min | 0.2266 | 0.1380 | 0.0998 | 0.0489 | 0.2294 | 0.1387 | 0.1046 | 0.0497 |
| V3-BSMO | Avg | 0.2353 | 0.1411 | 0.1107 | 0.0510 | 0.2354 | 0.1413 | 0.1125 | 0.0515 |
| | Min | 0.2306 | 0.1351 | 0.0994 | 0.0486 | 0.2320 | 0.1380 | 0.1073 | 0.0496 |
| V4-BSMO | Avg | 0.2335 | 0.1414 | 0.1096 | 0.0506 | 0.2330 | 0.1425 | 0.1100 | 0.0507 |
| | Min | 0.2292 | 0.1380 | 0.0990 | 0.0478 | 0.2293 | 0.1403 | 0.1049 | 0.0490 |
| Threshold-BSMO | Avg | 0.2306 | 0.1378 | 0.1081 | 0.0488 | 0.2302 | 0.1370 | 0.0920 | 0.0491 |
| | Min | 0.2229 | 0.1308 | 0.0924 | 0.0451 | 0.2266 | 0.1308 | 0.0920 | 0.0478 |

Friedman rank: 1 (k = 5, N = 30); 3 (k = 7, N = 30).
Table A2. Parameter settings of the BSMO algorithm in terms of error values.

k = 3, N = 30 (first four data columns) and k = 5, N = 20 (last four data columns):

| Algorithm | Metric | Diabetes | Heart | Hepatitis | COVID-19 | Diabetes | Heart | Hepatitis | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1-BSMO | Avg | 0.2505 | 0.1954 | 0.1594 | 0.0523 | 0.2541 | 0.2081 | 0.1692 | 0.0538 |
| | Min | 0.2422 | 0.1556 | 0.1212 | 0.0498 | 0.2383 | 0.1593 | 0.1412 | 0.0498 |
| S2-BSMO | Avg | 0.2492 | 0.1949 | 0.1628 | 0.0518 | 0.2556 | 0.2096 | 0.1693 | 0.0531 |
| | Min | 0.2370 | 0.1593 | 0.1358 | 0.0475 | 0.2422 | 0.1704 | 0.1342 | 0.0487 |
| S3-BSMO | Avg | 0.2517 | 0.1956 | 0.1624 | 0.0523 | 0.2541 | 0.2057 | 0.1672 | 0.0535 |
| | Min | 0.2408 | 0.1519 | 0.1429 | 0.0476 | 0.2369 | 0.1519 | 0.1421 | 0.0498 |
| S4-BSMO | Avg | 0.2498 | 0.2016 | 0.1573 | 0.0529 | 0.2546 | 0.2037 | 0.1664 | 0.0536 |
| | Min | 0.2371 | 0.1593 | 0.1225 | 0.0487 | 0.2383 | 0.1667 | 0.1358 | 0.0498 |
| V1-BSMO | Avg | 0.2551 | 0.1930 | 0.1549 | 0.0530 | 0.2589 | 0.2004 | 0.1623 | 0.0527 |
| | Min | 0.2461 | 0.1519 | 0.1162 | 0.0486 | 0.2488 | 0.1519 | 0.1346 | 0.0510 |
| V2-BSMO | Avg | 0.2578 | 0.1946 | 0.1532 | 0.0528 | 0.2560 | 0.2044 | 0.1611 | 0.0542 |
| | Min | 0.2461 | 0.1630 | 0.1096 | 0.0464 | 0.2396 | 0.1482 | 0.1300 | 0.0510 |
| V3-BSMO | Avg | 0.2504 | 0.1906 | 0.1588 | 0.0536 | 0.2590 | 0.2047 | 0.1669 | 0.0546 |
| | Min | 0.2357 | 0.1519 | 0.1171 | 0.0486 | 0.2474 | 0.1704 | 0.1479 | 0.0509 |
| V4-BSMO | Avg | 0.2557 | 0.1878 | 0.1545 | 0.0528 | 0.2554 | 0.2049 | 0.1635 | 0.0538 |
| | Min | 0.2474 | 0.1630 | 0.1279 | 0.0487 | 0.2422 | 0.1556 | 0.1417 | 0.0509 |
| Threshold-BSMO | Avg | 0.2514 | 0.1925 | 0.1640 | 0.0525 | 0.2563 | 0.2163 | 0.1629 | 0.0534 |
| | Min | 0.2370 | 0.1481 | 0.1233 | 0.0487 | 0.2448 | 0.1593 | 0.1150 | 0.0497 |

Friedman rank: 2 (k = 3, N = 30); 4 (k = 5, N = 20).

k = 5, N = 30 (first four data columns) and k = 7, N = 30 (last four data columns):

| Algorithm | Metric | Diabetes | Heart | Hepatitis | COVID-19 | Diabetes | Heart | Hepatitis | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1-BSMO | Avg | 0.2504 | 0.1964 | 0.1659 | 0.0511 | 0.2506 | 0.2020 | 0.1612 | 0.0521 |
| | Min | 0.2382 | 0.1593 | 0.1292 | 0.0452 | 0.2409 | 0.1630 | 0.1363 | 0.0463 |
| S2-BSMO | Avg | 0.2516 | 0.2015 | 0.1598 | 0.0520 | 0.2534 | 0.1983 | 0.1702 | 0.0520 |
| | Min | 0.2369 | 0.1556 | 0.1171 | 0.0498 | 0.2423 | 0.1593 | 0.1500 | 0.0475 |
| S3-BSMO | Avg | 0.2508 | 0.1930 | 0.1599 | 0.0521 | 0.2532 | 0.1847 | 0.1598 | 0.0521 |
| | Min | 0.2397 | 0.1556 | 0.1237 | 0.0487 | 0.2421 | 0.1630 | 0.1288 | 0.0464 |
| S4-BSMO | Avg | 0.2533 | 0.1907 | 0.1603 | 0.0532 | 0.2568 | 0.1901 | 0.1650 | 0.0532 |
| | Min | 0.2384 | 0.1481 | 0.1296 | 0.0498 | 0.2487 | 0.1519 | 0.1429 | 0.0487 |
| V1-BSMO | Avg | 0.2552 | 0.1884 | 0.1587 | 0.0537 | 0.2537 | 0.1915 | 0.1572 | 0.0533 |
| | Min | 0.2422 | 0.1593 | 0.1292 | 0.0476 | 0.2384 | 0.1593 | 0.1346 | 0.0498 |
| V2-BSMO | Avg | 0.2548 | 0.1911 | 0.1589 | 0.0530 | 0.2539 | 0.1896 | 0.1637 | 0.0527 |
| | Min | 0.2345 | 0.1481 | 0.1342 | 0.0474 | 0.2447 | 0.1630 | 0.1483 | 0.0521 |
| V3-BSMO | Avg | 0.2547 | 0.1956 | 0.1617 | 0.0530 | 0.2558 | 0.2096 | 0.1531 | 0.0528 |
| | Min | 0.2383 | 0.1667 | 0.1412 | 0.0475 | 0.2475 | 0.1889 | 0.1225 | 0.0464 |
| V4-BSMO | Avg | 0.2534 | 0.1959 | 0.1617 | 0.0532 | 0.2565 | 0.1959 | 0.1472 | 0.0538 |
| | Min | 0.2383 | 0.1519 | 0.1425 | 0.0452 | 0.2396 | 0.1556 | 0.1163 | 0.0521 |
| Threshold-BSMO | Avg | 0.2530 | 0.1952 | 0.1623 | 0.0518 | 0.2498 | 0.1986 | 0.1558 | 0.0528 |
| | Min | 0.2408 | 0.1519 | 0.1342 | 0.0487 | 0.2395 | 0.1593 | 0.1558 | 0.0487 |

Friedman rank: 1 (k = 5, N = 30); 3 (k = 7, N = 30).
Table A3. Parameter settings of the BSMO algorithm in terms of accuracy values.

k = 3, N = 30 (first four data columns) and k = 5, N = 20 (last four data columns):

| Algorithm | Metric | Diabetes | Heart | Hepatitis | COVID-19 | Diabetes | Heart | Hepatitis | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1-BSMO | Avg | 76.9228 | 85.7901 | 88.1236 | 95.4467 | 76.8885 | 85.5802 | 88.1083 | 95.401 |
| | Max | 77.3257 | 86.6667 | 89.125 | 95.5961 | 77.4761 | 86.2963 | 90.2917 | 95.707 |
| S2-BSMO | Avg | 76.8251 | 85.4568 | 88.1625 | 95.364 | 76.717 | 85.4444 | 88.0736 | 95.3321 |
| | Max | 77.3582 | 86.2963 | 89.75 | 95.5921 | 77.4761 | 86.2963 | 89.625 | 95.4852 |
| S3-BSMO | Avg | 76.6528 | 85.4198 | 88.3667 | 95.3389 | 76.677 | 85.2222 | 87.9167 | 95.2899 |
| | Max | 77.0899 | 86.2963 | 89.6667 | 95.4852 | 77.9973 | 86.2963 | 89.0417 | 95.4878 |
| S4-BSMO | Avg | 76.7293 | 85.3086 | 88.0444 | 95.3477 | 76.6684 | 85.0988 | 87.9708 | 95.3 |
| | Max | 77.2198 | 85.9259 | 89.125 | 95.4878 | 77.0933 | 86.2963 | 89.875 | 95.4838 |
| V1-BSMO | Avg | 76.9143 | 86 | 89.2708 | 95.2115 | 76.707 | 86 | 89.158 | 95.197 |
| | Max | 77.218 | 86.6667 | 90.375 | 95.4758 | 77.088 | 86.296 | 90.333 | 95.362 |
| V2-BSMO | Avg | 76.9203 | 86.0617 | 89.2958 | 95.2158 | 76.644 | 85.926 | 89.229 | 95.238 |
| | Max | 77.0779 | 86.2963 | 90.375 | 95.3635 | 76.822 | 86.296 | 89.833 | 95.364 |
| V3-BSMO | Avg | 76.9173 | 86.0741 | 89.3458 | 95.2694 | 76.834 | 85.951 | 89.211 | 95.268 |
| | Max | 77.2095 | 86.6667 | 90.4583 | 95.3729 | 77.227 | 86.296 | 90.292 | 95.595 |
| V4-BSMO | Avg | 76.9872 | 86.0864 | 89.3708 | 95.2895 | 76.876 | 85.951 | 89.289 | 95.204 |
| | Max | 78.0041 | 86.2963 | 90.375 | 95.83 | 77.081 | 86.296 | 90.333 | 95.481 |
| Threshold-BSMO | Avg | 77.3771 | 86.5309 | 89.8972 | 95.5124 | 77.136 | 86.37 | 88.919 | 95.467 |
| | Max | 78.1203 | 87.4074 | 91.5 | 95.715 | 77.862 | 87.037 | 90.542 | 95.599 |

k = 5, N = 30 (first four data columns) and k = 7, N = 30 (last four data columns):

| Algorithm | Metric | Diabetes | Heart | Hepatitis | COVID-19 | Diabetes | Heart | Hepatitis | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1-BSMO | Avg | 76.9719 | 85.8148 | 87.8319 | 95.417 | 76.895 | 85.778 | 88.189 | 95.427 |
| | Max | 77.7409 | 86.2963 | 89 | 95.5988 | 77.216 | 86.667 | 88.958 | 95.599 |
| S2-BSMO | Avg | 76.8537 | 85.5185 | 88.1708 | 95.3861 | 76.927 | 85.704 | 88.3 | 95.389 |
| | Max | 77.7341 | 85.9259 | 89.1667 | 95.5961 | 77.344 | 86.296 | 89.542 | 95.482 |
| S3-BSMO | Avg | 76.6101 | 85.3333 | 88.2153 | 95.3308 | 76.763 | 85.457 | 88.192 | 95.336 |
| | Max | 77.4897 | 85.9259 | 89.9167 | 95.6001 | 76.965 | 85.926 | 89.083 | 95.482 |
| S4-BSMO | Avg | 76.6654 | 85.3704 | 88.2306 | 95.3347 | 76.643 | 85.333 | 87.922 | 95.341 |
| | Max | 77.4863 | 86.2963 | 89.6667 | 95.5948 | 76.96 | 85.926 | 89.167 | 95.484 |
| V1-BSMO | Avg | 76.889 | 85.9383 | 89.1542 | 95.2469 | 76.864 | 85.963 | 89.313 | 95.24 |
| | Max | 77.3411 | 86.2963 | 90.5 | 95.5974 | 77.348 | 86.296 | 90.375 | 95.376 |
| V2-BSMO | Avg | 76.8872 | 86.037 | 89.2069 | 95.263 | 76.918 | 86.148 | 89.267 | 95.227 |
| | Max | 77.6128 | 86.2963 | 90.375 | 95.4812 | 77.353 | 86.296 | 89.792 | 95.365 |
| V3-BSMO | Avg | 76.7716 | 86.0741 | 89.1986 | 95.2695 | 76.774 | 86.037 | 89.013 | 95.23 |
| | Max | 77.2163 | 86.6667 | 90.375 | 95.4838 | 77.075 | 86.296 | 89.583 | 95.607 |
| V4-BSMO | Avg | 76.9639 | 86.0123 | 89.3278 | 95.2692 | 76.992 | 85.926 | 89.292 | 95.293 |
| | Max | 77.471 | 86.2963 | 90.375 | 95.4892 | 77.346 | 86.296 | 89.833 | 95.365 |
| Threshold-BSMO | Avg | 77.3077 | 86.5309 | 89.5194 | 95.537 | 77.328 | 86.6173 | 91.0833 | 95.4834 |
| | Max | 77.9904 | 87.4074 | 91.0417 | 95.8353 | 77.6179 | 87.4074 | 91.0833 | 95.7124 |
Table A4. Parameter settings of the BSMO algorithm in terms of sensitivity values.

k = 3, N = 30 (first four data columns) and k = 5, N = 20 (last four data columns):

| Algorithm | Metric | Diabetes | Heart | Hepatitis | COVID-19 | Diabetes | Heart | Hepatitis | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1-BSMO | Avg | 88.7609 | 93.8971 | 68.6377 | 99.2675 | 88.365 | 93.4924 | 68.8162 | 99.2336 |
| | Max | 90.2278 | 96.2467 | 85.3876 | 99.7684 | 89.7842 | 95.292 | 78.4811 | 100 |
| S2-BSMO | Avg | 88.3668 | 93.6035 | 71.8856 | 99.3568 | 88.22 | 92.9212 | 69.6659 | 99.3498 |
| | Max | 89.1665 | 95.3642 | 84.1441 | 99.8295 | 89.7151 | 95.2571 | 79.7631 | 100 |
| S3-BSMO | Avg | 88.1191 | 93.0296 | 72.7952 | 99.3912 | 88.0791 | 93.0515 | 69.5805 | 99.383 |
| | Max | 89.0592 | 95.2235 | 85.8833 | 100 | 89.7059 | 95.8368 | 79.1405 | 100 |
| S4-BSMO | Avg | 88.191 | 93.1775 | 72.6372 | 99.4499 | 88.0248 | 93.0834 | 71.6037 | 99.2982 |
| | Max | 89.0733 | 95.3049 | 81.5983 | 100 | 89.4038 | 96.6535 | 88.5142 | 99.7212 |
| V1-BSMO | Avg | 88.3628 | 94.2365 | 78.835 | 99.9088 | 88.327 | 94.012 | 76.679 | 99.504 |
| | Max | 89.4449 | 95.5444 | 88.3507 | 100 | 89.715 | 94.952 | 85.144 | 100 |
| V2-BSMO | Avg | 87.6423 | 94.536 | 79.637 | 99.9252 | 87.515 | 93.419 | 80.879 | 99.696 |
| | Max | 88.1193 | 96.3431 | 90.6895 | 100 | 88.523 | 94.47 | 88.078 | 100 |
| V3-BSMO | Avg | 87.8565 | 94.0807 | 79.3312 | 99.8619 | 88.05 | 94.206 | 75.106 | 99.811 |
| | Max | 88.9304 | 96.2699 | 86.7033 | 100 | 89.359 | 95.947 | 83.572 | 100 |
| V4-BSMO | Avg | 88.4581 | 94.1059 | 80.3357 | 99.8371 | 88.104 | 93.948 | 78.811 | 99.775 |
| | Max | 89.115 | 95.4339 | 89.9166 | 100 | 89.716 | 94.769 | 91.569 | 100 |
| Threshold-BSMO | Avg | 89.0468 | 94.655 | 80.116 | 99.3936 | 88.503 | 94.575 | 75.676 | 99.348 |
| | Max | 91.0607 | 96.4165 | 88.838 | 100 | 89.44 | 96.612 | 84.286 | 99.548 |

k = 5, N = 30 (first four data columns) and k = 7, N = 30 (last four data columns):

| Algorithm | Metric | Diabetes | Heart | Hepatitis | COVID-19 | Diabetes | Heart | Hepatitis | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1-BSMO | Avg | 88.5142 | 93.6608 | 70.6404 | 99.2906 | 88.514 | 93.831 | 71.703 | 99.26 |
| | Max | 89.8454 | 95.0876 | 80.2298 | 99.7496 | 89.311 | 95.569 | 88.021 | 99.609 |
| S2-BSMO | Avg | 88.2422 | 93.3517 | 70.8924 | 99.3947 | 88.387 | 93.475 | 67.965 | 99.381 |
| | Max | 89.2631 | 95.3351 | 84.532 | 100 | 89.719 | 94.748 | 75.613 | 99.822 |
| S3-BSMO | Avg | 88.1787 | 93.1475 | 71.8705 | 99.3703 | 88.325 | 92.863 | 69.775 | 99.512 |
| | Max | 90.1796 | 94.4033 | 85.8738 | 100 | 90.142 | 95.51 | 75.39 | 100 |
| S4-BSMO | Avg | 88.3085 | 93.2132 | 72.851 | 99.4093 | 88.234 | 93.467 | 74.89 | 99.486 |
| | Max | 89.7088 | 95.0297 | 82.1369 | 100 | 88.447 | 96.508 | 90.417 | 100 |
| V1-BSMO | Avg | 88.2848 | 93.8571 | 78.8832 | 99.8598 | 88.244 | 94.334 | 78.981 | 99.861 |
| | Max | 89.7132 | 96.2621 | 85.8624 | 100 | 88.978 | 96.082 | 87.181 | 100 |
| V2-BSMO | Avg | 88.6261 | 94.4918 | 79.3521 | 99.8182 | 88.217 | 94.237 | 76.884 | 99.87 |
| | Max | 90.0085 | 96.2525 | 87.3972 | 100 | 89.536 | 96.225 | 85.649 | 100 |
| V3-BSMO | Avg | 88.245 | 94.1042 | 78.5909 | 99.7693 | 88.328 | 94.287 | 78.435 | 99.707 |
| | Max | 89.6911 | 95.6443 | 86.1964 | 100 | 89.598 | 96.606 | 85.285 | 100 |
| V4-BSMO | Avg | 88.1009 | 94.0096 | 79.7051 | 99.7845 | 88.377 | 94.327 | 77.526 | 99.802 |
| | Max | 89.484 | 95.4345 | 88.4275 | 100 | 89.576 | 95.545 | 84.209 | 100 |
| Threshold-BSMO | Avg | 89 | 94.6804 | 80.2438 | 99.3774 | 89.1512 | 94.8397 | 81.5161 | 99.4531 |
| | Max | 89.9871 | 96.6626 | 91.3715 | 100 | 90.8328 | 96.8864 | 81.5161 | 100 |
Table A5. Parameter settings of the BSMO algorithm in terms of precision values.

k = 3, N = 30 (first four data columns) and k = 5, N = 20 (last four data columns):

| Algorithm | Metric | Diabetes | Heart | Hepatitis | COVID-19 | Diabetes | Heart | Hepatitis | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1-BSMO | Avg | 83.3024 | 89.7759 | 81.8452 | 97.6496 | 82.9358 | 88.9528 | 77.9683 | 97.6013 |
| | Max | 84.5112 | 91.1126 | 92.5556 | 98.0808 | 84.0629 | 91.3534 | 92.5714 | 97.9844 |
| S2-BSMO | Avg | 83.1656 | 89.6198 | 83.3028 | 97.5525 | 82.995 | 88.9851 | 77.8085 | 97.5018 |
| | Max | 83.9979 | 91.6603 | 96.8889 | 97.9414 | 84.226 | 92.1922 | 96.6667 | 98.2263 |
| S3-BSMO | Avg | 82.8792 | 89.3916 | 80.6788 | 97.6031 | 82.7702 | 89.4225 | 78.8535 | 97.4893 |
| | Max | 84.0439 | 92.1316 | 94 | 97.9846 | 84.4498 | 92.9411 | 96.6667 | 98.1287 |
| S4-BSMO | Avg | 82.9226 | 90.0263 | 82.091 | 97.5329 | 82.8662 | 89.0141 | 78.5044 | 97.5208 |
| | Max | 83.5297 | 93.4569 | 98.3333 | 98.0334 | 84.1876 | 91.7804 | 92.4643 | 98.099 |
| V1-BSMO | Avg | 83.3034 | 89.275 | 82.3873 | 97.3381 | 82.916 | 89.014 | 82.316 | 97.303 |
| | Max | 84.5946 | 91.9784 | 89.9596 | 97.6875 | 83.93 | 90.58 | 88.611 | 97.645 |
| V2-BSMO | Avg | 82.8698 | 89.5967 | 85.3515 | 97.3886 | 82.777 | 90.147 | 78.368 | 97.291 |
| | Max | 83.2788 | 91.9132 | 100 | 97.8924 | 83.495 | 92.77 | 92.051 | 97.604 |
| V3-BSMO | Avg | 83.0581 | 89.6372 | 84.2314 | 97.4291 | 82.752 | 89.063 | 80.449 | 97.369 |
| | Max | 84.4839 | 92.3453 | 94.6429 | 98.2173 | 84.24 | 92.867 | 96.349 | 97.735 |
| V4-BSMO | Avg | 83.0346 | 90.0785 | 84.4764 | 97.4154 | 83.013 | 89.097 | 81.099 | 97.38 |
| | Max | 83.6545 | 92.453 | 100 | 98.0199 | 83.788 | 91.567 | 93.099 | 97.932 |
| Threshold-BSMO | Avg | 83.7292 | 91.6759 | 85.2687 | 97.7006 | 83.564 | 91.301 | 81.078 | 97.67 |
| | Max | 85.4714 | 93.7512 | 97.5 | 98.3656 | 84.936 | 92.869 | 95.325 | 98.032 |

k = 5, N = 30 (first four data columns) and k = 7, N = 30 (last four data columns):

| Algorithm | Metric | Diabetes | Heart | Hepatitis | COVID-19 | Diabetes | Heart | Hepatitis | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1-BSMO | Avg | 83.2288 | 89.2841 | 81.3914 | 97.7266 | 83.487 | 89.175 | 84.47 | 97.634 |
| | Max | 84.242 | 91.4817 | 95.0256 | 98.2616 | 84.571 | 89.756 | 96.5 | 98.064 |
| S2-BSMO | Avg | 83.1426 | 89.403 | 78.9332 | 97.5923 | 83.224 | 89.761 | 81.574 | 97.574 |
| | Max | 84.2726 | 91.5718 | 91.5289 | 97.9672 | 84.225 | 92.71 | 98 | 97.908 |
| S3-BSMO | Avg | 82.925 | 89.7136 | 81.3385 | 97.5576 | 82.855 | 89.229 | 78.523 | 97.55 |
| | Max | 84.4974 | 91.6581 | 96.9048 | 98.2389 | 84.549 | 91.133 | 86.998 | 97.873 |
| S4-BSMO | Avg | 82.764 | 89.4763 | 81.9163 | 97.5954 | 83.236 | 89.538 | 82.034 | 97.459 |
| | Max | 83.8476 | 91.4379 | 93.1111 | 98.126 | 84.582 | 90.938 | 98.75 | 97.872 |
| V1-BSMO | Avg | 83.1764 | 89.3417 | 83.8414 | 97.3384 | 82.958 | 89.327 | 83.84 | 97.356 |
| | Max | 86.1848 | 91.9558 | 95.5556 | 97.924 | 84.323 | 90.731 | 91 | 97.672 |
| V2-BSMO | Avg | 82.9072 | 89.3503 | 84.3151 | 97.364 | 82.987 | 89.725 | 82.101 | 97.369 |
| | Max | 83.761 | 91.9198 | 96.3492 | 97.8237 | 84.159 | 92.147 | 92.372 | 97.734 |
| V3-BSMO | Avg | 83.1812 | 89.6908 | 85.7139 | 97.3319 | 83.21 | 89.99 | 81.337 | 97.334 |
| | Max | 84.5091 | 91.8579 | 97.5 | 97.9259 | 84.092 | 91.817 | 86.738 | 97.62 |
| V4-BSMO | Avg | 83.2658 | 89.7266 | 84.2503 | 97.4058 | 83.289 | 89.8 | 83.036 | 97.41 |
| | Max | 84.4214 | 91.4855 | 98.75 | 97.956 | 84.474 | 92.318 | 95.238 | 97.716 |
| Threshold-BSMO | Avg | 83.5823 | 91.4408 | 85.1981 | 97.7178 | 83.5777 | 91.6578 | 82.5912 | 97.6967 |
| | Max | 84.7376 | 93.8631 | 95.7778 | 98.0502 | 84.5587 | 94.5662 | 82.5912 | 97.9926 |
Table A6. Parameter settings of the BSMO algorithm in terms of specificity values.
Algorithms | Metric | Diabetes (k = 3, N = 30) | Heart (k = 3, N = 30) | Hepatitis (k = 3, N = 30) | COVID-19 (k = 3, N = 30) | Diabetes (k = 5, N = 20) | Heart (k = 5, N = 20) | Hepatitis (k = 5, N = 20) | COVID-19 (k = 5, N = 20)
S1-BSMO | Avg | 66.2342 | 86.542 | 99.4587 | 82.4862 | 65.1298 | 85.5329 | 99.2915 | 82.346
S1-BSMO | Max | 68.2502 | 87.9644 | 100 | 85.9366 | 66.4093 | 88.7883 | 99.8374 | 84.4318
S2-BSMO | Avg | 65.4575 | 86.9073 | 99.5053 | 81.7373 | 65.6179 | 85.9113 | 99.3399 | 81.8105
S2-BSMO | Max | 66.6288 | 89.656 | 100 | 85.0243 | 67.707 | 91.2161 | 99.9187 | 85.9674
S3-BSMO | Avg | 64.8413 | 86.4002 | 99.4766 | 82.3377 | 65.1382 | 86.3003 | 99.2782 | 81.4987
S3-BSMO | Max | 66.1969 | 89.2235 | 100 | 84.8969 | 67.4399 | 91.9747 | 100 | 86.2247
S4-BSMO | Avg | 65.7836 | 87.4361 | 99.4948 | 81.9079 | 65.0415 | 86.3582 | 99.4617 | 81.5022
S4-BSMO | Max | 66.5935 | 91.8474 | 99.9187 | 84.9453 | 67.8061 | 90.3087 | 100 | 86.1819
V1-BSMO | Avg | 65.75 | 88.5427 | 99.368 | 80.2567 | 65.413 | 88.039 | 99.063 | 80.501
V1-BSMO | Max | 67.4069 | 90.7376 | 100 | 83.9446 | 66.648 | 89.375 | 99.399 | 82.194
V2-BSMO | Avg | 66.6065 | 89.1717 | 99.4428 | 80.7396 | 64.975 | 89.529 | 99.286 | 79.295
V2-BSMO | Max | 68.4934 | 91.1563 | 100 | 84.5923 | 66.435 | 92.307 | 100 | 82.191
V3-BSMO | Avg | 66.1459 | 88.7473 | 99.3145 | 81.0216 | 64.927 | 88.066 | 99.258 | 80.392
V3-BSMO | Max | 67.5985 | 91.9959 | 99.8286 | 85.1101 | 66.815 | 90.58 | 99.829 | 82.735
V4-BSMO | Avg | 65.8974 | 88.7773 | 99.4461 | 81.007 | 65.451 | 88.153 | 99.243 | 80.932
V4-BSMO | Max | 67.0376 | 90.7347 | 100 | 85.0112 | 66.87 | 91.094 | 99.837 | 83.784
Threshold-BSMO | Avg | 66.9048 | 89.1478 | 99.3309 | 82.7429 | 66.572 | 88.889 | 99.058 | 81.727
Threshold-BSMO | Max | 69.2527 | 92.084 | 100 | 87.2612 | 70.241 | 90.63 | 100 | 84.678
Algorithms | Metric | Diabetes (k = 5, N = 30) | Heart (k = 5, N = 30) | Hepatitis (k = 5, N = 30) | COVID-19 (k = 5, N = 30) | Diabetes (k = 7, N = 30) | Heart (k = 7, N = 30) | Hepatitis (k = 7, N = 30) | COVID-19 (k = 7, N = 30)
S1-BSMO | Avg | 65.9602 | 86.0433 | 99.422 | 83.0173 | 66.509 | 85.883 | 99.513 | 82.554
S1-BSMO | Max | 68.2916 | 88.7512 | 100 | 87.3208 | 69.423 | 86.761 | 100 | 84.83
S2-BSMO | Avg | 65.7932 | 86.2794 | 99.4674 | 82.1743 | 66.123 | 87.242 | 99.451 | 82.462
S2-BSMO | Max | 68.023 | 88.2128 | 100 | 85.5629 | 68.293 | 90.414 | 100 | 84.963
S3-BSMO | Avg | 65.5806 | 87.0123 | 99.377 | 81.7173 | 66.103 | 86.476 | 99.369 | 81.929
S3-BSMO | Max | 67.8662 | 89.3531 | 100 | 87.298 | 69.399 | 88.543 | 99.837 | 84.695
S4-BSMO | Avg | 65.0295 | 86.3764 | 99.3568 | 82.1407 | 65.939 | 86.436 | 99.48 | 81.524
S4-BSMO | Max | 66.8104 | 89.284 | 100 | 86.2074 | 66.92 | 88.336 | 100 | 85.479
V1-BSMO | Avg | 65.7345 | 89.0497 | 99.471 | 80.376 | 65.889 | 88.631 | 99.324 | 80.379
V1-BSMO | Max | 70.5787 | 91.2747 | 100 | 84.761 | 68.174 | 89.595 | 99.919 | 82.705
V2-BSMO | Avg | 65.7846 | 88.4579 | 99.2964 | 80.6954 | 65.45 | 89.022 | 99.279 | 80.59
V2-BSMO | Max | 67.6503 | 91.1828 | 99.9187 | 84.0749 | 66.884 | 90.656 | 99.919 | 82.872
V3-BSMO | Avg | 66.0204 | 88.5817 | 99.4433 | 80.477 | 65.038 | 88.873 | 99.174 | 80.363
V3-BSMO | Max | 69.1626 | 90.503 | 100 | 83.9283 | 67.188 | 90.793 | 99.473 | 83.175
V4-BSMO | Avg | 66.2564 | 88.5579 | 99.4127 | 80.7991 | 66.366 | 88.717 | 99.438 | 81.74
V4-BSMO | Max | 69.1896 | 91.1526 | 100 | 84.4487 | 67.647 | 91.444 | 99.666 | 83.986
Threshold-BSMO | Avg | 66.6321 | 88.8911 | 99.4531 | 83.1011 | 66.562 | 89.1274 | 99.6652 | 82.6134
Threshold-BSMO | Max | 69.2028 | 92.1136 | 100 | 87.2075 | 68.5097 | 92.9485 | 99.6652 | 85.0483

References

  1. Nilashi, M.; bin Ibrahim, O.; Ahmadi, H.; Shahmoradi, L. An analytical method for diseases prediction using machine learning techniques. Comput. Chem. Eng. 2017, 106, 212–223. [Google Scholar] [CrossRef]
  2. Abdulkhaleq, M.T.; Rashid, T.A.; Alsadoon, A.; Hassan, B.A.; Mohammadi, M.; Abdullah, J.M.; Chhabra, A.; Ali, S.L.; Othman, R.N.; Hasan, H.A.; et al. Harmony search: Current studies and uses on healthcare systems. Artif. Intell. Med. 2022, 131, 102348. [Google Scholar] [CrossRef]
  3. Qader, S.M.; Hassan, B.A.; Rashid, T.A. An improved deep convolutional neural network by using hybrid optimization algorithms to detect and classify brain tumor using augmented MRI images. Multimedia Tools Appl. 2022, 81, 44059–44086. [Google Scholar] [CrossRef]
  4. Golestan Hashemi, F.S.; Razi Ismail, M.; Rafii Yusop, M.; Golestan Hashemi, M.S.; Nadimi Shahraki, M.H.; Rastegari, H.; Miah, G.; Aslani, F. Intelligent mining of large-scale bio-data: Bioinformatics applications. Biotechnol. Biotec. Eq. 2018, 32, 10–29. [Google Scholar] [CrossRef] [Green Version]
  5. Van Woensel, W.; Elnenaei, M.; Abidi, S.S.R.; Clarke, D.B.; Imran, S.A. Staged reflexive artificial intelligence driven testing algorithms for early diagnosis of pituitary disorders. Clin. Biochem. 2021, 97, 48–53. [Google Scholar] [CrossRef] [PubMed]
  6. Shah, D.; Patel, S.; Bharti, S.K. Heart Disease Prediction using Machine Learning Techniques. SN Comput. Sci. 2020, 1, 1–6. [Google Scholar] [CrossRef]
  7. Sharma, P.; Choudhary, K.; Gupta, K.; Chawla, R.; Gupta, D.; Sharma, A. Artificial plant optimization algorithm to detect heart rate & presence of heart disease using machine learning. Artif. Intell. Med. 2019, 102, 101752. [Google Scholar] [CrossRef]
  8. Devaraj, J.; Elavarasan, R.M.; Pugazhendhi, R.; Shafiullah, G.; Ganesan, S.; Jeysree, A.K.; Khan, I.A.; Hossain, E. Forecasting of COVID-19 cases using deep learning models: Is it reliable and practically significant? Results Phys. 2021, 21, 103817. [Google Scholar] [CrossRef]
  9. Remeseiro, B.; Bolon-Canedo, V. A review of feature selection methods in medical applications. Comput. Biol. Med. 2019, 112, 103375. [Google Scholar] [CrossRef]
  10. Gokulnath, C.B.; Shantharajah, S.P. An optimized feature selection based on genetic approach and support vector machine for heart disease. Clust. Comput. 2018, 22, 14777–14787. [Google Scholar] [CrossRef]
  11. Huda, S.; Yearwood, J.; Jelinek, H.F.; Hassan, M.M.; Fortino, G.; Buckland, M. A Hybrid Feature Selection With Ensemble Classification for Imbalanced Healthcare Data: A Case Study for Brain Tumor Diagnosis. IEEE Access 2016, 4, 9145–9154. [Google Scholar] [CrossRef]
  12. Xue, B.; Zhang, M.; Browne, W.N.; Yao, X. A survey on evolutionary computation approaches to feature selection. IEEE Trans. Evol. Comput. 2015, 20, 606–626. [Google Scholar] [CrossRef] [Green Version]
  13. García-Torres, M.; Gómez-Vela, F.; Melián-Batista, B.; Moreno-Vega, J.M. High-dimensional feature selection via feature grouping: A Variable Neighborhood Search approach. Inf. Sci. 2016, 326, 102–118. [Google Scholar] [CrossRef]
  14. Yu, L.; Liu, H. Feature selection for high-dimensional data: A fast correlation-based filter solution. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), Washington, DC, USA, 21–24 August 2003; pp. 856–863. [Google Scholar]
  15. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
  16. Gnana, D.A.A.; Balamurugan, S.A.A.; Leavline, E.J. Literature review on feature selection methods for high-dimensional data. Int. J. Comput. Appl. 2016, 136, 9–17. [Google Scholar]
  17. Kohavi, R.; John, G.H. Wrappers for feature subset selection. Artif. Intell. 1997, 97, 273–324. [Google Scholar] [CrossRef] [Green Version]
  18. Karegowda, A.G.; Manjunath, A.S.; Jayaram, M.A. Feature Subset Selection Problem using Wrapper Approach in Supervised Learning. Int. J. Comput. Appl. 2010, 1, 13–17. [Google Scholar] [CrossRef]
  19. Kabir, M.; Shahjahan; Murase, K. A new local search based hybrid genetic algorithm for feature selection. Neurocomputing 2011, 74, 2914–2928. [Google Scholar] [CrossRef]
  20. Tran, B.; Xue, B.; Zhang, M. Adaptive multi-subswarm optimisation for feature selection on high-dimensional classification. In Proceedings of the Genetic and Evolutionary Computation Conference, Boston, MA, USA, 13–17 July 2019; pp. 481–489. [Google Scholar] [CrossRef] [Green Version]
  21. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. CCSA: Conscious Neighborhood-based Crow Search Algorithm for Solving Global Optimization Problems. Appl. Soft Comput. 2019, 85, 105583. [Google Scholar] [CrossRef]
  22. Benyamin, A.; Farhad, S.G.; Saeid, B. Discrete farmland fertility optimization algorithm with metropolis acceptance criterion for traveling salesman problems. Int. J. Intell. Syst. 2020, 36, 1270–1303. [Google Scholar] [CrossRef]
  23. Fard, E.S.; Monfaredi, K.; Nadimi, M.H. An Area-Optimized Chip of Ant Colony Algorithm Design in Hardware Platform Using the Address-Based Method. Int. J. Electr. Comput. Eng. (IJECE) 2014, 4, 989–998. [Google Scholar] [CrossRef]
  24. Sayadi, M.K.; Hafezalkotob, A.; Naini, S.G.J. Firefly-inspired algorithm for discrete optimization problems: An application to manufacturing cell formation. J. Manuf. Syst. 2013, 32, 78–84. [Google Scholar] [CrossRef]
  25. Gharehchopogh, F.S.; Nadimi-Shahraki, M.H.; Barshandeh, S.; Abdollahzadeh, B.; Zamani, H. CQFFA: A Chaotic Quasi-oppositional Farmland Fertility Algorithm for Solving Engineering Optimization Problems. J. Bionic Eng. 2022, 1–26. [Google Scholar] [CrossRef]
  26. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S.; Oliva, D. Hybridizing of Whale and Moth-Flame Optimization Algorithms to Solve Diverse Scales of Optimal Power Flow Problem. Electronics 2022, 11, 831. [Google Scholar] [CrossRef]
  27. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the MHS’95 Proceedings of the sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
  28. Dorigo, M.; Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat No 99TH8406), Washington, DC, USA, 6–9 July 1999; pp. 1470–1477. [Google Scholar]
  29. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Global. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  30. Rajabioun, R. Cuckoo optimization algorithm. Appl. Soft. Comput. 2011, 11, 5508–5518. [Google Scholar] [CrossRef]
  31. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  32. James, J.; Li, V.O. A social spider algorithm for global optimization. Appl. Soft. Comput. 2015, 30, 614–627. [Google Scholar] [CrossRef] [Green Version]
  33. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  34. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  35. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. QANA: Quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell. 2021, 104, 104314. [Google Scholar] [CrossRef]
  36. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  37. Abu Khurmaa, R.; Aljarah, I.; Sharieh, A. An intelligent feature selection approach based on moth flame optimization for medical diagnosis. Neural Comput. Appl. 2020, 33, 7165–7204. [Google Scholar] [CrossRef]
  38. Moorthy, U.; Gandhi, U.D. A novel optimal feature selection technique for medical data classification using ANOVA based whale optimization. J. Ambient. Intell. Humaniz. Comput. 2020, 12, 3527–3538. [Google Scholar] [CrossRef]
  39. Zamani, H.; Nadimi-Shahraki, M.H. Feature selection based on whale optimization algorithm for diseases diagnosis. Int. J. Comput. Sci. Inf. Secur. 2016, 14, 1243. [Google Scholar]
  40. Nadimi-Shahraki, M.H.; Zamani, H.; Mirjalili, S. Enhanced whale optimization algorithm for medical feature selection: A COVID-19 case study. Comput. Biol. Med. 2022, 148, 105858. [Google Scholar] [CrossRef]
  41. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616. [Google Scholar] [CrossRef]
  42. Mirjalili, S.; Mirjalili, S.M.; Yang, X.-S. Binary bat algorithm. Neural Comput. Appl. 2014, 25, 663–681. [Google Scholar] [CrossRef]
  43. Emary, E.; Zawbaa, H.M.; Hassanien, A.E. Binary grey wolf optimization approaches for feature selection. Neurocomputing 2016, 172, 371–381. [Google Scholar] [CrossRef]
  44. Gandomi, A.H.; Deb, K.; Averill, R.C.; Rahnamayan, S.; Omidvar, M.N. Using semi-independent variables to enhance optimization search. Expert Syst. Appl. 2018, 120, 279–297. [Google Scholar] [CrossRef]
  45. Kira, K.; Rendell, L.A. A practical approach to feature selection. In Machine Learning Proceedings; Elsevier: Amsterdam, The Netherlands, 1992; pp. 249–256. [Google Scholar]
  46. Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Trevino, R.P.; Tang, J.; Liu, H. Feature selection: A data perspective. ACM Comput. Surv. (CSUR) 2017, 50, 1–45. [Google Scholar] [CrossRef] [Green Version]
  47. Bolón-Canedo, V.; Sánchez-Maroño, N.; Alonso-Betanzos, A. A review of feature selection methods on synthetic data. Knowl. Inf. Syst. 2013, 34, 483–519. [Google Scholar] [CrossRef]
  48. Brezočnik, L.; Fister Jr, I.; Podgorelec, V. Swarm intelligence algorithms for feature selection: A review. Appl. Sci. 2018, 8, 1521. [Google Scholar] [CrossRef] [Green Version]
  49. Solorio-Fernández, S.; Carrasco-Ochoa, J.A.; Martínez-Trinidad, J.F. A review of unsupervised feature selection methods. Artif. Intell. Rev. 2020, 53, 907–948. [Google Scholar] [CrossRef]
  50. Aljawarneh, S.; Aldwairi, M.; Yassein, M.B. Anomaly-based intrusion detection system through feature selection analysis and building hybrid efficient model. J. Comput. Sci. 2018, 25, 152–160. [Google Scholar] [CrossRef]
  51. Ambusaidi, M.A.; He, X.; Nanda, P.; Tan, Z. Building an Intrusion Detection System Using a Filter-Based Feature Selection Algorithm. IEEE Trans. Comput. 2016, 65, 2986–2998. [Google Scholar] [CrossRef] [Green Version]
  52. Khater, B.; Wahab, A.A.; Idris, M.; Hussain, M.; Ibrahim, A.; Amin, M.; Shehadeh, H. Classifier Performance Evaluation for Lightweight IDS Using Fog Computing in IoT Security. Electronics 2021, 10, 1633. [Google Scholar] [CrossRef]
  53. Naseri, T.S.; Gharehchopogh, F.S. A Feature Selection Based on the Farmland Fertility Algorithm for Improved Intrusion Detection Systems. J. Netw. Syst. Manag. 2022, 30, 1–27. [Google Scholar] [CrossRef]
  54. Mohammadzadeh, H.; Gharehchopogh, F.S. Feature Selection with Binary Symbiotic Organisms Search Algorithm for Email Spam Detection. Int. J. Inf. Technol. Decis. Mak. 2021, 20, 469–515. [Google Scholar] [CrossRef]
  55. Zhang, Y.; Wang, S.; Phillips, P.; Ji, G. Binary PSO with mutation operator for feature selection using decision tree applied to spam detection. Knowl.-Based Syst. 2014, 64, 22–31. [Google Scholar] [CrossRef]
  56. Lin, F.; Liang, D.; Yeh, C.-C.; Huang, J.-C. Novel feature selection methods to financial distress prediction. Expert Syst. Appl. 2014, 41, 2472–2483. [Google Scholar] [CrossRef]
  57. Kwak, N.; Choi, C.-H. Input feature selection for classification problems. IEEE Trans. Neural Networks 2002, 13, 143–159. [Google Scholar] [CrossRef]
  58. Sharda, S.; Srivastava, M.; Gusain, H.S.; Sharma, N.K.; Bhatia, K.S.; Bajaj, M.; Kaur, H.; Zawbaa, H.M.; Kamel, S. A hybrid machine learning technique for feature optimization in object-based classification of debris-covered glaciers. Ain Shams Eng. J. 2022, 13, 101809. [Google Scholar] [CrossRef]
  59. Xue, B.; Zhang, M.; Browne, W.N. Particle Swarm Optimization for Feature Selection in Classification: A Multi-Objective Approach. IEEE Trans. Cybern. 2012, 43, 1656–1671. [Google Scholar] [CrossRef]
  60. Akyol, S.; Alatas, B. Plant intelligence based metaheuristic optimization algorithms. Artif. Intell. Rev. 2016, 47, 417–462. [Google Scholar] [CrossRef]
  61. Alatas, B. Chaotic bee colony algorithms for global numerical optimization. Expert Syst. Appl. 2010, 37, 5682–5687. [Google Scholar] [CrossRef]
  62. Alatas, B.; Bingol, H. Comparative Assessment Of Light-based Intelligent Search And Optimization Algorithms. Light Eng. 2020, 6, 51–59. [Google Scholar] [CrossRef]
  63. Mafarja, M.; Eleyan, D.; Abdullah, S.; Mirjalili, S. S-shaped vs. V-shaped transfer functions for ant lion optimization algorithm in feature selection problem. In Proceedings of the International Conference on Future Networks and Distributed Systems, Cambridge, UK, 19–20 July 2017; pp. 1–7. [Google Scholar]
  64. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. BGSA: Binary gravitational search algorithm. Nat. Comput. 2009, 9, 727–745. [Google Scholar] [CrossRef]
  65. De Souza, R.C.T.; dos Santos Coelho, L.; De Macedo, C.A.; Pierezan, J. A V-shaped binary crow search algorithm for feature selection. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar]
  66. Mafarja, M.; Aljarah, I.; Heidari, A.A.; Faris, H.; Fournier-Viger, P.; Li, X.; Mirjalili, S. Binary dragonfly optimization for feature selection using time-varying transfer functions. Knowl.-Based Syst. 2018, 161, 185–204. [Google Scholar] [CrossRef]
  67. Mirjalili, S.; Zhang, H.; Mirjalili, S.; Chalup, S.; Noman, N. A Novel U-Shaped Transfer Function for Binary Particle Swarm Optimisation. In Soft Computing for Problem Solving 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 241–259. [Google Scholar] [CrossRef]
  68. Ahmed, S.; Ghosh, K.K.; Mirjalili, S.; Sarkar, R. AIEOU: Automata-based improved equilibrium optimizer with U-shaped transfer function for feature selection. Knowl. Based. Syst. 2021, 228, 107283. [Google Scholar] [CrossRef]
  69. Ghosh, K.K.; Singh, P.K.; Hong, J.; Geem, Z.W.; Sarkar, R. Binary social mimic optimization algorithm with X-shaped transfer function for feature selection. IEEE Access 2020, 8, 97890–97906. [Google Scholar] [CrossRef]
  70. Guo, S.-S.; Wang, J.-S.; Guo, M.-W. Z-Shaped Transfer Functions for Binary Particle Swarm Optimization Algorithm. Comput. Intell. Neurosci. 2020, 2020, 1–21. [Google Scholar] [CrossRef]
  71. Ramasamy, A.; Mondry, A.; Holmes, C.; Altman, D.G. Key Issues in Conducting a Meta-Analysis of Gene Expression Microarray Datasets. PLOS Med. 2008, 5, e184. [Google Scholar] [CrossRef]
  72. Alirezanejad, M.; Enayatifar, R.; Motameni, H.; Nematzadeh, H. Heuristic filter feature selection methods for medical datasets. Genomics 2019, 112, 1173–1181. [Google Scholar] [CrossRef]
  73. Varzaneh, Z.A.; Orooji, A.; Erfannia, L.; Shanbehzadeh, M. A new COVID-19 intubation prediction strategy using an intelligent feature selection and K-NN method. Informatics Med. Unlocked 2021, 28, 100825. [Google Scholar] [CrossRef]
  74. Pashaei, E.; Pashaei, E. An efficient binary chimp optimization algorithm for feature selection in biomedical data classification. Neural Comput. Appl. 2022, 34, 6427–6451. [Google Scholar] [CrossRef]
  75. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S. Binary Approaches of Quantum-Based Avian Navigation Optimizer to Select Effective Features from High-Dimensional Medical Data. Mathematics 2022, 10, 2770. [Google Scholar] [CrossRef]
  76. Alweshah, M.; Alkhalaileh, S.; Al-Betar, M.A.; Abu Bakar, A. Coronavirus herd immunity optimizer with greedy crossover for feature selection in medical diagnosis. Knowl.-Based Syst. 2021, 235, 107629. [Google Scholar] [CrossRef] [PubMed]
  77. Anter, A.M.; Ali, M. Feature selection strategy based on hybrid crow search optimization algorithm integrated with chaos theory and fuzzy c-means algorithm for medical diagnosis problems. Soft Comput. 2019, 24, 1565–1584. [Google Scholar] [CrossRef]
  78. Singh, N.; Singh, P. A hybrid ensemble-filter wrapper feature selection approach for medical data classification. Chemom. Intell. Lab. Syst. 2021, 217, 104396. [Google Scholar] [CrossRef]
  79. Too, J.; Abdullah, A.R. Binary atom search optimisation approaches for feature selection. Connect. Sci. 2020, 32, 406–430. [Google Scholar] [CrossRef]
  80. Elgamal, Z.; Sabri, A.Q.M.; Tubishat, M.; Tbaishat, D.; Makhadmeh, S.N.; Alomari, O.A. Improved Reptile Search Optimization Algorithm using Chaotic map and Simulated Annealing for Feature Selection in Medical Filed. IEEE Access 2022, 10, 51428–51446. [Google Scholar] [CrossRef]
  81. Emary, E.; Zawbaa, H.M.; Hassanien, A.E. Binary ant lion approaches for feature selection. Neurocomputing 2016, 213, 54–65. [Google Scholar] [CrossRef]
  82. Zhang, Y.; Song, X.-F.; Gong, D.-W. A return-cost-based binary firefly algorithm for feature selection. Inf. Sci. 2017, 418, 561–574. [Google Scholar] [CrossRef]
  83. Sayed, G.I.; Tharwat, A.; Hassanien, A.E. Chaotic dragonfly algorithm: An improved metaheuristic algorithm for feature selection. Appl. Intell. 2018, 49, 188–205. [Google Scholar] [CrossRef]
  84. Wang, J.; Khishe, M.; Kaveh, M.; Mohammadi, H. Binary Chimp Optimization Algorithm (BChOA): A New Binary Meta-heuristic for Solving Optimization Problems. Cogn. Comput. 2021, 13, 1297–1316. [Google Scholar] [CrossRef]
  85. Kundu, R.; Chattopadhyay, S.; Cuevas, E.; Sarkar, R. AltWOA: Altruistic Whale Optimization Algorithm for feature selection on microarray datasets. Comput. Biol. Med. 2022, 144, 105349. [Google Scholar] [CrossRef]
  86. Balakrishnan, K.; Dhanalakshmi, R.; Seetharaman, G. S-shaped and V-shaped binary African vulture optimization algorithm for feature selection. Expert Syst. 2022, 10, e13079. [Google Scholar] [CrossRef]
  87. Akinola, O.A.; Ezugwu, A.E.; Oyelade, O.N.; Agushaka, J.O. A hybrid binary dwarf mongoose optimization algorithm with simulated annealing for feature selection on high dimensional multi-class datasets. Sci. Rep. 2022, 12, 1–22. [Google Scholar] [CrossRef]
  88. Huang, C.-L.; Wang, C.-J. A GA-based feature selection and parameters optimizationfor support vector machines. Expert. Syst. Appl. 2006, 31, 231–240. [Google Scholar] [CrossRef]
  89. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization. Swarm Evol. Comput. 2013, 9, 1–14. [Google Scholar] [CrossRef]
  90. Kennedy, J.; Eberhart, R.C. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, man, and Cybernetics Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; pp. 4104–4108. [Google Scholar]
  91. Blake, C. UCI Repository of Machine Learning Databases. 1998. Available online: http://www.ics.uci.edu/~mlearn/MLRepository.html (accessed on 22 July 2021).
  92. Iwendi, C.; Bashir, A.K.; Peshkar, A.; Sujatha, R.; Chatterjee, J.M.; Pasupuleti, S.; Mishra, R.; Pillai, S.; Jo, O. COVID-19 patient health prediction using boosted random forest algorithm. Front. Public Health 2020, 8, 357. [Google Scholar] [CrossRef] [PubMed]
  93. Guo, G.; Wang, H.; Bell, D.; Bi, Y.; Greer, K. KNN model-based approach in classification. In Proceedings of the OTM Confederated International Conferences on the Move to Meaningful Internet Systems, Rhodes, Greece, 22–26 October 2003; pp. 986–996. [Google Scholar]
  94. Zhang, S.; Li, X.; Zong, M.; Zhu, X.; Cheng, D. Learning k for knn classification. ACM Trans. Intell. Syst. Technol. (TIST) 2017, 8, 1–19. [Google Scholar] [CrossRef] [Green Version]
  95. Zhang, S.; Li, X.; Zong, M.; Zhu, X.; Wang, R. Efficient kNN Classification With Different Numbers of Nearest Neighbors. IEEE Trans. Neural Networks Learn. Syst. 2017, 29, 1774–1785. [Google Scholar] [CrossRef] [PubMed]
  96. Garcia, S.; Fernandez, A.; Luengo, J.; Herrera, F. A study of statistical techniques and performance measures for genetics-based machine learning: Accuracy and interpretability. Soft Comput. 2008, 13, 959–977. [Google Scholar] [CrossRef]
  97. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437. [Google Scholar] [CrossRef]
  98. Glas, A.S.; Lijmer, J.G.; Prins, M.H.; Bonsel, G.J.; Bossuyt, P.M.M. The diagnostic odds ratio: A single indicator of test performance. J. Clin. Epidemiol. 2003, 56, 1129–1135. [Google Scholar] [CrossRef]
  99. Ciotti, M.; Ciccozzi, M.; Terrinoni, A.; Jiang, W.-C.; Wang, C.-B.; Bernardini, S. The COVID-19 pandemic. Crit. Rev. Clin. Lab. Sci. 2020, 57, 365–388. [Google Scholar] [CrossRef]
  100. Chakraborty, I.; Maity, P. COVID-19 outbreak: Migration, effects on society, global environment and prevention. Sci. Total Environ. 2020, 728, 138882. [Google Scholar] [CrossRef]
  101. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
Figure 1. The representation scheme used by BSMO, (a) Starling population (Matrix X), (b) Binary population (Matrix B), and (c) Selected features (SF).
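As a concrete illustration of this scheme, the toy sketch below maps one continuous starling position (a row of Matrix X) to a binary vector (a row of Matrix B) and reads off the selected features (SF); the S-shaped squashing and the 0.5 cut-off are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-4, 4, size=10)                    # continuous position, 10 features
b = (1.0 / (1.0 + np.exp(-x)) > 0.5).astype(int)   # assumed S-shaped squash + 0.5 cut-off
sf = np.flatnonzero(b)                             # indices of the selected features
print(b, sf)
```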
Figure 2. The S-shaped and V-shaped transfer functions [89].
Figure 3. Flowchart of the proposed BSMO algorithm.
Figure 4. Convergence comparison of the BSMO and comparative algorithms.
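For reference, curves such as those in Figure 4 are produced by logging the best fitness found so far at every iteration. The sketch below shows only that bookkeeping, with a random-search stand-in on a toy sphere function in place of the actual BSMO update.

```python
import numpy as np

rng = np.random.default_rng(0)
best, history = np.inf, []
for _ in range(100):
    candidate = float(np.sum(rng.uniform(-5, 5, size=10) ** 2))  # toy objective
    best = min(best, candidate)
    history.append(best)        # non-increasing sequence = convergence curve
print(history[0], history[-1])
```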
Table 1. The formulation of S-shaped and V-shaped transfer functions.
Name | S-Shaped Transfer Function | Name | V-Shaped Transfer Function
S1-shaped | $T(x)=\frac{1}{1+e^{-2x}}$ | V1-shaped | $T(x)=\left\lvert \operatorname{erf}\left(\frac{\sqrt{\pi}}{2}x\right) \right\rvert$
S2-shaped | $T(x)=\frac{1}{1+e^{-x}}$ | V2-shaped | $T(x)=\lvert \tanh(x) \rvert$
S3-shaped | $T(x)=\frac{1}{1+e^{-x/2}}$ | V3-shaped | $T(x)=\left\lvert \frac{x}{\sqrt{1+x^{2}}} \right\rvert$
S4-shaped | $T(x)=\frac{1}{1+e^{-x/3}}$ | V4-shaped | $T(x)=\left\lvert \frac{2}{\pi}\arctan\left(\frac{\pi}{2}x\right) \right\rvert$
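A direct implementation of these eight functions, together with the usual binarization rules of the S-shaped family (set a bit to 1 with probability T(x)) and the V-shaped family (flip the previous bit with probability T(x)) [89,90], is sketched below; the helper names and random-number handling are illustrative assumptions.

```python
import numpy as np
from scipy.special import erf

# The eight transfer functions of Table 1 [89].
S_SHAPED = {
    "S1": lambda x: 1 / (1 + np.exp(-2 * x)),
    "S2": lambda x: 1 / (1 + np.exp(-x)),
    "S3": lambda x: 1 / (1 + np.exp(-x / 2)),
    "S4": lambda x: 1 / (1 + np.exp(-x / 3)),
}
V_SHAPED = {
    "V1": lambda x: np.abs(erf(np.sqrt(np.pi) / 2 * x)),
    "V2": lambda x: np.abs(np.tanh(x)),
    "V3": lambda x: np.abs(x / np.sqrt(1 + x**2)),
    "V4": lambda x: np.abs((2 / np.pi) * np.arctan((np.pi / 2) * x)),
}

def binarize_s(x, tf, rng):
    """S-shaped rule: set each bit to 1 with probability T(x)."""
    return (rng.random(x.shape) < tf(x)).astype(int)

def binarize_v(x, b_prev, tf, rng):
    """V-shaped rule: flip the previous bit with probability T(x)."""
    flip = rng.random(x.shape) < tf(x)
    return np.where(flip, 1 - b_prev, b_prev)

# Example: binarize one continuous position with S1 and V2.
rng = np.random.default_rng(0)
x = rng.normal(size=6)
print(binarize_s(x, S_SHAPED["S1"], rng))
print(binarize_v(x, np.zeros(6, dtype=int), V_SHAPED["V2"], rng))
```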
Table 2. Parameter settings.
Algorithms | Parameters
ACO | τ = 1, η = 1, ρ = 0.2, α = 1, β = 0.1
BBA | Qmin = 0, Qmax = 2
bGWO | a decreases linearly from 2 to 0; C1, C2, and C3 are random numbers
BWOA | a decreases linearly from 2 to 0; b = 1; r1, r2 ∈ rand(0, 1)
BSMO | k = 5, λ = 20, µ = 0.5, θ and ϕ ∈ (0, 1.8)
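For reference, the accuracy, sensitivity, specificity, precision, and error values reported in Tables 3–6 follow the standard confusion-matrix definitions [97]; a minimal sketch:

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)     # true-positive rate (recall)
    specificity = tn / (tn + fp)     # true-negative rate
    precision = tp / (tp + fp)       # positive predictive value
    error = 1.0 - accuracy
    return accuracy, sensitivity, specificity, precision, error

# e.g. tp=80, tn=40, fp=10, fn=5 gives accuracy of about 0.889
print(classification_metrics(80, 40, 10, 5))
```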
Table 3. Diabetes disease detection.
Algorithms | Fitness (Avg) | Fitness (Min) | Accuracy (Avg) | Accuracy (Max) | Sensitivity (Avg) | Sensitivity (Max) | Precision (Avg) | Precision (Max) | Specificity (Avg) | Specificity (Max) | Error (Avg) | Error (Min)
ACO | 0.2384 | 0.2318 | 76.5109 | 77.0865 | 85.2345 | 86.6173 | 60.1832 | 64.0414 | 79.9663 | 82.1451 | 0.2351 | 0.2291
BBA | 0.2331 | 0.2281 | 76.9974 | 77.4675 | 86.4089 | 88.748 | 79.8734 | 83.4279 | 59.365 | 63.0096 | 0.23 | 0.2253
bGWO | 0.2295 | 0.2253 | 77.3573 | 77.8725 | 86.2124 | 89.3135 | 80.0664 | 83.5267 | 59.8209 | 65.9114 | 0.2264 | 0.2213
BWOA | 0.2386 | 0.2344 | 76.4744 | 76.825 | 85.8432 | 87.8664 | 79.8754 | 82.3961 | 59.5944 | 64.961 | 0.2353 | 0.2317
S1-BSMO | 0.2342 | 0.2266 | 76.9719 | 77.7409 | 88.5142 | 89.8454 | 83.2288 | 84.242 | 65.9602 | 68.2916 | 0.2504 | 0.2382
S2-BSMO | 0.2352 | 0.2267 | 76.8537 | 77.7341 | 88.2422 | 89.2631 | 83.1426 | 84.2726 | 65.7932 | 68.023 | 0.2516 | 0.2369
S3-BSMO | 0.2373 | 0.2291 | 76.6101 | 77.4897 | 88.1787 | 90.1796 | 82.925 | 84.4974 | 65.5806 | 67.8662 | 0.2508 | 0.2397
S4-BSMO | 0.2368 | 0.2291 | 76.6654 | 77.4863 | 88.3085 | 89.7088 | 82.764 | 83.8476 | 65.0295 | 66.8104 | 0.2533 | 0.2384
V1-BSMO | 0.2344 | 0.2294 | 76.889 | 77.3411 | 88.2848 | 89.7132 | 83.1764 | 86.1848 | 65.7345 | 70.5787 | 0.2552 | 0.2422
V2-BSMO | 0.2343 | 0.2266 | 76.8872 | 77.6128 | 88.6261 | 90.0085 | 82.9072 | 83.761 | 65.7846 | 67.6503 | 0.2548 | 0.2345
V3-BSMO | 0.2353 | 0.2306 | 76.7716 | 77.2163 | 88.245 | 89.6911 | 83.1812 | 84.5091 | 66.0204 | 69.1626 | 0.2547 | 0.2383
V4-BSMO | 0.2335 | 0.2292 | 76.9639 | 77.471 | 88.1009 | 89.484 | 83.2658 | 84.4214 | 66.2564 | 69.1896 | 0.2534 | 0.2383
Threshold-BSMO | 0.2306 | 0.2229 | 77.3077 | 77.9904 | 89 | 89.9871 | 83.5823 | 84.7376 | 66.6321 | 69.2028 | 0.253 | 0.2408
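The fitness columns in Tables 3–6 reflect a wrapper objective that couples classification error with the size of the selected subset. A common formulation is sketched below; the weight alpha = 0.99 is a conventional choice assumed here for illustration, not necessarily the exact weighting used in this study.

```python
def wrapper_fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Trade classification error against subset size (alpha is an assumption)."""
    return alpha * error_rate + (1 - alpha) * (n_selected / n_total)

# e.g. a 23% error rate with 4 of 8 features selected
print(wrapper_fitness(0.23, 4, 8))   # -> 0.2327
```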
Table 4. Heart disease detection.
Algorithms | Fitness (Avg) | Fitness (Min) | Accuracy (Avg) | Accuracy (Max) | Sensitivity (Avg) | Sensitivity (Max) | Precision (Avg) | Precision (Max) | Specificity (Avg) | Specificity (Max) | Error (Avg) | Error (Min)
ACO | 0.147 | 0.1387 | 85.4815 | 86.2963 | 88.8186 | 94.1537 | 86.8452 | 89.665 | 82.7764 | 86.6325 | 0.1452 | 0.137
BBA | 0.1414 | 0.1380 | 86.0123 | 86.2963 | 94.0096 | 95.4345 | 89.7266 | 91.4855 | 88.5579 | 91.1526 | 0.1959 | 0.1519
bGWO | 0.1383 | 0.1358 | 86.4198 | 86.6667 | 87.4898 | 93.0586 | 85.4259 | 90.2422 | 80.7175 | 87.4738 | 0.1578 | 0.1444
BWOA | 0.1409 | 0.1387 | 86.1728 | 86.2963 | 89.4656 | 91.3609 | 86.9606 | 90.1189 | 82.8787 | 88.087 | 0.1383 | 0.137
S1-BSMO | 0.151 | 0.1432 | 85.1852 | 85.9259 | 89.2216 | 95.26 | 83.6512 | 89.9588 | 78.67 | 87.0474 | 0.1481 | 0.1407
S2-BSMO | 0.146 | 0.1411 | 85.8148 | 86.2963 | 93.6608 | 95.0876 | 89.2841 | 91.4817 | 86.0433 | 88.7512 | 0.1964 | 0.1593
S3-BSMO | 0.1481 | 0.1424 | 85.5185 | 85.9259 | 93.3517 | 95.3351 | 89.403 | 91.5718 | 86.2794 | 88.2128 | 0.2015 | 0.1556
S4-BSMO | 0.1495 | 0.1432 | 85.3333 | 85.9259 | 93.1475 | 94.4033 | 89.7136 | 91.6581 | 87.0123 | 89.3531 | 0.1930 | 0.1556
V1-BSMO | 0.1492 | 0.1403 | 85.3704 | 86.2963 | 93.2132 | 95.0297 | 89.4763 | 91.4379 | 86.3764 | 89.284 | 0.1907 | 0.1481
V2-BSMO | 0.1423 | 0.1387 | 85.9383 | 86.2963 | 93.8571 | 96.2621 | 89.3417 | 91.9558 | 89.0497 | 91.2747 | 0.1884 | 0.1593
V3-BSMO | 0.1417 | 0.1380 | 86.037 | 86.2963 | 94.4918 | 96.2525 | 89.3503 | 91.9198 | 88.4579 | 91.1828 | 0.1911 | 0.1481
V4-BSMO | 0.1411 | 0.1351 | 86.0741 | 86.6667 | 94.1042 | 95.6443 | 89.6908 | 91.8579 | 88.5817 | 90.503 | 0.1956 | 0.1667
Threshold-BSMO | 0.1371 | 0.1322 | 86.5432 | 87.037 | 89.8998 | 93.4192 | 86.7337 | 90.5212 | 82.2366 | 87.3123 | 0.1346 | 0.1296
Table 5. Hepatitis disease detection.
Algorithms | Fitness (Avg) | Fitness (Min) | Accuracy (Avg) | Accuracy (Max) | Sensitivity (Avg) | Sensitivity (Max) | Precision (Avg) | Precision (Max) | Specificity (Avg) | Specificity (Max) | Error (Avg) | Error (Min)
ACO | 0.1215 | 0.1074 | 88.0639 | 89.625 | 64.5377 | 76.411 | 94.4719 | 97.8957 | 75.7176 | 89.8369 | 0.1194 | 0.1037
BBA | 0.1116 | 0.0977 | 89.1083 | 90.5 | 64.4286 | 80.5122 | 78.7006 | 90.214 | 95.0395 | 97.9604 | 0.109 | 0.095
bGWO | 0.1067 | 0.0932 | 89.5417 | 90.9583 | 63.8564 | 82.9117 | 79.1231 | 85.5983 | 95.3145 | 97.5229 | 0.1046 | 0.0904
BWOA | 0.1209 | 0.1135 | 88.1806 | 88.9583 | 60.8305 | 74.3306 | 78.0184 | 93.4557 | 95.2697 | 98.8117 | 0.1182 | 0.1104
S1-BSMO | 0.1265 | 0.1147 | 87.8319 | 89 | 70.6404 | 80.2298 | 81.3914 | 95.0256 | 99.422 | 100 | 0.1659 | 0.1292
S2-BSMO | 0.1218 | 0.1118 | 88.1708 | 89.1667 | 70.8924 | 84.532 | 78.9332 | 91.5289 | 99.4674 | 100 | 0.1598 | 0.1171
S3-BSMO | 0.1213 | 0.1051 | 88.2153 | 89.9167 | 71.8705 | 85.8738 | 81.3385 | 96.9048 | 99.377 | 100 | 0.1599 | 0.1237
S4-BSMO | 0.1209 | 0.1070 | 88.2306 | 89.6667 | 72.851 | 82.1369 | 81.9163 | 93.1111 | 99.3568 | 100 | 0.1603 | 0.1296
V1-BSMO | 0.1109 | 0.0977 | 89.1542 | 90.5 | 78.8832 | 85.8624 | 83.8414 | 95.5556 | 99.471 | 100 | 0.1587 | 0.1292
V2-BSMO | 0.1106 | 0.0998 | 89.2069 | 90.375 | 79.3521 | 87.3972 | 84.3151 | 96.3492 | 99.2964 | 99.9187 | 0.1589 | 0.1342
V3-BSMO | 0.1107 | 0.0994 | 89.1986 | 90.375 | 78.5909 | 86.1964 | 85.7139 | 97.5 | 99.4433 | 100 | 0.1617 | 0.1412
V4-BSMO | 0.1096 | 0.0990 | 89.3278 | 90.375 | 79.7051 | 88.4275 | 84.2503 | 98.75 | 99.4127 | 100 | 0.1617 | 0.1425
Threshold-BSMO | 0.1081 | 0.0924 | 89.5194 | 91.0417 | 80.2438 | 91.3715 | 85.1981 | 95.7778 | 99.4531 | 100 | 0.1623 | 0.1342
Table 6. Coronavirus disease 2019 (COVID-19) detection.
Algorithms | Fitness (Avg) | Fitness (Min) | Accuracy (Avg) | Accuracy (Max) | Sensitivity (Avg) | Sensitivity (Max) | Precision (Avg) | Precision (Max) | Specificity (Avg) | Specificity (Max) | Error (Avg) | Error (Min)
ACO | 0.0521 | 0.0493 | 95.2805 | 95.4825 | 98.3325 | 99.0601 | 96.3844 | 97.4589 | 74.0994 | 78.5774 | 0.0477 | 0.0452
BBA | 0.0508 | 0.0494 | 95.3575 | 95.4838 | 98.411 | 98.9281 | 96.5039 | 97.1542 | 74.7778 | 79.4731 | 0.0464 | 0.0452
bGWO | 0.0482 | 0.0455 | 95.4915 | 95.7137 | 98.6061 | 99.3678 | 96.1273 | 97.5426 | 73.3757 | 80.9149 | 0.0451 | 0.0429
BWOA | 0.0518 | 0.0493 | 95.2667 | 95.7164 | 98.3045 | 99.0229 | 96.4153 | 97.1998 | 74.626 | 82.0579 | 0.0479 | 0.0428
S1-BSMO | 0.0515 | 0.0493 | 95.417 | 95.5988 | 99.2906 | 99.7496 | 97.7266 | 98.2616 | 83.0173 | 87.3208 | 0.0511 | 0.0452
S2-BSMO | 0.0516 | 0.049 | 95.3861 | 95.5961 | 99.3947 | 100 | 97.5923 | 97.9672 | 82.1743 | 85.5629 | 0.052 | 0.0498
S3-BSMO | 0.0517 | 0.0497 | 95.3308 | 95.6001 | 99.3703 | 100 | 97.5576 | 98.2389 | 81.7173 | 87.298 | 0.0521 | 0.0487
S4-BSMO | 0.0516 | 0.049 | 95.3347 | 95.5948 | 99.4093 | 100 | 97.5954 | 98.126 | 82.1407 | 86.2074 | 0.0532 | 0.0498
V1-BSMO | 0.051 | 0.0497 | 95.2469 | 95.5974 | 99.8598 | 100 | 97.3384 | 97.924 | 80.376 | 84.761 | 0.0537 | 0.0476
V2-BSMO | 0.0509 | 0.0489 | 95.263 | 95.4812 | 99.8182 | 100 | 97.364 | 97.8237 | 80.6954 | 84.0749 | 0.053 | 0.0474
V3-BSMO | 0.051 | 0.0486 | 95.2695 | 95.4838 | 99.7693 | 100 | 97.3319 | 97.9259 | 80.477 | 83.9283 | 0.053 | 0.0475
V4-BSMO | 0.0506 | 0.0478 | 95.2692 | 95.4892 | 99.7845 | 100 | 97.4058 | 97.956 | 80.7991 | 84.4487 | 0.0532 | 0.0452
Threshold-BSMO | 0.0488 | 0.0451 | 95.537 | 95.8353 | 99.3774 | 100 | 97.7178 | 98.0502 | 83.1011 | 87.2075 | 0.0518 | 0.0487
Table 7. Friedman test.
Algorithms | Diabetes (Rank) | Heart (Rank) | Hepatitis (Rank) | COVID-19 (Rank)
ACO | 10.37 (11) | 8.67 (8) | 9.23 (8) | 9.70 (11)
BBA | 10.37 (11) | 8.67 (8) | 9.23 (8) | 9.70 (11)
bGWO | 2.80 (2) | 2.17 (2) | 3.07 (2) | 2.27 (2)
BWOA | 10.40 (12) | 11.23 (12) | 9.23 (8) | 8.70 (9)
S1-BSMO | 5.53 (4) | 8.57 (7) | 11.87 (11) | 7.80 (7)
S2-BSMO | 7.43 (8) | 9.30 (9) | 8.87 (7) | 8.97 (10)
S3-BSMO | 9.47 (10) | 10.73 (11) | 9.27 (9) | 10.07 (12)
S4-BSMO | 9.27 (9) | 10.37 (10) | 9.43 (10) | 8.40 (8)
V1-BSMO | 6.13 (5) | 5.67 (6) | 4.87 (6) | 5.53 (3)
V2-BSMO | 6.27 (6) | 5.13 (5) | 4.40 (4) | 6.73 (6)
V3-BSMO | 6.70 (7) | 3.90 (3) | 4.60 (5) | 6.20 (5)
V4-BSMO | 4.67 (3) | 4.80 (4) | 4.20 (3) | 5.60 (4)
Threshold-BSMO | 1.60 (1) | 1.80 (1) | 2.73 (1) | 1.33 (1)
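The mean ranks in Table 7 come from the Friedman non-parametric test. A minimal SciPy example is given below; the three accuracy samples are made-up stand-ins for the per-run results of three algorithms.

```python
from scipy.stats import friedmanchisquare

# Made-up per-run accuracies for three algorithms (illustration only).
acc_threshold_bsmo = [77.3, 77.5, 77.9, 77.1, 77.6]
acc_bgwo = [77.2, 77.4, 77.5, 77.0, 77.3]
acc_aco = [76.4, 76.6, 76.5, 76.3, 76.7]

stat, p_value = friedmanchisquare(acc_threshold_bsmo, acc_bgwo, acc_aco)
print(f"Friedman statistic = {stat:.3f}, p = {p_value:.4f}")
```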