Article

A Logarithmic Transfer Function for Binary Swarm Intelligence Algorithms: Enhanced Feature Selection with White Shark Optimizer

by Seyma Gules *, Alper Kılıç, Mustafa Servet Kiran and Mesut Gunduz
Department of Computer Engineering, Faculty of Computer and Information Sciences, Konya Technical University, 42075 Konya, Türkiye
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8710; https://doi.org/10.3390/app15158710
Submission received: 10 July 2025 / Revised: 2 August 2025 / Accepted: 4 August 2025 / Published: 6 August 2025
(This article belongs to the Special Issue Machine Learning-Based Feature Extraction and Selection: 2nd Edition)

Abstract

With the increasing size of datasets in data mining applications, feature selection has become critical for enhancing classification accuracy and reducing computational complexity. In this study, a novel binary feature selection algorithm, called bWSO-log, is proposed based on the White Shark Optimizer (WSO). Unlike the commonly used S-shaped and V-shaped transfer functions in the literature, the WSO algorithm is converted into a binary form for the first time using a logarithmic transfer function. The performance of the proposed method was tested on nineteen benchmark datasets and compared with eight widely used metaheuristic algorithms. The results show that the bWSO-log algorithm demonstrates superior or competitive performance in terms of classification accuracy and the number of selected features. These findings reveal the effectiveness of the proposed logarithmic function and highlight the potential of WSO-based binary optimization in feature selection problems.

1. Introduction

With the widespread use of the internet today, there has been a significant increase in the size of emerging datasets. The rapid growth of datasets leads to challenges such as difficulties in extracting relevant information from real-world applications, increased processing time, hardware limitations, and higher computational complexity. Therefore, feature selection methods, which hold an important place in data mining, are gaining importance [1]. Feature selection methods aim to identify relevant features prior to training models on real-world datasets containing many features. Relevant features are those directly related to the target concept and are neither irrelevant nor redundant; irrelevant features do not affect the target concept, whereas redundant features contain duplicate or unnecessary information [2]. Given a dataset with N features, there exist $2^{N}$ possible subsets. As the number of features increases, the search space grows exponentially, making it increasingly challenging to find the optimal subset. Therefore, removing unnecessary features reduces both training time and memory requirements during model training [2]. Moreover, reducing the dataset size facilitates simpler representation, visualization, and interpretation of the data [3]. In line with the advantages offered by feature selection methods, various techniques have been developed in the literature. These methods are generally classified into three main categories: filter methods, wrapper methods, and hybrid methods [4].
Filter methods operate independently of the learning algorithm; data are scored based on statistical criteria, and feature selection is performed according to these scores [5]. These methods are generally faster than wrapper methods. However, due to uncertainties in determining the threshold that distinguishes between relevant and irrelevant features, filter methods are often considered less effective and less successful compared to wrapper methods [3].
Wrapper methods comprise a feature evaluation criterion, a search strategy, and a stopping criterion. Unlike filter methods, wrapper models use the performance of the learning algorithm as the evaluation metric and select the subset of features that achieves the best performance [6]. Wrapper methods can be categorized into three main groups based on their feature search strategies: sequential, bio-inspired, and iterative methods. Sequential search methods are easy to implement and fast. Bio-inspired methods incorporate randomness into the search process to avoid getting trapped in local minima, while iterative methods aim to avoid combinatorial search [4].
Hybrid methods have been developed to overcome the disadvantages of filter and wrapper methods. These approaches incorporate both a feature selection algorithm and a classification algorithm, and perform both processes simultaneously [7].
Feature selection methods aim to determine which features should be selected from a dataset. The resulting solutions take binary values of 0 or 1, indicating whether each feature is excluded or included. In order to obtain an exact solution, all possible subsets must be evaluated; however, this significantly increases the computational cost. As the size of datasets increases, the use of heuristic methods to achieve an optimal solution has become increasingly common [8]. Continuous versions of optimization algorithms need to be discretized in order to be applied to decision problems, and numerous discretization methods have been proposed in the literature for this purpose. Discrete versions of such algorithms have been obtained, for example, by developing a position re-update rule as in the discrete Jaya algorithm (DJAYA) [9], by using the XOR operator as in the Jaya-based binary optimization algorithm (JayaX) [10], or by applying transfer functions [11,12]. Although these algorithms do not always guarantee the best solution, they can produce acceptable solutions within a reasonable time. Despite the existence of many optimization methods in the literature, researchers continue to develop new approaches specifically for feature selection problems due to unresolved issues such as getting stuck in local minima and premature convergence [11].
The WSO is an optimization method inspired by the hunting behavior of white sharks [13]. Developed in 2022, the WSO algorithm is designed for solving continuous optimization problems. It is recommended for further development due to its applicability to high-dimensional engineering problems and its ability to produce fast and accurate solutions [13].

Motivation and Contribution of the Study

The increasing size and complexity of real-world data have made feature selection a critical step in data mining and machine learning processes. As the number of features grows, the search space required to identify the optimal subset expands exponentially; this situation increases computational complexity and adversely affects model performance. Therefore, developing efficient and scalable feature selection methods is of great importance.
Although many metaheuristic algorithms have been proposed for feature selection, most of these algorithms face challenges such as premature convergence, imbalance between exploration and exploitation, or insufficient adaptation to binary search spaces. The WSO, originally developed for continuous problem domains, has strong exploration capabilities and, inspired by the predatory behavior of sharks, provides a promising foundation for binary optimization problems.
In this study, to overcome the aforementioned limitations and leverage WSO’s potential in binary problem domains, a binary version of WSO (bWSO) specifically adapted for feature selection tasks is proposed. The contributions of this study to the literature can be summarized as follows:
  • A novel logarithmic transfer function offering a flexible and adaptive mechanism for converting continuous values into binary decisions is proposed.
  • The original WSO algorithm is adapted to binary problem solving by incorporating the proposed logarithmic transfer function along with eight other commonly used transfer functions, resulting in the bWSO algorithm.
  • The performance of the bWSO-log algorithm is evaluated on nineteen benchmark datasets, and it is observed to outperform other well-known metaheuristic algorithms in the context of feature selection.
  • In terms of classification accuracy, the algorithm is comprehensively compared with the Artificial Algae Algorithm (AAA) [11], Bat Optimization Algorithm (BAT) [14], Firefly Algorithm (FA) [15], Grey Wolf Optimizer (GWO) [16], Moth Flame Optimizer (MFO) [17], Multi-Verse Optimizer (MVO) [18], Particle Swarm Optimization (PSO) [19], and Whale Optimization Algorithm (WOA) [20].
  • A simple yet effective feature selection framework combining WSO’s powerful search mechanism with adaptive discretization techniques is presented.
The remaining sections of the paper are organized as follows: Section 2 provides a brief overview of related work. Section 3 introduces the original WSO algorithm. The details of the proposed bWSO approach are presented in Section 4. Section 5 describes the experimental setup. Section 6 presents the results along with a discussion, while Section 7 offers conclusions and suggestions for future work.

2. Related Works

Swarm intelligence-based metaheuristic optimization algorithms have recently been widely used not only for solving continuous-valued problems but also for addressing feature selection problems effectively [21]. This section examines feature selection approaches based on swarm intelligence optimization algorithms.
Various methods have been employed to discretize swarm intelligence-based optimization algorithms. These methods include XOR-based operators [8,10], threshold-based transformation approaches [22], transfer functions [9,23,24], entropy-based methods [25], similarity-based approaches [26,27], and hybrid methods [28].
In most metaheuristic algorithms, an initial set of solutions—i.e., a population—is generated randomly, and these solutions are evaluated using a fitness function [26]. New generations are iteratively produced until a termination criterion is met, and at each step, the solutions are re-evaluated according to the fitness function [29,30].
The WSO algorithm is a bio-inspired optimization method based on swarm intelligence. Therefore, this study focuses on the effects of swarm intelligence-based optimization algorithms on feature selection within the context of the literature. Table 1 presents some of the methods used for this purpose.

3. Original White Shark Optimizer (WSO)

The WSO is a nature-inspired metaheuristic optimization algorithm developed by Braik et al. in 2022 [13]. The fundamental principle of the algorithm is based on the dynamic and directional movements of white sharks during the hunting process. These creatures prefer to detect their prey using their sense of smell and hearing rather than directly chasing it. Their advanced hearing abilities enable them to explore the search space effectively. As illustrated in Figure 1, white sharks can detect changes in water pressure, electromagnetic waves, and wave frequencies through the lateral lines extending along both sides of their bodies, allowing them to locate the prey.
The structure of the algorithm is designed to model exploration and exploitation behaviors in the solution space based on these biological features. WSO begins with a randomly generated initial population, where each individual is represented by a d-dimensional solution vector corresponding to the problem’s dimensionality. The initial positions of individuals are randomly selected within predefined lower and upper bounds. The quality of each solution is evaluated using a fitness function.
The WSO algorithm operates based on three main behavioral models:
  • Moving towards prey: White sharks estimate the approximate position of their prey by sensing wave trails and odor signals. This orientation is modeled through velocity and position updates, allowing individuals to explore better regions in the solution space.
  • Moving towards optimal prey: Similarly to fish swarms, white sharks perform random exploratory behaviors to locate their prey. This mechanism enables a broader coverage of the solution space and reduces the risk of premature convergence.
  • Movement towards the best white shark: Individuals move toward the position of the best-performing shark (closest to the prey) and tend to cluster around it. This behavior simulates collective movement and enhances information sharing, leading to faster convergence to better solutions.
In each iteration, the population is updated according to these three strategies, and the best-found solution is preserved. Additionally, the tendency of individuals to move closer to one another reinforces dense convergence within the solution space. The overall functioning of the algorithm is summarized in the pseudo-code presented in Algorithm 1.
Algorithm 1 Pseudo-code of the WSO [13]
1: Initialize the parameters of the problem
2: Initialize the parameters of WSO
3: Randomly generate the initial positions of WSO
4: Initialize the velocity of the initial population
5: Evaluate the positions of the initial population
6: while (k < K) do
7:   Update the parameters $p_{1}$, $p_{2}$, $\mu$, $a$, $b$, $w_{0}$, $f$, $mv$, and $ss$ using their corresponding update rules defined in the algorithm
8:   for i = 1 to n do
9:     $v_{k+1}^{i} = \mu \left[ v_{k}^{i} + p_{1} \left( w_{gbest_{k}} - w_{k}^{i} \right) \times c_{1} + p_{2} \left( w_{best}^{v_{k}^{i}} - w_{k}^{i} \right) \times c_{2} \right]$
10:  end for
11:  for i = 1 to n do
12:    if rand < mv then
13:      $w_{k+1}^{i} = w_{k}^{i} \cdot \neg \oplus w_{0} + u \cdot a + l \cdot b$
14:    else
15:      $w_{k+1}^{i} = w_{k}^{i} + v_{k}^{i} / f$
16:    end if
17:  end for
18:  for i = 1 to n do
19:    if rand < ss then
20:      $D_{w} = \left| rand \times \left( w_{gbest_{k}} - w_{k}^{i} \right) \right|$
21:      if i == 1 then
22:        $w_{k+1}^{i} = w_{gbest_{k}} + r_{1} D_{w} \, \mathrm{sgn}\left( r_{2} - 0.5 \right)$
23:      else
24:        $\grave{w}_{k+1}^{i} = w_{gbest_{k}} + r_{1} D_{w} \, \mathrm{sgn}\left( r_{2} - 0.5 \right)$
25:        $w_{k+1}^{i} = \dfrac{w_{k}^{i} + \grave{w}_{k+1}^{i}}{2 \times rand}$
26:      end if
27:    end if
28:  end for
29:  Adjust the positions of the white sharks that proceed beyond the boundary
30:  Evaluate and update the new positions
31:  k = k + 1
32: end while
33: Return the optimal solution obtained so far
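To make the notation above concrete, the velocity update in line 9 of Algorithm 1 can be written in a few lines of Python/NumPy. The following is only an illustrative sketch, not the authors' implementation; it assumes that positions and velocities are stored row-wise and that $c_{1}$ and $c_{2}$ are uniform random numbers in [0, 1].

import numpy as np

def wso_velocity_update(v, w, w_gbest, w_best_v, p1, p2, mu, rng):
    """Sketch of the WSO velocity update (line 9 of Algorithm 1).

    v        : (n, d) array of current velocities
    w        : (n, d) array of current positions
    w_gbest  : (d,)   global best position found so far
    w_best_v : (n, d) best position tracked for each shark
    p1, p2   : forces toward the global and the individually tracked best
    mu       : constriction factor
    """
    n, d = w.shape
    c1 = rng.random((n, d))  # assumed: uniform random numbers in [0, 1]
    c2 = rng.random((n, d))
    return mu * (v + p1 * (w_gbest - w) * c1 + p2 * (w_best_v - w) * c2)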
Beyond the biological and behavioral modeling of the algorithm’s structure, there are also methodological reasons that support the preference for WSO in the context of the feature selection problem. Although numerous optimization algorithms have been proposed in the literature, there are several reasons why WSO was chosen in this study. First, WSO provides an effective balance between exploration and exploitation phases, which is particularly beneficial for solving complex and multimodal problems such as feature selection. Second, WSO has a low computational cost and has demonstrated competitive performance on various high-dimensional benchmark problems. Finally, the biologically inspired search mechanism based on the predatory behavior of white sharks offers flexible adaptation capabilities that align well with the structure of binary search spaces. All these characteristics make WSO a suitable and potentially powerful candidate for the feature selection problem addressed in this study.

4. The Proposed Binary White Shark Optimizer

The WSO algorithm is designed to provide optimal solutions in continuous search spaces. However, to apply WSO to feature selection problems, a discretization process is required. This section presents the details of the bWSO-log algorithm developed specifically for feature selection, together with the variants obtained using different discretization strategies:
  • bWSO-log: In each position update step of the original WSO algorithm, the logarithmic transfer function described in Section 4.2 is used.
  • bWSO-0: In this version, discretization is performed using a direct thresholding method without employing any transfer function. The obtained continuous value is compared with a randomly generated threshold; if it is greater, the bit is set to 1, otherwise to 0.
  • bWSO-s: Instead of the logarithmic function, an S-shaped transfer function is used.
  • bWSO-v: Instead of the logarithmic function, a V-shaped transfer function is used.
To rigorously assess the performance contribution of the proposed logarithmic transfer function, a baseline variant without any transfer function, denoted as bWSO-0, was also included in the evaluation. Accordingly, the effectiveness of the logarithmic function was empirically analyzed by comparing its optimization performance against the S-Shape, V-Shape, and transfer-function-free (bWSO-0) counterparts.
This section is organized as follows: Section 4.1 explains the Problem Formulation and Solution Representation. Section 4.2 discusses the binarization strategy applied to the original WSO algorithm.

4.1. Problem Formulation and Solution Representation

In this study, the feature selection problem is addressed as a binary optimization task. The primary objective is to identify the optimal subset of features that maximizes classification accuracy while minimizing the number of selected features.
Each candidate solution is represented as a binary vector with a length equal to the number of features in the dataset, as shown in Equation (1).
$X = \left[ x_{1}, x_{2}, \ldots, x_{n} \right], \qquad x_{i} \in \{0, 1\}$
where $n$ represents the total number of features; a value of $x_{i} = 1$ indicates that the $i$-th feature is selected, while $x_{i} = 0$ indicates that the $i$-th feature is excluded. An example of the solution representation is shown in Figure 2, which illustrates the binary vector of a candidate solution in which each element corresponds to a feature of the dataset. In the given illustration, features 3, 7, 8, and 19 are included in the selected subset.
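As a concrete illustration of this encoding, the short NumPy sketch below (with hypothetical data, not taken from the paper's experiments) shows how a binary solution vector selects the corresponding columns of a feature matrix.

import numpy as np

# Hypothetical example: 20 features, with features 3, 7, 8 and 19 selected
# (1-based numbering, as in the Figure 2 illustration).
x = np.zeros(20, dtype=int)
x[[2, 6, 7, 18]] = 1                    # 0-based indices of the selected features

X_full = np.random.rand(100, 20)        # toy dataset: 100 samples, 20 features
X_selected = X_full[:, x == 1]          # keep only the selected columns
print(X_selected.shape)                 # (100, 4)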
Fitness Function: In feature selection problems, the goal is to identify an ideal subset by minimizing the number of selected features while maximizing classification performance. The features selected by each search agent in the algorithm are evaluated using a fitness function based on the K-nearest neighbor (KNN) classifier to estimate classification accuracy. The fitness function is used by optimization algorithms to measure the performance of solution vectors. The fitness function used in this study is shown in Equation (2).
$Fitness = \alpha \, \gamma_{R}(D) + \beta \, \dfrac{|M|}{|N|}$
Here, $Fitness$ denotes the fitness value of a white shark; $\gamma_{R}(D)$ denotes the classification error rate; $|M|$ is the number of selected features; and $|N|$ is the total number of features. The parameters $\alpha$ and $\beta$ correspond to the importance of classification accuracy and of the number of selected features, respectively. In the literature, they are commonly set as $\alpha \in [0, 1]$ and $\beta = (1 - \alpha)$ [11,28,33,38]. In this study, $\alpha$ was set to 0.99 and $\beta$ to 0.01.
Classification Process: The K-nearest neighbor (K-NN) classifier is a commonly used method in feature selection problems [11,28,33]. The Euclidean distance is used in the K-NN classifier, as shown in Equation (3). The value of K was set to 5 to obtain the optimal feature subset.
$distance(x_{a}, x_{b}) = \left( \sum_{i=1}^{N} \left( x_{ai} - x_{bi} \right)^{2} \right)^{0.5}$
where $x_{a}$ and $x_{b}$ denote two instances represented as $N$-dimensional feature vectors, with $N$ corresponding to the total number of features. The term $x_{ai}$ refers to the value of the $i$-th feature of instance $x_{a}$, and $x_{bi}$ similarly refers to the $i$-th feature of instance $x_{b}$.
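A minimal sketch of how Equations (2) and (3) can be combined in practice is given below, using scikit-learn's KNeighborsClassifier (Euclidean distance, K = 5) as the wrapped learner. The function name, the zero-feature guard, and the use of a held-out test split are illustrative assumptions rather than the authors' exact code.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def fitness(mask, X_train, y_train, X_test, y_test, alpha=0.99, beta=0.01):
    """Fitness of a binary feature mask, Equation (2): alpha * error + beta * |M|/|N|."""
    if mask.sum() == 0:                  # assumed guard: no feature selected -> worst fitness
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")   # Equation (3), K = 5
    knn.fit(X_train[:, mask == 1], y_train)
    error = 1.0 - accuracy_score(y_test, knn.predict(X_test[:, mask == 1]))
    return alpha * error + beta * mask.sum() / mask.size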

4.2. Discretization Strategy

In the literature, discretization is performed using various transfer functions. To convert the outputs of these transfer functions into binary values (0 or 1), different thresholding approaches are employed. The thresholding method used in this study is presented in Equation (4).
$w_{k+1}^{i} = \begin{cases} 1, & \text{if } rand < w_{k}^{i} \\ 0, & \text{otherwise} \end{cases}$
where $w_{k}^{i}$ represents the position of the $i$-th white shark at iteration $k$, and $rand$ is a random value in the range $[0, 1]$. In the binary WSO, if the updated position value exceeds the randomly generated value, the corresponding bit is set to 1; otherwise, it is set to 0.
In this study, to evaluate the impact of discretization functions, the bWSO-0 algorithm was first implemented without employing any transfer function. Subsequently, experiments were conducted using S-Shape, V-Shape, and the proposed logarithmic transfer function. The transfer functions used in all proposed bWSO variants are presented in Table 2 and the corresponding Figure 3. Figure 3 presents the graphical representation of the transfer functions used in the bWSO variants, including S-shaped, V-shaped, and the proposed logarithmic transfer function. These curves illustrate how the continuous values generated by the optimization algorithm are converted into binary values (0 or 1) during the discretization process.
The logarithmic transfer function provides a more flexible and gradual probability distribution on the transformation curve compared to traditional S-shaped and V-shaped functions. This characteristic enables the algorithm to maintain a better balance between exploration and exploitation during the search process, thereby preventing entrapment in local minima and improving solution quality. In addition, the natural monotonic structure of the logarithmic function offers a more controlled binary conversion mechanism in the solution space.
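The discretization step can therefore be summarized as follows: each continuous position component is mapped through a transfer function (the proposed logarithmic one, or an S-/V-shaped alternative from Table 2) and then thresholded against a random number as in Equation (4). The sketch below is an illustrative reading of these equations, not the authors' released implementation; the absolute value in the logarithmic function is an assumption added so that the sketch stays defined for negative inputs.

import numpy as np
from scipy.special import erf

def t_log(x):
    # Proposed logarithmic transfer function, log2(x + 1); abs() is an assumption
    # so that negative position values do not fall outside the function's domain.
    return np.log2(np.abs(x) + 1.0)

def t_s1(x):
    # S-shaped transfer function: S1(x) = 1 / (1 + exp(-2x))
    return 1.0 / (1.0 + np.exp(-2.0 * x))

def t_v1(x):
    # V-shaped transfer function: V1(x) = |erf(sqrt(pi)/2 * x)|
    return np.abs(erf(np.sqrt(np.pi) / 2.0 * x))

def binarize(w_continuous, transfer, rng):
    # Equation (4): a bit becomes 1 when rand < transfer(w_i), otherwise 0.
    probs = transfer(w_continuous)
    return (rng.random(w_continuous.shape) < probs).astype(int)

rng = np.random.default_rng(42)
w = rng.normal(size=10)                  # toy continuous positions
print(binarize(w, t_log, rng))           # e.g. an array of 0s and 1s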
The pseudocode of the proposed bWSO algorithm is provided in Algorithm 2.
Algorithm 2 Pseudo-code of the binary WSO
1: Initialize the parameters of the problem
2: Initialize the parameters of WSO
3: Randomly generate the initial positions of WSO
4: Initialize the velocity of the initial population
5: Evaluate the positions of the initial population
6: while (k < K) do
7:   Update the parameters $p_{1}$, $p_{2}$, $\mu$, $a$, $b$, $w_{0}$, $f$, $mv$, and $ss$ using their corresponding update rules defined in the algorithm
8:   for i = 1 to n do
9:     $v_{k+1}^{i} = \mu \left[ v_{k}^{i} + p_{1} \left( w_{gbest_{k}} - w_{k}^{i} \right) \times c_{1} + p_{2} \left( w_{best}^{v_{k}^{i}} - w_{k}^{i} \right) \times c_{2} \right]$
10:  end for
11:  for i = 1 to n do
12:    if rand < mv then
13:      $w_{k+1}^{i} = w_{k}^{i} \cdot \neg \oplus w_{0} + u \cdot a + l \cdot b$
14:      $w_{k+1}^{i} = transferFunction\left( w_{k+1}^{i} \right)$
15:    else
16:      $w_{k+1}^{i} = w_{k}^{i} + v_{k}^{i} / f$
17:      $w_{k+1}^{i} = transferFunction\left( w_{k+1}^{i} \right)$
18:    end if
19:  end for
20:  for i = 1 to n do
21:    if rand < ss then
22:      $D_{w} = \left| rand \times \left( w_{gbest_{k}} - w_{k}^{i} \right) \right|$
23:      if i == 1 then
24:        $w_{k+1}^{i} = w_{gbest_{k}} + r_{1} D_{w} \, \mathrm{sgn}\left( r_{2} - 0.5 \right)$
25:        $w_{k+1}^{i} = transferFunction\left( w_{k+1}^{i} \right)$
26:      else
27:        $\grave{w}_{k+1}^{i} = w_{gbest_{k}} + r_{1} D_{w} \, \mathrm{sgn}\left( r_{2} - 0.5 \right)$
28:        $w_{k+1}^{i} = \dfrac{w_{k}^{i} + \grave{w}_{k+1}^{i}}{2 \times rand}$
29:        $w_{k+1}^{i} = transferFunction\left( w_{k+1}^{i} \right)$
30:      end if
31:    end if
32:  end for
33:  Adjust the positions of the white sharks that proceed beyond the boundary
34:  Evaluate and update the new positions
35:  k = k + 1
36: end while
37: Return the optimal solution obtained so far
Figure 4 presents a block diagram illustrating the step-by-step workflow of the proposed bWSO algorithm. The process begins with the input of the dataset into the algorithm. Next, the algorithm parameters are initialized, and binary white sharks are generated randomly. Each search agent (white shark) is evaluated using a fitness function. The positions of the agents are then updated based on the relevant formulas, followed by the application of transfer functions for the discretization process. The fitness values are recalculated according to the new positions. This loop continues until the maximum number of iterations is reached. Once the iteration limit is exceeded, the best feature subset is returned as the output. This diagram aims to visualize how each component of the algorithm operates systematically.

5. Experiments

This section describes the experimental details. The datasets used in the study were selected due to their open access availability [42], varying levels of difficulty [28], and their usage in other studies in the literature [11,28,31,37]. Detailed information about the datasets is provided in Table 3. All datasets listed in Table 3 were obtained from the UC Irvine Machine Learning Repository [42]. The datasets used in this study were divided into two groups: training and testing sets. The training and testing data ratios were set to 80% and 20%, respectively.
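For illustration, an 80/20 split of one of the Table 3 datasets can be obtained as follows; the Wine data (N4) also ships with scikit-learn, which makes it convenient for a quick sketch. The random seed and stratification are illustrative assumptions, as the exact splitting procedure used in the experiments is not specified here.

from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)        # 178 samples, 13 features (dataset N4)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0, stratify=y)
print(X_train.shape, X_test.shape)       # (142, 13) (36, 13)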
In the experimental study, the proposed bWSO algorithm was compared with eight widely preferred population-based algorithms from the literature [11,24,28,31,37]. The algorithms used in this comparison include AAA [11], BAT [14], FA [15], GWO [16], MFO [17], MVO [18], PSO [19], WOA [20]. To ensure a fair comparison, AAA, BAT, FA, GWO, MFO, MVO, PSO, WOA, and bWSO algorithms were executed on the same platform. The experiments were conducted on a Windows 10 operating system equipped with an Intel(R) Core(TM) i7-10750H CPU @ 2.60 GHz (6 cores, 12 logical processors) and 16 GB of RAM. Coding was performed using Python (version 3.7.3 64bit) programming language within the Visual Studio Code (Version 1.92.2) editor. The EvolopyFS library was utilized in the experimental studies [43]. EvolopyFS is a Python-based library containing frequently used metaheuristic optimization methods. The experiments were carried out with a population size of 30, 500 iterations, and 30 runs.
The hyperparameter values of all algorithms used in the experiments were determined based on default settings that are commonly used in the relevant literature and have been proven effective in previous studies. In particular, the parameters chosen for the WSO algorithm are those recommended in the original study [13] and selected after comprehensive parameter analyses. This approach was adopted to reduce the need for additional optimization of parameter settings and to ensure a fair comparison among different algorithms. Thus, the focus of the experiments was on comparing the fundamental performance of the algorithms. The hyperparameter values of all algorithms are presented in Table 4.

Evaluation Criteria

The performance of the proposed method was analyzed and compared using various evaluation metrics. In this context, the obtained results were evaluated in terms of accuracy, recall, and F1-Score.
Accuracy: indicates the ratio of correctly predicted instances to all predictions. A value close to 1 indicates a highly accurate classification model. It is defined in Equation (5).
$Accuracy = \dfrac{TP + TN}{TP + TN + FP + FN}$
where $TP$ (True Positive) denotes instances that are actually positive and are predicted as positive; $TN$ (True Negative) denotes instances that are actually negative and are predicted as negative; $FP$ (False Positive) denotes instances that are actually negative but are predicted as positive; and $FN$ (False Negative) denotes instances that are actually positive but are predicted as negative.
Recall: shows the ratio of correctly predicted positive instances to all actual positives. A value close to 1 indicates a model with high recall. It is defined in Equation (6).
$Recall = \dfrac{TP}{TP + FN}$
F1-Score: the harmonic mean of Precision and Recall. A value close to 1 indicates a model with a high F1-Score. It is defined in Equation (7).
$F1\text{-}Score = \dfrac{2 \times Precision \times Recall}{Precision + Recall}$
Wilcoxon signed-rank test: a test frequently used in the literature to determine whether the results obtained by two algorithms differ significantly when they are remarkably close to each other. If the obtained p-value is greater than the significance level, the null hypothesis (H0) is accepted, indicating that there is no significant difference between the two methods. If the obtained p-value is less than the significance level, the alternative hypothesis (H1) is accepted, indicating that there is a significant difference. In this study, the significance level was set to p = 0.05 [44].
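These metrics and the significance test map directly onto standard library calls. The snippet below is a sketch with made-up numbers, intended only to show how the quantities in Equations (5)-(7) and the Wilcoxon signed-rank test are computed in practice.

import numpy as np
from sklearn.metrics import accuracy_score, recall_score, f1_score
from scipy.stats import wilcoxon

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])    # toy ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])    # toy predictions

print(accuracy_score(y_true, y_pred))          # Equation (5): 0.75
print(recall_score(y_true, y_pred))            # Equation (6): 0.75
print(f1_score(y_true, y_pred))                # Equation (7): 0.75

# Paired accuracies of two algorithms over several runs (hypothetical values)
acc_a = np.array([0.960, 0.940, 0.950, 0.970, 0.930, 0.960])
acc_b = np.array([0.920, 0.930, 0.935, 0.950, 0.905, 0.925])
stat, p_value = wilcoxon(acc_a, acc_b)
print(p_value < 0.05)                          # True -> significant difference at the 0.05 level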

6. Results and Discussion

In this section, the proposed bWSO-log algorithm and the other algorithms were executed on the relevant datasets, and the experimental results were comprehensively analyzed. Initially, ten different versions of the bWSO algorithm were evaluated. During the analysis, the results obtained using 19 different datasets, as listed in Table 3, were compared. Performance comparisons between the different versions of bWSO and the other algorithms were conducted based on the mean and standard deviation values obtained from 30 independent runs for each algorithm.

6.1. Comparison Between Versions of bWSO

Table 5 presents the mean accuracy, standard deviation, best, and worst results of the binary bWSO variants over 30 independent runs. The best results are highlighted in bold font. Considering the mean accuracy values, comparing the bWSO-0 version with the transfer-function-based bWSO versions shows that using an appropriate transfer function increases the success rate. The bWSO versions using S-shaped transfer functions were more successful than those using V-shaped transfer functions. Considering the overall average ranking, the proposed logarithmic transfer function takes first place. Among the bWSO versions whose Recall values are given in Table 6, the proposed bWSO-log method achieved the best results in nine of the nineteen datasets. Among the bWSO versions whose F1-Score values are given in Table 7, the proposed bWSO-log method was found to be better in ten of the nineteen datasets.

6.2. Comparison with the State of the Art Approaches

In the second part of the experimental study, the proposed binary bWSO-log method was compared with algorithms widely used in the literature. All other algorithms included in the comparison were discretized using the S1 transfer function, which is frequently preferred in the literature. These algorithms are as follows: bAAA, bBAT, bFA, bGWO, bMFO, bMVO, bPSO, and bWOA.
Table 8 presents the performance of the bWSO-log and the other algorithms in terms of classification accuracy. According to the results obtained, the bWSO-log algorithm demonstrated superior performance in eleven out of nineteen datasets. In other words, the bWSO-log method achieved the best performance on approximately 58% of the datasets.
Figure 5 presents a comparative analysis of the convergence performance of the proposed bWSO-log algorithm against other algorithms for the N1, N5, N7, and N10 datasets. Only these four datasets are included as examples to avoid visual clutter and maintain clarity in the figure. Including all datasets would have made the diagram overly complex and difficult to interpret. However, the algorithm applies the same processing steps to the entire feature set of each dataset. As seen in the convergence curves, the proposed algorithm achieves the best results for the N1 and N5 datasets and produces results close to the best for the N7 and N10 datasets.
Table 9 presents the comparison of the accuracy rates of the proposed bWSO-log method with the other algorithms based on the p-values of the Wilcoxon signed-rank test. In Table 9, p-values less than 0.05 are highlighted in bold; these values indicate that the null hypothesis (H0) is rejected and the alternative hypothesis (H1) is accepted, i.e., a statistically significant difference is assumed. Upon examination of Table 9, it is observed that the proposed bWSO-log method achieves a significant difference in terms of fitness values compared to the other algorithms.
The mean, standard deviation, best, and worst F1-Score values are given in Table 10. The proposed bWSO-log method achieves better values than the other algorithms.

7. Conclusions and Future Works

In this study, a novel approach called bWSO-log was proposed for addressing feature selection problems. The WSO, one of the swarm intelligence-based optimization algorithms, was adapted into a binary form using various discretization strategies, and the resulting bWSO-log method was evaluated on nineteen datasets with varying levels of complexity. In addition to commonly used algorithms in the literature (bAAA, bBAT, bFA, bGWO, bMFO, bMVO, bPSO, and bWOA), comparative analyses were conducted with bWSO variants employing S-Shape and V-Shape transfer functions.
To more clearly assess the contribution of the proposed logarithmic transfer function, a version of the algorithm without any transfer function, referred to as bWSO-0, was also included in the experiments. The results demonstrated that the logarithmic function not only significantly enhanced optimization performance but also offered a strong alternative to the widely used S-Shape and V-Shape transfer functions.
The findings indicate that the bWSO-log method is an effective and competitive solution for feature selection problems. However, this study also has certain limitations. First, the proposed algorithm was tested on a limited number of datasets; applying the method to a broader range of datasets from diverse domains would allow for a more comprehensive assessment of its generalizability. Furthermore, the performance of the algorithm is highly dependent on the structure of the logarithmic transfer function and the proper tuning of algorithm parameters, making parameter selection a critical factor.
For future research, different variants or parametric versions of the logarithmic function can be explored to examine their impact on performance, and the proposed algorithm can be compared with recently developed optimization algorithms. Additionally, the bWSO algorithm can be integrated with other feature selection strategies or deep learning-based models to develop more robust and comprehensive hybrid approaches.

Author Contributions

Conceptualization: M.G., A.K. and M.S.K.; Methodology: M.G. and S.G.; Software: S.G.; Validation: M.G., A.K. and M.S.K.; Formal Analysis: M.G., A.K. and M.S.K.; Investigation: S.G., A.K. and M.S.K.; Resources: S.G.; Data curation: S.G. and M.G.; Writing—original draft preparation: S.G., A.K., M.S.K. and M.G.; Writing—review and editing: S.G., A.K., M.S.K. and M.G.; Visualization: S.G.; Supervision: M.G. and M.S.K.; Project administration: M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used in this study are openly available from the UC Irvine Machine Learning Repository (https://archive.ics.uci.edu/ (accessed on 4 August 2025)).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tan, F.; Fu, X.; Zhang, Y.; Bourgeois, A.G. A genetic algorithm-based method for feature subset selection. Soft Comput. 2008, 12, 111–120. [Google Scholar] [CrossRef]
  2. Dash, M.; Liu, H. Feature selection for classification. Intell. Data Anal. 1997, 1, 131–156. [Google Scholar] [CrossRef]
  3. Kahramanli, S.; Hacibeyoglu, M.; Arslan, A. A Boolean function approach to feature selection in consistent decision information systems. Expert Syst. Appl. 2011, 38, 8229–8239. [Google Scholar] [CrossRef]
  4. Solorio-Fernández, S.; Carrasco-Ochoa, J.A.; Martínez-Trinidad, J.F. A review of unsupervised feature selection methods. Artif. Intell. Rev. 2020, 53, 907–948. [Google Scholar] [CrossRef]
  5. Budak, H. Özellik seçim yöntemleri ve yeni bir yaklaşım. Süleyman Demirel Üniversitesi Fen Bilim. Enstitüsü Derg. 2018, 22, 21–31. [Google Scholar] [CrossRef]
  6. De Silva, A.M.; Leong, P.H. Feature selection. In Grammar-Based Feature Generation for Time-Series Prediction; Springer: Berlin/Heidelberg, Germany, 2015; pp. 13–24. [Google Scholar] [CrossRef]
  7. Summair, M.; Qamar, U. Understanding and Using Rough Set Based Feature Selection: Concepts, Techniques and Applications; Springer: Singapore, 2020. [Google Scholar] [CrossRef]
  8. Kiran, M.S.; Gündüz, M. XOR-based artificial bee colony algorithm for binary optimization. Turk. J. Electr. Eng. Comput. Sci. 2013, 21, 2307–2328. [Google Scholar] [CrossRef]
  9. Gunduz, M.; Aslan, M. DJAYA: A discrete Jaya algorithm for solving traveling salesman problem. Appl. Soft Comput. 2021, 105, 107275. [Google Scholar] [CrossRef]
  10. Aslan, M.; Gunduz, M.; Kiran, M.S. JayaX: Jaya algorithm with xor operator for binary optimization. Appl. Soft Comput. 2019, 82, 105576. [Google Scholar] [CrossRef]
  11. Turkoglu, B.; Uymaz, S.A.; Kaya, E. Binary artificial algae algorithm for feature selection. Appl. Soft Comput. 2022, 120, 108630. [Google Scholar] [CrossRef]
  12. Emary, E.; Zawbaa, H.M.; Hassanien, A.E. Binary grey wolf optimization approaches for feature selection. Neurocomputing 2016, 172, 371–381. [Google Scholar] [CrossRef]
  13. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl. Based Syst. 2022, 243, 108457. [Google Scholar] [CrossRef]
  14. Yang, X.-S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar] [CrossRef]
  15. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  16. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  17. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  18. Aljarah, I.; Mafarja, M.; Heidari, A.A.; Faris, H.; Mirjalili, S. Multi-verse optimizer: Theory, literature review, and application in data clustering. In Nature-Inspired Optimizers: Theories, Literature Reviews and Applications; Springer: Berlin/Heidelberg, Germany, 2020; pp. 123–141. [Google Scholar] [CrossRef]
  19. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  20. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  21. Abdollahzadeh, B.; Gharehchopogh, F.S. A multi-objective optimization algorithm for feature selection problems. Eng. Comput. 2022, 38, 1845–1863. [Google Scholar] [CrossRef]
  22. Wang, H.; Khoshgoftaar, T.M.; Van Hulse, J. A comparative study of threshold-based feature selection techniques. In Proceedings of the 2010 IEEE International Conference on Granular Computing, San Jose, CA, USA, 14–16 August 2010; pp. 499–504. [Google Scholar]
  23. Hancer, E.; Xue, B.; Karaboga, D.; Zhang, M. A binary ABC algorithm based on advanced similarity scheme for feature selection. Appl. Soft Comput. 2015, 36, 334–348. [Google Scholar] [CrossRef]
  24. Hans, R.; Kaur, H. Binary multi-verse optimization (BMVO) approaches for feature selection. Int. J. Interact. Multimed. Artif. Intell. 2020, 6, 91–106. [Google Scholar] [CrossRef]
  25. Zhou, Y.; Lin, J.; Guo, H. Feature subset selection via an improved discretization-based particle swarm optimization. Appl. Soft Comput. 2021, 98, 106794. [Google Scholar] [CrossRef]
  26. Rostami, M.; Berahmand, K.; Nasiri, E.; Forouzandeh, S. Review of swarm intelligence-based feature selection methods. Eng. Appl. Artif. Intell. 2021, 100, 104210. [Google Scholar] [CrossRef]
  27. Zhu, X.; Wang, Y.; Li, Y.; Tan, Y.; Wang, G.; Song, Q. A new unsupervised feature selection algorithm using similarity-based feature clustering. Comput. Intell. 2019, 35, 2–22. [Google Scholar] [CrossRef]
  28. Baş, E.; Ülker, E. An efficient binary social spider algorithm for feature selection problem. Expert Syst. Appl. 2020, 146, 113185. [Google Scholar] [CrossRef]
  29. Hu, Y.; Zheng, J.; Zou, J.; Yang, S.; Ou, J.; Wang, R. A dynamic multi-objective evolutionary algorithm based on intensity of environmental change. Inf. Sci. 2020, 523, 49–62. [Google Scholar] [CrossRef]
  30. Wang, C.; Pan, H.; Su, Y. A many-objective evolutionary algorithm with diversity-first based environmental selection. Swarm Evol. Comput. 2020, 53, 100641. [Google Scholar] [CrossRef]
  31. Rodrigues, D.; Pereira, L.A.; Nakamura, R.Y.; Costa, K.A.; Yang, X.-S.; Souza, A.N.; Papa, J.P. A wrapper approach for feature selection and optimum-path forest based on bat algorithm. Expert Syst. Appl. 2014, 41, 2250–2258. [Google Scholar] [CrossRef]
  32. Zawbaa, H.M.; Emary, E.; Parv, B.; Sharawi, M. Feature selection approach based on moth-flame optimization algorithm. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 4612–4617. [Google Scholar]
  33. Mafarja, M.M.; Mirjalili, S. Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312. [Google Scholar] [CrossRef]
  34. Zhang, Y.; Song, X.-f.; Gong, D.-w. A return-cost-based binary firefly algorithm for feature selection. Inf. Sci. 2017, 418, 561–574. [Google Scholar] [CrossRef]
  35. Aziz, M.A.E.; Hassanien, A.E. Modified cuckoo search algorithm with rough sets for feature selection. Neural Comput. Appl. 2018, 29, 925–934. [Google Scholar] [CrossRef]
  36. Abdel-Basset, M.; El-Shahat, D.; El-Henawy, I.; De Albuquerque, V.H.C.; Mirjalili, S. A new fusion of grey wolf optimizer algorithm with a two-phase mutation for feature selection. Expert Syst. Appl. 2020, 139, 112824. [Google Scholar] [CrossRef]
  37. Kılıç, F.; Kaya, Y.; Yildirim, S. A novel multi population based particle swarm optimization for feature selection. Knowl. Based Syst. 2021, 219, 106894. [Google Scholar] [CrossRef]
  38. Alawad, N.A.; Abed-alguni, B.H.; Al-Betar, M.A.; Jaradat, A. Binary improved white shark algorithm for intrusion detection systems. Neural Comput. Appl. 2023, 35, 19427–19451. [Google Scholar] [CrossRef]
  39. Zhang, K.; Liu, Y.; Mei, F.; Sun, G.; Jin, J. IBGJO: Improved binary golden jackal optimization with chaotic tent map and cosine similarity for feature selection. Entropy 2023, 25, 1128. [Google Scholar] [CrossRef]
  40. Ragab, M. Hybrid firefly particle swarm optimisation algorithm for feature selection problems. Expert Syst. 2024, 41, e13363. [Google Scholar] [CrossRef]
  41. Turabieh, H.; Mafarja, M.; Li, X. Iterated feature selection algorithms with layered recurrent neural network for software fault prediction. Expert Syst. Appl. 2019, 122, 27–42. [Google Scholar] [CrossRef]
  42. Aha, D. UC Irvine Machine Learning Repository. Available online: https://archive.ics.uci.edu/ (accessed on 3 August 2025).
  43. Khurma, R.A.; Aljarah, I.; Sharieh, A.; Mirjalili, S. Evolopy-fs: An open-source nature-inspired optimization framework in python for feature selection. In Evolutionary Machine Learning Techniques; Springer: Berlin/Heidelberg, Germany, 2020; pp. 131–173. [Google Scholar] [CrossRef]
  44. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar] [CrossRef]
Figure 1. The white shark has auditory lines on both sides of its body [13].
Figure 2. An example of solution representation [41].
Figure 3. Graphical representation of the transfer functions (a) S-shaped, (b) V-shaped, (c) logarithmic.
Figure 4. The block diagram of the proposed bWSO method.
Figure 5. Average of fitness for the proposed bWSO-log algorithm in comparison to other algorithms (a) For N1 Breast cancer Wisconsin (original) dataset (b) For N5 Zoo dataset (c) For N7 Climate Model Simulation Crashes dataset (d) N10 Parkinsons dataset.
Table 1. Overview of swarm intelligence techniques employed in feature selection.
Year | Algorithm | Description
2014 | BBA [31] | A feature selection method based on the BAT algorithm and the Optimum-Path Forest classifier has been proposed.
2015 | DisABC [23] | The ABC algorithm has been discretized using bitwise operators, transfer functions, and a Jaccard similarity-based mutation strategy.
2016 | MFO [32] | The MFO algorithm, developed by drawing inspiration from the nocturnal flight strategy of moths, has been employed in the feature selection process.
2017 | WOASA [33] | To achieve optimal feature selection, a hybrid binary whale optimization algorithm (WOASA) was proposed by integrating Simulated Annealing (SA) with the WOA.
2017 | Rc-BBFA [34] | A return-cost-based binary Firefly Algorithm (Rc-BBFA) has been proposed for feature selection. The method incorporates a mechanism specifically designed to prevent premature convergence.
2018 | MCSRS [35] | A modified cuckoo search-based RS (MCSRS) method with coarse clusters has been developed for feature selection.
2020 | TMGWO [36] | A novel GWO algorithm incorporating a two-stage mutation mechanism has been proposed for feature selection.
2020 | BMVO [24] | For feature selection, the MVO algorithm was discretized using transfer functions, and the BMVO algorithm was introduced.
2020 | BinSSA [28] | A binary version of the Social Spider Algorithm, called the Binary Social Spider Algorithm (BinSSA), was proposed. Additionally, transfer functions (S-Shape, V-Shape) and a crossover operator were utilized.
2021 | MPPSO [37] | A novel multi-population-based Particle Swarm Optimization (MPPSO) algorithm has been proposed for feature selection.
2021 | IDPSO-FS [25] | Using an entropy-based cut-point method, predefined cut-points were selected and optimized by PSO to discretize the features.
2022 | BAAA [11] | For feature selection, a binary version of the AAA was developed using transfer functions (S-Shape, V-Shape).
2023 | BIWSO [38] | A binary version of the WSO using transfer functions (S-shape, V-shape) has been developed for feature selection.
2023 | IBGJO [39] | An improved Binary Golden Jackal Optimization (IBGJO) was proposed for feature selection, using Chaotic Tent Map and Cosine Similarity to enhance solution quality and diversity.
2024 | HFPSO [40] | A hybrid metaheuristic algorithm combining PSO and FA methods is proposed for feature selection.
Table 2. Definition of the transfer functions (S-shaped, V-shaped and proposed function).
Proposed bWSO | Transfer Function
bWSO-log | Logarithmic: $T(x) = \log_{2}(x + 1)$
bWSO-s1 | $S_{1}(x) = \dfrac{1}{1 + e^{-2x}}$
bWSO-s2 | $S_{2}(x) = \dfrac{1}{1 + e^{-x}}$
bWSO-s3 | $S_{3}(x) = \dfrac{1}{1 + e^{-x/2}}$
bWSO-s4 | $S_{4}(x) = \dfrac{1}{1 + e^{-x/3}}$
bWSO-v1 | $V_{1}(x) = \left| \mathrm{erf}\left( \dfrac{\sqrt{\pi}}{2} x \right) \right|$
bWSO-v2 | $V_{2}(x) = \left| \tanh(x) \right|$
bWSO-v3 | $V_{3}(x) = \left| x / \sqrt{1 + x^{2}} \right|$
bWSO-v4 | $V_{4}(x) = \left| \dfrac{2}{\pi} \arctan\left( \dfrac{\pi}{2} x \right) \right|$
bWSO-0 | Transfer function not used.
Table 3. Dataset description.
ID | Dataset Name | Features | Samples | Classes | Missing Values | Subject Area
N1 | Breast cancer Wisconsin (original) | 9 | 699 | 2 | Yes | Health and Medicine
N2 | Planning relax | 12 | 182 | 2 | No | Computer Science
N3 | Heart failure clinical records | 12 | 299 | 2 | No | Health and Medicine
N4 | Wine | 13 | 178 | 3 | No | Physics and Chemistry
N5 | Zoo | 16 | 101 | 7 | No | Biology
N6 | Lymphography | 18 | 148 | 4 | No | Health and Medicine
N7 | Climate Model Simulation Crashes | 18 | 540 | 2 | No | Climate and Environment
N8 | Hepatitis | 19 | 155 | 2 | Yes | Health and Medicine
N9 | Waveform (Version 1) | 21 | 5000 | 3 | No | Physics and Chemistry
N10 | Parkinsons | 22 | 195 | 2 | No | Health and Medicine
N11 | Breast Cancer Wisconsin (Diagnostic) | 30 | 569 | 2 | No | Health and Medicine
N12 | Ionosphere | 34 | 351 | 2 | No | Physics and Chemistry
N13 | Dermatology | 34 | 366 | 6 | Yes | Health and Medicine
N14 | Soybean Small | 35 | 47 | 4 | No | Biology
N15 | Qsar Biodegradation | 41 | 1055 | 2 | No | Other
N16 | Lung Cancer | 56 | 32 | 3 | Yes | Health and Medicine
N17 | Spambase | 57 | 4601 | 2 | No | Computer Science
N18 | Sonar | 60 | 208 | 2 | No | Physics and Chemistry
N19 | CNAE-9 | 856 | 1080 | 9 | No | Business
Table 4. Hyperparameters values used in all algorithms.
Algorithm | Hyperparameter | Value
WSO | $f_{max}$ | 0.75
WSO | $f_{min}$ | 0.07
WSO | $p_{max}$ | 1.5
WSO | $p_{min}$ | 0.5
WSO | $tau$ | 4.11
WSO | $a_{0}$ | 6.25
WSO | $a_{1}$ | 100
WSO | $a_{2}$ | 0.0005
AAA | Share force | 2
AAA | $e$ (Energy loss) | 0.3
AAA | $Ap$ (Adaptation) | 0.2
BAT | $A$ (Loudness) | 0.5
BAT | $r$ (Pulse rate) | 0.5
BAT | $Q$ | [0, 2]
FA | alpha (Randomness) | 0.5
FA | Betamin (minimum value of beta) | 0.2
FA | Gamma (absorption coefficient) | 1
GWO | alpha | 2
MFO | $b$ | 1
MVO | $wep_{max}$ | 1
MVO | $wep_{min}$ | 0.2
PSO | $c_{1}$ | 2
PSO | $c_{2}$ | 2
PSO | $w_{max}$ | 0.9
PSO | $w_{min}$ | 0.2
WOA | alpha | 2
Table 5. The test accuracy and detailed results of the bWSO-Log and other versions.
ID | bWSO-log | bWSO-0 | bWSO-s1 | bWSO-s2 | bWSO-s3 | bWSO-s4 | bWSO-v1 | bWSO-v2 | bWSO-v3 | bWSO-v4
N1↑mean0.9600.9590.9520.9550.9460.9540.9500.9520.9460.940
±std0.0170.0190.0220.0160.0250.0170.0240.0180.0230.028
↑best0.9861.0000.9930.9860.9860.9860.9860.9790.9860.986
↑worst0.9210.9140.9140.9290.8860.9070.8860.9290.8790.850
N2↑mean0.6610.6450.6270.6120.6380.6220.6100.6380.6320.634
±std0.0560.0710.0670.0790.0680.0670.0790.0510.0870.073
↑best0.7570.8380.7840.7570.7840.7570.7570.7300.7570.784
↑worst0.5410.5140.4860.4590.4590.4860.4320.5410.3780.514
N3↑mean0.8410.8260.8190.8090.8170.8350.7990.8000.8210.798
±std0.0500.0690.0640.0760.0610.0630.0770.0740.0650.087
↑best0.9170.9500.9330.9000.9330.9330.9330.8830.9170.900
↑worst0.7330.6330.6670.6000.6830.6670.6170.5830.7000.533
N4↑mean0.9120.9050.9030.8890.9080.8960.8970.9120.8950.882
±std0.0570.0520.0720.0740.0570.0690.0540.0510.0530.079
↑best1.0001.0001.0001.0000.9721.0000.9720.9721.0001.000
↑worst0.7500.7780.6940.6670.6940.6940.7500.7500.7780.722
N5↑mean0.8560.8060.8290.8130.8300.8520.8240.8350.7940.830
±std0.0850.1150.0930.1010.0990.1030.0810.0920.1200.105
↑best1.0001.0000.9521.0001.0001.0000.9520.9521.0000.952
↑worst0.6190.4760.5240.5710.6190.5710.6670.6190.4290.524
N6↑mean0.7680.7610.7490.7220.7400.7640.7270.7310.7460.753
±std0.0620.0730.0730.0950.1030.0870.0840.0910.0890.088
↑best0.9330.9000.9000.9000.9330.9000.9000.9000.9000.900
↑worst0.6670.6000.6000.5000.4670.5330.5330.5000.6000.500
N7↑mean0.9400.9350.9260.9300.9250.9250.9270.9230.9260.923
±std0.0200.0210.0230.0280.0260.0260.0270.0310.0300.026
↑best0.9720.9630.9810.9720.9720.9720.9720.9720.9810.981
↑worst0.9070.8800.8610.8800.8700.8800.8700.8520.8520.880
N8↑mean0.8220.8130.8040.7820.8050.8140.8050.7900.7890.799
±std0.0570.0430.0660.0550.0670.0590.0660.0640.0850.082
↑best0.9350.9030.9680.9030.9350.9030.9350.9030.9350.935
↑worst0.7420.7420.6770.6770.6450.6450.6130.6450.5810.613
N9↑mean0.8200.8170.8170.8150.8180.8120.8140.8170.8120.808
±std0.0100.0110.0140.0140.0140.0110.0160.0110.0110.013
↑best0.8450.8430.8480.8410.8420.8320.8400.8360.8350.827
↑worst0.8020.7900.7980.7890.7850.7920.7760.7930.7870.778
N10↑mean0.8170.8580.8540.8580.8500.8650.8400.8550.8370.814
±std0.0130.0530.0660.0610.0760.0630.0760.0660.0480.077
↑best0.8440.9740.9740.9490.9740.9740.9490.9490.9230.949
↑worst0.7800.7440.7440.6920.6670.6920.5900.6410.7440.641
N11↑mean0.9360.9270.9330.9280.9340.9300.9280.9190.9230.924
±std0.0240.0390.0200.0250.0240.0270.0270.0240.0250.021
↑best0.9910.9740.9820.9740.9740.9820.9740.9650.9650.965
↑worst0.8600.7630.8950.8600.8680.8510.8680.8860.8680.895
N12↑mean0.8550.8640.8230.8380.8460.8490.8660.8800.8840.860
±std0.0360.0440.0430.0610.0420.0510.0450.0470.0370.052
↑best0.9300.9860.9010.9440.9300.9440.9440.9580.9440.944
↑worst0.7890.7890.7040.6340.7320.7610.7890.7890.8170.732
N13↑mean0.9330.9320.9570.9480.9450.9320.9160.8990.9230.918
±std0.0470.0380.0350.0330.0310.0370.0380.0390.0420.035
↑best0.9861.0001.0001.0000.9860.9860.9730.9731.0000.986
↑worst0.8110.8380.8510.8650.8510.8240.8380.8240.8380.838
N14↑mean0.9330.8370.9370.9530.9500.9430.8800.8730.9230.873
±std0.0960.1900.1100.0900.1140.1010.1240.1340.1140.178
↑best1.0001.0001.0001.0001.0001.0001.0001.0001.0001.000
↑worst0.7000.4000.6000.7000.5000.7000.6000.6000.6000.100
N15↑mean0.8300.8340.8410.8390.8360.8290.8370.8280.8260.834
±std0.0200.0290.0220.0220.0180.0250.0210.0230.0280.025
↑best0.8580.8960.8770.8820.8720.9050.8770.8720.8910.867
↑worst0.7770.7770.8010.7960.8010.7870.7820.7820.7630.768
N16↑mean0.5570.3670.4710.5240.4670.4430.4290.4190.4380.395
±std0.1890.1790.1880.1520.2400.2170.1880.1540.1870.218
↑best0.8570.8570.8570.8571.0000.8570.7140.7140.7140.857
↑worst0.1430.1430.0000.2860.0000.0000.1430.1430.1430.000
N17↑mean0.9180.9150.9180.9130.9120.9130.9140.9160.9140.914
±std0.0080.0100.0100.0100.0120.0080.0120.0070.0090.011
↑best0.9330.9360.9400.9370.9350.9320.9310.9290.9330.935
↑worst0.9070.8970.9020.8950.8710.9000.8730.9030.8950.893
N18↑mean0.7970.7670.7810.7960.7810.7900.7710.7640.7710.775
±std0.0650.0740.0630.0670.0550.0550.0750.0680.0600.069
↑best0.9050.9050.9050.9290.8810.8810.9050.8810.9290.881
↑worst0.6670.5710.6430.6670.6900.6430.6430.6190.6430.619
N19↑mean0.8880.8770.8640.8520.8160.8180.8380.8360.7990.803
±std0.0210.0260.0290.0260.0340.0280.0360.0270.0290.037
↑best0.9310.9440.9030.9210.8800.8560.9030.9070.8660.875
↑worst0.8290.7820.7780.8010.7180.7410.7450.7870.7220.722
↑Best Count | 14.000 | 0.000 | 3.000 | 1.000 | 0.000 | 1.000 | 0.000 | 1.000 | 1.000 | 0.000
↓Average Sort | 2.368 | 4.789 | 4.474 | 5.474 | 5.053 | 4.947 | 6.579 | 6.263 | 7.158 | 7.579
↓Ranking | 1.000 | 3.000 | 2.000 | 6.000 | 5.000 | 4.000 | 8.000 | 7.000 | 9.000 | 10.000
Table 6. The test recall and detailed results of the bWSO-log and other versions.
ID | bWSO-log | bWSO-0 | bWSO-s1 | bWSO-s2 | bWSO-s3 | bWSO-s4 | bWSO-v1 | bWSO-v2 | bWSO-v3 | bWSO-v4
N1↑mean0.9580.9570.9460.9520.9420.9500.9460.9460.9400.935
±std0.0220.0220.0250.0170.0270.0200.0320.0200.0270.037
↑best0.9911.0000.9950.9840.9900.9890.9890.9790.9840.990
↑worst0.9130.9000.8940.9150.8860.9030.8540.9140.8750.808
N2↑mean0.4970.4880.4740.4650.4810.4780.4710.4690.4800.482
±std0.0490.0670.0520.0730.0760.0640.0590.0550.0760.067
↑best0.6130.6700.6310.6130.6690.6150.5870.5980.6330.669
↑worst0.4110.4200.3970.3150.2930.3620.3530.3790.3040.339
N3↑mean0.7970.7780.7620.7560.7590.7860.7550.7490.7690.743
±std0.0570.0830.0790.0900.0820.0730.0930.0750.0860.104
↑best0.8890.9120.9460.8890.8700.9110.9130.8380.8700.877
↑worst0.6750.5790.6010.4810.5200.5500.5500.5290.5420.447
N4↑mean0.9250.9130.9100.9020.9140.9070.9020.9170.9050.895
±std0.0460.0500.0730.0680.0580.0640.0520.0570.0480.077
↑best1.0001.0001.0001.0000.9821.0000.9810.9781.0001.000
↑worst0.8060.7780.7060.6750.7120.7120.7500.7150.7900.694
N5↑mean0.7210.6760.7040.6690.6850.7370.6730.6670.6540.693
±std0.1440.1390.1210.1700.1490.1660.1320.1560.1390.165
↑best1.0001.0000.9791.0001.0001.0000.9440.9821.0000.983
↑worst0.4290.3430.5000.3330.3810.4080.4290.3330.4720.408
N6↑mean0.5730.5500.5990.5410.5110.5650.5630.5480.5330.556
±std0.1520.1450.1680.1660.1480.1670.1520.1410.1300.156
↑best0.9150.9060.8520.8970.9380.8920.9350.9000.8710.906
↑worst0.3760.3750.3260.2970.3210.2920.3180.3330.3470.338
N7↑mean0.6770.6920.6490.6770.6520.6440.6800.6620.6620.640
±std0.1240.1010.0830.0880.0770.0960.0940.0880.0820.093
↑best0.8700.8810.8950.8900.8330.8330.8230.8410.8000.828
↑worst0.4810.4950.4950.5420.4900.4750.4900.5190.4850.485
N8↑mean0.6440.6530.6540.6430.6660.6860.6850.6490.6530.650
±std0.1200.1020.1200.0940.1190.0900.1200.0860.1110.120
↑best0.9640.8770.9810.7980.9500.8360.9290.8380.8190.946
↑worst0.4660.4140.4800.4620.3970.5000.4460.5000.4420.432
N9↑mean0.8190.8170.8170.8150.8180.8120.8140.8180.8120.808
±std0.0100.0110.0140.0140.0140.0110.0160.0110.0110.014
↑best0.8460.8410.8470.8410.8410.8320.8400.8360.8380.827
↑worst0.8020.7900.8000.7900.7880.7940.7770.7950.7870.774
N10↑mean0.7750.7750.7780.7820.7770.7820.7550.7700.7600.736
±std0.0920.1000.1020.0860.1010.1020.1020.0900.0730.112
↑best0.9330.9850.9520.9370.9580.9850.9110.9160.9550.969
↑worst0.6000.5850.5940.5930.4920.5540.5280.5000.6000.538
N11↑mean0.9260.9170.9220.9200.9250.9220.9190.9060.9110.914
±std0.0270.0460.0220.0270.0260.0310.0300.0300.0310.024
↑best0.9870.9740.9810.9690.9740.9770.9740.9630.9620.953
↑worst0.8490.7200.8810.8470.8540.8310.8480.8680.8300.877
N12↑mean0.7980.8300.7710.7870.7990.7930.8330.8460.8550.828
±std0.0410.0580.0480.0600.0440.0620.0530.0570.0420.058
↑best0.9030.9790.8590.9150.8860.9330.9370.9530.9390.909
↑worst0.7170.7200.6670.6180.7220.6880.7290.7280.7780.677
N13↑mean0.9170.9210.9480.9360.9440.9270.9010.8700.9060.892
±std0.0640.0420.0440.0460.0310.0350.0520.0590.0610.052
↑best0.9901.0001.0001.0000.9890.9850.9690.9781.0000.986
↑worst0.7410.8280.8210.8070.8540.8460.7820.7530.7740.775
N14↑mean0.9210.8730.9490.9530.9480.9380.8720.8700.9290.856
±std0.1100.1520.0820.0950.1090.1130.1260.1470.1010.173
↑best1.0001.0001.0001.0001.0001.0001.0001.0001.0001.000
↑worst0.6670.4250.6880.6250.6250.5830.5420.5000.7000.250
N15↑mean0.8170.8230.8280.8250.8220.8170.8240.8110.8100.818
±std0.0240.0300.0290.0250.0200.0270.0290.0300.0300.031
↑best0.8480.8960.8790.8670.8630.8960.8690.8610.8770.869
↑worst0.7630.7540.7660.7790.7820.7650.7520.7610.7510.745
N16↑mean0.6010.3870.5120.6030.4940.4910.4600.4840.4460.414
±std0.1950.1800.1930.1720.2480.2430.2070.1650.2040.218
↑best0.8890.8890.8330.9171.0000.8330.7780.8330.8670.833
↑worst0.2220.0670.0000.2220.0000.0000.0830.1670.0560.000
N17↑mean0.9140.9110.9140.9080.9060.9070.9090.9110.9090.909
±std0.0090.0110.0110.0120.0140.0090.0100.0080.0090.011
↑best0.9290.9330.9380.9390.9310.9270.9260.9270.9290.930
↑worst0.8990.8880.8930.8840.8560.8890.8790.8990.8900.887
N18↑mean0.7940.7660.7770.7920.7790.7890.7700.7630.7690.776
±std0.0650.0730.0620.0660.0590.0570.0710.0680.0600.065
↑best0.9040.9040.8990.9310.8840.8930.8870.8820.9240.880
↑worst0.6550.6060.6190.6670.6600.6650.6450.6430.6460.625
N19↑mean0.8900.8780.8670.8530.8160.8200.8390.8360.8030.802
±std0.0190.0280.0270.0250.0330.0280.0360.0280.0280.037
↑best0.9300.9450.9020.9120.8780.8620.9030.9080.8670.869
↑worst0.8360.7730.7960.8030.7350.7550.7450.7780.7350.711
↑Best Count9.0001.0003.0003.0000.0002.0000.0000.0001.0000.000
↓Average Sort3.0534.8423.9475.3165.0534.9476.1056.8957.1057.737
↓Ranking1.0003.0002.0006.0005.0004.0007.0008.0009.00010.000
Table 7. The test F1-Score and detailed results of the bWSO-log and other versions.
ID bWSO-log bWSO-0 bWSO-s1 bWSO-s2 bWSO-s3 bWSO-s4 bWSO-v1 bWSO-v2 bWSO-v3 bWSO-v4
N1 ↑mean 0.955 0.954 0.946 0.950 0.940 0.949 0.944 0.947 0.940 0.932
±std 0.019 0.021 0.025 0.018 0.027 0.019 0.027 0.020 0.026 0.033
↑best 0.984 1.000 0.992 0.985 0.985 0.984 0.984 0.977 0.984 0.984
↑worst 0.916 0.904 0.890 0.915 0.872 0.904 0.866 0.919 0.870 0.820
N2 ↑mean 0.474 0.466 0.447 0.442 0.461 0.450 0.441 0.445 0.465 0.457
±std 0.061 0.082 0.061 0.086 0.079 0.076 0.070 0.062 0.085 0.074
↑best 0.624 0.702 0.648 0.624 0.648 0.611 0.589 0.580 0.633 0.681
↑worst 0.373 0.339 0.351 0.315 0.315 0.327 0.339 0.362 0.275 0.339
N3 ↑mean 0.807 0.787 0.773 0.763 0.768 0.797 0.761 0.757 0.777 0.752
±std 0.059 0.085 0.079 0.094 0.088 0.080 0.098 0.086 0.091 0.107
↑best 0.899 0.935 0.921 0.872 0.888 0.911 0.923 0.854 0.881 0.877
↑worst 0.683 0.576 0.603 0.467 0.489 0.520 0.551 0.496 0.531 0.444
N4 ↑mean 0.913 0.903 0.903 0.888 0.908 0.897 0.897 0.909 0.896 0.882
±std 0.056 0.053 0.073 0.076 0.057 0.068 0.053 0.057 0.049 0.082
↑best 1.000 1.000 1.000 1.000 0.976 1.000 0.974 0.974 1.000 1.000
↑worst 0.756 0.771 0.691 0.665 0.704 0.705 0.754 0.723 0.774 0.679
N5 ↑mean 0.680 0.626 0.661 0.627 0.634 0.710 0.617 0.621 0.605 0.638
±std 0.158 0.147 0.122 0.183 0.169 0.170 0.145 0.156 0.153 0.172
↑best 1.000 1.000 0.933 1.000 1.000 1.000 0.943 0.970 1.000 0.925
↑worst 0.288 0.289 0.429 0.256 0.330 0.343 0.364 0.295 0.406 0.334
N6 ↑mean 0.563 0.535 0.591 0.528 0.497 0.556 0.547 0.530 0.522 0.547
±std 0.158 0.146 0.171 0.172 0.152 0.172 0.155 0.150 0.135 0.159
↑best 0.915 0.900 0.867 0.899 0.933 0.898 0.877 0.890 0.867 0.900
↑worst 0.348 0.361 0.326 0.263 0.310 0.276 0.291 0.227 0.342 0.338
N7 ↑mean 0.709 0.719 0.686 0.713 0.693 0.671 0.705 0.690 0.694 0.668
±std 0.134 0.094 0.088 0.088 0.094 0.116 0.103 0.091 0.095 0.113
↑best 0.893 0.847 0.895 0.877 0.893 0.893 0.877 0.852 0.870 0.856
↑worst 0.476 0.478 0.481 0.550 0.468 0.468 0.465 0.521 0.481 0.471
N8 ↑mean 0.637 0.655 0.651 0.639 0.657 0.688 0.661 0.643 0.643 0.639
±std 0.117 0.102 0.119 0.101 0.106 0.104 0.115 0.090 0.126 0.122
↑best 0.856 0.854 0.935 0.793 0.897 0.854 0.856 0.805 0.834 0.897
↑worst 0.426 0.436 0.415 0.436 0.426 0.436 0.380 0.436 0.415 0.380
N9 ↑mean 0.819 0.816 0.816 0.814 0.818 0.811 0.814 0.817 0.811 0.807
±std 0.010 0.011 0.014 0.014 0.014 0.011 0.016 0.011 0.011 0.014
↑best 0.845 0.842 0.847 0.841 0.841 0.831 0.839 0.836 0.835 0.827
↑worst 0.801 0.790 0.798 0.789 0.786 0.792 0.776 0.793 0.787 0.774
N10 ↑mean 0.785 0.780 0.782 0.794 0.785 0.791 0.753 0.782 0.757 0.734
±std 0.088 0.092 0.097 0.080 0.107 0.102 0.096 0.099 0.063 0.113
↑best 0.933 0.954 0.947 0.937 0.969 0.954 0.902 0.913 0.876 0.933
↑worst 0.607 0.607 0.606 0.606 0.463 0.541 0.519 0.450 0.606 0.478
N11 ↑mean 0.930 0.920 0.928 0.923 0.927 0.924 0.923 0.911 0.916 0.918
±std 0.026 0.045 0.021 0.027 0.025 0.029 0.028 0.027 0.027 0.022
↑best 0.990 0.971 0.981 0.973 0.972 0.981 0.973 0.963 0.962 0.962
↑worst 0.853 0.728 0.887 0.842 0.862 0.841 0.861 0.870 0.847 0.884
N12 ↑mean 0.820 0.843 0.787 0.803 0.816 0.812 0.846 0.859 0.868 0.840
±std 0.041 0.055 0.052 0.070 0.047 0.064 0.052 0.056 0.041 0.059
↑best 0.917 0.984 0.880 0.928 0.912 0.939 0.937 0.955 0.939 0.930
↑worst 0.743 0.735 0.658 0.602 0.709 0.700 0.743 0.743 0.797 0.683
N13 ↑mean 0.917 0.921 0.949 0.937 0.937 0.922 0.902 0.866 0.904 0.895
±std 0.068 0.043 0.043 0.042 0.032 0.037 0.047 0.061 0.063 0.050
↑best 0.986 1.000 1.000 1.000 0.989 0.985 0.969 0.978 1.000 0.980
↑worst 0.730 0.820 0.815 0.819 0.851 0.846 0.801 0.747 0.737 0.771
N14 ↑mean 0.903 0.831 0.937 0.947 0.941 0.931 0.844 0.839 0.910 0.822
±std 0.139 0.193 0.111 0.105 0.126 0.126 0.151 0.171 0.132 0.209
↑best 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
↑worst 0.546 0.321 0.583 0.631 0.567 0.593 0.600 0.500 0.622 0.050
N15 ↑mean 0.813 0.818 0.822 0.820 0.817 0.812 0.819 0.807 0.807 0.817
±std 0.024 0.030 0.027 0.023 0.020 0.029 0.027 0.026 0.030 0.030
↑best 0.844 0.883 0.866 0.866 0.860 0.898 0.867 0.857 0.876 0.860
↑worst 0.751 0.758 0.765 0.773 0.780 0.762 0.750 0.766 0.754 0.744
N16 ↑mean 0.536 0.323 0.443 0.514 0.442 0.424 0.385 0.373 0.381 0.335
±std 0.195 0.179 0.202 0.175 0.250 0.219 0.189 0.177 0.192 0.204
↑best 0.867 0.867 0.822 0.886 1.000 0.778 0.722 0.722 0.750 0.822
↑worst 0.111 0.083 0.000 0.148 0.000 0.000 0.083 0.095 0.083 0.000
N17 ↑mean 0.914 0.911 0.914 0.909 0.907 0.908 0.909 0.912 0.910 0.910
±std 0.008 0.011 0.010 0.010 0.013 0.009 0.012 0.008 0.009 0.011
↑best 0.928 0.934 0.938 0.934 0.931 0.928 0.928 0.926 0.930 0.931
↑worst 0.902 0.892 0.897 0.888 0.862 0.894 0.869 0.898 0.889 0.885
N18 ↑mean 0.790 0.761 0.772 0.790 0.774 0.783 0.766 0.757 0.766 0.770
±std 0.067 0.076 0.066 0.068 0.060 0.057 0.075 0.072 0.061 0.069
↑best 0.904 0.904 0.903 0.928 0.881 0.880 0.896 0.880 0.927 0.880
↑worst 0.658 0.571 0.605 0.660 0.658 0.633 0.637 0.596 0.641 0.619
N19 ↑mean 0.888 0.877 0.863 0.851 0.815 0.820 0.838 0.836 0.806 0.804
±std 0.020 0.027 0.029 0.025 0.032 0.029 0.035 0.028 0.026 0.035
↑best 0.929 0.943 0.905 0.911 0.872 0.865 0.907 0.908 0.862 0.864
↑worst 0.832 0.776 0.780 0.805 0.730 0.752 0.739 0.774 0.741 0.726
↑Best Count 10.000 1.000 4.000 2.000 0.000 2.000 0.000 0.000 1.000 0.000
↓Average Sort 2.947 5.000 4.105 5.000 5.000 5.000 6.316 6.789 6.842 7.947
↓Ranking 1.000 3.000 2.000 3.000 3.000 3.000 7.000 8.000 9.000 10.000
Table 8. Average of classification accuracy for the proposed bWSO-log algorithm in comparison to other algorithms.
ID bWSO-log bAAA bBAT bFA bGWO bMFO bMVO bPSO bWOA
N1 ↑mean 0.960 0.950 0.948 0.954 0.959 0.958 0.952 0.958 0.952
±std 0.017 0.023 0.023 0.018 0.018 0.020 0.020 0.016 0.019
↑best 0.986 0.993 0.979 0.986 0.993 0.993 0.986 0.986 0.986
↑worst 0.921 0.900 0.886 0.907 0.914 0.921 0.914 0.929 0.893
N2 ↑mean 0.661 0.628 0.648 0.633 0.635 0.637 0.630 0.627 0.633
±std 0.056 0.073 0.069 0.056 0.069 0.066 0.054 0.079 0.073
↑best 0.757 0.784 0.757 0.757 0.757 0.757 0.730 0.784 0.757
↑worst 0.541 0.459 0.486 0.514 0.514 0.514 0.514 0.486 0.486
N3 ↑mean 0.841 0.833 0.636 0.785 0.831 0.829 0.821 0.691 0.818
±std 0.050 0.051 0.056 0.079 0.046 0.064 0.081 0.111 0.085
↑best 0.917 0.917 0.717 0.900 0.917 0.950 0.933 0.883 0.950
↑worst 0.733 0.700 0.467 0.550 0.750 0.717 0.583 0.500 0.633
N4 ↑mean 0.912 0.924 0.705 0.908 0.869 0.924 0.902 0.767 0.879
±std 0.057 0.042 0.100 0.064 0.102 0.048 0.052 0.109 0.048
↑best 1.000 1.000 0.972 1.000 0.972 0.972 1.000 0.972 0.972
↑worst 0.750 0.806 0.472 0.694 0.528 0.806 0.750 0.556 0.778
N5 ↑mean 0.856 0.860 0.851 0.843 0.889 0.844 0.832 0.856 0.846
±std 0.085 0.094 0.069 0.135 0.078 0.093 0.104 0.082 0.088
↑best 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.952 0.952
↑worst 0.619 0.571 0.667 0.429 0.619 0.619 0.571 0.667 0.571
N6 ↑mean 0.780 0.762 0.704 0.738 0.749 0.773 0.758 0.748 0.752
±std 0.062 0.078 0.077 0.077 0.065 0.080 0.081 0.097 0.087
↑best 0.867 0.933 0.867 0.867 0.900 1.000 0.933 0.900 0.900
↑worst 0.600 0.633 0.500 0.533 0.600 0.633 0.567 0.467 0.500
N7 ↑mean 0.940 0.931 0.915 0.920 0.923 0.926 0.927 0.919 0.920
±std 0.020 0.019 0.023 0.027 0.027 0.028 0.017 0.028 0.032
↑best 0.972 0.963 0.954 0.963 0.963 0.991 0.954 0.981 0.963
↑worst 0.907 0.889 0.870 0.852 0.861 0.870 0.889 0.861 0.815
N8 ↑mean 0.822 0.795 0.760 0.804 0.783 0.815 0.780 0.790 0.792
±std 0.057 0.070 0.059 0.057 0.070 0.083 0.061 0.069 0.068
↑best 0.935 0.903 0.839 0.935 0.935 0.968 0.903 0.903 0.935
↑worst 0.742 0.613 0.645 0.677 0.677 0.645 0.645 0.645 0.677
N9 ↑mean 0.820 0.815 0.668 0.817 0.798 0.816 0.812 0.809 0.815
±std 0.010 0.011 0.055 0.011 0.014 0.014 0.014 0.016 0.013
↑best 0.845 0.840 0.770 0.846 0.835 0.842 0.837 0.841 0.838
↑worst 0.802 0.794 0.503 0.792 0.770 0.790 0.790 0.778 0.789
N10 ↑mean 0.817 0.861 0.842 0.847 0.844 0.848 0.854 0.852 0.852
±std 0.013 0.041 0.045 0.053 0.064 0.067 0.067 0.040 0.047
↑best 0.844 0.923 0.949 0.949 0.923 0.949 0.974 0.923 0.949
↑worst 0.780 0.769 0.769 0.769 0.667 0.692 0.718 0.769 0.769
N11 ↑mean 0.936 0.932 0.918 0.934 0.937 0.930 0.934 0.925 0.920
±std 0.024 0.020 0.023 0.023 0.019 0.020 0.020 0.021 0.028
↑best 0.991 0.974 0.956 0.982 0.982 0.974 0.974 0.965 0.965
↑worst 0.860 0.886 0.860 0.904 0.895 0.886 0.895 0.886 0.868
N12 ↑mean 0.855 0.840 0.838 0.838 0.853 0.843 0.853 0.838 0.854
±std 0.036 0.044 0.049 0.048 0.038 0.053 0.046 0.048 0.049
↑best 0.930 0.944 0.901 0.944 0.915 0.901 0.930 0.958 0.972
↑worst 0.789 0.761 0.746 0.732 0.775 0.676 0.746 0.761 0.732
N13 ↑mean 0.933 0.947 0.722 0.941 0.954 0.951 0.941 0.896 0.939
±std 0.047 0.029 0.064 0.038 0.028 0.023 0.030 0.076 0.036
↑best 0.986 1.000 0.824 0.986 1.000 1.000 1.000 0.986 1.000
↑worst 0.811 0.865 0.581 0.824 0.878 0.892 0.865 0.757 0.838
N14 ↑mean 0.933 0.923 0.907 0.897 0.930 0.927 0.940 0.927 0.963
±std 0.096 0.097 0.120 0.150 0.115 0.091 0.097 0.126 0.061
↑best 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
↑worst 0.700 0.600 0.600 0.400 0.500 0.700 0.700 0.400 0.800
N15 ↑mean 0.830 0.844 0.799 0.838 0.834 0.840 0.835 0.816 0.835
±std 0.020 0.022 0.040 0.018 0.032 0.029 0.027 0.029 0.028
↑best 0.858 0.882 0.877 0.867 0.891 0.882 0.882 0.863 0.886
↑worst 0.777 0.806 0.701 0.796 0.768 0.758 0.768 0.749 0.777
N16 ↑mean 0.557 0.448 0.414 0.471 0.386 0.448 0.429 0.471 0.433
±std 0.189 0.175 0.132 0.164 0.188 0.190 0.212 0.209 0.223
↑best 0.857 0.857 0.571 0.857 0.714 0.714 1.000 1.000 1.000
↑worst 0.143 0.143 0.143 0.286 0.000 0.143 0.000 0.143 0.000
N17 ↑mean 0.918 0.918 0.777 0.909 0.899 0.912 0.912 0.856 0.915
±std 0.008 0.008 0.017 0.012 0.012 0.010 0.013 0.052 0.009
↑best 0.933 0.937 0.815 0.933 0.932 0.944 0.931 0.926 0.935
↑worst 0.907 0.895 0.747 0.874 0.881 0.897 0.887 0.776 0.893
N18 ↑mean 0.797 0.796 0.792 0.770 0.788 0.788 0.771 0.790 0.767
±std 0.065 0.051 0.071 0.058 0.050 0.047 0.070 0.064 0.071
↑best 0.905 0.881 0.905 0.857 0.905 0.857 0.929 0.929 0.881
↑worst 0.667 0.714 0.667 0.667 0.714 0.667 0.619 0.690 0.595
N19 ↑mean 0.888 0.865 0.566 0.816 0.761 0.862 0.850 0.816 0.869
±std 0.021 0.022 0.087 0.031 0.044 0.031 0.037 0.050 0.025
↑best 0.931 0.903 0.736 0.884 0.833 0.903 0.907 0.903 0.931
↑worst 0.829 0.810 0.380 0.745 0.630 0.755 0.782 0.722 0.819
↑Best Count 11.000 4.000 0.000 0.000 3.000 1.000 0.000 0.000 1.000
↓Average Sort 2.474 3.526 7.947 5.579 5.000 3.947 5.053 6.263 5.000
↓Ranking 1.000 2.000 9.000 7.000 4.000 3.000 6.000 8.000 4.000
Table 9. p-values of Wilcoxon signed rank test comparing bWSO-log fitness results with other algorithms (p < 0.05 in bold).
ID | bAAA | bBAT | bFA | bGWO | bMFO | bMVO | bPSO | bWOA
N1 | 2.52 × 10−2 | 6.02 × 10−2 | 1.96 × 10−1 | 8.28 × 10−1 | 4.83 × 10−1 | 8.32 × 10−2 | 4.83 × 10−1 | 7.59 × 10−2
N2 | 1.04 × 10−1 | 3.86 × 10−1 | 5.87 × 10−2 | 1.55 × 10−1 | 6.10 × 10−2 | 1.38 × 10−2 | 3.06 × 10−2 | 1.28 × 10−1
N3 | 4.75 × 10−1 | 1.86 × 10−9 | 5.17 × 10−3 | 3.30 × 10−1 | 5.01 × 10−1 | 5.69 × 10−1 | 2.99 × 10−5 | 3.41 × 10−1
N4 | 5.73 × 10−1 | 2.83 × 10−6 | 8.39 × 10−1 | 8.47 × 10−2 | 5.22 × 10−1 | 4.73 × 10−1 | 6.41 × 10−6 | 2.40 × 10−2
N5 | 7.77 × 10−1 | 8.54 × 10−1 | 7.60 × 10−1 | 4.04 × 10−2 | 5.29 × 10−1 | 3.90 × 10−1 | 9.90 × 10−1 | 5.53 × 10−1
N6 | 1.73 × 10−1 | 7.24 × 10−4 | 2.38 × 10−2 | 1.71 × 10−1 | 8.29 × 10−1 | 2.03 × 10−1 | 2.22 × 10−1 | 1.07 × 10−1
N7 | 5.15 × 10−2 | 6.29 × 10−4 | 8.25 × 10−3 | 7.72 × 10−3 | 8.71 × 10−2 | 2.66 × 10−2 | 2.91 × 10−3 | 3.13 × 10−3
N8 | 1.25 × 10−1 | 1.94 × 10−3 | 3.43 × 10−1 | 3.41 × 10−2 | 7.92 × 10−1 | 1.72 × 10−2 | 1.06 × 10−1 | 1.24 × 10−1
N9 | 1.68 × 10−1 | 1.73 × 10−6 | 5.93 × 10−1 | 2.59 × 10−5 | 3.49 × 10−1 | 4.83 × 10−2 | 7.26 × 10−3 | 1.50 × 10−1
N10 | 8.29 × 10−1 | 3.36 × 10−1 | 5.23 × 10−1 | 4.04 × 10−1 | 5.52 × 10−1 | 8.08 × 10−1 | 7.21 × 10−1 | 6.19 × 10−1
N11 | 4.74 × 10−1 | 1.06 × 10−2 | 5.47 × 10−1 | 7.77 × 10−1 | 1.99 × 10−1 | 7.09 × 10−1 | 5.39 × 10−2 | 3.70 × 10−2
N12 | 2.44 × 10−1 | 1.13 × 10−1 | 1.44 × 10−1 | 9.09 × 10−1 | 5.30 × 10−1 | 8.39 × 10−1 | 1.51 × 10−1 | 8.29 × 10−1
N13 | 1.74 × 10−1 | 1.73 × 10−6 | 5.50 × 10−1 | 7.00 × 10−2 | 1.00 × 10−1 | 4.93 × 10−1 | 1.43 × 10−2 | 6.14 × 10−1
N14 | 9.86 × 10−1 | 4.01 × 10−1 | 3.61 × 10−1 | 7.34 × 10−1 | 8.77 × 10−1 | 8.12 × 10−1 | 1.00 × 100 | 1.24 × 10−1
N15 | 1.02 × 10−2 | 1.67 × 10−3 | 2.55 × 10−2 | 3.93 × 10−1 | 4.58 × 10−2 | 2.03 × 10−1 | 6.91 × 10−2 | 4.71 × 10−1
N16 | 5.26 × 10−2 | 4.30 × 10−3 | 3.10 × 10−2 | 2.31 × 10−3 | 3.13 × 10−2 | 4.03 × 10−2 | 1.08 × 10−1 | 7.69 × 10−2
N17 | 8.29 × 10−1 | 1.72 × 10−6 | 1.38 × 10−3 | 8.81 × 10−6 | 1.84 × 10−2 | 2.97 × 10−2 | 3.88 × 10−6 | 2.99 × 10−1
N18 | 9.80 × 10−1 | 8.03 × 10−1 | 1.35 × 10−1 | 4.94 × 10−1 | 4.83 × 10−1 | 2.55 × 10−1 | 5.72 × 10−1 | 8.50 × 10−2
N19 | 7.18 × 10−4 | 1.73 × 10−6 | 3.15 × 10−6 | 1.73 × 10−6 | 1.90 × 10−4 | 2.50 × 10−4 | 3.50 × 10−6 | 7.16 × 10−3
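Table 9 reports pairwise Wilcoxon signed-rank tests that compare the fitness results of bWSO-log with each competing algorithm on every dataset; entries below 0.05 indicate a statistically significant difference. As a hedged illustration of how such p-values can be produced (the run-level fitness values and the exact statistical software used in the study are not shown here), the sketch below applies SciPy's paired test to two hypothetical result vectors.

```python
# Illustrative sketch: paired Wilcoxon signed-rank test between the fitness
# values of bWSO-log and one competitor over the same independent runs.
# The result vectors below are hypothetical placeholders.
import numpy as np
from scipy.stats import wilcoxon

bwso_log_fitness = np.array([0.112, 0.098, 0.105, 0.120, 0.101, 0.097, 0.097, 0.108])
competitor_fitness = np.array([0.125, 0.110, 0.104, 0.131, 0.115, 0.102, 0.099, 0.119])

# Two-sided test; p < 0.05 would mark a statistically significant difference,
# matching the convention used in Table 9.
stat, p_value = wilcoxon(bwso_log_fitness, competitor_fitness)
print(f"W = {stat:.1f}, p = {p_value:.4f}")
```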
Table 10. Average of F1-Score for the proposed bWSO-log algorithm in comparison to other algorithms.
ID bWSO-log bAAA bBAT bFA bGWO bMFO bMVO bPSO bWOA
N1 ↑mean 0.955 0.945 0.941 0.949 0.955 0.953 0.946 0.954 0.946
±std 0.019 0.025 0.026 0.021 0.020 0.022 0.021 0.018 0.022
↑best 0.984 0.992 0.976 0.985 0.992 0.992 0.984 0.985 0.985
↑worst 0.916 0.891 0.866 0.896 0.910 0.914 0.904 0.923 0.881
N2 ↑mean 0.474 0.451 0.465 0.464 0.446 0.474 0.466 0.462 0.474
±std 0.061 0.075 0.080 0.061 0.070 0.088 0.062 0.073 0.082
↑best 0.624 0.612 0.624 0.656 0.624 0.656 0.580 0.624 0.624
↑worst 0.373 0.315 0.327 0.383 0.339 0.351 0.362 0.327 0.327
N3 ↑mean 0.807 0.795 0.503 0.740 0.792 0.790 0.779 0.606 0.768
±std 0.059 0.068 0.061 0.086 0.054 0.082 0.097 0.144 0.108
↑best 0.899 0.909 0.630 0.878 0.881 0.924 0.921 0.848 0.928
↑worst 0.683 0.621 0.384 0.500 0.666 0.607 0.496 0.394 0.549
N4 ↑mean 0.913 0.923 0.671 0.910 0.866 0.922 0.903 0.747 0.880
±std 0.056 0.043 0.106 0.065 0.105 0.051 0.052 0.126 0.050
↑best 1.000 1.000 0.973 1.000 0.977 0.978 1.000 0.976 0.968
↑worst 0.756 0.800 0.444 0.685 0.531 0.783 0.758 0.489 0.759
N5 ↑mean 0.680 0.726 0.662 0.667 0.748 0.680 0.646 0.697 0.643
±std 0.158 0.156 0.133 0.205 0.123 0.169 0.161 0.138 0.139
↑best 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.967 0.959
↑worst 0.288 0.396 0.383 0.321 0.516 0.231 0.386 0.473 0.400
N6 ↑mean 0.599 0.550 0.504 0.513 0.547 0.598 0.566 0.551 0.551
±std 0.162 0.179 0.142 0.128 0.142 0.187 0.152 0.173 0.157
↑best 0.867 0.933 0.792 0.796 0.814 1.000 0.890 0.890 0.877
↑worst 0.354 0.289 0.261 0.263 0.310 0.317 0.362 0.245 0.307
N7 ↑mean 0.709 0.677 0.519 0.666 0.606 0.693 0.679 0.613 0.672
±std 0.134 0.091 0.061 0.092 0.100 0.100 0.088 0.093 0.099
↑best 0.893 0.821 0.665 0.865 0.821 0.926 0.818 0.798 0.821
↑worst 0.476 0.473 0.465 0.515 0.473 0.471 0.478 0.463 0.468
N8 ↑mean 0.637 0.642 0.501 0.669 0.597 0.670 0.646 0.547 0.623
±std 0.117 0.124 0.102 0.090 0.114 0.122 0.103 0.114 0.130
↑best 0.856 0.868 0.765 0.832 0.834 0.897 0.815 0.805 0.881
↑worst 0.426 0.380 0.392 0.446 0.415 0.465 0.426 0.392 0.426
N9 ↑mean 0.819 0.814 0.666 0.817 0.798 0.815 0.811 0.808 0.814
±std 0.010 0.011 0.055 0.011 0.015 0.014 0.013 0.016 0.012
↑best 0.845 0.840 0.769 0.845 0.835 0.842 0.837 0.840 0.837
↑worst 0.801 0.793 0.502 0.792 0.769 0.790 0.789 0.777 0.789
N10 ↑mean 0.785 0.796 0.752 0.785 0.771 0.784 0.777 0.773 0.797
±std 0.088 0.059 0.067 0.071 0.089 0.081 0.100 0.065 0.065
↑best 0.933 0.887 0.861 0.933 0.887 0.933 0.962 0.902 0.933
↑worst 0.607 0.662 0.606 0.662 0.511 0.597 0.586 0.677 0.686
N11 ↑mean 0.930 0.927 0.911 0.928 0.931 0.924 0.928 0.918 0.913
±std 0.026 0.021 0.026 0.025 0.022 0.021 0.023 0.023 0.028
↑best 0.990 0.971 0.955 0.981 0.981 0.969 0.969 0.960 0.959
↑worst 0.853 0.876 0.854 0.889 0.872 0.881 0.882 0.874 0.859
N12 ↑mean 0.820 0.804 0.805 0.797 0.818 0.809 0.826 0.801 0.828
±std 0.041 0.051 0.059 0.060 0.048 0.059 0.052 0.054 0.059
↑best 0.917 0.930 0.891 0.915 0.904 0.889 0.917 0.949 0.965
↑worst 0.743 0.716 0.689 0.640 0.722 0.634 0.696 0.718 0.674
N13 ↑mean 0.917 0.939 0.682 0.933 0.948 0.947 0.933 0.875 0.928
±std 0.068 0.036 0.064 0.042 0.030 0.031 0.034 0.096 0.045
↑best 0.986 1.000 0.791 0.985 1.000 1.000 1.000 0.984 1.000
↑worst 0.730 0.843 0.576 0.803 0.876 0.838 0.862 0.688 0.776
N14 ↑mean 0.903 0.902 0.895 0.895 0.932 0.915 0.930 0.920 0.941
±std 0.139 0.125 0.132 0.148 0.101 0.109 0.122 0.127 0.112
↑best 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
↑worst 0.546 0.533 0.602 0.389 0.636 0.652 0.544 0.467 0.625
N15 ↑mean 0.813 0.826 0.776 0.821 0.817 0.824 0.816 0.800 0.816
±std 0.024 0.025 0.046 0.022 0.036 0.029 0.031 0.030 0.032
↑best 0.844 0.863 0.861 0.859 0.879 0.876 0.864 0.846 0.875
↑worst 0.751 0.777 0.651 0.771 0.744 0.746 0.742 0.727 0.742
N16 ↑mean 0.536 0.404 0.380 0.422 0.374 0.415 0.395 0.433 0.377
±std 0.195 0.188 0.139 0.187 0.198 0.209 0.211 0.228 0.217
↑best 0.867 0.841 0.657 0.886 0.778 0.778 1.000 1.000 1.000
↑worst 0.111 0.111 0.083 0.167 0.000 0.095 0.000 0.083 0.000
N17 ↑mean 0.914 0.914 0.766 0.904 0.893 0.907 0.908 0.849 0.911
±std 0.008 0.008 0.017 0.012 0.013 0.010 0.013 0.055 0.010
↑best 0.928 0.933 0.807 0.930 0.927 0.942 0.928 0.921 0.932
↑worst 0.902 0.889 0.736 0.871 0.874 0.892 0.881 0.764 0.889
N18 ↑mean 0.790 0.789 0.783 0.759 0.782 0.782 0.766 0.784 0.762
±std 0.067 0.051 0.075 0.062 0.051 0.048 0.070 0.066 0.072
↑best 0.904 0.879 0.905 0.857 0.893 0.857 0.927 0.929 0.879
↑worst 0.658 0.708 0.637 0.647 0.697 0.660 0.619 0.676 0.595
N19 ↑mean 0.888 0.863 0.565 0.816 0.764 0.861 0.848 0.814 0.866
±std 0.020 0.020 0.085 0.028 0.042 0.029 0.035 0.048 0.025
↑best 0.929 0.907 0.738 0.878 0.826 0.904 0.907 0.901 0.931
↑worst 0.832 0.814 0.395 0.759 0.644 0.770 0.788 0.718 0.832
↑Best Count 8.000 2.000 0.000 0.000 3.000 2.000 0.000 0.000 3.000
↓Average Sort 2.737 4.000 8.105 5.421 5.368 3.526 4.684 6.158 5.000
↓Ranking 1.000 3.000 9.000 7.000 6.000 2.000 4.000 8.000 5.000