Article

Feature Selection for High-Dimensional Datasets through a Novel Artificial Bee Colony Framework

Yuanzi Zhang, Jing Wang, Xiaolin Li, Shiguo Huang and Xiuli Wang
College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China
* Authors to whom correspondence should be addressed.
Algorithms 2021, 14(11), 324; https://doi.org/10.3390/a14110324
Submission received: 16 October 2021 / Revised: 1 November 2021 / Accepted: 2 November 2021 / Published: 4 November 2021
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications 2021)

Abstract
High-dimensional datasets generally contain many redundant and irrelevant features, which degrade classification performance and prolong execution time. To tackle this problem, feature selection techniques are used to screen out redundant and irrelevant features. The artificial bee colony (ABC) algorithm is a popular meta-heuristic algorithm with a high exploration capacity but a low exploitation capacity. To balance these two capacities, a novel ABC framework is proposed in this paper. Specifically, the solutions are first updated in the employed bee phase, which retains the original exploration ability so that the algorithm can explore the solution space extensively. Then, in the onlooker bee phase, the solutions are modified with the updating mechanism of an algorithm that has a strong exploitation ability. Finally, the scout bee phase is removed from the framework, which both curbs the excess exploration ability and speeds up the algorithm. To verify this idea, the operators of the grey wolf optimization (GWO) algorithm and the whale optimization algorithm (WOA) are introduced into the framework to enhance the exploitation capability of the onlooker bees; the resulting algorithms are named BABCGWO and BABCWOA, respectively. Experiments on 12 high-dimensional datasets show that these two algorithms are superior to four state-of-the-art feature selection algorithms in terms of classification error rate, feature subset size and execution speed.

1. Introduction

Due to the rapid development of data acquisition technology, a great deal of digital information can now be collected easily and assembled into datasets. However, not all features in a dataset are useful for a target problem: high-dimensional datasets contain many redundant and irrelevant features, so feature selection (FS) is used as a vital data preprocessing step in data mining and machine learning [1]. FS is an NP-hard problem; for an n-dimensional dataset there are 2^n feature subsets, which makes an exhaustive search impractical. With a good FS method, we can not only obtain higher classification accuracy, but also reduce the computational complexity. In order to improve the search efficiency of FS algorithms, many scholars have proposed methods that can be roughly divided into three types: filter, wrapper and embedded methods [2]. Among them, the wrapper method is widely used because of its good classification ability. Therefore, this paper studies the wrapper FS method.
The wrapper approach mainly consists of three parts: classifiers, feature subset evaluation criteria and search techniques [3]. Among them, an effective search technique is crucial for the performance of FS algorithms. It is worth mentioning that meta-heuristic methods, such as the artificial bee colony (ABC) algorithm [4], the particle swarm optimization (PSO) algorithm [5], the differential evolution (DE) algorithm [6], the grey wolf optimization (GWO) algorithm [7], the whale optimization algorithm (WOA) [8], and many other algorithms [9] have provided good search strategies for the FS task. Unlike the exact search mechanisms, meta-heuristic methods exhibit superior performance, as they do not generate all possible solutions for a given task. Meta-heuristic algorithms have exploration and exploitation abilities, and the trade-off between both abilities is very important for the performance of these algorithms. The exploration acts to discover various unknown regions for more potential solutions, while the exploitation attempts to generate better solutions on the basis of the information provided by existing solutions. In some meta-heuristic search techniques, the ability of exploration is stronger, while in others, the exploitation performs better [10,11]. Exploring the search region and exploiting the best solution are two contradictory criteria that must be considered simultaneously when designing a good meta-heuristic algorithm. The key to improving an algorithm is to achieve a good balance between exploration and exploitation [3,12].
The ABC algorithm is an optimization algorithm that is inspired by the foraging behavior of a honey bee swarm. ABC has been successfully applied to various optimization problems due to its good properties, such as few parameters to control, its high flexibility, and its strong global search ability [11]. However, ABC converges slowly because of the absence of a strong local exploitation ability [10,13]. From the above considerations, we propose a new framework to enhance the exploitation performance of the ABC algorithm, so as to realize a trade-off between the exploration and exploitation capabilities of the FS method, and raise the optimization efficiency and effectiveness. The contributions of this paper are as follows:
(1)
In order to balance the exploitation and exploration abilities of ABC, we use operators with strong exploitation abilities to enhance the exploitation ability in the onlooker bee phase;
(2)
This paper analyzes the functional behavior of the scout bee phase and finds that this phase may be redundant while dealing with high-dimensional FS problems, and so eliminating this phase can reduce the computational time of the algorithm;
(3)
The proposed framework is designed as a general framework that can be used to adapt many ABC variants for the FS problems.
The remainder of this paper is organized as follows: Section 2 briefly describes related works on the ABC algorithm. In Section 3, the original ABC algorithm is introduced and analyzed. Section 4 presents the details of our proposed approach. In Section 5, the experimental results are compared and discussed. The proposed algorithms are further analyzed in Section 6. Finally, the conclusions and future work are outlined in Section 7.

2. Related Works

Recently, meta-heuristic algorithms have attracted the attention of many scholars. These algorithms can be used to solve many real engineering tasks, such as path planning [14,15,16], feature selection [17,18,19], function optimization [20,21,22], and the traveling salesman problem [23,24,25]. Although various meta-heuristics have been developed to deal with FS over the years, the significant increase in data dimensionality brings great challenges; therefore, it is worth continuing to look for effective strategies that make meta-heuristic algorithms perform better on high-dimensional FS problems [26].
ABC was proposed in 2005 by Karaboga's group to optimize algebraic problems [27]. A single-objective ABC was first used to address the FS problem in 2012 [18,28]. Almost all meta-heuristic algorithms suffer from an imbalance between exploration and exploitation [29], and the ABC algorithm is no exception; many studies on the ABC algorithm therefore seek to improve its exploitation capability. To accelerate the convergence speed of the ABC algorithm, Chao et al. [30] proposed the KnABC algorithm, which introduces knee points into the employed bee phase and onlooker bee phase; the results show that this algorithm has a significant effect on reducing the number of features and increasing the classification accuracy. Shunmugapriya et al. [31] utilized the ACO algorithm for colony initialization and took the initialization results as the food sources of the ABC algorithm for further optimization, thereby integrating the ACO and ABC algorithms; the resulting algorithm performed better than either ABC or ACO alone. Djellali et al. [10] proposed two hybrid ABC algorithms, ABC-PSO and ABC-GA, which integrate the PSO algorithm and the GA algorithm into different bee phases of the original ABC framework, respectively; the experimental results showed that ABC-GA obtained better results than some other existing methods. Shunmugapriya et al. [32] proposed the EABC-FS algorithm, in which the employed bees and onlooker bees make full use of the best solutions in the current swarm to enhance the exploitation ability of the ABC algorithm; the experimental results showed that the performance achieved by introducing such fusion strategies was greatly improved. Moreover, many other studies have shown that the ABC algorithm suffers from an insufficient exploitation ability, which results in it becoming trapped in local optima and having a low convergence speed [28,33].
Although the above-mentioned hybrid variants of the ABC algorithm have achieved promising performance, they do not deeply analyze the exploitation and exploration abilities in different bee phases of the overall framework. Moreover, few of these algorithms have been developed for high-dimensional FS. Therefore, this paper proposes a novel exploration and exploitation trade-off ABC algorithm by modifying the original overall framework, and applies it to high-dimensional datasets. This new framework strengthens the exploitation ability in the onlooker bee phase by using operators with high exploitation capacities. Additionally, the function of scout bees is discussed in detail, and verified by experiments.

3. Introduction and Analysis of ABC Algorithm

The ABC algorithm is a kind of swarm intelligence (SI) algorithm that simulates the honey-gathering behavior of a bee swarm. This algorithm includes three types of bees: employed bees, onlooker bees and scout bees. Each food source corresponds to a solution to the given task, and the fitness of the solution indicates the quality of the food source. The overall process of the ABC algorithm is as follows [34].
First of all, the algorithm randomly initializes a population of SN food sources according to Equation (1):
$x_{id} = x_d^{\min} + r \cdot (x_d^{\max} - x_d^{\min})$   (1)
where i = 1, 2, …, SN and d = 1, 2, …, D. SN is the number of food sources and D is the dimensionality of the search space; the number of employed bees (and of onlooker bees) is equal to the number of food sources. r is a uniformly distributed random number in [0, 1]. $x_d^{\min}$ and $x_d^{\max}$ represent the minimum and maximum values of the dth dimension, respectively. After initialization, the bees begin to search.
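A minimal NumPy sketch of this initialization step (the paper's implementation is in MATLAB; the function and variable names here are illustrative only):

```python
import numpy as np

def init_population(SN, D, x_min, x_max, rng=np.random.default_rng(0)):
    """Randomly place SN food sources in [x_min, x_max]^D, as in Equation (1)."""
    r = rng.random((SN, D))                    # r ~ U[0, 1] for every dimension
    return x_min + r * (x_max - x_min)         # x_id = x_d^min + r * (x_d^max - x_d^min)
```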
(1)
Employed bee phase: According to Equation (2), a new food source is produced around the current food source, as follows:
$x'_{id} = x_{id} + \varphi_{id} \cdot (x_{id} - x_{kd})$   (2)
where $\varphi_{id}$ is a random number within [−1, 1], and $x_{id}$ and $x_{kd}$ represent the dth dimension of $x_i$ and $x_k$, respectively. The new source $x'_i$ is compared with $x_i$: if the fitness of $x'_i$ is superior, $x'_i$ replaces $x_i$ in the next step and its counter is reset to 0; otherwise, $x_i$ is retained and its counter is increased by 1.
(2)
Onlooker bee phase: Every onlooker bee selects a food source depending on the probability value $p_i$ via the roulette-wheel scheme. $p_i$ is associated with the food source information given by the employed bees, and its value is generated by Equation (3).
$p_i = \dfrac{fit_i}{\sum_{i=1}^{SN} fit_i}$   (3)
where $fit_i$ is the fitness value of solution $x_i$. Each selected food source is updated using Equation (2).
(3)
Scout bee phase: If the counter of a food source is greater than or equal to a preset number of trials, this food source is discarded. The preset number of trials is usually called the limit for abandonment. When a food source is abandoned, the employed bee associated with it becomes a scout bee and regenerates a new food source via Equation (1) to replace the abandoned one (a code sketch of these three phases follows this list).
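A minimal Python sketch of one ABC cycle under these rules; in the original ABC, the onlooker bee phase simply applies Equation (2) again to sources picked by the roulette wheel. The fitness-to-probability conversion (turning a minimized error rate into a "larger is better" score) and all names are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def employed_bee_step(X, fitness, counters, fit_fn):
    """One employed bee pass: Equation (2) plus greedy selection."""
    SN, D = X.shape
    for i in range(SN):
        k = rng.choice([j for j in range(SN) if j != i])    # a different food source
        d = rng.integers(D)                                  # perturb one randomly chosen dimension
        phi = rng.uniform(-1.0, 1.0)
        candidate = X[i].copy()
        candidate[d] = X[i, d] + phi * (X[i, d] - X[k, d])   # x'_id = x_id + phi_id (x_id - x_kd)
        f_new = fit_fn(candidate)
        if f_new < fitness[i]:                               # minimizing the error rate
            X[i], fitness[i], counters[i] = candidate, f_new, 0
        else:
            counters[i] += 1
    return X, fitness, counters

def roulette_select(fitness):
    """Equation (3): selection probability proportional to solution quality."""
    quality = 1.0 / (1.0 + fitness)          # assumed conversion of an error rate into fit_i
    p = quality / quality.sum()
    return rng.choice(len(fitness), p=p)

def scout_bee_step(X, fitness, counters, limit, x_min, x_max, fit_fn):
    """Abandon exhausted sources and re-initialize them with Equation (1)."""
    for i in range(len(X)):
        if counters[i] >= limit:
            X[i] = x_min + rng.random(X.shape[1]) * (x_max - x_min)
            fitness[i], counters[i] = fit_fn(X[i]), 0
    return X, fitness, counters
```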
In the ABC algorithm, the employed bees are in charge of finding viable solutions throughout the search area and providing the onlooker bees with food information. Based on this information, the onlooker bees search for new food sources near the existing ones, and the onlooker bee phase uses the same updating formula (Equation (2)) as the employed bee phase. As can be seen from the above, the ABC algorithm does not take advantage of the elitism principle. Both the employed bees and the onlooker bees use Equation (2) to obtain new food sources; this formula has a powerful global search ability, but its search efficiency is low and its exploitation ability is weak. The roulette-wheel scheme makes food sources with higher fitness values more likely to be selected, so its use in the onlooker bee phase strengthens the exploitation ability, but this exploitation ability is still far weaker than the algorithm's powerful exploration ability. Therefore, as Hong and Ahn [35] have pointed out, the exploitation level of the onlooker bee phase should be increased. In addition, the scout bee phase not only reduces the probability of falling into a local optimum, but also reduces the rate of convergence; under the action of the scout bees, the optimal solution may even be discarded [34]. Therefore, the ABC algorithm has an outstanding exploration capacity but inefficient exploitation. This imbalance prevents the ABC algorithm from reaching a better solution, because the convergence is too slow.

4. Proposed Algorithm for Feature Selection

4.1. The Proposed Framework

The balance between exploration and exploitation has a great impact on the performance of a meta-heuristic algorithm. For an algorithm with a good exploration ability, we can enhance its exploitation ability by introducing operators with strong exploitation abilities, so as to restore the balance between the two. Based on the analysis in Section 3, this paper presents a novel ABC framework, which has three key points:
(1)
The employed bee phase of the ABC algorithm is retained so that it can explore the search space widely and avoid reaching the local optimum;
(2)
The updating mode of the ABC algorithm's onlooker bee phase is replaced with a new updating strategy inspired by algorithms with more powerful exploitation capacities, whose searching schemes are introduced as operators. According to our observation, higher diversity in the bee swarm helps the algorithm to find more potential search space, but after a certain period the solutions should converge and approach optimal solutions as the colony diversity decreases. We believe that applying operators with strong exploitation abilities to the optimization process can reduce the diversity of the algorithm in the late stage and bring about a higher convergence speed. Therefore, the introduction of operators with powerful exploitation abilities can help our novel ABC framework find better solutions;
(3)
The scout bee phase is removed, because the exploration ability of the scout bee phase will increase the diversity of the algorithm during the later period. Moreover, the scout bee phase will waste the execution time, and consume computational resources and memory during the calculation process.
Figure 1 illustrates our proposed framework and its differences from the processes of the original ABC algorithm. Overall, the two methods utilize the same updating mechanism in the employed bee phase. However, without the scout bee phase, our method does not need to compute the value of the counter throughout the algorithm. Since the onlooker bee phase in our method is updated by the operators of an algorithm with strong exploitation abilities, we do not use roulette-wheel selection, so we do not need to calculate the selection probability of each individual.
FS is, in essence, an optimization problem in a binary search space: the value of each element of a solution is limited to 0 or 1 [36]. However, the ABC algorithm was originally proposed for continuous spaces. To adapt our proposed ABC framework to FS, we need to transform the continuous values into binary values. This transformation is fulfilled by Equation (4).
$xb_{id} = \begin{cases} 1 & \text{if } r < \mathrm{sigmoid}(x_{id}) \\ 0 & \text{otherwise} \end{cases}$   (4)
where r is a random value in [0, 1]. The $\mathrm{sigmoid}(x)$ function is formulated as in Equation (5):
$\mathrm{sigmoid}(x) = \dfrac{1}{1 + \exp(-10 \cdot (x - 0.5))}$   (5)
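A short sketch of this binarization, assuming a NumPy array of continuous positions:

```python
import numpy as np

def binarize(X, rng=np.random.default_rng(0)):
    """Map continuous positions to 0/1 feature masks (Equations (4) and (5))."""
    s = 1.0 / (1.0 + np.exp(-10.0 * (X - 0.5)))     # steep sigmoid centred at 0.5
    return (rng.random(X.shape) < s).astype(int)     # xb_id = 1 with probability sigmoid(x_id)
```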

4.2. Abandonment of Scout Bee Phase to Reduce the Exploration Capacity

As the last phase of the ABC algorithm, the scout bee phase abandons any individual that has not changed for a long time and then creates a new individual to replace it. This phase gives the algorithm certain exploration advantages. However, it has been shown [34] that the scout bee phase is not active when processing high-dimensional tasks, and its exploration behaviour risks discarding solutions close to local optima; as such, we remove this phase. In the following experiments, we analyze the influence of removing the scout bee phase on the diversity and convergence ability of the algorithm.

4.3. Enhancement of Exploitation—Illustrative Example with GWO and WOA

The original ABC algorithm has a low exploitation capacity, especially in the onlooker bee phase. The enhancement of the exploitation capacity in this phase is the most vital factor to regaining the trade-off between exploitation and exploration in the whole procedure of the ABC algorithm. There are many algorithms that have powerful exploitation capacities, such as the GWO algorithm and WOA algorithm. Compared with other algorithms, the GWO algorithm and WOA algorithm make full use of the information related to excellent individuals in the updating process, which gives them powerful exploitation abilities. As such, we take these two algorithms as examples. In our research, we fuse each algorithm as an operator into the onlooker bee phase, and replace the updating mode of the original ABC algorithm in the same phase to enhance the exploitation capacity of our whole framework.
In the GWO algorithm, the grey wolves are divided into four hierarchies, namely, alpha (α), beta (β), delta (δ) and omega (ω). In solving optimization problems, the α wolf is the best solution, the β and δ wolves are the second- and third-best solutions, respectively, while the ω wolves are the remaining candidates. The α, β, and δ wolves lead the wolf pack to search for prey. The position of each wolf is given as follows:
$D_\alpha = |C_1 \cdot X_\alpha - X|, \quad D_\beta = |C_2 \cdot X_\beta - X|, \quad D_\delta = |C_3 \cdot X_\delta - X|$   (6)
$X_1 = X_\alpha - A_1 \cdot D_\alpha, \quad X_2 = X_\beta - A_2 \cdot D_\beta, \quad X_3 = X_\delta - A_3 \cdot D_\delta$   (7)
$X(t+1) = \dfrac{X_1 + X_2 + X_3}{3}$   (8)
where $X_\alpha$, $X_\beta$ and $X_\delta$ refer to the position vectors of α, β and δ, respectively; $D_\alpha$, $D_\beta$ and $D_\delta$ denote the distances between the current wolf and α, β and δ, respectively; and t indicates the current iteration. $A = 2a \cdot r_1 - a$ and $C = 2 \cdot r_2$, where $r_1$ and $r_2$ are uniformly distributed random numbers in [0, 1], and the value of a decreases linearly from 2 to 0 as the number of cycles increases. The three best solutions guide the updating process of the GWO algorithm, which gives it a strong exploitation ability [7,37,38].
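As an illustration, a hedged NumPy sketch of how this GWO operator could replace the onlooker bee update; the leader positions and the control parameter `a` are assumed to be maintained by the caller:

```python
import numpy as np

def gwo_update(X, X_alpha, X_beta, X_delta, a, rng=np.random.default_rng(0)):
    """Move every solution towards the alpha, beta and delta leaders
    (Equations (6)-(8)); `a` decreases linearly from 2 to 0 over the iterations."""
    new_X = np.empty_like(X)
    for i, x in enumerate(X):
        moves = []
        for leader in (X_alpha, X_beta, X_delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A = 2.0 * a * r1 - a                  # A = 2a * r1 - a
            C = 2.0 * r2                          # C = 2 * r2
            D = np.abs(C * leader - x)            # distance to this leader
            moves.append(leader - A * D)          # X_1, X_2, X_3
        new_X[i] = sum(moves) / 3.0               # X(t+1) = (X_1 + X_2 + X_3) / 3
    return new_X
```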
The WOA algorithm is also an SI algorithm, which employs the current optimal solution as the prey. The search agents update their positions based on the best solution. The mathematical model is described by the equations:
$X(t+1) = \begin{cases} X_p(t) - A \cdot D & p < 0.5,\ |A| < 1 \\ D' \cdot e^{bl} \cdot \cos(2\pi l) + X_p(t) & p \geq 0.5 \end{cases}$   (9)
$X(t+1) = X_{rand}(t) - A \cdot D'' \qquad p < 0.5,\ |A| \geq 1$   (10)
where $X_p(t)$ is the best search agent, $X_{rand}(t)$ is a random position vector, b is a manually determined constant, and l is a random number in [−1, 1]. The equations for calculating D, D′ and D″ are as follows:
$D = |C \cdot X_p(t) - X(t)|, \quad D' = |X_p(t) - X(t)|, \quad D'' = |C \cdot X_{rand}(t) - X(t)|$   (11)
where A and C are calculated in the same way as in the GWO algorithm. The WOA updating process learns from the best solution, which makes the exploitation ability of the algorithm more powerful [8,12].
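A corresponding sketch of the WOA operator (again a minimal illustration, with the spiral constant `b` and the random choices handled in the simplest possible way):

```python
import numpy as np

def woa_update(X, X_best, a, b=1.0, rng=np.random.default_rng(0)):
    """One WOA pass over the population (Equations (9)-(11)); `a` decreases
    linearly from 2 to 0 and `b` is the spiral-shape constant."""
    new_X = np.empty_like(X)
    for i, x in enumerate(X):
        p = rng.random()
        A = 2.0 * a * rng.random() - a
        C = 2.0 * rng.random()
        if p >= 0.5:                               # spiral update around the best agent
            l = rng.uniform(-1.0, 1.0)
            new_X[i] = np.abs(X_best - x) * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
        elif abs(A) < 1:                           # encircle the best agent
            new_X[i] = X_best - A * np.abs(C * X_best - x)
        else:                                      # explore around a random agent
            x_rand = X[rng.integers(len(X))]
            new_X[i] = x_rand - A * np.abs(C * x_rand - x)
    return new_X
```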
This paper introduces the operators of the GWO algorithm and WOA algorithm into our proposed framework to verify its validity. The names of the two methods are BABCGWO and BABCWOA, respectively. The pseudocode is outlined in Algorithm 1.
Algorithm 1: Pseudocode of BABCGWO/BABCWOA
Input: Population size SN, Maximum number of iterations NMAX.
Output: The optimal individual xbest, the best fitness value f(xbest).
Initialize the population by using Equation (1).
Evaluate the fitness value of each individual.
For it = 1 to NMAX do
  For i = 1 to SN do
    Select a different food source xk at random.
    Produce a new food source according to Equation (2) and map it to discrete values by Equation (4).
    Evaluate the fitness value of each food source.
    Update xi according to greedy selection.
  End
  For i = 1 to SN do
    Update the position using operators of GWO algorithm or WOA algorithm and map it to discrete values by Equation (4).
    Evaluate the fitness value of each individual.
  End
End
Output xbest and f(xbest).
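A compact Python skeleton of Algorithm 1, assuming the helper functions sketched in the previous subsections (`init_population`, `employed_bee_step`, `gwo_update`, `woa_update`) are in scope and that `fit_fn` binarizes a continuous position with Equation (4) and returns its classification error rate; this is an illustrative outline, not the authors' MATLAB implementation:

```python
import numpy as np

def babc_fs(fit_fn, SN, D, n_iter, variant="GWO"):
    """Skeleton of BABCGWO / BABCWOA: employed bee phase followed by a GWO or
    WOA onlooker phase, with no scout bee phase."""
    rng = np.random.default_rng(0)
    X = init_population(SN, D, 0.0, 1.0, rng)
    fitness = np.array([fit_fn(x) for x in X])
    counters = np.zeros(SN, dtype=int)               # used only by the greedy step
    for t in range(n_iter):
        # Employed bee phase (Equation (2)) keeps the original exploration behaviour.
        X, fitness, counters = employed_bee_step(X, fitness, counters, fit_fn)
        # Onlooker phase: an exploitation-oriented operator replaces roulette selection.
        a = 2.0 * (1.0 - t / n_iter)                 # linearly decreasing control parameter
        if variant == "GWO":
            order = np.argsort(fitness)
            X = gwo_update(X, X[order[0]], X[order[1]], X[order[2]], a, rng)
        else:
            X = woa_update(X, X[np.argmin(fitness)], a, rng=rng)
        fitness = np.array([fit_fn(x) for x in X])
    i_best = int(np.argmin(fitness))
    return X[i_best], fitness[i_best]
```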

4.4. Computational Complexity Analysis

The computational complexity of an algorithm is an important measure to evaluate its running time, which is usually expressed by the big O notation. The computational complexity of the algorithm depends on the number of individuals (SN), the dimension of the problem (D) and the number of iterations (NMAX). The time complexity of the basic ABC, BABCGWO and BABCWOA is discussed here.
For the basic ABC:
(1)
In the initialization stage of the algorithm, the time complexity is O(SN · D);
(2)
The time complexity of each iteration in the updating phases of the employed bees, onlooker bees and scout bees is O(SN · D) + O(SN · D) + O(SN · D) ≈ O(SN · D);
(3)
The time complexity of calculating the individual fitness values is O(SN).
For BABCGWO:
(1)
During initialization, the time complexity is O(SN · D);
(2)
Each iteration of the employed bee phase and the grey wolf phase requires O(SN · D) + O(SN · D) ≈ O(SN · D);
(3)
The time complexity of calculating the fitness is O(SN).
For BABCWOA:
(1)
The time complexity of the initialization step is O(SN · D);
(2)
The time complexity of each iteration of the employed bee phase and the whale phase is O(SN · D) + O(SN · D) ≈ O(SN · D);
(3)
Evaluating the fitness of each individual consumes O(SN).
According to the above analysis, it can be concluded that the basic ABC, BABCGWO and BABCWOA algorithms have the same computational complexity, and the total complexity over all iterations is O(SN · D · NMAX). In Section 5.2, we conduct an experimental analysis of the actual execution time of each algorithm.

5. Experimental Studies

5.1. Experimental Design

To verify the effectiveness of the proposed FS algorithms, a series of experiments are carried out on 12 standard datasets, including two-class and multi-class datasets. These were obtained from http://featureselection.asu.edu/datasets.php (accessed on 18 January 2020) and http://archive.ics.uci.edu/mL/datasets.php (accessed on 18 January 2020). They include microarray gene expression data, image detection data, email text data and so on. They come from different application fields, the number of features varies from 310 to 22,283, and the number of instances varies from 62 to 165, which provides a comprehensive test bed for the compared algorithms. Table 1 shows the details of the datasets.
We verify the effectiveness of the BABCGWO and BABCWOA algorithms by comparing them with the ABC algorithm and several related variants on high-dimensional datasets. The ABC algorithm without the scout bee phase is named the none-scout ABC algorithm (NSABC). The variants of the BABCGWO and BABCWOA algorithms with the scout bee phase added back are named BABCGWO with scout bees (BABCGWOWS) and BABCWOA with scout bees (BABCWOAWS), respectively. To reduce the influence of randomness, all algorithms are run 10 times independently. The population size is set to 50 and the number of iterations to 100. Each algorithm is implemented in MATLAB.
A suitable classifier is important when assessing feature subsets. K-nearest neighbor (KNN) [39] is a common classification method that assigns a sample to a category according to its K nearest neighbors. In this research, the value of K is set to 5. In order to reduce the influence of over-fitting, the average classification error rate of 10-fold cross validation is taken as the fitness value. The fitness function is computed as follows:
$error = \dfrac{\text{Number of misclassified samples}}{\text{Total number of samples}}$   (12)
$fitness = \dfrac{\sum_{i=1}^{10} error_i}{10}$   (13)
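A minimal scikit-learn sketch of this fitness evaluation (the function name and the penalty for an empty subset are illustrative assumptions):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(feature_mask, X_data, y):
    """Average 10-fold cross-validation error of a 5-NN classifier on the
    selected features, i.e. Equations (12) and (13)."""
    selected = np.flatnonzero(feature_mask)
    if selected.size == 0:                       # an empty subset cannot be evaluated
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=5)
    acc = cross_val_score(knn, X_data[:, selected], y, cv=10, scoring="accuracy")
    return float(1.0 - acc.mean())               # error rate = 1 - accuracy
```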

5.2. Experimental Results and Analysis

To test the performance of our proposed framework, the diversity, convergence curves, classification error rate, size of feature subset and computing time of algorithms are investigated in this subsection. The best results are shown in bold in the tables.
Figure 2 shows the diversity curves of the six algorithms on the 12 datasets. We can see that the diversity of ABC is obviously higher than that of the other algorithms on all datasets except DBWorld and Pixraw10P. This is consistent with the search process of the ABC algorithm: its exploration performance is stronger, so its diversity is higher. In the early stage, high diversity can prevent the search from being trapped in local optima, but after a limited number of cycles the algorithm needs to converge towards the optimal solution. The diversity of NSABC decreases a lot, which weakens the exploration ability of the ABC algorithm. After introducing the GWO and WOA operators into the framework, the diversity of these algorithms decreases faster than that of the NSABC algorithm on most datasets. The lower the diversity, the weaker the exploration ability and the stronger the exploitation ability of the algorithm. This shows that the introduction of the GWO and WOA operators effectively strengthens the exploitation ability. In addition, Figure 2 shows that the diversity curves of the BABCGWOWS and BABCGWO algorithms are similar, and the diversity curves of the BABCWOAWS and BABCWOA algorithms are not very different. It can be seen that the scout bees have little effect on the diversity of the algorithms in this framework.
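The paper does not state which diversity measure is plotted in Figure 2; one common population diversity measure, the mean distance of the individuals to the swarm centroid, could be computed as follows (purely illustrative):

```python
import numpy as np

def population_diversity(X):
    """Mean Euclidean distance of the individuals to the population centroid."""
    centroid = X.mean(axis=0)
    return float(np.linalg.norm(X - centroid, axis=1).mean())
```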
The convergence curves of the algorithms are plotted in Figure 3. This shows the decline of the error rate. Each curve is plotted by averaging the error rate obtained at each generation of the 10 runs. The convergence results of NSABC and ABC are similar on LSVT, Yale, colon, DBWorld, DLBCL, Pixraw10P and GLI_85 datasets, and the convergence results of the NSABC algorithm are slightly higher than that of the ABC algorithm on other datasets. Obviously, compared with the ABC algorithm, the BABCGWO and BABCWOA algorithms converge faster with a good-quality solution. On most datasets, the error rates of BABCGWO and BABCWOA are similar to or lower than those of BABCGWOWS and BABCWOAWS, respectively. It can be concluded that the BABCGWO and BABCWOA algorithms perform better than ABC in terms of both convergence speed and solution quality.
Table 2 shows the worst, best, mean and standard deviation of the error rates of each algorithm. The ultimate goal of FS is to improve generalization performance, which means achieving a lower error rate on unseen data. A lower error rate indicates that the algorithm can find a better feature subset. Since almost all SI-based algorithms are stochastic in nature, they may produce different results in each run; therefore, the standard deviation is used to measure the variation in the results. The smaller the standard deviation, the more stable the algorithm.
From Table 2, we can see that the error rate of the NSABC algorithm is slightly higher than that of the ABC algorithm on most datasets, but the increase is never more than 0.01. The error rate improved on all datasets except Pixraw10P after the introduction of the operator with strong exploitation ability in the onlooker bee phase. Specifically, BABCWOA's average error rate is lower than that of the ABC algorithm on every dataset except Pixraw10P, with the smallest improvement (0.005) on the Prostate dataset. On the Yale dataset, the average error rate decreased the most, by nearly 0.06, and on the SRBCT and DBWorld datasets it decreased by about 0.05; on the other datasets, the average error rate decreased by about 0.01 to 0.03. Moreover, the average error rate of BABCGWO was reduced even more: on the Yale dataset, BABCGWO decreased by 0.125 compared with the ABC algorithm, and its smallest improvement (0.017) occurred on the GLI_85 dataset. There are seven datasets on which the average error rate decreased by more than 0.04. As can be seen, the error rate of BABCGWOWS did not change much on most datasets compared to BABCGWO, and the comparison between BABCWOAWS and BABCWOA is similar.
In terms of the worst error rate, the BABCWOA algorithm reduced it on half of the datasets, while the BABCGWO algorithm improved it on all datasets except Pixraw10P, with the largest decrease (0.119) on the Yale dataset. Both BABCWOA and BABCGWO decreased the error rate overall. From the standard deviations it can be seen that, although the error rate of the BABCWOA algorithm improved on the whole, it was not as stable as the ABC algorithm on a few datasets, such as Yale and ALLAML. The stability of the BABCGWO algorithm is similar to that of the ABC algorithm. It can be concluded that introducing operators with strong exploitation ability into the proposed framework can indeed improve the ABC algorithm to a certain extent, and that the scout bee phase is not active and has little effect on reducing the error rate of the algorithm.
As per the results in Table 3, the improved algorithms select more features than the ABC algorithm. Although dimensionality reduction is one of the targets of FS, achieving a lower error rate is more important in many practical applications. Although the ABC algorithm finds a small feature subset, it can be observed from the error rates in Table 2 that such a small number of selected features does not achieve a low error rate.
According to Figure 4, it is obvious that the calculation time is reduced when the scout bee phase is removed. After the introduction of the GWO operator, the BABCGWO algorithm displayed little difference in time compared with the ABC or NSABC algorithms on some datasets, and it was much faster than the ABC algorithm on the colon, SRBCT, Leukemia1, DLBCL, ALLAML, Pixraw10P, Prostate and Leukemia2 datasets. After the introduction of the WOA operator, the running time of the BABCWOA algorithm increased to some extent on all datasets except SRBCT and ALLAML, which may be because the running time also grows with the number of selected features; it can be observed from Table 3 that the BABCWOA algorithm selects more features than the other algorithms.
To sum up, the proposed framework can effectively make the convergence speed faster, reduce the diversity, and find a better optimal solution. Although the number of features increases, the classification error rate of the algorithm decreases significantly after introducing an operator with strong exploitation ability into the framework. The scout bee phase has very little effect on improving the fitness value of the solution and consumes computational resources and memory, so the scout bee phase is omitted in this framework. From the above analysis, we can conclude that using the proposed framework can effectively improve the performance of the ABC algorithm.

6. Further Analysis

The comparisons in Section 5 show that the proposed BABCGWO and BABCWOA algorithms are more efficient than the ABC algorithm. To make a complete evaluation, we further verify their effectiveness by comparing them with four state-of-the-art FS algorithms on high-dimensional datasets, including the popular PSO variants CSO [40] and VSCCPSO [41], the novel GWO variant ALO_GWO [42], and an ABC variant named ACABC [31]. In particular, CSO, VSCCPSO and ALO_GWO have achieved excellent results in dealing with high-dimensional datasets. The parameters of CSO, VSCCPSO, ALO_GWO and ACABC are set as in their original papers.
In this section, the classification error rate, the size of the feature subset, the computational time and the convergence curve of the six algorithms are investigated. The best results are shown in bold in the table. To further verify the improved effect of the two algorithms proposed in this paper, the Wilcoxon’s rank sum test [43,44] with a significance level of 0.05 is applied to test the statistical significance between two different algorithms. The error rate, number of features and execution time of the two algorithms are tested by Wilcoxon rank sum test with another four FS algorithms. In the table of Wilcoxon rank sum test, the symbol “+” indicates that the proposed algorithms are significantly better than the compared algorithm, the symbol “=” means that the performance of the two algorithms is similar, and the symbol “−” is opposite to “+”, indicating that the proposed algorithms are significantly worse than other algorithms.
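As an illustration, the per-dataset comparison could be carried out with SciPy's rank sum test (the symbol convention follows the description above; the helper name is ours):

```python
from scipy.stats import ranksums

def compare_runs(proposed_errors, other_errors, alpha=0.05):
    """Wilcoxon rank sum test between the 10-run error rates of two algorithms;
    returns the p-value and the '+', '=' or '-' symbol used in Tables 5, 7 and 8."""
    _, p = ranksums(proposed_errors, other_errors)
    if p >= alpha:
        return p, "="                                        # no significant difference
    return p, "+" if sum(proposed_errors) < sum(other_errors) else "-"
```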
Table 4 shows the worst, best, average, and standard deviation of the error rates for each algorithm. In terms of the maximum and average error rates, the BABCGWO algorithm performed better than all the other algorithms on all datasets except Yale and Pixraw10P, and its error rate was several percentage points lower on most datasets. The BABCWOA algorithm outperformed the four compared algorithms on half of the datasets, and achieved the best average error rate of all algorithms on the Pixraw10P dataset. In terms of the minimum error rate, the BABCGWO algorithm was lower than the other algorithms on all datasets except LSVT, Yale and GLI_85. The BABCWOA algorithm also outperformed the four algorithms on more than half of the datasets. The standard deviation of the BABCGWO algorithm was ≤0.01 on most datasets and 0.02 only on the colon dataset, which is significantly better than most of the other algorithms, indicating that the BABCGWO algorithm has better stability. However, the standard deviation of the BABCWOA algorithm is mostly about 0.02, which is not much different from the four compared algorithms.
To further illustrate whether the error rates of the proposed algorithms are significantly different from those of the other algorithms, we use the Wilcoxon rank sum test. As can be seen from the results in Table 5, compared with the ACABC and CSO algorithms, the error rates of the proposed algorithms were significantly lower on almost all 12 datasets. Compared with the VSCCPSO algorithm, the BABCGWO algorithm was superior on all datasets except LSVT, Yale and Pixraw10P; the BABCWOA algorithm was also significantly better than the VSCCPSO algorithm on some datasets, and there was little difference in the error rate between BABCWOA and VSCCPSO on most datasets. Compared with the ALO_GWO algorithm, the error rate of the BABCGWO algorithm was significantly lower on all datasets except SRBCT and DBWorld, and there was almost no notable difference between the BABCWOA and ALO_GWO algorithms.
The experimental results in Table 6 show the average number of selected features and the average execution time of the six algorithms across 10 runs on the 12 datasets. One of the purposes of FS is to remove redundant and irrelevant features so as to strengthen the classification performance. For the same error rate, a smaller number of selected features indicates that the algorithm has found a better feature subset. The experimental results show that the average number of features selected by the BABCGWO algorithm is smaller than that of the other algorithms on the 12 datasets, and its running time is shorter than that of the other algorithms on all datasets except Yale, where it is slower only than the CSO algorithm. The BABCWOA algorithm selects fewer features than the four compared algorithms on all datasets except LSVT and Yale, runs faster than the compared algorithms on 8 of the 12 datasets, and is second only to the CSO algorithm on the remaining 4 datasets. Therefore, although the error rate of the BABCWOA algorithm is not significantly improved on some datasets, it does reduce the size of the feature subsets and the running time. This indicates that, compared with the other algorithms, the algorithms proposed in this paper can find a smaller feature subset in a shorter time and achieve a lower error rate.
The Wilcoxon rank sum test results in Table 7 show that the feature subsets selected by the proposed algorithms are significantly smaller than those of the ACABC and CSO algorithms on the 12 datasets. Compared with ALO_GWO and VSCCPSO, the numbers of features selected by the proposed algorithms fail to be significantly smaller only on a few datasets.
As can be seen from the results in Table 8, the proposed algorithms are not much different from, or are slower than, the other algorithms for only a few datasets. On most datasets, the two algorithms are significantly faster than other algorithms.
The convergence curves of the algorithms for 12 datasets are shown in Figure 5. These curves confirm that the BABCGWO algorithm converges more rapidly, with a good quality of solution, than other algorithms in the first 20 iterations, which indicates that the optimization precision and optimization speed of BABCGWO algorithm are better than those of other algorithms. The BABCWOA algorithm also has a faster convergence curve on most datasets, and can obtain a lower error rate.
In conclusion, the proposed framework is effective. The exploration ability of the ABC algorithm is successfully combined with the updating mode of the algorithm with a strong exploitation ability, such that the BABCGWO algorithm and BABCWOA algorithm can find optimal solutions with lower error rates and fewer feature numbers in a shorter period of time.

7. Conclusions

There are often redundant and irrelevant features in high-dimensional datasets, so the FS method is used for data preprocessing. Aiming at the strong exploration ability of the ABC algorithm, this study proposes a framework that integrates the updating operators of the algorithm with strong exploitation abilities into the ABC algorithm to make the exploration and exploitation abilities balanced. Moreover, since the removal of the scout bee phase can weaken the exploration ability and save computational resources when processing high-dimensional datasets, the scout bee phase in the ABC algorithm is left out in our framework, and thus the BABCGWO algorithm and BABCWOA algorithm are proposed to deal with the FS problem in high-dimensional datasets. The experimental results show that on 12 high-dimensional datasets, the BABCGWO algorithm and BABCWOA algorithm are significantly superior to other algorithms as regards dimensionality reduction, classification error rate and execution time. This shows that the proposed framework can balance the capabilities of exploration and exploitation, and effectively improve the overall performance in FS.
However, the proposed method mainly focuses on the single-objective feature selection problem, where the main aim is to reduce the classification error rate. In the future, we will investigate a multi-objective FS algorithm that simultaneously maximizes the classification performance and minimizes the number of selected features. Moreover, we would like to apply the proposed algorithms to different domains to verify their generality.

Author Contributions

Methodology, data curation, software, formal analysis, visualization, validation, writing—original draft preparation, Y.Z.; writing—review and editing, J.W., Y.Z. and S.H.; project administration, S.H. and X.L.; supervision, funding acquisition, S.H., X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (31870641), Natural Science Foundation of Fujian Province (2018J01612), and Forestry Science and Technology Projects in Fujian Province (Memorandums 26), China.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dash, M.; Liu, H. Feature selection for classification. Intell. Data Anal. 1997, 1, 131–156. [Google Scholar] [CrossRef]
  2. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
  3. Mafarja, M.M.; Mirjalili, S. Hybrid Whale Optimization Algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312. [Google Scholar] [CrossRef]
  4. Gao, W.F.; Liu, S.Y.; Huang, L.L. A global best artificial bee colony algorithm for global optimization. J. Comput. Appl. Math. 2012, 236, 2741–2753. [Google Scholar] [CrossRef] [Green Version]
  5. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  6. Wang, Y.; Cai, Z.; Zhang, Q. Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66. [Google Scholar] [CrossRef]
  7. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  8. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  9. Xue, B.; Zhang, M.; Browne, W.N.; Yao, X. A survey on evolutionary computation approaches to feature selection. IEEE Trans. Evol. Comput. 2015, 20, 606–626. [Google Scholar] [CrossRef] [Green Version]
  10. Djellali, H.; Djebbar, A.; Zine, N.G.; Azizi, N. Hybrid artificial bees colony and particle swarm on feature selection. In Proceedings of the International Conference on Computational Intelligence and Its Applications, Oran, Algeria, 8–10 May 2018; Springer: Cham, Switzerland, 2018; pp. 93–105. [Google Scholar]
  11. Zorarpacı, E.; Özel, S.A. A hybrid approach of differential evolution and artificial bee colony for feature selection. Expert Syst. Appl. 2016, 62, 91–103. [Google Scholar] [CrossRef]
  12. Al-Tashi, Q.; Kadir, S.J.A.; Rais, H.M.; Mirjalili, S.; Alhussian, H. Binary Optimization Using Hybrid Grey Wolf Optimization for Feature Selection. IEEE Access 2019, 7, 39496–39508. [Google Scholar] [CrossRef]
  13. Shi, Y.; Pun, C.M.; Hu, H.; Gao, H. An improved artificial bee colony and its application. Knowl. Based Syst. 2016, 107, 14–31. [Google Scholar] [CrossRef]
  14. Garg, D.P.; Kumar, M. Optimization techniques applied to multiple manipulators for path planning and torque minimization. Eng. Appl. Artif. Intell. 2002, 15, 241–252. [Google Scholar] [CrossRef] [Green Version]
  15. Roberge, V.; Tarbouchi, M.; Labonté, G. Comparison of parallel genetic algorithm and particle swarm optimization for real-time UAV path planning. IEEE Trans. Ind. Inform. 2012, 9, 132–141. [Google Scholar] [CrossRef]
  16. Zhang, Y.; Gong, D.-W.; Zhang, J.-H. Robot path planning in uncertain environment using multi-objective particle swarm optimization. Neurocomputing 2013, 103, 172–185. [Google Scholar] [CrossRef]
  17. Oh, I.-S.; Lee, J.-S.; Moon, B.-R. Hybrid genetic algorithms for feature selection. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1424–1437. [Google Scholar]
  18. Palanisamy, S.; Kanmani, S. Artificial bee colony approach for optimizing feature selection. Int. J. Comput. Sci. Issues 2012, 9, 432. [Google Scholar]
  19. Tran, B.; Xue, B.; Zhang, M. Improved PSO for feature selection on high-dimensional datasets. In Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, Dunedin, New Zealand, 15–18 December 2014; Springer: Cham, Switzerland, 2014; pp. 503–515. [Google Scholar]
  20. Liang, Y.; Leung, K.-S. Genetic algorithm with adaptive elitist-population strategies for multimodal function optimization. Appl. Soft Comput. 2011, 11, 2017–2034. [Google Scholar] [CrossRef]
  21. Mirjalili, S.; Hashim, S.Z.M. A new hybrid PSOGSA algorithm for function optimization. In Proceedings of the 2010 International Conference on Computer and Information Application, Tianjin, China, 3–5 December 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 374–377. [Google Scholar]
  22. Pan, Q.-K.; Sang, H.-Y.; Duan, J.-H.; Gao, L. An improved fruit fly optimization algorithm for continuous function optimization problems. Knowl. Based Syst. 2014, 62, 69–83. [Google Scholar] [CrossRef]
  23. Clerc, M. Discrete particle swarm optimization, illustrated by the traveling salesman problem. In New Optimization Techniques in Engineering; Springer: Berlin/Heidelberg, Germany, 2004; pp. 219–239. [Google Scholar]
  24. Kan, J.M.; Zhang, Y. Application of an improved ant colony optimization on generalized traveling salesman problem. Energy Procedia 2012, 17, 319–325. [Google Scholar]
  25. Mahi, M.; Baykan, Ö.K.; Kodaz, H. A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem. Appl. Soft Comput. 2015, 30, 484–490. [Google Scholar] [CrossRef]
  26. Li, A.-D.; Xue, B.; Zhang, M. Improved binary particle swarm optimization for feature selection with new initialization and search space reduction strategies. Appl. Soft Comput. 2021, 106, 107302. [Google Scholar] [CrossRef]
  27. Gao, W.F.; Liu, S.Y.; Jiang, F. An improved artificial bee colony algorithm for directing orbits of chaotic systems. Appl. Math. Comput. 2011, 218, 3868–3879. [Google Scholar] [CrossRef]
  28. Hancer, E.; Xue, B.; Karaboga, D.; Zhang, M. A binary ABC algorithm based on advanced similarity scheme for feature selection. Appl. Soft Comput. 2015, 36, 334–348. [Google Scholar] [CrossRef]
  29. Gaidhane, P.J.; Nigam, M.J. A hybrid grey wolf optimizer and artificial bee colony algorithm for enhancing the performance of complex systems. J. Comput. Sci. 2018, 27, 284–302. [Google Scholar] [CrossRef]
  30. Chao, X.Q.; Li, W. Feature selection method optimized by artificial bee colony algorithm. J. Front. Comput. Sci. Technol. 2019, 13, 300–309. [Google Scholar]
  31. Shunmugapriya, P.; Kanmani, S. A hybrid algorithm using ant and bee colony optimization for feature selection and classification (AC-ABC Hybrid). Swarm Evol. Comput. 2017, 36, 27–36. [Google Scholar] [CrossRef]
  32. Shunmugapriya, P.; Kanmani, S.; Supraja, R.; Saranya, K. Feature selection optimization through enhanced Artificial Bee Colony algorithm. In Proceedings of the 2013 International Conference on Recent Trends in Information Technology (ICRTIT), Chennai, India, 25–27 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 56–61. [Google Scholar]
  33. Zhu, G.; Kwong, S. Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 2010, 217, 3166–3173. [Google Scholar] [CrossRef]
  34. Singh, A.; Deep, K. Exploration–exploitation balance in Artificial Bee Colony algorithm: A critical analysis. Soft Comput. 2019, 23, 9525–9536. [Google Scholar] [CrossRef]
  35. Hong, P.N.; Ahn, C.W. Fast artificial bee colony and its application to stereo correspondence. Expert Syst. Appl. 2016, 45, 460–470. [Google Scholar] [CrossRef]
  36. Emary, E.; Zawbaa, H.M.; Hassanien, A.E. Binary grey wolf optimization approaches for feature selection. Neurocomputing 2016, 172, 371–381. [Google Scholar] [CrossRef]
  37. Tu, Q.; Chen, X.C.; Liu, X.C. Multi-strategy ensemble grey wolf optimizer and its application to feature selection. Appl. Soft Comput. 2019, 76, 16–30. [Google Scholar] [CrossRef]
  38. Long, W.; Jiao, J.J.; Liang, X.M.; Tang, M.Z. An exploration-enhanced grey wolf optimizer to solve high-dimensional numerical optimization. Eng. Appl. Artif. Intell. 2018, 68, 63–80. [Google Scholar] [CrossRef]
  39. Liao, Y.; Vemuri, V.R. Use of K-Nearest Neighbor classifier for intrusion detection. Comput. Secur. 2002, 21, 439–448. [Google Scholar] [CrossRef]
  40. Gu, S.K.; Cheng, R.; Jin, Y.C. Feature selection for high-dimensional classification using a competitive swarm optimizer. Soft Comput. 2018, 22, 811–822. [Google Scholar] [CrossRef] [Green Version]
  41. Song, X.-F.; Zhang, Y.; Guo, Y.-N.; Sun, X.-Y.; Wang, Y.-L. Variable-Size Cooperative Coevolutionary Particle Swarm Optimization for Feature Selection on High-Dimensional Data. IEEE Trans. Evol. Comput. 2020, 24, 882–895. [Google Scholar] [CrossRef]
  42. Zawbaa, H.M.; Emary, E.; Grosan, C.; Snasel, V. Large-dimensionality small-instance set feature selection: A hybrid bio-inspired heuristic approach. Swarm Evol. Comput. 2018, 42, 29–42. [Google Scholar] [CrossRef]
  43. El-Kenawy, E.M.; Eid, M.M.; Saber, M.; Ibrahim, A. MbGWO-SFS: Modified Binary Grey Wolf Optimizer Based on Stochastic Fractal Search for Feature Selection. IEEE Access 2020, 8, 107635–107649. [Google Scholar] [CrossRef]
  44. Wilcoxon, F. Individual comparisons by ranking methods. In Breakthroughs in Statistics; Springer: New York, NY, USA, 1992; pp. 196–202. [Google Scholar]
Figure 1. The flowchart of the ABC algorithm and our proposed framework.
Figure 2. The diversity curves between different ABC-based methods.
Figure 3. The convergence curves between different ABC-based methods.
Figure 4. The execution time between different ABC-based methods.
Figure 5. The convergence curves of algorithms.
Table 1. Description for datasets.

Datasets | Features | Samples | Classes
LSVT | 310 | 126 | 2
Yale | 1024 | 165 | 15
colon | 2000 | 62 | 2
SRBCT | 2308 | 83 | 4
DBWorld | 4702 | 64 | 2
Leukemia1 | 5327 | 72 | 3
DLBCL | 5469 | 77 | 2
ALLAML | 7129 | 72 | 2
Pixraw10P | 10,000 | 100 | 10
Prostate | 10,509 | 102 | 2
Leukemia2 | 11,225 | 72 | 3
GLI_85 | 22,283 | 85 | 2
Table 2. Comparisons of error rate between different ABC-based methods.

Datasets | Index | ABC | NSABC | BABCWOA | BABCWOAWS | BABCGWO | BABCGWOWS
LSVT | worst | 0.112 | 0.121 | 0.103 | 0.104 | 0.056 | 0.064
LSVT | mean ± std | 0.102 ± 0.01 | 0.106 ± 0.01 | 0.075 ± 0.02 | 0.074 ± 0.02 | 0.044 ± 0.01 | 0.046 ± 0.01
LSVT | best | 0.087 | 0.089 | 0.047 | 0.047 | 0.031 | 0.031
Yale | worst | 0.357 | 0.370 | 0.326 | 0.345 | 0.238 | 0.240
Yale | mean ± std | 0.345 ± 0.01 | 0.351 ± 0.01 | 0.288 ± 0.03 | 0.299 ± 0.04 | 0.220 ± 0.01 | 0.220 ± 0.01
Yale | best | 0.327 | 0.327 | 0.241 | 0.243 | 0.210 | 0.207
colon | worst | 0.112 | 0.1 | 0.112 | 0.083 | 0.064 | 0.064
colon | mean ± std | 0.089 ± 0.02 | 0.094 ± 0.01 | 0.066 ± 0.02 | 0.064 ± 0.02 | 0.037 ± 0.02 | 0.038 ± 0.02
colon | best | 0.0643 | 0.081 | 0.05 | 0.048 | 0.014 | 0.014
SRBCT | worst | 0.063 | 0.071 | 0.024 | 0.046 | 0.000 | 0.000
SRBCT | mean ± std | 0.052 ± 0.01 | 0.061 ± 0.01 | 0.007 ± 0.01 | 0.018 ± 0.02 | 0.000 | 0.000
SRBCT | best | 0.022 | 0.047 | 0.000 | 0.000 | 0.000 | 0.000
DBWorld | worst | 0.121 | 0.110 | 0.093 | 0.074 | 0.033 | 0.048
DBWorld | mean ± std | 0.103 ± 0.01 | 0.092 ± 0.01 | 0.048 ± 0.02 | 0.046 ± 0.02 | 0.025 ± 0.01 | 0.031 ± 0.01
DBWorld | best | 0.088 | 0.079 | 0.017 | 0.029 | 0.014 | 0.014
Leukemia1 | worst | 0.068 | 0.086 | 0.043 | 0.071 | 0.014 | 0.029
Leukemia1 | mean ± std | 0.047 ± 0.02 | 0.065 ± 0.01 | 0.023 ± 0.02 | 0.043 ± 0.02 | 0.001 ± 0.01 | 0.004 ± 0.01
Leukemia1 | best | 0.027 | 0.039 | 0.000 | 0.014 | 0.000 | 0.000
DLBCL | worst | 0.039 | 0.0518 | 0.041 | 0.041 | 0.025 | 0.038
DLBCL | mean ± std | 0.029 ± 0.01 | 0.027 ± 0.02 | 0.016 ± 0.01 | 0.025 ± 0.01 | 0.010 ± 0.01 | 0.012 ± 0.01
DLBCL | best | 0.025 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
ALLAML | worst | 0.057 | 0.071 | 0.070 | 0.068 | 0.014 | 0.014
ALLAML | mean ± std | 0.045 ± 0.01 | 0.053 ± 0.01 | 0.022 ± 0.03 | 0.030 ± 0.03 | 0.001 ± 0.01 | 0.004 ± 0.01
ALLAML | best | 0.029 | 0.029 | 0.000 | 0.000 | 0.000 | 0.000
Pixraw10P | worst | 0.000 | 0.010 | 0.010 | 0.010 | 0.010 | 0.010
Pixraw10P | mean ± std | 0.000 | 0.001 ± 0.00 | 0.002 ± 0 | 0.003 ± 0 | 0.003 ± 0.01 | 0.005 ± 0.01
Pixraw10P | best | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Prostate | worst | 0.089 | 0.089 | 0.089 | 0.078 | 0.060 | 0.060
Prostate | mean ± std | 0.074 ± 0.01 | 0.084 ± 0.00 | 0.069 ± 0.02 | 0.062 ± 0.01 | 0.044 ± 0.01 | 0.039 ± 0.01
Prostate | best | 0.049 | 0.078 | 0.040 | 0.049 | 0.029 | 0.020
Leukemia2 | worst | 0.043 | 0.070 | 0.043 | 0.039 | 0.027 | 0.000
Leukemia2 | mean ± std | 0.032 ± 0.01 | 0.040 ± 0.01 | 0.014 ± 0.01 | 0.013 ± 0.01 | 0.004 ± 0.01 | 0.000
Leukemia2 | best | 0.014 | 0.013 | 0.000 | 0.000 | 0.000 | 0.000
GLI_85 | worst | 0.081 | 0.079 | 0.074 | 0.082 | 0.061 | 0.061
GLI_85 | mean ± std | 0.062 ± 0.01 | 0.064 ± 0.01 | 0.050 ± 0.01 | 0.058 ± 0.02 | 0.045 ± 0.01 | 0.040 ± 0.01
GLI_85 | best | 0.046 | 0.047 | 0.033 | 0.025 | 0.035 | 0.035
Table 3. Comparisons of the number of selected features between different ABC-based methods.
DatasetsIndexAlgorithms
ABCNSABCBABCWOABABCWOAWSBABCGWOBABCGWOWS
LSVTworst20151671576564
mean ± std8.9 ± 4.586.8 ± 3.8883.7 ± 50.61104.5 ± 34.0730.0 ± 16.8333.4 ± 15.07
best4327611516
Yaleworst147284421477126124
mean ± std96.9 ± 46.24122.6 ± 88.40245.6 ± 97.77347.9 ± 103.06102.4 ± 14.5597.9 ± 21.37
best30371272108655
colonworst3040266532112169
mean ± std18.5 ± 6.6920.3 ± 8.53142.8 ± 68.85155.1 ± 138.1880.4 ± 17.39105.9 ± 28.05
best121246615876
SRBCTworst584104497829233353
mean ± std121.2 ± 166.6964.8 ± 27.37109.4 ± 254.02394.5 ± 191.01164.6 ± 45.35204.0 ± 86.09
best2521112151107116
DBWorldworst4440484423297263
mean ± std32.3 ± 5.2531.7 ± 5.08249.4 ± 112.93317.7 ± 99.99216.2 ± 59.40205.6 ± 52.63
best232292154109119
Leukemia1worst14767616941247692410
mean ± std71.6 ± 30.28149 ± 210.69648.2 ± 504.44741.6 ± 324.58277.4 ± 156.79300.5 ± 73.80
best4230237305160182
DLBCLworst1286951029148414271413
mean ± std59.4 ± 30.28126.1 ± 202.79545.3 ± 314.94811.6 ± 532.06458.4 ± 358.90684.7 ± 464.70
best3228230143228190
ALLAMLworst97826461168370444
mean ± std66.5 ± 19.1750.9 ± 11.54386.4 ± 144.24500.4 ± 286.34276.4 ± 54.91305.7 ± 87.46
best4542168217198156
Pixraw10Pworst11588294370405349
mean ± std73.7 ± 17.8166.8 ± 8.34196.1 ± 56.32200.4 ± 74.22218.5 ± 76.059253.5 ± 63.50
best5258137121157159
Prostateworst14610314541505624897
mean ± std91.3 ± 32.6676.9 ± 15.571055.4 ± 362.08797.8 ± 314.48444.5 ± 129.06569.6 ± 227.40
best5655442445199250
Leukemia2worst295124130012461036844
mean ± std115.8 ± 66.7188.9 ± 18.75876.4 ± 303.41661.4 ± 273.88582.6 ± 234.36425.7 ± 153.55
best6870318329357311
GLI_85worst2481735716669129201453
mean ± std161.8 ± 32.20150.1 ± 12.051576.3 ± 1508.281553.6 ± 1840.901204.4 ± 674.091099.2 ± 272.00
best138131437520697681
Table 4. Comparison of error rates of algorithms.

Datasets | Index | CSO | VSCCPSO | ALO_GWO | ACABC | BABCWOA | BABCGWO
LSVT | worst | 0.081 | 0.064 | 0.078 | 0.080 | 0.075 | 0.056
LSVT | mean ± std | 0.063 ± 0.01 | 0.046 ± 0.01 | 0.065 ± 0.04 | 0.065 ± 0.01 | 0.075 ± 0.02 | 0.044 ± 0.01
LSVT | best | 0.055 | 0.024 | 0.054 | 0.056 | 0.047 | 0.031
Yale | worst | 0.320 | 0.230 | 0.309 | 0.315 | 0.326 | 0.238
Yale | mean ± std | 0.295 ± 0.02 | 0.216 ± 0.01 | 0.273 ± 0.02 | 0.288 ± 0.02 | 0.288 ± 0.03 | 0.220 ± 0.01
Yale | best | 0.268 | 0.200 | 0.254 | 0.266 | 0.241 | 0.210
colon | worst | 0.176 | 0.081 | 0.088 | 0.157 | 0.112 | 0.064
colon | mean ± std | 0.113 ± 0.03 | 0.065 ± 0.01 | 0.069 ± 0.01 | 0.119 ± 0.02 | 0.066 ± 0.02 | 0.037 ± 0.02
colon | best | 0.081 | 0.048 | 0.064 | 0.095 | 0.050 | 0.014
SRBCT | worst | 0.049 | 0.024 | 0.025 | 0.063 | 0.024 | 0.000
SRBCT | mean ± std | 0.033 ± 0.02 | 0.011 ± 0.01 | 0.005 ± 0.01 | 0.035 ± 0.01 | 0.007 ± 0.01 | 0.000
SRBCT | best | 0.000 | 0.000 | 0.000 | 0.022 | 0.000 | 0.000
DBWorld | worst | 0.255 | 0.091 | 0.062 | 0.198 | 0.093 | 0.033
DBWorld | mean ± std | 0.126 ± 0.05 | 0.048 ± 0.01 | 0.034 ± 0.01 | 0.139 ± 0.03 | 0.048 ± 0.02 | 0.025 ± 0.01
DBWorld | best | 0.062 | 0.026 | 0.017 | 0.093 | 0.017 | 0.014
Leukemia1 | worst | 0.084 | 0.056 | 0.057 | 0.07 | 0.043 | 0.014
Leukemia1 | mean ± std | 0.062 ± 0.01 | 0.031 ± 0.01 | 0.034 ± 0.02 | 0.058 ± 0.01 | 0.023 ± 0.02 | 0.001 ± 0.01
Leukemia1 | best | 0.039 | 0.028 | 0.000 | 0.041 | 0.000 | 0.000
DLBCL | worst | 0.075 | 0.091 | 0.038 | 0.064 | 0.041 | 0.025
DLBCL | mean ± std | 0.051 ± 0.01 | 0.038 ± 0.02 | 0.021 ± 0.01 | 0.038 ± 0.02 | 0.016 ± 0.01 | 0.010 ± 0.01
DLBCL | best | 0.038 | 0.026 | 0.000 | 0.013 | 0.000 | 0.000
ALLAML | worst | 0.113 | 0.056 | 0.082 | 0.121 | 0.070 | 0.014
ALLAML | mean ± std | 0.102 ± 0.01 | 0.031 ± 0.01 | 0.044 ± 0.03 | 0.103 ± 0.01 | 0.022 ± 0.03 | 0.001 ± 0.01
ALLAML | best | 0.093 | 0.014 | 0.000 | 0.082 | 0.000 | 0.000
Pixraw10P | worst | 0.050 | 0.010 | 0.040 | 0.040 | 0.010 | 0.010
Pixraw10P | mean ± std | 0.041 ± 0.00 | 0.010 | 0.012 ± 0.01 | 0.040 | 0.002 ± 0 | 0.003 ± 0.01
Pixraw10P | best | 0.040 | 0.010 | 0.000 | 0.040 | 0.000 | 0.000
Prostate | worst | 0.126 | 0.078 | 0.079 | 0.117 | 0.089 | 0.060
Prostate | mean ± std | 0.112 ± 0.01 | 0.063 ± 0.01 | 0.066 ± 0.01 | 0.106 ± 0.01 | 0.069 ± 0.02 | 0.044 ± 0.01
Prostate | best | 0.089 | 0.049 | 0.049 | 0.087 | 0.040 | 0.029
Leukemia2 | worst | 0.082 | 0.097 | 0.029 | 0.095 | 0.043 | 0.027
Leukemia2 | mean ± std | 0.061 ± 0.01 | 0.063 ± 0.02 | 0.015 ± 0.01 | 0.054 ± 0.02 | 0.014 ± 0.01 | 0.004 ± 0.01
Leukemia2 | best | 0.041 | 0.028 | 0.000 | 0.027 | 0.000 | 0.000
GLI_85 | worst | 0.129 | 0.106 | 0.071 | 0.150 | 0.074 | 0.061
GLI_85 | mean ± std | 0.094 ± 0.02 | 0.074 ± 0.02 | 0.058 ± 0.01 | 0.109 ± 0.02 | 0.050 ± 0.01 | 0.045 ± 0.01
GLI_85 | best | 0.081 | 0.047 | 0.047 | 0.082 | 0.033 | 0.035
Table 5. Wilcoxon rank sum test on error rates of algorithms.

Datasets | BABCGWO vs. CSO | BABCWOA vs. CSO | BABCGWO vs. VSCCPSO | BABCWOA vs. VSCCPSO | BABCGWO vs. ALO_GWO | BABCWOA vs. ALO_GWO | BABCGWO vs. ACABC | BABCWOA vs. ACABC
LSVT | 0(+) | 0.04(−) | 0.68(=) | 0(−) | 0(+) | 0.08(=) | 0(+) | 0.10(=)
Yale | 0(+) | 0.57(=) | 0.52(=) | 0(−) | 0(+) | 0.20(=) | 0(+) | 0.97(=)
colon | 0(+) | 0(+) | 0(+) | 0.47(=) | 0(+) | 0.09(=) | 0(+) | 0(+)
SRBCT | 0(+) | 0(+) | 0(+) | 0.55(=) | 0.08(=) | 0.62(=) | 0(+) | 0(+)
DBWorld | 0(+) | 0(+) | 0(+) | 0.84(=) | 0.06(=) | 0.11(=) | 0(+) | 0(+)
Leukemia1 | 0(+) | 0(+) | 0(+) | 0.72(=) | 0(+) | 0.17(=) | 0(+) | 0(+)
DLBCL | 0(+) | 0(+) | 0(+) | 0.01(+) | 0(+) | 0.44(=) | 0(+) | 0.01(+)
ALLAML | 0(+) | 0(+) | 0(+) | 0.16(=) | 0(+) | 0.05(=) | 0(+) | 0(+)
Pixraw10P | 0(+) | 0(+) | 0.10(=) | 0.01(+) | 0.01(+) | 0(+) | 0(+) | 0(+)
Prostate | 0(+) | 0(+) | 0(+) | 0.38(=) | 0(+) | 0.73(=) | 0(+) | 0(+)
Leukemia2 | 0(+) | 0(+) | 0(+) | 0(+) | 0.02(+) | 0.82(=) | 0(+) | 0(+)
GLI_85 | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0.10(=) | 0(+) | 0(+)
Table 6. Comparison of the average numbers of selected features and average execution times of algorithms.

Datasets | Index | CSO | VSCCPSO | ALO_GWO | ACABC | BABCWOA | BABCGWO
LSVT | Subsets | 151.5 | 37.5 | 102.5 | 150.9 | 83.7 | 30.0
LSVT | Time | 55.5 | 83.4 | 89.6 | 172.1 | 71.7 | 41.9
Yale | Subsets | 504.1 | 150.1 | 240.6 | 506.3 | 245.6 | 102.4
Yale | Time | 147.2 | 482.0 | 291.2 | 464.6 | 245.6 | 156.0
colon | Subsets | 983.0 | 194.0 | 261.2 | 995.6 | 142.8 | 80.4
colon | Time | 55.8 | 153.4 | 287.6 | 271.7 | 63.8 | 53.4
SRBCT | Subsets | 1127.1 | 205.8 | 408.3 | 1136.2 | 109.4 | 164.6
SRBCT | Time | 101.5 | 275.0 | 311.0 | 381.8 | 116.9 | 93.2
DBWorld | Subsets | 2316.2 | 601.4 | 446.5 | 2338.0 | 249.4 | 216.2
DBWorld | Time | 268.7 | 419.9 | 841.0 | 901.8 | 144.1 | 115.5
Leukemia1 | Subsets | 2664.6 | 721.2 | 867.4 | 2645.6 | 648.2 | 277.4
Leukemia1 | Time | 417.7 | 549.0 | 774.6 | 1257.0 | 238.1 | 153.9
DLBCL | Subsets | 2737.6 | 625.6 | 1097.8 | 2733.1 | 545.3 | 458.4
DLBCL | Time | 436.2 | 631.8 | 1259.8 | 2409.6 | 330.4 | 182.1
ALLAML | Subsets | 3563.3 | 1248.9 | 846.5 | 3537.0 | 386.4 | 276.4
ALLAML | Time | 646.2 | 828.0 | 780.6 | 2954.4 | 252.0 | 180.3
Pixraw10P | Subsets | 5015.6 | 2366.7 | 882.1 | 5006.7 | 196.1 | 218.5
Pixraw10P | Time | 1647.9 | 3417.9 | 1428.1 | 4187.3 | 367.0 | 253.3
Prostate | Subsets | 5246.2 | 1558.7 | 1440.3 | 5194.9 | 1055.4 | 444.5
Prostate | Time | 1418.7 | 2261.7 | 2435.9 | 8370.3 | 829.8 | 391.0
Leukemia2 | Subsets | 5627.1 | 2091.2 | 1336.5 | 5608.7 | 876.4 | 582.6
Leukemia2 | Time | 852.6 | 1820.4 | 2342.9 | 2722.6 | 693.8 | 282.9
GLI_85 | Subsets | 11,157.5 | 5167.9 | 2971.2 | 11,682.5 | 1576.3 | 1204.4
GLI_85 | Time | 3996.3 | 3872.9 | 3013.8 | 9230.0 | 2023.4 | 836.3
Table 7. Wilcoxon rank sum test on the numbers of features selected by algorithms.

Datasets | BABCGWO vs. CSO | BABCWOA vs. CSO | BABCGWO vs. VSCCPSO | BABCWOA vs. VSCCPSO | BABCGWO vs. ALO_GWO | BABCWOA vs. ALO_GWO | BABCGWO vs. ACABC | BABCWOA vs. ACABC
LSVT | 0(+) | 0(+) | 0.10(=) | 0.03(−) | 0(+) | 0.33(=) | 0(+) | 0(+)
Yale | 0(+) | 0(+) | 0(+) | 0.57(=) | 0(+) | 0.05(=) | 0(+) | 0(+)
colon | 0(+) | 0(+) | 0(+) | 0.03(+) | 0(+) | 0.02(+) | 0(+) | 0(+)
SRBCT | 0(+) | 0(+) | 0.04(+) | 0.16(=) | 0(+) | 0.01(+) | 0(+) | 0(+)
DBWorld | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0.01(+) | 0(+) | 0(+)
Leukemia1 | 0(+) | 0(+) | 0(+) | 0.05(=) | 0(+) | 0.04(+) | 0(+) | 0(+)
DLBCL | 0(+) | 0(+) | 0.01(+) | 0.34(=) | 0(+) | 0.01(+) | 0(+) | 0(+)
ALLAML | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
Pixraw10P | 0(+) | 0(+) | 0(+) | 0(+) | 0.03(+) | 0.02(+) | 0(+) | 0(+)
Prostate | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0.19(=) | 0(+) | 0(+)
Leukemia2 | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
GLI_85 | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
Table 8. Wilcoxon rank sum test on the execution times of algorithms.

Datasets | BABCGWO vs. CSO | BABCWOA vs. CSO | BABCGWO vs. VSCCPSO | BABCWOA vs. VSCCPSO | BABCGWO vs. ALO_GWO | BABCWOA vs. ALO_GWO | BABCGWO vs. ACABC | BABCWOA vs. ACABC
LSVT | 0(+) | 0.91(=) | 0(+) | 0.43(=) | 0(+) | 0.24(=) | 0(+) | 0(+)
Yale | 0.19(=) | 0(−) | 0(+) | 0(+) | 0(+) | 0.03(+) | 0(+) | 0(+)
colon | 0(+) | 0.06(=) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
SRBCT | 0.12(=) | 0.03(−) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
DBWorld | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
Leukemia1 | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
DLBCL | 0(+) | 0.03(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
ALLAML | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
Pixraw10P | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
Prostate | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
Leukemia2 | 0(+) | 0.14(=) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
GLI_85 | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+) | 0(+)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
