Article

FTDZOA: An Efficient and Robust FS Method with Multi-Strategy Assistance

1 Department of Artificial Intelligence, Guangzhou Huashang College, Guangzhou 511300, China
2 State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(10), 632; https://doi.org/10.3390/biomimetics9100632
Submission received: 10 September 2024 / Revised: 11 October 2024 / Accepted: 15 October 2024 / Published: 17 October 2024

Abstract
Feature selection (FS) is a pivotal technique in big data analytics, aimed at mitigating redundant information within datasets and optimizing computational resource utilization. This study introduces an enhanced zebra optimization algorithm (ZOA), termed FTDZOA, for superior feature dimensionality reduction. To address the challenges of ZOA, such as susceptibility to local optimal feature subsets, limited global search capabilities, and sluggish convergence when tackling FS problems, three strategies are integrated into the original ZOA to bolster its FS performance. Firstly, a fractional order search strategy is incorporated to preserve information from the preceding generations, thereby enhancing ZOA’s exploitation capabilities. Secondly, a triple mean point guidance strategy is introduced, amalgamating information from the global optimal point, a random point, and the current point to effectively augment ZOA’s exploration prowess. Lastly, the exploration capacity of ZOA is further elevated through the introduction of a differential strategy, which integrates information disparities among different individuals. Subsequently, the FTDZOA-based FS method was applied to solve 23 FS problems spanning low, medium, and high dimensions. A comparative analysis with nine advanced FS methods revealed that FTDZOA achieved higher classification accuracy on over 90% of the datasets and secured a winning rate exceeding 83% in terms of execution time. These findings confirm that FTDZOA is a reliable, high-performance, practical, and robust FS method.

Graphical Abstract

1. Introduction

With the booming development of artificial intelligence technology, an increasing number of fields need to utilize big data technology to enhance performance. Examples include medical processing [1,2,3,4,5,6], digital media [7], and natural language processing [8].
Unfortunately, the original dataset often contains a great deal of redundant information, which reduces the interpretability of the data and wastes computational resources. To reduce this waste during data computation and improve interpretability, the redundant information in the dataset usually needs to be denoised, which is a key issue in the field of big data technology [9]. If the original dataset contains $N$ features, the denoising process requires searching among $2^N$ candidate subsets. Searching for the optimal subset among so many combinations incurs an exponential computational cost [10]. To reduce the cost of this denoising process, it is more appropriate to tackle this NP-hard problem with an FS algorithm [11]. Therefore, the key aim of this study is to propose a reliable, robust, and practical feature selection method that reduces the waste of computational resources during data computation and enhances the interpretability of the data.
Currently, the main common FS approaches are filter and wrapper methods [12]. The filter method evaluates statistical properties or information-theoretic metrics between features and target variables and then removes redundant features according to those metrics; its advantages are high computational efficiency and low computational cost. However, the filter method cannot fully account for the characteristics of the downstream learner, so the selected feature subset often performs poorly on the learner and classification accuracy suffers [13]. Unlike the filter approach, the wrapper approach treats the feature selection (FS) process as a search problem: it continuously searches the feature space and selects features by evaluating the impact of different feature subsets on model performance through learners such as K-Nearest Neighbors (KNN) [14], support vector machines [15], and neural networks [16], which largely safeguards classification accuracy. However, in wrapper methods, searching the feature subset space with traditional methods consumes a great deal of computational resources, whereas meta-heuristic algorithms, being simple and flexible, can effectively reduce the computational cost of the optimal combination search [17].
Meta-heuristic algorithms are optimization techniques formed by simulating natural behavior; they are easy to implement, highly flexible, and able to avoid the trap of local optimality. They are currently categorized into four main types: evolutionary-based, chemical and physical-based, human-based, and population-based [18]. Evolution-based algorithms commonly include the genetic algorithm [19], evolutionary strategies [20], and biogeography-based optimization [21]. Chemical and physical-based algorithms commonly include thermal exchange optimization [22] and the big bang–big crunch algorithm [23]. Human-based algorithms include teaching–learning-based optimization [24] and search and rescue optimization [25]. Population-based algorithms mainly include the slime mold algorithm [26], competitive swarm optimizer [27], and salp swarm algorithm [28].
Because the simplicity and flexibility of meta-heuristic algorithms make it possible to reduce the cost of the optimal feature subset search, many scholars have proposed wrapper methods based on meta-heuristics to solve the FS problem. For example, Wang et al. proposed BFLGWO, a binary grey wolf optimizer that combines foraging–following and Lévy flight strategies. The stochastic nature of its learning strategy avoids premature convergence and improved accuracy by up to 4% on 12 datasets; however, this came at a substantial cost in execution time, making the algorithm less practical [29]. Mostafa et al. proposed A-HMDE, an adaptive hybrid mutated differential evolution for feature dimensionality reduction in medical datasets; the adaptive, dynamically adjustable hybrid mutation strategy allowed it to maintain a classification accuracy above 88% on different datasets, but its ability to remove redundant features is limited and leaves room for improvement [30]. Gao et al. proposed ISPSO, a particle swarm algorithm with information-gain-ratio-based sub-feature grouping; thanks to the dynamically adjustable information gain ratio, it achieves good results in eliminating redundant features, but its drawback is that the indicators it considers are too narrow and do not comprehensively cover the metrics involved in FS [31]. Malik Braik et al. proposed LCBCSA, an improved capuchin search algorithm that uses a chaotic strategy to counter the capuchin search algorithm's tendency to fall into local optima; it achieves good accuracy and fitness values, but it still falls into local optima on high-dimensional FS problems, which degrades its high-dimensional FS performance [32].
Heba Askr et al. proposed BEGJO, an improved golden jackal optimization algorithm combined with copula entropy for high-dimensional FS; the dynamic adjustability of copula entropy during the search gives it a clear advantage in accuracy, but this comes at the expense of feature dimension and running time, so the reliability and practicality of the algorithm are not well guaranteed [33]. Mahmoud Abdel-Salam et al. proposed ACD-GOA, an adaptive chaotic dynamic learning gazelle optimization algorithm that, combined with adaptive inertia weights, raises classification accuracy above 78%; however, its global search performance remains insufficient on high-dimensional FS problems, so redundant features cannot be eliminated effectively [34]. Law Kumar Singh et al. proposed EPO-BFO, a hybrid of the emperor penguin optimization algorithm and the bacterial foraging optimization algorithm for FS; although it achieves good results on low-dimensional problems, its accuracy is not guaranteed as the dimensionality increases [35]. For a deeper understanding, we summarize the advantages and disadvantages of the above works in terms of accuracy, execution time, error rate, and amount of feature dimensionality reduction in Table 1, where 'Yes' indicates that the indicator is assured and 'No' indicates that it is not.
The above facts confirm that FS methods based on meta-heuristic algorithms have strong feature dimensionality reduction performance. Although good progress has been made in classification accuracy, these methods often fall into a locally optimal feature subset on high-dimensional FS problems, losing classification accuracy on high-dimensional datasets; the essential reason is that their global search performance is insufficient. At the same time, as data environments grow more complex, current FS algorithms are unable to balance indicators such as running time, feature subset size, and accuracy, which limits their ability to capture the intrinsic patterns and features of the data and leaves their practicality and reliability unguaranteed. These facts motivate us to explore a novel, suitable meta-heuristic with efficient search performance, one that can fully explore the search space during feature dimensionality reduction and alleviate the problem of falling into a locally optimal feature subset on high-dimensional data. Fortunately, the zebra optimization algorithm (ZOA) has been shown to be a robust tool with efficient search capability [36]. Related studies have shown that ZOA has strong exploration ability and application scalability; for example, it has been successfully applied to transmission expansion planning [37], renewable distributed energy systems [38], and cyber threat detection [39]. In addition, existing work has not attempted to solve the FS problem with ZOA, and we apply ZOA to FS to fill this application gap. Meanwhile, considering that ZOA may still fall into locally optimal feature subsets on high-dimensional FS problems, this paper combines a fractional order search strategy, a triple mean point guidance strategy, and a differential strategy to propose an improved ZOA for FS, known as FTDZOA.
By analyzing the state of the novel FS methods above, we find that their problems concentrate on the following points. First, the algorithms pursue classification accuracy excessively without fully considering the significance of execution time in real environments, so the practicality of the proposed FS methods is not guaranteed. Second, some algorithms consider too narrow a set of indicators and do not comprehensively account for the metrics involved in the FS problem, which lowers their reliability. Meanwhile, most algorithms perform well on low-dimensional FS problems but easily fall into a locally optimal subset on high-dimensional ones, which weakens their solving ability. Compared with existing FS methods, FTDZOA has its own distinctive features that target these problems. First, the fractional order search strategy's ability to retain information from previous generations is fully exploited to reduce execution time during the solution process and thus increase the algorithm's practicality. Second, this paper comprehensively analyzes and evaluates the FS problem in terms of classification accuracy, feature subset size, execution time, and error rate. At the same time, the triple mean point guidance strategy and the differential strategy enhance the algorithm's global exploration capability by combining information from the global optimal point, a random point, and the current point, as well as the information differences among individuals, thereby avoiding local optimum traps when FS methods solve high-dimensional problems. These improvements make FTDZOA highly reliable and practical, and it can be considered a promising FS method. The main contributions of this paper are as follows:
  • The fractional order search strategy is introduced to improve the exploitation of ZOA in solving FS problems.
  • The introduction of the triple mean point guidance strategy effectively improves the exploration capability of ZOA and also ensures the exploitation of the algorithm.
  • Introducing a differential strategy to enhance the global exploration capability of ZOA.
  • An FS method based on FTDZOA is proposed by combining the above strategies.
  • The FTDZOA-based FS method is used for 23 FS problems and achieves efficient performance.
The remainder of the paper is organized as follows: Section 2 focuses on the theoretical approach of ZOA. Section 3 proposes FTDZOA by introducing a fractional order search strategy, triple mean point guidance strategy, and differential strategy on the basis of ZOA. Section 4 applies the proposed FTDZOA-based FS method to solve 23 FS problems involving low, medium, and high dimensions to evaluate the FS performance of FTDZOA. Section 5 summarizes the conclusions of this paper and the future work schedule.

2. Zebra Optimization Algorithm

ZOA [36] is a novel optimization algorithm developed by modeling the behavior of zebra groups, which is mainly inspired by the zebra’s foraging behavior and defense strategies in the environment. ZOA mainly consists of an initialization phase, a foraging phase, and a defense phase during execution. In this section, we give the mathematical model of the above stages along with the complete execution of ZOA.

2.1. Initialization Phase

Like other metaheuristic algorithms, the execution of ZOA starts with initializing the population. This section presents the mathematical model of ZOA's initialization process. Each zebra individual corresponds to a solution of the problem to be solved; by modeling each zebra individual as a vector, the problem can be represented mathematically. Multiple zebra individuals are modeled together to form the initial population, which starts the optimization iteration process. The initialization process is represented as Equation (1).
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times D} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,D} \\ \vdots & & \vdots & & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,D} \\ \vdots & & \vdots & & \vdots \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,D} \end{bmatrix}_{N \times D} \quad (1)$$
where $X$ denotes the zebra population, $X_i$ denotes the $i$th zebra individual, $N$ denotes the population size, $D$ denotes the dimension of the problem to be solved, and $x_{i,j}$ denotes the value of the $j$th variable of the $i$th individual.
Each zebra represents a candidate solution to the problem to be solved, and the quality of the different individual zebras is usually evaluated using the fitness value, expressed using Equation (2).
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N \times 1} = \begin{bmatrix} F(X_1) \\ \vdots \\ F(X_i) \\ \vdots \\ F(X_N) \end{bmatrix}_{N \times 1} \quad (2)$$
where $F$ denotes the vector of fitness values for the zebra population and $F_i$ denotes the fitness value of the $i$th individual. If the problem to be solved is a minimization problem, the smaller the fitness value, the higher the quality of the individual, and vice versa.
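To make the initialization phase concrete, the following minimal Python sketch implements Equations (1) and (2). The paper's experiments use MATLAB; Python is used here purely for illustration, and the function names and the toy sphere objective are assumptions, not from the paper.

```python
# A minimal sketch of ZOA's initialization phase, Equations (1)-(2).
# Assumes box constraints [lb, ub] and a user-supplied fitness function.
import numpy as np

def init_population(N, D, lb, ub, rng):
    """Sample N zebra individuals uniformly inside the bounds (Equation (1))."""
    return lb + rng.random((N, D)) * (ub - lb)

def evaluate(X, fitness):
    """Fitness vector F of Equation (2); smaller is better for minimization."""
    return np.array([fitness(x) for x in X])

# Usage with a toy sphere function standing in for the problem to be solved.
rng = np.random.default_rng(0)
X = init_population(N=10, D=5, lb=-10.0, ub=10.0, rng=rng)
F = evaluate(X, fitness=lambda x: float(np.sum(x**2)))
print(F.min())
```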

2.2. Foraging Phase

After initializing the population, individual positions are updated through the zebra’s foraging behavior in search of higher solution quality, and the mathematical model of the foraging phase is expressed as Equation (3).
$$x_{i,j}^{new,P1} = x_{i,j} + r \cdot (PZ_j - I \cdot x_{i,j}) \quad (3)$$
where $x_{i,j}^{new,P1}$ denotes the new value of the $j$th variable of the $i$th individual after the foraging phase, $r$ denotes a random number in the interval [0, 1], $PZ_j$ denotes the value of the $j$th variable of the optimal individual in the population, and $I$ is a value drawn from the set {1, 2}. Subsequently, the fitness values of the zebra's new and old states are compared and the better one is retained, as expressed in Equation (4).
$$X_i = \begin{cases} X_i^{new,P1}, & F_i^{new,P1} < F_i \\ X_i, & \text{else} \end{cases} \quad (4)$$
where $X_i^{new,P1}$ denotes the new position of the $i$th individual after the foraging-phase update, and $F_i^{new,P1}$ is the fitness value of the new state formed after the individual passes through the foraging phase.
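The foraging update with greedy retention (Equations (3) and (4)) can be sketched as follows; the text does not state whether $r$ is drawn once per individual or per dimension, so the per-dimension choice here is an assumption.

```python
# A sketch of the foraging phase, Equations (3)-(4).
import numpy as np

def foraging_step(X, F, fitness, rng):
    PZ = X[np.argmin(F)]                    # best zebra in the population
    for i in range(len(X)):
        r = rng.random(X.shape[1])          # r in [0, 1] (assumed per dimension)
        I = int(rng.integers(1, 3))         # I drawn from the set {1, 2}
        x_new = X[i] + r * (PZ - I * X[i])  # Equation (3)
        f_new = fitness(x_new)
        if f_new < F[i]:                    # greedy retention, Equation (4)
            X[i], F[i] = x_new, f_new
    return X, F
```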

2.3. Defense Phase

After a zebra individual improves its solution quality through foraging behavior, it subsequently improves it further through defensive behavior. The mathematical model of the defense phase is represented as Equation (5).
$$x_{i,j}^{new,P2} = \begin{cases} x_{i,j} + R \cdot (2r - 1) \cdot \left(1 - \dfrac{t}{T}\right) \cdot x_{i,j}, & P_s \le 0.5 \\ x_{i,j} + r \cdot (AZ_j - I \cdot x_{i,j}), & \text{else} \end{cases} \quad (5)$$
where $x_{i,j}^{new,P2}$ denotes the new value of the $j$th variable of the $i$th individual after passing through the defense phase, $R$ is a constant taking the value of 0.01, $t$ denotes the current iteration number, $T$ denotes the maximum number of iterations, $P_s$ is a random number in the interval [0, 1], and $AZ_j$ denotes the value of the $j$th variable of a random zebra individual in the population. Subsequently, the fitness values of the zebra's new and old states are compared and the better one is retained, as expressed in Equation (6).
$$X_i = \begin{cases} X_i^{new,P2}, & F_i^{new,P2} < F_i \\ X_i, & \text{else} \end{cases} \quad (6)$$
where $X_i^{new,P2}$ denotes the new position of the $i$th individual after the defense-phase update, and $F_i^{new,P2}$ is the fitness value of the new state formed after the individual passes through the defense phase.
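Under the same assumptions as the foraging sketch, the defense phase (Equations (5) and (6)) can be sketched as follows.

```python
# A sketch of the defense phase, Equations (5)-(6), with R = 0.01.
import numpy as np

def defense_step(X, F, fitness, t, T, rng, R=0.01):
    N, D = X.shape
    for i in range(N):
        r = rng.random(D)
        if rng.random() <= 0.5:             # P_s <= 0.5: perturb around itself
            x_new = X[i] + R * (2 * r - 1) * (1 - t / T) * X[i]
        else:                               # else: move relative to a random zebra AZ
            AZ = X[rng.integers(N)]
            I = int(rng.integers(1, 3))
            x_new = X[i] + r * (AZ - I * X[i])
        f_new = fitness(x_new)
        if f_new < F[i]:                    # greedy retention, Equation (6)
            X[i], F[i] = x_new, f_new
    return X, F
```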

2.4. Implementation of ZOA

This section describes the execution of the ZOA implementation process. After the population is initialized, the zebra individuals' positions are updated through the foraging phase and the defense phase, improving solution quality until the iteration stopping condition is reached and the optimal solution to the problem to be solved is output. The flowchart of the algorithm is shown in Figure 1a.

3. Mathematical Modeling of FTDZOA

As the complexity of the FS problem increases, solving it requires searching among an exponentially growing number of combinations, and the original ZOA, with its insufficient exploration and exploitation performance, is prone to falling into a locally optimal feature subset during the search, lowering classification accuracy and increasing running time. To remedy these drawbacks, this section proposes FTDZOA by combining three learning strategies. First, the fractional order search strategy is introduced to enhance ZOA's exploitation ability on the FS problem; it makes full use of the strategy's retention of information from previous generations and improves classification accuracy. Second, the triple mean point guidance strategy is introduced, which combines information from the global optimal point, a random point, and the current point to effectively enhance ZOA's exploration ability while preserving its exploitation ability, so that convergence performance is effectively enhanced. Finally, the differential strategy is introduced to enhance ZOA's exploration ability; it combines the information differences between individuals and reduces the risk of falling into locally optimal feature subsets. Together, these three strategies allow the algorithm to avoid locally optimal feature subsets when solving the FS problem, so that it can effectively prune redundant feature attributes from the dataset while maintaining classification accuracy. The strategies are described in detail in the following subsections.

3.1. Fractional Order Search Strategy

Because the FS problem requires searching for the optimal feature subset among an exponential number of combinations, the original ZOA's insufficient exploitation performance can easily prevent it from quickly and effectively locating the optimal feature subset, reducing classification accuracy and increasing runtime. These deficiencies motivate us to propose learning strategies with stronger exploitation capabilities. For example, on the high-dimensional datasets Madelon and Isolet, the algorithm needs to search among $2^{500}$ and $2^{617}$ feature combinations, respectively, which requires fast and efficient exploitation of a locally optimal region once it has been located in order to improve classification accuracy and reduce execution time. Additionally, fractional orders can describe the dynamic behavior of a system more finely. In our algorithm, the fractional order search strategy introduces fractional order derivatives so that the search transitions more smoothly, avoiding the jump phenomenon that may occur in traditional integer order search; this enhances exploitation in local regions and effectively improves the algorithm's search efficiency and accuracy. Especially on high-dimensional datasets, this smooth-transition property helps the algorithm adapt to the local characteristics of the data and strengthens its exploitation ability. Meanwhile, the literature [40] points out that the fractional order search strategy, owing to its excellent retention of individual historical information, enhances an individual's ability to learn from its own past, thereby effectively improving the algorithm's exploitation capability. Based on the above, we introduce a fractional order search strategy to enhance the exploitation performance of ZOA, improve its ability to locate the optimal feature subset, and raise the classification accuracy on the dataset. The fractional order search strategy is expressed as Equation (7) [40].
$$x_{i,j}^{new,P1} = \frac{1}{1!} q\, x_{i,j}^{t} + \frac{1}{2!} q(1-q)\, x_{i,j}^{t-1} + \frac{1}{3!} q(1-q)(2-q)\, x_{i,j}^{t-2} + \frac{1}{4!} q(1-q)(2-q)(3-q)\, x_{i,j}^{t-3} + l_{i,j}^{2} \times \left(x_{i,j}^{t} - x_{k,j}^{t}\right) \quad (7)$$
where $x_{i,j}^{new,P1}$ denotes the new value of the $j$th variable of the $i$th individual after updating by the fractional order search strategy, "!" denotes the factorial operation, $x_{i,j}^{t}$ denotes the $j$th variable value of the $i$th individual in the current iteration, $x_{i,j}^{t-1}$ denotes the $j$th variable value of the $i$th individual in the previous generation, and $x_{k,j}^{t}$ denotes the $j$th variable value of a random individual in the population in the current iteration. $l_{i,j}$ is a random number obeying a standard normal distribution, denoted as $l_{i,j} \sim N(0, 1)$. $q$ is the adaptive factor expressed as Equation (8).
$$q = \frac{1}{1 + e^{L}} \cdot \cos(2 \cdot \pi \cdot L) \quad (8)$$
where $e$ denotes the exponential function and $L$ denotes the adaptive decreasing factor, expressed as Equation (9).
$$L = \left(1 - \left(\frac{t}{T}\right)^{2}\right) \cdot r + 1 \quad (9)$$
From Equation (7), it can be seen that an individual's update relies mainly on information from its last four generations; by integrating and summarizing this historical information, the quality of the solution is effectively enhanced. Combined with the adaptive factor $q$, the strategy's adaptability greatly promotes the algorithm's exploitation ability, enabling it to better locate the optimal feature subset. At the same time, the fractional order search strategy not only draws on individual historical information but also preserves population diversity by learning from random individuals, reducing the risk of falling into locally optimal feature subsets through over-exploitation.
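A sketch of Equations (7)-(9) follows; the history-buffer layout and helper names are illustrative assumptions, and the $l_{i,j}^{2}$ term follows the reconstruction of Equation (7) above.

```python
# A sketch of the fractional order search strategy, Equations (7)-(9).
# `hist[g]` holds the population at generation t-g (g = 0..3).
import numpy as np

def adaptive_q(t, T, rng):
    L = (1 - (t / T) ** 2) * rng.random() + 1              # Equation (9)
    return (1 / (1 + np.exp(L))) * np.cos(2 * np.pi * L)   # Equation (8)

def fractional_step(hist, i, k, t, T, rng):
    q = adaptive_q(t, T, rng)
    l = rng.standard_normal(hist[0].shape[1])              # l ~ N(0, 1)
    x_t, x_t1, x_t2, x_t3 = (h[i] for h in hist)
    return (q * x_t                                        # Equation (7)
            + 0.5 * q * (1 - q) * x_t1
            + (1 / 6) * q * (1 - q) * (2 - q) * x_t2
            + (1 / 24) * q * (1 - q) * (2 - q) * (3 - q) * x_t3
            + l ** 2 * (x_t - hist[0][k]))                 # learn from random zebra k
```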

3.2. Triple Mean Point Guidance Strategy

The original ZOA was designed to reduce redundant features in the dataset and improve classification accuracy when solving FS problems. However, the number of combinations to be searched grows exponentially as the data dimensionality increases. For example, on the datasets Clean and Semeion, the algorithm must search among $2^{167}$ and $2^{256}$ feature subset combinations, respectively; the vast search space and the algorithm's limitations make it easy to fall into a locally optimal feature subset and difficult to reduce the dimensionality of the dataset effectively. The root cause is the algorithm's insufficient global search performance. These facts motivate us to propose a learning strategy with efficient global search performance to improve the algorithm's global search and raise classification accuracy on the FS problem. Traditional mean point guidance methods tend to consider only a single mean point as the search direction, which may leave the algorithm without sufficient exploration in some cases. The literature [41] points out that generating new individuals by averaging the current individual with random individuals can enhance global search capability. However, because individuals generated this way are highly random, exploitation cannot be guaranteed even though global search improves; we therefore additionally incorporate the global optimal individual. The triple mean point guidance strategy in this paper provides more direction and diversity to the feature subset search by introducing three different mean points, helping the algorithm explore unknown regions while maintaining convergence and thus perform better on complex datasets with multiple local optima. The triple mean point guidance strategy is represented by Equation (10).
$$x_{i,j}^{new,P2} = x_{i,j} \cdot r + \left(x_{m,j} + \left(1 - \frac{t}{T}\right) \cdot (x_{m,j} - x_{b,j})\right) \cdot (1 - r) \quad (10)$$
where $x_{i,j}^{new,P2}$ represents the new value of the $j$th variable of the $i$th individual updated through the triple mean point guidance strategy, $r$ represents a random number in the interval [0, 1], and $x_{b,j}$ represents the $j$th variable value of a random individual in the population. $x_{m,j}$ represents the $j$th variable value of the new individual generated through the triple mean point, expressed as Equation (11); the simulation of the triple mean point is shown in Figure 2.
$$x_{m,j} = \frac{x_{c,j} + PZ_j + x_{i,j}}{3} \quad (11)$$
where $x_{c,j}$ represents the $j$th variable value of a random individual in the population, and $PZ_j$ represents the $j$th variable value of the globally optimal individual in the population.
From Figure 2, it can be seen that the triple mean point helps individuals escape from local optimum traps, enhances the algorithm's exploration ability, and also improves solution accuracy. As Equation (10) shows, each individual learns from the triple-mean individual; owing to the randomness and optimality embedded in the triple mean point, the individual can explore a wider range of potentially optimal regions while exploitation remains guaranteed, strengthening its ability to locate the optimal feature subset region during FS. In addition, the individual learns from the gap between the triple-mean individual and a random individual, which further enhances the algorithm's exploration ability. Notably, this gap-learning process is adaptive: as the number of iterations increases, the degree of learning gradually decreases. This ensures strong global search in the early iterations, so that the optimal feature subset region can be located well, while preserving exploitation performance so that the algorithm can further exploit the potentially optimal feature subset region and improve classification accuracy.
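A sketch of the triple mean point update (Equations (10) and (11)) follows; the way the two random individuals are drawn is an assumption.

```python
# A sketch of the triple mean point guidance strategy, Equations (10)-(11).
import numpy as np

def triple_mean_step(X, PZ, i, t, T, rng):
    N, D = X.shape
    r = rng.random(D)
    x_b = X[rng.integers(N)]          # random individual for the gap term
    x_c = X[rng.integers(N)]          # random individual inside the mean
    x_m = (x_c + PZ + X[i]) / 3.0     # triple mean point, Equation (11)
    # Learning from the gap (x_m - x_b) decays adaptively with iterations.
    return X[i] * r + (x_m + (1 - t / T) * (x_m - x_b)) * (1 - r)
```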

3.3. Differential Strategy

When solving FS problems, due to the diversity of the feature subset combinations, ZOA is prone to getting trapped in suboptimal feature subsets when searching for the optimal feature subset. The reason is that the algorithm has low population diversity during execution, which makes it easy to fall into the trap of suboptimal feature subsets, and it cannot effectively eliminate redundant features in the dataset. In the literature [42], it is pointed out that the differential strategy helps to enhance the population diversity of the algorithm and alleviate the problem of falling into local optimal traps. Therefore, in order to improve the performance of the algorithm in solving FS problems, this subsection proposes a novel differential strategy that combines traditional differential strategies to enhance population diversity during the algorithm execution, which is expressed as Equation (12), and the simulation is shown in Figure 3.
$$x_{i,j}^{new,P2} = x_{i,j} + r \cdot \mathrm{abs}(randn) \cdot (x_{rand1,j} - x_{i,j}) + (1 - r) \cdot K \cdot (x_{rand2,j} - x_{rand3,j}) \quad (12)$$
where $x_{i,j}^{new,P2}$ represents the new value of the $j$th variable of the $i$th individual updated through the differential strategy, while $x_{rand1,j}$, $x_{rand2,j}$, and $x_{rand3,j}$ represent the $j$th variable values of three mutually distinct random individuals in the population. $K$ is a random number from the set {0, 1}, $\mathrm{abs}$ represents the absolute value operation, and $randn$ denotes a random number that follows a standard normal distribution.
From Equation (12), it can be seen that an individual learns both from the gap between itself and a random individual and from the gap between two other random individuals; by learning from these different gaps, the individual receives more information and explores more promising optimal regions of the solution space, greatly reducing the probability of falling into a locally optimal subset. Meanwhile, in Figure 3, the blue dashed line indicates the generation process of the differential strategy; it can be seen that after individuals learn through the differential strategy, the population occupies a larger region of the space and population diversity rises, making it more likely to explore the optimal feature subset region. In summary, updating individual positions through the differential strategy improves population diversity during the execution of the algorithm, which in turn effectively removes redundant features from the dataset, reduces running time, and improves classification accuracy.
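A sketch of the differential update of Equation (12) follows, assuming the three partners are mutually distinct and distinct from individual $i$.

```python
# A sketch of the differential strategy, Equation (12).
import numpy as np

def differential_step(X, i, rng):
    N, D = X.shape
    r1, r2, r3 = rng.choice([j for j in range(N) if j != i], size=3, replace=False)
    r = rng.random(D)
    K = int(rng.integers(0, 2))       # K drawn from the set {0, 1}
    return (X[i]
            + r * np.abs(rng.standard_normal(D)) * (X[r1] - X[i])
            + (1 - r) * K * (X[r2] - X[r3]))
```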

3.4. Implementation of FTDZOA

Given the complexity of the current FS problem, ZOA is prone to falling into a locally optimal feature subset during the search because of its insufficient exploration and exploitation performance, and therefore cannot effectively remove redundant features. In this section, FTDZOA is formed by combining the fractional order search strategy, the triple mean point guidance strategy, and the differential strategy, so that the algorithm avoids falling into locally optimal feature subsets when solving the FS problem, effectively removes redundant feature attributes from the dataset, and at the same time preserves classification accuracy. The implementation flowchart of FTDZOA is shown in Figure 1b.
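The precise scheduling of the three strategies follows the flowchart in Figure 1b. As a rough, non-authoritative illustration only, the loop below wires together the step functions sketched in Sections 3.1-3.3 with greedy retention; the interleaving and the 0.5 branching probability are assumptions, not the authors' design.

```python
# An assumed driver loop reusing fractional_step, triple_mean_step, and
# differential_step from the sketches above; see Figure 1b for the real flow.
import numpy as np

def ftdzoa_sketch(fitness, N, D, lb, ub, T, seed=0):
    rng = np.random.default_rng(seed)
    X = lb + rng.random((N, D)) * (ub - lb)
    F = np.array([fitness(x) for x in X])
    hist = [X.copy() for _ in range(4)]        # last four generations, Eq. (7)
    for t in range(T):
        PZ = X[np.argmin(F)]
        for i in range(N):
            k = int(rng.integers(N))
            cand_a = fractional_step(hist, i, k, t, T, rng)   # exploitation
            if rng.random() < 0.5:                            # exploration
                cand_b = triple_mean_step(X, PZ, i, t, T, rng)
            else:
                cand_b = differential_step(X, i, rng)
            for x_new in (np.clip(cand_a, lb, ub), np.clip(cand_b, lb, ub)):
                f_new = fitness(x_new)
                if f_new < F[i]:                              # greedy retention
                    X[i], F[i] = x_new, f_new
        hist = [X.copy()] + hist[:3]
    return X[np.argmin(F)], float(F.min())
```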

4. Results and Discussion

In this section, the performance of the FTDZOA-based FS method is tested on 23 datasets spanning high, medium, and low dimensions, after which we analyze FTDZOA's population diversity, exploration–exploitation balance, strategy effectiveness, fitness values, nonparametric test results, convergence, classification accuracy, feature subset size, running time, and overall performance. At the same time, an experimental comparison with nine efficient algorithms, namely ABO, DE, GWO, PSO, BOA, EO, MVO, WOA, and ZOA, objectively and fairly verifies that the FTDZOA-based FS method proposed in this paper solves FS problems efficiently. The parameters of the nine comparison algorithms follow the settings in their original literature to ensure their best performance. Information on the datasets used in this paper is shown in Table 2; they are available at https://archive.ics.uci.edu/datasets (accessed on 15 October 2024). The comparison algorithms' parameter settings are detailed in Table 3.
To ensure fairness and reproducibility, the experimental conditions were uniformly set to a population size of 10 and a maximum iteration count of 100, and each experiment was run 30 independent times to compile statistics on the results. All experiments were conducted on a 6-core AMD Ryzen processor with 8 GB of memory, produced by Advanced Micro Devices (AMD), California, USA. All code involved in the experiments was run in MATLAB R2021b on Windows 11. Some of the code involved can be found at https://github.com/JingweiToo/Wrapper-Feature-Selection-Toolbox (accessed on 15 October 2024).

4.1. FS Optimization Model

The main purpose of the FS problem is to extract effective feature information from the raw dataset and eliminate redundant features, thereby reducing data complexity and improving classification accuracy. It can be regarded as a high-dimensional combinatorial optimization problem whose fitness function is defined in Equation (13).
$$\min f(X_i) = \lambda_1 \cdot error + \lambda_2 \cdot R / n \quad (13)$$
where $X_i$ denotes the $i$th solution, i.e., the feature subset extracted from the original dataset; $error$ denotes the classification error rate obtained using this feature subset; $R$ denotes the size of the selected feature subset; and $n$ denotes the number of features in the original dataset. $\lambda_1 \in [0, 1]$ and $\lambda_2 = 1 - \lambda_1$; in this paper, $\lambda_1$ takes the value 0.9.
Since individual positions are real-valued throughout FTDZOA's iterations while the FS problem is discrete, each individual must be converted from real values to discrete values before its fitness value is calculated. The flowchart for calculating the fitness value is shown in Figure 4, with the main steps as follows (a code sketch follows the list):
  • Step 1: The real-valued individual $X_i = (x_{i,1}, \ldots, x_{i,j}, \ldots, x_{i,D})$ is converted to a discrete-valued individual $X_i^c = (x_{i,1}^c, \ldots, x_{i,j}^c, \ldots, x_{i,D}^c)$, as expressed in Equation (14):
$$x_{i,j}^{c} = \begin{cases} 1, & x_{i,j} > 0.5 \\ 0, & x_{i,j} \le 0.5 \end{cases} \qquad i = 1, 2, \ldots, N;\; j = 1, 2, \ldots, D \quad (14)$$
  • Step 2: A feature subset is selected from the original dataset according to the discrete-valued individual $X_i^c$, where $x_{i,j}^c = 1$ means that the $j$th feature is selected and $x_{i,j}^c = 0$ means that it is not.
  • Step 3: The selected subset of features is used to calculate the classification accuracy using KNN. In this paper, K takes the value of 5.
  • Step 4: Calculate the fitness value using Equation (13).
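As a concrete illustration of Steps 1-4, the sketch below evaluates Equation (13), assuming scikit-learn's KNeighborsClassifier with K = 5 and a simple holdout split; the validation scheme is not specified in the text, so that part is an assumption.

```python
# A sketch of the FS fitness evaluation, Equations (13)-(14).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(x_real, data, labels, lam1=0.9):
    mask = x_real > 0.5                         # Equation (14): binarize
    if not mask.any():                          # empty subset: worst fitness
        return 1.0
    Xtr, Xte, ytr, yte = train_test_split(
        data[:, mask], labels, test_size=0.3, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
    error = 1.0 - knn.score(Xte, yte)           # classification error rate
    return lam1 * error + (1.0 - lam1) * mask.sum() / data.shape[1]  # Eq. (13)
```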

4.2. Population Diversity Analysis

In this section, we analyze the population diversity of the FTDZOA-based FS method when solving the FS problem. Higher population diversity indirectly reflects the algorithm's ability to jump out of a locally optimal feature subset, meaning the algorithm can explore a wider optimal feature region and effectively eliminate redundant features from the original dataset. The experimental results are shown in Figure 5, where the X-axis represents the number of iterations and the Y-axis represents the population diversity value.
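The text does not spell out the diversity formula behind Figure 5; one common measure, assumed here purely for illustration, is the mean Euclidean distance of individuals to the population centroid.

```python
# An assumed population diversity measure (not necessarily the one in Figure 5).
import numpy as np

def population_diversity(X):
    centroid = X.mean(axis=0)
    return float(np.linalg.norm(X - centroid, axis=1).mean())
```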
From Figure 5, it can be seen that on the low-dimensional datasets Aggregation and Iris, the FTDZOA-based FS method maintains higher population diversity than ZOA throughout the iterations, implying a stronger capability of jumping out of locally optimal feature subsets on low-dimensional FS problems. This is mainly due to the triple mean point guidance strategy and differential strategy introduced in this paper, which improve the algorithm's population diversity. On medium-dimensional FS problems, FTDZOA's population diversity also exceeds ZOA's throughout the iterations, indicating that as the data dimension increases, the two strategies can still effectively enhance population diversity and the ability to escape locally optimal feature subsets. Finally, high-dimensional FS problems demand an even greater ability to escape local optima because of the complex search spaces that come with higher dimensionality. Fortunately, FTDZOA effectively enhances population diversity and stays ahead of ZOA on these problems as well, demonstrating that the triple mean point guidance strategy and differential strategy remain effective and robust in high dimensions. In summary, across low-, medium-, and high-dimensional FS datasets, the two strategies proposed in this paper effectively enhance population diversity, improve the algorithm's ability to escape suboptimal feature subsets, and effectively eliminate redundant features.

4.3. Exploration-Exploitation Balance Analysis

In this section, we mainly analyze the exploration–exploitation phase of the FTDZOA-based FS method when dealing with FS problems, in which the main purpose of the exploration phase is to explore the optimal region in the vast solution space, while the main purpose of the exploitation phase is to carry out a deeper step in the discovered optimal region to speed up the convergence speed and convergence accuracy. A good algorithm should achieve a good balance between these two phases in order to make the exploration and exploitation phases complement each other and jointly promote the performance of the algorithm. The experimental results are shown in Figure 6, where the X-axis represents the number of iterations and the Y-axis represents the proportion of exploration/exploitation phases in the running process.
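The measurement behind Figure 6 is not specified; a widely used dimension-wise formulation, assumed here and not confirmed by the paper, derives exploration and exploitation percentages from the population's deviation around per-dimension medians.

```python
# An assumed exploration/exploitation measurement for curves like Figure 6.
import numpy as np

def dimension_wise_diversity(X):
    """Mean absolute deviation of the population from per-dimension medians."""
    return float(np.mean(np.abs(X - np.median(X, axis=0))))

def xpl_xpt(div_history):
    """Per-iteration exploration (XPL%) and exploitation (XPT%) shares."""
    div = np.asarray(div_history, dtype=float)
    div_max = div.max()
    xpl = 100.0 * div / div_max
    xpt = 100.0 * np.abs(div - div_max) / div_max
    return xpl, xpt
```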
From Figure 6, it can be seen that when solving low-dimensional FS problems, the FTDZOA-based FS method has a higher exploration ratio early in the iterations, indicating stronger global search capability at that stage; this is mainly due to the triple mean point guidance strategy and the differential strategy introduced in this paper, which improve exploration and enable the algorithm to locate the globally optimal region more effectively. Subsequently, the exploitation ratio gradually increases, mainly owing to the fractional order search strategy, which improves the algorithm's exploitation ability; the rising exploitation ratio helps further develop the optimal region and improve classification accuracy. On medium- and high-dimensional FS problems, the triple mean point guidance strategy and differential strategy likewise give the algorithm stronger exploration early in the iterations, allowing it to locate the optimal region quickly and efficiently; then, as the fractional order search strategy takes effect, the exploitation phase dominates and local search is strengthened, improving convergence and enabling the algorithm to effectively eliminate redundant features. These facts illustrate that, owing to the three strategies introduced in this paper, the performance of the FTDZOA-based FS method is enhanced on low-, medium-, and high-dimensional FS problems.

4.4. Strategies Effectiveness Testing

Population diversity and exploration–exploitation behavior during FTDZOA's execution have been analyzed above, but they do not reveal how much each individual strategy improves the algorithm. This section evaluates strategy effectiveness: the fractional order search strategy alone is introduced into ZOA to form FZOA, the triple mean point guidance strategy to form TZOA, and the differential strategy to form DZOA, while introducing all three strategies yields FTDZOA. The FS performance of ZOA, FZOA, TZOA, DZOA, and FTDZOA is then evaluated on the 23 FS datasets spanning high, medium, and low dimensionality. The experimental results are shown in Figure 7, which presents each algorithm's average ranking by fitness value on datasets of different dimensionality. From the figure, it can be seen that introducing the fractional order search strategy into ZOA avoids the jump phenomenon of traditional integer order search while exploiting individual historical information and strengthening exploitation in local regions, so FZOA's average ranking beats the original ZOA across dimensionalities. Likewise, the triple mean point guidance strategy avoids the limitation of the traditional single mean point and, by combining randomness and optimality, strengthens the algorithm's ability to escape local optimum traps, so TZOA's average rankings outperform ZOA's on the different dimensional datasets. The differential strategy improves global search by letting individuals learn from the information differences of other individuals, enriching the regions of the solution space explored, so DZOA's average ranking also beats ZOA's. When all three strategies are introduced simultaneously, FTDZOA's exploitation and global search performance both improve, and its average rankings are the best across dimensionalities. From this analysis we conclude that each strategy individually improves ZOA from a different angle, confirming that all the strategies proposed in this paper are effective, and that introducing all three together improves ZOA further still.

4.5. Fitness Value Analysis

This section analyzes the fitness values of the FTDZOA-based FS method on the FS problems and, together with the comparison algorithms, provides an objective and fair assessment of the proposed method. To avoid chance in the experimental results, each group of experiments was run 30 independent times and the results were compiled statistically. The experimental results are shown in Table 4, where "Best" denotes the optimal fitness value, "Mean" the mean fitness value, "Worst" the worst fitness value, and "Rank" the ranking on each metric; "Mean Rank" denotes the mean ranking across metrics and "Final Rank" the final ranking derived from the "Mean Rank". The optimal values for each indicator are shown in bold, here and throughout the following text.
From Table 4, it can be seen how the FTDZOA-based FS method performs on the low-dimensional FS problems. In terms of the best fitness value, FTDZOA ranks first with 88% probability while ZOA ranks first with only 25% probability, and FTDZOA's winning rate is ahead of all comparison algorithms. This shows that the fractional order search strategy, triple mean point guidance strategy, and differential strategy introduced in this paper effectively improve the algorithm's ability to jump out of locally optimal feature subsets and eliminate redundant features, ensuring its performance on low-dimensional FS problems. It is undeniable, however, that despite these good results, FTDZOA is weaker than GWO on the Breastcancer dataset, indicating that its performance still has room for improvement on some specific low-dimensional datasets. In terms of the mean fitness value, FTDZOA ranks first with 100% probability, ahead of ZOA and the other comparison algorithms, indicating high solution stability on low-dimensional FS problems. Meanwhile, Figure 8 shows the box plots for the low-dimensional FS problems, from which it can be seen that FTDZOA's boxes have the smallest height, again indicating high solution stability and marking FTDZOA as a robust FS method. In terms of the worst fitness value, FTDZOA ranks first with 100% probability, ahead of ZOA and the other methods, indicating higher fault tolerance on low-dimensional FS problems and indirectly reflecting that FTDZOA is a reliable FS method. By contrast, the comparison algorithms cannot precisely locate the globally optimal region on low-dimensional FS problems because of the insufficiency of their exploitation strategies; limited in the search space they can cover, they cannot exploit regions efficiently or fully explore all possible feature combinations, and therefore cannot find the optimal feature subset. From this analysis, it can be concluded that the fractional order search strategy, triple mean point guidance strategy, and differential strategy introduced in this paper improve FTDZOA's optimization performance, solution stability, and reliability.
Second, Table 4 shows how the FTDZOA-based FS method performs on the medium-dimensional FS problems. In terms of the best fitness value, FTDZOA ranks first with 100% probability while ZOA ranks first with only 13% probability, and FTDZOA's winning rate is ahead of the comparison algorithms. This demonstrates that as the dimensionality of the FS problem increases, the fractional order search strategy, triple mean point guidance strategy, and differential strategy proposed in this paper still contribute to performance, allowing the algorithm to remain efficient on medium-dimensional feature problems. In the mean fitness value, FTDZOA again ranks first with 100% probability, far ahead of ZOA and the other comparison algorithms, showing that as the dimensionality and complexity of the FS problem grow, FTDZOA retains high solution stability, which indirectly underscores its practicality. At the same time, Figure 9 shows the box plots for the medium-dimensional FS problems, from which it can be seen that FTDZOA's boxes have the smallest height, indicating that its solutions are more tightly clustered; this also reflects that in realistic feature dimensionality reduction environments, FTDZOA's solution stability is very high and its reliability is assured, so it can be considered a reliable FS method. In terms of the worst fitness value, FTDZOA ranks first with a probability of 88%, ahead of ZOA and the other methods, indicating that as the problem dimensionality rises, the FTDZOA-based FS method retains a high fault tolerance, further confirming its reliability on medium-dimensional FS problems. By contrast, the comparison algorithms are limited by their search efficiency: as the number of features increases, the search space grows more complex and the optimal solution must be found in a larger space, demanding a stronger ability to escape local optimum traps; constrained by their strategies, the comparison algorithms readily fall into local optima when solving the FS problem and cannot explore large-scale combinations effectively, losing performance. From this analysis, it can be concluded that with the fractional order search strategy, triple mean point guidance strategy, and differential strategy introduced in this paper, FTDZOA's performance is still effectively promoted as the FS problem's dimensionality increases, and the algorithm's reliability and robustness are assured.
Table 4 also shows that on the high-dimensional FS problems, the FTDZOA-based FS method ranks first in the best fitness value with 100% probability, ahead of the other comparison algorithms, confirming that the strategies introduced in this paper remain effective on high-dimensional combinatorial optimization problems. Meanwhile, it ranks first in the mean fitness value with 100% probability, demonstrating high stability on high-dimensional FS problems. Figure 10 shows the box plots for the high-dimensional FS problems, from which it can be seen that FTDZOA's boxes have the smallest height across the high-dimensional datasets, reflecting better performance and higher solution stability. In the worst fitness metric, it ranks first with 86% probability and can be considered to possess the highest fault tolerance. The comparison algorithms, by contrast, suffer from the curse of dimensionality on high-dimensional FS problems: the search space expands dramatically as the number of features grows, while their search strategies limit global search capability and leave population diversity severely lacking, sharply reducing search efficiency and causing FS performance to drop drastically. In summary, thanks to the advanced nature of its strategies, the FTDZOA proposed in this paper has higher stability and reliability on high-dimensional FS problems.
Finally, from a comprehensive point of view, the FS method based on FTDZOA has an average ranking of 1.043 in the best fitness value metric in 23 FS problems involving low, medium, and high dimensions, ahead of the other comparative algorithms, reflecting that FTDZOA possesses stronger global optimization-seeking capability. In the mean fitness value index, the mean ranking is 1.000, which confirms that FTDZOA possesses a stronger solution stability. In addition, in the worst fitness value index, the average ranking is 1.087, which confirms that FTDZOA has a higher fault tolerance for solving. In order to see more intuitively the performance differences of different algorithms in each metric, Figure 11 shows the ranked histograms of the algorithms, from which it is intuitively clear that FTDZOA ranks not only ahead of ZOA in terms of the worst fitness, the mean fitness, and the best fitness values, but also ahead of the other excellent algorithms. In summary, we can conclude that the FS method based on FTDZOA is an efficient, robust, and reliable FS method.

4.6. Nonparametric Analysis

In some cases, relying solely on numerical results may not suffice to assess the performance differences between algorithms comprehensively and accurately, because numerical results are sensitive to outliers: one or a few outliers may significantly change the statistics and thus the interpretation of the overall data. Therefore, to avoid performance bias due to outliers, this section applies the Friedman mean test at a significance level of 0.05 to the experimental results to assess the FTDZOA-based FS method objectively and fairly. The results are shown in Table 5, where "Mean Rank" denotes the algorithm's average ranking over all datasets and "Final Rank" the final ranking based on the "Mean Rank". In addition, we performed the Wilcoxon rank sum test at a significance level of 0.05; the results are shown in Table 6, where '+' indicates that an algorithm performs significantly better than FTDZOA, '−' significantly worse, and '=' not significantly different.
As can be seen from Table 5, on the eight low-dimensional FS problems the FTDZOA-based FS method obtains an average Friedman ranking of 1.74, ahead of the other comparison algorithms, and its winning rate on these problems is 100%; these results confirm the efficiency of the FTDZOA-based FS method on low-dimensional FS problems. On the eight medium-dimensional FS problems, its average Friedman ranking is 1.4, again ahead of the comparative algorithms, with a winning rate of 100%, demonstrating that FTDZOA still searches efficiently as the dimensionality of the FS problem increases. On the seven high-dimensional FS problems, its average Friedman ranking is 1.68, ahead of the other compared algorithms, and its winning rate is likewise 100%, confirming that the proposed FTDZOA remains efficient when solving high-dimensional FS problems. Taken together, FTDZOA's average ranking over all 23 FS problems is 1.54, ahead of the other compared algorithms. From this analysis, we can conclude that the introduction of the fractional order search strategy, the triple mean point guidance strategy, and the differential strategy improves the algorithm's exploration and exploitation capabilities, makes FTDZOA efficient on low-, medium-, and high-dimensional FS problems, and confirms that FTDZOA is a robust and reliable FS method.
From Table 6, it can be seen that on the low-dimensional datasets, ABO, BOA, EO, GWO, and MVO perform significantly worse than FTDZOA with 87.5% probability; DE, PSO, and WOA with 100% probability; and ZOA with 75% probability. This statistical analysis shows that, owing to its superior strategies, FTDZOA achieves better FS performance than the other algorithms on the low-dimensional datasets. On the medium-dimensional datasets, ABO, BOA, DE, GWO, PSO, and WOA are significantly weaker than FTDZOA with 100% probability, and EO, MVO, and ZOA with 87.5% probability. This indicates that the triple mean point guidance strategy and the differential strategy improve FTDZOA's global search performance, enabling it to explore more promising regions of the solution space and outperform the comparison algorithms on the medium-dimensional FS datasets. On the high-dimensional datasets, FTDZOA significantly outperforms ABO, BOA, DE, MVO, PSO, WOA, and ZOA and achieves excellent results, mainly because the strategies proposed in this paper improve the algorithm's exploitation ability and global search ability; this further confirms that the proposed FTDZOA-based FS method possesses powerful solving performance.

4.7. Convergence Analysis

In the preceding subsections, the FS performance of the FTDZOA-based FS method was analyzed from the perspectives of population diversity, the exploration and exploitation stages, fitness values, and stability, confirming that FTDZOA attains high solution accuracy on the FS problem and can efficiently eliminate redundant features and improve classification accuracy. Beyond this, the convergence behavior of the algorithm is also important, as it directly affects the algorithm's practicality and reliability. Figure 12, Figure 13, and Figure 14 show the convergence curves of FTDZOA on the low-, medium-, and high-dimensional FS problems, respectively, where the X-axis denotes the number of iterations and the Y-axis denotes the fitness value.
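As an illustration of how such curves are produced, the sketch below records the best-so-far fitness once per iteration; the optimizer here is a dummy stand-in whose fitness simply decays, not the actual FTDZOA implementation.

```python
# Sketch of convergence-curve bookkeeping; DummyOptimizer is a hypothetical
# stand-in for a metaheuristic, not the paper's actual FTDZOA code.
import random
import matplotlib.pyplot as plt

class DummyOptimizer:
    def __init__(self):
        self.best_fitness = 0.5
    def step(self):
        # One generation of updates; the best-so-far value never worsens.
        self.best_fitness = max(0.05, self.best_fitness - random.random() * 0.01)

def convergence_curve(optimizer, max_iters=100):
    history = []
    for _ in range(max_iters):
        optimizer.step()
        history.append(optimizer.best_fitness)  # fitness value per iteration
    return history

plt.plot(convergence_curve(DummyOptimizer()), label="FTDZOA (placeholder)")
plt.xlabel("Iterations"); plt.ylabel("Fitness value"); plt.legend(); plt.show()
```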
It can be seen in Figure 12 that all the algorithms effectively reduce the fitness value when solving the low-dimensional FS problems. It is worth noting that by the 10th iteration FTDZOA has already taken a clear lead in convergence accuracy and exhibits a faster convergence speed, which is mainly due to the fractional order search strategy proposed in this paper: it enhances the algorithm's local exploitation ability, improves the convergence speed, and increases the algorithm's practicality. After the 10th iteration, FTDZOA continues to optimize the FS problem steadily, which is mainly due to the triple mean point guidance strategy and the differential strategy strengthening its global search ability, further illustrating FTDZOA's reliability. This analysis confirms that, when solving the low-dimensional FS problems, the introduced strategies give FTDZOA a convergence advantage that makes it more practical and reliable.
As can be seen from Figure 13, as the dimensionality of the FS problem increases, all algorithms are still able to effectively reduce the fitness value on the medium-dimensional FS problems; similarly, the fractional order search strategy gives FTDZOA a performance advantage by the 10th iteration, with a stronger convergence speed. After the 10th iteration, FTDZOA continues to reduce the fitness value, confirming its excellent global search performance. The above analysis confirms that FTDZOA converges faster and more accurately on the medium-dimensional FS problems and has higher practicability and reliability.
As can be seen from Figure 14, once the dimensionality of the FS problem reaches several hundred, all algorithms still reduce the fitness value on the high-dimensional FS problems, although every algorithm suffers a loss in convergence speed. In most cases, FTDZOA holds a clear convergence advantage by the 40th iteration, and its curve continues to trend downward afterwards, indicating that FTDZOA keeps extracting effective features steadily. This is because the fractional order search strategy, the triple mean point guidance strategy, and the differential strategy introduced in this paper together improve the algorithm's global optimization ability. This analysis confirms that FTDZOA also achieves faster convergence speed and higher convergence accuracy when solving the high-dimensional FS problems, together with higher practicality and reliability.

4.8. Feature Subset and Accuracy Analysis

The main purpose of FS is to remove redundant features from the original dataset and thereby improve classification accuracy. This subsection therefore analyzes the relationship between feature subset size and classification accuracy observed in the experiments. Table 7 reports the classification accuracy of each algorithm on the FS problems, and Table 8 reports the corresponding feature subset sizes, where “Mean Rank” denotes the average rank over all FS problems.
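The two quantities are typically folded into a single wrapper fitness. The sketch below uses the standard weighted form with a KNN classifier and a weight α = 0.99; these specific choices are illustrative assumptions and may differ from the paper's exact setup.

```python
# Minimal sketch of a standard wrapper FS fitness: a weighted sum of the
# classification error and the relative subset size (lower is better).
# alpha = 0.99 and the 5-NN classifier are assumptions for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fs_fitness(mask, X, y, alpha=0.99):
    """mask: binary vector selecting feature columns."""
    if mask.sum() == 0:              # an empty subset cannot classify anything
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask.astype(bool)], y, cv=5).mean()
    error = 1.0 - acc
    ratio = mask.sum() / mask.size   # fraction of features kept
    return alpha * error + (1.0 - alpha) * ratio

# Example: random 30-feature dataset with a random binary feature mask.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(200, 30)), rng.integers(0, 2, 200)
print(fs_fitness(rng.integers(0, 2, 30), X, y))
```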
Combining Table 7 and Table 8, it can be seen that on the low-dimensional FS problems all algorithms can effectively prune the redundant features of the original dataset and thereby improve classification accuracy. Notably, FTDZOA achieves the best accuracy on 88% of the low-dimensional FS problems, an excellent feature reduction performance, although it is admittedly weaker than GWO on the Breastcancer dataset, which shows that there is still room for improvement on some specific datasets. Meanwhile, Table 8 shows that the feature subset sizes obtained by FTDZOA are not the smallest; this is mainly because other algorithms discard useful feature information and obtain smaller subsets at the cost of a large drop in classification accuracy. This also demonstrates that the FTDZOA-based FS method has better global optimization ability and can effectively escape locally optimal feature subsets to improve classification accuracy. On the medium-dimensional FS problems, FTDZOA achieves the best accuracy on 63% of the problems and ranks first overall; although its winning rate is lower, it still holds a clear advantage over the other algorithms. It can be noticed that FTDZOA is less effective than other algorithms on the Zoo, WDBC, and BreastEW datasets, indicating that it can still fall into locally optimal feature subsets on certain FS problems as the growing dimensionality makes the search space more complex; nevertheless, it remains the best performer overall on the medium-dimensional FS problems. On the high-dimensional FS problems, where the sharp increase in problem dimensionality greatly degrades the solution performance of the other algorithms, FTDZOA still achieves excellent results, taking first place with a probability of 72% and losing only to DE and EO on the Musk and Clean datasets. In summary, although FTDZOA does not outperform every algorithm on a few specific FS problems, taken as a whole it can be regarded as an FS method with efficient solving performance and good global optimization performance that effectively removes redundant features from a dataset.

4.9. Runtime Analysis

The preceding subsections verified the solution performance and reliability of FTDZOA. In addition, the actual running time of the algorithm must be taken into account, since it determines the algorithm's practicality in real-world environments. This section therefore analyzes the running time of the algorithms. Table 9 shows the running time of each algorithm on the FS problems, where “Mean Rank” indicates the average rank of an algorithm over the different FS problems. From Table 9, it can be seen that FTDZOA records the lowest running time on 19 of the 23 datasets (83%), which is mainly due to the simple logic of the FTDZOA algorithm, so its actual execution time is greatly reduced. Its average rank of 1.26 also places it ahead of the other algorithms. From this, we can conclude that the FTDZOA-based FS method is a highly practical FS method.
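Runtimes of this kind can be collected with simple wall-clock instrumentation, as in the sketch below, where run_fs is a hypothetical callable wrapping one complete FS run.

```python
# Sketch of how per-dataset runtimes like those in Table 9 can be measured;
# `run_fs` is a hypothetical stand-in for one full feature selection run.
import time

def timed(run_fs, repeats=30):
    """Average wall-clock seconds over independent runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        run_fs()
    return (time.perf_counter() - start) / repeats

print(timed(lambda: sum(i * i for i in range(10_000))))  # placeholder workload
```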

4.10. Synthesized Analysis

The reliability, stability, and practicality of the FTDZOA-based FS method were verified in the previous subsections. This subsection focuses on a comprehensive analysis across the different evaluation metrics: the best fitness value, the mean fitness value, the worst fitness value, the classification accuracy, the feature subset size, and the running time. Figure 15 shows a stacked bar plot of the average rankings on these metrics, where a lower bar height indicates better comprehensive performance on the FS problems. From Figure 15, it can be seen that FTDZOA has the smallest stacked height; from a comprehensive point of view, the FTDZOA-based FS method is therefore an FS method with efficient solving performance, robustness, reliability, and practicality.
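Figure 15 aggregates the per-metric average ranks by stacking them per algorithm; a minimal sketch of such a plot is given below, with placeholder rank values rather than the paper's measured rankings.

```python
# Sketch of the stacked ranking bar plot in Figure 15; the rank values here
# are hypothetical placeholders, not the measured average ranks.
import matplotlib.pyplot as plt
import numpy as np

algos = ["ZOA", "PSO", "FTDZOA"]
metrics = {"best fitness": [5, 6, 1], "accuracy": [5, 5, 1], "runtime": [4, 7, 1]}

bottom = np.zeros(len(algos))
for name, ranks in metrics.items():
    plt.bar(algos, ranks, bottom=bottom, label=name)  # stack each metric's rank
    bottom += np.array(ranks, dtype=float)
plt.ylabel("Summed average rank (lower is better)")
plt.legend()
plt.show()
```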

5. Conclusions

In this paper, we address the original ZOA's tendency to fall into locally optimal feature subsets when solving the FS problem, which results in poor classification accuracy, and we propose an improved version of ZOA called FTDZOA. Firstly, the fractional order search strategy is introduced to enhance ZOA's exploitation ability on the FS problem; it makes full use of the fractional order mechanism's retention of information from previous generations of individuals and improves classification accuracy. Secondly, the triple mean point guidance strategy is introduced, which combines the information of the global optimal point, a random point, and the current point to effectively enhance ZOA's exploration ability while preserving its exploitation ability, so that the algorithm's convergence performance is effectively enhanced. Finally, ZOA's exploration ability is further strengthened by the differential strategy, which exploits the information differences between individuals and reduces the risk of falling into locally optimal feature subsets. The proposed FTDZOA-based FS method was then used to solve 23 FS problems spanning low, medium, and high dimensions, and the experimental results confirm that the FTDZOA-based FS method is an efficient, robust, reliable, and practical FS method.
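To make the three ingredients concrete, the sketch below gives common textbook forms of a fractional-order memory term, a triple-mean-point move, and a differential perturbation. The coefficients and exact update forms are illustrative assumptions in the spirit of the strategies described above; they are not claimed to be FTDZOA's precise equations.

```python
# Illustrative sketch of the three strategies' flavor in a generic continuous
# update; all coefficients below are common textbook choices, NOT the paper's
# exact FTDZOA equations.
import numpy as np

rng = np.random.default_rng(2)

def fractional_memory_step(history, order=0.7):
    """Weighted recall of the last three positions (fractional-order flavor)."""
    w = [order,
         0.5 * order * (1 - order),
         (1 / 6) * order * (1 - order) * (2 - order)]
    past = history[-3:][::-1]  # most recent position first
    return sum(wi * xi for wi, xi in zip(w, past))

def triple_mean_guidance(x, x_best, population):
    """Move toward the mean of the global best, a random point, and x itself."""
    x_rand = population[rng.integers(len(population))]
    m = (x_best + x_rand + x) / 3.0
    return x + rng.random(x.size) * (m - x)

def differential_step(x, population, F=0.5):
    """DE-style perturbation from the difference of two random individuals."""
    r1, r2 = rng.choice(len(population), size=2, replace=False)
    return x + F * (population[r1] - population[r2])

pop = rng.normal(size=(10, 5))   # 10 individuals, 5 dimensions
x, x_best = pop[0], pop[1]       # pop[1] stands in for the global best
print(triple_mean_guidance(x, x_best, pop))
print(differential_step(x, pop))
print(fractional_memory_step([x * 0.8, x * 0.9, x]))
```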
However, it is undeniable that FTDZOA's performance on certain FS problems still needs improvement. In future work, we will therefore design more effective search strategies tailored to specific feature selection datasets to address this limitation. Additionally, the FTDZOA presented in this paper is a binary algorithm suited to combinatorial optimization, and only its performance on feature selection problems has been evaluated here. In future work, we will extend FTDZOA to fields such as aviation scheduling, image processing, and natural language processing to tackle challenging real-world problems. Furthermore, we will model feature selection problems arising in real-world scenarios according to their actual conditions and develop efficient FTDZOA-based algorithms for solving these specific FS models.

Author Contributions

Conceptualization, F.C. and R.X.; methodology, F.C. and R.X.; software, F.C.; validation, F.C. and S.Y.; formal analysis, F.C.; investigation, F.C. and L.X.; resources, F.C.; data curation, F.C. and S.Y.; writing—original draft preparation, F.C.; writing—review and editing, F.C. and R.X.; visualization, F.C., S.Y. and L.X.; supervision, F.C.; project administration, S.Y. and L.X.; funding acquisition, F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Guangzhou Huashang College Daoshi Project, grant number 2024HSDS06; in part by the Key Area Special Project for General Colleges and Universities in Guangdong Province, grant number 2024ZDZX3035; in part by the Guangdong Province Ordinary University Characteristic Innovation Project, grant number 2019KTSCX236; and in part by the Guangdong Province Ordinary University Characteristic Innovation Project, grant number 2022KTSCX379.

Data Availability Statement

All data in this paper can be obtained by contacting the corresponding author.

Acknowledgments

We are grateful to the publisher for formatting the paper and to the editor and reviewers for their valuable suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Albukhanajer, W.A.; Briffa, J.A.; Jin, Y. Evolutionary multiobjective image feature extraction in the presence of noise. IEEE Trans. Cybern. 2014, 45, 1757–1768. [Google Scholar] [CrossRef]
  2. Sun, T.; Lv, J.; Zhao, X.; Li, W.; Zhang, Z.; Nie, L. In vivo liver function reserve assessments in alcoholic liver disease by scalable photoacoustic imaging. Photoacoustics 2023, 34, 100569. [Google Scholar] [CrossRef]
  3. Huang, H.H.; Shu, J.; Liang, Y. MUMA: A multi-omics meta-learning algorithm for data interpretation and classification. IEEE J. Biomed. Health Inform. 2024, 28, 2428–2436. [Google Scholar] [CrossRef]
  4. Zhang, Z.; Xu, Y.; Song, J.; Zhou, Q.; Rasol, J.; Ma, L. Planet craters detection based on unsupervised domain adaptation. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 7140–7152. [Google Scholar] [CrossRef]
  5. Zhang, C.; Ge, H.; Zhang, S.; Liu, D.; Jiang, Z.; Lan, C.; Hu, R. Hematoma evacuation via image-guided para-corticospinal tract approach in patients with spontaneous intracerebral hemorrhage. Neurol. Ther. 2021, 10, 1001–1013. [Google Scholar] [CrossRef]
  6. Huang, H.; Wu, N.; Liang, Y.; Peng, X.; Shu, J. SLNL: A novel method for gene selection and phenotype classification. Int. J. Intell. Syst. 2022, 37, 6283–6304. [Google Scholar] [CrossRef]
  7. Zawbaa, H.M.; Emary, E.; Grosan, C.; Snasel, V. Large-dimensionality small-instance set feature selection: A hybrid bio-inspired heuristic approach. Swarm Evol. Comput. 2018, 42, 29–42. [Google Scholar] [CrossRef]
  8. Manbari, Z.; AkhlaghianTab, F.; Salavati, C. Hybrid fast unsupervised feature selection for high-dimensional data. Expert Syst. Appl. 2019, 124, 97–118. [Google Scholar] [CrossRef]
  9. Xue, B.; Zhang, M.; Browne, W.N.; Yao, X. A survey on evolutionary computation approaches to feature selection. IEEE Trans. Evol. Comput. 2015, 20, 606–626. [Google Scholar] [CrossRef]
  10. Tubishat, M.; Ja’afar, S.; Alswaitti, M.; Mirjalili, S.; Idris, N.; Ismail, M.A.; Omar, M.S. Dynamic salp swarm algorithm for feature selection. Expert Syst. Appl. 2021, 164, 113873. [Google Scholar] [CrossRef]
  11. Kamath, U.; De Jong, K.; Shehu, A. Effective automated feature construction and selection for classification of biological sequences. PLoS ONE 2014, 9, e99982. [Google Scholar] [CrossRef]
  12. Crone, S.F.; Kourentzes, N. Feature selection for time series prediction–A combined filter and wrapper approach for neural networks. Neurocomputing 2010, 73, 1923–1936. [Google Scholar] [CrossRef]
  13. Hu, Z.; Bao, Y.; Xiong, T.; Chiong, R. Hybrid filter–wrapper feature selection for short-term load forecasting. Eng. Appl. Artif. Intell. 2015, 40, 17–27. [Google Scholar] [CrossRef]
  14. Wang, A.; An, N.; Chen, G.; Li, L.; Alterovitz, G. Accelerating wrapper-based feature selection with K-nearest-neighbor. Knowl.-Based Syst. 2015, 83, 81–91. [Google Scholar] [CrossRef]
  15. Jiménez-Cordero, A.; Morales, J.M.; Pineda, S. A novel embedded min-max approach for feature selection in nonlinear support vector machine classification. Eur. J. Oper. Res. 2021, 293, 24–35. [Google Scholar] [CrossRef]
  16. Nemnes, G.A.; Filipoiu, N.; Sipica, V. Feature selection procedures for combined density functional theory—Artificial neural network schemes. Phys. Scr. 2021, 96, 065807. [Google Scholar] [CrossRef]
  17. Xie, R.; Li, S.; Wu, F. An Improved Northern Goshawk Optimization Algorithm for Feature Selection. J. Bionic Eng. 2024, 21, 2034–2072. [Google Scholar] [CrossRef]
  18. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  19. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  20. Beyer, H.G.; Schwefel, H.P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  21. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  22. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  23. Wei, Z.; Huang, C.; Wang, X.; Han, T.; Li, Y. Nuclear reaction optimization: A novel and powerful physics-based algorithm for global optimization. IEEE Access 2019, 7, 66084–66109. [Google Scholar] [CrossRef]
  24. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  25. Shabani, A.; Asgarian, B.; Salido, M.; Gharebaghi, S.A. Search and rescue optimization algorithm: A new optimization method for solving constrained engineering optimization problems. Expert Syst. Appl. 2020, 161, 113698. [Google Scholar] [CrossRef]
  26. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  27. Cheng, R.; Jin, Y. A competitive swarm optimizer for large scale optimization. IEEE Trans. Cybern. 2014, 45, 191–204. [Google Scholar] [CrossRef]
  28. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  29. Wang, Y.; Ran, S.; Wang, G.G. Role-oriented binary grey wolf optimizer using foraging-following and Lévy flight for feature selection. Appl. Math. Model. 2024, 126, 310–326. [Google Scholar] [CrossRef]
  30. Mostafa, R.R.; Khedr, A.M.; Al Aghbari, Z.; Afyouni, I.; Kamel, I.; Ahmed, N. An adaptive hybrid mutated differential evolution feature selection method for low and high-dimensional medical datasets. Knowl.-Based Syst. 2024, 283, 111218. [Google Scholar] [CrossRef]
  31. Gao, J.; Wang, Z.; Jin, T.; Cheng, J.; Lei, Z.; Gao, S. Information gain ratio-based subfeature grouping empowers particle swarm optimization for feature selection. Knowl.-Based Syst. 2024, 286, 111380. [Google Scholar] [CrossRef]
  32. Braik, M.; Hammouri, A.; Alzoubi, H.; Sheta, A. Feature selection based nature inspired capuchin search algorithm for solving classification problems. Expert Syst. Appl. 2024, 235, 121128. [Google Scholar] [CrossRef]
  33. Askr, H.; Abdel-Salam, M.; Hassanien, A.E. Copula entropy-based golden jackal optimization algorithm for high-dimensional feature selection problems. Expert Syst. Appl. 2024, 238, 121582. [Google Scholar] [CrossRef]
  34. Abdel-Salam, M.; Askr, H.; Hassanien, A.E. Adaptive chaotic dynamic learning-based gazelle optimization algorithm for feature selection problems. Expert Syst. Appl. 2024, 256, 124882. [Google Scholar] [CrossRef]
  35. Singh, L.K.; Khanna, M.; Garg, H.; Singh, R. Emperor penguin optimization algorithm-and bacterial foraging optimization algorithm-based novel feature selection approach for glaucoma classification from fundus images. Soft Comput. 2024, 28, 2431–2467. [Google Scholar] [CrossRef]
  36. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra optimization algorithm: A new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  37. Bui, N.D.H.; Duong, T.L. An Improved Zebra Optimization Algorithm for Solving Transmission Expansion Planning Problem with Penetration of Renewable Energy Sources. Int. J. Intell. Eng. Syst. 2024, 17, 202–211. [Google Scholar]
  38. Qi, Z.; Peng, S.; Wu, P.; Tseng, M.L. Renewable Energy Distributed Energy System Optimal Configuration and Performance Analysis: Improved Zebra Optimization Algorithm. Sustainability 2024, 16, 5016. [Google Scholar] [CrossRef]
  39. Amin, R.; El-Taweel, G.; Ali, A.F.; Tahoun, M. Hybrid Chaotic Zebra Optimization Algorithm and Long Short-Term Memory for Cyber Threats Detection. IEEE Access 2024, 12, 93235–93260. [Google Scholar] [CrossRef]
  40. Cui, Y.; Hu, W.; Rahmani, A. Fractional-order artificial bee colony algorithm with application in robot path planning. Eur. J. Oper. Res. 2023, 306, 47–64. [Google Scholar] [CrossRef]
  41. Abdel-Basset, M.; Mohamed, R.; Azeem, S.A.A.; Jameel, M.; Abouhawwash, M. Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl.-Based Syst. 2023, 268, 110454. [Google Scholar] [CrossRef]
  42. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Spider wasp optimizer: A novel meta-heuristic optimization algorithm. Artif. Intell. Rev. 2023, 56, 11675–11738. [Google Scholar] [CrossRef]
  43. Kennedy, J.; Eberhart, R. Particle swarm optimization. Proc. IEEE Int. Conf. Neural Netw. 1995, 4, 1942–1948. [Google Scholar]
  44. Rocca, P.; Oliveri, G.; Massa, A. Differential evolution as applied to electromagnetics. IEEE Antennas Propag. Mag. 2011, 53, 38–49. [Google Scholar] [CrossRef]
  45. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  46. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  47. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  48. Qi, X.; Zhu, Y.; Zhang, H. A new meta-heuristic butterfly-inspired algorithm. J. Comput. Sci. 2017, 23, 226–239. [Google Scholar] [CrossRef]
  49. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  50. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
Figure 1. (a) The flowchart of the ZOA algorithm. (b) The flowchart of the FTDZOA algorithm.
Figure 2. Triple mean point simulation diagram.
Figure 3. The differential strategy simulation diagram.
Figure 4. Flowchart for calculating the fitness value.
Figure 5. Population diversity on the FS datasets.
Figure 6. Exploration–exploitation ratio on the FS datasets.
Figure 7. Strategy effectiveness evaluation bar chart.
Figure 8. Box plots on the low-dimensional FS problems.
Figure 9. Box plots on the medium-dimensional FS problems.
Figure 10. Box plots on the high-dimensional FS problems.
Figure 11. Ranking statistics chart based on fitness values.
Figure 12. Convergence curves on the low-dimensional FS problems.
Figure 13. Convergence curves on the medium-dimensional FS problems.
Figure 14. Convergence curves on the high-dimensional FS problems.
Figure 15. Comprehensive performance stacking chart.
Table 1. Feature selection algorithm summarization table.

| Algorithms | Accuracy | Time | Error | Feature |
|---|---|---|---|---|
| BFLGWO | Yes | No | Yes | Yes |
| A-HMDE | Yes | No | Yes | No |
| ISPSO | No | No | Yes | Yes |
| LCBCSA | No | Yes | No | Yes |
| BEGJO | Yes | No | Yes | No |
| ACD-GOA | No | Yes | No | No |
| EPO-BFO | No | Yes | No | Yes |
Table 2. The information on the FS datasets.

| Category | Name | Features | Classes | Instances |
|---|---|---|---|---|
| Low | Aggregation | 2 | 7 | 788 |
| Low | Banana | 2 | 2 | 5300 |
| Low | Iris | 4 | 3 | 150 |
| Low | Bupa | 6 | 2 | 345 |
| Low | Glass | 9 | 7 | 214 |
| Low | Breastcancer | 9 | 2 | 699 |
| Low | Lipid | 10 | 2 | 583 |
| Low | HeartEW | 13 | 2 | 270 |
| Medium | Zoo | 16 | 7 | 101 |
| Medium | Vote | 16 | 2 | 435 |
| Medium | Congress | 16 | 2 | 435 |
| Medium | Lymphography | 18 | 4 | 148 |
| Medium | Vehicle | 18 | 4 | 846 |
| Medium | WDBC | 30 | 2 | 569 |
| Medium | BreastEW | 30 | 2 | 569 |
| Medium | SonarEW | 60 | 2 | 208 |
| High | Libras | 90 | 15 | 360 |
| High | Hillvalley | 100 | 2 | 606 |
| High | Musk | 166 | 2 | 476 |
| High | Clean | 167 | 2 | 476 |
| High | Semeion | 256 | 10 | 1593 |
| High | Madelon | 500 | 2 | 2600 |
| High | Isolet | 617 | 26 | 1559 |
Table 3. Comparison algorithm parameter information.

| Algorithms | Proposed Time | Parameter Settings |
|---|---|---|
| Particle Swarm Optimization (PSO) [43] | 1995 | w = 1, wp = 0.99, c1 = 1.5, c2 = 2.0 |
| Differential Evolution (DE) [44] | 1997 | F = 0.5, CR = 0.9 |
| Grey Wolf Optimizer (GWO) [45] | 2014 | α = 2 − 2·(FEs/MaxFEs) |
| Multi-Verse Optimizer (MVO) [46] | 2016 | WEPMax = 1, WEPMin = 0.2 |
| Whale Optimization Algorithm (WOA) [47] | 2016 | b = 1, a1 = 2 − 2·(FEs/MaxFEs), a2 = −1 − (FEs/MaxFEs) |
| Artificial Butterfly Optimization (ABO) [48] | 2017 | ratioe = 0.2, stepe = 0.05 |
| Butterfly Optimization Algorithm (BOA) [49] | 2019 | p = 0.8, α = 0.1, c = 0.01 |
| Equilibrium Optimizer (EO) [50] | 2020 | V = 1, a1 = 2, a2 = 1, GP = 0.5 |
| ZOA | 2022 | No parameters |
| FTDZOA | NA | No parameters |
Table 4. Fitness values on the 23 FS datasets. Each cell reports the fitness value with its rank in parentheses; the Mean Rank rows report the average rank with the final rank in parentheses.

| Category | Datasets | Metric | ABO | BOA | DE | EO | GWO | MVO | PSO | WOA | ZOA | FTDZOA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Low | Aggregation | Best | 0.100 (1) | 0.106 (10) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) |
| | | Mean | 0.100 (1) | 0.115 (10) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) |
| | | Worst | 0.100 (1) | 0.377 (10) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) | 0.100 (1) |
| | Banana | Best | 0.198 (7) | 0.193 (4) | 0.191 (2) | 0.193 (5) | 0.192 (3) | 0.203 (9) | 0.193 (5) | 0.210 (10) | 0.198 (8) | 0.187 (1) |
| | | Mean | 0.198 (6) | 0.198 (7) | 0.191 (2) | 0.193 (4) | 0.192 (3) | 0.203 (8) | 0.193 (4) | 0.210 (10) | 0.208 (9) | 0.187 (1) |
| | | Worst | 0.198 (6) | 0.365 (10) | 0.191 (2) | 0.193 (4) | 0.192 (3) | 0.203 (7) | 0.193 (4) | 0.210 (8) | 0.335 (9) | 0.187 (1) |
| | Iris | Best | 0.025 (1) | 0.025 (1) | 0.080 (9) | 0.025 (1) | 0.025 (1) | 0.050 (7) | 0.055 (8) | 0.080 (9) | 0.025 (1) | 0.025 (1) |
| | | Mean | 0.025 (1) | 0.025 (1) | 0.080 (9) | 0.025 (1) | 0.026 (6) | 0.051 (7) | 0.059 (8) | 0.082 (10) | 0.025 (1) | 0.025 (1) |
| | | Worst | 0.025 (1) | 0.025 (1) | 0.080 (7) | 0.025 (1) | 0.050 (6) | 0.080 (7) | 0.080 (7) | 0.110 (10) | 0.025 (1) | 0.025 (1) |
| | Bupa | Best | 0.307 (3) | 0.367 (10) | 0.307 (3) | 0.311 (5) | 0.298 (2) | 0.314 (9) | 0.311 (5) | 0.311 (5) | 0.311 (5) | 0.251 (1) |
| | | Mean | 0.310 (3) | 0.382 (10) | 0.334 (9) | 0.322 (6) | 0.305 (2) | 0.323 (7) | 0.324 (8) | 0.318 (4) | 0.318 (5) | 0.251 (1) |
| | | Worst | 0.350 (5) | 0.419 (10) | 0.348 (4) | 0.341 (2) | 0.359 (7) | 0.369 (8) | 0.389 (9) | 0.350 (5) | 0.346 (3) | 0.251 (1) |
| | Glass | Best | 0.344 (10) | 0.248 (4) | 0.279 (6) | 0.279 (6) | 0.312 (9) | 0.226 (3) | 0.269 (5) | 0.217 (2) | 0.280 (8) | 0.183 (1) |
| | | Mean | 0.352 (10) | 0.293 (6) | 0.299 (8) | 0.284 (5) | 0.314 (9) | 0.243 (3) | 0.279 (4) | 0.237 (2) | 0.297 (7) | 0.196 (1) |
| | | Worst | 0.376 (10) | 0.322 (6) | 0.355 (9) | 0.290 (3) | 0.323 (7) | 0.302 (5) | 0.302 (4) | 0.271 (2) | 0.344 (8) | 0.248 (1) |
| | Breastcancer | Best | 0.059 (8) | 0.042 (2) | 0.059 (8) | 0.053 (5) | 0.040 (1) | 0.053 (5) | 0.053 (5) | 0.059 (8) | 0.042 (2) | 0.042 (2) |
| | | Mean | 0.063 (9) | 0.052 (4) | 0.061 (8) | 0.054 (5) | 0.043 (2) | 0.057 (6) | 0.058 (7) | 0.066 (10) | 0.048 (3) | 0.043 (1) |
| | | Worst | 0.072 (7) | 0.066 (2) | 0.077 (9) | 0.072 (7) | 0.066 (2) | 0.070 (6) | 0.066 (2) | 0.096 (10) | 0.066 (2) | 0.053 (1) |
| | Lipid | Best | 0.258 (8) | 0.227 (3) | 0.219 (2) | 0.263 (10) | 0.239 (7) | 0.232 (5) | 0.232 (5) | 0.227 (3) | 0.258 (8) | 0.216 (1) |
| | | Mean | 0.262 (9) | 0.243 (3) | 0.257 (7) | 0.264 (10) | 0.247 (5) | 0.238 (2) | 0.245 (4) | 0.253 (6) | 0.260 (8) | 0.233 (1) |
| | | Worst | 0.266 (1) | 0.266 (1) | 0.304 (10) | 0.266 (1) | 0.274 (8) | 0.266 (1) | 0.266 (1) | 0.297 (9) | 0.266 (1) | 0.266 (1) |
| | HeartEW | Best | 0.106 (4) | 0.097 (2) | 0.105 (3) | 0.164 (9) | 0.147 (6) | 0.147 (6) | 0.155 (8) | 0.164 (9) | 0.137 (5) | 0.081 (1) |
| | | Mean | 0.136 (3) | 0.156 (4) | 0.130 (2) | 0.182 (9) | 0.164 (5) | 0.168 (6) | 0.169 (7) | 0.198 (10) | 0.171 (8) | 0.094 (1) |
| | | Worst | 0.206 (3) | 0.208 (5) | 0.186 (2) | 0.206 (3) | 0.238 (8) | 0.223 (7) | 0.238 (8) | 0.246 (10) | 0.215 (6) | 0.165 (1) |
| Medium | Zoo | Best | 0.038 (5) | 0.038 (5) | 0.044 (7) | 0.044 (7) | 0.044 (7) | 0.031 (2) | 0.044 (7) | 0.031 (2) | 0.031 (2) | 0.025 (1) |
| | | Mean | 0.103 (10) | 0.086 (7) | 0.056 (4) | 0.066 (5) | 0.100 (9) | 0.039 (2) | 0.081 (6) | 0.096 (8) | 0.045 (3) | 0.038 (1) |
| | | Worst | 0.154 (9) | 0.121 (6) | 0.089 (3) | 0.115 (5) | 0.154 (9) | 0.050 (1) | 0.140 (8) | 0.134 (7) | 0.089 (3) | 0.076 (2) |
| | Vote | Best | 0.035 (4) | 0.033 (3) | 0.039 (6) | 0.035 (4) | 0.046 (10) | 0.039 (6) | 0.039 (6) | 0.039 (6) | 0.006 (1) | 0.006 (1) |
| | | Mean | 0.037 (3) | 0.046 (5) | 0.054 (9) | 0.042 (4) | 0.051 (8) | 0.055 (10) | 0.051 (7) | 0.049 (6) | 0.006 (1) | 0.006 (1) |
| | | Worst | 0.037 (3) | 0.066 (6) | 0.077 (8) | 0.064 (5) | 0.058 (4) | 0.079 (9) | 0.073 (7) | 0.079 (9) | 0.006 (1) | 0.006 (1) |
| | Congress | Best | 0.029 (3) | 0.033 (4) | 0.027 (2) | 0.058 (10) | 0.037 (6) | 0.038 (7) | 0.035 (5) | 0.056 (9) | 0.048 (8) | 0.006 (1) |
| | | Mean | 0.036 (3) | 0.048 (6) | 0.036 (2) | 0.066 (9) | 0.037 (4) | 0.049 (7) | 0.051 (8) | 0.069 (10) | 0.048 (5) | 0.006 (1) |
| | | Worst | 0.037 (2) | 0.085 (9) | 0.054 (5) | 0.068 (6) | 0.037 (2) | 0.075 (8) | 0.073 (7) | 0.093 (10) | 0.048 (4) | 0.006 (1) |
| | Lymphography | Best | 0.095 (7) | 0.090 (4) | 0.090 (4) | 0.115 (10) | 0.095 (7) | 0.084 (2) | 0.090 (4) | 0.112 (9) | 0.084 (2) | 0.064 (1) |
| | | Mean | 0.148 (8) | 0.154 (10) | 0.129 (4) | 0.133 (5) | 0.154 (9) | 0.118 (3) | 0.112 (2) | 0.140 (6) | 0.140 (7) | 0.078 (1) |
| | | Worst | 0.192 (7) | 0.183 (6) | 0.200 (8) | 0.177 (4) | 0.214 (10) | 0.177 (4) | 0.138 (2) | 0.205 (9) | 0.172 (3) | 0.101 (1) |
| | Vehicle | Best | 0.257 (7) | 0.294 (10) | 0.268 (9) | 0.252 (6) | 0.235 (3) | 0.237 (4) | 0.247 (5) | 0.257 (7) | 0.225 (2) | 0.220 (1) |
| | | Mean | 0.291 (8) | 0.321 (10) | 0.287 (7) | 0.275 (5) | 0.261 (2) | 0.269 (3) | 0.278 (6) | 0.306 (9) | 0.270 (4) | 0.231 (1) |
| | | Worst | 0.326 (8) | 0.370 (10) | 0.316 (6) | 0.320 (7) | 0.279 (2) | 0.305 (4) | 0.310 (5) | 0.343 (9) | 0.299 (3) | 0.257 (1) |
| | WDBC | Best | 0.034 (3) | 0.041 (5) | 0.041 (5) | 0.045 (9) | 0.026 (1) | 0.056 (10) | 0.041 (5) | 0.042 (8) | 0.039 (4) | 0.026 (1) |
| | | Mean | 0.054 (6) | 0.064 (9) | 0.050 (4) | 0.049 (3) | 0.052 (5) | 0.072 (10) | 0.056 (7) | 0.063 (8) | 0.046 (2) | 0.033 (1) |
| | | Worst | 0.076 (6) | 0.086 (8) | 0.064 (3) | 0.070 (5) | 0.066 (4) | 0.090 (9) | 0.090 (9) | 0.085 (7) | 0.062 (2) | 0.039 (1) |
| | BreastEW | Best | 0.035 (6) | 0.059 (9) | 0.040 (8) | 0.031 (4) | 0.036 (7) | 0.027 (2) | 0.027 (2) | 0.063 (10) | 0.031 (4) | 0.013 (1) |
| | | Mean | 0.045 (4) | 0.069 (9) | 0.056 (8) | 0.042 (3) | 0.048 (6) | 0.047 (5) | 0.041 (2) | 0.081 (10) | 0.051 (7) | 0.030 (1) |
| | | Worst | 0.059 (3) | 0.080 (9) | 0.068 (8) | 0.054 (2) | 0.065 (5) | 0.066 (6) | 0.059 (3) | 0.098 (10) | 0.068 (7) | 0.041 (1) |
| | SonarEW | Best | 0.108 (10) | 0.069 (9) | 0.069 (8) | 0.013 (1) | 0.037 (5) | 0.054 (6) | 0.030 (3) | 0.066 (7) | 0.032 (4) | 0.013 (1) |
| | | Mean | 0.139 (10) | 0.117 (8) | 0.090 (6) | 0.033 (2) | 0.075 (5) | 0.093 (7) | 0.048 (3) | 0.119 (9) | 0.071 (4) | 0.029 (1) |
| | | Worst | 0.174 (9) | 0.170 (8) | 0.102 (5) | 0.057 (2) | 0.106 (6) | 0.133 (7) | 0.084 (3) | 0.182 (10) | 0.098 (4) | 0.049 (1) |
| High | Libras | Best | 0.116 (3) | 0.176 (10) | 0.142 (6) | 0.130 (4) | 0.131 (5) | 0.148 (7) | 0.172 (9) | 0.162 (8) | 0.100 (2) | 0.069 (1) |
| | | Mean | 0.175 (6) | 0.201 (9) | 0.154 (4) | 0.165 (5) | 0.150 (3) | 0.188 (7) | 0.199 (8) | 0.214 (10) | 0.141 (2) | 0.106 (1) |
| | | Worst | 0.205 (6) | 0.231 (9) | 0.164 (2) | 0.185 (5) | 0.168 (3) | 0.217 (7) | 0.225 (8) | 0.284 (10) | 0.176 (4) | 0.133 (1) |
| | Hillvalley | Best | 0.350 (9) | 0.334 (7) | 0.348 (8) | 0.286 (4) | 0.265 (2) | 0.375 (10) | 0.315 (6) | 0.300 (5) | 0.272 (3) | 0.256 (1) |
| | | Mean | 0.371 (8) | 0.361 (7) | 0.371 (9) | 0.299 (3) | 0.300 (4) | 0.408 (10) | 0.346 (6) | 0.333 (5) | 0.296 (2) | 0.281 (1) |
| | | Worst | 0.394 (9) | 0.381 (7) | 0.385 (8) | 0.321 (3) | 0.319 (2) | 0.437 (10) | 0.369 (6) | 0.365 (5) | 0.323 (4) | 0.306 (1) |
| | Musk | Best | 0.097 (10) | 0.091 (9) | 0.072 (8) | 0.025 (1) | 0.037 (3) | 0.056 (6) | 0.070 (7) | 0.053 (5) | 0.051 (4) | 0.025 (1) |
| | | Mean | 0.131 (10) | 0.114 (9) | 0.093 (7) | 0.058 (2) | 0.063 (3) | 0.074 (4) | 0.096 (8) | 0.081 (5) | 0.092 (6) | 0.054 (1) |
| | | Worst | 0.152 (10) | 0.138 (9) | 0.110 (3) | 0.094 (2) | 0.114 (5) | 0.115 (6) | 0.122 (8) | 0.112 (4) | 0.120 (7) | 0.075 (1) |
| | Clean | Best | 0.081 (8) | 0.094 (10) | 0.090 (9) | 0.020 (2) | 0.030 (3) | 0.060 (6) | 0.067 (7) | 0.038 (5) | 0.031 (4) | 0.015 (1) |
| | | Mean | 0.101 (7) | 0.111 (9) | 0.129 (10) | 0.042 (2) | 0.062 (4) | 0.077 (5) | 0.102 (8) | 0.085 (6) | 0.058 (3) | 0.039 (1) |
| | | Worst | 0.129 (7) | 0.125 (6) | 0.161 (9) | 0.062 (1) | 0.088 (3) | 0.117 (5) | 0.162 (10) | 0.128 (7) | 0.098 (4) | 0.074 (2) |
| | Semeion | Best | 0.114 (10) | 0.112 (9) | 0.096 (6) | 0.090 (5) | 0.066 (2) | 0.088 (3) | 0.102 (8) | 0.097 (7) | 0.088 (4) | 0.047 (1) |
| | | Mean | 0.126 (9) | 0.126 (10) | 0.112 (6) | 0.107 (4) | 0.085 (2) | 0.111 (5) | 0.116 (7) | 0.118 (8) | 0.103 (3) | 0.064 (1) |
| | | Worst | 0.134 (8) | 0.135 (10) | 0.123 (5) | 0.118 (4) | 0.112 (2) | 0.130 (6) | 0.132 (7) | 0.134 (8) | 0.112 (3) | 0.071 (1) |
| | Madelon | Best | 0.176 (5) | 0.230 (10) | 0.217 (9) | 0.111 (2) | 0.114 (4) | 0.183 (6) | 0.196 (8) | 0.190 (7) | 0.113 (3) | 0.100 (1) |
| | | Mean | 0.260 (9) | 0.261 (10) | 0.237 (8) | 0.162 (4) | 0.140 (2) | 0.214 (5) | 0.225 (6) | 0.236 (7) | 0.156 (3) | 0.133 (1) |
| | | Worst | 0.313 (10) | 0.304 (9) | 0.269 (7) | 0.227 (3) | 0.193 (2) | 0.262 (6) | 0.247 (4) | 0.271 (8) | 0.248 (5) | 0.181 (1) |
| | Isolet | Best | 0.175 (9) | 0.159 (8) | 0.180 (10) | 0.114 (4) | 0.099 (2) | 0.152 (7) | 0.123 (5) | 0.141 (6) | 0.106 (3) | 0.077 (1) |
| | | Mean | 0.193 (9) | 0.179 (8) | 0.197 (10) | 0.139 (4) | 0.119 (2) | 0.172 (7) | 0.147 (5) | 0.169 (6) | 0.128 (3) | 0.097 (1) |
| | | Worst | 0.217 (10) | 0.204 (8) | 0.206 (9) | 0.163 (4) | 0.143 (2) | 0.198 (7) | 0.172 (5) | 0.193 (6) | 0.157 (3) | 0.115 (1) |
| | Mean Rank | Best | 6.130 (8) | 6.435 (9) | 6.043 (7) | 5.000 (4) | 4.217 (3) | 5.609 (5) | 5.609 (5) | 6.435 (9) | 3.826 (2) | 1.043 (1) |
| | | Mean | 6.609 (8) | 7.435 (10) | 6.261 (7) | 4.391 (3) | 4.391 (3) | 5.652 (5) | 5.739 (6) | 7.217 (9) | 4.217 (2) | 1.000 (1) |
| | | Worst | 6.174 (8) | 7.174 (9) | 5.783 (6) | 3.478 (2) | 4.478 (4) | 5.957 (7) | 5.565 (5) | 7.565 (10) | 3.826 (3) | 1.087 (1) |
Table 5. The rank of the Friedman mean test.

| Category | Datasets | ABO | BOA | DE | EO | GWO | MVO | PSO | WOA | ZOA | FTDZOA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Low | Aggregation | 5.95 | 5.98 | 5.88 | 6.08 | 5.98 | 5.92 | 6.13 | 6.10 | 5.97 | 1.00 |
| | Banana | 6.97 | 4.20 | 2.00 | 5.47 | 3.00 | 8.90 | 5.47 | 9.90 | 8.10 | 1.00 |
| | Iris | 3.60 | 3.55 | 9.28 | 3.65 | 3.52 | 6.97 | 8.03 | 9.47 | 3.52 | 3.42 |
| | Bupa | 3.40 | 9.97 | 6.90 | 5.90 | 2.93 | 6.75 | 6.03 | 5.77 | 6.35 | 1.00 |
| | Glass | 9.97 | 6.62 | 6.37 | 5.58 | 8.42 | 2.77 | 4.98 | 2.43 | 6.77 | 1.10 |
| | Breastcancer | 8.28 | 4.70 | 7.78 | 4.85 | 1.98 | 6.03 | 6.48 | 8.70 | 3.90 | 2.28 |
| | Lipid | 7.97 | 4.13 | 5.87 | 8.32 | 5.02 | 3.48 | 4.23 | 6.00 | 7.20 | 2.78 |
| | HeartEW | 3.63 | 5.22 | 2.90 | 8.10 | 5.80 | 5.95 | 6.20 | 9.37 | 6.50 | 1.33 |
| Medium | Zoo | 8.05 | 7.00 | 4.58 | 5.35 | 7.72 | 2.53 | 6.87 | 7.65 | 3.00 | 2.25 |
| | Vote | 3.72 | 6.13 | 7.93 | 4.93 | 7.70 | 7.82 | 7.23 | 6.53 | 1.52 | 1.48 |
| | Congress | 3.37 | 5.88 | 3.38 | 8.97 | 3.70 | 6.67 | 6.37 | 9.22 | 6.45 | 1.00 |
| | Lymphography | 7.32 | 7.83 | 5.33 | 5.80 | 7.53 | 4.23 | 3.52 | 6.10 | 6.32 | 1.02 |
| | Vehicle | 7.08 | 9.58 | 6.63 | 5.02 | 2.95 | 4.28 | 5.35 | 8.25 | 4.68 | 1.17 |
| | WDBC | 5.37 | 7.77 | 5.15 | 4.17 | 5.63 | 9.03 | 5.88 | 7.28 | 3.60 | 1.12 |
| | BreastEW | 4.58 | 8.88 | 7.28 | 3.72 | 5.07 | 4.80 | 3.60 | 9.88 | 5.62 | 1.57 |
| | SonarEW | 9.43 | 8.22 | 6.17 | 2.07 | 5.03 | 6.60 | 2.98 | 8.30 | 4.62 | 1.58 |
| High | Libras | 6.03 | 8.33 | 3.77 | 5.07 | 3.52 | 7.40 | 8.37 | 8.93 | 2.55 | 1.03 |
| | Hillvalley | 7.97 | 7.30 | 8.28 | 2.93 | 3.12 | 9.97 | 6.08 | 5.33 | 2.57 | 1.45 |
| | Musk | 9.70 | 8.65 | 6.63 | 2.53 | 2.90 | 4.17 | 6.87 | 5.02 | 6.50 | 2.03 |
| | Clean | 7.35 | 8.67 | 9.60 | 2.02 | 3.75 | 5.18 | 7.47 | 5.93 | 3.33 | 1.70 |
| | Semeion | 9.07 | 9.13 | 5.77 | 4.70 | 2.10 | 5.55 | 6.62 | 7.47 | 3.57 | 1.03 |
| | Madelon | 8.77 | 9.10 | 7.50 | 3.20 | 2.37 | 5.47 | 6.47 | 7.27 | 2.93 | 1.93 |
| | Isolet | 9.10 | 7.73 | 9.37 | 3.93 | 2.57 | 6.83 | 4.53 | 6.63 | 3.27 | 1.03 |
| | Mean Rank | 6.81 | 7.16 | 6.28 | 4.88 | 4.45 | 5.97 | 5.90 | 7.28 | 4.73 | 1.54 |
| | Final Rank | 8 | 9 | 7 | 4 | 2 | 6 | 5 | 10 | 3 | 1 |
Table 6. Wilcoxon statistical rank sum test results. Each cell reports the p-value followed by the significance mark.

| Category | Datasets | ABO | BOA | DE | EO | GWO | MVO | PSO | WOA | ZOA |
|---|---|---|---|---|---|---|---|---|---|---|
| Low | Aggregation | 4.16 × 10−14/− | 4.16 × 10−14/− | 6.14 × 10−14/− | 6.14 × 10−14/− | 4.16 × 10−14/− | 4.16 × 10−14/− | 6.14 × 10−14/− | 6.14 × 10−14/− | 4.16 × 10−14/− |
| | Banana | 1.69 × 10−14/− | 2.71 × 10−14/− | 1.69 × 10−14/− | 1.69 × 10−14/− | 1.69 × 10−14/− | 1.69 × 10−14/− | 1.69 × 10−14/− | 1.69 × 10−14/− | 4.16 × 10−14/− |
| | Iris | 3.34 × 10−1/= | 3.34 × 10−1/= | 1.69 × 10−14/− | 3.34 × 10−1/= | 3.34 × 10−1/= | 4.16 × 10−14/− | 1.18 × 10−13/− | 1.19 × 10−13/− | 3.34 × 10−1/= |
| | Bupa | 4.16 × 10−14/− | 9.27 × 10−13/− | 7.56 × 10−13/− | 3.80 × 10−13/− | 8.70 × 10−14/− | 4.98 × 10−13/− | 3.00 × 10−13/− | 6.49 × 10−13/− | 6.89 × 10−13/− |
| | Glass | 7.21 × 10−12/− | 1.01 × 10−11/− | 7.71 × 10−12/− | 6.81 × 10−12/− | 6.42 × 10−12/− | 2.57 × 10−9/− | 7.27 × 10−12/− | 9.75 × 10−9/− | 8.14 × 10−12/− |
| | Breastcancer | 4.92 × 10−12/− | 1.50 × 10−7/− | 3.35 × 10−12/− | 1.76 × 10−11/− | 1.16 × 10−4/− | 1.42 × 10−11/− | 1.68 × 10−11/− | 6.31 × 10−12/− | 7.35 × 10−2/= |
| | Lipid | 1.13 × 10−10/− | 8.62 × 10−3/− | 1.04 × 10−3/− | 1.23 × 10−10/− | 8.49 × 10−4/− | 5.65 × 10−2/= | 3.49 × 10−3/− | 1.39 × 10−5/− | 1.27 × 10−10/− |
| | HeartEW | 4.42 × 10−8/− | 4.14 × 10−10/− | 2.99 × 10−7/− | 2.60 × 10−11/− | 8.80 × 10−11/− | 7.66 × 10−11/− | 6.74 × 10−11/− | 1.14 × 10−11/− | 5.22 × 10−11/− |
| Medium | Zoo | 1.78 × 10−9/− | 1.35 × 10−9/− | 1.53 × 10−7/− | 1.13 × 10−8/− | 1.86 × 10−10/− | 1.08 × 10−1/= | 1.11 × 10−9/− | 1.23 × 10−9/− | 2.26 × 10−2/− |
| | Vote | 1.97 × 10−13/− | 1.13 × 10−12/− | 1.15 × 10−12/− | 6.52 × 10−13/− | 7.82 × 10−13/− | 1.18 × 10−12/− | 1.08 × 10−12/− | 1.59 × 10−13/− | 3.34 × 10−1/= |
| | Congress | 1.58 × 10−13/− | 1.06 × 10−12/− | 1.03 × 10−12/− | 3.77 × 10−13/− | 1.69 × 10−14/− | 1.16 × 10−12/− | 1.17 × 10−12/− | 8.79 × 10−13/− | 1.69 × 10−14/− |
| | Lymphography | 4.99 × 10−11/− | 5.00 × 10−11/− | 2.41 × 10−10/− | 2.62 × 10−11/− | 3.12 × 10−11/− | 3.42 × 10−9/− | 1.41 × 10−10/− | 2.34 × 10−11/− | 1.58 × 10−10/− |
| | Vehicle | 2.79 × 10−11/− | 2.65 × 10−11/− | 2.62 × 10−11/− | 3.78 × 10−11/− | 1.30 × 10−9/− | 5.20 × 10−10/− | 8.80 × 10−11/− | 2.81 × 10−11/− | 1.91 × 10−9/− |
| | WDBC | 5.00 × 10−10/− | 1.92 × 10−11/− | 1.69 × 10−11/− | 1.03 × 10−11/− | 2.72 × 10−10/− | 1.89 × 10−11/− | 1.87 × 10−11/− | 1.90 × 10−11/− | 2.74 × 10−9/− |
| | BreastEW | 1.06 × 10−9/− | 2.90 × 10−11/− | 3.89 × 10−11/− | 1.78 × 10−8/− | 8.80 × 10−11/− | 2.98 × 10−8/− | 9.41 × 10−6/− | 2.92 × 10−11/− | 4.39 × 10−10/− |
| | SonarEW | 2.91 × 10−11/− | 2.92 × 10−11/− | 2.77 × 10−11/− | 9.98 × 10−2/= | 2.06 × 10−10/− | 2.91 × 10−11/− | 2.90 × 10−5/− | 2.93 × 10−11/− | 2.38 × 10−10/− |
| High | Libras | 6.96 × 10−11/− | 2.98 × 10−11/− | 2.92 × 10−11/− | 4.02 × 10−11/− | 3.29 × 10−11/− | 2.97 × 10−11/− | 2.97 × 10−11/− | 2.98 × 10−11/− | 2.20 × 10−9/− |
| | Hillvalley | 2.90 × 10−11/− | 3.01 × 10−11/− | 3.00 × 10−11/− | 6.74 × 10−8/− | 1.35 × 10−7/− | 3.00 × 10−11/− | 3.01 × 10−11/− | 3.29 × 10−11/− | 6.75 × 10−5/− |
| | Musk | 3.01 × 10−11/− | 3.01 × 10−11/− | 4.06 × 10−11/− | 2.74 × 10−1/= | 7.24 × 10−2/= | 9.18 × 10−6/− | 7.37 × 10−11/− | 1.05 × 10−8/− | 6.70 × 10−10/− |
| | Clean | 3.00 × 10−11/− | 3.01 × 10−11/− | 3.01 × 10−11/− | 2.67 × 10−1/= | 3.37 × 10−7/− | 1.32 × 10−10/− | 4.06 × 10−11/− | 6.11 × 10−10/− | 3.32 × 10−6/− |
| | Semeion | 3.01 × 10−11/− | 3.01 × 10−11/− | 3.02 × 10−11/− | 3.01 × 10−11/− | 9.42 × 10−11/− | 3.02 × 10−11/− | 3.02 × 10−11/− | 3.02 × 10−11/− | 3.02 × 10−11/− |
| | Madelon | 3.34 × 10−11/− | 3.02 × 10−11/− | 3.02 × 10−11/− | 4.71 × 10−4/− | 1.76 × 10−1/= | 3.02 × 10−11/− | 3.02 × 10−11/− | 3.02 × 10−11/− | 3.03 × 10−3/− |
| | Isolet | 3.02 × 10−11/− | 3.02 × 10−11/− | 3.02 × 10−11/− | 3.34 × 10−11/− | 1.56 × 10−8/− | 3.02 × 10−11/− | 3.01 × 10−11/− | 3.02 × 10−11/− | 1.46 × 10−10/− |
| | +/−/= | 0/22/1 | 0/22/1 | 0/23/0 | 0/19/4 | 0/20/3 | 0/21/2 | 0/23/0 | 0/23/0 | 0/20/3 |
Table 7. Classification accuracy on the FS problems. Each cell reports the classification accuracy with its rank in parentheses.

| Datasets | ABO | BOA | DE | EO | GWO | MVO | PSO | WOA | ZOA | FTDZOA |
|---|---|---|---|---|---|---|---|---|---|---|
| Aggregation | 100.00 (1) | 96.38 (10) | 100.00 (1) | 100.00 (1) | 100.00 (1) | 100.00 (1) | 100.00 (1) | 100.00 (1) | 100.00 (1) | 100.00 (1) |
| Banana | 89.15 (6) | 88.89 (7) | 89.91 (2) | 89.62 (4) | 89.81 (3) | 88.58 (8) | 89.62 (4) | 87.83 (9) | 87.67 (10) | 90.38 (1) |
| Iris | 100.00 (1) | 100.00 (1) | 96.67 (8) | 100.00 (1) | 100.00 (1) | 99.78 (7) | 96.67 (8) | 96.11 (10) | 100.00 (1) | 100.00 (1) |
| Bupa | 69.37 (8) | 63.14 (10) | 70.43 (5) | 70.48 (4) | 71.59 (2) | 70.58 (3) | 69.57 (6) | 69.13 (9) | 69.37 (7) | 73.91 (1) |
| Glass | 63.10 (10) | 71.90 (6) | 70.16 (8) | 72.46 (5) | 70.00 (9) | 76.90 (3) | 73.57 (4) | 80.00 (2) | 71.67 (7) | 82.06 (1) |
| Breastcancer | 96.35 (9) | 96.79 (8) | 96.86 (7) | 97.29 (3) | 98.47 (1) | 97.22 (5) | 97.10 (6) | 95.85 (10) | 97.29 (3) | 97.60 (2) |
| Lipid | 71.95 (10) | 74.68 (5) | 74.14 (6) | 73.36 (7) | 75.26 (4) | 76.64 (2) | 75.95 (3) | 73.13 (8) | 72.47 (9) | 76.67 (1) |
| HeartEW | 88.58 (3) | 85.86 (4) | 90.06 (2) | 82.59 (9) | 85.19 (7) | 85.31 (6) | 85.62 (5) | 80.12 (10) | 83.58 (8) | 93.02 (1) |
| Zoo | 92.17 (10) | 95.33 (7) | 98.50 (4) | 96.83 (5) | 92.67 (9) | 100.00 (1) | 96.00 (6) | 94.83 (8) | 99.33 (3) | 99.67 (2) |
| Vote | 97.09 (5) | 97.70 (4) | 96.93 (6) | 98.08 (3) | 96.17 (9) | 96.82 (7) | 96.67 (8) | 95.71 (10) | 100.00 (1) | 100.00 (1) |
| Congress | 97.01 (6) | 97.43 (3) | 98.81 (2) | 93.95 (10) | 96.55 (7) | 97.43 (4) | 97.24 (5) | 94.33 (9) | 95.40 (8) | 100.00 (1) |
| Lymphography | 87.36 (6) | 86.32 (9) | 90.11 (4) | 88.74 (5) | 85.52 (10) | 90.92 (3) | 92.07 (2) | 87.24 (7) | 86.78 (8) | 95.40 (1) |
| Vehicle | 71.44 (8) | 67.97 (10) | 72.90 (7) | 73.12 (6) | 74.42 (3) | 74.75 (2) | 73.63 (4) | 69.70 (9) | 73.61 (5) | 77.44 (1) |
| WDBC | 95.87 (4) | 95.46 (7) | 97.35 (1) | 95.55 (6) | 95.37 (8) | 94.72 (9) | 96.28 (3) | 94.48 (10) | 95.87 (5) | 97.17 (2) |
| BreastEW | 98.14 (5) | 96.28 (9) | 98.85 (2) | 97.94 (6) | 97.26 (7) | 98.91 (1) | 98.82 (3) | 94.37 (10) | 96.43 (8) | 98.64 (4) |
| SonarEW | 87.48 (10) | 89.67 (9) | 95.85 (4) | 98.37 (2) | 93.58 (7) | 94.23 (5) | 98.37 (2) | 89.92 (8) | 93.98 (6) | 98.70 (1) |
| Libras | 83.33 (7) | 80.00 (9) | 88.89 (2) | 83.70 (6) | 85.60 (4) | 84.03 (5) | 82.18 (8) | 78.43 (10) | 86.16 (3) | 90.79 (1) |
| Hillvalley | 60.17 (9) | 61.35 (8) | 65.70 (6) | 68.68 (2) | 67.47 (4) | 59.45 (10) | 66.23 (5) | 63.58 (7) | 67.99 (3) | 71.46 (1) |
| Musk | 89.37 (10) | 90.84 (9) | 96.74 (1) | 95.96 (4) | 95.89 (5) | 96.60 (3) | 94.21 (7) | 95.30 (6) | 91.58 (8) | 96.70 (2) |
| Clean | 92.00 (9) | 90.88 (10) | 92.84 (8) | 98.07 (1) | 95.96 (4) | 96.39 (3) | 93.37 (7) | 94.88 (6) | 95.93 (5) | 97.93 (2) |
| Semeion | 91.25 (9) | 90.25 (10) | 94.79 (2) | 93.38 (5) | 94.15 (3) | 93.11 (6) | 92.47 (7) | 91.99 (8) | 93.87 (4) | 96.72 (1) |
| Madelon | 75.67 (9) | 74.07 (10) | 81.18 (6) | 84.08 (3) | 86.80 (2) | 81.72 (5) | 80.38 (7) | 77.75 (8) | 83.90 (4) | 87.11 (1) |
| Isolet | 82.08 (10) | 82.68 (9) | 85.36 (7) | 87.32 (5) | 89.57 (2) | 86.29 (6) | 88.95 (3) | 84.48 (8) | 88.40 (4) | 91.21 (1) |
| Mean Rank | 7.17 | 7.57 | 4.39 | 4.48 | 4.87 | 4.57 | 4.96 | 7.96 | 5.26 | 1.35 |
| Final Rank | 8 | 9 | 2 | 3 | 5 | 4 | 6 | 10 | 7 | 1 |
Table 8. Feature subset size on the FS problems. Each cell reports the feature subset size with its rank in parentheses.

| Datasets | ABO | BOA | DE | EO | GWO | MVO | PSO | WOA | ZOA | FTDZOA |
|---|---|---|---|---|---|---|---|---|---|---|
| Aggregation | 2.00 (2) | 1.97 (1) | 2.00 (2) | 2.00 (2) | 2.00 (2) | 2.00 (2) | 2.00 (2) | 2.00 (2) | 2.00 (2) | 2.00 (2) |
| Banana | 2.00 (3) | 1.97 (2) | 2.00 (3) | 2.00 (3) | 2.00 (3) | 2.00 (3) | 2.00 (3) | 2.00 (3) | 1.93 (1) | 2.00 (3) |
| Iris | 1.00 (1) | 1.00 (1) | 2.00 (10) | 1.00 (1) | 1.03 (6) | 1.97 (9) | 1.17 (7) | 1.87 (8) | 1.00 (1) | 1.00 (1) |
| Bupa | 2.07 (2) | 3.03 (7) | 4.07 (10) | 3.37 (8) | 2.97 (5) | 3.47 (9) | 3.00 (6) | 2.43 (3) | 2.57 (4) | 1.00 (1) |
| Glass | 1.80 (1) | 3.60 (6) | 2.70 (2) | 3.27 (5) | 3.93 (9) | 3.20 (4) | 3.67 (7) | 5.10 (10) | 3.77 (8) | 3.13 (3) |
| Breastcancer | 2.70 (6) | 2.07 (2) | 2.97 (10) | 2.70 (6) | 2.60 (4) | 2.87 (8) | 2.90 (9) | 2.60 (4) | 2.10 (3) | 1.90 (1) |
| Lipid | 1.00 (1) | 1.50 (4) | 2.47 (7) | 2.40 (6) | 2.47 (7) | 2.80 (9) | 2.83 (10) | 1.17 (2) | 1.20 (3) | 2.33 (5) |
| HeartEW | 4.33 (7) | 3.80 (4) | 5.23 (10) | 3.30 (3) | 4.00 (5) | 4.63 (8) | 5.20 (9) | 2.47 (1) | 2.97 (2) | 4.00 (5) |
| Zoo | 5.17 (1) | 7.00 (8) | 6.73 (7) | 6.07 (4) | 5.50 (2) | 6.30 (6) | 7.17 (9) | 7.97 (10) | 6.20 (5) | 5.67 (3) |
| Vote | 1.70 (4) | 4.00 (8) | 4.27 (9) | 3.97 (7) | 2.67 (5) | 4.27 (9) | 3.30 (6) | 1.60 (3) | 1.00 (1) | 1.00 (1) |
| Congress | 1.53 (4) | 3.93 (7) | 4.10 (8) | 1.87 (5) | 1.00 (1) | 4.17 (9) | 4.17 (9) | 2.87 (6) | 1.00 (1) | 1.00 (1) |
| Lymphography | 6.23 (6) | 5.63 (4) | 7.23 (9) | 5.70 (5) | 4.30 (2) | 6.50 (7) | 7.33 (10) | 4.50 (3) | 3.83 (1) | 6.63 (8) |
| Vehicle | 6.20 (7) | 5.93 (4) | 7.80 (10) | 5.97 (6) | 5.50 (2) | 7.50 (9) | 7.27 (8) | 5.93 (4) | 5.80 (3) | 5.00 (1) |
| WDBC | 4.93 (6) | 7.00 (8) | 7.87 (10) | 2.67 (3) | 3.07 (4) | 7.23 (9) | 6.90 (7) | 3.87 (5) | 2.60 (2) | 2.17 (1) |
| BreastEW | 8.57 (5) | 10.77 (8) | 13.80 (10) | 7.03 (4) | 7.00 (3) | 11.10 (9) | 9.17 (6) | 9.17 (6) | 5.73 (2) | 5.20 (1) |
| SonarEW | 15.67 (6) | 14.50 (5) | 31.60 (10) | 11.23 (4) | 10.20 (2) | 24.53 (9) | 20.23 (8) | 17.17 (7) | 10.13 (1) | 10.27 (3) |
| Libras | 22.37 (7) | 19.03 (5) | 48.70 (10) | 16.50 (2) | 18.80 (4) | 39.60 (9) | 35.17 (8) | 18.17 (3) | 14.67 (1) | 20.63 (6) |
| Hillvalley | 12.13 (4) | 13.10 (5) | 62.67 (10) | 17.03 (6) | 7.50 (2) | 42.83 (9) | 41.67 (8) | 5.63 (1) | 7.93 (3) | 24.30 (7) |
| Musk | 57.83 (6) | 51.87 (5) | 105.13 (10) | 36.57 (2) | 42.87 (4) | 72.20 (8) | 73.27 (9) | 64.03 (7) | 26.50 (1) | 40.03 (3) |
| Clean | 48.53 (6) | 47.83 (5) | 107.03 (10) | 41.13 (3) | 42.67 (4) | 74.00 (9) | 70.43 (8) | 64.73 (7) | 36.07 (2) | 34.57 (1) |
| Semeion | 119.83 (5) | 98.20 (3) | 167.70 (10) | 121.57 (6) | 83.37 (1) | 125.67 (9) | 122.93 (8) | 118.23 (4) | 122.07 (7) | 87.93 (2) |
| Madelon | 205.67 (7) | 140.60 (5) | 337.17 (10) | 92.03 (3) | 108.23 (4) | 245.93 (9) | 243.73 (8) | 178.10 (6) | 55.10 (1) | 86.20 (2) |
| Isolet | 198.00 (7) | 145.40 (2) | 401.17 (10) | 152.93 (4) | 154.87 (5) | 297.43 (9) | 295.90 (8) | 181.63 (6) | 146.83 (3) | 112.37 (1) |
| Mean Rank | 4.52 | 4.74 | 8.57 | 4.26 | 3.74 | 7.87 | 7.52 | 4.83 | 2.52 | 2.70 |
| Final Rank | 5 | 6 | 10 | 4 | 3 | 9 | 8 | 7 | 1 | 2 |
Table 9. Runtime on the FS problems. Each cell reports the running time with its rank in parentheses.

| Datasets | ABO | BOA | DE | EO | GWO | MVO | PSO | WOA | ZOA | FTDZOA |
|---|---|---|---|---|---|---|---|---|---|---|
| Aggregation | 5.60 (9) | 6.11 (10) | 3.51 (8) | 2.70 (3) | 3.39 (5) | 3.46 (7) | 3.44 (6) | 3.25 (4) | 2.03 (1) | 2.59 (2) |
| Banana | 10.22 (9) | 11.17 (10) | 6.56 (8) | 4.81 (3) | 6.10 (5) | 6.42 (7) | 6.30 (6) | 6.10 (4) | 3.31 (1) | 4.54 (2) |
| Iris | 4.34 (9) | 5.01 (10) | 3.16 (8) | 2.91 (4) | 3.09 (6) | 3.11 (7) | 2.96 (5) | 2.90 (3) | 2.51 (2) | 1.95 (1) |
| Bupa | 5.27 (9) | 5.69 (10) | 3.28 (8) | 3.22 (4) | 3.23 (5) | 3.27 (7) | 3.26 (6) | 2.97 (3) | 2.84 (2) | 2.19 (1) |
| Glass | 5.39 (10) | 5.27 (9) | 3.20 (8) | 3.17 (4) | 3.18 (5) | 3.19 (6) | 3.19 (7) | 3.01 (3) | 2.99 (2) | 2.13 (1) |
| Breastcancer | 5.13 (9) | 5.69 (10) | 3.49 (8) | 3.41 (3) | 3.44 (5) | 3.47 (6) | 3.48 (7) | 3.20 (2) | 3.43 (4) | 2.27 (1) |
| Lipid | 4.62 (9) | 5.79 (10) | 3.42 (7) | 3.30 (4) | 3.31 (5) | 3.37 (6) | 3.43 (8) | 2.73 (2) | 3.25 (3) | 2.05 (1) |
| HeartEW | 5.16 (9) | 5.78 (10) | 3.26 (8) | 3.21 (3) | 3.24 (5) | 3.26 (7) | 3.26 (6) | 2.80 (2) | 3.22 (4) | 2.14 (1) |
| Zoo | 5.88 (10) | 5.77 (9) | 3.13 (5) | 3.13 (4) | 3.14 (6) | 3.17 (8) | 3.15 (7) | 2.93 (2) | 3.10 (3) | 2.15 (1) |
| Vote | 4.37 (9) | 5.89 (10) | 3.34 (7) | 3.22 (3) | 3.28 (4) | 3.34 (8) | 3.33 (6) | 2.74 (2) | 3.31 (5) | 2.11 (1) |
| Congress | 4.36 (9) | 5.07 (10) | 3.32 (6) | 3.25 (3) | 3.27 (4) | 3.34 (7) | 3.34 (8) | 2.85 (2) | 3.30 (5) | 2.06 (1) |
| Lymphography | 5.14 (9) | 5.60 (10) | 3.17 (4) | 3.18 (5) | 3.18 (6) | 3.19 (7) | 3.20 (8) | 2.78 (2) | 3.15 (3) | 2.12 (1) |
| Vehicle | 6.43 (9) | 6.69 (10) | 3.61 (5) | 3.61 (4) | 3.61 (6) | 3.66 (7) | 3.67 (8) | 3.34 (2) | 3.52 (3) | 2.48 (1) |
| WDBC | 5.12 (9) | 5.65 (10) | 3.34 (4) | 3.39 (5) | 3.42 (7) | 3.41 (6) | 3.44 (8) | 2.98 (2) | 3.33 (3) | 2.13 (1) |
| BreastEW | 6.04 (9) | 6.24 (10) | 3.16 (3) | 3.49 (7) | 3.49 (8) | 3.29 (4) | 3.40 (6) | 3.14 (2) | 3.36 (5) | 2.23 (1) |
| SonarEW | 5.90 (10) | 5.85 (9) | 3.05 (6) | 3.04 (4) | 3.06 (7) | 3.05 (5) | 3.07 (8) | 2.89 (2) | 3.04 (3) | 2.07 (1) |
| Libras | 6.29 (10) | 6.13 (9) | 3.29 (8) | 3.13 (3) | 3.14 (4) | 3.25 (6) | 3.29 (7) | 3.03 (2) | 3.18 (5) | 2.20 (1) |
| Hillvalley | 5.93 (9) | 6.23 (10) | 3.68 (8) | 3.23 (3) | 3.29 (4) | 3.49 (6) | 3.55 (7) | 3.03 (2) | 3.35 (5) | 2.20 (1) |
| Musk | 6.43 (10) | 6.33 (9) | 3.78 (8) | 3.29 (3) | 3.34 (4) | 3.58 (6) | 3.66 (7) | 3.27 (2) | 3.39 (5) | 2.40 (1) |
| Clean | 6.57 (10) | 6.29 (9) | 3.80 (8) | 3.33 (3) | 3.35 (4) | 3.56 (6) | 3.67 (7) | 3.27 (2) | 3.40 (5) | 2.41 (1) |
| Semeion | 17.28 (9) | 18.53 (10) | 14.15 (8) | 8.32 (3) | 8.13 (2) | 11.12 (6) | 11.22 (7) | 10.19 (5) | 8.45 (4) | 7.10 (1) |
| Madelon | 23.76 (3) | 34.92 (7) | 58.25 (10) | 19.77 (1) | 22.83 (2) | 42.06 (8) | 42.16 (9) | 30.88 (6) | 27.94 (5) | 24.78 (4) |
| Isolet | 20.18 (6) | 23.14 (9) | 29.85 (10) | 11.30 (1) | 14.57 (3) | 22.64 (7) | 23.07 (8) | 14.79 (4) | 16.48 (5) | 13.36 (2) |
| Mean Rank | 8.87 | 9.57 | 7.09 | 3.48 | 4.87 | 6.52 | 7.04 | 2.70 | 3.61 | 1.26 |
| Final Rank | 9 | 10 | 8 | 3 | 5 | 6 | 7 | 2 | 4 | 1 |