Article

MESPBO: Multi-Strategy-Enhanced Student Psychology-Based Optimization Algorithm for Global Optimization Problems and Feature Selection Problems

1 College of Tourism, Resources and Environment, Zaozhuang University, Zaozhuang 277160, China
2 School of Foreign Languages, Qufu Normal University, Jining 273165, China
3 College of Mechanical and Electrical Engineering, Zaozhuang University, Zaozhuang 277160, China
4 State Key Laboratory of Infrared Detection, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
* Author to whom correspondence should be addressed.
Biomimetics 2026, 11(1), 37; https://doi.org/10.3390/biomimetics11010037
Submission received: 27 October 2025 / Revised: 22 December 2025 / Accepted: 29 December 2025 / Published: 5 January 2026

Abstract

Feature selection and continuous optimization are fundamental yet challenging tasks in machine learning and engineering design. To address premature convergence and insufficient population diversity in Student Psychology-Based Optimization (SPBO), this paper proposes a Multi-Strategy-Enhanced Student Psychology-Based Optimizer (MESPBO). The proposed method incorporates three complementary strategies: (i) a hybrid heuristic initialization scheme based on Latin Hypercube Sampling and Gaussian perturbation; (ii) an adaptive dual-learning position update mechanism to dynamically balance exploration and exploitation; (iii) a hybrid opposition-based reflective boundary control strategy to enhance search stability. Extensive experiments on the CEC2017 benchmark suite with 10, 30, and 50 dimensions demonstrate that MESPBO consistently outperforms 11 state-of-the-art metaheuristic algorithms. Specifically, MESPBO achieves the best Friedman mean ranks of 2.00, 1.67, and 1.67 under 10D, 30D, and 50D settings, respectively, indicating superior convergence accuracy, robustness, and scalability. In real-world feature selection tasks conducted on 10 benchmark datasets, MESPBO achieves the highest average classification accuracy on 9 datasets, reaching 100% accuracy on several datasets, while maintaining competitive performance on the remaining one. Moreover, MESPBO selects the smallest feature subsets on 7 datasets, typically retaining only 2–4 features without sacrificing classification accuracy. Compared with the original SPBO, MESPBO further reduces the fitness values on 7 out of 10 datasets, achieving an average improvement of approximately 10%. These results verify that MESPBO provides an effective trade-off between optimization accuracy and feature compactness, demonstrating strong adaptability and generalization capability for both global optimization and feature selection problems.

1. Introduction

With the rapid integration of big data and artificial intelligence, machine learning has become an essential analytical tool across diverse fields, including medical diagnosis [1], financial risk assessment [2], image processing [3], and natural language understanding [4]. The effectiveness of these models, however, heavily depends on the quality and relevance of their input features. In real-world datasets, it is common for the feature space to contain redundant, irrelevant, or noisy attributes, which can obscure the intrinsic patterns of data. The presence of such redundant information not only amplifies computational complexity and prolongs training time but also triggers the curse of dimensionality, making optimization in high-dimensional spaces extremely challenging. More critically, it increases the likelihood of overfitting, where a model performs well on the training set but fails to generalize to unseen data [5]. To mitigate these issues, feature selection (FS) has emerged as a fundamental step in data preprocessing. The core objective of FS is to identify a compact subset of informative and discriminative features that preserves the essential characteristics of the original dataset while eliminating redundancy and noise. By reducing dimensionality, feature selection not only enhances learning efficiency and model interpretability but also improves classification accuracy and generalization capability [6]. Consequently, it has become an indispensable component of modern machine learning pipelines and plays a pivotal role in constructing efficient, reliable, and explainable intelligent systems.
Despite the significance of feature selection, identifying the optimal subset of features from high-dimensional data remains an NP-hard combinatorial optimization problem. Traditional methods—such as filter [7], wrapper [8], and embedded approaches [9]—often suffer from limited search capability or dependency on specific learning models, which restricts their generalization across diverse datasets. In particular, filter methods rely heavily on statistical correlations and may overlook complex nonlinear dependencies among features, while wrapper and embedded methods are computationally expensive and prone to overfitting when dealing with large-scale datasets.
In recent years, intelligent optimization algorithms inspired by nature and social behavior have garnered significant attention due to their powerful global search capabilities and flexibility in handling complex, nonlinear, and multimodal search spaces [10]. These algorithms, known as metaheuristic algorithms, have been successfully applied in fields such as engineering design [11], machine learning, energy management [12], and feature selection [13]. For instance, Mozhdehi et al. proposed a novel Sacred Religion Algorithm based on an evolutionary socioeconomic approach inspired by religious societies [14]. This algorithm models interactions among followers, missionaries, and leaders, demonstrating outstanding performance across 23 standard benchmark functions and five practical optimization problems. Liu et al. proposed a novel Graduate Student Evolutionary Algorithm (GSEA) inspired by the daily behaviors of graduate students [15]. By simulating key processes such as identifying research directions and focusing on studies, they established a mathematical model for GSEA, which demonstrated favorable results on the CEC2017 and CEC2022 test sets and proved capable of solving real-world optimization problems in unmanned aerial vehicle and robot path planning tasks. Fu et al. simulated the search, pursuit, predation, and food-storage behaviors of the Eurasian magpie to establish the mathematical model of RBMO [16]. This model demonstrated remarkable performance across the CEC2014 and CEC2017 suites (Dim = 10, 30, 50, and 100), drone path planning, and five engineering design problems.
Furthermore, as the no free lunch (NFL) theorem demonstrates, no single optimization algorithm performs exceptionally well across all problems: when averaged over all optimization problems, all algorithms exhibit identical performance. This implies that algorithms should be selected according to the specific problems to be solved. For instance, Tang et al. proposed a multi-strategy particle swarm optimization hybrid dandelion optimization algorithm to address the slow optimization speed and susceptibility to local optima of the dandelion optimization algorithm. The approach was developed for three engineering design problems of varying complexity, and experimental results demonstrated that it achieved significant improvements in solving all three [17]. To address deployment challenges in wireless sensor networks, Bao et al. proposed a multi-strategy integrated group teaching optimization algorithm (MSIGTOA) employing strategies such as chaotic inverse learning. This approach achieved higher coverage while reducing node usage by at least 10%, thereby significantly lowering WSN deployment costs [18]. To address intraday operation optimization in microgrids, Lu et al. developed an Enhanced Sardine Optimization Algorithm (ESOA) incorporating composite adversarial learning. Validation in microgrid dispatch applications demonstrated significant improvements over the standard Sardine Optimization Algorithm (SOA) [19].
Among various metaheuristics, the Student Psychology-Based Optimization (SPBO) algorithm, inspired by the learning behavior and psychology of students in a classroom, has recently shown promising performance in balancing exploration and exploitation [20]. SPBO models the process of students learning from top-performing peers and improving their knowledge based on collective interaction and self-learning. Despite its advantages, the original SPBO still suffers from limitations such as random initialization, fixed learning parameters, and inefficient boundary handling, which can hinder its convergence stability and search performance in complex optimization landscapes.
Many researchers have improved SPBO to better address the optimization problems they need to solve. For example, to address power system optimization problems, Balu et al. proposed a novel quasi-oppositional chaotic student psychology-based optimization algorithm [21]. In two radial distribution systems, considering different load models under three load levels, the algorithm achieves optimal locations and sizes for distributed generation and shunt capacitors, yielding excellent results. To perform big data clustering, Shanmugam et al. proposed a robust and effective IoT routing technique based on SPBO [22]. By performing feature selection during the mapping stage, they effectively improved clustering performance in terms of energy, clustering accuracy, Jaccard coefficient, and Rand coefficient. To address the economic dispatch problem, Basu et al. proposed an improved SPBO [23]. Experiments on economic dispatch problems involving valve-point effects, prohibited operating zones, ramp-rate limits, and multi-fuel options demonstrate that MSPBO is capable of providing better results.
To overcome these challenges, this paper proposes a Multi-Strategy-Enhanced Student Psychology-Based Optimization (MESPBO) algorithm. The proposed method integrates several improvement strategies to enhance the robustness, adaptability, and convergence efficiency of the original SPBO. Specifically, a hybrid heuristic population initialization mechanism based on Latin Hypercube Sampling (LHS) and Gaussian perturbation is introduced to improve the diversity and uniformity of the initial population. Furthermore, an adaptive dual-learning position update mechanism dynamically adjusts the learning intensity and direction of each individual according to iteration progress and population diversity, ensuring a smooth transition from exploration to exploitation. Additionally, a hybrid opposition-based reflective boundary control strategy is designed to prevent the loss of potentially valuable individuals and maintain population diversity near the boundaries.
To evaluate the effectiveness of MESPBO, extensive experiments are conducted on a set of well-known benchmark functions from the IEEE CEC2017 test suite, as well as on feature selection problems from real-world datasets. Comparative analyses with 11 state-of-the-art algorithms demonstrate that MESPBO achieves superior optimization accuracy, faster convergence speed, and better robustness across different problem categories. Moreover, the proposed algorithm shows strong generalization capability, making it suitable for both continuous global optimization and discrete feature selection tasks.
The main contributions of this work are summarized as follows:
  • A hybrid heuristic initialization strategy combining LHS and Gaussian perturbation is proposed to ensure a well-distributed and diverse initial population.
  • An adaptive dual-learning mechanism is developed to dynamically balance exploration and exploitation throughout the optimization process.
  • A hybrid opposition-based reflective boundary control strategy is introduced to enhance the stability and diversity of population evolution and to improve the algorithm's boundary handling.
  • Comprehensive experiments on benchmark and real-world datasets validate the superior performance and general applicability of MESPBO.
  • The effectiveness of MESPBO in solving practical problems is comprehensively analyzed by applying it to photovoltaic model parameter extraction.
The remainder of this paper is organized as follows: Section 2 reviews the student psychology-based optimization algorithm. Section 3 presents the detailed formulation of the proposed MESPBO algorithm and its improvement strategies. Section 4 provides experimental settings and performance analysis on benchmark functions. Section 5 discusses the results and comparisons on feature selection tasks, and finally, Section 6 concludes the paper and outlines future research directions.

2. Student Psychology-Based Optimization Algorithm

Since the MESPBO proposed in this paper is an improvement upon SPBO, this section provides a brief introduction to SPBO. SPBO was conceived through research into student behavior across different schools and colleges, drawing inspiration from insights into student psychology. Bikash Das et al. categorize students into four groups: top performers, good students, average students, and those randomly attempting to improve. Each category exhibits distinct psychological activities, which are used to model the algorithm’s iterative update process. Specific details are as follows:

2.1. Best Student

Typically, the student who achieves the highest score on an exam is regarded as the top student. To maintain this position, the top student consistently strives to earn the highest grade in the class, necessitating greater effort. Consequently, the top student’s effort process can be modeled as shown in Equation (1).
$X_{best}^{new} = X_{best} + (-1)^{k} \times rand \times (X_{best} - X_{j}),$
where $X_{best}$ and $X_{j}$ represent the top student and the $j$-th randomly selected student in a specific subject, respectively, $rand$ denotes a random number between 0 and 1, and $k$ is a parameter randomly selected as either 1 or 2.

2.2. Good Student

If a student develops an interest in any subject, they will attempt to invest increasing effort into that subject to enhance their overall performance. Such students are defined as good students. The choices made by these students constitute a random process due to variations in student psychology. To achieve the highest scores on exams and become the best students, some students strive to exert effort comparable to or exceeding that of the top performers. The specific effort process of such students can be modeled by Equation (2).
$X_{i}^{new} = X_{best} + rand \times (X_{best} - X_{i}),$
where $X_{i}$ denotes the $i$-th student. Additionally, some students exert greater effort in their studies than their peers in the class and strive to emulate the efforts of the most accomplished students. The effort process of such students can be modeled by Equation (3).
$X_{i}^{new} = X_{i} + rand \times (X_{best} - X_{i}) + rand \times (X_{i} - X_{mean}),$
where $X_{mean}$ indicates the class's average performance in a specific subject.

2.3. Average Student

Since the effort students exert depends on their interest in the subjects offered to them, if students are less interested in certain subjects, they will exert average effort in those subjects to improve their overall grades. Such students are defined as average students. Given the differing psychological profiles of students, their choices also constitute a random process, which can be modeled by Equation (4).
$X_{i}^{new} = X_{i} + rand \times (X_{mean} - X_{i}).$

2.4. Students Who Try to Improve Randomly

In addition to the aforementioned three categories of students, some students attempt to improve their grades independently. They strive to enhance their overall exam performance by applying effort somewhat randomly across subjects. The efforts of this group of students can be specifically modeled as Equation (5).
$X_{i}^{new} = X_{min} + rand \times (X_{max} - X_{min}),$
where $X_{max}$ and $X_{min}$ represent the upper and lower bounds of the problem to be solved, respectively, and also denote the maximum and minimum scores achievable in the student subjects. The classification of students is shown in Figure 1. Algorithm 1 presents the pseudocode for the SPBO algorithm. Figure 2 presents the flowchart of the SPBO algorithm.
Algorithm 1: Pseudo-code of the SPBO algorithm
1: Begin
2: Initialize the relevant parameters and the population
3: While $t < T_{max}$
4:    Evaluate the initial performance of the class
5:    Check the category of each student
6:    Best students:
7:      Modify performance by Equation (1)
8:    Good students:
9:      Modify performance by Equations (2) and (3)
10:    Average students:
11:      Modify performance by Equation (4)
12:    Students who try to improve randomly:
13:      Modify performance by Equation (5)
14:    Check the boundary
15:    Update the students' performance
16: End while
17: Return the best student
18: End
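To make the update rules concrete, the following minimal Python sketch implements one SPBO iteration corresponding to Equations (1)–(5) and Algorithm 1. The equal one-third split used to assign non-best students to the good/average/random categories and the greedy replacement rule are illustrative assumptions, not the authors' reference implementation.

import numpy as np

def spbo_step(X, scores, fitness, lb, ub, rng):
    # One SPBO iteration over a population X of shape (N, D).
    # fitness maps a 1-D array to a scalar (minimization).
    # The equal 1/3 category split below is an illustrative assumption.
    N, D = X.shape
    best = np.argmin(scores)
    X_best, X_mean = X[best].copy(), X.mean(axis=0)
    for i in range(N):
        if i == best:                                    # Eq. (1)
            j = rng.integers(N)
            k = rng.integers(1, 3)                       # k is 1 or 2
            X_new = X_best + (-1) ** k * rng.random() * (X_best - X[j])
        else:
            r = rng.random()
            if r < 1 / 3:                                # good student
                if rng.random() < 0.5:                   # Eq. (2)
                    X_new = X_best + rng.random() * (X_best - X[i])
                else:                                    # Eq. (3)
                    X_new = (X[i] + rng.random() * (X_best - X[i])
                             + rng.random() * (X[i] - X_mean))
            elif r < 2 / 3:                              # Eq. (4)
                X_new = X[i] + rng.random() * (X_mean - X[i])
            else:                                        # Eq. (5)
                X_new = lb + rng.random(D) * (ub - lb)
        X_new = np.clip(X_new, lb, ub)                   # boundary check
        f_new = fitness(X_new)
        if f_new < scores[i]:                            # keep the better position
            X[i], scores[i] = X_new, f_new
    return X, scores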

3. Proposed MESPBO

Although the SPBO algorithm demonstrates good optimization capability inspired by student psychological behaviors, its search performance can still be limited by insufficient population diversity, static learning mechanisms, and inefficient boundary control. To overcome these drawbacks, the Multi-strategy-Enhanced Student Psychology-Based Optimization (MESPBO) introduces three major improvements, focusing on the population initialization, student position updating, and boundary control mechanisms.

3.1. Hybrid Heuristic Population Initialization

In the original SPBO, the initial population is generated randomly, which may lead to uneven distribution and weak exploration ability in the early stage. To enhance the population diversity and improve convergence performance, MESPBO introduces a hybrid heuristic initialization mechanism, which integrates Latin Hypercube Sampling (LHS) [24] and Gaussian perturbation [25].

3.1.1. Latin Hypercube Sampling (LHS)

For a $D$-dimensional optimization problem, the initial population matrix is denoted by Equation (6).
$X = [x_{i,j}]_{N \times D}, \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, D,$
where $N$ denotes the population size and $D$ represents the problem's dimensionality. The $j$-th dimension is divided into $N$ equally spaced intervals, as expressed by Equation (7).
$I_{k}^{(j)} = \left[ \dfrac{k-1}{N}, \dfrac{k}{N} \right], \quad k = 1, 2, \ldots, N.$
Then, one sample is randomly selected from each interval and permuted to ensure non-overlapping coverage, as expressed by Equation (8).
$x_{i,j}^{LHS} = L_{j} + (U_{j} - L_{j}) \, \pi_{j}(r_{i}),$
where $L_{j}$ and $U_{j}$ are the lower and upper bounds of the $j$-th dimension, $r_{i} \in I_{k}^{(j)}$ is a uniformly distributed random number in the $k$-th interval, and $\pi_{j}(\cdot)$ is a random permutation function ensuring each interval is used exactly once. This process guarantees that the samples are uniformly distributed across the search space.

3.1.2. Gaussian Perturbation

To prevent clustering and enhance local search diversity, Gaussian noise is applied to each generated solution. Specifically, it can be expressed as Equation (9).
$x_{i,j}^{G} = x_{i,j}^{LHS} + N(0, \sigma_{j}^{2}),$
where $N(0, \sigma_{j}^{2})$ is a Gaussian perturbation with zero mean and variance $\sigma_{j}^{2}$, computed by Equation (10).
$\sigma_{j} = \kappa \, (U_{j} - L_{j}),$
where $\kappa \in [0.01, 0.05]$ is a control parameter determining the perturbation intensity. In summary, the final initial population can be expressed as Equation (11).
$X = X^{LHS} + N(0, \Sigma),$
where $\Sigma = \mathrm{diag}(\sigma_{1}^{2}, \sigma_{2}^{2}, \ldots, \sigma_{D}^{2})$.
In the initialization stage, each individual is first generated by Latin Hypercube Sampling to ensure global uniformity. Then, a Gaussian perturbation is applied to each dimension of the population to introduce local randomness, thereby enhancing population diversity and preventing premature clustering in specific regions.
This hybrid initialization mechanism combines the global uniformity of Latin Hypercube Sampling with the local stochasticity of Gaussian perturbation. It ensures a well-distributed and diverse initial population, which effectively improves global exploration and convergence stability.
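As a concrete illustration, the following Python sketch generates the hybrid initial population of Equations (6)–(11): stratified Latin Hypercube samples per dimension followed by a Gaussian perturbation with $\sigma_{j} = \kappa (U_{j} - L_{j})$. The default $\kappa = 0.03$ lies within the range stated above; the final clipping step is an added safeguard to keep perturbed samples feasible.

import numpy as np

def hybrid_init(N, D, lb, ub, kappa=0.03, rng=None):
    # Hybrid heuristic initialization: LHS (Eqs. 6-8) plus Gaussian
    # perturbation (Eqs. 9-11); kappa in [0.01, 0.05] per the paper.
    rng = np.random.default_rng() if rng is None else rng
    lb, ub = np.broadcast_to(lb, D), np.broadcast_to(ub, D)
    X = np.empty((N, D))
    for j in range(D):
        # one uniform sample in each of the N strata [(k-1)/N, k/N)
        strata = (np.arange(N) + rng.random(N)) / N
        # a random permutation ensures each interval is used exactly once
        X[:, j] = lb[j] + (ub[j] - lb[j]) * rng.permutation(strata)
    sigma = kappa * (ub - lb)                   # Eq. (10)
    X += rng.normal(0.0, sigma, size=(N, D))    # Eq. (9)
    return np.clip(X, lb, ub)  # keep perturbed samples inside the bounds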

3.2. Adaptive Dual-Learning Position Update Mechanism

In the original SPBO, each category of students updates their positions using fixed coefficients, resulting in a rigid exploration–exploitation balance that may not adapt to different optimization stages. To overcome this limitation, the proposed MESPBO introduces an adaptive dual-learning mechanism, where students dynamically adjust their learning intensity according to both iteration progress and population diversity [26].
Adaptive Learning Coefficients: At the beginning of the optimization, maintaining high population diversity is crucial for avoiding local optima; hence, a stronger global exploration component is adopted. As iterations proceed, the algorithm gradually shifts its focus toward local exploitation to refine the solutions around the best individuals. This transition is controlled by two time-varying learning coefficients $\alpha(t)$ and $\beta(t)$: $\alpha(t)$ decreases smoothly with the number of iterations, while $\beta(t)$ increases toward its upper bound, as expressed in Equations (12) and (13).
$\alpha(t) = \alpha_{max} - (\alpha_{max} - \alpha_{min}) \times \left( \dfrac{t}{T} \right)^{\lambda},$
$\beta(t) = \beta_{min} + (\beta_{max} - \beta_{min}) \times \left( 1 - e^{-\mu t / T} \right),$
where $t$ is the current iteration count; $T$ is the maximum iteration count; $\alpha_{max}$ and $\alpha_{min}$ bound the exploration coefficient; $\beta_{max}$ and $\beta_{min}$ bound the exploitation coefficient; and $\lambda$ and $\mu$ are adaptation factors controlling the rate of change. Both $\lambda$ and $\mu$ are set to 1.2, allowing the algorithm to explore more in the early stages and exploit more in the later stages. $\alpha(t)$ dominates global exploration in the early stage, while $\beta(t)$ gradually strengthens local exploitation in later iterations.
Dual-Learning Position Update Formula: The position of the $i$-th student is updated according to both the best individual and the population mean, as expressed by Equation (14).
$X_{i}^{new} = X_{i} + \alpha(t) \, rand_{1} \, (X_{best} - X_{i}) + \beta(t) \, rand_{2} \, (X_{i} - X_{mean}),$
where $rand_{1}$ and $rand_{2}$ are random numbers between 0 and 1, $X_{best}$ denotes the global best individual, and $X_{mean}$ denotes the mean position of the entire population.
The adaptive dual-learning strategy enables each student to dynamically balance between exploration and exploitation according to the optimization phase and population diversity. Early in the search, larger α ( t ) values promote exploration of the global search space. As the iteration progresses, smaller α ( t ) and larger β ( t ) values focus the search around promising regions, thus improving convergence accuracy and stability.
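A minimal Python sketch of the adaptive dual-learning update of Equations (12)–(14) is given below. The coefficient bounds ($\alpha \in [0.2, 2.0]$, $\beta \in [0.2, 2.0]$) are illustrative assumptions, since the section does not state them; $\lambda = \mu = 1.2$ follows the setting described above.

import numpy as np

def dual_learning_update(X, X_best, t, T, rng,
                         a_max=2.0, a_min=0.2, b_max=2.0, b_min=0.2,
                         lam=1.2, mu=1.2):
    # Adaptive dual-learning position update, Eqs. (12)-(14).
    # a_max/a_min and b_max/b_min are illustrative coefficient bounds.
    alpha = a_max - (a_max - a_min) * (t / T) ** lam            # Eq. (12)
    beta = b_min + (b_max - b_min) * (1 - np.exp(-mu * t / T))  # Eq. (13)
    X_mean = X.mean(axis=0)
    r1 = rng.random((X.shape[0], 1))   # one rand1 per student
    r2 = rng.random((X.shape[0], 1))   # one rand2 per student
    # Eq. (14): pull toward the best individual, offset from the mean
    return X + alpha * r1 * (X_best - X) + beta * r2 * (X - X_mean)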

3.3. Hybrid Opposition-Based Reflective Boundary Control

Boundary handling is critical for preserving the stability and continuity of the population evolution. Instead of the common truncation or random re-initialization, MESPBO adopts a hybrid opposition-based and reflective boundary control [27]. When an individual component $X_{i,d}$ goes outside its feasible interval $[X_{min,d}, X_{max,d}]$, it is remapped by either a reflective mapping or an opposition mapping. This hybrid strategy keeps infeasible individuals in the search process, reintroduces diverse candidate solutions, and reduces the likelihood of getting stuck in local optima.
Reflective mapping: When individuals in the algorithm exceed the bounds, reflective mapping can be used to reflect the out-of-bounds individuals back into the feasible domain, which can be specifically expressed as Equation (15).
$X_{i,d}^{ref} = \begin{cases} X_{min,d} + \left| X_{i,d} - X_{min,d} \right|, & X_{i,d} < X_{min,d} \\ X_{max,d} - \left| X_{i,d} - X_{max,d} \right|, & X_{i,d} > X_{max,d} \end{cases}$
Opposition mapping: The opposition mapping assigns outlier individuals to opposing positions within the interval to enhance the algorithm’s exploration capability, which can be calculated using Equation (16).
$X_{i,d}^{opp} = X_{min,d} + X_{max,d} - X_{i,d}.$
Additionally, to enhance the algorithm’s ability to escape local optima, we introduce small perturbations to opposing particles to increase diversity, which can be expressed as in Equation (17).
$X_{i,d}^{opp,\epsilon} = \mathrm{clip}\left( X_{i,d}^{opp} + \epsilon, \; X_{min,d}, \; X_{max,d} \right),$
where $\mathrm{clip}(\cdot, a, b)$ restricts its argument to the interval $[a, b]$.
Hybrid strategy: When an individual component $X_{i,d}$ crosses the boundary, the opposition mapping is selected with probability $P$; otherwise, the reflection mapping is selected. This can be expressed as Equation (18).
$X_{i,d}^{new} = \begin{cases} \mathrm{clip}\left( X_{i,d}^{opp} + \epsilon_{1}, \; X_{min,d}, \; X_{max,d} \right), & \text{with probability } P \\ \mathrm{clip}\left( X_{i,d}^{ref} + \epsilon_{2}, \; X_{min,d}, \; X_{max,d} \right), & \text{with probability } 1 - P \end{cases}$
where $\epsilon_{1}$ and $\epsilon_{2}$ represent small random disturbances following a uniform distribution.
To robustly handle out-of-bounds individuals, we propose a hybrid opposition-based reflective boundary control. When a decision variable exceeds its feasible range, it is remapped either by an opposition mapping $X^{opp} = X_{min} + X_{max} - X$ or by a reflective mapping that mirrors the violation back into the feasible interval. The choice between opposition and reflection is governed by a probability $P$. Small random perturbations are applied to the remapped values to avoid deterministic cycles, and the final result is clipped to $[X_{min}, X_{max}]$. This hybrid mechanism preserves search continuity, reintroduces potentially promising candidates, and enhances population diversity, thereby reducing the likelihood of premature convergence. To visually illustrate the algorithm's execution process, Figure 3 presents the flowchart of the MESPBO algorithm.
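The following Python sketch illustrates this hybrid boundary control, corresponding to Equations (15)–(18). The probability $P = 0.5$ and the perturbation scale are illustrative assumptions, since the section does not fix their values.

import numpy as np

def hybrid_boundary(X, lb, ub, P=0.5, eps_scale=0.01, rng=None):
    # Hybrid opposition-based / reflective boundary control,
    # Eqs. (15)-(18). P = 0.5 and eps_scale are illustrative settings.
    rng = np.random.default_rng() if rng is None else rng
    out = (X < lb) | (X > ub)              # mask of violated components
    if not out.any():
        return X
    X = X.copy()
    refl = np.where(X < lb, lb + np.abs(X - lb),
                    ub - np.abs(X - ub))   # reflective mapping, Eq. (15)
    opp = lb + ub - X                      # opposition mapping, Eq. (16)
    eps = rng.uniform(-eps_scale, eps_scale, X.shape) * (ub - lb)
    use_opp = rng.random(X.shape) < P      # Eq. (18): choose per component
    remap = np.clip(np.where(use_opp, opp, refl) + eps, lb, ub)  # Eq. (17)
    X[out] = remap[out]                    # only violated components change
    return X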

3.4. Time Complexity Analysis

Time complexity analysis is essential for any heuristic algorithm, as it directly reflects the algorithm's scalability and computational efficiency on large-scale problems. In this section, we analyze the time complexity of MESPBO, where the main computational overhead comes from the iterative loop and the update operations for all individual students in each iteration. Specifically, the algorithm traverses all students in each iteration and executes an update strategy based on their category. Therefore, the overall time complexity of the algorithm can be expressed as $O(N \times T)$, where $N$ is the population size and $T$ is the maximum number of iterations. This matches the time complexity of the original SPBO algorithm, so the added strategies do not change the computational cost by an order of magnitude. In conclusion, the improvements to SPBO are acceptable in terms of time complexity.

4. Experimental Analysis of Global Optimization Problems

4.1. IEEE CEC2017 Benchmark Suite

To comprehensively evaluate the performance of the proposed MESPBO algorithm, the IEEE CEC2017 benchmark suite is employed as the standard test platform. The CEC2017 test set is a well-established and widely recognized benchmark collection designed by the IEEE Congress on Evolutionary Computation for assessing the performance of real-parameter optimization algorithms [28]. It consists of 30 continuous optimization functions, including unimodal, multimodal, hybrid, and composition functions, which progressively increase in complexity. These functions effectively represent different optimization challenges, such as local optima entrapment, high-dimensional nonlinearity, and strong variable interactions.
The diversity and difficulty of the CEC2017 test suite make it an authoritative benchmark for verifying the global search capability, convergence accuracy, robustness, and stability of intelligent optimization algorithms. Moreover, it provides a fair and unified testing environment that facilitates direct comparison with other state-of-the-art algorithms. Therefore, this paper adopts the CEC2017 benchmark suite to rigorously test the proposed MESPBO algorithm, ensuring that its performance evaluation is objective, comprehensive, and consistent with current research standards in the optimization community.

4.2. Comparison of Algorithms and Parameter Settings

In this section, the effectiveness of the proposed MESPBO algorithm is systematically examined on the widely recognized CEC2017 benchmark suite and benchmarked against multiple competitive optimization algorithms. The comparison algorithms include: Particle Swarm Optimization (PSO) [29], Snake Optimization (SO) [30], Gold Rush Optimizer (GRO) [31], Secretary Bird Optimization Algorithm (SBOA) [32], enterprise development optimization algorithm (ED) [33], Escape optimization algorithm (ESC) [34], hyper-heuristic whale optimization algorithm (HHWOA) [35], Improved Grey Wolf Optimizer (IGWO) [36], Modified Student Psychology-Based Optimization algorithm (MSPBO) [23], Quasi-oppositional chaotic student psychology-based optimization algorithm (QOCSPBO) [21], and Student Psychology-based optimization algorithm (SPBO) [20]. The parameter settings of all algorithms are listed in Table 1. All experiments were conducted in a Windows 11 environment using an AMD Ryzen 7 9700X octa-core processor (3.80 GHz) with 48 GB of memory and MATLAB 2024b software.

4.3. Experimental Results and Analysis of CEC2017 Test Suite

This section evaluates the performance of MESPBO using the CEC2017 benchmark suite. To comprehensively test its capabilities, experiments were conducted on the CEC2017 functions with dimensions of 10, 30, and 50. To ensure fairness, the population size for all algorithms was set to 50, and the maximum number of iterations was set to 100. To mitigate the impact of algorithmic randomness on the results, each algorithm was independently run 30 times. Table 2, Table 3 and Table 4 record the mean (Ave) and standard deviation (Std) of these 30 independent runs. For a more intuitive analysis of the experimental outcomes, Figure 4 presents the convergence curves of the algorithms, and Figure 5 shows the box plots summarizing the 30 runs.
The convergence curves on the CEC2017 benchmark under 10D, 30D, and 50D scenarios indicate that MESPBO achieves the best overall performance on most test functions. Specifically, MESPBO exhibits a markedly faster decrease in average fitness than PSO, SO, GRO, SBOA, ED, ESC, HHWOA, IGWO, MSPBO, QOCSPBO, and SPBO, while continuing to refine solutions in later iterations to reach lower final fitness values. Its trajectories are generally smoother with smaller fluctuations, demonstrating superior stability and robustness. As the dimensionality increases, many baseline and improved algorithms suffer from slower convergence or premature stagnation, whereas MESPBO maintains strong descending trends and attains higher-precision solutions, with particularly pronounced advantages on complex multimodal and high-dimensional problems. Overall, these results confirm that the proposed multi-strategy enhancements effectively improve population diversity and the exploration–exploitation balance, thereby significantly strengthening global optimization capability and high-dimensional adaptability.
As presented in Table 2, the proposed MESPBO algorithm exhibits outstanding performance on the 10-dimensional CEC2017 benchmark suite. It consistently achieves the best or near-best mean fitness values on the majority of test functions, demonstrating its powerful global optimization capability in low-dimensional search spaces. Moreover, MESPBO reports significantly smaller standard deviations compared with classical algorithms such as PSO, SO, GRO, and SBOA, as well as more recent variants including HHWOA, IGWO, MSPBO, QOCSPBO, and SPBO. This evidences the algorithm's high robustness and stability, ensuring reliable performance across independent runs.
Table 3 presents the comparative results of all algorithms tested on the 30-dimensional CEC2017 benchmark suite. As the dimensionality increases, the complexity of the optimization task rises dramatically due to the enlarged search space and more rugged fitness landscapes. Despite these challenges, the proposed MESPBO algorithm maintains excellent optimization performance: it achieves the best or competitive mean fitness values across a majority of the 30-dimensional test functions, demonstrating that its strong global search capability extends naturally to more complex, higher-dimensional settings. Additionally, the algorithm continues to achieve notably smaller standard deviation values than all other competitors, indicating that MESPBO remains highly stable and robust even under increased search difficulty.
Table 4 reports the experimental results on the 50-dimensional CEC2017 benchmark set. As the dimensionality further increases, the optimization landscape becomes highly rugged and multimodal, greatly intensifying the difficulty of locating global optima. Nevertheless, the proposed MESPBO algorithm continues to deliver remarkable performance. Across the majority of 50-dimensional test functions, MESPBO achieves the lowest mean fitness values, clearly outperforming both traditional and state-of-the-art metaheuristic competitors. Its performance advantage is especially prominent on complex multimodal functions, where maintaining global search ability is crucial. Moreover, MESPBO consistently reports the smallest standard deviations, reaffirming its strong robustness and solution stability even under extremely high-dimensional and complex search scenarios.
Based on the boxplot results of the CEC2017 benchmark functions across 10, 30, and 50 dimensions, the proposed MESPBO algorithm demonstrates consistently superior performance. It exhibits the smallest box heights, short whiskers, and nearly no outliers for most test functions, indicating highly concentrated solution distributions and excellent stability across repeated runs. As the dimensionality increases from 10 to 50, most competing algorithms show noticeable degradation, reflected by significantly expanded box ranges and numerous outliers. In contrast, MESPBO maintains compact distributions and low median fitness values, highlighting its strong capability in handling high-dimensional optimization tasks. For F5, F8, F10, F17, F22, and F29 functions, MESPBO achieves the best or near-best median performance while presenting substantially lower variability than other methods. Overall, these results confirm that MESPBO consistently preserves robust and accurate optimization performance across different dimensional settings and stands out as the most competitive algorithm among all compared methods.
Across the CEC2017 benchmark tests with 10, 30, and 50 dimensions, the experimental results demonstrate that MESPBO consistently delivers superior optimization performance. In terms of mean performance, median values, and distribution stability, MESPBO significantly outperforms the competing algorithms. As the dimensionality increases, most algorithms exhibit larger fluctuations, unstable convergence behaviors, and numerous outliers. In contrast, MESPBO maintains a compact solution distribution, stable convergence, and reliable performance even in high-dimensional scenarios, reflecting its strong adaptability and robustness. Overall, MESPBO demonstrates clear advantages in global search ability, solution stability, and cross-dimensional scalability, making it the most competitive and effective algorithm among all the methods evaluated in this study.

4.4. Friedman Mean Rank Test

To further assess the statistical significance of performance differences among algorithms, we employed the Friedman mean rank test for evaluation. The Friedman test is a nonparametric statistical test designed to detect performance differences across multiple algorithms on various benchmark functions [37]. Unlike parametric tests such as the analysis of variance, it does not assume a normal distribution of the data, making it particularly suitable for analyzing optimization results whose performance values may not follow a Gaussian distribution.
In this test, each algorithm is assigned a rank on each benchmark function based on its performance, with the best-performing algorithm receiving the lowest rank. The ranks are then averaged across all test functions, yielding the mean rank for each algorithm. A lower mean rank indicates better overall performance. Table 5 shows the rankings of each algorithm across various dimensions on the CEC2017 test set. $M.R$ represents the algorithm's mean rank across the 30 test functions, while $T.R$ indicates its final ranking.
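For reproducibility, the mean ranks can be computed as sketched below with SciPy; the random result matrix is only a placeholder standing in for the mean fitness values of Tables 2–4.

import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_mean_ranks(results):
    # results: (n_functions, n_algorithms) array of mean fitness values,
    # lower is better. Rank algorithms per function (best gets rank 1),
    # then average the ranks over all functions to obtain M.R.
    ranks = np.apply_along_axis(rankdata, 1, results)
    return ranks.mean(axis=0)

# illustrative usage with placeholder data: 30 functions, 12 algorithms
rng = np.random.default_rng(0)
res = rng.random((30, 12))
print(friedman_mean_ranks(res))    # mean rank (M.R) per algorithm
print(friedmanchisquare(*res.T))   # test statistic and p-value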
As shown in Table 5, the Friedman mean-rank test results demonstrate that MESPBO consistently achieves the best overall performance on the CEC2017 benchmark suite across 10-, 30-, and 50-dimensional settings. Specifically, the mean ranks of MESPBO are 2.00, 1.67, and 1.67 for the three dimensionalities, respectively, which are significantly lower than those of the eleven competing algorithms. Moreover, MESPBO ranks first in total ranking across all dimensions, indicating that it maintains the top position on the majority of test functions. In contrast, traditional algorithms such as PSO and SO, as well as several enhanced variants including IGWO, MSPBO, and QOCSPBO, exhibit relatively high mean ranks and total ranks, reflecting their inferior performance on most test functions. Although algorithms like SPBO and SBOA show comparatively better rankings in some dimensions, their mean ranks remain noticeably higher than those of MESPBO, implying that they cannot compete with MESPBO in terms of overall optimization performance. Overall, the Friedman test results further validate the stability, superiority, and consistency of MESPBO across different dimensional settings. These findings confirm that MESPBO delivers the most competitive and reliable performance among all compared algorithms and is the strongest algorithm in terms of comprehensive optimization capability.

5. MESPBO for Feature Selection

In this section, the proposed MESPBO algorithm is applied to the feature selection task to further verify its effectiveness and practicality in real-world optimization scenarios. Feature selection plays a crucial role in machine learning and data mining, as it aims to identify the most informative subset of features that can improve classification accuracy while reducing computational cost and model complexity. However, due to the combinatorial and highly nonlinear nature of the search space, traditional deterministic methods often fail to achieve satisfactory results, especially when dealing with high-dimensional datasets.
To address these challenges, population-based metaheuristic algorithms have been widely adopted for feature selection because of their strong global search capability and flexibility. By leveraging its enhanced exploration–exploitation balance and its multi-strategy learning mechanisms, the proposed MESPBO algorithm is expected to effectively search for optimal feature subsets and achieve a good trade-off between feature reduction and classification performance. The subsequent experiments evaluate MESPBO against several state-of-the-art metaheuristic algorithms on multiple benchmark datasets to demonstrate its robustness, convergence efficiency, and feature selection quality.

5.1. The Proposed MESPBO-KNN

The feature selection (FS) problem refers to the process of selecting an optimal subset of features from an original, high-dimensional feature space to achieve certain optimization objectives [38]. These objectives typically include improving the predictive performance of learning models, enhancing generalization ability, and reducing computational cost and data redundancy. Formally, given an original feature set $F = \{ f_{1}, f_{2}, \ldots, f_{n} \}$, the goal of FS is to identify a subset $S \subseteq F$ that maximizes model accuracy while minimizing the number of selected features.
The K-nearest neighbor (KNN) algorithm is a classic and widely used machine learning classifier that has been successfully applied in many fields [39], including medical image analysis [40], fault diagnosis [41], and natural language processing [42]. KNN performs classification by measuring the similarity between samples using the Euclidean distance. Its mathematical formula is shown in Equation (19).
$Dis(x_{1}, x_{2}) = \sqrt{ \sum_{k=1}^{N} \left( x_{1k} - x_{2k} \right)^{2} },$
For feature selection, the ultimate goal is to achieve the highest prediction accuracy with the minimum number of features. In this section, we propose a feature selection method called MESPBO-KNN by combining MESPBO with KNN. Assume that the dataset contains $D$ features: $X = \{ x_{1}, x_{2}, \ldots, x_{N} \}$, where $x_{i}$ is a $D$-dimensional feature vector and $y$ is the response variable. Our goal is to select a subset of the original $D$ features that minimizes the objective function expressed in Equation (20).
$\text{Minimize:} \quad \alpha \times CER + (1 - \alpha) \times \dfrac{|R|}{|D|},$
where $\alpha$ is a random number sampled from a uniform distribution; $CER = 1 - Accuracy$ denotes the classification error rate, where $Accuracy$ is calculated by Equation (21); $|R|$ represents the number of selected features; and $|D|$ is the total number of features.
$Accuracy = \dfrac{TP + TN}{TP + TN + FP + FN},$
where $TP$ represents the number of correctly classified positive samples, $TN$ represents the number of correctly classified negative samples, $FP$ indicates the count of false-positive instances, and $FN$ refers to positive samples misclassified as negative. The constraints of the optimization problem can be expressed as Equation (22).
$\text{Subject to:} \quad \sum_{j=1}^{D} x_{i,j} \le K, \quad \forall i \in \{ 1, \ldots, N \},$
where $K$ denotes the maximum number of features allowed to be selected. The decision variables are modeled as Equation (23).
$\text{With:} \quad x_{i,j} \in \{ 0, 1 \}, \quad \forall i \in \{ 1, \ldots, N \}, \; \forall j \in \{ 1, \ldots, D \},$
where $x_{i,j}$ is a binary decision variable indicating whether feature $j$ is selected for sample $i$: if $x_{i,j} = 1$, the feature is selected; otherwise, it is not.
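The complete objective can be assembled as in the following Python sketch, which combines Equations (19)–(23) with a KNN classifier from scikit-learn. The threshold used to binarize continuous positions, the neighborhood size k = 5, and the 5-fold cross-validation protocol are illustrative assumptions not fixed by the text.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(position, X, y, alpha=None, k=5, threshold=0.5, rng=None):
    # Feature-selection objective of Eq. (20). A continuous position
    # vector in [0, 1]^D is binarized by the threshold (Eq. (23)); alpha
    # is sampled uniformly when not given, as described in the text.
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.random() if alpha is None else alpha
    mask = position > threshold
    if not mask.any():                    # penalize empty feature subsets
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=k)  # Euclidean metric, Eq. (19)
    acc = cross_val_score(knn, X[:, mask], y, cv=5).mean()  # Eq. (21)
    cer = 1.0 - acc                       # classification error rate
    return alpha * cer + (1.0 - alpha) * mask.sum() / X.shape[1]  # Eq. (20)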

5.2. Simulation Experiment Analysis

In this section, we use 10 public datasets to evaluate the performance of MESPBO-KNN. It is worth noting that we divide these datasets into three categories: small datasets, medium datasets, and large datasets. Each dataset is divided into training, testing, and validation subsets using cross-validation, and then classified using the KNN classifier. The detailed information of the datasets is shown in Table 6.
In addition, to verify the effectiveness and competitiveness of the proposed MESPBO-KNN algorithm, a series of comparative experiments were conducted against several state-of-the-art algorithms. For fair evaluation and to minimize randomness, the population size and maximum iteration count were fixed at 50 and 100, respectively, and each algorithm was independently run 30 times. The detailed results are presented in Table 7, Table 8 and Table 9. In order to judge the convergence speed of each algorithm, Figure 6 shows the convergence curves of each algorithm on 10 problems.
As illustrated in Figure 6a–j, the convergence curves on ten datasets clearly demonstrate the superiority of MESPBO in both convergence speed and final solution quality. For Datasets 1–6, MESPBO decreases the fitness value much faster than the competing algorithms, typically reaching the lowest or near-lowest level within the first 10–20 iterations, whereas most baselines still exhibit slow descending trends. For more challenging datasets such as Datasets 3, 5, 6, 9, and 10, many algorithms show premature stagnation or weak late-stage improvement, with their curves trapped at relatively high fitness plateaus. In contrast, MESPBO continues to reduce the fitness steadily and finally attains the lowest stable fitness among all methods, indicating stronger global optimization capability. Moreover, for Datasets 7 and 8 where the final performances of different algorithms are close, MESPBO still achieves the best or tied-best terminal fitness with the smallest fluctuation, reflecting excellent stability and consistency. Overall, these convergence results confirm that MESPBO consistently provides faster convergence and better final optimization outcomes across diverse datasets, outperforming PSO, SO, GRO, SBOA, ED, ESC, HHWOA, IGWO, MSPBO, QOCSPBO, and SPBO.
As shown in Table 7, MESPBO achieves superior optimization performance across the ten datasets, consistently obtaining the lowest or near-lowest average fitness values in almost all cases. Specifically, for Datasets 1, 3, 4, 5, 6, 7, 8, 9, and 10, MESPBO attains the best or tied-best mean performance among all algorithms, indicating its strong ability to adapt to various data characteristics and deliver high-quality solutions. In addition, MESPBO exhibits the smallest standard deviations across all datasets, with values significantly lower than those of other competing algorithms. This demonstrates excellent consistency over multiple runs and highlights the algorithm’s strong stability. In contrast, several enhanced algorithms such as HHWOA, MSPBO, and QOCSPBO occasionally produce comparable mean fitness on certain datasets but generally suffer from larger standard deviations, implying unstable convergence. Traditional methods like PSO, SO, and GRO show relatively higher average fitness values on many datasets, reflecting their tendency to be trapped in local optima and their limited overall performance. Overall, the statistical results in Table 7 further confirm the robustness, reliability, and comprehensive superiority of MESPBO. The algorithm not only delivers the best solution quality but also maintains remarkable stability across diverse datasets, making it the most balanced and effective optimization method among all competitors.
According to the accuracy comparison in Table 8, MESPBO achieves overall leading performance across the ten datasets. For Dataset 2, Dataset 4, Dataset 6, and Dataset 7, all algorithms obtain almost identical accuracies (100% for Datasets 4/6/7 and 97.1% for Dataset 2), indicating that these datasets are relatively easy and MESPBO maintains equally optimal performance. On more discriminative and challenging datasets, MESPBO shows clearer superiority: it attains the highest accuracies of 95.81%, 93.33%, 81.5%, and 81.5% on Datasets 5, 8, 9, and 10, respectively, outperforming all competitors; it also achieves the best or near-best result on Dataset 3 with 89.0%. Although MESPBO is slightly below the top accuracy on Dataset 1 (with a marginal gap of about 0.2%), its performance remains within the top tier. Overall, MESPBO delivers the best accuracies on most complex datasets while preserving optimal consistency on easier ones, demonstrating strong generalization ability and stable classification performance.
As reported in Table 9, MESPBO generally selects fewer or an equal number of features while maintaining competitive classification performance, demonstrating superior feature reduction capability. For Datasets 1–4, the average number of selected features is very close across all algorithms, and MESPBO achieves the same minimal level as the best competitors, indicating that it does not introduce redundant features on relatively simple datasets. On more challenging datasets (Datasets 5–8), MESPBO shows a clearer advantage by selecting notably fewer features; for instance, it chooses about 2.1 features on Dataset 5, which is lower than PSO, SO, GRO, and ESC (typically ranging from about 2.2 to 3.6). For high-dimensional feature datasets such as Dataset 9 and Dataset 10, MESPBO again yields the smallest or tied-smallest feature subset (around 5.7/6 features), significantly fewer than several competitors. Overall, these results confirm that MESPBO can obtain compact and effective feature subsets across diverse datasets, reducing model complexity while preserving search effectiveness, and thus provides a strong and stable feature selection performance.
Overall, the convergence curves and Table 7, Table 8 and Table 9 consistently show that MESPBO delivers the best comprehensive performance across the ten datasets. It achieves the lowest mean fitness values with the smallest standard deviations on most datasets, indicating superior solution quality and robustness. In terms of classification accuracy, MESPBO attains the best or near-best results on challenging datasets while matching the optimal performance on easier ones. Moreover, it generally selects the smallest or tied-smallest number of features, effectively reducing redundancy and model complexity without sacrificing accuracy. In summary, MESPBO demonstrates clear advantages in convergence efficiency, optimization reliability, accuracy, and feature reduction capability, confirming its effectiveness for feature selection and classification optimization tasks.

5.3. MESPBO for Photovoltaic Model Parameter Extraction

5.3.1. Single Diode Model (SDM)

The Single Diode Model (SDM) is one of the most classic and commonly used equivalent circuit models for photovoltaic (PV) devices and arrays. It accurately characterizes the nonlinear I-V/P-V characteristics of the cell with fewer parameters, achieving a good balance between accuracy and complexity. Therefore, it is widely used in engineering simulation, performance evaluation, and control design. Furthermore, SDM parameters have clear physical correspondences, reflecting the impact of factors such as temperature, irradiance, aging, and shading on the internal mechanisms and external output of the device. This makes it an important tool for understanding PV degradation mechanisms, diagnosing faults, and conducting reliability analysis. Accurate SDM parameter identification is fundamental to many critical applications, such as maximum power point tracking (MPPT) algorithm design, inverter and grid-connected control, and PV system energy prediction and scheduling optimization. Inaccurate models or parameters will directly lead to power estimation errors, decreased control efficiency, and even system instability. In practical applications, PV systems often operate under dynamic environments (rapid temperature/irradiance changes, partial shading, and multi-peak characteristics). Researching high-precision and robust parameter extraction methods for the single-diode model helps improve the model's generalization ability and real-time availability under complex conditions. Therefore, in this section, we investigate parameter extraction for the single-diode model. The equivalent circuit of the single-diode model is shown in Figure 7.
The single-diode model comprises a current source that represents the photo-generated current induced by solar irradiation, a diode that characterizes the PN-junction behavior of the semiconductor, a series resistance $R_{s}$ reflecting the ohmic losses of electrodes, interconnections, and materials, and a shunt resistance $R_{sh}$ accounting for leakage paths through the semiconductor structure. The output current can be represented by Equation (24).
$I_{out} = I_{ph} - I_{d} - I_{sh},$
where $I_{ph}$, $I_{d}$, and $I_{sh}$ denote the photocurrent, the diode current, and the current through the shunt resistor, respectively. The diode and shunt currents can be expressed as follows:
$I_{d} = I_{o} \left[ \exp\left( \dfrac{q \, (V_{out} + R_{s} I_{out})}{a k T} \right) - 1 \right],$
$I_{sh} = \dfrac{V_{out} + R_{s} I_{out}}{R_{sh}},$
where $I_{o}$ is the diode reverse saturation current, $a$ is the diode ideality factor, $k$ is the Boltzmann constant ($1.3806503 \times 10^{-23}\ \mathrm{J \cdot K^{-1}}$), $q$ is the electron charge ($1.60217646 \times 10^{-19}\ \mathrm{C}$), and $T$ is the cell temperature in Kelvin.
Substituting Equations (25) and (26) into Equation (24) yields the output current–voltage relationship of the single-diode model:
$I_{out} = I_{ph} - I_{o} \left[ \exp\left( \dfrac{q \, (V_{out} + R_{s} I_{out})}{a k T} \right) - 1 \right] - \dfrac{V_{out} + R_{s} I_{out}}{R_{sh}}.$
Thus, the single-diode model is fully characterized by five parameters: $\{ I_{ph}, I_{o}, a, R_{s}, R_{sh} \}$.
The unknown parameters are obtained by casting their estimation as an optimization task, in which an objective function g quantifies the mismatch between the measured experimental values and the model’s predicted outputs. The optimization procedure seeks to minimize this mismatch over a predefined search space, thereby yielding the best-fitting set of parameters. Commonly adopted error formulations for this purpose are listed below:
$g(V_{out}, I_{out}, y) = N_{p} I_{ph} - N_{p} I_{o} \left[ \exp\left( \dfrac{ q \left( \frac{V_{out}}{N_{s}} + \frac{R_{s} I_{out}}{N_{p}} \right) }{a k T} \right) - 1 \right] - \dfrac{ N_{p} \left( \frac{V_{out}}{N_{s}} + \frac{R_{s} I_{out}}{N_{p}} \right) }{R_{sh}} - I_{out}, \quad y = \{ I_{ph}, I_{o}, a, R_{s}, R_{sh} \},$
where $N_{s} = 1$ and $N_{p} = 1$ are set in this work. The total discrepancy between the experimental I–V curve and the model prediction is assessed using the root mean square error (RMSE):
$RMSE(y) = \sqrt{ \dfrac{1}{N} \sum_{n=1}^{N} g\left( V_{out,n}, I_{out,n}, y \right)^{2} },$
where $N$ is the total number of measured data points $(V_{out,n}, I_{out,n})$.
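The objective function can be coded directly from Equations (27)–(29), as in the following Python sketch with $N_{s} = N_{p} = 1$; the vector y collects the five unknown parameters and is what the optimizer searches over.

import numpy as np

K_B = 1.3806503e-23   # Boltzmann constant (J/K), as in Eq. (25)
Q_E = 1.60217646e-19  # electron charge (C)

def sdm_current(V, I, y, T=306.15):
    # Model current of Eq. (27) evaluated at the measured (V, I) pairs,
    # with N_s = N_p = 1; T defaults to 33 degC expressed in Kelvin.
    I_ph, I_o, a, R_s, R_sh = y
    arg = Q_E * (V + R_s * I) / (a * K_B * T)
    return I_ph - I_o * (np.exp(arg) - 1.0) - (V + R_s * I) / R_sh

def rmse(y, V, I, T=306.15):
    # RMSE objective of Eqs. (28) and (29): residual between the model
    # prediction and each measured current, aggregated over N points.
    g = sdm_current(V, I, y, T) - I
    return np.sqrt(np.mean(g ** 2))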

5.3.2. Experimental Parameter Setting and Simulation Analysis

In this section, the effectiveness of the proposed MESPBO algorithm in photovoltaic model parameter identification is thoroughly evaluated. We first provide a concise description of the experimental setup and related parameters. Subsequently, MESPBO is employed to estimate the unknown variables of the single-diode model (SDM). The detailed settings and procedures are presented as follows:
(1) Experimental parameter setting
Experimental measurements were obtained from a Photowatt-PWP 201 photovoltaic module comprising 36 polycrystalline silicon cells connected in series. At an operating temperature of 33 °C and a solar irradiance of 1000 W/m2, a total of 26 current–voltage (I–V) data points were recorded. These data were used to identify the unknown parameters of the SDM, and the resulting estimates were subsequently benchmarked against those produced by other state-of-the-art optimization methods.
All compared algorithms were coded in MATLAB 2024b and run on a personal computer equipped with a 2.5 GHz CPU, 16 GB RAM, and Windows 11. For each problem, every algorithm was executed independently 30 times, using a population size of 50 and a maximum of 1000 iterations. To emphasize performance differences and verify their statistical reliability, the Wilcoxon rank-sum test was adopted. The feasible ranges of the unknown parameters for each model are listed in Table 10, where $Lb$ and $Ub$ denote the lower and upper bounds, respectively.
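As an aside, the Wilcoxon rank-sum comparison between two algorithms can be performed as sketched below with SciPy; the arrays are placeholders standing in for the 30 recorded RMSE values of each method.

import numpy as np
from scipy.stats import ranksums

# Placeholder run results; in practice these are the RMSE values
# recorded over 30 independent runs of MESPBO and one competitor.
rng = np.random.default_rng(1)
mespbo_rmse = rng.normal(2.4e-3, 1e-4, 30)
competitor_rmse = rng.normal(2.6e-3, 2e-4, 30)

stat, p = ranksums(mespbo_rmse, competitor_rmse)
print(stat, p)   # p < 0.05 indicates a statistically significant difference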
As noted above, the root mean square error (RMSE) provides a simple and effective measure of the discrepancy between experimental observations and model simulations. A smaller RMSE indicates a tighter match between the calculated and measured data, thereby demonstrating the algorithm’s stronger ability to identify the unknown parameters of the photovoltaic system. In other words, the extracted diode model can more faithfully capture the real operating characteristics of solar cells and PV modules. Therefore, reducing this error is of vital importance.
Moreover, the absolute error (IAE) and relative error (RE) are adopted to evaluate the deviation at each measured voltage point, which are defined as follows:
$IAE = \left| I_{measure} - I_{simulate} \right|,$
$RE = \dfrac{ I_{measure} - I_{simulate} }{ I_{measure} }.$
(2) Experimental analysis of the SDM
In this subsection, we conducted an experimental analysis using MESPBO and the 11 comparison algorithms. The experimental results are shown in Table 11, and Figure 8 shows the corresponding convergence curves. From the convergence curves on the SDM, all algorithms reduce the objective rapidly at the beginning, but their convergence rates and final precisions differ markedly. MESPBO achieves the steepest early decrease, driving the best score down to about the $10^{-3}$ level within the first tens of iterations, and stabilizes after roughly 200 iterations with the lowest final error among all competitors. This indicates both fast convergence and high solution accuracy. Algorithms such as SPBO, HHWOA, GRO, SBOA, and ED also keep improving toward the $10^{-3}$ range, but they converge more slowly or exhibit mid- to late-stage plateaus, reflecting weaker exploitation or stability than MESPBO. In contrast, IGWO, MSPBO, ESC, and SO stagnate at higher error levels, suggesting premature convergence. PSO and QOCSPBO show the slowest convergence and the highest final errors, implying insufficient global exploration and fine local search for this parameter-identification task. Overall, MESPBO combines the fastest early descent with the best final precision and minimal stagnation, confirming that its multi-strategy enhancements significantly improve convergence speed, accuracy, and robustness on the SDM PV model.
Figure 9 shows the I–V and P–V fitting results obtained by MESPBO after parameter identification of the SDM: the estimated curve (red circles) almost coincides with the measured curve (blue line) over the entire voltage range. The current plateau in the short-circuit region, the position of the knee point, and the rapid drop near the open-circuit voltage are all well aligned, indicating that the photocurrent, the series and shunt resistances, and the diode parameters are accurately identified. In the corresponding P–V curve, the voltage position and height of the maximum power point (MPP) likewise agree with the measured values, and the power decline beyond the peak is well matched. MESPBO therefore not only fits the current–voltage characteristics accurately but also reliably predicts the power output and the MPP parameters, verifying its high accuracy and stability in SDM parameter identification.
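For readers who wish to reproduce such a fit, the SDM output current at a given voltage can be obtained by solving the implicit diode equation I = Iph − Isd[exp((V + I·Rs)/(n·VT)) − 1] − (V + I·Rs)/Rsh. The sketch below applies Newton's method with the best SDM parameters reported in Table 11; the constants, initial guess, and iteration count are our assumptions for illustration, not the paper's implementation.

```python
import numpy as np

BOLTZMANN = 1.380649e-23        # J/K
ELEM_CHARGE = 1.602176634e-19   # C
T_CELL = 273.15 + 33.0          # operating temperature used in the experiments (K)
V_T = BOLTZMANN * T_CELL / ELEM_CHARGE  # thermal voltage (V)

def sdm_current(v, i_ph, i_sd, r_s, r_sh, n, iterations=100):
    """Solve the implicit SDM equation
    I = Iph - Isd*(exp((V + I*Rs)/(n*VT)) - 1) - (V + I*Rs)/Rsh
    for I at each voltage point by Newton's method."""
    v = np.asarray(v, dtype=float)
    i = np.zeros_like(v)  # simple initial guess
    for _ in range(iterations):
        expo = np.exp((v + i * r_s) / (n * V_T))
        f = i_ph - i_sd * (expo - 1.0) - (v + i * r_s) / r_sh - i
        dfdi = -i_sd * expo * r_s / (n * V_T) - r_s / r_sh - 1.0
        i = i - f / dfdi
    return i

# Example: approximately reproduce a few fitted I-V and P-V points with the
# best SDM parameters reported in Table 11 (Isd converted to amperes).
v_pts = np.array([-0.2057, 0.0057, 0.2924, 0.5265, 0.5900])
i_fit = sdm_current(v_pts, 7.6078e-1, 3.2305e-7, 3.6377e-2, 5.3721e1, 1.4812)
p_fit = v_pts * i_fit  # the P-V curve follows directly from P = V * I
```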
Table 12 reports the point-wise absolute errors of current and power obtained by MESPBO under the SDM. The overall pattern indicates highly accurate and stable fitting across the whole voltage range: the current errors (IAE_I) mostly lie on the order of 10−4 to 10−3, increasing only slightly in the sensitive knee/high-voltage region while remaining very small, and the power errors (IAE_P) are smaller still, generally in the 10−5 to 10−4 range, with only minor fluctuations around the peak-power area and no systematic deviation. These results confirm that the SDM parameters identified by MESPBO reconstruct the I–V and P–V characteristics with excellent global accuracy and robustness, achieving the smallest errors in the low-voltage region and maintaining low errors near the knee point and the MPP.

6. Summary and Limitations

In this paper, a novel metaheuristic algorithm called multi-strategy-enhanced student psychology-based optimization (MESPBO) was proposed to improve the optimization capability of the SPBO. By integrating three strategies—hybrid heuristic population initialization, adaptive dual-learning position update, and hybrid opposition-based reflective boundary control—MESPBO effectively enhances population diversity, improves convergence accuracy, and prevents premature stagnation. Extensive experiments conducted on the CEC2017 benchmark suite under 10-, 30-, and 50-dimensional scenarios demonstrated that MESPBO consistently outperforms eleven state-of-the-art optimization algorithms in terms of convergence speed, solution precision, and robustness. Furthermore, its successful application to feature selection tasks confirmed that MESPBO can efficiently reduce redundant features while maintaining or improving classification accuracy, thus proving its strong generalization ability and applicability in both continuous and combinatorial optimization domains.
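To make the first of these strategies concrete, a minimal initialization sketch is given below. It combines Latin Hypercube Sampling over the search bounds with a small Gaussian perturbation; the perturbation scale, the final clipping, and the helper name are illustrative choices of ours, not the exact formulation used in the paper.

```python
import numpy as np
from scipy.stats import qmc

def lhs_gaussian_init(pop_size, dim, lb, ub, sigma=0.01, seed=0):
    """Sketch of a hybrid heuristic initialization: Latin Hypercube Sampling
    spread over [lb, ub], followed by a small Gaussian perturbation.
    The perturbation scale and the final clipping are illustrative choices."""
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    sample = qmc.LatinHypercube(d=dim, seed=seed).random(pop_size)
    pop = qmc.scale(sample, lb, ub)
    rng = np.random.default_rng(seed)
    pop = pop + rng.normal(0.0, sigma * (ub - lb), size=pop.shape)
    return np.clip(pop, lb, ub)

# Example: 50 candidate solutions in 30 dimensions over [-100, 100]^30.
population = lhs_gaussian_init(pop_size=50, dim=30, lb=-100.0, ub=100.0)
```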
Although MESPBO demonstrates strong performance and generalization ability across both benchmark optimization and feature selection tasks, several inherent limitations should be acknowledged. First, because multiple enhancement strategies are integrated, the overall algorithmic structure is more complex than that of the original SPBO; the added computational overhead may translate into longer run times for extremely large populations or very high-dimensional problems. Second, like most population-based metaheuristics, MESPBO still relies on stochastic search operators, which may introduce performance fluctuations across independent runs; although robustness has been significantly improved, absolute consistency cannot be guaranteed. Third, the algorithm includes several hyperparameters related to its learning mechanisms and boundary control, and while their default settings work well across a variety of problems, their sensitivity to domain-specific tasks may still require careful tuning.

Author Contributions

Conceptualization, G.Z. and S.L.; methodology, G.Z. and S.L.; software, G.Z. and S.L.; validation, G.Z. and S.L.; formal analysis, G.Z. and S.L.; investigation, G.Z. and S.L.; resources, G.Z. and S.L.; data curation, G.Z. and S.L.; writing—original draft preparation, G.Z. and S.L.; writing—review and editing, G.Z. and S.L.; visualization, G.Z. and S.L.; supervision, G.Z. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shandong Province Humanities and Social Sciences Collaborative Special Project (24H325(C)) and the Natural Science Foundation of Zaozhuang University (07/102062501).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, M.; Chen, H.; Yang, B.; Zhao, X.; Hu, L.; Cai, Z.; Huang, H.; Tong, C. Toward an Optimal Kernel Extreme Learning Machine Using a Chaotic Moth-Flame Optimization Strategy with Applications in Medical Diagnoses. Neurocomputing 2017, 267, 69–84. [Google Scholar] [CrossRef]
  2. Zhang, L.; Wang, L. An Ensemble Learning-Enhanced Smart Prediction Model for Financial Credit Risks. J. Circuits Syst. Comput. 2024, 33, 7. [Google Scholar] [CrossRef]
  3. Qiao, Q. Image Processing Technology Based on Machine Learning. IEEE Consum. Electron. Mag. 2024, 13, 90–99. [Google Scholar] [CrossRef]
  4. Pandey, S.; Basisth, N.; Sachan, T.; Kumari, N.; Pakray, P. Quantum Machine Learning for Natural Language Processing Application. Phys. A Stat. Mech. Its Appl. 2023, 627, 129123. [Google Scholar] [CrossRef]
  5. Cai, J.; Luo, J.; Wang, S.; Yang, S. Feature Selection in Machine Learning: A New Perspective. Neurocomputing 2018, 300, 70–79. [Google Scholar] [CrossRef]
  6. Barrera-García, J.; Cisternas-Caneo, F.; Crawford, B.; Sánchez, M.; Soto, R. Feature Selection Problem and Metaheuristics: A Systematic Literature Review about Its Formulation, Evaluation and Applications. Biomimetics 2024, 9, 9. [Google Scholar] [CrossRef]
  7. Ming, H.; Heyong, W. Filter Feature Selection Methods for Text Classification: A Review. Multimed. Tools Appl. 2024, 83, 2053–2091. [Google Scholar] [CrossRef]
  8. Jain, R.; Xu, W. Artificial Intelligence Based Wrapper for High Dimensional Feature Selection. BMC Bioinform. 2023, 24, 392. [Google Scholar] [CrossRef]
  9. Liu, X.; Liang, Y.; Wang, S.; Yang, Z.; Ye, H. A Hybrid Genetic Algorithm With Wrapper-Embedded Approaches for Feature Selection. IEEE Access 2018, 6, 22863–22874. [Google Scholar] [CrossRef]
  10. Li, W.; Wang, G.; Gandomi, A. A Survey of Learning-Based Intelligent Optimization Algorithms. Arch. Comput. Methods Eng. 2021, 28, 3781–3799. [Google Scholar] [CrossRef]
  11. Duzgun, E.; Acar, E.; Yildiz, A. A Novel Chaotic Artificial Rabbits Algorithm for Optimization of Constrained Engineering Problems. Mater. Test. 2024, 66, 1449–1462. [Google Scholar] [CrossRef]
  12. Zhang, Y.; Gorriz, J.; Nayak, D. Optimization Algorithms and Machine Learning Techniques in Medical Image Analysis. Math. Biosci. Eng. 2023, 20, 5917–5920. [Google Scholar] [CrossRef]
  13. Jia, H.; Xing, Z.; Song, W. A New Hybrid Seagull Optimization Algorithm for Feature Selection. IEEE Access 2019, 7, 49614–49631. [Google Scholar] [CrossRef]
  14. Mozhdehi, A.; Khodadadi, N.; Aboutalebi, M.; El-kenawy, E.; Hussien, A.; Zhao, W.; Nadimi-Shahraki, M.; Mirjalili, S. Divine Religions Algorithm: A Novel Social-Inspired Metaheuristic Algorithm for Engineering and Continuous Optimization Problems. Clust. Comput.-J. Netw. Softw. Tools Appl. 2025, 28, 253. [Google Scholar] [CrossRef]
  15. Liu, X.; Li, S.; Wu, Y.; Fu, Z. Graduate Student Evolutionary Algorithm: A Novel Metaheuristic Algorithm for 3D UAV and Robot Path Planning. Biomimetics 2025, 10, 616. [Google Scholar] [CrossRef]
  16. Fu, S.; Li, K.; Huang, H.; Ma, C.; Fan, Q.; Zhu, Y. Red-Billed Blue Magpie Optimizer: A Novel Metaheuristic Algorithm for 2D/3D UAV Path Planning and Engineering Design Problems. Artif. Intell. Rev. 2024, 57, 134. [Google Scholar] [CrossRef]
  17. Tang, W.; Cao, L.; Chen, Y.; Chen, B.; Yue, Y. Solving Engineering Optimization Problems Based on Multi-Strategy Particle Swarm Optimization Hybrid Dandelion Optimization Algorithm. Biomimetics 2024, 9, 298. [Google Scholar] [CrossRef]
  18. Bao, Y.; Wang, J.; Xing, Y.; Zhang, S.; Zhao, X.; Zhang, S. Node Coverage Optimization for Wireless Sensor Networks Based on a Multi-Strategy Fusion Group Teaching Optimization Algorithm. Eng. Optim. 2025, 1–24. [Google Scholar] [CrossRef]
  19. Lu, Y.; Liu, L.; Zhang, J.; Peng, Y. Day-Ahead Operation Optimization of Microgrid Based on Enhanced Sardine Optimization Algorithm. Electr. Power Syst. Res. 2025, 249, 112047. [Google Scholar] [CrossRef]
  20. Das, B.; Mukherjee, V.; Das, D. Student Psychology Based Optimization Algorithm: A New Population Based Optimization Algorithm for Solving Optimization Problems. Adv. Eng. Softw. 2020, 146, 102804. [Google Scholar] [CrossRef]
  21. Balu, K.; Mukherjee, V. A Novel Quasi-Oppositional Chaotic Student Psychology-Based Optimization Algorithm for Deciphering Global Complex Optimization Problems. Knowl. Inf. Syst. 2023, 65, 5387–5477. [Google Scholar] [CrossRef]
  22. Shanmugam, G.; Thanarajan, T.; Rajendran, S.; Murugaraj, S. Student Psychology Based Optimized Routing Algorithm for Big Data Clustering in IoT with MapReduce Framework. J. Intell. Fuzzy Syst. 2023, 44, 2051–2063. [Google Scholar] [CrossRef]
  23. Basu, S.; Basu, M. Modified Student Psychology Based Optimization Algorithm for Economic Dispatch Problems. Appl. Artif. Intell. 2021, 35, 1508–1528. [Google Scholar] [CrossRef]
  24. Zhong, R.; Yu, J.; Zhang, C.; Munetomo, M. SRIME: A Strengthened RIME with Latin Hypercube Sampling and Embedded Distance-Based Selection for Engineering Optimization Problems. Neural Comput. Appl. 2024, 36, 6721–6740. [Google Scholar] [CrossRef]
  25. Hu, G.; Zheng, Y.; Abualigah, L.; Hussien, A. DETDO: An Adaptive Hybrid Dandelion Optimizer for Engineering Optimization. Adv. Eng. Inform. 2023, 57, 102004. [Google Scholar] [CrossRef]
  26. Hu, Z.; Dai, C.; Su, Q. Adaptive Backtracking Search Optimization Algorithm with a Dual-Learning Strategy for Dynamic Economic Dispatch with Valve-Point Effects. Energy 2022, 248, 123558. [Google Scholar] [CrossRef]
  27. Xie, F.; Yu, H.; Long, Q.; Zeng, W.; Lu, N. Battery Model Parameterization Using Manufacturer Datasheet and Field Measurement for Real-Time HIL Applications. IEEE Trans. Smart Grid 2020, 11, 2396–2406. [Google Scholar] [CrossRef]
  28. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University Singapore: Singapore, 2016; pp. 1–34. [Google Scholar]
  29. Gad, A.G. Particle Swarm Optimization Algorithm and Its Applications: A Systematic Review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561. [Google Scholar] [CrossRef]
  30. Hashim, F.; Hussien, A. Snake Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  31. Zolfi, K. Gold Rush Optimizer: A New Population-Based Metaheuristic Algorithm. Oper. Res. Decis. 2023, 33, 113–150. [Google Scholar] [CrossRef]
  32. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary Bird Optimization Algorithm: A New Metaheuristic for Solving Global Optimization Problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  33. Truong, D.-N.; Chou, J.-S. Metaheuristic Algorithm Inspired by Enterprise Development for Global Optimization and Structural Engineering Problems with Frequency Constraints. Eng. Struct. 2024, 318, 118679. [Google Scholar] [CrossRef]
  34. Ouyang, K.; Fu, S.; Chen, Y.; Cai, Q.; Heidari, A.A.; Chen, H. Escape: An Optimization Method Based on Crowd Evacuation Behaviors. Artif. Intell. Rev. 2024, 58, 19. [Google Scholar] [CrossRef]
  35. Su, Y.; Dai, Y.; Liu, Y. A Hybrid Hyper-Heuristic Whale Optimization Algorithm for Reusable Launch Vehicle Reentry Trajectory Optimization. Aerosp. Sci. Technol. 2021, 119, 107200. [Google Scholar] [CrossRef]
  36. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An Improved Grey Wolf Optimizer for Solving Engineering Problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  37. López-Vázquez, C.; Hochsztain, E. Extended and Updated Tables for the Friedman Rank Test. Commun. Stat.-Theory Methods 2019, 48, 268–281. [Google Scholar] [CrossRef]
  38. Li, M.; Wang, J.; Deng, S.; Zhao, Y.; Li, Y. Enhanced Black Widow Optimization Algorithm Incorporating Food Sufficiency Strategy and Differential Mutation Strategy for Feature Selection of High-Dimensional Data. Expert Syst. Appl. 2025, 290, 128506. [Google Scholar] [CrossRef]
  39. Comak, E.; Arslan, A. A New Training Method for Support Vector Machines: Clustering k-NN Support Vector Machines. Expert Syst. Appl. 2008, 35, 564–568. [Google Scholar] [CrossRef]
  40. Di Ruberto, C.; Fodde, G. Evaluation of Statistical Features for Medical Image Retrieval. In International Conference on Image Analysis and Processing; Petrosino, A., Ed.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8156, pp. 552–561. [Google Scholar]
  41. Ren, Z.; Tang, Y.; Zhang, W. Quality-Related Fault Diagnosis Based on k-Nearest Neighbor Rule for Non-Linear Industrial Processes. Int. J. Distrib. Sens. Netw. 2021, 17, 15501477211055931. [Google Scholar] [CrossRef]
  42. Brown, A.; Marotta, T. A Natural Language Processing-Based Model to Automate MRI Brain Protocol Selection and Prioritization. Acad. Radiol. 2017, 24, 160–166. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Student classification diagram.
Figure 2. The flowchart of SPBO.
Figure 3. The flowchart of MESPBO.
Figure 4. Comparison of convergence speed of different algorithms on the CEC2017 test set. (a) cec2017-f1 (dim = 10), (b) cec2017-f10 (dim = 10), (c) cec2017-f22 (dim = 10), (d) cec2017-f30 (dim = 10), (e) cec2017-f5 (dim = 30), (f) cec2017-f8 (dim = 30), (g) cec2017-f16 (dim = 30), (h) cec2017-f20 (dim = 30), (i) cec2017-f22 (dim = 30), (j) cec2017-f23 (dim = 30), (k) cec2017-f1 (dim = 50), (l) cec2017-f7 (dim = 50), (m) cec2017-f13 (dim = 50), (n) cec2017-f15 (dim = 50), (o) cec2017-f19 (dim = 50), (p) cec2017-f21 (dim = 50).
Figure 5. Boxplot analysis for different algorithms on the CEC2017 test set. (a) cec2017-f5 (dim = 10), (b) cec2017-f8 (dim = 10), (c) cec2017-f10 (dim = 10), (d) cec2017-f17 (dim = 10), (e) cec2017-f22 (dim = 10), (f) cec2017-f29 (dim = 10), (g) cec2017-f4 (dim = 30), (h) cec2017-f5 (dim = 30), (i) cec2017-f10 (dim = 30), (j) cec2017-f13 (dim = 30), (k) cec2017-f20 (dim = 30), (l) cec2017-f28 (dim = 30), (m) cec2017-f5 (dim = 50), (n) cec2017-f10 (dim = 50), (o) cec2017-f15 (dim = 50), (p) cec2017-f16 (dim = 50), (q) cec2017-f21 (dim = 50), (r) cec2017-f29 (dim = 50).
Figure 6. Comparison of convergence speed of different algorithms on the feature selection datasets. (a) Dataset 1, (b) Dataset 2, (c) Dataset 3, (d) Dataset 4, (e) Dataset 5, (f) Dataset 6, (g) Dataset 7, (h) Dataset 8, (i) Dataset 9, (j) Dataset 10.
Figure 7. Equivalent circuit of the single diode model.
Figure 8. The convergence curve of MESPBO and other algorithms on the SDM.
Figure 9. The I–V and P–V characteristics of the estimated SDM identified by MESPBO. (a) I–V of the SDM, (b) P–V of the SDM.
Table 1. Parameter settings of the compared algorithms.
Algorithm | Parameter Name(s) | Parameter Value(s)
PSO | Vmax, wMax, wMin, c1, c2 | 6, 0.9, 0.6, 2, 2
SO | c1, c2, c3 | 0.5, 0.05, 2
GRO | sigma_initial | 2
SBOA | t | 0.5
ED | ishow | 250
ESC | eliteSize, beta_base | 5, 1.5
HHWOA | w | 3
IGWO | a | 2
MSPBO | U, R1, R2 | 0.5, 0.33, 0.66
QOCSPBO | jumpRate | 0.2
Table 2. Experimental results of CEC2017 (dim = 10).
ID | Metric | PSO | SO | GRO | SBOA | ED | ESC | HHWOA | IGWO | MSPBO | QOCSPBO | SPBO | MESPBO
F1mean2.9814 × 1033.3019 × 1031.8098 × 1032.2149 × 1032.4692 × 1031.3422 × 1033.7052 × 1021.7027 × 1041.4412 × 1031.1691 × 1081.5116 × 1021.8885 × 102
std4.0299 × 1037.8603 × 1031.9065 × 1032.3589 × 1031.4839 × 1031.1709 × 1031.4817 × 1035.5602 × 1031.8378 × 1032.6019 × 1088.0474 × 1018.1775 × 101
F2mean2.0000 × 1022.0760 × 1026.1530 × 1022.0000 × 1022.0000 × 1022.0000 × 1022.0043 × 1022.0000 × 1022.5830 × 1041.0809 × 1072.0000 × 1022.0000 × 102
std0.0000 × 1003.2625 × 1016.7882 × 1020.0000 × 1000.0000 × 1000.0000 × 1006.2606 × 10−10.0000 × 1006.8349 × 1042.5789 × 1070.0000 × 1000.0000 × 100
F3mean3.0000 × 1023.0000 × 1023.0034 × 1023.0000 × 1023.3974 × 1023.0102 × 1023.0000 × 1023.0005 × 1021.5391 × 1032.1619 × 1032.9897 × 1033.0000 × 102
std2.9513 × 10−42.9154 × 10−31.2123 × 107.2192 × 10−125.3821 × 1012.8371 × 105.1711 × 10−143.6635 × 10−25.7748 × 1028.8096 × 1021.8069 × 1035.0623 × 10−14
F4mean4.0579 × 1024.0377 × 1024.0435 × 1024.0207 × 1024.0216 × 1024.0453 × 1024.0221 × 1024.0217 × 1024.0661 × 1024.5913 × 1024.0025 × 1024.0046 × 102
std1.1816 × 1011.9050 × 1001.9667 × 1009.0036 × 10−15.0678 × 10−12.6380 × 10−11.1082 × 1019.0710 × 10−16.2636 × 10−15.7352 × 1013.5411 × 10−12.4265 × 10−1
F5mean5.3582 × 1025.1181 × 1025.0769 × 1025.0809 × 1025.1328 × 1025.0426 × 1025.1682 × 1025.0678 × 1025.1721 × 1025.5706 × 1025.0517 × 1025.0425 × 102
std1.1597 × 1014.3230 × 1004.0976 × 1003.5127 × 1003.7001 × 1002.1595 × 1007.6273 × 1005.3774 × 1006.8634 × 1001.9745 × 1011.3427 × 1008.9587 × 10−1
F6mean6.0716 × 1026.0006 × 1026.0000 × 1026.0000 × 1026.0000 × 1026.0000 × 1026.0033 × 1026.0005 × 1026.0000 × 1026.3391 × 1026.0000 × 1026.0000 × 102
std6.2732 × 1001.9504 × 10−13.3782 × 10−35.5268 × 10−61.5177 × 10−44.6974 × 10−69.7174 × 10−11.2289 × 10−21.7287 × 10−41.0239 × 1011.0342 × 10−137.6117 × 10−14
F7mean7.2317 × 1027.2568 × 1027.1808 × 1027.1789 × 1027.2233 × 1027.1462 × 1027.2262 × 1027.2167 × 1027.3359 × 1027.6940 × 1027.1506 × 1027.1081 × 102
std4.8619 × 1007.3540 × 1002.6763 × 1005.7604 × 1003.0334 × 1001.6277 × 1005.6667 × 1008.2655 × 1007.6246 × 1001.7869 × 1011.3952 × 1003.2041 × 100
F8mean8.2043 × 1028.1316 × 1028.0899 × 1028.0909 × 1028.1322 × 1028.0368 × 1028.1496 × 1028.0740 × 1028.1619 × 1028.2864 × 1028.0551 × 1028.0331 × 102
std7.2160 × 1006.0902 × 1003.2053 × 1004.3665 × 1003.2359 × 1001.5683 × 1007.1818 × 1005.4027 × 1006.2505 × 1006.9581 × 1001.8793 × 1007.8205 × 10−1
F9mean9.0000 × 1029.0022 × 1029.0000 × 1029.0000 × 1029.0003 × 1029.0000 × 1029.0028 × 1029.0000 × 1029.0000 × 1021.3009 × 1039.0000 × 1029.0000 × 102
std7.0853 × 10−75.7142 × 10−12.5516 × 10−79.4412 × 10−149.3033 × 10−28.2697 × 10−94.1145 × 10−14.1183 × 10−42.2169 × 10−31.7234 × 1023.6464 × 10−60.0000 × 100
F10mean1.8351 × 1031.4772 × 1031.3651 × 1031.3071 × 1031.7283 × 1031.1977 × 1031.7373 × 1031.3397 × 1031.8416 × 1032.0960 × 1031.1607 × 1031.1192 × 103
std3.0903 × 1021.8638 × 1022.0713 × 1021.7428 × 1021.9398 × 1021.0851 × 1023.4245 × 1023.7366 × 1022.2667 × 1022.2106 × 1029.0718 × 1014.3279 × 101
F11mean1.1370 × 1031.1093 × 1031.1037 × 1031.1036 × 1031.1041 × 1031.1026 × 1031.1168 × 1031.1044 × 1031.1067 × 1031.2519 × 1031.1022 × 1031.1005 × 103
std1.9167 × 1015.0477 × 1001.2459 × 1001.4220 × 1002.3872 × 1001.4531 × 1001.6768 × 1012.1990 × 1002.7680 × 1009.5849 × 1011.1243 × 1004.1158 × 10−1
F12mean1.7014 × 1041.3361 × 1041.5602 × 1041.6323 × 1045.1249 × 1041.1953 × 1041.3330 × 1031.8918 × 1041.4100 × 1054.1247 × 1062.1198 × 1043.5939 × 103
std1.0552 × 1041.0458 × 1041.2176 × 1041.6828 × 1042.9166 × 1049.0781 × 1031.5341 × 1021.6807 × 1041.5868 × 1053.5646 × 1061.6353 × 1048.8142 × 102
F13mean6.3821 × 1034.7231 × 1032.1871 × 1032.2813 × 1033.6262 × 1036.0238 × 1031.3045 × 1032.1359 × 1034.4067 × 1031.2331 × 1041.5060 × 1031.3769 × 103
std5.9593 × 1033.3906 × 1037.7793 × 1021.3675 × 1031.5563 × 1036.5014 × 1032.6734 × 1005.4946 × 1022.7957 × 1037.5568 × 1032.3749 × 1023.5321 × 101
F14mean1.6978 × 1031.4891 × 1031.4383 × 1031.4315 × 1031.5168 × 1032.1039 × 1031.4238 × 1031.4512 × 1031.6156 × 1032.4446 × 1031.4800 × 1031.4179 × 103
std4.5901 × 1024.8873 × 1011.0416 × 1011.0730 × 1017.0773 × 1012.2352 × 1031.9862 × 1011.1742 × 1011.6062 × 1029.3036 × 1026.1648 × 1016.3990 × 10
F15mean2.1077 × 1031.6639 × 1031.5611 × 1031.5223 × 1031.5579 × 1031.8968 × 1031.5170 × 1031.5274 × 1032.1181 × 1037.8864 × 1031.5971 × 1031.5087 × 103
std7.7348 × 1021.1109 × 1025.2152 × 1012.1792 × 1014.1510 × 1017.7996 × 1024.4404 × 1011.3013 × 1014.4678 × 1022.7619 × 1031.1985 × 1023.1644 × 100
F16mean1.8504 × 1031.6932 × 1031.6325 × 1031.6116 × 1031.6054 × 1031.6168 × 1031.6450 × 1031.6038 × 1031.6250 × 1031.9686 × 1031.6074 × 1031.6013 × 103
std1.0783 × 1029.6429 × 1015.4797 × 1014.5520 × 1011.2992 × 1013.3072 × 1015.3681 × 1011.6456 × 101.5599 × 1011.0766 × 1022.3212 × 1012.8320 × 10−1
F17mean1.7649 × 1031.7356 × 1031.7278 × 1031.7178 × 1031.7069 × 1031.7047 × 1031.7224 × 1031.7322 × 1031.7437 × 1031.7800 × 1031.7008 × 1031.7173 × 103
std3.4152 × 1012.4262 × 1011.0797 × 1011.0540 × 1012.2991 × 1007.0844 × 1001.3511 × 1019.8757 × 1001.0337 × 1012.1877 × 1017.1941 × 10−16.2297 × 100
F18mean1.1272 × 1045.9195 × 1032.8207 × 1035.3461 × 1035.4902 × 1037.4367 × 1031.8068 × 1034.9668 × 1037.1777 × 1031.6427 × 1042.5954 × 1032.1796 × 103
std7.2253 × 1034.6359 × 1039.4287 × 1022.5627 × 1031.4442 × 1037.4005 × 1039.1106 × 1013.4350 × 1032.8988 × 1031.2650 × 1047.2833 × 1021.3448 × 102
F19mean3.0216 × 1032.1863 × 1031.9692 × 1031.9171 × 1031.9275 × 1034.7177 × 1031.9001 × 1031.9230 × 1032.6122 × 1031.4083 × 1041.9880 × 1031.9067 × 103
std1.8474 × 1035.4463 × 1021.0842 × 1021.2157 × 1012.4497 × 1014.4077 × 1032.7798 × 10−11.1630 × 1017.8364 × 1029.5644 × 1031.6802 × 1021.7189 × 100
F20mean2.0877 × 1032.0256 × 1032.0163 × 1032.0061 × 1032.0058 × 1032.0002 × 1032.0093 × 1032.0248 × 1032.0187 × 1032.1800 × 1032.0002 × 1032.0010 × 103
std5.4271 × 1012.7431 × 1011.9328 × 1018.4835 × 104.3121 × 104.7606 × 10−11.0720 × 1016.6342 × 101.2386 × 1015.5017 × 1012.6765 × 10−17.0716 × 10−1
F21mean2.3207 × 1032.3090 × 1032.2465 × 1032.2914 × 1032.2158 × 1032.2978 × 1032.2809 × 1032.2776 × 1032.2726 × 1032.2375 × 1032.2037 × 1032.2000 × 103
std4.2719 × 1012.0798 × 1015.3799 × 1014.1424 × 1013.9269 × 1012.8473 × 1015.3017 × 1014.8106 × 1013.3858 × 1011.8187 × 1011.9453 × 1011.2942 × 10−6
F22mean2.3010 × 1032.3017 × 1032.2970 × 1032.2978 × 1032.2879 × 1032.3004 × 1032.2994 × 1032.2981 × 1032.3018 × 1032.3351 × 1032.2582 × 1032.2446 × 103
std1.3015 × 1015.9380 × 10−11.8325 × 1011.4534 × 1013.1827 × 1012.7929 × 10−11.2660 × 1012.6677 × 1017.7753 × 1001.4254 × 1013.5382 × 1014.3063 × 101
F23mean2.6864 × 1032.6149 × 1032.6080 × 1032.6103 × 1032.6141 × 1032.6061 × 1032.6190 × 1032.6108 × 1032.6149 × 1032.6646 × 1032.6039 × 1032.6065 × 103
std2.6026 × 1015.6227 × 1003.0176 × 1004.0292 × 1002.8214 × 1002.3841 × 1009.4645 × 1007.4644 × 1006.6855 × 1002.7099 × 1013.5565 × 1019.3062 × 10−1
F24mean2.7832 × 1032.7430 × 1032.6710 × 1032.6905 × 1032.5815 × 1032.7363 × 1032.7449 × 1032.7265 × 1032.7422 × 1032.6877 × 1032.5199 × 1032.6166 × 103
std9.1167 × 1014.8112 × 1001.0492 × 1029.6898 × 1011.0098 × 1022.9374 × 1006.9177 × 1004.3174 × 1012.6718 × 1011.2556 × 1026.3539 × 1011.1864 × 102
F25mean2.9242 × 1032.9258 × 1032.9050 × 1032.9167 × 1032.8583 × 1032.9320 × 1032.9293 × 1032.8996 × 1032.9390 × 1032.9564 × 1032.7554 × 1032.8979 × 103
std2.2794 × 1012.2640 × 1011.5558 × 1012.2695 × 1017.1246 × 1012.2564 × 1012.3294 × 1018.3610 × 1001.3779 × 1011.8307 × 1011.4080 × 1021.3630 × 10−1
F26mean3.0505 × 1033.0843 × 1032.8633 × 1032.9020 × 1032.8460 × 1032.9386 × 1032.9904 × 1032.9000 × 1032.9685 × 1033.1960 × 1032.6743 × 1032.8509 × 103
std2.5484 × 1021.9785 × 1029.2786 × 1014.1088 × 1016.8196 × 1011.6053 × 1021.8862 × 1022.5704 × 10−33.3328 × 1013.4205 × 1029.8219 × 1018.1519 × 101
F27mean3.1357 × 1033.1021 × 1033.0926 × 1033.0904 × 1033.0932 × 1033.0901 × 1033.0947 × 1033.0894 × 1033.0905 × 1033.1459 × 1033.0928 × 1033.0889 × 103
std5.6496 × 1014.9836 × 1002.5674 × 1001.4881 × 1002.4482 × 1001.0209 × 1003.8125 × 1003.5394 × 10−11.2186 × 1004.7537 × 1012.4456 × 1003.6842 × 10−1
F28mean3.1593 × 1033.3659 × 1033.1313 × 1033.1783 × 1033.0775 × 1033.3036 × 1033.2670 × 1033.2103 × 1033.2225 × 1033.3803 × 1033.0543 × 1033.0700 × 103
std3.9504 × 1018.3022 × 1011.0842 × 1021.7300 × 1026.9108 × 1011.4389 × 1021.3651 × 1021.4758 × 1024.2744 × 1011.2192 × 1021.0384 × 1029.1536 × 101
F29mean3.2431 × 1033.1715 × 1033.1583 × 1033.1458 × 1033.1622 × 1033.1505 × 1033.1780 × 1033.1507 × 1033.2023 × 1033.3319 × 1033.1510 × 1033.1341 × 103
std5.6363 × 1012.5736 × 1011.5211 × 1011.5650 × 1013.9390 × 1011.0004 × 1012.3496 × 1011.2438 × 1011.9303 × 1018.0801 × 1019.2151 × 101.0926 × 10
F30mean2.0786 × 1046.2193 × 1041.0382 × 1048.9056 × 1042.4989 × 1042.3104 × 1052.2292 × 1056.2358 × 1042.3561 × 1051.4363 × 1061.1519 × 1043.9048 × 103
std1.8062 × 1042.0617 × 1055.7099 × 1032.4803 × 1051.7666 × 1043.6582 × 1054.1565 × 1052.1568 × 1051.6920 × 1051.0987 × 1061.3166 × 1041.8510 × 102
Table 3. Experimental results of CEC2017 (dim = 30).
ID | Metric | PSO | SO | GRO | SBOA | ED | ESC | HHWOA | IGWO | MSPBO | QOCSPBO | SPBO | MESPBO
F1mean2.4596 × 1054.5560 × 1041.5599 × 1066.8940 × 1032.9922 × 1033.5303 × 1034.6072 × 1033.8406 × 1053.9490 × 1064.6773 × 1091.8896 × 1021.5596 × 102
std1.7503 × 1056.6804 × 1041.8106 × 1066.7951 × 1032.3195 × 1033.4202 × 1035.1698 × 1032.2382 × 1051.7832 × 1061.2942 × 1099.7710 × 1015.1543 × 101
F2mean1.7501 × 10102.2070 × 10182.4786 × 10224.7204 × 10123.5066 × 10221.4068 × 10159.2087 × 10163.0351 × 10161.5720 × 10291.4922 × 10386.0646 × 1045.0359 × 108
std5.3363 × 10107.6977 × 10181.2492 × 10231.0226 × 10131.3679 × 10236.7197 × 10153.7255 × 10179.0360 × 10163.4408 × 10295.1992 × 10382.4761 × 1054.9406 × 108
F3mean5.6259 × 1035.6741 × 1043.4051 × 1046.6270 × 1038.0714 × 1044.0339 × 1043.0000 × 1025.7968 × 1031.3640 × 1057.3497 × 1049.1277 × 1042.0269 × 103
std2.8127 × 1038.4409 × 1036.2035 × 1033.1070 × 1031.5312 × 1041.0266 × 1043.9070 × 10−33.1029 × 1032.0652 × 1048.2942 × 1031.9301 × 1043.1745 × 102
F4mean4.6402 × 1025.0305 × 1025.1650 × 1024.9837 × 1024.7466 × 1025.0649 × 1024.7536 × 1024.9601 × 1025.9628 × 1021.6792 × 1034.2963 × 1024.7313 × 102
std2.6125 × 1012.4952 × 1011.9550 × 1011.9485 × 1013.5493 × 1011.4252 × 1013.0436 × 1011.2369 × 1012.3736 × 1015.3309 × 1022.9474 × 1015.0237 × 100
F5mean6.7834 × 1025.6314 × 1025.7199 × 1025.6033 × 1026.4726 × 1025.7734 × 1025.8799 × 1025.7573 × 1026.9789 × 1028.2933 × 1025.5268 × 1025.2816 × 102
std2.7151 × 1011.6095 × 1011.6196 × 1011.9374 × 1011.5657 × 1011.9956 × 1012.0267 × 1014.5163 × 1011.4828 × 1012.7358 × 1019.2838 × 1004.4624 × 100
F6mean6.3977 × 1026.0262 × 1026.0432 × 1026.0048 × 1026.0402 × 1026.0000 × 1026.0500 × 1026.0047 × 1026.0210 × 1026.7152 × 1026.0000 × 1026.0000 × 102
std7.1212 × 1001.9415 × 1001.9594 × 1005.7791 × 10−13.9740 × 1002.0211 × 10−33.6091 × 1002.2918 × 10−15.6066 × 10−15.7272 × 1008.4444 × 10−145.7673 × 10−4
F7mean8.5063 × 1028.2078 × 1028.0959 × 1028.1224 × 1028.6623 × 1028.2038 × 1028.6855 × 1028.2772 × 1029.4479 × 1021.3031 × 1037.7617 × 1027.5892 × 102
std2.7603 × 1012.9088 × 1013.3694 × 1013.1838 × 1012.1647 × 1011.4310 × 1014.5764 × 1015.8842 × 1011.1883 × 1016.8417 × 1017.0361 × 1003.5724 × 100
F8mean9.2021 × 1028.5703 × 1028.7075 × 1028.6112 × 1029.3687 × 1028.7087 × 1028.7879 × 1028.6354 × 1029.9632 × 1021.0348 × 1038.5800 × 1028.3075 × 102
std1.6790 × 1011.0402 × 1011.6366 × 1012.0792 × 1011.9235 × 1011.7022 × 1012.2166 × 1014.4014 × 1011.4977 × 1012.1772 × 1019.9026 × 1004.5211 × 100
F9mean4.7558 × 1031.2920 × 1031.1921 × 1039.6899 × 1021.2481 × 1039.0026 × 1021.3374 × 1039.0984 × 1021.3245 × 1038.1339 × 1031.0401 × 1039.0022 × 102
std1.3851 × 1032.8686 × 1022.6363 × 1028.9781 × 1012.6585 × 1024.6554 × 10−13.2112 × 1022.7461 × 1011.8158 × 1028.1707 × 1028.7559 × 1011.4123 × 10−1
F10mean4.7756 × 1033.6220 × 1034.2873 × 1034.0910 × 1035.0743 × 1036.5289 × 1034.7189 × 1036.4259 × 1038.4369 × 1037.3814 × 1032.9533 × 1033.5077 × 103
std7.3609 × 1021.1449 × 1035.8269 × 1026.3615 × 1022.9302 × 1024.7613 × 1026.5698 × 1022.2447 × 1033.1474 × 1025.5562 × 1022.0419 × 1022.9656 × 102
F11mean1.2083 × 1031.2477 × 1031.2030 × 1031.1668 × 1031.1935 × 1031.1767 × 1031.1823 × 1031.1858 × 1031.5164 × 1032.6927 × 1031.2307 × 1031.1153 × 103
std2.6490 × 1014.9339 × 1013.1518 × 1013.0609 × 1013.9320 × 1012.0054 × 1013.6721 × 1012.7607 × 1018.6856 × 1015.3808 × 1026.4523 × 1014.0984 × 100
F12mean1.5396 × 1067.2948 × 1057.7828 × 1053.5819 × 1053.5131 × 1057.2829 × 1056.1237 × 1041.4945 × 1063.5254 × 1067.5238 × 1083.1685 × 1054.0537 × 104
std1.3335 × 1068.1913 × 1056.1627 × 1052.8788 × 1052.9175 × 1056.3280 × 1057.1867 × 1049.2758 × 1051.5726 × 1063.7548 × 1082.2746 × 1051.1669 × 104
F13mean1.4813 × 1042.0626 × 1042.0060 × 1042.2367 × 1042.7550 × 1041.6355 × 1041.9835 × 1041.4977 × 1052.0470 × 1044.9138 × 1076.7450 × 1032.7933 × 103
std1.7632 × 1041.6147 × 1041.4572 × 1041.9108 × 1041.9773 × 1041.1804 × 1041.8113 × 1048.5169 × 1041.0036 × 1043.7143 × 1074.8294 × 1037.1352 × 102
F14mean1.5928 × 1042.9210 × 1041.2283 × 1041.6767 × 1043.4752 × 1043.7845 × 1041.4855 × 1037.9845 × 1032.0494 × 1051.3408 × 1065.6801 × 1042.4274 × 103
std2.5451 × 1043.9078 × 1049.4987 × 1031.9670 × 1043.6214 × 1044.8333 × 1046.0844 × 1016.2977 × 1031.9066 × 1051.2779 × 1063.9885 × 1044.3621 × 102
F15mean8.3592 × 1037.5755 × 1036.8856 × 1038.9023 × 1036.2047 × 1035.0609 × 1031.6050 × 1032.2718 × 1044.6087 × 1031.3321 × 1062.0984 × 1032.5486 × 103
std9.8873 × 1037.0157 × 1035.3948 × 1038.5654 × 1034.8499 × 1034.2108 × 1032.4807 × 1021.7717 × 1042.6104 × 1039.6807 × 1056.7721 × 1024.8055 × 102
F16mean2.5776 × 1032.3622 × 1032.1912 × 1032.2377 × 1032.8702 × 1032.0357 × 1032.5562 × 1032.1639 × 1033.2951 × 1034.0369 × 1032.1409 × 1031.8364 × 103
std2.8817 × 1022.4040 × 1021.6478 × 1022.8943 × 1021.6047 × 1021.9427 × 1023.5644 × 1024.4080 × 1021.9081 × 1024.0050 × 1021.1976 × 1021.0272 × 102
F17mean2.3070 × 1031.9907 × 1031.8437 × 1031.8421 × 1032.1334 × 1031.8305 × 1032.1745 × 1031.8507 × 1032.3505 × 1032.6310 × 1031.8361 × 1031.8012 × 103
std2.1975 × 1021.6693 × 1025.9343 × 1018.4069 × 1011.2962 × 1029.3385 × 1011.7479 × 1021.1620 × 1021.4710 × 1022.3809 × 1026.0487 × 1011.8146 × 101
F18mean3.6281 × 1054.0398 × 1053.1386 × 1053.0627 × 1057.0699 × 1054.9221 × 1059.4064 × 1031.4821 × 1053.2111 × 1067.2249 × 1061.2175 × 1057.6168 × 104
std2.7583 × 1053.1315 × 1052.2033 × 1052.5376 × 1053.5569 × 1055.3267 × 1051.1557 × 1048.8047 × 1041.5455 × 1066.1290 × 1066.1649 × 1042.3466 × 104
F19mean1.1334 × 1041.0154 × 1047.9412 × 1031.1190 × 1049.4179 × 1036.1317 × 1035.6914 × 1031.5536 × 1048.3464 × 1031.0231 × 1072.9665 × 1032.2927 × 103
std1.0137 × 1048.8497 × 1036.2186 × 1031.0766 × 1041.2030 × 1046.8140 × 1031.3776 × 1041.5239 × 1045.1228 × 1037.4936 × 1061.1357 × 1031.9748 × 102
F20mean2.6406 × 1032.3675 × 1032.2448 × 1032.1960 × 1032.5458 × 1032.1610 × 1032.4907 × 1032.1540 × 1032.6468 × 1032.7801 × 1032.1671 × 1032.0962 × 103
std2.1483 × 1021.4741 × 1021.0063 × 1021.0229 × 1021.1028 × 1021.1205 × 1021.8798 × 1027.8455 × 1011.7222 × 1021.7913 × 1026.2669 × 1012.3384 × 101
F21mean2.4736 × 1032.3630 × 1032.3575 × 1032.3479 × 1032.4452 × 1032.3663 × 1032.3867 × 1032.3600 × 1032.4948 × 1032.6302 × 1032.3462 × 1032.3294 × 103
std6.1024 × 1011.1820 × 1011.6702 × 1011.4040 × 1011.7554 × 1012.0893 × 1012.8716 × 1014.0240 × 1011.4360 × 1013.6998 × 1014.3418 × 1014.9411 × 100
F22mean4.7374 × 1033.2874 × 1032.3102 × 1032.3006 × 1035.5191 × 1032.8079 × 1033.9133 × 1032.9150 × 1037.7309 × 1035.4541 × 1032.3821 × 1032.3000 × 103
std2.2074 × 1031.2087 × 1034.7894 × 1001.1814 × 1001.8152 × 1031.5678 × 1032.2077 × 1031.8695 × 1032.3602 × 1031.8640 × 1034.4658 × 1024.5485 × 10−7
F23mean3.0755 × 1032.7434 × 1032.7174 × 1032.7017 × 1032.8089 × 1032.6986 × 1032.7675 × 1032.7060 × 1032.8412 × 1033.2794 × 1032.7113 × 1032.6900 × 103
std1.2654 × 1022.0901 × 1011.8198 × 1011.3847 × 1012.0016 × 1012.0133 × 1013.0742 × 1014.0844 × 1011.3533 × 1011.2719 × 1021.0862 × 1017.1280 × 100
F24mean3.1811 × 1032.8933 × 1032.8786 × 1032.8666 × 1032.9771 × 1032.9091 × 1032.9258 × 1032.9040 × 1033.0199 × 1033.3914 × 1032.8743 × 1032.8589 × 103
std8.8045 × 1011.4178 × 1011.5979 × 1011.7443 × 1012.7855 × 1011.4530 × 1014.3870 × 1016.3692 × 1011.1416 × 1011.2368 × 1021.5190 × 1025.0005 × 100
F25mean2.8845 × 1032.8932 × 1032.9117 × 1032.8980 × 1032.8904 × 1032.8891 × 1032.8981 × 1032.8872 × 1032.9390 × 1033.2345 × 1032.8838 × 1032.8848 × 103
std1.3505 × 1019.8600 × 1001.9462 × 1011.7328 × 1017.9075 × 1004.4094 × 1001.6850 × 1011.6754 × 1001.0540 × 1018.7998 × 1011.8792 × 10−11.5740 × 100
F26mean5.7774 × 1034.7504 × 1033.6235 × 1033.8717 × 1035.0612 × 1034.0406 × 1035.0263 × 1033.8676 × 1035.6073 × 1037.9669 × 1033.0703 × 1033.7417 × 103
std1.9820 × 1032.6490 × 1027.2199 × 1025.9975 × 1026.5764 × 1022.3494 × 1024.0932 × 1024.7102 × 1021.4298 × 1021.3764 × 1035.6179 × 1024.9138 × 102
F27mean3.3211 × 1033.2571 × 1033.2398 × 1033.2130 × 1033.2406 × 1033.2121 × 1033.2461 × 1033.2021 × 1033.2433 × 1033.6311 × 1033.2112 × 1033.1971 × 103
std1.9337 × 1021.9526 × 1011.1989 × 1018.8266 × 1001.8051 × 1016.9371 × 1002.8123 × 1011.0024 × 1019.0040 × 1001.5912 × 1024.6880 × 1004.6605 × 100
F28mean3.2370 × 1033.2646 × 1033.2669 × 1033.2150 × 1033.2334 × 1033.2322 × 1033.1649 × 1033.2194 × 1033.3506 × 1034.0120 × 1033.1591 × 1033.1891 × 103
std2.3163 × 1012.1089 × 1012.6497 × 1011.2274 × 1012.4596 × 1011.6115 × 1016.3881 × 1011.0705 × 1012.1026 × 1012.4593 × 1023.9725 × 1012.3434 × 101
F29mean4.1611 × 1033.8186 × 1033.6129 × 1033.5210 × 1033.7897 × 1033.4786 × 1033.8356 × 1033.4676 × 1034.3602 × 1035.3447 × 1033.4467 × 1033.4540 × 103
std2.0489 × 1021.8372 × 1021.4201 × 1021.5489 × 1021.2275 × 1029.1715 × 1012.4729 × 1029.2550 × 1011.7632 × 1024.6063 × 1026.8469 × 1013.4203 × 101
F30mean4.4794 × 1042.3836 × 1041.6167 × 1041.2819 × 1044.1836 × 1041.0345 × 1049.6933 × 1032.1012 × 1052.2835 × 1057.3960 × 1076.5580 × 1038.7688 × 103
std2.5129 × 1043.9028 × 1048.2884 × 1034.6478 × 1035.4515 × 1043.3683 × 1033.9358 × 1031.4625 × 1051.1340 × 1057.6312 × 1076.0648 × 1021.2881 × 103
Table 4. Experimental results of CEC2017 (dim = 50).
ID | Metric | PSO | SO | GRO | SBOA | ED | ESC | HHWOA | IGWO | MSPBO | QOCSPBO | SPBO | MESPBO
F1mean8.6734 × 1062.3685 × 1065.3913 × 1081.2142 × 1043.3006 × 1042.7644 × 1033.9216 × 1031.2373 × 1076.9607 × 1082.3090 × 10108.5150 × 1023.8769 × 102
std3.4959 × 1061.5953 × 1063.4521 × 1081.1842 × 1043.8494 × 1042.7582 × 1034.8496 × 1035.8869 × 1062.1505 × 1084.9392 × 1098.9134 × 1021.3170 × 102
F2mean1.1321 × 10237.7286 × 10411.2534 × 10471.8707 × 10331.1498 × 10441.0850 × 10421.5421 × 10425.7637 × 10372.4870 × 10614.1030 × 10732.4739 × 10137.4016 × 1026
std2.3415 × 10231.8387 × 10424.9974 × 10471.0239 × 10345.9854 × 10443.9538 × 10426.2931 × 10422.1849 × 10387.2696 × 10612.2083 × 10741.3437 × 10141.0605 × 1027
F3mean8.0519 × 1041.3709 × 1051.0701 × 1054.2937 × 1042.4239 × 1051.5926 × 1051.4391 × 1033.1961 × 1043.1827 × 1051.7614 × 1052.4043 × 1054.2018 × 104
std1.9113 × 1041.4660 × 1041.8568 × 1041.0579 × 1043.4754 × 1042.7322 × 1041.2826 × 1037.2848 × 1033.9136 × 1041.4209 × 1042.5516 × 1043.7041 × 103
F4mean5.3900 × 1026.0547 × 1027.4178 × 1025.4650 × 1025.5164 × 1025.9117 × 1025.2955 × 1025.9929 × 1029.2187 × 1024.9451 × 1034.2880 × 1024.8216 × 102
std4.7706 × 1015.3270 × 1019.6985 × 1015.7918 × 1013.9988 × 1014.1892 × 1014.7105 × 1014.6263 × 1015.9182 × 1011.4072 × 1031.5886 × 1012.6648 × 101
F5mean7.7107 × 1026.3086 × 1026.9511 × 1026.7350 × 1028.4367 × 1026.9753 × 1026.9542 × 1026.4328 × 1029.2641 × 1021.0411 × 1036.3653 × 1025.6668 × 102
std3.1775 × 1012.2643 × 1013.3310 × 1013.4169 × 1012.3217 × 1013.6683 × 1013.5800 × 1014.8374 × 1012.1667 × 1013.1057 × 1011.3004 × 1015.5336 × 100
F6mean6.5205 × 1026.1019 × 1026.1627 × 1026.0634 × 1026.2313 × 1026.0010 × 1026.1961 × 1026.0213 × 1026.1160 × 1026.8998 × 1026.0000 × 1026.0015 × 102
std6.0171 × 1005.2724 × 1004.6810 × 1003.7230 × 1005.5937 × 1001.1072 × 10−17.0140 × 1009.1144 × 10−12.1198 × 1006.4049 × 1007.3131 × 10−143.6565 × 10−2
F7mean1.0737 × 1039.4840 × 1021.0219 × 1039.8636 × 1021.1433 × 1039.7989 × 1021.1169 × 1039.4569 × 1021.2161 × 1031.9081 × 1038.5993 × 1028.1874 × 102
std6.2901 × 1013.6518 × 1016.2968 × 1016.8683 × 1015.4777 × 1012.6986 × 1017.8483 × 1017.9035 × 1012.0705 × 1018.4598 × 1011.2753 × 1019.5937 × 100
F8mean1.0985 × 1039.2707 × 1021.0027 × 1039.5902 × 1021.1365 × 1039.7553 × 1029.9297 × 1029.7436 × 1021.2307 × 1031.3336 × 1039.3542 × 1028.6486 × 102
std2.9356 × 1011.7435 × 1013.9263 × 1013.2953 × 1012.4816 × 1014.8029 × 1014.1708 × 1019.1353 × 1012.0802 × 1013.4417 × 1011.5205 × 1016.3932 × 100
F9mean2.2509 × 1042.4686 × 1034.0613 × 1032.9595 × 1031.0470 × 1049.6268 × 1023.1924 × 1031.5135 × 1036.1152 × 1032.9383 × 1041.8765 × 1039.1989 × 102
std5.2617 × 1038.7294 × 1021.1291 × 1031.2072 × 1033.7003 × 1035.8707 × 1011.0576 × 1036.6138 × 1021.2177 × 1032.8397 × 1033.0329 × 1026.1594 × 100
F10mean7.3854 × 1039.0578 × 1037.3818 × 1036.6217 × 1038.8334 × 1031.2089 × 1047.8758 × 1031.2131 × 1041.5200 × 1041.2632 × 1044.7369 × 1036.5693 × 103
std9.4706 × 1022.8790 × 1037.8832 × 1029.3155 × 1024.1864 × 1026.9525 × 1029.7975 × 1023.5238 × 1033.4271 × 1026.1356 × 1022.3504 × 1025.1587 × 102
F11mean1.3285 × 1031.5532 × 1032.0524 × 1031.2637 × 1031.5698 × 1031.3574 × 1031.3268 × 1031.4355 × 1035.7594 × 1037.5358 × 1031.7598 × 1031.2226 × 103
std4.4182 × 1011.2958 × 1024.7057 × 1024.3111 × 1011.5205 × 1021.3242 × 1026.4318 × 1016.8734 × 1011.2641 × 1031.3115 × 1034.8893 × 1021.4840 × 101
F12mean1.1469 × 1071.1554 × 1071.3915 × 1074.3907 × 1064.3492 × 1065.0536 × 1066.4706 × 1052.1394 × 1071.5508 × 1085.4347 × 1091.6127 × 1064.8023 × 105
std5.9075 × 1068.2347 × 1061.0430 × 1072.3229 × 1063.0015 × 1063.2520 × 1064.1159 × 1051.0725 × 1074.2305 × 1071.8336 × 1095.6886 × 1051.6396 × 105
F13mean2.9101 × 1043.8254 × 1049.1108 × 1031.1536 × 1048.1246 × 1039.4035 × 1031.1992 × 1043.6843 × 1051.8224 × 1057.6275 × 1083.2039 × 1032.3976 × 103
std1.1205 × 1042.3960 × 1043.6171 × 1039.0202 × 1036.8628 × 1034.1045 × 1039.3041 × 1032.0270 × 1051.3935 × 1054.2349 × 1082.4444 × 1036.2663 × 102
F14mean1.3427 × 1051.8915 × 1051.2733 × 1051.7565 × 1055.0495 × 1053.7153 × 1059.1018 × 1038.6698 × 1041.6267 × 1068.0739 × 1066.0671 × 1052.0301 × 104
std8.7992 × 1041.3875 × 1058.9269 × 1041.4661 × 1053.3528 × 1054.2982 × 1058.0683 × 1036.9063 × 1046.0420 × 1058.8328 × 1064.2494 × 1057.2294 × 103
F15mean1.0373 × 1041.2367 × 1049.1041 × 1031.1467 × 1046.9715 × 1037.6237 × 1031.0527 × 1041.0932 × 1052.5429 × 1046.8868 × 1074.1130 × 1032.4394 × 103
std7.2767 × 1035.8638 × 1034.7893 × 1037.0539 × 1036.4609 × 1034.7378 × 1038.9865 × 1038.0293 × 1041.2912 × 1046.5183 × 1072.6936 × 1033.8271 × 102
F16mean3.2325 × 1032.9354 × 1032.8243 × 1032.7223 × 1034.0200 × 1033.0385 × 1033.2836 × 1032.5526 × 1035.0559 × 1036.4076 × 1032.7871 × 1032.5329 × 103
std4.7497 × 1023.6022 × 1022.9499 × 1023.6445 × 1023.2554 × 1023.6073 × 1024.1614 × 1024.4912 × 1023.0523 × 1028.6449 × 1022.2885 × 1022.0594 × 102
F17mean3.1660 × 1032.7204 × 1032.6991 × 1032.5763 × 1033.3163 × 1032.5442 × 1033.1080 × 1032.5811 × 1033.9918 × 1034.4062 × 1032.4494 × 1032.4128 × 103
std2.3482 × 1022.6615 × 1022.5654 × 1022.9932 × 1022.4172 × 1022.4353 × 1023.2673 × 1024.1630 × 1022.0232 × 1026.4877 × 1021.5883 × 1021.6441 × 102
F18mean1.5490 × 1062.1561 × 1061.5097 × 1061.1952 × 1064.3640 × 1063.5759 × 1064.1173 × 1046.0097 × 1052.1350 × 1073.8823 × 1076.2119 × 1053.2767 × 105
std8.9012 × 1051.3394 × 1061.2341 × 1067.4304 × 1052.6864 × 1062.7580 × 1063.1312 × 1044.0373 × 1058.2142 × 1062.7541 × 1073.4535 × 1057.4511 × 104
F19mean1.6640 × 1041.7242 × 1041.7670 × 1041.8365 × 1041.1502 × 1041.6405 × 1041.8139 × 1046.2506 × 1042.1137 × 1042.1800 × 1075.2383 × 1032.3734 × 103
std8.2542 × 1031.2242 × 1041.0550 × 1041.1407 × 1041.0138 × 1049.5165 × 1031.2900 × 1043.1326 × 1045.9293 × 1031.8551 × 1072.6114 × 1033.6538 × 102
F20mean3.1222 × 1032.9760 × 1032.6872 × 1032.6268 × 1033.5373 × 1032.7731 × 1033.0128 × 1032.9690 × 1034.1382 × 1033.6252 × 1032.6225 × 1032.5935 × 103
std2.8785 × 1024.1701 × 1022.4773 × 1022.4663 × 1021.1407 × 1021.8729 × 1023.3030 × 1026.2917 × 1021.5947 × 1022.4593 × 1021.4247 × 1021.1464 × 102
F21mean2.6546 × 1032.4354 × 1032.4782 × 1032.4320 × 1032.6341 × 1032.4840 × 1032.4960 × 1032.4514 × 1032.7096 × 1032.9960 × 1032.4481 × 1032.3720 × 103
std4.3705 × 1012.4362 × 1013.0549 × 1012.9321 × 1012.7099 × 1014.6545 × 1014.2499 × 1019.2199 × 1011.9422 × 1017.3970 × 1011.8129 × 1018.4225 × 100
F22mean9.3375 × 1031.1584 × 1046.8459 × 1037.3787 × 1031.0977 × 1041.3503 × 1049.7794 × 1031.3032 × 1041.6783 × 1041.4541 × 1046.1198 × 1037.5317 × 103
std1.1614 × 1032.6307 × 1033.1716 × 1032.4111 × 1031.7210 × 1036.0105 × 1028.8270 × 1023.7830 × 1033.3514 × 1028.0271 × 1021.5457 × 1031.4445 × 103
F23mean3.5836 × 1032.9259 × 1032.9283 × 1032.8593 × 1033.1107 × 1032.8762 × 1033.0062 × 1032.8584 × 1033.1461 × 1033.9310 × 1032.8909 × 1032.8289 × 103
std1.2674 × 1022.9619 × 1014.1576 × 1013.3266 × 1014.9950 × 1015.2826 × 1016.5735 × 1016.8289 × 1011.6807 × 1011.6572 × 1022.1512 × 1011.4111 × 101
F24mean3.5913 × 1033.0656 × 1033.0762 × 1033.0285 × 1033.3161 × 1033.1162 × 1033.1666 × 1033.0353 × 1033.3121 × 1034.0659 × 1033.2504 × 1033.0033 × 103
std1.4742 × 1023.3062 × 1013.7799 × 1013.0310 × 1016.4662 × 1013.0405 × 1016.2404 × 1019.1306 × 1011.9104 × 1011.7734 × 1024.0101 × 1011.3501 × 101
F25mean2.9909 × 1033.0948 × 1033.2390 × 1033.0895 × 1033.0797 × 1033.1086 × 1033.0535 × 1033.0938 × 1033.4498 × 1035.9306 × 1032.9687 × 1033.0183 × 103
std4.3426 × 1013.4516 × 1015.6194 × 1013.2142 × 1012.9337 × 1012.7134 × 1013.8784 × 1013.3696 × 1017.4813 × 1016.3242 × 1022.0415 × 1011.6804 × 101
F26mean8.0375 × 1036.0354 × 1035.5124 × 1035.5096 × 1037.1895 × 1034.9611 × 1036.7682 × 1035.4370 × 1037.9446 × 1031.3537 × 1044.0868 × 1034.6816 × 103
std3.7319 × 1034.0146 × 1021.3363 × 1031.7288 × 1033.5661 × 1024.2283 × 1027.8138 × 1027.5542 × 1022.1503 × 1021.3829 × 1031.2892 × 1031.2812 × 102
F27mean4.0311 × 1033.5952 × 1033.5472 × 1033.3197 × 1033.7637 × 1033.4086 × 1033.6112 × 1033.3001 × 1033.7857 × 1034.8827 × 1033.3392 × 1033.2561 × 103
std7.5936 × 1027.7932 × 1017.1038 × 1015.4548 × 1011.3402 × 1026.0966 × 1011.7404 × 1025.4462 × 1018.8730 × 1017.2274 × 1022.6623 × 1011.1242 × 101
F28mean3.2967 × 1033.4429 × 1033.6382 × 1033.3622 × 1033.3717 × 1033.4681 × 1033.3154 × 1033.3646 × 1034.4836 × 1036.0265 × 1033.2636 × 1033.2836 × 103
std2.9886 × 1015.2514 × 1011.0682 × 1024.1064 × 1013.6633 × 1016.7595 × 1012.6669 × 1015.8427 × 1012.1107 × 1026.1755 × 1028.4268 × 1001.2148 × 101
F29mean4.9707 × 1034.3162 × 1034.1629 × 1033.8195 × 1034.8113 × 1033.6392 × 1034.7845 × 1033.8262 × 1035.7029 × 1031.0002 × 1043.7594 × 1033.7718 × 103
std4.2478 × 1022.5397 × 1022.6689 × 1022.7194 × 1024.2288 × 1021.7120 × 1023.4330 × 1022.5904 × 1022.1802 × 1021.6476 × 1031.2522 × 1029.8391 × 101
F30mean3.8617 × 1062.7456 × 1061.5263 × 1061.0568 × 1062.9302 × 1061.2185 × 1061.0653 × 1068.8823 × 1061.1536 × 1073.7356 × 1086.5996 × 1059.2349 × 105
std1.6101 × 1069.2617 × 1052.8278 × 1053.4573 × 1051.0841 × 1063.7175 × 1053.7166 × 1053.7392 × 1064.1323 × 1061.1042 × 1084.3430 × 1041.1365 × 105
Table 5. Friedman mean rank test results on the CEC2017 suite (M.R = mean rank, T.R = overall rank).
Algorithm | M.R (10D) | T.R (10D) | M.R (30D) | T.R (30D) | M.R (50D) | T.R (50D)
PSO | 9.23 | 10 | 7.77 | 9 | 7.30 | 9
SO | 8.20 | 9 | 7.10 | 8 | 6.47 | 7
GRO | 5.63 | 5 | 6.60 | 7 | 6.73 | 8
SBOA | 4.67 | 3 | 4.67 | 3 | 4.50 | 3
ED | 6.17 | 7 | 7.97 | 10 | 7.93 | 10
ESC | 5.23 | 4 | 5.03 | 4 | 6.03 | 6
HHWOA | 5.90 | 6 | 5.83 | 6 | 5.83 | 5
IGWO | 6.20 | 8 | 5.53 | 5 | 5.70 | 4
MSPBO | 9.43 | 11 | 10.57 | 11 | 10.77 | 11
QOCSPBO | 11.53 | 12 | 11.80 | 12 | 11.80 | 12
SPBO | 3.80 | 2 | 3.47 | 2 | 3.27 | 2
MESPBO | 2.00 | 1 | 1.67 | 1 | 1.67 | 1
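The mean ranks in Table 5 can be recomputed from the per-function mean errors. The snippet below is a minimal sketch assuming a scores matrix of shape (functions × algorithms) with lower values better; it illustrates the ranking step, not the exact evaluation script used for the paper.

```python
import numpy as np
from scipy.stats import rankdata

def friedman_mean_ranks(scores):
    """Average Friedman rank per algorithm.
    scores: array of shape (n_problems, n_algorithms) holding mean errors,
    lower is better; ties receive average ranks."""
    ranks = np.apply_along_axis(rankdata, 1, np.asarray(scores, dtype=float))
    return ranks.mean(axis=0)

# Toy usage with 3 problems and 3 algorithms:
demo = np.array([[1.0, 2.0, 3.0],
                 [1.5, 1.0, 2.5],
                 [0.2, 0.9, 0.8]])
print(friedman_mean_ranks(demo))  # -> [1.33..., 2.0, 2.66...]
```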
Table 6. The description of the datasets used in the comparative study.
ID | Name | Number of Features | Number of Instances | Number of Classes
Dataset 1 | Tic-Tac-Toe Endgame | 9 | 958 | 2
Dataset 2 | Breast Cancer Wisconsin (Original) | 10 | 699 | 2
Dataset 3 | Statlog (Heart) | 13 | 270 | 2
Dataset 4 | Wine | 13 | 178 | 3
Dataset 5 | Congressional Voting Records | 16 | 435 | 2
Dataset 6 | Zoo | 16 | 101 | 7
Dataset 7 | Lymphography | 18 | 148 | 4
Dataset 8 | Hepatitis | 19 | 155 | 2
Dataset 9 | German Credit Dataset Analysis | 20 | 1000 | 2
Dataset 10 | Waveform | 21 | 5000 | 3
Table 7. The results of MESPBO and other algorithms in fitness.
Dataset | Metric | PSO | SO | GRO | SBOA | ED | ESC | HHWOA | IGWO | MSPBO | QOCSPBO | SPBO | MESPBO
Dataset 1 | Ave | 0.1410 | 0.1435 | 0.1410 | 0.1535 | 0.1410 | 0.1443 | 0.1536 | 0.1410 | 0.1511 | 0.1410 | 0.1410 | 0.1410
Dataset 1 | Std | 3 × 10−17 | 0.0079 | 3 × 10−17 | 0.0079 | 3 × 10−17 | 0.0105 | 0.0133 | 3 × 10−17 | 0.0130 | 3 × 10−17 | 3 × 10−17 | 3 × 10−17
Dataset 2 | Ave | 0.0317 | 0.0317 | 0.0317 | 0.0317 | 0.0317 | 0.0317 | 0.0318 | 0.0317 | 0.0317 | 0.0317 | 0.0317 | 0.0317
Dataset 2 | Std | 3 × 10−17 | 3 × 10−17 | 3 × 10−17 | 3 × 10−17 | 3 × 10−17 | 3 × 10−17 | 0.0003 | 3 × 10−17 | 3 × 10−17 | 3 × 10−17 | 3 × 10−17 | 0
Dataset 3 | Ave | 0.1123 | 0.1123 | 0.1126 | 0.1123 | 0.1125 | 0.1123 | 0.1127 | 0.1123 | 0.1123 | 0.1123 | 0.1123 | 0.1122
Dataset 3 | Std | 1 × 10−17 | 1 × 10−17 | 1 × 10−17 | 0.003 | 1 × 10−17 | 1 × 10−17 | 0.005 | 1 × 10−17 | 1 × 10−17 | 1 × 10−17 | 1 × 10−17 | 1 × 10−17
Dataset 4 | Ave | 0.0015 | 0.0018 | 0.0015 | 0.0016 | 0.0015 | 0.0015 | 0.0017 | 0.0015 | 0.0015 | 0.0015 | 0.0015 | 0.0014
Dataset 4 | Std | 0 | 0.005 | 0 | 0.002 | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 0
Dataset 5 | Ave | 0.0599 | 0.0612 | 0.0599 | 0.0572 | 0.0588 | 0.0513 | 0.0614 | 0.0471 | 0.0489 | 0.0476 | 0.0457 | 0.0441
Dataset 5 | Std | 1 × 10−2 | 1 × 10−2 | 1 × 10−2 | 1 × 10−2 | 1 × 10−2 | 1 × 10−2 | 1 × 10−2 | 1 × 10−2 | 1 × 10−2 | 1 × 10−2 | 9 × 10−3 | 6 × 10−3
Dataset 6 | Ave | 0.0012 | 0.0012 | 0.0013 | 0.0012 | 0.0012 | 0.0012 | 0.0013 | 0.0012 | 0.0012 | 0.0012 | 0.0012 | 0.00012
Dataset 6 | Std | 2 × 10−4 | 0 | 3 × 10−4 | 0 | 0 | 0 | 3 × 10−4 | 0 | 0 | 0 | 0 | 0
Dataset 7 | Ave | 1 × 10−3 | 2 × 10−3 | 2 × 10−3 | 2 × 10−3 | 2 × 10−3 | 1 × 10−3 | 1 × 10−3 | 2 × 10−3 | 2 × 10−3 | 2 × 10−3 | 2 × 10−3 | 2 × 10−3
Dataset 7 | Std | 8 × 10−4 | 1 × 10−3 | 7 × 10−4 | 9 × 10−4 | 3 × 10−4 | 8 × 10−4 | 7 × 10−4 | 6 × 10−4 | 8 × 10−4 | 9 × 10−4 | 8 × 10−4 | 0
Dataset 8 | Ave | 9 × 10−2 | 9 × 10−2 | 9 × 10−2 | 9 × 10−2 | 8 × 10−2 | 8 × 10−2 | 9 × 10−2 | 8 × 10−2 | 9 × 10−2 | 8 × 10−2 | 8 × 10−2 | 7 × 10−2
Dataset 8 | Std | 5 × 10−2 | 5 × 10−2 | 3 × 10−2 | 3 × 10−2 | 3 × 10−2 | 1 × 10−2 | 4 × 10−2 | 5 × 10−2 | 3 × 10−2 | 3 × 10−2 | 4 × 10−4 | 2 × 10−4
Dataset 9 | Ave | 0.2036 | 0.1939 | 0.1910 | 0.2125 | 0.1927 | 0.1981 | 0.2086 | 0.1911 | 0.1987 | 0.1940 | 0.1889 | 0.1861
Dataset 9 | Std | 0.0101 | 0.0111 | 0.0005 | 0.0128 | 0.0038 | 0.0077 | 0.0196 | 0.0081 | 0.0106 | 0.00048 | 0.0062 | 0.0084
Dataset 10 | Ave | 0.2036 | 0.1939 | 0.1910 | 0.2125 | 0.1927 | 0.1981 | 0.2086 | 0.1911 | 0.1987 | 0.1940 | 0.1889 | 0.1861
Dataset 10 | Std | 0.0101 | 0.0111 | 0.005 | 0.0128 | 0.0038 | 0.0077 | 0.0196 | 0.0081 | 0.0106 | 0.0048 | 0.0062 | 0.0084
Table 8. The results of MESPBO and other algorithms in accuracy.
Dataset | Metric | PSO | SO | GRO | SBOA | ED | ESC | HHWOA | IGWO | MSPBO | QOCSPBO | SPBO | MESPBO
Dataset 1 | Ave | 86.3% | 86.1% | 86.3% | 86.1% | 86.3% | 86.0% | 85.2% | 86.3% | 85.4% | 86.3% | 86.3% | 86.1%
Dataset 2 | Ave | 97.1% | 97.1% | 97.1% | 97.1% | 97.1% | 97.1% | 97.1% | 97.1% | 97.1% | 97.1% | 97.1% | 97.1%
Dataset 3 | Ave | 88.9% | 88.9% | 88.9% | 88.9% | 88.9% | 88.9% | 88.9% | 88.9% | 88.9% | 88.9% | 88.9% | 89.0%
Dataset 4 | Ave | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100%
Dataset 5 | Ave | 94.11% | 93.95% | 94.11% | 94.57% | 94.57% | 95.04% | 93.95% | 94.11% | 95.50% | 94.37% | 94.68% | 95.81%
Dataset 6 | Ave | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100%
Dataset 7 | Ave | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100%
Dataset 8 | Ave | 91.11% | 91.56% | 91.11% | 92.00% | 92.00% | 92.89% | 88.44% | 92.00% | 92.68% | 90.36% | 91.27% | 93.33%
Dataset 9 | Ave | 79.9% | 80.7% | 81.0% | 79.0% | 80.8% | 80.3% | 79.3% | 81.0% | 80.3% | 80.7% | 81.2% | 81.5%
Dataset 10 | Ave | 79.9% | 80.7% | 81.0% | 79.0% | 80.8% | 80.3% | 79.3% | 81.0% | 80.3% | 80.7% | 81.2% | 81.5%
Table 9. The results of MESPBO and other algorithms in average number of selected features.
Dataset | Metric | PSO | SO | GRO | SBOA | ED | ESC | HHWOA | IGWO | MSPBO | QOCSPBO | SPBO | MESPBO
Dataset 1 | Ave | 5 | 5.4 | 5 | 5.4 | 5 | 5.2 | 7 | 5 | 6.6 | 5 | 5 | 5
Dataset 2 | Ave | 3 | 3 | 3 | 3 | 3 | 3 | 3.1 | 3 | 3 | 3 | 3 | 3
Dataset 3 | Ave | 3 | 3 | 3 | 3.3 | 3 | 3 | 3.3 | 3 | 3 | 3 | 3 | 3
Dataset 4 | Ave | 2 | 2 | 2 | 2.1 | 2 | 2 | 2.2 | 2 | 2 | 2 | 2 | 2
Dataset 5 | Ave | 2.7 | 2.2 | 2.7 | 3.2 | 3.5 | 3.6 | 2.5 | 4.4 | 4.3 | 4.2 | 3.2 | 2.1
Dataset 6 | Ave | 2.1 | 2 | 2.22 | 2 | 2 | 2 | 2.2 | 2 | 2 | 2 | 2 | 2
Dataset 7 | Ave | 3.2 | 3.7 | 4.1 | 3.4 | 2.1 | 2.1 | 3.4 | 2.3 | 2 | 2 | 2 | 2
Dataset 8 | Ave | 3.2 | 3.7 | 4.1 | 3.4 | 2.2 | 2.1 | 3.5 | 2.3 | 2 | 2 | 2 | 2
Dataset 9 | Ave | 9.3 | 5.7 | 5.8 | 9.3 | 5.4 | 6.2 | 7.4 | 6 | 7.5 | 5.9 | 5.7 | 6
Dataset 10 | Ave | 9.3 | 5.7 | 5.8 | 9.3 | 5.4 | 6.2 | 7.4 | 6 | 7.5 | 5.9 | 6 | 5.7
Table 10. The ranges of the unknown parameters of the diode models.
Parameter | Lb | Ub
Iph (A) | 0 | 1
Id (μA) | 0 | 1
Rs (Ω) | 0 | 0.5
Rsh (Ω) | 0 | 100
n | 1 | 2
Id1 (μA) | 0 | 1
Id2 (μA) | 0 | 1
n1 | 1 | 2
n2 | 1 | 2
Table 11. Comparison among different algorithms on the SDM.
Algorithm | Iph (A) | Id (μA) | Rs (Ω) | Rsh (Ω) | n | RMSE | Sig.
RTH | 7.6043 × 10−1 | 9.9716 × 10−7 | 3.1402 × 10−2 | 1.1245 × 102 | 1.6041 | 9.8615 × 10−4 | +
SAO | 7.6084 × 10−1 | 1.0000 × 10−6 | 3.1385 × 10−2 | 1.0000 × 102 | 1.6045 | 9.8735 × 10−4 | +
GRO | 7.6070 × 10−1 | 3.3551 × 10−7 | 3.6241 × 10−2 | 5.5509 × 101 | 1.4850 | 9.8987 × 10−4 | +
SO | 7.6078 × 10−1 | 3.2067 × 10−7 | 3.6406 × 10−2 | 5.3452 × 101 | 1.4805 | 2.4000 × 10−3 | +
ESC | 7.6076 × 10−1 | 3.2096 × 10−7 | 3.6391 × 10−2 | 5.3553 × 101 | 1.4805 | 1.7000 × 10−3 | +
INFO | 7.6084 × 10−1 | 8.3538 × 10−7 | 3.2367 × 10−2 | 1.0000 × 102 | 1.5834 | 9.8654 × 10−4 | +
SBOA | 7.6078 × 10−1 | 3.2302 × 10−7 | 3.6377 × 10−2 | 5.3719 × 101 | 1.4812 | 9.8633 × 10−4 | +
GKSO | 7.6031 × 10−1 | 4.0577 × 10−7 | 3.5468 × 10−2 | 6.5852 × 101 | 1.5045 | 9.8678 × 10−4 | +
IGWO | 7.6061 × 10−1 | 3.6866 × 10−7 | 3.5871 × 10−2 | 5.8965 × 101 | 1.4946 | 1.1000 × 10−3 | +
HHWOA | 7.6107 × 10−1 | 2.1506 × 10−7 | 3.7916 × 10−2 | 4.1703 × 101 | 1.4415 | 9.8674 × 10−4 | +
ED | 7.6077 × 10−1 | 3.2394 × 10−7 | 3.6366 × 10−2 | 5.3675 × 101 | 1.4815 | 9.8870 × 10−4 | +
MESPBO | 7.6078 × 10−1 | 3.2305 × 10−7 | 3.6377 × 10−2 | 5.3721 × 101 | 1.4812 | 9.8602 × 10−4 | /
Table 12. IAE of MESPBO on the SDM.
No. | V (V) | I (A) | Isim (A) | IAE_I (A) | Psim (W) | IAE_P (W)
1 | −2.0570 × 10−1 | 7.6400 × 10−1 | 7.6205 × 10−1 | 1.9511 × 10−3 | −1.5675 × 10−1 | 4.0134 × 10−4
2 | −1.2910 × 10−1 | 7.6200 × 10−1 | 7.6137 × 10−1 | 6.3184 × 10−4 | −9.8293 × 10−2 | 8.1570 × 10−5
3 | −5.8800 × 10−2 | 7.6050 × 10−1 | 7.6074 × 10−1 | 2.4304 × 10−4 | −4.4732 × 10−2 | 1.4291 × 10−5
4 | 5.7000 × 10−3 | 7.6050 × 10−1 | 7.6017 × 10−1 | 3.3215 × 10−4 | 4.3330 × 10−3 | 1.8932 × 10−6
5 | 6.4600 × 10−2 | 7.6000 × 10−1 | 7.5964 × 10−1 | 3.6187 × 10−4 | 4.9073 × 10−2 | 2.3377 × 10−5
6 | 1.1850 × 10−1 | 7.5900 × 10−1 | 7.5914 × 10−1 | 1.3832 × 10−4 | 8.9958 × 10−2 | 1.6391 × 10−5
7 | 1.6780 × 10−1 | 7.5700 × 10−1 | 7.5864 × 10−1 | 1.6371 × 10−3 | 1.2730 × 10−1 | 2.7470 × 10−4
8 | 2.1320 × 10−1 | 7.5700 × 10−1 | 7.5806 × 10−1 | 1.0560 × 10−3 | 1.6162 × 10−1 | 2.2513 × 10−4
9 | 2.5450 × 10−1 | 7.5550 × 10−1 | 7.5724 × 10−1 | 1.7442 × 10−3 | 1.9272 × 10−1 | 4.4391 × 10−4
10 | 2.9240 × 10−1 | 7.5400 × 10−1 | 7.5587 × 10−1 | 1.8746 × 10−3 | 2.2102 × 10−1 | 5.4813 × 10−4
11 | 3.2690 × 10−1 | 7.5050 × 10−1 | 7.5338 × 10−1 | 2.8778 × 10−3 | 2.4628 × 10−1 | 9.4077 × 10−4
12 | 3.5850 × 10−1 | 7.4650 × 10−1 | 7.4875 × 10−1 | 2.2510 × 10−3 | 2.6843 × 10−1 | 8.0698 × 10−4
13 | 3.8730 × 10−1 | 7.3850 × 10−1 | 7.4052 × 10−1 | 2.0165 × 10−3 | 2.8680 × 10−1 | 7.8098 × 10−4
14 | 4.1370 × 10−1 | 7.2800 × 10−1 | 7.2643 × 10−1 | 1.5667 × 10−3 | 3.0053 × 10−1 | 6.4814 × 10−4
15 | 4.3730 × 10−1 | 7.0650 × 10−1 | 7.0458 × 10−1 | 1.9228 × 10−3 | 3.0811 × 10−1 | 8.4082 × 10−4
16 | 4.5900 × 10−1 | 6.7550 × 10−1 | 6.7168 × 10−1 | 3.8212 × 10−3 | 3.0830 × 10−1 | 1.7539 × 10−3
17 | 4.7840 × 10−1 | 6.3200 × 10−1 | 6.2663 × 10−1 | 5.3700 × 10−3 | 2.9978 × 10−1 | 2.5690 × 10−3
18 | 4.9600 × 10−1 | 5.7300 × 10−1 | 5.6817 × 10−1 | 4.8281 × 10−3 | 2.8181 × 10−1 | 2.3948 × 10−3
19 | 5.1190 × 10−1 | 4.9900 × 10−1 | 4.9706 × 10−1 | 1.9389 × 10−3 | 2.5445 × 10−1 | 9.9254 × 10−4
20 | 5.2650 × 10−1 | 4.1300 × 10−1 | 4.1296 × 10−1 | 3.5430 × 10−5 | 2.1743 × 10−1 | 1.8654 × 10−5
21 | 5.3980 × 10−1 | 3.1650 × 10−1 | 3.1876 × 10−1 | 2.2600 × 10−3 | 1.7207 × 10−1 | 1.2200 × 10−3
22 | 5.5210 × 10−1 | 2.1200 × 10−1 | 2.1494 × 10−1 | 2.9383 × 10−3 | 1.1867 × 10−1 | 1.6222 × 10−3
23 | 5.6330 × 10−1 | 1.0350 × 10−1 | 1.0558 × 10−1 | 2.0816 × 10−3 | 5.9474 × 10−2 | 1.1726 × 10−3
24 | 5.7360 × 10−1 | −1.0000 × 10−2 | −6.5923 × 10−3 | 3.4077 × 10−3 | −3.7813 × 10−3 | 1.9547 × 10−3
25 | 5.8330 × 10−1 | −1.2300 × 10−1 | −1.2585 × 10−1 | 2.8543 × 10−3 | −7.3411 × 10−2 | 1.6649 × 10−3
26 | 5.9000 × 10−1 | −2.1000 × 10−1 | −2.1250 × 10−1 | 2.5047 × 10−3 | −1.2538 × 10−1 | 1.4778 × 10−3