Article

IPO: An Improved Parrot Optimizer for Global Optimization and Multilayer Perceptron Classification Problems

Fang Li, Congteng Dai, Abdelazim G. Hussien and Rong Zheng
1 School of Humanities, Minnan Science and Technology College, Quanzhou 362332, China
2 College of Foreign Languages, Fujian Normal University, Fuzhou 350007, China
3 Department of Computer and Information Science, Linköping University, 58183 Linköping, Sweden
4 Faculty of Science, Fayoum University, Faiyum 63514, Egypt
5 Applied Science Research Center, Applied Science Private University, Amman 11831, Jordan
6 New Engineering Industry College, Putian University, Putian 351100, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(6), 358; https://doi.org/10.3390/biomimetics10060358
Submission received: 4 April 2025 / Revised: 28 May 2025 / Accepted: 30 May 2025 / Published: 2 June 2025

Abstract

The Parrot Optimizer (PO) is a recent optimization algorithm based on the behaviors of trained Pyrrhura Molinae parrots. In this paper, an improved PO (IPO) is proposed for solving global optimization problems and training the multilayer perceptron (MLP). The basic PO is enhanced with three improvements: an aerial search strategy, a modified staying behavior, and an improved communicating behavior. The aerial search strategy is derived from Arctic Puffin Optimization and is employed to enhance the exploration ability of PO. The staying and communicating behaviors of PO are modified using random movement and roulette fitness–distance balance selection to achieve a better balance between exploration and exploitation. To evaluate the optimization performance of the proposed IPO, twelve CEC2022 test functions and five standard classification datasets are selected for the experimental tests. Comparisons between IPO and six other well-known optimization algorithms show that IPO has superior performance in solving complex global optimization problems. In addition, IPO has been applied to optimize an MLP model for classifying the oral English teaching quality evaluation dataset; an MLP with a 10-21-3 structure is constructed for the classification of evaluation outcomes. The results show that IPO-MLP outperforms the other algorithms with the highest classification accuracy of 88.33%, which proves the effectiveness of the developed method.

1. Introduction

Metaheuristic algorithms are stochastic search methods used to find an optimal solution within a given search space. In recent decades, metaheuristic algorithms have become very popular for solving different types of optimization problems due to their flexibility, derivative-free mechanism, and simplicity [1]. Exploration and exploitation are the two key processes of metaheuristic algorithms: the former searches for the optimal solution in the global scope and avoids local optima, while the latter improves the accuracy of the obtained solution. To date, researchers have proposed various metaheuristic algorithms inspired by natural biological habits, physical and chemical laws, human behaviors, and so on, such as Particle Swarm Optimization (PSO) [2], Grey Wolf Optimizer (GWO) [3], Snake Optimizer (SO) [4], Arithmetic Optimization Algorithm (AOA) [5], Reptile Search Algorithm (RSA) [6], Slime Mould Algorithm (SMA) [7], Remora Optimization Algorithm (ROA) [8], and the recently proposed Pied Kingfisher Optimizer (PKO) [9] and Secretary Bird Optimization Algorithm (SBOA) [10].
One of the most important applications of metaheuristic algorithms is the training of artificial neural networks (ANNs). Among the different types of ANNs, the multilayer perceptron (MLP) has a simple structure and efficient performance and can be used to solve various practical classification problems. The parameters of the MLP, such as its weights and biases, can be optimized by metaheuristic algorithms. Although many metaheuristic algorithms have been developed to train the MLP, according to the no-free-lunch (NFL) theorem [11], new algorithms are always required when solving emerging complex optimization problems.
In the field of college oral English teaching, the goal is to improve students’ spoken language skills and develop their abilities to express their views clearly, accurately, and fluently, which are essential for their academic success, career development, and lifelong learning [12]. Strengthening supervision and feedback in the teaching process is a powerful means to ensure the teaching effect. Generally speaking, an integrated teaching quality evaluation system includes information from supervisors, colleagues, students, and teachers. Therefore, a teaching quality evaluation model needs to consider a variety of factors, which makes the evaluation a challenging nonlinear optimization problem. Traditional English teaching quality evaluation methods, such as the grey relational analysis method [13], the analytic hierarchy process [14], and the fuzzy comprehensive evaluation method [15], suffer from subjectivity and randomness, resulting in inaccurate evaluation results.
In the literature, several studies have discussed teaching quality evaluation. For example, Zhang et al. [13] adopted principal component analysis and a support vector machine to improve the evaluation precision of English teaching quality. Lu et al. [16] constructed an English interpretation teaching quality evaluation model using an RBF neural network optimized by a genetic algorithm. Wei et al. [17] investigated the evaluation performance of the college English teaching effect using an improved quantum particle swarm algorithm and a support vector machine. Zhang [18] also studied the problem of college English teaching effect evaluation, applying the least squares support vector machine method. Tan et al. [19] introduced an oral English teaching quality evaluation method based on a BP neural network optimized by an improved crow search algorithm. Miao et al. [20] applied the decision tree algorithm to evaluate the teaching effect of oral English teaching with high accuracy and low time consumption. Nevertheless, it remains a challenge to develop an effective and reliable model for accurate evaluation of English teaching quality. Related works on English teaching quality evaluation are summarized in Table 1.
Inspired by the four different behavioral characteristics of Pyrrhura Molinae parrots, Lian et al. proposed a new metaheuristic algorithm called Parrot Optimizer (PO) in 2024 [21]. The PO simulates parrots’ behaviors of foraging, staying, communicating, and fear of strangers, aiming to achieve a balance between exploration and exploitation. Although the PO algorithm produces satisfactory results for a variety of real-world engineering optimization problems, it does not perform well in solving high-dimensional optimization problems. In this paper, the PO algorithm is improved by using multiple improvement mechanisms and applied to solve the MLP classification problems.
The main contributions of this article are articulated as follows:
  • An improved PO (IPO) is proposed in this paper, which adopts three improvements, namely the aerial search strategy, modified staying behavior, and communicating behavior;
  • The proposed IPO is tested using twelve CEC2022 test functions;
  • The numerical results, Wilcoxon signed-rank test, Friedman ranking test, convergence curves, and boxplots demonstrate the superiority of IPO compared to PO and the other five methods;
  • The effectiveness of IPO-MLP is verified in training the multilayer perceptron for solving the classification problems, including five classification datasets and an oral English teaching quality evaluation problem.
The rest of this paper is organized as follows: Section 2 provides the methodology employed in this research in detail; Section 3 presents the improvement methods of PO; Section 4 reports the results of the proposed IPO on the CEC2022 functions, the standard classification datasets, and the oral English teaching quality evaluation problem; finally, Section 5 concludes this paper.

2. Preliminaries

2.1. Multilayer Perceptron

Multilayer perceptron (MLP) is one type of feedforward neural network (FNN), which has shown the powerful ability to solve nonlinear problems [22]. As shown in Figure 1, two MLP models are presented, which have structures of 3-4-3 and 3-4-4-3, respectively. The input information starts at the input layer and passes layer by layer until it reaches the output layer. In general, there is one input layer, one output layer, and one or more hidden layers in an MLP model, and each layer has multiple nodes. These nodes in the hidden or output layers are used to perform the neural network computation. Like other neural networks, the weights between nodes and the biases of nodes are the key parameters that need to be optimized.
The MLP calculation process can be represented by the following equations.
The first step is to calculate the weighted sum Hj of j-th node by using Equation (1):
H_j = \sum_{i=1}^{m} (\omega_{ij} \times x_i) - \theta_j    (1)
where ωij, xi, and θj are the weight, input, and bias of the j-th node, respectively.
Then the output of nodes is calculated by using Equation (2):
f(H_j) = \frac{1}{1 + \exp(-H_j)}    (2)
where f(Hj) denotes the activation function; the function applied in this paper is the sigmoid function. In the next hidden layer or the output layer, the outputs are calculated in a manner similar to Equations (1) and (2), and finally the output results are obtained.
For the training of MLP, the mean square error (MSE) is treated as the objective function, which is defined as follows:
\overline{MSE} = \frac{\sum_{n=1}^{S} \sum_{k=1}^{M} (e_n^k - o_n^k)^2}{S}    (3)
where S is the number of all samples, M is the number of nodes in the output layer, and e_n^k and o_n^k indicate the output value and the real value of the k-th output node for the n-th sample. According to Equation (3), the smaller the mean square error is, the closer the model output value is to the actual value. Therefore, the training process of the MLP model can be regarded as a minimization problem, and the goal is to find the optimal weights and biases.
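To make this training setup concrete, the following Python/NumPy sketch (an illustrative transcription, not the authors’ MATLAB implementation; the array names and argument conventions are our own assumptions) decodes one flat search agent into the weights and biases of a single-hidden-layer MLP and scores it with the objective of Equation (3):

```python
import numpy as np

def sigmoid(h):
    # Equation (2): logistic activation
    return 1.0 / (1.0 + np.exp(-h))

def mlp_mse(params, X, Y, n_in, n_hid, n_out):
    """Decode a flat parameter vector and return the MSE objective of Eq. (3).

    params: 1-D array of length n_in*n_hid + n_hid + n_hid*n_out + n_out
    X: (S, n_in) input samples; Y: (S, n_out) target outputs
    """
    i = 0
    W1 = params[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = params[i:i + n_hid]; i += n_hid
    W2 = params[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = params[i:i + n_out]
    # Equations (1)-(2): weighted sums minus the node thresholds, then sigmoid
    H = sigmoid(X @ W1 - b1)
    O = sigmoid(H @ W2 - b2)
    # Equation (3): squared errors summed over nodes and samples, divided by S
    return np.sum((O - Y) ** 2) / X.shape[0]
```

A metaheuristic then treats params as the position of one search agent and minimizes this function.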

2.2. The Parrot Optimizer

The PO is a novel stochastic optimization algorithm inspired by the special behaviors of Pyrrhura Molinae parrots [21]. Four types of behavioral traits are modeled in PO: foraging, staying, communicating, and fear of strangers. The process of PO is described as follows.

2.2.1. Population Initialization

In PO, the first step is to initialize the population. The position of the i-th agent is calculated as follows:
X_i^0 = rand \times (ub - lb) + lb,   i = 1, 2, 3, \ldots, N    (4)
where lb and ub are the lower and upper boundaries, respectively; rand is a random value evenly distributed between 0 and 1; N is the size of the population. By using Equation (4), the initial positions of the population individuals are randomly generated within the upper and lower boundary ranges.

2.2.2. Foraging Behavior

In the foraging behavior, Pyrrhura Molinae parrots will consider the location of food or the owner and then fly to the estimated location. The mathematical model is described as follows:
X_i^{t+1} = (X_i^t - X_{best}) \times levy(D) + rand \times (1 - t/T)^{2t/T} \times X_{mean}^t    (5)
where X_i^{t+1} and X_i^t indicate the positions of the i-th agent at the next and current iterations, respectively; X_best denotes the location of food or the owner; levy(D) denotes the Levy distribution operator, and D is the dimension of the objective function; X_mean^t is the average location of the population.
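The operator levy(D), used here and in several later equations, produces Levy-distributed step sizes. A common way to sample it is Mantegna’s algorithm, sketched below in Python; the exponent β = 1.5 is a conventional choice and may differ from the PO reference implementation:

```python
import numpy as np
from math import gamma, pi, sin

def levy(dim, beta=1.5):
    """Mantegna's algorithm for Levy-stable step sizes (a common choice)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)   # numerator samples
    v = np.random.normal(0, 1, dim)       # denominator samples
    return u / np.abs(v) ** (1 / beta)
```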

2.2.3. Staying Behavior

In the staying behavior, Pyrrhura Molinae parrots will fly to the owner and stay on the owner’s body for a while. This behavior is formulated as follows:
X_i^{t+1} = X_i^t + X_{best} \times levy(D) + rand \times ones(1, D)    (6)
where ones(1, D) represents an all-ones vector of dimension D. X_best × levy(D) represents the behavior of flying to the owner, and rand × ones(1, D) represents the behavior of staying on the owner’s body for a while.

2.2.4. Communicating Behavior

The communicating behavior of Pyrrhura Molinae parrots can be divided into two types: flying to the flock and without flying to the flock. Considering that both cases have the same probability of happening, these two types of communication behaviors can be represented as follows:
X_i^{t+1} = \begin{cases} 0.2 \times rand \times (1 - t/T) \times (X_i^t - X_{mean}^t), & P_1 \le 0.5 \\ 0.2 \times rand \times \exp(-t/(rand \times T)), & P_1 > 0.5 \end{cases}    (7)
where P1 is a random number in the range of [0, 1].

2.2.5. Fear of Strangers’ Behavior

Pyrrhura Molinae parrots also have the behavior trait of being afraid of strangers. They will fly towards the owner and move away from the strangers. This behavior is formulated as follows:
X_i^{t+1} = X_i^t + O - L    (8)
O = rand \times \cos(0.5\pi \times t/T) \times (X_{best} - X_i^t)    (9)
L = \cos(rand \times \pi) \times (t/T)^{2/T} \times (X_i^t - X_{best})    (10)
where O denotes the behavior of flying towards the owner, and L denotes the behavior of moving away from the strangers.
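A direct Python transcription of the foraging update (Equation (5)) and the fear-of-strangers update (Equations (8)–(10)) might look as follows, reusing the levy helper sketched above. This is an illustration of the formulas, not the authors’ code; note that IPO later replaces foraging with the aerial search strategy but retains the fear-of-strangers behavior:

```python
import numpy as np

def foraging(X_i, X_best, X_mean, t, T, dim):
    # Equation (5): move around the estimated food/owner location
    return ((X_i - X_best) * levy(dim)
            + np.random.rand() * (1 - t / T) ** (2 * t / T) * X_mean)

def fear_of_strangers(X_i, X_best, t, T):
    # Equations (8)-(10): fly towards the owner (O) and away from strangers (L)
    O = np.random.rand() * np.cos(0.5 * np.pi * t / T) * (X_best - X_i)
    L = np.cos(np.random.rand() * np.pi) * (t / T) ** (2 / T) * (X_i - X_best)
    return X_i + O - L
```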

2.3. Aerial Search Strategy

The aerial search strategy is developed in Arctic Puffin Optimization (APO) [23] and displays a strong ability for global exploration. Its mathematical model is given in the following equations.
X_i^{t+1} = X_i^t + (X_i^t - X_r^t) \times levy(D) + R    (11)
R = round(0.5 \times (0.05 + rand)) \times randn    (12)
where randn denotes a random value following the standard normal distribution; X_r^t is the position of a randomly selected agent, with r being a random integer in [1, N − 1]; round denotes the function that rounds values to the nearest whole number.
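A hedged Python sketch of Equations (11) and (12), reusing the levy helper from above; choosing r as the index of a random agent other than i is our reading of the formula:

```python
import numpy as np

def aerial_search(X, i, dim):
    """Aerial search step from APO, Eqs. (11)-(12) (Python transcription)."""
    N = X.shape[0]
    r = np.random.choice([k for k in range(N) if k != i])  # another agent (our reading)
    # Eq. (12): round(0.5*(0.05+rand)) is 0 or 1, so R randomly switches on a
    # standard-normal perturbation per dimension
    R = np.round(0.5 * (0.05 + np.random.rand())) * np.random.randn(dim)
    return X[i] + (X[i] - X[r]) * levy(dim) + R
```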

2.4. Fitness–Distance Balance (FDB) Selection

Fitness–distance balance (FDB) selection is a well-known and effective improvement method applied in metaheuristic algorithms, which was proposed by Kahraman et al. in 2020 [24]. FDB selection can improve the search process of an optimization method by balancing the fitness and the distance between the current agent and the best agent. The first step of FDB is to calculate the distance between the candidate solution and the best solution, as shown in Equation (13).
DP_i = \sqrt{(x_{i,1} - x_{best,1})^2 + (x_{i,2} - x_{best,2})^2 + \cdots + (x_{i,D} - x_{best,D})^2}    (13)
where DPi denotes the Euclidean distance between the i-th candidate solution and the best solution. In other cases, Manhattan distance and Minkowski distance can also be adopted as the distance metrics. Then the distance vector DP for the candidate solution can be obtained, which is shown in Equation (14).
DP = [d_1, d_2, \ldots, d_N]^T    (14)
In the second step, two factors of fitness and distance are comprehensively considered to obtain the score of each individual, as shown in Equation (15).
SP = \omega \times norm(f) + (1 - \omega) \times norm(DP)    (15)
where ω is a weight coefficient with the value range [0, 1]; norm denotes the normalization operator; f is the fitness vector of the population. The score vector SP is shown in Equation (16).
SP = [s_1, s_2, \ldots, s_N]^T    (16)
In this paper, a variant of FDB called roulette FDB (RFDB) is adopted, which uses roulette wheel selection [25] to choose an individual according to the score vector SP. In RFDB, the scores of all individuals are summed as S_sum = s_1 + s_2 + … + s_N, and the probability of the i-th individual being selected is s_i/S_sum. Therefore, the higher the score, the greater the chance of being selected.
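The following Python sketch summarizes Equations (13)–(16) together with the roulette step. Since training is a minimization problem, the normalized fitness is inverted here so that better (smaller) fitness yields a higher score; this inversion and the min–max normalization are our assumptions about norm(·):

```python
import numpy as np

def rfdb_select(X, fitness, X_best, w=0.5):
    """Roulette fitness-distance balance selection (sketch of Eqs. (13)-(16))."""
    D = np.linalg.norm(X - X_best, axis=1)                        # Eq. (13), Euclidean
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)  # min-max (assumed)
    S = w * (1 - norm(fitness)) + (1 - w) * norm(D)               # Eq. (15), minimization
    p = S / S.sum()                                               # roulette probabilities
    return X[np.random.choice(len(X), p=p)]                      # pick proportionally to score
```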

3. Improved Parrot Optimizer

3.1. Motivation

The motivation for improving PO is to enhance the performance and adaptability of the algorithm to meet the increasingly complex requirements of optimization problems. Although the basic PO displays good performance on some optimization problems, it still has limitations when dealing with complicated nonlinear problems, such as the oral English teaching quality evaluation problem. Therefore, this paper introduces efficient strategies and improvement techniques into PO so that the enhanced algorithm can flexibly adjust its search strategies during the search process, improving its search ability and convergence speed and reducing the possibility of falling into local optima.

3.2. Proposal for IPO

To enhance the global and local search ability of basic PO, several modifications are applied to it, including aerial search strategy, new staying behavior, and roulette fitness–distance balance selection. The details are shown below.

3.2.1. New Exploration Equations Using Aerial Search Strategy

The original PO has a relatively weak exploration phase, so it is enhanced using the aerial search strategy from Arctic Puffin Optimization, as shown in Equations (11) and (12).

3.2.2. New Staying Behavior

The staying behavior of PO includes flying to the owner and randomly stopping on the owner’s body. To increase the local search ability of PO, it is assumed that a parrot already on its owner’s body may also simply move randomly. Thus, based on Equation (6), the modified staying behavior is described as follows:
X_i^{t+1} = \begin{cases} X_i^t + X_{best} \times levy(D) + rand \times ones(1, D), & P_2 \le 0.5 \\ X_i^t + rand \times ones(1, D), & P_2 > 0.5 \end{cases}    (17)
where P2 is a random value between 0 and 1, indicating that flying and moving are equally likely for a parrot.

3.2.3. New Communicating Behavior

In the proposed IPO, the RFDB selection is applied to the process of communicating behavior to balance the global and local exploration of PO. The improved communicating behavior can be represented as follows:
X_i^{t+1} = \begin{cases} 0.2 \times rand \times (1 - t/T) \times (X_i^t - X_{mean}^t), & P_3 \le 0.5 \\ X_{RFDB}^t + 0.2 \times rand \times \exp(-t/(rand \times T)), & P_3 > 0.5 \end{cases}    (18)
where P3 is a random value between 0 and 1, and X_{RFDB}^t is the agent selected using the RFDB selection method.
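The two modified behaviors of Equations (17) and (18) can be transcribed into Python as follows (again a sketch reusing the levy helper; X_rfdb is an agent returned by the RFDB selection of Section 2.4):

```python
import numpy as np

def new_staying(X_i, X_best, dim):
    # Equation (17): with equal probability, fly to the owner or move randomly
    if np.random.rand() <= 0.5:
        return X_i + X_best * levy(dim) + np.random.rand() * np.ones(dim)
    return X_i + np.random.rand() * np.ones(dim)

def new_communicating(X_i, X_mean, X_rfdb, t, T):
    # Equation (18): move relative to the flock centre, or start from an
    # RFDB-selected agent instead of the origin
    if np.random.rand() <= 0.5:
        return 0.2 * np.random.rand() * (1 - t / T) * (X_i - X_mean)
    return X_rfdb + 0.2 * np.random.rand() * np.exp(-t / (np.random.rand() * T))
```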

3.2.4. Architecture of the Proposed IPO

The pseudocode of IPO is given in Algorithm 1, and the flowchart of IPO is shown in Figure 2. When IPO begins the optimization process, the positions of the population are first initialized and the optimal individual is determined. The algorithm then enters a cyclic iterative process, and each individual performs one behavior according to the parameter St: the aerial search strategy, the modified staying behavior, the improved communicating behavior, or the fear-of-strangers behavior. When the number of iterations reaches the termination condition, the loop exits and the best solution found is output. A Python sketch of this loop follows Algorithm 1.
Algorithm 1: Pseudocode of the Proposed IPO
1. Initialize the IPO parameters: population size N, maximum iterations T.
2. Initialize the population’s positions randomly and identify the best agent.
3. For t = 1:T
4.        Calculate the fitness function.
5.        Find the best agent.
6.        For i = 1:N
7.               St = randi([1, 4])
8.               If St == 1
9.                       Behavior 1: aerial search strategy
10.                      Update position by Equations (11) and (12).
11.               Elseif St == 2
12.                      Behavior 2: new staying behavior
13.                      Update position by Equation (17).
14.               Elseif St == 3
15.                      Behavior 3: new communicating behavior
16.                      Update position by Equation (18).
17.               Elseif St == 4
18.                      Behavior 4: fear of strangers’ behavior
19.                      Update position by Equations (8)–(10).
20.               End if
21.               i = i + 1
22.        End for
23.        t = t + 1
24. End For
25. Return the best solution
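Putting the pieces together, a minimal Python skeleton of Algorithm 1 is given below. It wires up the behavior sketches from the previous sections; details such as boundary clipping and best-so-far tracking follow common practice rather than the authors’ exact implementation:

```python
import numpy as np

def ipo(obj, lb, ub, dim, N=30, T=500):
    """Skeleton of Algorithm 1 (our Python sketch, not the authors' code)."""
    X = np.random.rand(N, dim) * (ub - lb) + lb        # initialization, Eq. (4)
    fit = np.array([obj(x) for x in X])
    best_idx = fit.argmin()
    best, best_fit = X[best_idx].copy(), fit[best_idx]
    for t in range(1, T + 1):
        X_mean = X.mean(axis=0)
        for i in range(N):
            St = np.random.randint(1, 5)               # randi([1, 4])
            if St == 1:                                # Behavior 1, Eqs. (11)-(12)
                X_new = aerial_search(X, i, dim)
            elif St == 2:                              # Behavior 2, Eq. (17)
                X_new = new_staying(X[i], best, dim)
            elif St == 3:                              # Behavior 3, Eq. (18)
                X_new = new_communicating(X[i], X_mean,
                                          rfdb_select(X, fit, best), t, T)
            else:                                      # Behavior 4, Eqs. (8)-(10)
                X_new = fear_of_strangers(X[i], best, t, T)
            X[i] = np.clip(X_new, lb, ub)              # boundary handling (assumed)
            fit[i] = obj(X[i])
            if fit[i] < best_fit:                      # track the best-so-far agent
                best, best_fit = X[i].copy(), fit[i]
    return best, best_fit
```

For the MLP experiments, obj would be the mlp_mse function of Section 2.1, with lb = −10 and ub = 10 matching the search range in Table 10.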

3.2.5. The Computational Complexity Analysis of IPO

The computational complexity is an important indicator for the performance evaluation of optimization methods [5]. Set the population size to N, the maximum number of iterations to T, and the dimension of the objective function to D. For the original PO, the computational complexity of initialization is O(N × D). The computational complexity during the iterations is O(N × D × T). Thus, the total computational complexity of PO is O(N × D) + O(N × D × T) = O(N × D × (1 + T)). For the IPO, all the applied modifications will not increase the computational complexity. Therefore, the computational complexity of the proposed IPO is the same as the PO, which is O(N × D × (1 + T)).

4. Experimental Results

In this section, the optimization performance of IPO is evaluated using two types of test experiments: the CEC2022 benchmark functions [26] and MLP model training. The results of IPO are compared with those of six other advanced algorithms: PO [21], Harris Hawks Optimization (HHO) [27], the Sine Cosine Algorithm (SCA) [28], the Osprey Optimization Algorithm (OOA) [29], AOA [5], and the Aquila Optimizer (AO) [30]. The detailed control parameters of these algorithms are provided in Table 2. To ensure a fair comparison, all experiments were independently run 30 times; for each optimization problem, the maximum number of iterations of all algorithms is set to 500 and the population size to 30. The experiments are performed in MATLAB R2021b on a PC with the Windows 11 operating system, an Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz, and 32.00 GB RAM.

4.1. Case 1: CEC2022 Test Sets

The proposed IPO is first tested by using twelve CEC2022 benchmark functions [10,31], where F1 is a unimodal function, F2–F5 are multimodal functions, F6–F8 are hybrid functions, and F9–F12 are composition functions. The details of test functions are presented in Table 3. The range indicates the search range of search space. Fmin indicates the theoretical optimal value. The dimensions of all problems are set to 20.

4.1.1. Ablation Test

This section analyzes the influence of different strategies on the optimization performance of the PO algorithm, including aerial search strategy, new staying behavior, and new communicating behavior. Table 4 shows different versions of the improved PO algorithm, where one denotes that a strategy is adopted, and zero means a strategy is not adopted.
Table 5 provides the results of the sensitivity analysis for the original PO and the variants IPO1 to IPO7 on the CEC2022 test functions. It can be observed that IPO7 obtains better optimization results than the other variants: it ranks 1st on F1–F3, F5, and F7–F11, whereas it ranks 2nd only on F4 (after IPO4) and F6 (after IPO5). Therefore, all three improvement strategies help to improve the optimization performance of the PO algorithm. From this point forward, we refer to IPO7 simply as IPO.

4.1.2. Comparison and Analysis with Other Methods

Table 6 shows the test results of mean, best, worst, and standard deviation for all algorithms. The best mean values for these test functions are marked in bold. It can be seen from Table 6 that the IPO algorithm outperforms the comparison algorithms on test functions F1, F2, F5, F6, F9, F11, and F12, which shows strongly competitive performance. It is also noted that AO obtains the best mean values on F3, F4, F7, and F8, and PO wins on F10. Overall, the proposed IPO displays obviously better performance than the other algorithms.
Moreover, Table 7 gives the results of the Friedman ranking test. It is clearly shown that IPO ranks first compared to the other six algorithms, while AO ranks second and basic PO ranks third. It is further proved that the proposed IPO is superior to other algorithms on CEC2022 test functions.
Table 8 shows the p-value results of the Wilcoxon signed-rank test, in which the optimization results of IPO are compared with those of each algorithm on the twelve test functions. p-values smaller than 0.05 indicate that the results of IPO and the comparative algorithm differ significantly; otherwise, there is no significant difference between them. Moreover, the signs “+/=/−” denote that the results of IPO are better than, equal to, and worse than those of the compared algorithms, respectively. From Table 8, it is evident that IPO differs significantly from the other six methods on most functions and shows better optimization performance.
Table 9 reports the results of the average computational time on each test function. It can be seen that due to the addition of several improvement strategies, the calculation time of the IPO is slightly longer than that of the PO. The AOA algorithm has the shortest calculation time because of its simple structure, while the AO algorithm is more complex and has the longest calculation time.
Figure 3 displays the convergence curves of all algorithms on each of the CEC2022 test functions. It can be seen that IPO converges faster than the other algorithms in most cases. In the early stage of the iteration process, IPO can quickly identify better solutions and avoid local optima, as on F2–F6 and F11. Meanwhile, in the subsequent iterations, the accuracy of the solution is continuously improved, as on F1 and F6, which proves the effectiveness of the applied improvement strategies. Therefore, IPO shows excellent exploration and exploitation abilities for solving the CEC2022 problems.
Figure 4 presents the boxplots of all algorithms on each CEC2022 test function. It is shown that the boxplots of IPO on F1, F6, F9, F11, and F12 are noticeably narrower than other algorithms, indicating its good stability when solving these problems. In addition, the medians of boxplots for IPO on F1, F5, F9, F11, and F12 are also significantly lower than other methods, showing better accuracy. In other cases, IPO also presents competitive results. Thus, IPO has the merits for solving these optimization problems.

4.2. Case 2: Standard Classification Datasets

The second test contains five standard classification datasets: XOR, Balloon, Iris, Breast cancer, and Heart [32]. The details of these datasets are provided in Table 10. MLP models are applied to solve these classification problems; each classification dataset has a corresponding MLP with a different structure and parameter count.
The training results of IPO-MLP and the other compared methods are shown in Table 11, with the best mean results marked in bold. It is found that IPO-MLP obtains the best mean results among these methods and ranks first in the final ranking. In particular, IPO-MLP achieves significantly lower MSE results on the Balloon dataset. Therefore, IPO-MLP exhibits superior performance in the training of MLP models.
Table 12 gives the classification accuracy results, with the best mean results marked in bold. It is observed that although the training results of IPO-MLP are the best, its classification accuracy is not the highest on the Breast cancer and Heart datasets. Overall, IPO-MLP obtains competitive results on these five classification datasets, and the Friedman ranking results show that IPO-MLP is the best.

4.3. Case 3: Oral English Education Evaluation Problem

In the third case, the proposed IPO is used to solve the oral English education evaluation problem. The oral English education evaluation problem can be regarded as a classification problem with multiple features. In this paper, the evaluation model is constructed using a multilayer perceptron model with a 10-21-3 structure. Then, the weights and biases of this MLP model are optimized by using the proposed IPO. The results are shown below.

4.3.1. Indexes of Oral English Teaching Quality Evaluation Model

To better evaluate oral English teaching quality, the indexes of the oral English teaching quality evaluation problem are selected according to the work in [19]. The index system is constructed as shown in Figure 5, with five first-grade indexes: teacher qualities, teaching attitude, teaching content, teaching method, and teaching effectiveness. Each first-grade index contains one or more second-grade indexes. These elements play a leading role in oral English teaching evaluation and ensure a scientific and reasonable teaching quality evaluation system.
As can be seen in Figure 5, ten features in the oral English education evaluation problem are selected to determine the evaluation outcomes of teachers. The evaluation results are divided into three cases, which are excellent, good, and qualified. Therefore, an MLP model with a 10-21-3 structure is constructed to find the relationship between the indexes and evaluation outcomes. The node number of the hidden layer is determined by an empirical formula 2 × n + 1 [33], where n is the number of input parameters.
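For example, with n = 10 input indexes, the formula gives 2 × 10 + 1 = 21 hidden nodes; the resulting 10-21-3 network contains 10 × 21 + 21 × 3 = 273 weights and 21 + 3 = 24 biases, so each IPO agent encodes a 297-dimensional parameter vector.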

4.3.2. Analysis of Testing Results

For the experiments, a total of 60 groups of oral English quality evaluation data were collected, of which 20 are excellent, 20 are good, and 20 are qualified. The proposed IPO and the other six compared methods are employed to train the constructed MLP models. The experiments are conducted 30 times. The results are shown in Figure 6, Table 13, and Table 14.
Figure 6 shows the convergence curves of average MSE during the training process. It is observed that IPO consistently finds smaller MSE values throughout the whole process and obtains the smallest value in the end, while the original PO shows poor search ability on this problem. HHO and AO also show good optimization results. Table 13 provides the training results of all algorithms. It can be found that IPO has the best performance for training the evaluation model: in terms of the mean value index, IPO-MLP obtains the lowest value of 4.111 × 10^−2 and ranks first among all algorithms.
Meanwhile, the accuracy results are given in Table 14. It can be seen that IPO-MLP also ranks first with a mean accuracy of 70.22% and obtains the highest accuracy of 88.33%. Therefore, the proposed IPO has merit for solving the oral English teaching quality evaluation problem. By using the proposed IPO-MLP, decision-makers can better allocate teaching resources based on an accurate assessment of teachers’ teaching levels.

5. Conclusions and Future Work

In this paper, the aerial search strategy, a modified staying behavior, and roulette fitness–distance balance selection are used to improve the basic PO for better optimization performance. The proposed IPO was tested on twelve CEC2022 test functions and applied to optimize the parameters of MLP models for classification problems. The results show that IPO significantly outperformed six other advanced algorithms: PO, HHO, SCA, OOA, AOA, and AO. An evaluation model of oral English teaching quality was also constructed using an MLP with a 10-21-3 structure, and IPO was used to optimize its weight and bias parameters. The results show that the IPO-MLP model can more accurately evaluate the outcomes of oral English teaching quality and obtained the highest accuracy of 88.33%.
In future work, the suggested optimizer can be extended to tackle a broader range of complex optimization problems, such as robotic path planning, feature selection in high-dimensional datasets, and dynamic scheduling tasks. Additionally, the IPO-MLP framework can be integrated with other metaheuristic algorithms to enhance convergence speed and solution quality. Another promising direction is the automatic optimization of MLP architecture, including activation function selection, hidden layer configuration, and neuron count, potentially using self-adaptive or co-evolutionary strategies. These enhancements can further improve the robustness and generalization ability of the proposed approach across diverse application domains.

Author Contributions

Conceptualization, F.L. and C.D.; methodology, F.L. and R.Z.; software, F.L. and R.Z.; formal analysis, C.D.; investigation, F.L.; writing—original draft preparation, F.L., A.G.H., and R.Z.; writing—review and editing, F.L., A.G.H., and R.Z.; visualization, F.L.; supervision, C.D.; project administration, F.L.; funding acquisition, F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Educational Scientific Research Project for Young and Middle-aged Teachers in Fujian Province (JSZW22045).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request.

Acknowledgments

We appreciate the time the editors and reviewers spent on the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  2. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  3. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  4. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  5. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  6. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  7. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime Mould Algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  8. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  9. Bouaouda, A.; Hashim, F.A.; Sayouti, Y.; Hussien, A.G. Pied kingfisher optimizer: A new bio-inspired algorithm for solving numerical optimization and industrial engineering problems. Neural Comput. Appl. 2024, 36, 15455–15513. [Google Scholar] [CrossRef]
  10. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  11. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  12. Hui, Y. Evaluation of Blended Oral English Teaching Based on the Mixed Model of SPOC and Deep Learning. Sci. Program. 2021, 2021, 7044779. [Google Scholar] [CrossRef]
  13. Zhang, M.Y. English teaching quality evaluation based on principal component analysis and support vector machine. Mod. Electron. Technol. 2018, 41, 178–182. [Google Scholar]
  14. Zhou, S.R.; Tan, B. Electrocardiogram soft computing using hybrid deep learning CNN-ELM. Appl. Soft Comput. 2019, 86, 105778. [Google Scholar] [CrossRef]
  15. Liu, Y.G.; Xiong, G. Research on the evaluation of higher vocational computer course teaching quality based on FAHP. Inf. Technol. 2016, 5, 147–149, 153. [Google Scholar]
  16. Lu, C.; He, B.; Zhang, R. Evaluation of English interpretation teaching quality based on GA optimized RBF neural network. J. Intell. Fuzzy Syst. 2021, 40, 3185–3192. [Google Scholar] [CrossRef]
  17. Wei, C.Y.; Tsai, S.B. Evaluation Model of College English Teaching Effect Based on Particle Swarm Algorithm and Support Vector Machine. Math. Probl. Eng. 2022, 2022, 7132900. [Google Scholar] [CrossRef]
  18. Zhang, B.F. Evaluation and Optimization of College English Teaching Effect Based on Improved Support Vector Machine Algorithm. Sci. Program. 2022, 2022, 3124135.1–3124135.9. [Google Scholar]
  19. Tan, M.D.; Qu, L.D. Evaluation of oral English teaching quality based on BP neural network optimized by improved crow search algorithm. J. Intell. Fuzzy Syst. 2023, 45, 11909–11924. [Google Scholar] [CrossRef]
  20. Miao, L.; Zhou, Q. Evaluation Method of Oral English Digital Teaching Quality Based on Decision Tree Algorithm. In Proceedings of the e-Learning, e-Education, and Online Training, Harbin, China, 9–10 July 2022; Fu, W., Sun, G., Eds.; Springer: Cham, Switzerland, 2022; Volume 454, pp. 388–400. [Google Scholar]
  21. Lian, J.B.; Hui, G.H.; Ma, L.; Zhu, T.; Wu, X.C.; Heidari, A.A.; Chen, Y.; Chen, H.L. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef]
  22. Meng, X.Q.; Jiang, J.H.; Wang, H. AGWO: Advanced GWO in multi-layer perception optimization. Expert Syst. Appl. 2021, 173, 114676. [Google Scholar] [CrossRef]
  23. Wang, W.C.; Tian, W.C.; Xu, D.M.; Zang, H.F. Arctic puffin optimization: A bio-inspired metaheuristic algorithm for solving engineering design optimization. Adv. Eng. Softw. 2024, 195, 103694. [Google Scholar] [CrossRef]
  24. Kahraman, H.T.; Aras, S.; Gedikli, E. Fitness-distance balance (FDB): A new selection method for meta-heuristic search algorithms. Knowl.-Based Syst. 2020, 190, 105169. [Google Scholar] [CrossRef]
  25. Bakır, H. Dynamic fitness-distance balance-based artificial rabbits optimization algorithm to solve optimal power flow problem. Expert Syst. Appl. 2024, 240, 122460. [Google Scholar] [CrossRef]
  26. Kumar, A.; Price, K.V.; Mohamed, A.W.; Hadi, A.A.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2022 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization; Technical Report; Indian Institute of Technology: Varanasi, India, 2021. [Google Scholar]
  27. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H.L. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  28. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  29. Dehghani, M.; Trojovský, P. Osprey optimization algorithm: A new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Front. Mech. Eng. 2023, 8, 1126450. [Google Scholar] [CrossRef]
  30. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  31. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024, 284, 111257. [Google Scholar] [CrossRef]
  32. Büşra, I.; Murat, K.; Şaban, G. An improved butterfly optimization algorithm for training the feedforward artificial neural networks. Soft Comput. 2023, 27, 3887–3905. [Google Scholar]
  33. Bansal, P.; Gupta, S.; Kumar, S.; Sharma, S.; Sharma, S. MLP-LOA: A metaheuristic approach to design an optimal multilayer perceptron. Soft Comput. 2019, 23, 12331–12345. [Google Scholar] [CrossRef]
Figure 1. Structure of MLP model: (a) MLP with 3-4-3 structure. (b) MLP with 3-4-4-3 structure.
Figure 2. The flowchart of IPO.
Figure 3. Convergence curves of all algorithms on CEC2022 test functions.
Figure 4. Boxplots of all algorithms on CEC2022 test functions.
Figure 5. Oral English teaching quality evaluation index system.
Figure 6. The convergence curves of average MSE for all algorithms.
Table 1. Overview of recent literature studies on English teaching quality evaluation.
Author | Year | Algorithm | Model | Remarks
Zhang [13] | 2018 | Principal component analysis | Support vector machine | English teaching quality evaluation
Lu [16] | 2021 | Genetic algorithm | RBF neural network | Evaluation of English interpretation teaching quality
Wei [17] | 2022 | Improved quantum particle swarm algorithm | Support vector machine | Classification of college English teaching effects
Zhang [18] | 2022 | Particle swarm algorithm | Least squares support vector machine | Evaluation of college English teaching effect
Tan [19] | 2023 | Improved crow search algorithm | BP neural network | Evaluation of oral English teaching quality
Miao [20] | 2023 | Decision tree algorithm | - | Evaluation of the teaching effect of oral English teaching
Table 2. The parameter settings of the proposed algorithm and the compared algorithms.
Algorithms | Year | Parameters
IPO | - | P ∈ [0, 1]
PO [21] | 2024 | P ∈ [0, 1]
HHO [27] | 2019 | β = 1.5; E0 ∈ [−1, 1]
SCA [28] | 2016 | a is linearly decreased from 2 to 0
OOA [29] | 2023 | r1 ∈ [0, 1]
AOA [5] | 2021 | α = 5; µ = 0.499
AO [30] | 2021 | α = 0.1; δ = 0.1
Table 3. The detailed information on the CEC2022 test functions.
Function Type | No. | Description | Range | Fmin
Unimodal function | F1 | Shifted and full Rotated Zakharov Function | [−100, 100] | 300
Simple multimodal functions | F2 | Shifted and full Rotated Rosenbrock’s Function | [−100, 100] | 400
 | F3 | Shifted and full Rotated Expanded Schaffer’s f6 Function | [−100, 100] | 600
 | F4 | Shifted and full Rotated Non-Continuous Rastrigin’s Function | [−100, 100] | 800
 | F5 | Shifted and full Rotated Levy Function | [−100, 100] | 900
Hybrid functions | F6 | Hybrid Function 1 (N = 3) | [−100, 100] | 1800
 | F7 | Hybrid Function 2 (N = 6) | [−100, 100] | 2000
 | F8 | Hybrid Function 3 (N = 5) | [−100, 100] | 2200
Composition functions | F9 | Composition Function 1 (N = 5) | [−100, 100] | 2300
 | F10 | Composition Function 2 (N = 4) | [−100, 100] | 2400
 | F11 | Composition Function 3 (N = 5) | [−100, 100] | 2600
 | F12 | Composition Function 4 (N = 6) | [−100, 100] | 2700
Table 4. PO and improved PO with different strategies.
Algorithms | Aerial Search | New Staying Behavior | New Communicating Behavior
PO | 0 | 0 | 0
IPO1 | 1 | 0 | 0
IPO2 | 0 | 1 | 0
IPO3 | 0 | 0 | 1
IPO4 | 1 | 1 | 0
IPO5 | 1 | 0 | 1
IPO6 | 0 | 1 | 1
IPO7 | 1 | 1 | 1
Table 5. Sensitivity analysis of PO and its improved versions.
Function | PO | IPO1 | IPO2 | IPO3 | IPO4 | IPO5 | IPO6 | IPO7
F1 | 1.205 × 10^4 | 7.841 × 10^3 | 9.183 × 10^3 | 7.080 × 10^3 | 3.926 × 10^3 | 2.296 × 10^3 | 8.056 × 10^3 | 1.525 × 10^3
F2 | 5.733 × 10^2 | 4.935 × 10^2 | 5.793 × 10^2 | 5.327 × 10^2 | 4.616 × 10^2 | 4.542 × 10^2 | 4.925 × 10^2 | 4.151 × 10^2
F3 | 6.366 × 10^2 | 6.322 × 10^2 | 6.394 × 10^2 | 6.378 × 10^2 | 6.230 × 10^2 | 6.274 × 10^2 | 6.365 × 10^2 | 6.150 × 10^2
F4 | 8.700 × 10^2 | 8.813 × 10^2 | 9.006 × 10^2 | 8.761 × 10^2 | 8.389 × 10^2 | 8.703 × 10^2 | 8.912 × 10^2 | 8.458 × 10^2
F5 | 2.252 × 10^3 | 2.060 × 10^3 | 2.186 × 10^3 | 1.936 × 10^3 | 1.832 × 10^3 | 1.885 × 10^3 | 1.779 × 10^3 | 1.671 × 10^3
F6 | 3.097 × 10^4 | 4.143 × 10^3 | 1.507 × 10^5 | 7.059 × 10^4 | 2.472 × 10^3 | 2.160 × 10^3 | 2.245 × 10^4 | 2.215 × 10^3
F7 | 2.132 × 10^3 | 2.112 × 10^3 | 2.087 × 10^3 | 2.093 × 10^3 | 2.117 × 10^3 | 2.067 × 10^3 | 2.118 × 10^3 | 2.059 × 10^3
F8 | 2.237 × 10^3 | 2.228 × 10^3 | 2.232 × 10^3 | 2.231 × 10^3 | 2.230 × 10^3 | 2.230 × 10^3 | 2.235 × 10^3 | 2.227 × 10^3
F9 | 2.514 × 10^3 | 2.483 × 10^3 | 2.528 × 10^3 | 2.498 × 10^3 | 2.481 × 10^3 | 2.482 × 10^3 | 2.496 × 10^3 | 2.481 × 10^3
F10 | 2.501 × 10^3 | 2.501 × 10^3 | 2.501 × 10^3 | 2.501 × 10^3 | 2.501 × 10^3 | 2.501 × 10^3 | 2.501 × 10^3 | 2.501 × 10^3
F11 | 4.552 × 10^3 | 3.763 × 10^3 | 4.300 × 10^3 | 4.184 × 10^3 | 2.850 × 10^3 | 2.786 × 10^3 | 4.665 × 10^3 | 2.625 × 10^3
F12 | 2.968 × 10^3 | 2.942 × 10^3 | 2.963 × 10^3 | 2.954 × 10^3 | 2.942 × 10^3 | 2.946 × 10^3 | 2.973 × 10^3 | 2.950 × 10^3
The bold values indicate the best values.
Table 6. Numerical results of IPO and other compared algorithms on CEC2022 test functions.
Function | Item | IPO | PO | HHO | SCA | OOA | AOA | AO
F1 | Mean | 6.184 × 10^3 | 1.949 × 10^4 | 2.778 × 10^4 | 1.886 × 10^4 | 5.002 × 10^4 | 3.783 × 10^4 | 6.688 × 10^4
F1 | Best | 1.525 × 10^3 | 1.205 × 10^4 | 1.441 × 10^4 | 7.572 × 10^3 | 2.513 × 10^4 | 1.870 × 10^4 | 2.326 × 10^4
F1 | Worst | 1.249 × 10^4 | 2.970 × 10^4 | 4.631 × 10^4 | 3.451 × 10^4 | 8.921 × 10^4 | 6.266 × 10^4 | 1.030 × 10^5
F1 | Std | 3.255 × 10^3 | 4.859 × 10^3 | 8.889 × 10^3 | 5.317 × 10^3 | 1.523 × 10^4 | 1.243 × 10^4 | 1.920 × 10^4
F2 | Mean | 4.843 × 10^2 | 6.911 × 10^2 | 5.458 × 10^2 | 8.207 × 10^2 | 2.946 × 10^3 | 2.587 × 10^3 | 5.909 × 10^2
F2 | Best | 4.151 × 10^2 | 5.586 × 10^2 | 4.758 × 10^2 | 6.461 × 10^2 | 1.889 × 10^3 | 1.312 × 10^3 | 4.872 × 10^2
F2 | Worst | 6.280 × 10^2 | 1.020 × 10^3 | 6.360 × 10^2 | 1.154 × 10^3 | 4.530 × 10^3 | 5.251 × 10^3 | 7.844 × 10^2
F2 | Std | 4.196 × 10^1 | 1.206 × 10^2 | 4.525 × 10^1 | 1.260 × 10^2 | 7.930 × 10^2 | 9.187 × 10^2 | 6.130 × 10^1
F3 | Mean | 6.468 × 10^2 | 6.650 × 10^2 | 6.654 × 10^2 | 6.494 × 10^2 | 6.752 × 10^2 | 6.669 × 10^2 | 6.466 × 10^2
F3 | Best | 6.150 × 10^2 | 6.366 × 10^2 | 6.439 × 10^2 | 6.392 × 10^2 | 6.603 × 10^2 | 6.412 × 10^2 | 6.318 × 10^2
F3 | Worst | 6.730 × 10^2 | 6.843 × 10^2 | 6.794 × 10^2 | 6.597 × 10^2 | 6.942 × 10^2 | 6.863 × 10^2 | 6.699 × 10^2
F3 | Std | 1.497 × 10^1 | 1.160 × 10^1 | 8.112 × 10^0 | 6.388 × 10^0 | 8.921 × 10^0 | 1.025 × 10^1 | 9.346 × 10^0
F4 | Mean | 8.849 × 10^2 | 9.144 × 10^2 | 8.862 × 10^2 | 9.590 × 10^2 | 9.741 × 10^2 | 9.586 × 10^2 | 8.747 × 10^2
F4 | Best | 8.458 × 10^2 | 8.700 × 10^2 | 8.394 × 10^2 | 9.198 × 10^2 | 9.435 × 10^2 | 9.115 × 10^2 | 8.507 × 10^2
F4 | Worst | 9.134 × 10^2 | 9.546 × 10^2 | 9.083 × 10^2 | 1.003 × 10^3 | 1.013 × 10^3 | 9.907 × 10^2 | 9.006 × 10^2
F4 | Std | 1.538 × 10^1 | 1.813 × 10^1 | 1.312 × 10^1 | 1.818 × 10^1 | 1.698 × 10^1 | 1.945 × 10^1 | 1.273 × 10^1
F5 | Mean | 2.308 × 10^3 | 2.856 × 10^3 | 3.037 × 10^3 | 3.009 × 10^3 | 3.057 × 10^3 | 3.083 × 10^3 | 2.699 × 10^3
F5 | Best | 1.671 × 10^3 | 2.234 × 10^3 | 2.460 × 10^3 | 2.140 × 10^3 | 2.158 × 10^3 | 2.477 × 10^3 | 1.975 × 10^3
F5 | Worst | 2.845 × 10^3 | 3.577 × 10^3 | 3.892 × 10^3 | 4.381 × 10^3 | 4.163 × 10^3 | 4.262 × 10^3 | 3.742 × 10^3
F5 | Std | 3.376 × 10^2 | 3.425 × 10^2 | 3.727 × 10^2 | 5.836 × 10^2 | 5.307 × 10^2 | 3.962 × 10^2 | 4.439 × 10^2
F6 | Mean | 6.301 × 10^3 | 1.573 × 10^7 | 2.003 × 10^5 | 1.565 × 10^8 | 2.642 × 10^9 | 1.622 × 10^9 | 6.389 × 10^5
F6 | Best | 2.116 × 10^3 | 2.660 × 10^4 | 7.384 × 10^4 | 4.061 × 10^7 | 2.125 × 10^8 | 2.472 × 10^7 | 5.628 × 10^4
F6 | Worst | 1.997 × 10^4 | 1.467 × 10^8 | 6.035 × 10^5 | 3.427 × 10^8 | 6.325 × 10^9 | 4.011 × 10^9 | 4.598 × 10^6
F6 | Std | 4.489 × 10^3 | 2.812 × 10^7 | 1.056 × 10^5 | 9.833 × 10^7 | 1.415 × 10^9 | 1.106 × 10^9 | 8.337 × 10^5
F7 | Mean | 2.160 × 10^3 | 2.194 × 10^3 | 2.188 × 10^3 | 2.163 × 10^3 | 2.198 × 10^3 | 2.237 × 10^3 | 2.133 × 10^3
F7 | Best | 2.059 × 10^3 | 2.132 × 10^3 | 2.089 × 10^3 | 2.109 × 10^3 | 2.110 × 10^3 | 2.123 × 10^3 | 2.075 × 10^3
F7 | Worst | 2.434 × 10^3 | 2.522 × 10^3 | 2.379 × 10^3 | 2.215 × 10^3 | 2.317 × 10^3 | 2.439 × 10^3 | 2.203 × 10^3
F7 | Std | 7.438 × 10^1 | 6.810 × 10^1 | 7.016 × 10^1 | 2.834 × 10^1 | 4.934 × 10^1 | 9.596 × 10^1 | 2.950 × 10^1
F8 | Mean | 2.278 × 10^3 | 2.273 × 10^3 | 2.319 × 10^3 | 2.283 × 10^3 | 2.396 × 10^3 | 2.924 × 10^3 | 2.256 × 10^3
F8 | Best | 2.226 × 10^3 | 2.236 × 10^3 | 2.235 × 10^3 | 2.248 × 10^3 | 2.241 × 10^3 | 2.255 × 10^3 | 2.232 × 10^3
F8 | Worst | 2.403 × 10^3 | 2.368 × 10^3 | 2.601 × 10^3 | 2.354 × 10^3 | 2.820 × 10^3 | 5.963 × 10^3 | 2.359 × 10^3
F8 | Std | 5.843 × 10^1 | 4.383 × 10^1 | 1.157 × 10^2 | 2.381 × 10^1 | 1.622 × 10^2 | 1.017 × 10^3 | 3.545 × 10^1
F9 | Mean | 2.485 × 10^3 | 2.608 × 10^3 | 2.558 × 10^3 | 2.609 × 10^3 | 3.541 × 10^3 | 3.242 × 10^3 | 2.616 × 10^3
F9 | Best | 2.481 × 10^3 | 2.510 × 10^3 | 2.496 × 10^3 | 2.546 × 10^3 | 2.902 × 10^3 | 2.799 × 10^3 | 2.532 × 10^3
F9 | Worst | 2.493 × 10^3 | 2.698 × 10^3 | 2.715 × 10^3 | 2.645 × 10^3 | 4.923 × 10^3 | 3.789 × 10^3 | 2.725 × 10^3
F9 | Std | 3.557 × 10^0 | 5.436 × 10^1 | 5.194 × 10^1 | 2.606 × 10^1 | 5.358 × 10^2 | 2.493 × 10^2 | 5.203 × 10^1
F10 | Mean | 3.680 × 10^3 | 2.700 × 10^3 | 4.171 × 10^3 | 4.028 × 10^3 | 5.627 × 10^3 | 5.670 × 10^3 | 4.180 × 10^3
F10 | Best | 2.501 × 10^3 | 2.501 × 10^3 | 2.501 × 10^3 | 2.519 × 10^3 | 2.579 × 10^3 | 2.780 × 10^3 | 2.501 × 10^3
F10 | Worst | 5.181 × 10^3 | 5.457 × 10^3 | 5.767 × 10^3 | 6.969 × 10^3 | 7.359 × 10^3 | 7.175 × 10^3 | 6.202 × 10^3
F10 | Std | 8.421 × 10^2 | 6.670 × 10^2 | 9.453 × 10^2 | 1.953 × 10^3 | 1.460 × 10^3 | 1.218 × 10^3 | 1.283 × 10^3
F11 | Mean | 3.001 × 10^3 | 5.945 × 10^3 | 3.684 × 10^3 | 7.217 × 10^3 | 8.982 × 10^3 | 9.271 × 10^3 | 4.261 × 10^3
F11 | Best | 2.624 × 10^3 | 4.552 × 10^3 | 3.057 × 10^3 | 5.914 × 10^3 | 6.490 × 10^3 | 6.698 × 10^3 | 3.336 × 10^3
F11 | Worst | 3.481 × 10^3 | 7.952 × 10^3 | 5.192 × 10^3 | 9.309 × 10^3 | 9.882 × 10^3 | 2.082 × 10^4 | 5.502 × 10^3
F11 | Std | 2.001 × 10^2 | 8.775 × 10^2 | 6.229 × 10^2 | 8.595 × 10^2 | 7.743 × 10^2 | 2.739 × 10^3 | 6.180 × 10^2
F12 | Mean | 2.999 × 10^3 | 3.036 × 10^3 | 3.229 × 10^3 | 3.085 × 10^3 | 3.936 × 10^3 | 3.886 × 10^3 | 3.061 × 10^3
F12 | Best | 2.950 × 10^3 | 2.968 × 10^3 | 3.011 × 10^3 | 3.014 × 10^3 | 3.526 × 10^3 | 3.392 × 10^3 | 2.989 × 10^3
F12 | Worst | 3.400 × 10^3 | 3.195 × 10^3 | 3.592 × 10^3 | 3.172 × 10^3 | 4.416 × 10^3 | 4.429 × 10^3 | 3.126 × 10^3
F12 | Std | 8.243 × 10^1 | 4.866 × 10^1 | 1.475 × 10^2 | 4.117 × 10^1 | 2.725 × 10^2 | 2.795 × 10^2 | 3.982 × 10^1
The bold values indicate the best values.
Table 7. Friedman ranking results of all algorithms on CEC2022 test functions.
Function | IPO | PO | HHO | SCA | OOA | AOA | AO
F1 | 1 | 3 | 4 | 2 | 6 | 5 | 7
F2 | 1 | 4 | 2 | 5 | 7 | 6 | 3
F3 | 2 | 4 | 5 | 3 | 7 | 6 | 1
F4 | 2 | 4 | 3 | 6 | 7 | 5 | 1
F5 | 1 | 3 | 5 | 4 | 6 | 7 | 2
F6 | 1 | 4 | 2 | 5 | 7 | 6 | 3
F7 | 2 | 5 | 4 | 3 | 6 | 7 | 1
F8 | 3 | 2 | 5 | 4 | 6 | 7 | 1
F9 | 1 | 3 | 2 | 4 | 7 | 6 | 5
F10 | 2 | 1 | 4 | 3 | 6 | 7 | 5
F11 | 1 | 4 | 2 | 5 | 6 | 7 | 3
F12 | 1 | 2 | 5 | 4 | 7 | 6 | 3
Mean Rank | 1.50 | 3.25 | 3.58 | 4.00 | 6.50 | 6.25 | 2.92
Final Rank | 1 | 3 | 4 | 5 | 7 | 6 | 2
Table 8. Wilcoxon signed-rank test p-values results of IPO compared to other algorithms on CEC2022 test functions.
Function | vs. PO | vs. HHO | vs. SCA | vs. OOA | vs. AOA | vs. AO
F1 | 4.143 × 10^−6 | 3.392 × 10^−6 | 6.152 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6
F2 | 9.073 × 10^−6 | 1.140 × 10^−2 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 1.146 × 10^−4
F3 | 4.937 × 10^−4 | 1.050 × 10^−3 | 4.553 × 10^−1 | 1.330 × 10^−5 | 1.866 × 10^−3 | 2.455 × 10^−1
F4 | 4.020 × 10^−5 | 8.357 × 10^−1 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 1.057 × 10^−1
F5 | 1.866 × 10^−3 | 4.020 × 10^−5 | 3.691 × 10^−3 | 2.229 × 10^−4 | 1.146 × 10^−4 | 2.463 × 10^−3
F6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6
F7 | 6.783 × 10^−1 | 1.844 × 10^−1 | 3.195 × 10^−1 | 6.187 × 10^−1 | 2.998 × 10^−1 | 3.440 × 10^−2
F8 | 6.783 × 10^−1 | 9.669 × 10^−1 | 6.783 × 10^−1 | 5.452 × 10^−3 | 4.937 × 10^−4 | 7.089 × 10^−1
F9 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6
F10 | 1.440 × 10^−2 | 3.837 × 10^−1 | 9.339 × 10^−1 | 6.709 × 10^−4 | 1.892 × 10^−4 | 1.440 × 10^−2
F11 | 3.392 × 10^−6 | 2.798 × 10^−5 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6 | 3.392 × 10^−6
F12 | 1.807 × 10^−2 | 1.330 × 10^−5 | 4.020 × 10^−5 | 3.392 × 10^−6 | 3.392 × 10^−6 | 9.059 × 10^−4
+/=/− | 9/2/1 | 8/4/0 | 8/4/0 | 8/4/0 | 8/4/0 | 8/3/1
Table 9. The average computational time for each algorithm on CEC2022 test functions.
Function | IPO | PO | HHO | SCA | OOA | AOA | AO
F1 | 1.589 × 10^−1 | 1.311 × 10^−1 | 1.848 × 10^−1 | 1.219 × 10^−1 | 1.359 × 10^−1 | 1.165 × 10^−1 | 2.057 × 10^−1
F2 | 1.562 × 10^−1 | 1.346 × 10^−1 | 1.786 × 10^−1 | 1.280 × 10^−1 | 1.404 × 10^−1 | 1.191 × 10^−1 | 2.017 × 10^−1
F3 | 2.174 × 10^−1 | 1.930 × 10^−1 | 3.393 × 10^−1 | 1.834 × 10^−1 | 2.550 × 10^−1 | 1.691 × 10^−1 | 3.094 × 10^−1
F4 | 1.687 × 10^−1 | 1.532 × 10^−1 | 2.316 × 10^−1 | 1.425 × 10^−1 | 1.649 × 10^−1 | 1.261 × 10^−1 | 2.305 × 10^−1
F5 | 1.700 × 10^−1 | 1.464 × 10^−1 | 2.399 × 10^−1 | 1.414 × 10^−1 | 1.706 × 10^−1 | 1.297 × 10^−1 | 2.329 × 10^−1
F6 | 1.560 × 10^−1 | 1.373 × 10^−1 | 2.034 × 10^−1 | 1.316 × 10^−1 | 1.457 × 10^−1 | 1.193 × 10^−1 | 2.043 × 10^−1
F7 | 2.315 × 10^−1 | 2.135 × 10^−1 | 4.130 × 10^−1 | 2.111 × 10^−1 | 2.955 × 10^−1 | 1.902 × 10^−1 | 3.616 × 10^−1
F8 | 2.459 × 10^−1 | 2.293 × 10^−1 | 4.406 × 10^−1 | 2.347 × 10^−1 | 3.348 × 10^−1 | 2.095 × 10^−1 | 4.006 × 10^−1
F9 | 1.856 × 10^−1 | 1.728 × 10^−1 | 3.057 × 10^−1 | 1.685 × 10^−1 | 2.407 × 10^−1 | 1.569 × 10^−1 | 2.968 × 10^−1
F10 | 1.638 × 10^−1 | 1.548 × 10^−1 | 2.709 × 10^−1 | 1.565 × 10^−1 | 2.081 × 10^−1 | 1.381 × 10^−1 | 2.549 × 10^−1
F11 | 1.927 × 10^−1 | 1.804 × 10^−1 | 3.216 × 10^−1 | 1.840 × 10^−1 | 2.652 × 10^−1 | 1.733 × 10^−1 | 3.218 × 10^−1
F12 | 2.139 × 10^−1 | 1.980 × 10^−1 | 3.667 × 10^−1 | 2.016 × 10^−1 | 3.029 × 10^−1 | 1.857 × 10^−1 | 3.558 × 10^−1
Table 10. The detailed information of classification datasets.
Datasets | Number of Features | Number of Training Samples | Number of Test Samples | Number of Classes | MLP Structure | Dimension | Search Range
XOR | 3 | 8 | 8 | 2 | 3-7-1 | 36 | [−10, 10]
Balloon | 4 | 20 | 20 | 2 | 4-9-1 | 55 | [−10, 10]
Iris | 4 | 150 | 150 | 3 | 4-9-3 | 75 | [−10, 10]
Breast cancer | 9 | 599 | 100 | 2 | 9-19-1 | 210 | [−10, 10]
Heart | 22 | 80 | 187 | 2 | 22-45-1 | 1081 | [−10, 10]
Table 11. MSE results on training data.
Datasets | Index | IPO-MLP | PO-MLP | HHO-MLP | SCA-MLP | OOA-MLP | AOA-MLP | AO-MLP
XOR | Mean | 1.186 × 10^−2 | 1.508 × 10^−1 | 3.865 × 10^−2 | 4.753 × 10^−2 | 1.942 × 10^−1 | 2.098 × 10^−1 | 4.933 × 10^−2
XOR | Best | 2.274 × 10^−9 | 4.259 × 10^−8 | 5.663 × 10^−8 | 3.263 × 10^−3 | 1.216 × 10^−1 | 1.367 × 10^−1 | 9.035 × 10^−9
XOR | Worst | 2.500 × 10^−1 | 2.500 × 10^−1 | 2.143 × 10^−1 | 1.093 × 10^−1 | 2.500 × 10^−1 | 2.500 × 10^−1 | 2.500 × 10^−1
XOR | Std | 4.887 × 10^−2 | 1.029 × 10^−1 | 6.754 × 10^−2 | 3.419 × 10^−2 | 3.971 × 10^−2 | 3.530 × 10^−2 | 7.784 × 10^−2
XOR | Rank | 1 | 5 | 2 | 3 | 6 | 7 | 4
Balloon | Mean | 3.939 × 10^−11 | 3.739 × 10^−6 | 3.058 × 10^−6 | 1.095 × 10^−5 | 5.539 × 10^−2 | 5.416 × 10^−3 | 8.642 × 10^−7
Balloon | Best | 2.280 × 10^−22 | 1.397 × 10^−15 | 1.568 × 10^−19 | 6.161 × 10^−9 | 3.989 × 10^−4 | 5.130 × 10^−9 | 1.634 × 10^−18
Balloon | Worst | 1.181 × 10^−9 | 4.288 × 10^−5 | 6.335 × 10^−5 | 1.219 × 10^−4 | 1.524 × 10^−1 | 5.789 × 10^−2 | 1.385 × 10^−5
Balloon | Std | 2.157 × 10^−10 | 9.062 × 10^−6 | 1.250 × 10^−5 | 2.341 × 10^−5 | 3.975 × 10^−2 | 1.217 × 10^−2 | 3.286 × 10^−6
Balloon | Rank | 1 | 4 | 3 | 5 | 7 | 6 | 2
Iris | Mean | 3.541 × 10^−2 | 1.013 × 10^−1 | 7.539 × 10^−2 | 1.965 × 10^−1 | 4.756 × 10^−1 | 3.509 × 10^−1 | 5.743 × 10^−2
Iris | Best | 1.549 × 10^−2 | 4.374 × 10^−2 | 3.090 × 10^−2 | 7.834 × 10^−2 | 1.853 × 10^−1 | 2.276 × 10^−1 | 2.375 × 10^−2
Iris | Worst | 1.060 × 10^−1 | 2.896 × 10^−1 | 3.745 × 10^−1 | 3.427 × 10^−1 | 6.657 × 10^−1 | 4.835 × 10^−1 | 3.511 × 10^−1
Iris | Std | 1.871 × 10^−2 | 5.048 × 10^−2 | 8.281 × 10^−2 | 6.464 × 10^−2 | 1.099 × 10^−1 | 7.280 × 10^−2 | 8.078 × 10^−2
Iris | Rank | 1 | 4 | 3 | 5 | 7 | 6 | 2
Breast cancer | Mean | 1.499 × 10^−3 | 1.729 × 10^−3 | 1.887 × 10^−3 | 1.388 × 10^−2 | 2.078 × 10^−3 | 4.708 × 10^−3 | 1.985 × 10^−3
Breast cancer | Best | 1.329 × 10^−3 | 1.486 × 10^−3 | 1.618 × 10^−3 | 4.188 × 10^−3 | 1.698 × 10^−3 | 2.256 × 10^−3 | 1.639 × 10^−3
Breast cancer | Worst | 1.711 × 10^−3 | 2.270 × 10^−3 | 2.131 × 10^−3 | 3.623 × 10^−2 | 2.941 × 10^−3 | 1.247 × 10^−2 | 2.628 × 10^−3
Breast cancer | Std | 8.498 × 10^−5 | 1.775 × 10^−4 | 1.520 × 10^−4 | 7.986 × 10^−3 | 2.530 × 10^−4 | 2.191 × 10^−3 | 2.365 × 10^−4
Breast cancer | Rank | 1 | 2 | 3 | 7 | 5 | 6 | 4
Heart | Mean | 8.432 × 10^−2 | 1.134 × 10^−1 | 1.226 × 10^−1 | 1.810 × 10^−1 | 1.689 × 10^−1 | 1.533 × 10^−1 | 1.008 × 10^−1
Heart | Best | 6.383 × 10^−2 | 8.461 × 10^−2 | 8.822 × 10^−2 | 1.454 × 10^−1 | 1.493 × 10^−1 | 1.215 × 10^−1 | 6.721 × 10^−2
Heart | Worst | 1.192 × 10^−1 | 1.450 × 10^−1 | 1.699 × 10^−1 | 2.113 × 10^−1 | 1.937 × 10^−1 | 1.798 × 10^−1 | 1.410 × 10^−1
Heart | Std | 1.233 × 10^−2 | 1.592 × 10^−2 | 2.094 × 10^−2 | 1.678 × 10^−2 | 1.051 × 10^−2 | 1.436 × 10^−2 | 2.134 × 10^−2
Heart | Rank | 1 | 3 | 4 | 7 | 6 | 5 | 2
Mean Rank | | 1 | 3.6 | 3 | 5.4 | 6.2 | 6 | 2.8
Final Rank | | 1 | 4 | 3 | 5 | 7 | 6 | 2
The bold values indicate the best values.
Table 12. Classification accuracy results on test data (%).
Datasets | Index | IPO-MLP | PO-MLP | HHO-MLP | SCA-MLP | OOA-MLP | AOA-MLP | AO-MLP
XOR | Mean | 94.17 | 29.58 | 66.67 | 46.67 | 7.08 | 9.58 | 71.67
XOR | Best | 100.00 | 100.00 | 100.00 | 75.00 | 37.50 | 50.00 | 100.00
XOR | Worst | 0.00 | 0.00 | 0.00 | 12.50 | 0.00 | 0.00 | 0.00
XOR | Std | 22.44 | 40.13 | 36.31 | 17.66 | 10.73 | 13.80 | 33.95
XOR | Rank | 1 | 5 | 3 | 4 | 7 | 6 | 2
Balloon | Mean | 100.00 | 100.00 | 100.00 | 100.00 | 30.67 | 75.00 | 100.00
Balloon | Best | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
Balloon | Worst | 100.00 | 100.00 | 100.00 | 100.00 | 0.00 | 0.00 | 100.00
Balloon | Std | 0.00 | 0.00 | 0.00 | 0.00 | 27.94 | 34.22 | 0.00
Balloon | Rank | 1 | 1 | 1 | 1 | 7 | 6 | 1
Iris | Mean | 75.42 | 38.58 | 48.24 | 36.18 | 1.80 | 2.04 | 74.93
Iris | Best | 92.00 | 79.33 | 86.67 | 64.67 | 33.33 | 24.67 | 89.33
Iris | Worst | 14.67 | 0.00 | 0.00 | 9.33 | 0.00 | 0.00 | 41.33
Iris | Std | 16.24 | 24.02 | 22.44 | 15.63 | 6.32 | 6.21 | 11.88
Iris | Rank | 1 | 4 | 3 | 5 | 7 | 6 | 2
Breast cancer | Mean | 98.57 | 98.53 | 98.83 | 58.17 | 99.13 | 90.00 | 97.97
Breast cancer | Best | 100.00 | 100.00 | 100.00 | 95.00 | 100.00 | 99.00 | 100.00
Breast cancer | Worst | 98.00 | 94.00 | 98.00 | 5.00 | 96.00 | 0.00 | 96.00
Breast cancer | Std | 0.63 | 1.25 | 0.70 | 33.13 | 0.78 | 18.23 | 0.89
Breast cancer | Rank | 4 | 5 | 3 | 7 | 1 | 6 | 2
Heart | Mean | 66.96 | 48.25 | 40.88 | 73.46 | 24.96 | 38.58 | 62.63
Heart | Best | 87.50 | 76.25 | 71.25 | 80.00 | 35.00 | 76.25 | 90.00
Heart | Worst | 38.75 | 3.75 | 5.00 | 65.00 | 0.00 | 5.00 | 32.50
Heart | Std | 12.05 | 21.49 | 16.17 | 3.39 | 8.97 | 19.66 | 14.28
Heart | Rank | 2 | 4 | 5 | 1 | 7 | 6 | 3
Mean Rank | | 1.8 | 3.8 | 3.6 | 3.6 | 5.8 | 6.6 | 2
Final Rank | | 1 | 5 | 3 | 4 | 6 | 7 | 2
The bold values indicate the best values.
Table 13. Comparison of training results.
Index | IPO-MLP | PO-MLP | HHO-MLP | SCA-MLP | OOA-MLP | AOA-MLP | AO-MLP
Best | 1.543 × 10^−2 | 5.685 × 10^−2 | 2.703 × 10^−2 | 7.628 × 10^−2 | 1.634 × 10^−1 | 2.037 × 10^−1 | 2.381 × 10^−2
Mean | 4.111 × 10^−2 | 1.069 × 10^−1 | 6.166 × 10^−2 | 1.919 × 10^−1 | 4.586 × 10^−1 | 3.488 × 10^−1 | 5.172 × 10^−2
Worst | 3.580 × 10^−1 | 2.471 × 10^−1 | 3.486 × 10^−1 | 3.945 × 10^−1 | 6.099 × 10^−1 | 5.521 × 10^−1 | 3.560 × 10^−1
Std | 6.044 × 10^−2 | 5.048 × 10^−2 | 6.139 × 10^−2 | 7.363 × 10^−2 | 1.069 × 10^−1 | 9.440 × 10^−2 | 5.954 × 10^−2
Rank | 1 | 4 | 3 | 5 | 7 | 6 | 2
The bold values indicate the best values.
Table 14. Comparison of accuracy results (%).
Index | IPO-MLP | PO-MLP | HHO-MLP | SCA-MLP | OOA-MLP | AOA-MLP | AO-MLP
Best | 88.33 | 66.67 | 83.33 | 71.67 | 18.33 | 8.33 | 85.00
Mean | 70.22 | 28.44 | 44.56 | 27.67 | 1.28 | 0.39 | 67.39
Worst | 6.67 | 0.00 | 0.00 | 3.33 | 0.00 | 0.00 | 10.00
Std | 18.32 | 23.04 | 24.85 | 20.49 | 3.83 | 1.56 | 18.81
Rank | 1 | 4 | 3 | 5 | 6 | 7 | 2
The bold values indicate the best values.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

