Multi-Strategy Improved Particle Swarm Optimization Algorithm and Gazelle Optimization Algorithm and Application

Abstract: In addressing the challenges of low convergence accuracy and unstable optimization results in the original gazelle optimization algorithm (GOA), this paper proposes a novel approach incorporating chaotic mapping, termed multi-strategy particle swarm optimization with gazelle optimization algorithm (MPSOGOA). In the population initialization stage, piecewise (segmented) mapping is integrated to generate a uniformly distributed, high-quality population that enhances diversity, and a global perturbation of the population is added to improve the convergence speed in early iterations and the convergence accuracy in late iterations. By combining particle swarm optimization (PSO) with GOA, the algorithm leverages the individual experience of each gazelle, which improves convergence accuracy and stability. Tested on 35 benchmark functions, MPSOGOA demonstrates superior convergence accuracy and stability under Friedman tests and Wilcoxon signed-rank tests, surpassing other metaheuristic algorithms. Applied to engineering optimization problems, including constrained implementations, MPSOGOA exhibits excellent optimization performance.


Introduction
Optimization is the process wherein individuals, under certain constraints, employ specific methods and techniques to enhance the performance of existing entities, thereby seeking the optimal solution to a given problem within the solution space. Nowadays, optimization problems are ubiquitous in daily life and engineering technology, serving as popular research topics in fields such as automation, computer science, telecommunications, aerospace, and more.
Heuristic algorithms, which draw inspiration from natural laws, can be broadly classified into the following categories. Physics-based methods, built on principles such as gravity, temperature, and inertia, randomly search for the optimal solution to optimization problems; examples include gravitational search [1], simulated annealing [2], and black hole algorithms [3]. Evolutionary algorithms, grounded in Darwin's theory, gradually discover optimal solutions as the individuals within a population evolve through iterations during the search process. Typical examples include genetic algorithms [4], biogeography-based optimization [5], the artificial algae algorithm [6], the widow optimization algorithm [7], and tabu search [8]. In a population of organisms, each individual has its own role, and communication among individuals enables the acquisition of superior information, ultimately completing the population's evolution. Swarm intelligence optimization algorithms are essentially mathematical models created by researchers to simulate the collective behavior of animals in the natural world.
Inspired by various behaviors exhibited by natural biological populations such as insects, birds, fish, and herds, numerous swarm intelligence optimization algorithms have been proposed and have played a crucial role in many scientific and engineering applications. Mayer Martin Janos et al. used genetic algorithms to find an optimal solution for a home-level hybrid renewable energy system that is both economical and environmentally friendly [9]. Laith Abualigah and Muhammad Alkhrabsheh effectively solved the cloud computing task scheduling problem using a hybrid multi-verse optimizer tuned by a genetic algorithm [10]. G Lodewijks et al. [11] significantly reduced CO2 emissions in an airport baggage handling transportation system by applying the particle swarm optimization (PSO) algorithm. Bilal Hussain [12] proposed decomposition weighting and PSO (DWS-PSO), which provides a new solution for price-driven demand response and home energy management systems with renewable energy and storage scheduling. Paul Kaushik and Hati Debolina [13] applied the Harris hawk optimization algorithm to household energy management, reducing power consumption. Jiang [14] and others utilized the artificial bee colony algorithm for ship structural profile optimization. Abd Elaziz Mohamed and colleagues [15] improved the artificial rabbit optimization algorithm for skin cancer prediction, achieving reliable predictive results. Bishla Sandeep and team [16] employed the chimpanzee optimization algorithm for optimizing the scheduling of batteries in electric vehicles. Percin Hasan Bektas and Caliskan Abuzer [17] utilized the whale optimization algorithm to control fuel cell systems. Jagadish Kumar N. and Balasubramanian C. [18] implemented the widow optimization algorithm for cloud service resource scheduling, effectively reducing the cost of cloud services. Zeng [19] optimized heterogeneous wireless sensor network coverage using the wild horse optimization algorithm, achieving significant coverage and connectivity. Chhabra Amit [20] and others applied the vulture search optimization algorithm to feature selection. Liu and team [21] predicted the lifespan of lithium-ion batteries using an improved sparrow algorithm. Xu and colleagues [22] performed feature selection using the binary arithmetic optimization algorithm.
With the development of heuristic algorithms, integrating different optimization mechanisms and evolutionary characteristics into algorithms, drawing on each other's strengths, and overcoming algorithms' inherent deficiencies have gradually become a new trend in the development of optimization algorithms. Chen [23] and others combined the differential evolution algorithm with the biogeography-based optimization algorithm for the three-dimensional bin packing problem, significantly improving the utilization of box volume. Long and colleagues [24] integrated the bacterial foraging optimization algorithm and the simulated annealing algorithm in local path planning for unmanned vessels, efficiently planning obstacle avoidance paths. Zou and team [25] employed a cross-strategy of the whale optimization algorithm and the genetic algorithm in a cogeneration system, reducing energy consumption. Ramachandran Murugan [26] and others balanced the locust optimization algorithm and the Harris hawk optimization algorithm in the initial and later convergence stages, applying the result to the economic dispatch problem in the thermoelectric field. Manar Hamza and team [27] combined the differential evolution algorithm with the arithmetic optimization algorithm, enhancing the optimization effect. Pashaei Elham and Pashaei Elnaz [28] combined the binary arithmetic optimization algorithm with the simulated annealing algorithm, improving computational accuracy. Bhowmik Sandeep and Acharyya Sriyankar [29] combined the differential evolution algorithm with the genetic algorithm for the image encryption problem.
PSO, as one of the classic metaheuristic algorithms, has been applied in many fields in recent years. Valiollah Panahizadeh [30] used PSO to improve the impact strength and elastic modulus of polyamide-based nanocomposites. Kim Kang Hyun et al. [31] used the PSO algorithm to optimize the drainage system of an undersea tunnel, significantly reducing the construction cost. Kirti Pal et al. [32] optimized the installation cost of a flexible AC transmission system through PSO to improve the stability and load conditions of the power system. Zezhong Kang et al. [33] applied an improved PSO algorithm to a rural power grid auxiliary cogeneration system in the North China Plain to determine the optimal unit capacity configuration. In addition, PSO is often combined with other methods to improve search performance and has been applied in various fields. Somporn Sirisumrannukul et al. [34] combined artificial neural networks (ANNs) and the PSO algorithm to not only collect real-time environmental data and air conditioner usage records but also autonomously adjust the operation of air conditioners. Sathasivam Karthikeyan et al. [35] adopted the artificial bee colony (ABC) algorithm and the PSO algorithm to optimize a Boost converter and improve the efficiency of the system. Norouzi Hadi and Bazargan Jalal [36] used the linear Muskingum method and the PSO algorithm for the first time to study river water pollution and calculate the temporal change of pollution concentration at different river locations. Shaikh Muhammad Suhail [37] and others combined the PSO algorithm with the moth-flame optimization algorithm for application in power transmission systems. Makhija Divya and colleagues [38] overcame the drawbacks of both the local search of the PSO algorithm and the global search of the grey wolf optimization algorithm, applying this method to the workflow task scheduling problem. Tijani Muhammed Adekilekun and team [39] combined the PSO algorithm with the bat algorithm to effectively avoid falling into local optima, applying this method to the joint economic dispatch scheduling problem in power systems. Osei Kwakye Jeremiah [40] and others combined the PSO algorithm with the gravitational search algorithm to overcome premature convergence. Wang and colleagues [41] integrated the PSO algorithm with the marine predator algorithm. Samantaray Sandeep and team [42] combined the PSO algorithm with the slime mold algorithm for flood flow prediction. Wang [43] and others combined the PSO algorithm with the artificial bee colony algorithm in underwater terrain-assisted navigation, enhancing the matching effect.
Among the swarm intelligence optimization algorithms mentioned above, the gazelle optimization algorithm (GOA) [44] has gained increasing usage in practical engineering optimization problems due to its advantage in finding the optimal solution in test functions. However, owing to inherent drawbacks such as low convergence accuracy and unstable optimization results, it may not yield satisfactory results in all optimization problems. This paper aims to address these deficiencies and enhance the convergence speed and stability of GOA. The main contributions of this work are as follows:
1. Initializing the population through chaotic mapping to improve the quality and diversity of initial solutions.
2. Implementing phased population perturbation to enhance the stability of optimization results while maintaining high precision.
3. Combining GOA with PSO so that the individual experience of each gazelle during the escape process improves the algorithm's ability to jump out of local optima.
This paper is divided into seven sections. Section 2 provides a review and analysis of the literature. Section 3 briefly describes the principles of the traditional GOA. Section 4 introduces the MPSOGOA. Section 5 presents the experimental design for test functions and engineering applications. Section 6 discusses the experimental results. The conclusion is presented in the final section.

Exploration Phase
During this phase, the gazelles, with no predators or any signs of their presence, remain in a calm state, grazing. Drawing on the foraging behavior of freely grazing gazelles, the algorithm simulates their random movements within the solution space. In nature, the strongest gazelles not only possess strong survival abilities but also lead other gazelles in evading predators; the fittest individual in the population is referred to as the alpha gazelle. Assume the d-dimensional alpha gazelle is represented as shown in Equation (1). We extend the top gazelle individuals to construct an n × d Elite matrix, where n represents the population size and d represents the dimension; the matrix is given by Equation (2). The updating strategy of individual positions in the gazelle population is related to the current optimal individual position: based on the distance between the current optimal individual and its own grazing position, each individual position is updated, with the displacement step controlled by Brownian motion. The mathematical model is shown in Equation (3), where gazelle_{t+1} and gazelle_t represent the positions at the (t + 1)-th and t-th iterations, respectively, s denotes the speed of gazelle movement during free grazing, R is a random number between 0 and 1, Elite represents the matrix of the alpha gazelle, and R_B is the Brownian motion vector, as given by Equations (4) and (5), where µ and σ are constants, with mean µ = 0 and unit variance σ² = 1.
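The grazing update described above can be sketched in code. This is an illustrative Python sketch rather than the paper's implementation: the exact update form (a Brownian-scaled step toward the Elite matrix) and the grazing speed value are assumptions inferred from the description of Equation (3).

```python
import numpy as np

def exploration_step(gazelle, elite, s=0.88, rng=None):
    """One grazing (exploration) move in the spirit of Equation (3).

    gazelle : (n, d) array of current positions
    elite   : (n, d) array whose rows are copies of the alpha gazelle
    s       : grazing speed (the value 0.88 is an assumption)
    """
    rng = rng or np.random.default_rng()
    R = rng.random(gazelle.shape)               # uniform random numbers in [0, 1]
    R_B = rng.normal(0.0, 1.0, gazelle.shape)   # Brownian-motion steps, N(0, 1)
    # Step toward the elite, scaled by the Brownian displacement.
    return gazelle + s * R * R_B * (elite - R_B * gazelle)
```

Because R_B is drawn from a standard normal distribution, the step sizes are moderate and frequent, which suits the calm grazing behavior this phase models.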

Exploitation Phase
In this phase, the algorithm simulates the fleeing behavior of gazelles upon detecting predators, with movement directions alternating according to the parity of the iteration count. Equation (6) for Lévy flight motion is provided in [39], where α = 1.5, x = Normal(0, σ_x²), y = Normal(0, σ_y²), σ_y = 1, and σ_x is given by Equation (7). Upon spotting a predator, gazelles immediately initiate escape, and a Lévy flight is used to simulate this fleeing behavior. The escape model is provided by Equation (8), where S represents the maximum speed achievable by the gazelle during the escape, and R_L is a vector of random numbers based on the Lévy flight, as given by Equation (9). While tracking gazelles, predators move in the same direction; therefore, during the gazelle's escape, the predators also exhibit exploratory behavior in the search space. However, predators are slower in the initial phase of the pursuit, so Brownian motion is used to simulate the early chasing process, followed by a Lévy flight in the later stages to model the predator's behavior. The mathematical model for the predator's pursuit of gazelles is provided by Equation (10), where CF represents the cumulative effect of predators, as shown in Equation (11). The survival rate of gazelles in the face of predators is 0.66, which implies that predators have a 34% chance of a successful hunt. Using predator success rates (PSRs) to represent the success rate of predators, a mathematical model of the gazelle escape process is established, as shown in Equation (12), where r_1 and r_2 are random indices of the gazelle population, and U is a binary matrix obtained by comparing random numbers in the range [0, 1] with 0.34: U = 0 if r < 0.34, and U = 1 otherwise.
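The Lévy-flight step vector R_L can be generated with Mantegna's algorithm, which matches the description of Equations (6) and (7) (α = 1.5, σ_y = 1). A small Python sketch:

```python
import numpy as np
from math import gamma, pi, sin

def levy_steps(shape, alpha=1.5, rng=None):
    """Heavy-tailed Levy-flight steps via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    # sigma_x from the standard Mantegna formula; sigma_y = 1.
    sigma_x = (gamma(1 + alpha) * sin(pi * alpha / 2)
               / (gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    x = rng.normal(0.0, sigma_x, shape)   # x ~ Normal(0, sigma_x^2)
    y = rng.normal(0.0, 1.0, shape)       # y ~ Normal(0, sigma_y^2)
    return x / np.abs(y) ** (1 / alpha)   # mostly small steps, occasional long jumps
```

The heavy-tailed distribution produces occasional long jumps, which is what lets the escaping gazelle (and the late-stage predator) cover large regions of the search space quickly.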

MPSOGOA
The GOA possesses the advantage of finding effective solutions for most optimization problems. However, it suffers from low convergence accuracy and slow convergence speed. This paper addresses these issues from three perspectives: introducing a chaotic strategy to enhance the quality of the initial solutions; implementing a population-wide perturbation to improve the convergence speed in early iterations and the convergence accuracy in later iterations; and integrating PSO to emphasize the significance of individual gazelle experience, effectively balancing the exploration and exploitation aspects of the algorithm.

Chaos Strategy
Chaotic motion is non-repetitive and has the characteristics of randomness and ergodicity. In recent years, chaotic mapping has been used by many scholars [45-49] in optimization algorithms and has achieved good results in improving population diversity. The initial population of the GOA when solving optimization problems is randomly generated; the initial gazelle population generated this way is uncertain, and individual gazelles cannot traverse the feasible region. The diversity and uniformity of the initial population affect the optimization ability of the algorithm. Chaotic mapping has a higher search speed than random search, can prevent falling into local optima when solving optimization problems, and improves the global search ability of the algorithm.
Reasonable use of chaos theory in the population initialization stage can evenly distribute individuals within the feasible region, thereby improving population diversity and uniformity. Many studies currently use the Logistic map, but its traversal is uneven, resulting in unsatisfactory convergence speed.
In this paper, the Piecewise map is used to initialize the positions of individual gazelles. The Piecewise map produces uniform initial values within [0, 1] and performs better than the Logistic map in terms of uniformity. Therefore, the Piecewise chaotic sequence can be introduced into the GOA, and its characteristics can effectively improve the ability of the GOA to search for the optimal solution. Its expression is given by Equation (13). The number of iterations is set to 1000, and the distribution histograms and scatter plots generated by the PLCM and the Logistic map are shown in Figure 1. The initial gazelle population based on the PLCM chaotic map is more evenly distributed, avoiding concentration at a single point and avoiding the "large at both ends, small in the middle" distribution exhibited by the Logistic map.
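A PLCM-based initializer can be sketched as follows. The four-branch piecewise linear chaotic map used here is the standard form from the literature rather than a verbatim copy of Equation (13), and the control parameter p and the seed are assumed values.

```python
import numpy as np

def plcm_init(n, d, lb, ub, p=0.4, seed=0.7):
    """Initialize an (n, d) population with a piecewise linear chaotic map.

    p in (0, 0.5) is the control parameter; p = 0.4 and seed = 0.7 are
    assumptions, not values taken from the paper.
    """
    def step(x):
        # Standard four-branch PLCM on [0, 1].
        if x < p:
            return x / p
        if x < 0.5:
            return (x - p) / (0.5 - p)
        if x < 1 - p:
            return (1 - p - x) / (0.5 - p)
        return (1 - x) / p

    seq = np.empty(n * d)
    x = seed
    for i in range(n * d):
        x = step(x)
        seq[i] = x
    # Map the chaotic sequence from [0, 1] onto the search bounds.
    return lb + seq.reshape(n, d) * (ub - lb)
```

Because successive PLCM values traverse [0, 1] more uniformly than the Logistic map, the mapped positions cover the feasible region more evenly than plain uniform random initialization of comparable length.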

Global Perturbation of the Population
The gazelle matrix represents the positions of individual gazelles, and their positions can be optimized by perturbing this matrix. Perturbing the gazelle matrix enhances the convergence speed of the algorithm in the initial iterations and increases its ability to escape from local optima in the later iterations, making the algorithm more adept at global search. In the GOA, the perturbation of the gazelle matrix is typically achieved by randomly selecting and updating the positions of individual gazelles. This randomness helps prevent the algorithm from getting trapped in local optima, enabling it to search for better solutions in a larger search space. The mathematical model for the population-wide perturbation is provided by Equations (14) and (15), where gazelle_t represents the position at the t-th iteration, F_gazelle_t is its corresponding fitness value, new_gazelle_{t+1} represents the temporary position at the (t + 1)-th iteration, F_new_t is its corresponding fitness value, r is a random number within the range [0, 1], P is a coefficient factor of 1 or 2, RANDOM denotes the position of a randomly selected gazelle individual in the population, and F_RANDOM is its corresponding fitness value.
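Since Equations (14) and (15) are not reproduced in this extract, the following Python sketch shows only the general shape of the perturbation: each gazelle is moved using a randomly selected individual (RANDOM), a random number r, and the coefficient P, and the temporary position is kept only if its fitness improves. The exact update expression is an assumption.

```python
import numpy as np

def global_perturbation(pop, fitness, objective, P=2, rng=None):
    """Greedy population-wide perturbation, a sketch of Eqs. (14)-(15).

    The assumed form moves each gazelle toward a randomly chosen
    individual, scaled by r in [0, 1] and the coefficient P in {1, 2},
    and keeps the temporary position only if its fitness improves.
    """
    rng = rng or np.random.default_rng()
    n, _ = pop.shape
    for i in range(n):
        j = rng.integers(n)                        # RANDOM individual
        r = rng.random()
        new = pop[i] + r * (pop[j] - P * pop[i])   # temporary position
        f_new = objective(new)
        if f_new < fitness[i]:                     # greedy acceptance
            pop[i], fitness[i] = new, f_new
    return pop, fitness
```

The greedy acceptance step is what makes the perturbation safe: fitness can only improve, so the perturbation accelerates early convergence without destabilizing the late iterations.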

Particle Swarm Optimization
The PSO algorithm [50] is a population-based stochastic search algorithm that mimics the social behavior of birds during foraging. It seeks the optimal solution in the solution space using two attributes: velocity and position. Throughout the iterative process, each particle in the population represents a candidate solution. The best position found by each particle (pbest) and the global best position of the population (gbest) are recorded to find the optimal solution of the optimization problem.
Suppose the PSO algorithm is applied to an optimization problem in a d-dimensional search space. The update equations for the j-th velocity component v_{i,j} and position component x_{i,j} of the i-th particle x_i = (x_{i,1}, x_{i,2}, ..., x_{i,d}) in the (t + 1)-th iteration are given by Equation (16), where ω represents the inertia weight, c_1 and c_2 are acceleration factors, and rand_1 and rand_2 are uniformly distributed random numbers in the range [0, 1].
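The velocity and position updates just described are the standard PSO equations; a minimal sketch follows, with typical default parameter values rather than the paper's.

```python
import numpy as np

def pso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """Standard PSO velocity/position update.

    w is the inertia weight; c1, c2 are acceleration factors; rand1 and
    rand2 are uniform in [0, 1]. Default values are common choices.
    """
    rng = rng or np.random.default_rng()
    rand1 = rng.random(x.shape)
    rand2 = rng.random(x.shape)
    # Inertia term + cognitive pull toward pbest + social pull toward gbest.
    v = w * v + c1 * rand1 * (pbest - x) + c2 * rand2 * (gbest - x)
    return x + v, v
```

The cognitive term (pbest − x) is exactly the "individual experience" that the combined MPSOGOA borrows from PSO.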

Combination of PSO and GOA
Combining two optimization algorithms has become a mainstream trend, and many scholars have shown that this approach can achieve remarkable results. Such a fusion not only strengthens the overall performance of the algorithm but also compensates for the limitations of a single algorithm, realizing complementarity and mutual enhancement. The combination of the GOA and the PSO algorithm provides a new, effective method for handling complex optimization problems. In the PSO algorithm, the position update of particles depends mainly on the historical best position information of individuals and of the group. The GOA simulates the behavior of gazelles in nature when they escape from predators. The core formulas of gazelle position updating, Equations (3), (8), and (10), all indicate that position updating depends mainly on the guidance and influence of the best gazelle individual. If the best gazelle chooses the wrong escape route, it will lead the population to extinction. Mapped onto the optimization problem, the best gazelle's choice of a wrong escape route is equivalent to the algorithm falling into a local optimum or a misleading solution during the search. In this case, the whole population (that is, the search space of the algorithm) may be affected by this wrong solution, causing the entire search process to deviate from the global optimum and ultimately yielding unsatisfactory results. If the concept of individual escape experience is introduced into the GOA, then even if the best gazelle accidentally falls into a local optimum, other gazelles can still escape from it by relying on their individual escape experience. The combined method can be modeled using Equation (17), in which the positions and velocities of particles are also updated, where ω_min is the minimum inertia weight and ω_max is the maximum inertia weight.
After combining PSO with GOA, the optimization process can not only learn from the wisdom of the top gazelles but also make full use of the experience accumulated by individual gazelles during escape, which helps the whole population jump out of local optimum traps during the search and enhances the search ability of the algorithm in complex, changeable problem spaces. This fusion also allows excellent solutions to propagate and be utilized in the population more quickly, accelerating the convergence of the whole population.
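The text gives only the bounds ω_min and ω_max for the inertia weight in Equation (17). A linearly decreasing schedule between those bounds is a common choice and is sketched here as an assumption.

```python
def inertia_weight(t, iter_max, w_min=0.4, w_max=0.9):
    """Linearly decreasing inertia weight: w_max at t = 0, w_min at t = iter_max.

    The linear schedule and the default bounds are assumptions; the paper
    specifies only that omega lies between w_min and w_max.
    """
    return w_max - (w_max - w_min) * t / iter_max
```

A large early weight favors exploration (the grazing phase), while the small late weight favors exploitation around the best solutions found, mirroring the early-speed/late-accuracy goal stated above.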

Pseudocode of the Proposed Algorithm
The pseudocode of the MPSOGOA outlines the search optimization scheme, as shown in Algorithm 1. The chaotic strategy improves the quality of the initial solutions, while the population-wide perturbation enhances the convergence speed and accuracy of the algorithm. Integration with PSO amplifies the role of individual experience during optimization, preventing premature convergence. After the termination condition is met, the algorithm outputs the identified optimal solution. The combined action of the three strategies ensures both high precision and an improved convergence speed of the optimization results, enabling MPSOGOA to find the optimal solution.

Algorithm 1: MPSOGOA
While the termination condition is not met
    For each gazelle:
        For each dimension:
            If the parity of the iteration count selects the exploration phase
                Execute exploration activities on the gazelle matrix according to Equation (3).
            Else
                Execute Lévy flight on the gazelle matrix according to Equation (8).
            End If
        End For
    End For
    Execute particle swarm movement on the gazelle matrix based on Equation (26).
    Evaluate the fitness value of the gazelle.
    Update pbest, gbest, and Elite.
    Execute escape movement on the gazelle matrix according to Equation (12).
    iter = iter + 1
End While
Return the optimal value from the population.
According to the flowchart in Figure 2, the MPSOGOA process primarily involves initializing the population, evaluating the fitness of the gazelles, and updating candidate solutions. The complexity of MPSOGOA is determined by the maximum iteration count (iter_max), the population size (P), and the problem dimension (D). From the algorithm flowchart, it is evident that the complexity is composed of two parts: the position updates and the function evaluations, where CFE represents the time consumed by the algorithm in evaluating the various functions. Thus, the complete complexity is O(MPSOGOA) = O(iter_max × P × D) + O(CFE × P), which can be simplified to O(MPSOGOA) = O(iter_max × P × D + CFE × P).


Experimental Design
This section presents an analysis of the simulation results of the MPSOGOA. The performance of the proposed MPSOGOA is evaluated using 35 test functions and 4 practical engineering design problems. The results of MPSOGOA on the test functions and in practical engineering applications are compared with the following algorithms: GOA, grey wolf optimizer (GWO) [51], sine cosine algorithm (SCA) [52], arithmetic optimization algorithm (AOA) [53], PSO [50], differential evolution (DE) [54], chimp optimization algorithm (Chimp) [55], biogeography-based optimization (BBO) [56], and golden jackal optimization (GJO) [57]. All experiments were conducted on a computer running a 64-bit Windows 7 operating system, equipped with a Core i5 CPU at 2.50 GHz and 4.0 GB of RAM; the Matlab version used was R2021b. To minimize the impact of randomness, each algorithm was independently run 30 times on each test function, with the population size (P) set to 50 and the maximum iteration count (iter_max) set to 1000 generations. The parameter settings of all algorithms are shown in Table 1. To comprehensively evaluate the effectiveness of the algorithms, the following statistical indicators are utilized: best value, worst value, mean value, standard deviation (SD), and median value.

Test Function
Tables 2 and 3 record the 35 test functions used to evaluate the performance of the MPSOGOA. The first 20 problems are classic test functions in optimization. Table 2 provides the test functions, their optimum positions, and the corresponding optimal values. F1-F5 are continuous unimodal functions, F6 is a discontinuous step function, and F7 is a quartic noise function. F8-F13 are multimodal functions, and F14-F20 are fixed-dimension multimodal functions. Table 3 presents 15 problems, including a selection of test functions from the CEC2014 and CEC2017 competitions. F21 and F22 are unimodal functions, F23-F31 and F35 are simple multimodal functions, and F34 is a hybrid function. Unimodal functions typically have a single global optimum and are often used to test the exploitation capabilities of metaheuristic algorithms. Multimodal functions have multiple local optima, making them more complex than unimodal functions; they are therefore frequently employed to test whether optimization techniques possess good exploration capabilities. The number of local optima in multimodal functions increases exponentially with the number of design variables, so exploration and exploitation must be balanced to enhance the algorithm's search efficiency and prevent it from getting trapped in local optima. These test functions are used to infer the potential of the algorithm to find optimal solutions in real-world problems.
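As a concrete illustration of the two function classes, the Sphere function is a typical continuous unimodal benchmark and Rastrigin a typical multimodal one; whether they correspond exactly to the paper's numbered functions is an assumption.

```python
import numpy as np

def sphere(x):
    """Unimodal: a single global optimum f(0) = 0; tests exploitation."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal: many local optima, global optimum f(0) = 0; tests exploration."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

The cosine term in Rastrigin creates a regular grid of local minima whose count grows exponentially with the dimension, which is precisely why such functions expose algorithms that converge prematurely.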

Practical Engineering Applications
By testing the performance of optimization techniques on real-world engineering problems and designing the corresponding parameter values, the overall design cost is minimized. This study focuses on the welded beam design problem, the compression spring design problem, and the pressure vessel design problem in mechanical engineering. Most engineering design problems in practical applications are governed by equality and inequality constraints, which are handled in the design objective function using penalty functions. The application of MPSOGOA to these mechanical engineering design problems is compared with the results obtained from nine other metaheuristic algorithms.

Welded Beam
Welded beam design is a minimization problem often used to assess the capability of optimization techniques on practical issues. It involves manufacturing a welded beam at minimum cost under multiple constraints. The proposed MPSOGOA, along with nine other metaheuristic algorithms, is applied to this problem to minimize the manufacturing cost of the welded beam. Figure 3 provides an illustration of the welded beam. The decision variables are the weld thickness (h), the length of the welded joint (l), and the height (t) and thickness (b) of the bar. The constraints encompass shear stress (τ), bending stress (θ), column buckling load (P_c), beam deflection (δ), and side constraints, represented by the WBD constraints. The objective function, penalty function, variable bounds, constraints, and the intermediate variables of the constraints are given in the equations below, where σ_max = 3 × 10⁴ psi, P = 6 × 10³ lb, L = 14 in, δ_max = 0.25 in, E = 30 × 10⁶ psi, τ_max = 13,600 psi, and G = 1.2 × 10⁷ psi.
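Since the objective and penalty functions are not reproduced in this extract, the sketch below uses the welded-beam cost function in its standard literature form together with a generic static-penalty wrapper of the kind the text describes; whether these match the paper's exact equations is an assumption.

```python
def welded_beam_cost(x):
    """Welded-beam fabrication cost in the standard literature form:
    x = (h, l, t, b) = (weld thickness, weld length, bar height, bar thickness)."""
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

def penalized(f, constraints, mu=1e6):
    """Static-penalty wrapper: adds mu * sum(max(0, g_i(x))^2) for
    inequality constraints g_i(x) <= 0, as the text describes."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + mu * violation
    return wrapped
```

With this wrapper, any unconstrained metaheuristic (including MPSOGOA) can be applied directly: feasible points keep their true cost, and infeasible points are driven away by the quadratic penalty.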

Compression Spring Design Issues
The objective of spring design is to minimize the total weight of the tension/compression spring. As depicted in Figure 4, this problem is controlled by three parameters: the wire diameter (d), the mean coil diameter (D), and the number of active coils in the spring (P). The design function and constraint function for the spring design problem are given in Equations (24) and (25). The boundary conditions of the variables and the constraints of the spring design problem are as follows:
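For reference, the spring-weight objective below is the standard form from the literature, with x = (wire diameter, mean coil diameter, number of active coils); that it matches the paper's Equation (24) is an assumption.

```python
def spring_weight(x):
    """Tension/compression spring weight, standard literature form:
    x = (d, D, N) = (wire diameter, mean coil diameter, active coils)."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2
```

The weight grows linearly with the coil count and quadratically with the wire diameter, so the optimizer is pushed toward thin wire and few coils, with the (unshown) stress and deflection constraints preventing degenerate designs.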

Pressure Vessel Design
The primary objective of the pressure vessel design problem is to minimize the production cost of the pressure vessel to the greatest extent possible (Figure 5). This problem is governed by four control parameters: the shell thickness (T_s), the head thickness (T_h), the internal radius of the vessel (R), and the length of the cylindrical section (L). The design function and constraint functions for the pressure vessel design problem are provided in Equations (29) and (30), respectively:

$$\min f(T_s,T_h,R,L) = 0.6224\,T_sRL + 1.7781\,T_hR^{2} + 3.1661\,T_s^{2}L + 19.84\,T_s^{2}R$$

The variation range of the variables is as follows: $0 \le T_s, T_h \le 99$ and $10 \le R, L \le 200$. The constraints of the pressure vessel design problem are as follows:

$$g_{1} = -T_s + 0.0193R \le 0,\quad g_{2} = -T_h + 0.00954R \le 0,$$
$$g_{3} = -\pi R^{2}L - \frac{4}{3}\pi R^{3} + 1296000 \le 0,\quad g_{4} = L - 240 \le 0.$$
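The pressure vessel cost and constraints above admit an equally direct sketch; `pvd_cost` and `pvd_constraints` are illustrative names, and the constraint set is the common formulation from the literature.

```python
import math

def pvd_cost(x):
    """Pressure vessel production cost; x = (Ts, Th, R, L)."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * ts**2 * r)

def pvd_constraints(x):
    """The four inequality constraints g_i(x) <= 0 of the common formulation."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,                                          # shell thickness
        -th + 0.00954 * r,                                         # head thickness
        -math.pi * r**2 * l - (4.0 / 3.0) * math.pi * r**3
        + 1296000.0,                                               # minimum volume
        l - 240.0,                                                 # length limit
    ]
```

Evaluating `pvd_cost` at the reported optimum (0.778168, 0.384649, 40.3196, 200) reproduces a cost of about 5885.3, consistent with Table 13; the thickness constraints are active there.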

Results and Discussion
In this section, to validate the performance of the proposed MPSOGOA, 35 test functions and 3 real-world engineering optimization problems are employed for performance testing, and comparisons are made with the original GOA and currently popular metaheuristic algorithms. The experiment comprises the following parts: (1) analysis of the experimental results on the classical test functions, (2) analysis of the experimental results on the CEC2014 and CEC2017 composite test functions, (3) convergence performance of MPSOGOA, and (4) analysis of the results on the 3 engineering optimization problems.
The performance of each algorithm is measured using five indicators: best value, worst value, average value, standard deviation, and median. Furthermore, the performance of MPSOGOA is evaluated using the Friedman rank sum test and the Wilcoxon signed-rank test.
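The five performance indicators can be computed with the standard library alone; this is a minimal sketch, assuming the sample (n − 1) standard deviation is the one reported.

```python
import statistics

def summarize(runs):
    """Five performance indicators over a list of results from independent runs."""
    return {
        "best": min(runs),
        "worst": max(runs),
        "mean": statistics.fmean(runs),
        "std": statistics.stdev(runs),   # sample standard deviation (n - 1)
        "median": statistics.median(runs),
    }
```

In the experiments each algorithm is run 30 times per function, so `runs` would hold the 30 final fitness values.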

Analysis of CEC2005 Experimental Results
Table 4 presents the experimental results of the 10 algorithms on the 20 benchmark functions, including the best results obtained from 30 independent runs (Best), the worst results (Worst), the mean values (Mean), the standard deviations (Std), the medians (Median), and the Wilcoxon signed-rank test rankings (Rank) of each algorithm. The final ranking is determined by the Friedman rank sum ranking of each algorithm across the 20 benchmark functions.
Unimodal functions have only one optimum, so they are often used to test the exploitation capability of algorithms. Table 4 shows that MPSOGOA performs best on F1-F4. When solving F1, the convergence accuracy improves by 180 orders of magnitude compared with GOA and by 61 orders of magnitude compared with the other algorithms; convergence accuracy is also improved to different degrees elsewhere. When solving F2, F3, and F4, the convergence accuracy improves by more than 57 orders of magnitude and the standard deviation by more than 40 orders of magnitude compared with GOA. On F5-F7, the best values also improve over GOA: by two orders of magnitude on F6 and one order of magnitude on F7, with a smaller standard deviation. This means that stability improves along with convergence accuracy. These results indicate that MPSOGOA has good exploitation capability. F8-F20 are multimodal and fixed-dimension multimodal functions, which usually have multiple extrema and are often used to test the exploration ability of an algorithm. On the multimodal functions F8-F13, MPSOGOA performs better than the other comparison algorithms on most functions and can find the optimal solution. When solving F12 and F13, the convergence accuracy of MPSOGOA improves by two orders of magnitude compared with GOA; MPSOGOA thus shows excellent exploration ability on multimodal functions. On F14 and F15, the best value of MPSOGOA is similar to those of the other algorithms, but its standard deviation is lower. The experimental results show that MPSOGOA performs well on both unimodal and multimodal functions, with excellent convergence accuracy and stability.
Table 5 shows the Friedman rankings of the 10 algorithms on the benchmark functions. MPSOGOA achieves good results in the Friedman test, ranking first among the 10 optimization algorithms. Furthermore, we conducted the Wilcoxon signed-rank test to assess the significance of the differences between MPSOGOA and the other nine algorithms; when the p-value is less than 0.05, the two algorithms are considered significantly different on that function. The results of the Wilcoxon signed-rank test between MPSOGOA and the nine comparison algorithms are presented in Table 6. The table shows that MPSOGOA differs significantly from the nine comparison algorithms on the majority of the functions. Overall, across the 20 benchmark functions, MPSOGOA surpasses the nine comparison algorithms, demonstrating strong competitiveness.

The ability of the algorithm to find the global optimum is further evaluated on the composite test functions from CEC2014 and CEC2017. The results for MPSOGOA and the nine comparison algorithms are shown in Table 7. As can be seen, MPSOGOA achieves better optimization results on all composite functions. On F21, MPSOGOA is the only algorithm that successfully converges to the global optimum. On F23, F24, and F31, both MPSOGOA and GOA find the global optimum; however, comparing the standard deviations shows that MPSOGOA is more stable during the search and finds the global optimum more reliably. On F22, F25, F26, F27, F29, F30, F34, and F35, MPSOGOA achieves significant improvements in both optimization accuracy and stability compared with GOA. This series of improvements not only enhances the robustness of the algorithm but also further validates the advantages of MPSOGOA in solving complex optimization problems. In summary, by combining the advantages of PSO and GOA, MPSOGOA successfully improves the search for the global optimum; whether faced with a single function or a series of composite functions, it demonstrates superior optimization performance and stability. Table 8 reports the Wilcoxon signed-rank test results for the 15 composite functions: MPSOGOA differs significantly from the other algorithms on most of them. The Friedman rankings are given in Table 9; MPSOGOA ranks first among the 10 algorithms, showing that its optimization ability exceeds that of the other algorithms and that it can solve most optimization problems.

Figure 6 compares the convergence curves of MPSOGOA with the original GOA and eight other optimization algorithms on the classical test functions. Premature convergence early in the optimization process can prevent an algorithm from finding the final global optimum. MPSOGOA quickly converges to the optimum on most test functions without being trapped in local optima. The convergence curves of F1-F4 demonstrate its accelerated convergence. The early convergence on F5, F9, F10, F11, F15, and F20 is attributed to the chaotic initialization strategy and the population-wide perturbation in the initial iterations. On F7, the algorithm stalls in later iterations, but the individual experience of the particles and the population-wide perturbation enable it to escape the local optimum and eventually find the global optimum. Compared with GOA, MPSOGOA converges significantly faster to the global optimum on F1-F7, F9-F13, and F15.

Figure 7 compares the convergence curves of MPSOGOA, GOA, and the eight other optimization algorithms on the CEC2014 and CEC2017 composite test functions. Compared with the other algorithms, MPSOGOA has an advantage in convergence speed: its convergence curve is consistently below those of the other algorithms and drops noticeably faster. The convergence rate of MPSOGOA on F21, F26, F27, F28, and F31 is clearly better than that of the other algorithms. In the early iterations, its convergence ability far exceeds the others, quickly approaching the optimum in a short time. This is because the algorithm uses the PWLC map for population initialization, constructing an initial population with a relatively uniform distribution and improving its quality; in particular, on F21 and F31 the initial fitness values are better than those of the other algorithms. In addition, MPSOGOA shows stable and fast convergence with small fluctuations on F23, owing to the combination of the population-wide perturbation strategy and the particle swarm strategy. This further confirms MPSOGOA's significant advantages in global convergence speed and optimization accuracy.

A comprehensive analysis of optimization accuracy and convergence speed shows that MPSOGOA improves significantly on both, and it also shows advantages in stability: compared with the other algorithms, its results fluctuate less across multiple runs, meaning its output is more reliable and stable. The improvement methods are effective.
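The Friedman ranking used in these comparisons can be sketched as a mean-rank computation over the per-function results (lower error ranks better, and ties receive the average rank); the layout of `scores` is an illustrative assumption.

```python
def friedman_mean_ranks(scores):
    """scores[f][a] = result of algorithm a on function f (lower is better).
    Returns the Friedman mean rank of each algorithm, averaging tied ranks."""
    n_alg = len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            # extend j over the group of algorithms tied with position i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average rank for the tie group
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(scores) for t in totals]
```

The algorithm with the smallest mean rank (here, MPSOGOA in Tables 5 and 9) is ranked first.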

Ablation Experiment
Ablation experiments were conducted on MPSOGOA and GOA to comprehensively validate the effectiveness of the three proposed strategies. Three strategies were employed to enhance GOA, leading to the following variants: GOA using only the PWLC mapping strategy (GOA1); GOA using only the population-wide perturbation (GOA2); GOA integrated only with the PSO algorithm (GOA3); GOA using both the PWLC mapping and the population-wide perturbation (GOA4); GOA using both the PWLC mapping and the PSO integration (GOA5); and GOA using both the population-wide perturbation and the PSO integration (GOA6). The tests were based on the classical test functions, with the population size set to 50 and the iteration count to 1000; the results are reported in Table 10. Each algorithm was run independently 30 times, and the best value, worst value, average value, standard deviation, and median were calculated.
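A minimal sketch of the PWLC (piecewise linear chaotic map) initialization used by GOA1 is given below; the control parameter p = 0.4 and the seed value x0 = 0.7 are common choices for this map, not values taken from the paper.

```python
def pwlc(x, p=0.4):
    """One step of the piecewise linear chaotic map on [0, 1]."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    if x < 1.0 - p:
        return (1.0 - p - x) / (0.5 - p)
    return (1.0 - x) / p

def chaotic_init(pop, dim, lb, ub, p=0.4, x0=0.7):
    """Initialize a pop x dim population by iterating the PWLC map and
    scaling the chaotic sequence into the search bounds [lb, ub]."""
    x = x0
    population = []
    for _ in range(pop):
        row = []
        for _ in range(dim):
            x = pwlc(x, p)
            row.append(lb + (ub - lb) * x)
        population.append(row)
    return population
```

Because the chaotic sequence is closer to uniformly distributed than a short pseudo-random draw, the resulting initial population covers the search space more evenly, which is the stated motivation for this strategy.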
Table 10 and Figure 8 show that the convergence accuracy of GOA1 (PWLC mapping only), GOA2 (population-wide perturbation only), and GOA3 (PSO integration only) surpasses that of the standard GOA across the 15 test functions, and their convergence speed on these functions is also superior. This indicates the efficacy of each individual strategy in enhancing GOA. A comparative analysis of GOA1, GOA2, and GOA3 against GOA4, GOA5, and GOA6 reveals that fusing two improvement strategies generally yields better convergence accuracy than a single strategy, while also enhancing convergence speed, underscoring the synergistic and stable effectiveness of the improvement strategies; their collective impact improves the solving capability of MPSOGOA. Table 10 and Figure 8 also show that the convergence accuracy of MPSOGOA, which integrates all three improvement strategies, surpasses that of the GOA variants employing one or two strategies. Notably, MPSOGOA exhibits the fastest convergence speed on the 15 test functions.

Welded Beam
Table 11 presents the experimental results for the welded beam design problem: the optimal solution obtained by MPSOGOA and the other nine optimization algorithms (GOA, GWO, SCA, AOA, PSO, DE, Chimp, BBO, and GJO), along with the corresponding optimal variables, worst value, mean, standard deviation, and median, as well as the signed-rank test values. The statistical results of the proposed MPSOGOA are optimal in every respect, indicating strong optimization effectiveness in practical engineering applications and effectively reducing the cost of the welded beam design. MPSOGOA achieves the optimal cost value of 1.67022, with the corresponding optimal decision variables being weld thickness 0.198832, bar length 3.33737, bar height 9.19202, and bar thickness 0.198832. Compared with the original GOA, MPSOGOA exhibits a smaller standard deviation, signifying superior stability. The signed-rank test results indicate that the p-values between MPSOGOA and the other nine algorithms are all less than 0.05, indicating statistically significant differences.


Compression Spring Design
Table 12 presents the experimental results for the compression spring design problem: the optimal solution obtained by MPSOGOA and the other nine optimization algorithms, along with the corresponding optimal variables, worst value, mean, standard deviation, and median, as well as the signed-rank test values. MPSOGOA achieves the optimal cost value of 0.0126652, with the corresponding optimal decision variables being wire diameter 0.0516905, mean coil diameter 0.356752, and number of active coils 11.287. MPSOGOA and DE achieve the same optimal cost value on this problem. Furthermore, compared with the original GOA, MPSOGOA outperforms GOA in all statistical aspects. The signed-rank test results indicate that the p-values between MPSOGOA and the other nine algorithms are all less than 0.05, indicating statistically significant differences.


Pressure Vessel Design
Table 13 presents the experimental results for the pressure vessel design problem: the optimal solution obtained by MPSOGOA and the other nine optimization algorithms, along with the corresponding optimal variables, worst value, mean, standard deviation, and median, as well as the signed-rank test values. The statistical results of the proposed MPSOGOA are optimal in every respect, indicating strong optimization effectiveness in practical engineering applications and effectively reducing the cost of the pressure vessel design. MPSOGOA achieves the optimal cost value of 5885.33, with the corresponding optimal decision variables being shell thickness 0.778168, head thickness 0.384649, internal radius 40.3196, and length 200. Compared with the original GOA, MPSOGOA exhibits significant improvement in standard deviation, suggesting superior stability in seeking the optimal solution. The signed-rank test results indicate that the p-values between MPSOGOA and the other nine algorithms are all less than 0.05, indicating statistically significant differences.


Conclusions
In this paper, we present MPSOGOA, an enhanced version of GOA that incorporates a chaotic strategy to refine the quality of the initial population. A population-wide perturbation is strategically applied to improve the convergence speed and precision of the algorithm. Furthermore, through integration with PSO, the individual experiences of gazelles are leveraged within the optimization process, comprehensively enhancing algorithmic performance. MPSOGOA demonstrates promising outcomes across a suite of 35 test functions and three engineering design problems. Notably, on a majority of the test functions, MPSOGOA consistently identifies optimal solutions with smaller standard deviations than the original GOA, signifying a marked improvement in stability. Additionally, in the Friedman test, MPSOGOA attains the first position among all comparison algorithms. Across the three engineering design problems, the performance metrics of MPSOGOA surpass those of established optimization algorithms including GOA, GWO, SCA, AOA, PSO, DE, Chimp, BBO, and GJO. In summary, based on the comparative analysis on both test functions and engineering designs, MPSOGOA emerges as a superior choice for addressing optimization challenges.

Figure 3 .
Figure 3. Schematic diagram of welded beam design.

Figure 3 .
Figure 3. Schematic diagram of welded beam design.

Figure 4 .
Figure 4. Schematic diagram of compression spring design problem.

Figure 4 .
Figure 4. Schematic diagram of compression spring design problem.

Figure 5 .
Figure 5. Schematic diagram of pressure vessel design issues.

Figure 5 .
Figure 5. Schematic diagram of pressure vessel design issues.

Figure 7 .
Figure 7. Convergence diagram of the CEC2014 and CEC2017 combined test functions.

Figure 8 .
Figure 8. Convergence results of ablation experiments based on CEC2005.

Figure 9
Figure 9 shows the convergence curves of each algorithm on the welded beam design problem. These algorithms can quickly converge to the optimal solution.

Figure 10
Figure 10 shows the convergence curves of each algorithm on the compression spring design problem. Except for the PSO and Chimp algorithms, the other algorithms can quickly converge to the optimal solution.

Figure 11
Figure 11 shows the convergence curves of each algorithm on the pressure vessel design problem. MPSOGOA, GOA, GWO, SCA, DE, Chimp, and GJO are all able to quickly converge to the optimal solution. AOA falls into a local optimum but eventually escapes it and converges to the optimal value.

Table 1 .
Algorithm parameter settings for comparison.

Table 2 .
Classic test function.

Table 7 .
CEC2014 and CEC2017 combined test function results.

Table 8 .
Wilcoxon signed-rank test results of CEC2014 and CEC2017 combined test functions.

Table 9 .
Friedman ranking of CEC2014 and CEC2017 combination functions.

Table 10 .
Ablation experiment results based on CEC2005.

Table 11 .
Experimental results of WBD.

Table 12 .
Experimental results of CSD.

Table 13 .
Experimental results of PVD.
