Waterwheel Plant Algorithm: A Novel Metaheuristic Optimization Method

Abstract: Addressing optimization problems across scientific disciplines is a fundamental and significant challenge. This study presents the waterwheel plant algorithm (WWPA), a novel stochastic optimization technique motivated by natural systems. The basic concept of the proposed WWPA is based on modeling the waterwheel plant's natural behavior while on a hunting expedition. To find prey, WWPA uses plants as search agents. We present WWPA's mathematical model for use in addressing optimization problems. Twenty-three objective functions of varying unimodal and multimodal types were used to assess WWPA's performance. The results of optimizing unimodal functions demonstrate WWPA's strong exploitation ability to get close to the optimal solution, while the results of optimizing multimodal functions show WWPA's strong exploration ability to locate the major optimal region of the search space. Three engineering design problems were also used to gauge WWPA's potential for improving practical programs. The effectiveness of WWPA in optimization was evaluated by comparing its results with those of seven widely used metaheuristic algorithms. The simulation results and analyses demonstrate that WWPA outperformed these competing algorithms by finding a more proportionate balance between exploration and exploitation.


Introduction
Optimization is the process of finding the optimal settings for a system's design parameters so as to minimize or maximize a fitness function while satisfying all of the constraints [1,2]. Optimization difficulties exist in every industry, academic discipline, and study area. Exact algorithms are one type of optimization strategy, whereas heuristic and metaheuristic algorithms are another [3-5]. Because it requires fewer sophisticated calculations, the former category takes less time to complete, but it may be less useful and practical. In contrast, the second class of algorithms (metaheuristics) exhibits some random/stochastic behavior and makes an "informed search choice" toward "wise areas" of the search space [6].
These observations inspire scientists to develop cutting-edge algorithms and improve existing ones. Since optimization arises in many disciplines, including cloud computing activities [7], face identification [8], power systems [9,10], and engineering challenges [11], it has recently attracted a lot of attention from researchers. According to the No Free Lunch (NFL) theorem [12], no algorithm can identify the best solution in all cases, and many optimization algorithms have been published. In other words, an algorithm that succeeds in finding the best answer to one kind of problem does not necessarily succeed in solving another.
Because metaheuristic algorithms use a form of random search, it is impossible to guarantee that they always find the global optimum. However, due to their closeness to the global optimal solution, the solutions of metaheuristic algorithms are regarded as quasi-optimal [13]. To find a workable answer, metaheuristic algorithms need strong search capabilities in both the global and local problem-solving spaces. Combining exploration with the global search process improves the algorithm's capacity to find the primary optimum region and break out of local optima. The algorithm's capacity to converge on potentially superior solutions in promising areas is improved by the local search process, which incorporates the idea of exploitation [14]. While searching for an optimal solution, metaheuristic algorithms thrive when they balance exploration and exploitation. Thus, when comparing the performance of several metaheuristic algorithms on an optimization problem [15], the algorithm that better balances exploration and exploitation provides the better quasi-optimal solution. Many metaheuristic algorithms have been developed to improve the quality of results obtained for optimization problems.
Optimization methods can be categorized as either deterministic or stochastic. Solving linear, convex, continuous, differentiable, and low-dimensional optimization problems is within the capabilities of both gradient-based and non-gradient-based deterministic techniques [16]. Optimization problems that are non-linear, non-convex, discontinuous, non-differentiable, and/or high-dimensional are unfortunately outside the scope of deterministic techniques. Deterministic methods perform poorly on such problems because they become mired in local optimum solutions [17].
Because such optimization problems are notoriously challenging to solve using deterministic methods, researchers have turned to stochastic approaches. Metaheuristic algorithms, one of the most popular classes of stochastic approaches, are characterized by an effective random search of the problem-solving space employing random operators and trial-and-error procedures [18]. Metaheuristic algorithms have gained popularity for handling optimization problems due to their effectiveness in solving problems that are non-linear, non-convex, discontinuous, non-differentiable, NP-hard, complex, and high-dimensional. They also require no differentiability information about the objective function or constraints and do not depend on the problem type [19].
Considering the many metaheuristic algorithms that have already been developed, the key question driving metaheuristic algorithm research is whether there is still a need to introduce even more of them. The NFL theorem [20] answers this question by showing that there is no universally superior metaheuristic method for optimization. Even if a metaheuristic algorithm addresses one set of optimization problems well, it does not follow that it works just as well for solving another set of optimization problems. The NFL theorem states that an algorithm may succeed in addressing one optimization problem while failing to solve a different one. Consequently, a metaheuristic algorithm's performance on one class of optimization problems cannot be taken for granted on another. As a result, the NFL theorem motivates researchers to create cutting-edge metaheuristic algorithms that can more efficiently solve optimization problems. This paper's innovative contribution is the design of a new metaheuristic algorithm for addressing optimization problems in various scientific disciplines; the method is called the Waterwheel Plant Algorithm (WWPA). The following are the most significant contributions of this work:
• Modeling natural waterwheel plant behavior inspired the development of WWPA.
• The method used by waterwheel plants to locate their insect prey, capture it, and then move it to a more convenient location before devouring it inspired the essential idea of WWPA.
• We provide a mathematical model of the WWPA implementation processes throughout the two stages of exploration and exploitation.
• Twenty-three benchmark functions were used to measure WWPA's efficiency in various optimization tasks.
• Three engineering problems were considered in evaluating the effectiveness of the proposed WWPA.
• Well-known algorithms were used as benchmarks against which the proposed WWPA method was evaluated.
• A statistical analysis was performed to confirm the significant difference of the proposed approach when compared with the other competitor algorithms.
The rest of the paper is laid out as follows. Section 2 presents the literature review. Section 3 presents the mathematical model and the introduction of the proposed Waterwheel Plant Algorithm (WWPA). Simulation and effectiveness studies for optimization problems, in addition to the assessment of how well the proposed WWPA performed in handling practical problems, are described in Section 4. Section 5 summarizes the results and offers suggestions for further research.

Literature Review
When dealing with practical problems, it is common to encounter a large number of local optimum solutions, since the search space is typically complicated. Because of this, an optimization method is more likely to converge prematurely, leading to an increased risk of becoming trapped in local optima. Many optimization algorithms attempt to address this problem by employing methods that broaden the population's diversity. Local optima may be avoided by using these methods, although convergence performance may suffer. Consequently, creating a powerful metaheuristic algorithm for optimization necessitates striking a balance between exploration and exploitation. As a result of striking this equilibrium, the optimization algorithm's convergence speed is enhanced, and the search space is explored more thoroughly, allowing local optima to be avoided. Metaheuristic algorithms draw inspiration from various sources, including evolutionary occurrences, natural phenomena, animal life in nature, the biological sciences, physics, game rules, and human relationships.
Natural swarming phenomena, such as those seen in insects, fish, birds, mammals, and other plants and animals, have inspired the development of new metaheuristic algorithms that use swarm intelligence to solve problems. Metaheuristic algorithms can be categorized into five classes based on the type of motivation employed in their development: swarm-based, evolutionary-based, physics-based, human-based, and game-based. The most well-known swarm-based algorithms include Particle Swarm Optimization (PSO) [21], Ant Colony Optimization (ACO) [22], and Artificial Bee Colony (ABC) [23,24]. The PSO design is based on the analogy of animal flocks foraging for food. The ability of ants to find the quickest route from their colony to a food supply significantly influenced the development of ACO. The design of ABC is based on a simulation of the behavior of foraging bee colonies. Other swarm-based algorithms include Golden Jackal Optimization (GJO) [25], the Coati Optimization Algorithm (COA) [26], the Marine Predator Algorithm (MPA) [27], and the Mountain Gazelle Optimizer (MGO) [28].
The biological sciences, genetics, Darwin's theory of evolution, survival of the fittest, and natural selection inspired the development of evolutionary-based metaheuristic algorithms. Some of the most well-known evolutionary-based methods are the Genetic Algorithm (GA) [29] and Differential Evolution (DE) [30]. These approaches are built on models of the reproductive process and use the stochastic operations of selection, crossover, and mutation. Artificial Immune Systems (AISs) are designed using models of how the human immune system fights off infections and other microorganisms [31]. The Cultural Algorithm (CA) [32], Evolution Strategy (ES) [33], and Genetic Programming (GP) [32] are further examples of evolutionary-based metaheuristic algorithms [34,35]. Metaheuristic algorithms with a physics foundation are motivated by physical phenomena, forces, laws, and related notions. One of the most well-known physics-based strategies is Simulated Annealing (SA) [36]. Modeling the metal annealing process, in which the metal is melted under heat and then gently cooled to form a perfect crystal, led to the development of SA. Several algorithms that take their inspiration from Newton's laws of motion and physical forces have been developed. These include the Spring Search Algorithm (SSA) [37], which uses the tension force of a spring and Hooke's law; the Momentum Search Algorithm (MSA) [38]; and the Gravitational Search Algorithm (GSA) [39].
The Water Cycle Algorithm (WCA) was developed to simulate the many physical changes in the natural water cycle [40]. The Multi-Verse Optimizer (MVO) [41], Archimedes Optimization Algorithm (AOA) [42], Equilibrium Optimizer (EO) [43], Electro-Magnetism Optimization (EMO) [44], Nuclear Reaction Optimization (NRO) [45], and Lichtenberg Algorithm (LA) [46] are some well-known metaheuristics of the past decade. Advancements in artificial intelligence (AI) have also taken cues from human behavior in areas such as communication, thinking, and social interaction to create human-based metaheuristic algorithms. The most popular human-based strategy is Teaching-Learning-Based Optimization (TLBO) [47]. The design inspiration for TLBO came from observing classroom interactions between educators and their pupils. The central concept of Poor and Rich Optimization (PRO) is that economically disadvantaged and privileged social groups may and should work together to better their economic standing [48,49].
Multiple metaheuristic algorithms have been proposed in recent years, each employing a unique strategy for overcoming these problems. A contemporary example of a metaheuristic that takes inspiration from nature is the Butterfly Optimization Algorithm (BOA) [60]. BOA acts as a butterfly might when looking for food and trying to mate. BOA's exploration and exploitation methods are relatively straightforward. In BOA, the butterfly can either aimlessly flit around in the search space to accomplish exploration or go straight to the best butterfly to accomplish exploitation. Switch probabilities determine the relative weights of exploration and exploitation. Using traditional benchmark functions and engineering design challenges, BOA was proven to work, and its results and performance are positive overall. Stochastic Fractal Search (SFS) is a relatively new metaheuristic that takes its cues from fractals in nature [61]. During the optimization phase, SFS primarily uses diffusion and update processes. While the first process guarantees that the search space is exploited, the second expands its scope with regular updates. In addition, SFS employs Levy flight and Gaussian methods to generate new particles [62,63]; utilizing these methods, the algorithm's convergence rate may be sped up. Good performance and robust exploratory capabilities were seen in tests of SFS on both constrained and unconstrained standard benchmark functions. The Whale Optimization Algorithm (WOA) [64] employs several methods to accomplish its goals of exploration and exploitation. Some of its approaches use movement around a randomly chosen solution to enhance exploration, whereas others spiral towards the optimal solution to enhance exploitation. Achieving a happy medium between exploration and exploitation depends on WOA's use of two adaptive parameters. WOA has been rigorously examined and verified against industry-standard benchmark functions and constrained engineering design challenges.
The Stochastic Paint Optimizer (SPO) [65] is an optimization technique influenced by art. SPO is a population-based optimizer that draws inspiration from the beauty of color and the painting method. To identify the ideal color, the SPO optimization algorithm considers the search area as a canvas and applies several color combinations. Great exploration and exploitation in SPO are provided by four straightforward color combination rules that do not require any internal parameters. Well-known mathematical benchmark functions were used to assess the algorithm's performance, and the results were compared with more modern, well-researched methods to confirm the accuracy of the findings. In [66], the authors developed the multi-objective version of this method for global engineering problems. Principles such as employing an external archive of a specified size set the suggested method apart from the original SPO. Moreover, this method offers a leader selection function for performing multi-objective optimization. Adding chaos to the framework of metaheuristic algorithms is one of the effective methods that can be used to increase the performance of these algorithms. In [67], ten chaotic maps are used to introduce chaos into SPO. The primary contributions of that research are the proposal of chaotic versions and the identification of the optimal chaotic version of SPO. The analysis of certain mathematical and engineering problems revealed that some chaotic SPO variations improve upon the functionality of the standard SPO.
In addition, several related optimizers have been developed and used for a wide range of optimization problems. Harris Hawks Optimization (HHO) [68] is a newer optimizer that takes its cues from hawks' method of hunting. HHO employs four tactics to ambush its target during the exploitation stage. During this stage, it takes a cue from hawks, as they hunt by perching in various places, waiting for the right moment to strike. HHO uses an adaptive equation similar to WOA's to iterate between the exploration and exploitation phases. To verify HHO's reliability, it was subjected to rigorous testing against various reference functions and constrained engineering design challenges. HHO was shown to be both competitive and promising.
Many researchers have recently developed hybrid optimization algorithms, which combine the best features of two or more optimization techniques to address the shortcomings of using only one [69]. For example, in [70,71], a novel hybrid optimizer dubbed PSOSCA was developed by fusing the PSO algorithm with the Sine Cosine Algorithm and the Levy flight technique. The Levy flight strategy uses random walks to expand the search area; these random walks ensure that much ground is covered and that local optima are more effectively avoided. The Sine Cosine Algorithm (SCA) [72,73] improves PSO's ability to explore and exploit new areas by using position update equations. PSOSCA has benefits and is successful against most PSO variations, as evidenced by the test results. Standard benchmark functions and real-world, resource-limited engineering challenges were used to verify the efficacy of the new hybrid, PSOSCA.
In addition to the previous optimization algorithms, recent efforts have contributed to the emergence of more advanced algorithms. These algorithms include the Keshtel Algorithm (KA) [74] and its application in [75], the Social Engineering Optimizer (SEO) [76] and its application in [77], the Red Deer Algorithm (RDA) [78] and its application in [79], and the tabu search-based hybrid metaheuristic approach [80]. Despite the promising performance achieved by these algorithms, according to the No Free Lunch theorem, there is an opportunity to develop further algorithms to improve the overall performance of optimizing machine learning models for various applications.
An examination of current optimization techniques reveals that no metaheuristic algorithm is predicated on simulating the organic behavior of waterwheel plants. The hunting behavior of these plants has been studied, and the results indicate that it is an intelligent process with significant potential for use in developing a new optimizer. In this study, a new swarm-based metaheuristic method is developed and presented in the next section to fill this knowledge gap by mathematically simulating the natural behaviors of waterwheel plants. In this research paper, we present a new metaheuristic optimization approach, WWPA, which takes its cues from the coordinated efforts of swarms of individual organisms working toward a common objective. WWPA seeks to find a middle ground between guaranteeing rapid convergence and preventing stagnation in local optima. Methods for improving exploitation performance, striking a healthy balance between exploration and exploitation, expanding the search space, and diversifying the present population all contribute to this goal. This paper's primary contribution is the development of a novel optimization algorithm, referred to as the Waterwheel Plant Algorithm (WWPA), which gives a fresh perspective on the problem space of optimization. Compared with other swarm-based and evolutionary-based algorithms, preliminary research indicates that WWPA is competitive and promising and can even exceed them. The proposed algorithm's efficacy was tested and confirmed with real-world, constrained engineering design challenges as added proof of efficiency.

The Proposed Methodology
The proposed Waterwheel Plant Algorithm (WWPA) is presented in this section. The section presents the algorithm's inspiration and the corresponding mathematical model.

Inspiration of WWPA
The traps of the waterwheel plant (also referred to as Aldrovanda vesiculosa) are borne on wide petioles and resemble little (1/12-inch) transparent flytraps [81,82]. The traps are protected against damage or false triggers caused by accidental contact with other aquatic plants by a ring of hair-like bristles that surrounds each trap. Similar to the teeth of a flytrap, the trap's outer edges are coated with many hook-like teeth that interlock as the trap closes around its prey. About 40 long trigger hairs (compare the 6-8 trigger hairs within a Venus flytrap trap) are located within the trap and are responsible for closing the clamshell when triggered one or more times. In addition to the trigger hairs, these predators have acid-secreting glands that help them digest food. The unfortunate victim is ensnared by the trap's interlocking teeth and mucus sealant, which together seal around it and push it down to the base of the trap, close to the hinge. The trap then drives out much of the water, and the prey is digested in the trap's juices. Each Aldrovanda trap may catch two to four meals before it gives up, similar to a flytrap. Figure 1 shows a picture of the waterwheel plant.

The Mathematical Model of WWPA
This section discusses how to set up WWPA and then details how to update the waterwheel's location throughout exploration and exploitation using a model of the waterwheel's actual behavior.

Initialization
The proposed WWPA is a population-based technique that, via iteration, can deliver an appropriate solution based on the search power of its population members in the space of possible solutions to the problem. Based on their positions in the search space, the waterwheels that comprise the WWPA population each take on their own values of the problem variables. Accordingly, each waterwheel represents a possible solution to the problem, which may be mathematically represented by a vector. The WWPA population, which includes all the waterwheels, may be represented by matrix (1). The waterwheels' positions in the search space are randomly initialized at the outset of WWPA implementation using (2),
where the number of waterwheels and the number of variables are denoted by N and m, respectively; r_i,j is a random number in the interval [0, 1]; lb_j and ub_j are the lower and upper bounds of the j-th problem variable; P is the population matrix of waterwheel locations; P_i is the i-th waterwheel (a candidate solution); and p_i,j is its j-th dimension (problem variable). With these definitions, the initialization rule of (2) takes the form p_i,j = lb_j + r_i,j · (ub_j − lb_j). Each waterwheel represents a potential solution to the problem, so the objective function can be calculated for each. The computed objective-function values can be represented by the vector in (3),
where F is the vector of all the objective function values and F_i is the value computed for the i-th waterwheel. The objective function evaluations are the key metrics for selecting the best solutions. Therefore, the best candidate solution (i.e., the best member) corresponds to the best value of the objective function, and the worst value corresponds to the worst candidate solution (i.e., the worst member). Because the waterwheels move across the search space in each iteration, the best solution must also be updated over time.
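The initialization and evaluation steps above can be sketched in Python. The function names and the NumPy layout are illustrative choices, not part of the paper, but the rule p_i,j = lb_j + r_i,j · (ub_j − lb_j) follows directly from the variable definitions given for (1)-(3):

```python
import numpy as np

def initialize_population(n_wheels, n_vars, lb, ub, rng=None):
    """Random initialization of waterwheel positions, following the
    description of Equation (2): p_ij = lb_j + r_ij * (ub_j - lb_j),
    with r_ij drawn uniformly from [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    return lb + rng.random((n_wheels, n_vars)) * (ub - lb)

def evaluate(population, objective):
    """Objective-function vector F of Equation (3): one value per waterwheel."""
    return np.array([objective(p) for p in population])
```

For example, `initialize_population(50, 2, [-100, -100], [100, 100])` produces the 50 x 2 population matrix P used in the benchmark experiments described later.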

Phase 1: Position Identification and Hunting of Insects (Exploration)
Due to their acute sense of smell, waterwheels are formidable predators that can track down the source of pests. Whenever an insect comes into the range of a waterwheel, the waterwheel starts to attack it: it pinpoints the insect's location and then attacks and hunts it. Using a simulation of this behavior of waterwheels, WWPA models the initial stage of its population update process. The exploration capacity of WWPA for finding the optimal region and escaping from local optima is enhanced by modeling the waterwheel's attack on the insect, which causes considerable shifts in the position of the waterwheel in the search space. To determine the new location of the waterwheel, the equation below is used in conjunction with the simulation of the waterwheel's approach to the insect. If the value of the objective function is improved by moving the waterwheel to this location, the former location is abandoned in favor of the new one.
On the other hand, the position of the waterwheel can be changed using the following equation in case the solution does not improve for three consecutive iterations, where r1 and r2 are random vectors with values in the ranges [0, 2] and [0, 1], respectively. In addition, K is an exponential variable with values in the range [0, 1], and W is a vector that indicates the diameter of the circle in which the waterwheel plant searches for promising areas.
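The exploration equations themselves are not reproduced in this excerpt, so the sketch below is only illustrative: it combines the quantities the text does define (r1 in [0, 2], r2 in [0, 1], the exponential variable K, and the search-circle vector W) into a plausible large-step move, with a randomized restart after three stagnant iterations. The function name and the exact update rule are assumptions, not the paper's equations:

```python
import numpy as np

def explore_step(P, K, stagnation, lb, ub, rng):
    """Illustrative exploration move for one waterwheel position P.
    r1 ~ U[0, 2] and r2 ~ U[0, 1] as in the text; W stands for the
    diameter of the circle in which the plant searches for promising
    areas. NOTE: the actual update equations are omitted from this
    excerpt, so this rule is an assumption for illustration only."""
    r1 = rng.uniform(0.0, 2.0, size=P.shape)
    r2 = rng.uniform(0.0, 1.0, size=P.shape)
    W = r1 * (P + 2.0 * K)              # assumed form of the search-circle vector
    if stagnation < 3:
        cand = P + W * (2.0 * K + r2)   # large shift: attack the detected insect
    else:
        # no improvement for three iterations: jump to a random point
        cand = rng.uniform(lb, ub, size=P.shape)
    return np.clip(cand, lb, ub)
```

In either branch, WWPA would keep the candidate only if it improves the objective value, matching the greedy acceptance described above.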

Phase 2: Carrying the Insect in the Suitable Tube (Exploitation)
A waterwheel captures an insect and transports it to a feeding tube. The second step of population update in WWPA is modeled after this behavior of waterwheels. WWPA's exploitation power during the local search is increased, and better solutions are converged upon near the ones that have already been discovered, thanks to the model of transporting the insect to the appropriate tube, which leads to the creation of small changes in the position of the waterwheel in the search space. For each waterwheel in the population, WWPA first determines a new random location as a "good position for consuming insects," mimicking the waterwheels' natural activity. Therefore, if the objective function value is better at this new site, the waterwheel is moved to it instead of the prior location, as shown in the following equations, where r3 is a random vector with values in the range [0, 2], P(t) is the current solution at iteration t, and P_best is the best solution. Similar to the exploration phase, if the solution does not improve for three iterations, the following mutation is applied to guarantee the avoidance of local minima, where F and C are random variables with values in the range [−5, 5]. In addition, the value of K decreases exponentially according to Equation (10), which is a function of the iteration number t, the maximum number of iterations T_max, and F.
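As with the exploration phase, the exploitation equations are omitted from this excerpt, so the following is only a hedged sketch. It uses the quantities the text defines (r3 in [0, 2], the best solution P_best, the decaying variable K, and the mutation variables F and C in [−5, 5]); the function name and both update rules are assumptions for illustration:

```python
import numpy as np

def exploit_step(P, P_best, K, stagnation, rng):
    """Illustrative exploitation move toward the best-known solution P_best.
    r3 ~ U[0, 2] and F, C ~ U[-5, 5] follow the text; the exact update
    equations are omitted from this excerpt, so the rules below are
    assumptions for illustration only."""
    r3 = rng.uniform(0.0, 2.0, size=P.shape)
    if stagnation < 3:
        # small, K-scaled move: carrying the insect toward the feeding tube
        return P + K * r3 * (P_best - P)
    # no improvement for three iterations: F/C-driven mutation near P_best
    F = rng.uniform(-5.0, 5.0)
    C = rng.uniform(-5.0, 5.0)
    return P_best + (F / max(abs(C), 1e-9)) * K * rng.standard_normal(P.shape)
```

Because K shrinks over the iterations, the step size of this phase shrinks with it, which is what gives the local search its fine-grained, convergent character.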

Pseudocode of the Proposed WWPA
WWPA is presented as an iterative method. After the first and second phases of WWPA have been implemented, the final step is to adjust the locations of all waterwheels. The values of the objective function are compared; then, the best candidate solution is revised. The waterwheels' locations are then updated for the following iteration, and this process repeats itself until the algorithm reaches its final iteration. A schematic representation of the inspiration for the proposed methodology is shown in Figure 2. In addition, Algorithm 1 presents the steps of the procedure involved in putting WWPA into practice. Upon completion of the algorithm execution, WWPA returns the most promising candidate solution that it has stored throughout the iterations.
Initialize the population of waterwheel positions Pi and the WWPA parameters
while t ≤ Tmax do
    Explore the waterwheel plant search space using the exploration update equation
    if the solution does not change for three iterations then
        Apply the exploration mutation
    end if
    Exploit the current solutions to approach the best solution using the exploitation update equation
    if the solution does not change for three iterations then
        Apply the exploitation mutation
    end if
    Decrease the value of K exponentially using Equation (10)
    Calculate the objective function fn for each position Pi
    Find the best position Pbest
    Set t = t + 1
end while
Return the best solution Pbest
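The iterative procedure of Algorithm 1 can be sketched end to end in Python. Because the per-phase equations are not reproduced in this excerpt, the update rule below is a stand-in (a K-scaled move toward the best solution plus decaying noise, with greedy acceptance), and the exponential decay of K is likewise an assumed form; only the overall loop structure follows the algorithm:

```python
import numpy as np

def wwpa_minimize(objective, lb, ub, n_wheels=50, t_max=500, seed=0):
    """Skeleton of the WWPA main loop (Algorithm 1). The per-phase update
    rules and the exact decay of K are omitted from this excerpt, so the
    forms below are illustrative stand-ins."""
    rng = np.random.default_rng(seed)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    P = lb + rng.random((n_wheels, lb.size)) * (ub - lb)   # Eq. (2)
    f = np.array([objective(p) for p in P])                # Eq. (3)
    best_idx = int(np.argmin(f))
    P_best, f_best = P[best_idx].copy(), float(f[best_idx])
    for t in range(t_max):
        K = np.exp(-3.0 * t / t_max)       # assumed exponential decay of K
        r = rng.uniform(0.0, 2.0, size=P.shape)
        noise = K * rng.standard_normal(P.shape)
        # stand-in update: move toward P_best, perturbed by decaying noise
        cand = np.clip(P + r * K * (P_best - P) + noise, lb, ub)
        fc = np.array([objective(c) for c in cand])
        improved = fc < f                   # greedy acceptance, as in the text
        P[improved], f[improved] = cand[improved], fc[improved]
        if float(f.min()) < f_best:
            best_idx = int(np.argmin(f))
            P_best, f_best = P[best_idx].copy(), float(f[best_idx])
    return P_best, f_best
```

For example, `wwpa_minimize(lambda v: float(np.sum(v**2)), [-100, -100], [100, 100])` mirrors the experimental setup used later (population size 50, 500 iterations).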

Complexity Analysis
This section assesses the computational complexity of the proposed WWPA. The complexity of WWPA was determined to be O(t_max × n), and O(t_max × n × d) when the d problem dimensions are accounted for. The complexity is defined for iterations with a maximum of t_max and n agents; the details of this calculation are listed in the following.

Experimental Results
In this section, we present the evaluation of the proposed WWPA with two tests to demonstrate its worth: a benchmark function test and a test replicating real-world engineering challenges. Although the benchmark function test is useful, it is important to utilize suitable, adequate, and diverse types of benchmark functions owing to the randomness of the computation results produced by a stochastic optimization method. This study employed 23 regularly used benchmark function tests of varying properties [83]. To guarantee that a proposed optimization method can also achieve higher performance in engineering applications, it is necessary to conduct several actual engineering verification tests in addition to using a set of benchmark functions. Real-world engineering problems are optimization problems with many constraints, making them ideal for comparing algorithms' relative effectiveness. The designs of a pressure vessel, a tension/compression spring, and a welded beam are employed as verification engineering problems. Mechanics and structural design are the areas of study from which these three engineering problems are drawn.

Benchmark Function Test
This work employed 23 benchmark test functions widely used for optimization algorithms. Unimodal benchmark functions F1 to F7 were included in the conducted experiments. Benchmark functions F8 to F13 formed the multimodal set, whereas F14 to F23 formed the fixed-dimension multimodal set. Tables 1-3 provide a summary of the test functions and their corresponding parameters. In these tables, D and Fun refer to the number of dimensions and the mathematical function, respectively. Range shows the interval of the search space, and f_min refers to the optimal value that the corresponding function can achieve. Figure 3 displays illustrative 3D models of typical functions included in the comparison results.
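Two representatives of this suite can be written compactly: the Sphere function (unimodal, conventionally F1, searched over [-100, 100]^D) and the Rastrigin function (multimodal, conventionally F9, searched over [-5.12, 5.12]^D). The F1/F9 labels follow the usual convention for this 23-function suite and should be treated as assumptions for this excerpt:

```python
import numpy as np

def sphere(x):
    """Unimodal Sphere function (conventionally F1): f_min = 0 at the
    origin, typically searched over [-100, 100]^D."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * x))

def rastrigin(x):
    """Multimodal Rastrigin function (conventionally F9): f_min = 0 at the
    origin, typically searched over [-5.12, 5.12]^D, with many local minima."""
    x = np.asarray(x, dtype=float)
    return float(10.0 * x.size + np.sum(x * x - 10.0 * np.cos(2.0 * np.pi * x)))
```

The contrast between the two illustrates why both sets are needed: Sphere rewards pure exploitation, while Rastrigin's many local minima expose weak exploration.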
The population size was 50, and the number of iterations was 500 when solving the benchmark test functions. Algorithms such as Particle Swarm Optimization (PSO) [84], the Genetic Algorithm (GA) [85], Differential Evolution (DE) [86], the Whale Optimization Algorithm (WOA) [87], Grey Wolf Optimization (GWO) [88], the JAYA algorithm [89], and the Fire Hawk Optimizer (FHO) [90] were compared with the proposed optimization algorithm. Table 4 displays the sources from which these algorithms were derived, and Table 5 displays the algorithms' parameter settings employed in the performance comparisons.
The optimal solutions and statistical data show that the proposed WWPA performed much better than PSO and GA. Popular optimization methods such as PSO and GA did not perform well compared with the other algorithms when tested against the benchmark functions. In addition, compared with DE and GWO, although the proposed algorithm still had benefits, its performance dropped on the fixed-dimension multimodal benchmarks, likely due to the algorithm's linear search route, more flexible parameter selection approach, and the insertion of empirical parameters.
It is also evident that the proposed WWPA achieved higher performance than WOA on six functions due to WWPA's less complicated search procedure. WOA is highly effective, although its search procedure is time-consuming and laborious. DE's success may be primarily attributed to its adaptable coding strategy and its ability to address zero-one problems. Predation rules in nature inspired the development of two other natural heuristic optimization algorithms, WOA and GWO. In the next section, we demonstrate the results of a detailed performance comparison with the competing optimization methods. In conclusion, the proposed WWPA improved performance on the tested benchmark functions.

Table 2. Description of multimodal benchmark functions.

Table 3. Description of multimodal fixed-dimension benchmark functions.

On the other hand, Figure 4 shows the convergence curves for six standard functions. As shown in the figure, WWPA converges faster than the other competitors. Moreover, a non-parametric test, the Wilcoxon rank sum test, was used at the 5% level of significance to make a fair comparison between WWPA's results and those of the other algorithms in each independent run. Table 6 shows the results of this test. From this table, it can be seen that the p-values for almost all functions are less than 0.05.
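A rank-sum comparison of this kind can be reproduced with a short self-contained routine. The implementation below uses the normal approximation to the two-sided Wilcoxon rank-sum test (adequate for samples of about ten or more runs, as used here); the per-run fitness samples in the test are hypothetical placeholders, not the paper's data:

```python
import numpy as np
from math import erf, sqrt

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test p-value via the normal
    approximation (no tie correction; adequate for ~10+ runs per group).
    A p-value below 0.05 indicates a significant difference at the 5%
    level, as used in the paper's comparisons."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n1, n2 = a.size, b.size
    # Ranks (1-based) of all observations in the pooled sample
    ranks = np.argsort(np.argsort(np.concatenate([a, b]))) + 1
    R1 = float(ranks[:n1].sum())            # rank sum of the first group
    mu = n1 * (n1 + n2 + 1) / 2.0           # mean of R1 under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (R1 - mu) / sigma
    # Two-sided p-value from the standard normal survival function
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
```

In practice, `scipy.stats.ranksums` provides the same test; the hand-rolled version is shown only to keep the example dependency-free.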

ANOVA and Wilcoxon Rank Sum
In this section, we give a statistical analysis comparing WWPA's results with those of the competing algorithms to establish whether WWPA offers a substantial advantage [91,92]. The ANOVA test, which compares means across several groups, is shown for benchmark function f6 in Table 7. The Wilcoxon rank sum test, a non-parametric test for comparing two independent samples, was used to establish the statistical significance of WWPA's advantage over the other algorithms via the associated p-value. Table 6 statistically compares WWPA's outcomes with the competing algorithms' findings; according to these outcomes, WWPA has a statistically significant advantage over the corresponding algorithm when the p-value is less than 0.05. The Wilcoxon signed rank test results for benchmark function f6, comparing the proposed WWPA against the other algorithms, are introduced in Table 8. The performance of the proposed continuous WWPA on the benchmark functions is confirmed by these results against algorithms considered state of the art.

The residual plot shown in Figure 5 is a type of scatter plot used to visualize the errors of a regression model. The residuals are the differences between the observed and predicted values and are used to detect outliers, influential observations, and trends in the data; the residual plot shows the residuals on the vertical axis and the independent variable on the horizontal axis. The figure shows that the points in the residual plot are randomly dispersed around the horizontal axis, which indicates the appropriateness of the proposed approach. In addition, the homoscedasticity plot shown in Figure 5 is used to visually assess a dataset's homoscedasticity, the property in which the variance of the data points is the same across all values of the independent variable. This type of plot is typically used to detect heteroscedasticity, the opposite of homoscedasticity, which occurs when the variance of the data points differs across values of the independent variable. Homoscedasticity plots are typically created by plotting the residuals of a regression model against the independent variable. The result of this plot shows the proposed algorithm's promising performance on the benchmark functions.
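The Wilcoxon rank sum test reported in Table 6 can be sketched in pure Python; this is a minimal normal-approximation version with average ranks for ties (an illustrative sketch, not the statistical package used for the reported tables):

```python
import math

def wilcoxon_rank_sum(a, b):
    """Two-sided Wilcoxon rank sum test (normal approximation)."""
    # Pool and sort the two samples, remembering group membership.
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b], key=lambda t: t[0])
    # Assign 1-based ranks, averaging ranks over ties.
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)  # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value
    return w, p
```

A p-value below 0.05, as reported for almost all functions, rejects the hypothesis that the two algorithms' result samples come from the same distribution.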
Moreover, the QQ (quantile-quantile) plot shown in Figure 5 is a graphical tool used to compare two probability distributions by plotting their quantiles against each other; it is often used to check whether a given dataset follows a normal distribution. The quantiles of the first dataset are plotted on the x-axis and those of the second dataset on the y-axis. If the datasets come from the same distribution, the points should fall along a 45-degree line; deviations from this line indicate that the datasets come from different distributions. QQ plots can also be used to compare the distributions of two samples, or of a sample and a theoretical distribution. On the other hand, the heatmap plot shown in Figure 5 is a graphical representation of an optimization algorithm's relative performance across a range of inputs and parameters. Heatmap plots allow different optimization strategies to be compared easily, help identify potential improvement areas and bottlenecks in an optimization process, and help visualize its progress over time. Figure 6 shows the box plots of the proposed and competing algorithms for benchmark functions f1 to f7.

Constrained Engineering Design Problems
In this part, we present how the algorithm's capability was tested on two constrained optimization problems: the design of a tension/compression spring [93] and of a pressure vessel [94]. The two engineering problems are mathematically described in this section. WWPA's results were compared with the outcomes of the GA, GSA, GWO, and PSO algorithms in obtaining the minimum cost.

Tension/Compression Spring Design Problem
Tension/compression spring design (TCSD) is depicted in Figure 7 [93]. The objective is to minimize the weight of a coil spring subjected to a fixed tension or compression load; as such, TCSD is classified as a continuous constrained problem. The design variables of the TCSD are the number of active coils in the spring, L; the diameter of the winding, d; and the diameter of the wire, w. TCSD is stated mathematically as a cost minimization subject to inequality constraints and to range constraints on the three variables. Table 9 shows the results of applying WWPA to this problem, comparing WWPA, GA, PSO, DE, GWO, and WOA in finding the best cost and the corresponding values of the design variables. As the table shows, WWPA was the most effective at solving the tension/compression spring design problem and produced the best possible solution. Table 10 displays the statistical outcomes of WWPA and the other algorithms on this problem; a population of 20 individuals, a maximum of 500 iterations, and 20 independent runs were employed. From this table, WWPA performed as well as, if not better than, the average of the other optimizers, and it found the optimal solution using the fewest possible function evaluations. After extensively exploring the search space, WWPA rapidly converged toward the ideal objective.

Pressure Vessel Design Problem

The pressure vessel problem [94] concerns a cylindrical vessel capped at both ends by hemispherical heads, as shown in Figure 8. The objective is to minimize the total cost, which includes material, forming, and welding costs. Four design variables need to be optimized: the thickness of the shell, Ts; the thickness of the head, Th; the inner radius, R; and the length of the cylindrical section, L, not including the head. Ts and Th are integer multiples of 0.0625 in, the available thickness of steel plates, while R and L are continuous. The problem is formulated mathematically as a cost minimization subject to inequality constraints and to range constraints on the four variables. Many scholars have addressed this problem with numerous methods, including GA, PSO, and GWO. Table 11 displays WWPA's results on this problem, presenting the optimum values of the design variables for each optimization method (WWPA, GA, PSO, and GWO). It is clear that WWPA is superior to the previous optimization methods and can determine a pressure vessel design that is both technically feasible and economically viable. Table 12 presents a statistical comparison of WWPA's and the other algorithms' solutions to the pressure vessel design problem across 30 independent runs; a population of 20 individuals and 500 iterations were used. From this table, WWPA achieved the best mean score compared with the other strategies and also excelled at identifying the optimal design with the fewest fitness evaluations. WWPA's comprehensive exploration and exploitation approaches helped identify the most promising design configurations, and the minimal number of fitness evaluations needed to reach the optimal values demonstrates its quick convergence behavior.
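Since the TCSD equations themselves are not reproduced above, the following sketch uses the standard formulation of this benchmark from the literature (objective (L + 2)dw², four inequality constraints), written in the paper's w, d, L naming; the penalty coefficient rho is an illustrative choice, not a value from the paper:

```python
def spring_cost(x):
    w, d, L = x  # wire diameter, winding diameter, number of active coils
    return (L + 2) * d * w**2

def spring_constraints(x):
    # Standard TCSD constraints, each required to satisfy g(x) <= 0.
    w, d, L = x
    return [
        1 - (d**3 * L) / (71785 * w**4),                      # minimum deflection
        (4 * d**2 - w * d) / (12566 * (d * w**3 - w**4))
            + 1 / (5108 * w**2) - 1,                          # shear stress
        1 - (140.45 * w) / (d**2 * L),                        # surge frequency
        (d + w) / 1.5 - 1,                                    # outer diameter
    ]

def penalized(x, rho=1e6):
    # Static penalty: feasible points keep their raw cost unchanged.
    return spring_cost(x) + rho * sum(max(0.0, g) ** 2 for g in spring_constraints(x))
```

Any population-based optimizer, WWPA included, can then minimize `penalized` directly over the box ranges of the three variables.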

Welded Beam Design Problem
One of the standard optimization problems in engineering is the welded beam design problem [95,96], shown in Figure 9. Four design parameters describe this problem: the weld width, w; the weld length, L; the main beam depth, h; and the main beam thickness, d. The overall cost of fabricating the welded beam is minimized subject to constraints on the shear stress, the bending stress, the buckling load, and the maximum end deflection.
The problem is formulated as a cost minimization subject to constraints, where σ = 504,000 and the four variables' ranges include 0.1 ≤ w, h ≤ 2.0. Cost minimization is the goal of WWPA, GA, PSO, and WOA, and Table 13 shows the optimal design variables corresponding to each method's best cost. Compared with the other methods, WWPA discovered its optimal design while minimizing the number of function evaluations. WWPA excelled in the welded beam design problem, and the table shows that it identified the best possible design factors. Table 14 shows the statistical outcomes of WWPA and the other algorithms on the welded beam design problem; 20 individuals were used throughout 20 runs of 500 iterations each. Compared with the other algorithms, WWPA ranked third in the overall average.
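As a hedged sketch of the quantities being minimized and constrained: the standard welded-beam formulation in the literature [95] has fabrication cost 1.10471w²L + 0.04811hd(14 + L), and its bending stress has the σ = 504,000 numerator quoted above (the full constraint set is omitted here; these expressions are the conventional ones, not reproduced from the paper):

```python
def beam_cost(w, L, h, d):
    # Weld material cost plus beam material cost (standard formulation).
    return 1.10471 * w**2 * L + 0.04811 * h * d * (14.0 + L)

def bending_stress(h, d):
    # sigma = 504000 / (h^2 * d); 504,000 = 6 * P * L for P = 6000 lb, L = 14 in.
    return 504000.0 / (h**2 * d)
```

Thicker and deeper beams lower the bending stress but raise the material cost, which is the trade-off the optimizer must balance.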

Conclusions and Future Perspectives
In this study, we introduced the waterwheel plant technique (WWPA), a novel swarm-based optimization technique. The proposed WWPA heavily draws on the tactics and actions of waterwheel plants in the course of their search. Following an explanation of how WWPA works, a mathematical model that can be applied to optimization problems was offered. Twenty-three objective functions from the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal categories were used to evaluate the effectiveness of the proposed method. The capabilities of the proposed algorithm were further examined by comparing the optimization results acquired by WWPA with those provided by seven other well-known algorithms: PSO, DE, WOA, GWO, GA, FHO, and JAYA. The optimization results on the unimodal functions show that the proposed WWPA has strong exploitation power, convergently finding the global optimal solution; the simulation results on these functions demonstrate that WWPA outperformed the seven competing algorithms by a large margin on single-modality problems. The multimodal function simulation results show that the proposed WWPA has strong exploration capability to scan the search space and efficiently locate the ideal region, and WWPA was superior to the competing algorithms in these multimodal optimization scenarios. Overall, the simulation results show that the proposed WWPA outperformed the other methods by a wide margin in solving optimization problems. We also used WWPA to solve the pressure vessel, welded beam, and tension/compression spring design problems; the simulation findings demonstrate that WWPA performed admirably when tackling real-world design difficulties.
The authors of this paper suggest several avenues for future investigation. The proposed methodology has the potential to pave the way for binary and multi-objective variants of WWPA, among other areas of study. In addition, the authors propose using WWPA to address optimization problems in a wide range of scientific disciplines and real-world contexts, such as feature selection, data mining, COVID-19 modeling, big data, artificial intelligence, power systems, machine learning, signal denoising, wireless sensor networks, and image processing, where approaches of this kind have been applied. It is possible that new optimizers will be created in the future that perform better than WWPA in some real-world applications; this is a drawback shared by all stochastic optimization approaches, including the proposed WWPA. In addition, because of the stochastic nature of the solution approach, the solutions to optimization problems obtained using WWPA cannot be guaranteed to be exactly equivalent to the global optimum.

Figure 1 .
Figure 1. Image of the waterwheel plant [81]. (a) Lateral view of a free-floating shoot with numerous traps. (b) Frontal view with open and closed traps. (c) Single open trap. (d) Schematic drawing of an open trap.

Figure 2 .
Figure 2. The inspiration of the proposed methodology.

Figure 4 .
Figure 4. Convergence curves of the presented and compared algorithms for functions f1, f2, f3, f4, f5, and f11.

Figure 5 .
Figure 5. Visualization of the analysis of the results of solving the benchmark functions.

Figure 6 .
Figure 6. Box plots of the proposed and competing algorithms for benchmark functions f1 to f7.

Algorithm 1: Pseudocode of the proposed WWPA.
1: Initialize the waterwheel plants' positions Pi (i = 1, 2, ..., n) for n plants, the objective function fn, the iteration limit Tmax, and the parameters r, r1, r2, r3, f, c, and K
2: Calculate the fitness fn of each position Pi
3: Find the best plant position Pbest
4: Set t = 1
5: while t ≤ Tmax do
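The steps above can be sketched as a generic population loop; the position-update rule inside the loop is a simple placeholder (a stochastic move toward Pbest plus a small perturbation), not the WWPA update equations, and the function name is illustrative only:

```python
import random

def wwpa_skeleton(fn, n, dim, bounds, t_max, seed=0):
    """Population-loop skeleton matching steps 1-5; placeholder update rule."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Step 1: initialize n plant positions uniformly inside the search box.
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    # Step 2: evaluate the fitness of each position.
    fit = [fn(p) for p in pop]
    # Step 3: record the best plant position as a (fitness, position) pair.
    best = min(zip(fit, pop))
    # Steps 4-5: main iteration loop.
    for t in range(1, t_max + 1):
        for i, p in enumerate(pop):
            # Placeholder move: stochastic step toward the best position,
            # clipped back into the box; NOT the paper's update equations.
            cand = [min(hi, max(lo, x + rng.gauss(0.0, 1.0) * (b - x)
                                + 0.1 * rng.gauss(0.0, 1.0)))
                    for x, b in zip(p, best[1])]
            f = fn(cand)
            if f < fit[i]:  # greedy acceptance
                pop[i], fit[i] = cand, f
        best = min(best, min(zip(fit, pop)))
    return best  # (best fitness, best position)
```

On a smooth test function such as the sphere, this loop contracts the population around the best-so-far position, the exploration/exploitation pattern that the full WWPA refines with its own plant-inspired update rules.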

Table 1 .
Description of unimodal benchmark functions.

Table 4 .
The source of inspiration of the competitor algorithms.

Table 5 .
The configuration parameters of the competing algorithms used in comparisons.

Table 7 .
ANOVA test results of the F6 function.

Table 8 .
Wilcoxon test results of the F6 function.

Table 9 .
Comparison of the best solution to tension/compression spring design problem.

Table 10 .
Descriptive statistics of tension compression.

Table 11 .
Comparison of the best solution to pressure vessel design problem.

Table 12 .
Descriptive statistics of pressure.

Table 13 .
Comparison of the best solution to the welded beam design.

Table 14 .
Descriptive statistics of the welded beam design problem.