Modified Flower Pollination Algorithm for Global Optimization

Abstract: In this paper, a modified flower pollination algorithm (MFPA) is proposed to improve the performance of the classical algorithm and to tackle the nonlinear equation systems widely used in engineering and science fields. In addition, differential evolution (DE) is integrated with MFPA to strengthen its exploration operator in a new variant called HFPA. The two algorithms were assessed using 23 well-known unimodal and multimodal mathematical test functions and 27 well-known nonlinear equation systems, and the obtained outcomes were extensively compared with those of eight well-known metaheuristic algorithms under various statistical analyses and convergence curves. The experimental findings show that MFPA and HFPA are competitive with each other and, compared to the others, superior or competitive for most test cases.


Introduction
In recent decades, metaheuristic optimization algorithms have been widely applied in several fields to tackle numerous optimization problems, especially engineering problems, because they avoid stagnation in local optima and converge quickly in the direction of a near-optimal solution [1]. Metaheuristic algorithms are classified into four categories according to the nature of their inspiration: evolutionary algorithms, physics-based algorithms, swarm-based algorithms, and human-based algorithms. The first category, called evolution-based algorithms, mimics biological evolution, relying on reproduction, mutation, recombination, and selection to produce offspring stronger than their parents. The most popular evolutionary algorithms, which have been applied to a wide range of optimization problems, are genetic algorithms (GA) [2], evolution strategy (ES) [3], genetic programming (GP) [4], probability-based incremental learning (PBIL) [5], and biogeography-based optimizer (BBO) [6].
The last category, called human-based algorithms, emulates human behaviors to propose algorithms with different methodologies, including Teaching Learning Based Optimization (TLBO) [41], Harmony Search (HS) [42], the League Championship Algorithm (LCA) [43], Group Counseling Optimization (GCO) [44,45], the Mine Blast Algorithm (MBA) [46], the Seeker Optimization Algorithm (SOA) [47], the Soccer League Competition (SLC) algorithm [48,49], the Firework Algorithm [50], and many others [51]. The significant successes achieved by metaheuristic algorithms have made them the first choice for tackling several optimization problems in a reasonable time [52]. One of the most popular optimization problems tackled by these algorithms is nonlinear equation systems (NESs).
Nonlinear equation systems (NESs) arise frequently in engineering and science, and solving them has recently attracted the attention of several researchers seeking effective optimization methods [53][54][55]. The optimization methods proposed for tackling NESs fall into two categories: classical and metaheuristic. The metaheuristic techniques have won significant interest over the classical ones because they avert being stuck in local minima, accelerate the convergence speed, and do not depend on the initial guess, in addition to fulfilling better outcomes in a reasonable time, as discussed before. Several papers apply human-, evolution-, swarm-, and physics-based metaheuristic algorithms to NESs, as discussed in the next section. The mathematical model of an NES is described as:

E(x) = { e_1(x) = 0, e_2(x) = 0, ..., e_n(x) = 0 },  x = (x_1, x_2, ..., x_D)   (1)

where D refers to the number of dimensions, n determines the number of equations, and x involves the solution to the NES. As shown in Equation (1), an NES consists of more than one objective function; hence, metaheuristic algorithms designed to deal with single-objective problems cannot solve it directly. Therefore, the NES is converted into a single objective, solvable by metaheuristic algorithms, using the following formula:

f(x) = Σ_{i=1}^{n} |e_i(x)|   (2)

so that f(x) = 0 exactly at a root of the system. The flower pollination algorithm (FPA), proposed for global optimization by mimicking the pollination process of flowers, has performed effectively on several optimization problems [27,[56][57][58][59][60][61]. Unfortunately, its performance still suffers substantially from stagnation in local minima, because it cannot explore several regions within the search space during the optimization process; it also has a low convergence speed, which makes the classical FPA consume several iterations searching for better solutions within unpromising regions.
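The conversion in Equation (2) can be sketched as follows; the two-equation system used here is a hypothetical example, and the sum of absolute residuals is one common choice (a sum of squared residuals is equally usual):

```python
import numpy as np

def nes_objective(equations, x):
    """Convert an NES into a single scalar objective by summing the
    absolute residuals of all n equations; the sum is zero only at a root."""
    return sum(abs(eq(x)) for eq in equations)

# Hypothetical 2-equation system: x0^2 + x1 - 3 = 0 and x0 + x1^2 - 5 = 0.
eqs = [lambda x: x[0] ** 2 + x[1] - 3,
       lambda x: x[0] + x[1] ** 2 - 5]

# (1, 2) solves both equations, so the merged objective is exactly zero there.
print(nes_objective(eqs, np.array([1.0, 2.0])))  # -> 0.0
```

Any single-objective metaheuristic can then minimize `nes_objective` directly.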
Broadly speaking, the classical FPA was evaluated using 10 mathematical test functions under a population size of 25 and a maximum iteration count reaching 10,840; this is a significant cost to pay for reaching the desired outcomes. Furthermore, the authors in [56] hybridized the classical FPA with the clonal selection algorithm (CSA) to solve 23 global test functions. This work was based on improving the local search of the classical FPA to avoid getting stuck in local minima and to reach better outcomes. In our opinion, hybridizing two algorithms to mix the advantages of each and overcome their respective disadvantages is a good alternative, but it can sometimes be ineffective because of the difficulty of finding two or more algorithms that complement each other well enough to reach better outcomes. As an easier alternative, we argue that the structure of the classical algorithm needs to be redesigned differently, creating various updating schemes that explore several regions within the search space to reach better outcomes without wasting iterations.
Therefore, a new modified FPA (MFPA) is proposed in this paper to overcome all of the aforementioned problems, by building an effective mathematical model that hybridizes various updating schemes to enable the modified algorithm to adapt itself while searching for solutions to the optimization problem. This modified algorithm could outperform the standard one on all 23 well-known unimodal and multimodal test functions and 27 NESs. Unfortunately, however, it still suffers from a defect in its exploration operator, which prevents it from reaching better outcomes than the competing algorithms on some test functions.
A well-established evolutionary algorithm known as differential evolution (DE) has been successfully applied to tackle several optimization problems, both continuous and discrete, and enjoys various updating schemes. One of these schemes, with a high exploration ability, is "DE/rand/1", which explores the regions around a solution selected randomly from the population [62]. Several DE variants, based on various updating schemes or on hybridizing DE with other effective techniques, have been extensively applied to global optimization [63][64][65][66][67][68][69][70][71][72][73]. Since our proposed MFPA suffers from a weak exploration operator, and the DE updating scheme "DE/rand/1" leans more toward exploration than exploitation, DE is integrated with MFPA to propose a new variant called HFPA, which balances exploration and exploitation, preserving the population diversity to avoid getting stuck in local minima while moving accurately toward the best-so-far solution to reduce time-consuming fitness function evaluations. After validation and comparison on 23 well-known unimodal and multimodal test functions, HFPA achieved superior outcomes compared to the rival algorithms, with success rates reaching 78% and 70% in the best and average cases, also outperforming MFPA, which, with percentages of 61% and 52%, was the second best.
Finally, our proposed algorithms, HFPA and MFPA, were further investigated on 27 NESs and compared with some recent and well-established optimization algorithms; the comparison shows that the proposed algorithms are better, with outperformance rates reaching 100% and 81% in the best case, and 67% and 37% in the average case. Based on the experimental findings, it is concluded that HFPA is better than all competing algorithms and MFPA for both global optimization and NESs; thus, it is a strong alternative for tackling these two types of optimization problems. Briefly, this paper presents the following contributions:
1. It proposes a modified variant of the classical FPA, namely MFPA, with various updating schemes to tackle both global optimization and NESs.
2. It improves the exploration operator of MFPA using DE with the "DE/rand/1" scheme to propose a new hybrid variant, called HFPA, with strong attributes.
3. The experimental findings show that HFPA has superior performance for tackling global optimization and NESs compared to eight rival algorithms and MFPA.
The structure of this paper is depicted in Figure 1: Section 2 describes works done previously on tackling the NESs. Section 3 describes the standard algorithms: FPA and DE. Section 4 shows our proposed algorithms, explaining them clearly and effectively. Section 5 exposes various experiments and presents some discussions. Finally, Section 6 presents our conclusions and future work.

Literature Review: NESs
As aforementioned, NESs are a hot research area that has attracted the attention of researchers in terms of proposing an optimization model that could solve them optimally in a reasonable time. Therefore, researchers have moved significantly towards metaheuristic algorithms as a strong alternative to the classical ones for tackling NESs. Some of the metaheuristic algorithms proposed for tackling NESs are reviewed next.
Ramadas, G.C. and E.M.d.G [74] have employed some variants of the harmony search algorithm to tackle NESs. Furthermore, the social emotion optimization algorithm (SEOA) was recently published with some improvements to develop a new variant, namely HSEOA, avoiding being stuck in local optima in order to reach better outcomes [75]. In addition, another method based on the multi-crossover real-coded genetic algorithm was proposed to tackle NESs, and was compared to some evolutionary algorithms to show its superiority [76]. Furthermore, Grosan, C., et al. [77] dealt with this problem as a multi-objective one, where each function represented an objective and tried to find the non-dominated solution, which minimizes all of the test functions together.
In the same context, a new efficient variant of the genetic algorithm (GA) was developed, using symmetric and harmonious individuals and an elitism strategy to improve the population diversity and the convergence speed, respectively [78]. Particle swarm optimization was improved using a conjugate direction (CD) method to propose a new variant, namely CDPSO, which overcomes optimization problems with high dimensions [79]. Moreover, in [80], PSO was suggested as a technique for tackling NESs to overcome the disadvantages of the classical methods, e.g., Newton's method. A new NES technique based on a modified firefly algorithm was employed to deal with problems with multiple roots [81]. In [82], another NES approach, named the parallel elite-subspace evolutionary algorithm (PESEA), was proposed to tackle NESs in a reasonable time.

The grasshopper optimization algorithm (GOA) and genetic algorithm (GA) were effectively integrated to produce a hybrid variant, namely hybrid-GOA-GA, which could efficiently tackle NESs [83]. This hybrid variant was validated using eight benchmark problems with different applications, and its outcomes were compared with some of the state-of-the-art ones in terms of computational cost, final results, and convergence speed; the experimental outcomes show its effectiveness in all of these terms. Furthermore, differential evolution (DE) was improved using two methods, a new mutation strategy and a restart technique, to preserve the population diversity and avoid the stagnation in local minima suffered by the standard DE, suggesting a new variant, namely DE-R, to accurately solve NESs. DE-R was validated using different real-world problems and compared with some recently proposed methods to show its superiority; in terms of convergence speed and accuracy, DE-R was better.
A new hybrid algorithm was recently employed for solving NESs [84]. This hybrid algorithm, called DEMBO, was based on integrating the differential evolution (DE) algorithm into the monarch butterfly optimization (MBO) to overcome its defects confined to time-consuming fitness functions and falling in local minima. This algorithm was evaluated using nine unconstrained optimization problems and eight NESs, and compared to some state-of-the-art algorithms. The experimental findings demonstrate its superiority over the competing ones. In [85], a framework based on both grey wolf optimizer and multiobjective particle swarm optimization was proposed to tackle the NESs. This framework could be more effective compared to some of the classical and metaheuristic techniques. The differential evolution unified with a method, known as the Powell conjugate direction method, to avert stagnation in local minima was suggested, to propose a system called DE-Powell for tackling the NESs [86]. DE-Powell could be more effective compared to several existing algorithms when solving nine NESs.
The cuckoo search algorithm and the niche strategy have been combined to propose a strong variant called the niche cuckoo search algorithm (NCSA) for solving NESs [87]. NCSA was benchmarked using 20 well-known mathematical test functions and some NESs, and compared to three well-established metaheuristic algorithms, namely the chaos gray-coded genetic algorithm, the classical genetic algorithm, and the standard cuckoo search algorithm, showing that it is more adaptable than the others for solving NESs. A hybrid algorithm incorporating cuckoo search (CS) with particle swarm optimization (PSO), to overcome the huge number of function evaluations required by CS and the local minima defect of PSO, was proposed in a variant named CSPSO to tackle NESs [88]. CSPSO was benchmarked on some NESs and 28 CEC2013 benchmark functions, and compared with some existing algorithms to measure its efficiency.
The bat algorithm, improved by a differential operator and a Levy flight strategy to accelerate the convergence speed and avoid local minima, respectively, was proposed as a variant named DLBA [89]. Fourteen typical test functions and an NES were employed for benchmarking the efficiency of the proposed algorithm against some other optimization algorithms. The experimental findings show the effectiveness of DLBA in finding better solutions than all competing ones.
In [90], a comparative study among various variants of the genetic algorithm, in addition to the classical methods, was performed to see which is better for solving systems of equations. The experimental results of this study showed that a modified GA variant was the best. The grey wolf optimizer was efficiently combined with DE to produce a new variant called GWO-DE, with strong characteristics, such as avoiding getting stuck in local minima and accelerating the convergence speed, for solving NESs [91]. The experimental findings, as reported by the authors, proved the efficacy of GWO-DE for tackling most NESs compared to the existing optimization techniques. Several other approaches have been proposed for tackling NESs [92][93][94][95].

Flower Pollination Algorithm (FPA)
Yang, X.-S. [27] proposed a nature-inspired metaheuristic optimization algorithm called the flower pollination algorithm (FPA), based on mimicking the pollination process of flowers. There are two kinds of pollination: self-pollination and cross-pollination. In self-pollination, fertilization is performed between flowers of the same type, where the pollen from one flower goes to fertilize another similar one. Cross-pollination transfers pollen over long distances between different plants by pollinators, such as insects, birds, and bats. It is worth mentioning that some pollinators tend to visit certain flowers and skip others, a phenomenon called flower constancy. Generally, the flower pollination process can be described by the following rules:

1. Biotic cross-pollination can be regarded as global pollination, used to explore the regions of the search space to find the most promising ones; this stage is based on the Levy distribution.
2. Abiotic self-pollination describes local pollination, utilized to exploit the regions around the current solution and accelerate the convergence speed.
3. The flower constancy property can be regarded as a reproduction ratio proportional to the degree of similarity between two flowers.
4. Local pollination has a slight advantage over global pollination due to physical proximity and wind. Specifically, the switch between local and global pollination is controlled by a probability P having a value between 0 and 1.
The mathematical model of global pollination and flower constancy involves the fittest insects, which travel for long distances, and is described as follows:

x_i^{t+1} = x_i^t + γ l (x_i^t − x*)

where t indicates the current iteration, x_i^t is the current position of the ith solution, x* is the best-so-far solution, l is a step generated from the Levy distribution, γ is the step-size scaling factor, and x_i^{t+1} expresses the next position. The mathematical model of local pollination is described as follows:

x_i^{t+1} = x_i^t + ε (x_j^t − x_k^t)

where ε is a random value generated in the interval of 0 and 1 from the uniform distribution, and x_k^t and x_j^t are two solutions selected randomly from the current population.
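The two pollination rules above can be sketched as follows; Mantegna's algorithm is a common way to draw Levy-distributed steps and is assumed here rather than taken from the paper:

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, lam=1.5):
    """Draw a Levy-stable step vector via Mantegna's algorithm (a common
    choice for FPA; the exponent lam=1.5 is an assumption)."""
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / lam)

def global_pollination(x_i, x_best, gamma_step=0.5):
    # Step along (x_i - x*) scaled by a Levy draw, as in the global rule.
    return x_i + gamma_step * levy(len(x_i)) * (x_i - x_best)

def local_pollination(x_i, x_j, x_k):
    # Uniform random step epsilon between two random flowers, local rule.
    eps = np.random.rand()
    return x_i + eps * (x_j - x_k)
```

Note that when x_i already equals the best-so-far solution, the global rule leaves it unchanged, which is one reason the classical FPA can stall.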

Differential Evolution
Storn and Price [96] proposed a population-based optimization algorithm named differential evolution (DE), similar to genetic algorithms in terms of its mutation, crossover, and selection operators. Before starting the optimization process, DE initializes a number of individuals x_{i,j}^t (i = 1, 2, ..., NP; j = 1, 2, ..., D), where NP is the number of individuals (also called the population size) and D is the dimension size, within the search space of the optimization problem. Afterwards, the mutation and crossover operators are applied to explore the search space for better solutions, as described below.

Mutation Operator
This operator is employed by DE to generate a mutant vector v_i^t for each target vector x_i^t in the population. The mutant vector is generated using the mutation strategy described below:

v_i^t = x_a^t + F (x_b^t − x_c^t)

where x_a^t, x_b^t, and x_c^t are distinct solutions selected randomly from the population at generation t, and F is a positive scaling factor.

Crossover Operator
After generating the mutant vector v_i^t, the crossover operator is employed to generate a trial vector u_i^t corresponding to the mutant one, according to a crossover probability (CR). This crossover operation is described as follows:

u_{i,j}^t = v_{i,j}^t  if rand ≤ CR or j = j_r;  otherwise u_{i,j}^t = x_{i,j}^t

where j_r is a random integer generated between 1 and D, j indicates the current dimension, rand is a random number in [0, 1], and CR is a predefined constant between 0 and 1 that determines the percentage of dimensions copied to the trial vector from the mutant one.

Selection Operator
Finally, the selection operator compares the trial vector u_i^t with the current one x_i^t, and the fitter of the two survives to the next generation. For a minimization problem, the selection process is expressed as:

x_i^{t+1} = u_i^t  if f(u_i^t) ≤ f(x_i^t);  otherwise x_i^{t+1} = x_i^t

where f(.) indicates the objective function, often known as the fitness function.
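The three operators combine into one generation as follows; a minimal, self-contained sketch of DE/rand/1 with binomial crossover and greedy selection:

```python
import numpy as np

def de_step(pop, fitness, f_obj, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation: mutate, crossover, greedily select."""
    NP, D = pop.shape
    new_pop = pop.copy()
    new_fit = fitness.copy()
    for i in range(NP):
        # Mutation: v = x_a + F * (x_b - x_c), three distinct indices != i.
        a, b, c = np.random.choice(
            [k for k in range(NP) if k != i], 3, replace=False)
        v = pop[a] + F * (pop[b] - pop[c])
        # Binomial crossover: at least dimension j_r comes from the mutant.
        j_r = np.random.randint(D)
        mask = np.random.rand(D) <= CR
        mask[j_r] = True
        u = np.where(mask, v, pop[i])
        # Selection: keep the trial vector only if it is not worse.
        fu = f_obj(u)
        if fu <= fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```

Because selection is greedy, the fitness of every individual is non-increasing from generation to generation, which is the elitism the hybrid algorithm later relies on.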

Proposed Algorithm: Hybrid Modified FPA (HFPA)
The steps used to build the proposed algorithm, developed for solving global optimization and NESs, are described in this section; they involve the initialization, the evaluation, the modifications, and the comprehensive algorithm.

Initialization
Before beginning the optimization process, NP solutions are distributed between the lower-bound and upper-bound vectors of the optimization problem using the following formula:

x_i = L + r ⊗ (U − L)

where U and L are the upper-bound and lower-bound vectors, r is a vector of D cells with values generated randomly between 0 and 1, and ⊗ denotes element-wise multiplication. Afterward, the initialized solutions are evaluated using Equation (2) to find the best-so-far solution, used at the next generation for updating the current population in the hope of exploring a better one.
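This uniform box initialization can be sketched as follows (the helper name and the use of NumPy's random Generator are ours):

```python
import numpy as np

def init_population(NP, L, U, rng=None):
    """Scatter NP solutions uniformly inside the box [L, U],
    i.e. x_i = L + r * (U - L) with a fresh random vector r per solution."""
    rng = rng or np.random.default_rng()
    L, U = np.asarray(L, float), np.asarray(U, float)
    r = rng.random((NP, L.size))  # one D-dimensional random vector per row
    return L + r * (U - L)

pop = init_population(30, [-5, -5], [5, 5])
assert pop.shape == (30, 2) and (pop >= -5).all() and (pop <= 5).all()
```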

Global Pollination
The classical FPA designed its mathematical model for global pollination, in which insects transfer pollen among plants, by updating the current position in the reverse direction of the best-so-far solution x* so as to carry the pollen a long distance. However, this involves some defects, mentioned next, which might affect the performance of FPA. Since the main goal of this stage is to carry pollen a long distance to fertilize other plants, it is not essential to always move the current position in the reverse direction of the best-so-far solution; updating with various schemes, combined effectively to carry the pollen to several regions involving various plants within the search space, might significantly improve the optimization process. Therefore, three updating schemes, swapped effectively to carry the pollen to several regions during the optimization process, are described mathematically as follows.
The first updating scheme relates the current position of each search agent to the current iteration, to help the algorithm gradually explore various regions around the current solution within the search space until the end of the iterations. In this case, the optimization process focuses on a local search around the current solution in the hope of finding a better one. Generally, this updating scheme is modeled as follows: where t_max indicates the maximum iteration and a is a distance control factor that determines the distance around the current position to be explored. The second updating scheme searches around the best-so-far solution based on two step sizes: the first takes the algorithm in the reverse direction of the best-so-far solution, while the other refines this direction to be closer to the best-so-far solution, to promote the exploitation operator or, further, to strengthen the exploration operator.
The mathematical model of this scheme is described as follows: where x_{r1}^t and x_{r2}^t are two solutions randomly selected from the population at iteration t, while r is a numerical value generated between 0 and 1 under the uniform distribution. Finally, the third updating scheme explores the regions between the current best-so-far position and its negative, based on the uniform distribution, to avoid being stuck in local minima, as modeled below: where U indicates a uniform distribution method that takes the lower endpoint −1 * r_1 and the upper endpoint r_1 as inputs and returns a vector of random values generated in between, and r_1 is a value created randomly between 0 and 1. The swapping among these three updating schemes is achieved as described by the following equation, balancing the execution of each scheme against the other two in an attempt to balance the exploration and exploitation capabilities: where r, r_1, and r_2 are numerical values generated randomly between 0 and 1.
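The swapping logic of the three global-pollination schemes can be sketched as follows; since the exact update equations appear only in the paper's displayed formulas, every branch below is an assumption reconstructed from the prose (an iteration-scaled neighborhood around the current solution, a best-guided move via two random flowers, and sampling between the best solution and its negative), not the paper's exact model:

```python
import numpy as np

def mfpa_global(x_i, x_best, pop, t, t_max, a=0.8):
    """Sketch of MFPA's three swapped global-pollination schemes.
    All three branch formulas are assumptions matching the prose only."""
    rng = np.random.rand
    r, r1, r2 = rng(), rng(), rng()
    D = len(x_i)
    if r < r1:
        # Scheme 1: explore a neighborhood around x_i that shrinks with t.
        scale = a * (1 - t / t_max)
        return x_i + scale * (2 * np.random.rand(D) - 1)
    elif r < r2:
        # Scheme 2: step from the best using two random flowers.
        i1, i2 = np.random.choice(len(pop), 2, replace=False)
        return x_best + rng() * (pop[i1] - pop[i2])
    else:
        # Scheme 3: sample between the best solution and its negative.
        r1v = rng()
        return x_best * np.random.uniform(-r1v, r1v, D)
```

The point of the swap is that a single solution may be pushed by any of the three rules in a given iteration, keeping pollen moving toward several regions at once.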

Local Pollination
Regarding the modification to the mathematical model at this stage, our idea was based on designing one using two schemes that are exchanged with a probability of 0.5 to balance between them. The first searches around the current position, scaled according to the current iteration, to promote the searchability of the algorithm within the search space and avoid getting stuck in local minima. The second searches around the best-so-far solution, also scaled according to the current iteration, to improve the exploitation operator and accelerate the convergence speed in the right direction of the near-optimal solution.
where x_m^t and x_n^t are two solutions selected randomly from the population, and r is a random number generated between 0 and 1. Finally, Algorithm 1 shows the steps of the modified FPA (MFPA), and the same steps are depicted in Figure 2.

Hybridization of MFPA with DE(HFPA)
Unfortunately, MFPA still suffers from a lack of population diversity; this pulls the algorithm into local minima and, hence, it cannot reach the near-optimal solution. Therefore, DE is integrated into MFPA with a probability p_1, picked experimentally as shown in the experiments section later, to take the algorithm into other regions, preserving the population diversity and achieving better outcomes. Finally, the steps of integrating MFPA with DE are listed in Algorithm 2, and its framework is described in Figure 3.
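The DE integration in HFPA can be sketched as follows; MFPA's own update rules live in Algorithm 1, so the non-DE branch below is stood in by a heavy-tailed global-pollination step (an assumption), and p_1 = 0.5 is the experimentally picked probability reported later:

```python
import numpy as np

def hfpa_generation(pop, fit, f_obj, best, p1=0.5, F=0.5, CR=0.9):
    """Sketch of one HFPA generation: with probability p1 a solution is
    updated by DE/rand/1 (exploration); otherwise by a pollination step
    (placeholder for MFPA's schemes). Greedy replacement throughout."""
    NP, D = pop.shape
    for i in range(NP):
        if np.random.rand() < p1:
            # DE/rand/1 with binomial crossover.
            a, b, c = np.random.choice(
                [k for k in range(NP) if k != i], 3, replace=False)
            v = pop[a] + F * (pop[b] - pop[c])
            j_r = np.random.randint(D)
            mask = np.random.rand(D) <= CR
            mask[j_r] = True
            u = np.where(mask, v, pop[i])
        else:
            # Placeholder pollination step; Cauchy draws stand in for Levy.
            u = pop[i] + 0.5 * np.random.standard_cauchy(D) * (pop[i] - best)
        fu = f_obj(u)
        if fu <= fit[i]:
            pop[i], fit[i] = u, fu
            if fu <= f_obj(best):
                best = u.copy()
    return pop, fit, best
```

Greedy replacement keeps every individual's fitness non-increasing, so mixing the two operators cannot degrade the population while the DE branch keeps injecting diversity.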

Outcomes and Discussion
This section assesses the performance of the proposed algorithms using two independent experiments: the first checks their ability to search for the near-optimal solutions of 23 well-known mathematical test functions, and the second employs them to estimate the roots of 27 common NESs. Specifically, this section is organized as follows: Section 5.1 shows the parameter settings and benchmark test functions; Section 5.2 presents validation and comparison on 23 global optimization problems; Section 5.3 presents validation and comparison on 27 NESs.
Regarding the parameters of each compared algorithm, they were assigned at implementation as cited in the published papers. The proposed algorithms MFPA and HFPA, however, have three effective parameters that need to be picked optimally to maximize their performance: p, a, and p_1. After executing different experiments with different values for each parameter on different test functions, it is obvious that the best value for p is 0.4, as shown in Figure 4a,b, and the best values for a and p_1 are 0.8 and 0.5, as shown in Figure 4c,d. The parameter γ of the proposed algorithms is set to 0.5 to increase the step size and thereby strengthen the exploration operator, while the parameters CR and F are set to 0.9 and 0.5, respectively, as described in [99]. All algorithms were executed 30 independent times with a population size of 30 and a maximum iteration of 500 on the same machine to ensure a fair comparison.


Comparison of the Global Optimization
This section compares the performance of the standard FPA, MFPA, and HFPA with each other and with seven well-known swarm and evolutionary algorithms, to see how far our modification of the standard FPA positively affects its performance on 23 well-known unimodal and multimodal functions. Due to the stochastic nature of these algorithms, each was executed 30 independent times, and the best, average (Avg), worst, and standard deviation (SD) of the fitness values obtained by each one were calculated and reported in Tables 3 and 4. Inspecting Tables 3 and 4 shows that both MFPA and HFPA outperform the classical FPA on all the used test functions, affirming that our modification helps the standard algorithm reach regions not reachable by the classical one. Not only does HFPA outperform the standard algorithm, it is also superior or competitive to the rival algorithms and MFPA on 16 out of the 23 test functions, a percentage of about 70%, as shown in Figure 6, in all independent runs. For two of the remaining seven test functions, it reaches a lower value in the best case, raising the total percentage to 78%, as depicted in Figure 6, and its performance is close to the best on the others. This superiority of HFPA is due to preserving the population diversity among the individuals along the optimization process, which helps it avoid becoming stuck in local minima that would prevent it from reaching better outcomes. Based on that, HFPA is a strong alternative to the existing metaheuristic algorithms for tackling optimization problems.
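The per-function statistics reported in Tables 3 and 4 (Best, Avg, Worst, and SD over the 30 independent runs) can be computed directly from the collected fitness values; the helper name below is illustrative.

```python
import statistics

def summarize(fitness_values):
    """Summarize one algorithm's final fitness values over independent
    runs, as in Tables 3 and 4 (minimization: lower is better)."""
    return {
        "Best": min(fitness_values),
        "Avg": statistics.mean(fitness_values),
        "Worst": max(fitness_values),
        "SD": statistics.stdev(fitness_values),  # sample standard deviation
    }
```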

(Table 4 continues with the F9 statistics: all ten algorithms reach the same best value of −9.60 × 10², while their average and worst values differ.) Furthermore, the convergence curve obtained by each algorithm, in log scale, is presented in Figure 7 for nine randomly selected test functions, to show whether any of them needs fewer iterations to reach the optimal solution. This figure shows that MFPA is superior for F2, F3, F6, F12, and F22, while MFPA and HFPA are competitive with each other and superior to the others for the remainder, except F7, on which RUN converges better. This figure, which elaborates the superiority of MFPA for most test functions, affirms that MFPA has a better exploitation operator than HFPA; this might not be effective for some optimization problems, which need higher exploration capabilities to cover as much of the search space as possible to reach the best solution.
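A convergence curve like those in Figure 7 plots the best-so-far fitness against the iteration number (on a log scale). A minimal, dependency-free sketch of how such a curve is recorded during a run (the optimizer loop itself is abstracted away as a stream of per-iteration fitness values):

```python
def record_convergence(step_fitness_values):
    """Turn per-iteration best fitness values into a monotone
    best-so-far curve, i.e. the quantity plotted in Figure 7."""
    curve, best = [], float("inf")
    for f in step_fitness_values:
        best = min(best, f)   # the curve can never increase
        curve.append(best)
    return curve
```

With matplotlib, `plt.semilogy(record_convergence(history))` would reproduce the log-scale presentation used in the figure.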
Finally, to check the speed of the proposed algorithms, Figure 8 shows the average computational cost consumed by each algorithm over all test functions within the independent runs. It affirms that both HFPA and MFPA are almost competitive with PSO and superior to the others, except for EO and FPA, which need less computational cost but perform worse than the proposed algorithms.
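The average computational cost in Figure 8 is the mean wall-clock time per independent run. A sketch of how it can be measured (the harness is ours, not the paper's):

```python
import time

def mean_runtime(optimizer, objective, runs=30):
    """Average wall-clock seconds per run of `optimizer` on `objective`,
    measured with a monotonic high-resolution clock."""
    elapsed = []
    for _ in range(runs):
        t0 = time.perf_counter()
        optimizer(objective)
        elapsed.append(time.perf_counter() - t0)
    return sum(elapsed) / len(elapsed)
```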

Comparison of the NESs
As a case study, the NESs are solved here by the proposed algorithms MFPA and HFPA, and their outcomes are compared with those of eight well-established optimization algorithms to assess their superiority in tackling these equations. The proposed algorithms, in addition to the others, were executed 30 independent times, and the analyzed outcomes are exhibited in Tables 5 and 6. They show that HFPA has superior or equal performance on 18 out of 27 test functions, a percentage of 67%, as found in Figure 9, better than MFPA, which is superior or competitive on only 10 NESs, a success proportion of 37%, making it the second-best algorithm over all independent runs. For the other test functions, in the best case, HFPA is competitive with or superior to the competing algorithms on all employed NESs, a proportion of 100%, as found in Figure 9. This confirms the efficiency of integrating DE with MFPA, which gives this hybrid variant a stronger exploration operator relative to exploitation, preserving the population diversity to prevent stagnation in local minima and hence reach better outcomes along the optimization process.
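To solve a NES with a metaheuristic, the system F(x) = 0 is typically converted into a scalar fitness function, for example the sum of squared residuals, whose global minimum of zero corresponds to a root of the system. This is a common formulation in the NES literature; the paper may use a different aggregation, so the sketch below should be read as illustrative.

```python
def nes_fitness(equations, x):
    """Scalar fitness for a nonlinear equation system: `equations` is a
    list of callables f_i with f_i(x) = 0 at a root of the system. The
    fitness is zero exactly at a root, so minimizing it solves the NES."""
    return sum(f(x) ** 2 for f in equations)

# Illustrative 2-D system (not one of the paper's 27 NESs):
# x0 + x1 - 3 = 0 and x0 - x1 - 1 = 0, with root at (2, 1).
system = [lambda x: x[0] + x[1] - 3,
          lambda x: x[0] - x[1] - 1]
```

Any of the compared algorithms can then minimize `nes_fitness(system, x)` over the search range of the NES.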

Furthermore, the convergence curves obtained by each algorithm, in log scale, are presented in Figure 10 to show whether any of them needs fewer iterations to reach the optimal solution. This figure shows that MFPA is superior for f1, f4, f13, f15, and f17, and HFPA for f2, f3, f7, f10, f11, and f12, while both are competitive with the others for the remaining functions depicted in Figure 10. Moreover, from Figure 10, it is noted that MFPA has better exploitation capability because it reaches the optimal solution in significantly fewer iterations than the other competing algorithms. On f3, however, MFPA has a worse convergence speed than most of the others because its exploration operator, which keeps the population diverse enough to explore several regions throughout the optimization process in the hope of locating the region containing the near-optimal solution, is weakened. On the contrary, HFPA on f3 is superior to all the others in terms of convergence speed, indicating that HFPA balances the exploration and exploitation capabilities to avoid becoming stuck in local minima and to accelerate convergence toward the near-optimal solution.
In addition, Figure 11 affirms that the average computational cost consumed by the proposed algorithms is lower than that of HOA, DE, MPA, RUN, and SMA, and competitive with that of PSO; however, they consume almost twice the time of both EO and FPA, which remains our main future challenge.

Comparison between FPA Variants on NESs
In this section, the proposed algorithms HFPA and MFPA, in addition to the classical FPA, are compared with each other by drawing boxplots of the outcomes obtained by each on the various test functions, as exposed in Figure 12. From this figure, it is obvious that both MFPA and HFPA are better than the classical FPA on all test functions, and that HFPA and MFPA have competitive performance with each other.
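A boxplot like Figure 12 summarizes each algorithm's 30 final fitness values by their quartiles. A dependency-free sketch of those summary statistics (with matplotlib, `plt.boxplot(runs_per_algorithm)` would render the figure itself); the helper name is ours:

```python
import statistics

def five_number_summary(values):
    """Five-number summary underlying one box of a boxplot: minimum,
    lower quartile, median, upper quartile, and maximum."""
    q1, q2, q3 = statistics.quantiles(values, n=4)  # exclusive method
    return {"min": min(values), "Q1": q1, "median": q2,
            "Q3": q3, "max": max(values)}
```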

Conclusions and Future Work
In this paper, the classical FPA was modified to improve its global pollination, exploring more regions of the search space to avoid becoming stuck in local minima. In addition, the local pollination was also modified to enhance the exploitation capability, searching extensively around the best-so-far solution to accelerate the convergence speed in the right direction of the near-optimal solution. This modified variant is abbreviated MFPA. Furthermore, the differential evolution algorithm was integrated with this modified variant in an attempt to develop a new one, namely HFPA, with a stronger exploration operator. The proposed algorithms MFPA and HFPA, the classical FPA, and seven well-known metaheuristic algorithms were extensively assessed using 23 unimodal and multimodal mathematical test functions and 27 widely used nonlinear equation systems, and their outcomes were statistically analyzed and compared with each other. The experimental findings affirm that both MFPA and HFPA are significantly competitive with each other and dramatically superior to the standard FPA. Moreover, these findings show that MFPA and HFPA are superior or competitive to the well-known compared metaheuristic algorithms in terms of final accuracy, computational cost, and convergence speed. Our future work involves proposing binary variants of the two proposed algorithms for tackling the 0-1 knapsack, feature selection, and cryptanalysis of ciphertext problems, in addition to proposing a combinatorial variant to tackle the DNA fragment assembly problem. Moreover, in the future, we will search for a new strategy to balance the exploration and exploitation operators of this modified variant to achieve better outcomes.