A Local Search-Based Generalized Normal Distribution Algorithm for Permutation Flow Shop Scheduling

Abstract: This paper studies the performance of the generalized normal distribution optimization algorithm (GNDO) for tackling the permutation flow shop scheduling problem (PFSSP). Because the PFSSP is a discrete problem and GNDO generates continuous values, the largest ranked value (LRV) rule is used to convert those continuous values into discrete ones, making GNDO applicable to this problem. Additionally, the discrete GNDO is integrated with a local search strategy to improve the quality of the best-so-far solution; this version is abbreviated as HGNDO. Moreover, a further improvement applies the swap mutation operator to the best-so-far solution to avoid getting stuck in local optima and to accelerate the convergence speed; adding it to HGNDO yields a hybrid improved GNDO (HIGNDO). Finally, the local search strategy itself is improved using the scramble mutation operator to utilize each trial as effectively as possible; integrating this improved local search with the improved GNDO produces a third algorithm, abbreviated as IHGNDO. The proposed algorithms are extensively compared with a number of well-established optimization algorithms using various statistical analyses to estimate the optimal makespan for 41 well-known instances in a reasonable time. The findings show the benefits and speedup of both IHGNDO and HIGNDO over all the compared algorithms, in addition to HGNDO.


Introduction
The permutation flow shop scheduling problem (PFSSP) is a critical problem that needs to be solved accurately and effectively to minimize the makespan criterion. Solving this problem involves finding the near-optimal permutation of n jobs to be processed sequentially on a set of m machines so as to minimize the makespan, i.e., the completion time of the last job on the last machine [1]. This problem has significant applications in several fields, especially in industries such as computing design, procurement, and information processing. Because of its practical importance and its nature, which is normally classified as nondeterministic polynomial time (NP)-hard [1][2][3][4][5][6], several exact, heuristic, and meta-heuristic techniques have been extensively employed for solving it; some of them are surveyed in the rest of this section. One such technique, abbreviated ODDE, self-adaptively tuned the crossover rate and accelerated the convergence speed of the algorithm using opposition-based learning; ODDE also used a local search strategy on the best-so-far solution to avoid getting stuck in local minima.
The whale optimization algorithm (WOA), integrated with a local search applied to the best solution and with mutation operators, was suggested by Abdel-Basset et al. [34] as a new variant, namely HWA, for tackling the PFSSP. Broadly speaking, HWA used the NEH algorithm in the initialization step to create 10% of the population with a certain diversity and quality, as an attempt to avoid getting stuck in local minima and reach better outcomes. Afterward, to make WOA applicable to the PFSSP, the LRV rule was used to make the solutions it generates relevant to this problem. Furthermore, it was integrated with two operators, swap mutation and insert-reversed block, to improve diversity and avoid the local minima problem. Finally, to accelerate the convergence speed toward the optimal solution and avoid getting stuck in a local minimum, it was integrated with a fast local search strategy applied to the best-so-far solution.
In [35], Mishra developed a discrete Jaya optimization algorithm for tackling the PFSSP. Because the standard Jaya algorithm was designed for continuous optimization problems, whereas the PFSSP is normally classified as a discrete one, the largest order value rule was used to convert the continuous values into discrete ones relevant to the PFSSP. This discrete Jaya algorithm was verified on a set of well-known benchmarks and compared extensively, under various statistical analyses, with a hybrid genetic algorithm (HGA, 2003), hybrid differential evolution (HDE, 2008), hybrid particle swarm optimization (HPSO, 2008), teaching-learning-based optimization (TLBO, 2014), and a hybrid backtracking search algorithm (HBSA, 2015). These baselines are not up to date, however, and its performance against the optimization algorithms published over the last three years is unknown.
The whale optimization algorithm (WOA) [36], improved using a chaos map and then integrated with the NEH algorithm, has also been proposed for tackling the PFSSP. In detail, the NEH algorithm and the largest ranked value rule are used in the initialization step of the chaos WOA (CWA) to initialize the solutions with better quality. After that, CWA used chaotic maps to avoid getting stuck in local minima and to accelerate the convergence speed, assisted by two other operators, a cross operator and reversal-insertion, to improve its exploration capability. Ultimately, CWA used a local search strategy to improve the quality of the best-so-far solution and thereby its exploitation capability. This algorithm was evaluated on various benchmarks and compared with various optimization algorithms to check its superiority.
Further, a new discrete multi-objective approach based on the fireworks algorithm, abbreviated DMOFWA, has recently been proposed for solving the multi-objective flow shop scheduling problem with sequence-dependent setup times (MOFSP-SDST) [37]. Inside this approach, two machine learning techniques are integrated: the first, opposition-based learning, is used to improve the exploration operator of the standard algorithm to avert entrapment in local minima, and the second, clustering analysis, is used to cluster the firework individuals.
To overcome the expensive computational costs and local minima problems from which most of the above-described algorithms might suffer, we developed a novel discrete optimization algorithm to tackle the PFSSP in a reasonable time compared to some existing techniques. Recently, a new optimization algorithm, namely generalized normal distribution optimization (GNDO), based on normal distribution theory, was developed by Zhang [38] for tackling the parameter extraction problem of the single-diode and double-diode photovoltaic models. Due to its high ability to estimate the parameter values that minimize the error rate between the measured and estimated I-V curves, in this paper we observe its performance for tackling the PFSSP. To make GNDO applicable to the PFSSP, which is a discrete problem in contrast to the continuous problems tackled by the standard GNDO, the largest ranked value (LRV) rule is used to convert the continuous values into job permutations adequate for the PFSSP. Furthermore, this discrete GNDO using the LRV rule is integrated with a local search strategy to avoid getting stuck in local minima and reach better outcomes; this version is named hybrid GNDO (HGNDO). In another attempt to improve the quality of HGNDO, it is integrated with the swap mutation operator applied to the best-so-far solution to promote the exploitation capability; this version is abbreviated as HIGNDO. Finally, to improve the quality of the solutions further, the local search strategy is improved using the scramble mutation operator and then integrated with the improved GNDO to produce a new version named IHGNDO. The proposed algorithms, HGNDO, HIGNDO, and IHGNDO, are verified using 41 well-known instances widely used in the literature and compared with a number of recent well-established algorithms to verify their efficacy using various performance metrics.
The experimental results affirm the superiority of IHGNDO and HIGNDO over the other algorithms in terms of standard deviation, computational cost, and makespan. Generally, our contributions in this work include the following:

• Develop a discrete GNDO using the LRV rule for the PFSSP.
• Improve GNDO using the swap mutation operator to avoid getting stuck in local minima.
• Enhance the local search strategy using the scramble mutation operator to accelerate the convergence speed toward the near-optimal solution.
• Integrate the improved local search strategy and the standard one with the improved GNDO and GNDO for tackling the PFSSP.
• Show experimentally that IHGNDO and HIGNDO are better in terms of standard deviation, computational cost, and final accuracy.
This work is organized as follows: Section 2 explains the PFSSP; Section 3 describes the standard generalized normal distribution optimization algorithm; Section 4 explains the proposed algorithm; Section 5 includes the results and discussion; and Section 6 illustrates our conclusions and future work.

Description of the Permutation Flow Shop Scheduling Problem
The permutation flow shop scheduling problem (PFSSP) assumes that n jobs run sequentially over m machines and asks for the permutation that minimizes the makespan, which is measured in time units such as seconds or milliseconds. Therefore, to solve this problem, the best permutation c* that minimizes the completion time of the last job on the last machine must be accurately extracted. In general, the following points summarize the PFSSP: (1) each job j_b (b = 1, 2, ..., n) can run on each machine only once, where n is the number of jobs; (2) only one job can be executed on a machine i_z (z = 1, 2, ..., m) at a time, with processing time PT, where m is the number of machines; (3) each job j_b has a completion time c on a machine i_z, symbolized as c(j_b, i_z); (4) the processing time of each job comprises the set-up time of the machine and the running time; and (5) each job starts at time 0. In our work, the objective function used by the suggested algorithms to evaluate each solution is the makespan of the job permutation of the ith solution: each permutation extracted by the algorithms is evaluated, and the one with the smaller makespan is considered the best.
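The formal model is not reproduced in this copy, so the following is a minimal sketch of the completion-time recursion implied by points (1)-(5): a job starts on machine z only once it has finished on machine z-1 and machine z is free. `PT` is a hypothetical processing-time matrix indexed by machine and job, and `makespan` is an illustrative helper name, not the paper's notation.

```python
def makespan(permutation, PT):
    """Compute C_max of a job permutation on m machines (flow shop)."""
    m = len(PT)          # number of machines
    c = [0] * m          # c[z]: completion time of the previous job on machine z
    for job in permutation:
        c[0] += PT[0][job]
        for z in range(1, m):
            # a job starts on machine z only when the machine is free AND
            # the job has finished on machine z-1
            c[z] = max(c[z], c[z - 1]) + PT[z][job]
    return c[-1]         # completion time of the last job on the last machine

# Example: 3 jobs on 2 machines
PT = [[3, 2, 4],   # machine 0 processing times for jobs 0, 1, 2
      [2, 3, 1]]   # machine 1 processing times for jobs 0, 1, 2
print(makespan([0, 1, 2], PT))  # → 10
```

The permutation with the smallest returned value is the better schedule, matching the objective described above.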

Standard Algorithm: Generalized Normal Distribution Optimization
Zhang [38] developed a new optimization algorithm based on the normal distribution theory to tackle the parameter estimation problem of Photovoltaic models: single diode model and double diode model; this algorithm is called generalized normal distribution optimization (GNDO). The mathematical model of GNDO is extensively described in the rest of this section.

Exploitation Operator
This operator is utilized to search extensively around the best-so-far solution X* to check whether there are better solutions, as an attempt to accelerate the convergence speed. In GNDO, this operator searches around the generalized mean µ_i, computed using Equation (7) from X*, the current ith solution X_i^t, and the mean M of all solutions at generation t, which is calculated according to Equation (8). GNDO then exploits the solutions around this mean using a step size computed according to Equation (9) to generate a new trial solution T_i^t using Equation (6); this both accelerates the convergence speed and improves the quality of the solutions. T_i^t is carried over to the next generation if its objective value is better than that of X_i^t.
r_1, r_2, λ_1, and λ_2 are four numbers generated randomly in the interval between 0 and 1.
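Equations (6)-(9) are not reproduced in this copy. The sketch below follows the exploitation mechanism of the published GNDO algorithm [38]; the variable names (`mu`, `delta`, `eta`) and the cosine penalty-term branching are therefore a reconstruction under that assumption, not the authors' exact code.

```python
import numpy as np

def gndo_exploit(X, fitness, i, rng):
    """One GNDO exploitation step for solution i (sketch of Eqs. (6)-(9))."""
    x_best = X[np.argmin(fitness)]            # best-so-far solution X*
    M = X.mean(axis=0)                        # mean of all solutions (Eq. (8))
    mu = (X[i] + x_best + M) / 3.0            # generalized mean position (Eq. (7))
    delta = np.sqrt(((X[i] - mu) ** 2 +
                     (x_best - mu) ** 2 +
                     (M - mu) ** 2) / 3.0)    # generalized standard variance (Eq. (9))
    r1, r2 = rng.random(), rng.random()
    lam1 = 1.0 - rng.random(X.shape[1])       # in (0, 1], keeps log() finite
    lam2 = rng.random(X.shape[1])
    if r1 <= r2:                              # penalty term eta (two branches)
        eta = np.sqrt(-np.log(lam1)) * np.cos(2 * np.pi * lam2)
    else:
        eta = np.sqrt(-np.log(lam1)) * np.cos(2 * np.pi * lam2 + np.pi)
    return mu + delta * eta                   # trial solution T_i^t (Eq. (6))
```

The trial replaces X_i^t only if its objective value (makespan, after LRV decoding) is better, as stated above.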

Exploration Operator
However, µ_i may be a local minimum, in which case searching around it is futile for improving the quality of the solutions. Therefore, the exploration operator is used to explore the search space as much as possible to avoid getting stuck in local minima. In mathematical terms, this operator is formulated as follows:
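The exploration formula itself is missing from this copy. The following sketch reconstructs the exploration step of the published GNDO algorithm [38]: three mutually distinct random indices, two difference vectors chosen by fitness comparison, and a random trade-off weight. Treat the names (`p1`, `p2`, `p3`, `beta`) as assumptions rather than the paper's exact notation.

```python
import numpy as np

def gndo_explore(X, fitness, i, rng):
    """One GNDO exploration step for solution i (sketch)."""
    n = len(X)
    # three random indices, all different from each other and from i
    p1, p2, p3 = rng.choice([k for k in range(n) if k != i],
                            size=3, replace=False)
    # difference vectors point from the worse solution toward the better one
    v1 = X[i] - X[p1] if fitness[i] < fitness[p1] else X[p1] - X[i]
    v2 = X[p2] - X[p3] if fitness[p2] < fitness[p3] else X[p3] - X[p2]
    beta = rng.random()                       # trade-off between local/global shares
    lam3 = rng.standard_normal(X.shape[1])    # normally distributed step sizes
    lam4 = rng.standard_normal(X.shape[1])
    return X[i] + beta * np.abs(lam3) * v1 + (1 - beta) * np.abs(lam4) * v2
```

In each generation, GNDO chooses randomly between this operator and the exploitation operator for every solution.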

The Proposed Work
In this section, the initialization step, the swap and scramble mutation operators, and the improved local search that comprise the proposed algorithms are discussed in detail.

Initialization
In the beginning, N solutions, each with n dimensions, are generated and initialized with distinct integers chosen randomly between 0 and n. Those solutions are then evaluated, and the one with the smallest makespan is carried over to the next generation as the best-so-far solution. Once this phase ends, the optimization process starts to optimize the initial solutions and generate new, better ones. However, the updated solutions generated by GNDO are continuous, not discrete as required by the PFSSP, so the largest ranked value (LRV) rule is used to convert the continuous values generated by GNDO into a job permutation. The LRV rule places the job holding the largest value in the updated solution first in the job permutation, the job holding the second-largest value second, and so on. Table 1 presents a simple example illustrating the LRV rule for generating the job permutation from an updated solution T_i^t.
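The LRV conversion described above amounts to a descending argsort of the continuous vector. A minimal sketch, where `lrv_permutation` is an illustrative helper name:

```python
import numpy as np

def lrv_permutation(solution):
    """Largest ranked value (LRV) rule: the position of the largest continuous
    value becomes the first job in the permutation, the second largest the
    second job, and so on."""
    return [int(i) for i in np.argsort(-np.asarray(solution))]

# The largest value 2.1 sits at index 1, so job 1 is scheduled first
print(lrv_permutation([0.7, 2.1, -0.3, 1.4]))  # → [1, 3, 0, 2]
```

Applying this rule after every GNDO update keeps the search in continuous space while the objective is always evaluated on a valid job permutation.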

Swap Mutation Operator
This mutation operator is extensively used for solving permutation problems; it swaps the values of two positions selected randomly from the solution. In the proposed algorithm, this operation is applied to the best-so-far solution at a rate of 0.1 to search for other solutions with a smaller makespan than the current best-so-far one. Figure 1 gives an example of the swap mutation operator: Figure 1a shows the order of the positions before applying the operator, while Figure 1b shows the order after swapping the value in the third position with the value in the seventh position.
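The swap step can be sketched in a few lines; `swap_mutation` is an illustrative helper name, and `rng` defaults to Python's `random` module.

```python
import random

def swap_mutation(permutation, rng=random):
    """Swap the jobs at two randomly chosen, distinct positions,
    returning a new permutation (the input is left unchanged)."""
    child = list(permutation)
    a, b = rng.sample(range(len(child)), 2)  # two distinct positions
    child[a], child[b] = child[b], child[a]
    return child
```

Because exactly two positions change, the move is small and cheap, which is why it suits repeated application to the best-so-far solution.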
p_1, p_2, and p_3 are three indices selected randomly from the population, such that p_1 ≠ p_2 ≠ p_3. The exploration and exploitation operators are swapped randomly during the optimization process.


Scramble Mutation Operator
In this operator, two positions are randomly picked, and the jobs between those two positions are shuffled and re-inserted, as depicted in Table 2.
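The scramble step can be sketched as follows; `scramble_mutation` is an illustrative helper name, and `rng` defaults to Python's `random` module.

```python
import random

def scramble_mutation(permutation, rng=random):
    """Shuffle the jobs between two randomly picked positions (inclusive),
    returning a new permutation (the input is left unchanged)."""
    child = list(permutation)
    a, b = sorted(rng.sample(range(len(child)), 2))  # segment bounds
    segment = child[a:b + 1]
    rng.shuffle(segment)                             # scramble the segment
    child[a:b + 1] = segment
    return child
```

Unlike the swap operator, which moves exactly two jobs, the scramble operator can rearrange a whole segment, producing larger jumps in the permutation space.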


Improved Local Search Strategy (ILSS)
Additionally, in this work, a local search strategy (LSS) is used to explore the solutions around the best-so-far solution in search of better ones. According to a specific probability LSP, this strategy tries each job of the best-so-far solution in every position within that solution, looking for a permutation with a better makespan than the current best-so-far one. The strategy is applied to the best-so-far solution only, because this solution may be so close to the optimal solution that only simple changes are needed to reach it. This local search is integrated with the GNDO improved by the swap mutation operator to generate a version for tackling the PFSSP abbreviated as HIGNDO. In some cases, however, such small changes may consume a large number of iterations without any benefit, so in this research a new addition to the LSS is made to apply larger changes to the best-so-far solution in the hope of finding a better solution. This addition uses the scramble mutation operator together with the LSS to explore more permutations. The improved local search strategy is abbreviated as ILSS, and its steps are listed in Algorithm 1. Algorithm 2 describes the steps of the improved GNDO (IGNDO) using the swap mutation operator, hybridized with the LSS without the scramble mutation operator, to produce the version for tackling the PFSSP known as HIGNDO. A further version using the ILSS with IGNDO is developed to verify the efficacy of our improvement to the LSS; this version is abbreviated as IHGNDO and is depicted in Figure 2.
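Algorithm 1 is not fully reproduced in this copy, so the following is a compact sketch of the ILSS loop as described in the prose: for each job, with probability LSP, try re-inserting it at every position, and apply a scramble step to each trial so that every attempt makes a larger change. The `objective` callback (returning a permutation's makespan) and the function name are assumptions for illustration.

```python
import random

def improved_local_search(best, objective, lsp, rng=random):
    """Sketch of the ILSS: re-insertion trials on the best-so-far solution,
    each augmented with a scramble mutation; only improvements are kept."""
    best = list(best)
    best_fit = objective(best)
    n = len(best)
    for j in range(n):
        if rng.random() >= lsp:              # local-search probability LSP
            continue
        for pos in range(n):
            trial = list(best)
            trial.insert(pos, trial.pop(j))  # try job j at position pos
            a, b = sorted(rng.sample(range(n), 2))
            seg = trial[a:b + 1]             # the scramble step added to the LSS
            rng.shuffle(seg)
            trial[a:b + 1] = seg
            fit = objective(trial)
            if fit < best_fit:               # keep only improving permutations
                best, best_fit = trial, fit
    return best, best_fit
```

Since only improving trials are accepted, the returned makespan can never be worse than that of the input solution.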


Results and Comparisons
In our experiments, the proposed algorithms are extensively validated on three benchmarks commonly used in the literature: (1) the Carlier dataset, with eight instances whose number of jobs ranges between 7 and 14 and whose number of machines lies between 4 and 9 [39]; (2) the Reeves dataset, with 21 instances whose number of jobs ranges between 20 and 75 and whose number of machines between 5 and 20 [40]; and (3) the Heller dataset, with two instances whose number of jobs ranges between 20 and 100, each with 10 machines [41]. Those datasets are taken from [42]; their characteristics, including the number of jobs and machines and the best-known makespan z*, are listed in Table 3. Furthermore, the proposed algorithms are extensively compared with a number of well-established optimization algorithms: the sine cosine algorithm (SCA) [43], salp swarm algorithm (SSA) [44], whale optimization algorithm (WOA) [34], genetic algorithm (GA), equilibrium optimization algorithm (EOA) [45], marine predators optimization algorithm (MPA) [42], and a hybrid tunicate swarm algorithm (HTSA) [46], each integrated with the local search strategy to ensure a fair comparison. Their efficacy is verified in terms of six performance metrics: best relative error (BRE), worst relative error (WRE), average relative error (ARE), average makespan (Avg), standard deviation (SD), and computational cost (Time, in milliseconds (ms)). BRE indicates how close the best-obtained makespan Z_B is to the best-known one, while WRE assesses the remoteness between the worst-obtained makespan Z_W and the best-known one.
Regarding ARE, it is used to show the relative error with respect to the average makespan values within 30 independent runs and the best-known one. Mathematically, ARE is modeled as follows.
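The BRE, WRE, and ARE formulas themselves are not reproduced in this copy; the sketch below is a reconstruction of their standard percentage definitions relative to the best-known makespan z*, consistent with the descriptions above, and the function name is illustrative.

```python
def relative_errors(makespans, best_known):
    """BRE, WRE, and ARE (in %) of a set of makespans obtained over
    independent runs, relative to the best-known makespan z*."""
    bre = (min(makespans) - best_known) / best_known * 100.0   # best run
    wre = (max(makespans) - best_known) / best_known * 100.0   # worst run
    avg = sum(makespans) / len(makespans)
    are = (avg - best_known) / best_known * 100.0              # average run
    return bre, wre, are

# 30-run example compressed to three values for illustration
print(relative_errors([100, 110, 120], 100))  # → (0.0, 20.0, 10.0)
```

A BRE of 0 therefore means the algorithm hit the best-known makespan in at least one run, and BRE = WRE = 0 means it did so in every run.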
The algorithms used in our experiments after integrating the local search are named hybrid SCA (HSCA) [43], hybrid SSA (HSSA) [44], hybrid WOA (HWOA) [34], hybrid GA (HGA), hybrid EOA (HEOA) [45], hybrid MPA (HMPA) [42], and hybrid TSA (HTSA) [46]. The parameters of those algorithms were assigned after extensive experiments. The EOA has two parameters, a_1 (exploration factor) and a_2 (exploitation factor), that need to be accurately estimated; after several experiments for extracting their optimal values, we noted that all observed values for a_2 converged significantly, so it is set to 1 as in the standard algorithm, while a_1, which is responsible for the exploration operator, is assigned a value of 2, estimated after several experiments pictured in Figure 3a. The SSA is a self-adaptive algorithm, since it has no parameters to be assigned before beginning the optimization process. The HSCA has one parameter, a, responsible for determining where the algorithm searches for the near-optimal solution; its value was set to 3, as shown in Figure 3b. The HMPA has one parameter, the scaling factor P, which is set as cited in the standard algorithm, because we found that this parameter has no effect on the performance of the algorithm while solving this problem. Finally, the HTSA has two effective parameters, x_max and x_min, representing the initial and subordinate speeds for social interaction; they are assigned 1 and 2, as described in Figure 3c,d, which depict the outcomes of tuning them using various values. The HGA used values of 0.02 and 0.8 for the mutation and crossover probabilities, respectively, as recommended in [40]. All algorithms were executed with those parameters 30 independent times in the same environment, with a maximum number of iterations of 200 and a population size of 50.


Comparison under Carlier
This section validates the performance of the algorithms on the Carlier instances to show the efficacy of each one. Each algorithm is run 30 independent times on each of the eight instances of the Carlier dataset, and the various performance metrics are then calculated and presented in Table 4, which shows the superiority of IHGNDO, HIGNDO, and HGNDO on most test cases. Broadly speaking, IHGNDO reached the best-known value for all instances and fulfilled a value of 0 for ARE, WRE, BRE, and SD, in addition to outperforming on the time metric for two instances. Meanwhile, HIGNDO fulfilled the best-known values of seven instances within all independent runs but failed to reach the best-known value for the Car04 instance in any run. In addition, HIGNDO was the best on the time metric in five instances. Generally, IHGNDO occupied the first rank for the makespan metric and the second rank, after HIGNDO, in terms of CPU time. Additionally, Figure 4 presents the averages of ARE, WRE, and BRE over all instances, showing that IHGNDO occupies the first rank for WRE and ARE, while it is competitive with the others in terms of BRE. Regarding the SD, average makespan, and time metrics depicted in Figure 5, HIGNDO comes first, before IHGNDO, for the time metric, while IHGNDO is the best for the SD and Avg metrics. Ultimately, Figures 6-8 compare the makespan values obtained by the different algorithms using boxplots. Those figures show the superiority of IHGNDO in terms of the average makespan. From the above analysis, IHGNDO achieves positive outcomes in a reasonable time, which makes it a strong alternative to the existing algorithms developed for tackling the PFSSP.


Comparison under Reeves
In this subsection, the proposed algorithms are verified on the Reeves instances and compared to some state-of-the-art algorithms to show their superiority. After running and calculation, the various metric values are introduced in Tables 5 and 6 to observe the performance of the algorithms. Those tables show the superiority of the proposed algorithms, IHGNDO, HIGNDO, and HGNDO, for most performance metrics in most test cases. To confirm that, Figures 9 and 10 present the average of each performance metric over all instances of the Reeves benchmark; those figures elaborate the superiority of HIGNDO over the others in terms of BRE, ARE, and Avg makespan, while IHGNDO outperforms in terms of SD and comes in the sixth rank for the time metric. Since the proposed algorithms outperform the others in terms of final accuracy in a reasonable time, they are a strong alternative to the existing algorithms adapted for tackling the same problem. In addition, Figures 11-19 show boxplots of the makespan values obtained by the various algorithms on the instances from reC01 to reC17, which confirm the superiority of IHGNDO and HIGNDO in comparison to the others.


Comparison under Heller
Here, the proposed algorithms are compared to the other algorithms on the Heller instances. Table 7 exposes various performance metric values, showing the superiority of IHGNDO in terms of ARE and Z_Avg for the Hel1 instance and its competitiveness with HIGNDO on Hel2 in terms of WRE, ARE, Time, SD, and Z_Avg. Furthermore, Figures 20 and 21 show the averages of WRE, ARE, SD, Time, Avg makespan, and BRE; those figures show that IHGNDO is the best in terms of ARE, WRE, and Avg makespan, that HIGNDO is superior for the Time and SD metrics, and that all algorithms are competitive on the BRE metric. Figures 22 and 23 depict boxplots of the makespan values produced in 30 independent runs on Hel1 and Hel2 by the various optimization algorithms. From those figures, it is concluded that IHGNDO is the best.


Conclusions and Future Work
As a new attempt to produce an algorithm that can tackle the permutation flow shop scheduling problem (PFSSP), in this paper we investigated the performance of a novel optimization algorithm, namely generalized normal distribution optimization (GNDO), for solving this problem. Due to the continuous nature of GNDO and the discreteness of the PFSSP, the largest ranked value (LRV) rule is used to make GNDO applicable to this problem. To improve the performance of the discrete GNDO, a new version, namely hybrid GNDO (HGNDO), is developed by applying a local search strategy to improve the quality of the best global solution. In addition, GNDO is improved by applying the swap mutation operator to the best-so-far solution to find better solutions, and this improvement is integrated with HGNDO to produce a new version, namely HIGNDO. Finally, the scramble mutation operator is integrated with the local search strategy to utilize each attempt made by this local search to improve the best-so-far solution as much as possible; this improved local search is used with the GNDO improved by the swap mutation operator to produce a strong version, abbreviated as IHGNDO, for tackling the PFSSP. To validate the performance of the algorithms accurately, 41 common instances widely used in the literature are employed. Additionally, to check the superiority of the proposed algorithms, they are extensively compared with some well-established, recently published optimization algorithms using various performance metrics. The findings show that HIGNDO and IHGNDO are superior in terms of standard deviation, CPU time, and makespan. Those findings also show that IHGNDO is better than HIGNDO for most performance metrics, which confirms the effectiveness of our improvement to the local search strategy. Our future work involves applying the proposed algorithms to other types of the flow shop scheduling problem.