A Simple and Effective Approach for Tackling the Permutation Flow Shop Scheduling Problem

In this research, a new approach for tackling the permutation flow shop scheduling problem (PFSSP) is proposed. This algorithm is based on the steps of the elitism continuous genetic algorithm, improved by two strategies, and uses the largest rank value (LRV) rule to transform the continuous values into discrete ones, enabling it to solve the combinatorial PFSSP. The first strategy combines the arithmetic crossover with the uniform crossover to give the algorithm a high exploitation capability and to reduce the chance of becoming stuck in local minima. The second re-initializes an individual selected randomly from the population to increase exploration and further avoid local minima. Those two strategies are combined with the proposed algorithm to produce an improved version known as the improved efficient genetic algorithm (IEGA). To increase the exploitation capability of the IEGA, it is hybridized with a local search strategy in a version abbreviated as HIEGA. HIEGA and IEGA are validated on three common benchmarks and compared with a number of well-known robust evolutionary and meta-heuristic algorithms to check their efficacy. The experimental results show that HIEGA and IEGA are competitive with the others on the datasets incorporated in the comparison, namely Carlier, Reeves, and Heller.


Introduction
Recently, the flow shop scheduling problem (FSSP) has attracted the attention of researchers due to its importance in industries such as transportation, procurement, computing designs, information processing, and communication. Because this problem is NP-hard, meaning that finding a solution in polynomial time is intractable, many algorithms have been proposed in the literature to tackle it; some of them are reviewed in the next subsections. Johnson [1] introduced and formulated the FSSP for the first time in 1954. Restricting the problem to at most 3 machines, Johnson was able to solve this restricted case. Afterward, Nawaz et al. [2] proposed a heuristic approach known as the Nawaz-Enscore-Ham (NEH) algorithm for tackling the problem with m machines and n jobs.
Due to the success achieved by the NEH algorithm, researchers have worked on improving its performance or integrating it with other optimization algorithms to tackle this problem [3,4]. Before discussing the optimization algorithms, we start by reviewing the literature devoted to improving the standard NEH heuristic. Kalczynski [5] improved the performance of the NEH algorithm by […]. Many works have since applied evolutionary algorithms to obtain the best permutation for this combinatorial problem. Despite the high ability of evolutionary algorithms to solve this type of problem, the need for operators that improve the performance of genetic algorithms is still open even today. Therefore, within our research, we study the performance of the elitism-based GA (EGA) when integrating the arithmetic crossover with the uniform crossover for tackling the PFSSP. Since the arithmetic crossover operator generates continuous values and the PFSSP is discrete in nature, the LRV rule is applied to transform those continuous values into discrete ones. In addition, to increase the exploration rate of the EGA, an individual selected randomly from the population is re-initialized randomly within the search space of the problem. The combination of the uniform crossover and the arithmetic crossover, together with this re-initialization process, is integrated into the EGA to improve its performance when tackling the PFSSP, in a version abbreviated as IEGA. Additionally, to further improve the performance of IEGA when tackling the PFSSP, it is hybridized with a local search strategy (LSS). In our work, we used a number of well-established optimization algorithms, such as the salp swarm algorithm (SSA) [21], the whale optimization algorithm (WOA) [22], and the sine cosine algorithm (SCA) [23], due to their significant success in solving several optimization problems [23][24][25][26][27][28][29].
Additionally, the hybrid whale algorithm (HWA) [30], as the most competitive algorithm for solving the permutation flow shop scheduling problem, is used in our comparison to see whether the proposed approach can tackle this problem as an alternative to this strong existing algorithm. Furthermore, a genetic algorithm based on the uniform crossover (GA), an elitism genetic algorithm based on the uniform crossover (EGA), and a genetic algorithm based on the order-based crossover (OEGA) are additionally used to assess the efficacy of the proposed approach against some evolutionary algorithms.
Generally, our contributions within this paper are summarized in the following points:
• Using continuous values in the approach instead of discrete values, employing the LRV rule to convert those continuous values into discrete ones, for tackling the PFSSP.
• Combining the uniform crossover and the arithmetic crossover (UAC) to help increase the exploitation capability and reduce the chance of becoming stuck in local minima.
• Proposing a version of the efficient GA, abbreviated as IEGA, improved by a dynamic mutation and crossover probability (DMCP) and UAC for tackling the PFSSP.
• Enhancing IEGA by integrating it with an LSS and the insert-reversed block (IRB) operator for tackling the PFSSP, in a version abbreviated as HIEGA.
• Testing IEGA and HIEGA on the Reeves, Carlier, and Heller benchmarks to check their performance.
The remainder of this paper is structured as follows: Section 2 illustrates the permutation flow shop scheduling problem; Section 3 introduces the proposed algorithms (IEGA and HIEGA) and, in particular, Section 3.7 presents the experimental outcomes, discussion, and comparison of results. Finally, Section 4 draws the conclusions of our work and outlines future work.

The Permutation Flow Shop Scheduling Problem
The permutation FSSP consists of processing n jobs on m machines consecutively, in the same order (permutation) on every machine, under the criterion of minimizing the makespan. Generally, this problem can be summarized in the following points:
• Each job j_b can be run only one time on each machine, b = 1, 2, 3, . . . , n.
• Each machine i_z can process only one job at a time, z = 1, 2, 3, . . . , m.
• Each machine processes a job for a duration known as the processing time, abbreviated as PT.
• The completion time c(j_b, i_z) is the time at which job j_b is completed on machine i_z.
• The processing time of each job is its running time plus the set-up time of the machine.
• At the start, the completion time of each job is 0.
• The PFSSP is solved with the objective of finding the best permutation that minimizes the makespan c*, which is known as the maximum completion time, i.e., the time at which the last job is completed on the final machine.
Mathematically, the PFSSP can be modeled as follows. A permutation refers to a sequence of the jobs processed in the same order on every machine. Given a permutation (j_1, j_2, . . . , j_n), the completion times are computed recursively:
c(j_1, i_1) = PT(j_1, i_1) (1)
c(j_b, i_1) = c(j_{b-1}, i_1) + PT(j_b, i_1), b = 2, . . . , n (2)
c(j_1, i_z) = c(j_1, i_{z-1}) + PT(j_1, i_z), z = 2, . . . , m (3)
c(j_b, i_z) = max(c(j_{b-1}, i_z), c(j_b, i_{z-1})) + PT(j_b, i_z), b = 2, . . . , n; z = 2, . . . , m (4)
The objective of the FSSP is to find the permutation that minimizes the maximum completion time (makespan), defined as:
c* = min over all permutations of c(j_n, i_m) (5)
Equation (5) is the objective function that our proposed algorithm minimizes until the best job permutation is found.
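As a minimal sketch (function and variable names are ours, not from the paper), the makespan of a permutation can be evaluated directly from the completion-time model above: a job starts on a machine only after the previous job on that machine and its own operation on the previous machine have both finished.

```python
def makespan(perm, pt):
    """Makespan c* of a job permutation.

    perm : job indices in processing order.
    pt   : pt[j][z] = processing time PT of job j on machine z.
    """
    m = len(pt[0])
    c = [0] * m  # c[z]: completion time of the last scheduled job on machine z
    for j in perm:
        c[0] += pt[j][0]
        for z in range(1, m):
            # start on machine z only when both the previous job on z and
            # this job's operation on machine z-1 have finished
            c[z] = max(c[z], c[z - 1]) + pt[j][z]
    return c[-1]
```

For instance, with two jobs and two machines, pt = [[3, 2], [1, 4]], the permutation [1, 0] gives a makespan of 7, while [0, 1] gives 9.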

The Proposed Algorithm
In this section, the main steps of the proposed algorithm are discussed in detail. The GA is an approach inspired by the Darwinian theory of natural evolution [31][32][33]. In the GA, a set of N solutions, each known as an individual, is initialized within the search space of the problem. After initialization, the fitness value of each individual is calculated, and a number of the best individuals are selected to generate better individuals in the next generation. Specifically, the genetic algorithm depends on three basic operators: the selection, crossover, and mutation operators.

Initialization
At the start, a population of N individuals is generated, each with n dimensions (one per job), and initialized with distinct random numbers to generate a permutation of the job sequence. After generating the random numbers within each individual, those numbers must be checked to prevent duplication of any number within the same individual. Since the random numbers generated within an individual are continuous, a method is needed to convert them into a job-sequence permutation. According to the study performed by Li and Yin [34], the LRV rule can effectively map continuous values into a job permutation. In LRV, the continuous values are ranked in decreasing order. So that LRV can generate the job permutation without any mistake, duplicated values within each individual must first be removed. For instance, Figure 1a presents a solution with duplicated values; this duplication needs to be removed before LRV can be used to derive the job permutation. Therefore, the duplication is removed by inserting other values not found in this solution. Finally, this solution is mapped into a job permutation by sorting in descending order, as shown in Figure 1c: the index of the largest value in the unsorted solution is placed in the first position of the mapped solution, the index of the second-largest one is inserted into the second position, and so on. After generating each solution and checking it for duplication, it must be evaluated by computing its makespan using Equation (5), which measures its quality in solving the PFSSP in comparison with the others.
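Once duplicates have been removed, the LRV mapping described above amounts to an argsort in descending order. A minimal sketch (our naming):

```python
def lrv(values):
    """Largest rank value (LRV) rule: map a real-valued individual to a
    job permutation. The index of the largest value takes the first
    position of the permutation, the second largest the second, etc."""
    return sorted(range(len(values)), key=lambda i: values[i], reverse=True)
```

For example, `lrv([0.4, 0.9, 0.1])` yields the job permutation `[1, 0, 2]`.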

Selection Operator
The selection operator specifies the way of selecting the parents that will be used to generate the offspring in the next generation. Many selection operators have been proposed; within our research, we use the tournament selection mechanism [35]. In this mechanism, a number K of individuals, where K is known as the tournament size, is chosen at random, and the one with the best fitness is taken as a parent for the next generation. After selecting the parents using the tournament selection operator, the crossover operator is used to generate the offspring of the next generation.
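The tournament mechanism above can be sketched as follows (a minimal version with our naming; fitness is minimized, since it is a makespan):

```python
import random

def tournament_select(population, fitness, k):
    """Pick k distinct individuals at random and return the one with the
    best (lowest, since we minimize makespan) fitness."""
    contestants = random.sample(range(len(population)), k)
    winner = min(contestants, key=lambda i: fitness[i])
    return population[winner]
```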

Crossover Operator
This operator generates the individuals of the next generation under the supervision of the parents selected according to the selection operator. Among all the available crossover operators, we selected the uniform crossover [36] and the arithmetic crossover. In the uniform crossover, a binary vector with a length equal to that of the individuals is created: for each position, a random number in the range [0, 1] is generated, and if this number is smaller than the crossover rate (CR), the current position in this vector is assigned a value of 1; otherwise, it takes a value of 0. A value of 0 indicates that the current position of the offspring is taken from the first parent, while 1 indicates the second parent. This binary vector, called the mask, is used to generate the first offspring; for the second offspring, the values within the mask are flipped (0 becomes 1 and 1 becomes 0). Figure 2a illustrates two offspring, O1 and O2, generated from two parents, P1 and P2, using the uniform crossover. In this figure, the mask M1 is first initialized with 0s and 1s, and its flipped version is shown as M2. O1 is then generated according to M1, and M2 is used to generate O2.
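A minimal sketch of this mask-based operator (our naming):

```python
import random

def uniform_crossover(p1, p2, cr):
    """Uniform crossover: build a 0/1 mask (1 when a random number is
    below CR); 0 copies the gene from the first parent, 1 from the
    second. The flipped mask yields the second offspring."""
    mask = [1 if random.random() < cr else 0 for _ in p1]
    o1 = [b if m else a for a, b, m in zip(p1, p2, mask)]
    o2 = [a if m else b for a, b, m in zip(p1, p2, mask)]
    return o1, o2
```

With cr = 0 the mask is all zeros and the offspring are copies of the parents; with cr = 1 the parents are fully exchanged.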
Regarding the arithmetic crossover, the two parents are used to generate two offspring under the standard arithmetic crossover formula:
O1 = σ P1 + (1 − σ) P2
O2 = σ P2 + (1 − σ) P1
For example, Figure 2b shows the two offspring, O1 and O2, generated from two parents, P1 and P2, under this crossover operator, assuming σ = 0.2.
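A minimal sketch, assuming the standard arithmetic crossover form O1 = σP1 + (1 − σ)P2 and O2 = σP2 + (1 − σ)P1 (the paper's own formula is not reproduced here):

```python
def arithmetic_crossover(p1, p2, sigma=0.2):
    """Arithmetic crossover: each offspring is a convex combination of
    the two parents, weighted by sigma."""
    o1 = [sigma * a + (1 - sigma) * b for a, b in zip(p1, p2)]
    o2 = [sigma * b + (1 - sigma) * a for a, b in zip(p1, p2)]
    return o1, o2
```

Note that the offspring genes are continuous, which is why the LRV rule is needed afterwards to recover a job permutation.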

Mutation Operator
Finally, the mutation operator is applied to each offspring based on a certain probability, known as the mutation probability (MR), in an attempt to generate a better solution and prevent becoming stuck in local minima. MR is kept small so that the GA does not degenerate into a primitive random search. Figure 2c shows an individual before and after applying the mutation.
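The paper does not spell the mutation operator out; as a hypothetical sketch, a common choice for permutation-coded individuals is swap mutation, where each position is swapped with a random other position with probability MR:

```python
import random

def swap_mutation(individual, mr):
    """Hypothetical sketch (operator choice is ours): with probability
    MR per position, swap that gene with another randomly chosen
    position. Swapping preserves the permutation property."""
    child = list(individual)
    for i in range(len(child)):
        if random.random() < mr:
            j = random.randrange(len(child))
            child[i], child[j] = child[j], child[i]
    return child
```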

Combination of Uniform Crossover and Arithmetic Crossover (UAC)
In this part, we combine the uniform crossover and the arithmetic crossover according to a probability CRU that decides how the two individuals are recombined. In this combination, O is the generated offspring, P is the first parent selected using tournament selection, M is the second selected parent, r is a random number that determines the weight of the second parent relative to the generated offspring, and ν determines the weight of P; we recommend ν = 0.8, as discussed later. Algorithm 1 shows the steps of the combination of the uniform crossover and the arithmetic crossover.

Algorithm 1. Uniform Arithmetic crossover (UAC)
1: P // is the first parent selected using tournament selection.
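Since the published formula is not reproduced above, the following is only a hedged sketch of UAC encoding one plausible reading of the description (the per-gene branching on CRU, the weight ν of P, and the random weight r of M are all assumptions):

```python
import random

def uac(p, m, cru, nu=0.8):
    """Hedged sketch of UAC (exact formula assumed, not taken from the
    paper): per gene, with probability CRU apply an arithmetic blend in
    which nu weighs parent P and a fresh random number r weighs parent
    M; otherwise copy the gene uniformly from either parent."""
    offspring = []
    for a, b in zip(p, m):
        if random.random() < cru:
            r = random.random()                # weight of M (assumption)
            offspring.append(nu * a + r * b)   # arithmetic part
        else:
            offspring.append(random.choice((a, b)))  # uniform part
    return offspring
```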

Local Search Strategy (LSS)
The LSS explores the solutions around the best-so-far solution to find a better one. Each job in the best-so-far individual, selected with a certain probability known as LSP (LSP = 0.01, as recommended in [30]), is tried in all positions within this best solution, and the permutation that reduces the makespan in comparison to the original is taken as the new best-so-far solution, as illustrated in Algorithm 2.
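A minimal, self-contained sketch of this insertion-based search (our naming; the acceptance order is an assumption):

```python
import random

def makespan(perm, pt):
    """Makespan of a permutation (pt[j][z] = time of job j on machine z)."""
    c = [0] * len(pt[0])
    for j in perm:
        c[0] += pt[j][0]
        for z in range(1, len(c)):
            c[z] = max(c[z], c[z - 1]) + pt[j][z]
    return c[-1]

def local_search(best, pt, lsp=0.01):
    """Try re-inserting each job (selected with probability LSP) of the
    best-so-far permutation at every position; keep any insertion that
    reduces the makespan."""
    best = list(best)
    best_ms = makespan(best, pt)
    for job in list(best):
        if random.random() >= lsp:
            continue
        base = [j for j in best if j != job]  # permutation without this job
        for k in range(len(base) + 1):
            cand = base[:k] + [job] + base[k:]
            ms = makespan(cand, pt)
            if ms < best_ms:
                best, best_ms = cand, ms
    return best
```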

Time Complexity
In this subsection, the time complexity of HIEGA, the proposed algorithm, is computed in big-O notation to assess its cost when solving the PFSSP. First, let us show the components that determine the running time of one generation:
• The first is the generation of the offspring, which needs a time complexity of O(nN).
• The second is the LRV rule, which needs a time complexity of O(n log n) [37] when the Quicksort algorithm is used; over the whole population, the time complexity of LRV is O(Nn log n).
• The last is the LSS, which needs O(n^2) for a single individual; for all individuals, the time complexity is O(Nn^2).
Finally, summing the time complexities of the previous three components gives
O(nN) + O(Nn log n) + O(Nn^2) = O(Nn^2) (9)
so it is concluded that the time complexity of the proposed algorithm is O(Nn^2).

Experimental Results

In this section, three different widely-used, well-known datasets are employed to justify the effectiveness of our proposed approach. Those datasets are: (1) the Carlier dataset [38] with eight instances; (2) the Reeves dataset, introduced by Reeves [12], which contains 21 instances, only 14 of which are used within our experiments; and (3) the Heller dataset, created by Heller [39], which consists of two instances. The datasets can be found at http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files/flowshop1.txt, and their descriptions are shown in Table 1, which also reports the optimal known makespan, symbolized as Z*. Following studies in the literature [30,34,40], the best-known value for each instance is compared with the outcomes of the proposed algorithm to assess its efficacy. Also, in [30,40], the authors reached a value lower than the best-known value for Hel1 mentioned in [34]. Therefore, in our work, we set the value of Hel1 as reported in most of the literature and show that the proposed algorithm can reach a value lower than the best-known one.
The algorithms used in our experiments are coded in the Java programming language on a device with 32 GB of RAM and an Intel(R) Core(TM) i7-4700MQ CPU @ 2.40 GHz. Our proposed approach is experimentally compared with a number of meta-heuristic and evolutionary algorithms: the salp swarm algorithm (SSA) [21], the whale optimization algorithm (WOA) [22], the sine cosine algorithm (SCA) [23], the hybrid whale algorithm (HWA) [30], a genetic algorithm based on the uniform crossover (GA), an elitism GA based on the uniform crossover (EGA), and a genetic algorithm based on the order-based crossover (OEGA). The genetic algorithms have two important parameters, CR and MR, that significantly affect their performance. To obtain good values for those two parameters, Figure 3b,c are introduced; they show that the best values are 0.8 and 0.02, respectively. Regarding IEGA, there is another parameter, P, that must be accurately picked to reach its best performance. After conducting several experiments with different values for P, shown in Figure 3a, we found that the best value was 0.8. The parameters of SCA, SSA, and WOA are equal to the ones used in the cited papers. Table 2 introduces the parameters of the other compared algorithms. The maximum number of iterations and N are set to 100 and 20, respectively, to ensure a fair comparison with the other algorithms. The block size (BS) of the insert-reversed block operation is set to 5, as recommended in [30]. All the algorithms were run 30 independent times.
Figure 3. (a) Tuning of the P parameter; (b) tuning of the MR parameter; (c) tuning of the CR parameter.

Performance Metric
In our experiments, three performance metrics are used to observe the performance of the compared algorithms: the Worst Relative Error (WRE), the Average Relative Error (ARE), and the Best Relative Error (BRE). They are mathematically formulated as follows:
BRE = (Z_B − Z*) / Z*
ARE = (Z_Avg − Z*) / Z*
WRE = (Z_w − Z*) / Z*
where Z* indicates the best-known result, Z_w is the worst value obtained within the independent runs, Z_Avg is the average of the values obtained within the 30 independent runs, and Z_B is the best value obtained within the independent runs.
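A minimal sketch of the three metrics (our naming), computed from the makespans collected over the independent runs:

```python
def relative_errors(z_star, runs):
    """BRE, ARE and WRE of the makespans obtained over independent runs,
    relative to the best-known makespan Z*."""
    avg = sum(runs) / len(runs)
    bre = (min(runs) - z_star) / z_star   # best relative error
    are = (avg - z_star) / z_star         # average relative error
    wre = (max(runs) - z_star) / z_star   # worst relative error
    return bre, are, wre
```

For example, with Z* = 100 and run makespans [100, 110, 120], BRE = 0, ARE = 0.1, and WRE = 0.2.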

Comparison under Carlier
In this experiment, our proposed algorithm is compared with eight algorithms on the Carlier benchmark to check its superiority. In the following figures, a value of 0 means that the algorithm attained the optimal value. Figure 4a sums the average of the BRE obtained by each algorithm on each instance within 30 independent runs to identify the algorithm with the lowest BRE value. This figure shows that HIEGA and HWA outperform all the others, attaining a value of 0 for BRE, the lowest possible value an algorithm could reach. For the average of the ARE on all the Carlier instances, Figure 4b shows that our proposed algorithm outperforms all the other algorithms with a value of 0.001 and takes the first rank; HWA occupies the second rank with a value of 0.002, IEGA comes third after HIEGA and HWA with a value of 0.012, and SCA comes in the last rank with a value of 0.091. Concerning the mean of the WRE on all the Carlier instances, Figure 4c exposes the superiority of IEGA, with a value of 0.01, over the others, with the exception of HWA, which reached the same value. Tables 3 and 4 show the BRE, ARE, WRE, Z_Avg, and standard deviation (SD) obtained by each algorithm on each Carlier instance. According to these tables, HIEGA outperforms the others on the Car03, Car05, Car06, and Car07 instances in terms of the ARE, WRE, Z_Avg, and SD, while it is competitive with the others on the rest of the instances.

Comparison of Reeves
After proving the superiority of HIEGA and IEGA over the other genetic algorithms on the Carlier benchmark in the previous experiment, in this part they are compared with all the other algorithms on the Reeves benchmark. To measure the performance of the algorithms, each algorithm is executed for 30 independent runs on each Reeves instance, and then the different performance metrics, BRE, WRE, ARE, Z_Avg, and SD, over those runs are introduced in Tables 5-8 for all Reeves instances. For both ARE and Z_Avg, HIEGA outperforms the others in 16 out of 21 instances, ties with HWA in another, and loses in the rest. Likewise, for the WRE, the proposed algorithm is superior to the others on 11 instances and equal on three others, but could not outperform HWA on 7 others. For the BRE, HIEGA achieves the best value on 10 instances and ties with HWA on 7 instances. Regarding the ARE, the average of each algorithm on all the Reeves instances is introduced in Figure 5. Inspecting this figure, we can see the superiority of our proposed algorithm in terms of the average ARE over the entire Reeves benchmark, where it wins with a value of 0.102 as the best one; IEGA comes in the third rank after HIEGA and HWA, while SCA comes in the last rank with a value of 0.179. From this experiment, it is concluded that the proposed algorithm is competitive with HWA, a robust algorithm suggested recently for this problem, and is consequently considered a strong alternative to it for tackling the PFSSP.

Comparison of Heller
This dataset was created by Heller and consists of two instances. In this part, we compare the proposed algorithms with the other algorithms on this dataset. To do so, Figure 6a-c present the average of the BRE (the ratio of the error between the best value obtained by each algorithm within the independent runs and the best-known value on each instance), the average of the ARE, and the average of the WRE, respectively. According to those figures, our proposed algorithm is the best in comparison with the other algorithms in terms of the ARE and WRE, and competitive with HIEGA in terms of the BRE. Moreover, Table 9 shows the BRE, WRE, ARE, Z_Avg, and SD obtained by each algorithm on the two Heller instances, confirming the superiority of the proposed algorithm over the others for the five performance metrics used.

Comparison under CPU Time and BoxPLot
To assess the speed of each algorithm, we calculate the average CPU time needed by each algorithm to finish the instances of Carlier and Heller; this average is introduced in Figure 7a. The figure shows that our proposed algorithm outperforms HWA, IEGA, GA, EGA, and OEGA with a value of 2.59 s, ranking first among them, although SSA, SCA, and WOA are faster. When comparing HIEGA with SSA, SCA, and WOA in terms of CPU time and makespan, our proposed algorithm achieves significantly better outcomes in a reasonable time. In Figure 7b,c, we compare the algorithms using boxplots of the values obtained by each one within 30 independent runs on Hel1 and Hel2, respectively. Inspecting these figures shows that IEGA outperforms all the algorithms except HIEGA and HWA. Also, from these figures, we found that HIEGA overcomes HWA under the boxplots of Hel1 and Hel2. Generally, our proposed algorithms, IEGA and HIEGA, are competitive in comparison with the others.

Conclusions
This work presents the integration of the uniform crossover and the arithmetic crossover (UAC) to enhance the exploitation capability and alleviate the problem of becoming stuck in local minima. The UAC, together with the re-initialization of an individual selected randomly from the population in each iteration, is combined with the EGA to enhance its performance when tackling the PFSSP, a well-known scheduling problem arising in several industrial applications, in a version abbreviated as IEGA. Additionally, we integrate an LSS with the IEGA to strengthen its performance in solving the PFSSP; this version is abbreviated as HIEGA. HIEGA and IEGA are experimentally validated on three well-known benchmarks, Reeves, Heller, and Carlier, and compared with a number of robust evolutionary and meta-heuristic algorithms. On the Carlier instances, the proposed algorithm reaches a value of 0.001 for the ARE; for the Heller instances, it reaches a value of 0.003 for the same metric; and for the Reeves instances, it obtains a value of 0.020 for the ARE. The experimental outcomes show that IEGA and HIEGA are competitive with those algorithms.
Unfortunately, the computational cost of the proposed algorithms is slightly higher than that of some of the algorithms used in the comparison, which is our main limitation. Therefore, in future work, we will work on reducing the running time of the proposed algorithms by integrating them with strategies such as Lévy flight and opposition-based learning, to accelerate convergence toward the best solution in fewer iterations. Additionally, we will extend our proposed algorithms to solve the open shop scheduling problem.