Hybrid Genetic Simulated Annealing Algorithm for Improved Flow Shop Scheduling with Makespan Criterion

Abstract: Flow shop scheduling problems have a wide range of real-world applications in intelligent manufacturing. Since they are known to be NP-hard for more than two machines, we propose a hybrid genetic simulated annealing (HGSA) algorithm for flow shop scheduling problems. In the HGSA algorithm, in order to obtain high-quality initial solutions, the MME algorithm, which combines the MinMax (MM) and Nawaz–Enscore–Ham (NEH) algorithms, was used to generate the initial population. Meanwhile, a hormone regulation mechanism was introduced as the cooling scheme for the simulated annealing (SA) schedule. Using MME initialization, random crossover and mutation, and this cooling scheme, we improved the algorithm's quality and performance. Extensive experiments were carried out to verify the effectiveness of this combination. The results on the Taillard benchmark showed that our HGSA algorithm achieved better performance relative to the best-known upper bounds on the makespan than five state-of-the-art algorithms in the literature. Ultimately, 109 out of 120 problem instances were further improved on the makespan criterion, and the results obtained by HGSA were closer to the theoretical optimal values (upper bounds) of these benchmark instances. Furthermore, the results show that HGSA performs well as the number of jobs and the number of machines increases.


Introduction
The flow shop scheduling problem (FSSP) was first proposed by Johnson in 1954 [1]. Due to its extensive production applications in industries such as food processing, steel manufacturing, plastics production, and chemical manufacturing, the problem has received much attention since it was introduced. Researchers from all over the world have conducted in-depth research on this issue, and in the literature [2,3], they have developed branch-and-bound algorithms to solve the flow shop scheduling problem. However, these exact solution algorithms are only suitable for small-sized scheduling problems, as the calculation time increases exponentially with the problem size. As the FSSP has been proved to be an NP-hard problem [4], exact algorithms are not suitable for large-scale scheduling problems. After that, several heuristic algorithms were proposed. Among them, the Nawaz-Enscore-Ham (NEH) heuristic algorithm [5], proposed by Nawaz et al., was used to solve an FSSP with n workpieces and m machines. The comparison between NEH and 15 other algorithms showed that it can quickly obtain better solutions and is suitable for solving large-scale flow shop scheduling problems. However, the quality of the feasible solutions obtained by the NEH algorithm is not high [6]. After the NEH algorithm was proposed, many researchers combined NEH with other optimization algorithms to achieve better results. Marichelvam et al. [7] combined NEH with the cuckoo search algorithm to form an improved cuckoo search algorithm (ICS), which proved to be more effective at achieving optimal solutions than the other algorithms in the literature. Qi et al. [6] proposed an improved variable neighborhood search algorithm and proved that the NEH heuristic algorithm could generate better initial solutions.
Burdett and Kozan [8] addressed the representation and construction of accurate train schedules by a hybrid job shop approach and proposed a constructive algorithm to construct a schedule using NEH insertion, backtracking, and dynamic route selection mechanisms. The experiment results showed the constructive algorithm required less CPU time and obtained better solutions.
Rathinam [9] used the decision tree (DT) algorithm to minimize the makespan of the FSSP. In addition, Govindan et al. [10] provided a hybrid algorithm (ADE) that combined a scatter search algorithm with a decision tree, which outperformed other existing hybrid algorithms. The main disadvantage is that the tree size of the ADE and DT algorithms increases with the number of jobs and machines in large-scale FSSPs. In [11], three kinds of hybrid discrete artificial bee colony (ABC) algorithm (hDABC1, hDABC2, and hDABC3), with the initial solution generated by MME-A and MME-B and new solutions generated by an adaptation strategy and distribution estimation, were proposed. Pan and Huang [12] developed a hybrid genetic algorithm (GA) based on two different local search strategies (insertion search and insertion search with repair and cut) and an orthogonal-array-based crossover. Gao et al. [13] proposed two constructive heuristics based on the Bertolissi heuristic [14], which uses a job-pair comparison approach to generate a sequence of jobs, combined with a standard-deviation heuristic. The algorithms were integrated with local search strategies to improve the quality of the solution. The results showed the effectiveness of the proposed algorithms, especially for large-scale FSSPs. Nowicki and Smutnicki [15] proposed an approximation algorithm based on a tabu search technique with a specific neighborhood definition to solve the permutation flow shop problem. Sayoti et al. [16] proposed an adaptation of a new approach called the golden ball algorithm (GBA), which was inspired by concepts and strategies of soccer playing. The results showed GBA was able to find the optimal schedule rapidly for small-scale flow shop scheduling problems. In [17], a genetic algorithm integrated with a Hopfield network was proposed and proved to be an effective method to solve NP-hard problems.
Although researchers have developed many algorithms to solve FSSPs, these algorithms still have some shortcomings, such as getting trapped in local optima and high computational cost, especially when solving large-scale problems. Table 1 summarizes some of the algorithms proposed for FSSPs in the literature. Most of these studies demonstrate that combined heuristic and metaheuristic approaches, which unite the advantages of more than one algorithm, are very useful for solving flow shop problems. For instance, Tseng et al. [18] studied a hybrid genetic algorithm for a no-wait flow shop problem. Ding et al. [19] presented an improved iterated greedy algorithm with a tabu-based reconstruction strategy. An iterated greedy algorithm in [20], a high-performing memetic algorithm in [21], and a discrete self-organizing migrating algorithm in [22] have been used for flow shop problems with the minimum makespan criterion and obtained better results. Eddaly et al. [23] proposed a hybrid combinatorial particle swarm optimization (HCPSO) algorithm as a resolution technique for solving this problem. Burdett and Kozan [24] proposed a new multiparent operator which could greatly improve the performance of a GA search.
The GA and the simulated annealing (SA) algorithm are well-known metaheuristics that have been successfully used in flow shop problems. The genetic algorithm has the advantages of strong global optimization ability, fast speed, strong versatility, and easy implementation. However, it has the drawback of poor local search ability, which decreases search efficiency, especially in the late period of optimization. On the other hand, the simulated annealing algorithm has strong local search capabilities that make up for the shortcomings of the genetic algorithm. GA + SA has been used to solve job shop [25], open shop [26], and flexible flow shop problems [27]. However, to the best of our knowledge, there are no reports on its application to flow shop scheduling, especially with improved crossover and mutation operators and adaptive simulated annealing.
Therefore, we propose a hybrid genetic simulated annealing (HGSA) algorithm to solve flow shop scheduling problems. The main contributions of this paper include the following: (1) We propose the novel HGSA algorithm to solve FSSPs. It is characterized by an operational coding in GA and a hormone regulation mechanism in the simulated annealing part of the algorithm. The MME [28] algorithm, combined with the NEH [5] and MinMax (MM) [29] heuristic algorithms, is used for initialization of the population. We use location-based crossover and two-point crossover for the crossover operation, with one of the two being randomly selected. In the mutation process, twors mutation or inversion mutation [30] is randomly employed to mutate the population. After the crossover and mutation operations are completed, the best individuals are retained, and the simulated annealing operation is performed on these solutions. (2) Using the widely adopted Taillard benchmark FSSPs, we conducted extensive experiments and showed that our HGSA algorithm achieved better results than the baseline algorithms to which it was compared in our study. We showed that our HGSA algorithm's high performance for FSSPs is based on its hybrid search strategy, twors/inversion mutation, location-based/two-point crossovers, and their combination with the MME heuristic algorithm for population initialization.
The rest of this paper is organized as follows: Section 2 introduces the definition of the problem and mathematical model of the flow shop scheduling problem, Section 3 describes the HGSA, Section 4 shows the experimental results of HGSA, and Section 5 provides conclusions and future works.

Flow Shop Scheduling Problem Description
The flow shop scheduling problem can be described as follows: There are n jobs with the same process route to be processed, which need to be processed continuously on m machines. The layout of the flow shop is shown in Figure 1. The machining process satisfies the following assumptions: (1) All jobs and machines are available at time zero. (2) There is no prior priority among jobs. (3) Each job has m processing steps. (4) Every process must be processed on a different machine, and the process sequence cannot be changed. (5) Each machine can process only one job at a time, and each workpiece can be machined by only one machine at a time. (6) The machining process cannot be interrupted during processing. (7) All jobs must obey the first-in/first-out rule during processing. Note that the symbols used in this paper and their meanings are shown in Table 2.

Table 2. The meaning of the parameters used in this paper.

Parameter                  Meaning
                           Set of m machines
p_ij                       The processing time when job i is processed on machine j
S_ij                       The starting time when job i is processed on machine j
C_ij                       The finishing time when job i is processed on machine j
π = {π1, π2, ..., πn}      A sequence of jobs
Π                          Set of all job sequences
C_max(π)                   Makespan of one job sequence π

The mathematical model of the FSSP makespan can be expressed as [4]

min C_max(π), π ∈ Π    (1)

Formula (2) is a constraint which indicates that different processes of the same job cannot be processed at the same time. Formula (3) is a machine constraint which indicates that one machine can process only one job at a time. Formula (4) is a time constraint which indicates that the next process of a job cannot start before the current process is completed. Formula (5) indicates that if the subsequent machine j + 1 is in the machining state, the job waits at machine j until machine j + 1 is idle. Formula (6) shows that, for any job i, the completion time is determined by the starting time and the processing time on machine j. Therefore, C_max(π) and C_ij can be calculated from Formulas (7) and (8).
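The completion-time recurrence described above (each operation starts once both the previous job on the same machine and the same job on the previous machine have finished) can be sketched in Python. This is an illustrative sketch, not the authors' Matlab implementation; `p[i][j]` is assumed to hold the processing time of job i on machine j.

```python
def makespan(perm, p):
    """Cmax of a permutation flow shop schedule.

    perm : job processing order (0-indexed job IDs)
    p    : p[i][j] = processing time of job i on machine j
    """
    m = len(p[0])
    # C[j] holds the completion time of the most recently scheduled
    # job on machine j; it is updated in place as jobs are appended.
    C = [0] * m
    for job in perm:
        C[0] += p[job][0]                      # first machine: jobs queue up
        for j in range(1, m):
            # wait for both the previous job on machine j (C[j]) and
            # this job on machine j-1 (C[j-1]) before starting.
            C[j] = max(C[j], C[j - 1]) + p[job][j]
    return C[-1]
```

For example, with two jobs and two machines, `makespan([0, 1], [[3, 2], [1, 4]])` evaluates the order job 0 then job 1.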

Overview of the HGSA Algorithm
Many algorithms have been used to solve flow shop problems, such as the GA, particle swarm optimization (PSO), the SA algorithm, the harmony search (HS) algorithm, ant colony optimization (ACO), ABC, etc. Among these algorithms, the genetic algorithm has the advantages of strong global optimization ability, fast speed, strong versatility, and easy implementation. However, it has the drawback of poor local search ability, which decreases search efficiency, especially in the late period of optimization. Fortunately, the simulated annealing algorithm has strong local search capabilities that make up for the shortcomings of the genetic algorithm. Hence, in this paper, the advantages of the two algorithms are combined. GA is employed to find an optimal or near-optimal solution in the solution space, and SA is then utilized to seek a better one starting from that solution. In addition, in order to improve the search efficiency of SA, an annealing rate method based on the hormone regulation mechanism is used in this paper. The flowchart of the algorithm is shown in Figure 2. The simple pseudocode of HGSA is as follows:

HGSA {Hybrid Genetic Simulated Annealing Algorithm}
Initialize population by MME
while (not stop condition) do
    Step 1: Select the population
    Step 2: Crossover operation
        If (random == 0) location-based crossover
        else two-point crossover

Encoding Representation
In this paper, the job ordering is used as the solution chromosome. The number of genes in the chromosome is equal to the quantity of jobs to be processed, which is n. If there are 10 jobs to be processed, p1 = [3,5,8,7,9,6,4,2,1,10] can be taken as one of the chromosomes in the population. The number in the chromosome indicates the ID of the jobs being machined, and the position in the chromosome indicates the processing order of the jobs.

Initial Population
The initial population quality has a significant impact on the performance of an evolutionary algorithm. Good initial solutions can significantly improve the convergence rate and solution quality of the algorithm [51]. In this paper, the MME algorithm, combined with the NEH [5] and MM [29] heuristic algorithms, is used to generate the initial population. Ronconi [28] proved that the MME algorithm has better performance than the NEH algorithm for solving FSSPs. In [52], it was also proved that the initial solutions obtained by the MME algorithm explored the specific characteristics of the blocking condition, which made it outperform the NEH for minimizing the makespan of the Blocking Flow-Shop Scheduling (BFSS) problem.
The basic flow of the MME algorithm is as follows: (1) Calculate the total machining time required for all the steps of each job. Place the job with the smallest total processing time in the first position of the job ordering, place the job with the second smallest total processing time in the last position, and set i = 2.
(2) The remaining n − 2 jobs are arranged in ascending order according to the label function value of Formula (9). The job that attains a_i is placed at the i-th position of the job sequence; fix its order and record the sequence as π0:

a_i = r × Σ_{j=1..m−1} |p_{i,j} − p_{i−1,j+1}| + (1 − r) × Σ_{k=1..m} p_{i,k}    (9)

In Formula (9), r is a random number in [0, 1], Σ_{j=1..m−1} |p_{i,j} − p_{i−1,j+1}| is the sum of the moduli of the differences between two consecutive jobs on adjacent machines, and the term (1 − r) × Σ_{k=1..m} p_{i,k} gives priority to jobs with a smaller total machining time.
(3) Set i = i + 1. If i < n, go to step 2; otherwise, go to step 4. (4) Exchange the first two jobs in π0 and add them to the machining sequence π1. Calculate the makespan before and after the exchange. Then, keep the sequence with the smaller makespan and fix the order of the two jobs. Denote the sequence as π1 and set k = 2.
(5) Randomly select a job from the unprocessed sequence and insert it into every possible position of π1. Calculate the makespan after the job is added, then select the position that minimizes the completion time.
(6) Delete a job from the unprocessed sequence once its position is determined. Perform step 5 until all job positions are determined to form a new chromosome. Repeat steps 1-6 p times to generate an initial population of p individuals.
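Steps 4-6 above (the NEH-style best-position insertion) can be sketched as follows. This is a simplified illustration, not the authors' implementation: `seed_order` stands in for the MM-ranked sequence π0 produced by steps 1-3, and the makespan evaluator is inlined as a helper.

```python
import random

def neh_insertion(seed_order, p):
    """Build a chromosome by best-position insertion (steps 4-6).

    seed_order : job ordering produced by the MM ranking (pi_0)
    p          : p[i][j] = processing time of job i on machine j
    """
    def cmax(seq):
        # standard flow shop completion-time recurrence
        m = len(p[0])
        C = [0] * m
        for job in seq:
            C[0] += p[job][0]
            for j in range(1, m):
                C[j] = max(C[j], C[j - 1]) + p[job][j]
        return C[-1]

    # Step 4: try the first two jobs in both orders, keep the better one.
    a, b = seed_order[0], seed_order[1]
    partial = min([[a, b], [b, a]], key=cmax)
    # Steps 5-6: insert each remaining job at its best position.
    rest = list(seed_order[2:])
    random.shuffle(rest)          # jobs are picked randomly, as in step 5
    for job in rest:
        candidates = [partial[:k] + [job] + partial[k:]
                      for k in range(len(partial) + 1)]
        partial = min(candidates, key=cmax)
    return partial
```

Calling this p times (with the randomized pick order) yields p distinct chromosomes for the initial population.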

Crossover Operation
In this paper, we randomly select between location-based crossover and two-point crossover, both of which are often used in genetic algorithms. The process is to randomly generate either 0 or 1. If it is 0, the location-based crossover operation is selected; otherwise, the two-point crossover operation is selected. In the following description, P1 and P2 represent two parent individuals, and C1 and C2 represent two child individuals. The specific steps are shown as follows.

Location-Based Crossover
(1) Exchange the genes at several randomly selected positions of P1 and P2. Keep the genes in other locations unchanged.
(2) Perform conflict detection: in each child, delete any gene that duplicates one placed at an exchanged position.
(3) Fill the gene vacancies with the unused genes sequentially. As shown in Figure 3a, there are 10 genes in this chromosome. Positions 1, 3, 5, and 8 are selected for exchange. The genes at the exchange positions of P1 and P2 are exchanged directly to the same positions of C2 and C1. Then, the genes duplicating those at the swap positions are deleted from C1 and C2, and the vacancies are filled with the unused genes.
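The steps above can be sketched in Python as follows. This is an illustrative sketch, not the authors' code; the exchange positions are passed in explicitly here, whereas the algorithm selects them at random.

```python
def location_based_crossover(p1, p2, positions):
    """Location-based crossover on two permutations.

    positions : 0-indexed loci whose genes each child takes
                from the *other* parent.
    """
    def make_child(donor, filler):
        child = [None] * len(filler)
        for i in positions:                  # genes from the other parent
            child[i] = donor[i]
        used = set(child) - {None}
        # conflict detection + sequential fill: remaining slots get the
        # filler parent's unused genes, in their original order.
        gaps = iter(g for g in filler if g not in used)
        for i in range(len(child)):
            if child[i] is None:
                child[i] = next(gaps)
        return child

    return make_child(p2, p1), make_child(p1, p2)
```

Each child is always a valid permutation, since duplicates are filtered out before the vacancies are filled.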

Two-Point Crossover
(1) Two intersections are randomly generated, the gene fragments between the two points are copied from P1 and P2 to the corresponding positions of C1 and C2, and their positions and order are kept unchanged.
(2) The genes contained in the intersections between P1 and P2 are removed, the jobs not included in the intersections are copied to C2 and C1, and their order is maintained.
As shown in Figure 3b, the chromosome contains 10 genes (jobs). Select positions 3 and 6 for crossover. Then, the genes in P1 and P2 located between position 3 and 6 are directly copied to C1 and C2. The repetitive gene is removed, the remaining genes in P2 (white gene fragments in the figure) are sequentially filled into C1 to produce new individuals, and C2 is produced in the same manner.
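The two-point crossover can be sketched similarly. Again this is an illustration rather than the authors' code; the two cut points are given as a half-open 0-indexed range `lo:hi`, whereas the algorithm generates them at random.

```python
def two_point_crossover(p1, p2, lo, hi):
    """Two-point crossover on two permutations.

    The segment [lo, hi) of one parent is copied to the child in place;
    the remaining genes are taken from the other parent in order.
    """
    def make_child(keeper, filler):
        middle = keeper[lo:hi]               # copied segment, position kept
        kept = set(middle)
        rest = iter(g for g in filler if g not in kept)
        return ([next(rest) for _ in range(lo)]
                + middle
                + [next(rest) for _ in range(hi, len(keeper))])

    return make_child(p1, p2), make_child(p2, p1)
```

As in the location-based variant, filtering the copied genes out of the other parent before filling guarantees each child is a valid permutation.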

Mutation Operation
In the genetic algorithm, the role of the mutation operator is to perturb the original chromosome to improve the local search ability and the capability to jump out of local optima. In this paper, two commonly used mutation operations, twors mutation and inversion mutation, are randomly selected [53].

Twors Mutation
(1) Two positions in the chromosome are randomly selected.
(2) The gene at the front position is inserted behind the other gene. The specific operation mode is shown in Figure 4a.

255
(1) Two positions in the gene are randomly selected.

256
(2) The genes between the two positions were reversed, and the other gene positions are 257 unchanged. The specific operation mode is shown in Figure 4b.
In the process of the HGSA algorithm, better individuals obtained from GA are sent to SA for

Inversion Mutation
(1) Two positions in the gene are randomly selected.
(2) The genes between the two positions are reversed, and the other gene positions are unchanged. The specific operation mode is shown in Figure 4b.
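The two mutation operators can be sketched as below. This is an illustrative sketch, not the authors' code; the two positions are passed as parameters here, while the algorithm draws them at random.

```python
def twors_mutation(chrom, i, j):
    """Insert the gene at the front position behind the other gene
    (0-indexed positions)."""
    i, j = min(i, j), max(i, j)
    c = chrom[:]
    gene = c.pop(i)        # remove the front-ranked gene...
    c.insert(j, gene)      # ...and reinsert it just behind position j
    return c

def inversion_mutation(chrom, i, j):
    """Reverse the genes between the two positions (inclusive)."""
    i, j = min(i, j), max(i, j)
    return chrom[:i] + chrom[i:j + 1][::-1] + chrom[j + 1:]
```

Both operators return a new chromosome that is still a permutation of the original, so no repair step is needed.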

Simulated Annealing Operation
In the process of the HGSA algorithm, the better individuals obtained from GA are sent to SA for improvement. The SA operation can help solutions avoid falling into local optima. SA starts from a higher temperature (i.e., the initial temperature) and randomly searches the neighborhood for the global optimum of the objective function while the temperature continuously decreases at a certain annealing rate. Finally, the algorithm ends and outputs the optimal solution when the termination condition is reached. The search efficiency of SA is affected by parameters such as the initial temperature, the neighborhood structure, the annealing rate, and the termination condition. These factors play a significant role in the performance of the HGSA and should be carefully implemented as follows.

Neighborhood Structure
The neighborhood structure directly affects the local search efficiency. This paper adopts the two neighborhood structures most common in production scheduling problems, namely, exchange and insertion [54].
(1) Two-point exchange. Exchange the genes at two randomly generated positions. For example, if the coding of a chromosome is "154837629" and the randomly generated gene positions are 3 and 6, the new chromosome produced by exchanging the genes at these two positions is "157834629".
(2) Insertion. Select two gene positions randomly. The gene at the larger position number is inserted before the gene at the smaller position number, and the gene at the smaller position together with the subsequent genes is shifted back. For example, if the coding of a chromosome is "157834629" and the two randomly selected positions are 2 and 4, the new chromosome obtained after the insertion operation is "185734629".
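The two neighborhood moves can be sketched as follows, using 1-indexed positions as in the examples of this section (an illustrative sketch, not the authors' code).

```python
def swap_move(seq, a, b):
    """Exchange the genes at positions a and b (1-indexed)."""
    c = seq[:]
    c[a - 1], c[b - 1] = c[b - 1], c[a - 1]
    return c

def insert_move(seq, a, b):
    """Insert the gene at the larger position before the gene at the
    smaller position; the genes in between shift one place back."""
    a, b = min(a, b), max(a, b)
    c = seq[:]
    gene = c.pop(b - 1)    # take the gene at the larger position...
    c.insert(a - 1, gene)  # ...and place it before the smaller position
    return c
```

Applying `swap_move` at positions 3 and 6, and then `insert_move` at positions 2 and 4, reproduces the two worked examples above.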

Initial Temperature
The initial temperature T0 of SA should be set properly: a too-high initial temperature wastes calculation time, while a too-low one reduces the efficiency of the global search. This paper uses the following formula to set the initial temperature, where U_max is the maximum value of all feasible solutions generated by the MME algorithm, and U_min is the minimum value of all feasible solutions generated by the MME algorithm.

Annealing Rate
The annealing rate has a significant effect on the efficiency of the simulated annealing algorithm [55]. In order to improve the search efficiency of the algorithm, a new annealing rate method based on the hormone regulation mechanism [27] is used in this paper. The annealing rate can be obtained from Formulas (10)-(12), of which Formula (12) is

F_down(k) = 1 / (1 + k^n)    (12)

where α is a small constant, k represents the number of iterations, n represents the Hill coefficient (n ≥ 1), ΔT is the difference between the current temperature (k + 1) and the previous temperature (k), and ΔT < 0. The effect of the annealing rate function for k = 20 and n = 1, 1.2, 1.5, and 2.5 is shown in Figure 5. In this paper, the value of n is chosen to be 1.5.


The Terminating Condition
In SA, the termination criterion of the annealing procedure consists of the Metropolis sampling stability criterion for the inner loop and a termination criterion for the outer loop. That is, once the neighborhood solution space of the current state has been sufficiently sampled, the inner loop is exited and control returns to the outer temperature loop. The outer-loop termination criterion is also the algorithm's termination criterion. In the algorithm design, a termination temperature is adopted: when the current temperature reaches the termination temperature, the outer loop terminates and the algorithm ends.
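Putting the SA pieces together, the annealing stage can be sketched as below. This is a simplified sketch, not the authors' implementation: a plain geometric cooling factor `cool` stands in for the hormone-regulated annealing rate, and the inner-loop length `L` is an assumed parameter for the Metropolis sampling criterion.

```python
import math
import random

def simulated_annealing(seq, cost, neighbor, T0, Tf, cool=0.95, L=50):
    """Improve one GA individual by simulated annealing.

    seq      : starting permutation (a good individual from GA)
    cost     : objective function (makespan)
    neighbor : returns a random neighbor (swap or insert move)
    T0, Tf   : initial and termination temperatures
    cool     : cooling factor (stand-in for the hormone-regulated rate)
    L        : Metropolis inner-loop length at each temperature
    """
    cur, cur_cost = seq, cost(seq)
    best, best_cost = cur, cur_cost
    T = T0
    while T > Tf:                       # outer loop: termination temperature
        for _ in range(L):              # inner loop: Metropolis sampling
            cand = neighbor(cur)
            delta = cost(cand) - cur_cost
            # accept improvements always, worse moves with prob e^(-delta/T)
            if delta < 0 or random.random() < math.exp(-delta / T):
                cur, cur_cost = cand, cur_cost + delta
                if cur_cost < best_cost:
                    best, best_cost = cur, cur_cost
        T *= cool                       # cool down
    return best, best_cost
```

Because the best solution found so far is tracked separately, the returned cost can never be worse than that of the GA individual passed in.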

Benchmark Selected
In this work, the 120 problems contributed to the OR-Library by E. Taillard [56] were used. The 120 problems, called Ta001, Ta002, ..., Ta120, were proposed by E. Taillard (1993) and are often used to test algorithm performance on flow shop problems. All these benchmarks can be downloaded from: http://mistic.heig-vd.ch/taillard/problemes.dir/ordonnancement.dir/ordonnancement.html.

Computational Complexity
In this section, we derive the computational complexity of the proposed scheme. The meanings of the symbols used are as follows: n is the number of jobs, m is the number of machines, T0 is the initial temperature, Tf is the termination temperature, P is the population size, and G is the number of iterations. First, the complexity of MME is O(n²m). In GA, the complexities of selection and mutation are O(n log n) and O(nm), respectively. The computational complexity of SA is O(Pn²m² log(Tf/T0)). The total computational complexity of HGSA is O(n²m) + G × O(n log n) + G × O(n²m) + G × O(nm) + G × O(Pn²m² log(Tf/T0)). It can be seen that the complexity of the proposed algorithm is related to the population size, the number of iterations, the problem size, and the parameters.

Experimental Results and Analysis
The algorithm in this paper was coded in Matlab. The program running environment was an Intel Core i5, 2.19 GHz CPU, and 8 GB RAM. We selected the 120 classic instances proposed by Taillard as test data; the sizes of the Taillard benchmark range from 20 jobs × 5 machines to 500 jobs × 20 machines. The experimental parameters of the algorithm were set as follows [57,58]: the population size was 100, the crossover probability was 0.9, and the mutation probability was 0.1. In order to compare the performance of the HGSA with different Hill coefficients (n = 1, 1.2, 1.5, 2.5), the average relative percentage deviation (ARPD) values for different Hill coefficients with the other parameters fixed are shown in Table 3. In these experiments, we chose the 200 × 20 and 500 × 20 instances in Taillard as the benchmarks. The results show that the algorithm obtained the minimum ARPD when the Hill coefficient was equal to 1.5, so the value of the Hill coefficient was chosen to be 1.5. In order to analyze the performance of the proposed algorithm, it was compared with the hybrid genetic algorithm (HGA) in [18], the improved iterated greedy algorithm (IIGA) in [19], the iterated greedy algorithm with a referenced insertion scheme (IG-RIS) in [20], the memetic algorithm (MA) in [21], and the discrete self-organizing migrating algorithm (DSOMA) in [22]. Table 4 shows the makespan results of the proposed algorithm and the five compared algorithms on different sizes of the Taillard benchmark. Each data scale has 10 sets of data, and the boldface figures in Table 4 represent the optimal solutions obtained by the algorithms. It can be seen from the table that the algorithm proposed in this paper improved the results of 109 out of 120 instances, which indicates that the best results obtained by HGSA are better than those of the other metaheuristics, reflecting the better search quality of HGSA. Compared with the MA, IG-RIS, and IIGA algorithms, the results of HGSA were greatly improved.
Only in 11 cases with a small problem size were the results of HGSA slightly inferior to those of DSOMA. As the problem scale increased, HGSA showed a more obvious advantage. At the same time, the comparison with the IIGA and HGA algorithms showed that the MME initial population generation method and the SA local search method based on the hormone regulation mechanism could improve the optimization effect of the algorithm more effectively, especially for large-scale instances. Figure 6 is the Gantt chart of Ta001. The chart shows that the makespan obtained by the proposed algorithm was 1324. The optimal schedule we obtained was [3,17,15,16,8,6,9,18,4,2,14,5,7,11,12,10,1,19,13,20]. In order to further compare the advantages and disadvantages of the proposed algorithm and the other algorithms, three measures, average error (AE), improvement percentage (IP), and ARPD, were introduced to compare the effects of each algorithm.
[Table 4: per-instance makespans of the compared algorithms and HGSA; data not reproduced here.]
(1) AE represents the average error between the makespan of the algorithms and the lowest known upper bound values:

AE = (1/K) × Σ_{i=1..K} (z_i − z_opt)    (14)

where z_opt is the known upper bound value for each instance, z_i is the optimal makespan of each algorithm, and K is the number of instances. The average error comparison results of the algorithms are shown in Figure 7. As can be seen from the figure, the AE value of HGSA, 5.58, was the lowest, below the lowest value of 20.516 for the DSOMA algorithm.
(2) IP represents the improvement percentage on makespan of the HGSA algorithm compared with the other algorithms:

IP = (C_X − C_HGSA) / C_X × 100%    (15)

where C_HGSA is the minimum makespan obtained by the HGSA algorithm, and C_X is the minimum makespan obtained by the other algorithm. Figure 8 shows the percentage improvement on makespan of the HGSA algorithm compared to the MA, IG-RIS, IIGA, and DSOMA algorithms. It can be seen from the figure that the superiority of the HGSA algorithm is not obvious compared with the other algorithms when the scale of the benchmark is small. In the 20 × 20 instances, the HGSA algorithm performed slightly worse than the DSOMA algorithm. However, as the instance scale increased, that is, as the number of workpieces and machine tools grew, the advantages of HGSA became increasingly obvious. In the 500 × 20 instances, HGSA achieved improvements of 31.7%, 30.5%, 80.6%, 36.1%, and 71.9% compared to MA, IG-RIS, HGA, DSOMA, and IIGA, respectively. As the scale of the instances increased, the improvement was more obvious, indicating that the algorithm performs better when the schedule is larger.
(3) ARPD represents the average relative percentage deviation between the result obtained by an algorithm and the optimal value:

ARPD = (1/K) × Σ_{i=1..K} (C_xi − C_opti) / C_opti × 100%    (16)

where K is the number of instances of the same scale, C_xi is the makespan of algorithm x on the i-th instance of that size, and C_opti is the optimal value on the i-th instance of that size. Figure 9 shows that the ARPD of HGSA was similar to that of the DSOMA algorithm in the 20 × 20 instances, at approximately 6%. In the other instances, the ARPD values of HGSA, which were all less than 9%, were significantly lower than those of the existing algorithms. This demonstrates that the results obtained by our HGSA algorithm are closer to the theoretical optimal values (upper bounds) of these benchmark instances. Furthermore, the results show that HGSA performs well as the number of jobs and the number of machines increases.
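The three comparison measures can be sketched directly from their textual definitions. This is an illustrative reconstruction; the exact normalizations in Formulas (13)-(16) of the original paper may differ.

```python
def average_error(results, upper_bounds):
    """AE: mean gap between obtained makespans and the known upper bounds."""
    K = len(results)
    return sum(z - z_opt for z, z_opt in zip(results, upper_bounds)) / K

def improvement_percentage(c_hgsa, c_x):
    """IP: relative makespan improvement of HGSA over algorithm X, in %."""
    return (c_x - c_hgsa) / c_x * 100

def arpd(results, optima):
    """ARPD: average relative percentage deviation from the optima, in %."""
    K = len(results)
    return sum((c - c_opt) / c_opt
               for c, c_opt in zip(results, optima)) / K * 100
```

For example, an algorithm whose makespans on two same-size instances are 10% and 5% above the optima has an ARPD of 7.5%.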

Conclusions and Future Work
In this paper, a hybrid genetic simulated annealing algorithm based on the hormone regulation mechanism was designed for the flow shop scheduling problem. In order to ensure the quality of the initial population, the MME algorithm, combined with the NEH and MM algorithms, was used to generate an initial population with a certain quality and diversity. Then, SA based on the hormone regulation mechanism was combined with GA to balance the efficiency and global search ability of the algorithm. Furthermore, randomly selected crossover and mutation methods were introduced in GA to obtain promising results. Through the tests on the Taillard benchmarks, the results of HGSA were compared with MA, IG-RIS, HGA, IIGA, and DSOMA in terms of makespan, AE, IP, and ARPD. The results of our HGSA algorithm were closer to the upper bounds of the Taillard benchmarks, which verified the effectiveness of HGSA. In future research, we plan to add an energy consumption objective to the flow shop problem. We will also adapt our algorithm and apply it to the flow shop scheduling problem with random job arrivals together with an energy saving model. In addition, the proposed algorithm can also be extended to solve hybrid flow shop scheduling and job shop scheduling problems.