No-Wait Job Shop Scheduling Using a Population-Based Iterated Greedy Algorithm

When the no-wait constraint holds in job shops, a job has to be processed with no waiting time from the first to the last operation, and the start time of a job is greatly restricted. Using key elements of the iterated greedy algorithm, this paper proposes a population-based iterated greedy (PBIG) algorithm for finding high-quality schedules in no-wait job shops. Firstly, the Nawaz-Enscore-Ham (NEH) heuristic used for the flow shop is extended to no-wait job shops, and an initialization scheme based on the NEH heuristic is developed to generate start solutions with a certain quality and diversity. Secondly, the iterated greedy procedure is introduced based on the destruction-and-construction perturbator and the insert-based local search. Furthermore, a population-based co-evolutionary scheme is presented by imposing the iterated greedy procedure in parallel and hybridizing both the left timetabling and inverse left timetabling methods. Computational results on well-known benchmark instances show that the proposed algorithm outperforms two existing metaheuristics by a significant margin.


Introduction
No-wait constraints widely exist in the steel-making industry (Pinedo [1]; Tang et al. [2]), concrete manufacturing (Grabowski and Pempera [3]), the chemical and pharmaceutical industries (Rajendran [4]), the food industry (Hall and Sriskandarajah [5]), and so on. The job shop problem with no-wait constraints is called the no-wait job shop scheduling problem (NWJSP), and it differs substantially from the traditional job shop problem (JSP) because of the no-wait constraints. The NWJSP has gained increasing attention from researchers over the decades. With regard to its complexity, the NWJSP is NP (non-deterministic polynomial time)-hard in the strong sense (Lenstra et al. [6]). Sahni and Cho [7] proved that it is strongly NP-hard even for the two-machine case. Mascis and Pacciarelli [8] formulated it as an alternative graph and presented several heuristics and a branch-and-bound method. Broek [9] formulated the problem as a mixed integer program (MIP) and presented a branch-and-bound method. Recently, Bürgy and Gröflin [10] provided a compact formulation of the problem and proposed an effective approach based on optimal job insertion.
Due to the NP-hardness of the NWJSP, the focus has mostly been on metaheuristic approaches. The pioneering work by Macchiaroli et al. [11] decomposed the problem into two sub-problems and proposed a two-phase tabu search algorithm that is superior to dispatching rules. Schuster and Framinan [12] presented a variable neighborhood search (VNS) algorithm and a hybrid of simulated annealing and a genetic algorithm (GASA). Later, Schuster [13] developed a fast tabu search (TS) method, and Framinan and Schuster [14] proposed a complete local search with memory (CLM). Zhu et al. [15] investigated the timetabling methods and developed a complete local search with limited memory (CLLM), which was shown to be comparable to the VNS, GASA and CLM. Zhu and Li [16] also proposed an efficient shift penalty-based timetabling method and further put forward a modified complete local search with memory (MCLM). Mokhtari [17] presented a neuro-evolutionary variable neighborhood search based on the combination of an enhanced variable neighborhood search and an artificial neural network; the algorithm was shown to be applicable and effective for the problem. Very recently, Aitzai et al. [18] proposed a branch-and-bound method and a particle swarm optimization algorithm for the problem. They compared the proposed algorithms with several heuristics but bypassed the other metaheuristics. Li et al. [19] improved the CLLM and developed a complete local search with memory and variable neighborhood structure (CLMMV) algorithm, shown to have effectiveness similar to, and efficiency better than, the CLLM. More recently, Sundar et al. [20] proposed a hybrid artificial bee colony (HABC) algorithm and stated that the HABC outperforms the MCLM, as well as the CLLM.
According to the decomposition scheme in Macchiaroli et al. [11], the NWJSP can be decomposed into a timetabling problem and a sequencing problem. The sequencing problem is to find a processing sequence of an optimal schedule, whereas the timetabling problem is to determine a feasible starting time for each job in the processing sequence. Using this decomposition scheme, the NWJSP can be solved in a similar way to the flow shop scheduling problem. As an effective and efficient procedure, the iterated greedy (IG) algorithm originally presented by Ruiz and Stützle [21] has been applied in various scheduling environments, such as identical parallel machine scheduling [22], the distributed flow shop scheduling problem [23], and the blocking flow shop scheduling problem [24]. The IG has shown clear advantages of fast convergence, good effectiveness, and easy implementation. In this study, we introduce and adapt the IG for the NWJSP. In order to enhance the diversity of the algorithm, we introduce a co-evolutionary scheme and present a population-based IG algorithm.
The rest of the paper is organized as follows. In Section 2, the NWJSP is formulated into two sub-problems: the timetabling problem and the sequencing problem. Section 3 presents the iterated greedy algorithm and the competitive co-evolutionary scheme. Section 4 analyzes the computational results. Finally, concluding remarks are given in Section 5.

Problem Statement
The NWJSP involves a set of machines and a set of jobs which have to be processed on the machines. Each job has its own processing route, namely its own sequence of operations. Each operation is associated with a processing time and a processing machine. The no-wait constraint holds for each job, which means no waiting time is allowed between two consecutive operations of the same job. Besides, we have the following assumptions: (1) all the jobs and machines are available at time zero; (2) at any time, a machine can process at most one job, and a job can be processed on at most one machine; (3) preemption is not allowed; (4) the set-up, release, and transfer time is incorporated in the processing time; (5) no job is allowed to reenter previous machines.
The objective is to find a feasible schedule that minimizes the maximum completion time of all jobs, namely the makespan.

Problem Formulation
In conformity with the description in Zhu and Li [16], the notations in Table 1 are used: ξ_ij denotes the pairs of operations of J_i and J_j processed on the same machine, P_i^k the cumulated processing time of J_i when its operation o_i^k is finished, and L_i the total processing time of J_i. For two jobs J_i and J_j, let u and v be two of their operations processed on the same machine ({u, v} ∈ ξ_ij). Under the no-wait constraint, the completion times of operations u and v are t_i + P_iu and t_j + P_jv, respectively. Operation u is either anterior or posterior to operation v. Therefore, we have t_j + P_jv − p_jv ≥ t_i + P_iu or t_i + P_iu − p_iu ≥ t_j + P_jv, which is equivalent to:

t_j − t_i ≥ P_iu − P_jv + p_jv or t_j − t_i ≤ P_iu − P_jv − p_iu. (1)

Using the above condition (1), the problem with the makespan criterion can be described as follows:

min t_{n+1} (2)
s.t. t_i ≥ 0, i = 1, . . . , n (3)
condition (1) holds for all {u, v} ∈ ξ_ij, 1 ≤ i < j ≤ n (4)

Here t_{n+1} is the start time of a dummy job, representing the makespan. Constraint (3) means that each job starts after time zero. Constraint (4) guarantees that the no-wait requirement is satisfied for any two jobs. Note that the number of constraints in (4) is reduced by using i < j instead of i ≠ j.
For the above operation pair {u, v} ∈ ξ_ij, let M_k denote the machine shared by operations u and v; then P_iu, P_jv, p_iu, p_jv are the same as P_i^k, P_j^k, p_i^k, p_j^k, respectively. Condition (1) can be rewritten as:

t_j − t_i ≥ P_i^k − P_j^k + p_j^k or t_j − t_i ≤ P_i^k − P_j^k − p_i^k. (5)

For two jobs J_i and J_j, if their start time difference t_j − t_i satisfies condition (5), then these two jobs do not conflict on machine M_k.
Obviously, the start times t_i and t_j are feasible if and only if jobs J_i and J_j do not conflict on any machine, which means that t_j − t_i satisfies condition (5) for all machines M_k (k = 1, 2, . . . , m). Therefore, constraint (4) can be described as t_j − t_i ∈ F_ij, where F_ij is the interval set of feasible values of t_j − t_i, obtained by:

F_ij = ∩_{k=1}^{m} { x : x ≥ P_i^k − P_j^k + p_j^k or x ≤ P_i^k − P_j^k − p_i^k } (6)

With the above notations, the NWJSP with the makespan criterion is further formulated as: minimize t_{n+1} subject to t_i ≥ 0 (i = 1, . . . , n) and t_j − t_i ∈ F_ij (1 ≤ i < j ≤ n). L_i and all F_ij can be computed in advance. According to Equation (6), all F_ij can be computed in time O(n²m log m). Each F_ij is an interval set with at most m + 1 intervals.
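To make the precomputation of the interval sets concrete, the following Python sketch builds the feasible set of start-time differences t_j − t_i for a pair of jobs. The function name `feasible_set` and the dictionary representation (keyed by machine index, only for machines used by both jobs) are illustrative assumptions, not the authors' implementation; the logic follows condition (5): the complement of the union of the per-machine forbidden intervals.

```python
import math

def feasible_set(Pi, Pj, pi, pj):
    """Compute F_ij as a sorted list of closed intervals (lo, hi),
    with lo possibly -inf and hi possibly +inf.

    Pi[k], Pj[k]: cumulated processing time of J_i (resp. J_j) when its
    operation on machine k finishes; pi[k], pj[k]: the corresponding
    processing times.  Only machines used by both jobs appear as keys.
    """
    # Forbidden open intervals for t_j - t_i, one per shared machine
    # (the negation of condition (5)).
    forbidden = sorted(
        (Pi[k] - Pj[k] - pi[k], Pi[k] - Pj[k] + pj[k])
        for k in Pi.keys() & Pj.keys()
    )
    # Merge overlapping forbidden intervals.
    merged = []
    for lo, hi in forbidden:
        if merged and lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    # Complement of the merged forbidden set: the feasible set F_ij.
    fij, left = [], -math.inf
    for lo, hi in merged:
        fij.append((left, lo))
        left = hi
    fij.append((left, math.inf))
    return fij
```

With at most m forbidden intervals, the complement has at most m + 1 intervals, matching the bound stated above.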
The existing effective approaches for the problem usually decompose it into the sequencing and timetabling sub-problems. The purpose of the sequencing sub-problem is to find a job sequence, denoted here as π* = (π*[1], . . . , π*[n]), that generates a schedule minimizing the makespan. Clearly, the search space of the sequencing sub-problem has n! solutions. The purpose of the timetabling sub-problem is to find a timetable, ST_π = (t[1], . . . , t[n], C_max(π)), with a minimum C_max(π) based on a given job sequence π = (π[1], . . . , π[n]).

Timetabling Methods
There are several timetabling methods to determine a feasible schedule from a given job permutation. The combinations of different timetabling methods and different sequencing algorithms were extensively studied by Samarghandi et al. [25]. They found that complicated methods are not necessarily superior to simple ones, and some simpler methods prove to be more effective. Deng et al. [26] investigated several timetabling methods for the problem with the total flow time criterion and found that the left timetabling and inverse left timetabling methods are more effective when the algorithm for the sequencing problem is run with the same computational effort. In the left timetabling method, we set t[1] = 0 and compute the minimum t[i] successively for i = 2, . . . , n, subject to (1) t[i] ≥ 0 and (2) π[i] does not conflict with π[j] for all j < i. The inverse left timetabling method is the same as the left timetabling method except that it is performed on the inverse instance. It is based on the fact that a solution for the inverse instance is also applicable to the original instance (see more in Schuster [13] and Zhu et al. [15]).
As stated above, t_j − t_i ∈ F_i,j means the start times of J_j and J_i are feasible. Let S_i,j denote the subset of F_i,j restricted to nonnegative values, so that t_j − t_i ∈ S_i,j means the start times of J_j and J_i are feasible and J_j starts not earlier than J_i. All F_i,j and S_i,j can be computed in advance with time complexity O(n²m log m). Without loss of generality, assume that the permutation is π = (J_1, J_2, . . . , J_n). The left timetabling of π utilizes the precomputed S_1,j (1 < j) and F_i,j (1 < i < j). Then t_j is computed successively from j = 2 to j = n. For job J_j, with the already computed t_i (i < j), the steps to compute t_j are as follows.
Step 1: set t_j − t_1 to the minimum value in the first interval of S_1,j.
Step 2: check F_{j−1,j}, . . . , F_{2,j}, S_{1,j} successively. When checking F_i,j (i > 1) or S_i,j (i = 1), if the incumbent t_j − t_i does not lie in any interval of F_i,j (i > 1) or S_i,j (i = 1), augment t_j to make it satisfied and then check F_{j−1,j}, . . . , F_{2,j}, S_{1,j} successively again. This step is repeated until t_j − t_i satisfies all of F_{j−1,j}, . . . , F_{2,j}, S_{1,j}.
Since there are at most m + 1 intervals in each F_i,j or S_i,j, in the worst case computing t_j requires time O(mj²), and computing all t_j (j = 1, . . . , n) requires O(mn³), which is the worst-case time complexity of the left timetabling. According to Deng et al. [26], the above procedure is effective for both the left timetabling and inverse left timetabling methods.
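The two-step scan above can be sketched as follows. This is a simplified illustration, not the authors' code: intervals are passed as sorted lists of closed intervals (as could be produced from Equation (6)), the special handling of S_1,j is replaced by starting t_j at 0 and only ever increasing it (so t_j ≥ t_1 = 0 holds throughout), and the repeated rescan is expressed as a fixed-point loop.

```python
def left_timetable(perm, F):
    """Left timetabling sketch.  perm: job ids in sequence order.
    F[(i, j)]: sorted list of closed intervals (lo, hi) of feasible
    values of t_j - t_i.  Returns start times in permutation order.
    """
    def raise_to_feasible(x, intervals):
        # Smallest y >= x that lies in one of the (sorted) intervals.
        for lo, hi in intervals:
            if x <= hi:
                return max(x, lo)
        raise ValueError("no feasible value")  # unreachable if last hi is +inf

    t = {perm[0]: 0.0}
    for pos in range(1, len(perm)):
        j = perm[pos]
        tj = 0.0
        stable = False
        while not stable:            # rescan until every pair is satisfied
            stable = True
            for i in (perm[q] for q in range(pos)):
                new = t[i] + raise_to_feasible(tj - t[i], F[(i, j)])
                if new > tj:         # t_j had to be augmented: rescan
                    tj, stable = new, False
        t[j] = tj
    return [t[j] for j in perm]
```

For example, with two jobs of lengths 3 and 2 sharing one machine, F(0,1) = (−∞, −2] ∪ [3, +∞), and the second job is pushed to start at time 3.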

Population-Based Iterated Greedy Algorithm
In this section, we try to introduce and adapt the IG for the NWJSP. We first develop an IG algorithm as a combination of a perturbator and an insertion-based local search. Thereafter, we introduce a co-evolutionary scheme and present a population-based IG (PBIG) algorithm. Most of the algorithms in the existing studies apply one timetabling method, whereas the PBIG in this study uses both the left timetabling and inverse left timetabling methods to enhance the quality of schedule solutions.

Iterated Greedy Procedure
In the framework of IG for permutation flow shop scheduling problem (PFSP), the incumbent solution is updated by iterating over three phases. Firstly, the destruction and construction (DC) operator is used as a perturbator to generate a candidate solution usually different from the incumbent solution. Then an iterative improvement local search is applied to the candidate solution and a new solution is obtained. Finally, an acceptance criterion decides whether the new solution will replace the incumbent one. In this paper, these phases are applied to NWJSP as follows.

Destruction and Construction
The DC operator consists of two phases: a destruction phase and a construction phase. In the destruction phase, d jobs are randomly selected and removed from the incumbent permutation π. In the construction phase, the removed jobs are reinserted, one by one, into π to construct a complete permutation. The procedure of the DC is shown in Algorithm 1, where the final π_F is the candidate solution found by the DC.

Algorithm 1. Destruction and construction (DC).
1: choose d distinct jobs s_1, . . . , s_d randomly and delete them from π, obtaining a sequence π_F with n − d jobs.
2: for i from 1 to d
3:   insert s_i into the n − d + i possible positions of π_F, evaluate the resulting n − d + i sequences, and replace π_F with the best one.
4: endfor
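Algorithm 1 can be sketched in Python as below. The `makespan` argument is a stand-in for whatever evaluation the surrounding algorithm uses (here, a timetabling method applied to the sequence); the function name and signature are illustrative assumptions.

```python
import random

def destruction_construction(pi, d, makespan, rng=random):
    """Sketch of the DC operator (Algorithm 1)."""
    removed = rng.sample(pi, d)                 # destruction: d distinct jobs
    pi_f = [j for j in pi if j not in removed]  # partial sequence with n - d jobs
    for s in removed:                           # construction: greedy reinsertion
        best_seq, best_val = None, float('inf')
        for pos in range(len(pi_f) + 1):        # n - d + i candidate positions
            cand = pi_f[:pos] + [s] + pi_f[pos:]
            val = makespan(cand)
            if val < best_val:
                best_seq, best_val = cand, val
        pi_f = best_seq
    return pi_f
```

The result is always a permutation of the input sequence; only the order changes.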

Local Search
The local search is performed on the candidate solution found by the DC. In the local search of the IG algorithm by Ruiz and Stützle [21], a job s is extracted from the permutation π and inserted into the other n − 1 possible positions. Let π_binsert^s denote the permutation of the best insert move, namely the permutation with the best makespan among the n − 1 permutations. If π_binsert^s is better than π, π is replaced with π_binsert^s. The process is then repeated for another job, and it terminates when no improvement occurs for any job. Deng et al. [26] improved this local search by avoiding redundant search and developed the insert-based local search (IBLS). Here we introduce the IBLS for the makespan criterion; its procedure is shown in Algorithm 2. The IBLS employs a random permutation at the very beginning to make the local search more stochastic.

Algorithm 2. Insert-based local search (IBLS).
1: π_R = a permutation generated randomly
2: i = 1, h = 0
3: while (i ≤ n)
4:   s = the job at position h of π_R
5:   find π_binsert^s
6:   if (π_binsert^s is better than π)
7:     π = π_binsert^s
8:     i = 1
9:   else
10:    i = i + 1
11:  endif
12:  h = (h + 1) % n
13: endwhile
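The IBLS can be sketched as follows, again with `makespan` as a pluggable evaluation function (an illustrative assumption). The counter i is reset to 1 on every improvement, so the search stops only after a full pass over all n jobs without improvement.

```python
import random

def insert_based_local_search(pi, makespan, rng=random):
    """Sketch of the IBLS (Algorithm 2): best-insert moves over the jobs,
    scanned in a randomized order, restarting the counter on improvement."""
    n = len(pi)
    order = pi[:]                  # pi_R: random scan order of the jobs
    rng.shuffle(order)
    best_val = makespan(pi)
    i, h = 1, 0
    while i <= n:
        s = order[h]
        base = [j for j in pi if j != s]
        cand_best, cand_val = pi, best_val
        for pos in range(len(base) + 1):       # best insert move of job s
            cand = base[:pos] + [s] + base[pos:]
            v = makespan(cand)
            if v < cand_val:
                cand_best, cand_val = cand, v
        if cand_val < best_val:
            pi, best_val = cand_best, cand_val
            i = 1                              # improvement: restart counter
        else:
            i += 1
        h = (h + 1) % n
    return pi
```

On a separable surrogate objective the search converges to the unique local (here also global) optimum of the insert neighborhood.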

Initialization
Since the PBIG is a population-based algorithm, a population of p solutions evolves in the algorithm, and each solution is evolved by the IG procedure. The Nawaz-Enscore-Ham (NEH) heuristic (Nawaz et al. [27]) has been shown to be one of the most effective heuristics for flow shop problems, and it has been extensively used to generate initial solutions for flow shop metaheuristics. The NEH heuristic first sequences the jobs in non-increasing order of their total processing time on all machines. Then it constructs a partial solution by considering the first two jobs. Finally, a complete solution is constructed by inserting the remaining jobs one by one into the current partial solution.
To adapt the NEH heuristic to the NWJSP, the evaluation of a partial sequence differs from that in the PFSP: for a partial sequence, the timetabling method is applied to construct a partial timetable as a partial solution. With this, the NEH heuristic is described as follows.
Step 1: sort the jobs in non-increasing order of total processing time to obtain a job order ρ; let σ = (ρ(1)) and k = 2.
Step 2: insert job ρ(k) into all the possible k positions of σ and obtain k tentative partial sequences. Evaluate these partial sequences by applying the timetabling method, and replace σ with the partial sequence that yields the minimum makespan.
Step 3: let k = k + 1. If k ≤ n, go to Step 2; otherwise σ is the final permutation.
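The insertion loop of the adapted NEH heuristic can be sketched as below. Here `makespan` is assumed to evaluate a partial sequence via the timetabling method, and the initial job order is supplied by the caller (sorted by total processing time for the plain NEH, or a random permutation for the NEH_RAN variant described next); both names are illustrative.

```python
def neh(jobs, makespan, order=None):
    """Sketch of the NEH insertion loop adapted to the NWJSP.
    jobs: job ids; makespan: evaluates a (partial) sequence;
    order: pre-sorted job order rho (defaults to `jobs` as given)."""
    rho = order if order is not None else jobs
    sigma = [rho[0]]
    for k in range(1, len(rho)):
        best_seq, best_val = None, float('inf')
        for pos in range(len(sigma) + 1):       # k + 1 tentative positions
            cand = sigma[:pos] + [rho[k]] + sigma[pos:]
            val = makespan(cand)                # partial timetable evaluation
            if val < best_val:
                best_seq, best_val = cand, val
        sigma = best_seq
    return sigma
```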
To employ the NEH heuristic to generate a random solution with good quality, we use a random job permutation as the job order in the first step, which yields a variant of the NEH heuristic called NEH_RAN. It should be noted that both the left timetabling and inverse left timetabling methods can be used in the above NEH heuristic and its variant. In other words, the heuristic can be applied with the left timetabling to the original instance to obtain one solution, and with the inverse left timetabling to the inverse instance to obtain another solution.
To take advantage of both the left timetabling and inverse left timetabling methods, both are applied across the p IG procedures. A Boolean vector m_d = (m_d(1), . . . , m_d(p)) indicates the timetabling method of each IG procedure: m_d(k) = true means that the k-th IG procedure is performed with the left timetabling, otherwise it is performed with the inverse left timetabling. m_d is initialized as (true, false, true, false, . . . ). Then, the p initial solutions of the p IG procedures are generated as follows.
The first is initialized by the NEH heuristic using the left timetabling, and the second is initialized by the NEH heuristic using the inverse left timetabling. Each of the remaining p − 2 initial solutions is initialized by the NEH_RAN using the left timetabling (if m_d(k) is true) or the inverse left timetabling (if m_d(k) is false). Such an initialization strategy not only takes advantage of both timetabling methods, but also constructs initial start solutions with both quality and diversity.

Competitive Co-Evolutionary Scheme
Three best solutions, π L , π I and π G , are stored together in the algorithm. π L is the best solution found by all IG procedures with respect to left timetabling method, while π I is the best solution found by all IG procedures with respect to inverse left timetabling method. π G is the better of π L and π I , namely the global best solution found so far. π L , π I and π G are initialized based on the p initial solutions.
After the p initial solutions are generated, the p IG procedures iterate simultaneously, each evolving according to its own timetabling method, either left timetabling or inverse left timetabling. As the evolution proceeds, the incumbent solutions found by the IG procedures will generally differ, which means some incumbent solutions may be relatively better than others. Therefore, a reasonable assumption is that, when an iteration is accomplished, some advantage should be taken of the relatively better solutions. Based on this assumption, tournament selection is introduced as a competitive strategy. In the tournament selection, three solutions are randomly selected, and the worst one is replaced with a perturbation solution generated by performing the DC on π_L or π_I with parameter D. Let π_k (k = 1, . . . , p) denote the incumbent solution of the k-th IG procedure, and let m_dbest denote the Boolean value indicating the timetabling method of π_G. The competitive strategy is illustrated in Algorithm 3, where rand is a real number randomly generated in [0, 1]. Note that m_dbest = true means that π_G is the same as π_L; otherwise it is the same as π_I. The perturbation solution is generated from the global best solution π_G with probability pb. Considering that π_G should be given more chances than the other best solution, the suggested values of pb are between 0.5 and 1.0.

Algorithm 3. Competitive strategy.
1: randomly select three solutions from all the incumbent solutions and find the worst one, π_k*.
2: if (rand < pb)
3:   π_C := π_G
4:   m_d(k*) := m_dbest
5: else if (m_dbest)
6:   π_C := π_I
7:   m_d(k*) := false
8: else
9:   π_C := π_L
10:  m_d(k*) := true
11: endif
12: perform DC on π_C with parameter D and obtain a perturbation solution π_C*
13: π_k* := π_C*
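The competitive step can be sketched as below. The data layout (a `best` dict holding the left-timetabling best, inverse-timetabling best, global best, and its flag) and all names are illustrative assumptions, not the authors' implementation; `dc` plugs in the destruction-construction perturbator.

```python
import random

def competitive_strategy(pop, md, best, makespan, D, pb, dc, rng=random):
    """Sketch of the competitive co-evolutionary step (Algorithm 3).
    pop: list of p incumbent sequences; md: per-procedure timetabling flags;
    best: {'L': ..., 'I': ..., 'G': ..., 'md_best': bool}."""
    # Tournament: pick three procedures, find the one with the worst incumbent.
    idx = rng.sample(range(len(pop)), 3)
    k_star = max(idx, key=lambda k: makespan(pop[k]))
    if rng.random() < pb:                  # perturb the global best (prob. pb)
        base, md[k_star] = best['G'], best['md_best']
    elif best['md_best']:                  # otherwise perturb the *other* best
        base, md[k_star] = best['I'], False
    else:
        base, md[k_star] = best['L'], True
    pop[k_star] = dc(base, D, makespan, rng)
    return k_star
```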

Procedure of the Population-Based Iterated Greedy (PBIG) Algorithm
Since the details of all components of the PBIG have been given, the whole computational procedure is outlined in Algorithm 4. Such an algorithm is expected to solve the NWJSP with the makespan criterion effectively and efficiently. It should be noted that although the PBIG in this study has a framework analogous to the population-based iterated greedy (denoted PBIG_D here) algorithm in [26], they differ in several facets. Firstly, the PBIG in this study is applied to the makespan criterion, whereas the PBIG_D is designed for the total flow time criterion. Secondly, different optimization criteria give the NWJSP different characteristics, including the distribution features of the solutions and the effects of the timetabling methods; therefore, the PBIG uses both the left timetabling and inverse left timetabling methods, whereas the PBIG_D only employs the left timetabling method. Lastly, the PBIG contains a newly designed competitive scheme, where the perturbation solution is generated from a solution under either of the two timetabling methods, whereas in the competitive mechanism of the PBIG_D, the shaking solution is simply generated from the best solution found so far.

Algorithm 4. The PBIG algorithm.
1: set parameters d, p, D, pb.
2: initialize π_k (k = 1, . . . , p), π_L, π_I, π_G, m_d, m_dbest, Temp.
3: while (not termination)
4:   for (each π_k) // perform each IG procedure
5:     perform the DC operator on π_k and then the IBLS, obtaining a new solution π_k'. If π_k' is better than π_k, let π_k := π_k' and update π_L, π_I, π_G, and m_dbest if necessary.
6:   endfor
7:   perform the competitive strategy.
8: endwhile
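A minimal skeleton of the main loop of Algorithm 4, with the components passed in as callables, might look as follows. This is a structural sketch only: `makespan_for(flag)` is an assumed hook returning the evaluation function for the left (true) or inverse left (false) timetabling, and the best-solution bookkeeping is simplified to a single global best.

```python
def pbig(init_pop, md, makespan_for, dc, local_search, compete, d, iters):
    """Skeleton of the PBIG main loop (Algorithm 4), a sketch."""
    pop = [s[:] for s in init_pop]
    best_seq, best_val = None, float('inf')
    for _ in range(iters):                       # termination: iteration budget
        for k, pi in enumerate(pop):             # each IG procedure in parallel
            mk = makespan_for(md[k])             # its own timetabling method
            cand = local_search(dc(pi, d, mk), mk)
            if mk(cand) < mk(pi):                # acceptance criterion
                pop[k] = cand
            if mk(pop[k]) < best_val:            # track the global best
                best_seq, best_val = pop[k][:], mk(pop[k])
        compete(pop, md)                         # competitive strategy
    return best_seq, best_val
```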

Computational Results and Comparisons
Computational experiments are performed on the following well-known benchmark instances: ft06, ft10, ft20 (Fisher and Thompson [28]), orb01-10 (Applegate and Cook [29]), abz5-9 (Adams et al. [30]), la01-40 (Lawrence [31]), and swv01-20 (Storer et al. [32]). The algorithm is programmed in C++ and run on a PC with an Intel Core(TM) i5-6200 2.3 GHz processor. The acceleration method for the insert neighborhood in [33] is used to save computational effort. The following relative percentage deviation (RPD) is calculated to indicate effectiveness:

RPD = 100 × (C_ALG − C_REF) / C_REF

where C_ALG is the solution obtained by the tested algorithm, and C_REF is the reference solution. In this study, we use the same reference solutions as in [16,20].
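Assuming the standard RPD definition (the formula itself did not survive extraction, but this form is consistent with the negative values reported later for solutions better than the reference), the measure is simply:

```python
def rpd(c_alg, c_ref):
    """Relative percentage deviation (in %): negative when the tested
    algorithm beats the reference solution."""
    return 100.0 * (c_alg - c_ref) / c_ref
```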

Calibration of the PBIG Algorithm
There are four parameters to calibrate for the PBIG, namely p, d, pb, and D. A full factorial Design of Experiments (DOE) [34] is carried out based on the following factors: (1) parameter p tested at four levels: 4, 6, 8, 10; (2) parameter d tested at four levels: 2, 4, 6, 8; (3) parameter pb tested at four levels: 0.6, 0.7, 0.8, 0.9; (4) parameter D tested at four levels: 2, 4, 6, 8. We select seven instances of different sizes: la06, la11, la21, la26, la31, la36, and swv01. Each instance is solved by the algorithm for each combination of the factors with five independent replications. The average RPD (ARPD) obtained by the algorithm is taken as the response variable. The stopping criterion is an elapsed CPU time of not less than 6mn² milliseconds (ms), where n is the number of jobs and m is the number of machines. The multi-factor Analysis of Variance (ANOVA) technique is used to analyze the computational results; the ANOVA results are shown in Table 2. It can be seen from Table 2 that the parameters p, d, and D are statistically significant, whereas the parameter pb is not. The means plots of these factors, together with least significant difference (LSD) 95% confidence intervals, are illustrated in Figure 1. Recall that if the LSD intervals for two means do not overlap, the difference between the two means is statistically significant. Figure 1 suggests that for parameter p, the value 8 is statistically better than 4 and 6, although its difference from 10 is not significant. For parameter d, the value 4 is statistically better than 2 and 8. The values 6 and 8 are relatively better than 2 and 4 for parameter D, while no statistical significance is found for parameter pb. Finally, the parameters of the algorithm are set as p = 8, d = 4, pb = 0.7, and D = 6.


Comparisons with Other Metaheuristics
Among the metaheuristics developed for the NWJSP in the literature, the MCLM [16] and the HABC [20] algorithms are two state-of-the-art approaches; therefore, these two algorithms are compared with the proposed PBIG in this subsection. We bypassed the other algorithms either because they showed inferior performance or because they were hard to compare with the PBIG due to different performance indexes. Like the MCLM and HABC algorithms, the PBIG is applied to 22 small instances and 40 large instances with 20 replications. It should be noted that the MCLM was implemented in Java on a Pentium 4 processor, and its average central processing unit (CPU) time was 6.55 s for the small instances and 654.48 s for the large instances, while the HABC was implemented in C on an identical processor, and its average CPU time was 4.63 and 570.40 s for the small and large instances, respectively. Both the MCLM and the HABC adopted a stopping criterion determined by the current results found by the algorithm, so their computational times were not controllable. This kind of stopping criterion makes it difficult to compare the algorithms fairly. Take instance swv09 as an example: the average CPU time of the MCLM was 448 s whereas that of the HABC was 1531.08 s, so it was difficult to determine which algorithm was better, although the RPD results of the MCLM were worse than those of the HABC. For this reason, the stopping criterion of the PBIG was set as an elapsed CPU time of not less than 3mn² ms for the small instances and 60mn² ms for the large instances, to facilitate comparisons under the same criterion by future researchers. Using this stopping criterion, the PBIG required less average CPU time than the MCLM and HABC. We adopt the original results from [16,20] and do not reimplement the MCLM and HABC. The computational results are given in Tables 3 and 4 for the small and large instances, respectively.
In Tables 3 and 4, T_A denotes the average CPU time (in seconds) over 20 runs. Best denotes the best makespan value among the 20 runs of the corresponding algorithm. RPD_B denotes the RPD value of Best. ARPD denotes the average RPD value over the 20 runs. The best results among the three algorithms are shown in bold for Best, RPD_B, and ARPD. NA denotes that the value was not provided in the original results. The column denoted BKS shows the reference solution used in the RPD. Note that the BKS values in Table 3 are also the optimal solutions of the small instances. Table 3. Results for the modified complete local search with memory (MCLM), hybrid artificial bee colony (HABC) and population-based iterated greedy (PBIG) on the small instances.

Table 3 shows that the PBIG optimally solves all the small instances except Orb05. Further, for all the instances except Orb05 and La01, the PBIG finds the optimal solution in every single run, since the RPD_B and ARPD values are both 0.00. The average RPD_B and ARPD values of the PBIG are both 0.01, which are better than those of the MCLM (0.39, 0.45) and the HABC (0.23, 0.47), while the average computational time of the PBIG (2.55 s) is much less than that of the MCLM (6.55 s) and the HABC (4.63 s). It can be seen clearly from Table 4 that the results obtained by the PBIG and HABC are clearly better than those of the MCLM. Besides, Table 4 illustrates that the PBIG is a competitive algorithm for the large instances, and the average computational time of the PBIG is less than half of that of the HABC. The results also show that, compared with the HABC, the PBIG finds equally good best solutions for 25 instances and better best solutions for 9 instances. The PBIG obtains the same average ARPD value (−0.64) as the HABC, while the average RPD_B value of the PBIG (−1.88) is slightly better than that of the HABC (−1.74), which implies that the peak performance of the PBIG is superior to that of the HABC. We do not perform statistical tests due to the lack of the original data for the MCLM and HABC. However, it can be concluded from Tables 3 and 4 that the proposed PBIG algorithm is a competitive metaheuristic for the problem under consideration, especially for the large instances.

In addition, to illustrate the best scheduling results provided by the PBIG more clearly, Gantt charts are drawn in Figures 2 and 3 for two instances, La11 and La12, respectively. In Figures 2 and 3, the time unit is assumed to be minutes, and the job numbering starts from zero. The start time of each job is marked in red. The processing times of all jobs are taken from the benchmark instances La11 and La12 (Lawrence [31]). For La11, it can be seen from Table 4 that the best solutions obtained by the MCLM and HABC are 1635 and 1627, respectively; compared with them, the PBIG yields a solution with a lower makespan value (1621). For La12, the PBIG also yields a solution with a lower makespan value than the MCLM and HABC.


Conclusions
This study addresses the no-wait job shop scheduling problem using a population-based iterated greedy (PBIG) algorithm. The problem is decomposed into a sequencing problem and a timetabling problem. The Nawaz-Enscore-Ham (NEH) heuristic used for the flow shop is extended to no-wait job shops, and an initialization scheme based on the NEH heuristic is developed to generate start solutions with both quality and diversity. Furthermore, a population-based co-evolutionary scheme is presented by imposing the iterated greedy procedure in parallel and hybridizing both the left timetabling and inverse left timetabling methods. Lastly, the proposed algorithm is compared with two effective metaheuristics from the literature, and its effectiveness is demonstrated by the computational results.
Considering the structural simplicity and high effectiveness of the PBIG, we believe the algorithm is technically feasible for practical production environments. In the future, we will focus on adapting the PBIG algorithm to the multi-objective job shop scheduling problem, as well as the job shop scheduling problem with uncertainty.