A Genetic Hyper-Heuristic for an Order Scheduling Problem with Two Scenario-Dependent Parameters in a Parallel-Machine Environment

Studies on the customer order scheduling problem have been attracting increasing attention. Most current approaches assume that either the component processing times of customer orders on each machine are constant or that all customer orders are available at the outset of production planning. However, these assumptions do not hold in real-world applications. Uncertainty may be caused by multiple issues, including machine breakdowns, changes in the working environment, and worker instability. Motivated by these factors, we introduced a parallel-machine customer order scheduling problem with two scenario-dependent component processing times, due dates, and ready times. The objective was to identify an appropriate and robust schedule that minimizes the maximum of the sum of weighted numbers of tardy orders over the considered scenarios. To solve this difficult problem, we derived several dominance properties and a lower bound for determining an optimal solution. Subsequently, we considered three variants of Moore's algorithm, a genetic algorithm, and a genetic-algorithm-based hyper-heuristic that incorporated seven proposed low-level heuristics to solve this problem. Finally, the performances of all proposed algorithms were evaluated.


Introduction
In many service and manufacturing environments, the product development team independently develops modules for multiple products, and the product design is considered complete after all modules are designed. This production sequence is referred to as the customer order scheduling problem (COSP) in the literature. The COSP is encountered in diverse industries and applications; for instance, in the manufacture of semi-finished lenses [1], in determining the equilibrium of production capacity to solve a practical order rescheduling problem in the steel industry [2], and in a product-service system offering a mix of tangible products and intangible services to meet the personalized needs of customers [3]. For more applications, please refer to a review and classification of concurrent-type scheduling models by Framinan et al. [4].
COSP studies have employed different objective functions. For example, by taking the total completion time of a given set of orders as the criterion, Ahmadi et al. [1] developed constructive heuristics, and Framinan et al. [5] applied both the aforementioned constructive heuristics and metaheuristics to solve the COSP. Later work proposed four variants of the cloud-theory-based simulated annealing (CSA) hyper-heuristic method to solve such problems. Wu et al. [37] applied the scenario concept to a single-machine scheduling problem with sequence-dependent setup times for minimizing the total completion time; they employed a B&B method and developed five variants of the CSA along with five new neighborhood schemes to solve the problem. Kämmerling and Kurtz [38] presented an algorithm to efficiently calculate lower bounds for the binary two-stage robust problem. Furthermore, Ahmadi et al. [1] observed the scenario phenomenon under real-world conditions in the production of plastic lenses: the plastic lens procedures could be conducted by either skilled or semiskilled employees, so the component processing times of an order differed depending on whether the order was executed by a skilled or a semiskilled employee. Additionally, issues pertaining to customers' due dates and release dates in COSPs have rarely been explored. Motivated by these factors, we formulated an m-parallel-machine COSP with two scenario-dependent component processing times, due dates, and ready times. The objective was to identify an appropriate and robust (min-max regret) schedule that minimizes the maximum of the sum of weighted numbers of tardy orders over the considered scenarios. More recently, Wu et al. [39] introduced a branch-and-bound algorithm and two variants of a simulated annealing hyper-heuristic for a two-agent customer order scheduling problem with scenario-dependent component processing times and release dates. Xuan et al.
[40] proposed an exact method, three scenario-dependent heuristics, and a population-based iterated greedy algorithm for a single-machine scheduling problem with a scenario-dependent processing time and due date. For understanding the importance of the criteria, due dates, and release dates in real applications, please refer to Yin et al. [41] for a few production examples involving due date settings.
The contributions of this study can be summarized as follows. (1) This study models real COSPs in practical settings by addressing two scenario-dependent component processing times, ready times, and due dates; this is a new and unexplored problem. (2) The objective function minimizes the maximum total weighted number of tardy orders across the two possible scenarios over all possible permutations, instead of only the total weighted number of tardy orders. (3) Three properties and a lower bound were derived to accelerate the search of an effective B&B method. (4) Moore's algorithm (Moore [42]) was used to construct three heuristics. (5) A hyper-heuristic based on a GA that incorporates seven low-level heuristics was proposed to solve this problem.
The remainder of this study is organized as follows. In the second section, the notation and problem description are presented. In the third section, the derivation of a lower bound and several properties used in the B&B algorithm are described. In the fourth section, three modified variants of Moore's algorithm are introduced. In the fifth section, the GA used herein as well as the GA-based hyper-heuristic that incorporates the proposed seven low-level heuristics are described. The sixth section outlines the parameter tuning and settings. In the seventh section, the performances of all five proposed algorithms are evaluated. In the final section, our conclusions and an outline of future studies on the topic are presented.

Problem Statement
The considered problem can be described as follows. There are $n$ customer orders belonging to $n$ different clients, and each customer order has $m$ components to be processed on $m$ machines; each machine produces only one component. Each order $i$ has an importance weight $w_i$. Because factors causing substantial uncertainty are present, we assume that customer order $i$ has a component processing time $t^s_{iv}$ on machine $M_v$, a ready time $r^s_i$, and a due date $d^s_i$ under scenario $s$, where $s = 1, 2$. Our objective was to formulate a robust policy that minimizes the maximum weighted number of tardy orders over the two scenario-dependent environments. In other words, the aim was to identify a sequence $\sigma^*$ such that $\sigma^* = \arg\min_{\sigma \in \Omega} \max_{s=1,2} \sum_{i=1}^{n} w_i NT^s_i(\sigma)$, where $\Omega$ is the set of all possible permutation schedules, and $NT^s_i(\sigma) = 1$ if customer order $i$ is tardy in scenario $s$ under $\sigma$ and 0 otherwise. When $m = 1$, the problem with a single scenario is NP-hard, as demonstrated by Karp [43]; the same COSP with one scenario was addressed by Lin et al. [28]. Thus, the problem considered in the present study is NP-hard as well.
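As a concrete illustration, the min-max objective above can be evaluated for a given permutation. The following is a hedged sketch, not the paper's code: the names (`Instance`, `objective`) are ours, and we assume a component cannot start before its order's ready time, with the $m$ components of an order processed concurrently on the $m$ machines.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Instance:
    w: List[float]                # order weights w_i
    t: List[List[List[float]]]    # t[s][i][v]: processing time of order i's component on machine v in scenario s
    r: List[List[float]]          # r[s][i]: ready time of order i in scenario s
    d: List[List[float]]          # d[s][i]: due date of order i in scenario s

def objective(schedule, inst):
    """max over scenarios s of sum_i w_i * NT_i^s(schedule)."""
    m = len(inst.t[0][0])
    worst = 0.0
    for s in range(len(inst.t)):
        machine_time = [0.0] * m
        tardy_weight = 0.0
        for i in schedule:                                 # orders in sequence
            finish = 0.0
            for v in range(m):                             # m components run on m machines,
                start = max(machine_time[v], inst.r[s][i])  # but not before r_i^s (assumption)
                machine_time[v] = start + inst.t[s][i][v]
                finish = max(finish, machine_time[v])      # order done when all components done
            if finish > inst.d[s][i]:                      # order i tardy in scenario s
                tardy_weight += inst.w[i]
        worst = max(worst, tardy_weight)
    return worst
```

A robust schedule is then any permutation minimizing `objective` over all permutations, matching the $\arg\min\max$ formulation above.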

Branch-and-Bound Method
Lin et al. [28] addressed the same COSP but considered only one scenario. Following their ideas, we derived a lower bound to enhance the pruning power of the B&B algorithm. Suppose $\sigma = (\delta, \delta^c)$ is a schedule in which $\delta$ denotes a determined partial schedule with $q$ orders and $\delta^c$ denotes the remaining $(n-q)$ unscheduled orders. The completion times of the orders placed after the $k$th position in $\sigma$ are expressed in Equation (1), where $z^s_k$ denotes the completion time of the order scheduled at the $k$th position in the scheduled part, $s = 1, 2$, and $r^s_{(k+1)} \leq \ldots \leq r^s_{(k+n_1)}$ denotes the nondecreasing arrangement of the ready times $r^s_i$. The corresponding bounds follow, where $w_{(1)} = \min\{w_i : i \in \delta^c\}$; $d^s_{\max} = \max\{d^s_i : i \in \delta^c\}$, $s = 1, 2$; $U^s_{\{x>a\}} = 1$ and $U^s_{\{x\leq a\}} = 0$, $s = 1, 2$; and $\{t^s_{(q+i)^*} : i \in \delta^c\}$ denotes the nonincreasing arrangement of $\sum_{v=1}^{m} t^s_{iv}$, $i \in \delta^c$. An inequality can then be derived from Equation (1), from which a lower bound is obtained. To show that $\sigma = (\delta, i, j, \delta')$ is no worse than $\sigma' = (\delta, j, i, \delta')$, we show that $w_i NT^s_i(\sigma) + w_j NT^s_j(\sigma) \leq w_j NT^s_j(\sigma') + w_i NT^s_i(\sigma')$ and $C^s_j(\sigma) \leq C^s_i(\sigma')$ hold for $s = 1, 2$. Moreover, let $(q-1)$ be the number of orders in $\delta$, and let $z^s_v$ be the completion time of the $(q-1)$th order in $\delta$ on machine $M_v$, $v = 1, 2, \ldots, m$ and $s = 1, 2$. By definition, the completion times of order $i$ and order $j$ in $\sigma$ and $\sigma'$ can then be written out explicitly. Based on these expressions, the following properties, each stating conditions under which $\sigma$ is no worse than $\sigma'$, can be obtained to increase the pruning power of a B&B algorithm for the problem under study.
Proof: Details of the proof of Case (i) of Property 1 are as follows.
The completion times of order $O_j$ in sequence $\sigma$ and order $O_i$ in sequence $\sigma'$ can be written explicitly. By applying the condition $r^s_j > r^s_i \geq \max_{v \in \Omega_M}\{z^s_v\}$, we can simplify $C^s_j(\sigma)$ and $C^s_i(\sigma')$. By applying $r^s_i + \max_{v \in \Omega_M} t^s_{iv} > r^s_j$ to (3) and then $r^s_i + \max_{v \in \Omega_M} t^s_{iv} + \max_{v \in \Omega_M} t^s_{jv} < d^s_j$ in succession, it follows that $NT^s_j(\sigma) = 0$. Because the weights $w_i, w_j > 0$, the desired results are obtained. Then, $\sigma$ is no worse than $\sigma'$.
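The way the B&B method uses such properties and the lower bound can be sketched generically. This is a hedged illustration, not the paper's implementation: `prefix_cost` and `lower_bound` are placeholder hooks, and the paper's derived bound and dominance checks would slot into the pruning test below.

```python
def branch_and_bound(n, prefix_cost, lower_bound, incumbent=None):
    """Depth-first B&B over permutations of orders 0..n-1.
    prefix_cost(seq): objective value of a complete sequence.
    lower_bound(prefix, remaining): valid lower bound on any completion of prefix."""
    best_seq = None
    best_val = float("inf") if incumbent is None else incumbent

    def dfs(prefix, remaining):
        nonlocal best_seq, best_val
        if not remaining:                        # leaf: full permutation
            val = prefix_cost(prefix)
            if val < best_val:
                best_seq, best_val = list(prefix), val
            return
        for i in sorted(remaining):              # branch on next order
            prefix.append(i)
            remaining.remove(i)
            if lower_bound(prefix, remaining) < best_val:   # else prune subtree
                dfs(prefix, remaining)
            remaining.add(i)
            prefix.pop()

    dfs([], set(range(n)))
    return best_seq, best_val
```

The best schedule found by the heuristics (used as the incumbent) and a tight `lower_bound` jointly determine how much of the tree is pruned.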

Three Modified Moore's Heuristics
The literature [42] indicates that Moore's algorithm produces an optimal schedule for minimizing the total number of tardy jobs on a single machine. To find near-optimal robust order sequences for the proposed NP-hard problem, we followed the idea of Moore's algorithm and introduced three mixed heuristics that combine it with the scenario-dependent processing times, ready times, and due dates across the two possible scenarios. Notably, Moore's algorithm cannot be applied directly to this model regardless of whether the scenario-dependent parameters are present. In light of the favorable performance of Moore's algorithm on the number-of-tardy-jobs criterion in the classical single-machine setting, a pairwise interchange improvement was applied on top of it. The process flow is as follows: the orders that can be completed on time are scheduled using the earliest-due-date (EDD) rule, followed by an arbitrary sequence σ′ of the orders that are tardy under the resulting schedule σ_M; σ_M is then improved by the pairwise interchange method, and the final solution is output.
The Moore_pi_m heuristic is defined analogously, with a different aggregation of the scenario-dependent parameters. Note that the complexity of Moore's algorithm can be found in [42].
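For reference, the classical single-machine Moore (Moore-Hodgson) procedure that underlies the three variants can be sketched as follows. This is a textbook sketch with names of our choosing; the paper's scenario-dependent adaptations and the pairwise interchange step are not shown.

```python
def moore(p, d):
    """Moore-Hodgson: minimize the number of tardy jobs on one machine.
    p[i], d[i]: processing time and due date of job i.
    Returns the list of on-time jobs (the remaining jobs are tardy)."""
    jobs = sorted(range(len(p)), key=lambda i: d[i])   # EDD order
    on_time, t = [], 0
    for i in jobs:
        on_time.append(i)
        t += p[i]
        if t > d[i]:                                   # latest job would be tardy:
            j = max(on_time, key=lambda k: p[k])       # remove the longest job so far
            on_time.remove(j)
            t -= p[j]
    return on_time
```

The on-time jobs are scheduled first in EDD order and the removed (tardy) jobs are appended in arbitrary order, exactly as in the process flow described above.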

A Genetic and a Genetic Hyper-Heuristic
Most researchers have observed that, in general, a heuristic is relatively simple and easy to construct, whereas a metaheuristic is more complex and more difficult both to construct and to use for intelligently implementing random search strategies [44,45]. The GA has been successfully utilized to obtain high-quality approximate solutions for many combinatorial problems. It is an effective computerized search tool for identifying near-optimal solutions to complex problems based on genetic and natural selection mechanisms such as mutation, crossover, and reproduction, and these mechanisms have been used to successfully solve numerous NP-hard combinatorial problems. In the GA and the GAHH (a hyper-heuristic based on the GA framework), we applied a group of continuous real numbers to encode the orders by using a random-number encoding method. For example, the chromosome (0.73, 0.62, 0.14, 0.23, 0.81) is decoded as the schedule (3, 4, 2, 1, 5) by the ranking method. In the reproduction stage, parents are selected and recombined by a crossover operator to create offspring; in this study, we used a linear order crossover (see Iyer and Saxena [46]). Moreover, the notations Pop, P, and IT_GA denote the population size, the mutation rate, and the number of iterations (generations), respectively, in executing the GA. The main structure of the proposed GA is summarized as follows:
Steps of the genetic algorithm:
00: Input Pop, P, and IT_GA.
01: Generate Pop initial parents (schedules) and compute their fitness values.
02: Do i = 1, IT_GA
03: Choose two parents from the Pop population by the roulette wheel method and apply a linear order crossover to reproduce a set of Pop offspring.
04: For each offspring, generate a random number u (0 < u < 1); if u < P, create a new offspring by applying a displacement mutation.
05: Record the best schedule found and replace the Pop parents with their offspring.
06: End do /* until the number of iterations (IT_GA) is reached */
07: Output the final best schedule and its fitness value.
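The random-key decoding used above can be sketched in a few lines; the function name is illustrative. Genes are reals in (0, 1), and the schedule lists the order positions by ascending gene value, which reproduces the example in the text.

```python
def decode(chromosome):
    """Random-key decoding: rank gene values ascending; smallest gene first.
    Returns the schedule as 1-indexed order positions."""
    order = sorted(range(len(chromosome)), key=lambda i: chromosome[i])
    return [i + 1 for i in order]
```

A practical advantage of this encoding is that standard real-valued crossover and mutation operators always yield a decodable chromosome, so no repair step is needed.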
In the following, a GA-based hyper-heuristic is applied to solve this problem by identifying problem-solving methods instead of directly finding solutions to the problem (refer to Cowling et al. [47] and Anagnostopoulos and Koulinas [48]). A hyper-heuristic couples a high-level strategy with a group of low-level heuristics, and the high level determines which low-level heuristic to apply to produce a new solution. Moreover, seven low-level heuristics are proposed on the basis of candidate variation operators: the two-order swap heuristic, the one-step (or two-step) to the right heuristic, the one-step (or two-step) to the left heuristic, the pulling-out and onward-moved reinsertion heuristic, and the pulling-out and backward-moved reinsertion heuristic. Many studies have indicated that the two-job swap represents an effective improvement scheme. To explore diverse solutions, the randomly determined neighborhoods of a current solution must be explored as well. The seven heuristics are denoted LH_1, LH_2, ..., and LH_7. Their details are as follows. LH_1 (two-order swap heuristic): randomly select two orders (e.g., O_2 and O_4) in a schedule σ and swap them, resulting in a new schedule σ′. LH_2 (one step to the right heuristic): randomly select one order (e.g., O_2) in a schedule σ, extract it from its position, move it one position to the right, and reinsert it to obtain a new schedule σ′. The remaining operators are defined analogously. Notably, as the value of n increases (for example, when n > 10), the operators LH_6 and LH_7 behave differently from the other five heuristics, especially when n = 100 or 200.
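Three of the seven low-level heuristics described above can be sketched as follows. This is a hedged illustration: the function names are ours, and for the reinsertion operator (shown here as a stand-in for LH_6) we assume the extracted order may be reinserted at any position, which is our reading of the verbal description.

```python
import random

def lh1_swap(seq, rng=random):
    """LH_1: swap two randomly chosen orders."""
    a, b = rng.sample(range(len(seq)), 2)      # two distinct positions
    seq = list(seq)
    seq[a], seq[b] = seq[b], seq[a]
    return seq

def lh2_right_shift(seq, rng=random):
    """LH_2: extract an order and move it one position to the right."""
    i = rng.randrange(len(seq) - 1)            # last order cannot shift right (assumption)
    seq = list(seq)
    seq.insert(i + 1, seq.pop(i))
    return seq

def lh6_pull_out_reinsert(seq, rng=random):
    """Pull-out and reinsertion (our reading of LH_6): extract an order
    and reinsert it at an arbitrary position."""
    i = rng.randrange(len(seq))
    j = rng.randrange(len(seq))
    seq = list(seq)
    seq.insert(j, seq.pop(i))
    return seq
```

Each operator returns a new permutation of the same orders, so they can be chained freely by the high-level strategy.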
In what follows, the genetic algorithm hyper-heuristic, labeled GAHH, is introduced based on the GA framework as well. In executing the GAHH, we randomly selected a low-level heuristic based on its selection probability and applied it once to each population member over several iterations (L_no in total). The current solution was replaced with a newly generated solution if the new solution was superior; otherwise, the new solution was accepted with a certain probability. Let f_l = 1/7 be the initial probability of each LH_l, l = 1, 2, ..., 7, and let π_l be the recorded total frequency of obtaining a superior solution when cyclically executing LH_l. To ensure that all seven low-level heuristics in the pool remained selectable in the GAHH, we set π_l = max{1, π_l}. The main steps of the GAHH (shown from step 08 onward) were as follows:
Steps of the genetic algorithm hyper-heuristic:
08: If RC(σ_t) < RC(σ_i), set π_l = π_l + 1 for the applied LH_l.
09: Retain each superior current parent σ_i.
10: End do /* over the low-level heuristics */
11: End do /* i = 1, 2, ..., Pop */
12: Update the probabilities {f_l, l = 1, 2, ..., 7} of LH_1, LH_2, ..., and LH_7 according to
13: their past records as f_l = π_l / Σ_{j=1}^{7} π_j, l = 1, ..., 7.
14: Select two parents from the Pop population by the roulette wheel method and
15: apply a linear order crossover to reproduce a set of Pop offspring.
16: For each offspring, generate a random number u (0 < u < 1); if u < P, create a
17: new offspring by applying a displacement mutation.
18: Retain the best offspring and replace the Pop parents with their offspring.
19: End do /* until the number of high-level cycles (ITRN) is reached */
20: Output the final best sequence and its fitness value.
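The probability-update rule above (clamping π_l to at least 1 and normalizing, f_l = π_l / Σ_j π_j) is small enough to sketch directly; the function name is ours.

```python
def update_probs(success_counts):
    """Selection probabilities for the low-level heuristics.
    success_counts[l]: how often LH_{l+1} produced a superior solution."""
    clamped = [max(1, c) for c in success_counts]  # pi_l = max{1, pi_l}: keep every LH selectable
    total = sum(clamped)
    return [c / total for c in clamped]            # f_l = pi_l / sum_j pi_j
```

The clamp guarantees every low-level heuristic retains a nonzero chance of being chosen, which preserves exploration even when some operators (such as LH_6 and LH_7, per the text) rarely succeed.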
The flowchart of the GAHH is depicted in Figure 1.

To find an exact solution for small-sized orders, the best of the schedules found by the three proposed heuristics, the GA, and the GAHH was used as the upper bound in a depth-first B&B method. To help prune the branching trees, the proposed properties and the lower bound were applied in the method. The orders were scheduled in a forward manner, and a systematic depth-first search was used to branch down each tree [25,28,34,36].

Tuning Genetic Algorithm Hyper-Heuristic Parameters
With reference to the scheme of Montgomery [49], in this section, we present a one-factor-at-a-time approach to tuning the relevant GAHH parameters. The GAHH proposed in the fifth section has four parameters; namely, the population size (Pop), the number of low-level heuristic applications (L_no), the mutation probability (P), and the number of high-level cycles (ITRN). To reduce the computation time and obtain superior solutions, the values of these parameters must be tuned before conducting a simulation study. To obtain suitable parameter settings, we computed the average error percentage (AEP) over the 100 test instances as $\mathrm{AEP} = \frac{1}{100}\sum_{i} 100 \times (H_i - B^*_i)/B^*_i\,[\%]$, where $H_i$ is the objective value found by the GAHH and $B^*_i$ is the optimal objective value obtained by the B&B method for instance $i$. With reference to the designs of Leung et al. [10-14,16], Lee [20], Lin et al. [28], and Yang and Yu [33], the weights $w_i$ were generated from the uniform distribution $U(1, 100)$. In Scenario 1, the component processing times $t^{(1)}_{iv}$ and ready times $r^{(1)}_i$ were generated from $U(1, 100)$ and $U(1, 100 \times n \times \lambda)$, and the due dates were generated from $U(\overline{TPT}^{(1)}(1 - \tau - \rho/2), \overline{TPT}^{(1)}(1 - \tau + \rho/2))$. In Scenario 2, the component processing times $t^{(2)}_{iv}$ and ready times $r^{(2)}_i$ were generated from $U(1, 200)$ and $U(1, 200 \times n \times \lambda)$, and the due dates were generated from $U(\overline{TPT}^{(2)}(1 - \tau - \rho/2), \overline{TPT}^{(2)}(1 - \tau + \rho/2))$, where $\overline{TPT}^{(s)} = \sum_i \sum_v t^{(s)}_{iv}/m$; $\tau$ and $\rho$ describe the tardiness factor and the range of due dates, respectively; and $0 < \lambda < 1$ is a controllable parameter. For simplicity, we set n = 10, m = 3, τ = 0.5, ρ = 0.5, and λ = 0.3 and generated 100 problem instances for testing. With Pop = 10, P = 0.05, and L_no = 20, a simulation was conducted in which ITRN was varied from 1 to 10 in incremental steps of 1. It can be seen in Figure 2a that the lowest AEP was obtained at ITRN = 6.
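The instance-generation scheme just described can be sketched as follows (Scenario 1 only; Scenario 2 doubles the processing-time and ready-time ranges). This is a hedged sketch: the function name is ours, and we assume integer processing times and weights with continuous ready times and due dates, which the paper does not state explicitly.

```python
import random

def generate_instance(n, m, tau, rho, lam, rng=random):
    """Scenario-1 data per the stated distributions (assumed discretization)."""
    w = [rng.randint(1, 100) for _ in range(n)]                        # w_i ~ U(1,100)
    t1 = [[rng.randint(1, 100) for _ in range(m)] for _ in range(n)]   # t_iv ~ U(1,100)
    r1 = [rng.uniform(1, 100 * n * lam) for _ in range(n)]             # r_i ~ U(1,100*n*lam)
    tpt1 = sum(sum(row) for row in t1) / m                             # TPTbar = sum_i sum_v t_iv / m
    d1 = [rng.uniform(tpt1 * (1 - tau - rho / 2),
                      tpt1 * (1 - tau + rho / 2)) for _ in range(n)]   # due-date window
    return w, t1, r1, d1
```

With the paper's settings (τ = 0.5, ρ = 0.5), the due dates fall in the window from 0.25 to 0.75 of the mean total processing time, so a sizeable fraction of orders is tardy in any schedule, which is what makes the tuning instances discriminating.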
With Pop = 10, ITRN = 6, and L_no = 20, a simulation was conducted in which P was varied from 0.01 to 0.1 in incremental steps of 0.01. It can be seen in Figure 2b that the AEP was lowest (below 0.59%) when P = 0.04, which indicated an effective reduction in the AEP. With ITRN = 6, P = 0.04, and L_no = 20, a simulation was conducted in which Pop was varied from 10 to 20 in incremental steps of 2. It can be seen in Figure 2c that the AEP was the minimum (~0.29%) when Pop = 20, which indicated an effective reduction in the AEP as Pop increased. With ITRN = 6, P = 0.04, and Pop = 20, a simulation was conducted in which L_no was varied from 10 to 50 in incremental steps of 4. It can be seen in Figure 2d that the AEP was the lowest (~0.03%) when L_no = 46.
Finally, the optimal values of the parameters (Pop, P, ITRN, L_no) were set to (20, 0.04, 6, 46) for small-sized orders. However, for large-sized orders, where n = 100 and 200, a greater number of low-level heuristic applications (L_no) was required to obtain superior solutions; following the same approach as above, we eventually set L_no to 560 and 1000 for n = 100 and n = 200, respectively, for use in subsequent simulation studies.
Notably, three stopping conditions are commonly used in metaheuristics; namely, the number of generations, the difference between the current best solution and the previous best solution, and a CPU-time limit. To fairly compare the proposed GAHH and GA, the same population size, crossover scheme, and mutation rate were used in both approaches. We only set the number of generations (IT_GA) of the GA to approximate the number of high-level cycles (ITRN) multiplied by the number of low-level heuristic applications per cycle (L_no) in the GAHH; that is, IT_GA = ITRN × L_no. After our preliminary tests, the parameters (Pop, P, IT_GA) in the GA method were set to (20, 0.04, 276) for small-sized orders (n = 9, 11), (20, 0.04, 3360) for n = 100, and (20, 0.04, 6000) for n = 200. Owing to the integration of a high-level strategy with a group of low-level heuristics, a small GAHH population was adequate.

Simulation Study
In this section, the performances of the B&B method, the three heuristics, the GA, and the GAHH were evaluated through simulation studies. All of the algorithms were coded in FORTRAN and executed on a personal computer equipped with an Intel Core i7 CPU (2.66 GHz) and 4 GB of RAM running the Windows XP operating system. With reference to the designs of Leung et al. [10,12-14], Lee [20], and Lin et al. [28], we generated the component processing times and ready times from $U(1, 100)$ and $U(1, 100n\lambda)$ for Scenario 1 and from $U(1, 200)$ and $U(1, 200n\lambda)$ for Scenario 2, respectively, where λ is the control variable; its value was set to 0.1, 0.3, and 0.5. The two parts of the simulation study were designed to address small- and large-sized orders.

Results Obtained for Small-Sized Orders
For the small-sized orders, the number of orders n was set as 9 and 11 and the number of machines m as 2, 3, and 4. A total of 100 problem instances were examined for each combination of (n, m, λ, τ, ρ). Thus, in total, 10,800 (=100·2·3·3·2·3) instances were tested in this experiment. The B&B method was set to stop and run the next instance when the number of trimmed nodes exceeded $10^8$.
The average and maximum numbers of nodes and the average and maximum execution times (in seconds) were recorded to determine the performance of the B&B method, which is summarized in Table 1. To assess the performance of the three modified Moore's heuristics, the GA, and the GAHH, the AEP and the maximum error percentage were recorded; these results are summarized in Table 2. Regarding the effects of the parameters on the execution of the B&B method (Table 1), the means of the node counts and CPU times increased dramatically as n was increased from 9 to 11 regardless of the other parameters. This phenomenon illustrates one characteristic of an NP-hard problem. The number of machines and ρ had comparatively smaller effects on the performance of the B&B method. In terms of the effects of m, λ, and τ on the mean number of nodes, the means of both the number of nodes and the CPU time tended to increase for both n = 9 and n = 11 as the value of one of these parameters was increased with the other parameters fixed. The number of nodes and the CPU time decreased as the value of ρ increased for both values of n. This result implies that in instances with a larger value of ρ, the optimal solution was more easily obtained using the B&B method.
In terms of the effects of the aforementioned parameters on the performance of the three Moore's-type heuristics, the mean AEPs of the three heuristics, GA, and GAHH tended to decrease as the value of m, λ, or τ increased (Table 2). In contrast, the mean AEP of the three heuristics and both GAs increased as the value of ρ increased. Moreover, the results in Table 2 confirmed that on average, the GAHH outperformed the other algorithms.
To compare the solution quality of Moore_pi_M, Moore_pi_m, Moore_pi_mean, the GA, and the GAHH (with mean AEPs of 62.09, 59.44, 63.13, 19.77, and 1.22, respectively), the AEPs obtained under variations in the different factors (the algorithm; the numbers of jobs and machines; and the parameters λ, τ, and ρ) were fitted to a linear model. The analysis of variance for the AEP revealed significant differences for all factors at the 0.05 significance level, but the normality of the error term was significantly violated according to a Shapiro-Wilk test (statistic = 0.8535, p < 0.0001). The boxplots in Figure 3 display the AEP distributions of all the proposed heuristics and algorithms. Accordingly, a nonparametric statistical method was used to perform multiple comparisons between these methods. A Friedman test (p < 0.0001) showed that the AEP samples did not follow the same distribution at the 0.05 significance level on the basis of the AEP ranks over the 108 (n·m·λ·τ·ρ = 2·3·3·2·3) blocks of test problem instances. Furthermore, to find the pairwise differences among the three heuristics, the GA, and the GAHH, we conducted a Wilcoxon-Nemenyi-McDonald-Thompson (WNMT) test (see Chapter 7 in Hollander et al. [51]). Table 3 lists the sums of ranks of the AEPs for the three heuristics, the GA, and the GAHH. The behaviors of all the proposed algorithms could be grouped into four groups at the 0.05 significance level. As can be inferred from columns 2 and 3 of Table 3, the GAHH performed best (rank-sum = 110.0), followed by the GA (rank-sum = 223), whereas Moore_pi_m and Moore_pi_mean (rank-sums = 414.0 and 472.0, respectively) exhibited the poorest performances. As for the usage of the seven low-level heuristics, the variations in the probabilities of their being called in the GAHH are displayed in Figure 4. LH_3 was called most frequently, typically followed by LH_1, whereas LH_4, LH_5, LH_6, and LH_7 were rarely called.

Results for Large-Sized Orders
For a large number of orders, we set the order size n to 100 and 200 and the number of machines m to 5, 10, and 20. One hundred problem instances were randomly generated for each parameter combination. Consequently, a total of 10,800 (=100·2·3·3·2·3) problem instances were examined in this simulation. To evaluate the performance of the three heuristics, the GA, and the GAHH, we report the mean and maximum relative percentage deviation (RPD), defined as $\mathrm{RPD} = 100 \times (H_i - H^*)/H^*\,[\%]$, where $H_i$ is the objective value found by one of the three heuristics, the GA, or the GAHH, and $H^*$ is the minimum among these five methods for each instance. The RPDs obtained with variations in n, m, λ, τ, and ρ are summarized in Table 4.
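The RPD measure above is a one-line computation; a minimal sketch (assuming $H^* > 0$, since the ratio is undefined otherwise):

```python
def rpd(H, H_star):
    """Relative percentage deviation of a method's objective value H
    from the best value H_star among the compared methods (H_star > 0)."""
    return 100.0 * (H - H_star) / H_star   # in percent
```

By construction, the method achieving the best value on an instance has an RPD of exactly 0 there, which is why the GAHH's near-zero average RPD indicates it was almost always the best of the five methods.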

As summarized in Table 4, the average RPDs of the three modified Moore's-type heuristics and the GA strictly decreased as the number of machines and the values of the parameters λ and τ increased for n = 100 and 200. In contrast, the average RPDs of the three Moore's-type heuristics decreased as the value of ρ increased. Moreover, with an average RPD close to 0 for both values of n, the GAHH outperformed the other methods.
The boxplots of the RPDs of the five algorithms are depicted in Figure 5. The GAHH outperformed the other four algorithms when n was large. Overall, the average RPDs of the Moore_pi_M, Moore_pi_m, Moore_pi_mean, GA, and GAHH approaches were 106.68, 125.22, 105.45, 76.79, and 0.00, respectively. The experimental results further corroborated the superiority of the GAHH approach.
In view of the significant differences in the average RPDs of the three heuristics, the GA, and the GAHH, we performed an analysis to confirm the differences and make direct pairwise comparisons statistically. Another linear model, run in the SAS 9.4 environment, was fitted to the RPDs obtained under variations in the different factors (the algorithm; the numbers of jobs and machines; and the parameters λ, τ, and ρ). Subsequently, the normality of the error term was again rejected by the Shapiro-Wilk test (statistic = 0.9114, p < 0.0001) at the 0.05 significance level. Accordingly, a nonparametric statistical method, the WNMT test, was used to make multiple comparisons of the performance of the five algorithms. Table 3 lists the sums of ranks of the RPDs of the three proposed heuristics, the GA, and the GAHH.
The performance levels of all the proposed algorithms could be grouped into four groups at the 0.05 significance level. As can be inferred from columns 4 and 5 of Table 3, the GAHH (rank-sum = 108.0) yielded the best performance, followed by the GA (rank-sum = 255), whereas Moore_pi_m (rank-sum = 496) yielded the poorest performance.
The CPU times of all the proposed algorithms were extremely short in the small-order-size case; thus, they are omitted here. Table 5 lists the CPU times of all the proposed heuristics and metaheuristics in the large-order-size case. As summarized and illustrated in Table 5 and Figure 6, respectively, the average CPU times of the Moore_pi_M, Moore_pi_m, Moore_pi_mean, GA, and GAHH approaches were 0.32, 0.30, 0.30, 0.02, and 2.14 s, respectively, in the large-order-size case regardless of the parameter values, except for the number of orders (n). Furthermore, regarding the differences between the GA and the GAHH under the same design (the same population, the same mutation, and IT_GA = ITRN × L_no), the computation time of the GAHH was about 2 s longer than that of the GA because the GAHH required additional CPU time to select low-level heuristics. In the worst case, the GAHH required 8.02 s to solve an instance; however, the mean of the maximum solving times of the GAHH approach was 2.51 s. These results confirm that the GAHH approach maintained a small population and a limited number of generations while obtaining high-quality solutions. On the basis of the test results, we concluded that the GAHH, on average, outperformed the other heuristics and the GA in the simulation tests regardless of the order size.

Conclusions and Future Work
Customer order scheduling models have emerged as a major challenge in manufacturing environments and practical applications (Framinan et al. [4]). Diverging from the common assumption that component processing times, ready times, and due dates are fixed, this study investigated cases involving scenario-dependent component processing times, ready times, and due dates; the objective was to find an appropriate and robust sequence of orders that minimizes the maximum of the sum of weighted tardy orders across the considered scenarios. To solve this problem, first, dominance properties and a lower bound were derived, and a B&B method was applied to search for optimal solutions for small-sized orders. Second, three variants (heuristics) of Moore's algorithm and the GAHH were developed, along with a conventional GA, to obtain approximate solutions for large-sized orders. Simulations were performed to evaluate the capability of the B&B method and the effects of parameter values on its performance. The experimental results obtained by considering the dominance properties and the derived lower bound indicated that the B&B method could solve problem instances with n values up to 11 within a reasonable CPU time (Table 1). The experimental tests further demonstrated that the GA and the GAHH performed satisfactorily in terms of efficacy and efficiency for problem instances involving both small- and large-sized orders. In the case of the GAHH, which randomly selects from among seven operators, only a small population and a small number of iterative cycles (ITRN) were required to obtain a high-quality solution compared with the GA without a neighborhood heuristic. Overall, on the basis of the simulation results, we recommend the GAHH approach for the problem considered herein owing to its superior performance and speed in attaining high-quality solutions in terms of the AEP and RPD.
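To make the objective concrete, a hedged sketch of its evaluation for a fixed permutation schedule follows. It assumes, as is common in COSP models, that every order's components are processed in the same sequence on dedicated machines, a component cannot start before its order's ready time, and an order completes when its last component does; the function name and data layout are illustrative, not the paper's notation:

```python
def robust_max_weighted_tardy(sequence, scenarios, m):
    """Worst case, over scenarios, of the total weight of tardy orders for a
    fixed order sequence on m dedicated machines. Each scenario maps an
    order to (p, r, d, w): a length-m tuple of component processing times,
    a ready time, a due date, and a weight."""
    worst = 0.0
    for scen in scenarios:
        machine_time = [0.0] * m  # current completion time on each machine
        total = 0.0
        for j in sequence:
            p, r, d, w = scen[j]
            for i in range(m):
                machine_time[i] = max(machine_time[i], r) + p[i]
            if max(machine_time) > d:  # order j finishes after its due date
                total += w
        worst = max(worst, total)
    return worst
```

The B&B method and the (meta)heuristics above all search over order sequences for one minimizing this worst-case value.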
In the future, researchers can investigate the COSP with more than three scenario-dependent processing times as well as order rejection based on different criteria, for example, the total completion time, the makespan, or even a multiobjective case. Another direction for future study could involve using the GAHH with seven versions to evaluate the impact of having no operator, one operator, and multiple operators.