A Novel Disruptive Innovation-Like Algorithm for Single-Machine Scheduling with Sequence-Dependent Family Setup Times

Abstract: This paper proposes a novel disruptive innovation-like algorithm (DILA) for the problem of scheduling a single machine with sequence-dependent family setup times to minimize the total tardiness. The proposed algorithm is derived from Christensen's (1997) theory of disruptive innovation. Based on this theory, the DILA first generates two initial populations: the mainstream market population and the emerging market population. Then, to improve the quality of the solutions in the populations, three phases are executed within each generation. Finally, the DILA allows the populations to evolve for several generations until the termination condition is met. The two populations constructed in the DILA are designed to overcome premature convergence. In addition, two functions, the member-added function and the member-removed function, are implemented in the DILA to increase diversity. The DILA was tested on a dataset of 1440 instances from the literature. The computational results confirm that the DILA is very effective: it outperformed HGA, ILS_BASIC, ILS_DP, and ILS_DP + PR. Compared with these four heuristics, the proposed DILA produced 258, 113, 105, and 68 better solutions, respectively, for the 1440 instances, whereas the number of worse solutions did not exceed 4.


Introduction
In this paper, a novel disruptive innovation-like algorithm (DILA) is proposed for solving the single-machine scheduling problem with sequence-dependent family setup times (SMSDFS) to minimize the total tardiness. This scheduling problem can be found in real-life manufacturing environments, such as the module process in thin-film transistor liquid crystal display (TFT-LCD) manufacturing [1] and steel wire factories [2,3]. In addition, the considered SMSDFS problem is also suitable for evaluating the performance of the DILA, because the single-machine scheduling problem can be used as the basis for the development of heuristics [4].
The concept of the proposed DILA is mainly derived from Christensen's [5] theory of disruptive innovation. Disruptive innovation divides the market into a mainstream market and an emerging market. The price and quality of the products offered by companies in the mainstream market are high, which is not accepted by all consumers. Hence, another group of companies introduces disruptive innovation products into the emerging market to attract the consumers who do not accept the products offered in the mainstream market. Consumers in the emerging market initially have no choice other than the disruptive innovation products. However, as the quality of the low-end products improves, mainstream consumers, who are more fastidious about product quality, also move away from the original value network and enter this emerging market. This also means that companies in the mainstream market that focus on providing high-end products cannot grow after a period of time; thus, the emerging market gradually becomes a mainstream market. In addition, the companies in the emerging market continue to develop and grow through their own evolution and development processes, as well as by imitating the management approach of companies with outstanding performance.

Description of the Problem
The scheduling problem considered in this study can be described as follows: A set of n independent jobs J_i (i = 1, 2, ..., n) has to be scheduled on a single machine without interruption or preemption. All the jobs are available for processing at time zero, machine breakdowns do not occur, and the machine can process only one job at a time. Each job J_i has a positive processing time p_i and a due date d_i. Jobs are classified into F families. Each family l has n_l jobs (l = 1, 2, ..., F) such that n_1 + n_2 + ... + n_F = n, and f(i) denotes the family of job i. A family setup time s_ij is incurred when job j is scheduled to be performed immediately after job i and the two jobs belong to different families (f(i) ≠ f(j)). If jobs i and j belong to the same family (f(i) = f(j)), there is no setup time; that is, s_ij is zero. The following notations are used to describe the SMSDFS problem and the proposed DILA in the next section:

i = job index, i = 1, 2, 3, ..., n
p_i = processing time of job i
l = family index, l = 1, 2, 3, ..., F
f(i) = the family of job i
s_ij = family sequence-dependent setup time when job j is processed immediately after job i
d_i = due date of job i
P1 = population 1
P2 = population 2
P_num1 = the number of members in P1
P_num2 = the number of members in P2
Π_1p = solution p in P1
Π_2p = solution p in P2
Π_gbest = the best solution in P1 and P2
Π_lbest1 = the local best solution in P1
Π_lbest2 = the local best solution in P2
V_1p = the value of member p in P1
V_2p = the value of member p in P2
V_lbest1 = the value of the local best solution in P1
V_lbest2 = the value of the local best solution in P2
V_gbest = the value of the global best solution in P1 and P2
Π_VND = the solution generated by the variable neighborhood descent (VND) algorithm
V_VND = the value of the solution generated by the VND algorithm
c1, c2 = the values controlling the probabilities of executing the member-added function and the member-removed function
C_i = completion time of job i

The scheduling problem considered in this paper is to minimize the total tardiness. The objective can be defined as below:

Minimize Σ_{i=1}^{n} T_i, where T_i = max(C_i − d_i, 0) is the tardiness of job i.
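The total-tardiness objective with family setup times can be sketched in code. The following is a minimal illustration, not the authors' implementation; the data layout (dict-indexed processing times and a nested setup-time table) is an assumption made for the example.

```python
def total_tardiness(seq, p, d, fam, setup):
    """Total tardiness of a job sequence on a single machine.

    seq:   job indices in processing order
    p, d:  processing times and due dates, indexed by job
    fam:   fam[i] = family of job i
    setup: setup[i][j] = family setup time when j follows i
           (only incurred when fam[i] != fam[j])
    """
    t = 0
    tardiness = 0
    prev = None
    for j in seq:
        if prev is not None and fam[prev] != fam[j]:
            t += setup[prev][j]        # sequence-dependent family setup
        t += p[j]                      # completion time C_j
        tardiness += max(t - d[j], 0)  # T_j = max(C_j - d_j, 0)
        prev = j
    return tardiness
```

A sequence is evaluated by a single left-to-right pass, so each candidate schedule costs O(n) to score.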

Proposed Algorithm
In this paper, a novel DILA is proposed to solve the candidate problem. The main components of the DILA are the initialization of the populations and a three-phase evolution, namely, mainstream market evolution, mainstream and emerging market co-evolution, and emerging market evolution. In the first phase, only the mainstream market population evolves; the emerging market population does not participate. In the second phase, the emerging market population joins the evolution, and the mainstream market population and the emerging market population evolve at the same time; the members of both populations can learn from the local best solution or the global best solution. In the last phase, only the emerging market population evolves. In the DILA, the three phases are repeated until the termination condition is met, and one pass through the three phases is regarded as one generation. At the end of each generation, the population containing the better solution becomes the mainstream market population of the next generation. Figure 1 illustrates the basic process of the DILA, and its main components are described below.
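At this level of abstraction, one generation of the DILA can be sketched as a driver loop. This is a schematic only, assuming the three phase routines, an objective function `value`, and a termination check are supplied elsewhere; it is not the authors' code.

```python
def dila(P1, P2, evolve_mainstream, coevolve, evolve_emerging,
         value, keep_running):
    """Sketch of the DILA generation loop. P1 is the mainstream
    market population, P2 the emerging market population."""
    while keep_running():
        evolve_mainstream(P1)   # phase 1: mainstream market only
        coevolve(P1, P2)        # phase 2: both populations together
        evolve_emerging(P2)     # phase 3: emerging market only
        # The population holding the better best solution becomes
        # the mainstream market population of the next generation.
        if min(map(value, P2)) < min(map(value, P1)):
            P1, P2 = P2, P1
    return min(P1 + P2, key=value)
```

The swap at the end of each generation models the emerging market becoming the mainstream market once its best solution overtakes the incumbent's.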

Initial Populations
As mentioned above, the DILA includes two populations. The first population's members are generated by two heuristics: the apparent tardiness cost with setup (ATCS) rule [11] and the destruction and construction (DC) procedure [12]. The second population's members are generated by the RANDOM rule and the DC procedure. The ATCS priority index used in this paper can be expressed as

I_i(t) = (1 / p_i) × exp(−max(d_i − p_i − t, 0) / (k1 × p̄)) × exp(−s_ji / (k2 × s̄)),

where t denotes the current time; job j has just been completed on the machine; p̄ and s̄ represent the average processing time and average setup time of all remaining jobs, respectively; and k1 and k2 are look-ahead (scaling) parameters. The values k1 = 4.75 and k2 = 0.15 are used in this paper. The generation method for the first population's members works as follows: the ATCS rule is used to generate a member solution; then, based on the generated solution, the DC procedure is used to regenerate P_num1 different solutions, where P_num1 is the size of population 1. The generation mechanism for the second population differs only in that the RANDOM rule is used to randomly generate the initial solution; based on it, the DC procedure regenerates P_num2 different solutions, where P_num2 is the size of population 2.
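The priority computation can be sketched as follows. Note that the equation itself is missing from the extracted text, so this sketch assumes the standard unit-weight ATCS index of Lee, Bhaskaran, and Pinedo, which the surrounding description (current time t, averages p̄ and s̄, parameters k1 and k2) matches; it is not guaranteed to reproduce the paper's exact formula.

```python
import math

def atcs_priority(i, j, t, p, d, s, p_bar, s_bar, k1=4.75, k2=0.15):
    """Unit-weight ATCS priority of candidate job i at time t, given
    that job j has just been completed.

    p, d:  processing times and due dates
    s:     s[j][i] = setup time when i follows j
    p_bar, s_bar: average remaining processing and setup times
    k1, k2: look-ahead (scaling) parameters
    """
    slack = max(d[i] - p[i] - t, 0.0)
    return (1.0 / p[i]) * math.exp(-slack / (k1 * p_bar)) \
                        * math.exp(-s[j][i] / (k2 * s_bar))
```

At each step, the unscheduled job with the highest index is appended to the schedule; tight due dates and small setup times both raise the priority.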

We calculate the objective value for each member of each population. The population that has the best solution is set as the mainstream market population (P1), and the other one is set as the emerging market population (P2).

Evolution of the Mainstream Market Population
In the DILA, the mainstream market population evolves first (phase 1). The procedure for the evolution of the mainstream market population is presented in Figure 2. The members of the population are updated by two methods: (1) learning from the local best solution and (2) self-learning. The partially mapped crossover (PMX) operator suggested by Goldberg and Lingle [13] is modified to serve as the learning method in this paper; for convenience, we name it MPMX. It allows an information exchange between member p (Π_1p) and the local best solution Π_lbest1 when the member's solution value (V_1p) is not equal to the local best solution value (V_lbest1). The MPMX crossover used in this study is described as follows, taking Figure 3 as an example, in which there are two members with 10 jobs: member Π_1p and the local best solution Π_lbest1.
The member Π_1p inherits a subsequence of fixed length from the local best solution Π_lbest1. Assume that the fixed length is 4 and that the randomly selected start position is 3; member Π_1p then inherits the jobs in positions 3 to 6 from the local best solution Π_lbest1. The jobs in the remaining positions of Π_1p, such as J3, J10, and J8, are retained if they are not the same as the inherited jobs in positions 3 to 6. The order of the repeated jobs, such as J5, J4, and J1, is determined by their original order in member Π_1p; they are then filled into the blank positions 1, 2, and 9 from left to right, so J5 is placed into the first blank position. When all the jobs have been placed into the blank positions, a new individual solution is formed. Furthermore, the DC procedure [12] is applied as the self-learning method.
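The MPMX step described above can be sketched in code. This is an illustrative reconstruction from the worked example (inherit a window from the best solution, keep non-duplicate jobs in place, fill the blanks with the missing jobs in the member's original order), not the authors' implementation; `start` is 0-indexed here.

```python
def mpmx(member, best, start, length):
    """Modified PMX: the child inherits best[start:start+length];
    other positions keep the member's job unless it duplicates an
    inherited job; the remaining blanks are filled, left to right,
    with the still-missing jobs in the member's original order."""
    n = len(member)
    child = [None] * n
    child[start:start + length] = best[start:start + length]
    inherited = set(child[start:start + length])
    for pos in range(n):
        if child[pos] is None and member[pos] not in inherited:
            child[pos] = member[pos]           # retained job
    placed = set(x for x in child if x is not None)
    fillers = iter(x for x in member if x not in placed)
    for pos in range(n):
        if child[pos] is None:
            child[pos] = next(fillers)         # fill blanks in order
    return child
```

The result is always a valid permutation: every job appears exactly once, with the inherited window taken verbatim from the local best solution.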

When all members in population 1 have been updated, a solution Π_1k is selected, and the variable neighborhood descent (VND) algorithm is used to improve it. The selected solution value V_1k is the minimum among all the members' solution values and is different from the local best solution value V_lbest1.
The VND is a metaheuristic proposed by Hansen and Mladenović [14] that has been successfully applied to scheduling problems. The VND searches the solution space using a set of predefined neighborhood structures and escapes from local optima by systematically changing these structures. In this paper, a series of neighborhood structures based on the insertion move of a subsequence (block) is implemented as the neighborhood structures of the VND. The sequence of neighborhood structures used in this paper is the same as the one in Chen [15], which employs eight insertion neighborhood structures: the move of a subsequence may involve a one-job insertion move, a two-consecutive-jobs insertion move, ..., or an eight-consecutive-jobs insertion move. For convenience, these are denoted INS(1), INS(2), ..., INS(8), respectively.
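The VND over the INS(1)..INS(8) neighborhoods can be sketched as follows. This is a minimal best-improvement variant assuming a user-supplied cost function; details such as the exploration order within a neighborhood are assumptions, not the authors' implementation.

```python
def block_insertion_neighbors(seq, k):
    """All sequences obtained by removing a block of k consecutive
    jobs and reinserting it at every other position (INS(k))."""
    n = len(seq)
    for i in range(n - k + 1):
        block = seq[i:i + k]
        rest = seq[:i] + seq[i + k:]
        for j in range(len(rest) + 1):
            if j != i:                     # j == i reproduces seq itself
                yield rest[:j] + block + rest[j:]

def vnd(seq, cost, k_max=8):
    """Variable neighborhood descent over INS(1)..INS(k_max):
    restart from INS(1) after every improvement, otherwise move on
    to the next (larger-block) neighborhood."""
    best, best_cost = seq, cost(seq)
    k = 1
    while k <= k_max:
        cand = min(block_insertion_neighbors(best, k), key=cost,
                   default=None)
        if cand is not None and cost(cand) < best_cost:
            best, best_cost = cand, cost(cand)
            k = 1
        else:
            k += 1
    return best
```

The search terminates only when the incumbent is locally optimal with respect to all eight insertion neighborhoods.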
Finally, two functions, the member-added function (MAF) and the member-removed function (MRF), are implemented to represent the joining and disappearance of members. A new member is added to the population, or a member of the population is eliminated, according to two control parameters, c1 and c2, which are less than or equal to 1.0. The values of c1 and c2 are set so as to generate probabilities that preserve population diversity. The eleven combinations of c1 and c2 shown in Table 1 are considered in this paper. The probability values pa and pr for each combination are calculated as follows: c1 determines pa (pa = c1), and c2 determines pr (pr = (1 − c1) × c2). For instance, if c1 = 0.2 and c2 = 0.25 (the second combination in Table 1), then pa = c1 = 0.2 and pr = (1 − c1) × c2 = 0.2. The values of pa and pr for the combinations in Table 1 demonstrate the purpose of selecting the 11 combinations. A combination with a larger pa and a smaller pr implies that the probability that a member will be added is larger than the probability that a member will be removed; conversely, for a combination with a smaller pa and a larger pr, the probability that a member will be removed is larger than the probability that a member will be added. The MAF is implemented with a probability of pa; this function randomly generates a solution, applies the DC procedure to shake it, and then adds it to the population. The MRF is implemented with a probability of pr; this function eliminates the member with the worst objective value. The search process of this phase continues until a phase termination criterion is satisfied.
To further increase the diversity of the population, a two-stage MAF and MRF mechanism is proposed. Under this mechanism, one set of control probabilities is used during the first half of the total execution time, and another set is used during the second half.
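The probability rule and the two-stage switch can be sketched as follows. The function names are illustrative, and the `new_member` generator (a random solution shaken by the DC procedure) is assumed to be supplied elsewhere; the stage pairs used in the test, (c1, c2) = (0.5, 0.4) and (0.8, 1.0), are the ones that yield the paper's reported (pa, pr) values of (0.50, 0.20) and (0.80, 0.20).

```python
import random

def control_probabilities(c1, c2):
    """pa = c1 and pr = (1 - c1) * c2, as defined for the MAF/MRF."""
    return c1, (1.0 - c1) * c2

def two_stage_probabilities(elapsed, total, stage1, stage2):
    """First half of the run uses the stage-1 (c1, c2) pair,
    the second half uses the stage-2 pair."""
    c1, c2 = stage1 if elapsed < total / 2.0 else stage2
    return control_probabilities(c1, c2)

def apply_maf_mrf(population, pa, pr, new_member, value):
    """MAF: add a freshly generated (shaken) solution with
    probability pa.  MRF: remove the worst member with
    probability pr."""
    if random.random() < pa:
        population.append(new_member())
    if len(population) > 1 and random.random() < pr:
        population.remove(max(population, key=value))
```

With pa > pr the population tends to grow, and with pa < pr it tends to shrink, which is exactly the trade-off the eleven (c1, c2) combinations explore.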

Co-Evolution of Mainstream Market Population and Emerging Market Population
After the mainstream market evolution, the emerging market members join the evolution (phase 2), namely, the co-evolution. The procedure for the co-evolution of the mainstream market and the emerging market is presented in Figure 4. The members of the mainstream and emerging market populations learn from the global best solution and engage in self-learning. The MPMX crossover operator, DC procedure, MAF, and MRF used in phase 1 are also used in phase 2. However, in phase 2, the learning method allows information to be exchanged between a member and the global best solution Π_gbest when the member's solution value is not equal to the global best solution value. Members of the emerging market population can also learn from the global best solution, because emerging market companies can refer to the practices of the established mainstream market companies. The search process of this phase continues until a phase termination criterion is satisfied or the local best solution value of the emerging market population (V_lbest2) becomes better than that of the mainstream market population (V_lbest1). If V_lbest2 is smaller than V_lbest1, the profitability of the emerging market has surpassed that of the mainstream market; therefore, phase 2 is terminated and phase 3 begins, in which only the emerging market evolves.

Evolution of Emerging Market Population
In phase 3, only the emerging market population evolves. The procedure for the evolution of the emerging market population is presented in Figure 5. The members in the emerging market population will learn from the global best solution and engage in self-learning. The MPMX crossover operator, DC procedure, MAF, and MRF used in phase 2 are also used in this phase.


Computational Experiments
To verify the performance of the proposed DILA, two sets of experiments were conducted. The first set investigates the effects of the controlled probability and the phase termination time. The second set compares the performance of the DILA with the best parameters found in the first set against several metaheuristics reported in the literature.
We coded the DILA in C++ and executed it on an Intel Core i7 PC with a 3.4 GHz CPU and 4 GB RAM. A series of computational experiments was conducted on the benchmark instances provided by Jacob and Arroyo [9]. The problem instances are characterized by four factors: the number of jobs n, the number of families F, the due-date range r, and the setup-time range η. The number of jobs has three levels: 60, 80, and 100. The number of families has four levels: 2, 3, 4, and 5. The due date of each job is an integer drawn from a uniform distribution over the interval [0, r × Σ_{j=1}^{n} p_j] (recall that r is the due-date range), and the due-date range r has four levels: 0.5, 1.5, 2.5, and 3.5. Three setup-time ranges η, namely [11, 20], [51, 100], and [101, 200], are used to generate the job setup times. The processing time of each job is generated from a discrete uniform distribution over [1, 99]. From each of the 144 combinations of the four factors, 10 problem instances were generated; therefore, a total of 1440 instances were used to evaluate the proposed algorithms.
The relative deviation index (RDI) and the number of best solutions (NBS) produced are the two criteria used to evaluate the performance of the proposed algorithms. The RDI is calculated as RDI = (S_a − S_b) / (S_w − S_b), where S_a is the solution value obtained by method a, and S_b and S_w are the best and the worst solution values, respectively, among the solutions obtained by all the methods included in the comparison.
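The RDI equation is missing from the extracted text, so the sketch below assumes the standard definition of the relative deviation index (some papers additionally scale it by 100); the degenerate case where all methods tie is handled by convention as 0.

```python
def rdi(s_a, s_b, s_w):
    """Relative deviation index of method a's solution value s_a,
    given the best (s_b) and worst (s_w) values over all compared
    methods: 0 means a found the best value, 1 the worst."""
    if s_w == s_b:          # every method found the same value
        return 0.0
    return (s_a - s_b) / (s_w - s_b)
```

Averaging this index over instances gives a scale-free comparison across instances whose absolute tardiness values differ widely.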

The First Set of Experiments
For the first set of experiments, six instances (instances 506, 709, 781, 809, 897, and 909) with 80 jobs were used to investigate the effects of the parameters. The controlled probability parameter has 17 separate levels (see Table 2): the single-stage type has 11 levels, and the two-stage type has six levels. The phase termination time parameter has three levels: 1, 2, and 3 s. Therefore, there are a total of 51 different combinations of the two major parameters. The remaining parameters of the DILA were set as follows: the fixed length for the MPMX crossover was set to 15, the DC size for the DC procedure was set to 4, the neighborhood structure was set to INS(1) + INS(2) + ... + INS(8), the initial population size was set to 30, and the maximum execution time was set to 40 s. The DILA was then applied with each of the 51 combinations to solve the six instances five times each. Table 2 lists the values of the controlled probability.


[Table 2 lists the controlled probability pairs (pa, pr) for the single-stage and two-stage types of MAF and MRF.]

Analysis of variance (ANOVA) was used to analyze the RDI produced by the DILA runs of all 51 combinations for the six instances. Table 3 presents the results of the ANOVA, showing that the controlled probability significantly affects the performance of the DILA. Therefore, we applied Duncan's test to determine whether the performance of any two controlled probabilities differs significantly. Table 4 presents the results of Duncan's test; note that different letters indicate that the performance of the DILA at the indicated levels differs significantly. The results in Table 4 show that the following combination of controlled probabilities produces the best mean RDI: stage 1: (pa = 0.5, pr = 0.2); stage 2: (pa = 0.8, pr = 0.2). Furthermore, the results show that the single-stage type produces the six worst mean RDIs. The ANOVA results in Table 3 also show that the phase termination time does not significantly affect the output, which means that the three levels do not affect the DILA differently; however, a phase termination time of 2 s produces the best mean RDI. Based on these analyses, the suggested best parameters are as follows: controlled probability: stage 1 = (0.50, 0.20) and stage 2 = (0.80, 0.20); phase termination time: 2 s.

The Second Set of Experiments
The second set of experiments investigated the performance of the DILA using the best parameter values determined in the first set of experiments. We compared the performance of the proposed DILA with four heuristics [9]: HGA, ILS_BASIC, ILS_DP, and ILS_DP + PR, whose results are provided by Jacob and Arroyo [9]. For a fair comparison, 30 runs of the DILA were performed with a time limit of n × 0.5 s for the 1440 test instances, the same stopping condition as in Jacob and Arroyo [9]. Table 5 lists the RDI and NBS for the different job and family sizes obtained by the DILA and the four heuristics; there are 120 instances per n × F group. The results in Table 6 show that the DILA was able to find the best solutions for 1435 out of the 1440 instances (99.7%). Table 6 also presents the results of the paired t-tests for HGA, ILS_BASIC, ILS_DP, ILS_DP + PR, and the DILA, which show that the DILA dominates HGA, ILS_BASIC, ILS_DP, and ILS_DP + PR at a significance level of 0.05. A comparison with the reference heuristics is summarized in Table 7, which presents the numbers of solutions better than, equal to, and worse than those obtained by the DILA. We observe that the DILA outperformed HGA, ILS_BASIC, ILS_DP, and ILS_DP + PR: compared with these heuristics, the proposed DILA produced 258, 113, 105, and 68 better solutions, respectively, for the 1440 instances, whereas the number of worse solutions did not exceed 4. We further analyzed the performance of the DILA across a wide range of values for the following parameters: number of jobs (n), number of families (F), due-date range (r), and setup-time range (η). The results are summarized in Tables 8-11. In Table 8, we observe that the DILA performs better than the other heuristics regardless of the number of jobs.
For the case of 60 jobs, the DILA finds all the best solutions, and the average RDI (0.02) produced by ILS_DP + PR is very close to the average RDI (0.00) generated by the DILA. In Table 9, we can clearly see that the DILA performs better than the other heuristics regardless of the number of families; the DILA finds all the best solutions for two families and five families, and ILS_DP + PR also finds all the best solutions for two families. The results in Tables 10 and 11 likewise indicate that the DILA performs better than the other heuristics regardless of the due-date and setup-time ranges. The DILA finds all the best solutions when the setup-time range is [11, 20]. When the setup-time range is [51, 100], the average RDI produced by ILS_DP + PR is 0.94, which is relatively close to the value (0.63) generated by the DILA.

Conclusions
This paper has proposed a novel DILA to solve the problem of single-machine scheduling with sequence-dependent family setup times while minimizing the total tardiness of jobs. The DILA first generates two initial populations: the mainstream market population and the emerging market population. Then, three phases within each generation are used to improve the quality of the solutions in the two populations. Finally, the DILA allows the populations to evolve for several generations until the maximum execution time is reached. The MAF and MRF of the two-stage type (stage 1 = (0.50, 0.20) and stage 2 = (0.80, 0.20)) and a phase termination time of 2 s were incorporated in the DILA to help achieve better performance. Computational experiments on 1440 benchmark instances were used to compare the results produced by the proposed DILA against those of several metaheuristics. The results show that the proposed DILA is superior to the HGA, ILS_BASIC, ILS_DP, and ILS_DP + PR heuristics. Furthermore, the DILA found the best solution in 1435 out of the 1440 instances.
We also analyzed the performance of the DILA across a wide range of values for the following parameters: number of jobs (n), number of families (F), due-date range (r), and setup-time range (η). The DILA performs better than the other heuristics at every level of the four parameters. The DILA finds all the best solutions when the number of jobs is 60, when the number of families is 2 or 5, and when the setup-time range is [11, 20].
Future research may involve applying the proposed heuristic to other machine scheduling problems, for example, problems with parallel machines, flow shops, and hybrid flow shops. Furthermore, a self-adaptive MAF and MRF strategy may be the future direction for research on the DILA.