Minimizing Makespan in a Two-Machine Flowshop Problem with Processing Time Linearly Dependent on Job Waiting Time

This paper studies a two-machine flowshop scheduling problem in which the processing times are linearly dependent on the waiting times of the jobs prior to processing on the second machine. That is, when a job is processed completely on the first machine, a certain delay time is required before its processing on the second machine. If we reduce the actual waiting time, the processing time of the job on the second machine increases. The objective is to minimize the makespan. When the total processing time is reduced, the consumption of energy is reduced, which is beneficial to environmental sustainability. We show that the proposed problem is NP-hard in the strong sense. A 0-1 mixed integer programming formulation and a heuristic algorithm are proposed, together with computational experiments. Some cases solvable in polynomial time are also provided.


Introduction
In this paper, we consider a two-machine flowshop scheduling problem where the processing times are linearly dependent on the waiting times of the jobs prior to processing on the second machine. The objective is to minimize the makespan. In this problem, when a job is processed completely on the first machine, a certain delay time is required before its processing on the second machine. If we can reduce this delay time at the cost of extra processing time added to the processing time of the job on the second machine, the whole processing time of all jobs may be reduced. When the whole processing time is reduced, the consumption of energy is reduced, which contributes to environmental sustainability. In addition, the managerial implication of this approach is that a production scheduler has more leeway in arranging a job sequence to reach the goal. One realistic example of this kind of scheduling is a painting process. Generally, there are at least two layer-painting stages in painting engineering. After the first stage, the product has to wait for some time until it dries naturally. Then, the product goes to the second stage. If we want to reduce the waiting time, we can use a dryer to accelerate the drying process, at the cost of extra processing time added to the processing time of the job on the second machine. Another practical scheduling problem arises at the cooking-chilling stage in a food plant [1]. The chilling process must start within 30 min of the completion of cooking; otherwise, the food has to be discarded as unfit for human consumption. If there is an advantage in terms of makespan minimization, we can reduce the waiting time after the completion of cooking at the cost of extra processing time in the chilling process.
In such a situation, the waiting time is allowed to be decreased at the cost of extra processing time added to the processing time of a job on the second machine; this also benefits environmental sustainability because less food is discarded.
The organization of the remainder of this paper is as follows. In Section 2, there is a description of the problem and its complexity. A 0-1 mixed integer programming formulation is given in Section 3 and the heuristic algorithm is presented in Section 4. Thereafter, computational experiments are reported in Section 5. Finally, in Section 6, we give the conclusions.

Problem Description and Complexity
The proposed two-machine flowshop makespan scheduling problem with processing times linearly dependent on job waiting times is described as follows.
First, some notation is introduced in the following; additional notation will be given when needed throughout the paper. For a given set of jobs J = {J_1, J_2, ..., J_n}, let a_i and b_i be the regular processing times of job J_i on the first machine M_1 and the second machine M_2, respectively. We assume that for each job J_i, when J_i is processed completely on machine M_1, a delay time d_i is required before its processing on machine M_2. However, in some situations, the actual waiting time w_i is allowed to be smaller than the delay time d_i at the cost of extra processing time added to the processing time b_i of job J_i on M_2. That is, if the actual waiting time w_i of J_i is smaller than d_i, then the processing time of J_i on machine M_2 is given by b_i + α_i(d_i − w_i), where α_i > 0 is a given constant for job J_i. However, if w_i ≥ d_i, then the processing time on machine M_2 is b_i. The objective is to find the optimal schedule minimizing the makespan.
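To make the trade-off concrete, the following sketch (our own illustration, not code from the paper) evaluates the makespan of a permutation schedule under the rule above. For a fixed job order, each job's completion on M_2 can be minimized greedily: when α_i ≤ 1, starting on M_2 as early as possible is never harmful, while when α_i > 1, waiting out the full delay d_i is cheaper. The two-job instance in the comment is hypothetical.

```python
from itertools import permutations

def makespan(seq, a, b, alpha, d):
    """Makespan of a permutation schedule (same job order on both machines).

    For each job, pick the M2 start time that minimizes its completion:
    if alpha <= 1, start as early as possible and pay alpha*(d - w) extra;
    if alpha > 1, wait out the full delay d and pay no extra time.
    """
    c1 = c2 = 0.0                       # running completion times on M1 and M2
    for i in seq:
        c1 += a[i]                      # job i finishes on M1 at time c1
        if alpha[i] <= 1:
            start = max(c2, c1)         # start ASAP; actual waiting w = start - c1
        else:
            start = max(c2, c1 + d[i])  # waiting the full delay is cheaper
        w = start - c1
        extra = alpha[i] * max(0.0, d[i] - w)
        c2 = start + b[i] + extra
    return c2

def optimal_makespan(a, b, alpha, d):
    """Exact optimum over all permutation schedules (small n only)."""
    n = len(a)
    return min(makespan(p, a, b, alpha, d) for p in permutations(range(n)))

# Hypothetical instance: a = (1, 2), b = (2, 1), alpha = (0.5, 0.5), d = (2, 0).
```

For that hypothetical instance, sequencing job 1 before job 2 gives makespan 5, which is also the optimum over both permutations.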
First, in the proposed problem, if α_i > 1, there is no benefit in reducing the actual waiting time w_i, and the proposed problem reduces to the problem studied in Yu et al. [18]. Hence, if α_i > 1, the proposed problem is NP-hard in the strong sense. Next, we show that the partition problem [26] reduces to the proposed problem if 0 < α_i = α ≤ 1, i = 1, ..., n. Consider the following well-known NP-complete problem.

Partition: Given positive integers s_1, s_2, ..., s_k, does there exist a subset N_1 ⊆ N = {1, ..., k} such that Σ_{i∈N_1} s_i = Σ_{i∈N−N_1} s_i?

For a given instance of Partition, s_1, s_2, ..., s_k, an instance of the proposed problem with jobs J_1, ..., J_{k+2} is constructed, where Σ_{i∈N} s_i = 2S. We will show that Partition has a solution if and only if the above instance has an optimal schedule with the minimum makespan C_max = 4S² + S + 3.

Lemma 1. For the above instance, it is sufficient to consider the schedules that have the same job processing sequence on both machines for the jobs J_i, i ∈ {1, ..., k + 1}.
Proof. If a schedule Π does not have the same job processing sequence on both machines for the jobs J_i, i ∈ {1, ..., k + 1}, then there are two cases: (1) There is a job J_i that directly precedes J_j on machine M_1 and follows J_j on machine M_2, i, j ∈ {1, ..., k + 1}, perhaps with intervening jobs (Figure 1a). We may interchange the order of J_i and J_j on machine M_1 without increasing the makespan. (2) There is a sequence (A_1, J_i, J_{k+2}, J_j, A_2) on machine M_1, where A_1 and A_2 are subsequences, and J_i follows J_j on machine M_2, perhaps with intervening jobs (Figure 1b). We may interchange the processing order of J_i and J_j to obtain the sequence (A_1, J_{k+2}, J_j, J_i, A_2) on machine M_1 without increasing the makespan (Figure 1c), because d_i = d_j = 0. This process of interchanging jobs may be repeated until a schedule Π′ is obtained with the same order on machine M_1 as that on machine M_2 for the jobs J_i, i ∈ {1, ..., k + 1}. Π′ is clearly not worse than Π. Therefore, Lemma 1 holds. □


Lemma 2. If J_{k+1} is not processed first, then the makespan of any schedule is greater than 4S² + S + 3.

Proof. For any given schedule, if J_i (i ∈ N) is processed first, then the makespan exceeds 4S² + S + 3. Similarly, if J_{k+2} is processed first, then the makespan exceeds 4S² + S + 3. □

Thus, we only consider the schedules in which J_{k+1} is processed first on both machines.

Lemma 3. If J_{k+2} is not processed second on machine M_1, then the makespan of any schedule is greater than 4S² + S + 3.
Proof. Let U = {i | J_i is processed between J_{k+1} and J_{k+2} on machine M_1}. If there are jobs J_i, i ∈ U, then the following two cases are considered: (1) If no job J_i, i ∈ N, creates an idle-time slot on machine M_2, that is, the processing of these jobs on machine M_2 is continuous, then 4S² + S + 3 is a lower bound on C_max; in this case, J_{k+2} creates an idle-time slot l_{k+2} > 0, and therefore C_max > 4S² + S + 3. (2) Similarly, if a job J_r, r ∈ N, creates an idle-time slot l_r > 0, then C_max > 4S² + S + 3. □

Thus, we only consider the schedules in which J_{k+2} is processed second on machine M_1.
If Partition has no solution, we will show that the makespan of any schedule for the above instance is greater than 4S² + S + 3. For a given schedule in which J_{k+1} is processed first on both machines and J_{k+2} is processed second on machine M_1, let E = {i | J_i is processed between J_{k+1} and J_{k+2} on machine M_2}. By the assumption that Partition has no solution, Σ_{i∈E} s_i − S = c ≠ 0, and we consider the following two cases:

(1) If c < 0 and l > 0 (Figure 3b), we have C_max > 4S² + S + 3. (2) If c > 0 (in fact c ≥ 1/2, because s_1, s_2, ..., s_k are positive integers), then, without loss of generality, let J_j, j ∈ E, be the last processed job in E (Figure 3c); there are two subcases: (i) If no job in E − {j} creates an idle-time slot, then job J_j creates an idle-time slot l_j > 0 on machine M_2 (Figure 3c), and therefore C_max > 4S² + S + 3. (ii) If a job J_r, r ∈ E − {j}, creates an idle-time slot l_r > 0, then C_max > 4S² + S + 3.
Thus, the makespan is greater than 4S² + S + 3 if Partition has no solution. It follows that Partition has a solution if and only if the optimal schedule of the above instance has the minimum makespan C_max = 4S² + S + 3. Therefore, the proposed problem is NP-hard when 0 < α_i = α ≤ 1. □

A 0-1 Mixed Integer Programming Formulation
According to an analysis similar to that in Yang and Chern [23], a 0-1 mixed integer programming formulation of the problem is developed as follows. First, we denote:

A_i = the starting time of job i on machine M_1;
B_i = the starting time of job i on machine M_2;
w_i = B_i − A_i − a_i, the actual waiting time of job i before its processing on machine M_2;
W_i = the reduction of the waiting time charged to machine M_2, so that the processing time of job i on M_2 is b_i + α_i W_i.

For each job i, it is clear that

B_i ≥ A_i + a_i. (1)

In addition, it is necessary to ensure that no two operations are processed simultaneously by the same machine. Suppose, for example, that job j precedes job i on machine 1; then it is necessary to have A_i ≥ A_j + a_j. On the other hand, if job i precedes job j on machine 1, then it is necessary to have A_j ≥ A_i + a_i. These inequalities are called disjunctive constraints, because one and only one of them must hold. In order to accommodate these constraints in the formulation, let y_ij1 = 1 if job i precedes job j on machine 1 and y_ij1 = 0 otherwise; the disjunctive constraints can then be rewritten as follows:

A_j ≥ A_i + a_i − M(1 − y_ij1), (2)
A_i ≥ A_j + a_j − M y_ij1, (3)

where M represents a sufficiently large positive number. In the same way, the disjunctive constraints of job i and job j processed on machine 2 can be expressed as follows:

B_j ≥ B_i + b_i + α_i W_i − M(1 − y_ij2), (4)
B_i ≥ B_j + b_j + α_j W_j − M y_ij2. (5)

We note that the waiting-time reduction cannot be negative and must cover any shortfall of the actual waiting time with respect to the delay time; hence, it is necessary to have W_i ≥ 0 and

W_i ≥ d_i − (B_i − A_i − a_i). (6)

For the makespan problem, it is necessary to have

C_max ≥ B_i + b_i + α_i W_i. (7)

Then, a disjunctive integer programming formulation of the proposed problem is to minimize C_max subject to constraints (1)-(7). The total number of type (1), (6), and (7) constraints is equal to 3n. The total number of type (2) and (3) constraints is equal to n(n − 1), and the total number of type (4) and (5) constraints is also equal to n(n − 1). Hence, the total number of constraints is n(2n + 1). There are 3n + 1 nonnegative variables, namely C_max, A_i, B_i, and W_i. We note that if y_ij1 is in the formulation, then y_ji1 need not be defined, since y_ji1 = 1 − y_ij1. Hence, there are n(n − 1)/2 0-1 integer variables y_ij1 and the same number of y_ij2. The total number of variables is thus n(n + 2) + 1.
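As a quick arithmetic check of these counts (an illustration of the tally, not part of the formulation itself), the closed forms n(2n + 1) and n(n + 2) + 1 can be reproduced as follows:

```python
def model_size(n):
    """Tally the constraints and variables of the 0-1 MIP for n jobs."""
    pairs = n * (n - 1) // 2            # unordered job pairs
    # Types (1), (6), (7): one constraint of each kind per job.
    constraints = 3 * n
    # Types (2)-(3) and (4)-(5): two inequalities per pair on each machine.
    constraints += 2 * pairs * 2
    # Continuous: C_max plus A_i, B_i, W_i per job;
    # binary: y_ij1 and y_ij2 per unordered pair.
    variables = (3 * n + 1) + 2 * pairs
    return constraints, variables
```

For every n, this agrees with the totals n(2n + 1) and n(n + 2) + 1 stated in the text.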

Heuristic Algorithm and Its Worst-Case Performance
As a reducible waiting time is considered in the proposed problem, the problem is similar to the two-machine flowshop scheduling problem with start and stop lags. Therefore, we first use Maggu and Das's algorithm [27] to determine a sequence π (Algorithm 1).
Algorithm 1. Maggu and Das's algorithm
Step 1. For each job J_i, compute a_i + d_i and b_i + d_i.
Step 2. Determine the job processing order in the following way:
2.1. Decompose set J into the two sets U = {J_i | a_i + d_i ≤ b_i + d_i} and V = J − U.
2.2. Arrange the members of set U in nondecreasing order of a_i + d_i, and arrange the members of set V in nonincreasing order of b_i + d_i.
2.3. The sequence π is the ordered set U followed by the ordered set V.
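Assuming the Johnson-style decomposition described in Algorithm 1, the ordering rule can be sketched as follows; the tuple encoding of jobs as (a, b, d) is our own convention, not the paper's.

```python
def maggu_das_sequence(jobs):
    """Order jobs by Maggu and Das's rule: Johnson's algorithm applied
    to the lag-augmented times (a_i + d_i, b_i + d_i).

    `jobs` is a list of (a, b, d) tuples; the result is a list of indices.
    """
    U = [i for i, (a, b, d) in enumerate(jobs) if a + d <= b + d]
    V = [i for i, (a, b, d) in enumerate(jobs) if a + d > b + d]
    U.sort(key=lambda i: jobs[i][0] + jobs[i][2])     # nondecreasing a_i + d_i
    V.sort(key=lambda i: -(jobs[i][1] + jobs[i][2]))  # nonincreasing b_i + d_i
    return U + V
```

Python's sort is stable, so jobs with equal keys keep their input order, which is one of several valid tie-breaking choices.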
Some solvable cases are described in the following. First, if min_{1≤i≤n}{α_i} ≥ 1, the proposed problem is the same as that in Dell'Amico [17]. Therefore, an optimal schedule is given by Maggu and Das's algorithm.

Theorem 2 ([23]). In the problem, the case considered here is with a_i* + α_i* d_i* = min_{1≤i≤n}{a_i + α_i d_i}. If max_{i≠i*,j}{a_i + d_i} ≤ min_{1≤i≤n}{b_i} and a_j + d_j ≤ b_i* + α_i* d_i*, then (J_i*, J_j, B) is an optimal schedule, where B is an arbitrary subsequence of the jobs without J_i* and J_j.
Proof. Please see the proof in Appendix A.
Theorem 3. In the problem, if b_i* + α_i* d_i* = min_{1≤i≤n}{b_i + α_i d_i} and max_{i≠i*}{b_i + α_i d_i} ≤ min_{1≤i≤n}{a_i}, then (B, J_i*) is an optimal schedule, where B is an arbitrary subsequence of the jobs without J_i*, and we schedule the jobs in a no-wait manner as shown in Figure 4.

Proof. It is clear that a lower bound on the makespan is Σ_{i=1}^{n} a_i + min_{1≤i≤n}{b_i + α_i d_i}. If b_i* + α_i* d_i* = min_{1≤i≤n}{b_i + α_i d_i}, then the makespan of (B, J_i*) scheduled in a no-wait manner, as shown in Figure 4, is equal to the lower bound Σ_{i=1}^{n} a_i + b_i* + α_i* d_i*. Hence, (B, J_i*) is an optimal schedule. □

In the following, we propose a heuristic algorithm for the problem. The heuristic algorithm is presented for the problem with a_i ≥ 0, b_i ≥ 0, α_i > 0, and d_i > 0 for all of the jobs. In this problem, if α_i ≥ 1, then it is useless to reduce the waiting time. However, if 0 < α_i < 1, then it is useful to reduce the waiting time as much as possible. A stepwise description of the algorithm is given as follows (Algorithm 2):

Algorithm 2. Heuristic algorithm
Step 0. (Initialization) Check the conditions stated in Theorem 2 and Theorem 3. If any one of the conditions holds, the optimal schedule is obtained. Otherwise, generate a heuristic solution in the following steps. Determine a sequence π by using Maggu and Das's algorithm. Set k = 1 and C_[0],1 = C_[0],2 = 0.
Step 1. (Determining the reduced period of waiting time and the additional time on machine M_2 for each job) Set C_[k],1 = C_[k−1],1 + a_[k]. In the case of Figure 5a, go to Step 1.1; in the case of Figure 5b, go to Step 1.2; in the case of Figure 5c, go to Step 1.3.

Step 1.1. Go to Step 2.
Step 1.2. Go to Step 2.
Step 1.3. Go to Step 2.
Step 2. Set k = k + 1. If k ≤ n, go to Step 1. Otherwise, stop.
In the heuristic algorithm, Step 0 first determines a sequence π by using Maggu and Das's algorithm. Step 1 then adjusts the reduced period of waiting time and the additional time on machine M_2 for each job in Steps 1.1, 1.2, and 1.3. In the following, an example of five jobs is given to illustrate the heuristic algorithm.

Example. There are five jobs, J_1, J_2, J_3, J_4, and J_5, to be processed on machine M_1 and machine M_2. The processing times of these jobs on machine M_1 are a_1 = 1, a_2 = 3, a_3 = 2, a_4 = 3, and a_5 = 2, respectively. The processing times on machine M_2 are b_1 = 5, b_2 = 4, b_3 = 1, b_4 = 2, and b_5 = 3, respectively.
According to the above heuristic algorithm, we obtain that the makespan of these five jobs is 16.6. Please see the details in Appendix B.
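A simplified end-to-end sketch of the heuristic can be written as follows. It reflects our own reading that Steps 1.1-1.3 amount to reducing each job's waiting time as much as possible whenever α_[k] < 1 (the exact step details accompany Figure 5), and the (a, b, α, d) tuple encoding is hypothetical.

```python
def heuristic_makespan(jobs):
    """Sketch of the heuristic: order jobs by Maggu and Das's rule, then
    time them greedily (start ASAP on M2 when alpha < 1, otherwise wait
    out the full delay d).

    `jobs` is a list of (a, b, alpha, d) tuples.
    """
    # Maggu and Das's ordering on the lag-augmented times.
    U = sorted((i for i, (a, b, al, d) in enumerate(jobs) if a + d <= b + d),
               key=lambda i: jobs[i][0] + jobs[i][3])
    V = sorted((i for i, (a, b, al, d) in enumerate(jobs) if a + d > b + d),
               key=lambda i: -(jobs[i][1] + jobs[i][3]))
    c1 = c2 = 0.0
    for i in U + V:
        a, b, alpha, d = jobs[i]
        c1 += a                                          # M1 completion
        start = max(c2, c1) if alpha < 1 else max(c2, c1 + d)
        extra = alpha * max(0.0, d - (start - c1))       # compression penalty
        c2 = start + b + extra
    return c2
```

On the hypothetical two-job instance [(1, 2, 0.5, 2), (2, 1, 0.5, 0)], the sketch returns a makespan of 5.0.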

Lower Bound
First, we assume that processing on each machine may be continuous. Let D_1 = {i | α_i ≥ 1, i ∈ N} and D_2 = N − D_1. We can see that if α_i ≥ 1, then it is useless to reduce the waiting time; however, if 0 < α_i < 1, then it is useful to reduce the waiting time as much as possible. Because all the jobs have to be processed on machines M_1 and M_2, if the delay times are short relative to the corresponding processing times, we have an immediate lower bound LB1. On the other hand, if one of the delay times is long enough relative to the corresponding processing time, we have a second lower bound LB2. Hence, a lower bound is calculated as C_low = max{LB1, LB2}. In the following, we will find the worst case of the heuristic algorithm.
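A pair of bounds in the spirit of LB1 and LB2 can be sketched as follows. This is our own reconstruction under the two observations above, not necessarily the paper's exact expressions; it uses the fact that the cheapest way for a job to cover its delay, min over w ≥ 0 of w + α·max(0, d − w), equals min(α, 1)·d.

```python
def lower_bound(jobs):
    """A valid lower bound on the makespan (a reconstruction, see text).

    `jobs` is a list of (a, b, alpha, d) tuples.  A job with alpha >= 1
    contributes its full delay d; one with alpha < 1 contributes alpha*d.
    """
    cover = [min(al, 1.0) * d for a, b, al, d in jobs]
    # LB1: some job finishes last on M1, no earlier than sum(a); it then
    # still needs its delay cover plus its own M2 processing time.
    lb1 = sum(a for a, b, al, d in jobs) + min(
        b + c for (a, b, al, d), c in zip(jobs, cover))
    # LB2: each job alone needs a_i, its delay cover, and b_i.
    lb2 = max(a + c + b for (a, b, al, d), c in zip(jobs, cover))
    return max(lb1, lb2)
```

Both quantities bound any schedule from below, so their maximum is a safe denominator for the error statistic used later.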
Case 3. α_[c] ≥ 1 (see Figure 6c).

Theorem 4. Let π* be an optimal schedule and let π be the Maggu and Das's schedule used by the heuristic algorithm. Then C_max(π) ≤ 2C_max(π*) in the situations considered above.
Proof. The proof is similar to that of Theorem 4; thus, we omit it here. □

Computational Experiments
Although we have found the worst case of the heuristic algorithm under certain situations (Case 1 and Case 3), the upper bound for Case 2 is still unknown. Therefore, in order to evaluate the overall efficiency of the heuristic algorithm, we generate several groups of random problems as follows: (1) n is equal to 10, 20, ... In the computational experiment, a total of 720 test problems are generated. The computation times of the algorithms for all the test problems are within one second. For each of these random problems, the percentage error e = (C_h − C_low) × 100/C_low is computed, where C_h is the makespan of the heuristic solution and C_low is the lower bound on the makespan.
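The error statistic and its group averages can be computed as follows; the numbers in the usage note are hypothetical, not values from Table 1.

```python
def percentage_error(c_h, c_low):
    """Percentage error e = (C_h - C_low) * 100 / C_low."""
    return (c_h - c_low) * 100.0 / c_low

def mean_error(pairs):
    """Average percentage error over a group of (C_h, C_low) test problems."""
    return sum(percentage_error(ch, cl) for ch, cl in pairs) / len(pairs)
```

For instance, a heuristic makespan of 5 against a lower bound of 4 gives e = 25%.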
The results are given in Table 1. There are 20 test problems for each problem type. To evaluate the overall performance of the heuristic algorithm, we compute the mean of all the average percentage errors reported in Table 1. The mean value is 1.97%, which suggests that, on average, the heuristic algorithm finds schedules whose makespans are within 1.97% of the lower bound. From Theorems 4 and 5, we can see that the upper bound of the heuristic algorithm is 2C_low; therefore, the performance of the heuristic algorithm is quite satisfactory. From Figure 7, the larger the value of d_i, the greater the percentage error. Because the proposed heuristic algorithm is restricted to searching for a near-optimal permutation schedule, this may imply that the optimal schedule is likely to be a non-permutation schedule when the delay times of the jobs are larger. Therefore, the performance of the heuristic algorithm is better when d_i is smaller. We can also see that the average percentage error decreases as the job number n increases for different values of d_i. In particular, when the job number n is more than 100, the mean of the average percentage errors is less than 1%. In view of the NP-hardness of the problem, this result is quite encouraging, as it provides an efficient procedure for solving large-sized problems.

Conclusions
In this paper, we investigate a two-machine flowshop scheduling problem in which the processing times of the second operations are linearly dependent on the waiting times of the jobs. The problem is shown to be NP-hard in the strong sense. A 0-1 mixed integer programming formulation and an efficient heuristic algorithm are proposed, together with computational experiments. In addition, the worst-case behavior of the heuristic algorithm under certain situations is analyzed. The computational experiments show that the overall performance of the proposed algorithm is quite satisfactory, especially when the number of jobs is large. Some cases solvable in polynomial time are also provided.
There are some limitations in this study. For example, the proposed heuristic algorithm only searches for a near-optimal permutation schedule. In future research, developing a heuristic algorithm that considers both permutation and non-permutation schedules may improve the scheduling performance (makespan). In addition, other performance measures, such as total tardiness or maximum lateness, may be considered for the proposed problem. Moreover, in real situations, the cost of reducing the waiting time may be non-linear, or the compression of the waiting time may be limited. Future research may focus on these issues as well.

Acknowledgments:
The authors wish to thank the anonymous reviewers for their helpful comments on an earlier version of this paper.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A.
Theorem 2. In the problem, the case considered here is with a_i* + α_i* d_i* = min_{1≤i≤n}{a_i + α_i d_i}. In the following, we will show that if max_{i≠i*,j}{a_i + d_i} ≤ min_{1≤i≤n}{b_i} and a_j + d_j ≤ b_i* + α_i* d_i*, then the makespan of (J_i*, J_j, B), as shown in Figure A1, is equal to the lower bound a_i* + α_i* d_i* + Σ_{i=1}^{n} b_i. In such a case, only the delay time of the first job in the sequence is worth reducing. The constraint a_j + d_j ≤ b_i* + α_i* d_i* guarantees that the finishing time of J_j on machine M_1 plus the delay time of the job is no later than the finishing time of J_i* (the first job) on machine M_2. The constraint max_{i≠i*,j}{a_i + d_i} ≤ min_{1≤i≤n}{b_i} guarantees that the finishing times of the jobs (except for J_i* and J_j) on machine M_1 plus the delay times of these jobs are no later than the finishing times of their previous jobs on machine M_2. It also implies that a_i ≤ b_i for i = 1, 2, ..., n.
We proceed by induction. First, we can obtain that the finishing time of J_i* on machine M_2 is (a_i* + b_i* + α_i* d_i*). The finishing time of J_j (the second job) on machine M_1 plus the delay time of the job is (a_i* + a_j + d_j). If (a_j + d_j) ≤ (b_i* + α_i* d_i*), then (a_i* + a_j + d_j) ≤ (a_i* + b_i* + α_i* d_i*). This means that the finishing time of J_j on machine M_1 plus the delay time of the job is no later than the finishing time of J_i* on machine M_2. There is no room for the reduction of the actual waiting time of J_j. Therefore, job J_j can be processed on machine M_2 immediately after job J_i* is finished on machine M_2. Then, the completion time of J_j on machine M_2 is a_i* + α_i* d_i* + b_i* + b_j. Therefore, the case m = 2 (J_j = J_2) is true.
If the mth case is assumed to be true, that is, (a_i* + a_j + Σ_{k=3}^{m} a_k + d_m) ≤ (a_i* + b_i* + α_i* d_i* + b_j + Σ_{k=3}^{m−1} b_k), and the completion time of the mth job on machine M_2 is (a_i* + b_i* + α_i* d_i* + b_j + Σ_{k=3}^{m−1} b_k + b_m), then we show that the (m + 1)st case is also true.
The finishing time of the (m + 1)st job (say J_{m+1}) on machine M_1 plus the delay time of the job is A = (a_i* + a_j + Σ_{k=3}^{m} a_k + a_{m+1} + d_{m+1}).