Variable Neighborhood Search Algorithms for an Integrated Manufacturing and Batch Delivery Scheduling Minimizing Total Tardiness

This article addresses an integrated problem of one batching and two scheduling decisions between a manufacturing plant and multiple delivery sites. In this problem, two scheduling problems and one batching problem must be solved simultaneously. In the manufacturing plant, jobs ordered by multiple customers are first manufactured on one of the machines in the plant. Jobs bound for the same delivery place are then grouped and delivered to the corresponding customers by a set of capacity-limited delivery trucks. For the optimal solution, a mixed integer linear programming model is developed, and two variable neighborhood search algorithms employing different probabilistic schemes are proposed. We tested the proposed algorithms to compare their performance and conclude, based on the test results, that the variable neighborhood search algorithm with dynamic case selection probability finds better solutions in reasonable computing times than the variable neighborhood search algorithm with static case selection probability and a genetic algorithm.


Introduction
Over the past several decades, supply chain management (SCM) has emerged as an important topic, and many operational research problems in SCM have attracted attention. Even though many researchers achieved local operational efficiency by optimizing the logistics flow of each entity (i.e., raw-material providers, manufacturing plants, wholesale distributors, and customers) within the supply chain (SC), they have come to recognize that coordination between the entities is essential to achieve global logistics efficiency across the SC. For this reason, integrated scheduling of manufacturing and delivery has recently received great attention from researchers.
A review article by Chen [1] introduced several single-period optimization models for integrating inbound-production and outbound-truck scheduling in the SC, covering objective functions based on time, cost, and profit. For integrated production-delivery scheduling problems, Fan et al. [2] studied integrated single-machine production and batch delivery scheduling; a limitation of that study is that batch delivery to only one customer was considered. Cakici et al. [3,4] investigated a problem similar to that of Fan et al. [2], extending the integrated scheduling problem to parallel machines in the production plant and to multiple customers for batch delivery; however, they assumed that delivery was performed by a single truck. Agnetis et al. [5] studied the coordination of the batching and delivery problem, where product-part batches were

Problem Statement and Mixed Integer Linear Programming (MILP) Model
A number of orders are processed sequentially through manufacturing, batching, and delivery operations between a manufacturing plant and multiple delivery sites. Jobs from the orders placed by customers are first received and produced on one of the identical parallel machines in the manufacturing plant. Jobs to be shipped to the same customer are grouped into a batch. Each batch is loaded onto one of the available trucks, subject to the truck capacity limit, and delivered to the associated customer. Once a truck completes its current delivery, it returns directly to the manufacturing plant for the next delivery. In this article, three main decisions are to be determined: (1) machine scheduling, which assigns jobs to machines and sequences the jobs produced on each machine; (2) batching, which groups the jobs bound for the same delivery place within the delivery capacity; and (3) truck delivery scheduling, which assigns batches to trucks and sequences the batches delivered by each truck. Reducing the total tardiness with respect to each job's due time is important for improving customer service levels. Thus, the objective function is to minimize the total tardiness. For the mathematical formulation, the parameters and decision variables are defined in Appendix A.
In this model, the variables z^M_{iim} and z^T_{kkt} are specially introduced. The variable z^M_{iim} equals 1 if job i is assigned to the first position of the processing sequence of machine m at the manufacturing plant. Similarly, the variable z^T_{kkt} equals 1 if batch k is assigned to the first position of the delivery sequence of truck t. Since batching is one of the main decisions in the model, the initial set B is defined as the set of maximum available batches, and the number of maximum available batches equals the number of jobs (|B| = |J|). Some dummy batches in set B have no assigned jobs; these batches, for which y^C_{kn} = 0 for ∀n ∈ C, are ignored in the truck delivery scheduling. To illustrate the proposed problem, a simple example is given in Figure 1 and Table 1. In the example, nine jobs from three orders by the corresponding customers must be scheduled for manufacturing and delivery using two machines and two trucks. Jobs 1 and 2 were ordered by customer 1; jobs 3, 4, 5, and 6 by customer 2; and jobs 7, 8, and 9 by customer 3. Table 1 shows the processing time, due time, and volume of each job, and the transportation time to each customer. In Figure 1, machine 1 sequentially produces jobs 3, 7, 5, and 9, and machine 2 sequentially produces jobs 1, 4, 8, 6, and 2 at the plant. According to the truck capacity (V = 10) and the ordering customer, six batches are formed from the manufactured jobs. Once batching is completed, truck scheduling is performed on the batches: truck 1 sequentially delivers batches 3, 5, and 6 to customers 2, 3, and 3, and truck 2 sequentially delivers batches 1, 4, and 2 to customers 1, 2, and 1, respectively. From this schedule, the tardiness of each job is calculated in the last column of Table 1; hence, the total tardiness of the schedule is 150.
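The total tardiness in the example follows directly from each job's delivery completion time and due time. A minimal sketch, using hypothetical values rather than the actual Table 1 data (which is not reproduced in this excerpt):

```python
def total_tardiness(completion_times, due_times):
    """Sum of per-job tardiness: max(0, delivery completion time - due time)."""
    return sum(max(0, c - d) for c, d in zip(completion_times, due_times))

# Hypothetical delivery completion and due times for three jobs
# (illustrative only; not the values behind the total of 150 in Table 1).
completions = [120, 200, 340]
dues = [150, 180, 300]
print(total_tardiness(completions, dues))  # prints 60 (= 0 + 20 + 40)
```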
Using the above problem parameters and decision variables, the MILP model for the proposed problem is as follows:

y^C_{kn} ≥ R_{in} · y^B_{ik}, for ∀i ∈ J; ∀k ∈ B; ∀n ∈ C (12)

y^M_{im}, y^B_{ik}, y^T_{kt}, y^C_{kn} = 0 or 1, for ∀i ∈ J; ∀k ∈ B; ∀m ∈ M; ∀t ∈ T; ∀n ∈ C

z^M_{ijm}, z^T_{klt} = 0 or 1, for ∀i, j ∈ J; ∀m ∈ M; ∀k, l ∈ B; ∀t ∈ T

Constraint (2) determines the precedence relation between jobs produced on the same machine at the manufacturing plant and calculates the starting time of each job. Constraint (3) requires that each job be assigned to exactly one machine in the manufacturing plant. Constraints (4)-(6) together ensure that jobs assigned to the same machine appear exactly once in its sequence. Constraint (4) ensures that at most one job is assigned to the first position of the production sequence of each machine; one job must occupy the first position, with the remaining jobs succeeding it. Constraint (5) guarantees that a job assigned to a machine is immediately preceded by exactly one job, and Constraint (6) guarantees that a job assigned to a machine is immediately succeeded by at most one job; no successor is allowed for the job in the last position of a machine's sequence. Constraint (7) ensures that each job is assigned to exactly one batch. Constraint (8) ensures that the total volume of the jobs in a batch does not exceed the truck capacity.
Constraint (9) guarantees that all jobs in the same batch belong to, and are shipped to, the same customer. Constraint (10) requires that the shipping start time of each batch equal the latest completion time among the manufactured jobs in the batch. Constraints (11)-(12) enforce the relation between the jobs in a batch and its customer. Constraint (13) calculates the shipping time of each batch by determining the precedence relation between batches delivered by the same truck. Constraint (14) ensures that a truck delivers only one batch to its associated customer at a time. Constraints (15)-(17) together ensure that batches assigned to the same truck appear exactly once in its sequence. Constraint (15) ensures that at most one batch is assigned to the first position of the delivery sequence of each truck; one batch must occupy the first position, with the remaining batches succeeding it. Constraint (16) guarantees that a batch assigned to a truck is immediately preceded by exactly one batch, and Constraint (17) guarantees that a batch assigned to a truck is immediately succeeded by at most one batch.
The above MILP formulation guarantees an optimal solution. However, the size of the formulation makes it hard to find an optimal solution within a limited time, owing to the number of integer variables and constraints. In the derived formulation, the number of integer variables grows as |J||M| + |B||J| + |T||B| + |C||B| + |J|^2|M| + |B|^2|T|, and the number of constraints grows similarly. Thus, CPLEX failed to obtain an optimal solution before running out of memory on large-sized problems. If the problem is reduced to the machine scheduling decision alone, it is equivalent to a total-tardiness parallel-machine scheduling problem. If it is reduced to the batching decision alone, it is equivalent to the well-known bin-packing problem. If it is reduced to the truck scheduling decision alone, it is again equivalent to a total-tardiness parallel-machine scheduling problem. Each of these problems is known to be NP-hard [19]. Thus, it is necessary to propose an efficient heuristic to solve the problem within a short amount of time.

Variable Neighborhood Search (VNS) Algorithms
We develop VNS algorithms to solve the integrated scheduling problem efficiently. A VNS algorithm enriches the search by restarting a local search heuristic from neighborhood solutions randomly generated around an incumbent solution using a pre-determined set of neighborhood structures. Systematically changing the neighborhood within the local search is the key concept by which VNS algorithms improve solution quality. Thus, the performance of a VNS algorithm is mainly influenced by the local search heuristic and by neighborhood structures that fit the problem characteristics [20].
The basic VNS algorithm starts with a randomly generated initial solution and repeats the shaking and moving steps until the termination condition (maximum neighborhood number) is met. In the shaking step, a neighbor of the incumbent solution is randomly generated, and the local search heuristic is performed. After the shaking step, the incumbent solution is compared with the local optimal solution and updates the incumbent solution when the local optimal solution is better. Algorithm 1 shows the procedure of the basic VNS scheme given as follows:

Begin
    Find an initial solution S*. Define maximum neighborhood number k_max. Let iteration index k ← 1.
    While (k ≤ k_max)
        Shaking: find a random solution S ∈ N_k(S*).
        Perform a local search with S to find a local optimum S′.
        Move or not: If S′ is better than S* then S* ← S′ and k ← 1. Else k ← k + 1. End If
    End While
    Return S*.
End
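The basic VNS scheme above can be sketched in code as follows; `neighborhoods`, `local_search`, and `cost` are placeholder components for illustration, not names from the paper:

```python
import random

def basic_vns(initial, neighborhoods, local_search, cost, k_max):
    """Basic VNS scheme (Algorithm 1): shake in neighborhood k, run a local
    search, move and reset k on improvement, otherwise try the next k."""
    s_best = initial
    k = 1
    while k <= k_max:
        # Shaking: draw a random solution from the k-th neighborhood of s_best.
        s = random.choice(neighborhoods[k - 1](s_best))
        s_local = local_search(s)
        # Move or not: accept improvements and restart from the first neighborhood.
        if cost(s_local) < cost(s_best):
            s_best, k = s_local, 1
        else:
            k += 1
    return s_best
```

For instance, with a single neighborhood that decrements an integer and the identity local search, `basic_vns(5, [lambda x: [x - 1]], lambda x: x, abs, 1)` walks down to 0 and stops.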

Neighborhood Structure
A solution of the integrated scheduling problem is represented by three sequences in this article: a job sequence for each machine, a job sequence for each batch, and a batch sequence for each truck. The example solution of the integrated schedule in Figure 1 is represented as shown in Figure 2.
In order to simultaneously determine the machine schedule, batch assignment, and truck delivery schedule, a combination of the basic neighborhood operations for each decision is examined in each iteration of the main VNS scheme. For the combination, the neighborhood operations for each decision are grouped, and one operation is randomly selected from each group when generating a neighborhood. The grouped neighborhood structure is shown in Table 2.

Local Search Scheme
For the local search in the VNS algorithms, we used a search method based on sequence arrays (SMSA). To apply SMSA to the integrated scheduling problem, three sequence arrays are used: machine scheduling, batching, and truck scheduling arrays. Each sequence array is a single-dimensional string array of digits, and assignment rules are required to determine the manufacturing sequence on the machines, the batch construction, and the shipping sequence of the trucks. The digits in the machine, batching, and truck scheduling arrays give the job order in which the machine assignment rule, the batching rule, and the truck assignment rule are applied, respectively [21]. The decoding of the sequence arrays is carried out by these three rules. Joo and Kim [16] studied a similar problem (with a makespan objective) and compared several assignment rules for their GA: processing-time-based and completion-time-based assignment rules for the machine and truck sequence arrays, and minmax-based and rotation-based batching rules for the batch sequence array. They concluded that the completion-time-based assignment rule for the machine and truck sequence arrays and the rotation-based batching rule for the batch sequence array gave the best performance in terms of effectiveness and efficiency. Following their result, we use these three rules to decode the sequence arrays into a compound solution in our local search scheme. The procedures of the three rules are as follows:
• Machine assignment rule: Calculate the completion time of each machine by temporarily assigning the current job to the end of that machine's sequence. Then find the machine with the shortest completion time; the job is permanently assigned to that machine and placed at the end of its manufacturing sequence.
• Batching rule: Find the first available batch whose remaining capacity is at least the volume of the job and which ships to the same customer, and assign the job to that batch. If no available batch satisfies these conditions, create a new batch and assign the job to it.
• Truck assignment rule: Calculate the completion time of each truck by temporarily assigning the current batch to the end of that truck's sequence. Then find the truck with the shortest completion time; the batch is permanently assigned to that truck and placed at the end of its shipping sequence.
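The first two rules can be sketched as follows, under the simplifying assumption of identical parallel machines; the data layout and function names are illustrative, not from the paper. The truck assignment rule is analogous to the machine rule with batches in place of jobs.

```python
def assign_to_machines(job_seq, proc_time, num_machines):
    """Completion-time rule: append each job to the machine that would
    finish it earliest; return per-machine sequences and completion times."""
    seqs = [[] for _ in range(num_machines)]
    ready = [0] * num_machines          # current completion time of each machine
    finish = {}                         # job -> completion time
    for job in job_seq:
        m = min(range(num_machines), key=lambda i: ready[i] + proc_time[job])
        ready[m] += proc_time[job]
        seqs[m].append(job)
        finish[job] = ready[m]
    return seqs, finish

def first_fit_batches(job_seq, volume, customer, capacity):
    """Batching rule: put each job into the first open batch of the same
    customer with enough remaining capacity, else open a new batch."""
    batches = []                        # each: {"customer", "jobs", "load"}
    for job in job_seq:
        for b in batches:
            if b["customer"] == customer[job] and b["load"] + volume[job] <= capacity:
                b["jobs"].append(job)
                b["load"] += volume[job]
                break
        else:
            batches.append({"customer": customer[job], "jobs": [job], "load": volume[job]})
    return batches
```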
For the local search with SMSA, we classified seven modification cases according to which sequence arrays are selected for modification (see Table 3). Cases 1, 2, and 3 apply the modification to only one sequence array; cases 4, 5, and 6 apply it to two of the three sequence arrays; and case 7 applies it to all three sequence arrays. One of the seven cases is randomly chosen according to the case selection probability P_c in every iteration of the local search with SMSA.
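The random case choice according to P_c is a roulette-wheel selection over the cumulative probabilities. A sketch (the explicit draw argument `r` is an addition for reproducibility, not part of the paper):

```python
import random

def select_case(probs, r=None):
    """Return the first case index c (1-based) with r <= P_1 + ... + P_c."""
    r = random.random() if r is None else r
    cumulative = 0.0
    for c, p in enumerate(probs, start=1):
        cumulative += p
        if r <= cumulative:
            return c
    return len(probs)  # guard against floating-point round-off

# With seven equally likely cases, r = 0.5 falls into case 4.
print(select_case([1 / 7] * 7, r=0.5))  # prints 4
```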
In this article, we developed two procedures for the local search with SMSA. The difference between the two procedures is whether the case selection probabilities are updated while the sequence arrays are modified. The first local search uses a static, uniform case selection probability with the value 1/(number of modification cases). The second local search uses a dynamic case selection probability, which is increased when the case is selected and decreased by a deterioration rate when it is not. Three sequence operators are used to modify a sequence array in SMSA; for each operator, a front and a rear position are randomly selected in the original sequence array. The pseudo codes of the two local search procedures with SMSA are given in Algorithms 2 and 3 as follows:

Begin
    Define termination count t_max. Define number of modification cases C_max.
    Let the initial case selection probability P_c ← 1/C_max for every case c.
    Find initial sequence arrays C* encoded from the solution set S. Let current local optimum S′ ← S. Let iteration index t ← 1.
    While (t ≤ t_max)
        Modification of Sequence Arrays:
            Find a random number r ← uniform(0, 1). Let the case index c ← 1.
            While (c ≤ C_max)
                If r ≤ Σ_{j=1..c} P_j then selected case cs ← c; Break. End If
                c ← c + 1
            End While
            Modify the sequence arrays C* into new sequence arrays C according to case cs.
            Generate a corresponding solution set S decoded from the sequence arrays C.
        Move or not:
            If S is better than S′ then S′ ← S, C* ← C, and t ← 1. Else t ← t + 1. End If
    End While
    Return the local optimum S′.
End
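The modification step rearranges the chosen sequence arrays between two randomly selected front and rear positions. The three sequence operators are not named in this excerpt; swap, insertion, and reversal are typical choices, sketched here as an assumption:

```python
import random

def swap(seq, i, j):
    """Exchange the elements at the two chosen positions."""
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq, i, j):
    """Remove the element at the front position and reinsert it at the rear one."""
    s = seq[:]
    s.insert(j, s.pop(i))
    return s

def reversal(seq, i, j):
    """Reverse the subsequence between the two chosen positions."""
    s = seq[:]
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def random_modification(seq):
    """Pick a front and a rear position at random and apply one operator."""
    i, j = sorted(random.sample(range(len(seq)), 2))
    return random.choice([swap, insertion, reversal])(seq, i, j)
```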

Begin
    Define termination count t_max. Define number of modification cases C_max.
    Let the initial case selection probability P_c ← 1/C_max for every case c. Let the deterioration rate α, 0 < α < 1.
    Find initial sequence arrays C* encoded from the solution set S. Let current local optimum S′ ← S. Let iteration index t ← 1.
    While (t ≤ t_max)
        Modification of Sequence Arrays:
            Find a random number r ← uniform(0, 1). Let the case index c ← 1.
            While (c ≤ C_max)
                If r ≤ Σ_{j=1..c} P_j then selected case cs ← c; Break. End If
                c ← c + 1
            End While
            Modify the sequence arrays C* into new sequence arrays C according to case cs.
            Generate a corresponding solution set S decoded from the sequence arrays C.
        Update of Case Selection Probabilities:
            Increase P_cs for the selected case cs and decrease P_c by the deterioration rate α for every non-selected case c ≠ cs.
        Move or not:
            If S is better than S′ then S′ ← S, C* ← C, and t ← 1. Else t ← t + 1. End If
    End While
    Return the local optimum S′.
End
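The exact update formula for the dynamic case selection probabilities is not reproduced in this excerpt; one plausible scheme matching the description (boost the selected case, decay the others by the deterioration rate α, renormalize) is sketched below. The `reward` parameter is a hypothetical choice, not from the paper.

```python
def update_probabilities(probs, selected, alpha, reward=0.1):
    """Hypothetical dynamic update: non-selected cases shrink by factor
    alpha, the selected case gains a fixed reward, then renormalize."""
    new = [p * alpha for p in probs]           # non-selected cases shrink
    new[selected] = probs[selected] + reward   # selected case grows
    total = sum(new)
    return [p / total for p in new]
```

Repeated selection of the same case then concentrates probability mass on it, which is the adaptive behavior the text describes.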

Encoding and Decoding of a Solution
SMSA, used for local search in the VNS algorithms, operates on three sequence arrays, each a single-dimensional string array. Thus, encoding and decoding procedures between the compound (integrated) solution and the sequence arrays are required for SMSA. The procedures use the three assignment rules applied in the local search in Section 3.2. Figure 3 describes the encoding and decoding procedures with the example integrated schedule presented in Figure 2 and Table 1.


Computational Testing Experiments
In this section, we conduct extensive computational experiments to assess the performance of the two VNS algorithms: the VNS algorithm using local search with static case selection probability (VNS_S) and the VNS algorithm using local search with dynamic case selection probability (VNS_D).
Since the problem complexity increases with the number of jobs (J), we divided the test problems into two groups by J. The first group compares the performance of the VNS algorithms against the optimal solution; its test problems are randomly generated with J between 5 and 10. For the optimal solution, ILOG CPLEX 12.7 was used to solve the MILP formulation in Section 2. A run was terminated if an optimal solution was not found within an imposed time limit of 7200 s. The test problems in the second group are randomly generated with J greater than 10; this group compares the relative performance of the solutions obtained by the VNS algorithms. The VNS algorithms were coded in C#, and all experiments were run on a PC with a 1.86 GHz Intel Core 2 CPU and 2 GB RAM.
The problem complexity is affected by five problem parameters: the number of jobs (J), the number of machines (M), the number of trucks (T), the number of customers (C), and the tardiness factor (δ). Test problems are randomly generated according to these five parameters. The tardiness factor δ (0 ≤ δ ≤ 1) is used to generate the due times of the jobs; the generated due times become more scattered as δ increases. Three values, 0.1, 0.3, and 0.5, are considered for the tardiness factor.
In the small-sized group, eight test problems were randomly generated for each tardiness factor. For each predetermined tardiness factor value, the four problem parameters, the numbers of jobs, machines, trucks, and customers, were randomly selected from U[5,10], U[2,6], U[2,4], and U[3,4], respectively. In the large-sized group, a total of 24 test problems with 20, 40, and 60 jobs were randomly generated for each tardiness factor. For each predetermined job size and tardiness factor value, the three problem parameters, the numbers of machines, trucks, and customers, were randomly selected from U[3,6], U[2,4], and U[3,6], respectively. The processing times and transportation times were randomly selected from U[60,120] and U[60,240]. The job volumes were randomly selected from U[5,10], with the batch capacity fixed at 20. The relative interval ratios for the truck schedule were predetermined as {0.05, 0.1, 0.2, 0.4, 0.7, 1.0}.
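The large-group instance generation described above can be sketched as follows. Function and field names are illustrative; the due-time rule involving δ is described only qualitatively in the text, so it is omitted here.

```python
import random

def generate_large_instance(num_jobs, seed=None):
    """Draw the parameters of one large-group test problem from the
    uniform ranges stated in the text (batch capacity fixed at 20)."""
    rng = random.Random(seed)
    num_customers = rng.randint(3, 6)
    return {
        "machines": rng.randint(3, 6),
        "trucks": rng.randint(2, 4),
        "customers": num_customers,
        "proc_times": [rng.randint(60, 120) for _ in range(num_jobs)],
        "transport_times": [rng.randint(60, 240) for _ in range(num_customers)],
        "volumes": [rng.randint(5, 10) for _ in range(num_jobs)],
        "capacity": 20,
    }
```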
The performance of the VNS algorithms was compared using two relative performance measures, the relative deviation index (RDI) and the mean absolute deviation (MAD), expressed by Equations (22) and (23), respectively.
where OBJ best and OBJ worst are the objective function values of the best feasible solution obtained by one of the algorithms (or the optimal solution by CPLEX) and the worst feasible solution obtained by one of the algorithms, respectively. OBJ sol is an objective function value obtained by any VNS algorithm.
where OBJ_mean is the mean of the replicated objective function values of a VNS algorithm. All problems were tested with 30 replications.
The performance results for the small-sized group are presented in Table 4, which reports the objective function values of the optimal solutions found by CPLEX and compares the average RDI and MAD of the VNS algorithms. For the small-sized group, the low RDI and MAD values indicate that both VNS algorithms perform well. Table 4 also reports the computational (CPU) times of the instances: the CPU time of CPLEX increases exponentially with the number of jobs, and CPLEX could not find an optimal solution within the time limit for problems with more than 7-8 jobs.
Table 5 summarizes the performance results for the large-sized group. The average RDI and MAD of VNS_D are lower than those of VNS_S, indicating that the local search with dynamic case selection probability significantly improves performance compared to the local search with static case selection probability. The CPU times of both VNS algorithms are short enough to obtain the best solution. The observed differences between the two local search schemes become more statistically significant as the tardiness factor decreases, indicating that VNS_D performs better as the due dates become tighter.
To compare the VNS algorithms with another meta-heuristic, we tested a GA-based algorithm on the same problem sets; its results are also presented in Table 5. The tested GA is a single-stage algorithm with independent dispatching rules, using a chromosome of two string arrays for machine and truck scheduling and one string array for job batching. The proposed VNS algorithms outperform the GA for every job size and tardiness factor.
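The two measures can be computed as in the sketch below, which uses the standard forms consistent with the definitions of OBJ_best, OBJ_worst, and OBJ_mean given above; the paper's Equations (22) and (23) themselves are not reproduced in this excerpt.

```python
def rdi(obj_sol, obj_best, obj_worst):
    """Relative deviation index: 0 when the solution equals the best
    known value, 1 when it equals the worst."""
    if obj_worst == obj_best:
        return 0.0
    return (obj_sol - obj_best) / (obj_worst - obj_best)

def mad(objective_values):
    """Mean absolute deviation of replicated objective values around
    their mean."""
    mean = sum(objective_values) / len(objective_values)
    return sum(abs(x - mean) for x in objective_values) / len(objective_values)
```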
The results indicate that VNS algorithms improve the performance by broadly exploring the solution space compared with GA.

Conclusions
This article considered an integrated problem of one batching and two scheduling decisions between a manufacturing plant and multiple delivery sites. Jobs ordered by multiple customers are first manufactured on one of the machines in the plant. In this problem, two scheduling problems (machine and delivery truck scheduling) and one batching problem must be solved simultaneously to minimize the total tardiness. To find the optimal solution, a MILP model was developed. CPLEX found optimal solutions for the small-sized group, but it was inefficient and impractical for the problems of the large-sized group. Thus, two VNS algorithms with different local search schemes were proposed. Based on the test results, we conclude that the VNS algorithm with dynamic case selection probability finds better solutions in reasonable CPU times than the VNS algorithm with static case selection probability and the GA.

Conflicts of Interest:
The authors declare no conflict of interest.