Article

Uniform Parallel Machine Scheduling with Dedicated Machines, Job Splitting and Setup Resources

1 School of Business, Konkuk University, Seoul 05029, Korea
2 College of Global Business, Korea University, Sejong 30019, Korea
* Author to whom correspondence should be addressed.
Sustainability 2019, 11(24), 7137; https://doi.org/10.3390/su11247137
Submission received: 30 October 2019 / Revised: 30 November 2019 / Accepted: 9 December 2019 / Published: 13 December 2019
(This article belongs to the Section Sustainable Engineering and Science)

Abstract:
We examine a uniform parallel machine scheduling problem with dedicated machines, job splitting, and limited setup resources for makespan minimization. In this problem, machines have different processing speeds, and each job can only be processed on several designated machines. A job can be split into multiple sections that can be processed on multiple machines simultaneously. Sequence-independent setup times are assumed, and setup operations between jobs require setup operators, which are limited in number. For this problem, we first develop a mathematical optimization model, and for large-sized problems we propose a constructive heuristic algorithm. Finally, experiments with various scenarios show that the proposed algorithm is efficient and provides good solutions.

1. Introduction

In modern manufacturing industries, increasing productivity and minimizing production costs are essential for a sustainable business. Thus, scheduling is becoming more important for minimizing production costs by reducing production completion times and allocating resources efficiently. Furthermore, the manufacturing industry is directly relevant to energy sustainability. For instance, energy consumption in manufacturing accounts for a huge proportion of total energy consumption worldwide; in China, about 50% of energy is consumed by the manufacturing industry [1]. From this perspective, scheduling has been considered a viable and effective way to improve both productivity and energy efficiency, and hence there have been many recent studies applying scheduling at the interface with sustainability in various areas (see, e.g., [2,3,4,5]).
In this study, we consider a uniform parallel machine scheduling problem with dedicated machines, job splitting properties, and limited setup resources, which can be easily observed in practice. On uniform parallel machines, all jobs can be processed on the machines in parallel, but the machines have different speeds. Dedicated machines enforce that a certain job can only be processed on a set of designated machines. Jobs can be split into multiple sections that can be processed on several machines simultaneously. When a job type changes on a machine, a setup must be performed by one of the operators, who are generally insufficient in number to set up all of the machines at the same time. The setup time for a job is sequence-independent, meaning that it depends only on the job to be processed, whereas sequence-dependent setup times are determined by the combination of the preceding and next jobs. The objective of our problem is to minimize the maximum of the machine completion times, typically denoted $C_{max}$.
This problem is motivated by real systems that manufacture fan or equipment filter units (FFUs or EFUs), automotive pistons, bolts and nuts for automotive engines, textiles, printed circuit boards, and network computing [6,7]. FFUs are used to supply purified air to clean rooms, laboratories, and medical facilities by removing harmful airborne particles from recirculating air. The factory in Korea that we consider produces many different types of FFUs and EFUs for semiconductor, automotive, and food industry companies such as Samsung Electronics, SK Hynix, LG, and Texas Instruments. The factory mainly assembles outsourced components and tests the assembled products before delivering them to customers. There are five assembly lines; two of them are automatic and the others are manual, where the automatic lines are faster than the manual ones. FFUs or EFUs can be assembled on one of their dedicated machines, and setups are performed when job types change; the setups are sequence-independent and require a setup operator. In addition, since many units are made for each type of FFU or EFU, jobs can be split into arbitrary sections.
Figure 1 illustrates a simplified example in FFU manufacturing. In the example, there are three job types: many FFUs of Types 1-3 (see the left part of Figure 1). Jobs of each job type can be split into multiple job sections, which are assigned to machines. In this example, jobs of Types 1 and 2 can be processed on Machines 1 and 2, and jobs of Type 3 can only be processed on Machine 3 due to the dedicated machine constraint. On each machine, when the job type changes, a setup is required, and it is performed by a human operator, as shown in Figure 1. In the example, there is only one human operator for setups, while up to two setups can be required at the same time. In that case, one of the two setups must be delayed, which makes the scheduling problem complicated.
Another factory we introduce produces automotive pistons for Hyundai Motor Group, BMW, GM, and others. Automotive pistons are first cast from aluminum alloys on uniform parallel machines and then go through machining and assembly process steps. The multiple parallel machines for the casting processes are classified into automatic, semi-automatic, and manual machines, and each of them can handle a set of certain piston types because the machines are equipped with different tools and some pistons have specific process requirements. In addition, high quality pistons must be tracked at the unit level so that all process parameters for each unit are checked. In this case, pistons for those customers must be processed on the machines that have the tracking system, which introduces dedicated-machine constraints. A piston type consists of thousands of units, so they can be divided into several lots, and multiple piston types are produced at the same time. Setups when the product type changes are performed by an operator, and the setup times do not depend on the preceding product type.
As introduced, the problem considered in this paper has many real applications and is constrained by various scheduling requirements such as parallel machines with different speeds, dedicated machines, and limited setup operators. To the best of our knowledge, no study has considered this problem. Therefore, this paper is intended to contribute to this end by presenting a mathematical optimization model and developing efficient heuristic algorithms.

2. Literature Review

In this section, we review papers related to our problem. Since a uniform parallel machine scheduling problem with dedicated machines, job splitting, and setup resources is considered, relevant literature can be classified into three groups: studies considering parallel machines, job splitting, and resources.
First, there have been numerous papers on scheduling identical parallel machines. Many early studies analyzed list schedules and the longest processing time (LPT) first rule with the makespan minimization measure on either identical or uniform parallel machines [8,9,10,11,12,13]. The authors of [8,9] analyzed the worst-case bound of an arbitrary list schedule and of the LPT rule on identical parallel machines. Garey and Graham [10] provided bounds for list schedules on identical parallel machines with a set of resources, where each job requires specified units of each resource at all times during its execution. Gonzalez et al. [11] analyzed the performance of LPT schedules on uniform parallel machines, and Friesen [12] provided tighter bounds for the same problem. Cho and Sahni [13] analyzed the worst-case bound of list schedules for uniform parallel machine scheduling problems. All of these works assumed nonpreemptive schedules for minimizing the makespan, and they provide insights for developing the priority rules used in the heuristic algorithms presented in this paper.
For uniform parallel machine scheduling, Dessouky et al. [14] developed many efficient algorithms for scheduling criteria that are nondecreasing in the job completion times, such as makespan, total completion time, maximum lateness, and total tardiness. Dessouky [15] proposed a branch and bound algorithm for uniform parallel machine scheduling with ready times to minimize the maximum lateness. Balakrishnan et al. [16] examined uniform parallel machine scheduling problems with sequence-dependent setup times to minimize the sum of earliness and tardiness costs; they provided a mixed integer formulation with substantially fewer 0-1 variables for small-sized problems. Lee et al. [17] developed two heuristic algorithms for uniform parallel machine scheduling to derive an optimal assignment of operators to machines with learning effects in order to minimize the makespan. Elvikis et al. [18] considered a uniform parallel machine scheduling problem with two jobs that consist of multiple operations, and derived Pareto optima with makespan and cost functions. Elvikis and T'kindt [19] investigated the problem with multiple objectives related to the job completion times by developing a minimal complete Pareto set enumeration algorithm. Zhou et al. [20] considered a batch processing problem on uniform parallel machines with arbitrary job sizes to minimize the makespan; they developed a mixed integer programming model and an effective differential evolution-based hybrid algorithm. Jiang et al. [21] developed a hybrid algorithm combining particle swarm optimization and a genetic algorithm for scheduling uniform parallel machines with batch transportation. Zeng et al. [22] examined a bi-objective scheduling problem on uniform parallel machines considering electricity costs under time-dependent or time-of-use electricity tariffs. None of these studies is directly applicable to our problem since we consider more complicated problem settings; however, some of the ideas and approaches used in this paper were inspired by these previous works.
One of the important features of our problem is job splitting, and some papers have considered the job splitting property on parallel machines. Several polynomial time algorithms for scheduling identical, uniform, and unrelated parallel machines with job splitting were developed to minimize the maximum weighted tardiness [23]. Yalaoui and Chu [6] proposed an efficient heuristic algorithm for parallel machine scheduling with job splitting and sequence-dependent setup times and evaluated its performance; they transformed the problem into a traveling salesman problem and solved it with Little's method. The problem was further analyzed with a linear programming approach [24]. Kim et al. [7] developed a two-phase heuristic algorithm for parallel machine scheduling with job splitting, in which an initial sequence is constructed by an existing heuristic method for parallel machine scheduling in the first phase, and the jobs are then rescheduled considering job splitting in the second phase. Shim and Kim [25] further analyzed the same problem by developing a branch and bound algorithm with several dominance properties. Park et al. [26] presented heuristic algorithms for minimizing the total tardiness of jobs on parallel machines with job splitting. Wang et al. [27] examined a parallel machine scheduling problem with job splitting and learning under the total completion time measure, using a branch and bound algorithm for small-sized problems and heuristics for large-sized ones. Even though most of these previous studies on job splitting assume simpler manufacturing environments than ours, they provide ideas for the design of the heuristics developed in this paper.
Lastly, many papers have considered resources in scheduling parallel machines, which is an important requirement of our problem. Kellerer and Strusevich [28] examined a parallel dedicated machine scheduling problem with a single resource for makespan minimization; they analyzed the complexity of different variants of the problem and developed heuristic algorithms employing the group technology approach. Kellerer and Strusevich [29] further considered the problem with multiple resources and developed polynomial-time algorithms for special cases. Yeh et al. [30] developed several metaheuristic algorithms for uniform parallel machine scheduling given that resource consumption cannot exceed a certain level. These studies assumed that resources are required to process jobs on machines. Regarding setup resource constraints, Hall et al. [31] dealt with nonpreemptive scheduling of a given set of jobs on several identical parallel machines with a common server; in detail, they considered many classical scheduling objectives in this environment and proposed polynomial or pseudo-polynomial time algorithms for each problem considered. Huang et al. [32] addressed a parallel dedicated machine scheduling problem with sequence-dependent setup times and a single server. Several papers have examined two-parallel machine scheduling problems with a server and proposed efficient heuristic algorithms [33,34,35]. Cheng et al. [36] considered a common server and job preemption in parallel machine scheduling with the makespan measure, provided a pseudo-polynomial time algorithm for the two-machine case, and analyzed the performance ratios of some natural heuristic algorithms. Hamzadayi and Yildiz [37] considered the same problem with sequence-dependent setup times and derived metaheuristic methods to minimize the makespan. For multiple servers, Ou et al. [38] examined a parallel machine scheduling problem in which servers perform unloading tasks and proposed branch and bound and heuristic algorithms. Unlike the problem in this paper, most studies considering resources in parallel machine scheduling assume a single resource (or server) or do not take into account the other scheduling requirements considered here, such as job splitting and dedicated machines. Therefore, the methods developed in the previous papers cannot be applied to our problem. For interested readers, reviews on parallel machine scheduling can be found in [39,40,41].
In summary, even though there have been numerous papers on parallel machine scheduling, no study has addressed our problem: uniform parallel machine scheduling with dedicated machines, job splitting, and setup resource constraints. As explained above, this problem is motivated by real-world systems, and there are many other applications in which the proposed approach can be useful. We first describe the problem in detail and develop a mathematical programming model for the first time. We then provide four lower bounds and propose efficient heuristic algorithms, whose performance is evaluated by comparison with the lower bounds. An application of our algorithm to a real problem from industry is also introduced.

3. Problem Description & Analysis

In this study, we consider a uniform parallel machine scheduling problem with dedicated machines, job splitting properties, and limited setup resources, which can be easily observed in practice. In uniform parallel machine scheduling, n jobs are processed on m machines in parallel, but the machines can have different speeds. The processing speed of machine i, $i \in M$, $M = \{1, 2, \ldots, m\}$, is denoted by $v_i$ and can be regarded as a relative speed; for example, if $v_1 = 2 v_2$, Machine 1 is twice as fast as Machine 2. Job j, $j \in N$, $N = \{1, 2, \ldots, n\}$, has processing time $p_j$ on a machine of speed 1; in general, the processing time of job j on machine i is $p_j / v_i$. When all machines have the same speed, e.g., $v_i = 1$ for all i, the environment reduces to identical parallel machine scheduling, which is a special case of our problem. Dedicated machines enforce that job j can only be processed on a set of designated machines, denoted $M_j$, and a job can be split into multiple sections that can be processed on several machines simultaneously. When a job type changes on a machine, a setup must be performed by one of $r$ ($< m$) operators, who are generally insufficient to set up all m machines at the same time (see Figure 1). The setup time of job j, $s_j$, is sequence-independent; that is, it is not affected by the preceding job and depends only on the job to be processed. The objective is to minimize the maximum of the machine completion times, $C_{max} = \max_i C_i$, where $C_i$ denotes the completion time of machine i.
The problem considered in this paper is easily proven to be NP-hard because the parallel machine scheduling problem with two machines, a special case of our problem with $v_i = 1$ for all $i \in M$, $r = m$, $s_j = 0$ for all $j \in N$, and $M_j = M$ for all $j \in N$, is NP-hard [42]. We assume that jobs processed on machines for the first time do not require setup operations, because once all jobs are completed for a given period, mostly one or several weeks, preventive maintenance is performed and the machines are then set up for the next desired states [6]. Setup times are not affected by the different machine speeds since setups are performed by the operators. The lengths of the job sections processed on each machine and their sequence must be determined by considering setup resources and dedicated machines with different speeds in order to minimize the makespan. For a better understanding, we provide the following example.
Example 1.
Suppose that there are three machines and seven jobs, where $(p_j, s_j)$ for $1 \le j \le 7$ are given as $(14, 5)$, $(12, 4)$, $(11, 4)$, $(15, 3)$, $(7, 3)$, $(10, 2)$, $(5, 2)$. Figure 2 shows three Gantt charts for production schedules on identical or uniform parallel machines with one setup operator. In the Gantt charts, numbers in white bars indicate job indices, and black bars represent setup operations. Numbers at the bottom denote time stamps; for example, in Figure 2a, Machine 2 finishes at 31 while the completion time of Machines 1 and 3 is 29. Assume that jobs are processed in their index order, the jobs processed first on each machine do not require setups, and there is no dedicated machine constraint. When the jobs are processed on identical parallel machines, the makespan is 31, as illustrated in Figure 2a. However, when the machines have different speeds, i.e., $v_1$, $v_2$, and $v_3$ are 0.9, 1, and 1.1, respectively, the schedule becomes the Gantt chart in Figure 2b, and the makespan is 30. In this case, the schedule can be improved by splitting Job 7 into two sections and assigning one section with a processing time of 0.73 to Machine 3. Since $v_3$ is 1.1, it takes 0.66 for the section to be completed on Machine 3, and the makespan becomes 29.27. If Job 7 can only be processed on Machines 1 and 2, i.e., $M_7 = \{1, 2\}$, splitting the job cannot improve the schedule. Hence, special considerations are required for scheduling uniform parallel machines with dedicated machines, job splitting, and setup resource constraints.
We now propose, for the first time, a mathematical programming model for uniform parallel machine scheduling with dedicated machines, job splitting, and setup resources. Table 1 lists the symbols used in the model and their descriptions. In particular, H indicates a given scheduling horizon, which can be determined by $H = \sum_{j \in N} p_j / \min_{i \in M} v_i + \sum_{j \in N} s_j$.
Minimize $C_{max}$ (1)

subject to:

$\sum_{t=0}^{H} x_{ijt} \le 1, \quad j \in N, \ i \in M_j$ (2)

$\sum_{i \in M_j} \sum_{t=0}^{H} x_{ijt} \ge 1, \quad j = 1, 2, \ldots, n$ (3)

$\sum_{t=0}^{H} x_{ijt} = y_{ij}, \quad j \in N, \ i \in M_j$ (4)

$\sum_{t=0}^{H} t\,x_{ijt} + (L_{ij}/v_i + s_j)\,y_{ij} + D(z_{ijk} - 1) - z_{i0j} s_j \le \sum_{t=0}^{H} t\,x_{ikt}, \quad j \in N, \ i \in M_j, \ k \in J_i$ (5)

$C_{max} \ge \sum_{t=0}^{H} t\,x_{ijt} + (L_{ij}/v_i + s_j)\,y_{ij} - z_{i0j} s_j, \quad j \in N, \ i \in M_j$ (6)

$\sum_{t=0}^{H} x_{ijt} = \sum_{k \in J_i \cup \{n+1\}} z_{ijk}, \quad j \in N, \ i \in M_j$ (7)

$\sum_{t=0}^{H} x_{ikt} = \sum_{j \in J_i \cup \{0\}} z_{ijk}, \quad k \in N, \ i \in M_k$ (8)

$\sum_{k \in J_i} z_{i0k} = 1, \quad i \in M$ (9)

$\sum_{j \in J_i} z_{i,j,n+1} = 1, \quad i \in M$ (10)

$z_{ijj} = 0, \quad j = 1, 2, \ldots, n, \ i \in M_j$ (11)

$\sum_{i \in M_j} L_{ij} = p_j, \quad j \in N$ (12)

$y_{ij} \le L_{ij}, \quad j = 1, 2, \ldots, n, \ i \in M_j$ (13)

$y_{ij} \ge L_{ij} / p_j, \quad j \in N, \ i \in M_j$ (14)

$(x_{ijt} - z_{i0j})\,s_j \le \sum_{u=t}^{t+s_j-1} R_{iju}, \quad j \in N, \ i \in M_j, \ t = 0, 1, \ldots, H - \max_j s_j$ (15)

$\sum_{t=0}^{H} R_{ijt} \le s_j \sum_{t=0}^{H} x_{ijt}, \quad j \in N, \ i \in M_j$ (16)

$\sum_{t=0}^{H} R_{ijt} \le s_j (1 - z_{i0j}), \quad j \in N, \ i \in M_j$ (17)

$\sum_{j=1}^{n} \sum_{i \in M_j} R_{ijt} \le r, \quad t = 0, 1, \ldots, H$ (18)

$x_{ijt}, z_{ijk}, y_{ij}, R_{ijt} \in \{0, 1\}, \quad L_{ij} \in \mathbb{Z}^{+}$ (19)
The objective is to minimize $C_{max}$, the maximum completion time over the machines. The constraint in Equation (2) ensures that a section of job j can be assigned to machine i, $i \in M_j$, at most once; multiple assignments of sections of the same job type to a certain machine are not allowed. The constraint in Equation (3) expresses the job splitting property: multiple sections of job j can be processed on different machines in $M_j$. The constraint in Equation (4) links $x_{ijt}$ and $y_{ij}$ so that if a section of job j is assigned to machine i, $y_{ij}$ and the sum of $x_{ijt}$ over all t are equal. In the constraint in Equation (5), when a section of job j starts to be processed on machine i at time t, the next job k in $J_i$ must start after $t + L_{ij}/v_i + s_j$ time units, where D is a large number; the term $z_{i0j} s_j$ eliminates the setup time of the first job on machine i. The objective value $C_{max}$ is obtained with the constraint in Equation (6). Once a section of job j is processed on machine i, it must have a succeeding and a preceding job, as indicated by the constraints in Equations (7) and (8), respectively. The constraints in Equations (9) and (10) ensure that dummy jobs 0 and $n+1$ are the first and last jobs, respectively, on each machine. Since a section of job j cannot precede or succeed the same job type, $z_{ijj} = 0$ as stated in the constraint in Equation (11). The sum of processing times of job j over the machines in $M_j$ must equal $p_j$ by the constraint in Equation (12), and if a section of job j is assigned to machine i, then $L_{ij}$ must be larger than 0 by the constraints in Equations (13) and (14). The constraints in Equations (15)-(18) impose the setup resource constraints. If a section of job j starts to be processed on machine i at time u, $R_{ijt}$ must be 1 for all t with $u \le t \le u + s_j - 1$, which is enforced by the constraints in Equations (15) and (16).
The constraint in Equation (17) handles the first jobs on machines, whose setup times are ignored. Since there are r setup operators, the sum of $R_{ijt}$ at each time t must be less than or equal to r, as stated in Equation (18). Since the product $L_{ij} y_{ij}$ is nonlinear, we introduce $G_{ij} = L_{ij} y_{ij}$, which indicates the length of the section of job j that is actually processed on machine i. Commercial solvers such as CPLEX can then be used to obtain optimal solutions with the proposed formulation once the following inequalities are added, where D is a large number:
$G_{ij} \ge L_{ij} - D(1 - y_{ij}), \quad j \in N, \ i \in M_j$

$G_{ij} \le L_{ij} - D(y_{ij} - 1), \quad j \in N, \ i \in M_j$

$G_{ij} \le D\,y_{ij}, \quad j \in N, \ i \in M_j$
Optimal solutions from the mathematical formulation can be used to evaluate the performance of the proposed algorithm. However, large problems cannot be solved within an acceptable time, even with three machines and four jobs, as our problem is NP-hard. Thus, lower bounds are derived and used for the performance evaluation. We define a job set $N_1$ that contains m jobs such that $\sum_{j \in N_1} s_j$ is maximized and the m jobs in $N_1$ can be assigned to the m machines one by one while satisfying the dedicated machine constraints. Suppose that there are two machines and three jobs, and $s_1$, $s_2$, and $s_3$ are 2, 1, and 3, respectively. If $M_1 = M_3 = \{1\}$ and $M_2 = \{1, 2\}$, then $N_1$ contains Jobs 2 and 3 instead of Jobs 1 and 3, even though $s_2 + s_3 < s_1 + s_3$, because Jobs 1 and 3 must both be processed on Machine 1. The job set $N_2$ contains the jobs in $N \setminus N_1$. We used the Hungarian method to obtain the set $N_1$ with n jobs and n machines (the m machines plus $n - m$ dummy machines) [43]. $S_l$ denotes a set of jobs that share the same set of dedicated machines, i.e., $M_j = M_k$ if jobs j and k are in $S_l$; in the above example, Jobs 1 and 3 are in the same set. We now present several lemmas to derive lower bounds.
Lemma 1.
A job-based lower bound is $LB_1 = \max_{j \in N} p_j / \sum_{i \in M_j} v_i$.
Proof. 
Since job j can only be processed on the machines in $M_j$, the minimum time required to complete job j is $p_j / \sum_{i \in M_j} v_i$. Hence, the maximum value of $p_j / \sum_{i \in M_j} v_i$ over all jobs is a job-based lower bound. ☐
Lemma 2.
A machine-based lower bound is $LB_2 = \sum_{j=1}^{n} p_j / \sum_{i=1}^{m} v_i + \sum_{j \in N_2} s_j / m$.
Proof. 
For a basic parallel machine scheduling problem, a lower bound is $\sum_{j=1}^{n} p_j / m$. In our problem, since the m machines have different speeds, it takes at least $\sum_{j=1}^{n} p_j / \sum_{i=1}^{m} v_i$ to complete all n jobs. In addition, setup time of at least $\sum_{j \in N_2} s_j$ is required for the $n - m$ jobs in $N_2$. Hence, a machine-based lower bound is $\sum_{j=1}^{n} p_j / \sum_{i=1}^{m} v_i + \sum_{j \in N_2} s_j / m$. ☐
Lemma 3.
A resource-based lower bound is $LB_3 = \sum_{j \in N_2} s_j / r$.
Proof. 
Except for the first m jobs assigned, the $n - m$ jobs require setups that take at least $\sum_{j \in N_2} s_j$, as defined previously. Since these setups are performed by r operators, a resource-based lower bound is $\sum_{j \in N_2} s_j / r$. ☐
Lemma 4.
A job set-based lower bound is $LB_4 = \max_{S_l} \left\{ \sum_{j \in S_l} p_j / \sum_{i \in M_{S_l}} v_i + \sum_{j \in S_l'} s_j / |M_{S_l}| \right\}$, where $M_{S_l}$ is the set of machines that can process the jobs in $S_l$, and $S_l'$ is the subset of $S_l$ that excludes the $|M_{S_l}|$ jobs with the largest setup times among the jobs in $S_l$.
Proof. 
For each job set $S_l$, a lower bound can be obtained similarly to $LB_2$ in Lemma 2. Jobs in $S_l$ can only be processed on the machines in $M_{S_l}$, which takes at least $\sum_{j \in S_l} p_j / \sum_{i \in M_{S_l}} v_i$. The setups for the jobs in $S_l'$ require at least $\sum_{j \in S_l'} s_j$, since $|M_{S_l}|$ jobs can start first on the $|M_{S_l}|$ machines without setups. Hence, the maximum value of $\sum_{j \in S_l} p_j / \sum_{i \in M_{S_l}} v_i + \sum_{j \in S_l'} s_j / |M_{S_l}|$ over all $S_l$ is a job set-based lower bound. ☐
Corollary 1.
A lower bound on the makespan of the problem is $LB = \max \{ LB_1, LB_2, LB_3, LB_4 \}$.
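The four bounds above can be computed directly from the instance data. The following Python sketch (the function and variable names are ours, not from the paper) illustrates the computation under the stated definitions, assuming 0-indexed jobs and machines, `eligible[j]` giving $M_j$, and `n1` the index set obtained for $N_1$:

```python
def makespan_lower_bound(p, s, v, eligible, n1, r):
    """Compute max{LB1, LB2, LB3, LB4} for one instance (illustrative sketch).
    p, s: processing/setup times; v: machine speeds; eligible[j]: set M_j;
    n1: indices of the first jobs chosen per machine; r: setup operators."""
    n, m = len(p), len(v)
    n2 = [j for j in range(n) if j not in n1]          # N_2 = N \ N_1
    lb1 = max(p[j] / sum(v[i] for i in eligible[j]) for j in range(n))
    lb2 = sum(p) / sum(v) + sum(s[j] for j in n2) / m
    lb3 = sum(s[j] for j in n2) / r
    lb4 = 0.0
    groups = {}
    for j in range(n):                                 # group jobs sharing identical M_j
        groups.setdefault(frozenset(eligible[j]), []).append(j)
    for mach, jobs in groups.items():
        speed = sum(v[i] for i in mach)
        # S_l': drop the |M_{S_l}| largest setup times (those jobs can start first)
        rest = sorted((s[j] for j in jobs), reverse=True)[len(mach):]
        lb4 = max(lb4, sum(p[j] for j in jobs) / speed + sum(rest) / len(mach))
    return max(lb1, lb2, lb3, lb4)
```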

4. Heuristic Algorithms

Since no setup is required for the first job on each machine, assigning jobs with large setup times as the first ones can reduce the makespan. However, sorting jobs in nonincreasing order of setup times and assigning the first m jobs to the m machines may be infeasible, or may yield smaller setup time reductions, due to the dedicated machines. Hence, we assign the m jobs in $N_1$ to the m machines so that the sum of their setup times is maximized. As mentioned above, the optimal assignment of m jobs to m machines can be found with the Hungarian method. In the method, $n - m$ dummy machines are created, and the setup times of the n jobs on those machines are set to 0. The setup time of job j on machine i, $i \notin M_j$, is also set to 0. Then, the Hungarian method is applied with n jobs and n machines; its complexity is known to be $O(n^3)$. With this method, we can maximize the sum of setup times of the jobs assigned first to each machine.
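In an implementation, a standard Hungarian (assignment problem) routine would be used as described. Purely for illustration, the following brute-force stand-in (function and variable names are ours) finds the same maximum-setup first-job assignment on small instances while honoring the dedicated-machine sets:

```python
from itertools import permutations

def select_first_jobs(m, setup, eligible):
    """Brute-force stand-in for the Hungarian-method step: pick one job per
    machine so that the total setup time of the chosen first jobs is
    maximized, honoring dedicated-machine sets.  O(n!) instead of O(n^3),
    so it is only suitable for tiny instances.  Jobs/machines are 0-indexed."""
    jobs = range(len(setup))
    best, best_set = -1, None
    for perm in permutations(jobs, m):   # perm[i] = job placed first on machine i
        if all(i in eligible[j] for i, j in enumerate(perm)):
            total = sum(setup[j] for j in perm)
            if total > best:
                best, best_set = total, set(perm)
    return best_set

# Two machines, three jobs from the example of Section 3 (0-indexed):
# s = (2, 1, 3), M_1 = M_3 = {machine 0}, M_2 = {machine 0, machine 1}.
n1 = select_first_jobs(2, [2, 1, 3], [{0}, {0, 1}, {0}])
print(sorted(n1))  # -> [1, 2], i.e., Jobs 2 and 3 in the paper's 1-indexed notation
```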
After assigning the first m jobs, each time a machine becomes available, a job is chosen according to one of two well-known priority rules. The first is the least flexible job (LFJ) first rule: when machine i finishes processing a job, the job with the smallest $|M_j|$ among the jobs in $J_i$ is chosen and assigned. The LFJ rule performs well for dedicated parallel machine scheduling. The second is the LPT rule, which selects the job with the largest processing time among the jobs in $J_i$. After all n jobs are assigned, the loads of the machines are balanced by splitting the last job on each machine and assigning those sections to other machines while considering the dedicated-machine constraints and the different machine speeds.
We provide the procedures of the two priority rules for assigning the $n - m$ jobs. Let $L_i$ be the earliest time at which a job can be assigned to machine i, considering the setup resource constraints, after the first m jobs have been assigned by the Hungarian method. The following rules are only applied to jobs in $N_2$.
LFJ rule
-
Step 1: For machine $l = \arg\min_{i \in M} L_i$, select the job $k \in J_l \cap N_2$ with the smallest $|M_k|$ and assign it to the machine, breaking ties by the LPT rule. Update $L_l$ to $L_l + s_k + p_k / v_l$, $N_2$ to $N_2 \setminus \{k\}$, and $J_i$ to $J_i \setminus \{k\}$ for $i \in M_k$.
-
Step 2: If $N_2 = \emptyset$, terminate. Otherwise, update $L_i$ for all $i \in M$ and go to Step 1.
LPT rule
-
Step 1: For machine $l = \arg\min_{i \in M} L_i$, select the job $k \in J_l \cap N_2$ with the largest $p_k$ and assign it to the machine, breaking ties by the LFJ rule. Update $L_l$ to $L_l + s_k + p_k / v_l$, $N_2$ to $N_2 \setminus \{k\}$, and $J_i$ to $J_i \setminus \{k\}$ for $i \in M_k$.
-
Step 2: If $N_2 = \emptyset$, terminate. Otherwise, update $L_i$ for all $i \in M$ and go to Step 1.
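The two rules differ only in the selection key; a compact list-scheduling sketch of Steps 1 and 2 (with our own, hypothetical names: `L[i]` is the machine availability time, `n2` the unassigned jobs, `eligible[j]` the set $M_j$) could look as follows:

```python
def assign_by_rule(rule, L, n2, p, s, v, eligible):
    """List-scheduling sketch of the LFJ and LPT rules described above."""
    n2 = set(n2)
    while n2:
        i = min(range(len(L)), key=lambda m: L[m])   # earliest-available machine
        cands = [j for j in n2 if i in eligible[j]]  # J_i intersected with N_2
        if not cands:                                # machine can take no remaining job
            L[i] = float('inf')
            if all(t == float('inf') for t in L):
                raise ValueError('a job has no eligible machine')
            continue
        if rule == 'LFJ':   # fewest eligible machines first, LPT tie-break
            k = min(cands, key=lambda j: (len(eligible[j]), -p[j]))
        else:               # 'LPT': longest job first, LFJ tie-break
            k = max(cands, key=lambda j: (p[j], -len(eligible[j])))
        L[i] += s[k] + p[k] / v[i]                   # setup + speed-adjusted processing
        n2.remove(k)
    return L
```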
It is worth noting that another priority rule combining the LPT and LFJ rules was also tested but performed poorly in preliminary experiments: when a machine with a high speed becomes idle, a job is assigned according to the LPT rule; otherwise, the LFJ rule is applied.
We now propose a heuristic algorithm that combines the Hungarian method for assigning the first m jobs, one of the priority rules (LFJ or LPT) for assigning the remaining $n - m$ jobs, and a load balancing step on the machines. We tested both priority rules and report their performance in Section 5.
Algorithm 1: An iterative algorithm for uniform parallel machine scheduling
-
Step 1: Using the Hungarian method, select m jobs and assign one to each machine so that the sum of the setup times of these m jobs is maximized, and define the set $N_1$ containing these first m jobs.
-
Step 2: Assign the $n - m$ jobs in $N_2 = N \setminus N_1$ according to the LFJ or LPT rule.
-
Step 3: Select machine $l = \arg\max_{i \in M} L_i$. If maxReducibleTime($l$) $> 0$, assign a section of the last job on machine l to machine $i^*$ = bestAssignableMachine($l$) so that $L_l = L_{i^*}$, and repeat Step 3. Otherwise, go to Step 4.
-
Step 4: Consider $M_{k_l}$, where $l = \arg\max_{i \in M} L_i$ and $k_l$ is the last job on machine l. For each machine $i \in M_{k_l}$, temporarily update $L_i$ to $L_i -$ maxReducibleTime($i$), storing its original value. If maxReducibleTime($l$) $= 0$, stop. Otherwise, let $j$ = bestAssignableMachine($l$) and restore each $L_i$ to its original value. Assign a section of the last job on machine j to machine $k$ = bestAssignableMachine($j$) so that $L_j = L_k$, and go to Step 3.
Algorithm 2: Splitting jobs with long processing times to further improve solutions
-
Step 1: Let both N and $N^*$ be the initial job list and set $C_{max} = \infty$.
-
Step 2: Apply Algorithm 1 to the jobs in N and update $C_{max}$ to the resulting makespan. If $C_{max}$ equals the lower bound, stop. Otherwise, go to Step 3.
-
Step 3: If $N^* = \emptyset$ or every job in $N^*$ has a processing time less than $\epsilon$ ($\epsilon$ is a very small positive real number used to avoid an infinite loop), stop. Otherwise, select job l in $N^*$ according to the LPT rule. Split job l into two sections, $l_1$ and $l_2$, with the same processing time. Apply Algorithm 1 to the jobs in $(N \setminus \{l\}) \cup \{l_1, l_2\}$, and let the resulting makespan be $C'_{max}$.
-
Step 4: If $C'_{max} < C_{max}$, set $C_{max} = C'_{max}$ and update both N and $N^*$ by eliminating job l and adding jobs $l_1$ and $l_2$. Otherwise, update $N^*$ by eliminating l. Go to Step 3.
Function: maxReducibleTime(l)
For a given machine l and its last job k_l, return Δ = max_{i ∈ M_{k_l}} max{ (L_l − L_i − d_{i,k_l} s_{k_l}) / (1 + v_l / v_i), 0 }.
Function: bestAssignableMachine(l)
For a given machine l and its last job k_l, return machine i* = arg max_{i ∈ M_{k_l}} (L_l − L_i − d_{i,k_l} s_{k_l}) / (1 + v_l / v_i).
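The expression inside both functions is the time that can be shaved off machine l by moving a section of its last job k_l to machine i: equalizing the two loads, L_l − x = L_i + d_{i,k_l} s_{k_l} + x · v_l / v_i, and solving for x gives the fraction above. A minimal sketch follows; the function and variable names are ours, and d is assumed to be an indicator of whether moving a section of job k_l to machine i requires a new setup.

```python
# A sketch of the two helper functions. L maps machine -> current load,
# v maps machine -> relative speed, s maps job -> setup time, M maps
# job -> set of eligible machines, and d[i][k] is an assumed setup indicator.
def reducible_time(L, v, s, M, d, l, k_l, i):
    """Time by which machine l's load drops when loads of l and i are
    equalized by moving a section of job k_l from l to i."""
    return (L[l] - L[i] - d[i][k_l] * s[k_l]) / (1 + v[l] / v[i])

def max_reducible_time(L, v, s, M, d, l, k_l):
    """maxReducibleTime(l): best achievable reduction, floored at 0."""
    return max((max(reducible_time(L, v, s, M, d, l, k_l, i), 0.0)
                for i in M[k_l] if i != l), default=0.0)

def best_assignable_machine(L, v, s, M, d, l, k_l):
    """bestAssignableMachine(l): the machine giving the largest reduction."""
    return max((i for i in M[k_l] if i != l),
               key=lambda i: reducible_time(L, v, s, M, d, l, k_l, i))
```

For example, with L_l = 10, L_i = 4, a setup of 2 needed on machine i, and equal speeds, the reducible time is (10 − 4 − 2) / 2 = 2: both machines end at load 8.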
Steps 1 and 2 assign the first m jobs with the Hungarian method and the remaining n − m jobs with one of the priority rules, respectively. The schedule is then updated by balancing the machine loads in Steps 3 and 4. In Step 3, the machine with the largest L_i, which currently determines the makespan, is chosen, and the maximum reducible time is computed for every other machine by splitting the last job of that machine and reassigning a section to the other machine. This step is repeated until the makespan can no longer be improved. Step 4 is specifically designed to improve the solution under the dedicated-machine constraints. Figure 3 shows an example where Step 4 of Algorithm 1 is effective. In Figure 3, Machine 3 has the longest completion time, and its last job is Job 3. Step 3 of Algorithm 1 can hardly improve the makespan here because Job 3 can be processed only on Machines 1 and 3, and the completion time of Machine 1 is almost the same as that of Machine 3. However, if the last job of Machine 1, Job 1, is split and a section is reassigned to Machine 2 (since M_1 = {1, 2}), the completion time of Machine 1 decreases, creating an additional chance to reduce the makespan. Step 4 of Algorithm 1 is designed around this idea.
Algorithm 1 yields a feasible and acceptable solution. Algorithm 2 then improves this solution further by splitting jobs with long processing times, one job at a time.
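The outer loop of Algorithm 2 can be sketched as follows, with Algorithm 1 abstracted as a callable that maps a job list to a makespan. Names and the job representation (name, processing time pairs) are illustrative, and the lower-bound early exit of Step 2 is omitted for brevity.

```python
# A sketch of Algorithm 2's outer loop: repeatedly split the LPT job in half,
# re-run Algorithm 1, and keep the split only if the makespan improves.
EPS = 1e-6  # the paper's epsilon guarding against an infinite loop

def algorithm2(jobs, algorithm1):
    N = list(jobs)              # current job list: (name, p_j) pairs
    N_star = list(jobs)         # jobs still considered for splitting
    c_best = algorithm1(N)      # Step 2: initial makespan from Algorithm 1
    while N_star:
        # Step 3: pick the splittable job with the longest processing time.
        name, p = max(N_star, key=lambda j: j[1])
        if p < EPS:
            break
        halves = [(name + "a", p / 2), (name + "b", p / 2)]
        trial = [j for j in N if j[0] != name] + halves
        c_new = algorithm1(trial)
        if c_new < c_best:      # Step 4: keep the split only if it helps
            c_best = c_new
            N = trial
            N_star = [j for j in N_star if j[0] != name] + halves
        else:
            N_star = [j for j in N_star if j[0] != name]
    return c_best, N
```

Because a rejected split removes the job from N* and an accepted split halves processing times, the loop terminates once no job in N* exceeds ϵ.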
Example 2.
Suppose that there are three machines and eight jobs, where v_1, v_2, and v_3 are 0.8, 1.0, and 1.2, respectively, and (p_j, s_j) = (7, 2), (8, 1), (9, 4), (5, 1), (4, 1), (7, 4), (2, 1), (6, 3) for 1 ≤ j ≤ 8. Assume that J_1 = {1, 2, 3, 4, 6, 8}, J_2 = {2, 5, 7, 8}, and J_3 = {1, 3, 4, 5}. Figure 4a,c shows Gantt charts of the schedules obtained from the LFJ and LPT rules, respectively, with r = 1. In Algorithm 1, the Hungarian method is applied first, and Jobs 6, 8, and 3 are assigned as the first jobs to Machines 1–3, respectively. Machine 2 then becomes idle at 6 time units, and Job 7 is selected since |M_7| is the smallest among the jobs in J_2 according to the LFJ rule. After that, Machine 3 chooses Job 1 because all remaining jobs in J_3 have the same value of |M_j| and p_1 is the largest. In a similar way, a complete schedule from the LFJ rule is obtained; its makespan is 20.5, as illustrated in Figure 4a. When the LPT rule is applied, Job 2 is selected first after Job 8 on Machine 2, and then Jobs 1 and 4 are assigned to Machines 3 and 1, respectively. The makespan is 23 since |M_j| is not considered in the LPT rule. In Figure 4a, the last job on Machine 3, Job 4, cannot be processed on Machine 2, and splitting it into two sections and assigning one section to Machine 1 cannot improve the makespan. Hence, Job 2 on Machine 1 is split into two sections, one of which is assigned to Machine 2. The schedule is further updated by splitting Job 4 on Machine 3 and assigning one section to Machine 1, as illustrated in Figure 4b. In this schedule, the makespan is 20. The schedules from the LPT rule cannot be improved by splitting the last jobs on the machines. Starting from this schedule, Algorithm 2 is applied.

5. Experimental Results

The proposed algorithms were tested with various scenarios. Machine speeds were drawn randomly between 0.8 and 1.2, and the setup time of each job was generated as α p_j, where α was selected randomly within [0.01, 0.1], [0.1, 0.2], or [0.1, 0.5]. Processing times were generated between 10 and 100. There were three levels of machine dedication, namely high, medium (mid), and low; at each level, a job could be processed on a given machine with probability 50%, 50–90%, and 90%, respectively. The scenarios had 5, 10, and 20 machines, each combined with 40, 60, and 80 jobs. The four algorithms, Algorithm 1 with LFJ (A1 (LFJ)), Algorithm 1 with LPT (A1 (LPT)), Algorithm 2 with LFJ (A2 (LFJ)), and Algorithm 2 with LPT (A2 (LPT)), were compared against the lower bounds from Corollary 1. Average gaps were computed as follows:
(makespan from the proposed algorithm − lower bound) / lower bound × 100 (%).
For each combination of a setup time range, a dedication level, and a number of resources, 100 instances were generated, and the average gaps are reported. Note that the mathematical model can solve only small instances, with two machines and four jobs, to optimality within 1 h. Comparing against such very small instances is not meaningful, and for those cases the gaps between the makespans from our algorithm and the lower bounds are extremely small.
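The gap metric above, averaged over the instances of one scenario, amounts to the following (per-instance makespans and lower bounds are illustrative inputs):

```python
# Average relative gap (%) between heuristic makespans and lower bounds.
def average_gap(makespans, lower_bounds):
    gaps = [(c - lb) / lb * 100.0 for c, lb in zip(makespans, lower_bounds)]
    return sum(gaps) / len(gaps)
```

For instance, makespans of 110 and 105 against lower bounds of 100 give gaps of 10% and 5%, i.e., an average gap of 7.5%.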
Figure 5, Figure 6 and Figure 7 show the experimental results with m of 10 and n of 40, 60, and 80, respectively. Each figure has nine graphs for the different setup time ranges and dedication levels. The graphs are labeled with (a), (b), and (c) for the setup ranges α ∈ [0.01, 0.1], [0.1, 0.2], and [0.1, 0.5], and with H, M, and L for the dedication levels. For example, Figure 5a-H shows the result with α ∈ [0.01, 0.1] and the high dedication level. The number of servers, r, was set to be less than m to reflect the resource constraints in the schedule. Each graph shows the average gaps of the four algorithms with r of 3, 5, 7, and 9. In Figure 5, the average gaps tend to increase as the setup time range grows and the dedication level becomes high. In addition, as r becomes large, the gaps become smaller. A1 (LFJ) performs better than A1 (LPT) in Figure 5a-H,b-H because the LFJ rule works well with the high dedication level, whereas the average gaps of A1 (LPT) are smaller in the other cases. In Figure 5c-H, A1 (LPT) is slightly better than A1 (LFJ) because the machine loads may be better balanced by the LPT rule under large setup times. When the machine dedication level is mid or low, A1 (LPT) is always better than A1 (LFJ). The average gap of A1 (LPT) decreases significantly in Figure 5c-M compared with Figure 5c-H. The difference between A1 (LFJ) and A1 (LPT) becomes large under the mid and low dedication levels as the setup time range increases, whereas the difference is small with high dedication. A2 (LFJ) and A2 (LPT) show patterns similar to A1 (LFJ) and A1 (LPT) in Figure 5. Interestingly, when r is small, the average gaps of A1 (LFJ) and A2 (LFJ) in Figure 5c-M,c-L are slightly larger than those in Figure 5c-H, respectively, whereas the average gaps are otherwise mostly large when the dedication level is high.
This may indicate that, when setup times are large and resource constraints are tight, the LFJ rule provides good solutions at the high dedication level. This feature can also be found in Figure 6 and Figure 7. A2 (LPT) provides the smallest gaps among the four algorithms in all cases except Figure 5a-H. A2 (LPT) has its largest average gap of 3.88% in Figure 5c-H and its smallest average gap of 0.48% in Figure 5a-L.
The nine graphs in Figure 6 show features similar to those in Figure 5, but the average gaps become smaller as n increases. The largest gaps are obtained when α ∈ [0.1, 0.5] and r is small. A1 (LFJ) performs better than A1 (LPT) in Figure 6a-H but is outperformed in all the other cases. A1 (LFJ) and A2 (LFJ) work poorly compared with A1 (LPT) and A2 (LPT), especially with large setup times and few resources. A2 (LPT) performs well in most cases; its largest and smallest average gaps are 2.57% and 0.29% in Figure 6c-H,b-L, respectively. Figure 7 shows the results with n of 80. Unlike the previous results, A1 (LPT) performs better than A1 (LFJ) in Figure 7a-H. The average gaps of the four algorithms are smaller than those in Figure 6, except for A1 (LFJ) with large setup times and few resources. This may indicate less tight lower bounds or poor performance of the LFJ rule with large n. The largest and smallest average gaps of A2 (LPT) are 1.88% and 0.21% in Figure 7c-H and Figure 7b-L, respectively. A2 (LFJ) is a good choice with small n, high machine dedication, and small setup times; otherwise, A2 (LPT) is better.
Table 2 summarizes all of the results in Figure 5, Figure 6 and Figure 7 according to the dedication levels, setup ranges, resources, and numbers of jobs. Table 2 also shows the results with a wider machine speed range, 0.5–1.5. The average gaps tend to become smaller as the dedication level becomes lower, the setup time range becomes smaller, the number of resources becomes larger, and the number of jobs becomes larger. The average gaps of the four algorithms are 3.12%, 1.99%, 2.58%, and 1.33%, respectively, so A2 (LPT) yields solutions close to optimal. Applying Algorithm 2 improves the performance of A1 (LFJ) and A1 (LPT) by as much as 17% and 33%, respectively. When the variation of machine speeds is larger, the average gaps also increase; even when the machine speed is selected between 0.5 and 1.5, however, the average gaps are less than 5%. We note that the computation times of A1 (LFJ) and A1 (LPT) are less than 1 s, and A2 (LFJ) and A2 (LPT) take at most 61 s with n of 80.
The detailed experimental results for m of 5 and 20 with n of 40, 60, and 80 are given in Appendix A. Table 3 and Table 4 summarize the results with m of 5 and 20, respectively, with the speed between 0.8 and 1.2. The features are similar to those in Table 2. When m is 5, A1 (LFJ) and A2 (LFJ) have the largest average gaps when the number of resources is small, whereas A1 (LPT) and A2 (LPT) perform poorly when the dedication level is high. As m increases, A1 (LFJ) and A2 (LFJ) instead yield their largest average gaps when the number of jobs is small. When m is 20, the average gaps of A1 (LPT) and A2 (LPT) are also large with n of 40 compared with the other scenarios. Comparing the results in Table 2, Table 3 and Table 4 shows that the average gaps tend to increase as m increases. The average gaps of the four algorithms with m of 5 and 20 are 1.33%, 1.70%, 1.06%, and 0.92%, and 4.71%, 3.21%, 4.07%, and 2.43%, respectively. These gaps are not large considering that they are computed against lower bounds rather than optimal values. The performance of Algorithm 1 is improved significantly by applying Algorithm 2. We conclude that practical problems can be solved efficiently, especially with A2 (LPT). The computation time for the scenarios with m of 20 and n of 60 is 89.43 s on average, and the maximum computation time is 100.17 s.
We further evaluated the four algorithms with extreme cases in which the number of resources is about 20% of the number of machines; in practice, r is usually set to be larger than 50% of m to keep the system efficient. For each combination of setup time ranges, dedication levels, and numbers of jobs, 100 instances were generated. Table 5 shows the average gaps of the four algorithms with m of 5, 10, and 20. The large gaps are mostly obtained when α ∈ [0.1, 0.5] and the dedication level is high. When m is 5, the gaps tend to decrease as n increases, whereas the gaps are large with a large number of jobs when m is 10, especially for A1 (LFJ) and A2 (LFJ). When m is 20, the instances with n of 60 yield the smallest gaps. Even in these extreme cases, A2 (LPT) has a maximum average gap of 7.13%, obtained when m and n are 5 and 40, respectively.
We finally tested our algorithms with a practical instance from an FFU factory in Korea. There were seven parallel machines, three with a speed of 1.2 and the rest with a speed of 1. The setup times were not large, with α ∈ [0.1, 0.2]. The number of jobs was 20, 30, or 40, with processing times of 1–30, 1–20, or 1–10, respectively, to form a one-week production schedule. The number of setup operators was 2, and the dedication level was mid. Table 6 shows the results with m of 7 and n of 20, 30, and 40. The average gaps are less than 4% with A2 (LPT), which provides very efficient solutions in this case.

6. Conclusions

We investigate a parallel machine scheduling problem for makespan minimization with various scheduling requirements: different processing speeds of machines, dedicated machines, job splitting, and limited setup operators. We first present a mathematical model, which clearly describes the problem and can be used to obtain optimal solutions for small-sized problems. Since the problem is NP-hard, we develop an efficient heuristic algorithm for practical problems, which first obtains a feasible solution and then improves it in a constructive way. The performance of the algorithm was tested with various scenarios and a real problem from industry. We conclude that the proposed algorithm is very efficient and provides highly acceptable solutions for most practical cases.
In future research, analytical results for the heuristic algorithm, such as worst-case performance bounds, should be derived. The problem could also be extended to cover other scheduling requirements, for example, sequence-dependent setup times.

Author Contributions

Conceptualization, J.-H.L. and H.J.; methodology, J.-H.L.; writing—original draft, J.-H.L.; and writing—review and editing, H.J.

Funding

This paper was supported by Konkuk University in 2018.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Computation results of m of 5 and n of 40.

| Resource | Setup | Dedication | A1 (LFJ) (%) | A1 (LPT) (%) | A2 (LFJ) (%) | A2 (LPT) (%) |
|---|---|---|---|---|---|---|
| r = 2 | α ∈ [0.01, 0.1] | High | 1.11 | 4.68 | 0.78 | 1.77 |
| | | Mid | 0.75 | 1.11 | 0.56 | 0.43 |
| | | Low | 0.37 | 0.40 | 0.28 | 0.28 |
| | α ∈ [0.1, 0.2] | High | 2.60 | 4.39 | 1.92 | 2.10 |
| | | Mid | 2.37 | 2.19 | 1.91 | 1.36 |
| | | Low | 1.45 | 1.16 | 1.18 | 1.00 |
| | α ∈ [0.1, 0.5] | High | 5.35 | 6.77 | 4.20 | 4.32 |
| | | Mid | 5.28 | 3.66 | 4.40 | 2.77 |
| | | Low | 3.70 | 3.04 | 3.18 | 2.54 |
| r = 3 | α ∈ [0.01, 0.1] | High | 0.98 | 5.59 | 0.69 | 1.96 |
| | | Mid | 0.72 | 1.21 | 0.55 | 0.44 |
| | | Low | 0.30 | 0.35 | 0.23 | 0.23 |
| | α ∈ [0.1, 0.2] | High | 1.87 | 5.20 | 1.57 | 2.22 |
| | | Mid | 1.71 | 1.40 | 1.43 | 0.80 |
| | | Low | 0.72 | 0.71 | 0.63 | 0.59 |
| | α ∈ [0.1, 0.5] | High | 2.97 | 5.10 | 2.49 | 2.74 |
| | | Mid | 2.82 | 2.27 | 2.38 | 1.49 |
| | | Low | 1.57 | 1.22 | 1.30 | 1.07 |
| r = 4 | α ∈ [0.01, 0.1] | High | 0.94 | 4.39 | 0.68 | 1.79 |
| | | Mid | 0.70 | 1.36 | 0.54 | 0.50 |
| | | Low | 0.32 | 0.36 | 0.24 | 0.23 |
| | α ∈ [0.1, 0.2] | High | 2.14 | 4.55 | 1.62 | 2.28 |
| | | Mid | 1.39 | 1.52 | 1.22 | 0.83 |
| | | Low | 0.67 | 0.58 | 0.57 | 0.51 |
| | α ∈ [0.1, 0.5] | High | 2.74 | 5.49 | 2.27 | 2.92 |
| | | Mid | 2.46 | 1.54 | 2.07 | 1.09 |
| | | Low | 1.07 | 0.92 | 0.90 | 0.76 |
| r = 5 | α ∈ [0.01, 0.1] | High | 1.01 | 4.11 | 0.68 | 1.56 |
| | | Mid | 0.69 | 1.23 | 0.53 | 0.45 |
| | | Low | 0.31 | 0.37 | 0.25 | 0.23 |
| | α ∈ [0.1, 0.2] | High | 1.79 | 4.71 | 1.44 | 2.23 |
| | | Mid | 1.51 | 1.31 | 1.24 | 0.77 |
| | | Low | 0.63 | 0.63 | 0.55 | 0.53 |
| | α ∈ [0.1, 0.5] | High | 2.60 | 5.51 | 2.26 | 2.88 |
| | | Mid | 2.28 | 1.71 | 1.86 | 1.12 |
| | | Low | 0.92 | 0.97 | 0.78 | 0.82 |
Table A2. Computation results of m of 5 and n of 60.

| Resource | Setup | Dedication | A1 (LFJ) (%) | A1 (LPT) (%) | A2 (LFJ) (%) | A2 (LPT) (%) |
|---|---|---|---|---|---|---|
| r = 2 | α ∈ [0.01, 0.1] | High | 0.58 | 2.16 | 0.43 | 0.71 |
| | | Mid | 0.45 | 0.73 | 0.31 | 0.28 |
| | | Low | 0.27 | 0.30 | 0.19 | 0.20 |
| | α ∈ [0.1, 0.2] | High | 1.91 | 2.83 | 1.41 | 1.36 |
| | | Mid | 1.87 | 1.30 | 1.43 | 0.88 |
| | | Low | 1.35 | 0.96 | 1.10 | 0.82 |
| | α ∈ [0.1, 0.5] | High | 4.87 | 4.54 | 3.74 | 2.77 |
| | | Mid | 4.71 | 3.03 | 3.79 | 2.28 |
| | | Low | 3.64 | 2.50 | 2.98 | 2.15 |
| r = 3 | α ∈ [0.01, 0.1] | High | 0.65 | 1.82 | 0.46 | 0.68 |
| | | Mid | 0.42 | 0.70 | 0.29 | 0.28 |
| | | Low | 0.19 | 0.23 | 0.14 | 0.16 |
| | α ∈ [0.1, 0.2] | High | 1.47 | 2.05 | 1.14 | 0.84 |
| | | Mid | 0.98 | 0.88 | 0.78 | 0.50 |
| | | Low | 0.51 | 0.49 | 0.43 | 0.43 |
| | α ∈ [0.1, 0.5] | High | 2.49 | 2.80 | 2.01 | 1.43 |
| | | Mid | 2.04 | 1.38 | 1.61 | 0.99 |
| | | Low | 1.24 | 0.90 | 0.98 | 0.78 |
| r = 4 | α ∈ [0.01, 0.1] | High | 0.56 | 2.55 | 0.43 | 0.88 |
| | | Mid | 0.36 | 0.66 | 0.26 | 0.27 |
| | | Low | 0.19 | 0.22 | 0.15 | 0.15 |
| | α ∈ [0.1, 0.2] | High | 1.21 | 2.36 | 1.00 | 0.95 |
| | | Mid | 0.81 | 0.83 | 0.66 | 0.48 |
| | | Low | 0.41 | 0.44 | 0.37 | 0.37 |
| | α ∈ [0.1, 0.5] | High | 2.04 | 2.83 | 1.73 | 1.16 |
| | | Mid | 1.26 | 0.99 | 1.02 | 0.65 |
| | | Low | 0.67 | 0.63 | 0.58 | 0.52 |
| r = 5 | α ∈ [0.01, 0.1] | High | 0.65 | 2.17 | 0.45 | 0.79 |
| | | Mid | 0.40 | 0.58 | 0.26 | 0.23 |
| | | Low | 0.19 | 0.23 | 0.15 | 0.16 |
| | α ∈ [0.1, 0.2] | High | 1.24 | 2.52 | 0.98 | 0.88 |
| | | Mid | 0.81 | 0.77 | 0.66 | 0.45 |
| | | Low | 0.40 | 0.41 | 0.36 | 0.35 |
| | α ∈ [0.1, 0.5] | High | 1.95 | 2.89 | 1.69 | 1.27 |
| | | Mid | 1.18 | 0.91 | 0.94 | 0.64 |
| | | Low | 0.61 | 0.60 | 0.51 | 0.51 |
Table A3. Computation results of m of 5 and n of 80.

| Resource | Setup | Dedication | A1 (LFJ) (%) | A1 (LPT) (%) | A2 (LFJ) (%) | A2 (LPT) (%) |
|---|---|---|---|---|---|---|
| r = 2 | α ∈ [0.01, 0.1] | High | 0.50 | 1.64 | 0.40 | 0.59 |
| | | Mid | 0.32 | 0.48 | 0.22 | 0.22 |
| | | Low | 0.22 | 0.20 | 0.15 | 0.14 |
| | α ∈ [0.1, 0.2] | High | 1.89 | 2.02 | 1.43 | 0.98 |
| | | Mid | 1.69 | 1.08 | 1.26 | 0.79 |
| | | Low | 1.24 | 0.83 | 0.98 | 0.73 |
| | α ∈ [0.1, 0.5] | High | 4.74 | 3.44 | 3.59 | 2.33 |
| | | Mid | 4.45 | 2.74 | 3.52 | 2.15 |
| | | Low | 3.61 | 2.62 | 2.94 | 2.17 |
| r = 3 | α ∈ [0.01, 0.1] | High | 0.45 | 1.84 | 0.35 | 0.56 |
| | | Mid | 0.26 | 0.42 | 0.19 | 0.17 |
| | | Low | 0.15 | 0.17 | 0.11 | 0.12 |
| | α ∈ [0.1, 0.2] | High | 1.08 | 1.63 | 0.88 | 0.64 |
| | | Mid | 0.66 | 0.66 | 0.53 | 0.40 |
| | | Low | 0.49 | 0.39 | 0.40 | 0.34 |
| | α ∈ [0.1, 0.5] | High | 2.12 | 1.95 | 1.63 | 0.97 |
| | | Mid | 1.62 | 1.03 | 1.25 | 0.74 |
| | | Low | 1.10 | 0.76 | 0.86 | 0.66 |
| r = 4 | α ∈ [0.01, 0.1] | High | 0.46 | 1.77 | 0.35 | 0.46 |
| | | Mid | 0.25 | 0.49 | 0.18 | 0.18 |
| | | Low | 0.14 | 0.18 | 0.11 | 0.12 |
| | α ∈ [0.1, 0.2] | High | 0.94 | 1.40 | 0.78 | 0.54 |
| | | Mid | 0.52 | 0.56 | 0.42 | 0.32 |
| | | Low | 0.32 | 0.32 | 0.28 | 0.28 |
| | α ∈ [0.1, 0.5] | High | 1.52 | 2.08 | 1.27 | 0.95 |
| | | Mid | 1.07 | 0.76 | 0.82 | 0.52 |
| | | Low | 0.57 | 0.53 | 0.49 | 0.45 |
| r = 5 | α ∈ [0.01, 0.1] | High | 0.44 | 1.44 | 0.34 | 0.41 |
| | | Mid | 0.26 | 0.43 | 0.18 | 0.18 |
| | | Low | 0.14 | 0.17 | 0.11 | 0.11 |
| | α ∈ [0.1, 0.2] | High | 0.98 | 1.39 | 0.80 | 0.54 |
| | | Mid | 0.51 | 0.54 | 0.42 | 0.34 |
| | | Low | 0.30 | 0.32 | 0.27 | 0.28 |
| | α ∈ [0.1, 0.5] | High | 1.45 | 1.62 | 1.24 | 0.68 |
| | | Mid | 0.83 | 0.76 | 0.63 | 0.46 |
| | | Low | 0.48 | 0.48 | 0.42 | 0.42 |
Table A4. Computation results of m of 20 and n of 40.

| Resource | Setup | Dedication | A1 (LFJ) (%) | A1 (LPT) (%) | A2 (LFJ) (%) | A2 (LPT) (%) |
|---|---|---|---|---|---|---|
| r = 7 | α ∈ [0.01, 0.1] | High | 3.31 | 3.26 | 2.31 | 2.03 |
| | | Mid | 2.63 | 2.29 | 2.06 | 1.62 |
| | | Low | 2.30 | 2.12 | 1.92 | 1.55 |
| | α ∈ [0.1, 0.2] | High | 6.93 | 6.32 | 5.75 | 4.40 |
| | | Mid | 6.07 | 4.84 | 5.21 | 3.79 |
| | | Low | 5.52 | 4.73 | 4.89 | 3.70 |
| | α ∈ [0.1, 0.5] | High | 10.16 | 9.49 | 8.69 | 7.07 |
| | | Mid | 9.13 | 8.10 | 8.26 | 6.26 |
| | | Low | 8.35 | 7.69 | 7.67 | 6.15 |
| r = 10 | α ∈ [0.01, 0.1] | High | 3.07 | 3.15 | 2.19 | 1.95 |
| | | Mid | 2.59 | 2.40 | 1.96 | 1.64 |
| | | Low | 2.45 | 2.08 | 1.92 | 1.53 |
| | α ∈ [0.1, 0.2] | High | 6.73 | 6.63 | 5.50 | 4.54 |
| | | Mid | 5.87 | 4.67 | 5.00 | 3.61 |
| | | Low | 5.38 | 4.45 | 4.80 | 3.59 |
| | α ∈ [0.1, 0.5] | High | 10.28 | 9.64 | 8.60 | 7.27 |
| | | Mid | 8.86 | 7.77 | 8.03 | 6.25 |
| | | Low | 8.36 | 7.03 | 7.69 | 5.94 |
| r = 13 | α ∈ [0.01, 0.1] | High | 3.20 | 4.07 | 2.19 | 2.00 |
| | | Mid | 2.50 | 2.24 | 1.95 | 1.58 |
| | | Low | 2.35 | 2.28 | 1.89 | 1.52 |
| | α ∈ [0.1, 0.2] | High | 6.57 | 5.91 | 5.50 | 4.29 |
| | | Mid | 5.98 | 4.60 | 5.07 | 3.54 |
| | | Low | 5.37 | 4.35 | 4.84 | 3.45 |
| | α ∈ [0.1, 0.5] | High | 9.85 | 9.27 | 8.53 | 6.96 |
| | | Mid | 8.91 | 7.68 | 8.09 | 6.16 |
| | | Low | 8.19 | 7.08 | 7.59 | 6.08 |
| r = 16 | α ∈ [0.01, 0.1] | High | 3.15 | 3.63 | 2.26 | 1.99 |
| | | Mid | 2.41 | 2.29 | 1.96 | 1.57 |
| | | Low | 2.34 | 2.18 | 1.87 | 1.47 |
| | α ∈ [0.1, 0.2] | High | 6.71 | 5.89 | 5.61 | 4.51 |
| | | Mid | 5.82 | 4.80 | 5.13 | 3.71 |
| | | Low | 5.52 | 4.58 | 4.89 | 3.48 |
| | α ∈ [0.1, 0.5] | High | 9.94 | 9.22 | 8.46 | 7.14 |
| | | Mid | 9.05 | 7.54 | 8.04 | 6.22 |
| | | Low | 8.17 | 7.10 | 7.56 | 5.93 |
Table A5. Computation results of m of 20 and n of 60.

| Resource | Setup | Dedication | A1 (LFJ) (%) | A1 (LPT) (%) | A2 (LFJ) (%) | A2 (LPT) (%) |
|---|---|---|---|---|---|---|
| r = 7 | α ∈ [0.01, 0.1] | High | 2.11 | 1.56 | 1.65 | 1.02 |
| | | Mid | 1.79 | 1.11 | 1.50 | 0.76 |
| | | Low | 1.75 | 0.95 | 1.46 | 0.69 |
| | α ∈ [0.1, 0.2] | High | 5.01 | 3.08 | 4.22 | 2.23 |
| | | Mid | 4.47 | 2.09 | 3.91 | 1.69 |
| | | Low | 4.29 | 2.02 | 3.76 | 1.64 |
| | α ∈ [0.1, 0.5] | High | 7.77 | 5.62 | 6.76 | 4.14 |
| | | Mid | 7.14 | 4.72 | 6.41 | 3.55 |
| | | Low | 6.95 | 4.52 | 6.31 | 3.27 |
| r = 10 | α ∈ [0.01, 0.1] | High | 2.10 | 1.67 | 1.64 | 1.01 |
| | | Mid | 1.84 | 1.11 | 1.54 | 0.77 |
| | | Low | 1.75 | 0.96 | 1.47 | 0.73 |
| | α ∈ [0.1, 0.2] | High | 4.90 | 2.73 | 4.16 | 2.06 |
| | | Mid | 4.42 | 2.04 | 3.85 | 1.70 |
| | | Low | 4.25 | 1.86 | 3.70 | 1.54 |
| | α ∈ [0.1, 0.5] | High | 7.34 | 4.94 | 6.33 | 3.57 |
| | | Mid | 6.83 | 4.12 | 6.09 | 3.05 |
| | | Low | 6.53 | 3.87 | 5.96 | 2.97 |
| r = 13 | α ∈ [0.01, 0.1] | High | 2.00 | 1.58 | 1.60 | 1.01 |
| | | Mid | 1.82 | 1.03 | 1.49 | 0.74 |
| | | Low | 1.78 | 0.97 | 1.45 | 0.71 |
| | α ∈ [0.1, 0.2] | High | 5.06 | 2.71 | 4.26 | 2.08 |
| | | Mid | 4.64 | 2.07 | 3.95 | 1.69 |
| | | Low | 4.27 | 1.96 | 3.71 | 1.59 |
| | α ∈ [0.1, 0.5] | High | 7.36 | 4.98 | 6.39 | 3.67 |
| | | Mid | 6.77 | 4.01 | 6.13 | 3.13 |
| | | Low | 6.27 | 4.08 | 5.73 | 2.91 |
| r = 16 | α ∈ [0.01, 0.1] | High | 2.06 | 1.74 | 1.63 | 1.01 |
| | | Mid | 1.82 | 1.03 | 1.50 | 0.74 |
| | | Low | 1.70 | 0.97 | 1.44 | 0.71 |
| | α ∈ [0.1, 0.2] | High | 5.05 | 2.69 | 4.16 | 2.06 |
| | | Mid | 4.58 | 2.12 | 3.95 | 1.71 |
| | | Low | 4.24 | 1.87 | 3.74 | 1.58 |
| | α ∈ [0.1, 0.5] | High | 7.11 | 4.84 | 6.26 | 3.60 |
| | | Mid | 6.77 | 4.07 | 6.01 | 3.14 |
| | | Low | 6.47 | 3.77 | 5.81 | 2.92 |
Table A6. Computation results of m of 20 and n of 80.

| Resource | Setup | Dedication | A1 (LFJ) (%) | A1 (LPT) (%) | A2 (LFJ) (%) | A2 (LPT) (%) |
|---|---|---|---|---|---|---|
| r = 7 | α ∈ [0.01, 0.1] | High | 1.69 | 0.95 | 1.34 | 0.63 |
| | | Mid | 1.54 | 0.68 | 1.25 | 0.50 |
| | | Low | 1.51 | 0.62 | 1.22 | 0.47 |
| | α ∈ [0.1, 0.2] | High | 4.09 | 1.79 | 3.34 | 1.41 |
| | | Mid | 3.61 | 1.42 | 3.18 | 1.18 |
| | | Low | 3.75 | 1.29 | 3.19 | 1.12 |
| | α ∈ [0.1, 0.5] | High | 6.41 | 3.64 | 5.53 | 2.76 |
| | | Mid | 6.19 | 3.25 | 5.52 | 2.57 |
| | | Low | 6.76 | 2.99 | 6.13 | 2.42 |
| r = 10 | α ∈ [0.01, 0.1] | High | 1.72 | 0.99 | 1.30 | 0.64 |
| | | Mid | 1.55 | 0.67 | 1.25 | 0.50 |
| | | Low | 1.50 | 0.60 | 1.22 | 0.46 |
| | α ∈ [0.1, 0.2] | High | 4.01 | 1.78 | 3.36 | 1.39 |
| | | Mid | 3.66 | 1.29 | 3.13 | 1.12 |
| | | Low | 3.44 | 1.21 | 3.01 | 1.06 |
| | α ∈ [0.1, 0.5] | High | 6.19 | 3.18 | 5.23 | 2.34 |
| | | Mid | 5.47 | 2.46 | 4.98 | 1.97 |
| | | Low | 5.52 | 2.31 | 5.03 | 1.85 |
| r = 13 | α ∈ [0.01, 0.1] | High | 1.73 | 1.01 | 1.35 | 0.65 |
| | | Mid | 1.49 | 0.66 | 1.19 | 0.50 |
| | | Low | 1.46 | 0.62 | 1.21 | 0.47 |
| | α ∈ [0.1, 0.2] | High | 3.97 | 1.73 | 3.34 | 1.38 |
| | | Mid | 3.70 | 1.27 | 3.19 | 1.09 |
| | | Low | 3.48 | 1.19 | 3.03 | 1.02 |
| | α ∈ [0.1, 0.5] | High | 5.90 | 3.12 | 5.06 | 2.29 |
| | | Mid | 5.51 | 2.49 | 4.95 | 1.98 |
| | | Low | 5.38 | 2.25 | 4.89 | 1.76 |
| r = 16 | α ∈ [0.01, 0.1] | High | 1.66 | 1.05 | 1.34 | 0.64 |
| | | Mid | 1.50 | 0.68 | 1.24 | 0.51 |
| | | Low | 1.49 | 0.62 | 1.21 | 0.47 |
| | α ∈ [0.1, 0.2] | High | 4.05 | 1.72 | 3.38 | 1.36 |
| | | Mid | 3.63 | 1.33 | 3.16 | 1.13 |
| | | Low | 3.53 | 1.16 | 3.07 | 1.02 |
| | α ∈ [0.1, 0.5] | High | 5.88 | 3.03 | 5.11 | 2.30 |
| | | Mid | 5.58 | 2.50 | 4.88 | 1.92 |
| | | Low | 5.47 | 2.30 | 4.94 | 1.80 |

References

  1. Energy Statistics Division of National Bureau of Statistics. China Energy Statistical Yearbook; China Statistics Press: Beijing, China, 2015. [Google Scholar]
  2. Zhang, Z.; Wu, L.; Peng, T.; Jia, S. An improved scheduling approach for minimizing total energy consumption and makespan in a flexible job shop environment. Sustainability 2019, 11, 179. [Google Scholar] [CrossRef] [Green Version]
  3. Ma, M.; Fan, H.; Jiang, X.; Guo, Z. Truck arrivals scheduling with vessel dependent time windows to reduce carbon emissions. Sustainability 2019, 11, 6410. [Google Scholar] [CrossRef] [Green Version]
  4. Torkjazi, M.; Huynh, N. Effectiveness of dynamic insertion scheduling strategy for demand-responsive paratransit vehicles using agent-based simulation. Sustainability 2019, 11, 5391. [Google Scholar] [CrossRef] [Green Version]
  5. Moon, J.Y.; Park, J. Smart production scheduling with time-dependent and machine-dependent electricity cost by considering distributed energy resources and energy storage. Int. J. Prod. Res. 2014, 52, 3922–3939. [Google Scholar] [CrossRef]
  6. Yalaoui, F.; Chu, C. An efficient heuristic approach for parallel machine scheduling with job splitting and sequence-dependent setup times. IIE Trans. 2003, 35, 183–190. [Google Scholar] [CrossRef]
  7. Kim, Y.D.; Shim, S.O.; Kim, S.B.; Choi, Y.C.; Yoon, H. Parallel machine scheduling considering a job-splitting property. Int. J. Prod. Res. 2004, 42, 4531–4546. [Google Scholar] [CrossRef]
  8. Graham, R.L. Bounds for certain multiprocessing anomalies. Bell Labs Tech. J. 1966, 45, 1563–1581. [Google Scholar] [CrossRef]
  9. Graham, R.L. Bounds on multiprocessing timing anomalies. SIAM J. Appl. Math. 1969, 17, 416–429. [Google Scholar] [CrossRef] [Green Version]
  10. Garey, M.R.; Graham, R.L. Bounds for multiprocessor scheduling with resource constraints. SIAM J. Comput. 1975, 4, 187–200. [Google Scholar] [CrossRef] [Green Version]
  11. Gonzalez, T.; Ibarra, O.H.; Sahni, S. Bounds for LPT schedules on uniform processors. SIAM J. Comput. 1977, 6, 155–166. [Google Scholar] [CrossRef]
  12. Friesen, D.K. Tighter bounds for LPT scheduling on uniform processors. SIAM J. Appl. Math. 1987, 16, 554–560. [Google Scholar] [CrossRef]
  13. Cho, Y.; Sahni, S. Bounds for LIST schedules on uniform processors. SIAM J. Comput. 1980, 9, 91–103. [Google Scholar] [CrossRef]
  14. Dessouky, M.I.; Lageweg, B.J.; Lenstra, J.K.; van de Velde, S.L. Scheduling identical jobs on uniform parallel machines. Stat. Neerl. 1990, 44, 115–123. [Google Scholar] [CrossRef] [Green Version]
  15. Dessouky, M.M. Scheduling identical jobs with unequal ready times on uniform parallel machines to minimize the maximum lateness. Comput. Ind. Eng. 1998, 34, 793–806. [Google Scholar] [CrossRef]
  16. Balakrishnan, N.; Kanet, J.J.; Sridharan, S.V. Early/tardy scheduling with sequence dependent setups on uniform parallel machines. Comput. Oper. Res. 1999, 26, 127–141. [Google Scholar] [CrossRef]
  17. Lee, W.C.; Chuang, M.C.; Yeh, W.C. Uniform parallel-machine scheduling to minimize makespan with position-based learning curves. Comput. Ind. Eng. 2012, 63, 813–818. [Google Scholar] [CrossRef]
  18. Elvikis, D.; Hamacher, H.W.; T’kindt, V. Scheduling two agents on uniform parallel machines with makespan and cost functions. J. Sched. 2011, 14, 471–481. [Google Scholar] [CrossRef] [Green Version]
  19. Elvikis, D.; T’kindt, V. Two-agent scheduling on uniform parallel machines with min-max criteria. Ann. Oper. Res. 2014, 213, 79–94. [Google Scholar] [CrossRef]
  20. Zhou, S.; Liu, M.; Chen, H.; Li, X. An effective discrete differential evolution algorithm for scheduling uniform parallel batch processing machines with non-identical capacities and arbitrary job sizes. Int. J. Prod. Econ. 2016, 179, 1–11. [Google Scholar] [CrossRef]
  21. Jiang, L.; Pei, J.; Liu, X.; Pardalos, P.M.; Yang, Y.; Qian, X. Uniform parallel batch machines scheduling considering transportation using a hybrid DPSO-GA algorithm. Int. J. Adv. Manuf. Technol. 2017, 89, 1887–1900. [Google Scholar] [CrossRef]
  22. Zeng, Y.; Che, A.; Wu, X. Bi-objective scheduling on uniform parallel machines considering electricity cost. Eng. Optim. 2018, 51, 19–36. [Google Scholar] [CrossRef]
  23. Serafini, P. Scheduling jobs on several machines with the job splitting property. Oper. Res. 1996, 44, 617–628. [Google Scholar] [CrossRef]
  24. Tahar, D.N.; Yalaoui, F.; Chu, C.; Amodeo, L. A linear programming approach for identical parallel machine scheduling with job splitting and sequence-dependent setup times. Int. J. Prod. Econ. 2006, 99, 63–73. [Google Scholar] [CrossRef]
  25. Shim, S.O.; Kim, Y.D. A branch and bound algorithm for an identical parallel machine scheduling problem with a job splitting property. Comput. Oper. Res. 2008, 35, 863–875. [Google Scholar] [CrossRef]
  26. Park, T.; Lee, T.; Kim, C.O. Due-date scheduling on parallel machines with job splitting and sequence-dependent major/minor setup times. Int. J. Adv. Manuf. Technol. 2012, 59, 325–333. [Google Scholar] [CrossRef]
  27. Wang, C.; Liu, C.; Zhang, Z.H.; Zheng, L. Minimizing the total completion time for parallel machine scheduling with job splitting and learning. Comput. Ind. Eng. 2012, 97, 170–182. [Google Scholar] [CrossRef]
  28. Kellerer, H.; Strusevich, V.A. Scheduling parallel dedicated machines under a single non-shared resource. Eur. J. Oper. Res. 2003, 147, 345–364. [Google Scholar] [CrossRef]
  29. Kellerer, H.; Strusevich, V.A. Scheduling problems for parallel dedicated machines under multiple resource constraints. Discret. Appl. Math. 2004, 133, 45–68. [Google Scholar] [CrossRef] [Green Version]
  30. Yeh, W.C.; Chuang, M.C.; Lee, W.C. Uniform parallel machine scheduling with resource consumption constraint. Appl. Math. Model. 2015, 39, 2131–2138. [Google Scholar] [CrossRef]
  31. Hall, N.G.; Potts, C.N.; Sriskandarajah, C. Parallel machine scheduling with a common server. Discret. Appl. Math. 2000, 102, 223–243. [Google Scholar] [CrossRef] [Green Version]
  32. Huang, S.; Cai, L.; Zhang, X. Parallel dedicated machine scheduling problem with sequence-dependent setups and a single server. Comput. Ind. Eng. 2010, 58, 165–174. [Google Scholar] [CrossRef]
  33. Jiang, Y.; Dong, J.; Ji, M. Preemptive scheduling on two parallel machines with a single server. Comput. Ind. Eng. 2013, 66, 514–518. [Google Scholar] [CrossRef]
  34. Hasani, K.; Kravchenko, S.A.; Werner, F. Simulated annealing and genetic algorithms for the two-machine scheduling problem with a single server. Int. J. Prod. Res. 2014, 52, 3778–3792. [Google Scholar] [CrossRef]
  35. Abdekhodaee, A.H.; Wirth, A.; Gan, H.S. Scheduling two parallel machines with a single server: The general case. Comput. Oper. Res. 2006, 33, 994–1009. [Google Scholar] [CrossRef]
  36. Cheng, T.C.E.; Kravchenko, S.A.; Lin, B.M.T. Preemptive parallel-machine scheduling with a common server to minimize makespan. Nav. Res. Logist. 2017, 64, 388–398. [Google Scholar] [CrossRef]
  37. Hamzadayi, A.; Yildiz, G. Modeling and solving static m identical parallel machines scheduling problem with a common server and sequence dependent setup times. Comput. Ind. Eng. 2017, 106, 287–298. [Google Scholar] [CrossRef]
  38. Ou, J.; Qi, X.; Lee, C.Y. Parallel machine scheduling with multiple unloading servers. J. Sched. 2010, 13, 213–226. [Google Scholar] [CrossRef]
  39. Cheng, T.C.E.; Sin, C.C.S. A state-of-the-art review of parallel-machine scheduling research. Eur. J. Oper. Res. 1990, 47, 271–292. [Google Scholar] [CrossRef]
  40. Li, K.; Yang, S. Non-identical parallel-machine scheduling research with minimizing total weighted completion times: Models, relaxations and algorithms. Appl. Math. Model. 2009, 33, 2145–2158. [Google Scholar] [CrossRef]
  41. Edis, E.B.; Oguz, C.; Ozkarahan, I. Parallel machine scheduling with additional resources: Notation, classification, models and solution methods. Eur. J. Oper. Res. 2013, 230, 449–463. [Google Scholar] [CrossRef]
  42. Pinedo, M.L. Scheduling: Theory, Algorithms, and Systems; Springer: New York, NY, USA, 2012. [Google Scholar]
  43. Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. 1955, 2, 83–97. [Google Scholar] [CrossRef] [Green Version]
Figure 1. An example in FFU manufacturing.
Figure 2. Schedules in parallel machines.
Figure 3. An example of Step 4 in Algorithm 1.
Figure 4. Schedules of proposed priority rules.
Figure 5. Experimental results with m of 10 and n of 40; in each graph, the x-axis indicates the number of setup resources, r, and the y-axis denotes gaps as defined in Equation (23).
Figure 6. Experimental results with m of 10 and n of 60; in each graph, the x-axis indicates the number of setup resources, r, and the y-axis denotes gaps as defined in Equation (23).
Figure 7. Experimental results with m of 10 and n of 80; in each graph, the x-axis indicates the number of setup resources, r, and the y-axis denotes gaps as defined in Equation (23).
Table 1. Symbols and descriptions for the mathematical programming model.

| Symbol | Description |
|---|---|
| C_max | maximum completion time of machines |
| H | scheduling horizon; a given parameter |
| x_{ijt} | 1 if a setup for job j starts on machine i at time t, and 0 otherwise |
| y_{ij} | 1 if a section of job j is processed on machine i, and 0 otherwise |
| L_{ij} | length of a section of job j processed on machine i |
| z_{ijk} | 1 if a section of job k is processed right after a section of job j on machine i, and 0 otherwise |
| R_{ijt} | 1 if machine i is under the setup for a section of job j between t and t + 1, and 0 otherwise |
| p_j | processing time of job j when the machine speed is 1 |
| v_i | relative processing speed of machine i |
| s_j | setup time for job j; sequence-independent |
| N | set of jobs; {1, …, n} |
| M_j | set of machines that can process job j |
| J_i | set of jobs that can be processed on machine i |
Table 2. Summary of computational results with m of 10. All values are average gaps (%); the two sub-columns of each algorithm correspond to the machine speed ranges 0.8–1.2 and 0.5–1.5.

| Instances | | A1 (LFJ) 0.8–1.2 | A1 (LFJ) 0.5–1.5 | A1 (LPT) 0.8–1.2 | A1 (LPT) 0.5–1.5 | A2 (LFJ) 0.8–1.2 | A2 (LFJ) 0.5–1.5 | A2 (LPT) 0.8–1.2 | A2 (LPT) 0.5–1.5 |
|---|---|---|---|---|---|---|---|---|---|
| Dedication level | High | 3.48 | 4.28 | 3.05 | 4.55 | 2.79 | 3.45 | 1.80 | 2.84 |
| | Mid | 3.28 | 3.86 | 1.52 | 2.30 | 2.80 | 3.29 | 1.17 | 1.90 |
| | Low | 2.58 | 3.24 | 1.25 | 2.00 | 2.16 | 2.71 | 1.02 | 1.74 |
| Setup range | α ∈ [0.01, 0.1] | 1.17 | 1.52 | 1.12 | 1.74 | 0.87 | 1.15 | 0.57 | 0.99 |
| | α ∈ [0.1, 0.2] | 2.87 | 3.63 | 1.74 | 2.77 | 2.35 | 3.03 | 1.21 | 2.08 |
| | α ∈ [0.1, 0.5] | 5.31 | 6.23 | 2.96 | 4.34 | 4.53 | 5.28 | 2.21 | 3.42 |
| Resources | 3 | 4.59 | 4.96 | 2.82 | 3.48 | 3.86 | 4.13 | 1.98 | 2.63 |
| | 5 | 2.77 | 3.48 | 1.70 | 2.87 | 2.28 | 2.88 | 1.17 | 2.05 |
| | 7 | 2.56 | 3.40 | 1.60 | 2.70 | 2.10 | 2.81 | 1.09 | 1.99 |
| | 9 | 2.54 | 3.33 | 1.64 | 2.75 | 2.10 | 2.77 | 1.08 | 1.97 |
| No. of jobs | 40 | 3.87 | 4.70 | 2.85 | 4.23 | 3.23 | 3.87 | 1.88 | 2.93 |
| | 60 | 2.94 | 3.58 | 1.71 | 2.56 | 2.42 | 2.97 | 1.20 | 1.94 |
| | 80 | 2.53 | 3.10 | 1.27 | 2.06 | 2.10 | 2.61 | 0.92 | 1.62 |
Table 3. Summary of computational results with m of 5.
| Factor | Level | A1 (LFJ) (%) | A1 (LPT) (%) | A2 (LFJ) (%) | A2 (LPT) (%) |
|---|---|---|---|---|---|
| Dedication level | High | 1.73 | 3.17 | 1.36 | 1.45 |
| Dedication level | Mid | 1.40 | 1.20 | 1.12 | 0.73 |
| Dedication level | Low | 0.85 | 0.71 | 0.70 | 0.59 |
| Setup range | α ∈ [0.01, 0.1] | 0.46 | 1.30 | 0.34 | 0.50 |
| Setup range | α ∈ [0.1, 0.2] | 1.18 | 1.54 | 0.95 | 0.83 |
| Setup range | α ∈ [0.1, 0.5] | 2.33 | 2.25 | 1.90 | 1.45 |
| Resources | 2 | 2.27 | 2.25 | 1.79 | 1.41 |
| Resources | 3 | 1.17 | 1.60 | 0.94 | 0.82 |
| Resources | 4 | 0.95 | 1.49 | 0.78 | 0.75 |
| Resources | 5 | 0.91 | 1.44 | 0.74 | 0.71 |
| No. of jobs | 40 | 1.69 | 2.55 | 1.37 | 1.38 |
| No. of jobs | 60 | 1.24 | 1.45 | 0.98 | 0.78 |
| No. of jobs | 80 | 1.05 | 1.09 | 0.83 | 0.61 |
Table 4. Summary of computational results with m of 20.
| Factor | Level | A1 (LFJ) (%) | A1 (LPT) (%) | A2 (LFJ) (%) | A2 (LPT) (%) |
|---|---|---|---|---|---|
| Dedication level | High | 5.14 | 3.85 | 4.29 | 2.76 |
| Dedication level | Mid | 4.61 | 2.98 | 4.03 | 2.32 |
| Dedication level | Low | 4.38 | 2.79 | 3.89 | 2.21 |
| Setup range | α ∈ [0.01, 0.1] | 2.05 | 1.55 | 1.61 | 1.02 |
| Setup range | α ∈ [0.1, 0.2] | 4.79 | 2.95 | 4.11 | 2.29 |
| Setup range | α ∈ [0.1, 0.5] | 7.30 | 5.13 | 6.49 | 3.98 |
| Resources | 7 | 4.86 | 3.38 | 4.20 | 2.54 |
| Resources | 10 | 4.69 | 3.17 | 4.03 | 2.41 |
| Resources | 13 | 4.65 | 3.16 | 4.02 | 2.38 |
| Resources | 16 | 4.66 | 3.14 | 4.02 | 2.39 |
| No. of jobs | 40 | 5.94 | 5.32 | 5.11 | 4.01 |
| No. of jobs | 60 | 4.47 | 2.65 | 3.89 | 1.98 |
| No. of jobs | 80 | 3.72 | 1.66 | 3.21 | 1.30 |
Table 5. Computation results of extreme cases.
| Instance | n | A1 (LFJ) (%) | A1 (LPT) (%) | A2 (LFJ) (%) | A2 (LPT) (%) |
|---|---|---|---|---|---|
| m = 5, r = 1 | 40 | 9.96 | 8.75 | 8.67 | 7.13 |
| m = 5, r = 1 | 60 | 9.66 | 7.86 | 8.45 | 6.77 |
| m = 5, r = 1 | 80 | 9.02 | 7.23 | 7.81 | 6.29 |
| m = 10, r = 2 | 40 | 8.49 | 6.34 | 7.38 | 4.54 |
| m = 10, r = 2 | 60 | 9.46 | 5.84 | 8.35 | 4.64 |
| m = 10, r = 2 | 80 | 10.20 | 5.86 | 9.08 | 4.89 |
| m = 20, r = 4 | 40 | 7.16 | 6.01 | 6.20 | 4.44 |
| m = 20, r = 4 | 60 | 6.83 | 4.34 | 6.01 | 3.17 |
| m = 20, r = 4 | 80 | 7.47 | 4.48 | 6.60 | 3.25 |
Table 6. Performance of the algorithm with a real application.
| Instance | n | A1 (LFJ) (%) | A1 (LPT) (%) | A2 (LFJ) (%) | A2 (LPT) (%) |
|---|---|---|---|---|---|
| m = 7, r = 2, α ∈ [0.1, 0.2] | 20 | 6.40 | 5.35 | 6.02 | 3.68 |
| m = 7, r = 2, α ∈ [0.1, 0.2] | 30 | 5.01 | 4.24 | 3.91 | 2.61 |
| m = 7, r = 2, α ∈ [0.1, 0.2] | 40 | 4.84 | 3.98 | 3.27 | 2.41 |

Share and Cite

Lee, J.-H.; Jang, H. Uniform Parallel Machine Scheduling with Dedicated Machines, Job Splitting and Setup Resources. Sustainability 2019, 11, 7137. https://doi.org/10.3390/su11247137

