Article

A Meta-Model-Based Multi-Objective Evolutionary Approach to Robust Job Shop Scheduling

1 Department of Industrial Engineering, Northwestern Polytechnical University, Xi’an 710072, China
2 Laboratoire Genie Industriel, CentraleSupélec, Université Paris-Saclay, 91190 Saint-Aubin, France
3 Key Laboratory of Information Fusion Technology (Ministry of Education), School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
* Authors to whom correspondence should be addressed.
Mathematics 2019, 7(6), 529; https://doi.org/10.3390/math7060529
Submission received: 15 May 2019 / Revised: 3 June 2019 / Accepted: 5 June 2019 / Published: 10 June 2019

Abstract

In the real-world manufacturing system, various uncertain events can occur and disrupt the normal production activities. This paper addresses the multi-objective job shop scheduling problem with random machine breakdowns. As the key of our approach, the robustness of a schedule is considered jointly with the makespan and is defined as the expected makespan delay, for which a meta-model is designed by using a data-driven response surface method. Correspondingly, a multi-objective evolutionary algorithm (MOEA) is proposed based on the meta-model to solve the multi-objective optimization problem. Extensive experiments based on the job shop benchmark problems are conducted. The results demonstrate that the Pareto solution sets of the MOEA are much better in both convergence and diversity than those of the algorithms based on the existing slack-based surrogate measures. The MOEA is also compared with the algorithm based on Monte Carlo approximation, showing that their Pareto solution sets are close to each other while the MOEA is much more computationally efficient.

1. Introduction

Production scheduling is of great significance in both scientific study and engineering applications [1,2,3,4,5]. Generally, the aim of the job shop scheduling problem (JSS) is to find a schedule that minimizes a given performance objective for a set of jobs to be processed on a set of machines. However, in practice, the execution of a schedule is usually confronted with disruptions and unforeseen events, such as random machine breakdowns (RMDs), which make the actual performance of a schedule hard to predict. Against this background, we focus on the multi-objective robust JSS under RMDs, with the goal of optimizing the makespan and the robustness simultaneously.
In the last few decades, the JSS problem has been extensively studied, with most work addressing the JSS with makespan as the objective [6,7,8], such as the hybrid genetic algorithm [9], the genetic algorithm [10] with search area adaptation [11], the global optimization technique combining tabu search with ant colony optimization [12], and the memetic algorithm conditioned on a limited set of human operators [13]. However, these studies usually assume that the problem parameters concerning jobs and machines are known and deterministic. This makes it difficult to generate a good schedule for a real-world job shop, which is subject to various uncertainties [14], such as machine breakdowns, variable processing times, and due date changes. Mehta et al. [15] classified the uncertainties in practical manufacturing into three main categories: complete unknowns, suspicions about the future, and known uncertainties. Complete unknowns are unpredictable events, e.g., a sudden strike, about which no a priori information is available, while suspicions about the future arise from the intuition and experience of the human scheduler. Known uncertainties, in contrast, are events about which some information is available in advance, such as machine breakdowns [16,17,18], whose frequency and duration may be characterized by probability distributions. Under these uncertainties, a schedule is difficult to execute as planned, and the actual performance of the schedule deteriorates.
Recently, robust optimization has attracted intensive interest [19], with most existing studies addressing scheduling problems under known uncertainties. The robustness of a schedule indicates its ability to preserve a specified level of solution quality in the presence of uncertainties [20], and is generally measured by the expected deviation of the realized performance from the initial performance [21]. Liu et al. [22] defined a robust schedule as one that is insensitive to uncertainties, i.e., whose performance degrades only slightly under disruptions. Thus, in addition to the makespan, the robustness of a schedule is taken as one of the objectives in robust scheduling. Xiao et al. [23] addressed the stochastic JSS problem with uncertain processing times, defining robustness as the expected relative deviation between the realized makespan and the predictive makespan. Zuo et al. [24] considered both the expectation and the standard deviation of the performance of a schedule. Ahmadi et al. [25] defined the robustness of a schedule as the expected deviation of the starting and completion times of each job between the preschedule and the realized schedule under RMDs.
With simultaneous consideration of the performance and the robustness of a schedule, the JSS under uncertainties has a multi-objective nature. Usually, the two objectives are combined by a weighted sum, transforming the bi-objective problem into a single-objective one, as described in [26,27]. However, providing a wide range of solutions to decision-makers may be more useful [20], since it allows them to make a better trade-off between performance and robustness. Multi-objective evolutionary algorithms (MOEAs), such as NSGA-II [28,29,30], have been successfully applied to the classic JSS without uncertainties [31]. In addition, Hosseinabadi et al. [32] proposed the TIME_GELS algorithm, which uses gravitational emulation local search [33] to solve the multi-objective flexible dynamic job-shop scheduling problem. However, when robustness is considered, an MOEA must also be able to solve the problem in the presence of uncertainties [34].
Because of the intractable complexity of the JSS with uncertainties, it is difficult to evaluate the effects of uncertainties on a schedule, and thus the robustness is not available in closed form. In this case, approximation methods must be applied for fitness evaluation in the MOEA. The simplest way is to employ the Monte Carlo method [35,36] to estimate the robustness by averaging the objective values over a number of randomly sampled uncertainty scenarios. The Monte Carlo method is based on random sampling and has proven to be a powerful estimation tool [37,38,39]. However, it may cause expensive fitness evaluations and reduce computational efficiency. For this reason, problem approximation, which replaces the original statement of a problem by one that is simpler and easier to solve, has been applied. Mirabi et al. [40] simplified the machine breakdown model by assuming a constant repair time, while Liu et al. [41] assumed that all possible machine breakdowns during a scheduling horizon are aggregated into one. However, this may lead to a large mismatch between the model and the actual problem. In contrast, a meta-model, i.e., an approximate function of the real fitness, also known as a surrogate measure, may be preferred. The design of a meta-model for the robustness is challenging, however. So far, the available surrogate measures for the robustness measured by the expected makespan delay (EMD) are limited to the average total slack time of operations in [42] and the sum of free slack times of operations in [26]. Since these surrogate measures ignore uncertainties, their estimation accuracy is reduced [43].
In comparison with the existing research, the main contributions of our approach are as follows:
  • The robustness of a schedule is considered jointly with the makespan and is defined as EMD, for which a meta-model is designed by using a data-driven response surface method.
  • An MOEA based on the meta-model is proposed, achieving excellent performance and efficiency.
The remainder of this paper is organized as follows. The multi-objective optimization model for the JSS under RMDs is defined in the next section. In Section 3, a meta-model-based MOEA is proposed. The performance of the proposed algorithm is presented in Section 4, including a comparison with the Monte Carlo method. Section 5 concludes the study.

2. Problem Definition

We consider the JSS problem with n jobs J = {J_j | j = 1, 2, …, n} to be processed on m machines M = {M_i | i = 1, 2, …, m}. All jobs and machines are available at time zero. The processing of job j on machine i is called operation O_ij, i ∈ [1, m], j ∈ [1, n], and its processing time p_ij is constant and known. Each job consists of m operations (O_j = {O_ij | i = 1, 2, …, m}, j ∈ [1, n]) that must be processed in a specified sequence through the machines. A feasible schedule that specifies the starting and completion times of all operations must satisfy the following constraints: (1) job splitting is not allowed; (2) an operation may not be preempted by another; (3) each operation is performed only once on one machine; and (4) each machine performs only one operation at a time.
In this study, the operation-based machine breakdown model presented in [21] is applied. More specifically, a machine breakdown is modeled by two parameters: the downtime required to repair the machine after a breakdown, and the breakdown probability during a time interval. The machine breakdown probability Pr_ij when processing operation O_ij is calculated by Equation (1), where λ_0 is the failure rate of each machine.
$$\mathrm{Pr}_{ij} = \min\{\lambda_0 p_{ij},\ 1\} \qquad (1)$$
The downtime D_ij of a machine after a breakdown when processing operation O_ij follows an exponential distribution, as shown in Equation (2), where β_0 is the expected downtime.
$$f(D_{ij}) = \begin{cases} \dfrac{1}{\beta_0}\, e^{-D_{ij}/\beta_0}, & D_{ij} > 0 \\ 0, & D_{ij} \le 0 \end{cases} \qquad (2)$$
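The breakdown model in Equations (1) and (2) is straightforward to sample. A minimal Python sketch (our own illustration, not the authors' implementation; the function names are ours):

```python
import random

def breakdown_probability(p_ij, lam0):
    """Equation (1): probability of a breakdown while processing O_ij."""
    return min(lam0 * p_ij, 1.0)

def sample_downtime(beta0, rng=random):
    """Equation (2): downtime after a breakdown, exponential with mean beta0."""
    return rng.expovariate(1.0 / beta0)
```

For example, with the failure rate λ_0 = 0.005 used in the experiments, an operation with processing time 10 breaks down with probability 0.05, and each downtime sample has an expected value of β_0.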
In the classic JSS problems without uncertainties, the aim of scheduling is generally to find a schedule that minimizes the makespan. The makespan of a schedule equals the maximum completion time over all operations in the schedule, determined by Equation (3), where ct_ij is the completion time of operation O_ij.
$$C_{\max}^0 = \max_{i \in M}\, \max_{j \in J}\, ct_{ij} \qquad (3)$$
However, the makespan of a schedule is affected by RMDs in practice [16,17,18]. Since RMDs postpone the completion times of operations, the actual makespan C_max^r of a schedule will be delayed. As shown in Figure 1a, the makespan C_max^0 of the schedule before execution is 10. If a machine breakdown with one unit of downtime occurs on operation O_21, as shown in Figure 1b, its completion time is directly postponed by one unit of time. Then, the completion times of all its subsequent operations {O_22, O_11, O_23, O_12, O_31} are also delayed by one unit of time. Finally, the actual makespan C_max^r of the schedule is 11, i.e., delayed by one unit of time.
According to the analysis above, the influence of machine breakdowns on the makespan of a schedule can be captured by the makespan delay δ_c in Equation (4).
$$\delta_c = \max(C_{\max}^r - C_{\max}^0,\ 0) \qquad (4)$$
Since machine breakdowns occur randomly, the makespan delay of a schedule varies with the actual scenario of machine breakdowns, which makes the actual makespan highly unstable. This stochastic variation of the actual makespan degrades the performance of a schedule; for example, products may not be delivered on time, damaging customer goodwill. It is therefore preferable that the makespan of a schedule be robust under RMDs. To measure the robustness of the makespan, the EMD Δ_c in Equation (5) is applied.
$$\Delta_c = E(\delta_c) = \int_0^{+\infty} \delta_c\, f(\delta_c)\, d\delta_c \qquad (5)$$
However, a schedule with the minimum makespan is generally very compact, which means it may be very sensitive to RMDs and thus have a large EMD. Thus, the makespan C_max^0 of a schedule conflicts with the EMD. Since different decision-makers have different preferences regarding the makespan and the makespan delay, it is worth providing a wide range of schedules so that decision-makers can make the best trade-off. Considering the makespan and the EMD of a schedule at the same time, the JSS under RMDs can be modeled as a multi-objective optimization problem. Let A be the set of precedence constraints (O_ij, O_kj) that require job j to be processed on machine i before it is processed on machine k. The multi-objective optimization model is then as follows:
Minimize:
$$F = (C_{\max}^0,\ \Delta_c) \qquad (6)$$
Subject to:
$$st_{kj} - st_{ij} \ge p_{ij}, \quad \text{for all } (O_{ij},\, O_{kj}) \in A \qquad (7)$$
$$st_{ij} - st_{il} \ge p_{il} \ \text{ or } \ st_{il} - st_{ij} \ge p_{ij}, \quad \text{for all } O_{ij} \text{ and } O_{il} \qquad (8)$$
$$st_{ij} \ge 0, \quad \text{for all } O_{ij} \qquad (9)$$
$$\lambda_i = \lambda_0, \quad \text{for all } i \in M \qquad (10)$$
$$D_{ij} \sim \mathrm{Exp}(1/\beta_0), \quad \text{for all } O_{ij} \qquad (11)$$

3. The Meta-Model Based MOEA

When solving the multi-objective optimization model in Section 2 with an MOEA, the primary task is to evaluate the fitness of each individual in a population. However, the EMD in Equation (5) cannot be analytically calculated because of the intractable complexity of the JSS under RMDs. Although the commonly-used Monte Carlo simulation can approximate the EMD, it makes the MOEA inefficient, since evaluating every single individual is very time-consuming, especially for larger-scale problems. In view of this, a meta-model-based MOEA is proposed to solve the multi-objective optimization model for the robust JSS.

3.1. Framework of the Algorithm

The meta-model-based MOEA is designed according to the basic framework of the classic NSGA-II [28]. As shown in Figure 2, the algorithm begins with an initial population P0 of N randomly generated individuals. Before the genetic operators are executed, the meta-model Δ_c^a of Δ_c is constructed based on the initial population P0. Then, the selection, crossover, and mutation operators are applied to the current population Pk to generate new individuals and construct a combined population Rk+1. The fitness of the individuals in Rk+1 is first evaluated by the makespan C_max^0 and the proposed meta-model Δ_c^a, and then individual-based evolution control is applied to update the fitness of some individuals. Finally, the next generation population Pk+1 is formed according to the ranks of the individuals. When the maximum generation number is reached, the algorithm stops and returns the obtained Pareto solution set.

3.2. Meta-Model of the EMD

The meta-model for the EMD is constructed based on the response surface methodology. As shown in Equation (12), this method applies a quadratic polynomial f̂(x) to approximate the functional relation between the input x and the output y of a system,
$$y \approx \hat{f}(x) = a_0 + B x^{T} + x C x^{T}, \qquad (12)$$
where x is the input variable vector with v variables, as shown in Equation (13), a_0 is the constant term, B is the coefficient vector of the linear terms, as shown in Equation (14), and C is the coefficient matrix of the quadratic terms, as shown in Equation (15).
$$x = (x_1, x_2, \dots, x_v) \qquad (13)$$
$$B = (b_1, b_2, \dots, b_v) \qquad (14)$$
$$C = \begin{pmatrix} c_{11} & c_{12} & \cdots & c_{1v} \\ & c_{22} & \cdots & c_{2v} \\ & & \ddots & \vdots \\ & & & c_{vv} \end{pmatrix} \qquad (15)$$
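Since the response surface (12) is linear in the coefficients a_0, B, and C, fitting it reduces to linear regression over an expanded feature row [1, x_1, …, x_v, x_i·x_j for i ≤ j]. A sketch of this expansion (our own helper, not part of the paper):

```python
def quadratic_features(x):
    """Expand x into [1, x_1..x_v, x_i*x_j for i <= j], matching
    a_0 + B x^T + x C x^T with an upper-triangular C as in Equation (15)."""
    v = len(x)
    row = [1.0] + list(x)
    for i in range(v):
        for j in range(i, v):  # j >= i covers the upper triangle incl. diagonal
            row.append(x[i] * x[j])
    return row
```

For v variables the row has 1 + v + v(v+1)/2 entries, i.e., 28 coefficients for the six-variable meta-model introduced below.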
However, a schedule cannot be directly taken as an input variable of the EMD, since it is not a numeric quantity. Therefore, to construct a meta-model for the EMD by the response surface method, the primary task is to extract features related to the EMD from the schedule. To this end, we further analyze how RMDs affect the makespan of a schedule. A feasible job shop schedule is determined by the process constraints and the resource constraints. As a result, machines may have some idle time during a schedule period, as shown in Figure 3a. In the classic JSS problems, the idle time is reduced as much as possible to minimize the makespan and improve machine utilization. However, when RMDs are considered, the idle time can help an operation absorb the influence of machine breakdowns on the makespan of a schedule and thereby improve the robustness of the makespan.
The available idle time of operations in a schedule can be classified into two types: the free slack time and the total slack time. The former is the time by which an operation can be delayed without delaying the start of its immediately succeeding operations, while the latter is the difference between the earliest and latest starting times of an operation without delaying the makespan. Taking the schedule in Figure 3a as an example, the free slack times of operations {O_13, O_11, O_12, O_21, O_22, O_23, O_32, O_33, O_31} are {0, 0, 1, 0, 0, 1, 0, 1, 0}, respectively. The earliest and latest starting times of these operations without delaying the makespan are {0, 2, 6, 0, 2, 5, 0, 1, 6} and {1, 2, 7, 0, 3, 6, 2, 3, 6}, respectively. Therefore, the total slack times are {1, 0, 1, 0, 1, 1, 2, 2, 0}, respectively.
Clearly, not all operations have free/total slack time. For an operation without slack time, the makespan of a schedule is directly delayed when an RMD takes place on it. As shown in Figure 3b, when an RMD with one unit of downtime takes place on operation O_11, the actual makespan C_max^r becomes 11, i.e., it is directly delayed by one unit of time. However, when an RMD takes place on an operation with slack time, the makespan is not delayed until the slack time of this operation is used up. As shown in Figure 3c, after an RMD with one unit of downtime takes place on operation O_33, which has two units of free slack time, the makespan remains 10, since the free slack time of this operation is larger than the downtime of the breakdown. On the other hand, although operation O_22 has no free slack time, the makespan can also be protected by the total slack time of the operation, as shown in Figure 3d.
According to the analysis above, the makespan delay of a schedule depends on the machine breakdown level and the free and total slack times of each operation. In view of this, we extract the mathematical features of a schedule under RMDs from three aspects: the RMDs, the set O_y of operations with slack time, and the set O_n of operations without slack time. For the RMDs, the machine failure rate λ_0 and the expected downtime β_0 after a breakdown are taken. For the operations in O_y, the processing time p_ij, free slack time fs_ij, and total slack time ts_ij are taken. For the operations in O_n, only the processing time p_ij is taken.
However, it is practically impossible to take all the processing times, free slack times, and total slack times of operations as input variables of the EMD. To reduce the number of input variables, we aggregate these basic features into comprehensive features. Formally, the sums of processing times ps_y and ps_n represent the processing times of the operations in the sets O_y and O_n, respectively. For the slack time, the average free slack time fs_a and the average total slack time ts_a of the operations in O_y are applied.
Finally, the input variables of the EMD are the machine failure rate λ_0, the expected downtime β_0, the sum of processing times ps_n, the sum of processing times ps_y, the average free slack time fs_a, and the average total slack time ts_a. The input variable vector x of the EMD is thus x = (x_1, x_2, x_3, x_4, x_5, x_6) = (λ_0, β_0, ps_n, ps_y, fs_a, ts_a), and the meta-model Δ_c^a of Δ_c is defined by Equation (16).
$$\Delta_c \approx \Delta_c^a = a_0 + \sum_{i=1}^{6} b_i x_i + \sum_{i=1}^{6} \sum_{j=i}^{6} c_{ij} x_i x_j \qquad (16)$$
The coefficients must then be determined to finalize the meta-model Δ_c^a. As shown in Algorithm 1, the procedure takes the initial population P0 and the input variable vector x as inputs, and outputs the meta-model Δ_c^a with the determined coefficients. First, a training data set D_c containing N data instances is generated from the initial population P0. A data instance I_i = (x_i, Δ_c^i) consists of the values of the input variable vector x_i and the corresponding Δ_c^i. The values of x_i can be determined once a schedule s_i is generated from the ith individual in P0. Since the EMD cannot be analytically calculated, it is evaluated by the Monte Carlo approximation Δ_c^sim in Equation (17). After the training data set D_c is constructed, multiple linear regression is used to determine the coefficients of the meta-model Δ_c^a for the EMD,
$$\Delta_c^{\mathrm{sim}} = \frac{1}{N_s} \sum_{i=1}^{N_s} \left( C_{\max}^i - C_{\max}^0 \right), \qquad (17)$$
where N_s is the number of simulations and C_max^i is the makespan of the schedule in the ith simulation.
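As an illustration of Equation (17), consider a toy single-machine schedule with no slack, so that every sampled downtime delays the makespan directly. This is a deliberate simplification of our own (real schedules absorb part of the downtime in slack), intended only to show the structure of the estimator:

```python
import random

def estimate_emd(proc_times, lam0, beta0, n_sim, seed=0):
    """Monte Carlo approximation of the EMD (Equation (17)) for a toy
    single-machine schedule without slack."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        delay = 0.0
        for p in proc_times:
            if rng.random() < min(lam0 * p, 1.0):      # Equation (1)
                delay += rng.expovariate(1.0 / beta0)  # Equation (2)
        total += delay  # equals C_max^i - C_max^0 in this no-slack case
    return total / n_sim
```

With λ_0 = 0 the estimate is exactly zero; with breakdowns on every operation it converges to the number of operations times β_0.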
Algorithm 1 The pseudo-code to finalize the meta-model.
Inputs: the initial population P0 and the input variant vector x
Outputs: the meta-model Δ c a with the determined coefficients
1: Set the training data set D_c = ∅;
2: Generate Ns machine breakdown scenarios with the machine failure rate λ 0 and the expected downtime β 0 ;
3: for i = 1 to N
4:   Generate the schedule s i based on the ith individual in P0;
5:   Determine the makespan C max 0 of the schedule s i by Equation (3);
6:   Calculate the free slack time f s i j and the total slack time t s i j in schedule s i ;
7:   Calculate the sum of processing time p s n and p s y ;
8:   Calculate the average free slack time f s a and the average total slack time t s a ;
9:   Determine the values x i of the input variant vector x = ( λ 0 , β 0 , p s n , p s y , f s a , t s a ) ;
10:   Determine the EMD Δ_c^i = Δ_c^sim by Equation (17);
11:   Generate the ith data instance I i = ( x i , Δ c i ) ;
12:   Update the training data set D_c = D_c ∪ {I_i};
13: end for
14: Based on the training data set D c , apply the multiple linear regression to determine the coefficients of the meta-model Δ c a in Equation (16);
15: Return the finalized meta-model Δ c a .
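Step 14 of Algorithm 1 fits the coefficients by multiple linear regression. Because the meta-model (16) is linear in its coefficients, this reduces to ordinary least squares; a dependency-free sketch of our own (each feature row is assumed to already contain the constant and quadratic terms):

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) w = X^T y,
    solved by Gaussian elimination with partial pivoting."""
    n = len(X[0])
    # Build the normal-equation system A w = b.
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    # Forward elimination.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * w[j] for j in range(i + 1, n))
        w[i] = (b[i] - s) / A[i][i]
    return w
```

In practice any regression routine would do; the point is that the training pairs (x_i, Δ_c^i) from Algorithm 1 fully determine the coefficients of Δ_c^a.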

3.3. Fitness Evaluation

To compare the fitness of different individuals, the makespan and the EMD of each individual in a population must be evaluated. Once a schedule is generated, the makespan can be directly determined by Equation (3). However, the EMD cannot be analytically calculated due to the complexity of the JSS. Although it can be effectively approximated by the time-consuming Monte Carlo approximation in Equation (17), this would significantly reduce the efficiency of the algorithm. In view of this, we apply the proposed meta-model Δ_c^a in Equation (16) to approximate the EMD. The basic motivation for using the meta-model in the fitness evaluation is to reduce the number of expensive fitness evaluations without degrading the quality of the obtained optimal solutions.
Based on the proposed meta-model Δ_c^a, Algorithm 2 provides the fitness set F_k+1 = {(C_max^0(s_1), Δ_c^a(s_1)), …, (C_max^0(s_i), Δ_c^a(s_i)), …, (C_max^0(s_2N), Δ_c^a(s_2N))} of the individuals in the combined population Rk+1, where F_k+1^i = (C_max^0(s_i), Δ_c^a(s_i)) represents the fitness of the ith individual in Rk+1, with the makespan F_k+1^i(1) = C_max^0(s_i) and the EMD F_k+1^i(2) = Δ_c^a(s_i).
Algorithm 2 Fitness evaluation for the combined population
Inputs: the combined population Rk+1
Outputs: the fitness set F k + 1 of the individuals in Rk+1
1: Set the fitness set F_k+1 = ∅;
2: for i = 1 to 2 N
3:   Select the ith individual c h m i from the population Rk+1;
4:   Generate the schedule s i based on the individual c h m i ;
5:   Evaluate the makespan C max 0 , i of the schedule s i by Equation (3);
6:   Get machine failure rate λ 0 and expected downtime β 0 ;
7:   Calculate the free slack time f s i j and the total slack time t s i j in the schedule s i ;
8:   Calculate the sum of processing time p s n and p s y ;
9:   Calculate the average free slack time f s a and the average total slack time t s a ;
10:   Determine the values x i of the input variant vector x = ( λ 0 , β 0 , p s n , p s y , f s a , t s a )
11:   Evaluate the EMD by Δ c a ( s i ) in Equation (16) with the values of x i ;
12:   Update the fitness set F_k+1 = F_k+1 ∪ {(C_max^0(s_i), Δ_c^a(s_i))};
13: end for
14: Return the fitness set F k + 1 ;

3.4. Individual-Based Evolution Control

Generally, the approximate model is assumed to be of high fidelity and, therefore, the real fitness function is not used at all during the evolution [20]. However, an evolutionary algorithm that uses meta-models without controlling the evolution with the real fitness function runs the risk of incorrect convergence [28]. For this reason, the meta-model is combined with the real fitness function in our algorithm, a strategy often known as evolution control or model management.
As shown in Algorithm 3, an individual-based evolution control framework is applied, which chooses the best individuals according to the pre-evaluation by the meta-model Δ_c^a for re-evaluation with the real fitness function. To this end, the fitness set F_k+1 is first ranked by fast non-dominated sorting. Then, the individuals with rank(F_k+1^i) = 1 are re-evaluated by the Monte Carlo approximation Δ_c^sim in Equation (17). In addition, to avoid unnecessary simulation time, duplicate individuals are evaluated only once.
Algorithm 3 Individual-based evolution control framework
Inputs: the fitness set F k + 1 of the combined population Rk+1
Outputs: the modified fitness set F ^ k + 1
1: Generate Ns scenarios with the machine failure rate λ 0 and the expected downtime β 0 ;
2: Rank the fitness set F k + 1 by the fast non-dominated sorting;
3: Sort the fitness set F k + 1 in the ascending lexicographic order of the rank, the makespan and the EMD;
4: Set F ^ k + 1 = F k + 1 ;
5: for i = 1 to 2 N
6:   if r a n k ( F k + 1 i ) > 1
7:     Break;
8:   else if F k + 1 i ( 1 ) = F k + 1 i 1 ( 1 ) and F k + 1 i ( 2 ) = F k + 1 i 1 ( 2 )
9:     Update the fitness F ^ k + 1 i by F ^ k + 1 i ( 2 ) = F ^ k + 1 i 1 ( 2 ) ;
10:   else
11:     Generate the schedule s i of the individual associated by F k + 1 i in the Rk+1;
12:     Evaluate the EMD by the Monte Carlo method Δ c sim in Equation (17);
13:     Update the fitness F ^ k + 1 i by F ^ k + 1 i ( 2 ) = Δ c sim ;
14:   end if
15: end for
16: Return the modified fitness set F ^ k + 1 .
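The control logic of Algorithm 3 can be sketched as follows. This is a simplified version of our own: `expensive_eval` is a hypothetical callback standing in for the Monte Carlo approximation Δ_c^sim, and the ranks are assumed to be precomputed by the non-dominated sorting:

```python
def evolution_control(fitnesses, ranks, expensive_eval):
    """Re-evaluate only rank-1 individuals with the expensive objective,
    reusing results for duplicate fitness values.
    fitnesses: list of (makespan, emd_meta) tuples;
    expensive_eval(i): returns the Monte Carlo EMD of individual i."""
    cache = {}
    out = list(fitnesses)
    for i, (mk, emd) in enumerate(fitnesses):
        if ranks[i] != 1:
            continue  # only the current non-dominated front is re-evaluated
        key = (mk, emd)
        if key not in cache:  # duplicates are simulated only once
            cache[key] = expensive_eval(i)
        out[i] = (mk, cache[key])
    return out
```

The caching mirrors lines 8-9 of Algorithm 3: individuals with identical makespan and meta-model EMD share a single simulation result.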

3.5. Evolutionary Operators

In our algorithm, the preference list representation is applied to encode the chromosomes. A chromosome built by this coding method consists of m substrings corresponding to the m machines. Each substring is a preference list of the n jobs on the corresponding machine. Supposing the chromosome is [(2 3 1) (1 3 2) (2 1 3)], the substring (2 3 1) is the preference list for machine 1, the substring (1 3 2) for machine 2, and the substring (2 1 3) for machine 3. When decoding, the job that appears first in each preference list is selected first. If the selected operation satisfies the process constraints, the operation is scheduled and then removed from the corresponding preference list. If more than one operation can be scheduled, one of them is selected randomly.
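The decoding procedure can be sketched as follows. This is a simplified, deterministic variant of our own: jobs are 0-indexed, ties are broken by machine index rather than randomly, and no repair step is included for chromosomes whose heads are all infeasible:

```python
def decode(pref, routes):
    """Decode a preference-list chromosome into an operation sequence.
    pref: per-machine job preference lists; routes: per-job machine order."""
    pref = [list(p) for p in pref]  # working copies
    nxt = [0] * len(routes)        # next route position of each job
    seq = []
    total = sum(len(r) for r in routes)
    while len(seq) < total:
        for m, plist in enumerate(pref):
            # head of the list is schedulable if machine m is the job's next machine
            if plist and routes[plist[0]][nxt[plist[0]]] == m:
                j = plist.pop(0)
                seq.append((m, j))
                nxt[j] += 1
                break
        else:
            raise ValueError("no feasible head (a repair step would be needed)")
    return seq
```

For a two-job, two-machine instance with opposite routes, the decoder interleaves the machines exactly as the preference lists dictate.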
The main genetic operators are selection, crossover, and mutation. The usual binary tournament selection is used to select parent individuals for generating child solutions: two individuals are randomly selected from the population, and the one with better fitness is chosen for the subsequent genetic operators. In the crossover operator, the substring crossover is applied, which exchanges the substrings of the parents between two randomly selected machine numbers. For the mutation operator, the swap mutation is applied to a randomly selected substring.
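The crossover and mutation operators can be sketched as follows (our own minimal versions; in the actual algorithm the cut points and the mutated substring are chosen randomly):

```python
import random

def substring_crossover(p1, p2, lo, hi):
    """Exchange the preference lists of machines lo..hi between two parents."""
    c1 = [list(s) for s in p1]
    c2 = [list(s) for s in p2]
    for m in range(lo, hi + 1):
        c1[m], c2[m] = c2[m], c1[m]
    return c1, c2

def swap_mutation(chrom, rng=random):
    """Swap two random positions within a randomly chosen substring."""
    c = [list(s) for s in chrom]
    m = rng.randrange(len(c))
    i, j = rng.sample(range(len(c[m])), 2)  # two distinct positions
    c[m][i], c[m][j] = c[m][j], c[m][i]
    return c
```

Both operators preserve feasibility of the representation: each substring remains a permutation of the n jobs.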
Before updating the next generation, the fast non-dominated sorting approach is applied to rank the solutions in the combined population. The population is then updated by choosing individuals in the order of their ranks. Since all previous and current population members are included in the combined population, elitism is ensured.
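Non-dominated sorting assigns rank 1 to the non-dominated front, removes it, and repeats on the remainder. A simple sketch of this idea (not the faster bookkeeping implementation of NSGA-II in [28]):

```python
def fast_nondominated_sort(points):
    """Return the rank (1 = non-dominated front) of each point,
    minimizing all objectives."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    n = len(points)
    ranks = [0] * n
    remaining = set(range(n))
    front = 1
    while remaining:
        current = {i for i in remaining
                   if not any(dominates(points[j], points[i])
                              for j in remaining if j != i)}
        for i in current:
            ranks[i] = front
        remaining -= current
        front += 1
    return ranks
```

Here each point is a (makespan, EMD) pair; rank-1 points form the Pareto front returned by the algorithm.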

4. Experimental Analysis

In this section, the performance of the meta-model in evaluating the EMD will first be presented, and then the meta-model-based MOEA will be used to solve the robust JSS problem.

4.1. Experiment Setting

The algorithm is implemented in C++ and run on a 2.8 GHz PC with an Intel Pentium dual-core CPU and 2 GB of RAM. The parameters are as follows: the population size is 1024; the number of generations is 64; the crossover rate is 0.95; the mutation rate is 0.05; the machine breakdown ratio is 0.005; the expected downtime is 20; and the number of simulations Ns is 600.
In the literature, many benchmark problems have been generated by different researchers to test the performance of algorithms; they are also very useful for this research because they cover a wide range of problem instances. In this study, the problem instances La01–La40, with sizes from 10 × 5 to 30 × 10, in the benchmark problem set LA (Lawrence, 1984), and the problem instances Ta01–Ta40, with sizes from 15 × 15 to 30 × 15, in the benchmark problem set TA (Taillard, 1994), are applied. In total, 80 benchmark problem instances are used to test the performance of the proposed algorithm.

4.2. Evaluation Performance of the Proposed Meta-Model

To distinguish the robustness of different schedules, an effective meta-model must have high evaluation accuracy and exhibit a strong linear correlation with the real value of the robustness. To show the accuracy of the proposed meta-model Δ_c^a in evaluating the EMD, the average χ̄ in Equation (18) and the standard deviation σ(χ) in Equation (19) of the absolute relative deviation χ(Δ_c^a, Δ_c^sim) from the Monte Carlo approximation Δ_c^sim are applied. In addition, a correlation study is conducted using IBM SPSS. For each test problem, the R² statistic and the significance level Sig. of the linear model ANOVA are recorded. The value of R² measures the goodness of fit of the meta-model, while the significance level Sig. tests whether there is a significant linear correlation between the meta-model and the EMD. A good meta-model should therefore have a large R² and a small Sig. for each test problem.
$$\bar{\chi} = \frac{1}{N} \sum_{i=1}^{N} \chi(\Delta_c^a, \Delta_c^{\mathrm{sim}}) = \frac{1}{N} \sum_{i=1}^{N} \frac{\left| \Delta_c^a - \Delta_c^{\mathrm{sim}} \right|}{\Delta_c^{\mathrm{sim}}} \qquad (18)$$
$$\sigma(\chi) = \sqrt{ \frac{ \sum_{i=1}^{N} \left( \chi(\Delta_c^a, \Delta_c^{\mathrm{sim}}) - \bar{\chi} \right)^2 }{N-1} } \qquad (19)$$
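Equations (18) and (19) can be computed directly; a short sketch (note that `stdev` uses the same N−1 denominator as Equation (19)):

```python
from statistics import mean, stdev

def accuracy_stats(meta_vals, sim_vals):
    """Equations (18)-(19): mean and sample standard deviation of the
    absolute relative deviation of the meta-model from the Monte Carlo
    approximation."""
    chi = [abs(a - s) / s for a, s in zip(meta_vals, sim_vals)]
    return mean(chi), stdev(chi)
```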
The experimental results are recorded in Table 1 and Table 2 for the problem sets LA and TA, respectively. The maximum and minimum values of χ̄ over all problems in LA are 0.041 and 0.020, respectively, and those over all problems in TA are 0.026 and 0.017, respectively. That is, the values of χ̄ for all test problems are less than 0.05 on average. Therefore, the values of the meta-model are very close to those of the Monte Carlo approximation, which indicates that the proposed meta-model has high accuracy in evaluating the EMD for the JSS under RMDs. On the other hand, the maximum and minimum values of σ(χ) over all problems in LA are 0.031 and 0.015, respectively, and those over all problems in TA are 0.020 and 0.013, respectively. These results show that the meta-model has a small variance in the absolute relative deviation, which indicates that its performance in evaluating the EMD is also robust.
The results can also be presented clearly by the quartile graphs of the absolute relative deviation χ ( Δ c a , Δ c sim ) , as shown in Figure 4 and Figure 5. In Figure 4, the maximum absolute relative deviation over all problems in LA is about 0.19, yet more than 75% of the values are below 0.06 for every problem in the set. In particular, for the larger-scale problems La21–La40, more than 75% of the values are below 0.04. For the problem set TA in Figure 5, the maximum absolute relative deviation is only about 0.12, and more than 75% of the values are below 0.04 for all test problems. It can therefore be concluded that the proposed meta-model has a very small estimation error for the EMD.
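The quartile summaries behind Figures 4 and 5 can be reproduced from the per-schedule deviations; the following is a minimal sketch of our own (the helper name `deviation_quartiles` is not from the paper):

```python
import numpy as np

def deviation_quartiles(chi_values):
    """Five-number summary of the absolute relative deviations for one
    test problem, as visualized by the quartile (box) plots."""
    chi_values = np.asarray(chi_values, dtype=float)
    q1, median, q3 = np.percentile(chi_values, [25, 50, 75])
    return {"min": float(chi_values.min()), "q1": float(q1),
            "median": float(median), "q3": float(q3),
            "max": float(chi_values.max())}
```

For example, the claim "more than 75% of the values are less than 0.06" corresponds to the returned `q3` being below 0.06.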
The results of the linear-model ANOVA are also provided in Table 1 and Table 2. In Table 1, all values of R 2 exceed 0.70 except for the problems La02, La03, La09, and La20. In particular, for the larger-scale problems La21–La40, the average R 2 reaches 0.76. As shown in Table 2, all values of R 2 for the problems in TA exceed 0.70, with an average of about 0.76. In addition, the significance level S i g . is below 0.01 for every problem in LA and TA. These results show that the proposed meta-model has a significant linear correlation with the EMD.
In summary, the proposed meta-model Δ c a has high evaluation accuracy and a strong linear correlation with the EMD. This means that the meta-model Δ c a can effectively distinguish the robustness of different schedules within the evolutionary algorithm.

4.3. Optimization Performance of the Proposed Algorithm

In this section, we present the performance of the proposed meta-model (MM)-based MOEA in optimizing the makespan and the EMD for the JSS under RMD. As a reference, the MOEA based on the Monte Carlo approximation in Equation (17) (MC) is applied. In addition, the proposed meta-model is compared with the existing surrogate measures, so results are also provided for the MOEAs based on the average total slack time (SM1) in [42] and the sum of free slack time (SM2) in [26]. By running these algorithms with their respective approximations of the EMD, four Pareto solution sets are obtained for each problem.
To investigate the performance of MOEAs, many metrics have been developed and applied in related research [44]. Since a single metric can provide only a specific and incomplete view of performance, both the average distance and the number of distinct choices are used in our experiments, measuring the convergence and the diversity of the algorithm, respectively.
The average distance metric A d in Equation (20) evaluates the closeness of the obtained Pareto solution set P F find to the true Pareto solution set P F true , where d ( z i , a j ) denotes the Euclidean distance between a point z i in P F find and a point a j in P F true .
$$A_d = \frac{1}{|PF_{\mathrm{find}}|} \sum_{i=1}^{|PF_{\mathrm{find}}|} \min_{j=1,\dots,|PF_{\mathrm{true}}|} d(z_i, a_j) \qquad (20)$$
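Equation (20) amounts to a nearest-neighbor distance averaged over the found front. A minimal NumPy sketch of our own, assuming fronts are given as arrays of objective vectors:

```python
import numpy as np

def average_distance(pf_found, pf_true):
    """A_d: mean Euclidean distance from each point of the found front
    to its nearest point on the (approximated) true front, Equation (20)."""
    pf_found = np.asarray(pf_found, dtype=float)
    pf_true = np.asarray(pf_true, dtype=float)
    # Pairwise distance matrix of shape |PF_found| x |PF_true|
    d = np.linalg.norm(pf_found[:, None, :] - pf_true[None, :, :], axis=2)
    # Nearest true-front point for each found point, then average
    return d.min(axis=1).mean()
```

A smaller A d indicates that the found front lies closer to the reference front, i.e., better convergence.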
The number of distinct choices N μ in Equation (21) focuses on the distribution of the solutions; it counts the number of distinct choices for a pre-specified value of μ , 0 < μ < 1. For this metric, the m-dimensional objective space is divided into ( 1 / μ ) m small grids, and any solutions within the same grid are considered similar to one another. If at least one individual of the obtained Pareto set P F find falls into the region T ( l m , , l 2 , l 1 ) , then N T ( l m , , l 2 , l 1 ) equals one, and zero otherwise. In our experiments, the value of μ for the metric N μ is set to 0.05.
$$N_{\mu}(PF_{\mathrm{find}}) = \sum_{l_m=0}^{1/\mu-1} \cdots \sum_{l_2=0}^{1/\mu-1} \sum_{l_1=0}^{1/\mu-1} N_{T(l_m,\dots,l_2,l_1)} \qquad (21)$$
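Equation (21) can be evaluated by mapping each solution to its grid cell and counting the occupied cells. The sketch below is illustrative, not the authors' implementation, and assumes the objectives are already normalized to [0, 1] as in the experiments:

```python
import numpy as np

def distinct_choices(pf_found, mu=0.05):
    """N_mu: number of occupied grid cells when the normalized m-dimensional
    objective space is split into (1/mu)^m cells of side mu, Equation (21)."""
    pts = np.asarray(pf_found, dtype=float)
    n_bins = int(round(1.0 / mu))
    # Cell index per coordinate; clip so points at exactly 1.0 fall
    # into the last cell instead of an out-of-range one
    cells = np.clip((pts / mu).astype(int), 0, n_bins - 1)
    # Distinct occupied cells = number of "distinct choices"
    return len({tuple(c) for c in cells})
```

A larger N μ means the solutions spread over more cells of the objective space, i.e., the front offers more genuinely different trade-offs to the decision maker.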
Given the NP-hard nature of the JSS under RMD, the true Pareto set P F true is not available for the test problems. Instead, the non-dominated set over the fronts produced by all compared algorithms is used to represent it. That is, given the Pareto fronts ( P F find 1 , P F find 2 , , P F find n p ) obtained by the different algorithms for a test problem, the true Pareto solution set is approximated by Equation (22), where a j b i means that a j dominates b i . In addition, to reduce the scale difference between the objectives, every Pareto front is normalized as P F find = ( P F find P F true, min ) / ( P F true, max P F true, min ) using the maximum and minimum objective values of the approximated true Pareto set P F true .
$$PF_{\mathrm{true}} \approx \left\{ b_i \;\middle|\; b_i \in \bigcup_{k=1}^{n_p} PF_{\mathrm{find}}^{\,k},\; \neg\exists\, a_j \in \bigcup_{k=1}^{n_p} PF_{\mathrm{find}}^{\,k} : a_j \succ b_i \right\} \qquad (22)$$
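The construction in Equation (22) and the subsequent normalization can be sketched as follows, assuming both objectives are minimized and each front is an array of objective vectors; the function names are ours, not the paper's:

```python
import numpy as np

def approximate_true_front(fronts):
    """Union the fronts of all compared algorithms and keep only the
    non-dominated points (Equation (22), minimization of all objectives)."""
    union = np.unique(np.vstack([np.asarray(f, float) for f in fronts]), axis=0)
    keep = []
    for b in union:
        # b is dominated if some point is <= in every objective and < in one
        dominated = np.any(np.all(union <= b, axis=1) & np.any(union < b, axis=1))
        if not dominated:
            keep.append(b)
    return np.array(keep)

def normalize(front, pf_true):
    """Rescale a front by the per-objective min/max of the true front."""
    lo, hi = pf_true.min(axis=0), pf_true.max(axis=0)
    return (np.asarray(front, float) - lo) / (hi - lo)
```

Each algorithm's front is then normalized with `normalize` before A d and N μ are computed, so that makespan and EMD contribute on comparable scales.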
The results on the metric A d for the algorithms MC, MM, SM1, and SM2 are provided in Table 3 and Table 4 for the problem sets LA and TA, respectively. The results show that the algorithms SM1 and SM2 have similar convergence performance, as their values of A d are always close to each other. For example, in Table 3, their average values of A d for the problems La01–La20 are 1.27 and 1.26, and for the problems La21–La40 they are 0.21 and 0.21, respectively. Similar results can be found in Table 4: the average values of A d for the problems Ta01–Ta20 are 0.21 and 0.24, while for the problems Ta21–Ta40 they are 0.22 and 0.23.
By comparison, the proposed algorithm MM converges better than the algorithms SM1 and SM2: except for the problems La39, Ta14, and Ta39, all values of A d for the algorithm MM in the problem sets LA and TA are less than those of the algorithms SM1 and SM2. On average, the values of A d for the algorithm MM in the problems La01–La20 and La21–La40 are 0.16 and 0.08, while the corresponding values for the algorithms SM1 and SM2 are about 1.26 and 0.21, respectively. In the problem set TA, the average values of A d for the algorithm MM in the problems Ta01–Ta20 and Ta21–Ta40 are both 0.09, while the averages for the algorithms SM1 and SM2 are about 0.22.
On the other hand, the results indicate that the convergence of the algorithm MM can even be very close to that of the algorithm MC. As shown in Table 3 and Table 4, the proposed algorithm MM achieves the best convergence on the problems La04, La05, La07, La10, La12, La13, La14, La22, La24, La26, La27, and La31 from the problem set LA and on Ta06, Ta09, Ta19, Ta23, Ta24, and Ta37 from the problem set TA. In addition, the average A d of the algorithm MM on the smaller-scale problems La01–La20 is 0.16, which is 0.12 larger than that of the algorithm MC. On the larger-scale problems, such as La21–La40 and Ta01–Ta40, the average A d of the algorithm MM is only about 0.06 larger than that of the algorithm MC. We therefore conclude that, in terms of convergence, the proposed algorithm MM is better than the algorithms SM1 and SM2 and similar to the algorithm MC.
The results on the metric N μ for the algorithms MC, MM, SM1, and SM2 are presented in Table 5 and Table 6 for the problem sets LA and TA, respectively. The results show that the algorithms SM1 and SM2 also have similar diversity performance. From Table 5, the average values of N μ for the algorithms SM1 and SM2 in the problems La01–La20 are 4.7 and 5.2, and in the problems La21–La40 they are 9.3 and 10.5, respectively. That is, the average difference in N μ between the algorithms SM1 and SM2 is only about 1.0. Similar results can be found in Table 6: the average values of N μ in the problems Ta01–Ta20 are 10.2 and 9.7, while in the problems Ta21–Ta40 they are 8.9 and 8.4.
Compared with the algorithms SM1 and SM2, the proposed algorithm MM performs better in diversity: as shown in Table 5 and Table 6, its values of N μ are larger than those of the algorithms SM1 and SM2 for most of the problems in the problem sets LA and TA. In addition, the results show that the algorithm MM has diversity similar to that of the algorithm MC. For example, the average values of N μ for the algorithms MM and MC in the problems La01–La20 are about 8.1 and 8.3, while in the problems La21–La40 they are 15.3 and 13.7, respectively. Similar results can be found in Table 6 for the problem set TA. We can therefore conclude that the proposed algorithm MM is very close to the algorithm MC in diversity, and much better than the algorithms SM1 and SM2.
Taking the problem instances La15, La30, Ta06, and Ta18 as examples, the performance of the algorithms MC, MM, SM1, and SM2 can also be illustrated clearly by their Pareto fronts. As shown in Figure 6, the Pareto front of the algorithm MM is very close to that of the algorithm MC, while the Pareto fronts of the algorithms SM1 and SM2 are far from it. Moreover, the number of solutions in the Pareto front of the algorithm MM is approximately equal to that of the algorithm MC, while the Pareto fronts of the algorithms SM1 and SM2 contain far fewer solutions.
However, as indicated earlier, the algorithm based on the Monte Carlo approximation is very time-consuming. To investigate the computational effort of each algorithm, the average CPU time T a and the relative CPU time T r are recorded, where T r is the ratio of the average T a of an algorithm (MC, MM, SM1, or SM2) to that of the algorithm MC. In this experiment, the number of simulation replications for the Monte Carlo approximation is set to 30, which is generally the minimum sample size required in statistical estimation [20]. As shown in Table 7, the algorithms SM1 and SM2 always consume the least time, while the algorithm MC consumes the most. The time consumption of the proposed algorithm MM is almost equal to that of the algorithms SM1 and SM2, and far less than that of the algorithm MC. Moreover, the time consumption of the algorithm MC grows significantly with the problem scale. For example, the average T a of the algorithm MC on the problems La01–La05 is 66 s, close to the 53 s of the algorithm MM; but on the problems Ta31–Ta40, the algorithm MC takes about 8 times as long as the algorithm MM. For more complex problems or uncertainty scenarios, the algorithm MC may require an even larger sample size, resulting in an unacceptable computational time.

5. Conclusions

In this study, we have addressed the robust JSS with RMDs, in which the makespan and the EMD are considered simultaneously. To improve the efficiency of the MOEA, a meta-model has been constructed using the data-driven response surface method. Then, with individual-based evolution control, a meta-model-based MOEA has been proposed to solve this problem. The results show that, regarding convergence and diversity, the proposed algorithm yields better Pareto solution sets than the algorithms based on the existing slack-time-based surrogate measures. Moreover, the meta-model evaluates the EMD with high accuracy, close to that of the Monte Carlo approximation, at a small fraction of its computational cost. Overall, the proposed meta-model-based MOEA can solve the robust JSS with RMDs both effectively and efficiently.

Author Contributions

Conceptualization, Z.W.; Formal analysis, S.Y.; Funding acquisition, T.L.; Investigation, S.Y.; Methodology, Z.W.; Supervision, T.L.; Writing—original draft, Z.W.; Writing—review and editing, T.L.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grant No. 51475383 and by the Career Start-up Grant of Northwestern Polytechnical University.

Conflicts of Interest

The authors declare no conflict of interest.

Notations:

JSS: Job shop scheduling problem
RMD: Random machine breakdowns
EMD: Expected makespan delay
MOEA: Multi-objective evolutionary algorithm
n: Number of jobs
m: Number of machines
J j: Job j, j = 1, 2, …, n
M i: Machine i, i = 1, 2, …, m
J: Set of jobs, { J j | j = 1, 2, …, n }
M: Set of machines, { M i | i = 1, 2, …, m }
O i j: Operation of job j on machine i
O j: Set of operations of job j
p i j: Processing time of operation O i j
s t i j: Starting time of operation O i j
c t i j: Completion time of operation O i j
f s i j: Free slack time of operation O i j
t s i j: Total slack time of operation O i j
Pr i j: Machine breakdown probability of O i j
D i j: Downtime when processing O i j
C max 0: Makespan of a schedule before execution
C max r: Actual makespan of a schedule
δ c: Makespan delay of a schedule
Δ c: Expression of the expected makespan delay
Δ c a: Meta-model of Δ c
Δ c sim: Monte Carlo approximation of Δ c
O n: Set of operations without slack time
O y: Set of operations with slack time
p s n: Sum of processing times in the set O n
p s y: Sum of processing times in the set O y
f s a: Average free slack time in the set O y
t s a: Average total slack time in the set O y
λ 0: Machine failure rate of each machine
β 0: Expectation of the downtime
P 0: Initial population
P k: Current population in generation k
R k+1: Combined population in generation k
x: Input vector x = ( λ 0 , β 0 , p s n , p s y , f s a , t s a )
I i: A data instance I i = ( x i , Δ c i )
D c: Training data set
s i: Schedule of the ith individual in R k+1
F k+1 i: Fitness of the ith individual in R k+1
F k+1: Fitness set of the population R k+1

References

  1. Sotskov, Y.N.; Egorova, N.G. The optimality region for a single-machine scheduling problem with bounded durations of the jobs and the total completion time objective. Mathematics 2019, 7, 382. [Google Scholar] [CrossRef]
  2. Gafarov, E.; Werner, F. Two-machine job-shop scheduling with equal processing times on each machine. Mathematics 2019, 7, 301. [Google Scholar] [CrossRef]
  3. Luan, F.; Cai, Z.; Wu, S.; Jiang, T.; Li, F.; Yang, J. Improved whale algorithm for solving the flexible job shop scheduling problem. Mathematics 2019, 7, 384. [Google Scholar] [CrossRef]
  4. Turker, A.; Aktepe, A.; Inal, A.; Ersoz, O.; Das, G.; Birgoren, B. A decision support system for dynamic job-shop scheduling using real-time data with simulation. Mathematics 2019, 7, 278. [Google Scholar] [CrossRef]
  5. Sun, L.; Lin, L.; Li, H.; Gen, M. Cooperative co-evolution algorithm with an MRF-based decomposition strategy for stochastic flexible job shop scheduling. Mathematics 2019, 7, 318. [Google Scholar] [CrossRef]
  6. Zhang, J.; Ding, G.; Zou, Y.; Qin, S.; Fu, J. Review of job shop scheduling research and its new perspectives under Industry 4.0. J Intell. Manuf. 2019, 30, 1809–1830. [Google Scholar] [CrossRef]
  7. Potts, C.N.; Strusevich, V.A. Fifty years of scheduling: A survey of milestones. J. Oper. Res. Soc. 2009, 60, S41–S68. [Google Scholar] [CrossRef]
  8. García-León, A.A.; Dauzère-Pérès, S.; Mati, Y. An efficient Pareto approach for solving the multi-objective flexible job-shop scheduling problem with regular criteria. Comput. Oper. Res. 2019, 108, 187–200. [Google Scholar] [CrossRef]
  9. Zhang, C.; Rao, Y.; Li, P. An effective hybrid genetic algorithm for the job shop scheduling problem. Int. J. Adv. Manuf. Tech. 2008, 39, 965–974. [Google Scholar] [CrossRef]
  10. Li, L.; Jiao, L.; Stolkin, R.; Liu, F. Mixed second order partial derivatives decomposition method for large scale optimization. Appl. Soft Comput. 2017, 61, 1013–1021. [Google Scholar] [CrossRef]
  11. Watanabe, M.; Ida, K.; Gen, M. A genetic algorithm with modified crossover operator and search area adaptation for the job-shop scheduling problem. Comput. Ind. Eng. 2005, 48, 743–752. [Google Scholar] [CrossRef]
  12. Eswaramurthy, V.P.; Tamilarasi, A. Hybridizing tabu search with ant colony optimization for solving job shop scheduling problems. Int. J. Adv. Manuf. Tech. 2009, 40, 1004–1015. [Google Scholar] [CrossRef]
  13. Mencía, C.; Mencía, R.; Sierra, M.R.; Varela, R. Memetic algorithms for the job shop scheduling problem with operators. Appl. Soft. Comput. 2015, 34, 94–105. [Google Scholar] [CrossRef]
  14. Buddala, R.; Mahapatra, S.S. Two-stage teaching-learning-based optimization method for flexible job-shop scheduling under machine breakdown. Int. J. Adv. Manuf. Tech. 2019, 100, 1419–1432. [Google Scholar] [CrossRef]
  15. Mehta, S.V.; Uzsoy, R.M. Predictable scheduling of a job shop subject to breakdowns. IEEE Trans. Robotic. Autom. 1998, 14, 365–378. [Google Scholar] [CrossRef]
  16. Lei, D. Minimizing makespan for scheduling stochastic job shop with random breakdown. Appl. Math. Comput. 2012, 218, 11851–11858. [Google Scholar] [CrossRef]
  17. Nouiri, M.; Bekrar, A.; Jemai, A.; Trentesaux, D.; Ammari, A.C.; Niar, S. Two stage particle swarm optimization to solve the flexible job shop predictive scheduling problem considering possible machine breakdowns. Comput. Ind. Eng. 2017, 112, 595–606. [Google Scholar] [CrossRef]
  18. von Hoyningen-Huene, W.; Kiesmueller, G.P. Evaluation of the expected makespan of a set of non-resumable jobs on parallel machines with stochastic failures. Eur. J. Oper. Res. 2015, 240, 439–446. [Google Scholar] [CrossRef]
  19. Jamili, A. Robust job shop scheduling problem: Mathematical models, exact and heuristic algorithms. Expert Syst. Appl. 2016, 55, 341–350. [Google Scholar] [CrossRef]
  20. Xiong, J.; Xing, L.; Chen, Y. Robust scheduling for multi-objective flexible job-shop problems with random machine breakdowns. Int. J. Prod. Econ. 2013, 141, 112–126. [Google Scholar] [CrossRef]
  21. Wu, Z.; Sun, S.; Xiao, S. Risk measure of job shop scheduling with random machine breakdowns. Comput. Oper. Res. 2018, 99, 1–12. [Google Scholar] [CrossRef]
  22. Liu, N.; Abdelrahman, M.A.; Ramaswamy, S.R. A Complete Multiagent Framework for Robust and Adaptable Dynamic Job Shop Scheduling. IEEE Trans. Syst. Man. Cybern. Part C 2007, 37, 904–916. [Google Scholar] [CrossRef]
  23. Xiao, S.; Sun, S.; Jin, J.J. Surrogate Measures for the Robust Scheduling of Stochastic Job Shop Scheduling Problems. Energies 2017, 10, 543. [Google Scholar] [CrossRef]
  24. Zuo, X.; Mo, H.; Wu, J. A robust scheduling method based on a multi-objective immune algorithm. Inf. Sci. 2009, 179, 3359–3369. [Google Scholar] [CrossRef]
  25. Ahmadi, E.; Zandieh, M.; Farrokh, M.; Emami, S.M. A multi objective optimization approach for flexible job shop scheduling problem under random machine breakdown by evolutionary algorithms. Comput. Oper. Res. 2016, 73, 56–66. [Google Scholar] [CrossRef]
  26. Al-Fawzan, M.A.; Haouari, M. A bi-objective model for robust resource-constrained project scheduling. Int. J. Prod. Econ. 2005, 96, 175–187. [Google Scholar] [CrossRef]
  27. Goren, S.; Sabuncuoglu, I. Optimization of schedule robustness and stability under random machine breakdowns and processing time variability. IIE Trans. 2010, 42, 203–220. [Google Scholar] [CrossRef]
  28. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  29. Li, L.; Yao, X.; Stolkin, R.; Gong, M.; He, S. An evolutionary multi-objective approach to sparse reconstruction. IEEE Trans. Evol. Comput. 2014, 18, 827–845. [Google Scholar]
  30. Zhou, A.; Qu, B.; Li, H.; Zhao, S.; Suganthan, P.N.; Zhang, Q. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm Evol. Comput. 2011, 1, 32–49. [Google Scholar] [CrossRef]
  31. Xiong, J.; Tan, X.; Yang, K.; Xing, L.; Chen, Y. A Hybrid Multiobjective Evolutionary Approach for Flexible Job-Shop Scheduling Problems. Math. Probl. Eng. 2012, 2012, 1–27. [Google Scholar] [CrossRef]
  32. Hosseinabadi, A.A.R.; Siar, H.; Shamshirband, S.; Shojafar, M.; Nasir, M.H.N.M. Using the gravitational emulation local search algorithm to solve the multi-objective flexible dynamic job shop scheduling problem in Small and Medium Enterprises. Ann. Oper. Res. 2015, 229, 451–474. [Google Scholar] [CrossRef]
  33. Hosseinabadi, A.A.R.; Kardgar, M.; Shojafar, M.; Shamshirband, S.; Abraham, A. GELS-GA: Hybrid metaheuristic algorithm for solving Multiple Travelling Salesman Problem. In Proceedings of the 2014 IEEE 14th International Conference on Intelligent Systems Design and Applications, Okinawa, Japan, 28–30 November 2014. [Google Scholar]
  34. Jin, Y.; Branke, J. Evolutionary Optimization in Uncertain Environments: A Survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317. [Google Scholar] [CrossRef]
  35. Al-Hinai, N.; Elmekkawy, T.Y. Robust and stable flexible job shop scheduling with random machine breakdowns using a hybrid genetic algorithm. Int. J. Prod. Econ. 2011, 132, 279–291. [Google Scholar] [CrossRef]
  36. Chaari, T.; Chaabane, S.; Loukil, T.; Trentesaux, D. A genetic algorithm for robust hybrid flow shop scheduling. Int. J. Comput. Integ. M. 2011, 24, 821–833. [Google Scholar] [CrossRef]
  37. Yang, F.; Zheng, L.; Luo, Y. A novel particle filter based on hybrid deterministic and random sampling. IEEE Access 2018, 6, 67536–67542. [Google Scholar] [CrossRef]
  38. Yang, F.; Luo, Y.; Zheng, L. Double-Layer Cubature Kalman Filter for Nonlinear Estimation. Sensors 2019, 19, 986. [Google Scholar] [CrossRef] [PubMed]
  39. Wang, X.; Li, T.; Sun, S.; Corchado, J.M. A survey of recent advances in particle filters and remaining challenges for multitarget tracking. Sensors 2017, 17, 2707. [Google Scholar] [CrossRef]
  40. Mirabi, M.; Ghomi, S.M.T.F.; Jolai, F. A two-stage hybrid flowshop scheduling problem in machine breakdown condition. J. Intell. Manuf. 2013, 24, 193–199. [Google Scholar] [CrossRef]
  41. Liu, L.; Gu, H.; Xi, Y. Robust and stable scheduling of a single machine with random machine breakdowns. Int. J. Adv. Manuf. Technol. 2006, 31, 645–654. [Google Scholar] [CrossRef]
  42. Leon, J.; Wu, S.D.; Storer, R.H. Robustness measures and robust scheduling for job shops. IIE Trans. 1994, 26, 32–43. [Google Scholar] [CrossRef]
  43. Jensen, M.T. Generating robust and flexible job shop schedules using genetic algorithms. IEEE Trans. Evol. Comput. 2003, 7, 275–288. [Google Scholar] [CrossRef]
  44. Yen, G.G.; He, Z. Performance Metric Ensemble for Multiobjective Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2014, 18, 131–144. [Google Scholar] [CrossRef]
Figure 1. An example for a schedule with 3 machines and 3 jobs under a machine breakdown. (a) The schedule before a machine breakdown; (b) the schedule after a machine breakdown.
Figure 2. The flow chart of the meta-model-based multi-objective evolutionary algorithm (MOEA).
Figure 3. The relationship between the makespan delay and the machine breakdowns on different operations. (a) No machine breakdown; (b) a machine breakdown on operation O11; (c) a machine breakdown on operation O33; (d) a machine breakdown on operation O22.
Figure 4. The quartile graph of the absolute relative deviation χ ( Δ c a , Δ c sim ) in the problem set LA.
Figure 5. The quartile graph of the absolute relative deviation χ ( Δ c a , Δ c sim ) in the problem set TA.
Figure 6. The Pareto fronts of algorithms MC, MM, SM1, and SM2. (a) For the problem La15; (b) for the problem La30; (c) for the problem Ta06; (d) for the problem Ta18.
Table 1. The experimental results of the meta-model in the problem set LA.
Cases | n × m | χ ¯ | σ ( χ ) | R 2 | Sig. | Cases | n × m | χ ¯ | σ ( χ ) | R 2 | Sig.
La01 | 10 × 5 | 0.035 | 0.027 | 0.74 | <0.01 | La21 | 15 × 10 | 0.026 | 0.019 | 0.77 | <0.01
La02 | 10 × 5 | 0.040 | 0.029 | 0.59 | <0.01 | La22 | 15 × 10 | 0.028 | 0.021 | 0.74 | <0.01
La03 | 10 × 5 | 0.041 | 0.031 | 0.53 | <0.01 | La23 | 15 × 10 | 0.026 | 0.020 | 0.78 | <0.01
La04 | 10 × 5 | 0.038 | 0.028 | 0.71 | <0.01 | La24 | 15 × 10 | 0.027 | 0.021 | 0.76 | <0.01
La05 | 10 × 5 | 0.034 | 0.026 | 0.78 | <0.01 | La25 | 15 × 10 | 0.028 | 0.021 | 0.75 | <0.01
La06 | 15 × 5 | 0.029 | 0.022 | 0.80 | <0.01 | La26 | 20 × 10 | 0.023 | 0.018 | 0.76 | <0.01
La07 | 15 × 5 | 0.033 | 0.024 | 0.71 | <0.01 | La27 | 20 × 10 | 0.024 | 0.018 | 0.74 | <0.01
La08 | 15 × 5 | 0.032 | 0.024 | 0.72 | <0.01 | La28 | 20 × 10 | 0.024 | 0.019 | 0.73 | <0.01
La09 | 15 × 5 | 0.033 | 0.026 | 0.68 | <0.01 | La29 | 20 × 10 | 0.024 | 0.018 | 0.75 | <0.01
La10 | 15 × 5 | 0.033 | 0.024 | 0.76 | <0.01 | La30 | 20 × 10 | 0.025 | 0.019 | 0.74 | <0.01
La11 | 20 × 5 | 0.026 | 0.020 | 0.76 | <0.01 | La31 | 30 × 10 | 0.020 | 0.016 | 0.75 | <0.01
La12 | 20 × 5 | 0.029 | 0.023 | 0.76 | <0.01 | La32 | 30 × 10 | 0.020 | 0.015 | 0.75 | <0.01
La13 | 20 × 5 | 0.029 | 0.023 | 0.70 | <0.01 | La33 | 30 × 10 | 0.020 | 0.015 | 0.76 | <0.01
La14 | 20 × 5 | 0.029 | 0.023 | 0.76 | <0.01 | La34 | 30 × 10 | 0.021 | 0.016 | 0.76 | <0.01
La15 | 20 × 5 | 0.029 | 0.023 | 0.71 | <0.01 | La35 | 30 × 10 | 0.022 | 0.017 | 0.82 | <0.01
La16 | 10 × 10 | 0.031 | 0.024 | 0.74 | <0.01 | La36 | 15 × 15 | 0.023 | 0.018 | 0.80 | <0.01
La17 | 10 × 10 | 0.031 | 0.024 | 0.74 | <0.01 | La37 | 15 × 15 | 0.023 | 0.018 | 0.75 | <0.01
La18 | 10 × 10 | 0.032 | 0.025 | 0.72 | <0.01 | La38 | 15 × 15 | 0.025 | 0.019 | 0.74 | <0.01
La19 | 10 × 10 | 0.030 | 0.023 | 0.72 | <0.01 | La39 | 15 × 15 | 0.025 | 0.018 | 0.75 | <0.01
La20 | 10 × 10 | 0.035 | 0.026 | 0.65 | <0.01 | La40 | 15 × 15 | 0.025 | 0.019 | 0.71 | <0.01
Aver. | / | 0.032 | 0.025 | 0.71 | <0.01 | Aver. | / | 0.024 | 0.018 | 0.76 | <0.01
Table 2. The experimental results of the meta-model in the problem set TA.
Cases | n × m | χ ¯ | σ ( χ ) | R 2 | Sig. | Cases | n × m | χ ¯ | σ ( χ ) | R 2 | Sig.
Ta01 | 15 × 15 | 0.026 | 0.019 | 0.73 | <0.01 | Ta21 | 20 × 20 | 0.019 | 0.015 | 0.78 | <0.01
Ta02 | 15 × 15 | 0.023 | 0.018 | 0.76 | <0.01 | Ta22 | 20 × 20 | 0.018 | 0.014 | 0.79 | <0.01
Ta03 | 15 × 15 | 0.025 | 0.019 | 0.74 | <0.01 | Ta23 | 20 × 20 | 0.018 | 0.014 | 0.76 | <0.01
Ta04 | 15 × 15 | 0.023 | 0.018 | 0.77 | <0.01 | Ta24 | 20 × 20 | 0.019 | 0.014 | 0.77 | <0.01
Ta05 | 15 × 15 | 0.023 | 0.018 | 0.78 | <0.01 | Ta25 | 20 × 20 | 0.018 | 0.014 | 0.79 | <0.01
Ta06 | 15 × 15 | 0.023 | 0.017 | 0.76 | <0.01 | Ta26 | 20 × 20 | 0.018 | 0.014 | 0.78 | <0.01
Ta07 | 15 × 15 | 0.026 | 0.020 | 0.73 | <0.01 | Ta27 | 20 × 20 | 0.019 | 0.015 | 0.75 | <0.01
Ta08 | 15 × 15 | 0.024 | 0.018 | 0.76 | <0.01 | Ta28 | 20 × 20 | 0.019 | 0.014 | 0.77 | <0.01
Ta09 | 15 × 15 | 0.025 | 0.018 | 0.74 | <0.01 | Ta29 | 20 × 20 | 0.019 | 0.015 | 0.76 | <0.01
Ta10 | 15 × 15 | 0.023 | 0.018 | 0.77 | <0.01 | Ta30 | 20 × 20 | 0.018 | 0.014 | 0.79 | <0.01
Ta11 | 20 × 15 | 0.022 | 0.016 | 0.76 | <0.01 | Ta31 | 30 × 15 | 0.018 | 0.014 | 0.77 | <0.01
Ta12 | 20 × 15 | 0.020 | 0.016 | 0.78 | <0.01 | Ta32 | 30 × 15 | 0.019 | 0.015 | 0.77 | <0.01
Ta13 | 20 × 15 | 0.021 | 0.016 | 0.78 | <0.01 | Ta33 | 30 × 15 | 0.018 | 0.014 | 0.74 | <0.01
Ta14 | 20 × 15 | 0.020 | 0.016 | 0.77 | <0.01 | Ta34 | 30 × 15 | 0.017 | 0.014 | 0.78 | <0.01
Ta15 | 20 × 15 | 0.021 | 0.016 | 0.73 | <0.01 | Ta35 | 30 × 15 | 0.018 | 0.014 | 0.80 | <0.01
Ta16 | 20 × 15 | 0.021 | 0.015 | 0.78 | <0.01 | Ta36 | 30 × 15 | 0.018 | 0.014 | 0.76 | <0.01
Ta17 | 20 × 15 | 0.023 | 0.017 | 0.74 | <0.01 | Ta37 | 30 × 15 | 0.018 | 0.014 | 0.76 | <0.01
Ta18 | 20 × 15 | 0.021 | 0.016 | 0.75 | <0.01 | Ta38 | 30 × 15 | 0.018 | 0.014 | 0.79 | <0.01
Ta19 | 20 × 15 | 0.021 | 0.017 | 0.76 | <0.01 | Ta39 | 30 × 15 | 0.019 | 0.013 | 0.81 | <0.01
Ta20 | 20 × 15 | 0.021 | 0.016 | 0.79 | <0.01 | Ta40 | 30 × 15 | 0.017 | 0.013 | 0.78 | <0.01
Aver. | / | 0.023 | 0.017 | 0.76 | <0.01 | Aver. | / | 0.018 | 0.014 | 0.77 | <0.01
Table 3. The values of A d for the algorithms Monte Carlo approximation-based MOEA (MC), meta-model (MM), total slack time (SM1), and free slack time (SM2) in the problem set LA.
Cases | n × m | MC | MM | SM1 | SM2 | Cases | n × m | MC | MM | SM1 | SM2
La01 | 10 × 5 | 0.01 | 0.07 | 0.99 | 1.57 | La21 | 15 × 10 | 0.00 | 0.10 | 0.31 | 0.34
La02 | 10 × 5 | 0.01 | 0.04 | 0.26 | 0.50 | La22 | 15 × 10 | 0.03 | 0.01 | 0.19 | 0.13
La03 | 10 × 5 | 0.01 | 0.05 | 0.08 | 0.10 | La23 | 15 × 10 | 0.00 | 0.14 | 0.19 | 0.31
La04 | 10 × 5 | 0.07 | 0.00 | 0.16 | 0.36 | La24 | 15 × 10 | 0.04 | 0.03 | 0.20 | 0.12
La05 | 10 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | La25 | 15 × 10 | 0.00 | 0.07 | 0.19 | 0.30
La06 | 15 × 5 | 0.00 | 0.92 | 3.96 | 7.19 | La26 | 20 × 10 | 0.02 | 0.02 | 0.23 | 0.17
La07 | 15 × 5 | 0.52 | 0.00 | 1.55 | 1.22 | La27 | 20 × 10 | 0.06 | 0.02 | 0.26 | 0.06
La08 | 15 × 5 | 0.00 | 0.33 | 3.70 | 2.95 | La28 | 20 × 10 | 0.00 | 0.11 | 0.18 | 0.15
La09 | 15 × 5 | 0.00 | 0.35 | 2.89 | 1.45 | La29 | 20 × 10 | 0.01 | 0.07 | 0.17 | 0.08
La10 | 15 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | La30 | 20 × 10 | 0.02 | 0.04 | 0.26 | 0.24
La11 | 20 × 5 | 0.00 | 0.66 | 6.06 | 4.84 | La31 | 30 × 10 | 0.15 | 0.00 | 0.21 | 0.25
La12 | 20 × 5 | 0.04 | 0.03 | 1.38 | 0.92 | La32 | 30 × 10 | 0.00 | 0.18 | 0.24 | 0.24
La13 | 20 × 5 | 0.06 | 0.05 | 1.34 | 1.22 | La33 | 30 × 10 | 0.00 | 0.17 | 0.24 | 0.09
La14 | 20 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | La34 | 30 × 10 | 0.00 | 0.10 | 0.27 | 0.08
La15 | 20 × 5 | 0.00 | 0.30 | 0.63 | 1.12 | La35 | 30 × 10 | 0.01 | 0.16 | 0.18 | 0.26
La16 | 10 × 10 | 0.09 | 0.14 | 1.09 | 0.36 | La36 | 15 × 15 | 0.00 | 0.12 | 0.19 | 0.38
La17 | 10 × 10 | 0.00 | 0.08 | 0.20 | 0.16 | La37 | 15 × 15 | 0.00 | 0.12 | 0.22 | 0.25
La18 | 10 × 10 | 0.00 | 0.15 | 0.30 | 0.46 | La38 | 15 × 15 | 0.00 | 0.04 | 0.18 | 0.27
La19 | 10 × 10 | 0.01 | 0.02 | 0.34 | 0.13 | La39 | 15 × 15 | 0.00 | 0.13 | 0.04 | 0.20
La20 | 10 × 10 | 0.02 | 0.09 | 0.40 | 0.55 | La40 | 15 × 15 | 0.00 | 0.05 | 0.24 | 0.31
Aver. | / | 0.04 | 0.16 | 1.27 | 1.26 | Aver. | / | 0.02 | 0.08 | 0.21 | 0.21
Table 4. The values of A d for the algorithms MC, MM, SM1 and SM2 in the problem set TA.
Cases | n × m | MC | MM | SM1 | SM2 | Cases | n × m | MC | MM | SM1 | SM2
Ta01 | 15 × 15 | 0.00 | 0.08 | 0.19 | 0.20 | Ta21 | 20 × 20 | 0.02 | 0.05 | 0.23 | 0.31
Ta02 | 15 × 15 | 0.00 | 0.08 | 0.22 | 0.33 | Ta22 | 20 × 20 | 0.00 | 0.17 | 0.28 | 0.34
Ta03 | 15 × 15 | 0.00 | 0.04 | 0.17 | 0.09 | Ta23 | 20 × 20 | 0.09 | 0.01 | 0.14 | 0.19
Ta04 | 15 × 15 | 0.01 | 0.12 | 0.21 | 0.17 | Ta24 | 20 × 20 | 0.02 | 0.01 | 0.17 | 0.16
Ta05 | 15 × 15 | 0.00 | 0.15 | 0.15 | 0.24 | Ta25 | 20 × 20 | 0.00 | 0.22 | 0.29 | 0.24
Ta06 | 15 × 15 | 0.04 | 0.01 | 0.23 | 0.22 | Ta26 | 20 × 20 | 0.00 | 0.07 | 0.30 | 0.22
Ta07 | 15 × 15 | 0.00 | 0.14 | 0.24 | 0.26 | Ta27 | 20 × 20 | 0.00 | 0.09 | 0.18 | 0.38
Ta08 | 15 × 15 | 0.01 | 0.02 | 0.08 | 0.11 | Ta28 | 20 × 20 | 0.03 | 0.05 | 0.30 | 0.21
Ta09 | 15 × 15 | 0.06 | 0.03 | 0.27 | 0.30 | Ta29 | 20 × 20 | 0.02 | 0.06 | 0.31 | 0.34
Ta10 | 15 × 15 | 0.06 | 0.10 | 0.29 | 0.22 | Ta30 | 20 × 20 | 0.02 | 0.05 | 0.24 | 0.24
Ta11 | 20 × 15 | 0.00 | 0.22 | 0.29 | 0.30 | Ta31 | 30 × 15 | 0.02 | 0.04 | 0.16 | 0.17
Ta12 | 20 × 15 | 0.01 | 0.04 | 0.24 | 0.19 | Ta32 | 30 × 15 | 0.05 | 0.05 | 0.13 | 0.14
Ta13 | 20 × 15 | 0.00 | 0.08 | 0.14 | 0.29 | Ta33 | 30 × 15 | 0.03 | 0.05 | 0.34 | 0.26
Ta14 | 20 × 15 | 0.01 | 0.12 | 0.10 | 0.17 | Ta34 | 30 × 15 | 0.00 | 0.13 | 0.33 | 0.39
Ta15 | 20 × 15 | 0.00 | 0.21 | 0.26 | 0.37 | Ta35 | 30 × 15 | 0.00 | 0.09 | 0.21 | 0.14
Ta16 | 20 × 15 | 0.00 | 0.06 | 0.23 | 0.21 | Ta36 | 30 × 15 | 0.02 | 0.03 | 0.15 | 0.18
Ta17 | 20 × 15 | 0.03 | 0.06 | 0.14 | 0.33 | Ta37 | 30 × 15 | 0.03 | 0.02 | 0.17 | 0.08
Ta18 | 20 × 15 | 0.00 | 0.07 | 0.24 | 0.30 | Ta38 | 30 × 15 | 0.00 | 0.27 | 0.24 | 0.35
Ta19 | 20 × 15 | 0.09 | 0.00 | 0.30 | 0.34 | Ta39 | 30 × 15 | 0.07 | 0.16 | 0.01 | 0.13
Ta20 | 20 × 15 | 0.00 | 0.14 | 0.17 | 0.19 | Ta40 | 30 × 15 | 0.00 | 0.13 | 0.19 | 0.18
Aver. | / | 0.02 | 0.09 | 0.21 | 0.24 | Aver. | / | 0.02 | 0.09 | 0.22 | 0.23
Table 5. The values of N μ for the algorithms MC, MM, SM1, and SM2 in the problem set LA.
Cases | n × m | MC | MM | SM1 | SM2 | Cases | n × m | MC | MM | SM1 | SM2
La01 | 10 × 5 | 11 | 12 | 2 | 4 | La21 | 15 × 10 | 15 | 15 | 8 | 13
La02 | 10 × 5 | 12 | 9 | 5 | 4 | La22 | 15 × 10 | 16 | 17 | 10 | 9
La03 | 10 × 5 | 11 | 8 | 2 | 6 | La23 | 15 × 10 | 13 | 18 | 6 | 12
La04 | 10 × 5 | 9 | 8 | 4 | 4 | La24 | 15 × 10 | 12 | 12 | 10 | 15
La05 | 10 × 5 | 1 | 1 | 2 | 3 | La25 | 15 × 10 | 14 | 15 | 11 | 9
La06 | 15 × 5 | 3 | 3 | 3 | 6 | La26 | 20 × 10 | 15 | 15 | 10 | 10
La07 | 15 × 5 | 11 | 6 | 4 | 8 | La27 | 20 × 10 | 12 | 14 | 10 | 13
La08 | 15 × 5 | 5 | 7 | 3 | 9 | La28 | 20 × 10 | 14 | 18 | 8 | 7
La09 | 15 × 5 | 7 | 9 | 9 | 7 | La29 | 20 × 10 | 13 | 14 | 7 | 11
La10 | 15 × 5 | 1 | 2 | 4 | 2 | La30 | 20 × 10 | 13 | 16 | 10 | 9
La11 | 20 × 5 | 4 | 5 | 6 | 5 | La31 | 30 × 10 | 15 | 12 | 9 | 8
La12 | 20 × 5 | 11 | 8 | 5 | 7 | La32 | 30 × 10 | 13 | 12 | 7 | 12
La13 | 20 × 5 | 12 | 7 | 7 | 6 | La33 | 30 × 10 | 11 | 13 | 8 | 9
La14 | 20 × 5 | 1 | 1 | 3 | 1 | La34 | 30 × 10 | 13 | 16 | 11 | 10
La15 | 20 × 5 | 12 | 16 | 7 | 6 | La35 | 30 × 10 | 11 | 16 | 10 | 7
La16 | 10 × 10 | 9 | 8 | 2 | 2 | La36 | 15 × 15 | 12 | 16 | 11 | 11
La17 | 10 × 10 | 10 | 11 | 7 | 7 | La37 | 15 × 15 | 14 | 17 | 8 | 9
La18 | 10 × 10 | 12 | 14 | 8 | 4 | La38 | 15 × 15 | 15 | 18 | 14 | 13
La19 | 10 × 10 | 12 | 12 | 6 | 6 | La39 | 15 × 15 | 18 | 14 | 8 | 12
La20 | 10 × 10 | 12 | 14 | 4 | 7 | La40 | 15 × 15 | 14 | 17 | 10 | 11
Aver. | / | 8.3 | 8.1 | 4.7 | 5.2 | Aver. | / | 13.7 | 15.3 | 9.3 | 10.5
Table 6. The values of N μ for the algorithms MC, MM, SM1, and SM2 in the problem set TA.
Cases | n × m | MC | MM | SM1 | SM2 | Cases | n × m | MC | MM | SM1 | SM2
Ta01 | 15 × 15 | 16 | 18 | 9 | 9 | Ta21 | 20 × 20 | 17 | 14 | 10 | 9
Ta02 | 15 × 15 | 14 | 10 | 10 | 12 | Ta22 | 20 × 20 | 13 | 15 | 6 | 8
Ta03 | 15 × 15 | 15 | 19 | 12 | 10 | Ta23 | 20 × 20 | 15 | 11 | 11 | 11
Ta04 | 15 × 15 | 14 | 19 | 11 | 11 | Ta24 | 20 × 20 | 13 | 13 | 11 | 9
Ta05 | 15 × 15 | 17 | 15 | 9 | 11 | Ta25 | 20 × 20 | 14 | 12 | 11 | 9
Ta06 | 15 × 15 | 16 | 14 | 9 | 10 | Ta26 | 20 × 20 | 16 | 12 | 10 | 10
Ta07 | 15 × 15 | 15 | 17 | 12 | 10 | Ta27 | 20 × 20 | 14 | 8 | 12 | 9
Ta08 | 15 × 15 | 17 | 11 | 13 | 9 | Ta28 | 20 × 20 | 15 | 17 | 7 | 9
Ta09 | 15 × 15 | 15 | 14 | 10 | 8 | Ta29 | 20 × 20 | 17 | 12 | 12 | 10
Ta10 | 15 × 15 | 15 | 14 | 11 | 12 | Ta30 | 20 × 20 | 17 | 10 | 7 | 7
Ta11 | 20 × 15 | 13 | 15 | 10 | 10 | Ta31 | 30 × 15 | 13 | 13 | 9 | 10
Ta12 | 20 × 15 | 12 | 14 | 8 | 7 | Ta32 | 30 × 15 | 13 | 12 | 7 | 8
Ta13 | 20 × 15 | 16 | 13 | 8 | 10 | Ta33 | 30 × 15 | 13 | 11 | 6 | 5
Ta14 | 20 × 15 | 13 | 14 | 9 | 10 | Ta34 | 30 × 15 | 14 | 13 | 10 | 9
Ta15 | 20 × 15 | 17 | 18 | 12 | 11 | Ta35 | 30 × 15 | 10 | 14 | 9 | 7
Ta16 | 20 × 15 | 19 | 13 | 8 | 8 | Ta36 | 30 × 15 | 15 | 10 | 9 | 6
Ta17 | 20 × 15 | 15 | 11 | 10 | 9 | Ta37 | 30 × 15 | 13 | 11 | 6 | 8
Ta18 | 20 × 15 | 16 | 15 | 10 | 9 | Ta38 | 30 × 15 | 9 | 15 | 8 | 10
Ta19 | 20 × 15 | 10 | 13 | 12 | 8 | Ta39 | 30 × 15 | 11 | 14 | 7 | 6
Ta20 | 20 × 15 | 16 | 17 | 11 | 10 | Ta40 | 30 × 15 | 13 | 13 | 9 | 7
Aver. | / | 15.1 | 14.7 | 10.2 | 9.7 | Aver. | / | 13.8 | 12.5 | 8.9 | 8.4
Table 7. Results on the CPU time (in seconds) of algorithms MC, MM, SM1, and SM2.
Cases | n × m | MC T a | MC T r | MM T a | MM T r | SM1 T a | SM1 T r | SM2 T a | SM2 T r
La01–La05 | 10 × 5 | 66 | 1.00 | 53 | 0.80 | 50 | 0.76 | 50 | 0.76
La06–La10 | 15 × 5 | 83 | 1.00 | 54 | 0.65 | 52 | 0.63 | 50 | 0.60
La11–La15 | 20 × 5 | 113 | 1.00 | 55 | 0.49 | 55 | 0.49 | 52 | 0.46
La16–La20 | 10 × 10 | 102 | 1.00 | 54 | 0.53 | 54 | 0.53 | 51 | 0.50
La21–La25 | 15 × 10 | 153 | 1.00 | 56 | 0.37 | 55 | 0.36 | 54 | 0.35
La26–La30 | 20 × 10 | 220 | 1.00 | 60 | 0.27 | 59 | 0.27 | 57 | 0.26
La31–La35 | 30 × 10 | 398 | 1.00 | 73 | 0.18 | 70 | 0.18 | 68 | 0.17
La36–La40 | 15 × 15 | 238 | 1.00 | 65 | 0.27 | 62 | 0.26 | 60 | 0.25
Ta01–Ta10 | 15 × 15 | 231 | 1.00 | 63 | 0.27 | 61 | 0.26 | 59 | 0.26
Ta11–Ta20 | 20 × 15 | 351 | 1.00 | 75 | 0.21 | 70 | 0.20 | 68 | 0.19
Ta21–Ta30 | 20 × 20 | 543 | 1.00 | 85 | 0.16 | 82 | 0.15 | 83 | 0.15
Ta31–Ta40 | 30 × 15 | 733 | 1.00 | 95 | 0.13 | 92 | 0.13 | 89 | 0.12
Aver. | / | 269.3 | 1.00 | 65.7 | 0.24 | 63.5 | 0.24 | 61.8 | 0.23
