Article

A Novel Shuffled Frog-Leaping Algorithm for Unrelated Parallel Machine Scheduling with Deteriorating Maintenance and Setup Time

School of Automation, Wuhan University of Technology, Wuhan 430000, China
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(9), 1574; https://doi.org/10.3390/sym13091574
Submission received: 27 May 2021 / Revised: 12 July 2021 / Accepted: 12 July 2021 / Published: 26 August 2021
(This article belongs to the Special Issue Meta-Heuristics for Manufacturing Systems Optimization)

Abstract

Unrelated parallel machine scheduling problems (UPMSP) with various processing constraints have been considered extensively; however, UPMSP with deteriorating preventive maintenance (PM) and sequence-dependent setup time (SDST) is seldom considered. In this study, a new differentiated shuffled frog-leaping algorithm (DSFLA) is presented to solve this problem with makespan minimization. The whole search procedure consists of two phases. In the second phase, quality evaluation is performed on each memeplex, differentiated search processes are then implemented between good memeplexes and the other ones, and a new population shuffling is proposed. A number of experiments were conducted. The computational results show that the main strategies of DSFLA were effective and reasonable and that DSFLA was very competitive in solving UPMSP with deteriorating PM and SDST.

1. Introduction

The parallel machine scheduling problem (PMSP) is a typical scheduling problem that can be categorized into three types: identical PMSP, uniform PMSP, and unrelated PMSP (UPMSP). As the generalization of the other two types, UPMSP has attracted great attention, and a number of results have been obtained to solve UPMSP with various processing constraints, such as random breakdown and random rework [1,2,3,4].
Preventive maintenance (PM) often exists in many actual manufacturing cases, can effectively prevent potential failures and serious accidents in parallel machines, and is often required to be considered in UPMSP. Regarding UPMSP with maintenance, Yang et al. [5] studied UPMSP with aging effects and PM to minimize the total machine load and proved that the problem remained polynomially solvable when a maintenance frequency on every machine is given.
Tavana et al. [4] presented a three-stage maintenance scheduling model for UPMSP with aging effects and multi-maintenance activities. Wang and Liu [6] proposed an improved non-dominated sorting genetic algorithm-II for multi-objective UPMSP with multi-resource PM. Gara-Ali et al. [7] provided several performance criteria and different maintenance systems and gave a new method to solve the problem with deterioration and maintenance. Lei and Liu [8] proposed an artificial bee colony (ABC) with division for distributed UPMSP with PM.
Deteriorating maintenance means that the length of maintenance activity is not constant and depends on the running time of the machine. UPMSP with deteriorating maintenance has also been studied. Cheng et al. [9] and Hsu et al. [10] provided some polynomial solutions. Lu et al. [11] considered UPMSP with parallel-batching processing, deteriorating jobs, and deteriorating maintenance and presented a mixed integer programming model and a hybrid ABC with tabu search (TS).
In many real-life industries, such as the chemical, printing, metal processing, and semiconductor industries, SDST often cannot be ignored [12]. UPMSP with SDST has been extensively addressed since the pioneering work of Parker et al. [13]. Kurz and Askin [14] proposed several heuristics. Arnaout et al. [15] designed an improved ant colony optimization with a pheromone re-initialization method. Vallada and Ruiz [16] presented a genetic algorithm to minimize the makespan. Lin and Ying [17] developed a hybrid ABC for UPMSP with machine-dependent setup times and SDST.
Caniyilmaz et al. [18] applied an ABC algorithm to solve UPMSP with processing set restrictions, an SDST, and a due date. Diana et al. [19] presented an improved immune algorithm by introducing a local search and a new selection operator. Wang and Zheng [20] proposed an estimation of distribution algorithm and gave five local search strategies. Ezugwu and Akutsah [21] proposed an improved firefly algorithm refined with a local search. Fanjul-Peyro et al. [22] presented an exact algorithm. Bektur and Sarac [23] introduced a TS and a simulated annealing algorithm for UPMSP with SDST, machine eligibility restrictions and a common server. Cota et al. [24] developed a multi-objective smart pool search algorithm for green UPMSP with SDST.
For UPMSP with PM and SDST, Avalos-Rosales et al. [25] developed an efficient meta-heuristic based on a multi-start strategy to minimize the makespan, and Wang and Pan [26] presented a novel imperialist competitive algorithm with an estimation of distribution algorithm to optimize the makespan and total tardiness.
SDST and deteriorating maintenance are common processing constraints and often exist simultaneously in real-life production processes; however, previous works mainly deal with UPMSP with only one of these two constraints, few papers focus on UPMSP with maintenance and SDST [25,26], and UPMSP with deteriorating PM and SDST is hardly studied. It is necessary to investigate UPMSP with deteriorating PM and SDST due to its prevalence in production. On the other hand, meta-heuristics, including ABC, have been applied to solve UPMSP with various processing constraints, such as PM and SDST; however, the shuffled frog-leaping algorithm (SFLA), a meta-heuristic obtained by observing, imitating, and modeling the search behavior of frogs for the location with the maximum amount of available food, is seldom used to handle UPMSP.
SFLA has a fast convergence speed and an effective algorithm structure containing local search and global information exchange [27]. It has been widely applied to solve various optimization problems, such as topology optimization and production scheduling problems [28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45]. The existing works on scheduling problems reveal that SFLA has great potential for solving UPMSP with deteriorating PM and SDST. On the other hand, in existing SFLA the same search process and parameters are adopted in all memeplexes, and a differentiated search process, which can effectively intensify the exploration ability and avoid falling into local optima, is seldom used; thus, it is necessary to investigate the possible applications of SFLA with new optimization mechanisms for UPMSP with SDST and PM.
In this study, UPMSP with deteriorating PM and SDST is considered, and a new differentiated shuffled frog-leaping algorithm (DSFLA) is applied to minimize the makespan. The entire search procedure is composed of two phases. In the second phase, the memeplex quality is evaluated on each memeplex to divide all memeplexes into good memeplexes and others, then the differentiated search processes are implemented between good memeplexes and others, and a new population shuffling is proposed. We conduct experiments to test the effect of the main strategies and the search advantages for DSFLA.
The remainder of the paper is organized as follows. The problem is described in Section 2 followed by an introduction to SFLA in Section 3. DSFLA for the considered problem is reported in Section 4. Numerical experiments on DSFLA are reported in Section 5, the conclusions are summarized in the final section, and some topics of future research are provided.

2. Problem Description

UPMSP with deteriorating PM and SDST is composed of $n$ jobs $J_1, J_2, \ldots, J_n$ and $m$ unrelated parallel machines $M_1, M_2, \ldots, M_m$. Each job can be processed on any one of the $m$ machines. The processing time $p_{kj}$ of job $J_j$ depends on the performance of its processing machine $M_k$. The processing times on different machines are usually different.
On machine $M_k$, jobs are processed in the time intervals between two consecutive maintenance activities; the length of each interval is denoted by $u_k$, and $w_k$ denotes the duration of each maintenance activity. For deteriorating maintenance, $w_k$ is not constant and depends on $M_k$ and the starting time of the maintenance: $w_k = c_k + d_k \times t_k$, where $c_k, d_k$ are constants and $t_k$ indicates the starting time of the maintenance on $M_k$. There are several intervals for processing on each machine. If the processing of a job cannot be completed in a processing interval, the job cannot be processed in the current interval and is moved to the next interval.
For SDST, $s_{kij}$ is the setup time for processing job $J_j$ after job $J_i$ on machine $M_k$, $s_{k0j}$ indicates the setup time of machine $M_k$ to process the first job $J_j$ after a maintenance activity, and $s_{kj0}$ is the setup time of machine $M_k$ to perform a maintenance activity after job $J_j$.
The following constraints are imposed on jobs and machines.
Each job and machine is available at time zero.
Each job can be processed on only one machine at a time.
Operations cannot be interrupted.
Preemption is not allowed.
The problem is composed of the scheduling sub-problem and machine assignment sub-problem. The goal of the problem is to minimize the makespan.
Let $C_{\pi(j)}$ be the completion time of job $j$ in schedule $\pi$; the makespan is defined by $C_{\max}(\pi) = \max_{j=1,\ldots,n}\{C_{\pi(j)}\}$. Thus, the objective is to find a schedule $\pi$ that minimizes the makespan, $C_{\max}^* = \min_{\pi \in \Pi}\{C_{\max}(\pi)\}$, where $\Pi$ is the set of all feasible schedules $\pi$.
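To make these timing rules concrete, the following minimal sketch (our own illustration, not the authors' implementation; how setups are counted against the interval boundary and the reservation of the pre-maintenance setup are assumptions) computes the completion time of a fixed job sequence on one machine with deteriorating PM and SDST.

```cpp
#include <cstdio>
#include <vector>

// Minimal sketch of the timing rules on one machine M_k: jobs are processed in
// intervals of length u that are separated by maintenance of deteriorating
// length w = c + d * t (t = starting time of the maintenance); a job that does
// not fit in the remaining part of the current interval waits for the next one.
struct MachineData {
    double u;                              // length of a processing interval
    double c, d;                           // deteriorating maintenance: w = c + d * t
    std::vector<double> p;                 // p[j]: processing time of job j (index 0 unused)
    std::vector<std::vector<double>> s;    // s[i][j]: setup from i to j (index 0 = maintenance)
};

// Returns the completion time of the last job in 'seq' (jobs indexed from 1).
double completionTime(const MachineData& m, const std::vector<int>& seq) {
    double t = 0.0;             // current time
    double intervalStart = 0.0; // start of the current processing interval
    int prev = 0;               // previous job (0 = start of interval / after maintenance)
    for (int j : seq) {
        // time needed if job j is appended now: setup + processing, plus the
        // setup s[j][0] reserved before the next maintenance (assumption,
        // consistent with u_k >= p_ki + s_k0i + s_ki0 in the test instances)
        double need = m.s[prev][j] + m.p[j] + m.s[j][0];
        if (t + need > intervalStart + m.u) {
            // job j does not fit: perform maintenance and open a new interval
            double tm = t + m.s[prev][0];  // maintenance starts after setup s_{k,prev,0}
            double w = m.c + m.d * tm;     // deteriorating maintenance duration
            intervalStart = tm + w;
            t = intervalStart;
            prev = 0;
        }
        t += m.s[prev][j] + m.p[j];        // setup, then processing of job j
        prev = j;
    }
    return t;
}

int main() {
    // Tiny hypothetical data: 2 jobs, u = 20, c = 1, d = 0.1.
    MachineData m{20.0, 1.0, 0.1,
                  {0, 9, 8},
                  {{0, 2, 3}, {2, 0, 4}, {3, 4, 0}}};
    std::printf("C = %.2f\n", completionTime(m, {1, 2}));
    return 0;
}
```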
An illustrative example is provided. Table 1 and Table 2 give the processing times and setup times. There are two machines and eight jobs. For deteriorating PM, $c_k = 1$ and $d_k = 0.1$ for all machines. A schedule of the example is shown in Figure 1.
When no PM is considered, any two jobs on any one machine are symmetrical; that is, exchanging them does not change the makespan. When PM is handled, only two jobs on a machine within the same interval, i.e., between time 0 and the first PM or between two consecutive PMs, remain symmetrical in the above sense; thus, the consideration of PM has an impact on the optimization of the considered UPMSP.

3. Introduction to SFLA

In SFLA, a solution is defined as the position of a frog, and there is a population of possible solutions defined by a set of virtual frogs. After the initial population P is produced, the following steps, which are population division, memeplex search, and population shuffling, are repeated until the stopping condition is met.
Population division is as follows. After all solutions are sorted, suppose that $Fit_1 \ge Fit_2 \ge \cdots \ge Fit_N$; then solution $x_k$ is allocated into memeplex $(k \bmod s) + 1$, where $k \bmod s$ indicates the remainder of $k/s$, $Fit_i$ is the fitness of solution $x_i$, and $s$ indicates the number of memeplexes.
The search process in memeplex $M^l$ is shown below. $x_w$ is used as the optimization object, and a new solution $x_w'$ is produced by Equation (2) with $x_w$ and $x_b$. If the new solution is better than $x_w$, then $x_w$ is replaced with $x_w'$; otherwise, $x_w$ and $x_g$ are used to generate a solution $x_w'$ by Equation (3). If $x_w'$ has better fitness than $x_w$, then $x_w'$ becomes the new $x_w$; otherwise, a randomly generated solution directly substitutes for $x_w$, where $x_w$ and $x_b$ are the worst solution and best solution in memeplex $M^l$ and $x_g$ is the best solution of P.
$x_w' = x_w + rand \times (x_b - x_w)$ (2)
$x_w' = x_w + rand \times (x_g - x_w)$ (3)
where $rand$ is a random number following the uniform distribution in [0, 1].
A new population P is constructed by shuffling all evolved memeplexes.
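As a concrete illustration of Equations (2) and (3), the following minimal sketch (our own example on a one-dimensional continuous function, not the scheduling variant used later) performs one canonical memeplex search step.

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

// Minimal sketch of one canonical SFLA memeplex step (Equations (2) and (3));
// all identifiers are our own.
double fitness(double x) { return -(x - 3.0) * (x - 3.0); }  // maximize => best at x = 3

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> rnd(0.0, 1.0), range(-10.0, 10.0);

    std::vector<double> memeplex(5);
    for (double& x : memeplex) x = range(gen);

    // xw: worst frog of the memeplex, xb: best of the memeplex, xg: global best
    auto worst = std::min_element(memeplex.begin(), memeplex.end(),
                   [](double a, double b){ return fitness(a) < fitness(b); });
    auto best  = std::max_element(memeplex.begin(), memeplex.end(),
                   [](double a, double b){ return fitness(a) < fitness(b); });
    double xw = *worst, xb = *best, xg = xb;   // single memeplex: global best = xb

    double xNew = xw + rnd(gen) * (xb - xw);   // Equation (2)
    if (fitness(xNew) <= fitness(xw)) {
        xNew = xw + rnd(gen) * (xg - xw);      // Equation (3)
        if (fitness(xNew) <= fitness(xw))
            xNew = range(gen);                 // random replacement of the worst frog
    }
    *worst = xNew;
    std::printf("new worst-position value: %.3f\n", xNew);
    return 0;
}
```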
As stated above, all memeplexes are often evolved with the same search process and parameters [29,32,33], and a differentiated search in memeplexes is seldom considered. When differentiated search operators and parameters are introduced, the search ability is intensified and local optima can be effectively avoided; as a result, the search efficiency is greatly improved. In this study, DSFLA is presented to solve UPMSP with deteriorating PM and SDST.

4. DSFLA for UPMSP with Deteriorating PM and SDST

DSFLA is composed of two phases, and the differentiated search is implemented in the second phase.

4.1. Initialization, Population Division, and the First Phase

UPMSP consists of two sub-problems: machine assignment and scheduling, and a two-string representation is often applied to indicate a solution of UPMSP [46,47]; however, the two strings are often dependent on each other, and it is difficult to design and apply global search or local search on each string independently. In this study, a solution of the problem is represented as a machine assignment string $[M_{\theta_1}, M_{\theta_2}, \ldots, M_{\theta_n}]$ and a scheduling string $[q_1, q_2, \ldots, q_n]$, where $M_{\theta_j}$ is the assigned parallel machine of job $J_j$, $j = 1, 2, \ldots, n$, and $q_l$ is a real number corresponding to $J_l$. These two strings are independent.
Lei and Liu [8] analyzed why a scheduling string is introduced, which relates to the above-mentioned change of symmetry. The decoding process is described below. First, a machine is decided for each job according to the machine assignment string; then, on each machine $M_k$, for all jobs $J_i, J_{i+1}, \ldots, J_j$ allocated to $M_k$, that is, $M_{\theta_i} = M_{\theta_{i+1}} = \cdots = M_{\theta_j} = M_k$, the processing sequence is decided by the ascending order of $q_l$, $l \in [i, j]$, $i < j$, and jobs and maintenance activities are handled on $M_k$ sequentially.
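A minimal sketch of this decoding step is given below (our own illustration; the per-machine timing computation of Section 2 is assumed to be applied afterwards to each resulting sequence).

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Minimal sketch of decoding the two-string solution: machineOf[j] is the
// assigned machine of job j and q[j] is its real-valued key; jobs assigned to
// the same machine are sequenced in ascending order of q.
std::vector<std::vector<int>> decode(const std::vector<int>& machineOf,
                                     const std::vector<double>& q, int m) {
    int n = (int)machineOf.size();
    std::vector<std::vector<int>> seq(m);          // seq[k]: job sequence on machine k
    for (int j = 0; j < n; ++j) seq[machineOf[j]].push_back(j);
    for (auto& s : seq)
        std::sort(s.begin(), s.end(),
                  [&](int a, int b){ return q[a] < q[b]; });
    return seq;
}

int main() {
    std::vector<int> machineOf = {0, 1, 0, 1, 0};          // 5 jobs, 2 machines
    std::vector<double> q      = {0.7, 0.2, 0.1, 0.9, 0.4};
    auto seq = decode(machineOf, q, 2);
    for (int k = 0; k < 2; ++k) {
        std::printf("M%d:", k + 1);
        for (int j : seq[k]) std::printf(" J%d", j + 1);
        std::printf("\n");
    }
    return 0;
}
```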
After the initial population P is randomly produced, population division is performed in the following way. The best s solutions of P are decided and sorted in the descending order of their objective; the first solution is allocated into memeplex $M^1$, the second into $M^2$, and so on. Then, binary tournament selection is used to allocate the other solutions into memeplexes: two solutions $x_i$ and $x_j$ are randomly selected, and if $x_i$ ($x_j$) is better than $x_j$ ($x_i$), then $x_i$ ($x_j$) is included into $M^1$. If the two solutions have the same objective, one of them is chosen stochastically and added into $M^1$; the unchosen solution goes back to population P. The above step is repeated to decide a solution for $M^2, M^3, \ldots, M^s$, and the whole procedure is repeated until all solutions are assigned. Obviously, $N = s \times \theta$, where $\theta$ denotes the size of each memeplex.
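The following sketch (our own reading of the above procedure, with ties broken randomly and the s best solutions seeded in ascending order of makespan) fills the remaining places of each memeplex by binary tournament.

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Minimal sketch of the population division used by DSFLA; makespan[i] is the
// objective value of solution i, and s is the number of memeplexes.
std::vector<std::vector<int>> divide(const std::vector<double>& makespan,
                                     int s, std::mt19937& gen) {
    int N = (int)makespan.size();
    std::vector<int> idx(N);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b){ return makespan[a] < makespan[b]; });

    std::vector<std::vector<int>> memeplex(s);
    for (int l = 0; l < s; ++l) memeplex[l].push_back(idx[l]);   // seed with the s best

    std::vector<int> pool(idx.begin() + s, idx.end());           // remaining solutions
    int l = 0;
    while (!pool.empty()) {
        if (pool.size() == 1) { memeplex[l].push_back(pool[0]); break; }
        std::uniform_int_distribution<int> pick(0, (int)pool.size() - 1);
        int a = pick(gen), b = pick(gen);
        while (b == a) b = pick(gen);
        // binary tournament: the better solution (random choice on a tie) joins memeplex l
        int win = (makespan[pool[a]] < makespan[pool[b]]) ? a :
                  (makespan[pool[b]] < makespan[pool[a]]) ? b :
                  (pick(gen) % 2 ? a : b);
        memeplex[l].push_back(pool[win]);
        pool.erase(pool.begin() + win);                          // loser stays in the pool
        l = (l + 1) % s;                                         // next memeplex
    }
    return memeplex;
}

int main() {
    std::mt19937 gen(1);
    std::vector<double> mk = {950, 930, 980, 910, 960, 940, 970, 920};
    auto meme = divide(mk, 2, gen);   // 8 solutions -> 2 memeplexes of size 4
    std::printf("|M1| = %zu, |M2| = %zu\n", meme[0].size(), meme[1].size());
    return 0;
}
```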
There are two phases in the search process of DSFLA. The steps of the first phase are identical to those of SFLA in Section 3. The search process in $M^i$ is shown below. Repeat the following steps $R_1$ times: decide $x_w, x_b \in M^i$ and execute two-point crossover on the machine assignment strings of $x_w$ and $x_b$; if the obtained solution $x$ is better than $x_w$, then replace $x_w$ with $x$; otherwise, apply two-point crossover on the scheduling strings of $x_w$ and $x_b$, and if the generated solution $x$ has a smaller makespan than $x_w$, $x$ becomes the new $x_w$, where $R_1$ is an integer.
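The two-point crossover used as the global search can be sketched as follows (our own illustration; which parent donates the copied segment is an assumption).

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

// Minimal sketch of two-point crossover on a machine assignment string: the
// segment between two random cut points of the better parent xb is copied into
// a copy of the worse parent xw.
std::vector<int> twoPointCrossover(const std::vector<int>& xw,
                                   const std::vector<int>& xb, std::mt19937& gen) {
    std::uniform_int_distribution<int> pos(0, (int)xw.size() - 1);
    int a = pos(gen), b = pos(gen);
    if (a > b) std::swap(a, b);
    std::vector<int> child = xw;
    for (int i = a; i <= b; ++i) child[i] = xb[i];   // inherit the xb segment
    return child;
}

int main() {
    std::mt19937 gen(7);
    std::vector<int> xw = {0, 1, 0, 2, 1, 2}, xb = {2, 0, 1, 1, 0, 2};
    for (int g : twoPointCrossover(xw, xb, gen)) std::printf("%d ", g);
    std::printf("\n");
    return 0;
}
```

The same operator applies unchanged to the scheduling string, since neither string carries a permutation constraint.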
In the first phase, global search is only used because of its good exploration ability in the early search stage. In the second phase, the differentiated search processes are implemented based on memeplex quality evaluation.

4.2. The Second Phase

The evaluation of memeplex quality is seldom considered in SFLA. In this study, memeplex quality is evaluated according to solution quality and evolution quality. For memeplex $M^l$, its quality $Meq_l$ is defined by

$Meq_l = \alpha_1 \times \frac{msq_{\max} - msq_l}{msq_{\max} - msq_{\min}} + \alpha_2 \times \frac{mvq_l - mvq_{\min}}{mvq_{\max} - mvq_{\min}}$

where $\alpha_1, \alpha_2$ are real numbers, $msq_l$ and $mvq_l$ indicate the solution quality and evolution quality of $M^l$, respectively, $msq_{\max} = \max_{l=1,2,\ldots,s} msq_l$, $msq_{\min} = \min_{l=1,2,\ldots,s} msq_l$, and $mvq_{\max}$ and $mvq_{\min}$ represent the maximum and minimum evolution quality of all memeplexes, respectively.

After all solutions in $M^l$ are sorted in the ascending order of makespan, let $H_1$ indicate the set of the first $\theta/2$ solutions except $x_b$ and let $H_2$ be the set of the remaining $\theta/2$ solutions in $M^l$,

$msq_l = C_{\max}(x_b) + \beta_1 \times \bar{C}_{\max}(H_1) + \beta_2 \times \bar{C}_{\max}(H_2)$

where $\bar{C}_{\max}(H_i)$ is the average makespan of all solutions in $H_i$, $i = 1, 2$, and $\beta_i$, $i = 1, 2$, is a real number.

Solutions of $H_1$ are better than those of $H_2$; therefore, we set $\beta_1 > \beta_2$ to reflect this feature. $\beta_1 = 0.4$ and $\beta_2 = 0.1$ are obtained by experiments.
Let $Im_x$ indicate the number of improvements of $x$ between the first generation and the current generation. When $x \in M^l$ is chosen as an optimization object, such as $x_w$ in the general SFLA, if an obtained solution $x'$ is better than $x$, then $Im_x = Im_x + 1$. $Se_x$ is the total number of searches of $x$ from the first generation to the current generation.

$mvq_l = \frac{\sum_{x \in M^l} Im_x}{\sum_{x \in M^l} Se_x}$

For solution $x_i$, $act_{x_i}$ is used to evaluate its evolution quality and is computed by

$act_{x_i} = Im_{x_i} / Se_{x_i}$
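A minimal sketch of this quality evaluation is given below (our own illustration; the values of $\alpha_1$ and $\alpha_2$ and the guard against equal extrema are assumptions).

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Minimal sketch of the memeplex quality Meq_l built from the solution quality
// msq_l and the evolution quality mvq_l.
struct Memeplex {
    std::vector<double> cmax;  // makespans, sorted ascending (cmax[0] = C_max(x_b))
    double sumIm = 0.0;        // sum of Im_x over the memeplex
    double sumSe = 0.0;        // sum of Se_x over the memeplex
};

double msq(const Memeplex& m, double beta1 = 0.4, double beta2 = 0.1) {
    size_t h = m.cmax.size() / 2;          // H1: ranks 2..theta/2 (best half except x_b)
    double a1 = 0.0, a2 = 0.0;
    for (size_t i = 1; i < h; ++i) a1 += m.cmax[i];
    for (size_t i = h; i < m.cmax.size(); ++i) a2 += m.cmax[i];
    if (h > 1) a1 /= double(h - 1);
    if (m.cmax.size() > h) a2 /= double(m.cmax.size() - h);
    return m.cmax.front() + beta1 * a1 + beta2 * a2;   // C_max(x_b) + b1*avg(H1) + b2*avg(H2)
}

double mvq(const Memeplex& m) { return m.sumSe > 0 ? m.sumIm / m.sumSe : 0.0; }

std::vector<double> meq(const std::vector<Memeplex>& ms,
                        double alpha1 = 0.5, double alpha2 = 0.5) {  // alpha values assumed
    std::vector<double> q(ms.size()), s(ms.size()), v(ms.size());
    for (size_t l = 0; l < ms.size(); ++l) { s[l] = msq(ms[l]); v[l] = mvq(ms[l]); }
    double sMin = *std::min_element(s.begin(), s.end());
    double sMax = *std::max_element(s.begin(), s.end());
    double vMin = *std::min_element(v.begin(), v.end());
    double vMax = *std::max_element(v.begin(), v.end());
    for (size_t l = 0; l < ms.size(); ++l) {
        double a = (sMax > sMin) ? (sMax - s[l]) / (sMax - sMin) : 0.0;  // smaller msq is better
        double b = (vMax > vMin) ? (v[l] - vMin) / (vMax - vMin) : 0.0;  // larger mvq is better
        q[l] = alpha1 * a + alpha2 * b;
    }
    return q;
}

int main() {
    std::vector<Memeplex> ms = {
        {{900, 910, 930, 950}, 12, 40},
        {{940, 960, 980, 990},  5, 40}
    };
    for (double q : meq(ms)) std::printf("%.3f\n", q);
    return 0;
}
```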
The second phase is shown as follows.
(1)
Perform population division, compute $Meq_l$ for all memeplexes, sort them in the descending order of $Meq_l$, and construct the set $\Theta = \{M^l \mid Meq_l > \overline{Meq},\ l \le \eta \times \theta\}$.
(2)
For each memeplex $M^l$, $M^l \notin \Theta$, repeat the following steps $R_1$ times: if $|T| > 0$, execute global search between $x_b$ and a randomly chosen $y \in T$; else perform global search between $x_b$ and a solution $y \in M^l$ with $act_y \ge act_x$ for all $x \in M^l$.
(3)
For each memeplex $M^l \in \Theta$,
1
sort all solutions in $M^l$ in the ascending order of makespan, suppose $C_{\max}(x_1) \le C_{\max}(x_2) \le \cdots \le C_{\max}(x_\theta)$, and construct a set $\varphi = \{x_i \mid dist_{x_i} < \overline{dist},\ i \le \theta/2\}$.
2
Repeat the following steps $R_2$ times: randomly choose a solution $x_i \in M^l \setminus \varphi$; if $act_{x_i} > 0.5$, then select a solution $y \in \varphi$ by roulette selection based on $Pr_y$, execute global search between $x_i$ and $y$, and update memory T; else execute global search between $x_i$ and a solution $z \in M^l$ with $act_z \ge act_x$ for all $x \in M^l$, and update memory T.
(4)
Execute multiple neighborhood search on each solution $x \in \varphi$.
(5)
Perform new population shuffling.
where $dist_{x_i} = |C_{\max}(x_i) - C_{\max}(x_b)|$ is defined for each solution $x_i \in M^l$ and $\overline{dist}$ is the average value of all $dist_{x_i}$ in $M^l$. $\eta$ is a real number and set to 0.4 by experiments, $\overline{Meq}$ indicates the average quality of all memeplexes, $\Theta$ is the set of good memeplexes, and $Pr_y$ is a probability defined by
$Pr_y = \frac{|\varphi| - rank_y}{|\varphi|} \times \frac{Im_y}{\sum_{x \in \varphi} Im_x}$
where $rank_y$ is an integer decided by the ranking according to makespan in the first step of step (3) of the above procedure.
In the second phase, after all memeplexes are sorted in the descending order of $Meq_l$, suppose $Meq_1 \ge Meq_2 \ge \cdots \ge Meq_s$.
Memory T is used to store intermediate solutions. The maximum size $|T|_{max}$ is given in advance and set to 200 by experiments. When the number of stored solutions reaches $|T|_{max}$, a solution $x$ can be added into T only when $x$ is better than the worst solution of T, which it then replaces.
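A minimal sketch of this memory update rule (our own illustration):

```cpp
#include <algorithm>
#include <vector>

// Minimal sketch of the bounded memory T: solutions are stored while |T| < |T|_max;
// afterwards a new solution enters only if it beats the worst stored one, which it replaces.
struct Archived { double cmax; /* plus the two strings of the solution */ };

void updateMemory(std::vector<Archived>& T, const Archived& x, size_t tMax = 200) {
    if (T.size() < tMax) { T.push_back(x); return; }
    auto worst = std::max_element(T.begin(), T.end(),
                   [](const Archived& a, const Archived& b){ return a.cmax < b.cmax; });
    if (x.cmax < worst->cmax) *worst = x;   // replace the worst member of T
}

int main() {
    std::vector<Archived> T;
    updateMemory(T, {950.0});
    updateMemory(T, {900.0});
    return 0;
}
```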
Six neighborhood structures are used. $N_1$ is shown below: randomly select a job from the machine $M_k$ with the largest $C_{\max}^k$ and move it to the machine $M_g$ with the smallest $C_{\max}^g$, where $C_{\max}^k$ and $C_{\max}^g$ are the completion times of the last processed jobs on $M_k$ and $M_g$, respectively. $N_2$ is performed in the following way: decide a machine $M_k$ with the largest $C_{\max}^k$ and a job $J_i$ with the largest processing time $p_{ki}$ on $M_k$, randomly choose a machine $M_g$, $g \ne k$, and a job $J_j$ with the largest $p_{gj}$, and exchange $J_i$ and $J_j$ between $M_k$ and $M_g$.
$N_3$ is described as follows: randomly select two machines $M_k$ and $M_g$ and exchange a job $J_i$ with the largest $p_{ki}$ and a job $J_j$ with the largest $p_{gj}$ between these two machines. $N_1$, $N_2$, and $N_3$ act only on the machine assignment string.
$N_4$, $N_5$, and $N_6$ are performed on the scheduling string by swapping two randomly chosen genes, inserting a randomly selected gene into a new randomly decided position, and inverting the genes between two stochastically chosen positions $k_1, k_2$, $k_1 < k_2$, respectively.
Multiple neighborhood search is performed in the following way. For solution $x$, let $u = 1$ and repeat the following steps $V$ times: produce a solution $z \in N_u(x)$, set $u = u + 1$ (let $u = 1$ if $u = 7$), and if $z$ is better than $x$, then replace $x$ with $z$ and set $Im_x = Im_x + 1$.
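The following sketch (our own illustration) shows the cyclic structure of the multiple neighborhood search; only the scheduling-string moves $N_4$–$N_6$ are spelled out, and the evaluation function is a dummy stand-in for the decoder and timing rules of Sections 2 and 4.1.

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <random>
#include <vector>

// Minimal sketch of the multiple neighborhood search: the neighborhoods are
// applied cyclically V times and an improving move is accepted immediately.
struct Solution {
    std::vector<int> machineOf;   // machine assignment string
    std::vector<double> q;        // scheduling string
    double cmax = 0.0;            // makespan of the decoded schedule
    int im = 0;                   // Im_x: number of improvements of this solution
};

// Dummy stand-in for the real decoder plus timing rules.
double evaluate(const Solution& s) {
    double v = 0.0;
    for (size_t j = 0; j < s.q.size(); ++j) v += s.q[j] * double(j + 1);
    return v;
}

void multiNeighborhoodSearch(Solution& x, int V, std::mt19937& gen) {
    std::uniform_int_distribution<int> pos(0, (int)x.q.size() - 1);
    std::vector<std::function<void(Solution&)>> N = {
        // N1-N3 would modify s.machineOf (job moves/exchanges between machines); omitted here
        [&](Solution& s) { std::swap(s.q[pos(gen)], s.q[pos(gen)]); },           // N4: swap
        [&](Solution& s) { int a = pos(gen), b = pos(gen);                       // N5: insert
                           double v = s.q[a]; s.q.erase(s.q.begin() + a);
                           s.q.insert(s.q.begin() + b, v); },
        [&](Solution& s) { int a = pos(gen), b = pos(gen);                       // N6: inverse
                           if (a > b) std::swap(a, b);
                           std::reverse(s.q.begin() + a, s.q.begin() + b + 1); }
    };
    int u = 0;
    for (int t = 0; t < V; ++t) {
        Solution z = x;
        N[u](z);                               // apply the current neighborhood
        u = (u + 1) % (int)N.size();           // cycle N4 -> N5 -> N6 -> N4 ...
        z.cmax = evaluate(z);
        if (z.cmax < x.cmax) { x = z; ++x.im; }   // accept improvement, count Im_x
    }
}

int main() {
    std::mt19937 gen(3);
    Solution x{{0, 1, 0, 1}, {0.7, 0.2, 0.9, 0.4}};
    x.cmax = evaluate(x);
    multiNeighborhoodSearch(x, 12, gen);
    std::printf("C_max = %.2f, Im_x = %d\n", x.cmax, x.im);
    return 0;
}
```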
Global search is executed in the same way as in the first phase.
In the existing SFLA [29,32,33], a new population P is constructed by using the s evolved memeplexes. In this study, new population shuffling is done in the following way: the $\gamma$ best solutions of T and the s memeplexes are added into the new population P, and the $\gamma$ worst solutions of P are removed. $\gamma = 0.1 \times |T|_{max}$ is set by experiments.
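A minimal sketch of this shuffling step (our own illustration, with solutions represented only by their makespans):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Minimal sketch of the new population shuffling: the gamma best solutions of
// memory T replace the gamma worst solutions of the shuffled population P.
void shuffleWithMemory(std::vector<double>& P, std::vector<double> T, int gamma) {
    std::sort(T.begin(), T.end());                       // best (smallest makespan) first
    std::sort(P.begin(), P.end());                       // worst members are at the back
    for (int i = 0; i < gamma && i < (int)T.size(); ++i)
        P[P.size() - 1 - i] = T[i];                      // replace a worst member of P
}

int main() {
    std::vector<double> P = {910, 950, 990, 1020};
    std::vector<double> T = {905, 930, 970};
    shuffleWithMemory(P, T, 2);   // gamma = 0.1 * |T|_max in the paper; 2 here
    for (double c : P) std::printf("%.0f ", c);
    std::printf("\n");
    return 0;
}
```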
Some worst solutions of P can be updated by solutions of T , that is, solutions of P can be improved by memeplex search or shuffling.
In the second phase, the set $\Theta$ is composed of good memeplexes with better quality than the other memeplexes. In the search process for a good memeplex, global search of the optimization object $x$ is implemented according to $act_x$, and then multiple neighborhood search acts on the solutions in $\varphi$; only global search is executed for the other memeplexes. Moreover, different parameters $R_1$ and $R_2$, $R_1 \ne R_2$, are used; as a result, a differentiated search is implemented.

4.3. Algorithm Description

The detailed steps of DSFLA are shown below.
(1)
Initialization. Randomly produce initial population P with N solutions, and let initial T be empty.
(2)
Population division. Execute the search process within each memeplex.
(3)
Perform population shuffling.
(4)
If the stopping condition of the first phase is not met, then go to step (2).
(5)
Execute the second phase until the stopping condition is met.
The computational complexity is $O(N \times R_1 \times L)$, where $L$ is the number of repetitions of steps (2)–(3).
Unlike the previous SFLA [29,32,33], DSFLA has the following features. (1) Quality evaluation is done for all memeplexes according to solution quality and evolution quality, and all memeplexes are categorized into two types: good memeplexes and other memeplexes. (2) The differentiated search is implemented with different search strategies and parameters for the two types of memeplexes; as a result, the exploration ability is intensified, and the possibility of falling into local optima diminishes greatly.

5. Computational Experiments

Extensive experiments were conducted on a set of instances to test the performance of DSFLA for UPMSP with deteriorating PM and SDST. All experiments were implemented using Microsoft Visual C++ 2019 and run on a PC with a 2.30 GHz CPU and 8.0 GB RAM.

5.1. Instances and Comparative Algorithms

We used 70 instances with $n \in \{15, 20, 25, 30, 35\}$ and $m \in \{2, 4, 6, 8\}$, or $n \in \{50, 70, 100, 120, 150, 170, 200, 220, 250, 300\}$ and $m \in \{10, 15, 20, 25, 30\}$; $p_{ki} \in [50, 70]$, setup times from $[5, 10]$, $c_k = 1$, $d_k = 0.1$, and $u_k = \max_{1 \le i \le n}\{p_{ki} + s_{k0i} + s_{ki0}\}$ for machine $M_k$.
Mir and Rezaeian [48] proposed a hybrid particle swarm optimization and genetic algorithm (HPSOGA) for UPMSP. It can be applied to our UPMSP after the decoding procedure of DSFLA is adopted, and thus it was selected as a comparative algorithm. Lu et al. [11] presented ABC-TS for an unrelated parallel-batching machine scheduling problem with deteriorating jobs and maintenance. Avalos-Rosales et al. [25] applied a multi-start algorithm (MSA) to solve UPMSP with PM and SDST. These two algorithms can be directly used to solve UPMSP with SDST and deteriorating PM and possess good performance; therefore, they were chosen as comparative algorithms.
The general SFLA in Section 3 was also implemented, in which the global search between two solutions is performed in the same way as in the first phase of DSFLA. The comparison between DSFLA and SFLA shows the effect of the main strategies of DSFLA.

5.2. Parameter Settings

In DSFLA, the stopping condition is a maximum number max_it of objective function evaluations. We found that DSFLA can converge fully when max_it is $10^5$. We also tested this condition for the comparative algorithms and found that the above max_it was appropriate; thus, this stopping condition was used for all algorithms.
DSFLA possesses other main parameters: N, s, $R_1$, $R_2$, V, and max_it1, where max_it1 denotes the maximum number of objective function evaluations in the first phase.
The Taguchi method [49] was used to decide the settings of these parameters. Instance 150 × 20 was selected for parameter tuning. The levels of each parameter are shown in Table 3. There were 27 parameter combinations according to the orthogonal array $L_{27}(3^6)$.
DSFLA with each combination was run 10 times independently on the chosen instance. The results of MIN and the S/N ratio are shown in Figure 2, in which the S/N ratio is defined as $-10 \times \log_{10}(MIN^2)$ and MIN is the best solution found in the 10 runs. As shown in Figure 2, DSFLA with the combination $N = 80$, $s = 5$, $R_1 = 50$, $R_2 = 100$, $V = 240$, and max_it1 $= 10^4$ obtained better results than DSFLA with the other combinations; therefore, this combination was adopted.
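For reference, the smaller-the-better S/N ratio used above can be computed as in the following sketch (our own illustration with a hypothetical MIN value).

```cpp
#include <cmath>
#include <cstdio>

// Smaller-the-better S/N ratio, -10 * log10(MIN^2), for the best makespan MIN
// found in the 10 runs of one parameter combination.
double snRatio(double minMakespan) {
    return -10.0 * std::log10(minMakespan * minMakespan);
}

int main() {
    std::printf("S/N = %.3f dB\n", snRatio(985.0));  // hypothetical MIN value
    return 0;
}
```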

5.3. Results and Analyses

DSFLA was compared with SFLA, ABC-TS, HPSOGA, and MSA. All parameters of ABC-TS, HPSOGA, and MSA except the stopping conditions were directly adopted from the original references. SFLA does not have the parameters $R_1$, $R_2$, V, and max_it1; its other parameters were given the same settings as in DSFLA. Each algorithm was run 10 times independently for each instance. Table 4, Table 5 and Table 6 show the computational results of all algorithms, in which AVG is the average value of the 10 elite solutions obtained in the 10 runs and SD is the standard deviation of these 10 elite solutions.
Table 7 describes the computational times of DSFLA and its comparative algorithms, in which the unit of time is seconds. Figure 3 gives a box plot of all algorithms, in which DM (DA, DS) indicates the MIN (AVG, SD) of an algorithm minus the MIN (AVG, SD) of DSFLA. Figure 4 shows the convergence curves for two instances.
As shown in Table 4, Table 5 and Table 6, SFLA could not produce a better MIN than DSFLA on any instance and obtained a bigger MIN than DSFLA by at least 100 for most instances; DSFLA thus had better convergence performance than SFLA. DSFLA also generated a smaller AVG than SFLA on all instances, and the differences in AVG between the two algorithms were significant.
DSFLA performed better than SFLA in terms of average performance. The SD of SFLA was also worse than that of DSFLA for most instances, so SFLA was inferior to DSFLA regarding search stability. Overall, DSFLA performed notably better than SFLA, a conclusion that can also be drawn from Figure 3; thus, the new strategies, such as the differentiated search, had a positive impact on the performance of DSFLA.
It can be seen from Table 4 that DSFLA produced a smaller MIN than HPSOGA and ABC-TS for nearly all instances and generated a worse MIN than MSA for only 11 instances; that is, DSFLA converged better than its comparative algorithms. The convergence performance differences between DSFLA and its comparative algorithms can also be seen from the box plot and convergence curves in Figure 3 and Figure 4.
The results in Table 5 show that DSFLA obtained a better AVG than HPSOGA and ABC-TS for nearly all instances and possessed a smaller AVG than MSA for most instances; DSFLA thus had better average performance than its three comparative algorithms. This conclusion can also be drawn from Figure 3. Table 6 and Figure 3 reveal that DSFLA had better stability than its three comparative algorithms; thus, we concluded that DSFLA can provide promising results for solving UPMSP with deteriorating PM and SDST.
The good performances of DSFLA mainly resulted from its new strategies in the second phase. The differentiated search was implemented by memeplex quality evaluation and different search combinations of global search and multiple neighborhood search. These strategies effectively intensified the exploration ability and avoided the search falling into local optima. As a result, a high search efficiency was obtained; thus, DSFLA is a very competitive method for the considered UPMSP.

6. Conclusions

UPMSP with various processing constraints has been extensively considered. This paper addressed UPMSP with deteriorating PM and SDST and provided a new algorithm named DSFLA to minimize the makespan. In DSFLA, there are two search phases; memeplex quality evaluation is performed, and differentiated search processes between the two kinds of memeplexes are implemented in the second phase. A new population shuffling was also presented. A number of computational experiments were conducted, and the computational results demonstrated that DSFLA has promising advantages, such as good convergence and stability, in solving the considered UPMSP.
UPMSP with at least two actual constraints, such as PM, SDST, and learning effects, may attract attention. We will continue to focus on these UPMSP by using meta-heuristics, including ABC and the imperialist competitive algorithm. Uncertainty often cannot be neglected and should be added into scheduling problems; UPMSP with uncertainty, energy-related elements, and other factors is also a future topic. Reinforcement learning has been used to solve scheduling problems, and we will pay attention to meta-heuristics with reinforcement learning for UPMSP with various processing constraints.

Author Contributions

Conceptualization, D.L.; software, T.Y.; writing—original draft preparation, D.L.; writing—review and editing, D.L. Both authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by the National Natural Science Foundation of China (61573264).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kim, Y.-H.; Kim, R.-S. Insertion of new idle time for unrelated parallel machine scheduling with job splitting and machine breakdown. Comput. Ind. Eng. 2020, 147, 106630.
2. Wang, X.M.; Li, Z.T.; Chen, Q.X.; Mao, N. Meta-heuristics for unrelated parallel machines scheduling with random rework to minimize expected total weighted tardiness. Comput. Ind. Eng. 2020, 145, 106505.
3. Ding, J.; Shen, L.J.; Lu, Z.P.; Peng, B. Parallel machine scheduling with completion-time based criteria and sequence-dependent deterioration. Comput. Oper. Res. 2019, 103, 35–45.
4. Tavana, M.; Zarook, Y.; Santos-Arteaga, F.J. An integrated three-stage maintenance scheduling model for unrelated parallel machines with aging effect and multimaintenance activities. Comput. Ind. Eng. 2015, 83, 226–236.
5. Yang, D.L.; Cheng, T.C.E.; Yang, S.J.; Hsu, C.J. Unrelated parallel-machine scheduling with aging effects and multi-maintenance activities. Comput. Oper. Res. 2013, 39, 1458–1464.
6. Wang, S.J.; Liu, M. Multi-objective optimization of parallel machine scheduling integrated with multi-resources preventive maintenance planning. J. Manuf. Syst. 2015, 37, 182–192.
7. Gara-Ali, A.; Finke, G.; Espinouse, M.L. Parallel-machine scheduling with maintenance: Praising the assignment problem. Eur. J. Oper. Res. 2016, 252, 90–97.
8. Lei, D.M.; Liu, M.Y. An artificial bee colony with division for distributed unrelated parallel machine scheduling with preventive maintenance. Comput. Ind. Eng. 2020, 141, 106320.
9. Cheng, T.C.E.; Hsu, C.-J.; Yang, D.-L. Unrelated parallel-machine scheduling with deteriorating maintenance activities. Comput. Ind. Eng. 2011, 60, 602–605.
10. Hsu, C.-J.; Ji, M.; Guo, J.Y.; Yang, D.-L. Unrelated parallel-machine scheduling problems with aging effects and deteriorating maintenance activities. Inf. Sci. 2013, 253, 163–169.
11. Lu, S.J.; Liu, X.B.; Pei, J.; Thai, M.T.; Pardalos, P.M. A hybrid ABC-TS algorithm for the unrelated parallel-batching machines scheduling problem with deteriorating jobs and maintenance activity. Appl. Soft Comput. 2018, 66, 168–182.
12. Allahverdi, A.; Gupta, J.N.; Aldowaisan, T. A review of scheduling research involving setup considerations. Omega 1999, 27, 219–239.
13. Parker, R.G.; Deana, R.H.; Holmes, R.A. On the use of a vehicle routing algorithm for the parallel processor problem with sequence dependent changeover costs. IIE Trans. 1977, 9, 155–160.
14. Kurz, M.; Askin, R. Heuristic scheduling of parallel machines with sequence-dependent set-up times. Int. J. Prod. Res. 2001, 39, 3747–3769.
15. Arnaout, J.P.; Rabad, G.; Musa, R. A two-stage ant colony optimization algorithm to minimize the makespan on unrelated parallel machines with sequence-dependent setup times. J. Intell. Manuf. 2010, 21, 693–701.
16. Vallada, E.; Ruiz, R. A genetic algorithm for the unrelated machine scheduling problem with sequence dependent setup times. Eur. J. Oper. Res. 2011, 211, 612–622.
17. Lin, S.W.; Ying, K.-C. ABC-based manufacturing scheduling for unrelated parallel machines with machine-dependent and job sequence-dependent setup times. Comput. Oper. Res. 2014, 51, 172–181.
18. Caniyilmaz, E.; Benli, B.; IIkay, M.S. An artificial bee colony algorithm approach for unrelated parallel machine scheduling with processing set restrictions, job sequence-dependent setup times, and due date. Int. J. Adv. Manuf. Technol. 2015, 77, 2105–2115.
19. Diana, R.O.M.; Filho, M.F.D.F.; Souza, S.R.D.; Vitor, J.F.D.A. An immune-inspired algorithm for an unrelated parallel machines scheduling problem with sequence and machine dependent setup-times for makespan minimisation. Neurocomputing 2015, 163, 94–105.
20. Wang, L.; Zheng, X.L. A hybrid estimation of distribution algorithm for unrelated parallel machine scheduling with sequence-dependent setup times. IEEE-CAA J. Autom. 2016, 3, 235–246.
21. Ezugwu, A.E.; Akutsah, F. An improved firefly algorithm for the unrelated parallel machines scheduling problem with sequence-dependent setup times. IEEE Acc. 2018, 4, 54459–54478.
22. Fanjul-Peyro, L.; Ruiz, R.; Perea, F. Reformulations and an exact algorithm for unrelated parallel machine scheduling problems with setup times. Comput. Oper. Res. 2019, 101, 173–182.
23. Bektur, G.; Sarac, T. A mathematical model and heuristic algorithms for an unrelated parallel machine scheduling problem with sequence-dependent setup times, machine eligibility restrictions and a common server. Comput. Oper. Res. 2019, 103, 46–63.
24. Cota, L.P.; Coelho, V.N.; Guimaraes, F.G.; Souza, M.F. Bi-criteria formulation for green scheduling with unrelated parallel machines with sequence-dependent setup times. Int. Trans. Oper. Res. 2021, 28, 996–1007.
25. Avalos-Rosales, O.; Angel-Bello, F.; Alvarez, A.; Cardona-Valdes, Y. Including preventive maintenance activities in an unrelated parallel machine environment with dependent setup times. Comput. Ind. Eng. 2018, 123, 364–377.
26. Wang, M.; Pan, G.H. A novel imperialist competitive algorithm with multi-elite individuals guidance for multi-objective unrelated parallel machine scheduling problem. IEEE Acc. 2019, 7, 121223–121235.
27. Eusuff, M.M.; Lansey, K.E.; Pasha, F. Shuffled frog-leaping algorithm: A memetic meta-heuristic for discrete optimization. Eng. Optim. 2006, 38, 129–154.
28. Rahimi-Vahed, A.; Mirzaei, A.H. Solving a bi-criteria permutation flow-shop problem using shuffled frog-leaping algorithm. Appl. Soft Comput. 2008, 12, 435–452.
29. Pan, Q.K.; Wang, L.; Gao, L.; Li, J.Q. An effective shuffled frog-leaping algorithm for lot-streaming flow shop scheduling problem. Int. J. Adv. Manuf. Technol. 2011, 52, 699–713.
30. Li, J.Q.; Pan, Q.K.; Xie, S.X. An effective shuffled frog-leaping algorithm for multi-objective flexible job shop scheduling problems. Appl. Math. Comput. 2012, 218, 9353–9371.
31. Xu, Y.; Wang, L.; Liu, M.; Wang, S.Y. An effective shuffled frog-leaping algorithm for hybrid flow-shop scheduling with multiprocessor tasks. Int. J. Adv. Manuf. Technol. 2013, 68, 1529–1537.
32. Lei, D.M.; Guo, X.P. A shuffled frog-leaping algorithm for hybrid flow shop scheduling with two agents. Expert Syst. Appl. 2015, 42, 9333–9339.
33. Lei, D.M.; Guo, X.P. A shuffled frog-leaping algorithm for job shop scheduling with outsourcing options. Int. J. Prod. Res. 2016, 54, 4793–4804.
34. Lei, D.M.; Zheng, Y.L.; Guo, X.P. A shuffled frog-leaping algorithm for flexible job shop scheduling with the consideration of energy consumption. Int. J. Prod. Res. 2017, 55, 3126–3140.
35. Lei, D.M.; Wang, T. Solving distributed two-stage hybrid flowshop scheduling using a shuffled frog-leaping algorithm with memeplex grouping. Eng. Optim. 2020, 52, 1461–1474.
36. Cai, J.C.; Zhou, R.; Lei, D.M. Dynamic shuffled frog-leaping algorithm for distributed hybrid flow shop scheduling with multiprocessor tasks. Eng. Appl. Artif. Intell. 2020, 90, 103540.
37. Kong, M.; Liu, X.B.; Pei, J.; Pardalos, P.M.; Mladenovic, N. Parallel-batching scheduling with nonlinear processing times on a single and unrelated parallel machines. J. Glob. Optim. 2020, 78, 693–715.
38. Lu, K.; Ting, L.; Keming, W.; Hanbing, Z.; Makoto, T.; Bin, Y. An improved shuffled frog-leaping algorithm for flexible job shop scheduling problem. Algorithms 2015, 8, 19–31.
39. Fernez, G.S.; Krishnasamy, V.; Kuppusamy, S.; Ali, J.S.; Ali, Z.M.; El-Shahat, A.; Abdel Aleem, S.H. Optimal dynamic scheduling of electric vehicles in a parking lot using particle swarm optimization and shuffled frog leaping algorithm. Energies 2020, 13, 6384.
40. Yang, W.; Ho, S.L.; Fu, W. A modified shuffled frog leaping algorithm for the topology optimization of electromagnet devices. Appl. Sci. 2020, 10, 6186.
41. Moayedi, H.; Bui, D.T.; Thi Ngo, P.T. Shuffled frog leaping algorithm and wind-driven optimization technique modified with multilayer perceptron. Appl. Sci. 2020, 10, 689.
42. Hsu, H.-P.; Chiang, T.-L. An improved shuffled frog-leaping algorithm for solving the dynamic and continuous berth allocation problem (DCBAP). Appl. Sci. 2019, 9, 4682.
43. Mora-Melia, D.; Iglesias-Rey, P.L.; Martínez-Solano, F.J.; Muñoz-Velasco, P. The efficiency of setting parameters in a modified shuffled frog leaping algorithm applied to optimizing water distribution networks. Water 2016, 8, 182.
44. Amiri, B.; Fathian, M.; Maroosi, A. Application of shuffled frog-leaping algorithm on clustering. Int. J. Adv. Manuf. Technol. 2009, 45, 199–209.
45. Elbeltagi, E.; Hegazy, T.; Grierson, D. A modified shuffled frog-leaping optimization algorithm: Applications to project management. Struct. Infrastruct. Eng. 2007, 3, 53–60.
46. Gao, J.Q.; He, G.X.; Wang, Y.S. A new parallel genetic algorithm for solving multiobjective scheduling problems subjected to special process constraint. Int. J. Adv. Manuf. Technol. 2009, 43, 151–160.
47. Afzalirad, M.; Rezaein, J. A realistic variant of bi-objective unrelated parallel machine scheduling problem: NSGA-II and MOACO approaches. Appl. Soft Comput. 2017, 50, 109–123.
48. Mir, M.S.S.; Rezaeian, J. A robust hybrid approach based on particle swarm optimization and genetic algorithm to minimize the total machine load on unrelated parallel machines. Appl. Soft Comput. 2016, 41, 488–504.
49. Taguchi, G. Introduction to Quality Engineering; Asian Productivity Organization: Tokyo, Japan, 1986.
Figure 1. A schedule of the example.
Figure 2. Results of MIN and the S/N ratio.
Figure 3. Box plot of all algorithms.
Figure 4. Convergence curves of DSFLA and its comparative algorithms.
Table 1. Processing time.

Job | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
$M_1$ | 56 | 57 | 51 | 30 | 70 | 38 | 42 | 62
$M_2$ | 58 | 55 | 34 | 69 | 34 | 57 | 69 | 50
Table 2. Setup times $s_{1jk}$ and $s_{2jk}$.

$s_{1jk}$ (machine $M_1$):
j\k | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
0 | 0 | 6 | 10 | 6 | 9 | 8 | 8 | 6 | 7
1 | 8 | 0 | 10 | 5 | 7 | 10 | 8 | 10 | 7
2 | 8 | 5 | 0 | 6 | 7 | 9 | 8 | 5 | 8
3 | 7 | 7 | 9 | 0 | 7 | 5 | 9 | 9 | 7
4 | 5 | 7 | 7 | 10 | 0 | 6 | 10 | 6 | 9
5 | 8 | 10 | 6 | 7 | 10 | 0 | 9 | 6 | 5
6 | 8 | 7 | 7 | 9 | 8 | 6 | 0 | 8 | 10
7 | 6 | 9 | 8 | 9 | 3 | 9 | 7 | 0 | 10
8 | 8 | 10 | 10 | 5 | 7 | 5 | 5 | 6 | 0

$s_{2jk}$ (machine $M_2$):
j\k | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
0 | 0 | 10 | 9 | 6 | 9 | 5 | 8 | 8 | 7
1 | 5 | 0 | 7 | 9 | 9 | 7 | 9 | 8 | 6
2 | 8 | 7 | 0 | 8 | 10 | 6 | 7 | 7 | 5
3 | 10 | 6 | 7 | 0 | 6 | 10 | 8 | 5 | 5
4 | 6 | 10 | 9 | 5 | 0 | 7 | 9 | 10 | 7
5 | 10 | 7 | 10 | 9 | 9 | 0 | 8 | 6 | 6
6 | 6 | 6 | 10 | 7 | 7 | 6 | 0 | 5 | 8
7 | 6 | 7 | 9 | 5 | 8 | 8 | 6 | 0 | 8
8 | 10 | 9 | 8 | 6 | 7 | 6 | 7 | 9 | 0
Table 3. Parameters and their levels.

Parameter | Factor Level 1 | Factor Level 2 | Factor Level 3
N | 60 | 80 | 100
s | 4 | 5 | 10
$R_1$ | 30 | 50 | 70
$R_2$ | 80 | 100 | 120
V | 220 | 240 | 260
max_it1 | 5000 | 10,000 | 15,000
Table 4. Computational results of five algorithms on the metric MIN.
InstanceDSFLASFLAHPSOGAABC-TSMSAInstanceDSFLASFLAHPSOGAABC-TSMSA
15 × 2942963946976947120 × 1018892085190821881889
15 × 4365368379381375120 × 1510051151116514031008
15 × 6251259252263256120 × 20667812813995667
15 × 8153156164165157120 × 25510645646822521
20 × 213431368134013951404120 × 30391519516677388
20 × 4500509504514514150 × 1028543097285835432854
20 × 6367375367379368150 × 1514131617161418841413
20 × 8258260260267258150 × 20985115011661184994
25 × 220312076207620612078150 × 256738258101008673
25 × 4779794804794755150 × 30526664673821521
25 × 6491499498499493170 × 1036503977365045343654
25 × 8365367365376368170 × 1518671881187724221872
30 × 227302765279628042736170 × 2011771358120716131176
30 × 4943962963979952170 × 2582798910101179827
30 × 6515517517623517170 × 30682820824988668
30 × 8380382380391382200 × 1051715721517656385179
35 × 238883978393139453933200 × 1524802491251327532485
35 × 411621164116311751164200 × 2014181620163618691419
35 × 6645653650665657200 × 2510111196120413891004
35 × 8494504500517499200 × 308229919991168829
50 × 10514524523667516220 × 1064587046713179366459
50 × 15376379381507379220 × 1528533138285639892865
50 × 20260268273368260220 × 2016501894189724431652
50 × 25162262259276163220 × 2511841405160016331202
50 × 30159160170271160220 × 30994116611911397983
70 × 10820829828986820250 × 1089089685978496808916
70 × 15511520519658514250 × 1536144020407640793651
70 × 20382389386520379250 × 2021752440248827772177
70 × 25267375276395269250 × 2514181841188818911420
70 × 30263269273389264250 × 3011891389142216151173
100 × 1013981566140116471399300 × 1014,91916,33314,92416,35914,919
100 × 158148298241004819300 × 1551755726638762415175
100 × 20524650643818520300 × 2028593524361535982861
100 × 25390513502652393300 × 2519082484249525031908
100 × 30382393395523376300 × 3014201882190921451422
Table 5. Computational results of five algorithms on the metric AVG.
InstanceDSFLASFLAHPSOGAABC-TSMSAInstanceDSFLASFLAHPSOGAABC-TSMSA
15 × 2942963946976947120 × 1018892098191021981890
15 × 4365368380382375120 × 1510051178116814191008
15 × 6251260254267256120 × 206698198181007668
15 × 8153160165173158120 × 25511662658826523
20 × 213431368134213951404120 × 30393523522681389
20 × 4500509506517514150 × 1028553103286135692856
20 × 6367375369383369150 × 1514131626162419051414
20 × 8258261262268258150 × 20993116211681195996
25 × 220312076207620622078150 × 256738318181026673
25 × 4779794804797755150 × 30526668691826521
25 × 6491499498505493170 × 1036533984365045393655
25 × 8365367365381368170 × 1518681896188124331872
30 × 227302765279628052736170 × 2011791372121816191185
30 × 4943963965983953170 × 25827100110131191828
30 × 6515518518625518170 × 306848258371005669
30 × 8380383384400383200 × 1051725723517756785179
35 × 238883978393139463934200 × 1524802504252027832485
35 × 411621164116311771164200 × 2014211641164418861420
35 × 6646654651667658200 × 2510121202121514011005
35 × 8495505503519501200 × 3082799810071188829
50 × 10515524525674517220 × 1064597049713379476459
50 × 15377379384515380220 × 1528533141285839952890
50 × 20261268275385261220 × 2016501912191924591653
50 × 25163262264280165220 × 2511851410161616451204
50 × 30159167171273160220 × 30996117611961410984
70 × 108218298311003822250 × 1089099691978897018918
70 × 15512520521667515250 × 1536154029408440873651
70 × 20382389391524380250 × 2021772471249827842177
70 × 25268375278396270250 × 2514191855189919061421
70 × 30263269276392267250 × 3011901404142916211173
100 × 1014001566140316521400300 × 1014,92116,33514,92716,36414,920
100 × 158158298271019823300 × 1551765734638862695175
100 × 20526650651825521300 × 2028603546361836272862
100 × 25390513513667394300 × 2519092502250225081909
100 × 30384393397529378300 × 3014211897191621651423
Table 6. Computational results of five algorithms on the metric SD.
InstanceDSFLASFLAHPSOGAABC-TSMSAInstanceDSFLASFLAHPSOGAABC-TSMSA
15 × 20.000.000.000.000.60120 × 100.008.721.2010.060.75
15 × 40.000.000.900.460.66120 × 151.509.402.845.830.00
15 × 60.651.201.772.220.30120 × 200.873.505.105.850.66
15 × 81.502.211.042.301.57120 × 250.665.8311.312.751.64
20 × 20.000.002.100.000.46120 × 301.472.215.763.100.94
20 × 40.000.002.292.320.49150 × 101.253.694.279.701.86
20 × 60.000.431.731.611.75150 × 150.807.4510.0611.950.54
20 × 80.640.602.280.830.54150 × 202.805.702.495.551.18
25 × 20.000.000.000.801.20150 × 250.003.243.757.001.31
25 × 40.000.000.001.040.00150 × 300.002.467.353.100.30
25 × 60.400.000.302.920.00170 × 100.985.542.022.762.19
25 × 80.000.300.002.270.40170 × 150.498.594.236.990.00
30 × 20.000.000.000.400.00170 × 200.605.8410.415.122.97
30 × 40.000.922.401.601.36170 × 251.205.243.674.681.72
30 × 60.000.661.031.201.09170 × 301.863.8711.796.680.64
30 × 80.630.933.784.611.81200 × 101.101.261.1513.373.25
35 × 20.000.000.000.490.64200 × 150.309.485.8710.910.30
35 × 40.000.300.003.150.60200 × 201.9910.416.595.951.36
35 × 61.180.840.231.430.30200 × 250.303.818.626.430.50
35 × 80.770.913.221.751.94200 × 303.335.295.417.420.78
50 × 100.492.681.943.680.80220 × 100.401.751.846.780.30
50 × 150.401.523.034.760.87220 × 151.022.963.163.1111.20
50 × 200.151.652.195.800.38220 × 200.308.568.7711.273.04
50 × 252.701.645.192.261.89220 × 251.943.227.755.722.69
50 × 300.001.621.161.710.49220 × 300.485.395.056.950.98
70 × 101.023.751.119.892.12250 × 101.193.814.677.660.81
70 × 151.102.562.944.611.76250 × 150.464.725.904.360.00
70 × 200.491.924.852.871.30250 × 201.0813.198.654.070.90
70 × 252.772.031.011.760.89250 × 252.107.699.609.521.76
70 × 300.652.002.462.091.36250 × 300.496.708.727.080.49
100 × 101.855.681.202.170.54300 × 101.372.273.414.040.66
100 × 151.501.902.705.063.36300 × 150.465.143.679.680.30
100 × 201.302.835.444.521.25300 × 201.568.652.3914.591.81
100 × 250.352.2511.525.511.34300 × 250.5011.934.634.571.40
100 × 300.533.761.593.951.46300 × 300.507.123.736.801.04
Table 7. The computational times of DSFLA, HPSOGA, ABC-TS, and MSA (in seconds).
InstanceDSFLAHPSOGAABC-TSMSAInstanceDSFLAHPSOGAABC-TSMSA
15 × 20.250.190.630.10120 × 103.563.8812.423.04
15 × 40.220.200.350.17120 × 153.283.777.534.50
15 × 60.240.230.270.20120 × 202.943.735.705.64
15 × 80.330.230.230.20120 × 252.773.864.697.43
20 × 20.200.311.220.28120 × 302.744.024.779.53
20 × 40.280.400.590.30150 × 104.795.1020.414.90
20 × 60.270.400.430.36150 × 154.125.2914.935.91
20 × 80.220.400.360.29150 × 203.825.3810.087.10
25 × 20.240.412.100.31150 × 253.275.557.7111.27
25 × 40.190.460.930.38150 × 303.555.537.1812.94
25 × 60.270.470.660.38170 × 105.606.6428.425.26
25 × 80.200.470.550.40170 × 154.806.4218.546.23
30 × 20.220.483.250.41170 × 204.067.3012.978.32
30 × 40.370.491.360.46170 × 253.598.6210.1710.86
30 × 60.370.570.940.57170 × 303.958.209.0612.14
30 × 80.230.560.730.58200 × 107.3110.3339.297.85
35 × 20.330.555.811.36200 × 155.768.4825.829.22
35 × 40.330.651.971.34200 × 205.008.5719.349.60
35 × 60.270.621.291.30200 × 254.648.8814.6210.07
35 × 80.350.701.211.39200 × 304.779.1113.0311.11
50 × 101.850.972.201.72220 × 108.5010.6648.668.03
50 × 151.631.011.291.93220 × 156.708.9833.079.93
50 × 201.701.201.080.86220 × 205.648.9424.6010.01
50 × 251.581.216.041.81220 × 255.039.0318.6311.16
50 × 301.800.965.862.24220 × 305.209.0616.3011.54
70 × 102.221.623.452.27250 × 1010.1711.9169.3310.14
70 × 152.011.782.492.43250 × 158.1610.0842.5210.95
70 × 202.061.781.982.59250 × 206.9410.4531.3511.06
70 × 251.981.831.742.73250 × 255.8610.4025.2510.96
70 × 302.122.021.632.02250 × 306.1110.5922.3611.62
100 × 103.062.637.952.49300 × 1013.8515.42103.5813.51
100 × 152.672.785.272.67300 × 1510.3114.7779.3814.03
100 × 202.612.743.942.37300 × 208.8714.2359.0814.16
100 × 252.463.193.353.01300 × 257.8814.0254.3914.58
100 × 302.503.243.093.62300 × 307.7314.9956.8515.04