1. Introduction
Multi-objective optimization problems comprise several conflicting objectives to be optimized simultaneously, where improving one objective causes another to deteriorate [1,2,3]. To deal with this type of problem, multi-objective optimization algorithms (metaheuristics) are widely applied by researchers, according to recent studies in the literature [4,5]. Multi-objective metaheuristics attempt to find a set of solutions that balances the trade-off between the problem objectives. Therefore, the goal of metaheuristics is to extract a set of non-dominated solutions, the Pareto-front, that optimizes all objectives of the problem [6,7]. A solution is non-dominated if no other solution is at least as good in all objectives and strictly better in at least one [2,6]. A sample Pareto-front for the minimization of a bi-objective problem is shown in Figure 1. The represented Pareto-front comprises seven non-dominated and nine dominated solutions out of a total of 16 solutions [8].
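To make the dominance definition concrete, the following is a minimal Python sketch for a bi-objective minimization problem; the sample objective vectors are hypothetical and are not the 16 solutions of Figure 1.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized):
    a is no worse than b in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

points = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (8, 2), (9, 5)]
front = pareto_front(points)  # (3, 8) and (9, 5) are dominated and drop out
```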
Multi-objective task graph scheduling is an NP-hard problem that plays a significant role in heterogeneous distributed systems. In a task graph scheduling problem, the goal is to distribute all tasks of an acyclic graph (a parallel program) over the processors in such a way that all objective functions are optimized. The solution is expected to optimize all scheduling objectives, such as flow time, reliability, etc., simultaneously [9,10]. A detailed problem definition is presented in Section 2.
A wide range of algorithms has been developed in the literature to solve the task graph scheduling problem, which indicates the great importance of the problem in engineering applications. The state-of-the-art algorithms in the literature mostly apply soft computing methods and metaheuristics to solve the problem [11,12,13,14]. For instance, SGA and EP methods were developed in [14], and MOGA and MOEP methods were applied in [15]. Moreover, HEFT-NSGA, MFA, weighted-sum MOEP, hybrid algorithms, and a multi-agent system were developed in [8,16,17,18]. It can be seen that, in the majority of the state-of-the-art literature, a simple metaheuristic or a weighted-sum method has been used. State-of-the-art methods are listed in Section 3, which also reviews several new evolutionary algorithms proposed in recent literature.
This paper provides a novel hybrid method combining an improved multi-objective differential evolution (MODE) method and variable neighborhood search (VNS) to schedule task graphs in distributed systems [19]. The novelty of the proposed hybrid method lies in improving MODE and hybridizing it with the VNS approach. The proposed method improves the performance of MODE by applying an appropriate solution representation as well as effective selection, crossover, and mutation operators. Likewise, the VNS algorithm is applied to improve the extracted Pareto-front, speed up the evolution process, and strengthen the search for more promising parts of the search space. In the modified version of MODE, the selection operator is more effective because roulette-wheel selection based on dominance ranks replaces fully random selection. The dominance rank of a solution is the number of solutions dominating it, so better solutions have lower ranks; lower ranks are therefore arranged to occupy larger parts of the roulette wheel to increase their selection probability. Likewise, effective mutation and crossover operators are proposed in this paper to speed up the evolution process and increase the quality of the derived Pareto-front. The proposed mutation finds more promising portions of the search space because it modifies both the task order and the processors without breaking the feasibility of the solution. Details of the mutation and crossover operators are given in Section 4. In the modified MODE, the non-dominated solutions found so far (the Pareto-front) are kept in an archive that is updated at the end of each MODE loop. Meanwhile, the proposed method applies variable neighborhood search (VNS) over the archive after each update. This technique allows more exploration and exploitation of the archived solutions to make them more accurate. However, to keep VNS from becoming time-consuming, it is applied over a maximum of 10 solutions in the archive, with ten iterations of the inner loop. A description of the VNS method is given in Section 4.
Apart from being a straightforward optimization method, DE is also robust and powerful. Like many other optimization methods, DE operates based on a set of parameters and several operators. The aim of optimization methods is to explore a high-quality Pareto-front (PF) in an acceptable time while preventing premature convergence to local optima. Clearly, the quality of the operators as well as the solution representation scheme affects the ability of DE to find a better PF and converge faster. Therefore, there is a rich literature on improving the performance of DE [20]. Likewise, hybridizing DE with other methods is another way to strengthen DE in discovering better PFs [21,22,23]. In our proposed method for this well-known scheduling problem, both operator improvement and hybridization are applied to obtain a robust hybrid system that deals with the objective functions (makespan, reliability, and flow time).
The effectiveness and performance of the proposed hybrid method are evaluated against well-known benchmarks drawn from the state-of-the-art literature. In addition, the values of the spacing and hyper-volume metrics are calculated. Furthermore, the Wilcoxon signed-rank test is applied to carry out pairwise statistical tests over the obtained results. According to all findings and test results, the proposed method exceeds all state-of-the-art methods in terms of performance and quality of the objective values.
As future work, different optimization problems, e.g., task scheduling in cloud computing, digital twins, and the Internet of Things [24,25,26], can be solved by the proposed method. In addition, it is planned to replace the MODE algorithm with the recently proposed optimization methods mentioned in Section 3 and observe how they affect the performance of the system [27,28,29].
The remainder of the study is divided into five sections. The full definition of the multi-objective task graph scheduling problem is provided in Section 2, which also defines the objectives of the problem and the related equations in detail. In Section 3, the most recent approaches to multi-objective task graph scheduling are briefly discussed, and some recent robust metaheuristics are reviewed. Section 4 describes the proposed hybrid method in detail, along with the flowcharts and algorithms applied in the proposed system. The algorithm settings and experimental findings are reported and discussed in Section 5 to demonstrate the high performance of the proposed hybrid system. Finally, Section 6 presents the study's conclusions as well as a few potential future research directions.
2. Multi-Objective Task Graph Scheduling Problem
In a multi-objective task graph scheduling problem, all the tasks of a directed acyclic task graph representing a parallel program are distributed over a fully connected heterogeneous distributed system in order to minimize the makespan (total completion time), minimize the average flow time, and maximize the reliability. Instead of maximizing the reliability, the literature minimizes the reliability index. A task graph comprises nodes representing the tasks and directed edges indicating inter-processor communications [30]. The edges are weighted by the communication cost between the processors, incurred when the tasks at the two ends of an edge are executed on different processors; the communication cost drops to zero if the tasks are executed on the same processor [30]. A sample task graph comprising 19 tasks is presented in Figure 2, including the name and execution time of each task [8,30]. The tasks are uniquely named by t followed by a number, and the number in the box on the right side of each task is the task's execution time.
The goal of the task graph scheduling problem is to find an optimal schedule that maps tasks to the processors in a distributed system so that all the objectives are optimized.
The task graph scheduling problem is formulated as follows [11,12,13,14]:
In Equation (1), f1, f2, and f3 are the objective functions, where f1 indicates the total scheduling completion time, known as the makespan. The value of f1 is computed as below.
Cj(s) in Equation (2) is the time at which processor pj finishes its execution and becomes idle. Consequently, maxj Cj(s) indicates the completion time of the last processor in schedule s. The symbol s denotes the schedule and refers to the solution representation scheme in Figure 3.
The total completion time of processor pj is computed as in Equation (3).
In Equation (3), all the tasks assigned to processor pj belong to a set denoted by v(j, s). Likewise, the start and finish times of task vi on processor pj are denoted by stij and wij, respectively; in other words, wij is the time at which processor pj finishes executing task vi.
The second objective in Equation (1), f2, indicates the average flow time, which is computed as in Equation (4). The value of aft(s) for schedule s is the summation of all processor completion times divided by |P| (the number of processors).
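As a minimal illustration of Equations (2) and (4), whose right-hand sides follow directly from the descriptions above, assume three hypothetical processor completion times Cj(s):

```python
# Hypothetical completion times C_j(s) for a schedule s on three processors.
completion_times = [38.0, 41.0, 35.0]

makespan = max(completion_times)                     # Equation (2): f1 = max_j C_j(s)
aft = sum(completion_times) / len(completion_times)  # Equation (4): f2 = sum_j C_j(s) / |P|
```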
f3 in Equation (1) denotes the value of the reliability index. It is important to note that minimizing the reliability index is equivalent to maximizing the reliability [11,12]. A processor may fail during execution, but the failure of one processor does not affect the others. The probability of successfully performing all the tasks on processor pj is computed as in Equation (5), where λj is the failure rate of processor pj. Eventually, the probability of successfully performing a schedule s is calculated as in Equation (6).
Likewise, the communication reliability between processors pm and pn is computed as in Equation (7). In Equation (7), the set of tasks is denoted by V, and the communication failure rate between processors pm and pn is denoted by λmn. Meanwhile, sim and sjn indicate that tasks i and j have been mapped to processors pm and pn; the value of sim is 1 if task i has been scheduled on processor m. Likewise, cij is the communication cost between tasks i and j.
The reliability index of schedule s is calculated as in Equation (8), where the number of tasks is denoted by |V|.
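Since the equations themselves are shown as figures, the following is only a minimal sketch under assumptions that are common in this literature: a processor pj with failure rate λj runs successfully for its busy time Cj(s) with probability exp(−λj·Cj(s)), a link (m, n) succeeds with probability exp(−λmn · total traffic), and the reliability index is −ln(reliability), so minimizing the index maximizes the reliability. The numeric values below are hypothetical.

```python
import math

lam = [1e-4, 2e-4]   # hypothetical processor failure rates lambda_j
C = [40.0, 35.0]     # processor completion times C_j(s)
lam_link = 5e-5      # hypothetical failure rate lambda_12 of the single link
traffic = 12.0       # total communication cost c_ij routed over that link

proc_rel = math.prod(math.exp(-l * c) for l, c in zip(lam, C))  # Eqs. (5)-(6), assumed form
link_rel = math.exp(-lam_link * traffic)                        # Eq. (7), assumed form
reliability = proc_rel * link_rel
reliability_index = -math.log(reliability)                      # Eq. (8), assumed form
```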
All the objectives f1, f2, and f3 are conflicting; hence, their optimal values cannot be achieved in a single solution [11,12,13]. Therefore, a set of non-dominated solutions, the Pareto-front, is extracted for the multi-objective task graph scheduling problem.
3. Related Studies
In this section, the most recent approaches to the multi-objective task graph scheduling problem are briefly reviewed.
In [14], several algorithms were applied by Chitra et al. to a bi-objective (makespan and reliability index) Gaussian elimination graph with 18 nodes. They also evaluated the performance of the algorithms using randomly generated graphs. The standard genetic algorithm (SGA) and evolutionary programming (EP) were applied by the authors, who used a weighted-sum approach to combine the objectives into one. Moreover, the authors applied the multi-objective GA (MOGA) [30,31] and multi-objective evolutionary programming (MOEP) [32,33], and the outcomes illustrated that MOEP provides a better distribution in the Pareto-front than SGA, EP, and MOGA. Carrying out the comparison only between GA and EP, as well as using very small graphs, can be regarded as limitations of the study.
Chen et al. [17] suggested the HEFT-NSGA method to optimize the makespan as well as the reliability index in multi-objective task graph scheduling. Evaluations were carried out using some application graphs and random task graphs. The authors compared the outcomes with the Heterogeneous Earliest Finish Time (HEFT) method [34] and the Critical Path GA (CPGA) [35] to illustrate that HEFT-NSGA extracts better solutions.
Multi-objective mean field annealing (MFA) is another metaheuristic, used by Lotfi et al. [11] for solving task graph scheduling problems. The authors evaluated their method against the NSGAII [36] and MOGA [31] metaheuristics, and the outcomes showed that MFA extracts a better Pareto-front than NSGAII and MOGA. The limitation of the study is that only very small graphs were used for evaluation.
To solve the three-objective task graph scheduling problem, Chitra et al. [12] applied the weighted-sum GA [37], weighted-sum MOEP [32], evolutionary programming (EP) [38,39], and MOGA [31] methods to a Gaussian elimination graph. The study includes no metric calculations and no comparisons with state-of-the-art algorithms.
For the bi-objective task graph scheduling problem, Eswari and Nickolas [40] proposed a firefly-based algorithm for optimizing makespan and reliability. In addition, comparisons and evaluations were made against the modified GA (MGA) [41] and the bi-objective GA (BGA) [42]. The findings indicated that the firefly method performs faster than MGA and BGA. Using a weighted-sum approach to merge the objective values is a limitation of the suggested method. Likewise, no statistical analysis or metric calculation can be found in [40].
Chitra et al. [14] merged multi-objective metaheuristics with a simple local search method to solve the bi-objective scheduling problem. The SPEA2 and NSGAII metaheuristics were compared in their pure and hybrid versions.
Lotfi [8] proposed a strategy that combines six metaheuristics to solve two- and three-objective task graph scheduling problems. According to the proposed strategy, the six metaheuristics collaborate to improve a shared population: the common population is divided into subpopulations to be improved by the metaheuristics, and all non-dominated solutions found so far are kept in a common archive. In addition, each metaheuristic has its own local archive to keep non-dominated solutions during individual execution. Evaluations were made over different task graphs and compared with most of the state-of-the-art methods; the results showed that the proposed ensemble method outperformed all considered competitors.
Likewise, there is a wide literature on new evolutionary algorithms that have been recently proposed by researchers [43,44,45,46].
Yanjiao et al. suggested the NSGA-II-WA algorithm to enhance the standard NSGAIII. In terms of the evolution strategy and weight-vector modification, their suggested NSGA-II-WA outperforms NSGAIII [43]. The suggested approach adds a discriminating condition, which speeds up the procedure without affecting performance. The effectiveness of NSGA-II-WA in terms of convergence and distribution was tested by the authors using the DTLZ benchmark set [43].
In 2017, Xiang et al. introduced VAEA (Vector Angle-Based Evolutionary Algorithm), which is based on angle decomposition [44]. Without the use of reference points, VAEA can balance diversity and convergence in the search space. The maximum-vector-angle-first principle used by VAEA ensures that the solution set is wide and uniform. The authors' evaluation of VAEA on many-objective benchmarks showed that VAEA effectively balances convergence and diversity.
Cheng et al. suggested RVEA (Reference Vector Guided Evolutionary Algorithm) in 2016 [46], which is based on reference-vector guidance. RVEA dynamically tunes the weight vectors in accordance with the objective functions. The authors compared RVEA to five state-of-the-art techniques and found that RVEA is efficient and effective.
Due to the wide range of state-of-the-art works in the literature, it is useful to categorize the developed methods in terms of the algorithm type and the problem type they solve. The first categorization is by problem type: bi-objective or three-objective task graph scheduling. The second is by algorithm type: single-objective or multi-objective optimization approaches, where an algorithm can additionally be an improved version or a hybrid. Table 1 presents this categorization of the state-of-the-art methods; three recently proposed evolutionary algorithms are also included in the table.
The single-objective optimization algorithms use the weighted-sum method to be able to optimize more than one objective [47,48,49]. The reported results show that most of the multi-objective optimization methods outperform the single-objective and weighted-sum approaches.
4. The Proposed Hybrid Method
Efficiently solving the multi-objective multiprocessor scheduling problem is of great importance in engineering applications, because a task graph represents a parallel program running over multiple processors in a distributed system. Decreasing the total execution time and optimizing the other objectives of the problem play a remarkable role in distributed systems, which is why the problem has been, and continues to be, tackled by many different approaches. This section describes the new hybrid method that combines the improved MODE algorithm [19,50,51] and the VNS methodology [52,53]. According to [51], DE is a simple, effective, and robust algorithm for solving global optimization problems, and many research efforts have been made to improve DE and apply it to different practical problems. Differential evolution is able to search a very large space of candidate solutions, and its biggest advantage is stability. Since DE is simple, robust, and stable, it was selected as the main method to be hybridized with another fast and efficient search technique, VNS. Applying the pure MODE algorithm is not promising; the selection, crossover, and mutation operators in MODE are therefore modified and improved in this paper. Using the dominance rank in the selection step improves performance, and hybridizing MODE with a fast and robust VNS increases it further; these are the motivations for merging MODE and VNS into a reliable and robust method. The suggested hybrid technique thus makes use of a modified MODE with respect to the selection, crossover, and mutation operators. All non-dominated solutions found so far, the Pareto-front, are kept in an archive, which is updated at the end of each MODE cycle.
The population is randomly initialized at the beginning of the proposed method. A solution (schedule) is represented as an array with two rows and n columns, where n is the total number of tasks in the graph. As an example, Figure 3 provides a random schedule for the graph shown in Figure 2. The symbol ti denotes the task assigned to processor pj, following the notation of Section 2. Hence, each column in the array indicates the assignment of task ti to processor pj; e.g., the second column illustrates that t2 has been assigned to p1.
The procedure depicted in Algorithm 1 is used to randomly initialize the population. While the tasks are ordered according to a topological ordering, the processors are chosen at random. An advantage of this algorithm is its ability to produce diverse, valid, random solutions, i.e., solutions that differ in both task ordering and processor assignment. The algorithm maintains a list of all tasks that are ready to be executed, called ReadyTasks, and it selects tasks at random, one by one, from this list to generate solutions. A task is ready to be executed once all of its parents have been executed. For this, the algorithm gives each task a parent counter and decrements it whenever a parent is executed; when the parent counter reaches zero, the task is added to the ReadyTasks list. The successors of a task ti in the algorithm are the set of its children in the task graph; e.g., the successors of t5 in the graph shown in Figure 2 are {t7, t8, t9, t10}.
Algorithm 1: Schedule Initialization Algorithm
Schedule-Initialization (Schedule [1…2][1…n], V, P) // V is the set of tasks, P is the set of processors
For all tasks ti ∈ V in the task graph
    ParentsCount [ti] = number of parents of ti in the task graph
ReadyTasks = {ti ∈ V | ParentsCount [ti] = 0} // prepare the tasks that are ready to execute
j = 1
While (ReadyTasks is not empty)
    Choose a task tk from the ReadyTasks set randomly
    Remove tk from ReadyTasks
    Schedule [1][j] = tk
    Choose a processor p from P randomly
    Schedule [2][j] = p
    j = j + 1
    For all children ti ∈ {Successors of tk}
        ParentsCount [ti] = ParentsCount [ti] − 1
        if (ParentsCount [ti] == 0)
            Add ti to the ReadyTasks set
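Algorithm 1 can be rendered as runnable Python as follows; the toy DAG, the processor count, and the function name are hypothetical, and the graph is given as a dict mapping each task to its successors.

```python
import random

def schedule_init(successors, num_procs, rng=random):
    """Emit tasks in a random topological order, each paired with a random processor."""
    parents = {t: 0 for t in successors}
    for t in successors:
        for child in successors[t]:
            parents[child] += 1
    ready = [t for t in successors if parents[t] == 0]  # the ReadyTasks list
    tasks, procs = [], []
    while ready:
        t = ready.pop(rng.randrange(len(ready)))        # random ready task
        tasks.append(t)
        procs.append(rng.randrange(1, num_procs + 1))   # random processor
        for child in successors[t]:
            parents[child] -= 1
            if parents[child] == 0:                     # child became ready
                ready.append(child)
    return tasks, procs  # the two rows of the schedule array

graph = {1: [2, 3], 2: [4], 3: [4], 4: []}  # toy DAG: 1 precedes 2 and 3, which precede 4
order, assignment = schedule_init(graph, num_procs=2)
```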
The proposed hybrid method for tackling the mentioned problem is shown in Figure 4. As can be seen in the flowchart, the system continues to operate in successive sessions until the termination criteria are satisfied. The system first determines the values of each objective using the formulae and explanations from Section 2. The algorithm for calculating a schedule's makespan is shown in Algorithm 2. The AT[ti] and FT[ti] variables in the algorithm represent the time at which task ti becomes ready to begin execution and its completion time, respectively. Also, P[pi] records when processor pi will be free. Moreover, |P| and |T| stand for the number of processors and tasks, respectively. The value of communication_time(ti, tj) indicates the cost of communication between the processor executing ti and the processor executing tj; if both processors are the same, the communication cost is zero. Finally, the maximum P[pi] over all processors determines the makespan of the schedule.
The dominance rank of each solution is determined in the next phase: the rank of a solution is the number of solutions dominating it, so a lower rank value indicates a better solution. Dominance rank values are used in roulette-wheel selection, in which good solutions are more likely to be selected than poor ones. The roulette-wheel mechanism selects solutions according to the size of the region they occupy on the wheel. Hence, the dominance ranks are transformed so that large values correspond to better ranks: each dominance rank is subtracted from the largest dominance rank in the population.
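The ranking and selection step described above can be sketched in Python as follows; the objective vectors are hypothetical, and the `+ 1` in the weights is a small deviation from the text's transformation, added only so that no slice of the wheel degenerates to zero.

```python
import random

def dominates(a, b):
    """a dominates b when every objective is no worse and at least one is strictly better."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dominance_ranks(objs):
    """Rank of a solution = number of solutions dominating it (lower is better)."""
    return [sum(dominates(q, p) for q in objs) for p in objs]

def roulette_select(population, objs, k, rng=random):
    ranks = dominance_ranks(objs)
    # Flip the ranks so better (lower-ranked) solutions get bigger wheel slices.
    weights = [max(ranks) - r + 1 for r in ranks]
    return rng.choices(population, weights=weights, k=k)

pop = ["s1", "s2", "s3", "s4"]
objs = [(1, 5), (2, 4), (3, 6), (4, 7)]  # hypothetical objective vectors (minimized)
parents = roulette_select(pop, objs, k=3)
```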
Later on, the archive is updated with the recently found non-dominated solutions, according to the flowchart in Figure 4. This phase also uses the dominance rank values: all solutions with dominance rank zero are inserted into the archive, and all archived solutions dominated by a newly inserted one are removed.
Algorithm 2: Makespan Calculation
Makespan-Calculation (Schedule [1…2][1…n], ExecutionTime [], CommunicationTime [])
// ExecutionTime holds the tasks' execution times
// CommunicationTime holds the cost of the edges between task pairs
P [1…|P|] = {0}, AT [1…|T|] = {0}, FT [1…|T|] = {0}
// |P| and |T| are the number of processors and the number of tasks, respectively
// P[pi] is the time at which processor pi becomes idle
// AT[ti] is the time at which ti becomes ready to execute
// FT[ti] is the finish time of task ti
for i = 1 to |T|
    ti = Schedule [1][i]
    P [Schedule [2][i]] = max (AT [ti], P [Schedule [2][i]]) + ExecutionTime (ti)
    FT [ti] = P [Schedule [2][i]]
    for all tasks tj ∈ Successors (ti) in the task graph
        temp = FT [ti]
        if (Schedule [2][i] is not the same as the processor assigned to tj)
            temp = temp + CommunicationTime (ti, tj)
        AT [tj] = max (temp, AT [tj])
Makespan = Max (P [1…|P|])
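Algorithm 2 can be sketched as runnable Python; the toy DAG, execution times, communication costs, and processor assignment below are hypothetical. Tasks are visited in the schedule's topological order, a task starts when both its processor is free and its parents' results have arrived, and the edge cost is added only when a child sits on a different processor.

```python
def makespan(order, proc_of, exec_time, successors, comm):
    busy_until = {}                      # P[p]: time at which processor p becomes idle
    arrival = {t: 0 for t in order}      # AT[t]: time at which task t is ready to start
    for t in order:
        p = proc_of[t]
        start = max(arrival[t], busy_until.get(p, 0))
        finish = start + exec_time[t]    # FT[t]
        busy_until[p] = finish
        for child in successors.get(t, []):
            # communication cost applies only across different processors
            delay = comm.get((t, child), 0) if proc_of[child] != p else 0
            arrival[child] = max(arrival[child], finish + delay)
    return max(busy_until.values())

succ = {1: [2, 3], 2: [4], 3: [4], 4: []}
exec_t = {1: 2, 2: 3, 3: 4, 4: 1}
comm = {(1, 2): 5, (1, 3): 5, (2, 4): 2, (3, 4): 2}
# tasks 1, 2, 4 on processor 1; task 3 on processor 2
ms = makespan([1, 2, 3, 4], {1: 1, 2: 1, 3: 2, 4: 1}, exec_t, succ, comm)
```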
In the next step, the VNS method is applied across the archive to further exploit the best solutions found so far. In this manner, the non-dominated solutions in the archive are modified and improved. Algorithm 3 shows the pseudocode of the VNS algorithm. The suggested hybrid method applies VNS over a maximum of 10 solutions in the archive and iterates the inner loop 10 times to keep VNS from becoming time-consuming. The neighborhood structure N defined in the VNS algorithm produces a moderately large alteration of the solution.
Algorithm 3: VNS method
VNS (Archive) // Archive consists of all non-dominated solutions found so far
Define a neighborhood structure N // a way of modifying a solution
// the modification is performed using the mutation operator presented in Figure 9
While (VNS has not been applied to 10 solutions)
    Choose a random solution X from the Archive
    for k = 1 to 10
        Generate a solution Y from X using the structure N
        for p = 1 to 3 // local search is applied to solution Y
            Generate a new solution Z from Y by changing 3 processors randomly
            if (Z dominates Y)
                Copy Z to Y
        if (Y dominates X)
            Copy Y to X
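The loop structure of Algorithm 3 can be sketched generically in Python. Here `shake` stands in for the neighborhood structure N, `tweak` for the 3-processor local-search move, and `better` for the dominance test; all three are hypothetical placeholders, demonstrated at the end on plain numbers rather than schedules.

```python
import random

def vns(archive, shake, tweak, better, rng=random):
    for _ in range(min(10, len(archive))):   # at most 10 archive members
        i = rng.randrange(len(archive))
        x = archive[i]
        for _ in range(10):                  # 10 shakes (outer VNS loop)
            y = shake(x)
            for _ in range(3):               # local search around y
                z = tweak(y)
                if better(z, y):
                    y = z
            if better(y, x):
                x = y
        archive[i] = x
    return archive

# toy demonstration: minimize a single number
archive = vns([10.0], shake=lambda v: v - 1.0,
              tweak=lambda v: v - 0.1, better=lambda a, b: a < b)
```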
The hybrid algorithm then has an inner loop that uses DE to perform the exploration task. For each solution in the population, the loop iterates the following steps. In the first step, three candidate solutions are chosen using roulette-wheel selection over the dominance-rank values. Since the roulette wheel gives larger values a higher chance, selecting on the raw ranks would favor the worst solutions; to prevent this, every dominance rank is subtracted from the largest dominance rank in the population. This way, the rank of the worst solution becomes zero, and among the transformed values a higher value indicates a better solution. Therefore, the selection step chooses three random solutions for each solution in the population such that better solutions have a better chance of being selected. The algorithm is presented in Figure 4. The crossover operator is then applied over the three randomly chosen solutions in the following steps to produce the candidate solution C. Then solution i is replaced by solution C if C dominates solution i. The crossover operator is implemented in a way that always yields feasible solutions: the two-point crossover is applied to the processors only. In this manner, only the processors are exchanged and the order of the tasks is preserved, because the tasks follow a topological order and an arbitrary combination would break the feasibility of the solution. Meanwhile, the crossover operator is applied according to the crossover rate, which is defined between 0 and 1. Algorithm 4 shows the crossover algorithm. In the algorithm, random(0, 1) generates a uniformly distributed random number in the interval (0, 1). Likewise, Cutpoint1 and Cutpoint2 should be generated under the condition that Cutpoint1 is smaller than or equal to Cutpoint2.
Algorithm 4: Crossover
Crossover (Parent1 [1…2][1…n], Parent2 [1…2][1…n])
R = random (0, 1) // generate a random number between 0 and 1
If (R < CrossoverProbability)
    Cutpoint1 = RandomNumber (1, n)
    Cutpoint2 = RandomNumber (Cutpoint1, n) // guarantees Cutpoint1 <= Cutpoint2
    For i = 1 to Cutpoint1
        Swap (Parent1 [2][i], Parent2 [2][i])
    For i = Cutpoint2 to n
        Swap (Parent1 [2][i], Parent2 [2][i])
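The two-point crossover on the processor rows can be sketched in Python; the function name and the toy processor rows are hypothetical, and the task rows are untouched by design, so feasibility is preserved.

```python
import random

def crossover(procs1, procs2, rate, rng=random):
    """Two-point crossover on the processor rows of two parents (in place)."""
    n = len(procs1)
    if rng.random() < rate:
        cut1 = rng.randint(1, n)
        cut2 = rng.randint(cut1, n)  # guarantees cut1 <= cut2
        # swap the first cut1 processors and the processors from cut2 to n (1-based)
        for i in list(range(cut1)) + list(range(cut2 - 1, n)):
            procs1[i], procs2[i] = procs2[i], procs1[i]
    return procs1, procs2

c1, c2 = crossover([1, 1, 1, 1], [2, 2, 2, 2], rate=1.0, rng=random.Random(7))
```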
The mutation operator is applied to solution i in the inner loop's final step. The suggested mutation also changes the order of the tasks. The algorithm randomly selects a cut point on the solution between 1 and the number of tasks. The order of the tasks is kept unchanged up to that point, but after it the order is re-randomized. The adjustment uses the procedure shown in Algorithm 1, in which the remaining portion of the solution is randomly regenerated after the given point. The benefit of the suggested mutation is that the resulting solution is altered in terms of both tasks and processors, which lets the algorithm hop through the search space and find better solutions. The algorithm also picks a few processors and swaps them at random. The mutation algorithm is displayed in Algorithm 5.
Algorithm 5: Mutation
Mutation (Schedule [1…2][1…n], V) // V is the set of tasks
NewSchedule = Schedule // NewSchedule is the mutated version of Schedule
For all tasks ti ∈ V in the task graph // count the number of parents of each task
    ParentsCount [ti] = number of parents of ti in the task graph
ReadyTasks = {ti ∈ V | ParentsCount [ti] = 0} // prepare the tasks that are ready to execute
ReadyCount = number of tasks in the ReadyTasks set
p = 0, cutpoint = RandomNumber (1, n)
q = Random (1, cutpoint) // from position q on, the order of the tasks is re-randomized
While (ReadyCount > 0)
    p = p + 1
    SelectCount = number of tasks in the ReadyTasks set
    If (p < q) // before the cut point, keep the order of the original Schedule
        t = Schedule [1][p]
    Else If (SelectCount > 1) // after the cut point, choose randomly among the ready tasks
        s = Random (1, SelectCount)
        t = the s-th task in the ReadyTasks set
    Else // only one task is ready, so it must be chosen
        t = the single task in the ReadyTasks set
    Remove t from ReadyTasks
    ReadyCount = ReadyCount − 1
    NewSchedule [1][p] = t
    For all children ci ∈ {Successors of t}
        ParentsCount [ci] = ParentsCount [ci] − 1 // decrement the parent counter by one
        If (ParentsCount [ci] == 0)
            Add ci to the ReadyTasks set // add the newly ready tasks
For i = 1 to 3 // exchange processors three times
    R1 = Random (1, n)
    R2 = Random (1, n)
    Swap (NewSchedule [2][R1], NewSchedule [2][R2])
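The mutation can be sketched as runnable Python under the same toy DAG convention used earlier (a dict from task to successors); the function name and graph are hypothetical. The task order is kept up to a random position q, re-randomized topologically after it, and three random processor swaps follow.

```python
import random

def mutate(tasks, procs, successors, rng=random):
    parents = {t: 0 for t in successors}
    for t in successors:
        for c in successors[t]:
            parents[c] += 1
    ready = [t for t in successors if parents[t] == 0]
    q = rng.randint(1, len(tasks))          # positions before q keep the old order
    new_tasks = []
    for pos in range(1, len(tasks) + 1):
        if pos < q:
            t = tasks[pos - 1]              # keep the original order
            ready.remove(t)
        elif len(ready) > 1:
            t = ready.pop(rng.randrange(len(ready)))  # random ready task
        else:
            t = ready.pop()                 # only one ready task: take it
        new_tasks.append(t)
        for c in successors[t]:
            parents[c] -= 1
            if parents[c] == 0:
                ready.append(c)
    new_procs = procs[:]
    for _ in range(3):                      # three random processor swaps
        i, j = rng.randrange(len(procs)), rng.randrange(len(procs))
        new_procs[i], new_procs[j] = new_procs[j], new_procs[i]
    return new_tasks, new_procs

graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
nt, nprocs = mutate([1, 2, 3, 4], [1, 2, 1, 2], graph, rng=random.Random(5))
```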
If the termination criteria are not met when the inner loop ends, the hybrid method moves on to the next session. Otherwise, the extracted archive is reported as the best Pareto-front found.