1. Introduction
Cloud computing, which plays a very important role nowadays owing to rapid technological development and the many layers of applications built on it, involves the delivery of on-demand computing resources, from applications to remote data centers, over the internet on a pay-for-use basis. Accordingly, many scholars and practitioners have devoted their efforts to strengthening and innovating the related research.
In essence, the computing tasks of cloud computing are distributed across numerous resources for large-scale calculation and determine the value, accessibility, reliability, and capability of the service; cloud computing comprises a computing style in which dynamically scalable and often virtualized resources are provided as an Internet service [
1]. The service model of computing tasks includes cloud platforms, users, applications, virtual machines, etc. Within each cloud platform, there are multiple platform users, each with the ability to run multiple applications on the platform. Each application corresponds to the jobs requested by a user and uses a certain quota of virtual machines, through which the application finishes and returns the jobs, thus completing the procedure.
Countless discussions and research have been conducted on the two prevailing issues in both grid and cloud computing: resource allocation and job scheduling [
2,
3,
4]. The job scheduling problem revolves around exploring how the provider of cloud computing services assigns the jobs of each client to each processor according to certain rules and regulations to ensure the cost-effectiveness of the job scheduling process. The efficiency and performance of cloud computing services are usually associated with the efficiency of job scheduling, which affects not only the performance efficiency of users’ jobs but also the utilization efficiency of system resources. Hence, the interdependent relationship between the efficiency and performance of cloud computing services and the efficiency of job scheduling necessitates research on the job scheduling problem of the computing tasks of cloud computing.
In the job scheduling problem of cloud computing services, jobs are assigned to individual processors. It is an NP-hard combinatorial problem, which makes it difficult to obtain the global optimum within polynomial time. The greater the problem size, e.g., the number of resources or jobs, the more difficult it is for traditional job scheduling algorithms to solve. Hence, alongside improving traditional algorithms, many scholars have introduced machine learning algorithms, e.g., the Pareto-set cluster genetic algorithm [
2] and particle swarm optimization [
2], the binary-code genetic algorithm [
3] and integer-code particle swarm optimization [
3], the simulated annealing algorithm [
4], particle swarm optimization (PSO) [
5], the genetic algorithm [
6], the machine learning algorithm [
7], Multi-Objective PSO [
8], the artificial bee colony algorithm [
9], the hybrid optimization algorithm [
10], etc., to solve the job scheduling problem of the computing tasks of cloud computing.
Numerous different objectives have been discussed to measure service performance, e.g., cost, reliability, makespan, and power consumption [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10]. However, when conducting research on the job assignment problem, most scholars emphasize the single-objective job scheduling problem and overlook job scheduling problems with more than one objective [
11,
12,
13,
14,
15,
16,
17]. By doing so, they fail to acknowledge other goals that may influence the quality of the cloud-computing service. It is thus necessary to balance various aspects when evaluating job scheduling problems [
15,
16,
17]; for example, the objective of minimizing total energy consumption and the objective of minimizing makespan are conflicting objectives in real-life applications [
17].
With the ever-advancing development of cloud computing, more and more data centers have successively been established to run a large number of applications that require considerable computations and storage capacity, but this wastes vast amounts of energy [
12,
15,
16,
17]. Reducing power consumption and cutting down energy costs have become primary concerns for today’s data center operators. Thus, striking a balance between reducing energy consumption and maintaining high computation capacity has become a timely and important challenge.
Furthermore, with more media and public attention shifting onto progressively severe environmental issues, governments worldwide have now adopted a stronger environmental protection stance [
12,
15,
16,
17]. This puts increasing pressure on enterprises to pursue higher output and also to focus on minimizing power consumption [
12,
15,
16,
17].
Stemming from the real-life concerns mentioned above, our previous work considered a two-objective time-constrained job scheduling problem that measures the performance of a given job schedule plan with two objectives: the quality of the cloud computing service in terms of the makespan and the environmental impact in terms of the energy consumption [
12,
15,
16,
17].
There are two different types of algorithms for solving multi-objective problems. One converts multi-objective problems into single-objective ones through methods such as the ε-constraint, LP-metrics, and goal programming, while the other solves multi-objective problems based on the concept of Pareto optimality. The latter is the approach we adopted, both here and in our previous study [
10,
11,
12,
13,
14,
15,
16,
17].
Moreover, a new algorithm called the multi-objective simplified swarm optimization (MOSSO) was proposed to solve the above problem [
17].
To rectify our previous source code and incorporate new concepts based on Pareto optimality for solving the two-objective time-constrained job scheduling problem, we propose a new algorithm called two-objective simplified swarm optimization (tSSO). It draws on the SSO update mechanism to generate offspring [
18], the crowding distance to rank nondominated solutions [
19], a new hybrid elite selection to select parents [
20], and the limited number of nondominated solutions adapted from multi-objective particle swarm optimization (MOPSO) serves to guide the update [
13].
The motivation and contribution of this work are highlighted as follows:
An improved algorithm named two-objective simplified swarm optimization (tSSO) is developed in this work to revise and improve errors in the previous MOSSO algorithm [
17], which ignores the fact that the number of temporary nondominated solutions in a multi-objective problem is not always only one and that some temporary nondominated solutions may not remain nondominated in the next generation. The algorithm is based on SSO and delivers job scheduling in cloud computing.
More algorithms, tests, and comparisons have been implemented to solve the two-objective time-constrained task scheduling problem more efficiently.
In the experiments conducted, the proposed tSSO outperforms established existing algorithms, e.g., NSGA-II, MOPSO, and MOSSO, in convergence, diversity, the number of obtained temporary nondominated solutions, and the number of obtained real nondominated solutions.
The remainder of this paper is organized as follows. Related work in the existing literature is discussed in
Section 2, which also demonstrates how this work is different in its approach.
Section 3 presents the notations, assumptions, and mathematical model of the proposed two-objective time-constrained job scheduling problem, which addresses energy consumption and service quality in terms of the makespan.
Section 4 introduces the simplified swarm optimization (SSO) [
18], the concept of crowding distance [
19], and traditional elite selection [
20]. The proposed tSSO is presented in
Section 5, together with the discussion of its novelties and pseudo code. The section also explains the errors in the previous MOSSO algorithm and how the proposed new algorithm overcomes them.
Section 6 compares the proposed tSSO in terms of nine different parameter settings with the MOSSO proposed in [
17], the MOPSO proposed in [
13], and the famous NSGA-II [
19,
20]. Three benchmark problems, one small-size, one medium-size, and one large-size, are utilized and analyzed from the viewpoint of convergence, diversity, and number of obtained nondominated solutions in order to demonstrate the performance of the tSSO. Our conclusions are given in
Section 7.
5. Proposed Algorithm
The proposed tSSO is a population-based, all-variable-update, stepwise-function-based method; i.e., there is a fixed number of solutions in each generation, and all variables of each solution must be updated based on the stepwise function. Details of the proposed tSSO are discussed in this section.
5.1. Purpose of tSSO
The developed tSSO algorithm revises and improves the errors in the previous MOSSO algorithm, which ignores the fact that the number of temporary nondominated solutions in a multi-objective problem is not always only one and that some temporary nondominated solutions may not remain nondominated in the next generation.
The developed tSSO algorithm overcomes the errors in the previous MOSSO algorithm by the following methods:
The novel update mechanism for the roles of gBest and pBest in the proposed tSSO, introduced in
Section 5.3.
The hybrid elite selection in the proposed tSSO, which is introduced in
Section 5.4.
5.2. Solution Structure
The first step of most machine learning algorithms is to define the solution structure [
2,
3,
4,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45]. The solution in the tSSO for the proposed problem is defined as a vector, and the value of the
ith coordinate is the processor utilized to process the
ith job. For example, let
X5 = (3, 4, 6, 4) be the 5th solution in the 10th generation of the problem with 4 jobs. In
X5, jobs 1, 2, 3, and 4 are processed by processors 3, 4, 6, and 4, respectively.
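To make the encoding concrete, the example above can be expressed as a short sketch (the code is purely illustrative; the variable names are not from the original implementation):

```python
# A solution is a vector whose ith coordinate is the processor
# assigned to the ith job (example from the text: 4 jobs).
X5 = (3, 4, 6, 4)

# Map each job (1-indexed, as in the text) to its processor.
assignment = {job: proc for job, proc in enumerate(X5, start=1)}
# assignment == {1: 3, 2: 4, 3: 6, 4: 4}
```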
5.3. Novel Update Mechanism
The second step in developing machine learning algorithms is to create an update mechanism to update solutions [
2,
3,
4,
35,
36,
37,
38,
39,
40,
41,
42]. The stepwise update function is a unique update mechanism of SSO [
17,
18,
35,
36,
37,
38,
39,
40,
41,
42]. In the single-objective problem, there is only one
gBest for the traditional SSO. However, in the multi-objective problem, the number of temporary nondominated solutions is not always only one, and some temporary nondominated solutions may not be temporary nondominated solutions in the next generation [
17]. Note that a nondominated solution is not dominated by any other solution, while a temporary nondominated solution is one not dominated by any solution found so far in the current generation; it may still be dominated by solutions updated later [
13,
17,
19].
Hence, in the proposed tSSO, the role of
gBest is removed completely and the
pBest used in the tSSO does not follow its original definition in the SSO. The pBest for each solution in the proposed tSSO is selected from the temporary nondominated solutions, i.e., there is no need to follow its previous best predecessor. The stepwise update function used in the proposed tSSO for multi-objective problems is given below for each solution Xi, with i = 1, 2, …, Nsol, and each variable j = 1, 2, …, Nvar:

x_{ij} =
\begin{cases}
x^{*}_{j}, & \text{if } \rho_{j} \in [0, C_p) \\
x_{ij}, & \text{if } \rho_{j} \in [C_p, C_w) \\
x, & \text{if } \rho_{j} \in [C_w, 1]
\end{cases}
\tag{8}

where X* = (x*1, x*2, …, x*Nvar) is one of the temporary nondominated solutions selected randomly, ρj ∈ [0, 1] is a random number generated uniformly in [0, 1], and x is a random integer drawn uniformly from the feasible processor indices 1, 2, …, Ncpu.
For example, let
X6 = (1, 2, 3, 2, 4) and
X* = (2, 1, 4, 3, 3) be a temporary nondominated solution selected randomly. Let
Cp = 0.50,
Cw = 0.95, and ρ = (ρ
1, ρ
2, ρ
3, ρ
4, ρ
5) = (0.32, 0.75, 0.47, 0.99, 0.23). The procedure to update
X6 based on the stepwise function provided in Equation (8) is demonstrated in
Table 1.
From the above example, the simplicity, convenience, and efficiency of the SSO can also be found in the update mechanism of the proposed tSSO.
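The same update can be sketched in code. This is a minimal illustration of Equation (8) using the example values above; the function name and the random-processor range (n_proc = 6, matching the largest processor index in the example) are assumptions for illustration, not part of the original implementation:

```python
import random

def tsso_update(X, X_star, Cp, Cw, n_proc, rho=None, rng=random):
    """Stepwise update (Eq. (8)): per coordinate, copy from a randomly
    selected temporary nondominated solution X* with probability Cp,
    keep the current value with probability Cw - Cp, and otherwise
    draw a random processor index."""
    if rho is None:
        rho = [rng.random() for _ in X]
    new = []
    for x_i, xs_i, r in zip(X, X_star, rho):
        if r < Cp:
            new.append(xs_i)                     # inherit from X*
        elif r < Cw:
            new.append(x_i)                      # keep current value
        else:
            new.append(rng.randint(1, n_proc))   # random move
    return tuple(new)

# Example from the text: Cp = 0.50, Cw = 0.95
X6     = (1, 2, 3, 2, 4)
X_star = (2, 1, 4, 3, 3)
rho    = (0.32, 0.75, 0.47, 0.99, 0.23)
result = tsso_update(X6, X_star, 0.50, 0.95, n_proc=6, rho=rho)
```

Coordinates 1, 3, and 5 are inherited from X*, coordinate 2 is kept, and coordinate 4 (ρ4 = 0.99 ≥ Cw) becomes a random processor, as in the worked example.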
5.4. Hybrid Elite Selection
The last step is to determine the selection policy, which decides which solutions, i.e., parents, are selected to generate the solutions of the next generation. In the proposed tSSO, a hybrid elite selection is harnessed.
Let πt be a set storing the temporary nondominated solutions found in the tth generation. It is impractical to keep all temporary nondominated solutions because their number can grow without bound, and because temporary nondominated solutions may not be real nondominated solutions. The value of |πt| is therefore limited to Nsol, and some temporary nondominated solutions are abandoned to keep |πt| ≤ Nsol.
If Nsol ≤ |πt|, the crowding distance is calculated for each temporary nondominated solution, and only the best Nsol solutions are selected from πt to be parents. Otherwise, if Nsol > |πt|, all temporary nondominated solutions in πt serve as parents and (Nsol − |πt|) solutions are selected randomly from the offspring. As discussed above, there are always Nsol parents in each generation, and temporary nondominated solutions are always chosen first to serve as parents.
Note that these temporary nondominated solutions are abandoned if they are not selected as parents.
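A compact sketch of the hybrid elite selection under the rules above is given below. Solutions are represented as (vector, objectives) pairs; the crowding-distance routine is a standard boundary-plus-normalized-gap implementation assumed here for illustration, since Section 4 is not reproduced in this excerpt:

```python
import random

def crowding_distance(front):
    """Crowding distances for a list of objective tuples (all minimized).
    Boundary points get infinite distance; interior points accumulate the
    normalized gap between their neighbors in each objective."""
    n = len(front)
    if n <= 2:
        return [float("inf")] * n
    dist = [0.0] * n
    for m in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = (front[order[-1]][m] - front[order[0]][m]) or 1.0
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][m]
                               - front[order[k - 1]][m]) / span
    return dist

def hybrid_elite_selection(pi_t, offspring, n_sol, rng=random):
    """Hybrid elite selection sketch: if enough temporary nondominated
    solutions exist, keep the n_sol with the largest crowding distance;
    otherwise keep them all and fill the remaining slots with randomly
    chosen offspring."""
    if n_sol <= len(pi_t):
        d = crowding_distance([obj for _, obj in pi_t])
        order = sorted(range(len(pi_t)), key=lambda i: d[i], reverse=True)
        return [pi_t[i] for i in order[:n_sol]]
    return list(pi_t) + rng.sample(offspring, n_sol - len(pi_t))
```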
5.5. Group Comparison
Referring to
Section 4.3, the temporary nondominated solutions play a paramount role in most multi-objective algorithms. To date, the most popular method to find them is the pairwise comparison, which takes O(N2) [13,17,19] in each generation, where N is the number of solutions from which the temporary nondominated solutions are determined. Since parents and offspring are compared together, the corresponding time complexity for both the tSSO and NSGA-II is O(4Nsol2). Therefore, the most time-consuming aspect of solving multi-objective problems using machine learning algorithms is the search for all temporary nondominated solutions among all offspring.
In the proposed tSSO, the parents are selected from the temporary nondominated solutions found in the previous generation and the offspring generated in the current generation. To reduce the computational burden, a new method called group comparison is proposed in tSSO. All temporary nondominated solutions among the offspring are found first using the pairwise comparison, which takes O(Nsol2) since the number of offspring is Nsol. The temporary nondominated solutions obtained in the previous generation are then compared with the new temporary nondominated solutions found in the current generation. The sizes of both sets of temporary nondominated solutions are at most Nsol, i.e., the time complexity of this comparison is O(Nsol log(Nsol)) based on the merge sort.
Hence, the overall time complexity is O(Nsol log(Nsol)) + O(Nsol2) = O(Nsol2), roughly one quarter of the cost of the full pairwise comparison.
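For two minimized objectives, the group comparison can be sketched as below. The sort-and-sweep step is one possible O(N log N) realization of the merge described above, not necessarily the authors' exact procedure:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pairwise_nondominated(points):
    """O(N^2) pairwise filter: keep points not dominated by any other."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def nondominated_2d(points):
    """For two minimized objectives: sort lexicographically and sweep,
    keeping points whose second objective strictly improves -- O(N log N)."""
    best, out = float("inf"), []
    for p in sorted(set(points)):
        if p[1] < best:
            out.append(p)
            best = p[1]
    return out

def group_comparison(prev_front, offspring):
    """Group comparison sketch: filter the offspring pairwise first, then
    merge the result with the previous generation's front via the sweep."""
    new_front = pairwise_nondominated(offspring)   # O(N^2) on offspring only
    return nondominated_2d(prev_front + new_front) # O(N log N) merge
```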
5.6. Proposed tSSO
The procedure of the proposed tSSO is based on the solution structure, update mechanism, hybrid elite selection, and group comparison discussed in this section and is presented in pseudo code as follows. Its nine different parameter settings, denoted as tSSO0, tSSO1, tSSO2, tSSO3, tSSO4, tSSO5, tSSO6, tSSO7, and tSSO8, are introduced in detail in
Section 6.
PROCEDURE tSSO
- STEP 0.
Create initial population Xi randomly for i = 1, 2, …, Nsol and let t = 2.
- STEP 1.
Let πt be the set of all temporary nondominated solutions in S* = {Xk | k = 1, 2, …, 2Nsol}, where XNsol+k is updated from Xk based on Equation (8) for k = 1, 2, …, Nsol.
- STEP 2.
If Nsol ≤ |πt|, let S = {top Nsol solutions in πt based on crowding distances} and go to STEP 4.
- STEP 3.
Let S = πt ∪ {Nsol − |πt| solutions selected randomly from S* − πt}.
- STEP 4.
Re-index the solutions in S such that S = {Xk | k = 1, 2, …, Nsol}.
- STEP 5.
If t < Ngen, then let t = t + 1 and go back to STEP 1. Otherwise, halt.
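For illustration, the STEPs above can be assembled into a compact, runnable sketch. This is not the authors' implementation: the fitness function is supplied by the caller, and the crowding-distance truncation of STEP 2 is replaced by a random truncation to keep the sketch short:

```python
import random

def tsso(fitness, n_job, n_cpu, n_sol=20, n_gen=50, cp=0.5, cw=0.5, seed=1):
    """Compact sketch of the tSSO loop (STEPs 0-5). fitness(X) returns a
    tuple of objectives to minimize, e.g. (energy, makespan); Cp = cp and
    Cw = cp + cw, as in the parameter-setting note of Section 6.1."""
    rng = random.Random(seed)
    Cp, Cw = cp, cp + cw

    def dominates(fa, fb):
        return all(a <= b for a, b in zip(fa, fb)) and fa != fb

    def front(pop):  # pop: list of (X, f) pairs
        return [s for s in pop if not any(dominates(q[1], s[1]) for q in pop)]

    def update(X, X_star):  # stepwise update function, Equation (8)
        return tuple(xs if (r := rng.random()) < Cp
                     else x if r < Cw
                     else rng.randint(1, n_cpu)
                     for x, xs in zip(X, X_star))

    # STEP 0: random initial population
    pop = [(X, fitness(X)) for X in
           (tuple(rng.randint(1, n_cpu) for _ in range(n_job))
            for _ in range(n_sol))]
    pi = front(pop)
    for _ in range(n_gen):
        # STEP 1: update each parent toward a random temporary
        # nondominated solution; S* holds parents plus offspring
        offspring = [(Y, fitness(Y))
                     for Y in (update(X, rng.choice(pi)[0]) for X, _ in pop)]
        s_star = pop + offspring
        pi = front(s_star)
        # STEPs 2-4: keep at most n_sol parents, nondominated ones first
        # (random truncation stands in for the crowding-distance ranking)
        if len(pi) >= n_sol:
            pop = rng.sample(pi, n_sol)
        else:
            rest = [s for s in s_star if s not in pi]
            pop = pi + rng.sample(rest, min(n_sol - len(pi), len(rest)))
    return sorted(set(f for _, f in front(pop)))
```

Calling it with a toy two-objective fitness returns a mutually nondominated set of objective tuples.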
6. Simulation Results and Discussion
For a fair comparison with MOSSO [
17], our previous work on this topic, the numerous parameter-setting experiments on the three different-sized benchmark problems used in [
17] were carried out using tSSO. The experimental results obtained by the proposed tSSO were compared with those obtained using MOSSO [
17], MOPSO [
13] and NSGA-II [
19,
20].
6.1. Parameter Settings
All machine learning algorithms have parameters in their update and/or selection procedures, so the parameters need to be tuned for better results. To determine the best parameters of the proposed tSSO, three levels of the two factors cp and cw (a low value of 0.1, a middle value of 0.3, and a high value of 0.5) were combined, yielding nine different parameter settings denoted as tSSO
0, tSSO
1, tSSO
2, tSSO
3, tSSO
4, tSSO
5, tSSO
6, tSSO
7, and tSSO
8, as shown in
Table 2.
Note that Cp = cp and Cw = cp + cw. The following provides the parameter settings for the other algorithms:
NSGA-II: ccrossover = 0.7, cmutation = 0.3
MOPSO:
w = 0.871111,
c1 = 1.496180,
c2 = 1.496180 [
17]
MOSSO:
Cg = 0.1 + 0.3t/N
gen,
Cp = 0.3 + 0.4t/N
gen,
Cw = 0.4 + 0.5t/N
gen, where
t is the current generation number [
17].
6.2. Experimental Environments
To demonstrate the performance of the proposed tSSO and select the best parameter settings, tSSOs with nine parameter settings were utilized for three job scheduling benchmarks [
17], namely (N
job, N
cpu) = (20, 5), (50, 10), and (100, 20), and the deadline was set to 30 for each test [
17].
The following lists the special characteristics of the processor speeds (in MIPS), energy consumption (in kW per unit time), and job sizes (in MI) in these three benchmark problems [
17]:
Each processor speed is generated randomly between 1000 and 10,000 MIPS, and the largest speed is ten times the smallest one.
The power consumption grows polynomially as the processor speed grows, ranging between 0.3 and 30 kW per unit time.
The job sizes range between 5000 and 15,000 MI.
Alongside the proposed tSSO with its nine parameter settings, three other multi-objective algorithms, MOPSO [
13,
17], MOSSO [
17], and NSGA-II [
19,
20] were tested and compared to further validate the superiority of the proposed tSSO. NSGA-II [
19,
20] is the current best multi-objective algorithm and is based on genetic algorithms, while MOPSO is based on particle swarm optimization [
13], and MOSSO on SSO [
17].
The proposed tSSO with its nine parameter settings and NSGA-II have no iterative local search to improve the solution quality. To ensure a fair comparison, the iterative local search was therefore removed from both MOPSO and MOSSO. All algorithms were coded in Dev-C++ and run on a 64-bit Windows 10 notebook with an Intel Core i7-6650U CPU @ 2.20 GHz and 16 GB of memory.
In addition, for a fair comparison between all algorithms, Nsol = Nnon = 50, Ngen = 1000, and Nrun = 500, i.e., the same solution number, generation number, size of external repository, and run number for each benchmark problem were used. Furthermore, the calculation number of the fitness function of all algorithms was limited to Nsol × Ngen = 50,000, 100,000, and 150,000 in each run for the small-size, medium-size, and large-size benchmarks, respectively.
Note that the reason for the large value of N
run is to simulate the Pareto front, and the details are discussed in
Section 6.3.
6.3. Performance Metrics
Convergence and diversity metrics are both used to evaluate the solution quality of multi-objective algorithms. Among these metrics, the generational distance (GD), introduced by Van Veldhuizen et al. [
53], and spacing (SP), introduced by Schott [
54], are two general indexes for convergence and diversity, respectively. Let di be the shortest Euclidean distance between the ith temporary nondominated solution and the Pareto front, let n be the number of temporary nondominated solutions, and let d̄ be the average of all di. The GD and SP are defined below:

GD = \frac{1}{n}\sqrt{\sum_{i=1}^{n} d_i^{2}}, \qquad
SP = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\bar{d} - d_i\right)^{2}}
The GD is thus the root of the sum of the squares of di, averaged over n, and the SP is very similar to the standard deviation in probability theory and statistics. If all temporary nondominated solutions are real nondominated solutions, we have GD = 0; if all solutions are equally far from the front, i.e., all di equal d̄, we have SP = 0. Hence, in general, the smaller the SP is, the higher the diversity of solutions along the Pareto front and the better the solution quality becomes [
13,
17,
19,
46,
47].
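Given a (simulated) Pareto front, the two metrics can be computed as in the following sketch, which uses the common p = 2 form of the GD; the function and variable names are illustrative:

```python
import math

def gd_sp(front_found, pareto_front):
    """Generational distance (GD) and spacing (SP) of a set of obtained
    points against a (simulated) Pareto front, both in objective space.
    d_i is the shortest Euclidean distance from the ith point to the front."""
    d = [min(math.dist(p, q) for q in pareto_front) for p in front_found]
    n = len(d)
    d_bar = sum(d) / n
    gd = math.sqrt(sum(x * x for x in d)) / n
    sp = math.sqrt(sum((d_bar - x) ** 2 for x in d) / (n - 1)) if n > 1 else 0.0
    return gd, sp
```

Points lying exactly on the front give GD = 0, and points equally distant from the front give SP = 0, matching the properties noted above.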
The Pareto front is needed to calculate both the GD and SP from their formulas [
43,
44]. Unfortunately, forming the true Pareto front requires infinitely many nondominated solutions, which is impossible to accomplish even with exhaustive methods: the job scheduling problem of cloud computing is NP-hard, there is no guarantee of obtaining the global optimal solution in polynomial time, and traditional algorithms are intractable, so a special optimization algorithm has to be used to reduce the search complexity and improve the overall search quality [
12,
15,
16,
17]. Rather than a real Pareto front, a simulated Pareto front is therefore constructed by collecting all temporary nondominated solutions in the final generation from all the different algorithms with different values of N
sol for the same problem.
There are 12 algorithms with three different Nsol = 50, 100, and 150 for Nrun = 500, i.e., 12 × 500 × (50 + 100 + 150) = 1,800,000 solutions obtained at the end for each test problem. All temporary nondominated solutions are found from these 1,800,000 final solutions to create a simulated Pareto front to calculate the GD and SP.
6.4. Numerical Results
All numerical results attained in the experiments are listed in this section.
Table 3,
Table 4 and
Table 5 list the averages and standard deviations of the obtained number of temporary nondominated solutions (denoted by N
n), the obtained number of nondominated solutions in the Pareto front (denoted by N
p), the convergence metric GD, the diversity metric SP, the required run time, the energy consumption values, and the makespan values for the three different-sized benchmark problems with N
sol = 50, 100, and 150, respectively. In these tables, the best result among all the algorithms is indicated in bold, and the proposed tSSO with its nine parameter settings is denoted as tSSO
0–8, respectively, in
Table 3,
Table 4 and
Table 5.
The lower the cr value (i.e., the random-move probability 1 − Cw in Equation (8)), the better the performance. In the small-size problem, the proposed tSSO7 with cp = 0.5 and cw = 0.3 is the best among all 12 algorithms, while the proposed tSSO8 with cp = 0.5 and cw = 0.5 is the best for the medium- and large-size problems. The reason for this is that the number of real nondominated solutions is infinite. Even though, when cr = 0, the update mechanism of the proposed tSSO only exchanges information between the current solution and one of the selected temporary nondominated solutions, it is already able to improve the current solution without any random movement.
The larger the size of the problem, e.g., Njob, the fewer nondominated solutions are obtained, i.e., the smaller Nn and Np. There are two reasons for this: (1) owing to the characteristic of NP-hard problems, the larger the problem, the more difficult it is to solve; (2) it is more difficult to find nondominated solutions for larger problems under the same deadline of 30 used for all problems.
The larger the Nsol, the more likely it is to find more nondominated solutions, i.e., the larger Nn and Np for the best algorithm among these algorithms, regardless of the size of the problem. Hence, using a larger value of Nsol is an effective way to find more nondominated solutions.
The smaller the value of Np, the shorter the run time. The most time-consuming aspect of finding nondominated solutions is filtering them out from the current solutions; hence, a new method, called the group comparison, is proposed in this study for this purpose. Although the proposed group comparison is more efficient than the traditional pairwise comparison on average, it still needs O(Np2) to achieve the goal.
There is no special pattern in the solution qualities, e.g., the values of GD, SP, Nn, and Np, relative to the final average values of the energy consumption and the makespan.
The algorithm that obtains more nondominated solutions also has better GD and SP values.
The MOPSO [
13] and the original MOSSO [
17] share one common factor: each solution must inherit and update based on its predecessor (parent) and its
pBest, and this is the main reason that they are less likely to find new nondominated solutions. The above observation is consistent with that in item 1. Hence, the proposed tSSO and the NSGA-II [
19,
20] are much better than MOSSO [
17] and MOPSO [
13] in solution quality.
In general, the proposed tSSO without cr performs more satisfactorily in all measured aspects.
In addition, histogram graphs of the averages and standard deviations of the two objective values, i.e., the energy consumption and the makespan, for the three different-sized benchmark problems obtained by the proposed tSSO and the compared MOPSO, MOSSO, and NSGA-II algorithms are drawn in
Figure 1,
Figure 2,
Figure 3,
Figure 4,
Figure 5,
Figure 6,
Figure 7,
Figure 8,
Figure 9,
Figure 10,
Figure 11 and
Figure 12 to provide further results and discussion validating the performance of the proposed model and to allow comparison at a glance. The best results among tSSO
0 to tSSO
8 are taken as the solution of tSSO and plotted in
Figure 1,
Figure 2,
Figure 3,
Figure 4,
Figure 5,
Figure 6,
Figure 7,
Figure 8,
Figure 9,
Figure 10,
Figure 11 and
Figure 12.
The average energy consumption for the small-size benchmark obtained by the proposed tSSO is better than those obtained by the compared algorithms, including MOPSO, MOSSO, and NSGA-II, though it is slightly worse than that of MOSSO when Nsol equals 50, as shown in
Figure 1.
The standard deviation of the energy consumption for the small-size benchmark obtained by the MOSSO is the best, as shown in
Figure 2.
The average of the makespan for the small-size benchmark obtained by the proposed tSSO is the best and is superior to MOSSO, as shown in
Figure 3.
The standard deviation of the makespan for the small-size benchmark obtained by the proposed tSSO is the best and is superior to the compared algorithms, including MOPSO, MOSSO, and NSGA-II, as shown in
Figure 4.
The average energy consumption for the medium-size benchmark obtained by the proposed tSSO is better than those obtained by the compared algorithms, including MOPSO, MOSSO, and NSGA-II, though it is slightly worse than that of MOSSO when Nsol equals 50, as shown in
Figure 5.
The standard deviation of the energy consumption for the medium-size benchmark obtained by the MOSSO is the best, as shown in
Figure 6.
The average of the makespan for the medium-size benchmark obtained by the proposed tSSO is the best and is superior to the compared algorithms, including MOPSO, MOSSO, and NSGA-II, as shown in
Figure 7.
The standard deviation of the makespan for the medium-size benchmark obtained by the MOSSO is the best and is superior to the compared algorithms, including the proposed tSSO, MOPSO, and NSGA-II, as shown in
Figure 8.
The average of the energy consumption for the large-size benchmark obtained by the proposed tSSO is better than those obtained by the compared algorithms, including MOPSO, MOSSO, and NSGA-II, as shown in
Figure 9.
The standard deviation of the energy consumption for the large-size benchmark obtained by the MOSSO is the best, as shown in
Figure 10.
The average of the makespan for the large-size benchmark obtained by the proposed tSSO is the best and is superior to the compared algorithms, including MOPSO, MOSSO, and NSGA-II, as shown in
Figure 11.
The standard deviation of the makespan for the large-size benchmark obtained by the proposed tSSO is the best and is superior to the compared algorithms, including MOPSO, MOSSO, and NSGA-II, as shown in
Figure 12.
Overall, the proposed tSSO outperforms the compared algorithms, including MOPSO, MOSSO, and NSGA-II, in all measured aspects.
7. Conclusions
This study sheds light on a nascent two-objective time-constrained job scheduling problem focusing on energy consumption and service quality in terms of the makespan, with the aim of finding nondominated solutions that ameliorate the service quality and address the environmental issues of cloud computing services. In response to this two-objective problem, we proposed a new two-objective simplified swarm optimization (tSSO) algorithm, based on SSO, to revise and improve the errors in the previous MOSSO algorithm [
17], which ignores the fact that, in the multi-objective problem, the number of temporary nondominated solutions is not always only one and that some temporary nondominated solutions may not remain nondominated in the next generation.
To ensure better solution quality, the tSSO algorithm integrates the crowding distance, a hybrid elite selection, and a new stepwise update mechanism, i.e., the proposed tSSO is a population-based, all-variable update, and stepwise function-based method. From the experiments conducted on three different-sized problems [
17], regardless of the parameter setting, each of the proposed tSSOs outperformed the MOPSO [
13], MOSSO [
17], and NSGA-II [
19,
20], in convergence, diversity, the number of obtained temporary nondominated solutions, and the number of obtained real nondominated solutions. Among the nine different parameter settings, we concluded that the tSSO algorithm with
cp =
cw = 0.5 is the best one. The results prove that the proposed tSSO can successfully achieve the aim of this work.
To keep the model tractable for readers, the assumptions in
Section 3.2 are simplified relative to real-life cloud problems. Hence, future work will relax these assumptions to better capture and solve real-life cloud problems. In addition, a sequel work that considers real-life assumptions, with corresponding changes to the algorithms, is planned for the near future.
Moreover, in future works, the proposed model will be tested with more data from different domains, and the results will be compared with recently published studies from top journals and conferences.