Article

Cloud Computing Considering Both Energy and Time Solved by Two-Objective Simplified Swarm Optimization

1 Integration and Collaboration Laboratory, Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 300, Taiwan
2 School of Mechatronical Engineering and Automation, Foshan University, Foshan 528000, China
3 Department of International Logistics and Transportation Management, Kainan University, Taoyuan 33857, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(4), 2077; https://doi.org/10.3390/app13042077
Submission received: 31 October 2022 / Revised: 18 November 2022 / Accepted: 22 November 2022 / Published: 6 February 2023
(This article belongs to the Special Issue Smart Manufacturing Networks for Industry 4.0)

Abstract: Cloud computing is an operation carried out via networks to provide resources and information to end users according to their demands. Job scheduling in cloud computing, which distributes jobs across numerous resources for large-scale calculation and determines the value, accessibility, reliability, and capability of cloud computing, is important because of the rapid development of technology and the many layers of application. This paper extends and revises our previous work, titled "Multi Objective Scheduling in Cloud Computing Using Multi-Objective Simplified Swarm Optimization MOSSO", presented at IEEE CEC 2018. More new algorithms, tests, and comparisons have been implemented to solve the bi-objective time-constrained task scheduling problem more efficiently. The job scheduling problem in cloud computing, with energy consumption and computing time as objectives, is solved by the new algorithm developed in this study. The developed algorithm, named two-objective simplified swarm optimization (tSSO), is based on simplified swarm optimization (SSO) and corrects the errors in the previous MOSSO algorithm, which ignores the facts that the number of temporary nondominated solutions is not always only one in a multi-objective problem and that some temporary nondominated solutions may not remain nondominated in the next generation. The experimental results show that the developed tSSO performs better than the best-known algorithms, including the nondominated sorting genetic algorithm II (NSGA-II), multi-objective particle swarm optimization (MOPSO), and MOSSO, in terms of convergence, diversity, the number of obtained temporary nondominated solutions, and the number of obtained real nondominated solutions. The experiments prove that the developed tSSO accomplishes the objective of this study.

1. Introduction

Cloud computing plays a very important role nowadays: the rapid development of technology and the many layers of application have increased both its use and the demand for it. Its computing tasks involve the delivery of on-demand computing resources, ranging from applications to remote data centers, over the internet on a pay-for-use basis. Therefore, many scholars and practitioners have devoted their efforts to strengthening or innovating the related research. Section 2 summarizes the existing literature and demonstrates how this work differs in its approach.
In essence, the computing tasks of cloud computing, which are distributed across numerous resources for large-scale calculation and resolve the value, accessibility, reliability, and capability of cloud computing, comprise a computing style in which dynamically scalable and often virtualized resources are provided as an Internet service [1]. The service model of computing tasks includes cloud platforms, users, applications, virtual machines, etc. Within each cloud platform, there are multiple platform users, each with the ability to run multiple applications on the platform. Each application corresponds to the jobs requested by a user and uses a certain quota of virtual machines, through which the application finishes and returns the jobs, thus completing the procedure.
Countless discussions and research have been conducted on the two prevailing issues in both grid and cloud computing: resource allocation and job scheduling [2,3,4]. The job scheduling problem revolves around exploring how the provider of cloud computing services assigns the jobs of each client to each processor according to certain rules and regulations to ensure the cost-effectiveness of the job scheduling process. The efficiency and performance of cloud computing services are usually associated with the efficiency of job scheduling, which affects not only the performance efficiency of users’ jobs but also the utilization efficiency of system resources. Hence, the interdependent relationship between the efficiency and performance of cloud computing services and the efficiency of job scheduling necessitates research on the job scheduling problem of the computing tasks of cloud computing.
The job scheduling problem of the computing tasks of cloud computing services assigns jobs to individual processors. It is an NP-hard combinatorial problem, which renders it difficult to obtain the global optimum within polynomial time. The greater the problem size, e.g., the number of resources or jobs, the more difficult the problem is for traditional job scheduling algorithms to solve. Hence, alongside improving traditional algorithms, many scholars have introduced machine learning algorithms, e.g., the Pareto-set cluster genetic algorithm [2] and particle swarm optimization [2], the binary-code genetic algorithm [3] and integer-code particle swarm optimization [3], the simulated annealing algorithm [4], particle swarm optimization (PSO) [5], the genetic algorithm [6], the machine learning algorithm [7], Multi-Objective PSO [8], the artificial bee colony algorithm [9], the hybrid optimization algorithm [10], etc., to solve the job scheduling problem of the computing tasks of cloud computing.
Numerous different objectives have been discussed to measure service performance, e.g., cost, reliability, makespan, and power consumption [1,2,3,4,5,6,7,8,9,10]. However, when conducting research on the job assignment problem, most scholars emphasize the single-objective job scheduling problem and overlook job scheduling problems with more than one objective [11,12,13,14,15,16,17]. By doing so, they fail to acknowledge other goals that may influence the quality of the cloud-computing service. It is thus necessary to balance various aspects when evaluating job scheduling problems [15,16,17]; for example, the objective of minimizing total energy consumption and the objective of minimizing the makespan are conflicting objectives in real-life applications [17].
With the ever-advancing development of cloud computing, more and more data centers have successively been established to run a large number of applications that require considerable computations and storage capacity, but this wastes vast amounts of energy [12,15,16,17]. Reducing power consumption and cutting down energy costs has become a primary concern for today’s data center operators. Thus, striking a balance between reducing energy consumption and maintaining high computation capacity has become a timely and important challenge.
Furthermore, with more media and public attention shifting onto progressively severe environmental issues, governments worldwide have now adopted a stronger environmental protection stance [12,15,16,17]. This puts increasing pressure on enterprises to pursue higher output and also to focus on minimizing power consumption [12,15,16,17].
Stemming from the real-life concerns mentioned above, our previous work considered a two-objective time-constrained job scheduling problem that measures the performance of a certain job schedule plan with two objectives: the quality of the cloud computing service in terms of the makespan and the environmental impact in terms of the energy consumption [12,15,16,17].
There are two different types of algorithms used to solve multi-objective problems. One converts multi-objective problems into single-objective problems through methods such as ε-constraint, LP-metrics, goal programming, etc.; the other solves multi-objective problems based on the concept of Pareto optimality. The latter is the one we adopted, both here and in our previous study [10,11,12,13,14,15,16,17].
Moreover, a new algorithm called multi-objective simplified swarm optimization (MOSSO) was proposed to solve the above problem [17]. However, there are some errors in the MOSSO source code, which ignores two facts of the multi-objective problem: the number of temporary nondominated solutions is not always only one, and some temporary nondominated solutions may not remain nondominated in the next generation. In addition, the performance comparison in that work was limited to MOSSO and MOPSO, both of which included an iterative local search [17].
To rectify our previous source code with new concepts based on Pareto optimality and solve the two-objective time-constrained job scheduling problem, we propose a new algorithm called two-objective simplified swarm optimization (tSSO). It draws on the SSO update mechanism to generate offspring [18], the crowding distance to rank nondominated solutions [19], and a new hybrid elite selection to select parents [20], while the limited number of nondominated solutions, adapted from multi-objective particle swarm optimization (MOPSO), serves to guide the update [13].
The motivation and contribution of this work are highlighted as follows:
  • An improved algorithm named two-objective simplified swarm optimization (tSSO) is developed in this work to correct the errors in the previous MOSSO algorithm [17], which ignores the facts that the number of temporary nondominated solutions is not always only one in the multi-objective problem and that some temporary nondominated solutions may not remain nondominated in the next generation. The algorithm is based on SSO to deliver the job scheduling in cloud computing.
  • More new algorithms, testing, and comparisons have been implemented to solve the bi-objective time-constrained task scheduling problem in a more efficient manner.
  • In the experiments conducted, the proposed tSSO outperforms existing established algorithms, e.g., NSGA-II, MOPSO, and MOSSO, in the convergence, diversity, number of obtained temporary nondominated solutions, and the number of obtained real nondominated solutions.
The remainder of this paper is organized as follows. Related work in the existing literature is discussed in Section 2, which also demonstrates how this work is different in its approach. Section 3 presents notations, assumptions, and the mathematical modeling of the proposed two-objective time-constrained job scheduling problem to address energy consumption and service quality in terms of the makespan. Section 4 introduces the simplified swarm optimization (SSO) [18], the concept of crowding distance [19], and traditional elite selection [20]. The proposed tSSO is presented in Section 5, together with the discussion of its novelties and pseudo code. The section also explains the errors in the previous MOSSO algorithm and how the proposed new algorithm overcomes them. Section 6 compares the proposed tSSO in terms of nine different parameter settings with the MOSSO proposed in [17], the MOPSO proposed in [13], and the famous NSGA-II [19,20]. Three benchmark problems, one small-size, one medium-size, and one large-size, are utilized and analyzed from the viewpoint of convergence, diversity, and number of obtained nondominated solutions in order to demonstrate the performance of the tSSO. Our conclusions are given in Section 7.

2. Literature Review

The related work of the existing literature is provided in this section, which also demonstrates how this work is different in its approach. The job scheduling problem of cloud-computing services is very important, meaning that there is a significant amount of related research on it, which can be classified into the following types.

2.1. A Review of Job Scheduling in Cloud-Computing

Houssein et al. presented a review of numerous meta-heuristics for solving job scheduling in cloud-computing [21], and a literature review of the related methods used for solving job scheduling in cloud-computing has been conducted by Arunarani et al. [22] and Kumar et al. [23].

2.2. Algorithm Research for Solving the Single-Objective Problem for Job Scheduling in Cloud-Computing

Chen et al. proposed an improved meta-heuristic whale optimization algorithm to solve resource utilization for job scheduling in cloud-computing [24]. Attiya et al. studied improved simulated annealing (SA) to optimize the makespan for job scheduling in cloud-computing [25]. Gąsior and Seredyński worked on a distributed security problem for job scheduling in cloud computing by a method based on the game theoretic approach [26]. Mansouri and Javidi resolved response time using a cost-based job scheduling method [27]. A deep reinforcement learning (DRL) algorithm was proposed by Cheng et al. [28] to reduce the cost of job scheduling in cloud computing.

2.3. Algorithm Research for Solving the Multi-Objective Problem for Job Scheduling in Cloud-Computing

Processing time and cost of job scheduling in cloud computing were optimized by Shukri et al. [29] using an improved multi-verse optimizer. Jacob and Pradeep minimized the multi-objective problem, including makespan, cost, and deadline violation rate, using a hybrid algorithm of cuckoo search (CS) and particle swarm optimization (PSO) [30]. Abualigah and Diabat optimized the makespan and resource utilization using an improved algorithm based on the elite-based differential evolution method [31]. Sanaj and Prathap proposed a chaotic squirrel search algorithm (CSSA) to solve the makespan and cost issues [32], and Abualigah and Alkhrabsheh optimized the cost and service availability in the multi-objective problem [33].
This work aims to optimize a two-objective problem that includes both energy and makespan, and its objectives differ from the multi-objective content of the existing literature shown in the third part of the literature review [29,30,31,32,33]. Furthermore, this work proposes an improved algorithm to solve the two-objective problem for job scheduling in cloud computing, which is obviously different from the first and second parts of the existing literature above [21,22,23,24,25,26,27,28].

3. Assumptions and Mathematical Problem Description

The formal mathematical model of the two-objective constrained job scheduling problem of cloud-computing services [12,15,16,17] is described as follows, together with the assumptions used in the problem, as well as the algorithms.

3.1. Assumptions

In cloud-computing services, all requests from cloud-computing users are collected and subsequently divided into multiple jobs, which are then executed by data centers with multiple processors [2,3,4,10,11,12,15,16,17]. To simplify our model without forgoing generality, the following assumptions are utilized so that the two-objective constrained job scheduling problem focuses on the energy consumption and makespan of the cloud-computing service [2,3,4,10,11,12,15,16,17].
  • All jobs are available, independent, and equal in importance, with two attributes, sizei and timei, for i = 1, 2, …, Nvar, and are ready for processing simultaneously at time zero.
  • Each processor is available at any time with two attributes, denoted as speedj and powerj, j = 1, 2, …, Ncpu, and cannot process two or more jobs simultaneously.
  • All processing times include set-up time.
  • Job pre-emption and the splitting of jobs are not permitted.
  • There is an infinite buffer between the processors.

3.2. The Mathematical Model

A feasible job scheduling solution is a plan that assigns jobs to processors in a sequence such that the makespan, i.e., the time at which the final job is completed, does not exceed a predefined time limit. Let solution X = (x1, x2, …, xNvar) be a feasible job scheduling plan and xi be the type of processor assigned to job i for i = 1, 2, …, Nvar.
To maintain efficiency and ensure environmental protection, the two-objective constrained job scheduling problem considered here is to assign Nvar jobs to Ncpu processors to minimize both the energy consumption and the makespan so that the makespan is not overdue [17]:
\min F_e(X) = \sum_{i=1}^{N_{var}} t_{i,x_i} \cdot e_{x_i}    (1)
\min F_m(X) = \max_{j} \sum_{i=1}^{N_{var}} t_{i,j}    (2)
\text{s.t. } F_m(X) \le T_{ub}.    (3)
Equation (1) is the total energy consumption of a job scheduling plan; it is the sum of the energy usage of all jobs assigned to processors based on X. Note that the energy consumption of each job is the product of the power cost per unit time and the running time, which equals the size of the job divided by the execution speed, as shown in Equation (4).
t_{i,j} = \begin{cases} size_i / speed_j & \text{if job } i \text{ is processed on processor } j \\ 0 & \text{otherwise.} \end{cases}    (4)
Equation (2) is the makespan, which is the time the last job is finished. Equation (3) is the time constraint, such that each job must be finished before the deadline Tub.
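As a concrete illustration, the two objectives and Equation (4) can be sketched in Python. This is a minimal sketch; the job sizes, processor speeds, and powers below are hypothetical toy values, not data from the paper's benchmarks.

```python
def energy(X, size, speed, power):
    # Eq. (1): total energy = sum over jobs of (running time * processor power),
    # where running time t[i] = size[i] / speed[X[i]] as in Eq. (4)
    return sum(size[i] / speed[X[i]] * power[X[i]] for i in range(len(X)))

def makespan(X, size, speed, n_cpu):
    # Eq. (2): the total running time of the busiest processor
    load = [0.0] * n_cpu
    for i, p in enumerate(X):
        load[p] += size[i] / speed[p]
    return max(load)

# Toy example: 4 jobs on 2 processors, X[i] is the (0-based) processor of job i
size = [6000, 9000, 5000, 12000]   # MI
speed = [1000, 2000]               # MIPS
power = [0.3, 1.2]                 # kW per unit time
X = [0, 1, 0, 1]
e = energy(X, size, speed, power)      # total energy of plan X
m = makespan(X, size, speed, 2)        # time at which the last job finishes
```

A feasibility check against Equation (3) is then simply `m <= T_ub` for the chosen deadline.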

4. SSO, Crowding Distance, and Elite Selection

The proposed tSSO, based on SSO [18], is used to update solutions from generation to generation intelligently; the crowding distance [19] is used to rank temporary nondominated solutions systematically, and the elite selection [34] is used to select solutions to act as parents. Hence, SSO, crowding distance, and elite selection are introduced briefly in this section.

4.1. Simplified Swarm Optimization

SSO, developed by Yeh in 2009 [18], is a simple but powerful machine learning algorithm that is a hybrid of leader-solution swarm intelligence and population-based evolutionary computation.
In traditional SSO, all variables need to be updated (called the all-variable update in SSO) such that the jth variable of the ith solution (i.e., xi,j) is obtained from either the jth variable of PgBest (i.e., pgBest,j) with probability cg, the jth variable of Pi (i.e., pi,j) with probability cp, its current value (i.e., xi,j) with probability cw, or a randomly generated feasible new value (say x) with probability cr, where PgBest is the best solution among all existing solutions, Pi is the best ith solution in its evolutionary history, and cg + cp + cw + cr = 1.
The above update mechanism of SSO is very simple, efficient, and flexible [17,18,35,36,37,38,39,40], and can be presented as a stepwise-function update:
x_{i,j} = \begin{cases} p_{gBest,j} & \text{if } \rho_{[0,1]} \in [0, C_g) \\ p_{i,j} & \text{if } \rho_{[0,1]} \in [C_g, C_p) \\ x_{i,j} & \text{if } \rho_{[0,1]} \in [C_p, C_w) \\ x & \text{if } \rho_{[0,1]} \in [C_w, 1], \end{cases}    (5)
where Cg = cg, Cp = Cg + cp, and Cw = Cp + cw.
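The stepwise-function update of Equation (5) can be sketched in Python as follows. This is an illustrative sketch, not the authors' implementation; the function and argument names are chosen here for clarity.

```python
import random

def sso_update(x_i, p_i, p_gbest, Cg, Cp, Cw, feasible_values):
    """All-variable SSO update (Eq. (5)): each variable is copied from gBest,
    pBest, its current value, or a random feasible value, depending on where
    a single uniform draw rho falls among the cut points Cg <= Cp <= Cw."""
    new = []
    for j in range(len(x_i)):
        rho = random.random()
        if rho < Cg:
            new.append(p_gbest[j])      # follow the global best
        elif rho < Cp:
            new.append(p_i[j])          # follow this solution's personal best
        elif rho < Cw:
            new.append(x_i[j])          # keep the current value
        else:
            new.append(random.choice(feasible_values))  # random restart
    return new
```

Since every variable needs only one random draw and at most one copy, the update is linear in the number of variables, which is the source of SSO's efficiency.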
The SSO pseudo code is provided below [17,18,35,36,37,38,39]:
STEP S0.
Generate Xi randomly, find gBest, and let t = 1, k = 1, and Pi = Xi for i = 1, 2, …, Nsol.
STEP S1.
Update Xk.
STEP S2.
If F(Xk) is better than F(Pk), let Pk = Xk. Otherwise, go to STEP S4.
STEP S3.
If F(Pk) is better than F(PgBest), let gBest = k.
STEP S4.
If k < Nsol, let k = k + 1 and go to STEP S1.
STEP S5.
If t < Ngen, let t = t + 1, k = 1, and go to STEP S1. Otherwise, halt.
The stepwise-function update mechanism is very simple and efficient, but it is also very powerful and has various successful applications, e.g., the redundancy allocation problem [35,36], flexible grid trading [37], the disassembly sequencing problem [38], artificial neural networks [39], power systems [40], energy problems [41,42], and various other problems [17,18,43,44,45,46,47,48,49,50,51,52]. Moreover, the stepwise-function update mechanism is easier to customize by replacing any item of its stepwise function either with other algorithms [36,39] or hybrid algorithms [41], in sequence or in parallel [42].

4.2. Crowding Distance

Let
d_{l,i} = \min_{j \ne i} \left\{ \frac{|F_l(X_i) - F_l(X_j)|}{\max(F_l) - \min(F_l)} \right\}    (6)
be the shortest normalized Euclidean distance between the ith temporary nondominated solution and any other temporary nondominated solutions based on the lth objective function, where Min(Fl) is the minimal value of the lth objective function and Max(Fl) is the maximal value of the lth objective function.
The crowding distance combines the individual distances and can be calculated as follows [19]:
c_i = \sqrt{d_{1,i}^2 + d_{2,i}^2},    (7)
for all temporary nondominated solutions Xi. The crowding distance is only used to rank temporary nondominated solutions for selection as parents if their total number exceeds a limit, which is defined as Nnon, with Nnon = Nsol in this study.
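A sketch of the distance measure of Equations (6) and (7) in Python follows; note that this is the variant defined in the text (the nearest normalized objective gap per objective, combined as a Euclidean norm), written here as an illustrative two-objective helper.

```python
import math

def crowding_distances(F):
    """Crowding measure of Eqs. (6)-(7): for each solution i and objective l,
    d[l][i] is the smallest normalized objective-value gap between solution i
    and any other solution; the crowding distance is the Euclidean norm of the
    two per-objective gaps. F is a list of (f1, f2) objective pairs."""
    n = len(F)
    dists = []
    for l in range(2):
        vals = [f[l] for f in F]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0   # guard against a degenerate zero range
        d_l = [min(abs(vals[i] - vals[j]) / span
                   for j in range(n) if j != i)
               for i in range(n)]
        dists.append(d_l)
    return [math.sqrt(dists[0][i] ** 2 + dists[1][i] ** 2) for i in range(n)]
```

Solutions in sparse regions of the objective space receive larger crowding distances and are therefore preferred when the front must be truncated.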

4.3. The Elite Selection

Among numerous different selection policies, the elite selection is the simplest and is thus adopted in the proposed tSSO. The elite selection chooses the best solutions in the current generation to generate and update solutions for the next generation [34]. For example, let Nsol = 50 and X1, X2, …, X100 be the solutions required for selection. Elite selection ranks and selects the best 50 solutions among X1, X2, …, X100 and renumbers these selected solutions as X1, X2, …, X50.
In the tSSO, these best solutions are temporary nondominated solutions. Due to the fact that the number of temporary nondominated solutions may be less than or larger than the number of parents, a new elite selection called the hybrid elite selection is developed and used in the proposed tSSO, and the details are discussed in Section 5.3.

5. Proposed Algorithm

The proposed tSSO is a population-based, all-variable-update, stepwise-function-based method, i.e., there is a fixed number of solutions in each generation, and all variables of each solution must be updated based on the stepwise function. Details of the proposed tSSO are discussed in this section.

5.1. Purpose of tSSO

The developed tSSO algorithm corrects the errors in the previous MOSSO algorithm, which ignores the facts that the number of temporary nondominated solutions in the multi-objective problem is not always only one and that some temporary nondominated solutions may not remain nondominated in the next generation.
The developed tSSO algorithm overcomes the errors in the previous MOSSO algorithm by the following methods:
  • A novel update mechanism for the roles of gBest and pBest in the proposed tSSO, which is introduced in Section 5.3.
  • The hybrid elite selection in the proposed tSSO, which is introduced in Section 5.4.

5.2. Solution Structure

The first step of most machine learning algorithms is to define the solution structure [2,3,4,35,36,37,38,39,40,41,42,43,44,45]. The solution in the tSSO for the proposed problem is defined as a vector, and the value of the ith coordinate is the processor utilized to process the ith job. For example, let X5 = (3, 4, 6, 4) be the 5th solution in the 10th generation of the problem with 4 jobs. In X5, jobs 1, 2, 3, and 4 are processed by processors 3, 4, 6, and 4, respectively.

5.3. Novel Update Mechanism

The second step in developing machine learning algorithms is to create an update mechanism to update solutions [2,3,4,35,36,37,38,39,40,41,42]. The stepwise update function is a unique update mechanism of SSO [17,18,35,36,37,38,39,40,41,42]. In the single-objective problem, there is only one gBest for the traditional SSO. However, in the multi-objective problem, the number of temporary nondominated solutions is not always only one, and some temporary nondominated solutions may not be temporary nondominated solutions in the next generation [17]. Note that a nondominated solution is not dominated by any other solution, while a temporary nondominated solution is a solution nondominated by other found solutions that are temporary in the current generation, and it may be dominated by other updated solutions later [13,17,19].
Hence, in the proposed tSSO, the role of gBest is removed completely and the pBest used in the tSSO is not the original definition in the SSO. The pBest for each solution in the proposed tSSO is selected from temporary nondominated solutions, i.e., there is no need to follow its previous best predecessor. The stepwise update function used in the proposed tSSO is listed below for multi-objective problems for each solution Xi, with i = 1, 2, …, Nsol:
x_{i,j} = \begin{cases} x_j^* & \text{if } \rho_{[0,1]} \in [0, C_p) \\ x_{i,j} & \text{if } \rho_{[0,1]} \in [C_p, C_w) \\ x & \text{otherwise,} \end{cases}    (8)
where X* = (x1*, x2*, …, xNvar*) is one of the temporary nondominated solutions selected randomly, ρ[0,1] is a random number generated uniformly in [0, 1], and x is a random processor index generated from 1, 2, …, Ncpu.
For example, let X6 = (1, 2, 3, 2, 4) and X* = (2, 1, 4, 3, 3) be a temporary nondominated solution selected randomly. Let Cp = 0.50, Cw = 0.95, and ρ = (ρ1, ρ2, ρ3, ρ4, ρ5) = (0.32, 0.75, 0.47, 0.99, 0.23). The procedure to update X6 based on the stepwise function provided in Equation (8) is demonstrated in Table 1.
From the above example, the simplicity, convenience, and efficiency of the SSO can also be found in the update mechanism of the proposed tSSO.
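The tSSO update of Equation (8) can be sketched in Python as follows. This is an illustrative sketch; the optional `rhos` argument is an addition of ours so that the worked example above can be reproduced deterministically, and processor indices are assumed to run from 1 to n_cpu.

```python
import random

def tsso_update(x_i, x_star, Cp, Cw, n_cpu, rhos=None):
    """tSSO all-variable update (Eq. (8)): each variable is copied from a
    randomly chosen temporary nondominated solution x_star with probability Cp,
    keeps its current value with probability Cw - Cp, or becomes a random
    processor index otherwise. `rhos` optionally fixes the random draws."""
    if rhos is None:
        rhos = [random.random() for _ in x_i]
    new = []
    for j, rho in enumerate(rhos):
        if rho < Cp:
            new.append(x_star[j])            # follow the nondominated guide
        elif rho < Cw:
            new.append(x_i[j])               # keep the current value
        else:
            new.append(random.randint(1, n_cpu))  # random processor
    return new

# The worked example above: with rho = (0.32, 0.75, 0.47, 0.99, 0.23),
# variables 1, 3, and 5 are taken from X*, variable 2 is kept,
# and variable 4 is randomized.
out = tsso_update([1, 2, 3, 2, 4], [2, 1, 4, 3, 3],
                  Cp=0.50, Cw=0.95, n_cpu=4,
                  rhos=[0.32, 0.75, 0.47, 0.99, 0.23])
```

Note that, unlike Equation (5), no gBest or pBest term appears: the guide X* is drawn fresh from the temporary nondominated set for each solution.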

5.4. Hybrid Elite Selection

The last step is to determine the selection policy to decide which solutions, i.e., parents, are selected to generate solutions in the next generation. In the proposed tSSO, hybrid selection is harnessed.
Let πt be a set that stores the temporary nondominated solutions found in the tth generation. It is impractical to keep all temporary nondominated solutions because their number can be very large, and because temporary nondominated solutions may not be real nondominated solutions. The value of |πt| is therefore limited to at most Nsol, and some temporary nondominated solutions are abandoned to enforce this limit.
If Nsol ≤ |πt|, the crowding distances need to be calculated for each temporary nondominated solution, and only the best Nsol solutions will be selected from πt to be parents. However, all temporary nondominated solutions in πt are used to serve as parents and (Nsol − |πt|) solutions are selected randomly from offspring if Nsol > |πt|. As discussed above, there are always Nsol parents for each generation, and these temporary nondominated solutions are usually chosen first to serve their role as parents.
Note that these temporary nondominated solutions are abandoned if they are not selected as parents.
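The hybrid elite selection described above can be sketched in Python. This is an illustrative sketch; the `crowding` argument is assumed to be a precomputed map from solution to crowding distance (e.g., from Eq. (7)), and solutions are represented as lists.

```python
import random

def hybrid_elite_select(pi_t, pool, n_sol, crowding):
    """Hybrid elite selection (Section 5.4): if there are at least n_sol
    temporary nondominated solutions, keep the n_sol with the largest
    crowding distances; otherwise keep all of them and fill the remaining
    parent slots with random solutions from the rest of the pool."""
    if len(pi_t) >= n_sol:
        ranked = sorted(pi_t, key=lambda s: crowding[tuple(s)], reverse=True)
        return ranked[:n_sol]
    rest = [s for s in pool if s not in pi_t]
    return pi_t + random.sample(rest, n_sol - len(pi_t))
```

Either way, exactly n_sol parents are produced, with the temporary nondominated solutions always chosen first.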

5.5. Group Comparison

Referring to Section 4.3, temporary nondominated solutions play a paramount role in most multi-objective algorithms. To date, the most popular method to find them is the pairwise comparison, which takes O(N²) [13,17,19] in each generation, where N is the number of solutions from which the temporary nondominated solutions are determined. Hence, the corresponding time complexity of both the tSSO and NSGA-II is O(4·Nsol²). Therefore, the most time-consuming aspect of solving multi-objective problems using machine learning algorithms is the search for all temporary nondominated solutions among all offspring.
In the proposed tSSO, the parents are selected from the temporary nondominated solutions found in the previous generation and the offspring generated in the current generation. To reduce the computational burden, a new method called group comparison is proposed in tSSO. All temporary nondominated solutions among the offspring are found first using the pairwise comparison, which takes O(Nsol²) since the number of offspring is Nsol. The temporary nondominated solutions obtained in the previous generation are then compared with the new temporary nondominated solutions found in the current generation. The number of solutions in each of the two sets is at most Nsol, i.e., this step takes O(Nsol·log(Nsol)) based on the merge sort.
Hence, the time complexity is O(Nsol·log(Nsol)) + O(Nsol²) = O(Nsol²), which is only one quarter of that of the pairwise comparison over all 2·Nsol solutions.
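The idea of the group comparison can be sketched in Python. This is an illustrative sketch; for brevity, the final merge step below reuses the O(n²) pairwise filter on the two (already small) sets rather than the sorted merge described in the text, so it shows the structure of the method, not its exact complexity.

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective and better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    # Pairwise comparison within one group of objective-value tuples
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def group_compare(prev_front, offspring):
    """Group comparison (Section 5.5): filter the offspring on their own first,
    then merge the surviving offspring with the previous generation's front,
    so the expensive quadratic step runs on Nsol points instead of 2*Nsol."""
    new_front = nondominated(offspring)
    return nondominated(prev_front + new_front)
```

Offspring dominated within their own group can never be temporary nondominated solutions of the merged pool, which is why the two-stage filtering is safe.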

5.6. Proposed tSSO

The procedure of the proposed tSSO is based on the solution structure, update mechanism, hybrid elite selection, and group comparison discussed in this section and is presented in pseudo code as follows. Its nine different parameter settings, denoted tSSO0, tSSO1, tSSO2, tSSO3, tSSO4, tSSO5, tSSO6, tSSO7, and tSSO8, are introduced in detail in Section 6.
PROCEDURE tSSO
STEP 0.
Create initial population Xi randomly for i = 1, 2, …, Nsol and let t = 2.
STEP 1.
Let πt be all temporary nondominated solutions in S* = {Xk | for k = 1, 2, …, 2Nsol} and XNsol+k is updated from Xk based on Equation (8) for k = 1, 2, …, Nsol.
STEP 2.
If Nsol ≤ |πt|, let S = {top Nsol solutions in πt based on crowding distances} and go to STEP 4.
STEP 3.
Let S = πt ∪ {Nsol − |πt| solutions selected randomly from S* − πt}.
STEP 4.
Re-index these solutions in S such that S = {Xk|for k = 1, 2, …, Nsol}.
STEP 5.
If t < Ngen, then let t = t + 1 and go back to STEP 1. Otherwise, halt.
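Putting the pieces together, STEPs 0 to 5 can be sketched as a minimal, self-contained Python loop. This is an illustrative sketch under simplifying assumptions: the instance data are toy values, crowding-distance truncation is replaced by random truncation, the time constraint of Equation (3) is omitted, and all names are chosen here, not taken from the authors' code.

```python
import random

def objectives(X, size, speed, power):
    # (energy, makespan) of schedule X; X[i] is the (0-based) processor of job i
    e = sum(size[i] / speed[X[i]] * power[X[i]] for i in range(len(X)))
    load = [0.0] * len(speed)
    for i, p in enumerate(X):
        load[p] += size[i] / speed[p]
    return e, max(load)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def front_of(pool, fit):
    # Temporary nondominated solutions of a pool (pairwise comparison)
    fs = [fit(X) for X in pool]
    return [pool[i] for i in range(len(pool))
            if not any(dominates(fs[j], fs[i]) for j in range(len(pool)) if j != i)]

def tsso(size, speed, power, n_sol=20, n_gen=30, Cp=0.5, Cw=0.9, seed=1):
    rng = random.Random(seed)
    n_var, n_cpu = len(size), len(speed)
    fit = lambda X: objectives(X, size, speed, power)
    # STEP 0: random initial population
    pop = [[rng.randrange(n_cpu) for _ in range(n_var)] for _ in range(n_sol)]
    pi = front_of(pop, fit)
    for _ in range(n_gen):
        # STEP 1: offspring via the stepwise update of Eq. (8), guided by pi
        offspring = []
        for X in pop:
            x_star = rng.choice(pi)
            child = []
            for j in range(n_var):
                rho = rng.random()
                if rho < Cp:
                    child.append(x_star[j])
                elif rho < Cw:
                    child.append(X[j])
                else:
                    child.append(rng.randrange(n_cpu))
            offspring.append(child)
        pool = pop + offspring
        pi = front_of(pool, fit)
        # STEPs 2-4: elite selection (random truncation stands in for the
        # crowding-distance ranking of the full algorithm)
        if len(pi) >= n_sol:
            pop = rng.sample(pi, n_sol)
        else:
            pop = pi + [rng.choice(pool) for _ in range(n_sol - len(pi))]
    return pi  # final set of temporary nondominated solutions

front = tsso([6, 9, 5, 12, 7, 8], [1.0, 2.0, 3.0], [0.3, 1.2, 3.0],
             n_sol=10, n_gen=10)
```

On this toy instance, the returned front trades low-energy schedules (slow, frugal processors) against low-makespan schedules (fast, power-hungry processors).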

6. Simulation Results and Discussion

For a fair comparison with MOSSO [17], our previous work on this extended topic, numerous experiments of the parameter-setting procedures on the three different-sized benchmark problems used in [17] were carried out with tSSO. The experimental results obtained by the proposed tSSO were compared with those obtained using MOSSO [17], MOPSO [13], and NSGA-II [19,20].

6.1. Parameter-Settings

All machine learning algorithms have parameters in their update and/or selection procedures. Thus, the parameters need to be tuned for better results. To determine the best parameters of the proposed tSSO, three different levels of the two factors cp and cw (a low value of 0.1, a middle value of 0.3, and a high value of 0.5) were combined into nine different parameter settings, denoted tSSO0, tSSO1, tSSO2, tSSO3, tSSO4, tSSO5, tSSO6, tSSO7, and tSSO8, as shown in Table 2.
Note that Cp = cp and Cw = cp + cw. The following provides the parameter settings of the other algorithms:
NSGA-II: ccrossover = 0.7, cmutation = 0.3
MOPSO: w = 0.871111, c1 = 1.496180, c2 = 1.496180 [17]
MOSSO: Cg = 0.1 + 0.3t/Ngen, Cp = 0.3 + 0.4t/Ngen, Cw = 0.4 + 0.5t/Ngen, where t is the current generation number [17].

6.2. Experimental Environments

To demonstrate the performance of the proposed tSSO and select the best parameter settings, tSSOs with nine parameter settings were utilized for three job scheduling benchmarks [17], namely (Njob, Ncpu) = (20, 5), (50, 10), and (100, 20), and the deadlines were all set as 30 for each test [17].
The following lists the special characteristics of the processor speeds (in MIPS), the energy consumption (in kW per unit time), and the job sizes (in MI) in these three benchmark problems [17]:
  • Each processor speed is generated between 1000 and 10,000 (MIPs) randomly, and the largest speed is ten times the smallest one.
  • The power consumption grows polynomially as the processor speed grows, and its value ranges between 0.3 and 30 kW per unit time.
  • The values of job sizes are between 5000 and 15,000 (MIs).
Alongside the proposed tSSO with its nine parameter settings, three other multi-objective algorithms, MOPSO [13,17], MOSSO [17], and NSGA-II [19,20], were tested and compared to further validate the superiority of the proposed tSSO. NSGA-II [19,20] is one of the best-known multi-objective algorithms and is based on genetic algorithms, while MOPSO is based on particle swarm optimization [13] and MOSSO on SSO [17].
The proposed tSSO with its nine parameter settings and NSGA-II have no iterative local search to improve the solution quality. To ensure a fair comparison, the iterative local search is removed from both MOPSO and MOSSO. All algorithms were coded in DEV C++ on a 64-bit Windows 10 PC, implemented on an Intel Core i7-6650U CPU @ 2.20 GHz notebook with 16 GB of memory.
In addition, for a fair comparison between all algorithms, Nsol = Nnon = 50, Ngen = 1000, and Nrun = 500, i.e., the same solution number, generation number, size of external repository, and run number for each benchmark problem were used. Furthermore, the calculation number of the fitness function of all algorithms was limited to Nsol × Ngen = 50,000, 100,000, and 150,000 in each run for the small-size, medium-size, and large-size benchmarks, respectively.
Note that the reason for the large value of Nrun is to simulate the Pareto front, and the details are discussed in Section 6.3.

6.3. Performance Metrics

Convergence metrics and diversity metrics are both used to evaluate the solution quality of multi-objective algorithms. Among these metrics, the generational distance (GD), introduced by Van Veldhuizen et al. [53], and the spacing (SP), introduced by Schott [54], are two general indexes for convergence and diversity, respectively. Let di be the shortest Euclidean distance between the ith temporary nondominated solution and the Pareto front, and let d̄ be the average of all di. The GD and SP are defined below:
GD = \sqrt{\sum_{i=1}^{N_{sol}} d_i^2} \Big/ N_{sol},
SP = \sqrt{\frac{1}{N_{sol} - 1} \sum_{i=1}^{N_{sol}} \left( \bar{d} - d_i \right)^2}.
The GD is the square root of the sum of the squared di divided by Nsol, and the SP is very similar to the standard deviation in probability theory and statistics. If all temporary nondominated solutions are real nondominated solutions, GD = 0; if all solutions are equally far from d̄, SP = 0. Hence, in general, the smaller the SP is, the higher the diversity of solutions along the Pareto front and the better the solution quality becomes [13,17,19,46,47].
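Both metrics can be computed directly from the distances di; a small sketch (the function name is ours) follows the two formulas above:

```python
import math

def gd_sp(dists):
    """Compute GD and SP from the list of shortest Euclidean distances d_i
    between each temporary nondominated solution and the (simulated) Pareto
    front. Assumes at least two distances so that SP's (n - 1) divisor works."""
    n = len(dists)
    gd = math.sqrt(sum(d * d for d in dists)) / n
    d_bar = sum(dists) / n
    sp = math.sqrt(sum((d_bar - d) ** 2 for d in dists) / (n - 1))
    return gd, sp
```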
Both the GD and SP require the Pareto front in their calculation [43,44]. Unfortunately, forming the true Pareto front requires infinitely many nondominated solutions, which is impossible even with exhaustive methods: the job scheduling problem of cloud computing is NP-hard, there is no guarantee of obtaining the global optimum in polynomial time, and traditional algorithms are intractable, so a special optimization algorithm must be used to reduce the search complexity and improve the overall search quality [12,15,16,17]. Rather than the real Pareto front, a simulated Pareto front is therefore used, constructed by collecting all temporary nondominated solutions in the final generation from all algorithms with different values of Nsol for the same problem.
There are 12 algorithms and three values of Nsol = 50, 100, and 150, each run Nrun = 500 times, i.e., 12 × 500 × (50 + 100 + 150) = 1,800,000 final solutions for each test problem. All temporary nondominated solutions are extracted from these 1,800,000 final solutions to create a simulated Pareto front for calculating the GD and SP.
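The pooling step can be sketched as a nondominated filter over the collected (energy, makespan) pairs. This simple O(n²) pairwise version is illustrative only; it is not the group-comparison method proposed in this study:

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (both objectives are minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def simulated_pareto_front(pooled):
    """Filter the pooled final (energy, makespan) pairs down to the
    nondominated ones, forming the simulated Pareto front."""
    return [p for p in pooled if not any(dominates(q, p) for q in pooled)]
```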

6.4. Numerical Results

All numerical results attained in the experiments are listed in this section. Table 3, Table 4 and Table 5 list the averages and standard deviations of the number of obtained temporary nondominated solutions (denoted Nn), the number of obtained nondominated solutions in the Pareto front (denoted Np), the convergence metric GD, the diversity metric SP, the required run time, the energy consumption values, and the makespan values for the three different-sized benchmark problems with Nsol = 50, 100, and 150, respectively. In these tables, the best value among all the algorithms is indicated in bold, and the proposed tSSO with its nine parameter settings is denoted tSSO0–8 in Table 3, Table 4 and Table 5.
From Table 3, Table 4 and Table 5, we can make the following general observations:
  • The lower the cr value, the better the performance. In the small-size problem, the proposed tSSO7 with cp = 0.5 and cw = 0.3 is the best among all 12 algorithms, whereas the proposed tSSO8 with cp = 0.5 and cw = 0.5 is the best for the medium- and large-size problems. The reason is that the number of real nondominated solutions is infinite; even though the update mechanism of the proposed tSSO with cr = 0 only exchanges information between the current solution itself and one of the selected temporary nondominated solutions, it is already able to update the current solution to a better one without any random movement.
  • The larger the size of the problem, i.e., Njob, the fewer nondominated solutions (Nn and Np) are obtained. There are two reasons for this: (1) owing to the characteristic of NP-hard problems, the larger the instance, the more difficult it is to solve; (2) it is harder to find nondominated solutions for larger problems under the same deadline of 30.
  • The larger the Nsol, the more nondominated solutions are likely to be found, i.e., the larger Nn and Np for the best algorithm among these algorithms, regardless of the size of the problem. Hence, increasing Nsol is an effective way to find more nondominated solutions.
  • The smaller the value of Np, the shorter the run time. The most time-consuming part of finding nondominated solutions is filtering them out of the current solutions. Hence, a new method, called group comparison, is proposed in this study to find the nondominated solutions among the current solutions. However, even though the proposed group comparison is more efficient than the traditional pairwise comparison on average, it still needs O(Np2) time to achieve the goal.
  • There is no clear pattern relating the solution-quality measures, i.e., the values of GD, SP, Nn, and Np, to the final average values of the energy consumption and the makespan.
  • The algorithm with the larger number of obtained nondominated solutions also has better GD and SP values.
  • MOPSO [13] and the original MOSSO [17] share one common factor: each solution must inherit from and update based on its predecessor (parent) and its pBest, which is the main reason that they are less likely to find new nondominated solutions. This observation is consistent with that in item 1. Hence, the proposed tSSO and NSGA-II [19,20] are much better than MOSSO [17] and MOPSO [13] in solution quality.
In general, the proposed tSSO without cr has the most satisfying performance in all aspects of the measures.
In addition, histograms of the averages and standard deviations of the two objective values, i.e., the energy consumption and the makespan, for the three different-sized benchmark problems obtained by the proposed tSSO and the compared MOPSO, MOSSO, and NSGA-II algorithms are drawn in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 to provide a comparison at a glance. The best results among tSSO0 to tSSO8 are taken as the solution of tSSO and plotted in these figures.
The average energy consumption for the small-size benchmark obtained by the proposed tSSO is better than those obtained by the compared algorithms, including MOPSO, MOSSO, and NSGA-II, though it is slightly worse than that of MOSSO when Nsol equals 50, as shown in Figure 1.
The standard deviation of the energy consumption for the small-size benchmark obtained by MOSSO is the best, as shown in Figure 2.
The average makespan for the small-size benchmark obtained by the proposed tSSO is the best and is superior to that of MOSSO, as shown in Figure 3.
The standard deviation of the makespan for the small-size benchmark obtained by the proposed tSSO is the best and is superior to those of the compared algorithms, including MOPSO, MOSSO, and NSGA-II, as shown in Figure 4.
The average energy consumption for the medium-size benchmark obtained by the proposed tSSO is better than those obtained by the compared algorithms, including MOPSO, MOSSO, and NSGA-II, though it is slightly worse than that of MOSSO when Nsol equals 50, as shown in Figure 5.
The standard deviation of the energy consumption for the medium-size benchmark obtained by MOSSO is the best, as shown in Figure 6.
The average makespan for the medium-size benchmark obtained by the proposed tSSO is the best and is superior to those of the compared algorithms, including MOPSO, MOSSO, and NSGA-II, as shown in Figure 7.
The standard deviation of the makespan for the medium-size benchmark obtained by MOSSO is the best and is superior to those of the other algorithms, including the proposed tSSO, MOPSO, and NSGA-II, as shown in Figure 8.
The average energy consumption for the large-size benchmark obtained by the proposed tSSO is better than those obtained by the compared algorithms, including MOPSO, MOSSO, and NSGA-II, as shown in Figure 9.
The standard deviation of the energy consumption for the large-size benchmark obtained by MOSSO is the best, as shown in Figure 10.
The average makespan for the large-size benchmark obtained by the proposed tSSO is the best and is superior to those of the compared algorithms, including MOPSO, MOSSO, and NSGA-II, as shown in Figure 11.
The standard deviation of the makespan for the large-size benchmark obtained by the proposed tSSO is the best and is superior to those of the compared algorithms, including MOPSO, MOSSO, and NSGA-II, as shown in Figure 12.
Overall, the proposed tSSO outperforms the compared algorithms, including MOPSO, MOSSO, and NSGA-II, in all aspects of the measures.

7. Conclusions

This study sheds light on a nascent two-objective time-constrained job scheduling problem that considers energy consumption and service quality in terms of the makespan, with the aim of finding nondominated solutions to improve the service quality and address the environmental issues of cloud computing services. For this two-objective problem, we proposed a new two-objective simplified swarm optimization (tSSO) algorithm based on SSO to revise and improve the errors in the previous MOSSO algorithm [17], which ignores the facts that the number of temporary nondominated solutions is not always one in a multi-objective problem and that some temporary nondominated solutions may no longer be nondominated in the next generation.
To ensure better solution quality, the tSSO algorithm integrates the crowding distance, a hybrid elite selection, and a new stepwise update mechanism, i.e., the proposed tSSO is a population-based, all-variable-update, stepwise-function-based method. In the experiments conducted on three different-sized problems [17], regardless of the parameter setting, each of the proposed tSSOs outperformed MOPSO [13], MOSSO [17], and NSGA-II [19,20] in convergence, diversity, the number of obtained temporary nondominated solutions, and the number of obtained real nondominated solutions. Among the nine parameter settings, we concluded that the tSSO with cp = cw = 0.5 is the best one. The results prove that the proposed tSSO can successfully achieve the aim of this work.
To help readers understand the problem, the assumptions in Section 3.2 are simplified relative to real-life cloud problems. Hence, future work will relax these assumptions, together with the corresponding changes in the algorithms, to better match and solve real-life cloud problems.
Moreover, in future work, the proposed model will be tested on more data from different domains, and the results will be compared with recently published studies from top journals and conferences.

Author Contributions

Conceptualization, W.-C.Y., W.Z., Y.Y. and C.-L.H.; methodology, W.-C.Y., W.Z., Y.Y. and C.-L.H.; software, W.-C.Y., W.Z., Y.Y. and C.-L.H.; validation, W.-C.Y.; formal analysis, W.-C.Y., W.Z. and Y.Y.; investigation, W.-C.Y., W.Z. and Y.Y.; resources, W.-C.Y., W.Z., Y.Y. and C.-L.H.; data curation, W.-C.Y., W.Z., Y.Y. and C.-L.H.; writing—original draft preparation, W.-C.Y., W.Z., Y.Y. and C.-L.H.; writing—review and editing, W.-C.Y. and W.Z.; visualization, W.-C.Y. and W.Z.; supervision, W.-C.Y.; project administration, W.-C.Y. and W.Z.; funding acquisition, W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China (Grant No. 621060482), Research and Development Projects in Key Areas of Guangdong Province (Grant No. 2021B0101410002), and the National Science and Technology Council, R.O.C. (MOST 107-2221-E-007-072-MY3, MOST 110-2221-E-007-107-MY3, MOST 109-2221-E-424-002 and MOST 110-2511-H-130-002).

Acknowledgments

We wish to thank the anonymous editor and referees for their constructive comments and recommendations, which significantly improved this paper. A preliminary version of this article was posted on arXiv for reference only; no copyright was transferred.

Conflicts of Interest

The authors declare no conflict of interest.

Notations

The following notations are used:
|●|: number of elements in ●
Nvar: number of jobs used in the test problem
Ncpu: number of processors contained in the given data center
Nrun: number of runs for the algorithms
Ngen: number of generations in each run
Nsol: number of solutions in each generation
Nnon: number of selected temporary nondominated solutions
Xi: the ith solution
xi,j: the jth variable in Xi
Pi: the best solution among all solutions updated based on Xi in SSO
pi,j: the jth variable in Pi
gBest: index of the best solution among all solutions in SSO, i.e., F(PgBest) is better than or equal to F(Pi) for i = 1, 2, …, Nsol
ρI: random number generated uniformly within interval I
cg, cp, cw, cr: positive parameters used in SSO with cg + cp + cw + cr = 1
Cg, Cp, Cw: Cg = cg, Cp = Cg + cp, and Cw = Cp + cw
Fl(●): the lth fitness function value of solution ●
Max(●): maximal value of ●, i.e., Max(Fl) is the maximal value of the lth objective function
Min(●): minimal value of ●, i.e., Min(Fl) is the minimal value of the lth objective function
St: set of selected solutions from Πt to generate new solutions in the (t + 1)th generation. Note that S1 = Π1 and |St| = Nsol for t = 1, 2, …, Ngen
sizei: size of job i for i = 1, 2, …, Nvar
starti: start time of job i for i = 1, 2, …, Nvar
speedj: execution speed of processor j for j = 1, 2, …, Ncpu
ej: energy consumption per unit time of processor j for j = 1, 2, …, Ncpu
Tub: deadline constraint of job scheduling
ti,j: processing time of job i on processor j, where ti,j = sizei/speedj if task i is processed on processor j and ti,j = 0 otherwise, for i = 1, 2, …, Nvar and j = 1, 2, …, Ncpu
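Using this notation, the two objective values of a schedule follow directly from ti,j = sizei/speedj. The sketch below assumes an assignment-vector representation (assign[i] = j means job i runs on processor j); the function name is ours:

```python
def evaluate(assign, sizes, speeds, energy):
    """Sketch of the two objectives for an assignment: total energy
    consumption (sum of e_j times the busy time of processor j) and the
    makespan (largest processor completion time), with t_ij = size_i/speed_j."""
    busy = [0.0] * len(speeds)
    for i, j in enumerate(assign):
        busy[j] += sizes[i] / speeds[j]  # processing time t_ij of job i on processor j
    f_energy = sum(e * t for e, t in zip(energy, busy))
    f_makespan = max(busy)
    return f_energy, f_makespan
```

A schedule is feasible in this model when f_makespan does not exceed the deadline Tub.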

References

  1. Wang, F.; Xu, J.; Cui, S. Optimal Energy Allocation and Task Offloading Policy for Wireless Powered Mobile Edge Computing Systems. IEEE Trans. Wirel. Commun. 2020, 19, 2443–2459. [Google Scholar] [CrossRef]
  2. Wei, S.C.; Yeh, W.C. Resource allocation decision model for dependable and cost-effective grid applications based on Grid Bank. Future Gener. Comput. Syst. 2017, 77, 12–28. [Google Scholar] [CrossRef]
  3. Yeh, W.C.; Wei, S.C. Economic-based resource allocation for reliable Grid-computing service based on Grid Bank. Future Gener. Comput. Syst. 2012, 28, 989–1002. [Google Scholar] [CrossRef]
  4. Manikandan, N.; Gobalakrishnan, N.; Pradeep, K. Bee optimization based random double adaptive whale optimization model for task scheduling in cloud computing environment. Comput. Commun. 2022, 187, 35–44. [Google Scholar] [CrossRef]
  5. Guo, W.; Li, J.; Chen, G.; Niu, Y.; Chen, C. A PSO-Optimized Real-Time Fault-Tolerant Task Allocation Algorithm in Wireless Sensor Networks. IEEE Trans. Parallel Distrib. Syst. 2015, 26, 3236–3249. [Google Scholar] [CrossRef]
  6. Afifi, H.; Horbach, K.; Karl, H. A Genetic Algorithm Framework for Solving Wireless Virtual Network Embedding. In Proceedings of the 2019 International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Barcelona, Spain, 21–23 October 2019. [Google Scholar]
  7. Santos, J.; Hempel, M.; Sharif, H. Compression Distortion-Rate Analysis of Biomedical Signals in Machine Learning Tasks in Biomedical Wireless Sensor Network Applications. In Proceedings of the 2020 International Wireless Communications and Mobile Computing (IWCMC), Limassol, Cyprus, 15–19 June 2020. [Google Scholar]
  8. Sun, Z.; Liu, Y.; Tao, L. Attack Localization Task Allocation in Wireless Sensor Networks Based on Multi-Objective Binary Particle Swarm Optimization. J. Netw. Comput. Appl. 2018, 112, 29–40. [Google Scholar] [CrossRef]
  9. Lu, Y.; Zhou, J.; Xu, M. Wireless Sensor Networks for Task Allocation using Clone Chaotic Artificial Bee Colony Algorithm. In Proceedings of the 2019 IEEE International Conference of Intelligent Applied Systems on Engineering (ICIASE), Fuzhou, China, 26–29 April 2019. [Google Scholar]
  10. Khan, M.S.A.; Santhosh, R. Task scheduling in cloud computing using hybrid optimization algorithm. Soft Comput. 2022, 26, 3069–13079. [Google Scholar] [CrossRef]
  11. Malawski, M.; Juve, G.; Deelman, E.; Nabrzyski, J. Algorithms for cost-and deadline-constrained provisioning for scientific workflow ensembles in IaaS clouds. Future Gener. Comput. Syst. 2015, 48, 1–18. [Google Scholar] [CrossRef]
  12. Chen, H.; Zhu, X.; Guo, H.; Zhu, J.; Qin, X.; Wu, J. Towards energy-efficient scheduling for real-time tasks under uncertain cloud computing environment. J. Syst. Softw. 2015, 99, 20–35. [Google Scholar] [CrossRef]
  13. Coello, C.A.C.; Pulido, G.T.; Lechuga, M.S. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 256–279. [Google Scholar] [CrossRef]
  14. Guo, X. Multi-objective task scheduling optimization in cloud computing based on fuzzy self-defense algorithm. Alex. Eng. J. 2021, 60, 5603–5609. [Google Scholar] [CrossRef]
  15. Liu, J.X.; Luo, G.; Zhang, X.M.; Zhang, F.; Li, B.N. Job scheduling model for cloud computing based on multi-objective genetic algorithm. Int. J. Comput. Sci. Issues 2013, 10, 134–139. [Google Scholar]
  16. Jena, R.K. Multi objective task scheduling in cloud environment using nested PSO framework. Procedia Comput. Sci. 2015, 57, 1219–1227. [Google Scholar] [CrossRef]
  17. Huang, C.L.; Jiang, Y.Z.; Yin, Y.; Yeh, W.C.; Chung, V.Y.Y.; Lai, C.M. Multi Objective Scheduling in Cloud Computing Using MOSSO. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018. [Google Scholar] [CrossRef]
  18. Yeh, W.C. A two-stage discrete particle swarm optimization for the problem of multiple multi-level redundancy allocation in series systems. Expert Syst. Appl. 2009, 36, 9192–9200. [Google Scholar] [CrossRef]
  19. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  20. Yeh, W.C.; Zhu, W.; Yin, Y.; Huang, C.L. Cloud computing task scheduling problem by Nondominated Sorting Genetic Algorithm II (NSGA-II). In Proceedings of the First Australian Conference on Industrial Engineering and Operations Management, Sydney, Australia, 20–22 December 2022. [Google Scholar]
  21. Houssein, E.H.; Gad, A.G.; Wazery, Y.M.; Suganthan, P.N. Task Scheduling in Cloud Computing based on Meta-heuristics: Review, Taxonomy, Open Challenges, and Future Trends. Swarm Evol. Comput. 2021, 62, 100841. [Google Scholar] [CrossRef]
  22. Arunarani, A.R.; Manjul, D.; Sugumaran, V. Task scheduling techniques in cloud computing: A literature survey. Future Gener. Comput. Syst. 2019, 91, 407–415. [Google Scholar] [CrossRef]
  23. Kumar, M.; Sharma, S.C.; Goel, A.; Singh, S.P. A comprehensive survey for scheduling techniques in cloud computing. J. Netw. Comput. Appl. 2019, 143, 1–33. [Google Scholar] [CrossRef]
  24. Chen, X.; Cheng, L.; Liu, C.; Liu, Q.; Liu, J.; Mao, Y.; Murphy, J. A WOA-Based Optimization Approach for Task Scheduling in Cloud Computing Systems. IEEE Syst. J. 2020, 14, 3117–3128. [Google Scholar] [CrossRef]
  25. Attiya, I.; Elaziz, M.A.; Xiong, S. Job Scheduling in Cloud Computing Using a Modified Harris Hawks Optimization and Simulated Annealing Algorithm. Comput. Intell. Neurosci. 2020, 2020, 3504642. [Google Scholar] [CrossRef] [Green Version]
  26. Gąsior, J.; Seredyński, F. Security-Aware Distributed Job Scheduling in Cloud Computing Systems: A Game-Theoretic Cellular Automata-Based Approach. In Proceedings of the International Conference on Computational Science (ICCS 2019); 2019; pp. 449–462. [Google Scholar]
  27. Mansouri, N.; Javidi, M.M. Cost-based job scheduling strategy in cloud computing environments. Distrib. Parallel Databases 2020, 38, 365–400. [Google Scholar] [CrossRef]
  28. Cheng, F.; Huang, Y.; Tanpure, B.; Sawalani, P.; Cheng, L.; Liu, C. Cost-aware job scheduling for cloud instances using deep reinforcement learning. Clust. Comput. 2022, 25, 619–631. [Google Scholar] [CrossRef]
  29. Shukri, S.E.; Al-Sayyed, R.; Hudaib, A.; Mirjalili, S. Enhanced multi-verse optimizer for task scheduling in cloud computing environments. Expert Syst. Appl. 2021, 168, 114230. [Google Scholar] [CrossRef]
  30. Jacob, T.P.; Pradeep, K. A Multi-objective Optimal Task Scheduling in Cloud Environment Using Cuckoo Particle Swarm Optimization. Wirel. Pers. Commun. 2019, 109, 315–331. [Google Scholar] [CrossRef]
  31. Abualigah, L.; Diabat, A. A novel hybrid antlion optimization algorithm for multi-objective task scheduling problems in cloud computing environments. Clust. Comput. 2021, 24, 205–223. [Google Scholar] [CrossRef]
  32. Sanaj, M.S.; Prathap, P.M.J. Nature inspired chaotic squirrel search algorithm (CSSA) for multi objective task scheduling in an IAAS cloud computing atmosphere. Eng. Sci. Technol. Int. J. 2020, 23, 891–902. [Google Scholar] [CrossRef]
  33. Abualigah, L.; Alkhrabsheh, M. Amended hybrid multi-verse optimizer with genetic algorithm for solving task scheduling problem in cloud computing. J. Supercomput. 2022, 78, 740–765. [Google Scholar] [CrossRef]
  34. Mitchell, M. An Introduction to Genetic Algorithms; MIT Press: London, UK, 1998. [Google Scholar]
  35. Yeh, W.C. Orthogonal simplified swarm optimization for the series–parallel redundancy allocation problem with a mix of components. Knowl.-Based Syst. 2014, 64, 1–12. [Google Scholar] [CrossRef]
  36. Yeh, W.C. A New Exact Solution Algorithm for a Novel Generalized Redundancy Allocation Problem. Inf. Sci. 2017, 408, 182–197. [Google Scholar] [CrossRef]
  37. Yeh, W.C.; Hsieh, Y.H.; Hsu, K.Y.; Huang, C.L. ANN and SSO Algorithms for a Newly Developed Flexible Grid Trading Model. Electronics 2022, 11, 11193259. [Google Scholar] [CrossRef]
  38. Yeh, W.C. Simplified swarm optimization in disassembly sequencing problems with learning effects. Comput. Oper. Res. 2012, 39, 2168–2177. [Google Scholar] [CrossRef]
  39. Yeh, W.C. New parameter-free simplified swarm optimization for artificial neural network training and its application in the prediction of time series. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 661–665. [Google Scholar] [PubMed]
  40. Yeh, W.C.; Zhu, W.; Peng, Y.F.; Huang, C.L. A Hybrid Algorithm Based on Simplified Swarm Optimization for Multi-Objective Optimizing on Combined Cooling, Heating and Power System. Appl. Sci. 2022, 12, 10595. [Google Scholar] [CrossRef]
  41. Yeh, W.C.; Huang, C.L.; Lin, P.; Chen, Z.; Jiang, Y.; Sun, B. Simplex Simplified Swarm Optimization for the Efficient Optimization of Parameter Identification for Solar Cell Models. IET Renew. Power Gener. 2018, 12, 45–51. [Google Scholar] [CrossRef]
  42. Yeh, W.C.; Ke, Y.C.; Chang, P.C.; Yeh, Y.M.; Chung, V. Forecasting Wind Power in the Mai Liao Wind Farm based on the Multi-Layer Perceptron Artificial Neural Network Model with Improved Simplified Swarm Optimization. Int. J. Electr. Power Energy Syst. 2014, 55, 741–748. [Google Scholar] [CrossRef]
  43. Yeh, W.C.; Liu, Z.; Yang, Y.C.; Tan, S.Y. Solving Dual-Channel Supply Chain Pricing Strategy Problem with Multi-Level Programming Based on Improved Simplified Swarm Optimization. Technologies 2022, 2022, 10030073. [Google Scholar] [CrossRef]
  44. Lin, H.C.S.; Huang, C.L.; Yeh, W.C. A Novel Constraints Model of Credibility-Fuzzy for Reliability Redundancy Allocation Problem by Simplified Swarm Optimization. Appl. Sci. 2021, 11, 10765. [Google Scholar] [CrossRef]
  45. Tan, S.Y.; Yeh, W.C. The Vehicle Routing Problem: State-of-the-Art Classification and Review. Appl. Sci. 2021, 11, 10295. [Google Scholar] [CrossRef]
  46. Zhu, W.; Huang, C.L.; Yeh, W.C.; Jiang, Y.; Tan, S.Y. A Novel Bi-Tuning SSO Algorithm for Optimizing the Budget-Limited Sensing Coverage Problem in Wireless Sensor Networks. Appl. Sci. 2021, 11, 10197. [Google Scholar] [CrossRef]
  47. Yeh, W.C.; Jiang, Y.; Tan, S.Y.; Yeh, C.Y. A New Support Vector Machine Based on Convolution Product. Complexity 2021, 2021, 9932292. [Google Scholar] [CrossRef]
  48. Wu, T.Y.; Jiang, Y.Z.; Su, Y.Z.; Yeh, W.C. Using Simplified Swarm Optimization on Multiloop Fuzzy PID Controller Tuning Design for Flow and Temperature Control System. Appl. Sci. 2020, 10, 8472. [Google Scholar] [CrossRef]
  49. Yeh, W.C.; Jiang, Y.; Huang, C.L.; Xiong, N.N.; Hu, C.F.; Yeh, Y.H. Improve Energy Consumption and Signal Transmission Quality of Routings in Wireless Sensor Networks. IEEE Access 2020, 8, 198254–198264. [Google Scholar] [CrossRef]
  50. Yeh, W.C. A new harmonic continuous simplified swarm optimization. Appl. Soft Comput. 2019, 85, 105544. [Google Scholar] [CrossRef]
  51. Yeh, W.C.; Lai, C.M.; Tseng, K.C. Fog computing task scheduling optimization based on multi-objective simplified swarm optimization. J. Phys. Conf. Ser. 2019, 1411, 012007. [Google Scholar] [CrossRef]
  52. Yeh, W.C. Solving cold-standby reliability redundancy allocation problems using a new swarm intelligence algorithm. Appl. Soft Comput. 2019, 83, 105582. [Google Scholar] [CrossRef]
  53. Veldhuizen, V.D.A.; Lamont, G.B. Multiobjective evolutionary algorithm research: A history and analysis. Evol. Comput. 1999, 8, 125–147. [Google Scholar] [CrossRef]
  54. Schott, J.R. Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization. 1995. Available online: http://hdl.handle.net/1721.1/11582 (accessed on 11 January 2022).
Figure 1. Average of the energy consumption for small-size benchmark.
Figure 2. Std of the energy consumption for small-size benchmark.
Figure 3. Average of the makespan for small-size benchmark.
Figure 4. Std of the makespan for small-size benchmark.
Figure 5. Average of the energy consumption for medium-size benchmark.
Figure 6. Std of the energy consumption for medium-size benchmark.
Figure 7. Average of the makespan for medium-size benchmark.
Figure 8. Std of the makespan for medium-size benchmark.
Figure 9. Average of the energy consumption for large-size benchmark.
Figure 10. Std of the energy consumption for large-size benchmark.
Figure 11. Average of the makespan for large-size benchmark.
Figure 12. Std of the makespan for large-size benchmark.
Table 1. Example of the update process in the proposed tSSO.
Variable12345
X512324
X*21433
ρ0.320.750.470.990.23
New X52244 #3
#” indicates that the corresponding feasible value is generated randomly.
Table 2. Nine different parameter settings of the proposed tSSO.
ID | tSSO0 | tSSO1 | tSSO2 | tSSO3 | tSSO4 | tSSO5 | tSSO6 | tSSO7 | tSSO8
Cp | 0.1 | 0.1 | 0.1 | 0.3 | 0.3 | 0.3 | 0.5 | 0.5 | 0.5
Cw | 0.2 | 0.4 | 0.6 | 0.4 | 0.6 | 0.8 | 0.6 | 0.8 | 1.0
Table 3. Result for small-size problem.
Table 3. Result for small-size problem.
NsolAlgAvg (Nn)Std (Nn)Avg (Np)Std (Np)Avg (GD)Std (GD)Avg (SP)Std (SP)Avg (T)Std (T)Avg (F1)Std (F1)Avg (F2)Std (F2)
| Nsol | Alg | Avg (Nn) | Std (Nn) | Avg (Np) | Std (Np) | Avg (GD) | Std (GD) | Avg (SP) | Std (SP) | Avg (T) | Std (T) | Avg (F1) | Std (F1) | Avg (F2) | Std (F2) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 50 | tSSO0 | 48.34 | 2.22 | 0.012 | 0.109 | 0.565 | 0.611 | 3.527 | 4.343 | 2.8954 | 0.5839 | 11,937.28 | 261.3206 | 2539.9574 | 798.1386 |
|  | tSSO1 | 49.634 | 0.904 | 0.018 | 0.133 | 0.26 | 0.25 | 1.475 | 1.781 | 3.4103 | 0.6708 | 11,689.93 | 272.5046 | 1015.7901 | 926.8740 |
|  | tSSO2 | 49.966 | 0.202 | 0.058 | 0.234 | 0.138 | 0.059 | 0.722 | 0.445 | 4.1789 | 0.6901 | 11,185.89 | 273.4064 | 838.2936 | 632.7848 |
|  | tSSO3 | 49.494 | 1.165 | 0.012 | 0.109 | 0.311 | 0.349 | 1.819 | 2.495 | 3.2951 | 0.6579 | 11,766.60 | 269.7630 | 1208.2600 | 2270.2913 |
|  | tSSO4 | 49.936 | 0.276 | 0.034 | 0.202 | 0.17 | 0.073 | 0.915 | 0.547 | 3.9837 | 0.6919 | 11,317.03 | 289.0288 | 779.7255 | 448.9989 |
|  | tSSO5 | 50 | 0 | 0.284 | 0.544 | 0.118 | 0.065 | 0.71 | 0.478 | 4.7604 | 0.7535 | 10,603.56 | 229.7030 | 938.6334 | 22.5991 |
|  | tSSO6 | 49.968 | 0.208 | 0.054 | 0.235 | 0.153 | 0.101 | 0.821 | 0.734 | 4.2185 | 0.6928 | 11,230.14 | 290.7808 | 817.7619 | 631.6859 |
|  | tSSO7 | 50 | 0 | 0.218 | 0.464 | 0.121 | 0.064 | 0.722 | 0.475 | 4.86 | 0.7531 | 10,561.73 | 236.0619 | 936.6982 | 24.2198 |
|  | tSSO8 | 49.7 | 0.766 | 0.188 | 0.457 | 0.153 | 0.103 | 0.776 | 0.598 | 4.4077 | 0.8044 | 10,088.10 | 550.3918 | 1010.4338 | 88.7098 |
|  | MOPSO | 9.274 | 2.026 | 0 | 0 | 2.947 | 0.908 | 17.366 | 6.314 | 3.1238 | 0.5445 | 12,176.37 | 230.1512 | 9,378.78 | 16,620.2163 |
|  | MOSSO | 1.23 | 0.508 | 0 | 0 | 9.914 | 0.122 | 8.417 | 4.978 | 1.2282 | 0.187 | 9833.03 | 24.8444 | 488,648.8 | 10,786.0609 |
|  | NSGA-II | 17.22 | 3.216 | 0 | 0 | 2.213 | 1.27 | 13.107 | 8.676 | 0.0102 | 0.0202 | 11,963.00 | 487.9212 | 19,913.23 | 20,625.1095 |
| 100 | tSSO0 | 61.856 | 5.503 | 0.024 | 0.153 | 1.996 | 0.502 | 18.548 | 4.721 | 10.3052 | 1.4032 | 24,162.67 | 365.2535 | 48,016.98 | 20,979.7233 |
|  | tSSO1 | 71.016 | 5.808 | 0.05 | 0.227 | 1.284 | 0.506 | 12.06 | 4.968 | 10.2902 | 1.3693 | 23,633.28 | 438.6975 | 24,077.66 | 15,460.6944 |
|  | tSSO2 | 95.07 | 4.779 | 0.184 | 0.463 | 0.289 | 0.306 | 2.712 | 3.064 | 11.282 | 1.6767 | 22,010.09 | 475.5156 | 4596.5495 | 938.6783 |
|  | tSSO3 | 71.316 | 5.619 | 0.064 | 0.253 | 1.347 | 0.508 | 12.678 | 4.946 | 10.5688 | 1.3536 | 23,621.20 | 426.826 | 26,436.17 | 16,447.3743 |
|  | tSSO4 | 88.894 | 6.061 | 0.116 | 0.362 | 0.537 | 0.387 | 5.102 | 3.839 | 10.8808 | 1.4016 | 22,401.50 | 480.8059 | 8531.3138 | 913.8709 |
|  | tSSO5 | 99.98 | 0.165 | 0.668 | 0.824 | 0.061 | 0.037 | 0.514 | 0.382 | 21.7762 | 2.6418 | 19,982.09 | 420.3826 | 1891.7654 | 52.4866 |
|  | tSSO6 | 98.638 | 2.135 | 0.278 | 0.549 | 0.137 | 0.174 | 1.212 | 1.755 | 13.335 | 2.399 | 21,646.20 | 473.8394 | 2588.2893 | 570.9857 |
|  | tSSO7 | 99.996 | 0.088 | 0.714 | 0.849 | 0.062 | 0.035 | 0.521 | 0.369 | 23.0932 | 2.6965 | 19,860.81 | 428.8591 | 1879.8735 | 7.8411 |
|  | tSSO8 | 99.89 | 0.546 | 1.354 | 1.173 | 0.059 | 0.04 | 0.497 | 0.401 | 26.0226 | 4.3313 | 19,086.17 | 614.0985 | 2056.6747 | 72.1143 |
|  | MOPSO | 12.042 | 2.554 | 0 | 0 | 2.113 | 0.454 | 17.758 | 4.29 | 12.0856 | 1.6538 | 24,347.86 | 307.4949 | 57,505.36 | 23,415.3032 |
|  | MOSSO | 1.58 | 0.699 | 0 | 0 | 7.01 | 0.055 | 9.271 | 3.117 | 4.7402 | 0.4797 | 19,666.68 | 33.48 | 977,197.9 | 13,878.8233 |
|  | NSGA-II | 21.096 | 3.25 | 0 | 0 | 2.25 | 0.777 | 19.435 | 7.076 | 0.0392 | 0.0489 | 24,289.14 | 713.0953 | 65,231.95 | 40,247.3188 |
| 150 | tSSO0 | 67.96 | 5.77 | 0.036 | 0.197 | 1.97 | 0.343 | 21.901 | 3.817 | 23.0904 | 3.509 | 36,497.8 | 446.2135 | 99,073.14 | 31,223.9014 |
|  | tSSO1 | 76.334 | 6.347 | 0.078 | 0.276 | 1.552 | 0.349 | 17.458 | 4.014 | 22.8375 | 3.3043 | 35,809.46 | 542.9496 | 69,700.06 | 27,524.2792 |
|  | tSSO2 | 102.624 | 7.73 | 0.22 | 0.473 | 0.929 | 0.312 | 10.699 | 3.723 | 23.8908 | 3.3867 | 33,666.71 | 657.8649 | 36,715.2 | 20,668.0023 |
|  | tSSO3 | 79.2 | 5.726 | 0.098 | 0.304 | 1.465 | 0.327 | 16.518 | 3.814 | 23.6139 | 3.4082 | 35,679.86 | 536.1647 | 64,560.6 | 24,571.6138 |
|  | tSSO4 | 98.296 | 6.9 | 0.166 | 0.413 | 0.932 | 0.276 | 10.655 | 3.313 | 24.0771 | 3.3979 | 33,969.23 | 669.9933 | 36,254.89 | 18,845.1508 |
|  | tSSO5 | 149.92 | 0.427 | 1.362 | 1.177 | 0.042 | 0.028 | 0.441 | 0.361 | 43.0185 | 7.8536 | 29,176.09 | 593.4306 | 2833.1426 | 27.5616 |
|  | tSSO6 | 122.092 | 7.632 | 0.408 | 0.631 | 0.591 | 0.254 | 6.907 | 3.087 | 25.971 | 3.4265 | 32,432.3 | 657.7752 | 20,747.74 | 14,131.2079 |
|  | tSSO7 | 149.954 | 0.277 | 1.47 | 1.151 | 0.04 | 0.028 | 0.414 | 0.356 | 51.0654 | 7.7838 | 28,979.39 | 598.0727 | 2870.7266 | 38.1370 |
|  | tSSO8 | 149.904 | 0.602 | 3.548 | 2.013 | 0.036 | 0.028 | 0.39 | 0.347 | 74.6991 | 13.5849 | 27,451.57 | 612.008 | 3233.7799 | 95.0084 |
|  | MOPSO | 13.692 | 2.496 | 0 | 0 | 1.744 | 0.289 | 18.011 | 3.32 | 26.7054 | 4.1966 | 36,486.46 | 388.5253 | 87,266.1 | 26,859.1298 |
|  | MOSSO | 1.986 | 0.921 | 0 | 0 | 5.724 | 0.038 | 9.331 | 2.551 | 10.4472 | 1.2398 | 29,494.15 | 43.4111 | 1,466,864 | 18,034.0917 |
|  | NSGA-II | 23.45 | 3.732 | 0 | 0 | 2.109 | 0.579 | 22.251 | 6.213 | 0.0855 | 0.0743 | 36,643.47 | 992.1199 | 120,913.5 | 60,590.6631 |
Table 4. Result for medium-size problem.

| Nsol | Alg | Avg (Nn) | Std (Nn) | Avg (Np) | Std (Np) | Avg (GD) | Std (GD) | Avg (SP) | Std (SP) | Avg (T) | Std (T) | Avg (F1) | Std (F1) | Avg (F2) | Std (F2) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 50 | tSSO0 | 24.472 | 4.177 | 0 | 0 | 20.001 | 2.216 | 109.497 | 7.657 | 4.5751 | 0.7871 | 32,537.48 | 456.8619 | 196,678.1 | 37,821.6436 |
|  | tSSO1 | 25.954 | 4.105 | 0 | 0 | 18.074 | 2.375 | 103.29 | 9.431 | 4.403 | 0.7533 | 32,405.09 | 566.3836 | 167,400.6 | 37,900.9380 |
|  | tSSO2 | 29.672 | 4.648 | 0 | 0 | 14.629 | 2.778 | 89.001 | 13.521 | 4.3079 | 0.7238 | 32,177.28 | 716.8466 | 120,611.9 | 39,137.8242 |
|  | tSSO3 | 26.74 | 4.27 | 0 | 0 | 17.701 | 2.308 | 102.104 | 9.346 | 4.5363 | 0.765 | 32,413.09 | 581.7217 | 161,630.7 | 37,035.2625 |
|  | tSSO4 | 29.87 | 4.493 | 0 | 0 | 14.258 | 2.518 | 87.075 | 12.468 | 4.397 | 0.7395 | 32,032.87 | 750.0289 | 118,041.1 | 36,646.8146 |
|  | tSSO5 | 35.19 | 5.142 | 0.006 | 0.077 | 9.059 | 2.676 | 58.733 | 16.013 | 4.0844 | 0.6768 | 31,237.8 | 919.8058 | 65,527.05 | 32,220.9459 |
|  | tSSO6 | 32.81 | 4.649 | 0 | 0 | 12.138 | 2.64 | 76.712 | 14.884 | 4.417 | 0.748 | 32,117.32 | 801.4451 | 90,607.97 | 32,073.7435 |
|  | tSSO7 | 36.364 | 4.946 | 0.002 | 0.045 | 8.258 | 2.656 | 54.108 | 16.584 | 4.2589 | 0.7155 | 31,191.78 | 946.228 | 56,523.23 | 28,417.2914 |
|  | tSSO8 | 44.082 | 4.463 | 0.064 | 0.303 | 0.976 | 1.184 | 6.054 | 8.46 | 4.0538 | 0.6846 | 28,331.11 | 1073.477 | 5418.8248 | 430.7514 |
|  | MOPSO | 5.278 | 1.58 | 0 | 0 | 18.821 | 1.434 | 85.456 | 4.305 | 6.7045 | 1.1244 | 31,253.08 | 456.2674 | 286,050.1 | 35,533.4621 |
|  | MOSSO | 1 | 0 | 0 | 0 | 20.393 | 0.106 | 5.148 | 3.944 | 2.6425 | 0.3924 | 28,024.24 | 44.3314 | 499,880.3 | 1086.7815 |
|  | NSGA-II | 8.528 | 2.42 | 0 | 0 | 23.357 | 3.072 | 109.155 | 10.135 | 0.013 | 0.022 | 32,648.83 | 926.4653 | 272,009.9 | 62,082.4709 |
| 100 | tSSO0 | 27.056 | 4.507 | 0 | 0 | 16.977 | 0.907 | 113.305 | 3.756 | 17.8248 | 2.9748 | 65,051.87 | 667.284 | 558,643.9 | 50,252.7832 |
|  | tSSO1 | 28.476 | 4.388 | 0 | 0 | 15.963 | 1.018 | 111.674 | 4.176 | 17.1294 | 2.739 | 64,873.31 | 824.3303 | 509,758.2 | 53,979.6751 |
|  | tSSO2 | 31.5 | 5.173 | 0 | 0 | 14.406 | 1.139 | 107.424 | 5.392 | 16.6702 | 2.6083 | 64,582.33 | 1120.178 | 438,233.4 | 60,357.1999 |
|  | tSSO3 | 30.116 | 4.689 | 0 | 0 | 15.331 | 0.948 | 110.571 | 4.615 | 17.569 | 2.8203 | 64,877.63 | 892.6177 | 478,286.6 | 49,968.1420 |
|  | tSSO4 | 32.442 | 4.55 | 0 | 0 | 13.563 | 1.04 | 104.043 | 5.577 | 16.9932 | 2.6475 | 64,347.25 | 1159.757 | 402,642.2 | 55,190.6790 |
|  | tSSO5 | 37.524 | 5.4 | 0 | 0 | 10.642 | 1.067 | 87.882 | 6.868 | 15.7426 | 2.3916 | 62,982.56 | 1581.812 | 302,505.7 | 58,114.0986 |
|  | tSSO6 | 37.188 | 4.95 | 0.002 | 0.045 | 11.909 | 1.066 | 96.473 | 6.802 | 17.1704 | 2.702 | 64,295.33 | 1245.893 | 334,714.5 | 50,535.6466 |
|  | tSSO7 | 40.262 | 5.529 | 0.002 | 0.045 | 9.49 | 1.126 | 80.679 | 8.005 | 16.5428 | 2.4717 | 62,883.59 | 1635.266 | 260,267.7 | 55,235.1007 |
|  | tSSO8 | 57.302 | 6.849 | 0.152 | 0.435 | 2.329 | 0.992 | 21.916 | 9.291 | 15.8048 | 2.3262 | 55,830.02 | 2048.334 | 57,761.36 | 42,030.1210 |
|  | MOPSO | 6.848 | 1.843 | 0 | 0 | 13.321 | 0.697 | 85.525 | 3.172 | 25.8886 | 3.993 | 62,458.84 | 631.2151 | 573,135.1 | 48,380.8306 |
|  | MOSSO | 1 | 0 | 0 | 0 | 14.418 | 0.054 | 5.869 | 2.966 | 10.1184 | 1.161 | 56,046.99 | 62.3825 | 999,720.8 | 1646.6140 |
|  | NSGA-II | 10.444 | 2.663 | 0 | 0 | 17.906 | 1.378 | 109.495 | 7.892 | 0.0476 | 0.05 | 65,316.82 | 1320.111 | 621,792.5 | 85,797.1254 |
| 150 | tSSO0 | 29.234 | 4.409 | 0 | 0 | 14.51 | 0.548 | 111.228 | 3.186 | 39.0909 | 5.8462 | 97,500.52 | 813.8773 | 924,285.2 | 56,397.7725 |
|  | tSSO1 | 30.044 | 4.752 | 0.004 | 0.063 | 13.987 | 0.624 | 111.209 | 3.446 | 37.5183 | 5.4441 | 97,367.75 | 1050.166 | 873,603.5 | 62,586.7269 |
|  | tSSO2 | 32.602 | 4.976 | 0 | 0 | 12.975 | 0.721 | 109.937 | 3.887 | 36.3396 | 5.1288 | 97,031.77 | 1314.433 | 781,236.9 | 72,066.7149 |
|  | tSSO3 | 32.566 | 4.636 | 0 | 0 | 13.238 | 0.588 | 110.849 | 3.482 | 38.4948 | 5.5112 | 97,373.33 | 1186.611 | 801,509.8 | 59,333.8052 |
|  | tSSO4 | 34.552 | 4.915 | 0 | 0 | 12.02 | 0.672 | 106.822 | 4.391 | 37.3443 | 5.448 | 96,646.12 | 1497.259 | 702,948.2 | 67,389.4702 |
|  | tSSO5 | 38.842 | 5.225 | 0 | 0 | 9.865 | 0.657 | 94.662 | 5.112 | 34.557 | 4.8437 | 94,766.85 | 2117.23 | 561,339.4 | 76,923.5292 |
|  | tSSO6 | 39.38 | 4.876 | 0.002 | 0.045 | 10.63 | 0.684 | 100.761 | 5.231 | 37.7004 | 5.5096 | 96,480.24 | 1760.161 | 593,359.7 | 66,298.1134 |
|  | tSSO7 | 43.326 | 5.156 | 0.002 | 0.045 | 8.652 | 0.66 | 86.662 | 5.569 | 36.3051 | 5.1832 | 94,508.79 | 2363.881 | 476,971 | 77,652.8961 |
|  | tSSO8 | 62.466 | 7.329 | 0.336 | 0.713 | 2.584 | 0.702 | 29.27 | 7.644 | 34.4403 | 4.8116 | 83,208.2 | 2666.568 | 142,370.8 | 69,677.5283 |
|  | MOPSO | 7.816 | 1.896 | 0 | 0 | 10.879 | 0.497 | 85.598 | 2.492 | 56.5194 | 8.1362 | 93,759.12 | 788.4589 | 857,348.1 | 63,773.8475 |
|  | MOSSO | 1 | 0 | 0 | 0 | 11.777 | 0.036 | 5.749 | 2.406 | 22.2228 | 2.7068 | 84,077.84 | 77.1704 | 1,499,801 | 1397.3776 |
|  | NSGA-II | 11.27 | 2.609 | 0 | 0 | 15.074 | 0.927 | 107.625 | 7.159 | 0.1059 | 0.0684 | 97,772.24 | 1688.187 | 994,780.5 | 101,431.9720 |
Table 5. Result for large-size problem.

| Nsol | Alg | Avg (Nn) | Std (Nn) | Avg (Np) | Std (Np) | Avg (GD) | Std (GD) | Avg (SP) | Std (SP) | Avg (T) | Std (T) | Avg (F1) | Std (F1) | Avg (F2) | Std (F2) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 50 | tSSO0 | 27.174 | 4.155 | 0 | 0 | 748.62 | 96.656 | 4480.39 | 365.757 | 8.4167 | 1.5329 | 98,628.83 | 771.7182 | 143,665.5 | 35,722.9236 |
|  | tSSO1 | 29.798 | 4.686 | 0 | 0 | 639.303 | 109.316 | 4019.28 | 526.113 | 8.0533 | 1.343 | 98,279.76 | 960.7157 | 106,329.4 | 34,186.9693 |
|  | tSSO2 | 34.79 | 5.074 | 0 | 0 | 478.211 | 136.159 | 3161.886 | 809.014 | 7.7758 | 1.2132 | 97,705.9 | 1259.995 | 62,884.07 | 30,649.8855 |
|  | tSSO3 | 30.616 | 4.385 | 0 | 0 | 633.698 | 108.007 | 3994.477 | 524.318 | 8.218 | 1.3142 | 98,354.38 | 893.9128 | 104,469.4 | 33,485.2153 |
|  | tSSO4 | 34.058 | 4.867 | 0 | 0 | 484.791 | 118.408 | 3210.835 | 695.26 | 7.9161 | 1.2229 | 97,382.84 | 1207.66 | 63,353.57 | 27,636.0545 |
|  | tSSO5 | 42.104 | 5.073 | 0 | 0 | 237.875 | 158.78 | 1641.271 | 1075.713 | 7.293 | 1.092 | 95,492.67 | 1530.155 | 21,440.25 | 19,011.2797 |
|  | tSSO6 | 37.198 | 4.822 | 0 | 0 | 408.566 | 128.757 | 2754.903 | 808.218 | 7.904 | 1.2089 | 97,480.85 | 1267.524 | 46,923.7 | 24,637.2246 |
|  | tSSO7 | 42.934 | 4.724 | 0.002 | 0.045 | 205.495 | 154.737 | 1424.205 | 1060.277 | 7.6309 | 1.108 | 95,610.53 | 1573.731 | 17,520.32 | 16,311.2192 |
|  | tSSO8 | 45.918 | 2.998 | 0.006 | 0.1 | 6.733 | 6.201 | 45.981 | 255.548 | 6.8534 | 1.0066 | 89,370.73 | 2219.432 | 1315.029 | 2029.9054 |
|  | MOPSO | 6.328 | 1.726 | 0 | 0 | 904.797 | 76.097 | 4909.066 | 144.96 | 12.4131 | 1.93 | 97,663.75 | 709.6203 | 207,628.7 | 34,209.3182 |
|  | MOSSO | 4.948 | 1.556 | 0 | 0 | 1152.116 | 58.678 | 4687.853 | 261.903 | 4.7327 | 0.6193 | 98,649.82 | 634.5005 | 334,246.6 | 33,687.3944 |
|  | NSGA-II | 10.368 | 2.799 | 0 | 0 | 856.652 | 145.453 | 4711.009 | 392.009 | 0.0201 | 0.0245 | 98,891.73 | 1471.68 | 190,118.4 | 61,317.1989 |
| 100 | tSSO0 | 30.448 | 4.494 | 0 | 0 | 657.704 | 35.537 | 4947.358 | 69.806 | 33.2646 | 5.1487 | 197,248.5 | 1061.073 | 436,547.4 | 46,564.5095 |
|  | tSSO1 | 32.06 | 4.894 | 0 | 0 | 613.194 | 42.945 | 4832.571 | 135.141 | 31.8998 | 4.8814 | 196,881.6 | 1430.933 | 380,499.4 | 52,806.8613 |
|  | tSSO2 | 36.274 | 5.131 | 0 | 0 | 539.157 | 47.963 | 4531.276 | 241.08 | 30.8796 | 4.6327 | 196,021.5 | 1827.626 | 295,533.3 | 51,535.3657 |
|  | tSSO3 | 34.158 | 4.699 | 0 | 0 | 586.502 | 42.738 | 4742.246 | 167.804 | 32.6676 | 4.963 | 196,844.5 | 1456.553 | 348,421.8 | 49,963.8885 |
|  | tSSO4 | 37.156 | 5.092 | 0 | 0 | 511.663 | 48.763 | 4388.164 | 277.232 | 31.3592 | 4.5905 | 195,426.1 | 2058.278 | 266,699.6 | 49,262.8425 |
|  | tSSO5 | 44.196 | 5.801 | 0 | 0 | 400.691 | 56.641 | 3663.083 | 419.515 | 28.8562 | 4.0773 | 192,100.9 | 2664.295 | 166,154.7 | 45,483.4355 |
|  | tSSO6 | 41.918 | 5.683 | 0 | 0 | 453.726 | 49.978 | 4037.566 | 329.065 | 31.453 | 4.5322 | 195,306.7 | 2181.89 | 210,798.2 | 45,526.3109 |
|  | tSSO7 | 47.33 | 5.836 | 0 | 0 | 363.759 | 57.152 | 3381.729 | 449.962 | 30.127 | 4.1393 | 191,714.3 | 2919.089 | 137,930.9 | 41,874.8504 |
|  | tSSO8 | 72.528 | 7.324 | 0.116 | 0.468 | 38.469 | 58.147 | 382.143 | 578.595 | 27.616 | 3.7534 | 176,917.2 | 4417.363 | 6889.327 | 8665.4426 |
|  | MOPSO | 7.948 | 2.065 | 0 | 0 | 641.551 | 38.106 | 4910.514 | 90.919 | 49.2814 | 7.0987 | 195,334.2 | 995.1512 | 416,075.8 | 48,864.6986 |
|  | MOSSO | 6.508 | 1.72 | 0 | 0 | 813.33 | 28.137 | 4705.793 | 166.873 | 19.1044 | 2.2699 | 197,323 | 843.5352 | 665,380.8 | 45,753.2918 |
|  | NSGA-II | 12.482 | 2.856 | 0 | 0 | 686.503 | 64.443 | 4926.057 | 119.871 | 0.0794 | 0.041 | 197,624.3 | 2143.287 | 478,296.4 | 87,822.5684 |
| 150 | tSSO0 | 31.892 | 4.499 | 0 | 0 | 571.334 | 23.553 | 4986.075 | 23.806 | 74.9205 | 11.0373 | 295,934.7 | 1364.945 | 739,913.7 | 60,689.2869 |
|  | tSSO1 | 33.948 | 4.799 | 0 | 0 | 541.511 | 25.246 | 4952.636 | 54.778 | 71.7156 | 10.4578 | 295,532.6 | 1663.517 | 665,347.3 | 61,362.5540 |
|  | tSSO2 | 37.058 | 5.306 | 0 | 0 | 497.394 | 31.575 | 4816.909 | 128.206 | 69.2553 | 9.7601 | 294,401.4 | 2369.671 | 562,929.2 | 70,555.4639 |
|  | tSSO3 | 36.176 | 5.035 | 0 | 0 | 515.033 | 26.975 | 4884.23 | 88.475 | 73.4922 | 10.7752 | 295,225.5 | 1901.874 | 602,559.3 | 62,546.5565 |
|  | tSSO4 | 38.654 | 5.274 | 0 | 0 | 461.168 | 33.508 | 4648.141 | 179.439 | 70.7016 | 10.1622 | 293,575.6 | 2662.045 | 485,020.5 | 69,694.8734 |
|  | tSSO5 | 46.074 | 6.107 | 0.002 | 0.045 | 375.99 | 38.045 | 4077.306 | 301.451 | 65.0655 | 9.094 | 289,048.3 | 3537.873 | 325,141.9 | 64,553.0441 |
|  | tSSO6 | 44.548 | 5.498 | 0 | 0 | 407.532 | 32.447 | 4317.863 | 231.212 | 70.9386 | 10.2171 | 292,870.2 | 2835.374 | 379,900.7 | 59,375.2787 |
|  | tSSO7 | 49.592 | 5.643 | 0.004 | 0.063 | 339.382 | 35.921 | 3774.411 | 317.912 | 68.0226 | 9.4273 | 288,517.2 | 3575.387 | 265,771.2 | 54,642.6111 |
|  | tSSO8 | 77.03 | 7.705 | 0.362 | 1.141 | 75.415 | 53.721 | 915.721 | 647.916 | 62.04 | 8.258 | 263,672.3 | 6382.976 | 22,508.07 | 20,508.0012 |
|  | MOPSO | 8.758 | 2.175 | 0 | 0 | 522.934 | 26.468 | 4906.145 | 78.318 | 110.3736 | 14.8236 | 293,040.9 | 1222.526 | 621,413.8 | 62,184.5910 |
|  | MOSSO | 7.398 | 2.044 | 0 | 0 | 666.568 | 18.525 | 4689.758 | 137.576 | 43.2861 | 4.9709 | 295,948.8 | 1105.613 | 1,005,113 | 55,590.2798 |
|  | NSGA-II | 13.36 | 3.007 | 0 | 0 | 590.492 | 41.685 | 4939.701 | 88.364 | 0.18 | 0.0623 | 296,189.8 | 2758.174 | 792,818.2 | 110,125.8133 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yeh, W.-C.; Zhu, W.; Yin, Y.; Huang, C.-L. Cloud Computing Considering Both Energy and Time Solved by Two-Objective Simplified Swarm Optimization. Appl. Sci. 2023, 13, 2077. https://doi.org/10.3390/app13042077

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.