Appl. Sci. 2018, 8(8), 1385; doi:10.3390/app8081385

Article
Highly Reliable and Efficient Three-Layer Cloud Dispatching Architecture in the Heterogeneous Cloud Computing Environment
1
Department of Information and Communication Engineering, Chaoyang University of Technology, Taichung City 41349, Taiwan
2
Department of Information Communication, Hsing Wu University, New Taipei City 24452, Taiwan
*
Correspondence: yfahuang@cyut.edu.tw; Tel.: +886-4-2332-3000 (ext. 4419)
This paper is an extended version of our paper published in 2017 IEEE 8th International Conference on Awareness Science and Technology (iCAST2017), Taichung, Taiwan, 8–10 November 2017.
Received: 21 July 2018 / Accepted: 13 August 2018 / Published: 16 August 2018

Abstract

Due to the rapid development and popularity of the Internet, cloud computing has become an indispensable application service. However, assigning various tasks to appropriate service nodes is an important issue, and an efficient scheduling algorithm is needed to enhance system performance. Therefore, a Three-Layer Cloud Dispatching (TLCD) architecture is proposed to enhance the performance of task scheduling. In the first layer, tasks are distinguished into different types according to their characteristics. In the second layer, the proposed Cluster Selection Algorithm dispatches the tasks to appropriate service clusters. In the third layer, a new scheduling algorithm dispatches each task to a suitable server within a server cluster to enhance scheduling efficiency. With the TLCD architecture, the best task completion time can be obtained. Furthermore, load balancing and reliability can be achieved in a cloud computing network environment.
Keywords:
cloud computing; reliability; load balancing; Sufferage; task dispatching

1. Introduction

Due to the rapid development and popularity of the Internet, cloud computing has become an indispensable and highly demanded application service [1]. In order to meet the greater storage requirements of Big Data, the system will continue to upgrade and expand its capabilities, resulting in a large number of heterogeneous servers, storage, and related equipment in a cloud computing environment.
Cloud computing services can be classified into three types: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) [2,3,4]. In SaaS, frequently used software and programs, such as Gmail and Google Maps, are provided by the Internet vendor [5]. In PaaS, a platform is provided for users' programs; the Google App Engine is one kind of PaaS platform. In the last type of service, IaaS, users can rent hardware resources to build their own framework, such as a cloud server [2,3,4,6]. No matter what kind of cloud service is applied to the cloud computing network, there is a common characteristic: each cloud server has different capabilities and computing abilities. For this reason, an efficient scheduling algorithm that dispatches tasks to the appropriate cloud server is needed, which is an important challenge in current cloud computing environments.
Basically, in the traditional cloud clustering architecture, the system considers only the heterogeneity of the nodes while executing scheduling procedures and ignores the heterogeneity of the tasks, which come from different platforms and whose categories the nodes cannot distinguish. Heterogeneous tasks have therefore become a major flaw in traditional cloud architectures. In addition, cloud service types are very diverse; as a result, the cloud computing environment becomes complicated, reliability is greatly reduced, and the scheduling of cloud environments becomes more difficult.
Therefore, the Three-Layer Cloud Dispatching (TLCD) architecture is proposed to handle the scheduling problem while heterogeneous nodes and tasks exist in the cloud system at the same time. For the first layer, a Category Assignment Cluster (CAC) [7,8] layer was proposed to reduce the task delay and the overloading by classifying the heterogeneous tasks. In the CAC layer, the various tasks can be classified into three types according to the IaaS, SaaS, and PaaS categories. Subsequently, the homogeneous tasks can be dispatched to the corresponding service category clusters in the following layer.
In the second layer, called the Cluster Selection (CS) layer, the homogenous tasks can be assigned to appropriate clusters by the Cluster Scheduling Algorithm (CSA) to enhance the reliability of the system. Besides this, the cost and completion time of task scheduling can be reduced in this layer. Finally, tasks can be dispatched to service nodes by the scheduling algorithm in the third layer, the Server Nodes Selection (SNS) layer. In this layer, an Advanced Cluster Sufferage Scheduling (ACSS) algorithm is proposed to enhance resource utilization and to achieve load balancing [2,3,9,10,11].
The rest of this paper is organized as follows. Section 2 illustrates the previous works on scheduling algorithms. Subsequently, the details of the TLCD architecture are shown in Section 3 and examples are given in Section 4. Finally, Section 5 gives the conclusions of this paper.

2. Materials and Methods

So far, many scheduling algorithms have been proposed for various application scenarios [12,13,14]. For example, in [12], the authors propose an energy-efficient adaptive resource scheduler architecture for providing real-time cloud services to Vehicular Clients. The main contribution of the proposed protocol is maximizing the overall communication-plus-computing energy efficiency while meeting the application-induced hard Quality of Service (QoS) [14] requirements on minimum transmission rates, maximum delays, and delay jitters. In [14], the authors propose a joint computing-plus-communication optimization framework exploiting virtualization technologies to ensure users' QoS, achieve maximum energy savings, and attain green cloud computing goals in a fully distributed fashion. These proposed algorithms and architectures can help with rethinking the design of scheduling algorithms in depth.
Generally, scheduling algorithms can be divided into two types based on the time of scheduling: the real-time type and the batch type. In the real-time type, received tasks are immediately assigned to cloud server nodes. Conversely, in the batch type, received tasks are accumulated for a period of time and subsequently dispatched to cloud server nodes. A batch-type scheduling algorithm can obtain better performance than a real-time one, because the assignment results of all tasks can be considered together in a batch-based scheduling algorithm [11,15,16].
So far, many scheduling algorithms [2,3,9,10,11,15,17] have been proposed to dispatch the task to cloud server nodes in the cloud computing network, such as the Min-Min [9,10,15,17], Max-Min [9,15,17], Sufferage [2,9,11,17], and MaxSufferage algorithms [2,3,9,15]. However, these algorithms only consider the factor of the expected completion time (ECT) without considering the load status of the server node. Therefore, the performance is not as good as expected, and the minimum completion time cannot be obtained.
For example, in the Min-Min algorithm, for each unassigned task i in the set T, the minimum ECT over all server nodes (min_ECT) is computed. The task with the overall minimum ECT is then elected and dispatched to the corresponding server node. The newly matched task is eliminated from the set T, and the procedure is repeated until all tasks are dispatched. Under this scheme, the workload easily becomes unbalanced when too many tasks are waiting to be scheduled.
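The Min-Min loop described above can be sketched as follows; the task and node names and the ECT values are hypothetical illustrations, not values from the paper.

```python
def min_min(ect, ready):
    """Min-Min sketch: repeatedly pick the unassigned task whose earliest
    completion time over all nodes (min_ECT) is smallest, dispatch it to
    that node, and remove it from the set T of unassigned tasks.

    ect[i][j] : expected execution time of task i on node j (hypothetical)
    ready[j]  : time at which node j becomes free (updated as tasks land)
    """
    assignment = {}
    unassigned = set(ect)
    while unassigned:
        # Earliest completion time and best node for each remaining task.
        best = {i: min((ect[i][j] + ready[j], j) for j in ready)
                for i in unassigned}
        # Elect the task with the overall minimum completion time.
        task = min(unassigned, key=lambda i: best[i][0])
        ct, node = best[task]
        assignment[task] = node
        ready[node] = ct           # node is busy until this task finishes
        unassigned.remove(task)    # eliminate the matched task from T
    return assignment
```

With many small tasks, the fastest node keeps winning every round, which is exactly the imbalance the paragraph describes.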
In the Max-Min algorithm, the ECT is also used for dispatching tasks. Conversely, the Max-Min algorithm always selects and assigns the task with the overall maximum ECT; thus, the overall ECT increases significantly [9,15,17]. Besides this, both of the above algorithms easily cause higher-capacity server nodes to be assigned more tasks than lower-capacity server nodes, leaving the workloads of the cloud server nodes unbalanced and inefficient. Therefore, an improved algorithm, the Sufferage algorithm [1,17,18,19], was proposed to balance the workloads of the cloud server nodes. In the Sufferage algorithm, the Sufferage Value (SV), calculated as the second-earliest ECT minus the earliest ECT, is used as the factor for dispatching tasks: the task with the largest SV is selected and dispatched to the cloud server node with the minimum ECT. However, the Sufferage algorithm cannot obtain efficient performance when the number of waiting tasks is very large.
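The Sufferage selection rule can likewise be sketched as follows; the tasks, nodes, and times in the test are again hypothetical.

```python
def sufferage(ect, ready):
    """Sufferage sketch: in each round, compute each task's Sufferage
    Value (second-earliest ECT minus earliest ECT) and dispatch the task
    with the largest SV to its earliest-completing node.

    ect[i][j] : expected execution time of task i on node j (hypothetical)
    ready[j]  : time at which node j becomes free
    """
    assignment = {}
    unassigned = set(ect)
    while unassigned:
        best_task, best_sv, best_node, best_ct = None, -1, None, None
        for i in unassigned:
            # Completion times of task i on every node, ascending.
            cts = sorted((ect[i][j] + ready[j], j) for j in ready)
            eect, node = cts[0]
            seect = cts[1][0] if len(cts) > 1 else eect
            sv = seect - eect          # how much task i "suffers" if bumped
            if sv > best_sv:
                best_task, best_sv, best_node, best_ct = i, sv, node, eect
        assignment[best_task] = best_node
        ready[best_node] = best_ct
        unassigned.remove(best_task)
    return assignment
```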
Therefore, the MaxSufferage algorithm [3], an improvement on the Sufferage algorithm, uses three phases to solve the above problem. First, the SV values of all tasks are calculated in the SVi calculation phase. Subsequently, in the MSVi calculation phase, the second-earliest ECT of the task i that has the maximum SV value is elected as the MaxSufferage Value (MSV). In the final phase, the task dispatch phase, the task i with the maximum SV is dispatched to the corresponding server node j when ECTij of server node j is less than MSVi; otherwise, the task i with the maximum ECTij is dispatched to server node j. However, in this algorithm, large tasks are easily dispatched to low-capability server nodes in heterogeneous environments.
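The first two MaxSufferage phases (SV and MSV calculation) can be sketched as a short function; the ECT matrix below is a hypothetical illustration.

```python
def max_sufferage_msv(ect, ready):
    """MaxSufferage phases 1-2 (sketch): compute each task's SV, then
    take the second-earliest ECT of the task with the largest SV as the
    MaxSufferage Value (MSV).

    ect[i][j] : expected execution time of task i on node j (hypothetical)
    ready[j]  : time at which node j becomes free
    """
    sv, second = {}, {}
    for i, row in ect.items():
        cts = sorted(row[j] + ready[j] for j in row)
        sv[i] = cts[1] - cts[0]      # SV = second-earliest ECT - earliest ECT
        second[i] = cts[1]
    i_max = max(sv, key=sv.get)      # task with the maximum SV
    return i_max, second[i_max]      # MSV = its second-earliest ECT
```

In the dispatch phase, each ECTij is then compared against this MSV as described in the paragraph above.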
To solve the problem above, the Advanced MaxSufferage (AMS) algorithm [20] was proposed to remedy this drawback of the MaxSufferage algorithm. However, AMS considers only the task scheduling of service nodes, regardless of the cluster and the type of service. As a result, an incremental algorithm is proposed here to schedule service types, clusters, and service nodes simultaneously, so that all tasks can be dispatched to appropriate server nodes in the cloud computing network even when the server nodes are in a heterogeneous environment.
Subsequently, our algorithm is described and explained as follows.

3. Three-Layer Cloud Dispatching Architecture

The traditional dispatch algorithm of cluster architecture does not dispatch tasks by the capacity of clusters, which may cause the drawbacks of task delay, low reliability, and high makespan. Therefore, a Three-Layer Cloud Dispatching (TLCD) architecture and related scheduling algorithm are proposed for application to cluster-based cloud environments, as shown in Figure 1. Before introducing the detail of the proposed protocol, all notation and their descriptions as used in the algorithm are organized in Table 1.
In the cloud computing environment, improving the accuracy of cloud service search is challenging [7]. Thus, to improve the overall performance, the first thing in the proposed protocol is identifying cloud service categories. Based on [7], the system will randomly select a cloud service task as the core task of the first cluster. After that, the similarities between this randomly selected cloud service task and other cloud service tasks will be calculated. Here, all cloud service tasks which have larger similarity scores than the predefined threshold will be added to the first cluster. Then these tasks will be removed from the candidate set of cloud service tasks. At this time, the system will continue to select another core task randomly from the remaining cloud service tasks to generate a second cloud cluster. Again, the system will apply a similar selection process to add similar cloud service tasks to this cluster. The selection process will repeat until all cloud service tasks are categorized into a cluster. After identifying and categorizing the tasks, all classified tasks will continuously be executed in the proposed TLCD architecture. The details of TLCD are shown as follows.
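The greedy categorization process above can be sketched as follows. The `similarity` function and the threshold are assumptions for illustration; [7] defines the actual similarity measure.

```python
import random

def categorize(tasks, similarity, threshold):
    """Greedy service categorization (sketch of the process based on [7]):
    pick a random core task, group every remaining task whose similarity
    to the core exceeds the threshold, remove them from the candidate
    set, and repeat until every task belongs to a cluster.
    """
    remaining = list(tasks)
    clusters = []
    while remaining:
        core = random.choice(remaining)          # random core task
        cluster = [t for t in remaining
                   if t == core or similarity(core, t) > threshold]
        clusters.append(cluster)
        # Remove clustered tasks from the candidate set and continue.
        remaining = [t for t in remaining if t not in cluster]
    return clusters
```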

3.1. Category Assignment Cluster Layer

The traditional cluster architecture collects and distributes tasks to clusters by way of cloud resource managers. However, the allocation process may be affected by cluster heterogeneity, making the scheduling of tasks ineffective: tasks are simply allocated to an idle cluster by the cloud resource managers, so the scheduling result is not ideal, and the complexity of the cloud computing system increases.
Besides this, the diversity of tasks increases the processing delay. To reduce the delay and the complexity of scheduling, heterogeneous tasks can be classified into different categories according to the demands defined in the Category Assignment Cluster (CAC) layer [7,8]. The category cloud clusters can be divided into three types: SaaS, PaaS, and IaaS. Through these three categories of classification, the difficulty of scheduling heterogeneous tasks and the scheduling delays can be reduced.

3.2. Cluster Selection Layer

After completing the classification in the category assignment cluster layer, the classified tasks can be dispatched to the corresponding category cluster. Subsequently, a Cluster Selection Algorithm (CSA) is proposed to assign the tasks to appropriate clusters using the factors of Reliability (Ri), Cost (Ci), and MakeSpan (Mi) [3,21], as shown in Equations (1)–(3). MakeSpan is the length of time needed to complete a task; as the MakeSpan value increases, the system needs a longer operation time. Since the fault tolerance of the clusters is similar, computing power increases as Reliability gets higher, so we focus on the clusters' computing power [3,21]. Finally, the cost factor is defined as the cost needed for a task to be sent and responded to. Taking these three factors into account, tasks can be assigned to suitable clusters and system efficiency can be enhanced. In addition, users and service providers can customize these three factors based on their own requirements. Accordingly, in the following example, we customize the Reliability (Ri) and Cost (Ci) and assign the tasks to suitable clusters. The reliability and the cost of assignment i are expressed by
$R_i = \left( \sum_{k=1}^{L} R_k n_k \right) / N$  (1)
and
$C_i = \sum_{k=1}^{L} C_k n_k$  (2)
respectively, where $L$ is the number of clusters, $n_k$ is the number of tasks assigned to cluster $k$, and $N$ is the total number of tasks. Moreover, the makespan of assignment $i$ is expressed by
$M_i = \left( \sum_{k=1}^{L} M_k n_k \right) / N$  (3)
Subsequently, we present an example to explain Algorithm 1. In Line (4) of Algorithm 1, we arrange the combinations of tasks across all clusters. The cluster chooses an appropriate task combination and then helps the node to adjust these tasks. Furthermore, Lines (5) to (8) check whether the Ri and Ci of each assignment Ai satisfy Ri ≥ Tr && Ci < Tc. Then, among those candidate assignments, the one with the smallest Mi is scheduled. If two or more groups are eligible, we compare Ri and Mi and choose the Ai with the highest reliability and the least time.
In CSA, users can customize the quality of services by reliability, cost, and MakeSpan factors. Thus, algorithms can meet the requirements of various users and can enhance the efficiency of job scheduling. Subsequently, an example is shown to explain the CSA algorithm and the related assumptions are shown in Table 2.
Algorithm 1. Cluster Selection Algorithm
1: for total tasks N
2:  for total clusters L
3:   get Rk, Mk, and Ck of each cluster k
4:   arrange all tasks in the clusters and assign the number of Ai
5:   for calculating the Ri, Ci, and Mi of the assignment Ai
6:    if Ri ≥ Tr && Ci < Tc in assignment Ai then Ai is a candidate assignment
7:   end for
8:   choose the smallest Mi among candidate assignments Ai
9:  end for
10: end for
11: End
Assume that there are 10 tasks that need to be sent to 3 cloud clusters; the process is as follows:
Step 1: Tr and Tc are set by the user. In this example, Tr and Tc are 22 and 260, respectively.
Step 2: Calculate Ri, Ci, and Mi of each allocation combination.
According to Table 2, assignment A1 assigns five tasks to Cluster 1, two tasks to Cluster 2, and three tasks to Cluster 3. Using Equations (1)–(3) to calculate the averages, the results are R1 = (22 × 5 + 24 × 2 + 25 × 3)/10 = 23.3; C1 = 25 × 5 + 28 × 2 + 32 × 3 = 242; M1 = (30 × 5 + 36 × 2 + 15 × 3)/10 = 26.7.
The same procedure is followed for the other assignments, too.
Step 3: Select the schedule that meets the condition of Ri ≥ 22 and Ci < 260; Here, A1, A2, and A6 are selected.
Step 4: According to the conditions of Step 3, we choose A1 with the highest reliability and smallest MakeSpan. Since the result of assignment A1 is better than others, assignment A1 is elected as the combination to dispatch in this example.
The above procedure gives the most suitable solution when the MakeSpan is the main concern. However, when reliability is the main concern, the MakeSpan and Cost become the filtering factors used to select the schedule with the best reliability. After the CSA layer finishes, the tasks can be dispatched to the corresponding clusters in the cluster selection layer. Subsequently, the appropriate server nodes are elected to complete the tasks in the next layer.
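As a minimal sketch of the CSA filtering in Steps 3 and 4: assuming each candidate assignment's Ri, Ci, and Mi have already been computed with Equations (1)–(3), the thresholds filter the candidates and the smallest Mi (ties broken by higher reliability) wins. The A2 and A3 values below are hypothetical; only A1 reuses the numbers worked out above.

```python
def cluster_selection(assignments, Tr, Tc):
    """CSA sketch: keep the assignments with Ri >= Tr and Ci < Tc, then
    return the one with the smallest Mi, preferring higher reliability
    on ties. Each assignment is a dict with precomputed R, C, M.
    """
    candidates = [a for a in assignments if a['R'] >= Tr and a['C'] < Tc]
    if not candidates:
        return None                       # no assignment meets the thresholds
    # Smallest makespan first; -R breaks ties toward higher reliability.
    return min(candidates, key=lambda a: (a['M'], -a['R']))
```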

3.3. Server Nodes Selection Layer

After finishing the first two layers, the homogeneous tasks can be dispatched to homogeneous cloud clusters. However, the cloud workload can become unbalanced when a large number of tasks exist, owing to inappropriate task assignment. For these reasons, this paper proposes a novel algorithm, Advanced Cluster Sufferage Scheduling (ACSS), to overcome the drawback of the MaxSufferage algorithm [12,17,18], especially in heterogeneous environments. In ACSS, each task can be assigned to an appropriate server node by using the average ECT of the server node Sj, which reduces the influence of inappropriate assignment. The notations and details of the ACSS algorithm are shown in Table 3 and Algorithm 2.
In general, there are three phases in the ACSS algorithm. First, the EECTi (the earliest expected completion time) and SEECTi (the second-earliest expected completion time) on the server nodes Sj are found in order to calculate the SV value in the SVj calculation phase; the detailed process is shown in Lines (8) and (9) of the ACSS algorithm. Then, in the second phase, called the MSVi calculation phase, the MSV value is set to the second-earliest ECT of the task i that has the maximum SV among all SV values.
In the final phase, the task dispatching phase, task i is dispatched to Sj when MSVi > ECTij and EECTi > AECTj of Sj. Conversely, when EECTi < AECTj, task i is dispatched to the server node j whose ECTij is larger than and closest to AECTj. As a result, the main concept differs from those of previous algorithms; the details are shown in Lines (11) and (12) of Algorithm 2.
Algorithm 2. Advanced Cluster Sufferage Scheduling
1: for all unassigned tasks i
2:  for all server nodes Sj
3:   TCTij = ECTij + rj
4:   do scheduling for all job assignments
5:    mark all server nodes as unassigned;
6:    for each task i in Sj
7:     find the server node Sj that gives the earliest completion time;
8:     calculate the Sufferage Value (SVi = SEECTi − EECTi);
9:      if the maximum SVi value is shared by two or more tasks
         then choose the maximum SEECTi among them as the MSV;
        else the SEECTi of the task i with the maximum SVi is compared with the other EECTi values;
10:     end if;
11:     if (MSVi < ECTij of Sj)
         then the task i with the maximum ECT is dispatched to server node Sj;
12:     else if (MSVi > ECTij) && (EECTi > AECTj of Sj)
         then the task i is dispatched to server node Sj;
13:     else if (MSVi > ECTij) && (EECTi < AECTj of Sj)
         then the task i with ECTij > AECTj of Sj && ECTij ≈ AECTj of Sj is
         dispatched to server node Sj;
14:     end if;
15:    end for
16:   end do
17:  end for
18:  rj = rj + ECTij
19:  update TCTij = ECTij + rj
20: end for
21: End
Based on the procedure above, the completion time can be reduced and load balancing achieved efficiently in the heterogeneous cloud computing network. For security, a Message Authentication Code (MAC) algorithm [18,22] with a session key can be employed to confirm that a message truly comes from the sender and has not been modified. Subsequently, an example is provided to aid understanding of the ACSS algorithm in the server node selection layer.
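The three dispatching cases in Lines (11)–(13) of Algorithm 2 can be summarized as a small classification sketch; all numeric inputs in the test are hypothetical except the MSV/ECT pair taken from the worked example in Section 4.

```python
def acss_case(msv_i, ect_ij, eect_i, aect_j):
    """Classify which ACSS dispatching case applies to task i and node j.

    Case 1: MSVi > ECTij and EECTi > AECTj -> dispatch task i to node j.
    Case 2: MSVi < ECTij -> dispatch the task with the maximum ECT to node j.
    Case 3: MSVi > ECTij and EECTi < AECTj -> dispatch task i to the node
            whose ECTij is larger than and closest to AECTj.
    """
    if msv_i < ect_ij:
        return 2
    return 1 if eect_i > aect_j else 3
```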

4. Example and Comparison Results

In this section, four heterogeneous environments are discussed: HiHi (High-heterogeneity tasks, High-heterogeneity server nodes), LoLo, HiLo, and LoHi [8,19]. In general, HiHi is the most complex case among all environments. Therefore, an example of task assignment including four server nodes and twelve tasks is discussed under the HiHi heterogeneous environment. First, the SV value and the MSV value are calculated in the SVj calculation phase and the MSVi calculation phase, respectively. Subsequently, the AECTj of Sj is calculated in the task dispatching phase. In addition, the related parameters of the HiHi environment are shown in Table 4.

4.1. The SVi Calculation Phase

Step 1. List the expected execution times for all tasks i on Sj, as shown in Table 3.
Step 2. Calculate the SV value for each task. For example, the SV value of Task a is equal to SEECTi minus EECTi, which is 23526 − 22345 = 1181, and the same procedures will be executed for tasks b to l to calculate the SV values. The calculation results are shown in Table 5.

4.2. The MSVi Calculation Phase

Step 1. The second-earliest ECT value of task i can be selected as the MSV value while task i has the maximum SVi value among all SV values. As shown in Table 6, the task with the largest SVi value is task g; thus, its second-earliest ECT (here, ECTgB = 21486) is selected as the MSV value.

4.3. The Task Dispatching Phase

Basically, there are three cases that need to be discussed in different heterogeneous environments, and the cases are illustrated and discussed as follows, step by step.
Case 1. MSVi > ECTij of Sj and EECTi > AECTj of Sj
Compare the MSV value found in the MSVi calculation phase with the earliest expected completion times of the other tasks. Task i can be dispatched to the appropriate server node j while MSVi > ECTij of Sj and EECTi > AECTj of Sj. Therefore, task d is dispatched to Node D, since MSVi > ECTdD and ECTdD > AECTD, as shown in Table 7 and Table 8.
Case 2. MSVi < ECTij of Sj
Compare the MSV value found in the MSVi calculation phase with the earliest expected completion times of the other tasks. The task i with the maximum ECT can be dispatched to the appropriate server node j when MSVi < ECTij of server node j. Therefore, task l is assigned to Node A in Table 9 because its ECTlA is greater than the MSV value (43336 > 21486).
Case 3. MSVi > ECTij of Sj and EECTi < AECTj of Sj
Compare the MSV value found in the MSVi calculation phase with the earliest expected completion times of the other tasks. Under the conditions MSVi > ECTij and EECTi < AECTj of Sj, task i in Table 10 is dispatched to the server node j whose ECTij is larger than and closest to AECTj. Therefore, task j is assigned to Node B in Table 11 because ECTjB is larger than and closest to AECTB.
Subsequently, the above examples are simulated in our experiment. The number of tasks is set to 50 to 100, and the computing ability of the 5 cloud server nodes is set to {500~1000} units to model the high-heterogeneity environment. The comparisons of MakeSpan and load balancing among the Sufferage, MaxSufferage, and ACSS algorithms are simulated 50 times and the average makespan is taken, as shown in Figure 2, Figure 3, Figure 4 and Figure 5. In Figure 2, the proposed algorithm achieves a better MakeSpan than the others, especially in large-task environments. In addition, the load balancing index is defined as rmin/rmax, where rmin is the shortest completed task time of all tasks and rmax is the longest completed task time of all tasks [1,15].
In general, the value of the load balancing index is a number between 0 and 1, with 0 being the worst load balance and 1 being the optimal load balance.
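A minimal sketch of this index, assuming a list of hypothetical per-node completion times:

```python
def load_balancing_index(completion_times):
    """Load balancing index from the paper: rmin/rmax, the shortest
    completion time divided by the longest. 1.0 is perfectly balanced,
    values near 0 indicate severe imbalance.
    """
    return min(completion_times) / max(completion_times)
```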
As shown in Figure 3, Figure 4 and Figure 5, the ACSS algorithm obtains the best load balancing index (0.88) over Sufferage (0.87) and MaxSufferage (0.83). Because the ACSS algorithm uses the distribution of the average value, the MakeSpan of each node achieves similar results. The completion time of MaxSufferage is better than that of Sufferage, but their load balancing results are similar, because MaxSufferage does not consider the load status of the nodes when selecting tasks. As a result, the proposed ACSS algorithm obtains the best completion time and load balance among these algorithms, even when the heterogeneous cloud computing network is complex.
Besides this, the formula $RU = \left( \sum_{j=1}^{N} TC_j \right) / (N \cdot m) \times 100\%$ is used to calculate the ratio of resource utilization and to show whether the use of resources in this paper is maximized. In RU, $TC_j$ represents the total expected completion time of virtual machine j, N represents the number of virtual machines, and m represents the final completion time of the virtual machines. The resulting resource utilization ratios are shown in Figure 6: the ratio for ACSS reaches 89%, which is better than the other algorithms. This is because the average value is used to consider the allocation status of the nodes in the ACSS algorithm.
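The RU ratio can be computed with a short sketch; the TCj values and m below are hypothetical, not the paper's measurements.

```python
def resource_utilization(tc, m):
    """Resource-utilization ratio RU = (sum of TCj) / (N * m) * 100%,
    where tc[j] is virtual machine j's total expected completion time,
    N = len(tc) is the number of virtual machines, and m is the final
    completion time of the virtual machines.
    """
    n = len(tc)
    return sum(tc) * 100 / (n * m)
```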
Subsequently, the parameter of matching proximity [11] is used to evaluate the degree of proximity of various scheduling algorithms. In Figure 7, the MET (Minimum Execution Time) and ECT (Expected Computing Time) are used to estimate whether the task can be quickly matched. A large value for matching proximity means that a large number of tasks are assigned to the machine that executes them faster, as expressed by
$\text{Matching Proximity} = \dfrac{\sum_{i \in \mathrm{Tasks}} ECT_{i,S_i}}{\sum_{i \in \mathrm{Tasks}} MET_i}$
As shown in Figure 7, the matching ratios of the three algorithms are close to 1, so all three algorithms have good matching efficiency. Subsequently, the performance and complexity of the algorithms are compared in Table 12. The comparison shows that ACSS obtains the best performance among all algorithms in the evaluation factors of makespan, load balance, resource utilization, and matching proximity. Based on [9], Big O notation is used to estimate the complexity of these algorithms. The complexities of MaxSufferage and ACSS are O(n²rm), where the parameter r indicates that the alternative condition is selected when (MSVi > ECTij) && (EECTi < AECTj of Sj) is satisfied. As a result, the complexity of the ACSS algorithm is approximately equal to that of the Sufferage algorithm.
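The matching proximity metric can be sketched as follows, with a hypothetical ECT matrix and assignment; values near 1 mean most tasks landed on their fastest machine.

```python
def matching_proximity(ect, assignment):
    """Matching proximity: the sum of each task's ECT on its assigned
    node divided by the sum of each task's minimum execution time (MET)
    over all nodes.

    ect[i][j]     : expected execution time of task i on node j
    assignment[i] : node that task i was dispatched to
    """
    assigned = sum(ect[i][assignment[i]] for i in ect)
    met = sum(min(ect[i].values()) for i in ect)
    return assigned / met
```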

5. Conclusions

In this study, the TLCD architecture is proposed to provide secure and reliable scheduling and to remedy the slow response of cloud systems. Basically, TLCD includes three layers of procedures. In the first layer, called the CAC layer, the system dispatches heterogeneous tasks into appropriate category clusters to reduce task delay and overloading. Subsequently, the CSA algorithm is proposed in the CS layer to dispatch each task to an appropriate cluster, enhancing reliability and reducing cost and completion time. In the final layer, defined as the SNS layer, the system improves load balancing and reduces completion time by means of the MSV and the average ECT of Sj. Simulation results show that the proposed algorithms obtain the best results among all compared algorithms in the evaluation factors of makespan, load balance, resource utilization, and matching proximity under heterogeneous environments.

Author Contributions

M.-L.C. and Y.-F.H. designed the framework and wrote the manuscript. H.-C.H. and W.-C.T. verified the results of our work and conceived the experiments together. M.-L.C. and Y.-F.H. discussed the results and contributed to the final manuscript.

Funding

This research was funded by Ministry of Science and Technology of Taiwan under Grant MOST 106-2221-E-324-020.

Conflicts of Interest

The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Petkovic, I. CRM in the cloud. In Proceedings of the IEEE 8th International Symposium on Intelligent Systems and Informatics, Subotica, Serbia, 10–11 September 2010; pp. 365–370. [Google Scholar]
  2. Casanova, H.; Legrand, A.; Zagorodnov, D.; Berman, F. Heuristics for scheduling parameter sweep applications in grid environment. In Proceedings of the 9th Heterogeneous Computing Workshop, Cancun, Mexico, 1 May 2000; pp. 349–363. [Google Scholar]
  3. Chiang, M.L.; Luo, J.A.; Lin, C.B. High-Reliable Dispatching Mechanisms for Tasks in Cloud Computing. In Proceedings of the BAI2013 International Conference on Business and Information, Bali, Indonesia, 7–9 July 2013; p. 73. [Google Scholar]
  4. Buyya, R.; Ranjan, R.; Calheiros, R.N. Modeling and simulation of scalable Cloud computing environments and the CloudSim toolkit: Challenges and opportunities. In Proceedings of the International Conference on High Performance Computing & Simulation, Leipzig, Germany, 21–24 June 2009; pp. 1–11. [Google Scholar]
  5. Jones, M.T. Google’s Geospatial Organizing Principle. IEEE Comput. Graph. Appl. 2017, 27, 8–13. [Google Scholar] [CrossRef]
  6. Lee, Y.H.; Huang, K.C.; Wu, C.H.; Kuo, Y.H.; Lai, K.C. A Framework of Proactive Resource Provisioning in IaaS Clouds. Appl. Sci. 2017, 7, 777. [Google Scholar] [CrossRef]
  7. Alfazi, A.; Sheng, Q.Z.; Qin, Y.; Noor, T.H. Ontology-Based Automatic Cloud ServiceCategorization for Enhancing Cloud ServiceDiscovery. In Proceedings of the IEEE 19th International Enterprise Distributed Object Computing Conference, Adelaide, SA, Australia, 21–25 September 2015; pp. 151–158. [Google Scholar]
  8. Salton, G.; Buckley, C. Term-weighting approaches in automatic text retrieval. Inf. Process. Manag. 1988, 24, 513–523. [Google Scholar] [CrossRef]
  9. Reda, N.M.; Tawfik, A.; Marzok, M.A.; Khamis, S.M. Sort-Mid tasks scheduling algorithm in grid computing. J. Adv. Res. 2015, 6, 987–993. [Google Scholar] [CrossRef] [PubMed]
  10. Anousha, S.; Ahmadi, M. An improved Min-Min task scheduling algorithm in grid computing. Lect. Notes Comput. Sci. Grid Pervasive Comput. 2013, 7861, 103–113. [Google Scholar]
  11. Merajiand, S.; Salehnamadi, M.R. A batch mode scheduling algorithm for grid computing. J. Basic Appl. Sci. Res. 2013, 3, 173–181. [Google Scholar]
  12. Shojafar, M.; Cordeschi, N.; Baccarelli, E. Energy-efficient Adaptive Resource Management for Real-time Vehicular Cloud Services. IEEE Trans. Cloud Comput. 2018. [Google Scholar] [CrossRef]
  13. Shojafar, M.; Javanmardi, S.; Abolfazli, S.; Cordeschi, N. FUGE: A joint meta-heuristic approach to cloud job scheduling algorithm using fuzzy theory and a genetic method. Clust. Comput. 2015, 18, 829–844. [Google Scholar] [CrossRef]
  14. Shojafar, M.; Canali, C.; Lancellotti, R.; Abawajy, J. Adaptive Computing-plus-Communication Optimization Framework for Multimedia Processing in Cloud Systems. IEEE Trans. Cloud Comput. 2018. [Google Scholar] [CrossRef]
  15. Maheswaran, M.; Ali, S.; Siegel, H.J.; Hensgen, D.; Freund, R.F. Dynamic Mapping of a Class of Independent Tasks onto Heterogeneous Computing Systems. J. Parallel Distrib. Comput. 1999, 59, 107–131. [Google Scholar] [CrossRef]
  16. Braun, T.D.; et al. A comparison study of static mapping heuristics for a class of meta-tasks on heterogeneous computing systems. In Proceedings of the Heterogeneous Computing Workshop (HCW ‘99), San Juan, Puerto Rico, 12 April 1999; pp. 15–29. [Google Scholar]
  17. Etminani, K.; Naghibzadeh, M. A Min-Min Max-Min selective algorithm for grid task scheduling. In Proceedings of the Third IEEE/IFIP International Conference in Central Asia on Internet, Tashkent, Uzbekistan, 26–28 September 2007; pp. 138–144. [Google Scholar]
  18. Lan, J.; Zhou, J.; Liu, X. An area-efficient implementation of a Message Authentication Code (MAC) algorithm for cryptographic systems. In Proceedings of the IEEE Region 10 Conference (TENCON), Singapore, 22–25 November 2016; pp. 1977–1979. [Google Scholar]
  19. Li, S.; Liu, J.; Wang, S.; Li, D.; Huang, T.; Dou, W. A Novel Node Selection Method for Real-Time Collaborative Computation in Cloud. In Proceedings of the International Conference on Advanced Cloud and Big Data (CBD), Chengdu, China, 13–16 August 2016; pp. 98–103. [Google Scholar]
  20. Chiang, M.L.; Hsieh, H.C.; Tsai, W.C.; Ke, M.C. An Improved Task Scheduling and Load Balancing Algorithm under the Heterogeneous Cloud Computing Network. In Proceedings of the IEEE 8th International Conference on Awareness Science and Technology (iCAST2017), Taichung, Taiwan, 8–10 November 2017; p. 61. [Google Scholar]
  21. Deng, J.; Huang, S.C.H.; Han, Y.S.; Deng, J.H. Fault-Tolerant and Reliable Computation in Cloud Computing. In Proceedings of the IEEE Globecom 2010 Workshop on Web and Pervasive Security, Miami, FL, USA, 6–10 December 2010; pp. 1601–1605. [Google Scholar]
  22. Yoon, E.J.; Yoo, K.Y. An Efficient Diffie-Hellman-MAC Key Exchange Scheme. In Proceedings of the Fourth International Conference on Innovative Computing, Information and Control (ICICIC), Kaohsiung, Taiwan, 7–9 December 2009; pp. 398–400. [Google Scholar]
Figure 1. Three-Layer Cloud Dispatching Architecture.
Figure 2. The comparison results of MakeSpan among Sufferage, MaxSufferage, and ACSS algorithms for (a) n = 4 and number of tasks = 50 and (b) n = 4 and number of tasks = 100.
Figure 3. The results of load balancing index in the Sufferage scheduling algorithm for (a) n = 4 and number of tasks = 50 and (b) n = 4 and number of tasks = 100.
Figure 4. The results of load balancing index in the MaxSufferage scheduling algorithm for (a) n = 4 and number of tasks = 50 and (b) n = 4 and number of tasks = 100.
Figure 5. The results of load balancing index in the Advanced Cluster Sufferage Scheduling (ACSS) scheduling algorithm for (a) n = 4 and number of tasks = 50 and (b) n = 4 and number of tasks = 100.
Figure 6. The ratio of Resource Utilization in the ACSS scheduling algorithm for (a) n = 4 and number of tasks = 50 and (b) n = 4 and number of tasks = 100.
Figure 7. The ratio of Matching Proximity among all scheduling algorithms.
Table 1. The descriptions of notation in the cluster selection layer.
Notation   Description
N          The total number of tasks
L          The total number of clusters
k          The cluster k
n_k        The number of tasks assigned to cluster k
T_r        The threshold value of the overall reliability
T_c        The threshold value of the overall cost
A_i        The assignment i
R_k        The reliability of cluster k
M_k        The makespan of cluster k
C_k        The cost of cluster k
Table 2. Example of Cluster Selection Algorithm.
                   Cluster 1       Cluster 2       Cluster 3       Overall (R, C, M)
(R_k, C_k, M_k)    (22, 25, 30)    (24, 28, 36)    (25, 32, 15)
A1                 5               2               3               (23.3, 242, 26.7)
A2                 8               2               0               (22.4, 256, 31.2)
A3                 4               4               2               (23.4, 276, 29.4)
A4                 3               6               1               (23.5, 275, 32.1)
A5                 5               1               4               (23.4, 281, 24.6)
A6                 10              0               0               (22, 250, 30)
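The aggregation behind the Cluster Selection example can be sketched as follows. This is a minimal sketch under inferred assumptions: the rows A2–A6 of Table 2 are reproduced exactly if the overall reliability and makespan are task-weighted averages of the per-cluster values and the overall cost is the sum of per-task costs; the function name `evaluate` is hypothetical.

```python
# Hypothetical sketch of the aggregation in Table 2 (assumptions noted above).
clusters = [  # (R_k, C_k, M_k) for clusters 1..3, as listed in Table 2
    (22, 25, 30),
    (24, 28, 36),
    (25, 32, 15),
]

def evaluate(assignment):
    """assignment[k] = number of tasks dispatched to cluster k.
    Returns the overall (reliability, cost, makespan) of the assignment."""
    n = sum(assignment)
    r = sum(nk * rk for nk, (rk, _, _) in zip(assignment, clusters)) / n
    c = sum(nk * ck for nk, (_, ck, _) in zip(assignment, clusters))
    m = sum(nk * mk for nk, (_, _, mk) in zip(assignment, clusters)) / n
    return round(r, 1), c, round(m, 1)

print(evaluate([8, 2, 0]))   # assignment A2 -> (22.4, 256, 31.2)
print(evaluate([10, 0, 0]))  # assignment A6 -> (22.0, 250, 30.0)
```

Under these rules an assignment is feasible when its overall reliability meets T_r and its overall cost stays below T_c, and the dispatcher picks the feasible assignment with the smallest overall makespan.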
Table 3. The notations of ACSS algorithm in the server nodes selection layer.
Notation   Description
TCT_ij     The task completion time of task i on server node S_j
ECT_ij     The expected completion time of task i on server node S_j
r_j        The expected time at which server node S_j becomes ready to execute the next task
AECT_j     The average expected completion time on server node S_j
EECT_i     The earliest expected completion time of task i
SEECT_i    The second earliest expected completion time of task i
SV_i       The Sufferage Value of task i
MSV_i      The MaxSufferage Value of task i
Table 4. Illustration of the expected execution times of tasks.
          Node A    Node B    Node C    Node D
Task a    22345     23526     24323     25328
Task b    16667     17930     18696     20187
Task c    31083     31897     32034     34678
Task d    24712     25156     25774     26330
Task e    17018     18069     18664     20348
Task f    12050     13289     14911     15918
Task g    19035     21486     22371     24031
Task h    13911     14602     15655     17678
Task i    8016      8536      8948      9685
Task j    13618     14596     15331     16558
Task k    27861     28156     28655     29791
Task l    43336     44731     46777     49659
Table 5. Calculation of the SV values of tasks.
          Node A    Node B    SV
Task a    22345     23526     1181
Task b    16667     17930     1263
Task c    31083     31897     814
Task d    24712     25156     444
Task e    17018     18069     1051
Task f    12050     13289     1239
Task g    19035     21486     2451
Task h    13911     14602     691
Task i    8016      8536      520
Task j    13618     14596     978
Task k    27861     28156     295
Task l    43336     44731     1395
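The SV column of Table 5 follows directly from the definitions in Table 3: SV_i = SEECT_i − EECT_i, the gap between a task's second earliest and earliest expected completion times. A minimal sketch (the helper name `sufferage` is ours, not from the paper):

```python
# Sufferage Value: second earliest ECT minus earliest ECT (Table 3).
def sufferage(ect_values):
    ordered = sorted(ect_values)
    return ordered[1] - ordered[0]  # SEECT_i - EECT_i

# ECTs on Nodes A and B for a few tasks, taken from Table 5.
print(sufferage((22345, 23526)))  # Task a -> 1181
print(sufferage((16667, 17930)))  # Task b -> 1263
print(sufferage((19035, 21486)))  # Task g -> 2451
```

A large SV means the task "suffers" the most if it misses its best node, so such tasks are scheduled with higher priority.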
Table 6. Calculation of the MSV values of tasks.
          Node A    Node B    SV      MSV
Task a    22345     23526     1181    -
Task b    16667     17930     1263    -
Task c    31083     31897     814     -
Task d    24712     25156     444     -
Task e    17018     18069     1051    -
Task f    12050     13289     1239    -
Task g    19035     21486     2451    21486
Task h    13911     14602     691     -
Task i    8016      8536      520     -
Task j    13618     14596     978     -
Task k    27861     28156     295     -
Task l    43336     44731     1395    -
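The MSV column of Table 6 can be sketched as follows. This is a hedged reading of the example: the task with the maximum SV (task g, SV = 2451) is the only one assigned an MSV, and its MSV equals its second earliest expected completion time (21486); the names `sv`, `chosen`, and `msv` are ours.

```python
# ECTs on Nodes A and B for all tasks, taken from Table 6.
ect = {
    "a": (22345, 23526), "b": (16667, 17930), "c": (31083, 31897),
    "d": (24712, 25156), "e": (17018, 18069), "f": (12050, 13289),
    "g": (19035, 21486), "h": (13911, 14602), "i": (8016, 8536),
    "j": (13618, 14596), "k": (27861, 28156), "l": (43336, 44731),
}

def sv(times):
    """Sufferage Value: second earliest ECT minus earliest ECT."""
    ordered = sorted(times)
    return ordered[1] - ordered[0]

chosen = max(ect, key=lambda task: sv(ect[task]))  # task with largest SV
msv = sorted(ect[chosen])[1]                       # its second earliest ECT

print(chosen, msv)  # g 21486
```

Reading the table this way, MaxSufferage first isolates the most penalty-sensitive task and then uses its SEECT as the MSV reference for the comparisons in Table 7.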
Table 7. Comparison of the MSV values of tasks.
Table 8. Comparison of the average ECT values of tasks in Node D under Case 1.
Table 9. Comparison of the average ECT values of tasks in Node A under Case 2.
Table 10. Comparison of the average ECT values of tasks in Node D under Case 3.
Table 11. Assigning tasks to the server nodes.
Table 12. The performance and complexity comparisons of all algorithms.
                                Sufferage      MaxSufferage    ACSS
Experiments
  MakeSpan                      7.57 × 10^4    6.97 × 10^4     6.96 × 10^4
  Load Balance                  0.87           0.84            0.88
  Resource Utilization          91%            95.5%           96%
  Matching Proximity            0.97           0.965           0.963
Complexity                      O(n^2 m)       O(n^2 rm)       O(n^2 rm)

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).