Article

Energy-Efficient Task Scheduling and Resource Allocation for Improving the Performance of a Cloud–Fog Environment

1
Department of Information Technology, Sri Krishna College of Engineering and Technology, Coimbatore 641008, India
2
Department of Information Technology, Karpagam College of Engineering, Coimbatore 641032, India
3
Department of Computer Science and Engineering, Sri Krishna College of Engineering and Technology, Coimbatore 641008, India
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(11), 2340; https://doi.org/10.3390/sym14112340
Submission received: 22 September 2022 / Revised: 7 October 2022 / Accepted: 17 October 2022 / Published: 7 November 2022

Abstract
Inadequate resources and facilities with zero latency affect the efficiency of task scheduling (TS) and resource allocation (RA) in the fog paradigm. Incoming tasks can be completed within their deadlines only if the resource availability in the cloud and fog is symmetrically matched with them. A container-based TS algorithm (CBTSA) determines the symmetry relationship of the task/workload with the fog node (FN) or the cloud to decide where workloads are scheduled (whether in the fog or the cloud). Furthermore, by allocating and de-allocating resources, the RA algorithm reduces workload delays while increasing resource utilization. However, the unbounded cloud resources and the computational difficulty of finding resource usage have not been considered in CBTSA. Hence, this article proposes an enhanced CBTSA with intelligent RA (ECBTSA-IRA), which symmetrically balances energy efficiency, cost, and the performance-effectiveness of TS and RA. Initially, this algorithm determines whether the workloads are accepted for scheduling. An energy-cost–makespan-aware scheduling algorithm is proposed that uses a directed acyclic graph (DAG) to represent the dependencies of the tasks in a workload. Workloads are prioritized, and a node is selected to process each prioritized workload. The selected node may be an FN or the cloud, as decided by an optimum efficiency factor that trades off the schedule length, cost, and energy. Moreover, a Markov decision process (MDP) was adopted to allocate the best resources using a reinforcement learning scheme. Finally, the experimental findings reveal the efficacy of the presented algorithms compared to the existing CBTSA in terms of various performance metrics.

1. Introduction

In the era of network technology, cloud computing has experienced extraordinary growth owing to the explosive usage of the web and innovations in communication technology for solving large-scale problems. This has made both software and hardware resources available over the web for cloud clients. Typically, it means a web-based computing framework that distributes data, resources, and services to different client systems upon request. It minimizes both the computation and processing difficulties of conventional information processing systems. However, several challenges have been observed in the development of cloud computing with the Internet of Things (IoT). The IoT paradigm is altering the mode in which clients interact with the physical world, increasing the growth of connectivity towards it [1].
It can lead to novel systems with infinite abilities and huge influences by facilitating many client contributions. It exploits cloud computing to manage a huge quantity of data and services. On the contrary, it may cause an increase in maximum latency and system failure challenges. As a result, fog computing has been developed to handle these challenges [2]. Fog computing is an expansion of cloud computing in which the computations and processing can be shifted between the network hub and the edge to support real-world uses and reduce the data center complexity [3]. Generally, fog computing involves a vast number of FNs configured using various cloud servers at various localities to provide different information facilities and systems to the clients.
Moreover, virtualization is utilized in the fog paradigm, in which FNs are imperceptible to the clients and the clients will only acquire the cloud services for processing their additional tasks, e.g., in business process management [4]. Considering that FNs can make resources available to different systems and that each system demands their use for processing the observed information, virtualization will provide a remote configuration for avoiding the impacts of resource scarcity. In state-of-the-art techniques, virtual machines (VMs) are broadly used for numerous real-world purposes. However, the use of VMs is very complex for workloads that need to be executed directly. Therefore, the fog paradigm has been deployed to provide a quicker reaction and supply to many systems than VMs [5]. Although VMs only need a few seconds to begin and pause, this delay is intolerable for latency-sensitive workloads. Further, regardless of the VM's higher seclusion ability, the VM provisioning cost is very expensive, so VM efficiency decreases comparatively as the number of VMs increases.
Owing to these challenges of VMs, RA and TS are major concerns in the cloud–fog paradigm. In recent VM applications, a portable virtualization method known as a container has been employed. Yin et al. [6] modeled a fog paradigm based on container qualities and suggested a CBTSA for latency-sensitive fog systems. In this algorithm, the workload implementation process was split into two phases: (i) estimating whether the workload was decided or discarded and (ii) allocating the decided workloads to either the FN or the cloud. Moreover, every decided workload was executed within the latency limit. Further, a resource reassignment method was adopted for every workload's RA wherein the FN was predicted to enhance resource use and minimize workload delays. However, this algorithm ignores unbounded cloud services and neglects computational difficulties.
Therefore, in this article, an ECBTSA-IRA is proposed to enhance CBTSA by considering computation time reduction. At first, it determines whether the workloads are decided or not. Then, an energy-cost–makespan-aware scheduling algorithm is applied in which the decided workloads are denoted as graphs using a DAG. In this algorithm, two phases are performed—workload prioritizing and node selection. First, the workloads in the DAG are prioritized depending on the power criteria, cost criteria, computation, and communication period. Then, these prioritized workloads are assigned to a suitable processing node, i.e., either cloud or fog, to achieve the best efficiency factor that computes the tradeoff among the schedule interval, cost, and energy. Once the TS is completed, the MDP is developed, which considers the RA problem, and different reinforcement learning methods, such as QL, SARSA, ESARSA, and MC, are applied to allocate the optimal resources. Moreover, it satisfies the low-delay criteria of heterogeneous IoT uses and allocates resources efficiently.
The remaining sections are organized as follows: Section 2 reviews prior research related to RA schemes in cloud–fog paradigms. Section 3 explains the presented algorithms, and Section 4 presents their efficiency. Section 5 concludes the article.

2. Literature Survey

Hasan and Huh [7] suggested a heuristic-based RA of VM selection and a VM assignment approach for reducing the total energy consumption and operating costs while satisfying the client-level service level agreement (SLA). First, an architectural model was defined for optimal RA and control. Then, heuristic-based energy-aware resource provisioning was examined without the negotiated SLA. Moreover, dynamicity and self-control modifications were provided to allocate resources for dynamic and unpredictable workloads.
Alsaffar et al. [8] noted that RA depends on the interaction between fog and cloud systems. In this structure, novel algorithms, namely selection policies of the linearized choice tree, were adopted depending on the number of facilities, execution interval, and VM’s ability for directing and assigning the client demand. Further, an algorithm was presented for allocating the resources to meet SLA and quality-of-service (QoS), including modification of the big data sharing in fog and cloud systems.
Yu et al. [9] formulated a fog-enabled effective price reduction challenge for a cloud source, in which the price included the power of cloud servers, the system spectrum, and the income failure (because of the propagation latency and the monetary reward to fog systems). After that, a parallel and distributed scheme was suggested based on the proximal Jacobian alternating direction method of multipliers (PJ-ADMM). Hoang and Dang [10] developed a region-based cloud algorithm for TS in fog computing. At first, the fog-based region and cloud (FBRC) structure was developed to bring resources closer. Cases were considered in which the computations were performed at remote data centers, in neighboring regions, or both. Moreover, the TS problem was formulated as an integer program.
Pham et al. [11] developed a cost–makespan-aware scheduling (CMaS) scheme for TS in fog computing. In this algorithm, a workload reallocation method was introduced to filter the CMaS scheme outcomes that satisfied the client-described target limits. Moreover, a balance between essential costs was achieved for the usage of cloud resources and the efficiency of application execution. Ni et al. [12] developed a RA method in the fog paradigm depending on the valued duration of Petri Nets, which were used for choosing the satisfying resources autonomously by accounting for the expense and duration needed to execute the workload and the integrity analysis of fog resources and users. Moreover, it was built according to the characteristics of fog resources.
Sun and Zhang [13] developed the RA framework depending on the repetitive players in the fog paradigm. Initially, a system framework was presented depending on deep learning that used the qualities of cloud and fog servers. After that, a reward and punishment method was established based on the resource set-funding mechanism for incorporating sporadic resources to construct the variable resource group, which optimized the spare resources in the neighboring system. Further, an incentive method was applied to motivate several resource providers to distribute their resources with the resource group and manage the resource followers since they vigorously executed their workloads.
Nie et al. [14] designed a VM allocation algorithm based on multi-dimensional resources that considered the diversity of users’ requests. At first, the utilization of each dimension resource of physical machines was considered and a D-dimensional resource state model was constructed. Then, an energy-resource state metric was introduced and an energy-aware multi-dimensional RA named MRBEA was adopted to assign the resources according to the resource state and energy consumption of physical machines.
Liu et al. [15] developed a general multi-user system framework for achieving efficient TS in heterogeneous fog systems. First, processing competence was applied to combine computing resources and transfer facilities. Then, a dispersive stable TS (DATS) strategy was applied to lessen the service latency using two major elements: a processing efficacy-based progressive computing resources competition (PCRC) and a synchronized TS (STS).
Yang et al. [16] developed a widespread systematic framework for precisely analyzing the total power efficacy of homogeneous fog systems. First, the tradeoff between efficiency and power expense in the combined workload offloading was analyzed. Then, a maximal energy-efficient TS (MEETS) scheme was employed to derive the best-assigned choice for a workload.
Jia et al. [17] investigated the computing RA challenge in a three-layer fog computing network. The key goal of this analysis was to develop a RA scheme to increase cost efficiency. For this purpose, the RA challenge was formulated as a deferred acceptance-based double-matching scheme (DA-DMS) according to the price efficiency obtained by evaluating the efficacy and price of the fog computing systems.
Yang et al. [18] analyzed the collaborative TS problem for general homogeneous fog networks. In this analysis, fair network efficiency in provision latency and power usage was achieved. To achieve this, client data, neighborhood implementation, incentive limit, queuing, workload offloading, and wireless communications were considered. Moreover, delay energy-balanced TS (DEBTS) was adopted for TS. In this method, the evaluation of the control variable was integrated with the Lyapunov optimizer to reduce power usage and provision latency in fog networks.
Balevi and Gitlin [19] presented a stochastic geometry analysis for computing the optimal amount of FNs while the end systems transmitted their packets to the FNs. In this model, FNs and end systems were considered points in a two-dimensional Euclidean space. Through this model, the average data rate was enhanced and the transmission delay was minimized. Moreover, the optimal amount of FNs was reduced for high path loss exponent channels denoting that FNs should have been chosen among the nodes that had maximum computational power for these channels.
Wang et al. [20] designed the dynamic TS strategy depending on a weighted bi-graph (DTSWB), which formulated the scheduling dilemma as the maximum WB harmonizing challenge. This dilemma was resolved using different operations: state data acquisition of offloaded workloads and network operators, correlation finding, revenue matrix determination, and best harmonizing. Li et al. [21] suggested an energy-efficient computation offloading and RA (ECORA) method in the fog paradigm. In ECORA, the computation offloading challenge was decoupled into sub-problems of RA and offloading decisions.
Zhou et al. [22] developed a new algorithm called the improved TS algorithm (ITSA) using the gain value of a workload swap. Initially, the idea of the gain value of a workload swap was introduced. After that, the workload with the least gain value and the workload with the highest gain value were combined to create a workload pair. Further, scheduling was performed by the greedy mechanism.
Ren et al. [23] designed an improved three-layer fog-to-cloud structure and a schedule fit algorithm to use fog-to-cloud resources effectively, achieving QoS regarding delay and service failure probability. Guevara and da Fonseca [24] presented two schedulers based on integer linear programming, which schedule workloads either in the cloud or on fog devices. The schedulers used the class of facilities to choose the processing components on which the workloads must be performed.
Movahedi and Defude [25] presented a fog-based structure to handle TS requests and give the best decisions. Then, the TS issue was modeled as an integer linear programming optimization, which took both time and fog energy usage into account. Further, this issue was resolved by an opposition-based chaotic whale optimization algorithm (OppoCWOA).
Yin et al. [26] developed the TS mechanism using workload priority. Initially, a cloud–fog model was constructed for smart production lines and the multi-objective function was created for TS, which reduced the service latency and energy usage of the workloads. Moreover, an improved hybrid monarch butterfly optimization and improved ant colony optimization algorithm (called HMA) was utilized to explore the best TS mechanism.
Guo et al. [27] designed an intelligent genetic scheme (IGS) for multi-objective collaboration service scheduling. In the initial population selection step, the initial population creation method was modified, a portion of the population was arbitrarily chosen, and the selection procedure was iteratively optimized. The diversity of the population in the dynamic selection was enhanced by the mutation aspects depending on individual innate efficiencies. According to the fitness function, the optimal collaborative services were scheduled, which reduced the cost and enhanced efficiency.

2.1. Drawbacks in the Existing Algorithms

  • In [7], the authors did not consider the network bandwidth, co-location, and parallelization, which were major concerns due to data centers from various locations, serving user requests, network criteria, and delays.
  • In [8], the computational time complexity was high. Moreover, the authors did not consider the minimization of the workload completion time to resolve the TS dilemma in the cloud–fog networks.
  • In [9], the memory and execution time were not taken into the objective function.
  • In [10], the longer queuing delay provided a high computation time.
  • The CMaS scheme [11] was not suitable for real-world applications.
  • In [12], the authors did not provide more suitable services to users.
  • In [13], the costs of the service providers and the power usage in fog servers were not reduced.
  • The efficiency of MRBEA [14] was not effective due to more VM migrations.
  • In [15], the authors did not consider the power–delay tradeoff dilemma in heterogeneous fog networks while constructing the preference profile.
  • The MEETS scheme [16] did not consider the offloading workloads in heterogeneous fog systems.
  • The computational complexity of the DA-DMS [17] was high.
  • The DEBTS scheme [18] was not effective for complex homogeneous fog networks.
  • In [19], the optimum locations of FNs were not determined, and caching efficiency was not enhanced.
  • In [20], the authors did not consider the dependent types of workloads, and the execution order of the workloads for further enhancing the performance.
  • The ECORA [21] method has high complexity.
  • In ITSA [22], workloads were not managed instantly in a few situations where workloads contained arbitrariness.
  • In [23], the service failure probability was high because, in this structure, the workload was performed on a single VM, which restricted the CPU resources. Moreover, the workload was not properly sent while there were several workloads in the network.
  • In [24], the decision was contradictory in real-time scenarios since the framework was only suitable for a single objective function.
  • In OppoCWOA [25], the scenario was not considered, where the workload implementation was unsuccessful on a fog node because of the destruction of the CPU.
  • In [26], the authors did not consider the execution time and memory to determine the objective function.
  • In [27], the running period of IGS increased while the population was huge.
From this literature, most of the algorithms did not consider the computational costs (i.e., processing time, completion time, idle time, and so on) of TS as objective functions. Moreover, unbounded cloud services were not considered. To resolve these challenges, this study considers computational time reduction as an objective (cost) function to schedule workloads, as well as distributing the optimal resources for cloud–fog systems.

2.2. Contribution of the Study

The major contributions of this study are the following:
  • It determines whether the workloads are decided or not.
  • An energy-cost–makespan-aware scheduling algorithm is applied, in which the decided workloads are denoted as graphs using a DAG.
  • Once TS is completed, this ECBTSA-IRA algorithm using different reinforcement learning methods, such as QL, SARSA, ESARSA, and MC, is performed to allocate the resources efficiently.

3. Proposed Methodology

In this section, the proposed algorithms are briefly explained. At first, CBTSA is executed to determine whether the given workload is decided or not [6]. Then, an ECBTSA algorithm is described, which schedules the workloads according to their priority levels and efficiency factors. Moreover, an ECBTSA-IRA algorithm is explained for allocating resources efficiently. Figure 1 portrays the flow diagram of the proposed ECBTSA-IRA algorithm.

3.1. Energy-Efficient, Cost- and Performance-Effective Container-Based Task Scheduling Algorithm

Once the workloads t ∈ T are decided, the workload scheduler distributes the decided workloads according to their priority levels. If the workload can be completed by only one of the cloud or the fog node, then it is simply assigned there. If both the cloud and the fog node can finish the workload, then the workload assigner needs to choose where to locate the workload. The workload is directed to the cloud when it requires more resources, since it then needs access to the processed information in the cloud. For this reason, a threshold is set and fine-tuned according to the present group of workloads to maximize the number of effective workloads on the node.
The workload assigner must set the choice threshold δ_p^j for every interval, i.e., the resource threshold of node j in the interval p. First, the average resource requirement Re_avg^j per period of each workload is computed by the workload assigner.
If the resource requirement is greater than δ_p^j, then the workload is distributed to the cloud. Otherwise, the workload is allocated to the node for implementation. If the number of workloads decided in p is greater than the number of workloads in p − 1, then the threshold of p is not optimal. So, the workload assigner reduces the threshold according to the mean requirement of each workload in p, denoted as:
\[
\delta_{p+1}^{j} =
\begin{cases}
\delta_{p}^{j}, & p > 1,\ |T_{p}^{j}| < |T_{p-1}^{j}| \\
\max\{\, Re_{avg}^{j}(t) \mid t \in T_{p}^{j} \,\}, & p > 1,\ |T_{p}^{j}| \ge |T_{p-1}^{j}| \\
R^{j}, & p = 1
\end{cases}
\tag{1}
\]
In Equation (1), R^j is the overall resource of the node in any interval. During the threshold update, the priority of each decided workload is required to compute the most suitable schedule for executing the decided workloads. Moreover, the node selection allocates each decided and prioritized workload to a suitable cloud node or FN to achieve the best efficiency (cost) factor.
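As a minimal sketch, the threshold-update rule of Equation (1) can be expressed as follows; the function and parameter names are hypothetical, and the workload sets are represented simply as lists:

```python
def update_threshold(delta_p, accepted_p, accepted_prev, re_avg, R_total, p):
    """Threshold update following Equation (1) (hypothetical helper).

    delta_p       -- current threshold of the node for interval p
    accepted_p    -- workloads decided on the node in interval p
    accepted_prev -- workloads decided in interval p - 1
    re_avg        -- dict mapping workload -> average resource requirement
    R_total       -- overall resource R^j of the node in any interval
    """
    if p == 1:
        return R_total                      # first interval: full resources
    if len(accepted_p) < len(accepted_prev):
        return delta_p                      # current threshold still effective
    # otherwise shrink the threshold to the largest mean requirement in p
    return max(re_avg[t] for t in accepted_p)
```

The `p == 1` branch corresponds to the bottom case of Equation (1), where the threshold starts at the node's full resource capacity.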

3.1.1. Workload Prioritizing Step

In this step, the decided workloads are ranked depending on their arrangement priorities. Let pri_{t_p}^j denote the priority of t_p at node j, defined as:
\[
pri_{t_p}^{j} =
\begin{cases}
\bar{w}_{t_p}^{j} + pri_{t_p}^{i}, & \text{if } j \ne i \\
\bar{w}_{t_p}^{j}, & \text{if } j = i
\end{cases}
\tag{2}
\]
In Equation (2), w̄_{t_p}^j denotes the average computation time of t_p at j and is calculated as:
\[
\bar{w}_{t_p}^{j} = \sum_{p=1}^{P_{t}} w_{t_p}^{j} = \frac{v_{t,data}}{\sum_{p=1}^{P_{max}^{t}} r_{t,p}^{j}}
\tag{3}
\]
In Equation (3), v_{t,data} is the amount of data that needs to be processed by t during p, w_{t_p}^j is the computation time of t_p at j during p, P_max^t is the maximum number of intervals that the workload can sustain, P_t is the definite number of intervals used by t, and r_{t,p}^j is the resource allocated by the fog node j. To satisfy the delay constraint, P_max^t can be calculated as:
\[
P_{max}^{t} = delay_{t} / p^{j}
\tag{4}
\]
In Equation (4), delay_t is the deadline of t and p^j is the constant, i.e., the interval duration of j. Thus, each decided workload is prioritized and ranked in non-increasing order of the priority range, which gives a topological level for each workload. So, the priority order among all decided workloads is preserved.
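One plausible reading of the recursive priority in Equation (2) is an upward-rank-style traversal of the DAG, where a task's priority is its average computation time plus the priority propagated from its dependents. The sketch below simplifies the criteria to computation time only and assumes the priority propagated between nodes is the maximum over DAG successors; both the names and that choice are assumptions, not the paper's exact formulation:

```python
def prioritize(dag, w_avg):
    """Rank workloads by a recursive priority in the spirit of Equation (2).

    dag   -- dict: task -> list of successor tasks (dependencies as a DAG)
    w_avg -- dict: task -> average computation time on the candidate node
    Returns tasks sorted in non-increasing priority order.
    """
    memo = {}

    def pri(t):
        if t not in memo:
            succ = dag.get(t, [])
            # exit task: priority is its own average computation time;
            # otherwise add the highest successor priority (assumed reading
            # of the j != i case in Equation (2))
            memo[t] = w_avg[t] + (max(pri(s) for s in succ) if succ else 0)
        return memo[t]

    return sorted(w_avg, key=pri, reverse=True)
```

For a diamond-shaped DAG, entry tasks accumulate the longest downstream computation time and therefore rank first, preserving the topological order the text describes.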

3.1.2. Node Selection Step

A lower-priority workload t_p only initiates its execution after the higher-priority workloads t_p of j are finished. Let DTT_t^j be the completion time of the last decided least-priority t of j; it is also the interval during which the incoming information of t is ready to be broadcast to the chosen node for executing t. It is computed as:
\[
DTT_{t}^{j} = \frac{v_{t,data}}{\omega_{n}^{j}}
\tag{5}
\]
In Equation (5), ω_n^j is the bandwidth that j assigns to t for linking it to the cloud. For the entry task, DTT_t^j = 0. If t_p is allocated to j, then the execution time of t, Ex_t^j, is determined as:
\[
Ex_{t}^{j} = \bar{w}_{t_p}^{j} + DTT_{t}^{j}
\tag{6}
\]
The ready time of t on j, rdy_t^j, is the interval at which the essential incoming information of t, transferred from the memory disk on either the cloud or an FN, reaches the target j. Therefore, rdy_t^j is defined by:
\[
rdy_{t}^{j} = DTT_{t}^{j} + \max_{1 \le j' \le n} Ex_{t}^{j'}
\tag{7}
\]
In Equation (7), n is the number of fog/cloud nodes in the network. Let EST_t^j and EFT_t^j be the earliest start time and the earliest finish time of t on j. EST_t^j is defined by exploring the earliest idle time (IT) of t on j that can accommodate the finishing interval of t:
\[
EST_{t}^{j} = \max( IT_{t},\ rdy_{t}^{j} )
\tag{8}
\]
\[
EFT_{t}^{j} = EST_{t}^{j} + w_{t}^{j}
\tag{9}
\]
In Equation (9), w_t^j is the computation interval of t on j. Moreover, the model accounts for the economic price that cloud clients are charged for the utilization of data center resources, storage, and memory. The fog–cloud architecture is made up of FNs and cloud nodes offered as data center services.
Thus, let cost_t^j be the economic cost of executing t on j. If j is a cloud node, cost_t^j comprises the processing, storage, and memory expenses of t on j, plus the transfer expenses for the departing information from other cloud nodes to the desired j to execute t. If j is an FN, then cloud clients are only charged for broadcasting the departing information from the cloud nodes to the desired FN in the neighboring network. As a result, cost_t^j is defined as:
\[
cost_{t}^{j} =
\begin{cases}
c_{proc}(t,j) + c_{str}(t,j) + c_{mem}(t,j) + \sum_{j' \in n_{cloud}} c_{comm}(t,j'), & j \in n_{cloud} \\
\sum_{j' \in n_{cloud}} c_{comm}(t,j'), & j \in n_{fog}
\end{cases}
\tag{10}
\]
In Equation (10), each expense is computed as follows.
The processing expense is defined as:
\[
c_{proc}(t,j) = c_{1} \cdot w_{t}^{j}
\tag{11}
\]
Here, c_1 refers to the processing expense per interval of task implementation on j. Let c_2 be the storage expense per unit of information and str_t the storage volume of t on j. Then, the storage expense of t is determined as:
\[
c_{str}(t,j) = c_{2} \cdot str_{t}
\tag{12}
\]
Moreover, the expense of utilizing the memory of j for t is determined as:
\[
c_{mem}(t,j) = c_{3} \cdot s_{mem}
\tag{13}
\]
In Equation (13), s_mem is the amount of memory utilized for t and c_3 is the memory expense per unit of information. If c_4 is the total expense per unit of information to broadcast departing information from j, then the transfer expense is computed as:
\[
c_{comm}(t,j) = c_{4} \cdot Ex_{t}^{j}
\tag{14}
\]
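Putting Equations (10)–(14) together, the per-node economic cost can be sketched as below. The field names on `task` and the single aggregated communication term are simplifying assumptions (the paper sums transfer expenses over all departing cloud nodes):

```python
def economic_cost(task, node, c1, c2, c3, c4, cloud_nodes):
    """Economic cost of Equation (10), built from Equations (11)-(14).

    task        -- dict with hypothetical fields: comp_time (w), storage (str),
                   memory (s_mem), and ex_time (Ex) for the outgoing transfer
    cloud_nodes -- set of node ids that belong to the cloud tier
    """
    c_comm = c4 * task['ex_time']                     # transfer, Equation (14)
    if node in cloud_nodes:
        return (c1 * task['comp_time']                # processing, Equation (11)
                + c2 * task['storage']                # storage,    Equation (12)
                + c3 * task['memory']                 # memory,     Equation (13)
                + c_comm)
    # fog node: only the transfer of departing information is charged
    return c_comm
```

Note how the fog branch drops the processing, storage, and memory terms, matching the lower case of Equation (10).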
Then, an efficiency factor is defined to compute the tradeoff between the expense and the EFT as:
\[
U_{t}^{j} = \frac{\min_{j} cost_{t}^{j}}{cost_{t}^{j}} \times \frac{\min_{j} EFT_{t}^{j}}{EFT_{t}^{j}}
\tag{15}
\]
After that, t is allocated to the j that gives the highest tradeoff U_t^j. After t is allocated on j, the actual finish time of t, i.e., AFT_t^j, is equal to the EFT_t^j value. Thus, all of the decided workloads are prioritized and scheduled to complete their executions efficiently.
Algorithm 1
Input: Decided workload set t ∈ T
Output: A workload schedule
Initialize;
Determine the priority range pri_{t_p}^j of every t ∈ T;
Rank all decided workloads T into a list L according to their priority levels;
for all t ∈ L do
    for all j ∈ n do
        Compute EST_t^j, EFT_t^j, and cost_t^j;
        Compute U_t^j;
    end for
    Allocate t to the j that maximizes U_t^j of t;
end for
End
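The node-selection loop of Algorithm 1 can be sketched in Python as follows. The `eft`, `cost`, and `priority` inputs are assumed to be precomputed per Equations (9), (10), and (2); all names are hypothetical:

```python
def schedule(workloads, nodes, eft, cost, priority):
    """Node selection of Algorithm 1 (sketch with hypothetical inputs).

    eft, cost -- dicts keyed by (task, node): earliest finish time and
                 economic cost from Equations (9) and (10)
    priority  -- dict: task -> priority from Equation (2)
    Returns a mapping task -> selected node.
    """
    assignment = {}
    # rank decided workloads in non-increasing priority order
    for t in sorted(workloads, key=lambda t: priority[t], reverse=True):
        best_node, best_u = None, float('-inf')
        min_cost = min(cost[(t, j)] for j in nodes)
        min_eft = min(eft[(t, j)] for j in nodes)
        for j in nodes:
            # efficiency factor of Equation (15): cost/EFT tradeoff
            u = (min_cost / cost[(t, j)]) * (min_eft / eft[(t, j)])
            if u > best_u:
                best_node, best_u = j, u
        assignment[t] = best_node
    return assignment
```

A task that is both cheaper and faster on one node gets U = 1 there, the maximum possible, so it is always placed on that dominating node.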

3.2. ECBTSA with Intelligent Resource Allocation

The main concept of this IRA is that the FN learns the IoT paradigm through an interface and adapts to it. The FN receives incentives for each resource decision it makes; once the FN has learned an effective resource strategy, it will be able to increase its forecasted combined incentives, adapt to the IoT paradigm, and fulfill the requirements.

3.2.1. MDP Problem Formulation

For a resource demand from a client with efficiency u_τ at interval τ, if the FN takes the action x_τ = supply, which indicates supplying the user at the edge, then it obtains a direct incentive r_τ, and one of its resource blocks (RBs) is engaged. Otherwise, for x_τ = discard, which indicates declining to supply the edge client and transferring the demand to the cloud, the FN sustains its accessible RBs and obtains r_τ. The value of r_τ depends on x_τ and u_τ. Assume that the efficiency is quantized as u_τ ∈ {1, 2, …, U} and the FN state s_τ at τ is represented as:
\[
s_{\tau} = 10\, b_{\tau} + u_{\tau}
\tag{16}
\]
In Equation (16), b_τ ∈ {0, 1, …, N} is the quantity of engaged RBs at τ. Notice that the succeeding state s_{τ+1} relies only on the present state s_τ, the efficiency u_{τ+1} of the consecutive service demand, and the action taken (supply or discard), ensuring the Markov property P(s_{τ+1} | s_0, …, s_{τ−1}, s_τ, x_τ) = P(s_{τ+1} | s_τ, x_τ).
So, the cloud–fog RA dilemma is devised as an MDP, represented by the tuple ⟨S, A, P_{ss'}^x, R_{ss'}^x⟩, where S is the group of every promising state, i.e., s_τ ∈ S; A is the group of actions, i.e., x_τ ∈ A = {supply, discard}; P_{ss'}^x is the transition chance from s to s' when x is taken, i.e., P_{ss'}^x = P(s' | s, x), where s' is a shorthand for s_{τ+1}; and R_{ss'}^x is the direct incentive accepted if x is taken at s and completes in s', e.g., r_τ = R_{s_τ s_{τ+1}}^{x_τ} ∈ R. The profit G_τ is the total discounted incentive accepted from τ onwards, described as:
\[
G_{\tau} = r_{\tau} + \gamma r_{\tau+1} + \gamma^{2} r_{\tau+2} + \cdots = \sum_{j=0}^{\infty} \gamma^{j} r_{\tau+j}
\tag{17}
\]
In Equation (17), γ ∈ [0, 1] indicates the discount rate, i.e., the weight of upcoming incentives relative to the direct incentive; γ = 0 rejects upcoming incentives, and γ = 1 indicates that upcoming incentives are of importance equal to the direct incentives. The MDP's purpose is to increase the predicted main profit E[G_0]. In this MDP, for an FN consisting of N RBs, there are U(N + 1) states, s_τ ∈ S = {1, …, U(N + 1)}, where U refers to the highest discrete efficiency factor. At τ = 0, every RB is accessible, i.e., b = 0, so from Equation (16) there are U promising s_0 ∈ {1, …, U} based on u_0. An episode terminates at τ if every RB is engaged, i.e., b_τ = N, so there are U terminal states s_τ ∈ {UN + 1, UN + 2, …, U(N + 1)}. Moreover, the incentive strategy is presented depending on the decided efficiency and the action taken for it.
In particular, at τ, depending on u_τ and x_τ, the FN accepts r_τ ∈ R = {r_sh, r_sl, r_rh, r_rl} and travels to s_{τ+1}, where r_sh represents the incentive to supply a high-efficiency demand, r_sl denotes the incentive to supply a low-efficiency demand, and r_rh and r_rl indicate the incentives to reject the high- and low-efficiency demands, accordingly. A demand is measured as high- or low-efficiency depending on the threshold δ_τ^j for every interval, i.e., the resource threshold of FN j in τ. In this case, δ_τ^j is chosen as a particular average of the efficiency in the framework. Therefore, the incentive factor is determined as:
\[
r_{\tau} =
\begin{cases}
r_{sh}, & \text{if } x_{\tau} = supply,\ u_{\tau} \ge \delta_{\tau}^{j} \\
r_{rh}, & \text{if } x_{\tau} = discard,\ u_{\tau} \ge \delta_{\tau}^{j} \\
r_{sl}, & \text{if } x_{\tau} = supply,\ u_{\tau} < \delta_{\tau}^{j} \\
r_{rl}, & \text{if } x_{\tau} = discard,\ u_{\tau} < \delta_{\tau}^{j}
\end{cases}
\tag{18}
\]
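The state encoding of Equation (16) and the incentive rule of Equation (18) translate directly into code. The numeric incentive values below are placeholders for illustration only; the paper does not fix them here:

```python
def fog_state(b, u):
    """State encoding of Equation (16): s = 10*b + u,
    with b engaged RBs and quantized efficiency u."""
    return 10 * b + u

def incentive(x, u, threshold, r_sh=10, r_rh=-10, r_sl=-1, r_rl=1):
    """Incentive of Equation (18); the four reward magnitudes are assumptions.

    x         -- action, 'supply' or 'discard'
    u         -- quantized efficiency of the demand
    threshold -- resource threshold delta separating high/low efficiency
    """
    if x == 'supply':
        return r_sh if u >= threshold else r_sl
    return r_rh if u >= threshold else r_rl
```

With these placeholder values, supplying a high-efficiency demand and discarding a low-efficiency one are rewarded, while the opposite choices are penalized, which matches the intent of the threshold rule.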

3.2.2. Optimum Strategies

The state value factor V(s) in Equation (19) is the long-term significance of being in s based on the predicted profit gathered from this state onwards until the end. Thus, the terminal state has significance 0 because no incentive is gathered from that state, and the initial state significance corresponds to E[G_0]. Moreover, the state significance is composed of the direct incentive from the action taken and the discounted significance of s_{τ+1}. Likewise, the resource significance factor Q(s, x) is the predicted profit realized after taking x at s, as given in Equation (20). The expressions in Equations (19) and (20) are called the Bellman expectation equations for state and resource significances, accordingly.
\[
V(s) = E[\, G_{\tau} \mid s_{\tau} = s \,] = E[\, r_{\tau} + \gamma V(s') \mid s \,]
\tag{19}
\]
\[
Q(s, x) = E[\, G_{\tau} \mid s, x \,] = E[\, r_{\tau} + \gamma Q(s', x') \mid s, x \,]
\tag{20}
\]
Here, x is the succeeding resource at s . The FN’s goal is to employ N RBs for high-efficiency IoT uses. It is realized by increasing the primary state significance so that the best decision strategy is essential. A strategy π is a method for choosing resources. It is the group of chances of considering a specified resource in the state, i.e., π = P x | s for every promising state resource set. Moreover, π is termed as the best if it increases every state’s significance, i.e., π * = a r g   max π V π s ,   s .
Thus, to solve this MDP problem, the FN obtains the best strategy by finding the optimal state value function $V^*(s) = \max_\pi V_\pi(s)$, which is equivalent to finding the optimal action value function $Q^*(s, x) = \max_\pi Q_\pi(s, x)$ for every state-action pair. From Equations (19) and (20), the Bellman optimality equations for $V^*(s)$ and $Q^*(s, x)$ are the following:
$$ V^*(s) = \max_{x \in A} Q^*(s, x) = \max_{x \in A} \mathbb{E}\left[ r_\tau + \Upsilon V^*(s_{\tau+1}) \mid s_\tau = s, x_\tau = x \right] \quad (21) $$
$$ Q^*(s, x) = \mathbb{E}\left[ r_\tau + \Upsilon \max_{x' \in A} Q^*(s_{\tau+1}, x') \mid s_\tau = s, x_\tau = x \right] \quad (22) $$
The optimal state value function $V^*(s)$ significantly shortens the search for the best strategy: because the objective of maximizing the expected future incentives is already folded into the optimal value of $s_{\tau+1}$ through the expectation in Equation (21), the best strategy simply takes the locally optimal action in every state. Using $Q^*(s, x)$ to select actions is even simpler, since with $Q^*(s, x)$ the FN does not need to perform a one-step-lookahead search; it directly chooses the action that maximizes $Q^*(s, x)$ in each state. The best action is described as:
$$ x^* = \arg\max_{x \in A} Q^*(s, x) = \arg\max_{x \in A} \left\{ \mathbb{E}\left[ r_\tau \mid s, x \right] + \Upsilon \, \mathbb{E}\left[ V^*(s_{\tau+1}) \mid s, x \right] \right\} \quad (23) $$
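As a minimal illustration of the Bellman optimality Equations (21)-(23), the following value-iteration sketch computes $V^*(s)$ and extracts the greedy action for a toy two-state MDP. The transition table, rewards, and `gamma` (standing in for the discount factor $\Upsilon$) are invented for the example.

```python
# Value-iteration sketch of the Bellman optimality equations: V*(s) is the
# fixed point of V(s) = max_x E[r + gamma * V(s')]. P maps a (state, action)
# pair to a list of (probability, next_state, reward) triples. The toy
# two-state MDP below is invented for illustration.
def value_iteration(states, actions, P, gamma=0.9, tol=1e-8):
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[(s, x)])
                       for x in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_action(V, s, actions, P, gamma=0.9):
    # Equation-(23)-style action extraction: one-step lookahead on V*.
    return max(actions,
               key=lambda x: sum(p * (r + gamma * V[s2]) for p, s2, r in P[(s, x)]))

states, actions = ["s0", "s1"], ["supply", "discard"]
P = {("s0", "supply"):  [(1.0, "s1", 1.0)],
     ("s0", "discard"): [(1.0, "s0", 0.0)],
     ("s1", "supply"):  [(1.0, "s1", 0.5)],
     ("s1", "discard"): [(1.0, "s0", 0.0)]}
V_star = value_iteration(states, actions, P)
```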
Once the efficiency is discretized into $U$ levels, the state space becomes finite with cardinality $|S| = U(N+1)$; the best strategy can therefore be learned in this setting by estimating the optimal value functions with MC, SARSA, E-SARSA, and QL.
Initially, the FN receives a demand of efficiency $u$ from an IoT system and chooses between supplying and discarding it; the incentive for supplying, $r_s \in \{r_{sh}, r_{sl}\}$, and the incentive for discarding, $r_r \in \{r_{rh}, r_{rl}\}$, are known at the moment of choice. Therefore, from Equations (19) and (20), the best action at $s$ is defined as:
$$ x^* = \begin{cases} \mathrm{supply}, & \text{if } r_s + \Upsilon \, \mathbb{E}_u\left[ V^*\left( s_{\mathrm{supply}} = 10(b+1) + u_{\tau+1} \right) \right] > r_r + \Upsilon \, \mathbb{E}_u\left[ V^*\left( s_{\mathrm{discard}} = 10b + u_{\tau+1} \right) \right] \\ \mathrm{discard}, & \text{otherwise} \end{cases} \quad (24) $$
In Equation (24), $s_{\mathrm{supply}}$ is the succeeding state if $x = \mathrm{supply}$, $s_{\mathrm{discard}}$ is the succeeding state if $x = \mathrm{discard}$, and $\mathbb{E}_u$ denotes the expectation with respect to $u$ in the IoT paradigm. Value iteration via MC calculations is well suited to determining the optimal state values required by the best strategy. Given the variables $N, \Upsilon, u_h, r_{sh}, r_{sl}, r_{rh}, r_{rl}$ and the IoT client information $u_\tau$, MC learns the best strategy for this MDP problem.
Observe that $u_\tau$ is actual information from the IoT paradigm and its applications once its probability distribution is identified. In this algorithm, the $Profits$ array is a matrix storing the return of every state at each iteration. Initially, every state value is 0, and the current state values, which induce the current strategy, are used to select actions until the terminal state is reached. Then, the return vector $G(s)$ obtained for each state in the iteration is appended to the $Profits$ array for updating the state values.
This continues until every state value converges, after which the actions are computed using Equation (24). Analogously to Equation (24), the best action at $s$ is defined in terms of $Q^*(s, x)$ as:
$$ x^* = \begin{cases} \mathrm{supply}, & \text{if } Q^*(s, \mathrm{supply}) > Q^*(s, \mathrm{discard}) \\ \mathrm{discard}, & \text{otherwise} \end{cases} \quad (25) $$
Algorithm 2. Learning the Best Strategy using MC
Choose: Υ, ϵ ∈ (0,1), α ∈ (0,1), n ≥ 1, and the maximum iteration count i_max
Input: N //Number of RBs; i_max = 100;
Initialize V(s) ← 0, ∀s; Profits(s) ← ∅ //an array storing each state's returns over all iterations
for iteration = 0, 1, …, i_max
    Set b ← 0;
    Select actions using Equation (24) until the terminal state is reached;
    G(s) ← sum of the collected incentives from s to the terminal state, for every state;
    Append G(s) to Profits(s);
    V(s) ← mean(Profits(s));
    if V(s) converges for every s
        V*(s) ← V(s), ∀s;
        break;
    end if
end for
Exploit the computed V*(s) to obtain the best actions using Equation (24);
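A compact Python sketch of Algorithm 2 follows. The episodic environment interface (`reset()`/`step()`) and the `policy` argument are assumptions, since the paper's fog-node simulator is not specified; $V(s)$ is maintained as the running mean of the per-episode returns stored in the Profits array.

```python
from collections import defaultdict

# Compact sketch of Algorithm 2: Monte Carlo learning of state values with
# a Profits array per state. The environment interface (reset/step) and
# the policy argument are assumptions, not the paper's simulator.
def mc_state_values(env, policy, gamma=0.9, max_iter=100):
    profits = defaultdict(list)            # Profits[s]: one return per episode
    V = defaultdict(float)
    for _ in range(max_iter):
        s, traj, done = env.reset(), [], False
        while not done:                    # roll out one episode
            s2, r, done = env.step(policy(s))
            traj.append((s, r))
            s = s2
        G, returns = 0.0, {}
        for s, r in reversed(traj):        # backward pass over the episode
            G = r + gamma * G
            returns[s] = G                 # keeps the first-visit return of s
        for s, G_s in returns.items():
            profits[s].append(G_s)         # append G(s) to Profits(s)
            V[s] = sum(profits[s]) / len(profits[s])
    return V
```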
The optimal action value function needed by the best strategy in Equation (25) is computed by the QL, SARSA, and E-SARSA schemes, which learn the best strategy for the MDP by estimating $Q^*(s, x)$. Here, $\alpha$ is the training rate, $\epsilon$ is the probability of taking a random exploratory action, and $n$ is the number of interval slots after which $Q(s, x)$ is updated. The algorithm for QL, E-SARSA, and SARSA is the following:
Algorithm 3. Learning the Best Strategy using QL, E-SARSA, and SARSA
Choose: Υ, ϵ ∈ (0,1), α ∈ (0,1), n ≥ 1, and the number of time steps T
Input: N //Number of RBs;
Initialize Q(s, x) arbitrarily, ∀(s, x); Initialize b ← 0;
for τ = 0, 1, …, T
    Take x_τ according to π and observe r_{τ+1} and s_{τ+1};
    if τ ≥ n − 1
        ρ ← τ − n + 1;
        //QL:
        G ← Σ_{j=0}^{n−1} Υ^j r_{ρ+j+1} + Υ^n max_{x∈A} Q(s_{τ+1}, x);
        //E-SARSA:
        G ← Σ_{j=0}^{n−1} Υ^j r_{ρ+j+1} + Υ^n Σ_{x∈A} π(x | s_{τ+1}) Q(s_{τ+1}, x);
        //SARSA:
        G ← Σ_{j=0}^{n−1} Υ^j r_{ρ+j+1} + Υ^n Q(s_{τ+1}, x_{τ+1});
        Q(s_ρ, x_ρ) ← Q(s_ρ, x_ρ) + α (G − Q(s_ρ, x_ρ));
        Update π with respect to Q(s_ρ, x_ρ);
    end if
    if Q(s, x) converges for every (s, x)
        Q*(s, x) ← Q(s, x);
    end if
end for
Exploit the computed Q*(s, x) to obtain π* using Equation (25);
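The three bootstrap targets in Algorithm 3 can be sketched, for the one-step case n = 1, as follows. The two-action set, the ε-greedy behaviour policy, and all numeric values are illustrative assumptions.

```python
import random
from collections import defaultdict

# One-step (n = 1) versions of the three bootstrap targets in Algorithm 3.
# Q is a table keyed by (state, action); gamma and alpha correspond to the
# discount factor and the training rate. The two-action set and the
# epsilon-greedy behaviour policy are illustrative assumptions.
ACTIONS = ["supply", "discard"]

def eps_greedy(Q, s, eps):
    """Behaviour policy pi: explore with probability eps, else act greedily."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda x: Q[(s, x)])

def target(Q, s2, r, gamma, method, x2=None, eps=0.1):
    """Return G for a transition ending in (r, s2); x2 is the next action (SARSA)."""
    if method == "ql":            # off-policy: bootstrap with the max
        boot = max(Q[(s2, x)] for x in ACTIONS)
    elif method == "esarsa":      # expectation under the eps-greedy policy
        greedy = max(ACTIONS, key=lambda x: Q[(s2, x)])
        boot = 0.0
        for x in ACTIONS:
            p = (1 - eps + eps / len(ACTIONS)) if x == greedy else eps / len(ACTIONS)
            boot += p * Q[(s2, x)]
    else:                         # sarsa: bootstrap with the action taken
        boot = Q[(s2, x2)]
    return r + gamma * boot

def update(Q, s, x, G, alpha):
    Q[(s, x)] += alpha * (G - Q[(s, x)])  # Q(s,x) <- Q(s,x) + alpha (G - Q(s,x))

Q = defaultdict(float)  # Q-table; unseen (s, x) pairs default to 0
```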
Thus, this algorithm can be used to assign resources and effectively schedule the workload.

4. Simulation Results

In this section, the ECBTSA and ECBTSA-IRA are simulated using the standard CloudSim API 3.0.3, and their effectiveness is compared with the existing CBTSA. In this simulation, the processing ability of the processors is specified in MIPS (million instructions per second). A mixture of 30 cloud nodes with multiple settings and 20 fog nodes is considered. The fog nodes' processing rate is set lower than that of the cloud nodes, and the fog nodes' spectrum is greater than that of the cloud nodes. The workload data size varies between 100 and 500 MB. Moreover, 10 arbitrary DAGs with multiple edge and node weights are generated for each workload pattern density. Each algorithm is then executed to schedule these workload graphs and assign the resources. The comparison covers the mean interval of received workloads, schedule length, cost-makespan tradeoff, and economic cost. The simulation parameters of the cloud and fog environment are presented in Table 1.
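The DAG workload generation described above might be sketched as follows. The edge probability, the edge-weight range, and the per-DAG task count are assumptions; only the 100-500 MB task size range and the count of 10 DAGs come from the text.

```python
import random

# Illustrative stand-in for the workload generation described above: random
# DAGs with node weights (task data sizes, 100-500 MB as in the text) and
# edge weights (assumed data-transfer costs). edge_prob, the edge-weight
# range, and n_tasks are assumptions.
def random_dag(n_tasks, edge_prob=0.3, seed=None):
    rng = random.Random(seed)
    nodes = {i: rng.randint(100, 500) for i in range(n_tasks)}  # MB per task
    edges = {}
    for i in range(n_tasks):
        for j in range(i + 1, n_tasks):  # only i -> j with i < j: guarantees a DAG
            if rng.random() < edge_prob:
                edges[(i, j)] = rng.randint(1, 50)              # transfer weight
    return nodes, edges

workloads = [random_dag(20, seed=k) for k in range(10)]  # 10 DAGs per density
```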

4.1. Mean Interval of Received Workloads

Figure 2 shows the average 'reducing' times of the accepted workloads under different latency limits. According to the results, ECBTSA-IRA and ECBTSA significantly reduce the delay of TS and RA while the workload latency limit ranges between 100 and 500 ms. This is because the resource allocation per interval is adjusted as the reassigned amount of a workload rises. The goal of these algorithms is to improve the resource usage of the fog and cloud nodes, i.e., to exploit each node's information-processing facility as efficiently as possible.

4.2. Schedule Length

Figure 3 demonstrates the schedule lengths for ECBTSA-IRA, ECBTSA, and CBTSA under different numbers of workloads. The analysis shows that ECBTSA-IRA achieves shorter schedule lengths than the other algorithms: its schedule length is 26.81% less than that of CBTSA and 10.14% less than that of ECBTSA.

4.3. Cost–Makespan Tradeoff

This is used for estimating the optimum strategy at every workload pattern density, as:
$$ \mathrm{CostMakespanTradeoff}(a_i) = \frac{\min_{a_k \in AL} cost(a_k)}{cost(a_i)} \times \frac{\min_{a_l \in AL} makespan(a_l)}{makespan(a_i)} \quad (26) $$
In Equation (26), $AL = \{a_1, \ldots, a_n\}$ is the list of all scheduling strategies, and the cost-makespan tradeoff is computed for every strategy $a_i \in AL$. A higher cost-makespan tradeoff indicates a better tradeoff level between economic cost and schedule duration.
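Equation (26) can be computed directly from a list of (cost, makespan) pairs; the numbers in the usage lines below are illustrative.

```python
# Direct computation of Equation (26): each strategy's score is the product
# of (best cost / its cost) and (best makespan / its makespan) over the
# strategy list AL. Scores close to 1 dominate on both axes.
def cost_makespan_tradeoff(AL):
    """AL: list of (cost, makespan) pairs; returns one CMT score per strategy."""
    min_cost = min(c for c, _ in AL)
    min_makespan = min(m for _, m in AL)
    return [(min_cost / c) * (min_makespan / m) for c, m in AL]

scores = cost_makespan_tradeoff([(100, 50), (80, 70), (120, 40)])
best = max(range(len(scores)), key=scores.__getitem__)  # index of best tradeoff
```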
Figure 4 depicts the cost-makespan tradeoffs for ECBTSA-IRA, ECBTSA, and CBTSA under different numbers of workloads. ECBTSA-IRA achieves a better cost-makespan tradeoff than the other algorithms: about 13.77% higher than CBTSA and 5.85% higher than ECBTSA.

4.4. Economic Cost

The economic cost is defined as the price paid for executing all workloads.
Figure 5 portrays the economic costs of the cloud resources under different numbers of workloads. ECBTSA-IRA incurs a lower cost than the other algorithms: 30.46% less than CBTSA and 20.29% less than ECBTSA. The efficiency advantage of ECBTSA-IRA therefore grows as the number of workloads increases.

5. Discussion and Limitations

The ECBTSA-IRA achieves the minimum cost, average 'reducing' time of accepted workloads, and schedule length. This is because both the cloud and fog layers implement priority scheduling and place workloads at the proper priority levels according to their tolerable delays. Scheduling by priority level then increases the number of tasks completed, which minimizes the overall response time and the total cost. The ECBTSA-IRA algorithm strikes a good balance between cost savings and schedule length: the greater the CMT, the better the tradeoff level between economic cost and schedule length that an algorithm provides. The proposed ECBTSA-IRA reduces the schedule length while requiring much lower economic costs for cloud resources than the other algorithms. At the same time, although ECBTSA-IRA improves over traditional TS and RA algorithms, the time required to allocate the resources requested by each device is high. Moreover, the ECBTSA-IRA algorithm does not support dynamic RA, which affects the network's QoS efficiency. The experimental results show that ECBTSA-IRA has a clear advantage in TS and RA, which also implies that it could be utilized to solve the problem of optimizing heterogeneous IoT applications.

6. Conclusions

An ECBTSA-IRA is proposed to improve the CBTSA by considering both the computation time and the finiteness of cloud resources. The algorithm obtains an optimum value of an efficiency factor that measures the tradeoff between schedule length, cost, and energy. It models the RA problem as an MDP and executes QL, SARSA, E-SARSA, and MC to allocate the optimal resources. The algorithm reduces the response time and significantly minimizes the cost, owing to the effective prioritization of tasks according to their delay, schedule length, energy, and cost, which results in a lower mean response time and overall cost. The simulation outcomes show that the ECBTSA-IRA has a 7.1 ms average 'reducing' time of accepted workloads for 100 ms workload delays, whereas the ECBTSA and CBTSA algorithms have 7.5 ms and 8 ms, respectively. Moreover, for 100 workloads, the ECBTSA-IRA has a 577 s schedule length, a 0.84 cost-makespan tradeoff, and a USD 27,300 cost; the ECBTSA has a 611 s schedule length, a 0.8 cost-makespan tradeoff, and a USD 32,000 cost; and the CBTSA has a 700 s schedule length, a 0.75 cost-makespan tradeoff, and a USD 35,000 cost.
In future work, we will further consider dynamic RA for heterogeneous IoT devices to achieve their QoS requirements by taking into account more network parameters, such as throughput and average link usage. We will also investigate how to apply the ECBTSA-IRA algorithm in real-time application deployments, such as healthcare monitoring and smart manufacturing.

Author Contributions

Writing—original draft preparation, S.V.; Supervision, P.M.; Validation, M.K.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Block diagram of the proposed ECBTSA-IRA algorithm in the cloud–fog environment.
Figure 2. Average ‘reducing’ time of the accepted workloads vs. the delay of workloads.
Figure 3. Schedule length vs. no. of workloads.
Figure 4. CMT vs. no. of workloads.
Figure 5. Cost vs. no. of workloads.
Table 1. Simulation parameters.
Environment | Parameter                 | Range
------------|---------------------------|----------------------------
Fog         | Number of processors      | 20
            | Number of containers/VMs  | 10
            | Processing rate           | [10, 500] MIPS
            | Bandwidth                 | 1024 Mbps
Cloud       | Number of processors      | 30
            | Number of containers/VMs  | 15
            | Processing rate           | [250, 1500] MIPS
            | Bandwidth                 | 10, 100, 512 and 1024 Mbps