Improved Performance and Cost Algorithm for Scheduling IoT Tasks in Fog–Cloud Environment Using Gray Wolf Optimization Algorithm

Abstract: Today, the IoT has become a vital part of our lives, entering into the precise details of human life: smart homes, healthcare, eldercare, vehicles, augmented reality, and industrial robotics. Cloud computing and fog computing provide services to process IoT tasks, and the number of IoT devices grows every day. This massive increase requires huge amounts of resources, and these resources in turn consume a great deal of power, since the fog and cloud operate on a pay-per-use basis. We improve the performance and cost (PC) algorithm to give priority to high-profit tasks and to reduce energy consumption and Makespan; in this paper, we propose the performance and cost–gray wolf optimization (PC-GWO) algorithm, a combination of the PCA and GWO algorithms. The trial results reveal that the PC-GWO algorithm reduces the average overall energy usage by 12.17%, 11.57%, and 7.19%, and reduces the Makespan by 16.72%, 16.38%, and 14.107%, while the best average resource utilization is enhanced by 13.2%, 12.05%, and 10.9%, compared with the gray wolf optimization (GWO) algorithm, the performance and cost algorithm (PCA), and the Particle Swarm Optimization (PSO) algorithm, respectively.


Introduction
Cloud computing is a development in technology that emphasizes how we create apps, design computer systems, and use already-existing services to produce software. It is based on the idea of "dynamic provisioning", which applies to computing power, storage, networking, and infrastructure for information technology (IT) in general, as well as services. The National Institute of Standards and Technology (NIST) defined the cloud as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" [1]. This cloud model is composed of five essential characteristics, including on-demand self-service; three service models, which are Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS); and four deployment models, which are the private cloud, community cloud, public cloud, and hybrid cloud. Cloud services are provided to Internet of Things (IoT) devices; the IoT is a network of physical items ("things") equipped with various sensors, software, and other technologies in order to connect with other devices and systems over the internet and share data with them. These gadgets range from simple household items to high-tech tools used in industry. Experts think that by 2025, there will be 75 billion connected IoT devices; today, more than 15 billion IoT devices are connected [2,3]. Energy consumption is a significant concern because a large number of devices need to be online. All these devices require a lot of data storage, computing power, and other resources [4], and data take time to travel between the cloud and an IoT device. Applications that need real-time processing, like healthcare, vehicles, augmented reality, eldercare, and industrial robots, cannot rely on the cloud's storage and processing capabilities alone to obtain real-time services. Cisco proposed the idea of fog computing as a solution to this problem; it is essentially an expansion of cloud computing [5].
Fog computing was created to be a small cloud between the central cloud and IoT devices. Its main advantage is that it reduces the distance between the cloud and IoT devices for real-time services. All of the client computers that are part of the fog network make up the devices layer, the architecture's top level. Between the endpoints and the cloud lies a layer of networking equipment known as the fog layer [6]. The third layer, depicted at the bottom, is the cloud layer, which serves as the storage and processing hub for IoT devices. The proximity of the fog layer to the end devices means that edge devices can process Internet of Things (IoT) tasks much more quickly than the cloud [7]. Reduced latency, enhanced resource utilization, and reduced energy use are crucial goals addressed by building a scheduling algorithm to maximize system performance; however, sophisticated scheduling algorithms are required because the fog layer typically has fewer resources than the cloud. Task scheduling in cloud computing is an NP-hard problem; therefore, we can maintain and build new scheduling algorithms to enhance the system's performance. Time and resource sharing, priority scheduling, and queuing approaches are just a few of the scheduling policies that can play a role in fog computing. In order to minimize power consumption and maximize response times for real-time applications, efficient scheduling is essential. In conclusion, the Internet of Things (IoT) is rapidly expanding, and the massive amounts of data it generates necessitate a great deal of space, processing power, and other resources. The cloud can facilitate the management of such data streams, but it is not designed for real-time processing environments. Fog computing is a potential choice since it reduces query latency for IoT devices and makes the most efficient use of available resources. Fog computing is thus an attractive alternative for managing the growing number of IoT devices, given the importance of efficient scheduling algorithms for optimizing system performance and resource utilization. Scheduling encompasses various policies, as outlined in ref. [8]:

• Protecting the system from harm.
The swarm intelligence paradigm is utilized by numerous artificial and biological systems [9]. The combination of the simple processes performed by many small, independent entities, driven by local information, allows sophisticated, intelligent behavior to emerge. The foraging behavior of ants, for instance, has been studied. Another example is how birds quickly adjust their motions to the changing characteristics of their environment through flocking behavior; a set of basic principles that individual birds obey can be defined to achieve this behavior. The authors in [10] proposed the gray wolf optimizer algorithm, inspired by the way gray wolves hunt prey. In this paper, we propose the PC-GWO algorithm, a hybrid algorithm to enhance scheduling in a fog-cloud environment.
In this paper, we offer a unique scheduling method to assure the predictable execution of IoT activities and to address latency reduction, resource efficiency improvement, and decreased energy consumption. This method considers both task cost and power consumption constraints. This paper is structured as follows: Section 2 analyzes related work and presents the system model; our proposed algorithm is described and analyzed in Section 3; Section 4 explains the assessment, simulation model, and results; and Section 5 discusses the conclusion and future work.

Related Work
Efficient scheduling is critical to improving a system's performance. As a result, substantial research on fog-cloud computing job scheduling has been performed to address the challenge of distributing work across a collection of computers. Various approaches for task scheduling in fog-cloud computing have been developed, and a plethora of algorithms have been offered in the past to meet the issues connected to this environment. Some of these algorithms follow.
In ref. [11], the author describes a new technique for scheduling IoT device requests in a cloud-fog scenario. This method incorporates Salp Swarm Algorithm (SSA) operators into an existing optimization methodology known as artificial ecosystem-based optimization (AEO), in order to strengthen AEO's ability to address IoT-related concerns. The author also introduces ATS-FOA, a collection of techniques that integrates algorithms such as approximate nearest neighbor (ANN) and fruit fly optimization (FOA) to reduce processing time, latency, memory utilization, and cost.
In ref. [12], the authors propose a new task scheduling technique that enhances firework algorithms by introducing a detection system for the explosion radius of fireworks. This approach can be employed in a combined cloud and fog computing system to reduce task processing time and achieve more effective load balancing across fog devices.
In ref. [13], researchers explore serverless computing and its advantages for event-driven cloud applications. The proposed machine learning model utilizes gray wolf optimization (GWO) and reinforcement learning (RIL) to parallelize tasks and optimize task allocation in the serverless framework. The simulation experiments show that the GWO-RIL approach reduces runtimes, adapts to varying load conditions, and achieves energy savings while maintaining similar latency. The findings suggest potential for further development in distributed deep learning, task allocation for cloud-based and edge-based applications, and task offloading.
In ref. [6], the authors propose a new heuristic algorithm based on task deadline and bandwidth using the ACO algorithm for IoT task scheduling in a fog-cloud environment, optimizing Makespan and task deadline satisfaction. It prioritizes the earliest deadlines, assigning tasks to resources for minimum completion time. The evaluation shows superiority over existing methods in achieving a low Makespan and high deadline satisfaction.
In ref. [14], the authors propose a new approach for the effective scheduling and activation management of fog computing services using the multi-objective gray wolf optimization (GWO) technique. This approach employs a multi-objective function to allocate resources and handle tasks, aiming to strike a balance between energy consumption and task execution time. The proposed method comprises two main stages: first, the GWO algorithm is utilized for scheduling, and then container migration is employed for task offloading and resource allocation.
In ref. [15], researchers aim to address the problem of task scheduling in cloud computing environments, which is crucial for improving performance when scheduling tasks on virtual machines. The authors propose a modified gray wolf optimization (MGWO) algorithm, which utilizes a multi-objective fitness function that considers both Makespan and cost. By reducing both cost and Makespan, the proposed technique aims to increase job scheduling efficiency.
In their work [16], the authors present a practical work scheduling approach for cloud computing using Bumble Bee Mating Optimization (BBMO). The strategy seeks to reduce Makespan when running tasks across several virtual machines (VMs). When compared to Honey Bee Mating Optimization (HBMO) and the Genetic Algorithm (GA), BBMO surpasses both; however, its additional complexity necessitates more computation time. Future studies will concentrate on optimizing the complexity of BBMO and investigating various consistencies in estimated task completion times (ETCs).
In ref. [17], the authors focus on load balancing in fog computing for healthcare applications. They present a probability-based strategy for determining job offloading to fog nodes with the least amount of latency. The fog master makes use of ant colony optimization (ACO) to assign tasks to appropriate nodes. Balancing load between fog nodes enhances the quality of service and reduces response time. The proposed work provides significant benefits for time-sensitive healthcare applications in fog computing.
In their publication [18], the researchers detail a technique for scheduling tasks with multiple goals in fog computing, which utilizes an upgraded ant colony algorithm. This approach is customized to the features of fog nodes and accounts for expenses related to computing resources, power consumption, and network usage.
In ref. [19], PSO-GWO, a novel hybrid metaheuristic algorithm, is suggested for scheduling dependent tasks on virtual machines in cloud computing. To optimize the overall execution cost, the PSO-GWO technique combines Particle Swarm Optimization with gray wolf optimization. According to experimental data, PSO-GWO beats the regular PSO and GWO algorithms in terms of average total execution cost and time. When evaluated on scientific workflows, the algorithm obtains significant percentage reductions in execution time and overall execution cost compared to PSO and GWO, suggesting higher performance.
In ref. [20], the author introduces the Dynamic and Fault-Tolerant Scheduling Algorithm (DRFTSA). Leveraging task priorities based on unit electricity cost and a deep reinforcement learning model (DQN) within CloudSim, DRFTSA dynamically optimizes scheduling. Real-time Google Cloud Jobs data, categorized into regular and large datasets, are used for evaluation.
In ref. [21], the proposed method addresses the critical challenge of cloud task scheduling by introducing the Sine Cosine-based Elephant Herding Optimization (SCEHO) algorithm coupled with Improved Particle Swarm Optimization (IPSO), which enhances resource selection for tasks by incorporating load balancing and resource allocation parameters.
Table 1 provides a concise overview of the methodologies and key parameters addressed by each task scheduling algorithm in the mentioned references. These works did not consider the huge amount of data that workflow tasks create or how quickly the fog-cloud computing environment changes, which consumes a huge amount of energy. Therefore, we propose a new algorithm that considers cost and energy in addition to Makespan, to control energy consumption and to prioritize tasks based on the pay-as-you-go model. Pay-as-you-go (PAYG) cloud computing is a way of paying for cloud computing that charges based on usage, much like an electricity bill: users pay only for the resources they use [22].

Table 1. Methodologies and key parameters of the reviewed task scheduling algorithms.

Reference | Methodology | Key Parameters
[11] | Incorporates SSA into AEO | IoT concerns, processing time, latency, memory, cost
[12] | Enhanced firework algorithms | Explosion radius detection, cloud-fog system, load balancing
[13] | GWO and RIL for serverless computing | Task parallelization, task allocation, runtime, energy
[6] | ACO algorithm for fog-cloud IoT scheduling | Makespan, task deadline satisfaction
[14] | Multi-objective GWO for fog computing | Energy consumption, task execution time, container migration
[15] | Modified GWO for cloud task scheduling | Makespan, cost, job scheduling efficiency
[16] | Bumble Bee Mating Optimization for cloud | Makespan, VMs, comparison with HBMO and GA
[17] | ACO for load balancing in fog computing | Latency, quality of service, response time
[18] | Upgraded ant colony algorithm for fog | Computing resource costs, power consumption, network usage
[19] | PSO-GWO hybrid for cloud task scheduling | Execution cost, time, performance
[20] | Dynamic and Fault-Tolerant Scheduling Algorithm | Task priorities, unit electricity cost, deep reinforcement learning
[21] | SCEHO and IPSO for cloud task scheduling | Load balancing, resource allocation

System Model
This section provides an overview of the architecture of the IoT, fog computing, and cloud computing, and outlines the characteristics of the tasks that this system must handle.

Architecture
The structure of the IoT-fog-cloud system is depicted in Figure 1, beginning with the foundational layer of IoT devices. This layer comprises mobile devices, unmanned aerial vehicles (UAVs), medical devices, industrial equipment, smart homes, and other devices [23,24]. Data are generated as tasks and transferred to the fog or cloud computing layers. Fog computing acts as an intermediary between the IoT and the cloud, sharing similar features but with restricted computing capabilities. The cloud, on the other hand, has vast resources for storage and processing [25]. "Cloud computing is a technological advancement that focuses on the way we design computing systems, develop applications, and leverage existing services for building software. It is based on the concept of dynamic provisioning, which is applied not only to services but also to compute capability, storage, networking, and information technology (IT) infrastructure in general" [26]. To ensure the timely completion of critical tasks, such tasks are sent to the fog level, whereas tasks with lower costs can be directed to the cloud to optimize resource allocation. A fog broker [13,27,28] is the entity responsible for handling the assignment of a particular job to either the fog or the cloud, and it is deployed within the fog layer. The fog controller is made up of three essential parts:
(1) Task director (TD): this component receives the task and analyzes its characteristics.
(2) Resource monitoring (RM): it gathers information about the available resources.
(3) Task scheduler (TS): this component receives the task and resource information from the TD and RM, respectively, and then determines the appropriate resource for task execution.
Once the task scheduler assigns tasks to the suitable resources, the processing takes place, and upon completion, the results are sent back to the original sender.
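The broker's three components can be illustrated with a toy sketch; the class and method names below are hypothetical, not from the paper's implementation, and the resource figures are example values.

```python
class FogBroker:
    """Toy fog broker with the three components described above."""

    def __init__(self, resources):
        self.resources = resources  # {resource name: computing power (MIPS)}

    def analyze(self, task):
        # Task director (TD): receive the task and extract its characteristics
        return {"size": task["size"], "deadline": task.get("deadline")}

    def monitor(self):
        # Resource monitoring (RM): gather the currently available resources
        return dict(self.resources)

    def schedule(self, task):
        # Task scheduler (TS): combine TD and RM information and pick the
        # resource with the shortest estimated execution time
        info, pool = self.analyze(task), self.monitor()
        return min(pool, key=lambda r: info["size"] / pool[r])

broker = FogBroker({"fog-1": 1000.0, "cloud-1": 4000.0})
print(broker.schedule({"size": 8000}))  # cloud-1
```

In a real deployment the TS would also weigh cost and queue state, as the following sections formalize; here it only minimizes the estimated execution time.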

Problem Definitions
Since all tasks in the fog-cloud environment share the same set of attributes, we outline these task characteristics using mathematical equations.

Execution Time
The duration required for task Ti to execute on resource Rj can be determined using Equation (1):

ET(i,j) = ts(i) / rp(j) (1)

where ts(i) represents the size of task Ti that needs to be executed, and rp(j) represents the computing power of resource Rj.

Completion Time
The completion time corresponds to the duration required for resource Rj to complete the processing of task Ti. Equation (2) computes the completion time:

CT(i,j) = ET(i,j) + r(j) (2)

where r(j) denotes the ready time of resource Rj. In some situations, the resources in the fog or cloud may not be instantly available, requiring the job to wait until the resource is ready for processing. This waiting period is known as the readiness time, and it is added to the execution time to establish the completion time.

Makespan
The Makespan is the difference in time between the commencement and completion of a series of tasks; it can be determined using Equation (3):

Makespan = max(CT(i,j)) over all scheduled tasks Ti (3)

By and large, the schedule with the minimum Makespan is preferable.
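Equations (1)-(3) can be illustrated with a short sketch; the task sizes (in instructions), resource computing powers, ready times, and the task-to-resource assignment below are hypothetical example values.

```python
def execution_time(task_size, resource_power):
    # Equation (1): ET = task size / computing power of the resource
    return task_size / resource_power

def completion_time(task_size, resource_power, ready_time):
    # Equation (2): CT = ET + ready time of the resource
    return execution_time(task_size, resource_power) + ready_time

def makespan(assignments):
    # Equation (3): Makespan = latest completion time over all scheduled tasks
    return max(completion_time(ts, rp, r) for ts, rp, r in assignments)

# Three assigned tasks: (task size, resource power, resource ready time)
schedule = [(4000, 1000, 0.5), (2500, 500, 0.0), (9000, 3000, 1.0)]
print(makespan(schedule))  # 5.0 -> the 2500-instruction task finishes last
```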

Calculate the Priority Depending on the Task Cost
In step one, we calculate the cost of each task on each available resource using Equation (4):

CO(Ti, Rj) = CI(Ti, Rj) × ts(i) + CPBW(Ti, Rj) × TB(Ti) (4)

where CI(Ti, Rj) is the per-instruction expense of running Ti on resource Rj, TB(Ti) is the data of task Ti, and CPBW(Ti, Rj) is the per-bandwidth cost of performing task Ti on resource Rj.
The profit is then determined for each task using Equation (5), relative to the resource with the highest cost:

Pro(Ti) = CsU(Ti) - CO(Ti, Rmax) (5)

where CsU(Ti) is the cost paid by the user to run task Ti, and CO(Ti, Rmax) is the actual cost of running task Ti on resource Rmax (the resource with the highest cost).
Subsequently, the tasks are organized and allocated to their respective queues. In our study, we introduced three levels of queues (Q1, Q2, and Q3) with equal priority ranges, meaning that the overall priority range of the latest tasks is evenly distributed among these queues. Consequently, utilizing Equation (6), we can compute the rounded-up queue range (Qr) of priorities for each queue:

Qr = Prmax(Ti) / 3 (6)

where Prmax(Ti) is the highest profit received from completing any task Ti; it is divided by three to account for the three queues. The algorithm consists of three queues, each with a different priority region: the High queue (Q1), which covers the most significant priority zone; the Middle queue (Q2), which covers a medium priority region; and the Low queue (Q3), which covers the lowest priority region. IF the priority of task Ti falls in the highest region, THEN the task Ti is added to Q1, and likewise for the other two queues.
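The cost, profit, and queue-assignment steps of Equations (4)-(6) can be sketched as follows; the cost rates, task sizes, and user payments are hypothetical values chosen for illustration.

```python
def task_cost(instructions, data_size, cost_per_instruction, cost_per_bw):
    # Equation (4): CO = CI * task instructions + CPBW * TB
    return cost_per_instruction * instructions + cost_per_bw * data_size

def profit(user_payment, cost_on_rmax):
    # Equation (5): Pro = user payment - actual cost on the costliest resource
    return user_payment - cost_on_rmax

def queue_of(task_profit, max_profit):
    # Equation (6): split the profit range [0, Pr_max] into three equal queues
    q_range = max_profit / 3
    if task_profit > 2 * q_range:
        return "Q1"  # High-priority queue
    if task_profit > q_range:
        return "Q2"  # Middle queue
    return "Q3"      # Low queue

max_profit = 90.0
print(queue_of(profit(100.0, 25.0), max_profit))  # Q1
print(queue_of(profit(55.0, 15.0), max_profit))   # Q2
```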

GWO Algorithm
The gray wolf optimizer (GWO) is a metaheuristic algorithm inspired by the social structure and hunting behaviors of gray wolves. In the wild, gray wolves have a hierarchical structure, with an alpha leader and lower-ranking beta and delta wolves. To effectively seek prey, these wolves collaborate as a team. Likewise, the GWO method is intended to tackle optimization problems by emulating the cooperative and communicative behaviors of wolves. The candidate solutions to the problem are modeled as a pack of wolves, and their positions in the solution space determine their fitness. The alpha wolf is the current best solution, while the beta and delta wolves are the second- and third-best solutions [10].
The description of the mathematical representation for the GWO (gray wolf optimization) algorithm is outlined below.
Encircling Prey: This phase involves the gray wolves encircling their prey. Mathematically, this behavior is represented by Equations (7) and (8):

D = |C · Xp(t) - Xw(t)| (7)
Xw(t + 1) = Xp(t) - A · D (8)

where t stands for the current iteration, A and C are coefficient vectors, Xp is the prey's position vector, and Xw represents a gray wolf's position vector. In addition, A and C are calculated using Equations (9) and (10):

A = 2a · r1 - a (9)
C = 2 · r2 (10)

where r1 and r2 are random vectors in [0, 1], and a is linearly decreased from 2 to 0.
Hunting: During this stage, the alpha wolf, serving as the leader, and the beta and delta wolves, acting as consultants, possess ample knowledge about the prey's whereabouts. As a result, the other wolves must adjust their positions based on the locations of the most skilled agents, as represented by Equations (11)-(13):

Dα = |C1 · Xα - X|, Dβ = |C2 · Xβ - X|, Dδ = |C3 · Xδ - X| (11)
X1 = Xα - A1 · Dα, X2 = Xβ - A2 · Dβ, X3 = Xδ - A3 · Dδ (12)
X(t + 1) = (X1 + X2 + X3) / 3 (13)

Attacking Prey: Exploitation and exploration, which involve pursuing and searching for prey, are essential aspects of wolves' hunting behavior. When wolves chase and attack prey, it showcases their proficiency in capturing prey, which can be considered their exploitation ability. This exploitation ability allows the algorithm to reach global optima. The value of the parameter A plays a vital role in this process: if |A| < 1, the gray wolves are compelled to pursue the current prey aggressively; however, if |A| > 1, the gray wolves abandon the current prey and explore other potential targets. The pseudocode of the GWO algorithm [10] is depicted in Algorithm 1.
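The encircling and hunting updates above (Equations (7)-(13)) can be sketched on a toy one-dimensional objective; the population size, iteration count, seed, and objective function are illustrative choices, not the paper's settings.

```python
import random

random.seed(7)  # fixed seed so the illustrative run is reproducible

def gwo_minimize(f, dim=1, wolves=10, iters=50, lo=-10.0, hi=10.0):
    """Minimize f over [lo, hi]^dim using the canonical GWO updates."""
    pack = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)
        # snapshot the three leaders before the pack moves
        alpha, beta, delta = (w[:] for w in pack[:3])
        a = 2 - 2 * t / iters  # 'a' decreases linearly from 2 to 0
        for w in pack:
            for d in range(dim):
                estimates = []
                for leader in (alpha, beta, delta):
                    A = 2 * a * random.random() - a      # Equation (9)
                    C = 2 * random.random()              # Equation (10)
                    D = abs(C * leader[d] - w[d])        # Equation (11)
                    estimates.append(leader[d] - A * D)  # Equation (12)
                w[d] = sum(estimates) / 3                # Equation (13)
                w[d] = min(hi, max(lo, w[d]))
    return min(pack, key=f)

best = gwo_minimize(lambda x: (x[0] - 3) ** 2)
print(round(best[0], 2))  # typically very close to 3.0
```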
Algorithm 1: Pseudocode of the GWO Algorithm [10]
1. Initialize the gray wolf population Xi (i = 1, 2, ..., n)
2. Initialize a, A, and C
3. Calculate the fitness of each search agent
4. X_α = the best search agent
5. X_β = the second-best search agent
6. X_δ = the third-best search agent
7. While (t < maximum number of iterations)
8.   For each search agent
9.     Update the position of the current search agent by Equation (13)
10.  End for
11.  Update a, A, and C
12.  Calculate the fitness of all search agents
13.  Update X_α, X_β, and X_δ
14.  t = t + 1
15. End while
16. Return X_α

Power Consumption Models: The energy usage of a server can be categorized into two components: idle and dynamic. Idle consumption refers to the power used when the server is inactive and not performing any tasks, while dynamic consumption represents the power utilized during active operations and computations. The idle power consumption and dynamic power consumption are given by Equations (14) and (15), respectively [29,30]:

P_idle(Ri) = P_ri (14)

where P_ri represents the reduced power consumption of Ri.

P_dyn(Ri) = P_i · L_i (15)

where P_i denotes the dynamic power consumption of Ri at a utilization (load) of L_i.
Total power consumption: A multi-core processor's total power consumption is defined by Equation (16):

P_total(Ri) = P_idle(Ri) + P_dyn(Ri) (16)

Fitness Function: In this paper, we consider two parameters: the first is the minimum completion time, and the second is the power consumption. The fitness function (FF) can be calculated using Equation (17).
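A minimal sketch of the power model and one possible form of the combined fitness; the idle/dynamic power figures and the equal weighting of the two objectives are assumptions for illustration, since the exact weights of Equation (17) are not fixed here.

```python
def total_power(p_idle, p_dynamic, load):
    # Equations (14)-(16): total = idle power + dynamic power scaled by load
    return p_idle + p_dynamic * load

def fitness(completion, power, w_time=0.5, w_power=0.5):
    # One common formulation of a two-objective fitness in the spirit of
    # Equation (17): a weighted sum of completion time and power (weights assumed)
    return w_time * completion + w_power * power

print(total_power(70.0, 180.0, 0.5))  # 160.0 (watts)
print(fitness(4.5, 160.0))            # 82.25
```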

Our Proposed Algorithm
The energy consumption of data centers, currently amounting to 2% of global power usage, or 416.2 terawatt-hours, is estimated to increase dramatically; projections suggest it will make up 7% of the world's energy consumption by 2022, indicating a significant surge in demand [31]. Therefore, we propose a new algorithm that considers the cost and energy, in addition to the Makespan, of task execution by enhancing the PCA algorithm. Our algorithm first applies the PCA algorithm, as shown in its pseudocode [8] in Algorithm 2; when the algorithm arrives at step four, the GWO algorithm is called, as shown in Algorithm 3, which presents the pseudocode of the PC-GWO algorithm.

Algorithm 2: Pseudocode of the PCA Algorithm [8]
"1. FOR all available tasks DO
      Calculate the priority of each task.
    END FOR
2. Sort the tasks according to the priorities in the scheduler's queues.
3. FOR all tasks Ti in the meta-task DO
      FOR all resources Rj DO
        Calculate the completion time: CT_ij = EC_ij + r_j
      END FOR
    END FOR
4. Find the task Tk that has the highest priority and assign it to the resource that has the minimum completion time.
5. Remove task Tk from the meta-task set, update r_j for the selected Rj, and update CT_ij for all j.
6. IF the waiting time of any task in the lower queues has exceeded the threshold THEN move this/these tasks to the next upper queue. END IF
7. IF a new task has arrived THEN calculate its priority, sort it to the end of the appropriate queue, and repeat the above steps. END IF"
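The core loop of Algorithm 2 can be sketched compactly: tasks are taken in priority order, and each is placed on the resource with the minimum completion time (Equation (2)). Task priorities, sizes, and resource parameters below are hypothetical.

```python
def pca_schedule(tasks, resources):
    """tasks: list of (priority, task_size); resources: list of [power, ready_time].
    Returns (priority, resource index, completion time) per task."""
    plan = []
    for priority, size in sorted(tasks, key=lambda t: -t[0]):  # steps 1-2 and 4
        # Step 3: completion time on each resource; pick the minimum
        j = min(range(len(resources)),
                key=lambda j: size / resources[j][0] + resources[j][1])
        ct = size / resources[j][0] + resources[j][1]
        resources[j][1] = ct  # step 5: update the ready time of the chosen Rj
        plan.append((priority, j, ct))
    return plan

tasks = [(3, 4000), (9, 2000), (6, 6000)]
resources = [[1000.0, 0.0], [2000.0, 0.0]]
print(pca_schedule(tasks, resources))  # [(9, 1, 1.0), (6, 1, 4.0), (3, 0, 4.0)]
```

The queue-promotion and new-arrival steps (6 and 7) are omitted here for brevity; they only reorder the priority list between iterations.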

Algorithm 3: Pseudocode of the PC-GWO Algorithm
Step 1. FOR all tasks in the meta-task DO
          Calculate the priority of each task.
        END FOR
Step 2. Insert the tasks according to their priorities in the scheduler's queues.
Step 3. FOR all tasks Ti in the meta-task DO
          FOR all resources Rj DO
            Calculate the completion time.
          END FOR
        END FOR
Step 4. Call the GWO with the fitness function using Equation (17).
Step 5. Remove task Tk from the meta-task set, update r_j for the selected Rj, and update CT_ij for all j.
Step 6. IF the waiting time for any task in the lower queues has exceeded the threshold THEN shift this/these tasks to the next upper queue. END IF
Step 7. IF a new task has arrived THEN calculate its priority, place it at the end of the appropriate queue, and repeat the steps above. END IF

Where:
Step 1: Calculate the priority of each task in the meta-task using Equation (5).
Step 2: Sort the tasks in the queue according to their priority using Equation (6).
Step 3: Calculate the completion time for all tasks in the meta-task for each available resource.
Step 4: Call the GWO algorithm to choose the best solutions to process the task using the fitness function by using Equation (17).
Step 5: Remove the task that was processed from the meta-task, update the ready time of resource j, and update the completion time of all resources.
Step 6: In this step, we consider the waiting time of each task in the queue to determine whether it will meet its deadline; a task that is estimated to miss it is migrated to the next level.
Step 7: In this step, if a new task arrives, the algorithm repeats all the steps above.
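The steps above can be sketched as a small end-to-end loop: a GWO search over task-to-resource mappings, scored by a completion-time-plus-energy fitness in the spirit of Equation (17). The encoding (one continuous position per task, rounded to a resource index), the objective weights, and all numeric values are illustrative assumptions, not the paper's implementation.

```python
import random

random.seed(3)  # reproducible illustrative run

def pc_fitness(assign, sizes, powers, watts):
    """Weighted Makespan-plus-energy score for one task-to-resource mapping."""
    ready = [0.0] * len(powers)
    energy = 0.0
    for i, j in enumerate(assign):
        et = sizes[i] / powers[j]      # Equation (1)
        ready[j] += et                 # the chosen resource finishes later
        energy += watts[j] * et        # dynamic energy on that resource
    return max(ready) + 0.01 * energy  # weights assumed for illustration

def pc_gwo(sizes, powers, watts, wolves=12, iters=60):
    n, m = len(sizes), len(powers)
    decode = lambda w: [min(m - 1, max(0, round(x))) for x in w]
    pack = [[random.uniform(0, m - 1) for _ in range(n)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=lambda w: pc_fitness(decode(w), sizes, powers, watts))
        alpha, beta, delta = (w[:] for w in pack[:3])
        a = 2 - 2 * t / iters
        for w in pack:
            for d in range(n):  # Equations (9)-(13) per task dimension
                estimates = []
                for leader in (alpha, beta, delta):
                    A = 2 * a * random.random() - a
                    C = 2 * random.random()
                    estimates.append(leader[d] - A * abs(C * leader[d] - w[d]))
                w[d] = sum(estimates) / 3
    best = min(pack, key=lambda w: pc_fitness(decode(w), sizes, powers, watts))
    return decode(best)

plan = pc_gwo([4000, 2000, 6000], [1000.0, 2000.0], [50.0, 90.0])
print(plan)  # one resource index per task
```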

Performance Evaluation and Result
This section focuses on evaluating the performance of the proposed algorithm in comparison to other algorithms. The evaluation is based on three key metrics: Makespan, cost, and energy consumption. Implementing and evaluating the suggested approach in simulation necessitates a set of programming tools to execute various scenarios, along with a comprehensive comparison of the proposed solutions. In this section, we first explore the implementation of the proposed solution (PC-GWO). Using the Java programming language, we created a rudimentary version of the algorithm. The simulation was run on a computer equipped with an Apple M1 chip with 8 GB of memory, running macOS Ventura 13.2. For the experimental testing, three different scenarios were created with different priority distributions to check the system performance in different cases and to rule out the influence of fortuitous initial conditions.
Scenario 1: There are numerous tasks of high priority, accompanied by a smaller number of tasks with medium and low priority.
Scenario 2: There are numerous tasks of medium priority, along with a few tasks of high and low priority.
Scenario 3: There are numerous tasks of low priority, along with a few tasks of high and medium priority.
Tables 2 and 3 show the cloud and fog characteristics, respectively, and the simulation and GWO algorithm parameters are shown in Table 4. To demonstrate the efficacy of our proposed algorithm, we compared it with the following benchmarks.
(1) Performance and Cost Algorithm (PCA): a performance and cost scheduling algorithm for cloud services. The PCA prioritizes tasks based on profits to optimize resource utilization and minimize Makespan; it considers both completion time and cost priority to provide cost-effective services with enhanced performance for cloud users.
(2) The Gray Wolf Optimizer (GWO): a metaheuristic algorithm inspired by gray wolves' social structure and hunting activity. Gray wolves have a hierarchical structure in nature, with an alpha leader and lower-ranking beta and delta wolves.
(3) Particle Swarm Optimization (PSO): a clever approach that mimics the behavior of a flock of birds. Particles represent the birds in the context of the issue being addressed, and a multidimensional velocity governs their movement. A particle's position is influenced at each iteration by both the best position it has identified thus far and the best position among all particles in the whole problem space. A fitness function evaluates each particle's fitness value, showing its proximity to the desired aim in the search space and, hence, its relevance. Furthermore, each particle has a velocity that influences its path. Each particle successfully explores the issue space by continually pursuing the most promising particles at any given time [32].
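The PSO benchmark described in (3) can be sketched as follows; the inertia and acceleration coefficients, seed, and the toy objective are common illustrative choices, not the paper's parameter settings.

```python
import random

random.seed(11)  # reproducible illustrative run

def pso_minimize(f, particles=15, iters=60, lo=-10.0, hi=10.0,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimize a 1-D function f over [lo, hi] with canonical PSO updates."""
    xs = [random.uniform(lo, hi) for _ in range(particles)]
    vs = [0.0] * particles
    pbest = xs[:]           # each particle's personal best position
    gbest = min(xs, key=f)  # best position found by the whole swarm
    for _ in range(iters):
        for i in range(particles):
            # velocity: inertia + pull toward personal best + pull toward global best
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso_minimize(lambda x: (x - 3) ** 2)
print(round(best, 2))  # typically very close to 3.0
```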
The results of our experiments are presented in this section. First, we examine the impact of the proposed algorithm on performance. We then compare it to the existing methods while altering the number of tasks, fog nodes, and cloud nodes. Figure 2 depicts the Makespan for the four approaches under the conditions mentioned above. Our algorithm minimizes the Makespan compared with the GWO, PCA, and PSO algorithms by 16.72%, 16.38%, and 14.107%, respectively; the proposed technique beats the other three algorithms (GWO, PCA, and PSO) in all circumstances. Furthermore, Figure 3 depicts the average resource usage for the four algorithms (PC-GWO, GWO, PCA, and PSO) in scenarios A, B, and C, each with 10, 20, and 30 fog nodes and five cloud nodes per scenario: 94%, 72%, 78%, and 85%; 90%, 70%, 75%, and 83%; and 92%, 74%, 76%, and 84%, respectively, clearly demonstrating that our suggested approach outperforms GWO, PCA, and PSO in terms of resource utilization. Finally, Figure 4 displays the average energy in kilojoules used by the four algorithms (PC-GWO, GWO, PCA, and PSO) in scenarios A, B, and C, with 10, 20, and 30 fog nodes and five cloud nodes per scenario: 22.75, 24.75, 26.25, and 23.55; 32.5, 35.89, 34.7, and 33.88; and 39.6, 47.25, 46.32, and 44.77, respectively. This shows that our proposed method is better than GWO, PCA, and PSO in this area. We created three scenarios in our study to measure the performance of the suggested algorithm. Scenario one included a more significant number of high-priority tasks and a lesser number of low- and medium-priority tasks.
Scenario two contained mostly medium-priority tasks, with a few high- and low-priority tasks. Scenario three contained mostly low-priority tasks, with a few high- and medium-priority tasks.
We employed three node-count phases: ten, twenty, and thirty fog nodes. Each phase included five cloud nodes, with a 100 ms delay to the cloud and a 20 ms delay to the fog. The number of tasks ranged from 50 to 200 in all cases.
The experimental results demonstrated that our algorithm performed exceptionally well compared with the other algorithms investigated. Figures 2-4 support this finding: in all three scenarios, the proposed algorithm is superior to the alternatives in terms of Makespan, average resource utilization, and energy consumption. This advantage stems from the algorithm's joint evaluation of task priority, cost, minimal completion time, and energy consumption, and from its efficient distribution of work across the fog and cloud environments, using the GWO algorithm to determine the optimal solution. In addition, it determines the shortest route between Internet of Things devices and the fog-cloud environment, which further contributes to its efficiency.
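The three reported metrics can be made concrete with simple definitions. The sketch below uses assumed forms (serial execution per node, constant per-node power draw) that are illustrative only and not necessarily the paper's exact models; the node names and power figures are hypothetical.

```python
def schedule_metrics(assignments, node_power):
    """Illustrative Makespan, utilization, and energy definitions.

    assignments: list of (node_id, exec_time_s) pairs, one per task.
    node_power: dict node_id -> assumed power draw in watts while busy.
    """
    busy = {n: 0.0 for n in node_power}
    for node, t in assignments:
        busy[node] += t  # tasks on the same node run serially (assumption)
    makespan = max(busy.values())  # finish time of the last node
    # Utilization: each node's busy time relative to the Makespan, averaged.
    utilization = sum(b / makespan for b in busy.values()) / len(busy)
    # Energy: watts * seconds = joules, summed over nodes, in kilojoules.
    energy_kj = sum(busy[n] * node_power[n] for n in node_power) / 1000.0
    return makespan, utilization, energy_kj
```

A scheduler that balances busy time across nodes simultaneously lowers the Makespan and raises average utilization, which is the coupling the results above reflect.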

Conclusions
In this study, we investigated IoT task scheduling in a fog-cloud environment. The primary objective was to reduce the system's overall Makespan by enhancing the performance and cost algorithm (PCA) with gray wolf optimization (GWO) methodologies. The resulting algorithm considers the cost-to-profit ratio and allocates the most profitable task to the resource that can complete it in the least time while consuming the least energy. The new algorithm was put through a battery of tests to determine its efficacy; the results reveal that the PC-GWO algorithm reduces the average overall energy usage by 12.17%, 11.57%, and 7.19%, reduces the Makespan by 16.72%, 16.38%, and 14.107%, and improves the best average resource utilization by 13.2%, 12.05%, and 10.9% compared with the gray wolf optimization (GWO) algorithm, the performance and cost algorithm (PCA), and the Particle Swarm Optimization (PSO) algorithm, respectively. In future work, several issues remain to be addressed, such as system temperatures and task deadlines.

Figure 2. Display of the tasks' Makespan for each scenario (A, B, and C).


Table 1. Examining current methods for scheduling tasks.