Article

Priority-Aware Multi-Objective Task Scheduling in Fog Computing Using Simulated Annealing

by S. Sudheer Mangalampalli 1, Pillareddy Vamsheedhar Reddy 2,*, Ganesh Reddy Karri 3, Gayathri Tippani 1 and Harini Kota 1
1 Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal 560064, India
2 Department of CSE (AIML), Keshav Memorial Engineering College, Hyderabad 500088, India
3 School of Computer Science and Engineering, VIT-AP University, Amaravati 522241, India
* Author to whom correspondence should be addressed.
Sensors 2025, 25(18), 5744; https://doi.org/10.3390/s25185744
Submission received: 7 August 2025 / Revised: 12 September 2025 / Accepted: 12 September 2025 / Published: 15 September 2025
(This article belongs to the Section Internet of Things)

Abstract

The number of IoT devices has been increasing at a rapid rate, and the advent of information-intensive Internet of Multimedia Things (IoMT) applications has placed serious challenges on computing infrastructure, especially for latency, energy efficiency, and responsiveness to tasks. The legacy cloud-centric approach cannot meet such requirements because of its high transmission latency and centralized resource allocation. To overcome these limitations, fog computing offers a decentralized model that brings computation closer to data sources and thereby reduces latency. However, effective scheduling of tasks within heterogeneous and resource-limited fog environments is still an NP-hard problem, especially in multi-criteria optimization and priority-sensitive situations. This research work proposes a new simulated annealing (SA)-based task scheduling framework to perform multi-objective optimization for fog computing environments. The proposed model minimizes makespan, energy consumption, and execution cost, and integrates a priority-aware penalty function to provide high responsiveness to high-priority tasks. The SA algorithm searches the scheduling solution space by accepting potentially sub-optimal configurations during the initial iterations and progressively improving towards optimality as the temperature decreases. Experimental analyses on benchmark datasets obtained from Google Cloud Job Workloads demonstrate that the proposed approach outperforms the ACO, PSO, I-FASC, and M2MPA approaches in terms of makespan, energy consumption, execution cost, and reliability at all task volume scales. These results confirm the proposed SA-based scheduler as a scalable and effective solution for smart task scheduling within fog-enabled IoT infrastructures.

1. Introduction

The increasing adoption of Internet of Things (IoT) technologies has led to a revolutionary change in many areas, transforming the way devices communicate, create and process information [1]. As IoT applications expand into everyday life, from smart homes and wearable technology to industrial automation and healthcare systems, the demand for smart, scalable and efficient computing infrastructure is increasing [2]. These interacting devices, ranging from sensors in industrial equipment to real-time health monitoring wearables, constantly generate huge amounts of heterogeneous data that require high bandwidth, low latency and real-time processing [3]. Among IoT subsystems, the Internet of Multimedia Things (IoMT) is the most demanding because it requires high-intensity computing, fast data transmission and enormous storage, in most cases for real-time video, audio and image data. The volume and intensity of multimedia data generated by IoMT-based applications often strain the responsiveness and capacity of traditional cloud-based systems, creating latency and bandwidth issues that can impair smooth operation and user experience [4].
To overcome these shortcomings, cloud computing has long been used as a centralized model for providing scalable, virtualized resources over the Internet. It facilitates resource sharing, storage management, and on-demand computing [5] services required to handle large-scale data streams from the IoT. However, the centralized nature of cloud computing presents inherent disadvantages when faced with latency-demanding, location-based, and real-time applications [6]. For example, network-edge-generated data must reach centralized data centers for processing, which causes significant latency and reduces the quality of services [7]. Additionally, growing data security concerns, heavy network traffic, and wasteful bandwidth consumption further reduce the applicability of centralized cloud infrastructures to contemporary IoT needs. As IoT devices mature to enable next-generation applications with instant responsiveness and reliability, a new paradigm in architecture is needed—one that brings computational intelligence closer to the data source.
Fog computing acts as a vital paradigm to bridge the gap between cloud data centers and edge devices. By bringing cloud features to the network’s edge, fog computing makes it possible to handle computing, storage, and networking near the devices that create data [8]. This decentralized approach not only reduces data transmission latency but also relieves bandwidth pressure on the core network and increases the responsiveness of latency-intensive applications. In this context, fog nodes—typically lightweight servers, routers or gateways—act as intermediate processing units that allow for local decision-making and task execution [9]. This proximity to data sources enables fog computing to effectively support real-time applications such as augmented reality, industrial automation, autonomous driving and remote health diagnostics. In addition to latency minimization, fog computing improves data security and privacy by avoiding the transmission of sensitive data to centralized points, which is critical for applications that handle personal or critical infrastructure data [10,11].
However, the dynamic characteristics of a fog environment and its decentralization impose new challenges, particularly in task scheduling. Efficiently assigning and executing computational jobs across geographically dispersed and resource-constrained fog nodes is no small task. IoT tasks are typically represented as directed acyclic graphs (DAGs), where each node represents an individual computational task and each edge represents a task dependency. In these representations, scheduling means mapping tasks onto appropriate fog nodes without violating their execution order and resource requirements. The key goals of task scheduling in fog are to minimize makespan (the total time needed for all tasks to complete), reduce energy consumption, optimize operational cost, and ensure high reliability and fault tolerance. These goals are typically interdependent and must be traded off to achieve maximum system performance.
Figure 1 describes the fog computing architecture. The layers of fog computing architecture are:
  • IoT/End Devices Layer (Edge Layer): Comprises sensors, actuators, smart devices, and mobile devices that generate raw data. Performs data collection and can do preliminary filtering or processing. Sends data upstream to fog nodes for further processing.
  • Fog Layer (Intermediate Layer): Consists of fog nodes deployed near the network edge, such as gateways, routers, micro data centers, or edge servers. Responsible for local data processing, storage, and analytics, reducing latency and network congestion. Supports real-time decision making, task offloading, and context-aware services. Communicates with both the IoT layer and cloud layer.
  • Cloud Layer (Central Layer): Centralized, large-scale data centers for long-term storage, heavy computation, and advanced analytics. Handles non-latency-sensitive tasks that require high processing power. Acts as a backup and coordination layer for fog nodes.
Despite the advantages of fog computing, efficient task scheduling remains a major challenge due to fluctuating workloads, heterogeneous resources, limited energy availability, and the need to handle task priorities in real time. Task scheduling is an NP-hard problem because of the increasing number of devices and dynamic operational environments. Traditional heuristic and metaheuristic methods often optimize only a single objective or fail to adapt under dynamic conditions; although they are computationally efficient, they often fail to adapt to real-time demands and balance multiple performance metrics, leading to poor resource utilization and degradation of QoS. To overcome this, we propose a simulated annealing (SA)-based multi-objective optimization framework for fog task scheduling. SA, a probabilistic technique inspired by metallurgical annealing, is effective in exploring large solution spaces and avoiding local optima. Our approach optimizes a composite objective function that includes makespan, energy consumption, cost, and reliability. A priority-based penalty mechanism is included to ensure that urgent tasks are not delayed by less critical ones, which improves system responsiveness. The algorithm starts with an initial task-node mapping and iteratively explores neighboring schedules. Each configuration is evaluated using a cost function, and new schedules are accepted based on temperature-controlled probabilities, balancing exploration and convergence. Gradual cooling according to the annealing schedule allows stable, nearly optimal solutions to be found.

1.1. Motivation and Key Challenges

1.1.1. Motivation

The boom in IoT devices has led to big changes in computing approaches. It has brought challenges like handling huge data volumes and meeting instant processing demands. Cloud systems have become key players, but they struggle to meet the fast-response and dependability needs of time-sensitive IoT applications. Fog computing, on the other hand, brings computing closer to the edge. This approach lowers delays and reduces data transfer over the network. However, distributing tasks this way creates issues, including changes in resource availability, differing node capabilities, and unpredictable task loads. To tackle this, there is a strong need to create scheduling methods that use the closeness of fog nodes to provide faster, more dependable, and energy-friendly task execution.

1.1.2. Challenges

To schedule tasks in fog environments, planners need to tackle multiple challenges while pursuing goals like shorter makespan, lower energy use, cost reduction, and higher system reliability. Many existing methods fail because they depend on fixed techniques or target narrow goals, and this problem worsens when task priorities change or node conditions shift. Task scheduling in fog networks is NP-hard, and traditional algorithms do not scale to large systems. The need to assign task priorities complicates things further, as it requires prioritizing urgent tasks while ensuring fairness. Edge nodes with limited power face significant challenges in staying energy efficient, and keeping operational costs low is also necessary to sustain fog infrastructure in the long run. Handling fault tolerance and keeping nodes reliable makes scheduling harder still, which creates the need for systems capable of managing potential failures. Shifting computing closer to the network’s edge helps lower delays and reduce data usage, but this distributed setup introduces challenges with changing resource availability, differences in node capabilities, and varying workloads. These challenges highlight the demand for better scheduling methods that take advantage of nearby fog nodes to provide quick, dependable, and energy-saving task execution.

1.2. Key Contributions

  • We proposed a priority-aware, multi-objective task scheduling framework for fog–cloud environments. This framework explicitly integrates urgent task responsiveness into the scheduling process, which is often overlooked in prior methods.
  • We developed a Simulated Annealing (SA)-based scheduling algorithm that incorporates priority-aware penalties. Unlike GA- or RL-based schedulers, this approach balances cost and makespan while improving responsiveness for critical tasks.
  • The Simulated Annealing algorithm is tested against the GoCJ dataset (task sizes between 15,000 and 900,000 units), ensuring the evaluation against realistic and varied workloads found in fog–cloud environments.
  • The proposed SA method is systematically compared against well-known scheduling frameworks, including ACO, PSO, I-FASC, and M2MPA. This allows us to demonstrate how the proposed model addresses their limitations, particularly in handling urgent tasks.
The outline of the paper is condensed as follows: Section 2 reviews the related literature relevant to the proposed method. Section 3 describes the problem formulation of the proposed work. Section 4 illustrates the working procedure of Simulated Annealing for Task Scheduling. Section 5 reports the experimental results and provides a detailed discussion based on performance metrics, and finally, Section 6 concludes the paper, highlighting key findings and suggesting directions for future work.

2. Related Work

The authors of [12] examine how to schedule tasks in fog-based IoT apps to cut down on service delays and energy expenses while meeting task deadlines. The researchers introduce a new way to schedule tasks called Fuzzy Reinforcement Learning (FRL), which combines fuzzy control and reinforcement learning techniques. The rise in IoT devices has brought big changes to computing models, leading to huge amounts of data and the need for quick processing. Cloud systems alone struggle to handle the fast-response and high-reliability demands of time-sensitive IoT applications, whereas fog computing provides a flexible solution by bringing computing power closer to the edge to reduce delay and lower data usage. They build a mixed-integer nonlinear programming (MINLP) model to minimize the energy consumed by fog resources and the time required to complete tasks. The approach also considers deadlines and resource limitations. To manage a large number of tasks in changing environments, they use a fuzzy-based reinforcement learning technique. This method starts by sorting tasks with fuzzy logic, focusing on attributes like deadlines, data sizes, and instruction counts. After that, they apply an on-policy reinforcement learning scheduling method called SARSA learning, which places a stronger emphasis on long-term rewards compared to Q-learning. The results reveal that their FRL method performs better than other algorithms, leading to a 23% improvement in service latency and an 18% improvement in energy costs. This method handles service time and energy use while ensuring deadlines are met, maximizing resource use in fog computing setups.
The authors of [13] suggest an effective load-balancing system specifically tailored for resource scheduling in cloud-based communication systems, focusing mainly on healthcare applications. The authors tackle the increasing need for real-time data processing in healthcare systems, where the most important phase is the timely allocation of resources. In order to overcome inefficiencies like unbalanced load assignment, absence of fault tolerance, and higher latency in current methods, the proposed framework employs RL and a genetic algorithm to dynamically assign resources. Agent-based models are used by the system for assigning resources and scheduling tasks, with the goal of minimizing makespan and latency while improving throughput. According to experimental tests, the proposed framework outperforms other approaches.
The authors of [14] address the intricate issue of scheduling Bag-of-Tasks (BoT) applications in fog-based Internet of Things (IoT) networks by developing a multi-objective optimization strategy. The core objective is to achieve an optimal balance among three inherently conflicting performance metrics: minimizing execution time (makespan), reducing operational costs, and enhancing system reliability. The makespan criterion focuses on ensuring timely task completion, while cost optimization considers the monetary expenses associated with CPU, memory, and bandwidth usage. Reliability is quantified through a novel metric, the Scheduling Failure Factor (SFF), which reflects the historical failure tendencies of fog nodes based on Mean-Time-To-Failure (MTTF) data.
The authors of [15] proposed a hybrid approach, combining an FIS with a GA to address multi-objective optimization and constraint satisfaction problems. The proposed model integrates sequencing, scheduling algorithms, and partitioning to optimize the trade-off between users and resource providers across criteria such as cost, resource utilization, and makespan. The FIS performs scheduling based on constraints like budget and deadline, while the GA optimizes task allocation by improving resource usage and reducing cost and makespan. The experiments are implemented using workflows, and the results show a makespan reduction of 48% compared with EM-MOO and 41% compared with MAS-GA. Additionally, the process ensures better compliance with user-defined deadlines and budget constraints.
The authors of [16] introduce a novel approach for cyber-physical-social systems (CPSSs), utilizing a modified version of the MPA, referred to as M2MPA. The main objective is to minimize energy consumption and makespan during the processing of IoT tasks offloaded to fog nodes. The authors argue that existing metaheuristic-based schedulers and traditional techniques are insufficient for solving this NP-hard problem with high precision in an acceptable time. To address this, M2MPA incorporates two key improvements over the classical MPA: replacing the fish aggregating devices (FAD) mechanism with a polynomial crossover operator to enhance exploration, and using an adaptive CF parameter to improve exploitation.
The authors of [17] proposed an improved task scheduling method called I-FASC (Improved Firework Algorithm-based Scheduling and Clustering) for fog computing environments. The authors address the challenges of minimizing processing time and ensuring balanced load distribution across fog devices, which are critical due to the weak processing and limited resource capabilities of fog nodes compared to cloud servers. To achieve this, they introduce an enhanced version of the Firework Algorithm (I-FA), incorporating an explosion radius detection mechanism to prevent optimal solutions from being overlooked. The I-FASC method includes task clustering based on storage space, completion time, and bandwidth requirements, enabling targeted resource scheduling. Resources in the fog layer are categorized into three types (storage, computing, and bandwidth) and allocated according to their respective capacities. The fitness function measures both time and load values, with weights tuned to focus on either execution time or load balance according to the optimization target.
The authors of [18] propose FLight, a framework for federated learning (FL) on edge- and fog-computing platforms. The authors generalize the FogBus2 framework to develop FLight, incorporating new machine learning (ML) models and accommodating mechanisms for access control, worker selection, and storage implementation. The framework is intended to be conveniently extensible, with the capability to add different ML models as long as import/export and merging of model weights are specified. FLight also accommodates asynchronous FL and provides flexibility in storing model weights on different media, which eases deployment across various computing resources. The paper presents two worker selection algorithms in FLight to improve training time. These algorithms emphasize quicker workers while increasing the utilization of slower ones in subsequent rounds of training. This approach balances accuracy improvement with training time efficiency, avoiding the pitfalls of excluding slower workers entirely or over-relying on them. The authors also discuss the trade-offs between energy consumption, model accuracy, and time efficiency, noting that previous research often focused on accuracy-energy trade-offs without fully considering time efficiency.
The authors of [19] introduce a novel framework, EEIoMT, which aims to reduce latency, lower energy consumption, and enhance QoS for IoMT applications. It is particularly suited to monitoring patient data, improving quality of life while reducing medical expenses. The framework categorizes tasks into three classes (normal, moderate, and critical) to enable efficient scheduling with reduced energy consumption. The proposed framework utilizes the iFogSim2 simulator, and the results demonstrate that EEIoMT significantly reduces energy consumption, network utilization, and response time compared with existing methods. However, the authors did not consider important parameters like cost and reliability for efficient scheduling.
The authors of [20] address the challenge of task scheduling in fog computing environments, proposing an optimized solution using an ant colony algorithm to achieve multi-objective goals such as minimizing makespan, improving load balancing, and reducing energy consumption. The authors are specifically interested in Directed Acyclic Graph (DAG) problems, which are prevalent in most applications involving effective resource allocation and processing. The method proposed starts with the initialization of pheromone levels on all possible routes, with ants deployed randomly on these routes to search for potential solutions. Each ant computes transition probabilities from heuristic information and pheromone trails, updating the levels of the pheromones after every iteration in order to steer subsequent searches towards better solutions. This is how ants solve the shortest path to food using positive feedback processes and converge towards near-optimal solutions for the NP-hard DAG task scheduling problem.
The authors of [21] introduced an EDLB framework, which combines MPSO and a CNN to address the limitations of existing load-balancing methods. The framework is composed of three modules: a Fog Resource Monitor (FRM), a CNN-Based Classifier (CBC), and an Optimized Dynamic Scheduler (ODS), which together achieve real-time load balancing. The FRM monitors the resources and stores the data, the CBC calculates the fitness function of the servers, and the ODS schedules tasks dynamically by optimizing resource utilization and latency. EDLB evaluation results are better than GA, BLA, WRR, and EPSO in terms of cost, makespan, resource utilization, and load balancing. However, the authors did not consider important parameters like reliability and energy consumption to improve scheduling efficiency.
The primary aim of [22] is to enhance real-time data processing and reduce latency for critical healthcare tasks by leveraging fog computing’s proximity to end-users. HealthFog uses fog nodes to process and analyze cardiac patient data locally, minimizing reliance on cloud resources and improving overall efficiency. The proposed framework consists of three main components: workload management, resource arbitration, and deep learning modules. The workload manager monitors task queues and job requests, while the arbitration component assigns fog or cloud resources for optimal load balancing. The deep learning module utilizes patient datasets for training, testing, and validation in a 7:2:1 ratio, enabling automated analysis of cardiac data. By deploying HealthFog within the FogBus framework, the system achieves seamless integration with IoT-edge-cloud environments, ensuring real-time data processing. The authors identify several benefits of fog computing compared to conventional cloud-based systems, including lower data movement, removal of bottlenecks due to centralized systems, reduced data latency, location awareness, and enhanced security for encrypted data. These capabilities render fog computing well-suited to time-sensitive healthcare applications, such as continuous patient monitoring.
The challenge of providing accurate execution of tasks in fog computing systems is addressed by the authors of [23] through the presentation of a learning-based primary-backup approach named ReLIEF (Reliable Learning based In Fog Execution). The authors concentrate on enhancing the reliability of running tasks within fog nodes with deadlines and reducing delays. The suggested strategy comes into play especially for applications that demand low-latency processing near the data source. ReLIEF provides a framework for choosing both primary and secondary fog nodes for task processing. Contrary to conventional methods where a task is allocated to one single fog node without redundancy, this method provides fault tolerance by allocating a standby node to process tasks in case the primary one fails or encounters delays. The selection process uses machine learning algorithms to estimate the most appropriate primary and backup nodes according to capabilities such as computational power, network status, and past performance data. The paper compares ReLIEF with three cutting-edge approaches: DDTP (Dynamic Deadline-aware Task Partitioning), MOO (Multi Objective Optimization), and DPTO (Deadline and Priority aware Task Offloading). Experimental results show that ReLIEF performs better than these algorithms on reliability, with a greater percentage of tasks completing their deadlines even in the face of fluctuating workload levels and resource limitations. In particular, the strategy performs well in high uncertainty or unforeseen environmental conditions, including sudden hikes in network latency or short-term unavailable fog nodes.
As summarized in Table 1, in existing research on task scheduling, ACO and PSO are metaheuristic approaches used for global search and task allocation. However, they often suffer from premature convergence and an inability to adapt to dynamic fog–cloud workloads, which reduces their applicability in latency-sensitive environments. Frameworks like I-FASC and M2MPA are important for fog scheduling, but their focus is mainly on cost efficiency and throughput, and they do not address priority handling or urgent task responsiveness, both of which are critical in fog computing. Recent research incorporating reinforcement learning and deep learning is adaptive but suffers from high computational overhead. Hybrid fog–cloud models utilize fog for latency-critical tasks and the cloud for compute-intensive tasks, but it is difficult to balance workload against reliability. To solve these issues, our Simulated Annealing (SA) model incorporates a priority-aware penalty mechanism that balances cost and makespan while also ensuring responsiveness to urgent tasks. Therefore, while ACO, PSO, I-FASC, and M2MPA are valuable baselines, their limitations in addressing urgency and priority-awareness highlight the necessity of our SA-based solution.

3. Problem Formulation

Figure 2 shows the proposed architecture of the priority-aware multi-objective task scheduling framework in fog computing. In the architecture, there are three major components: Input Layer: Tasks generated from IoT/IoMT devices are collected, each having parameters such as execution time, reliability requirement, energy demand, cost factor, and priority level. These inputs form the basis for scheduling decisions.
Scheduling Core: At the heart of the framework lies the Simulated Annealing (SA)-based scheduler. This module explores the task-to-node mapping space using a probabilistic search strategy that balances exploration and exploitation. The optimization works through a combined cost function that ties together four main goals: makespan, energy use, execution cost, and reliability. A priority-focused penalty is applied to make sure important tasks go to nodes that are less crowded and more dependable. Each part of the architecture corresponds to specific mathematical formulations (Equations (1)–(7)), which are laid out in the problem description, connecting the theoretical aspects with how the system functions.
Execution & Feedback Layer: After deciding on schedules, the system runs tasks across fog nodes. It tracks metrics such as how long tasks take, how work is shared, and how resources are used. These results go back to the scheduler, allowing the task allocation process to improve through repeated adjustments.
Within the context of fog computing systems where many tasks are being scheduled across a distributed collection of edge and fog nodes, it is necessary to integrate multi-objective optimization techniques that account for task-specific properties like execution time, energy usage, cost, node reliability, and most importantly, task priority. Task priority is instrumental in making the system responsive and efficient, especially for time-critical or mission-critical applications. Each task $T_i$ is assigned a priority level $p_i \in \{1, 2, \ldots, 10\}$, such that higher values of $p_i$ represent higher urgency. The objective of the scheduler is to allocate the high-priority jobs to less-loaded and more reliable nodes to reduce execution delays and minimize resource contention. The Simulated Annealing (SA) algorithm is used as the fundamental optimization paradigm, able to navigate intricate scheduling spaces and escape local minima. For the purpose of directing the optimization, a composite cost function is specified, with four primary performance measures: energy usage, execution cost, makespan, and a priority-aware penalty on tasks. The makespan $M$ is the maximum completion time among all fog nodes and reflects the overall system latency. It is formally defined as:
M = \max_{j \in \{1, \ldots, N\}} \sum_{i=1}^{T} d_i \cdot I(n_i = j)
where $d_i$ is the execution duration of task $T_i$, $n_i$ is the index of the node assigned to $T_i$, $N$ is the total number of fog nodes, and $I(\cdot)$ is the indicator function, which equals 1 if the condition inside holds true and 0 otherwise. This approach makes sure tasks get spread out so all nodes have a similar workload, cutting down the time needed to finish the whole schedule.
The total energy consumed, $E$, is determined by summing the energy each task consumes while running. Assuming each fog node consumes power at a constant rate $P$ (watts), the total energy consumption for all tasks is given by:
E = \sum_{i=1}^{T} d_i \cdot P
This measure, expressed in joules or kilojoules, plays an important role in situations where energy resources are scarce, such as green computing scenarios or battery-powered devices. The running cost, $C$, is calculated by multiplying each task's execution time by a unit cost rate $\rho$, which represents the monetary cost per unit of computation time:
C = \sum_{i=1}^{T} d_i \cdot \rho
This concept lets users schedule while considering economic factors. This becomes important when fog resources charge based on the amount of computation used.
The system’s reliability $R$ is defined as the likelihood that all planned tasks finish without any issues. If we assume each node has a reliability factor $r \in [0, 1]$, and if we consider the nodes to be independent, the reliability for carrying out all $T$ tasks becomes:
R = \prod_{i=1}^{T} r = r^{T}
This product form assumes, for simplicity, that all nodes share the same reliability $r$; it can be extended to heterogeneous environments where each node has its own reliability $r_i$.
To integrate task priority explicitly into the optimization model, a priority penalty function P p is introduced. This penalty discourages assigning high-priority tasks to heavily loaded nodes, which would result in increased waiting time and QoS degradation. The priority penalty is defined as:
P_p = \sum_{i=1}^{T} p_i \cdot W_{n_i}
where $W_{n_i}$ is the total workload on the node $n_i$ to which task $T_i$ is assigned. The workload $W_j$ on node $j$ is computed as:
W_j = \sum_{k=1}^{T} d_k \cdot I(n_k = j)
Equation (6) computes the workload of node $j$ by adding up the demands $d_k$ of all tasks that are mapped to it. The indicator function $I(n_k = j)$ guarantees that only tasks mapped onto that node are counted.
The introduction of energy, cost, reliability, and the priority-aware penalty into the optimization model is driven by realistic constraints of fog computing. The energy term in Equation (2) accounts for the limited power budgets of fog nodes; the execution cost in Equation (3) captures the economic viability of resource usage; the reliability term in Equation (4) provides assurance against failures in heterogeneous nodes; and the priority penalty in Equation (5) ensures responsiveness towards time-critical tasks. Together, these metrics meet the core QoS objectives of fog-enabled IoT systems and thus make the optimization framework theoretically sound and practically feasible.
This penalty ensures that tasks with higher $p_i$ values (more urgent tasks) significantly increase the cost when scheduled on overloaded nodes, encouraging the scheduler to favor lightweight and fast nodes for important tasks.
Bringing together all the above components, the total cost function used for Simulated Annealing becomes:
\text{Total Cost} = \alpha M + \beta E + \gamma C + \delta P_p
Here, $\alpha, \beta, \gamma, \delta \in \mathbb{R}^{+}$ are tunable weights that determine the importance of each metric in the optimization process. These weights allow the scheduler to prioritize different objectives depending on the application context, for instance, assigning greater weight to $\delta$ in scenarios where task urgency dominates.
While optimizing, the Simulated Annealing method explores the solution space by creating fresh schedule options through reassigning tasks to different nodes. If a new schedule $S'$ has a total cost smaller than that of the current schedule $S$, it is accepted. If not, it might still be accepted with a probability determined by the Boltzmann distribution.
P_{\text{accept}} =
\begin{cases}
1, & \text{if } \Delta\text{Cost} \leq 0 \\
\exp\left(-\dfrac{\Delta\text{Cost}}{T}\right), & \text{if } \Delta\text{Cost} > 0
\end{cases}
where $\Delta\text{Cost} = \text{Cost}(S') - \text{Cost}(S)$, and $T$ represents the current temperature in the annealing schedule, which falls over time. As the temperature drops, the chance of accepting subpar solutions decreases, allowing Algorithm 1 to converge toward an optimal or nearly optimal schedule. By adding task priority to the multi-criteria cost model, alongside makespan, energy, cost, and reliability, the scheduler can adapt dynamically to crucial application demands without sacrificing system-wide efficiency. The result is a robust scheduling framework suitable for the latency-sensitive and heterogeneous characteristics of modern fog computing systems.
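To make the composite objective concrete, the following minimal Python sketch evaluates the makespan, energy, cost, and priority-penalty terms of Equations (1)–(3) and (5)–(7) for a given task-to-node mapping. It is an illustrative sketch rather than the authors' implementation; the function name, the uniform power and cost rates, and the default weights are assumptions.

```python
def total_cost(schedule, durations, priorities, num_nodes,
               power=1.0, rho=1.0, alpha=1.0, beta=1.0, gamma=1.0, delta=1.0):
    """Composite cost of one schedule; schedule[i] is the node index of task i."""
    workloads = [0.0] * num_nodes
    for i, node in enumerate(schedule):
        workloads[node] += durations[i]               # W_j, Eq. (6)

    makespan = max(workloads)                         # M, Eq. (1)
    energy = sum(durations) * power                   # E, Eq. (2), uniform rate P
    cost = sum(durations) * rho                       # C, Eq. (3), unit cost rate rho

    # Priority-aware penalty, Eq. (5): urgent tasks on busy nodes cost more.
    priority_penalty = sum(priorities[i] * workloads[schedule[i]]
                           for i in range(len(schedule)))

    # Weighted aggregation, Eq. (7).
    return alpha * makespan + beta * energy + gamma * cost + delta * priority_penalty
```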
Algorithm 1 Priority-Aware Task Scheduling in Fog Computing
  • Require:
  •         $T = \{T_1, T_2, \ldots, T_n\}$: Set of tasks
  •         $P = \{p_1, p_2, \ldots, p_n\}$: Priority of each task (1 to 10; higher = higher priority)
  •         $D = \{d_1, d_2, \ldots, d_n\}$: Execution time for each task
  •         $N = \{N_1, N_2, \ldots, N_k\}$: Set of fog nodes
  • Ensure:
  •        schedule[$T_1 \ldots T_n$]: Final task-to-node assignments
  •        workload[$N_1 \ldots N_k$]: Workload per node after assignment
1: Initialize workload[$N_1 \ldots N_k$] ← 0
2: Initialize schedule[$T_1 \ldots T_n$] ← −1
3: for each task $T_i$ in $T$ do
4:     best_node ← None
5:     min_penalty ← ∞
6:     for each node $N_j$ in $N$ do
7:         projected ← workload[$N_j$] + $D[i]$
8:         penalty ← $p_i$ × projected
9:         if penalty < min_penalty then
10:            min_penalty ← penalty
11:            best_node ← $N_j$
12:        end if
13:    end for
14:    Assign $T_i$ to best_node
15:    workload[best_node] += $D[i]$
16:    schedule[$T_i$] ← best_node
17: end for
The time complexity of the priority-aware task scheduling algorithm in fog computing is O(n·k), where n denotes the number of tasks and k denotes the number of fog nodes. In the initialization stage, setting up the workload and schedule arrays takes O(n + k) time. The outer loop in lines 3–17 runs for n iterations, and the inner loop in lines 6–13 takes O(k) per task, since the best assignment must be chosen from the available k nodes. The total time complexity is therefore O(n + k) + O(n·k), and since the n·k term dominates for larger inputs, the overall time complexity is O(n·k).
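For readers who prefer an executable form, the sketch below mirrors Algorithm 1 in Python under the same assumptions (priorities in 1–10, known execution times); it is a minimal illustration whose variable names follow the pseudocode rather than the authors' code.

```python
def priority_aware_schedule(durations, priorities, num_nodes):
    """Greedy priority-aware assignment from Algorithm 1; runs in O(n*k) time."""
    workload = [0.0] * num_nodes            # line 1: workload[N_1..N_k] <- 0
    schedule = [-1] * len(durations)        # line 2: schedule[T_1..T_n] <- -1

    for i, d in enumerate(durations):       # lines 3-17: one pass over the tasks
        best_node, min_penalty = None, float("inf")
        for j in range(num_nodes):          # lines 6-13: scan all k nodes
            projected = workload[j] + d     # projected load if task i lands on node j
            penalty = priorities[i] * projected
            if penalty < min_penalty:
                min_penalty, best_node = penalty, j
        workload[best_node] += d            # line 15
        schedule[i] = best_node             # line 16
    return schedule, workload
```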

4. Simulated Annealing Task Scheduling

The Simulated Annealing procedure (Figure 3) begins by generating a random assignment of tasks to nodes. A cost function then evaluates this plan, considering task completion time, energy usage, execution cost, and a priority-based penalty. After that, the algorithm perturbs the plan with small changes called neighbors. Sometimes it even accepts worse options, with a probability linked to the temperature; this helps it escape local optima and work toward a near-optimal setup that balances the main goals. The method continues for a set number of iterations or stops when the temperature drops below a specific limit, at which point it returns the best schedule found.
Unlike GA-based schedulers [15] and RL-based methods [12,13], which primarily focus on cost and energy, our SA-based model incorporates priority-aware penalties to improve responsiveness for critical tasks. Previous models such as I-FASC [17] and EEIoMT [19] do not consider urgent task responsiveness, whereas our method integrates the urgency and priority-awareness within a multi-objective formulation, positioning it as an advancement over these existing approaches.

4.1. Initialization

To begin the Simulated Annealing (SA) process for task scheduling, we first formalize a schedule $\sigma$ as a mapping
\sigma^{(0)}: \{1, \ldots, T\} \rightarrow \{1, \ldots, N\}
where each task $i \in \{1, \ldots, T\}$ is assigned to a fog node $\sigma^{(0)}(i) \in \{1, \ldots, N\}$. In its simplest form, this initial schedule is generated uniformly at random:
\Pr\left(\sigma^{(0)}(i) = j\right) = \frac{1}{N}, \quad \text{for } j = 1, \ldots, N
ensuring that every possible assignment has equal probability. This purely stochastic start helps the algorithm cover the entire solution space without bias, promoting broader exploration in early iterations.
However, uniform random seeding can be augmented by heuristic or hybrid strategies to improve convergence speed. For example, one might pre-assign high-priority tasks to the least-loaded nodes using a simple greedy rule, or incorporate domain knowledge by placing tasks with the longest durations on more powerful nodes. Such heuristic seeding yields an initial $\sigma^{(0)}$ closer to the optimum, reducing the number of uphill moves required during the annealing process.
Once σ ( 0 ) is fixed, we compute its initial cost via the multi-objective function:
\text{Cost}(\sigma^{(0)}) = \alpha M(\sigma^{(0)}) + \beta E(\sigma^{(0)}) + \gamma C(\sigma^{(0)}) + \delta \, \text{Priority}(\sigma^{(0)})
This value populates both the current cost and best-so-far cost:
\text{Cost}_{\text{current}} = \text{Cost}_{\text{best}} = \text{Cost}(\sigma^{(0)})
Maintaining these two references allows SA to track progress and preserve the most promising schedule encountered.
A critical SA hyperparameter is the initial temperature $T_0$. If chosen too low, the algorithm behaves like a greedy hill-climber and easily becomes trapped in local minima; if too high, it wastes time exploring dominated regions. A common approach is to estimate $T_0$ based on the cost landscape of random samples. Specifically, by generating $S$ random schedules $\{\sigma^{(s)}\}_{s=1}^{S}$, computing their pairwise cost differences $\Delta_s$, and taking the sample standard deviation $\sigma_\Delta$, one sets
T_0 = -\frac{\sigma_\Delta}{\ln(p_0)}
where $p_0 \in (0.5, 0.9)$ is a target initial acceptance probability (e.g., $p_0 = 0.8$). This principled choice ensures that a high fraction of uphill moves are accepted at the start, facilitating wide-ranging exploration.
Finally, before entering the main annealing loop, practitioners often precompute cost components (e.g., node workloads, per-task energy) and initialize logging structures to record metrics such as cost, temperature, and acceptance rates. This rich initialization phase—combining random or heuristic seeding, informed temperature selection, and systematic bookkeeping—lays a robust foundation for the subsequent SA iterations, dramatically improving both the efficiency and the quality of the final task schedule.
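As a concrete illustration of the initial-temperature heuristic described above, the sketch below draws a handful of random schedules, measures the spread of their cost differences, and sets $T_0$ accordingly. The helper names and parameter values (e.g., samples=30, p0=0.8) are assumptions for illustration; cost_fn is any function that maps a schedule to its composite cost, such as the total_cost sketch shown earlier.

```python
import math
import random
import statistics

def random_schedule(num_tasks, num_nodes):
    """Uniform random task-to-node mapping: Pr(sigma(i) = j) = 1/N."""
    return [random.randrange(num_nodes) for _ in range(num_tasks)]

def initial_temperature(cost_fn, num_tasks, num_nodes, samples=30, p0=0.8):
    """Pick T0 so that roughly a fraction p0 of early uphill moves is accepted."""
    costs = [cost_fn(random_schedule(num_tasks, num_nodes)) for _ in range(samples)]
    deltas = [abs(a - b) for a, b in zip(costs, costs[1:])]  # sample cost differences
    sigma_delta = statistics.stdev(deltas)
    return -sigma_delta / math.log(p0)   # T0 = -sigma_Delta / ln(p0); ln(p0) < 0
```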

4.2. Neighbor Generation

In the Neighbor Generation phase of Simulated Annealing (SA) task scheduling, the focus is on generating new candidate solutions by making small, incremental changes to the current schedule. This iterative process allows the algorithm to explore the solution space in a controlled manner, gradually improving the task-to-node assignments over time. The concept is to shake up the current schedule to explore the solution space and move towards the optimal or near-optimal solutions.
In each round $k$, the system picks a task $i^{*}$ at random from the set of tasks $\{1, \ldots, T\}$. After choosing task $i^{*}$, the system assigns it to a new node $j^{*}$ which is not the same as its current node $\sigma(i^{*})$. The new node is chosen randomly from the set of available nodes excluding the current assignment, i.e., $j^{*} \in \{1, \ldots, N\} \setminus \{\sigma(i^{*})\}$. The new schedule, $\sigma'$, is then formed by applying this perturbation. Formally, the new schedule is given by:
\sigma'(i) =
\begin{cases}
j^{*}, & \text{if } i = i^{*}, \\
\sigma(i), & \text{if } i \neq i^{*}.
\end{cases}
This simple yet effective step creates a neighboring schedule that differs slightly from the current one, ensuring that only one task is changed at a time. The algorithm keeps the search process controlled and focused by making only small changes to the schedule. What makes this strategy attractive is that one change to the task-node allocation affects only two workloads: the workload of the old node that held the task, $W_{\sigma(i^{*})}$, and the workload of the new node, $W_{j^{*}}$. This makes it cheaper to evaluate the new schedule, because only the two affected workloads need to be updated. The strategy is useful because the new schedule cost is updated incrementally: since only two nodes are changed, the algorithm does not need to recalculate the entire schedule’s cost and can compute the cost change from the two affected node workloads, which is faster and more effective. We define the cost function as $\text{Cost}(\sigma)$, where $\sigma$ represents a particular schedule. After we reassign task $i^{*}$ to node $j^{*}$, we can express the change in cost $\Delta\text{Cost}$ as:
\Delta\text{Cost} = \text{Cost}(\sigma') - \text{Cost}(\sigma) = \Delta W_{\sigma(i^{*})} + \Delta W_{j^{*}}
where $\Delta W_{\sigma(i^{*})}$ and $\Delta W_{j^{*}}$ represent the changes in the workloads of the affected nodes. This formulation allows for efficient cost evaluation without needing to recompute the entire schedule’s cost from scratch.
This localized search enables Simulated Annealing to methodically navigate the solution space, tweaking small elements that impact the overall schedule cost, which is crucial for optimizing the schedule. The method guarantees that the solution space is approached in a constrained, incremental way, avoiding large, sudden shifts that may result in a locally optimal solution. Each new schedule also links to the one before it, creating a chain of progressively better schedules as the algorithm proceeds. This approach balances exploring the solution space with exploiting the best solutions found so far. At the beginning of the search, when the temperature is still high, the algorithm is more likely to accept uphill moves that are far from optimal in order to circumvent local optima. As the temperature is lowered, the algorithm becomes more focused on the best solutions found so far and only attempts to change them in a more fine-grained manner. The Simulated Annealing algorithm’s effectiveness in solving task scheduling problems lies in this changing balance between exploration and exploitation.
Thus, the way in which neighbors are formed is critical to the SA task-scheduling algorithm. It offers a means to enhance the schedule without excessive computational expense. The algorithm is able to gradually and purposely improve the schedule by making small changes, computing the cost of each change and iteratively refining the schedule towards an optimal solution.
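A minimal sketch of the single-task reassignment move described above; it returns the perturbed schedule together with the moved task and the two affected nodes so that the caller can update workloads incrementally. The function name and return convention are illustrative assumptions.

```python
import random

def neighbor(schedule, num_nodes):
    """Reassign one randomly chosen task i* to a different node j*."""
    new_schedule = list(schedule)
    i_star = random.randrange(len(schedule))            # pick a task at random
    old_node = schedule[i_star]
    # choose j* uniformly from {0..N-1} \ {sigma(i*)}
    j_star = random.choice([j for j in range(num_nodes) if j != old_node])
    new_schedule[i_star] = j_star
    return new_schedule, i_star, old_node, j_star       # only two workloads change
```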

4.3. Cost Evaluation

In the Cost Evaluation phase of Simulated Annealing (SA) for task scheduling, the goal is to assess how the new schedule compares to the current one by calculating the change in cost:
\Delta\text{Cost} = \text{Cost}(\sigma') - \text{Cost}(\sigma)
However, instead of figuring out the whole cost from the beginning, we use the fact that one task’s assignment changes. This means we can work out the change in cost bit by bit, looking at just the parts affected by the switch. This makes our calculations more efficient.
In task scheduling, the cost function for a specific schedule $\sigma$ has two main parts: the makespan term and the priority-penalty term. The makespan is the total time it takes to finish all tasks, which depends on how much work each node has to do. The priority-penalty term accounts for task urgency, penalizing schedules that place high-priority tasks on heavily loaded nodes.
For a given schedule $\sigma$, you can figure out how much work node $j$ has by adding up the time it takes to do each task $i$ that is assigned to that node:
W_j = \sum_{i : \sigma(i) = j} d_i
Now, when we perturb the schedule by moving task $i^{*}$ from node $a = \sigma(i^{*})$ to a new node $b = j^{*}$, the workloads of nodes $a$ and $b$ are updated accordingly. Specifically, the new workload for node $a$, denoted $W_a'$, is obtained by subtracting the duration of the moved task $d_{i^{*}}$, while the new workload for node $b$, denoted $W_b'$, is obtained by adding the duration of the moved task $d_{i^{*}}$:
W_a' = W_a - d_{i^{*}}, \qquad W_b' = W_b + d_{i^{*}}
The changes to the node workloads have an impact on the schedule’s makespan, which we define as the highest workload among all nodes. The difference between the new and old makespan is the change in makespan, $\Delta M$, calculated as:
\Delta M = \max(W_a', W_b', \ldots) - \max(W_a, W_b, \ldots)
This equation reflects the change in the overall task completion time as a result of reassigning task i * to node b. By only recalculating the workloads of the two affected nodes, this process is computationally efficient and avoids the need to recompute the makespan for the entire schedule.
The task reassignment affects not only the makespan but also the priority-penalty term, which represents the cost incurred when high-priority tasks are placed on loaded nodes. For task $i^{*}$, if it is assigned to node $a$, it incurs a priority penalty $p_{i^{*}} \cdot W_a$. When it is moved to node $b$, the priority penalty becomes $p_{i^{*}} \cdot (W_b + d_{i^{*}})$. The change in the priority penalty, denoted $\Delta P$, is given by:
\Delta P = p_{i^{*}}(W_b + d_{i^{*}}) - p_{i^{*}} W_a = p_{i^{*}}(W_b + d_{i^{*}} - W_a)
This term adjusts the total cost based on how the task reassignment impacts the priority order and the penalty associated with violating the priority constraints.
Finally, the total change in cost Δ is a weighted sum of the changes in makespan and priority penalty:
\Delta = \alpha \, \Delta M + \delta \, \Delta P
where α and δ are user-defined constants that control the relative importance of the makespan and priority-penalty components in the overall cost function. The values of α and δ are typically set based on the specific requirements of the scheduling problem, such as whether the makespan or priority penalties should be more heavily penalized.
Reviewing the cost changes step by step makes the Cost Evaluation process quicker. This method examines small changes that happen when tasks are moved around. It lets the system decide if the change makes things better or worse and whether to keep or reject the new plan. Fast checks are important to how Simulated Annealing works. These checks help it test many options without using too much time.
The approach works even better when the cost or energy function is kept simple. If nodes consume resources and incur costs at uniform rates, the energy and monetary components can be updated incrementally in the same way, which simplifies the evaluation process and reduces its difficulty. This offers an easy way to assess costs and keeps the SA task scheduling method effective and simple to manage, making it a solid and flexible choice for scheduling large task sets.
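The incremental evaluation described in this subsection can be written compactly as follows; this sketch assumes the current workload vector is maintained by the caller and uses the same $\alpha$ and $\delta$ weights as the weighted sum $\Delta = \alpha \Delta M + \delta \Delta P$. It recomputes the maximum over the updated workload vector for clarity, which is a simplification rather than the authors' exact implementation.

```python
def delta_cost(workloads, durations, priorities, i_star, old_node, new_node,
               alpha=1.0, delta=1.0):
    """Incremental cost change for moving task i* from old_node to new_node."""
    d = durations[i_star]
    w_a, w_b = workloads[old_node], workloads[new_node]

    # Change in makespan: only the two affected workloads move.
    new_workloads = list(workloads)
    new_workloads[old_node] = w_a - d
    new_workloads[new_node] = w_b + d
    delta_m = max(new_workloads) - max(workloads)

    # Change in the priority penalty for the moved task.
    delta_p = priorities[i_star] * (w_b + d - w_a)

    return alpha * delta_m + delta * delta_p
```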

4.4. Acceptance Criterion

In the Acceptance Criterion step of Simulated Annealing, as applied to task scheduling, deciding whether to accept a new neighbor schedule $\sigma'$ relies on the Metropolis rule. This rule uses probability to introduce randomness into the exploration of solutions. In the annealing process, this randomness helps the algorithm examine different parts of the solution space. During this stage, the algorithm prioritizes exploring new options over refining existing ones. Using this approach prevents the algorithm from getting stuck in local optima too early, ensuring it has a chance to find the best overall solution. The Metropolis rule’s core idea is to compare the change in the cost function, written as $\Delta = \text{Cost}(\sigma') - \text{Cost}(\sigma)$, to a factor that depends on temperature.
The basic form of the Metropolis rule involves comparing the change in the cost function, $\Delta = \text{Cost}(\sigma') - \text{Cost}(\sigma)$, with a temperature-dependent factor. If the change in cost $\Delta$ is less than or equal to zero, the proposed solution $\sigma'$ is accepted unconditionally, as it improves the overall cost or does not worsen it. In mathematical terms, this is expressed as:
P_{\text{accept}} =
\begin{cases}
1, & \text{if } \Delta \leq 0, \\
\exp\left(-\dfrac{\Delta}{T_k}\right), & \text{if } \Delta > 0.
\end{cases}
Here, $P_{\text{accept}}$ represents the probability of accepting the new schedule $\sigma'$, and $T_k$ is the current temperature at iteration $k$. The temperature $T_k$ plays a crucial role in determining the likelihood of accepting a worsening solution (i.e., a solution with $\Delta > 0$). As the temperature decreases over time (due to the cooling schedule), the acceptance probability for worse solutions becomes lower, which ensures that the algorithm gradually shifts from exploration to exploitation.
When $\Delta > 0$, meaning the new schedule $\sigma'$ results in a worse solution, the decision to accept it is made probabilistically. The probability of accepting a worse solution decreases as the temperature lowers, with probability given by the exponential function $\exp(-\Delta / T_k)$. This means that when the temperature is high, even large increases in cost may be accepted, allowing the algorithm to escape from local minima and explore broader regions of the solution space. On the other hand, as the temperature cools, the acceptance of worse solutions becomes increasingly rare, and the algorithm focuses more on refining the current best solution.
The acceptance decision is made by drawing a uniform random number $u$ from the interval $(0, 1)$, i.e., $u \sim U(0, 1)$. If this random value is less than or equal to $P_{\text{accept}}$, the new schedule $\sigma'$ is accepted, and the current schedule $\sigma$ is updated to $\sigma'$. In other words, the acceptance condition is:
\text{accept } \sigma' \quad \text{if } u \leq P_{\text{accept}}
This random acceptance mechanism ensures that the algorithm does not get stuck in local optima too early in the process. By allowing uphill moves (i.e., accepting worse solutions) with a certain probability, the algorithm maintains the capacity to explore the solution space more widely in the early stages of the process.
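The Metropolis rule above can be implemented in a few lines; this hedged sketch assumes the cost change and the current temperature are supplied by the surrounding annealing loop.

```python
import math
import random

def metropolis_accept(delta, temperature):
    """Accept improvements outright; accept worse moves with prob. exp(-delta/T_k)."""
    if delta <= 0:
        return True
    return random.random() <= math.exp(-delta / temperature)
```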
The temperature T k starts high and gradually decreases according to the cooling schedule. The annealing process begins with the algorithm choosing weaker solutions since the temperature starts off high. This allows it to explore more options. By doing so, it avoids getting stuck in poor local spots and improves its chances of reaching an optimal or global result. When the temperature lowers, the algorithm leans more toward the stronger solutions it has already uncovered. It fine-tunes its focus to move closer to the best possible outcome.
The acceptance criterion plays a key role in balancing exploration and exploitation. The temperature schedule serves as the main factor in this process. It decides how the algorithm shifts from accepting a wider range of exploratory solutions to favoring fewer, more-refined exploitative solutions. Striking this balance is crucial to allowing the Simulated Annealing algorithm to find effective solutions to tough optimization tasks like task scheduling.

4.5. Cooling

The Cooling and Termination phase in Simulated Annealing (SA) allows the algorithm to change from exploring possible solutions to focusing on refining a single solution. It balances searching the solution space while settling on a high-quality result. A geometric cooling schedule controls this phase by lowering the temperature T k with each step. The temperature update rule is given by:
T_{k+1} = \tau \cdot T_k
where $T_k$ is the temperature at iteration $k$, $T_{k+1}$ is the temperature at the next iteration, and $\tau$ is a cooling rate parameter, typically chosen to lie within the range $[0.90, 0.99]$. This cooling rate parameter $\tau$ controls the speed at which the temperature decreases. When $\tau$ is close to 1, the temperature decreases slowly, allowing the algorithm to continue exploring for a longer period of time. Conversely, smaller values of $\tau$ result in faster cooling, causing the algorithm to shift more quickly toward exploitation of the best solutions found.
The cooling schedule plays a central role in balancing exploration and exploitation: while the temperature remains high, the acceptance rule allows the algorithm to accept worse solutions more often, which helps it explore the solution space and move past local minima. As the temperature lowers, the chances of choosing weaker solutions decrease, and the algorithm focuses more on the best solutions it has found up to that point. This gradual cooling helps the algorithm settle into an ideal or nearly ideal solution while keeping the search area open enough to avoid cutting off possibilities too soon. The algorithm runs by adjusting the temperature and evaluating potential solutions, and it stops when one of two conditions is met. The first condition is reached if the temperature $T_k$ falls below a set minimum $T_{\min}$; the second occurs if it completes the maximum allowed number of iterations, $K$. The $T_{\min}$ level is typically a very low value close to zero, ensuring the process ends when there is little chance of finding further improvements. The iteration limit $K$ is a second stopping criterion that prevents the algorithm from running indefinitely, setting an upper bound on iterations and guaranteeing that the algorithm terminates after a reasonable amount of computational effort.
As termination is reached, the algorithm outputs the optimum schedule discovered in all iterations. The schedule assigns tasks to nodes in a way that lowers the cost function or meets other optimization goals specific to the problem being solved. The best schedule balances exploration and exploitation during the annealing steps.
The way the cooling and stopping methods are managed has a big effect on the performance of the Simulated Annealing algorithm. By reducing the temperature geometrically, the algorithm ensures that it explores broadly in the early stages and converges more tightly toward a high-quality solution in the later stages. Additionally, the termination conditions provide a mechanism to stop the algorithm after a predefined amount of time or computational resources, ensuring that the solution process is both efficient and effective. The final schedule returned is the result of a delicate interplay between randomness and control, balancing the need for global exploration with the goal of achieving a high-quality, optimal schedule.
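Putting the pieces together, the sketch below shows the geometric cooling loop with the two stopping conditions discussed above (a temperature floor and an iteration cap). The helper functions correspond to the illustrative sketches in the previous subsections (here neighbor_fn is assumed to return only the candidate schedule), and the parameter values are assumptions rather than tuned settings from the paper.

```python
def simulated_annealing(initial, cost_fn, neighbor_fn, accept_fn,
                        t0, tau=0.95, t_min=1e-3, max_iters=10_000):
    """Geometric cooling T_{k+1} = tau * T_k; stop at T_min or after max_iters."""
    current = best = initial
    cost_cur = cost_best = cost_fn(initial)
    t_k = t0
    for _ in range(max_iters):
        if t_k < t_min:
            break                                   # temperature floor reached
        candidate = neighbor_fn(current)
        cost_cand = cost_fn(candidate)
        if accept_fn(cost_cand - cost_cur, t_k):    # Metropolis rule
            current, cost_cur = candidate, cost_cand
            if cost_cur < cost_best:                # keep the best-so-far schedule
                best, cost_best = current, cost_cur
        t_k *= tau                                  # geometric cooling step
    return best, cost_best
```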
The proposed SA-based scheduling framework directly addresses the gaps identified in existing literature. While traditional metaheuristics such as ACO and PSO [20] mainly optimize makespan or energy, they do not effectively handle task priority. Similarly, reinforcement learning-based methods [12,13] improve adaptability but introduce high computational overheads. Other works, such as I-FASC [17] and EEIoMT [19], consider load balancing or energy but neglect reliability and urgency. In contrast, our model integrates four critical objectives—makespan, energy, cost, and reliability—along with a priority-aware penalty function. This explicit inclusion of task priority ensures responsiveness for urgent IoT applications, thereby positioning our method as a novel, multi-objective, and priority-aware solution for fog-enabled IoT environments.

5. Experimental Results

In this section, we discuss the simulation setup and results of the proposed Simulated Annealing (SA)-based task scheduling framework. The complete proposed work was carried out as a SimPy simulation driven by benchmark datasets obtained from Google Cloud Job Workloads, representing small, medium, and large workloads [30].
Table 2 summarizes the simulation parameters used in our study. A total of 100–1000 tasks were considered, with task lengths varying between 15,000 and 900,000. The simulation environment consists of 10 fog nodes with 6 virtual machines (VMs) per node to emulate realistic multi-core resource availability. Each VM supports a bandwidth of 200 Mbps, ensuring sufficient data-transfer capacity for task execution. The simulation was conducted on a Windows 11 operating system running on an Intel i7 processor, with the model implemented in SimPy.
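For readers unfamiliar with SimPy, the fragment below shows one way such an environment could be modelled, with each fog node represented as a resource whose capacity equals its VM count. The MIPS rating and the random task-to-node placement are simplifying assumptions introduced only for illustration; they are not the scheduling policy evaluated in this paper.

```python
import random
import simpy

NUM_FOG_NODES = 10   # Table 2
VMS_PER_NODE = 6     # Table 2
NODE_MIPS = 1000     # assumed processing rate (not part of Table 2)

def run_task(env, node, length):
    """Occupy one VM slot on the chosen node for the task's execution time."""
    with node.request() as vm_slot:
        yield vm_slot
        yield env.timeout(length / NODE_MIPS)

env = simpy.Environment()
# Each fog node is a SimPy resource with one capacity unit per VM.
fog_nodes = [simpy.Resource(env, capacity=VMS_PER_NODE)
             for _ in range(NUM_FOG_NODES)]

for _ in range(100):                              # a 100-task workload
    length = random.randint(15_000, 900_000)      # task lengths as in Table 2
    env.process(run_task(env, random.choice(fog_nodes), length))

env.run()
print(f"All tasks finished at simulated time {env.now:.2f}")
```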

5.1. Makespan

In fog computing, makespan is the total time needed to complete a set of tasks in a decentralized network, and it directly influences overall system effectiveness. An effective scheduler reduces makespan by maximizing useful offloading, balancing load across nodes, and adapting dynamically to queue and dependency conditions, thereby eliminating idle time, honoring task precedences, and improving responsiveness and throughput near the edge. We evaluated makespan across varying task volumes using the Google Cloud Job Workloads benchmark. Datasets were grouped as small (100–350 tasks), medium (400–650 tasks), and large (700–1000 tasks), with task lengths ranging from 15,000 to 900,000. The proposed Simulated Annealing (SA) scheduler was compared against the classical baselines Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO), and against two stronger, recently proposed metaheuristics tailored to fog/edge settings: I-FASC (Improved Fireworks Algorithm-based Scheduling and Clustering) and M2MPA (Modified Marine Predators Algorithm).

Small datasets: Table 3 (visualized in Figure 4) reports makespan for 100–350 tasks. SA consistently achieves the lowest makespan (670.83, 742.08, 820.78, 901.56, 969.22, and 1048.32), outperforming all baselines. Among the competitors, M2MPA is the strongest challenger, followed by I-FASC, then PSO, and finally ACO. This ranking indicates that, even against more recent metaheuristics, SA delivers the most efficient scheduling under constrained edge resources.

Medium datasets: Table 4 (Figure 5) shows the same pattern for 400–650 tasks. SA records makespans of 1115.64, 1191.61, 1263.31, 1277.57, 1346.38, and 1421.81, again beating all four baselines. As task volume grows, the gap between SA and the legacy baselines (ACO/PSO) widens, while SA's margins over M2MPA and I-FASC remain favorable, highlighting its scalability and robustness at higher loads.

Large datasets: Table 5 (Figure 6) summarizes results for 700–1000 tasks. SA maintains the lead with makespans of 1495.40, 1601.41, 1622.68, 1725.43, 1800.86, 1882.23, and 1957.13. M2MPA remains the closest competitor, then I-FASC, with PSO and ACO trailing. SA's advantage at scale stems from its ability to explore globally in the early iterations and converge adaptively later, avoiding local minima and mapping high-priority and long tasks to less-loaded nodes.
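For clarity, the makespan reported above can be computed directly from a schedule, as in the small helper below. It assumes tasks assigned to the same node run back to back, which is a simplification of the simulated execution model.

```python
def makespan(assignment, exec_time):
    """Makespan = completion time of the most heavily loaded node.

    assignment: dict mapping node id -> list of task ids assigned to it
    exec_time:  dict mapping task id -> that task's execution time
    """
    return max(sum(exec_time[t] for t in tasks)
               for tasks in assignment.values())

# Example: two nodes, three tasks -> makespan is 12.0 (node "n1" finishes last).
print(makespan({"n0": ["a"], "n1": ["b", "c"]},
               {"a": 9.0, "b": 5.0, "c": 7.0}))
```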

5.2. Energy Consumption

In fog computing environments, energy consumption is a critical performance metric, particularly in edge-based systems where computational nodes often operate with limited power resources. Efficient task scheduling not only improves execution performance but also significantly reduces the total energy consumed during task execution. Lowering energy usage is vital for extending the operational lifespan of fog nodes, especially those powered by batteries or deployed in remote and resource-constrained locations. Optimized scheduling contributes by minimizing execution times, balancing computational loads, and preventing the overuse of specific nodes that would otherwise drain energy disproportionately. For our experiments, energy consumption was computed as the cumulative energy drawn by all nodes, assuming a uniform per-node power rate, with execution durations derived from the same Google Cloud Job Workloads datasets used in the makespan evaluation. Datasets were grouped as small (100–350 tasks), medium (400–650 tasks), and large (700–1000 tasks). The proposed Simulated Annealing (SA) approach was compared not only with the classical baselines ACO and PSO but also with two recent metaheuristic models tailored to fog systems: I-FASC and M2MPA.

Small datasets: Table 6 and Figure 7 show results for 100–350 tasks. SA achieves the lowest energy consumption across all task volumes: 607.34 J, 951.77 J, 1272.90 J, 1675.60 J, 2070.52 J, and 2560.49 J. M2MPA consistently comes second, followed by I-FASC, while ACO and PSO consume the most, giving a clear ordering.

Medium datasets: Table 7 and Figure 8 present results for 400–650 tasks. SA again delivers the lowest values, ranging from 3091.08 J to 6074.64 J, maintaining a wide gap over ACO and PSO and a consistent advantage over I-FASC and M2MPA. Although all algorithms show proportional growth in energy usage with task volume, SA's energy footprint increases at a slower rate, highlighting its scalability.

Large datasets: Table 8 and Figure 9 summarize results for 700–1000 tasks. SA records 6809.73 J, 7467.54 J, 8176.36 J, 8974.15 J, 9704.25 J, 10,544.25 J, and 11,319.15 J. Even in large-scale settings, SA outperforms M2MPA, I-FASC, PSO, and ACO. M2MPA again proves competitive, but SA's ability to allocate tasks to less congested nodes and avoid hotspots leads to the most efficient energy use at scale.
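Formally, the energy metric used here can be written as follows, where $P$ denotes the assumed uniform per-node power rate, $B_j$ the total busy time of node $j$, and $M$ the number of fog nodes; these symbols are introduced only to make the computation explicit.

```latex
E_{\text{total}} = \sum_{j=1}^{M} P \cdot B_j = P \sum_{j=1}^{M} B_j
```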

5.3. Cost

In fog computing infrastructures, execution cost is a vital performance indicator that reflects the economic feasibility of a scheduling strategy. Each task incurs a cost proportional to its execution time and the computational resources consumed. Since fog nodes often operate as part of commercial edge-cloud ecosystems or resource-leased platforms, minimizing cost is crucial for cost-effective and scalable deployments. An efficient scheduler reduces cost by minimizing task durations, avoiding overloading expensive nodes, and exploiting resource-efficient execution paths. In our experiments, cost was computed as the product of task execution time and a constant per-time-unit processing rate. The evaluation used the same workload datasets as for makespan and energy: small (100–350 tasks), medium (400–650 tasks), and large (700–1000 tasks). The proposed Simulated Annealing (SA) scheduler was benchmarked against the classical baselines (ACO, PSO) and the more advanced metaheuristics I-FASC and M2MPA.

Small datasets: Table 9 and Figure 10 summarize results for 100–350 tasks. SA achieved the lowest costs of 5.74, 9.58, 12.80, 16.68, 20.66, and 25.54, outperforming all competitors. M2MPA emerges as the second best, followed by I-FASC, PSO, and ACO. This clear hierarchy demonstrates SA's ability to reduce resource billing time by minimizing makespan and balancing node utilization.

Medium datasets: Table 10 and Figure 11 show results for 400–650 tasks. SA maintains its advantage, with cost values from 30.53 to 60.32, while M2MPA and I-FASC record higher but still competitive results; ACO and PSO incur the highest costs. The controlled rise of SA's costs with increasing workload size highlights its economic scalability and its ability to handle larger deployments without a proportional increase in overhead.

Large datasets: Table 11 and Figure 12 report costs for 700–1000 tasks. SA records 68.07, 74.61, 81.64, 89.83, 97.25, 105.51, and 112.90 across the task sizes, achieving the lowest cost up to 800 tasks and remaining within about 2% of the best-performing M2MPA at the largest scales, while staying clearly below I-FASC, PSO, and ACO throughout. This narrow margin under heavy load indicates SA's suitability for economically constrained fog deployments, where cost minimization is paramount.

Overall, the analysis across Table 9, Table 10 and Table 11 and their figures establishes a consistent pattern (lower is better), validating that the priority-aware SA scheduler not only reduces makespan and energy but also delivers strong cost-efficiency, making it a robust solution for sustainable fog and IoT scheduling.
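In the same spirit, the cost metric reduces to a single expression, with $c$ the constant per-time-unit processing rate, $ET_i$ the execution time of task $i$, and $N$ the number of tasks (symbols introduced here for clarity):

```latex
C_{\text{total}} = c \sum_{i=1}^{N} ET_i
```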

5.4. Reliability

In fog computing systems, reliability represents the probability that scheduled tasks are successfully executed without node failure. Given the decentralized and heterogeneous nature of fog environments, ensuring high reliability is essential, especially in mission-critical applications such as healthcare monitoring, industrial automation, and autonomous systems. Reliability in scheduling directly affects system robustness and its ability to maintain service continuity under conditions such as node overload, performance degradation, or partial outages. In this work, system reliability was modeled as the compound probability that all scheduled tasks execute successfully, assuming a constant reliability factor per node. The experiments were conducted across datasets of increasing size: small (100–350 tasks), medium (400–650 tasks), and large (700–1000 tasks). The proposed Simulated Annealing (SA) scheduler was compared not only with ACO and PSO but also with two advanced metaheuristics designed for fog systems: I-FASC and M2MPA.

Small datasets: Table 12 and Figure 13 show the reliability outcomes for 100–350 tasks. SA consistently achieved the highest reliability, with values of 0.9849, 0.9846, 0.9842, 0.9834, 0.9830, and 0.9823 across increasing task counts. M2MPA follows closely, then I-FASC, while PSO and ACO trail behind. The results indicate that SA not only balances workloads effectively but also avoids risky node assignments, providing stronger task-completion guarantees.

Medium datasets: Table 13 and Figure 14 extend the analysis to 400–650 tasks. SA achieved reliability values of 0.9818, 0.9817, 0.9813, 0.9805, 0.9802, and 0.9896, consistently outperforming all baselines. M2MPA and I-FASC remain competitive, but SA's adaptive search and dynamic task reassignment maintain a clear edge. As workloads scale up, SA continues to prevent node saturation, sustaining reliability above the competing models.

Large datasets: Table 14 and Figure 15 summarize reliability results for 700–1000 tasks. SA again demonstrates the highest dependability, with scores of 0.9889, 0.9886, 0.9883, 0.9875, 0.9873, 0.9860, and 0.9862. Even under heavy load, SA ensures stable and robust performance; M2MPA comes second, then I-FASC, followed by PSO and ACO. SA's superiority here underscores its ability to sustain reliability at scale, making it suitable for environments where task failures are unacceptable.
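The compound-probability model described above can be written as follows, where $p_i$ denotes the probability that task $i$ completes successfully on its assigned node (equal across tasks mapped to the same node under the constant per-node reliability assumption) and $N$ is the number of scheduled tasks; the notation is ours:

```latex
R_{\text{sys}} = \prod_{i=1}^{N} p_i
```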
The comparison algorithms ACO and PSO are metaheuristic approaches, but they often suffer from premature convergence and an inability to adapt to dynamic fog–cloud workloads, which limits their applicability in latency-sensitive environments. Our SA-based method addresses these challenges through a local neighborhood search with priority-aware penalties, which helps it handle urgent tasks more effectively. Frameworks such as I-FASC and M2MPA are important fog scheduling approaches, but they focus mainly on cost efficiency and throughput and do not address priority handling or responsiveness to urgent tasks. By integrating urgency and priority awareness into the scheduling process, our SA method goes beyond these frameworks. Therefore, ACO, PSO, I-FASC, and M2MPA serve not only as benchmarks but also as examples of existing limitations that the proposed solution directly overcomes.
For evaluation, we compared our proposed SA-based framework against ACO, PSO, I-FASC and M2MPA. These methods were selected as baselines because they are widely used metaheuristics in fog and cloud scheduling [14,20,26,27]. The comparison was carried out under identical experimental conditions using the Google Cloud Job Workloads dataset, with the same number of tasks, fog nodes, and bandwidth constraints. Performance was measured on makespan, energy consumption, cost, and reliability. This ensures that improvements observed with SA are attributable to the proposed framework rather than differences in setup. Furthermore, while ACO and PSO optimize scheduling, they do not incorporate task priority; our SA-based model uniquely integrates a priority-aware penalty function, providing an additional justification for its effectiveness.

6. Conclusions

In this work, we proposed a priority-sensitive, multi-objective task scheduling framework for fog computing environments based on the simulated annealing (SA) algorithm. By incorporating a task-priority penalty function and formulating task scheduling as a compound optimization problem over makespan, energy consumption, execution cost, and reliability, we address the fundamental weaknesses of current static and single-objective methods. The proposed model adapts to the dynamic and heterogeneous nature of fog infrastructures, effectively balancing computational loads while guaranteeing timely execution of high-priority tasks. Our SA-based method efficiently traverses the enormous scheduling solution space by adopting a probabilistic exploration–exploitation strategy and achieves near-optimal task-node mappings even in large-scale settings. Experimental results across varying task volumes validate the proposed model, which outperforms established metaheuristics such as ACO, PSO, I-FASC, and M2MPA on the major performance measures of makespan minimization, energy efficiency, cost-effectiveness, and reliability improvement. The framework's ability to adapt dynamically to task urgency and to respond to changes in resource availability positions it as an effective solution for latency-aware, mission-critical applications in fog-based IoT environments. Future work will include the integration of real-time feedback mechanisms, improved fault tolerance in heterogeneous and mobile fog architectures, and the incorporation of learning-based metaheuristics to further improve scheduling efficiency under uncertain and dynamic workload conditions.

Author Contributions

Conceptualization, S.S.M. and G.R.K.; methodology, P.V.R.; writing—original draft preparation, S.S.M., G.R.K. and P.V.R.; writing—review and editing, G.T. and H.K.; visualization, G.R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Choppara, P.; Mangalampalli, S. Reliability and trust aware task scheduler for cloud-fog computing using advantage actor critic (A2C) algorithm. IEEE Access 2024, 12, 102126–102145. [Google Scholar] [CrossRef]
  2. Abdelmoneem, R.M.; Benslimane, A.; Shaaban, E. Mobility-aware task scheduling in cloud-Fog IoT-based healthcare architectures. Comput. Netw. 2020, 179, 107348. [Google Scholar] [CrossRef]
  3. Gowda, D.; Sharma, A.; Rao, B.K.; Shankar, R.; Sarma, P.; Chaturvedi, A.; Hussain, N. Industrial quality healthcare services using Internet of Things and fog computing approach. Meas. Sens. 2022, 24, 100517. [Google Scholar] [CrossRef]
  4. Choppara, P.; Lokesh, B. Efficient Task Scheduling and Load Balancing in Fog Computing for crucial Healthcare through Deep Reinforcement Learning. IEEE Access 2025, 13, 26542–26563. [Google Scholar] [CrossRef]
  5. Reddy, P.V.; Reddy, K.G. An energy efficient RL based workflow scheduling in cloud computing. Expert Syst. Appl. 2023, 234, 121038. [Google Scholar] [CrossRef]
  6. Bukhsh, M.; Abdullah, S.; Bajwa, I.S. A decentralized edge computing latency-aware task management method with high availability for IoT applications. IEEE Access 2021, 9, 138994–139008. [Google Scholar] [CrossRef]
  7. Waqas, M.; Niu, Y.; Ahmed, M.; Li, Y.; Jin, D.; Han, Z. Mobility-aware fog computing in dynamic environments: Understandings and implementation. IEEE Access 2018, 7, 38867–38879. [Google Scholar] [CrossRef]
  8. Tinini, R.I.; dos Santos, M.R.P.; Figueiredo, G.B.; Batista, D.M. 5GPy: A SimPy-based simulator for performance evaluations in 5G hybrid Cloud-Fog RAN architectures. Simul. Model. Pract. Theory 2020, 101, 102030. [Google Scholar] [CrossRef]
  9. Choppara, P.; Mangalampalli, S.S. A Hybrid Task scheduling technique in fog computing using fuzzy logic and Deep Reinforcement learning. IEEE Access 2024, 12, 176363–176388. [Google Scholar] [CrossRef]
  10. Younas, M.I.; Iqbal, M.J.; Aziz, A.; Sodhro, A.H. Toward QoS monitoring in IoT edge devices driven healthcare—A systematic literature review. Sensors 2023, 23, 8885. [Google Scholar] [CrossRef]
  11. Choppara, P.; Mangalampalli, S. An Effective analysis on various task scheduling algorithms in Fog computing. EAI Endorsed Trans. Internet Things 2024, 10, 1–7. [Google Scholar] [CrossRef]
  12. Raju, M.R.; Mothku, S.K. Delay and energy aware task scheduling mechanism for fog-enabled IoT applications: A reinforcement learning approach. Comput. Netw. 2023, 224, 109603. [Google Scholar] [CrossRef]
  13. Jangra, A.; Mangla, N. An efficient load balancing framework for deploying resource scheduling in cloud-based communication in healthcare. Meas. Sens. 2023, 25, 100584. [Google Scholar] [CrossRef]
  14. Seifhosseini, S.; Shirvani, M.H.; Ramzanpoor, Y. Multi-objective cost-aware bag-of-tasks scheduling optimization model for IoT applications running on heterogeneous fog environment. Comput. Netw. 2024, 240, 110161. [Google Scholar] [CrossRef]
  15. Mokni, M.; Yassa, S.; Hajlaoui, J.E.; Omri, M.N.; Chelouah, R. Multi-objective fuzzy approach to scheduling and offloading workflow tasks in Fog–Cloud computing. Simul. Model. Pract. Theory 2023, 123, 102687. [Google Scholar] [CrossRef]
  16. Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Hezam, I.M. Multi-objective task scheduling method for cyber–physical–social systems in fog computing. Knowl.-Based Syst. 2023, 280, 111009. [Google Scholar] [CrossRef]
  17. Wang, S.; Zhao, T.; Pang, S. Task scheduling algorithm based on improved firework algorithm in fog computing. IEEE Access 2020, 8, 32385–32394. [Google Scholar] [CrossRef]
  18. Zhu, W.; Goudarzi, M.; Buyya, R. FLight: A lightweight federated learning framework in edge and fog computing. Softw. Pract. Exp. 2024, 54, 813–841. [Google Scholar] [CrossRef]
  19. Alatoun, K.; Matrouk, K.; Mohammed, M.A.; Nedoma, J.; Martinek, R.; Zmij, P. A novel low-latency and energy-efficient task scheduling framework for internet of medical things in an edge fog cloud system. Sensors 2022, 22, 5327. [Google Scholar] [CrossRef]
  20. Deng, W.; Zhu, L.; Shen, Y.; Zhou, C.; Guo, J.; Cheng, Y. A novel multi-objective optimized DAG task scheduling strategy for fog computing based on container migration mechanism. Wirel. Netw. 2024, 31, 1005–1019. [Google Scholar] [CrossRef]
  21. Talaat, F.M.; Ali, H.A.; Saraya, M.S.; Saleh, A.I. Effective scheduling algorithm for load balancing in fog environment using CNN and MPSO. Knowl. Inf. Syst. 2022, 64, 773–797. [Google Scholar] [CrossRef]
  22. Tripathy, S.S.; Rath, M.; Tripathy, N.; Roy, D.S.; Francis, J.S.A.; Bebortta, S. An intelligent health care system in fog platform with optimized performance. Sustainability 2023, 15, 1862. [Google Scholar] [CrossRef]
  23. Siyadatzadeh, R.; Mehrafrooz, F.; Ansari, M.; Safaei, B.; Shafique, M.; Henkel, J.; Ejlali, A. Relief: A reinforcement-learning-based real-time task assignment strategy in emerging fault-tolerant fog computing. IEEE Internet Things J. 2023, 10, 10752–10763. [Google Scholar] [CrossRef]
  24. Hussein, M.K.; Mousa, M.H. Efficient task offloading for IoT-based applications in fog computing using ant colony optimization. IEEE Access 2020, 8, 37191–37201. [Google Scholar] [CrossRef]
  25. Bansal, S.; Aggarwal, H. A multiobjective optimization of task workflow scheduling using hybridization of PSO and WOA algorithms in cloud-fog computing. Clust. Comput. 2024, 27, 10921–10952. [Google Scholar] [CrossRef]
  26. Monika.; Sehrawat, H. An Adaptive and Priority Improved Task Scheduling Model for Fog Computing Environment. In Proceedings of the 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), Kamand, India, 18–22 June 2024; pp. 1–5. [Google Scholar] [CrossRef]
  27. Hussain, M.; Nabi, S.; Hussain, M. RAPTS: Resource aware prioritized task scheduling technique in heterogeneous fog computing environment. Clust. Comput. 2024, 27, 13353–13377. [Google Scholar] [CrossRef]
  28. Ghafari, R.; Mansouri, N. A novel energy-based task scheduling in fog computing environment: An improved artificial rabbits optimization approach. Clust. Comput. 2024, 27, 8413–8458. [Google Scholar] [CrossRef]
  29. Nagabushnam, G.; Choi, Y.; Kim, K.H. Fodas: A novel reinforcement learning approach for efficient task scheduling in fog computing network. In Proceedings of the 2024 9th International Conference on Fog and Mobile Edge Computing (FMEC), Malmo, Sweden, 2–5 September 2024; pp. 46–53. [Google Scholar]
  30. Hussain, A.; Aleem, M. GoCJ: Google cloud jobs dataset for distributed and cloud computing infrastructures. Data 2018, 3, 38. [Google Scholar] [CrossRef]
Figure 1. Fog computing architecture.
Figure 2. Proposed architecture.
Figure 3. Proposed work flowchart.
Figure 4. Makespan of small datasets.
Figure 5. Makespan of medium datasets.
Figure 6. Makespan of large datasets.
Figure 7. Energy consumption of small datasets.
Figure 8. Energy consumption of medium datasets.
Figure 9. Energy consumption of large datasets.
Figure 10. Cost of small datasets.
Figure 11. Cost of medium datasets.
Figure 12. Cost of large datasets.
Figure 13. Reliability of small datasets.
Figure 14. Reliability of medium datasets.
Figure 15. Reliability of large datasets.
Table 1. Parameter study of various existing works.

S.No | Study/Algorithm | Metrics | Tool/Platform Used
[12] | Fuzzy Reinforcement Learning (FRL) + MNILP | Makespan, Deadline Satisfied Tasks, Average Service Time, CPU Utilization, Average Computation Time, Energy Consumption | Python-based tool
[13] | GA, SARSA, Q-Learning | Throughput, Latency, Makespan | MATLAB
[14] | MoDGWA | Makespan, Scheduling Failure Factor, Monetary Cost, Total Cost Score | MATLAB
[15] | FIS + Genetic Algorithm (GA) | Makespan, Resource Utilization, Cost | WorkflowSim
[16] | M2MPA | Makespan, Energy Consumption | -
[17] | I-FASC | Task Processing Time, Load Balancing | Alibaba Cloud Server, Core i5, 4GB RAM
[18] | FLight | Training Time Efficiency, Accuracy | -
[19] | EEIoMT | Network Usage, Latency, Energy Consumption, CPU Usage | iFogSim2
[20] | TSSAC | Energy Consumption, Makespan, Load Balancing | -
[21] | EDLB | Makespan, Resource Utilization, Load Balancing, Cost | -
[22] | DQN (Deep Q-Network) | Latency, Execution Time, Accuracy, Energy Consumption, Throughput | Cooja
[23] | ReLIEF | Reliability, Workload Distribution, Throughput, Delay | Real Time
[24] | ACO + PSO | Response Time, Load Balancing and Standard Deviation | MATLAB
[25] | PSO-WOA | Execution Time, Cost, and Energy | WorkflowSim
[26] | Adaptive Priority Task Scheduling | Makespan, Failure Rate, and Execution Delay | iFogSim
[27] | RAPTS | Resource Utilization, Response Time, Makespan, Deadline, and Cost | iFogSim
[28] | NCARO | Service Time, Cost, Energy Consumption, and Makespan | MATLAB
[29] | FODAS | Deadline, Makespan, and Energy Consumption | COSCO
Table 2. Simulation parameters.

Simulation Parameters | Details
No. of Tasks | 100–1000 tasks
Lengths of the tasks | 15,000–900,000
Fog Nodes | 10
VMs per node | 6
Bandwidth | 200 Mbps
Operating System | Windows
Processor | Intel i7
Simulation Tool | SimPy
Table 3. Makespan of small datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
100 tasks | 981.21 | 958.33 | 890.45 | 805.62 | 670.83
150 tasks | 1085.40 | 1060.11 | 990.38 | 902.14 | 742.08
200 tasks | 1198.62 | 1172.54 | 1095.72 | 1004.35 | 820.78
250 tasks | 1313.77 | 1287.94 | 1206.44 | 1108.29 | 901.56
300 tasks | 1415.92 | 1384.60 | 1298.55 | 1210.71 | 969.22
350 tasks | 1532.85 | 1497.60 | 1402.18 | 1315.86 | 1048.32

Table 4. Makespan of medium datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
400 tasks | 1612.33 | 1593.77 | 1394.55 | 1249.52 | 1115.64
450 tasks | 1719.62 | 1702.31 | 1489.51 | 1334.60 | 1191.61
500 tasks | 1824.85 | 1804.73 | 1579.14 | 1414.91 | 1263.31
550 tasks | 1845.98 | 1825.11 | 1596.96 | 1430.88 | 1277.57
600 tasks | 1945.26 | 1923.41 | 1682.98 | 1507.95 | 1346.38
650 tasks | 2054.30 | 2031.16 | 1777.26 | 1592.43 | 1421.81

Table 5. Makespan of large datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
700 tasks | 2151.84 | 2136.29 | 1908.12 | 1697.34 | 1495.40
750 tasks | 2302.92 | 2288.01 | 2041.67 | 1810.92 | 1601.41
800 tasks | 2334.17 | 2318.26 | 2068.45 | 1833.56 | 1622.68
850 tasks | 2481.94 | 2464.91 | 2204.33 | 1954.28 | 1725.43
900 tasks | 2589.81 | 2572.66 | 2297.58 | 2039.76 | 1800.86
950 tasks | 2706.68 | 2688.90 | 2402.15 | 2130.64 | 1882.23
1000 tasks | 2812.55 | 2795.91 | 2494.20 | 2215.36 | 1957.13
Table 6. Energy consumption of small datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
100 tasks | 878.45 | 867.12 | 780.36 | 695.84 | 607.34
150 tasks | 1386.74 | 1373.19 | 1235.25 | 1090.46 | 951.77
200 tasks | 1832.58 | 1818.43 | 1632.41 | 1440.55 | 1272.90
250 tasks | 2407.21 | 2391.29 | 2130.84 | 1896.72 | 1675.60
300 tasks | 2990.80 | 2972.17 | 2649.12 | 2358.96 | 2070.52
350 tasks | 3697.45 | 3677.84 | 3276.42 | 2912.88 | 2560.49

Table 7. Energy consumption of medium datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
400 tasks | 4431.90 | 4417.26 | 3925.44 | 3480.27 | 3091.08
450 tasks | 5170.31 | 5154.20 | 4574.38 | 4056.61 | 3597.74
500 tasks | 6031.55 | 6016.37 | 5338.42 | 4728.66 | 4201.38
550 tasks | 6863.93 | 6846.55 | 6081.15 | 5380.42 | 4782.54
600 tasks | 7774.98 | 7756.87 | 6882.77 | 6089.62 | 5425.81
650 tasks | 8704.77 | 8685.21 | 7701.36 | 6822.43 | 6074.64

Table 8. Energy consumption of large datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
700 tasks | 9740.88 | 9728.33 | 8705.64 | 7658.22 | 6809.73
750 tasks | 10,693.75 | 10,681.02 | 9556.41 | 8402.15 | 7467.54
800 tasks | 11,708.17 | 11,694.80 | 10,463.72 | 9204.86 | 8176.36
850 tasks | 12,844.17 | 12,829.36 | 11,475.23 | 10,100.37 | 8974.15
900 tasks | 13,894.74 | 13,878.21 | 12,405.64 | 10,935.72 | 9704.25
950 tasks | 15,080.95 | 15,063.22 | 13,463.88 | 11,807.46 | 10,544.25
1000 tasks | 16,188.41 | 16,170.07 | 14,471.23 | 12,649.38 | 11,319.15
Table 9. Cost of small datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
100 tasks | 7.20 | 6.50 | 6.18 | 5.91 | 5.74
150 tasks | 11.60 | 10.50 | 10.08 | 9.78 | 9.58
200 tasks | 15.00 | 13.80 | 13.24 | 13.00 | 12.80
250 tasks | 19.60 | 18.10 | 17.35 | 17.00 | 16.68
300 tasks | 25.00 | 23.10 | 22.18 | 21.30 | 20.66
350 tasks | 31.50 | 29.10 | 27.64 | 26.50 | 25.54

Table 10. Cost of medium datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
400 tasks | 37.90 | 35.60 | 33.20 | 31.10 | 30.53
450 tasks | 44.80 | 42.50 | 39.60 | 36.40 | 35.83
500 tasks | 51.90 | 49.10 | 45.80 | 42.30 | 41.92
550 tasks | 59.00 | 56.50 | 52.20 | 48.50 | 48.03
600 tasks | 66.20 | 63.10 | 58.60 | 54.70 | 54.25
650 tasks | 73.50 | 70.20 | 65.10 | 60.80 | 60.32

Table 11. Cost of large datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
700 tasks | 83.00 | 78.50 | 73.90 | 69.20 | 68.07
750 tasks | 90.80 | 85.90 | 81.40 | 75.60 | 74.61
800 tasks | 99.20 | 93.70 | 88.80 | 82.10 | 81.64
850 tasks | 108.00 | 101.50 | 96.50 | 89.30 | 89.83
900 tasks | 117.00 | 109.80 | 104.10 | 96.30 | 97.25
950 tasks | 126.00 | 118.90 | 111.80 | 103.70 | 105.51
1000 tasks | 135.00 | 127.00 | 119.50 | 111.10 | 112.90
Table 12. Reliability of small datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
100 tasks | 0.8600 | 0.9100 | 0.9400 | 0.9650 | 0.9849
150 tasks | 0.8590 | 0.9090 | 0.9390 | 0.9640 | 0.9846
200 tasks | 0.8580 | 0.9080 | 0.9380 | 0.9630 | 0.9842
250 tasks | 0.8570 | 0.9070 | 0.9370 | 0.9620 | 0.9834
300 tasks | 0.8560 | 0.9060 | 0.9360 | 0.9610 | 0.9830
350 tasks | 0.8550 | 0.9050 | 0.9350 | 0.9600 | 0.9823

Table 13. Reliability of medium datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
400 tasks | 0.8540 | 0.9040 | 0.9330 | 0.9580 | 0.9818
450 tasks | 0.8530 | 0.9030 | 0.9320 | 0.9570 | 0.9817
500 tasks | 0.8520 | 0.9020 | 0.9310 | 0.9560 | 0.9813
550 tasks | 0.8510 | 0.9010 | 0.9300 | 0.9550 | 0.9805
600 tasks | 0.8500 | 0.9000 | 0.9290 | 0.9540 | 0.9802
650 tasks | 0.8490 | 0.8990 | 0.9280 | 0.9530 | 0.9896

Table 14. Reliability of large datasets.

Datasets | ACO | PSO | I-FASC | M2MPA | SA
700 tasks | 0.8480 | 0.8980 | 0.9260 | 0.9580 | 0.9889
750 tasks | 0.8470 | 0.8970 | 0.9250 | 0.9570 | 0.9886
800 tasks | 0.8460 | 0.8960 | 0.9240 | 0.9560 | 0.9883
850 tasks | 0.8450 | 0.8950 | 0.9230 | 0.9550 | 0.9875
900 tasks | 0.8440 | 0.8940 | 0.9220 | 0.9540 | 0.9873
950 tasks | 0.8430 | 0.8930 | 0.9210 | 0.9530 | 0.9860
1000 tasks | 0.8420 | 0.8920 | 0.9200 | 0.9520 | 0.9862
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
