Sensors
  • Article
  • Open Access

5 January 2026

Sensor Driven Resource Optimization Framework for Intelligent Fog Enabled IoHT Systems

1 School of Computing, Gachon University, Seongnam 13120, Republic of Korea
2 Department of Computer Software Engineering, University of Engineering & Technology, Mardan 23200, Pakistan
3 Cybersecurity and Computing Systems Research Group, Department of Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK
4 Cybersecurity and Computer Science, School of Science and Technology, The American College of Greece, 6 Gravias Str., Aghia Paraskevi, 153 42 Athens, Greece
This article belongs to the Section Sensor Networks

Abstract

Fog computing has revolutionized service delivery by providing resources close to the user premises, which reduces communication latency for many real-time applications. This latency has been a major constraint in cloud computing and ultimately causes user dissatisfaction due to slow response times. Applications such as smart transportation, smart healthcare, smart cities, smart farming, video surveillance, and virtual and augmented reality are delay-sensitive and require quick response times. A delayed response in certain critical healthcare applications may cause serious harm to patients. Therefore, by leveraging fog computing, a substantial portion of healthcare-related computational tasks can be offloaded to nearby fog nodes. This localized processing significantly reduces latency and enhances system availability, making it particularly advantageous for time-sensitive and mission-critical healthcare applications. Due to its close proximity to end users, fog computing is considered the most suitable computing platform for real-time applications. However, fog devices are resource constrained and require proper resource management techniques for efficient resource utilization. This study presents an optimized resource allocation and scheduling framework for delay-sensitive healthcare applications using a Modified Particle Swarm Optimization (MPSO) algorithm. Using the iFogSim toolkit, the proposed technique was evaluated through extensive simulations in terms of system response time, execution cost and execution time. Experimental results demonstrate that the MPSO-based method reduces makespan by up to 8% and execution cost by up to 3% compared to existing metaheuristic algorithms, highlighting its effectiveness in enhancing overall fog computing performance for healthcare systems.

1. Introduction

Cloud computing provides computation, storage and network services to customers over a wide geographical area. However, the centralized architecture and the high volume of user requests not only result in higher bandwidth consumption but also increase latency. This increase in service delivery or communication latency is due to the centralized nature of the cloud and the multi-hop distance between core data centers and end users. Real-time applications like smart manufacturing, smart transportation and smart healthcare require immediate response times. However, the cloud is unable to provide quick response times, which is unacceptable for real-time delay-sensitive applications [1]. Therefore, there is an immense need to resolve this issue and provide cloud resources at the edge of the network to avoid delay.
Fog computing is an extension of cloud computing in which resources are provided very close to the edge network to reduce delay and network traffic. The basic aim is to minimize latency and energy consumption and to increase throughput. The rapid increase in IoT devices and in the data they generate for computation makes this more challenging. It is estimated that the growing number of devices connected to the Internet globally will generate 73 zettabytes of data by 2025 [2]. The increasing volume of user requests has compelled academia and industry to explore and develop potential solutions for real-time applications. Under the given circumstances, fog computing is the most suitable solution. However, fog devices are resource constrained. To effectively handle incoming user requests, there is a critical need for efficient resource allocation, task scheduling, and load balancing mechanisms. These are essential to maximize the utilization of fog resources and minimize application latency, particularly for time-sensitive healthcare applications [3]. We contextualize our work within recent advances in system resilience, including moving target defense strategies based on game theory and Proactive Internet of Things (PIoT) defense frameworks. These approaches demonstrate the benefits of dynamic, adaptive mechanisms for reducing attack surfaces and enhancing IoT system reliability, highlighting the importance of proactive security in distributed network environments.
This study presents a novel resource management technique using Modified Particle Swarm Optimization to tackle the aforementioned challenges associated with the fog computing paradigm. The proposed technique is based on efficient resource allocation and task scheduling, optimizing system performance in terms of response time, execution cost and execution time. The contributions of this paper are as follows:
  • A three-tier architectural framework integrating cloud, fog, and IoT layers is proposed as a strategic approach for delivering reliable, low-latency, and intelligent healthcare services.
  • An optimal resource allocation and task scheduling strategy is proposed using a Modified Particle Swarm Optimization technique for delay-sensitive healthcare applications. The proposed method enhances the resource utilization of fog devices while minimizing makespan and execution cost as key parameters.
  • The proposed technique is analyzed and evaluated through several experimental iterations to confirm its effectiveness in the given scenario.
The remainder of the paper is organized as follows: Section 2 overviews the relevant literature and identifies the current gaps in the context of this work. Section 3 presents the proposed three-tier architectural framework model. Section 4 illustrates the resource management and scheduling issue in fog computing. Section 5 exhibits the system design and optimization modeling. Section 6 describes the configuration and system setup, Section 7 reports the obtained results and compares them with existing state-of-the-art work, and Section 8 concludes the research work and highlights future directions.

3. Proposed Architectural Framework Model

A three-layer hierarchical architecture integrating IoT, fog, and cloud layers is designed, as illustrated in Figure 1, to enable efficient coordination and resource management across the computing continuum.
Figure 1. Overall fog computing architecture.

3.1. Cloud Tier

The cloud layer constitutes the highest and most resource-intensive tier within the fog computing hierarchy. It comprises large-scale, centralized data centers equipped with extensive computational, storage and analytical capabilities. This tier is primarily responsible for handling complex, latency-tolerant and data-intensive operations that exceed the processing capacities of the edge and fog layers, thereby ensuring global coordination, long-term data management and large-scale service orchestration across the distributed computing continuum. These data centers are centrally controlled but geographically distributed across a wide network. As a result, real-time requests must travel multiple hops to reach their destination, even though they require rapid response times upon arrival [24]. This distance creates communication latency between a user request and the response from the cloud server, which is unacceptable for real-time IoT applications [25].

3.2. Fog Tier

The fog layer plays a crucial role in the multi-layer architectural framework because cloud computing frequently introduces significant communication latency due to its widely geographically distributed end points and multi-hop distance. Real-time applications therefore need an immediate response, even for complex problems. The efficient use of fog resources requires proper resource management, which is only achievable through an optimization technique.

3.3. IoT Tier

The IoT tier forms the lowest layer in the fog computing architecture and comprises sensors, actuators, and smart devices. These sensors include medical, temperature, light, humidity, aerial, and pollution sensors. IoT devices generate real-time requests and transmit them to nearby fog nodes for prompt processing, helping to minimize latency [26]. In turn, fog-based IoT facilitates end-to-end service delivery with rapid response times and improved quality of service (QoS) [27] and provides better and more secure monitoring services [28] for real-time IoT applications [29].
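To make the tiered organization of Figure 1 concrete, the following minimal Python sketch models the three layers as typed nodes in a topology. The class names, fields, and numeric values are illustrative assumptions introduced here for explanation; they are not part of the paper's implementation.

```python
# Hypothetical data model of the three-tier hierarchy (IoT -> fog -> cloud).
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Tier(Enum):
    IOT = 1    # sensors/actuators generating real-time requests
    FOG = 2    # resource-constrained nodes close to the data sources
    CLOUD = 3  # centralized, resource-rich data centers

@dataclass
class Node:
    name: str
    tier: Tier
    mips: float               # processing capacity (million instructions per second)
    uplink_latency_ms: float  # one-way latency to the next tier (illustrative)

@dataclass
class Topology:
    nodes: List[Node] = field(default_factory=list)

    def nodes_in(self, tier: Tier) -> List[Node]:
        """Return all nodes belonging to the given tier."""
        return [n for n in self.nodes if n.tier == tier]

# Example: an IoT device attaches to a nearby fog gateway, which connects to the cloud.
topology = Topology([
    Node("medical_sensor_1", Tier.IOT, mips=100, uplink_latency_ms=2.0),
    Node("fog_gateway_1", Tier.FOG, mips=4000, uplink_latency_ms=20.0),
    Node("cloud_dc", Tier.CLOUD, mips=50000, uplink_latency_ms=0.0),
])
print([n.name for n in topology.nodes_in(Tier.FOG)])
```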

4. Efficient Resource Allocation and Scheduling Problem in Fog Computing

Resource management plays a pivotal role in the efficiency and effectiveness of any computing paradigm. It encompasses several key components, including the following:
Resource Allocation: The dynamic provisioning of computational resources such as CPU cycles, memory, storage and network bandwidth according to task-specific demands and predefined system policies to achieve optimal utilization.
Load Balancing: The equitable distribution of computational workloads across available nodes to mitigate bottlenecks, avoid resource overloading and sustain overall system efficiency and responsiveness.
Task Scheduling: Determining the optimal execution location and timing for each task, whether processed locally or offloaded to fog, edge or cloud servers. This decision is guided by multiple factors, including latency constraints, energy efficiency and the current state of resource availability.
Reliability: Ensuring the robustness and continuity of system operations by maintaining fault tolerance, consistency and service availability, even under component failures or network disruptions.
Scalability: The system’s capability to accommodate increasing workloads and expanding infrastructure while maintaining performance stability and functional integrity.
A dependable and satisfying user experience fundamentally relies on the system’s ability to maintain consistent performance and service quality. This is achieved through stringent adherence to Quality of Service (QoS) requirements, which ensure timely task execution, minimal latency and uninterrupted service delivery. To meet these QoS objectives, operational costs must be optimized through intelligent resource utilization and energy-aware management strategies. In turn, such optimization contributes to higher computational efficiency by minimizing resource idleness, avoiding overload conditions and maximizing throughput. Consequently, the overarching principle of effective resource management serves as the foundation for achieving cost efficiency, performance optimization and sustainable reliability across distributed computing environments [30] as shown in Figure 2.
Figure 2. Resource management techniques.
Efficient resource management continues to be a critical concern across various computing paradigms. In cloud environments, the centralized architecture often leads to substantial bandwidth consumption and network congestion, as large volumes of user-generated data must traverse long communication paths. This centralized dependency introduces considerable latency, rendering cloud systems less suitable for delay-sensitive real-time applications [31]. In contrast, fog computing alleviates such latency by bringing computation closer to the data sources. However, it introduces its own set of challenges. The decentralized and resource-constrained nature of fog nodes, characterized by limited processing power, memory and storage, makes effective resource orchestration more complex. Consequently, achieving secure, adaptive and efficient resource management within fog infrastructures is essential to ensure consistent performance and reliability under highly dynamic and unpredictable workloads [29].
Similarly, incorporating a security perspective into the resource management framework is crucial because critical clinical data must be protected during acquisition, processing and transmission. Data protection mechanisms need to be employed across the three-tier architecture, and their impact on system latency and energy consumption must be evaluated. Secure task offloading and secure transmission are particularly important in a distributed fog architecture and require verifiable authentication of the originating device. These security measures help to protect critical healthcare data.
In this study, a meta-heuristic driven framework is introduced to enhance the efficiency of resource management and task scheduling within fog computing environments. Rather than focusing solely on static allocation policies, the proposed method employs adaptive optimization techniques to intelligently balance workloads and allocate computational resources in response to dynamic network conditions. The underlying goal is to achieve a more efficient utilization of fog infrastructure while indirectly reducing critical performance indicators such as makespan, execution cost and overall task completion time. By harmonizing these objectives, the proposed strategy contributes to improved system responsiveness, reduced operational overhead and sustained service quality across heterogeneous fog nodes.

5. Problem Formulation and Optimization Modeling

This work introduces a resource management framework tailored for latency-sensitive healthcare applications operating within fog-enabled environments. By exploiting the spatial closeness of fog nodes to end-user medical devices, the proposed architecture significantly minimizes communication delays and facilitates real-time processing of critical healthcare data. To make the system capable of operating and utilizing resources efficiently, the following conditions must be met:
  • To minimize key system performance parameters in terms of response time, execution cost and execution time.
  • To maximize system resource utilization and ensure system reliability.
  • To meet the quality of service (QoS) constraints pertaining to healthcare applications.

5.1. Optimization Model

The proposed architectural framework model is assessed using an optimization function. The optimization function evaluates key performance parameters, namely the makespan and execution cost of the participating nodes, to show that the proposed algorithm performs successfully in the given scenario.
$\min \sum_{n=1}^{N} \left( \mathrm{Makespan} + \mathrm{ExecutionCost} \right)$ (1)
s.t.
$CU_r + T_{load} \le MU_r, \quad \forall n \in N_k$ (2)
$UT_{im} \le R_m$ (3)
$K_{n,i} \in \{0, 1\}, \quad \forall i \in T_n$ (4)
Equation (1) formulates the joint optimization objective aimed at minimizing both execution time and computational cost. Equation (2) defines the feasibility constraint, ensuring that the cumulative load on a fog node, after incorporating new task requests, does not exceed its maximum resource capacity; only then is task execution permitted. Equation (3) introduces the optimization constraint associated with tasks that remain unscheduled within the system. Equation (4) characterizes the process of task offloading from end devices to fog nodes, representing the decision mechanism for distributed computation. Finally, Equation (5) delineates the binary offloading decision variable, where a value of 0 indicates that the task is offloaded to fog nodes for near-edge processing, while a value of 1 denotes that the task is allocated to cloud servers for high-complexity computation and long-term data management.
$t_{n,i} = \begin{cases} 0, & |w| = f \\ 1, & |w| = c \end{cases}$ (5)
Here, $t_{n,i} = 0$ denotes that the $i$th task is allocated to the available fog nodes for immediate and latency-sensitive execution. Conversely, $t_{n,i} = 1$ signifies that the task is offloaded to the cloud layer, where it is processed for large-scale analytics and long-term storage purposes. In the proposed intelligent healthcare framework, the fog layer is prioritized as the primary computational tier, given its proximity to data sources and its ability to ensure low-latency processing for time-critical healthcare applications. The variable $w$ denotes the type of computational node participating in the task execution process. The primary contribution of the proposed work is performance-centric optimization for fog-enabled IoHT systems. The security considerations discussed in Section 4 highlight essential operational constraints that directly influence resource availability, task execution delay and execution cost. In particular, mechanisms such as encryption, authentication and secure data transmission introduce additional computational and network overhead that can affect the task scheduling and offloading dynamics modeled in our framework. Although these security factors are not explicitly embedded in the present mathematical formulation, they provide a foundation for extending the proposed MPSO-based optimization framework into a security-aware multi-objective model.
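As an illustration of how the capacity constraint in Equation (2) interacts with the binary decision in Equation (5), the Python sketch below admits a task on a fog node only if the projected utilization stays within the threshold and otherwise offloads it to the cloud. The Task/FogNode classes, the threshold value of 0.9, and the treatment of load in million instructions are assumptions made for demonstration, not the paper's exact model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    mi: float                 # task length in million instructions (R_m, assumed unit)

@dataclass
class FogNode:
    mips: float               # capacity C_r
    load_mi: float = 0.0      # currently assigned work (T_load)
    max_util: float = 0.9     # utilization threshold MU_r (assumed value)

def offload_decision(task: Task, fog_nodes: List[FogNode]) -> int:
    """Return t_{n,i} = 0 if some fog node satisfies Equation (2)
    (current load plus the new task stays within MU_r), else 1 (cloud)."""
    for node in fog_nodes:
        projected_util = (node.load_mi + task.mi) / node.mips
        if projected_util <= node.max_util:
            node.load_mi += task.mi
            return 0          # execute on a fog node
    return 1                  # offload to the cloud tier

# Example usage with two lightly loaded fog nodes.
nodes = [FogNode(mips=2000.0), FogNode(mips=3000.0)]
print(offload_decision(Task(mi=1500.0), nodes))   # -> 0 (fits on a fog node)
```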

5.2. Proposed MPSO Modifications

We propose a Modified Particle Swarm Optimization (MPSO) algorithm for IoHT task scheduling in hierarchical fog environments. The modifications aim to minimize system makespan and execution cost while improving convergence rate as described in Equation (1).

5.2.1. Customized Fitness Function

The fitness function jointly optimizes system makespan and execution cost and is given below.
$F_i = \alpha \, \dfrac{\mathrm{Makespan}_i}{\mathrm{Makespan}_{\max}} + \beta \, \dfrac{\mathrm{ExecutionCost}_i}{\mathrm{ExecutionCost}_{\max}},$
where $\mathrm{Makespan}_i$ and $\mathrm{ExecutionCost}_i$ denote the makespan and execution cost of particle $i$, $\mathrm{Makespan}_{\max}$ and $\mathrm{ExecutionCost}_{\max}$ are their respective maxima, and $\alpha + \beta = 1$ to balance the objectives.
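A direct transcription of the fitness function is sketched below; the default equal weighting $\alpha = \beta = 0.5$ is an assumption, since the formulation only requires $\alpha + \beta = 1$.

```python
def fitness(makespan_i: float, cost_i: float,
            makespan_max: float, cost_max: float,
            alpha: float = 0.5, beta: float = 0.5) -> float:
    """Normalized weighted fitness F_i of particle i; lower values are better."""
    assert abs(alpha + beta - 1.0) < 1e-9, "alpha and beta must sum to 1"
    return alpha * makespan_i / makespan_max + beta * cost_i / cost_max

# Example: a particle whose makespan and cost are 60% and 40% of the swarm maxima.
print(fitness(makespan_i=30.0, cost_i=4.0, makespan_max=50.0, cost_max=10.0))  # 0.5
```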

5.2.2. Adaptive Inertia Weight

A time-varying inertia weight balances exploration and exploitation with the equation given below.
$w(t) = w_{\max} - \dfrac{w_{\max} - w_{\min}}{T_{\max}} \cdot t,$
where $t$ is the current iteration, $w_{\max}$ and $w_{\min}$ are the maximum and minimum inertia weights, and $T_{\max}$ is the maximum iteration count.
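The adaptive inertia weight translates directly into code, as in the short sketch below; the defaults $w_{\max} = 0.9$ and $w_{\min} = 0.4$ are common PSO settings assumed here rather than values stated in the paper.

```python
def inertia_weight(t: int, t_max: int, w_max: float = 0.9, w_min: float = 0.4) -> float:
    """Linearly decaying inertia weight w(t): large early (exploration),
    small late (exploitation)."""
    return w_max - (w_max - w_min) / t_max * t

print(inertia_weight(0, 100), inertia_weight(100, 100))   # approximately 0.9 and 0.4
```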

5.2.3. Modified Velocity and Position Updates

The velocity and position updates are modified as
$v_i(t+1) = w(t)\, v_i(t) + c_1 r_1 \left( p_i^{\mathrm{best}} - x_i(t) \right) + c_2 r_2 \left( g^{\mathrm{best}} - x_i(t) \right) + \gamma \Delta_i,$
$x_i(t+1) = x_i(t) + v_i(t+1),$
with $c_1, c_2$ as acceleration coefficients, $r_1, r_2 \sim U(0, 1)$, and $\Delta_i = \eta \left( x_{\mathrm{rand}} - x_i(t) \right)$ adding a controlled disturbance to avoid local optima.
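The modified update rule, including the perturbation term $\Delta_i$, can be sketched as follows. The coefficient values ($c_1$, $c_2$, $\gamma$, $\eta$) and the way $x_{\mathrm{rand}}$ is drawn are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_particle(x, v, p_best, g_best, w_t,
                    c1=2.0, c2=2.0, gamma=0.1, eta=0.5):
    """One MPSO velocity/position update with a controlled disturbance term."""
    r1, r2 = rng.random(), rng.random()
    x_rand = x + rng.normal(size=np.shape(x))   # random reference position (assumed)
    delta = eta * (x_rand - x)                  # Delta_i = eta * (x_rand - x_i(t))
    v_new = (w_t * v
             + c1 * r1 * (p_best - x)
             + c2 * r2 * (g_best - x)
             + gamma * delta)
    return x + v_new, v_new                     # x_i(t+1), v_i(t+1)

# Example with a 3-dimensional particle.
x, v = np.zeros(3), np.zeros(3)
x_new, v_new = update_particle(x, v, p_best=np.ones(3), g_best=np.full(3, 2.0), w_t=0.9)
print(x_new)
```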

5.2.4. Impact on Convergence

The proposed modifications improve convergence as follows:
  • Fitness normalization balances makespan and execution cost.
  • The adaptive inertia weight provides a smooth transition from exploration to exploitation.
  • The velocity disturbance term allows particles to escape local minima.
The novelty of the proposed MPSO lies in its problem-specific algorithmic design, which integrates IoHT contextual information directly into particle behavior, rather than relying on generic PSO parameter adjustments. In particular, MPSO employs a sensor-driven adaptive search mechanism in which inertia and velocity components dynamically respond to real-time variations in task heterogeneity and fog-node workload, enabling the search process to adapt to rapidly shifting IoHT conditions. Furthermore, the algorithm introduces a topology-aware multi-region exploration strategy that coordinates particle movements across heterogeneous fog nodes with differing capacities and communication delays, which is not present in conventional PSO variants. This is complemented by a context-sensitive perturbation scheme, where repulsion strength is modulated by local congestion levels and execution cost gradients, allowing more effective escape from local optima in nonlinear resource landscapes. These domain-oriented mechanisms collectively establish MPSO as a distinctive optimization framework tailored to fog-enabled IoHT environments, beyond incremental modifications of classical PSO.

5.3. Algorithmic Process

Algorithm 1 presents the efficient resource allocation and scheduling strategy, where tasks are offloaded to fog devices at multiple levels. The swarm optimization technique has two main characteristics: position, denoted as $(x_1, x_2, \ldots, x_n)$, and velocity, denoted as $(v_1, v_2, \ldots, v_n)$. In the process, each particle represents a candidate mapping of tasks to fog nodes and identifies the optimal solution using its local and global neighborhoods, respectively. Therefore, the proposed algorithm consists of a multi-level task offloading strategy that assigns tasks to appropriate fog devices with immediate response times.
Figure 3 shows the proposed algorithm for the three-tier framework. The IoT user sends requests to the fog broker, which manages the fog device resources as well as the requests coming from the IoT layer. The fog broker schedules the tasks to appropriate fog devices and ensures their timely execution. The fog broker monitors and controls the overall process continuously and checks the status of task completion. The results are sent back to the end users.
Figure 3. Sequence diagram of proposed MPSO.
The complexity analysis of the proposed algorithm reveals that, in the best-case scenario, it exhibits $\theta(n)$ time complexity, as the algorithm behaves linearly. This occurs when the condition in the outer loop is consistently false, resulting in minimal iterations. In the worst-case scenario, the outer loop performs the maximum number of iterations, leading to a correspondingly higher time complexity.
Algorithm 1 Enhanced Modified Particle Swarm Optimization (E-MPSO) for Intelligent Resource Allocation and Task Scheduling
Require:
  • Task set T = {T_1, T_2, ..., T_m}
  • Fog device set F = {F_1, F_2, ..., F_k}
  • Resource capacity C_r, utilization threshold MU_r
  • Task requirements R_m, desired task MIPS UT_im
Ensure: Optimal mapping of tasks to fog devices to minimize makespan and execution cost
▹ Stage 1: Initialization
1: Initialize population of particles representing task-device mappings
2: Assign random positions and velocities to each particle
3: Set individual best positions P_best and global best G_best
4: Initialize resource utilization matrix U for all F_k
▹ Stage 2: Feasibility Evaluation
5: for each task T_i ∈ T do
6:     for each fog node F_j ∈ F do
7:         if (C_r(F_j) + R_m(T_i)) ≤ MU_r(F_j) then
8:             Compute task execution time E_ij = R_m(T_i) / C_r(F_j)
9:             Compute energy cost EC_ij and communication delay D_ij
10:            Evaluate fitness f_ij = ω_1 E_ij + ω_2 EC_ij + ω_3 D_ij
11:        else
12:            Mark F_j as overloaded
13:        end if
14:    end for
15: end for
▹ Stage 3: Particle Update
16: for each particle p do
17:    Update velocity: v_p(t+1) = ω v_p(t) + c_1 r_1 (P_best,p − x_p) + c_2 r_2 (G_best − x_p)
18:    Update position: x_p(t+1) = x_p(t) + v_p(t+1)
19:    Re-evaluate fitness for updated positions
20:    Update P_best,p and G_best based on fitness improvement
21: end for
▹ Stage 4: Task Scheduling and Execution
22: for each task T_i do
23:    Select fog node F_opt with minimal fitness value
24:    Allocate T_i → F_opt
25:    Execute T_i and update utilization C_r(F_opt)
26: end for
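For orientation, a compact, runnable Python sketch of the overall loop is given below. It follows the stages of Algorithm 1 but simplifies Stage 2: instead of explicitly marking overloaded nodes, load is folded into the fitness evaluation. All problem sizes, parameter values and the cost model are illustrative assumptions; the paper's evaluation is carried out in iFogSim.

```python
# Illustrative end-to-end sketch of Algorithm 1 (E-MPSO). Each particle encodes a
# task-to-fog-node mapping as a vector of node indices.
import numpy as np

rng = np.random.default_rng(42)

M, K, T_MAX, POP = 50, 6, 100, 20            # tasks, fog nodes, iterations, particles
task_mi = rng.uniform(500, 3000, M)          # task lengths R_m in MI (assumed)
node_mips = rng.uniform(1000, 4000, K)       # fog capacities C_r (assumed)
cost_per_mi = rng.uniform(0.001, 0.003, K)   # execution cost rate in G$/MI (assumed)

def evaluate(mapping):
    """Fitness of one mapping: normalized makespan plus normalized execution cost."""
    finish = np.zeros(K)
    cost = 0.0
    for i, j in enumerate(mapping):
        finish[j] += task_mi[i] / node_mips[j]   # accumulated execution time per node
        cost += task_mi[i] * cost_per_mi[j]
    return 0.5 * finish.max() / 60.0 + 0.5 * cost / 10.0   # assumed normalization

decode = lambda p: np.clip(p.astype(int), 0, K - 1)        # continuous -> node index

# Stage 1: initialization
pos = rng.random((POP, M)) * K
vel = rng.normal(0.0, 1.0, (POP, M))
fit = np.array([evaluate(decode(p)) for p in pos])
p_best, p_fit = pos.copy(), fit.copy()
g_best = pos[fit.argmin()].copy()

# Stage 3: iterative particle updates (Stage 2 is folded into evaluate() here)
for t in range(T_MAX):
    w = 0.9 - (0.9 - 0.4) * t / T_MAX                      # adaptive inertia weight
    r1, r2 = rng.random((POP, M)), rng.random((POP, M))
    vel = w * vel + 2.0 * r1 * (p_best - pos) + 2.0 * r2 * (g_best - pos)
    pos = pos + vel
    fit = np.array([evaluate(decode(p)) for p in pos])
    improved = fit < p_fit
    p_best[improved], p_fit[improved] = pos[improved], fit[improved]
    g_best = p_best[p_fit.argmin()].copy()

# Stage 4: task scheduling from the global best mapping
schedule = decode(g_best)
print("best fitness:", round(float(p_fit.min()), 4))
print("tasks per fog node:", np.bincount(schedule, minlength=K))
```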

6. Configuration and System Setup Details

To make the framework model effective, the following configurations and system setups were used to obtain the reported results.

Configuration Setup

The experimental operations were performed using the iFogSim toolkit, which is primarily concerned with the modeling and simulation of resource management techniques. The accuracy and reliability of the obtained results depend on the strategies defined by the simulation parameters listed in Table 1. This article establishes a well-defined simulation environment in which the proposed algorithm is executed to obtain the desired results in terms of makespan and execution cost under computationally constrained conditions. The experimental setup details for running the proposed algorithm are outlined in Table 2.
Table 1. Simulation configuration parameters.
Table 2. System setup details.

7. Results and Discussion

7.1. Analysis of Execution Time

Figure 4 shows the execution time performance of the MPSO algorithm across various fog device setups and task volumes. Optimal scenarios with smooth convergence lead to significantly reduced execution times, reflecting effective resource-task alignment. Conversely, improper resource allocation results in longer execution times. The reported results are derived from extensive simulations that exercise the system nodes and measure task execution time, assessing system speed and stability under the efficient resource allocation strategy.
Figure 4. Execution time analysis of proposed MPSO algorithm.

7.2. Analysis of Makespan

The system’s crucial performance parameter, i.e., the system makespan, is continuously observed over several simulation runs, and its performance stability is carefully checked. The obtained results demonstrate that the performance of the proposed technique is significantly better than that of other state-of-the-art algorithms. These results for real-time applications are due to the efficient resource allocation and task scheduling strategy and are largely attributable to the introduction of a multi-level task execution strategy, which distinguishes the algorithm from its counterparts. Furthermore, the algorithm optimally exploits the available fog computing resources, thereby achieving a substantial reduction in the total makespan, as depicted in Table 3.
Table 3. Makespan analysis of different algorithms.
Figure 5 shows the results obtained when the proposed technique was executed over a number of iterations and indicates that the proposed algorithm outperformed the others due to its low computational time and fast convergence rate. Furthermore, the results indicate that the proposed technique maintains stable performance even when fog nodes exhibit varying resource capacities under heterogeneous conditions, whereas under homogeneous conditions all fog nodes have identical computing and communication capabilities. The results show a significant reduction in makespan in the heterogeneous environment due to the adaptive particle update mechanism, which enables efficient task-node matching even when resource disparity exists. These results confirm the robustness and applicability of MPSO in diverse heterogeneous IoHT environments.
Figure 5. Performance result of the makespan (s).

7.3. Analysis of Execution Cost

The execution cost measures the system’s computational performance per unit cost and is usually expressed in grid dollars (G$). A higher execution cost indicates increased system overhead, whereas lower cost values reflect better system performance, as shown in Table 4.
Table 4. Execution cost analysis of different algorithms.
The proposed technique performed better than the other techniques for task counts of 400 and above. This is due to the stability of the algorithm and its fast convergence rate, as shown in Figure 6.
Figure 6. Performance result of the execution cost (G$).

7.4. Analysis of Mixed Healthcare Workloads

Figure 7 demonstrates that the proposed MPSO scheduling technique provides a scalable and robust solution for delay-sensitive healthcare workloads in fog-enabled IoHT environments. It consistently achieves lower latency across various workloads, such as emergency, high-priority and routine tasks, and substantially reduces response time, most noticeably for emergency workloads. Further, our results confirm that MPSO is well suited to the timely processing required in real-world healthcare systems. The algorithm’s superior performance under heterogeneous nodes, mixed workloads and dynamic task conditions validates its adaptability and highlights its ability to optimize resource utilization without compromising service quality. These outcomes support our conclusion that MPSO offers a reliable, responsive and high-performance strategy for intelligent resource management.
Figure 7. Performance analysis of mixed healthcare workloads of various algorithms.
Figure 8 illustrates the comparison of four scheduling algorithms in terms of makespan and execution cost. The performance of each was evaluated over 30 independent simulation runs, with both metrics reported with 95% confidence intervals, ensuring statistical reliability and highlighting the variability in outcomes. The figure also demonstrates the relative efficiency of the algorithms, where lower values indicate better scheduling performance. This integrated visualization enables quick comparison of computational delay (makespan) and resource utilization (execution cost).
Figure 8. Makespan and execution cost confidence intervals of various algorithms.
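For reference, 95% confidence intervals over 30 independent runs, as reported in Figure 8, can be computed as sketched below; the run values here are synthetic placeholders and not the paper's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
makespan_runs = rng.normal(loc=120.0, scale=5.0, size=30)   # placeholder run data

mean = makespan_runs.mean()
sem = stats.sem(makespan_runs)                              # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(makespan_runs) - 1, loc=mean, scale=sem)
print(f"makespan: {mean:.2f} s, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```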

7.5. Performance Comparison

A comparative analysis of the proposed framework was conducted against a selection of recently published studies, as summarized in Table 5. The comparison assesses the key performance parameters, namely makespan, execution time and execution cost, to determine whether the proposed technique improves overall system effectiveness relative to the existing techniques reported in the literature listed in the table. The proposed tri-tier architectural framework is evaluated using real-time sensor data to measure its effect on the system performance parameters, and the obtained results make it evident that the proposed technique outperformed other meta-heuristic techniques.
Table 5. Performance comparison of the proposed work with existing literature.
It is worth noting that, although several recent DRL- and GNN-based task offloading frameworks are discussed in the related work, these methods were not incorporated as experimental baselines in this study. The primary reason is that our evaluation environment is built on iFogSim, which natively supports deterministic and metaheuristic scheduling models but does not provide stable or reproducible implementations of DRL-based schedulers. Integrating architectures such as ReinFog or DQN-based task offloading would require substantial modification of the simulator, including custom environment-agent interaction loops, online learning modules and action abstractions, which falls outside the technical scope of this optimization-oriented metaheuristic framework. For this reason, GA, IPSO, BLA and RR were selected as representative baselines within the same algorithmic family for comparison purposes, as presented in Table 3 and Table 4.

8. Conclusions

Fog computing provides resources close to edge users with minimal delay. Offloading IoT tasks to fog devices is a sustainable and effective form of fog-IoT integration. However, unlike the cloud, which offers centralized data centers with abundant computational power and storage, fog operates with limited processing power and storage capacity closer to the network edge. Because cloud servers are centralized and geographically widely distributed, they introduce communication latency. Fog, in contrast, provides low-cost, low-response-time, reliable and higher-quality services for real-time applications. To address the challenges posed by unpredictable and dynamic service requests, fog computing requires a robust resource management strategy that can efficiently utilize limited resources while minimizing latency. This article proposes an effective resource allocation and task scheduling approach based on a Modified Particle Swarm Optimization (MPSO) algorithm. The primary objective of employing MPSO is to minimize execution time, reduce makespan, and lower the overall execution cost, thereby enhancing the performance of delay-sensitive applications in fog environments.
In the future, advanced machine learning techniques for intelligent IoHT task scheduling will be explored. Deep reinforcement learning (DRL) and DQN can enable real-time dynamic task offloading, while graph neural networks can model complex device-task dependencies to enhance system efficiency, reliability, robustness and resilience. Moreover, incorporating security overhead functions, trust management and adversarial resilience metrics into both the objective formulation and the algorithmic design is a promising future direction, enabling a unified security-performance optimization strategy for next-generation fog-enabled IoHT environments.

Author Contributions

Conceptualization, S.K.; methodology, S.K.; software, S.K.; validation, S.K., W.-K.L. and I.A.S.; formal analysis, J.A.K. and A.M.; investigation, S.K.; resources, J.A.K.; data curation, S.K.; writing—original draft preparation, S.K.; writing—review and editing, N.P., J.A.K. and A.M.; visualization, S.K. and I.A.S.; supervision, W.-K.L. and J.A.K.; project administration, N.P.; funding acquisition, A.M. and N.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors acknowledge the use of generative AI tools (ChatGPT 4) and a writing assistance tool (Grammarly) to improve the writing quality of this paper. Following the use of these tools, the authors thoroughly reviewed and revised the content, and they take full responsibility for the final version of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Alarifi, A.; Abdelsamie, F.; Amoon, M. A fault-tolerant aware scheduling method for fog-cloud environments. PLoS ONE 2019, 14, e0223902.
  2. Kishor, A.; Chakarbarty, C. Task offloading in fog computing for using smart ant colony optimization. Wirel. Pers. Commun. 2022, 127, 1683–1704.
  3. Reddy, K.H.K.; Goswami, R.S.; Luhach, A.K.; Chatterjee, P.; AlNumay, M.; Roy, D.S. EFLSM: An intelligent resource manager for fog layer service management in smart cities. IEEE Trans. Consum. Electron. 2024, 70, 2281–2289.
  4. Islam, M.T.; Buyya, R. Resource management and scheduling for big data applications in cloud computing environments. In Handbook of Research on Cloud Computing and Big Data Applications in IoT; IGI Global: Hershey, PA, USA, 2019; pp. 1–23.
  5. Mouradian, C.; Naboulsi, D.; Yangui, S.; Glitho, R.H.; Morrow, M.J.; Polakos, P.A. A comprehensive survey on fog computing: State-of-the-art and research challenges. IEEE Commun. Surv. Tutor. 2017, 20, 416–464.
  6. Xu, X.; Fu, S.; Cai, Q.; Tian, W.; Liu, W.; Dou, W.; Sun, X.; Liu, A.X. Dynamic resource allocation for load balancing in fog environment. Wirel. Commun. Mob. Comput. 2018, 2018, 6421607.
  7. Waleed, S.; Ullah, I.; Khan, W.U.; Rehman, A.U.; Rahman, T.; Li, S. Resource allocation of 5G network by exploiting particle swarm optimization. Iran J. Comput. Sci. 2021, 4, 211–219.
  8. Alqahtani, F.; Amoon, M.; Nasr, A.A. Reliable scheduling and load balancing for requests in cloud-fog computing. Peer-to-Peer Netw. Appl. 2021, 14, 1905–1916.
  9. Potu, N.; Jatoth, C.; Parvataneni, P. Optimizing resource scheduling based on extended particle swarm optimization in fog computing environments. Concurr. Comput. Pract. Exp. 2021, 33, e6163.
  10. Abdulredha, M.N.; Bara, A.A.; Jabir, A.J. An evolutionary algorithm for task scheduling problem in the cloud-fog environment. J. Phys. Conf. Ser. 2021, 1963, 012044.
  11. Babar, M.; Khan, M.S.; Din, A.; Ali, F.; Habib, U.; Kwak, K.S. Intelligent computation offloading for IoT applications in scalable edge computing using artificial bee colony optimization. Complexity 2021, 2021, 5563531.
  12. Zhang, L.; Zou, Y.; Wang, W.; Jin, Z.; Su, Y.; Chen, H. Resource allocation and trust computing for blockchain-enabled edge computing system. Comput. Secur. 2021, 105, 102249.
  13. Khan, S.; Shah, I.A.; Tairan, N.; Shah, H.; Nadeem, M.F. Optimal resource allocation in fog computing for healthcare applications. CMC-Comput. Mater. Contin. 2022, 71, 6147–6163.
  14. Ahmad, S.; Kim, D. A multi-device multi-tasks management and orchestration architecture for the design of enterprise IoT applications. Future Gener. Comput. Syst. 2020, 106, 482–500.
  15. Ahmad, S.; Malik, S.; Ullah, I.; Park, D.H.; Kim, K.; Kim, D. Towards the design of a formal verification and evaluation tool of real-time tasks scheduling of IoT applications. Sustainability 2019, 11, 204.
  16. Jamil, B.; Shojafar, M.; Ahmed, I.; Ullah, A.; Munir, K.; Ijaz, H. A job scheduling algorithm for delay and performance optimization in fog computing. Concurr. Comput. Pract. Exp. 2020, 32, e5581.
  17. Baburao, D.; Pavankumar, T.; Prabhu, C. Load balancing in the fog nodes using particle swarm optimization-based enhanced dynamic resource allocation method. Appl. Nanosci. 2023, 13, 1045–1054.
  18. Wadhwa, H.; Aron, R. TRAM: Technique for resource allocation and management in fog computing environment. J. Supercomput. 2022, 78, 667–690.
  19. Kaur, M.; Aron, R. FOCALB: Fog computing architecture of load balancing for scientific workflow applications. J. Grid Comput. 2021, 19, 1–22.
  20. Khan, F.U.; Shah, I.A.; Jan, S.; Ahmad, S.; Whangbo, T. Machine learning-based resource management in fog computing: A systematic literature review. Sensors 2025, 25, 687.
  21. Ali, E.M.; Abawajy, J.; Lemma, F.; Baho, S.A. Analysis of deep reinforcement learning algorithms for task offloading and resource allocation in fog computing environments. Sensors 2025, 25, 5286.
  22. Wang, Z.; Goudarzi, M.; Buyya, R. ReinFog: A deep reinforcement learning empowered framework for resource management in edge and cloud computing environments. J. Netw. Comput. Appl. 2025, 242, 104250.
  23. Lera, I.; Guerrero, C. Multi-objective application placement in fog computing using graph neural network-based reinforcement learning. J. Supercomput. 2024, 80, 27073–27094.
  24. Aazam, M.; Islam, S.U.; Lone, S.T.; Abbas, A. Cloud of things (CoT): Cloud-fog-IoT task offloading for sustainable internet of things. IEEE Trans. Sustain. Comput. 2020, 7, 87–98.
  25. Tariq, N.; Asim, M.; Al-Obeidat, F.; Zubair Farooqi, M.; Baker, T.; Hammoudeh, M.; Ghafir, I. The security of big data in fog-enabled IoT applications including blockchain: A survey. Sensors 2019, 19, 1788.
  26. Amin, F.; Abbasi, R.; Rehman, A.; Choi, G.S. An advanced algorithm for higher network navigation in social Internet of Things using small-world networks. Sensors 2019, 19, 2007.
  27. Lin, J.; Yu, W.; Zhang, N.; Yang, X.; Zhang, H.; Zhao, W. A survey on internet of things: Architecture, enabling technologies, security and privacy, and applications. IEEE Internet Things J. 2017, 4, 1125–1142.
  28. Shruti; Rani, S. Towards security of consumer electronic devices: An advanced attribute-based encryption approach for multimodal fog computing. IEEE Trans. Consum. Electron. 2024, 71, 5792–5800.
  29. Tripathy, S.S.; Guduri, M.; Chakraborty, C.; Bebortta, S.; Pani, S.K.; Mukhopadhyay, S. An adaptive explainable AI framework for securing consumer electronics-based IoT applications in fog-cloud infrastructure. IEEE Trans. Consum. Electron. 2024, 71, 1889–1896.
  30. Tripathy, S.S.; Bebortta, S.; ul Haque, M.I.; Zhu, Y.; Gadekallu, T.R. Towards multi-modal deep learning-assisted task offloading for consumer electronic devices over an IoT-fog architecture. IEEE Trans. Consum. Electron. 2024, 70, 1656–1663.
  31. Denden, M.; Jemmali, M.; Boulila, W.; Soni, M.; Khan, F.; Ahmad, J. Clustering-based resource management for consumer cost optimization in IoT edge computing environments. IEEE Trans. Consum. Electron. 2024, 71, 5761–5769.
  32. Iftikhar, S.; Golec, M.; Chowdhury, D.; Gill, S.S.; Uhlig, S. FogDLearner: A deep learning-based cardiac health diagnosis framework using fog computing. In Proceedings of the Australasian Computer Science Week 2022, Brisbane, Australia, 14–18 February 2022; pp. 136–144.
  33. Bhatia, M. Fog computing-inspired smart home framework for predictive veterinary healthcare. Microprocess. Microsyst. 2020, 78, 103227.
  34. Mutlag, A.A.; Khanapi Abd Ghani, M.; Mohammed, M.A.; Maashi, M.S.; Mohd, O.; Mostafa, S.A.; Abdulkareem, K.H.; Marques, G.; de la Torre Díez, I. MAFC: Multi-agent fog computing model for healthcare critical tasks management. Sensors 2020, 20, 1853.
  35. Ijaz, M.; Li, G.; Wang, H.; El-Sherbeeny, A.M.; Moro Awelisah, Y.; Lin, L.; Koubaa, A.; Noor, A. Intelligent fog-enabled smart healthcare system for wearable physiological parameter detection. Electronics 2020, 9, 2015.
  36. Rajan, J.P.; Rajan, S.E.; Martis, R.J.; Panigrahi, B.K. Fog computing employed computer aided cancer classification system using deep neural network in internet of things based healthcare system. J. Med. Syst. 2020, 44, 1–10.
  37. Talaat, F.M.; Saraya, M.S.; Saleh, A.I.; Ali, H.A.; Ali, S.H. A load balancing and optimization strategy (LBOS) using reinforcement learning in fog computing environment. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 4951–4966.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
