Systematic Review

A Systematic Literature Review on Load-Balancing Techniques in Fog Computing: Architectures, Strategies, and Emerging Trends

1 Information and Computer Science Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
2 Computer Department, Applied College, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31451, Saudi Arabia
3 Computer Science Department, College of Science and Humanities, Imam Abdulrahman Bin Faisal University, P.O. Box 12020, Jubail 31961, Saudi Arabia
4 Interdisciplinary Research Centre for Intelligent Secure Systems, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
* Author to whom correspondence should be addressed.
Computers 2025, 14(6), 217; https://doi.org/10.3390/computers14060217
Submission received: 14 April 2025 / Revised: 22 April 2025 / Accepted: 23 April 2025 / Published: 2 June 2025
(This article belongs to the Special Issue Edge and Fog Computing for Internet of Things Systems (2nd Edition))

Abstract

Fog computing has emerged as a promising paradigm to extend cloud services toward the edge of the network, enabling low-latency processing and real-time responsiveness for Internet of Things (IoT) applications. However, the distributed, heterogeneous, and resource-constrained nature of fog environments introduces significant challenges in balancing workloads efficiently. This study presents a systematic literature review (SLR) of 113 peer-reviewed articles published between 2020 and 2024, aiming to provide a comprehensive overview of load-balancing strategies in fog computing. This review categorizes fog computing architectures, load-balancing algorithms, scheduling and offloading techniques, fault-tolerance mechanisms, security models, and evaluation metrics. The analysis reveals that three-layer (IoT–Fog–Cloud) architectures remain predominant, with dynamic clustering and virtualization commonly employed to enhance adaptability. Heuristic and hybrid load-balancing approaches are most widely adopted due to their scalability and flexibility. Evaluation frequently centers on latency, energy consumption, and resource utilization, while simulation is primarily conducted using tools such as iFogSim and YAFS. Despite considerable progress, key challenges persist, including workload diversity, security enforcement, and real-time decision-making under dynamic conditions. Emerging trends highlight the growing use of artificial intelligence, software-defined networking, and blockchain to support intelligent, secure, and autonomous load balancing. This review synthesizes current research directions, identifies critical gaps, and offers recommendations for designing efficient and resilient fog-based load-balancing systems.

1. Introduction

The exponential growth of Internet of Things (IoT) devices has driven demand for computational paradigms that support low-latency, real-time processing. Fog computing has emerged as a pivotal solution that bridges the gap between cloud data centers and IoT endpoints by decentralizing computing resources and bringing them closer to the data source. Unlike traditional cloud computing, which relies on centralized infrastructures, fog computing operates through geographically distributed nodes—including gateways, routers, and edge servers—which are capable of handling data locally. This proximity significantly reduces latency, improves responsiveness, and supports context-aware applications across various domains such as smart healthcare, autonomous vehicles, and industrial automation.
However, the decentralized and heterogeneous nature of fog environments introduces several technical challenges. Among them, load balancing stands out as a critical issue. Load balancing in fog computing involves the efficient distribution of tasks across available fog nodes to optimize resource utilization, reduce response time, and ensure service continuity. Unlike centralized cloud environments, fog systems must operate under constrained resources and dynamic network conditions, where nodes may frequently join or leave, workloads fluctuate, and computational capacities vary widely.
Additionally, fog systems must address the Quality of Service (QoS) demands of diverse IoT applications. For example, real-time applications such as video surveillance or telemedicine require prompt data processing, which makes intelligent and adaptive load-balancing mechanisms essential. Other applications may demand privacy-preserving features and fault-tolerant architectures to protect sensitive data and maintain operational reliability. These challenges highlight the need for robust, flexible, and context-aware load-balancing strategies in fog computing.
In light of these complexities, this study conducts a systematic literature review (SLR) to explore the state of the art in load balancing for fog computing environments. Specifically, the objectives of this review are as follows:
  • To analyze and categorize fog computing architectures and their implications for load balancing.
  • To classify load-balancing algorithms and compare their strengths and weaknesses.
  • To identify the most frequently used performance metrics and evaluation tools.
  • To highlight emerging trends, ongoing challenges, and research gaps.
This review synthesizes findings from 113 peer-reviewed studies published between 2020 and 2024. By doing so, it aims to provide a comprehensive understanding of current practices and future directions for load balancing in fog computing, with an emphasis on improving QoS, scalability, and energy efficiency in IoT-driven environments.

2. Related Works

Numerous systematic reviews and surveys have been conducted to explore load balancing in fog computing. These studies have examined various algorithmic approaches, performance metrics, and challenges. However, most of them are limited either by their scope, methodological rigor, or timeliness. This section outlines key contributions from the existing literature while highlighting their limitations and the rationale for this review.
Kashani and Mahdipour [1] conducted a systematic study categorizing load-balancing algorithms into approximate, exact, fundamental, and hybrid types. Their analysis, based on 49 papers from 2013 to 2021, emphasized response time and energy consumption as core evaluation metrics. However, the review was limited by a relatively narrow time window and a strong focus on simulation-based research, with minimal discussion on real-world applicability or emerging AI-driven methods.
Kaur and Aron [2] presented a systematic review of static and dynamic load-balancing techniques in fog computing, focusing on improving energy efficiency and resource utilization. While their taxonomy and proposed architecture were valuable, the study primarily analyzed early-stage research and did not sufficiently address modern advancements such as intelligent or hybrid algorithms.
Sadashiv [3] provided a comprehensive survey comparing various load-balancing strategies based on their impact on latency, scalability, and energy usage. Although they introduced useful classifications and performance trade-offs, the study relied heavily on theoretical models and case studies without addressing the lack of practical implementations or validation across diverse environments.
Shakeel and Alam [4] reviewed 60 heuristic, meta-heuristic, and hybrid algorithms used in both cloud and fog contexts, offering an extensive framework and classification scheme. Their analysis covered QoS indicators such as response time and resource utilization but lacked a detailed focus on fog-specific architectures and their operational constraints. Moreover, their inclusion of cloud computing diluted the specificity required for fog-centric insights.
Ebneyousef and Shirmarz [5] provided a taxonomy of load-balancing algorithms and highlighted the importance of QoS in fog computing. They underscored the need for adaptive and scalable algorithms but did not delve into algorithmic trends such as the use of reinforcement learning or blockchain integration. Additionally, their dataset was restricted to 50 articles, potentially omitting emerging contributions.
While these prior studies offer valuable insights, they exhibit several limitations, including outdated datasets, insufficient performance evaluations, limited architectural analysis, and a lack of focus on recent innovations such as AI integration, privacy-preserving strategies, and real-world deployments. Furthermore, most reviews did not categorize workload types or evaluation tools in depth. In contrast, this study presents a broad and up-to-date systematic literature review of 113 peer-reviewed articles (2020–2024). It provides the following:
  • A detailed classification of fog computing architectures and their scalability, security, and application domains.
  • A comprehensive taxonomy of load-balancing algorithms, including heuristic, meta-heuristic, ML/RL, and hybrid strategies.
  • An in-depth evaluation of performance metrics, workload types, and assessment tools.
  • A synthesis of current challenges, emerging trends, and research opportunities, including AI-driven, blockchain-based, and privacy-aware approaches.
By addressing these dimensions, this review contributes to a more complete and actionable understanding of load balancing in fog computing environments.

3. Methodology

This study follows a systematic literature review (SLR) methodology grounded in the guidelines proposed by Keele [6], with a focus on transparency, repeatability, and rigor. The methodology is structured across several stages: formulation of research questions, design of the search strategy, study selection based on inclusion and exclusion criteria, quality assessment, data extraction, and synthesis.

3.1. Research Questions

The primary objective of this SLR is to analyze the state of the art in load balancing within fog computing. To achieve this, seven research questions (RQs) were formulated:
  • RQ1: What are the various architectures of fog computing, and how do they differ in terms of functionality, scalability, and application?
  • RQ2: What types of load-balancing strategies or algorithms are applied in fog computing environments?
  • RQ2.1: What are the advantages and disadvantages of these strategies?
  • RQ3: What performance metrics are most commonly used to evaluate load-balancing algorithms in fog computing?
  • RQ4: What workload types are frequently used to evaluate load balancing in fog computing (e.g., static vs. dynamic)?
  • RQ5: What evaluation tools are commonly employed for assessing load-balancing algorithms in fog computing?
  • RQ6: What methods are used to assess load-balancing effectiveness in fog computing environments?
  • RQ7: What are the key challenges, emerging trends, and unresolved issues related to load balancing in fog computing?

3.2. Study Selection Criteria

To ensure a high-quality and relevant dataset, predefined inclusion and exclusion criteria were applied (Table 1). Only journal articles from Q1 and Q2 indexed sources, written in English, and published between 2020 and 2024 were considered. Conference papers, book chapters, and articles irrelevant to fog computing or load balancing were excluded.

3.3. Search Strategy Design

A systematic search was conducted in October 2024 across three major scientific databases: Scopus, Web of Science (WoS), and IEEE Xplore. Keywords and Boolean operators were defined based on the research questions and refined iteratively. Search string example: (“load balance*” OR “load distribution” OR “load scheduling” OR “workload distribution”) AND (“fog computing”) AND (“algorithm” OR “method” OR “strategy” OR “approach”).
This process yielded 322 articles: Scopus (80), IEEE (64), and WoS (178). After removing duplicates and applying eligibility screening, 113 studies were selected for final inclusion. Figure 1 illustrates the steps of selection for the final set of 113 studies, which were used for data extraction and synthesis.

3.4. Quality Assessment

To ensure the reliability of the selected studies, a quality assessment framework was applied using five criteria (Table 2). Each study was scored using a binary scale: Yes (1 point) and No (0 points). Only studies with a score of ≥3 were retained for synthesis.
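To make the scoring procedure concrete, the following Python sketch applies the binary scale and the ≥3 retention threshold described above. The criterion names are illustrative placeholders, not the actual items of Table 2.

```python
# Minimal sketch of the binary quality-assessment scoring (Yes = 1, No = 0).
# Criterion names are hypothetical stand-ins for the Table 2 criteria.
CRITERIA = ["clear_objectives", "described_method", "reported_metrics",
            "compared_baselines", "discussed_limitations"]

def quality_score(answers: dict) -> int:
    """Sum binary answers across the five criteria."""
    return sum(1 for c in CRITERIA if answers.get(c, False))

def retain(studies: list, threshold: int = 3) -> list:
    """Keep only studies scoring >= threshold, as in Section 3.4."""
    return [s for s in studies if quality_score(s["answers"]) >= threshold]

studies = [
    {"id": "S1", "answers": {"clear_objectives": True, "described_method": True,
                             "reported_metrics": True}},   # score 3 -> retained
    {"id": "S2", "answers": {"clear_objectives": True,
                             "discussed_limitations": True}},  # score 2 -> dropped
]
print([s["id"] for s in retain(studies)])  # ['S1']
```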

3.5. Data Extraction

A structured data extraction form was developed to systematically capture relevant information from each selected study. This included the following:
  • Architecture type, layers, and node configurations (RQ1);
  • Load-balancing strategy, classification, and brief description (RQ2);
  • Advantages and disadvantages of algorithms (RQ2.1);
  • Performance metrics and evaluation tools (RQ3–RQ5);
  • Workload types (RQ4);
  • Assessment methods (RQ6);
  • Challenges, trends, and open issues (RQ7).
Each paper was independently reviewed by two authors to ensure accuracy and consistency.

3.6. Data Synthesis

Extracted data were synthesized using quantitative tabulation and qualitative analysis and organized around each research question. Microsoft Excel was used to compile the results, create visualizations, and identify thematic patterns. Where appropriate, the results were categorized by algorithm type, metric usage frequency, architectural design, or application domain.

3.7. Threats to Validity

This review acknowledges several potential threats to validity:
  • Selection bias: To mitigate this, we used multiple databases and rigorous inclusion/exclusion criteria.
  • Publication bias: Only Q1/Q2 journal articles were selected, which may exclude the relevant gray literature.
  • Reviewer bias: Dual review and majority voting were employed to reduce subjectivity during selection and extraction.
  • Tool limitations: Some studies lacked transparency in tool usage, making cross-comparison more difficult.
To further reduce bias, we employed backward and forward snowballing techniques to identify any missing but relevant studies.

4. Results and Discussion

This section presents the findings of the systematic literature review (SLR), organized according to the research questions outlined in Section 3. Each subsection synthesizes data across the 113 selected studies and provides both quantitative and qualitative insights into the current landscape of load balancing in fog computing.

4.1. RQ1: Fog Computing Architectures

The analysis reveals that three-layer architectures (IoT–Fog–Cloud) dominate current fog computing designs, referenced in over half the studies (e.g., [7,8,9,10,11,12]). These architectures facilitate efficient task distribution by enabling low-latency processing at the fog layer and long-term storage at the cloud layer. Dynamic clustering is a recurrent feature, enhancing scalability and fault tolerance. Hierarchical architecture appears in studies such as [13,14], introducing additional fog layers to enable regional aggregation and decision-making. These designs support geographically distributed deployments, making them suitable for applications such as intelligent transportation systems. Flat (decentralized) models [15] are leveraged in highly dynamic environments where node independence is critical, such as mobile ad hoc networks. Cloud–Fog–Edge (CFE) architectures, described in [16,17], offer more modular task assignment through fine-grained control over edge and fog layers.
While often discussed in relation to edge computing, fog computing represents a broader paradigm. Edge computing typically refers to computation that takes place directly on or near the data-generating devices (e.g., sensors, gateways). In contrast, fog computing extends this model by introducing an intermediate layer of processing nodes, known as fog nodes, which may include local servers, routers, or base stations. These nodes provide storage, computation, and networking services closer to the edge but still one step removed from the data source. This hierarchical structure supports context-aware processing, improved scalability, and more granular resource management. In this review, we focus specifically on fog computing, while occasionally referencing edge computing where it overlaps with or forms part of a fog-based architecture.
Microservices-based and peer-to-peer architectures [18,19] introduce flexibility and autonomous decision-making, particularly for real-time analytics and smart home scenarios. Security, task scheduling efficiency, and fault tolerance were commonly addressed across all architectural types. Notably, studies integrating blockchain [20] and software-defined networking (SDN), such as Xia et al. [21], proposed architectures with advanced trust and resource control mechanisms.
Figure 2, Figure 3 and Figure 4 are visualizations of the results in Table A1. It can be seen in Figure 2 that most architectures have three layers, aligning with the classic IoT–Fog–Cloud hierarchy (e.g., [7,11,22,23]). This highlights the consistency in design philosophy across studies, reflecting the layered abstraction that supports scalability and modularity in fog computing environments. Meanwhile, the scatter plot (Figure 3) shows the relationship between the number of architectural layers and the number of nodes. There is no strong linear correlation between the number of layers and the number of nodes, suggesting that architectures with more layers do not necessarily imply more nodes, and the node count is more likely influenced by the deployment scope or application type (e.g., smart cities vs. industrial systems). The countplot (Figure 4) shows the distribution of cluster types. It can be seen that dynamic clustering dominates, highlighting the adaptive nature of fog computing to workload, mobility, and real-time requirements [18,24,25,26,27]. However, some studies use task-based clustering [28] or form fixed clusters, often based on geography or role (e.g., RSUs in vehicular networks [29], fog colonies [30]).
Table A1 and the related figures (Figure 2, Figure 3 and Figure 4) yield two insights regarding layering trends: (a) Three-layer architectures are the de facto standard in fog computing research. (b) Studies with more than three layers [30,31,32,33] often involve complex domains like robotics, vehicular networks, or blockchain, which necessitate more nuanced abstractions. In terms of node scale variability, node counts range from as few as 2 [22] to over 2000 [34], showing a wide span in deployment scales. This reflects fog computing’s flexibility, applicable to both local edge setups and large distributed infrastructures. Finally, cluster formation is either static (e.g., predefined groups in [9,35]) or dynamic based on proximity [36,37], workload [2,38], or node types (e.g., master–worker models in [39]). Figure 5 presents a 3D plot showing the relationships among layers, nodes, and clustering. There is no tight clustering or linear plane, suggesting that the data are widely dispersed. This reveals that architectures with few layers can still support many nodes, and systems with many nodes may use simple or complex clustering; no single formula dominates.

4.2. RQ2: Load-Balancing Strategies and Algorithms

The reviewed literature identifies nine major categories of load-balancing strategies used in fog computing, each with distinct strengths and limitations. Fundamental strategies, such as round robin and random assignment, offer low complexity but lack adaptability in dynamic environments. Exact optimization techniques like linear programming deliver optimal results but are computationally intensive and unsuitable for real-time fog scenarios. Heuristic approaches are the most widely adopted due to their flexibility and ease of implementation, although they may yield suboptimal solutions and often require system-specific tuning.
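To illustrate the distinction between a fundamental strategy and a simple heuristic, the following Python sketch contrasts round-robin assignment with a least-loaded heuristic; the node names and task costs are invented for demonstration and do not come from any surveyed study.

```python
# Sketch contrasting a fundamental strategy (round robin) with a simple
# heuristic (least-loaded node); node loads and task costs are made up.
from itertools import cycle

nodes = {"fog1": 0.0, "fog2": 0.0, "fog3": 0.0}  # node -> accumulated load

def round_robin(tasks):
    """Fundamental: assign tasks in fixed rotation, ignoring node state."""
    assignment, rr = {}, cycle(nodes)
    for t in tasks:
        assignment[t] = next(rr)
    return assignment

def least_loaded(tasks, costs):
    """Heuristic: send each task to the currently least-loaded node."""
    load = dict(nodes)
    assignment = {}
    for t in tasks:
        target = min(load, key=load.get)
        assignment[t] = target
        load[target] += costs[t]
    return assignment

tasks = ["t1", "t2", "t3", "t4"]
costs = {"t1": 5, "t2": 1, "t3": 1, "t4": 1}
print(round_robin(tasks))         # t4 wraps back to fog1 regardless of load
print(least_loaded(tasks, costs)) # t4 avoids fog1, which holds the heavy t1
```

The heuristic adapts to node state at the cost of tracking load estimates, mirroring the trade-off between simplicity and adaptability discussed above.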
Meta-heuristic algorithms, including Particle Swarm Optimization and Genetic Algorithms, excel in handling multi-objective optimization but are resource-heavy and often demand extensive parameter calibration. Machine learning (ML) and reinforcement learning (RL) methods are gaining popularity for their ability to predict and adapt to workload patterns in real time. However, they face significant challenges in fog environments. These include high training latency, resource demands during inference, limited interpretability (especially with deep models), and difficulties in deploying or updating models on low-power, distributed nodes. Additionally, data scarcity and privacy constraints in fog scenarios can hinder effective model training.
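As a minimal sketch of the RL idea, the following Python code runs tabular Q-learning over a toy single-state environment with two fog nodes of different mean service times; the environment and all parameters are invented for illustration and are not drawn from any specific surveyed method.

```python
# Hedged sketch of tabular Q-learning for fog node selection. The toy
# environment (a fast node vs. a slow node) stands in for the workload
# models of the surveyed studies; reward is negative response time.
import random

random.seed(0)
ACTIONS = ["fast_node", "slow_node"]
Q = {a: 0.0 for a in ACTIONS}          # single-state Q-table
alpha, epsilon = 0.1, 0.2              # learning rate, exploration rate

def response_time(node):
    """Toy environment: the 'fast' node answers quicker on average."""
    base = 1.0 if node == "fast_node" else 3.0
    return base + random.uniform(0.0, 0.5)

for episode in range(500):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(Q, key=Q.get)
    reward = -response_time(a)         # lower latency -> higher reward
    Q[a] += alpha * (reward - Q[a])    # single-state update (no next state)

print(max(Q, key=Q.get))  # the agent learns to prefer the fast node
```

Even this stripped-down agent exhibits the training cost noted above: hundreds of trial episodes are needed before the routing policy stabilizes, which is expensive on resource-constrained fog nodes.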
Fuzzy logic-based strategies are well suited for systems dealing with uncertain or imprecise inputs, although they rely on manually defined rules and do not scale well without adaptive mechanisms. Game theory models enable decentralized resource negotiation but often involve high computational costs and depend on accurate system modeling. Probabilistic methods provide lightweight solutions for stochastic systems but struggle with dynamic changes. Lastly, hybrid approaches, which combine heuristics, ML, meta-heuristics, and fuzzy logic, are increasingly used to balance accuracy, adaptability, and resource efficiency, though they come with increased implementation complexity. Overall, while heuristic methods dominate current practice, ML/RL and hybrid models are emerging as promising solutions for adaptive and intelligent load balancing in fog computing.
Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 and Table A2 complement this discussion by illustrating the distribution, co-occurrence, and performance trade-offs among the reviewed strategies. Figure 6 and Figure 7 present the results related to Table A3. Figure 6 shows the number of studies per strategy. The top category is heuristic approaches, which dominate the landscape with 80+ studies. Meta-heuristic strategies and hybrid approaches show a strong presence, indicating growing interest in adaptive and multi-objective methods. Additionally, machine/deep/reinforcement learning methods (ML/RL) are emerging but still trail behind heuristics. Figure 7 presents the strategy proportions. Heuristics account for more than 50% of all categorized strategies. Smaller yet important segments are exact optimization (~5%) and fuzzy logic/game theory (niche but valuable in uncertain or decentralized environments), while ML/RL appears in ~10% of studies, showing fast growth, especially in autonomous or real-time decision-making.
In summary, heuristic approaches are practical, scalable, and ideal for real-time load balancing in volatile environments and cited in studies like [24,27,29,40,41]. Meta-heuristic strategies are suitable for complex, multi-variable problems requiring a balance between exploration and exploitation and used in [9,12,22,28,36]. ML/RL strategies are increasingly adopted for systems needing adaptive optimization. Studies like [7,33,42,43] highlight learning-based load prediction and decision-making. Fuzzy logic is helpful in systems with imprecise inputs or needing human-like reasoning, which is seen in [8,44,45]. Game theory ensures fairness and efficiency in multi-agent fog environments, which is applied in [38,46] for decentralized negotiation and decision-making. Finally, hybrid approaches combine the strengths of other strategies for robust, versatile load balancing and are employed in [10,26,28,47].
Regarding the distribution of studies by the number of categories (Figure 8), 49 studies used a single strategy, showing strong domain-specific alignment. Yet over 30 studies appear in two to four categories, revealing a move toward multi-strategy fusion in fog environments. This shows that real-world fog computing challenges often require more than one theoretical approach, especially in dynamic or uncertain conditions. The heatmap (Figure 9) reveals how often different categories are used together in the same studies. Heuristic approaches co-occur with almost every other strategy: strongly with meta-heuristic (20+ overlaps), frequently with ML/RL, hybrid, and even fuzzy logic, hybrid approaches commonly overlap with meta-heuristic, heuristic, ML/RL, and fuzzy logic, and ML/RL + meta-heuristics is a powerful emerging combo, often used in adaptive scheduling, predictive balancing, and real-time orchestration [47,48,49].
The Venn diagram (focused on heuristic, meta-heuristic, and hybrid approaches) in Figure 10 reveals a substantial intersection where studies (like [47,50,51]) implement all three approaches, indicating high system complexity and a large, shared region between heuristic and meta-heuristic, showing a common pairing for performance tuning under constraints.

4.3. RQ2.1: Advantages and Disadvantages of Load-Balancing Strategies

The strengths and limitations of each category were analyzed across all studies: (a) Heuristic methods offer low complexity and high adaptability but often converge on suboptimal solutions. (b) Meta-heuristics provide robust optimization but require intensive parameter tuning. (c) ML/RL algorithms enable intelligent decision-making but demand large training datasets and are prone to overfitting. (d) Exact optimization ensures optimality but lacks scalability. (e) Hybrid approaches integrate benefits across techniques but often introduce implementation complexity.
Based on Table A3, Figure 11, Figure 12 and Figure 13 present some statistics. The distribution of scheduling strategies is presented in Figure 11 and Figure 12. The most prevalent is hybrid scheduling, which appears in 72 studies, making it the dominant strategy, followed by latency-aware and deadline-aware strategies, appearing in over 55 studies each. The least represented are security-aware scheduling and cost-aware scheduling; while still significant, these are covered in fewer studies (~40 each), possibly due to their specialized nature. Meanwhile, the heatmap of the co-occurrence of scheduling strategies (Figure 13) shows the high overlap zones: hybrid scheduling co-occurs with nearly every other category, acting as a meta-layer combining various constraints, and strong links also exist: latency-aware ↔ deadline-aware, energy-aware ↔ cost-aware, and context-aware ↔ resource-aware. This suggests that real-world fog applications frequently balance multiple metrics: energy, delay, cost, security, and more.

4.4. RQ3: Performance Metrics

Performance evaluation across studies primarily involved the following metric categories: (a) Latency and response time: most frequently reported (e.g., [8,9,15]). (b) Energy consumption: particularly relevant in mobile and battery-constrained environments [10,24]. (c) Resource utilization: CPU, memory, and bandwidth usage [11,52]. (d) Task completion time and throughput: common in smart city and real-time applications [7,53]. (e) Fault tolerance and deadline adherence: evaluated via metrics like task failure rate and missed deadlines [31,54].
Several studies also reported composite metrics (Table A4) such as QoS satisfaction rates, prediction accuracy, and success ratios. Figure 14 presents the offloading strategy distribution. The most used strategies are hybrid offloading and dynamic offloading, which lead with 65+ studies each, reflecting the complexity and adaptiveness of modern fog systems. The other popular strategies are partial offloading and latency/energy-aware strategies, which are also widespread, showing the need to balance performance and efficiency. In terms of the specialized focus areas, application-specific offloading and context-aware offloading are common in targeted domains like AR/VR, healthcare, or location-based services.
The heatmap (Figure 15) shows the strongest co-occurrences: hybrid ↔ dynamic ↔ resource-aware. These three strategies frequently co-occur, as fog environments require real-time, flexible offloading based on the device state and network load. Similarly, energy-aware ↔ latency-aware ↔ context-aware is a common trio, especially in mobile and sensor-driven applications, where energy and delay are key, and context drives adaptiveness.
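The latency/energy-aware offloading logic recurring across these studies can be sketched as a weighted cost comparison between local execution and offloading to a fog node. The cost model, parameter values, and weights below are invented for demonstration and are not taken from the review.

```python
# Illustrative sketch of a latency/energy-aware offloading decision; the
# cost model and all constants are hypothetical, chosen only to show the
# trade-off between compute-heavy and data-heavy tasks.
def offload_decision(task_cycles, data_bits, *,
                     local_cps=1e8, fog_cps=1e9, uplink_bps=5e6,
                     local_j_per_cycle=2e-9, tx_j_per_bit=1e-7,
                     w_latency=0.5, w_energy=0.5):
    """Return 'local' or 'offload' by comparing weighted latency+energy costs."""
    # Local execution: compute on the device itself.
    local_latency = task_cycles / local_cps
    local_energy = task_cycles * local_j_per_cycle
    # Offloaded execution: transmit the input, then compute on the fog node.
    offload_latency = data_bits / uplink_bps + task_cycles / fog_cps
    offload_energy = data_bits * tx_j_per_bit
    local_cost = w_latency * local_latency + w_energy * local_energy
    offload_cost = w_latency * offload_latency + w_energy * offload_energy
    return "local" if local_cost <= offload_cost else "offload"

# A compute-heavy task with little input data favors offloading ...
print(offload_decision(task_cycles=1e9, data_bits=1e4))   # offload
# ... while a light task with a large payload stays local.
print(offload_decision(task_cycles=1e6, data_bits=1e7))   # local
```

Context-aware variants extend this rule by re-estimating the rates and weights at runtime from the device state and network load, which is precisely why hybrid and dynamic offloading co-occur so often.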

4.5. RQ4: Workload Types

Workloads were classified as follows: (a) Dynamic (95.6% of studies): real-time IoT data with unpredictable volumes [10,15,55]. (b) Static: benchmarking scenarios with fixed workloads [23,56]. (c) Hybrid: combining predictable and real-time tasks [57,58]. (d) Data intensive: a special category for large-scale analytics and multimedia processing [13]. Dynamic workloads dominate the evaluation landscape due to the volatile nature of fog environments.
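The static/dynamic distinction can be made concrete with a small Python sketch: static workloads use fixed arrival intervals, while dynamic workloads are commonly modeled with Poisson arrivals (exponential inter-arrival times). The parameters are illustrative only.

```python
# Sketch of static vs. dynamic workload generation; rates are made up.
import random

random.seed(42)

def static_workload(n_tasks, interval):
    """Static: tasks arrive at a fixed, predictable interval (benchmarking)."""
    return [i * interval for i in range(n_tasks)]

def dynamic_workload(n_tasks, mean_rate):
    """Dynamic: Poisson arrivals model bursty, unpredictable IoT traffic."""
    t, arrivals = 0.0, []
    for _ in range(n_tasks):
        t += random.expovariate(mean_rate)  # exponential inter-arrival times
        arrivals.append(t)
    return arrivals

static = static_workload(5, interval=1.0)
dynamic = dynamic_workload(5, mean_rate=1.0)
print(static)   # evenly spaced: [0.0, 1.0, 2.0, 3.0, 4.0]
print(dynamic)  # irregular gaps, as in real IoT traffic
```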
Figure 16 and Figure 17 present the studies listed in Table A5. The distribution of fault tolerance strategies (Figure 16) shows that the dominant techniques, task migration, checkpointing, and replication, are the most adopted methods, each appearing in 65+ studies. These strategies focus on recovery, continuity, and proactive protection, which are critical in dynamic fog environments. The supportive techniques (failure detection, self-healing, and resilient scheduling) occur in 50+ studies, showcasing a trend toward intelligence and autonomy in fault handling. Finally, comprehensive resilience (hybrid fault tolerance) appears in most studies, combining multiple techniques for layered protection.
In regard to the heatmap (Figure 17), the top co-occurring pairs are as follows: checkpointing ↔ replication, task migration ↔ resilient scheduling, and self-healing ↔ failure detection. This clustering suggests that fog systems rely heavily on failover and recovery coordination, especially when real-time services are at risk.
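The checkpointing-plus-migration coordination can be illustrated with a toy Python sketch: a task periodically persists its progress so that, after a node failure, a backup node resumes from the last checkpoint rather than restarting. All details are invented for demonstration.

```python
# Toy illustration of checkpointing with task migration; step counts and
# the failure point are hypothetical.
def run_with_checkpoints(total_steps, checkpoint_every, fail_at=None):
    """Run on a 'primary' node; return (last_checkpoint, completed)."""
    checkpoint = 0
    for step in range(1, total_steps + 1):
        if fail_at is not None and step == fail_at:
            return checkpoint, False       # node failed mid-task
        if step % checkpoint_every == 0:
            checkpoint = step              # persist progress
    return checkpoint, True

def migrate_and_resume(total_steps, checkpoint):
    """Backup node resumes from the last checkpoint instead of restarting."""
    return total_steps - checkpoint        # steps remaining after migration

ckpt, done = run_with_checkpoints(total_steps=100, checkpoint_every=10, fail_at=57)
print(ckpt, done)                     # 50 False: failed at step 57
print(migrate_and_resume(100, ckpt))  # only 50 steps redone, not all 100
```

The trade-off visible even here, i.e., more frequent checkpoints reduce lost work but add overhead, is one reason checkpointing and replication so often co-occur.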

4.6. RQ5: Evaluation Tools

Evaluation environments ranged from simulators to real-world testbeds: (a) iFogSim (most common): used in 24 studies for modeling latency, energy, and network utilization [59,60]. (b) CloudSim and PFogSim: applied to hybrid fog–cloud scenarios [43]. (c) OMNeT++ and YAFS: chosen for detailed network simulations [61,62]. (d) Custom simulators: built using Python, Java, Docker, or MATLAB to simulate unique conditions [23,36]. (e) Real-world testbeds: implemented using Raspberry Pi and microcontroller-based setups in [63]. Some studies integrated multiple tools to bridge network and compute modeling (e.g., [58]).
Results presented in Table A6 reveal that the most common approaches are intrusion detection and data encryption, highlighting a dual focus on preventing and protecting against attacks. Furthermore, authentication, privacy preservation, and trust management follow closely, highlighting the importance of data integrity and user legitimacy. The emerging trends are blockchain-based security and lightweight security, which are gaining traction due to their applicability in decentralized and resource-constrained environments. The top co-occurrences are intrusion detection ↔ authentication, privacy ↔ data encryption, and trust management ↔ lightweight and blockchain. These patterns confirm that security is rarely isolated; instead, strategies are tightly coupled to address multiple threat vectors simultaneously.

4.7. RQ6: Assessment Methods

Studies utilized various assessment approaches: (a) simulation-based evaluation: dominant method due to its reproducibility. (b) Prototype implementation: used in research targeting specific use cases like smart cities or industrial IoT [28,55]. (c) Theoretical modeling: used for algorithm development and proof of concept [62,64]. Few studies incorporated empirical measurements from real-world deployments, indicating a gap in practical validation.
Table 3 shows that iFogSim is by far the most widely used tool, referenced in 90+ studies (Table A7). Its popularity stems from its native support for IoT, fog, and mobility modeling, as well as strong community adoption and extensions (e.g., iFogSim2, iFogSim-Mobility). YAFS and EdgeCloudSim follow, but with significantly fewer studies; they offer greater flexibility (YAFS) or a sharper focus on edge environments (EdgeCloudSim). CloudSim and its variants still appear in fog research, mainly for legacy support or hybrid cloud–fog architectures. GreenCloud, EmuFog, and PureEdgeSim are more specialized and used in niche studies, often focusing on energy, network topology, or edge-centric architectures. Finally, custom simulators are found in a handful of studies (e.g., [58,65]), typically to support novel frameworks or architectures not yet supported in public tools.
While iFogSim remains the most widely adopted simulator in fog computing research—cited in over 90 studies—its dominance warrants critical examination. One of its primary limitations lies in its simplified modeling of network dynamics. iFogSim typically assumes stable and deterministic network conditions, which do not accurately reflect the variability and unpredictability of real-world fog and IoT networks. Moreover, its workload generation models are often static or predefined, limiting its applicability for evaluating systems that require dynamic or event-driven workloads. These constraints may lead to over-optimistic performance estimates and reduce the generalizability of findings derived from iFogSim-based simulations.
In contrast, tools like YAFS (Yet Another Fog Simulator) offer greater flexibility in modeling stochastic behavior and network latency. YAFS supports dynamic node behavior and complex event-driven simulations, making it suitable for environments with mobility or failure-prone devices. OMNeT++, though more complex to configure, provides high-fidelity network simulation capabilities and is ideal for evaluating fog systems with detailed communication protocols and cross-layer interactions. Researchers focused on the accurate modeling of network variability, message delays, or routing protocols may find YAFS or OMNeT++ more appropriate than iFogSim.
Ultimately, the selection of a simulation tool should be aligned with the research objectives. While iFogSim offers ease of use and strong community support for application-level load balancing, alternatives like YAFS and OMNeT++ provide richer abstractions for network-level modeling and dynamic behaviors. A more diversified toolset can enable a deeper and more realistic understanding of fog computing performance under different operational conditions.
Despite their methodological convenience, simulation-based evaluations present a narrow view of fog computing performance under real-world conditions. The limited adoption of physical testbeds in the current research stems from several practical barriers. First, fog environments often involve heterogeneous and distributed devices, making testbed design complex and resource intensive. Deploying and maintaining such infrastructure requires not only significant financial investment but also expertise in embedded systems, networking, and security. Second, real-time testing introduces variability due to hardware failures, network delays, and environmental noise, all of which are difficult to control and reproduce consistently. Additionally, there is often limited access to large-scale open-source fog testbeds, which restricts community-wide experimentation.
To address these challenges, future research could leverage affordable prototyping platforms such as Raspberry Pi clusters, Arduino boards, or Jetson Nano kits to build lightweight test environments. Cloud-integrated fog simulators (e.g., iFogSim2 with IoT hardware extensions) and federated edge–cloud frameworks can also be explored to simulate partial real-world behavior. Moreover, partnerships between academia and industry can play a vital role in providing access to realistic test environments, operational data, and deployment feedback. These approaches will help bridge the current gap between theoretical evaluations and operational viability, thus enhancing the credibility and applicability of fog-based load-balancing strategies.

4.8. RQ7: Challenges, Trends, and Future Directions

Fog computing continues to face several critical challenges that hinder its broader adoption and practical deployment. One of the primary challenges involves managing the heterogeneous capabilities of fog nodes, which can vary significantly in terms of computational power, memory, and energy efficiency. This heterogeneity complicates the process of load balancing and requires adaptable solutions that can operate effectively across diverse environments. Another pressing challenge is achieving scalability, particularly in high-density IoT environments where the sheer volume of data and the number of connected devices demand efficient, real-time processing with minimal delays.
Ensuring energy efficiency and reducing latency simultaneously remains a difficult task, especially given the resource-constrained nature of many fog nodes. As real-time responsiveness is critical in applications like healthcare and autonomous systems, achieving this balance is an ongoing concern. Security and privacy also remain at the forefront of research efforts, as decentralized fog architectures inherently expose data to a broader range of potential threats. Protecting sensitive information while maintaining performance necessitates lightweight yet effective security frameworks.
In terms of emerging trends, several technologies are shaping the next generation of fog computing systems. The adoption of artificial intelligence (AI) and machine learning (ML) is enabling adaptive load-balancing strategies that can learn and predict optimal resource allocation patterns in dynamic environments. Reinforcement learning (RL) is particularly being explored for its potential to autonomously optimize resource usage over time. Additionally, the integration of blockchain technology is gaining attention for its ability to support secure, tamper-proof data handling and decentralized trust management, which are vital in untrusted and distributed settings. Software-defined networking (SDN) is another key trend, offering centralized control over fog resources while maintaining the decentralized nature of fog infrastructure. This can facilitate the efficient orchestration and dynamic reconfiguration of resources in response to changing workloads.
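As a hedged illustration of the RL trend, the sketch below uses a bandit-style tabular Q-learning update to learn which node yields the lowest latency. The node names, latency distributions, and reward design (negative latency) are invented for the example; real systems would use richer state such as queue lengths and CPU load within a full MDP formulation.

```python
import random

def q_learning_balancer(latency_fn, nodes, episodes=2000, alpha=0.1, eps=0.1):
    """Learn per-node value estimates from observed latencies and return the
    node with the best (least-negative) estimate after training."""
    q = {n: 0.0 for n in nodes}
    for _ in range(episodes):
        if random.random() < eps:          # epsilon-greedy exploration
            node = random.choice(nodes)
        else:                              # exploit the current best estimate
            node = max(q, key=q.get)
        reward = -latency_fn(node)         # lower latency -> higher reward
        q[node] += alpha * (reward - q[node])
    return max(q, key=q.get)

random.seed(0)
# hypothetical mean latencies (ms) per fog node, with Gaussian noise
means = {"fog-a": 30, "fog-b": 12, "fog-c": 45}
best = q_learning_balancer(lambda n: random.gauss(means[n], 2.0), list(means))
print(best)  # converges to the lowest-latency node, typically "fog-b"
```

The appeal for fog settings is that the policy adapts online as conditions drift, at the cost of an exploration overhead that must be bounded for latency-critical traffic.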
Looking ahead, future research is expected to prioritize the development of real-time, energy-aware hybrid algorithms that can effectively balance multiple competing objectives such as latency, energy consumption, cost, and security. Furthermore, there is a strong need to expand real-world testbed deployments to complement simulation-based evaluations. This would provide more accurate assessments of system performance in practical conditions, accounting for hardware limitations, network variability, and user behavior. Another promising direction is cross-layer optimization, which involves coordinating decisions across application, network, and infrastructure layers to achieve holistic and context-aware scheduling.
Lastly, the lack of standardized evaluation frameworks presents a major obstacle to progress in the field. Without consistent benchmarking methodologies, it remains difficult to compare results across studies or replicate findings. Developing unified, transparent, and comprehensive evaluation frameworks is essential for ensuring reproducibility and enabling the meaningful comparisons of different load-balancing strategies.
While security and privacy have been acknowledged as critical challenges in fog computing, there is growing emphasis on implementing efficient yet robust mechanisms tailored for resource-constrained environments. Among the most widely discussed approaches is lightweight cryptography, which offers encryption and authentication using smaller key sizes and faster algorithms. These are particularly suitable for fog nodes with limited processing power and memory. However, they often involve trade-offs in terms of reduced cryptographic strength or susceptibility to emerging attack vectors and must be carefully selected based on threat models and application criticality.
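As one deliberately simple instance of the lightweight trade-off described above, the sketch below truncates an HMAC-SHA256 tag to 8 bytes: smaller packets for constrained fog links, at the cost of reduced forgery resistance. The key and message are illustrative, and the standard-library `hmac`/`hashlib` modules stand in for whatever primitive a deployment would actually choose.

```python
import hashlib
import hmac

def make_tag(key: bytes, message: bytes, tag_len: int = 8) -> bytes:
    """Truncated HMAC-SHA256: an 8-byte tag instead of the full 32 bytes,
    trading forgery resistance for lower per-message overhead."""
    return hmac.new(key, message, hashlib.sha256).digest()[:tag_len]

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(make_tag(key, message, len(tag)), tag)

key = b"shared-fog-key"                        # hypothetical pre-shared key
msg = b'{"sensor": 7, "temp": 21.4}'
tag = make_tag(key, msg)
print(verify(key, msg, tag))                   # True: untampered reading
print(verify(key, b'{"sensor": 7, "temp": 99}', tag))  # False: tampered
```

Choosing the truncation length is exactly the threat-model decision noted above: 8 bytes may suffice for short-lived telemetry but not for safety-critical commands.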
Another emerging approach is the adoption of zero-trust architectures (ZTAs), where no device or user is inherently trusted, even within the network perimeter. ZTA involves continuous verification, micro-segmentation, and strict access controls. While this model enhances security posture against insider threats and lateral attacks, it introduces additional overhead from continuous monitoring and policy enforcement, which can impact system latency and increase energy consumption—especially if not optimized for fog-level operations.
Furthermore, secure data offloading and trusted execution environments (TEEs) are also being explored to protect sensitive data during transmission and processing. These solutions often rely on hardware support, which may not be uniformly available across all fog nodes, posing deployment challenges. Overall, integrating these security mechanisms must be balanced against real-time responsiveness and energy efficiency, which are often critical in fog applications such as healthcare, vehicular systems, or industrial automation.
Future research should focus on security–performance co-optimization, where security mechanisms are integrated in a way that minimally disrupts core performance metrics. Approaches such as adaptive security models, which adjust protection levels based on workload or context, may provide promising directions for aligning safety and efficiency in fog computing deployments.
Table A8-related visualizations are presented in Figure 18 and Figure 19. The distribution of evaluation metrics is presented in Figure 18. The most frequent metrics are latency/delay and energy consumption, each appearing in over 80 studies; these dominate because of fog computing’s real-time and power-sensitive nature. Resource utilization, cost efficiency, throughput, and load balancing follow closely, confirming the multi-faceted nature of current evaluation practices. Finally, security/privacy, fault tolerance, and custom metrics are used in more specialized domains such as healthcare or vehicular networks. The co-occurrence trends (heatmap, Figure 19) show frequent pairings: latency + energy, utilization + cost, security + privacy, and latency + QoS. These clusters reflect how metrics are strategically bundled: performance with efficiency, protection with compliance, and responsiveness with service satisfaction.

5. Implications

The architecture choice should match the use case: (a) simple two-layer or three-layer designs are suitable for general IoT applications, while (b) advanced domains (e.g., autonomous vehicles and health systems) benefit from multi-layer or hybrid models. Dynamic clustering is essential: real-time data flows, mobility, and heterogeneous resources make static clustering insufficient, and the integration of SDN (as in [21]) or orchestration platforms (such as Kubernetes in [14,85]) can enable intelligent cluster management. Scalability and interoperability remain ongoing challenges: the diversity in node and cluster configurations reveals a lack of standardization, which may hinder interoperability across platforms.
Design can start simply and scale smartly: begin with heuristics in pilot deployments, then transition to ML or meta-heuristics as complexity increases. For autonomous or smart environments, ML/RL or hybrid strategies should be prioritized to handle dynamic and uncertain workloads. Meanwhile, although exact optimization is ideal in theory, it is often impractical for large-scale or real-time fog systems.
Regarding strategies, there is no one-size-fits-all solution; load balancing in fog computing requires blending multiple paradigms, especially in heterogeneous, large-scale, or mobile scenarios. Heuristics act as the glue: they serve as the core logic layer, often enhanced by meta-heuristics, learning, or fuzzy reasoning, depending on system complexity. Finally, in terms of future trends such as intelligent hybridization, studies like [47,50,51] showcase layered or hierarchical load-balancing models that combine rule-based, statistical, and AI-driven components.
Based on the above observations, Table 4 presents a tailored recommendation matrix for selecting load-balancing strategies in fog computing environments. Each strategy is mapped to scenarios where it performs optimally, along with real-world examples to guide implementation. Heuristic approaches are best suited for systems requiring quick, real-time decisions under moderate complexity, such as smart traffic control. Meta-heuristic strategies are ideal for solving complex optimization problems with constraints, commonly found in vehicle routing or task-offloading scenarios. ML/RL strategies enable adaptive and predictive load balancing, especially useful in energy-aware fog environments. Fuzzy logic excels in handling vague or imprecise input data, as seen in health monitoring systems using uncertain sensor readings. Hybrid approaches combine multiple methods to address diverse objectives, such as latency, cost, and mobility, making them highly effective in large-scale or heterogeneous systems, such as smart cities. This strategic mapping offers a practical reference for researchers and practitioners to align system requirements with the most effective load-balancing methods.
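For instance, a heuristic strategy of the kind recommended for quick real-time decisions can be as simple as a greedy least-loaded rule. The sketch below (node names, capacities, and the capacity model are all hypothetical) assigns each task to the feasible node with the most spare capacity:

```python
def least_loaded(nodes, demand):
    """Greedy heuristic: place the task on the feasible node with the most
    spare capacity; O(n) per decision, suitable for real-time balancing."""
    feasible = [n for n in nodes if n["capacity"] - n["load"] >= demand]
    if not feasible:
        return None                 # escalate to the cloud or queue the task
    target = max(feasible, key=lambda n: n["capacity"] - n["load"])
    target["load"] += demand        # book the capacity on the chosen node
    return target["name"]

nodes = [
    {"name": "fog-1", "capacity": 100, "load": 80},
    {"name": "fog-2", "capacity": 100, "load": 35},
    {"name": "fog-3", "capacity": 60,  "load": 10},
]
choice = least_loaded(nodes, 20)
print(choice)  # fog-2: 65 units spare beats fog-3's 50
```

Rules of this shape are why heuristics dominate pilot deployments: they are cheap, explainable, and easy to swap out once a meta-heuristic or learned policy is justified.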
In terms of scheduling, hybrid scheduling is essential: complex fog environments demand multi-objective optimization (e.g., [2,28,97]), which is especially vital in smart city, healthcare, and mobility-driven use cases. Latency and deadlines are core concerns, and time-sensitivity is a major driver, particularly in IoT and edge applications [27,38]. Energy and cost often travel together: because fog nodes are resource constrained, studies frequently address these factors jointly [41,76]. Finally, security is gaining attention, appearing in 40+ studies, which shows rising concern for secure offloading, authentication, and data handling [32,47,98]. Table 5 presents a practical mapping between scheduling strategy types and their optimal application contexts in fog computing environments. Each strategy is associated with scenarios where it is most effective, along with representative real-world use cases. This matrix serves as a design guideline for selecting scheduling algorithms that align with the specific operational goals and constraints of fog-based systems.
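A minimal sketch of the multi-objective scheduling idea, assuming a weighted-sum formulation over min-max-normalized latency, energy, and cost (the candidate nodes, metric values, and weights are illustrative, not taken from any reviewed study):

```python
def schedule(candidates, weights=(0.5, 0.3, 0.2)):
    """Weighted-sum multi-objective scheduler: normalize each metric across
    candidates, then pick the node minimizing the combined cost."""
    w_lat, w_energy, w_cost = weights
    metrics = ["latency", "energy", "cost"]
    lo = {m: min(c[m] for c in candidates) for m in metrics}
    hi = {m: max(c[m] for c in candidates) for m in metrics}

    def norm(c, m):  # min-max normalization so weights are comparable
        return 0.0 if hi[m] == lo[m] else (c[m] - lo[m]) / (hi[m] - lo[m])

    def cost(c):
        return (w_lat * norm(c, "latency")
                + w_energy * norm(c, "energy")
                + w_cost * norm(c, "cost"))

    return min(candidates, key=cost)["name"]

candidates = [
    {"name": "fog-1", "latency": 10, "energy": 8, "cost": 5},
    {"name": "fog-2", "latency": 25, "energy": 2, "cost": 1},
    {"name": "cloud", "latency": 60, "energy": 1, "cost": 9},
]
best_node = schedule(candidates)
print(best_node)  # fog-2: a balanced compromise across all three objectives
```

The weight vector is where the competing priorities discussed above become explicit; deadline-driven applications would raise the latency weight, battery-constrained ones the energy weight.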
Offloading in fog is contextual and dynamic: the popularity of dynamic and context-aware offloading confirms that systems must continuously adapt to environmental and device conditions. Similarly, hybrid techniques dominate real deployments, since real-world fog systems often need to balance latency, energy, and computation load. On the other hand, while full offloading is useful for constrained devices, partial offloading is more flexible and efficient in bandwidth-limited or real-time cases. Finally, cross-layer decision-making is common: co-occurrence across four or more categories shows that fog-based offloading decisions are not made in isolation but are integrated with scheduling, resource management, and security.
Table 6 presents practical guidance for selecting offloading strategies in fog computing based on system characteristics and application needs. Each strategy is aligned with specific operational contexts and real-world use cases. This matrix supports developers and researchers in choosing the most suitable offloading mechanisms to align with diverse operational goals and application scenarios in fog-based systems.
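The context-aware full/partial/local distinction can be caricatured as a small decision rule. In the sketch below, the thresholds, timings, and battery cutoff are invented for illustration, assuming the device knows its local execution time, the fog-side execution time, the transfer delay, its battery level, and the task deadline:

```python
def offload_decision(local_ms, fog_ms, tx_ms, battery_pct, deadline_ms):
    """Toy context-aware offloading rule: offload fully when energy is
    scarce, split the task when both paths are viable, run locally otherwise."""
    fog_total = tx_ms + fog_ms          # end-to-end cost of the fog path
    if battery_pct < 20 and fog_total <= deadline_ms:
        return "full"                   # preserve device energy
    if fog_total < local_ms:
        # both paths viable: partial offloading splits the workload
        return "partial" if local_ms <= deadline_ms else "full"
    return "local"

decision = offload_decision(local_ms=100, fog_ms=20, tx_ms=30,
                            battery_pct=80, deadline_ms=120)
print(decision)  # partial: the fog path (50 ms) beats local (100 ms)
```

Real systems replace these static thresholds with live measurements, which is precisely what makes offloading "contextual and dynamic" in the reviewed studies.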
Regarding fault management, fault tolerance in fog is not optional: the consistent use of migration, replication, and recovery strategies confirms that faults are expected and must be managed proactively. Furthermore, multi-layer protection is commonplace; the overlap of five to seven strategies in most studies indicates that single-method fault tolerance is insufficient for real-world fog deployments. Predictive and self-healing trends are also rising: the increasing use of AI-based detection and self-healing shows a shift toward autonomous fog systems that can adapt to failures without human intervention. Finally, hybrid approaches are now standard practice; systems that combine detection, redundancy, and recovery are more resilient, especially in mission-critical applications like healthcare or industrial automation.
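A toy sketch of the detection-plus-recovery combination, pairing heartbeat-based failure detection with round-robin task migration; node names, task names, timestamps, and the timeout value are all hypothetical:

```python
def detect_failures(heartbeats, now, timeout=3.0):
    """Flag nodes whose last heartbeat is older than `timeout` seconds."""
    return {n for n, t in heartbeats.items() if now - t > timeout}

def migrate(assignments, failed, healthy):
    """Reassign tasks hosted on failed nodes round-robin across healthy ones."""
    moved, i = {}, 0
    for task, node in assignments.items():
        if node in failed:
            moved[task] = healthy[i % len(healthy)]
            i += 1
        else:
            moved[task] = node
    return moved

heartbeats = {"fog-1": 9.5, "fog-2": 4.0, "fog-3": 9.8}  # last-seen times
failed = detect_failures(heartbeats, now=10.0)           # fog-2 is stale
healthy = sorted(set(heartbeats) - failed)
new_assignments = migrate({"t1": "fog-2", "t2": "fog-3"}, failed, healthy)
print(new_assignments)  # t1 moves off the failed node; t2 stays put
```

Even this toy version shows why studies layer several strategies: detection alone only identifies the failure, and migration alone presumes something else noticed it.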
Finally, security is multi-dimensional: no single technique suffices, and systems must incorporate both detection-oriented (e.g., intrusion detection) and preventive (e.g., access control and encryption) layers. Fog security must also be lightweight; the rise of lightweight and blockchain-based security solutions reflects a growing demand for scalable yet efficient protection. Trust and privacy are likewise critical: with sensitive data traversing fog nodes, trust management and privacy-preserving methods are essential, especially in healthcare and financial systems.

6. Conclusions and Future Work

6.1. Conclusions

This systematic literature review offers a comprehensive examination of load-balancing strategies within the context of fog computing. By analyzing 113 peer-reviewed articles published between 2020 and 2024, this study identifies prevailing architectural models, algorithmic advancements, evaluation metrics, and emerging trends that shape current research in the field.
This review reveals that three-layer architectures (IoT–Fog–Cloud) are the most widely adopted, as they provide an effective balance between localized responsiveness and centralized control. More complex hierarchical and multi-layer configurations further enhance scalability and are especially well suited for data-intensive and latency-sensitive applications. In terms of load-balancing strategies, dynamic approaches, particularly heuristic, meta-heuristic, and hybrid algorithms, dominate the landscape due to their adaptability in heterogeneous and fluctuating environments. Increasingly, machine learning and reinforcement learning techniques are being leveraged to support intelligent, predictive load balancing.
Performance evaluation in the reviewed studies frequently focuses on latency, response time, energy consumption, and resource utilization. Over 95% of the studies simulate dynamic workloads, underlining the importance of context-aware and real-time distribution strategies. iFogSim emerges as the most commonly used simulation tool, followed by CloudSim and several custom-built environments. Nevertheless, a notable limitation persists: the absence of real-world deployments in many studies restricts the generalizability and practical applicability of their results.
Despite substantial progress, fog computing continues to face critical challenges, particularly in scalability, security, and energy efficiency. Promising research directions include the integration of artificial intelligence for adaptive scheduling, blockchain for decentralized trust management, and software-defined networking (SDN) for enhanced control and orchestration. These technologies have significant potential to address the unique constraints of privacy-sensitive and resource-limited environments. Ultimately, this review provides researchers and practitioners with a structured roadmap of existing methodologies, while highlighting unresolved issues and areas that demand further exploration.

6.2. Future Work

Despite substantial progress in fog computing and load-balancing strategies, several critical challenges and research gaps remain unaddressed. One of the primary limitations is an overreliance on simulation-based evaluations. While simulations offer a controlled environment for preliminary testing, they often fail to capture the complexities of real-world deployments. Future research should therefore focus on the development of real-world testbeds to validate algorithmic performance under operational conditions, providing insights into practical constraints such as hardware limitations, network variability, and user dynamics.
Another important area of focus is energy-aware and green computing. As fog nodes are typically deployed near end-users and may operate with constrained power resources, there is a pressing need for energy-efficient strategies. These strategies must optimize system performance without compromising device longevity or reliability. Integrating energy consumption as a key metric in load-balancing decision-making is essential for sustainable fog computing environments.
Moreover, the dynamic and heterogeneous nature of fog environments necessitates the development of context-aware load-balancing mechanisms. Future approaches should not only respond to workload fluctuations but also incorporate contextual information such as user mobility, geographical distribution, and application-specific Quality of Service (QoS) requirements. Contextual adaptation will enhance the responsiveness and efficiency of resource management in highly dynamic IoT scenarios.
Security and privacy also remain critical concerns. Many IoT applications process sensitive or personal data, making it imperative that future load-balancing solutions integrate security features at their core. Approaches that incorporate encryption, trust management frameworks, and blockchain technologies into load-balancing protocols could offer enhanced data protection while maintaining performance.
In addition, most existing algorithms focus on optimizing a single objective, such as latency or energy consumption. However, real-world applications often require a balance across multiple criteria, including performance, cost, energy efficiency, and security. Future research should explore multi-objective optimization techniques that can effectively manage these competing priorities within a unified framework.
Finally, the lack of standardized evaluation frameworks hinders the ability to conduct fair and reproducible comparisons across different studies. The development of a unified benchmarking framework is essential to promote transparency, improve comparability, and accelerate progress in the field.
By addressing these challenges, future research can pave the way for the development of resilient, scalable, and intelligent fog computing systems capable of supporting the diverse and evolving needs of next-generation IoT applications.
While this review provides a qualitative synthesis of architectural and algorithmic strategies in fog computing, future research should move toward quantitative meta-analysis to enhance comparability and rigor. A statistical aggregation of key performance metrics, such as latency reduction, energy savings, and task completion times, would enable a more objective evaluation of competing strategies. Such a synthesis could be derived from benchmarking results across tools like iFogSim, YAFS, and OMNeT++ using standardized workloads.
Moreover, there is a critical need for deployment-focused case studies that go beyond simulation. Industrial prototypes and smart city implementations, such as those in healthcare monitoring or vehicular networks, can offer vital insights into real-world constraints, including hardware limitations, environmental volatility, and user behavior. These studies are particularly important for evaluating interoperability and scalability, as fog systems often span heterogeneous platforms, protocols, and administrative domains.
To unify these efforts, we advocate for the development of an open benchmarking framework that defines standardized evaluation scenarios, metrics, datasets, and reporting formats. Such a framework would greatly improve transparency, reproducibility, and cross-study comparability, which are key pillars for advancing the technical maturity of fog computing research and facilitating real-world adoption.

Author Contributions

Conceptualization, T.H. and D.A.; methodology, D.A., E.A. and T.B.; software, D.A.; validation, D.A., E.A. and T.B.; formal analysis, D.A., E.A. and T.B.; investigation, T.H.; resources, D.A., E.A. and T.B.; data curation, D.A., E.A. and T.B.; writing—original draft preparation, D.A., E.A. and T.B.; writing—review and editing, T.H.; visualization, D.A., E.A. and T.B.; supervision, T.H.; project administration, T.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No original data used.

Acknowledgments

The authors acknowledge King Fahd University of Petroleum and Minerals and the Interdisciplinary Research Center for Intelligent Secure Systems for the support and for providing the computing facilities for doing this work. Special thanks go to anonymous reviewers for their insightful comments and feedback, resulting in a significant improvement in the quality of the paper. Generative AI has been used for proofreading purposes.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Summary of fog computing architectures across research studies.
Study | Number of Layers | Architecture Description | Number of Nodes | Number of Clusters
[10] | 3 | Cloud layer, fog layer, terminal nodes layer | 50–300 | 6–14
[23] | Multiple | Multiple layers, including cloud, fog nodes, and end devices | NA | NA
[7] | 3 | IoT devices, fog nodes, cloud/central server | 50 | NA
[22] | 3 | End-device layer, fog layer, cloud layer | 2–30 fog nodes | NA
[15] | Flat | The architecture is non-hierarchical, which suggests a flat structure rather than multiple layers | NA | NA
[13] | 3 | Local fog layer, adjacent fog nodes, cloud layer | 4–16 | NA
[11] | 3 | IoT devices (edge), fog layer, cloud layer | NA | NA
[8] | 3 | Device, fog, control layers | 100–600 IoT, 40 fog nodes, 3 cloud nodes | Clusters for task management
[35] | 3 | IoT devices, fog nodes, cloud | 75 fog nodes | 5 clusters
[12] | 3 | Cloud computing layer, fog computing layer, user devices | 11 nodes | NA
[9] | 3 | IoT layer, fog layer, cloud layer | 12 IoT devices, 4 access points, 4 gateways, 6 fog nodes, 1 cloud data center | 4 clusters
[25] | 3 | Devices layer, fog layer, cloud layer | 236–578 vehicles, fog nodes (mobile and dedicated) | Dynamically created clusters
[54] | 3 | Edge devices, fog nodes, cloud infrastructure | 22 IoT nodes, 3 fog nodes, 1 cloud node | NA
[52] | 3 | IoT devices, fog computing, cloud services | 5–25 fog nodes | NA
[57] | Multiple | Distributed layers, including sensors, fog nodes, and cloud layers | NA | NA
[59] | 3 | IoT devices, fog computing resources, cloud data centers | 50 IoT devices, 4 fog providers (50 VMs each) | 4 clusters
[53] | 3 | User devices, fog nodes, cloud systems | 5, 15, 30, 50 VRs | NA
[66] | Multiple | Multi-layer architecture integrating edge, fog, and cloud environments | 10–25 IoT devices, 10 fog nodes | NA
[31] | 4 | Production equipment, fog computing, cloud computing, user layers | 10 fog nodes | NA
[67] | 3 | IoT devices, fog layer, central cloud layer | 3 fog servers | NA
[68] | Variable | Varies per deployment scenario | NA | NA
[69] | 3 | NA | 5 fog nodes | 4 clusters
[75] | Variable | Various architectures, including sequential, tree-based, and DAG-based | NA | NA
[74] | 7 | Multiple fog nodes and a centralized cloud environment | 1073 fog nodes distributed across 7 layers | Puddles based on geographic proximity
[29] | 3 | Infrastructure tier, fog tier, global management tier | 1000 OBUs, 50 RSUs | RSUs grouped by LSDNC
[45] | 2 | Fog layer, cloud layer | NA | NA
[16] | 3 | Dew layer, fog layer, cloud layer | NA | NA
[61] | 2 | Fog layer, cloud layer | 5 cloud nodes, 3 fog nodes | NA
[70] | Multiple | Multi-layered fog computing paradigm extending from IoT sensors to cloud | Varies, heterogeneous fog nodes | Clusters of IoT devices
[71] | 3 | Cloud layer, fog layer, IoT layer | 100–500 fog nodes | NA
[99] | 3 | IoT devices, fog layer, cloud services | Multiple fog nodes and cloud nodes | NA
[96] | 3 | End devices, fog nodes, cloud layer | 16–64 fog nodes | NA
[63] | 3 | End-user devices, fog layer, cloud services | 40–100 VMs, 50 physical machines | NA
[30] | 4 | Acquisition layer, image layer, computing layer, robot layer | 5 fog nodes | NA
[72] | 3 | IoT layer, fog layer, cloud layer | 10–25 fog nodes | NA
[39] | 3 | IoT layer, fog layer, cloud layer | 1 master node, multiple SBC devices | Homogeneous or heterogeneous clusters
[32] | 4 | IoT gateways, fog broker, fog cluster, applications layer | 5 fog nodes | 1 cluster
[52] | 4 | Cloud layer, proxy server layer, gateway layer, edge device layer | Varies based on simulation | NA
[34] | 3 | Edge layer, fog layer, cloud layer | 500–2000 fog nodes | NA
[42] | 3 | Cloud layer, fog layer, end-user layer | 6 fog nodes | NA
[14] | Multiple | A multi-layered approach involving edge, fog, and cloud layers | 2 Raspberry Pis | Single Kubernetes cluster
[21] | 4 | Cloud computing layer, SDN control layer, fog computing layer, user layer | 10 fog nodes, 2 cloud nodes | SDN control allows for dynamic clustering
[40] | 3 | Edge layer, fog layer, cloud layer | 29 fog nodes | Dynamic clusters
[80] | 3 | Edge layer, fog layer, cloud layer | 4–5 fog nodes | Dynamic clusters
[76] | 3 | Sensing layer, fog layer, cloud layer | Multiple fog nodes | Dynamic clusters
[89] | 3 | Cloud layer, fog layer, end-user layer | 4 fog nodes, 1 cloud node | NA
[77] | 3 | End-user layer, fog layer, cloud layer | 2 fog nodes, 1 cloud node | NA
[36] | 3 | Lightweight nodes layer, access points layer, dedicated computing servers layer | 400 lightweight nodes, 10 access points | Dynamic logical groupings
[62] | 3 | Edge/IoT layer, fog layer, cloud layer | 50 fog nodes, 1 cloud node | Dynamic groupings
[37] | 3 | IoT device layer, fog layer, cloud layer | 20 fog nodes, 40 volunteer devices, 1 cloud data center | Logical grouping based on proximity
[100] | 2 | IoT layer, fog layer | 10 fog nodes | NA
[78] | 2 | Vehicular layer, RSU layer | 6 RSUs, 1000–3000 vehicles | Dynamically created clusters
[101] | 3 | Edge layer, fog layer, cloud layer | 100 edge devices, 20 fog devices, 5 cloud servers | NA
[38] | 3 | IoT layer, fog layer, edge server layer | 1 edge server, multiple fog nodes | Dynamic clusters based on workload
[24] | 2 | Fog layer, cloud layer | 3 fog servers, 1 cloud server | Single fog cluster
[26] | 3 | IoT layer, fog layer, cloud layer | Dynamic, based on parked vehicles | Dynamically formed clusters
[81] | 3 | Vehicular fog layer, fog server layer, cloud layer | Vehicular fog nodes, RSUs, cloud nodes | Dynamically created clusters
[20] | 4 | Perception layer, blockchain layer, SDN and fog layer, cloud layer | Distributed RSUs, 500 vehicles | Dynamically created clusters
[82] | 3 | Vehicular layer, fog layer, cloud layer | UAVs and RSUs, dynamic | Dynamic swarms
[46] | 2 | Fog layer, IoT device layer | 4 fog nodes, 50 IoT devices | 4 clusters
[64] | 3 | Fog layer, edge layer, cloud layer | 5 UEs per edge server, 5 edge servers, 1 cloud server | 5 clusters
[55] | 3 | IoT device layer, fog layer, cloud layer | 2 fog servers, 1 cloud server | 2 clusters
[18] | 3 | IoT layer, fog layer, cloud layer | Multiple fog nodes, dynamic environment | Dynamic clustering
[56] | 2 | Fog layer, cloud layer | 15 fog nodes, varying IoT devices | Dynamic clusters
[90] | 3 | IoT layer, fog layer, cloud layer | Varies based on demand | Dynamic clustering
[83] | 2 | Fog layer, cloud layer | Distributed fog nodes, cloud nodes | Dynamic clusters
[91] | 3 | Microgrid layer, fog layer, cloud layer | 3 microgrids, fog nodes | 3 clusters
[27] | 3 | IoT layer, fog layer, cloud layer | 9 edge servers, dynamic IoT devices | Dynamic clusters
[84] | 3 | Edge layer, fog layer, cloud layer | Multiple appliances, smart sensors | Dynamic clusters based on HEMS
[17] | 4 | IoT layer, edge layer, fog layer, cloud layer | Multiple edge, fog, and cloud nodes | Dynamic clusters
[85] | 3 | Containerized workload layer, fog layer, cloud layer | 6 nodes, Kubernetes | Dynamic clusters
[28] | 3 | Edge, fog, cloud | 100 fog nodes, 10–20 IoT devices | Task-based clusters
[86] | 3 | IoT devices layer, fog landscape layer, cloud layer | Multiple fog cells | Fog colonies as clusters
[97] | 2 | Fog layer, cloud layer | NA | NA
[98] | 3 | Sensor layer, fog layer, cloud layer | Multiple fog nodes | Dynamic clusters
[19] | 4 | Sensor layer, fog devices, proxy servers, cloud data centers | 30 microdata centers | Hierarchical MDCs
[102] | 3 | IoT layer, fog layer, cloud layer | NA | NA
[87] | 3 | IoT layer, fog layer, cloud layer | 50 fog nodes | NA
[88] | 3 | IoT layer, fog layer, cloud layer | NA | NA
[60] | 3 | IoMT layer, fog layer, cloud layer | 100–500 fog devices | NA
[103] | 4 | Fog devices, cloud data center | 5 devices at each tier | NA
[65] | Multiple | NA | 10–70 heterogeneous fog nodes | 3 clusters
[92] | Multiple | Multiple layers, including fog nodes and cloud | 500 fog nodes, 200 fog–cloud interfaces | NA
[104] | 4 | Edge layer, base station layer, fog layer, cloud layer | 20 heterogeneous virtual machines in the fog layer | NA
[41] | 3 | Edge, fog, cloud | 100 fog nodes, 3 data centers | NA
[50] | 3 | IoT layer, fog layer, cloud layer | 5–20 fog nodes in experiments | NA
[105] | 3 | EDC, fog nodes, cloud data centers | Distributed fog nodes between data sources and the cloud | NA
[33] | 4 | Perception layer, fog layer, cloud layer, communication layer | Dynamic, based on vehicles and lanes | NA
[106] | 3 | End users, fog nodes, cloud layer | 100 fog nodes, centralized cloud servers | NA
[47] | 3 | IoT layer, fog layer, cloud layer | Varies with the application; fog servers categorized into overloaded, balanced, and underloaded | NA
[107] | 3 | IoT devices layer, fog layer, cloud layer | 20 fog nodes, 6 cloud nodes | NA
[108] | 3 | IoT layer, fog layer, cloud layer | Dynamic, fog nodes per region, cloud nodes | NA
[93] | 3 | IoT device layer, fog layer, cloud layer | 1–5 fog nodes as gateways | 10 clusters
[51] | 3 | End-user layer, fog layer, cloud layer | 2 to 200 fog nodes, dynamically grouped into clusters | 10 clusters (20 nodes each)
[48] | 3 | IoT layer, fog layer, cloud layer | 200 fog servers, dynamic categorization | NA
[109] | 4 | WGL, FCL, CCL, RAL | Dynamic, based on the number of IoT devices, fog nodes, and cloud servers | NA
[94] | 3 | Infrastructure layer, fog layer, cloud layer | 10 fog nodes, 1 cloud node | NA
[44] | 3 | IoT layer, fog layer, cloud layer | 15 IoT devices, 8 fog nodes, 30–180 VMs, 1 cloud data center | NA
[95] | 3 | IoT layer, fog layer, cloud layer | 5–20 fog nodes | NA
[110] | 3 | IoT layer, fog layer, cloud layer | 20 fog nodes, 30 cloud nodes | NA
[111]2Fog layer, consumer layer5 fog nodes in residential areasNA
[112]3IoT layer, fog layer, cloud layerVaries dynamically based on IoT devices, fog nodes, and cloud serversNA
[113]3IoT tier, fog tier, data-center tier100 fog nodes, 10–20 IoT devices, 3 cloud data centersNA
[49]3IoT layer, fog layer, cloud layer30 fog nodes, multiple cloud nodesNA
[2]3IoT layer, fog layer, cloud layer5–20 fog nodes with VMs, 1 cloud nodeNA
[58]3IoT layer, fog layer, cloud layerMultiple fog nodes (small servers, routers, gateways)NA
[43]3IoT devices, fog nodes, cloud data centers15 fog nodesNA
[114]2Load balancer at the first stage and virtual machines/web servers at the second stageSingle load balancer, multiple virtual machines/web serversNA
[115]3Mobile devices, fog computing layer, cloud layer10 fog nodes, 2 cloud nodesNA
[116]3Edge, fog, cloud1 edge node per consumer, 1 fog node per load point, centralized cloud data centerNA
[117]3IoT devices, fog layer, cloud layerMobile IoT devices, fog gateways, brokers, and processing serversNA
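Across the rows above, the three-layer IoT–Fog–Cloud stack with dynamically formed fog clusters is the dominant pattern. As a minimal illustration, not drawn from any single surveyed system (all class and field names here are assumptions), such a topology can be modeled and periodically re-clustered by load:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    layer: str       # "iot", "fog", or "cloud"
    capacity: float  # abstract processing capacity
    load: float = 0.0

@dataclass
class FogTopology:
    nodes: list = field(default_factory=list)

    def layer_nodes(self, layer):
        return [n for n in self.nodes if n.layer == layer]

    def recluster(self, k):
        """Regroup fog nodes into k clusters, interleaving by current load
        so each cluster mixes lightly and heavily loaded nodes."""
        fog = sorted(self.layer_nodes("fog"), key=lambda n: n.load)
        return [fog[i::k] for i in range(k)]

# A three-layer topology: 4 IoT devices, 4 fog nodes, 1 cloud server.
topo = FogTopology()
topo.nodes += [Node(f"iot{i}", "iot", 1.0) for i in range(4)]
topo.nodes += [Node(f"fog{i}", "fog", 10.0, load=float(i)) for i in range(4)]
topo.nodes.append(Node("cloud0", "cloud", 100.0))

clusters = topo.recluster(2)  # two dynamically formed fog clusters
```

Re-running `recluster` as loads change is the essence of the "dynamic clustering" column: cluster membership follows runtime state rather than a fixed assignment.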
Table A2. The advantages and disadvantages of load-balancing categories.
Fundamental Strategies [57,88]
Advantages: Simple and predictable behavior in static environments; low computational overhead, ideal for systems with minimal complexity.
Disadvantages: Lack of adaptability to dynamic workloads; inefficient resource utilization in heterogeneous systems.
Exact Optimization [19,62]
Advantages: Guarantees optimal solutions for latency and resource utilization; ideal for theoretical modeling and small-scale systems.
Disadvantages: High computational demands; poor scalability, unsuitable for real-time and large-scale fog scenarios.
Heuristic Approaches [20,21,74]
Advantages: Simple and flexible, with moderate computational efficiency; adaptable to system constraints and suitable for dynamic environments.
Disadvantages: Often yield suboptimal solutions; highly dependent on parameter tuning and system-specific configurations; limited exploration capabilities.
Meta-Heuristic Strategies [2,49,95,110,111,112]
Advantages: Excel at exploring complex solution spaces and multi-objective optimization; well suited for dynamic environments, balancing global exploration and local convergence.
Disadvantages: Computationally intensive; require extensive tuning of fitness functions; risk of convergence to local optima without proper hybridization or enhancements.
ML/RL [7,33,41]
Advantages: Offer intelligent automation, adapting to changing workloads in real time; optimize across multiple objectives, suitable for dynamic and complex fog systems.
Disadvantages: Require large training datasets; high implementation complexity; vulnerable to overfitting and poor performance in underrepresented scenarios.
Fuzzy Logic [18,44,45]
Advantages: Handles uncertainty and imprecision effectively; low resource consumption and easily adaptable to diverse scenarios with clear fuzzy rules.
Disadvantages: Dependence on expert-defined rules limits scalability; struggles in highly dynamic environments.
Game Theory [38,46]
Advantages: Ensures fairness and efficiency in distributed and competitive environments; well suited for resource allocation in multi-agent systems.
Disadvantages: Computationally complex to reach equilibrium; dependent on accurate real-time data, limiting scalability in dynamic systems.
Probabilistic/Statistical [64,69]
Advantages: Simple and efficient for tasks involving statistical variability; effective at handling uncertainty within fog environments.
Disadvantages: Oversimplify system dynamics, limiting applicability in deterministic or complex scenarios; limited adaptability can lead to suboptimal performance.
Hybrid Approaches [2,10,20,26,31]
Advantages: Combine the strengths of multiple methods, such as meta-heuristics, ML/RL, or exact methods; effective for multi-objective optimization in dynamic and complex environments.
Disadvantages: High complexity in implementation and parameter tuning; often require specialized hardware for deployment.
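The gap between the fundamental and heuristic rows can be made concrete with two toy dispatchers: a static round-robin rotation that ignores node state, versus a least-loaded heuristic that adapts to it. This is an illustrative sketch under assumed node and cost representations, not the algorithm of any surveyed paper:

```python
from itertools import cycle

def make_round_robin(nodes):
    """Fundamental strategy: fixed rotation, oblivious to node state."""
    order = cycle(nodes)
    def dispatch(task_cost, loads):
        node = next(order)
        loads[node] += task_cost
        return node
    return dispatch

def least_loaded_dispatch(task_cost, loads):
    """Heuristic strategy: route each task to the currently lightest node."""
    node = min(loads, key=loads.get)
    loads[node] += task_cost
    return node

nodes = ["fog0", "fog1", "fog2"]
rr_loads = dict.fromkeys(nodes, 0.0)
ll_loads = dict.fromkeys(nodes, 0.0)
round_robin = make_round_robin(nodes)

# A heterogeneous task stream: the oblivious rotation stacks both
# expensive tasks on one node, while the heuristic spreads them out.
for cost in [5.0, 1.0, 1.0, 5.0, 1.0, 1.0]:
    round_robin(cost, rr_loads)
    least_loaded_dispatch(cost, ll_loads)

print(max(rr_loads.values()))  # 10.0 (fog0 receives both expensive tasks)
print(max(ll_loads.values()))  # 6.0  (heaviest node under the heuristic)
```

On uniform tasks both behave identically; the heuristic's advantage, and its dependence on accurate load information, only appears once the workload is heterogeneous, which mirrors the table's trade-off columns.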
Table A3. Categorization of load-balancing strategies in fog computing.
Category | Description | Representative Papers
Fundamental Strategies | Basic approaches with predictable behavior, suitable for simple static environments. | [57,58,88]
Exact Optimization | Optimization techniques that guarantee theoretically optimal outcomes, ideal for small-scale scenarios. | [19,62,111,115]
Heuristic Approaches | Flexible and efficient algorithms suited for dynamic environments with moderate complexity. | [8,14,20,21,23,29,32,34,40,42,45,52,54,61,63,68,69,70,71,74,76,77,78,89,96,99,100,101]
Meta-Heuristic Strategies | Advanced strategies that explore complex solution spaces, balancing exploration and exploitation. | [2,22,35,47,49,106]
ML/RL | Machine learning and reinforcement learning models for real-time adaptive optimization. | [17,18,33,41,55]
Fuzzy Logic | Techniques for managing uncertainty using fuzzy rules, applicable to imprecise data scenarios. | [8,11,18,22,45]
Game Theory | Game-theory-based methods that ensure fairness and efficiency in distributed systems. | [38,46]
Probabilistic/Statistical | Probabilistic models that handle variability, effective in uncertain fog environments. | [64,69]
Hybrid Approaches | Integrated methods combining the strengths of multiple approaches for multi-objective optimization. | [2,10,20,26,31]
Table A4. Categorization of performance metrics in fog computing.
Primary Category (Percentage of Studies) | Subcategories
1. Latency and Time Metrics (16%): Latency (ms); Task Completion Time (s); Response Time; Communication Delay; Waiting Time in Queue; Turnaround Time (TAT); Service Delay Time; Temporal Delay; Processing Time; Mean Service Time (MST)
2. Resource Utilization Metrics (7%): Resource Utilization (CPU %); Memory Usage; Number of Computing Resources; Network Utilization; Number of Used Devices; Bandwidth Utilization
3. Energy Efficiency Metrics (4%): Energy Consumption (W); Brown Energy Consumption
4. Reliability and Fault Metrics (9%): Fault Tolerance (Yes/No); Failure Rate; Access Level Violations; Deadline Violations; Number of Deadlines Missed
5. Cost Metrics (7%): Communication Cost; Execution Cost; Service Cost; Cost of Data Transmission
6. Load-Balancing and Distribution Metrics (10%): Load-Balancing Level (LBL); Workload Distribution; Queue Length; Task Delivery; Workflows
7. Network and Scalability Metrics (8%): Network Lifetime; Network Bandwidth; Jitter; Congestion; Scalability
8. Prediction and Accuracy Metrics (5%): Prediction Interval Coverage (PIC); Prediction Accuracy (AAPE); Prediction Efficiency; Accuracy; Accuracy of IRS Classification
9. Quality of Service (QoS) Metrics (18%): QoS Satisfaction Rate; Blocking Probability (BP); Success Ratio (SR); Fairness Index; Throughput (tasks/s); Efficiency; Scalability; Prediction Interval Coverage (PIC); Prediction Accuracy (AAPE); Average Processing Time (APT); Task Delivery
10. Security Metrics (2%): Encryption Time; Decryption Time; Handover Served Ratio (HSR); Onboard Unit Served Ratio (OSR)
11. Specialized Metrics (14%): Solution Convergence; Queue Management; Loss Rate; Consensus Time; Social Welfare; Offloading Success; Task Rejection Rate; Node Selection Success Rates
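Several of the metric families above reduce to short aggregate formulas. The sketch below computes mean latency, per-node utilization, and Jain's index (a standard formulation of the "Fairness Index" row); the task-record layout is an assumption for illustration, not a format used by the surveyed studies:

```python
def mean_latency(tasks):
    """Average completion-minus-arrival time over all tasks."""
    return sum(t["finish"] - t["arrival"] for t in tasks) / len(tasks)

def utilization(busy_time, makespan):
    """Fraction of the observation window a node spent processing."""
    return busy_time / makespan

def jain_fairness(loads):
    """Jain's index: (sum x)^2 / (n * sum x^2); 1.0 = perfectly even."""
    n = len(loads)
    return sum(loads) ** 2 / (n * sum(x * x for x in loads))

tasks = [
    {"arrival": 0, "finish": 40},
    {"arrival": 10, "finish": 40},
    {"arrival": 20, "finish": 80},
]
print(mean_latency(tasks))        # (40 + 30 + 60) / 3 = 43.33... ms
print(utilization(60, 80))        # node busy 60 of 80 time units -> 0.75
print(jain_fairness([4, 4, 4]))   # even load across 3 nodes -> 1.0
print(jain_fairness([12, 0, 0]))  # all load on one node -> 1/3
```

Jain's index is bounded between 1/n (one node carries everything) and 1 (perfect balance), which makes it a convenient scale-free companion to the raw load-distribution metrics in category 6.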
Table A5. Distribution of workload types in reviewed fog computing studies.
Workload Type | Paper IDs | Description | Frequency | Percentage
Dynamic | [7,8,9,10,15,20,21,22,31,35,74,118] | Workload is dynamically generated based on real-time conditions, reflecting the variability and unpredictability of IoT systems. | 108 | 95.6%
Static | [23,56] | Fixed workloads without runtime changes, often used for benchmarking or theoretical analysis. | 2 | 1.8%
Dynamic and Data-Intensive | [13] | Combines a dynamic nature with high data volume or computational intensity. | 1 | 0.9%
Static/Dynamic | [57,58] | Incorporates both static and dynamic workloads to evaluate hybrid or flexible algorithms. | 2 | 1.8%
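The static/dynamic split in Table A5 shows up directly in how experimental workloads are generated: static traces fix task sizes and arrival times up front, while dynamic traces draw them from runtime distributions. A minimal sketch, assuming exponential inter-arrivals and sizes as one common modeling choice (not a generator prescribed by any surveyed study):

```python
import random

def static_workload(n_tasks, size=10.0, interval=1.0):
    """Fixed, repeatable trace: identical sizes, evenly spaced arrivals."""
    return [(i * interval, size) for i in range(n_tasks)]

def dynamic_workload(n_tasks, rate=1.0, mean_size=10.0, seed=42):
    """Randomized trace: Poisson-style arrivals, exponential task sizes."""
    rng = random.Random(seed)
    trace, t = [], 0.0
    for _ in range(n_tasks):
        t += rng.expovariate(rate)  # random inter-arrival gap
        trace.append((t, rng.expovariate(1.0 / mean_size)))
    return trace

static = static_workload(5)    # (arrival time, task size) pairs
dynamic = dynamic_workload(5)  # varies per seed, reproducible per seed
```

Seeding the dynamic generator keeps randomized experiments repeatable, which is what lets the "Dynamic" studies in the table compare algorithms on identical traces.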
Table A6. Mapping of evaluation tools for studies in fog computing.
Study | Tool Name
[11,23,24,69,99] | Custom Simulator
[7] | COSCO Simulator
[15,54] | Discrete-event Simulator (YAFS)
[40,71] | Python (SimPy)
[32] | Python (SciPy)
[83,91,102] | Python (Custom)
[65] | Java (Custom)
[57,66] | C (Custom)
[8] | IoTSim-Osmosis
[22,31,35] | MATLAB
[79,95,111,114,117] | iFogSim
[68,85] | Kubernetes and Istio
[37,58] | OMNeT++
[74] | PFogSim
[27] | EdgeCloudSim
[20,81] | SUMO Simulator
[90,115] | Mininet-WiFi
[43,49,58,113] | CloudSim
[14] | Truffle Suite and Ganache
[72,116] | Docker
[28,84] | Real-World Testbeds
[82] | AnyLogic
[2,87] | FogSim
[30] | PySpark
[39] | CloudAnalyst
[93] | Apache Karaf
Table A7. Classification and definition of tools used for evaluating load-balancing algorithms in fog computing.
Tool Name | Classification | Definition
iFogSim | Widely Recognized Simulator | A simulator designed for fog and IoT environments, providing capabilities for modeling latency, energy consumption, and network parameters.
CloudSim | Widely Recognized Simulator | A simulation platform for modeling cloud computing environments, extended for fog computing.
PFogSim | Widely Recognized Simulator | An extension of CloudSim designed for large-scale, heterogeneous fog environments.
FogSim | Widely Recognized Simulator | A modular simulator tailored for fog and cloud computing environments.
YAFS (Yet Another Fog Simulator) | Widely Recognized Simulator | A simulator focusing on fog-specific features and performance metrics.
OMNeT++ | Widely Recognized Simulator | A discrete-event network simulation framework for task distribution and communication modeling.
MATLAB | Widely Recognized Simulator | A computational environment supporting algorithm implementation and simulation, often used for fuzzy logic and deep learning.
Mininet/Mininet-WiFi | Network and IoT-Specific Tool | A network emulator for testing SDN and IoT architectures.
SUMO | Network and IoT-Specific Tool | A traffic simulation tool used for vehicular mobility and task-offloading evaluations.
IoTSim-Osmosis | Network and IoT-Specific Tool | A Java-based simulator built on the CloudSim framework for IoT and fog computing.
Truffle Suite | Blockchain and SDN Tool | A development framework for blockchain applications, used for writing and testing smart contracts.
Ganache | Blockchain and SDN Tool | A personal Ethereum blockchain simulator for testing blockchain-based implementations.
Docker | Custom Simulator Framework | A containerization platform used for deploying custom simulation environments.
SimPy | Python-Based Framework | A process-based discrete-event simulation framework in Python.
SciPy | Python-Based Framework | A Python library for optimization and simulation in custom environments.
Custom Simulators | Custom Simulator Framework | Tailored simulation environments designed for specific research scenarios.
Real-World Testbeds | Real-World Testbed | Physical setups using devices like Raspberry Pi for practical evaluations.
AnyLogic | Widely Recognized Simulator | A multi-method simulation modeling tool for evaluating complex systems.
EdgeCloudSim | Widely Recognized Simulator | A simulation environment for edge–fog–cloud computing scenarios.
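Many of the "Custom Simulator" entries above are small discrete-event engines of the same basic shape as SimPy or YAFS: a priority queue of timestamped events processed in order. A minimal, generic sketch of that core follows; it is not the API of any tool listed in the table:

```python
import heapq

class EventLoop:
    """Tiny discrete-event core: events fire in timestamp order."""
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker so equal-time events stay FIFO

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

# Toy scenario: a task arrives at a fog node and finishes 3 time units later.
log = []
sim = EventLoop()
def finish():
    log.append(("finish", sim.now))
def arrive():
    log.append(("arrive", sim.now))
    sim.schedule(3.0, finish)  # service time of the task
sim.schedule(1.0, arrive)
sim.run()
# log == [("arrive", 1.0), ("finish", 4.0)]
```

Everything a fog load-balancing experiment measures (queue lengths, latencies, utilization) falls out of scheduling arrival, dispatch, and completion events on such a loop; the recognized simulators add network, energy, and topology models on top.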
Table A8. Emerging trends and technologies in fog computing research.
Emerging Trend | Key Technologies/Methodologies | Relevant Studies
AI and Machine Learning | Predictive load balancing, intelligent resource management | [7,33,41]
Edge Computing Integration | Enhancing processing capabilities for IoT | [10,78]
Reinforcement Learning (RL) | Privacy-aware load balancing, adaptive resource allocation | [15,61,66,75]
Intelligent Scheduling | Real-time data processing, decision-making adaptations | [28,42,85,109]
Multi-Layer Scheduling Methods | Complex scheduling for large-scale applications | [13,52,74]
Clustering Integration | Improving load balancing in vehicular networks | [24]
AI-Driven Task Allocation | Real-time prioritization for IoT systems | [11,32,92]
Software-Defined Networking (SDN) | Flexibility in network management | [20,21,77]
Serverless Computing | Function as a Service (FaaS) for IoT applications | [12]
Adaptive Offloading Techniques | Machine learning for resource management in vehicular fog | [25]
Fault Tolerance Mechanisms | Hybrid approaches incorporating RL | [54]
Scalability and Resilience | Managing large-scale systems and dynamic environments | [52,66,101]
Energy Management | Balancing energy efficiency with computational demands | [42,93,95]
Security and Privacy Concerns | Addressing challenges in IoT data | [19,64,76]
Interoperability Between Fog and Cloud | Seamless task distribution across resources | [37,66,106]
Hybrid Optimization Approaches | Combining meta-heuristic algorithms for better performance | [2,44,104]
Game-Theoretic Frameworks | Workload distribution strategies | [38]
Dynamic Resource Allocation | Adapting systems to real-time changes in workload | [34,80]
Blockchain Integration | Secure data handling and load balancing | [14,33,76]
Integration of Renewable Energy Sources | Energy-efficient fog devices | [70,108]
Community-Based Placement | Distributing workloads effectively across nodes | [94]
Hybrid Algorithms | Combining multiple methodologies for enhanced performance | [43,50]
Dynamic Offloading | Real-time task-offloading strategies | [24,80,89,119]
Fuzzy Logic Integration | Intelligent scheduling strategies | [117]
Nature-Inspired Algorithms | Resource optimization strategies | [105,114]

References

  1. Kashani, M.H.; Mahdipour, E. Load balancing algorithms in fog computing. IEEE Trans. Serv. Comput. 2022, 16, 1505–1521. [Google Scholar] [CrossRef]
  2. Kaur, M.; Aron, R. A systematic study of load balancing approaches in the fog computing environment. J. Supercomput. 2021, 77, 9202–9247. [Google Scholar] [CrossRef]
  3. Sadashiv, N. Load balancing in fog computing: A detailed survey. Int. J. Comput. Digit. Syst. 2023, 13, 729–750. [Google Scholar]
  4. Shakeel, H.; Alam, M. Load balancing approaches in cloud and fog computing environments: A framework, classification, and systematic review. Int. J. Cloud Appl. Comput. (IJCAC) 2022, 12, 1–24. [Google Scholar] [CrossRef]
  5. Ebneyousef, S.; Shirmarz, A. A taxonomy of load balancing algorithms and approaches in fog computing: A survey. Clust. Comput. 2023, 26, 3187–3208. [Google Scholar] [CrossRef]
  6. Keele, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Software Engineering Group, School of Computer Science and Mathematics, Keele University: Keele, UK; Department of Computer Science, University of Durham: Durham, UK, 2007; Volume 5. [Google Scholar] [CrossRef]
  7. Singh, J.; Singh, P.; Amhoud, E.M.; Hedabou, M. Energy-efficient and secure load balancing technique for SDN-enabled fog computing. Sustainability 2022, 14, 12951. [Google Scholar] [CrossRef]
  8. Javanmardi, S.; Shojafar, M.; Mohammadi, R.; Persico, V.; Pescapè, A. S-FoS: A secure workflow scheduling approach for performance optimization in SDN-based IoT-Fog networks. J. Inf. Secur. Appl. 2023, 72, 103404. [Google Scholar] [CrossRef]
  9. Gazori, P.; Rahbari, D.; Nickray, M. Saving time and cost on the scheduling of fog-based IoT applications using deep reinforcement learning approach. Future Gener. Comput. Syst. 2020, 110, 1098–1115. [Google Scholar] [CrossRef]
  10. Liu, W.; Li, C.; Zheng, A.; Zheng, Z.; Zhang, Z.; Xiao, Y. Fog Computing Resource-Scheduling Strategy in IoT Based on Artificial Bee Colony Algorithm. Electronics 2023, 12, 1511. [Google Scholar] [CrossRef]
  11. Chaudhry, R.; Rishiwal, V. An Efficient Task Allocation with Fuzzy Reptile Search Algorithm for Disaster Management in urban and rural area. Sustain. Comput. Inform. Syst. 2023, 39, 100893. [Google Scholar] [CrossRef]
  12. Khansari, M.E.; Sharifian, S. A scalable modified deep reinforcement learning algorithm for serverless IoT microservice composition infrastructure in fog layer. Future Gener. Comput. Syst. 2024, 153, 206–221. [Google Scholar] [CrossRef]
  13. Kazemi, S.M.; Ghanbari, S.; Kazemi, M.; Othman, M. Optimum scheduling in fog computing using the Divisible Load Theory (DLT) with linear and nonlinear loads. Comput. Netw. 2023, 220, 109483. [Google Scholar] [CrossRef]
  14. Forestiero, A.; Gentile, A.F.; Macri, D. A blockchain based approach for Fog infrastructure management leveraging on Non-Fungible Tokens. In Proceedings of the 2022 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Falerna, Italy, 12–15 September 2022; pp. 1–7. [Google Scholar]
  15. Ebrahim, M.; Hafid, A. Privacy-aware load balancing in fog networks: A reinforcement learning approach. Comput. Netw. 2023, 237, 110095. [Google Scholar] [CrossRef]
  16. Sarker, S.; Arafat, M.T.; Lameesa, A.; Afrin, M.; Mahmud, R.; Razzaque, M.A.; Iqbal, T. FOLD: Fog-dew infrastructure-aided optimal workload distribution for cloud robotic operations. Internet Things 2024, 26, 101185. [Google Scholar] [CrossRef]
  17. Menouer, T.; Cérin, C.; Darmon, P. KOptim: Kubernetes Optimization Framework. In Proceedings of the 2024 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), San Francisco, CA, USA, 27–31 May 2024; pp. 900–908. [Google Scholar]
  18. Mattia, G.P.; Pietrabissa, A.; Beraldi, R. A Load Balancing Algorithm for Equalising Latency Across Fog or Edge Computing Nodes. IEEE Trans. Serv. Comput. 2023, 16, 3129–3140. [Google Scholar] [CrossRef]
  19. Ameena, B.; Ramasamy, L. Drawer Cosine optimization enabled task offloading in fog computing. Expert Syst. Appl. 2025, 259, 125212. [Google Scholar] [CrossRef]
  20. Alotaibi, J.; Alazzawi, L. SaFIoV: A Secure and Fast Communication in Fog-based Internet-of-Vehicles using SDN and Blockchain. In Proceedings of the 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), Lansing, MI, USA, 9–11 August 2021; pp. 334–339. [Google Scholar]
  21. Xia, B.; Kong, F.; Zhou, J.; Tang, X.; Gong, H. A Delay-Tolerant Data Transmission Scheme for Internet of Vehicles Based on Software Defined Cloud-Fog Networks. IEEE Access 2020, 8, 65911–65922. [Google Scholar] [CrossRef]
  22. Singh, S.P. Effective load balancing strategy using fuzzy golden eagle optimization in fog computing environment. Sustain. Comput. Inform. Syst. 2022, 35, 100766. [Google Scholar] [CrossRef]
  23. Ibrahim, A.H.; Fayed, Z.T.; Faheem, H.M. Fog-Based CDN Framework for Minimizing Latency of Web Services Using Fog-Based HTTP Browser. Future Internet 2021, 13, 320. [Google Scholar] [CrossRef]
  24. Hameed, A.R.; Islam, S.u.; Ahmad, I.; Munir, K. Energy- and performance-aware load-balancing in vehicular fog computing. Sustain. Comput. Inform. Syst. 2021, 30, 100454. [Google Scholar] [CrossRef]
  25. Sethi, V.; Pal, S. FedDOVe: A Federated Deep Q-learning-based Offloading for Vehicular fog computing. Future Gener. Comput. Syst. 2023, 141, 96–105. [Google Scholar] [CrossRef]
  26. Liu, Z.; Dai, P.; Xing, H.; Yu, Z.; Zhang, W. A Distributed Algorithm for Task Offloading in Vehicular Networks With Hybrid Fog/Cloud Computing. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 4388–4401. [Google Scholar] [CrossRef]
  27. Veloso, A.F.D.S.; De Moura, M.C.L.; Mendes, D.L.D.S.; Junior, J.V.R.; Rabelo, R.A.L.; Rodrigues, J.J.P.C. Towards Sustainability using an Edge-Fog-Cloud Architecture for Demand-Side Management. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Melbourne, Australia, 17–20 October 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2021; pp. 1731–1736. [Google Scholar] [CrossRef]
  28. Nazeri, M.; Soltanaghaei, M.; Khorsand, R. A predictive energy-aware scheduling strategy for scientific workflows in fog computing. Expert Syst. Appl. 2024, 247, 123192. [Google Scholar] [CrossRef]
  29. Marwein, P.S.; Sur, S.N.; Kandar, D. Efficient load distribution in heterogeneous vehicular networks using hierarchical controllers. Comput. Netw. 2024, 254, 110805. [Google Scholar] [CrossRef]
  30. Wang, J.; Huang, D. Visual Servo Image Real-time Processing System Based on Fog Computing. Hum.-Centric Comput. Inf. Sci. 2022, 13, 1–14. [Google Scholar] [CrossRef]
  31. Liu, W.; Huang, G.; Zheng, A.; Liu, J. Research on the optimization of IIoT data processing latency. Comput. Commun. 2020, 151, 290–298. [Google Scholar] [CrossRef]
  32. Badidi, E.; Ragmani, A. An architecture for QoS-aware fog service provisioning. Procedia Comput. Sci. 2020, 170, 411–418. [Google Scholar] [CrossRef]
  33. Das, D.; Sengupta, S.; Satapathy, S.M.; Saini, D. HOGWO: A fog inspired optimized load balancing approach using hybridized grey wolf algorithm. Cluster Comput. 2024, 27, 13273–13294. [Google Scholar] [CrossRef]
  34. Rehman, A.U.; Ahmad, Z.; Jehangiri, A.I.; Ala’Anzy, M.A.; Othman, M.; Umar, A.I.; Ahmad, J. Dynamic Energy Efficient Resource Allocation Strategy for Load Balancing in Fog Environment. IEEE Access 2020, 8, 199829–199839. [Google Scholar] [CrossRef]
  35. Wu, B.; Lv, X.; Deyah Shamsi, W.; Gholami Dizicheh, E. Optimal deploying IoT services on the fog computing: A metaheuristic-based multi-objective approach. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 10010–10027. [Google Scholar] [CrossRef]
  36. Wu, Y.; Wang, Y.; Wei, Y.; Leng, S. Intelligent deployment of dedicated servers: Rebalancing the computing resource in IoT. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), Seoul, Republic of Korea, 6–9 April 2020; pp. 1–6. [Google Scholar]
  37. Hossain, M.T.; De Grande, R.E. Cloudlet Dwell Time Model and Resource Availability for Vehicular Fog Computing. In Proceedings of the 2021 IEEE/ACM 25th International Symposium on Distributed Simulation and Real Time Applications (DSRT), Valencia, Spain, 27–29 September 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2021. [Google Scholar] [CrossRef]
  38. Cheng, Y.; Vijayaraj, A.; Sree Pokkuluri, K.; Salehnia, T.; Montazerolghaem, A.; Rateb, R. Vehicular Fog Resource Allocation Approach for VANETs Based on Deep Adaptive Reinforcement Learning Combined With Heuristic Information. IEEE Access 2024, 12, 139056–139075. [Google Scholar] [CrossRef]
  39. Fayos-Jordan, R.; Felici-Castell, S.; Segura-Garcia, J.; Lopez-Ballester, J.; Cobos, M. Performance comparison of container orchestration platforms with low cost devices in the fog, assisting Internet of Things applications. Procedia Comput. Sci. 2020, 169, 102788. [Google Scholar] [CrossRef]
  40. Huber, S.; Pfandzelter, T.; Bermbach, D. Identifying Nearest Fog Nodes With Network Coordinate Systems. In Proceedings of the 2023 IEEE International Conference on Cloud Engineering (IC2E), Boston, MA, USA, 25–29 September 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2023; pp. 222–223. [Google Scholar] [CrossRef]
  41. Kashyap, V.; Ahuja, R.; Kumar, A. A hybrid approach for fault-tolerance aware load balancing in fog computing. Clust. Comput. 2023, 27, 5217–5233. [Google Scholar] [CrossRef]
  42. Mattia, G.P.; Beraldi, R. On real-time scheduling in Fog computing: A Reinforcement Learning algorithm with application to smart cities. In Proceedings of the 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events, PerCom Workshops 2022, Pisa, Italy, 21–25 May 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2022; pp. 187–193. [Google Scholar] [CrossRef]
  43. Aldağ, M.; Kırsal, Y.; Ülker, S. An analytical modelling and QoS evaluation of fault-tolerant load balancer and web servers in fog computing. J. Supercomput. 2022, 78, 12136–12158. [Google Scholar] [CrossRef]
  44. Bhatia, M.; Sood, S.K.; Kaur, S. Quantumized approach of load scheduling in fog computing environment for IoT applications. Computing 2020, 102, 1097–1115. [Google Scholar] [CrossRef]
  45. Bukhari, A.A.; Hussain, F.K. Fuzzy logic trust-based fog node selection. Internet Things 2024, 27, 101293. [Google Scholar] [CrossRef]
  46. Hwang, R.H.; Lai, Y.-C.; Lin, Y.D. Queue-Length-Based Offloading for Delay Sensitive Applications in Federated Cloud-Edge-Fog Systems. In Proceedings of the IEEE Consumer Communications and Networking Conference, CCNC, Las Vegas, NV, USA, 6–9 January 2024; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2024; pp. 406–411. [Google Scholar] [CrossRef]
  47. Najafizadeh, A.; Salajegheh, A.; Rahmani, A.M.; Sahafi, A. Multi-objective Task Scheduling in cloud-fog computing using goal programming approach. Cluster Comput. 2021, 25, 141–165. [Google Scholar] [CrossRef]
  48. Shamsa, Z.; Rezaee, A.; Adabi, S.; Rahmani, A.M. A decentralized prediction-based workflow load balancing architecture for cloud/fog/IoT environments. Computing 2023, 106, 201–239. [Google Scholar] [CrossRef]
  49. Yin, C.; Fang, Q.; Li, H.; Peng, Y.; Xu, X.; Tang, D. An optimized resource scheduling algorithm based on GA and ACO algorithm in fog computing. J. Supercomput. 2024, 80, 4248–4285. [Google Scholar] [CrossRef]
  50. Premkumar, N.; Santhosh, R. Pelican optimization algorithm with blockchain for secure load balancing in fog computing. Multimed. Tools Appl. 2023, 83, 53417–53439. [Google Scholar] [CrossRef]
  51. Talaat, F.M.; Ali, H.A.; Saraya, M.S.; Saleh, A.I. Effective scheduling algorithm for load balancing in fog environment using CNN and MPSO. Knowl. Inf. Syst. 2022, 64, 773–797. [Google Scholar] [CrossRef]
  52. Azizi, S.; Shojafar, M.; Farzin, P.; Dogani, J. DCSP: A delay and cost-aware service placement and load distribution algorithm for IoT-based fog networks. Comput. Commun. 2024, 215, 9–20. [Google Scholar] [CrossRef]
  53. Baniata, H.; Anaqreh, A.; Kertesz, A. PF-BTS: A Privacy-Aware Fog-enhanced Blockchain-assisted task scheduling. Inf. Process. Manag. 2021, 58, 102393. [Google Scholar] [CrossRef]
  54. Ebrahim, M.; Hafid, A. Resilience and load balancing in Fog networks: A Multi-Criteria Decision Analysis approach. Microprocess. Microsyst. 2023, 101, 104893. [Google Scholar] [CrossRef]
  55. Abdulazeez, D.H.; Askar, S.K. A Novel Offloading Mechanism Leveraging Fuzzy Logic and Deep Reinforcement Learning to Improve IoT Application Performance in a Three-Layer Architecture Within the Fog-Cloud Environment. IEEE Access 2024, 12, 39936–39952. [Google Scholar] [CrossRef]
  56. Al Maruf, M.; Singh, A.; Azim, A.; Auluck, N. Faster Fog Computing Based Over-the-Air Vehicular Updates: A Transfer Learning Approach. IEEE Trans. Serv. Comput. 2022, 15, 3245–3259. [Google Scholar] [CrossRef]
  57. Stavrinides, G.L.; Karatza, H.D. Multicriteria scheduling of linear workflows with dynamically varying structure on distributed platforms. Simul. Model. Pract. Theory 2021, 112, 102369. [Google Scholar] [CrossRef]
  58. Johri, P.; Balu, V.; Jayaprakash, B.; Jain, A.; Thacker, C.; Kumari, A. Quality of service-based machine learning in fog computing networks for e-healthcare services with data storage system. Soft Comput. 2023. [Google Scholar] [CrossRef]
  59. Javaheri, D.; Gorgin, S.; Lee, J.-A.; Masdari, M. An improved discrete Harris hawk optimization algorithm for efficient workflow scheduling in multi-fog computing. Sustain. Comput. Inform. Syst. 2022, 36, 100787. [Google Scholar] [CrossRef]
  60. Kaur, A.; Auluck, N. Scheduling algorithms for truly heterogeneous hierarchical fog networks. Softw. Pract. Exp. 2022, 52, 2411–2438. [Google Scholar] [CrossRef]
  61. Peralta, G.; Garrido, P.; Bilbao, J.; Agüero, R.; Crespo, P.M. Fog to cloud and network coded based architecture: Minimizing data download time for smart mobility. Simul. Model. Pract. Theory 2020, 101. [Google Scholar] [CrossRef]
  62. Qayyum, T.; Trabelsi, Z.; Malik, A.W.; Hayawi, K. Multi-Level Resource Sharing Framework Using Collaborative Fog Environment for Smart Cities. IEEE Access 2021, 9, 21859–21869. [Google Scholar] [CrossRef]
  63. Singh, S.; Mishra, A.K.; Arjaria, S.K.; Bhatt, C.; Pandey, D.S.; Yadav, R.K. Improved deep network-based load predictor and optimal load balancing in cloud-fog services. Concurr. Comput. 2024, 36, e8275. [Google Scholar] [CrossRef]
  64. Pourkiani, M.; Abedi, M. Machine learning based task distribution in heterogeneous fog-cloud environments. In Proceedings of the 2020 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Hvar, Croatia, 17–19 September 2020. [Google Scholar]
  65. Dehury, C.K.; Veeravalli, B.; Srirama, S.N. HeRAFC: Heuristic resource allocation and optimization in MultiFog-cloud environment. J. Parallel Distrib. Comput. 2024, 183, 104760. [Google Scholar] [CrossRef]
  66. Singh, S.; Sham, E.E.; Vidyarthi, D.P. Optimizing workload distribution in Fog-Cloud ecosystem: A JAYA based meta-heuristic for energy-efficient applications. Appl. Soft Comput. 2024, 154, 111391. [Google Scholar] [CrossRef]
  67. Gowri, V.; Baranidharan, B. An energy efficient and secure model using chaotic levy flight deep Q-learning in healthcare system. Sustain. Comput. Inform. Syst. 2023, 39, 100894. [Google Scholar] [CrossRef]
  68. Pallewatta, S.; Kostakos, V.; Buyya, R. MicroFog: A framework for scalable placement of microservices-based IoT applications in federated Fog environments. J. Syst. Softw. 2024, 209, 111910. [Google Scholar] [CrossRef]
  69. Zeng, D.; Gu, L.; Yao, H. Towards energy efficient service composition in green energy powered Cyber–Physical Fog Systems. Future Gener. Comput. Syst. 2020, 105, 757–765. [Google Scholar] [CrossRef]
  70. Beraldi, R.; Canali, C.; Lancellotti, R.; Mattia, G.P. Distributed load balancing for heterogeneous fog computing infrastructures in smart cities. Pervasive Mob. Comput. 2020, 67, 101221. [Google Scholar] [CrossRef]
  71. Khan, S.; Ali Shah, I.; Tairan, N.; Shah, H.; Faisal Nadeem, M. Optimal Resource Allocation in Fog Computing for Healthcare Applications. Comput. Mater. Contin. 2022, 71, 6147–6163. [Google Scholar] [CrossRef]
  72. Lone, K.; Sofi, S.A. e-TOALB: An efficient task offloading in IoT-fog networks. Concurr. Comput. 2024, 36, e7951. [Google Scholar] [CrossRef]
  73. Abbas, N.; Zhang, Y.; Taherkordi, A.; Skeie, T. Mobile edge computing: A survey. IEEE Internet Things J. 2017, 5, 450–465. [Google Scholar] [CrossRef]
  74. Shaik, S.; Baskiyar, S. Distributed service placement in hierarchical fog environments. Sustain. Comput. Inform. Syst. 2022, 34, 100744. [Google Scholar] [CrossRef]
  75. Boudieb, W.; Malki, A.; Malki, M.; Badawy, A.; Barhamgi, M. Microservice instances selection and load balancing in fog computing using deep reinforcement learning approach. Future Gener. Comput. Syst. 2024, 156, 77–94. [Google Scholar] [CrossRef]
  76. Fugkeaw, S.; Prasad Gupta, R.; Worapaluk, K. Secure and Fine-Grained Access Control With Optimized Revocation for Outsourced IoT EHRs With Adaptive Load-Sharing in Fog-Assisted Cloud Environment. IEEE Access 2024, 12, 82753–82768. [Google Scholar] [CrossRef]
  77. Stypsanelli, I.; Brun, O.; Prabhu, B.J. Performance Evaluation of Some Adaptive Task Allocation Algorithms for Fog Networks. In Proceedings of the 2021 IEEE 5th International Conference on Fog and Edge Computing (ICFEC2021), Melbourne, Australia, 10–13 May 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2021; pp. 84–88. [Google Scholar] [CrossRef]
  78. Liu, H.; Long, S.; Li, Z.; Fu, Y.; Zuo, Y.; Zhang, X. Revenue Maximizing Online Service Function Chain Deployment in Multi-Tier Computing Network. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 781–796. [Google Scholar] [CrossRef]
  79. Bala, M.I.; Chishti, M.A. Offloading in Cloud and Fog Hybrid Infrastructure Using iFogSim. In Proceedings of the 10th International Conference on Cloud Computing, Data Science & Engineering, Noida, India, 29–31 January 2020; pp. 421–426. [Google Scholar]
  80. Tran-Dang, H.; Kim, D.-S. Dynamic Task Offloading Approach for Task Delay Reduction in the IoT-enabled Fog Computing Systems. In Proceedings of the 2022 IEEE 20th International Conference on Industrial Informatics (INDIN), Perth, Australia, 25–28 July 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2022; pp. 61–66. [Google Scholar] [CrossRef]
  81. Hayawi, K.; Anwar, Z.; Malik, A.W.; Trabelsi, Z. Airborne Computing: A Toolkit for UAV-Assisted Federated Computing for Sustainable Smart Cities. IEEE Internet Things J. 2023, 10, 18941–18950. [Google Scholar] [CrossRef]
  82. Huang, X.; Cui, Y.; Chen, Q.; Zhang, J. Joint Task Offloading and QoS-Aware Resource Allocation in Fog-Enabled Internet-of-Things Networks. IEEE Internet Things J. 2020, 7, 7194–7206. [Google Scholar] [CrossRef]
  83. Zhang, T.; Yue, D.; Yu, L.; Dou, C.; Xie, X. Joint Energy and Workload Scheduling for Fog-Assisted Multimicrogrid Systems: A Deep Reinforcement Learning Approach. IEEE Syst. J. 2023, 17, 164–175. [Google Scholar] [CrossRef]
84. Firouzi, F.; Farahani, B.; Panahi, E.; Barzegari, M. Task Offloading for Edge-Fog-Cloud Interplay in the Healthcare Internet of Things (IoT). In Proceedings of the 2021 IEEE International Conference on Omni-Layer Intelligent Systems (COINS), Barcelona, Spain, 23–26 August 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2021; pp. 1–8. [Google Scholar] [CrossRef]
  85. Keshri, R.; Vidyarthi, D.P. An ML-based task clustering and placement using hybrid Jaya-gray wolf optimization in fog-cloud ecosystem. Concurr. Comput. 2024, 36, e8109. [Google Scholar] [CrossRef]
  86. Nethaji, S.V.; Chidambaram, M.; Forestiero, A. Differential Grey Wolf Load-Balanced Stochastic Bellman Deep Reinforced Resource Allocation in Fog Environment. Appl. Comput. Intell. Soft Comput. 2022, 2022, 1–13. [Google Scholar] [CrossRef]
  87. Singh, S.P.; Kumar, R.; Sharma, A.; Nayyar, A. Leveraging energy-efficient load balancing algorithms in fog computing. Concurr. Comput. 2022, 34, e5913. [Google Scholar] [CrossRef]
  88. Khan, S.; Shah, I.A.; Nadeem, M.F.; Jan, S.; Whangbo, T.; Ahmad, S. Optimal Resource Allocation and Task Scheduling in Fog Computing for Internet of Medical Things Applications. Hum.-Centric Comput. Inf. Sci. 2023, 13. [Google Scholar] [CrossRef]
  89. Bala, M.I.; Chishti, M.A. Optimizing the Computational Offloading Decision in Cloud-Fog Environment. In Proceedings of the 2020 International Conference on Innovative Trends in Information Technology (ICITIIT), Kottayam, India, 13–14 February 2020; pp. 1–5. [Google Scholar]
  90. Tajalli, S.Z.; Kavousi-Fard, A.; Mardaneh, M.; Khosravi, A.; Razavi-Far, R. Uncertainty-Aware Management of Smart Grids Using Cloud-Based LSTM-Prediction Interval. IEEE Trans. Cybern. 2022, 52, 9964–9977. [Google Scholar] [CrossRef]
  91. Casadei, R.; Fortino, G.; Pianini, D.; Placuzzi, A.; Savaglio, C.; Viroli, M. A Methodology and Simulation-Based Toolchain for Estimating Deployment Performance of Smart Collective Services at the Edge. IEEE Internet Things J. 2022, 9, 20136–20148. [Google Scholar] [CrossRef]
  92. Yakubu, I.Z.; Murali, M. An efficient meta-heuristic resource allocation with load balancing in IoT-Fog-cloud computing environment. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 2981–2992. [Google Scholar] [CrossRef]
  93. Kaur, M.; Aron, R. FOCALB: Fog Computing Architecture of Load Balancing for Scientific Workflow Applications. J. Grid Comput. 2021, 19, 40. [Google Scholar] [CrossRef]
  94. Singh, S.P.; Kumar, R.; Sharma, A.; Abawajy, J.H.; Kaur, R. Energy efficient load balancing hybrid priority assigned laxity algorithm in fog computing. Cluster Comput. 2022, 25, 3325–3342. [Google Scholar] [CrossRef]
  95. Dubey, K.; Sharma, S.C.; Kumar, M. A Secure IoT Applications Allocation Framework for Integrated Fog-Cloud Environment. J. Grid Comput. 2022, 20, 5. [Google Scholar] [CrossRef]
  96. Ibrahim AlShathri, S.; Hassan, D.S.M.; Allaoua Chelloug, S. Latency-Aware Dynamic Second Offloading Service in SDN-Based Fog Architecture. Comput. Mater. Contin. 2023, 75, 1501–1526. [Google Scholar] [CrossRef]
  97. Hassan, S.R.; Rehman, A.U.; Alsharabi, N.; Arain, S.; Quddus, A.; Hamam, H. Design of load-aware resource allocation for heterogeneous fog computing systems. PeerJ Comput. Sci. 2024, 10, e1986. [Google Scholar] [CrossRef] [PubMed]
98. Singh, A.; Auluck, N. Load balancing aware scheduling algorithms for fog networks. Softw. Pract. Exp. 2020, 50, 2012–2030. [Google Scholar] [CrossRef]
  99. Tran-Dang, H.; Kim, D.-S. Dynamic collaborative task offloading for delay minimization in the heterogeneous fog computing systems. J. Commun. Netw. 2023, 25, 244–252. [Google Scholar] [CrossRef]
  100. Huang, X.; Fan, W.; Chen, Q.; Zhang, J. Energy-Efficient Resource Allocation in Fog Computing Networks With the Candidate Mechanism. IEEE Internet Things J. 2020, 7, 8502–8512. [Google Scholar] [CrossRef]
  101. Yi, C.; Cai, J.; Zhu, K.; Wang, R. A Queueing Game Based Management Framework for Fog Computing With Strategic Computing Speed Control. IEEE Trans. Mob. Comput. 2022, 21, 1537–1551. [Google Scholar] [CrossRef]
  102. Potu, N.; Bhukya, S.; Jatoth, C.; Parvataneni, P. Quality-aware energy efficient scheduling model for fog computing comprised IoT network. Comput. Electr. Eng. 2022, 97, 107603. [Google Scholar] [CrossRef]
  103. Sabireen, H.; Venkataraman, N. A Hybrid and Light Weight Metaheuristic Approach with Clustering for Multi-Objective Resource Scheduling and Application Placement in Fog Environment. Expert Syst. Appl. 2023, 223, 119895. [Google Scholar] [CrossRef]
  104. Ramezani Shahidani, F.; Ghasemi, A.; Toroghi Haghighat, A.; Keshavarzi, A. Task scheduling in edge-fog-cloud architecture: A multi-objective load balancing approach using reinforcement learning algorithm. Computing 2023, 105, 1337–1359. [Google Scholar] [CrossRef]
  105. Sahil; Sood, S.K.; Chang, V. Fog-Cloud-IoT centric collaborative framework for machine learning-based situation-aware traffic management in urban spaces. Computing 2022, 106, 1193–1225. [Google Scholar] [CrossRef]
  106. Talaat, F.M.; Saraya, M.S.; Saleh, A.I.; Ali, H.A.; Ali, S.H. A load balancing and optimization strategy (LBOS) using reinforcement learning in fog computing environment. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 4951–4966. [Google Scholar] [CrossRef]
  107. Abbasi, M.; Yaghoobikia, M.; Rafiee, M.; Khosravi, M.R.; Menon, V.G. Optimal Distribution of Workloads in Cloud-Fog Architecture in Intelligent Vehicular Networks. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4706–4715. [Google Scholar] [CrossRef]
  108. Mota, E.; Barbosa, J.; Figueiredo, G.B.; Peixoto, M.; Prazeres, C. A self-configuration framework for balancing services in the fog of things. Internet Things Cyber-Phys. Syst. 2024, 4, 318–332. [Google Scholar] [CrossRef]
  109. Yang, J. Low-latency cloud-fog network architecture and its load balancing strategy for medical big data. J. Ambient. Intell. Humaniz. Comput. 2020. [Google Scholar] [CrossRef]
  110. Jain, A.; Jatoth, C.; Gangadharan, G.R. Bi-level optimization of resource allocation and appliance scheduling in residential areas using a Fog of Things (FOT) framework. Cluster Comput. 2024, 27, 219–229. [Google Scholar] [CrossRef]
  111. Daghayeghi, A.; Nickray, M. Delay-Aware and Energy-Efficient Task Scheduling Using Strength Pareto Evolutionary Algorithm II in Fog-Cloud Computing Paradigm. Wirel. Pers. Commun. 2024, 138, 409–457. [Google Scholar] [CrossRef]
  112. Alqahtani, F.; Amoon, M.; Nasr, A.A. Reliable scheduling and load balancing for requests in cloud-fog computing. Peer-to-Peer Netw. Appl. 2021, 14, 1905–1916. [Google Scholar] [CrossRef]
  113. Deng, W.; Zhu, L.; Shen, Y.; Zhou, C.; Guo, J.; Cheng, Y. A novel multi-objective optimized DAG task scheduling strategy for fog computing based on container migration mechanism. Wirel. Netw. 2024, 31, 1005–1019. [Google Scholar] [CrossRef]
  114. Chuang, Y.-T.; Hsiang, C.-S. A popularity-aware and energy-efficient offloading mechanism in fog computing. J. Supercomput. 2022, 78, 19435–19458. [Google Scholar] [CrossRef]
  115. Oprea, S.-V.; Bâra, A. An Edge-Fog-Cloud computing architecture for IoT and smart metering data. Peer Peer Netw. Appl. 2023, 16, 818–845. [Google Scholar] [CrossRef]
  116. Ali, H.S.; Sridevi, R. Mobility and Security Aware Real-Time Task Scheduling in Fog-Cloud Computing for IoT Devices: A Fuzzy-Logic Approach. Comput. J. 2024, 67, 782–805. [Google Scholar] [CrossRef]
  117. Verma, R.; Chandra, S. HBI-LB: A Dependable Fault-Tolerant Load Balancing Approach for Fog based Internet-of-Things Environment. J. Supercomput. 2023, 79, 3731–3749. [Google Scholar] [CrossRef]
  118. Chiang, M.; Zhang, T. Fog and IoT: An overview of research opportunities. IEEE Internet Things J. 2016, 3, 854–864. [Google Scholar] [CrossRef]
  119. Mahapatra, A.; Majhi, S.K.; Mishra, K.; Pradhan, R.; Rao, D.C.; Panda, S.K. An energy-aware task offloading and load balancing for latency-sensitive IoT applications in the Fog-Cloud continuum. IEEE Access 2024, 12, 14334–14349. [Google Scholar] [CrossRef]
Figure 1. Flowchart for the systematic search of the studies in this review.
Figure 2. Distribution of number of layers in fog architectures.
Figure 3. Number of layers vs. number of nodes.
Figure 4. Distribution of cluster types.
Figure 5. Three-dimensional plot: layers vs. nodes vs. clusters.
Figure 6. Number of studies per load-balancing strategy.
Figure 7. Proportions of load-balancing strategies in fog computing research.
Figure 8. Number of studies employing multiple load-balancing strategies.
Figure 9. Co-occurrence of strategies across studies.
Figure 10. Venn diagram: heuristic vs. hybrid strategies.
Figure 11. Number of studies per scheduling strategy.
Figure 12. Proportion of scheduling strategies in fog computing research.
Figure 13. Co-occurrence of scheduling strategies across studies.
Figure 14. Proportions of offloading strategies in fog computing research.
Figure 15. Co-occurrence of offloading strategies across studies.
Figure 16. Number of studies per fault tolerance strategy.
Figure 17. Co-occurrences of fault tolerance strategies across studies.
Figure 18. Emerging trends and technologies in fog computing research.
Figure 19. The co-occurrences of emerging trends and technologies in fog computing research.
Table 1. Inclusion and exclusion criteria.

Criteria | Inclusion | Exclusion
Literature | Peer-reviewed journal articles (Q1/Q2) | Non-indexed journals, retracted papers, conference proceedings
Publication Date | 2020–2024 | Prior to 2020
Language | English | Non-English
Focus | Fog computing with emphasis on load balancing | Studies on edge computing or unrelated topics
Table 2. Quality assessment criteria.

Criterion ID | Description
C1 | Are the research objectives clearly defined?
C2 | Are the methods well described and appropriate?
C3 | Is the experimental design justifiable?
C4 | Are the performance results measured and reported in detail?
C5 | Are limitations acknowledged and conclusions supported by results?
Three reviewers independently evaluated each paper, with disagreements resolved via consensus or majority vote.
Table 3. Distribution of studies across current challenges in fog computing.

Challenge | Paper(s)
Resource Heterogeneity | [8,9,10,12,13,15,22,25,31,35,54,57,59,66,67,68,69]
Dynamic Workloads | [8,9,10,12,13,15,16,22,29,30,34,39,42,63,70,71,72,73]
Latency Sensitivity | [8,10,12,13,15,22,52,54]
Energy Management and Efficiency | [8,10,13,15,20,21,22,25,30,34,35,36,37,38,39,53,54,55,57,59,66,69,70,71,74,75,76,77,78]
Security and Privacy | [8,10,14,15,19,21,35,38,40,41,45,46,54,55,56,57,59,62,63,64,65,66,69,70,71,72,74,76,79,80,81,82,83,84,85,86,87,88]
Scalability | [8,12,13,14,15,20,22,27,35,36,37,39,41,44,48,52,55,63,66,77,80,88,89,90,91,92,93,94,95]
Mobility and Environmental Changes | [32,39,63,72,96]
Table 4. Strategy recommendations.

Category | Best With | Example Use Case
Heuristic | Real-time, moderate complexity | Smart traffic coordination
Meta-Heuristic | Complex optimization problems | Vehicle routing and resource migration
ML/RL | Adaptive/predictive control | Energy-aware scheduling in fog systems
Fuzzy Logic | Input uncertainty or vagueness | Health monitoring with fuzzy sensor data
Hybrid Approaches | Multi-objective, large-scale, or mobile systems | Smart cities with dynamic resource allocation
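The heuristic category above favors simple, fast selection rules. As an illustration only (the node metrics and names are hypothetical, not drawn from any surveyed paper), a minimal least-loaded heuristic for choosing a fog node might look like this:

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    active_tasks: int
    capacity: int

def least_loaded(nodes):
    """Heuristic: pick the node with the lowest utilization ratio."""
    return min(nodes, key=lambda n: n.active_tasks / n.capacity)

nodes = [FogNode("fog-1", 8, 10), FogNode("fog-2", 3, 10), FogNode("fog-3", 5, 20)]
target = least_loaded(nodes)  # fog-3: 5/20 = 0.25 is the lowest ratio
```

Such rules run in linear time per decision, which is why heuristics suit real-time settings with moderate complexity; the trade-off is that they ignore future workload and multi-objective concerns that meta-heuristic and hybrid approaches address.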
Table 5. Scheduling strategy recommendation matrix.

Category | Best When | Example Use Case
Resource-aware | System has limited fog resources like CPU or bandwidth | Workload balancing in industrial IoT systems
Deadline-aware | Tasks must be completed within strict time constraints | Real-time video analytics or emergency response
Energy-aware | Reducing power usage is a high priority | Battery-constrained sensor networks or mobile fog nodes
Latency-aware | Application requires real-time responsiveness | Autonomous vehicles, remote surgery, AR/VR streaming
Security-aware | Sensitive data or user privacy are involved | Healthcare data processing or financial transactions
Cost-aware | Cloud offloading or communication costs must be minimized | Fog–cloud collaboration in smart cities
Context-aware | Environmental or user context must adapt scheduling decisions | Location-based task delegation or user mobility prediction
Hybrid scheduling | Multiple conflicting objectives must be balanced | Smart cities managing energy, latency, and cost together
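Hybrid scheduling combines several of the criteria above into one decision. A common realization is a weighted-sum score over normalized metrics; the sketch below is illustrative (node names, metric values, and weights are hypothetical, and real systems would normalize raw measurements before scoring):

```python
# Hybrid scheduling sketch: rank candidate fog nodes by a weighted sum of
# normalized latency, energy cost, and current load (each metric in [0, 1]).
def hybrid_score(node, weights):
    return sum(weights[k] * node[k] for k in ("latency", "energy", "load"))

def pick_node(candidates, weights):
    # Lower combined score is better.
    return min(candidates, key=lambda n: hybrid_score(n, weights))

candidates = [
    {"name": "fog-a", "latency": 0.2, "energy": 0.7, "load": 0.5},
    {"name": "fog-b", "latency": 0.6, "energy": 0.2, "load": 0.3},
]
weights = {"latency": 0.5, "energy": 0.3, "load": 0.2}
best = pick_node(candidates, weights)  # fog-a: 0.41 vs. fog-b: 0.42
```

Adjusting the weights shifts the scheduler along the latency/energy/cost trade-off, which is how a single mechanism can serve the conflicting objectives listed for smart-city scenarios.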
Table 6. Offloading strategy recommendations.

Strategy | Best When | Example Use Case
Full Offloading | End device is severely resource constrained, or idle task tolerance is high | Wearable health sensors offloading all processing to fog/cloud
Partial Offloading | Partial processing is possible on device to save bandwidth or preserve privacy | Edge preprocessing for smart surveillance before cloud analytics
Dynamic Offloading | Network or device status changes rapidly; decisions must be adaptive | Mobile users in vehicular fog systems or smart retail scenarios
Application-Specific Offloading | Offloading depends on application logic or requirements | Augmented reality or IoT applications with tailored offloading models
Resource-Aware Offloading | Fog/cloud resources are heterogeneous, and availability varies | Load balancing in multi-tier fog infrastructure
Latency-Aware Offloading | Real-time response is critical | Tactile internet or remote healthcare diagnostics
Energy-Aware Offloading | Energy savings are more important than performance | Battery-powered IoT devices or sensor networks
Context-Aware Offloading | Offloading needs to adapt to user behavior or environmental data | Location-based task assignment in smart cities
Hybrid Offloading | Multiple objectives like latency, energy, and context must be optimized together | Smart transportation systems with high mobility and QoS constraints
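Dynamic and hybrid offloading both reduce to a per-task decision made from current estimates. The following sketch, under assumed inputs (estimated local and remote execution times, transfer delay, and battery level; the threshold value is hypothetical), combines a latency comparison with an energy-aware override:

```python
def should_offload(local_time, remote_time, transfer_time,
                   battery_level, battery_floor=0.2):
    """Decide whether to offload a task to a fog node.

    local_time / remote_time: estimated execution times (seconds).
    transfer_time: estimated upload + result-return delay (seconds).
    battery_level: remaining charge in [0, 1].
    """
    if battery_level < battery_floor:
        return True  # energy-aware override: preserve the device battery
    # latency-aware comparison: offload only if the remote round trip wins
    return remote_time + transfer_time < local_time
```

Because the inputs are re-estimated per task, the same rule adapts as network or device status changes, which is the defining property of dynamic offloading in the table above.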

Aldossary, D.; Aldahasi, E.; Balharith, T.; Helmy, T. A Systematic Literature Review on Load-Balancing Techniques in Fog Computing: Architectures, Strategies, and Emerging Trends. Computers 2025, 14, 217. https://doi.org/10.3390/computers14060217