Abstract
Fog computing has emerged as a promising paradigm to extend cloud services toward the edge of the network, enabling low-latency processing and real-time responsiveness for Internet of Things (IoT) applications. However, the distributed, heterogeneous, and resource-constrained nature of fog environments introduces significant challenges in balancing workloads efficiently. This study presents a systematic literature review (SLR) of 113 peer-reviewed articles published between 2020 and 2024, aiming to provide a comprehensive overview of load-balancing strategies in fog computing. This review categorizes fog computing architectures, load-balancing algorithms, scheduling and offloading techniques, fault-tolerance mechanisms, security models, and evaluation metrics. The analysis reveals that three-layer (IoT–Fog–Cloud) architectures remain predominant, with dynamic clustering and virtualization commonly employed to enhance adaptability. Heuristic and hybrid load-balancing approaches are most widely adopted due to their scalability and flexibility. Evaluation frequently centers on latency, energy consumption, and resource utilization, while simulation is primarily conducted using tools such as iFogSim and YAFS. Despite considerable progress, key challenges persist, including workload diversity, security enforcement, and real-time decision-making under dynamic conditions. Emerging trends highlight the growing use of artificial intelligence, software-defined networking, and blockchain to support intelligent, secure, and autonomous load balancing. This review synthesizes current research directions, identifies critical gaps, and offers recommendations for designing efficient and resilient fog-based load-balancing systems.
1. Introduction
The exponential growth of Internet of Things (IoT) devices has driven demand for computational paradigms that support low-latency, real-time processing. Fog computing has emerged as a pivotal solution that bridges the gap between cloud data centers and IoT endpoints by decentralizing computing resources and bringing them closer to the data source. Unlike traditional cloud computing, which relies on centralized infrastructures, fog computing operates through geographically distributed nodes—including gateways, routers, and edge servers—which are capable of handling data locally. This proximity significantly reduces latency, improves responsiveness, and supports context-aware applications across various domains such as smart healthcare, autonomous vehicles, and industrial automation.
However, the decentralized and heterogeneous nature of fog environments introduces several technical challenges. Among them, load balancing stands out as a critical issue. Load balancing in fog computing involves the efficient distribution of tasks across available fog nodes to optimize resource utilization, reduce response time, and ensure service continuity. Unlike centralized cloud environments, fog systems must operate under constrained resources and dynamic network conditions, where nodes may frequently join or leave, workloads fluctuate, and computational capacities vary widely.
Additionally, fog systems must address the Quality of Service (QoS) demands of diverse IoT applications. For example, real-time applications such as video surveillance or telemedicine require prompt data processing, which makes intelligent and adaptive load-balancing mechanisms essential. Other applications may demand privacy-preserving features and fault-tolerant architectures to protect sensitive data and maintain operational reliability. These challenges highlight the need for robust, flexible, and context-aware load-balancing strategies in fog computing.
In light of these complexities, this study conducts a systematic literature review (SLR) to explore the state of the art in load balancing for fog computing environments. Specifically, the objectives of this review are as follows:
- To analyze and categorize fog computing architectures and their implications for load balancing.
- To classify load-balancing algorithms and compare their strengths and weaknesses.
- To identify the most frequently used performance metrics and evaluation tools.
- To highlight emerging trends, ongoing challenges, and research gaps.
This review synthesizes findings from 113 peer-reviewed studies published between 2020 and 2024. By doing so, it aims to provide a comprehensive understanding of current practices and future directions for load balancing in fog computing, with an emphasis on improving QoS, scalability, and energy efficiency in IoT-driven environments.
2. Related Works
Numerous systematic reviews and surveys have been conducted to explore load balancing in fog computing. These studies have examined various algorithmic approaches, performance metrics, and challenges. However, most of them are limited either by their scope, methodological rigor, or timeliness. This section outlines key contributions from the existing literature while highlighting their limitations and the rationale for this review.
Kashani and Mahdipour [] conducted a systematic study categorizing load-balancing algorithms into approximate, exact, fundamental, and hybrid types. Their analysis, based on 49 papers from 2013 to 2021, emphasized response time and energy consumption as core evaluation metrics. However, the review was limited by a relatively narrow time window and a strong focus on simulation-based research, with minimal discussion on real-world applicability or emerging AI-driven methods.
Kaur and Aron [] presented a systematic review of static and dynamic load-balancing techniques in fog computing, focusing on improving energy efficiency and resource utilization. While their taxonomy and proposed architecture were valuable, the study primarily analyzed early-stage research and did not sufficiently address modern advancements such as intelligent or hybrid algorithms.
Sadashiv [] provided a comprehensive survey comparing various load-balancing strategies based on their impact on latency, scalability, and energy usage. Although useful classifications and performance trade-offs were introduced, the study relied heavily on theoretical models and case studies, offering little in the way of practical implementations or validation across diverse environments.
Shakeel and Alam [] reviewed 60 heuristic, meta-heuristic, and hybrid algorithms used in both cloud and fog contexts, offering an extensive framework and classification scheme. Their analysis covered QoS indicators such as response time and resource utilization but lacked a detailed focus on fog-specific architectures and their operational constraints. Moreover, their inclusion of cloud computing diluted the specificity required for fog-centric insights.
Ebneyousef and Shirmarz [] provided a taxonomy of load-balancing algorithms and highlighted the importance of QoS in fog computing. They underscored the need for adaptive and scalable algorithms but did not delve into algorithmic trends such as the use of reinforcement learning or blockchain integration. Additionally, their dataset was restricted to 50 articles, potentially omitting emerging contributions.
While these prior studies offer valuable insights, they exhibit several limitations, including outdated datasets, insufficient performance evaluations, limited architectural analysis, and a lack of focus on recent innovations such as AI integration, privacy-preserving strategies, and real-world deployments. Furthermore, most reviews did not categorize workload types or evaluation tools in depth. In contrast, this study presents a broad and up-to-date systematic literature review of 113 peer-reviewed articles (2020–2024). It provides the following:
- A detailed classification of fog computing architectures and their scalability, security, and application domains.
- A comprehensive taxonomy of load-balancing algorithms, including heuristic, meta-heuristic, ML/RL, and hybrid strategies.
- An in-depth evaluation of performance metrics, workload types, and assessment tools.
- A synthesis of current challenges, emerging trends, and research opportunities, including AI-driven, blockchain-based, and privacy-aware approaches.
By addressing these dimensions, this review contributes to a more complete and actionable understanding of load balancing in fog computing environments.
3. Methodology
This study follows a systematic literature review (SLR) methodology grounded in the guidelines proposed by Keele [], with a focus on transparency, repeatability, and rigor. The methodology is structured across several stages: the formulation of research questions, the design of the search strategy, study selection based on inclusion and exclusion criteria, quality assessment, data extraction, and synthesis.
3.1. Research Questions
The primary objective of this SLR is to analyze the state of the art in load balancing within fog computing. To achieve this, seven research questions (RQs) were formulated:
- RQ1: What are the various architectures of fog computing, and how do they differ in terms of functionality, scalability, and application?
- RQ2: What types of load-balancing strategies or algorithms are applied in fog computing environments?
- RQ2.1: What are the advantages and disadvantages of these strategies?
- RQ3: What performance metrics are most commonly used to evaluate load-balancing algorithms in fog computing?
- RQ4: What workload types are frequently used to evaluate load balancing in fog computing (e.g., static vs. dynamic)?
- RQ5: What evaluation tools are commonly employed for assessing load-balancing algorithms in fog computing?
- RQ6: What methods are used to assess load-balancing effectiveness in fog computing environments?
- RQ7: What are the key challenges, emerging trends, and unresolved issues related to load balancing in fog computing?
3.2. Study Selection Criteria
To ensure a high-quality and relevant dataset, predefined inclusion and exclusion criteria were applied (Table 1). Only journal articles from Q1 and Q2 indexed sources, written in English, and published between 2020 and 2024 were considered. Conference papers, book chapters, and articles irrelevant to fog computing or load balancing were excluded.
Table 1.
Inclusion and exclusion criteria.
3.3. Search Strategy Design
A systematic search was conducted in October 2024 across three major scientific databases: Scopus, Web of Science (WoS), and IEEE Xplore. Keywords and Boolean operators were defined based on the research questions and refined iteratively. Search string example: (“load balance*” OR “load distribution” OR “load scheduling” OR “workload distribution”) AND (“fog computing”) AND (“algorithm” OR “method” OR “strategy” OR “approach”).
This process yielded 322 articles: Scopus (80), IEEE (64), and WoS (178). After removing duplicates and applying eligibility screening, 113 studies were selected for final inclusion. Figure 1 illustrates the steps of selection for the final set of 113 studies, which were used for data extraction and synthesis.
Figure 1.
Flowchart for the systematic search of the studies in this review.
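To illustrate how such a Boolean string can be operationalized during record screening, the following minimal Python sketch applies the three keyword groups to exported bibliographic records. The record structure, field names, and wildcard handling are simplifications for illustration and do not reproduce the actual screening pipeline used in this review.

```python
import re

# Hypothetical record structure; real exports from Scopus, WoS, and IEEE Xplore differ in format.
records = [
    {"title": "A load balancing algorithm for fog computing", "abstract": "..."},
    {"title": "Cloud-only workload distribution survey", "abstract": "..."},
]

# Terms mirroring the Boolean search string; the "load balance*" wildcard is
# approximated with a regex broad enough to also catch "balancing".
GROUP_A = [r"load balanc\w*", r"load distribution", r"load scheduling", r"workload distribution"]
GROUP_B = [r"fog computing"]
GROUP_C = [r"algorithm", r"method", r"strategy", r"approach"]

def matches(text: str, patterns) -> bool:
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)

def eligible(record) -> bool:
    text = f"{record['title']} {record['abstract']}"
    # The three groups are combined with AND, as in the published search string.
    return matches(text, GROUP_A) and matches(text, GROUP_B) and matches(text, GROUP_C)

screened = [r for r in records if eligible(r)]
print(len(screened), "records pass the keyword screen")
```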
3.4. Quality Assessment
To ensure the reliability of the selected studies, a quality assessment framework was applied using five criteria (Table 2). Each study was scored using a binary scale: Yes (1 point) and No (0 points). Only studies with a score of ≥3 were retained for synthesis.
Table 2.
Quality assessment criteria.
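As a minimal illustration of this scoring step, the sketch below applies five binary criteria (the names are placeholders, not the exact wording of Table 2) and retains only studies scoring at least 3, mirroring the threshold described above.

```python
# Placeholder criterion names; each is scored 1 (Yes) or 0 (No) per study.
CRITERIA = ["clear_objectives", "sound_methodology", "evaluation_reported",
            "results_support_claims", "relevance_to_fog_lb"]

studies = {
    "S01": {"clear_objectives": 1, "sound_methodology": 1, "evaluation_reported": 1,
            "results_support_claims": 0, "relevance_to_fog_lb": 1},
    "S02": {"clear_objectives": 1, "sound_methodology": 0, "evaluation_reported": 0,
            "results_support_claims": 0, "relevance_to_fog_lb": 1},
}

scores = {sid: sum(s[c] for c in CRITERIA) for sid, s in studies.items()}
retained = {sid: sc for sid, sc in scores.items() if sc >= 3}  # keep studies with score >= 3
print(retained)  # e.g., {'S01': 4}
```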
3.5. Data Extraction
A structured data extraction form was developed to systematically capture relevant information from each selected study. This included the following:
- Architecture type, layers, and node configurations (RQ1);
- Load-balancing strategy, classification, and brief description (RQ2);
- Advantages and disadvantages of algorithms (RQ2.1);
- Performance metrics and evaluation tools (RQ3–RQ5);
- Workload types (RQ4);
- Assessment methods (RQ6);
- Challenges, trends, and open issues (RQ7).
Each paper was independently reviewed by two authors to ensure accuracy and consistency.
3.6. Data Synthesis
Extracted data were synthesized using quantitative tabulation and qualitative analysis and organized around each research question. Microsoft Excel was used to compile the results, create visualizations, and identify thematic patterns. Where appropriate, the results were categorized by algorithm type, metric usage frequency, architectural design, or application domain.
3.7. Threats to Validity
This review acknowledges several potential threats to validity:
- Selection bias: To mitigate this, we used multiple databases and rigorous inclusion/exclusion criteria.
- Publication bias: Only Q1/Q2 journal articles were selected, which may exclude relevant gray literature.
- Reviewer bias: Dual review and majority voting were employed to reduce subjectivity during selection and extraction.
- Tool limitations: Some studies lacked transparency in tool usage, making cross-comparison more difficult.
To further reduce bias, we employed backward and forward snowballing techniques to identify any missing but relevant studies.
4. Results and Discussion
This section presents the findings of the systematic literature review (SLR), organized according to the research questions outlined in Section 3. Each subsection synthesizes data across the 113 selected studies and provides both quantitative and qualitative insights into the current landscape of load balancing in fog computing.
4.1. RQ1: Fog Computing Architectures
The analysis reveals that three-layer architectures (IoT–Fog–Cloud) dominate current fog computing designs, referenced in over half the studies (e.g., [,,,,,]). These architectures facilitate efficient task distribution by enabling low-latency processing at the fog layer and long-term storage at the cloud layer. Dynamic clustering is a recurrent feature, enhancing scalability and fault tolerance. Hierarchical architectures appear in studies such as [,], introducing additional fog layers to enable regional aggregation and decision-making. These designs support geographically distributed deployments, making them suitable for applications such as intelligent transportation systems. Flat (decentralized) models [] are leveraged in highly dynamic environments where node independence is critical, such as mobile ad hoc networks. Cloud–Fog–Edge (CFE) architectures, described in [,], offer more modular task assignment through fine-grained control over edge and fog layers.
While often discussed in relation to edge computing, fog computing represents a broader paradigm. Edge computing typically refers to computation that takes place directly on or near the data-generating devices (e.g., sensors, gateways). In contrast, fog computing extends this model by introducing an intermediate layer of processing nodes, known as fog nodes, which may include local servers, routers, or base stations. These nodes provide storage, computation, and networking services closer to the edge but still one step above the data source. This hierarchical structure supports context-aware processing, improved scalability, and more granular resource management. In this review, we specifically focus on fog computing, while occasionally referencing edge computing where it overlaps or is part of a fog-based architecture.
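The hierarchy described above can be captured in a simple data structure. The following Python sketch models an illustrative IoT–Fog–Cloud topology in which each node keeps a reference to its parent layer; node names and capacity figures are hypothetical and serve only to show the layered abstraction.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    layer: str                 # "iot", "fog", or "cloud"
    cpu_mips: int              # illustrative capacity figure
    parent: str | None = None  # next hop up the hierarchy

@dataclass
class Topology:
    nodes: dict[str, Node] = field(default_factory=dict)

    def add(self, node: Node) -> None:
        self.nodes[node.name] = node

    def escalation_path(self, name: str) -> list[str]:
        """Follow the IoT -> fog -> cloud hierarchy for a given device."""
        path = [name]
        while self.nodes[path[-1]].parent is not None:
            path.append(self.nodes[path[-1]].parent)
        return path

topo = Topology()
topo.add(Node("cloud-dc", "cloud", cpu_mips=100_000))
topo.add(Node("fog-gw-1", "fog", cpu_mips=4_000, parent="cloud-dc"))
topo.add(Node("sensor-7", "iot", cpu_mips=100, parent="fog-gw-1"))
print(topo.escalation_path("sensor-7"))  # ['sensor-7', 'fog-gw-1', 'cloud-dc']
```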
Microservices-based and peer-to-peer architectures [,] introduce flexibility and autonomous decision-making, particularly for real-time analytics and smart home scenarios. Security, task scheduling efficiency, and fault tolerance were commonly addressed across all architectural types. Notably, studies integrating blockchain [] and software-defined networking (SDN), such as Xia et al. [], proposed architectures with advanced trust and resource control mechanisms.
Figure 2, Figure 3 and Figure 4 are visualizations of the results in Table A1. It can be seen in Figure 2 that most architectures have three layers, aligning with the classic IoT–Fog–Cloud hierarchy (e.g., [,,,]). This highlights the consistency in design philosophy across studies, reflecting the layered abstraction that supports scalability and modularity in fog computing environments. Meanwhile, the scatter plot (Figure 3) shows the relationship between the number of architectural layers and the number of nodes. There is no strong linear correlation between the number of layers and the number of nodes, suggesting that architectures with more layers do not necessarily imply more nodes, and the node count is more likely influenced by the deployment scope or application type (e.g., smart cities vs. industrial systems). The countplot (Figure 4) shows the distribution of cluster types. It can be seen that dynamic clustering dominates, highlighting the adaptive nature of fog computing to workload, mobility, and real-time requirements [,,,,]. However, some studies use task-based clustering [] or form fixed clusters, often based on geography or role (e.g., RSUs in vehicular networks [], fog colonies []).
Figure 2.
Distribution of number of layers in fog architectures.
Figure 3.
Number of layers vs. number of nodes.
Figure 4.
Distribution of cluster types.
Table A1 and the related figures (Figure 2, Figure 3 and Figure 4) yield the following insights regarding layering trends: (a) Three-layer architectures are the de facto standard in fog computing research. (b) Studies with more than three layers [,,,] often involve complex domains like robotics, vehicular networks, or blockchain, which necessitate more nuanced abstractions. In terms of node scale variability, node counts vary widely, from as few as 2 [] to over 2000 [], showing a huge span in deployment scales. This reflects fog computing’s flexibility, applicable to both local edge setups and large distributed infrastructures. Finally, cluster formation is either static (e.g., predefined groups in [,]) or dynamic, based on proximity [,], workload [,], or node types (e.g., master–worker models in []). Figure 5 presents a 3D plot showing the relationships among layers, nodes, and clustering. There is no tight clustering or linear plane, which suggests that the data are widely dispersed. This reveals that architectures with few layers can still support many nodes, and systems with many nodes may use simple or complex clustering, but no single formula dominates.
Figure 5.
Three-dimensional plot: layers vs. nodes vs. clusters.
4.2. RQ2: Load-Balancing Strategies and Algorithms
The reviewed literature identifies nine major categories of load-balancing strategies used in fog computing, each with distinct strengths and limitations. Fundamental strategies, such as round robin and random assignment, offer low complexity but lack adaptability in dynamic environments. Exact optimization techniques like linear programming deliver optimal results but are computationally intensive and unsuitable for real-time fog scenarios. Heuristic approaches are the most widely adopted due to their flexibility and ease of implementation, although they may yield suboptimal solutions and often require system-specific tuning.
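The contrast between a fundamental strategy and a simple heuristic can be made concrete with a short sketch. The Python snippet below compares round-robin assignment, which ignores node state, with a least-loaded heuristic; node names and load units are illustrative only and do not correspond to any reviewed system.

```python
from itertools import cycle

fog_nodes = {"fog-1": 0.0, "fog-2": 0.0, "fog-3": 0.0}  # name -> current load (arbitrary units)

# Fundamental strategy: round robin rotates through nodes regardless of their state.
rr = cycle(fog_nodes)
def assign_round_robin(task_load: float) -> str:
    node = next(rr)
    fog_nodes[node] += task_load
    return node

# Simple heuristic: send each task to the currently least-loaded node.
def assign_least_loaded(task_load: float) -> str:
    node = min(fog_nodes, key=fog_nodes.get)
    fog_nodes[node] += task_load
    return node

for load in [3.0, 1.0, 5.0, 2.0]:
    assign_round_robin(load)        # fixed rotation, blind to load
print("after round robin:", fog_nodes)

fog_nodes.update({n: 0.0 for n in fog_nodes})   # reset loads
for load in [3.0, 1.0, 5.0, 2.0]:
    assign_least_loaded(load)       # always targets the lightest node
print("after least-loaded:", fog_nodes)
```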
Meta-heuristic algorithms, including Particle Swarm Optimization and Genetic Algorithms, excel in handling multi-objective optimization but are resource-heavy and often demand extensive parameter calibration. Machine learning (ML) and reinforcement learning (RL) methods are gaining popularity for their ability to predict and adapt to workload patterns in real time. However, they face significant challenges in fog environments. These include high training latency, resource demands during inference, limited interpretability (especially with deep models), and difficulties in deploying or updating models on low-power, distributed nodes. Additionally, data scarcity and privacy constraints in fog scenarios can hinder effective model training.
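As an illustration of how RL can drive placement decisions, the following sketch implements a toy tabular Q-learning agent that learns to route tasks across two fog nodes and a cloud node. The state discretization, reward function, and fixed WAN penalty are invented for demonstration and do not reproduce any reviewed study.

```python
import random
from collections import defaultdict

NODES = ["fog-1", "fog-2", "cloud"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = defaultdict(float)                       # (state, action) -> estimated value

def state(loads):
    # Coarse discretization of node queue lengths into four buckets.
    return tuple(min(q // 3, 3) for q in loads.values())

def choose(s):
    # Epsilon-greedy action selection over candidate nodes.
    if random.random() < EPSILON:
        return random.choice(NODES)
    return max(NODES, key=lambda a: Q[(s, a)])

loads = {n: 0 for n in NODES}
for step in range(5000):
    s = state(loads)
    a = choose(s)
    loads[a] += 1
    # Toy reward: penalize queueing delay, plus a fixed WAN penalty for the cloud.
    reward = -(loads[a] + (5 if a == "cloud" else 0))
    for n in NODES:                          # each node drains one task per step
        loads[n] = max(0, loads[n] - 1)
    s_next = state(loads)
    Q[(s, a)] += ALPHA * (reward + GAMMA * max(Q[(s_next, b)] for b in NODES) - Q[(s, a)])
```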
Fuzzy logic-based strategies are well suited for systems dealing with uncertain or imprecise inputs, although they rely on manually defined rules and do not scale well without adaptive mechanisms. Game theory models enable decentralized resource negotiation but often involve high computational costs and depend on accurate system modeling. Probabilistic methods provide lightweight solutions for stochastic systems but struggle with dynamic changes. Lastly, hybrid approaches (combinations of heuristics, ML, meta-heuristics, and fuzzy logic) are increasingly used to balance accuracy, adaptability, and resource efficiency, though they come with increased implementation complexity. Overall, while heuristic methods dominate current practice, ML/RL and hybrid models are emerging as promising solutions for adaptive and intelligent load balancing in fog computing.
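A minimal example of fuzzy-style node selection is sketched below: triangular membership functions map CPU utilization and link latency to degrees of suitability, combined with a min operator as the fuzzy AND. The membership breakpoints and candidate node values are hypothetical and would normally be tuned per deployment.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suitability(cpu_util: float, latency_ms: float) -> float:
    # Rule: a node is suitable IF CPU is "low" AND latency is "short" (min as fuzzy AND).
    cpu_low = tri(cpu_util, -0.1, 0.0, 0.6)        # full membership at 0% utilization
    latency_short = tri(latency_ms, -1, 0, 40)     # full membership near 0 ms
    return min(cpu_low, latency_short)

# Candidate nodes: (cpu_util, latency_ms); the values below are invented.
candidates = {"fog-1": (0.30, 12.0), "fog-2": (0.75, 5.0), "fog-3": (0.20, 35.0)}
best = max(candidates, key=lambda n: suitability(*candidates[n]))
print(best)  # 'fog-1' with these illustrative inputs
```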
Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 and Table A2 complement this discussion by illustrating the distribution, co-occurrence, and performance trade-offs among the reviewed strategies. Figure 6 and Figure 7 present the results related to Table A3. Figure 6 shows the number of studies per strategy. The top category is heuristic approaches, which dominate the landscape with 80+ studies. Meta-heuristic strategies and hybrid approaches show a strong presence, indicating growing interest in adaptive and multi-objective methods. Additionally, machine/deep/reinforcement learning methods (ML/RL) are emerging but still trail behind heuristics. Figure 7 presents the strategy proportions. Heuristics account for more than 50% of all categorized strategies. Smaller yet important segments are exact optimization (~5%) and fuzzy logic/game theory (niche but valuable in uncertain or decentralized environments), and ML/RL appears in ~10% of studies, showing fast growth, especially in autonomous or real-time decision-making.
Figure 6.
Number of studies per load-balancing strategy.
Figure 7.
Proportions of load-balancing strategies in fog computing research.
Figure 8.
Number of studies participating in multiple load-balancing strategies.
Figure 9.
Co-occurrence of strategies across studies.
Figure 10.
Venn diagram: heuristic vs. hybrid strategies.
In summary, heuristic approaches are practical, scalable, and ideal for real-time load balancing in volatile environments, and are cited in studies like [,,,,]. Meta-heuristic strategies are suitable for complex, multi-variable problems requiring a balance between exploration and exploitation, and are used in [,,,,]. ML/RL strategies are increasingly adopted for systems needing adaptive optimization; studies like [,,,] highlight learning-based load prediction and decision-making. Fuzzy logic is helpful in systems with imprecise inputs or needing human-like reasoning, as seen in [,,]. Game theory ensures fairness and efficiency in multi-agent fog environments and is applied in [,] for decentralized negotiation and decision-making. Finally, hybrid approaches combine the strengths of other strategies for robust, versatile load balancing and are employed in [,,,].
Regarding the distribution of studies by the number of categories (Figure 8), 49 studies used a single strategy, showing strong domain-specific alignment. Yet over 30 studies appear in two to four categories, revealing a move toward multi-strategy fusion in fog environments. This shows that real-world fog computing challenges often require more than one theoretical approach, especially in dynamic or uncertain conditions. The heatmap (Figure 9) reveals how often different categories are used together in the same studies. Heuristic approaches co-occur with almost every other strategy: strongly with meta-heuristics (20+ overlaps) and frequently with ML/RL, hybrid, and even fuzzy logic approaches. Hybrid approaches commonly overlap with meta-heuristic, heuristic, ML/RL, and fuzzy logic strategies, and ML/RL combined with meta-heuristics is a powerful emerging combination, often used in adaptive scheduling, predictive balancing, and real-time orchestration [,,].
The Venn diagram (focused on heuristic, meta-heuristic, and hybrid approaches) in Figure 10 reveals a substantial intersection where studies (like [,,]) implement all three approaches, indicating high system complexity and a large, shared region between heuristic and meta-heuristic, showing a common pairing for performance tuning under constraints.
4.3. RQ2.1: Advantages and Disadvantages of Load-Balancing Strategies
The strengths and limitations of each category were analyzed across all studies: (a) Heuristic methods offer low complexity and high adaptability but often converge on suboptimal solutions. (b) Meta-heuristics provide robust optimization but require intensive parameter tuning. (c) ML/RL algorithms enable intelligent decision-making but demand large training datasets and are prone to overfitting. (d) Exact optimization ensures optimality but lacks scalability. (e) Hybrid approaches integrate benefits across techniques but often introduce implementation complexity.
Based on Table A3, Figure 11, Figure 12 and Figure 13 present summary statistics. The distribution of scheduling strategies is presented in Figure 11 and Figure 12. The most prevalent is hybrid scheduling, which appears in 72 studies, making it the dominant strategy, followed by latency-aware and deadline-aware strategies, appearing in over 55 studies each. The least represented are security-aware scheduling and cost-aware scheduling; while still significant, these are covered in fewer studies (~40 each), possibly due to their specialized nature. Meanwhile, the heatmap of the co-occurrence of scheduling strategies (Figure 13) shows the high-overlap zones: hybrid scheduling co-occurs with nearly every other category, acting as a meta-layer that combines various constraints. Strong links also exist: latency-aware ↔ deadline-aware, energy-aware ↔ cost-aware, and context-aware ↔ resource-aware. This suggests that real-world fog applications frequently balance multiple metrics: energy, delay, cost, security, and more.
Figure 11.
Number of studies per scheduling strategy.
Figure 12.
Proportion of scheduling strategies in fog computing research.
Figure 13.
Co-occurrence of scheduling strategies across studies.
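Hybrid, multi-constraint scheduling of the kind discussed above is often realized as a weighted scoring function. The sketch below combines latency, energy, cost, and deadline-risk terms into a single score and selects the lowest-scoring node; the weights, normalization constants, and node figures are purely illustrative and not drawn from any reviewed study.

```python
# Hypothetical weights for a weighted-sum hybrid scheduler.
WEIGHTS = {"latency": 0.4, "energy": 0.3, "cost": 0.2, "deadline_risk": 0.1}

def schedule(task, nodes):
    def score(node):
        est_finish = node["queue_ms"] + task["exec_ms"] / node["speed"]
        # Deadline risk grows toward 1 as the estimated finish time approaches the deadline.
        deadline_risk = 1.0 if est_finish > task["deadline_ms"] else est_finish / task["deadline_ms"]
        return (WEIGHTS["latency"] * node["rtt_ms"] / 100
                + WEIGHTS["energy"] * node["joules_per_task"] / 10
                + WEIGHTS["cost"] * node["price"]
                + WEIGHTS["deadline_risk"] * deadline_risk)
    return min(nodes, key=lambda name: score(nodes[name]))

nodes = {
    "fog-1": {"queue_ms": 20, "speed": 1.0, "rtt_ms": 5,  "joules_per_task": 2.0, "price": 0.2},
    "cloud": {"queue_ms": 5,  "speed": 4.0, "rtt_ms": 80, "joules_per_task": 1.0, "price": 0.6},
}
print(schedule({"exec_ms": 40, "deadline_ms": 100}, nodes))  # 'fog-1' with these figures
```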
4.4. RQ3: Performance Metrics
Performance evaluation across studies primarily involved the following metric categories: (a) Latency and response time: most frequently reported (e.g., [,,]). (b) Energy consumption: particularly relevant in mobile and battery-constrained environments [,]. (c) Resource utilization: CPU, memory, and bandwidth usage [,]. (d) Task completion time and throughput: common in smart city and real-time applications [,]. (e) Fault tolerance and deadline adherence: evaluated via metrics like task failure rate and missed deadlines [,].
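These metric categories are typically derived from per-task traces. The following sketch computes average latency, total energy, throughput, and per-node utilization from a toy task log; the log values and the use of response time as a proxy for node busy time are simplifications for illustration only.

```python
# Toy task log: (submit_ms, finish_ms, node, energy_j); all values are invented.
log = [
    (0,   45, "fog-1", 1.8),
    (10,  70, "fog-2", 2.1),
    (20, 260, "cloud", 3.0),
]
SIM_TIME_MS = 300
CAPACITY_MS = {"fog-1": 300, "fog-2": 300, "cloud": 300}  # available busy time per node

latency = [finish - submit for submit, finish, _, _ in log]
avg_latency_ms = sum(latency) / len(latency)
total_energy_j = sum(e for *_, e in log)
throughput = len(log) / (SIM_TIME_MS / 1000)              # completed tasks per second

busy = {n: 0 for n in CAPACITY_MS}
for submit, finish, node, _ in log:
    busy[node] += finish - submit                          # response time approximates busy time
utilization = {n: busy[n] / CAPACITY_MS[n] for n in busy}

print(avg_latency_ms, total_energy_j, throughput, utilization)
```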
Several studies also reported composite metrics (Table A4) such as QoS satisfaction rates, prediction accuracy, and success ratios. Figure 14 presents the offloading strategy distribution. The most used strategies are hybrid offloading and dynamic offloading, which lead with 65+ studies each, reflecting the complexity and adaptiveness of modern fog systems. The other popular strategies are partial offloading and latency/energy-aware strategies, which are also widespread, showing the need to balance performance and efficiency. In terms of the specialized focus areas, application-specific offloading and context-aware offloading are common in targeted domains like AR/VR, healthcare, or location-based services.
Figure 14.
Proportions of offloading strategies in fog computing research.
The heatmap (Figure 15) shows the strongest co-occurrences: hybrid ↔ dynamic ↔ resource-aware. These three strategies frequently co-occur, as fog environments require real-time, flexible offloading based on the device state and network load. Similarly, energy-aware ↔ latency-aware ↔ context-aware is a common trio, especially in mobile and sensor-driven applications, where energy and delay are key, and context drives adaptiveness.
Figure 15.
Co-occurrence of offloading strategies across studies.
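Partial offloading decisions, as discussed above, are commonly framed as choosing the fraction of a divisible task to ship to a fog node. The sketch below performs a coarse grid search over that fraction, trading off a latency estimate against an energy estimate; all coefficients, units, and the 60/40 weighting are hypothetical.

```python
def offload_fraction(task_cycles, local_mips, fog_mips, uplink_mbps, input_mb,
                     local_j_per_mcycle=0.01, tx_j_per_mb=2.0):
    """Return the share of a divisible task to offload (0 = all local, 1 = all fog),
    chosen by brute-force search over a coarse grid; all figures are illustrative."""
    best_f, best_cost = 0.0, float("inf")
    for step in range(0, 11):
        f = step / 10
        t_local = (1 - f) * task_cycles / local_mips
        t_remote = f * input_mb * 8 / uplink_mbps + f * task_cycles / fog_mips
        latency = max(t_local, t_remote)                  # local and remote parts run in parallel
        energy = (1 - f) * task_cycles * local_j_per_mcycle + f * input_mb * tx_j_per_mb
        cost = 0.6 * latency + 0.4 * energy               # hypothetical weighting
        if cost < best_cost:
            best_f, best_cost = f, cost
    return best_f

print(offload_fraction(task_cycles=500, local_mips=100, fog_mips=2000,
                       uplink_mbps=20, input_mb=2))
```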
4.5. RQ4: Workload Types
Workloads were classified as follows: (a) Dynamic (95.6% of studies): real-time IoT data with unpredictable volumes [,,]. (b) Static: benchmarking scenarios with fixed workloads [,]. (c) Hybrid: combining predictable and real-time tasks [,]. (d) Data intensive: a special category for large-scale analytics and multimedia processing []. Overall, dynamic workloads dominate the evaluation landscape due to the volatile nature of fog environments.
Figure 16 and Figure 17 present the studies listed in Table A5. The distribution of fault tolerance strategies (Figure 16) shows that task migration, checkpointing, and replication are the dominant techniques, each appearing in 65+ studies. These strategies focus on recovery, continuity, and proactive protection, which are critical in dynamic fog environments. Supportive techniques (failure detection, self-healing, and resilient scheduling) occur in 50+ studies, showcasing a trend toward intelligence and autonomy in fault handling. Finally, comprehensive resilience, in the form of hybrid fault tolerance, appears in most studies, combining multiple techniques for layered protection.
Figure 16.
Number of studies per fault tolerance strategy.
Figure 17.
Co-occurrences of fault tolerance strategies across studies.
In regard to the heatmap (Figure 17), the top co-occurring pairs are as follows: checkpointing ↔ replication, task migration ↔ resilient scheduling, and self-healing ↔ failure detection. This clustering suggests that fog systems rely heavily on failover and recovery coordination, especially when real-time services are at risk.
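The coupling of checkpointing with task migration can be sketched as follows: a task periodically snapshots its progress, and when a node failure is detected it migrates to a healthy peer and resumes from the last checkpoint. The failure probability, step counts, and node names below are invented for illustration and do not model any specific reviewed system.

```python
import copy
import random

class CheckpointedTask:
    """Minimal sketch of checkpoint/restart combined with task migration."""
    def __init__(self, steps):
        self.steps, self.done, self.state = steps, 0, {}

    def checkpoint(self):
        return copy.deepcopy((self.done, self.state))

    def restore(self, snapshot):
        self.done, self.state = copy.deepcopy(snapshot)

def run_with_fault_tolerance(task, nodes, p_fail=0.1):
    node = nodes[0]
    snapshot = task.checkpoint()
    while task.done < task.steps:
        if random.random() < p_fail:                       # simulated node failure detected
            node = next(n for n in nodes if n != node)     # migrate to a healthy peer
            task.restore(snapshot)                         # resume from the last checkpoint
            continue
        task.done += 1
        task.state["last_step"] = task.done
        if task.done % 5 == 0:                             # periodic checkpoint
            snapshot = task.checkpoint()
    return node

print(run_with_fault_tolerance(CheckpointedTask(steps=20), ["fog-1", "fog-2"]))
```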
4.6. RQ5: Evaluation Tools
Evaluation environments ranged from simulators to real-world testbeds: (a) iFogSim (most common): used in 24 studies for modeling latency, energy, and network utilization [,]. (b) CloudSim and PFogSim: applied to hybrid fog–cloud scenarios []. (c) OMNeT++ and YAFS: chosen for detailed network simulations [,]. (d) Custom simulators: built using Python, Java, Docker, or MATLAB to simulate unique conditions [,]. (e) Real-world testbeds: implemented using Raspberry Pi and microcontroller-based setups in []. Some studies integrated multiple tools to bridge network and computational modeling (e.g., []).
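For custom Python-based simulators, a discrete-event library such as SimPy is a common starting point. The sketch below, which assumes the third-party `simpy` package is installed, models IoT devices queuing tasks on a two-worker fog node and reports the mean response time; the arrival and service distributions are arbitrary choices for illustration.

```python
import random
import simpy

def device(env, fog, latencies):
    """IoT device that submits tasks to a shared fog node at random intervals."""
    while True:
        yield env.timeout(random.expovariate(1.0))        # task inter-arrival time
        submitted = env.now
        with fog.request() as slot:                       # queue for a processing slot
            yield slot
            yield env.timeout(random.uniform(0.2, 0.8))   # service time on the fog node
        latencies.append(env.now - submitted)

random.seed(1)
env = simpy.Environment()
fog_node = simpy.Resource(env, capacity=2)                # fog node with 2 parallel workers
latencies = []
for _ in range(5):
    env.process(device(env, fog_node, latencies))
env.run(until=200)
print(f"mean response time: {sum(latencies) / len(latencies):.2f} time units")
```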
Results presented in Table A6 reveal that the most common approaches are intrusion detection and data encryption, highlighting a dual focus on detecting attacks and protecting data. Furthermore, authentication, privacy preservation, and trust management follow closely, underscoring the importance of data integrity and user legitimacy. The emerging trends are blockchain-based security and lightweight security, which are gaining traction due to their applicability in decentralized and resource-constrained environments. The top co-occurrences are intrusion detection ↔ authentication, privacy ↔ data encryption, and trust management ↔ lightweight and blockchain. These patterns confirm that security is rarely isolated; instead, strategies are tightly coupled to address multiple threat vectors simultaneously.
4.7. RQ6: Assessment Methods
Studies utilized various assessment approaches: (a) Simulation-based evaluation: the dominant method due to its reproducibility. (b) Prototype implementation: used in research targeting specific use cases like smart cities or industrial IoT [,]. (c) Theoretical modeling: used for algorithm development and proof of concept [,]. Few studies incorporated empirical measurements from real-world deployments, indicating a gap in practical validation.
Table 3 shows that iFogSim is by far the most widely used tool, referenced in 90+ studies (Table A7). Its popularity stems from native support for IoT, fog, and mobility modeling, together with strong community adoption and extensions (e.g., iFogSim2, iFogSim-Mobility). YAFS and EdgeCloudSim follow, but with significantly fewer studies; they offer more flexibility (YAFS) or a sharper focus on edge environments (EdgeCloudSim). CloudSim and its variants still appear in fog research, mainly for legacy support or hybrid cloud–fog architectures. GreenCloud, EmuFog, and PureEdgeSim are more specialized and used in niche studies, often focusing on energy, network topology, or edge-centric architectures. Finally, custom simulators are found in a handful of studies (e.g., [,]), typically to support novel frameworks or architectures not yet supported in public tools.
Table 3.
Distribution of studies across current challenges in fog computing.
While iFogSim remains the most widely adopted simulator in fog computing research—cited in over 90 studies—its dominance warrants critical examination. One of its primary limitations lies in its simplified modeling of network dynamics. iFogSim typically assumes stable and deterministic network conditions, which do not accurately reflect the variability and unpredictability of real-world fog and IoT networks. Moreover, its workload generation models are often static or predefined, limiting its applicability for evaluating systems that require dynamic or event-driven workloads. These constraints may lead to over-optimistic performance estimates and reduce the generalizability of findings derived from iFogSim-based simulations.
In contrast, tools like YAFS (Yet Another Fog Simulator) offer greater flexibility in modeling stochastic behavior and network latency. YAFS supports dynamic node behavior and complex event-driven simulations, making it suitable for environments with mobility or failure-prone devices. OMNeT++, though more complex to configure, provides high-fidelity network simulation capabilities and is ideal for evaluating fog systems with detailed communication protocols and cross-layer interactions. Researchers focused on the accurate modeling of network variability, message delays, or routing protocols may find YAFS or OMNeT++ more appropriate than iFogSim.
Ultimately, the selection of a simulation tool should be aligned with the research objectives. While iFogSim offers ease of use and strong community support for application-level load balancing, alternatives like YAFS and OMNeT++ provide richer abstractions for network-level modeling and dynamic behaviors. A more diversified toolset can enable a deeper and more realistic understanding of fog computing performance under different operational conditions.
Despite their methodological convenience, simulation-based evaluations present a narrow view of fog computing performance under real-world conditions. The limited adoption of physical testbeds in the current research stems from several practical barriers. First, fog environments often involve heterogeneous and distributed devices, making testbed design complex and resource intensive. Deploying and maintaining such infrastructure requires not only significant financial investment but also expertise in embedded systems, networking, and security. Second, real-time testing introduces variability due to hardware failures, network delays, and environmental noise, all of which are difficult to control and reproduce consistently. Additionally, there is often limited access to large-scale open-source fog testbeds, which restricts community-wide experimentation.
To address these challenges, future research could leverage affordable prototyping platforms such as Raspberry Pi clusters, Arduino boards, or Jetson Nano kits to build lightweight test environments. Cloud-integrated fog simulators (e.g., iFogSim2 with IoT hardware extensions) and federated edge–cloud frameworks can also be explored to simulate partial real-world behavior. Moreover, partnerships between academia and industry can play a vital role in providing access to realistic test environments, operational data, and deployment feedback. These approaches will help bridge the current gap between theoretical evaluations and operational viability, thus enhancing the credibility and applicability of fog-based load-balancing strategies.
4.8. RQ7: Challenges, Trends, and Future Directions
Fog computing continues to face several critical challenges that hinder its broader adoption and practical deployment. One of the primary challenges involves managing the heterogeneous capabilities of fog nodes, which can vary significantly in terms of computational power, memory, and energy efficiency. This heterogeneity complicates the process of load balancing and requires adaptable solutions that can operate effectively across diverse environments. Another pressing challenge is achieving scalability, particularly in high-density IoT environments where the sheer volume of data and the number of connected devices demand efficient, real-time processing with minimal delays.
Ensuring energy efficiency and reducing latency simultaneously remains a difficult task, especially given the resource-constrained nature of many fog nodes. As real-time responsiveness is critical in applications like healthcare and autonomous systems, achieving this balance is an ongoing concern. Security and privacy also remain at the forefront of research efforts, as decentralized fog architectures inherently expose data to a broader range of potential threats. Protecting sensitive information while maintaining performance necessitates lightweight yet effective security frameworks.
In terms of emerging trends, several technologies are shaping the next generation of fog computing systems. The adoption of artificial intelligence (AI) and machine learning (ML) is enabling adaptive load-balancing strategies that can learn and predict optimal resource allocation patterns in dynamic environments. Reinforcement learning (RL) is particularly being explored for its potential to autonomously optimize resource usage over time. Additionally, the integration of blockchain technology is gaining attention for its ability to support secure, tamper-proof data handling and decentralized trust management, which are vital in untrusted and distributed settings. Software-defined networking (SDN) is another key trend, offering centralized control over fog resources while maintaining the decentralized nature of fog infrastructure. This can facilitate the efficient orchestration and dynamic reconfiguration of resources in response to changing workloads.
Looking ahead, future research is expected to prioritize the development of real-time, energy-aware hybrid algorithms that can effectively balance multiple competing objectives such as latency, energy consumption, cost, and security. Furthermore, there is a strong need to expand real-world testbed deployments to complement simulation-based evaluations. This would provide more accurate assessments of system performance in practical conditions, accounting for hardware limitations, network variability, and user behavior. Another promising direction is cross-layer optimization, which involves coordinating decisions across application, network, and infrastructure layers to achieve holistic and context-aware scheduling.
Lastly, the lack of standardized evaluation frameworks presents a major obstacle to progress in the field. Without consistent benchmarking methodologies, it remains difficult to compare results across studies or replicate findings. Developing unified, transparent, and comprehensive evaluation frameworks is essential for ensuring reproducibility and enabling the meaningful comparisons of different load-balancing strategies.
While security and privacy have been acknowledged as critical challenges in fog computing, there is growing emphasis on implementing efficient yet robust mechanisms tailored for resource-constrained environments. Among the most widely discussed approaches is lightweight cryptography, which offers encryption and authentication using smaller key sizes and faster algorithms. These are particularly suitable for fog nodes with limited processing power and memory. However, they often involve trade-offs in terms of reduced cryptographic strength or susceptibility to emerging attack vectors and must be carefully selected based on threat models and application criticality.
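As a concrete, hedged example of authenticated encryption suitable for a fog gateway, the sketch below uses AES-GCM with a 128-bit key via the Python `cryptography` package; a dedicated lightweight cipher such as ASCON would be the natural substitution on more constrained nodes where supported. The payload and metadata shown are hypothetical.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Stand-in for a lightweight AEAD on a fog node: AES-GCM with a 128-bit key.
key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

reading = b'{"patient_id": 42, "spo2": 97}'   # hypothetical sensor payload
header = b"ward-3/sensor-7"                   # authenticated but unencrypted metadata
nonce = os.urandom(12)                        # must never repeat for the same key

ciphertext = aead.encrypt(nonce, reading, header)
plaintext = aead.decrypt(nonce, ciphertext, header)   # raises InvalidTag if tampered with
assert plaintext == reading
```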
Another emerging approach is the adoption of zero-trust architectures (ZTAs), where no device or user is inherently trusted, even within the network perimeter. ZTA involves continuous verification, micro-segmentation, and strict access controls. While this model enhances security posture against insider threats and lateral attacks, it introduces additional overhead from continuous monitoring and policy enforcement, which can impact system latency and increase energy consumption—especially if not optimized for fog-level operations.
Furthermore, secure data offloading and trusted execution environments (TEEs) are also being explored to protect sensitive data during transmission and processing. These solutions often rely on hardware support, which may not be uniformly available across all fog nodes, posing deployment challenges. Overall, integrating these security mechanisms must be balanced against real-time responsiveness and energy efficiency, which are often critical in fog applications such as healthcare, vehicular systems, or industrial automation.
Future research should focus on security–performance co-optimization, where security mechanisms are integrated in a way that minimally disrupts core performance metrics. Approaches such as adaptive security models, which adjust protection levels based on workload or context, may provide promising directions for aligning safety and efficiency in fog computing deployments.
Visualizations related to Table A8 are presented in Figure 18 and Figure 19. The distribution of trend metrics is presented in Figure 18. The most frequent trends are latency/delay and energy consumption, which dominate and appear in over 80 studies each. These are central due to fog computing’s real-time and power-sensitive nature. Resource utilization, cost efficiency, throughput, and load balancing follow closely, confirming that multi-faceted evaluation practices will persist in the near future. Finally, security/privacy, fault tolerance, and custom metrics are used in more specialized domains such as healthcare or vehicular networks. The co-occurrence trends (heatmap, Figure 19) show frequent pairings: latency + energy, utilization + cost, security + privacy, and latency + QoS. These clusters reflect how trends are strategically bundled: performance with efficiency, protection with compliance, and responsiveness with service satisfaction.
Figure 18.
Emerging trends and technologies in fog computing research.
Figure 19.
The co-occurrences of emerging trends and technologies in fog computing research.
5. Implications
The architecture choice should match the use case: (a) simple two-layer or three-layer designs are suitable for general IoT applications, while (b) advanced domains (e.g., autonomous vehicles and health systems) benefit from multi-layer or hybrid models. Dynamic clustering is essential: real-time data flow, mobility, and heterogeneous resources make static clustering insufficient, and the integration of SDN (as in []) or orchestration platforms (like Kubernetes in [,]) can enable intelligent cluster management. Scalability and interoperability remain ongoing challenges: the diversity in node and cluster configurations reveals a lack of standardization, which might hinder interoperability across platforms.
The design and architecture can start simply and scale smartly: in the beginning, utilize heuristics in pilot deployments; then, transition to ML/meta-heuristics as the complexity increases. For autonomous or smart environments, the priority is for ML/RL or hybrid strategies to handle dynamic and uncertain workloads. Meanwhile, while exact optimization is ideal in theory, it is often impractical for large-scale or real-time fog systems.
Regarding strategies, there is no one-size-fits-all solution; load balancing in fog computing requires blending multiple paradigms, especially for heterogeneous, large-scale, or mobile scenarios. Heuristics are the glue: they serve as the core logic layer, often enhanced by meta-heuristics, learning, or fuzzy reasoning, depending on system complexity. Finally, in terms of future trends such as intelligent hybridization, studies like [,,] showcase layered or hierarchical load-balancing models, combining rule-based, statistical, and AI-driven components.
Based on the above observations, Table 4 presents a tailored recommendation matrix for selecting load-balancing strategies in fog computing environments. Each strategy is mapped to scenarios where it performs optimally, along with real-world examples to guide implementation. Heuristic approaches are best suited for systems requiring quick, real-time decisions under moderate complexity, such as smart traffic control. Meta-heuristic strategies are ideal for solving complex optimization problems with constraints, commonly found in vehicle routing or task-offloading scenarios. ML/RL strategies enable adaptive and predictive load balancing, especially useful in energy-aware fog environments. Fuzzy logic excels in handling vague or imprecise input data, as seen in health monitoring systems using uncertain sensor readings. Hybrid approaches combine multiple methods to address diverse objectives, such as latency, cost, and mobility, making them highly effective in large-scale or heterogeneous systems, such as smart cities. This strategic mapping offers a practical reference for researchers and practitioners to align system requirements with the most effective load-balancing methods.
Table 4.
Strategy recommendations.
In terms of scheduling, hybrid scheduling is essential: complex fog environments demand multi-objective optimization (e.g., [,,]), which is especially vital in smart city, healthcare, and mobility-driven use cases. Latency and deadlines are core concerns, and time sensitivity is a major driver, particularly in IoT and edge applications [,]. Energy and cost often travel together because fog nodes are resource constrained, so studies frequently address these factors jointly [,]. Finally, security is gaining attention, appearing in 40+ studies, which shows rising concern for secure offloading, authentication, and data handling [,,]. Table 5 presents a practical mapping between scheduling strategy types and their optimal application contexts in fog computing environments. Each strategy is associated with scenarios where it is most effective, along with representative real-world use cases. This matrix serves as a design guideline for selecting scheduling algorithms that align with the specific operational goals and constraints of fog-based systems.
Table 5.
Scheduling strategy recommendation matrix.
Offloading in fog is contextual and dynamic: the popularity of dynamic and context-aware offloading confirms that systems must continuously adapt to environmental and device conditions. Similarly, hybrid techniques dominate real deployments, which often need to balance latency, energy, and computation load. On the other hand, while full offloading is useful for constrained devices, partial offloading is more flexible and efficient in bandwidth-limited or real-time cases. Finally, cross-layer decision-making is common; co-occurrence across four or more categories shows that fog-based offloading decisions are not isolated but rather integrated with scheduling, resource management, and security.
Table 6 presents practical guidance for selecting offloading strategies in fog computing based on system characteristics and application needs. Each strategy is aligned with specific operational contexts and real-world use cases. This matrix supports developers and researchers in choosing the most suitable offloading mechanisms to align with diverse operational goals and application scenarios in fog-based systems.
Table 6.
Offloading strategy recommendations.
Regarding fault management, fault tolerance in fog is not optional. The consistent use of migration, replication, and recovery strategies confirms that faults are expected and must be managed proactively. Furthermore, multi-layer protection is commonplace: the overlap of 5–7 strategies in most studies indicates that single-method fault tolerance is insufficient for real-world fog deployments. On the other hand, predictive and self-healing trends are rising; the increasing use of AI-based detection and self-healing shows a shift toward autonomous fog systems that can adapt to failures without human intervention. Finally, hybrid approaches are now standard practice. Systems that combine detection, redundancy, and recovery strategies are more resilient, especially in mission-critical applications like healthcare or industrial automation.
It follows that security is multi-dimensional: no single technique suffices, and systems must incorporate both proactive (e.g., intrusion detection) and preventive (e.g., access control and encryption) layers. Fog security must also be lightweight; the rise in lightweight and blockchain-based security solutions reflects a growing demand for scalable yet efficient protection. Finally, trust and privacy are critical: with sensitive data traversing fog nodes, trust management and privacy-preserving methods are essential, especially in healthcare and financial systems.
6. Conclusions and Future Work
6.1. Conclusions
This systematic literature review offers a comprehensive examination of load-balancing strategies within the context of fog computing. By analyzing 113 peer-reviewed articles published between 2020 and 2024, this study identifies prevailing architectural models, algorithmic advancements, evaluation metrics, and emerging trends that shape current research in the field.
This review reveals that three-layer architectures (IoT–Fog–Cloud) are the most widely adopted, as they provide an effective balance between localized responsiveness and centralized control. More complex hierarchical and multi-layer configurations further enhance scalability and are especially well suited for data-intensive and latency-sensitive applications. In terms of load-balancing strategies, dynamic approaches, particularly heuristic, meta-heuristic, and hybrid algorithms, dominate the landscape due to their adaptability in heterogeneous and fluctuating environments. Increasingly, machine learning and reinforcement learning techniques are being leveraged to support intelligent, predictive load balancing.
Performance evaluation in the reviewed studies frequently focuses on latency, response time, energy consumption, and resource utilization. Over 95% of the studies simulate dynamic workloads, underlining the importance of context-aware and real-time distribution strategies. iFogSim emerges as the most commonly used simulation tool, followed by CloudSim and several custom-built environments. Nevertheless, a notable limitation persists: the absence of real-world deployments in many studies restricts the generalizability and practical applicability of their results.
Despite substantial progress, fog computing continues to face critical challenges, particularly in scalability, security, and energy efficiency. Promising research directions include the integration of artificial intelligence for adaptive scheduling, blockchain for decentralized trust management, and software-defined networking (SDN) for enhanced control and orchestration. These technologies have significant potential to address the unique constraints of privacy-sensitive and resource-limited environments. Ultimately, this review provides researchers and practitioners with a structured roadmap of existing methodologies, while highlighting unresolved issues and areas that demand further exploration.
6.2. Future Work
Despite substantial progress in fog computing and load-balancing strategies, several critical challenges and research gaps remain unaddressed. One of the primary limitations is an overreliance on simulation-based evaluations. While simulations offer a controlled environment for preliminary testing, they often fail to capture the complexities of real-world deployments. Future research should therefore focus on the development of real-world testbeds to validate algorithmic performance under operational conditions, providing insights into practical constraints such as hardware limitations, network variability, and user dynamics.
Another important area of focus is energy-aware and green computing. As fog nodes are typically deployed near end-users and may operate with constrained power resources, there is a pressing need for energy-efficient strategies. These strategies must optimize system performance without compromising device longevity or reliability. Integrating energy consumption as a key metric in load-balancing decision-making is essential for sustainable fog computing environments.
Moreover, the dynamic and heterogeneous nature of fog environments necessitates the development of context-aware load-balancing mechanisms. Future approaches should not only respond to workload fluctuations but also incorporate contextual information such as user mobility, geographical distribution, and application-specific Quality of Service (QoS) requirements. Contextual adaptation will enhance the responsiveness and efficiency of resource management in highly dynamic IoT scenarios.
Security and privacy also remain critical concerns. Many IoT applications process sensitive or personal data, making it imperative that future load-balancing solutions integrate security features at their core. Approaches that incorporate encryption, trust management frameworks, and blockchain technologies into load-balancing protocols could offer enhanced data protection while maintaining performance.
In addition, most existing algorithms focus on optimizing a single objective, such as latency or energy consumption. However, real-world applications often require a balance across multiple criteria, including performance, cost, energy efficiency, and security. Future research should explore multi-objective optimization techniques that can effectively manage these competing priorities within a unified framework.
Finally, the lack of standardized evaluation frameworks hinders the ability to conduct fair and reproducible comparisons across different studies. The development of a unified benchmarking framework is essential to promote transparency, improve comparability, and accelerate progress in the field.
By addressing these challenges, future research can pave the way for the development of resilient, scalable, and intelligent fog computing systems capable of supporting the diverse and evolving needs of next-generation IoT applications.
While this review provides a qualitative synthesis of architectural and algorithmic strategies in fog computing, future research should move toward quantitative meta-analysis to enhance comparability and rigor. A statistical aggregation of key performance metrics, such as latency reduction, energy savings, and task completion times, would enable a more objective evaluation of the competing strategies. Such a synthesis could be derived from benchmarking results across tools like iFogSim, YAFS, and OMNeT++ using standardized workloads.
Moreover, there is a critical need for deployment-focused case studies that go beyond simulation. Industrial prototypes and smart city implementations, such as those in healthcare monitoring or vehicular networks, can offer vital insights into real-world constraints, including hardware limitations, environmental volatility, and user behavior. These studies are particularly important for evaluating interoperability and scalability, as fog systems often span heterogeneous platforms, protocols, and administrative domains.
To unify these efforts, we advocate for the development of an open benchmarking framework that defines standardized evaluation scenarios, metrics, datasets, and reporting formats. Such a framework would greatly improve transparency, reproducibility, and cross-study comparability, which are key pillars for advancing the technical maturity of fog computing research and facilitating real-world adoption.
Author Contributions
Conceptualization, T.H. and D.A.; methodology, D.A., E.A. and T.B.; software, D.A.; validation, D.A., E.A. and T.B.; formal analysis, D.A., E.A. and T.B.; investigation, T.H.; resources, D.A., E.A. and T.B.; data curation, D.A., E.A. and T.B.; writing—original draft preparation, D.A., E.A. and T.B.; writing—review and editing, T.H.; visualization, D.A., E.A. and T.B.; supervision, T.H.; project administration, T.H. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
No original data were used.
Acknowledgments
The authors acknowledge King Fahd University of Petroleum and Minerals and the Interdisciplinary Research Center for Intelligent Secure Systems for the support and for providing the computing facilities for doing this work. Special thanks go to anonymous reviewers for their insightful comments and feedback, resulting in a significant improvement in the quality of the paper. Generative AI has been used for proofreading purposes.
Conflicts of Interest
The authors declare no conflicts of interest.
Appendix A
Table A1.
Summary of fog computing architectures across research studies.
| Study | Number of Layers | Architecture Description | Number of Nodes | Number of Clusters |
|---|---|---|---|---|
| [] | 3 | Cloud layer, fog layer, terminal nodes layer | 50–300 | 6–14 |
| [] | Multiple | Multiple layers, including cloud, fog nodes, and end devices. | NA | NA |
| [] | 3 | IoT devices, fog nodes, cloud/central server | 50 | NA |
| [] | 3 | End-device layer, fog layer, cloud layer | 2–30 fog nodes | NA |
| [] | Flat | The architecture is non-hierarchical, which suggests a flat structure rather than multiple layers | NA | NA |
| [] | 3 | Local fog layer, adjacent fog nodes, cloud layer | 4–16 | NA |
| [] | 3 | IoT devices (edge), fog layer, cloud layer | NA | NA |
| [] | 3 | Device, fog, control layers | 100–600 IoT, 40 fog nodes, 3 cloud nodes | Clusters for task management |
| [] | 3 | IoT devices, fog nodes, cloud | 75 fog nodes | 5 clusters |
| [] | 3 | Cloud computing layer, fog computing layer, user devices | 11 nodes | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | 12 IoT devices, 4 access points, 4 gateways, 6 fog nodes, 1 cloud data center | 4 clusters |
| [] | 3 | Devices layer, fog layer, cloud layer | 236–578 vehicles, fog nodes, mobile and dedicated | Dynamically created clusters |
| [] | 3 | Edge devices, fog nodes, cloud infrastructure | 22 IoT nodes, 3 fog nodes, 1 cloud node | NA |
| [] | 3 | IoT devices, fog computing, cloud services | 5–25 fog nodes | NA |
| [] | Multiple | Distributed layers, including sensors, fog nodes, and cloud layers | NA | NA |
| [] | 3 | IoT devices, fog computing resources, cloud data centers | 50 IoT devices, 4 fog providers (50 VMs each) | 4 clusters |
| [] | 3 | User devices, fog nodes, cloud systems | 5, 15, 30, 50 VRs | NA |
| [] | Multiple | Multi-layer architecture integrating edge, fog, and cloud environments | 10–25 IoT devices, 10 fog nodes | NA |
| [] | 4 | Production equipment, fog computing, cloud computing, user layers | 10 fog nodes | NA |
| [] | 3 | IoT devices, fog layer, central cloud layer | 3 fog servers | NA |
| [] | Variable | Varies per deployment scenario | NA | NA |
| [] | 3 | NA | 5 fog nodes | 4 clusters |
| [] | Variable | Various architectures, including sequential, tree-based, and DAG-based | NA | NA |
| [] | 7 | Multiple fog nodes and a centralized cloud environment | 1073 fog nodes distributed across 7 layers | Puddles based on geographic proximity |
| [] | 3 | Infrastructure tier, fog tier, global management tier | 1000 OBUs, 50 RSUs | RSUs grouped by LSDNC |
| [] | 2 | Fog layer, cloud layer | NA | NA |
| [] | 3 | Dew layer, fog layer, cloud layer | NA | NA |
| [] | 2 | Fog layer, cloud layer | 5 cloud nodes, 3 fog nodes | NA |
| [] | Multiple | Multi-layered fog computing paradigm extending from IoT sensors to cloud | Varies, heterogeneous fog nodes | Clusters of IoT devices |
| [] | 3 | Cloud layer, fog layer, IoT layer | 100–500 fog nodes | NA |
| [] | 3 | IoT devices, fog layer, cloud services | Multiple fog nodes and cloud nodes | NA |
| [] | 3 | End devices, fog nodes, cloud layer | 16–64 fog nodes | NA |
| [] | 3 | End-user devices, fog layer, cloud services | 40–100 VMs, 50 physical machines | NA |
| [] | 4 | Acquisition layer, image layer, computing layer, robot layer | 5 fog nodes | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | 10–25 fog nodes | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | 1 master node, multiple SBC devices | Homogeneous or heterogeneous clusters |
| [] | 4 | IoT gateways, fog broker, fog cluster, applications layer | 5 fog nodes | 1 cluster |
| [] | 4 | Cloud layer, proxy server layer, gateway layer, edge device layer | Varies based on simulation | NA |
| [] | 3 | Edge layer, fog layer, cloud layer | 500–2000 fog nodes | NA |
| [] | 3 | Cloud layer, fog layer, end-user layer | 6 fog nodes | NA |
| [] | Multiple | A multi-layered approach involving edge, fog, and cloud layers | 2 Raspberry Pis | Single Kubernetes cluster |
| [] | 4 | Cloud computing layer, SDN control layer, fog computing layer, user layer | 10 fog nodes, 2 cloud nodes | SDN control allows for dynamic clustering |
| [] | 3 | Edge layer, fog layer, cloud layer | 29 fog nodes | Dynamic clusters |
| [] | 3 | Edge layer, fog layer, cloud layer | 4–5 fog nodes | Dynamic clusters |
| [] | 3 | The sensing layer, fog layer, cloud layer | Multiple fog nodes | Dynamic clusters |
| [] | 3 | Cloud layer, fog layer, end-user layer | 4 fog nodes, 1 cloud node | NA |
| [] | 3 | End-user layer, fog layer, cloud layer | 2 fog nodes, 1 cloud node | NA |
| [] | 3 | Lightweight nodes layer, access points layer, dedicated computing servers’ layer | 400 lightweight nodes, 10 access points | Dynamic logical groupings |
| [] | 3 | Edge/IoT layer, fog layer, cloud layer | 50 fog nodes, 1 cloud node | Dynamic groupings |
| [] | 3 | IoT device layer, fog layer, cloud layer | 20 fog nodes, 40 volunteer devices, 1 cloud data center | Logical grouping based on proximity |
| [] | 2 | IoT layer, fog layer | 10 fog nodes | NA |
| [] | 2 | Vehicular layer, RSU layer | 6 RSUs, 1000–3000 vehicles | Dynamically created clusters |
| [] | 3 | Edge layer, fog layer, cloud layer | 100 edge devices, 20 fog devices, 5 cloud servers | NA |
| [] | 3 | IoT layer, fog layer, edge server layer | 1 edge server, multiple fog nodes | Dynamic clusters based on workload |
| [] | 2 | Fog layer, cloud layer | 3 fog servers, 1 cloud server | Single fog cluster |
| [] | 3 | IoT layer, fog layer, cloud layer | Dynamic, based on parked vehicles | Dynamically formed clusters |
| [] | 3 | Vehicular fog layer, fog server layer, cloud layer | Vehicular fog nodes, RSUs, cloud nodes | Dynamically created clusters |
| [] | 4 | Perception layer, blockchain layer, SDN and fog layer, cloud layer | Distributed RSUs, 500 vehicles | Dynamically created clusters |
| [] | 3 | Vehicular layer, fog layer, cloud layer | UAVs and RSUs, dynamic | Dynamic swarms |
| [] | 2 | Fog layer, IoT device layer | 4 fog nodes, 50 IoT devices | 4 clusters |
| [] | 3 | Fog layer, edge layer, cloud layer | 5 UEs per edge server, 5 edge servers, 1 cloud server | 5 clusters |
| [] | 3 | IoT device layer, fog layer, cloud layer | 2 fog servers, 1 cloud server | 2 clusters |
| [] | 3 | IoT layer, fog layer, cloud layer | Multiple fog nodes, dynamic environment | Dynamic clustering |
| [] | 2 | Fog layer, cloud layer | 15 fog nodes, varying IoT devices | Dynamic clusters |
| [] | 3 | IoT layer, fog layer, cloud layer | Varies based on demand | Dynamic clustering |
| [] | 2 | Fog layer, cloud layer | Distributed fog nodes, cloud nodes | Dynamic clusters |
| [] | 3 | Microgrid layer, fog layer, cloud layer | 3 microgrids, fog nodes | 3 clusters |
| [] | 3 | IoT layer, fog layer, cloud layer | 9 edge servers, dynamic IoT devices | Dynamic clusters |
| [] | 3 | Edge layer, fog layer, cloud layer | Multiple appliances, smart sensors | Dynamic clusters based on HEMS |
| [] | 4 | IoT layer, edge layer, fog layer, cloud layer | Multiple edge, fog, and cloud nodes | Dynamic clusters |
| [] | 3 | Containerized workload layer, fog layer, cloud layer | 6 nodes, Kubernetes | Dynamic clusters |
| [] | 3 | Edge, fog, cloud | 100 fog nodes, 10–20 IoT devices | Task-based clusters |
| [] | 3 | IoT devices layer, fog landscape layer, cloud layer | Multiple fog cells | Fog colonies as clusters |
| [] | 2 | Fog layer, cloud layer | NA | NA |
| [] | 3 | Sensor layer, fog layer, cloud layer | Multiple fog nodes | Dynamic clusters |
| [] | 4 | Sensor layer, fog devices, proxy servers, cloud data centers | 30 microdata centers | Hierarchical MDCs |
| [] | 3 | IoT layer, fog layer, cloud layer | NA | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | 50 fog nodes | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | NA | NA |
| [] | 3 | IoMT layer, fog layer, cloud layer | 100–500 fog devices | NA |
| [] | 4 | Fog devices, cloud data center | 5 devices at each tier | NA |
| [] | Multiple | NA | 10–70 heterogeneous fog nodes | 3 clusters |
| [] | Multiple | Multiple layers, including fog nodes and cloud | 500 fog nodes, 200 fog–cloud interfaces | NA |
| [] | 4 | Edge layer, Base station layer, fog layer, cloud layer | 20 heterogeneous virtual machines in the fog layer | NA |
| [] | 3 | Edge, fog, cloud | 100 fog nodes, 3 data centers | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | 5–20 fog nodes in experiments | NA |
| [] | 3 | EDC, Fog nodes, cloud data centers | Distributed fog nodes between data sources and the cloud | NA |
| [] | 4 | Perception layer, fog layer, cloud layer, communication layer | Dynamic based on vehicles and lanes | NA |
| [] | 3 | End users, fog nodes, cloud layer | 100 fog nodes, centralized cloud servers | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | Varies with the application; fog servers categorized into overloaded, balanced, and underloaded | NA |
| [] | 3 | IoT devices layer, fog layer, cloud layer | 20 fog nodes, 6 cloud nodes | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | Dynamic, fog nodes per region, cloud nodes | NA |
| [] | 3 | IoT device layer, fog layer, cloud layer | 1–5 fog nodes as gateways | 10 clusters |
| [] | 3 | End-user layer, fog layer, cloud layer | 2 to 200 fog nodes, dynamically grouped into clusters | 10 clusters (20 nodes each) |
| [] | 3 | IoT layer, fog layer, cloud layer | 200 fog servers, dynamic categorization | NA |
| [] | 4 | WGL, FCL, CCL, RAL | Dynamic, based on the number of IoT devices, fog nodes, and cloud servers | NA |
| [] | 3 | Infrastructure layer, fog layer, cloud layer | 10 fog nodes, 1 cloud node | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | 15 IoT devices, 8 fog nodes, 30–180 VMs, 1 cloud data center | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | 5–20 fog nodes | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | 20 fog nodes, 30 cloud nodes | NA |
| [] | 2 | Fog layer, consumer layer | 5 fog nodes in residential areas | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | Varies dynamically based on IoT devices, fog nodes, and cloud servers | NA |
| [] | 3 | IoT tier, fog tier, data-center tier | 100 fog nodes, 10–20 IoT devices, 3 cloud data centers | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | 30 fog nodes, multiple cloud nodes | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | 5–20 fog nodes with VMs, 1 cloud node | NA |
| [] | 3 | IoT layer, fog layer, cloud layer | Multiple fog nodes (small servers, routers, gateways) | NA |
| [] | 3 | IoT devices, fog nodes, cloud data centers | 15 fog nodes | NA |
| [] | 2 | Load balancer at the first stage and virtual machines/web servers at the second stage | Single load balancer, multiple virtual machines/web servers | NA |
| [] | 3 | Mobile devices, fog computing layer, cloud layer | 10 fog nodes, 2 cloud nodes | NA |
| [] | 3 | Edge, fog, cloud | 1 edge node per consumer, 1 fog node per load point, centralized cloud data center | NA |
| [] | 3 | IoT devices, fog layer, cloud layer | Mobile IoT devices, fog gateways, brokers, and processing servers | NA |
Table A2.
The advantages and disadvantages of load-balancing categories.
| Category | Advantages | Disadvantages | Representative Papers |
|---|---|---|---|
| Fundamental Strategies | Simple and predictable behavior in static environments; low computational overhead, ideal for systems with minimal complexity. | Lack of adaptability to dynamic workloads; inefficient resource utilization in heterogeneous systems. | [,] |
| Exact Optimization | Guarantees optimal solutions for latency and resource utilization; ideal for theoretical modeling and small-scale systems. | High computational demands; poor scalability, unsuitable for real-time and large-scale fog scenarios. | [,] |
| Heuristic Approaches | Simple and flexible with moderate computational efficiency; adaptable to system constraints, suitable for dynamic environments. | Often yield suboptimal solutions; highly dependent on parameter tuning and system-specific configurations; limited exploration capabilities. | [,,] |
| Meta-Heuristic Strategies | Excel in exploring complex solution spaces and multi-objective optimization; well suited for dynamic environments, balancing global exploration and local convergence. | Computationally intensive; require extensive tuning of fitness functions; risk of convergence to local optima without proper hybridization or enhancements. | [,,,,,] |
| ML/RL | Offer intelligent automation, adapting to changing workloads in real time; optimize across multiple objectives, suitable for dynamic and complex fog systems. | Require large training datasets; high implementation complexity; vulnerable to overfitting and poor performance in underrepresented scenarios. | [,,] |
| Fuzzy Logic | Handle uncertainty and imprecision effectively; low resource consumption and easily adaptable to diverse scenarios with clear fuzzy rules. | Dependence on expert-defined rules limits scalability; struggle in highly dynamic environments. | [,,] |
| Game Theory | Ensures fairness and efficiency in distributed and competitive environments; well suited for resource allocation in multi-agent systems. | Computationally complex to achieve equilibrium; dependent on accurate real-time data, limiting scalability in dynamic systems. | [,] |
| Probabilistic/Statistical | Simple and efficient for tasks involving statistical variability; effective in handling uncertainty within fog environments. | Oversimplify system dynamics, limiting applicability in deterministic or complex scenarios; limited adaptability can lead to suboptimal performance. | [,] |
| Hybrid Approaches | Combine strengths of multiple methods, such as meta-heuristics, ML/RL, or exact methods; effective for multi-objective optimization in dynamic and complex environments. | High complexity in implementation and parameter tuning; often require specialized hardware for deployment. | [,,,,] |
Table A3.
Categorization of load-balancing strategies in fog computing.
| Category | Description | Representative Papers |
|---|---|---|
| Fundamental Strategies | Basic approaches with predictable behavior are suitable for simple static environments. | [,,] |
| Exact Optimization | Optimization techniques guarantee the theoretical best outcomes, ideal for small-scale scenarios. | [,,,] |
| Heuristic Approaches | Flexible and efficient algorithms suited for dynamic environments with moderate complexity. | [,,,,,,,,,,,,,,,,,,,,,,,,,,,] |
| Meta-Heuristic Strategies | Advanced strategies exploring complex solution spaces, balancing exploration and exploitation. | [,,,,,] |
| ML/RL | Machine learning and reinforcement learning models for real-time adaptive optimization. | [,,,,] |
| Fuzzy Logic | Techniques for managing uncertainty using fuzzy rules, applicable to imprecise data scenarios. | [,,,,] |
| Game Theory | Game theory-based methods ensure fairness and efficiency in distributed systems. | [,] |
| Probabilistic/Statistical | Probabilistic models handling variability are effective in uncertain fog environments. | [,] |
| Hybrid Approaches | Integrated methods combining strengths of multiple approaches for multi-objective optimization. | [,,,,] |
Table A4.
Categorization of performance metrics in fog computing.
| Primary Category | Subcategories | Percentage |
|---|---|---|
| 1. Latency and Time Metrics | Latency (ms), Task Completion Time (sec), Response Time, Communication Delay, Waiting Time in Queue, Turnaround Time (TAT), Service Delay Time, Temporal Delay, Processing Time, Mean Service Time (MST) | 16% |
| 2. Resource Utilization Metrics | Resource Utilization (CPU%), Memory Usage, Number of Computing Resources, Network Utilization, Number of Used Devices, Bandwidth Utilization | 7% |
| 3. Energy Efficiency Metrics | Energy Consumption (W), Brown Energy Consumption | 4% |
| 4. Reliability and Fault Metrics | Fault Tolerance (Yes/No), Failure Rate, Access Level Violations, Deadline Violations, Number of Deadlines Missed | 9% |
| 5. Cost Metrics | Communication Cost, Execution Cost, Service Cost, Cost of Data Transmission | 7% |
| 6. Load-Balancing and Distribution Metrics | Load-Balancing Level (LBL), Workload Distribution, Queue Length, Task Delivery, Workflows | 10% |
| 7. Network and Scalability Metrics | Network Lifetime, Network Bandwidth, Jitter, Congestion, Scalability | 8% |
| 8. Prediction and Accuracy Metrics | Prediction Interval Coverage (PIC), Prediction Accuracy (AAPE), Prediction Efficiency, Accuracy, Accuracy of IRS Classification | 5% |
| 9. Quality of Service (QoS) Metrics | QoS Satisfaction Rate, Blocking Probability (bp), Success Ratio (SR), Fairness Index, Throughput (Tasks/sec), Efficiency, Scalability, Prediction Interval Coverage (PIC), Prediction Accuracy (AAPE), Average Processing Time (APT), Task Delivery | 18% |
| 10. Security Metrics | Encryption Time, Decryption Time, Handover Served Ratio (HSR), Onboard Unit Served Ratio (OSR) | 2% |
| 11. Specialized Metrics | Solution Convergence, Queue Management, Loss Rate, Consensus Time, Social Welfare, Offloading Success, Task Rejection Rate, Node Selection Success Rates | 14% |
Table A5.
Distribution of workload types in reviewed fog computing studies.
| Workload Type | Paper IDs | Description | Frequency | Percentage |
|---|---|---|---|---|
| Dynamic | [,,,,,,,,,,,] | Workload is dynamically generated based on real-time conditions, reflecting the variability and unpredictability of IoT systems. | 108 | 95.60% |
| Static | [,] | Fixed workloads without runtime changes, often used for benchmarking or theoretical analysis. | 2 | 1.80% |
| Dynamic and Data Intensive | [] | Combines dynamic nature with high data volume or computational intensity. | 1 | 0.90% |
| Static/Dynamic | [,] | Incorporates both static and dynamic workloads to evaluate hybrid or flexible algorithms. | 2 | 1.80% |
Table A6.
Mapping of evaluation tools for studies in fog computing.
| Study | Tool Name |
|---|---|
| [,,,,] | Custom Simulator |
| [] | COSCO Simulator |
| [,] | Discrete-event Simulator (YAFS) |
| [,] | Python (SimPy) |
| [] | Python (SciPy) |
| [,,] | Python (Custom) |
| [] | Java (Custom) |
| [,] | C (Custom) |
| [] | IoTSim-Osmosis |
| [,,] | MATLAB |
| [,,,,] | iFogSim |
| [,] | Kubernetes and Istio |
| [,] | OMNeT++ |
| [] | PFogSim |
| [] | EdgeCloudSim |
| [,] | SUMO Simulator |
| [,] | Mininet-WiFi |
| [,,,] | CloudSim |
| [] | Truffle Suite and Ganache |
| [,] | Docker |
| [,] | Real-World Testbeds |
| [] | AnyLogic |
| [,] | FogSim |
| [] | PySpark |
| [] | CloudAnalyst |
| [] | Apache Karaf |
Table A7.
Classification and definition of tools used for evaluating load-balancing algorithms in fog computing.
| Tool Name | Classification | Definition |
|---|---|---|
| iFogSim | Widely Recognized Simulator | A simulator designed for fog and IoT environments, providing capabilities for modeling latency, energy consumption, and network parameters. |
| CloudSim | Widely Recognized Simulator | A simulation platform for modeling cloud computing environments, extended for fog computing. |
| PFogSim | Widely Recognized Simulator | An extension of CloudSim designed for large-scale, heterogeneous fog environments. |
| FogSim | Widely Recognized Simulator | A modular simulator tailored for fog and cloud computing environments. |
| YAFS (Yet Another Fog Simulator) | Widely Recognized Simulator | A simulator focusing on fog-specific features and performance metrics. |
| OMNeT++ | Widely Recognized Simulator | A discrete-event network simulation framework for task distribution and communication modeling. |
| MATLAB | Widely Recognized Simulator | A computational environment supporting algorithm implementation and simulation, often used for fuzzy logic and deep learning. |
| Mininet/Mininet-WiFi | Network and IoT-Specific Tool | A network emulator for testing SDN and IoT architectures. |
| SUMO | Network and IoT-Specific Tool | A traffic simulation tool used for vehicular mobility and task-offloading evaluations. |
| IoTSim-Osmosis | Network and IoT-Specific Tool | A Java-based simulator built on the CloudSim framework for IoT and fog computing. |
| Truffle Suite | Blockchain and SDN Tool | A development framework for blockchain applications, used for writing and testing smart contracts. |
| Ganache | Blockchain and SDN Tool | A personal Ethereum blockchain simulator for testing blockchain-based implementations. |
| Docker | Custom Simulator Framework | A containerization platform used for deploying custom simulation environments. |
| SimPy | Python-Based Framework | A process-based discrete-event simulation framework in Python. |
| SciPy | Python-Based Framework | A Python library for optimization and simulation in custom environments. |
| Custom Simulators | Custom Simulator Framework | Tailored simulation environments designed for specific research scenarios. |
| Real-World Testbeds | Real-World Testbed | Physical setups using devices like Raspberry Pi for practical evaluations. |
| AnyLogic | Widely Recognized Simulator | A multi-method simulation modeling tool for evaluating complex systems. |
| EdgeCloudSim | Widely Recognized Simulator | A simulation environment for edge–fog–cloud computing scenarios. |
Table A8.
Emerging trends and technologies in fog computing research.
| Emerging Trend | Key Technologies/Methodologies | Relevant Studies |
|---|---|---|
| AI and Machine Learning | Predictive load balancing, intelligent resource management | [,,] |
| Edge Computing Integration | Enhancing processing capabilities for IoT | [,] |
| Reinforcement Learning (RL) | Privacy-aware load balancing, adaptive resource allocation | [,,,] |
| Intelligent Scheduling | Real-time data processing, decision-making adaptations | [,,,] |
| Multi-Layer Scheduling Methods | Complex scheduling for large-scale applications | [,,] |
| Clustering Integration | Improving load balancing in vehicular networks | [] |
| AI-Driven Task Allocation | Real-time prioritization for IoT systems | [,,] |
| Software-Defined Networking (SDN) | Flexibility in network management | [,,] |
| Serverless Computing | Function as a Service (FaaS) for IoT applications | [] |
| Adaptive Offloading Techniques | Machine learning for resource management in vehicular fog | [] |
| Fault Tolerance Mechanisms | Hybrid approaches incorporating RL | [] |
| Scalability and Resilience | Managing large-scale systems and dynamic environments | [,,] |
| Energy Management | Balancing energy efficiency with computational demands | [,,] |
| Security and Privacy Concerns | Addressing challenges in IoT data | [,,] |
| Interoperability Between Fog and Cloud | Seamless task distribution across resources | [,,] |
| Hybrid Optimization Approaches | Combining meta-heuristic algorithms for better performance | [,,] |
| Game-Theoretic Frameworks | Workload distribution strategies | [] |
| Dynamic Resource Allocation | Adapting systems to real-time changes in workload | [,] |
| Blockchain Integration | Secure data handling and load balancing | [,,] |
| Integration of Renewable Energy Sources | Energy-efficient fog devices | [,] |
| Community-Based Placement | Distributing workloads effectively across nodes | [] |
| Hybrid Algorithms | Combining multiple methodologies for enhanced performance | [,] |
| Dynamic Offloading | Real-time task-offloading strategies | [,,,] |
| Fuzzy Logic Integration | Intelligent scheduling strategies | [] |
| Nature-Inspired Algorithms | Resource optimization strategies | [,] |
References
- Kashani, M.H.; Mahdipour, E. Load balancing algorithms in fog computing. IEEE Trans. Serv. Comput. 2022, 16, 1505–1521. [Google Scholar] [CrossRef]
- Kaur, M.; Aron, R. A systematic study of load balancing approaches in the fog computing environment. J. Supercomput. 2021, 77, 9202–9247. [Google Scholar] [CrossRef]
- Sadashiv, N. Load balancing in fog computing: A detailed survey. Int. J. Comput. Digit. Syst. 2023, 13, 729–750. [Google Scholar]
- Shakeel, H.; Alam, M. Load balancing approaches in cloud and fog computing environments: A framework, classification, and systematic review. Int. J. Cloud Appl. Comput. (IJCAC) 2022, 12, 1–24. [Google Scholar] [CrossRef]
- Ebneyousef, S.; Shirmarz, A. A taxonomy of load balancing algorithms and approaches in fog computing: A survey. Clust. Comput. 2023, 26, 3187–3208. [Google Scholar] [CrossRef]
- Keele, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Software Engineering Group, School of Computer Science and Mathematics, Keele University: Keele, UK; Department of Computer Science, University of Durham: Durham, UK, 2007; Volume 5. [Google Scholar] [CrossRef]
- Singh, J.; Singh, P.; Amhoud, E.M.; Hedabou, M. Energy-efficient and secure load balancing technique for SDN-enabled fog computing. Sustainability 2022, 14, 12951. [Google Scholar] [CrossRef]
- Javanmardi, S.; Shojafar, M.; Mohammadi, R.; Persico, V.; Pescapè, A. S-FoS: A secure workflow scheduling approach for performance optimization in SDN-based IoT-Fog networks. J. Inf. Secur. Appl. 2023, 72, 103404. [Google Scholar] [CrossRef]
- Gazori, P.; Rahbari, D.; Nickray, M. Saving time and cost on the scheduling of fog-based IoT applications using deep reinforcement learning approach. Future Gener. Comput. Syst. 2020, 110, 1098–1115. [Google Scholar] [CrossRef]
- Liu, W.; Li, C.; Zheng, A.; Zheng, Z.; Zhang, Z.; Xiao, Y. Fog Computing Resource-Scheduling Strategy in IoT Based on Artificial Bee Colony Algorithm. Electronics 2023, 12, 1511. [Google Scholar] [CrossRef]
- Chaudhry, R.; Rishiwal, V. An Efficient Task Allocation with Fuzzy Reptile Search Algorithm for Disaster Management in urban and rural area. Sustain. Comput. Inform. Syst. 2023, 39, 100893. [Google Scholar] [CrossRef]
- Khansari, M.E.; Sharifian, S. A scalable modified deep reinforcement learning algorithm for serverless IoT microservice composition infrastructure in fog layer. Future Gener. Comput. Syst. 2024, 153, 206–221. [Google Scholar] [CrossRef]
- Kazemi, S.M.; Ghanbari, S.; Kazemi, M.; Othman, M. Optimum scheduling in fog computing using the Divisible Load Theory (DLT) with linear and nonlinear loads. Comput. Netw. 2023, 220, 109483. [Google Scholar] [CrossRef]
- Forestiero, A.; Gentile, A.F.; Macri, D. A blockchain based approach for Fog infrastructure management leveraging on Non-Fungible Tokens. In Proceedings of the 2022 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Falerna, Italy, 12–15 September 2022; pp. 1–7. [Google Scholar]
- Ebrahim, M.; Hafid, A. Privacy-aware load balancing in fog networks: A reinforcement learning approach. Comput. Netw. 2023, 237, 110095. [Google Scholar] [CrossRef]
- Sarker, S.; Arafat, M.T.; Lameesa, A.; Afrin, M.; Mahmud, R.; Razzaque, M.A.; Iqbal, T. FOLD: Fog-dew infrastructure-aided optimal workload distribution for cloud robotic operations. Internet Things 2024, 26, 101185. [Google Scholar] [CrossRef]
- Menouer, T.; Cérin, C.; Darmon, P. KOptim: Kubernetes Optimization Framework. In Proceedings of the 2024 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), San Francisco, CA, USA, 27–31 May 2024; pp. 900–908. [Google Scholar]
- Mattia, G.P.; Pietrabissa, A.; Beraldi, R. A Load Balancing Algorithm for Equalising Latency Across Fog or Edge Computing Nodes. IEEE Trans. Serv. Comput. 2023, 16, 3129–3140. [Google Scholar] [CrossRef]
- Ameena, B.; Ramasamy, L. Drawer Cosine optimization enabled task offloading in fog computing. Expert Syst. Appl. 2025, 259, 125212. [Google Scholar] [CrossRef]
- Alotaibi, J.; Alazzawi, L. SaFIoV: A Secure and Fast Communication in Fog-based Internet-of-Vehicles using SDN and Blockchain. In Proceedings of the 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), Lansing, MI, USA, 9–11 August 2021; pp. 334–339. [Google Scholar]
- Xia, B.; Kong, F.; Zhou, J.; Tang, X.; Gong, H. A Delay-Tolerant Data Transmission Scheme for Internet of Vehicles Based on Software Defined Cloud-Fog Networks. IEEE Access 2020, 8, 65911–65922. [Google Scholar] [CrossRef]
- Singh, S.P. Effective load balancing strategy using fuzzy golden eagle optimization in fog computing environment. Sustain. Comput. Inform. Syst. 2022, 35, 100766. [Google Scholar] [CrossRef]
- Ibrahim, A.H.; Fayed, Z.T.; Faheem, H.M. Fog-Based CDN Framework for Minimizing Latency of Web Services Using Fog-Based HTTP Browser. Future Internet 2021, 13, 320. [Google Scholar] [CrossRef]
- Hameed, A.R.; Islam, S.u.; Ahmad, I.; Munir, K. Energy- and performance-aware load-balancing in vehicular fog computing. Sustain. Comput. Inform. Syst. 2021, 30, 100454. [Google Scholar] [CrossRef]
- Sethi, V.; Pal, S. FedDOVe: A Federated Deep Q-learning-based Offloading for Vehicular fog computing. Future Gener. Comput. Syst. 2023, 141, 96–105. [Google Scholar] [CrossRef]
- Liu, Z.; Dai, P.; Xing, H.; Yu, Z.; Zhang, W. A Distributed Algorithm for Task Offloading in Vehicular Networks With Hybrid Fog/Cloud Computing. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 4388–4401. [Google Scholar] [CrossRef]
- Veloso, A.F.D.S.; De Moura, M.C.L.; Mendes, D.L.D.S.; Junior, J.V.R.; Rabelo, R.A.L.; Rodrigues, J.J.P.C. Towards Sustainability using an Edge-Fog-Cloud Architecture for Demand-Side Management. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Melbourne, Australia, 17–20 October 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2021; pp. 1731–1736. [Google Scholar] [CrossRef]
- Nazeri, M.; Soltanaghaei, M.; Khorsand, R. A predictive energy-aware scheduling strategy for scientific workflows in fog computing. Expert Syst. Appl. 2024, 247, 123192. [Google Scholar] [CrossRef]
- Marwein, P.S.; Sur, S.N.; Kandar, D. Efficient load distribution in heterogeneous vehicular networks using hierarchical controllers. Comput. Netw. 2024, 254, 110805. [Google Scholar] [CrossRef]
- Wang, J.; Huang, D. Visual Servo Image Real-time Processing System Based on Fog Computing. Hum.-Centric Comput. Inf. Sci. 2022, 13, 1–14. [Google Scholar] [CrossRef]
- Liu, W.; Huang, G.; Zheng, A.; Liu, J. Research on the optimization of IIoT data processing latency. Comput. Commun. 2020, 151, 290–298. [Google Scholar] [CrossRef]
- Badidi, E.; Ragmani, A. An architecture for QoS-aware fog service provisioning. Comput. Sci. 2020, 170, 411–418. [Google Scholar] [CrossRef]
- Das, D.; Sengupta, S.; Satapathy, S.M.; Saini, D. HOGWO: A fog inspired optimized load balancing approach using hybridized grey wolf algorithm. Cluster Comput. 2024, 27, 13273–13294. [Google Scholar] [CrossRef]
- Rehman, A.U.; Ahmad, Z.; Jehangiri, A.I.; Ala’Anzy, M.A.; Othman, M.; Umar, A.I.; Ahmad, J. Dynamic Energy Efficient Resource Allocation Strategy for Load Balancing in Fog Environment. IEEE Access 2020, 8, 199829–199839. [Google Scholar] [CrossRef]
- Wu, B.; Lv, X.; Deyah Shamsi, W.; Gholami Dizicheh, E. Optimal deploying IoT services on the fog computing: A metaheuristic-based multi-objective approach. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 10010–10027. [Google Scholar] [CrossRef]
- Wu, Y.; Wang, Y.; Wei, Y.; Leng, S. Intelligent deployment of dedicated servers: Rebalancing the computing resource in IoT. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), Seoul, Republic of Korea, 6–9 April 2020; pp. 1–6. [Google Scholar]
- Hossain, M.T.; De Grande, R.E. Cloudlet Dwell Time Model and Resource Availability for Vehicular Fog Computing. In Proceedings of the 2021 IEEE/ACM 25th International Symposium on Distributed Simulation and Real Time Applications (DSRT), Valencia, Spain, 27–29 September 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2021. [Google Scholar] [CrossRef]
- Cheng, Y.; Vijayaraj, A.; Sree Pokkuluri, K.; Salehnia, T.; Montazerolghaem, A.; Rateb, R. Vehicular Fog Resource Allocation Approach for VANETs Based on Deep Adaptive Reinforcement Learning Combined With Heuristic Information. IEEE Access 2024, 12, 139056–139075. [Google Scholar] [CrossRef]
- Fayos-Jordan, R.; Felici-Castell, S.; Segura-Garcia, J.; Lopez-Ballester, J.; Cobos, M. Performance comparison of container orchestration platforms with low cost devices in the fog, assisting Internet of Things applications. Procedia Comput. Sci. 2020, 169, 102788. [Google Scholar] [CrossRef]
- Huber, S.; Pfandzelter, T.; Bermbach, D. Identifying Nearest Fog Nodes With Network Coordinate Systems. In Proceedings of the 2023 IEEE International Conference on Cloud Engineering (IC2E), Boston, MA, USA, 25–29 September 2023; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2023; pp. 222–223. [Google Scholar] [CrossRef]
- Kashyap, V.; Ahuja, R.; Kumar, A. A hybrid approach for fault-tolerance aware load balancing in fog computing. Clust. Comput. 2023, 27, 5217–5233. [Google Scholar] [CrossRef]
- Mattia, G.P.; Beraldi, R. On real-time scheduling in Fog computing: A Reinforcement Learning algorithm with application to smart cities. In Proceedings of the 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events, PerCom Workshops 2022, Pisa, Italy, 21–25 May 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2022; pp. 187–193. [Google Scholar] [CrossRef]
- Aldağ, M.; Kırsal, Y.; Ülker, S. An analytical modelling and QoS evaluation of fault-tolerant load balancer and web servers in fog computing. J. Supercomput. 2022, 78, 12136–12158. [Google Scholar] [CrossRef]
- Bhatia, M.; Sood, S.K.; Kaur, S. Quantumized approach of load scheduling in fog computing environment for IoT applications. Computing 2020, 102, 1097–1115. [Google Scholar] [CrossRef]
- Bukhari, A.A.; Hussain, F.K. Fuzzy logic trust-based fog node selection. Internet Things 2024, 27, 101293. [Google Scholar] [CrossRef]
- Hwang, R.H.; Lai, Y.-C.; Lin, Y.D. Queue-Length-Based Offloading for Delay Sensitive Applications in Federated Cloud-Edge-Fog Systems. In Proceedings of the IEEE Consumer Communications and Networking Conference, CCNC, Las Vegas, NV, USA, 6–9 January 2024; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2024; pp. 406–411. [Google Scholar] [CrossRef]
- Najafizadeh, A.; Salajegheh, A.; Rahmani, A.M.; Sahafi, A. Multi-objective Task Scheduling in cloud-fog computing using goal programming approach. Cluster Comput. 2021, 25, 141–165. [Google Scholar] [CrossRef]
- Shamsa, Z.; Rezaee, A.; Adabi, S.; Rahmani, A.M. A decentralized prediction-based workflow load balancing architecture for cloud/fog/IoT environments. Computing 2023, 106, 201–239. [Google Scholar] [CrossRef]
- Yin, C.; Fang, Q.; Li, H.; Peng, Y.; Xu, X.; Tang, D. An optimized resource scheduling algorithm based on GA and ACO algorithm in fog computing. J. Supercomput. 2024, 80, 4248–4285. [Google Scholar] [CrossRef]
- Premkumar, N.; Santhosh, R. Pelican optimization algorithm with blockchain for secure load balancing in fog computing. Multimed. Tools Appl. 2023, 83, 53417–53439. [Google Scholar] [CrossRef]
- Talaat, F.M.; Ali, H.A.; Saraya, M.S.; Saleh, A.I. Effective scheduling algorithm for load balancing in fog environment using CNN and MPSO. Knowl. Inf. Syst. 2022, 64, 773–797. [Google Scholar] [CrossRef]
- Azizi, S.; Shojafar, M.; Farzin, P.; Dogani, J. DCSP: A delay and cost-aware service placement and load distribution algorithm for IoT-based fog networks. Comput. Commun. 2024, 215, 9–20. [Google Scholar] [CrossRef]
- Baniata, H.; Anaqreh, A.; Kertesz, A. PF-BTS: A Privacy-Aware Fog-enhanced Blockchain-assisted task scheduling. Inf. Process. Manag. 2021, 58, 102393. [Google Scholar] [CrossRef]
- Ebrahim, M.; Hafid, A. Resilience and load balancing in Fog networks: A Multi-Criteria Decision Analysis approach. Microprocess. Microsyst. 2023, 101, 104893. [Google Scholar] [CrossRef]
- Abdulazeez, D.H.; Askar, S.K. A Novel Offloading Mechanism Leveraging Fuzzy Logic and Deep Reinforcement Learning to Improve IoT Application Performance in a Three-Layer Architecture Within the Fog-Cloud Environment. IEEE Access 2024, 12, 39936–39952. [Google Scholar] [CrossRef]
- Al Maruf, M.; Singh, A.; Azim, A.; Auluck, N. Faster Fog Computing Based Over-the-Air Vehicular Updates: A Transfer Learning Approach. IEEE Trans. Serv. Comput. 2022, 15, 3245–3259. [Google Scholar] [CrossRef]
- Stavrinides, G.L.; Karatza, H.D. Multicriteria scheduling of linear workflows with dynamically varying structure on distributed platforms. Simul. Model. Pract. Theory 2021, 112, 102369. [Google Scholar] [CrossRef]
- Johri, P.; Balu, V.; Jayaprakash, B.; Jain, A.; Thacker, C.; Kumari, A. Quality of service-based machine learning in fog computing networks for e-healthcare services with data storage system. Soft Comput. 2023. [Google Scholar] [CrossRef]
- Javaheri, D.; Gorgin, S.; Lee, J.-A.; Masdari, M. An improved discrete harris hawk optimization algorithm for efficient workflow scheduling in multi-fog computing. Sustain. Comput. Inform. Syst. 2022, 36, 100787. [Google Scholar] [CrossRef]
- Kaur, A.; Auluck, N. Scheduling algorithms for truly heterogeneous hierarchical fog networks. Softw. Pract. Exp. 2022, 52, 2411–2438. [Google Scholar] [CrossRef]
- Peralta, G.; Garrido, P.; Bilbao, J.; Agüero, R.; Crespo, P.M. Fog to cloud and network coded based architecture: Minimizing data download time for smart mobility. Simul. Model. Pract. Theory 2020, 101. [Google Scholar] [CrossRef]
- Qayyum, T.; Trabelsi, Z.; Malik, A.W.; Hayawi, K. Multi-Level Resource Sharing Framework Using Collaborative Fog Environment for Smart Cities. IEEE Access 2021, 9, 21859–21869. [Google Scholar] [CrossRef]
- Singh, S.; Mishra, A.K.; Arjaria, S.K.; Bhatt, C.; Pandey, D.S.; Yadav, R.K. Improved deep network-based load predictor and optimal load balancing in cloud-fog services. Concurr. Comput. 2024, 36, e8275. [Google Scholar] [CrossRef]
- Pourkiani, M.; Abedi, M. Machine learning based task distribution in heterogeneous fog-cloud environments. In Proceedings of the 2020 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Hvar, Croatia, 17–19 September 2020. [Google Scholar]
- Dehury, C.K.; Veeravalli, B.; Srirama, S.N. HeRAFC: Heuristic resource allocation and optimization in MultiFog-cloud environment. J. Parallel Distrib. Comput. 2024, 183, 104760. [Google Scholar] [CrossRef]
- Singh, S.; Sham, E.E.; Vidyarthi, D.P. Optimizing workload distribution in Fog-Cloud ecosystem: A JAYA based meta-heuristic for energy-efficient applications. Appl. Soft Comput. 2024, 154, 111391. [Google Scholar] [CrossRef]
- Gowri, V.; Baranidharan, B. An energy efficient and secure model using chaotic levy flight deep Q-learning in healthcare system. Sustain. Comput. Inform. Syst. 2023, 39, 100894. [Google Scholar] [CrossRef]
- Pallewatta, S.; Kostakos, V.; Buyya, R. MicroFog: A framework for scalable placement of microservices-based IoT applications in federated Fog environments. J. Syst. Softw. 2024, 209, 111910. [Google Scholar] [CrossRef]
- Zeng, D.; Gu, L.; Yao, H. Towards energy efficient service composition in green energy powered Cyber–Physical Fog Systems. Future Gener. Comput. Syst. 2020, 105, 757–765. [Google Scholar] [CrossRef]
- Beraldi, R.; Canali, C.; Lancellotti, R.; Mattia, G.P. Distributed load balancing for heterogeneous fog computing infrastructures in smart cities. Pervasive Mob. Comput. 2020, 67, 101221. [Google Scholar] [CrossRef]
- Khan, S.; Ali Shah, I.; Tairan, N.; Shah, H.; Faisal Nadeem, M. Optimal Resource Allocation in Fog Computing for Healthcare Applications. Comput. Mater. Contin. 2022, 71, 6147–6163. [Google Scholar] [CrossRef]
- Lone, K.; Sofi, S.A. e-TOALB: An efficient task offloading in IoT-fog networks. Concurr. Comput. 2024, 36, e7951. [Google Scholar] [CrossRef]
- Abbas, N.; Zhang, Y.; Taherkordi, A.; Skeie, T. Mobile edge computing: A survey. IEEE Internet Things J. 2017, 5, 450–465. [Google Scholar] [CrossRef]
- Shaik, S.; Baskiyar, S. Distributed service placement in hierarchical fog environments. Sustain. Comput. Inform. Syst. 2022, 34, 100744. [Google Scholar] [CrossRef]
- Boudieb, W.; Malki, A.; Malki, M.; Badawy, A.; Barhamgi, M. Microservice instances selection and load balancing in fog computing using deep reinforcement learning approach. Future Gener. Comput. Syst. 2024, 156, 77–94. [Google Scholar] [CrossRef]
- Fugkeaw, S.; Prasad Gupta, R.; Worapaluk, K. Secure and Fine-Grained Access Control With Optimized Revocation for Outsourced IoT EHRs With Adaptive Load-Sharing in Fog-Assisted Cloud Environment. IEEE Access 2024, 12, 82753–82768. [Google Scholar] [CrossRef]
- Stypsanelli, I.; Brun, O.; Prabhu, B.J. Performance Evaluation of Some Adaptive Task Allocation Algorithms for Fog Networks. In Proceedings of the 2021 IEEE 5th International Conference on Fog and Edge Computing (ICFEC2021), Melbourne, Australia, 10–13 May 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2021; pp. 84–88. [Google Scholar] [CrossRef]
- Liu, H.; Long, S.; Li, Z.; Fu, Y.; Zuo, Y.; Zhang, X. Revenue Maximizing Online Service Function Chain Deployment in Multi-Tier Computing Network. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 781–796. [Google Scholar] [CrossRef]
- Bala, M.I.; Chishti, M.A. Offloading in Cloud and Fog Hybrid Infrastructure Using iFogSim. In Proceedings of the 10th International Conference on Cloud Computing, Data Science & Engineering, Noida, India, 29–31 January 2020; pp. 421–426. [Google Scholar]
- Tran-Dang, H.; Kim, D.-S. Dynamic Task Offloading Approach for Task Delay Reduction in the IoT-enabled Fog Computing Systems. In Proceedings of the 2022 IEEE 20th International Conference on Industrial Informatics (INDIN), Perth, Australia, 25–28 July 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2022; pp. 61–66. [Google Scholar] [CrossRef]
- Hayawi, K.; Anwar, Z.; Malik, A.W.; Trabelsi, Z. Airborne Computing: A Toolkit for UAV-Assisted Federated Computing for Sustainable Smart Cities. IEEE Internet Things J. 2023, 10, 18941–18950. [Google Scholar] [CrossRef]
- Huang, X.; Cui, Y.; Chen, Q.; Zhang, J. Joint Task Offloading and QoS-Aware Resource Allocation in Fog-Enabled Internet-of-Things Networks. IEEE Internet Things J. 2020, 7, 7194–7206. [Google Scholar] [CrossRef]
- Zhang, T.; Yue, D.; Yu, L.; Dou, C.; Xie, X. Joint Energy and Workload Scheduling for Fog-Assisted Multimicrogrid Systems: A Deep Reinforcement Learning Approach. IEEE Syst. J. 2023, 17, 164–175. [Google Scholar] [CrossRef]
- Firouzi, F.; Farahani, B.; Panahi, E.; Barzegari, M. Task Offloading for Edge-Fog-Cloud Interplay in the Healthcare Internet of Things (IoT). In Proceedings of the 2021 IEEE International Conference on Omni-Layer Intelligent Systems (COINS) 2021, Barcelona, Spain, 23–26 August 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2021; pp. 1–8. [Google Scholar] [CrossRef]
- Keshri, R.; Vidyarthi, D.P. An ML-based task clustering and placement using hybrid Jaya-gray wolf optimization in fog-cloud ecosystem. Concurr. Comput. 2024, 36, e8109. [Google Scholar] [CrossRef]
- Nethaji, S.V.; Chidambaram, M.; Forestiero, A. Differential Grey Wolf Load-Balanced Stochastic Bellman Deep Reinforced Resource Allocation in Fog Environment. Appl. Comput. Intell. Soft Comput. 2022, 2022, 1–13. [Google Scholar] [CrossRef]
- Singh, S.P.; Kumar, R.; Sharma, A.; Nayyar, A. Leveraging energy-efficient load balancing algorithms in fog computing. Concurr. Comput. 2022, 34, e5913. [Google Scholar] [CrossRef]
- Khan, S.; Shah, I.A.; Nadeem, M.F.; Jan, S.; Whangbo, T.; Ahmad, S. Optimal Resource Allocation and Task Scheduling in Fog Computing for Internet of Medical Things Applications. Hum.-Centric Comput. Inf. Sci. 2023, 13. [Google Scholar] [CrossRef]
- Bala, M.I.; Chishti, M.A. Optimizing the Computational Offloading Decision in Cloud-Fog Environment. In Proceedings of the 2020 International Conference on Innovative Trends in Information Technology (ICITIIT), Kottayam, India, 13–14 February 2020; pp. 1–5. [Google Scholar]
- Tajalli, S.Z.; Kavousi-Fard, A.; Mardaneh, M.; Khosravi, A.; Razavi-Far, R. Uncertainty-Aware Management of Smart Grids Using Cloud-Based LSTM-Prediction Interval. IEEE Trans. Cybern. 2022, 52, 9964–9977. [Google Scholar] [CrossRef]
- Casadei, R.; Fortino, G.; Pianini, D.; Placuzzi, A.; Savaglio, C.; Viroli, M. A Methodology and Simulation-Based Toolchain for Estimating Deployment Performance of Smart Collective Services at the Edge. IEEE Internet Things J. 2022, 9, 20136–20148. [Google Scholar] [CrossRef]
- Yakubu, I.Z.; Murali, M. An efficient meta-heuristic resource allocation with load balancing in IoT-Fog-cloud computing environment. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 2981–2992. [Google Scholar] [CrossRef]
- Kaur, M.; Aron, R. FOCALB: Fog Computing Architecture of Load Balancing for Scientific Workflow Applications. J. Grid Comput. 2021, 19, 40. [Google Scholar] [CrossRef]
- Singh, S.P.; Kumar, R.; Sharma, A.; Abawajy, J.H.; Kaur, R. Energy efficient load balancing hybrid priority assigned laxity algorithm in fog computing. Cluster Comput. 2022, 25, 3325–3342. [Google Scholar] [CrossRef]
- Dubey, K.; Sharma, S.C.; Kumar, M. A Secure IoT Applications Allocation Framework for Integrated Fog-Cloud Environment. J. Grid Comput. 2022, 20, 5. [Google Scholar] [CrossRef]
- Ibrahim AlShathri, S.; Hassan, D.S.M.; Allaoua Chelloug, S. Latency-Aware Dynamic Second Offloading Service in SDN-Based Fog Architecture. Comput. Mater. Contin. 2023, 75, 1501–1526. [Google Scholar] [CrossRef]
- Hassan, S.R.; Rehman, A.U.; Alsharabi, N.; Arain, S.; Quddus, A.; Hamam, H. Design of load-aware resource allocation for heterogeneous fog computing systems. PeerJ Comput. Sci. 2024, 10, e1986. [Google Scholar] [CrossRef] [PubMed]
- Singh, A.; Auluck, N. Load balancing aware scheduling algorithms for fog networks. In Software—Practice and Experience; John Wiley and Sons Ltd.: Hoboken, NJ, USA, 2020; pp. 2012–2030. [Google Scholar] [CrossRef]
- Tran-Dang, H.; Kim, D.-S. Dynamic collaborative task offloading for delay minimization in the heterogeneous fog computing systems. J. Commun. Netw. 2023, 25, 244–252. [Google Scholar] [CrossRef]
- Huang, X.; Fan, W.; Chen, Q.; Zhang, J. Energy-Efficient Resource Allocation in Fog Computing Networks With the Candidate Mechanism. IEEE Internet Things J. 2020, 7, 8502–8512. [Google Scholar] [CrossRef]
- Yi, C.; Cai, J.; Zhu, K.; Wang, R. A Queueing Game Based Management Framework for Fog Computing With Strategic Computing Speed Control. IEEE Trans. Mob. Comput. 2022, 21, 1537–1551. [Google Scholar] [CrossRef]
- Potu, N.; Bhukya, S.; Jatoth, C.; Parvataneni, P. Quality-aware energy efficient scheduling model for fog computing comprised IoT network. Comput. Electr. Eng. 2022, 97, 107603. [Google Scholar] [CrossRef]
- Sabireen, H.; Venkataraman, N. A Hybrid and Light Weight Metaheuristic Approach with Clustering for Multi-Objective Resource Scheduling and Application Placement in Fog Environment. Expert Syst. Appl. 2023, 223, 119895. [Google Scholar] [CrossRef]
- Ramezani Shahidani, F.; Ghasemi, A.; Toroghi Haghighat, A.; Keshavarzi, A. Task scheduling in edge-fog-cloud architecture: A multi-objective load balancing approach using reinforcement learning algorithm. Computing 2023, 105, 1337–1359. [Google Scholar] [CrossRef]
- Sahil; Sood, S.K.; Chang, V. Fog-Cloud-IoT centric collaborative framework for machine learning-based situation-aware traffic management in urban spaces. Computing 2022, 106, 1193–1225. [Google Scholar] [CrossRef]
- Talaat, F.M.; Saraya, M.S.; Saleh, A.I.; Ali, H.A.; Ali, S.H. A load balancing and optimization strategy (LBOS) using reinforcement learning in fog computing environment. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 4951–4966. [Google Scholar] [CrossRef]
- Abbasi, M.; Yaghoobikia, M.; Rafiee, M.; Khosravi, M.R.; Menon, V.G. Optimal Distribution of Workloads in Cloud-Fog Architecture in Intelligent Vehicular Networks. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4706–4715. [Google Scholar] [CrossRef]
- Mota, E.; Barbosa, J.; Figueiredo, G.B.; Peixoto, M.; Prazeres, C. A self-configuration framework for balancing services in the fog of things. Internet Things Cyber-Phys. Syst. 2024, 4, 318–332. [Google Scholar] [CrossRef]
- Yang, J. Low-latency cloud-fog network architecture and its load balancing strategy for medical big data. J. Ambient. Intell. Humaniz. Comput. 2020. [Google Scholar] [CrossRef]
- Jain, A.; Jatoth, C.; Gangadharan, G.R. Bi-level optimization of resource allocation and appliance scheduling in residential areas using a Fog of Things (FOT) framework. Cluster Comput. 2024, 27, 219–229. [Google Scholar] [CrossRef]
- Daghayeghi, A.; Nickray, M. Delay-Aware and Energy-Efficient Task Scheduling Using Strength Pareto Evolutionary Algorithm II in Fog-Cloud Computing Paradigm. Wirel. Pers. Commun. 2024, 138, 409–457. [Google Scholar] [CrossRef]
- Alqahtani, F.; Amoon, M.; Nasr, A.A. Reliable scheduling and load balancing for requests in cloud-fog computing. Peer-to-Peer Netw. Appl. 2021, 14, 1905–1916. [Google Scholar] [CrossRef]
- Deng, W.; Zhu, L.; Shen, Y.; Zhou, C.; Guo, J.; Cheng, Y. A novel multi-objective optimized DAG task scheduling strategy for fog computing based on container migration mechanism. Wirel. Netw. 2024, 31, 1005–1019. [Google Scholar] [CrossRef]
- Chuang, Y.-T.; Hsiang, C.-S. A popularity-aware and energy-efficient offloading mechanism in fog computing. J. Supercomput. 2022, 78, 19435–19458. [Google Scholar] [CrossRef]
- Oprea, S.-V.; Bâra, A. An Edge-Fog-Cloud computing architecture for IoT and smart metering data. Peer Peer Netw. Appl. 2023, 16, 818–845. [Google Scholar] [CrossRef]
- Ali, H.S.; Sridevi, R. Mobility and Security Aware Real-Time Task Scheduling in Fog-Cloud Computing for IoT Devices: A Fuzzy-Logic Approach. Comput. J. 2024, 67, 782–805. [Google Scholar] [CrossRef]
- Verma, R.; Chandra, S. HBI-LB: A Dependable Fault-Tolerant Load Balancing Approach for Fog based Internet-of-Things Environment. J. Supercomput. 2023, 79, 3731–3749. [Google Scholar] [CrossRef]
- Chiang, M.; Zhang, T. Fog and IoT: An overview of research opportunities. IEEE Internet Things J. 2016, 3, 854–864. [Google Scholar] [CrossRef]
- Mahapatra, A.; Majhi, S.K.; Mishra, K.; Pradhan, R.; Rao, D.C.; Panda, S.K. An energy-aware task offloading and load balancing for latency-sensitive IoT applications in the Fog-Cloud continuum. IEEE Access 2024, 12, 14334–14349. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).