Search Results (243)

Search Parameters:
Keywords = Edge/Fog/Cloud Computing

20 pages, 899 KB  
Article
Proximity-Aware VM Placement in Multi-Layer Fog Computing for Efficient Resource Management: Performance Evaluation Under a Gaming Application Scenario
by Sreebha Bhaskaran and Supriya Muthuraman
Computers 2026, 15(4), 225; https://doi.org/10.3390/computers15040225 - 3 Apr 2026
Viewed by 296
Abstract
The rapid proliferation of mobile devices, particularly smartphones and tablets, has transformed digital entertainment, with mobile gaming emerging as one of the fastest-growing digital segments. Such applications are inherently latency-sensitive and require effective resource management and seamless mobility support. To address these issues, this paper proposes a four-layer infrastructure that combines edge, fog, and cloud computing with Software-Defined Networking (SDN), supported by a lightweight proximity-aware heuristic placement strategy and mobility management. The proposed architecture decomposes the gaming functionality into containerized microservices and uses clustering algorithms to enable coordinated access to resources by edge and fog nodes. A dynamic, lightweight, proximity-aware virtual machine placement algorithm is presented that deploys application modules closer to users according to resource availability and mobility. The proposed work is simulated using iFogSim2. Under dynamic user mobility, the proposed model reduces latency by up to 73 percent and improves the task completion rate by 25 percent relative to baseline configurations. These results indicate that the proposed strategy can be effective in improving the performance of latency-sensitive mobile gaming applications in edge-fog networks.
(This article belongs to the Section Cloud Continuum and Enabled Applications)
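The proximity-aware placement idea in this abstract can be sketched as a greedy heuristic: among nodes with enough free capacity, pick the one closest to the user. This is a minimal illustration, not the paper's actual algorithm (which also handles mobility and multi-layer clustering in iFogSim2); the node fields (`free_mips`, `x`, `y`) are assumptions.

```python
import math

def place_module(module_mips, user_pos, nodes):
    """Greedy proximity-aware placement: choose the nearest node that can
    host the module. Node fields are hypothetical for illustration."""
    feasible = [n for n in nodes if n["free_mips"] >= module_mips]
    if not feasible:
        return None  # no edge/fog capacity: fall back to the cloud tier
    # Pick the feasible node with the smallest Euclidean distance to the user.
    return min(feasible, key=lambda n: math.dist(user_pos, (n["x"], n["y"])))
```

A real placement strategy would also re-run this decision as users move, migrating modules when a closer feasible node appears.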

45 pages, 3695 KB  
Article
Towards a Reference Architecture for Machine Learning Operations
by Miguel Ángel Mateo-Casalí, Andrés Boza and Francisco Fraile
Computers 2026, 15(4), 218; https://doi.org/10.3390/computers15040218 - 1 Apr 2026
Viewed by 395
Abstract
Industrial organisations increasingly rely on machine learning (ML) to improve quality, maintenance, and planning in Industry 4.0/5.0 ecosystems. However, turning experimental models into reliable services on the production floor remains complex due to the heterogeneity of operational technologies (OTs) and information technologies (ITs), including implementation constraints, latency in edge-fog-cloud scenarios, governance requirements, and continuous performance degradation caused by data drift. Although Machine Learning Operations (MLOps) provides lifecycle practices for deployment, monitoring, and retraining, the evidence is fragmented across tool-centric descriptions, case-specific pipelines, and conceptual architectures, offering limited guidance on which industrial constraints should inform architectural decisions and how to evaluate solutions. This work addresses that gap through a PRISMA-guided systematic review of 49 studies on industrial MLOps (with the search and screening primarily targeting Industry 4.0/IIoT operationalisation contexts, as reflected in the search strategy and corpus) and an evidence-based synthesis of principles, challenges, lifecycle practices, and enabling technologies. From this synthesis, industrial requirements are derived that encompass OT/IT integration, edge-fog-cloud orchestration, security and traceability, and observability-based lifecycle control. On this basis, a reference architecture is proposed that maps these requirements to functional layers, data and control flows, and verifiable responsibilities. To support reproducibility and practical inspectability, the article also presents an open-source architectural instantiation aligned with the proposed decomposition. Finally, the evaluation is illustrated through a predictive maintenance use case (tool breakage) in a single CNC machining cell, where the objective is to demonstrate end-to-end feasibility under realistic operational constraints rather than cross-scenario superiority or broad industrial generalisability.
(This article belongs to the Special Issue Machine Learning: Innovation, Implementation, and Impact)

20 pages, 315 KB  
Systematic Review
Green Scheduling and Task Offloading in Edge Computing: A Systematic Review
by Adriana Rangel Ribeiro, Ana Clara Santos Andrade, Gabriel Leal dos Santos, Guilherme Dinarte Marcondes Lopes, Edvard Martins de Oliveira, Adler Diniz de Souza and Jeremias Barbosa Machado
Network 2026, 6(1), 17; https://doi.org/10.3390/network6010017 - 16 Mar 2026
Viewed by 313
Abstract
This paper presents a Systematic Literature Review (SLR) on green scheduling and task offloading strategies for energy optimization in edge computing environments. The evolution of low-latency, high-performance applications has driven the widespread adoption of distributed computing paradigms such as Edge Computing, Fog-Cloud architectures, and the Internet of Things (IoT). In this context, Mobile Edge Computing (MEC) is often combined with Unmanned Aerial Vehicles (UAVs) to extend computational capabilities to areas with limited infrastructure, bringing processing closer to the data source to reduce latency and improve scalability. Nevertheless, these systems encounter substantial energy-related challenges, particularly in battery-powered or resource-constrained environments. To address these concerns, green computing strategies—especially energy-efficient scheduling and task offloading—have emerged as promising approaches to optimize energy usage in edge environments. Green scheduling optimizes task allocation to minimize energy consumption, whereas offloading redistributes workloads from resource-constrained devices to edge or cloud servers. Increasingly, these techniques are enhanced through artificial intelligence (AI) and machine learning (ML), enabling adaptive and context-aware decision-making in dynamic environments. The review synthesizes the most widely adopted strategies for energy-efficient scheduling and task offloading in edge computing, highlighting their impact on sustainability and performance; it provides a comprehensive view of the state of the art, examines how architectural contexts influence energy-aware decisions, and highlights the role of AI/ML in enabling intelligent and sustainable edge systems. The findings reveal current research gaps and outline future directions to advance the development of robust, scalable, and environmentally responsible computing infrastructures.
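A common building block behind the offloading strategies this review surveys is a per-task energy comparison: offload when transmitting the task's input data costs less device energy than executing the task locally. The following is a generic textbook-style sketch, not a rule taken from any particular surveyed paper; all parameter names are illustrative.

```python
def should_offload(cycles, data_bits, f_local_hz, p_compute_w, rate_bps, p_tx_w):
    """Return True when offloading is expected to save device energy.

    e_local: energy (J) to run `cycles` CPU cycles at f_local_hz drawing p_compute_w.
    e_tx:    energy (J) to transmit `data_bits` over an uplink of rate_bps at p_tx_w.
    """
    e_local = p_compute_w * cycles / f_local_hz
    e_tx = p_tx_w * data_bits / rate_bps
    return e_tx < e_local
```

Real green-scheduling policies extend this with deadlines, server queueing delay, and the energy cost of receiving results, but the local-versus-transmission trade-off remains the core decision.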

27 pages, 2849 KB  
Systematic Review
Intrusion Detection in Fog Computing: A Systematic Review of Security Advances and Challenges
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Computers 2026, 15(3), 169; https://doi.org/10.3390/computers15030169 - 5 Mar 2026
Viewed by 630
Abstract
Fog computing extends cloud services to the network edge to support low-latency IoT applications. However, since fog environments are distributed and resource-constrained, intrusion detection systems must be adapted to defend against cyberattacks while keeping computation and communication overhead minimal. This systematic review presents research on intrusion detection systems (IDSs) for fog computing and synthesizes advances and research gaps. The study was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. Scopus and Web of Science were searched in the title field using TITLE/TI = ("intrusion detection" AND "fog computing") for 2021–2025. The inclusion criteria were (i) 2021–2025 publications, (ii) journal or conference papers, (iii) English language, and (iv) open access availability; duplicates were removed programmatically using a DOI-first key with a title, year, and author fallback. The search identified 8560 records, of which 4905 were unique and included for qualitative grouping and bibliometric synthesis. Metadata (year, venue, authors, affiliations, keywords, and citations) were extracted and analyzed in Python to compute trends and collaboration patterns. Intrusion detection systems in fog networks were categorized into traditional/signature-based, machine learning, deep learning, and hybrid/ensemble approaches. Hybrid and DL approaches reported accuracies ranging from 95% to 99% on benchmark datasets (such as NSL-KDD, UNSW-NB15, CIC-IDS2017, KDD99, and BoT-IoT). Notable bottlenecks included computational load relative to real-time latency on resource-constrained nodes, elevated false-positive rates for anomaly detection under concept drift, limited generalization to unseen attacks, privacy risks from centralizing data, and limited real-world validation. Bibliometric analyses highlighted the field's concentration in fast-turnaround, open-access journals such as IEEE Access and Sensors, a small number of highly collaborative author clusters, and dominant terms such as "learning," "federated," "ensemble," "lightweight," and "explainability." Emerging directions include federated and distributed training to preserve privacy, as well as online/continual learning adaptation. Future work should include real-world evaluation of fog networks, ultra-lightweight yet adaptive hybrid IDSs, self-learning, and secure cooperative frameworks. These insights help researchers select appropriate IDS models for fog networks.
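The DOI-first deduplication step described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the record field names (`doi`, `title`, `year`, `authors`) are assumptions.

```python
import re

def dedup_key(record):
    """DOI-first deduplication key, falling back to normalized
    title + year + first author when no DOI is present."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    title = re.sub(r"\W+", "", (record.get("title") or "").lower())
    first_author = (record.get("authors") or [""])[0].lower()
    return ("tya", title, record.get("year"), first_author)

def deduplicate(records):
    """Keep the first occurrence of each key, preserving input order."""
    seen, unique = set(), []
    for rec in records:
        key = dedup_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

Normalizing the title (lowercasing, stripping punctuation and whitespace) is what lets near-identical records from two databases collapse onto one key.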

42 pages, 3268 KB  
Article
LITO: Lemur-Inspired Task Offloading for Edge–Fog–Cloud Continuum Systems
by Asma Almulifi and Heba Kurdi
Sensors 2026, 26(5), 1497; https://doi.org/10.3390/s26051497 - 27 Feb 2026
Viewed by 366
Abstract
Edge, fog, and cloud continuum architectures that interconnect resource-constrained devices, intermediate edge servers, and remote cloud data centers face persistent challenges in handling heterogeneous and latency-sensitive workloads while reducing energy consumption and improving resource utilization. Classical task offloading approaches either rely on static heuristics, which lack adaptability to dynamic conditions, or on metaheuristic optimizers, which often incur high computational overhead and centralized coordination. This paper proposes LITO, a lemur-inspired task offloading algorithm for edge, fog, and cloud continuum systems that models the infrastructure as a social system in which computing nodes assume distinct roles that mirror lemur social hierarchies. Building on an abstracted model of lemur group behavior, LITO incorporates two key lemur-inspired mechanisms: an energy-aware task assignment mechanism based on sun basking, a thermoregulation behavior in which lemurs seek favorable warm spots, mapped here to selecting energetically efficient execution nodes, and a cooperative scheduling policy based on huddling, group clustering under stress, mapped here to sharing load among overloaded nodes. These mechanisms are combined with a continual supervised policy-learning layer with contextual bandit feedback that refines offloading decisions from online feedback. The resulting multi-objective formulation jointly minimizes energy consumption and deadline violations while maximizing resource utilization and throughput under high-load conditions in the edge and fog segment of the continuum. Simulations under diverse workload regimes and task complexities show that LITO outperforms representative multi-objective offloading baselines in terms of energy consumption, resource utilization, latency, Service Level Agreement (SLA) violations, and throughput in congested scenarios.
(This article belongs to the Section Internet of Things)

21 pages, 2079 KB  
Article
Assuring Brokerage Quality in the Cloud–Edge Continuum
by Evangelos Barmpas, Simeon Veloudis, Yiannis Verginadis and Iraklis Paraskakis
Future Internet 2026, 18(2), 107; https://doi.org/10.3390/fi18020107 - 19 Feb 2026
Viewed by 358
Abstract
The Cloud–Edge Continuum (CEC) has emerged as a paradigm for distributing computational resources across cloud, fog, and edge layers, enabling latency-sensitive applications to operate efficiently. However, ensuring the quality of service (QoS) brokerage in such environments remains a challenge. Existing frameworks primarily focus on resource management techniques such as allocation, scheduling, and offloading but fail to address the quality assurance of the brokerage process itself. This paper introduces SLA governance as a means of ensuring the quality of service brokerage by validating, through automated reasoning, Service Level Agreements (SLAs) against meta-quality constraints: high-level policies that define permissible QoS conditions. We propose an ontology-driven approach that leverages the ODRL ontology for representing SLAs and capturing meta-quality constraints. Our method also enables introspective reasoning to ensure internal SLA consistency. Additionally, we integrate SLA governance with a real-time monitoring framework, the Event Management System (EMS), to continuously track workload performance and trigger SLA adaptation when necessary. This integration ensures that SLA-based brokerage decisions remain dynamic and context-aware.
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)

25 pages, 2045 KB  
Article
A Comparative Analysis of Self-Aware Reinforcement Learning Models for Real-Time Intrusion Detection in Fog Networks
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Future Internet 2026, 18(2), 100; https://doi.org/10.3390/fi18020100 - 14 Feb 2026
Viewed by 505
Abstract
Fog computing extends cloud services to the network edge, enabling low-latency processing for Internet of Things (IoT) applications. However, this distributed approach is vulnerable to a wide range of attacks, necessitating advanced intrusion detection systems (IDSs) that operate under resource constraints. This study proposes integrating self-awareness (online learning and concept drift adaptation) into a lightweight RL (reinforcement learning)-based IDS for fog networks and quantitatively comparing it with non-RL static thresholds and bandit-based approaches in real time. Novel self-aware reinforcement learning (RL) models, the Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (HATS-RL) model and the Federated Hierarchical Adaptive Thompson Sampling–Reinforcement Learning (F-HATS-RL) model, were proposed for real-time intrusion detection in a fog network. These self-aware RL policies integrated online uncertainty estimation and concept-drift detection to adapt to evolving attacks. The RL models were benchmarked against the static threshold (ST) model and a widely adopted linear bandit (Linear Upper Confidence Bound, LinUCB). A realistic fog network simulator with heterogeneous nodes and streaming traffic, including multi-type attack bursts and gradual concept drift, was established. The models' detection performance was compared using metrics including latency, energy consumption, detection accuracy, the area under the precision–recall curve (AUPR), and the area under the receiver operating characteristic curve (AUROC). Notably, the federated self-aware agent (F-HATS-RL) achieved the best AUROC (0.933) and AUPR (0.857), with a latency of 0.27 ms and the lowest energy consumption of 0.0137 mJ, indicating its ability to detect intrusions in fog networks with minimal energy. The findings suggest that self-aware RL agents can detect dynamically changing attack patterns in traffic and adapt accordingly, resulting in more stable long-term performance, whereas a static model's accuracy degrades under drift.
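The Thompson-sampling core behind such bandit-based detectors can be illustrated with a standard Beta-Bernoulli agent: sample a success probability per arm from its posterior and play the arm with the highest sample. This is a generic sketch, not the HATS-RL implementation; mapping arms to detection thresholds and rewards to correct detections is an assumption for illustration.

```python
import random

class ThompsonBandit:
    """Beta-Bernoulli Thompson sampling over a set of candidate arms
    (e.g., detection thresholds). Generic sketch, not HATS-RL."""

    def __init__(self, n_arms):
        # Beta(1, 1) priors: uniform over each arm's success probability.
        self.alpha = [1.0] * n_arms
        self.beta = [1.0] * n_arms

    def select(self):
        # Draw one posterior sample per arm; play the best-looking arm.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        # Binary reward (e.g., 1 = correct detection) updates the posterior.
        if reward:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1
```

Because exploration comes from posterior sampling rather than a fixed bonus term, the agent keeps probing under-tried arms, which is what lets it re-adapt when concept drift changes which arm is best.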

69 pages, 31002 KB  
Review
Next-Gen Explainable AI (XAI) for Federated and Distributed Internet of Things Systems: A State-of-the-Art Survey
by Aristeidis Karras, Anastasios Giannaros, Natalia Amasiadi and Christos Karras
Future Internet 2026, 18(2), 83; https://doi.org/10.3390/fi18020083 - 4 Feb 2026
Viewed by 1108
Abstract
Background: Explainable Artificial Intelligence (XAI) is deployed in Internet of Things (IoT) ecosystems for smart cities and precision agriculture, where opaque models can compromise trust, accountability, and regulatory compliance. Objective: This survey investigates how XAI is currently integrated into distributed and federated IoT architectures and identifies systematic gaps in evaluation under real-world resource constraints. Methods: A structured search across IEEE Xplore, ACM Digital Library, ScienceDirect, SpringerLink, and Google Scholar targeted publications related to XAI, IoT, edge/fog computing, smart cities, smart agriculture, and federated learning. Relevant peer-reviewed works were synthesized along three dimensions: deployment tier (device, edge/fog, cloud), explanation scope (local vs. global), and validation methodology. Results: The analysis reveals a persistent resource–interpretability gap: computationally intensive explainers are frequently applied on constrained edge and federated platforms without explicitly accounting for latency, memory footprint, or energy consumption. Only a minority of studies quantify privacy–utility effects or address causal attribution in sensor-rich environments, limiting the reliability of explanations in safety- and mission-critical IoT applications. Contribution: To address these shortcomings, the survey introduces a hardware-centric evaluation framework with the Computational Complexity Score (CCS), Memory Footprint Ratio (MFR), and Privacy–Utility Trade-off (PUT) metrics and proposes a hierarchical IoT–XAI reference architecture, together with the conceptual Internet of Things Interpretability Evaluation Standard (IOTIES) for cross-domain assessment. Conclusions: The findings indicate that IoT–XAI research must shift from accuracy-only reporting to lightweight, model-agnostic, and privacy-aware explanation pipelines that are explicitly budgeted for edge resources and aligned with the needs of heterogeneous stakeholders in smart city and agricultural deployments.
(This article belongs to the Special Issue Human-Centric Explainability in Large-Scale IoT and AI Systems)

26 pages, 461 KB  
Systematic Review
A Systematic Review of Federated and Cloud Computing Approaches for Predicting Mental Health Risks
by Iram Fiaz, Nadia Kanwal and Amro Al-Said Ahmad
Sensors 2026, 26(1), 229; https://doi.org/10.3390/s26010229 - 30 Dec 2025
Viewed by 1007
Abstract
Mental health disorders affect large numbers of people worldwide and are a major cause of long-term disability. Digital health technologies such as mobile apps and wearable devices now generate rich behavioural data that could support earlier detection and more personalised care. However, these data are highly sensitive and distributed across devices and platforms, which makes privacy protection and scalable analysis challenging. Federated learning (FL) offers a way to train models across devices while keeping raw data local and, when combined with edge, fog, or cloud computing, can support near-real-time mental health analysis. This review screened 1104 records, assessed 31 full-text articles using a five-question quality checklist, and retained 17 empirical studies that achieved a score of at least 7/10 for synthesis. The included studies were compared in terms of their FL and edge/cloud architectures, data sources, privacy and security techniques, and evidence for operation in real-world settings. The synthesis highlights innovative but fragmented progress, with limited work on comorbidity modelling, deployment evaluation, and common benchmarks, and identifies priorities for the development of scalable, practical, and ethically robust FL systems for digital mental health.
(This article belongs to the Special Issue Secure AI for Biomedical Sensing and Imaging Applications)

25 pages, 1050 KB  
Review
IoT-Based Approaches to Personnel Health Monitoring in Emergency Response
by Jialin Wu, Yongqi Tang, Feifan He, Zhichao He, Yunting Tsai and Wenguo Weng
Sustainability 2026, 18(1), 365; https://doi.org/10.3390/su18010365 - 30 Dec 2025
Viewed by 980
Abstract
The health and operational continuity of emergency responders are fundamental pillars of sustainable and resilient disaster management systems. These personnel operate in high-risk environments, exposed to intense physical, environmental, and psychological stress. This makes it crucial to monitor their health to safeguard their well-being and performance. Traditional methods, which rely on intermittent, voice-based check-ins, are reactive and create a dangerous information gap regarding a responder's real-time health and safety. To address this sustainability challenge, the convergence of the Internet of Things (IoT) and wearable biosensors presents a transformative opportunity to shift from reactive to proactive safety monitoring, enabling the continuous capture of high-resolution physiological and environmental data. However, realizing a field-deployable system is a complex "system-of-systems" challenge. This review contributes to the field of sustainable emergency management by analyzing the complete technological chain required to build such a solution, structured along the data workflow from acquisition to action. It examines: (1) foundational health sensing technologies for bioelectrical, biophysical, and biochemical signals; (2) powering strategies, including low-power design and self-powering systems via energy harvesting; (3) ad hoc communication networks (terrestrial, aerial, and space-based) essential for infrastructure-denied disaster zones; (4) data processing architectures, comparing edge, fog, and cloud computing for real-time analytics; and (5) visualization tools, such as augmented reality (AR) and heads-up displays (HUDs), for decision support. The review synthesizes these components by discussing their integrated application in scenarios like firefighting and urban search and rescue. It concludes that a robust system depends not on a single component but on the seamless integration of this entire technological chain, and highlights future research directions crucial for quantifying and maximizing its impact on sustainable development goals (SDGs 3, 9, and 11) related to health, sustainable cities, and resilient infrastructure.

25 pages, 3648 KB  
Article
Authentication and Authorisation Method for a Cloud Side Static IoT Application
by Jose Alvarez, Matheus Santos, David May and Gerard Dooly
Network 2026, 6(1), 1; https://doi.org/10.3390/network6010001 - 19 Dec 2025
Viewed by 548
Abstract
IoT applications are increasingly common, yet they often rely on expensive, externally managed authentication services. This paper introduces a novel, self-contained authentication method for IoT applications which leverages fog computing principles to lower operational costs and infrastructure complexity. The proposed system, fogauth, combines device serial numbers with cryptographically generated UUIDs to establish secure identification without third-party services. A static cloud-side architecture coupled with a lightweight, locally hosted API enables secure authentication through object-storage operations. Performance testing demonstrates comparable security performance to commercial cloud-based authentication while reducing long-term operational costs and maintaining latency at below 2 minutes in production conditions. fogauth provides a scalable and economically viable alternative for companies seeking to reduce cloud dependency and minimize long-term costs associated with IoT application security. To support reproducibility, a complete open-source implementation and validation dataset are provided, allowing independent replication and extension of the system.
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
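One plausible way to combine a device serial number with a cryptographically derived UUID, as fogauth's identification scheme is described above, is to HMAC the serial with a provisioning secret and fold the digest into a UUID. This is a hypothetical sketch of the general technique, not the paper's exact construction.

```python
import hashlib
import hmac
import uuid

def device_uuid(serial: str, provisioning_secret: bytes) -> uuid.UUID:
    """Derive a stable, unguessable UUID from a device serial number.

    Hypothetical construction: HMAC-SHA256 the serial under a shared
    provisioning secret and use the first 16 digest bytes as the UUID.
    fogauth's actual derivation may differ.
    """
    digest = hmac.new(provisioning_secret, serial.encode(), hashlib.sha256).digest()
    return uuid.UUID(bytes=digest[:16])
```

The derivation is deterministic, so both the device and the verifier can recompute the same identifier, while an attacker who only sees the serial number cannot forge it without the secret.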

25 pages, 7707 KB  
Article
A Multi-Tier Vehicular Edge–Fog Framework for Real-Time Traffic Management in Smart Cities
by Syed Rizwan Hassan and Asif Mehmood
Mathematics 2025, 13(24), 3947; https://doi.org/10.3390/math13243947 - 11 Dec 2025
Cited by 1 | Viewed by 592
Abstract
The factors restricting the large-scale deployment of smart vehicular networks include application service placement/migration, mobility management, network congestion, and latency. Current vehicular networks are striving to optimize network performance through decentralized framework deployments. Specifically, the urban-level execution of current network deployments often fails to achieve the quality of service required by smart cities. To address these issues, we propose a vehicular edge–fog computing (VEFC)-enabled adaptive area-based traffic management (AABTM) architecture. Our design divides the urban area into multiple microzones for distributed control. These microzones are equipped with roadside units for real-time collection of vehicular information. We also propose (1) a vehicle mobility management (VMM) scheme to facilitate seamless service migration during vehicular movement; (2) a dynamic vehicular clustering (DVC) approach for the dynamic clustering of distributed network nodes to enhance service delivery; and (3) a dynamic microservice assignment (DMA) algorithm to ensure efficient resource-aware microservice placement/migration. We evaluated the proposed schemes on different scales, and they provide a significant improvement in vital network parameters. AABTM achieves reductions of 86.4% in latency, 53.3% in network consumption, 6.2% in energy usage, and 48.3% in execution cost, while DMA-clustering reduces network consumption by 59.2%, energy usage by 5%, and execution cost by 38.4% compared to traditional cloud-based urban traffic management frameworks. This research highlights the potential of utilizing distributed frameworks for real-time traffic management in next-generation smart vehicular networks.

29 pages, 6039 KB  
Article
A Hierarchical Fractal Space NSGA-II-Based Cloud–Fog Collaborative Optimization Framework for Latency and Energy-Aware Task Offloading in Smart Manufacturing
by Zhiwen Lin, Chuanhai Chen, Jianzhou Chen and Zhifeng Liu
Mathematics 2025, 13(22), 3691; https://doi.org/10.3390/math13223691 - 18 Nov 2025
Viewed by 676
Abstract
The growth of intelligent manufacturing systems has led to a wealth of computation-intensive tasks with complex dependencies. These tasks require an efficient offloading architecture that balances responsiveness and energy efficiency across distributed computing resources. Existing task offloading approaches have fundamental limitations when simultaneously optimizing multiple conflicting objectives while accommodating hierarchical computing architectures and heterogeneous resource capabilities. To address these challenges, this paper presents a cloud–fog hierarchical collaborative computing (CFHCC) framework that features fog cluster mechanisms. These methods enable coordinated, multi-node parallel processing while maintaining data sensitivity constraints. The optimization of task distribution across this three-tier architecture is formulated as a multi-objective problem, minimizing both system latency and energy consumption. To solve this problem, a fractal-based multi-objective optimization algorithm is proposed to efficiently explore Pareto-optimal task allocation strategies by employing recursive space partitioning aligned with the hierarchical computing structure. Simulation experiments across varying task scales demonstrate that the proposed method achieves a 20.28% latency reduction and 3.03% energy savings compared to typical and advanced methods for large-scale task scenarios, while also exhibiting superior solution consistency and convergence. A case study on a digital twin manufacturing system validated its practical effectiveness, with CFHCC outperforming traditional cloud–edge collaborative computing by 12.02% in latency and 11.55% in energy consumption, confirming its suitability for diverse intelligent manufacturing applications. Full article
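The bi-objective core of such offloading problems (minimizing latency and energy together) yields a set of Pareto-optimal trade-offs rather than one best plan. Below is a minimal non-dominated-filtering sketch with made-up objective values; the paper's hierarchical fractal NSGA-II is not reproduced here.

```python
# Minimal sketch of bi-objective (latency, energy) task-offloading selection:
# keep only Pareto-optimal candidate plans via non-dominated filtering.
# Objective values are hypothetical illustration data.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (latency_ms, energy_j) for four candidate task-allocation plans
plans = [(120, 9.0), (80, 14.0), (150, 8.5), (80, 16.0)]
print(pareto_front(plans))  # [(120, 9.0), (80, 14.0), (150, 8.5)]
```

The (80, 16.0) plan is dropped because (80, 14.0) is as fast and strictly cheaper in energy; an evolutionary algorithm such as NSGA-II searches the allocation space for exactly such non-dominated solutions.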
24 pages, 943 KB  
Review
A Review on AI Miniaturization: Trends and Challenges
by Bin Tang, Shengzhi Du and Antonie Johan Smith
Appl. Sci. 2025, 15(20), 10958; https://doi.org/10.3390/app152010958 - 12 Oct 2025
Cited by 1 | Viewed by 2584
Abstract
Artificial intelligence (AI) often suffers from high energy consumption and complex deployment in resource-constrained environments, leading to a structural mismatch between capability and deployability. This review takes two representative scenarios—energy-first and performance-first—as the main thread, systematically comparing cloud, edge, and fog/cloudlet/mobile edge computing (MEC)/micro data center (MDC) architectures. Based on a standardized literature search and screening process, three categories of miniaturization strategies are distilled: redundancy compression (e.g., pruning, quantization, and distillation), knowledge transfer (e.g., distillation and parameter-efficient fine-tuning), and hardware–software co-design (e.g., neural architecture search (NAS), compiler-level, and operator-level optimization). The purposes of this review are threefold: (1) to unify the “architecture–strategy–implementation pathway” from a system-level perspective; (2) to establish technology–budget mapping with verifiable quantitative indicators; and (3) to summarize representative pathways for energy- and performance-prioritized scenarios, while highlighting current deficiencies in data disclosure and device-side validation. The findings indicate that, compared with single techniques, cross-layer combined optimization better balances accuracy, latency, and power consumption. Therefore, AI miniaturization should be regarded as a proactive method of structural reconfiguration for large-scale deployment. Future efforts should advance cross-scenario empirical validation and standardized benchmarking, while reinforcing hardware–software co-design. Compared with existing reviews that mostly focus on a single dimension, this review proposes a cross-level framework and design checklist, systematizing scattered optimization methods into reusable engineering pathways. Full article
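Two of the redundancy-compression techniques the review names, magnitude pruning and uniform quantization, can be sketched on a flat weight list. This is a pure-Python toy under made-up weights, not a production compressor.

```python
# Hedged sketch of two compression steps from the review's first category:
# magnitude pruning (zero the smallest weights) and uniform symmetric
# 8-bit quantization. Toy flat weight list, illustration only.

def prune(weights, sparsity=0.5):
    """Zero the fraction `sparsity` of weights with smallest magnitude."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize(weights, bits=8):
    """Map weights to signed `bits`-bit integers with a shared scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale  # dequantize: q_i * scale

w = [0.02, -0.75, 0.40, -0.01, 0.90, 0.05]
print(prune(w, 0.5))   # the three smallest-magnitude weights become 0.0
q, s = quantize(w)     # largest weight maps to 127; others scale down
```

Real pipelines apply these per-layer with fine-tuning to recover accuracy; the point here is only the budget trade: pruned zeros can be skipped and 8-bit integers take a quarter of float32 storage.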

26 pages, 13551 KB  
Article
Hybrid Cloud–Edge Architecture for Real-Time Cryptocurrency Market Forecasting: A Distributed Machine Learning Approach with Blockchain Integration
by Mohammed M. Alenazi and Fawwad Hassan Jaskani
Mathematics 2025, 13(18), 3044; https://doi.org/10.3390/math13183044 - 22 Sep 2025
Cited by 1 | Viewed by 2306
Abstract
The volatile nature of cryptocurrency markets demands real-time analytical capabilities that traditional centralized computing architectures struggle to provide. This paper presents a novel hybrid cloud–edge computing framework for cryptocurrency market forecasting, leveraging distributed systems to enable low-latency prediction models. Our approach integrates machine learning algorithms across a distributed network: edge nodes perform real-time data preprocessing and feature extraction, while the cloud infrastructure handles deep learning model training and global pattern recognition. The proposed architecture uses a three-tier system comprising edge nodes for immediate data capture, fog layers for intermediate processing and local inference, and cloud servers for comprehensive model training on historical blockchain data. A federated learning mechanism allows edge nodes to contribute to a global prediction model while preserving data locality and reducing network latency. The experimental results show a 40% reduction in prediction latency compared to cloud-only solutions while maintaining comparable accuracy in forecasting Bitcoin and Ethereum price movements. The system processes over 10,000 transactions per second and delivers real-time insights with sub-second response times. Integration with blockchain ensures data integrity and provides transparent audit trails for all predictions. Full article
(This article belongs to the Special Issue Recent Computational Techniques to Forecast Cryptocurrency Markets)
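The federated-learning mechanism the abstract describes, where edge nodes contribute to a global model while raw data stays local, reduces to a weighted model average in its simplest (FedAvg-style) form. The weight vectors and sample counts below are made up for illustration.

```python
# Sketch of a FedAvg-style aggregation step: edge nodes train locally and
# send only weight vectors (not raw market data) plus sample counts to the
# cloud, which averages them weighted by local data size. Illustration only.

def fed_avg(updates):
    """Weighted average of client weight vectors by local sample count."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

# (local_weights, num_local_samples) reported by three edge nodes
clients = [([0.2, 1.0], 100), ([0.4, 0.8], 300), ([0.1, 1.2], 100)]
print(fed_avg(clients))  # ≈ [0.3, 0.92]
```

Weighting by sample count keeps nodes that saw more transactions from being drowned out, while preserving the data locality the paper's latency figures depend on.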
