Distributed Computing Paradigms for the Internet of Things: Exploring Cloud, Edge, and Fog Solutions

A special issue of Computers (ISSN 2073-431X). This special issue belongs to the section "Internet of Things (IoT) and Industrial IoT".

Deadline for manuscript submissions: 31 October 2025

Special Issue Editor


Dr. Kevin (Qixiang) Pang
Guest Editor
Department of Computer Science and Cybersecurity, University of Central Missouri, Warrensburg, MO 64093, USA
Interests: cloud computing; edge computing; AI; 6G; IoT

Special Issue Information

Dear Colleagues,

The proliferation of the Internet of Things (IoT) is transforming industries, environments, and everyday life by connecting devices, systems, and people. However, as IoT applications generate massive amounts of data, traditional centralized computing paradigms face challenges in meeting the requirements for real-time processing, scalability, and resource efficiency. Distributed computing paradigms, such as cloud, edge, and fog computing, offer promising solutions by bringing computation closer to data sources, reducing latency, and enhancing operational efficiency.

This Special Issue on "Distributed Computing Paradigms for the Internet of Things: Exploring Cloud, Edge, and Fog Solutions" invites high-quality research articles, reviews, and case studies that investigate the roles, challenges, and innovations related to distributed computing frameworks tailored for IoT applications. We welcome submissions that address theoretical, experimental, and practical aspects of these paradigms and their synergistic integration to support the complex requirements of IoT systems.

This Special Issue seeks submissions addressing, but not limited to, the following topics:

  • Cloud, Edge, and Fog Architectures for IoT:
    • Design and implementation of cloud, edge, and fog computing systems for IoT;
    • Novel distributed architectures and frameworks for IoT environments;
    • Interoperability across cloud, edge, and fog systems.
  • Resource Management and Optimization:
    • Efficient resource allocation and scheduling for IoT data processing;
    • Adaptive resource management in resource-constrained IoT environments;
    • Performance optimization in cloud, edge, and fog computing environments;
    • Load balancing and fault tolerance in distributed IoT systems.
  • Green and Energy-Efficient Computing for IoT:
    • Energy-saving techniques in distributed IoT systems;
    • Green computing frameworks for cloud, edge, and fog environments;
    • Low-power IoT device management and optimization;
    • Renewable energy integration in distributed IoT computing frameworks.
  • Data Analytics and Processing for IoT:
    • Real-time data processing and analytics in distributed computing;
    • Machine learning and AI techniques for decentralized IoT networks;
    • Data privacy, security, and trust in cloud–edge–fog environments.
  • Communication and Networking for Distributed Computing Paradigms:
    • Low-latency and high-reliability communication protocols;
    • 5G and beyond for IoT networking in cloud–edge–fog architectures;
    • Network function virtualization (NFV) and software-defined networking (SDN) in distributed IoT systems;
    • Adaptive communication models for dynamic IoT environments.
  • Security, Privacy, and Trust in Distributed Computing Paradigms:
    • Security frameworks and architectures for distributed IoT environments;
    • Threat detection and mitigation for IoT, edge, and fog layers;
    • Cybersecurity challenges in multi-tenant cloud–edge–fog networks;
    • Authentication, authorization, and identity management for IoT systems;
    • Secure data transmission, storage, and processing in cloud–edge–fog systems;
    • Privacy-preserving mechanisms and cryptographic solutions in IoT networks.
  • AI Technologies in IoT, Cloud, Edge, and Fog Computing:
    • AI-driven decision-making in distributed IoT systems;
    • Deep learning and reinforcement learning for IoT data processing and optimization;
    • Autonomous IoT systems with AI for real-time decision-making;
    • Integrating AI with cloud–edge–fog computing for intelligent IoT applications.
  • Applications and Case Studies:
    • Practical IoT applications leveraging cloud, edge, and fog solutions;
    • Case studies in smart cities, healthcare, agriculture, and other IoT fields;
    • Evaluation and benchmarking of distributed computing frameworks in real-world scenarios.

Dr. Kevin (Qixiang) Pang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • distributed computing
  • Internet of Things (IoT)
  • cloud computing
  • edge computing
  • fog computing
  • IoT architecture
  • data processing
  • IoT applications
  • resource optimization
  • security in IoT
  • energy efficiency
  • green computing
  • real-time analytics
  • hybrid computing models
  • smart devices
  • big data in IoT
  • application performance
  • IoT ecosystem

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (5 papers)


Research

32 pages, 4255 KiB  
Article
Improving Real-Time Economic Decisions Through Edge Computing: Implications for Financial Contagion Risk Management
by Ștefan Ionescu, Camelia Delcea and Ionuț Nica
Computers 2025, 14(5), 196; https://doi.org/10.3390/computers14050196 - 18 May 2025
Abstract
In the face of accelerating digitalization and growing systemic vulnerabilities, the ability to make accurate, real-time economic decisions has become a critical capability for financial and institutional stability. This study investigates how edge computing infrastructures influence decision-making accuracy, responsiveness, and risk containment in economic systems, particularly under the threat of financial contagion. A synthetic dataset simulating the interaction between economic indicators and edge performance metrics was constructed to emulate real-time decision environments. Composite indicators were developed to quantify key dynamics, and a range of machine learning models, including XGBoost, Random Forest, and Neural Networks, were applied to classify economic decision outcomes. The results indicate that low latency, efficient resource use, and balanced workload distribution are significantly associated with higher decision quality. XGBoost outperformed all other models, achieving 97% accuracy and a ROC-AUC of 0.997. The findings suggest that edge computing performance metrics can act as predictive signals for systemic fragility and may be integrated into early warning systems for financial risk management. This study contributes to the literature by offering a novel framework for modeling the economic implications of edge intelligence and provides policy insights for designing resilient, real-time financial infrastructures.
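As a rough illustration of the evaluation setup described above, the sketch below trains a gradient-boosted classifier on a synthetic stand-in dataset and scores it with accuracy and ROC-AUC. The feature generation and hyperparameters are illustrative assumptions, not the authors' dataset or configuration.

```python
# Minimal sketch: gradient-boosted classification of decision outcomes from
# synthetic features, scored with accuracy and ROC-AUC. All data and
# hyperparameters here are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score
from xgboost import XGBClassifier

# Synthetic stand-in for economic indicators plus edge performance metrics
# (latency, resource use, workload balance, ...).
X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, proba > 0.5))
print("ROC-AUC :", roc_auc_score(y_test, proba))
```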

20 pages, 3977 KiB  
Article
Investigation of Multiple Hybrid Deep Learning Models for Accurate and Optimized Network Slicing
by Ahmed Raoof Nasser and Omar Younis Alani
Computers 2025, 14(5), 174; https://doi.org/10.3390/computers14050174 - 2 May 2025
Abstract
In 5G wireless communication, network slicing is considered one of the key network elements; it aims to provide services with high availability, low latency, maximized data throughput, and ultra-reliability while saving network resources. Due to the exponential expansion of cellular networks in both the number of users and new applications, delivering the desired Quality of Service (QoS) requires an accurate and fast network slicing mechanism. In this paper, hybrid deep learning (DL) approaches are investigated using convolutional neural networks (CNNs), Long Short-Term Memory (LSTM), recurrent neural networks (RNNs), and Gated Recurrent Units (GRUs) to provide an accurate network slicing model. The proposed hybrid approaches are CNN-LSTM, CNN-RNN, and CNN-GRU, where a CNN is first used for effective feature extraction and then an LSTM, RNN, or GRU is used to achieve accurate network slice classification. To optimize model performance in terms of accuracy and complexity, the hyperparameters of each algorithm are selected using the Bayesian optimization algorithm. The obtained results illustrate that the optimized hybrid CNN-GRU algorithm provides the best performance in terms of slicing accuracy (99.31%) and low model complexity.
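A minimal sketch of the hybrid CNN-GRU idea follows: a 1-D convolutional front end extracts features and a GRU layer feeds a softmax slice classifier. The input shape, layer sizes, and number of slice classes are assumptions, not the paper's tuned configuration (which was found via Bayesian optimization).

```python
# Hybrid CNN-GRU slice classifier sketch: Conv1D feature extraction followed
# by a GRU and a softmax head. Shapes and sizes are illustrative assumptions.
from tensorflow.keras import layers, models

SEQ_LEN = 8        # timesteps per sample (assumed)
NUM_FEATURES = 16  # per-timestep network KPIs (assumed)
NUM_SLICES = 3     # e.g., eMBB / URLLC / mMTC

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.GRU(64),                                  # recurrent aggregation
    layers.Dense(NUM_SLICES, activation="softmax"),  # slice probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```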

28 pages, 2200 KiB  
Article
Fine-Tuning Network Slicing in 5G: Unveiling Mathematical Equations for Precision Classification
by Nikola Anđelić, Sandi Baressi Šegota and Vedran Mrzljak
Computers 2025, 14(5), 159; https://doi.org/10.3390/computers14050159 - 25 Apr 2025
Abstract
Modern 5G network slicing centers on the precise design of virtual, independent networks operating over a shared physical infrastructure, each configured to meet specific service requirements. This approach plays a vital role in enabling highly customized and flexible service delivery within the 5G ecosystem. In this study, we present the application of a genetic programming symbolic classifier to a dedicated network slicing dataset, resulting in the generation of accurate symbolic expressions for classifying different network slice types. To address the issue of class imbalance, we employ oversampling strategies that produce balanced variations of the dataset. Furthermore, a random search strategy is used to explore the hyperparameter space comprehensively in pursuit of optimal classification performance. The derived symbolic models, refined through threshold tuning based on prediction correctness, are subsequently evaluated on the original imbalanced dataset. The proposed method demonstrates outstanding performance, achieving a perfect classification accuracy of 1.0.
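The sketch below illustrates the shape of such a pipeline, using gplearn's SymbolicClassifier as a stand-in genetic-programming symbolic classifier and random oversampling to balance classes. The authors' exact toolchain, dataset, and random hyperparameter search are not reproduced here.

```python
# Pipeline sketch: oversample the imbalanced training split, then evolve a
# symbolic classifier. gplearn is a stand-in, not necessarily the authors'
# implementation; the dataset is a synthetic assumption.
from gplearn.genetic import SymbolicClassifier
from imblearn.over_sampling import RandomOverSampler
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Imbalanced synthetic stand-in for the network-slicing dataset.
X, y = make_classification(n_samples=2000, weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Balance classes on the training split only, then evolve an expression.
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_train, y_train)
clf = SymbolicClassifier(population_size=500, generations=20, random_state=0)
clf.fit(X_bal, y_bal)

print("evolved expression:", clf._program)
print("test accuracy     :", accuracy_score(y_test, clf.predict(X_test)))
```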

17 pages, 5373 KiB  
Article
Real-Time Overhead Power Line Component Detection on Edge Computing Platforms
by Nico Surantha
Computers 2025, 14(4), 134; https://doi.org/10.3390/computers14040134 - 5 Apr 2025
Abstract
Regular inspection of overhead power line (OPL) systems is required to detect damage early and ensure the efficient and uninterrupted transmission of high-voltage electric power. In the past, these checks were conducted using line crawling, inspection robots, and helicopters. Yet, these traditional solutions are slow, costly, and hazardous. Advances in drones, edge computing platforms, deep learning, and high-resolution cameras may enable real-time OPL inspections using drones. Some research has been conducted on OPL inspection with autonomous drones; however, it is essential to explore how to achieve real-time OPL component detection effectively and efficiently. In this paper, we report our research on OPL component detection on edge computing devices. An original OPL dataset is generated in this study, and we evaluate detection performance with several training dataset sizes, implementing simple data augmentation to extend the datasets. The performance of the YOLOv7 model is also evaluated on several edge computing platforms, such as the Raspberry Pi 4B, Jetson Nano, and Jetson Orin Nano, and model quantization is used to improve the real-time performance of the detection model. The simulation results show that the proposed YOLOv7 model can achieve a mean average precision (mAP) over 90%, while the hardware evaluation shows that real-time detection performance can be achieved in several configurations.
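For readers unfamiliar with the quantization step, the sketch below shows generic post-training quantization with TensorFlow Lite, a common route to CPU-bound edge devices such as the Raspberry Pi 4B. This only illustrates the technique; the paper's YOLOv7 pipeline and its exact quantization method are not reproduced, and the saved-model path is an assumption.

```python
# Generic post-training (dynamic-range) quantization for edge deployment.
# "detector_saved_model" is an assumed path to an exported TensorFlow
# detection model, not the paper's YOLOv7 artifact.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("detector_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization
tflite_model = converter.convert()

# The resulting .tflite model is typically several times smaller and faster
# on CPU-bound edge devices.
with open("detector_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```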

21 pages, 4465 KiB  
Article
Modified Ant Colony Optimization to Improve Energy Consumption of Cruiser Boundary Tour with Internet of Underwater Things
by Hadeel Mohammed, Mustafa Ibrahim, Ahmed Raoof, Amjad Jaleel and Ayad Q. Al-Dujaili
Computers 2025, 14(2), 74; https://doi.org/10.3390/computers14020074 - 17 Feb 2025
Cited by 1
Abstract
The Internet of Underwater Things (IoUT) holds significant promise for developing a smart ocean. In recent years, there has been swift progress in data collection methods using autonomous underwater vehicles (AUVs) within underwater acoustic sensor networks (UASNs). One of the key challenges in the IoUT is improving both the energy consumption (EC) of underwater vehicles and the value of information (VoI) necessary for completing missions while gathering sensing data. In this paper, a hybrid optimization technique is proposed based on boundary tour modified ant colony optimization (BTMACO). The proposed algorithm was developed to solve the challenging problem of determining the optimal path for an AUV visiting all sensor nodes with minimum energy consumption. It specifies the best order in which to visit all the sensor nodes while also adjusting the AUV's information-gathering locations according to the permissible data transmission range. Compared with related works in the literature, the proposed method showed better performance: it can find the best route through which to collect sensor information with minimum power consumption and a 6.9% better VoI.
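A compact sketch of the underlying ant-colony mechanics follows: ants probabilistically build tours over sensor-node positions weighted by pheromone and inverse distance, and pheromone evaporates and is reinforced along short tours. Tour length stands in for energy consumption; BTMACO's boundary-tour modifications and VoI weighting are omitted, and the coordinates and parameters are illustrative assumptions.

```python
# Basic ant colony optimization over sensor-node positions. Distance is a
# proxy for energy; node coordinates and ACO parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(15, 2))   # sensor-node coordinates (assumed)
n = len(nodes)
dist = np.linalg.norm(nodes[:, None] - nodes[None, :], axis=-1) + np.eye(n)
tau = np.ones((n, n))                       # pheromone matrix
alpha, beta, rho = 1.0, 3.0, 0.5            # pheromone weight, heuristic weight, evaporation
n_ants, n_iter = 20, 100

best_tour, best_len = None, np.inf
for _ in range(n_iter):
    tours = []
    for _ in range(n_ants):
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            i, cand = tour[-1], np.array(sorted(unvisited))
            # Transition probabilities: pheromone^alpha * (1/distance)^beta.
            w = tau[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
            nxt = int(rng.choice(cand, p=w / w.sum()))
            tour.append(nxt)
            unvisited.remove(nxt)
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((tour, length))
        if length < best_len:
            best_tour, best_len = tour, length
    tau *= 1.0 - rho                        # evaporation
    for tour, length in tours:
        for k in range(n):                  # reinforce edges of short tours
            tau[tour[k], tour[(k + 1) % n]] += 1.0 / length

print("best tour:", best_tour, "length:", round(best_len, 1))
```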
