Advanced Computational Intelligence in Cloud/Edge Computing

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 31 December 2025

Special Issue Editors


Prof. Dr. Zheyi Chen
Guest Editor
College of Computer and Data Science, Fuzhou University, Fuzhou 350116, China
Interests: cloud/edge computing; resource optimization; machine learning

Dr. Zhengxin Yu
Guest Editor
School of Computing and Communications, Lancaster University, Lancaster LA1 4WA, UK
Interests: federated learning; mobile edge computing; cyber security; AI security

Dr. Wang Miao
Guest Editor
School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth PL4 8AA, UK
Interests: mobile edge computing; software-defined networking; network function virtualization; AI/ML-driven resource optimization; performance modeling and analysis

Special Issue Information

Dear Colleagues,

Integrating AI with cloud/edge computing fully unleashes the potential of both, leading to a new intelligent computing paradigm. However, many open challenges remain in its implementation, such as limited computing, network, and energy resources, accompanied by serious security issues. Meanwhile, the dynamic nature of cloud/edge environments further complicates matters. Against this backdrop, computational intelligence has emerged, focusing on crafting diverse computational techniques inspired by intelligent behaviors in nature and biology. By learning from data and making informed decisions, such techniques equip machines with the capability to solve complicated problems. Advanced computational intelligence therefore exhibits great promise and abundant application prospects in cloud/edge computing, serving both as an enabler that bolsters service capabilities and as a problem-solver that surmounts obstacles in system design. This Special Issue endeavors to assemble scholarly studies that explore paths toward synergizing cloud/edge computing with advanced computational intelligence, guiding the development of next-generation network technology.

The topics of this Special Issue include but are not limited to the following:

  • Uncertainty-aware intelligence in dynamic cloud/edge environments;
  • Computing, networks, and energy optimization for cloud/edge intelligence applications;
  • Advanced deep reinforcement learning for cloud/edge computing;
  • Novel task scheduling and offloading methods in cloud/edge computing;
  • Novel service deployment and migration methods in cloud/edge computing;
  • Advanced federated learning for cloud/edge computing;
  • Novel traffic prediction and content-caching methods in cloud/edge computing;
  • Cost-aware federated learning in cloud/edge computing;
  • Security and privacy protection for cloud/edge intelligence applications;
  • Model compression for efficient training and inference in cloud/edge computing.

Prof. Dr. Zheyi Chen
Dr. Zhengxin Yu
Dr. Wang Miao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cloud/edge computing
  • resource optimization
  • deep reinforcement learning
  • task scheduling/offloading
  • service deployment/migration
  • federated learning
  • traffic prediction
  • content caching
  • security/privacy protection
  • model compression

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (5 papers)


Research

29 pages, 1840 KiB  
Article
Fractional-Order System Identification: Efficient Reduced-Order Modeling with Particle Swarm Optimization and AI-Based Algorithms for Edge Computing Applications
by Ignacio Fidalgo Astorquia, Nerea Gómez-Larrakoetxea, Juan J. Gude and Iker Pastor
Mathematics 2025, 13(8), 1308; https://doi.org/10.3390/math13081308 - 16 Apr 2025
Abstract
Fractional-order systems capture complex dynamic behaviors more accurately than integer-order models, yet their real-time identification remains challenging, particularly in resource-constrained environments. This work proposes a hybrid framework that combines Particle Swarm Optimization (PSO) with various artificial intelligence (AI) techniques to estimate reduced-order models of fractional systems. First, PSO optimizes model parameters by minimizing the discrepancy between the high-order system response and the reduced model output. These optimized parameters then serve as training data for several AI-based algorithms—including neural networks, support vector regression (SVR), and extreme gradient boosting (XGBoost)—to evaluate their inference speed and accuracy. Experimental validation on a custom-built heating system demonstrates that both PSO and the AI techniques yield precise reduced-order models. While PSO achieves slightly lower error metrics, its iterative nature leads to higher and more variable computation times compared to the deterministic and rapid inference of AI approaches. These findings highlight a trade-off between estimation accuracy and computational efficiency, providing a robust solution for real-time fractional-order system identification on edge devices.
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)
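
To make the first stage concrete, here is a minimal PSO sketch in the spirit of the paper's parameter-estimation step: particles search for reduced-model parameters that minimize the mean squared error against a reference step response. The synthetic target, the first-order-plus-dead-time model form, and all hyperparameters below are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

# Synthetic "high-order" step response to be approximated (stand-in for real data).
t = np.linspace(0.0, 50.0, 500)
target = 1.0 - 1.2 * np.exp(-t / 8.0) + 0.2 * np.exp(-t / 2.0)

def reduced_model(params, t):
    """First-order-plus-dead-time step response with gain K, time constant tau, delay theta."""
    K, tau, theta = params
    y = K * (1.0 - np.exp(-(t - theta) / tau))
    return np.where(t >= theta, y, 0.0)

def cost(params):
    """Discrepancy between the reference response and the reduced-model output."""
    return np.mean((reduced_model(params, t) - target) ** 2)

def pso(cost, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds[:, 0], bounds[:, 1]
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))        # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)]                       # global best
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pbest_cost)]
    return g, pbest_cost.min()

bounds = np.array([[0.5, 2.0], [0.1, 20.0], [0.0, 5.0]])  # K, tau, theta
best, err = pso(cost, bounds)
print("K=%.3f tau=%.3f theta=%.3f mse=%.2e" % (*best, err))
```

In the paper's second stage, parameter sets produced this way become training data for faster AI-based estimators; the trade-off reported above is between this iterative search and their one-shot inference.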

34 pages, 8526 KiB  
Article
Zero-Trust Mechanisms for Securing Distributed Edge and Fog Computing in 6G Networks
by Abdulrahman K. Alnaim and Ahmed M. Alwakeel
Mathematics 2025, 13(8), 1239; https://doi.org/10.3390/math13081239 - 9 Apr 2025
Abstract
The rapid advancement of 6G networks, driven by the proliferation of distributed edge and fog computing, has introduced unprecedented challenges in securing these decentralized architectures. Traditional security paradigms are inadequate for protecting the dynamic and heterogeneous environments of 6G-enabled systems. In this context, we propose ZTF-6G (Zero-Trust Framework for 6G Networks), a novel model that integrates Zero-Trust principles to secure distributed edge and fog computing environments. ZTF-6G ensures robust security by adopting a “never trust, always verify” approach, comprising adaptive authentication, continuous verification, and fine-grained access control for all entities within the network. The proposed framework employs Zero-Trust-based multi-layering that extends to AI-driven anomaly detection and blockchain-based identity management for the authentication and real-time monitoring of network interactions. Simulation results indicate that ZTF-6G reduces latency by 77.6% (down to 2.8 ms, compared to the standard models’ 12.5 ms), improves throughput by 70%, and improves resource utilization by 41.5% (to 90% utilization). Additionally, the trust-score accuracy increased from 95% to 98%, energy efficiency improved by 22.2% (from 88% to 110%), and threat-detection accuracy increased to 98%. The framework also mitigated insider threats by 85% and enforced dynamic policy within 1.8 ms. ZTF-6G maintained low latency while providing greater resilience to insider threats, unauthorized access, and data breaches, as required by 6G networks. This research aims to lay a foundation for deploying Zero-Trust as an integral part of next-generation networks, which will face the security challenges of distributed systems driven by 6G.
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)
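
As a toy illustration of the “never trust, always verify” loop described above, the sketch below scores every request from several signals and authorizes access only above a threshold. The signal names, weights, and thresholds are hypothetical, not ZTF-6G's actual policy.

```python
from dataclasses import dataclass
import time

@dataclass
class RequestContext:
    # Hypothetical attributes a zero-trust policy engine might evaluate.
    identity_verified: bool      # e.g., token or blockchain-backed identity check
    last_auth_time: float        # Unix time of the last successful authentication
    device_compliant: bool       # device posture attestation
    anomaly_score: float         # 0.0 (normal) .. 1.0 (highly anomalous), from an AI detector

MAX_AUTH_AGE_S = 300.0           # force re-authentication every 5 minutes (assumed)
TRUST_THRESHOLD = 0.7            # assumed cut-off

def trust_score(ctx: RequestContext) -> float:
    """Combine signals into a single score; the weights are illustrative only."""
    if not ctx.identity_verified:
        return 0.0                                   # "never trust": no identity, no access
    freshness = max(0.0, 1.0 - (time.time() - ctx.last_auth_time) / MAX_AUTH_AGE_S)
    posture = 1.0 if ctx.device_compliant else 0.3
    return 0.4 * freshness + 0.3 * posture + 0.3 * (1.0 - ctx.anomaly_score)

def authorize(ctx: RequestContext) -> bool:
    """'Always verify': every request is re-scored; nothing is trusted by network location."""
    return trust_score(ctx) >= TRUST_THRESHOLD

ctx = RequestContext(True, time.time() - 60.0, True, 0.1)
print(authorize(ctx), round(trust_score(ctx), 2))
```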

17 pages, 2537 KiB  
Article
Collaborative Optimization Strategy for Dependent Task Offloading in Vehicular Edge Computing
by Xiting Peng, Yandi Zhang, Xiaoyu Zhang, Chaofeng Zhang and Wei Yang
Mathematics 2024, 12(23), 3820; https://doi.org/10.3390/math12233820 - 2 Dec 2024
Abstract
The advancement of the Internet of Autonomous Vehicles has facilitated the development and deployment of numerous onboard applications. However, the delay-sensitive tasks generated by these applications present enormous challenges for vehicles with limited computing resources. Moreover, these tasks are often interdependent, preventing parallel computation and severely prolonging completion times, which results in substantial energy consumption. Task-offloading technology offers an effective way to mitigate these challenges. Traditional offloading strategies, however, fall short in the highly dynamic environment of the Internet of Vehicles. This paper proposes a task-offloading scheme based on deep reinforcement learning to optimize the offloading strategy between vehicles and edge computing resources. The task-offloading problem is modeled as a Markov Decision Process, and an improved twin-delayed deep deterministic policy gradient algorithm, LT-TD3, is introduced to enhance the decision-making process. The integration of an LSTM and a self-attention mechanism into the LT-TD3 network boosts its capability for feature extraction and representation. Additionally, to account for task dependency, a topological sorting algorithm is employed to assign priorities to subtasks, thereby improving the efficiency of task offloading. Experimental results demonstrate that the proposed strategy significantly reduces task delays and energy consumption, offering an effective solution for efficient task processing and energy saving in autonomous vehicles.
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)
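
The dependency-handling step relies on topologically sorting the subtask DAG. Below is a minimal sketch using Kahn's algorithm; the priority convention (earlier in the topological order means higher offloading priority) and the toy DAG are assumptions for illustration, not the paper's exact scheme.

```python
from collections import deque

def offload_priorities(n_tasks, deps):
    """Kahn's algorithm: deps is a list of (u, v) edges meaning
    subtask u must finish before subtask v can start."""
    succ = [[] for _ in range(n_tasks)]
    indeg = [0] * n_tasks
    for u, v in deps:
        succ[u].append(v)
        indeg[v] += 1
    ready = deque(i for i in range(n_tasks) if indeg[i] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    if len(order) != n_tasks:
        raise ValueError("dependency graph contains a cycle")
    # Earlier position in the order => higher offloading priority.
    return {task: rank for rank, task in enumerate(order)}

# Toy DAG: subtask 0 feeds 1 and 2; both feed 3.
print(offload_priorities(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))
```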

26 pages, 1012 KiB  
Article
On the Optimization of Kubernetes toward the Enhancement of Cloud Computing
by Subrota Kumar Mondal, Zhen Zheng and Yuning Cheng
Mathematics 2024, 12(16), 2476; https://doi.org/10.3390/math12162476 - 10 Aug 2024
Cited by 5
Abstract
With the vigorous development of big data and cloud computing, containers are becoming the main platform for running applications due to their flexible and lightweight features. A container cluster management system can more effectively manage large numbers of containers across multiple machine nodes, and Kubernetes, with its powerful container orchestration capabilities, has become the leader among container cluster management systems. However, the default Kubernetes components and settings exhibit performance bottlenecks and are not adaptable to complex usage environments. In particular, the issues include data distribution latency, inefficient cluster backup and restore leading to poor disaster recovery, poor rolling updates leading to downtime, inefficiency in load balancing and request handling, and poor autoscaling and scheduling strategies leading to quality-of-service (QoS) violations and insufficient resource usage, among many others. Addressing the insufficient performance of the default Kubernetes platform, this paper focuses on reducing data distribution latency, improving cluster backup and restore strategies toward better disaster recovery, optimizing zero-downtime rolling updates, incorporating better strategies for load balancing and request handling, optimizing autoscaling, introducing a better scheduling strategy, and so on. A corresponding experimental analysis is carried out. The experimental results show that, compared with the default settings, the optimized Kubernetes platform can handle more than 2000 concurrent requests, reduces CPU overhead by more than 1.5%, reduces memory usage by more than 0.6%, reduces the average request time by an average of 7.6%, and reduces the number of request failures by at least 32.4%, achieving the expected effect.
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)
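
For context on the autoscaling baseline being optimized: the stock Kubernetes Horizontal Pod Autoscaler sizes a deployment by the documented rule desired = ceil(currentReplicas × currentMetric / targetMetric), skipping changes within a default 10% tolerance. The sketch below reproduces that default rule only; it is the behavior the paper improves upon, not the paper's optimized strategy, and the replica bounds are illustrative.

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         tolerance=0.1, min_replicas=1, max_replicas=10):
    """Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric),
    skipped when the ratio is within the default 10% tolerance."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas                      # within tolerance: no scaling
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods at 80% average CPU against a 50% target -> scale out to 7 pods.
print(hpa_desired_replicas(4, current_metric=80.0, target_metric=50.0))
```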

18 pages, 562 KiB  
Article
Joint UAV Deployment and Task Offloading in Large-Scale UAV-Assisted MEC: A Multiobjective Evolutionary Algorithm
by Qijie Qiu, Lingjie Li, Zhijiao Xiao, Yuhong Feng, Qiuzhen Lin and Zhong Ming
Mathematics 2024, 12(13), 1966; https://doi.org/10.3390/math12131966 - 25 Jun 2024
Cited by 1
Abstract
With the development of digital economy technologies, mobile edge computing (MEC) has emerged as a promising computing paradigm that provides mobile devices with closer edge computing resources. Because of their high mobility, unmanned aerial vehicles (UAVs) have been extensively utilized to augment MEC and improve its scalability and adaptability. However, with more UAVs or mobile devices, the search space grows exponentially, leading to the curse of dimensionality. This paper focuses on the combined challenges of UAV deployment and task offloading for mobile devices in large-scale UAV-assisted MEC. Specifically, the joint UAV deployment and task offloading problem is first modeled as a large-scale multiobjective optimization problem with the goal of minimizing energy consumption while improving user satisfaction. Then, a large-scale UAV deployment and task offloading multiobjective optimization method based on an evolutionary algorithm, called LDOMO, is designed to address the formulated problem. In LDOMO, a CSO-based evolutionary strategy and an MLP-based evolutionary strategy are proposed to explore solution spaces with different features, accelerating convergence and maintaining the diversity of the population, and two local search optimizers are designed to improve solution quality. Finally, simulation results show that the proposed LDOMO outperforms several representative multiobjective evolutionary algorithms.
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)
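
The energy-versus-satisfaction trade-off rests on Pareto dominance: one solution dominates another when it is no worse in every objective and strictly better in at least one. Below is a minimal sketch of the dominance test and non-dominated filtering underlying multiobjective methods like LDOMO; the objective values are made up for illustration.

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (both objectives minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of candidate solutions."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Objectives: (energy consumption, 1 - user satisfaction), both to be minimized.
candidates = [(10.0, 0.40), (12.0, 0.30), (9.0, 0.55), (11.0, 0.35), (13.0, 0.45)]
print(pareto_front(candidates))   # (13.0, 0.45) is dominated by (12.0, 0.30)
```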
