Recent Advances and Challenges in IoT, Cloud and Edge Coexistence

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 27018

Special Issue Editors


Guest Editor
Department of Information and Communication Systems Engineering, University of the Aegean, 83100 Samos, Greece
Interests: novel internet architectures and services; cloud computing and networking; energy- and context-aware next-generation networks and services; management aspects of mobile and wireless networks; ubiquitous and pervasive computing; end-to-end quality-of-service provisioning in heterogeneous network environments

Guest Editor
Eclipse Foundation, 31000 Toulouse, France
Interests: open source; community building; ecosystems

Guest Editor
Atos Research & Innovation, 28037 Madrid, Spain
Interests: edge; cloud; DevOps

Guest Editor
Netcompany-Intrasoft S.A., 19.5 km Markopoulou Ave., 19002 Peania, Attika, Greece
Interests: Internet of Things; artificial intelligence; edge computing

Special Issue Information

Dear Colleagues,

The coexistence of the IoT, cloud, and edge (ICE) provides fertile ground for innovation spanning all sectors of vertical industries and research and development ecosystems, responding to the ever-increasing volume of data and the accompanying surge of business cases.

The ICE continuum is being driven to its next generation by intelligence, scalability, openness, risk awareness, security, greenness, and monetization. These advances enable the massive, timely development, deployment, and migration of devices and services in diverse operational environments, the effective management of risks and vulnerabilities, and the cross-sector exploitation of dynamic data sets.

This Special Issue aims to publish high-quality articles, including reviews and position papers, that address challenges in the use of these technologies (IoT, cloud, and edge computing) towards their technical and business fusion, realizing a computing continuum enhanced by AI and openness.

The topics of interest include but are not limited to:

  • Meta-operating systems;
  • Security, privacy and trust;
  • Open source enabling platforms and tools;
  • Artificial intelligence;
  • Data and identity management;
  • Vertical industries;
  • Orchestration and management technologies;
  • Business modeling and commercialization;
  • Sustainable Development Goals;
  • Digital twins;
  • Blockchain technologies.

Dr. Charalabos Skianis
Dr. Philippe Krief
Dr. Enric Pages Montanera
Dr. John Soldatos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Registered authors can then use the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • cloud
  • edge
  • open source
  • artificial intelligence
  • security
  • data management
  • sustainability

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (9 papers)


Research


25 pages, 5087 KiB  
Article
Optimizing the Long-Term Efficiency of Users and Operators in Mobile Edge Computing Using Reinforcement Learning
by Jianji Shao and Yanjun Li
Electronics 2025, 14(8), 1689; https://doi.org/10.3390/electronics14081689 - 21 Apr 2025
Viewed by 158
Abstract
Mobile edge computing (MEC) has emerged as a promising paradigm to enhance computational capabilities at the network edge, enabling low-latency services for users while ensuring efficient resource utilization for operators. One of the key challenges in MEC is optimizing offloading decisions and resource allocation to balance user experience and operator profitability. In this paper, we integrate software-defined networking (SDN) and MEC to enhance system utility and propose an SDN-based MEC network framework. Within this framework, we formulate an optimization problem that jointly maximizes the utility of both users and operators by optimizing the offloading decisions, communication and computation resource allocation ratios. To address this challenge, we model the problem as a Markov decision process (MDP) and propose a reinforcement learning (RL)-based algorithm to optimize long-term system utility in a dynamic network environment. However, since RL-based algorithms struggle with large state spaces, we extend the MDP formulation to a continuous state space and develop a deep reinforcement learning (DRL)-based algorithm to improve performance. The DRL approach leverages neural networks to approximate optimal policies, enabling more effective decision-making in complex environments. Experimental results validate the effectiveness of our proposed methods. While the RL-based algorithm enhances the long-term average utility of both users and operators, the DRL-based algorithm further improves performance, increasing operator and user efficiency by approximately 22.4% and 12.2%, respectively. These results highlight the potential of intelligent learning-based approaches for optimizing MEC networks and provide valuable insights into designing adaptive and efficient MEC architectures. Full article
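The MDP-based offloading idea above can be sketched with a toy tabular Q-learning loop. Everything here (states as edge-load levels, the latency model, all constants) is an illustrative assumption, not the paper's actual system model:

```python
import random

# Toy MDP for offloading decisions. States: edge-server load levels (0..2);
# actions: 0 = execute locally, 1 = offload to the edge.
STATES, ACTIONS = 3, 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def reward(state, action):
    # Offloading pays off when the edge is lightly loaded (state 0);
    # local execution wins when the edge is congested (state 2).
    local_latency = 5.0
    offload_latency = 1.0 + 3.0 * state
    return -(offload_latency if action == 1 else local_latency)

def step(state, action):
    return random.randrange(STATES)  # load evolves randomly in this toy

def train(episodes=5000, seed=0):
    random.seed(seed)
    q = [[0.0] * ACTIONS for _ in range(STATES)]
    state = 0
    for _ in range(episodes):
        if random.random() < EPS:                      # epsilon-greedy exploration
            a = random.randrange(ACTIONS)
        else:
            a = max(range(ACTIONS), key=lambda x: q[state][x])
        r = reward(state, a)
        nxt = step(state, a)
        # Standard Q-learning temporal-difference update
        q[state][a] += ALPHA * (r + GAMMA * max(q[nxt]) - q[state][a])
        state = nxt
    return q

q = train()
policy = [max(range(ACTIONS), key=lambda a: q[s][a]) for s in range(STATES)]
print(policy)
```

With this toy reward, the learned policy offloads when the edge is idle and computes locally when the edge is congested.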
(This article belongs to the Special Issue Recent Advances and Challenges in IoT, Cloud and Edge Coexistence)

17 pages, 642 KiB  
Article
A Distributed Trustable Framework for AI-Aided Anomaly Detection
by Nikolaos Nomikos, George Xylouris, Gerasimos Patsourakis, Vasileios Nikolakakis, Anastasios Giannopoulos, Charilaos Mandilaris, Panagiotis Gkonis, Charalabos Skianis and Panagiotis Trakadas
Electronics 2025, 14(3), 410; https://doi.org/10.3390/electronics14030410 - 21 Jan 2025
Cited by 1 | Viewed by 1156
Abstract
The evolution towards sixth-generation (6G) networks requires new architecture enhancements to support the broad device ecosystem, comprising users, machines, autonomous vehicles, and Internet-of-things devices. Moreover, high heterogeneity in the desired quality-of-service (QoS) is expected, as 6G networks will offer extremely low-latency and high-throughput services and error-free communication. This complex environment raises significant challenges in resource management while adhering to security and privacy constraints due to the plethora of data generation endpoints. Considering the advances in AI/ML-aided integration in wireless networks and recent efforts on the network data analytics function (NWDAF) by the 3rd generation partnership project (3GPP), this work presents an AI/ML-aided distributed trustable engine (DTE), collecting data from diverse sources of the 6G infrastructure and deploying ML methods for anomaly detection against diverse threat types. Moreover, we present the DTE architecture and its components, providing data management, AI/ML model training, and classification capabilities for anomaly detection. To promote privacy-aware networking, a federated learning (FL) framework to extend the DTE is discussed. Then, the anomaly detection capabilities of the AI/ML-aided DTE are presented in detail, together with the ML model training process, which considers various ML models. For this purpose, we use two open datasets representing attack scenarios in the core and the edge parts of the network. Experimental results, including an ensemble learning method and different supervised learning alternatives, show that the AI/ML-aided DTE can efficiently train ML models with reduced dimensionality and deploy them in diverse cybersecurity scenarios to improve anomaly detection in 6G networks. Full article
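The ensemble-learning idea mentioned above can be illustrated with a minimal majority-vote detector. The features, thresholds, and detectors are hypothetical placeholders, not the DTE's actual models:

```python
# Minimal majority-vote ensemble for anomaly flagging.
def make_threshold_detector(index, threshold):
    """Flag a sample as anomalous (1) when feature `index` exceeds `threshold`."""
    return lambda sample: 1 if sample[index] > threshold else 0

def ensemble_predict(detectors, sample):
    votes = sum(d(sample) for d in detectors)
    return 1 if votes > len(detectors) / 2 else 0

detectors = [
    make_threshold_detector(0, 100.0),  # e.g. packets per second
    make_threshold_detector(1, 0.9),    # e.g. SYN ratio
    make_threshold_detector(2, 50.0),   # e.g. distinct destination ports
]

normal = (40.0, 0.2, 5.0)
attack = (500.0, 0.97, 200.0)
print(ensemble_predict(detectors, normal), ensemble_predict(detectors, attack))
```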

13 pages, 3319 KiB  
Article
Energy and Precision Evaluation of a Systolic Array Accelerator Using a Quantization Approach for Edge Computing
by Alejandra Sanchez-Flores, Jordi Fornt, Lluc Alvarez and Bartomeu Alorda-Ladaria
Electronics 2024, 13(14), 2822; https://doi.org/10.3390/electronics13142822 - 18 Jul 2024
Cited by 1 | Viewed by 1100
Abstract
This paper focuses on the implementation of a neural network accelerator optimized for speed and energy efficiency, for use in embedded machine learning. Specifically, we explore power reduction at the hardware level through systolic array and low-precision data systems, including quantized approaches. We present a comprehensive analysis comparing a full precision (FP16) accelerator with a quantized (INT16) version on an FPGA. We upgraded the FP16 modules to handle INT16 values, employing data shifts to enhance value density while maintaining accuracy. Through single convolution experiments, we assess the energy consumption and error minimization. The paper’s structure includes a detailed description of the FP16 accelerator, the transition to quantization, mathematical and implementation insights, instrumentation for power measurement, and a comparative analysis of power consumption and convolution error. Our results attempt to identify a pattern in 16-bit quantization to achieve significant power savings with minimal loss of accuracy. Full article
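A simple sketch of symmetric 16-bit quantization and the resulting convolution error, in the spirit of the FP16-vs-INT16 comparison above (signal sizes and scaling scheme are illustrative, not the accelerator's exact data path):

```python
import numpy as np

def quantize_int16(x):
    """Symmetric quantization: map the float range to the int16 range."""
    scale = np.max(np.abs(x)) / 32767.0
    q = np.round(x / scale).astype(np.int16)
    return q, scale

rng = np.random.default_rng(42)
signal = rng.standard_normal(64).astype(np.float32)
kernel = rng.standard_normal(8).astype(np.float32)

q_s, s_s = quantize_int16(signal)
q_k, s_k = quantize_int16(kernel)

# Convolve in the integer domain (accumulate in int64 to avoid overflow),
# then rescale back to floating point with the product of the two scales.
ref = np.convolve(signal, kernel)
est = np.convolve(q_s.astype(np.int64), q_k.astype(np.int64)) * (s_s * s_k)

max_err = np.max(np.abs(ref - est))
print(max_err)
```

For 16-bit quantization the reconstruction error of a short convolution stays orders of magnitude below the signal scale, which is the kind of accuracy/energy trade-off the paper measures in hardware.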

19 pages, 619 KiB  
Article
A Reinforcement Learning-Based Reverse Auction Enforcing Smart Pricing Policies towards B5G Offloading Strategies
by Konstantinos Kaltakis, Alexandros Dimos, Ioannis Giannoulakis, Emmanouil Kafetzakis and Charalampos Skianis
Electronics 2024, 13(13), 2488; https://doi.org/10.3390/electronics13132488 - 25 Jun 2024
Viewed by 1921
Abstract
In this paper, we present our work on developing a Smart Pricing Policies module specifically designed for individual users and Mobile Network Operators (MNOs). Our framework operates in a multi-MNO blockchain radio access network (B-RAN) and is tasked with determining prices for resource sharing among users and MNOs. Our adaptive pricing system can handle situations where User Equipment (UE) shifts out of the coverage area of its MNO by immediately sealing a contract with a different MNO to cover the user's needs. In this way, we aim to provide financial incentives to MNOs while ensuring continuous network optimization for all parties involved. Our system accomplishes this by utilizing deep reinforcement learning (DRL) to implement a reverse auction model. In our reinforcement learning scenario, the MNOs, acting as agents, compete to bid the most appealing price for the user's request; guided by the reward system, agents that do not win in the current round adjust their strategies in an attempt to secure a win in subsequent rounds. The findings indicate that combining DRL with reverse auction theory offers a well-suited method for addressing pricing and bidding challenges; additionally, administrators can gain a notable edge by dynamically selecting and adjusting their methods according to individual network conditions and requirements. Full article
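The reverse-auction dynamic described above can be sketched with a toy simulation: each round the lowest bid at or below the user's budget wins, losers undercut, and the winner nudges its price up. The multiplicative update rule below is a deliberately simple stand-in for the paper's DRL policy, and all prices are hypothetical:

```python
class MNOAgent:
    def __init__(self, name, price):
        self.name, self.price = name, price

    def bid(self):
        return self.price

    def update(self, won, floor=1.0):
        if won:
            self.price *= 1.02                          # exploit: raise the price slightly
        else:
            self.price = max(floor, self.price * 0.95)  # undercut next round

def run_auction(agents, budget, rounds=50):
    winners = []
    for _ in range(rounds):
        bids = {a: a.bid() for a in agents}
        eligible = [a for a, b in bids.items() if b <= budget]
        winner = min(eligible, key=lambda a: bids[a]) if eligible else None
        winners.append(winner.name if winner else None)
        for a in agents:
            a.update(a is winner)
    return winners

agents = [MNOAgent("A", 12.0), MNOAgent("B", 9.0), MNOAgent("C", 15.0)]
winners = run_auction(agents, budget=10.0)
print(winners[0], winners[-1])
```

In the first round only B's bid fits the budget; as losers undercut, the other operators become competitive and the win alternates, which is the incentive structure the reward system is meant to create.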

22 pages, 1083 KiB  
Article
FLRAM: Robust Aggregation Technique for Defense against Byzantine Poisoning Attacks in Federated Learning
by Haitian Chen, Xuebin Chen, Lulu Peng and Ruikui Ma
Electronics 2023, 12(21), 4463; https://doi.org/10.3390/electronics12214463 - 30 Oct 2023
Cited by 6 | Viewed by 2825
Abstract
In response to the susceptibility of federated learning, which is based on a distributed training structure, to byzantine poisoning attacks from malicious clients, resulting in issues such as slowed or disrupted model convergence and reduced model accuracy, we propose a robust aggregation technique for defending against byzantine poisoning attacks in federated learning, known as FLRAM. First, we employ isolation forest and an improved density-based clustering algorithm to detect anomalies in the amplitudes and symbols of client local gradients, effectively filtering out gradients with large magnitude and angular deviation variations. Subsequently, we construct a credibility matrix based on the filtered subset of gradients to evaluate the trustworthiness of each local gradient. Using this credibility score, we further select gradients with higher trustworthiness. Finally, we aggregate the filtered gradients to obtain the global gradient, which is then used to update the global model. The experimental findings show that our proposed approach achieves strong defense performance without compromising FedAvg accuracy. Furthermore, it exhibits superior robustness compared to existing solutions. Full article
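The magnitude-and-direction filtering step can be sketched as follows. The median-based cutoffs below are an illustrative simplification of FLRAM's isolation-forest and clustering pipeline, not the paper's actual detector:

```python
import numpy as np

def robust_aggregate(gradients, mag_tol=2.0, cos_min=0.0):
    """Drop gradients with outlier magnitudes or deviant directions, then average."""
    grads = np.array(gradients, dtype=float)
    norms = np.linalg.norm(grads, axis=1)
    direction = np.median(grads, axis=0)              # coordinate-wise median as reference
    direction = direction / (np.linalg.norm(direction) + 1e-12)
    cos = grads @ direction / (norms + 1e-12)         # angular agreement with the reference
    keep = (norms <= mag_tol * np.median(norms)) & (cos >= cos_min)
    return grads[keep].mean(axis=0), keep

honest = [[1.0, 1.0], [0.9, 1.1], [1.1, 0.9]]
byzantine = [[-50.0, -50.0]]                          # huge, opposite-direction update
agg, keep = robust_aggregate(honest + byzantine)
print(agg, keep)
```

The Byzantine update is rejected on both criteria, so the aggregate stays at the honest clients' mean.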

18 pages, 3232 KiB  
Article
Vulnerability Identification and Assessment for Critical Infrastructures in the Energy Sector
by Nikolaos Nikolaou, Andreas Papadakis, Konstantinos Psychogyios and Theodore Zahariadis
Electronics 2023, 12(14), 3185; https://doi.org/10.3390/electronics12143185 - 22 Jul 2023
Cited by 6 | Viewed by 3245
Abstract
Vulnerability identification and assessment is a key process in risk management. While enumerations of vulnerabilities are available, it is challenging to identify vulnerability sets focused on the profiles and roles of specific organizations. To this end, we have employed systematized knowledge and relevant standards (including National Electric Sector Cybersecurity Organization Resource (NESCOR), ISO/IEC 27005:2018 and National Vulnerability Database (NVD)) to identify a set of 250 vulnerabilities for operators of energy-related critical infrastructures. We have elaborated a “double-mapping” scheme to associate (arbitrarily) categorized assets, with the pool of identified Physical, Cyber and Human/Organizational vulnerabilities. We have designed and implemented an extensible vulnerability identification and assessment framework, allowing historized assessments, based on the CVSS (Common Vulnerability Scoring System) scoring mechanism. This framework has been extended to allow modelling of the vulnerabilities and assessments using the Structured Threat Information eXpression (STIX) JSON format, as Cyber Threat Intelligence (CTI) information, to facilitate information sharing between Electrical Power and Energy Systems (EPES) and to promote collaboration and interoperability scenarios. Vulnerability assessments from the initial analysis of the project in the context of Research and Technology Development (RTD) projects have been statistically processed, offering insights in terms of the assessment’s importance and distribution. The assessments have also been transformed into a dynamic dataset processed to identify and quantify correlation and start the discussion on the interpretation of the way assessments are performed. Full article
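The "double-mapping" idea (asset categories mapped to vulnerability classes, which carry assessed scores) can be sketched as two dictionaries. All categories, class names, and scores below are hypothetical placeholders, not entries from the paper's 250-item pool:

```python
# Hypothetical asset-to-vulnerability-class mapping.
asset_to_vuln_classes = {
    "smart_meter": ["physical_tampering", "weak_firmware_signing"],
    "scada_server": ["unpatched_os", "weak_firmware_signing"],
}

# Hypothetical assessed base scores on the 0-10 CVSS scale.
vuln_scores = {
    "physical_tampering": 6.1,
    "weak_firmware_signing": 8.2,
    "unpatched_os": 9.0,
}

def assess(asset):
    """Report an asset's worst-case exposure across its mapped vulnerability classes."""
    classes = asset_to_vuln_classes.get(asset, [])
    scores = [vuln_scores[c] for c in classes]
    return max(scores) if scores else 0.0

report = {asset: assess(asset) for asset in asset_to_vuln_classes}
print(report)
```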

20 pages, 3140 KiB  
Article
Better Safe Than Sorry: Constructing Byzantine-Robust Federated Learning with Synthesized Trust
by Gangchao Geng, Tianyang Cai and Zheng Yang
Electronics 2023, 12(13), 2926; https://doi.org/10.3390/electronics12132926 - 3 Jul 2023
Cited by 4 | Viewed by 2082
Abstract
Byzantine-robust federated learning empowers the central server to acquire high-end global models amidst a restrictive set of malicious clients. The general idea of existing learning methods requires the central server to statistically analyze all local parameter (gradient or weight) updates, and to delete suspicious ones. The drawback of these approaches is that they lack a root of trust that would allow us to identify which local parameter updates are suspicious, which means that malicious clients can still disrupt the global model. The machine learning community has recently proposed a new method, FLTrust (NDSS’2021), where the server achieves robust aggregation by using a tiny, uncontaminated dataset (denoted as the root dataset) to generate the root of trust; however, the global model’s accuracy will significantly decline if the root dataset greatly deviates from the client’s dataset. To address the above problems, we propose FLEST: a Federated LEarning with Synthesized Trust method. Our method considers that trust and anomaly detection methods can complementarily solve their respective problems; therefore, we designed a new robust aggregation rule with synthesized trust scores (STS). Specifically, we propose the trust synthesizing mechanism, which can aggregate trust scores (TS) and confidence scores (CS) into STS through a dynamic trust ratio γ, and we use STS as the weight for aggregating the local parameter updates. Our experimental results demonstrated that FLEST is capable of resisting existing attacks, even when the root dataset distribution significantly differs from the total dataset distribution: for example, the global model trained by FLEST is 41% more accurate than FLTrust for adaptive attacks using the mnist-0.5 dataset with the bias probability set to 0.8. Full article
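The synthesized-trust aggregation rule described above (STS = γ·TS + (1 − γ)·CS, used as weights over local updates) can be sketched directly. The example updates and scores are illustrative, not the paper's experimental values:

```python
import numpy as np

def flest_aggregate(updates, trust_scores, confidence_scores, gamma=0.5):
    """Weight local updates by synthesized trust scores STS = gamma*TS + (1-gamma)*CS."""
    ts = np.asarray(trust_scores, dtype=float)
    cs = np.asarray(confidence_scores, dtype=float)
    sts = gamma * ts + (1.0 - gamma) * cs
    weights = sts / sts.sum()
    agg = np.average(np.asarray(updates, dtype=float), axis=0, weights=weights)
    return agg, sts

updates = [[1.0, 2.0], [1.2, 1.8], [10.0, -10.0]]     # third client looks malicious
agg, sts = flest_aggregate(updates,
                           trust_scores=[0.9, 0.8, 0.05],
                           confidence_scores=[0.95, 0.85, 0.0])
print(sts, agg)
```

The suspicious client receives a near-zero synthesized score, so its extreme update barely moves the aggregate.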

17 pages, 2438 KiB  
Article
GAN-Driven Data Poisoning Attacks and Their Mitigation in Federated Learning Systems
by Konstantinos Psychogyios, Terpsichori-Helen Velivassaki, Stavroula Bourou, Artemis Voulkidis, Dimitrios Skias and Theodore Zahariadis
Electronics 2023, 12(8), 1805; https://doi.org/10.3390/electronics12081805 - 11 Apr 2023
Cited by 9 | Viewed by 4639
Abstract
Federated learning (FL) is an emerging machine learning technique where machine learning models are trained in a decentralized manner. The main advantage of this approach is the data privacy it provides because the data are not processed in a centralized device. Moreover, the local client models are aggregated on a server, resulting in a global model that has accumulated knowledge from all the different clients. This approach, however, is vulnerable to attacks because clients can be malicious or malicious actors may interfere within the network. In the first case, these types of attacks may refer to data or model poisoning attacks where the data or model parameters, respectively, may be altered. In this paper, we investigate the data poisoning attacks and, more specifically, the label-flipping case within a federated learning system. For an image classification task, we introduce two variants of data poisoning attacks, namely model degradation and targeted label attacks. These attacks are based on synthetic images generated by a generative adversarial network (GAN). This network is trained jointly by the malicious clients using a concatenated malicious dataset. Due to dataset sample limitations, the architecture and learning procedure of the GAN are adjusted accordingly. Through the experiments, we demonstrate that these types of attacks are effective in achieving their task and managing to fool common federated defenses (stealth). We also propose a mechanism to mitigate these attacks based on clean label training on the server side. In more detail, we see that the model degradation attack causes an accuracy degradation of up to 25%, while common defenses can only alleviate this for a percentage of ∼5%. Similarly, the targeted label attack results in a misclassification of 56% compared to 2.5% when no attack takes place. Moreover, our proposed defense mechanism is able to mitigate these attacks. Full article
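The label-flipping attack at the core of the paper can be sketched in a few lines (the GAN-synthesized-image component is out of scope here; class names and the flip map are illustrative):

```python
import random

def flip_labels(labels, flip_map, fraction, seed=0):
    """Flip `fraction` of the labels whose class appears in `flip_map` (source -> target)."""
    rng = random.Random(seed)
    poisoned = list(labels)
    candidates = [i for i, y in enumerate(poisoned) if y in flip_map]
    for i in rng.sample(candidates, int(len(candidates) * fraction)):
        poisoned[i] = flip_map[poisoned[i]]
    return poisoned

labels = ["cat"] * 10 + ["dog"] * 10
poisoned = flip_labels(labels, flip_map={"cat": "dog"}, fraction=0.5)
print(poisoned.count("cat"), poisoned.count("dog"))
```

A malicious client training on such a poisoned partition pushes the global model towards the targeted misclassification that the paper measures.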

Review


27 pages, 1135 KiB  
Review
A State-of-the-Art Review of Task Scheduling for Edge Computing: A Delay-Sensitive Application Perspective
by Amin Avan, Akramul Azim and Qusay H. Mahmoud
Electronics 2023, 12(12), 2599; https://doi.org/10.3390/electronics12122599 - 8 Jun 2023
Cited by 12 | Viewed by 7540
Abstract
The edge computing paradigm enables mobile devices with limited memory and processing power to execute delay-sensitive, compute-intensive, and bandwidth-intensive applications by bringing computational power and storage capacity closer to end users. Edge computing comprises heterogeneous, resource-constrained computing platforms that are geographically distributed across the network. As users are mobile and applications change over time, identifying an optimal task scheduling method is a complex multi-objective optimization problem that is NP-hard: only an exhaustive search, whose time complexity grows exponentially, can guarantee the optimal solution. Therefore, various approaches are utilized to discover a good scheduling solution within reasonable time complexity. This study reviews task scheduling algorithms based on centralized and distributed methods in a three-layer computing architecture to identify their strengths and limitations in scheduling tasks to edge service nodes. Full article
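One common centralized heuristic among the families such a survey covers is greedy earliest-finish-time assignment with deadline-first ordering. Node speeds, task sizes, and deadlines below are illustrative:

```python
def schedule(tasks, node_speeds):
    """tasks: list of (name, size, deadline).
    Returns {name: (node, finish_time, meets_deadline)}."""
    free_at = [0.0] * len(node_speeds)
    plan = {}
    for name, size, deadline in sorted(tasks, key=lambda t: t[2]):  # EDF ordering
        finishes = [free_at[n] + size / node_speeds[n] for n in range(len(node_speeds))]
        best = min(range(len(node_speeds)), key=lambda n: finishes[n])
        free_at[best] = finishes[best]
        plan[name] = (best, finishes[best], finishes[best] <= deadline)
    return plan

plan = schedule(
    tasks=[("t1", 4.0, 5.0), ("t2", 2.0, 2.0), ("t3", 6.0, 10.0)],
    node_speeds=[1.0, 2.0],  # node 1 is twice as fast
)
print(plan)
```

Here the tightest-deadline task is placed first on the fastest free node; an exhaustive search over all assignments would be exponential in the number of tasks, which is why surveyed methods settle for heuristics like this.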
