Network, Volume 5, Issue 2 (June 2025) – 12 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
34 pages, 7040 KiB  
Article
A Practical Implementation of Post-Quantum Cryptography for Secure Wireless Communication
by Babatunde Ojetunde, Takuya Kurihara, Kazuto Yano, Toshikazu Sakano and Hiroyuki Yokoyama
Network 2025, 5(2), 20; https://doi.org/10.3390/network5020020 - 10 Jun 2025
Abstract
Recent advances in quantum computing have prompted urgent consideration of the migration of classical cryptographic systems to post-quantum alternatives. However, it is impossible to fully understand the impact that migrating to current Post-Quantum Cryptography (PQC) algorithms will have on various applications without the actual implementation of quantum-resistant cryptography. On the other hand, PQC algorithms come with complexity and long processing times, which may impact the quality of service (QoS) of many applications. Therefore, PQC-based protocols with practical implementations across various applications are essential. This paper introduces a new framework for PQC standalone and PQC–AES (Advanced Encryption Standard) hybrid public-key encryption (PKE) protocols. Building on prior results, we focus on securing applications such as file transfer, video streaming, and chat-based communication using enhanced PQC-based protocols. The extended PQC-based protocols use a sequence number-based mechanism to effectively counter replay and man-in-the-middle attacks and mitigate standard cybersecurity attack vectors. Experimental evaluations examined encryption/decryption speeds, throughput, and processing overhead for the standalone PQC and the PQC–AES hybrid schemes, benchmarking them against traditional AES-256 in an existing client–server environment. The results demonstrate that the new approaches achieve a significant balance between security and system performance compared to conventional deployments. Furthermore, a comprehensive security analysis confirms the robustness and effectiveness of the proposed PQC-based protocols across diverse attack scenarios. Notably, the PQC–AES hybrid protocol demonstrates greater efficiency for applications handling larger data volumes (e.g., 10–100 KB) with reduced latency, underscoring the practical necessity of carefully balancing security and operational efficiency in the post-quantum migration process. Full article
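The sequence-number mechanism that the abstract credits with countering replay attacks can be illustrated with a minimal sketch; the class and field names below are ours for illustration, not the authors' implementation.

```python
# Illustrative sketch of a sequence-number replay check (not the paper's code).
# Each receiver tracks the highest sequence number accepted per sender and
# rejects any message whose number is not strictly greater.

class ReplayGuard:
    def __init__(self):
        self.last_seen = {}  # sender id -> highest sequence number accepted

    def accept(self, sender, seq):
        """Return True and record seq if the message is fresh, else False."""
        if seq <= self.last_seen.get(sender, -1):
            return False  # replayed or duplicated message: drop it
        self.last_seen[sender] = seq
        return True
```

A replayed ciphertext carries an already-seen sequence number, so it is rejected even though its cryptographic payload verifies.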

34 pages, 2852 KiB  
Article
RACHEIM: Reinforced Reliable Computing in Cloud by Ensuring Restricted Access Control
by Urvashi Rahul Saxena and Rajan Kadel
Network 2025, 5(2), 19; https://doi.org/10.3390/network5020019 - 9 Jun 2025
Abstract
Cloud computing has witnessed rapid growth and notable technological progress in recent years. Nevertheless, it is still regarded as being in its early developmental phase, with substantial potential remaining to be explored—particularly through integration with emerging technologies such as the Metaverse, Augmented Reality (AR), and Virtual Reality (VR). As the number of service users increases, so does the demand for computational resources, leading data owners to outsource processing tasks to remote cloud servers. The internet-based delivery of cloud computing services consequently expands the attack surface and weakens the trust relationship between the service user and the service provider. To address these challenges, this study proposes a restricted access control framework based on homomorphic encryption (HE) and identity-based encryption (IBE) mechanisms. A formal analysis of the proposed model is also conducted under an unauthenticated communication model. Simulation results indicate that the proposed approach achieves 20–40% reductions in encryption and decryption times compared with existing state-of-the-art homomorphic encryption schemes. The simulation was performed using a 2048-bit key and data size consistent with current industry standards, to improve key management efficiency. Additionally, the role-based hierarchy was implemented in a Salesforce cloud environment to ensure secure and restricted access control. Full article

12 pages, 1752 KiB  
Article
The Role of Topological Parameters in Wavelength Requirements for Survivable Optical Backbone Networks
by Filipe Carmo and João Pires
Network 2025, 5(2), 18; https://doi.org/10.3390/network5020018 - 4 Jun 2025
Abstract
As optical networks operate using light-based transmission, assigning wavelengths to the paths taken by traffic demands is a key aspect of their design. This paper revisits the wavelength assignment problem in optical backbone networks, focusing on survivability via 1 + 1 Optical Channel (OCh) protection, which ensures fault tolerance by duplicating data over two disjoint optical paths. The analysis places particular emphasis on the influence of topological parameters on wavelength requirements, with algebraic connectivity identified as the most significant parameter. The results show that, across a set of 27 real-world networks, the wavelength increment factor, defined as the ratio between the number of wavelengths required with protection and without protection, ranges from 1.49 to 3.07, with a mean value of 2.26. Using synthetic data, formulas were derived to estimate this factor from network parameters, resulting in a mean relative error of 12.7% and errors below 15% in 70% of the real-world cases studied. Full article
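The wavelength increment factor defined in the abstract is a simple ratio; the helper below spells it out, with sample counts chosen for illustration rather than taken from the paper.

```python
def wavelength_increment_factor(protected: int, unprotected: int) -> float:
    """Ratio of wavelengths required with 1+1 protection to those required without."""
    return protected / unprotected

# Illustrative numbers: a network needing 68 wavelengths with protection and
# 30 without gives a factor of about 2.27, near the paper's reported mean of 2.26.
```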

24 pages, 2188 KiB  
Article
Optimizing Energy Efficiency in Cloud Data Centers: A Reinforcement Learning-Based Virtual Machine Placement Strategy
by Abdelhadi Amahrouch, Youssef Saadi and Said El Kafhali
Network 2025, 5(2), 17; https://doi.org/10.3390/network5020017 - 27 May 2025
Abstract
Cloud computing faces growing challenges in energy consumption due to the increasing demand for services and resource usage in data centers. To address this issue, we propose a novel energy-efficient virtual machine (VM) placement strategy that integrates reinforcement learning (Q-learning), a Firefly optimization algorithm, and a VM sensitivity classification model based on random forest and self-organizing map. The proposed method, RLVMP, classifies VMs as sensitive or insensitive and dynamically allocates resources to minimize energy consumption while ensuring compliance with service level agreements (SLAs). Experimental results using the CloudSim simulator, adapted with data from Microsoft Azure, show that our model significantly reduces energy consumption. Specifically, under the lr_1.2_mmt strategy, our model achieves a 5.4% reduction in energy consumption compared to PABFD, 12.8% compared to PSO, and 12% compared to genetic algorithms. Under the iqr_1.5_mc strategy, the reductions are even more significant: 12.11% compared to PABFD, 15.6% compared to PSO, and 18.67% compared to genetic algorithms. Furthermore, our model reduces the number of live migrations, which helps minimize SLA violations. Overall, the combination of Q-learning and the Firefly algorithm enables adaptive, SLA-compliant VM placement with improved energy efficiency. Full article
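The Q-learning component underlying the RLVMP strategy rests on the standard tabular Bellman update; here is a minimal sketch, where the state/action encoding and reward are hypothetical placeholders, not the paper's RLVMP implementation.

```python
# Minimal tabular Q-learning update (sketch only; not the paper's RLVMP code).
# Q maps (state, action) pairs to estimated long-term value; a VM-placement
# agent would encode host load as the state and a target host as the action.

def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]
```

A negative reward proportional to energy drawn (plus an SLA-violation penalty) would steer such an agent toward low-power placements, which is the general shape of objective the abstract describes.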

28 pages, 2049 KiB  
Review
A Survey on Software Defined Network-Enabled Edge Cloud Networks: Challenges and Future Research Directions
by Baha Uddin Kazi, Md Kawsarul Islam, Muhammad Mahmudul Haque Siddiqui and Muhammad Jaseemuddin
Network 2025, 5(2), 16; https://doi.org/10.3390/network5020016 - 20 May 2025
Abstract
The explosion of connected devices and data transmission in the Internet of Things (IoT) era places a substantial burden on the capacity of cloud computing. Moreover, these IoT devices are mostly positioned at the edge of a network and are limited in resources. To address these challenges, edge cloud distributed computing networks have emerged. Because of the distributed nature of edge cloud networks, many research works consider software-defined networks (SDNs) and network function virtualization (NFV) to be key enablers for managing, orchestrating, and load-balancing resources. This article provides a comprehensive survey of these emerging technologies, focusing on SDN controllers, orchestration, and the role of artificial intelligence (AI) in enhancing the capabilities of controllers within edge cloud computing networks. More specifically, we present an extensive survey of research proposals on the integration of SDN controllers and orchestration with edge cloud networks. We further introduce a holistic overview of SDN-enabled edge cloud networks and an inclusive summary of edge cloud use cases and their key challenges. Finally, we address some challenges and potential research directions for further exploration in this vital research area. Full article
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)

29 pages, 2193 KiB  
Article
Evaluation of TOPSIS Algorithm for Multi-Criteria Handover in LEO Satellite Networks: A Sensitivity Analysis
by Pascal Buhinyori Ngango, Marie-Line Lufua Binda, Michel Matalatala Tamasala, Pierre Sedi Nzakuna, Vincenzo Paciello and Angelo Kuti Lusala
Network 2025, 5(2), 15; https://doi.org/10.3390/network5020015 - 2 May 2025
Abstract
The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is widely recognized as an effective multi-criteria decision-making algorithm for handover management in terrestrial cellular networks, especially in scenarios involving dynamic and multi-faceted criteria. While TOPSIS is widely adopted in terrestrial cellular networks for handover management, its application in satellite networks, particularly in Low Earth Orbit (LEO) constellations, remains limited and underexplored. In this work, the performance of three TOPSIS algorithms is evaluated for handover management in LEO satellite networks, where efficient handover management is crucial due to rapid changes in satellite positions and network conditions. Sensitivity analysis is conducted on Standard Deviation TOPSIS (SD-TOPSIS), Entropy-TOPSIS, and Importance-TOPSIS in the context of LEO satellite networks, assessing their responsiveness to small variations in key performance metrics such as upload speed, download speed, ping, and packet loss. This study uses real-world data from the “Starlink-on-the-road-Dataset”. Results show that SD-TOPSIS effectively optimizes handover management in dynamic LEO satellite networks thanks to its lower standard deviation scores and reduced score variation rate, demonstrating superior stability and lower sensitivity to small variations in performance metric values compared to both Entropy-TOPSIS and Importance-TOPSIS. This ensures more consistent decision-making, avoidance of unnecessary handovers, and enhanced robustness in rapidly changing network conditions, making it particularly suitable for real-time services that require stable, low-latency, and reliable connectivity. Full article
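The core TOPSIS ranking the paper builds on can be sketched in a few lines. This is the textbook formulation; the criteria, weights, and benefit/cost flags in the example are illustrative, not the weighting schemes (SD, entropy, importance) compared in the paper.

```python
import numpy as np

# Textbook TOPSIS (sketch): rows are candidate satellites, columns are criteria
# such as download speed, upload speed, ping, and packet loss.
def topsis(matrix, weights, benefit):
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # 1. Vector-normalize each criterion column, then apply the weights.
    V = X / np.linalg.norm(X, axis=0) * w
    # 2. Ideal best/worst per column: max for benefit criteria, min for cost.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 3. Closeness coefficient: distance to worst / (to best + to worst).
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    return d_worst / (d_best + d_worst)  # higher score = better handover target
```

The three variants studied differ in how `weights` is derived (from standard deviation, entropy, or assigned importance), which is exactly where the sensitivity analyzed in the paper enters.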

40 pages, 2062 KiB  
Review
State of the Art in Internet of Things Standards and Protocols for Precision Agriculture with an Approach to Semantic Interoperability
by Eduard Roccatello, Antonino Pagano, Nicolò Levorato and Massimo Rumor
Network 2025, 5(2), 14; https://doi.org/10.3390/network5020014 - 21 Apr 2025
Abstract
The integration of Internet of Things (IoT) technology into the agricultural sector enables the collection and analysis of large amounts of data, facilitating greater control over internal processes, resulting in cost reduction and improved quality of the final product. One of the main challenges in designing an IoT system is the need for interoperability among devices: different sensors collect information in non-homogeneous formats, which are often incompatible with each other. Therefore, the user of the system is forced to use different platforms and software to consult the data, making the analysis complex and cumbersome. The solution to this problem lies in the adoption of an IoT standard that standardizes the output of the data. This paper first provides an overview of the standards and protocols used in precision farming and then presents a system architecture designed to collect measurements from sensors and translate them into a standard. The standard is selected based on an analysis of the state of the art and tailored to meet the specific needs of precision agriculture. With the introduction of a connector device, the system can accommodate any number of different sensors while maintaining the output data in a uniform format. Each type of sensor is associated with a specific connector that intercepts the data intended for the database and translates it into the standard format before forwarding it to the central server. Finally, examples with real sensors are presented to illustrate the operation of the connectors and their role in an interoperable architecture, aiming to combine flexibility and ease of use with low implementation costs. Full article

48 pages, 1921 KiB  
Article
Design and Analysis of an Effective Architecture for Machine Learning Based Intrusion Detection Systems
by Noora Alromaihi, Mohsen Rouached and Aymen Akremi
Network 2025, 5(2), 13; https://doi.org/10.3390/network5020013 - 14 Apr 2025
Abstract
The increase in new cyber threats is a result of the rapid growth of Internet use, raising questions about the effectiveness of traditional Intrusion Detection Systems (IDSs). Machine learning (ML) technology is used to enhance cybersecurity in general, and especially reactive approaches such as traditional IDSs. In several instances, a single attacker may direct their efforts towards different servers belonging to an organization. IDSs often perceive this behavior as infrequent attacks, which diminishes detection effectiveness. In this context, this paper aims to create a machine learning-based IDS model able to detect malicious traffic received by different organizational network interfaces. A centralized proxy server is designed to receive all the incoming traffic at the organization’s servers, scan the traffic by using the proposed IDS, and then redirect the traffic to the requested server. The proposed IDS was evaluated by using three datasets: CIC-MalMem-2022, CIC-IDS-2018, and CIC-IDS-2017. The XGBoost model showed exceptional performance in rapid detection, achieving 99.96%, 99.73%, and 99.84% accuracy rates within short time intervals. The Stacking model achieved the highest level of accuracy among the evaluated models. The developed IDS demonstrated superior accuracy and detection time outcomes compared with previous research in the field. Full article

16 pages, 2521 KiB  
Article
Age of Information Minimization in Vehicular Edge Computing Networks: A Mask-Assisted Hybrid PPO-Based Method
by Xiaoli Qin, Zhifei Zhang, Chanyuan Meng, Rui Dong, Ke Xiong and Pingyi Fan
Network 2025, 5(2), 12; https://doi.org/10.3390/network5020012 - 14 Apr 2025
Abstract
With the widespread deployment of various emerging intelligent applications, information timeliness is crucial for intelligent decision-making in vehicular networks, where vehicular edge computing (VEC) has become an important paradigm to enhance computing capabilities by offloading tasks to edge nodes. To promote the information timeliness in VEC, an optimization problem is formulated to minimize the age of information (AoI) by jointly optimizing task offloading and subcarrier allocation. Due to the time-varying channel and the coupling of the continuous and discrete optimization variables, the problem exhibits non-convexity, which is difficult to solve using traditional mathematical optimization methods. To efficiently tackle this challenge, we employ a hybrid proximal policy optimization (HPPO)-based deep reinforcement learning (DRL) method by designing the mixed action space involving both continuous and discrete variables. Moreover, an action masking mechanism is designed to filter out invalid actions in the action space caused by limitations in the effective communication distance between vehicles. As a result, a mask-assisted HPPO (MHPPO) method is proposed by integrating the action masking mechanism into the HPPO. Simulation results show that the proposed MHPPO method achieves an approximately 28.9% reduction in AoI compared with the HPPO method and about a 23% reduction compared with the mask-assisted deep deterministic policy gradient (MDDPG). Full article
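The action-masking idea at the heart of MHPPO can be sketched independently of the full PPO machinery: invalid discrete actions (e.g., edge nodes outside the effective communication range) have their logits forced to negative infinity before the softmax, so the policy assigns them zero probability. The function below is a minimal illustration, not the paper's implementation.

```python
import numpy as np

# Sketch of action masking for a discrete policy head. Offloading targets that
# are out of communication range are marked invalid and can never be sampled.
def masked_policy(logits, valid_mask):
    """Softmax over logits with invalid actions forced to probability zero."""
    masked = np.where(valid_mask, logits, -np.inf)
    # Subtract the max over valid actions for numerical stability.
    exp = np.exp(masked - masked[valid_mask].max())
    probs = np.where(valid_mask, exp, 0.0)
    return probs / probs.sum()
```

Because masked actions never appear in sampled trajectories, the policy gradient is never spent learning to avoid them, which is the efficiency gain masking provides over penalizing invalid actions through the reward.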

20 pages, 2857 KiB  
Article
An Experimental Comparison of Basic Device Localization Systems in Wireless Sensor Networks
by Maurizio D’Arienzo
Network 2025, 5(2), 11; https://doi.org/10.3390/network5020011 - 14 Apr 2025
Abstract
Localization plays a crucial role in wireless sensor networks (WSNs) and has sparked significant research interest. GPS provides fairly accurate position estimates, but it is ineffective indoors and in environments such as underwater. Power usage and cost are further disadvantages, so many alternatives have been proposed. Many works in the literature still base localization on RSSI measurements and often rely on methods to mitigate the effects of fluctuations in values, so it is important to know the real values of RSSI measured by common devices. This work presents the main localization techniques and exploits a real testbed to collect and evaluate RSSI measurements. An accuracy evaluation and a comparison among several localization techniques are also provided. Full article
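RSSI-based ranging of the kind this testbed measures typically inverts the log-distance path-loss model. A minimal sketch follows; the reference RSSI at 1 m and the path-loss exponent are environment-dependent calibration values chosen here purely for illustration.

```python
# Sketch of the log-distance path-loss model used to turn an RSSI reading into
# a distance estimate: rssi = rssi0 - 10 * n * log10(d), solved for d.
# rssi0 (RSSI at 1 m) and n (path-loss exponent) must be calibrated per
# environment; the defaults below are illustrative only.
def rssi_to_distance(rssi: float, rssi0: float = -40.0, n: float = 2.0) -> float:
    """Estimate distance in metres from an RSSI measurement in dBm."""
    return 10 ** ((rssi0 - rssi) / (10 * n))
```

The model's sensitivity to fluctuations is visible directly: a few dB of noise shifts the estimate multiplicatively, which is why the surveyed methods average or filter RSSI before ranging.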

29 pages, 875 KiB  
Review
A Survey of Quality-of-Service and Quality-of-Experience Provisioning in Information-Centric Networks
by Nazmus Sadat and Rui Dai
Network 2025, 5(2), 10; https://doi.org/10.3390/network5020010 - 14 Apr 2025
Abstract
Information-centric networking (ICN) is a promising approach to address the limitations of current host-centric IP-based networking. ICN models feature ubiquitous in-network caching to provide faster and more reliable content delivery, name-based routing to provide better scalability, and self-certifying contents to ensure better security. Due to the differences in the core architecture of ICN compared to existing IP-based networks, it requires special considerations to provide quality-of-service (QoS) or quality-of-experience (QoE) support for applications based on ICNs. This paper discusses the latest advances in QoS and QoE research for ICNs. First, an overview of ICN architectures is given, followed by a summary of different factors that influence QoS and QoE. Approaches for improving QoS and QoE in ICNs are then discussed in five main categories: in-network caching, name resolution and routing, transmission and flow control, software-defined networking, and media-streaming-based strategies. Finally, open research questions for providing QoS and QoE support in ICNs are outlined for future research. Full article

29 pages, 2215 KiB  
Article
Bounce: A High Performance Satellite-Based Blockchain System
by Xiaoteng Liu, Taegyun Kim and Dennis E. Shasha
Network 2025, 5(2), 9; https://doi.org/10.3390/network5020009 - 31 Mar 2025
Abstract
Blockchains are designed to produce a secure, append-only sequence of transactions. Establishing transaction sequentiality is typically achieved by underlying consensus protocols that either prevent forks entirely (no-forking-ever) or make forks short-lived. The main challenges facing blockchains are to achieve this no-forking condition while achieving high throughput, low response time, and low energy costs. This paper presents the Bounce blockchain protocol along with throughput and response time experiments. The core of the Bounce system is a set of satellites that partition time slots. The satellite for slot i signs a commit record that includes the hash of the commit record of slot i-1, as well as a sequence of zero or more Merkle tree roots, each of whose corresponding Merkle trees holds thousands or millions of transactions. The ledger consists of the transactions in the sequence of Merkle trees corresponding to the roots of the sequence of commit records. Thus, the satellites work as arbiters that decide the next block(s) for the blockchain. Satellites orbiting around the Earth are harder to tamper with and harder to isolate than terrestrial data centers, though our protocol could work with terrestrial data centers as well. Under reasonable assumptions—intermittently failing but non-Byzantine (i.e., non-traitorous) satellites, possibly Byzantine Ground Stations, and “exposure-averse” administrators—the Bounce System achieves high availability and a no-fork-ever blockchain. Our experiments show that the protocol achieves high transactional throughput (5.2 million transactions per two-second slot), low response time (less than three seconds for “premium” transactions and less than ten seconds for “economy” transactions), and minimal energy consumption (under 0.05 joules per transaction). Moreover, given five more cloud sites of the kinds currently available in CloudLab, Clemson, we show how the design could achieve throughputs of 15.2 million transactions per two-second slot with the same response time profile. Full article
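The hash-chaining of commit records described in the abstract can be sketched with a few lines of standard hashing; the field layout below is illustrative, since the abstract does not specify the protocol's actual encoding.

```python
import hashlib

# Sketch of a Bounce-style commit record chain: the record for slot i hashes
# the record of slot i-1 together with the Merkle roots committed in slot i.
# Field layout is illustrative, not the protocol's real wire format.
def commit_record(prev_record_hash: bytes, merkle_roots: list) -> bytes:
    h = hashlib.sha256()
    h.update(prev_record_hash)
    for root in merkle_roots:
        h.update(root)
    return h.digest()
```

Because each record's hash covers its predecessor, tampering with any earlier record changes every later hash, which is what makes the chain append-only once records are signed.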
