Journal Description
Network is an international, peer-reviewed, open access journal on the science and technology of networks, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within ESCI (Web of Science), Scopus, EBSCO, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q1 (Engineering (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.1 days after submission; acceptance to publication takes 5.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Network is a companion journal of Electronics.
- Journal Clusters of Network and Communications Technology: Future Internet, IoT, Telecom, Journal of Sensor and Actuator Networks, Network, Signals.
Impact Factor: 3.1 (2024); 5-Year Impact Factor: 2.9 (2024)
Latest Articles
A Two-Phase Genetic Algorithm Approach for Sleep Scheduling, Routing, and Clustering in Heterogeneous Wireless Sensor Networks
Network 2025, 5(4), 50; https://doi.org/10.3390/network5040050 - 4 Nov 2025
Abstract
Heterogeneous wireless sensor networks (HWSNs), comprising super nodes and normal sensors, offer a promising solution for monitoring diverse environments. However, their deployment is constrained by the limited battery life of sensors. To address this issue, clustering and routing techniques have been employed to conserve energy. Nevertheless, existing approaches often struggle with suboptimal energy distribution and weak network coverage. Additionally, they largely fail to exploit other energy-saving techniques such as sleep scheduling. This paper proposes a novel genetic algorithm (GA)-based approach to optimize sleep scheduling, routing, and clustering in HWSNs. The method comprises two phases: joint sleep scheduling and tree construction, followed by clustering of normal nodes. Inspired by the concept of unequal clustering, the HWSN is split into several rings in the first phase, and the number of awake super nodes in each ring is kept the same. This approach addresses the challenge of balancing energy consumption and network lifetime. Furthermore, including network coverage and energy-related criteria in the proposed GA yields long-lasting network operation. Through rigorous simulations, we demonstrate that, on average, our algorithm reduces energy consumption and improves network coverage by 23% and 21.9%, respectively, and extends network lifetime by 501 rounds, compared to state-of-the-art methods.
Full article
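To make the two-phase idea above more concrete, the following minimal sketch shows how a genetic algorithm might score binary sleep schedules for super nodes against competing energy and coverage criteria; the encoding, fitness weights, and all parameters are illustrative assumptions, not the authors' formulation.

```python
# Illustrative sketch only: a minimal genetic algorithm that evolves a binary
# sleep schedule for super nodes, scoring candidates by assumed energy and
# coverage terms. Encoding, weights, and parameters are placeholders, not the
# paper's actual formulation.
import random

NUM_SUPER_NODES = 40
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 50, 0.02
W_ENERGY, W_COVERAGE = 0.5, 0.5   # assumed weighting of the two criteria
REQUIRED_AWAKE = 15               # assumed number of awake super nodes for full coverage

def fitness(schedule):
    """Higher is better: reward sleeping nodes (energy) and sufficient awake nodes (coverage)."""
    awake = sum(schedule)
    energy_saving = 1.0 - awake / NUM_SUPER_NODES      # fraction of nodes asleep
    coverage = min(1.0, awake / REQUIRED_AWAKE)        # saturating coverage proxy
    return W_ENERGY * energy_saving + W_COVERAGE * coverage

def crossover(a, b):
    cut = random.randint(1, NUM_SUPER_NODES - 1)
    return a[:cut] + b[cut:]

def mutate(schedule):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in schedule]

population = [[random.randint(0, 1) for _ in range(NUM_SUPER_NODES)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best schedule keeps", sum(best), "super nodes awake")
```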
Open Access Article
Real-Time Handover in LEO Satellite Networks via Markov Chain-Guided Simulated Annealing
by
Mohammad A. Massad, Abdallah Y. Alma’aitah and Hossam S. Hassanein
Network 2025, 5(4), 49; https://doi.org/10.3390/network5040049 - 3 Nov 2025
Abstract
This paper presents a real-time handover and link assignment framework for low-Earth-orbit (LEO) satellite networks operating in dense urban canyons. The proposed Markov chain-guided simulated annealing (MCSA) algorithm optimizes user-to-satellite assignments under dynamic channel and capacity constraints. By incorporating Markov chains to guide state transitions, MCSA achieves faster convergence and more effective exploration than conventional simulated annealing. Simulations conducted in Ku-band urban canyon environments show that the framework achieves an average user satisfaction of about 97%, providing an approximately 10% improvement over genetic algorithm (GA) results. It also delivers 10–15% higher resource utilization, blocking rates comparable to those of integer linear programming (ILP), and superior runtime scalability with linear complexity. These results confirm that MCSA provides a scalable and robust real-time mobility management solution for next-generation LEO satellite systems.
Full article
(This article belongs to the Special Issue Advances in Wireless Communications and Networks)
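As a rough illustration of the idea described in this abstract, the sketch below runs simulated annealing over user-to-satellite assignments and biases moves with a simple acceptance-frequency memory standing in for the Markov-chain guidance; the objective, penalty, and guidance rule are assumptions, not the paper's MCSA algorithm.

```python
# Illustrative sketch only: simulated annealing over user-to-satellite assignments,
# with a simple Markov-style transition bias toward recently accepted moves.
# Channel gains, capacities, and the guidance rule are assumptions for illustration.
import math, random

NUM_USERS, NUM_SATS, SAT_CAPACITY = 20, 4, 6
gain = [[random.random() for _ in range(NUM_SATS)] for _ in range(NUM_USERS)]

def score(assign):
    """Total channel gain, penalizing satellites loaded beyond capacity."""
    load = [0] * NUM_SATS
    total = 0.0
    for u, s in enumerate(assign):
        load[s] += 1
        total += gain[u][s]
    penalty = sum(max(0, l - SAT_CAPACITY) for l in load)
    return total - 5.0 * penalty

assign = [random.randrange(NUM_SATS) for _ in range(NUM_USERS)]
move_weight = [[1.0] * NUM_SATS for _ in range(NUM_USERS)]  # crude transition "memory"
temp = 1.0
for step in range(2000):
    u = random.randrange(NUM_USERS)
    # Markov-chain-style guidance: sample the next satellite in proportion to
    # how often moves to it have been accepted for this user.
    s_new = random.choices(range(NUM_SATS), weights=move_weight[u])[0]
    old_s, old_score = assign[u], score(assign)
    assign[u] = s_new
    delta = score(assign) - old_score
    if delta >= 0 or random.random() < math.exp(delta / temp):
        move_weight[u][s_new] += 0.1      # reinforce accepted transitions
    else:
        assign[u] = old_s                 # reject the move and roll back
    temp *= 0.999                         # geometric cooling

print("final objective:", round(score(assign), 2))
```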
Open Access Article
Intelligent Reflecting-Surface-Aided Orbital Angular Momentum Divergence-Alleviated Wireless Communication Mechanism
by
Qiuli Wu, Yufei Zhao, Shicheng Li, Yiqi Li, Deyu Lin and Xuefeng Jiang
Network 2025, 5(4), 48; https://doi.org/10.3390/network5040048 - 30 Oct 2025
Abstract
Orbital angular momentum (OAM) beams exhibit divergence during transmission, which constrains the capacity of communication system channels. To address this challenge, intelligent reflecting surfaces (IRSs), which can independently manipulate incident electromagnetic waves by adjusting their amplitude and phase, are employed to construct IRS-assisted OAM communication systems. By introducing additional information pathways, IRSs enhance diversity gain. We simulated two placement methods for an IRS: arbitrary placement and standard placement. In the case of arbitrary placement, the beam reflected by the IRS can be decomposed into different OAM modes, producing various reception powers corresponding to each OAM mode component. This improves the signal-to-noise ratio (SNR) at the receiver, thereby enhancing channel capacity. In particular, when the IRS is symmetrically and uniformly positioned at the center of the main transmission axis, its elements can be approximated as a uniform circular array (UCA). This configuration not only achieves optimal reception along the direction of the maximum gain of the OAM beam but also reduces the antenna radius required at the receiver to half or even less.
Full article
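For readers unfamiliar with why a higher received SNR raises capacity, the textbook relations below illustrate the mechanism; the symbols and the maximal-ratio-combining assumption are illustrative and are not taken from the paper.

```latex
% Textbook relations used for illustration; symbols are not the paper's notation.
% Capacity of a link with bandwidth B and signal-to-noise ratio \gamma:
C = B \log_2\!\left(1 + \gamma\right)
% If a direct path and an IRS-reflected path are combined at the receiver
% (e.g., maximal-ratio combining), the effective SNR adds, so capacity grows:
\gamma_{\mathrm{eff}} = \gamma_{\mathrm{direct}} + \gamma_{\mathrm{IRS}}
\quad\Rightarrow\quad
C_{\mathrm{IRS}} = B \log_2\!\left(1 + \gamma_{\mathrm{direct}} + \gamma_{\mathrm{IRS}}\right) \ge C
```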
Open Access Article
Adaptive Context-Aware VANET Routing Protocol for Intelligent Transportation Systems
by
Abdul Karim Kazi, Muhammad Umer Farooq, Raheela Asif and Saman Hina
Network 2025, 5(4), 47; https://doi.org/10.3390/network5040047 - 27 Oct 2025
Abstract
Vehicular Ad-Hoc Networks (VANETs) play a critical role in Intelligent Transportation Systems (ITS), enabling communication between vehicles and roadside infrastructure. This paper proposes an Adaptive Context-Aware VANET Routing (ACAVR) protocol designed to handle the challenges of high mobility, dynamic topology, and variable vehicle density in urban environments. The proposed protocol integrates context-aware routing, dynamic clustering, and geographic forwarding to enhance performance under diverse traffic conditions. Simulation results demonstrate that ACAVR achieves higher throughput, improved packet delivery ratio, lower end-to-end delay, and reduced routing overhead compared to existing routing schemes. The proposed ACAVR outperforms benchmark protocols such as DyTE, RGoV, and CAEL, improving PDR by 12–18%, reducing delay by 10–15%, and increasing throughput by 15–22%.
Full article
(This article belongs to the Special Issue Emerging Trends and Applications in Vehicular Ad Hoc Networks)
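As a hedged illustration of context-aware next-hop selection in the spirit of the abstract, the sketch below scores neighbors by geographic progress, speed similarity, and link quality; the metrics and weights are invented for the example and are not the ACAVR protocol's rules.

```python
# Illustrative sketch only: a context-aware next-hop score combining geographic
# progress toward the destination, relative speed, and link quality. The metrics
# and weights are assumptions, not the ACAVR protocol's actual rules.
import math
from dataclasses import dataclass

@dataclass
class Vehicle:
    x: float
    y: float
    speed: float
    link_quality: float   # 0..1, e.g. from recent beacon delivery ratio

def progress(curr, cand, dest):
    """How much closer the candidate gets us to the destination (normalized)."""
    d_curr = math.hypot(dest.x - curr.x, dest.y - curr.y)
    d_cand = math.hypot(dest.x - cand.x, dest.y - cand.y)
    return max(0.0, d_curr - d_cand) / max(d_curr, 1e-9)

def next_hop(curr, neighbors, dest, my_speed=15.0,
             w_geo=0.5, w_speed=0.2, w_link=0.3):
    best, best_score = None, -1.0
    for n in neighbors:
        speed_sim = 1.0 / (1.0 + abs(n.speed - my_speed))   # prefer similar speeds
        score = (w_geo * progress(curr, n, dest)
                 + w_speed * speed_sim + w_link * n.link_quality)
        if score > best_score:
            best, best_score = n, score
    return best

curr = Vehicle(0, 0, 15, 1.0)
dest = Vehicle(1000, 0, 0, 1.0)
neighbors = [Vehicle(120, 10, 14, 0.9), Vehicle(200, -30, 40, 0.4)]
print(next_hop(curr, neighbors, dest))
```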
Open Access Article
A Game-Theoretic Analysis of Cooperation Among Autonomous Systems in Network Federations
by
Rudolf Kovacs, Bogdan Iancu, Vasile Dadarlat and Adrian Peculea
Network 2025, 5(4), 46; https://doi.org/10.3390/network5040046 - 15 Oct 2025
Abstract
This paper investigates cooperative behavior among Autonomous Systems (ASs) within a federated network environment designed to support collaborative shared-technology deployment. It makes use of the concept of an AS federation, where independently managed systems adhere to a shared standard while maintaining implementation flexibility. Using a systematic game-theoretic framework, the study models various coalition structures—including full cooperation, partial coalitions, and defection—across several canonical cooperative games. The analysis evaluates the effects of different cooperation strategies and resource-sharing schemes on payoff distribution and coalition stability. Simulation results over short- and medium-to-long-term horizons demonstrate that cooperative coalition formation, especially with fair payoff allocation, consistently outperforms solitary strategies. The study also identifies key thresholds affecting partial coalition viability and explores the impact of defection on overall federation performance. By linking theoretical game models with practical deployment challenges in heterogeneous networked systems, this work offers valuable insights for designing mechanisms that promote effective cooperation in complex, resource-constrained environments.
Full article
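The abstract refers to fair payoff allocation within coalitions; one standard notion of fairness in cooperative games is the Shapley value, sketched below on a toy three-AS characteristic function. Both the choice of the Shapley value and the payoff numbers are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch only: the Shapley value is one standard "fair" allocation
# in cooperative games; the characteristic function v below is an invented toy
# example, not taken from the paper.
from itertools import permutations

players = ["AS1", "AS2", "AS3"]
# Toy characteristic function: value created by each coalition (frozenset -> payoff).
v = {
    frozenset(): 0,
    frozenset({"AS1"}): 1, frozenset({"AS2"}): 1, frozenset({"AS3"}): 2,
    frozenset({"AS1", "AS2"}): 4, frozenset({"AS1", "AS3"}): 5, frozenset({"AS2", "AS3"}): 5,
    frozenset({"AS1", "AS2", "AS3"}): 9,
}

def shapley(players, v):
    """Average each player's marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

print(shapley(players, v))  # allocations sum to v(grand coalition) = 9
```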
Open Access Article
Contrastive Geometric Cross-Entropy: A Unified Explicit-Margin Loss for Classification in Network Automation
by
Yifan Wu, Lei Xiao and Xia Du
Network 2025, 5(4), 45; https://doi.org/10.3390/network5040045 - 9 Oct 2025
Abstract
As network automation and self-organizing networks (SONs) rapidly evolve, edge devices increasingly demand lightweight, real-time, and high-precision classification algorithms to support critical tasks such as traffic identification, intrusion detection, and fault diagnosis. In recent years, cross-entropy (CE) loss has been widely adopted in deep learning classification tasks due to its computational efficiency and ease of optimization. However, traditional CE methods primarily focus on class separability without explicitly constraining intra-class compactness and inter-class boundaries in the feature space, thereby limiting their generalization performance on complex classification tasks. To address this issue, we propose a novel classification loss framework—Contrastive Geometric Cross-Entropy (CGCE). Without incurring additional computational or memory overhead, CGCE introduces learnable class representation vectors and constructs the loss function based on the dot-product similarity between features and these class representations, thus explicitly reinforcing geometric constraints in the feature space. This mechanism effectively enhances intra-class compactness and inter-class separability. Theoretical analysis further demonstrates that minimizing the CGCE loss naturally induces clear and measurable geometric class boundaries in the feature space, a desirable property absent from traditional CE methods. Furthermore, CGCE can seamlessly incorporate the prior knowledge of pretrained models, converging rapidly within only a few training epochs (for example, on the CIFAR-10 dataset with a ViT model, a single training epoch is sufficient to reach 99% of the final training accuracy). Experimental results on both text and image classification tasks show that CGCE achieves accuracy improvements of up to 2% over traditional CE methods, exhibiting stronger generalization capabilities under challenging scenarios such as class imbalance, few-shot learning, and noisy labels. These findings indicate that CGCE has significant potential as a superior alternative to traditional CE methods.
Full article
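As a minimal sketch of the mechanism the abstract describes, the snippet below computes cross-entropy over dot-product similarities between features and learnable class representation vectors; margins, scaling, and other details of the actual CGCE loss are not reproduced here.

```python
# Illustrative sketch only: cross-entropy over dot-product similarities between
# features and learnable class representation vectors, as the abstract describes
# at a high level. This is not the full CGCE formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DotProductClassHead(nn.Module):
    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        # Learnable class representation vectors, one per class.
        self.class_reps = nn.Parameter(torch.randn(num_classes, feature_dim) * 0.02)

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Logits are dot-product similarities between features and class vectors.
        logits = features @ self.class_reps.t()          # (batch, num_classes)
        return F.cross_entropy(logits, labels)

# Toy usage with random stand-in "backbone" features.
head = DotProductClassHead(feature_dim=128, num_classes=10)
features = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
loss = head(features, labels)
loss.backward()
print(float(loss))
```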
Open Access Article
Hybrid Spatio-Temporal CNN–LSTM/BiLSTM Models for Blocking Prediction in Elastic Optical Networks
by
Farzaneh Nourmohammadi, Jaume Comellas and Uzay Kaymak
Network 2025, 5(4), 44; https://doi.org/10.3390/network5040044 - 7 Oct 2025
Abstract
Elastic optical networks (EONs) must allocate resources dynamically to accommodate heterogeneous, high-bandwidth demands. However, the continuous setup and teardown of connections with different bit rates can fragment the spectrum and lead to blocking. Blocking predictors enable proactive defragmentation and resource reallocation within network controllers. In this paper, we propose two novel deep learning models (based on CNN–BiLSTM and CNN–LSTM) to predict blocking in EONs by combining spatial feature extraction from spectrum snapshots using 2D convolutional layers with temporal sequence modeling. This hybrid spatio-temporal design learns how local fragmentation patterns evolve over time, allowing it to detect impending blocking scenarios more accurately than conventional methods. We evaluate our models on the simulated NSFNET topology and compare them against multiple baselines, namely 1D CNN, 2D CNN, k-nearest neighbors (KNN), and support vector machines (SVMs). The results show that the proposed CNN–BiLSTM/LSTM models consistently achieve higher performance. The CNN–BiLSTM model achieved the highest accuracy in blocking prediction, while the CNN–LSTM model shows slightly lower accuracy but has much lower complexity and a faster learning time.
Full article
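A minimal sketch of the hybrid layout described above: 2D convolutions extract per-snapshot spatial features and an LSTM models their temporal evolution. Layer sizes, the input format, and the output head are assumptions, not the paper's architecture.

```python
# Illustrative sketch only: a CNN + LSTM layout that extracts spatial features from
# each spectrum snapshot with 2D convolutions and models their temporal evolution
# with an LSTM. Layer sizes and the input format are assumptions.
import torch
import torch.nn as nn

class CnnLstmBlockingPredictor(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-snapshot spatial features
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.lstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden_size,
                            batch_first=True)           # temporal model over snapshots
        self.head = nn.Linear(hidden_size, 1)           # probability of blocking

    def forward(self, x):                               # x: (batch, time, 1, links, slots)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)    # (batch*time, 32*4*4)
        seq = feats.view(b, t, -1)
        out, _ = self.lstm(seq)
        return torch.sigmoid(self.head(out[:, -1]))     # predict from the last time step

# Toy usage: 8 sequences of 10 spectrum snapshots (14 links x 64 slots each).
model = CnnLstmBlockingPredictor()
x = torch.rand(8, 10, 1, 14, 64)
print(model(x).shape)   # torch.Size([8, 1])
```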
Open Access Article
Optimized Hybrid Ensemble Intrusion Detection for VANET-Based Autonomous Vehicle Security
by
Ahmad Aloqaily, Emad E. Abdallah, Aladdin Baarah, Mohammad Alnabhan, Esra’a Alshdaifat and Hind Milhem
Network 2025, 5(4), 43; https://doi.org/10.3390/network5040043 - 3 Oct 2025
Abstract
Connected and Autonomous Vehicles are promising for advancing traffic safety and efficiency. However, the increased connectivity makes these vehicles vulnerable to a broad array of cyber threats. This paper presents a novel hybrid approach for intrusion detection in in-vehicle networks, specifically focusing on the Controller Area Network bus. Ensemble learning techniques are combined with sophisticated optimization techniques and dynamic adaptation mechanisms to develop a robust, accurate, and computationally efficient intrusion detection system. The proposed system is evaluated on real-world automotive network datasets that include various attack types (e.g., Denial of Service, fuzzy, and spoofing attacks). The proposed hybrid adaptive system achieves an unprecedented accuracy of 99.995% with a 0.00001% false positive rate, significantly outperforming traditional methods. In addition, the system is robust to novel attack patterns, tolerant of varying computational constraints, and suitable for real-time deployment on various automotive platforms. This research represents a significant advancement in automotive cybersecurity, providing the scalable and proactive defense mechanism needed to safely operate next-generation vehicles.
Full article
(This article belongs to the Special Issue Emerging Trends and Applications in Vehicular Ad Hoc Networks)
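In the spirit of the hybrid ensemble approach described above, the sketch below soft-votes a random forest and a gradient-boosting classifier on synthetic stand-in features; the models, features, and data are assumptions and do not reflect the authors' optimized pipeline or results.

```python
# Illustrative sketch only: a hybrid ensemble (random forest + gradient boosting)
# combined by soft voting. Features, models, and the synthetic data are
# assumptions; this is not the authors' pipeline or dataset.
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.datasets import make_classification

# Synthetic stand-in for CAN-bus features (e.g., message ID, timing, payload stats).
X, y = make_classification(n_samples=5000, n_features=12, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",   # average the predicted class probabilities
)
ensemble.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```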
Open Access Article
Bijective Network-to-Image Encoding for Interpretable CNN-Based Intrusion Detection System
by
Omesh A. Fernando, Joseph Spring and Hannan Xiao
Network 2025, 5(4), 42; https://doi.org/10.3390/network5040042 - 25 Sep 2025
Abstract
As 5G and beyond networks grow in heterogeneity, complexity, and scale, traditional Intrusion Detection Systems (IDS) struggle to maintain accurate and precise detection mechanisms. A promising alternative approach to this problem involves Deep Learning (DL) techniques; however, DL-based IDS suffer from issues relating to interpretation, performance variability, and high computational overheads. These issues limit their practical deployment in real-world applications. In this study, CiNeT is introduced as a novel DL-based IDS employing Convolutional Neural Networks (CNN) within a bijective encoding–decoding framework between network traffic features (such as IPv6, IPv4, timestamp, MAC addresses, and network data) and their RGB representations. This transformation enables the DL IDS to detect spatial patterns without sacrificing fidelity. The bijective pipeline enables complete traceability from detection decisions back to the corresponding network traffic features, marking a significant step towards solving the 'black-box' problem inherent in Deep Learning models and thus facilitating digital forensics. Finally, the DL IDS is evaluated on three datasets, UNSW NB-15, InSDN, and ToN_IoT, with analysis conducted on accuracy, GPU usage, memory utilisation, and training, testing, and validation time. To summarise, this study presents a new CNN-based IDS with an end-to-end pipeline between network traffic data and their RGB representation, which offers high performance and enhanced interpretability through a reversible transformation.
Full article
(This article belongs to the Special Issue AI-Based Innovations in 5G Communications and Beyond)
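The core idea of a bijective network-to-image encoding can be illustrated with a much simpler mapping than the paper's: the sketch below packs a few header fields into RGB pixels and decodes them back exactly. The field layout is an assumption chosen for the example.

```python
# Illustrative sketch only: a bijective (lossless, invertible) mapping between a few
# packet header fields and RGB pixel values, showing the idea of encoding traffic as
# an image that can be decoded back exactly. The field layout is an assumption and
# far simpler than the encoding used in the paper.
import ipaddress

def encode_flow(src_ip: str, dst_ip: str, src_port: int, dst_port: int):
    """Pack two IPv4 addresses and two ports into 12 bytes = 4 RGB pixels."""
    data = (int(ipaddress.IPv4Address(src_ip)).to_bytes(4, "big")
            + int(ipaddress.IPv4Address(dst_ip)).to_bytes(4, "big")
            + src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big"))
    return [tuple(data[i:i + 3]) for i in range(0, 12, 3)]   # list of (R, G, B)

def decode_flow(pixels):
    """Exact inverse of encode_flow, recovering the original header fields."""
    data = bytes(b for px in pixels for b in px)
    src_ip = str(ipaddress.IPv4Address(int.from_bytes(data[0:4], "big")))
    dst_ip = str(ipaddress.IPv4Address(int.from_bytes(data[4:8], "big")))
    return src_ip, dst_ip, int.from_bytes(data[8:10], "big"), int.from_bytes(data[10:12], "big")

pixels = encode_flow("192.168.1.10", "10.0.0.5", 443, 51324)
assert decode_flow(pixels) == ("192.168.1.10", "10.0.0.5", 443, 51324)
print(pixels)
```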
Open Access Article
Unified Distributed Machine Learning for 6G Intelligent Transportation Systems: A Hierarchical Approach for Terrestrial and Non-Terrestrial Networks
by
David Naseh, Arash Bozorgchenani, Swapnil Sadashiv Shinde and Daniele Tarchi
Network 2025, 5(3), 41; https://doi.org/10.3390/network5030041 - 17 Sep 2025
Abstract
The successful integration of Terrestrial and Non-Terrestrial Networks (T/NTNs) in 6G is poised to revolutionize demanding domains like Earth Observation (EO) and Intelligent Transportation Systems (ITSs). Still, it requires Distributed Machine Learning (DML) frameworks that are scalable, private, and efficient. Existing methods, such as Federated Learning (FL) and Split Learning (SL), face critical limitations in terms of client computation burden and latency. To address these challenges, this paper proposes a novel hierarchical DML paradigm. We first introduce Federated Split Transfer Learning (FSTL), a foundational framework that synergizes FL, SL, and Transfer Learning (TL) to enable efficient, privacy-preserving learning within a single client group. We then extend this concept to the Generalized FSTL (GFSTL) framework, a scalable, multi-group architecture designed for complex and large-scale networks. GFSTL orchestrates parallel training across multiple client groups managed by intermediate servers (RSUs/HAPs) and aggregates them at a higher-level central server, significantly enhancing performance. We apply this framework to a unified T/NTN architecture that seamlessly integrates vehicular, aerial, and satellite assets, enabling advanced applications in 6G ITS and EO. Comprehensive simulations using the YOLOv5 model on the Cityscapes dataset validate our approach. The results show that GFSTL not only achieves faster convergence and higher detection accuracy but also substantially reduces communication overhead compared to baseline FL, and critically, both detection accuracy and end-to-end latency remain essentially invariant as the number of participating users grows, making GFSTL especially well suited for large-scale heterogeneous 6G ITS deployments. We also provide a formal latency decomposition and analysis that explains this scaling behavior. This work establishes GFSTL as a robust and practical solution for enabling the intelligent, connected, and resilient ecosystems required for next-generation transportation and environmental monitoring.
Full article
(This article belongs to the Special Issue Satellite Networks for Communication, Positioning, Navigation and Timing)
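As background for the FSTL/GFSTL framework, the sketch below shows the basic split-learning step it builds on, with the early layers on the client and the remaining layers on a server; the model, cut point, and single-client setup are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch only: the split-learning building block, shown for a single
# client/server pair. The client runs the early layers, the server finishes the
# forward/backward pass. Model shapes and the cut point are arbitrary choices.
import torch
import torch.nn as nn

client_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # runs on the vehicle/client
server_part = nn.Sequential(nn.Linear(64, 10))               # runs on the RSU/HAP server
opt = torch.optim.SGD(list(client_part.parameters()) + list(server_part.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))      # toy batch held by the client

opt.zero_grad()
smashed = client_part(x)                 # client forward pass up to the cut layer
# In a real deployment only `smashed` (not the raw data x) crosses the network link,
# and the gradient at the cut layer is sent back to the client.
logits = server_part(smashed)            # server completes the forward pass
loss = loss_fn(logits, y)
loss.backward()                          # gradients flow back through the cut layer
opt.step()
print("split-learning step done, loss =", float(loss))
```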
Open Access Article
Orchestrating and Choreographing Distributed Self-Explaining Ambient Applications
by
Börge Kordts, Lea C. Brandl and Andreas Schrader
Network 2025, 5(3), 40; https://doi.org/10.3390/network5030040 - 17 Sep 2025
Abstract
The Internet of Things allows us to implement concepts such as Education 4.0 by connecting sensors, actuators, and applications. In the case of direct and explicit connections, we refer to ensembles that can consist of devices and applications. When realizing spatially distributed applications, there are scenarios in which these ensembles must coordinate with each other. In software development, this process is referred to as orchestration or choreography. This paper describes a software framework that provides orchestration or choreography for self-explaining ensembles using predefined rules based on a self-description of all involved components. The framework is capable of generating user instructions or explanations for smart environments that cover interaction details. The approach also forms a basis for providing information about event-based coordination. In a case study, we investigated the technical perception of a coordinated spatial learning game application (an ambient serious game). Most participants perceived the application as cohesive and found it responsive. These results suggest that our framework provides a solid foundation for implementing coordinated applications within smart environments that appear as unified applications.
Full article
(This article belongs to the Special Issue Advances in Network Automation and Self-Organizing Networks: Architecture, Algorithms, and Applications)
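A loose illustration of rule-based coordination driven by component self-descriptions is sketched below; the description format, rule, and component names are invented for the example and do not reproduce the paper's framework.

```python
# Illustrative sketch only: a tiny rule engine that matches components by their
# self-descriptions (declared capabilities) and wires sensor events to actuator
# actions. The description format and rule are invented for this example.

# Self-descriptions: each component declares what it provides or consumes.
components = [
    {"id": "door_sensor",  "provides": ["door.opened"]},
    {"id": "quiz_display", "consumes": ["door.opened"], "action": "show_welcome_question"},
    {"id": "room_light",   "consumes": ["door.opened"], "action": "turn_on"},
]

def build_ensemble(components):
    """Predefined rule: connect every provider of an event type to all its consumers."""
    wiring = {}
    for provider in components:
        for event in provider.get("provides", []):
            consumers = [c for c in components if event in c.get("consumes", [])]
            wiring[(provider["id"], event)] = consumers
    return wiring

def dispatch(wiring, source_id, event):
    for consumer in wiring.get((source_id, event), []):
        print(f"{event} from {source_id} -> {consumer['id']}.{consumer['action']}()")

wiring = build_ensemble(components)
dispatch(wiring, "door_sensor", "door.opened")
# door.opened from door_sensor -> quiz_display.show_welcome_question()
# door.opened from door_sensor -> room_light.turn_on()
```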
Open Access Article
Integrating Reinforcement Learning and LLM with Self-Optimization Network System
by
Xing Xu, Jianbin Zhao, Yu Zhang and Rongpeng Li
Network 2025, 5(3), 39; https://doi.org/10.3390/network5030039 - 16 Sep 2025
Abstract
The rapid expansion of communication networks and increasingly complex service demands present significant challenges to the intelligent management of network resources. To address these challenges, we propose a network self-optimization framework integrating the predictive capabilities of a Large Language Model (LLM) with the decision-making capabilities of multi-agent Reinforcement Learning (RL). Specifically, historical network traffic data are converted into structured inputs to forecast future traffic patterns using a GPT-2-based prediction module. Concurrently, a Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm leverages real-time sensor data—including link delay and packet loss rates collected by embedded network sensors—to dynamically optimize bandwidth allocation. This sensor-driven mechanism enables the system to perform real-time optimization of bandwidth allocation, ensuring accurate monitoring and proactive resource scheduling. We evaluate our framework in a heterogeneous network simulated using Mininet under diverse traffic scenarios. Experimental results show that the proposed method significantly reduces network latency and packet loss and improves robustness and resource utilization, highlighting the effectiveness of integrating sensor-driven RL optimization with predictive insights from LLMs.
Full article
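Reduced to its simplest form, the control loop described above takes predicted per-link traffic and observed loss and reallocates bandwidth accordingly; the proportional rule and all numbers in the sketch below are assumptions standing in for the GPT-2 forecaster and the MADDPG policy.

```python
# Illustrative sketch only: a proportional bandwidth reallocation step driven by
# predicted per-link traffic (stand-in for the LLM forecast) and observed loss
# (the "sensor" feedback). The rule and all numbers are assumptions.

TOTAL_BANDWIDTH = 100.0   # Mbps to divide among links

def allocate(predicted_traffic, observed_loss, loss_weight=0.5):
    """Give each link a share proportional to predicted demand, boosted when it is lossy."""
    demand = {
        link: predicted_traffic[link] * (1.0 + loss_weight * observed_loss[link])
        for link in predicted_traffic
    }
    total = sum(demand.values())
    return {link: TOTAL_BANDWIDTH * d / total for link, d in demand.items()}

predicted = {"link_a": 40.0, "link_b": 25.0, "link_c": 10.0}   # e.g. from a traffic forecaster
loss_rate = {"link_a": 0.01, "link_b": 0.10, "link_c": 0.00}   # e.g. from embedded sensors
for link, bw in allocate(predicted, loss_rate).items():
    print(f"{link}: {bw:.1f} Mbps")
```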
Open Access Review
From Counters to Telemetry: A Survey of Programmable Network-Wide Monitoring
by
Nofel Yaseen
Network 2025, 5(3), 38; https://doi.org/10.3390/network5030038 - 16 Sep 2025
Abstract
Network monitoring is becoming increasingly challenging as networks grow in scale, speed, and complexity. The evolution of monitoring approaches reflects a shift from device-centric, localized techniques toward network-wide observability enabled by modern networking paradigms. Early methods like SNMP polling and NetFlow provided basic insights but struggled with real-time visibility in large, dynamic environments. The emergence of Software-Defined Networking (SDN) introduced centralized control and a global view of network state, opening the door to more coordinated and programmable measurement strategies. More recently, programmable data planes (e.g., P4-based switches) and in-band telemetry frameworks have allowed fine-grained, line-rate data collection directly from traffic, reducing overhead and latency compared to traditional polling. These developments mark a move away from single-point or per-flow analysis toward holistic monitoring woven throughout the network fabric. In this survey, we systematically review the state of the art in network-wide monitoring. We define key concepts (topologies, flows, telemetry, observability) and trace the progression of monitoring architectures from traditional networks to SDN to fully programmable networks. We introduce a taxonomy spanning local device measures, path-level techniques, global network-wide methods, and hybrid approaches. Finally, we summarize open research challenges and future directions, highlighting that modern networks demand monitoring frameworks that are not only scalable and real-time but also tightly integrated with network control and automation.
Full article
Open Access Review
Hybrid NFC-VLC Systems: Integration Strategies, Applications, and Future Directions
by
Vindula L. Jayaweera, Chamodi Peiris, Dhanushika Darshani, Sampath Edirisinghe, Nishan Dharmaweera and Uditha Wijewardhana
Network 2025, 5(3), 37; https://doi.org/10.3390/network5030037 - 15 Sep 2025
Abstract
The hybridization of Near-Field Communication (NFC) with Visible Light Communication (VLC) presents a promising framework for robust, secure, and efficient wireless transmission. By combining the proximity-based authentication of NFC with the high-speed, interference-resistant data transfer of VLC, this approach mitigates the inherent limitations of each technology, such as the restricted range of NFC and the authentication challenges of VLC. The resulting hybrid system leverages NFC for secure handshaking and VLC for high-throughput communication, enabling scalable, real-time applications across diverse domains. This study examines integration strategies, technical enablers, and potential use cases, including smart street poles for secure citizen engagement, patient authentication and record access systems in healthcare, personalized retail advertising, and automated attendance tracking in education. Additionally, this paper addresses key challenges in hybridization and explores future research directions, such as the integration of Artificial Intelligence and 6G networks.
Full article
(This article belongs to the Special Issue Advances in Wireless Communications and Networks)
Open Access Article
Efficient, Scalable, and Secure Network Monitoring Platform: Self-Contained Solution for Future SMEs
by
Alfred Stephen Tonge, Babu Kaji Baniya and Deepak GC
Network 2025, 5(3), 36; https://doi.org/10.3390/network5030036 - 10 Sep 2025
Abstract
In this paper, we introduce a novel, self-hosted syslog collection platform designed specifically to address the challenges that small and medium enterprises (SMEs) face in implementing comprehensive syslog monitoring solutions. Our analysis begins with an assessment of current network observability practices, evaluating enterprise solutions, on-premises systems, and Software as a Service (SaaS) offerings to identify features crucial for SME environments. The proposed platform represents an advancement in the field through the incorporation of modern practices, including GitOps and continuous integration and continuous delivery/deployment (CI/CD), and its implementation on a self-managed Kubernetes platform, an approach not commonly explored in SME-focused solutions. We explore its scalability by leveraging dynamic templates, which allow the number and type of nodes to be selected when deploying networks of various sizes. This architecture ensures organisations can deploy a pre-designed, scalable network monitoring solution without extensive external support. The resilience of the proposed platform is assessed through empirical evidence of its scaling performance and reliability under various failure scenarios, including node failure and high network throughput stress.
Full article
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management, 2nd Edition)
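As a minimal illustration of the platform's core function, the sketch below is a self-hosted syslog collector that receives UDP syslog messages and appends them to a file; the port, file path, and lack of parsing are simplifying assumptions, far short of the Kubernetes-based platform the paper describes.

```python
# Illustrative sketch only: a minimal self-hosted syslog collector that listens for
# UDP syslog messages and appends them to a local file. A production deployment
# would add parsing, storage, dashboards, and orchestration; the port and file path
# here are arbitrary choices for the example.
import socketserver
from datetime import datetime, timezone

LISTEN_ADDR = ("0.0.0.0", 5140)   # unprivileged alternative to the standard UDP/514
LOG_FILE = "collected_syslog.log"

class SyslogUDPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request[0].decode("utf-8", errors="replace").strip()
        line = f"{datetime.now(timezone.utc).isoformat()} {self.client_address[0]} {data}\n"
        with open(LOG_FILE, "a") as f:
            f.write(line)                 # append one line per received message

if __name__ == "__main__":
    with socketserver.UDPServer(LISTEN_ADDR, SyslogUDPHandler) as server:
        print(f"syslog collector listening on {LISTEN_ADDR[0]}:{LISTEN_ADDR[1]}")
        server.serve_forever()
```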
Open Access Article
When Robust Isn’t Resilient: Quantifying Budget-Driven Trade-Offs in Connectivity Cascades with Concurrent Self-Healing
by
Waseem Al Aqqad
Network 2025, 5(3), 35; https://doi.org/10.3390/network5030035 - 3 Sep 2025
Abstract
Cascading link failures continue to imperil power grids, transport networks, and cyber-physical systems, yet the relationship between a network's robustness at the moment of attack and its subsequent resiliency remains poorly understood. We introduce a dynamic framework in which connectivity-based cascades and distributed self-healing act concurrently within each time-step. Failure is triggered when a node's active-neighbor ratio falls below a threshold; healing activates once the global fraction of inactive nodes exceeds a trigger level and is limited by a healing budget. Two real data sets, a 332-node U.S. airport graph and a 1133-node university e-mail graph, serve as testbeds. For each graph we sweep the parameter quartet and record (i) immediate robustness, (ii) 90% recovery time T90, and (iii) cumulative average damage. Results show that targeted hub removal is up to three times more damaging than random failure, but that prompt healing with an adequate budget can halve T90. Scatter-plot analysis reveals a non-monotonic correlation: highly robust states recover quickly only when the trigger and budget are favorable, whereas less robust states can rebound rapidly under ample budgets. A multiplicative fit captures these interactions. The findings demonstrate that structural hardening alone cannot guarantee fast recovery; resource-aware, early-triggered self-healing is the decisive factor. The proposed model and data-driven insights provide a quantitative basis for designing infrastructure that is both robust to failure and resilient in restoration.
Full article
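The concurrent cascade-and-healing dynamic described above can be reproduced in miniature: the sketch below fails nodes whose active-neighbor ratio drops below a threshold and heals a budget-limited number of nodes once a global trigger is crossed. The graph, thresholds, and budget are toy values, not the paper's model or parameters.

```python
# Illustrative sketch only: a threshold cascade where a node fails when too few of
# its neighbors are active, with a per-step healing budget that reactivates failed
# nodes once enough of the network is down. Graph, thresholds, and budget are toy values.
import random

random.seed(1)
N, P_EDGE = 200, 0.04
THRESHOLD, TRIGGER, BUDGET = 0.5, 0.05, 5   # active-neighbor ratio, healing trigger, heals per step

# Random undirected graph as adjacency lists.
adj = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < P_EDGE:
            adj[i].add(j); adj[j].add(i)

active = {i: True for i in range(N)}
for i in random.sample(range(N), 10):       # initial attack: knock out 10 nodes
    active[i] = False

for step in range(50):
    # Cascade: a node fails if too few of its neighbors remain active.
    failing = [i for i in range(N) if active[i] and adj[i]
               and sum(active[j] for j in adj[i]) / len(adj[i]) < THRESHOLD]
    for i in failing:
        active[i] = False
    # Healing: once enough of the network is inactive, repair up to BUDGET nodes.
    inactive = [i for i in range(N) if not active[i]]
    if len(inactive) / N > TRIGGER:
        for i in random.sample(inactive, min(BUDGET, len(inactive))):
            active[i] = True
    print(f"step {step:2d}: {sum(active.values())}/{N} active")
    if not failing and len(inactive) / N <= TRIGGER:
        break
```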
Open Access Review
Unlocking Blockchain’s Potential in Supply Chain Management: A Review of Challenges, Applications, and Emerging Solutions
by
Mahafuja Khatun and Tasneem Darwish
Network 2025, 5(3), 34; https://doi.org/10.3390/network5030034 - 26 Aug 2025
Abstract
Blockchain's decentralized, immutable, and transparent nature offers a promising solution to enhance security, trust, and efficiency in supply chains. While integrating blockchain into the supply chain management (SCM) process poses significant challenges, including technical, operational, and regulatory issues, this review analyzes blockchain's potential in SCM with a focus on the key challenges encountered when applying blockchain in this domain, such as scalability limitations, interoperability barriers, high implementation costs, and privacy and data security concerns. The key contributions are as follows: (1) applications of blockchain across major SCM domains, including pharmaceuticals, healthcare, logistics, and agri-food; (2) SCM functions that benefit from blockchain integration; (3) how blockchain's properties are reshaping modern SCM processes; (4) the challenges faced by businesses while integrating blockchain into supply chains; (5) a critical evaluation of existing solutions and their limitations, categorized into three main domains; (6) unresolved issues highlighted in dedicated "Critical Issues to Consider" sections; (7) synergies with big data, IoT, and AI for secure and intelligent supply chains, along with challenges of emerging solutions; and (8) unexplored domains for blockchain in SCM. By synthesizing current research and industry insights, this study offers practical guidance and outlines future directions for building scalable and resilient global trade networks.
Full article
Open Access Article
A Multiple-Input Multiple-Output Transmission System Employing Orbital Angular Momentum Multiplexing for Wireless Backhaul Applications
by
Afkar Mohamed Ismail, Yufei Zhao and Gaohua Ju
Network 2025, 5(3), 33; https://doi.org/10.3390/network5030033 - 25 Aug 2025
Abstract
This paper presents a long-range experimental demonstration of multi-mode multiple-input multiple-output (MIMO) transmission using orbital angular momentum (OAM) waves for Line-of-Sight (LoS) wireless backhaul applications. A 4 × 4 MIMO system employing distinct OAM modes is implemented and shown to support multiplexed data transmission over a single frequency band without inter-channel interference. In contrast, a 2 × 2 plane wave MIMO configuration fails to achieve reliable demodulation due to mutual interference, underscoring the spatial limitations of conventional waveforms. The results confirm that OAM provides spatial orthogonality suitable for high-capacity, frequency-efficient wireless backhaul links. Experimental validation is conducted over a 100 m outdoor path, demonstrating the feasibility of OAM-based MIMO in practical wireless backhaul scenarios.
Full article
(This article belongs to the Special Issue Advances in Wireless Communications and Networks)
Open Access Review
A Comprehensive Review of Satellite Orbital Placement and Coverage Optimization for Low Earth Orbit Satellite Networks: Challenges and Solutions
by
Adel A. Ahmed
Network 2025, 5(3), 32; https://doi.org/10.3390/network5030032 - 20 Aug 2025
Abstract
Nowadays, internet connectivity suffers from instability and slowness due to attacks on optical fiber cables across the seas and oceans. The optimal solution to this problem is the Low Earth Orbit (LEO) satellite network, which can resolve the problems of internet connectivity and reachability and has the power to bring real-time, reliable, low-latency, high-bandwidth, cost-effective internet access to many urban and rural areas in any region of the Earth. However, satellite orbital placement (SOP) and navigation must be carefully designed to reduce signal impairments. The challenges of orbital satellite placement for LEO include constellation development, satellite parameter optimization, bandwidth optimization, consideration of signal impairment, and coverage optimization. This paper presents a comprehensive review of SOP and coverage optimization, examines prevalent issues affecting LEO internet connectivity, evaluates existing solutions, and proposes novel solutions to address these challenges. Furthermore, it recommends a machine learning solution for coverage optimization and SOP that can be used to efficiently enhance internet reliability and reachability for LEO satellite networks. This survey opens the way for developing an optimal solution for global internet connectivity and reachability.
Full article
(This article belongs to the Special Issue Satellite Networks for Communication, Positioning, Navigation and Timing)
Open Access Correction
Correction: Saxena, U.R.; Kadel, R. RACHEIM: Reinforced Reliable Computing in Cloud by Ensuring Restricted Access Control. Network 2025, 5, 19
by
Urvashi Rahul Saxena and Rajan Kadel
Network 2025, 5(3), 31; https://doi.org/10.3390/network5030031 - 19 Aug 2025
Abstract
In the original publication [...]
Full article
Topics
Topic in Electronics, Future Internet, Technologies, Telecom, Network, Microwave, Information, Signals
Advanced Propagation Channel Estimation Techniques for Sixth-Generation (6G) Wireless Communications
Topic Editors: Han Wang, Fangqing Wen, Xianpeng Wang
Deadline: 31 May 2026
Topic in Computers, Electronics, Future Internet, IoT, Network, Sensors, JSAN, Technologies, BDCC
Challenges and Future Trends of Wireless Networks
Topic Editors: Stefano Scanzio, Ramez Daoud, Jetmir Haxhibeqiri, Pedro Santos
Deadline: 30 September 2026
Special Issues
Special Issue in Network
Advanced Technologies in Network and Service Management, 2nd Edition
Guest Editors: Hakim Mellah, Filippo Malandra
Deadline: 20 November 2025
Special Issue in Network
Peer-to-Peer Networking and Applications
Guest Editors: Reshmi Mitra, Indranil Roy
Deadline: 30 November 2025
Special Issue in Network
Advances in Wireless Communications and Networking for Vertical Applications
Guest Editors: Lei Sun, Bo Fan
Deadline: 15 December 2025
Special Issue in Network
Advances in Network Automation and Self-Organizing Networks: Architecture, Algorithms, and Applications
Guest Editors: Dapeng Dong, Jun (John) Huang
Deadline: 31 December 2025