Journal Description
Network
Network is an international, peer-reviewed, open access journal on science and technology of networks, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within ESCI (Web of Science), Scopus, EBSCO, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q1 (Engineering (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 23.9 days after submission; acceptance to publication takes 5.7 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Network is a companion journal of Electronics.
- Journal Clusters of Network and Communications Technology: Future Internet, IoT, Telecom, Journal of Sensor and Actuator Networks, Network, Signals.
Impact Factor: 3.1 (2024)
5-Year Impact Factor: 2.9 (2024)
Latest Articles
LEPA: Low-Overhead and Efficient Privacy-Preserving Authentication Scheme in VANETs
Network 2026, 6(2), 29; https://doi.org/10.3390/network6020029 - 9 May 2026
Abstract
The dynamic nature of Vehicular Ad-hoc Networks (VANETs) necessitates robust authentication mechanisms to prevent adversaries from compromising vehicle privacy. To address privacy concerns, many existing approaches employ pseudonyms in place of real vehicle identities. However, the use of a single pseudonym is insufficient, as vehicle trajectories can still enable tracking. Consequently, vehicles must frequently change pseudonyms, typically selecting them from a pre-assigned pool, to ensure unlinkability and preserve privacy. In most existing schemes, a central authority issues certificates corresponding to each pseudonym, which vehicles present for authentication. While effective, this approach incurs significant computation, storage, and communication overhead, particularly in managing certificate revocation lists (CRLs), since each vehicle may possess a large number of pseudonyms. To address these challenges, we propose a Low-overhead and Efficient Privacy-preserving Authentication (LEPA) scheme for VANETs, leveraging Merkle Hash Trees (MHTs) and Cuckoo Filters (CFs) to efficiently manage pseudonym sets and revocation. We analyze the security of the proposed scheme against various attacks and demonstrate, through performance evaluation, that LEPA significantly reduces authentication and revocation overhead while maintaining strong privacy and security guarantees.
Full article
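For readers curious about the primitive behind the abstract above, here is a minimal Python sketch of committing to a pseudonym set with a Merkle hash tree, the building block LEPA pairs with Cuckoo Filters: a vehicle can prove one pseudonym belongs to an authorized set with a logarithmic-size proof. The leaf encoding, padding rule, and proof format below are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative Merkle-hash-tree commitment to a pseudonym set.
# Not the paper's construction; encoding and proof format are assumed.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash leaves, then reduce each level pairwise until one root remains."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root for one leaf."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # sibling + side flag
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

pseudonyms = [f"PSN-{i:04d}".encode() for i in range(8)]
root = merkle_root(pseudonyms)
proof = merkle_proof(pseudonyms, 5)
assert verify(pseudonyms[5], proof, root)   # authenticates one pseudonym in O(log n)
```

Revocation checks in the paper additionally use a Cuckoo Filter, a compact set-membership structure that, unlike a Bloom filter, supports deletion.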
Open Access Review
Fiber-Optic Gyroscopes in Modern Navigation Systems: A Comprehensive Review
by Nurzhigit Smailov, Yerlan Tashtay, Pawel Komada, Yerzhan Nussupov, Kanat Zhunussov, Askhat Batyrgaliyev, Daulet Naubetov, Aziskhan Amir, Beibarys Sekenov and Darkhan Yerezhep
Network 2026, 6(2), 28; https://doi.org/10.3390/network6020028 - 29 Apr 2026
Abstract
This paper provides a comprehensive overview of progress in fiber-optic gyroscope technology, covering 260 key studies from the last ten years. A critical comparative analysis of the fiber-optic gyroscope against alternative inertial sensors (Micro-Electro-Mechanical Systems, Hemispherical Resonator Gyroscope, Ring Laser Gyroscope) has been carried out, confirming the unique advantages of the fiber-optic gyroscope for autonomous navigation. Fundamental limitations on accuracy are considered in detail: temperature drifts, polarization noise, and Rayleigh backscattering. Modern hardware methods for suppressing these errors, including the use of photonic crystal and hollow-core (air-core) fibers, are also considered in this work. The review centers on the technological paradigm shift from bulky discrete circuits to hybrid integrated photonics (Indium Phosphide, Silicon Nitride, Lithium Niobate) and hybrid architectures that reduce weight and size. The role of artificial intelligence methods (Deep Learning, Long Short-Term Memory) in nonlinear drift compensation and calibration is discussed. Finally, the Brillouin effect and optomechanics are outlined as promising directions for a new generation of navigation systems able to operate in the absence of Global Navigation Satellite System signals.
Full article
Open Access Article
Performance Analysis of Discrete Hartley Transform-Based Orthogonal Frequency Division Multiplexing for Visible Light Communications
by Ming Che
Network 2026, 6(2), 27; https://doi.org/10.3390/network6020027 - 21 Apr 2026
Abstract
A discrete Hartley transform (DHT)-based orthogonal frequency division multiplexing (OFDM) scheme is investigated for intensity modulation/direct detection (IM/DD) visible light communication (VLC) systems, where transmitted signals are required to be real-valued and non-negative. To address this constraint, a practical unipolar transmission framework with corresponding bipolar reconstruction is developed. By exploiting the real-valued and self-inverse properties of the DHT, the proposed scheme removes the need for Hermitian symmetry and enables full utilization of available subcarriers. Under equal-bandwidth conditions, this results in an approximately 50% reduction in computational complexity compared with conventional DCO-OFDM and ACO-OFDM schemes. Theoretical analysis and numerical results further show that the proposed approach achieves comparable bit error rate (BER) performance while exhibiting improved spectral confinement, as reflected by reduced out-of-band sidelobes under identical filtering conditions. In addition, it maintains spectral efficiency equivalent to DCO-OFDM under the same bandwidth constraint. These advantages are achieved at the cost of restricting subcarrier modulation to real-valued constellations, which may reduce flexibility in frequency-selective channels. Overall, these findings support DHT-OFDM as a low-complexity, spectrally confined multicarrier waveform for IM/DD VLC systems, particularly in scenarios where efficient spectrum utilization and reduced adjacent-channel interference are required.
Full article
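The real-valued, self-inverse property the abstract exploits can be seen in a few lines. The sketch below computes the DHT via the FFT identity H = Re(X) - Im(X); the block size and PAM alphabet are arbitrary choices, and the paper's full unipolar transmission chain (biasing/clipping, channel) is not modeled.

```python
# Sketch of the discrete Hartley transform and its self-inverse property,
# the ingredient that removes the Hermitian-symmetry requirement in IM/DD
# OFDM. Toy illustration only; parameters are assumptions.
import numpy as np

def dht(x):
    """DHT via the FFT: H_k = Re(X_k) - Im(X_k); real in, real out."""
    X = np.fft.fft(x)
    return X.real - X.imag

rng = np.random.default_rng(0)
symbols = rng.choice([-3, -1, 1, 3], size=64).astype(float)   # real PAM symbols

time_signal = dht(symbols)                    # real-valued, all 64 subcarriers used
recovered = dht(time_signal) / len(symbols)   # DHT is self-inverse up to 1/N

assert np.allclose(recovered, symbols)
# A unipolar IM/DD waveform still needs DC biasing or clipping on top of this.
```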
Open Access Article
Ray Tracing Simulators for 5G New Radio Systems: Comparative Analysis Through Urban Measurements at 27 GHz
by Francesca Lodato, Pierpaolo Salvo, Marcello Folli, Simona Valbonesi, Andrea Garzia, Giuseppe Ruello, Riccardo Suman, Massimo Perobelli, Rita Massa and Antonio Iodice
Network 2026, 6(2), 26; https://doi.org/10.3390/network6020026 - 19 Apr 2026
Abstract
The use of millimeter-wave spectrum in fifth-generation (5G) systems is increasing the need for accurate prediction of received power and coverage in real deployment scenarios. In this context, ray tracing (RT) is a promising approach for site-specific analysis, although its reliability depends on how accurately different tools reproduce measurements in complex urban environments. This work presents a comparative assessment at 27 GHz of three RT tools: in-house Exact tool based on Vertical Plane Launching (VPL), Matlab 5G and open-source Sionna RT based on Shooting and Bouncing Rays (SBR). The comparison relies on a large outdoor walk-test campaign, including about 14,725 measurement points collected in a real urban area around a 27 GHz mMIMO base station, using real operator-provided antenna radiation patterns. Measured and simulated power levels are compared using statistical metrics, including Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and a planning-oriented coverage-rate metric. The results show a reasonable agreement between simulations and measurements, with RMSE and MAE values around 10–12 dB, highlighting tool-specific behaviors related to boundary effects, interaction modeling, and high-power overestimation. This work confirms that RT is a flexible support for 5G preliminary network design, reducing the need for extensive drive tests.
Full article
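The comparison metrics named above are standard and easy to state. This sketch computes MAE, RMSE, and a simple coverage-rate metric on received power in dBm; the synthetic data and the -100 dBm planning threshold are assumptions, chosen only to mirror the reported ~10-12 dB error scale.

```python
# Hedged sketch of the statistical metrics used to compare measured and
# simulated received power. Data and threshold are invented examples.
import numpy as np

def mae(measured, simulated):
    return np.mean(np.abs(measured - simulated))

def rmse(measured, simulated):
    return np.sqrt(np.mean((measured - simulated) ** 2))

def coverage_rate(power_dbm, threshold_dbm=-100.0):
    """Fraction of points above a planning threshold (assumed value)."""
    return np.mean(power_dbm >= threshold_dbm)

rng = np.random.default_rng(1)
measured = rng.normal(-85.0, 8.0, size=14725)              # walk-test points (dBm)
simulated = measured + rng.normal(0.0, 11.0, size=14725)   # ~10-12 dB error scale

print(f"MAE  = {mae(measured, simulated):.1f} dB")
print(f"RMSE = {rmse(measured, simulated):.1f} dB")
print(f"coverage (meas/sim): {coverage_rate(measured):.2%} / {coverage_rate(simulated):.2%}")
```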
Open Access Article
Enhancing Smart Grid Cyber Resilience Against FDI Attacks Using Multi-Agent Recurrent DDPG
by Tahira Mahboob, Mingwei Li, Awais Aziz Shah and Dimitrios Pezaros
Network 2026, 6(2), 25; https://doi.org/10.3390/network6020025 - 17 Apr 2026
Abstract
Digital substations (DSs) play a critical role in modern Energy and Power Electrical Systems (EPESs), enabling intelligent control, monitoring, and automation. With increased reliance on communication and sensing technologies, DSs are vulnerable to cyberattacks such as False Data Injection (FDI). An adversary may falsify transformer temperature readings, misleading protection mechanisms and resulting in incorrect disconnection actions. These false disconnections may disrupt power delivery, cause economic losses, and reduce equipment lifespan. To address these challenges, we propose a reinforcement learning-based approach for cyber protection of smart grids against false temperature data injection attacks. Specifically, this work designs a Long Short-Term Memory Deep Deterministic Policy Gradient (LSTM-DDPG) deep reinforcement learning algorithm that learns to detect normal patterns and responds to suspicious thermal patterns by dynamically adjusting disconnection decisions. The agents process sequential state features to differentiate between legitimate overload conditions and sudden anomalies caused by FDI attacks. We implement the proposed approach on the IEEE 30-bus distribution network using the Pandapower simulator. The experimental results indicate that the LSTM-DDPG controller outperforms conventional DDPG and DQN baselines, achieving a recall of 0.897, F1 of 0.945, precision of 1.00 and accuracy of 0.981 with a confidence interval of 95%. In addition, grid stability reaches up to 0.9815, 1.0, 1.0, 0.9926 with respect to the voltage stability score, transformer stability value, disconnection stability, and stability index, respectively. The proposed method led to fewer false disconnections, providing improved robustness against sensor manipulations.
Full article
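As a rough illustration of the controller's shape (not the authors' implementation), the sketch below builds a toy LSTM actor in PyTorch that maps a window of transformer-temperature readings to a continuous disconnection decision. All dimensions, features, and thresholds here are invented.

```python
# Toy LSTM actor of the kind an LSTM-DDPG controller uses: a sequence of
# sensor readings in, a disconnection decision out. Illustrative only.
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):
        out, _ = self.lstm(seq)                 # process the temperature sequence
        last = out[:, -1, :]                    # keep the final hidden state
        return torch.sigmoid(self.head(last))  # disconnection probability

actor = LSTMActor()
window = torch.randn(8, 20, 4)    # batch of 8 windows, 20 steps, 4 sensor features
decision = actor(window)          # in (0, 1); a deployment would threshold this
print(decision.shape)             # torch.Size([8, 1])
```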
Open Access Article
Evaluation of Attack and Recovery in USFC: A Dependability View
by Jing Bai, Xiaohan Ge, Liangbin Yang, Chunding Wang and Ziyue Yin
Network 2026, 6(2), 24; https://doi.org/10.3390/network6020024 - 14 Apr 2026
Abstract
The integration of service function chains (SFCs) and unmanned aerial vehicles (UAVs) lays a crucial technological foundation for achieving efficient, reliable, and adaptive future airborne service networks. Service functions (SFs) in the SFC will be deployed on UAVs; this type of SFC is called unmanned aerial vehicle-based service function chains (USFCs). However, due to the combined effects of open hardware and software architectures, exposed communication links, and complex mission environments, UAVs have become ideal targets for attackers. Once a vulnerability is successfully injected into a UAV, data from the SFs running on it will be stolen, seriously threatening the dependability of the USFC. Therefore, it is necessary to conduct a quantitative evaluation of the USFC dependability to provide insights for further improving its dependability. This paper develops a USFC dependability evaluation model based on a semi-Markov process (SMP) to capture the dynamic interaction between attacker behavior and USFC system recovery behavior. The dependability of the USFC is comprehensively evaluated from two perspectives: availability and security. Extensive numerical analysis experiments are conducted, and the results not only demonstrate the changing trends of various dependability metrics under different parameters but also show parameter combinations for synergistic optimization among metrics.
Full article
(This article belongs to the Special Issue Advancements in Space-Air-Ground Integrated Networks)
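The semi-Markov evaluation style described above can be miniaturized as follows: solve the embedded chain's stationary distribution, weight it by mean sojourn times, and sum the probability mass of "up" states to get availability. The 4-state attack/recovery model and every number in it are invented for illustration, not the paper's model.

```python
# Minimal semi-Markov-process availability computation. The state space
# and all parameters below are assumptions for illustration.
import numpy as np

# Embedded DTMC over states: 0 healthy, 1 vulnerable, 2 compromised, 3 recovering
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.6, 0.0, 0.4, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.0]])
holding = np.array([100.0, 5.0, 2.0, 8.0])   # mean sojourn time per state (hours)

# Stationary distribution of the embedded chain: pi = pi @ P, sum(pi) = 1
A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# SMP state probabilities are the time-weighted embedded probabilities
p = pi * holding / np.dot(pi, holding)
availability = p[0] + p[1]                   # states where the USFC still serves
print(f"steady-state availability ~ {availability:.4f}")
```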
Open Access Article
Adaptive Decision-Level Intrusion Detection for Known and Zero-Day Attacks
by Joseph P. Mchina, Neema Mduma and Ramadhani S. Sinde
Network 2026, 6(2), 23; https://doi.org/10.3390/network6020023 - 9 Apr 2026
Abstract
Network Intrusion Detection Systems (NIDS) face increasing challenges from sophisticated cyber threats, particularly zero-day attacks that evade signature-based methods. While supervised learning is effective for known attack classification, it struggles with novel threats, whereas anomaly-based approaches suffer from high false positive rates and unstable thresholds. To address these limitations, this paper proposes a decision-level adaptive intrusion-detection framework combining hierarchical CNN-based closed-set classification with autoencoder-based zero-day detection in a cascade architecture. The framework enables deployment-time adaptation by dynamically adjusting class-specific confidence thresholds and fusion parameters without model retraining. Experiments on the CSE-CIC-IDS2018 dataset demonstrate strong closed-set performance, achieving 98.98% accuracy and a macro-F1-score of 0.9342, with improved recall for minority attack classes under adaptive thresholding. Under a zero-day evaluation protocol in which Web_Attacks and Infiltration are excluded from training and validation, the proposed approach achieves an F1-score of 0.9319 while maintaining a low false positive rate of 0.0019. The framework is further evaluated on the Simulated University Network Environment (SUNE) dataset representing campus network traffic, achieving 96.18% closed-set accuracy and 97.54% accuracy in the integrated cascade setting. These results demonstrate that the proposed framework effectively balances minority attack detection, zero-day identification, and false-alarm control in dynamic and resource-constrained network environments.
Full article
(This article belongs to the Special Issue Artificial Intelligence in Effective Intrusion Detection for Clouds)
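A decision-level cascade of the kind described is easy to sketch: accept the classifier's label when its per-class confidence threshold is met, otherwise consult the autoencoder's reconstruction error for a possible zero-day alert. The thresholds and shapes below are illustrative assumptions, not the tuned deployment values.

```python
# Sketch of decision-level fusion: CNN confidence gate first, autoencoder
# reconstruction error as the zero-day fallback. All numbers are invented.
import numpy as np

def cascade_decision(softmax_probs, recon_error, class_thresholds, ae_threshold):
    """Return a label index, or -1 for 'unknown / possible zero-day'."""
    k = int(np.argmax(softmax_probs))
    if softmax_probs[k] >= class_thresholds[k]:
        return k                          # confident closed-set decision
    if recon_error > ae_threshold:
        return -1                         # novel traffic: raise zero-day alert
    return 0                              # here class 0 stands for 'benign'

class_thresholds = np.array([0.50, 0.85, 0.80, 0.90])  # adjustable at deployment
print(cascade_decision(np.array([0.2, 0.9, 0.05, 0.05]), 0.01, class_thresholds, 0.1))  # -> 1
print(cascade_decision(np.array([0.4, 0.3, 0.2, 0.1]), 0.5, class_thresholds, 0.1))     # -> -1
```

Deployment-time adaptation then amounts to retuning `class_thresholds` and `ae_threshold` without retraining either model.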
Open Access Article
Mitigating Metamorphic Malware Through Adversarial Learning Techniques
by Kehinde O. Babaagba and Zhiyuan Tan
Network 2026, 6(2), 22; https://doi.org/10.3390/network6020022 - 8 Apr 2026
Abstract
Antivirus (AV) solutions remain a core defence mechanism against malicious software. However, many of these engines struggle to detect metamorphic malware, which continually alters its internal form in unpredictable ways. To address this limitation, we present an adversarially oriented approach that automatically generates novel malicious variants of existing malware that evade detection by a substantial proportion of AV systems, thereby providing material for strengthening defensive techniques. In this work, an Evolutionary Algorithm (EA) is used to evolve undetectable variants, guided by three fitness criteria: the evasiveness of the produced samples, and their behavioural and structural similarity to the original malware. The proposed method is assessed across three malware families to evaluate the effectiveness of the EA-generated variants. Results indicate that the EA produces diverse mutant variants capable of evading up to 94% of AV detectors for a given malware family, significantly surpassing the evasion rate of the original malware. Furthermore, we evaluated whether the mutants produced by the EA could enhance the training of machine learning models. In this context, a pretrained Natural Language Processing (NLP) transformer was employed within a transfer learning framework to improve the classification of metamorphic malware. When the evolved variants were incorporated into the training data, the approach achieved classification accuracies of up to 93%. These results highlight the value of using diverse EA-generated samples to strengthen malware classifiers, thereby improving the robustness of security systems against evolving threats.
Full article
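The evolutionary loop itself is conventional; the paper's contribution lies in the fitness design (evasiveness plus behavioural and structural similarity) and its evaluation. The toy skeleton below evolves plain byte strings under a stand-in fitness that mimics the evasion/similarity trade-off; it is not the authors' operators and touches no real malware or AV engines.

```python
# Generic EA skeleton with a stand-in fitness. The trade-off shape (evasion
# saturates, similarity decays) is an invented proxy for illustration.
import random

random.seed(0)
ORIGINAL = bytes(range(64))                      # stand-in for a binary

def mutate(genome, rate=0.05):
    return bytes(b ^ 0x5A if random.random() < rate else b for b in genome)

def fitness(variant):
    changed = sum(a != b for a, b in zip(variant, ORIGINAL)) / len(ORIGINAL)
    evasion = min(1.0, 2.0 * changed)            # toy: detectors saturate quickly
    similarity = 1.0 - changed                   # toy behavioural/structural proxy
    return evasion + similarity                  # peaks when ~half the bytes differ

def evolve(pop_size=30, generations=50):
    pop = [mutate(ORIGINAL) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # rank by combined fitness
        parents = pop[: pop_size // 2]           # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
print(f"best fitness: {fitness(best):.3f}")
```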
Open Access Article
Efficient Serial Systolic Polynomial Multiplier for Lattice-Based Post-Quantum Cryptographic Schemes in IoT Edge Node
by Atef Ibrahim and Fayez Gebali
Network 2026, 6(2), 21; https://doi.org/10.3390/network6020021 - 1 Apr 2026
Abstract
The rapid development of the Internet of Things (IoT) is transforming various economic and industrial sectors by embedding interconnected devices within their operational processes. However, security and privacy risks associated with these interconnected devices pose significant barriers to widespread adoption, particularly in light of potential quantum threats. To mitigate these challenges, it is imperative to employ post-quantum cryptographic schemes. However, essential constraints on IoT edge nodes complicate the effective implementation of such schemes. Among the most promising approaches in post-quantum cryptography are lattice-based schemes, which rely heavily on polynomial multiplication operations at their core. Improving the implementation of polynomial multiplication will significantly enhance the performance of these schemes. Therefore, this paper proposes an efficient low-complexity serial systolic array optimized for polynomial multiplication, particularly tailored for the Binary Ring Learning With Errors (BRLWE) scheme. Designed for cryptographic processors targeting capable IoT edge nodes, the proposed architecture demonstrates remarkable performance improvements, achieving a maximum operating frequency of 280 MHz for a field size of 256, while requiring only 8232 lookup tables (LUTs) and 2616 flip-flops (FFs). These results reflect a 16.8% reduction in LUT usage and a 19% reduction in FFs compared to the nearest competing designs, all while maintaining high throughput and low area utilization. This work significantly advances the establishment of secure and efficient infrastructure for IoT systems, bolstering their resilience against post-quantum attacks and supporting the growth of a robust digital economy. Furthermore, it aligns with Sustainable Development Goals 8 and 9 by fostering trust and facilitating the adoption of cutting-edge IoT technologies, ultimately promoting more resilient and innovative economic activities.
Full article
(This article belongs to the Special Issue Cybersecurity and Privacy in Internet-of-Things: Advances, Challenges, and Emerging Trends)
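The core ring operation the systolic array accelerates can be written as a short software reference model: multiplication in Z_q[x]/(x^n + 1), where x^n wraps around with a sign flip. The schoolbook loop below is for clarity only (the hardware pipelines it rather than looping), and n and q are small example values rather than the paper's parameters.

```python
# Reference model of negacyclic polynomial multiplication, the kernel of
# BRLWE-style lattice schemes. Schoolbook version; n, q are examples.
def polymul_negacyclic(a, b, n=8, q=256):
    """c = a*b mod (x^n + 1, q); x^(i+j) with i+j >= n wraps with a sign flip."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = (i + j) % n
            term = a[i] * b[j]
            c[k] = (c[k] - term if i + j >= n else c[k] + term) % q
    return c

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 8, 0, 0, 0, 0]
print(polymul_negacyclic(a, b))   # coefficients of a*b in the ring
```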
Open Access Article
Techno-Economic and SLA-Aware Control of 5G Cloud-RAN via Multi-Objective and Penalty-Constrained Reinforcement Learning
by Sherif M. Aboul, Hala M. Abd El Kader, Esraa M. Eid and Shimaa S. Ali
Network 2026, 6(2), 20; https://doi.org/10.3390/network6020020 - 31 Mar 2026
Abstract
Fifth-generation (5G) mobile networks must simultaneously satisfy stringent latency targets, high user density, and energy-aware operation across heterogeneous services. Cloud Radio Access Networks (C-RAN) provide architectural flexibility through centralized baseband processing, but they also introduce new control challenges related to fronthaul constraints, dynamic traffic variations, and joint radio–compute coordination with Mobile Edge Computing (MEC). This paper proposes a unified AI-driven optimization framework for adaptive 5G C-RAN management, where the controller dynamically tunes key system decisions—including functional split selection, TDD downlink ratio, user–RU association, fronthaul load management, and MEC offloading proportion. To enable fair benchmarking under identical simulation settings, a static baseline policy is compared against five adaptive control strategies: Deep Q-Network (DQN), Proximal Policy Optimization (PPO), Deep Deterministic Policy Gradient (DDPG), Multi-Objective Reinforcement Learning (MORL), and a deterministic Service-Level Agreement (SLA)-aware Penalty-Constrained Hierarchical Action Controller (PCHAC). Performance evaluation across techno-economic and service KPIs shows that intelligent control significantly improves operational profit, tail-latency behavior, and energy efficiency while enhancing SLA compliance compared with non-adaptive operation. The results highlight the practicality of multi-objective and constraint-aware learning for next-generation C-RAN orchestration under scaling traffic demand.
Full article
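One way to picture an SLA-aware, penalty-constrained objective of the kind the MORL/PCHAC controllers optimize is a scalar reward of the following shape: profit minus energy cost, with a hard penalty on tail-latency violations. All weights and targets below are invented for illustration, not the paper's settings.

```python
# Hedged sketch of a penalty-constrained, SLA-aware reward. Every constant
# here is an assumption chosen only to show the shape of the objective.
def reward(profit_usd, energy_kwh, p99_latency_ms,
           sla_latency_ms=10.0, energy_price=0.15, sla_penalty=50.0):
    r = profit_usd - energy_price * energy_kwh        # techno-economic term
    if p99_latency_ms > sla_latency_ms:               # SLA constraint violated
        r -= sla_penalty * (p99_latency_ms - sla_latency_ms)
    return r

print(reward(120.0, 40.0, 8.5))    # SLA met: plain profit minus energy cost
print(reward(120.0, 40.0, 12.0))   # tail-latency breach is heavily penalized
```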
Open Access Article
An Intelligent Framework for Crowdsource-Based Spectrum Misuse Detection in Shared-Spectrum Networks
by Debarun Das and Taieb Znati
Network 2026, 6(2), 19; https://doi.org/10.3390/network6020019 - 26 Mar 2026
Abstract
Dynamic Spectrum Access (DSA) has emerged as a viable solution to address spectrum scarcity in shared-spectrum networks. In response, the FCC established the Citizens Broadband Radio Service (CBRS) to manage and facilitate shared use of the federal and non-federal spectrum in a three-tiered access and authorization framework. However, due to the open nature of spectrum access and the usually limited coverage of the monitoring infrastructure, enforcing access rights in a shared-spectrum network becomes a daunting challenge. In this paper, we stipulate the use of crowdsourcing as a viable approach to engaging volunteers in spectrum monitoring in order to enforce spectrum access rights robustly and reliably. The success of this approach, however, hinges strongly on ensuring that spectrum access enforcement is carried out by reliable and trustworthy volunteers within the monitored area. To this end, a hybrid spectrum monitoring framework is proposed, which relies on opportunistically recruiting volunteers to augment the otherwise limited infrastructure of trusted devices. Although a volunteer's participation has the potential to enhance monitoring significantly, their mobility may become problematic in ensuring reliable coverage of the monitored spectrum area. To ensure continued monitoring, in spite of volunteer mobility, deep learning-based models are used to predict the likelihood that a volunteer will be available within the monitoring area. Three models, namely LSTM, GRU, and Transformer, are explored to assess their feasibility and viability in predicting a volunteer's availability likelihood over an extended time interval in a given spectrum monitoring area. Recurrent Neural Networks (RNNs) such as GRU and LSTM are effective for tasks involving sequential data, where both spatial and temporal patterns matter, which is the focus of volunteer availability prediction in spectrum monitoring. Transformers, on the other hand, excel at handling long-range dependencies and contextual understanding. Furthermore, their parallel processing capabilities allow faster training and inference compared to RNN-based models like GRU and LSTM. A simulation-based study is developed to assess the performance of these models and to carry out a comparative analysis of their ability to predict volunteers' availability to monitor the spectrum reliably. To this end, a real-world trace dataset of volunteers' locations, collected over five years, is used. The simulation results show that the three models achieve high prediction accuracy of volunteers' availability, ranging from 0.82 to 0.92. The results also show that a GRU-based model outperforms LSTM and Transformer-based models in terms of accuracy, Root Mean Square Error (RMSE), geodesic distance, and execution time.
Full article
Open Access Article
TAFL-UWSN: A Trust-Aware Federated Learning Framework for Securing Underwater Sensor Networks
by Raja Waseem Anwar, Mohammad Abrar, Abdu Salam and Faizan Ullah
Network 2026, 6(1), 18; https://doi.org/10.3390/network6010018 - 19 Mar 2026
Abstract
Underwater Acoustic Sensor Networks (UASNs) are pivotal for environmental monitoring, surveillance, and marine data collection. However, their open and largely unattended operational settings, constrained communication capabilities, limited energy resources, and susceptibility to insider attacks make it difficult to achieve safe, secure, and efficient collaborative learning. Federated learning (FL) offers a privacy-preserving method for decentralized model training but is inherently vulnerable to Byzantine threats and malicious participants. This paper proposes trust-aware FL for underwater sensor networks (TAFL-UWSN), a trust-aware FL framework designed to improve security, reliability, and energy efficiency in UASNs by incorporating trust evaluation directly into the FL process. The goal is to mitigate the impact of adversarial nodes while maintaining model performance in low-resource underwater environments. TAFL-UWSN integrates continuous trust scoring based on packet forwarding reliability, sensing consistency, and model deviation. Trust scores are used to weight or filter model updates both at the node level and the edge layer, where Autonomous Underwater Vehicles (AUVs) act as mobile aggregators. A trust-aware federated averaging algorithm is implemented, and extensive simulations are conducted in a custom Python-based environment, comparing TAFL-UWSN to standard FedAvg and Byzantine-resilient FL approaches under various attack conditions. TAFL-UWSN achieved a model accuracy exceeding 92% with up to 30% malicious nodes while maintaining a false positive rate below 5.5%. Communication overhead was reduced by 28%, and energy usage per node dropped by 33% compared to baseline methods. The TAFL-UWSN framework demonstrates that integrating trust into FL enables secure, efficient, and resilient underwater intelligence, validating its potential for broader application in distributed, resource-constrained environments.
Full article
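Trust-weighted aggregation, the heart of the framework, fits in a few lines: drop updates from nodes below a trust floor and average the rest with trust-proportional weights. The threshold, scores, and toy poisoned update below are illustrative assumptions, not the paper's trust model.

```python
# Minimal sketch of trust-aware federated averaging. Trust scores and the
# filtering threshold are invented for illustration.
import numpy as np

def trust_aware_fedavg(updates, trust_scores, min_trust=0.4):
    """updates: list of flat parameter vectors; trust_scores in [0, 1]."""
    kept = [(u, t) for u, t in zip(updates, trust_scores) if t >= min_trust]
    if not kept:
        raise ValueError("no trustworthy updates this round")
    weights = np.array([t for _, t in kept])
    weights /= weights.sum()                  # trust-proportional weights
    vecs = np.stack([u for u, _ in kept])
    return weights @ vecs                     # weighted model average

rng = np.random.default_rng(2)
honest = [rng.normal(0.0, 0.1, 5) for _ in range(4)]
byzantine = [np.full(5, 10.0)]                # poisoned update
scores = [0.9, 0.8, 0.85, 0.95, 0.1]          # attacker has earned low trust
print(trust_aware_fedavg(honest + byzantine, scores))  # attacker filtered out
```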
Open Access Systematic Review
Green Scheduling and Task Offloading in Edge Computing: A Systematic Review
by Adriana Rangel Ribeiro, Ana Clara Santos Andrade, Gabriel Leal dos Santos, Guilherme Dinarte Marcondes Lopes, Edvard Martins de Oliveira, Adler Diniz de Souza and Jeremias Barbosa Machado
Network 2026, 6(1), 17; https://doi.org/10.3390/network6010017 - 16 Mar 2026
Abstract
This paper presents a Systematic Literature Review (SLR) on green scheduling and task offloading strategies for energy optimization in edge computing environments. The evolution of low-latency, high-performance applications has driven the widespread adoption of distributed computing paradigms such as Edge Computing, Fog-Cloud architectures, and the Internet of Things (IoT). In this context, Mobile Edge Computing (MEC) is often combined with Unmanned Aerial Vehicles (UAVs) to extend computational capabilities to areas with limited infrastructure, bringing processing closer to the data source to reduce latency and improve scalability. Nevertheless, these systems encounter substantial energy-related challenges, particularly in battery-powered or resource-constrained environments. To address these concerns, green computing strategies—especially energy-efficient scheduling and task offloading—have emerged as promising approaches to optimize energy usage in edge environments. Green scheduling optimizes task allocation to minimize energy consumption, whereas offloading redistributes workloads from resource-constrained devices to edge or cloud servers. Increasingly, these techniques are enhanced through artificial intelligence (AI) and machine learning (ML), enabling adaptive and context-aware decision-making in dynamic environments. This paper conducts a systematic literature review (SLR) to synthesize the most widely adopted strategies for energy-efficient scheduling and task offloading in edge computing, highlighting their impact on sustainability and performance. The analysis provides a comprehensive view of the state of the art, examines how architectural contexts influence energy-aware decisions, and highlights the role of AI/ML in enabling intelligent and sustainable edge systems. The findings reveal current research gaps and outline future directions to advance the development of robust, scalable, and environmentally responsible computing infrastructures.
Full article
Open Access Article
Accuracy of Fiber Propagation Evaluation Using Phenomenological Attenuation and Raman Scattering Models in Multiband Optical Networks
by Giuseppina Maria Rizzi and Vittorio Curri
Network 2026, 6(1), 16; https://doi.org/10.3390/network6010016 - 12 Mar 2026
Abstract
The constant growth of IP data traffic, driven by sustained annual increases surpassing 26%, is pushing current optical transport infrastructures towards their capacity limits. Since the deployment of new fiber cables is economically demanding, ultra-wideband transmission is emerging as a promising cost-effective solution, enabled by multi-band amplifiers and transceivers spanning the entire low-loss window of standard single-mode fibers. In this scenario, an accurate description of the frequency-dependent fiber parameters is essential to reliably model optical signal propagation. In particular, the combined impact of attenuation variations with frequency and inter-channel stimulated Raman scattering (SRS) fundamentally shapes the power evolution of wide wavelength division multiplexing (WDM) combs and directly affects nonlinear interference (NLI) generation, as well as the amount of ASE noise. In this work, we review a set of analytical approximations, based on phenomenological approaches, for frequency-dependent attenuation and Raman scattering gain, and analyze their impact on achieving an effective balance between computational efficiency and physical fidelity. Through extensive analyses performed with the open-source software GNPy (version 2.12, Telecom Infra Project) on an optical line system exploring multi-band scenarios spanning C+L+S, C+L+E, and U-to-E transmission, we demonstrate that the proposed approximations reproduce the reference SRS power evolution and NLI profiles with root mean square errors (RMSEs) consistently below 0.03 dB, and down to the 10⁻³–10⁻² dB range for the most accurate configurations. Although the current implementation does not yet provide a direct reduction in computational time, the proposed framework lays the groundwork for future developments toward closed-form or semi-analytical solutions, enabling more efficient modeling and optimization of ultra-wideband optical transmission.
Full article
Open Access Article
Investigation of Underground Communication Quality Using Distributed Antenna Systems Considering Radio-Frequency Signal Propagation Characteristics in Almaty Metro Tunnels
by Askar Abdykadyrov, Moldir Kuatova, Nurzhigit Smailov, Zhandos Dosbayev, Sunggat Marxuly, Maxat Mamadiyarov, Ainur Kuttybayeva, Nurlan Kystaubayev and Amirkhan Bekmurza
Network 2026, 6(1), 15; https://doi.org/10.3390/network6010015 - 10 Mar 2026
Abstract
This study investigates radio-frequency signal propagation in underground metro tunnels with a focus on distributed antenna system (DAS) deployment. Deterministic simulations were performed using Altair WinProp 2024.1 (ProMan) with a 3D ray-tracing engine (GO + UTD) at 2.4 GHz in a reinforced concrete tunnel model of 900 m length. Two antenna configurations (B3: 8 dBi directional; B8: 5 dBi wide-beam) were evaluated under identical geometric and material conditions. Results show that path loss varies from 42 to 65 dB over 850 m, with estimated attenuation exponents lower than free-space values due to quasi-waveguide effects. The B3 configuration provides higher near-field received power (up to −7.5 dBm) but exhibits stronger attenuation over long distances. In contrast, the B8 configuration ensures a more uniform spatial power distribution and a reduced path-loss growth rate beyond 500 m. The findings confirm that antenna radiation pattern significantly influences underground communication performance and demonstrate the engineering suitability of distributed antenna systems for stable metro tunnel coverage.
Full article
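The attenuation-exponent estimate mentioned above typically comes from fitting the log-distance model PL(d) = PL0 + 10*n*log10(d/d0). The sketch below recovers n by least squares on synthetic tunnel data; the numbers are invented and only echo the reported quasi-waveguide behavior (n below the free-space value of 2).

```python
# Log-distance path-loss fit. The synthetic "measurements" and the chosen
# exponent are assumptions for illustration, not the paper's data.
import numpy as np

rng = np.random.default_rng(3)
d = np.linspace(10, 850, 200)                 # distance along the tunnel (m)
true_n, pl0 = 1.6, 40.0                       # quasi-waveguide effect: n < 2
pl = pl0 + 10 * true_n * np.log10(d / 10) + rng.normal(0, 2, d.size)

# Linear regression of PL against 10*log10(d/d0) recovers the exponent
X = np.column_stack([np.ones_like(d), 10 * np.log10(d / 10)])
(pl0_hat, n_hat), *_ = np.linalg.lstsq(X, pl, rcond=None)
print(f"fitted exponent n = {n_hat:.2f} (free space would be 2.0)")
```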
Open Access Article
Experimental Study of Alien Crosstalk Limits in Densely Bundled Commodity 10GBASE-T Ethernet Cables
by Aleksei Demin, Viktoriia Vasileva and Dmitrii Chaikovskii
Network 2026, 6(1), 14; https://doi.org/10.3390/network6010014 - 9 Mar 2026
Abstract
In the realm of high-speed Ethernet networks, alien crosstalk (AXT) significantly undermines the integrity and efficiency of data transmission. While existing works mostly focus on modeling and physical-layer mitigation techniques such as PAM16/DSQ128 modulation and LDPC coding, there is a lack of experimental evidence on how severe AXT affects commodity 10GBASE-T equipment in realistic, densely cabled installations. In this study, we assemble and evaluate the experimental testbed that emulates a highly adverse AXT environment by tightly bundling up to seven 60 m twisted-pair Ethernet cables and using only off-the-shelf 10GBASE-T network cards. We quantitatively characterize how increasing cable density leads to automatic speed downgrades, connection failures, and non-linear saturation of the aggregate throughput, and relate these effects to the observed link quality on individual ports. Our results demonstrate that, even in the presence of standard crosstalk mitigation and error-correction mechanisms, severe AXT can force commodity 10GBASE-T links to fall back from 10 Gbit/s to 1 Gbit/s or below. Based on these findings, we derive practical guidelines for dense-cabling deployments and identify key requirements for experimental testbeds that can more reliably quantify AXT severity and its impact on commodity 10GBASE-T link stability (rate fallback and link loss) under realistic conditions.
Full article
Open Access Article
Forecasting-Aware Digital Twin Calibration for Reliable Multi-Horizon Traffic Prediction
by Zeyad AlJundi, Taqwa A. Alhaj, Fatin A. Elhaj, Inshirah Idris and Tasneem Darwish
Network 2026, 6(1), 13; https://doi.org/10.3390/network6010013 - 6 Mar 2026
Abstract
Digital twin systems are becoming an important tool in intelligent transportation management, as they provide simulation-based environments for monitoring, analyzing, and predicting traffic behavior. However, the predictive performance of traffic digital twins is often limited by the quality and temporal consistency of sensor-level data generated from microscopic simulations. Most current calibration methods focus mainly on matching macroscopic traffic indicators, such as vehicle count and speed, without explicitly addressing the requirements of multi-horizon forecasting. This creates a gap between achieving realistic simulations and building reliable predictive models. This research proposes a forecasting-aware digital traffic twin framework that integrates microscopic SUMO simulation, controlled sensor-level observation modeling through geometric misalignment and noise injection, behavioral calibration, and deep temporal forecasting within a unified end-to-end structure. Unlike traditional calibration approaches, the proposed Genetic Algorithm (GA) reformulates calibration as a multi-step predictive optimization task. Simulation parameters are optimized by minimizing forecasting error produced by a lightweight proxy sequence model embedded within the calibration loop. In this way, calibration moves beyond simple statistical matching and instead emphasizes temporal learnability and forecasting stability, enabling the digital twin to generate traffic patterns more suitable for long-term prediction. Based on the calibrated traffic time series, both convolutional and recurrent deep learning models are evaluated under single-step and multi-step forecasting scenarios. To further examine generalizability, external validation is performed using the real-world PEMS-BAY dataset. The experimental findings demonstrate that forecasting-aware calibration reduces macroscopic traffic signal errors by around 50% for vehicle count and around 40% for average speed, improves temporal stability, and significantly enhances forecasting accuracy across both short-term and long-term horizons.
Full article
(This article belongs to the Special Issue Emerging Trends and Applications in Vehicular Ad Hoc Networks)
Open Access Article
Satellite Backhaul for Extending Connectivity in Rural Remote Areas: Deployment and Performance Assessment
by Souhaima Stiri, Maria Rita Palattella, Juan David Niebles Castano and Christos Politis
Network 2026, 6(1), 12; https://doi.org/10.3390/network6010012 - 24 Feb 2026
Abstract
Limited terrestrial network coverage in rural and remote areas constitutes a significant barrier to the digital transformation of the agricultural sector. Smart and precision farming applications, ranging from conventional environmental monitoring systems to advanced Digital Twin solutions, rely on the reliable transmission of sensor data, images, and video streams from geographically isolated farms. Such data-intensive services cannot be effectively supported without a robust communication infrastructure. Non-Terrestrial Networks (NTNs), particularly satellite systems, offer both narrowband and broadband connectivity, enabling the transmission of low-rate sensor measurements, as well as high-throughput multimedia data from the field. This paper presents an experimental performance evaluation of two satellite backhauling solutions: a Geostationary Earth Orbit (GEO) system provided by SES and a Low Earth Orbit (LEO) system from Starlink. The networks were first deployed and tested in a laboratory environment and subsequently validated in an operational agricultural field setting. Their performance is benchmarked against a terrestrial cellular network to assess their suitability for supporting advanced agricultural applications. The performance assessment results indicate that both satellite backhauling solutions are reliable and capable of meeting the bandwidth and latency requirements of delay-tolerant agricultural applications. In addition to the technical evaluation, this work presents a cost–benefit analysis that further underscores the advantages of NTN-based solutions. Despite higher initial expenditures, they provide extended coverage in remote areas and enable cost sharing across multiple users, improving overall economic viability.
Full article
(This article belongs to the Special Issue Satellite Networks for Communication, Positioning, Navigation and Timing)
Open Access Article
Beyond Attention: Hierarchical Mamba Models for Scalable Spatiotemporal Traffic Forecasting
by Zineddine Bettouche, Khalid Ali, Andreas Fischer and Andreas Kassler
Network 2026, 6(1), 11; https://doi.org/10.3390/network6010011 - 13 Feb 2026
Abstract
Traffic forecasting in cellular networks is a challenging spatiotemporal prediction problem due to strong temporal dependencies, spatial heterogeneity across cells, and the need for scalability to large network deployments. Traditional cell-specific models incur prohibitive training and maintenance costs, while global models often fail to capture heterogeneous spatial dynamics. Recent spatiotemporal architectures based on attention or graph neural networks improve accuracy but introduce high computational overhead, limiting their applicability in large-scale or real-time settings. We propose HiSTM (Hierarchical SpatioTemporal Mamba), a spatiotemporal forecasting architecture built on state-space modeling. HiSTM combines spatial convolutional encoding for local neighborhood interactions with Mamba-based temporal modeling to capture long-range dependencies, followed by attention-based temporal aggregation for prediction. The hierarchical design enables representation learning with linear computational complexity in sequence length and supports both grid-based and correlation-defined spatial structures. Cluster-aware extensions incorporate spatial regime information to handle heterogeneous traffic patterns. Experimental evaluation on large-scale real-world cellular datasets demonstrates that HiSTM achieves better accuracy, outperforming strong baselines. On the Milan dataset, HiSTM reduces MAE by 29.4% compared to STN, while achieving the lowest RMSE and highest R² score among all evaluated models. In multi-step autoregressive forecasting, HiSTM maintains 36.8% lower MAE than STN and 11.3% lower than STTRE at the 6-step horizon, with a 58% slower error accumulation rate compared to STN. On the unseen Trentino dataset, HiSTM achieves 47.3% MAE reduction over STN and demonstrates better cross-dataset generalization. A single HiSTM model outperforms 10,000 independently trained cell-specific LSTMs, demonstrating the advantage of joint spatiotemporal learning. HiSTM maintains best-in-class performance with up to 30% missing data, outperforming all baselines under various missing data scenarios. The model achieves these results while being 45× smaller than PredRNNpp, 18× smaller than xLSTM, and maintaining competitive inference latency of 1.19 ms, showcasing its effectiveness for scalable 5/6G traffic prediction in resource-constrained environments.
Full article
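The linear-time recurrence underlying Mamba-style state-space models is compact enough to show directly: x_k = A x_{k-1} + B u_k, y_k = C x_k, scanned once over the sequence. Real Mamba layers make A, B, and C learned and input-dependent; the fixed-parameter sketch below only illustrates why the scan is O(T) in sequence length.

```python
# Toy linear state-space scan. All matrices here are fixed and random;
# this is an illustration of the recurrence, not the HiSTM architecture.
import numpy as np

def ssm_scan(u, A, B, C):
    """u: (T, d_in) sequence -> y: (T, d_out); one recurrence step per time step."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B @ u_k               # state update
        ys.append(C @ x)                  # readout
    return np.array(ys)

rng = np.random.default_rng(4)
A = 0.9 * np.eye(8)                       # stable state transition
B = rng.normal(0, 0.1, (8, 3))
C = rng.normal(0, 0.1, (2, 8))
traffic = rng.random((96, 3))             # e.g. 96 intervals of cell features
print(ssm_scan(traffic, A, B, C).shape)   # (96, 2)
```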
Open Access Article
Round-Trip Time Estimation Using Enhanced Regularized Extreme Learning Machine
by Hassan Rizky Putra Sailellah, Hilal Hudan Nuha and Aji Gautama Putrada
Network 2026, 6(1), 10; https://doi.org/10.3390/network6010010 - 29 Jan 2026
Abstract
Reliable Internet connectivity is essential for latency-sensitive services such as video conferencing, media streaming, and online gaming. Round-trip time (RTT) is a key indicator of network performance and is central to setting retransmission timeout (RTO); inaccurate RTT estimates may trigger unnecessary retransmissions or slow loss recovery. This paper proposes an Enhanced Regularized Extreme Learning Machine (RELM) for RTT estimation that improves generalization and efficiency by interleaving a bidirectional log-step heuristic to select the regularization constant C. Unlike manual tuning or fixed-range grid search, the proposed heuristic explores C on a logarithmic scale in both directions (×10 and /10) within a single loop and terminates using a tolerance–patience criterion, reducing redundant evaluations without requiring predefined bounds. A custom RTT dataset is generated using Mininet with a dumbbell topology under controlled delay injections (1–1000 ms), yielding 1000 supervised samples derived from 100,000 raw RTT measurements. Experiments follow a strict train/validation/test split (6:1:3) with training-only standardization/normalization and validation-only hyperparameter selection. On the controlled Mininet dataset, the best configuration (ReLU, 150 hidden neurons, ) achieves , , , and on the test set, while maintaining millisecond-level runtime. Under the same evaluation pipeline, the proposed method demonstrates competitive performance compared to common regression baselines (SVR, GAM, Decision Tree, KNN, Random Forest, GBDT, and ELM), while maintaining lower computational overhead within the controlled simulation setting. To assess practical robustness, an additional evaluation on a public real-world WiFi RSS–RTT dataset shows near-meter accuracy in LOS and mixed LOS/NLOS scenarios, while performance degrades markedly under dominant NLOS conditions, reflecting physical-channel limitations rather than model instability. These results demonstrate the feasibility of the Enhanced RELM and motivate further validation on operational networks with packet loss, jitter, and path variability.
Full article
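The bidirectional log-step search in the abstract is concrete enough to sketch end to end: fit a regularized ELM in closed form, then try C×10 and C/10 from the current best and stop after a few non-improving steps. The data, tolerances, and the exact stopping rule below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a regularized ELM with a bidirectional log-step search for C.
# Toy data and stopping parameters; not the paper's exact heuristic.
import numpy as np

rng = np.random.default_rng(5)

def relm_fit(X, y, C, hidden=150):
    rng_w = np.random.default_rng(0)                  # fixed random features
    W = rng_w.normal(size=(X.shape[1], hidden))
    b = rng_w.normal(size=hidden)
    H = np.maximum(X @ W + b, 0.0)                    # ReLU hidden layer
    beta = np.linalg.solve(H.T @ H + np.eye(hidden) / C, H.T @ y)  # closed form
    return (W, b, beta)

def relm_predict(model, X):
    W, b, beta = model
    return np.maximum(X @ W + b, 0.0) @ beta

def search_C(Xtr, ytr, Xva, yva, C0=1.0, tol=1e-4, patience=3):
    def val_err(C):
        m = relm_fit(Xtr, ytr, C)
        return np.sqrt(np.mean((relm_predict(m, Xva) - yva) ** 2))
    best_C, best_e, bad = C0, val_err(C0), 0
    while bad < patience:
        cands = {best_C * 10: val_err(best_C * 10),
                 best_C / 10: val_err(best_C / 10)}   # both log-step directions
        C, e = min(cands.items(), key=lambda kv: kv[1])
        if e < best_e - tol:
            best_C, best_e, bad = C, e, 0
        else:
            bad += 1                                  # tolerance-patience stop
    return best_C, best_e

X = rng.random((1000, 3))
y = 50 * X[:, 0] + 10 * np.sin(6 * X[:, 1]) + rng.normal(0, 1, 1000)  # fake RTTs
Xtr, ytr, Xva, yva = X[:600], y[:600], X[600:700], y[600:700]
print(search_C(Xtr, ytr, Xva, yva))
```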
Topics
Topic in Electronics, Future Internet, Technologies, Telecom, Network, Microwave, Information, Signals
Advanced Propagation Channel Estimation Techniques for Sixth-Generation (6G) Wireless Communications
Topic Editors: Han Wang, Fangqing Wen, Xianpeng Wang. Deadline: 31 May 2026
Topic in Computers, Electronics, Future Internet, IoT, Network, Sensors, JSAN, Technologies, BDCC
Challenges and Future Trends of Wireless Networks
Topic Editors: Stefano Scanzio, Ramez Daoud, Jetmir Haxhibeqiri, Pedro Santos. Deadline: 30 September 2026
Topic in Applied Sciences, Electronics, IoT, Sensors, Future Internet, JSAN, Telecom, Network
Applications of IoT in Multidisciplinary Areas
Topic Editors: Nurul Sarkar, Ivan Cvitic. Deadline: 31 October 2026
Topic in Applied Sciences, Future Internet, Information, Algorithms, Computers, Blockchains, Cryptography, Network
Security and Privacy in Distributed and Trustless Systems
Topic Editors: Longxiang Gao, Bruce Gu. Deadline: 31 March 2027
Special Issues
Special Issue in Network
Toward Net-Zero Networks: Energy-Aware Protocols and Systems for the Future Internet
Guest Editors: Maurizio D'Arienzo, Ricardo Lent. Deadline: 31 May 2026
Special Issue in Network
Cybersecurity and Privacy in Internet-of-Things: Advances, Challenges, and Emerging Trends
Guest Editors: Bin Hu, Jiacun Wang, Xiwang Guo. Deadline: 31 May 2026
Special Issue in Network
Advances in Wireless Communications and Networking for Vertical Applications
Guest Editors: Lei Sun, Bo Fan. Deadline: 31 May 2026
Special Issue in Network
Convergence of Edge Computing and Next Generation Networking
Guest Editors: Armir Bujari, Gabriele Elia, Johann M. Marquez-Barja. Deadline: 15 June 2026