Search Results (610)

Search Parameters:
Keywords = software defined networks (SDN)

24 pages, 2345 KiB  
Article
Towards Intelligent 5G Infrastructures: Performance Evaluation of a Novel SDN-Enabled VANET Framework
by Abiola Ifaloye, Haifa Takruri and Rabab Al-Zaidi
Network 2025, 5(3), 28; https://doi.org/10.3390/network5030028 - 5 Aug 2025
Abstract
Critical Internet of Things (IoT) data in Fifth Generation Vehicular Ad Hoc Networks (5G VANETs) demands Ultra-Reliable Low-Latency Communication (URLLC) to support mission-critical vehicular applications such as autonomous driving and collision avoidance. Achieving the stringent Quality of Service (QoS) requirements for these applications remains a significant challenge. This paper proposes a novel framework integrating Software-Defined Networking (SDN) and Network Functions Virtualisation (NFV) as embedded functionalities in connected vehicles. A lightweight SDN Controller model, implemented via vehicle on-board computing resources, optimised QoS for communications between connected vehicles and the Next-Generation Node B (gNB), achieving a consistent packet delivery rate of 100%, compared to 81–96% for existing solutions leveraging SDN. Furthermore, a Software-Defined Wide-Area Network (SD-WAN) model deployed at the gNB enabled the efficient management of data, network, identity, and server access. Performance evaluations indicate that SDN and NFV are reliable and scalable technologies for virtualised and distributed 5G VANET infrastructures. Our SDN-based in-vehicle traffic classification model for dynamic resource allocation achieved 100% accuracy, outperforming existing Artificial Intelligence (AI)-based methods with 88–99% accuracy. In addition, a significant increase of 187% in flow rates over time highlights the framework’s decreasing latency, adaptability, and scalability in supporting URLLC class guarantees for critical vehicular services. Full article
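The abstract does not describe the classifier's internals; as a rough, hypothetical illustration of how an in-vehicle classifier could map flows to priority queues for dynamic resource allocation, here is a minimal Python sketch (all ports, thresholds, and class names are invented, and the paper's model is learned rather than rule-based):

from dataclasses import dataclass

@dataclass
class Flow:
    dst_port: int
    payload_bytes: int
    inter_arrival_ms: float

URLLC_PORTS = {5001, 5002}          # hypothetical ports for collision-avoidance / platooning data

def classify(flow: Flow) -> str:
    # Assumed heuristic stand-in for the paper's learned traffic classifier.
    if flow.dst_port in URLLC_PORTS or flow.inter_arrival_ms < 10:
        return "urllc"              # ultra-reliable low-latency class
    if flow.payload_bytes > 1200:
        return "bulk"               # e.g. map or sensor uploads
    return "best_effort"

def queue_for(qos_class: str) -> int:
    # Map the class to an egress priority queue on the on-board switch.
    return {"urllc": 0, "best_effort": 1, "bulk": 2}[qos_class]

f = Flow(dst_port=5001, payload_bytes=200, inter_arrival_ms=2.0)
print(classify(f), "-> queue", queue_for(classify(f)))
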
31 pages, 2736 KiB  
Article
Unseen Attack Detection in Software-Defined Networking Using a BERT-Based Large Language Model
by Mohammed N. Swileh and Shengli Zhang
AI 2025, 6(7), 154; https://doi.org/10.3390/ai6070154 - 11 Jul 2025
Viewed by 626
Abstract
Software-defined networking (SDN) represents a transformative shift in network architecture by decoupling the control plane from the data plane, enabling centralized and flexible management of network resources. However, this architectural shift introduces significant security challenges, as SDN’s centralized control becomes an attractive target for various types of attacks. While the body of current research on attack detection in SDN has yielded important results, several critical gaps remain that require further exploration. Addressing challenges in feature selection, broadening the scope beyond Distributed Denial of Service (DDoS) attacks, strengthening attack decisions based on multi-flow analysis, and building models capable of detecting unseen attacks that they have not been explicitly trained on are essential steps toward advancing security measures in SDN environments. In this paper, we introduce a novel approach that leverages Natural Language Processing (NLP) and the pre-trained Bidirectional Encoder Representations from Transformers (BERT)-base-uncased model to enhance the detection of attacks in SDN environments. Our approach transforms network flow data into a format interpretable by language models, allowing BERT-base-uncased to capture intricate patterns and relationships within network traffic. By utilizing Random Forest for feature selection, we optimize model performance and reduce computational overhead, ensuring efficient and accurate detection. Attack decisions are made based on several flows, providing stronger and more reliable detection of malicious traffic. Furthermore, our proposed method is specifically designed to detect previously unseen attacks, offering a solution for identifying threats that the model was not explicitly trained on. To rigorously evaluate our approach, we conducted experiments in two scenarios: one focused on detecting known attacks, achieving an accuracy, precision, recall, and F1-score of 99.96%, and another on detecting previously unseen attacks, where our model achieved 99.96% in all metrics, demonstrating the robustness and precision of our framework in detecting evolving threats, and reinforcing its potential to improve the security and resilience of SDN networks. Full article
(This article belongs to the Special Issue Artificial Intelligence for Network Management)
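A minimal sketch of the general flow-to-text plus BERT-base-uncased classification idea the abstract describes, using the Hugging Face transformers library; the feature names and serialization format are assumptions rather than the paper's exact scheme, and the model here is untrained:

# Serialize flow features into text and score them with bert-base-uncased (demo only).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def flow_to_text(flow: dict) -> str:
    # Turn selected flow features (e.g. those kept by Random Forest selection)
    # into a sentence-like string that BERT can tokenize.
    return " ".join(f"{k} {v}" for k, v in flow.items())

flow = {"duration": 12.3, "pkts": 840, "bytes": 61000, "syn_rate": 0.9}  # toy values
inputs = tokenizer(flow_to_text(flow), return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("attack" if logits.argmax(-1).item() == 1 else "benign")  # untrained weights: illustration only
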
17 pages, 2103 KiB  
Article
Optimizing Time-Sensitive Traffic Scheduling in Low-Earth-Orbit Satellite Networks
by Wei Liu, Nan Xiao, Bo Liu, Yuxian Zhang and Taoyong Li
Sensors 2025, 25(14), 4327; https://doi.org/10.3390/s25144327 - 10 Jul 2025
Viewed by 332
Abstract
In contrast to terrestrial networks, the rapid movement of low-earth-orbit (LEO) satellites causes frequent changes in the topology of intersatellite links (ISLs), resulting in dynamic shifts in transmission paths and fluctuations in multi-hop latency. Moreover, limited onboard resources such as buffer capacity and bandwidth competition contribute to the instability of these links. As a result, providing reliable quality of service (QoS) for time-sensitive flows (TSFs) in LEO satellite networks becomes a challenging task. Traditional terrestrial time-sensitive networking methods, which depend on fixed paths and static priority scheduling, are ill-equipped to handle the dynamic nature and resource constraints typical of satellite environments. This often leads to congestion, packet loss, and excessive latency, especially for high-priority TSFs. This study addresses the primary challenges faced by time-sensitive satellite networks and introduces a management framework based on software-defined networking (SDN) tailored for LEO satellites. An advanced queue management and scheduling system, influenced by terrestrial time-sensitive networking approaches, is developed. By incorporating differentiated forwarding strategies and priority-based classification, the proposed method improves the efficiency of transmitting time-sensitive traffic at multiple levels. To assess the scheme’s performance, simulations under various workloads are conducted, and the results reveal that it significantly boosts network throughput, reduces packet loss, and maintains low latency, thus optimizing the performance of time-sensitive traffic in LEO satellite networks. Full article
(This article belongs to the Section Communications)
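A toy Python sketch of the priority-based classification and differentiated forwarding idea; the class names and the strict-priority discipline are illustrative assumptions, not the paper's exact queue design:

import heapq
from itertools import count

PRIORITY = {"tsf_high": 0, "tsf_low": 1, "best_effort": 2}
_seq = count()                      # tie-breaker keeps FIFO order within a class
queue = []

def enqueue(pkt_id, traffic_class):
    heapq.heappush(queue, (PRIORITY[traffic_class], next(_seq), pkt_id))

def dequeue():
    # Strict priority: always forward the highest-priority packet first.
    return heapq.heappop(queue)[2] if queue else None

enqueue("p1", "best_effort"); enqueue("p2", "tsf_high"); enqueue("p3", "tsf_low")
print([dequeue() for _ in range(3)])   # -> ['p2', 'p3', 'p1']
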
11 pages, 1015 KiB  
Article
An OpenFlow-Based Elephant-Flow Monitoring and Scheduling Strategy in SDN
by Qinghui Chen, Mingyang Chen, Hong Wen and Yazhi Shi
Electronics 2025, 14(13), 2663; https://doi.org/10.3390/electronics14132663 - 30 Jun 2025
Viewed by 337
Abstract
This paper introduces a novel monitoring and scheduling strategy based on software-defined networking (SDN) to address the challenges of elephant flow scheduling and localization in conventional networks. The strategy involves collecting and analyzing switch statistics, effectively monitoring elephant flows, and enhancing the traditional distributed solution. Elephant flow scenarios are generated with the iperf tool, and Fat-Tree and Leaf-Spine topologies are emulated in Mininet. Experimental results demonstrate significant improvements in network stability and resource utilization with the proposed strategy. Specifically, in the Leaf-Spine topology, network throughput stabilized at around 8 Mbps with minimal fluctuation and no congestion over a 120-second test, compared with multiple throughput drops to 0 Mbps under the Fat-Tree topology. By combining elephant flow monitoring with scheduling, the proposed approach offers a promising way to enhance traffic management efficiency in large-scale network environments. Full article
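A hedged sketch of the underlying monitoring pattern: a Ryu controller application that polls OpenFlow flow statistics and flags flows whose byte counts exceed a threshold. The threshold and polling interval are arbitrary, the scheduling step is omitted, and this is not the authors' code:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib import hub

ELEPHANT_BYTES = 10 * 1024 * 1024   # flag flows above ~10 MiB between polls (assumed)

class ElephantMonitor(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.datapaths = {}
        self.monitor_thread = hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPStateChange, MAIN_DISPATCHER)
    def _state_change(self, ev):
        # Remember switches as they connect.
        self.datapaths[ev.datapath.id] = ev.datapath

    def _monitor(self):
        while True:
            for dp in self.datapaths.values():
                parser = dp.ofproto_parser
                dp.send_msg(parser.OFPFlowStatsRequest(dp))   # poll flow counters
            hub.sleep(5)

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def _flow_stats_reply(self, ev):
        for stat in ev.msg.body:
            if stat.byte_count > ELEPHANT_BYTES:
                self.logger.info("elephant flow on dpid=%s: %s",
                                 ev.msg.datapath.id, stat.match)
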
20 pages, 2579 KiB  
Article
ERA-MADDPG: An Elastic Routing Algorithm Based on Multi-Agent Deep Deterministic Policy Gradient in SDN
by Wanwei Huang, Hongchang Liu, Yingying Li and Linlin Ma
Future Internet 2025, 17(7), 291; https://doi.org/10.3390/fi17070291 - 29 Jun 2025
Viewed by 348
Abstract
To address the impact of network topology changes on routing performance, this paper proposes an Elastic Routing Algorithm based on Multi-Agent Deep Deterministic Policy Gradient (ERA-MADDPG), implemented within the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) framework of deep reinforcement learning. The algorithm first builds a three-layer architecture based on Software-Defined Networking (SDN); from top to bottom, the layers are the multi-agent layer, the controller layer, and the data layer. The architecture’s processing flow, including real-time data-layer information collection and dynamic policy generation, enables the ERA-MADDPG algorithm to exhibit strong elasticity by quickly adjusting routing decisions in response to topology changes. An actor-critic framework combined with Convolutional Neural Networks (CNNs) implements the ERA-MADDPG routing algorithm, which effectively improves training efficiency, enhances learning stability, facilitates collaboration, and improves generalization and applicability. Finally, simulation experiments demonstrate that the convergence speed of the ERA-MADDPG routing algorithm outperforms that of the Multi-Agent Deep Q-Network (MADQN) algorithm and the Smart Routing based on Deep Reinforcement Learning (SR-DRL) algorithm, with initial-phase training speed improved by approximately 20.9% and 39.1% over MADQN and SR-DRL, respectively. The elasticity of ERA-MADDPG is quantified by re-convergence speed: under 5–15% topology node/link changes, its re-convergence speed is over 25% faster than that of MADQN and SR-DRL, demonstrating superior capability to maintain routing efficiency in dynamic environments. Full article
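For orientation, a skeleton of the MADDPG actor/centralized-critic pattern the algorithm builds on, written in PyTorch; layer sizes are placeholders, and the paper's CNN feature extractors and training loop are omitted:

import torch
import torch.nn as nn

class Actor(nn.Module):
    # Per-agent policy: local observation -> link-weight adjustments.
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    # Centralized critic: joint observations and joint actions -> Q value.
    def __init__(self, joint_obs_dim, joint_act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(joint_obs_dim + joint_act_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

n_agents, obs_dim, act_dim = 3, 16, 4
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralCritic(n_agents * obs_dim, n_agents * act_dim)
obs = torch.randn(n_agents, obs_dim)
acts = torch.stack([a(o) for a, o in zip(actors, obs)])
print(critic(obs.flatten(), acts.flatten()).item())   # toy forward pass
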
28 pages, 1509 KiB  
Article
Adaptive Congestion Detection and Traffic Control in Software-Defined Networks via Data-Driven Multi-Agent Reinforcement Learning
by Kaoutar Boussaoud, Abdeslam En-Nouaary and Meryeme Ayache
Computers 2025, 14(6), 236; https://doi.org/10.3390/computers14060236 - 16 Jun 2025
Viewed by 556
Abstract
Efficient congestion management in Software-Defined Networks (SDNs) remains a significant challenge due to dynamic traffic patterns and complex topologies. Conventional congestion control techniques based on static or heuristic rules often fail to adapt effectively to real-time network variations. This paper proposes a data-driven framework based on Multi-Agent Reinforcement Learning (MARL) to enable intelligent, adaptive congestion control in SDNs. The framework integrates two collaborative agents: a Congestion Classification Agent that identifies congestion levels using metrics such as delay and packet loss, and a Decision-Making Agent based on Deep Q-Learning (DQN or its variants), which selects the optimal actions for routing and bandwidth management. The agents are trained offline using both synthetic and real network traces (e.g., the MAWI dataset), and deployed in a simulated SDN testbed using Mininet and the Ryu controller. Extensive experiments demonstrate the superiority of the proposed system across key performance metrics. Compared to baseline controllers, including standalone DQN and static heuristics, the MARL system achieves up to 3.0% higher throughput, maintains end-to-end delay below 10 ms, and reduces packet loss by over 10% in real traffic scenarios. Furthermore, the architecture exhibits stable cumulative reward progression and balanced action selection, reflecting effective learning and policy convergence. These results validate the benefit of agent specialization and modular learning in scalable and intelligent SDN traffic engineering. Full article
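A rough sketch of the two-agent split described above: a congestion classification stage feeding an epsilon-greedy DQN decision agent. The thresholds, state encoding, and action set are assumptions made for illustration:

import random
import torch
import torch.nn as nn

def classify_congestion(delay_ms: float, loss_pct: float) -> int:
    # Congestion Classification Agent (a simple threshold stand-in here):
    # 0 = none, 1 = moderate, 2 = severe.
    if delay_ms > 50 or loss_pct > 5:
        return 2
    if delay_ms > 10 or loss_pct > 1:
        return 1
    return 0

ACTIONS = ["keep_path", "reroute_alt_path", "rate_limit_low_prio"]

class QNet(nn.Module):
    def __init__(self, state_dim=3, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                                 nn.Linear(32, n_actions))
    def forward(self, s):
        return self.net(s)

def select_action(qnet, state, eps=0.1):
    # Decision-Making Agent: epsilon-greedy over Q values.
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(state).argmax())

level = classify_congestion(delay_ms=35.0, loss_pct=2.5)
state = torch.tensor([float(level), 35.0 / 100, 2.5 / 100])   # toy normalized state
print(ACTIONS[select_action(QNet(), state)])
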
28 pages, 2413 KiB  
Article
A Performance Evaluation for Software Defined Networks with P4
by Omesh A. Fernando, Hannan Xiao, Joseph Spring and Xianhui Che
Network 2025, 5(2), 21; https://doi.org/10.3390/network5020021 - 11 Jun 2025
Viewed by 573
Abstract
The exponential growth in the number of devices connected via the internet has created a need for granular programmability to increase performance and resilience and to reduce latency and jitter. Software Defined Networking (SDN) and Programming Protocol-independent Packet Processors (P4) introduce programmability into the control plane and data plane of networks, respectively. Despite their individual potential and capabilities, the performance of combining SDN and P4 remains underexplored. This study presents a comprehensive evaluation of SDN with data plane programmability using P4 (SDN+P4) against traditional SDN with Open vSwitch (SDN+OvS), testing the hypothesis that combining SDN and P4 strengthens control and data plane programmability and offers improved management and adaptability, providing a platform with faster packet processing and reduced jitter, loss, and processing overhead. Mininet was employed to emulate three distinct topologies: multi-path, grid, and transit-stub. Various traffic types were transmitted to assess performance metrics across the three topologies. Our results demonstrate that SDN+P4 significantly outperforms SDN+OvS due to parallel processing, flexible parsing, and reduced overhead. The evaluation demonstrates the potential of SDN+P4 to provide a more resilient service with improved network performance for the future internet and its heterogeneous applications. Full article
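As a reference point for the emulation setup, a minimal Mininet sketch of a multi-path topology attached to an external controller; the switch count and controller address are placeholders, and the SDN+P4 variant would additionally use BMv2/P4 switches, which are not shown here:

from mininet.topo import Topo
from mininet.net import Mininet
from mininet.node import RemoteController

class MultiPathTopo(Topo):
    def build(self):
        h1, h2 = self.addHost("h1"), self.addHost("h2")
        s1, s2, s3, s4 = (self.addSwitch(f"s{i}") for i in range(1, 5))
        self.addLink(h1, s1); self.addLink(h2, s4)
        # Two disjoint paths between the edge switches.
        self.addLink(s1, s2); self.addLink(s2, s4)
        self.addLink(s1, s3); self.addLink(s3, s4)

if __name__ == "__main__":
    net = Mininet(topo=MultiPathTopo(),
                  controller=lambda name: RemoteController(name, ip="127.0.0.1", port=6653))
    net.start()
    net.pingAll()        # baseline connectivity / loss check
    net.stop()
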
24 pages, 4339 KiB  
Article
Dynamic Load Management in Modern Grid Systems Using an Intelligent SDN-Based Framework
by Khawaja Tahir Mehmood and Muhammad Majid Hussain
Energies 2025, 18(12), 3001; https://doi.org/10.3390/en18123001 - 6 Jun 2025
Viewed by 465
Abstract
For modern power grids to be dependable, safe, sustainable, and maximally efficient (i.e., to enhance dynamic load distribution with faster response times and reduced reactive losses), an intelligent dynamic load management system based on modern computational techniques is needed to prevent overloading of power devices (alternators, transformers, etc.). In this paper, a co-simulation framework (Panda-SDN Load Balancer) is designed to achieve maximum operational efficiency from the power grid, with the prime objective of real-time intelligent load balancing of operational power devices such as power transformers. The framework integrates two tools: (a) PandaPower, an open-source Python tool used for real-time load flow analysis of power data (voltage; current; real power, P_Real; apparent power, P_Apparent; reactive power, P_Reactive; power factor, PF; etc.); and (b) Mininet, used to design a Software-Defined Network (SDN) with a POX controller that manages the load patterns on power transformers based on the PandaPower load flow results, exchanged via the Message Queuing Telemetry Transport (MQTT) protocol and Intelligent Electronic Devices (IEDs). The simulation is performed in three scenarios: (a) normal flow, (b) loaded flow without the proposed framework, and (c) loaded flow with the proposed framework. Simulation results show that the proposed framework provides intelligent substation automation with (a) balanced transformer utilization, (b) an enhanced system power factor under extreme load conditions, and (c) a significant gain in system operational efficiency compared to legacy load management methods. Full article
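A minimal pandapower sketch of the load-flow half of such a co-simulation: run a power flow, read the transformer loading, and derive a load-shift decision that the framework would hand to the SDN/POX side over MQTT. The network values and the 80% threshold are toy assumptions, not the paper's model:

import pandapower as pp

net = pp.create_empty_network()
hv = pp.create_bus(net, vn_kv=20.0)
lv = pp.create_bus(net, vn_kv=0.4)
pp.create_ext_grid(net, bus=hv)
pp.create_transformer(net, hv_bus=hv, lv_bus=lv, std_type="0.4 MVA 20/0.4 kV")
pp.create_load(net, bus=lv, p_mw=0.35, q_mvar=0.08)   # heavy feeder load

pp.runpp(net)                                          # real-time load flow analysis
loading = net.res_trafo.loading_percent.iloc[0]
p, q = net.res_load.p_mw.iloc[0], net.res_load.q_mvar.iloc[0]
pf = p / (p ** 2 + q ** 2) ** 0.5

print(f"transformer loading: {loading:.1f} %, load power factor: {pf:.3f}")
if loading > 80:                                       # illustrative threshold
    print("publish 'shift_load' command to the SDN controller (via MQTT in the framework)")
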
20 pages, 817 KiB  
Article
Cross-Layer Security for 5G/6G Network Slices: An SDN, NFV, and AI-Based Hybrid Framework
by Zeina Allaw, Ola Zein and Abdel-Mehsen Ahmad
Sensors 2025, 25(11), 3335; https://doi.org/10.3390/s25113335 - 26 May 2025
Viewed by 903
Abstract
Within the dynamic landscape of fifth-generation (5G) and emerging sixth-generation (6G) wireless networks, the adoption of network slicing has revolutionized telecommunications by enabling flexible and efficient resource allocation. However, this advancement introduces new security challenges, as traditional protection mechanisms struggle to address the dynamic and complex nature of sliced network environments. This study proposes a Hybrid Security Framework Using Cross-Layer Integration, combining Software-Defined Networking (SDN), Network Function Virtualization (NFV), and AI-driven anomaly detection to strengthen network defenses. By integrating security mechanisms across multiple layers, the framework effectively mitigates threats, ensuring the integrity and confidentiality of network slices. An implementation was developed, focusing on the AI-based detection process using a representative 5G security dataset. The results demonstrate promising detection accuracy and real-time response capabilities. While full SDN/NFV integration remains under development, these findings lay the groundwork for scalable, intelligent security architectures tailored to the evolving needs of next-generation networks. Full article
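The abstract does not name the detection model; as an illustrative stand-in for the AI-driven anomaly detection stage, a short scikit-learn sketch with invented per-slice flow features:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy per-slice flow features: [packets/s, bytes/s, distinct dst ports, avg delay ms]
normal = rng.normal([500, 4e5, 12, 8], [80, 5e4, 3, 2], size=(1000, 4))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspect = np.array([[9000, 7e6, 900, 40]])   # flood-like burst on one slice
print("anomaly" if model.predict(suspect)[0] == -1 else "normal")
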
19 pages, 2392 KiB  
Article
Intelligent Resource Allocation for Immersive VoD Multimedia in NG-EPON and B5G Converged Access Networks
by Razat Kharga, AliAkbar Nikoukar and I-Shyan Hwang
Photonics 2025, 12(6), 528; https://doi.org/10.3390/photonics12060528 - 22 May 2025
Viewed by 598
Abstract
Immersive content streaming services are becoming increasingly popular on video on demand (VoD) platforms due to the growing interest in extended reality (XR) and spatial experiences. Unlike traditional VoD, immersive VoD (IVoD) offers more engaging and interactive content beyond conventional 2D video. IVoD requires substantial bandwidth and minimal latency to deliver its interactive XR experiences. This research examines intelligent resource allocation for IVoD services across NG-EPON and B5G X-haul converged networks. A proposed software-defined networking (SDN) framework employs artificial neural networks (ANNs) with a backpropagation technique to predict bandwidth requirements based on traffic patterns and network conditions. New immersive video storage, field-programmable gate array (FPGA), Queue Manager, and logical layer components are added to the existing OLT and ONU hardware architecture to implement the SDN framework. The SDN framework manages the entire network, predicts bandwidth requirements, and operates the immersive media dynamic bandwidth allocation (IMS-DBA) algorithm to efficiently allocate bandwidth to IVoD network traffic, ensuring that QoS metrics are met for immersive media (IM) services. Simulation results demonstrate that the proposed framework reduces mean packet delay by up to 3% and packet drop probability by up to 4% as the traffic load varies from light to high across different scenarios, leading to enhanced overall QoS performance. Full article
(This article belongs to the Section Optical Communication and Network)
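A toy sketch of an ANN bandwidth predictor trained by backpropagation, of the kind the SDN framework uses to drive dynamic bandwidth allocation; the features, layer sizes, and synthetic target are invented for illustration:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Features per ONU: [current queue length, recent arrival rate, IVoD sessions, hour of day]
X = rng.random((2000, 4))
# Synthetic target: "next-cycle bandwidth demand" as a noisy function of the features.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + 0.05 * rng.standard_normal(2000)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1).fit(X, y)
onu_state = np.array([[0.8, 0.7, 0.5, 0.9]])
print(f"predicted demand share: {model.predict(onu_state)[0]:.2f}")   # would feed the DBA grant
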
14 pages, 397 KiB  
Article
Service Function Chain Migration: A Survey
by Zhiping Zhang and Changda Wang
Computers 2025, 14(6), 203; https://doi.org/10.3390/computers14060203 - 22 May 2025
Viewed by 695
Abstract
As a core technology emerging from the convergence of Network Function Virtualization (NFV) and Software-Defined Networking (SDN), Service Function Chaining (SFC) enables the dynamic orchestration of Virtual Network Functions (VNFs) to support diverse service requirements. However, in dynamic network environments, SFC faces significant challenges, such as resource fluctuations, user mobility, and fault recovery. To ensure service continuity and optimize resource utilization, an efficient migration mechanism is essential. This paper presents a comprehensive review of SFC migration research, analyzing it across key dimensions including migration motivations, strategy design, optimization goals, and core challenges. Existing approaches have demonstrated promising results in both passive and active migration strategies, leveraging techniques such as reinforcement learning for dynamic scheduling and digital twins for resource prediction. Nonetheless, critical issues remain—particularly regarding service interruption control, state consistency, algorithmic complexity, and security and privacy concerns. Traditional optimization algorithms often fall short in large-scale, heterogeneous networks due to limited computational efficiency and scalability. While machine learning enhances adaptability, it encounters limitations in data dependency and real-time performance. Future research should focus on deeply integrating intelligent algorithms with cross-domain collaboration technologies, developing lightweight security mechanisms, and advancing energy-efficient solutions. Moreover, coordinated innovation in both theory and practice is crucial to addressing emerging scenarios like 6G and edge computing, ultimately paving the way for a highly reliable and intelligent network service ecosystem. Full article
28 pages, 2049 KiB  
Review
A Survey on Software Defined Network-Enabled Edge Cloud Networks: Challenges and Future Research Directions
by Baha Uddin Kazi, Md Kawsarul Islam, Muhammad Mahmudul Haque Siddiqui and Muhammad Jaseemuddin
Network 2025, 5(2), 16; https://doi.org/10.3390/network5020016 - 20 May 2025
Cited by 1 | Viewed by 1248
Abstract
The explosion of connected devices and data transmission in the Internet of Things (IoT) era places a substantial burden on the capacity of cloud computing. Moreover, these IoT devices are mostly positioned at the edge of a network and are limited in resources. To address these challenges, distributed edge cloud computing networks have emerged. Because of the distributed nature of edge cloud networks, many research works consider software defined networks (SDNs) and network function virtualization (NFV) to be key enablers for managing, orchestrating, and load balancing resources. This article provides a comprehensive survey of these emerging technologies, focusing on SDN controllers, orchestration, and the role of artificial intelligence (AI) in enhancing the capabilities of controllers within edge cloud computing networks. More specifically, we present an extensive survey of research proposals on the integration of SDN controllers and orchestration with edge cloud networks. We further provide a holistic overview of SDN-enabled edge cloud networks and an inclusive summary of edge cloud use cases and their key challenges. Finally, we address open challenges and potential research directions for further exploration in this vital research area. Full article
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
11 pages, 1182 KiB  
Proceeding Paper
A Decentralized Framework for the Detection and Prevention of Distributed Denial of Service Attacks Using Federated Learning and Blockchain Technology
by Mao-Hsiu Hsu and Chia-Chun Liu
Eng. Proc. 2025, 92(1), 48; https://doi.org/10.3390/engproc2025092048 - 6 May 2025
Viewed by 553
Abstract
With the rapid development of the internet of things (IoT) and smart cities, the risk of network attacks, particularly distributed denial of service (DDoS) attacks, has significantly increased. Traditional centralized security systems struggle to address large-scale attacks while simultaneously safeguarding privacy. In this study, we created a decentralized security framework that integrates federated learning (FL) with blockchain technology for DDoS attack detection and prevention. Federated learning enables devices to collaboratively learn without sharing raw data and ensures data privacy, while blockchain provides immutable event logging and distributed monitoring to enhance the overall security of the system. The created framework leverages multi-layer encryption and Hashgraph technology for event recording, ensuring data integrity and efficiency. Additionally, software-defined networking (SDN) was employed for dynamic resource management and rapid responses to attacks. This system improves the accuracy of DDoS detection and effectively reduces communication costs and resource consumption. It has significant potential for large-scale attack defense in IoT and smart city environments. Full article
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
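A minimal FedAvg-style sketch of the federated learning side of such a framework, in plain NumPy; the model, weighting scheme, and the blockchain/Hashgraph logging layer are simplified or only hinted at in comments:

import numpy as np

def local_update(global_w, local_X, local_y, lr=0.1, epochs=5):
    # Each device trains a tiny logistic-regression detector on its own traffic
    # without sharing raw data.
    w = global_w.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-local_X @ w))
        w -= lr * local_X.T @ (p - local_y) / len(local_y)
    return w

def fed_avg(weights, n_samples):
    # Server (or smart contract) aggregates updates weighted by local data size;
    # in the paper, aggregation events would also be logged immutably on-chain.
    n = np.asarray(n_samples, dtype=float)
    return np.average(np.stack(weights), axis=0, weights=n)

rng = np.random.default_rng(0)
global_w = np.zeros(4)
clients = [(rng.random((200, 4)), rng.integers(0, 2, 200)) for _ in range(3)]
for _ in range(10):                                   # federated rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
print("aggregated global model weights:", np.round(global_w, 3))
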
5 pages, 135 KiB  
Editorial
Novel Methods Applied to Security and Privacy Problems in Future Networking Technologies
by Irfan-Ullah Awan, Amna Qureshi and Muhammad Shahwaiz Afaqui
Electronics 2025, 14(9), 1816; https://doi.org/10.3390/electronics14091816 - 29 Apr 2025
Viewed by 348
Abstract
The rapid development of future networking technologies, such as 5G, 6G, blockchain, the Internet of Things (IoT), cloud computing, and Software-Defined Networking (SDN) is set to revolutionize our methods of connection, communication, and data sharing [...] Full article
22 pages, 5204 KiB  
Article
Faulty Links’ Fast Recovery Method Based on Deep Reinforcement Learning
by Wanwei Huang, Wenqiang Gui, Yingying Li, Qingsong Lv, Jia Zhang and Xi He
Algorithms 2025, 18(5), 241; https://doi.org/10.3390/a18050241 - 24 Apr 2025
Viewed by 418
Abstract
To address the high recovery delay and link congestion issues in the communication network of Wide-Area Measurement Systems (WAMSs), this paper introduces Software-Defined Networking (SDN) and proposes a deep reinforcement learning-based faulty-link fast recovery method (DDPG-LBBP). The DDPG-LBBP method takes delay and link utilization as the optimization objectives and uses a gated recurrent neural network to accelerate algorithm convergence and output the optimal link weights for load balancing. By designing maximally disjoint backup paths, the method ensures the independence of the primary and backup paths, effectively preventing secondary failures caused by path overlap. Experiments compare DDPG-LBBP with the (1+2ε)-BPCA, FFRLI, and LIR methods on the IEEE 30 and IEEE 57 benchmark power system communication network topologies. The results show that DDPG-LBBP outperforms the others in faulty-link recovery delay, packet loss rate, and recovery success rate. Specifically, compared to the strongest baseline, (1+2ε)-BPCA, recovery delay is decreased by about 12.26% and recovery success rate is improved by about 6.91%; packet loss rate is decreased by about 15.31% compared to the best-performing FFRLI method. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
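A short networkx sketch of the maximally disjoint backup path idea: compute a primary shortest path, heavily penalize its links, and re-run shortest path so the backup overlaps as little as possible. The topology and weights are toy values, not the IEEE 30/57 systems:

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "C", 1), ("C", "D", 1),
    ("A", "E", 2), ("E", "F", 2), ("F", "D", 2), ("B", "F", 3),
])

primary = nx.shortest_path(G, "A", "D", weight="weight")
penalized = G.copy()
for u, v in zip(primary, primary[1:]):
    penalized[u][v]["weight"] += 1000          # discourage reuse of primary links
backup = nx.shortest_path(penalized, "A", "D", weight="weight")

print("primary:", primary)                     # ['A', 'B', 'C', 'D']
print("backup: ", backup)                      # ['A', 'E', 'F', 'D']
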