Search Results (1,152)

Search Parameters:
Keywords = software defined network

35 pages, 4926 KB  
Article
Hybrid MOCPO–AGE-MOEA for Efficient Bi-Objective Constrained Minimum Spanning Trees
by Dana Faiq Abd, Haval Mohammed Sidqi and Omed Hasan Ahmed
Computers 2025, 14(10), 422; https://doi.org/10.3390/computers14100422 - 2 Oct 2025
Abstract
The constrained bi-objective Minimum Spanning Tree (MST) problem is a fundamental challenge in network design, as it simultaneously requires minimizing both total edge weight and maximum hop distance under strict feasibility limits; however, most existing algorithms tend to emphasize one objective over the other, resulting in imbalanced solutions, limited Pareto fronts, or poor scalability on larger instances. To overcome these shortcomings, this study introduces a Hybrid MOCPO–AGE-MOEA algorithm that strategically combines the exploratory strength of Multi-Objective Crested Porcupines Optimization (MOCPO) with the exploitative refinement of the Adaptive Geometry-based Evolutionary Algorithm (AGE-MOEA), while a Kruskal-based repair operator is integrated to strictly enforce feasibility and preserve solution diversity. Moreover, through extensive experiments conducted on Euclidean graphs with 11–100 nodes, the hybrid consistently demonstrates superior performance compared with five state-of-the-art baselines, as it generates Pareto fronts up to four times larger, achieves nearly 20% reductions in hop counts, and delivers order-of-magnitude runtime improvements with near-linear scalability. Importantly, results reveal that allocating 85% of offspring to MOCPO exploration and 15% to AGE-MOEA exploitation yields the best balance between diversity, efficiency, and feasibility. Therefore, the Hybrid MOCPO–AGE-MOEA not only addresses critical gaps in constrained MST optimization but also establishes itself as a practical and scalable solution with strong applicability to domains such as software-defined networking, wireless mesh systems, and adaptive routing, where both computational efficiency and solution diversity are paramount. Full article
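The two objectives this abstract trades off, and the idea behind a Kruskal-style repair, can be sketched in a few lines. This is a minimal illustration on an invented 4-node graph, not the authors' implementation:

```python
from collections import defaultdict, deque

def kruskal_tree(n, edges):
    """Build a minimum-weight spanning tree with Kruskal's algorithm.

    edges: list of (weight, u, v) tuples. Returns the chosen edge list.
    A repair operator would run the same greedy rebuild on an infeasible
    candidate so it becomes a valid spanning tree again.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding the edge creates no cycle
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

def objectives(n, tree, root=0):
    """Return (total weight, max hop distance from root) for a spanning tree."""
    adj = defaultdict(list)
    for w, u, v in tree:
        adj[u].append(v)
        adj[v].append(u)
    hops = {root: 0}
    q = deque([root])
    while q:                               # BFS for hop distances
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    return sum(w for w, _, _ in tree), max(hops.values())

# Tiny 4-node example: the cheapest tree is a path, which minimizes weight
# but maximizes hops — exactly the tension the bi-objective search explores.
edges = [(1, 0, 1), (1, 1, 2), (1, 2, 3), (2, 0, 2), (2, 0, 3)]
tree = kruskal_tree(4, edges)
print(objectives(4, tree))  # (3, 3): weight 3, but 3 hops from node 0
```

The two returned values are the quantities a Pareto front over constrained MSTs would be built from; a hop-limited variant would reject or repair trees whose second objective exceeds the bound.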

27 pages, 6869 KB  
Article
Evaluation of Cyberattack Detection Models in Power Grids: Automated Generation of Attack Processes
by Davide Cerotti, Daniele Codetta Raiteri, Giovanna Dondossola, Lavinia Egidi, Giuliana Franceschinis, Luigi Portinale, Davide Savarro and Roberta Terruggia
Appl. Sci. 2025, 15(19), 10677; https://doi.org/10.3390/app151910677 - 2 Oct 2025
Abstract
The recent growing adversarial activity against critical systems, such as the power grid, has drawn attention to the need for appropriate measures to manage the related risks. In this setting, our research focuses on developing tools for early detection of adversarial activities, taking into account the specificities of the energy sector. We developed a framework to design and deploy AI-based detection models, and since one cannot risk disrupting regular operation with on-site tests, we also included a testbed for evaluation and fine-tuning. In the test environment, adversarial activity that produces realistic artifacts can be injected and monitored, and evidence analyzed by the detection models. In this paper we concentrate on the emulation of attacks inside our framework: a tool called SecuriDN is used to define, through a graphical interface, the network in terms of devices, applications, and protection mechanisms. Using this information, SecuriDN produces sequences of attack steps (based on the MITRE ATT&CK project) that are interpreted and executed by software called Netsploit. A case study related to Distributed Energy Resources is presented in order to show the process stages, highlight the possibilities given by our framework, and discuss possible limitations and future improvements. Full article
(This article belongs to the Special Issue Advanced Smart Grid Technologies, Applications and Challenges)

19 pages, 2205 KB  
Article
Final Implementation and Performance of the Cheia Space Object Tracking Radar
by Călin Bîră, Liviu Ionescu and Radu Hobincu
Remote Sens. 2025, 17(19), 3322; https://doi.org/10.3390/rs17193322 - 28 Sep 2025
Abstract
This paper presents the final implemented design and performance evaluation of the ground-based C-band Cheia radar system, developed to enhance Romania’s contribution to the EU Space Surveillance and Tracking (EU SST) network. All data used for performance analysis are real-time, real-life measurements of true spatial test objects orbiting Earth. The radar is based on two decommissioned 32 m satellite communication antennas already present at the Cheia Satellite Communication Center, which were retrofitted for radar operation in a quasi-monostatic architecture. A Linear Frequency Modulated Continuous Wave (LFMCW) radar design was implemented, using low transmitted power (2.5 kW) and advanced software-defined signal processing for detection and tracking of Low Earth Orbit (LEO) targets. System validation involved dry-run acceptance tests and calibration campaigns with known reference satellites. The radar demonstrated accurate measurements of range, Doppler velocity, and angular coordinates, with the capability to detect objects with radar cross-sections as low as 0.03 m² at slant ranges up to 1200 km. Tracking of medium and large Radar Cross Section (RCS) targets remained robust under both fair and adverse weather conditions. This work highlights the feasibility of re-purposing legacy satellite infrastructure for SST applications. The Cheia radar provides a cost-effective, EU SST-compliant solution using primarily commercial off-the-shelf components. The system strengthens the EU SST network while demonstrating the advantages of LFMCW radar architectures in electromagnetically congested environments. Full article
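For readers unfamiliar with LFMCW ranging, the core relation such a design relies on is short: the beat frequency between transmitted and received chirps is proportional to target range. A sketch with invented parameter values (not the Cheia radar's actual sweep settings):

```python
C = 299_792_458.0  # speed of light, m/s

def lfmcw_range(beat_hz, sweep_bw_hz, sweep_time_s):
    """Target range from an LFMCW beat frequency.

    R = c * f_b * T / (2 * B), where f_b is the measured beat frequency,
    B the chirp sweep bandwidth, and T the sweep duration. The factor of 2
    accounts for the round trip to the target and back.
    """
    return C * beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

# Illustrative numbers only: a 1 MHz sweep over 10 ms with an 80 kHz beat
# tone places the target at roughly 120 km.
print(round(lfmcw_range(80e3, 1e6, 10e-3) / 1e3, 1), "km")
```

The same measurement chain yields Doppler velocity from the frequency shift between up- and down-chirps, which is why a single low-power continuous-wave design can report both range and radial velocity.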

16 pages, 2957 KB  
Article
A Machine Learning Approach to Investigating Key Performance Factors in 5G Standalone Networks
by Yedil Nurakhov, Aksultan Mukhanbet, Serik Aibagarov and Timur Imankulov
Electronics 2025, 14(19), 3817; https://doi.org/10.3390/electronics14193817 - 26 Sep 2025
Abstract
Traditional machine learning approaches for 5G network management rely on data from operational networks, which are often noisy and confounded, making it difficult to identify key influencing factors. This research addresses the critical gap between correlation-based prediction and interpretable, data-driven explanation. To this end, a software-defined standalone 5G architecture was developed using srsRAN and Open5GS to support multi-user scenarios. A multi-user environment was then simulated with GNU Radio, from which the initial dataset was collected. This dataset was then augmented using a Conditional Tabular Generative Adversarial Network (CTGAN) to improve diversity and balance. Several machine learning models, including Linear Regression, Decision Tree, Random Forest, Gradient Boosting, and XGBoost, were trained and evaluated for predicting network performance. Among them, XGBoost achieved the best results, with an R² score of 0.998. To interpret the model, we conducted a SHAP (SHapley Additive exPlanations) analysis, which revealed that the download-to-upload bitrate ratio (dl_ul_ratio) and upload bitrate (brate_ul) were the most influential features. By leveraging a controlled experimental 5G environment, this study demonstrates how machine learning can move beyond predictive accuracy to uncover the fundamental principles governing 5G system performance, providing a robust foundation for future network optimization. Full article
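Both the derived dl_ul_ratio feature and the R² metric the abstract reports are simple to compute. A self-contained sketch, with the feature names taken from the abstract (the download bitrate column name and sample values are invented):

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def add_ratio_feature(row):
    """Derive the dl_ul_ratio feature highlighted by the SHAP analysis.

    'brate_dl' is an assumed column name mirroring the abstract's brate_ul.
    """
    row = dict(row)
    row["dl_ul_ratio"] = row["brate_dl"] / row["brate_ul"]
    return row

sample = add_ratio_feature({"brate_dl": 120e6, "brate_ul": 30e6})
print(sample["dl_ul_ratio"])                    # 4.0
print(r2_score([3, 5, 7], [2.9, 5.2, 6.9]))     # close to 1 for a good fit
```

An R² of 0.998, as reported for XGBoost, means the residual sum of squares is 0.2% of the total variance of the target.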

19 pages, 1167 KB  
Article
PointFuzz: Efficient Fuzzing of Library Code via Point-to-Point Mutations
by Sheng Wen, Liwei Tian and Suping Liu
Electronics 2025, 14(19), 3796; https://doi.org/10.3390/electronics14193796 - 25 Sep 2025
Abstract
Fuzzing has established itself as a cornerstone technique for uncovering defects in both stand-alone executables and software libraries. In the domain of library testing, prior research has predominantly concentrated on the automated generation of fuzz drivers: code harnesses that invoke individual Application Programming Interfaces (APIs) under test. While these approaches successfully orchestrate API calls in the correct sequence, they often neglect a critical factor: the semantic relevance and structural validity of the input data supplied to each API parameter. Unlike monolithic programs, where inputs are typically drawn from well-defined file or network formats, API parameters may span a broad spectrum of primitive and composite data types, ranging from integers and floating-point values to strings, containers, and user-defined aggregates, each of which demands tailored mutation strategies to exercise deep code paths and trigger latent faults. To address this gap, we introduce PointFuzz, a novel fuzzing framework that integrates type-aware input generation into existing harness generation pipelines. PointFuzz begins by statically analyzing the API’s function signatures and associated type definitions to accurately identify the data type of every parameter. It then applies a suite of specialized mutation operators. This data-type-guided mutation maximizes the likelihood of traversing previously untested execution branches. Moreover, PointFuzz incorporates an innovative feedback mechanism that dynamically adjusts mutation priorities based on real-time coverage gains. By assigning quantitative scores to parameter-specific operators, our system continuously learns which strategies yield the most valuable inputs and reallocates computational effort accordingly. Empirical evaluation across multiple widely used C/C++ libraries demonstrates that PointFuzz achieves superior API coverage compared to generic, type-agnostic fuzzers. These results validate the efficacy of combining type-aware mutation with adaptive feedback to advance the state of library API fuzzing. Full article
(This article belongs to the Special Issue Software Engineering: Status and Perspectives)
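The type-aware dispatch idea from the abstract above is easy to illustrate: choose a mutation operator based on a parameter's type, with boundary values tailored to that type. This is a toy sketch of the concept, not PointFuzz's actual operator set:

```python
import random

# One operator table entry per parameter type. The boundary values are the
# classic ones (zero, off-by-one neighbors, INT_MAX, NUL bytes, reversals);
# a real type-aware fuzzer would carry many more operators per type.
MUTATORS = {
    int:   lambda v, rng: rng.choice([0, -1, v + 1, v - 1, 2**31 - 1]),
    float: lambda v, rng: rng.choice([0.0, -v, 2.0 * v, float("inf")]),
    str:   lambda v, rng: rng.choice(["", v + "\x00", v[::-1], v * 2]),
}

def mutate(value, rng=None):
    """Dispatch on the value's runtime type, mirroring the static
    type-directed selection described in the abstract; unknown types
    pass through unchanged."""
    rng = rng or random.Random()
    op = MUTATORS.get(type(value))
    return op(value, rng) if op else value

print(mutate(7, random.Random(0)), repr(mutate("ab", random.Random(0))))
```

The coverage-feedback loop the abstract describes would sit on top of this table: each operator gets a score that is bumped whenever its output reaches a new branch, and `rng.choice` is replaced by score-weighted sampling.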

20 pages, 3944 KB  
Article
Performance Analysis and Security Preservation of DSRC in V2X Networks
by Muhammad Saad Sohail, Giancarlo Portomauro, Giovanni Battista Gaggero, Fabio Patrone and Mario Marchese
Electronics 2025, 14(19), 3786; https://doi.org/10.3390/electronics14193786 - 24 Sep 2025
Abstract
Protecting communications within vehicular networks is of paramount importance, particularly when data are transmitted using wireless ad-hoc technologies such as Dedicated Short-Range Communications (DSRC). Vulnerabilities in Vehicle-to-Everything (V2X) communications, especially along highways, pose significant risks, such as unauthorized interception or alteration of vehicle data. This study proposes a Software-Defined Radio (SDR)-based tool designed to assess the protection level of V2X communication systems against cyber attacks. The proposed tool can emulate both reception and transmission of IEEE 802.11p packets while testing DSRC implementation and robustness. The results of this investigation offer valuable contributions toward shaping cybersecurity strategies and frameworks designed to protect the integrity of Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications. Full article
(This article belongs to the Special Issue Computer Networking Security and Privacy)

16 pages, 3974 KB  
Article
Optimizing FDM Printing Parameters via Orthogonal Experiments and Neural Networks for Enhanced Dimensional Accuracy and Efficiency
by Jinxing Wu, Yi Zhang, Wenhao Hu, Changcheng Wu, Zuode Yang and Guangyi Duan
Coatings 2025, 15(10), 1117; https://doi.org/10.3390/coatings15101117 - 24 Sep 2025
Abstract
Optimizing printing parameters is crucial for enhancing the efficiency, surface quality, and dimensional accuracy of Fused Deposition Modeling (FDM) processes. A review of numerous publications reveals that most scholars analyze factors such as nozzle diameter and printing speed, while few investigate the impact of layer thickness, infill density, and shell layer count on print quality. Therefore, this study employed 3D slicing software to process the three-dimensional model and design printing process parameters. It systematically investigated the effects of layer thickness, infill density, and number of shells on printing time and geometric accuracy, quantifying the evaluation through volumetric error. Using an ABS connecting rod model, optimal parameters were determined within the defined range through orthogonal experimental design and signal-to-noise ratio (S/N) analysis. Subsequently, a backpropagation (BP) neural network was constructed to establish a predictive model for process optimization. Results indicate that parameter selection significantly impacts print duration and surface quality. Validation confirmed that the combination of 0.1 mm layer thickness, 40% infill density, and 5-layer shell configuration achieves the highest dimensional accuracy (minimum volumetric error and S/N value). Under this configuration, the volumetric error rate was 3.062%, with an S/N value of −9.719. Compared to other parameter combinations, this setup significantly reduced volumetric error, enhanced surface texture, and improved overall print precision. Statistical analysis indicates that the BP neural network model achieves a Mean Absolute Percentage Error (MAPE) of no more than 5.41% for volume error rate prediction and a MAPE of 5.58% for signal-to-noise ratio prediction. This validates the model’s high-precision predictive capability, with the established prediction model providing effective data support for FDM parameter optimization. Full article
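The reported S/N figure is consistent with the Taguchi smaller-the-better definition; applying it to the quoted 3.062% volumetric error reproduces the stated value to within rounding (this interpretation of the abstract's formula is an assumption):

```python
import math

def sn_smaller_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10(mean(y^2)). Larger (less negative) is better."""
    return -10.0 * math.log10(sum(y * y for y in values) / len(values))

# The best configuration reports a 3.062% volumetric error rate; a
# single-observation smaller-the-better S/N gives about -9.72 dB, which
# matches the reported -9.719 to within rounding.
print(round(sn_smaller_better([3.062]), 2))  # -9.72
```

This is why the abstract can treat "minimum volumetric error" and the S/N value together: with one response per run, the two rankings coincide.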

21 pages, 2310 KB  
Article
Development of a Model for Detecting Spectrum Sensing Data Falsification Attack in Mobile Cognitive Radio Networks Integrating Artificial Intelligence Techniques
by Lina María Yara Cifuentes, Ernesto Cadena Muñoz and Rafael Cubillos Sánchez
Algorithms 2025, 18(10), 596; https://doi.org/10.3390/a18100596 - 24 Sep 2025
Abstract
Mobile Cognitive Radio Networks (MCRNs) have emerged as a promising solution to address spectrum scarcity by enabling dynamic access to underutilized frequency bands assigned to Primary or Licensed Users (PUs). These networks rely on Cooperative Spectrum Sensing (CSS) to identify available spectrum, but this collaborative approach also introduces vulnerabilities to security threats—most notably, Spectrum Sensing Data Falsification (SSDF) attacks. In such attacks, malicious nodes deliberately report false sensing information, undermining the reliability and performance of the network. This paper investigates the application of machine learning techniques to detect and mitigate SSDF attacks in MCRNs, particularly considering the additional challenges introduced by node mobility. We propose a hybrid detection framework that integrates a reputation-based weighting mechanism with Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) classifiers to improve detection accuracy and reduce the influence of falsified data. Experimental results on a software-defined radio (SDR) platform demonstrate that the proposed method significantly enhances the system’s ability to identify malicious behavior: it achieves high detection accuracy, reduces the rate of data falsification by approximately 5–20%, increases the probability of attack detection, and supports the dynamic creation of a blacklist to isolate malicious nodes. These results underscore the potential of combining machine learning with trust-based mechanisms to strengthen the security and reliability of mobile cognitive radio networks. Full article
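The reputation-weighted fusion at the heart of such a scheme can be sketched briefly. This is a generic illustration of the mechanism (node names, weights, and the update step are invented, not the paper's parameters):

```python
def fused_decision(reports, reputation, threshold=0.5):
    """Reputation-weighted fusion of binary sensing reports.

    reports: {node: 0/1 vote for 'primary user present'}
    reputation: {node: trust weight in [0, 1]}
    Returns 1 if the reputation-weighted vote share exceeds the threshold,
    so low-reputation (possibly falsifying) nodes influence the outcome less.
    """
    total = sum(reputation[n] for n in reports)
    busy = sum(reputation[n] for n, v in reports.items() if v == 1)
    return int(busy / total > threshold)

def update_reputation(reputation, reports, decision, step=0.1):
    """Reward nodes that agreed with the fused decision, penalize the rest;
    persistently dissenting nodes drift toward a blacklistable reputation."""
    for n, v in reports.items():
        delta = step if v == decision else -step
        reputation[n] = min(1.0, max(0.0, reputation[n] + delta))

reports = {"a": 1, "b": 1, "c": 0}          # node c falsifies its report
rep = {"a": 0.9, "b": 0.8, "c": 0.4}
d = fused_decision(reports, rep)
update_reputation(rep, reports, d)
print(d, round(rep["c"], 2))  # 1 0.3 — the dissenting node loses reputation
```

In the paper's framework the SVM/KNN classifiers would supply the per-node suspicion signal; the trust weights above stand in for that component.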

23 pages, 3656 KB  
Article
DDoS Attacks Detection in SDN Through Network Traffic Feature Selection and Machine Learning Models
by Edith Paola Estupiñán Cuesta, Juan Carlos Martínez Quintero and Juan David Avilés Palma
Telecom 2025, 6(3), 69; https://doi.org/10.3390/telecom6030069 - 19 Sep 2025
Abstract
This research presents a methodology for the detection of distributed denial-of-service (DDoS) attacks in software-defined networks (SDNs). An SDN was configured using the Mininet simulator, the OpenDaylight controller, and a web server, which acted as the target to execute a DDoS attack on the HTTP protocol. The attack tools GoldenEye, Slowloris, HULK, Slowhttptest, and XerXes were used, and two datasets were built using the CICFlowMeter and NTLFlowLyzer flow and feature generation tools, with 424,922 and 731,589 flows, respectively, as well as two independent test datasets. These tools were used to compare their functionalities and efficiency in generating flows and features. Finally, the XGBoost and Random Forest models were evaluated with each dataset, with the objective of identifying the model that provides the best classification result in the detection of malicious traffic. XGBoost achieved accuracies of 99.48% and 97.61%, while Random Forest obtained better results of 99.97% and 99.99%, on the CIC-Dataset and NTL-Dataset, respectively. Random Forest therefore outperformed XGBoost in classification, achieving the lowest false-negative rate of 0.00001 on the NTL-Dataset. Full article
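The false-negative rate the abstract uses as its deciding metric is the share of attack flows a model lets through as benign. A minimal sketch with invented labels:

```python
def false_negative_rate(y_true, y_pred, positive=1):
    """FNR = FN / (FN + TP): the fraction of actual attack flows (the
    positive class) that the classifier mislabels as benign traffic."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    return fn / (fn + tp)

# Four attack flows, one missed: FNR = 1/4.
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1]
print(false_negative_rate(y_true, y_pred))  # 0.25
```

For intrusion detection this is often the right tie-breaker between models with near-identical accuracy: a missed attack (false negative) is usually costlier than a false alarm, which is why the 0.00001 FNR favors Random Forest here.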

29 pages, 466 KB  
Review
From Counters to Telemetry: A Survey of Programmable Network-Wide Monitoring
by Nofel Yaseen
Network 2025, 5(3), 38; https://doi.org/10.3390/network5030038 - 16 Sep 2025
Abstract
Network monitoring is becoming increasingly challenging as networks grow in scale, speed, and complexity. The evolution of monitoring approaches reflects a shift from device-centric, localized techniques toward network-wide observability enabled by modern networking paradigms. Early methods like SNMP polling and NetFlow provided basic insights but struggled with real-time visibility in large, dynamic environments. The emergence of Software-Defined Networking (SDN) introduced centralized control and a global view of network state, opening the door to more coordinated and programmable measurement strategies. More recently, programmable data planes (e.g., P4-based switches) and in-band telemetry frameworks have allowed fine-grained, line-rate data collection directly from traffic, reducing overhead and latency compared to traditional polling. These developments mark a move away from single-point or per-flow analysis toward holistic monitoring woven throughout the network fabric. In this survey, we systematically review the state of the art in network-wide monitoring. We define key concepts (topologies, flows, telemetry, observability) and trace the progression of monitoring architectures from traditional networks to SDN to fully programmable networks. We introduce a taxonomy spanning local device measures, path-level techniques, global network-wide methods, and hybrid approaches. Finally, we summarize open research challenges and future directions, highlighting that modern networks demand monitoring frameworks that are not only scalable and real-time but also tightly integrated with network control and automation. Full article
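A concrete example of the line-rate per-flow counting such surveys cover is the count-min sketch: a fixed-size, hash-indexed structure that programmable data planes implement with a few register arrays. A compact host-side model (parameters and flow keys are illustrative):

```python
import hashlib

class CountMinSketch:
    """Approximate per-flow counter in O(width * depth) memory, regardless
    of how many distinct flows appear — the property that makes it usable
    at line rate in a switch pipeline."""

    def __init__(self, width=64, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _cells(self, key):
        # One independent hash per row, derived here by salting blake2b.
        for i in range(self.depth):
            h = hashlib.blake2b(key.encode(), digest_size=8, salt=bytes([i]))
            yield i, int.from_bytes(h.digest(), "big") % self.width

    def add(self, key, count=1):
        for i, j in self._cells(key):
            self.rows[i][j] += count

    def estimate(self, key):
        # Collisions only inflate counters, so the row minimum is tightest.
        return min(self.rows[i][j] for i, j in self._cells(key))

cms = CountMinSketch()
for flow in ["10.0.0.1->10.0.0.2"] * 5 + ["10.0.0.3->10.0.0.4"] * 2:
    cms.add(flow)
print(cms.estimate("10.0.0.1->10.0.0.2"))  # >= 5; exactly 5 unless rows collide
```

The one-sided error (estimates never undercount) is what lets heavy-hitter detection run in the data plane and report only candidate flows to the controller.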

28 pages, 3252 KB  
Article
Toward Secure SDN Infrastructure in Smart Cities: Kafka-Enabled Machine Learning Framework for Anomaly Detection
by Gayathri Karthick, Glenford Mapp and Jon Crowcroft
Future Internet 2025, 17(9), 415; https://doi.org/10.3390/fi17090415 - 11 Sep 2025
Abstract
As smart cities evolve, the demand for real-time, secure, and adaptive network monitoring continues to grow. Software-Defined Networking (SDN) offers a centralized approach to managing network flows; however, anomaly detection within SDN environments remains a significant challenge, particularly at the intelligent edge. This paper presents a conceptual Kafka-enabled ML framework for scalable, real-time analytics in SDN environments, supported by offline evaluation and a prototype streaming demonstration. A range of supervised ML models covering traditional methods and ensemble approaches (Random Forest, Linear Regression, and XGBoost) were trained and validated using the InSDN intrusion detection dataset. These models were tested against multiple cyber threats, including botnets, DoS, DDoS, network reconnaissance, brute-force, and web attacks, achieving up to 99% accuracy for ensemble classifiers under offline conditions. A Dockerized prototype demonstrates Kafka’s role in offline data ingestion, processing, and visualization through PostgreSQL and Grafana. While full ML pipeline integration into Kafka remains part of future work, the proposed architecture establishes a foundation for secure and intelligent Software-Defined Vehicular Networking (SDVN) infrastructure in smart cities. Full article

6 pages, 505 KB  
Proceeding Paper
Building Application for Software-Defined Network
by Delyan Genkov, Tsvetan Raykov and Miroslav Slavov
Eng. Proc. 2025, 104(1), 92; https://doi.org/10.3390/engproc2025104092 - 11 Sep 2025
Abstract
Software-defined networks are a modern approach to computer networks. With this concept, network devices can be monitored and configured centrally. While the lower layers of a software-defined network—devices and controllers—are relatively well known and standardized, the upper layers consist of APIs and software applications and are not standard. This article aims to propose one possible way to interact with a software-defined network and to build applications for monitoring and configuring such networks. Full article

38 pages, 3071 KB  
Article
A Hybrid Framework for the Sensitivity Analysis of Software-Defined Networking Performance Metrics Using Design of Experiments and Machine Learning Techniques
by Chekwube Ezechi, Mobayode O. Akinsolu, Wilson Sakpere, Abimbola O. Sangodoyin, Uyoata E. Uyoata, Isaac Owusu-Nyarko and Folahanmi T. Akinsolu
Information 2025, 16(9), 783; https://doi.org/10.3390/info16090783 - 9 Sep 2025
Abstract
Software-defined networking (SDN) is a transformative approach for managing modern network architectures, particularly in Internet-of-Things (IoT) applications. However, ensuring optimal SDN performance and security often requires a robust sensitivity analysis (SA). To complement existing SA methods, this study proposes a new SA framework that integrates design of experiments (DOE) and machine-learning (ML) techniques. Although existing SA methods have been shown to be effective and scalable, most of these methods have yet to hybridize anomaly detection and classification (ADC) and data augmentation into a single, unified framework. To fill this gap, a targeted application of well-established existing techniques is proposed. This is achieved by hybridizing these existing techniques to undertake a more robust SA of a typified SDN-reliant IoT network. The proposed hybrid framework combines Latin hypercube sampling (LHS)-based DOE and generative adversarial network (GAN)-driven data augmentation to improve SA and support ADC in SDN-reliant IoT networks. Hence, it is called DOE-GAN-SA. In DOE-GAN-SA, LHS is used to ensure uniform parameter sampling, while GAN is used to generate synthetic data to augment data derived from typified real-world SDN-reliant IoT network scenarios. DOE-GAN-SA also employs a classification and regression tree (CART) to validate the GAN-generated synthetic dataset. Through the proposed framework, ADC is implemented, and an artificial neural network (ANN)-driven SA on an SDN-reliant IoT network is carried out. The performance of the SDN-reliant IoT network is analyzed under two conditions: a normal operating scenario and a distributed-denial-of-service (DDoS) flooding attack scenario, using throughput, jitter, and response time as performance metrics. To statistically validate the experimental findings, hypothesis tests are conducted to confirm the significance of all the inferences. The results demonstrate that integrating LHS and GAN significantly enhances SA, enabling the identification of critical SDN parameters affecting the modeled SDN-reliant IoT network performance. Additionally, ADC is also better supported, achieving higher DDoS flooding attack detection accuracy through the incorporation of synthetic network observations that emulate real-time traffic. Overall, this work highlights the potential of hybridizing LHS-based DOE, GAN-driven data augmentation, and ANN-assisted SA for robust network behavioral analysis and characterization in a new hybrid framework. Full article
(This article belongs to the Special Issue Data Privacy Protection in the Internet of Things)
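The Latin hypercube sampling used for uniform parameter coverage in the abstract above is compact enough to sketch in pure Python (the two SDN parameter names below are invented for illustration):

```python
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin hypercube sample: each dimension's range is split into
    n_samples equal slices, and every slice is used exactly once per
    dimension, in shuffled order — so no region of any axis is left
    unsampled, unlike plain uniform random sampling."""
    rng = rng or random.Random(42)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        slots = list(range(n_samples))
        rng.shuffle(slots)                   # random pairing across dimensions
        for i, s in enumerate(slots):
            u = (s + rng.random()) / n_samples   # jitter inside the slice
            samples[i][d] = lo + u * (hi - lo)
    return samples

# Two illustrative SDN parameters: flow-table idle timeout (s) and
# per-port rate limit (Mbps).
pts = latin_hypercube(5, [(1.0, 10.0), (100.0, 1000.0)])
print(len(pts), len(pts[0]))  # 5 2
```

Each design point would then drive one simulated network scenario, with the measured throughput, jitter, and response time forming the dataset the GAN later augments.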

27 pages, 2027 KB  
Article
Comparative Analysis of SDN and Blockchain Integration in P2P Streaming Networks for Secure and Reliable Communication
by Aisha Mohmmed Alshiky, Maher Ali Khemakhem, Fathy Eassa and Ahmed Alzahrani
Electronics 2025, 14(17), 3558; https://doi.org/10.3390/electronics14173558 - 7 Sep 2025
Abstract
Rapid advancements in peer-to-peer (P2P) streaming technologies have significantly impacted digital communication, enabling scalable, decentralized, and real-time content distribution. Despite these advancements, challenges persist, including dynamic topology management, high latency, security vulnerabilities, and unfair resource sharing (e.g., free riding). While software-defined networking (SDN) and blockchain individually address aspects of these limitations, their combined potential for comprehensive optimization remains underexplored. This study proposes a distributed SDN (DSDN) architecture enhanced with blockchain support to provide secure, scalable, and reliable P2P video streaming. We identified research gaps through critical analysis of the literature. We systematically compared traditional P2P, SDN-enhanced, and hybrid architectures across six performance metrics: latency, throughput, packet loss, authentication accuracy, packet delivery ratio, and control overhead. Simulations with 200 peers demonstrate that the proposed hybrid SDN–blockchain framework achieves a latency of 140 ms, a throughput of 340 Mbps, an authentication accuracy of 98%, a packet delivery ratio of 97.8%, a packet loss ratio of 2.2%, and a control overhead of 9.3%, outperforming state-of-the-art solutions such as NodeMaps, the reinforcement learning-based routing framework (RL-RF), and content delivery network–P2P networks (CDN-P2P). This work establishes a scalable and attack-resilient foundation for next-generation P2P streaming. Full article
(This article belongs to the Section Computer Science & Engineering)

23 pages, 2216 KB  
Article
An Adaptive Application-Aware Dynamic Load Balancing Framework for Open-Source SD-WAN
by Teodor Petrović, Aleksa Vidaković, Ilija Doknić, Mladen Veinović and Živko Bojović
Sensors 2025, 25(17), 5516; https://doi.org/10.3390/s25175516 - 4 Sep 2025
Abstract
Traditional Software-Defined Wide Area Network (SD-WAN) solutions lack adaptive load-balancing mechanisms, leading to inefficient traffic distribution, increased latency, and performance degradation. This paper presents an Application-Aware Dynamic Load Balancing (AADLB) framework designed for open-source SD-WAN environments. The proposed solution enables dynamic traffic routing based on real-time network performance indicators, including CPU utilization, memory usage, connection delay, and packet loss, while considering application-specific requirements. Unlike conventional load-balancing methods, such as Weighted Round Robin (WRR), Weighted Fair Queuing (WFQ), Priority Queuing (PQ), and Deficit Round Robin (DRR), AADLB continuously updates traffic weights based on application requirements and network conditions, ensuring optimal resource allocation and improved Quality of Service (QoS). The AADLB framework leverages a heuristic-based dynamic weight assignment algorithm to redistribute traffic in a multi-cloud environment, mitigating congestion and enhancing system responsiveness. Experimental results demonstrate that compared to these traditional algorithms, the proposed AADLB framework improved CPU utilization by an average of 8.40%, enhanced CPU stability by 76.66%, increased RAM utilization stability by 6.97%, slightly reduced average latency by 2.58%, and significantly enhanced latency consistency by 16.74%. These improvements enhance SD-WAN scalability, optimize bandwidth usage, and reduce operational costs. Our findings highlight the potential of application-aware dynamic load balancing in SD-WAN, offering a cost-effective and scalable alternative to proprietary solutions. Full article
(This article belongs to the Section Sensor Networks)
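The heuristic weight assignment described in the abstract above, mapping per-path metrics to traffic-split weights, can be sketched in a few lines. The metric names, application preference weights, and path values here are invented, not the AADLB framework's actual parameters:

```python
def link_score(metrics, prefs):
    """Heuristic path cost: preference-weighted sum of normalized metrics,
    where lower is better. An application-aware balancer supplies a
    different prefs vector per traffic class."""
    return sum(prefs[k] * metrics[k] for k in prefs)

def balance_weights(paths, prefs):
    """Turn inverse path costs into traffic-split weights summing to 1,
    so cheaper paths carry proportionally more of the flow."""
    inv = {p: 1.0 / link_score(m, prefs) for p, m in paths.items()}
    total = sum(inv.values())
    return {p: v / total for p, v in inv.items()}

# Two WAN paths; a latency-sensitive application weighs delay and loss
# heavily, so the low-delay path wins most of the traffic share.
paths = {
    "mpls": {"cpu": 0.4, "delay": 0.2, "loss": 0.01},
    "lte":  {"cpu": 0.2, "delay": 0.6, "loss": 0.05},
}
prefs = {"cpu": 0.2, "delay": 0.6, "loss": 0.2}
w = balance_weights(paths, prefs)
print(round(w["mpls"], 2))  # 0.67 — the lower-delay path gets the larger share
```

Re-running `balance_weights` on each monitoring interval is what makes the split dynamic: as a path's measured delay or loss degrades, its share shrinks without any static WRR-style configuration change.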
