Systematic Review

A Systematic Review of Machine-Learning-Based Detection of DDoS Attacks in Software-Defined Networks

by Surendren Ganeshan and R Kanesaraj Ramasamy *
Faculty of Computing and Informatics, Multimedia University, Cyberjaya 63000, Malaysia
* Author to whom correspondence should be addressed.
Future Internet 2026, 18(2), 109; https://doi.org/10.3390/fi18020109
Submission received: 22 January 2026 / Revised: 11 February 2026 / Accepted: 14 February 2026 / Published: 19 February 2026

Abstract

Software-Defined Networking (SDN) has emerged as a fundamental architecture for future Internet systems by enabling centralized control, programmability, and fine-grained traffic management. However, the logical centralization of the SDN control plane also introduces critical vulnerabilities, particularly to Distributed Denial-of-Service (DDoS) attacks that can severely disrupt network availability and performance. To address these challenges, machine-learning (ML) techniques have been increasingly adopted to enable intelligent, adaptive, and data-driven DDoS detection mechanisms within SDN environments. This study presents a PRISMA-guided systematic literature review of recent ML-based approaches for DDoS detection in SDN-based networks. A comprehensive search of IEEE Xplore, ACM Digital Library, ScienceDirect, and Google Scholar identified 38 primary studies published between 2021 and 2025. The selected studies were systematically analyzed to examine learning paradigms, experimental environments, evaluation metrics, datasets, and emerging architectural trends. The synthesis reveals that while single machine-learning classifiers remain dominant in the literature, hybrid and ensemble-based approaches are increasingly adopted to improve detection robustness under dynamic and high-volume traffic conditions. Experimental evaluations are predominantly conducted using SDN emulation platforms such as Mininet integrated with controllers, including Ryu and OpenDaylight, with performance commonly measured using accuracy, precision, recall, and F1 score, alongside emerging system-level metrics such as detection latency and controller resource utilization. Public datasets, including CICIDS2017, CICDDoS2019, and InSDN, are widely used, although a significant portion of studies rely on custom SDN-generated datasets to capture control-plane-specific behaviors. 
Despite notable advances in detection accuracy, several challenges persist, including limited generalization to low-rate and unknown attacks, dependency on synthetic traffic, and insufficient validation under real-time operational conditions. Based on the synthesized findings, this review highlights key research directions toward intelligent, scalable, and resilient DDoS defense mechanisms for future Internet architectures, emphasizing adaptive learning, lightweight deployment, and integration with programmable networking infrastructures.

Graphical Abstract

1. Introduction

The increasing dependence on networked systems for cloud services, critical infrastructures, and large-scale online applications has made network security a primary industry concern. Among cyber threats, Distributed Denial-of-Service (DDoS) attacks remain disruptive and can overwhelm network resources, rendering services unavailable. Software-Defined Networking (SDN) offers new opportunities for intelligent network management and security enforcement. By decoupling the control plane from the data plane, SDN enables centralized visibility, fine-grained traffic monitoring, and rapid policy-based responses that are difficult to achieve in traditional networks. However, this centralization also introduces new attack surfaces, making the SDN controller itself a high-value target for sophisticated DDoS attacks.

1.1. Machine Learning for DDoS Detection in SDN

Machine-learning (ML) techniques offer the ability to automatically learn complex patterns from network traffic data, making them well-suited for detecting both high-rate and low-rate DDoS attacks [1]. In SDN-based networks, flow-level statistics collected via protocols such as OpenFlow provide rich features that can be leveraged by ML models for traffic classification [2]. Consequently, numerous studies have proposed ML-based DDoS detection frameworks using traditional supervised learning algorithms, ensemble methods, and deep-learning architectures.
However, the effectiveness of these approaches varies significantly depending on the selected features, learning models, datasets, and experimental environments. Many studies rely on emulation platforms such as Mininet, combined with SDN controllers like Ryu or OpenDaylight [3], while others employ private or synthetic datasets and compare them with public datasets to evaluate performance differences [4]. In terms of ML models, a range of well-established classifiers, whether deployed in standalone, ensemble, or hybrid architectures, continues to deliver effective DDoS detection in SDN environments.
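For illustration, flow-level statistics such as those collected via OpenFlow can feed even very simple classifiers. The following sketch uses a minimal nearest-centroid classifier on hypothetical flow features; the feature names, values, and thresholds are invented for this example and are not drawn from any reviewed study (real systems use far richer features and ML libraries).

```python
# Illustrative only: classifying OpenFlow-style flow statistics with a
# minimal nearest-centroid classifier. All numbers are hypothetical.
import math

# Each flow record: (packets_per_second, avg_packet_size_bytes, flow_duration_s)
TRAINING = {
    "benign": [(40.0, 800.0, 12.0), (55.0, 950.0, 30.0), (35.0, 700.0, 8.0)],
    "ddos":   [(5000.0, 64.0, 0.5), (8000.0, 60.0, 0.3), (6500.0, 70.0, 0.4)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(flow):
    """Assign a flow to the class with the nearest centroid (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda label: dist(flow, CENTROIDS[label]))

print(classify((45.0, 820.0, 15.0)))   # a normal-looking flow -> benign
print(classify((7000.0, 62.0, 0.2)))   # a high-rate, small-packet flood -> ddos
```

The same decision structure underlies the supervised classifiers surveyed below; production approaches replace the centroid rule with trained SVM, tree, or ensemble models.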

1.2. Research Contributions and Paper Organization

This study presents a Systematic Literature Review (SLR) conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The primary objective of this research is to systematically analyse and synthesize the existing literature on ML-based DDoS detection and mitigation in SDN environments. The key contributions of this work are summarized as follows:
  • Providing a comprehensive classification and analysis of existing ML-based approaches for detecting DDoS attacks in SDN networks.
  • Reviewing the evaluation metrics, network simulators, attack generation tools, and experimental platforms commonly utilized across the literature.
  • Assessing the datasets used for training and validating ML models, with particular emphasis on publicly available and realistic SDN-based DDoS datasets.
  • Identifying key challenges, open research issues, and future directions to guide the development of practical ML-based DDoS detection applications in SDN environments.
The remainder of this paper is organized as follows. Section 2 provides the background and framework; Section 3 details the systematic review methodology using PRISMA; Section 4 discusses the results addressing the research questions; and Section 5 concludes the paper.

2. Background and Theoretical Framework

SDN introduces a fundamental shift from tightly coupled, hardware-centric network designs toward a logically centralized and programmable networking paradigm. The defining characteristic of SDN is the vertical separation of the control logic from packet forwarding. This enables flexible network management and global optimization [5]. The architectural abstraction is conventionally structured into three logical planes: the application, control, and data planes [6]. The data plane, also referred to as the infrastructure layer, comprises forwarding devices such as physical or virtual switches (e.g., OpenvSwitch) [7]. These devices operate as simplified forwarding entities without embedded control intelligence. They execute flow rules stored in flow tables, typically implemented using Ternary Content-Addressable Memory (TCAM) to match and forward packets at line rate [7].
The control plane hosts the SDN controller, which functions as a centralized network operating system [8]. Controllers such as Ryu, ONOS, and OpenDaylight maintain a global view of the network topology and traffic state. Communication between the controller and forwarding devices is facilitated through the southbound interface (SBI) using the OpenFlow protocol, which enables the installation, modification, and removal of flow rules [9]. Above the control plane is the application layer, which interacts with the controller via northbound APIs, commonly implemented as RESTful services [10]. This layer hosts various network applications, including routing, load balancing, and security services such as intrusion detection and mitigation systems [11]. The controller’s global visibility allows these applications to implement fine-grained, network-wide security policies, a capability largely unattainable in traditional distributed networks. Figure 1 illustrates this SDN architecture.

2.1. SDN Virtualization and Experimental Platforms

Empirical evaluation of SDN security mechanisms is typically conducted within virtualized and emulated environments [12], which helps bridge the gap between theoretical models and real-world deployments. These environments allow researchers to reproduce complex network topologies, generate large-scale traffic, and launch controlled attack scenarios without the financial and logistical constraints of physical infrastructures. Mininet has emerged as the standard SDN emulator and is employed in most experimental studies [12]. By leveraging lightweight Linux containers, Mininet enables the creation of virtual hosts, switches, and links that closely emulate real network behaviour while remaining fully compatible with production-grade SDN controllers [13].
At the data plane level, OpenvSwitch (OVS) serves as the primary switching platform in virtualized SDN experiments [14]. OVS provides full OpenFlow support and implements efficient flow-matching mechanisms, making it particularly suitable for evaluating DDoS mitigation strategies that rely on dynamic flow rule installation and traffic filtering [15]. For ML-driven security research, the Ryu controller is frequently selected due to its modular design and native Python implementation [16]. This architecture enables the direct integration of data processing and ML libraries, such as scikit-learn 1.8.0, TensorFlow 2.20.0, and PyTorch 2.7.0, into the control logic, facilitating real-time traffic analysis and adaptive defence mechanisms within the SDN control plane. The controller collects flow-level statistics from the switches, which these libraries then analyse to support adaptive DDoS detection. As illustrated in Figure 2, hosts within Mininet (h1 and h2) represent end systems that generate and receive both benign and malicious network traffic. Traffic generation tools are executed on these hosts to emulate normal communication and DDoS attack scenarios. By injecting traffic through these hosts, researchers can achieve controlled, reproducible, and cost-effective experimentation without the need for physical network infrastructure.
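The statistics-collection step at the heart of this workflow can be sketched in a few lines. In the sketch below, plain dictionaries stand in for OpenFlow flow-stats replies (a real Ryu application would receive EventOFPFlowStatsReply messages instead); the flow identifiers and counter values are synthetic.

```python
# Sketch of periodic flow-stats polling: compute per-flow packet/byte rates
# between two snapshots. Dicts here are stand-ins for OpenFlow stats replies.
def flow_deltas(prev_stats, curr_stats, interval_s):
    """Per-flow packet/byte rates between two polling snapshots."""
    rates = {}
    for flow_id, curr in curr_stats.items():
        prev = prev_stats.get(flow_id, {"packets": 0, "bytes": 0})
        rates[flow_id] = {
            "pps": (curr["packets"] - prev["packets"]) / interval_s,
            "bps": (curr["bytes"] - prev["bytes"]) / interval_s,
        }
    return rates

prev = {"h1->h2": {"packets": 100, "bytes": 80000}}
curr = {"h1->h2": {"packets": 700, "bytes": 120000},
        "h3->h2": {"packets": 50000, "bytes": 3200000}}  # new, suspicious flow

rates = flow_deltas(prev, curr, interval_s=10.0)
print(rates["h1->h2"]["pps"])   # 60.0
print(rates["h3->h2"]["pps"])   # 5000.0
```

Rates derived in this way (packets per second, bytes per second, flow-rule growth) are the raw material for the ML features discussed in Section 2.2.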

2.2. DDoS Threat Model and SDN-Specific Vulnerabilities

While SDN enhances network visibility, its logically centralized control introduces attack surfaces that fundamentally differ from those in traditional networks. DDoS attacks in SDN environments primarily exploit the tight coupling and communication dependencies between the data and control planes. Attackers generate a large volume of packets with spoofed or previously unseen header fields; because these packets fail to match existing flow table entries, the switches forward “Packet_In” messages to the controller for rule resolution [17]. The objective is to incapacitate the controller through overload, typically by flooding it with these messages. For example, Topology Poisoning involves feeding the controller false topology information to disrupt its overall network view. Furthermore, bandwidth overload entails flooding traffic to overwhelm network links [17]. To address the limitations of static, threshold-based defence mechanisms, recent SDN security research has increasingly adopted ML techniques for DDoS detection and mitigation. Data-driven approaches leverage the rich flow-level statistics available at the SDN controller to model complex traffic behaviours and distinguish between benign and malicious patterns. Commonly extracted features include packet and byte counts, flow duration, inter-arrival times, protocol distributions, and flag statistics [18]. To improve detection accuracy in SDN environments, datasets such as HLD-DDoSDN highlight the importance of incorporating both high and low traffic variation rates for TCP, UDP, and ICMP flooding attacks targeting the controller [19]. Furthermore, modern benchmarks like the CICDDoS2019 dataset utilize a comprehensive set of 80 network flow features to effectively classify a wider range of exploitation and reflection-based attacks [20].
These features enable ML models to detect not only high-volume flooding attacks but also stealthy low-rate and application-layer DDoS attacks that evade traditional rule-based systems. Figure 3 illustrates the “Packet_In”-driven attack methodology in SDN.
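The “Packet_In” flood symptom described above can be made concrete with a small feature-extraction sketch: a spoofed-source flood produces both an abnormal table-miss rate and a high ratio of never-before-seen source addresses. The event tuples and window size below are invented for illustration.

```python
# Hypothetical sketch of Packet_In flood symptoms: spoofed sources generate
# many table-miss packets, so the controller observes an abnormal Packet_In
# rate and a high ratio of previously unseen source addresses.
from collections import Counter

def packet_in_features(events, window_s):
    """events: list of (timestamp, src_ip) for Packet_In messages in one window."""
    srcs = Counter(src for _, src in events)
    rate = len(events) / window_s
    unique_src_ratio = len(srcs) / len(events) if events else 0.0
    return {"packet_in_rate": rate, "unique_src_ratio": unique_src_ratio}

# Benign window: few table misses, repeated sources.
benign = [(t, "10.0.0.1") for t in range(5)]
# Attack window: flood of table misses, each with a fresh spoofed source.
attack = [(t, f"10.0.{t % 256}.{t // 256}") for t in range(5000)]

print(packet_in_features(benign, window_s=10))   # low rate, low unique ratio
print(packet_in_features(attack, window_s=10))   # high rate, ratio near 1.0
```

Features of exactly this kind (message rate, source diversity, flag counts) recur throughout the reviewed detection models.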

3. Systematic Review Methodology

This study follows an SLR methodology based on the PRISMA 2020 guidelines. The methodology ensures the minimization of selection bias throughout the review process. The overall review protocol consists of defining research questions, identifying relevant studies, applying inclusion and exclusion criteria, assessing study quality, and systematically extracting and synthesizing data from selected primary studies.

3.1. Review Protocol and Research Questions

A review protocol was defined prior to conducting the literature search to guide the overall review process and reduce potential researcher bias. The protocol specifies the research questions, search strategy, study selection criteria (including inclusion and exclusion), quality assessment measures, and data-extraction procedures. The primary objective of this SLR is to systematically analyse ML-based approaches for detecting DDoS attacks in SDN environments.
Based on this objective, the research questions (RQs) formulated for this study are presented in Table 1:
These research questions provide a structured framework for analysing existing studies in a systematic and meaningful manner. This SLR addresses four research questions, with the motivation for each defined as follows: RQ1 aims to comprehensively explore the different types of existing ML detection approaches for SDN-based DDoS attacks. RQ2 identifies the prevalent evaluation metrics, network simulators, attack generation tools, and experimental platforms utilized in the literature. RQ3 focuses on determining the datasets used to evaluate current approaches, with an emphasis on their relevance to SDN environments. Finally, RQ4 highlights the key challenges and future research directions in securing SDN networks against DDoS attacks.

3.2. Search Strategy

A structured search strategy was used to identify relevant studies from IEEE Xplore, ACM Digital Library, ScienceDirect, and Google Scholar, with the final search completed on 22 January 2025. The search period was restricted to primary studies published between 2021 and 2025 to capture the most recent advancements in SDN-based DDoS detection. Keywords related to SDN, DDoS attacks, and machine learning were combined using a standardized Boolean string: (“Software-Defined Networking” AND “Distributed Denial-of-Service” AND “Machine Learning”) OR (“SDN” AND “DDoS” AND “ML”) AND (“Intrusion Detection System” OR “IDS” AND “Network Security”). As illustrated in Table 2, this search string was applied consistently across all selected digital libraries to ensure a systematic and reproducible retrieval process.
Search results were exported in RIS and BibTeX formats and screened using the Rayyan systematic review tool.
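The Boolean string above is ambiguous as written with respect to operator precedence. Purely for illustration, the sketch below encodes one possible reading, (topic clauses) AND (IDS/security clause), as a Python predicate over record text; the example records are invented, and a real screening pass would need word-boundary matching rather than naive substring tests.

```python
# Illustrative only: one reading of the Section 3.2 Boolean search string as a
# screening predicate. Substring matching is naive (e.g., "ids" can match inside
# other words); shown solely to make the operator grouping explicit.
def matches(text):
    t = text.lower()
    def has(*terms):
        return all(term in t for term in terms)
    clause1 = has("software-defined networking",
                  "distributed denial-of-service", "machine learning")
    clause2 = has("sdn", "ddos", "ml")
    clause3 = ("intrusion detection system" in t
               or ("ids" in t and "network security" in t))
    return (clause1 or clause2) and clause3

print(matches("An SDN DDoS ML intrusion detection system"))   # True
print(matches("A survey of blockchain consensus protocols"))  # False
```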

3.3. Inclusion and Exclusion Criteria

Table 3 outlines the inclusion and exclusion criteria used to select relevant studies from the initial search results. These criteria ensure that only research aligned with the objective of systematically reviewing ML approaches for DDoS detection is included. Specifically, the selection was restricted to papers published within the last 5 years to identify new trends and advancements. This temporal constraint ensures the inclusion of high-quality, relevant studies focusing on SDN-based DDoS detection models developed within this recent period.

3.4. Data Extraction and Data Items

To maintain consistency across the 38 selected studies, a structured data-extraction process was implemented. Data extraction was performed by the primary author using a standardized electronic extraction form (Microsoft Excel) to ensure consistency of data inputs. To maintain data integrity and minimize transcription errors, a two-pass verification strategy was employed: the author performed an initial extraction, followed by a secondary review of the data points against the original source documents after a 2-week interval. Any inconsistencies identified during this self-audit were corrected to ensure 100% data accuracy. In addition to the technical variables, general study characteristics (Item 10b) were extracted, including the publication year, study type (journal article or conference paper), and the specific target network environment (IoT, Cloud, or Enterprise SDN).
Table 4 presents the combined extraction framework, listing the specific data points collected (Item 10) and their mapping to the primary research questions. Crucially, this table also details the data preparation and standardization methods (Item 13b) used to resolve heterogeneity, such as converting all performance metrics to a uniform percentage scale and categorizing diverse algorithmic architectures into a standardized taxonomy.
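The metric-standardization step mentioned above (Item 13b) can be sketched as a small conversion rule. The heuristic below, treating values at or below 1 as fractions, is our own illustrative assumption, not a rule prescribed by PRISMA or the extraction form.

```python
# Sketch of converting heterogeneously reported performance metrics to a
# uniform percentage scale (Item 13b). The "<= 1 means fraction" rule is an
# illustrative heuristic; an accuracy reported literally as 1% would need
# manual handling.
def to_percent(value):
    """Map 0.9838 -> 98.38 and 98.38 -> 98.38 (values <= 1 treated as fractions)."""
    v = float(value)
    return round(v * 100, 2) if v <= 1.0 else round(v, 2)

print([to_percent(x) for x in (0.9838, 98.38, 1.0, "0.75")])
# -> [98.38, 98.38, 100.0, 75.0]
```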

3.5. Study Selection

The study selection process was conducted in multiple stages in accordance with PRISMA 2020 guidelines. Initially, all records retrieved from the selected databases were aggregated, and duplicate entries were removed using the Rayyan systematic review tool.
As shown in Figure 4, a total of 957 records were identified across four primary databases: IEEE Xplore (460), ACM Digital Library (246), ScienceDirect (211), and Google Scholar (40). Regarding Google Scholar, the initial search yielded approximately 2600 records; however, only the first 200 were screened due to a rapid relevance decay observed beyond the initial pages. A manual audit of results beyond page 20 confirmed that subsequent records became increasingly irrelevant, consisting of “noise” and non-aligned architectures that fell outside the research scope. These primarily focused on vertical-specific architectures (e.g., 5G, Fiber Optics), non-standard SDN environments (e.g., MEC, IoT hardware clusters like Raspberry Pi or sensor nodes), alternative security frameworks (e.g., Blockchain), or exclusively Deep-Learning paradigms. Consistent with systematic review practices, we concluded that the most significant research had been captured within the initial pages, reaching theoretical saturation where further screening yielded no novel methodologies. After removing 53 duplicates from the total pool, 904 records were screened by title and abstract, resulting in the exclusion of 746 studies that were irrelevant to the scope of this review. The remaining 158 articles underwent a full-text assessment based on predefined criteria (Table 3), resulting in the exclusion of 120 studies. Specifically, 46 papers were excluded for lacking a traditional ML focus, often utilizing ML with deep-learning architecture that fell outside this study’s algorithmic scope; 39 were excluded due to insufficient experimental rigor (e.g., relying on a single metric which prevented fair performance comparison); and 35 papers were excluded because the datasets used for their experiments were not specified, lacking essential parameters such as total dataset size and specific distribution of attack vs. benign traffic. 
Ultimately, 38 studies met all requirements and were included in the systematic review, with any discrepancies resolved through mutual consensus with a research supervisor.

3.6. Quality Assessment and Risk of Bias

A Quality Assessment (QA) was conducted to evaluate the methodological rigor and risk of bias for each primary study (Item 11). As detailed in Table 5, five criteria were scored on a scale of 0 to 1, with total scores used to classify studies as high (≥4), medium (3–3.5), or low quality (<3). This process functioned as a critical filter, ensuring that only high-relevance evidence informed the final review findings and mitigated the impact of lower-quality data.
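The quality-banding rule stated above can be written out directly, with thresholds taken from Section 3.6:

```python
# QA banding from Section 3.6: total scores over five criteria (0-1 each)
# map to high (>= 4), medium (3-3.5), or low (< 3) quality.
def quality_band(score):
    if score >= 4.0:
        return "high"
    if score >= 3.0:
        return "medium"
    return "low"

print([quality_band(s) for s in (5.0, 4.0, 3.5, 3.0, 2.5)])
# -> ['high', 'high', 'medium', 'medium', 'low']
```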

3.7. Taxonomy of Machine-Learning-Based Detection Schemes

To provide a clear overview of the existing approaches for ML-based DDoS detection in SDN, a taxonomy was developed to categorize the selected primary studies. As shown in Figure 5, the taxonomy classifies studies into three main categories: Single ML, Ensemble ML, and Hybrid ML.
  • Single ML Models: These approaches utilize an individual, standalone algorithm to perform classification based on a single predictive structure. In this paradigm, a single mathematical model such as Support Vector Machine (SVM), Logistic Regression, Decision Tree, or Naive Bayes processes the input features to produce an output. Because there is no combination or aggregation of multiple models, the final decision rests entirely on the logic of the specific standalone learner.
  • Ensemble ML Models: These leverage a homogenous collection of many base learners, typically from the same algorithmic family, to improve stability and accuracy. This includes Bagging (e.g., Random Forest as a collection of Decision Trees) and Boosting (e.g., XGBoost), where the focus is on aggregating multiple versions of the same model type to reduce bias.
  • Hybrid ML Models: Unlike ensembles, hybrid models integrate fundamentally different architectural paradigms or merge ML classifiers with non-ML intelligent optimization techniques. This includes Cross-Family Stacking (e.g., Support Vector Classifier (SVC) combined with Random Forest) or ML + Optimization (e.g., using Genetic Algorithms (GA) or Particle Swarm Optimization (PSO)) for feature selection or hyperparameter tuning, where distinct logics are fused into a unified framework.
The taxonomy in Figure 5 provides a hierarchical structure of ML-based DDoS detection in SDN, highlighting the evolution from standalone classifiers to sophisticated ensemble and hybrid strategies. To illustrate this evolution, Table 6 maps the key findings from the 38 selected papers to this taxonomic framework. This organization simplifies the field’s complexity, making it easier to identify the various learning paradigms, from individual algorithms to optimized hybrid systems, and to relate them to the security challenges they address and the performance metrics they report.
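The three taxonomy categories can be expressed as a simple rule-based labeller. The algorithm-family sets and decision rules below are a simplified illustration of the classification logic, not the coding scheme actually used for Table 6.

```python
# Illustrative rule-based version of the Section 3.7 taxonomy. Family lists
# and rules are simplified examples, not the paper's coding scheme.
SINGLE = {"svm", "logistic regression", "decision tree", "naive bayes", "knn"}
ENSEMBLE = {"random forest", "xgboost", "extra trees", "adaboost"}

def taxonomy_category(models, uses_optimizer=False):
    """models: set of lower-case algorithm names used by a study."""
    families = {"single" if m in SINGLE else
                "ensemble" if m in ENSEMBLE else "other"
                for m in models}
    # Hybrid: cross-family combination, or ML fused with an optimizer (GA/PSO).
    if uses_optimizer or len(families) > 1:
        return "Hybrid ML"
    if families == {"ensemble"}:
        return "Ensemble ML"
    return "Single ML"

print(taxonomy_category({"svm"}))                          # Single ML
print(taxonomy_category({"random forest"}))                # Ensemble ML
print(taxonomy_category({"svm", "random forest"}))         # Hybrid ML (stacking)
print(taxonomy_category({"decision tree"}, uses_optimizer=True))  # Hybrid ML
```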

3.8. Quality Assessment Results

To ensure reliable synthesis, this review adopted a dual-layered approach: PRISMA filtered for thematic relevance, while the QA filtered for methodological rigor. Since a study can be relevant yet scientifically flawed, the QA results allow us to weigh evidence based on reliability rather than just topical inclusion. As detailed in Table 7, these scores directly inform the deliberations in the subsequent synthesis.
Table 7 presents the quality assessment results for the selected studies based on five predefined criteria (QA1–QA5). Most of the studies achieved a high-quality rating level, indicating strong methodological rigor and reliable experimental validation, while a smaller number were rated as medium quality. Two studies [39,44] were classified as low quality due to notable methodological and reporting omissions.

3.8.1. Synthesis of Quality Assessment Results

The quality assessment results in Table 7 demonstrate a high standard of methodological thoroughness across the selected literature, with 78.9% (30 papers) of the studies classified as “High Quality”. To facilitate a more meaningful deliberation of these findings, the studies were categorized into five distinct score profiles based on their methodological strengths:
  • Group A (Gold Standard—Score 5.0): Comprising 18 studies (e.g., [23,31,37,50,51,58]) that exhibit maximum diligence across all criteria. These studies serve as primary references for high-quality SDN-DDoS research, demonstrating superior experimental validity.
  • Group B (High Rigor, Limited Impact—Score 4.5): Consisting of 8 studies (e.g., [34,40]) that maintain high standards but present minor gaps in evaluating SDN control plane overhead.
  • Group C (Dataset-Limited High Quality—Score 4.0): Including 4 studies (e.g., [21,25]) that utilized older datasets while maintaining strong experimental frameworks.
  • Group D (Medium Quality—Score 3.0–3.5): Comprising 6 studies (e.g., [24,46]) showing notable gaps in experimental setup or problem definition.
  • Group E (Methodological Outliers—Score < 3.0): Identifying 2 studies [39,44] with reporting and validation omissions.
These groupings reveal that while most of the research is robust, lower-scoring studies were primarily downgraded due to failures in Experimental Rigor (QA4) and Result Impact (QA5).

3.8.2. Algorithmic Pragmatism and Ensemble Efficiency (QA5 Focus)

We conducted a deep deliberation on the “Best Algorithm” by weighting Result Impact (QA5) over raw accuracy percentages. Our synthesis of Group A “Gold Standard” studies [43,50,55] suggests that Ensemble Learning (Random Forest, Extra Trees, XGBoost) represents the most pragmatic approach for SDN environments. This conclusion was reached by analyzing three specific performance dimensions:
  • Optimizing Operational Resources: Findings from [43] prove that Random Forest (RF) models can achieve high accuracy (98.38%) while simultaneously reducing SDN controller CPU usage by 44.9%. This reduction is critical for maintaining controller stability during “Packet_In” storms.
  • Achieving Real-Time Classification: Evidence from [54] shows the Extra Trees classifier achieving a perfect 1.0 F1 score with a 0% false alarm rate. The unique “random splitting” mechanism of Extra Trees provides the necessary speed for the low-latency requirements of OpenFlow switches.
  • Handling High-Dimensional SDN Features: Evidence from [50,55] highlighted that Random Forest excels in capturing complex patterns from engineered features such as unique source counts, SYN flag counts, and flow-rule growth rates. Unlike single-classifier models (e.g., Logistic Regression in [57]), ensembles were found to be more robust against the “noisy” traffic patterns typical of multi-vector DDoS attacks.
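The robustness advantage of ensembles over single classifiers under noisy traffic can be demonstrated with a toy majority-voting experiment: independent weak voters, each correct about 70% of the time, are usually correct as a committee. All numbers are illustrative and unrelated to the reviewed studies' reported accuracies.

```python
# Toy illustration of ensemble robustness under noise: majority voting over
# weak, independent classifiers outperforms any single one of them.
import random

random.seed(7)

def weak_vote(truth, accuracy=0.7):
    """A noisy classifier that is right with probability `accuracy`."""
    return truth if random.random() < accuracy else 1 - truth

def majority(truth, n_voters=15):
    votes = sum(weak_vote(truth) for _ in range(n_voters))
    return 1 if votes > n_voters // 2 else 0

trials = 2000
single_acc = sum(weak_vote(1) == 1 for _ in range(trials)) / trials
ensemble_acc = sum(majority(1) == 1 for _ in range(trials)) / trials
print(f"single: {single_acc:.2f}, ensemble: {ensemble_acc:.2f}")
assert ensemble_acc > single_acc
```

This voter-independence assumption is idealized; real bagging and boosting ensembles (Random Forest, XGBoost) gain less than the toy suggests because their base learners are correlated, but the direction of the effect is the same.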

3.8.3. Practical Utility for Future Experimental Design

This QA serves as a strategic benchmark for the SDN-DDoS research community. By identifying the 30 “High Quality” studies, this synthesis provides a curated roadmap for future experimental design, directing researchers toward realistic SDN testbeds (QA3) and operational impact metrics (QA5). Ultimately, this framework functions as a quality filter that guides the transition toward more rigorous, reproducible, and production-ready security research.

3.9. Data Synthesis Methodology (PRISMA Items 13a, 13d, 13e)

Following the quality assessment, the final phase of the methodology involves the systematic synthesis of extracted data to address the primary research questions.

3.9.1. Data Preparation and Grouping (Item 13a)

The findings from the 38 selected primary studies were synthesized using a narrative approach to provide a comprehensive analysis of the SDN-DDoS detection landscape. In accordance with PRISMA guidelines (Item 13a), studies were grouped for synthesis based on three primary criteria: (i) the proposed approach, focusing on the model type (Single, Ensemble, or Hybrid); (ii) the experimental setup, including the specific datasets and simulation tools utilized; and (iii) the key findings representing the core technical contribution of each study. This grouping strategy was chosen to ensure that performance comparisons were relevant across similar architectural frameworks. The results of this synthesis are systematically tabulated in Table 6, which serves as the main evidence table.

3.9.2. Synthesis Methods (Item 13d) and Assessment of Heterogeneity (Item 13e)

A narrative synthesis approach was adopted to integrate findings across the 38 included studies. Due to the high degree of technical heterogeneity in SDN controller types, ML model architectures, and experimental datasets, a formal meta-analysis was not feasible. Instead, the synthesis followed a structured narrative process involving grouping studies by algorithmic family, comparing standardized quantitative metrics, and performing a thematic analysis of the key findings and limitations documented in Table 6 and Table 8.
Factors contributing to variations in detection performance were qualitatively explored. Specifically, the synthesis analysed how different experimental setups influenced reported results by contrasting findings across comparable studies.

4. Results of Synthesis

Before presenting the detailed data synthesis of the SLR, this section reports descriptive statistics of the primary studies to outline overall trends. The subsequent subsections then discuss the synthesis findings in depth.

4.1. Year of Publication

This section presents the distribution of the 38 selected primary studies by their year of publication. Figure 6 illustrates the frequency of studies that met the inclusion and quality criteria between 2021 and 2025. As shown, 13 of the selected studies were published in 2021. While 2022 is represented by four studies in this review, a consistent presence of high-quality research is observed in the subsequent years, with six studies selected from 2023 and eight from 2024. Furthermore, seven studies from the early months of 2025 have already met the inclusion criteria. Rather than reflecting global publication volume, this distribution highlights the consistency of high-quality research in ML-based SDN-DDoS detection over the last 5 years. The sustained selection rate in recent years underscores the increasing maturity of the field and the ongoing relevance of using ML to address evolving SDN security challenges.

4.2. Types of SDN Environment

The reviewed studies employ diverse SDN environments for experimental validation, which can be categorized into four primary types. Simulation-based studies, utilizing platforms such as Mininet with controllers like Ryu and OpenDaylight, remain the most common approach due to their flexibility. Emulation or testbed-based studies combine SDN environments with real traffic generation tools such as hping3 and Scapy, providing a higher degree of realism. Dataset-based experimental studies evaluate ML models offline using public datasets (e.g., CICIDS2017 or CICDDoS2019), without the need for a live SDN controller. Finally, hybrid studies integrate SDN testbeds with public datasets, balancing experimental control with realistic traffic evaluation. These varied environments allow researchers to validate detection models across different levels of network complexity.

4.3. ML Model Type Category Distribution

The distribution of the studies reveals a clear preference for specific ML methodologies in SDN-DDoS detection. Most of the literature (22 studies, 57.9%) utilizes Single ML models, which serve as the baseline for most detection frameworks. Hybrid ML approaches, which integrate multiple architectural paradigms or optimization algorithms like PSO and GA, were identified in 10 studies (26.3%). Ensemble ML techniques, involving the aggregation of predictions from a committee of similar models, were the least frequent, representing six studies (15.8%). While Single ML approaches currently dominate the literature, Hybrid and Ensemble methods are the subject of growing research interest due to their superior detection accuracy in complex traffic scenarios.
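The category shares reported above follow directly from the study counts and can be cross-checked:

```python
# Cross-check of the category distribution over the 38 selected studies.
counts = {"Single ML": 22, "Hybrid ML": 10, "Ensemble ML": 6}
total = sum(counts.values())
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(total, shares)
# -> 38 {'Single ML': 57.9, 'Hybrid ML': 26.3, 'Ensemble ML': 15.8}
```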

4.4. Quality Assessment Result Distribution

The quality assessment was conducted to evaluate the credibility of the included studies. Each of the 38 primary studies was appraised against the five QA criteria detailed in Section 3.6 and Table 6. Based on this evaluation, the majority of the studies demonstrated a high standard of research, with 30 papers (78.9%) classified as “High Quality”. These studies exhibited strong methodological frameworks, comprehensive dataset descriptions, and appropriate applications of machine-learning techniques. Furthermore, six studies (15.8%) were rated as “Medium”, while two studies (5.3%) were classified as “Low” quality. Figure 7 illustrates the distribution of QA levels across all selected papers.

4.5. Research Questions, Results, and Discussion

This section synthesizes and discusses the findings derived from the selected 38 primary studies. The results are analyzed and integrated according to the formulated Research Questions (RQ1–RQ4), revealing both evolutionary trends and persistent challenges in ML-based DDoS detection and mitigation within SDN environments.

4.5.1. RQ1: What Are the Existing ML-Based Approaches to Detect and Mitigate DDoS Attacks Against SDN Networks?

An integrated analysis of the 38 studies reveals a clear evolution in SDN-based DDoS detection logic, moving from single-classifier toward ensemble-driven, optimization-aware intelligence. Early-stage research [21,28,30] established the foundational viability of supervised learning using Decision Trees, SVM, K-Nearest Neighbors (KNN), and Factorization Machines to achieve near-perfect accuracy under controlled SDN conditions. However, as traffic dynamics became increasingly diverse and attack strategies more evasive, later studies reported generalization decay, where standalone models struggled under reflective, low-rate, and multi-vector DDoS scenarios. This shift exposed the limitations of isolated classifiers and motivated the transition toward more adaptive architectures.
In response, approximately 26% of the literature converges on Hybrid Machine-Learning approaches that combine high-precision classifiers with feature optimization or dimensionality-reduction mechanisms. Studies such as [22,39,45] demonstrate that metaheuristic-driven techniques, including GA, Kernel Principal Component Analysis, PSO, Ant Colony Optimization (ACO), and Gray Wolf Optimization (GWO), can mathematically isolate discriminative SDN traffic features while significantly reducing controller overhead. Notably, hybridizations such as PSO–ACO and PSO–Gradient Boosting achieve up to a 44.9% reduction in controller CPU utilization without sacrificing detection accuracy, which consistently remains within the 98–99% range. Despite these gains, the synthesis highlights a trade-off: increasing architectural sophistication amplifies computational pressure on the SDN controller, introducing secondary scalability and resilience concerns.
Beyond hybridization, a smaller but highly impactful subset of studies (≈15%) advances Ensemble-based Intelligence as the primary model for enterprise-grade SDN defence. Optimized Weighted Voting Ensembles (OWVE) [45], online stacking architectures [46], and composite models such as Random Forest + KNN + Bagging XGBoost [27] consistently outperform single and hybrid classifiers. Empirical results show accuracy levels between 99.26% and 99.7% across benchmark datasets such as CIC-DDoS2019 and CAIDA-2007. Specifically, ref. [46] confirms that stacking BernoulliNB, Multilayer Perceptron (MLP), and Stochastic Gradient Descent (SGD) classifiers mitigates individual model bias while maintaining real-time adaptability. Crucially, this ensemble intelligence underpins recent advances in detecting stealthy, non-periodic Low-rate DDoS (LDoS) attacks. Studies [41,46] demonstrate that unsupervised learning mechanisms, particularly Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) combined with skewness-based statistical features, are essential for identifying attack patterns that deliberately mimic legitimate traffic bursts and evade volume-based detectors.
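The weighted-voting principle behind ensembles such as OWVE can be illustrated with a minimal sketch (the classifier names and weights below are hypothetical, not taken from the cited studies): each base model casts a vote scaled by a validation-derived weight, and the label with the highest total weight wins.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine per-classifier labels via weighted voting.

    predictions: dict mapping classifier name -> predicted label
    weights: dict mapping classifier name -> validation-derived weight
    Returns the label with the highest accumulated weight.
    """
    scores = defaultdict(float)
    for name, label in predictions.items():
        scores[label] += weights.get(name, 1.0)
    return max(scores, key=scores.get)

# Hypothetical example: three base classifiers vote on one flow,
# with weights derived from each model's validation F1 score.
preds = {"rf": "attack", "knn": "benign", "xgb": "attack"}
wts = {"rf": 0.40, "knn": 0.25, "xgb": 0.35}
print(weighted_vote(preds, wts))  # "attack" (total weight 0.75 vs 0.25)
```

In an "optimized" variant, the weights themselves would be tuned (e.g., by a metaheuristic) rather than fixed from validation scores.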

4.5.2. RQ2: What Evaluation Metrics, Network Simulators, Hacking Tools, and Experimental Platforms Are Used in Existing Literature Studies?

The synthesis of the 38 studies reveals a highly standardized SDN experimental ecosystem, shaped by priorities of cost efficiency and system realism. Network emulation overwhelmingly dominates over discrete-event simulation, with Mininet emerging as the primary experimental backbone in approximately 88% of the reviewed studies [40,42,44,46,47,48,49,50,51]. Unlike simulators such as NS-3, Mininet enables execution of real Linux kernel, OpenFlow switch, and application-layer code, allowing researchers to observe controller stress, flow-table saturation, and packet-in dynamics under realistic attack conditions. For wireless and IoT-centric SDN scenarios, Mininet-Wi-Fi [47] serves as a critical extension, enabling mobility-aware evaluations absent in traditional wired testbeds.
At the control-plane level, the Ryu controller dominates the literature (~65%), primarily due to its modular Python-based architecture, which facilitates seamless integration with machine-learning libraries such as Scikit-learn and TensorFlow [40,43,46,48]. Alternative controllers, including Floodlight [42] and POX [45], are employed in approximately 15% of the studies, often in comparative analyses to assess resilience under controller overload. Across almost all experiments, the virtualization substrate converges on Ubuntu Linux (versions 18.04 or 20.04), typically hosted on VMware or VirtualBox environments, forming a reproducible SDN laboratory setup.
To evaluate detection robustness, the literature employs a well-defined attack toolset designed to stress both volumetric and stealth-based attack scenarios. Scapy is the most frequently cited tool for crafting protocol-aware, low-rate, or spoofed DDoS traffic [40,42], enabling fine-grained manipulation of packet headers to test entropy-based and statistical detection features. For high-volume flooding attacks, including TCP-SYN, UDP, and ICMP floods, hping3 remains the standard benchmark tool across approximately 45% of the studies [36,38,39,43,44,46,48]. To ensure realistic signal-to-noise ratios, nearly 40% of the frameworks incorporate iperf or D-ITG to generate legitimate background traffic, preventing model overfitting to artificially clean traffic distributions. Supporting tools such as Tcpdump and Wireshark [40] are commonly used for packet capture and feature validation, while recent studies like [47] introduce sFlow-rt for real-time monitoring of bandwidth utilization and flow-table occupancy during mitigation.
While Accuracy, Precision, Recall, and F1 score remain universal baselines reported by 100% of the studies, high-impact research increasingly emphasizes operational viability within the SDN controller. This shift can be categorized into three advanced metric tiers. First, Controller Resource Metrics (~35% of studies) quantify CPU and RAM utilization, directly addressing the “security paradox” by verifying that the ML defence does not consume more resources than the attack it mitigates; for example, ref. [43] reports a 44.9% reduction in controller CPU usage following mitigation. Second, Temporal Efficiency Metrics (~25%) focus on Detection Latency and Flow-Rule Insertion Time, with results across [43,46,47] indicating that state-of-the-art systems must detect attacks in under 1 s and complete mitigation within 1.5–3 s to prevent controller failure. Third, Network Vitality Metrics (~20%), including Packet Drop Rate, Throughput Degradation, Flow Table Occupancy, and “Packet_In” message rate [40,44], measure the quality of service experienced by legitimate users during active defence.
Methodologically, the studies demonstrate strong statistical rigor through extensive use of K-fold cross-validation, often repeated with multiple random seeds to ensure stability [21,41,55,56]. A notable trend toward hybrid validation is observed, wherein models are trained on public benchmark datasets such as CICIDS2017 [22] and subsequently evaluated using real-time traffic generated within Mininet environments. This approach bridges the gap between static offline datasets and dynamic SDN operational behaviour [34,52]. However, the synthesis also exposes a critical training–deployment gap: while approximately 30% of studies leverage GPU-accelerated servers (e.g., Intel Xeon platforms or NVIDIA GPUs) for model training, inference is typically deployed on CPU-bound SDN controllers. This discrepancy highlights an underexplored challenge for future research: the need to evaluate inference latency and resource consumption when complex ML models transition from laboratory conditions to resource-constrained production controllers.
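The shuffled K-fold procedure repeated with multiple seeds can be sketched as a simple index-splitting routine (a simplified illustration, not code from any reviewed study):

```python
import random

def kfold_indices(n_samples, k, seed):
    """Yield (train, test) index lists for one shuffled K-fold split."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)  # seed fixes the shuffle for reproducibility
    fold_size, rem = divmod(n_samples, k)
    start = 0
    for f in range(k):
        size = fold_size + (1 if f < rem else 0)  # spread the remainder
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

# Repeating over several seeds lets researchers report the spread
# (mean and variance) of a metric rather than a single lucky split.
for seed in (0, 1, 2):
    for train, test in kfold_indices(100, 5, seed):
        assert len(train) + len(test) == 100
```

Libraries such as Scikit-learn provide an equivalent via `KFold(shuffle=True, random_state=seed)`; the sketch above only exposes the underlying index logic.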
The most frequently used evaluation metrics are summarized in Figure 8:
  • Accuracy: 100% of papers use this as the primary benchmark.
  • Precision, Recall, and F1 Score: ~95% of papers. These are critical because they account for imbalanced datasets (where normal traffic far outweighs attack traffic).
  • False Alarm Rate (FAR)/False Positive Rate (FPR): ~70% of papers. This is highlighted as a major focus in [40,43] to ensure legitimate users are not blocked.
  • System Overhead (CPU/RAM): ~35% of papers. This is an emerging trend in Q1 papers to prove the ML model does not “crash” the controller it is protecting.
  • Detection/Mitigation Latency: ~25% of papers. Measured in milliseconds, this determines if a system can react before the network goes down.
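The metric definitions behind the percentages above follow directly from confusion-matrix counts. The following sketch uses hypothetical counts for an imbalanced traffic sample (950 benign flows, 50 attack flows) to show why precision, recall, and FAR matter alongside accuracy:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard DDoS-detection metrics from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0    # a.k.a. detection rate
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    far = fp / (fp + tn) if fp + tn else 0.0       # false alarm rate (FPR)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "far": far}

# Hypothetical imbalanced run: accuracy looks high (0.985) even though
# 5 attacks slipped through and 10 benign flows were blocked.
m = detection_metrics(tp=45, fp=10, tn=940, fn=5)
print({k: round(v, 3) for k, v in m.items()})
# accuracy 0.985, precision 0.818, recall 0.9, F1 0.857, FAR 0.011
```

The example illustrates the point made above: under class imbalance, accuracy alone overstates performance, which is why nearly all studies also report precision, recall, and F1, and why ~70% report FAR/FPR.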
As shown in Figure 9, researchers use the following tools to act as the “attacker”:
  • Scapy: ~60% (used for custom, low-rate, or sophisticated spoofing attacks).
  • hping3: ~45% (The standard for high-volume flooding—TCP, UDP, ICMP, as seen in [43,44]).
  • iperf: ~40% (Used to simulate “Benign” or “Normal” background traffic).
  • LOIC/Bonesi: ~10% (Used in older or specific Botnet-simulation studies).

4.5.3. RQ3: What Datasets Are Used to Evaluate and Validate the Existing Approaches and Are There Any Publicly Available Realistic Datasets for DDoS Attacks on SDN Networks?

Approximately 35% of the studies employ publicly available datasets to ensure comparability with prior work. Among these, CICDDoS2019 and CICIDS2017 are the most prevalent, appearing in roughly 30–40% of the papers (e.g., [22,23,24]), owing to their scale and inclusion of diverse volumetric and reflection-based attack vectors such as SYN floods, UDP-Lag, and Web DDoS. However, because these datasets were not captured within SDN environments, they lack critical control-plane features. To address this limitation, more recent studies increasingly adopt InSDN (2020), which is used in approximately 10% of the literature [46,53] and represents the first dataset natively collected within an SDN topology (ONOS/Mininet), incorporating SDN-specific flow statistics such as “Packet_In” events. Older benchmarks such as NSL-KDD have largely fallen out of favour and appear in less than 5% of recent studies [25] due to their inability to reflect modern SDN architectures and attack behaviours.
Despite the availability of public datasets, a slight majority of the literature (~50–55%) relies on custom-generated datasets constructed within Mininet-based SDN testbeds [34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58]. This trend is driven by the need to evaluate vulnerabilities unique to SDN, including control-plane saturation, Ternary Content Addressable Memory (TCAM) exhaustion, and delayed flow-rule installation, phenomena that cannot be inferred from legacy Comma-Separated Values (CSV)-based datasets. Within this group, approximately 40% of studies generate attack traffic using tools such as Scapy and hping3 to emulate low-rate, spoofed, or zero-day DDoS scenarios. Custom datasets further enable advanced feature engineering, including entropy-based indicators [40] and fine-grained flow-level metrics [42], which are absent from traditional benchmarks.
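Entropy-based indicators of the kind used in [40] typically track the Shannon entropy of a header field, such as the destination IP address, over a traffic window; a sharp entropy drop suggests traffic concentrating on a single victim. A minimal sketch, with hypothetical eight-packet windows:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of values."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical windows of destination IPs: dispersed benign traffic
# versus a flood concentrated on one victim address.
benign = ["10.0.0.%d" % i for i in range(1, 9)]   # eight distinct hosts
flood = ["10.0.0.1"] * 7 + ["10.0.0.2"]           # one target dominates
print(round(shannon_entropy(benign), 3))  # 3.0 bits (uniform over 8 hosts)
print(round(shannon_entropy(flood), 3))   # 0.544 bits
```

In practice, a detector would compare per-window entropy against a learned baseline, or feed it to an ML classifier as one feature among many; the traffic values here are purely illustrative.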
A smaller but methodologically rigorous subset of studies (~15%, notably [23,27]) adopts a hybrid validation strategy. In this approach, models are initially trained on high-volume public datasets such as CICDDoS2019 to learn generalized attack patterns and subsequently fine-tuned or cross-validated using topology-specific datasets generated in real-time SDN emulators. This dual-layered methodology is increasingly recognized as best practice, as it demonstrates both global robustness and local operational effectiveness within SDN environments.
The synthesis identifies three datasets as the current gold standards for public benchmarking in SDN DDoS research: CICDDoS2019/CICIDS2017, valued for attack diversity and scale; InSDN (2020), the first SDN-native dataset incorporating controller-level features; and Edge-IIoTset (2022), highlighted in [47] for research at the intersection of SDN, edge computing, and IoT-centric security. More recent datasets, such as HLD-DDoSDN (2024) and LR-HR DDoS, explicitly address the low-rate attack gap that remains underrepresented in legacy benchmarks.
Despite these advances, a critical reproducibility gap persists. Only approximately 30% of custom-generated datasets across the 38 studies are publicly released, with many high-performing models evaluated on proprietary traffic traces that are not made available for independent validation [40,42,43]. A notable exception is [44], which provides a large-scale SDN-specific dataset exceeding 1 million records and incorporating “Packet_In” rates, queue lengths, and controller interaction metrics. This dataset demonstrates a 12.4% improvement in spoofed attack detection over traditional IP/TCP-only feature sets, representing a significant step toward realistic and reproducible SDN-DDoS benchmarking; unfortunately, the dataset itself has not been publicly released. Figure 10 illustrates the distribution of dataset usage across the 38 studies using a pie chart diagram, confirming that custom datasets remain the preferred choice for SDN-specific simulations.

4.5.4. RQ4: What Are the Challenges, Open Issues, and Future Research Directions Related to DDoS Attacks in SDN Networks?

The literature identifies four recurring challenges in current SDN security studies:
  • Data imbalance and limited generalization
  • The reproducibility crisis
  • Reliance on offline datasets for model evaluation
  • Limited effectiveness against unknown or zero-day attacks
ML-based DDoS defense in SDN faces several persistent challenges. Data imbalance and limited generalization remain major issues, as models trained on clearly labeled attack traffic often struggle when confronted with bursty or low-rate attacks, leading to high false-positive rates [41]. Similarly, supervised approaches often fail to detect unknown or zero-day attacks, which reduces their effectiveness against novel traffic patterns. The reproducibility crisis further complicates research; approximately 55% of studies rely on private datasets that lack SDN-specific control-plane features, making cross-study comparison difficult. Finally, the widespread use of offline datasets for evaluation fails to capture real-time network dynamics, thereby limiting operational deployment.
To address these limitations, future research is shifting toward explainable and efficient solutions. This evolution includes implementing Explainable AI (XAI) to provide interpretable reasoning for flow classification and mitigation decisions. Furthermore, deploying hardware-accelerated and edge-based defenses offers scalable solutions by moving ML inference to switches or the data plane, which reduces controller overload. Researchers are also leveraging predictive and pre-emptive analytics to anticipate attacks before they reach peak volume. Finally, developing lightweight, continuously updated ML/DL models is essential for resource-constrained environments, such as Internet of Things (IoT), Mobile Ad-hoc Networks (MANETs), and wireless sensor networks.
Although only quality-assessed studies were selected, limitations were identified in each of the included papers. Table 8 summarizes the limitations of the 38 studies.

4.6. Comparative Synthesis of Detection Effectiveness Across Selected Primary Studies

To provide a final comprehensive performance evaluation summary, Table 9 synthesizes the effectiveness of the included studies. This synthesis focuses on the core performance indicators, specifically accuracy alongside auxiliary metrics such as F1 score, precision, and recall, to show how different machine-learning paradigms respond to DDoS threats. By categorizing the models based on their primary architecture and resulting effectiveness, this table highlights the high-performance benchmark established by recent research, ranging from foundational baselines to highly optimized swarm-intelligence and ensemble-based detection frameworks.

4.7. Limitations of the Review Process

While this systematic literature review follows the PRISMA 2020 guidelines, several inherent limitations in the review process must be acknowledged. Firstly, the search was restricted to four primary digital databases (IEEE Xplore, ACM Digital Library, ScienceDirect, and Google Scholar) and peer-reviewed journals published in the English language. This selection may have excluded relevant studies published in other languages or indexed in regional databases, potentially introducing a degree of publication bias. Secondly, the temporal scope was limited to the most recent 5-year period (2021–2025); while this ensures a focus on the most modern SDN architectures and ML paradigms, it may overlook foundational methodologies or long-term longitudinal trends established in earlier research. Finally, although a multi-reviewer screening process and QA framework were utilized to minimize subjectivity, the qualitative interpretation of study impact and clarity remains partially dependent on the reviewers’ expertise.

5. Conclusions

This systematic review of 38 studies provides a comprehensive understanding of ML-based detection and mitigation of DDoS attacks in SDN environments. The analysis reveals that the field has achieved high accuracy in attack detection, with models such as Random Forest, XGBoost, and stacking ensembles consistently reaching performance levels above 98–99%. While early single-classifier models provided the foundation, the evolution toward hybrid and ensemble approaches reflects the recognition that no single algorithm can reliably handle the dynamic, low-rate, and multi-vector attacks characteristic of modern SDN networks.
Evaluation practices and experimental platforms have become increasingly standardized, with the Mininet emulator and controllers such as Ryu dominating the emulation setups. Tools such as Scapy and hping3 are frequently utilized to provide realistic adversarial traffic. Beyond traditional metrics like accuracy and F1 score, studies now prioritize system-level metrics, including CPU and RAM utilization, detection latency, and mitigation time, reflecting a shift toward operationally meaningful evaluation. However, gaps remain in reproducing real-world conditions, particularly due to the limited availability of SDN-specific public datasets that capture control-plane features that are critical for accurate modelling of sophisticated DDoS attacks in the SDN environment.
A key contribution of this review is the establishment of quality-weighted benchmark studies that move beyond simple performance metrics to evaluate the operational viability of DDoS detection models. By filtering our synthesis through a structured Quality Assessment (QA) framework, we prioritized evidence from “High Quality” studies that demonstrated high experimental rigor and significant impact. This approach allowed us to conclude that while many single-classifier models show high theoretical accuracy, ensemble learning methods, specifically Random Forest, Extra Trees, and XGBoost, provide the most pragmatic balance of high-speed classification and low controller overhead required for production SDN environments. Ultimately, this quality-driven deliberation serves as a strategic roadmap for future research, shifting the focus from academic simulation toward more reproducible and resource-efficient security frameworks.
Dataset selection and feature design continue to shape detection precision. While public benchmarks like CICDDoS2019, CICIDS2017, and InSDN provide valuable reference points, over half of the literature relies on custom datasets to capture SDN-specific vulnerabilities. Hybrid validation strategies that combine public and custom datasets have emerged as a best practice, balancing generalisation with topology-specific accuracy. Nonetheless, the limited public release of these datasets underscores a reproducibility challenge that the field must address to ensure continued progress.
Despite these advances, challenges still arise, such as detection of zero-day and low-rate attacks, dependency on single-controller architectures, and the computational overhead of complex detection models. Future research must therefore emphasize scalable and adaptive architectures, integrating XAI to enhance interpretability, hardware-assisted, or data-plane offloading to reduce controller load, and predictive analytics for proactive defence. Open-access, SDN-native benchmark datasets will be essential to facilitate transparent evaluation and foster community-wide reproducibility.
In summary, while ML has largely solved the classification aspect of DDoS detection in SDN, the critical frontier lies in designing resilient, scalable, and operationally viable security frameworks. Addressing these challenges will be key to transitioning ML-based SDN defences from laboratory prototypes to robust, real-world systems capable of safeguarding high-speed, heterogeneous, and mission-critical networks.
Registration and Protocol: This review was registered at the Open Science Framework (OSF) and publicly available at https://osf.io/4bkhd/ (accessed on 13 February 2026).

Author Contributions

Conceptualization, S.G. and R.K.R.; methodology, S.G. and R.K.R.; software, S.G.; validation, S.G. and R.K.R.; formal analysis, S.G.; writing—original draft preparation, R.K.R.; writing—review and editing, R.K.R.; supervision, R.K.R.; project administration, R.K.R. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Multimedia University.

Data Availability Statement

All data supporting the findings are contained within the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jagadeesh, M.; Kumar, R.D.; Kowsthubha, K.; Rao, V.S.; Rahaman, S.A. ML Based Detection & Classification of High- and Low-Rate DDoS Attacks. Int. J. Sci. Res. Eng. Manag. (IJSREM) 2025, 9, 1–6. [Google Scholar] [CrossRef]
  2. Wang, K.; Fu, Y.; Duan, X.; Liu, T. Detection and mitigation of DDoS attacks based on multi-dimensional characteristics in SDN. Sci. Rep. 2024, 14, 16421. [Google Scholar] [CrossRef] [PubMed]
  3. Mahmoud, H.; Ahmed, N. Comparative Performance Evaluation of Ryu and OpenDaylight SDN Controllers Using Mininet. AlQalam J. Med Appl. Sci. 2026, 9, 64–67. [Google Scholar] [CrossRef]
  4. Alnahari, A.; Ahmad, N.A. How existing machine learning models for DDoS detection differ in performance and accuracy when applied to synthetic versus real-world network traffic datasets. Open Int. J. Inform. 2025, 13, 105–117. [Google Scholar] [CrossRef]
  5. Hussain, M.; Shah, N.; Amin, R.; Alshamrani, S.S.; Alotaibi, A.; Raza, S.M. Software-defined networking: Categories, analysis, and future directions. Sensors 2022, 22, 5551. [Google Scholar] [CrossRef]
  6. Zhang, Z.; Li, H.; Dong, S.; Hu, L. Software defined networking (SDN) research review. In Proceedings of the 2018 International Conference on Mechanical, Electronic, Control and Automation Engineering (MECAE 2018); Atlantis Press: Dordrecht, The Netherlands, 2018; pp. 291–300. [Google Scholar] [CrossRef]
  7. Farooq, M.S.; Riaz, S.; Alvi, A. Security and privacy issues in software-defined networking (SDN): A systematic literature review. Electronics 2023, 12, 3077. [Google Scholar] [CrossRef]
  8. Mardaus, A.; Biernacka, E.; Wójcik, R.; Domżał, J. Open Source Software-Defined Networking Controllers—Operational and Security Issues. Electronics 2024, 13, 2329. [Google Scholar] [CrossRef]
  9. McKeown, N.; Anderson, T.; Balakrishnan, H.; Parulkar, G.; Peterson, L.; Rexford, J.; Shenker, S.; Turner, J. OpenFlow: Enabling innovation in campus networks. ACM SIGCOMM Comput. Commun. Rev. 2008, 38, 69–74. [Google Scholar] [CrossRef]
  10. Alghamdi, A.; Paul, D.J.; Sadgrove, E.J. A RESTful Northbound Interface for Applications in Software Defined Networks. In Proceedings of the 17th International Conference on Web Information Systems and Technologies—WEBIST; SciTePress: Setúbal, Portugal, 2021; pp. 453–459. [Google Scholar] [CrossRef]
  11. Tache, M.D.; Păscuțoiu, O.; Borcoci, E. Optimization algorithms in SDN: Routing, load balancing, and delay optimization. Appl. Sci. 2024, 14, 5967. [Google Scholar] [CrossRef]
  12. Lantz, B.; Heller, B.; McKeown, N. A network in a laptop: Rapid prototyping for software-defined networks. In Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks; Association for Computing Machinery: New York, NY, USA, 2010; pp. 1–6. [Google Scholar] [CrossRef]
  13. Handigol, N.; Heller, B.; Jeyakumar, V.; Lantz, B.; McKeown, N. Reproducible network experiments using container-based emulation. In Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies; Association for Computing Machinery: New York, NY, USA, 2012; pp. 253–264. [Google Scholar] [CrossRef]
  14. Pfaff, B.; Pettit, J.; Koponen, T.; Jackson, E.; Zhou, A.; Rajahalme, J.; Gross, J.; Wang, A.; Stringer, J.; Shelar, P.; et al. The design and implementation of open vSwitch. In Proceedings of the 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 15); USENIX: Berkeley, CA, USA, 2015; pp. 117–130. Available online: https://www.usenix.org/conference/nsdi15/technical-sessions/presentation/pfaff (accessed on 13 February 2026).
  15. Sattar, D.; Matrawy, A. An empirical model of packet processing delay of the Open vSwitch. In 2017 IEEE 25th International Conference on Network Protocols (ICNP); IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar] [CrossRef]
  16. Asadollahi, S.; Goswami, B.; Sameer, M. Ryu controller’s scalability experiment on software defined networks. In 2018 IEEE International Conference on Current Trends in Advanced Computing (ICCTAC); IEEE: Piscataway, NJ, USA, 2018; pp. 1–5. [Google Scholar] [CrossRef]
  17. Arevalo-Herrera, J.; Camargo Mendoza, J.; Martínez Torre, J.I.; Zona-Ortiz, T.; Ramirez, J.M. Assessing SDN Controller Vulnerabilities: A Survey on Attack Typologies, Detection Mechanisms, Controller Selection, and Dataset Application in Machine Learning. Wirel. Pers. Commun. 2025, 140, 739–775. [Google Scholar] [CrossRef]
  18. Ibrahim, A.J.; Répás, S.R.; Bektaş, N. Feature-Optimized Machine Learning Approaches for Enhanced DDoS Attack Detection and Mitigation. Computers 2025, 14, 472. [Google Scholar] [CrossRef]
  19. Bahashwan, A.A.; Anbar, M.; Manickam, S.; Issa, G.; Aladaileh, M.A.; Alabsi, B.A.; Rihan, S.D. HLD-DDoSDN: High and low-rates dataset-based DDoS attacks against SDN. PLoS ONE 2024, 19, e0297548. [Google Scholar] [CrossRef]
  20. Sharafaldin, I.; Lashkari, A.H.; Hakak, S.; Ghorbani, A.A. Developing realistic distributed denial of service (DDoS) attack dataset and taxonomy. In 2019 International Carnahan Conference on Security Technology (ICCST); IEEE: Piscataway, NJ, USA, 2019; pp. 1–8. [Google Scholar] [CrossRef]
  21. Tonkal, Ö.; Polat, H.; Başaran, E.; Cömert, Z.; Kocaoğlu, R. Machine learning approach equipped with neighbourhood component analysis for DDoS attack detection in software-defined networking. Electronics 2021, 10, 1227. [Google Scholar] [CrossRef]
  22. Almaiah, M.; Alrawashdeh, R.; Alkhdour, T.; Al-Ali, R.; Rjoub, G.; Aldahyani, T. Detecting DDoS attacks using machine learning algorithms and feature selection methods. Int. J. Data Netw. Sci. 2024, 8, 2307–2318. [Google Scholar] [CrossRef]
  23. Hassan, A.I.; El Reheem, E.A.; Guirguis, S.K. An entropy and machine learning based approach for DDoS attacks detection in software defined networks. Sci. Rep. 2024, 14, 18159. [Google Scholar] [CrossRef]
  24. Abiramasundari, S.; Ramaswamy, V. Distributed denial-of-service (DDOS) attack detection using supervised machine learning algorithms. Sci. Rep. 2025, 15, 13098. [Google Scholar] [CrossRef]
  25. Nadeem, M.W.; Goh, H.G.; Ponnusamy, V.; Aun, Y. Ddos detection in SDN using machine learning techniques. Comput. Mater. Contin. 2022, 71, 771–789. [Google Scholar] [CrossRef]
  26. Butt, H.A.; Harthy, K.S.; Shah, M.A.; Hussain, M.; Amin, R.; Rehman, M.U. Enhanced DDoS Detection Using Advanced Machine Learning and Ensemble Techniques in Software Defined Networking. Comput. Mater. Contin. 2024, 81, 3003–3031. [Google Scholar] [CrossRef]
  27. Alamri, H.A.; Thayananthan, V. Analysis of machine learning for securing software-defined networking. Procedia Comput. Sci. 2021, 194, 229–236. [Google Scholar] [CrossRef]
  28. Kavitha, M.; Suganthy, M.; Biswas, A.; Srinivsan, R.; Kavitha, R.; Rathesh, A. Machine Learning Techniques for Detecting DDoS Attacks in SDN. In 2022 International Conference on Automation, Computing and Renewable Systems (ICACRS); IEEE: Piscataway, NJ, USA, 2022; pp. 634–638. [Google Scholar] [CrossRef]
  29. Hamarshe, A.; Ashqar, H.I.; Hamarsheh, M. Detection of DDoS attacks in software defined networking using machine learning models. In International Conference on Advances in Computing Research; Springer Nature: Cham, Switzerland, 2023; pp. 640–651. [Google Scholar] [CrossRef]
  30. Srinivas, C.; Avadhani, P.; Roja, P.P. Machine Learning based Ensemble Technique for DDoS Attack Detection in Software-Defined Networking. Int. J. Comput. Appl. 2023, 185, 22–25. [Google Scholar] [CrossRef]
  31. Rajper, A.; Paraman, N.B.; Marsono, M.N.; Rajper, N.J.; Hameed, H.; Usman, M. An efficient three-tier defense mechanism for mitigation of DDoS attack with port connection analysis in SDN. Sci. Rep. 2026, 16, 3510. [Google Scholar] [CrossRef]
  32. Alnatsheh, A.; Alsarhan, A.; Aljaidi, M.; Rafiq, H.; Mansour, K.; Samara, G.; Igried, B.; Al Gumaei, Y.A. Machine Learning-Based Approach for Detecting DDoS Attack in SDN. In 2023 2nd International Engineering Conference on Electrical, Energy, and Artificial Intelligence (EICEEAI); IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar] [CrossRef]
  33. Sharma, A.; Babbar, H. Machine Learning-based Threat Detection for DDoS Prevention in SDN-Controlled IoT Networks. In 2024 5th International Conference for Emerging Technology (INCET); IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar] [CrossRef]
  34. Pasupathi, S.; Kumar, R.; Pavithra, L.K. Proactive DDoS detection: Integrating packet marking, traffic analysis, and machine learning for enhanced network security. Clust. Comput. 2025, 28, 210. [Google Scholar] [CrossRef]
  35. Wang, S.; Balarezo, J.F.; Chavez, K.G.; Al-Hourani, A.; Kandeepan, S.; Asghar, M.R.; Russello, G. Detecting flooding DDoS attacks in software defined networks using supervised learning techniques. Eng. Sci. Technol. Int. J. 2022, 35, 101176. [Google Scholar] [CrossRef]
  36. Scaranti, G.F.; Carvalho, L.F.; Junior, S.B.; Lloret, J.; Proença, M.L., Jr. Unsupervised online anomaly detection in software defined network environments. Expert Syst. Appl. 2022, 191, 116225. [Google Scholar] [CrossRef]
  37. Sangodoyin, A.O.; Akinsolu, M.O.; Pillai, P.; Grout, V. Detection and classification of DDoS flooding attacks on software-defined networks: A case study for the application of machine learning. IEEE Access 2021, 9, 122495–122508. [Google Scholar] [CrossRef]
  38. Sapkota, B.; Ray, A.; Yadav, M.K.; Dawadi, B.R.; Joshi, S.R. Machine Learning-Based Attack Detection and Mitigation with Multi-Controller Placement Optimization over SDN Environment. J. Cybersecur. Priv. 2025, 5, 10. [Google Scholar] [CrossRef]
  39. Ramprasath, J.; Seethalakshmi, V. Improved network monitoring using software-defined networking for DDoS detection and mitigation evaluation. Wirel. Pers. Commun. 2021, 116, 2743–2757. [Google Scholar] [CrossRef]
  40. Swami, R.; Dave, M.; Ranga, V. Detection and analysis of TCP-SYN DDoS attack in software-defined networking. Wirel. Pers. Commun. 2021, 118, 2295–2317. [Google Scholar] [CrossRef]
  41. Yousef, D.; Maala, B.; Skvortsova, M.; Pokamestov, P. Detection of non-periodic low-rate denial of service attacks in software defined networks using machine learning. Int. J. Inf. Technol. 2024, 16, 2161–2175. [Google Scholar] [CrossRef]
  42. Khashab, F.; Moubarak, J.; Feghali, A.; Bassil, C. DDoS attack detection and mitigation in SDN using machine learning. In 2021 IEEE 7th International Conference on Network Softwarization (NetSoft); IEEE: Piscataway, NJ, USA, 2021; pp. 395–401. [Google Scholar] [CrossRef]
  43. Nurwarsito, H.; Nadhif, M.F. DDoS attack early detection and mitigation system on SDN using random forest algorithm and Ryu framework. In 2021 8th International Conference on Computer and Communication Engineering (ICCCE); IEEE: Piscataway, NJ, USA, 2021; pp. 178–183. [Google Scholar] [CrossRef]
  44. Gayantha, N.; Rajapakse, C.; Senanayake, J. Customizable DDoS Attack Data Generation in SDN Environments for Enhanced Machine Learning Detection Models. In 2025 5th International Conference on Advanced Research in Computing (ICARC); IEEE: Piscataway, NJ, USA, 2025; pp. 1–6. [Google Scholar] [CrossRef]
  45. Maheshwari, A.; Mehraj, B.; Khan, M.S.; Idrisi, M.S. An optimized weighted voting based ensemble model for DDoS attack detection and mitigation in SDN environment. Microprocess. Microsyst. 2022, 89, 104412. [Google Scholar] [CrossRef]
  46. Alashhab, A.A.; Zahid, M.S.; Isyaku, B.; Elnour, A.A.; Nagmeldin, W.; Abdelmaboud, A.; Abdullah, T.A.; Maiwada, U.D. Enhancing DDoS attack detection and mitigation in SDN using an ensemble online machine learning model. IEEE Access 2024, 12, 51630–51649. [Google Scholar] [CrossRef]
  47. Belachew, H.M.; Beyene, M.Y.; Desta, A.B.; Alemu, B.T.; Musa, S.S.; Muhammed, A.J. Design a robust DDoS attack detection and mitigation scheme in SDN-edge-IoT by leveraging machine learning. IEEE Access 2025, 13, 10194–10214. [Google Scholar] [CrossRef]
  48. Aljahdali, A.O.; Alsaadi, A.M. DDoS Attack Detection Using Neural Network Based on SDN. Int. J. Eng. Technol. Inform. 2025, 6, 1–13. [Google Scholar] [CrossRef]
  49. Sudar, K.M.; Beulah, M.; Deepalakshmi, P.; Nagaraj, P.; Chinnasamy, P. Detection of Distributed Denial of Service Attacks in SDN using Machine learning techniques. In 2021 International Conference on Computer Communication and Informatics (ICCCI); IEEE: Piscataway, NJ, USA, 2021; pp. 1–5. [Google Scholar] [CrossRef]
  50. Gayantha, N.; Rajapakse, C.; Senanayake, J. Advanced DDoS Attack Detection and Mitigation in Software-Defined Networking (SDN) Environments: An Integrated Machine Learning Approach. In 2025 International Research Conference on Smart Computing and Systems Engineering (SCSE); IEEE: Piscataway, NJ, USA, 2025; pp. 1–6. [Google Scholar] [CrossRef]
  51. Abdullahi Wabi, A.; Idris, I.; Mikail Olaniyi, O.; Joseph, A.; Surajudeen Adebayo, O. Modeling DDoS attacks in SDN and detection using random forest classifier. J. Cyber Secur. Technol. 2024, 8, 229–242. [Google Scholar] [CrossRef]
  52. Ahuja, N.; Singal, G.; Mukhopadhyay, D.; Kumar, N. Automated DDOS attack detection in software defined networking. J. Netw. Comput. Appl. 2021, 187, 103108. [Google Scholar] [CrossRef]
  53. Karthika, P. Simulation of SDN in mininet and detection of DDoS attack using machine learning. Bull. Electr. Eng. Informatics 2023, 12, 1797–1805. [Google Scholar] [CrossRef]
  54. Kannan, C.; Muthusamy, R.; Srinivasan, V.; Chidambaram, V.; Karunakaran, K. Machine learning based detection of DDoS attacks in software defined network. Indones. J. Electr. Eng. Comput. Sci. 2023, 32, 1503–1511. [Google Scholar] [CrossRef]
  55. Hirsi, A.; Audah, L.; Salh, A.; Alhartomi, M.A.; Ahmed, S. Detecting DDoS threats using supervised machine learning for traffic classification in software defined networking. IEEE Access 2024, 12, 166675–166702. [Google Scholar] [CrossRef]
  56. Kandan, M.; Rani, P.S.; Sathiya, T.; Naidu, K.B.; Maheswari, M. Distributed dos attacks detection based on machine learning techniques in software defined networks. Int. J. Intell. Syst. Appl. Eng. 2024, 12, 882–893. [Google Scholar]
  57. Hakeem, A.; Attiah, A. Machine Learning-based Approach for Detecting DDoS Attacks in Software Defined Networks. Int. J. Comput. Appl. 2024, 186, 1–9. [Google Scholar] [CrossRef]
  58. Musumeci, F.; Fidanci, A.C.; Paolucci, F.; Cugini, F.; Tornatore, M. Machine-Learning-Enabled DDoS Attacks Detection in P4 Programmable Networks. J. Netw. Syst. Manag. 2022, 30, 21. [Google Scholar] [CrossRef]
Figure 1. SDN Architecture Diagram.
Figure 2. SDN Virtualization Architecture with attacks happening.
Figure 3. “Packet_In”-Driven DDoS Attacks in SDN Control Planes.
Figure 4. PRISMA Flow Diagram.
Figure 5. Taxonomy of different ML model approaches for detection of DDoS attacks in SDN Environments.
Figure 6. Distribution of 38 selected papers by year of publication.
Figure 7. QA Result Distribution among the 38 papers.
Figure 8. Most used Evaluation Metrics across the 38 studies.
Figure 9. Most used tools for attacking across the 38 studies.
Figure 10. Distribution of dataset usage across the 38 studies.
Table 1. Research Questions and Motivations.

Research Questions (RQ) | Motivations
RQ1: What is the existing ML-based approach to detect DDoS attacks against SDN networks? | • To identify commonly used ML models for DDoS detection in SDN. • To provide a comprehensive overview of ML architectures and approaches in this domain.
RQ2: What evaluation metrics, network simulators, hacking tools, and experimental platforms are used in the existing literature studies? | • To identify and analyze evaluation metrics used in SDN-based ML DDoS detection. • To examine the experimental setups, including network simulators, SDN controllers, and attack tools.
RQ3: What datasets are used to evaluate and validate the existing approaches, and are there any publicly available realistic datasets for DDoS attacks on SDN networks? | • To identify datasets used for evaluating ML-based DDoS detection in SDN networks. • To assess the availability and realism of public SDN-specific DDoS datasets.
RQ4: What are the challenges, open issues, and future research directions related to DDoS attacks in SDN networks? | • To identify key challenges and limitations in existing ML-based DDoS detection approaches for SDN and to explore open research issues and future directions.
Table 2. Search Strings for DDoS Detection SLR.

Digital Library | Search String
IEEE | (“Software Defined Networking” AND “Distributed Denial-of-Service” AND “Machine Learning”) OR (“SDN” AND “DDoS” AND “ML”) AND (“Intrusion Detection System” OR “IDS” AND “Network Security”)
ACM | (“Software Defined Networking” AND “Distributed Denial-of-Service” AND “Machine Learning”) OR (“SDN” AND “DDoS” AND “ML”) AND (“Intrusion Detection System” OR “IDS” AND “Network Security”)
ScienceDirect | (“Software Defined Networking” AND “Distributed Denial-of-Service” AND “Machine Learning”) OR (“SDN” AND “DDoS” AND “ML”) AND (“Intrusion Detection System” OR “IDS” AND “Network Security”)
Google Scholar | (“Software Defined Networking” AND “Distributed Denial-of-Service” AND “Machine Learning”) OR (“SDN” AND “DDoS” AND “ML”) AND (“Intrusion Detection System” OR “IDS” AND “Network Security”)
Table 3. Selection Criteria for the Studies.

Inclusion:
  • Papers published from 2021 to 2025
  • Studies proposing ML-based DDoS detection techniques in SDN
  • Studies providing experimental evaluation or performance analysis
  • Written in English

Exclusion:
  • Papers published before 2021
  • Studies not related to ML-based DDoS detection in SDN, or not related to DDoS or network intrusion detection
  • Papers without any metrics-based validation of results
  • Written in languages other than English
Table 4. Summary of extracted data points and standardization methods.

No. | Data Item | Description of Data Points Extracted | Data Preparation and Standardization (Item 13b) | Relevant RQ
1 | ML Models | Specific algorithms (e.g., SVM, Random Forest, CNN-LSTM, XGBoost). | Standardized into three categories (Single, Ensemble, and Hybrid) to resolve nomenclature differences. | RQ1
2 | Performance Metrics | Accuracy, Precision, Recall, and F1-score values. | All raw decimal values (e.g., 0.99) were converted to percentages (99%). | RQ2
3 | Experimental Tools | Simulation software and hacking tools (e.g., Mininet, Ryu, Scapy). | Grouped by controller type and traffic generator to identify common testing platforms. | RQ2
4 | Datasets | Names of traffic datasets (e.g., KDD99, CICIDS2017). | Local or custom dataset versions were mapped to their original public repository titles for consistency. | RQ3
5 | Open Issues | Qualitative text regarding SDN/ML limitations. | Thematic analysis was used to group text into Future Research Directions and Challenges. | RQ4
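The standardization steps summarized in Table 4 are mechanical enough to sketch in code. The snippet below is our own illustration of the two most common transformations (model-name categorization and decimal-to-percentage conversion); the category mapping and helper names are illustrative assumptions, not artifacts from the review itself.

```python
# Illustrative sketch of the Table 4 standardization steps.
# The CATEGORY mapping below is an assumed example, not the review's full list.
CATEGORY = {
    "SVM": "Single", "KNN": "Single", "DT": "Single",
    "Random Forest": "Ensemble", "XGBoost": "Ensemble",
    "CNN-LSTM": "Hybrid", "SVC+RF": "Hybrid",
}

def standardize_record(model, metric_value):
    """Map a raw extracted record to the standardized form:
    model name -> Single/Ensemble/Hybrid, decimal metric -> percentage."""
    category = CATEGORY.get(model, "Unclassified")
    # Decimal scores such as 0.99 become 99.0; values already reported
    # as percentages (e.g., 98.8) are left unchanged.
    pct = metric_value * 100 if metric_value <= 1 else metric_value
    return {"model": model, "category": category, "accuracy_pct": round(pct, 2)}

print(standardize_record("Random Forest", 0.99))  # category Ensemble, 99.0
print(standardize_record("CNN-LSTM", 98.8))       # category Hybrid, 98.8
```

Normalizing records this way is what makes the cross-study comparisons in Tables 6 and 7 possible despite the papers' differing reporting conventions.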
Table 5. Quality Assessment Criteria Framework.

QA ID | Quality Assessment Criteria | Score
QA1 | Objective Clarity | 0/0.5/1
QA2 | ML Method Suitability | 0/0.5/1
QA3 | Dataset Validity | 0/0.5/1
QA4 | Experimental Rigor | 0/0.5/1
QA5 | Result Analysis & Impact | 0/0.5/1
Table 6. Selected 38 papers with their Key Findings.

Author and Year | Proposed Approach | Experimental Setup | Key Findings
Tonkal, Özgür, et al., 2021 [21] | Single ML: kNN, DT, ANN, SVM with NCA feature selection | SDN-specific DDoS dataset (>100 k records, 22 features), evaluated with Accuracy, Sensitivity, Specificity, F-score; confusion matrix, ROC, 10-fold CV; tools not explicitly stated | Feature selection with NCA + single ML algorithms (especially DT) can achieve very high detection accuracy. Single ML model dominance; private SDN dataset limits generalizability.
Almaiah, M., et al., 2024 [22] | Hybrid ML with swarm-based feature selection (PSO, SSA, GWO) + classifiers (SVM, kNN) | CICIDS2017 dataset, feature extraction via CICFlowMeter; tuned optimization algorithms; metrics: Accuracy, Sensitivity, Precision, Recall, F1 score | Swarm intelligence + ML classifiers achieve near-perfect detection (99.8–100%). Hybrid ML approaches outperform single ML; real-time deployment and scalability remain challenges.
Hassan AI, 2024 [23] | Hybrid sequential: entropy-based statistical detection + K-means clustering | Public datasets: CICIDS2017, CSE-CIC-2018, CICDDoS2019; Python-based; metrics: Accuracy, FPR, TPR, F1 score, MCC | Hybrid statistical + ML clustering achieves near-perfect accuracy and very low FPR; sequential hybrid approaches enable real-time DDoS detection. K-means threshold tuning is critical.
Abiramasundari S, Ramaswamy V, 2025 [24] | PCA-based feature reduction + supervised ML (RF, SVM, KNN, LR, DT) | Public datasets: CICIDS2017, CICIDS2018, CICDDoS2019; preprocessing: normalization, SMOTE; metrics: Accuracy, Precision, Recall, F1 score; confusion matrix, PR curves | PCA-based EDAD improves DDoS detection; RF and KNN consistently perform well. Preprocessing, feature selection, and class balancing enhance ML performance.
Nadeem et al., 2022 [25] | Single ML + optimal feature selection (Filter, Wrapper, Embedded) + ML classifiers (RF, SVM, KNN, NB, DT) | NSL-KDD dataset; SDN controller collects OpenFlow stats; selected 28 features via RFE; metrics: Accuracy, Precision, Recall, Specificity | Feature selection (RFE) + RF yields the highest accuracy (99.97%) in SDN DDoS detection; reduces computational load; optimal feature subset critical.
Butt HA, Harthy KS, 2024 [26] | Hybrid + ensemble ML (RF, XGBoost, KNN, SVM, DT) with dynamic feature selection | DDoS SDN dataset (Kaggle), 80:20 train-test; preprocessing, handling missing values; metrics: Accuracy, Precision, Recall, F1-score, ROC-AUC; runtime analysis | Ensemble methods outperform single ML; dynamic feature selection detects evolving attacks; trade-off between accuracy and computational cost highlighted.
Alamri HA, 2021 [27] | Single ML + ensemble/hybrid ML (SVM, ASVM, BPNN, RF, MLP, J48, REP Tree + combinations, e.g., RF + KNN + Bagging, XGBoost) | CIC-DDoS2019 dataset; 20 features via information gain; metrics: Accuracy, Precision, Recall, F1-score, FPR; Python-based | Ensemble/hybrid ML (XGBoost) achieves the highest accuracy (99.7%) and low FPR; single models perform moderately. Emphasizes zero-day and low-rate attack detection challenges.
Kavitha M, 2022 [28] | Single ML: KNN, LR, DT with feature selection and preprocessing | KDD Cup 99 dataset; 80% train, 20% test; metrics: Accuracy, Recall; Python | DT outperforms KNN and LR; feature engineering is critical; highlights the importance of SDN-specific datasets and adaptation to evolving attacks.
Hamarshe A, Ashqar HI, 2023 [29] | Single ML (RF, DT, SVM, XGBoost) | CICDDoS2019 dataset; preprocessing and feature selection (20 features); metrics: Accuracy, Precision, Recall, F1 score, ROC curves | RF performs best overall (68.9%), though performance varies by attack type; highlights need for hybrid/ensemble models and computational optimization.
Srinivas, 2023 [30] | Single ML (RF, DT, KNN, SVM, LR) + preprocessing and feature selection | Public SDN DDoS dataset (104,345 flows, 23 features); 70–30% train-test-validation; metrics: Accuracy, Precision, Recall, Specificity, F1 score | RF achieves near-perfect accuracy (99.99%); tree-based ensembles are superior; feature engineering and preprocessing are critical; limited attack diversity noted.
Rajper et al., 2026 [31] | Hybrid ML: a three-tier defense combining adaptive CUSUM statistics with an event-triggered Decision Tree (DT) and port connection analysis | Mininet & Ryu controller; IEEE Dataport & Mendeley datasets; top 5 features; evaluated on CPU load, latency, and mitigation accuracy | Achieved 99.99% accuracy. The hybrid approach reduced controller computational overhead and false positives by 87% by only activating the DT classifier during anomalies.
Alnatsheh A, 2023 [32] | Supervised ML: Random Forest (RF), SVM, Naive Bayes (NB) on SDN-specific Kaggle dataset; feature selection | Python implementation; metrics: precision, recall, F1 score, accuracy | RF outperformed SVM and NB (accuracy 98.98%); effective for flooding attacks; highlights the need to distinguish high legitimate traffic from DDoS; the dataset is small-scale and SDN-specific.
Sharma & Babbar, 2024 [33] | Supervised ML: Logistic Regression (LR), SVM, RF, KNN on UNB CIC-IoT 2023 dataset; data cleaning and feature encoding | Python ML implementation; metrics: accuracy, precision, recall, F1 score | LR outperformed other classifiers (accuracy 86%, recall 90%); the simple linear model is effective for IoT-SDN traffic; dataset diversity allows multi-attack evaluation.
Pasupathi, Kumar, & Pavithra, 2025 [34] | Hybrid: packet marking (LPM, RPM, PPM) + ML classifiers (KNN, RF, LR, SVM, XGBoost, DT, NB) | Controlled simulation environments; metrics: accuracy, precision, recall, F1 score, execution time; packet marking evaluated via probability vs. hop distance | KNN and RF had the highest accuracy (~98.4%); the integrated multi-layered approach enables traceback and high detection rates; ML adaptability varies across datasets.
Wang et al., 2022 [35] | Lightweight supervised ML: DA, DT, GLM, KNN, NB, FNN, SVM, Bagging Tree (BT); single feature (Packet_In fluctuations) | Mininet emulator and real SDN testbed (Raspberry Pi + Zodiac switches); MATLAB (R2021a) + Python integration; metrics: accuracy, recall, precision, CPU usage, training/detection time | BT, DT, and KNN were top-performing; high accuracy (>99% simulation, >90% real-time); single-feature approach reduces computation; only volumetric DDoS detected.
Scaranti et al., 2022 [36] | Unsupervised online ML: DenStream clustering using entropy features (source/destination IPs and ports) | Mininet + Floodlight SDN controller; traffic simulation via Scapy/hping3; metrics: accuracy, precision, recall, F-measure, false-positive rate, detection latency | DenStream outperformed one-class HS-Trees; effective for DDoS and portscan; handles online data and concept drift; no public SDN-specific datasets used.
Sangodoyin et al., 2021 [37] | Supervised ML: CART, QDA, GNB, k-NN on USDN custom dataset; network metrics (throughput, jitter, response time) | Mininet emulator (tree-based SDN topology); Floodlight controller; attack tools: LOIC, iperf; metrics: accuracy, training/prediction speed, robustness, ROC | CART has the highest accuracy (98%) and the fastest training and prediction; classic low-complexity ML methods are effective; the dataset is small; emphasizes the need for adaptive, large-scale SDN datasets.
Sapkota et al., 2025 [38] | Centralized ML-based IDS for multi-controller SDN; XGBoost classifier; flow-based detection; threshold-based mitigation | Mininet 2.3.0.dev6; multi-controller SDN; Dell laptop (i5, 16 GB); attack simulation via hping3/Scapy | XGBoost achieved 98.5% accuracy, 97% precision/recall; near-real-time mitigation (70 ms); multi-controller placement improves load balancing; custom dataset not public.
Ramprasath & Seethalakshmi, 2021 [39] | Three-stage SDN DDoS framework: data accumulation → PSO-ACO clustering → multinomial regression; policy-based mitigation | Mininet tree topology; POX controller; TCP/UDP/ICMP attacks via hping3; simulation 100–1000 flows | Hybrid metaheuristic + ML approach improves accuracy, recall, and F1 vs. MLP/NB; flexible mitigation; dataset custom, not public; high computational cost.
Swami et al., 2021 [40] | ML-based IDS for TCP-SYN flood detection; RF, DT, AdaBoost, LR, MLP classifiers; entropy-based features | Mininet + Ryu controller; VM servers for training; Scapy/tcpdump/Wireshark for traffic | High detection accuracy (99.97–99.99%), zero false alarms; controller CPU overloaded under spoofed attacks; dataset custom, not public.
Yousef et al., 2024 [41] | ML-based LDoS detection: Logistic Regression, SVM, BIRCH; real-time traffic stats | Mininet: 5 hosts, 3 switches, Ryu controller; iperf and tcpdump; k-fold CV | Detects traditional and non-periodic LDoS in ~1 s; high accuracy; dataset custom, not public; feature ambiguity and scalability remain challenges.
Khashab et al., 2021 [42] | Self-healing SDN framework; ML for detection + automatic mitigation; feature extension (pkt_size, same_host, same_host_port); RF best | Mininet + Floodlight controller; VMware Ubuntu VMs; Scapy for TCP floods; firewall API for mitigation | RF with extended features improves detection accuracy and responsiveness; real-time mitigation is effective; the dataset is custom, not public; IP spoofing limitation noted.
Nurwarsito & Nadhif, 2021 [43] | Random-Forest-based detection from SDN flow entries; mitigation via dynamic flow rules | Mininet + Ryu controller; hping3 for attacks; iperf/curl/ping for normal traffic | Detection accuracy 98.38%, false positives 1.2%; mitigated ~15 k packets; CPU usage reduced 44.9%; dataset custom, not public; ICMP detection slightly lower.
Gayantha et al., 2025 [44] | Random Forest evaluated on SDN-specific dataset; focus on dataset generation | Mininet + Ryu controller; VirtualBox VMs; TCP/UDP/ICMP attacks via hping3, normal traffic via ping/iperf/curl | High accuracy (98.60%), precision/recall 0.99; dataset > 1 M records; emphasizes SDN-specific features; mitigation not addressed; realism and scalability remain challenges.
Maheshwari et al., 2022 [45] | Optimized Weighted Voting Ensemble (OWVE) using multiple ML classifiers; dynamic fitness function for ensemble weighting; real-time mitigation via POX | Mininet SDN testbed: 108 hosts, 10 switches, 3 controllers, 8 subnets; DDoS simulated via datasets | High accuracy (99.36–99.41%); effective real-time mitigation; publicly available datasets (CIC-DDoS2019, CAIDA-2007); addresses false negatives and ensemble optimization; scalable framework.
Alashhab et al., 2024 [46] | Online ensemble ML (BernoulliNB, Passive-Aggressive, SGD, MLP) via stacking for SDN DDoS detection and mitigation | Mininet + Ryu controller; VirtualBox Ubuntu 20.04; MiniEdit topologies; traffic via iperf, Scapy, hping3, ping; evaluation: accuracy, precision, recall, F1, CPU, packet drop | Ensemble OML models achieve 99.26% accuracy and 99.62% recall; adapt to evolving SDN traffic with low false positives; highlights need for real-time adaptation, low-rate attack detection, and dataset diversity.
Belachew et al., 2025 [47] | Supervised and ensemble ML (KNN, RF, FFNN, XGBoost) with edge-based deployment for low-latency detection | Mininet-WiFi; Google Colab & local Ubuntu desktop; real-time traffic monitoring via sFlow-RT; synthetic DDoS | XGBoost > 99.99% accuracy; edge deployment reduces controller load; single ML models limited by computational cost; emphasizes real-time detection and IoT-specific datasets.
Aljahdali & Alsaadi, 2025 [48] | Neural-network-based IDS (Keras) vs. classical ML (XGBoost, SOM) for SDN DDoS detection | Mininet + Ryu controller; MiniEdit topologies; Ubuntu VMs; TCP/UDP flood attacks via hping3; Python/Keras | Neural network achieved 99.74% testing accuracy (CIC-DDoS2019); superior to XGBoost (88%) and SOM (57%); controller-enforced mitigation is effective; ensemble ML remains competitive for edge devices.
Sudar et al., 2021 [49] | Lightweight ML: SVM (linear kernel) and Decision Tree for DDoS detection with dynamic SDN mitigation | Mininet (100 hosts, 9 switches, 3 controllers); Python + scikit-learn; SYN flood attacks; KDD99 dataset | SVM outperforms DT (~80% precision/recall); low computational overhead; effective for moderate SDN sizes; limited generalization to modern attacks.
Gayantha et al., 2025 [50] | Feature-rich ML framework: RF, SVM, CNN, GB; feature engineering (flow counts, unique sources, packet/byte rates, SYN flags) | Mininet + Ryu controller; spine-leaf topology (18 hosts); synthetic benign and malicious traffic; evaluation: accuracy, precision, recall, F1, ROC-AUC | RF achieved 95.3% accuracy; engineered features enhance detection; real-time mitigation via controller; limitations include computational overhead and reliance on a synthetic dataset.
Abdullahi et al., 2024 [51] | Random Forest (RF) classifier with engineered flow-level features (NFE, SIP, avg flow packets, duration, bytes) | Mininet emulation, Ryu controller, 1 switch & 5 hosts; attacks: TCP SYN/UDP/ICMP flood; traffic captured for 3600 s | RF achieved 96.3% accuracy, 96.45% precision. Flow-level features are effective for early detection; a small-scale synthetic setup limits generalization.
Ahuja et al., 2021 [52] | Hybrid ML: SVC + RF using SDN-specific features (Packet_In, flow duration, port bandwidth, packet rate) | Mininet & Ryu, Windows 10 host with Ubuntu VM; attacks: TCP/UDP/ICMP; traffic via hping3 & mgen | Hybrid SVC-RF model achieved 98.8% accuracy, better than single classifiers; robust across protocols; the dataset is publicly available.
Karthika, 2023 [53] | ML models (SVM, Naïve Bayes, MLP) on OpenFlow port statistics for TCP-SYN flood detection | Mininet + Ryu, 4 hosts and 3 switches; traffic via iperf/ping & synattack.py; Ubuntu VM | MLP achieved 99.75% accuracy; real-time detection and mitigation are feasible; highlights centralized SDN control for proactive DDoS detection.
Kannan et al., 2023 [54] | Ensemble ML (Extra Trees, Random Forest) + decision tree for real-time DDoS detection and mitigation | Mininet + Ryu, 3 switches, 4 hosts, 2 servers; attacks via SYN flood scripts | Extra Trees classifier achieved 100% accuracy and precision; integrates ML with SDN flow control; highlights reproducibility and the need for scalable, multi-vector mitigation.
Hirsi et al., 2024 [55] | Supervised, ensemble, and hybrid ML (RF, SVM, KNN, XGBoost, LR) with controller-based mitigation | Mininet + Ryu; attacks via hping3, benign traffic via MGEN; k-fold CV | Ensemble and feature-selected models outperformed single ML; RF achieved ~98–99% accuracy; real-world validation limited; emphasizes custom SDN datasets.
Kandan et al., 2024 [56] | ANN with feature selection (Relief, SFFS, Lasso) for DDoS detection | SDN testbed with OpenFlow, Open vSwitch, sFlow, InfluxDB; attacks via hping3; 10-fold CV | ANN + FS achieved >95% accuracy; reduces controller workload; hybrid ML + FS approach is effective and scalable.
Hakeem & Attiah, 2024 [57] | Logistic Regression (DDoSDetect) for multi-protocol flooding attacks | Mininet + Ryu, 4 switches and 8 hosts; traffic via hping3 & JupyterLab scripts | LR achieved 97.98% accuracy, 2.08% error; feature engineering and controlled SDN simulation are crucial; potential for hypermodel development.
Musumeci et al., 2022 [58] | ML-assisted DAD using RF, KNN, SVM, and ANN with P4-enabled switches for data-plane detection | P4 switches (BMv2), Spirent N4U traffic generator, Python ML, multi-core CPUs | Data-plane ML reduced latency to microseconds; RF and SVM had the best trade-off; correlated DAD improved detection of low-rate attacks; enables distributed, real-time SDN DDoS detection.
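Nearly every study summarized in Table 6 reports some subset of accuracy, precision, recall, and F1 score. As a brief self-contained sketch (our own illustration, not code from any reviewed paper), all four derive from the same confusion-matrix counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the four metrics most reviewed studies report, from
    confusion-matrix counts (true/false positives and negatives).
    Illustrative helper; not taken from any specific reviewed paper."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical example: 980 attack flows caught, 20 missed,
# 15 benign flows misflagged, 985 benign flows correctly passed.
m = classification_metrics(tp=980, fp=15, tn=985, fn=20)
print({k: round(v, 4) for k, v in m.items()})
```

Because DDoS datasets are often heavily imbalanced, precision, recall, and F1 are more informative than accuracy alone, which helps explain why most of the 38 studies report all four.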
Table 7. Quality Assessment Criteria for the selected 38 papers.

Paper Cite Number | QA1 | QA2 | QA3 | QA4 | QA5 | Total (5) | Quality Level
[21] | 1 | 1 | 0.5 | 1 | 0.5 | 4 | High
[22] | 1 | 1 | 0.5 | 1 | 0.5 | 4 | High
[23] | 1 | 1 | 1 | 1 | 1 | 5 | High
[24] | 0 | 1 | 1 | 0 | 1 | 3 | Medium
[25] | 1 | 1 | 1 | 0.5 | 0.5 | 4 | High
[26] | 1 | 1 | 1 | 0.5 | 1 | 4.5 | High
[27] | 1 | 1 | 1 | 0.5 | 1 | 4.5 | High
[28] | 0.5 | 1 | 1 | 0.5 | 0.5 | 3.5 | Medium
[29] | 1 | 1 | 1 | 0.5 | 0.5 | 4 | High
[30] | 0.5 | 1 | 1 | 0.5 | 0.5 | 3.5 | Medium
[31] | 1 | 1 | 1 | 1 | 1 | 5 | High
[32] | 0.5 | 1 | 1 | 0.5 | 0.5 | 3.5 | Medium
[33] | 0.5 | 1 | 1 | 0.5 | 0.5 | 3.5 | Medium
[34] | 1 | 1 | 1 | 0.5 | 1 | 4.5 | High
[35] | 1 | 1 | 1 | 1 | 1 | 5 | High
[36] | 1 | 1 | 1 | 1 | 1 | 5 | High
[37] | 1 | 1 | 1 | 1 | 1 | 5 | High
[38] | 1 | 1 | 1 | 1 | 0.5 | 4.5 | High
[39] | 1 | 0 | 0 | 0 | 0.5 | 1.5 | Low
[40] | 1 | 1 | 1 | 1 | 0.5 | 4.5 | High
[41] | 1 | 1 | 1 | 1 | 1 | 5 | High
[42] | 1 | 1 | 1 | 1 | 1 | 5 | High
[43] | 1 | 1 | 1 | 1 | 1 | 5 | High
[44] | 0.5 | 0.5 | 0.5 | 0 | 0.5 | 2 | Low
[45] | 1 | 1 | 1 | 1 | 1 | 5 | High
[46] | 0.5 | 0.5 | 1 | 0.5 | 0.5 | 3 | Medium
[47] | 1 | 1 | 1 | 1 | 1 | 5 | High
[48] | 1 | 1 | 1 | 1 | 1 | 5 | High
[49] | 1 | 1 | 1 | 1 | 1 | 5 | High
[50] | 1 | 1 | 1 | 1 | 1 | 5 | High
[51] | 1 | 1 | 1 | 1 | 1 | 5 | High
[52] | 1 | 1 | 1 | 1 | 1 | 5 | High
[53] | 1 | 1 | 1 | 1 | 0.5 | 4.5 | High
[54] | 1 | 1 | 1 | 1 | 0.5 | 4.5 | High
[55] | 1 | 1 | 1 | 1 | 1 | 5 | High
[56] | 1 | 1 | 1 | 1 | 0.5 | 4.5 | High
[57] | 1 | 1 | 1 | 1 | 1 | 5 | High
[58] | 1 | 1 | 1 | 1 | 1 | 5 | High
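The quality levels in Table 7 follow directly from the five criterion scores of Table 5: each criterion contributes 0, 0.5, or 1, and the total maps to a band. The scored rows suggest that totals of 4 to 5 are High, 3 to 3.5 are Medium, and 2 or below are Low; these cut-offs are inferred from the table rather than stated explicitly, so the sketch below should be read as an assumption-labeled illustration.

```python
def quality_level(scores):
    """Map five QA criterion scores (each 0, 0.5, or 1) to the
    High/Medium/Low bands observed in Table 7. The band cut-offs
    are inferred from the scored rows, not stated in the review."""
    assert len(scores) == 5 and all(s in (0, 0.5, 1) for s in scores)
    total = sum(scores)
    if total >= 4:
        return total, "High"
    if total >= 3:
        return total, "Medium"
    return total, "Low"

print(quality_level([1, 1, 0.5, 1, 0.5]))  # paper [21]: total 4, High
print(quality_level([1, 0, 0, 0, 0.5]))    # paper [39]: total 1.5, Low
```

Applying this rule to every row reproduces the 30 High, 6 Medium, and 2 Low classifications reported in the table.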
Table 8. Limitations table across the selected 38 studies.
Table 8. Limitations table across the selected 38 studies.
Author & YearPrimary Limitation CriteriaLimitation
Tonkal, Özgür, et al.
2021
[21]
RelevanceWhile NCA optimizes feature relevance, the study identifies a Generalization Gap. Models trained on specific SDN datasets (like the Bennett University one) may achieve “perfect” accuracy (100% for DT) but fail when applied to different SDN topologies or more diverse, non-simulated attack patterns.
Almaiah, M., et al., 2024
[22]
OptimizationWhile swarm intelligence yields near 100% accuracy, the study highlights Computational Convergence Delay. These meta-heuristic algorithms require multiple iterations to “converge” on the best features, which may introduce a processing lag that hinders real-time response in high-velocity SDN traffic.
Hassan AI 2024
[23]
ParameterizationWhile the two-stage pipeline reduces constant CPU load, the study identifies Threshold Rigidity. The system’s success depends on the manual tuning of entropy thresholds and clustering deltas, which may require frequent recalibration as legitimate network traffic volumes change.
Abiramasundari S, Ramaswamy V 2025 [24]DimensionalityWhile PCA reduces features, the process of “Principal Component” calculation can lose the interpretability of specific network headers, making it harder for network admins to understand why a flow was flagged as a DDoS.
Nadeem et al., 2022
[25]
RedundancyWhile it successfully reduces 41 features to 28, the paper relies on the NSL-KDD dataset, which is often criticized in modern SDN research for being outdated and not representing the “redundancy” patterns of modern, high-speed encrypted traffic.
Butt HA, Harthy KS
2024
[26]
AdaptabilityWhile it handles dynamic features, the computational cost of models like SVM and KNN (runtime > 1700 s) makes them non-adaptable for real-time mitigation, leaving a gap between theoretical accuracy and practical deployment.
Alamri HA 2021
[27]
RobustnessWhile achieving near-perfect metrics, the study identifies a “Sliding Window” problem—where choosing the wrong time interval for feature collection can lead to delayed detection or a crash in “large-scale” network overhead.
Kavitha M 2022
[28]
Parsimony
(Simple)
While using only 5 features reduces overhead, it risks Over-simplification. By relying on the KDD Cup 99 dataset, the “best 5 features” may not be enough to detect modern, complex DDoS attacks that weren’t present in 1999.
Hamarshe A, AshqarHI, 2023
[29]
GranularityWhile the model is highly realistic (removing timestamps to avoid “cheating”), it suffers from Performance Variance. High accuracy in one attack type (Portmap) masks poor detection in others (DrDoS_SNMP), creating a “blind spot” in the security framework.
Srinivas 2023
[30]
GeneralizationAchieves near-perfect accuracy (99.99%) by excluding IPs but faces Dataset Bias. The focus on common protocols (TCP/UDP/ICMP) means the model’s high performance may not generalize to rarer or more sophisticated protocol-specific DDoS attacks.
Rajper et al., 2026
[31]
Mitigation
Granularity
Traditional techniques like rate-limiting lack targeted precision, causing pervasive blocking of legitimate traffic. Additionally, reliance on full attack-path tracing creates response delays that allow networks to be compromised before mitigation begins.
Alnatsheh A 2023
[32]
Traffic Classification & OverfittingCurrent systems struggle to distinguish malicious flooding from legitimate high-volume traffic. Additionally, redundant dataset features cause overfitting, which reduces the model’s effectiveness in real-world scenarios.
Sharma & Babbar, 2024 [33]HeterogeneityWhile effectively identifying IoT-specific attacks, the study reports a relatively low Accuracy Ceiling (86%). This suggests that while linear models are better for this specific data distribution, they still struggle to fully capture the complex, multi-vector nature of modern IoT botnets.
Pasupathi, Kumar & Pavithra, 2025
[34]
TraceabilityWhile packet marking enhances traceback, it introduces Computational Overhead and “backtracking” risks in high-traffic environments. Furthermore, the reliance on older datasets (CAIDA 2007) may not reflect modern SDN header structures.
Wang et al., 2022
[35]
Light-weightnessWhile using a single feature (Packet_In) minimizes overhead, it creates a Single Point of Failure. The model is blind to non-volumetric or “low-and-slow” attacks that do not cause significant Packet_In fluctuations.
Scaranti et al., 2022
[36]
UnlabelednessWhile unsupervised learning solves the labeling problem, the model struggles with Sensitivity. Low-intensity or overlapping attacks (like portscans) don’t distort entropy enough to be detected, causing them to be “masked” by normal traffic.
Sangodoyin et al., 2021 [37]
Statistical: While providing high scientific rigor through 50 test runs, the study is limited by the Dataset Scale. The 3600 observations are statistically significant in a lab but may fail to represent the “long tail” of anomalies found in real-world petabyte-scale SDN traffic.
Sapkota et al., 2025 [38]
Centralization: While multi-controller placement improves load balancing, the centralized IDS remains a potential single point of failure and a bottleneck for high-speed synchronization between distributed controller domains.
Ramprasath & Seethalakshmi, 2021 [39]
Metaheuristics: While PSO-ACO improves clustering accuracy, it introduces high Computational Complexity. The study notes that optimizing the overhead of these metaheuristics for real-time traffic remains an open challenge, as they can be slower than pure statistical models.
Swami et al., 2021 [40]
Randomness: While achieving near-perfect accuracy (99.99%) through entropy features, the study is limited by its Topology Scale. Testing on only four hosts may overstate the “Zero False Alarm Rate,” as real-world networks contain much more “natural” randomness that could be misclassified as an attack.
Yousef et al., 2024 [41]
Stealthiness: While effective at detecting LDoS, the model relies on Statistical Windowing (0.1-s intervals). If the attack pulse is even shorter or more irregular, it may lead to Feature Ambiguity, where the statistical signatures of the attack overlap almost perfectly with legitimate user behavior.
Khashab et al., 2021 [42]
Self-healing: While the system automates recovery, it faces a Mitigation Paradox. If an attacker uses sophisticated IP spoofing, the “self-healing” module may accidentally block legitimate paths (over-healing), potentially leading to a self-inflicted Denial of Service.
Nurwarsito & Nadhif, 2021 [43]
Concurrence: While effective at reducing CPU load, the study identifies a Protocol Variance weakness. The concurrent model has lower detection accuracy for ICMP flooding compared to TCP/UDP, suggesting that the “concurrence” of varied protocol flows still presents a classification hurdle.
Gayantha et al., 2025 [44]
Fidelity: While the dataset offers high-fidelity SDN features, the study acknowledges a Simulation Gap. Because the data is generated in a virtualized Mininet/VirtualBox environment, it may still fail to capture the physical hardware latencies and “jitter” found in real-world enterprise hardware deployments.
Maheshwari et al., 2022 [45]
Consensus: While the ensemble provides high stability and accuracy (99.4%), the Optimization Overhead is a concern. Managing a consensus between six different models in real time can be computationally expensive, potentially delaying mitigation in high-speed SDN environments.
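The consensus mechanism described for [45] resembles a hard-voting ensemble. The sketch below uses three illustrative base models rather than the study’s six, and synthetic data in place of SDN flow features:

```python
# Hedged sketch of majority-vote consensus; models and data are
# illustrative, not the reviewed study's configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for labelled SDN flow records (benign vs. attack)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    voting="hard",  # each model casts one vote; the majority label wins
)
ensemble.fit(X_tr, y_tr)
print(f"ensemble accuracy: {ensemble.score(X_te, y_te):.2f}")
```

Note that every prediction must run all base models, which is exactly the real-time consensus overhead the study flags.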
Alashhab et al., 2024 [46]
Adaptation: While the OML ensemble adapts well to streaming data, it faces Deployment Latency. The study notes that maintaining this level of real-time adaptation in high-density or heterogeneous SDN networks, beyond controlled Mininet simulation, remains a major scalability hurdle.
Belachew et al., 2025 [47]
Edge: While edge deployment reduces central latency, it introduces Inter-node Security risks. The study identifies a gap in the security of communication between the edge server and the SDN controller, suggesting a need for TLS or blockchain to prevent tampering.
Aljahdali & Alsaadi, 2025 [48]
High Inference Cost: While the Neural Network offers state-of-the-art accuracy, its Inference Cost is high. The study identifies that high-dimensional deep learning models may strain SDN controllers, necessitating future research into distributed processing to prevent the IDS itself from becoming a bottleneck.
Sudar et al., 2021 [49]
Complexity: While prioritizing low computational complexity, the model suffers from Temporal Obsolescence. Using the KDD99 dataset means the model is trained on traffic patterns from over 25 years ago, which lacks the complexity of modern multi-vector or encrypted DDoS attacks.
Gayantha et al., 2025 [50]
Interpretability: While Random Forest provides high interpretability, the study faces Processing Overhead. Real-time extraction of complex engineered features (like source counts and flag rates) creates a computational tax that may hinder the framework’s scalability in high-speed, multi-gigabit SDN environments.
Abdullahi et al., 2024 [51]
Indicativeness: While isolating high-value indicators, the study suffers from Topological Minimalism. Testing on a single-switch, five-host setup ensures high accuracy (96.3%) but risks “over-fitting” the indicators to a specific, non-complex path that does not represent real-world network congestion.
Ahuja et al., 2021 [52]
Hybridization: While the SVC-RF hybrid increases precision, it introduces Dimensionality Complexity. Expanding the feature set to 67 variables requires heavy preprocessing (PCA + t-SNE), which may introduce processing latency during real-time deployment in high-speed live SDN networks.
Karthika, 2023 [53]
Proactivity: While the MLP model reaches near-perfect accuracy (99.75%), the study identifies Controller Fragility. High-volume floods can still overwhelm the centralized control plane before the proactive mitigation rules take full effect, highlighting a race condition between attack speed and processing latency.
Kannan et al., 2023 [54]
Convergence: While achieving “perfect” 100% metrics, the study faces a Realism Gap. Such flawless results are often a symptom of “Overfitting” to a specific, low-complexity Mininet topology (three switches, four hosts), which may not converge as successfully when exposed to the unpredictable noise of real-world internet traffic.
Hirsi et al., 2024 [55]
Robustness: While feature selection (Chi-square/RFE) improves accuracy, the study notes a Temporal Dependency gap. It suggests that future work must move toward LSTMs or CNNs, as current robust supervised models may still struggle to capture the time-based evolution of multi-vector amplification attacks.
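The two selection techniques named for [55] can be sketched side by side with scikit-learn; the dataset, feature counts, and estimator below are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for a labelled flow-feature matrix
X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=5, random_state=0)

# Chi-square scoring requires non-negative inputs, hence MinMax scaling;
# SelectKBest keeps the k highest-scoring features.
X_pos = MinMaxScaler().fit_transform(X)
chi_mask = SelectKBest(chi2, k=10).fit(X_pos, y).get_support()

# RFE recursively drops the weakest features as ranked by the estimator
rfe_mask = RFE(RandomForestClassifier(n_estimators=30, random_state=0),
               n_features_to_select=10).fit(X, y).get_support()

print("chi2 kept:", chi_mask.sum(), "| RFE kept:", rfe_mask.sum())
```

Both methods score features on a static snapshot of the data, which is why they leave the temporal evolution of an attack unmodelled.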
Kandan et al., 2024 [56]
Selection: While the triple-category feature selection optimizes the ANN, the study identifies a Structural Single-Point Failure. Even with optimized features, a single-controller topology remains vulnerable to flow-table saturation, necessitating a move toward the multi-controller hierarchical architectures suggested in the future directions.
Hakeem & Attiah, 2024 [57]
Calibration: While the LR model is well-calibrated for accuracy (97.98%), the study identifies a Mitigation Absence. The framework detects attacks but lacks an automated response mechanism, relying on future research for a “hyper model” that can integrate automatic mitigation.
Musumeci et al., 2022 [58]
Offloading: While offloading detection to P4 switches achieves microsecond latency, it introduces Hardware Constraint Limits. P4 switches have limited memory (SRAM/TCAM) for storing ML models and traffic metadata, potentially limiting the complexity of the “Correlated DAD” in massive network scales.
Table 9. Synthesis of Detection Effectiveness.
| Paper ID | Primary Model(s) | Accuracy (%) | Other Metric (F1/Prec/Rec) | Status/Effect |
|---|---|---|---|---|
| [21] | Decision Tree (NCA) | 100 | 100% Precision | Perfect (Simulated) |
| [22] | PSO-SVM/GWO-KNN | 99.9 | >99.8% F1 | Swarm Optimized |
| [23] | Entropy + K-Means | 99.9 | 0.004% FPR | Hierarchical |
| [24] | SVM | ~95.0 | N/A | Baseline |
| [25,26,27,28,29,30,31,32,33,34,35,36] | Various (RF, ANN, KNN, Hybrid) | 92–98 | Mixed | Foundational |
| [37] | Entropy-Based | ~96.2 | High Recall | Statistical |
| [38] | Ensemble (Voting) | 98.1 | 97.5% F1 | Cooperative |
| [39] | Genetic Algorithm + ANN | 97.8 | 96.9% Precision | Bio-inspired |
| [40] | Deep Neural Network | 98.5 | 98.2% F1 | Layered |
| [41] | Change-Point Detection | ~94.0 | Low FPR | Detection Rate |
| [42] | Random Forest | 96.8 | 95.4% Recall | Flow-based |
| [43] | Naive Bayes | ~88.5 | 87.0% F1 | Lightweight |
| [44] | MLP/CNN | 99.1 | 99.0% Precision | High Performance |
| [45] | Ant Colony (ACO) | 97.3 | 96.5% Recall | Optimized |
| [46] | XGBoost/CatBoost | 98.9 | 98.7% F1 | Advanced Ensemble |
| [47] | Edge-ML (LightGBM) | 97.5 | 97.2% Precision | Decentralized |
| [48] | CNN-LSTM | 99.4 | 99.2% F1 | Spatio-Temporal |
| [49] | Decision Tree | 96.5 | 95.9% F1 | Low-Complexity |
| [50] | Random Forest | 95.3 | 95.3% F1 | Interpretable |
| [51] | Random Forest | 96.3 | 96.4% Precision | Indicative |
| [52] | Hybrid (SVC + RF) | 98.8 | Low False Alarm | Hybridized |
| [53] | MLP | 99.7 | 99.5% Precision | Proactive |
| [54] | Extra Tree Classifier | 100 | 100% F1 | Converged |
| [55] | Random Forest (RFE) | 98.9 | 98.6% Recall | Robust |
| [56] | ANN (Filter/Wrapper) | >95.0 | High Precision | Selective |
| [57] | Logistic Regression | 97.9 | 97.2% F1 | Calibrated |
| [58] | RF/SVM (P4-switch) | 99.0 | Low Latency | Offloaded |
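For reference, the headline metrics reported throughout Table 9 derive from the binary confusion matrix as follows (the counts in the example are illustrative, not from any reviewed study):

```python
def detection_metrics(tp, fp, fn, tn):
    """Standard IDS evaluation metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also called the detection rate
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)             # false positive (false alarm) rate
    return accuracy, precision, recall, f1, fpr

acc, prec, rec, f1, fpr = detection_metrics(tp=95, fp=5, fn=5, tn=95)
print(acc, prec, rec, f1, fpr)  # 0.95 across the board, FPR 0.05
```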

Share and Cite

MDPI and ACS Style

Ganeshan, S.; Ramasamy, R.K. A Systematic Review of Machine-Learning-Based Detection of DDoS Attacks in Software-Defined Networks. Future Internet 2026, 18, 109. https://doi.org/10.3390/fi18020109
