
Leveraging Software-Defined Networking for Secure and Resilient Real-Time Power Sharing in Multi-Microgrid Systems

Energy Systems Research Laboratory, Department of Electrical and Computer Engineering, Florida International University, Miami, FL 33174, USA
* Author to whom correspondence should be addressed.
Electronics 2025, 14(22), 4518; https://doi.org/10.3390/electronics14224518
Submission received: 25 October 2025 / Revised: 13 November 2025 / Accepted: 16 November 2025 / Published: 19 November 2025
(This article belongs to the Special Issue Efficient and Resilient DC Energy Distribution Systems)

Abstract

Cyber-physical power systems integrate sensing, communication, and control, and their resiliency and security are especially critical in clustered networked microgrids. Software-Defined Networking (SDN) provides a suitable foundation by centralizing policy, enforcing traffic isolation, and adopting a deny-by-default policy in which only explicitly authorized flows are admitted. This paper proposes and experimentally validates a cyber-physical architecture that couples three DC microgrids through an SDN backbone to deliver rapid, reliable, and secure power sharing under highly dynamic conditions, including pulsed-load disturbances. The cyber layer comprises four SDN switches that establish dedicated paths for protection messages, supervisory control commands, and high-rate sensor data streams. An OpenFlow controller administers flow-rule priorities, link monitoring, and automatic failover to preserve control command paths during disturbances and communication faults. Resiliency is further assessed by subjecting the network to a deliberate denial-of-service (DoS) attack, where deny-by-default policies prevent unauthorized traffic while maintaining essential control flows. Performance is quantified through packet captures, which yield end-to-end delay, jitter, and packet-loss percentage, alongside synchronized electrical measurements from high-resolution instrumentation. Results show that SDN-enforced paths, combined with coordinated multi-microgrid control, maintain accurate power sharing. A validated hardware-testbed demonstration substantiates a scalable, co-designed communication-and-control framework for next-generation cyber-physical DC multi-microgrid deployments.

1. Introduction

Cyber-Physical Systems (CPSs) represent a crucial technological advancement where physical processes are tightly integrated with computational and communication capabilities, bridging cyberspace with the physical world. CPSs combine sensing, actuation, and computing to facilitate real-time data exchange, enabling intelligent monitoring, precise control, and adaptive system behavior [1]. These capabilities have become integral across various critical sectors, including power systems, healthcare, transportation, manufacturing, and defense, substantially enhancing operational efficiency, scalability, and adaptability [2]. The successful operation of CPS heavily relies on robust communication infrastructures to manage and transmit data effectively between system components. The communication layer is particularly vital as it ensures the timely, reliable, and secure exchange of information, which is critical for maintaining synchronization between cyber and physical components. This real-time data exchange enables precise control actions, rapid decision-making, and effective coordination among distributed components, thereby ensuring the stability and resilience of the overall system [3].
However, the increased reliance on communication networks introduces significant security vulnerabilities, exposing CPS to various cyber threats. These threats range from conventional cyberattacks, such as denial-of-service (DoS) and man-in-the-middle (MITM) attacks, to sophisticated cyber-physical attacks designed to disrupt physical processes by exploiting network vulnerabilities. For instance, the authors in [4] introduced a dual-attention spatio-temporal detector that blends graph-attention over the grid topology with temporal convolution to capture spatial and temporal attack signatures, showing superior detection accuracy and robustness on standard IEEE test systems. Meanwhile, in [5], the authors developed a data-driven framework for DC microgrids that first identifies an input–output model using subspace methods and then applies adaptive residual generators with adaptive thresholds and dedicated localization observers to detect and pinpoint compromised sensors, validated on a meshed microgrid with four distributed generation units. Consequently, cyber-attacks on CPS can result in catastrophic outcomes, including operational disruptions, physical damage, financial loss, and even threats to human safety. Therefore, securing the communication layer has emerged as a primary concern in the design and deployment of CPS [6].
Software-Defined Networking (SDN) has emerged as a promising technology that can address CPS’s unique security and operational challenges. By decoupling network control from the data plane, SDN provides centralized, programmable, and agile network management compared to traditional network architectures. This flexibility enables operators to dynamically configure, monitor, and secure network traffic in response to evolving operational requirements and real-time security threats. SDN enables deterministic communication paths, proactive threat detection, and immediate implementation of security policies across the network, significantly enhancing the cyber resilience of CPS. Moreover, SDN’s centralized control architecture supports rapid reconfiguration and fault recovery, which is crucial for maintaining continuous and secure operation amidst cyberattacks and network disturbances [7].
A critical research gap remains regarding the experimental demonstration of SDN-based control in real-world physical systems, particularly at the data plane level, where hardware performance can be accurately observed and assessed. This paper addresses this gap by implementing an experimental cyber-physical system (CPS), in which the physical layer comprises interconnected DC multi-microgrids. The cyber layer utilizes SDN-enabled programmable switches coordinated by an SDN controller, as shown in Figure 1. This practical integration provides crucial insights into real-time operational behavior, controller responsiveness, and overall system resilience under realistic dynamic conditions, marking a significant advancement in bridging theoretical SDN applications with practical implementation.
The key contribution is the end-to-end, experimental validation of SDN at the hardware data plane, demonstrating resilient, seamless control and protection while operating under highly dynamic disturbances, including fast-rise, pulsed-load events that stress both the electrical and communication layers, as well as a targeted denial-of-service (DoS) attack. The study reveals real-time operational behavior and controller responsiveness, while deny-by-default policies maintain deterministic control and protection channels amid disturbances. Quantitative metrics—end-to-end delay, jitter, and packet loss percentage from packet captures, together with synchronized electrical measurements—substantiate the robustness and seamless coordination across interconnected microgrids, bridging SDN’s theoretical promises with a practical, hardware-level implementation for reliable multi-microgrid operation.
The remainder of the paper is organized as follows: Section 2 discusses the literature review, focusing on previous work related to SDN and studies that have incorporated SDN concepts in similar applications. Section 3 presents the modeling of the physical layer, providing detailed mathematical and structural descriptions of the distributed generation units, energy storage systems, and various load types within the interconnected DC multi-microgrids. Section 4 describes the cyber layer, emphasizing the implemented SDN and communication architecture designed for secure and deterministic data exchange. Section 5 outlines the experimental setup and testing scenarios, and discusses the results, highlighting system resilience and performance across different operating conditions. Finally, Section 6 concludes the paper by summarizing the main contributions, assessing the effectiveness of the SDN-based approach, and proposing directions for future research in secure and adaptive cyber-physical power systems.

2. State of the Art

Numerous studies have examined SDN’s potential to enhance network programmability, scalability, and security. Ref. [8] presented a comprehensive overview of SDN, outlining its layered architecture, key protocols, and major use cases across communication and computing infrastructures. Their work emphasized how SDN’s control plane and programmable interfaces can simplify network management, improve scalability, and enable flexible traffic control and real-time resource optimization.
The work presented in [9] reviewed the application of SDN concepts in the power sector, with a particular focus on smart grid communication architectures. Their survey discussed how SDN controllers could potentially improve grid reliability and enable intelligent routing for distributed renewable energy resources. They also emphasized the potential role of SDN in enhancing cybersecurity, offering valuable guidance for researchers aiming to leverage SDN for the advancement of smart grid infrastructures.
Authors in [10] offered a detailed survey of SDN load-balancing mechanisms, categorizing approaches based on controller placement, algorithmic optimization, and machine-learning-assisted decision strategies. The review highlighted how adaptive and AI-based load-balancing methods can effectively balance control-plane workload, enhance resource utilization, and maintain network stability under varying loads.
Several studies have explored the potential of SDN through simulation-based architectures and frameworks. The authors in [11] presented an SDN architecture based on a flat multi-controller network to achieve efficient load balancing. The architecture was developed using the Ryu SDN controller and tested within the Mininet environment, where OpenFlow switches and host computers were emulated to simulate network behavior. While the study demonstrates performance advantages, it remains limited to simulation with no experimental validation.
Building on similar simulation-based evaluations, ref. [12] highlighted the advantages of migrating to SDN from traditional networks. By implementing two network scenarios in a virtual environment, the study demonstrated improved performance and simplified network management under the SDN model. However, this work also lacks experimental validation.
The work presented in [13] reinforces the benefits of SDN by addressing the complexity of managing traditional network devices. It emphasizes how optimized SDN architectures can improve transactional performance across network participants. Using the Mininet SDN emulator and AnyLogic simulation software, the study supports its findings in a virtual setting; however, it does not extend to real-world testing.
In [14], the authors proposed a more sophisticated SDN architecture to enhance scalability and load balancing in power communication networks. It introduces a Prediction-based Switch Virtualization and Distribution (PSVD) algorithm alongside RabbitMQ and Storm technologies to enhance data platform management. The SDN security controller is distributed across four machines, supporting modularity and scalability. While the solution shows potential, the study identifies challenges such as delayed algorithm activation and computational overhead, highlighting the need for further development and experimental analysis.
Expanding SDN’s application into energy systems, the work in [15] introduced a model for cooperative energy sharing in an Interconnected Multi-Microgrid (IMMG) framework. It proposes an Energy Sharing Factor (ESF) as a novel metric to ensure fair energy distribution while maintaining the autonomy of individual microgrids. Using a mesh topology, the model facilitates interaction between microgrids but remains confined to simulation, lacking practical validation.
In [16], the authors presented a co-simulation framework designed to enhance the resilience and reliability of microgrids using SDN. It proposes a novel SDN-based control strategy to manage communication during power-sharing operations, reducing latency and improving system responsiveness. The framework effectively manages voltage and frequency deviations, referencing future cloud-based microgrid implementations. Despite its strengths, this work is also limited to simulation studies.
The work presented in [17] proposes a Software-Defined Networking routing framework for data center networks that respects flow priority while reducing energy use and balancing traffic. The authors formulate a multi-objective mixed-integer optimization model to maximize admitted flows, minimize path energy, and minimize load variance. Because exact optimization is expensive, they design two controller-driven heuristics: one selects lower-energy paths with priority preserved, and another favors balanced paths with priority preserved. Simulations show higher admission ratios with lower energy and better balance than baselines. A key limitation is the largely static setting, which weakens performance under congestion or link failures, suggesting the need for real-time, adaptive strategies.
From a cybersecurity perspective, ref. [18] proposed a machine-learning–based framework for detecting low-rate Distributed Denial-of-Service (LDDoS) attacks in Software-Defined Networks (SDNs). The core of their approach is an ensemble classifier (ENC) that combines multiple learning models to accurately identify subtle, low-rate attack patterns while maintaining a low detection time. The method is specifically designed to achieve a very low false-alarm rate so that legitimate traffic is not incorrectly flagged as malicious, thereby preserving the efficiency and reliability of the SDN environment. The authors extended this idea in [19], refining the ML technique with an approach that combines ensemble learning models with principal component analysis (PCA) for feature selection.
In a subsequent work, the same authors introduced a hybrid Support Vector Machine–Random Forest (SVM–RF) model for DDoS detection and mitigation in SDNs [20]. Their dual-plane architecture utilized data from both control and data planes to identify anomalies more accurately. This work highlighted how intelligent machine learning integration can strengthen SDN’s inherent security model. The work was validated using Mininet with a Ryu controller and Open vSwitch (OVS) configured as a multilayer virtual switch, thus remaining limited to virtual environments.
Moreover, the authors in [21] propose a cross-domain SDN scheme that integrates reinforcement-learning-assisted path selection with coordinated gain scheduling to strengthen microgrid resilience against denial-of-service attacks. The authors build a cyber-resilience validation platform and demonstrate it on an IEEE 34-bus physical model with an emulated cyber stack using Cisco Modeling Labs, Mininet, and Ryu. Results show substantial latency reductions during attacks and improved frequency stability with clear reward convergence. The contribution is chiefly methodological and platform-oriented, as the evaluation relies on hardware-in-the-loop and emulation rather than power hardware. Overall, the work offers credible pre-deployment evidence and would benefit from validation on a real microgrid testbed.
Finally, ref. [22] surveys and proposes SDN-based defenses for DDoS, emphasizing centralized control, dynamic traffic engineering, segmentation/isolation, and real-time monitoring with machine-learning-assisted detection. It motivates SDN as a flexible alternative to static firewalls and IDS, enabling dynamic path steering and scrubbing redirection. The core contribution is a conceptual model and workflow for controller-driven detection, flow diversion, and traffic classification, with qualitative results. Technically, it details traffic engineering policies, null routing, and fine-grained segmentation to contain attacks. Overall, the work is methodological; future iterations need reproducible, quantitative evaluations and validation on hardware-in-the-loop or real testbeds.
While previous studies have collectively demonstrated the potential of SDN in network management, power systems, and microgrid coordination, their validation relies primarily on simulation and lacks hardware implementation.

3. Physical System Modeling

In the evolving landscape of modern power systems, interconnected multi-microgrids have emerged as a resilient and decentralized solution to meet growing energy demands while integrating renewable energy sources. These microgrids are composed of multiple distributed generation (DG) units—such as photovoltaic (PV) arrays and wind turbines—alongside energy storage systems (ESSs) like batteries or supercapacitors. By linking these components into a cohesive network, interconnected microgrids offer enhanced reliability, improved energy efficiency, and the flexibility to operate in grid-connected and islanded modes. The synergy between DG and ESS enables the real-time balancing of supply and demand, enhancing the system’s ability to respond to disturbances. As the penetration of renewable energy increases, interconnected microgrids play a crucial role in enabling localized energy autonomy, reducing transmission losses, and fostering a more sustainable and adaptive power infrastructure. This section presents the modeling of the physical system components, including DG units, battery storage units, and electrical loads. The proposed system architecture is then provided to represent the complete microgrid configuration under study.

3.1. Distributed Generation Units

3.1.1. Photovoltaic Unit Modeling and Integration

PV systems are central to DC microgrids by providing modular and efficient energy generation. Their integration typically includes connecting PV arrays to a shared DC bus alongside DC-DC converters, ESS, and local loads. Compared to AC microgrids, DC configurations avoid issues related to reactive power and minimize energy losses from conversions, making them especially well-suited for PV deployment. Nevertheless, challenges such as variable output, voltage instability, and shading effects require sophisticated control methods to ensure consistent performance and optimal energy utilization [23]. Hierarchical control schemes are commonly employed to address these issues [24].
The single-diode model is widely adopted for PV modeling because it balances accuracy and computational efficiency. Its output current-voltage relationship is governed by [25]:
$$I = I_{ph} - I_o \left[ \exp\!\left( \frac{V + I R_s}{a V_t} \right) - 1 \right] - \frac{V + I R_s}{R_{sh}}$$
where $I$ is the module current, $I_{ph}$ represents the photocurrent, $I_o$ is the diode saturation current, and $R_s$ and $R_{sh}$ model resistive losses. The thermal voltage $V_t = kT/q$, where $k$ is Boltzmann’s constant, equal to $1.38 \times 10^{-23}$ J/K; $q$ is the electron charge, equal to $1.602 \times 10^{-19}$ C; and $a$ is the diode ideality constant.
Here, the photocurrent Iph and the diode saturation current Io depend on irradiance G and temperature T, as defined by:
$$I_{ph} = \left[ I_{SC,ref} + K_i \left( T - T_{ref} \right) \right] \frac{G}{G_{ref}}$$
$$I_o = I_{o,ref} \left( \frac{T}{T_{ref}} \right)^{3} \exp\!\left[ \frac{q E_g}{a k} \left( \frac{1}{T_{ref}} - \frac{1}{T} \right) \right]$$
These equations enable the real-time simulation of PV behavior, which is crucial for Maximum Power Point Tracking (MPPT) algorithms, such as Perturb-and-Observe (P&O) or Model Predictive Control (MPC). For instance, adaptive MPC can leverage the model’s I-V curve to predict optimal operating points under partial shading, minimizing power oscillations.
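As a concrete illustration (a sketch, not the testbed’s implementation), the single-diode relation can be solved numerically for the module current at a given voltage, and a basic P&O loop can then climb the resulting P–V curve. All parameter values below are hypothetical example numbers for a small 36-cell module.

```python
# Illustrative sketch only: numerically solving the single-diode equation
# for module current, then running a basic P&O hill-climb. All parameter
# values are hypothetical examples, not the testbed's PV data.
import math

def module_current(V, Iph=8.0, Io=1e-9, Rs=0.3, Rsh=300.0, a=1.3, Vt=0.93):
    """Fixed-point solve of I = Iph - Io*(exp((V+I*Rs)/(a*Vt)) - 1) - (V+I*Rs)/Rsh."""
    I = Iph  # start from the short-circuit region
    for _ in range(200):
        I_new = (Iph - Io * (math.exp((V + I * Rs) / (a * Vt)) - 1.0)
                 - (V + I * Rs) / Rsh)
        if abs(I_new - I) < 1e-9:
            break
        I = 0.5 * I + 0.5 * I_new  # damped update keeps the iteration stable
    return max(I, 0.0)

def po_step(V, V_prev, P_prev, dV=0.2):
    """One Perturb-and-Observe step: keep direction if power rose, else reverse."""
    P = V * module_current(V)
    step = dV if V > V_prev else -dV
    return (V + step, P) if P > P_prev else (V - step, P)

# Climb from 20 V toward the maximum power point
V, V_prev, P_prev = 20.0, 19.8, 0.0
for _ in range(100):
    V_next, P = po_step(V, V_prev, P_prev)
    V_prev, P_prev, V = V, P, V_next
```

With these example parameters, the loop settles into a small oscillation around the maximum power point, which is exactly the steady-state behavior P&O is known for.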

3.1.2. Wind DG Unit Modeling and Integration

Wind energy systems are increasingly adopted in DC microgrids due to their scalability and efficient integration with DC infrastructures, which avoids frequency synchronization and minimizes conversion losses. Typically, modern wind turbines use permanent magnet synchronous generators (PMSGs) combined with power electronic converters, such as active rectifiers or back-to-back voltage source converters (VSCs), to connect to the DC bus. Given the variable nature of wind, effective control strategies are crucial for performing MPPT, regulating DC bus voltage, and coordinating with other distributed energy resources, such as PV and storage systems. Hierarchical and decentralized control approaches are key in maintaining stability and balancing supply and demand in these dynamic environments [26].
The equation governs the mechanical power extracted by a wind turbine:
$$P_{mech} = \frac{1}{2} \rho A C_p(\beta, \lambda)\, v_w^{3}$$
where $\rho$ is the air density, $A$ is the rotor swept area, $C_p$ is the power coefficient, which is a function of the tip-speed ratio $\lambda$ and blade pitch angle $\beta$, and $v_w$ is the wind speed.
The tip-speed ratio is defined as:
$$\lambda = \frac{\omega_r R}{v_w}$$
where $\omega_r$ is the rotor angular speed, and $R$ is the rotor radius. To maximize energy capture, MPPT algorithms dynamically adjust $\omega_r$ to maintain $\lambda$ at its optimal value, ensuring $C_p$ remains near its peak [27].
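The MPPT idea above can be sketched numerically. Since the text does not specify a $C_p(\lambda, \beta)$ curve, the snippet assumes a commonly used empirical approximation; the rotor parameters are hypothetical examples.

```python
# Sketch of wind-turbine MPPT over the tip-speed ratio. The Cp(lambda, beta)
# curve is a commonly used empirical approximation (an assumption, since the
# text does not specify one); rotor parameters are hypothetical examples.
import math

def cp(lam, beta=0.0):
    """Empirical power coefficient Cp(lambda, beta)."""
    lam_i = 1.0 / (1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
    return (0.5176 * (116.0 / lam_i - 0.4 * beta - 5.0) * math.exp(-21.0 / lam_i)
            + 0.0068 * lam)

def mech_power(v_w, omega_r, R=2.0, rho=1.225, beta=0.0):
    """P_mech = 0.5 * rho * A * Cp(beta, lambda) * v_w**3 with lambda = omega_r*R/v_w."""
    lam = omega_r * R / v_w
    A = math.pi * R**2  # rotor swept area
    return 0.5 * rho * A * cp(lam, beta) * v_w**3

# MPPT by sweeping rotor speed at a fixed wind speed: the power peak sits at
# the optimal tip-speed ratio, where Cp is maximal
v_w = 10.0
speeds = [1.0 + 0.5 * k for k in range(119)]  # 1 .. 60 rad/s
omega_opt = max(speeds, key=lambda w: mech_power(v_w, w))
p_max = mech_power(v_w, omega_opt)
```

The sweep locates the rotor speed at which $\lambda$ hits its optimum (around 8 for this curve), mirroring what an online MPPT controller tracks continuously.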

3.2. Battery Storage Unit Modeling

Energy storage units play a vital role in microgrids by maintaining power balance and enhancing system reliability. They store excess electrical energy during periods of low demand and discharge it during peak load times or when generation from renewable sources is insufficient, thereby helping to mitigate supply fluctuations and support stable microgrid operation. This capability is critical when managing variable energy inputs from PV and wind systems. Batteries, one of the most common storage technologies, store energy through electrochemical processes and are well-suited for applications requiring moderate power over extended durations, such as load leveling and peak shaving. For example, when generation from PV and wind sources exceeds demand, the surplus energy is stored in the battery until it reaches a predefined state of charge (SoC). When generation falls below the load requirement, the battery discharges to supplement the power supply and maintain continuity [28]. A representative battery voltage model can be written as [25]:
$$V_{batt} = E_0 - k \frac{Q}{Q - \int_0^t i \, dt} + A \exp\!\left( -B \int_0^t i \, dt \right)$$
where $E_0$ is the constant open-circuit voltage, $k$ is the polarization voltage (V), $Q$ is the nominal capacity (Ah), $A$ is the exponential zone amplitude, $B$ is the inverse time constant of the exponential zone, and $i(t)$ is the charging (negative) or discharging (positive) current.
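As a minimal sketch of this Shepherd-type model, the snippet below evaluates the terminal voltage as charge is extracted during a constant-current discharge; all parameter values are hypothetical examples for a small 12 V pack, not measured data.

```python
# Minimal sketch of the Shepherd-type battery voltage model; parameter
# values are hypothetical examples, not measured data from the testbed.
import math

def battery_voltage(it, E0=12.6, k=0.02, Q=50.0, A=0.6, B=3.0 / 50.0):
    """Terminal voltage vs. extracted charge it = integral of i dt (Ah, discharge positive)."""
    return E0 - k * Q / (Q - it) + A * math.exp(-B * it)

# Constant 10 A discharge: accumulate extracted charge and log the voltage sag
dt_h = 0.1                     # time step (hours)
it, history = 0.0, []
for _ in range(40):            # 4 h -> 40 Ah extracted from a 50 Ah pack
    it += 10.0 * dt_h
    history.append(battery_voltage(it))
```

Both model terms decay with extracted charge, so the logged voltage falls monotonically, reproducing the familiar exponential-then-polarization shape of a discharge curve.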

3.3. Loads

In DC microgrids, loads are often modeled as purely resistive to simplify analysis, as there is no requirement to manage reactive power or maintain phase synchronization. Given the direct linear relationship between current and voltage in DC systems, this assumption enables more straightforward calculation of power flow and voltage regulation. It also simplifies the design and simulation of control strategies, particularly in islanded or off-grid scenarios. In real-world applications, loads may be constant, drawing a fixed amount of power over time, or variable, fluctuating due to user activity or external factors, such as industrial pulse loads or rapidly changing electronic devices. Regardless of the load type, the microgrid control system must continuously adapt to changes in demand to stabilize voltage, manage energy storage behavior, and allocate resources efficiently.
Pulsed power loads (PPLs) are characterized by a load demand that alternates between a minimum value $P_{min}$ and a maximum value $P_{max}$ over a period $T$, operating with a duty cycle $D$. Depending on how the duty cycle is set during operation, the PPL can behave as either a constant or a variable load. The instantaneous power demand for $N$ pulses can be described by [29]:
$$P_D(t) = P_{min} + \sum_{K=0}^{N-1} \left( P_{max} - P_{min} \right) \left[ \delta(t - KT) - \delta\!\left( t - (K + D)T \right) \right]$$
where $\delta$ denotes the unit step function, equal to 1 for $t \geq 0$ and 0 otherwise.
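The PPL expression above translates directly into code; the snippet below samples one such profile, with $P_{min}$, $P_{max}$, $T$, $D$, and $N$ set to arbitrary example values.

```python
# Sketch of the pulsed-power-load profile: demand alternates between Pmin
# and Pmax with period T and duty cycle D. All values are example numbers.

def u(t):
    """Unit step: 1 for t >= 0, else 0."""
    return 1.0 if t >= 0 else 0.0

def pulsed_load(t, Pmin=1.0, Pmax=5.0, T=2.0, D=0.25, N=4):
    """P_D(t) = Pmin + sum over K of (Pmax - Pmin) * [u(t - K*T) - u(t - (K + D)*T)]."""
    return Pmin + sum((Pmax - Pmin) * (u(t - K * T) - u(t - (K + D) * T))
                      for K in range(N))

profile = [pulsed_load(0.1 * n) for n in range(80)]  # sample 8 s at 100 ms
```

Each period contributes a rectangular pulse of width $DT$, so the sampled profile toggles between the two power levels, which is precisely the fast-rise disturbance used to stress the testbed.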

3.4. Proposed System Architecture

The proposed experimental setup’s physical layer infrastructure, as shown in Figure 2, comprises three interconnected DC multi-microgrids, each equipped with key electronic and renewable components to support dynamic energy exchange and system resilience. Each microgrid includes a photovoltaic (PV) source, a Lithium-ion battery for energy storage, and a bidirectional DC-DC converter to enable seamless power flow in both directions.
Some microgrids also integrate additional elements such as wind turbine emulators, ultracapacitor banks, and electric vehicle loads to enhance system flexibility and dynamic response under varying conditions. A local controller—implemented using a dSPACE-1104 real-time control platform—manages each microgrid’s operation, ensuring voltage regulation and power balance at the local level. These controllers communicate with a centralized Multi-Microgrid Control Center (MMCC), hosted on an external computing platform, which aggregates real-time measurements and coordinates multi-microgrid power-sharing decisions. The architecture enables hierarchical control, where local autonomy is maintained while facilitating global coordination, making the physical layer modular and scalable, and suitable for validating advanced control algorithms and real-time energy management strategies in networked microgrid environments.

4. SDN-Based Cyber Layer

This section focuses on the cyber layer of multi-microgrids, specifically the implementation of SDN for communication and control. The first subsection introduces the concept of SDN, highlighting its key differences from traditional networking architectures, particularly its separation of control and data planes, centralized decision-making, and programmability. These features make SDN well-suited for dynamic, distributed energy systems, such as microgrids. The second subsection presents the integration of SDN within the proposed multi-microgrid system, detailing how communication between DG units, ESS, and loads is managed through an SDN controller. This setup enables real-time monitoring, flexible reconfiguration, and coordinated control actions, enhancing the system’s responsiveness and resilience.

4.1. SDN Layer Architecture

Software-Defined Networking (SDN) is an advanced networking framework that decouples the control plane from the data plane, enabling centralized control and enhanced programmability of network resources [30,31]. In contrast to conventional IP-based networks—which often suffer from rigid architectures and tightly coupled control and forwarding mechanisms—SDN simplifies network management by introducing a centralized software controller, a network operating system [32,33]. While SDN has been widely deployed in IT domains such as cloud computing and data centers, the proposed work focuses on its application in operational technology (OT) networks for the power industry, where it can enable more adaptive routing, efficient resource allocation, and streamlined configuration in critical infrastructure systems.
In a typical SDN architecture, as depicted in Figure 3, the infrastructure layer or data plane comprises forwarding devices like switches, which handle packet processing based on instructions received from a centralized controller via a southbound interface [34]. OpenFlow is the most widely used protocol for this interface, allowing the controller to define flow entries that determine how packets should be managed—such as forwarding, dropping, or modifying—based on match criteria, including packet headers. This centralized control ensures consistent, optimized behavior across the network. However, a centralized SDN controller can be a single point of failure or a DoS target. Importantly, controller availability is only required for installing or reconfiguring flows; if the controller becomes unreachable, switches continue to forward traffic using their existing rules. Adding standby or clustered controllers reduces this risk but introduces orchestration complexity and split-brain hazards.
The control layer houses the SDN controller, which collects network-wide information from the data plane and makes traffic management decisions accordingly. It translates policies such as routing, quality of service, and security into specific forwarding rules. This enables quick responses to congestion or link failure, maintaining efficient and reliable network operation. Above this, the application layer comprises software services that interact with the controller via a northbound interface. While not as standardized as the southbound interface, it provides APIs or abstracted network views for implementing higher-level functions like firewalls, load balancers, or intrusion detection systems [35].
One of SDN’s key advantages is its strong security model. Centralized programmability not only simplifies management but also enables robust traffic control. A critical security feature is the “deny-by-default” policy. In this model, any traffic from a device not explicitly authorized by the controller is automatically blocked. If a device’s IP address (or another identifier, such as its MAC address) is not preconfigured in the SDN controller, no flow rules are assigned, and the switch drops all packets from that device. This creates a built-in protection layer, significantly reducing the risk of unauthorized or malicious access.
In summary, SDN reshapes conventional networking by decoupling control from data forwarding and enabling programmable, centralized management through defined interfaces. With its “deny-by-default” mechanism and centralized control, SDN enhances flexibility and visibility, while strengthening the network’s defense against unauthorized activity.

4.2. SDN Integration with Multi-Microgrids

In the proposed SDN-integrated multi-microgrid architecture as depicted in Figure 4, every physical component, whether a DG unit, ESS device, load controller, or microcontroller unit (MCU), is wired directly to the SDN switches via dedicated Ethernet cables and assigned a unique IPv4 address with a corresponding subnet mask. These switches form the data plane and treat each piece of equipment as an IP-enabled endpoint, providing fine-grained visibility into status and performance metrics. MCUs are further subdivided by function: some execute low-level control loops for generation setpoints, while others manage load dispatch and demand response.
Regarding scalability and adaptability, the proposed SDN approach can support growth without redesign by onboarding additional DERs, sensors, and controllers via role templates, keeping flow tables manageable (role-based whitelists, VLANs, group tables) and aggregating telemetry at site controllers. Adaptation can be event-driven: the controller monitors link/device health, as well as grid events, and then applies predefined routes and queue policies. Precomputed backup paths enable immediate failover. A deny-by-default whitelist enables secure plug-and-play for known roles.
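The precomputed-backup-path idea can be sketched as a controller-side lookup: when the link-health monitor marks a link on the primary route as failed, the preinstalled backup route is returned without any recomputation. Switch names and routes below are hypothetical.

```python
# Sketch of controller-side failover with precomputed backup paths: a failed
# link on the primary route triggers an immediate switch to the backup,
# with no path recomputation. Switch names and routes are hypothetical.

PATHS = {
    ("MG1", "MMCC"): {"primary": ["S1", "S2"], "backup": ["S1", "S3", "S2"]},
}
failed_links = set()  # populated by the controller's link-health monitor

def path_links(path):
    """Decompose a switch path into its directed links."""
    return set(zip(path, path[1:]))

def active_path(src, dst):
    """Use the primary route unless one of its links has failed."""
    routes = PATHS[(src, dst)]
    if path_links(routes["primary"]) & failed_links:
        return routes["backup"]  # immediate failover to the precomputed route
    return routes["primary"]
```

Because the backup entries already exist, failover reduces to a set intersection rather than a routing computation, which is what makes the reaction effectively immediate.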
Once the physical topology and IP provisioning are complete, the SDN controller, residing in the control plane, orchestrates traffic flows between endpoints. Over a TCP/IP and ARP-based southbound interface, the controller programs flow entries into each switch, specifying match criteria (e.g., source/destination IP, TCP/UDP port, VLAN tag) and corresponding actions (forward, drop, modify) based on real-time network and power-system conditions. ARP request/reply flows are explicitly managed to populate the controller’s IP-to-MAC mappings and prevent broadcast storms. The SDN controller enforces performance isolation and security policies by tailoring these flow rules to distinct communication patterns (e.g., generation control traffic, load-shedding commands, SCADA polling). This centralized approach allows dynamic adaptation to operational changes, such as islanding events or load transients, without requiring manual reconfiguration of individual switches.
This clear separation between the data plane (SDN switches forwarding traffic) and the control plane (the SDN controller issuing flow entries) allows the microgrid to adapt packet routes on the fly as operating conditions change.
Figure 5 illustrates the practical topology of the SDN-integrated multi-microgrid system, as captured from the Flow Controller interface. At the top of the configuration is the SDN controller, which provides centralized management and programming of four SDN switches. These switches interconnect all system components, enabling structured and efficient communication. The architecture comprises three microgrids (MG1, MG2, and MG3), with each microgrid’s load managed by a dedicated microcontroller unit (MCU1, MCU2, and MCU3, respectively). This arrangement enables centralized oversight and dynamic reconfiguration through standard network utilities.
A fourth microcontroller (MCU4) controls the load at the point of common coupling (PCC). Additionally, the system incorporates a host device, enabling remote access and configuration of the MCUs through a user-friendly web interface. Furthermore, a Wireshark network analyzer is integrated into the network, monitoring packet flow, latency, and packet loss between components.
Communication across all these system components is governed by HTTP, TCP/IP, and ARP protocols, which are explicitly programmed into the SDN controller to construct accurate and secure flow tables. These protocols ensure robust, efficient, and secure data exchanges, enhancing reliability and responsiveness throughout the interconnected microgrid system.

5. Experimental Results and Discussion

5.1. Experimental Setup

The experimental validation of the proposed system was conducted using the smart grid testbed, as illustrated in Figure 6. This testbed comprises a laboratory-scale power system integrating diverse power sources, including conventional AC sources represented by synchronous generators and renewable DC sources, such as PV emulators. Multiple ESSs, including lithium-ion batteries, ultracapacitors, and flywheel systems, are integrated to support various operational scenarios. This versatile environment enables comprehensive development, testing, and practical implementation of innovative control strategies to improve the performance, resiliency, reliability, sustainability, and security of DC microgrids and broader smart grid systems. Furthermore, the testbed facilitates the integration of renewable energy sources (RESs) with ESSs via bidirectional converters, forming standalone DC microgrid configurations. The testbed also enables the seamless interconnection of multi-microgrids through secure and flexible communication infrastructures, combined with advanced hierarchical control systems, thereby extending capabilities to multi-microgrid operations. Additionally, the DC microgrid can interface with a small-scale AC utility grid through bidirectional AC/DC converters, enabling real-world testing and validation of control, protection, communication, and cybersecurity solutions relevant to industrial applications.
This paper specifically addresses the standalone DC multi-microgrid operational level and its associated communication infrastructure enabled by the SDN layer. Electrical waveforms were captured using a high-speed digital oscilloscope and controlled through the control desk interface shown in Figure 7a, while packet-level network traffic was simultaneously recorded in real time via Wireshark running on a network-attached analyzer, as illustrated in the communication-layer experimental setup shown in Figure 7b. This integrated measurement approach enables precise correlation between power-system dynamics and network-layer performance metrics, demonstrating the advantages of centralized SDN orchestration in enhancing the resilience, reliability, and responsiveness of DC multi-microgrid operations. To validate the proposed SDN-integrated microgrid and quantify its dynamic performance, all devices are connected via dedicated Ethernet cabling and communicate over TCP/IP and ARP under the direction of the SDN controller. Two test scenarios were designed:
  • Local Load/Generation Variations—step changes injected into individual microgrid setpoints.
  • Pulsed Power Load (PPL) Injection—a short-duration, high-magnitude load applied at the PCC bus.
The ratings and specifications of the proposed DC microgrid system components considered in this study are summarized in Table 1. The table provides detailed parameters for each microgrid, including DC bus voltage, load ranges, renewable generation capacities, energy storage specifications, and the rating for the external pulse load utilized during testing.

5.2. Test Scenarios

5.2.1. Load and Generation Variations

Scenario 1 is divided into three sequential cases, as shown in Figure 8. In all cases, each waveform has its own color-coded vertical axis on the left margin, with four labels marking each waveform's reference level (Net Power = Generation Power − Demand Power = 0), and a shared horizontal time axis at 4 s/div. During the initial interval, MG3's net power is below its reference, indicating that its demand exceeds its generation. At the same time, MG1's and MG2's waveforms lie above their reference axes, indicating net supply and confirming that MG1 and MG2 jointly furnish MG3's demand.
Case 1 examines the effect of decreasing the local load of MG3 over a 4 s interval. As a result, MG3's net power shifts upward and MG1's net power decreases by the same magnitude, demonstrating that MG1 scales back its contribution to match MG3's lower demand. Meanwhile, MG2's net power waveform remains almost unchanged throughout this adjustment, indicating that its generation continues to meet its local load without requiring reconfiguration.
Similarly, in Case 2 the MMCC (host PC) issues a generation-reduction command for MG2 via the SDN communication layer. As a result, MG2's net power is reduced to 35% of its initial capacity. Consequently, MG1's net power rises correspondingly to rebalance the DC bus, shouldering the shortfall while MG3's consumption remains unchanged. Immediately thereafter, the host PC sends a second SDN-layer command to reduce MG1's local load setpoint; MG1's net power then falls by the same amount, with MG2 holding steady at its reduced level. Throughout these adjustments, every setpoint update and acknowledgment traverses the SDN layer and can be correlated between the oscilloscope waveforms and the Wireshark-captured packet flows, demonstrating seamless power sharing and coordination under SDN control.
Lastly, in Case 3 the MMCC issues a local load-reduction command for MG2 over the SDN communication layer, lowering MG2's setpoint so that its net power shifts toward zero (i.e., MG2 draws less power). As a direct consequence, MG1's supply decreases by an equivalent amount, reflecting the reduced energy transfer, while MG3's consumption remains at its nominal level.
A mirror (SPAN) port on one of the switches was used to verify the precise timing and content of SDN control commands. All Ethernet frames were captured with Wireshark on a dedicated analysis PC, using tap connections configured in the flow-controller software; packets from the other ports were copied to the port where the network analyzer was connected, so that all network flows were captured. Display filters were applied to isolate the TCP/HTTP streams between the host PC and each MG controller's IP address. The Wireshark GUI presents each packet's number, arrival timestamp, source/destination addresses, protocol (HTTP or TCP), and length, with color coding to distinguish GET/POST requests, ACKs, and teardown messages. Figure 9 shows representative snapshots for the three cases: the load-change HTTP GET issued by the web interface, the SDN controller's flow-mod handshake, and the subsequent TCP teardown. In every instance, the packet timestamps in Wireshark align with the corresponding waveform transitions on the oscilloscope, confirming that SDN command dispatch over TCP/IP is tightly synchronized with the electrical response of the microgrid. Moreover, this tightly coordinated response, enabled by sub-second SDN orchestration of control commands and flow-table entries, highlights the microgrid's ability to dynamically rebalance generation and consumption without manual switch reconfiguration.
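Display filters of the kind described above might look as follows (the MCU address shown is the pulsed-load controller address reported later; any other addresses would be substituted per endpoint):

```
ip.addr == 192.168.5.110 && (http || tcp)
http.request.method == "GET" && ip.dst == 192.168.5.110
tcp.flags.syn == 1 && tcp.flags.ack == 0
```

The first filter isolates one host-to-MCU stream, the second selects only command requests, and the third surfaces connection-initiating SYNs, which is also useful for spotting flood onset.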

5.2.2. Pulsed Load Injection

In Scenario 2, a series of pulsed-load steps is applied at the common coupling bus via the SDN layer, with each pulse changing magnitude approximately every 4 s. The resulting DC-bus voltage and microgrid power waveforms, captured on the oscilloscope, are presented in Figure 10, illustrating the transient response to each load injection and removal. Simultaneously, Figure 11 displays the corresponding Wireshark packet captures of the HTTP/TCP commands sent from the host PC to the bus-load controller, with timestamps that align closely with the oscilloscope markers. This packet-level monitoring follows the same method as in Scenario 1, confirming that each pulsed-load command and acknowledgement traverse the SDN communication layer in lockstep with the observed electrical dynamics.
The performance evaluation includes quantitative metrics for both scenarios, End-to-End (E2E) communication delay and packet loss percentage, reported in Table 2. E2E delay is defined as the total time from host command issuance to receipt of the MCU's reply, encompassing outbound/return transmission, switching and queuing, protocol-stack processing, and MCU handling. Request-departure and response-arrival timestamps are extracted from the packet capture. Per transaction, the E2E delay is computed as $t_i = t_{\mathrm{resp}} - t_{\mathrm{req}}$. The mean E2E delay over $N$ transactions is the arithmetic average $\bar{t} = \frac{1}{N}\sum_{i=1}^{N} t_i$, where $\bar{t}$ captures the typical request-to-response time. Lastly, to quantify variability (jitter), the standard deviation is computed as $\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(t_i - \bar{t}\right)^2}$.
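The delay and jitter statistics above can be computed directly from capture timestamps. A minimal sketch follows; the timestamp values are illustrative, not the paper's measurements:

```python
from statistics import mean, pstdev

# Hypothetical request-departure and response-arrival timestamps (seconds)
# extracted from a packet capture; values are illustrative only.
t_req = [0.000, 1.000, 2.000, 3.000]
t_resp = [0.023, 1.025, 2.021, 3.027]

# Per-transaction E2E delay: t_i = t_resp - t_req
delays = [r - q for q, r in zip(t_req, t_resp)]

mean_delay = mean(delays)   # arithmetic average, t-bar
jitter = pstdev(delays)     # population standard deviation, sigma (1/N form)
```

Note that `pstdev` matches the 1/N definition used here, whereas `stdev` would apply the 1/(N−1) sample correction.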
Additionally, the packet loss percentage, defined as the fraction of host requests that do not receive a matching response within the observation window, is calculated. Let $N_{\mathrm{req}}$ be the number of requests and $N_{\mathrm{resp}}$ the number of responses; then $\mathrm{Packet\ Loss}\,(\%) = \frac{N_{\mathrm{req}} - N_{\mathrm{resp}}}{N_{\mathrm{req}}} \times 100$.
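The loss definition above reduces to a one-line computation once requests and responses have been counted from the capture (the function name is illustrative):

```python
def packet_loss_pct(n_req: int, n_resp: int) -> float:
    """Percentage of requests with no matching response in the window."""
    if n_req == 0:
        return 0.0  # no traffic observed, nothing lost
    return (n_req - n_resp) / n_req * 100.0
```

For example, 100 requests with 95 matched responses yields a 5% loss; in both test scenarios here every request was answered, giving 0%.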

5.3. Cyber-Resilience of SDN Under DoS Attack

To assess cyber-resilience under denial-of-service, identical TCP-SYN floods were generated with hping3 toward a designated agent endpoint, as shown in Figure 12, using spoofed source addresses to issue large numbers of connection-initiating packets that never complete the handshake. Tests were run in two modes: a conventional switched network and the proposed SDN configured with a deny-by-default whitelist. In the conventional network, the flood traffic reached the target and saturated the host interface. The measurements and packet captures reported below therefore refer to this baseline case. Under SDN control, only preauthorized device-to-device exchanges were admitted, and unknown or spoofed packets were dropped, preventing delivery to the endpoint. Consequently, SDN performance under DoS matched Scenario 2, and the pulsed-load injection sequence continued normally without interruption.
The tool hping3 can generate TCP-level floods via `hping3 -c 15000 -d 120 -S -w 64 --rand-source <targeted IP address>`. In this experiment, 15,000 TCP packets, each 120 bytes in size, were transmitted with the SYN flag set and a nominal TCP window size of 64. The packets were sent at line rate using spoofed source IP addresses, preventing any SYN-ACK responses from returning to the sender. This configuration emulates a TCP SYN flood to evaluate the system's resilience under a DoS scenario. Figure 13 presents a packet-level capture from the network monitoring interface, clearly indicating the onset of the flooding event. The system's power dynamics are depicted in Figure 14, while the microgrid operated under a pulsed load shared between MG1 and MG3. Prior to the attack (t < 22 s), the plot shows stable power-sharing behavior, with MG1 and MG3 supplying their respective loads according to normal operational setpoints; MG2 does not supply the pulsed load owing to its local operating status. At t = 22 s, the SYN flood commenced, coinciding with the packet surge observed in the network capture. Following the attack, the pulsed-load controller (IP: 192.168.5.110) stopped responding to normal control signals, resulting in an immediate disruption.
The conventional switched network operates with a distributed control plane, where packet-forwarding decisions are made locally by each switch based on static MAC address tables and predefined rules. During the flooding event, this configuration allowed all malicious packets to traverse the network freely, fully saturating the target interface without any centralized coordination or filtering. In contrast, the SDN-based architecture employs a logically centralized controller that dynamically manages flow rules. When the flood traffic began, the controller detected the abnormal connection requests and enforced deny-by-default policies through programmable flow tables. This prevented spoofed packets from reaching the endpoint and maintained normal power-sharing operations across the microgrids, as previously presented in Scenario 2. The observed results emphasize how SDN's global visibility and policy enforcement enhance network resilience and mitigate cyber-physical disturbances under high-rate denial-of-service conditions.

5.4. Redundancy and Failover Mechanisms in the SDN Network

SDN architecture significantly enhances the resilience and redundancy of the cyber-physical system by providing multiple communication paths for network traffic, as illustrated in Figure 15. Specifically, SDN enables automatic failover capabilities, ensuring uninterrupted communication in the event of a link failure. For instance, under normal operating conditions, the route for commands sent from the host to MCU1 follows a primary path through Switch 3 and Switch 2 until it reaches MCU1. However, if a link failure occurs between Switch 2 and Switch 3, the SDN controller instantly redirects the packet flow through an alternate failover path—traversing Switch 3, Switch 4, Switch 1, and then Switch 2—thanks to the loop configuration connecting all switches. This automated rerouting capability maintains network integrity, system reliability, and operational continuity without human intervention, significantly improving system robustness and resilience.
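The rerouting described above can be illustrated as a shortest-path search over the four-switch ring. This is a simplified sketch under the ring topology of Figure 15; the controller's actual path-selection algorithm is not specified here:

```python
from collections import deque

# Ring topology of the four SDN switches; links are bidirectional.
LINKS = {(1, 2), (2, 3), (3, 4), (1, 4)}

def shortest_path(src, dst, failed=frozenset()):
    """BFS over the switch graph, skipping failed links; returns the hop
    sequence from src to dst, or None if dst is unreachable."""
    up = {l for l in LINKS
          if l not in failed and tuple(reversed(l)) not in failed}
    adj = {}
    for a, b in up:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

With all links healthy, traffic from Switch 3 toward MCU1 takes the direct hop 3→2; if link 2–3 fails, the search recovers the failover route 3→4→1→2, matching the behavior described above.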

6. Conclusions

In conclusion, this paper has presented and experimentally validated a cyber-physical architecture that integrates SDN with interconnected DC microgrids. The proposed SDN-enabled framework enhanced efficiency and resilience through traffic isolation, deterministic routing for protection and control, and prioritized handling of high-rate sensor data under fast-rise pulsed-load disturbances. Performance was evaluated using packet captures, which included end-to-end delay, jitter, and packet-loss percentage, as well as synchronized electrical measurements, confirming accurate power sharing and stable operation. The system remained resilient under a deliberate denial-of-service attack, as deny-by-default policies effectively blocked unauthorized traffic while maintaining essential control and protection flows. These results verify the practical effectiveness of SDN-based security mechanisms and provide a foundation for more advanced network-security testing. Although the experiments used a specific hardware platform, the proposed architecture is universal and scalable: its core ideas of centralized control, dynamic flow orchestration, and deny-by-default security can be migrated to other SDN devices and real-time control platforms that comply with the OpenFlow standard. Centralized control also streamlined network management and enabled rapid configuration adjustments, establishing a robust basis for secure, efficient, and adaptive multi-microgrid deployments.
Future work will include more comprehensive testing under link failure conditions and the evaluation of additional types of cyberattacks, such as Man-in-the-Middle (MITM) and False Data Injection (FDI) attacks, as well as other advanced attack strategies. These enhancements will allow us to further validate the robustness and security of the proposed architecture.

Author Contributions

Conceptualization, R.A.T. and A.A.; methodology, R.A.T. and A.A.; software, R.A.T.; validation, R.A.T.; formal analysis, R.A.T. and A.A.; resources, R.A.T., A.A. and O.A.M.; data curation, R.A.T. and A.A.; writing—original draft preparation, R.A.T., A.A. and S.H.M.; writing—review and editing, R.A.T., A.A. and O.A.M.; visualization, R.A.T. and A.A.; supervision, O.A.M.; project administration, O.A.M.; funding acquisition, O.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by grants from the U.S. Army DEVCOM ARL Army Research Office award # W911NF-23-2-0229. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army or the U.S. Government. The authors are with the Energy Systems Research Laboratory, Florida International University, Miami, FL 33174 (Corresponding author: Osama A. Mohammed, mohammed@fiu.edu).

Data Availability Statement

Data is available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CPS   Cyber-Physical System
DG    Distributed Generation
DoS   Denial of Service
E2E   End-to-End
ESS   Energy Storage System
MCU   Microcontroller Unit
MPPT  Maximum Power Point Tracking
PCC   Point of Common Coupling
PPL   Pulsed Power Load
PV    Photovoltaic
RES   Renewable Energy Source
SDN   Software-Defined Networking

Figure 1. Overall Proposed System Architecture.
Figure 2. Proposed System Multi-Microgrid Architecture.
Figure 3. SDN System Architecture Layers.
Figure 4. SDN-Integrated Multi-Microgrid Architecture.
Figure 5. Network topology visualization and Packet Count from the SDN controller.
Figure 6. Experimental Setup of the Smart Grid Testbed.
Figure 7. Hardware Implementation of the Proposed System. (a) Physical System Hardware Setup; (b) Communication System Hardware Setup.
Figure 8. DC bus voltage and Power Sharing Waveforms during Scenario 1.
Figure 9. HTTP/TCP packet exchanges through SDN during Scenario 1 (a) case 1: MG3 load changes; (b) case 2: MG2 generation and MG1 load changes; (c) case 3: MG2 load changes.
Figure 10. DC bus voltage and Power Sharing Waveforms during Scenario 2.
Figure 11. HTTP/TCP packet exchanges through SDN during Scenario 2: Pulsed load injection.
Figure 12. Command-line invocation of hping3 used to start flood (target: 192.168.5.110).
Figure 13. Wireshark packet list during the attack, showing a sudden surge in packets and fake IP addresses.
Figure 14. DC bus voltage and Power Sharing Waveforms during a cyber-attack on the conventional switched network.
Figure 15. SDN Network Visualization Showing Primary and Failover Paths for Enhanced Redundancy.
Table 1. Proposed DC multi-microgrid system component ratings.

Parameter Name            Parameter Value
Microgrid I
  DC bus rated voltage    60 V
  Local load I            360–720 W
  PV rated output power   5 kW
  Wind Turbine Emulator   5 kW
  Battery rating          12 V, 100 Ah
Microgrid II
  DC bus rated voltage    60 V
  Local load II           50–380 W
  PV rated output power   5 kW
  Battery rating          12 V, 100 Ah
  Ultracapacitor          51.8 V, 188 F
Microgrid III
  DC bus rated voltage    60 V
  Local load III          110–230 W
  PV rated output power   5 kW
  Battery rating          12 V, 100 Ah
External Pulse Load       1–2.5 kW
Table 2. End-to-End Communication Delay and Packet Loss Percentage for both Scenarios.

Performance Metric                          Scenario 1    Scenario 2
E2E communication delay, mean (t̄)          23.38 ms      23.52 ms
E2E communication delay, std. dev. (σ)      6.5 ms        7.87 ms
Packet Loss Percentage                      0%            0%
Share and Cite

Taha, R.A.; Aghmadi, A.; Moustafa, S.H.; Mohammed, O.A. Leveraging Software-Defined Networking for Secure and Resilient Real-Time Power Sharing in Multi-Microgrid Systems. Electronics 2025, 14, 4518. https://doi.org/10.3390/electronics14224518