Article

High-Redundancy Design and Application of Excitation Systems for Large Hydro-Generator Units Based on ATS and DDS

1
School of Automation, Wuhan University of Technology, Wuhan 430070, China
2
Three Gorges Intelligent Control Technology Co., Ltd., Wuhan 430070, China
*
Author to whom correspondence should be addressed.
Electronics 2025, 14(15), 3013; https://doi.org/10.3390/electronics14153013
Submission received: 19 May 2025 / Revised: 16 July 2025 / Accepted: 21 July 2025 / Published: 29 July 2025
(This article belongs to the Special Issue Power Electronics in Renewable Systems)

Abstract

The large-scale integration of stochastic renewable energy sources necessitates enhanced dynamic balancing capabilities in power systems, positioning hydropower as a critical balancing asset. Conventional excitation systems utilizing hot-standby dual-redundancy configurations remain susceptible to unit shutdown events caused by regulator failures. To mitigate this vulnerability, this study proposes a peer-to-peer distributed excitation architecture integrating asynchronous traffic shaping (ATS) and Data Distribution Service (DDS) technologies. This architecture utilizes control channels of equal priority and achieves high redundancy through cross-communication between discrete acquisition and computation modules. This research advances three key contributions: (1) design of a peer-to-peer distributed architectural framework; (2) development of a real-time data interaction methodology combining ATS and DDS, incorporating cross-layer parameter mapping, multi-priority queue scheduling, and congestion control mechanisms; (3) experimental validation of system reliability and redundancy through dynamic simulation. The results confirm the architecture’s operational efficacy, delivering both theoretical foundations and practical frameworks for highly reliable excitation systems.

1. Introduction

Hydropower, as a clean energy source, plays a vital role in conventional power systems and is the world’s second-largest source of electricity generation. Amid the global shift toward low-carbon energy, the growing integration of variable renewable energy sources such as wind and solar power has highlighted hydropower’s unique ability to stabilize the grid. As a result, it has become a key element in balancing dynamic supply and demand in modern power grids. Currently, large hydro-generator units primarily utilize self-excited static excitation systems, where excitation power is generated by transformers connected at the generator terminals. This power is then supplied to the generator’s field windings through controlled rectification. However, traditional excitation system designs lack distributed control modules within both the excitation power unit and the regulating unit. Failures in excitation regulation units therefore necessitate the immediate shutdown of the entire system, highlighting the reliability limitations of centralized control. Improving the stability and dependability of excitation systems remains the main goal driving technological progress in this area.
Compared to centralized control architectures, distributed architectures have been widely adopted to enhance system reliability. In the context of distributed system architecture design, Reference [1] introduces a distributed application architecture based on the GridAPPS-D platform, which coordinates the control of distributed energy resources by standardizing APIs and hierarchical architecture. The authors of [2] designed a distributed control architecture for parallel Buck-Boost converters, employing adaptive droop control parameters. The local controllers operate independently without real-time central communication, thereby reducing the communication burden. In harmonic compensation and power control, the master–slave distributed coordination strategy presented in Reference [3] empowers local controllers with autonomous decision-making capabilities, effectively mitigating the risks associated with single-point failures. The modular generation unit design in Reference [4] supports localized fault isolation, preventing system-wide collapse. Regarding reactive power compensation, Reference [5] systematically summarizes microgrid reactive compensation technologies, providing theoretical foundations for decentralized reactive control modules in distributed architectures. The authors of [6] propose a chain-connected STATCOM employing a redundant master sub-controller architecture, where sub-controllers can be dynamically dispatched to maintain DC bus voltage balance during main controller failures, significantly enhancing fault tolerance. In terms of microgrid integration and device interconnection, Reference [7] combines network reconfiguration (NR) and soft open points (SOPs) to overcome traditional distribution network capacity limitations and enhance fault recovery flexibility. 
The authors of [8] introduce a modular multi-port DC power electronic transformer (MDCPET) that employs an input-independent-output-series topology to eliminate single-point failures inherent in centralized designs, supporting “plug-and-play” expandability and improving system robustness. Regarding synchronization and coordinated control, Reference [9] proposes a distributed phase-locked loop (PLL) with a GPS-synchronized parallel mechanism to reduce reliance on communication links, ensuring frequency stability even during localized communication disruptions. The authors of [10] utilize virtual synchronous generator (VSG) control to emulate the inertial characteristics of traditional synchronous generators, enabling autonomous cooperative control of inverters. This approach minimizes dependence on central controllers, strengthening frequency stability and reliability in islanded microgrids. In terms of intelligent control and cybersecurity, Reference [11] employs a hierarchical communication architecture to distribute control functions to local controllers, reducing the risk of system-wide collapse caused by cyberattacks targeting central nodes. Regarding voltage regulation and novel devices, Reference [12] utilizes a consensus algorithm to coordinate multiple electric springs with BC (ESBC) to suppress voltage fluctuations. Its distributed cooperative strategy ensures bus voltage stability even under variations in load characteristics. The authors of [13] propose a novel decentralized stochastic recursive gradient (DSRG) method for solving the optimal power flow (OPF) problem in multi-area power systems. This method achieves global optimization through local decisions and minimizes information exchange, avoiding the data sharing requirements of traditional centralized approaches. 
The core steps of DSRG include initialization, local gradient calculation, stochastic update, recursive aggregation, and a consensus algorithm, with convergence checks ensuring the validity of the optimization results. The authors of [14] propose a hybrid approach combining an artificial neural network (ANN) with a backstepping controller to enhance the maximum power point tracking (MPPT) performance of photovoltaic generation systems (PVGSs). This method employs a hybrid strategy of particle swarm optimization (PSO) and genetic algorithm (GA) to optimize the weights and biases of the ANN for accurate prediction of the PV reference voltage. The backstepping controller ensures system stability and dynamic tracking performance by recursively constructing Lyapunov functions. Although distributed architectures and coordinated strategies have become a key research focus in the industry, studies specifically targeting distributed excitation systems remain scarce.
To further enhance the redundancy of excitation systems, an effective and practical technical approach involves eliminating hierarchical differences between control channels and ensuring that all control nodes operate in real-time equivalence, thereby establishing a functionally equivalent redundant control architecture within the system. Concurrently, this peer-to-peer redundant control framework requires seamless data flow across acquisition and computing units. Consequently, developing a robust distributed data distribution mechanism with strong real-time capabilities and high reliability becomes pivotal to realizing an equivalent distributed excitation control system.
Time-Sensitive Networking (TSN), with its microsecond-level transmission precision and predictable communication performance, demonstrates unique advantages in precision industrial control scenarios. Notably, in industrial control environments such as excitation systems, where stringent real-time responsiveness and substantial electromagnetic interference coexist, the collaborative architecture design of TSN and Data Distribution Service (DDS) has achieved significant technological breakthroughs: TSN establishes a hard real-time transmission channel at the physical layer, providing deterministic latency guarantees for data flow. DDS, as a middleware framework, offers semantic-rich data orchestration and intelligent scheduling capabilities at the logical layer. This vertically integrated “deterministic pipeline + semantic interconnection” architecture fundamentally addresses longstanding challenges in distributed excitation control systems, including the real-time coordination of multi-node operations and reliable data interaction under complex electromagnetic conditions [15]. Meanwhile, DDS implements intelligent data filtering and dynamic routing optimization at the application layer [16]. This cross-layer collaborative mechanism effectively resolves the latency bottleneck inherent in traditional industrial communication architectures, signaling that network switching technology research is evolving towards innovative integration of TSN and DDS. In studies of Time-Sensitive Networking (TSN) traffic scheduling mechanisms, asynchronous traffic shaping (ATS) technology has garnered significant attention due to its inherent independence from global clock synchronization. The authors of [17] broke through traditional synchronization constraints by innovatively developing an Urgency-Based Scheduling (UBS) algorithm. 
Through formal temporal modeling and large-scale simulation verification, its tight delay boundaries satisfy the hybrid transmission requirements of high-bandwidth sensor data and control flows in in-vehicle backbone networks, providing a low-complexity deterministic transmission solution for automotive active safety scenarios. This pioneering work inspired subsequent theoretical advancements, notably the Pi-regulation theoretical framework proposed in [18]. By integrating network calculus with max-plus algebra, this framework has fundamentally revealed the mathematical equivalence between min-plus and max-plus algebras in traffic constraint formulations while establishing a minimal interleaving regulator model that provides theoretical grounding for asynchronous traffic shaping techniques, such as UBS. In collaborative research on heterogeneous shaping mechanisms, Reference [19] established, for the first time, a quantitative analysis model for hybrid TAS-ATS-CBS architectures. By constructing a worst-case latency upper-bound calculation framework, it revealed the gap in temporal analysis for ATS-CBS hybrid queue scenarios, providing both experimental evidence and a guiding framework for researchers and practitioners to select TSN sub-protocols tailored to application requirements.
As the number of communication nodes increases, the logical structure of distributed systems becomes more complex, rendering traditional communication middleware inadequate. Consequently, the data-centric DDS holds high application prospects. For example, in the power systems domain, Reference [20] proposed a DDS-based publish/subscribe protocol to implement dynamic monitoring in digital substations. By partitioning a multi-domain network structure to reduce discovery traffic and designing mapping rules between IEC 61850 [21] and DDS objects, the study experimentally validated the dynamic identification capability of IED devices (within 0.7 s), suitable for dynamic configuration scenarios such as microgrids. The authors of [22] addressed large-scale DDS data transmission scenarios by proposing a hierarchical priority scheduling and parallel processing mechanism. It categorizes monitoring tasks into three levels—basic storage, simple error detection, and complex data mining—and ensures real-time performance through multi-queue allocation. The system supports in-depth error analysis and monitoring while maintaining low latency. In intelligent transportation, Reference [23] developed a DDS and cloud computing-based validation framework for autonomous driving systems. By integrating FMI co-simulation, 3D vehicle simulators, and HILS hardware interfaces, DDS facilitates communication between distributed modules. This framework enables efficient resource allocation and multi-scenario validation, thereby lowering the cost and complexity associated with traditional hardware-in-the-loop simulations. To address emergency event response requirements, Reference [24] introduced an emergency QoS mechanism into the DDS publish/subscribe middleware, designing multi-level scheduling queues to prioritize high-urgency data. By optimizing scheduling algorithms, inefficiencies in traditional single-queue insertion are resolved, ensuring immediate responses to critical tasks.
The integration of DDS with ATS can enhance the efficiency and reliability of data distribution while improving system flexibility and security. However, research on the convergence of DDS and ATS remains limited, both domestically and internationally. It was not until 2023 that the Object Management Group (OMG) released a test version of the DDS extension specification 1.0, based on TSN [25]. Current studies on the integration of DDS and TSN are still in their initial stages.
To achieve cross-redundant links in a peer-to-peer distributed excitation system, the key lies in realizing high-precision data synchronization and efficient data distribution. This study investigates dynamic multi-priority queue scheduling algorithms and communication congestion control strategies that integrate ATS and DDS technologies, thereby completing a high-availability design to provide critical technical support for implementing peer-to-peer distributed excitation systems.
The primary contributions of this study include the following: (1) proposing a peer-to-peer distributed redundant architecture and quantitatively verifying its reliability advantages; (2) conducting research on core technologies for peer-to-peer distributed excitation systems using real-time data exchange methods based on ATS and DDS fusion; (3) validating the system’s effectiveness through real-time simulation.

2. High-Redundancy Architecture Design for Peer-to-Peer Distributed Excitation Systems

2.1. Traditional Distributed Excitation System Redundant Architecture

Excitation systems employ various excitation methods. The self-shunt static excitation system typically comprises an excitation power unit and an excitation regulation unit, supplemented by auxiliary components such as a de-excitation device. A simplified system architecture is illustrated in Figure 1.
The traditional distributed excitation system features a regulating cabinet containing two regulating unit control channels and power cabinets, each housing one power unit control channel; a fiber-optic communication network diagram is shown in Figure 2. Although the system incorporates five control channels capable of excitation control, the power unit control channels lack full excitation functionality and must therefore operate under a two-tier priority architecture: the two excitation regulator control channels constitute high-priority control, while the three power unit control channels serve as low-priority control. The system defaults to high-priority dual-channel control, with the low-priority power unit control channels assuming authority only upon complete failure of both high-priority channels. Within any single priority group, control channels dynamically determine control authority through real-time communication status detection: the channel maintaining normal communication with the largest number of power cabinets is selected first, and when multiple channels exhibit identical communication integrity, switching follows the preset control channel priority sequence.
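The intra-group arbitration rule described above can be sketched as a simple key-based selection. This is our own illustration, not code from the paper; the tuple encoding and function name are assumptions.

```python
def select_channel(channels):
    """Arbitration sketch for one priority group: pick the channel that
    still communicates with the most power cabinets; ties are broken by
    the preset priority order (lower rank value = higher preset priority).

    `channels` is a list of (preset_rank, healthy_cabinet_links) tuples.
    """
    # Sort key: most healthy links first, then best preset rank
    return min(channels, key=lambda c: (-c[1], c[0]))
```

For example, a channel reaching three power cabinets is preferred over one reaching two, regardless of its preset rank; only equal link counts fall back to the preset sequence.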
Focusing on single control channel failures, for the sake of analytical simplicity, it is assumed that the self-failure probabilities of the three internal modules within a control channel and the probability of communication abnormalities are all equal to p.
For the power unit control channel, the acquisition failure probability comprises two components: the self-failure of the acquisition module and the communication failure between the acquisition module and the calculation module. Let the acquisition failure probability of the power unit be P1; then, the failure probability is expressed as:
P_1 = 1 - (1 - p)^2
Calculation failure includes only self-failure, with a probability of p.
Execution failure also comprises two components: the self-failure of the execution module and the communication failure between the execution module and the calculation module. The failure probability is, thus, expressed as:
P_2 = 1 - (1 - p)^2
Since the necessary and sufficient condition for a control channel to remain operational is that no failures occur, the failure probability P3 of a single power unit control channel is expressed as:
P_3 = 1 - (1 - p)^5
For the regulating unit control channel, the probabilities of acquisition failure and calculation failure are consistent with the failure probability analysis of the power unit control channel presented earlier. The necessary and sufficient condition for a regulating unit control channel to maintain operation is the absence of all failures; consequently, the failure probability P4 of a single regulating unit control channel is expressed as:
P_4 = 1 - (1 - p)^3
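The failure probabilities above follow directly from independent series failure sources. A minimal Python sketch (function names are ours, not from the paper) reproduces the expressions for P1 through P4:

```python
def p_acq_or_exec(p):
    """P1 and P2: a module plus its link to the calculation module fails
    if either the module itself or the communication link fails."""
    return 1 - (1 - p) ** 2

def p_power_channel(p):
    """P3: traditional power unit control channel with five independent
    failure sources (three modules + two links), all of which must be absent."""
    return 1 - (1 - p) ** 5

def p_regulating_channel(p):
    """P4: traditional regulating unit control channel with three
    independent failure sources."""
    return 1 - (1 - p) ** 3
```

At p = 0.01, for instance, a power unit control channel already fails with probability around 4.9%, which motivates the redundancy redesign in Section 2.2.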

2.2. Peer-to-Peer Distributed Excitation System Redundant Architecture

The redundant architecture of the peer-to-peer distributed excitation system proposed in this study innovatively establishes module-level redundancy mechanisms within control channels. In this architecture, all control channels adopt identical hardware and software configurations. By creating cross-channel data interaction interfaces, the computing modules in each control channel can access monitoring data from the data acquisition modules of other control channels, while the actuation modules in each channel also receive control command output from the computing modules of different channels. A schematic diagram of this redundant architecture is shown in Figure 3.
For the reliability analysis of a single power unit control channel, it is assumed that the probabilities of module self-failure and communication failure are both p. Accordingly, the failure probability P5 of a single power unit control channel in the peer-to-peer distributed excitation system is:
P_5 = p + 2(p^10 - 2p^9 - 2p^6 + 4p^5) - 2p(p^10 - 2p^9 - 2p^6 + 4p^5) - (p^10 - 2p^9 - 2p^6 + 4p^5)^2 + p(p^10 - 2p^9 - 2p^6 + 4p^5)^2
A regulating unit control channel is taken out of service when its local calculation module fails or when that module can no longer acquire data; its failure probability is expressed as:
P_6 = p + 4p^5 - 6p^6 + 2p^7 - 2p^9 + 3p^10 - p^11
Figures 4 and 5 plot the failure probability curves of a single power unit control channel and of a single regulating unit control channel, respectively, comparing the traditional distributed excitation system with the peer-to-peer distributed excitation system. Within the probability interval [0, 1], the failure probability curve of each channel type in the peer-to-peer system always lies below that of the traditional system, with markedly lower values. These comparisons indicate that the peer-to-peer distributed excitation system designed in this study offers a clear reliability advantage, and the quantitative analysis of the probability curves verifies the architecture’s effectiveness in reducing the control system’s failure rate.
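The ordering of the curves can be spot-checked numerically. The sketch below uses our reconstruction of the P5 and P6 polynomials (writing the repeated factor as a helper variable) together with the traditional-channel formulas from Section 2.1, and asserts the peer-to-peer failure probabilities stay below the traditional ones at several values of p:

```python
def p5_peer_power(p):
    # Peer-to-peer power unit control channel (reconstructed expression),
    # with q = p^10 - 2p^9 - 2p^6 + 4p^5 as the repeated factor
    q = p**10 - 2*p**9 - 2*p**6 + 4*p**5
    return p + 2*q - 2*p*q - q**2 + p*q**2

def p6_peer_regulating(p):
    # Peer-to-peer regulating unit control channel
    return p + 4*p**5 - 6*p**6 + 2*p**7 - 2*p**9 + 3*p**10 - p**11

def p3_trad_power(p):
    return 1 - (1 - p)**5

def p4_trad_regulating(p):
    return 1 - (1 - p)**3

# Spot-check the claimed dominance over a range of module failure rates
for p in (0.001, 0.01, 0.05, 0.1):
    assert p5_peer_power(p) < p3_trad_power(p)
    assert p6_peer_regulating(p) < p4_trad_regulating(p)
```

At small p, the peer-to-peer channel failure probability is dominated by the single term p, since the cross-redundant paths only fail with probability on the order of p^5.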

3. Research on Data Exchange Technologies and High-Availability Function Design for Peer-to-Peer Distributed Excitation Systems

3.1. Introduction to Asynchronous Traffic Shaping (ATS) Technology

The IEEE 802.1Qcr protocol [26], officially released in 2020, introduced an asynchronous traffic shaping (ATS) mechanism that breaks through traditional synchronous scheduling constraints. This mechanism utilizes an Urgency-Based Scheduler (UBS) to establish a hierarchical queue management architecture, primarily comprising traffic shaping queues and shared queues. As illustrated in Figure 6, during initial data processing, packets undergo traffic filtering and classification modules, where they are categorized into two types: guaranteed-transmission packets and best-effort packets. The UBS core components include multiple shaping queues and priority queues. The shaping queues utilize token bucket traffic policing technology to enforce bandwidth control. Simultaneously, by monitoring the dwell time of packets in rate adjustment queues, the system dynamically decides whether to reallocate guaranteed packets to corresponding hierarchical processing queues.
In the field of Time-Sensitive Networking (TSN), the asynchronous traffic shaping (ATS) mechanism eliminates dependency on network-wide clock synchronization systems by employing autonomous time reference maintenance and event-driven calibration technologies, establishing a decentralized temporal coordination framework. Through distributed timestamp management and dynamic priority allocation, it provides an innovative solution for deterministic transmission in heterogeneous network environments. The workflow involves three sequential phases: input ports first perform traffic shaping and classification via the Per-Stream Filtering and Policing (PSFP) mechanism to preliminarily filter and regulate data streams, directing qualified traffic into independent shaping queues configured with algorithms like Length-Based Rate Queuing (LRQ) or Token Bucket Emulation (TBE) to smooth traffic rates and suppress burst interference. Subsequently, the scheduler dynamically maps stream classifications to internal priorities by calculating an urgency metric derived from the ratio of a frame’s remaining delay budget to a predefined threshold, ensuring highly time-sensitive traffic is prioritized for shared queues. Finally, shared queues execute conflict-free asynchronous transmission using a strict priority (SP) scheduling mechanism, optimized by real-time monitoring of output port link conditions to select optimal transmission windows. This technology excels in large-scale network deployments or environments where precise clock synchronization is impractical.
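The per-stream shaping step can be illustrated with a minimal token-bucket eligibility check in the spirit of TBE. This is a simplified sketch, not the IEEE 802.1Qcr algorithm itself; class and parameter names are illustrative.

```python
class TokenBucket:
    """Token Bucket Emulation sketch: a frame becomes eligible for the
    shared queue once enough tokens (counted in bytes) have accumulated
    at the configured rate, with bursts capped at the bucket size."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes  # start with a full bucket
        self.last = 0.0

    def eligible(self, frame_len, now):
        # Refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_len:
            self.tokens -= frame_len
            return True
        return False
```

A frame that arrives while the bucket is empty is simply held in its shaping queue until the refill makes it eligible, which is what smooths bursts before the strict-priority stage.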

3.2. Introduction to Data Distribution Service

The DDS (Data Distribution Service) protocol is a data-centric, brokerless middleware technology based on the publish/subscribe model, designed to provide efficient, reliable, and real-time data communication services for distributed systems. DDS organizes data sharing into domains: publishers send data into the domain space, and subscribers receive the data they require from it. The data in the domain space are stored in the local memory of all participating nodes, so an exchange behaves as if locally stored data were being operated on, yielding very low publish and receive latency and satisfying demanding real-time requirements. Furthermore, the nodes in a DDS network are distributed peers with no master–slave relationships, supporting one-to-one, one-to-many, and many-to-many communication.
The DDS standard provides a set of QoS (Quality of Service) mechanisms. The main policies include the following:
  • Reliability: controls the reliability of data transmission. The Best Effort option makes every effort to deliver data but offers no guarantee; the Reliable option ensures delivery and retransmits lost data, suiting scenarios that require high reliability;
  • Durability: controls whether data persist after publication, ideal for scenarios that require access to historical data;
  • History: controls how data samples are stored, suitable for scenarios that require data caching;
  • Latency Budget: defines the maximum allowable delay for data transmission, helping to optimize network and system performance so that data arrive within the specified time.
Together, these mechanisms let developers control the reliability, real-time performance, storage, and resource management of data transmission, adapting DDS to different application scenarios.
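As a rough illustration only (this is not any vendor’s DDS API), the policies above can be modeled as a plain QoS profile structure attached to a topic:

```python
from dataclasses import dataclass
from enum import Enum

class Reliability(Enum):
    BEST_EFFORT = 0   # no delivery guarantee
    RELIABLE = 1      # lost samples are retransmitted

class Durability(Enum):
    VOLATILE = 0         # samples are not kept for late joiners
    TRANSIENT_LOCAL = 1  # the writer keeps samples for late-joining readers

@dataclass
class QosProfile:
    reliability: Reliability = Reliability.BEST_EFFORT
    durability: Durability = Durability.VOLATILE
    history_depth: int = 1          # KEEP_LAST depth (data caching)
    latency_budget_ms: float = 0.0  # hint for the maximum tolerated delay
```

A control-command topic in an excitation system would plausibly use `Reliability.RELIABLE` with a tight `latency_budget_ms`, while bulk monitoring data could stay best-effort.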
Within the DDS architecture, clock management and synchronization constitute critical components, particularly in distributed real-time systems. They are essential for ensuring data consistency, temporal accuracy of events, and fulfillment of QoS requirements. Given that the DDS standard itself does not mandate a specific clock synchronization protocol, it permits the integration of external clock synchronization solutions to support time management in distributed systems. The primary integrated synchronization approach utilizes the IEEE 1588 protocol [27] to achieve sub-microsecond synchronization accuracy. Hardware timestamping is utilized to minimize latency impacts from operating system scheduling and protocol stack processing. The master clock periodically broadcasts synchronization messages, and the slave clock adjusts its local time accordingly. The primary workflow for clock synchronization is as follows:
  • The master clock broadcasts a Sync message, marking the transmission time T1;
  • The slave clock receives the Sync message, recording the reception time T2;
  • The slave clock sends a Delay Request message, marking the transmission time T3;
  • The master clock receives the Delay Request message, recording the reception time T4;
  • Calculate the clock offset;
    offset = ((T_2 - T_1) - (T_4 - T_3)) / 2
  • Adjust the local clock to align it with the master clock.
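The exchange above can be condensed into a small sketch. The standard IEEE 1588 formulas assume a symmetric network path; the function names here are our own.

```python
def ptp_offset(t1, t2, t3, t4):
    """IEEE 1588 two-step exchange: Sync sent at t1 (master), received at
    t2 (slave); Delay_Req sent at t3 (slave), received at t4 (master).
    Returns the slave-minus-master clock offset, assuming a symmetric path."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

def ptp_mean_path_delay(t1, t2, t3, t4):
    """One-way mean path delay from the same four timestamps."""
    return ((t2 - t1) + (t4 - t3)) / 2.0
```

For a slave clock running 5 units ahead over a link with a 2-unit one-way delay, the exchange recovers an offset of 5 and a mean path delay of 2, and the slave subtracts the offset to align with the master.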

3.3. ATS and DDS Integration Technology

In the joint scheduling system of ATS and DDS, DDS is responsible for QoS management at the application layer, such as data reliability and delay budgeting, while ATS is responsible for network traffic scheduling, such as traffic priority scheduling and traffic shaping. To ensure efficient data transmission between ATS and DDS during cross-layer collaboration, it is necessary to establish mapping rules between DDS QoS and ATS mechanisms, ensuring that ATS network scheduling can meet the real-time requirements of the DDS application layer.
The designed peer-to-peer distributed excitation system leverages the integration of ATS and DDS technologies, employing a five-layer software architecture, as depicted in Figure 7. The Driver Layer facilitates resource access via hardware interfaces, including AD drivers and memory drivers. Building upon the operating system kernel, the Transport Layer achieves efficient cross-node data distribution and real-time communication through the synergistic fusion of DDS middleware and ATS capabilities, specifically multi-priority traffic scheduling and traffic shaping. The Control Layer orchestrates multi-node resources via distributed data exchange management and communication management modules, thereby enabling peer-to-peer coordination. Finally, the Application Layer implements core algorithms such as excitation control and limit protection, establishing a decentralized control loop.
To meet these requirements, traffic scheduling must adopt a multi-priority scheduling mechanism. This is achieved through proxy nodes to ensure prioritized transmission of HRT (hard real-time traffic) data for low-latency demands, while simultaneously balancing SRT (soft real-time traffic) performance and mitigating the impact of BE (best-effort) traffic on network performance. Thus, the overall design architecture is depicted in Figure 8.
Among them, HRT traffic has the highest priority, mapped to high-priority traffic in ATS, ensuring low-latency transmission, suitable for critical task flows. SRT traffic is mapped to medium-priority traffic according to the latency budget QoS policy, with the system calculating priority weights based on the latency budget. BE data have the lowest priority and are mapped as best-effort traffic in ATS. The transmission rate is controlled through traffic shaping to prevent network congestion. The mapping rules are shown in Table 1.
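A minimal sketch of this mapping logic follows. The class names mirror the HRT/SRT/BE categories above, but the 10 ms latency-budget threshold for splitting SRT traffic is an assumption of ours; the paper's actual rules are given in Table 1.

```python
def ats_priority(traffic_class, latency_budget_ms=None):
    """Map a DDS traffic class to an ATS queue priority (0 = highest).
    Thresholds are illustrative, not taken from the paper."""
    if traffic_class == "HRT":
        return 0  # hard real-time: highest priority, low-latency path
    if traffic_class == "SRT":
        # Tighter latency budgets earn a numerically lower (better) priority
        if latency_budget_ms is not None and latency_budget_ms <= 10:
            return 1
        return 2
    return 3      # BE: best-effort, rate-limited by traffic shaping
```

In a real deployment the returned value would select one of the ATS shaping/priority queues on the egress port rather than being used directly.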
The specific implementation logic of the mapping rules between DDS QoS and ATS mechanisms is visually elaborated in the Traffic Scheduling Priority Mapping Process, as shown in Figure 9. Initiated with DDS Topic Tasks, this process explicitly delineates the end-to-end decision-making workflow—from QoS policy extraction to ATS priority queue allocation—encompassing the dynamic logic of priority determination and queue mapping for HRT/SRT/BE traffic categories based on QoS parameters (e.g., latency budget). It serves as a critical enabler for translating the theoretical mapping rules into actionable network scheduling practices. The remaining flows along with the integrated architecture of ATS and DDS systems can be found in Appendix B.

3.4. Multi-Priority Queue Scheduling Algorithm Introduction

The scheduler must ensure that HRT traffic is transmitted first in the highest-priority sequence, that SRT traffic is ordered by latency budget so that data with tighter deadlines are transmitted first, and that BE traffic is rate-controlled into the lowest-priority queue. The queue scheduling model must therefore adopt a strict priority queue mechanism. Based on these scheduling requirements, this study employs the strict priority queue with Delay Bound (SP2DB) scheduling model [28]. Refer to Appendix A for definitions of critical variables.
Suppose there exists a priority scheduling queue pool composed of k (k > 1) multi-level sequential queues I with fixed capacities. The priorities of these queues descend from I_1 through I_k. For any queue I_i (1 ≤ i ≤ k), its queue length and capacity limit are L_i and C_i, respectively. When a data packet attempts to enter the strict priority queue, the system verifies whether the real-time detected queue length has reached its capacity limit; if it has, the tail-drop policy is executed. When a data packet exits the strict priority queue, forwarding strictly adheres to the queue priority order.
Furthermore, assume there exists a sequence of N incoming data packets {p_1, p_2, …, p_N}, each carrying a desired upper bound d_i on its queuing delay, where i ranges from 1 to N. Let the arrival time of packet p_i at the strict priority queue be t_a(i) and its departure time be t_b(i). If t_b(i) - t_a(i) ≤ d_i, p_i is deemed successfully scheduled; otherwise, it is marked as failed. Based on this, the scheduling target, the packet acceptance rate S_d, is defined for the queue model by Equation (8):
$$S_d = \frac{\sum_{i=1}^{N} P_s(i)}{N} \tag{8}$$
Here, $P_s(i)$ indicates whether the i-th data packet is successfully scheduled, with $P_s(i) = 1$ for success and 0 otherwise. The scheduling objective of the queue model is to maximize the packet acceptance rate Sd.
Assume the current cached packets in queues I1, I2, and I3 are n, z, and m, respectively. For a pending data packet p(x) with a preset maximum cache depth threshold x (where n + z ≤ x < n + z + m), the actual queuing length depends on its injection target: if injected into I1, the actual queuing length is n; if injected into I2, the queuing length becomes n + z due to I1’s priority scheduling; if injected into I3, the queuing length is n + z + m.
Analysis shows that p(x) can be successfully scheduled when injected into either I1 or I2. Moreover, injecting p(x) into I2 enables I1 to prioritize packets with stricter delay requirements, thereby maximizing Sd and achieving optimal resource allocation under system delay constraints.
However, when a newly added data packet p(b) preemptively enters queue I1 and x < n + z + b at that moment, data packet p(x) will fail to be scheduled. Therefore, the scheduling algorithm must account for the impact of high-priority preemption on queuing delays.
As illustrated by the analysis above, the key to improving the packet acceptance rate lies in minimizing the queuing length of data packets under the influence of new packet preemption, thereby maximizing the likelihood of successful scheduling.
Define the maximum port packet forwarding rate as R and the fixed data transmission time interval as Δt; this is equivalent to assigning each data packet a processing time of Δt in the priority scheduling queue. Let the maximum queuing length of data packets be l, and map the queuing delay upper bound d in the strict priority queue to this maximum queuing length l:
$$l = \frac{d}{\Delta t} \tag{9}$$
Define the queue limit qi of queue Ii as the predicted queue length within ΔT after data packet p enters Ii; it serves as the reference value for the queue depth upon the packet's entry. Based on a dynamic comparison between this estimate and the packet's maximum allowable queuing length l, the system executes a hierarchical dynamic queue selection mechanism: proceeding from low to high priority, if qi ≤ l, the packet enters that queue; otherwise, it moves to the next higher-priority queue, up to the highest-priority tier. An appropriate queue limit must therefore be identified to improve the packet acceptance rate.
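The selection rule above can be sketched as a small helper (hypothetical; `queue_limits` holds the per-queue estimates qi, with index 0 being the highest-priority queue I1):

```python
def select_queue(queue_limits, l):
    """Pick a queue for a packet whose maximum queuing length is l.

    Scans from the lowest-priority queue upward and returns the index
    of the first queue whose predicted limit q_i does not exceed l;
    falls back to the highest-priority queue (index 0) if none fits.
    """
    for i in range(len(queue_limits) - 1, -1, -1):  # low -> high priority
        if queue_limits[i] <= l:
            return i
    return 0  # escalate to the highest-priority tier

# A packet tolerating l = 10 queued packets, with q = [3, 8, 15] for I1..I3
assert select_queue([3, 8, 15], 10) == 1    # I2 is the lowest queue with q <= l
assert select_queue([12, 14, 15], 10) == 0  # nothing fits: highest priority
```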
The SP2DB-EQ algorithm [28] analyzes the traffic characteristics of the historical monitoring period. Using the inbound traffic intensity from the preceding t statistical periods, it calculates the traffic intensity Vi of queue Ii during statistical period Tt+1. Define the arrival time of packet p at the switch during Tt+1 as Ts; the duration from Ts to the end of Tt+1 is ΔT. In this dynamic evaluation model, the system treats packets that continue to arrive within the ΔT window as the critical factor affecting delay accumulation for p.
Figure 10 shows the calculation framework for the queue limit qi, assuming the transmission duration does not exceed ΔT.
When the data packet p is assigned to the transmission channel of the highest-priority queue I1, the incoming data packet sequence in the subsequent statistical period T will not interfere with the queuing duration of p. In this case, the channel admission threshold is calculated based solely on the real-time depth parameter of the current buffer queue, i.e., q1 = L1.
If the data packet p is assigned to a non-highest-priority queue, it must wait until all buffered data packets in I1 are processed. At this point, a dynamic transmission model for I1 must be established: it starts from the injection of the current data packet into another queue and lasts until either the queued data packets in I1 are fully cleared or the monitoring statistical period Tt+1 terminates. During this dynamic process, data packets are continuously injected and forwarded. Key evaluation parameters include the traffic intensity V1 entering I1 during this process, the queue length L1, the duration of the dynamic process (JT1), and the total data packet count in the dynamic process (JP1).
When V1 < R, the term (R − V1) represents the net egress forwarding rate:
$$JT_1 = \begin{cases} \dfrac{L_1}{R - V_1}, & \dfrac{L_1}{R - V_1} < \Delta T \\[4pt] \Delta T, & \dfrac{L_1}{R - V_1} \ge \Delta T \end{cases}, \qquad JP_1 = JT_1 \times V_1 + L_1 \tag{10}$$
When V1 > R, L1 gradually increases. Denote by $T_1^C$ the time it takes for the queue length to grow from L1 to the maximum queue capacity C1:
$$T_1^C = \frac{C_1 - L_1}{V_1 - R} \tag{11}$$
If $T_1^C < \Delta T$, queue I1 will reach its maximum capacity C1 within the future ΔT period. In this case, the duration of the dynamic process and the total number of data packets in it are expressed as:
$$JT_1 = \Delta T, \qquad JP_1 = JT_1 \times R + C_1 \tag{12}$$
Conversely, if $T_1^C \ge \Delta T$, queue I1 will grow within the future ΔT period but will not reach its maximum capacity C1. In this case, the duration of the dynamic process and the total number of data packets in it are defined as:
$$JT_1 = \Delta T, \qquad JP_1 = JT_1 \times V_1 + L_1 \tag{13}$$
If the traffic intensity V1 is equal to the maximum rate value R, then within the future ΔT period, the actual length of the queue I1 will remain unchanged at L1. In this scenario, both the duration of the dynamic process and the total number of data packets in this process align with Equation (13).
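The three cases above (V1 < R, V1 > R, V1 = R) can be collected into a single helper, a sketch of Equations (10)–(13); the numeric values in the example are illustrative:

```python
def dynamic_process_I1(L1, C1, V1, R, dT):
    """Return (JT1, JP1) for the highest-priority queue I1.

    L1: current backlog, C1: queue capacity, V1: arrival intensity,
    R: egress forwarding rate, dT: remaining statistical window.
    """
    if V1 < R:
        # Backlog drains at net rate (R - V1); process ends early or at dT
        JT1 = min(L1 / (R - V1), dT)
        JP1 = JT1 * V1 + L1
    elif V1 > R:
        # Backlog grows; check whether capacity C1 is reached within dT
        T1C = (C1 - L1) / (V1 - R)
        JT1 = dT
        JP1 = JT1 * R + C1 if T1C < dT else JT1 * V1 + L1
    else:
        # V1 == R: backlog holds steady at L1
        JT1, JP1 = dT, dT * V1 + L1
    return JT1, JP1

# Draining case: 20 queued packets, R = 100 pps, V1 = 60 pps, window 1 s
assert dynamic_process_I1(20, 50, 60, 100, 1.0) == (0.5, 50.0)
```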
Furthermore, if data packet p enters queue I2, it must wait for the completion of the dynamic process in I1. Only when V1 < R does I2 gain the opportunity to forward packets. During the time required to send out the L2 buffered packets, L2/(R − V1) × V1 packets enter I1 and are forwarded. If V1 ≥ R, I2 remains idle until the condition improves. The queue limit q2 is then defined as:
$$q_2 = \begin{cases} JP_1 + L_2 + \dfrac{L_2}{R - V_1} V_1, & V_1 < R \\[4pt] JP_1 + L_2, & V_1 \ge R \end{cases}$$
When data packet p is assigned to a queue with a priority lower than I2, the system triggers the queue-emptying and waiting mechanism: it must wait until all historical data packets in queue channels I1 and I2 are fully forwarded. Define the dynamic sending process of I2 as starting from the injection of the current data packet and lasting until the channel becomes completely idle. During this period, data packets arrive at both I1 and I2, and the queued packets in I1 and I2 are transmitted; the traffic intensity of packets arriving during this process is V1 + V2. After a duration JT1, JT1 × V2 packets will have arrived at queue I2, and its actual queue length becomes $L_2'$. Let JT2 denote the duration of the dynamic process for I2 and JP2 the total number of packets in that process. Assuming the duration does not exceed ΔT, then:
$$L_2' = \begin{cases} L_2 + V_2 \times JT_1, & L_2 + V_2 JT_1 < C_2 \\[4pt] C_2, & L_2 + V_2 JT_1 \ge C_2 \end{cases}$$
Based on the foregoing analysis, the expression for the queue limit qi (1 ≤ i ≤ k) can be derived as:
$$q_i = \begin{cases} \sum\limits_{j=1}^{i-1} JP_j + L_i + \dfrac{L_i}{R - \sum_{j=1}^{i-1} V_j} \sum\limits_{j=1}^{i-1} V_j, & \sum\limits_{j=1}^{i-1} V_j < R \\[4pt] \sum\limits_{j=1}^{i-1} JP_j + L_i, & \sum\limits_{j=1}^{i-1} V_j \ge R \end{cases}$$
In the dynamic transmission process of queue Ii, the traffic intensity arriving at the queue is the sum of the individual traffic intensities. Let the queue length be $L_i'$. Denote by JPi the total number of data packets transmitted during the dynamic process of Ii and by JTi its duration. Assuming JTi does not exceed ΔT, the queue length is defined as:
$$L_i' = \begin{cases} L_i + V_i \sum\limits_{j=1}^{i-1} JT_j, & L_i + V_i \sum\limits_{j=1}^{i-1} JT_j < C_i \\[4pt] C_i, & L_i + V_i \sum\limits_{j=1}^{i-1} JT_j \ge C_i \end{cases}$$
If the traffic intensity is less than the switch egress forwarding rate, then:
$$JT_i = \begin{cases} \dfrac{L_i'}{R - \sum_{j=1}^{i} V_j}, & \dfrac{L_i'}{R - \sum_{j=1}^{i} V_j} < \Delta T - \sum\limits_{j=1}^{i-1} JT_j \\[4pt] \Delta T - \sum\limits_{j=1}^{i-1} JT_j, & \dfrac{L_i'}{R - \sum_{j=1}^{i} V_j} \ge \Delta T - \sum\limits_{j=1}^{i-1} JT_j \end{cases}, \qquad JP_i = JT_i \times \sum_{j=1}^{i} V_j + L_i'$$
If the traffic intensity exceeds the switch egress forwarding rate, the time it takes for the queue length to grow from $L_i'$ to the maximum queue capacity Ci is denoted $T_i^C$:
$$T_i^C = \frac{C_i - L_i'}{\sum_{j=1}^{i} V_j - R}$$
$$JT_i = \Delta T - \sum_{j=1}^{i-1} JT_j, \qquad JP_i = \begin{cases} JT_i \times R + C_i, & T_i^C < JT_i \\[4pt] JT_i \times \sum\limits_{j=1}^{i} V_j + L_i', & T_i^C \ge JT_i \end{cases}$$
If the traffic intensity equals the switch egress forwarding rate, then:
$$JT_i = \Delta T - \sum_{j=1}^{i-1} JT_j, \qquad JP_i = JT_i \times \sum_{j=1}^{i} V_j + L_i'$$

3.5. Communication Blocking Regulation Strategies

In the peer-to-peer distributed excitation system, redundancy functionality is implemented through cross-communication links. Compared to traditional architectures, the number of internal communication nodes significantly increases, with simultaneous and substantial improvements in both communication latency sensitivity and data throughput requirements. To achieve highly reliable peer-to-peer distributed system configurations, the critical approach is to adopt robust distributed communication technologies. Through congestion control strategies, the synchronization and coordination of internal link states across control units are ensured.
Based on this analysis, a priority index must be computed for each incoming data stream before insertion into the strict priority queues. During dequeuing, the highest-priority data are always retrieved from the queue first.
Let the attributes of data stream fi include the data type Type(fi) and the QoS policy set QoS(fi) = {Deadline, LatencyBudget, Reliability, Durability, …}. Define the strategy mapping function Φ(fi) as the mapping of the service flow to its corresponding service category. Then,
$$\Phi(f_i) = \begin{cases} \mathrm{HRT}, & Deadline_i \le T_{hrt} \ \text{or} \ Type(f_i) \in T_{critical} \\ \mathrm{SRT}, & LatencyBudget_i \le T_{srt} \\ \mathrm{BE}, & \text{otherwise} \end{cases}$$
Here, Thrt is the hard real-time deadline threshold, Tsrt is the soft real-time delay budget threshold, and Tcritical is the set of critical data types. These parameters are mapped to their corresponding priority queues via priority mapping.
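The mapping Φ(fi) can be sketched directly in code. The threshold values and the critical-type set below are illustrative assumptions, not values taken from this study:

```python
# Hedged sketch of the mapping function; thresholds are assumptions
T_HRT = 0.001          # hard real-time deadline threshold, seconds (assumed)
T_SRT = 0.010          # soft real-time latency budget threshold, seconds (assumed)
T_CRITICAL = {"excitation_cmd", "sync_signal"}  # hypothetical critical data types

def classify(data_type, deadline, latency_budget):
    """Map a DDS flow to its ATS traffic class (HRT / SRT / BE)."""
    if deadline <= T_HRT or data_type in T_CRITICAL:
        return "HRT"
    if latency_budget <= T_SRT:
        return "SRT"
    return "BE"

assert classify("sync_signal", 0.05, 0.05) == "HRT"  # critical type wins
assert classify("telemetry", 0.05, 0.008) == "SRT"   # tight latency budget
assert classify("log_upload", 1.0, 1.0) == "BE"
```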
For dynamic priority calculation, let the current scheduling time of data packet i be t and its delay upper bound be di. The remaining maximum waiting time is defined as:
$$s_i(t) = d_i - t$$
Since the remaining maximum waiting time is inversely proportional to priority, the dynamic priority index is computed by incorporating the estimated service time ωi, where ε denotes the permissible error on the maximum waiting time:
$$I_i(t) = \frac{1}{d_i - t - \omega_i + \varepsilon}$$
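A minimal sketch of this index follows; ε guards against division by zero as the slack di − t − ωi approaches zero, and the numeric values are illustrative:

```python
def priority_index(d_i, t, omega_i, eps=1e-6):
    """Dynamic priority: the smaller the remaining slack, the larger the index."""
    return 1.0 / (d_i - t - omega_i + eps)

# A packet with less remaining slack ranks higher in the strict priority queue
urgent = priority_index(d_i=0.010, t=0.008, omega_i=0.001)
relaxed = priority_index(d_i=0.050, t=0.008, omega_i=0.001)
assert urgent > relaxed
```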
After implementing real-time preemption for key traffic flows through the dynamic priority adjustment mechanism, the system further detects congestion states via a congestion detection mechanism. Subsequently, a token bucket-based dynamic flow control algorithm is employed to regulate the growth of low-priority traffic, thereby preventing congestion in high-priority data transmission.
The congestion detection employs a two-stage congestion discrimination method, performing judgments based on queue backlog status and delay proximity estimation. The queue performance analysis for DDS nodes must consider the following parameters: the current queue backlog length Li(t), the average packet arrival rate λi(t), the queue service rate μi(t), and the average queuing delay Di(t). Based on these parameters, the congestion state is defined as follows:
$$C_i(t) = \begin{cases} 1, & L_i(t) > L_{high} \\ 0, & L_i(t) < L_{low} \\ C_i(t-1), & \text{otherwise} \end{cases}$$
In the congestion detection mechanism, Lhigh and Llow represent the upper and lower backlog thresholds, respectively. The queuing delay Di(t) is estimated from the current queue backlog length Li(t) and the service rate μi(t), and is then compared against the delay thresholds to determine the congestion status.
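The hysteresis rule and a backlog-based delay estimate Di(t) ≈ Li(t)/μi(t) can be sketched as follows (threshold values are illustrative assumptions):

```python
class CongestionDetector:
    """Two-stage detection: backlog hysteresis plus delay estimation."""

    def __init__(self, L_high=80, L_low=20):
        self.L_high, self.L_low = L_high, L_low
        self.state = 0  # previous state C_i(t-1)

    def update(self, backlog, service_rate):
        # Stage 1: backlog hysteresis between L_low and L_high
        if backlog > self.L_high:
            self.state = 1
        elif backlog < self.L_low:
            self.state = 0
        # else: hold previous state C_i(t-1)
        # Stage 2: queuing delay estimated from backlog and service rate
        delay = backlog / service_rate if service_rate > 0 else float("inf")
        return self.state, delay

det = CongestionDetector()
assert det.update(100, 1000)[0] == 1  # above L_high: congested
assert det.update(50, 1000)[0] == 1   # inside the band: state held
assert det.update(10, 1000)[0] == 0   # below L_low: congestion cleared
```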
For traffic flow control, a token bucket mechanism is implemented: a token bucket with maximum capacity Bi is configured for each flow fi. Let Ti(t) denote the current token count and ri(t) the allocated transmission rate. The token update rule is defined as:
$$T_i(t + \Delta t) = \min\{B_i,\; T_i(t) + r_i(t)\Delta t\}$$
The sending rate is dynamically adjusted based on congestion status. The calculation formula for rate adjustment is defined as follows:
$$r_i(t + \Delta t) = \begin{cases} r_i(t)(1 - \alpha), & C_i(t) = 1 \\ \min\{r_i(t) + \beta,\; r_{i,max}\}, & C_i(t) = 0 \\ r_i(t), & \text{otherwise} \end{cases}$$
Here, α ∈ (0, 1) is the multiplicative decrease factor, β > 0 is the additive increment, and ri,max is the maximum allocable rate.
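The token update and the additive-increase/multiplicative-decrease (AIMD) rate rule can be combined in one sketch; the capacity, rates, α, and β below are illustrative assumptions:

```python
class TokenBucketAIMD:
    """Token bucket with AIMD rate adaptation (illustrative parameters)."""

    def __init__(self, B=1000.0, r=100.0, r_max=500.0, alpha=0.5, beta=10.0):
        self.B, self.tokens = B, B
        self.r, self.r_max = r, r_max
        self.alpha, self.beta = alpha, beta

    def tick(self, dt, congested):
        # AIMD: multiplicative decrease on congestion, additive increase otherwise
        if congested:
            self.r *= (1 - self.alpha)
        else:
            self.r = min(self.r + self.beta, self.r_max)
        # Token replenishment, capped at the bucket capacity B
        self.tokens = min(self.B, self.tokens + self.r * dt)

    def try_send(self, size):
        """Consume tokens if available; otherwise the packet must wait."""
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucketAIMD()
tb.tick(1.0, congested=True)
assert tb.r == 50.0   # 100 * (1 - 0.5)
tb.tick(1.0, congested=False)
assert tb.r == 60.0   # 50 + 10, below r_max
```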

3.6. High-Availability Functional Design

In the peer-to-peer distributed excitation system, each controller's acquisition module communicates with the computation modules in the other controllers; likewise, each computation module communicates with the execution modules in the other controllers. Liveness-probing communication links exist between modules of the same type, and all peer agent nodes of the same type share equal priority. When a node operates normally, the other nodes remain capable of receiving domain-specific data. If a node fails, its responsibilities are reassigned to peer nodes of the same type, and data consistency and reliability are ensured through backups of DDS node-published production data. Additionally, online agent nodes proactively send heartbeat DDS messages to standby peer nodes at regular intervals to maintain a keep-alive mechanism. When an online agent node encounters communication link failures or internal malfunctions, the standby agents initiate an election mechanism to designate a new primary agent, which subsequently assumes data forwarding duties. Figure 11 illustrates the architecture of the hot backup module system.
In this system, the online agent node broadcasts predefined topic information to the local area network (LAN) segment. This information serves as heartbeat liveness detection messages from the online agent node. By continuously transmitting these heartbeat signals, the online agent node declares its normal and active operational status to other components within the LAN, particularly standby peer agent nodes of the same type. The standby agent nodes persistently monitor the primary agent’s heartbeat detection messages and use a timer to record the intervals between received heartbeats. If the timer exceeds a predefined threshold without detecting any heartbeat from the online agent node, the standby agent node concludes that the primary node has failed or become unavailable. At this point, the standby agent node replaces it as the new online agent node. Figure 12 illustrates the timing diagram for the hot backup functionality.
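The heartbeat monitoring and timeout-triggered takeover described above can be sketched as a watchdog (the timeout value is an illustrative assumption; a real deployment would use the DDS liveliness QoS):

```python
import time

class StandbyAgent:
    """Standby node that monitors heartbeats and takes over on timeout."""

    def __init__(self, timeout=0.5):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()
        self.online = False  # starts as a standby agent

    def on_heartbeat(self):
        """Called whenever a heartbeat topic sample arrives from the primary."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Promote to online agent if the primary's heartbeat has timed out."""
        if time.monotonic() - self.last_heartbeat > self.timeout:
            self.online = True
        return self.online

agent = StandbyAgent(timeout=0.05)
agent.on_heartbeat()
assert agent.check() is False   # heartbeat fresh: remain standby
time.sleep(0.06)
assert agent.check() is True    # timeout exceeded: take over as online agent
```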
In the peer-to-peer distributed excitation system designed in this study, all five controllers operate at an equal hierarchical level. Each controller integrates signal acquisition and execution modules; however, outputs from homogeneous modules across different controllers may exhibit discrepancies due to interference or communication delays. To ensure system reliability, an optimization strategy is embedded within the redundancy framework to preserve superior data while eliminating inferior inputs. This involves filtering out transient interference or faulty signals and selecting the most representative data from multi-source inputs that align with actual operational conditions. By doing so, both high fault tolerance and precise, rapid excitation regulation are maintained.
To fulfill these requirements, homogeneous modules across the five controllers execute time-synchronized parallel signal output and backup. The system first performs data pre-screening to discard outliers violating physical constraints. It then employs a 5-choose-3 majority voting mechanism: if three or more modules produce values within a ± α % error margin, the majority consensus is adopted. These modules are ranked by predefined priority, with the highest-priority module designated as the online module. If no clear majority emerges, dynamic clustering analysis is activated: adjacent signal differences are computed to identify densely clustered intervals, their median is derived, and the module closest to this median is selected as the online module. Figure 13 illustrates this workflow.
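The 5-choose-3 majority vote with a median-based clustering fallback can be sketched as follows (the error margin and the sample values are illustrative; lower priority numbers denote higher predefined priority):

```python
from statistics import median

def select_online_value(values, priorities, alpha=0.01):
    """5-choose-3 majority vote with a clustering fallback.

    values: pre-screened readings from the five homogeneous modules.
    priorities: predefined ranking (lower number = higher priority).
    alpha: relative error margin for majority agreement (assumed 1%).
    Returns the index of the module selected as the online module.
    """
    n = len(values)
    # Majority vote: find a group of >= 3 modules agreeing within +/- alpha
    for i in range(n):
        group = [j for j in range(n)
                 if abs(values[j] - values[i]) <= alpha * abs(values[i])]
        if len(group) >= 3:
            # Highest-priority member of the majority goes online
            return min(group, key=lambda j: priorities[j])
    # No clear majority: pick the module closest to the median (dense cluster)
    m = median(values)
    return min(range(n), key=lambda j: abs(values[j] - m))

vals = [100.1, 100.0, 99.9, 85.0, 100.2]  # one outlier at 85.0
assert select_online_value(vals, priorities=[2, 1, 3, 4, 5]) == 1
```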

4. Experimental Verification of the Peer-to-Peer Distributed Excitation System

4.1. Performance Testing of Distributed Real-Time Data Distribution System

The test setup comprises one switch and one host. The development board carries five onboard DDS nodes (A, B, C, D, E) and is connected to the switch, through which data are transmitted. The test network topology is shown in Figure 14.
To evaluate the computational overhead and resource requirements of implementing ATS and DDS, the system was loaded with data distribution traffic of 1000 packets per second (pps); the CPU utilization stabilizes within the range of 30–35%. Under this service load, the processor operates the DDS middleware, the TSN traffic scheduling module, and the operating system kernel efficiently and stably. The end-to-end latency of DDS is maintained at the sub-millisecond level. The total memory usage is approximately 200 megabytes (MB), about 2.6% of the onboard memory.
To evaluate the performance of the multi-priority scheduling and flow control mechanisms in the DDS system proposed in this study, we aim to verify whether the scheduling mechanism can prioritize high-priority data streams (HRT/SRT) under high-load or resource-constrained network conditions, thereby ensuring compliance with both soft and hard real-time latency requirements. The test focuses on the mechanism’s ability to mitigate congestion at DDS node transmitters under mixed traffic loads (HRT/SRT/BE), analyzing latency and packet loss metrics.
Without enabling dynamic priority scheduling or flow control policies, all nodes’ service flow publishers and subscribers are activated, allowing low-priority background traffic (customizable rates) to exchange data between nodes. The scheduling mechanism on each node is configured to wait for publishers to transmit fixed-size high-priority data. Publishers iteratively attempt to send 64-byte test samples, await acknowledgments, and measure end-to-end latency. The latency results (Table 2) are collected under varying background traffic bandwidths (Mbits/sec).
From the analysis of latency results, as the bandwidth increases, the maximum deviation of the tested service flow’s latency gradually rises, with the average latency increase remaining within approximately 10%. However, when the bandwidth reaches 450 Mbps, the system encounters severe congestion, resulting in significant deviations in maximum latency and a sharp increase in packet loss rates.
Building upon the previous configuration, dynamic priority scheduling and flow control strategies were enabled to evaluate latency performance under identical network conditions. The experimental results are summarized in Table 3.
The test results indicate that, after implementing priority scheduling and flow control strategies, the average latency of the tested service flow remains stable as the bandwidth increases, showing no significant upward trend. These experimental outcomes validate the effectiveness of the distributed communication technology, which combines the ATS and DDS frameworks, along with the associated QoS enforcement strategies proposed in this study.

4.2. Calculation Module Redundant Switching Test

In practical engineering, active fault injection for redundancy validation is infeasible due to safety risks and potential economic losses. Leveraging a high-fidelity dynamic simulation platform, this study replicates real-world grid conditions, including electromagnetic transient processes and complex operational scenarios, while enabling controlled fault injection. The photograph of the dynamic simulation platform can be found in Appendix C. Based on this platform, we systematically test the redundancy mechanisms of the designed peer-to-peer distributed excitation system. This approach validates the dynamic response of redundancy strategies under multi-failure scenarios, providing dual assurance (theoretical and practical) for system reliability and fault tolerance.
As shown in Figure 15, the experiment validates the redundancy functionality of computational modules between Control Channel 1 and Control Channel 2 in the regulator cabinet by actively switching the online module priority between the two channels. During the transition from Channel 1’s computational module to Channel 2’s, the generator terminal voltage exhibits no abrupt fluctuations. Similarly, when switching back to Channel 1’s module, voltage stability is maintained. The results demonstrate that both control channels’ computational modules achieve seamless control handover during real-time switching, with a voltage fluctuation deviation rate of no more than 0.36%. This confirms the effectiveness of the redundancy mechanism between the computational modules of Control Channel 1 and Control Channel 2.
As shown in Figure 16, the experiment validates the redundancy mechanism between computational modules of Control Channel 1 (Regulation Cabinet) and Power Cabinet 1 (denoted as SCR1 in the legend) by actively switching the online priority of computational modules across channels. When the active module transitions from Control Channel 1 (Regulation Cabinet) to Power Cabinet 1’s Control Channel, the generator terminal voltage exhibits no abrupt fluctuations. Similarly, switching back to Control Channel 1’s computational module maintains voltage stability. The results confirm that the computational modules of both control channels achieve seamless control handover during real-time switching, with a voltage fluctuation deviation rate not exceeding 0.34%, thereby verifying the redundancy functionality between the modules of these channels.
As shown in Figure 17, the redundancy functionality of computational modules between the Control Channel of Power Cabinet 1 and the Control Channel of Power Cabinet 2 was experimentally validated. The results indicate that no abrupt fluctuations in generator terminal voltage occurred during the switching process, thereby confirming the redundancy capability between the computational modules of these two power cabinet control channels.
The test items and summary are shown in Table 4.
The test results for the redundant switching of computational modules demonstrate that the maximum deviation rate of the generator terminal voltage is effectively controlled within 2%, strictly complying with the national standard’s technical specification limit (a maximum allowable fluctuation of 5% during redundancy switching). These experimental data validate the effectiveness of the 2 + 3 redundant architecture implemented in the computational modules.

4.3. Redundant Switching Test of Acquisition Module

In the peer-to-peer distributed excitation system, the synchronization signal, as the core acquisition parameter with the highest requirements for real-time performance and stability, directly impacts the system’s dynamic response characteristics through its redundant reliability. This experiment validates the effectiveness of the system’s redundancy design through redundant switching of synchronization signals. As shown in Figure 18, when the Phase A synchronization source in the control channel of Power Cabinet 1 is disconnected, the system seamlessly switches to the Phase A synchronization source output by the acquisition unit of Power Cabinet 2 through its redundancy mechanism. The current-sharing coefficient measured during the test is 99.45%.
As shown in Figure 19, when the B-phase synchronization source of Power Cabinet 1’s control channel is disconnected, the system automatically switches the synchronization signal to the B-phase synchronization source output by the acquisition unit of Power Cabinet 2, utilizing a redundant design. The current-sharing coefficient during the test is 99.44%.
Synchronized signal source acquisition redundancy switching test items and a summary are provided in Table 5.
The results of the synchronous signal acquisition redundancy switching test show that the system’s current-sharing coefficient stays stable at ≥98% during the switching process. Throughout the test, all five computational modules, non-faulty acquisition modules, and power unit execution modules remain operational. This confirms the high availability of both the mutual redundancy among acquisition modules and the cross-redundancy between “acquisition-computation” modules.
Under no-load rated operating conditions of the dynamic simulation unit, the excitation system operates in Automatic Voltage Regulation (AVR) mode. By sequentially disconnecting the input signal sources of the generator terminal voltage acquisition modules in the regulation cabinet and power cabinets, the system’s redundancy reliability is validated. As shown in Figure 20, disconnecting the generator terminal voltage input source (PT1) of Control Channel 1 in the regulation cabinet results in a maximum voltage deviation rate of 0.71%, while Control Channel 1 remains fully operational throughout the test.
The test data for redundant switching of generator terminal voltage sampling signals indicate that the maximum voltage deviation is strictly controlled within 1%, meeting the national standard’s technical requirement (fluctuation rate < 5% during redundancy switching). Throughout the test, all five computational modules, acquisition modules with non-disrupted signal sources, and power unit execution modules functioned normally, while preset Control Channel 1 remained fully operational. These results further validate the mutual redundancy between acquisition units and the cross-redundancy between “acquisition-computation” modules in the system.

4.4. Redundancy Switching Test During Load Rejection Transient

To validate the effectiveness of redundant switching functionality during transient processes, a load rejection test was conducted, focusing on the impact of online control channel switching on generator terminal voltage stability. As shown in Figure 21, a load rejection disturbance at 1.8 s triggered voltage fluctuations and an increase in unit frequency. Subsequently, during the voltage recovery phase, an online control channel switchover was performed. Throughout the switching process, the voltage recovery transitioned smoothly and stabilized at the rated value, with no secondary fluctuations observed due to the switching action.

5. Discussion

With the ongoing progress of the energy transition, the widespread integration of renewable energy sources like wind and solar power presents significant challenges to the flexible regulation capabilities of power systems. Hydroelectric units, leveraging their quick response times and power regulation advantages, have become essential in modern power systems for stabilizing fluctuations and maintaining grid reliability. As the primary control equipment of synchronous generators, excitation systems directly influence power angle stability and voltage support capacity through dynamic adjustments of excitation currents. However, traditional distributed excitation systems face limitations in redundancy design. To overcome these challenges, this study combines ATS and DDS technologies to create a distributed real-time data exchange framework. All control nodes have identical configurations at both the hardware and software levels. When adding a new controller, it only needs to be connected to the ATS switches and DDS networks. Then, the domain discovery mechanism built into DDS automatically detects topology and synchronizes data, removing the need for changes to central controller settings. This allows for online, dynamic system expansion. Experimental results showed coordinated operation among five controllers, with the system theoretically scalable to N nodes. Without extra hardware costs, the reliability of the excitation system has been effectively improved through software and protocol modifications.
Validation through a dynamic simulation platform demonstrates the effectiveness of the peer-to-peer distributed excitation system's redundancy features, with key metrics including a voltage deviation of ≤1% and a current-sharing coefficient of ≥98%. These findings provide empirical evidence of operational reliability in high-renewable-penetration grid scenarios.

Author Contributions

Conceptualization, H.W.; Methodology, X.W., X.D., X.Y. and H.W.; Software, X.H.; Validation, X.H.; Formal analysis, X.D.; Investigation, X.W.; Resources, X.D.; Data curation, X.W.; Writing—original draft, X.W.; Writing—review & editing, X.W.; Supervision, X.D., X.Y. and X.L.; Project administration, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Three Gorges Intelligent Control Technology Co., Ltd. Research Project (NBZZ202400220).

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

Acknowledgments

We sincerely thank the editors for their insightful guidance and invaluable academic support throughout this research.

Conflicts of Interest

Authors Xuxin Yue, Haoran Wang, Xiaokun Li and Xuemin He were employed by the company Three Gorges Intelligent Control Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A

Table A1. Variable description summary table.

Ii: multi-level sequential queues
Ci: capacity limit of queue Ii
Li: queue length of queue Ii
qi: queue limit of queue Ii
R: maximum port data packet forwarding rate
Vi: traffic intensity
JTi: duration of the dynamic process
JPi: total data packet count in the dynamic process
Thrt: hard real-time deadline threshold
Tsrt: soft real-time delay budget threshold
Tcritical: set of critical data types
di: delay upper bound
ωi: estimated service time
ε: allowable error in the maximum waiting time
ri(t): allocated transmission rate
Bi: maximum token bucket capacity
Ti(t): current token count
α: multiplicative decrease factor
β: additive increment

Appendix B

Figure A1. Flowchart for priority task registration process.
Figure A2. Flowchart of priority task execution process.
Figure A3. ATS and DDS system architecture diagram.

Appendix C

Figure A4. Physical diagram of the dynamic simulation platform.
The dynamic simulation platform used in the test comprises a small dynamic simulation test cabinet, a high-voltage low-current test cabinet, a low-voltage high-current test cabinet, a dynamic simulation unit, and a reactor, among other components. The text "配电网" in the diagram denotes "Distribution Network", and "有电危险" denotes "Danger: Electrical Hazard".

Figure 1. Schematic diagram of fundamental principles for self-shunt static excitation system.
Figure 2. Traditional distributed excitation system fiber-optic network.
Figure 3. Cross-redundant architecture for peer-to-peer distributed excitation systems.
Figure 4. Comparison diagram of failure probability functions for single power unit control channel.
Figure 5. Comparison diagram of failure probability functions for single regulating unit control channel.
Figure 6. UBS queue structure diagram.
Figure 7. System software framework diagram.
Figure 8. Traffic scheduling overall architecture.
Figure 9. Implementation flow of mapping DDS QoS policies to ATS traffic priorities.
Figure 10. Queue boundary calculation.
Figure 11. Hot backup module system architecture.
Figure 12. Sequence diagram of hot backup function.
Figure 13. Flowchart of optimization design.
Figure 14. DDS test network topology.
Figure 15. The online calculation module switches between Control Channel 1 and Control Channel 2 of the regulation cabinet.
Figure 16. The online calculation module switches between Control Channel 1 and SCR 1.
Figure 17. The online calculation module switches between SCR 1 and SCR 2.
Figure 18. Redundancy test for fault switching of Phase A synchronous sources between the Control Channel of Power Cabinet 1 and the Control Channel of Power Cabinet 2.
Figure 19. Redundancy test for fault switching of Phase B synchronous sources between the Control Channel of Power Cabinet 1 and the Control Channel of Power Cabinet 2.
Figure 20. Redundancy test for acquisition fault of generator terminal voltage in Control Channel 1 of the regulation cabinet.
Figure 21. Redundancy test for online control channel switching during transient processes.
Table 1. Mapping of data types to scheduling strategies.

| Business Type | DDS QoS Configuration | ATS Scheduling Strategy | Mapping Rules | Target Effect |
|---|---|---|---|---|
| HRT | LatencyBudget: <1 ms; Reliability: Reliable | ATS UBS queue scheduling | ATS traffic scheduling ensures the highest-priority transmission of traffic, free from interference by low-priority traffic. | Low latency and highest priority, ensuring data delivery within the specified latency budget. |
| SRT | LatencyBudget: 1–10 ms; Reliability: Reliable | ATS UBS queue scheduling | In DDS, the latency budget is converted into allocation weights for queue priorities. | Avoids the impact of burst traffic on the network and ensures low-latency transmission of SRT traffic. |
| BE | LatencyBudget: default; Reliability: Best Effort | Traffic-shaping low-priority queuing | BE traffic is transmitted when the network is idle; ATS allocates it according to available bandwidth and may discard data if the network becomes congested. | Idle bandwidth is utilized without degrading real-time traffic; data may be discarded under congestion. |
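The classification rules of Table 1 can be expressed as a simple decision function. The sketch below is our own illustration of the mapping, not code from the paper; the type and function names are assumptions.

```python
from dataclasses import dataclass

# ATS traffic classes from Table 1; lower value = higher priority.
HRT, SRT, BE = 0, 1, 2

@dataclass
class DdsQos:
    latency_budget_ms: float  # DDS LatencyBudget QoS, in milliseconds
    reliable: bool            # DDS Reliability QoS: Reliable vs. Best Effort

def ats_priority(qos: DdsQos) -> int:
    """Map a DDS flow's QoS settings to an ATS traffic class per Table 1."""
    if qos.reliable and qos.latency_budget_ms < 1.0:
        return HRT  # hard real-time: highest-priority UBS queue
    if qos.reliable and qos.latency_budget_ms <= 10.0:
        return SRT  # soft real-time: weighted UBS queue
    return BE       # best effort: low-priority shaped queue
```

For example, a Reliable flow with a 0.5 ms latency budget maps to HRT, one with a 5 ms budget maps to SRT, and any Best-Effort flow maps to BE.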
Table 2. Latency performance test data without priority scheduling and flow control policies.

| Load (Mbit/s) | Mean Latency (μs) | Min Latency (μs) | Max Latency (μs) | Packet Loss Rate (%) |
|---|---|---|---|---|
| 50 | 1126.72 | 746.59 | 2452.98 | 0 |
| 100 | 1115.36 | 783.65 | 2189.75 | 0 |
| 150 | 1163.68 | 772.84 | 2509.56 | 0 |
| 200 | 1179.33 | 770.82 | 2643.75 | 0 |
| 250 | 1216.47 | 771.96 | 2782.27 | 0 |
| 300 | 1182.14 | 796.69 | 2846.58 | 0 |
| 350 | 1311.15 | 793.42 | 3359.79 | 0 |
| 400 | 1372.43 | 751.41 | 3916.54 | 1 |
| 450 | 2301.89 | 802.08 | 8923.24 | 12 |
Table 3. Latency performance test data with priority scheduling and flow control policies enabled.

| Load (Mbit/s) | Mean Latency (μs) | Min Latency (μs) | Max Latency (μs) | Packet Loss Rate (%) |
|---|---|---|---|---|
| 50 | 824.36 | 569.78 | 984.38 | 0 |
| 100 | 818.90 | 678.16 | 1012.25 | 0 |
| 150 | 833.19 | 698.45 | 1029.36 | 0 |
| 200 | 819.25 | 670.12 | 1049.35 | 0 |
| 250 | 846.71 | 679.24 | 989.23 | 0 |
| 300 | 882.29 | 690.30 | 1046.82 | 0 |
| 350 | 851.11 | 641.32 | 1102.93 | 0 |
| 400 | 875.68 | 651.82 | 1046.48 | 0 |
| 450 | 904.18 | 701.38 | 1151.76 | 0 |
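To quantify the effect of the scheduling and flow-control policies, the mean-latency columns of Tables 2 and 3 can be compared per load level. The short script below simply transcribes those figures and computes the relative reduction; it adds no data beyond the tables.

```python
# Mean latency (μs) per offered load, transcribed from Table 2
# (policies disabled) and Table 3 (policies enabled).
loads = [50, 100, 150, 200, 250, 300, 350, 400, 450]  # Mbit/s
without_policies = [1126.72, 1115.36, 1163.68, 1179.33, 1216.47,
                    1182.14, 1311.15, 1372.43, 2301.89]
with_policies = [824.36, 818.90, 833.19, 819.25, 846.71,
                 882.29, 851.11, 875.68, 904.18]

reductions = [100.0 * (a - b) / a
              for a, b in zip(without_policies, with_policies)]

for load, r in zip(loads, reductions):
    print(f"{load:3d} Mbit/s: {r:5.1f}% lower mean latency")
```

The reduction is roughly 27% at light load and exceeds 60% at 450 Mbit/s, where the unshaped network saturates and its packet loss rate reaches 12%.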
Table 4. Summary of computing module redundancy switching test results.

| No. | Test Item | Maximum Voltage Deviation Rate at Generator Terminals |
|---|---|---|
| 1 | Switching from Channel 1 to Channel 2 and back to Channel 1 | 0.36% |
| 2 | Switching from Channel 1 to Power Cabinet 1 and back to Channel 1 | 0.34% |
| 3 | Switching from Power Cabinet 1 to Power Cabinet 2 | 0.6% |
Table 5. Synchronization source acquisition redundancy switching test results summary table.

| No. | Test Item | Theoretical Synchronization Signal Switching Result | Actual Synchronization Signal Switching Test Result | Current Sharing Coefficient | Active Computing Module |
|---|---|---|---|---|---|
| 1 | Isolate Phase A synchronization source of Power Cabinet 1 | Switch to Phase A of Power Cabinet 2 | Switch to Phase A of Power Cabinet 2 | 99.45% | Computing Module 1 operating as primary |
| 2 | Isolate Phase B synchronization source of Power Cabinet 1 | Switch to Phase B of Power Cabinet 2 | Switch to Phase B of Power Cabinet 2 | 99.44% | Computing Module 1 operating as primary |