1. Introduction
The IEC 61850 standard forms the foundation of the discussed digital innovations. Developed in 2004 by the International Electrotechnical Commission (IEC), it defines a universal communication model for substation automation and power systems. Its primary goal is to ensure interoperability and seamless integration of Intelligent Electronic Devices (IEDs), which eliminates the limitations typical of traditional, proprietary protocols [1,2].
Within the IEC 61850 standard, two key mechanisms are responsible for handling time-critical information: GOOSE (Generic Object Oriented Substation Event) and SV (Sampled Values). GOOSE is a low-latency, high-reliability, event-based protocol designed for real-time control and monitoring tasks. SV, on the other hand, is used for streaming digital samples of analog signals (current and voltage), which are essential for precise protection functions. However, the high sampling rate requirements of SV create a risk of network congestion. In response to this challenge, effective load reduction methods have been developed, including data aggregation, signal compression, and feature extraction techniques. These solutions, by minimizing bandwidth and latency requirements, are crucial in monitoring systems for extensive critical infrastructure, where there is also a need to process large volumes of data [1,3,4].
The modern energy sector faces growing demands for digitalization, automation, and enhanced flexibility and resilience of the power infrastructure. However, while the literature largely focuses on transmission and distribution substations, considerably less attention has been devoted to industrial power substations, where the behavior of protection and control systems directly determines the continuity of technological processes. In industrial environments—such as manufacturing plants, petrochemical facilities, metallurgical operations, or pulp and paper mills—that operate their own transformer stations, generators, and power protection systems, any delay in the operation of protective relays, automation schemes, or communication systems may lead not only to power supply failures but also to process disturbances, product losses, and prolonged production recovery times. Although several studies have addressed communication delays and response times in transmission and distribution substations [5,6], the literature still lacks a thorough investigation of the correlation between plant load, technological process characteristics, and protection operation delay in industrial settings. For example, the works presented in [7,8] highlight the specific challenges associated with industrial power systems and protection coordination, yet they do not focus on the dependency of protection reaction times on generator or plant load conditions. Delays within protection systems, whether caused by measurement, communication, or switching action, have a direct impact on the reliability and safety of industrial facilities. The authors of [6] demonstrated that in process-bus-based substation networks, delays may become a critical factor within the time margins of protection functions. In industrial environments, where production cycles are tightly scheduled, any instability in power supply or transient disturbances in power quality can result in significant downtime costs, material waste, or the need to shut down entire technological lines.
In the rapidly advancing energy sector, reliable and high-precision time synchronization has become a fundamental requirement for ensuring the safe, efficient, and coordinated operation of complex systems. The IEEE 1588 Precision Time Protocol (PTP), particularly in its second version (PTPv2), has emerged as a pivotal technology capable of achieving sub-microsecond accuracy in Ethernet networks, which renders it directly applicable to critical power infrastructure [9].
Within the power sector, PTP is an integral component of modern Substation Automation Systems (SASs) compliant with the IEC 61850 standard, which forms the basis for the digitalization and interoperability of Intelligent Electronic Devices (IEDs). Precise time synchronization is indispensable for a range of critical functions. For instance, the Sampled Values (SV) protocol, used for the real-time transmission of digital samples of analog signals (currents and voltages) from Merging Units (MUs) to protection and control systems, requires a sampling accuracy of 1 µs or better. Consequently, PTPv2 is utilized by Three-Phase Merging Units (TPMUs) to assign timestamps to each measurement sample, ensuring the integrity and security of SV transmissions. Synchronization accuracy at the 1 µs level is essential for protection automation systems to meet their performance and reliability criteria. Even GOOSE messages, which facilitate the rapid exchange of event-based information, rely on precise time synchronization for correct and coordinated operation [1,10,11].
However, the implementation of PTP in the energy sector presents several challenges. Fluctuations in the propagation delay of PTP messages, resulting from network traffic load, can lead to synchronization errors. Studies have shown that PTP performance depends on the selection of the grandmaster and slave clocks, with significant variations in jitter being possible. Transparent Clocks (TCs) and Boundary Clocks (BCs) are crucial for precisely compensating for delays introduced by network traffic, including that generated by SV and GOOSE. Analyzing their ability to accurately measure the residence time of a PTP message within a switch is critical, especially in heavily loaded Ethernet networks. Although increasing the prioritization of PTP messages in Ethernet networks is often recommended, it may not be necessary when using Transparent Clocks with the peer-delay mechanism, as these devices effectively compensate for queuing delays. Despite these challenges, PTP, in accordance with IEEE Std C37.238-2011, fulfills the synchronization requirements for SV process buses in shared networks [10]. The main clock, to which all devices synchronize, is called the "Grandmaster," and the devices that receive this time and synchronize to it are called "Slave clocks." This process involves sending and receiving PTP messages with timestamps, which allow the Slave clocks to adjust to the Grandmaster clock's time. A constant flow of time information and correction of any errors ensures precise indications and synchronization of clocks in the network. The protocol uses two types of messages: event messages (Sync, Delay_Req, Pdelay_Req, and Pdelay_Resp), which convey time information between devices, and general messages, which fulfill various communication functions in the PTP protocol. General messages include Announce, Follow_Up, Delay_Resp, Pdelay_Resp_Follow_Up, Management, and Signaling [12,13,14,15]. The device synchronization procedure based on the basic exchange of PTP time messages is presented in
Figure 1 and includes the following stages:
Sync messages—in this data packet, the Grandmaster clock sends its current time to all Slave clocks.
Follow_Up messages—after sending the Sync message, the Grandmaster clock sends a Follow_Up message containing a more precise timestamp of the just-sent Sync message. Thanks to this information, the Slave device is able to calculate the propagation delay.
Delay_Req messages—a message from the Slave clock to the Grandmaster requesting a measurement of the delay between them; it includes a timestamp indicating when the message was sent.
Delay_Resp messages—the Grandmaster clock responds to a Delay_Req message by sending a Delay_Resp message back to the Slave clock, containing the timestamp at which the Delay_Req message was received. By comparing the timestamps exchanged in this sequence, Slave clocks can calculate the propagation delay more precisely.
In connection with the above, the propagation delay and the offset (the difference between the Slave and Grandmaster reference times) are calculated as follows:

delay = [(t2 - t1) + (t4 - t3)] / 2

offset = [(t2 - t1) - (t4 - t3)] / 2

where t1 is the transmission time of the Sync message at the Grandmaster, t2 its reception time at the Slave, t3 the transmission time of the Delay_Req message at the Slave, and t4 its reception time at the Grandmaster.
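This end-to-end computation can be sketched in a few lines of code; the following is an illustrative snippet (the function name is ours, not part of any PTP library), assuming a symmetric path delay and timestamps expressed in seconds:

```python
def e2e_offset_and_delay(t1, t2, t3, t4):
    """End-to-end PTP exchange: t1 = Sync sent (master), t2 = Sync received
    (slave), t3 = Delay_Req sent (slave), t4 = Delay_Req received (master).
    Assumes a symmetric path; all times in seconds."""
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # mean one-way path delay
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    return offset, delay

# Example: the slave runs 5 us ahead of the master, one-way delay is 2 us
offset, delay = e2e_offset_and_delay(100.000000, 100.000007,
                                     100.000050, 100.000047)
```

With these sample timestamps the computed offset is 5 µs and the delay 2 µs; the slave would subtract the offset from its clock to converge on the Grandmaster's time.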
If the connection to the Grandmaster clock is lost or the signal quality is so poor that signals arrive with a significant delay (this information is in the “Announce” message), a distributed algorithm for selecting a new Grandmaster clock, called BMCA (Best Master Clock Algorithm), is initiated. The purpose of this algorithm is to select a Grandmaster clock that will serve as the time reference for the other clocks. All clocks participate in the selection process to determine which one has the best time reference.
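The selection logic of the BMCA can be illustrated by the following sketch. It is a simplification of the IEEE 1588 dataset comparison: the field names and flat tuple ordering are our own shorthand, and the standard's topology-dependent tie-breaking steps are omitted.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClockDataset:
    priority1: int       # administrative preference (lower is better)
    clock_class: int     # traceability, e.g. 6 = GNSS-locked, 248 = free-running
    clock_accuracy: int  # encoded accuracy (lower is better)
    variance: int        # offsetScaledLogVariance (lower is better)
    priority2: int       # secondary administrative preference
    identity: int        # clock identity, final tie-breaker

    def rank(self):
        # Fields compared in order of decreasing significance
        return (self.priority1, self.clock_class, self.clock_accuracy,
                self.variance, self.priority2, self.identity)

def best_master(candidates):
    """Return the candidate whose dataset compares best (lowest rank)."""
    return min(candidates, key=ClockDataset.rank)

gnss = ClockDataset(128, 6, 0x21, 15000, 128, 0x01)
free = ClockDataset(128, 248, 0xFE, 65535, 128, 0x02)
```

Here the GNSS-disciplined clock wins on its clockClass field; if its Announce messages stop arriving, the datasets of the remaining candidates are compared again and the free-running clock takes over as Grandmaster.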
For the energy sector, the peer-to-peer (one-step) approach is used; the delay measurement takes place between the two ports of directly connected devices. The sequence of individual steps, shown in Figure 2, is as follows:
Port-1 sends a Pdelay_Req message and generates timestamp t1 for this message,
Port-2 receives the Pdelay_Req message and generates timestamp t2,
Next, Port-2 sends a Pdelay_Resp message and generates timestamp t3. To minimize errors resulting from frequency differences between the PTP ports, Port-2 sends the Pdelay_Resp message as quickly as possible after receiving the Pdelay_Req message,
Port-2 transmits the difference between timestamps t3 and t2 in the Pdelay_Resp message, or in a separate Pdelay_Resp_Follow_Up message, or transmits both timestamps across these two messages,
Port-1 generates timestamp t4 after receiving the Pdelay_Resp message,
Port-1 uses these four timestamps to calculate the mean PTP link delay between the two devices.
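The four-timestamp computation in the last step can be expressed as follows (an illustrative sketch with our own function name; times are in seconds, and the turnaround correction relies on t2 and t3 being taken from the same clock):

```python
def peer_link_delay(t1, t2, t3, t4):
    """Mean one-way link delay from the peer-delay exchange:
    t1 = Pdelay_Req sent (Port-1), t2 = Pdelay_Req received (Port-2),
    t3 = Pdelay_Resp sent (Port-2), t4 = Pdelay_Resp received (Port-1).
    The responder's turnaround time (t3 - t2) is subtracted from the
    round-trip time; any offset between the two clocks cancels out,
    because t2 and t3 come from the same (Port-2) clock."""
    return ((t4 - t1) - (t3 - t2)) / 2.0

# Port-2's clock runs 10 s ahead of Port-1's; the offset cancels.
d = peer_link_delay(0.0, 10.0000015, 10.0000025, 0.000004)
```

With this example the round trip is 4 µs and the responder turnaround 1 µs, so the mean link delay evaluates to 1.5 µs regardless of the 10 s offset between the two clocks.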
The increasing demand for reliability and uninterrupted operation in critical industrial environments, such as power stations and heavy industry, necessitates the implementation of advanced network topologies to support digital protection automation systems. To minimize downtime resulting from communication failures, two key redundancy protocols defined in the IEC 62439-3 standard are widely adopted: Parallel Redundancy Protocol (PRP) and High-availability Seamless Redundancy (HSR) [16]. PRP utilizes two completely independent Local Area Networks (LANs), allowing simultaneous transmission of identical data frames over both paths to the destination node. This architecture ensures zero recovery time in the event of a single network failure, as the receiving device automatically accepts the first arriving frame and discards its duplicate. The independence of the two LANs, referred to as LAN A and LAN B, not only guarantees fault tolerance but also enables deterministic communication, which is essential in time-critical applications such as protection automation systems. In the experimental setup described in this study, each Intelligent Electronic Device (IED) was equipped with dual network interfaces and operated within a PRP environment, receiving time synchronization signals from two separate clocks (Master and Slave) distributed across both networks. This configuration, shown in Figure 3, illustrates the physical separation of the redundant paths and the integration of the switches (LAN A and LAN B) and time servers, which collectively form the backbone of the high-availability industrial communication infrastructure [17,18,19,20].
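The first-frame-wins behavior of a PRP receiving node can be sketched as follows. This is a simplified illustration (the class name is ours); real doubly attached nodes use per-source sequence windows with wrap-around handling as specified in IEC 62439-3.

```python
class PrpReceiver:
    """First-frame-wins duplicate discard, keyed on (source, PRP sequence number)."""

    def __init__(self, window=512):
        self.window = window
        self.seen = {}  # insertion-ordered; acts as a bounded FIFO of seen keys

    def accept(self, src, seq):
        key = (src, seq)
        if key in self.seen:
            return False          # duplicate arriving via the other LAN: discard
        if len(self.seen) >= self.window:
            self.seen.pop(next(iter(self.seen)))  # evict the oldest entry
        self.seen[key] = True
        return True               # first copy: deliver to the upper layers

rx = PrpReceiver()
assert rx.accept("aa:bb", 1)      # frame arriving via LAN A is delivered
assert not rx.accept("aa:bb", 1)  # its duplicate via LAN B is discarded
```

Because the receiver simply takes whichever copy arrives first, a failure of either LAN is invisible to the application, which is the "zero recovery time" property discussed above.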
HSR (High-availability Seamless Redundancy) employs a ring topology in which each node forwards Ethernet frames simultaneously in both the clockwise and counterclockwise directions. This bidirectional transmission ensures that data reaches its destination even if a single link or node within the ring fails, thereby providing seamless redundancy without any recovery time. Unlike PRP, which requires two physically separate networks, HSR achieves fault tolerance within a single logical ring, reducing infrastructure complexity. However, this approach requires that all participating devices support HSR natively, including its frame duplication, filtering, and forwarding mechanisms. In the tested system, HSR could offer enhanced resilience and deterministic communication, particularly in time-critical applications such as protection automation. The topology of the industrial network infrastructure, including the redundant paths and device interconnections, is illustrated in Figure 4 [17,18,19,20].
Both protocols achieve zero recovery time upon a single failure, a requirement critical for time-sensitive protection messages such as GOOSE and Sampled Values (SV). While PRP offers greater reliability due to its fully duplicated network structure, HSR is often preferred in smaller installations because of its lower infrastructure cost.
Comparative analyses show that PRP and HSR provide significantly faster switchover times (near zero milliseconds) compared to traditional link redundancy protocols such as RSTP (Rapid Spanning Tree Protocol), which can introduce unacceptable delays (up to several hundred milliseconds). As demonstrated by Bernardino [21], RSTP, despite its improvements over legacy STP, still relies on dynamic topology recalculation and convergence mechanisms that are inherently non-deterministic. This makes it unsuitable for time-critical applications such as GOOSE and Sampled Values communication in digital substations.
Modern power systems are increasingly digitalized and integrate protection automation within centralized control and protection schemes. The introduction of communication based on IEC 61850-compliant protocols enables significant improvements in flexibility and coordination of protective functions; however, it also introduces new challenges related to transmission delays, time synchronization, and overall system reliability. Time delays in the exchange of trip signals between protective devices can lead to prolonged fault durations and increased thermal losses in power cables and conductors, which in turn affects the durability of system components and operational safety. In recent years, there has been a growing number of studies addressing complex coordination and reliability of protection systems within integrated network structures. In [22], an approach based on policy learning is presented, ensuring safe and coordinated operation of multi-energy microgrids, where limiting congestion and control delays is critical. In [23], a method for assessing power system reliability considering topological uncertainties and nodal power variations is proposed, which can be directly related to issues concerning the reliability and temporal coordination of digital protection systems.
Power system protection schemes increasingly rely on digital substations and communication networks compliant with the IEC 61850 standard. The foundation of proper operation in such systems is precise time synchronization, typically implemented using the Precision Time Protocol (PTP, IEEE 1588), which enables reliable comparison of phase angles and measurement values across Intelligent Electronic Devices (IEDs) and allows for fast responses to fault events [24,25]. The literature emphasizes that microsecond-level synchronization accuracy is critical not only for the correct functioning of protective functions but also for minimizing delays caused by communication in the digital network and signal processing within the executing devices [25,26]. High-precision synchronization ensures that the protection algorithms can operate deterministically, even under high network load or in the presence of transient disturbances. In both laboratory and industrial settings, communication delays resulting from network congestion, packet queuing, and clock jitter significantly affect the overall operating time of protection systems [27,28]. Modeling these delays and their impact on protective actions is a key step in designing modern digital protection systems, enabling engineers to define requirements for network bandwidth, traffic prioritization, and hardware redundancy [19,26,29]. Various approaches have been proposed in the literature for evaluating process bus network performance, including simulation-based methods and hardware-in-the-loop (HIL) testing, which allow for the quantification of latency as a function of the number of data streams, sampling frequency, and network topology [19,28,30]. Such modeling is essential to understand how network-induced delays propagate through protection functions and affect operational reliability. Furthermore, research on digital substations highlights the importance of cyber-physical system modeling, encompassing both power system dynamics and network communication aspects, including the analysis of GOOSE and SV messages under high-load scenarios [27,31]. The practical implications of these studies include improving the safety and reliability of protection systems, optimizing communication architectures, and implementing mechanisms to minimize latency, such as deterministic Ethernet (TSN, HSR, PRP) and adaptive algorithms in IEDs [24,29,32]. Therefore, a comprehensive understanding of network delays and time synchronization is an essential aspect of designing modern digital protection systems in compliance with the IEC 61850 standard.
Modern protection automation systems significantly improve the accuracy of fault detection and the effectiveness of the response to all types of emergency conditions in protected networks and circuits [17,33]. They ensure full selectivity of protection operation, enable archiving of the performance of individual protection devices and intermediate elements during the disturbance response, and additionally record and archive the disturbance signal itself, including its waveform and time-domain variations [34]. This functionality makes it possible not only to reconstruct the system's reaction to a disturbance but also to correctly identify the event and trigger the appropriate protective action (activation of the designated switching device) without the risk of unintended operations of auxiliary components that could result in the unnecessary disconnection of more circuits than those actually affected by the fault [35]. However, these systems also have certain limitations. These limitations are not related to a higher probability of failure, since this risk is mitigated by the use of redundant configurations, but stem from the systems' inherent complexity. The processing of the disturbance signal, its identification, transmission to the actuating device, and subsequent feedback reading require additional operations, which extend the overall reaction time compared to conventional protection systems [33,36]. This issue may be further exacerbated by network traffic and challenges in maintaining precise real-time synchronization. As a consequence, the duration of the fault until its clearance becomes longer. This is particularly critical in the case of short circuits, as prolonged fault current flow increases the amount of thermal energy released in the installation. In turn, the additional heating of conductors and components may necessitate the oversizing of the applied cables and the selection of larger cross-sections to ensure the safe and reliable operation of the system.
It should be emphasized that in modern protection automation systems, accurate time synchronization, most commonly achieved using the Precision Time Protocol (PTP, IEEE 1588), is the foundation of correct system operation. Microsecond-level synchronization accuracy is a prerequisite for the proper functioning of Intelligent Electronic Devices (IEDs) and for the transmission and analysis of sampled voltage and current data under the Sampled Values (SV) standard. When synchronization is properly maintained, protection devices can reliably compare phase and measurement values and make fast tripping decisions. However, when synchronization quality deteriorates, for example due to time offset, clock jitter, or PTP packet loss, additional buffering and correction mechanisms become necessary, inevitably introducing further signal processing delays within the IED. Another link in this cause-and-effect chain is the communication network load within the substation. In typical digital architectures, multiple measurement (MU) and protection (IED) devices simultaneously publish SV and GOOSE messages, often using multicast transmission. As the number of devices, the sampling frequency, and the engineering traffic increase, network congestion can occur, leading to packet queuing, delay fluctuations, and increased jitter. In Ethernet switches, especially those lacking proper QoS configuration or equipped with undersized buffers, temporary transmission bottlenecks may occur, extending the total delivery time of samples to the receiver. Furthermore, when deterministic communication mechanisms (e.g., TSN, HSR, PRP) are not implemented, the risk of SV frame misordering arises, requiring reordering at the receiving end. As a result, not only the average latency but also its variability increases, which negatively affects the stability and predictability of protection algorithms. Packet transmission delays are therefore not merely a communication problem; they have a direct impact on the performance of protection systems.
Each IED must process the received data: time-align it, perform computations (e.g., symmetrical component transformation, RMS estimation), and make a logical tripping decision. When SV samples arrive with delays or irregular timing, the device must wait for the full frame set or use interpolation algorithms, both of which extend the overall protection operating time. In systems where the tripping decision depends on data from multiple measurement points—such as in differential protection schemes—delay asymmetry among channels introduces additional timing errors, necessitating safety margins that further prolong the total clearing time in practice.
Increased latency and jitter directly translate into longer fault durations. The total fault-clearing time includes the detection time within the IED, the communication delay, and the mechanical opening time of the breaker. The thermal energy released in a conductor during a fault is proportional to the integral of R·i²(t) over the fault duration, which for a constant fault current reduces to W = I²·R·t. This means that even a slight increase in the protection operating time leads to a proportional rise in thermal energy dissipated in the fault path. For example, if the fault current equals 5000 A and the fault-path resistance is 0.01 Ω, extending the clearing time from 0.2 s to 0.4 s increases the released energy from 50 kJ to 100 kJ, i.e., doubles it. Such a significant thermal increase may exceed the permissible operating temperature of conductors, damage insulation, or permanently degrade the short-circuit endurance of network components.
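The arithmetic of this example can be verified directly; the following is a trivial sketch (the function name is ours) that retains the text's constant-current, adiabatic assumption:

```python
def joule_energy(i_rms, r_fault, t_clear):
    """Thermal energy W = I^2 * R * t for a constant fault current, in joules."""
    return i_rms ** 2 * r_fault * t_clear

w_fast = joule_energy(5000.0, 0.01, 0.2)  # clearing in 0.2 s -> 50 kJ
w_slow = joule_energy(5000.0, 0.01, 0.4)  # clearing in 0.4 s -> 100 kJ
```

Since W is linear in the clearing time, doubling the time exactly doubles the released energy, matching the 50 kJ versus 100 kJ figures above.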
Consequently, an incorrect or incomplete assessment of communication delay effects may lead to dangerous secondary phenomena in power systems. Longer clearing times result in higher thermal stress and, consequently, an increased risk of equipment failure, insulation degradation, or fire hazards. In many protection system designs, the impact of SV and GOOSE traffic congestion on actual operating times is still neglected, leading to underestimation of protective device requirements and insufficient thermal margins in cables and joints. Therefore, when analyzing modern protection systems, communication and synchronization delays should be treated as critical technical factors influencing both safety and equipment longevity. Effective mitigation requires stable time synchronization (redundant and monitored PTP sources), deterministic data transmission (e.g., via TSN, HSR, or PRP), as well as traffic prioritization (QoS) and local fallback algorithms within IEDs. At the same time, cable and protection system design should account for potential fault-clearing time extensions by analyzing their impact on short-circuit energy (I2t) and the thermal endurance of current-carrying components. Only such a holistic approach ensures consistency between the digital communication layer and the physical resilience of power system infrastructure.
The remainder of this paper is structured as follows.
Section 2 provides a detailed exposition of the laboratory test-bed configuration and the established measurement methodology. That section presents an analysis of the influence of digital network load on the operating time of protective relays across three distinct load scenarios. The empirical data derived from these experiments form the basis for the quantitative assessment of the correlation between network traffic characteristics and the consequential delay in the protection system's response.
Section 3 subsequently investigates the ramifications of the identified delays with respect to the thermal energy dissipated within the fault circuit, using a single-phase-to-ground fault as a case study. A simplified thermal analysis is conducted to determine the total fault-clearing time, expressed as the sum of the experimentally measured protection delay and the mechanical operating time of the circuit breaker. Finally,
Section 5 synthesizes the principal research findings and presents the resulting conclusions regarding the direct relationship between an elevated data sampling frequency and a corresponding increase in protection trip time. These conclusions offer practical recommendations for optimizing network parameters and selecting appropriate equipment in modern protection and automation systems.
3. The Impact of Protection Tripping Delay Time on the Amount of Thermal Energy Generated During a Short-Circuit Event
In this section, the impact of the time delays identified in the preceding part of the article—based on laboratory tests—on the additional amount of energy released in the fault circuit is demonstrated using the example of a single-phase-to-ground fault. It should be noted that each communication process between the components forming part of the protective automation system architecture extends the time interval between the occurrence of a fault (in this case, a short circuit) and the actual initiation of operation by the system's actuating device. As demonstrated in the previous sections of this study, the magnitude of this operating delay depends on multiple factors. To illustrate the significance of this phenomenon, and to assess whether it may cause operational issues such as cable overheating during fault conditions, this section presents a simplified analysis of the thermal energy released in the course of a short-circuit event. The assessment was performed in reference to the standardized methodologies defined in IEC 60909 (short-circuit current calculations) and IEC 60949 (calculation of thermal effects of short-circuit currents). Assuming that the voltage at the instant of fault occurrence can be expressed as [37,38]:
u(t) = U_m · sin(ωt + φ + γ)

where:
U_m — peak phase voltage [V],
ω — angular frequency [rad/s],
φ — initial phase of the voltage [rad],
γ — fault angle [rad].
The remote short-circuit current, typical for faults occurring in the considered electrical installation, can be represented as the sum of an AC component and a decaying DC component:

i(t) = I_p · [sin(ωt + γ) − sin(γ) · e^(−t/τ)]

while the time constant of the DC component is:

τ = X / (ω·R)

where:
I_p — peak value of the fault current [A],
τ — time constant of the DC component [s], depending on the system X/R ratio and fault location.
In turn, the peak fault current I_p can be expressed in terms of the RMS value of the AC short-circuit current and the system X/R ratio:

I_p = κ · √2 · I_k, with κ = 1.02 + 0.98 · e^(−3R/X)

where:
I_k — RMS value of the short-circuit current [A],
R — total resistance of the fault loop [Ω],
X — total reactance of the fault loop [Ω].
This expression accounts for the initial DC offset due to the system impedance, which decays exponentially with time constant τ, and is essential for accurately modeling remote fault currents in medium-voltage electrical installations.
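Under the model above, the current waveform can be generated numerically. The following sketch assumes a 50 Hz system and a fault angle giving the maximum DC offset; the function name and default values are illustrative only.

```python
import math

def fault_current(t, i_k_rms, x_over_r, f=50.0, gamma=-math.pi / 2):
    """Remote-source fault current: AC component plus an exponentially
    decaying DC component. gamma = -pi/2 yields the maximum DC offset;
    tau = (X/R)/omega is the DC time constant."""
    omega = 2 * math.pi * f
    tau = x_over_r / omega
    i_peak_ac = math.sqrt(2) * i_k_rms
    return i_peak_ac * (math.sin(omega * t + gamma)
                        - math.sin(gamma) * math.exp(-t / tau))
```

By construction the current starts at zero at the fault instant, and near the first peak it exceeds the AC amplitude √2·I_k because of the superimposed DC offset, which is exactly the effect the κ factor approximates.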
The short-circuit current waveform is determined by the assumed resistance R and reactance X of the fault loop. To extend the analysis, a family of short-circuit current waveforms was considered for different values of the X/R ratio, which influence the time constant τ that determines the decay of the DC (aperiodic) component of the short-circuit current, under the assumption of a constant amplitude of the steady-state short-circuit current (Figure 13).
The delay introduced by advanced protection automation systems can significantly affect the amount of thermal energy released in a cable during the initial period of a short circuit. In Section 2, the time delays were presented, understood as the interval between the generation of the anomaly signal at the outputs of the tester (initiating the short-circuit process) and the activation of the high-current relay outputs (using the Omicron test device). To this laboratory-measured delay t_del, the mechanical operating time of the circuit breaker must be added, i.e., the time required for mechanical arc extinction and current interruption until the current reaches zero. This mechanical time is typically assumed to lie in the range t_mech = 70–120 ms [39,40]. Therefore, the total duration of short-circuit current flow until extinction in the proposed protection system solutions can be expressed as:

t_total = t_del + t_mech
Assuming that the mechanical time t_mech varies within the above-mentioned range (70–120 ms), and that the laboratory-determined delay times t_del, presented in Section 2 for the full spectrum of measurements, range from 51 ms to approximately 64 ms, the thermal energy released in the cable can be calculated for total short-circuit durations ranging from the minimum t_total = 121 ms to the maximum t_total = 184 ms, according to the following relation:

W = ∫ R · i²(t) dt, integrated from t = 0 to t = t_total
where:
W — thermal energy released in the cable [J],
R — cable resistance [Ω],
i(t) — short-circuit current as a function of time [A],
t_total — total duration of current flow until its extinction [s].
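The integral above can be evaluated numerically for any current waveform. A minimal sketch (trapezoidal rule; the function name is ours), applied here to a constant current over the two limiting clearing times of 121 ms and 184 ms, reads:

```python
def thermal_energy(current, r, t_total, dt=1e-5):
    """W = integral of R * i(t)^2 dt over [0, t_total], trapezoidal rule.
    `current` is a callable returning the instantaneous current in amperes."""
    n = round(t_total / dt)
    samples = [r * current(k * dt) ** 2 for k in range(n + 1)]
    return dt * (sum(samples) - 0.5 * (samples[0] + samples[-1]))

# Constant 5 kA through a 0.01-ohm fault path for t_min and t_max
w_min = thermal_energy(lambda t: 5000.0, 0.01, 0.121)  # ~30.25 kJ
w_max = thermal_energy(lambda t: 5000.0, 0.01, 0.184)  # ~46.0 kJ
```

In practice the lambda would be replaced by the offset-decaying waveform i(t) defined earlier, so that the DC component's contribution to the released energy is captured as well.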
Firstly, in order to demonstrate the impact of varying the time constant τ of the aperiodic component decay, Figure 14 presents the square of the short-circuit current for the assumed fault scenarios.
The values of energy released over time in a sample section of a 10 mm² copper conductor cable with a length of 1 m (neglecting the energy dissipated through natural thermal convection), for the family of short-circuit current waveforms presented in Figure 14, are shown in Figure 15.
However, the Joule integral itself is much more illustrative, since it is not normalized by the resistance value; it can therefore be treated as an absolute quantity for a given short-circuit scenario. The Joule integral released within the fault circuit, including the faulted (short-circuited) cable, is illustrated for a reference fault scenario in Figure 16.
Additionally, in
Figure 16, the possible ranges of cumulative Joule integral values are illustrated. The ranges corresponding to the characteristic clearing time of fault current extinction to zero (70–120 ms) in conventional protection systems—without protective logic elements, communication interfaces, or data acquisition modules—are shown in magenta. These systems rely solely on selectively coordinated switching devices (e.g., circuit breakers). In contrast, the values of Joule energy released during the fault process within the fault circuit, including the short-circuited cable of the electrical installation, are highlighted in red. These correspond to the advanced protection-automation systems described in the first part of the study, which introduce additional time delays across the full range of delay values determined experimentally in
Section 2 (from 51 ms to approximately 64 ms).
The extreme values of the Joule integral for the individual clearing times, illustrating the impact of delay times introduced by advanced protection-automation schemes on the amount of thermal energy generated during the fault process for the family of fault current characteristics shown in
Figure 14, are presented in
Table 16.
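The effect summarized in Table 16 can be approximated with a short numerical sketch. All parameters here (peak current, time constant, the 70 ms conventional clearing time) are illustrative assumptions; only the 51–64 ms delay bounds come from the measurements reported above.

```python
import math

def i_fault(t, i_peak=10e3, f=50.0, tau=0.045):
    # Fully offset fault-current model with assumed parameters.
    w = 2 * math.pi * f
    return i_peak * (math.exp(-t / tau) - math.cos(w * t))

def joule_integral(t_end, dt=1e-6):
    """Cumulative I2t = integral of i^2(t) dt over [0, t_end], in A^2*s."""
    n = int(t_end / dt)
    return sum(i_fault(k * dt) ** 2 * dt for k in range(n))

t_break = 0.070                   # assumed conventional clearing time
for t_delay in (0.051, 0.064):    # measured delay bounds from the study
    base = joule_integral(t_break)
    delayed = joule_integral(t_break + t_delay)
    print(f"delay {t_delay * 1e3:.0f} ms: I2t rises by "
          f"{100 * (delayed / base - 1):.0f} %")
```

The relative increase depends strongly on the assumed time constant and clearing time, which is why the paper reports ranges rather than a single figure.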
If it is assumed that the permissible Joule integral (adiabatic short-circuit withstand) is expressed by the formula:

I²t = k²·S²

where:
I²t—permissible Joule integral [A²·s],
k—material-insulation constant (dependent on the conductor material, insulation type, and assumed initial and final temperatures),
S—conductor cross-section [mm²].
For copper in PVC insulation, a commonly used value of k = 115 A·√s/mm² is assumed, and for copper in XLPE insulation k = 143 A·√s/mm² (indicative values according to IEC standards). The calculation below is carried out for the copper conductor with PVC insulation.
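Under these assumptions, the withstand check and the selection from the standard cross-section series can be sketched as follows; the released Joule integral used here is a hypothetical value, not one of the results from Table 16.

```python
# Adiabatic short-circuit withstand: permissible I2t = (k * S)^2,
# with k in A*sqrt(s)/mm^2 (indicative IEC values) and S in mm^2.
K = {"Cu/PVC": 115, "Cu/XLPE": 143}
STANDARD_SERIES = [1.5, 2.5, 4, 6, 10, 16, 25, 35, 50, 70, 95, 120]

def permissible_i2t(k, s_mm2):
    return (k * s_mm2) ** 2  # [A^2*s]

def minimum_cross_section(i2t_released, k):
    """Smallest standard cross-section whose withstand covers the
    released Joule integral."""
    for s in STANDARD_SERIES:
        if permissible_i2t(k, s) >= i2t_released:
            return s
    raise ValueError("released I2t exceeds the tabulated series")

i2t = 2.0e6  # hypothetical released Joule integral [A^2*s]
for name, k in K.items():
    print(f"{name}: S >= {minimum_cross_section(i2t, k)} mm^2")
```

For this hypothetical value, the PVC-insulated copper conductor requires 16 mm² while the XLPE-insulated one passes at 10 mm², mirroring the kind of insulation-dependent oversizing discussed below.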
It then becomes possible to determine how the delay time introduced by modern protection-automation systems can affect the need to oversize cables which, under the given assumed conditions, would be sufficient in conventional protection schemes based solely on individual switching devices (without communication). Comparing the data in
Table 16, which presents the Joule integrals that can be released under the assumed short-circuit scenarios for the characteristic protection operating times including the analyzed time delays, with the maximum permissible Joule integrals listed in
Table 17 for the two types of cable insulation shows, for example, that when the minimum mechanical breaking time of the short-circuit in a conventional protection system is assumed, cables of the originally selected cross-section have sufficient short-circuit withstand capacity for both insulation materials. On the other hand, when even the shortest experimentally determined delay time is included in the total short-circuit clearing time from the moment of fault initiation, it becomes necessary to increase the cross-section of the cable with PVC insulation to the next value in the standard series in order to meet the short-circuit withstand condition. Furthermore, when a mechanical breaking time at the upper end of the considered range is combined with the minimum delay time, an increase in the conductor cross-section is required for both types of insulation. Owing to the delay introduced by the protection-automation systems discussed in
Section 2, the average amount of thermal energy released in the considered short-circuit scenario is markedly higher than in conventional protection systems, and in the most severe variant the excess is even larger, already for the shortest observed delay time.
4. Discussion
The conducted simulation studies unequivocally demonstrate that network infrastructure load is a critical parameter affecting the operational dynamics of protection systems. The observed trend indicates that as the intensity of data transmission increases—determined by the number of streams and samples per cycle—a proportional increase in the relay’s response time occurs.
The key conclusions drawn from this research are as follows:
Confirmation of a Causal Relationship: A direct, positive correlation between the degree of network load and the delay in generating a trip signal has been proven. The observed delay is a systemic consequence of increased network traffic rather than a result of the protection device’s operational instability.
Practical Implications for System Optimization: The results suggest that the strategic management of data transmission parameters, particularly the number of samples per cycle, is an effective means of minimizing the operating time of protection systems. This can enhance the performance and reliability of protection systems in practical applications.
In the analyzed protection automation systems, transmission and data processing delays represent a critical factor determining the effectiveness and selectivity of protection schemes. The physical causes of these delays arise from several interrelated mechanisms. First, packet queuing phenomena and communication network congestion (so-called sampled values network congestion) cause temporary bottlenecks in data transmission between IED devices, leading to an increase in the overall signal propagation time [
24,
28]. Second, limited time synchronization precision in PTP (Precision Time Protocol) networks and the occurrence of clock drift between IED nodes can generate phase errors in communication, resulting in incorrect interpretation of event sequences [
41]. Additionally, the response times of the actuating elements themselves—relays, contactors, and circuit breakers—are determined by their electromagnetic properties, particularly arc extinguishing speed and mechanical drive delays [
42]. Reducing these delays requires a multi-layered approach. At the hardware infrastructure level, effective solutions include redundancy of communication components and the use of ring topologies with fast-switching mechanisms, e.g., HSR (High-availability Seamless Redundancy) and PRP (Parallel Redundancy Protocol) [
24,
28]. Such solutions eliminate transmission downtime and enhance resilience to single-link failures. At the automation layer, the use of devices with improved temporal parameters is recommended: digital relays and vacuum circuit breakers with reduced actuation and arc-extinguishing times (approximately 20–40 ms instead of the typical 60–80 ms) [
42]. Another effective strategy involves implementing multi-level synchronization, combining the PTP standard with local time sources (GPS, IRIG-B), which minimizes synchronization errors in the event of network degradation [
28,
41]. Furthermore, adaptive algorithms in IEDs, leveraging predictive system behavior models, allow temporary communication delays to be compensated by estimating measurement values in real time [
28]. As a result, the effective response time of protections can be shortened and the thermal consequences of faults can be limited; otherwise there is a risk of exceeding the allowable Joule (I²t) energy in conductors and supply cables.
The data presented can serve as a valuable basis for the design and modernization of network infrastructure in power systems, as well as for the precise parameterization of protection devices to ensure maximum reliability and speed of operation. This mechanism arises directly from the critical characteristics of the Sampled Values (SV) protocol in IEC 61850 environments and from the way network switches handle high-priority packets. The key difference between increasing the number of streams and increasing the sampling frequency lies in the time density of packets and the resulting queuing delay. Increasing the number of SV streams (e.g., from two to four at 256 samples per cycle) increases the total number of packets in the network as well as the switch processor load; however, each individual SV packet (of the same size) maintains the same time interval between consecutive samples for a given Merging Unit (MU). In contrast, increasing the sampling frequency (e.g., from 80 to 256 samples per cycle) while keeping the number of streams constant has a much more drastic effect, as it shortens the time intervals between consecutive SV packets generated by the same MU. SV messages, which are essential for precise protection functions, must then be transmitted in a faster sequence, leading to a sharp increase in the number of packets per unit time and significant traffic bursts on the communication links. At a high sampling frequency (256 samples/cycle), even though SV packets are assigned high priority (Quality of Service, QoS) in accordance with IEC 61850, their high time density leads to accumulation in the switch buffers (HYPERION 400), especially where several streams converge on the same link. The shortened intervals between SV packets increase the risk of queuing delay, because the switch cannot instantly process and forward such a large number of high-priority packets without introducing some minimal delay.
This queuing delay accumulates and is directly propagated to the critical TRIP signal generated by the relay, resulting in the noticeable and systematic increase in operating time observed in the experiments. Therefore, sampling frequency is the dominant factor, because it directly affects the rhythm and utilization of the switch’s temporal resources, forcing the system to process critical data within shorter time windows.
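The packet-rate arithmetic behind this conclusion can be illustrated with a minimal sketch; a 50 Hz system and one SV frame per sample per stream are assumed (simplifications, since real SV profiles may bundle samples differently).

```python
F_GRID = 50.0  # Hz, assumed nominal system frequency

def sv_traffic(streams, samples_per_cycle):
    """Total packets per second and per-stream inter-packet gap,
    assuming one SV frame per sample per Merging Unit stream."""
    pps = streams * samples_per_cycle * F_GRID
    gap_us = 1e6 / (samples_per_cycle * F_GRID)  # per-stream spacing
    return pps, gap_us

for streams, spc in [(2, 80), (4, 80), (2, 256), (4, 256)]:
    pps, gap = sv_traffic(streams, spc)
    print(f"{streams} streams x {spc} smp/cycle: "
          f"{pps:,.0f} pkt/s, per-stream gap {gap:.0f} us")
```

Doubling the stream count doubles the aggregate packet rate but leaves the per-stream gap unchanged, whereas raising the sampling rate from 80 to 256 samples per cycle compresses the gap from 250 µs to about 78 µs, which is exactly the time-density effect described above.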
5. Conclusions
Based on the research conducted to analyze the relationship between digital network load and the response time of the protection system, the following conclusions were formulated. The study demonstrated a direct positive correlation between the degree of network infrastructure load and the trip time of the protection relay. The observed increase in response time is a systemic consequence of intensified network traffic rather than a result of the operational instability of the protection devices themselves. The analysis determined that the key factor influencing the delay is the increase in the sampling frequency of the data streams, which has a more significant impact than merely increasing the number of streams. The obtained results provide empirical evidence that data transmission parameterization is a critical element in the optimization process of protection automation systems. This implies the necessity of strategic management of network traffic characteristics, particularly the sampling frequency, to minimize delays and ensure maximum reliability and speed of operation for modern digital solutions in the energy sector. Future research will focus on network traffic optimization, data aggregation, and the analysis of the impact of time synchronization quality on critical infrastructure in the field of protection automation.
Protection automation systems based on centralized protection architectures significantly enhance the operational effectiveness and coordination of actuating devices under fault conditions. Nevertheless, the presence of additional control and intermediary components affects the response times of the actuating elements within the protection automation system. In this study, detailed investigations were carried out to determine the range of response delays that may occur for selected configurations of protection automation schemes. In extreme scenarios, under conditions of high network traffic, the observed delay times reached up to 64 ms (in several extreme cases, whose analyses were not included in the paper, the delay reached up to 100 ms). The analysis demonstrates how these time delays contribute to an extension of fault duration, illustrated by the case of a single-phase-to-ground fault for various values of the time constant and reactance-to-resistance (X/R) ratios of the fault loop parameters. For the longest delay times obtained in laboratory measurements, the thermal energy released during the fault was found to be 45% higher compared to conventional protection arrangements. In critical cases, such excess thermal energy may exceed the short-circuit withstand capacity of cables dimensioned without accounting for the identified delays inherent to advanced protection automation systems. Therefore, when implementing such solutions, it is essential to consider the expected delay times during the selection of conductors and associated protection devices.
This study, based on detailed hardware-in-the-loop simulations, provides empirical and quantitative evidence of fundamental importance for the design and optimization of digital protection systems compliant with the IEC 61850 standard. Our main, original contribution is the quantification of the consequences of network delays, extending beyond a mere description of correlation.
We conducted a quantitative analysis of the tripping time of the protection system across the full spectrum of digital network load in laboratory conditions. Our contribution is the direct, empirical demonstration that an increase in sampling frequency (e.g., from 80 to 256 samples/cycle) has a dominant and more significant impact on prolonging the trip time than simply increasing the number of data streams (SV). This determination, supported by laboratory measurements, establishes a critical criterion for optimizing communication parameters and refutes the simple intuition that delay is solely a function of the total number of streams. The results provide quantitative confirmation that the delay is a systemic consequence of increased traffic (queuing delay), rather than protection device instability.
This work introduces a direct link between the measured network delay and its physical consequences for the power installation, which constitutes the most important engineering contribution. The analysis demonstrated that the additional trip time introduced by digital automation systems leads to a significant increase in the thermal energy released in cables during a short-circuit event. In critical cases, for the longest measured delays (≈64 ms), the thermal energy is 45% higher compared to conventional protection schemes.
The most practical contribution is the establishment of a new design criterion. We demonstrated that accounting for the identified delays is essential to meet the cable’s short-circuit withstand condition. Our analysis clearly shows that even the shortest experimentally determined delay may necessitate oversizing the cable cross-section (e.g., to the next value in the standard series) to ensure safe and reliable system operation. This constitutes a direct and original recommendation for engineers designing digital substations.
The conducted research demonstrates that the communication delay introduced by modern digital protection and automation systems—particularly those relying on IEC 61850 process-bus architectures—can significantly extend the total fault-clearing time. Under high sampled-values (SV) network load conditions, the experimentally observed delay exceeded 60 ms, which, when combined with mechanical breaking times of 70–120 ms, may lead to total clearing times exceeding 120 ms. This temporal extension directly increases the Joule integral of the fault current, with simulations showing a 35–55% rise in released thermal energy compared to conventional protection systems that do not rely on communication-based trip coordination. From a thermal and mechanical standpoint, these additional energy releases can exceed the short-circuit withstand limits of commonly used 0.4 kV distribution cables, especially those with PVC insulation. As a result, even marginal latency growth—caused by network congestion, synchronization inaccuracies (PTP/IEEE 1588v2), or processing jitter in IEDs—may require upsizing conductor cross-sections (e.g., to the next standard cross-section in the series) to maintain compliance with short-circuit thermal endurance criteria defined in IEC 60949 and IEC 60364. For practical engineering applications, the findings highlight the need to explicitly include communication-induced delays in the design and coordination of digital protection schemes. This involves incorporating deterministic latency budgets and worst-case jitter margins in protection coordination studies; implementing redundant network topologies (PRP/HSR) to limit packet loss and reduce queuing delay; using high-precision time synchronization (PTP grandmasters with sub-microsecond accuracy) to minimize timestamp discrepancies; selecting IEDs and circuit breakers with low-latency tripping mechanisms (faster arc-extinguishing technology); and performing hardware-in-the-loop validation of end-to-end trip times before system commissioning.
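The deterministic latency budget mentioned among these recommendations can be sketched as a simple worst-case summation; every entry in the budget below is an illustrative assumption, not a measured value from this study.

```python
# Worst-case trip-time budget: each entry is (nominal_ms, jitter_ms).
# All figures are hypothetical placeholders for a coordination study.
BUDGET = {
    "MU sampling + encoding":  (1.0, 0.5),
    "network (queuing, hops)": (4.0, 6.0),
    "IED processing":          (8.0, 4.0),
    "GOOSE trip publication":  (1.0, 1.0),
    "breaker mechanical time": (40.0, 10.0),
}

nominal = sum(n for n, _ in BUDGET.values())
worst = sum(n + j for n, j in BUDGET.values())
print(f"nominal trip time: {nominal:.0f} ms, worst case: {worst:.0f} ms")

# The worst case must stay within the assumed coordination limit.
assert worst <= 120.0, "budget exceeds assumed coordination limit"
```

Keeping such a budget explicit during design makes it visible which component (here, the network queuing margin or breaker mechanics) dominates the worst case and therefore where mitigation effort pays off.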
Ultimately, the presented results underline that digital protection reliability cannot be assessed solely through algorithmic or logical coordination. Instead, it requires a holistic design approach integrating communication network performance, time synchronization accuracy, and the physical characteristics of switching and conductor elements. Neglecting these aspects in the design phase can lead to underestimation of fault energy release, potential overheating, and long-term degradation of cable insulation—particularly in industrial power distribution systems, where process continuity and system availability are critical.