1. Introduction
Substation protection and automation systems rely heavily on the reliable and swift delivery of information from various measurement devices. As complexity increases, there is a desire for improved data communication, emphasizing high reliability, low latency, a high data rate, and precise and frequent clock synchronization [1].
Early electrical power plants and their switching stations typically used copper wires for signal transmission from different measurement devices. These devices generally generated analog signals with values in the range of either 4–20 mA or 0–10 V. This design philosophy necessitated a large number of wires to distribute signals to different locations, which not only increased the cost but also added complexity during maintenance.
In the modern era, advances in communication and computing technology have led to the emergence of digital substations. It has become possible to convert analog signals to digital and transmit them over an Ethernet network for fast and reliable data transmission. All devices are connected to specific buses, which allows for more efficient data transmission and reception between them, with reduced cabling being one of the advantages.
The digital substation architecture, adhering to the IEC 61850 standard series, consists primarily of Merging Units (MUs), which receive analog signals from measurement devices such as Current Transformers (CTs) and Potential Transformers (PTs) or Voltage Transformers (VTs), convert them to digital signals, and publish them on a so-called “Process Bus” of the network. Intelligent Electronic Devices (IEDs) subscribe to the digital signals, receive them via the Process Bus, and perform various protection and control tasks and calculations based on the received values and the algorithms implemented in them.
If an intervention, such as the tripping of a circuit breaker, is required, an event-type message is sent out to the network and received by the respective device to initiate the tripping action of the circuit breaker. Thus, measurements from various locations can be extracted, and command or event messages can be transmitted through the Process Bus without the complexity of additional wiring.
Traditionally, a wired Ethernet network is used in a digital substation, along with the implementation of protocols such as High-Availability Seamless Redundancy (HSR) and Parallel Redundancy Protocol (PRP), to achieve high reliability of the network infrastructure. However, as new communication technologies emerge, new ways of improving performance and efficiency are necessary.
Wireless communication technologies have developed to the point where, at present, they are very common in our daily lives. The advantages of implementing wireless communication in substations are evident: lower cabling costs, a flexible and scalable network, and easier maintenance.
However, earlier generations of wireless communication, such as third-generation (3G)/fourth-generation (4G) cellular networks, Wireless LAN (WLAN), Zigbee, and Bluetooth, could be used for monitoring and control communications but did not meet the strict requirements of protection communication.
Fortunately, the steep development of cutting-edge technology in wireless communication in the form of fifth-generation (5G) wireless cellular networks could facilitate the provision of the wide-area information interaction and synchronization services required in protection and control systems [2,3]. Fifth-generation wireless technology offers several intriguing features with great potential for power system communications, including high data transfer rates, use of multiple frequency bands simultaneously, network slicing, and edge computing. However, very few studies have been conducted on its real-world performance in an electrical substation.
The current literature on the use of 5G technology in digital substations primarily focuses on theoretical and simulated studies, highlighting various use cases based on the theoretical peak performance of the network. However, the impact of the high and variable latency inherent to the 5G network on the protection systems within digital substations has not been thoroughly investigated. Additionally, the trade-offs involved in replacing a wired Ethernet network with wireless 5G remain unexplored. This paper addresses these gaps by examining the challenges associated with the protection system when implementing 5G communication technology in a digital substation. Although there are many other aspects involved, such as health issues, cyber security, and environmental impacts, the current article specifically analyzes the effects of the high latency and jitter characteristics of 5G networks, compared to traditional wired Ethernet, on the operational performance of substation protection and monitoring systems. A real-world latency dataset together with an empirical model approach has been adopted in this research to identify key areas where a 5G network could disturb the protection functionalities of the substation.
Section 1.1 of this paper briefly discusses the IEC 61850 standard protocol, along with the architecture and traffic classes defined in the standard. Section 1.2 focuses on the history of wireless communication previously used for controlling the electrical grid, and Section 1.3 sheds light on what is new in 5G communication technology that allows it to be more advanced and reliable compared to previous wireless technologies. Section 1.4 presents a literature review of studies that have explored the use of 5G network technology in electrical systems. Section 2 dives into a typical implementation architecture of a 5G network together with a discussion on latency. Section 3 provides the results of this study, focused on the challenges of a 5G communication network in electrical substations, with heavy emphasis on its impact on the protection system; Section 3.1 discusses the impact on the coordination of the protection schemes; Section 3.2 discusses how variable latencies impact high-speed measurement data streams; and Section 3.3 illustrates the impact on the synchronization technique and how variability in latency affects the proper utilization and accuracy of the IEEE 1588 Precision Time Protocol (PTP). Finally, Section 4 provides a summary of the important findings of this study and directions for future work.
1.1. IEC 61850 Protocol
The IEC 61850 standard protocol was developed in the late 1990s and was designed to support the complete communication of all functions and operations performed in a substation. Its primary focus is on the interoperability of devices such as IEDs from different manufacturers, both legacy and modern, allowing them to exchange data, work seamlessly together, and carry out their functions. Some of the main advantages include real-time information exchange based on an Ethernet communication network, a clear structure of the substation network through division into different levels, and the use of object-oriented modeling, among others [4].
The IEC 61850 standard defines the architecture of an electrical substation in three levels: “Process level”, “Bay level”, and “Station level”. The devices in the Process level comprise Input–Output devices, typically various sensors such as CTs and PTs, MUs, and various actuators such as circuit breakers, which send and receive signals to and from the Bay- and Station-level devices. In the Bay level, the IEDs and various other processing devices are located. They are responsible for the protection and control of various primary and actuating elements inside the substation as they process the data sent by the Process-level devices and send commands where necessary. The Station level consists of operator workstations, Human–Machine Interfaces (HMIs), Remote Terminal Units (RTUs), a Supervisory Control and Data Acquisition (SCADA) system, and other remote controllers that communicate and transfer data to and from devices in the Process and Bay levels. The transfer of data between different levels is performed through many interfaces, which are discussed in detail in [4].
According to IEC 61850-9-2 [5] and IEC 61850-8-1 [6], some of the traffic classes specified are as follows: Manufacturing Message Specification (MMS) traffic, Generic Object-Oriented Substation Event (GOOSE) traffic, and Sampled Value (SV) traffic. GOOSE and SV traffic are of interest in this paper. The three types of traffic are introduced below:
MMS traffic: MMS traffic comprises messages that allow client devices, such as SCADA systems or RTUs, to request data from server devices, such as IEDs, in the substation network. The server device returns the requested data in a response message. This traffic flows in both the Station and Bay levels.
GOOSE traffic: This comprises data packets that are sent and received in the Bay level or between the Bay level and the Process level. The messages are typically status signals, tripping signals, and interlocking signals. GOOSE messages are retransmitted in rapid bursts when an event occurs and cyclically at a slow rate under normal conditions.
SV traffic: This comprises data packets containing values of currents and voltages from CTs and PTs. The packets are sent from the MUs, which sample the analog current and voltage signals and publish them on the Process Bus. The clients, such as IEDs that subscribe to the SV packets of different CTs and PTs, receive them from the Process Bus in real time. This traffic flows from the Process level to the Bay and Station levels.
1.2. Wireless Communication Implementation in Electrical Substations
Wireless network technology has been around for a long time and has been developing at an impressive rate. The appeal of using a wireless network comes from its potential for power system communications as a flexible, scalable, and cost-effective alternative to a wired network. The prospect is even more attractive when remote locations and retrofit cases are considered. The advantages of wireless communication are also visible during installation and maintenance, as the processes become much simpler. Since the design of communication networks is no longer restricted by the reach of cables, wireless networks can open the door to better control, monitoring, and protection schemes. Comparative studies between different wireless networks can be found in [7,8,9]. Table 1 provides a summary of the different wireless communication technologies, drawing on these studies and the respective standards for each technology, and compares them with each other.
Implementation of Wireless Technology in Electrical Substations
Fourth-Generation/LTE (Long-Term Evolution): In [10], a novel scheduler model enabled an LTE network to serve energy automation systems in substations by prioritizing energy data flows in the public network infrastructure.
Wireless LAN: WLAN has been used to enhance the automation of distribution substations as well as power line protection between two substations [11].
Zigbee: Research on using Zigbee for wireless sensor networks (WSNs) in substations has been extensively carried out, and it can also perform direct load control as well as remote meter reading [12]. The disadvantages of Zigbee include its small memory, limited data rate, and low processing capabilities.
WiMAX (Worldwide Interoperability for Microwave Access): Wireless automatic meter reading has been possible through WiMAX. It has also been used in substation automation systems (SASs) as well as for rapid outage detection and restoration [11].
1.3. Introduction to Fifth-Generation Cellular Network (5G)
Fifth-generation mobile wireless technology, or 5G, is the latest iteration of the development of mobile telephony. Some of the most promising enhancements in 5G include a theoretical low latency of 1 ms, data throughput in the range of gigabits, increased reliability, device connection capacity, and network flexibility. Although there have been many misconceptions about 5G and its impact on human health, according to a comprehensive state-of-the-science review [13], no confirmed evidence was found that the low-level radio frequency fields of 5G are hazardous to human health, including occupational exposure. On the other hand, another review paper [14] discusses the challenges and opportunities of 5G networks, emphasizing their transformative potential, which is foundational for the Industry 4.0 concept, smart factories, and remote work. Fifth-generation technology is suitable for providing connectivity to modern applications such as automotive, media, public safety, and manufacturing. Some of the new features in 5G are listed below:
Enhanced Mobile Broadband (eMBB): Enables data rates in the gigabit per second range and large volumes of data transfer.
Massive Machine-Type Communications (mMTCs): Connectivity to a large number of low-complexity devices such as sensors, meters, etc.
Ultra-Reliable Low-Latency Communications (URLLCs): Designed to be used with stringent reliability and latency requirements for real-time communication.
Flexible Data Transmission: Utilization of both Frequency Division Duplexing (FDD) and Time Division Duplexing (TDD) for data transmission.
Network Slicing: Allows multiple networks or slices to be created over a single physical infrastructure to allocate resources on slices based on the application requirements.
Edge Computing: Computational resources can be brought closer to the user location, which reduces latency, enhances data processing speeds, and improves overall network efficiency.
The 5G network is highly adaptive, capable of adjusting to varying traffic conditions and environmental factors. It can dynamically modify key properties such as modulation scheme (ranging from QPSK (Quadrature Phase Shift Keying) to 256-QAM (Quadrature Amplitude Modulation)), coding rate (from 120/1024 to 948/1024), and sub-carrier spacing (from 15 kHz to 240 kHz). For example, in environments with an excellent signal-to-noise ratio (SNR) and low bit error rate (BER), the network may switch to 256-QAM modulation, a coding rate of 948/1024, and a sub-carrier spacing of 240 kHz to maximize data throughput by minimizing latency. However, in adverse conditions, such as heavy rainfall where the SNR decreases and BER rises, the network may shift to a more robust QPSK modulation scheme with a coding rate of 120/1024 and a sub-carrier spacing of 15 kHz. Although this adjustment reduces data throughput by increasing the latency of data transfer, it improves the reliability of transmission in such challenging environments.
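As a rough illustration of these two extremes, the arithmetic below uses the modulation and coding values quoted above; the slot-duration scaling is a general property of 5G NR numerology rather than a statement from this section:

```latex
% Spectral efficiency per modulation symbol at the two extremes:
\[
\eta_{\max} = \log_2(256)\cdot\tfrac{948}{1024} \approx 7.41\ \text{bits/symbol},
\qquad
\eta_{\min} = \log_2(4)\cdot\tfrac{120}{1024} \approx 0.23\ \text{bits/symbol}
\]
% NR slot duration scales inversely with sub-carrier spacing:
\[
T_{\text{slot}} = \frac{1\ \text{ms}}{2^{\mu}},\qquad
\mu = 0\ (15\ \text{kHz}) \Rightarrow 1\ \text{ms},\qquad
\mu = 4\ (240\ \text{kHz}) \Rightarrow 62.5\ \mu\text{s}
\]
```

The roughly 32-fold gap in per-symbol efficiency, combined with the 16-fold change in slot duration, illustrates how strongly link adaptation can swing both throughput and latency.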
1.4. Literature Review on Use of 5G Communication in Electrical Grid Infrastructure
The implementation of 5G communication technology in electrical systems has been explored from various theoretical perspectives and through several use cases. The authors in [15] review previous studies to discuss current and future communication solutions for smart grids, highlighting the use of 5G for Wide-Area Measurement Systems (WAMSs), secure communications for outage monitoring, and demand response control. Additional use cases of 5G technology in smart grids are discussed in [16,17,18], including applications in SCADA systems, Vehicle-to-Grid (V2G) integration, Distributed Energy Resource (DER) management, remote smart meter reading, and communication with remote locations. Studies such as [19,20] explore the integration of 5G and Artificial Intelligence (AI) to enhance network capabilities, supporting the electrical grid and its protection systems. Furthermore, ref. [21] presents a study on designing a secure wireless network infrastructure based on 5G. A theoretical investigation in [22] proposes a hybrid WLAN + 5G network for transformer substations, analyzing theoretical performance parameters to identify optimal implementation areas for each network technology.
When it comes to security concerns for 5G technology, significant research has also been conducted. A literature review focusing on cyber security threats for 5G, such as data confidentiality and integrity, was published in [23], in which attacks such as cross-slice attacks, physical tampering with edge devices, and exploitation of IoT devices were detailed. The paper also discussed mitigation strategies for these vulnerabilities. Another review paper [24] provided a comprehensive survey of cyber security concerns in smart grids, emphasizing 5G-driven vulnerabilities. It reviews threats like data tampering, eavesdropping, and DoS attacks exacerbated by 5G-enabled IoT devices and edge computing.
The simulation and modeling of 5G networks have also been a focus of recent research. Papers [25,26] concentrate on the modeling and analysis of 5G network channel parameters, reporting results on path loss, delay spread, and angle spread. They also examine the potential interference of 5G signals with substation secondary equipment using near-field and far-field simulation models.
Another important area of research is the use of 5G communication for Phasor Measurement Units (PMUs). Studies [27,28,29] investigate the communication between PMUs and Phasor Data Concentrators (PDCs) over 5G networks. In [27,29], the communication is evaluated based on data packets transferred according to the IEEE C37.118.2 standard [30]. In [28], the impact of varying the number of base stations on the Packet Delivery Ratio (PDR) and the transmission latency of TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) packets is analyzed. The simulations from these studies highlight that latency is the primary constraint for successful application functionality over a 5G network, although the measured latency remains within the limits defined by the relevant standards when using a 5G Ultra-Low-Latency (5G-ULL) link.
In addition to communication technologies, research has also focused on integrating wireless networks within the framework of the IEC 61850 standards. The adoption of IEC 61850 allows for the integration of multivendor devices within a single Protection, Automation, and Control (PAC) system. Studies [31,32] identify engineering challenges associated with this integration, using virtual systems and testbeds to demonstrate IEC 61850 functionality over Ethernet networks. When considering the use of 5G in PAC systems, ref. [33] evaluates 5G performance in digital substations. The study simulates the transmission of MMS and SV packets over both Standalone (SA) and Non-Standalone (NSA) 5G network topologies, utilizing the 5G-LENA model with the URLLC traffic class. The results confirm that 5G can meet the low-latency requirements of the IEC 61850 standards for SV packet transmission. Further advancing this research, ref. [34] demonstrates the implementation of 5G networks for the protection of a Medium-Voltage (MV) grid, focusing on the transmission of GOOSE and SV packets. In this case, 5G is employed specifically for differential protection, with SV packets transmitted over the 5G network between IEDs.
The reviewed literature illustrates the promising potential of 5G communication technology within the modern electrical grid and highlights several practical use cases. However, most studies rely on simulated models without real-world validation, and existing field tests or pilot projects typically confirm only the feasibility of the technology, offering limited operational or performance data.
One critical gap in the current literature is the lack of an in-depth analysis of how replacing conventional Ethernet-based communication with 5G would affect protection systems in digital substations. This paper contributes to addressing this research gap by presenting a theoretical initial examination of the protection sub-functions most likely to be impacted by 5G integration, offering valuable insights for future research and practical implementation.
2. Implementing a 5G Network in a Digital Substation
In a substation that uses a 5G communication network, data transmission in the form of SV and GOOSE messages would be carried out through the 5G network and its associated devices.
Figure 1 illustrates the schematic diagram of the substation’s communication network. In this setup, IEDs and MUs are connected to Customer Premises Equipment (CPE), which serves as a 5G transmitter or router. The Base Stations (BSs) communicate with the CPEs, and the data is relayed to the core network (CN) through the Baseband Unit (BBU). The CN then routes the data to the receiving device, via either the same or another BBU and BS, from which the receiver obtains the data.
The process of transitioning from a wired Ethernet network to a wireless 5G network can be described using the OSI (Open Systems Interconnection) model. A simplified OSI model is illustrated in Figure 2a. According to the model, the changes involved in replacing a wired network with a wireless 5G network extend from the Physical Layer (Layer-1) through the Transport Layer (Layer-4). This is because the transmission of data packets used in the digital substation, such as SV and GOOSE, through the 5G network is not straightforward, as the network requires routing information to securely deliver the data to its destination.
The IEC 61850-90-5 standard [35] addresses this by defining routable versions of these packets, known as R-SV and R-GOOSE, which are designed for use with external communication networks. Some key differences between the standard and routable versions include packet size and structure. For example, GOOSE packets typically range from 200 to 500 bytes, while R-GOOSE packets are larger, ranging from 300 to 800 bytes. Each transmission slot of the 5G TDD pattern can carry kilobytes of data under normal conditions, which is sufficient for these packets.
Figure 2b shows the transmission process for GOOSE/SV and R-GOOSE/R-SV packets in the Ethernet network. As can be seen from Figure 2b, the GOOSE and SV packets are encapsulated directly into Ethernet frames for transmission. In contrast, R-GOOSE and R-SV packets are created by first encapsulating GOOSE and SV packets into UDP packets and then further wrapping them into IP packets to form their routable versions. Finally, the R-GOOSE and R-SV packets are encapsulated in an Ethernet frame prior to transmission, as sketched below.
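A minimal MATLAB sketch of this nesting is given below; all header contents, the payload bytes, and the sizes shown are placeholders for illustration, not a standards-complete encoder:

```matlab
% Minimal sketch of the R-GOOSE nesting in Figure 2b. Header bytes and the
% payload are placeholders (zeros/random), not a standards-complete encoder.
goosePdu = uint8(randi([0 255], 1, 300));   % hypothetical ~300-byte GOOSE APDU
udpHdr   = zeros(1, 8,  'uint8');           % UDP header: ports, length, checksum
ipHdr    = zeros(1, 20, 'uint8');           % IPv4 header: addresses, TTL, proto=UDP
ethHdr   = zeros(1, 14, 'uint8');           % Ethernet header: MACs, EtherType

% UDP length field (bytes 5-6, big-endian) = header length + payload length
udpHdr(5:6) = typecast(swapbytes(uint16(8 + numel(goosePdu))), 'uint8');

rgooseFrame = [ethHdr, ipHdr, udpHdr, goosePdu];  % R-GOOSE: Eth(IP(UDP(GOOSE)))
gooseFrame  = [ethHdr, goosePdu];                 % plain GOOSE: Eth(GOOSE) directly

fprintf('GOOSE frame: %d B, R-GOOSE frame: %d B\n', ...
        numel(gooseFrame), numel(rgooseFrame));
```

The extra UDP and IP headers account for much of the size difference between GOOSE and R-GOOSE noted above.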
Table 2 outlines the tasks performed by each layer, along with the differences between the two networks during the transmission of GOOSE and R-GOOSE messages (as well as SV and R-SV packets). The transmission process begins from the higher layers (Layer-7) and progresses through the layers to Layer-1, where the packets are finally transmitted through the physical communication medium. The process is reversed at the receiving end, starting from Layer-1 and moving towards Layer-7.
The advancements 5G brings over previous wireless networking technologies make it a very promising candidate to replace the wired network used in substations. However, like other wireless communication systems, data transfer through the 5G network remains less predictable than that of wired communication due to variations in latency over time, as well as vulnerability to environmental factors.
Simulation results in industrial 5G Time-Sensitive Networking (TSN) environments found in [36] show that the end-to-end latency for critical Network Control (NC) traffic stays below 10 ms, with low jitter under optimal conditions (line of sight (LOS), sparse clutter, and proximity to the base station). As the distance increases from 85 m to 255 m, both latency and retransmission rates rise noticeably. In dense non-line-of-sight (NLOS) environments with machinery-induced clutter, best-effort (BE) and video traffic experience peak latencies exceeding 50–60 ms. BE traffic shows an 11.7% increase in latency from a lightly loaded to a heavily loaded network. These results demonstrate how distance, clutter, and traffic conditions directly affect latency in industrial 5G networks. Optimizing network design, including device placement and traffic management, is essential to ensure consistent low-latency communication in critical applications.
Although network simulations allow strict control over modulation schemes and other parameters, real-world networks are subject to dynamic changes due to various environmental and network-related factors. As a result, it is challenging to accurately identify the frequency bands, modulation schemes, and coding rates used at any given moment. Consequently, in this study, the 5G network has been treated as a black box.
To investigate the variation in latency, a latency test was carried out in the lab of Luleå University of Technology (LTU) in Luleå, Sweden, using two Raspberry Pi 4s exchanging R-GOOSE messages over a 5G network at a packet transfer rate of 1 pps (packet per second) for a total duration of 100 min. The 5G network used was a private 5G network from NorthStar operating on the LTU campus [37]. The network was connected to the public 5G network of Telia (a Swedish mobile network operator) while running as 5G-SA with Local Breakouts (LBOs), such that the data session with the Internet did not require tunneling to the CN. This setup is more secure and can provide lower latency compared to public 5G networks; however, the 5G modems used do not log the real-time parameters of the network.
Figure 3a shows the latency distribution of the data transfer during the test.
Another dataset containing transmission latency was collected from Telia with a similar packet-flow architecture, where data, in the form of UDP packets, were transmitted from a device to the User Plane Function (UPF) of the 5G network via over-the-air transmission and then received by the same device over a wired connection from the UPF. The travel time for the data packet was measured by the device itself. The measured travel time corresponds to an uplink (UL) transmission to the 5G network (the wired data transmission latency from the UPF to the device is negligible compared to over-the-air transmission). Another data packet of the same nature was sent back from the device to the UPF, and using the same method, the downlink (DL) transmission latency was measured. The data traffic class used was eMBB. According to [38], the DL latency has been empirically found to be higher than the UL transmission latency, which has also been observed in this dataset. The dataset has an average packet transmission rate of 62.5 pps, with the 99th percentile of the data within 63.5 pps. Even though this packet transfer rate is not as high as that of SV packets, according to the simulation studies conducted in [33], the 5G network latency does not change substantially with the packet transfer rate. Hence, the dataset used could correspond closely to the real-world data transmission latency observed inside a substation under good weather conditions. The latency distribution from the Telia dataset is illustrated in Figure 3b, and Figure 4 shows the latency and jitter of the two datasets.
The latency distributions from the two datasets show similar trends: a large peak containing most of the results and a short peak shifted from the first by a margin of around 5 ms. The second peak corresponds to the retransmission of lost or dropped packets in the 5G network, which is handled by the Hybrid Automatic Repeat Request (HARQ) system of the network. In addition, both distributions show high variation in latency during data transfer. However, the minimum and mean latency of the Telia dataset are lower than those of the LTU dataset. The low transmission rate may cause various timeouts at different levels in the 5G network and enable sleep mode in the User Equipment (UE), which may explain the higher latency in the LTU dataset. Although the size and structure of a simple UDP packet differ from those of an R-GOOSE packet, this would not affect the latency distribution. The latency of 5G MBB traffic observed in [38] also showed a similar trend to the above two datasets, with a long tail (approximately 17.5 ms for the 99th and 19.5 ms for the 99.9th percentile) and a lower median latency value (approximately 12.0 ms). The two latency datasets have much higher latency and jitter compared to the latencies observed in [33] and in some parts of [38]. This is because those studies implemented 5G URLLC traffic for the data transmitted through the 5G network, which may be suitable for the data traffic in a digital substation from a low-latency perspective; however, URLLC requires significantly higher network resources and constant bandwidth usage compared to eMBB traffic, and with so many devices in constant communication, implementing URLLC traffic for a large number of devices inside the digital substation may not be feasible. Hence, a private 5G network with eMBB traffic would be a good balance from a performance and cost perspective. For the following sections, the Telia dataset has therefore been used for its larger size and the better resemblance of its latency distribution to the high data transmission rate expected in a substation.
3. Impact of Variable Latency of a 5G Network on Various Processes in a Digital Substation
The protection of primary components is one of the major functions of the substation. It requires proper coordination between the multiple implemented protection schemes, which in turn depends on the characteristics of the communication network. The variable latency of the 5G network can pose challenges in various processes of the digital substation, which are discussed in this section.
3.1. Dimensioning and Coordination of Protection Schemes
In an electrical substation, various protection schemes are employed to safeguard primary elements such as transformers, bus bars, and transmission lines. These protection schemes are defined for specific zones. When a fault occurs within a particular zone, the associated circuit breaker is tripped by the protection scheme, ensuring that the rest of the system remains unaffected. Achieving selective tripping is essential, which is accomplished through the precise dimensioning of protection schemes. In addition, redundancy is built into the protection system. If a single protection scheme fails to clear a fault, backup mechanisms come into action. The coordination of protection schemes ensures that the primary protection is activated first. If the primary protection fails, the backup protection will then be triggered after a specified time delay. To illustrate this feature, a single-line diagram of a substation network is shown in Figure 5.
There are two feeder buses (BUS 101 and BUS 201), which feed in power from the grid (GEN 1 and GEN 2) and distribute it to the connected loads (L1–L4). Elements A1–A4, B1–B4, and C1 are circuit breakers used in the network. The dashed-line areas correspond to different protection zones. For a fault that occurs between A3 and A4, denoted by ‘X’, the primary protection scheme should trip breaker A3 within a very short time to isolate the fault, and only load L2 would then be interrupted. But if the primary protection fails to trip the breaker within a time of 150–200 ms [39,40], then the backup protection scheme would initiate the tripping of breakers A1 and C1, hampering the power flow to both loads L2 and L1. In this case, the interruption affects additional zones, which is not desired but necessary if the primary protection fails.
In the normal state, periodic GOOSE messages are sent by IEDs to different devices to verify that the devices are operational. In case of a fault, IEDs analyzing SV data packets received through the 5G network would detect the fault condition in the form of abnormally high current flow and generate a trip signal in the form of a burst transmission of GOOSE messages, which initially have short intervals between retransmissions, with the intervals increasing exponentially over time. These data packets are sent over the 5G network again, received by the IED or relay controlling the circuit breaker, which trips the breaker to clear the fault. However, due to the variable latency in packet transfer within 5G communication, there is a possibility that the GOOSE message packets generated by the IED responsible for primary protection may reach the relay controlling the circuit breaker with significant delay. Furthermore, the GOOSE packet from the backup protection IED could arrive at the relay sooner. The findings in [41,42,43] showed that many packets were lost and GOOSE messages failed due to high congestion in the 5G network. If the delay of the GOOSE packet from the primary protection scheme is greater than the defined delay of the backup protection, then the backup protection would be triggered, and selectivity would be compromised, leading to unnecessary isolation of a larger zone within the system. The cumulative distribution function of data transmission delay through the 5G network, created using the Telia dataset, is illustrated in Figure 6. The X-axis shows the total delay for a round-trip transfer of a data packet through the 5G network in milliseconds.
As a case study, a typical protection scheme for a high-voltage transmission line connecting two substations has been used, with line differential protection as the primary scheme, distance protection as the backup, and overcurrent protection as the remote backup. The purpose of the study is to evaluate the probability of successful operation of the different protection schemes within the amount of time allocated to them.
Figure 7 shows the arrangement of the physical equipment used in the primary and backup protection schemes for a transmission line between two substations.
A simplified schematic describing the signal pathway is presented in Figure 8, where the IED-A block represents the decision-making protection IED and the IED-B block represents the controlling IED for the circuit breaker.
The availability values of the different devices in the process were assumed, and the processing times of various messages by the devices were sourced from the relevant literature [40,41,42,43,44,45]. In addition, the cumulative distribution function of delay in data transmission through the 5G network has also been used in this case study. The values are summarized in Table 3.
Since 5G wireless communication modules are used with the MUs, IEDs, and CBs (circuit breakers), group availabilities can be calculated using Equations (1)–(4). Using the group availabilities, the success rates of the protection schemes in clearing the fault can then be calculated using Equations (5)–(7),
where $P_{\mathrm{diff}}$, $P_{\mathrm{dist}}$, and $P_{\mathrm{OC}}$ are the probability of successful operation of the differential, distance, and overcurrent protection schemes, respectively, and $P_T^{\mathrm{prim}}$, $P_T^{\mathrm{bak}}$, and $P_T^{\mathrm{rem}}$ are the probability of successful operation within a specified time $T$ for the primary, backup, and remote backup protection schemes, respectively. The time of operation includes the processing times of the devices and the delays specified in Table 3, as well as coordination time delays in the case of the backup protection schemes. Two scenarios were developed in which the delay between the primary, backup, and remote backup protection schemes was varied; a sketch of a plausible form of Equations (1)–(7) follows.
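A plausible reconstruction of Equations (1)–(7), assuming simple series availability (each device in series with its own 5G module, so availabilities multiply) and an independent requirement that the end-to-end delay fall within the allotted time, is sketched below; the exact formulation in the original may differ:

```latex
% Assumed form: series availability with the 5G module, times the
% probability that the scheme operates within its allotted time T.
\begin{align}
A^{\mathrm{grp}}_{\mathrm{MU}}   &= A_{\mathrm{MU}}\,A_{\mathrm{5G}}, \tag{1}\\
A^{\mathrm{grp}}_{\mathrm{IEDA}} &= A_{\mathrm{IEDA}}\,A_{\mathrm{5G}}, \tag{2}\\
A^{\mathrm{grp}}_{\mathrm{IEDB}} &= A_{\mathrm{IEDB}}\,A_{\mathrm{5G}}, \tag{3}\\
A^{\mathrm{grp}}_{\mathrm{CB}}   &= A_{\mathrm{CB}}\,A_{\mathrm{5G}}, \tag{4}\\
P_{\mathrm{diff}} &= A^{\mathrm{grp}}_{\mathrm{MU}}\,A^{\mathrm{grp}}_{\mathrm{IEDA}}\,
A^{\mathrm{grp}}_{\mathrm{IEDB}}\,A^{\mathrm{grp}}_{\mathrm{CB}}\;P_T^{\mathrm{prim}}, \tag{5}\\
P_{\mathrm{dist}} &= A^{\mathrm{grp}}_{\mathrm{MU}}\,A^{\mathrm{grp}}_{\mathrm{IEDA}}\,
A^{\mathrm{grp}}_{\mathrm{IEDB}}\,A^{\mathrm{grp}}_{\mathrm{CB}}\;P_T^{\mathrm{bak}}, \tag{6}\\
P_{\mathrm{OC}}   &= A^{\mathrm{grp}}_{\mathrm{MU}}\,A^{\mathrm{grp}}_{\mathrm{IEDA}}\,
A^{\mathrm{grp}}_{\mathrm{IEDB}}\,A^{\mathrm{grp}}_{\mathrm{CB}}\;P_T^{\mathrm{rem}} \tag{7}
\end{align}
```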
The results of the case study are presented in Figure 9, demonstrating the successful operation of the protection schemes within a specified time.
In the first scenario, a delay of 10.0 ms is considered for backup protection and 20.0 ms for remote backup protection relative to the primary protection scheme (Figure 9a). In the second scenario, these delays are doubled for both the backup and remote backup protection schemes (Figure 9b). As shown in the graphs, the probability of successful operation of the protection schemes increases as the maximum allowed time for fault clearance is extended. This improvement is attributed to the higher probability of data transmission through the 5G network when delay constraints are relaxed.
However, aiming for a higher success rate for the primary protection scheme can compromise selectivity among the protection schemes, as the success rates of the other schemes also increase. To restore selectivity, the coordination delay (i.e., the time delay between the protection schemes) can be increased, as evident from a comparison of the two scenarios. In conclusion, the 5G network can provide satisfactory performance for protection systems that allow longer delays in operation and larger coordination delays.
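As an illustration of how such success probabilities follow from the latency distribution, the MATLAB sketch below combines assumed device availabilities with empirical latency samples; the file name, availability values, and processing delay are placeholders, not values from Table 3:

```matlab
% Sketch: success probability of a protection chain vs. allowed fault-clearing
% time T, using empirical 5G latency samples for the two network transfers
% (SV from MU to IED-A, GOOSE from IED-A to IED-B). All numbers are assumed.
lat  = readmatrix('telia_latency_ms.csv');        % hypothetical latency file [ms]
A5G  = 0.999;                                     % assumed 5G module availability
Agrp = (0.9999*A5G)*(0.9999*A5G)*(0.9999*A5G)*(0.999*A5G); % MU,IED-A,IED-B,CB

procDelay = 4.0;                                  % assumed total processing [ms]
twoHop = lat(randi(numel(lat), 1e5, 1)) + ...     % SV transfer latency draw
         lat(randi(numel(lat), 1e5, 1));          % GOOSE transfer latency draw

T = 0:0.5:60;                                     % fault-clearing budgets [ms]
Psucc = arrayfun(@(Tk) Agrp*mean(twoHop <= Tk - procDelay), T);
plot(T, Psucc); grid on;
xlabel('Allowed fault-clearing time T [ms]'); ylabel('P(successful operation)');
```

Relaxing the budget T raises the empirical probability that both 5G transfers complete in time, which is the mechanism behind the trend seen in Figure 9.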
3.2. Processing of High-Frequency Sampled Value (SV) Packets
According to IEC 61850-9-2 LE, in a standard digital substation configuration operating at an electrical frequency of 50 Hz, each electrical cycle is sampled 80 times, yielding 4000 samples per second. Consequently, if each SV packet encapsulates a single sample, the transmission rate of SV packets would be 4000 packets per second. This continuous flow of data originates from the MU and is directed toward the Process Bus, allowing IEDs to access the packets.
When employing 5G communications in a digital substation, the multicast dissemination of the data stream allows numerous IEDs to simultaneously receive identical packets from a single source. However, the inherent variability in latency can disrupt the sequence in which packets are received. According to IEC 61850-5 [46], no more than four out-of-sequence SV packets are allowed. Exceeding this threshold may trigger an alarm, indicating degraded network performance, or, in more critical cases, may lead to the disabling of protection systems. To mitigate this, networks may buffer packets to maintain sequence integrity, although this can notably increase the delay experienced by IEDs. Cellular network technologies, such as 4G and 5G, have a protocol named PDCP (Packet Data Convergence Protocol). The PDCP is a layer-2 protocol in the 5G-NR (New Radio) protocol stack with many functionalities, one of which is the reordering of out-of-sequence packets. Although reordering is performed on almost all types of packets, the approaches used differ depending on the type of data packet. R-GOOSE and R-SV are UDP packets, and once they are received by the PDCP layer and identified by the baseband processor, they are immediately delivered without delay to minimize latency [47]. Hence, this functionality is almost ineffective for out-of-sequence UDP packets. So when these out-of-sequence R-SV packets arrive at the receiver side of the IEDs, as shown in Figure 10, the devices may struggle to identify electrical disturbances, potentially leading to unwanted operations or failure to operate.
To demonstrate this issue, a high-frequency data stream is required, but the available dataset has a low mean data transmission rate of 62.5 pps, which is significantly lower than the 4000 pps of the SV packet stream. Since it has already been established from [33] that the latency distribution does not change significantly with the packet transmission rate, the available dataset can be used to carry out the study. A MATLAB script has therefore been written to generate the send and receive times for a packet transfer rate of 4000 packets per second using the dataset shown in Figure 3b. Since the packet transfer rate is 4000 pps, each packet is sent every 0.25 ms. If the send time of the i-th packet is denoted as $t_{s,i}$, then the time at which the packet is received, denoted as $t_{r,i}$, can be calculated using Equation (8) below:

$t_{r,i} = t_{s,i} + L_i$, (8)

where $L_i$ is the latency of the i-th packet, chosen randomly from the latency dataset.
A comprehensive demonstration also requires a comparison with a wired Ethernet network. Hence, a synthetic dataset of the expected latencies of a wired Ethernet network was generated, following a normal distribution with an average latency of 1.0 ms and a jitter (variation in latency) of 0.1 ms [48]. In addition, 5% of the dataset was generated as abnormally high latencies with values around and below 3.0 ms, as sketched below.
The packet reception sequence was observed, and buffer models were developed to assess variations in packet sequence. Model 1, with no buffer, discards out-of-sequence packets and calculates the proportion of the data stream maintaining a correct sequence. Model 2, equipped with an infinite buffer, organizes the packets in the correct order, assigning each packet a reception time according to the clock time of the IED when the packets are forwarded to the upper layers.
For Model 3, a sequential packet processing algorithm has been employed with the objective of monitoring and recording the arrival of SV packets. The operational logic of the algorithm is described below (a sketch follows the list):
Initialization: The algorithm initializes by setting a control parameter, denoted as ‘seq’, to an initial value of 1. This parameter represents the expected sequence number of incoming SV packets.
Packet Monitoring and Processing: As SV packets arrive, the algorithm evaluates their sequence numbers against the current value of the sequence parameter.
- (a)
Matching-sequence packet: If the sequence number of the incoming SV packet matches the value of the sequence, the algorithm
- i.
Records the current system time as the packet’s arrival time;
- ii.
Marks the packet as successfully received;
- iii.
Increments the ‘seq’ parameter by 1 to update the expected sequence number;
- iv.
Continues to monitor the next packet.
- (b)
Out-of-sequence packet: If the sequence number does not match the expected value, the following occur:
- i.
The packet is temporarily stored in the buffer.
- ii.
The algorithm continues to inspect subsequent incoming packets, allowing up to two additional attempts to find the matching sequence number.
Delayed match within tolerance: If a packet with the expected sequence number arrives within these three attempts, the packet is processed as described in step 2(a).
No match after three attempts: If no matching packet is received after three consecutive attempts, the packet corresponding to the current ‘seq’ value is considered lost; the algorithm logs the loss, increments the ‘seq’ value by 1, and resumes the process with the next incoming packet.
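A minimal MATLAB sketch of this logic is given below; the packet representation (a struct array in arrival order with fields .seqNum and .arrival) and the flushing of buffered successors are assumptions consistent with the description above:

```matlab
% Sketch of the Model 3 limited-look-ahead reordering described in the list.
function [rxTime, lost] = model3Buffer(pkts)
    seq    = 1;                                   % expected sequence number
    tries  = 0;                                   % attempts since last match
    buffer = containers.Map('KeyType','double','ValueType','double');
    rxTime = containers.Map('KeyType','double','ValueType','double');
    lost   = [];
    for k = 1:numel(pkts)
        buffer(pkts(k).seqNum) = pkts(k).arrival; % store every arrival
        if pkts(k).seqNum == seq                  % 2(a): matching sequence
            rxTime(seq) = pkts(k).arrival;        % record arrival time
            tries = 0;
        else                                      % 2(b): out of sequence
            tries = tries + 1;
            if tries < 3, continue; end           % allow two more attempts
            lost(end+1) = seq; %#ok<AGROW>        % declare current seq lost
            tries = 0;
        end
        seq = seq + 1;
        while isKey(buffer, seq)                  % flush buffered successors
            rxTime(seq) = pkts(k).arrival;        % forwarded at current time
            seq = seq + 1;
        end
    end
end
```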
The latencies of the received packets after being processed by the three buffer models are recorded, and the jitter during packet reception is also evaluated. The buffer sizes of Model 2 and Model 3 have been evaluated by observing the number of packets kept in the buffer during the processing period. The outcomes from the models are depicted in Figure 11, Figure 12 and Figure 13.
When evaluating the results of the buffer models from Figure 11, it can be seen that the model with no buffer (Model 1) had the worst performance in terms of the percentage of processed packets in the proper sequence, as it does not offer any reordering functionality, whereas the infinite buffer model (Model 2) was able to buffer the packets and reorder them all. Model 3, with its limited buffer, showed significantly better results than Model 1 but was still behind Model 2. When comparing the results between the two network technologies, the buffer models showed better performance for the Ethernet network than for the 5G network. The high variability in latency and jitter of 5G resulted in a large number of packets being out of sequence compared to the Ethernet network, which is why Model 1 showed a significantly higher percentage of packets in sequence for Ethernet than for the 5G network. Regarding the buffer size used by the models, Figure 12 shows the comparison between the different networks when incorporating buffer Models 2 and 3.
As can be seen from the figure, Model 2 required 1.81 kB and 11.07 kB for the Ethernet and the 5G networks, respectively, whereas Model 3 required 0.90 kB and 4.07 kB, respectively, for the two networks when taking a packet size of 226 bytes (typical for SV packets). These values indicate that Model 3 requires roughly half (Ethernet) to a third (5G) of the buffer size of Model 2 and that, with the implementation of the 5G network, the buffer size would need to be increased by a factor of about 6 to perform satisfactorily.
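In terms of packet depth, and assuming 226-byte packets with 1 kB = 1000 B, the reported buffer sizes correspond approximately to:

```latex
\[
\frac{1.81\,\mathrm{kB}}{226\,\mathrm{B}} \approx 8,\qquad
\frac{11.07\,\mathrm{kB}}{226\,\mathrm{B}} \approx 49
\quad\text{(Model 2: Ethernet, 5G)}
\]
\[
\frac{0.90\,\mathrm{kB}}{226\,\mathrm{B}} \approx 4,\qquad
\frac{4.07\,\mathrm{kB}}{226\,\mathrm{B}} \approx 18
\quad\text{(Model 3: Ethernet, 5G)},\qquad
\frac{11.07}{1.81} \approx 6.1
\]
```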
The results in Figure 13 show the overall latency and jitter of the packets after being processed by the buffer models. As can be seen, Model 1 shows the lowest latency and jitter, since no processing in the form of buffering and/or reordering has been performed on the packets, whereas Model 2 shows the highest latency and jitter, the price of guaranteeing the proper sequence of packets. Although Model 3 does not offer such a guarantee, it makes up for it by achieving lower latency and jitter for the packets, which is crucial for the transmission of SV packets in the substation. The algorithm of the Model 3 buffer can be further modified, by reducing the number of subsequent packets observed, to improve latency and jitter. However, this modification results in a reduced percentage of processed packets in sequence, which would have a detrimental impact during signal reconstruction.
Once the SV packets are received by the IEDs, the sample values of current or voltage in the packets are passed through a low-pass filter for signal reconstruction. As an example case, the SV packets come from a sampled line current waveform with a total time span of 0.6 s, in which a steady state is observed for 0.2 s before a short circuit increases the current magnitude, and the fault is then cleared at 0.4 s. The cut-off frequency of the filter has been set to 500 Hz, with the sampling frequency set to 4 kHz and the filter order set to 20.
Figure 14 illustrates the reconstructed waveforms when the R-SV packets, transferred through the 5G network, pass through the three buffer models in combination with the low-pass filter. It should be noted that the reconstruction of signals depends on the time at which the data packets were received, and therefore the rate at which the signals are reconstructed by the three buffer models would vary significantly depending upon the processed latency of the data packets.
The fault can be detected by the IED using various parameters, such as the RMS (Root Mean Square) value, fundamental frequency, angle shift, etc. In this case, the RMS value of the current has been chosen as the parameter for fault detection. In this method, the RMS value of the current is compared to a threshold value, and once the current exceeds the threshold, the IED declares a fault. This method is similar to the one used in overcurrent relays. The waveform used has a magnitude of 2.7 kA in the steady state (before the fault), and the threshold value to trigger the protection algorithm in the IED is set to 8.1 kA, i.e., three times the steady-state value. The threshold RMS fault current is chosen so that it is neither so low as to be close to the overload current nor so high as to cause a failure to detect some types of fault. The RMS value is calculated using a sliding-window method, where the window size should be neither so small that it produces large variations in the RMS value nor so large that it delays the rise of the RMS current and hence the detection of the fault. For this reason, the window size has been set to 160 samples, which corresponds to two cycles of the 50 Hz current waveform, as sketched below.
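A minimal MATLAB sketch of this detector is given below, with a synthetic test waveform standing in for the reconstructed signal; the fault magnitude of five times the steady state is an assumption for illustration, while the window, threshold, and sampling rate follow the text:

```matlab
% Sliding-window RMS fault detector: 160-sample window (two 50 Hz cycles at
% 4 kHz), threshold 8.1 kA = 3x the 2.7 kA steady-state RMS.
fs = 4000; t = (0:1/fs:0.6-1/fs)';          % 0.6 s of samples at 4 kHz
Ipk = 2.7e3*sqrt(2);                        % peak giving 2.7 kA steady RMS
mag = Ipk*ones(size(t));
mag(t >= 0.2 & t < 0.4) = 5*Ipk;            % assumed fault magnitude (illustrative)
iLine = mag.*sin(2*pi*50*t);                % synthetic 50 Hz line current

win   = 160;                                % two cycles of 50 Hz
irms  = sqrt(movmean(iLine.^2, [win-1 0])); % causal sliding-window RMS
kTrip = find(irms > 8.1e3, 1);              % first crossing of 8.1 kA threshold
fprintf('Fault detected at t = %.4f s\n', (kTrip-1)/fs);
```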
Figure 15 shows the RMS value of the line current waveforms, including the threshold for identifying the steady state from the fault state.
As can be seen, with the Model 1 buffer, the IED was unable to detect the fault condition as the RMS value of the reconstructed signal does not cross the threshold value throughout the signal time span. In contrast, Model 3 shows clear detection of the fault from the steady-state condition. As discussed earlier, Model 2 shows the highest similarity to the original signal; however, due to higher latency and jitter, signal reconstruction and fault detection would be delayed when compared to Model 3.
3.3. Time Synchronization Process
In digital substations, achieving precise time synchronization is crucial to properly timestamp the data packets. Traditional methods such as the Network Time Protocol (NTP), Simple Network Time Protocol (SNTP), and one pulse per second (1PPS) have been utilized in the past. The IEC 61850 standard introduces an updated protocol capable of sub-microsecond precision. The Precision Time Protocol (PTP), outlined in IEEE 1588, employs a grand master clock (GMC), often a high-precision oscillator disciplined by GPS, as the reference for aligning the clocks of both publishers and subscribers.
The GMC first sends a synchronization (Sync) message containing the transmission timestamp, noted as $t_1$. Upon receiving this message, the device records its reception time according to its local clock as $t_2$. To measure the delay, the device sends a ‘Delay Request’ message back to the GMC, noting the transmission time as $t_3$. The GMC responds promptly with a ‘Delay Response’ message, which includes the timestamp $t_4$, representing the time when the ‘Delay Request’ message was received by the GMC. Figure 16 illustrates this method.
Once the messages are received and the values of $t_1$–$t_4$ are obtained, the devices can calculate the offset between the clocks using Equation (9):

$\theta = \dfrac{(t_2 - t_1) - (t_4 - t_3)}{2}$, (9)

where $\theta$ is the offset between the two clocks. By subtracting the offset from their clocks, the devices synchronize with the GMC. In this case, it is assumed that the uplink and downlink delays are symmetric and equal. However, in the case of a known asymmetry between the UL and DL delays, the 2002 version of the IEEE 1588 standard provides a revised offset formula, as shown in Equation (10) below:

$\theta = \dfrac{(t_2 - t_1) - (t_4 - t_3) - (d_{MS} - d_{SM})}{2}$, (10)

where $d_{MS}$ and $d_{SM}$ are the mean time delays for data transmission from the master device (GMC) to the slave device and from the slave device to the master device, respectively.
From the explanation, it is evident that the synchronization process is highly influenced by network data transfer delays, including propagation, queuing, and processing delays. The process also assumes either symmetrical delays in both directions or that the degree of asymmetry is known. Although wired networks with minimal traffic can achieve sub-microsecond accuracy, the variable latencies introduced by 5G communication technology result in unknown asymmetrical delays. Despite PTP synchronization occurring every second, achieving the desired level of accuracy is challenging. To validate this idea, a script was developed in MATLAB R2025a to simulate the synchronization process using the data transmission latency distribution profile of the 5G network, as discussed in previous sections, and to evaluate the resulting time accuracy.
Since the dataset has a resolution of 0.1 ms, the time accuracy is evaluated within a range of 0.1 to 1.0 ms. The script simulates the PTP synchronization process by first sending a ‘Sync’ message to the device, where the data packet’s latency is randomly selected from the latency dataset. The same random selection applies to the ‘Delay Request’ message, which is sent to the GMC after a fixed processing delay of 0.1 ms. Finally, upon receiving this request, the GMC promptly responds with a ‘Delay Response’ message, again with its latency randomly drawn from the dataset. The offset $\theta$ is calculated using Equation (9), and from it, the revised device clock is obtained. As the dataset has millions of data points, the synchronization process is performed a total of 2.5 million times; each time, the difference between the GMC and the revised device clock is calculated. If the difference falls within the specified time accuracy level, the synchronization is considered successful; otherwise, it is deemed a failure. The probability of successful synchronization is then determined from these results. A sketch of this simulation follows.
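A minimal MATLAB sketch of this Monte Carlo procedure is given below; the dataset file name, the true clock offset, and the vectorized bookkeeping are assumptions, while the offset estimate follows Equation (9):

```matlab
% Monte Carlo PTP simulation: per-message latencies are drawn from the
% empirical 5G dataset, and the residual clock error after the Equation (9)
% correction is compared with each required accuracy level.
lat     = readmatrix('telia_latency_ms.csv'); % hypothetical one-way latencies [ms]
nTrials = 2.5e6;                              % synchronization attempts
proc    = 0.1;                                % fixed processing delay [ms]
trueOff = 5.0;                                % assumed true clock offset [ms]

dMS = lat(randi(numel(lat), nTrials, 1));     % master->slave (Sync) delay
dSM = lat(randi(numel(lat), nTrials, 1));     % slave->master (Delay Request) delay

t1 = zeros(nTrials, 1);                       % Sync sent (master clock)
t2 = t1 + dMS + trueOff;                      % Sync received (slave clock)
t3 = t2 + proc;                               % Delay Request sent (slave clock)
t4 = t3 - trueOff + dSM;                      % Delay Request received (master clock)

offEst   = ((t2 - t1) - (t4 - t3)) / 2;       % Equation (9), symmetry assumed
errAfter = abs(trueOff - offEst);             % residual error = |dMS - dSM|/2

acc   = 0.1:0.1:1.0;                          % required accuracy levels [ms]
pSync = arrayfun(@(a) mean(errAfter <= a), acc);
plot(acc, pSync); grid on;
xlabel('Required accuracy [ms]'); ylabel('P(successful synchronization)');
```

Note that the residual error reduces to half the difference between the two randomly drawn one-way delays, which is exactly the asymmetry that Equation (9) cannot remove.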
Figure 17 shows the results obtained from the simulation.
The results show that the probability of successful time synchronization increases as the required time precision decreases. However, even at a time accuracy of 1.0 ms, the synchronization process cannot be guaranteed to succeed consistently. These findings clearly indicate that the PTP struggles to maintain its standards under the variable latency conditions introduced by non-optimized 5G communication, highlighting the weakness of 5G communication for precise device clock synchronization.
4. Conclusions
Fifth-generation (5G) wireless technology has gained traction in Sweden for its enhanced capabilities over previous generations, including high data rates, low latency, and support for massive connectivity. These features make it an attractive option to replace wired communication in digital substations that comply with IEC 61850 standards. This study identifies research gaps and offers a theoretical initial examination of the substation protection sub-functions most likely to be affected by 5G integration and underlines key challenges that must be addressed before 5G technology can be considered a viable replacement for Ethernet.
Latency measurements over a standalone 5G network with eMBB traffic revealed relatively high and inconsistent delay and jitter, resulting in trip times that were 38.78% longer compared to Ethernet. Achieving proper selectivity required larger coordination delays, which can compromise system responsiveness. Data packet transmission analysis showed a significantly lower in-sequence packet reception rate, with 5G demanding much larger buffers to maintain signal quality. This also increased the mean latency from 8.5 ms to 12.0 ms to support full packet reordering. Additionally, the PTP synchronization success rate on the 5G network was found to be unsatisfactory, with asymmetric delays preventing sub-millisecond accuracy.
Despite these limitations, several 5G features, such as network slicing, URLLC prioritization, QoS management, and edge computing, have the potential to mitigate performance issues and make 5G a more reliable wireless solution.
In conclusion, while 5G shows strong potential for future digital substation applications, according to the current theoretical analysis, its current SA network configuration under eMBB traffic is not yet suitable to replace Ethernet for protection system communication. This study contributes to the growing body of research by highlighting the specific protection sub-functions affected by 5G integration and offers valuable insights to guide future research, field testing, and implementation strategies.