1. Introduction
The accelerating transformation of the energy system has elevated electricity to the centre of global decarbonisation strategies. The European Green Deal [1] envisions “a pan-European integrated energy system that is low-carbon, secure, reliable, resilient, accessible, and cost-efficient”, capable of underpinning a climate-neutral economy by 2050. Policymakers converge on two mutually reinforcing levers: a rapid build-out of renewable energy and deep structural electrification of end uses. Accordingly, the European Union’s revised Renewable Energy Directive (RED III) [2] mandates that at least 42.5% of gross final energy consumption be supplied from renewable sources by 2030, with an aspirational target of 45%. Achieving this milestone translates into intense growth of wind and solar capacity, alongside accelerated grid expansion and digitalisation. A fundamental pillar for achieving the goals of the energy transition is therefore the integration of renewable energy, resting not only on new large plants but also on sustainable renewable self-consumption, which is expected to grow significantly, reaching nearly 40% of building electricity consumption by 2030 [3]. Concurrently, on the demand side, the other key dimension of the transition, the International Energy Agency (IEA) projects in its Net-Zero by 2050 roadmap that global electricity consumption will more than double between 2020 and 2050 even as total primary energy declines [4]. Heat pumps for buildings, electro-intensive industrial processes, and emerging actors such as electric vehicles and data centres account for the bulk of this growth.
Within this context, actors such as prosumers, who combine the electrification of consumption with the integration of renewable energy systems, particularly for self-consumption from small-scale distributed generators, are evolving and growing in number. In this framework, Local Energy Communities (LECs) represent advanced configurations of prosumer-based systems that integrate self-consumption and Distributed Energy Resources (DER), with models and roles as described in [5,6,7]. Under specific circumstances, and considering trade-offs between individual and collective objectives under network constraints [8], as well as the effects of peer-to-peer trading [9], LECs can not only maximise internal self-consumption but also export surplus electricity generated locally at low voltage (LV) to the upstream medium voltage (MV) distribution network through the point of common coupling. Consequently, LECs function as microgrids capable of either consuming or injecting energy into the grid depending on operating conditions, thereby transforming LV distribution networks from passive to active systems. The connection criteria for DER defined by CIGRE in [10] provide technical requirements for these scenarios. The inverter-dominated nature of DER-based LECs introduces stability and control challenges for microgrids, particularly regarding voltage and frequency regulation during intentional islanded operation (unintentional islands must be avoided for safety, according to IEEE 1547-2018 [11]). Addressing such challenges requires the development and deployment of advanced technologies, including grid-forming inverters [12], peak load shaving [13], power management control strategies [14,15], aggregation and optimal operation under grid constraints [16], and black-start capabilities [17], enabling stable operation in both grid-connected and islanded operating modes.
Scenarios with high penetration of DER-based LECs, driven by the European Green Deal and renewable energy targets, pose significant technical challenges for distribution networks, particularly given that conventional distribution systems were originally designed for unidirectional power flows (from substation to loads). Meeting these technical requirements necessitates that Distribution System Operators (DSOs) and Transmission System Operators (TSOs) adopt technological innovations that enhance network reliability, reinforcing their backbone role in the energy transition. Voltage regulation is among the prominent issues, as increased DER generation can cause voltage violations, particularly at feeder ends during low-load periods [18,19]. Furthermore, reverse power flows, which occur when local generation exceeds consumption, can potentially cause overloads and increased losses in the networks [20]. Likewise, the uneven distribution of DERs on LV feeders may lead to load imbalances and congestion, complicating operational and planning activities [21]. STATCOMs and other power electronics solutions enhance voltage stability through reactive power support, harmonic filtering, and bidirectional power flow control, reducing imbalances and DER curtailment [22,23]. Decentralised control with intelligent inverters increases resilience and scalability [24,25,26], while digital twins and predictive analytics enable active distribution management [27]. Demand-side flexibility complements these technologies, further improving the adaptability of active distribution networks [28]. Moreover, beyond the mentioned challenges, a critical factor influencing the reliability of distribution networks, and thus a major concern for DSOs, is their response to faults in the system. Reliability depends on the ability to detect, locate, and isolate faults, as well as on the speed and accuracy of supply restoration. Building on the framework outlined, the present work investigates the technical impacts of the high penetration of DER-based LECs on the effectiveness of fault management and service restoration processes, and consequently on the continuity of supply reliability.
Fault management in MV distribution networks ranges from conventional manual practices to advanced self-healing schemes, depending on their technological evolution. Fault management in conventional MV networks relies on local manual operation of disconnectors by field crews, who isolate faulted segments and then close tie switches to restore power to unaffected sections, a process that can typically take from several minutes up to several hours. Modern DSOs integrate Supervisory Control and Data Acquisition (SCADA) systems and fault-detection systems such as Fault Passage Indicators (FPIs) to locate faults and execute isolation and restoration operations remotely and automatically, significantly reducing outage duration to minutes [29]. Advanced Smart Grid technologies further enhance fault management through sophisticated algorithms and automation strategies [30], enabling self-healing capabilities and thereby improving network reliability and resilience [31].
The effectiveness of the fault management methodologies described depends critically on the behaviour and responsiveness of fault-detection and protection systems against short circuits in MV distribution networks. Traditionally, these protections have been designed and validated assuming fault current contributions flowing radially from High Voltage (HV) downstream into MV. However, in scenarios with increasing penetration of DER-based LECs, short circuits in MV networks can be supplied from both the HV and LV systems, owing to the stable integration of DER units into the LECs connected downstream (enabled by power electronics and control technologies such as those mentioned above). Consequently, fault detection and isolation/reconfiguration methods face considerable challenges in distribution grids with high DER penetration, as exemplified by the cases analysed in [32,33,34]. Addressing protection challenges is essential both for operational safety and for ensuring the continued reliability of electricity supply in the sustainable scenarios targeted by the energy transition.
The review of the technical literature reveals significant developments related to the operation and integration of DER units within microgrids, emphasising advancements in power electronics, communication systems, and control strategies (as highlighted by the references cited above). In addition, studies proposing innovative protection schemes for microgrids with substantial distributed generation components can also be found [35,36]. Nevertheless, there remains a notable gap in studies that explicitly address scenarios involving high penetration of sustainable LECs integrating renewable energy sources and that quantitatively assess, from the perspective of DSOs, their effects on the reliability of distribution networks under fault conditions. Consequently, with the aim of providing objective results in this area, this work investigates the potential impacts that large-scale integration of DER-based LECs embedded within LV networks may have on fault detection and location processes, as well as on service restoration and, ultimately, on reliability indices. The contribution of this work is twofold. First, a qualitative analysis is conducted through laboratory-based Hardware-in-the-Loop (HIL) experiments employing real distribution network protection equipment. These tests allow an evaluation of device performance in fault scenarios where the fault current contribution is supplied by renewable generation sources connected to LV networks. Building on these insights, quantitative assessments are performed using advanced simulation tools applied to a benchmark distribution network, representative of typical European systems, focusing on evaluating the sensitivity of widely used continuity-of-supply indicators (such as ASIDI and ENS) to the reliability of protection systems, under varying levels of automation in fault management processes. While previous works have generally concentrated on technical developments for Smart Grids, the novelty of this research lies in bridging laboratory validation and system-level simulation to provide an integrated reliability assessment. In particular, this study introduces a sensitivity analysis of system reliability under high DER penetration, explicitly accounting for the risk of misoperation in fault-detection elements, an aspect rarely addressed.
This work is part of a research line conducted by the authors, aimed at defining and analysing a wide range of scenarios aligned with the objectives of the energy transition, and evaluating their implications for the planning and operation of distribution networks from multiple perspectives. Thus, in the work presented in [37], the authors analyse the impact of renewable-based LECs on the operational parameters of distribution systems under electrification scenarios, highlighting significant improvements in voltage, load, and losses, particularly in European contexts. The study concluded that further analysis, including additional Smart Grid elements such as storage and electric vehicles (EVs), should be performed. Consequently, in [38] the authors assess the impact of EV deployment on LV networks based on sustainable LECs. By comparing scenarios with varying EV penetration levels, charging strategies, and management schemes, the study demonstrates the crucial role of Active System Management, such as Smart Grid Monitoring and Active Network Management tools, in enhancing DSO capabilities. The results underscore the importance of predictive, cooperative frameworks involving DSOs, aggregators, and LEC managers to mitigate grid stress and boost flexibility through Local Flexibility Markets. Building on these foundations, the present contribution expands the research scope by moving beyond normal operating conditions and active network management considerations to address the challenges posed by fault scenarios. This represents a novel dimension within the research line, as it constitutes the first work to explicitly investigate the effects of high LEC penetration on fault management processes and reliability metrics in distribution networks. In addition, this study introduces a methodological innovation by integrating results from HIL experiments into a probabilistic network simulation framework. The work not only extends the analytical perspective of previous studies but also provides new insights into the technical implications that may arise in the context of highly renewable, LEC-based distribution networks.
The structure of this paper is outlined as follows: Section 2 elaborates on the comprehensive methodology adopted for this research; Section 3 showcases the results and findings obtained; Section 4 engages in an in-depth discussion and examination of these findings; and Section 5 concludes the study by summarising key findings and proposing directions for future research.
2. Materials and Methods
The assessment of the impact of the proliferation of LV-connected DERs under the sustainable LEC framework on the reliability of MV distribution networks is grounded in sensitivity-based analyses of fault management processes. Specifically, the study evaluates the potential of advanced fault location, isolation, and service restoration strategies to enhance supply continuity and, subsequently, how the correct operation of fault-detection devices (particularly protection relays and FPIs) determines the reliability of such strategies, hence affecting MV network reliability indices. This includes examining how DER penetration affects the performance of these elements, and how potential detection failures influence overall supply continuity.
The methodology integrates laboratory testing of protection devices under DER injection conditions with system-level simulations on a benchmark MV network, enabling quantification of reliability impacts across a range of operational scenarios.
Figure 1 provides a macro-level overview of the research workflow, from problem definition to the integrated analysis of qualitative and quantitative results.
Section 2.1, Section 2.2, Section 2.3 and Section 2.4 provide detailed descriptions of the specific methodologies applied, the reliability indicators quantified, the benchmark system and scenarios considered, and the simulation methods implemented.
2.1. Introduction to DSO Operational Response to Faults in MV Networks
To evaluate reliability impacts, several paradigms of fault management in MV networks must be considered, evolving from conventional manual practices to advanced self-healing schemes.
In traditional manual operation, when a short circuit occurs on any section of an MV feeder, the fault management process is initiated by the operation of overcurrent protection devices (a circuit breaker operated by a protection relay or, in less sophisticated systems, a fuse), which isolates the entire affected feeder from the upstream power system, thus causing an initial outage for all downstream customers. If the circuit breaker is equipped with automatic reclosing functionality, a reclosing attempt follows, which, in the case of a transient fault, promptly restores service within seconds. However, if the fault is permanent, the reclosing attempt fails, the circuit breaker trips again, and the feeder remains de-energised, triggering the manual recovery sequence. Subsequently, the DSO’s operators dispatch field crews to patrol the line, locate the fault, and manually isolate the faulty segment by opening sectionalising switches or disconnectors installed in the secondary substations along the MV line. Since faults are rarely visible, locating them is a complex task for the field crews, involving coordination with the DSO control centre or feeder substation operators to carry out an iterative sequence of operations that identifies the faulty feeder section. Field crews progressively open selected disconnectors to split the feeder into smaller segments, after which the main feeder breaker is reclosed. If the overcurrent protection relay trips the feeder circuit breaker again, the fault is confirmed to lie within the re-energised segment; if no trip occurs, the fault lies downstream of the open disconnector. This iterative, trial-and-error procedure continues according to utility-specific heuristics (which may range from starting with the disconnectors closest to the source to opening those that disconnect half, a quarter, or smaller portions of the feeder load) until the faulted section is unambiguously identified. Once the faulted section is located and isolated, crews manually close normally open tie switches connecting with adjacent healthy feeders to restore supply to the unaffected segments of the feeder. This manual fault location, isolation, and service restoration process can take tens of minutes to hours, according to normal field response times.
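To make the sectionalising heuristic concrete, the following minimal sketch implements the halving variant mentioned above, assuming a radial feeder modelled as an ordered list of sections with an openable disconnector at the head of each; the reclose_trips callback is a hypothetical stand-in for the real-world test of reclosing the feeder breaker and observing whether the protection trips.

```python
# Minimal sketch of the iterative fault-location heuristic (halving variant),
# not a DSO tool: each test opens one disconnector and recloses the breaker.

def locate_faulted_section(sections, reclose_trips):
    """Binary-search style sectionalising over a de-energised radial feeder."""
    lo, hi = 0, len(sections) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        # Open the disconnector at the head of section mid+1, so that only
        # sections lo..mid are re-energised when the feeder breaker recloses.
        if reclose_trips(open_at=mid + 1):
            hi = mid        # breaker tripped: fault is within the live part
        else:
            lo = mid + 1    # no trip: fault lies beyond the open point
    return sections[lo]

# Example with a (simulated) permanent fault in section "S3":
feeder = ["S1", "S2", "S3", "S4", "S5"]
fault_index = 2
print(locate_faulted_section(feeder, lambda open_at: fault_index < open_at))
# -> "S3", found with two reclose attempts instead of patrolling the line
```

Each iteration halves the candidate span, which is why halving heuristics are attractive when crew travel time dominates the restoration process.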
The deployment of monitoring, automation, and communication technologies in MV networks allows the evolution and modernisation of the aforementioned operations and consequently accelerates the complete fault management process. In this line, CIGRE highlights coordination strategies for protection and automation in future networks in [39] and analyses and compares fault location systems in AC grids in [40]; in [41], FutuRed (the Spanish Technological Platform for Electrical Networks) defines fundamentals for functionality distribution in smart secondary substations; and in [42], IEEE establishes recommended practices for fault diagnosis and protection in smart distribution systems. The first step upon fault occurrence normally remains the operation of the feeder protection relay (commonly overcurrent), which trips the circuit breaker and de-energises the feeder (with rare exceptions in the case of intermediate automatic circuit breakers that may equip selected secondary substations). Once this primary isolation takes place, however, the implementation of Smart Grid technologies enables a far more advanced response. SCADA systems, using protocols such as IEC 60870-5-104 [43] and IEC 61850 [44] for main substations, secondary substations, and field device communication, allow DSO control room operators to remotely open or close circuit breakers, reclosers, and motorised switches or disconnectors for the subsequent location, isolation, and restoration process. Furthermore, specific technologies are designed and implemented for fault location, with FPIs being among the most widely used in advanced MV networks. FPIs are devices installed at selected points along MV feeders (typically at secondary substations), equipped with current (and occasionally voltage) sensors and implementing fault-detection functions (normally overcurrent-based), so that when the measured quantities exceed programmed thresholds, the FPI operates, displaying a visual signal or sending a remote signal via communication schemes. Thus, in a SCADA-based fault management scheme, when a feeder breaker trips, alarms and indications of fault location (via remote FPIs and feeder telemetry) are generated and received in the control centre. The operators can then send remote commands to isolate the faulted segment (by opening remote-controlled disconnectors identified thanks to the operation of FPIs) and to reconfigure the feeder (by closing a normally open tie switch to an adjacent feeder, if capacity allows). This SCADA-enabled remote operation can restore service to most customers within minutes, significantly faster than dispatching field crews as explained above.
Building on SCADA and related technologies, a more advanced approach to fault management in MV networks is automatic FLISR (Fault Location, Isolation, and Service Restoration), also known in the Smart Grid context as self-healing. Self-healing functionalities can be integrated into an Advanced Distribution Management System (ADMS), using automated logic to detect faults, isolate the faulted section, and re-energise the healthy sections of the feeder, thereby reducing human intervention. The sequence of events executed by an ADMS for self-healing purposes is based on the steps described above, implemented and executed under an automated framework, decreasing the need for action by control centre operators. Such a level of automation relies on distributed intelligence and robust communications, using a mix of substation SCADA links and peer-to-peer device messaging, to coordinate the sequence of operations.
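The FLISR sequence just described can be summarised in pseudocode form. The sketch below is a schematic illustration, not any vendor’s ADMS logic; it assumes a radial feeder represented as an ordered list of sections, one latching FPI at the head of each section, and a normally open tie switch at the feeder end.

```python
# Schematic FLISR sketch: locate the fault from latched FPI flags, isolate
# the faulted section, and restore the healthy portions of the feeder.
from dataclasses import dataclass, field

@dataclass
class Feeder:
    fpi_latched: list          # fpi_latched[i]: FPI at head of section i
    tie_capacity_ok: bool      # can the adjacent feeder pick up the load?
    open_points: set = field(default_factory=set)

def flisr(feeder: Feeder) -> dict:
    # 1. Locate: with radial fault current from the HV source, every FPI
    #    upstream of the fault latches, so the fault sits just past the
    #    last latched indicator.
    latched = [i for i, seen in enumerate(feeder.fpi_latched) if seen]
    faulted = latched[-1] if latched else 0
    # 2. Isolate: open the disconnectors on both sides of the faulted section.
    feeder.open_points.update({faulted, faulted + 1})
    # 3. Restore: reclose the feeder breaker for the upstream sections and,
    #    if loading allows, close the tie switch for the downstream ones.
    return {"faulted_section": faulted,
            "reclose_feeder_breaker": True,
            "close_tie_switch": feeder.tie_capacity_ok}

# Fault in section 2 of a five-section feeder: FPIs 0-2 latched, 3-4 quiet.
print(flisr(Feeder([True, True, True, False, False], tie_capacity_ok=True)))
# -> {'faulted_section': 2, 'reclose_feeder_breaker': True, 'close_tie_switch': True}
```

Step 1 is precisely where the spurious FPI indications discussed in Section 2.2 become dangerous: an FPI latched by DER back-feed for an upstream fault shifts the located section, so the algorithm isolates healthy line instead of the fault.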
Figure 2 presents a simplified single-line diagram of a distribution feeder that conceptually illustrates the principal components discussed in the preceding procedures, including the primary and secondary substations, protection relays with their associated circuit breakers, sectionalising switches or disconnectors, and FPIs.
Figure 3 presents a flowchart summarising the manual and automated procedures described in Section 2.1.
2.2. The Role of Fault-Detection Devices: Qualitative Laboratory-Scale Analysis Methodology
As described in Section 2.1, the performance of fault-detection devices constitutes a cornerstone of fault management and service restoration in MV distribution networks. Timely and accurate actuation of protection relays and FPIs determines whether the fault is detected and cleared promptly, the affected line section is precisely located, and, consequently, supply is swiftly restored to the healthy portions of the feeder. In fact, the higher the degree of FLISR automation, the more critical the correct performance of these devices becomes.
The behaviour of fault-detection devices in MV feeders can be influenced by DER connected at the LV level capable of back-feeding into the MV network, which is expected to become progressively common due to the increased capabilities of the power electronics associated with their interconnection to the network, including grid-forming functions, as described in the introduction. When these DERs contribute short-circuit current (whether to a fault on the supplying feeder or even upstream towards the primary substation), the magnitude, direction, and dynamic profiles of fault currents observed by protection relays and FPIs can be altered. The resulting impact varies with the relative position of the fault and with the technology-specific fault-current behaviour of the DER units. The underlying complexity stems from two factors. On the one hand, the traditional radial paradigm of MV and LV systems evolves into multi-source scenarios that feature intermediate generation nodes; on the other hand, most new DERs are inverter-interfaced renewable sources, whose electronically limited and highly transient fault response differs remarkably from the sustained contributions of synchronous machines for which legacy protection devices were designed [45,46]. Next, a non-exhaustive set of examples of potential failure situations is summarised conceptually.
Under high penetration of DERs in the LV networks fed by an MV feeder, a short circuit located upstream of the feeder (whether at the primary substation or on an adjacent feeder) can be fed by DERs that back-feed through their secondary substations. The component of fault current flowing from the feeder towards the substation may exceed the pick-up setting of the feeder overcurrent relay, thereby precipitating a spurious trip of the circuit breaker, interrupting supply to all downstream customers, and disrupting the fault management sequences described in Section 2.1.
Although directional discrimination would in principle prevent such maloperation, most MV feeders are still protected by non-directional relays because radial operation presumes unidirectional fault currents from the HV grid towards the MV and LV systems. Furthermore, as discussed in Section 3.1, the dynamics of the fault contributions characteristic of inverter-interfaced DERs challenge the reliability of conventional directional algorithms.
Conventional FPIs are currently designed for radial MV feeders under the assumption that, in the event of a fault on MV lines, current flows from the HV source. In scenarios with significant DER penetration at the LV level, a short circuit on an MV feeder can be supplied by such generation through the secondary substations connected downstream of the faulted MV section. The resulting current could therefore flow upstream along the feeder through each FPI, in the opposite direction to that assumed in their setting philosophy. If the measured current exceeds its pickup threshold, an FPI will latch and report a fault on its own feeder segment even though the actual fault is located upstream. Such spurious indications can mislead control room operators or the autonomous FLISR algorithms described in Section 2.1, causing unnecessary isolation of healthy line sections and delaying service restoration.
Figure 4 shows an example of this type of maloperation. The scheme presents the effect of fault contributions from the HV network and DER units (represented by I_sc) on FPIs, considering the distribution feeder scheme shown in Figure 2. Directional or polarity-sensitive FPIs could mitigate this mode of maloperation.
Secondary Substation Protection Systems Maloperation: Undetected Reverse Fault Contribution from DER.
In a conventional radial scheme, the protections installed in secondary substations (typically distribution transformer circuit breakers, LV circuit breakers, or fuses) assume that, under faults on the MV side, an upstream trip of the MV feeder will automatically de-energise the transformer and disconnect the LV bus. However, with high penetration of DERs connected to that LV bus, the scenario can be remarkably different. Under short circuits on an MV feeder, such inverter-interfaced renewable sources (the typical DER configuration) contribute a reduced fault magnitude (far lower than that expected when faults on the LV side are supplied by the HV network), and therefore the local protection in the secondary substation will not trip. The result is that the secondary substation remains connected and, consequently, DER units continue exporting power and feeding the faulted MV line.
This situation has three critical consequences. One is related to safety, since the supposedly isolated MV line remains live, presenting a lethal hazard to field crews dispatched to locate and repair the fault. The second consequence affects reclose coordination, given that automatic or manual reclosing initiated at the primary substation expects a de-energised line; closing onto an already energised, frequency- and phase-uncontrolled island can impose severe synchronisation stresses, damage equipment, or re-ignite the original fault. Lastly, it jeopardises fault-management integrity, because FLISR or self-healing algorithms interpret the feeder as cleared and proceed with network reconfiguration; persisting energisation from the DER masks the true fault status, delays accurate fault location, and may provoke further maloperations.
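A back-of-the-envelope check illustrates the magnitude mismatch behind this failure mode. The figures below are generic assumptions adopted purely for illustration (an aggregated 400 kVA of LEC inverters, a fault-current limit of roughly 1.2 times rated current, which is typical of inverter-interfaced sources, and a pickup sized for grid-fed faults); they are not data from the case study.

```python
# Illustrative arithmetic: why secondary-substation overcurrent protection
# does not see the DER back-feed as a fault. All values are assumptions.
import math

S_DER_KVA = 400.0        # aggregated LEC inverter rating (assumed)
V_LL = 400.0             # LV line-to-line voltage, volts
I_RATED = S_DER_KVA * 1e3 / (math.sqrt(3) * V_LL)   # ~577 A rated current

I_FAULT_DER = 1.2 * I_RATED    # electronically limited inverter contribution
I_PICKUP = 5.0 * I_RATED       # pickup sized for grid-fed LV faults (assumed)

print(f"DER contribution ~{I_FAULT_DER:.0f} A vs pickup {I_PICKUP:.0f} A")
# -> roughly 693 A against a ~2887 A pickup: the breaker never operates,
#    so the LV bus stays energised and keeps feeding the MV fault.
```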
Many of the maloperation situations described above could be alleviated by implementing robust directional discrimination functions in MV network fault-detection devices. However, the fault current signatures provided by inverter-interfaced renewable DERs, which are limited in magnitude, rapidly decaying, and phase-controlled, challenge the polarisation algorithms of conventional directional functions. Part of the qualitative analysis carried out in this work, with the results presented in Section 3.1, is the assessment of such limitations, for which commercial protection devices are subjected to laboratory testing using HIL techniques on an RTDS real-time digital simulator [47]. The HIL configuration allows the laboratory real-time integration of the power system model (representing any desired case study), signal-conditioning amplifiers, and real devices under test. This arrangement permits laboratory assessment of the behaviour of the relay as though it were installed in a live network. Thus, the network to which the device under test would be connected is modelled in RTDS, which runs the electromagnetic transient (EMT) simulation in real time, delivering analogue current and voltage waveforms to the protection relay through signal amplifiers, while the relay’s digital outputs (mainly trip signals) are fed back to the RTDS, enabling a detailed evaluation of its performance under controlled fault conditions.
Figure 5 presents the laboratory setup used for the present study. In the analysis performed, the system modelled in RTDS represents the simplified network presented in Figure 2 and Figure 4, with special focus on the equivalent of the LV network, including the aggregated PV-based LECs connected downstream of a secondary substation, and explicitly models the equivalent of the inverter-interfaced PV generation with its grid-coupling converter. This model reproduces the LV contribution to short-circuit events initiated on the MV feeder, with the RTDS simulation generating analogue current and voltage signals associated with the simulated faults. Since the RTDS low-level analogue outputs are limited to ±10 V, a power amplifier is required to scale them to the secondary levels of voltage and current expected by the relay (equivalent to those typically found in secondary substations). For this study, an F650 Doble amplifier (Doble Engineering Company, Marlborough, MA, USA) is used, providing six current and voltage channels, with maximum output ratings of 300 Vrms and 70 Arms. Once amplified, the current and voltage signals are supplied to the protection relay, which processes the measurements to evaluate the operation of the fault-detection function (in Figure 5, a generic relay is shown to maintain confidentiality of the specific relay model used for the tests). If a fault is detected, the relay generates a trip signal, closing a digital output, which is connected to a digital input of the GTFPI (Gigabit-Transceiver Front Panel Interface) card of the RTDS. This card supports up to 16 digital inputs and is connected to the circuit breaker within the RTDS simulation, enabling its tripping in the event of a detected fault.
The device under test is a commercial fault-detection device (acting as an FPI) installed in an intermediate secondary substation situated between the primary substation and the secondary substation modelled in detail. The protection function of the relay selected to evaluate the operation of the FPI is the directional overcurrent function, with the following settings:
Figure 6 presents the operating characteristic of the directional function implemented in the device under test.
By imposing fault conditions analogous to those depicted in Figure 4 on the simulated MV feeder, the experimental setup enables a qualitative evaluation of the FPI’s performance under the influence of DER contribution. To examine the directional fault identification capability of the FPI, two different fault scenarios are considered. The first scenario corresponds to faults located downstream of the relay (forward fault), where the fault current observed by the relay is supplied by the external grid, thus representing radial grid operation. In contrast, the second scenario involves faults occurring upstream of the relay location (backward fault), where the fault current experienced by the relay is supplied entirely (100%) by renewable sources.
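For readers unfamiliar with directional elements, the following simplified sketch shows one common way a voltage-polarised directional overcurrent function classifies faults, and why inverter-fed backward faults are problematic. The characteristic angle, zone width, and pickup are illustrative assumptions, not the settings of the confidential relay under test.

```python
# Simplified voltage-polarised directional overcurrent decision. All
# thresholds are illustrative assumptions for this sketch.
import cmath
import math

RCA = math.radians(45)      # relay characteristic angle (assumed)
LIMIT = math.radians(88)    # half-width of the forward operating zone
PICKUP_A = 200.0            # overcurrent pickup in primary amperes (assumed)

def directional_overcurrent(v_pol: complex, i_fault: complex) -> str:
    """Classify a measured fault as 'forward', 'backward', or 'no pickup'."""
    if abs(i_fault) < PICKUP_A:
        return "no pickup"              # magnitude check comes first
    # Angle of the current with respect to the polarising voltage, compensated
    # by the characteristic angle and wrapped to (-pi, pi]:
    angle = cmath.phase(i_fault / v_pol) + RCA
    angle = math.atan2(math.sin(angle), math.cos(angle))
    return "forward" if abs(angle) <= LIMIT else "backward"

# Grid-fed forward fault: large and inductive (current lags voltage ~45 deg).
print(directional_overcurrent(1.0, 800 * cmath.exp(-1j * math.radians(45))))
# -> 'forward'
# Inverter-fed backward fault: magnitude-limited and phase-controlled, so it
# may never reach pickup (as here) or sit near the zone boundary.
print(directional_overcurrent(1.0, 180 * cmath.exp(1j * math.radians(170))))
# -> 'no pickup'
```

The second call reflects the core difficulty examined in Section 3.1: the limited, phase-controlled inverter current either fails the magnitude check or falls near the directional boundary, making the forward/backward decision unreliable.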
2.3. Quantitative System-Level Reliability Analysis Methodology
The assessment of the reliability of a distribution network relies on probabilistic analyses that estimate, over an operating period, the expected frequencies and durations of supply interruptions, as well as the associated energy not supplied, under disturbances in the network. In the present work, such assessments are performed by comparing multiple scenarios considering different fault-management strategies (as detailed in Section 2.1), fault-occurrence probabilities, and reliability rates of fault-detection devices, reflecting the challenges discussed in Section 2.2.
Quantitative evaluation is performed using simulation and programming techniques that automate the evaluation of the range of scenarios analysed and build a results database, which allows the extraction of conclusions on the influence of FLISR technologies on reliability and their dependence on fault-detection devices and high-DER penetration.
2.3.1. Reliability Indicators
The quantitative reliability assessment is based on the calculation of the standard supply continuity indicators defined in the IEEE Guide for Electric Power Distribution Reliability Indices [48]. For each scenario described in Section 2.3.3, these key performance indicators (KPIs) are computed and compared to evaluate the network reliability.
In particular, the following KPIs are analysed in this work: the Average System Interruption Duration Index (ASIDI), the Average System Interruption Frequency Index (ASIFI), and the Energy Not Supplied (ENS).
It is important to clarify that load-based indices are used in this study instead of customer-oriented metrics (such as SAIDI and SAIFI) because they better reflect system performance under MV fault conditions. This is due to the fact that, in the benchmark network used for the analysis (described in Section 2.3.2), all customers connected to a given secondary substation are aggregated into a single equivalent load; consequently, power demand is a more meaningful parameter than the number of customers.
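As a worked illustration of these load-based indices (following their IEEE 1366 definitions, with made-up numbers rather than results from this study), consider a small interruption log:

```python
# Load-based indices per IEEE 1366, computed from an illustrative log.
# L_i: connected load interrupted (kVA); r_i: restoration time (h);
# L_T: total connected load served. All numbers are made up.

interruptions = [        # (L_i in kVA, r_i in hours)
    (800.0, 1.5),
    (350.0, 0.4),
]
L_T = 5_000.0            # total connected kVA served by the system
DEMAND_FACTOR = 0.9      # assumed kW of demand per kVA connected

asifi = sum(L for L, _ in interruptions) / L_T              # interruptions/year
asidi = sum(L * r for L, r in interruptions) / L_T          # hours/year
ens = sum(L * DEMAND_FACTOR * r for L, r in interruptions)  # kWh/year

print(f"ASIFI = {asifi:.3f}, ASIDI = {asidi:.3f} h, ENS = {ens:.0f} kWh")
# -> ASIFI = 0.230, ASIDI = 0.268 h, ENS = 1206 kWh
```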
2.3.2. Benchmark System
Figure 7 presents the benchmark system used as the testbench for the studies carried out in this work.
This benchmark network is based on the common reference platform developed by the CIGRE Study Committee C6 (Active distribution systems and distributed energy resources). This platform provides test networks that capture the integration challenges associated with a variety of network topologies. The present work adopts the European configuration of the MV Distribution Network Benchmark described in [49], including network topology and comprehensive datasets encompassing line characteristics, transformer specifications, generation capacities, and load demands with their profiles (the total load served by this benchmark network, considering these data, is 25,269 MWh per year). These datasets enable rigorous construction and detailed modelling of the network within the dedicated simulation environment presented in Section 2.3.4. Figure 7 illustrates the single-line diagram of the network, comprising the point of interconnection with the HV sub-transmission network, two HV/MV transformers, and two independent MV feeders that supply the secondary substations represented as buses in the diagram. Arrows at each node indicate the aggregated demand supplied by the secondary substation connected to that bus.
Along with the network topology and detailed design parameters of each component, the benchmark network modelled incorporates element-specific annual fault rates and mean time to repair (MTTR), thereby providing the probabilistic inputs required for the reliability assessment conducted in this study. As explained in Section 2.3.3, the range of fault rates is one of the factors considered in the definition of the scenarios analysed in this work.
Based on an extensive literature review conducted by EPRI, which examines papers, reports, and books on the reliability of electric power distribution system components and is consolidated in the white paper A Review of the Reliability of Electric Distribution System Components [50], four fault-rate categories have been modelled for each element of the system: low, lower-medium, upper-medium, and high. The low and high fault rates correspond to the extreme values reported by EPRI, which are interpreted as the bounds of a normal distribution at ±3σ from the mean. The two intermediate categories (lower-medium and upper-medium) are derived by shifting the mean value by ±1σ, thus representing fault-rate scenarios under less extreme operating conditions. The source of the fault-rate and repair-time data (the EPRI white paper) compiles and harmonises reliability data from multiple utilities, together with an extensive survey of 191 technical publications, resulting in a statistically robust reliability data library. The reported values represent simple failure rates and average repair times for the mid-life operating region of components (excluding infant mortality and wear-out effects), which aligns with the conditions most relevant for system-level reliability assessments. On this basis, this dataset is widely recognised as a benchmark for both research and planning studies; therefore, the present study ensures that the simulated continuity-of-supply indices are grounded on credible, industry-accepted data.
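The category construction reduces to simple arithmetic, sketched below with placeholder numbers (not the values in Table 1): the EPRI extremes are read as the mean plus or minus 3σ, and the intermediate categories as the mean plus or minus 1σ.

```python
# Fault-rate categories from EPRI low/high bounds, treated as mean -/+ 3 sigma.
# The example bound values are placeholders, not data from Table 1.

low, high = 0.02, 0.14            # faults/year for some component (illustrative)
mean = (low + high) / 2           # 0.08
sigma = (high - low) / 6          # one sixth of the 6-sigma span -> 0.02

categories = {
    "low": low,                   # mean - 3 sigma
    "lower-medium": mean - sigma, # mean - 1 sigma -> 0.06
    "upper-medium": mean + sigma, # mean + 1 sigma -> 0.10
    "high": high,                 # mean + 3 sigma
}
print(categories)
```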
Table 1 shows the fault rates assigned in the present study.
Table 2 summarises the repair times considered for each element of the benchmark network, defined on the basis of the values provided in Annex B of the referenced EPRI white paper.
The benchmark system shown in Figure 7 includes sectionalising disconnectors installed at the end of every MV line section, interconnecting each secondary substation (represented as a bus in the single-line diagram) with the MV feeder. Different disconnector typologies have been modelled in the simulation platform according to the DSO methodologies for fault management explained in Section 2.1. Thus, each disconnector type can be distinguished by its associated fault-detection capabilities, considering whether its secondary substation is equipped with fault-detection devices (FPIs) or not, as well as by its operation mode and, therefore, its range of actuation times (considering whether the operation is manual or automated). Since manual operating times depend on factors such as fault location and crew mobilisation, each disconnector is assigned a range of operating times rather than a single deterministic value (ranges obtained from anonymised data based on real experiences shared by DSOs collaborating with the authors in different projects). As explained in Section 2.3.4, a specific operating time is randomly sampled from the prescribed range for each simulation. The combination of the different types of disconnectors and the associated operation methodology under fault situations forms the scenarios analysed in the present study, which are presented in Section 2.3.3.
Table 3 summarises the types of disconnectors considered.
2.3.3. Scenarios
The present work evaluates a matrix of scenarios that combines the factors defined in Section 2.1, Section 2.2 and Section 2.3.2, regarding levels of MV network automation and annual fault rates, with the aim of quantifying how successive levels of automation (in coherence with the procedures described in Section 2.1) affect continuity of supply indices. Thus, a wide spectrum of network automation levels is defined, ranging from fully manual to fully automatic FLISR, including a series of hybrid scenarios in which automatic disconnectors (with FPI) are progressively incorporated. In this way, the study covers the full set of automation configurations applicable to the benchmark system under consideration. Furthermore, in order to explore their impact on supply continuity indices, different levels of reliability of fault-detection devices are considered, which can be affected by the increased penetration of DERs integrated within the LECs connected at LV level. A reliability range from 100% (fully reliable operation) down to 50% is considered in order to encompass best-case and severely unreliable scenarios. The lower bound of 50% represents a hypothetical worst case, where only half of all faults are correctly identified by fault-detection devices. Although this is an extreme assumption, it is deliberately adopted to stress-test the system. In conventional distribution networks with low DER penetration, protection-device reliability is typically high; however, as discussed in Section 2.2, increasing DER penetration may significantly affect this performance. For this reason, the study examines scenarios with reliability values as low as 50%, providing insight into the potential upper bound of adverse impacts on supply continuity indices if protection performance deteriorates. By considering a broad range of values for both the level of automation and the reliability of fault-detection devices, the analysis encompasses extreme operating conditions and thereby ensures the robustness of the results.
Table 4 summarises the scenarios defined for the assessment. For hybrid scenarios, two combinations of manual and automated disconnectors have been defined (optimal-case and worst-case), as explained in Section 2.3.4.
2.3.4. Simulation Methods
The continuity-of-supply indicators described in Section 2.3.1 have been calculated from results obtained through power-system simulations, focused on reliability and restoration assessment, performed using the benchmark network presented in Figure 7. The simulations are based on automated, probabilistic contingency evaluations in which component failures and outages are modelled considering the fault rates defined in Table 1 and the repair-time distributions indicated in Table 2.
For each fault simulated, the algorithm implemented in the software tool computes the system response and its subsequent restoration sequence. Thus, the simulation performs power-flow calculations to establish pre-fault and post-fault operating points and emulates the subsequent fault management actions by applying routines that select the admissible switching operations to restore supply while respecting loading and voltage limits. To this end, the different fault-management procedures described in Section 2.1 and the corresponding combinations of fault-detection equipment and disconnector types presented in Table 3 have been modelled. For each simulation, the energy and duration of each customer interruption are recorded, and the selected reliability indices are subsequently quantified for each scenario under study.
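Conceptually, the restoration routine can be sketched as a feasibility filter over candidate switching plans; the run_power_flow callback below is a hypothetical stand-in for the power-flow calculation invoked in the simulation tool, and the limits are assumed planning values rather than parameters taken from the study.

```python
# Conceptual sketch of the restoration step: keep only switching plans whose
# post-switching power flow respects voltage and loading limits.

V_MIN, V_MAX = 0.95, 1.05      # per-unit voltage band (assumed limits)
LOADING_MAX = 100.0            # maximum thermal loading, percent

def admissible_restorations(candidate_plans, run_power_flow):
    """Filter switching plans that restore supply without violating limits."""
    feasible = []
    for plan in candidate_plans:
        result = run_power_flow(plan)   # returns bus voltages and loadings
        if (all(V_MIN <= v <= V_MAX for v in result["voltages_pu"])
                and max(result["loadings_pct"]) <= LOADING_MAX):
            feasible.append(plan)
    return feasible
```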
The software platform selected to conduct such analyses is DIgSILENT PowerFactory (version 2024 SP1) [51], in which the elements described in Section 2.3.2, considering the scenarios explained in Section 2.3.3, have been modelled. Under this framework, Python (version 3.12) scripts have been developed which, interfaced with the PowerFactory API, automate simulation execution, post-processing, and evaluation of the results.
For each category (low, lower-medium, upper-medium, and high), the scripts assign the corresponding fault rate and repair duration to each element of the network by applying the probabilistic values derived from the EPRI review explained in Section 2.3.2. These assignments are deterministic within a given category: A scenario is defined by the chosen category and the set of automated/manual devices.
To capture the operational variability associated with field intervention, any scenario that includes manual disconnectors is repeated 50 times. In each repetition, an operation time for every manual disconnector is drawn independently from a uniform distribution U(20, 90) minutes. The reported reliability indicators for that scenario are the averages across the 50 repetitions. This repeated-sampling procedure ensures that results do not depend on a single arbitrary choice of manual operation times while remaining computationally manageable.
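The repeated-sampling loop can be summarised as follows; this is a minimal sketch in which run_reliability_case is a hypothetical stand-in for one PowerFactory reliability assessment driven through its API.

```python
# Sketch of the 50-repetition Monte Carlo averaging of manual operating times.
import random

N_REPETITIONS = 50
T_MIN, T_MAX = 20.0, 90.0      # manual operating-time range U(20, 90) minutes

def evaluate_scenario(manual_disconnectors, run_reliability_case):
    """Average reliability KPIs over independent draws of manual times."""
    totals = {"ASIDI": 0.0, "ASIFI": 0.0, "ENS": 0.0}
    for _ in range(N_REPETITIONS):
        # Independent uniform draw for every manual disconnector.
        times = {d: random.uniform(T_MIN, T_MAX) for d in manual_disconnectors}
        kpis = run_reliability_case(times)    # one reliability simulation
        for name in totals:
            totals[name] += kpis[name]
    return {name: total / N_REPETITIONS for name, total in totals.items()}
```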
For hybrid scenarios, a brute-force algorithm is implemented to systematically evaluate the influence of the location of automation elements. Two search objectives are implemented:
Optimal configuration: Exhaustively evaluate the feasible automation combinations and select the set that minimises the ASIDI.
Worst-case configuration: Similarly, the configuration with the highest ASIDI is identified.
To reduce the computational load of the brute-force evaluation, the algorithm is initially executed assuming a fixed operation time of 60 min for manual disconnectors. Once the target configuration is identified, the simulation is repeated 50 times with variable operation times (randomly assigned within the defined range) to obtain statistically representative results.
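The brute-force placement search reduces to an enumeration of automation subsets, sketched below for the screening stage with the fixed 60 min manual time described above; asidi_for is a hypothetical evaluator wrapping one simulation run.

```python
# Sketch of the brute-force search over automated-disconnector placements.
from itertools import combinations

def search_hybrid_configurations(disconnectors, n_automated, asidi_for):
    """Return (optimal, worst) placements by ASIDI for one automation level."""
    best = worst = None
    for placement in combinations(disconnectors, n_automated):
        score = asidi_for(set(placement), manual_time_min=60)  # screening stage
        if best is None or score < best[1]:
            best = (placement, score)
        if worst is None or score > worst[1]:
            worst = (placement, score)
    # The returned configurations are then re-simulated 50 times with random
    # manual operating times, as described above.
    return best, worst
```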
PowerFactory performs system contingency and protection-behaviour simulations, while Python scripts manage parameter assignment, stochastic sampling, job submission to PowerFactory, post-processing of interruption data, computation of reliability indices, and brute-force combinatorial search. This separation ensures the physical accuracy of protection/fault behaviour (PowerFactory) while keeping the statistical and optimisation logic transparent and reproducible (Python).
2.4. Integration of Laboratory-Scale and System-Level Analyses
Section 2.2 and the results shown in
Section 3.1 explain, analytically and experimentally, that DER connected at LV can weaken the dependability of MV protection relays and FPIs under faults on MV feeders. To estimate quantitatively the resulting impacts, and thereby integrate the qualitative and quantitative studies conducted, a sensitivity analysis is undertaken in which the continuity-of-supply KPIs calculated as explained above are challenged across a spectrum of fault-detection systems reliability values, emulating progressively higher DER penetration.
Equations (3)–(5) define the calculation method used to incorporate the dependence of supply continuity KPIs on the reliability of MV fault-detection devices. The method considers that, under maloperation of fault-detection systems (with failure modes such as those explained in Section 2.2 and Section 3.1), the resulting fault-management procedure will be equivalent to manual operation:

ASIDI = R_Fault-detection × ASIDI_Ideal + (1 − R_Fault-detection) × ASIDI_Manual (3)

ASIFI = R_Fault-detection × ASIFI_Ideal + (1 − R_Fault-detection) × ASIFI_Manual (4)

ENS = R_Fault-detection × ENS_Ideal + (1 − R_Fault-detection) × ENS_Manual (5)

where ASIDI_Ideal, ASIFI_Ideal, and ENS_Ideal represent the values obtained for each scenario under ideal operation of MV fault-detection systems (presented in Section 3.2.2 and Section 3.2.3); ASIDI_Manual, ASIFI_Manual, and ENS_Manual are the KPI results for manual operation (shown in Section 3.2.1); and R_Fault-detection is the reliability value considered for MV fault-detection systems.
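In code, the blend of Equations (3)–(5) is a one-liner; the sample values below reuse the high fault-rate ASIDI figures reported in the Discussion (about 16 h/year for manual operation and 1.73 h/year for fully automated FLISR) purely for illustration.

```python
# Effective KPI under imperfect fault detection, per Equations (3)-(5):
# a fraction (1 - r) of faults falls back to the manual procedure.

def effective_kpi(kpi_ideal: float, kpi_manual: float, r: float) -> float:
    return r * kpi_ideal + (1.0 - r) * kpi_manual

for r in (1.0, 0.9, 0.75, 0.5):
    asidi = effective_kpi(1.73, 16.0, r)   # h/year, high fault-rate example
    print(f"R = {r:4.2f} -> effective ASIDI = {asidi:5.2f} h/year")
# Even a 10% loss of detection reliability adds ~1.4 h/year to ASIDI,
# eroding a large share of the automation benefit.
```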
The resulting datasets, presented in Section 3.2.4, establish the maximum reliability improvements that can be obtained with higher levels of FLISR automation and illustrate how these enhancements can be threatened when the performance of the fault-detection elements is affected by the high penetration of DER-based LECs in downstream LV networks.
2.5. Limits of the Study
As outlined above, the central aim of this study is to analyse how scenarios with a strong presence of DER-based sustainable LECs integrated into LV networks may affect the performance of fault detection, fault location, and service restoration, and how these effects are ultimately reflected in reliability indices. This specific focus requires setting boundaries for the analysis, thereby excluding from the scope of the study certain aspects that will be explored in subsequent works. The main limitations of the present study are listed below.
The simulations performed to obtain quantitative results for supply continuity indicators are based on a benchmark network representative of European distribution systems. The use of benchmark systems is a standard approach for both system operators and planners, as well as for research projects. These models capture many of the characteristics of actual distribution systems; however, complementary studies should be carried out to replicate the analysis in additional regions or to zoom in on a specific real-life distribution system.
Failure rates considered for the different components of the benchmark network are based, as explained in Section 2.3.2, on statistical data collected from multiple utilities and reported by EPRI in the white paper A Review of the Reliability of Electric Distribution System Components. While these values provide representative inputs, further studies should be carried out if more specific data on annual fault rates become available for a given real network.
The simulations for quantitative system-level assessments incorporate fault-management strategies based on the criteria explained in Section 2.1, reflecting common practices of DSOs in manual and automated scenarios. These studies could be replicated or extended if new system-restoration procedures need to be evaluated.
The qualitative assessment conducted in the RTDS laboratory (setup described in Section 2.2) uses a representative commercial relay widely applied in distribution networks (the model and manufacturer are not disclosed for confidentiality reasons). While a comprehensive comparison of relays from different manufacturers lies beyond the scope of the present work, the applied methodology is replicable for further experimental testing.
The qualitative laboratory analysis aims to verify the possibility of fault-detection device malfunction under the short-circuit contribution of distributed renewable generation units. Based on this validation, a sensitivity analysis is performed by considering a broad range of values for the reliability of fault-detection devices (i.e., probabilities of failure, once this possibility has been confirmed in the laboratory), thereby ensuring the robustness of the results, as described in the scenarios explained in Section 2.3.3. However, a detailed quantitative evaluation of protection-device failure probabilities is beyond the scope of this paper and constitutes a parallel line of research addressed in a dedicated work. In this regard, a complementary research line pursued by the authors provides detailed statistical evaluation: in related experimental work with four commercial relays, misoperation rates of up to 25% have been observed depending on manufacturer and fault type, underscoring the variability of protection performance under high DER penetration and further justifying the inclusion of the sensitivity analysis, with its extreme scenarios, in the present study.
The findings of the study, presented in Section 3 and Section 4, provide insights into the technical performance of distribution systems under high DER-based LEC penetration, thereby serving as valuable input for planning processes. Moreover, the Discussion section includes a non-exhaustive list of potential technical developments that could contribute to enhancing overall system performance. A comprehensive feasibility assessment of deploying such new technologies, along with an economic evaluation of the scenarios, is outside the scope of this work and is considered a potential continuation in future studies. Preliminary qualitative considerations are, however, included in the Discussion section.
4. Discussion
The experimental and simulation findings obtained in the present research highlight both the substantial potential of advanced fault-location methods and network automation technologies to enhance supply reliability and the challenges associated with the extensive integration of DER-based LECs within distribution networks.
Quantitative system-level assessments reinforce the benefits of implementing automated FLISR strategies when correct operation of fault-detection systems is assumed. The results indicate general enhancements in the values obtained for the different KPIs analysed, especially for ASIDI and ENS, as fault-management procedures evolve from manual operation to automated systems with reliable protection-device operation.
Figure 12, Figure 13 and Figure 14 illustrate the evolution of ASIDI, ASIFI, and ENS values, respectively, with the level of penetration of automated disconnectors in the MV network (considering optimised placement of automated disconnectors). KPI values are represented by bars, and cumulative improvements by curves.
As observed, taking the high fault-rate category as an example, ASIDI can improve significantly, falling from nearly 16 h/year for manual operation to less than 1.73 h/year with fully automated FLISR, a cumulative improvement of 89% (Figure 12). Equivalent results are obtained for ENS values, which decrease from 46 MWh/year in manual scenarios to less than 6 MWh/year with self-healing technologies (Figure 14). The case of ASIFI is slightly different, as clarified in the explanation of Table 8; nevertheless, FLISR technologies also produce significant improvements in this KPI compared with manual operation (Figure 13).
Between manual and fully automated FLISR, the analysis of hybrid scenarios with partial automation reveals the importance of strategic device placement. As shown in Figure 12, even a modest penetration of optimally located automated disconnectors (30–40%) leads to significant improvements in ASIDI values over the manual case, while automation levels of about 60–80% achieve performance close to that of fully automated scenarios. A similar trend is observed for ENS (Figure 14), highlighting the relevance of detailed location assessment for the optimisation of DSO investments in control and protection equipment.
The reliability of the fault-detection systems associated with the automation of disconnectors, and with the subsequent fault-isolation and supply restoration processes, is a key limiting factor in achieving the improvements in supply continuity indicators presented above. The results obtained in the laboratory tests performed in this study underscore that inverter-based DER contributions can cause malfunctions in current MV fault-detection devices, particularly affecting direction-discrimination functions (as illustrated in Figure 11), which are essential for ensuring correct fault location in scenarios with high penetration of DER-based LECs. Building upon these qualitative conclusions, the sensitivity analysis conducted quantifies the impact of fault-detection device reliability on the supply continuity KPI values obtained under DER-based LEC scenarios.
Figure 15, Figure 16 and Figure 17 illustrate the results obtained for ASIDI, ASIFI, and ENS, respectively, for the different levels of automation considered, expressed in relative terms with respect to the baseline manual case, and incorporating the results for a range of reliability values of fault-detection systems (for clarity, only values for the high annual fault rate are included). The graphs, which complement Table 10, Table 11 and Table 12, reveal that if the effective reliability of fault-detection systems decreases, the actual values of the supply continuity KPIs can be dramatically affected. It is evident that even a relatively modest reliability reduction causes substantial KPI degradation, moving the metrics towards manual-operation values and thereby diminishing the potential benefits achievable through the integration of automated FLISR technologies.
The underlying reason for this behaviour is that any fault not correctly identified by fault-detection devices (due to failure modes such as those described in Section 2.2 and Section 3.1) must ultimately be managed through manual intervention, resulting in significantly longer isolation times and more extensive outages. Thus, if an FPI fails to operate for a given short circuit, the fault will persist until maintenance crews manually isolate the affected section. During this period, service remains interrupted until the restoration process is completed following the traditional manual procedures described in Section 2.1. This situation substantially increases the interruption duration (and therefore ASIDI) and the total energy not supplied (ENS).
The sensitivity results illustrate this trend: Reduced reliability of fault-detection systems causes marked degradation in all KPIs, effectively limiting the benefits that automation could otherwise provide. As expected, in scenarios with higher fault rates, the reliability indices deteriorate compared with those in lower fault-rate scenarios, since the probability of events leading to service interruptions increases. This explains the results presented in Table 10, Table 11 and Table 12: Scenarios with high fault rates and low detection reliability show the most severe deterioration, approaching the performance of manual operation.
It is also worth emphasising that, although the results presented include aggregated KPI values for an entire year and for faults across all line sections (using the fault rates listed in Table 1), if the analysis is focused on specific fault locations, the severity of the impact is greater for short circuits occurring in critical feeder sections, such as main feeder segments closer to the HV/MV substation. An incorrectly detected fault in such a section results in the disconnection of a large number of customers for the entire duration of the fault-location and service-restoration process, thereby significantly affecting the ASIDI, ASIFI, and ENS indices, whereas faults in small lateral feeder sections affect fewer customers and thus have a more limited impact.
The results of this study corroborate and extend previous research on distribution automation, DER integration, and protection systems, as referenced in the introduction and in Section 2.1 and Section 2.2. While earlier works mainly focus on technical developments for Smart Grids, the present contribution provides a quantitative assessment of the reliability improvements achievable through the deployment of automated FLISR schemes across a range of representative scenarios, a dimension rarely quantified in the existing literature.
Furthermore, a key contribution of this work lies in integrating a sensitivity analysis of protection-device reliability under high-DER penetration conditions, explicitly including the possibility of fault-detection system misoperation (validated in qualitative laboratory tests). This allows an evaluation of potential technical risks in scenarios characterised by intensive distributed renewable generation. As shown in the results, the effective security-of-supply improvements from automation may be significantly reduced if protection systems are not correspondingly upgraded. The quantification of this effect across diverse scenarios constitutes an innovative aspect of this study.
The findings highlight a key insight with practical implications for DSOs: Automation alone is insufficient to ensure the expected reliability improvements in upcoming scenarios with high penetration of renewable generation, particularly DER units integrated into sustainable LECs. Maintaining or enhancing protection dependability is equally critical to fully realise the benefits of automation. By combining experimental and simulation-based analyses into a holistic evaluation framework, this work builds on and extends previous studies, demonstrating that high DER penetration can undermine automation benefits unless protection technologies evolve in parallel.
Accordingly, a variety of innovative solutions must be investigated, among which it is reasonable to explore adapting methodologies applied in HV networks to MV systems and researching novel protection algorithms. Potential solutions to be further investigated in future research include applying directional detection methods based on multi-criteria algorithms [52,53], implementing advanced communication schemes to define solutions based on WAMPAC (Wide Area Monitoring, Protection, and Control) [54,55], leveraging digitalisation and virtualisation technologies [56], applying machine learning techniques [57,58], and adopting adaptive fault-detection schemes [59]. The implementation and deployment of these solutions present substantial engineering challenges for both DSOs and protection equipment manufacturers. The deployment of a WAMPAC system in an MV network requires a robust, low-latency communication infrastructure to reliably transmit data and control signals, often necessitating the expansion or upgrading of the existing communication network. In addition, WAMPAC systems rely on time-synchronised phasor measurements, meaning that phasor measurement units (PMUs) must be widely installed at key points in the network. Retrofitting a distribution grid with PMUs and integrating their data into real-time operations can be logistically complex and costly, as it requires modifying substations and control systems to accommodate these new devices and the high volumes of data they produce. Integrating multi-criteria algorithms into MV protection relays may exceed the computational capabilities of current hardware, potentially requiring new high-performance devices or centralised processing architectures to execute these algorithms in real time. Likewise, machine learning techniques may involve models whose performance varies under grid topologies or scenarios not included in the training process, thereby necessitating model retraining for each specific location.
Based on the impacts and technical challenges discussed above, decision-making for the planning and deployment of solutions to ensure security of supply requires DSOs to perform feasibility assessments and to evaluate the cost–benefit of increasing automation and upgrading protection systems. Higher levels of automation (from SCADA-based control to fully self-healing FLISR schemes) and advanced protection systems entail substantial investments in equipment, engineering, and maintenance. Thus, the balance between the required investments and the value of the resulting reliability improvements, realised either through reduced customer interruption costs or through performance-based regulatory incentives, represents a key factor for network-development planners.
Comprehensive assessments are therefore necessary to determine the optimal level of automation and protection-system upgrades. The results presented in this work show that the transition from a partially automated network (with optimally located automated elements) to a fully automated one yields additional reliability improvements (an increase from ~80% to ~90% ASIDI reduction). However, the incremental cost associated with achieving this final 10% improvement may not be justified. A qualitative analysis of the results suggests that the most effective strategy could be a hybrid approach, implementing automation up to the point where the ratio of reliability gains to cost is maximised and complementing it with targeted protection upgrades at critical points of the network, as sketched below. A detailed cost–benefit analysis is required to identify this optimum configuration for each specific distribution network. While such an assessment exceeds the scope of the present work, it is emphasised as an important avenue for future research.
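As an illustration of the screening logic behind such a hybrid approach, the sketch below selects the automation level with the highest aggregate gain-to-cost ratio. The ~90% full-automation ASIDI reduction matches the aggregate result reported above, but all cost figures and intermediate gains are hypothetical placeholders; an actual assessment would rely on network-specific investment data and simulated KPIs.

```python
# Hypothetical screening of hybrid automation strategies: choose the level
# with the highest aggregate reliability-gain-to-cost ratio. All curves are
# illustrative placeholders, not results from the benchmark network study.

automation = [0.2, 0.4, 0.6, 0.8, 1.0]            # fraction of network automated
asidi_reduction = [0.30, 0.70, 0.80, 0.85, 0.90]  # relative ASIDI reduction
capex = [1.5, 2.5, 4.0, 6.5, 9.5]                 # normalised investment

best_level, best_ratio = 0.0, 0.0
for level, gain, cost in zip(automation, asidi_reduction, capex):
    ratio = gain / cost  # aggregate benefit per unit of investment
    if ratio > best_ratio:
        best_level, best_ratio = level, ratio
    print(f"automation={level:.0%}: gain={gain:.0%}, "
          f"cost={cost:.1f}, ratio={ratio:.3f}")

print(f"Highest gain-to-cost ratio at {best_level:.0%} automation")
```

With these placeholder curves, the ratio peaks at 40% automation, after which each additional increment buys progressively less reliability per unit of investment.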
5. Conclusions
The research presented in this paper develops a comprehensive qualitative and quantitative evaluation of the impacts of high penetration of renewable distributed energy resources (DER) integrated within sustainable Local Energy Communities (LECs) on distribution-network reliability and supply continuity indices. The holistic methodology, combining Hardware-in-the-Loop (HIL) laboratory experiments with extensive simulations on a representative European benchmark network, enables a robust assessment of reliability and restoration performance across a range of operational scenarios.
Findings demonstrate the benefits achievable through advanced automated Fault Location, Isolation, and Service Restoration (FLISR) systems, notably reflected in significant improvements across key supply continuity indicators such as the Average System Interruption Duration Index (ASIDI), the Average System Interruption Frequency Index (ASIFI), and Energy Not Supplied (ENS). Specifically, FLISR schemes can reduce ASIDI from approximately 15.9 h/year (manual scenarios) to about 1.7 h/year, achieving an improvement of around 90%. Similarly, ENS values decrease from 46 MWh/year to 6 MWh/year (around an 87% reduction) with automation. While not as pronounced as for the previous KPIs, ASIFI results also show significant improvement under FLISR scenarios. Additionally, the analysis of intermediate hybrid automation scenarios reveals that the strategic placement of automated devices can yield most of the reliability benefits: for instance, automating 30–40% of the network (in optimal locations) already provides notable reductions in interruption duration and frequency, and at around 60–80% automation, performance converges close to that of full FLISR deployment.
Along with these promising benefits, the present work also identifies critical vulnerabilities in conventional protection devices under inverter-based DER contributions during fault conditions in Medium Voltage (MV) networks. Laboratory tests reveal the risk of malfunctions in fault-detection systems, particularly in directional-discrimination functions, which threaten effective fault detection, isolation, and subsequent restoration processes. Sensitivity analyses quantify that even moderate reductions in fault-detection device reliability can drastically diminish the improvements achieved with automated FLISR deployment, shifting the metrics closer to those of manual operation and underscoring the strong interdependence between fault-detection reliability and FLISR effectiveness. Therefore, ensuring the robustness and dependability of automated FLISR systems in distribution networks with substantial DER-based LEC integration relies on enhancing fault-detection system behaviour under such scenarios.
The outcomes of the study hold significant implications for DSO planning, operations, and regulatory oversight in the context of reliability under the considered scenarios. Improved continuity indicators (lower ASIDI, ASIFI, and ENS) not only enhance customer satisfaction and public trust but also yield potential financial benefits for DSOs within incentive-based regulatory schemes, where revenues and penalties are tied to reliability indices. Thus, sustained improvements in supply reliability may be financially rewarded (either by avoiding outage-cost liabilities or by earning performance bonuses), while deterioration may trigger losses, sanctions, or corrective actions. Additionally, from a technical perspective, achieving the potential reliability gains of advanced FLISR automation, together with the required evolution of fault-detection systems, entails substantial engineering challenges to be addressed by DSOs and technology developers. The deployment of enhanced FLISR schemes requires robust field devices and IT infrastructure, seamless integration of multi-vendor systems, interoperability between new and legacy equipment, and the development and implementation of advanced control and fault-detection algorithms (adaptive schemes, multi-criteria directional detection methods, or communication-assisted trip schemes, among others).
While the present study provides a robust technical basis for understanding the impact of DER-based LECs on distribution-network reliability, several aspects remain open, suggesting further investigation and future research lines. The use of a benchmark European distribution network and generalised statistical failure rates ensured methodological consistency and comparability, but application to specific real networks will require utility-provided data, allowing a more precise quantification of impacts under local operational conditions. Likewise, the qualitative laboratory assessment is based on representative devices, focused in this study on a specific commercial relay; a broader comparative analysis, including relays from different manufacturers and configurations, would therefore enrich the statistical characterisation of misoperation probabilities for the particular networks under study. An additional priority is to explore the techno-economic dimension of automation and protection upgrades. This work qualitatively suggests that, while high levels of automation can yield substantial improvements in reliability indices, the incremental benefit of full automation may not always justify the associated cost. Future research will therefore conduct cost–benefit analyses, balancing the expenses of advanced equipment and engineering against the value of avoided interruptions and regulatory incentives. In addition, following the key findings of the sensitivity analysis, future research will address the development and validation of advanced protection schemes, exploring the application of multi-criteria algorithms within the framework of integrated communication and digitalisation technologies. Enhancing protection systems will sustain the benefits of automated operation amid increasing renewable energy integration, thereby facilitating the efficient, secure, and sustainable integration of renewable-based LECs into Smart Grid infrastructures.