Remote Laboratory Testing Demonstration

The complexity of a smart grid with a high share of renewable energy resources introduces several issues in testing power equipment and controls. In this context, real-time simulation and Hardware in the Loop (HIL) techniques can tackle these problems, which are typical of power system testing. However, implementing a convoluted HIL setup in a single infrastructure can be physically impossible or can increase the time required to test a smart grid application in detail. This paper introduces the Joint Test Facility for Smart Energy Networks with Distributed Energy Resources (JaNDER), which allows users to exchange data in real-time between two or more infrastructures. This tool enables the integration of infrastructures, exploiting the synergies between them and creating a virtual infrastructure that can perform more experiments using a combination of the resources installed in each infrastructure. In particular, JaNDER can extend a HIL setup. In order to validate this new testing concept, a coordinated voltage controller has been tested in a Controller HIL setup where JaNDER was used to interact with an actual On Load Tap Changer (OLTC) controller located in a remote infrastructure. The results show that the latency introduced by JaNDER is not critical; hence, under certain circumstances, it can be used to expand real-time testing without affecting the stability of the experiment.


Introduction
In the context of the global warming trend [1], the key enablers for managing the increasing emissions of greenhouse gases (GHGs) are energy efficiency and low-carbon technologies. Renewable sources, storage systems, and flexible loads provide enhanced possibilities. However, these new resources introduce several issues that power system operators have to cope with and investigate. Indeed, an infrastructure with a growing amount of heterogeneous components typically has a higher complexity than traditional power plants [2]. Sophisticated component design methods, intelligent information and communication architectures, automation and control concepts, and proper standards are necessary in order to manage the higher complexity of such intelligent power systems (i.e., Smart Grids) [3][4][5]. Due to the considerably higher complexity of such cyber-physical systems, it is expected that the validation of smart grid configurations will play a major role in future technology developments.
During the last decade, a growing number of various research and technology development activities have already been carried out in this area. This has led to the formation of several research infrastructures (RIs) performing experiments on smart grid activities. However, up to now, no

Real-Time Simulation Testing
Real-time simulation and HIL techniques are gaining significant attention as a testing procedure for smart grid applications. These techniques allow the connection of physical devices (e.g., inverters, controllers) to a digital real-time simulator (DRTS) that simulates the rest of the power system in real-time. Therefore, instead of using simulated models of hardware equipment, which could be inaccurate compared to the actual device, the actual hardware components are used, providing a more realistic view of a smart grid application. Under this setup, exhaustive testing can be achieved under realistic, flexible, controllable, and repeatable conditions, leading to fewer problems at the commissioning phase and in field operation [13][14][15][16].
HIL approaches can be the closest representation that a laboratory test can get to a full hardware application. However, as the HIL setup approaches the actual field implementation, complexity and interfacing challenges increase due to the introduction of various hardware components.
The implementation procedure of such a convoluted setup could be quite time-consuming. This derives from the fact that the engineers involved must gain adequate expertise in the different equipment, which may have been developed by various manufacturers, in order to address communication, protection, and interfacing issues between the hardware devices and the DRTS. The physical space required in the infrastructure hosting the HIL setup can be another important limiting factor.
To sum up, implementing a convoluted HIL setup in a single infrastructure can be physically impossible or can increase the time required to test a smart grid application in detail. However, platforms like JaNDER, which is described in this work, can be used to overcome the aforementioned issues. JaNDER allows the interconnection of different infrastructures with very small communication latencies and can be used, as presented in Section 5, to expand the HIL setup. This addresses the space limitation issue and also reduces the time required to interface the different equipment due to the collaboration of different personnel specialized in different components.

JaNDER Concept
To better understand how an HIL setup can be extended on two or more RIs, it is useful to explain how JaNDER works. At the heart of the JaNDER platform, there is the idea that several RIs, linked with a standardized ICT solution, are integrated into a Virtual Research Infrastructure (VRI), encompassing the simulation/experimental potential made available by each laboratory, and thus being able to participate in tests of complex use cases. The overall idea of VRI is depicted in Figure 1.
Figure 1 shows the advantage of the VRI concept: the possibility for one RI to seamlessly access the resources located at a remote site. These resources might be real field devices, typically mediated through a supervisory control and data acquisition (SCADA) system, or resources modelled inside a DRTS. DRTSs are crucial components of smart grid laboratory experiments, but they are also very expensive, and the possibility of sharing them among RIs is a big advantage. Also, software artifacts such as control algorithms (e.g., a voltage control algorithm operating on both local and remote devices) can be remotely connected. In practice, measurements and control signals transferred through the VRI's common interface can benefit from higher levels of interoperability: at a basic level, there is the need to transfer plain values associated with devices (either real or simulated), while a more semantic level can be placed "above" the basic exchange of field signals. In fact, this "layered" approach is the one used by the JaNDER platform (under the terminology of "Level 0" and "Level 1"), as described in more detail in the following subsections.

JaNDER Level 0
JaNDER Level 0 implements the basic mechanism for exchanging live data (i.e., typically measurements and controls) between different RIs. The non-functional constraints associated with this layer are the following:
• Work with a "push" communication model, i.e., do not require any RI to accept incoming TCP connections. This allows for a simpler deployment of the platform.
• Use the HTTPS protocol; the reason for this requirement is the same as the previous one.

• It must be easy to integrate the platform in many different existing laboratory environments, using different communication protocols and programming languages.
• It must be secure. Since RIs are connected over the Internet, proper cyber security mechanisms must be adopted in order to guarantee that only authorized parties can connect to the platform.
• It must be "fast enough"; this actually means fast enough to support the foreseen test cases, both in terms of latency and volume of data.
Given the above requirements, Figure 2 illustrates in a simplified manner the implemented JaNDER Level 0 architecture, which is then explained in detail.
The starting point for each RI is a real-time repository used to collect measurements and controls from the field (or more often, as shown in Figure 2, from an already existing SCADA system or DRTS). The reason for this repository is the decoupling of the JaNDER platform from any specific automation solution already installed in the infrastructure. The idea is to have data points from each partner available in the same basic format using a simple key-value repository. The connection of remote infrastructures is then implemented by deploying a common real-time repository (which can be hosted in a cloud environment, for example) that is automatically synchronized with all the local real-time databases of the partners. In other words, the common repository acts as a central broker for connecting the different local repositories of the partners and can be thought of as a "virtual bus" connecting all authorized facilities. The real-time repository is an existing open source product, called Redis [17], which is a widely tested, supported, and documented solution with client libraries already available for many programming languages and environments. However, the HTTPS replication logic is not part of the Redis product; it was originally developed within the project and then released as open-source software.
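The key-value synchronization model described above can be sketched as follows. This is a minimal, illustrative stand-in: plain Python dictionaries take the place of the local and common Redis repositories, a direct function call takes the place of the HTTPS replication channel, and the key names are hypothetical, not taken from the actual platform.

```python
# Illustrative sketch of the JaNDER Level 0 synchronization model.
# Plain dicts stand in for the local and common Redis repositories;
# a direct call stands in for the HTTPS "push" replication.

def push_changes(local_repo, common_repo, last_synced):
    """Replicate changed key-value pairs from a local repository
    to the common (cloud) repository, push-style."""
    changed = {k: v for k, v in local_repo.items()
               if last_synced.get(k) != v}
    common_repo.update(changed)   # the "virtual bus" sees the new values
    last_synced.update(changed)   # remember what has been replicated
    return changed

# RI1 publishes measurements into its local repository...
ri1_local = {"RI1:pv1:activePower": 3.2, "RI1:oltc:tapPosition": 5}
common = {}
synced = {}
push_changes(ri1_local, common, synced)

# ...and another RI can now read them through the common repository.
print(common["RI1:oltc:tapPosition"])
```

Because only changed keys are pushed, repeated synchronization cycles with an unchanged local repository transfer nothing, which matches the "push" model's goal of simple, outbound-only traffic.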
A crucial implementation aspect of JaNDER Level 0 is the connection between Redis and the existing SCADA system (or individual field devices, depending on each laboratory architecture, such as a DRTS); this link is of course different for each partner, and therefore, the associated software development effort can vary from a really straightforward task, in case the SCADA system provides an open and well-documented interface for connection with external systems, to a difficult and time-demanding task, in case there is no central SCADA system and each device must be integrated separately (possibly with different communication protocols). Given the impossibility of addressing each of the partners' ICT infrastructures in a unified way, it should be noted that the extremely simple data model of the Redis database, and the fact that client libraries are available in almost any programming language and environment (including LabVIEW and Matlab, which are highly popular in laboratory environments), helps in keeping the integration effort minimal. In conclusion, all of the requirements mentioned at the beginning of this section have been addressed with the described implementation; the performance aspect in particular is detailed in the next section.
The fully open source nature of JaNDER Level 0 [18] makes it easy to extend the VRI community in the future with new participants; however, external users will also typically be interested in having a standardized protocol for interfacing. This is handled by the higher JaNDER levels, as described in the next subsection.

JaNDER Level 1
JaNDER Level 1 is a software abstraction built on top of Level 0, and its purpose is to provide an IEC 61850 interface on top of the very simple data structures defined in Redis. The reason for adding this level is of course to provide access to the VRI by means of an internationally accepted standard where applicable, such as IEC 61850, which is a standard of primary importance in the field of Smart Grids. The high-level architecture of JaNDER Level 1, as an extension of the architecture presented for JaNDER Level 0, is shown in Figure 3.
In Figure 3, one RI wants to access some contents of its Redis database using the IEC 61850 protocol. It should be stressed that the data points in its repository can also come from another RI by means of the JaNDER Level 0 replication mechanism. This means that the IEC 61850 server in the picture can give access to devices in infrastructure 2 but also in infrastructure 1, in a seamless way; at the same time, since the IEC 61850 connection is local to infrastructure 2, there is no need to set up sophisticated cyber security mechanisms for this connection, i.e., the basic MMS protocol without any encryption can be safely used. Of course, this is possible because strong cyber security mechanisms are already implemented at Level 0 and are therefore completely transparent for Level 1.
The "Mapping" and "CID" files shown as inputs in Figure 3 are the fundamental inputs needed by the IEC 61850 server in order to work. In more detail, the configured IED description (CID) is the standard IEC 61850 file used for configuring a device (an IED) and contains a data model representing (a subset of) the contents of the Redis repository in terms of IEC 61850 logical nodes. Apart from this file, it is of course necessary to link the data attributes defined inside it with the live values stored in Redis: this is done by means of a mapping file, which is a text file where each line contains an IEC 61850 data attribute name and a corresponding Redis data point name. The server uses this file in order to connect the IEC 61850 data model specified in the CID to Redis.
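A mapping file of this kind can be consumed as sketched below. The source only states that each line pairs an IEC 61850 data attribute name with a Redis data point name; the whitespace-separated layout, the '#' comment convention, and all attribute and key names in the example are assumptions for illustration, not the platform's actual format.

```python
# Sketch of loading a JaNDER Level 1 mapping file. Each line pairs an
# IEC 61850 data attribute name with a Redis data point name; the exact
# syntax (whitespace separation, '#' comments) is an assumption here.

def load_mapping(lines):
    """Return {iec61850_attribute: redis_key} from mapping-file lines."""
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        attribute, redis_key = line.split()
        mapping[attribute] = redis_key
    return mapping

# Hypothetical mapping entries (names are illustrative only).
text = """
# IEC 61850 data attribute        Redis data point
LD0/MMXU1.TotW.mag.f              RI1:pv1:activePower
LD0/ATCC1.TapChg.valWTr.posVal    RI1:oltc:tapPosition
"""
mapping = load_mapping(text.splitlines())

# The server would use the mapping to serve live Redis values over MMS.
redis_values = {"RI1:oltc:tapPosition": 5}
print(redis_values[mapping["LD0/ATCC1.TapChg.valWTr.posVal"]])
```

In the real server, the lookup runs in the opposite direction as well, so that an IEC 61850 control command is written back to the corresponding Redis key.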
The IEC 61850 server software is completely open source [19] and based on the OpenIEC61850 Java library provided by Fraunhofer ISE [20] and others in the context of the wider OpenMUC framework. The library has been integrated with a Redis interfacing mechanism which allows the implementation of the behavior discussed above. The interfacing is of course bidirectional, so that measurements can be retrieved and control commands can be issued. (It should be highlighted that the more advanced IEC 61850 control services (enhanced security and select-before-operate) have not been implemented since the direct control with normal security is adequate for the testing scenarios.) In conclusion, the Level 1 of the JaNDER platform provides a simplified way of adding an IEC 61850 standard interface to a RI, built on top of the Level 0 basic solution.

JaNDER Characterization Introduction
To better understand the behavior of JaNDER in different contexts, a characterization of the different JaNDER levels is necessary. First of all, two of the principal features that are usually taken into account in a distributed system have been considered: latency and response time. In this work, only one level has been tested: JaNDER Level 0. The difference between JaNDER Level 0 and Level 1, in terms of latency, is negligible; the delay introduced by JaNDER Level 1 is only a few milliseconds due to the mapping between the two layers. For this reason, only a single test has been performed. For this type of architecture, it is important to evaluate the latency of exchanging measurements between a single RI and the cloud platform. The next subsection introduces the platform used to log the measurements collected during the tests in order to characterize JaNDER, as well as the test procedure adopted for testing JaNDER Level 0.

Logging Data Platform: a Big Data Solution
Since the characterization of JaNDER requires a large number of experiments, a Big Data solution has been developed to simplify the management of the tests. The idea is to automate the analysis of test results as much as possible and to make the collection and analysis of data easy to manage with a Big Data platform. The platform for data analysis is deployed on the Amazon AWS [21] infrastructure. Databricks [22] was used for the Big Data analysis. Every single RI records the files coming from tests, and at the end of the test, these logs are sent to a single cloud repository (AWS S3 in Figure 4). Using the Apache Spark [23] framework in Databricks, it is possible to aggregate and explore the data coming from all the tests performed by the RIs in a single place with powerful Big Data tools and to save the results back in AWS S3 to be eventually visualized or processed with other tools.
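The collect-then-aggregate flow can be mimicked locally with the small sketch below. Plain Python stands in for S3 and Spark, and the log file names and one-latency-per-line format are hypothetical; the point is only to show the shape of the aggregation step, not the actual Databricks job.

```python
# Local stand-in for the S3 + Spark aggregation: each "log file" holds
# the latency samples (ms) of one test; aggregation merges all files
# and computes per-test statistics. Names and format are assumptions.
import statistics

logs = {
    "RI1_group1.log":  "31.2\n29.8\n30.5",
    "RI1_group10.log": "33.0\n31.1\n32.4",
}

def aggregate(logs):
    """Return {test_name: {"count": n, "mean_ms": mean}} per log file."""
    summary = {}
    for name, content in logs.items():
        samples = [float(line) for line in content.splitlines()]
        summary[name] = {"count": len(samples),
                         "mean_ms": statistics.mean(samples)}
    return summary

summary = aggregate(logs)
print(summary["RI1_group1.log"]["count"])
```

In the real pipeline the same grouping and averaging is expressed as a Spark job over all RIs' logs, with the results written back to S3.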

To ensure that every single test runs without overlapping other tests, scheduling the tests is recommended in order to automatically run the test for every RI. In this way, the latency of every single measurement is not affected by other tests. In the next paragraph, the JaNDER Level 0 test is described.

Test JaNDER Level 0: Test Description
The layer considered for this test is JaNDER Level 0. Figure 5 shows the conceptual model of this test. It aims to measure the time required for the data synchronization from an RI to the Cloud Platform: t2 − t1. In practice, this time should be similar to the latency of a public network such as the Internet. Since the Internet is not a deterministic network and does not provide Quality of Service (QoS) guarantees but is based on a best-effort service, the results must always take this characteristic into account; the measured latency may therefore occasionally contain outliers with very high values (on the order of tens of seconds).
At the scheduled time, every single RI sends a group of measurements 1000 times. The test is repeated four times, with groups of 1, 10, 100, and 1000 measurements. The tests have been executed for a week, in slots of 30 min from 09:00 to 17:00, for each single RI. As already mentioned, these experiments were performed using a scheduler so as to have, at any given time, only one RI running a test. The synchronization between the machines is obtained using NTP.
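The measurement loop of this test can be sketched as follows. To keep the sketch self-contained, the transport is a simulated channel (a fixed delay) rather than the actual HTTPS replication, and a single monotonic clock replaces the two NTP-synchronized machines; the group contents are placeholders.

```python
# Sketch of the Level 0 latency test: send a group of measurements
# `repetitions` times and record t2 - t1 for each repetition. A fixed
# simulated delay stands in for the HTTPS replication; in the real test
# t1 and t2 are taken on NTP-synchronized machines.
import time
import statistics

def run_test(group_size, repetitions=1000, simulated_delay_s=0.0):
    latencies_ms = []
    for _ in range(repetitions):
        group = {f"meas{i}": 0.0 for i in range(group_size)}  # payload
        t1 = time.monotonic()            # timestamp at the RI
        time.sleep(simulated_delay_s)    # stand-in for the replication
        t2 = time.monotonic()            # timestamp at the cloud side
        latencies_ms.append((t2 - t1) * 1000.0)
    return latencies_ms

lat = run_test(group_size=10, repetitions=50)
print(f"average = {statistics.mean(lat):.2f} ms, "
      f"Q3 = {statistics.quantiles(lat, n=4)[2]:.2f} ms")
```

Replacing the simulated delay with the actual push to the common repository, and reading t2 on the cloud side, yields the t2 − t1 samples analyzed in the next subsections.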
For this type of test, a lot of data was collected: 16 tests a day, four for every partner for the four numbers of measurements. Every test consists of 1000 repetitions, for a total of 16,000 single repetitions per day. This was possible thanks to the automatic script system. Figure 6 shows that during the test the server had no Central Processing Unit (CPU), memory, or network problems (of course, there is only one active client). This means that the latency measurement is not influenced by the machine (an Amazon AWS EC2 c3.large).

Test JaNDER Level 0: Test Results
This paragraph shows some results of the JaNDER Level 0 characterization. Considering one of the RIs, Figures 7-10 show the latency as a function of the number of measurements sent from the RI to the cloud for every single repetition.
The first set of tests considers a single measurement per repetition. Figure 7 shows the results coming from the tests related to a single RI for the JaNDER Level 0 test. In this case, the picture on the right shows that there are 3997 measurement exchanges between the RI and the cloud platform. The average latency is 32.7 ms, and the third quartile is equal to 29.3 ms.
The second set of experiments considers 10 measurements at a time. Figure 8 displays the results. In this case, the third quartile is equal to 31 ms.
The third experiment considers 100 measurements at a time. Figure 9 shows the results. The third quartile is equal to 31.6 ms. The latency is practically in line with the previous tests.
The fourth and last test is with 1000 measurements sent from the RI to the cloud platform. Figure 10 shows that the third quartile is equal to 132 ms. This means that with 1000 measurements at a time, the latency increases from about 30 ms to 130 ms.
Figure 11 presents a summary of the tests, comparing all the tests for the different numbers of measurements (1, 10, 100, 1000) and showing the total statistics (grand total). Blue points represent the outliers of every test. The Internet latency, and thus the JaNDER Level 0 latency, is sometimes more than one second.
The JaNDER Level 0 test results confirm that applications with time constants in the range of seconds are feasible with fewer than 1000 measurements.
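The per-test summary statistics can be reproduced with a few lines of standard-library Python. The latency values below are made up for illustration, and the outlier rule (above Q3 + 1.5·IQR, the standard box-plot convention) is an assumption, since the paper only shows outliers as blue points without defining them.

```python
# Reproducing the per-test summary statistics on a sample of latency
# values (values invented for illustration). The outlier rule
# (above Q3 + 1.5*IQR, as in a standard box plot) is an assumption.
import statistics

def summarize(latencies_ms):
    q1, median, q3 = statistics.quantiles(latencies_ms, n=4)
    iqr = q3 - q1
    outliers = [x for x in latencies_ms if x > q3 + 1.5 * iqr]
    return {"average": statistics.mean(latencies_ms),
            "q3": q3,
            "outliers": outliers}

sample = [29.1, 30.4, 28.7, 31.2, 29.9, 30.1, 28.5, 1200.0]  # one spike
stats = summarize(sample)
print(stats["outliers"])  # the >1 s Internet spike is flagged
```

With a heavy-tailed distribution like this, the third quartile stays near 30 ms even though the mean is dragged up by the spike, which is why the paper reports quartiles alongside averages.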

Example of Application: Integration of a Remote OLTC Controller
Real-time testing, using HIL techniques, can evaluate equipment performance under realistic conditions. This can lead to a decrease in integration time while also reducing potential risks in field deployment [14]. Components such as controllers and power equipment that are part of actual applications make the HIL setup more realistic and closer to the actual conditions. However, there might be a limitation on how many devices a research infrastructure can host, due to space limitations or due to the fact that different components are probably developed by different manufacturers. Therefore, it might be difficult to investigate whether any integration issue occurs between the devices in a safe laboratory environment prior to field implementation. For example, possible issues might exist due to communication delays between the different IEDs' control devices.
In this section the adequacy of JaNDER platform to serve as the interface between equipment located at remote RIs coupling them in the same HIL setup is investigated.
In order to illustrate this, a Coordinated Voltage Controller, which aims to reduce voltage deviations using different inverters and an On Load Tap Changer (OLTC) located in a Low Voltage (LV) benchmark grid, will be tested in a Control Hardware in the Loop (CHIL) setup. At the same time, it will use JaNDER to interact with an actual OLTC controller located in a remote infrastructure.
Finally, in actual field implementations, an interface based on an industrial communication protocol is established between a centralized controller and remote IEDs. Therefore, using the JaNDER Level 1 platform, which integrates components located in remote infrastructures through the industrial protocol IEC 61850, a more realistic environment can be achieved.

Test Description
In the aforementioned setup, a centralized voltage controller (CVC) operates on the premises of the Electrical Energy Systems Laboratory (EESL) of the National Technical University of Athens (NTUA). The aim of this controller is to minimize the voltage deviations, the power losses, and the required OLTC operations [14]. To do so, the CVC receives measurements such as the load demand, the PVs' production, the batteries' state of charge (SoC), and the OLTC's tap position. An optimization problem is then solved, providing the required setpoints for the PV and battery inverters as well as the OLTC tap changes needed to achieve these goals: the voltage deviation from the nominal value is minimized while ensuring that the power losses are not increased considerably by the additional reactive power injection, and the OLTC operations are restricted to avoid wear. This optimization problem is solved with a commercial solver able to find a local minimum for non-linear problems. The timestep of each iteration is around 1 min [8].
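To make the trade-off in this objective concrete, the following is a minimal illustrative sketch. The weights, the voltage model (each tap step shifting all voltages by a fixed per-unit amount), and the function names are assumptions made for this example; the paper's actual controller uses a commercial non-linear solver, not a brute-force search.

```python
# Minimal sketch (hypothetical weights and models, NOT the paper's solver):
# the CVC-style objective trades off voltage deviation from nominal, a proxy
# penalty for extra losses from reactive power injection, and a penalty per
# tap operation to limit OLTC wear.
def cvc_objective(voltages_pu, q_injections_pu, tap, prev_tap,
                  w_volt=1.0, w_loss=0.1, w_tap=0.001):
    dev = sum((v - 1.0) ** 2 for v in voltages_pu)   # voltage deviation term
    loss = sum(q ** 2 for q in q_injections_pu)      # loss proxy term
    wear = abs(tap - prev_tap)                       # number of tap operations
    return w_volt * dev + w_loss * loss + w_tap * wear

def best_tap(bus_voltages_pu, prev_tap, step_pu=0.01, taps=range(-5, 6)):
    """Brute-force over tap positions; each step lowers voltages by step_pu."""
    def cost(t):
        shifted = [v - t * step_pu for v in bus_voltages_pu]
        return cvc_objective(shifted, [], t, prev_tap)
    return min(taps, key=cost)

# Overvoltage scenario (high PV production): the controller raises the tap
# to bring the feeder voltages back towards 1.0 p.u.
tap = best_tap([1.04, 1.05, 1.06], prev_tap=0)
```

The wear penalty is what keeps the controller from moving the tap on every iteration when the deviation is small.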
In this setup, a DRTS located in the EESL is the backbone of a CHIL setup, under which the centralized controller can receive and send measurements in real-time. Therefore, both its behavior and its impact on the power system can be investigated under realistic conditions. In the DRTS, the LV CIGRE benchmark network is simulated in a similar way to previous works [14,15]. The main goal of the test case described in this work is to demonstrate the ability of JaNDER to further enhance the setup by including an OLTC controller located in a remote installation. Therefore, possible interaction issues between the CVC controller located at the EESL in Athens, Greece, and an OLTC controller located in the UDEX laboratory at the Ormazabal premises in Bilbao, Spain, have been investigated. It should be highlighted that the CVC controller has been integrated with the OLTC controller through the JaNDER Level 1 platform. To meet the test setup requirements, an OLTC controller emulator has been used instead of the actual transformer with the OLTC. This emulator includes the control of the OLTC and also serves as a real-time simulator of the whole OLTC system, reproducing the expected behavior of the OLTC hardware, such as delays in changing taps [24,25]. Therefore, through the JaNDER platform, a HIL setup is interconnected with a custom-made real-time simulator.
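The emulator's key property, reproducing the mechanical tap-change delay so that the remote setup sees realistic timing rather than instantaneous changes, can be sketched as follows. This is an assumed behavior for illustration only, not Ormazabal's actual emulator; the class name, limits, and per-step delay are invented.

```python
# Illustrative sketch (assumed behavior, not the actual emulator): an OLTC
# controller emulator that reproduces the hardware's per-step tap-change
# delay, so commanded changes only become visible after a realistic time.
class OltcEmulator:
    def __init__(self, tap=0, min_tap=-5, max_tap=5, change_delay_s=3.0):
        self.tap = tap
        self.min_tap, self.max_tap = min_tap, max_tap
        self.change_delay_s = change_delay_s   # mechanical delay per tap step
        self._target = tap
        self._ready_at = 0.0                   # time the last change completes

    def command(self, target, now):
        """Request a new tap position at simulation time `now` (seconds)."""
        target = max(self.min_tap, min(self.max_tap, target))
        steps = abs(target - self.tap)
        self._target = target
        self._ready_at = now + steps * self.change_delay_s

    def read_tap(self, now):
        """Report the tap position as an external controller would see it."""
        if now >= self._ready_at:
            self.tap = self._target            # change has completed
        return self.tap

oltc = OltcEmulator()
oltc.command(2, now=0.0)
early = oltc.read_tap(now=1.0)   # still moving: reports the old position
late = oltc.read_tap(now=7.0)    # after 2 steps x 3 s: new position visible
```

Passing the time explicitly keeps the sketch deterministic; a real emulator would use a wall clock or the simulator's own time base.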
JaNDER Level 1 was used to receive measurements (tap position) from and send commands (tap changes) to the OLTC controller. Finally, the tap position reported by the OLTC controller is forced onto a simulated OLTC in the DRTS at NTUA's premises in order to include the impact of the actual OLTC's changes on the simulated system. The described advanced setup is presented in Figure 12.
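The exchange pattern between the CVC and the remote OLTC controller follows a simple read-measurement/write-command cycle. The sketch below is hypothetical: the real JaNDER client API is not shown in this paper, so a stand-in key-value bus and the key names are invented for illustration.

```python
# Hypothetical sketch of the exchange pattern (the actual JaNDER Level 1
# client API is not shown here): the CVC reads the tap position published by
# the remote OLTC controller and writes back the requested tap change.
class FakeJanderBus:
    """Stand-in for the JaNDER data exchange layer (illustrative only)."""
    def __init__(self):
        self._store = {}

    def write(self, key, value):
        self._store[key] = value

    def read(self, key, default=None):
        return self._store.get(key, default)

def cvc_step(bus, desired_tap):
    """One CVC iteration: read the remote tap, command a change if needed."""
    current = bus.read("oltc/tap_position", default=0)
    if current != desired_tap:
        bus.write("oltc/tap_command", desired_tap)
    return current

# The remote OLTC controller publishes its tap position ...
bus = FakeJanderBus()
bus.write("oltc/tap_position", 2)
# ... and the CVC reads it and requests a change to tap 3.
cvc_step(bus, desired_tap=3)
```

In the actual setup, the same two quantities are mapped onto IEC 61850 data objects rather than ad hoc keys.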

Results
The operation of the CVC controller in parallel with the OLTC controller was tested for a daily scenario with increased PV production. Under this scenario, the CVC is expected to use the OLTC to mitigate voltage rise issues.
In Figures 13 and 14, the voltage profiles with and without the CVC operation are presented. In Figure 15, the tap position forced to the OLTC controller by the CVC controller is presented.

Under this setup, the ability of the JaNDER platform to enhance real-time testing was investigated. The CVC was tested and validated against an OLTC controller and real-time simulator designed by specialized personnel in a different location. This test case clearly showed that JaNDER can be used for testing applications, like the CVC controller, with equipment in different RIs without affecting their operation due to the time delays introduced by the interface.
In Figure 16, the delays measured during a request for a tap position by the CVC to the OLTC are presented. The most significant delay is introduced by the OLTC controller/OLTC real-time simulator, which emulates the actual performance of an OLTC in a transformer. The inverters, which are simulated in the DRTS located in the same infrastructure as the CVC controller, receive their setpoints faster, which results in faster acknowledgement of the setpoint implementation by the CVC controller. Therefore, the distributed DERs implement their setpoints in different time frames and the CVC controller acknowledges those setpoints at different time stamps. However, the CVC time step (~1 min) is significantly larger than the time required for the CVC controller to send the setpoints to the DERs and OLTC and to acknowledge their implementation (~5 s).
Therefore, the overall behavior of this application is not affected by the different time frames in which the devices react to the CVC controller setpoints. As shown in Figure 16, the most significant delay is introduced by the OLTC operation following the CVC request. This delay may differ according to the OLTC manufacturer. It is of course clear that if the OLTC controller's response were comparable to the centralized controller's timestep, the overall behavior of the system would be affected. For example, during online operation the centralized controller initiates a sequence by sending setpoints to all the local devices. If the OLTC controller response is comparable to the centralized controller timestep, the tap position may still be unchanged when the central controller begins measurements in the next timestep. The central controller would then send an additional request (e.g., a tap increase) to the OLTC controller. If the OLTC controller implements those requests in series, the resulting tap position would be wrong, leading to undesired operation of the OLTC. To avoid this, the CVC controller should, for example, request acknowledgement from the local controllers that the ordered setpoints have been implemented. The setup presented can effectively reveal such weaknesses of the controller design in a safe laboratory environment.
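The acknowledgement scheme suggested above can be sketched as a poll-until-confirmed step: the CVC blocks (up to a timeout) until the local controller reports the ordered tap position, instead of blindly re-sending the command in the next timestep. Function names, timeout, and the toy OLTC stand-in are assumptions for this example.

```python
# Sketch of the acknowledgement handshake (hypothetical API): command a tap
# position and poll until the local controller confirms it, so stale
# measurements never trigger a duplicate request.
import time

def send_tap_and_wait_ack(read_tap, command_tap, target,
                          timeout_s=30.0, poll_s=0.5):
    """Command `target` and block until it is confirmed or timeout expires."""
    command_tap(target)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_tap() == target:      # OLTC reports the ordered position
            return True
        time.sleep(poll_s)
    return False                      # no ack: flag the fault, do NOT re-send

# Toy OLTC stand-in that applies the command immediately.
state = {"tap": 0}
ok = send_tap_and_wait_ack(lambda: state["tap"],
                           lambda t: state.update(tap=t),
                           target=3, timeout_s=2.0, poll_s=0.1)
```

On timeout the controller should raise a fault rather than issue a second tap request, which is exactly the failure mode described above.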
In this implementation, the most significant factor of failure is the OLTC response compared to the CVC timestep, since the platform's delays are insignificant compared to the CVC timestep. Nevertheless, the platform delays can affect the overall stability of the setup and the accuracy of the results if applications requiring faster responses are considered. The stability of a control system, considering the sample rate of the controller, for systems with communication network delays has been investigated in [26,27]. A slower controller response or sample rate is a usual way to increase the robustness of the control against communication delays. This is not applicable in setups with several controllers located in remote locations, since altering their response might lead to a behavior different from that anticipated in the field deployment. A general rule of thumb when using the JaNDER platform to create a testing setup is to compare the network delays introduced by JaNDER with the actual network delays expected in the field deployment, or to ensure that the controller has a sample rate that is not affected by the communication delays. Otherwise, the testing setup is not suitable, as shown in the example of decentralized secondary voltage and frequency control of an islanded microgrid under varying time delays [27].
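This rule of thumb can be expressed as a simple pre-test check. The thresholds below (a 10% margin against the controller step and a 2x tolerance against the field delay) and the assumed 50 ms field delay are illustrative choices for this example, not values prescribed by the paper; only the ~0.3 s JaNDER latency and ~1 min CVC timestep come from the measurements above.

```python
# The rule of thumb as a pre-test check (thresholds are illustrative
# assumptions): a setup is considered suitable if the JaNDER round-trip
# delay is small relative to the controller sample time, or comparable to
# the communication delay expected in the field deployment.
def setup_suitable(jander_delay_s, controller_step_s, field_delay_s,
                   margin=0.1, tolerance=2.0):
    """True if JaNDER latency should not distort the controller's behavior."""
    negligible = jander_delay_s <= margin * controller_step_s
    representative = jander_delay_s <= tolerance * field_delay_s
    return negligible or representative

# CVC case: ~0.3 s worst-case JaNDER latency vs. a ~60 s controller timestep
# (assumed 50 ms field delay).
cvc_ok = setup_suitable(0.3, 60.0, 0.05)
# A fast controller with a 0.1 s step would not be suitable over JaNDER.
fast_ok = setup_suitable(0.3, 0.1, 0.05)
```

Such a check makes the suitability decision explicit before committing to a cross-infrastructure experiment.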

Discussion
From the point of view of the CVC users, this test case using JaNDER was more realistic thanks to the interfacing of their DRTS with an industrial product. At the same time, the collaboration with experienced industrial personnel ensured a safe and fast interfacing of both devices, since every team was able to prepare the equipment in its area of expertise, thus shortening the overall implementation time.
From the point of view of the industrial partner, the JaNDER implementation has provided insight into state-of-the-art approaches to smart grid applications. At the same time, the introduction of their equipment into such an application has opened a new area for future research and investigation.
Since most commercial real-time simulators have a considerable capital cost, from the overall project perspective JaNDER can promote collaboration between different infrastructures to expand their real-time testing capabilities without spending significant resources on equipment upgrades, and it can provide considerable benefits to both commercial and academic fields by connecting geographically distributed real-time simulators.
On the other hand, since the industrial protocol IEC 61850 is widely supported by the JaNDER platform, studying this communication alongside the electrical grid in real-time can provide valuable insight into the impact of delays and the interfacing issues of both networks.
Finally, connecting power components (e.g., inverters) located in different infrastructures through this platform can also be used to study the latency impact and to test slowly varying phenomena of these components. Given that the platform has proven to be a reliable and valuable tool with no critical latency, it is promising to further exploit it to expand real-time testing without affecting the stability of the experiments.

Conclusions
The integration of distributed renewable energy resources into the grid is increasing its complexity; therefore, new flexible system test methods are needed to study their impact and to ensure their efficiency and security. A novel HIL setup, based on remote hardware integration, is presented in this paper. This new testing setup is made possible by the real-time communication tool developed in the context of the ERIGrid project, called JaNDER.
To demonstrate this new testing setup, a real test case has been implemented, interfacing an industrial device, an OLTC controller, with a remote DRTS running an LV network model. The test results show that the latency of the data exchange due to JaNDER is lower than 300 ms. Hence, this platform is suitable for testing remote hardware and software and for analyzing slowly varying phenomena. Indeed, the latency does not affect the stability of the experiments.