A Novel Methodology for the Scalability Analysis of ICT Systems for Smart Grids Based on SGAM: The InteGrid Project Approach

Abstract: Information and Communication Technology (ICT) infrastructures are at the heart of emerging Smart Grid scenarios with high penetration of Distributed Energy Resources (DER). The scalability of such ICT infrastructures is a key factor for the large-scale deployment of the aforementioned Smart Grid solutions, which cannot be ensured by small-scale pilot demonstrations. This paper presents a novel methodology, developed in the scope of the H2020 project InteGrid, that enables the scalability analysis of ICT infrastructures for Smart Grids. It is based on the Smart Grid Architecture Model (SGAM) framework, which enables a standardized and replicable approach. The approach consists of two consecutive steps: a qualitative analysis that aims at identifying potential bottlenecks in an ICT infrastructure, and a quantitative analysis of the identified critical links under stress conditions by means of simulations, with the aim of evaluating their operational limits. In this work the proposed methodology is applied to a cluster of solutions demonstrated in the InteGrid Slovenian pilot. This pilot consists of a Large Customer Commercial Virtual Power Plant (VPP) that provides flexibility in medium voltage for tertiary reserve and a Traffic Light System (TLS) to validate such flexibility offers. This approach creates an indirect Transmission System Operator (TSO)-Distribution System Operator (DSO) coordination scheme.


Introduction
The scalability of Information and Communication Technology (ICT) systems is a key factor for the large-scale deployment of Smart Grid solutions. Indeed, the successful demonstration of a pilot project is not a guarantee that the systems will have a sufficient level of performance under different boundary and stress conditions. Hence, it is fundamental to evaluate how the technological choices, i.e., the different interconnections and devices composing the ICT architecture, are affected in more challenging conditions than in the pilot. This allows the design of scalable systems, provided that proper conditions and assumptions are given, since the architecture has been appropriately analyzed prior to the scaling process. As a result, it also reduces the uncertainty and risk associated with the investment required to undertake such large-scale deployments, since it helps ensure that the technological decisions made and deployed in today's context remain valid until the infrastructure is replaced in the future (e.g., in the case of smart metering, typically over a 15-year period).
The structure of the paper is as follows. Section 1 provides an introduction and motivation to the work presented in the paper. Section 2 presents the general methodology defined for the ICT scalability analysis. Thereafter, its application to the aforementioned scenario (Large Customer Commercial VPP) is discussed in two parts. Firstly, the qualitative analysis is presented in Section 3 by providing the potential network architecture bottlenecks. Secondly, in Section 4, such bottlenecks are stressed and their scaling behavior is quantified as part of the quantitative analysis. Finally, Section 5 draws the main conclusions from the study and proposes future work.

General Methodology
This section provides an overview of the proposed two-step methodology. The methodology uses the SGAM as the main source of information. It enables modeling any Smart Grid scenario in a standard way, including enriched data related to the component, communication and information interoperability SGAM layers.
The qualitative analysis used in this methodology is based on a complete analysis of the network architecture of the considered scenario through its SGAM representation. In the qualitative analysis, a set of attributes is selected and used for characterizing the architecture. With this characterization, critical components (devices which are included in the architecture) and links (connections between such devices) can be identified and fed to the quantitative analysis.
The quantitative analysis focuses on stress simulations applied to such critical links in order to evaluate their behaviour when the system is scaled up. The system scaling dimensions considered are:
• Increment of the number of nodes (amount of devices)
• Increment of the frequency of information exchange (data and commands)
• Increment of the measurements taken in each sampling period, which is especially important in the case of smart meters

Figure 1 shows a condensed visual aid to understand the methodology steps, along with the qualitative and quantitative sub-steps developed for the analysis of the ICT scalability of Smart Grid scenarios. These steps are detailed hereafter in this section, and the results obtained when applying them to the scenario analyzed in this paper (Large Customer Commercial VPP) are presented in Sections 3 and 4.

The SGAM developed for the Large Customer Commercial VPP scenario analyzed in this paper has the following components. The TSO is represented by the TSO simulator (bidding) and the P/f controller used for power and frequency control, located at the left side of the SGAM. The DSO spans the entire Distribution domain of the SGAM and part of the Customer and DER domains. In the Distribution domain, the DSO is reflected by its own internal tools, such as the AMI Head End, the Meter Data Management Database and the Supervisory Control And Data Acquisition (SCADA) system, and by tools provided by third parties, such as the Load/Renewable Energy Sources (RES) DSO forecasting system, the Traffic Light System (TLS) and the Multi Period Optimal Power Flow/Power Flow (MPOPF/PF). At the field and station SGAM zones, the components used for data acquisition are located, such as the Remote Terminal Unit (RTU) (On Load Tap Changer (OLTC)), the Smart Meter @ Primary Smart Substation (PSS), the Smart Meter per feeder, the Smart Meter @ Secondary Smart Substation (SSS) and the Data concentrator.
The DSO also acquires the field data from the DER and Customer Premise domains through the Smart Meters (SM) per DER and per customer. With regard to the VPP, it is represented as a central entity by the Commercial VPP System orchestration environment and all its dependencies, such as the Price forecasting system, the Time Series Database, the Load forecasting system and the connection to the Charging Point Operator (CPO) for handling Electric Vehicles (EV). However, the VPP also has a presence in the DER and Customer Premise domains, as it needs to acquire data and control the flexibilities. In order for the TSO, DSO and VPP to exchange information, they use the GM-hub [20] platform, a solution specially developed for InteGrid, which acts as the main communication hub for data sharing. This SGAM is represented in Figure 2.

To perform the assessment, a SGAM template has been used with the inclusion of a numeric layer mapped onto an information table. These extensions are needed because the methodology requires, at a later stage, stress simulations. Hence, the information table is created while the representations are being drawn and the information is being collected. An example of the information table is presented in Table 1.

The two steps of the qualitative analysis are detailed below. As previously stated and shown in Figure 1, the first step in the qualitative analysis is to classify a set of attributes, obtained through a questionnaire for each considered scenario, by order of interest and impact:

• The interest denotes whether the attribute shall be considered in the analysis of the scenario or not
• The impact denotes the weight of this attribute in the qualitative analysis

The aforementioned questionnaire is filled in by each of the stakeholders involved in a given scenario. These stakeholders are a DSO, a VPP, a communication platform and two service providers, making a total of 5 actors. The questionnaire is completed using a 1-3 scale, according to the rating process shown in Table 2. The results are not shared among the involved stakeholders to avoid influencing their scores. The attributes to be classified comprise a total of 13 features, grouped in three main categories. These attributes are common to any scenario analyzed using the proposed methodology, having been carefully selected to provide the required generalization and taking into account both the academic and the industrial points of view [3,6,8,21-23]. Nevertheless, the obtained results might of course differ from one scenario to another.
In addition, it is worth remarking that, since the ICT scalability analysis deals not only with the technical specifications of components (devices) but also with links (how devices are connected), the attributes must be filtered according to their relevance to components, links or both. The categories, the attributes and the relevance of each attribute towards components and links are collected in Table 3.

Based on the results obtained in the classification by interest and impact, a weighting process takes place to decide whether an attribute is selected for the quantitative analysis or not, and what role it is going to play in such an analysis. This is necessary since the efforts have to be focused on those attributes which are important to the parties involved. The weighting process is done by giving each of the parties involved a weight based on their own importance in the system, as it is considered, for example, that the DSO must have a greater importance than the technology providers. Correlating these weights with the results obtained, it is then possible to determine, for each scenario under study, which attributes will be considered in the next sub-step, as well as the importance of these attributes.
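As an illustration, the weighting process described above can be sketched as follows. This is a minimal sketch: the stakeholder weights, attribute names, 1-3 ratings and the selection threshold are illustrative assumptions, not the project's actual values.

```python
# Sketch of the attribute weighting step: each stakeholder rates every
# attribute (interest and impact, 1-3 scale) and the ratings are combined
# using per-stakeholder weights. Attributes whose weighted interest falls
# below a threshold are discarded. All names and numbers are illustrative.

STAKEHOLDER_WEIGHTS = {"DSO": 0.35, "VPP": 0.25, "Platform": 0.20,
                       "ServiceA": 0.10, "ServiceB": 0.10}

def weighted_attribute_scores(ratings, weights=STAKEHOLDER_WEIGHTS):
    """ratings: {attribute: {stakeholder: (interest, impact)}}
    -> {attribute: (weighted_interest, weighted_impact)}."""
    out = {}
    for attr, by_actor in ratings.items():
        wi = sum(weights[a] * r[0] for a, r in by_actor.items())
        wp = sum(weights[a] * r[1] for a, r in by_actor.items())
        out[attr] = (wi, wp)
    return out

def select_attributes(scores, interest_threshold=1.5):
    """Keep attributes whose weighted interest reaches the threshold;
    the weighted impact becomes the attribute weight for the next step."""
    return {a: impact for a, (interest, impact) in scores.items()
            if interest >= interest_threshold}
```

With such a scheme, an attribute rated with minimal interest by every actor (as happened with Tech generation in this study) ends up below the threshold and is discarded.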
The second sub-step in the qualitative analysis aims to provide an architecture characterization based on the current status, in addition to highlighting potential critical components and links within the architecture. To provide such outputs, the analysis incorporates elements from graph theory [24]: the SGAM links are considered the edges, the SGAM components are considered the nodes, and weights are applied through the internal scores and the attribute impact (the result of the previous step). The internal scores use a 1-5 scale, and each component and link has to be granted an internal score with respect to each selected attribute. An example of the 1-5 scale scoring is presented in Table 4.
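A minimal sketch of this graph-based characterization is given below, under the assumption of a simple weighted-average scoring rule; the attribute weights, element scores and flagging threshold are illustrative, not the values used in the project.

```python
# Sketch of the architecture characterization: the SGAM architecture is
# treated as a graph (components = nodes, links = edges); each element
# receives a 1-5 internal score per selected attribute, which is combined
# with the attribute impact weights from the previous step. Elements with
# the lowest combined scores are flagged as potential bottlenecks.
# The weighted-average rule and the threshold are illustrative choices.

def combined_score(internal_scores, attribute_weights):
    """internal_scores: {attribute: 1-5}; attribute_weights: {attribute: w}.
    Returns the weighted average score of one component or link."""
    total_w = sum(attribute_weights[a] for a in internal_scores)
    return sum(internal_scores[a] * attribute_weights[a]
               for a in internal_scores) / total_w

def flag_critical(elements, attribute_weights, threshold=2.5):
    """elements: {name: {attribute: score}} -> sorted list of
    (name, score) pairs whose combined score falls below the threshold."""
    scored = {n: combined_score(s, attribute_weights)
              for n, s in elements.items()}
    return sorted((n, s) for n, s in scored.items() if s < threshold)
```

Elements flagged this way correspond to the low-score components and links that the methodology forwards to the quantitative analysis.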
All the actors or stakeholders in each scenario under study are responsible for providing the scores for all the components and links they own in the architecture, according to their current status of deployment in the field. The scores are not shared among the companies involved in the implementation of the demo associated with the considered scenario, again to avoid any possible bias. Based on all the weights, the weighting process results in a list of potentially critical components and links with regard to the scalability of the ICT network, which can then be analyzed. From this analysis, the critical links are fed into the quantitative analysis for stress simulations.

The second step of the global methodology is the quantitative analysis. It takes as input the results obtained from the qualitative analysis. Again, this step is subdivided into two sub-steps. Firstly, based on the information from the qualitative analysis (the list of potentially critical components and links and their relevant information), link models are developed for the stress simulations in the selected simulation environment (OMNeT++ [25] in the case of this work, although other simulation frameworks would also fit the proposed methodology). The models consider the protocol stack used by each link to transfer information from one component to another, whereas the technical characteristics of the components and other boundary conditions are taken into account for the end-to-end points. Secondly, these models are simulated using best-case and worst-case scenarios based on the link owner's expectations, when certain parameters, such as the number of components, the data size or the frequency of information exchange between components, are scaled up.
As already mentioned, in this work these models and their subsequent simulations use OMNeT++, as it is a customizable and well-known network simulator widely used in previous research work, such as [26-28].

Qualitative Analysis
In this section, the results from the qualitative analysis of the proposed methodology, applied to the specific scenario analyzed in this paper and represented in Figure 2, are presented with the aim of better illustrating how the methodology works and the benefits it provides. The scenario represents a Large Customer Commercial VPP that is being deployed and tested in the InteGrid Slovenian pilot.

Classification of Attributes
This first sub-step depends on two main factors to filter the attributes, i.e., which actors are involved in the scenario under study and what their weight in the demonstration is. This information (the actors and their respective weight factors) is summarized in Table 5.

Based on the actors involved and the distribution (weight) they have in the system, the first questionnaire is filled in, where each actor evaluates each attribute in terms of interest, which determines whether the attribute is considered, and impact, which determines the weight used in the Architecture Characterization-Scores step explained in the next subsection. These results are collected in a compact manner and presented in Table 6. From the results obtained in Table 6, Tech generation is completely discarded for the entire analysis. The reason is that the technologies used are well-established and already offer sufficient support and maintenance, hence the lack of interest towards this attribute.

Architecture Characterization-Scores
The second sub-step of the qualitative analysis deals with the Architecture Characterization. Here, components and links are granted scores by each responsible actor with regard to the considered attributes, based on their current status. These scores are later processed to identify the critical components and links within the architecture with respect to the attribute categories.
The scores obtained for each component are represented using spider-web diagrams. These scores aggregate the total score in each category. Thus, the potential bottlenecks are identified as those components or links which have the lowest scores. It is also interesting to review those components or links which have the highest scores: they are considered to be highly complex or important components or links, which can also result in potential bottlenecks due to their complexity when the system scales in size. These scores are included in Figures 3 and 4.

Outputs of the Qualitative Analysis
In this section the output scores are analyzed in order to identify the potential critical components and links, as well as to provide the quantitative analysis process with the respective link information for its subsequent stress simulation tests.
The main dimensions considered in the qualitative analysis for potential bottlenecks are:
• Increment of data sources due to the new number of devices (e.g., new flexibilities in the system).
• Bigger data size (higher granularity of the data) from the data sources.

From these 11 potential critical components, all can be considered minor constraints. For those components located in the upper part of the SGAM (see Figure 2), it is clear that their potential issues can be solved either by increasing the available computational resources through an investment if needed (e.g., the price forecasting system and the Load/RES DSO forecasting system) or by being aware of the complexity of managing a subsystem (e.g., the commercial VPP system orchestration environment), which entails a time investment, as the addition of new devices is not a plug-and-play operation.
For those components located in the lower part of the SGAM (see Figure 2), their issues are again related to the computational resources. This does not pose a real problem since, once the architecture is understood, the technical capabilities are already well-dimensioned for the scope of use of such devices. However, since the number of data sources (Smart Meters) tends to increase as new flexibilities are foreseen in the system, new measuring points and new customers could potentially be included, and the technical junction nodes (data concentrators and RTUs) should be dimensioned for the data they will handle. In the other cases, where the component is an actuator (e.g., DER control, Flex control) driven by an incoming signal, since it is a 1-to-1 relation with the DER-Customer flexibility, no significant processing power is required and therefore no scalability constraints are foreseen.
When analyzing the link scores obtained in the 3 categories represented in Figure 4, 19 links result in potential critical links. Nonetheless, only 9 links are considered worth exploring further in the quantitative analysis. The remaining ones are discarded for the following reasons:
• Being point-to-point communications with a physical connection interface whose capacity can be fully used.
• Having a calculated latency that would only require a small upgrade of the communication technology (from GPRS to 4G or upcoming 5G) if the required frequency of exchange became high enough for the response times (real operation for control purposes of asset steering).
• Being internal actor networks (intranets), which can be easily optimized at any time if needed.
Considering the aforementioned points, the links listed in Table 7 are considered to be critical, due to being the connections between networks and their devices to data concentrators, and thus need to be further evaluated in the quantitative analysis. Regarding links 9, 9', 10 and 10', it should be noted that in-field smart meters can communicate either directly with the backend of the DSO by means of GPRS (links 9' and 10') or through a data concentrator (links 9 and 10), depending on whether a monolithic or a hierarchical communication architecture is used, respectively, which in turn may depend on many factors, such as the topology of the power distribution infrastructure (e.g., rural vs. urban scenarios) [22]. If a hierarchical architecture is used (i.e., data concentrators are in place), Narrowband PowerLine Communications (NB-PLC) are the most widely used solution in the so-called 'last mile' of Advanced Metering Infrastructures (AMI) [29], G3-PLC being the one used in the Large Customer Commercial VPP scenario analyzed in this paper. In this latter case, the data concentrators aggregate the data coming from the smart meters and send them to the backend of the DSO through GPRS.

Table 7. Summary of potential critical links to be considered in the quantitative analysis for the Large Customer Commercial VPP scenario.

Quantitative Analysis
This section presents the quantitative analysis carried out for the critical links identified in the qualitative analysis explained in Section 3, which is focused on the Large Customer Commercial VPP scenario that is demonstrated in the InteGrid Slovenian pilot.

Overview of Simulation Tools and Related Work
Although experimental tests in real environments, such as mock-ups or laboratories, present benefits such as the validity of the obtained results if the tests are properly carried out [30-32], simulation tools represent a powerful, flexible and cost-effective solution to quantitatively assess scalability issues in Smart Grid infrastructures. There are a number of advantages in taking a simulation approach to study these kinds of scenarios, such as cost reduction or the possibility of evaluating several alternative solutions for a given scenario at once.
Since the Smart Grid brings power and ICT together, such simulation tools need to consider the effects of both dimensions [33]. Currently, different approaches can be found in the literature to tackle this issue, combining the effects of both dimensions to different extents:
• Decoupled simulations. This approach is based on simulating each part of the problem independently using commercial or validated software. On the one hand, this approach uses appropriate state-of-the-art simulations for each dimension. On the other hand, it is limited, since it is difficult to relate the outputs of the two simulations. Some examples of this approach available in the literature are [34], where the performance of a wireless communication architecture for energy efficiency and Distributed Generation integration is evaluated taking into account the characteristics of the underlying power infrastructure [35], or the evaluation of the performance of the NB-PLC technology PRIME (PoweRline Intelligent Metering Evolution) using the well-known simulator SimPRIME [36] in different Smart Grid scenarios, such as Advanced Metering Infrastructures [37-41] or Demand Response [42]. All these studies were carried out using the OMNeT++ communication network simulator.

• Monolithic simulations. A straightforward alternative to improve the previous design is to build a single simulation model that includes all the effects of both the telecommunication and the power system parts of the problem. Although this would provide more realistic results, the creation of such software is complex and time-consuming. One specific aspect that makes this task very complex is the fact that the effects to be modeled in each part of the problem require a different simulation approach: whereas the telecommunication simulations are event-based, the power system simulations are based on the solution of transients through differential equations. The reader can refer to some monolithic simulators in the literature, such as the Electric POwer and Communication syncHronizing Simulator (EPOCHS) [43] or the Global Event-driven CO-simulation framework (GECO) [44].
• Co-simulation. An alternative to the previous approaches is co-simulation. The basic idea behind it is to use specific simulators for each part of the problem and interconnect them using some kind of standardized solution. This adds some computational complexity but provides more realistic results, since each simulator is fed back with the partial results of the other and vice versa. The Virtual Grid Integration Laboratory (VirGIL) [45] is an example of this approach that uses the Functional Mock-up Interface (FMI) [46] to interconnect three simulators: PowerFactory, OMNeT++ and Modelica. This project was followed up by the Cyber-Physical Co-Simulation Platform for Distributed Energy Resources in Smart Grids (CyDER) [47]. Moreover, one additional advantage of this type of solution is the possibility of including Hardware-in-the-Loop (HiL) in the simulation, as reported in [48]. A complete overview of other research initiatives is available in [49].
The work presented in this paper falls within the first approach (decoupled simulations), since the goal is to evaluate the performance of certain critical communication links of given Smart Grid scenarios under stress conditions, in order to identify whether the available infrastructure is prepared for future challenging situations or whether new planning and investments are required. In order to obtain the boundary conditions from the power perspective, the qualitative analysis is carried out instead of power simulations.
The following subsections describe how the models have been implemented, the range of parameters used in the simulations and the corresponding results and comments.

Considered Scenarios
The potential bottleneck links coming from the qualitative analysis are grouped based on their protocol stack, resulting in the following scenarios (also listed in Table 8), which are simulated as part of the quantitative analysis to evaluate their performance under different conditions:
1. Scenario A: The communication layer-stack in this scenario is set to DLMS/COSEM messages being transmitted over TCP/IP/GPRS.
2. Scenario B (links 9 and 10): The communication layer-stack in this scenario is set to DLMS/COSEM messages being transmitted over G3-PLC.
3. Scenario C: The communication layer-stack in this scenario is set to DLMS/COSEM messages being transmitted over TCP/IP/xDSL. This technology was included in the simulation as an alternative to GPRS (e.g., for link 11'). It can also be seen as a replicability and scalability analysis for the links in Scenario A.
The two possible dimensions for scaling up are either to generate more data, which can be done by increasing the number of components or the amount of measurement data gathered in each period, or to increase the frequency of exchange, i.e., moving towards real-time communication. Combining these two dimensions, the best-case and worst-case situations, shown in Table 8 and explained in detail in the next subsections, are obtained.
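A small sketch of how such a grid of stress situations can be enumerated is given below; the node counts and polling periods are illustrative placeholders, not the values of Table 8.

```python
# Sketch of enumerating stress cases by combining the two scaling
# dimensions (amount of data generated vs. frequency of exchange).
# The concrete parameter values are illustrative assumptions.
from itertools import product

NODES = (10, 100, 1000)      # network sizes considered
POLL_PERIOD_S = (900, 60)    # e.g., 15-min metering vs near-real-time

def stress_grid():
    """All (nodes, period) combinations; the extremes of the ratio
    nodes/period give the best-case and worst-case situations."""
    grid = list(product(NODES, POLL_PERIOD_S))
    best = min(grid, key=lambda c: c[0] / c[1])   # fewest nodes, slow polling
    worst = max(grid, key=lambda c: c[0] / c[1])  # most nodes, fast polling
    return grid, best, worst
```

Only the extreme cases of such a grid need to be simulated in detail, which is consistent with the best-case/worst-case focus of the following subsections.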

Simulation Modeling
Modeling represents a very important stage when it comes to simulations. The main reason is that the relevance of the results obtained from the simulations depends on how well the underlying model represents reality or covers the most important features of the target problem. In this regard, it goes without saying that perfect models do not exist. In addition, very complex models that try to include many features do not guarantee better results in every case, with well-known principles such as Occam's razor or the KISS principle even suggesting the opposite [41].
Regarding the modeling of the application protocol used in the identified critical links, DLMS/COSEM will be modeled based on the size of its payload, since this is the main feature that affects the target simulations. The request/response mechanism will also be modeled. Thus, based on previous research work [50], the size of the requests will be set to 71 Bytes and that of the responses to 1576 Bytes.
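Given these message sizes, the application-level traffic generated in one polling cycle can be estimated with a short sketch; the helper and its parameters are illustrative, with only the 71-Byte and 1576-Byte sizes coming from the text.

```python
# Sketch of the application-level DLMS/COSEM traffic model: the protocol
# is reduced to its request/response payload sizes so the per-cycle
# traffic volume on a link can be estimated. The helper is illustrative.

REQUEST_BYTES = 71      # DLMS/COSEM request size, per [50]
RESPONSE_BYTES = 1576   # DLMS/COSEM response size, per [50]

def cycle_traffic_bytes(n_meters, reads_per_meter=1):
    """Total downlink/uplink application bytes in one polling cycle,
    assuming one request/response pair per read."""
    down = n_meters * reads_per_meter * REQUEST_BYTES
    up = n_meters * reads_per_meter * RESPONSE_BYTES
    return down, up
```

The asymmetry between the two directions (responses are roughly 22 times larger than requests) is also what later explains the higher uplink channel usage observed in the results.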
For TCP/IP, the implementation available in the OMNeT++/INET Framework will be used, since it is a consolidated and widely adopted implementation. In the case of TCP, the Maximum Segment Size (MSS) needs to be configured. Based on previous research studies, a low MSS (e.g., 512 Bytes [51] or 413 Bytes [52]) is appropriate for interactive applications, whereas a high MSS (e.g., 1400-1600 Bytes [53]) is appropriate for bulk data exchange applications. As a result, in this work both low and high MSS values will be considered in order to assess the impact of this parameter on the obtained results.
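The effect of the MSS on protocol overhead can be illustrated with a short sketch, assuming minimal 20-Byte TCP and IPv4 headers and ignoring TCP options, handshakes and acknowledgments.

```python
# Sketch of the MSS/overhead trade-off: a larger MSS means fewer
# segments and a better payload-to-header ratio, which shortens the
# effective transfer. Minimal TCP/IPv4 headers (20 + 20 Bytes) are
# assumed; options, handshake and ACK traffic are not modeled.
import math

TCP_IP_HEADER_BYTES = 40

def wire_bytes(payload_bytes, mss):
    """Total bytes on the wire for one application message split into
    MSS-sized segments, each carrying a TCP/IP header."""
    segments = math.ceil(payload_bytes / mss)
    return payload_bytes + segments * TCP_IP_HEADER_BYTES
```

For the 1576-Byte DLMS/COSEM response, a 413-Byte MSS requires four segments whereas a 1460-Byte MSS requires only two, which is the payload-to-header effect observed later in the RTT results.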
GPRS represents the most challenging part of the model. There are some accurate implementations available for OPNET; however, this simulation tool is licensed. In the case of OMNeT++, [54] presents a proposal for this purpose, but it is deprecated. Therefore, since in the GPRS Radio Access Network (RAN) Frequency Division Multiple Access (FDMA) and Time Division Multiple Access (TDMA) are combined, resulting in virtually dedicated channels, this technology has been approached in this paper by means of Virtual Channels or Local Loops (LL) that connect each user to an aggregator, as shown in Figure 5a. From the aggregator to a RouterModem, a Back-Bone (BB) optical link is assumed. The following parameters have been considered and set for each of these virtual links:
• Uplink data rate: 26.8 kbps [55].
• Probability of error based on the theoretical availability of the channel: Pe = 0.001 [56].
• Availability: 97.07%, based on [22].
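Under these parameters, a back-of-the-envelope sketch of the expected transmission time over one such virtual link could look as follows; the retransmission-in-expectation model and the way availability scales the time are simplifying assumptions of this sketch, not part of the paper's simulation model.

```python
# Sketch of the GPRS virtual-channel (Local Loop) model: each meter gets
# a dedicated 26.8 kbps uplink with a packet error probability and an
# availability factor. Errors are handled in expectation (geometric
# retransmissions); availability is applied as a simple scaling factor.
# Both modeling choices are illustrative simplifications.

UPLINK_BPS = 26_800      # GPRS uplink data rate [55]
P_ERROR = 0.001          # packet error probability [56]
AVAILABILITY = 0.9707    # channel availability [22]

def expected_tx_time(payload_bytes, p_err=P_ERROR, rate_bps=UPLINK_BPS,
                     availability=AVAILABILITY):
    """Expected time to deliver one packet: single-shot transmission
    time, inflated by retransmissions (1/(1-p)) and by availability."""
    t_once = payload_bytes * 8 / rate_bps
    return t_once / (1 - p_err) / availability
```

For the 1576-Byte DLMS/COSEM response, this rough model already yields close to half a second per poll on the Local Loop, which hints at why GPRS, and not the optical Back-Bone, dominates the delay in Scenario A.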
In the case that xDSL is in place as an alternative to GPRS (e.g., in link 11'), the modeling is also quite complex, since the performance of the technology depends on many factors, such as the length of the cables or the noise mask. Thus, the simplified model in Figure 5c is proposed: dedicated Local Loops for each user up to the Digital Subscriber Line Access Multiplexer (DSLAM), followed by a Back-Bone optical link. For the uplink, three data rates have been considered depending on the distance from the subscriber to the DSLAM. Regarding the probability of error, a packet error rate of 10^-5 has been assumed for the Local Loop (LL) and of 10^-9 for the backbone.
With respect to G3-PLC, the technology defines four communication modes depending on the conditions of the channel. Three of them use a Reed-Solomon and a convolutional encoder to implement Forward Error Correction (FEC) techniques, together with three differential phase modulations: Binary Phase Shift Keying (DBPSK), Quaternary Phase Shift Keying (DQPSK) and 8 Phase Shift Keying (D8PSK), respectively. These three communication modes are referred to as "Normal". In addition, a fourth communication mode is defined by preceding the Reed-Solomon encoder with a Repetition Code block, which increases the robustness of the communication, and uses only DBPSK modulation; this communication mode is known as "Robust". The communication modes included in the model used in this paper are Robust, Normal-DQPSK and Normal-D8PSK, going from more robust (in order to consider noisy scenarios) to less robust (in order to consider less noisy scenarios). Figure 6 shows the performance, in terms of Bit-Error Rate vs. Signal-to-Noise Ratio (BER vs. SNR), achieved by each of the communication modes. The price to pay for robustness is a reduction in the transmission rate, as can be seen in Table 9, where the data-bit rates of each mode are shown.
From a modeling point of view, the implementation of the G3-PLC technology has been done using the Ethernet model available in OMNeT++/INET, since both technologies are based on a shared medium and use Carrier-Sense Multiple Access (CSMA) as a contention mechanism. In addition to this, packet sizes, transmission rates and error probabilities have been modified to correspond to the case of G3-PLC. A snippet of the model developed in OMNeT++ is shown in Figure 5b, where all nodes use the same transmission medium to communicate.
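The robustness/throughput trade-off of the modes can be sketched as follows; note that the data rates below are placeholders standing in for the values of Table 9 (not reproduced here), so only the ordering of the modes is meaningful.

```python
# Sketch relating the G3-PLC communication mode to transfer time on the
# shared medium: more robust modes trade data rate for noise immunity.
# The rates below are ILLUSTRATIVE placeholders, not the Table 9 values.

MODE_RATE_BPS = {            # from least to most robust
    "Normal-D8PSK": 34_000,
    "Normal-DQPSK": 23_000,
    "Robust": 5_600,
}

def poll_cycle_time(n_meters, response_bytes, mode):
    """Lower-bound time to poll all meters sequentially on the shared
    PLC channel (transmission time only; no CSMA contention, no errors)."""
    return n_meters * response_bytes * 8 / MODE_RATE_BPS[mode]
```

Because the channel is shared, the cycle time grows linearly with the number of meters regardless of the mode, which anticipates why only the fastest mode meets the worst-case time requirements in Scenario B.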

Simulation Setup
A number of different scenarios have been simulated using the models detailed in Section 4.3, varying a set of parameters in order to cover a wide range of situations. The parameters considered in the simulations are:
• The number of nodes in the network. In order to model different network sizes, the simulations consider 10, 100 and 1000 nodes.

For all the different scenarios, 100 simulations (i.e., 100 runs) have been performed, each of them running 1000 connections from all nodes in the network.
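The run-aggregation scheme (100 runs of 1000 connections each) can be sketched as follows, with a synthetic stand-in for the actual OMNeT++ runs; the RTT distribution used is purely illustrative.

```python
# Sketch of how per-run results are aggregated into reported KPIs: each
# run yields one set of measurements, and the mean over all runs (e.g.,
# the "RTTAll" figures) is then computed. The synthetic RTT generator
# below only stands in for a real simulation run.
import random

def aggregate_runs(n_runs, run_fn):
    """Execute run_fn(seed) n_runs times and return the mean of the
    per-run means, mimicking the 100-run averaging of the paper."""
    means = []
    for seed in range(n_runs):
        samples = run_fn(seed)
        means.append(sum(samples) / len(samples))
    return sum(means) / len(means)

def fake_rtt_run(seed, n_connections=1000, base_rtt=0.5):
    """Synthetic stand-in for one simulation run of 1000 connections."""
    rng = random.Random(seed)
    return [base_rtt + rng.uniform(0, 0.1) for _ in range(n_connections)]
```

Averaging over independent seeded runs is what makes the reported mean values statistically stable across the 100 repetitions.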
In addition, from the simulation, the following Key Performance Indicators (KPI) have been studied:

• Percentage of link usage.

For the sake of clarity, this paper focuses mainly on the best and worst cases for each simulated scenario. The details of the parameters for each case are shown in the corresponding columns of Table 8. The mean percentage of channel usage for Scenario A can be seen in Figure 7. As shown, the Back-Bone can easily handle the traffic of the scenario, due to the high transmission speeds provided by the optical link present between the concentrator and the RouterModem. For this part of the link there is no need for new infrastructure.
Nevertheless, it is worthwhile to mention that a dedicated cellular infrastructure is assumed. If a shared cellular infrastructure were in place, these simulations would allow the telecom operator to assess the percentage of its infrastructure that would be required to deliver this service.
When considering the Local-Loop usage, it can be seen that the usage decreases with the number of nodes, showing the opposite behavior compared with the Back-Bone. The reason for this is that, since the polling of the nodes is done in a sequential manner (i.e., a new TCP connection is not requested until the previous one has been correctly closed), the time between two polls of a given node becomes longer as the network grows. Since the corresponding Local Loop is not used between the two polls, its usage drops.

Figure 8 shows the dependency of the RTTAll (mean values over all simulations) on the MSS used. While it is obvious that the RTTAll increases with the number of nodes, a considerable amount of time can be saved by increasing the MSS to its highest value. This dependency is explained by the increased payload-to-header ratio, which reduces the effective message length and, thus, its transmission time.
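The drop in Local-Loop usage with the number of nodes under sequential polling can be captured with a one-line model, an idealization that ignores connection setup and error recovery.

```python
# Sketch of Local-Loop duty cycle under sequential polling: each loop
# only carries traffic during its own node's poll, so its busy fraction
# is roughly poll_time / cycle_time, i.e., about 1/N when polls run
# back-to-back. Numbers used here are illustrative.

def local_loop_usage(n_nodes, poll_time_s, cycle_period_s=None):
    """Fraction of time one node's Local Loop is busy. If no fixed
    cycle period is given, polls run back-to-back (cycle = N * poll)."""
    cycle = cycle_period_s if cycle_period_s else n_nodes * poll_time_s
    return poll_time_s / cycle
```

This inverse dependence on N is the behavior visible in the simulation results: ten times more nodes means roughly a tenth of the per-loop usage, while the shared Back-Bone sees the opposite trend.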

Scenario B (Links 9 and 10)
The main difference with respect to the previous scenario is that communications are now performed over a shared medium, i.e., only one node can transmit at a given instant. This produces much higher delays, even when compared with GPRS (which provides only half the transmission speed of Normal-D8PSK in G3-PLC), as shown in the top graph of Figure 9. According to the figure, the time requirements for the worst case are only met when transmitting at the highest speed; that is, the RTTAll meets the requirements only for the highest transmission speed.

One additional consequence of the shared medium is that the channel occupancy is, once again, much higher than in the other scenarios, since the same channel is used regardless of which node is being polled. This was not the case in the other scenarios, where each node had its own dedicated Local-Loop and, while one node was being polled, the Local-Loops of the other nodes were unused. Moreover, as could be expected, the two bottom graphs in Figure 9 show that the channel is more heavily used in the up-link direction than in the down-link. The reason is the difference in application message size, as mentioned in Section 4.3.

As in the previous scenario, the Back-Bone optical technology is able to cope with the traffic requirements according to the mean usage values shown in Figure 10, so there is no need for new infrastructure in this part of the telecommunication network.
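The up-link/down-link asymmetry follows directly from the message sizes: on a shared channel, occupancy is proportional to the bytes carried per unit of time in each direction. A minimal sketch, assuming illustrative message sizes, poll rate and channel speed (the rate below is only an order of magnitude for G3-PLC, not a specification value):

```python
def channel_occupancy(msg_bytes, rate_bps, polls_per_second):
    """Fraction of time the shared channel is busy carrying one message of
    msg_bytes per poll at rate_bps, given polls_per_second poll cycles."""
    tx_time = (msg_bytes * 8) / rate_bps  # seconds on air per message
    return polls_per_second * tx_time

# Illustrative sizes: short down-link request, larger up-link meter reading.
REQUEST_BYTES = 100       # assumption
RESPONSE_BYTES = 1000     # assumption
RATE_BPS = 33_400         # G3-PLC-like order of magnitude - assumption

down = channel_occupancy(REQUEST_BYTES, RATE_BPS, polls_per_second=2)
up = channel_occupancy(RESPONSE_BYTES, RATE_BPS, polls_per_second=2)
# up > down: the larger response messages dominate the shared channel.
```

With these (assumed) sizes the up-link occupancy is ten times the down-link occupancy, qualitatively matching the two bottom graphs of Figure 9.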

When looking at the Local-Loop links, it can be seen that the usage is much smaller than in Scenario A. The main reason is that the Physical-Layer technology used here (xDSL) provides much higher transmission rates than the one in Scenario A (GPRS). Indeed, xDSL is designed for high-bandwidth applications, so using it for sending metering data represents an under-utilization of this kind of technology.
With respect to the RTTAll values obtained in this scenario, it can be seen that low transmission speeds on the Local-Loop degrade performance rapidly, while TCP fragmentation does not seem as significant as in Scenario A (GPRS). These results are shown in Figure 11.
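The diminishing effect of fragmentation at higher link speeds can be checked with simple arithmetic: the per-segment header overhead is a fixed number of extra bytes, so the absolute time it costs shrinks as the rate grows. A sketch, assuming a 40-byte TCP/IP header and link rates that are only orders of magnitude (GPRS-like vs. xDSL-like), not measured values:

```python
import math

HEADER = 40  # TCP/IP header bytes per segment - simplifying assumption

def tx_time(app_bytes, mss, rate_bps):
    """Transmission time of one application message, counting the
    per-segment header overhead introduced by TCP fragmentation."""
    segments = math.ceil(app_bytes / mss)
    total_bytes = app_bytes + segments * HEADER
    return total_bytes * 8 / rate_bps

# Same 4000-byte payload, small vs large MSS, on two link speeds.
gprs_small = tx_time(4000, 500, 40_000)      # GPRS-like rate - assumption
gprs_large = tx_time(4000, 1460, 40_000)
xdsl_small = tx_time(4000, 500, 2_000_000)   # xDSL-like rate - assumption
xdsl_large = tx_time(4000, 1460, 2_000_000)

# Absolute time saved by enlarging the MSS on each link:
gprs_saving = gprs_small - gprs_large
xdsl_saving = xdsl_small - xdsl_large
```

On the fast link the saving is negligible in absolute terms, which is consistent with fragmentation mattering far less in the xDSL scenario than in the GPRS one.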

Conclusions and Future Work
The work presented in this paper explores the value of the methodology proposed within the frame of the European project InteGrid to analyze the scalability of ICT infrastructures for Smart Grids. The methodology is applied in two consecutive steps to smart grid projects before their large-scale expansion. Its main strength is the combination of a qualitative approach based on SGAM with a quantitative, simulation-based approach. The methodology can be easily replicated in other projects and scenarios to provide a fast, clear and complete overview of the potential scalability of a Smart Grid solution. Additionally, it involves all the actors of the Smart Grid solution, fostering relations and collaboration, and also bringing internal and external awareness to each of the actors, since they are included in the analysis assumptions and requirements.
In this paper, the analysis has been applied to a cluster of solutions demonstrated in the InteGrid Slovenian pilot. This pilot consists of a Large Customer Commercial Virtual Power Plant (VPP), in which distributed flexibility is aggregated at MV and offered to the TSO for tertiary reserve upon validation by the DSO, creating an indirect TSO-DSO interaction. Communication takes place through a platform connecting the TSO, the DSO and the VPP.
The scalability analysis of this scenario, from both the qualitative and the quantitative points of view, indicates that no major constraints are foreseen. Hence, the system architecture, which is based on legacy systems together with new systems such as the smart meters of the VPP, has been correctly dimensioned for the use case demonstrated in the pilot, and remains so beyond the pilot's scale.
Through this analysis, the actors had to reconsider whether their own system criteria would still be met during large-scale operation, since they had to assess their current status. The minor constraints found during the analysis mainly concern the manageability of the overall system, as it is a complex solution: an increase in the flexibility aggregated by the VPP (i.e., in the number of devices) would challenge the system, as it is not a plug-and-play solution.
Meanwhile, an increase in data size, resulting from a possibly higher granularity of the data sources (technically feasible with the current devices), will require the actors to monitor their storage and increase it when needed. Such an increase in data size can also be considered a driver for internal services such as the deployed forecasting systems, since they would have a larger pool of data for their algorithms. In that case, processing power would again be no constraint, as these services are based on scalable solutions (e.g., Cloud).
Finally, an increase in the frequency of data exchange is a constraint only for certain technologies, and only if the boundary conditions change. Moving to real data-streaming operation poses no constraint for the services currently offered; however, if services other than tertiary control reserve are offered, such as secondary or even primary control reserve, an upgrade of the communication infrastructure is recommended, moving from 2G to 4G or 5G.
As future work, it is planned to apply the methodology to other scenarios involving different architectures and technologies, where critical components/links may lead to extending or adapting the methodology. In addition, the spider-web diagrams presented in Section 3 allow for a graphical diagnosis; the automation of such a diagnosis by means of machine-readable representations may therefore also be considered. Finally, it will also be interesting to complement this scalability work with replicability aspects in order to increase the impact of the analysis.

Table A1. Standard deviation of the results for the different scenarios analyzed with G3-PLC technology (Figure 9).