ERIGrid Holistic Test Description for Validating Cyber-Physical Energy Systems

Abstract: Smart energy solutions aim to modify and optimize the operation of existing energy infrastructure. Such cyber-physical technology must be mature before deployment to the actual infrastructure, and competitive solutions will have to be compliant with standards still under development. Achieving this technology readiness and harmonization requires reproducible experiments and appropriately realistic testing environments. Such testbeds for multi-domain cyber-physical experiments are complex in and of themselves. This work addresses a method for the scoping and design of experiments where both testbed and solution each require detailed expertise. This empirical work first revisited present test description approaches, developed a new description method for cyber-physical energy systems testing, and matured it by means of user involvement. The new Holistic Test Description (HTD) method facilitates the conception, deconstruction and reproduction of complex experimental designs in the domains of cyber-physical energy systems. This work develops the background and motivation, offers a guideline and examples for the proposed approach, and summarises experience from three years of its application.


With Smart Energy solutions reaching higher technology readiness [1], the question of appropriate testing becomes pressing [2]. Testing is necessary throughout development as well as before roll-out of market-ready products [3], employing virtual, physical, and hybrid testbeds [4,5]. A key issue for testing of smart energy solutions is their mixed-technology nature involving communications, controls, and multi-domain physical infrastructure, which affects both the availability of engineering expertise and suitable tool integration [6]. An appropriate test is then a matter of sufficiently clear test objectives and a specific and relevant multi-domain test environment [3,6,7]. The standards for technical quality and appropriate levels of scrutiny in testing are set within the specific context of a scientific discipline or technical application domain. For example, organizations within the automotive, thermal systems or electric power domains each identify and maintain their specific standards, test requirements, protocols and test environments. For a project coordinator, system integrator, solution developer, test engineer, or researcher, a project aim often is to increase the Technology Readiness Level (TRL) [2] of a specific smart energy solution. Rather than development, the ultimate project aim would thus be a validation

planning, experiments, and reporting? We are interested in facilitating the scoping and design of validation tests and experiments by offering a better formal framing and a procedural guideline. In this work we focus on the preparation of technically "holistic" test descriptions (characterised by a multi-domain and systems-of-systems view towards a formalized description covering design and validation) with application to Smart Energy problems, and report their use in a number of cases.
The approach presented in this article has been developed in the European ERIGrid project [16], and an early version of it was already discussed in [7,17]. The remainder of this article is structured as follows: Section 2 identifies the context and background of test description methods. Section 3 provides a thorough guideline to the HTD approach and Section 4 provides an illustrative example and reports on HTD applications. Finally, Section 5 concludes this article. For readers focused on applying the HTD in their own work, we refer to Section 2.2 for context, Section 3 for the guidelines, and Section 4.1 for the discussion of an application example.

To achieve a holistic view on test descriptions, we ought to be aware of their full context, in terms of related work (Section 2.1), purpose, formal context, technology (testbeds), and methodology (test procedures). This requires separately examining the purpose of testing in a formal context

requirements specification and system granularity. This decomposition enables parallel development of sub-systems and components, while tracing requirements to overall system purposes. The bottom-up phase represents the validation and integration of different solution aspects and sub-systems. This V-model can be interpreted classically as a "waterfall" sequential process, but can also be applied to modern concurrent engineering as a conceptual hierarchy, where the V-model establishes a strong coupling of requirements specification and testing: at every stage of development, experiments are based on a) requirements identified earlier in the design process (i.e., in the top-down phase), b) an assembly of components validated in a previous stage of testing, and c) the appropriate type of testbed (dark red in Figure 1). Also visible in this illustration of the V-model is the relation between system requirements and test specification, and its widening specification gap. This specification gap appears at higher levels of integration, and is amplified when the test involves the integration of several domains with fundamentally distinct natures (e.g., power system and ICT). In engineering and research practice, the conceptual difference between design and testing is easily obscured at early development stages by improved use of simulations and software tool integration: in (simulation-based) design, the focus is on structural and parametric changes to a (simulation) model, which lead to an incremental adaptation of a system design.
In contrast, for testing, the system is fixed, and an experiment is set up to quantify a property or to validate a hypothesis (e.g., function, performance) about the present system design. As the system grows in scale and complexity, the formulation of a test hypothesis also becomes non-trivial; on the one hand it is driven by the (more complex) system requirements, while on the other hand larger and more complex experimental setups are required.

A holistic test description would support this re-framing from engineering design to test design, helping to narrow down the test purpose and the test system requirements.

The essence of framing an experiment is therefore the formulation of a test hypothesis. In CPES, two key aspects of a test hypothesis are the boundary of the test system and the system qualities to be assessed. System qualities of interest would typically be derived from system requirements or related engineering concerns. For the identification of a system boundary, we have to consider both the system's functional and structural architecture and its environmental embedding. This hypothesis should be developed independently of the testing tool. Only later, in the experiment design, are the testbed properties required to define the embedding of the system part being tested into an emulated or simulated experimental environment.

To achieve an operable integration between the different stages and phases of the V-model, we distinguish the semantic context of the energy system solution from the context of testing and embedding in a testing platform. In spite of overlapping terminology and tooling between these contexts, each has its own set of engineering requirements and purposes:

1) The energy system semantic: represents the behaviour and the semantic relations among the different actors of the system. Depending on the considered energy system and the information models, this semantic represents the application-relevant purposes, components and structures of the system (i.e., the "real world application").

2) The testing semantic: the purpose and content of a single test or set of tests. It relates the real-world motivation for a test to the concrete system configurations and functions to be included in an experiment.

The aforementioned specification gap (see Figure 1) can now be described by three gaps: i) the translation between these two semantics, ii) the lack of testing semantics for the multi-domain nature of a cyber-physical energy system, and iii) missing semantics and integration for the advanced testing technologies of CPES. At present, this gap is addressed manually by engineers proposing a specific test setup and validation criteria. The process is therefore subjective and makes it difficult to keep a common understanding across different stakeholders, test stages, and eventual system integration.

and syntactic configurations of a system, while the dynamic and technical configurations are provided by the specific implementation technologies (TCP/IP, Modbus, DNP3, etc.). While they can be readily used for system specification, there is a need to improve support for modelling other domains (e.g., ICT and thermodynamics). Nevertheless, the energy system semantics can be used as building blocks for the CPES design, but the link from these information models to the validation setup is obscured; hence, the specification gap.

The SGAM proposes an interoperability architecture that covers mainly the conceptual and semantic interactions in a multi-domain smart grid.

TDL and TTCN-3 are utilized in the information domain. However, in order to apply them to CPES assessment and validation, a means to establish a concrete link to energy system specifications is missing, as the ETSI suite is not meant to interface with physical structures and functions. This gap may be filled by the integration of a complementing energy system semantic.

The holistic test description addresses both the energy system semantic and the testing semantic,

Each of these approaches entails a coupling of different testbed contexts. Thus, in addition to the increasing complexity of the CPES and the complexity of testing semantics noted above, the diversified and rapid advancement of testbed technologies needs to be addressed to encompass the complete test description. Issues here include the establishment of a common information model across the diverse testbeds, synchronization, logging and time-stamping, as well as methods for the coherent initialization of the test setup.

The holistic test description proposed in this paper is intended to resolve this challenge in part by aiming to fill in the specification gap also at the level of testbed description and the mapping of test specifications to testbeds.

The statistical concept of Design of Experiments (DoE) has been developed to address result significance and reproducibility in experimentation. The term was coined by Fisher [45], who established many fundamental concepts of the methodology as well as an abstract terminology that allows DoE to be easily mapped to any application domain. In essence, DoE provides a statistical framework to explore the influence of different factors on a system's response. A special focus is put on avoiding the confounding of factors so that their influences can be distinguished from each other. While these basic ideas of DoE initially found application in agricultural and clinical research, over time they have also been adopted by the engineering domain to improve product optimization, quality assessment and validation [46,47]. Especially in the context of software simulation, the DoE framework has been widely adopted and modernized by the extension to more complex, multidimensional sampling algorithms [48,49]. So far, however, DoE application is mostly

In the course of this work, the authors demonstrate how the DoE methodology can be seen as an intrinsic part of an HTD. It provides testing with the statistical groundwork for efficient experimentation, result reproducibility and significance of the outcome against noise in the tested system. A first discussion of the relationship between DoE and holistic testing has been given in [50]. The work presented in this paper partly builds on this first approach and aims to provide a more general understanding.

In practice, test description means writing up intentions and drawing out configurations, to identify and define the essential parameters and procedural steps for conducting a test. The HTD aims to support testing and test description practitioners in laying out these intentions in a clear and traceable manner, in spite of the complexities arising in CPES testing which have been outlined above. The HTD approach comprises a set of textual templates [51], a graphical notation and partial processes that may be employed by a practitioner to structure, refine and document their testing endeavour. The whole process is outlined in Figure 3. Like any model process, the HTD offers a supporting structure and raises relevant questions. Users have reported benefits from using HTD templates in early phases of test scoping and planning; the fully documented test description may also be relevant in cases where a complete trace of the experiment design is valued. The supporting structure offered by the HTD has some complexity; for any learner it can be useful to practice once on a simple problem, to avoid too steep a learning curve during a complex application. In test applications involving multiple research infrastructures or testbeds, it is unavoidable to follow an approach like the HTD method described here, including the development of new testing chains, round robin testing or the online coupling of research infrastructures. Essentially, the HTD provides a framework for separating testbed and test objectives, and supports the qualification of testbeds as part of the testing approach. It is expected that a minimal use of the HTD is beneficial in any multi-disciplinary testing effort.

The following sections provide a modular overview of the HTD approach, enabling readers to quickly grasp the purpose of different parts of the HTD and assess which of them will be more applicable in their test. First, Section 3.1 provides an overview of the elements; then Section 3.2 highlights important aspects of the HTD in more detail. For the same reasons, the HTD can seem overly formal and tedious to apply when the testing problem is simple. For example, a practitioner who is completely familiar with their laboratory may find little need to follow the steps of an "Experiment Realization Plan".

The Test Case structures the motivation for a test. By combining narrative with graphical, qualitative, structured and quantitative/formal elements, domain specifics are given a shared testing context. In Figure 4 the TC template elements are summarised. We can identify three main parts:

Firstly, the Test Objectives in narrative form and their more analytical form as the Purpose of Investigation (PoI); secondly, the description of system functions and components, to organize the System under Test (SuT) and its functions and isolate the focal points of the investigation; finally, the Test Criteria, which present a further formalization of the test objectives in terms of measurands of performance and behavior.

Test Objectives
Why is the test needed? What do we expect to find out? A short narrative of context and goals of the test.

Purpose of Investigation (PoI)
The test purposes classified in terms of Characterization, Verification, or Validation.

Object under Investigation (OuI)
"the component(s) (1..n) that are to be qualified by the test"

Function(s) under Investigation (FuI)
"the referenced specification of a function realized (operationalized) by the object under investigation"

The Test Case frames the purpose of an experiment, and identifies relevant functions, structures and components. A key purpose of this abstract description is to isolate the test objectives from the possible test implementations. While also aimed at structuring purposes, in contrast to a use case in the Energy System Semantic (cf. Section 2.3), a Test Case identifies both structural and functional aspects of the Test System and its boundary (which, ultimately, is to be reflected by a testbed); similarly, the test criteria relate to the test purpose, rather than the functional purpose of a use case.

The Test Case is an essential part of any testing effort.

In order to realize a TS in an experiment on an experimental platform (RI, research infrastructure), the TS requirements need to be mapped to RI capabilities (RI hardware, software, models). The HTD provides a guideline for the identification of suitable RIs and this mapping in the form of an Experiment Realization Plan (see Section 3.2.3).

The main purpose of an ERP is to provide a conceptual approach and possible algorithm for situations where the test specification is well developed and multiple applicable testbeds and RI cooperations are considered; the ERP is not required for simple experiments where the Experiment Configuration follows straightforwardly from the test specification.

The collection and annotation of experiment results is a natural element of any testing process.

In a holistic test description, a common reference frame and format is advised to keep experiment results traceable in spite of multiple testbeds, time resolutions and data formats. Such a frame can further be applied in the definition of test signals and the documentation of system configurations.

This specific challenge is not explicitly addressed here, as an appropriate solution will typically be domain-specific. In the context of energy systems, organizing data typically involves combining time series of measurements with metadata about those measurements. An example of a data format which applies to this context may be found in [54].

In this section we highlight some key considerations that have been accommodated in the HTD conceptual framework.

In practice, after a Test Case is formulated clearly, the further planning can benefit from applying only a subset of the HTD aspects. In any case, one should first identify whether the test objectives are sufficiently formalised (see below). In a next step, for example, it may be necessary to shed light on dependencies between test objectives, leading to a hierarchy or sequence of the test executions.

In that case, formulating a qualification strategy is useful.

Here it helps to step back and look at the "test objectives" as a pure narrative formulation of the motivation and rationale of a test. In a second step, the test objectives are formally refined into the

where experiment results are subject to some expert interpretation to decide upon pass/no-pass.

Implication for Test Case: test criteria are formulated qualitatively; a qualitative passing criterion is required (consider who is the expert qualified to pass the judgement).

Following the textual formulation of PoIs, the next step is a further formalization of the Test Criteria (TCR), which reference the domains and components identified in the System under Test, and would suitably be represented as mathematical formulas.

The target metrics, variability attributes and quality attributes each identify parameters related to the SuT, to suitably measure, perturb and assess experiment result quality, respectively.

While a test case describes in the most generic terms the requirements and observables to be examined, these must eventually be mapped onto a specific laboratory infrastructure. Documenting this mapping is the task of the three levels of system configuration descriptions; a Generic System

is specific because it names the key factors and observables, as well as the expected system topology.

In that, it represents an instance of the identified SuT. Justifiable reasons for leaving SSC parameters undefined relate to system parameters and properties that are non-critical for the test criteria, as well as parameters that will vary strongly with the choice of testbeds. In the latter case, acceptable and

As an example of the three levels, Figure 5 shows system configurations from a test involving

The web-based open access hosting of the database is a step on the way towards a pan-European testing and research platform that allows users to find the best RI for their particular application cases.

However, some institutions wish to keep their RI layout information confidential. An alternative use of the RI database may therefore be to adopt the concept within closed company networks, improving lab accessibility within that consortium only.

The experiment realization plan is closely linked to the RI database and outlines multiple usage scenarios. It is therefore not to be understood as a strict set of rules for the use of the database, but rather as an illustration of the database's capabilities. The guideline describes a two-stage process for deriving an experiment implementation from a given test specification. The first stage of the process can be called the assessment phase. Most practical tests do not require the experimental setup to follow the test specification in all aspects; certain aspects such as grid topology, controllability, and static and dynamic parameters will have a strong impact on the outcome of the test, while others can be ignored.

For example, the communication protocol and bandwidth of a PV inverter do not affect the outcome of an anti-islanding test. However, these would be of high relevance for an interoperability test of the same inverter, while the electrical characteristics of the inverter might be irrelevant. HTD practitioners are asked to assess the degree of precision to which the experimental setup needs to replicate various aspects of the test specification, by examining each aspect of the test system and assigning one of four different precision levels to it:

• precise - the respective system aspect has to be matched 1:1 (e.g., exactly the same model of electric vehicle, the exact grid topology, the same communication protocol, etc.),

• equivalent - the respective aspect has to be matched equivalently (e.g., an electric vehicle with the same charger and battery size, a grid topology with the same number of nodes, a communication protocol with the same or a better fidelity, etc.),

(A subset of the database has been released in HTML form as part of the ERIGrid RI descriptions, as for example: https://erigrid.eu/components-attributes-of-test-center-for-smart-grids-and-electromobility-iee/.)
• nominal - the respective aspect can be matched with some deviations, but these should only lead to marginal influences on the objective and results (e.g., a controllable load simulating an electric vehicle, a grid connection providing similar load/voltage characteristics, some means of communication without regard for the specifications, etc.),

• irrelevant - the respective system aspect does not influence the test objective and results.

A test system (SSC, cf. Section 3.2.2) aspect, on the other hand, may vary in scale. It can be a component, a set of components, or even just a certain component or connection property. The required focus and level of detail of the aspect overview depend entirely on the given system and test case. Thus, a comprehensive list of potential aspects cannot be established in the context of this paper. The outcome of the assessment phase is a table that pairs each system aspect with a precision category. An example for a part of such an assessment table is given in

requirements of a test and to prioritize the rest of the system properties. These constraints, together with the prioritization, enable an iterative search of the database. In a significant number of cases, user requirements and the RI capabilities will not be a perfect match; an iterative search will then help to identify the most suitable RI to implement an experiment in.

The first search pass identifies all RIs fulfilling the most crucial requirements. Subsequently, more constraints are applied until only one RI is left, including the set of suitable components it provides.

This process will also alert the user if the planned experiment cannot be fully implemented in any available RI. In the latter case, either the TS has to be revised and/or the precision requirements have to be

This illustrates a major benefit of the HTD: it requires its users to consider essential DoE concepts from the very beginning of the test planning and to refine them over the course of the specification process.

Accordingly, the DoE concept of a system response is to be specified in stages as test criteria and target metrics (TC stage) and as output parameters and target measures (TS stage). Factors whose

The aim of an experiment strongly determines how the DoE process is planned and how results are interpreted. As described above, these aims are specified in the HTD as the PoI, falling into the categories characterization, validation or verification. These PoI categories have different implications for the necessary DoE considerations. As an example, imagine a test system with intrinsic fluctuation.

A common DoE-related technique for the interpretation of results in the presence of noise is given by Analysis of Variance (ANOVA, see e.g., [57]). It allows its practitioners to explore (with a given significance level α) whether the influence of a given factor is stable against the system's fluctuation. In the case of a characterization experiment, users of ANOVA would generally explore which significance levels α can be reached.

In a validation experiment, on the other hand, users will want to interpret whether the calculated

to make assumptions about the analyzed system. As an example, the influence of some factors or factor combinations may be considered negligible so that they are ruled out of the experiment, or linear behavior of the system dynamics may be assumed. Such assumptions have to be based on an understanding of the given system. Since appropriate insight is not always given, especially in the case of highly interdisciplinary systems, employing screening experiments is a common practice in DoE (see e.g., Chapter 5 of [58]). These types of experiments typically employ designs that are relatively cheap in the sense of requiring few experiment runs. As a consequence, they feature confounding of factors or factor combinations, so that definite statements about factor influences cannot be made.

Nevertheless, screening serves its purpose of providing its users with some initial insight into the

The main objectives of this test were twofold:

• Verification that the EFCC control scheme is capable of identifying grid frequency events correctly and deploying an appropriate amount of response to contain the frequency deviation. Verifying scheme sensitivity to frequency events and stability against non-frequency events (e.g., faults) is the focus here.

• Quantification of the enhancement of frequency containment using the EFCC control (i.e., compared to relying solely on primary frequency response). Speed and extent of frequency containment are the focus here.

Moreover, it was critical that as many of the EFCC control scheme hardware components as possible (LC, RA, CS, PMU) were tested in an independent physical test environment akin to a field deployment. Consequently, an integrated system test was a necessary follow-up to manufacturer factory acceptance tests.

If an informal method of describing the test objectives and requirements for the case described thus far is adopted, it becomes challenging to translate these into different test laboratories with comparable test outcomes. Moreover, further difficulties in experiment realization can be faced if the test is to be conducted in a distributed fashion (e.g., across different laboratory infrastructures). The following will examine how the HTD can be applied to the illustrative EFCC test case, drawing on the main points of the process detailed in Section 3.1. This treatment is split across the three main stages of developing a test case description, test specification and experiment specification.

The focus in this stage of the HTD development is to define the scope of the system under test and the test objectives, which will ultimately translate to a specific test design (corresponding to the test specification) and specific test implementation(s) (corresponding to the experiment specification). To develop the formal descriptions established by the HTD, we first refer back to the above narrative explaining the EFCC control scheme operation, the motivation for using it and the objectives of testing. The test case clearly requires a representation of a frequency response that is characteristic of a low inertia grid. As such, the system configuration considered for the test is that of a transmission grid with low inertia generation. In other words, the EFCC control scheme to be tested must be exposed to the electrical operational conditions of a low inertia grid, particularly during a frequency disturbance.

• OuI: although a wide-area control scheme is being tested, it is the LCs which deploy the energy resources during grid frequency disturbances that are the focus of the test.

• FuI / FuT: following on from the OuI definition, the LC's ability to determine and deploy the appropriate amount of energy resources in response to the detection of a grid frequency disturbance is the functionality being investigated. Note that other functions will be present and

This process continues to detail the quantifiable metrics of the FuI against which the test outcomes are assessed. In this example, the aforementioned objectives of the test imply that verification of system performance is of most interest. These objectives can be detailed in a set of distinct PoIs which expose the SuT to specific test conditions through which the FuT performance can be evaluated. Example PoIs in this case include:

• Verify that the LC successfully detects grid frequency disturbances necessitating a response.

• Verify that the LC remains stable against grid frequency disturbances not requiring a response (e.g., over-frequency resulting from a short circuit).

• Verify that the LC deploys the expected amount of resource with reference to the severity of the

Following on from the test case description, a test design is specified along with a number of measurable parameters that are used to evaluate the test criteria. In our example case, verification of the performance of the LC is key, and as such the test design reflects the need to expose the LC to a comprehensive range of grid disturbances while measuring its response to each disturbance. Table 3 shows an excerpt from a test matrix designed to expose the LC to the aforementioned grid disturbances; combinations of different generation loss levels, locations and available resource capacities are tested.

A factorial or manual discrete approach to specifying these test parameters can be adopted to create this matrix.
Table 3. Excerpt from the test matrix specifying event sizes and initial conditions.

In order to perform the verification of the LC performance, it is necessary to measure the following:

• Amount of grid frequency containment following a genuine grid frequency event.

• Amount of resource deployed in relation to the event severity and LC settings.

The PHIL experiment setup is illustrated in Figure 6. A real-time digital simulator was used to model the grid, while physical controllers were deployed on a low voltage distribution network with load banks as the physical deployable resource. Physical controllers were also interfaced with energy storage system models in the real-time simulation. Figure 7 shows a sample of the measurements made during the execution of test ID 1.1 as summarized in the test matrix. The corresponding template can be seen in Figure 8.

The presented HTD method offers overall control and traceability of experiments with CPES. A test design specified using HTD templates allows users to plan and follow up on complex CPES experiments, including users not physically present on the laboratory premises. This saves time for the overall validation work. Even if the preparation and writing of the test cases and corresponding test and experiment specifications may take some time (depending on the validation complexity of an experiment, this can vary from minutes up to several hours), in the end a more detailed understanding of the testing goals, requirements, boundary conditions, etc. improves the whole process. As the detail of test descriptions can be adjusted, the overhead of following the HTD method in detail can be tuned to the needs of the experiments. We recommend using the simplest variant in the preparation of any multi-disciplinary testing effort.

The HTD defines a technical language which one needs to know and understand to plan

A word of advice for future HTD users: as in any multi-disciplinary work, differing understandings of the terms in the HTD templates are likely. Typically, similar terms are understood differently by power system engineers and computer science professionals.

Hence, cross-checking the interpretation of key terms will always be needed to establish common