In this section, each of the use cases is described in detail highlighting how the proposed methods are used to overcome the problems described earlier.
4.1. Autogeneration of Co-Simulations from System Descriptions
This use case forms the basis for the subsequent use cases. In it, two models are connected in a simulation. The first is a model of an adaptive cruise control (ACC) system, simply called the ACC function, which is to be tested. It controls the acceleration of a vehicle in order to maintain a desired speed by using incoming sensor data from the environment; in our case, these data are the speed of the vehicle itself and, if a vehicle ahead is detected, the speed of and distance to that vehicle. The second model encompasses the entire environment and serves as a counterpart to the ACC function. A system-level description of the models, their signals, and how these signals are connected is shown in a UML diagram in
Figure 5. Here, UML was used for the proof of concept for practical reasons, as it was simple to realize as a showcase with available open-source tools. The UML metamodel used for the demonstration is very primitive. The simulation master is described by a ProtocolStateMachine, which holds properties (e.g., parameters for the start time). The participants are classes, which themselves have properties holding parameters and ports for the signals' inputs and outputs. The connectors are represented by InformationFlows, which hold the type of protocol encoded in the name. A simple but well-defined mapping into our graph model, in the form of a simulation subgraph, can be derived:
1. A ProtocolStateMachine is mapped onto a master node. Its properties are added as data to the node.
2. The class objects are represented as bridge nodes, which again hold the property fields as information.
3. The port objects are mapped onto signal nodes.
4. The InformationFlow objects are then used to define connection nodes and the edges between the participants.
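The four mapping rules above can be sketched in code; the dictionary shapes and node attributes below are illustrative assumptions, not the actual metamodel implementation:

```python
# Sketch: mapping parsed UML elements onto a simulation subgraph.
# The element dictionaries and node attributes are illustrative assumptions.

def uml_to_simulation_graph(state_machine, classes, information_flows):
    """Build a simple node/edge representation of the co-simulation."""
    nodes, edges = [], []

    # 1. The ProtocolStateMachine becomes the master node; its
    #    properties (e.g., the start time) are attached as node data.
    nodes.append({"id": "master", "kind": "master",
                  "data": state_machine["properties"]})

    for cls in classes:
        # 2. Each class becomes a bridge node carrying its parameters.
        nodes.append({"id": cls["name"], "kind": "bridge",
                      "data": cls["properties"]})
        # 3. Each port of the class becomes a signal node.
        for port in cls["ports"]:
            sig_id = f'{cls["name"]}.{port["name"]}'
            nodes.append({"id": sig_id, "kind": "signal",
                          "direction": port["direction"]})
            edges.append((cls["name"], sig_id))

    # 4. InformationFlows define connection nodes and the edges
    #    between participants; the protocol is encoded in the name.
    for flow in information_flows:
        conn_id = f'conn:{flow["name"]}'
        nodes.append({"id": conn_id, "kind": "connection",
                      "protocol": flow["name"]})
        edges.append((flow["source"], conn_id))
        edges.append((conn_id, flow["target"]))
    return nodes, edges
```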
However, this approach is not limited to UML; other standards, such as SysML or SSP, could be used for the transformation as well, since they are easily parseable. It should be highlighted again that this is a very simplistic model intended to showcase the viability of the approach. A more in-depth study of metamodels is needed to provide proper transformations for industrial use. Nevertheless, one can see that the approach should be extendable to other formats. This system description, as designed by a systems engineer, forms the basis of the use case and is used to generate the graph that describes the build and simulation framework. The structure of the process, which was described earlier in an abstract manner, is shown in
Figure 6.
In this case, the model (green box) is maintained in a code repository by a developer and consists of C code as well as the model description in XML format. The build pipeline (blue box) is constructed by filling out templates of Jenkins pipelines with the following information: the location of and access credentials to the code repository; the libraries used and their location (here, the FMU SDK); the model description (which we consider part of the code); and the configuration of the model and its build environment. The resulting pipeline compiles the downloaded code into a shared library according to the provided configuration and packages it together with the model description into an FMU. The FMU is then tested with the FMU Checker to ensure formal compliance with the FMI standard and to check that the shared object can be called through the defined interface. Finally, the resulting most recent version of the FMU is uploaded to an artifact repository. This forms a self-contained development process for each simulation unit in the scenario, and the build process is repeated for all involved simulation participants.
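This template-filling step might look as follows; the Jenkinsfile skeleton, placeholder names, and build commands are illustrative assumptions rather than the actual templates:

```python
# Sketch: filling a Jenkins pipeline template with per-participant
# build information taken from the process graph. The skeleton and
# all placeholder names are illustrative assumptions.
from string import Template

PIPELINE_TEMPLATE = Template("""\
pipeline {
  agent any
  stages {
    stage('Checkout') { steps { git url: '$repo_url', credentialsId: '$credentials' } }
    stage('Build')    { steps { sh 'gcc -shared -fPIC -I $sdk_path/include -o $model_name.so src/*.c' } }
    stage('Package')  { steps { sh 'zip $model_name.fmu modelDescription.xml $model_name.so' } }
    stage('Check')    { steps { sh 'fmuCheck.linux64 $model_name.fmu' } }
    stage('Publish')  { steps { sh 'curl -T $model_name.fmu $artifact_repo' } }
  }
}
""")

def render_pipeline(participant):
    """Fill the pipeline template from a participant's node data."""
    return PIPELINE_TEMPLATE.substitute(participant)
```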
To conclude the build stage of the process, these artifacts must be deployed in the simulation environment. Our implementation of the deployment uses docker containers. Each simulation participant, as well as the simulation master, is deployed in its own docker container, and the containers are started up and connected by using a docker network; this is shown schematically in
Figure 7. Again, the configuration of these containers is generated automatically from information contained in the co-simulation process graph, such as the number of participants, their connections, and the network configuration.
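Assuming a docker-compose style deployment, the container configuration could be derived from the graph roughly as follows (image names, environment variables, and the network name are invented for illustration):

```python
# Sketch: deriving a docker-compose style configuration from the
# co-simulation process graph. Image names, environment variables,
# and the network name are illustrative assumptions.

def compose_config(participants, network="cosim-net"):
    """One service per participant plus the master, all attached to
    a single docker network."""
    services = {
        "master": {"image": "cosim/master:latest", "networks": [network]},
    }
    for p in participants:
        services[p["name"]] = {
            "image": "cosim/fmi-dcp-wrapper:latest",
            "environment": {
                "FMU_URL": p["fmu_url"],   # artifact repository location
                "DCP_PORT": str(p["port"]),
            },
            "networks": [network],
        }
    return {"services": services, "networks": {network: {}}}
```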
The simulation stage of the process as depicted in
Figure 2 provides information on the configuration of the scenario; in particular, which signals are connected to which, which time step to use, and how long to simulate. This information is stored in the simulation graph and used to generate the configuration supplied to the master. The containers corresponding to simulation participants contain an FMI-to-DCP wrapper that allows the downloaded FMUs to run as DCP participants once the container is started. This wrapper runs as a DCP slave that maps the state machine of DCP onto that of the FMU, calling the appropriate FMI functions as triggered by the DCP master. This makes the functionality of the FMU available as a DCP participant. The wrapper was developed as a prototype by using the DCPLib (
https://github.com/modelica/DCPLib, accessed on 10 January 2022)—which is the open-source reference implementation of DCP. It provides an implementation of the protocol, including slave description generation, state machine transitions, and data exchange, and ensures compliance with the protocol. The actual calculation of the simulation participant is relayed to the FMU that the DCP is wrapped around. In our implementation, we use the FMILibrary (
https://github.com/modelon-community/fmi-library, accessed on 10 January 2022) for that purpose. When a docker container containing a simulation participant is started, the DCP slave in the container is started and transitions to the DCP state Alive, awaiting instructions by the master.
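The wrapper's mapping of DCP transitions onto FMI calls can be sketched as follows; the FMU object stands in for the real FMILibrary bindings, and all method names are illustrative assumptions, not the DCPLib API:

```python
# Sketch: how an FMI-to-DCP wrapper might map DCP state transitions
# onto FMI function calls. The fmu object is a stand-in for the real
# FMILibrary bindings; names are illustrative assumptions.

class FmiDcpWrapper:
    def __init__(self, fmu):
        self.fmu = fmu          # object exposing FMI-like calls
        self.state = "Alive"    # DCP slaves start in the state Alive

    def on_configure(self, config):
        # configuration PDUs -> FMU instantiation and experiment setup
        self.fmu.instantiate()
        self.fmu.setup_experiment(start_time=config["start_time"])
        self.state = "Configured"

    def on_run(self):
        # starting the run -> leave the FMI initialization mode
        self.fmu.enter_initialization_mode()
        self.fmu.exit_initialization_mode()
        self.state = "Running"

    def on_step(self, t, dt):
        # a step trigger from the master -> one FMI co-simulation step
        self.fmu.do_step(t, dt)

    def on_stop(self):
        # stop PDU -> terminate the FMU and return to Alive
        self.fmu.terminate()
        self.state = "Alive"
```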
Once all docker containers are started, the master registers the participants and rolls out the given configuration by sending configuration protocol data units (PDUs) over the network via TCP/IP. This causes the DCP slaves to transition through their state machine and apply the supplied configuration, acknowledging each received message. Once the simulation is started by the master, simulation time starts, and each DCP slave independently calculates the current time step, sending its outputs to the other slaves and receiving its inputs via data PDUs. This is the soft-real-time (SRT) operating mode of DCP, i.e., every simulation participant keeps its own time and performs its calculations in real time on a best-effort basis. This poses no problem in our minimal example, but in a real-world scenario care must be taken to ensure that the performance of the models is adequate. Once a certain amount of wall clock time has passed, the master sends a stop PDU to each participant and notes its orderly termination. The results of the simulation are then collected and uploaded to an artifact repository.
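A minimal sketch of this master-side SRT lifecycle, written against an abstract slave interface (the method names are illustrative assumptions, not the DCP API):

```python
# Sketch: the master's lifecycle for a soft-real-time (SRT) run.
# The slave interface and method names are illustrative assumptions.
import time

def run_srt_simulation(slaves, config, duration_s):
    """Roll out the configuration, start the run, and stop all
    slaves after `duration_s` of wall clock time."""
    for slave in slaves:
        # configuration PDUs; each slave walks its state machine,
        # acknowledging every received message
        slave.send_configuration(config)
    for slave in slaves:
        slave.start()            # simulation time starts; slaves now
                                 # exchange data PDUs among themselves
    time.sleep(duration_s)       # SRT: wall clock time governs the run
    for slave in slaves:
        slave.stop()             # stop PDU; note orderly termination
    return [slave.collect_results() for slave in slaves]
```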
This use case is treated as a prototype upon which the other use cases are based. Their configuration management and deployment follow the same principles, and the pipelines for building and deploying the simulation are automatically adapted to changes in configuration.
4.2. Autoconfiguration of Execution Orders of Participants in Co-Simulations
As discussed in
Section 2.2, the co-simulation graph is already used to analyze the system and generate configurations. Since the co-simulation process graph contains this graph in the simulation stage, these methods fit well into our proposed framework. In sequential non-iterative co-simulation (i.e., the execution of the models in each time step in a certain sequence without repeating steps), the order of execution is important for the quality of the result. In DCP, this sequential execution, which is important for the determinism of the system, is possible by using the non-real-time (NRT) operating mode. In this mode, the simulation time is completely independent of the wall clock time, and participants only calculate the next time step when explicitly triggered to do so by the simulation master. Indeed, the sending of outputs is also triggered by the master to enable a more fine-grained configuration, such as which participants receive which data at which point in the sequence of execution. This represents the default mode of co-simulation in the FMI specification [
24]; however, due to the distributed nature of DCP, this is represented as separate state transitions. Note that NRT does not necessarily mean that the system is not running in real time, but rather that the time of each participant depends on triggers from the master. If all participants of an NRT system can perform their calculations in real time and the master can trigger them in real time, the system is running in real time.
In our use case, the sequence of execution of the simulation participants is a fixed configuration that can be changed as a parameter. Although this was sufficient to run the simulation several times with changed parameters and compare the results, in principle this trigger sequence can be derived from the simulation graph [
15].
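Assuming an acyclic connection graph, such a trigger sequence could be derived by a topological sort of the participants along their signal connections (a sketch, not the algorithm of the cited work; a feedback loop would first have to be broken):

```python
# Sketch: deriving an NRT trigger sequence by topologically sorting
# participants along their signal connections. Assumes the connection
# graph is acyclic; names are illustrative.
from graphlib import TopologicalSorter

def trigger_sequence(connections):
    """connections: iterable of (sender, receiver) pairs.
    Returns an execution order in which every participant is
    triggered only after all participants it receives inputs from."""
    ts = TopologicalSorter()
    for sender, receiver in connections:
        ts.add(receiver, sender)   # receiver depends on sender's output
    return list(ts.static_order())
```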
4.3. Autogeneration of Test Scenarios
As motivation, consider the following co-simulation, which represents a lumped propeller shaft [
25] (see also
Figure 8 for a schematic overview).
The participant TA sends a time-dependent signal defined by a prescribed curve with a = 10 Nm; the participant TB is defined similarly to TA. The system A is defined by a differential equation with given initial conditions and a parameter, while the system B is described by a further differential equation with its own initial conditions and parameters, its outgoing signal being determined by an algebraic relation; the full equations and parameter values are given in [25].
A proper system initialization is needed in order to couple the shafts only once an equilibrium is reached, so as to avoid undesired effects. In DCP, this can be achieved by using the synchronization states, indicating a finished synchronization by transitioning to the state Synchronized before starting the actual simulation. This requires implementing a coupling control in the master which can distinguish between an initial transient oscillation phase and the actual simulation run.
In certain simulation scenarios, it can be of vital importance to start the system from a consistent initial state. For this purpose, DCP provides a separate initialization phase, the super state Initialization, comprising several states. In contrast to the synchronization states, this takes place before simulation time starts. Although this feature is optional, it can be used to run the simulation for a number of non-real-time steps before simulation time starts, even in the real-time operating mode. The master can use this to bring the system to a consistent initial state before the actual simulation is started. In order for the master to have control over the system, the data can be routed via the master during this initialization phase only and exchanged directly between the simulation participants during the actual simulation run. This is depicted in
Figure 8 for our use case. The dotted connections of participant A and participant B with the master are only active during initialization. These connections may be informed by dependencies of the simulation models and may thus be generated based on the simulation graph. The master can decide when an equilibrium is reached based on the exchanged data and start the simulation run.
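The master's equilibrium decision could, for example, be based on a simple settling criterion over the signals routed through it during initialization; the window length and tolerance below are illustrative assumptions, not the actual coupling control:

```python
# Sketch: a master-side equilibrium check during the initialization
# phase. Window length and tolerance are illustrative assumptions.

def is_equilibrium(history, window=10, tol=1e-3):
    """Consider the system settled when the routed coupling signal
    varied by less than `tol` over the last `window` samples."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol
```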
However, implementing such a master control can be difficult. Problems like these arise, for example, in the context of test benches, where the coupling of such a system has implications for the hardware. It is therefore important to test such a master control thoroughly: many tests are needed, and the reproduction of faults and errors has to be fast. Although the example above is relatively simple, creating tests becomes difficult with additional complexity, and test runs may take a long time, since many participants may be needed to reproduce an issue and to avoid regressions over long-term development.
To tackle this issue, the following approach is proposed to ease the creation of proper test cases in a timely manner. The left-hand side of
Figure 9 shows a scenario with four participants (as in the example above). From a previous simulation run, stored data is used to generate a test for participant 1, which can be seen on the right. The data of all inputs and outputs of participant 1 at the black dots is used to generate a test participant that simply repeats the recorded data. This is done by considering all participants except the one to be tested to be in a subsystem and contracting this subsystem to a single new participant. This enables us to test participant 1 for repeatability and determinism, and this test can run cheaply again and again during continuous development of participant 1 without having to run the whole system again. In the same manner, tests for all other participants can be generated, each time putting all participants that are currently not under test into a subsystem, assuming all data from a previous simulation run has been recorded. We tested this setup for one participant only, because the code to generate a test dummy as a counterpart to a single participant was available. However, in principle the same can be done to test entire subsystems: the subsystem consisting of participants 1 and 2 could, e.g., be tested against the data provided by participants 3 and 4, and so on. In this manner, a large number of integration tests can be generated by using the same setup, ensuring reproducibility of the results during the development process. Because each test only repeats data, it is very fast to run, and one can test a model without running a computationally expensive participant, such as an environment simulation, each time. If no previous simulation has been run, the data can be replaced by data from requirements specifying how a model should behave. In this manner, a complex simulation system can be constructed step by step, adding components or subsystems one at a time.
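The replay-based repeatability test described above can be sketched as follows; the data layout and function names are illustrative assumptions, not the actual test-generation code:

```python
# Sketch: generating a replay participant from recorded data and
# checking a participant under test for repeatability. Data layout
# and names are illustrative assumptions.

def make_replay_participant(recorded_inputs):
    """Return a stand-in for the contracted subsystem that simply
    repeats the inputs recorded at each step of a previous run."""
    def replay(step):
        return recorded_inputs[step]
    return replay

def repeatability_test(step_fn, recorded_inputs, recorded_outputs, tol=1e-9):
    """Drive the participant under test with replayed inputs and
    compare each output against the recorded reference trajectory."""
    replay = make_replay_participant(recorded_inputs)
    for step, expected in enumerate(recorded_outputs):
        actual = step_fn(replay(step))
        if abs(actual - expected) > tol:
            return False
    return True
```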