Article

Parallel Simulation Using Reactive Streams: Graph-Based Approach for Dynamic Modeling and Optimization

by
Oleksii Sirotkin
1,*,
Arsentii Prymushko
1,
Ivan Puchko
1,
Hryhoriy Kravtsov
1,
Mykola Yaroshynskyi
1 and
Volodymyr Artemchuk
1,2,3,4
1
Department of Mathematical and Computer Modeling, G.E. Pukhov Institute for Modelling in Energy Engineering of the NAS of Ukraine, 15 General Naumov Str., 03164 Kyiv, Ukraine
2
Department of Environmental Protection Technologies and Radiation Safety, Center for Information-Analytical and Technical Support of Nuclear Power Facilities Monitoring of the NAS of Ukraine, 34a Palladin Ave., 03142 Kyiv, Ukraine
3
Department of Information Systems in Economics, Kyiv National Economic University Named After Vadym Hetman, 54/1 Peremohy Ave., 03057 Kyiv, Ukraine
4
Department of Intellectual Cybernetic Systems, State Non-Profit Enterprise State University “Kyiv Aviation Institute”, 1 Liubomyra Huzara Ave., 03058 Kyiv, Ukraine
*
Author to whom correspondence should be addressed.
Computation 2025, 13(5), 103; https://doi.org/10.3390/computation13050103
Submission received: 8 March 2025 / Revised: 17 April 2025 / Accepted: 23 April 2025 / Published: 26 April 2025
(This article belongs to the Section Computational Engineering)

Abstract
Modern computational models tend to become more and more complex, especially in fields such as computational biology, physical modeling, and social simulation. With the increasing complexity of simulations, modern computational architectures demand efficient parallel execution strategies. This paper proposes a novel approach leveraging the reactive stream paradigm as a general-purpose synchronization protocol for parallel simulation. We introduce a method to construct simulation graphs from predefined transition functions, ensuring modularity and reusability. Additionally, we outline strategies for graph optimization and interactive simulation through push and pull patterns. The resulting computational graph, implemented using reactive streams, offers a scalable framework for parallel computation. Through theoretical analysis and practical implementation, we demonstrate the feasibility of this approach, highlighting its advantages over traditional parallel simulation methods. Finally, we discuss future challenges, including automatic graph construction, fault tolerance, and optimization strategies, as key areas for further research.

1. Introduction

As simulations become increasingly complex, more and more computational resources are required to execute them. Computing power continues to grow per Moore’s law, but this growth has shifted to the horizontal plane, i.e., it now comes from increasing the number of parallel processors and their cores. Thus, there is a need to develop parallel simulation algorithms capable of utilizing the computing resources of multiple CPUs.
Today, several approaches exist for parallelizing simulations. In particular, we can consider the Time Warp algorithm [1], described in detail in [2]. This algorithm has been studied for many years and has several code implementations [3,4,5]. However, Time Warp uses its own synchronization protocol, which is complex and low level [6]. The RxHLA software framework (based on a reactive adaptation of the IEEE 1516 standard) [7] is similar to Time Warp in complexity and low-level detail. Another approach, based on the CQRS + ES architecture, is described in [8]. However, the authors of that work concentrate more on the practical aspects of implementation without providing much theoretical background. The HPC simulation platform [9] is also a more practical implementation of parallel simulation; it is based on actors and the AKKA library and constitutes a more conservative approach than reactive streams.
The key concept of this paper is to use a general-purpose synchronization protocol to parallelize simulations, namely, the reactive-streams protocol [10,11,12], particularly the version implemented in the AKKA library [13,14,15,16]. Thus, on the one hand, we have a classical mathematical model. On the other hand, we have a general-purpose synchronization protocol. The goal of this work is to unite them.
The rest of this manuscript is organized as follows:
  • Section 2 explains the basic modeling concepts and entities that we will use in this paper.
  • Section 3 extends basic modeling to be represented in the form of a transition graph and shows how a simulation can be performed on this graph.
  • Section 4 shows how the transition graph can be implemented with reactive streams and how simulations can be executed.

2. Substates

Before we start developing a parallel simulation algorithm with reactive streams, we define the substate concept and some objects for later use in this paper. Before reading this section, we suggest you check Supplementary S1, which describes the notation, and Supplementary S2, which gives the common basic definitions used in this article. Also, in Section 5 and Supplementary S6, you can find real-world examples that illustrate the described approach.

2.1. Substates S K q as a Decomposition of the State V

Each state V can be represented as a set of substates, each of which contains only a part of the values of v i V . There must be a way to determine which of the substates belongs to a certain V . One option to achieve this is to use a unique key to mark all substates belonging to a certain V .
Definition 1. 
Let us define a substate $S_K^q$, where $S \subseteq V$ is part or all of the set of values $v_i \in V$, $K$ is some key unique to the state $V \in \mathbf{V}^n$, and $q \in \mathbb{N}$ is the index of the substate among those with the same key $K$.
One or more v i V values can be used as key K . In this case, it makes no sense to include them in any of the substates since they will be presented in the key.
For some state V , we have the set of substates S K q with the same key K . We will denote this set by a bold S K . With such representation of the state V , it is necessary to ensure that all S K q marked by the same K are not contradictory. The pair of S K q with the same key K can be contradictory if one or more values v i S , V differ under the same index.
Definition 2. 
Let us define the set of substates
$$\mathbf{S}_K \triangleq \left\{ S_K^q, S_{K'}^{q'} \;\middle|\; K' = K,\ q' \neq q \right\}$$
where for all $S_K^q \in \mathbf{S}_K$, the consistency criterion
$$\forall S_K^i, S_K^j \in \mathbf{S}_K:\quad v_l \in S_K^i \,\wedge\, v_l' \in S_K^j \;\Rightarrow\; v_l = v_l' \quad \forall l$$
is true.
In this paper, we will talk about arbitrary sets of substates  S K q , whose only requirement is to meet the consistency criterion.
Definition 3. 
Let us define an arbitrary set of substates
$$\mathbf{S}_\mathbf{K} \triangleq \bigcup_{K \in \mathbf{K}} \mathbf{S}_K$$
as the union of the sets $\mathbf{S}_K$ with different $K$.
Notice that these definitions do not require the presence in the set S K (and consequently, in S K ) of a sufficient number of substates S K q to cover all values v i V .
Let us also note that, by definition, the set S K can contain more than one substate S K q with the same key K . However, on an arbitrary set S K that contains duplicate keys K , we can construct a set S K that does not contain them. For this, we need to combine all substates with the same key into one substate:
$$\mathbf{S}_\mathbf{K}^{d} \mapsto \mathbf{S}_\mathbf{K}^{u}: \qquad S_K^{u} = \bigcup_{S_K^q \in \mathbf{S}_K^{d}} S_K^q$$
where S K d is a set with duplicate keys and S K u is a set with unique K . Thus, we can say that an arbitrary set of substates S K can be considered as a key–value structure or as the surjective function
$$f : \mathbf{K} \to \mathbf{S}$$
As follows from the definition, the set S K can only be constructed from a set of states V V n in which a unique key K can be associated with each state V V . Otherwise, this will lead to the appearance of substates S K q in conflict.
The inverse transformation, i.e., the construction of V V n from an arbitrary S K
S K S K S K V V = K S K V
is possible only if S K contains enough substates S K j to construct each state V V completely.
The set of substates S K can be equivalent to the state space V n if this state-space contains enough substates to construct each state V V n . We will denote such a set by S K .
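To make the key–value view of an arbitrary substate set concrete, the following minimal Python sketch (our own illustration, not from the paper's supplementary code; all names are invented) stores substates as (key, values) pairs, rejects contradictory pairs, and merges duplicates with the same key into one substate, yielding the surjective map f : K → S:

```python
# Illustrative sketch: substates as (key, values) pairs; names are ours, not the paper's.

def consistent(s1: dict, s2: dict) -> bool:
    """Two substates are consistent if shared variable indices carry equal values."""
    return all(s1[v] == s2[v] for v in s1.keys() & s2.keys())

def merge_substates(substates):
    """Combine all substates with the same key K into one substate,
    turning an arbitrary set of substates into a key-value structure f: K -> S."""
    merged = {}
    for key, values in substates:
        if key in merged:
            if not consistent(merged[key], values):
                raise ValueError(f"contradictory substates for key {key!r}")
            merged[key].update(values)  # union of non-conflicting values
        else:
            merged[key] = dict(values)
    return merged

# Two substates share key K=1 and agree on 'a', so they merge into one substate.
table = merge_substates([(1, {"a": 10}), (1, {"a": 10, "b": 20}), (2, {"a": 30})])
```

Attempting to merge two substates that disagree on a shared value raises an error, mirroring the consistency criterion.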

2.2. Representation of the Dependence of Y on X as a Set of Substates: Y = S X | G

The dependence of the variables Y upon X can be represented as a set of states S K . This representation is an alternative to a set of functions  F X | G . In this case, it is convenient to choose the values X X n as the key K and the subset of the values Y Y n as the values of S (including S = ).
Definition 4. 
Let us define the substate S X q , where the key is K = X , X X n , the value S Y , Y Y n , and the index q N  is such that
  S X q ,   S X q X = X , q q
also, X S X q  and S X q \ X Y .
Since the values of the parameters G G n are constant for any possible values of X and Y , they are also constant for any possible substates S X q composed of the values of X and Y . Thus, the definition of S X q does not include values G . Being joined into a set, the substates S X q will have the same parameters G , but may have different values of the key X .
Definition 5. 
Let us define the dependence
Y = S X | G S X q | G
which represents the dependence of the variables  Y   upon  X   for given parameters  G   that are the same for all substates included in the set  S X | G . At the same time, substates should not be contradictory:
S X i , S X j S X | G v l S X i v l S X j   l ,  
The set S X | G with all substates having the same key X will be denoted S X | G .
Let us note that we do not impose a completeness restriction upon the set S X | G —i.e., S X | G may not contain all of the keys X X n or may even be empty: S X | G = . S X | G may also not contain all Y Y n and/or it may not contain enough S X q to build one or more complete Y .
The representation Y = S X | G is equivalent to the representation Y = F X | G if and only if, for each X X n at a given G G n , the representations are equal:
Y = S X | G Y = = F X | G X X n F X | G = S X | G
For each key X , there exists a set of substates S X q that cover all possible values Y Y n . We will denote this set by S X | G . This set may not satisfy the consistency criterion (Formula (1)) and will have cardinality
$$\left| \mathbf{S}(X|G) \right| = \prod_{i=1}^{n} \left| y_i \right|$$
If we join the sets S X | G for all possible keys X X n , we obtain the set of all possible substates. We will denote it by
$$\mathbf{S}(\mathbf{X}|G) = \bigcup_{X \in \mathbf{X}^n} \mathbf{S}(X|G)$$
The cardinality of this set when S = Y will be
$$\left| \mathbf{S}(\mathbf{X}|G) \right| = \sum_{X \in \mathbf{X}^n} \left| \mathbf{S}(X|G) \right|$$
Moreover, $\left| \mathbf{S}(\mathbf{X}|G) \right| \leq \left| \mathbf{V}^n \right|$ since only one set of values $G$ from the set $\mathbf{G}^n$ is used (note that the cardinalities will be equal in the case $\left| \mathbf{G}^n \right| = 1$).
In practice, we will more often see sparse S X | G , where it is impossible to completely construct Y for every X X n . The use of sparse S X | G will reduce the modeling accuracy. In general, this is not a problem from an engineering standpoint since increasing or decreasing the cardinality S X | G allows us to choose an acceptable accuracy level for solving a specific simulation problem.
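A sparse set of substates can be sketched as a plain lookup table. In the following Python illustration (our own; the dependence y = g·x² and the sampled keys are assumptions chosen only for the example), keys absent from the table simply cannot be constructed, which is exactly the accuracy trade-off described above:

```python
# Illustrative sketch: a sparse dependence Y = S(X|G) stored as a lookup table.
# The modeled dependence y = g * x**2 and the sampled keys are our assumptions.

def build_sparse_table(g: float, keys):
    """Pre-build substates only for the given keys X; all other X are absent."""
    return {x: {"y": g * x ** 2} for x in keys}

def simulate(table, x):
    """Return Y for key X, or None where the sparse table cannot construct it."""
    return table.get(x)

# A coarse table: denser sampling would raise accuracy at the cost of cardinality.
table = build_sparse_table(g=2.0, keys=[0, 1, 2, 3])
```

Increasing the number of pre-built keys enlarges the table's cardinality and with it the achievable modeling accuracy.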

2.3. Reflection Y ̌ X ¯ G as a Record of Changes in the Values of Variables

We can reflect the behavior of a modeled object by measuring its properties and recording the corresponding values of the variables X , Y , and G . By abstracting from a specific implementation, we will call such a record a reflection of the modeled object.
Definition 6. 
Let us define the reflection  Y ̌ X ¯ G   as an arbitrary representation of the dependence of the dependent variables  Y ̌   upon the independent variables  X ¯   and the values of the parameters  G . Moreover, this dependence is constructed by studying and measuring the modeled object’s properties.
We can graphically represent the building of the reflection Y ̌ X ¯ G by adding the points
V = X ¯ Y ̌ G
into the state space V n at the coordinates X ¯ , Y ̌ , G , where X ¯ X n ¯ , Y ̌ Y n ̌ , and G G n . The added points will form a geometric figure that reflects the behavior of the modeled object.
A reflection can be represented as a set of functions
Y ̌ X ¯ G F = F X ¯ | G = Y ̌
or as a set of states
Y ̌ X ¯ G S = S X ¯ | G = Y ̌
In the first case, a set of functions can be constructed by recording the obtained or measured values of the variables X , Y , and G [17]. In the second case, from the values of Y ̌ obtained or measured with respect to X ¯ and G , the substate S X ¯ q = 1 can be directly built and added to the set of substates S X ¯ | G .
For example, writing down the values of a stopwatch (which reflects the variable t ) and a level gauge (which reflects v w a t e r ), we obtain the function v w a t e r t , which reflects the dependence of v w a t e r on t . In practice, this function will be defined only on the interval or intervals of time t m e a s u r i n g during which the measurement was performed.
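The recording process can be sketched as follows; this is our own illustration, and the stopwatch and level-gauge readings are invented for the example:

```python
# Illustrative sketch of building a reflection by recording measurements.
# The measurement values below are invented for the example.

def record_reflection(measurements):
    """Each measured pair (t, v_water) becomes a substate with key X = t,
    added to the reflection; the reflection is defined only on measured t."""
    reflection = {}
    for t, v_water in measurements:
        reflection[t] = {"v_water": v_water}
    return reflection

# Stopwatch readings t paired with level-gauge readings v_water.
refl = record_reflection([(0.0, 1.00), (1.0, 0.95), (2.0, 0.90)])
```

Times outside the measured interval are simply absent from the reflection, matching the remark that the recorded dependence is defined only where measurement was performed.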

2.4. Model Y ^ ( X ¯ | G ) as an Imitation of Changes in the Variables V

In one of several ways, we can define the dependencies of the variables Y on X and G without directly measuring the properties of the modeled object [18,19,20]. We will call the dependence defined in this way the model of the modeled object.
Definition 7. 
The model of the modeled object  Y ^ X ¯ G  is an arbitrary representation or implementation of the dependence of the dependent variables  Y ^  upon the independent variables  X ¯  and the values of the parameters  G . Moreover, this dependence is constructed without the direct participation of the modeled object.
We can graphically represent the model Y ^ ( X ¯ | G ) as a geometrical figure in the state space V n consisting of the points
V = X ¯ Y ^ G
that define the relationship between the variables Y ^ and X ¯ and the parameters G , where X ¯ X n ¯ , Y ^ Y n ^ and G G n .
The model can be implemented as a set of possibly partial functions
Y ^ X ¯ G F = F X ¯ | G = Y ^ ,
or as a set of states
Y ^ X ¯ G S = S X ¯ | G = Y ^ ,
In the first case, the set of functions can be determined analytically or in another way. In the second case, many substates must be pre-built in one way or another.
We can define the model Y ^ ( X ¯ | G ) such that it completely coincides with certain reflection Y ̌ X ¯ G ; however, it is much more reasonable and useful to construct Y ^ ( X ¯ | G ) to predict changes in the modeled object.
From a practical point of view, we are interested in how accurately the constructed model Y ^ ( X ¯ | G ) corresponds to the modeled object. One way to determine compliance is to compare the model and reflection Y ̌ X ¯ G (i.e., to calculate the magnitude of their inconsistency in one way or another). Let us denote the inconsistency value by ε .
For example, for the case in which all variables V have domain R , we can define ε R as the integral sum of the difference of the values Y ̌ and Y ^ for each X ¯ X n ¯ :
$$\varepsilon = \sum_{\bar{X} \in \bar{\mathbf{X}}^n} \sum_{i=1}^{\left| Y \right|} \left| \check{Y}(\bar{X}|G)_i - \hat{Y}(\bar{X}|G)_i \right|$$
where G G n .

2.5. Simulation of the Model Y ^ ( X ¯ | G ) as a Calculation of a Subset of Y ^ Y n ^ from the Subset X ¯ X n ¯ and the Parameters G

The simulation task can be reduced to obtaining or calculating the subset of the unknown values of the dependent variables Y ^ from the subset of the known values of the independent variables X ¯ and the values of the parameters G using a certain model Y ^ ( X ¯ | G ) .
Definition 8. 
Let us define the simulation as the operator
X ¯ Y ^ ( X ¯ | G ) Y ^ ,
where  X ¯ X n ¯  is a possibly ordered set of unique known values of independent variables,  Y ^ Y n ^  is the desired set of possibly not unique values of the dependent variables, and  Y ^ ( X ¯ | G )  is a certain model used to obtain the desired  Y ^ Y n ^  for a given  X ¯ X n ^ .
For the case where the model is implemented as a set of functions (Formula (3)), the simulation
X ¯ Y ^ X ¯ G F Y ^
is simply a calculation of the result Y ^ Y ^ for each argument X ¯ X ¯ :
X ¯ X ¯ X ¯ Y ^ Y ^ = F X ¯ | G X ¯ Y ^
where
Y ^ = F X ¯ | G X ¯
which is the operation for calculating Y ^ Y n ^ for a given X ¯ X n ^ . For a model implemented as a set of substates (Formula (4)), the simulation is a matter of finding all substates for each key X ¯ X ¯ and then building the values of Y ^ Y ^ from the found substates
X ¯ X ¯ X ¯ Y ^ Y ^ = S X ¯ | G X ¯ Y ^ ,
where
S X ¯ | G X ¯ = S X ¯ | G
is the operation for selecting a subset S X ¯ | G S X ¯ | G of substates S X | G j with the same key X .
A simulation can be interactive, i.e., it can react to external events and produce results to the outside during the calculation. In the simplest case, an interactive simulation can be represented as a series of simulations
X ¯ i Y ^ ( X ¯ | G ) i Y ^ i ,
of the set of models
Y ^ ( X ¯ | G ) i
for the corresponding sets of subsets of values of independent variables X ¯ i sequentially received during the interaction and the sets of dependent Y ^ i sequentially returned as simulation results.
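A minimal sketch of such a series of simulations, assuming a trivial model y = x + g (our choice, purely for illustration), where each batch of independent-variable values is processed and its results returned before the next batch arrives:

```python
# Illustrative sketch: interactive simulation as a series of simulations
# over input batches received one at a time. The model y = x + g is assumed.

def make_model(g):
    """A trivial model parametrized by g."""
    return lambda x: x + g

def interactive_simulation(model, input_batches):
    """For each incoming batch of X values, run a simulation and yield its
    results immediately, before the next batch is received."""
    for batch in input_batches:
        yield [model(x) for x in batch]

# Two batches arrive sequentially; results are produced per batch.
results = list(interactive_simulation(make_model(10), [[1, 2], [3]]))
```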

3. Graph Modeling

We show how the model Y ^ ( X ¯ | G ) can be represented as a transition graph and how a simulation can be performed on this representation. We define and prove the rules for constructing a consistent transition graph. Before reading this section, we recommend checking Supplementary S3, which describes the transition function concepts. In Section 5 and Supplementary S7, we present a simple example of the construction and simulation of a transition graph.

3.1. The Transition Graph Γ | G and the Simulation Graph γ | G

We can join the function Θ | P j Θ | G and a set of functions Θ | O i , k = 1 , , Θ | O i , k = n Θ | G represented as graphs by combining the result nodes
Θ | O i , k = 1 S ̀ : S ̀ i , k = 1 , , Θ | O i , k = n S ̀ : S ̀ i , k = n
and the argument nodes
S : S k = 1 , j , , S : S k = n , j Θ | O j
with intermediate-variable nodes
Θ | O i , k = 1 S k = 1 : S ̀ i , k = 1 , , Θ | O i , k = n S k = n : S ̀ i , k = n Θ | O j
(see Figure 1).
We continue in the same way to sequentially join the functions included in the same set Θ | G (possibly using the same function more than once), obtaining some DAG (see Figure 2). We call such a DAG a transition graph. As an option, we can also combine two or more root variables S k , i = 1 , , S k , i = n that do not have incoming edges, thereby reducing the total number of nodes.
Definition 9. 
We define the transition graph Γ | G as a DAG constructed on the set of transition functions Θ | G  by sequentially joining arbitrary subsets of functions
Θ | O i , Θ | O j , k = 1 , , Θ | O j , k = n Θ | G
and by combining the result node
S ̀ i : S ̀ i Θ | O i
and the argument nodes
S k , j , k = 1 : S k , j , k = 1 Θ | O j , k = 1 , , S k , j , k = n : S k , j , k = n Θ | O j , k = n
such that
S ̀ i S k , j , k = 1 , , S k , j , k = n S ̀ i S k , j , k = 1 , , S k , j , k = n
into intermediate variable nodes
Θ | O i S : S ̀ i Θ | O j , k = 1 , , Θ | O j , k = n
Additionally, the root nodes
S k , i = 1 S k , i = n
and their domains
S k , i = 1 S k , i = 1
may also be combined.
We note that this definition imposes no restrictions on the graph structure except for its acyclicity (the result of the next joined Θ | O cannot be connected to the argument of any already joined Θ | O ) and connectivity (all nodes of the graph Γ | G are connected by at least one edge).
When we join the transition functions Θ | O , we also join the transitions θ | O from the equivalent set θ | O Θ | O , forming a set of more complex DAGs with the same structure as the graph Γ | G , but which consist of the substates S X q and transitions θ | O (see Figure 3). We call such DAGs simulation graphs.
Definition 10. 
A simulation graph γ | G  is defined as the DAG obtained by constructing a transition graph Γ | G ; it has the same structure as Γ | G . The graph γ | G consists of constructions of the form
θ | O i S X q , i θ | O j , k = 1 , , θ | O j , k = n
which results from joining the transitions
θ | O i θ | O i Θ | O i
and
θ | O j , k = 1 θ | O j , k = 1 Θ | O j , k = 1 , , θ | O j , k = n θ | O j , k = n Θ | O j , k = n
belonging to the set of joined functions
Θ | O i , Θ | O j , k = 1 , , Θ | O j , k = n Θ | G
such that
S ̀ X q , i θ | O i = S X q , j , k = 1 θ | O j , k = 1 = , , = S X q , j , k = n θ | O j , k = n
We note that all γ | G will have a structure exactly matching Γ | G . According to the definition of γ | G , during the construction of Γ | G , incomplete graphs of γ | G with structures not coinciding with that of Γ | G will be discarded. Thus, the substates S X q included in the discarded graphs γ | G will also be removed from the domains S of the variables S included in the constructed Γ | G .
Let us denote some arbitrary set of graphs γ | G by γ | G . According to the definitions of the graphs Γ | G and γ | G , each of the substates S X q from the domains S of the variable nodes S will belong to one of the simulation graphs γ | G . All S X q terms that do not belong to any γ | G will be discarded during the construction of Γ | G , along with the incomplete γ | G .
Thus, we can represent the graph Γ | G as an equivalent set of graphs γ | G . We will denote such a set as γ | G Γ | G ; this set will include all S X q from all domains S :
S Γ | G = γ | G γ | G Γ | G S X γ | G
where S Γ | G is the set of domains S of the variable nodes S from the graph Γ | G and S X γ | G is the set of all S X q belonging to γ | G , which is consistent:
S X | G S X γ | G = S X | G
Moreover, all simulation graphs γ | G will share the same set of parameters G split into parts O .
Let us index each node from the set $\mathbf{S}(\Gamma|G)$ with the depth index
$$d = \max\, \mathrm{len}\!\left( S^{root} \rightsquigarrow S^{d} \right),$$
where $d \in \mathbb{N}$, $S^{root} \rightsquigarrow S^{d}$ is the set of all possible paths from any root node $S^{root}$ (i.e., a node that has no incoming edges) to the indexed node $S^{d}$, and $\mathrm{len}(\cdot)$ is the length of a path (the number of edges in it). Let us also index each transition function $\Theta|O$ with the same index $d$ as its result node:
$$\Theta|O^{d} \to S^{d}$$
Thus, each root node of S r o o t will have d = 0 , and each leaf node S l e a f (i.e., such that it has no outgoing edges) will have d = n , where n is the minimum number of edges to the nearest root node S d = 0 .
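The depth index can be computed with a simple longest-path traversal over the DAG. The following Python sketch is our own illustration, with an invented three-node graph; roots receive d = 0, and every other node receives the maximum path length from any root:

```python
# Illustrative sketch: depth index of each node in a transition DAG,
# computed as the longest path from any root (node with no incoming edges).
from functools import lru_cache

def depth_indices(edges, nodes):
    parents = {n: [] for n in nodes}
    for a, b in edges:
        parents[b].append(a)

    @lru_cache(maxsize=None)
    def depth(n):
        ps = parents[n]
        return 0 if not ps else 1 + max(depth(p) for p in ps)  # roots get d = 0

    return {n: depth(n) for n in nodes}

# A small DAG: r -> a -> b and r -> b; b is reached by paths of length 1 and 2,
# so its depth index is the maximum, 2.
d = depth_indices([("r", "a"), ("a", "b"), ("r", "b")], ["r", "a", "b"])
```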

3.2. Construction of a Consistent Transition Graph Γ | G

From a practical viewpoint, we want to be able to construct transition graphs Γ | G from a set of predefined transition functions Θ | G , in other words, to build models from a set of ready-made functional blocks, similar to the Simulink software (Version R2024b). To implement this approach successfully, it is necessary to guarantee the consistency of Γ | G at the local level, i.e., at the level of individual functions Θ | P .
We can represent some simulation graph γ | G as the set of directional paths (dipaths) covering all substates S X q S X γ | G and transitions θ | O θ γ | G .
Definition 11. 
Let us define a dipath
p | G S X q , l = 0 θ | O l = 1 θ | O l = n S X q , l = n
in the simulation graph  γ | G γ | G Γ | G , where  S X q , l = 0 S l = 0 ,  S X q ,   l = n S l = n ,  l N  is the index of the node, which is in the dipath, such that  l = 0  corresponds to some root node  S X q , r o o t , and  l = n  corresponds to some leaf node  S X q , l e a f  in the graph  γ | G ,  S X q , l = 0 , , S X q , l = n S X γ | G ,  θ | O l = 1 , , θ | O l = n θ γ | G ,  S l = 0 , , S l = n S Γ | G  , with  S X p | G S X γ | G .
We denote some arbitrary set of paths by p | G , which is not necessarily related to the same graph γ | G .
The set of paths p | G can be equivalent to the graph γ | G if the paths in this set contain all substates S X q S X γ | G and transitions θ | O θ γ | G :
p | G γ | G p | G p | G S X p | G = = S X γ | G j p | G p | G θ p γ | G j = θ γ | G j
In order to guarantee the consistency condition
γ | G γ | G Γ | G S X γ | G S X | G
for the graph Γ | G (i.e., to guarantee that each of the simulation graphs γ | G described by Γ | G will not contain any inconsistent substates), the graph Γ | G must meet the following two restrictions:
  • For each graph γ | G j γ | G Γ | G , each substate S X q , j S X γ | G j , S X p | G j (i.e., located on one of all possible paths p | G j ) must have a unique key X S X q , j regarding the p | G j .
  • For each graph γ | G γ | G Γ | G , the values y y of some variable y : y Y should only belong to the set of substates S X q , j S X γ | G j such that there exists in γ | G at least one path p | G p | G γ | G , including all of these substates.
At the local level (i.e., without studying the entire graph Γ | G ), the above restrictions can be met by applying the following construction principles (Supplementary S4):
I.
The set of keys X n must be linearly ordered.
II.
Each transition function
S : S k = 1 , , S : S k = n Θ | O S ̀ : S ̀
(where Θ | O Θ Γ | G ) for each transition
S X q , j , k = 1 , , S X q , j , k = n θ | O j S ̀ X q , j
(where θ | O j θ p | G j , θ | O Θ | O ) in some graph γ | G j γ | G Γ | G must generate the resulting substate S ̀ X q , j S ̀ , S X γ | G j such that its key X q , j S ̀ X q , j will always satisfy the conditions
X q , j > max X q , j , k = 1 , , X q , j , k = n
or
X q , j < min X q , j , k = 1 , , X q , j , k = n
where
X q , j , k S X q , j , k S k , S X γ | G j
III.
For each variable y : y Y , its values y y must belong to no more than one root node S : S l = 0 :
y : y Y S : S l = 0 S Γ | G | y y , y S l = 0 1
where
y S S S S S S q | S q S X q S
(i.e., the set of all y values in all substates S X q form the domain of the variable S ).
IV.
If, for some node S : S i S Γ | G and some variable y : y Y the condition y y , y S i is true, then either the node S i must be a root, or there must be a transition function
, S : S i , k = 0 , , S : S i , k = n , Θ | O i S ̀ i
with one or more arguments S : S i , k for which the condition y y , y S i is true and in the graph Γ | G there exists a chain
S i , k = 0 Θ | O i , k = 1 , , Θ | O i , k = n S i , k = n
that includes all S i , k . Moreover, for the last argument S i , k = n in the chain, there should not be another function
, S i , k = n , Θ | O i S ̀ i
for which the condition y y , y S ̀ i is true.
In practice, principle (I) is easy to satisfy since linearly ordered sets are common; for example, time, speed, etc., can be represented by variables with domain ℝ. Furthermore, if the domains of all independent variables are linearly ordered, then the set of keys X n will also be linearly ordered.
Principle (II) says that the key value monotonically increases or decreases as the simulation graph γ | G is calculated. This approach can be applied, for example, to physical models, where independent variables are usually rational numbers that increase or decrease over the course of the simulation.
Principle (III) holds if the graph Γ | G has a single root node S : S l = 0 S Γ | G such that in each graph γ | G j there will be only one substate S X q , j , l = 0 S l = 0 , thereby excluding the possibility that the values y y of the same variable y : y Y are in different substates S X q , j , l = 0 .
Another approach to implementing (III) is for each root node S : S l = 0 to include y y values only from its own unique set of variables y 1 , , y n Y , such that
y i , y j i j , y i y j =
This approach, for example, is convenient in the graphs Γ | G used for interactive simulation, where each next node S l = 0 reflects the next input of data from outside the simulation.
In practice, a simple way to implement principle (IV) is to check whether adding the next function Θ | O to form S ̀ does not include variables that are already in the results of the functions that have joint arguments S with Θ | O . For example, if there are nodes S k = 1 and S k = 2 for which y S k = 1 = a , b and y S k = 2 = x , y , where
y S : S y : y Y | y y , y S
and these nodes are the arguments of some function
S k = 1 , S k = 2 Θ | O j = 1 S ̀ j = 1
for which the result is y S ̀ j = 1 = a , x , then we can add only a function
S k = 1 , S k = 2 Θ | O j = 1 S ̀ j = 1
for which y S ̀ j = 2 = b , y and either y S ̀ j = 2 = y or y S ̀ j = 2 = b , but not y S ̀ j = 2 = a , b , y .
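A local check in the spirit of this example can be sketched as follows; this is our own simplification, encoding each function as a pair of (argument node names, result variable set). A candidate function is rejected when it shares an argument node with an existing function and their result variable sets overlap:

```python
# Illustrative local check for the variable-ownership rule (principle IV):
# when two functions share argument nodes, their result nodes must not both
# claim the same variable. Names and the graph encoding are our own.

def can_add(existing, candidate):
    """existing: list of (arg_node_names, result_var_set) already in the graph.
    candidate: the same pair for the function being added.
    Reject the candidate if it shares an argument node with an existing
    function and their result variable sets overlap."""
    c_args, c_vars = candidate
    for e_args, e_vars in existing:
        if set(c_args) & set(e_args) and set(c_vars) & set(e_vars):
            return False
    return True

# One function over argument nodes S1, S2 already produces variables {a, x}.
existing = [(["S1", "S2"], {"a", "x"})]
```

Adding a function producing {b, y} over the same arguments is allowed, while one re-producing the variable a is rejected, matching the example above.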

3.3. Computability of the Simulation Graph γ | G and the Initial Set of Substates S

In practice, we will need to find some specific simulation graph γ | G γ | G Γ | G from some known set of consistent substates S X | G S X γ | G associated with the nodes S of the graph Γ | G . We will call S X | G the initial set of substates.
Definition 12. 
Let us define the initial set of substates
S S = S X q
associated with the specific nodes  S   of the graph  Γ | G   such that
! γ | G S S X γ | G ,
where  S : S S Γ | G ,  S X q S , S X γ | G ,  γ | G γ | G Γ | G .
The search for a specific graph γ | G with some set S can be imperatively represented as a calculation of all functions Θ | O Θ Γ | G , using S as the initial arguments for these functions.
Note that the definition requires that S be a subset of the one and only one set S X γ | G . However, in the general case, some S X | G can be a subset of more than one S X γ | G . In this case, in the imperative representation of the search, a single graph γ | G cannot be calculated from such S X | G since for some or all functions Θ | O Θ Γ | G , not all arguments can be defined.
By representing the search for a specific γ | G in the form of a calculation of the functions Θ | O Θ Γ | G , we notice that all Θ | O will be calculated only if the values of all root nodes S r o o t S Γ | G are known or can be obtained in some way. Thus, S is a subset of the unique set S X γ | G (Formula (8)) if and only if, for each initial node S r o o t S Γ | G , there exists a path
S r o o t Θ | O 1 Θ | O n S d e f
where S d e f S Γ | G , S S is a node whose value is defined in S , and all functions Θ | O i are this reversible (Supplementary S5).
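The imperative representation of this search can be sketched as repeatedly firing every transition function whose arguments are all defined, starting from the initial set; if no function can fire and some remain, the graph is not computable from that initial set. The tiny two-function graph below is our own example, not from the paper:

```python
# Illustrative sketch: finding a simulation graph imperatively by firing each
# transition function once all of its argument nodes are defined.
# The tiny graph and functions below are our own example.

def run_graph(functions, initial):
    """functions: list of (arg_node_names, result_node_name, callable).
    initial: dict node_name -> value (the initial set of substates).
    Returns values for every node, or raises if the graph is not computable."""
    values = dict(initial)
    pending = list(functions)
    while pending:
        progressed = False
        for f in list(pending):
            args, result, fn = f
            if all(a in values for a in args):  # all arguments defined
                values[result] = fn(*(values[a] for a in args))
                pending.remove(f)
                progressed = True
        if not progressed:
            raise RuntimeError("graph is not computable from the initial set")
    return values

# x is the only root; y and z are derived by the two transition functions.
funcs = [(("x",), "y", lambda x: x + 1),
         (("y", "x"), "z", lambda y, x: y * x)]
vals = run_graph(funcs, {"x": 3})
```

With an empty initial set, no function can ever fire, which corresponds to an initial set that does not determine a unique simulation graph.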
Another important property of this approach is the glitch freedom described in [21,22]. Since only one graph γ | G is to be found, inconsistent substates S X q never arise.

3.4. Representation of the Dependence of Y on X in the Form of a Simulation Graph Y = Γ | G , S X and a Graph Model Y ^ X ¯ G Γ

The dependence of the dependent variables Y upon the independent variables X can be represented as a tuple of the transition graph Γ | G , and the set of initial substates S with given values of the parameters G .
Definition 13. 
Let us define a pair
Y = Γ | G , S X X X n S X γ Γ | G | S X
representing the dependence of the variables  Y   on the variables  X , as parametrized by the values of  G , where
γ | G = γ Γ | G | S
is a simulation graph  γ | G γ | G Γ | G   found for a given  Γ | G ,  S   and  G , and
Y = X X S X | G X
is the merging operation of the substates  S X q S X | G  with the same key  X X n  into the set of values  Y Y n .
The representation Y = Γ | G , S X can be used to implement the model Y ^ X ¯ G ; we call this implementation a graph model and denote it as
Y ^ X ¯ G Γ = Γ | G , S X ¯ = Y ^ ,
This implementation is similar to a representation in the form of a set of substates (Formula (4)), except that the set S X ¯ | G must first be found as
S X ¯ | G = S X γ Γ | G | S

3.5. Simulation of the Graph Model Y ^ X ¯ G Γ as a Calculation of a Subset of the Values Y ^ Y n ^ on the Subset X ¯ X n ¯ and the Parameters G

For the graph model Y ^ X ¯ G Γ , we can define the simulation as the operator
X ¯ Y ^ X ¯ G Γ Y ^ ,
where X ¯ X n ¯ is the subset of known values of the set of independent variables X ¯ V and Y ^ Y n ^ is the subset of unknown values of dependent variables Y ^ V .
The simulation can be implemented as a search for the simulation graph γ | G γ | G Γ | G for a given initial set S and a set G . Then, from the set S X ¯ | G ,   Y ^ Y ^ is constructed for each X ¯ X ¯ as follows:
X ¯ X ¯ X ¯ Y ^ Y ^ = S X γ Γ | G | S X ¯ Y ^
In the simulation problem, we can significantly optimize the search for the graph γ | G . Since the set X ¯ is usually much smaller than X n ¯ , we can search or calculate only a part of the substates from S X γ | G , which contain all the required keys X ¯ X ¯ :
S X ¯ = S X q S X γ | G | X S X q X ¯ ,
We can also optimize S by including substates that are as close as possible (from the point of view of the distance in the graph Γ | G ) to substates from the desired S X ¯ or even equivalent to these substates. This will reduce the number of calculations not related to the search for S X ¯ (see Figure 4).
For the graphical model Y ^ X ¯ G Γ , an interactive simulation can also be performed. In the simplest case, this requires many models Y ^ X ¯ G Γ i ; however, a more interesting and optimal approach is to undertake interactive manipulation of the values of the nodes S S Γ | G when imperative representations (sequential calculation of the functions Θ | P ) of the operation γ Γ | G | S are used. This approach was explored briefly in [23].
Two patterns are possible here:
  • Push pattern:
    This pattern can help synchronize the simulation with some external processes (for example, to synchronize with real-time). The essence of the pattern is that some function Θ | O cannot be calculated until all its arguments S are defined; thus, we can locally pause the simulation, leaving some of the root nodes S r o o t S Γ | G uninitialized. We can then continue it by defining these nodes.
  • Pull pattern:
    This pattern can be used to implement an asynchronous simulation reaction to some external events—for example, to respond to user input. As in the previous case, some S r o o t S Γ | G remain uninitialized. However, the simulation does not stop there; their values are constructed as needed to calculate the next Θ | O . Figuratively, one can imagine that an undefined S r o o t is computed by some set of unknown transition functions, possibly also combined into a transition graph. In other words, there is some “shadow” or “unknown” part of the graph Γ | G and, as a result of its calculation, the S r o o t is initialized (see Figure 5).
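The two patterns can be illustrated with a minimal sketch (Python; the class and method names are ours, not from the supplementary code): a root node holds an initially undefined value that is either defined externally (push), resuming the simulation, or computed on demand by a "shadow" supplier (pull).

```python
from concurrent.futures import Future

class RootNode:
    """An initially uninitialized root substate S_root (illustrative sketch)."""
    def __init__(self, supplier=None):
        self._future = Future()
        self._supplier = supplier  # pull pattern: a "shadow" computation

    def push(self, value):
        # Push pattern: an external process defines the node, resuming
        # the locally paused simulation (e.g., real-time synchronization).
        self._future.set_result(value)

    def get(self):
        # Pull pattern: if still undefined, construct the value on demand,
        # as if computed by some unknown transition function.
        if not self._future.done() and self._supplier is not None:
            self._future.set_result(self._supplier())
        return self._future.result(timeout=1)

# Push: the simulation waits until an external event supplies the value.
paused = RootNode()
paused.push(42.0)
assert paused.get() == 42.0

# Pull: the value is constructed only when it is needed.
on_demand = RootNode(supplier=lambda: 7.0)
assert on_demand.get() == 7.0
```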

4. Logical Processors

We show how the graph model Y ^ X ¯ G Γ can be implemented using the reactive-stream paradigm in the form of a computational graph. We also show how the simulation can be evaluated on this graph and offer ways to optimize it. Additionally, in Supplementary S8, we implement a simple computational graph and perform a simulation with it.

4.1. Reactive Streams and Graph Model Y ^ X ¯ G Γ

The concept of reactive streams was formulated in 2014 in a manifesto [10,11,12] and extended by the AKKA library developers with tools for composing reactive streams into computational graphs [13,24], which are already widely used in practice [9,25,26,27]. The graph nodes are logical processors, and the edges are the channels representing the stream of messages that transmit data; each of the processors transforms the messages in some way. Generally, reactive streams are an implementation of the well-known dataflow-programming paradigm [25].
This chapter will follow the approach described in [22,28,29] (i.e., we will compose a computational system from small blocks that process data streams). However, we will rely on reactive streams to do the hard work of distributing the computation and balancing the load.
We will denote messages (values) by M , logical processors (reactors) by L P , channels connecting the processors by D , and the computational graph by C .
We can transform an arbitrary graphical model Y ^ X ¯ G Γ (Formula (9)) into a computational graph C :
  • To represent each substate S X q S X Γ | G with the message M = S X q .
  • To replace all Θ | P Θ Γ | G for which S k = 1 , , S k = n S S with equivalent processors L P e v a l :
    S k = 1 D k = 1 , , S k = n D k = n Θ | P L P e v a l S ̀ D
    and all Θ | P (for which S ̀ S S ) with processors L P e v a l equivalent to the inverse functions Θ 1 | P :
    S ̀ D Θ | P Θ 1 | P L P e v a l S k = 1 D k = 1 , , S k = n D k = n
  • To successively replace, with the equivalent L P e v a l , all functions Θ | P Θ Γ | G whose arguments S k = 1 , … , S k = n have already been replaced by the channels D k = 1 , … , D k = n :
    D k = 1 , , D k = n Θ | P L P e v a l S ̀ D
  • And to successively replace, with the L P e v a l equivalent to the inverse function Θ 1 | P , all Θ | P whose result S ̀ has already been replaced by the channel D :
    D Θ | P Θ 1 | P L P e v a l S k = 1 D k = 1 , , S k = n D k = n
As a result, we obtain a graph C containing the equivalent L P e v a l for each Θ | P Θ Γ | G , but with a possibly different structure compared to Γ | G , since its construction was carried out starting from S S S rather than from the root nodes S r o o t S Γ | G (see Figure 6).
Next, each root channel D r o o t D C must be connected to a logical processor L P i n i t , whose task is to send the corresponding M = S X q (where S X q S X S ) , which starts the computational process (see Figure 7).
Moreover, all or part of the channels D D C must be connected with one or more L P c o l l e c t , which will collect part or all of the calculated substates S X q S X γ | G belonging to the graph γ | G γ | G Γ | G , given by the set of initial states S (see Figure 7).
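The construction above can be sketched in miniature (Python; queues stand in for the channels D , plain functions stand in for the logical processors, and all names are illustrative):

```python
from queue import Queue

# Channels D are queues; logical processors read from input channels
# and write to output channels.

def lp_init(channel, substate):
    # LP_init: emit the initial message M = S_X^q, starting the computation.
    channel.put(substate)

def lp_eval(in_channels, out_channel, theta):
    # LP_eval: wait for all argument substates, apply the transition
    # function Θ|P, and forward the resulting substate.
    args = [ch.get() for ch in in_channels]
    out_channel.put(theta(*args))

def lp_collect(channel, sink):
    # LP_collect: accumulate computed substates S_X^q.
    sink.append(channel.get())

d0, d1, collected = Queue(), Queue(), []
lp_init(d0, 1.0)                       # root channel D_root
lp_eval([d0], d1, lambda s: s + 0.5)   # LP_eval for some Θ|P
lp_collect(d1, collected)
assert collected == [1.5]
```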

4.2. Graph C Optimization

Simply replacing the Θ | P functions with processors L P e v a l yields an extremely suboptimal and potentially infinite processing graph C , which is undesirable from the viewpoint of minimizing computational resources. To address this, we can optimize graph C . For example, consider two optimization methods:
  • Folding of cyclic sequences in the graph Γ | G :
    Consider a chain of arbitrary length of the same function θ | P , as in Figure 8a. This can be transformed into a chain of logical processors L P e v a l of equal length, as in Figure 8b. We can fold this chain into a single L P e v a l by adding a message-return loop, as in Figure 8c. As a result, more than one message M will pass through a single L P e v a l , and if θ | P has more than one argument, this can lead to collisions. To resolve collisions, and also to implement breaking of the loop, we need to determine the loop-iteration number of each message M . A simple way to do this is to add an iteration counter to each loop in C . Another approach is to use history-sensitive values [21]. As a more complex example, consider the graph Γ | G in Figure 9a, which can be converted and collapsed into a compact graph C as in Figure 9b.
  • Folding of graph C :
    Inside each L P e v a l , we can implement more than one function Θ | P Θ Γ | G , thus reducing the number of nodes in the graph C . This folding can be performed over a wide range, up to the realization of all Γ | G in one L P e v a l . For example, graph C from Figure 9b can be folded into a single L P e v a l and will look like Figure 9c.
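The first folding method can be sketched as follows (Python, illustrative): a single L P e v a l with a message-return loop, where each message carries its iteration (depth) number so the loop can be broken after a fixed depth.

```python
from queue import Queue

def folded_lp_eval(theta, init_msg, n_steps):
    """One LP_eval with a message-return loop; each message carries the
    iteration (depth) counter d, which is used to break the loop."""
    loop = Queue()          # the feedback channel of Figure 8c (illustrative)
    loop.put(init_msg)      # message (d, substate)
    collected = []
    while True:
        d, s = loop.get()
        collected.append((d, s))
        if d >= n_steps:    # loop breakage by iteration number
            return collected
        loop.put((d + 1, theta(s)))   # fold: the same θ|P applied again

out = folded_lp_eval(lambda s: s * 2, (0, 1), 3)
assert out == [(0, 1), (1, 2), (2, 4), (3, 8)]
```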
In general, the optimization problem for graph C is rather complex and goes beyond the scope of this article.

4.3. Simulation of the Graph Model Y ^ X ¯ G Γ Using the Computational Graph C

In the simplest case, we can simulate the model Y ^ X ¯ G Γ (Formula (10)) using the graph C constructed on it in two stages:
  • Calculate the set of substates
    S X ¯ | G = S X γ Γ | G | S
    For this, we initialize the calculation by sending M messages using the processors L P i n i t . Using the processors L P c o l l e c t , we collect all the calculated messages, M = S X q .
  • Find all substates for each key X ¯ X ¯ , and then collect the values Y ^ Y ^ from the found substates (Formula (6)).
In most cases, this approach will be computationally expensive since, in practice, | X γ | G | ≫ | X ¯ | usually holds.
Generally, simulation optimization amounts to minimizing the number of calculated substates S X q such that X S X q ∉ X ¯ . Several approaches are possible here—for example, constructing a minimalistic Γ | G for a known collection X ¯ . Alternatively, lazy algorithms that cut off the calculations Θ | P whose results are not required to cover X ¯ could be used. However, this topic is beyond the scope of the present article.

5. Practice

In this chapter, we show our approach in practice. First, we describe the modeled object, then define its mathematical model and its analytical solution. Next, we explain the procedure for constructing the graph and the parallelization scheme, and present the results. This chapter contains a shortened description; please see Supplementary S6–S8 for the full version.

5.1. Description of the Modeled Object and the Construction of Model Y ^ ( X ¯ | G )

As an example, consider the classic model of saline mixing. Here, the simulated object is a system of two connected tanks of volumes v 1 = 4   L and v 2 = 8   L . Over time t , the saline solution circulates from the first tank to the second at a speed of q 3 = 5   L/min and in the opposite direction at a speed of q 2 = 2   L/min . In addition, saline solution is poured into the first tank at a speed of q 1 = 3   L/min and drains from the second tank at the same speed, q 4 = 3   L/min ; i.e., the volume of the saline solution in the tanks does not change. Initially, the first and second tanks are entirely filled with solutions with initial salt concentrations of ω 1 = 0   g / L and ω 2 = 20   g / L , respectively. A saline solution with a concentration of ω 3 = 10   g / L is supplied to the first tank constantly. Thus, the set of variables reflecting the properties of interest will look like the following:
V = t ω 1 ω 2 ω 3 v 1 v 2 q 1 q 2 q 3 q 4
The modeling task is to predict the change in the salt concentrations ω 1 and ω 2 over time t .
As part of the modeling problem to be solved, we represent the simulated object in the form of the model Y ^ X ¯ G F (Formula (3)), breaking the variables V as
X ¯ = t , Y ^ = ω 1 ω 2 , G = v 1 = 4 v 2 = 8 q 1 = 3 q 2 = 2 q 3 = 5 q 4 = 3 ω 3 = 10
and specifying their dependence as a set of functions:
Y ^ = F X ¯ | G :
$$\hat{\omega}_1(\bar{t}) = \frac{13\sqrt{105}}{21}\,e^{\frac{(\sqrt{105}-15)\bar{t}}{16}} - \frac{13\sqrt{105}}{21}\,e^{-\frac{(15+\sqrt{105})\bar{t}}{16}} - 5e^{\frac{(\sqrt{105}-15)\bar{t}}{16}} - 5e^{-\frac{(15+\sqrt{105})\bar{t}}{16}} + 10;$$
$$\hat{\omega}_2(\bar{t}) = \frac{5\sqrt{105}}{21}\,e^{-\frac{(15+\sqrt{105})\bar{t}}{16}} - \frac{5\sqrt{105}}{21}\,e^{\frac{(\sqrt{105}-15)\bar{t}}{16}} + 5e^{\frac{(\sqrt{105}-15)\bar{t}}{16}} + 5e^{-\frac{(15+\sqrt{105})\bar{t}}{16}} + 10,$$
which are obtained by solving the Cauchy problem
$$\frac{d\hat{\omega}_1}{d\bar{t}} = \frac{3\cdot 10 + 2\hat{\omega}_2 - 5\hat{\omega}_1}{4},\quad \hat{\omega}_1(0) = 0;\qquad \frac{d\hat{\omega}_2}{d\bar{t}} = \frac{5\hat{\omega}_1 - 2\hat{\omega}_2 - 3\hat{\omega}_2}{8},\quad \hat{\omega}_2(0) = 20.$$
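As a sanity check, the closed-form solution can be verified numerically against the Cauchy problem (a Python sketch, not part of the supplementary code): the initial conditions must hold, and a finite-difference derivative must match the right-hand sides of the system.

```python
import math

R = math.sqrt(105)  # sqrt(105) appears in the exponents (-15 ± sqrt(105))/16

def w1(t):
    """Closed-form ω̂1(t̄) transcribed from Formula (12)."""
    a, b = math.exp((R - 15) * t / 16), math.exp(-(15 + R) * t / 16)
    return (13 * R / 21) * (a - b) - 5 * a - 5 * b + 10

def w2(t):
    """Closed-form ω̂2(t̄) transcribed from Formula (12)."""
    a, b = math.exp((R - 15) * t / 16), math.exp(-(15 + R) * t / 16)
    return (5 * R / 21) * (b - a) + 5 * a + 5 * b + 10

# The initial conditions of the Cauchy problem (Formula (13)) hold ...
assert abs(w1(0.0)) < 1e-12 and abs(w2(0.0) - 20.0) < 1e-12

# ... and a central finite difference matches the right-hand sides.
h, t = 1e-6, 0.5
d1 = (w1(t + h) - w1(t - h)) / (2 * h)
d2 = (w2(t + h) - w2(t - h)) / (2 * h)
assert abs(d1 - (3 * 10 + 2 * w2(t) - 5 * w1(t)) / 4) < 1e-6
assert abs(d2 - (5 * w1(t) - 2 * w2(t) - 3 * w2(t)) / 8) < 1e-6
```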
We can also represent the simulated object in the form of model Y ^ X ¯ G S (Formula (4)). In this case, the values of the variable t will be used as keys, and those of the variables ω 1 and ω 2 can be separated by different substates in such a way that we obtain two types of substates S X q = 1 = ω 1 t q = 1 and S X q = 2 = ω 2 t q = 2 . In the code, we can represent the values X ¯ , Y ^ and the substate S X q as OOP classes (source code B.1.L27).
One simple, but impractical, way to construct a set of substates S X ¯ | G is to generate S X q = 1 , S X q = 2 S X ¯ | G using a set of functions (Formula (12)) with some step of key t (source code B.2.L60).
Using the model Y ^ X ¯ G , we can perform the simulation (Formula (5)) for some segment X ¯ = t b e g i n , t e n d and obtain the corresponding set of values Y ^ (source code B.1.L71 is an implementation of Y ^ = F X ¯ | G , and source code B.2.L70 is an implementation of S X ¯ | G = Y ^ ). Looking at the output plots, we can see that they are similar (see Figure 10).
We can compare the results of executing the models Y ^ X ¯ G F and Y ^ X ¯ G S simply by accumulating the differences in their output values:
$$\varepsilon = \sum_{\bar{X} \in \bar{X}^{n}} \sum_{i=1}^{|Y|} \left| \hat{Y}(\bar{X}|G)^{F}_{i} - \hat{Y}(\bar{X}|G)^{S}_{i} \right|.$$
Evaluating this algorithm (source code B.3.L24), we obtain ε = 1.1546319456101628 × 10⁻¹⁴.
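The comparison can be reproduced in miniature with the following sketch (Python; unlike source B.2, the substate set here is generated by Euler steps rather than by the closed-form functions themselves, so ε is larger than the quoted 10⁻¹⁴ but still small):

```python
import math

R = math.sqrt(105)

def model_F(t):
    """Closed-form model Ŷ(X̄|G)^F, transcribed from Formula (12)."""
    a, b = math.exp((R - 15) * t / 16), math.exp(-(15 + R) * t / 16)
    w1 = (13 * R / 21) * (a - b) - 5 * a - 5 * b + 10
    w2 = (5 * R / 21) * (b - a) + 5 * a + 5 * b + 10
    return w1, w2

def model_S(t_end, dt):
    """Substate-style model: a dict of substates keyed by t (Euler-generated)."""
    sub, w1, w2 = {0.0: (0.0, 20.0)}, 0.0, 20.0
    for i in range(1, round(t_end / dt) + 1):
        dw1 = (3 * 10 + 2 * w2 - 5 * w1) / 4
        dw2 = (5 * w1 - 2 * w2 - 3 * w2) / 8
        w1, w2 = w1 + dt * dw1, w2 + dt * dw2
        sub[round(i * dt, 6)] = (w1, w2)
    return sub

sub = model_S(t_end=2.0, dt=1e-4)
keys = [k / 10 for k in range(21)]      # X̄: t in [0, 2] with step 0.1
eps = sum(abs(model_F(t)[i] - sub[round(t, 6)][i])
          for t in keys for i in (0, 1))
assert eps < 0.1   # the two representations agree closely
```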

5.2. Building and Simulating a Graphical Model Y ^ X ¯ G Γ

The graph Γ | G for this example will represent an infinite chain of pairs of nodes S connected by edges Θ | O . For convenience, in addition to the index of depth d , we index the nodes S with indices of width w N , so that the nodes S d with the same index d will have different values of w . Moreover, we set w = k = q , where k is the index of the argument (edges) Θ | O d and q is the index of the substate assigned to S d , w . Each pair S d , w = 1 and S d , w = 2 corresponds to a certain moment of discrete time t ¯ . For simplicity, we will use a fixed time step, so that t ¯ = d γ , where d is the depth index and γ is the time-step coefficient. Also, we restrict the model time to a small interval t b e g i n , t e n d t ¯ . In this case, the graph Γ | G will contain
$$n = \frac{\bar{t}_{end} - \bar{t}_{begin}}{\Delta t} + 1$$
pairs of nodes S d , w .
The simplest way to implement the transition functions Θ | O d , w = 1 and Θ | O d , w = 2 is to use the functions ω ^ 1 t ¯ and ω ^ 2 t ¯ from the set F X ¯ | G (Formula (12)). In this case,
$$\Theta|O_{d,w=1}\big(\hat{\omega}_1(\bar{t})_{k=1},\ \hat{\omega}_2(\bar{t})_{k=2}\big) = \big\langle \hat{\omega}_1(\bar{t}+\Delta t) \big\rangle_{\bar{t}+\Delta t}^{q=1};\qquad \Theta|O_{d,w=2}\big(\hat{\omega}_1(\bar{t})_{k=1},\ \hat{\omega}_2(\bar{t})_{k=2}\big) = \big\langle \hat{\omega}_2(\bar{t}+\Delta t) \big\rangle_{\bar{t}+\Delta t}^{q=2}.$$
A slightly more complicated implementation is to rewrite the system of differential equations (Formula (13)) to be solved by the Euler method
$$\hat{\omega}_{1,i} = \hat{\omega}_{1,i-1} + \Delta t\,\frac{q_1\omega_3 + q_2\hat{\omega}_{2,i-1} - q_3\hat{\omega}_{1,i-1}}{v_1};\qquad \hat{\omega}_{2,i} = \hat{\omega}_{2,i-1} + \Delta t\,\frac{q_3\hat{\omega}_{1,i-1} - q_2\hat{\omega}_{2,i-1} - q_4\hat{\omega}_{2,i-1}}{v_2},$$
as iterated by t :
$$\hat{\omega}_{1,0} = 0;\quad \hat{\omega}_{2,0} = 20;\quad i = 1, 2, 3, \ldots$$
In this case,
$$\Theta|O_{d,w=1}\big(\hat{\omega}_1(\bar{t})_{k=1},\ \hat{\omega}_2(\bar{t})_{k=2}\big) = \Big\langle \hat{\omega}_1 + \Delta t\,\frac{O.q_1\,O.\omega_3 + O.q_2\,\hat{\omega}_2 - O.q_3\,\hat{\omega}_1}{O.v_1} \Big\rangle_{\bar{t}+\Delta t}^{q=1};$$
$$\Theta|O_{d,w=2}\big(\hat{\omega}_1(\bar{t})_{k=1},\ \hat{\omega}_2(\bar{t})_{k=2}\big) = \Big\langle \hat{\omega}_2 + \Delta t\,\frac{O.q_3\,\hat{\omega}_1 - O.q_2\,\hat{\omega}_2 - O.q_4\,\hat{\omega}_2}{O.v_2} \Big\rangle_{\bar{t}+\Delta t}^{q=2}.$$
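These Euler-based transition functions can be sketched as plain functions over keyed substates (a Python sketch; the parameter names mirror O , but the representation is ours, not from the supplementary code):

```python
# Parameters O = G (illustrative names) and an assumed time step ∆t.
O = dict(v1=4, v2=8, q1=3, q2=2, q3=5, q4=3, w3=10)
DT = 0.1

def theta_w1(s1, s2):
    """Θ|O_{d,w=1}: next substate ⟨ω̂1⟩ keyed by t̄ + ∆t (one Euler step)."""
    (t, w1), (_, w2) = s1, s2
    return (t + DT,
            w1 + DT * (O["q1"] * O["w3"] + O["q2"] * w2 - O["q3"] * w1) / O["v1"])

def theta_w2(s1, s2):
    """Θ|O_{d,w=2}: next substate ⟨ω̂2⟩ keyed by t̄ + ∆t."""
    (t, w1), (_, w2) = s1, s2
    return (t + DT,
            w2 + DT * (O["q3"] * w1 - O["q2"] * w2 - O["q4"] * w2) / O["v2"])

s1, s2 = (0.0, 0.0), (0.0, 20.0)        # initial substates S_X^{q=1}, S_X^{q=2}
s1, s2 = theta_w1(s1, s2), theta_w2(s1, s2)
# First step: ω̂1 = 0 + 0.1·(30 + 40 − 0)/4 = 1.75; ω̂2 = 20 + 0.1·(−100)/8 = 18.75
assert abs(s1[1] - 1.75) < 1e-12 and abs(s2[1] - 18.75) < 1e-12
```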
We implement the nodes S and the sets of edges Θ | P as OOP classes (source code C.1.L89). S nodes are essentially variables that are not initially defined. The transition graph Γ | G and the simulation graph γ | G can be represented as classes containing collections of nodes S of sets of edges Θ | P (source code C.1.L149). Moreover, the graph γ | G is the same as graph Γ | G , but with all variables S defined.
Due to the simplicity of the transition graph Γ | G , we can implement the function build_Γ( n , t ), which automatically constructs Γ | G based on the given number of steps and the time step (source code C.1.L190).
The search for the simulation graph γ Γ | G | S is a calculation of the values of all nodes S from the initial set of substates
S = S d = 0 , w = 1 = S X j = 1 , S d = 0 , w = 1 = S X j = 2 .
We implement the search as method Γ .γ( S ), using the indices d and w as the key in the set S (source code C.1.L156). The method first initializes the nodes S d = 0 , w = 1 and S d = 0 , w = 2 with the initial substates S X j = 1 and S X j = 2 and then calculates the values of the remaining nodes S d , w by calling each method Θ | O d , w .eval() until all S d , w are defined. The method Θ | O d , w .eval() checks whether the arguments
S d 1 , w = 1 , S d 1 , w = 2 Θ | O d , w
are defined and, if so, evaluates the result
Θ | O d , w S d , w .
The set of substates S X γ | G can be obtained from the simulation graph γ Γ | G | S by simply extracting the values from the nodes S and combining them into the set S X ¯ | G . We implement this in the form of the method γ | G . S () (source code C.1.L179); next, S X ¯ | G can be used to obtain the values of Y ^ Y ^ from the values of X ¯ X ¯ .
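A hypothetical miniature of build_Γ and the search Γ .γ( S ) can be sketched as follows (Python; the real implementation in source code C.1 differs): nodes S d , w are filled in depth order once both parent nodes at depth d − 1 are defined.

```python
DT = 0.1  # assumed time step ∆t

def theta1(s1, s2):
    # Euler-step transition for ω̂1 (same form as above, illustrative).
    (t, w1), (_, w2) = s1, s2
    return (t + DT, w1 + DT * (3 * 10 + 2 * w2 - 5 * w1) / 4)

def theta2(s1, s2):
    # Euler-step transition for ω̂2.
    (t, w1), (_, w2) = s1, s2
    return (t + DT, w2 + DT * (5 * w1 - 2 * w2 - 3 * w2) / 8)

def gamma(n, initial):
    """Search for the simulation graph: fill every node S_{d,w} from S."""
    nodes = {(0, 1): initial[1], (0, 2): initial[2]}  # initial substates
    for d in range(1, n):
        args = (nodes[(d - 1, 1)], nodes[(d - 1, 2)])  # Θ|O_{d,w}.eval()
        nodes[(d, 1)], nodes[(d, 2)] = theta1(*args), theta2(*args)
    return nodes   # all S_{d,w} defined: the analogue of γ(Γ|G | S)

g = gamma(n=3, initial={1: (0.0, 0.0), 2: (0.0, 20.0)})
assert g[(1, 1)][1] == 1.75 and g[(1, 2)][1] == 18.75
```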

5.3. Constructing and Calculating Graph C Using the Graphical Model Y ^ X ¯ G Γ

As an example, we construct graph C using the model Y ^ X ¯ G Γ for mixing salt solutions. To implement it, we use the AKKA Streams library; a similar approach was used in the implementation of the SwiftVis tool.
We can build an unoptimized version of graph C by simply replacing the functions Θ | P d , w = 1 and Θ | P d , w = 2 with logical processors L P e v a l d , w = 1 and L P e v a l d , w = 2 and adding L P c o l l e c t , L P i n i t d = 0 , w = 1 , and L P i n i t d = 0 , w = 2 .
We represent the substates in the form of the messages M d , w = S X q , d , w produced by the corresponding L P e v a l d , w , where q = w . In particular, the substates from the set S can be represented as M d = 0 , w = 1 = S d = 0 , w = 1 = S X q = 1 and M d = 0 , w = 2 = S d = 0 , w = 2 = S X q = 2 .
This will work as follows (see source code D.1): the initial messages M d = 0 , w = 1 and M d = 0 , w = 2 are sent by logical processors L P i n i t d = 0 , w = 1 and L P i n i t d = 0 , w = 2 to the processors L P e v a l d = 1 , w = 1 , L P e v a l d = 1 , w = 2 . Then, the messages will distribute throughout the graph, where a copy of each substate is fed into the processor L P c o l l e c t , which builds the resulting set of substates S X ¯ | G .
Since the standard blocks Zip, Flow.map, Broadcast, and Merge from the AKKA Streams library were used to construct graph C , the implementation of each L P e v a l d , w will be a nested graph.
Since the obtained graph C consists of recurring pairs L P e v a l d , w = 1 and L P e v a l d , w = 2 , it can be optimized by implementing two cycles using two logical processors, L P e v a l w = 1 and L P e v a l w = 2 .
Since it is necessary in this case to determine which incoming messages refer to which iteration of the cycle, we add the iteration (depth) counter d to them, M d , w = d , S X q , and modify the grouping function Zip so that it selects pairs of incoming M d , w with the same value of d (source code D.2). When we execute this code, we obtain the simulation result (see Figure 11), which is the same as our findings from the implementation of the Y ^ X ¯ G S model (see Figure 10).
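The optimized cyclic graph can be imitated without AKKA Streams (a Python sketch; a dictionary keyed by d stands in for the modified Zip that pairs messages with equal depth counters):

```python
from collections import defaultdict

def run_cycles(n, dt=0.1):
    """Two folded LP_eval cycles exchanging depth-tagged messages M(d, w)."""
    pending = defaultdict(dict)        # d -> {w: value}: the "Zip by d"
    pending[0] = {1: 0.0, 2: 20.0}     # initial messages from LP_init
    results = []                       # LP_collect accumulates substates
    for d in range(n):
        w1, w2 = pending[d][1], pending[d][2]   # pair messages with equal d
        results.append((d * dt, w1, w2))
        # LP_eval_{w=1} and LP_eval_{w=2} feed their outputs back, tagged d+1.
        pending[d + 1][1] = w1 + dt * (3 * 10 + 2 * w2 - 5 * w1) / 4
        pending[d + 1][2] = w2 + dt * (5 * w1 - 2 * w2 - 3 * w2) / 8
    return results

out = run_cycles(3)
assert out[0] == (0.0, 0.0, 20.0)
assert abs(out[1][1] - 1.75) < 1e-12 and abs(out[1][2] - 18.75) < 1e-12
```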

6. Discussion

One of the primary contributions of this research is the synthesis of classical mathematical modeling techniques with the practical, high-performance synchronization mechanisms provided by reactive streams. Similar to earlier approaches, such as the Time Warp algorithm [1,2] and actor-based frameworks used in HPC simulation platforms [9], our method decomposes the complete object state into substates with unique keys. This modular representation not only supports reuse and flexibility but also enables the direct mapping of transition functions to logical processors. The resulting computational graph is reminiscent of systems such as RxHLA [7] and CQRS + ES architectures [8], which emphasize decoupling and distributed processing.
Representing the model as a transition graph Γ | G and an initial set of states S offers several benefits:
  • Modularity and Reusability: By encapsulating transition rules as independent functional blocks, the approach supports reuse and flexibility. This modular structure is similar in spirit to block-diagram environments like Simulink [30,31,32] and has parallels in dataflow programming models discussed by Kuraj and Solar-Lezama [21].
  • Scalability: Our implementation leverages the inherent parallelism of modern multi-core and distributed architectures. This approach aligns with the findings of actor-based models [9,25,26] and contemporary research on reactive programming in distributed systems [10,24].
  • Interactive Simulation: The push and pull patterns introduced in our model are analogous to techniques used in recent studies on interactive and fault-tolerant reactive systems [23,33]. This design allows the simulation to respond in real time to external events or user inputs.
In summary, the proposed method of using reactive streams as a synchronization protocol for parallel simulation provides a compelling framework that unites rigorous mathematical modeling with practical, scalable implementation techniques. While challenges remain, particularly in optimization, continuous simulation, and fault tolerance, the initial results and conceptual clarity offer a solid foundation for further research and development. The integration of our approach with similar studies in the field [1,2,3,4,5,6,7,8,9,10,13,21,22,23,24,25,26,27,28,33,34] highlights its potential and provides clear directions for future work.

7. Future Work

Many unanswered questions remain, some of which we present for future research:
  • Effective optimization of computational graph C and simulation on it:
    Section 4.2 and Section 4.3 touched on this topic; however, due to its complexity and breadth, a full treatment did not fit into this article. In general, this is a very important issue from a practical point of view, and solving it will significantly reduce the resources required to perform simulations. Another interesting question is the automation of the optimization of graph C . Suppose that, initially, we have a non-optimal C , for example, one obtained by the method described in Section 4.1. We want to automatically make C compact and computationally cheap without loss of accuracy or consistency.
    ML techniques can be used to solve the optimization task. For example, reinforcement learning agents can be trained to explore various graph configurations (i.e., different ways to fold or collapse the computational graph) and learn which configurations yield the best performance in terms of latency, throughput, or resource consumption [35,36,37,38,39]. Also, techniques like neural architecture search (NAS) can be adapted to optimize the layout and parameters of the computational graph. This includes automatically deciding how to fold cyclic sequences, balancing the load among logical processors, and minimizing redundant computations [38,39,40].
  • Accurate simulation of continuous-valued models:
    Many properties of modeled objects can be represented by continuous quantities, for example, values from the set ℝ. However, the simulation (whose calculation is based on message forwarding) is inherently discrete. An open question remains as to how accurately such continuous quantities can be calculated, and how their accuracy can be increased without increasing the demands on computational resources.
  • Fault-tolerance of reactive streams:
    We did not touch on the fault tolerance of the simulation in this work; however, in most practical applications, fault tolerance is very important. This question was partially explored in [23], and we suggest it as a topic for future work.
  • Manual and automatic construction of graph C :
    From a practical viewpoint, it is interesting to be able to use some IDE to manually construct a computational graph C and to do this in such a way that the corresponding graph Γ | G will be consistent and optimal. For example, this might be performed similarly to the Simulink package [30,41], SwiftVis tool [25,42], or XFRP language (Version 2.9.644) [24,43]. It is also interesting to find ways to automate the construction of C . For example, the model can initially be defined as a certain set of rules by which graph C can be automatically and even dynamically constructed. Specialized programming languages are also an interesting area to explore. For example, the EdgeC [33] language can be considered a tool to describe computational graphs.
    Also, ML techniques can be applied widely here. For example, graph learning techniques from graph neural networks (GNNs) can be applied to learn the structure of the optimal computational graph from historical data. The learned model can then suggest or automatically construct a more efficient graph based on current simulation requirements. Adaptive scheduling ML algorithms can dynamically adjust the scheduling of tasks across logical processors, optimizing the execution order and balancing the load [44,45]. This is particularly useful in interactive or real-time simulations where conditions may change frequently.
  • Testing with complex models and comparing with other parallelization approaches:
    This work provides a small, simple example of parallel simulation to show how the described approach can be implemented in practice. However, the questions of validating this approach on large and complex models and comparing its effectiveness with other parallelization approaches remain open.

8. Conclusions

The proposed method effectively integrates the reactive stream paradigm with classical mathematical modeling techniques to create a scalable framework for parallel simulation. By using a graph-based representation of object states and transition functions, this approach enhances modularity and reusability while supporting efficient computation through logical processors. The implementation using AKKA reactive streams demonstrates its scalability and practical feasibility for distributed systems. Despite its promise, the work highlights challenges such as graph optimization, continuous model simulation, fault tolerance, and automation of graph construction, which offer significant areas for future research and development. The study lays a strong foundation for advancing parallel simulation techniques, emphasizing both theoretical robustness and practical scalability.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/computation13050103/s1, Supplementary S1: Notation; Supplementary S2: Basic definitions [46,47,48,49,50,51,52,53,54,55,56,57,58,59]; Supplementary S3: Transition functions Θ | O [60,61,62]; Supplementary S4: Building of a consistent graph theorem Γ | G ; Supplementary S5: Completeness of the set of initial substates S theorem; Supplementary S6: Example: Description of the modeled object and the construction of model Y ^ X ¯ G ; Supplementary S7: Example: Building and simulating a graphical model Y ^ X ¯ G Γ ; Supplementary S8: Example: Constructing and calculating graph C using the graphical model Y ^ X ¯ G Γ .

Author Contributions

O.S.: Conceptualization, methodology, formal analysis, writing, visualization, software. A.P.: Conceptualization, methodology, formal analysis, writing—original draft preparation, and visualization. I.P.: Methodology, software, formal analysis, data curation, and writing—original draft preparation. M.Y.: Investigation, resources and editing, and visualization. H.K.: Supervision, data curation, and writing—review and editing. V.A.: Supervision, data curation, visualization, and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Acknowledgments

This work represents a harmonious blend of independent research pursuits and a series of scientifically rigorous endeavors underpinned by various grants. The authors would like to extend their heartfelt gratitude to all the institutions and organizations, reviewers, and our editor, who have contributed to the successful completion of this study. The research was conducted as part of the projects ‘Development of methods and means of increasing the efficiency and resilience of local decentralized electric power systems in Ukraine’ and ‘Development of Distributed Energy in the Context of the Ukrainian Electricity Market Using Digitalization Technologies and Systems,’ implemented under the state budget program ‘Support for Priority Scientific Research and Scientific-Technical (Experimental) Developments of National Importance’ (CPCEL 6541230) at the National Academy of Sciences of Ukraine.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jefferson, D.R.; Sowizral, H.A. Fast Concurrent Simulation Using the Time Warp Mechanism; Part I: Local Control; Rand Corp: Santa Monica, CA, USA, 1982. [Google Scholar]
  2. Fujimoto, R.M. Parallel and Distributed Simulation Systems; Wiley: New York, NY, USA, 2000. [Google Scholar]
  3. Radhakrishnan, R.; Martin, D.E.; Chetlur, M.; Rao, D.M.; Wilsey, P.A. An Object-Oriented, Time Warp Simulation Kernel. In Proceeding of the International Symposium on Computing in Object-Oriented Parallel Environments (ISCOPE’98), Santa Fe, NM, USA, 8–11 December 1998; Caromel, D., Oldehoeft, R.R., Tholburn, M., Eds.; Springer: Berlin/Heidelberg, Germany, 1998; Volume 1505, pp. 13–23. [Google Scholar]
  4. Jefferson, D.; Beckman, B.; Wieland, F.; Blume, L.; Diloreto, M. Time warp operating system. In Proceedings of the eleventh ACM Symposium on Operating systems principles, Austin, TX, USA, 8–11 November 1987; Volume 21, pp. 77–93. [Google Scholar]
  5. Aach, J.; Church, G.M. Aligning gene expression time series with time warping algorithms. Bioinformatics 2001, 17, 495–508. [Google Scholar] [CrossRef] [PubMed]
  6. Nicol, D.M.; Fujimoto, R.M. Parallel simulation today. Ann. Oper. Res. 1994, 53, 249. [Google Scholar] [CrossRef]
  7. Falcone, A.; Garro, A. Reactive HLA-based distributed simulation systems with rxhla. In Proceedings of the 2018 IEEE/ACM 22nd International Symposium on Distributed Simulation and Real Time Applications (DS-RT), Madrid, Spain, 15–17 October 2018; pp. 1–8. [Google Scholar]
  8. Debski, A.; Szczepanik, B.; Malawski, M.; Spahr, S. In Search for a scalable & reactive architecture of a cloud application: CQRS and event sourcing case study. IEEE Software 2016, 99. [Google Scholar] [CrossRef]
  9. Bujas, J.; Dworak, D.; Turek, W.; Byrski, A. High-performance computing framework with desynchronized information propagation for large-scale simulations. J. Comp. Sci. 2019, 32, 70–86. [Google Scholar] [CrossRef]
  10. Reactive Stream Initiative. Available online: https://www.reactive-streams.org (accessed on 15 April 2025).
  11. Davis, A.L. Reactive Streams in Java: Concurrency with RxJava, Reactor, and Akka Streams; Apress: New York, NY, USA, 2018. [Google Scholar]
  12. Curasma, H.P.; Estrella, J.C. Reactive Software Architectures in IoT: A Literature Review. In Proceedings of the 2023 International Conference on Research in Adaptive and Convergent Systems (RACS ‘23), Association for Computing Machinery, New York, NY, USA, 6–10 August 2023; Article 25. pp. 1–8. [Google Scholar] [CrossRef]
  13. The Implementation of Reactive Streams in AKKA. Available online: https://doc.akka.io/docs/akka/current/stream/stream-introduction.html (accessed on 15 April 2025).
  14. Oeyen, B.; De Koster, J.; De Meuter, W. A Graph-Based Formal Semantics of Reactive Programming from First Principles. In Proceedings of the 24th ACM International Workshop on Formal Techniques for Java-like Programs (FTfJP ‘22), Association for Computing Machinery, New York, NY, USA, 7 June 2022; pp. 18–25. [Google Scholar] [CrossRef]
  15. Posa, R. Scala Reactive Programming: Build Scalable, Functional Reactive Microservices with Akka, Play, and Lagom; Packt Publishing: Birmingham, UK, 2018. [Google Scholar]
  16. Baxter, C. Mastering Akka; Packt Publishing: Birmingham, UK, 2016. [Google Scholar]
  17. Nolte, D.D. The tangled tale of phase space. Phys. Today 2010, 63, 33–38. [Google Scholar] [CrossRef]
  18. Myshkis, A.D. Classification of applied mathematical models-the main analytical methods of their investigation. Elem. Theory Math. Models 2007, 9, 9. [Google Scholar]
  19. Briand, L.C.; Wust, J. Modeling development effort in object-oriented systems using design properties. IEEE Trans. Softw. Eng. 2001, 27, 963–986. [Google Scholar] [CrossRef]
  20. Briand, L.C.; Daly, J.W.; Wust, J.K. A unified framework for coupling measurement in object-oriented systems. IEEE Trans. Softw. Eng. 1999, 25, 91–121. [Google Scholar] [CrossRef]
  21. Shibanai, K.; Watanabe, T. Distributed functional reactive programming on actor-based runtime. In Proceedings of the 8th ACM SIGPLAN International Workshop on Programming Based on Actors, Agents, and Decentralized Control, Boston, MA, USA, 5 November 2018; pp. 13–22. [Google Scholar]
  22. Lohstroh, M.; Romeo, I.I.; Goens, A.; Derler, P.; Castrillon, J.; Lee, E.A.; Sangiovanni-Vincentelli, A. Reactors: A deterministic model for composable reactive systems. In Cyber Physical Systems. Model-Based Design; Springer: Cham, Switzerland, 2019; pp. 59–85. [Google Scholar]
  23. Mogk, R.; Baumgärtner, L.; Salvaneschi, G.; Freisleben, B.; Mezini, M. Fault-tolerant distributed reactive programming. In Proceedings of the 32nd European Conference on Object-Oriented Programming (ECOOP 2018), Amsterdam, The Netherlands, 19–21 July 2018. [Google Scholar]
  24. About the Graphs in AKKA Streams. Available online: https://doc.akka.io/docs/akka/2.5/stream/stream-graphs.html (accessed on 15 April 2025).
  25. Kurima-Blough, Z.; Lewis, M.C.; Lacher, L. Modern parallelization for a dataflow programming environment. In Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA), The Steering Committee of the World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp), Las Vegas, NV, USA, 17–20 July 2017; pp. 101–107. [Google Scholar]
  26. Kirushanth, S.; Kabaso, B. Designing a cloud-native weigh-in-motion. In Proceedings of the 2019 Open Innovations (OI), Cape Town, South Africa, 2–4 October 2019. [Google Scholar]
  27. Prymushko, A.; Puchko, I.; Yaroshynskyi, M.; Sinko, D.; Kravtsov, H.; Artemchuk, V. Efficient State Synchronization in Distributed Electrical Grid Systems Using Conflict-Free Replicated Data Types. IoT 2025, 6, 6. [Google Scholar] [CrossRef]
  28. Oeyen, B.; De Koster, J.; De Meuter, W. Reactive Programming without Functions. arXiv 2024, arXiv:2403.02296. [Google Scholar] [CrossRef]
  29. Babaei, M.; Bagherzadeh, M.; Dingel, J. Efficient reordering and replay of execution traces of distributed reactive systems in the context of model-driven development. In Proceedings of the 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, Virtual Event, 16–23 October 2020. [Google Scholar]
  30. Simulink. Available online: https://www.mathworks.com/help/simulink/index.html?s_tid=CRUX_lftnav (accessed on 15 April 2025).
  31. Karris, S.T. Introduction to Simulink with Engineering Applications; Orchard Publications: London, UK, 2006. [Google Scholar]
  32. Dessaint, L.-A.; Al-Haddad, K.; Le-Huy, H.; Sybille, G.; Brunelle, P. A power system simulation tool based on Simulink. IEEE Trans. Ind. Electron. 1999, 46, 1252–1254. [Google Scholar] [CrossRef]
  33. Kuraj, I.; Solar-Lezama, A. Aspect-oriented language for reactive distributed applications at the edge. In Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking 2020 (EdgeSys '20), Association for Computing Machinery, New York, NY, USA, 27 April 2020; pp. 67–72. [Google Scholar] [CrossRef]
  34. Babbie, E.R. The Practice of Social Research; Wadsworth Publishing: Belmont, CA, USA, 2009; ISBN 0-495-59841-0. [Google Scholar]
  35. Zoph, B.; Le, Q.V. Neural Architecture Search with Reinforcement Learning. arXiv 2016, arXiv:1611.01578. [Google Scholar]
  36. Nakata, T.; Chen, S.; Saiki, S.; Nakamura, M. Enhancing Personalized Service Development with Virtual Agents and Upcycling Techniques. Int. J. Netw. Distrib. Comput. 2025, 13, 5. [Google Scholar] [CrossRef]
  37. Liu, M.; Zhang, L.; Chen, J.; Chen, W.-A.; Yang, Z.; Lo, L.J.; Wen, J.; O’Neil, Z. Large language models for building energy applications: Opportunities and challenges. Build. Simul. 2025, 18, 225–234. [Google Scholar] [CrossRef]
  38. Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the International Conference on Learning Representations (ICLR); arXiv 2017, arXiv:1609.02907. [Google Scholar]
  39. Nie, M.; Chen, D.; Chen, H.; Wang, D. AutoMTNAS: Automated meta-reinforcement learning on graph tokenization for graph neural architecture search. Knowl.-Based Syst. 2025, 310, 113023. [Google Scholar] [CrossRef]
  40. Kuş, Z.; Aydin, M.; Kiraz, B.; Kiraz, A. Neural Architecture Search for biomedical image classification: A comparative study across data modalities. Artif. Intell. Med. 2025, 160, 103064. [Google Scholar] [CrossRef]
  41. Chaturvedi, D.K. Modeling and Simulation of Systems Using MATLAB and Simulink; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  42. Lewis, M.C.; Lacher, L.L. Swiftvis2: Plotting with spark using scala. In Proceedings of the International Conference on Data Science (ICDATA’18), Las Vegas, NV, USA, 30 July–2 August 2018; Volume 1. [Google Scholar]
  43. Yoshitaka, S.; Watanabe, T. Towards a statically scheduled parallel execution of an FRP language for embedded systems. In Proceedings of the 6th ACM SIGPLAN International Workshop on Reactive and Event-Based Languages and Systems, Athens, Greece, 21 October 2019; pp. 11–20. [Google Scholar] [CrossRef]
  44. Bassen, J.; Balaji, B.; Schaarschmidt, M.; Thille, C.; Painter, J.; Zimmaro, D.; Games, A.; Fast, E.; Mitchell, J.C. Reinforcement learning for the adaptive scheduling of educational activities. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20), Honolulu, HI, USA, 25–30 April 2020; pp. 1–12. [Google Scholar] [CrossRef]
  45. Long, L.N.B.; You, S.-S.; Cuong, T.N.; Kim, H.-S. Optimizing quay crane scheduling using deep reinforcement learning with hybrid metaheuristic algorithm. Eng. Appl. Artif. Intell. 2025, 143, 110021. [Google Scholar] [CrossRef]
  46. Pietarinen, A.; Senden, Y.; Sharpless, S.; Shepperson, A.; Vehkavaara, T. The Commens Dictionary of Peirce's Terms: "Object", 19 March 2009. Available online: https://web.archive.org/web/20090214004523/http:/www.helsinki.fi/science/commens/terms/object.html (accessed on 15 April 2025).
  47. Sismondo, S. Models, Simulations, and Their Objects. Sci. Context 1999, 12, 247–260. [Google Scholar] [CrossRef]
  48. Achinstein, P. Theoretical models. Br. J. Philos. Sci. 1965, 16, 102–120. [Google Scholar] [CrossRef]
  49. Banks, J.; Carson, J.; Nelson, B.; Nicol, D. Discrete-Event System Simulation; Prentice Hall: Hoboken, NJ, USA, 2001; ISBN 0-13-088702-1. [Google Scholar]
  50. Brézillon, P.; Gonzalez, A.J. (Eds.) Context in Computing: A Cross-Disciplinary Approach for Modeling the Real World; Springer: New York, NY, USA, 2014. [Google Scholar]
  51. Varga, A. The OMNeT++ discrete event simulation system. In Proceedings of the European Simulation Multiconference (ESM2001), Prague, Czech Republic, 6–9 June 2001; Volume 17. [Google Scholar]
  52. Choi, B.K.; Kang, D. Modeling and Simulation of Discrete Event Systems; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  53. Goldsman, D.; Goldsman, P. Discrete-event simulation. In Modeling and Simulation in the Systems Engineering Life Cycle: Core Concepts and Accompanying Lectures; Springer London: London, UK, 2015; pp. 103–109. [Google Scholar]
  54. Robinson, S. Conceptual modeling for simulation. In Winter Simulation Conference (WSC); IEEE: Piscataway, NJ, USA, 2013. [Google Scholar]
  55. Robinson, S. A tutorial on conceptual modeling for simulation. In Winter Simulation Conference (WSC); IEEE: Piscataway, NJ, USA, 2015. [Google Scholar]
  56. Abdelmegid, M.A.; Gonzales, V.A.; Naraghi, A.M.; O’Sullivan, M.; Walker, C.G.; Poshdar, M. Towards a conceptual modeling framework for construction simulation. In Winter Simulation Conference (WSC); IEEE: Piscataway, NJ, USA, 2017. [Google Scholar]
  57. Curtright, T.L.; Zachos, C.K.; Fairlie, D.B. Quantum mechanics in phase space. Asia Pac. Phys. Newsl. 2012, 1, 37–46. [Google Scholar] [CrossRef]
  58. Wu, B.; He, X.; Liu, J. Nonadiabatic field on quantum phase space: A century after Ehrenfest. J. Phys. Chem. Lett. 2024, 15, 644–658. [Google Scholar] [CrossRef] [PubMed]
  59. Hastings, N.B. Workshop Calculus: Guided Exploration with Review; Springer Science & Business Media: Berlin, Germany, 1998. [Google Scholar]
  60. Keller, R.M. Formal verification of parallel programs. Commun. ACM 1976, 19, 371–384. [Google Scholar] [CrossRef]
  61. Erwig, M.; Kollmansberger, S. Functional pearls: Probabilistic functional programming in Haskell. J. Funct. Program. 2006, 16, 21–34. [Google Scholar] [CrossRef]
  62. Saini, A.; Thiry, L. Functional programming for business process modeling. IFAC-PapersOnLine 2017, 50, 10526–10531. [Google Scholar] [CrossRef]
Figure 1. Joining of functions Θ|P_j ∈ Θ|G.
Figure 2. Example of the transition DAG built from functions Θ|P_j ∈ Θ|G.
Figure 3. Example of the simulation graph that can be obtained from the transition graph in Figure 2.
Figure 4. Reduction in the number of calculations by including substates that are as close as possible to those from the desired S_X̄.
Figure 5. Representing the input/output as a set of unknown transition functions.
Figure 6. Example of constructing the computational graph C from the model Ŷ_X̄^{G_Γ}.
Figure 7. Addition of LP_init and LP_collect into the computational graph C.
Figure 8. Example of the folding of a simple computational graph C. (a) A chain of arbitrary length of the same functions θ|P; (b) a chain of logical processors LP_eval of equal length; (c) the chain folded into a single LP_eval with a message-return loop.
Figure 9. Example of the folding of a more complex computational graph C. (a) A more complex example of the graph Γ|G; (b) the graph C built from Γ|G; (c) the graph C folded into a single LP_eval.
Figure 10. Results of a simulation of the Ŷ_X̄^{G_F} (first plot) and Ŷ_X̄^{G_S} (second plot) models, where the X axis is time and the Y axis is salt concentration; the green line ω_1 is the salt concentration in tank 1, and the red line ω_2 is the salt concentration in tank 2.
Figure 11. Simulation of the Ŷ_X̄^{G_Γ} model using graph C, where the X axis is time and the Y axis is salt concentration; the green line ω_1 is the salt concentration in tank 1, the red line ω_2 is the salt concentration in tank 2, and the gray line ω_3 is the concentration of the saline solution supplied continuously to tank 1.
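The chain-folding transformation illustrated in Figure 8 (replacing a pipeline of identical stages with a single stage that loops messages back through itself) can be sketched in a few lines. The snippet below is an illustrative sketch only, not the paper's reactive-streams implementation; the names theta, run_chain, and run_folded are hypothetical:

```python
from functools import reduce

def theta(state):
    """Illustrative transition function theta|P: one simulation step."""
    return state + 1

def run_chain(stage, n, state):
    """Panels (a)/(b): an explicit chain of n identical stages."""
    return reduce(lambda s, f: f(s), [stage] * n, state)

def run_folded(stage, n, state):
    """Panel (c): one stage with a message-return loop iterating n times."""
    for _ in range(n):
        state = stage(state)
    return state

# The folded graph computes the same result as the unfolded chain.
assert run_chain(theta, 5, 0) == run_folded(theta, 5, 0)
```

The equivalence is what licenses the folding optimization: a single logical processor with a feedback edge saves the memory and scheduling overhead of n separate stages while producing the same trajectory.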
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
