
An Efficient Algorithm to Determine Probabilistic Bisimulation

Department of Mathematics and Computer Science, Eindhoven University of Technology, P.O. Box 512, 5600 MB Eindhoven, The Netherlands
* Author to whom correspondence should be addressed.
Algorithms 2018, 11(9), 131; https://doi.org/10.3390/a11090131
Submission received: 10 July 2018 / Revised: 20 August 2018 / Accepted: 27 August 2018 / Published: 3 September 2018
(This article belongs to the Special Issue Bisimulation and Simulation Algorithms)

Abstract:
We provide an algorithm to efficiently compute bisimulation for probabilistic labeled transition systems, featuring non-deterministic choice as well as discrete probabilistic choice. The algorithm is linear in the number of transitions and logarithmic in the number of states, distinguishing both action states and probabilistic states, and the transitions between them. The algorithm improves upon the proposed complexity bounds of the best algorithm addressing the same purpose so far by Baier, Engelen and Majster-Cederbaum (Journal of Computer and System Sciences 60:187–231, 2000). In addition, experimentally, on various benchmarks, our algorithm performs rather well; even on relatively small transition systems, a performance gain of a factor 10,000 can be achieved.

1. Introduction

In [1], Larsen and Skou proposed the notion of probabilistic bisimulation. Although described for deterministic transition systems, the same notion is also very suitable for probabilistic transition systems with nondeterminism [2,3], i.e., so-called PLTSs. It expresses that two states are equivalent exactly when the following condition holds: if one state can perform an action ending up in a set of states, each with a certain probability, then the other state can do the same step ending up in an equivalent set of states with the same distribution of probabilities. Two characteristic nondeterministic transition systems of which the initial states are probabilistically bisimilar are given in Figure 1.
In [4], Baier et al. gave an algorithm for probabilistic bisimulation for PLTSs, thus dealing both with probabilistic and nondeterministic choice, of time complexity $O(mn(\log m + \log n))$ and space complexity $O(mn)$, where n is the number of states and m is the number of transitions (from states to distributions over states; there is no separate measure for the size of the distributions). As far as we know, it is the only practical algorithm for bisimulation à la Larsen–Skou for PLTSs. In essence, other algorithms for probabilistic systems typically target Markov chains without nondeterminism. The algorithm in [4] performs an iterative refinement of a partition of states and a partition of transitions per action label. The crucial point is splitting the groups of states based on probabilities. For this, a specific data structure is used, called augmented ordered balanced trees, to support efficient storage, retrieval and ordering of states indexed by probabilities.
In this paper, we provide a new algorithm for probabilistic bisimulation for PLTSs of time complexity $O((m_a + m_p)\log n_p + m_p \log n_a)$ and space complexity $O(m_a + m_p)$, where $n_a$ is the number of states, $m_a$ the number of transitions labelled with actions, $n_p$ the number of distributions, and $m_p$ the cumulative support of the distributions. Our $n_a$ coincides with the n of Baier et al. We prefer to use $m_a$, $n_p$, and $m_p$ over m as the former support a more refined analysis. A detailed comparison between the algorithms reveals that, if the distributions have a positive probability for all states, the complexities of the algorithms are similar. However, when distributions only touch a limited number of states, as is often the case, the implementation of our algorithm outperforms our implementation of the algorithm in [4], both in time and in space.
Similar to the algorithm of Baier et al., our algorithm keeps track of a partition of states and of distributions (referred to as action states and probabilistic states below) but, in line with the classical Paige–Tarjan approach [5], it also maintains a coarser partition of so-called constellations. The treatment of distributions in our algorithm is strongly inspired by the work on Markov chain lumping by Valmari and Franceschinis [6], but our algorithm applies to the richer setting of non-deterministic labelled probabilistic transition systems. Using a brilliant, yet simple argument taken from [6], the number of times a probabilistic transition is sorted can be limited by the fan-out of the source state of the transition. This leads to the observation that we can use straightforward sorting without the need of any tailored data structure such as augmented ordered balanced trees or similar as in [4,7]. Actually, our algorithm uses a simplification of the algorithm in [6] since the calculation of so-called majority candidates can be avoided, too.
We implemented both the new algorithm and the algorithm from [4]. We spent quite some effort to establish that both implementations are free from programming flaws. To this end, we ran them side-by-side and compared the outcomes on a vast amount of randomly generated probabilistic transition systems (in the order of millions). Furthermore, we took a number of examples from the field, among others from the Prism toolset [8], and ran both implementations on the probabilistic transition systems that were obtained in this way. Time-wise, all benchmarks indicated better results for our algorithm compared to the algorithm from [4]. Even for rather small transition systems of about 100,000 states, performance gains of a factor 10,000 can be achieved. Memory-wise the implementation of our algorithm also outperforms the implementation in [4] when the sizes of the probabilistic state space are larger. Both findings are in line with the theoretical complexity analyses of both algorithms. Both implementations have been incorporated in the open source mCRL2 toolset [9,10].

1.1. Related Work

Probabilistic bisimulation preserves logic equivalence for PCTL [11]. In [12], Katoen c.s. reported up to logarithmic state space reduction obtained by probabilistic bisimulation minimisation for DTMCs. Quotienting modulo probabilistic bisimulation is based on the algorithm in [7]. In the same vein, Dehnert et al. proposed symbolic probabilistic bisimulation minimisation to reduce computation time for model checking PCTL in a setting for DTMCs [13], where an SMT solver is exploited to do the splitting of blocks. Partition reduction modulo probabilistic bisimulation is also used as an ingredient in a counter-example guided abstraction refinement approach (CEGAR) for model checking for PCTL by Lei Song et al. in [14].
For CTMCs, Hillston et al. proposed the notion of contextual lumpability based on lumpable bisimulation in [15]. Their reduction technique uses the Valmari–Franceschinis algorithm for Markov chain lumping mentioned earlier. Crafa and Ranzato [16] characterised probabilistic bisimulation of PLTSs as a partition shell in the setting of abstract interpretation. The algorithm for probabilistic bisimulation that comes with such a characterisation turns out to coincide with that in [4]. A similar result applies to the coalgebraic approach to partition refinement in [17] that yields a general bisimulation decision procedure, which can be instantiated with probabilistic system types.
Probabilistic simulation for PLTSs has been treated in [4], too. In [18], maximum flow techniques are proposed to improve the complexity. Zhang and Jansen [19] presented a space-efficient algorithm based on partition refinement for simulation between probabilistic automata, which improves upon the algorithm for simulation by Crafa and Ranzato [16] for concrete experiments taken from the PRISM benchmark suite. A polynomial algorithm, essentially cubic, for deciding weak and branching probabilistic bisimulation by Turrini and Hermanns, recasting the algorithm in [20], is presented in [21].

1.2. Synopsis

The structure of this article is as follows. In Section 2, we provide the notions of a probabilistic transition system as well as that of probabilistic bisimulation. In Section 3, the outline of our algorithm is provided and it is proven that it correctly calculates probabilistic bisimulation. This section ends with an elaborate example. In Section 4, we provide a detailed version of the algorithm with a focus on the implementation details necessary to achieve the complexity. In Section 5, we provide some benchmarking results, and a few concluding remarks are made in Section 6.

2. Preliminaries

Let S be a finite set. A distribution f over S is a function $f : S \to [0,1]$ such that $\sum_{s \in S} f(s) = 1$. For each distribution f, its support is the set $\{ s \in S \mid f(s) > 0 \}$. The size of f is defined as the number of elements in its support, written as $|f|$. The set of all distributions over a set S is denoted by $\mathcal{D}(S)$. Distributions are lifted to act on subsets $T \subseteq S$ by $f[T] = \sum_{s \in T} f(s)$.
For an equivalence relation R on S, we use S/R to denote the set of equivalence classes of R. We define $s/R = \{ t \in S \mid s\,R\,t \}$ and, for a subset T of S, we define $T/R = \{ s \in S \mid \exists t \in T : s\,R\,t \}$. A partition $\pi = \{ B_i \subseteq S \mid i \in I \}$ is a set of non-empty subsets such that $B_i \cap B_j = \emptyset$ for all distinct $i, j \in I$ and $\bigcup_{i \in I} B_i = S$. Each $B_i$ is called a block of the partition. Slightly ambiguously, we use S/R to denote the set of equivalence classes of R with respect to S. Clearly, the set of equivalence classes of R forms a partition of S. Reversely, a partition $\pi$ of S induces an equivalence relation $R_\pi$ on S, by $s\,R_\pi\,t$ iff $s, t \in B$ for some block B of $\pi$. A partition $\pi$ is called a refinement of a partition $\varrho$ iff each block of $\pi$ is a subset of a block of $\varrho$. Hence, each block in $\varrho$ is a disjoint union of blocks from $\pi$.
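The lifting $f[T]$ and the refinement relation on partitions translate directly into code. The sketch below uses a sparse map over the support as a distribution; the representation and the names are illustrative assumptions and not the data structures of the implementation discussed in Section 4.

#include <algorithm>
#include <map>
#include <set>
#include <vector>

// A distribution over states (ints), stored sparsely on its support.
using Distribution = std::map<int, double>;

// f[T]: the probability mass that f assigns to the subset T of states.
double lift(const Distribution& f, const std::set<int>& T) {
    double sum = 0.0;
    for (const auto& [s, p] : f)
        if (T.count(s)) sum += p;
    return sum;
}

// A partition is a collection of disjoint, non-empty blocks covering the state set.
using Partition = std::vector<std::set<int>>;

// pi refines rho iff every block of pi is contained in some block of rho.
bool refines(const Partition& pi, const Partition& rho) {
    for (const auto& B : pi) {
        bool contained = false;
        for (const auto& C : rho)
            if (std::includes(C.begin(), C.end(), B.begin(), B.end())) { contained = true; break; }
        if (!contained) return false;
    }
    return true;
}

int main() {
    Distribution f{{0, 0.5}, {1, 0.25}, {2, 0.25}};
    Partition fine{{0}, {1}, {2}}, coarse{{0, 1}, {2}};
    // f[{0,2}] = 0.75, and the partition of singletons refines the coarser one.
    return (lift(f, {0, 2}) == 0.75 && refines(fine, coarse)) ? 0 : 1;
}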
We use probabilistic labeled transition systems as the canonical way to represent the behaviour of systems.
Definition 1. (Probabilistic Labeled Transition System).
A probabilistic labeled transition system (PLTS) for a set of actions $\mathit{Act}$ is a pair $A = (S, \rightarrow)$ where
  • S is a finite set of states, and
  • $\rightarrow \subseteq S \times \mathit{Act} \times \mathcal{D}(S)$ is a finite transition relation relating states and actions to distributions.
It is common to write $s \xrightarrow{a} f$ for $(s, a, f) \in {\rightarrow}$. For $s \in S$, $a \in \mathit{Act}$, and a set $F \subseteq \mathcal{D}(S)$ of distributions, we write $s \xrightarrow{a} F$ if $s \xrightarrow{a} f$ for some $f \in F$. Similarly, we write $s \not\xrightarrow{a} F$ if there is no distribution $f \in F$ such that $s \xrightarrow{a} f$. For the presentation below, we associate a so-called probabilistic state $u_f$ with each distribution f provided there is some transition $s \xrightarrow{a} f$ of A. We write U for $\{ u_f \mid \exists s \in S, a \in \mathit{Act} : s \xrightarrow{a} f \}$, with typical element u. Note that, since $\rightarrow$ is finite, U is also finite. We also use the notation $s \xrightarrow{a} u_f$ if $s \xrightarrow{a} f$ for some $f \in \mathcal{D}(S)$. As a matter of notation, we write $u_f[T]$ for $f[T]$ if probabilistic state $u_f$ corresponds to the distribution f. We sometimes use a so-called probabilistic transition $u_f \mapsto_p s$ for $0 < p \le 1$ and $s \in S$ iff $u_f(s) = p$. To stress $S \cap U = \emptyset$, we refer to states $s \in S$ as action states.
Below, in particular in the complexity analysis, we use $n_a = |S|$ as the number of action states, $n_p = |U|$ as the number of probabilistic states, $m_a = |{\rightarrow}|$ as the number of action transitions, and $m_p = \sum_{u_f \in U} |f|$ as the cumulative size of the support of the distributions corresponding to all probabilistic states. Note that $m_p \ge n_p$ as every distribution has a support of at least size 1.
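To fix intuitions about these four measures, the following sketch computes them for a PLTS stored in a minimal, hypothetical flat representation (action states as integers, one sparse distribution per probabilistic state); this is not the representation used in the actual implementation.

#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <tuple>
#include <vector>

// A sparse distribution: only the support is stored.
using Distribution = std::map<int, double>;   // action state -> probability

struct PLTS {
    int num_action_states = 0;                          // states 0 .. num_action_states-1
    std::vector<Distribution> distributions;            // one probabilistic state per entry
    // action transitions: (source action state, label, index of distribution)
    std::vector<std::tuple<int, std::string, int>> transitions;
};

// The four size measures used in the complexity analysis.
struct Sizes { std::size_t n_a, n_p, m_a, m_p; };

Sizes measure(const PLTS& A) {
    Sizes sz;
    sz.n_a = A.num_action_states;          // number of action states
    sz.n_p = A.distributions.size();       // number of probabilistic states
    sz.m_a = A.transitions.size();         // number of action transitions
    sz.m_p = 0;                            // cumulative size of the supports
    for (const auto& f : A.distributions) sz.m_p += f.size();
    return sz;
}

int main() {
    // A coin flip: s --a--> {h: 1/2, t: 1/2}.
    PLTS A;
    A.num_action_states = 3;                       // s = 0, h = 1, t = 2
    A.distributions.push_back({{1, 0.5}, {2, 0.5}});
    A.transitions.emplace_back(0, "a", 0);
    Sizes sz = measure(A);
    std::cout << sz.n_a << ' ' << sz.n_p << ' ' << sz.m_a << ' ' << sz.m_p << '\n';  // 3 1 1 2
}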
The following definition for probabilistic bisimulation stems from [1].
Definition 2. (Probabilistic Bisimulation).
Consider a PLTS $A = (S, \rightarrow)$. An equivalence relation $R \subseteq S \times S$ is called a probabilistic bisimulation for A iff for all states $s, t \in S$ such that $s\,R\,t$ and $s \xrightarrow{a} f$, for some action $a \in \mathit{Act}$ and distribution $f \in \mathcal{D}(S)$, it holds that $t \xrightarrow{a} g$ for some distribution $g \in \mathcal{D}(S)$, and $f[B] = g[B]$ for each $B \in S/R$.
Two states $s, t \in S$ are probabilistically bisimilar iff a probabilistic bisimulation R for A exists such that $s\,R\,t$, which we write as $s \leftrightarrow_p t$. Two distributions $f, g \in \mathcal{D}(S)$, and similarly two probabilistic states $u_f, u_g \in U$, are probabilistically bisimilar iff for all $B \in S/{\leftrightarrow_p}$ it holds that $f[B] = g[B]$, which we also denote by $f \leftrightarrow_p g$ and $u_f \leftrightarrow_p u_g$, respectively.
By definition, probabilistic bisimilarity is the union of all probabilistic bisimulations. To be able to speak of probabilistically bisimilar distributions (or of probabilistically bisimilar probabilistic states), probabilistic bisimilarity needs to be an equivalence relation. In fact, probabilistic bisimilarity is a probabilistic bisimulation. See [22] for a proof.
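The condition $f[B] = g[B]$ for every block B can be checked directly once a candidate partition of the action states is available. The small sketch below does exactly that; the names are illustrative and a rounding tolerance is used because probabilities are stored as doubles rather than as exact fractions.

#include <cmath>
#include <map>
#include <set>
#include <vector>

using Distribution = std::map<int, double>;
using Partition = std::vector<std::set<int>>;

// f[B]: probability mass that f assigns to the block B.
double lift(const Distribution& f, const std::set<int>& B) {
    double sum = 0.0;
    for (const auto& [s, p] : f) if (B.count(s)) sum += p;
    return sum;
}

// f and g agree on every block of the partition, up to a rounding tolerance.
bool distributions_equivalent(const Distribution& f, const Distribution& g,
                              const Partition& blocks, double eps = 1e-12) {
    for (const auto& B : blocks)
        if (std::fabs(lift(f, B) - lift(g, B)) > eps) return false;
    return true;
}

int main() {
    // Blocks {0,1} and {2}; f and g differ per state but agree per block.
    Partition pi{{0, 1}, {2}};
    Distribution f{{0, 0.25}, {1, 0.25}, {2, 0.5}};
    Distribution g{{0, 0.5}, {2, 0.5}};
    return distributions_equivalent(f, g, pi) ? 0 : 1;
}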

3. A Partition Refinement Algorithm for Probabilistic Bisimulation (Outline)

Many efficient algorithms for standard bisimulation calculate partitions of states [5,23,24]. Here, we consider the construction of a partition  B of the sets of action states S and of probabilistic states U for some fixed PLTS A over a set of actions  Act . Below blocks of the partition always contain either action states or probabilistic states.

3.1. Stability of Blocks and Partitions

An important notion underlying the algorithm introduced below is that of the stability of a block of a partition. If a block is not stable, it contains states that are not bisimilar. These states either have different transitions or different distributions. We first define the notion of stability more generically on sets instead of on blocks. Then, we lift it to partitions. A sketch of the corresponding naive checks is given after the definition.
Definition 3. (Stable Sets and Partitions).
1. 
A set of action states $B \subseteq S$ is called stable under a set of probabilistic states $C \subseteq U$ with respect to an action $a \in \mathit{Act}$ iff $s \xrightarrow{a} C$ whenever $t \xrightarrow{a} C$, and vice versa, for all $s, t \in B$. The set B is called stable under C iff B is stable under C with respect to all actions $a \in \mathit{Act}$.
2. 
A set of probabilistic states $B \subseteq U$ is called stable under a set of action states $C \subseteq S$ iff $u[C] = v[C]$ for all $u, v \in B$.
3. 
A set of states B with $B \subseteq S$, respectively $B \subseteq U$, is called stable under a partition $\mathcal{C}$ of $S \cup U$, with $C \subseteq S$ or $C \subseteq U$ for all $C \in \mathcal{C}$, iff B is stable under each $C \in \mathcal{C}$ with $C \subseteq U$, respectively $C \subseteq S$.
4. 
A partition $\mathcal{B}$ is called stable under a partition $\mathcal{C}$ iff all blocks B of $\mathcal{B}$ are stable under $\mathcal{C}$.
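As announced, the two base cases of this definition can be checked naively by inspecting transitions and probability masses. The sketch below does this in time proportional to the number of states and transitions considered; it is meant for testing and illustration only, far from the bookkeeping of Section 4, and all names are illustrative assumptions.

#include <cmath>
#include <map>
#include <set>
#include <string>
#include <tuple>
#include <vector>

// Action transitions s --a--> u; probabilistic states u are plain integers here.
using Transition = std::tuple<int, std::string, int>;
using Distribution = std::map<int, double>;

// Does state s have an a-transition into the set C of probabilistic states?
bool goes_into(const std::vector<Transition>& trans, int s, const std::string& a,
               const std::set<int>& C) {
    for (const auto& [src, lab, u] : trans)
        if (src == s && lab == a && C.count(u)) return true;
    return false;
}

// Definition 3.1: a set B of action states is stable under a set C of probabilistic
// states with respect to action a iff all states of B agree on having an a-step into C.
bool stable_action(const std::vector<Transition>& trans, const std::set<int>& B,
                   const std::string& a, const std::set<int>& C) {
    bool first = true, expected = false;
    for (int s : B) {
        bool into = goes_into(trans, s, a, C);
        if (first) { expected = into; first = false; }
        else if (into != expected) return false;
    }
    return true;
}

// Definition 3.2: a set B of probabilistic states (given by their distributions) is
// stable under a set C of action states iff all of them assign the same mass to C.
bool stable_prob(const std::vector<Distribution>& dist, const std::set<int>& B,
                 const std::set<int>& C, double eps = 1e-12) {
    bool first = true;
    double expected = 0.0;
    for (int u : B) {
        double mass = 0.0;
        for (const auto& [s, p] : dist[u]) if (C.count(s)) mass += p;
        if (first) { expected = mass; first = false; }
        else if (std::fabs(mass - expected) > eps) return false;
    }
    return true;
}

int main() {
    std::vector<Transition> trans{{0, "a", 7}, {1, "a", 8}};
    std::set<int> B{0, 1}, C{7, 8};
    return stable_action(trans, B, "a", C) ? 0 : 1;   // both states have an a-step into C
}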
There are two simple but important properties stating that stability is preserved when splitting sets. The first one says that subsets of stable sets are also stable.
Lemma 1.
Let $B \subseteq S$ be a set of action states and $C \subseteq U$ a set of probabilistic states. If B is stable under C, then any $B' \subseteq B$ is also stable under C. Similarly, if C is stable under B, then any $C' \subseteq C$ is also stable under B.
Proof. 
We only prove the first part as the argument for the second part is essentially the same. If $s, t \in B'$, then also $s, t \in B$. As B is stable under C, it holds that for every action $a \in \mathit{Act}$ either both $s \xrightarrow{a} C$ and $t \xrightarrow{a} C$ hold, or neither does. Thus, $B'$ is stable under C. ☐
The second property says that splitting a set into two parts can only influence the stability of another set if there is a transition or a positive probability from this other set to one of the parts of the split set.
Lemma 2.
Let $B \subseteq S$ be a set of action states and $C \subseteq U$ a set of probabilistic states.
1. 
Suppose B is stable under C with respect to an action a, $C' \subseteq C$, and there is no $s \in B$ such that $s \xrightarrow{a} C'$. Then, B is stable under $C'$ and $C \setminus C'$ with respect to a.
2. 
Suppose C is stable under B, $B' \subseteq B$, and $u[B'] = 0$ for all $u \in C$. Then, C is stable under $B'$ and $B \setminus B'$.
Proof. 
We only provide the proof for the first part of this lemma. If $s, t \in B$, then both $s \not\xrightarrow{a} C'$ and $t \not\xrightarrow{a} C'$ by assumption. Thus, B is stable under $C'$ with respect to a. Furthermore, B is stable under $C \setminus C'$: Suppose $s, t \in B$ and $s \xrightarrow{a} C \setminus C'$. Thus, $s \xrightarrow{a} C$. As B is stable under C, $t \xrightarrow{a} C$, and by assumption $t \not\xrightarrow{a} C'$. Therefore, $t \xrightarrow{a} C \setminus C'$. Suppose $s \not\xrightarrow{a} C \setminus C'$. Then, also $s \not\xrightarrow{a} C$. As B is stable under C, $t \not\xrightarrow{a} C$ and hence, $t \not\xrightarrow{a} C \setminus C'$. ☐
The following property, called the stability property, says that a partition stable under itself induces a probabilistic bisimulation. In general, partition based algorithms for bisimulation search for such a stable partition.
Lemma 3. Stability Property.
Let $A = (S, \rightarrow)$ be a PLTS. If a partition $\mathcal{B}$ for A is stable under itself, then the corresponding equivalence relation $\equiv_{\mathcal{B}}$ on S is a probabilistic bisimulation.
Proof. 
By the first condition of Definition 3 and stability of all blocks in $\mathcal{B}$, we have that either $B \subseteq S$ or $B \subseteq U$ for each block $B \in \mathcal{B}$. We write $s \equiv_{\mathcal{B}} t$ iff $s, t \in B$ for some $B \in \mathcal{B}$. Note that, used in this way, $\equiv_{\mathcal{B}}$ is an equivalence relation on S.
Suppose $s \equiv_{\mathcal{B}} t$ for some $s, t \in S$ and $s \xrightarrow{a} f$. Let $u \in U$ correspond to f. Say $s, t \in B$ and $u \in B'$ for some blocks $B, B' \in \mathcal{B}$. Then, $s \xrightarrow{a} B'$. By stability of B for $\mathcal{B}$, it follows that $t \xrightarrow{a} B'$. Hence, $v \in B'$ and $g \in \mathcal{D}(S)$ exist such that v corresponds to g and $t \xrightarrow{a} g$. Therefore, for any block $B'' \in \mathcal{B}$ we have $f[B''] = u[B''] = v[B''] = g[B'']$ since the block $B'$ of u and v is stable under each block $B''$ of $\mathcal{B}$.
Thus, the stable partition $\mathcal{B}$ induces an equivalence relation that satisfies the conditions for a probabilistic bisimulation of Definition 2, as was to be shown. ☐

3.2. Outline of the Algorithm

We present our algorithm in two stages. An abstract description of the algorithm is presented as Algorithm 1; the detailed algorithm is provided as Algorithm 2. The set-up of Algorithm 1 is a fairly standard, iterative refinement of a partition  B , in this particular case containing both action states and probabilistic states, which are treated differently. In addition, following the approach of Paige and Tarjan [5], we maintain a coarser partition  C , which we call the set of constellations. Each constellation in partition  C is a union of one or more blocks of  B , thus B  is a refinement of  C . A constellation C C that consists of exactly one block in  B is called trivial. We refine partitions B and C until C only contains trivial constellations (see Line 5 of Algorithm 1).
Algorithm 1 Abstract Partition Refinement Algorithm for Probabilistic Bisimulation.
1: function partition-refinement
2:   $\mathcal{C} := \{S, U\}$
3:   $\mathcal{B} := \{U\} \cup \{ S_A \mid A \subseteq \mathit{Act} \}$
4:      where $S_A = \{ s \in S \mid \forall a :\ a \in A \Leftrightarrow \exists u \in U : s \xrightarrow{a} u \}$
5:   while $\mathcal{C}$ contains a non-trivial constellation C do
6:     choose block $B_C$ from $\mathcal{B}$ in C
7:     replace in $\mathcal{C}$ constellation C by $B_C$ and $C \setminus B_C$
8:     if C contains probabilistic states then
9:       for all blocks B of action states in $\mathcal{B}$ unstable under $B_C$ or $C \setminus B_C$ do
10:        refine $\mathcal{B}$ by splitting B into blocks of states with the same actions into $B_C$ and $C \setminus B_C$
11:      end for
12:    else
13:      for all blocks B of probabilistic states in $\mathcal{B}$ unstable under $B_C$ do
14:        refine $\mathcal{B}$ by splitting B into blocks of states with equal probabilities into $B_C$
15:      end for
16:    end if
17:  end while
18:  return $\mathcal{B}$
Algorithm 2 Partition Refinement Algorithm for Probabilistic Bisimulation
[Algorithm 2 is given as an image in the original publication; its line-by-line explanation follows in Section 4.]
Among others, we preserve the invariant that the blocks in partition B are always stable under partition C . If all constellations in C are trivial, then the partitions B and C coincide. Hence, the blocks in B are stable under itself, and according to Lemma 3 we have found a probabilistic bisimulation. Our algorithm works by iteratively refining the set of constellations C . When refining C , we must also refine B to preserve the above mentioned invariant.
Since the set of states of a PLTS is finite (cf. Definition 1) refinement of the partitions B and C cannot be repeated indefinitely. Thus, termination of the algorithm is guaranteed. The partition consisting of singletons of action states and of probabilistic states is the finest that can be obtained, but this is only possible if all states are not bisimilar. In practice, the main loop of the algorithm stops well before reaching that point.
The algorithm maintains the following three invariants:
Invariant 1.
Probabilistic bisimilarity $\leftrightarrow_p$ is a refinement of $\mathcal{B}$.
Invariant 2.
Partition B is a refinement of partition C .
Invariant 3.
Partition B is stable under the set of constellations C (mentioned already above).
Invariant 1 states that if two action states or two probabilistic states are probabilistically bisimilar, then they are in the same block of partition B . Thus, the partition-refinement algorithm will not separate states if they are bisimilar. By Invariant 2, we have that, at the end and at the start of each iteration, each constellation in C is a union of blocks in B . Invariant 3 says that blocks in partition B cannot be split by blocks in constellation C .
In Lines 2 and 3 of Algorithm 1, the set of constellations and the initial partition are set such that the invariants hold. All probabilistic states are put in one block, and all action states with exactly the same actions labelling outgoing transitions are also put together in blocks. (Note the universal quantification over all actions a in A for the set comprehension at Line 4 to ensure that only maximal blocks are included in $\mathcal{B}$ for it being a partition indeed.) The set of constellations contains two constellations, namely one with all action states and one with all probabilistic states. It is straightforward to see that Invariants 1 and 2 hold. Invariant 3 is valid because all transitions from action states go to probabilistic states and vice versa.
Invariants 1–3 guarantee correctness of Algorithm 1. That is, from the invariants, it follows that, upon termination, when all constellations have become trivial, the computed partition B identifies probabilistically bisimilar action states and probabilistically bisimilar probabilistic states.
Theorem 1.
Consider the partition B resulting from Algorithm 1. We find that (i) two action states are in the same block of B iff they are probabilistically bisimilar, and (ii) two probabilistic states are in the same block of B iff they are probabilistically bisimilar.
Proof. 
Upon termination, because of the while loop of Algorithm 1, all constellations of C are trivial, i.e. each constellation in C consists of exactly one block of B . Hence, by Invariant 2, the partitions B and C coincide. Thus, by Invariant 3, each block of B is stable under each block in B . In other words, partition B is stable under itself.
By the Stability Property of Lemma 3, we have that $\equiv_{\mathcal{B}}$ is a probabilistic bisimulation on S. It follows that two action states in the same block of $\mathcal{B}$ are probabilistically bisimilar. Reversely, by Invariant 1, probabilistically bisimilar action states are in the same block of $\mathcal{B}$. Thus, $\leftrightarrow_p$ and $\equiv_{\mathcal{B}}$ coincide on S. In other words, two action states are in the same block of $\mathcal{B}$ iff they are probabilistically bisimilar.
To compare $\leftrightarrow_p$ and the relation $\equiv_{\mathcal{B}}$ on U, choose probabilistic states $u, v \in U$ such that $u \equiv_{\mathcal{B}} v$. Thus, u and v are in the same block B of $\mathcal{B}$. By stability of block B for $\mathcal{B}$, it follows that $u[B'] = v[B']$ for each block $B' \subseteq S$. Since $\leftrightarrow_p$ and $\equiv_{\mathcal{B}}$ coincide on S, this implies $u[B'] = v[B']$ for all $B' \in S/{\leftrightarrow_p}$. Thus, we have $u \leftrightarrow_p v$. Reversely, if $u \leftrightarrow_p v$, we have $u, v \in B$ for some block B of $\mathcal{B}$ by Invariant 1. Thus, two probabilistic states are in the same block of $\mathcal{B}$ iff they are probabilistically bisimilar. ☐
It is worth noting that in Line 5 of Algorithm 1 an arbitrary non-trivial constellation is chosen and in Line 6 an arbitrary block B C is selected from C (we later put a constraint on the choice of B C ). In general, there are many possible choices and this influences the way the final partition is calculated. The previous theorem indicates that the final partition is not affected by this choice, neither is the complexity upper-bound, see Section 4.6. However, it is conceivable that practical runtimes can be improved by choosing the non-trivial constellation C and the block B C optimally.
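Before turning to how the splitting is carried out, it may help to see the refinement idea in executable form. The sketch below is a naive, signature-based fixpoint computation: action states are repeatedly re-partitioned by their sets of (action, target-block) pairs and probabilistic states by their probability mass per block of action states, until nothing changes. It is not the constellation-based algorithm of this paper and does not meet its complexity bound, but it yields the same final partition and is convenient as a test oracle. The flat representation, the names, and the use of doubles instead of exact fractions are illustrative assumptions.

#include <cstddef>
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <tuple>
#include <utility>
#include <vector>

// Hypothetical flat representation of a PLTS: action states are 0..n-1,
// probabilistic states are indices into `dist`.
struct PLTS {
    int n = 0;                                             // number of action states
    std::vector<std::map<int, double>> dist;               // support of each distribution
    std::vector<std::tuple<int, std::string, int>> trans;  // s --label--> u
};

// Re-partition elements by signature; returns dense block ids starting at 0.
template <typename Sig>
std::vector<int> split_by(const std::vector<Sig>& sigs) {
    std::map<Sig, int> ids;
    std::vector<int> block(sigs.size());
    for (std::size_t i = 0; i < sigs.size(); ++i) {
        auto it = ids.emplace(sigs[i], (int)ids.size()).first;
        block[i] = it->second;
    }
    return block;
}

// Naive signature-based refinement; doubles are compared exactly here, whereas an
// actual tool would use exact fractions.
std::pair<std::vector<int>, std::vector<int>> naive_bisim(const PLTS& A) {
    std::vector<int> blockA(A.n, 0), blockP(A.dist.size(), 0);
    for (bool changed = true; changed;) {
        // Signature of an action state: the set of (label, block of target distribution).
        std::vector<std::set<std::pair<std::string, int>>> sigA(A.n);
        for (const auto& [s, lab, u] : A.trans) sigA[s].insert({lab, blockP[u]});
        // Signature of a probabilistic state: probability mass per block of action states.
        std::vector<std::map<int, double>> sigP(A.dist.size());
        for (std::size_t u = 0; u < A.dist.size(); ++u)
            for (const auto& [s, p] : A.dist[u]) sigP[u][blockA[s]] += p;
        std::vector<int> newA = split_by(sigA), newP = split_by(sigP);
        changed = (newA != blockA) || (newP != blockP);
        blockA = newA; blockP = newP;
    }
    return {blockA, blockP};
}

int main() {
    // Two action states that each do `a` into a Dirac distribution on a deadlock state.
    PLTS A;
    A.n = 4;                                    // 0,1 do `a`; 2,3 are deadlocked targets
    A.dist.push_back({{2, 1.0}});
    A.dist.push_back({{3, 1.0}});
    A.trans = {{0, "a", 0}, {1, "a", 1}};
    auto [blockA, blockP] = naive_bisim(A);
    std::cout << (blockA[0] == blockA[1]) << (blockA[2] == blockA[3])
              << (blockP[0] == blockP[1]) << '\n';   // expect 111
}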

3.3. Refining the Set of Constellations and Restoring the Invariants

As we see from the high-level description of the partition refinement Algorithm 1, a non-trivial constellation C and a constituent block $B_C$ are chosen (Lines 5 and 6) and C is replaced in $\mathcal{C}$ by the smaller constellations $B_C$ and $C \setminus B_C$ (Line 7). This preserves Invariants 1 and 2, but Invariant 3 may be violated as stability under $B_C$ or $C \setminus B_C$ (or both) may be lost: On the one hand, it may be the case that two action states s and t both have an a-transition into C, but s may have one into $B_C$ whereas t only has one into $C \setminus B_C$, or vice versa. On the other hand, it may be the case that two probabilistic states u and v yield the same value for C as a whole, i.e., $u[C] = v[C]$, but this by no means needs to hold for $B_C$ or $C \setminus B_C$, i.e., possibly $u[B_C] \neq v[B_C]$ and $u[C \setminus B_C] \neq v[C \setminus B_C]$. Therefore, in the remainder of the body of Algorithm 1, the blocks that are unstable under $B_C$ and $C \setminus B_C$ are split such that Invariant 3 is restored, both for blocks of action states (Lines 9 and 10) and for blocks of probabilistic states (Lines 13 and 14). In the next section, the detailed Algorithm 2 describes how this is done precisely.
The general situation when splitting a block B for a constellation C containing a block B C is depicted in Figure 2, at the left where B contains action states and at the right where B consists of probabilistic states. We first consider the case at the left.
In this case, block $B \subseteq S$ is stable under constellation $C \subseteq U$ and C is non-trivial. Thus, C properly contains a block $B_C$ of $\mathcal{B}$, and we distinguish two non-empty subsets of C, the block $B_C$ on its own and the remaining blocks together in $C \setminus B_C$. As B is stable under C, the block B can only be unstable under $B_C$ or $C \setminus B_C$ if there is an action $a \in \mathit{Act}$ and a state $s \in B$ such that $s \xrightarrow{a} B_C$ (Lemma 2.1). Thus, we only investigate and split blocks for which such a transition $s \xrightarrow{a} B_C$ exists.
We can restore stability by splitting B into the following three subsets:
$\mathit{left}_a(B) = \{ s \in B \mid s \xrightarrow{a} B_C \wedge s \not\xrightarrow{a} C \setminus B_C \}$, $\mathit{mid}_a(B) = \{ s \in B \mid s \xrightarrow{a} B_C \wedge s \xrightarrow{a} C \setminus B_C \}$, and $\mathit{right}_a(B) = \{ s \in B \mid s \not\xrightarrow{a} B_C \wedge s \xrightarrow{a} C \setminus B_C \}$.
Note that the remaining set $\{ s \in B \mid s \not\xrightarrow{a} B_C \wedge s \not\xrightarrow{a} C \setminus B_C \}$ must be empty; if not, this would imply that there is some action state t such that $t \not\xrightarrow{a} C$. However, due to the existence of state s such that $s \xrightarrow{a} B_C$, this would mean that block B is unstable under C, contradicting Invariant 3.
Checking that the sets $\mathit{left}_a(B)$, $\mathit{mid}_a(B)$, and $\mathit{right}_a(B)$ are stable under C is immediate. As subsets of stable sets are also stable (Lemma 1) and B is stable under all other constellations of $\mathcal{C}$, the sets $\mathit{left}_a(B)$, $\mathit{mid}_a(B)$, and $\mathit{right}_a(B)$ are stable under all other constellations of $\mathcal{C}$ too.
Note that, due to the existence of state s with s a B C , it is not possible that both left a ( B ) and mid a ( B ) are equal to the empty set. It is however possible that left a ( B ) = B or mid a ( B ) = B , leaving the other two sets empty.
Lines 9 and 10 can now be read as follows. For all a Act , investigate all blocks B such that there is an action state s B with s a B C as these blocks are the only candidates to be unstable. Replace each such block B in B by { left a ( B ) , mid a ( B ) , right a ( B ) } to restore stability under B C and C B C .
Invariants 1 and 2 are preserved by splitting B. For Invariant 2, this is trivial by construction. For Invariant 1, note that the states in different blocks among left a ( B ) , mid a ( B ) , right a ( B ) cannot be probabilistically bisimilar as they have unique transitions to states B C and C B C and these target states cannot be bisimilar by Invariant 1. Thus, if two states of B are probabilistically bisimilar then both are in the same subset left a ( B ) , mid a ( B ) , or right a ( B ) of B.
We next turn to the case of a set of probabilistic states B, see the right-side of Figure 2. Again, we assume that the non-trivial constellation C is replaced by its two non-empty subsets B C and C B C . As in the previous case, although the block B is stable under the constellation C, this may not be the case under the subsets B C and C B C .
To restore stability, we now consider, for all q with $0 \le q \le 1$, the sets
$B_q = \{ u \in B \mid u[B_C] = q \}$.
Note that $B_q \neq \emptyset$ for only finitely many $q \in [0,1]$. Observe that each set $B_q$ is stable under $B_C$ as by construction $u[B_C] = v[B_C] = q$ for any $u, v \in B_q$. The set $B_q$ is also stable under $C \setminus B_C$. To see this, consider two states $u, v \in B_q$. As block $B \subseteq U$ is stable under constellation $C \subseteq S$, $u[C] = v[C]$. Hence, $u[C \setminus B_C] = u[C] - u[B_C] = v[C] - v[B_C] = v[C \setminus B_C]$. By Lemma 1, the new blocks $B_q$ are also stable under the other constellations in $\mathcal{C}$.
According to Lemma 2.2, only those blocks B that contain a probabilistic state $u \in B$ such that $u[B_C] > 0$ can be unstable under $B_C$ and $C \setminus B_C$. Thus, at Line 13 of Algorithm 1, we consider all those blocks B and, at Line 14, replace each of them in $\mathcal{B}$ by the non-empty subsets $B_q$, $0 \le q \le 1$. This makes the partition stable again under all constellations in $\mathcal{C}$, in particular under the new constellations $B_C$ and $C \setminus B_C$.
Again, it is straightforward to see that Invariants 1 and 2 are not violated by replacing the block B by the blocks B q . For Invariant 1, if states are probabilistically bisimilar in B, they remain in the same block B q . For Invariant 2, as B is refined, partition B remains a refinement of partition C .
For the detailed algorithm in Section 4, it is required to group the sets $B_q$ as follows: $\mathit{left}_p(B) := B_1$, $\mathit{right}_p(B) := B_0$, and $\mathit{mid}_p(B) = \{ B_q \mid 0 < q < 1 \}$. This does not play a role here, but $\mathit{left}_p(B)$, $\mathit{mid}_p(B)$, and $\mathit{right}_p(B)$ are already indicated in Figure 2; in particular, $\mathit{mid}_p(B) = \{ B_{1/4}, B_{1/2}, B_{3/4} \}$.
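The construction of the sets $B_q$ amounts to grouping the probabilistic states of B by the probability they assign to $B_C$. A naive version of this grouping is sketched below; it keys a map on doubles purely for illustration, whereas the implementation works with exact fractions and with the sorting scheme of Section 4.4.

#include <map>
#include <set>
#include <vector>

using Distribution = std::map<int, double>;

// Group a block B of probabilistic states (indices into dist) by q = u[B_C].
// The group for q = 1 corresponds to left_p(B), the one for q = 0 to right_p(B),
// and the remaining groups to mid_p(B).
std::map<double, std::vector<int>>
split_by_probability(const std::vector<Distribution>& dist,
                     const std::vector<int>& B, const std::set<int>& B_C) {
    std::map<double, std::vector<int>> groups;
    for (int u : B) {
        double q = 0.0;
        for (const auto& [s, p] : dist[u]) if (B_C.count(s)) q += p;
        groups[q].push_back(u);
    }
    return groups;
}

int main() {
    std::vector<Distribution> dist = {{{0, 1.0}}, {{0, 0.5}, {1, 0.5}}, {{1, 1.0}}};
    auto groups = split_by_probability(dist, {0, 1, 2}, {0});
    return groups.size() == 3 ? 0 : 1;   // the groups for q = 1, q = 1/2 and q = 0
}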

3.4. An Example

We provide an example to illustrate how Algorithm 1 calculates partitions.
Example 1.
Consider the PLTS given in Figure 3. We provide a detailed account of the partitions that are obtained when calculating probabilistic bisimulation. The obtained partitions are listed in Table 1. In the lower table, nine partitions together with their constellations are listed that are generated for a run of Algorithm 1. In the upper table the blocks that occur in these partitions are defined. Observe that we put the blocks and constellations with action states and probabilistic states in different columns. This is only for clarity, as in the current partition and the current set of constellations they are joined.
Algorithm 1 starts with four blocks of action states, $S_0$ to $S_3$, which contain the action states with no outgoing transitions and those with an outgoing transition labelled with a, with b, and with c, respectively. In the algorithm, all probabilistic states are initially collected in block $U_0$. There are two constellations, viz. $S_0 \cup S_1 \cup S_2 \cup S_3$ and $U_0$. These initial partitions are listed in Row 0 of the lower part of Table 1.
Since the constellation with action states is non-trivial, we split it, rather arbitrarily, into $S_0$ and $S_1 \cup S_2 \cup S_3$. The block $U_0$ is not stable under $S_0$ and $S_1 \cup S_2 \cup S_3$ and is split into $U_1 = \{u_1, u_3, v_1, \ldots, v_5\}$, $U_2 = \{u_2, u_4\}$ and $U_3 = \{u_5, u_6\}$. This is because we have $u[S_0] = 1$ for u equal to $u_1$, $u_3$, and $v_1$ to $v_5$; we have $u[S_0] = \frac{1}{2}$ for u equal to $u_2$ and $u_4$; and we have $u_5[S_0] = 0$ and $u_6[S_0] = 0$. The resulting partitions are listed at Row 1 in Table 1.
For the second iteration, we consider the non-trivial constellation S 1 S 2 S 3 and split it into S 1 and S 2 S 3 . Note, the action states s 1 to s 4 in S 1 do not have incoming transitions. Consequently, for all u U 1 , we have u [ S 1 ] = 0 ; for all u U 2 we have u [ S 1 ] = 0 ; for all u U 3 we have u [ S 1 ] = 0 . Thus, all blocks of probabilistic states are stable under S 1 and S 2 S 3 . Hence, no block is split.
In the third iteration, we split the non-trivial constellation $S_2 \cup S_3$ into $S_2$ and $S_3$. For all $u \in U_1$ we have $u[S_2] = 0$. Thus, $U_1$ is stable under $S_2$ and $S_3$. For $U_2$, the probabilistic states $u_2$ and $u_4$ agree on the value $\frac{1}{2}$ for $S_2$, hence for $S_3$ too. Thus, $U_2$ is stable as well. However, for $u_5$ and $u_6$ in $U_3$ we have $u_5[S_2] = 1$ and $u_6[S_2] = \frac{1}{3}$. Therefore, $U_3$ needs to be split into $U_4 = \{u_5\}$ and $U_5 = \{u_6\}$.
At this point, all constellations with actions states are trivial, so at iteration 4 we turn to the non-trivial constellation of probabilistic states U 1 U 2 U 4 U 5 and split it into U 1 and U 2 U 4 U 5 . Block S 0 is stable since each of its states has no transitions at all. Block S 1 is not stable: s 1 , s 2 a U 1 and s 1 , s 2 a U 2 U 4 U 5 , but s 3 , s 4 a U 1 and s 3 , s 4 a U 2 U 4 U 5 . Thus, S 1 needs to be split into S 4 = { s 1 , s 2 } and S 5 = { s 3 , s 4 } . Block S 2 is stable since its states have only b-transitions into U 1 . Block S 3 is a singleton and therefore cannot be split.
The following iteration, Iteration 5, sets U 2 and U 4 U 5 apart as constellations. Again, in absence of transitions, block S 0 is stable under U 2 and U 4 U 5 . The same holds for S 2 that has only b-transitions into U 0 . Block S 3 can be ignored. For S 4 , both s 1 and s 2 have an a-transition into U 2 as their only transition. Hence, block S 4 is stable. Similarly, S 5 is stable, as its states s 3 and s 4 both have an a-transition into U 4 U 5 and no other transitions. Overall, in this iteration, no blocks require splitting to restore Invariant 3.
Next, at Iteration 6, we split non-trivial constellation U 4 U 5 into U 4 and U 5 . For S 0 , S 2 , S 3 and S 4 we conclude stability in the same way as in the previous iteration. However, now we have for s 3 , s 4 S 5 on the one hand s 3 a U 4 and s 3 a U 5 , but on the other hand s 4 a U 4 and s 4 a U 5 . Hence, S 5 needs to be split, yielding the singletons S 6 = { s 3 } and S 7 = { s 4 } .
Returning to constellations of actions states, at Iteration 7, we split S 4 S 6 S 7 over S 4 and S 6 S 7 . All probabilistic states have value 0 for both S 4 and S 6 S 7 , hence no split of probabilistic blocks is needed.
This is similar in Iteration 8, where the non-trivial constellation S 6 S 7 is split, and none of the blocks become unstable. Now, all constellations are trivial and the algorithm terminates. According to the Stability Property, Lemma 3, the corresponding equivalence relation is a probabilistic bisimulation. Thus, the final partition is { S 0 , S 2 , S 3 , S 4 , S 6 , S 7 , U 1 , U 2 , U 4 , U 5 } . Moreover, the deadlock states t 1 , t 3 , t 4 , t 6 , t 7 and r 1 to r 5 are probabilistically bisimilar, the states t 2 , t 5 , t 8 , t 9 that have only a b-transition into a Dirac distribution to deadlock are probabilistically bisimilar, the states s 1 and s 2 are probabilistically bisimilar (which is clear when identifying states t 7 and t 8 ), whereas the remaining action states s 3 , s 4 and t 10 have no probabilistically bisimilar counterpart. For the probabilistic states the states u 1 , u 3 and v 1 to v 5 are identified by probabilistic bisimulation. This also holds for the probabilistic states u 2 and u 4 . Probabilistic states u 5 and u 6 each have no probabilistically bisimilar counterpart.

4. A Partition-Refinement Algorithm for Probabilistic Bisimulation (Detailed)

Algorithm 1 gives an outline but leaves many details implicit. The detailed partition-refinement algorithm is presented in this section as Algorithm 2. It has the same structure as Algorithm 1, but in this section we focus on how to efficiently calculate whether and how blocks must be split, and how this split is actually carried out. We first explain the grouping of action transitions per action, next we introduce various data structures that are used by the algorithm, subsequently we explain how the algorithm works line by line, and finally we give an account of its complexity.

4.1. Grouping Action Transitions per Action Label

To obtain the complexity bound of our algorithm, it is essential that we can group action transitions by actions linearly in the number of transitions. Grouping means that the action transitions with the same action occur consecutively in this ordering. It is not necessary that the transitions are ordered according to some overall ordering.
We assume that $|\mathit{Act}| \le m_a$ and that the actions in $\mathit{Act}$ are consecutively numbered. Recall that $m_a$ denotes the number of transitions $s \xrightarrow{a} u$. These assumptions are easily satisfied, by removing those actions in $\mathit{Act}$ that are not used in transitions and by sorting and numbering the remaining action labels. Sorting these actions adds a negligible $O(|\mathit{Act}| \log |\mathit{Act}|) \subseteq O(m_a \log m_a)$.
Grouping transitions is performed by an array of buckets indexed with actions. All transitions are put in the appropriate bucket in constant time, exploiting that actions are numbered. Furthermore, all buckets that contain transitions are linked together. When all transitions are in the buckets, a straightforward traversal of all linked buckets provides the transitions in a grouped order. This requires time linear in the number of considered action transitions. Note that the number of buckets is equal to $|\mathit{Act}| \le m_a$ and, therefore, the buckets do not require more than linear memory.
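A minimal version of this bucket grouping is sketched below; the flat tuple representation of transitions is an illustrative assumption.

#include <tuple>
#include <vector>

// Transitions s --a--> u, with action labels already numbered 0..num_labels-1.
using Transition = std::tuple<int, int, int>;   // (source, label, target)

// Group transitions so that equal labels are consecutive; time and memory are
// linear in the number of transitions plus the number of labels.
std::vector<Transition> group_by_label(const std::vector<Transition>& trans,
                                       int num_labels) {
    std::vector<std::vector<Transition>> buckets(num_labels);
    std::vector<int> used;                       // non-empty buckets, in first-use order
    for (const auto& t : trans) {
        int a = std::get<1>(t);
        if (buckets[a].empty()) used.push_back(a);
        buckets[a].push_back(t);
    }
    std::vector<Transition> grouped;
    grouped.reserve(trans.size());
    for (int a : used)
        for (const auto& t : buckets[a]) grouped.push_back(t);
    return grouped;
}

int main() {
    std::vector<Transition> t{{0, 1, 0}, {1, 0, 1}, {2, 1, 2}};
    return group_by_label(t, 2).size() == 3 ? 0 : 1;   // the two label-1 transitions end up adjacent
}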

4.2. Data Structures

We give a concise overview of the concrete data structures in the algorithm for states, transitions, blocks, and constellations. We list the names of the fields in these data structures in a programming vein to keep a close link with the actual implementation.
The chosen data structures are not particularly optimised. Exploiting ideas from [6,24,25] to store states, blocks, and constellations, usage of time and memory can be further reduced. All data structures come in two flavours, one related to actions and the other related to probabilities. We treat them simultaneously and only mention their differences when appropriate.

4.2.1. Global

In the detailed algorithm, there are arrays containing transitions, actions, blocks and constellations. There is a stack of non-trivial constellations to identify in constant time which constellation must be investigated in the main loop. Furthermore, there is an array containing the variables state_to_constellation_cnt, which are explained below.
For all action transitions $s \xrightarrow{a} u$, it is maintained how many action transitions there are labelled with the same action a that go from s to the constellation C containing u. This value is called state_to_constellation_cnt for this transition. The value is required to efficiently split probabilistic blocks (the idea of using such variables stems from [5]). For each state s, constellation C, and action a there is one instance of state_to_constellation_cnt stored in a global array. Each transition $s \xrightarrow{a} u$ contains a reference called state_to_constellation_cnt_ptr to the appropriate value in this array. See Figure 3 for a graphical illustration with a constellation C of probabilistic states and blocks $B_1$ and $B_2$ of action states. The purpose of this construction is that state_to_constellation_cnt can be changed by one operation for all transitions from the same state with the same action to the same constellation simultaneously.

4.2.2. Transition

Each transition consists of the fields from, label and to. Here, from and to refer to an action/probabilistic state, and label is the action label or probabilistic label of the transition. The action labels are consecutive numbers; the probabilistic labels are exact fractions. Action transitions also contain a reference state_to_constellation_cnt_ptr to the variable state_to_constellation_cnt as indicated above.

4.2.3. State

Each action state and probabilistic state contains a list of incoming transitions and a reference to the block in which the state resides. For intermediate calculations, each state contains a boolean mark_state which is used to indicate that a state has been marked. Each action state also contains two more variables for temporary use. When deciding whether blocks need to be split, the variable residual_transition_cnt indicates how many residual transitions there are to blocks $C \setminus B_C$ when splitting takes place by a block $B_C$. The variable transition_cnt_ptr is used to let the variable state_to_constellation_cnt_ptr of an action transition point to a new instance of state_to_constellation_cnt when this transition is moved to a new block. In probabilistic states, there is the temporary variable cumulative_prob used to calculate the total probability to reach a block under splitting.

4.2.4. Block

Blocks contain an indication of the constellation in which the block occurs, a list of the states contained in the block including the size of this list, and a list of transitions ending in this block. For blocks of action states, this list of transitions is grouped by action label, i.e., transitions with the same action label form a consecutive sublist. For temporary use, there is also a variable to indicate that the block is marked. This marking contains exactly the information that the functions aMark and pMark, discussed below, provide for blocks of action states and blocks of probabilistic states, respectively.

4.2.5. Constellation

Finally, constellations contain a list of the blocks in the constellation as well as the cumulative number of states contained in all blocks in this constellation.

4.3. Explanation of the Detailed Algorithm

Algorithm 1 focuses on how, by refining partitions and sets of constellations, probabilistic bisimulation can be calculated. In Algorithm 2, we stress the details of carrying out concrete refinement steps to realise the required time bound. As already indicated, the overall structure of both algorithms is the same.
The initial Lines 2 and 3 of Algorithm 2 are the same as those of Algorithm 1. In Line 3, the partition B is set to contain one block with all probabilistic states and a number of blocks of action states, grouped per common outgoing action labels. Thus, two action states are in the same block initially if their menu, i.e., the set of actions for which there is a transition, is identical. This initial partition B is calculated using a simple partition refinement algorithm on outgoing transitions of states. This operation is linear in the number of outgoing action transitions when using grouping of transitions as explained in Section 4.1.
At Line 4, the incoming transitions are ordered on actions as indicated in Section 4.1. At Line 5, an array with one instance of state_to_constellation_cnt for each action label is made, where each instance contains the number of action transitions that carry that action label. The reference state_to_constellation_cnt_ptr of each action transition is set to refer to the appropriate instance in this array. This is done by simply traversing all transitions $s \xrightarrow{a} u$ grouped by action labels and incrementing the appropriate entry in the array containing all state_to_constellation_cnt variables. The appropriate entry can be found using the temporary variable transition_cnt_ptr associated to state s. If no entry for state_to_constellation_cnt exists yet, the variable transition_cnt_ptr belonging to s is null and an appropriate entry must be created.
In Line 6, selecting a non-trivial constellation is straightforward, as a stack of non-trivial constellations is maintained. Initially, this stack contains $\mathcal{C} = \{S, U\}$. To obtain the required time complexity, we select $B_C$ such that $|B_C| \le \frac{1}{2}|C|$ in Line 8. This is done in constant time as we know the number of states in C. Hence, either the first or the second block B of constellation C satisfies $|B| \le \frac{1}{2}|C|$ (for if the first block contains more than half the states, the second one cannot). We replace the constellation C by $B_C$ and $C \setminus B_C$ in $\mathcal{C}$, see Line 8, and put the constellation $C \setminus B_C$ on the stack of non-trivial constellations if it is non-trivial.
From Line 9 to Line 19, the partition B is refined to restore the invariants, especially Invariant 3. This is done by first marking the blocks (Line 11 and Line 16) such that it is clear how they must be split, and by subsequently splitting the blocks (Lines 12–14, and Lines 17–19). Both operations are described in the next two subsections.

4.4. Marking

Given a constellation C that contains a block B C and in the case of an action transition, an action a, we need to know which blocks need to be split in what way. This is calculated using the functions aMark ( B , C , B C , a ) and pMark ( B , C , B C ) . The first one is for marking blocks with respect to action transitions, the second for marking blocks with respect to probabilities.
Both functions yield a five-tuple $\langle \mathbb{B}, \mathit{left}, \mathit{mid}, \mathit{right}, \mathit{large} \rangle$. Here, $\mathbb{B} \subseteq \mathcal{B}$ is a set of blocks that may have to be split, and $\mathit{left}$, $\mathit{mid}$, and $\mathit{right}$ are functions that together, for each block $B \in \mathbb{B}$, provide the sets into which B must be partitioned. The set $\mathit{large}(B)$ is the largest set among them. For every set $B'$ into which B must be partitioned, except for $\mathit{large}(B)$, it holds that $|B'| \le \frac{1}{2}|B|$. To obtain the complexity bound, we only move such small blocks out of B, i.e., those blocks not equal to $\mathit{large}(B)$.
We note that sets in left ( B ) , mid ( B ) and right ( B ) can be empty. Such sets can be ignored. It is also possible that there is only one non-empty set being equal to B itself. In this case, B is stable under B C and C B C . Furthermore, it is equal to large ( B ) and therefore B is kept intact.
We now concentrate on the function $\mathit{aMark}(\mathcal{B}, C, B_C, a)$ with a partition $\mathcal{B}$, a constellation C, a block $B_C$ contained in C, and an action a. In this situation, C is a non-trivial constellation of probabilistic states. Since C contains probabilistic states only, incoming transitions for states in $B_C$ are action transitions. The situation is depicted in Figure 2, at the left. The call $\mathit{aMark}(\mathcal{B}, C, B_C, a)$ returns the tuple $\langle \mathbb{B}_a, \mathit{left}_a, \mathit{mid}_a, \mathit{right}_a, \mathit{large}_a \rangle$ defined as follows.
$\mathbb{B}_a = \{ B \in \mathcal{B} \mid \exists s \in B : s \xrightarrow{a} B_C \}$ and, for each $B \in \mathbb{B}_a$,
$\mathit{left}_a(B) = \{ s \in B \mid s \xrightarrow{a} B_C \wedge s \not\xrightarrow{a} C \setminus B_C \}$,
$\mathit{mid}_a(B) = \{ s \in B \mid s \xrightarrow{a} B_C \wedge s \xrightarrow{a} C \setminus B_C \}$,
$\mathit{right}_a(B) = \{ s \in B \mid s \not\xrightarrow{a} B_C \wedge s \xrightarrow{a} C \setminus B_C \}$, and
$\mathit{large}_a(B)$: the largest set among $\mathit{left}_a(B)$, $\mathit{mid}_a(B)$, and $\mathit{right}_a(B)$.
We calculate $\mathbb{B}_a$ by traversing the list of all transitions with action a going into $B_C$ and adding each block containing any source state of these transitions to $\mathbb{B}_a$. The blocks in $\mathbb{B}_a$ are the only blocks that may be unstable under $B_C$ and $C \setminus B_C$ with respect to a (Lemma 2).
The for loop at Line 10 iterates over all actions. As the incoming transitions into block $B_C$ are grouped per action, all incoming transitions with the same action can easily be processed together, while the total processing time is linear in the number of incoming transitions. However, note that calculating $\mathbb{B}_a$ is based on partition $\mathcal{B}$, while $\mathcal{B}$ is refined at Line 14. Thus, the calculation of $\mathbb{B}_a$ for different actions a can be based on repeatedly refined partitions $\mathcal{B}$.
Next, we discuss how to construct the blocks $\mathit{left}_a(B)$, $\mathit{mid}_a(B)$, and $\mathit{right}_a(B)$. While traversing the a-labelled transitions into $B_C$, all action states in a block B with an a-transition into $B_C$ are marked and (temporarily) moved into $\mathit{left}_a(B)$. The remaining states in block B form the subset $\mathit{right}_a(B)$. We keep track of the number of states in a block. Thus, we can easily maintain the size of $\mathit{right}_a(B)$.
To find out which states now in $\mathit{left}_a(B)$ must be transferred to $\mathit{mid}_a(B)$, the variables state_to_constellation_cnt are used. Recall that these variables record for each transition $s \xrightarrow{a} u$, with $u \in U$, how many transitions $s \xrightarrow{a} v$ there are to states $v \in C$. These variables are initialised in Line 5 of Algorithm 2. When the first state is moved to $\mathit{left}_a(B)$, we copy the value of state_to_constellation_cnt of transition $s \xrightarrow{a} u$, minus one, to the variable residual_transition_cnt belonging to state s of the transition. The number residual_transition_cnt indicates how many unvisited a-transitions are left from the state s into C. Every time an a-transition is visited of which the source state is already in $\mathit{left}_a(B)$, we decrease residual_transition_cnt of the source state by one again. If all a-transitions into $B_C$ have been visited, the number residual_transition_cnt of a state s indicates how many transitions labelled a go from s into $C \setminus B_C$.
Subsequently, we traverse the states in $\mathit{left}_a(B)$. If a state s has a non-zero residual_transition_cnt, we know that there are a-transitions from s to both $B_C$ and $C \setminus B_C$. Therefore, we move state s into $\mathit{mid}_a(B)$. Otherwise, all transitions from s with action a go to $B_C$ and s must remain in $\mathit{left}_a(B)$.
While moving states into $\mathit{left}_a(B)$ and $\mathit{mid}_a(B)$, we also keep track of the sizes of these sets. Hence, it is easy to indicate in $\mathit{large}_a(B)$ which set is the largest.
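The counting scheme of the last paragraphs can be illustrated in isolation. The sketch below processes the incoming a-transitions of $B_C$ and distributes their source states over $\mathit{left}_a$ and $\mathit{mid}_a$ (states of B without any a-transition into $B_C$ stay behind in $\mathit{right}_a$); the flat data layout and names are illustrative assumptions.

#include <map>
#include <vector>

// One incoming a-transition of the splitter block B_C: its source state and the
// number of a-transitions from that source into the whole constellation C
// (the state_to_constellation_cnt of the transition).
struct Incoming { int source; int state_to_constellation_cnt; };

// Distribute the visited source states over left_a (all a-transitions go into B_C)
// and mid_a (a-transitions into both B_C and C \ B_C), following Section 4.4.
void mark_action_block(const std::vector<Incoming>& into_BC,
                       std::vector<int>& left, std::vector<int>& mid) {
    std::map<int, int> residual;                 // residual_transition_cnt per source state
    for (const auto& t : into_BC) {
        auto it = residual.find(t.source);
        if (it == residual.end())
            residual[t.source] = t.state_to_constellation_cnt - 1;  // first visit
        else
            --it->second;                        // one more a-transition into B_C seen
    }
    for (const auto& [s, r] : residual)
        (r > 0 ? mid : left).push_back(s);       // r counts a-transitions into C \ B_C
}

int main() {
    // State 0 has two a-transitions into C, only one of them into B_C -> mid.
    // State 1 has a single a-transition into C, and it goes into B_C -> left.
    std::vector<Incoming> into_BC{{0, 2}, {1, 1}};
    std::vector<int> left, mid;
    mark_action_block(into_BC, left, mid);
    return (left == std::vector<int>{1} && mid == std::vector<int>{0}) ? 0 : 1;
}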
We calculate $\mathit{pMark}(\mathcal{B}, C, B_C)$ in a slightly different manner than $\mathit{aMark}$. In particular, we have $\mathit{mid}_p : \mathcal{B} \to 2^{2^U}$, i.e., $\mathit{mid}_p(B)$ is a set of blocks. This indicates that the block B can be partitioned into many sets, contrary to the situation with action blocks where B could be split into at most three blocks. The situation is depicted in Figure 2 at the right. The five-tuple that $\mathit{pMark}$ returns has the following components:
$\mathbb{B}_p = \{ B \in \mathcal{B} \mid \exists u \in B : u[B_C] > 0 \}$ and, for each $B \in \mathbb{B}_p$,
$\mathit{left}_p(B) = \{ u \in B \mid u[B_C] = 1 \}$,
$\mathit{mid}_p(B) = \{ \{ u \in B \mid u[B_C] = q \} \mid 0 < q < 1 \}$,
$\mathit{right}_p(B) = \{ u \in B \mid u[B_C] = 0 \}$, and
$\mathit{large}_p(B)$: the largest set from $\{\mathit{left}_p(B)\} \cup \mathit{mid}_p(B) \cup \{\mathit{right}_p(B)\}$.
The above is obtained by traversing all incoming probabilistic transitions of $B_C$. Whenever there is a state u in a block B such that $u \mapsto_p B_C$, one of the following cases applies:
  • If B is not in $\mathbb{B}_p$ yet, it is added now. The variable cumulative_prob in state u is set to p, and u is (temporarily) moved from B to $\mathit{left}_p(B)$.
  • If B is already in $\mathbb{B}_p$, then the probability p is added to cumulative_prob of state u.
After the traversal of all incoming probabilistic transitions into $B_C$, the variable cumulative_prob of u contains $u[B_C]$, i.e., the probability to reach $B_C$ from the state u.
Those states that are left in B form the set $\mathit{right}_p(B)$. We know the number of states in $\mathit{right}_p(B)$ by keeping track of how many states were moved to $\mathit{left}_p(B)$. Next, the states temporarily stored in $\mathit{left}_p(B)$ must be distributed over $\mathit{left}_p(B)$ and $\mathit{mid}_p(B)$. First, all states with cumulative_prob < 1 are moved into some set M such that $\mathit{left}_p(B)$ contains exactly the states with cumulative_prob = 1. Then, the states in M are sorted on their value for cumulative_prob such that it is easy to move all states with the same cumulative_prob into separate sets in $\mathit{mid}_p(B)$. In Figure 2, at the right, the set $\mathit{mid}_p(B)$ consists of three sets, corresponding to the probabilities $q = \frac{1}{4}$, $q = \frac{1}{2}$ and $q = \frac{3}{4}$ to reach $B_C$. Note that all processing steps mentioned require time proportional to the number of incoming probabilistic transitions of $B_C$, except for the time to sort. In the complexity analysis below, it is explained that the cumulative sorting time is bounded by $O(m_p \log n_p)$.
By traversing the sets of states in $\mathit{left}_p(B)$ and $\mathit{mid}_p(B)$ once more, we can determine which set among $\mathit{left}_p(B)$, $\mathit{right}_p(B)$, and the sets in $\mathit{mid}_p(B)$ contains the largest number of probabilistic states. This set is reported in $\mathit{large}_p(B)$.

4.5. Splitting

In Lines 14 and 19 of Algorithm 2, a block $B'$ is moved out of the existing block B. By the marking procedure, either $\mathit{aMark}$ or $\mathit{pMark}$, the states involved are already put in separate lists and are moved in constant time to the new block $B'$.
Blocks contain lists of incoming transitions. When moving the states to a new block, the incoming transitions are moved by traversing the incoming transitions of each moved state, removing them from the list of incoming transitions of the old block and inserting them in the same list for the new block. There is a complication, namely that incoming action transitions must be grouped by action labels. This is done separately for the transitions moved to $B'$ as explained in Section 4.1, and this is linear in the number of transitions being moved. When removing incoming action transitions from the old block B, the ordering of the transitions is maintained. Thus, the grouping of incoming action transitions into B remains intact without requiring extra work.
When moving action states to a new block, we also need to adapt the variable state_to_constellation_cnt for each action transition $s \xrightarrow{a} C$ with state $s \in B$. Observe that this only needs to be done if there are some a-transitions to $B_C$ and some to $C \setminus B_C$, which means that $s \in \mathit{mid}_a(B)$. In that case, residual_transition_cnt for state s is larger than 0.
This is accomplished by traversing all incoming transitions $s \xrightarrow{a} u$ into $B_C$ one extra time. If residual_transition_cnt for s is larger than 0, we need to replace the state_to_constellation_cnt for this transition $s \xrightarrow{a} u$ by the value state_to_constellation_cnt − residual_transition_cnt of s. For all non-visited transitions $s \xrightarrow{a} u$ where $u \in C \setminus B_C$, the value of state_to_constellation_cnt must be set to residual_transition_cnt of s.
This is where we use that state_to_constellation_cnt is actually referred to via the pointer state_to_constellation_cnt_ptr (see Figure 3). When traversing the first transition of the form $s \xrightarrow{a} u$ with $u \in B_C$ such that residual_transition_cnt for s is larger than 0, a new entry in the array containing the variables state_to_constellation_cnt is constructed, containing the value state_to_constellation_cnt − residual_transition_cnt, and the auxiliary variable transition_cnt_ptr is used to point to this entry. At the same time, the value in the old entry in this array for state_to_constellation_cnt is replaced by the value residual_transition_cnt of state s. In this way, the values of state_to_constellation_cnt of all transitions labelled with a from s to $C \setminus B_C$ are updated in constant time, i.e., without visiting the transitions that are not moved. For all transitions $s \xrightarrow{a} u$ with $u \in B_C$, the variable state_to_constellation_cnt_ptr is made to refer to the new entry in the array.
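The essence of this pointer construction is that transitions share a counter through an index, so that the counter for the transitions that stay behind can be adjusted with a single write. A stripped-down sketch of that idea, with illustrative names and without the surrounding bookkeeping, is given below.

#include <vector>

// Shared counters: every action transition stores an index into `counters`, so one
// update is visible to all transitions from the same state with the same action into
// the same constellation (cf. state_to_constellation_cnt_ptr).
struct Transition { int source; int counter_idx; };

// When the a-transitions of a state are split over B_C and C \ B_C, the `into_BC`
// visited ones get a fresh counter and the old shared counter is lowered to the
// residual count, without touching the transitions that stay behind.
int split_counter(std::vector<int>& counters, int old_idx, int into_BC) {
    int residual = counters[old_idx] - into_BC;      // transitions going to C \ B_C
    counters.push_back(into_BC);                     // new counter for transitions into B_C
    counters[old_idx] = residual;                    // old counter now counts C \ B_C only
    return static_cast<int>(counters.size()) - 1;    // index of the new counter
}

int main() {
    std::vector<int> counters{3};                // state s has 3 a-transitions into C
    Transition t1{0, 0}, t2{0, 0}, t3{0, 0};
    int fresh = split_counter(counters, 0, 2);   // 2 of them turn out to go into B_C
    t1.counter_idx = fresh; t2.counter_idx = fresh;   // only the moved transitions are touched
    return (counters[t1.counter_idx] == 2 && counters[t3.counter_idx] == 1) ? 0 : 1;
}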

4.6. Complexity Analysis

The complexity of the algorithm is determined below. Recall that $n_a$ and $n_p$ are the number of action states and probabilistic states, respectively, while $m_a$ is the number of action transitions and $m_p$ is the cumulative size of the supports of the distributions.
Theorem 2.
The total time complexity of the algorithm is $O((m_a + m_p)\log n_p + (m_p + n_a)\log n_a)$ and the space complexity is $O(m_a + m_p + n_a)$.
Proof. 
In Algorithm 2, the cost of each computation step is indicated. The initialisation of the algorithm at Lines 2–5 is linear in $n_a$, $n_p$ and $m_a$. At Line 3, calculating $\{ S_A \mid A \subseteq \mathit{Act} \}$ can be done by iteratively splitting S using the outgoing transitions grouped per action label. This is linear in the number of action transitions. At Line 4, grouping the incoming transitions per action is also linear, as argued in Section 4.1.
The while loop at Line 6 is executed for each $B_C \subseteq C$ where $|B_C| \le \frac{1}{2}|C|$. As $B_C$ becomes a constellation itself, each state can only be part of this splitting step $\log_2(n_a)$ times and $\log_2(n_p)$ times, respectively. The steps in Lines 10–13 and Lines 16–18 require time proportional to the number of incoming action transitions and probabilistic transitions, respectively, of $B_C$, apart from a sorting penalty which we treat separately below. The cumulative complexity of this part is therefore $O(m_a \log n_p + m_p \log n_a)$.
At Lines 14 and 19, the states in $B'$ are moved to a new block. This requires grouping the incoming action transitions of block $B'$ per action, which can be done in time linear in the number of these transitions. Block $B'$ is not the largest of the sets into which B is split and therefore $|B'| \le \frac{1}{2}|B|$. Hence, each state can only be involved $\log_2(n_p)$ or $\log_2(n_a)$ times in the operation to move to a new block. Hence, the total time to be attributed to moving is $O((m_a + n_p)\log n_p + (m_p + n_a)\log n_a)$.
While marking, probabilistic states in mid p ( B ) need to be sorted. An ingenious argument by Valmari and Franceschinis [6] shows that this will at most contribute O m p log n p to the total complexity: Let K be the total number of times sorting takes place. Assume, for 1 i K , that the total number of distributions in mid p ( B ) when sorting it for the i-th time is k i . Clearly, k i n p . Each time a distribution in mid p ( B ) is involved in sorting, the number of reachable constellations with non-zero probability from this distribution is increased by one. Before sorting it could reach C, and after sorting it can reach both new constellations B C and C B C with non-zero probability. Note that this does not hold for the states in left p ( B ) and right p ( B ) , and this is the reason why we have to treat them separately. In particular, to obtain complexity O ( m p log n p ) , it is not allowed to involve the states in left p ( B ) and right p ( B ) in the sorting process as shown by an example in [6]. Due to the increased number of reachable constellations, the total number of times a probabilistic state can be involved in sorting is bounded by the size of the distribution. In other words, i = 1 K k i m p . Hence, the total time that is required by sorting is bounded as follows:
\[
  O\Bigl(\sum_{i=1}^{K} k_i \log k_i\Bigr) \;\subseteq\; O\Bigl(\sum_{i=1}^{K} k_i \log n_p\Bigr) \;\subseteq\; O(m_p \log n_p).
\]
Adding up the complexities leads to the conclusion that the total complexity of the algorithm is $O((m_a + m_p + n_p)\log n_p + (m_p + n_a)\log n_a)$. As $n_p \leq m_p$, the stated time complexity in the theorem follows.
The space complexity follows as all data structures are linear in the number of transitions and states. As $n_p \leq m_p$, this complexity can be stated as $O(m_a + m_p + n_a)$. ☐
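As an illustration of the initialisation step at Line 3 of Algorithm 2 used in the proof above, the following is a minimal C++ sketch, under the assumption of an illustrative representation in which per_label[a] lists the sources of all a-labelled transitions. It computes, in time linear in the number of action transitions (plus a one-off linear initialisation), the partition of action states by their sets of outgoing action labels; it is not the data structure used in the actual implementation.

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// Two states receive the same block id iff they are sources of transitions
// with exactly the same set of action labels.
std::vector<std::size_t> partition_by_labels(
    std::size_t n, const std::vector<std::vector<std::size_t>>& per_label) {
  std::vector<std::size_t> block(n, 0);          // all states start in block 0
  std::vector<std::size_t> block_size = {n};
  std::vector<std::size_t> seen(n, 0);           // de-duplication time stamps
  std::size_t stamp = 0;

  for (const auto& sources : per_label) {
    ++stamp;
    std::vector<std::size_t> states;                       // marked states
    std::unordered_map<std::size_t, std::size_t> marked;   // per-block counts
    for (std::size_t s : sources)
      if (seen[s] != stamp) { seen[s] = stamp; states.push_back(s); ++marked[block[s]]; }

    // A block is split only if it also contains unmarked states.
    std::unordered_map<std::size_t, std::size_t> new_id;
    for (const auto& [b, cnt] : marked)
      if (cnt < block_size[b]) { new_id[b] = block_size.size(); block_size.push_back(0); }

    for (std::size_t s : states) {
      auto it = new_id.find(block[s]);
      if (it == new_id.end()) continue;          // block was fully marked
      --block_size[block[s]];
      block[s] = it->second;
      ++block_size[block[s]];
    }
  }
  return block;
}
```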
Note that it is reasonable to assume that the number of probabilistic transitions $m_p$ is at least the number of action states minus one, i.e., $m_p \geq n_a - 1$, as otherwise there are unreachable action states. This allows formulating our complexity more compactly.
Corollary 1.
Algorithm 2 has time complexity $O((m_a + m_p)\log n_p + m_p\log n_a)$ and space complexity $O(m_a + m_p)$ if all action states are reachable.
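For completeness, the reachability argument behind Corollary 1 can be spelled out in symbols; the following is merely a restatement of the preceding note and of Theorem 2.

```latex
% If all action states are reachable, every action state except possibly the
% initial one is the target of at least one probabilistic transition, so
% n_a - 1 <= m_p; moreover n_p <= m_p, as every distribution has a non-empty
% support.  Substituting this into Theorem 2 gives
\[
  (m_p + n_a)\log n_a \;\le\; (2m_p + 1)\log n_a \;\in\; O(m_p \log n_a),
  \qquad
  O(m_a + m_p + n_a) \;=\; O(m_a + m_p),
\]
% which are exactly the time and space bounds stated in Corollary 1.
```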
The only other algorithm to determine probabilistic bisimilarity for PLTSs is by Baier, Engelen and Majster-Cederbaum [4]. The algorithm uses ordered balanced trees and is claimed to have a complexity of $O(mn(\log m + \log n))$, where $m$ is the number of transitions (including distributions) and $n$ the number of action states. For a fair comparison, we reconstructed their complexity in terms of $n_a$, $n_p$, $m_a$ and $m_p$. Their space complexity is $O(n_a n_p |Act|)$ and the time complexity is $O(m_a n_a \log n_a + n_a n_p \log n_p + n_a^2 n_p)$. The last part, $n_a^2 n_p$, is not mentioned in the analysis in [4]. It is due to taking into account the time for "inserting $Pre(\alpha, \mu_i)$ into $v.states$" (see page 208 of [4]) for the version of ordered balanced trees used, and we believe it was overlooked [26].
This complexity is not easily comparable to ours. We make two reasonable assumptions to facilitate comparison. The first assumption is that the number of action transitions is equal to the number of distributions: $m_a = n_p$. The second assumption is that $\log n_p$ and $\log n_a$ differ only by a constant factor.
In the rare case that the support of the distributions is large, i.e., if all or nearly all action states have a positive probability in each distribution, then $m_p$ is equal or close to $n_a n_p$. In this case, our space complexity becomes $O(n_a n_p)$ and our time complexity $O(n_a n_p \log n_p)$, which is comparable, mutatis mutandis, to the complexity in [4]. However, in the more common case where the support of the distributions is bounded by some constant $c$, i.e., $m_p \leq c\,n_p$, the space and time complexities simplify to those in Table 2.
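Spelled out, the entries for our algorithm in Table 2 follow by substituting these assumptions (together with reachability of all action states) into Corollary 1, and those for BEM by substituting them into the reconstructed bounds of [4]; the following sketch of the calculation is included for convenience.

```latex
% Assumptions: m_a = n_p, m_p <= c * n_p for a constant c, and
% log n_p = Theta(log n_a).
\begin{align*}
  \text{GRV time:}  \quad & O\bigl((m_a + m_p)\log n_p + m_p\log n_a\bigr)
      \subseteq O\bigl((n_p + 2c\,n_p)\log n_a\bigr) = O(n_p\log n_a),\\
  \text{GRV space:} \quad & O(m_a + m_p) \subseteq O\bigl((1 + c)\,n_p\bigr) = O(n_p),\\
  \text{BEM time:}  \quad & O\bigl(m_a n_a\log n_a + n_a n_p\log n_p + n_a^2 n_p\bigr)
      = O\bigl(n_a n_p\log n_a + n_a^2 n_p\bigr).
\end{align*}
```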
In the table, the underlined part stems from the extra time needed for insertions. It is clear that, if the assumptions mentioned are satisfied, the complexity of the present algorithm compares favourably. This is confirmed in the next section, where we report on the performance of implementations of both algorithms on a number of benchmarks.

5. Benchmarks

Both our algorithm, below referred to as GRV, and the reference algorithm by Baier, Engelen and Majster-Cederbaum [4], for which we use the abbreviation BEM, have been implemented in C++ as part of the mCRL2 toolset [9,10] (www.mcrl2.org). This toolset is available under the Boost license, which means that the source code is open and can be inspected or used without restriction. In the implementation of BEM, some of the operations are not carried out exactly as prescribed in [4], for reasons of practicality.
We have extensively tested the correctness of the implementation of the new algorithm by applying it to millions of randomly generated PLTSs and comparing the results to those of the implementation of the BEM algorithm. This is done not because we doubt the correctness of the algorithm, but because we want to be sure that all details of our implementation are right.
We experimentally compared the performance of both implementations. All experiments have been performed on a relatively dated machine running Fedora 12 with Intel Xeon E5520 2.27 GHz CPUs and 1 TB of RAM. For the probabilities, exact rational arithmetic is used, which is much more time consuming than floating-point arithmetic. The reported runtimes do not include the time to read the input PLTS and write the output.
Our first experimental question regards the growth of the practical complexity of the BEM and GRV algorithms when concrete probabilistic transition systems grow in size. To get an impression of this, we considered the so-called "ant on a grid" puzzle published in the New York Times [27,28]. In this puzzle, an ant sits on a square grid. When it reaches the leftmost or rightmost position of the grid, it dies. When it reaches the uppermost or lowermost position of the grid, it is free and lives happily ever after. On any remaining position, the ant chooses with equal probability to move to a neighbouring position on the grid. The question is what the probabilities are for the ant to die or to stay alive, given an initial position on the grid.
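As a side note, the exit probabilities of the puzzle itself can be approximated directly by fixed-point iteration on the underlying random walk. The following small C++ program is an illustrative sketch only, independent of mCRL2 and of the algorithms discussed here; the grid size and the number of iterations are chosen arbitrarily.

```cpp
#include <cstdio>
#include <vector>

// Approximates, for every cell of a max_x by max_y grid, the probability that
// the ant eventually dies (reaches the left or right border), using the
// fixed point of  p(x,y) = average of p over the four neighbours,
// with p = 1 on the left/right border and p = 0 on the top/bottom border.
int main() {
  const int max_x = 9, max_y = 9;              // illustrative grid size
  std::vector<std::vector<double>> p(max_x, std::vector<double>(max_y, 0.0));
  for (int y = 0; y < max_y; ++y) { p[0][y] = 1.0; p[max_x - 1][y] = 1.0; }

  for (int iter = 0; iter < 100000; ++iter)    // crude fixed-point iteration
    for (int x = 1; x + 1 < max_x; ++x)
      for (int y = 1; y + 1 < max_y; ++y)
        p[x][y] = 0.25 * (p[x - 1][y] + p[x + 1][y] + p[x][y - 1] + p[x][y + 1]);

  std::printf("P(die) from the centre: %f\n", p[max_x / 2][max_y / 2]);
  return 0;
}
```

For a square grid with the ant starting at the exact centre, the printed value approaches 0.5, as expected by symmetry.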
The specification in probabilistic mCRL2 of the ant-on-a-grid puzzle is given in Figure 4, where the dimensions of the grid are $max_x$ and $max_y$, and the initial position is given by $i_x$ and $i_y$.
The actions dead, live and step indicate that the ant is dead, stays alive, or makes a step, respectively. The process expression $p \cdot q$ stands for sequential composition and $p + q$ represents a choice in behaviour. The notations $c \to p$ and $c \to p \diamond q$ are the if-then and if-then-else of mCRL2. The curly equal sign (≈) in conditions stands for equality applied to data expressions. The expression dist d:Direction[1/4] means that each direction d is chosen with probability $\frac{1}{4}$. From this description, PLTSs are generated that are used as input for the probabilistic bisimulation reduction tools.
Figure 5 depicts the runtime results of a set of experiments in which the total number of states of the ant-on-a-grid model is increased. At the left are the results when running the BEM algorithm, whereas the results for the GRV algorithm are shown at the right. Note that the x-axis only depicts the number of action states. The figure indicates that the practical running times of both algorithms are largely in line with the theoretical complexity. This is in agreement with our findings on other examples as well. Furthermore, it should be noted that the difference in performance is dramatic. The largest example that our implementation of the BEM algorithm can handle within a timeout of 5 h requires approximately 10,000 s, compared to 2 s for GRV. This particular example concerns a PLTS with $6.4 \times 10^5$ action states. The graphs clearly indicate that the difference grows as the probabilistic transition systems get larger.
To further understand the practical usability of the GRV algorithm, we applied it to a number of benchmarks taken from the PRISM Benchmark Suite (www.prismmodelchecker.org/benchmarks/) and the mCRL2 toolset (www.mcrl2.org/). The tests taken from PRISM were first translated into mCRL2 code to generate the corresponding PLTSs.
Table 3 collects the results of the experiments conducted. The ant_N_M_grid examples refer to the ant-on-a-grid puzzle for an N by M grid with the ant initially placed at the approximate centre of the grid. The models airplane_N are instances of an airplane ticket problem with N seats. In the airplane ticket problem, N passengers enter a plane. The first passenger has lost his boarding pass and therefore takes a random seat. Each subsequent passenger takes his own seat unless it is already taken, in which case he randomly selects an empty seat as well. The intriguing question is to determine the probability that the last passenger gets his or her own seat (see [28] for a more detailed account).
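As an aside, the well-known answer to this puzzle, 1/2 for every number of seats, can be checked with a small Monte Carlo simulation. The following C++ sketch is purely illustrative and unrelated to the benchmark models themselves; the seat count, the number of runs and the seed are arbitrary.

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Estimates the probability that the last of N passengers ends up in his or
// her own seat in the lost-boarding-pass problem.
int main() {
  const int seats = 100, runs = 100000;
  std::mt19937 gen(42);
  std::uniform_int_distribution<int> any_seat(0, seats - 1);
  int last_gets_own = 0;

  for (int r = 0; r < runs; ++r) {
    std::vector<bool> taken(seats, false);
    taken[any_seat(gen)] = true;        // passenger 0 takes a random seat
    for (int pas = 1; pas < seats; ++pas) {
      int seat = pas;                   // try own seat first
      while (taken[seat]) seat = any_seat(gen);   // else a random free seat
      taken[seat] = true;
      if (pas == seats - 1 && seat == pas) ++last_gets_own;
    }
  }
  std::printf("estimate: %f\n", static_cast<double>(last_gets_own) / runs);
  return 0;
}
```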
The following three benchmarks stem from PRISM. The brp_N_MAX models are instances of the bounded retransmission protocol transmitting N packages and bounding the number of retransmissions to MAX. The self_stab_N and shared_coin_N_K models are extensions of the self-stabilisation protocol and the shared coin protocol, respectively. For the self-stabilisation protocol, N processes are involved, each initially holding a token. The shared coin protocol is modelled with N processes, with the threshold to decide head or tail set to K.
Finally, the random_N tests are randomly generated PLTSs with N action states. All the models are available in the mCRL2 toolset.
At the left of Table 3, the characteristics of each PLTS are given: the number of action states ($n_a$), the number of action transitions ($m_a$), the number of distributions ($n_p$), and the cumulative support of the distributions ($m_p$). The symbol "K" denotes a factor of 1000. The same characteristics are also provided for the minimised PLTS. Furthermore, the runtime in seconds for minimising the probabilistic transition system as well as the required memory in megabytes are given for both algorithms. As mentioned earlier, we limited the runtime to 5 h.
The experiments show that the GRV algorithm outperforms the reference algorithm quite substantially in all studied cases. In the case of random_100, the difference is four orders of magnitude, even though this state space has only 100 K action states. The second-to-last column of Table 3 lists the relative speed-up, i.e., the quotient of the time needed by BEM over the time needed by GRV, where applicable. Memory usage is comparable for both algorithms on small cases, whereas for larger examples the BEM algorithm requires up to one order of magnitude more memory than the GRV algorithm. The right-most column of Table 3 contains the relative memory efficiency, i.e., the quotient of the memory used by BEM over the memory used by GRV, for the cases where BEM terminated before the deadline.

6. Concluding Remarks

We believe we have formulated a very efficient algorithm to determine probabilistic bisimulation. As the algorithm restricts the handling of distributions to the states in their supports, its running time compares favourably when the fan-out of the PLTS under consideration is low, a situation that occurs frequently in practice.
Apart from deciding strong probabilistic bisimilarity, our algorithm is instrumental in the mCRL2 toolset for minimising PLTSs modulo probabilistic bisimulation. Such a reduction can be useful as a preprocessing step before applying other forms of analysis to the PLTS. Occasionally, minimisation can even simplify PLTSs to the point that they become suitable for visual inspection. See, for example, the discussion of the airplane ticket problem, also known as the problem of the lost boarding pass, in [28]. However, having smaller state spaces is beneficial regardless, as it reduces the processing time for other tools further down the analysis chain.
To fine-tune the algorithm, it will be interesting in future work to investigate how to choose the non-trivial constellations $C$ and their sub-blocks $B_C$ optimally; this choice is currently non-deterministic. Furthermore, it is interesting to extend the algorithm to probabilistic bisimulation with combined transitions [29], as this appears to be required for weaker notions of equivalence [21], such as probabilistic branching bisimulation.

Author Contributions

Conceptualisation, J.F.G.; Software and benchmarks, J.R.V.; Formal Analysis, J.F.G. and E.V.; Original Draft Preparation, J.F.G.; and Writing, Review and Editing, E.V., J.F.G.

Funding

This research received no external funding.

Acknowledgments

We thank Rodin Aarssen, Olav Bunte, and Thomas Neele for their helpful comments on a final draft of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Larsen, K.G.; Skou, A. Bisimulation through probabilistic testing. Inf. Comput. 1991, 94, 1–28.
2. Segala, R. Modeling and Verification of Randomized Distributed Real-Time Systems. Ph.D. Thesis, Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA, 1995.
3. Segala, R.; Lynch, N. Probabilistic Simulations for Probabilistic Processes. In CONCUR '94: Concurrency Theory; Jonsson, B., Parrow, J., Eds.; Springer: Berlin, Germany, 1994; Volume 836, pp. 481–496.
4. Baier, C.; Engelen, B.; Majster-Cederbaum, M.E. Deciding bisimilarity and similarity for probabilistic processes. J. Comput. Syst. Sci. 2000, 60, 187–231.
5. Paige, R.; Tarjan, R.E. Three partition refinement algorithms. SIAM J. Comput. 1987, 16, 973–989.
6. Valmari, A.; Franceschinis, G. Simple O(m log n) time Markov chain lumping. In Proceedings of the 16th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Paphos, Cyprus, 20–28 March 2010; pp. 38–52.
7. Derisavi, S.; Hermanns, H.; Sanders, W.H. Optimal state-space lumping in Markov chains. Inf. Process. Lett. 2003, 87, 309–315.
8. Kwiatkowska, M.; Norman, G.; Parker, D. Stochastic model checking. In Proceedings of the 7th International Conference on Formal Methods for Performance Evaluation, Bertinoro, Italy, 28 May–2 June 2007; pp. 220–270.
9. Groote, J.F.; Mousavi, M.R. Modeling and Analysis of Communicating Systems; The MIT Press: Cambridge, MA, USA, 2014.
10. Cranen, S.; Groote, J.F.; Keiren, J.J.A.; Stappers, F.P.M.; de Vink, E.P.; Wesselink, J.W.; Willemse, T.A.C. An Overview of the mCRL2 Toolset and Its Recent Advances. In Tools and Algorithms for the Construction and Analysis of Systems; Piterman, N., Smolka, S.A., Eds.; Springer: Berlin, Germany, 2013; Volume 7795, pp. 199–213.
11. Hansson, H.A.; Jonsson, B. A logic for reasoning about time and reliability. Formal Aspects Comput. 1994, 6, 512–535.
12. Katoen, J.-P.; Kemna, T.; Zapreev, I.; Jansen, D.N. Bisimulation Minimisation Mostly Speeds Up Probabilistic Model Checking. In Tools and Algorithms for the Construction and Analysis of Systems; Grumberg, O., Huth, M., Eds.; Springer: Berlin, Germany, 2007; Volume 4424, pp. 87–101.
13. Dehnert, C.; Katoen, J.-P.; Parker, D. SMT-Based Bisimulation Minimisation of Markov Models. In Verification, Model Checking, and Abstract Interpretation; Giacobazzi, R., Berdine, J., Mastroeni, I., Eds.; Springer: Berlin, Germany, 2013; Volume 7737, pp. 28–47.
14. Song, L.; Zhang, L.; Hermanns, H.; Godskesen, J.C. Incremental bisimulation abstraction refinement. ACM Trans. Embedded Comput. Syst. 2014, 13, 1–23.
15. Hillston, J.; Marin, A.; Rossi, S.; Piazza, C. Contextual lumpability. In Proceedings of the 7th International Conference on Performance Evaluation Methodologies and Tools, Torino, Italy, 10–12 December 2013; pp. 194–203.
16. Crafa, S.; Ranzato, F. Bisimulation and simulation algorithms on probabilistic transition systems by abstract interpretation. Formal Methods Syst. Des. 2012, 40, 356–376.
17. Dorsch, U.; Milius, S.; Schröder, L.; Wissmann, T. Efficient coalgebraic partition refinement. arXiv 2017, arXiv:1705.08362.
18. Zhang, L.; Hermanns, H.; Eisenbrand, F.; Jansen, D.N. Flow faster: Efficient decision algorithms for probabilistic simulations. Logical Methods Comput. Sci. 2008, 4, 1–43.
19. Zhang, L.; Jansen, D.N. A space-efficient simulation algorithm on probabilistic automata. Inf. Comput. 2016, 249, 138–159.
20. Cattani, S.; Segala, R. Decision Algorithms for Probabilistic Bisimulation. In CONCUR 2002 — Concurrency Theory; Brim, L., Křetínský, M., Kučera, A., Jančar, P., Eds.; Springer: Berlin, Germany, 2002; Volume 2421, pp. 371–386.
21. Turrini, A.; Hermanns, H. Polynomial time decision algorithms for probabilistic automata. Inf. Comput. 2015, 244, 134–171.
22. Hennessy, M. Exploring probabilistic bisimulations, part I. Formal Aspects Comput. 2012, 24, 749–768.
23. Kanellakis, P.C.; Smolka, S.A. CCS expressions, finite state processes, and three problems of equivalence. Inf. Comput. 1990, 86, 43–68.
24. Groote, J.F.; Jansen, D.N.; Keiren, J.J.A.; Wijs, A.J. An O(m log n) algorithm for computing stuttering equivalence and branching bisimulation. ACM Trans. Comput. Logic 2017, 18, 1–34.
25. Valmari, A. Simple bisimilarity minimization in O(m log n) time. Fundam. Inf. 2010, 105, 319–339.
26. Baier, C.; Technische Universität Dresden, Dresden, Germany. Personal communication, 2018.
27. Antonick, G. Ant on a Grid. Available online: http://wordplay.blogs.nytimes.com//2013/08/12/ants-2/ (accessed on 12 August 2013).
28. Groote, J.F.; de Vink, E.P. Problem Solving Using Process Algebra Considered Insightful. In ModelEd, TestEd, TrustEd; Lecture Notes in Computer Science; Katoen, J.-P., Langerak, R., Rensink, A., Eds.; Springer: Cham, Switzerland, 2017; Volume 10500, pp. 48–63.
29. Bandini, E.; Segala, R. Axiomatizations for probabilistic bisimulation. In Proceedings of the 28th International Colloquium on Automata, Languages and Programming (ICALP 2001), Crete, Greece, 8–12 July 2001; pp. 370–381.
Figure 1. Two probabilistically bisimilar nondeterministic transition systems.
Figure 2. Splitting a non-stable block B into left, middle and right.
Figure 3. Transitions with state_to_constellation_cnt stored in a global array.
Figure 4. The specification of ant-on-a-grid in mCRL2.
Figure 5. Scaling of runtime results for the ant-on-a-grid puzzle.
Table 1. The generated partitions for the PLTS of Example 1.

Blocks of Action States            Blocks of Probabilistic States
S0 = {t1, t3, t4, t6, t7, r1–5}    U0 = {u1–6, v1–5}
S1 = {s1–4}                        U1 = {u1, u3, v1–5}
S2 = {t2, t5, t8, t9}              U2 = {u2, u4}
S3 = {t10}                         U3 = {u5, u6}
S4 = {s1, s2}                      U4 = {u5}
S5 = {s3, s4}                      U5 = {u6}
S6 = {s3}
S7 = {s4}
B C
0S0, S1, S2, S3U0S0S1S2S3U0
1S0, S1, S2, S3U1, U2, U3S0, S1S2S3U1U2U3
2S0, S1, S2, S3U1, U2, U3S0, S1, S2S3U1U2U3
3S0, S1, S2, S3U1, U2, U4, U5S0, S1, S2, S3U1U2U4U5
4S0, S2, S3, S4, S5U1, U2, U4, U5S0, S2, S3, S4S5U1, U2U4U5
5S0, S2, S3, S4, S5U1, U2, U4, U5S0, S2, S3, S4S5U1, U2, U4U5
6S0, S2, S3, S4, S6, S7U1, U2, U4, U5S0, S2, S3, S4S6S7U1, U2, U4, U5
7S0, S2, S3, S4, S6, S7U1, U2, U4, U5S0, S2, S3, S4, S6S7U1, U2, U4, U5
8S0, S2, S3, S4, S6, S7U1, U2, U4, U5S0, S2, S3, S4, S6, S7U1, U2, U4, U5
Table 2. Space and time complexity of the GRV algorithm and the BEM algorithm.

                    GRV (this article)      BEM [4]
Space complexity    $O(n_p)$                $O(n_a n_p |Act|)$
Time complexity     $O(n_p \log n_a)$       $O(n_a n_p \log n_a + \underline{n_a^2 n_p})$
Table 3. Runtime (in s) and memory use (in MB) results for the reference algorithm (BEM) and the GRV algorithm.
Model | n_a | m_a | n_p | m_p | min. n_a | min. m_a | min. n_p | min. m_p | time BEM | mem. BEM | time GRV | mem. GRV | Speed-up | Memory
shared_coin_2_514,09628,19212,89114,8019981995116314795.4553.570.0851.34681.04
brp_100_2015,00315,00310,80315,00385048504850412,70437.2782.510.0655.796211.48
self_stab_716,13056,46214,33757,346206093242836567214.0871.540.1766.35831.08
brp_100_4029,00329,00320,80329,00316,50416,50416,50424,704368.67176.340.2265.4116762.70
airplane_400031,99131,99015,99831,99123,99523,99415,99823,995491.75219.010.1566.8832783.27
ant_100_100_grid39,98439,984999739,988240524052404960818.5278.080.1461.851321.26
random_4040,00063,98154,12386,23129,61060,86445,09273,8611766.80546.210.50111.7035344.89
shared_coin_2_1053,736107K48,131551K197839552303293965.41107.850.3681.341821.33
self_stab_865,026260K65,537262K630632,960972119,442401.17294.440.69157.175811.87
brp_200_5072,00372,00351,60372,00341,00441,00441,00461,4041674.53814.890.38101.8644078.00
ant_200_100_grid79,98479,98419,99779,98848554855485419,408105.78164.550.2082.855291.99
airplane_1000079,99179,99039,99879,99159,99559,99439,99859,9952805.121073.630.37113.8375819.43
random_100100K160K134K215K73,607151K112K183K15,439.182975.381.14213.0813,53413.96
ant_200_200_grid160K160K39,997160K98059805980439,208760.17484.780.41124.8418543.88
shared_coin_2_20210K419K185K212K3938787545835859654.94440.181.19198.805502.21
self_stab_9262K1175K294K1180K19,172113K32,80665,6124015.532809.323.24598.1412394.70
shared_coin_3_2280K837K274K318K584117,347772210,0681781.64974.921.78294.6810013.31
ant_400_200_grid320K320K79,997320K19,70519,70519,70478,8083026.021703.800.88218.9534397.78
airplane_40000320K320K160K320K240K240K160K240K--1.34333.17--
random_400400K638K540K859K293K605K446K730K--5.11729.84--
shared_coin_2_30468K936K413K472K589811,795686387792427.551248.442.95389.138233.21
ant_400_400_grid640K640K160K640K39,60539,60539,604158K9917.646477.752.09397.23474516.31
airplane_100000800K800K400K800K600K600K400K600K--4.22755.948--
random_800800K1277K1079K1718K587K1210K893K1462K--11.981418.79--
brp_600_200846K846K604K846K483K483K483K724K--4.94789.05--
self_stab_101046K5232K1311K5242K58,026383K109K218K--16.532369.75--
shared_coin_2_601858K3716K1632K1866K11,77823,55513,70317,539--12.251416.92--
ant_800_800_grid2560K2560K640K2560K159K159K159K636K--8.271466.44--
shared_coin_3_53222K9665K2984K3437K14,08541,86318,63024,432--26.972809.88--
brp_1000_5003510K3510K2508K3510K2005K2005K2005K3007K--24.853122.14--
ant_1600_1600_grid10,240K10,240K2560K10,240K638K638K638K2553K--35.645743.74--
brp_2000_100014,020K14,020K10,016K14,020K8010K8010K8010K12,015K--115.9512,351.47--
brp_4000_200056,040K56,040K40,032K56,040K32,020K32,020K32,020K48,030K--652.7849,253.50--
