Algorithms
  • Article
  • Open Access

8 July 2014

Model Checking Properties on Reduced Trace Systems

Antonella Santone 1,* and Gigliola Vaglini 2,*
1 Dipartimento di Ingegneria, Università del Sannio, Benevento 82100, Italy
2 Dipartimento di Ingegneria della Informazione, Università di Pisa, Pisa 56122, Italy
* Authors to whom correspondence should be addressed.

Abstract

Temporal logic has become a well-established method for specifying the behavior of distributed systems. In this paper, we interpret a temporal logic over a partial order model, namely a trace system. The satisfaction of formulae is defined directly on traces by means of rewriting rules, so that the graph representation of the system can be avoided entirely; moreover, a method is presented that keeps the trace system finite even in the presence of infinite computations. To further reduce the complexity of model checking temporal logic formulae, an abstraction technique is applied to trace systems.

1. Introduction and Motivation

Linear time [1] and branching time [2] temporal logics are used for specifying and verifying concurrent and distributed systems: partial order models (trace systems are an example) are mostly used to give semantics to linear time logics, while interleaving models (such as transition systems) are widely used for branching time logics. To express properties inherent in concurrency, i.e., properties distinguishing concurrency from nondeterminism, a partial order interpretation of the logic fits better. This interpretation also allows a natural definition of fairness properties such as, for example, inevitability under the fairness assumption [3,4]: “in all computations the event a eventually occurs”.
Model checking is one of the main methods for the automated verification of concurrent systems [5]; it consists of checking whether a structure representing the system is a model for a logic formula. Model checking very large concurrent systems may incur the so-called “state explosion problem”, i.e., an unmanageably large number of states in the structure. A variety of methods for mitigating the state explosion problem have been developed [6,7,8,9,10]; in the context of branching time logics, in [11,12], the authors and others proposed a logic, called selective mu-calculus, equi-expressive to the mu-calculus [13], but such that each formula directly characterizes an abstraction of the system that maintains the truth value of the formula itself. Different action logics, such as [14], whose operators could also be used in a linear fashion to concisely express fairness properties, are not suitable for identifying system abstractions preserving the truth values of formulae. A further problem is the model checking of infinite representations of systems: for example, in trace systems, recursive behaviors are usually represented by means of an infinite set of finite traces (see [15]; solutions for branching time logic are in [16]). Finally, it is known that, while many interesting correctness properties, such as mutual exclusion and the absence of starvation, can be elegantly expressed by linear time formulae, the model checking of a linear time logic and that of a branching time logic [5,17,18] have different complexities. For example, given a transition system of size n and an alternation-free temporal logic formula of size m, model checking algorithms for the branching time logic (CTL) run in time O(n·m), while those for the linear time logic (LTL) run in time O(n·2^m). This result holds also in the case of generalized model checking [19].
In this work, we give a non-interleaving interpretation of selective mu-calculus formulae using the simplest and best known partial order model for computations, that is, Mazurkiewicz's trace systems [3,15,20]. This model allows a compact representation of the system computations using a single element, called a trace, to represent an equivalence class of sequences of events with respect to a dependence relation. A similar approach has been carried out for the logic CTL in [21]. More precisely, the author defines an extension of CTL with past modalities, called CTL_P, interpreted over Mazurkiewicz's trace systems. The author's aim is to obtain the model checking of properties described in this logic with a linear complexity, as is that of CTL on transition systems; on the contrary, he proved that model checking for CTL_P on traces is NP-hard, even if past modalities cannot be nested. Differently from [21], the aim of this article is to check the satisfaction of our formulae directly on traces, i.e., we assume a sequential memory representation for traces and no graph representation of the system, but only a representation of the dependencies. Moreover, we employ a trace abstraction, induced by the selective formulae, and we simplify the traces by discarding, at each verification step, the events that are no longer of interest for the subsequent verification. Furthermore, to avoid the management of infinite sets of traces, we use partial traces containing holes to give the semantics of recursive behaviors: the holes are expanded step by step, until the verification of the formula can be decided, so that we always manage a finite set of traces. The checking method can be easily implemented by rewriting functions that transform each trace with a complexity that is polynomial in its size.
In Section 2, concurrent systems are defined by means of a simple event-based specification language, taken as an example, whose semantics is a trace system. In Section 3, the syntax of the selective mu-calculus is recalled, and the satisfaction of the formulae is defined on the corresponding trace system. In the subsequent section, it is shown how each trace can be abstracted with respect to a particular formula while maintaining its truth value. The last section contains conclusions and comparisons of the presented approach with some related works.

2. Event Language

This section presents a very simple, event-based language that is not a real language, but a toy one; nevertheless, it contains the basic features needed to represent the behaviors of concurrent systems. The language is actually very similar to some of the most common process algebras. The semantics of the language is given in terms of trace systems.

2.1. Syntax of Expressions

Expressions are obtained by composing a finite set A = {a, b, …} of symbols, called the alphabet, by means of a set of operators. Each expression represents a possible behavior of the system, while the occurrence of a symbol in an expression represents the occurrence of an event of the system. The syntax to build up expressions is the following:
e ::= nil | a | e.e | e + e | e ∥ e | rec(e)
where a ranges over A. The language allows the definition of:
  • the empty expression (the operator n i l );
  • the concatenation of two expressions (the operator “.”); for example, e 1 . e 2 , with e 1 = a and e 2 = b , is the expression a . b ;
  • the choice between two expressions (the operator “+”); for example, e 1 + e 2 , with e 1 = a . c and e 2 = b , is the expression a . c + b ;
  • the parallel composition of two expressions (the operator “∥”), where the events in each expression can occur independently, except the events with the same name, which cause the synchronization of the two concurrent expressions; for example, e₁ ∥ e₂, with e₁ = a.c and e₂ = b, is the expression a.c ∥ b;
  • the unbounded iteration of an expression (the operator “ r e c ”).
We require that the following rules hold for the expressions:
1. e.nil = e    2. nil ∥ e = e    3. rec(nil) = nil    4. e ∥ e = e    5. e + e = e
while e + nil = e does not hold; the trace semantics of the language is given by Definition 2 in the next section, and it guarantees these rules.
For each alphabet A, A * is the set of all finite sequences (strings) of symbols in A; for each string σ, a l p h ( σ ) (the alphabet of σ) is the set of all symbols occurring in σ; this definition can be easily extended over expressions. We denote the set of all expressions by E . In the following section, the semantics of the language is formally given.

2.2. Trace Semantics of Expressions

In this section, we define the trace semantics of expressions of the language.
The first step is the definition of the notion of dependence between events; the relation we use is taken from Mazurkiewicz’s trace theory [3,15].
Dependence. A dependence relation (dependence, for short) over a finite alphabet A is any reflexive, symmetric relation D ⊆ A × A. A dependence D over the alphabet A determines the symmetric and irreflexive relation I_D ⊆ A × A, called the independence relation determined by D and defined as I_D = (A × A) ∖ D.
The ordered triple (A*, ., ϵ), where ϵ is the empty string and . is the concatenation operation on strings, is the standard string monoid over A. Usually, the sign . for concatenation is omitted. Let A be an alphabet and σ ∈ A*; then, for any alphabet B, we denote by Π_B(σ) the string projection of σ onto B, i.e., the string over A ∩ B obtained from σ by deleting all symbols not belonging to B. If A ∩ B = ∅, then Π_B(σ) = ϵ for any σ. A concurrent alphabet is any ordered pair Σ = (A, D), where A is a finite set of symbols, also called the alphabet of Σ, and D is a dependence over A. Given Σ = (A, D), the trace equivalence for Σ is the least congruence ≡_Σ in the string monoid over A such that, for all a, b:
(a, b) ∈ I_D ⟹ ab ≡_Σ ba
In other words, σ ≡_Σ σ′ holds if there is a finite sequence of strings σ₀, σ₁, …, σₙ, n ≥ 0, such that σ₀ = σ, σₙ = σ′ and, for each i, 1 ≤ i ≤ n, σ_{i−1} = δ₁abδ₂ and σᵢ = δ₁baδ₂, for some (a, b) ∈ I_D and δ₁, δ₂ ∈ A*. Equivalence classes of ≡_Σ are called traces over Σ; given a string σ ∈ A*, [σ]_Σ is the equivalence class of σ over Σ. When clear from the context, we omit Σ. The set Θ(Σ) = [A*]_Σ is the set of all traces over Σ. The concatenation of traces, denoted by [σ₁][σ₂], is defined as [σ₁σ₂]. For any alphabet B, it holds that Π_B([σ]) = [Π_B(σ)] and alph([σ]) = alph(σ).
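To make the equivalence concrete, the following Python sketch (function names are ours, purely for illustration) decides whether two strings are trace equivalent for a given concurrent alphabet by exhaustively exploring adjacent swaps of independent symbols; it is a brute-force illustration of the definition, not an efficient procedure.

```python
from collections import deque

def trace_equivalent(s1, s2, dependence):
    """Decide s1 ≡_Σ s2 by exploring adjacent swaps of independent symbols.

    `dependence` is a set of ordered pairs (a, b); two symbols are independent
    exactly when neither (a, b) nor (b, a) belongs to it.
    """
    def independent(a, b):
        return (a, b) not in dependence and (b, a) not in dependence

    if sorted(s1) != sorted(s2):      # equivalent strings are permutations of each other
        return False
    seen, frontier = {s1}, deque([s1])
    while frontier:
        s = frontier.popleft()
        if s == s2:
            return True
        for i in range(len(s) - 1):   # try every adjacent swap allowed by I_D
            if independent(s[i], s[i + 1]):
                t = s[:i] + s[i + 1] + s[i] + s[i + 2:]
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return False

# With D = {(a,a),(b,b),(c,c),(a,b),(b,a)}: "cab" ≡ "acb" (a and c are independent),
# while "cab" and "cba" are not equivalent (a and b are dependent).
D = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("b", "a")}
assert trace_equivalent("cab", "acb", D)
assert not trace_equivalent("cab", "cba", D)
```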
Dependence Closure. Given two dependencies, D₁ and D₂, defined on the alphabets A₁ and A₂, respectively, we call dependence closure the derived dependence on the alphabet A₁ ∪ A₂:
D₁ ⊕ D₂ = D₁ ∪ D₂ ∪ {(a₁, a₂), (a₂, a₁) | a₁ ∈ A₁, a₂ ∈ A₂}
Trace System. A trace system TS is any ordered pair (Σ, T), where Σ = (A, D) is a concurrent alphabet and T ⊆ Θ(Σ) is a trace language over Σ.
To manage only finite sets of traces even in the presence of recursive behaviors, we extend the alphabet A with a set of special symbols called holes: a hole represents the fact that the trace is incomplete and can be expanded; we also call this kind of trace a partial trace. Each hole has the form ⟨x⟩, where x is the name of a trace language; in fact, any trace belonging to the language x can be used to fill the hole, giving rise to another partial trace.
Some operations can be performed on trace systems.
  • Given the trace system TS = ((A, D), T),
    its unbounded iteration is the system:
    TS₁ = (((A, D), T))* = ((A ∪ {⟨T⟩}, D ⊕ D′), {[ϵ], [⟨T⟩]})
    where:
    D′ = {(⟨T⟩, ⟨T⟩)} ∪ {(a, ⟨T⟩), (⟨T⟩, a) | a ∈ A}
The following example gives a first hint of the effect of the unbounded iteration of trace systems.
Example 1. Consider the trace system TS₀ = ((A₀, D₀), T₀), with:
A₀ = {a}, D₀ = {(a, a)} and T₀ = {[a]}
(TS₀)* = (({a} ∪ {⟨T₀⟩}, {(a, a), (⟨T₀⟩, ⟨T₀⟩), (a, ⟨T₀⟩), (⟨T₀⟩, a)}), {[ϵ], [⟨T₀⟩]})
The actual traces of (TS₀)* are all of the partial traces that can be obtained by filling its hole by means of the traces of T₀: some examples are [a⟨T₀⟩], [aa⟨T₀⟩], [aaa⟨T₀⟩]; Definition 1 will show the formal way in which more complete traces can be obtained. Other operations on trace systems are the following ones.
  • Let TS₁ = (Σ₁, T₁) and TS₂ = (Σ₂, T₂), with Σ₁ = (A₁, D₁) and Σ₂ = (A₂, D₂), be two trace systems.
    Their concatenation is the system:
    TS₁.TS₂ = (Σ, T)
    where Σ = (A₁ ∪ A₂, D₁ ⊕ D₂) and T = {τ₁τ₂ | τ₁ ∈ T₁, τ₂ ∈ T₂};
    their nondeterministic composition is the system:
    TS₁ + TS₂ = (Σ, T)
    where Σ = (A₁ ∪ A₂, D₁ ∪ D₂) and
    T = {τ | τ ∈ T₁ or τ ∈ T₂};
    their parallel composition is the system:
    TS₁ ∥ TS₂ = (Σ, T)
    where Σ = (A₁ ∪ A₂, D₁ ∪ D₂) and
    T = {τ | Π_{A₁}(τ) ∈ T₁ and Π_{A₂}(τ) ∈ T₂}
Example 2. Consider the following trace systems:
  • TS₀ = ((A₀, D₀), T₀),
  • TS₁ = ((A₁, D₁), T₁), and
  • TS₂ = ((A₂, D₂), T₂), with
  • A₀ = {a}, D₀ = {(a, a)} and T₀ = {[a]};
  • A₁ = {a, b}, D₁ = {(a, a), (b, b), (a, b), (b, a)} and T₁ = {[ab]};
  • A₂ = {b, c}, D₂ = {(b, b), (c, c), (b, c), (c, b)} and T₂ = {[bc]}.
  • TS₀.TS₂ is the trace system TS₃ = ((A₃, D₃), T₃) with A₃ = {a, b, c}, D₃ = {(a, a), (b, b), (c, c), (a, b), (b, a), (a, c), (c, a), (b, c), (c, b)} and T₃ = {[abc]}.
  • TS₀ + TS₂ is the trace system TS₆ = ((A₆, D₆), T₆) with A₆ = {a, b, c}, D₆ = {(a, a), (b, b), (c, c), (b, c), (c, b)} and T₆ = {[a], [bc]}.
  • TS₁ ∥ TS₂ is the trace system TS₄ = ((A₄, D₄), T₄) with A₄ = {a, b, c}, D₄ = {(a, a), (b, b), (c, c), (a, b), (b, a), (b, c), (c, b)} and T₄ = {[abc]}.
  • TS₀ ∥ TS₂ is the trace system TS₅ = ((A₅, D₅), T₅) with A₅ = {a, b, c}, D₅ = {(a, a), (b, b), (c, c), (b, c), (c, b)} and T₅ = {[abc]}.
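The way the dependencies are combined in the operations above can be made concrete with a small Python sketch (the function names are ours, chosen for illustration): concatenation and unbounded iteration use the dependence closure, while nondeterministic and parallel composition simply take the union of the two dependencies.

```python
def closure(A1, D1, A2, D2):
    """Dependence closure D1 ⊕ D2: every symbol of A1 becomes dependent on every symbol of A2."""
    return D1 | D2 | {(x, y) for x in A1 for y in A2} | {(y, x) for x in A1 for y in A2}

def concat_alphabet(A1, D1, A2, D2):
    return A1 | A2, closure(A1, D1, A2, D2)   # concurrent alphabet of TS1.TS2

def union_alphabet(A1, D1, A2, D2):
    return A1 | A2, D1 | D2                   # concurrent alphabet of TS1 + TS2 and TS1 ∥ TS2

# Reproducing Example 2: TS0.TS2 yields the fully dependent alphabet D3,
# while TS0 + TS2 (and TS0 ∥ TS2) keep a independent of b and c.
A0, D0 = {"a"}, {("a", "a")}
A2, D2 = {"b", "c"}, {("b", "b"), ("c", "c"), ("b", "c"), ("c", "b")}
A3, D3 = concat_alphabet(A0, D0, A2, D2)
A6, D6 = union_alphabet(A0, D0, A2, D2)
assert ("a", "b") in D3 and ("a", "b") not in D6
```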
The expansion of the partial traces is obtained through the following definition: a completion step is performed by prefixing each hole ⟨T⟩ in the partial trace with an element of the trace language T; the same occurs for the holes possibly contained in T. Each expansion of the partial trace maintains the capability of a further expansion for each hole.
Definition 1 (one-unfolding). Consider TS = ((A, D), T) and σ ∈ A*; all of the possible one-unfoldings of σ, denoted by σ̂, correspond to the set U(σ, S), obtained as follows from an initial value of S = ∅:
U(σ, S) = σ, if σ contains no hole or, for each hole ⟨x⟩ in σ, x ∈ S;
U(σ, S) = U(σ[σ₁⟨x⟩/⟨x⟩], S ∪ {x}) ∪ … ∪ U(σ[σₙ⟨x⟩/⟨x⟩], S ∪ {x}), if there is a hole ⟨x⟩ in σ with x ∉ S, where x = {σ₁, …, σₙ} and σ[σᵢ⟨x⟩/⟨x⟩], i ∈ [1..n], denotes σ with the hole ⟨x⟩ replaced by σᵢ⟨x⟩.
Following the previous definition, if x = {[dg], [df⟨y⟩]} and y = {[dc]}, then, for example, the trace [abc⟨x⟩d] may be transformed by filling the hole ⟨x⟩ with the first trace in the language x, so obtaining the partial trace [abcdg⟨x⟩d]. When using the second partial trace in x, we obtain the partial trace [abcdfdc⟨y⟩⟨x⟩d], since the hole ⟨y⟩ must also be filled once. The one-unfolding procedure of a string always terminates, after each hole in the initial trace (and each hole in the traces used to fill it) has been filled once.
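A one-unfolding step can be sketched in a few lines of Python; here a hole is represented as a pair ("hole", x) inside a list of symbols, and a dictionary maps each language name to the list of partial traces that may fill it (these representational choices are ours, made only to illustrate Definition 1).

```python
def one_unfoldings(sigma, languages, filled=frozenset()):
    """All one-unfoldings of the partial trace `sigma` (a list of symbols).

    A hole is the pair ("hole", x); `languages[x]` lists the partial traces
    that may fill it. Each hole is expanded once: the chosen filler is
    prefixed to the hole, so the hole itself survives for later expansions.
    """
    for i, sym in enumerate(sigma):
        if isinstance(sym, tuple) and sym[0] == "hole" and sym[1] not in filled:
            x = sym[1]
            results = set()
            for filler in languages[x]:
                expanded = sigma[:i] + list(filler) + sigma[i:]   # prefix the filler to the hole
                results |= one_unfoldings(expanded, languages, filled | {x})
            return results
    return {tuple(sigma)}

# With x = {[d g], [d f <y>]} and y = {[d c]}, the partial trace [a b c <x> d] unfolds
# to [a b c d g <x> d] and [a b c d f d c <y> <x> d], as in the example above.
langs = {"x": [["d", "g"], ["d", "f", ("hole", "y")]], "y": [["d", "c"]]}
print(one_unfoldings(["a", "b", "c", ("hole", "x"), "d"], langs))
```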
The semantics of an expression is the trace system built on the basis of the syntactic structure of each expression.
Definition 2 (Semantics). Given the expression e, its semantics is the trace system TS(e) = ((A, D), T) built as follows:
TS(nil) = ((∅, ∅), {[ϵ]})
TS(a) = (({a}, {(a, a)}), {[a]})
TS(e₁.e₂) = TS(e₁).TS(e₂)
TS(e₁ + e₂) = TS(e₁) + TS(e₂)
TS(e₁ ∥ e₂) = TS(e₁) ∥ TS(e₂)
TS(rec(e)) = (TS(e))*
We remark that the rule e + n i l = e does not hold; in fact, T S ( e + n i l ) always contains the empty trace, while T S ( e ) may not.
The following example clarifies the semantics of the r e c operator.
Example 3. Consider the expression:
e = rec(a.rec(d))
TS(rec(a.rec(d))) = (TS₀(a.rec(d)))*
TS₀(a.rec(d)) = TS₁(a).TS₂(rec(d))
TS₂(rec(d)) = (TS₃(d))*
If TS = ((A, D), T), we have:
A = A₀ ∪ {⟨T₀⟩}
D = D₀ ∪ {(⟨T₀⟩, ⟨T₀⟩)} ∪ {(a, ⟨T₀⟩), (⟨T₀⟩, a) | a ∈ A₀} = {(a, a), (d, d), (⟨T₀⟩, ⟨T₀⟩), (⟨T₃⟩, ⟨T₃⟩), (a, d), (d, a), (a, ⟨T₃⟩), (⟨T₃⟩, a), (a, ⟨T₀⟩), (⟨T₀⟩, a), (d, ⟨T₀⟩), (⟨T₀⟩, d), (⟨T₃⟩, ⟨T₀⟩), (⟨T₀⟩, ⟨T₃⟩), (d, ⟨T₃⟩), (⟨T₃⟩, d)}
T = {[ϵ], [⟨T₀⟩]}
where, for each 0 ≤ i ≤ 3, TSᵢ = ((Aᵢ, Dᵢ), Tᵢ) is defined as follows:
A₀ = {a, d, ⟨T₃⟩}, A₁ = {a}, A₂ = A₃ ∪ {⟨T₃⟩}, A₃ = {d}
D₀ = D₁ ⊕ D₂ = {(a, a), (d, d), (⟨T₃⟩, ⟨T₃⟩), (a, d), (d, a), (a, ⟨T₃⟩), (⟨T₃⟩, a), (d, ⟨T₃⟩), (⟨T₃⟩, d)}
D₁ = {(a, a)}
D₂ = D₃ ∪ {(⟨T₃⟩, ⟨T₃⟩), (⟨T₃⟩, d), (d, ⟨T₃⟩)}
D₃ = {(d, d)}
T₀ = {[a], [a⟨T₃⟩]}, T₁ = {[a]}, T₂ = {[ϵ], [⟨T₃⟩]}, T₃ = {[d]}
The strings below are the one-unfoldings of [⟨T₀⟩]:
a⟨T₀⟩ and ad⟨T₃⟩⟨T₀⟩

3. Selective Mu-Calculus

The selective mu-calculus is a temporal logic proposed by the authors in [11,12] and interpreted on transition systems as a branching time logic. The characteristic of that calculus is that the only actions relevant for checking a formula are those explicitly mentioned in it. We propose here a different interpretation that takes linear time into account; the following subsection recalls the syntax of the calculus (called LTSC, for short), while Section 3.2 defines the satisfaction of LTSC formulae on trace systems.

3.1. The Syntax of the Calculus

Here, slight simplifications are made to the syntax of the selective mu-calculus to avoid useless details. Consider the set A of events. The events a, b range over A, and S ⊆ A is a set of events with cardinality less than or equal to one. Moreover, Z belongs to a set of variable names. The calculus has the following syntax:
φ ::= tt | ff | Z | φ₁ ∨ φ₂ | φ₁ ∧ φ₂ | [a]_S φ | ⟨a⟩_S φ | νZ.φ | μZ.φ
A fixed point formula has the form μZ.φ (νZ.φ), where the fixed point operator μZ (νZ) binds the free occurrences of Z in φ. An occurrence of Z is free if it is not within the scope of any fixed point operator binding Z. A formula is closed if it contains no free variables. The formula μZ.φ is the least fixed point of the recursive equation Z = φ, while νZ.φ is the greatest one. In the following, we consider only closed and alternation-free mu-calculus formulae [22].
However, note that the syntax of the modal operators (and, accordingly, their meaning) can be easily extended as follows to manage sets of events, without affecting the remainder of the paper.
[{α₁, …, αₙ}]_{β₁, …, βₘ} φ = ([α₁]_{β₁} φ ∧ … ∧ [α₁]_{βₘ} φ) ∧ … ∧ ([αₙ]_{β₁} φ ∧ … ∧ [αₙ]_{βₘ} φ)
⟨{α₁, …, αₙ}⟩_{β₁, …, βₘ} φ = (⟨α₁⟩_{β₁} φ ∨ … ∨ ⟨α₁⟩_{βₘ} φ) ∨ … ∨ (⟨αₙ⟩_{β₁} φ ∨ … ∨ ⟨αₙ⟩_{βₘ} φ)
Example 4. Some examples of formulae are shown in the following.
φ₁ = νZ.⟨a⟩_∅ Z: “there exists a run in which the event a, preceded by any event, can always occur”.
φ₂ = [c]_{a} ⟨a⟩_∅ tt: “in any run where an event c, not preceded by the event a, occurs, the event a must always follow”.

3.2. The Satisfaction of the Formulae on Trace Systems

To define the formula satisfaction on trace systems, we will build a particular trace presentation; the behavior of the operators that produce this presentation is based on a rewriting rule, which transforms a trace into an equivalent one.
Definition 3 (Rewriting rule). Consider Σ = (A, D) and γbaγ′ ∈ A*.
γbaγ′ ⟶ γabγ′ if (b, a) ∈ I_D
The rule transforms a string into an equivalent one, since:
(b, a) ∈ I_D ⟹ γbaγ′ ≡_Σ γabγ′
The following function D uses the auxiliary function M, shown in Table 1. The function M, given a string β, marks all of the events in β. If A is an alphabet, we write Å for the set {a | a ∈ A and a is not a hole ⟨x⟩}; intuitively, Å is the set of all of the symbols of the alphabet A, except for the holes.
Table 1. Marking function.
Let A be an alphabet; consider a ∈ A and β ∈ A*.
M(a.β) = ā.M(β)    M(ϵ) = ϵ
Definition 4. Consider Σ = (A, D), σ ∈ A*, a ∈ Å, and S ⊆ Å.
D_{a,S}(σ) = [M(δ₂).δ₃] if:
(1) σ₁aσ₂ ≡_Σ σ, with Π_{{a}∪S}(σ₁) = ϵ; and
(2) δ₁δ₂aδ₃ ≡_Σ δ₁aδ₂δ₃ ≡_Σ σ₁aσ₂, such that
 (a) Π_{{a}∪S}(δ₁δ₂) = ϵ; and
 (b) no event in δ₁ can move forward (by means of the rewriting rule) to pass over a; and
 (c) no event in δ₃ can move backward (by means of the rewriting rule) to pass over a, apart from the event in S.
D_{a,S}(σ) = null, otherwise.
D_{a,S}(σ) manipulates traces by exploiting a simple algorithm that moves events according to the rewriting rule. The result of D_{a,S}(σ) is a trace without the events occurring before a in any run (the events in δ₁ do not matter for the subsequent verification), but including the events occurring in some run after a and in some run before a (these are the marked events in δ₂, the mark recording that they do not necessarily occur where they appear); it also includes (as the events in δ₃) the events that occur after a in any run and, if present, the event in S (such an event is required to occur after a in all runs of interest; this is guaranteed by the constraint expressed in Point (a) of Definition 4). The complexity of the trace manipulation is, in the worst case, polynomial in the number of elements of the trace itself, when, at each step, all of the events are moved up and down to verify whether they can occur before and/or after a; it is linear in the best case, when only the reading of all of the events of the trace is required, since the rewriting rule can never be applied. The following example shows, in more detail, the effect of D_{a,S} and of the marking function M.
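A simplified, purely illustrative implementation of D_{a,S} on string representatives is sketched below in Python. It decides whether an event can pass over a with a greedy, per-event check (the event must be independent of a and of every event standing between them), which is an approximation of the general rewriting-rule search rather than a faithful implementation of Definition 4; it does, however, reproduce the results of Example 5 below. Marked events are represented as (symbol, True) pairs.

```python
def independent(x, y, dependence):
    return (x, y) not in dependence and (y, x) not in dependence

def D(a, S, sigma, dependence):
    """Simplified sketch of D_{a,S} on a string representative `sigma`.

    Returns a list of (symbol, marked) pairs, or None (i.e., null) when no
    occurrence of `a` unpreceded by events of {a} ∪ S exists.
    """
    blocked = set(S) | {a}
    pos = next((i for i, x in enumerate(sigma)
                if x == a and not any(y in blocked for y in sigma[:i])), None)
    if pos is None:
        return None
    prefix, suffix = sigma[:pos], sigma[pos + 1:]
    result = []
    for i, x in enumerate(prefix):
        # x survives (marked, in delta_2) only if it can move forward past a;
        # otherwise it belongs to delta_1 and is discarded.
        if all(independent(x, y, dependence) for y in list(prefix[i + 1:]) + [a]):
            result.append((x, True))
    kept_suffix = []
    for j, x in enumerate(suffix):
        # an event after a that can move backward past a is marked as well,
        # unless it is the event in S, which stays (unmarked) in delta_3.
        if x not in S and all(independent(x, y, dependence) for y in [a] + list(suffix[:j])):
            result.append((x, True))
        else:
            kept_suffix.append((x, False))
    return result + kept_suffix

# With Sigma_2 = ({a,b,c}, {(a,a),(b,b),(c,c),(b,c),(c,b)}) as in Example 5:
D2 = {("a", "a"), ("b", "b"), ("c", "c"), ("b", "c"), ("c", "b")}
print(D("a", set(), "cab", D2))   # [('c', True), ('b', True)]  i.e. [c̄ b̄]
print(D("b", set(), "cab", D2))   # [('a', True)]                i.e. [ā]
```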
Example 5. Consider the two concurrent alphabets:
Σ₁ = ({a, b, c}, {(a, a), (b, b), (c, c), (a, b), (b, a)}) and:
Σ₂ = ({a, b, c}, {(a, a), (b, b), (c, c), (b, c), (c, b)}).
  • If S = ∅:
    D_{a,S}(cab) = [c̄b], for Σ₁, and
    D_{a,S}(cab) = [c̄b̄], for Σ₂, while
    D_{b,S}(cab) = [c̄], for Σ₁, and
    D_{b,S}(cab) = [ā], for Σ₂.
  • If S = {b}:
    D_{a,S}(cab) = [c̄b], for both Σ₁ and Σ₂, while
    D_{b,S}(cab) = [c̄], for Σ₁, and
    D_{b,S}(cab) = [ā], for Σ₂.
The formal satisfaction of a formula ψ by a trace [σ] is defined as follows. Note that we consider the event set X̃ = X ∪ X̄, where X̄ = {x̄ | x ∈ X}, as the alphabet for strings, since it is possible to obtain strings containing marked events. In Table 2, auxiliary cleaning functions are shown that either eliminate the marks from the events of a string (Cl₂), before the next verification step is performed, or eliminate the marked events altogether (Cl₁), as shown in the following Example 6. In fact, the marked events must not be considered when checking for the satisfaction of a formula ⟨a⟩_S ψ.
Table 2. Cleaning functions.
Let A be an alphabet; consider a, b ∈ A and β ∈ Ã*.
Cl₁(b, a.β) = a.Cl₁(b, β)
Cl₁(b, ā.β) = if (a = b) then Cl₁(b, β) else a.Cl₁(b, β)
Cl₁(b, ϵ) = ϵ
Cl₂(a.β) = a.Cl₂(β)
Cl₂(ā.β) = a.Cl₂(β)
Cl₂(ϵ) = ϵ
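On the same (symbol, marked) representation used in the previous sketch, the two cleaning functions are immediate; the small illustration below follows the reading of Table 2 in which Cl₁ deletes only the marked occurrences of its first argument and returns every other event unmarked.

```python
def cl1(b, sigma):
    """Cl1(b, ·): delete the marked occurrences of b; every other event is kept, unmarked."""
    return [(x, False) for (x, marked) in sigma if not (marked and x == b)]

def cl2(sigma):
    """Cl2: keep every event, erasing all of the marks."""
    return [(x, False) for (x, _) in sigma]

# On [c̄ b̄]: Cl2 restores cb, while Cl1(b, ·) deletes the marked b.
trace = [("c", True), ("b", True)]
print(cl2(trace))        # [('c', False), ('b', False)]
print(cl1("b", trace))   # [('c', False)]
```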
Definition 5. Consider Σ = (A, D), a ∈ A, S ⊆ A and σ ∈ Ã*; moreover, σ̂ denotes the set of all of the one-unfoldings of σ (Definition 1).
[σ] ⊨ tt
[σ] ⊭ ff
[σ] ⊨ ψ₁ ∨ ψ₂ iff [σ] ⊨ ψ₁ or [σ] ⊨ ψ₂
[σ] ⊨ ψ₁ ∧ ψ₂ iff [σ] ⊨ ψ₁ and [σ] ⊨ ψ₂
[σ] ⊨ [a]_S ψ iff, for each σ′ ∈ σ̂ such that D_{a,S}(Cl₂(σ′)) ≠ null, D_{a,S}(Cl₂(σ′)) ⊨ ψ
[σ] ⊨ ⟨a⟩_S ψ iff there exists σ′ ∈ σ̂ such that D_{a,S}(Cl₁(a, σ′)) ≠ null and D_{a,S}(Cl₁(a, σ′)) ⊨ ψ
[σ] ⊨ νZ.ψ iff [σ] ⊨ νZₙ.ψ for all natural numbers n
[σ] ⊨ μZ.ψ iff [σ] ⊨ μZₙ.ψ for some natural number n
where, for each n, νZₙ.φ and μZₙ.φ are defined as:
νZ₀.φ = tt    μZ₀.φ = ff
νZₙ₊₁.φ = φ[νZₙ.φ/Z]    μZₙ₊₁.φ = φ[μZₙ.φ/Z]
and the notation φ[ψ/Z] indicates the substitution of ψ for every free occurrence of the variable Z in φ.
Example 6. Reconsider the previous Example 5 and the formula:
φ₃ = [a]_∅ [c]_∅ ⟨b⟩_∅ tt
to be checked on the trace [cab]. It holds that:
  • D_{a,∅}(cab) = [c̄b], for Σ₁, while D_{a,∅}(cab) = [c̄b̄], for Σ₂.
  • Then, after the cleaning,
  • D_{c,∅}(cb) = [b̄], for Σ₁, and
  • D_{c,∅}(cb) = [b], for Σ₂.
  • Finally, again after the cleaning,
  • D_{b,∅}(ϵ) = null, for Σ₁, and
  • D_{b,∅}(b) = [ϵ], for Σ₂;
  • thus, [cab] ⊨ φ₃ for Σ₂, but [cab] ⊭ φ₃ for Σ₁; in fact, for Σ₁, the marked event b̄ does not occur after c in any run.
The satisfaction of an LTSC formula by a trace system is defined as follows:
Definition 6. Let T S = ( ( A , D ) , T ) be a trace system.
TS ⊨ tt
TS ⊭ ff
TS ⊨ ψ₁ ∨ ψ₂ iff TS ⊨ ψ₁ or TS ⊨ ψ₂
TS ⊨ ψ₁ ∧ ψ₂ iff TS ⊨ ψ₁ and TS ⊨ ψ₂
TS ⊨ [a]_S ψ iff, for each τ ∈ T, τ ⊨ [a]_S ψ
TS ⊨ ⟨a⟩_S ψ iff there exists τ ∈ T such that τ ⊨ ⟨a⟩_S ψ
TS ⊨ νZ.ψ iff TS ⊨ νZₙ.ψ for all natural numbers n
TS ⊨ μZ.ψ iff TS ⊨ μZₙ.ψ for some natural number n
Example 7. Now reconsider the expression of Example 3:
e = rec(a.rec(d))
and suppose we have to check on e the formula:
φ₄ = νZ.[d]_∅ ⟨a⟩_∅ Z
Since [ϵ] satisfies any formula of this form, e ⊨ νZₙ.[d]_∅⟨a⟩_∅Z, for each n, if [a⟨T₀⟩] and [ad⟨T₃⟩⟨T₀⟩] satisfy νZₙ.[d]_∅⟨a⟩_∅Z (a⟨T₀⟩ and ad⟨T₃⟩⟨T₀⟩ are the one-unfoldings of [⟨T₀⟩]).
Since D_{d,∅}(Cl₂(a⟨T₀⟩)) = null, we have to verify that:
D_{d,∅}(Cl₂(ad⟨T₃⟩⟨T₀⟩)) = [⟨T₃⟩⟨T₀⟩] ⊨ ⟨a⟩_∅ (νZₙ₋₁.[d]_∅⟨a⟩_∅Z)
Since D_{a,∅}(Cl₁(a, d⟨T₃⟩a⟨T₀⟩)) = [⟨T₀⟩], and:
[⟨T₀⟩] ⊨ νZₙ₋₁.[d]_∅⟨a⟩_∅Z
then:
[⟨T₀⟩] ⊨ νZₙ.[d]_∅⟨a⟩_∅Z, for each n

4. Transformation Rules to Obtain Abstract Trace Systems

In this section, we present a syntactic transformation algorithm which, given a set ρ of events and an expression e, transforms e into an expression e′ such that e and e′ satisfy the same set of LTSC formulae whose events occur in ρ. In general, the trace system corresponding to e′ is smaller than the one corresponding to e. Our aim is two-fold: given a formula, to find a suitable set ρ; and, given ρ, to eliminate from an expression all of the events not belonging to a suitable superset of ρ. A suitable ρ depending on a formula is the following.
Occurring Events: O(φ). Given an LTSC formula φ, O(φ) is the union of all of the events α and the sets S appearing in the modal operators ([α]_S ψ, ⟨α⟩_S ψ) occurring in φ.
Definition 7 (Transformation rule). Let Σ = (A, D) be a concurrent alphabet, ρ ⊆ A, and e an expression over A. We define T_ρ(e) as:
T_ρ(α) = nil, if α ∉ ρ; T_ρ(α) = α, if α ∈ ρ
T_ρ(e₁.e₂) = T_ρ(e₁).T_ρ(e₂)
T_ρ(e₁ + e₂) = T_ρ(e₁) + T_ρ(e₂)
T_ρ(e₁ ∥ e₂) = T_{ρ′}(e₁) ∥ T_{ρ′}(e₂), where ρ′ = ρ ∪ (alph(e₁) ∩ alph(e₂))
T_ρ(rec(e)) = rec(T_ρ(e))
The previous rule maintains in the traces the events belonging to a suitable superset of O(φ): in fact, besides the occurring events of the formula, it also maintains the communication events.
The complexity of the transformation operator T is linear in the length of the specification. This result, together with Definition 6 of the satisfaction on traces, further reduces the complexity of the model checking of temporal logic formulae.
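Definition 7 can be turned into a small syntactic transformer over the expression tree. The sketch below (Python dataclasses, with names of our choosing) follows the reading of Definition 7 given above for the parallel case and also applies the simplification rules of Section 2.1, so that e.nil, nil ∥ e and rec(nil) collapse as expected; it is an illustration, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Nil: pass
@dataclass
class Act: name: str
@dataclass
class Seq: left: object; right: object
@dataclass
class Choice: left: object; right: object
@dataclass
class Par: left: object; right: object
@dataclass
class Rec: body: object

def alph(e):
    if isinstance(e, Act): return {e.name}
    if isinstance(e, Nil): return set()
    if isinstance(e, Rec): return alph(e.body)
    return alph(e.left) | alph(e.right)

def simplify(e):
    """Rules of Section 2.1: e.nil = e, nil ∥ e = e, e ∥ e = e, e + e = e."""
    if isinstance(e, Seq) and isinstance(e.right, Nil):
        return e.left
    if isinstance(e, Par) and isinstance(e.left, Nil):
        return e.right
    if isinstance(e, (Par, Choice)) and e.left == e.right:
        return e.left
    return e

def transform(rho, e):
    """T_rho from Definition 7 (sketch)."""
    if isinstance(e, Nil):
        return Nil()
    if isinstance(e, Act):
        return Act(e.name) if e.name in rho else Nil()
    if isinstance(e, Seq):
        return simplify(Seq(transform(rho, e.left), transform(rho, e.right)))
    if isinstance(e, Choice):
        return simplify(Choice(transform(rho, e.left), transform(rho, e.right)))
    if isinstance(e, Par):
        rho2 = set(rho) | (alph(e.left) & alph(e.right))   # keep the communication events
        return simplify(Par(transform(rho2, e.left), transform(rho2, e.right)))
    if isinstance(e, Rec):
        body = transform(rho, e.body)
        return Nil() if isinstance(body, Nil) else Rec(body)   # rec(nil) = nil

# For example, with rho = {"a"}, the expression a.(b ∥ c) reduces to a:
e = Seq(Act("a"), Par(Act("b"), Act("c")))
assert transform({"a"}, e) == Act("a")
```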
The following theorem deals with the abstraction of a trace system induced by the formula φ and obtained in two steps: the first step syntactically reduces the event expression; the second one reduces the trace system, which is the semantics of the expression.
Now, we extend the definition of projection to trace systems.
Definition 8. Let TS = ((A, D), T) be a trace system and B ⊆ A;
Π_B(TS) = ((B ∪ {⟨x⟩ | ⟨x⟩ ∈ A}, Π_B(D)), Π_B(T))
where:
(1) Π_B(D) = {(a, b) | (a, b) ∈ D and a, b ∈ B};
(2) Π_B(T) = {Π_B(w) | w ∈ T} and, for each hole ⟨x⟩ ∈ A, the associated language becomes x = {Π_B(w) | w ∈ x}.
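Definition 8 amounts to filtering the dependence and projecting every trace (and every hole language) symbol by symbol; a minimal sketch, under the same representational assumptions as the previous snippets (holes survive the projection):

```python
def project_ts(B, A, D, T, languages):
    """Π_B over a trace system: restrict the dependence to B and project
    every trace and every hole language; holes themselves are kept."""
    def proj(word):
        return [s for s in word
                if (isinstance(s, tuple) and s[0] == "hole") or s in B]
    new_A = {s for s in A if s in B or (isinstance(s, tuple) and s[0] == "hole")}
    new_D = {(x, y) for (x, y) in D if x in B and y in B}
    new_T = [proj(w) for w in T]
    new_languages = {x: [proj(w) for w in ws] for x, ws in languages.items()}
    return new_A, new_D, new_T, new_languages
```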
Theorem 1. Consider an expression e and an LTSC formula ψ.
TS(e) ⊨ ψ if and only if Π_{O(ψ)}(TS(T_{O(ψ)}(e))) ⊨ ψ
Proof: see the Appendix.
Example 8. Consider the following expression:
e = rec(a.(b ∥ c) ∥ c.a.(b ∥ c))
and try to prove the properties:
φ₁ = νZ.⟨a⟩_∅ Z: “there exists a run in which the event a, preceded by any event, can always occur”.
φ₂ = [a]_{a} ff: “it is not possible to perform a if a has not occurred before”.
By our methodology, we have to check:
  • TS(e) ⊨ φ₁, and
  • TS(e) ⊨ φ₂
through the checking of (see Theorem 1):
  • Π_{ρ₁}(TS₁(T_{ρ₁}(e))) ⊨ φ₁, with ρ₁ = O(φ₁) = {a}
  • Π_{ρ₂}(TS₂(T_{ρ₂}(e))) ⊨ φ₂, with ρ₂ = O(φ₂) = {a}
The transformation rules applied over e with ρ₁ obtain:
  • T_{ρ₁}(e)
  • =    { applying Definition 7 and the rules set for the operators in Section 2.1 }
  • rec(a.c ∥ c)
While, using ρ₂, we similarly obtain: T_{ρ₂}(e) = rec(a.c ∥ c.a).
The trace language of TS₁(rec(a.c ∥ c)) is:
{[ϵ], [⟨T₃⟩]}, where T₃ = {[ac]} is the trace language of TS₃(a.c ∥ c).
The trace language of TS₂(rec(a.c ∥ c.a)) is:
{[ϵ], [⟨T₄⟩]}, with T₄ = {[aca]}, which is the trace language of TS₄(a.c ∥ c.a).
Finally, by applying Definition 6, we can prove that Π_{ρ₁}(TS₁) ⊨ φ₁ and Π_{ρ₂}(TS₂) ⊨ φ₂. In the first case, the trace [⟨T₃⟩] can be unfolded (the first one-unfolding is [ac⟨T₃⟩]) step by step and simplified for the successive step (the next one-unfolding of the simplified trace produces the trace [cac⟨T₃⟩]). We can see that, at each step n, the formula is verified on:
νZ₀.φ = tt    νZₙ₊₁.φ = φ[νZₙ.φ/Z]
Consequently, φ₁ = νZ.⟨a⟩_∅ Z holds on e. For φ₂ = [a]_{a} ff, both traces should verify the formula: it is easy to see that, for both [ϵ] and [aca⟨T₄⟩] (the one-unfolding of [⟨T₄⟩]), φ₂ holds.

Author Contributions

Antonella Santone and Gigliola Vaglini are both responsible for the concept of the paper, the results presented and the writing. Both authors have read and approved the final published manuscript.

Appendix

Proof of Theorem 1

We first give some technical lemmas.
Lemma 1. Let TS₁ = ((A₁, D₁), T₁) and TS₂ = ((A₂, D₂), T₂) be two trace systems, with A₁ ∩ A₂ = ∅ and ρ ⊆ A₁ ∪ A₂.
(1)
Π_ρ(TS₁.TS₂) = Π_ρ(TS₁).Π_ρ(TS₂)
(2)
Π_ρ(TS₁ + TS₂) = Π_ρ(TS₁) + Π_ρ(TS₂)
Proof. 
Item 1. Π_ρ(TS₁.TS₂)
= { by definition of the concatenation of trace systems }
  Π_ρ(((A₁ ∪ A₂, D₁ ⊕ D₂), {τ₁.τ₂ | τ₁ ∈ T₁, τ₂ ∈ T₂}))
= { by Definition 8 }
  ((ρ, Π_ρ(D₁ ⊕ D₂)), Π_ρ({τ₁.τ₂ | τ₁ ∈ T₁, τ₂ ∈ T₂}))
= { by the properties of projection over strings and over dependencies }
  ((ρ, Π_ρ(D₁) ⊕ Π_ρ(D₂)), {Π_ρ(τ₁) | τ₁ ∈ T₁}.{Π_ρ(τ₂) | τ₂ ∈ T₂})
= { by Definition 8 }
  Π_ρ(TS₁).Π_ρ(TS₂)
Item 2. This case can be proven in a similar way.
In the following, given a trace language T, by alph(T) we denote the set of all of the symbols occurring in the traces belonging to T.
Lemma 2. Let TS₁ = ((A₁, D₁), T₁) and TS₂ = ((A₂, D₂), T₂) be two trace systems, with ρ, ρ′ ⊆ A₁ ∪ A₂ and ρ ⊆ ρ′.
(1)
Π_ρ(TS₁ ∥ TS₂) = Π_ρ(Π_{ρ′}(TS₁) ∥ Π_{ρ′}(TS₂))
where ρ′ = ρ ∪ (alph(T₁) ∩ alph(T₂))
(2)
Π_ρ((TS₁)*) = (Π_ρ(TS₁))*
Proof.
Item 1. First, we prove, ad absurdum, that Π_ρ(T₁ ∥ T₂) ⊆ Π_ρ(Π_{ρ′}(T₁) ∥ Π_{ρ′}(T₂)).
Suppose that there exists τ ∈ Π_ρ(T₁ ∥ T₂) such that τ ∉ Π_ρ(Π_{ρ′}(T₁) ∥ Π_{ρ′}(T₂)); thus, τ = Π_ρ(w) for some w ∈ T₁ ∥ T₂.
τ ∉ Π_ρ(Π_{ρ′}(T₁) ∥ Π_{ρ′}(T₂))
implies { by Definition 8.(2) }
τ ∉ {Π_ρ(w′) | w′ ∈ Π_{ρ′}(T₁) ∥ Π_{ρ′}(T₂)}
implies { by definition of the parallel composition of trace languages }
τ ∉ {Π_ρ(w′) | w′ ∈ {w″ | Π_{A₁}(w″) ∈ Π_{ρ′}(T₁) and Π_{A₂}(w″) ∈ Π_{ρ′}(T₂)}}
implies, for each w′ with τ = Π_ρ(w′):
Π_{A₁}(w′) ∉ Π_{ρ′}(T₁) or Π_{A₂}(w′) ∉ Π_{ρ′}(T₂)
implies { by Definition 8.(2) }
Π_{A₁}(w′) ∉ {Π_{ρ′}(k) | k ∈ T₁} or Π_{A₂}(w′) ∉ {Π_{ρ′}(k) | k ∈ T₂}
implies:
there is no k ∈ T₁ with Π_{A₁}(w′) = Π_{ρ′}(k), or there is no k ∈ T₂ with Π_{A₂}(w′) = Π_{ρ′}(k)
which is absurd { since w ∈ T₁ ∥ T₂ and ρ′ = ρ ∪ (alph(T₁) ∩ alph(T₂)) }.
The case Π_ρ(Π_{ρ′}(T₁) ∥ Π_{ρ′}(T₂)) ⊆ Π_ρ(T₁ ∥ T₂) can be proven similarly. The thesis holds by the properties of projection over dependencies.
Item 2. This is similar.
Lemma 3. Let s be an expression over A and ρ ⊆ A.
Π_ρ(TS(s)) = Π_ρ(TS(T_ρ(s)))
Proof.
The proof is by induction on the structure of the term.
Base step. s = n i l : straightforward.
Inductive step. We denote TS(a) = (({a}, {(a, a)}), {[a]}).
s = a.s₁:
  • Π_ρ(TS(a.s₁))
  • = { by Definition 2 and Lemma 1.(1) }
  • Π_ρ(TS(a)).Π_ρ(TS(s₁))
  • = { by the inductive hypothesis and Lemma 1.(1) }
  • Π_ρ(TS(T_ρ(a)).TS(T_ρ(s₁)))
  • = { by Definition 2 }
  • Π_ρ(TS(T_ρ(a.s₁)))
All other cases can be proven in a similar way using Item 2 of Lemmas 1 and 2.
Lemma 4. Let ψ be a selective mu-calculus formula with O(ψ) = ρ, and let e be an expression.
TS(e) ⊨ ψ if and only if Π_ρ(TS(e)) ⊨ ψ
Proof.
The proof is by induction on the structure of the formula.
Base step. ψ = tt , ff : straightforward.
Inductive step.
ψ = ⟨α⟩_S ψ′: Suppose that TS(e) = ((A, D), T); then Π_ρ(TS(e)) = ((ρ, Π_ρ(D)), Π_ρ(T)).
  • TS(e) ⊨ ⟨α⟩_S ψ′
  • iff { by Definition 6 }
  • there exists [s] ∈ T such that [s] ⊨ ⟨α⟩_S ψ′
  • iff { by Definition 5 }
  • there exist [s] ∈ T and σ ∈ ŝ such that D_{α,S}(Cl₁(α, σ)) ≠ null and D_{α,S}(Cl₁(α, σ)) ⊨ ψ′
  • iff { since D_{α,S}(Cl₁(α, σ)) = D_{α,S}(Cl₁(α, Π_ρ(σ))) and Π_ρ([σ]) ∈ Π_ρ(T) }
  • there exist Π_ρ([s]) ∈ Π_ρ(T) and Π_ρ(σ) ∈ Π_ρ(ŝ) such that D_{α,S}(Cl₁(α, Π_ρ(σ))) ≠ null and D_{α,S}(Cl₁(α, Π_ρ(σ))) ⊨ ψ′
  • iff { by Definition 5 }
  • there exists Π_ρ([s]) ∈ Π_ρ(T) such that Π_ρ([s]) ⊨ ⟨α⟩_S ψ′
  • iff { by Definition 6 }
  • Π_ρ(TS(e)) ⊨ ⟨α⟩_S ψ′
ψ = [α]_S ψ′: this case can be proven in a similar way.
The proofs of all other cases follow by a symmetric argument and by inductive hypothesis.
Now, we are ready to prove the main theorem.
Theorem 1. Let e be an expression and ψ a selective mu-calculus formula.
TS(e) ⊨ ψ if and only if Π_{O(ψ)}(TS(T_{O(ψ)}(e))) ⊨ ψ
Proof.
Let ρ = O ( ψ ) .
TS(e) ⊨ ψ
iff { by Lemma 4 }
Π_ρ(TS(e)) ⊨ ψ
iff { by Lemma 3 }
Π_ρ(TS(T_ρ(e))) ⊨ ψ

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Manna, Z.; Pnueli, A. The anchored version of the temporal framework. In Proceedings of the Linear Time, Branching Time and Partial Order in Logics and Models for Concurrency, School/Workshop, Noordwijkerhout, The Netherlands, 30 May–3 June 1988; Lecture Notes in Computer Science. Volume 354, pp. 201–284.
  2. Emerson, E.A.; Srinivasan, J. Branching time temporal logic. In Proceedings of the Linear Time, Branching Time and Partial Order in Logics and Models for Concurrency, School/Workshop, Noordwijkerhout, The Netherlands, 30 May–3 June 1988; Lecture Notes in Computer Science. Volume 354, pp. 123–172.
  3. Mazurkiewicz, A. Basic notions of Trace Theory. In Proceedings of the Linear Time, Branching Time and Partial Order in Logics and Models for Concurrency, School/Workshop, Noordwijkerhout, The Netherlands, 30 May–3 June 1988; Lecture Notes in Computer Science. Volume 354, pp. 285–363.
  4. Mazurkiewicz, A.; Ochmanski, E.; Penczek, W. Concurrent systems and inevitability. Theor. Comput. Sci. 1989, 281, 281–304. [Google Scholar]
  5. Clarke, E.M.; Emerson, E.A.; Sistla, A.P. Automatic verification of finite-state concurrent systems using temporal logic specifications. ACM Trans. Program. Lang. Syst. 1986, 8, 244–263. [Google Scholar]
  6. Bryant, R.E. Graph-based algorithms for boolean function manipulation. IEEE Trans. Comput. 1986, C-35, 677–691. [Google Scholar] [CrossRef]
  7. Burch, J.; Clarke, E.; McMillan, K.; Dill, D.; Hwang, L. Symbolic Model Checking: 10^20 States and Beyond. In Proceedings of the Fifth Annual IEEE Symposium on Logic in Computer Science, Philadelphia, PA, USA, 4–7 June 1990; pp. 428–439.
  8. Clarke, E.M.; Grumberg, O.; Long, D.E. Model checking and abstraction. Trans. Program. Lang. Syst. 1992, 16, 343–354. [Google Scholar]
  9. Garavel, H.; Lang, F.; Mateescu, R.; Serwe, W. CADP 2011: A toolbox for the construction and analysis of distributed processes. Int. J. Softw. Tools Technol. Transf. 2013, 15, 89–107. [Google Scholar]
  10. Godefroid, P. Partial-Order Methods for the Verification of Concurrent Systems; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1996; Volume 1032. [Google Scholar]
  11. Barbuti, R.; de Francesco, N.; Santone, A.; Vaglini, G. Selective mu-calculus: New modal operators for proving properties on reduced transition systems. In Proceedings of the FORTE X/PSTV XVII ’97, Osaka, Japan, 18–21 November 1997; Chapman & Hall: London, UK, 1997; pp. 519–534. [Google Scholar]
  12. Barbuti, R.; de Francesco, N.; Santone, A.; Vaglini, G. Selective mu-calculus and formula-based equivalence of transition systems. J. Comput. Syst. Sci. 1999, 59, 537–556. [Google Scholar]
  13. Stirling, C. An Introduction to Modal and Temporal Logics for CCS. In Proceedings of the UK/Japan Workshop on Concurrency : Theory, Language, and Architecture, Oxford, UK, 25–27 September 1989; Lecture Notes in Computer Science. Volume 391.
  14. De Nicola, R.; Vaandrager, F.W. Action versus State based Logics for Transition Systems. In Proceedings of the LITP Spring School on Theoretical Computer Science on Semantics of Systems of Concurrent Processes, La Roche Posay, France, 23–27 April 1990; Lecture Notes in Computer Science. Volume 469, pp. 407–419.
  15. Mazurkiewicz, A. Trace Theory. In Petri Nets: Central Models and Their Properties, Advances in Petri Nets 1986, Part II, Proceedings of an Advanced Course, Bad Honnef, 8–19 September 1986; Lecture Notes in Computer Science. 1987; Volume 255, pp. 279–324. [Google Scholar]
  16. Bradfield, J.; Stirling, C. Local model checking for infinite state spaces. Theor. Comput. Sci. 1992, 157, 157–174. [Google Scholar]
  17. Lichtenstein, O.; Pnueli, A. Checking that finite state concurrent programs satisfy their linear specification. In Proceedings of the 12th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages (POPL ’85), New Orleans, LA, USA, 14–16 January 1985; pp. 97–107.
  18. Sistla, A.P.; Clarke, E.M. The complexity of propositional linear time logics. J. ACM 1985, 32, 733–749. [Google Scholar]
  19. Godefroid, P.; Piterman, N. LTL Generalized Model Checking Revisited. In Proceedings of the 10th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI ’09), Savannah, GA, USA, 18–20 January 2009; Lecture Notes in Computer Science. Volume 5403, pp. 89–104.
  20. Gastin, P.; Petit, A. The Book of Traces, Chapter Infinite Traces; Diekert, V., Rozenberg, G., Eds.; World Scientific: Singapore, Singapore, 1995. [Google Scholar]
  21. Penczek, W. Temporal logics for trace systems: On automated verification. Int. J. Found. Comput. Sci. 1993, 4, 31–67. [Google Scholar]
  22. Bradfield, J. The modal mu-calculus alternation hierarchy is strict. In Proceedings of the 7th International Conference CONCUR’96, Pisa, Italy, 26–29 August 1996; Volume 1119, pp. 233–246.
  23. Chieu, D.V.; Hung, D.V. An extension of Mazurkiewicz traces and their applications in specification of real-time systems. In Proceedings of the Second International Conference on Knowledge and Systems Engineering (KSE ’10), Hanoi, Vietnam, 7–9 October 2010; pp. 167–171.
  24. Kupferman, O.; Vardi, M.Y. Relating Linear and Branching Model Checking. In Proceedings of the IFIP TC2/WG2.2, 2.3 International Conference on Programming Concepts and Methods (PROCOMET ’98), Shelter Island, NY, USA, 8–12 June 1998; IFIP-Chapman-Hall: London, UK.
  25. Kupferman, O.; Vardi, M.Y. Freedom, weakness, and determinism: From linear-time to branching-time. In Proceedings of the 13th Annual IEEE Symposium on Logic in Computer Science (LICS ’98), Indianapolis, IN, USA, 21–24 June 1998; IEEE Computer Society: Washington, DC, USA, 1998. [Google Scholar]
  26. McMillan, K.L. Trace theoretic verification of asynchronous circuits using unfoldings. In Proceedings of the 7th International Conference on Computer-Aided Verification (CAV ’95), Liege, Belgium, 3–5 July 1995; Lecture Notes in Computer Science. Volume 939, pp. 180–195.
  27. Wallner, F. Model checking LTL using net unfoldings. In Proceedings of the 10th International Conference on Computer-Aided Verification (CAV ’98), Vancouver, BC, Canada, 28 June–2 July 1998; Lecture Notes in Computer Science. Volume 1427, pp. 207–218.
  28. McMillan, K.L. Using unfoldings to avoid the state explosion problem in the verification of asynchronous circuits. In Proceedings of the 4th International Workshop on Computer-Aided Verification (CAV ’92), Montreal, QC, Canada, 29 June–1 July 1992; Lecture Notes in Computer Science. Volume 663, pp. 164–174.
  29. Bollig, B.; Leucker, M. Deciding LTL over Mazurkiewicz Traces. In Proceedings of the Symposium on Temporal Representation and Reasoning (TIME ’01), Cividale, Italy, 14–16 June 2001; IEEE Computer Society Press: Washington, DC, USA, 2001. [Google Scholar]
  30. Kaivola, R. A simple decision method for the linear time mu-calculus. In Proceedings of the International Workshop on Structures in Concurrency Theory (STRICT), Berlin, Germany, 11–13 May 1995; Workshops in Computing. Springer: London, UK, 1995; pp. 190–204. [Google Scholar]
  31. Thiagarajan, P.S.; Walukiewicz, I. An Expressively Complete Linear Time Temporal Logic for Mazurkiewicz Traces. In Proceedings of the 12th Annual IEEE Symposium on Logic in Computer Science (LICS ’97), Warsaw, Poland, 29 June–2 July 1997; IEEE Computer Society: Washington, DC, USA, 1997; pp. 183–194. [Google Scholar]
  32. Walukiewicz, I. Local Logics of Traces; BRICS Report RS-00-2; BRICS: Aarhus, Denmark, 2000. [Google Scholar]
  33. Kesten, Y.; Pnueli, A.; Raviv, L. Algorithmic Verification of Linear Temporal Logic Specifications. In Proceedings of the 25th International Colloquium on Automata, Languages and Programming (ICALP ’98), Aalborg, Denmark, 13–17 July 1998; Lecture Notes in Computer Science. Volume 1443, pp. 1–16.
  34. Peled, D. All from one, one from all: On model checking using representatives. In Proceedings of the 5th International Conference on Computer-Aided Verification, (CAV ’93), Elounda, Greece, 28 June–1 July 1993; Lecture Notes in Computer Science. Volume 697, pp. 409–423.
  35. Dumas, X.; Boniol, F.; Dhaussy, P.; Bonnafous, E. Context Modelling and Partial-Order Reduction: Application to SDL Industrial Embedded Systems. In Proceedings of the IEEE Fifth International Symposium on Industrial Embedded Systems (SIES ’10), Trento, Italy, 7–9 July 2010; pp. 197–200.
  36. Rozier, K.Y. Linear temporal logic symbolic model checking. Comput. Sci. Rev. 2011, 5, 163–203. [Google Scholar]
  37. Grumberg, O.; Lange, M.; Leucker, M.; Shoham, S. When not losing is better than winning: Abstraction and refinement for the full mu-calculus. Inf. Comput. 2007, 205, 1130–1148. [Google Scholar]
  38. Fecher, H.; Shoham, S. Local abstraction-refinement for the μ-calculus. Softw. Tools Technol. Transf. 2011, 13, 289–306. [Google Scholar]
  39. Esparza, J.; Hansel, D.; Rossmanith, P.; Schwoon, S. Efficient Algorithms for Model Checking Pushdown Systems. In Proceedings of the 12th International Conference on Computer-Aided Verification (CAV ’00), Chicago, IL, USA, 15–19 July 2000; Lecture Notes in Computer Science. Volume 1855, pp. 232–247.
  40. Walukiewicz, I. Pushdown processes: Games and Model Checking. In Proceedings of the 8th International Conference on Computer Aided Verification (CAV ’96), New Brunswick, NJ, USA, 31 July–3 August 1996; Springer-Verlag: Berlin/Heidelberg, Germany, 1996; Volume 1102, pp. 62–74. [Google Scholar]
  41. Bozzelli, L. Complexity results on branching-time pushdown model checking. Theor. Comput. Sci. 2007, 379, 286–297. [Google Scholar]
  42. Carotenuto, D.; Murano, A.; Peron, A. 2-Visibly Pushdown Automata. In Proceedings of the 11th International Conference on Developments in Language Theory (DLT ’07), Turku, Finland, 3–6 July 2007; pp. 132–144.
  43. Kupferman, O.; Piterman, N.; Vardi, M.Y. Pushdown Specifications. In Proceedings of the 9th International Conference, LPAR, Tbilisi, Georgia, 14–18 October 2002; pp. 262–277.
  44. Löding, C.; Rohde, P. Model Checking and Satisfiability for Sabotage Modal Logic. In Proceedings of the 23rd Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS ’03), Mumbai, India, 15–17 December 2003; pp. 302–313.
  45. Benthem, J.V. An essay on sabotage and obstruction. In Festschrift in Honour of Jörg Siekmann, LNAI; Hutter, D., Werner, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  46. Kupferman, O.; Vardi, M.Y.; Wolper, P. Module checking. Inf. Comput. 2001, 164, 322–344. [Google Scholar]
  47. Kupferman, O.; Vardi, M.Y. Module checking revisited. In Proceedings of the 9th International Conference (CAV’97), Haifa, Israel, 22–25 June 1997; Springer-Verlag: Berlin/Heidelberg, Germany, 1997. Lecture Notes in Computer Science. Volume 1254, pp. 36–47. [Google Scholar]
  48. Chatterjee, K.; Doyen, L.; Henzinger, T.A.; Raskin, J. Algorithms for omega-regular games with imperfect information. Log. Methods Comput. Sci. 2007, 3, 1–23. [Google Scholar]
  49. Bozzelli, L.; Murano, A.; Peron, A. Pushdown module checking. In Proceedings of the 12th International Conference on Logic for Programming Artificial Intelligence and Reasoning (LPAR ’05), Montego Bay, Jamaica, 2–6 December 2005; Springer-Verlag: Berlin/Heidelberg, Germany, 2005. Lecture Notes in Computer Science. Volume 3835, pp. 504–518. [Google Scholar]
  50. Bozzelli, L.; Murano, A.; Peron, A. Pushdown module checking, Form. Methods Syst. Des. 2010, 36, 65–95. [Google Scholar]
  51. Aminof, B.; Legay, A.; Murano, A.; Serre, O.; Vardi, M.Y. Pushdown module checking with imperfect information. Inf. Comput. 2013, 223, 1–17. [Google Scholar]
  52. Aminof, B.; Kupferman, O.; Murano, A. Improved model checking of hierarchical systems. Inf. Comput. 2012, 210, 68–86. [Google Scholar]
  53. Alur, R.; Yannakakis, M. Model checking of hierarchical state machines. ACM Trans. Program. Lang. Syst. 2001, 23, 273–303. [Google Scholar]
  54. Alur, R.; Benedikt, M.; Etessami, K.; Godefroid, P.; Reps, T.W.; Yannakakis, M. Analysis of recursive state machines. ACM Trans. Program. Lang. Syst. 2005, 27, 786–818. [Google Scholar]
